GEM Summer '22 tutorial: Spherical harmonic fitting

Vishal Upendran, IUCAA, Pune, India

Catch me at: uvishal@iucaa.in

In this notebook, we will look at how Spherical harmonics may be used to fit data on a ($\theta$,$\phi$) grid. To motivate the general fitting scheme, we shall first fit a Fourier series to a 1-D signal, and then generalize the approach to Spherical harmonics.

The method presented here is generic to fitting any basis to a given dataset.

A sparse method for performing a Fourier transform

We consider the Fourier basis and perform a sparse fit of this basis to a given dataset, just to see how well the frequencies are retrieved by such a sparse computation.

In [2]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt 
import warnings
warnings.filterwarnings('ignore')
In [3]:
x = np.linspace(0,2*np.pi,2048)
signal = 10*np.cos(x)+ 20*np.cos(2*x)+24*np.cos(3*x)+30*np.cos(7*x)
plt.plot(x,signal)
plt.xlabel("Time (in $2\pi N$)")
_ = plt.ylabel("Signal")

Fig. 1: This is our example signal $\vec{s}$ from which we shall attempt to infer Fourier components.

FFT baseline

Let us perform FFT on this signal, and generate a "baseline" model.

In [4]:
fft = np.fft.fft(signal)/len(signal)
freq = np.fft.fftfreq(len(signal))
In [5]:
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.stem(freq,fft.real)
plt.xlim([-.01,.01])
plt.title("Real component")
plt.xlabel(f"Frequency/{len(signal)}")
plt.ylabel("Amplitude")
plt.subplot(1,2,2)
plt.stem(freq,fft.imag)
plt.xlim([-.01,.01])
plt.title("Imaginary component")
plt.xlabel(f"Frequency/{len(signal)}")
plt.ylabel("Amplitude")
plt.suptitle("s = 10cos(x)+ 20cos(2*x)+24cos(3*x)+30cos(7*x)",y=1.05,fontsize=15)
plt.tight_layout()
In [6]:
# To get the full Fourier Series approximately:
# Get all frequencies with coefficient > 1
locs = np.where(np.abs(fft.real)>1.0)
frequencies = freq[locs]
amps = fft.real[locs]
print("Real part: cosine sum")
print(f"Frequencies = {frequencies*len(signal)}")
print(f"Amplitudes = {amps}")

# Get all frequencies with imaginary coefficient > 1
locs = np.where(np.abs(fft.imag)>1.0)
frequencies = freq[locs]
amps = fft.imag[locs]
print("Imaginary part: sine sum")
print(f"Frequencies = {frequencies*len(signal)}")
print(f"Amplitudes = {amps}")
Real part: cosine sum
Frequencies = [ 1.  2.  3.  7. -7. -3. -2. -1.]
Amplitudes = [ 5.04236498 10.03777571 12.01226392 14.99889933 14.99889933 12.01226392
 10.03777571  5.04236498]
Imaginary part: sine sum
Frequencies = []
Amplitudes = []

That's how a Discrete Fourier transform works. It decomposes the signal into discrete "modes", each of which has an associated coefficient.

Typically, we can rewrite the decomposition of any signal into a "series" solution as: $$s(n) = \sum_{i=0}^{N} a_i \cos(\omega_i n/2N) + b_i \sin(\omega_i n/2N),$$

where we have accounted for the Nyquist criterion to select the number of modes. The index $n$ labels the time steps, and there are $\mathbf{2N}$ time steps in the signal. Generally, built-in packages give us the maximum number of modes for a given signal length.

Typically, $s$ is a time series; let us denote time-series variables as $\vec{s}$. Each of the cosine and sine series is itself a time-series vector, so we can write the equation as:

$$\vec{s} = \sum_{i=0}^{N} a_i \vec{c}(\omega_i/2N) + b_i \vec{s}(\omega_i/2N),$$

where I have used a shorthand notation for cosine and sine time series. What we are basically doing is adding together a bunch of different time series with some weights. Hence, we can convert this into a matrix equation like:

$$\vec{s} = a_0 + a_1\,[c_{1,0} \; c_{1,1} \; \dots \; c_{1,2N}]^T + \dots + a_N\,[c_{N,0} \; c_{N,1} \; \dots \; c_{N,2N}]^T + b_1\,[s_{1,0} \; s_{1,1} \; \dots \; s_{1,2N}]^T + \dots + b_N\,[s_{N,0} \; s_{N,1} \; \dots \; s_{N,2N}]^T,$$

where $c_{a,b}$ represents the cosine value at frequency $a$ and time point $b$.

This expression can be succinctly written as a matrix equation:

$$\vec{s} = \begin{bmatrix}c_{1,0} & \dots & c_{N,0} & s_{1,0} & \dots & s_{N,0} \\ \vdots & & \vdots & \vdots & & \vdots \\ c_{1,2N} & \dots & c_{N,2N} & s_{1,2N} & \dots & s_{N,2N} \end{bmatrix}\begin{bmatrix} a_1 \\ \vdots \\ a_N \\ b_1 \\ \vdots \\ b_N \end{bmatrix}$$ (the constant term $a_0$ can be absorbed into the matrix as a zero-frequency cosine column, which is identically 1).

The vector $\vec{a}$ collects the coefficients, while the matrix is what we can call the "Basis matrix" $\mathbf{B}$. Hence, the problem boils down to finding the vector $\vec{a}$ such that: $$\vec{s} = \mathbf{B}\vec{a}.$$

This is a typical "fitting" problem in linear algebra: we define a basis, (pseudo-)invert the matrix $\mathbf{B}$, and solve for $\vec{a}$. Let us look at a few ways to perform this inversion.
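
For instance, one standard route is ordinary least squares: build $\mathbf{B}$ explicitly and solve $\vec{s} = \mathbf{B}\vec{a}$ in the least-squares sense. A minimal, self-contained sketch (the toy signal and mode count here are chosen purely for illustration):

x_toy = np.linspace(0, 2*np.pi, 256)
s_toy = 3*np.cos(2*x_toy) + 5*np.sin(4*x_toy)       # toy signal with known coefficients
modes = np.arange(1, 11)                            # keep only 10 modes for illustration
B = np.hstack([np.cos(np.outer(x_toy, modes)),      # columns: cos(k x) ...
               np.sin(np.outer(x_toy, modes))])     # ... followed by sin(k x)
a_ls, *_ = np.linalg.lstsq(B, s_toy, rcond=None)    # least-squares solution of s = B a
print(np.round(a_ls, 2))                            # ~3 at the cos(2x) column, ~5 at the sin(4x) column

With a small, well-conditioned basis this works fine; the sparse (Lasso) route below is attractive when we expect most coefficients to be exactly zero.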

Lasso regression

Lasso, or the least absolute shrinkage and selection operator, attempts to solve for $\vec{a}$ such that most of the elements of $\vec{a}$ become 0. In our particular case, it will try to drive as many Fourier-mode coefficients to zero as possible. Let us see what solution we get from such an exercise.
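
Concretely, scikit-learn's Lasso minimizes a residual term plus an $\ell_1$ penalty on the coefficients,

$$\vec{a}^{*} = \arg\min_{\vec{a}} \; \frac{1}{2 n_{\rm samples}} \lVert \vec{s} - \mathbf{B}\vec{a} \rVert_2^2 + \alpha \lVert \vec{a} \rVert_1,$$

and it is the $\ell_1$ term that pushes many coefficients to exactly zero.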

In [7]:
from sklearn.linear_model import Lasso
In [8]:
x = np.linspace(0,2*np.pi,2048)
N = len(x)
signal = 10*np.cos(x)+ 20*np.cos(2*x)+24*np.cos(3*x)+30*np.cos(7*x)
freqs = np.fft.fftfreq(N)*N
# meshgrid(x, freqs): xx holds the time samples, ww the integer mode frequencies
xx,ww = np.meshgrid(x,freqs)
basis = np.ones([N,N*2])
basis[:,:N] = np.cos(ww*xx)
basis[:,N:] = np.sin(ww*xx)
print(basis.shape)
(2048, 4096)
In [9]:
plt.imshow(basis,cmap='coolwarm',origin='lower')
plt.colorbar()
Out[9]:
<matplotlib.colorbar.Colorbar at 0x7f1009b11be0>

Let us define the Lasso model, and fit it.

The Lasso model has a regularization parameter $\alpha$, which controls the amount of shrinkage applied to the coefficients.

In [10]:
model = Lasso(alpha=0.1,fit_intercept=False)
model.fit(basis,signal)
Out[10]:
Lasso(alpha=0.1, fit_intercept=False)
In [11]:
model.coef_
Out[11]:
array([ 0.        ,  9.85247823, 19.888655  , ...,  0.        ,
        0.        ,  0.        ])

Let's check the coefficients for different modes.

In [12]:
coeff_cos = model.coef_[:N]
coeff_sin = model.coef_[N:]
In [13]:
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.stem(freqs,coeff_cos)
plt.xlim([-10,10])
plt.title("Real component")
plt.xlabel("Frequency")
plt.ylabel("Amplitude")
plt.subplot(1,2,2)
plt.stem(freqs,coeff_sin)
plt.xlim([-10,10])
plt.title("Imaginary component")
plt.xlabel("Frequency")
plt.ylabel("Amplitude")
plt.tight_layout()
_ = plt.suptitle("s = 10cos(x)+ 20cos(2*x)+24cos(3*x)+30cos(7*x)",y=1.05,fontsize=15)
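
As a quick sanity check (a small sketch mirroring the error check we will do for the spherical-harmonic fit later), we can contract the fitted coefficients with the basis and compare against the original signal:

recon = basis @ model.coef_            # reconstruct the signal from the fitted modes
print(f"Max absolute error = {np.max(np.abs(signal - recon)):.3f}")
plt.plot(x, signal, label="signal")
plt.plot(x, recon, '--', label="Lasso reconstruction")
_ = plt.legend()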

Spherical harmonics

Now that we have seen how we may write a Fourier decomposition as a linear equation, we can look at the broader picture. In the decomposition $$\vec{s} = \mathbf{B}\vec{a},$$ the basis matrix $\mathbf{B}$ can be built from any "good" set of basis functions. For periodic signals, the Fourier basis is a good choice. If our signal $\vec{s}$ is a field on a sphere, the natural basis set is the Spherical Harmonic functions.

The spherical harmonic functions $Y_{nm} (\theta,\phi)$ are the angular part of the solutions to Laplace's equation in spherical coordinates. These functions are defined as: $$Y_{nm}(\theta,\phi) := \sqrt{\frac{2n+1}{4\pi} \frac{(n-m)!}{(n+m)!}} e^{i m \theta} P^m_n(\cos(\phi)).$$

Here, $P^m_n(\cos(\phi))$ are the associated Legendre polynomials.

Any scalar function $f(\theta,\phi)$ can then be decomposed into the Spherical harmonic basis (like a Fourier basis) as:

$$f(\theta,\phi) = \sum_{n=0}^{\infty} \sum_{m=-n}^{n} a_{nm} Y_{nm}(\theta,\phi)$$

Similar to how we truncated the Fourier series decomposition of our signal at a maximum mode (or frequency) N, we can limit the maximum number of modes used for fitting the 2-D function. Hence, the expansion becomes: $$f(\theta,\phi) \approx \sum_{n=0}^{N} \sum_{m=-n}^{n} a_{nm} Y_{nm}(\theta,\phi),$$

where N is the largest mode in our expansion.

In the Fourier case, we had the signal as a function of time, i.e $$\vec{s} = \begin{bmatrix} s_0 \\ s_1 \\s_2 \\ . \\ .\\ s_t\end{bmatrix}.$$

This was a 1-D signal that we broke down into a linear combination of columns. To use our "fitting schemes", we will need to flatten the 2-D signal into a 1-D array and represent it as a linear combination of columns. Hence, we will need to replace the double summation with a single summation.

For a maximum mode N, we have $2(N+1)^2$ terms in total; the $\times 2$ arises because the decomposition has a real and an imaginary part. Consider each "mode" to be one column of your basis matrix: there are $(N+1)^2$ such modes, and hence $2(N+1)^2$ columns once real and imaginary parts are separated (for $N=10$, that is $121$ modes and $242$ columns).

We can consider our signal on the sphere $\vec{s}$ to be: $$\vec{s} = \begin{bmatrix} f(\theta_1,\phi_1) \\ f(\theta_2,\phi_2) \\ f(\theta_3,\phi_3) \\ . \\ .\\ f(\theta_{max},\phi_{max})\end{bmatrix}.$$ Note that $0\leq\theta\leq\pi$ and $0\leq\phi\leq2\pi$; $\theta = 0$ is the "north pole", i.e. $\theta$ here is the co-latitude of the coordinate system.

Now we will have to define the correct Basis matrix $\mathbf{B}$. To do so, we will need to transform the 2-D indexing to a 1-D indexing.

(Figure: indexing scheme for unrolling the $(n,m)$ modes into a single index $i$.)

Consider all the modes from $n=0$ up to some $n$; there are $(n+1)^2$ of them in total, so (with 0-based indexing) the last of these, the mode $(n, m=n)$, has index $(n+1)^2-1$. Let the index $i_0$ correspond to the mode with $m=0$; stepping back $n$ places from $m=n$ gives: $$i_0 = (n+1)^2-1-n \implies i_0=n^2+n.$$ With this, we may write the index for any $m$ at a given $n$ as: $$i = n^2+n+m.$$

Thus, we can "unroll" the 2D summation into 1D.

We will also need to put this back together in 2D. For this, observe that for $m=-n$, we have: $$i_{n,-n} = n^2 \implies n = \sqrt{i_{n,-n}}.$$ If we now consider the $m=n$ mode, we get $i_{n,n}=n^2+2n$. This $i$ is not a perfect square, and the next perfect square occurs at $i_{n+1,-n-1} = (n+1)^2$. Thus, for a given $i$, we have: $$n = \lfloor\sqrt{i}\rfloor, \quad m = i-n^2-n.$$

With these two transformations, we can move back and forth between the 2D and 1D arrays.
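
For example, $(n,m) = (2,-1)$ gives $i = 2^2 + 2 - 1 = 5$; inverting, $n = \lfloor\sqrt{5}\rfloor = 2$ and $m = 5 - 4 - 2 = -1$, as expected.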

With this theory, let us try to perform a Lasso fit to a toy example, and see if we recover the modes correctly.

Acknowledgement: Aspects of this notebook have been adapted from a notebook written by Mark Cheung, LMSAL.

In [14]:
from scipy.special import sph_harm 
In [15]:
# Let us define the transformations to go from 2D -> 1D and back!
def nm2i(n,m):
    return n*n + n + m 

def i2nm(i):
    n = int(np.floor(np.sqrt(i)))
    m = i - n*n - n
    return n, m
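
A quick round-trip check of these transforms (a minimal sketch; any small maximum mode would do):

# Verify that i2nm(nm2i(n, m)) recovers (n, m) for all modes up to n = 5
for n in range(6):
    for m in range(-n, n+1):
        assert i2nm(nm2i(n, m)) == (n, m)
print("Index round trip OK")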
In [16]:
#We will need to define the grid in theta and phi
# In Scipy convention, theta corresponds to longitude, and phi to colatitude
shape = (128,256) # Ncolat x Nlong
theta = np.linspace(0,2*np.pi,shape[1])
phi = np.linspace(0,np.pi,shape[0])
grid_theta,grid_phi = np.meshgrid(theta,phi)

Let us define a toy example function using Scipy.

Let us keep the toy example field real, i.e., use only $m=0$ modes (which are purely real).

In [17]:
signal = 10*sph_harm(0, 1, grid_theta, grid_phi)+20*sph_harm(0, 2, grid_theta, grid_phi)+24*sph_harm(0, 4, grid_theta, grid_phi)+30*sph_harm(0, 7, grid_theta, grid_phi)
print(signal.shape)
(128, 256)
In [18]:
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.imshow(signal.real,extent=(0,360,0,180),cmap='RdGy_r')
plt.colorbar(fraction=0.03)
plt.title("Real part")
plt.xlabel("$\\phi$")
plt.ylabel("$\\theta$")
plt.subplot(1,2,2)
plt.imshow(signal.imag,extent=(0,360,0,180),cmap='RdGy_r')
plt.colorbar(fraction=0.03)
plt.title("Imaginary part")
plt.xlabel("$\\phi$")
plt.ylabel("$\\theta$")
plt.tight_layout()
In [19]:
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(projection='polar')
ax.set_theta_offset(-np.pi/2)
c=ax.pcolormesh(grid_theta,grid_phi, signal.real,cmap='RdGy_r')  
ax.set_ylim([0,np.pi])
ax.grid(linewidth=1,color='white',linestyle='--')
ax.set_title("Polar map")
plt.colorbar(c)
plt.tight_layout()

Define a sparse basis to generate the spherical harmonic decomposition

First, we will need to write a function to define the basis matrix. Then, we can infer the coefficients using any of our regression techniques.

In [20]:
def basis_matrix(nmax,theta,phi):
    theta = np.copy(theta).ravel()
    phi = np.copy(phi).ravel()
    #theta is longitude grid, phi is colat grid
    assert(len(theta) == len(phi))
    nbasis = (nmax+1)*(nmax+1)*2 # 2 for real and imag components of Y_mn
    basis = np.zeros(shape=(len(theta),nbasis),dtype=np.float64)
    for n in np.arange(nmax+1):
        for m in np.arange(-n,n+1):
            y_mn = sph_harm(m, n, theta, phi)
            basis[:,2*nm2i(n,m)  ] = y_mn.real.ravel()
            basis[:,2*nm2i(n,m)+1] = y_mn.imag.ravel()
    return basis
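
Before fitting, a small sanity check of the basis construction (a sketch): with nmax = 1 there should be $2(1+1)^2 = 8$ columns, and the column holding the real part of $Y_{0,0}$ should be the constant $1/\sqrt{4\pi}$.

b_small = basis_matrix(1, grid_theta, grid_phi)
print(b_small.shape)                                               # expect (128*256, 8)
print(np.allclose(b_small[:, 2*nm2i(0, 0)], 1/np.sqrt(4*np.pi)))   # expect True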
In [21]:
def basis_inds_coeffs(nmax,coeffs):
    nbasis = (nmax+1)*(nmax+1)
    # Columns of each table: n, m, fitted coefficient (real and imaginary parts of Y_mn kept separately)
    basis_real = np.zeros(shape=(nbasis,3),dtype=np.float64)
    basis_imag = np.zeros(shape=(nbasis,3),dtype=np.float64)
    for n in np.arange(nmax+1):
        for m in np.arange(-n,n+1):
            basis_real[nm2i(n,m),:2] = np.array([n,m])
            basis_real[nm2i(n,m),-1] = coeffs[2*nm2i(n,m)]
            basis_imag[nm2i(n,m),:2] = np.array([n,m])
            basis_imag[nm2i(n,m),-1] = coeffs[2*nm2i(n,m)+1]
    return basis_real,basis_imag
In [22]:
# Let us consider 10 maximum modes, and try to fit the signal.
Nmax = 10
basis = basis_matrix(Nmax,grid_theta,grid_phi)
sign_unravel = signal.real.ravel()
assert len(sign_unravel)==basis.shape[0]
In [23]:
model = Lasso(alpha=0.1,fit_intercept=False)
model.fit(basis,sign_unravel)
Out[23]:
Lasso(alpha=0.1, fit_intercept=False)
In [24]:
model.coef_.shape
Out[24]:
(242,)
In [25]:
b_i_c_real,b_i_c_imag = basis_inds_coeffs(Nmax,model.coef_)
In [26]:
b_i_c_real[:,-1]
Out[26]:
array([ 0.        , -0.        ,  9.31948749,  0.        , -0.        ,
        0.        , 19.49867139, -0.        , -0.        , -0.        ,
        0.        ,  0.        ,  0.        , -0.        ,  0.        ,
        0.        , -0.        ,  0.        ,  0.        ,  0.        ,
       23.64224525, -0.        ,  0.        , -0.        , -0.        ,
       -0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        , -0.        ,  0.        , -0.        ,  0.        ,
        0.        , -0.        ,  0.        , -0.        ,  0.        ,
        0.        ,  0.        ,  0.        , -0.        ,  0.        ,
       -0.        , -0.        , -0.        , -0.        , -0.        ,
        0.        , -0.        ,  0.        ,  0.        ,  0.        ,
        0.        , 29.63307498, -0.        ,  0.        , -0.        ,
        0.        ,  0.        ,  0.        ,  0.        , -0.        ,
        0.        , -0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        , -0.        ,  0.        ,
       -0.        ,  0.        , -0.        , -0.        , -0.        ,
       -0.        , -0.        ,  0.        , -0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        , -0.        ,  0.        , -0.        ,  0.        ,
       -0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
       -0.        ,  0.        , -0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        , -0.        ,  0.        , -0.        ,  0.        ,
       -0.        ,  0.        , -0.        , -0.        , -0.        ,
       -0.        ])
In [27]:
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.scatter(b_i_c_real[:,1],b_i_c_real[:,0],c=b_i_c_real[:,-1],cmap='Paired')
plt.colorbar()
plt.xlabel("m")
plt.ylabel("n")
plt.title("Real part")
plt.subplot(1,2,2)
plt.scatter(b_i_c_imag[:,1],b_i_c_imag[:,0],c=b_i_c_imag[:,-1],cmap='Paired')
plt.colorbar()
plt.xlabel("m")
plt.ylabel("n")
plt.title("Imaginary part")
plt.tight_layout()
_ = plt.suptitle("s = 10$Y_{1,0}$+20$Y_{2,0}$+24$Y_{4,0}$+30$Y_{7,0}$",y=1.03)

Check the error due to decomposition

We can take these coefficients, contract with the basis, and obtain the field back. Then, we can estimate the error incurred due to our decomposition.

In [28]:
estimate = np.einsum("ij,j->i",basis,model.coef_)
estimate = np.reshape(estimate,shape)

fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(projection='polar')
ax.set_theta_offset(-np.pi/2)
c=ax.pcolormesh(grid_theta,grid_phi, np.abs((signal.real-estimate)/(np.abs(signal.real)+1e-4))*100,cmap='RdGy_r')  
ax.set_ylim([0,np.pi])
ax.grid(linewidth=1,color='white',linestyle='--')
ax.set_title("Polar map")
plt.colorbar(c)
plt.tight_layout()

Recipe for fixing N and $\alpha$

There are several ways of fixing N and $\alpha$; I will outline three methods below:

  1. N constrained by your computational resources, and $\alpha$ from the reconstruction error.
  2. Constrain N, $\alpha$ through cross validation.
  3. Constrain N, $\alpha$ through validation.

We shall look at cross validation in this tutorial.

Cross validation: The validation of validations ☕

In cross validation, we divide the dataset into N parts (folds). In each iteration, N-1 folds form the training set and the remaining fold is held out for testing. This is explained graphically below (credit to the Scikit-learn team):

(Figure: schematic of k-fold cross validation. Image from Mathworks.)

So let us sweep over $N_{max}$ and $\alpha$: we will run $N_{max} \in \{5,10,15,20\}$ and $\alpha \in \{10,1,0.1,0.01\}$.
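
As an aside: for a fixed $N_{max}$, scikit-learn's LassoCV can search over $\alpha$ by cross validation on its own. A minimal sketch, reusing the basis and sign_unravel built above:

from sklearn.linear_model import LassoCV
# Sweep alpha only, with the basis (and hence Nmax) held fixed
cv_model = LassoCV(alphas=[10, 1, 0.1, 0.01], fit_intercept=False, cv=3)
cv_model.fit(basis, sign_unravel)
print("Best alpha =", cv_model.alpha_)

Since $N_{max}$ changes the basis matrix itself, we still sweep over both parameters by hand below.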

In [29]:
# Set up the grid of (alpha, Nmax) combinations to sweep over
Nmax_list = [5,10,15,20]
alpha_list = [10,1,0.1,0.01]
alpha,Nmax = np.meshgrid(alpha_list,Nmax_list)
comb_param = list(zip(alpha.ravel(),Nmax.ravel()))
In [30]:
np.asarray(comb_param).shape
Out[30]:
(16, 2)
In [31]:
from sklearn.model_selection import cross_val_score
In [32]:
def Paramsweep(param_in):
    al,Nm = param_in[0],param_in[1]
    basis = basis_matrix(Nm,grid_theta,grid_phi)
    model = Lasso(alpha=al,fit_intercept=False)
    assert len(sign_unravel)==basis.shape[0]
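    # cross_val_score uses the estimator's default score on each fold, i.e. the coefficient of determination R^2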
    scores = cross_val_score(model, basis, sign_unravel, cv=3)
    return np.median(scores)
In [33]:
signal = 10*sph_harm(0, 1, grid_theta, grid_phi)+20*sph_harm(0, 2, grid_theta, grid_phi)+24*sph_harm(0, 4, grid_theta, grid_phi)+30*sph_harm(0, 7, grid_theta, grid_phi)
signal = signal.real
sign_unravel = signal.ravel()
In [34]:
#Multiprocessing
import multiprocessing as mp
pool = mp.Pool(processes=mp.cpu_count())
cv_metric = pool.map(Paramsweep,comb_param)
pool.close()              
In [35]:
cv_metric
Out[35]:
[-0.13748263803472316,
 -0.3650857137983834,
 -2.006348124609806,
 -53.98695588187025,
 -0.13748263803472316,
 0.9561318305304888,
 0.9995653679925952,
 0.9999960299271737,
 -0.13748263803472316,
 0.9561318305304888,
 0.9995642526299989,
 0.9999959416850978,
 -0.13748263803472316,
 0.9561318305304888,
 0.999564265235603,
 0.9999959333740678]
In [36]:
# Simple plotting to check the best param:
plt.scatter(np.log10(alpha).ravel(),Nmax.ravel(),c=cv_metric,cmap='Spectral_r',vmin=0.99)
plt.colorbar()
plt.title("Median Coefficient of determination for different combinations of $\\alpha$ and Nmax")
plt.xlabel("$\\log_{10}\\alpha$")
_ = plt.ylabel("Nmax")
In [37]:
# Now use the best parameter here for fitting the dataset
i_best = np.argmax(cv_metric)
print("Best params = ")
best_alpha = alpha.ravel()[i_best]
best_Nmax = Nmax.ravel()[i_best]
print("alpha = ",best_alpha,", Nmax = ",best_Nmax)
Best params = 
alpha =  0.01 , Nmax =  10
In [38]:
# For comparison, refit with the earlier (non-optimal) choice of Nmax = 10 and alpha = 0.1.
Nmax = 10
basis = basis_matrix(Nmax,grid_theta,grid_phi)
sign_unravel = signal.real.ravel()
assert len(sign_unravel)==basis.shape[0]
model = Lasso(alpha=0.1,fit_intercept=False)
model.fit(basis,sign_unravel)
Out[38]:
Lasso(alpha=0.1, fit_intercept=False)
In [39]:
estimate = np.einsum("ij,j->i",basis,model.coef_)
err = np.abs((sign_unravel-estimate)/(np.abs(sign_unravel)+1e-4))*100
print(f"Median error = {np.median(err)}")
Median error = 2.1735299525507266
In [40]:
# Now fit with the best parameters found by cross validation.
Nmax = best_Nmax
basis = basis_matrix(Nmax,grid_theta,grid_phi)
sign_unravel = signal.real.ravel()
assert len(sign_unravel)==basis.shape[0]
model = Lasso(alpha=best_alpha,fit_intercept=False)
model.fit(basis,sign_unravel)
Out[40]:
Lasso(alpha=0.01, fit_intercept=False)
In [41]:
estimate = np.einsum("ij,j->i",basis,model.coef_)
err = np.abs((sign_unravel-estimate)/(np.abs(sign_unravel)+1e-4))*100
print(f"Median error = {np.median(err)}")
Median error = 0.217345797515424
In [42]:
estimate = np.einsum("ij,j->i",basis,model.coef_)
estimate = np.reshape(estimate,shape)

fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(projection='polar')
ax.set_theta_offset(-np.pi/2)
c=ax.pcolormesh(grid_theta,grid_phi, np.abs((signal.real-estimate)/(np.abs(signal.real)+1e-4))*100,cmap='RdGy_r')  
ax.set_ylim([0,np.pi])
ax.grid(linewidth=1,color='white',linestyle='--')
ax.set_title("Polar map")
plt.colorbar(c)
plt.tight_layout()
In [43]:
b_i_c_real,b_i_c_imag = basis_inds_coeffs(Nmax,model.coef_)
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plt.scatter(b_i_c_real[:,1],b_i_c_real[:,0],c=b_i_c_real[:,-1],cmap='Paired')
plt.colorbar()
plt.xlabel("m")
plt.ylabel("n")
plt.title("Real part")
plt.subplot(1,2,2)
plt.scatter(b_i_c_imag[:,1],b_i_c_imag[:,0],c=b_i_c_imag[:,-1],cmap='Paired')
plt.colorbar()
plt.xlabel("m")
plt.ylabel("n")
plt.title("Imaginary part")
plt.tight_layout()
_ = plt.suptitle("s = 10$Y_{1,0}$+20$Y_{2,0}$+24$Y_{4,0}$+30$Y_{7,0}$",y=1.03)