import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import re
# %pip install nfft
from nfft import nfft
from imageio import imread, imsave
from itertools import product, permutations
from scipy.linalg import expm
from scipy.optimize import minimize
Handy for homework.
# Yields the n by n Hilbert matrix
def Hilbert(n):
    out = np.empty((n, n))
    for i, j in product(range(n), repeat=2):
        out[i, j] = 1 / (i + j + 1)
    return out
g = Hilbert(5)
print(g)
print(np.linalg.cond(g))
# Checkerboard of 1 and -1
def checkerboard(n):
    out = np.empty((n, n))
    for i, j in product(range(n), repeat=2):
        out[i, j] = (-1)**(i + j)
    return out
print(checkerboard(4))
print((1 - checkerboard(4)) / 2)
# A tridiagonal matrix with the given pattern
def tridiag(n, pattern=(-1, 2, -1)):
    a, b, c = pattern
    return a*np.eye(n, k=-1) + b*np.eye(n) + c*np.eye(n, k=1)
print(tridiag(6,pattern=(-1, 4, 2)))
# Yields all n-by-n matrices whose entries are 0 or 1
def Hadamards(n):
    return np.array(list(product((0, 1), repeat=n**2))).reshape(2**(n**2), n, n)
print(Hadamards(3)[[23,87,120]])
# Yields all n-by-n permutation matrices
def perm_mats(n):
    perms = list(permutations(range(n)))
    out = np.zeros((len(perms), n, n))
    for j in range(len(perms)):
        for i in range(n):
            out[j, i, perms[j][i]] = 1
    return out
print(perm_mats(4)[[0,8,22]])
Only one entry so far. cofactor(A) yields the cofactor matrix of $A$ as defined in Strang, Linear Algebra and Its Applications 4e, section 4.3:
$C_{ij} = (-1)^{i+j}\det M_{ij}$
where $M_{ij}$ is the $(n-1)\times(n-1)$ submatrix that results from deleting row $i$ and column $j$ of $A$. Taking the dot product of any row of $A$ with the corresponding row of $C$ then yields $\det A$.
def cofactor(A):
    n = len(A)
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Delete row i and column j of A to form the minor submatrix M_ij
            M = np.delete(A, i, axis=0)
            M = np.delete(M, j, axis=1)
            out[i, j] = np.linalg.det(M) * (-1)**(i + j)
    return out
g = tridiag(4,pattern=(-1,4,3))
print(g)
print(cofactor(g))
# The dot product of any row of g with the matching row of cofactor(g) gives the determinant
print(g[2] @ cofactor(g)[2])
# Numpy's evaluation of the same
print(np.linalg.det(g))
Reference: Gilbert Strang, Linear Algebra and Its Applications 4e, sections 5.3 and 5.4.
arr, or $A$, gives a random discrete Markov matrix indicating the relationship of next year’s population to this one. It satisfies the defining property of a discrete Markov matrix:
$\sum_{i=1}^n A_{ij} = 1$ for all $j$
That is, the sum of each column is one.
arr_cont, or $A'$, gives the continuously-compounded form of the same, which satisfies a similar equation:
$\sum_{i=1}^n A'_{ij} = 0$ for all $j$
arr_cont equals simply $A - I$. This is easy to see by analogy with $(1+r)^{t}$ and $e^{rt}$, respectively the discrete and continuous forms of the one-dimensional exponential growth equation. In matrix form, the equations become:
$u_t = A^t u_0$ and $u(t) = e^{A' t} u_0$
Both matrices have the same eigenvectors (and the eigenvalues are simply reduced by one in arr_cont). The graph below suggests that the continuous form approaches its eigenvectors slightly faster.
n = 6
np.random.seed(1987)
# A random n-by-n Markov matrix
arr = np.random.random_sample((n,n))
arr = arr / arr.sum(axis=0)
# and its continuous form (just subtract I)
arr_cont = arr - np.eye(n)
# n random population start values
u0 = np.random.randint(51, size=n)
print(arr)
print(arr.sum(axis=0))
arr has its largest eigenvalue $\lambda_1 = 1$. Thus, we expect the process to converge on the corresponding eigenvector $\eta_1$. Some of the other eigenvalues of our matrix are complex, so there is oscillatory behavior on the way to $\eta_1$, but in this case the oscillations are gentle enough that they do not show on the plot.
# Eigenvalues of arr
print(np.linalg.eigvals(arr))
# First eigenvector of arr (the eigenvectors are the columns of eig's second output)
print(np.linalg.eig(arr)[1][:, 0])
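To see the convergence numerically, here is a minimal sketch comparing the direction of $A^t u_0$ for a large $t$ with the direction of $\eta_1$; the power 50 is an arbitrary stand-in for "many steps."
# Sketch: A^50 @ u0 should be nearly parallel to the dominant eigenvector.
# The two unit vectors printed below should nearly match (possibly up to an overall sign).
eigvals, eigvecs = np.linalg.eig(arr)
eta1 = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
u_late = np.linalg.matrix_power(arr, 50) @ u0
print(eta1 / np.linalg.norm(eta1))
print(u_late / np.linalg.norm(u_late))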
t = 6
# Discrete results from time 0 to t
results_discrete = np.stack([np.linalg.matrix_power(arr, i) @ u0 for i in range(t+1)])
x_discrete = range(t+1)
# Continuous results from time 0 to t
results_cont = np.stack([expm(s*arr_cont) @ u0 for s in np.linspace(0, t, 101)])
x_cont = np.linspace(0,t,101)
# Plot
plt.figure(figsize=(8,10))
plt.ylabel('population')
plt.xlabel('time')
plt.title('Comparison of discrete and continuous Markov processes')
plt.plot(x_discrete, results_discrete, color='yellowgreen')
plt.plot(x_cont, results_cont, color='seagreen')
# Legend
plt.plot([],[], color='yellowgreen', label='discrete')
plt.plot([],[], color='seagreen', label='continuous')
plt.legend()
References:
Public-domain image from the Biodiversity Heritage Library.
Singular value decomposition (SVD) factors a matrix into three components:
$$A = U S V^H$$
The middle $S$ is a diagonal matrix containing the square roots of the eigenvalues of $A^H A$ (or $A A^H$), i.e. the singular values $\sigma$. Because $S$ is sorted from greatest $\sigma$ to least, we can zero out the tail end of $S$, along with the corresponding columns of $U$ and $V$, to produce a lightweight set of matrices whose product approximates $A$.
Images are matrices, so this low-rank approximation can serve as a simple form of image compression. Here, I give a practical demonstration of how SVD works and the level of quality one can expect when compressing a typical image.
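Before turning to the image, here is a minimal sketch of the truncation idea on a small random matrix; the matrix size and the cutoff rank k = 2 are arbitrary choices for illustration.
# Sketch: rank-k truncation of the SVD of a small random matrix
B = np.random.random_sample((6, 5))
U, s, Vh = np.linalg.svd(B, full_matrices=False)  # s is sorted from greatest to least
k = 2
# Keep the first k singular values, the matching columns of U, and the matching rows of Vh
B_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]
print(np.linalg.norm(B - B_k))  # the approximation error shrinks as k grows toward min(m, n)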
First, we read in the image file. It’s a JPEG, so technically it’s already been compressed (you can even see some compression artifacts if you squint), but we won’t sweat it.
I cropped the image a bit at this stage to show its detail. The area we are working with is 1200 by 1300 pixels; these will be the eventual dimensions of $S$ in SVD. The array has an additional dimension of 3, corresponding to the image’s red, green, and blue channels.
img = imread('linalg-img2.jpg')[500:1700,0:1300,:]
img.shape
plt.figure(figsize=(12,13))
plt.axis('off')
plt.imshow(img)