ELEN E6860 Course Notes

Euclidean interpretation of signals

The Fourier transform is one specialized way of analyzing signals. A more general viewpoint uses less sophisticated mathematics: treat signals as vectors of very high (often infinite) dimension, which can then be transformed in ways beyond the Fourier transform: a Euclidean interpretation of signals.

Vector spaces

Vector spaces are sets of vectors equipped with two operations: vector addition ($\forall u, v \in V, u + v \in V$) and scalar multiplication ($\forall \alpha \in \mathbb{R}, \forall u \in V, \alpha u \in V$).

  • The vector space of two dimensional real numbers $\mathbb{R}^2 = \{x = (x_1, x_2): x_1, x_2 \in \mathbb{R}\}$, where $x + y = (x_1 + y_1, x_2 + y_2)$ and $\alpha x = (\alpha x_1, \alpha x_2)$.
  • The $N$-dimensional real vector space $\mathbb{R}^N = \{x = (x_1, x_2, \ldots, x_N) : x_i \in \mathbb{R} \;\forall i\}$.
  • A space with infinite dimension, $\mathbb{R}^\mathbb{Z} = \{x = (x[n]), x[n] \in \mathbb{R}, \forall n \in \mathbb{Z}\}$ - this describes a sequence which can be indexed with negative values. It can be represented as “sticks” on a line, each stick denoting the value of the vector at some integer index $n$. In this case, addition and scalar multiplication are applied to every value of the infinite sequence.
  • An uncountably infinite “sequence” $\mathbb{R}^\mathbb{R} = \{x = \left(x(t)\right), x(t) \in \mathbb{R}, \forall t \in \mathbb{R}\}$, which can simply be thought of as a function.

Any of these spaces can also be defined over the complex numbers.

Vector subspaces

$S$ is a subspace of a vector space $V$ when $S$ is a subset of $V$ that is itself a vector space for the same vector addition and scalar multiplication.


Starting with a vector space $V$, pick $N$ vectors $v_1, v_2, \ldots, v_N \in V$ and construct the set $S = \{\alpha_1v_1 + \alpha_2v_2 + \ldots + \alpha_Nv_N : \alpha_1, \alpha_2, \ldots, \alpha_N \in \mathbb{R}\}$. Then $S$ is a subspace of $V$, called the “span” of those vectors: $S = \mathrm{span}(v_1, v_2, \ldots, v_N)$. That is, the set of vectors constructed from linear combinations of the $N$ vectors from $V$.


Consider the space of continuous-time functions $V = \mathbb{R}^\mathbb{R}$. If we take the span of two piecewise-linear functions (vectors) $v_1, v_2$, then $S = \mathrm{span}(v_1, v_2) = \{(\alpha_1v_1(t) + \alpha_2v_2(t))_{t \in \mathbb{R}} : \alpha_1, \alpha_2 \in \mathbb{R}\}$. Each vector in the span is a sum of the two scaled functions, which is itself a piecewise-linear function, so $\mathrm{span}(v_1, v_2)$ is the set of functions which are linear combinations of $v_1, v_2$. Each function in the span is defined by the original two functions and just the two numbers $\alpha_1, \alpha_2$. If we are able to approximate complex signals as piecewise linear, this becomes a useful tool to represent them with few numbers. Generalizing, we can consider an infinite collection of vectors $\{v_i\}_{i \in \mathbb{Z}}$, with $\mathrm{span}(\{v_i\}_{i \in \mathbb{Z}}) = \{\sum_{i \in \mathbb{Z}} \alpha_i v_i : \alpha_i \in \mathbb{R}\}$.
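A minimal numerical sketch of this idea (NumPy assumed; the two hat functions and coefficients are my own illustrative choices, not from the notes): a 201-sample signal in the span is fully described by just two coefficients.

```python
import numpy as np

# Sample two piecewise-linear "hat" functions on a grid (hypothetical choices).
t = np.linspace(0.0, 2.0, 201)
v1 = np.maximum(0.0, 1.0 - np.abs(t - 0.5) / 0.5)   # hat centered at t = 0.5
v2 = np.maximum(0.0, 1.0 - np.abs(t - 1.5) / 0.5)   # hat centered at t = 1.5

# A vector in span(v1, v2) is fully described by two scalars (alpha1, alpha2);
# the sum of piecewise-linear functions is again piecewise linear.
alpha1, alpha2 = 2.0, -0.7
s = alpha1 * v1 + alpha2 * v2
```

Storing $(\alpha_1, \alpha_2)$ instead of all 201 samples is the compression idea the notes point at.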

Euclidean spaces

In a two-dimensional Euclidean space, each vector is represented by two real numbers $x = (x_1, x_2)$. We can define the inner product of two vectors $x$ and $y$ as $\langle x, y\rangle = x_1y_1 + x_2y_2 \in \mathbb{R}$, the norm as $\|x\| = \sqrt{\langle x, \,x\rangle } = \sqrt{x_1^2 + x_2^2}$, which can be considered the “length” of the vector, and the distance as $d(x, y) = \|x - y\|$. This gives rise to the notion of orthogonality: $x \bot y \Leftrightarrow \langle x, y \rangle = 0$. We also have the canonical basis vectors $e_1 = (1, 0)$ and $e_2 = (0, 1)$, which give $\langle x, e_1 \rangle = x_1$ and $\langle x, e_2 \rangle = x_2$ and $x = x_1e_1 + x_2e_2 = \langle x, e_1 \rangle e_1 + \langle x, e_2 \rangle e_2$, called the “kindergarten formula”, which holds in all Euclidean spaces. Parseval's equality is $\|x\|^2 = |\langle x, e_1 \rangle|^2 + |\langle x, e_2 \rangle |^2$.
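A quick check of the “kindergarten formula” and Parseval's equality in $\mathbb{R}^2$ (NumPy assumed; the vector $x = (3, 4)$ is an arbitrary example):

```python
import numpy as np

x = np.array([3.0, 4.0])
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# Inner product and norm in R^2.
inner = lambda a, b: float(np.dot(a, b))
norm = lambda a: inner(a, a) ** 0.5

# "Kindergarten formula": x = <x, e1> e1 + <x, e2> e2.
recon = inner(x, e1) * e1 + inner(x, e2) * e2

# Parseval's equality: ||x||^2 = |<x, e1>|^2 + |<x, e2>|^2.
parseval = inner(x, e1) ** 2 + inner(x, e2) ** 2
```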

Pythagorean theorem

The Pythagorean Theorem states that if $x \bot y$ then $\|x+y\|^2 = \|x\|^2 + \|y\|^2$, because the inner product is distributive in each argument: $\langle x, y+z \rangle = \langle x, y \rangle + \langle x, z \rangle $ and $\langle x+y, z \rangle = \langle x, z \rangle + \langle y, z \rangle $, so $\|x+y\|^2 = \langle x + y, x + y \rangle = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle = \|x\|^2 + 0 + 0 + \|y\|^2$.

Example exercises

Consider two vectors $x, y \in \mathbb{R}^2$. The orthogonal projection $z$ of $x$ onto $\mathrm{span}(y)$ is the vector resulting from an orthogonal projection onto the line which consists of all possible vectors proportional to $y$. Explicitly, we must have $z = \alpha y$ because $z \in \mathrm{span}(y)$ and $x - z \bot y$ ($x-z$ is the vector “dropped down” from the tip of $x$ onto the line $y$ falls on.) Then we know $\langle x-z, y \rangle = 0$ and $\langle x - \alpha y, y\rangle = \langle x, y \rangle + \langle -\alpha y, y \rangle$ so $\langle x, y \rangle - \alpha \langle y, y \rangle = 0$ and $\alpha = \frac{\langle x, y \rangle}{\langle y, y \rangle} = \frac{\langle x, y \rangle}{\|y\|^2}$ so $z = \frac{\langle x, y \rangle}{\|y\|^2}y$. Note that $z = \frac{\langle x, y \rangle}{\|y\|} \frac{y}{\|y\|} = \langle x, \frac{y}{\|y\|} \rangle \frac{y}{\|y\|}$. Defining $y^\prime = \frac{y}{\|y\|}$ we have $z = \langle x, y^\prime \rangle y^\prime$. Then $\|y^\prime\|^2 = \langle y^\prime, y^\prime \rangle = \langle \frac{1}{\|y\|} y, \frac{y}{\|y\|} \rangle = \frac{1}{\|y\|^2}\langle y, y \rangle = \frac{\|y\|^2}{\|y\|^2} = 1$.
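The projection formula above can be verified numerically (NumPy assumed; the vectors $x, y$ are arbitrary examples): the residual $x - z$ is orthogonal to $y$, and the normalized-vector form gives the same $z$.

```python
import numpy as np

x = np.array([2.0, 1.0])
y = np.array([3.0, 0.0])

# Orthogonal projection of x onto span(y): z = (<x, y> / ||y||^2) y.
z = (np.dot(x, y) / np.dot(y, y)) * y

# The residual x - z must be orthogonal to y ...
residual_dot = np.dot(x - z, y)

# ... and the same z comes from the unit vector y' = y / ||y||.
yp = y / np.linalg.norm(y)
z2 = np.dot(x, yp) * yp
```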

In the same system, we can find the vector $v$ of $\mathrm{span}(y)$ that minimizes $\|x-v\|$ by the Pythagorean Theorem: $\|x-v\|^2 = \|x-z\|^2 + \|z-v\|^2$, so $\|x-v\|^2 \ge \|x-z\|^2$ because all of these squared distances are nonnegative. Taking square roots, $\|x-v\| \ge \|x-z\|$, with equality only when $\|z-v\| = 0$. Only one vector has norm 0 (the zero vector), implying that $z-v = 0 \Rightarrow v = z$. So the orthogonal projection of $x$ onto $\mathrm{span}(y)$ is also the vector of $\mathrm{span}(y)$ that minimizes its distance to $x$.

General Euclidean Space

In a vector space $E$, a function $\langle \cdot, \cdot \rangle$ is an inner product of $E$ if it satisfies the following properties:

  1. For any two vectors $u, v \in E$, $\langle u, v \rangle \in \mathbb{R}$ or $\mathbb{C}$
  2. $\forall u, v, w \in E, \langle u+v, w \rangle = \langle u, w \rangle + \langle v, w \rangle $
  3. $\forall \alpha \in \mathbb{R}$ (or $\mathbb{C}$), $\forall u, v \in E, \langle \alpha u, v \rangle = \alpha \langle u, v \rangle$
  4. $\forall u, v \in E, \langle v, u \rangle = \langle u, v \rangle^*$
  5. $\forall u \in E, \langle u, u \rangle$ is real and $\ge 0$
  6. $\langle u, u \rangle = 0 \Rightarrow u = 0$

Note that $\langle u, \alpha v \rangle = \langle \alpha v, u \rangle^* = (\alpha \langle v, u \rangle )^* = \alpha^*\langle v, u \rangle^* = \alpha^*\langle u, v \rangle$. Once $\langle \cdot, \cdot \rangle$ is an inner product, we define the norm as $\|u\| = \sqrt{\langle u, u \rangle}$ and the distance as $\mathrm{d}(x, y) = \|x - y\|$ same as in two dimensions.

  1. In $\mathbb{R}^N = \{x = (x_1, \ldots , x_N) : x_i \in \mathbb{R}\}$, $\langle x, y \rangle = \sum\limits_{i=1}^N x_i y_i$ and $\|x\|^2 = \sum\limits_{i=1}^N x_i^2$
  2. In $\mathbb{C}^N = \{x = (x_1, \ldots , x_N) : x_i \in \mathbb{C}\}$, $\langle x, y \rangle = \sum\limits_{i=1}^N x_i y_i^*$ and $\|x\|^2 = \sum\limits_{i=1}^N x_ix_i^* = \sum\limits_{i=1}^N|x_i|^2$
  3. In $\mathbb{C}^\mathbb{Z} = \{(x[n]) : x[n] \in \mathbb{C}, \forall n \in \mathbb{Z}\}$, define the subspace $\ell^2(\mathbb{Z}) = \{(x[n])_{n \in \mathbb{Z}} : \sum\limits_{n \in \mathbb{Z}}|x[n]|^2 \;\mathrm{converges}\} \subset \mathbb{C}^\mathbb{Z}$. In $\ell^2(\mathbb{Z})$, $\langle x, y \rangle = \sum\limits_{n \in \mathbb{Z}}x[n]y[n]^* \Rightarrow \|x\|^2 = \sum\limits_{n \in \mathbb{Z}}|x[n]|^2$
  4. In $L^2([0, T]) = \{(x(t))_{t \in [0, T]} : \int_0^T |x(t)|^2 dt \;\mathrm{finite} \}$, $\langle x, y \rangle = \int_0^T x(t)y(t)^* dt \Rightarrow \|x\|^2 = \int_0^T |x(t)|^2 dt$
  5. In $L^2(\mathbb{R}) = \{(x(t))_{t \in \mathbb{R}} : \int_{-\infty}^{\infty} |x(t)|^2dt \;\mathrm{finite} \} \subset \mathbb{R}^\mathbb{R}$, $ \langle x, y \rangle = \int_{-\infty}^{\infty} x(t)y(t)^*dt \Rightarrow \|x\|^2 = \int_{-\infty}^{\infty}|x(t)|^2dt$
  1. If $\alpha \in \mathbb{C}, v \in E$, then $\|\alpha v \|^2 = \langle \alpha v, \alpha v \rangle = \alpha \langle v, \alpha v \rangle = \alpha \alpha^*\langle v, v \rangle = |\alpha|^2\|v\|^2$
  2. $\|\frac{v}{\|v\|}\| = |\frac{1}{\|v\|}| \|v\| = 1$
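These inner products can be approximated numerically by truncating the sums and discretizing the integrals (NumPy assumed; the particular sequences and signal are illustrative choices of mine):

```python
import numpy as np

# l2(Z) inner product, truncated to a finite window (a numerical approximation).
n = np.arange(-50, 51)
x_seq = np.where(n == 0, 1.0, 0.0)          # the impulse delta[n]
y_seq = 0.5 ** np.abs(n)                    # a square-summable sequence
l2_inner = np.sum(x_seq * np.conj(y_seq))   # <x, y> = sum_n x[n] y[n]*

# L2([0, T]) inner product, approximated by a Riemann sum.
T = 1.0
dt = T / 4000
t = np.arange(0.0, T, dt)
xf = np.sin(2 * np.pi * t)
L2_norm_sq = np.sum(xf * np.conj(xf)).real * dt  # ||x||^2 = int_0^T |x(t)|^2 dt
```

Here the delta sequence simply picks out $y[0]$, and $\int_0^1 \sin^2(2\pi t)\,dt = 1/2$.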


In a Euclidean space, we define orthogonality by $x \bot y \Leftrightarrow \langle x, y \rangle = 0$.

  1. Consider $L^2(\mathbb{R})$, the space of functions with finite energy. Say we have two functions $x(t) = 1$ for $t \in [0, 1]$ ($0$ elsewhere) and $y(t) = x(t - t_0)$. These are orthogonal when $|t_0| \ge 1$, because then $\langle x, y \rangle = \int_{-\infty}^{\infty}x(t)y(t)^*dt = 0 \Rightarrow x \bot y$. They don't overlap, so they are orthogonal.
  2. In $L^2([0, 2\pi])$, we have two signals $x(t) = \cos(t)$ and $y(t) = \sin(t)$. Are they orthogonal? $\langle x, y \rangle = \int_0^{2\pi} x(t)y(t)^*dt = \int_0^{2\pi} \cos(t)\sin(t)\,dt = \frac{1}{2}\int_0^{2\pi} \sin(2t)\,dt = 0$, so yes. Intuitively, because of the functions' symmetry, the product splits into four pieces over the period, each canceling against another.
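Both examples can be checked with a Riemann-sum approximation of the inner product (NumPy assumed; grid sizes and the shift $t_0 = 1.5$ are my own choices):

```python
import numpy as np

# Example 2: <cos, sin> on [0, 2*pi] should vanish by symmetry.
t = np.linspace(0.0, 2.0 * np.pi, 100001)
dt = t[1] - t[0]
inner_cos_sin = np.sum(np.cos(t) * np.sin(t)) * dt

# Example 1: non-overlapping boxes in L2(R): x = 1 on [0, 1],
# y = x shifted by t0 = 1.5, so the product is identically zero.
s = np.linspace(-1.0, 4.0, 50001)
ds = s[1] - s[0]
x_box = ((s >= 0) & (s <= 1)).astype(float)
y_box = ((s >= 1.5) & (s <= 2.5)).astype(float)
inner_boxes = np.sum(x_box * y_box) * ds
```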

Properties of Inner Product

The inner product satisfies the Pythagorean theorem, $x \bot y \Rightarrow \|x + y\|^2 = \|x\|^2 + \|y\|^2$, and the triangle inequality, $\forall x, y \in E, \|x + y\| \le \|x\| + \|y\|$. Inner products also obey the Cauchy-Schwarz inequality, $|\langle x, y \rangle | \le \|x\| \|y\|$. Cauchy-Schwarz implies that if $\|x\| = \|y\| = 1$ then $0 \le |\langle x, y\rangle | \le 1$, so there exists $\theta \in [0, \frac{\pi}{2}]$ such that $|\langle x, y \rangle | = \cos(\theta)$; $\theta$ is the angle between $x$ and $y$. If we find the orthogonal projection of $x$ onto $y$ (with $\|y\| = 1$): $z = \langle x, y \rangle y$, then $\|z\| = |\langle x, y \rangle| = \cos(\theta)$. In other words, the angle between two functions lies between $0$ (parallel) and $\frac{\pi}{2}$ (orthogonal); in between, they have some degree of correlation. In general, $|\langle \frac{x}{\|x\|}, \frac{y}{\|y\|} \rangle| = \cos(\theta)$ where $\theta$ is the angle between them.
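A small sketch of the angle interpretation (NumPy assumed; the helper `angle_between` and the test vectors are mine):

```python
import numpy as np

def angle_between(x, y):
    """theta in [0, pi/2] such that |<x/||x||, y/||y||>| = cos(theta)."""
    c = abs(np.dot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(min(c, 1.0)))  # Cauchy-Schwarz guarantees c <= 1

x = np.array([1.0, 0.0])
theta_parallel = angle_between(x, np.array([2.0, 0.0]))   # 0: proportional
theta_orthog = angle_between(x, np.array([0.0, 3.0]))     # pi/2: orthogonal
theta_mid = angle_between(x, np.array([1.0, 1.0]))        # pi/4: partly correlated
```

The `min(c, 1.0)` clip guards against floating-point rounding pushing the cosine slightly past 1.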

Orthogonal Projection onto Subspace

Assume that $E$ is a Euclidean space (a vector space with inner product $\langle \cdot, \cdot \rangle$). Let $V$ be a subspace of $E$. If we consider a vector $x \in E$ which is not in $V$, there exists a unique $y \in V$ such that $x - y \bot V$, meaning that $x - y \bot v\; \forall v \in V$. Then we call $y$ the orthogonal projection of $x$ onto $V$, and write $y = P_V(x)$.

Theorem (Bessel's Inequality)

$\|P_V(x)\| \le \|x\|$, with equality if and only if $x \in V$.


Proof Define $y = P_V(x)$ and $z = x - y$. Then $z \bot v \;\forall v \in V$ and $z \bot y$ because $y \in V$. By the Pythagorean theorem, $\|z + y\|^2 = \|z\|^2 + \|y\|^2$. But $z + y = x$ and $\|z\|^2 \ge 0$, so $\|x\|^2 \ge \|y\|^2$. Because $\|y\| = \|P_V(x)\|$, we have shown $\|x\| \ge \|P_V(x)\|$. Also, this implies that $\|x\|^2 = \|y\|^2$ only when $\|z\| = 0$, which implies that $z = 0$ and therefore $x = y$ (a property of the inner product).

Theorem (Equivalent Characterization of Orthogonal Projection)

$y = P_V(x)$ is the unique vector of $V$ which minimizes $d(x, y) = \|x - y\|$.


Proof Take any $v \in V$ and compare $\|x - v\|$ with $\|x - y\|$. Note that $(x - y) \bot (y - v)$ because $y \in V, v \in V$ so $y - v \in V$. By the Pythagorean theorem, $\|(x - y) + (y - v)\|^2 = \|x - y\|^2 + \|y - v\|^2 \Rightarrow \|x - v\| \ge \|x - y\|$ because $\|y - v\|^2 \ge 0$.

Orthogonal Families

Given a Euclidean space $(E, \langle \cdot, \cdot \rangle)$, a family of vectors $\{e_n\}_{n \in \mathbb{Z}}$ of $E$ is orthogonal when $e_n \bot e_{n^\prime} \;\forall n \ne n^\prime$. $\{e_n\}_{n \in \mathbb{Z}}$ is called orthonormal when it is orthogonal and $\|e_n\| = 1 \;\forall n$. This is equivalent to writing that $\langle e_n, e_{n^\prime} \rangle$ is $0$ when $n \ne n^\prime$ and $1$ when $n = n^\prime$. Defining the sequence $\delta[n] = 0, n \ne 0; 1, n = 0$, then $\langle e_n, e_{n^\prime} \rangle = \delta[n - n^\prime]$ for orthonormal families. We call $\{e_n\}_{n \in \mathbb{Z}}$ an orthonormal basis (ONB) of $E$ when it is orthonormal and $\mathrm{span}\{e_n\} = E$.


Recall $\ell^2(\mathbb{Z}) = \{ (x[n])_{n \in \mathbb{Z}} \;\mathrm{s.t.}\; \sum\limits_{n \in \mathbb{Z}} |x[n]|^2 \;\mathrm{finite}\}$ and let $E = \ell^2(\mathbb{Z})$. Write $e_k = (\delta[n - k])_{n \in \mathbb{Z}}$, that is, the delta sequence shifted by $k$, defined for each $k$. From this, you get a family of vectors $\{e_k\}_{k \in \mathbb{Z}}$. This is an orthonormal family of vectors because $\langle e_k, e_{k^\prime} \rangle = \sum\limits_{n = -\infty}^\infty \delta[n - k]\delta[n - k^\prime]$, which is $0$ unless $k = k^\prime$, and $\|e_k\|^2 = \sum\limits_{n = -\infty}^\infty \delta[n - k]^2 = 1$. It is also a basis because for any sequence $x[n] \in E$ we can write $x[n] = \sum\limits_{k \in \mathbb{Z}} x[k]\delta[n - k] \rightarrow x = \sum\limits_{k \in \mathbb{Z}} x[k]e_k$, so any $x \in E$ satisfies $x \in \mathrm{span} \{e_k\}_{k \in \mathbb{Z}}$, i.e. $E \subset \mathrm{span}\{e_k\}_{k \in \mathbb{Z}}$ (the reverse inclusion is immediate).
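A finite-window stand-in for this example (NumPy assumed; the window size $N$ and the random test sequence are mine): shifted impulses are orthonormal, and any sequence expands exactly in them.

```python
import numpy as np

# Finite-window stand-in for l2(Z): sequences indexed n = -N..N.
N = 10
n = np.arange(-N, N + 1)

def e(k):
    """e_k[n] = delta[n - k], the impulse shifted to position k."""
    return (n == k).astype(float)

# Orthonormality: <e_k, e_k'> = delta[k - k'].
gram_01 = np.dot(e(0), e(1))
gram_33 = np.dot(e(3), e(3))

# Basis expansion: any x equals sum_k x[k] e_k.
rng = np.random.default_rng(0)
x = rng.standard_normal(n.size)
recon = sum(x[k + N] * e(k) for k in range(-N, N + 1))
```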


Theorem Assume that $E$ has an orthonormal basis $\{e_k\}_{k \in \mathbb{Z}}$. Then for all $x \in E$, $x = \sum\limits_{k \in \mathbb{Z}} \langle x, e_k \rangle e_k$.


Proof Since $\mathrm{span}\{e_k\} = E$, for any $x \in E$ there exist coefficients $\alpha_k$ such that $x = \sum\limits_{k \in \mathbb{Z}} \alpha_k e_k$. Let $n \in \mathbb{Z}$, and calculate $\langle x, e_n \rangle = \langle \sum\limits_{k \in \mathbb{Z}} \alpha_k e_k, e_n \rangle$. By distributivity of the inner product, we can write $\langle \sum\limits_{k \in \mathbb{Z}} \alpha_k e_k, e_n \rangle = \sum\limits_{k \in \mathbb{Z}} \langle \alpha_k e_k, e_n \rangle = \sum\limits_{k \in \mathbb{Z}} \alpha_k \langle e_k, e_n \rangle = \alpha_n$, so $\alpha_n = \langle x, e_n \rangle$ and therefore $x = \sum\limits_{k \in \mathbb{Z}} \langle x, e_k \rangle e_k$.

Theorem (Parseval's Equality)

$\|x\|^2 = \sum\limits_{k \in \mathbb{Z}} |\langle x, e_k \rangle |^2$.


Proof We know that $x = \sum \alpha_k e_k$ with $\alpha_k = \langle x, e_k \rangle$. Now $\|x\|^2 = \langle x, x \rangle = \langle \sum\limits_{k \in \mathbb{Z}} \alpha_k e_k, x \rangle$ and by properties of the inner product we have $\langle \sum\limits_{k \in \mathbb{Z}} \alpha_k e_k, x \rangle = \sum\limits_{k \in \mathbb{Z}} \alpha_k \langle e_k, x \rangle$. By definition, $\alpha_k = \langle x, e_k \rangle$ and by properties of the inner product $\langle e_k, x \rangle = \langle x, e_k \rangle^\ast$, so $\|x\|^2 = \sum\limits_{k \in \mathbb{Z}} \langle x, e_k \rangle \langle x, e_k \rangle^\ast = \sum\limits_{k \in \mathbb{Z}} |\langle x, e_k \rangle|^2$.

Theorem (Generalized Parseval's Equality)

For all $x, y \in E$, $\langle x, y \rangle = \sum\limits_{k \in \mathbb{Z}} \langle x, e_k \rangle \langle y, e_k \rangle^\ast$




Theorem Assume that $\{e_k\}_{k \in \mathbb{Z}}$ is orthonormal but not a basis of $E$. We can always define $V = \mathrm{span} \{e_k\}_{k \in \mathbb{Z}}$ (which implies that $\{e_k\}_{k \in \mathbb{Z}}$ is an ONB of $V$), so that $V \subset E$. For any $x \in E$, $\sum\limits_{k \in \mathbb{Z}} \langle x, e_k \rangle e_k = P_V(x)$.


Proof Define $y = \sum\limits_{k \in \mathbb{Z}} \langle x, e_k \rangle e_k$. Clearly, $y \in V$ because it is a linear combination of the $e_k$. Since $\{e_k\}_{k \in \mathbb{Z}}$ is an ONB of $V$ and $y \in V$, we can write $y = \sum\limits_{k \in \mathbb{Z}} \langle y, e_k \rangle e_k$. So, $y$ has two expansions on the basis $e_k$, and since basis expansions are unique we must have $\langle x, e_k \rangle = \langle y, e_k \rangle \;\forall k \in \mathbb{Z}$. By properties of the inner product, $\langle x - y, e_k \rangle = \langle x, e_k \rangle - \langle y, e_k \rangle = 0$. This implies that $x - y \bot e_k$. Now, take any $v \in V$. Then $v = \sum\limits_{k \in \mathbb{Z}} \alpha_k e_k$ and $\langle x - y, v \rangle = \langle x - y, \sum\limits_{k \in \mathbb{Z}} \alpha_k e_k\rangle = \sum\limits_{k \in \mathbb{Z}} \alpha_k^\ast \langle x - y, e_k \rangle = 0$. So $x - y \bot v$ for any $v \in V$, so $x - y \bot V$, so $y = P_V(x)$.

Theorem (Bessel's Inequality)

Given $\{e_k\}_{k \in \mathbb{Z}}$, an orthonormal family (not necessarily a basis), for all $x \in E$, $\sum\limits_{k \in \mathbb{Z}} |\langle x, e_k \rangle|^2 \le \|x\|^2$. (Recall that $\|P_V(x)\| = \|x\|$ if and only if $x \in V$. So $\sum\limits_{k \in \mathbb{Z}} |\langle x, e_k \rangle|^2 = \|x\|^2$ if and only if $x \in V$.)


Proof Call $V = \mathrm{span} \{e_k\}_{k \in \mathbb{Z}}$, so $\{e_k\}_{k \in \mathbb{Z}}$ is an ONB of $V$. We know that $P_V(x) = \sum\limits_{k \in \mathbb{Z}} \langle x, e_k \rangle e_k$. Now $P_V(x) \in V$, so $P_V(x) = \sum\limits_{k \in \mathbb{Z}} \langle P_V(x), e_k \rangle e_k$. Applying Parseval's equality in $V$ gives $\|P_V(x)\|^2 = \sum\limits_{k \in \mathbb{Z}} |\langle P_V(x), e_k \rangle|^2$. Comparing the two expansions of $P_V(x)$ implies that $\langle P_V(x), e_k \rangle = \langle x, e_k \rangle$, and therefore $ \sum\limits_{k \in \mathbb{Z}} |\langle x, e_k \rangle |^2 = \|P_V(x)\|^2 \le \|x\|^2$.

Space Spanned by Orthogonal Family

Let $\{e_k\}_{k \in \mathbb{Z}}$ be orthogonal and non-zero (not necessarily orthonormal). Let $V = \mathrm{span} \left( \{e_k\}_{k \in \mathbb{Z}} \right)$. For any $x \in E$, how can we express $P_V(x)$ when we don't have an orthonormal basis? If we define $e^\prime_k = \frac{e_k}{\|e_k\|}$, then $\{e^\prime_k\}$ is orthonormal, and $\mathrm{span} \left( \{e^\prime_k\}_{k \in \mathbb{Z}} \right) = V$. So, $\{e^\prime_k\}_{k \in \mathbb{Z}}$ is an ONB of $V$. Therefore, we can write $P_V(x) = \sum\limits_{k \in \mathbb{Z}} \langle x, e^\prime_k \rangle e^\prime_k = \sum\limits_{k \in \mathbb{Z}} \langle x, \frac{e_k}{\|e_k\|} \rangle \frac{e_k}{\|e_k\|} = \sum\limits_{k \in \mathbb{Z}} \frac{1}{\|e_k\|^2} \langle x, e_k \rangle e_k = \sum\limits_{k \in \mathbb{Z}} \frac{\langle x, e_k \rangle}{\langle e_k, e_k \rangle} e_k$.
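A sketch of this projection formula with an orthogonal but non-unit family (NumPy assumed; the vectors in $\mathbb{R}^3$ are arbitrary examples of mine):

```python
import numpy as np

# Orthogonal but non-unit family: e1, e2 span the xy-plane V in R^3.
e1 = np.array([2.0, 0.0, 0.0])
e2 = np.array([0.0, 3.0, 0.0])
x = np.array([1.0, 2.0, 5.0])

# P_V(x) = sum_k <x, e_k> / <e_k, e_k> * e_k  (no pre-normalization needed).
P = sum((np.dot(x, ek) / np.dot(ek, ek)) * ek for ek in (e1, e2))

# The residual x - P is orthogonal to the whole family.
r1, r2 = np.dot(x - P, e1), np.dot(x - P, e2)
```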

Isomorphism between signal spaces

Consider two Euclidean spaces $(E, \langle \cdot, \cdot \rangle_E)$ and $(E^\prime, \langle \cdot, \cdot \rangle_{E^\prime})$ and a linear function $F: E\rightarrow E^\prime$ which maps any vector $x \in E$ to a vector $x^\prime \in E^\prime$. $F$ is an isomorphism when $F$ is invertible and $F$ preserves inner products, meaning that $\forall x, y \in E, \langle x, y \rangle_E = \langle F(x), F(y) \rangle _{E^\prime}$. It follows that an isomorphism preserves distances, because $\|x\|_E^2 = \|F(x)\|_{E^\prime}^2 \rightarrow \|x\|_E = \|F(x)\|_{E^\prime}$, so $d_E(x, y) = \|x - y\|_E = \|F(x - y)\|_{E^\prime} = \|F(x) - F(y)\|_{E^\prime} = d_{E^\prime}(F(x), F(y))$ (the second-to-last equality follows from $F$ being linear).


A mirror reflection is an isomorphism because distances and angles are preserved. Shannon's sampling theorem provides an isomorphism between a continuous-time signal space and a discrete-time signal space.

Theorem Given two Euclidean spaces $(E, \langle \cdot, \cdot \rangle)$ and $(E^\prime, \langle \cdot, \cdot \rangle)$, assume that $E$ has an ONB $(e_k)_{k \in \mathbb{Z}}$. A linear function $F$ is an isomorphism if and only if the family $(F(e_k))_{k \in \mathbb{Z}}$ is an ONB of $E^\prime$.

Proof First, assume that $F$ is an isomorphism. Since $F$ is invertible, $\mathrm{span}(F(e_k)) = E^\prime$ (a result from linear algebra). Now, $\langle F(e_i), F(e_j) \rangle = \langle e_i, e_j \rangle = \delta[i - j]$ because $F$ preserves inner products and $(e_k)_{k \in \mathbb{Z}}$ is orthonormal. Second, assume that $(F(e_k))_{k \in \mathbb{Z}}$ is an ONB of $E^\prime$. Let $x, y \in E$. Since $\{e_k\}_{k \in \mathbb{Z}}$ is an ONB of $E$, we have $x = \sum\limits_{k \in \mathbb{Z}} \alpha_k e_k$ where $\alpha_k = \langle x, e_k \rangle$ and $y = \sum\limits_{k \in \mathbb{Z}} \beta_k e_k$ where $\beta_k = \langle y, e_k \rangle$. From the Generalized Parseval's Equality, we have $\langle x, y \rangle_E = \sum\limits_{k \in \mathbb{Z}} \langle x, e_k \rangle_E \langle y, e_k \rangle_E^\ast = \sum_{k \in \mathbb{Z}} \alpha_k \beta_k^\ast$. Now, since $F(x), F(y) \in E^\prime$ we also have $F(x) = \sum\limits_{k \in \mathbb{Z}} \alpha^\prime_k F(e_k)$ where $\alpha^\prime_k = \langle F(x), F(e_k) \rangle_{E^\prime}$ and $F(y) = \sum\limits_{k \in \mathbb{Z}} \beta^\prime_k F(e_k)$ where $\beta^\prime_k = \langle F(y), F(e_k) \rangle_{E^\prime}$. Similarly from the Generalized Parseval's Equality we have $\langle F(x), F(y) \rangle_{E^\prime} = \sum_{k \in \mathbb{Z}} \alpha^\prime_k {\beta^\prime_k}^\ast$. Now, because $F$ is linear we have $x = \sum_{k \in \mathbb{Z}} \alpha_k e_k \rightarrow F(x) = \sum_{k \in \mathbb{Z}} \alpha_k F(e_k)$ and $y = \sum_{k \in \mathbb{Z}} \beta_k e_k \rightarrow F(y) = \sum_{k \in \mathbb{Z}} \beta_k F(e_k)$, so it follows that $\alpha^\prime_k = \alpha_k$ and $\beta^\prime_k = \beta_k$. Substituting into the Generalized Parseval's Equality, we have $\langle x, y \rangle_E = \langle F(x), F(y) \rangle_{E^\prime}$, so $F$ preserves inner products and is therefore an isomorphism.

A theoretical application of this result: given one Euclidean space $(E, \langle \cdot, \cdot \rangle_E)$ with orthonormal basis $(e_k)_{k \in \mathbb{Z}}$, we can construct a function $F : E \rightarrow \ell^2(\mathbb{Z}), x \rightarrow F(x) = (x[n])_{n \in \mathbb{Z}}$ such that $x[n] = \langle x, e_n \rangle$ for each $n \in \mathbb{Z}$. To show that this is an isomorphism, we must first show that $F(x) \in \ell^2(\mathbb{Z})$. Recall that $\ell^2(\mathbb{Z}) = \{ (x[n])_{n \in \mathbb{Z}} : \sum\limits_{n \in \mathbb{Z}} |x[n]|^2 \;\mathrm{finite}\}$. Parseval's equality tells us that $\sum\limits_{n \in \mathbb{Z}} |\langle x, e_n \rangle|^2 = \|x\|^2$, which implies that $(\langle x, e_n \rangle)_{n \in \mathbb{Z}} \in \ell^2(\mathbb{Z})$. $F$ is linear due to properties of the inner product.

Theorem $F$ is automatically an isomorphism from $(E, \langle \cdot, \cdot \rangle)$ to $(\ell^2(\mathbb{Z}), \langle \cdot, \cdot \rangle_2)$, where $\langle (x[n])_{n \in \mathbb{Z}}, (y[n])_{n \in \mathbb{Z}} \rangle_2 = \sum\limits_{n \in \mathbb{Z}} x[n]y[n]^\ast$.

Proof If we show that $(F(e_k))_{k \in \mathbb{Z}}$ is an ONB of $\ell^2(\mathbb{Z})$, we can use the previous theorem. For a given $k \in \mathbb{Z}$, we have $F(e_k) = (\langle e_k, e_n \rangle)_{n \in \mathbb{Z}} = (\delta[n - k])_{n \in \mathbb{Z}}$; that is, $F$ maps $e_k$ to an impulse at position $k$. But we know that $\{(\delta[n - k])_{n \in \mathbb{Z}}\}_{k \in \mathbb{Z}}$ is an ONB of $\ell^2(\mathbb{Z})$. Therefore, this $F$ is always an isomorphism.

Dual Basis

Intuitively, if you take an orthonormal pair $e_1, e_2 \in \mathbb{R}^2$ (a basis, because it has as many vectors as the dimension of the space), you can compute coefficients $\alpha_1$ and $\alpha_2$ such that $x = \alpha_1e_1 + \alpha_2e_2$ by $\alpha_1 = \langle x, e_1 \rangle$ and $\alpha_2 = \langle x, e_2 \rangle$. If the vectors are independent but not orthonormal, you can still find appropriate coefficients $\alpha_1, \alpha_2$. First, construct $e^\prime_2 \bot e_1$ such that $\langle e_2, e^\prime_2 \rangle = 1$. Then, construct $e^\prime_1 \bot e_2$ such that $\langle e_1, e_1^\prime \rangle = 1$. Now, writing $x = \alpha_1 e_1 + \alpha_2 e_2$, take $$\langle x, e^\prime_1 \rangle = \langle \alpha_1 e_1, e_1^\prime \rangle + \langle \alpha_2 e_2, e^\prime_1 \rangle = \alpha_1\langle e_1, e^\prime_1 \rangle + \alpha_2\langle e_2, e^\prime_1 \rangle = \alpha_1$$ so $\alpha_1 = \langle x, e^\prime_1 \rangle$ and similarly $\alpha_2 = \langle x, e^\prime_2 \rangle$, so that $x = \langle x, e_1^\prime \rangle e_1 + \langle x, e_2^\prime \rangle e_2$. We call $(e_1^\prime, e_2^\prime)$ the dual basis of $(e_1, e_2)$. For a dual basis, $e^\prime_j$ is defined such that $\langle e_i, e^\prime_j \rangle = \delta[i - j]$. In general, we take a Euclidean space $(E, \langle \cdot, \cdot \rangle)$ and consider a family of linearly independent vectors $(e_k)_{k \in \mathbb{Z}}$. Then we say that $(\tilde{e}_k)_{k \in \mathbb{Z}}$ is dual to $(e_k)_{k \in \mathbb{Z}}$ when $\mathrm{span}(\tilde{e}_k)_{k \in \mathbb{Z}} = \mathrm{span}(e_k)_{k \in \mathbb{Z}}$ and $\langle e_i, \tilde{e}_j \rangle = \delta[i - j]$.
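In finite dimensions the dual basis can be computed via the inverse Gram matrix, a sketch under my own choice of a non-orthogonal basis (NumPy assumed; the Gram-inverse construction $D = E(E^T E)^{-1}$ is a standard linear-algebra route, not spelled out in the notes):

```python
import numpy as np

# A non-orthogonal but independent basis of R^2 (columns of E).
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Dual basis via the inverse Gram matrix: D = E (E^T E)^{-1},
# so that <e_i, d_j> = delta[i - j] (real case).
G = E.T @ E
D = E @ np.linalg.inv(G)

# Biorthogonality check and the expansion x = sum_i <x, d_i> e_i.
bi = E.T @ D                       # should be the identity matrix
x = np.array([2.0, 3.0])
coeffs = D.T @ x                   # alpha_i = <x, d_i>
recon = E @ coeffs
```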

Theorem Assume that $\{ \tilde{e}_k \}_{k \in \mathbb{Z}}$ is dual to $\{e_k\}_{k \in \mathbb{Z}}$ and $V = \mathrm{span}\{e_k\}_{k \in \mathbb{Z}}$. Then for all $x \in E$, $\sum\limits_{k \in \mathbb{Z}} \langle x, \tilde{e}_k \rangle e_k = P_V(x)$.

Proof First, define $y = \sum\limits_{k \in \mathbb{Z}} \langle x, \tilde{e}_k \rangle e_k$. Note that $y \in V$. Now, we need to show that $x - y \bot V$. Take any $n \in \mathbb{Z}$, define $\alpha_k = \langle x, \tilde{e}_k \rangle$, and calculate $\langle y, \tilde{e}_n \rangle = \langle \sum\limits_{k \in \mathbb{Z}} \alpha_k e_k, \tilde{e}_n \rangle = \sum\limits_{k \in \mathbb{Z}} \alpha_k \langle e_k, \tilde{e}_n \rangle = \alpha_n = \langle x, \tilde{e}_n \rangle$ because $\langle e_k, \tilde{e}_n \rangle = \delta[k - n]$. So, $\langle y, \tilde{e}_n \rangle - \langle x, \tilde{e}_n \rangle = 0 \rightarrow \langle y - x, \tilde{e}_n \rangle = 0$. In other words, $y - x \bot \tilde{e}_n$. If we take any $v \in V$, we can write $v = \sum\limits_{n \in \mathbb{Z}} \beta_n \tilde{e}_n$ (the $\tilde{e}_n$ span $V$), so $\langle y - x, v \rangle = \sum\limits_{n \in \mathbb{Z}} \beta_n^\ast \langle y - x, \tilde{e}_n \rangle = 0$. So $y - x \bot v \; \forall v \in V$.

As a simple application of this theorem, take $x \in V$. Then $P_V(x) = x$ and $\sum_{n \in \mathbb{Z}} \langle x, \tilde{e}_n \rangle e_n = x$.

Theorem Given a Euclidean space $(E, \langle \cdot, \cdot \rangle)$ and a family of independent vectors $(e_n)_{n \in \mathbb{Z}}$ with $V = \mathrm{span}(e_n)_{n \in \mathbb{Z}}$, let $\{e^\prime_n\}_{n \in \mathbb{Z}}$ be such that for all $x \in E$, $\sum\limits_{n \in \mathbb{Z}} \langle x, e^\prime_n \rangle e_n = P_V(x)$. Then $(e^\prime_n)$ is dual to $(e_n)$. (This theorem is the converse of the one above.)

Proof (see lecture notes)

Euclidean View of Fourier Analysis

Fourier Series

Consider $x(t)$ which is assumed to be periodic with period $T$, i.e. $\forall t \in \mathbb{R}, x(t + T) = x(t)$. Fourier found that you can express this periodic $x(t)$ as a linear combination of complex exponential functions: $x(t) = \sum\limits_{k = -\infty}^\infty X[k]e^{j2\pi kt/T} = \sum\limits_{k = -\infty}^\infty X[k]e^{j\omega_k t}$, where $X[k]$ are the function's Fourier coefficients and $\omega_k = k\frac{2\pi}{T}$. Each term in the Fourier series is periodic with period $T/k$ (for $k \ne 0$). The Fourier coefficients can be computed by $X[k] = \frac{1}{T}\int_0^T x(t) e^{-j2\pi kt/T} dt$. In the Euclidean view, we consider $x(t)$ on $[0, T]$, that is, in the Euclidean space $L^2([0, T]) = \{(x(t))_{t \in [0, T]} : \int_0^T |x(t)|^2 dt \;\mathrm{finite}\}$ with inner product $\langle x, y \rangle = \frac{1}{T}\int_0^T x(t) y(t)^\ast dt$. We would like to find an orthonormal basis of $L^2([0, T])$.

Theorem In $L^2([0, T])$, the family $(e_k)_{k \in \mathbb{Z}}$ where $e_k(t) = e^{j2\pi kt/T}$ is an orthonormal basis.

Proof First, we show that it is an orthonormal family: \begin{align*} \langle e_k, e_{k^\prime} \rangle &= \frac{1}{T}\int_0^T e_k(t) e_{k^\prime}(t)^\ast dt\\ &= \frac{1}{T}\int_0^T e^{j2\pi kt/T} e^{-j2\pi k^\prime t/T} dt\\ &= \frac{1}{T}\int_0^T e^{j2\pi(k - k^\prime)t/T} dt\\ &= \frac{1}{T}\begin{cases} T,& k = k^\prime\\ -\frac{j e^{j2\pi (k - k^\prime)t/T}}{2\pi(k - k^\prime)/T} \biggr|_0^T = \frac{-je^{j2\pi(k-k^\prime)} + j}{2\pi(k - k^\prime)/T } = 0,& k \ne k^\prime \end{cases}\\ &= \delta[k - k^\prime] \end{align*} The proof that $\mathrm{span}(e_k)_{k \in \mathbb{Z}} = L^2([0, T])$ is in the book.
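The orthonormality computation can be sanity-checked on a uniform grid (NumPy assumed; $T$, the grid size, and the indices $k = 3, 5$ are my own choices; with $N$ samples per period, the discrete sums of these exponentials cancel exactly):

```python
import numpy as np

T, N = 2.0, 1000
t = np.arange(N) * T / N      # uniform grid on [0, T), no duplicated endpoint
dt = T / N

def e(k):
    """e_k(t) = exp(j 2 pi k t / T)."""
    return np.exp(1j * 2 * np.pi * k * t / T)

def inner(x, y):
    """<x, y> = (1/T) int_0^T x(t) y(t)* dt, via a Riemann sum."""
    return np.sum(x * np.conj(y)) * dt / T

same = inner(e(3), e(3))      # expected ~ 1
diff = inner(e(3), e(5))      # expected ~ 0
```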

Fourier Transform as Basis Expansion

Now, we have all of the properties of a Euclidean space. So, $\forall x \in L^2([0, T]), x = \sum\limits_{k \in \mathbb{Z}} \langle x, e_k \rangle e_k$. For example, if $x = (x(t))_{t \in [0, T]}$ and $e_k = (e_k(t))_{t \in [0, T]} = (e^{j2\pi kt/T})_{t \in [0, T]}$ then $x(t) = \sum\limits_{k \in \mathbb{Z}} \langle x, e_k \rangle e^{j2\pi kt/T}$ where \begin{align*} \langle x, e_k \rangle &= \frac{1}{T}\int_0^T x(t) e_k(t)^\ast dt\\ &= \frac{1}{T} \int_0^T x(t) e^{-j2\pi kt/T} dt\\ &= X[k] \end{align*} So, the Fourier series results from the fact that $x = \sum\limits_{k \in \mathbb{Z}} \langle x, e_k \rangle e_k$ together with the fact that the complex exponentials $(e_k)_{k \in \mathbb{Z}}$ form an orthonormal basis.
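A numerical sketch of the expansion and of Parseval's equality for a simple two-tone signal (NumPy assumed; the signal and truncation range $|k| \le 3$ are my own choices, which suffice here because the signal has only finitely many nonzero coefficients):

```python
import numpy as np

T, N = 1.0, 4096
t = np.arange(N) * T / N
x = np.cos(2 * np.pi * t / T) + 0.5 * np.sin(4 * np.pi * t / T)

def X(k):
    """X[k] = <x, e_k> = (1/T) int_0^T x(t) e^{-j 2 pi k t / T} dt (Riemann sum)."""
    return np.sum(x * np.exp(-1j * 2 * np.pi * k * t / T)) / N

# cos gives X[+-1] = 1/2; 0.5 sin gives X[2] = -j/4, X[-2] = +j/4.
recon = sum(X(k) * np.exp(1j * 2 * np.pi * k * t / T) for k in range(-3, 4))

# Parseval: (1/T) int_0^T |x(t)|^2 dt = sum_k |X[k]|^2.
energy_time = np.sum(np.abs(x) ** 2) / N
energy_freq = sum(abs(X(k)) ** 2 for k in range(-3, 4))
```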

Parseval's Equality

Similarly, Parseval's equality holds, implying that $\|x\|^2 = \sum\limits_{k \in \mathbb{Z}} |\langle x, e_k \rangle|^2$ which implies that $\|x\|^2 = \langle x, x \rangle = \frac{1}{T}\int_0^T |x(t)|^2 dt = \sum\limits_{k \in \mathbb{Z}} |X[k]|^2$. In other words, the norm squared of a vector (its energy) is conserved between the time domain and frequency domain.

Fundamental Isomorphism

Recall that if you have a Euclidean space $(E, \langle \cdot, \cdot \rangle)$, you can map it to $(\ell^2(\mathbb{Z}), \langle \cdot, \cdot \rangle_{\ell^2(\mathbb{Z})})$ by $x \rightarrow (x[k])_{k \in \mathbb{Z}}$ where $x[k] = \langle x, e_k \rangle$, with $(e_k)_{k \in \mathbb{Z}}$ an orthonormal basis of $E$. This mapping is automatically an isomorphism. Applied in the Fourier setting, $E = L^2([0, T])$ and the orthonormal basis is $e_k(t) = e^{j2\pi kt/T}$, which results in the mapping $\mathcal{F} : L^2([0, T]) \rightarrow \ell^2(\mathbb{Z})$ where $x$ is mapped to $(\langle x, e_k \rangle)_{k \in \mathbb{Z}} = (X[k])_{k \in \mathbb{Z}}$. This mapping $\mathcal{F}$ is an isomorphism: it preserves the inner product, $\frac{1}{T}\int_0^T x(t) y(t)^\ast dt = \sum\limits_{k \in \mathbb{Z}} X[k]Y[k]^\ast$ (generalized Parseval's equality), and the distance, $\|x - y\|^2 = \|X - Y\|_{\ell^2(\mathbb{Z})}^2$.

Continuous-Time Fourier Transform

Given an aperiodic $x(t)$, we can compute the continuous-time Fourier transform $\mathcal{F}$ by $X(\Omega) = \int_{-\infty}^\infty x(t) e^{-j\Omega t} dt$. This transform is linear and invertible, with inverse $\mathcal{F}^{-1}$ mapping $X(\Omega)$ to $x(t)$ by $x(t) = \frac{1}{2\pi} \int_{-\infty}^\infty X(\Omega)e^{j \Omega t} d\Omega$.


Fourier transforms are useful because they transform convolutions into products, where the convolution is $(x \ast y)(t) = \int_{-\infty}^\infty x(s) y(t-s) ds$.

Theorem Given two vectors $x(t), y(t)$ and their Fourier transforms $X(\Omega), Y(\Omega)$, their convolution satisfies $(x \ast y)(t) \rightarrow_{\mathcal{F}} X(\Omega)Y(\Omega)$.

Proof If we define $z(t) = (x \ast y)(t)$ with Fourier transform $Z(\Omega)$, we can compute \begin{align*} Z(\Omega) &= \int_{-\infty}^\infty z(t) e^{-j\Omega t} dt \\ &= \int_{-\infty}^\infty \left( \int_{-\infty}^\infty x(s) y(t - s) ds \right) e^{-j\Omega t} dt\\ &= \int_{-\infty}^\infty x(s) \left( \int_{-\infty}^\infty y(t - s) e^{-j \Omega t} dt \right) ds \\ &= \int_{-\infty}^\infty x(s) \left( \int_{-\infty}^\infty y(t^\prime) e^{-j \Omega (t^\prime + s)} dt^\prime \right) ds\\ &= \int_{-\infty}^\infty x(s) e^{-j \Omega s} \left( \int_{-\infty}^\infty y(t^\prime) e^{-j \Omega t^\prime} dt^\prime \right) ds\\ &= \left( \int_{-\infty}^\infty x(s) e^{-j \Omega s} ds \right)\left(\int_{-\infty}^\infty y(t^\prime) e^{-j\Omega t^\prime} dt^\prime\right) \\ &= X(\Omega)Y(\Omega) \end{align*} where we made the substitution $t^\prime = t - s$.
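A discrete-grid check of the convolution theorem (NumPy assumed; this is a numerical approximation with my own choice of decaying exponentials, not the continuous-time proof itself):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 20.0, dt)
x = np.exp(-t)                 # x(t) = e^{-t} for t >= 0
y = np.exp(-2.0 * t)           # y(t) = e^{-2t} for t >= 0

z = np.convolve(x, y) * dt     # (x * y)(t) sampled on the grid

def ft(sig, Omega):
    """Riemann-sum approximation of int sig(t) e^{-j Omega t} dt."""
    tt = np.arange(sig.size) * dt
    return np.sum(sig * np.exp(-1j * Omega * tt)) * dt

Omega = 3.0
lhs = ft(z, Omega)             # F{x * y}
rhs = ft(x, Omega) * ft(y, Omega)
```

On the grid the match is essentially exact; it also approximates the analytic value $\frac{1}{(1+j\Omega)(2+j\Omega)}$ up to discretization error.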

Properties of the Fourier Transform

Denote $\bar{x}(t) = x(-t)$ (the time-reversal operation) and write $\rightarrow_{\mathcal{F}}$ for the Fourier transform.

  1. $\bar{x} \rightarrow_{\mathcal{F}} X(-\Omega)$ (time reversal becomes frequency reversal)
  2. $x(t)^\ast \rightarrow_{\mathcal{F}} X(-\Omega)^\ast$ because $x(t)^\ast \rightarrow_{\mathcal{F}} \int_{-\infty}^\infty x(t)^\ast e^{-j \Omega t}dt = \left( \int_{-\infty}^\infty x(t) e^{-j(-\Omega)t} dt \right )^\ast = X(-\Omega)^\ast$
  3. If $x(t)$ is real (implying that $x(t) = x(t)^\ast$), then $X(\Omega) = X(-\Omega)^\ast$ or equivalently $X(-\Omega) = X(\Omega)^\ast$ (Hermitian symmetry).
  4. If $x(t)$ is real, $\bar{x}(t) \rightarrow_{\mathcal{F}} X(\Omega)^\ast$
  5. $x(-t)^\ast \rightarrow_{\mathcal{F}} X(\Omega)^\ast$
  6. $x(t - t_0) \rightarrow_{\mathcal{F}} e^{-j\Omega t_0}X(\Omega)$ because if $z(t) = x(t - t_0)$ then $Z(\Omega) = \int x(t - t_0)e^{-j\Omega t}dt$ and substituting $t^\prime = t - t_0$ we have $Z(\Omega) = \int_{-\infty}^\infty x(t^\prime) e^{-j\Omega (t^\prime + t_0)} dt^\prime = e^{-j\Omega t_0} \int_{-\infty}^\infty x(t^\prime) e^{-j\Omega t^\prime}dt^\prime = e^{-j\Omega t_0}X(\Omega)$
  7. $\int_{-\infty}^\infty X(\Omega) d\Omega = 2\pi x(0)$ because, by the inverse Fourier transform, $x(t) = \frac{1}{2\pi}\int_{-\infty}^\infty X(\Omega) e^{j \Omega t} d\Omega$. Replacing $t$ by $0$ gives this property.
  8. $\int_{-\infty}^\infty x(t) dt = X(0)$ because, by the Fourier transform, $X(\Omega) = \int_{-\infty}^\infty x(t) e^{-j\Omega t}dt$. Replacing $\Omega$ by $0$ gives this property.
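These properties are straightforward to sanity-check numerically. Below is a minimal sketch (not from the lecture) that approximates the transform with a Riemann sum on a truncated grid, using a Gaussian test signal $x(t) = e^{-t^2}$ (an arbitrary choice, whose transform is $\sqrt{\pi}e^{-\Omega^2/4}$), and verifies the shift property (6) and the integral property (8).

```python
import numpy as np

# Riemann-sum approximation of X(Omega) = int x(t) e^{-j Omega t} dt on a
# truncated grid; the Gaussian test signal and grid sizes are ad hoc choices.
t = np.linspace(-20, 20, 40001)
dt = t[1] - t[0]
x = np.exp(-t**2)

def ft(sig, Omega):
    return np.sum(sig * np.exp(-1j * Omega * t)) * dt

Omega0, t0 = 1.3, 0.7
X = ft(x, Omega0)
X_shift = ft(np.exp(-(t - t0)**2), Omega0)        # transform of x(t - t0)

assert np.isclose(X_shift, np.exp(-1j * Omega0 * t0) * X)   # property 6
assert np.isclose(ft(x, 0.0).real, np.sqrt(np.pi))          # property 8: X(0) = int x(t) dt
```

The same scaffold can be reused to spot-check the other properties.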

Euclidian View of Continuous-Time Fourier Transform

Consider $L^2(\mathbb{R}) = \{ (x(t))_{t \in \mathbb{R}} \;\mathrm{s.t.}\; \int_{-\infty}^\infty |x(t)|^2dt \;\mathrm{finite}\}$ with inner products $\langle x, y \rangle_t = \int_{-\infty}^\infty x(t) y(t)^\ast dt$ and $\langle X, Y \rangle_\Omega = \frac{1}{2\pi}\int_{-\infty}^\infty X(\Omega)Y(\Omega)^\ast d\Omega$ (you can construct an inner product from another inner product by multiplying by a positive number). The Fourier transform $\mathcal{F}$ is then a mapping from $L^2(\mathbb{R}) \rightarrow L^2(\mathbb{R})$ by $x(t) \rightarrow X(\Omega) = \int_{-\infty}^\infty x(t) e^{-j\Omega t} dt$.

Theorem $\mathcal{F}$ is an isomorphism from $(L^2(\mathbb{R}), \langle \cdot, \cdot \rangle_t )$ to $(L^2(\mathbb{R}), \langle \cdot, \cdot \rangle_\Omega)$.

Proof We know that $\mathcal{F}$ is linear and invertible. We need to prove that $\mathcal{F}$ preserves the inner product: $\langle x, y \rangle_t = \int_{-\infty}^\infty x(s)y(s)^\ast ds = \int_{-\infty}^\infty x(s) \bar{y}(0-s)^\ast ds = (x \ast \bar{y}^\ast )(0)$. We know that $x(t) \rightarrow_{\mathcal{F}} X(\Omega)$ and $\bar{y}^\ast \rightarrow_{\mathcal{F}} Y(\Omega)^\ast$ which implies that $(x \ast \bar{y}^\ast)(t) \rightarrow_{\mathcal{F}} X(\Omega)Y(\Omega)^\ast$. We know that $\int_{-\infty}^\infty X(\Omega)Y(\Omega)^\ast d\Omega = 2\pi (x \ast \bar{y}^\ast)(0)$. Dividing by $2\pi$ gives $\langle X, Y \rangle_{\Omega} = (x \ast \bar{y}^\ast)(0) = \langle x, y \rangle_t$, so $\mathcal{F}$ preserves inner products.

So we now have $\int_{-\infty}^\infty x(t)y(t)^\ast dt = \frac{1}{2\pi}\int_{-\infty}^\infty X(\Omega)Y(\Omega)^\ast d\Omega$, the generalized Parseval's equality of the continuous-time Fourier transform. When $x(t) = y(t)$, $\int_{-\infty}^\infty |x(t)|^2 dt = \frac{1}{2\pi}\int_{-\infty}^\infty |X(\Omega)|^2 d\Omega$, Parseval's equality for the continuous-time Fourier transform. As a result, $\int_{-\infty}^\infty |x(t) - y(t)|^2 dt = \frac{1}{2\pi}\int_{-\infty}^\infty |X(\Omega) - Y(\Omega)|^2 d\Omega$, so mean-squared error is preserved as well.
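Parseval's equality can likewise be checked numerically; this sketch (an illustration with an arbitrary Gaussian test signal and ad hoc grids) compares the energy computed in the time domain with the energy computed from a Riemann-sum approximation of $X(\Omega)$.

```python
import numpy as np

# Energy of x(t) = exp(-t^2) in time vs. frequency; X(Omega) is approximated
# by a Riemann sum, and the Omega grid covers the region where X is non-negligible.
t = np.linspace(-15, 15, 3001)
dt = t[1] - t[0]
x = np.exp(-t**2)

Omega = np.linspace(-12, 12, 1201)
dOmega = Omega[1] - Omega[0]
X = np.exp(-1j * np.outer(Omega, t)) @ x * dt   # X(Omega) at every grid frequency

energy_t = np.sum(np.abs(x)**2) * dt
energy_O = np.sum(np.abs(X)**2) * dOmega / (2 * np.pi)
assert np.isclose(energy_t, energy_O, atol=1e-4)
```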

Band-Limited Signals

Within $L^2(\mathbb{R})$, define a subspace $B(\Omega_B)$ which is the space of vectors (functions) in $L^2(\mathbb{R})$ such that the Fourier transform $X(\Omega)$ is zero when $|\Omega| \ge \Omega_B/2$, or formally, $B(\Omega_B) = \{(x(t))_{t \in \mathbb{R}} \in L^2(\mathbb{R}) : X(\Omega) = 0 \;\mathrm{when}\; |\Omega| \ge \Omega_B/2\}$. We can look at $\mathcal{F}$ as a transformation from $B(\Omega_B)$ to $L^2([-\Omega_B/2, \Omega_B/2])$, mapping $(x(t))_{t \in \mathbb{R}}$ to $(X(\Omega))_{\Omega \in [-\Omega_B/2, \Omega_B/2]}$. The inverse Fourier transform $\mathcal{F}^{-1}$ maps $(X(\Omega))_{\Omega \in [-\Omega_B/2, \Omega_B/2]}$ to $(x(t))_{t \in \mathbb{R}}$ by $x(t) = \frac{1}{2\pi}\int_{-\Omega_B/2}^{\Omega_B/2} X(\Omega) e^{j \Omega t} d\Omega$. We know that $\mathcal{F}$ is an isomorphism in general; here we can look at it as an isomorphism between $B(\Omega_B)$ and $L^2([-\Omega_B/2, \Omega_B/2])$. To find an orthonormal basis of $L^2([-\Omega_B/2, \Omega_B/2])$, recall that the space $L^2([0, T])$ with inner product $\langle x, y \rangle = \frac{1}{T}\int_0^T x(t)y(t)^\ast dt$ has orthonormal basis $(e_k(t))_{k \in \mathbb{Z}} = (e^{j2\pi k t/T})_{k \in \mathbb{Z}}$. So, it follows that $L^2([0, \Omega_B])$ with inner product $\langle X, Y \rangle = \frac{1}{\Omega_B} \int_0^{\Omega_B} X(\Omega)Y(\Omega)^\ast d\Omega$ has orthonormal basis $(E_k(\Omega))_{k \in \mathbb{Z}} = (e^{j2\pi k\Omega/\Omega_B})_{k \in \mathbb{Z}}$. But, we are interested instead in $L^2([-\Omega_B/2, \Omega_B/2])$ with inner product $\langle X, Y \rangle = \frac{1}{\Omega_B} \int_{-\Omega_B/2}^{\Omega_B/2} X(\Omega)Y(\Omega)^\ast d\Omega$. $(E_k(\Omega))$ is still an orthonormal basis of $L^2([-\Omega_B/2, \Omega_B/2])$ because $E_k(\Omega)$ is $\Omega_B$-periodic for all $k \in \mathbb{Z}$.
Finally, $L^2([-\Omega_B/2, \Omega_B/2])$ with inner product $\langle X, Y \rangle = \frac{1}{2\pi} \int_{-\Omega_B/2}^{\Omega_B/2} X(\Omega)Y(\Omega)^\ast d\Omega$ has orthonormal basis $(E_k(\Omega))_{k \in \mathbb{Z}} = \left( \sqrt{\frac{2\pi}{\Omega_B}} e^{j2\pi k \Omega/\Omega_B}\right)_{k \in \mathbb{Z}}$.

Shannon's Sampling Theorem

Consider a linear mapping $F$ from the space of bandlimited signals $B(\Omega_b) \rightarrow L^2([-\Omega_b/2, \Omega_b/2])$ which transforms a bandlimited function $(x(t))_{t \in \mathbb{R}}$ to $(X(\Omega))_{\Omega \in [-\Omega_b/2, \Omega_b/2]}$ via the Fourier transform. The inverse transform maps $(Y(\Omega))_{\Omega \in [-\Omega_b/2, \Omega_b/2]} \in L^2([-\Omega_b/2, \Omega_b/2])$ to $(x(t))_{t \in \mathbb{R}} \in B(\Omega_b)$ via the inverse Fourier transform, where $X(\Omega) = Y(\Omega)$ when $|\Omega| \le \Omega_b/2$ and $X(\Omega) = 0$ otherwise. We saw that $\left(\sqrt{\frac{2\pi}{\Omega_b}} e^{j2\pi k \Omega/\Omega_b}\right)_{k \in \mathbb{Z}}$ is an orthonormal basis of $L^2([-\Omega_b/2, \Omega_b/2])$. If we define $T = 2\pi/\Omega_b$ and $E_k(\Omega) = \sqrt{T} e^{-j2\pi k\Omega/\Omega_b}$, then $(E_k)_{k \in \mathbb{Z}}$ is still an orthonormal basis of $L^2([-\Omega_b/2, \Omega_b/2])$. Since $F$ is an isomorphism, $(\phi_k)_{k \in \mathbb{Z}}$ where $\phi_k = F^{-1}(E_k)$ is an orthonormal basis of the space of bandlimited functions $B(\Omega_b)$. We can also compute $\phi_k = \mathcal{F}^{-1}(\Phi_k(\Omega))$ where $\Phi_k(\Omega) = E_k(\Omega)$ for $|\Omega| \le \Omega_b/2$ and $0$ otherwise. If we define $R(\Omega) = 1$ for $|\Omega| \le \Omega_b/2$ and $0$ otherwise, then $\Phi_k(\Omega) = R(\Omega) \sqrt{T} e^{-j2\pi k \Omega/\Omega_b}$ and $\Phi_0(\Omega) = R(\Omega)\sqrt{T}$, so $\Phi_k(\Omega) = \Phi_0(\Omega) e^{-j k T\Omega}$ (using $2\pi k/\Omega_b = kT$). Since $x(t - t_0) \rightarrow_{\mathcal{F}} e^{-jt_0 \Omega}X(\Omega)$, taking the inverse Fourier transform gives $\phi_k(t) = \phi_0 (t - kT)$. From Homework 3, $\phi_0(t) = \mathcal{F}^{-1} (\sqrt{T} R(\Omega)) = \frac{1}{\sqrt{T}} \mathrm{sinc} \left( \frac{t}{T} \right)$. By the kindergarten formula, for any $x \in B(\Omega_b)$, we have $x = \sum\limits_{k \in \mathbb{Z}} \langle x, \phi_k \rangle \phi_k$.
In Homework 3, we found that $\langle x, \phi_k \rangle = (x \ast \phi_0) (kT)$ and, for $x \in B(\Omega_b)$, $(x \ast \phi_0)(t) = \sqrt{T} x(t)$, so that $x(t) = \sum\limits_{k \in \mathbb{Z}} \sqrt{T} x(kT)\phi_0(t - kT)$. Now, $\sqrt{T}\phi_0(t) = \mathrm{sinc}\left( \frac{t}{T} \right)$, which we call $h(t)$, so it follows that $x(t) = \sum\limits_{k \in \mathbb{Z}} x(kT)h(t - kT)$. This is Shannon's Sampling Theorem, where $T$ is the Nyquist sampling period and $\Omega_b$ is the Nyquist frequency.
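The reconstruction formula can be demonstrated numerically. In this sketch (an illustration, not from the lecture) the test signal $x(t) = \mathrm{sinc}(t/2T)$, bandlimited to half the band, is sampled with period $T$ and rebuilt from a truncated version of the infinite sum, so the reconstruction is only approximate.

```python
import numpy as np

# Shannon reconstruction x(t) = sum_k x(kT) h(t - kT) with h(t) = sinc(t/T),
# truncated to |k| <= 2000; np.sinc(u) = sin(pi u)/(pi u) matches the notes.
T = 0.1

def x(t):
    return np.sinc(t / (2 * T))      # test signal, bandlimited to half the band

k = np.arange(-2000, 2001)
samples = x(k * T)

def reconstruct(t):
    return np.sum(samples * np.sinc((t - k * T) / T))

for t0 in [0.013, 0.37, 1.234]:
    assert abs(reconstruct(t0) - x(t0)) < 1e-3
```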

Decimation-Interpolation Systems

Shift-Invariant Spaces

Consider the Euclidian space $E = L^2(\mathbb{R})$ with inner product $\langle x, y \rangle = \int_{-\infty}^\infty x(t) y(t)^\ast dt$. Let $V$ be a subspace of $E$. We say that $V$ is a shift-invariant space of period $T$ when there exists a function $\phi(t) \in L^2(\mathbb{R})$ such that $V = \mathrm{span}(\phi_k)_{k \in \mathbb{Z}}$ where $\phi_k(t) = \phi(t - kT)$. In other words, the space is spanned by shifts of a single function.

Example: Piecewise Constant Functions

Consider $V$, the space of piecewise-constant functions on intervals $[kT, (k + 1)T)$ for $k \in \mathbb{Z}$. These functions look like “staircases” with steps at integer multiples of $T$. This is a shift-invariant space because $V$ is the span of shifted versions of the function $\phi(t) = 1$ for $0 \le t < T$ and $0$ otherwise. We can express any $x(t) \in V$ as $\ldots + x(0)\phi(t) + x(T)\phi(t - T) + x(2T) \phi(t - 2T) + \ldots$. The $\phi(t)$ are nice here because they have very limited support, but they produce large discontinuities.

Example: Piecewise Linear Functions

Consider $V$, the space of continuous piecewise-linear functions on intervals $[kT, (k + 1)T]$ for $k \in \mathbb{Z}$. We can express each element $x$ in $V$ as a sum of shifted versions of a “triangle” function, which is $0$ at $-T$ and $T$, $1$ at $0$, and linear in between. We can then still express any $x(t) \in V$ as $\ldots + x(0)\phi(t) + x(T)\phi(t - T) + x(2T) \phi(t - 2T) + \ldots$. The $\phi(t)$ here produce smoother functions than the piecewise-constant case, but their supports overlap.

Example: Bandlimited Functions

Consider $V = B(\Omega_b)$. From Shannon's sampling theorem, we know that for all $x \in B(\Omega_b)$ we can express $x(t) = \sum\limits_{k \in \mathbb{Z}} x(kT) h(t - kT)$ where $h(t) = \mathrm{sinc}\left(\frac{t}{T}\right)$ and $T = 2\pi/\Omega_b$. These functions can produce any “smooth” function (in $B(\Omega_b)$) but have infinite support.

Orthogonal Decimation Interpolation

Taking $V = \mathrm{span}\{\phi_k\}_{k \in \mathbb{Z}}$ where $\phi_k(t) = \phi(t - kT)$ and $\{\phi_k\}_{k \in \mathbb{Z}}$ is orthonormal (so $\{\phi_k\}_{k \in \mathbb{Z}}$ is an orthonormal basis of $V$). The fundamental isomorphism says that whenever you have a vector $x$ in some space $V$, you can always map (via $F$) the vector to $(x[n])_{n \in \mathbb{Z}} = (\langle x, \phi_n\rangle)_{n \in \mathbb{Z}}$ in the space $\ell^2(\mathbb{Z})$. We know that $F$ is an isomorphism if and only if $\{\phi_k\}_{k \in \mathbb{Z}}$ is an orthonormal basis of $V$. This produces a mapping from continuous-time signals to discrete-time signals. If we can invert $F$, we can convert back from discrete-time to continuous-time.


We can always define a mapping $G$ from $L^2(\mathbb{R})$ to $\ell^2(\mathbb{Z})$ by the same formula $x \rightarrow (\langle x, \phi_n \rangle)_{n \in \mathbb{Z}}$. The inverse mapping transforms any sequence $(y[n])_{n \in \mathbb{Z}}$ to the function $y = \sum\limits_{n \in \mathbb{Z}} y[n]\phi_n$. This follows from the kindergarten formula; $F(x)$ transforms $x(t)$ via $y[n] = \langle x, \phi_n\rangle$ so that $\sum\limits_{n \in \mathbb{Z}} y[n]\phi_n = \sum\limits_{n \in \mathbb{Z}} \langle x, \phi_n \rangle \phi_n$. Note that if you map some $x \in L^2(\mathbb{R})$ which is not in $V$ and then invert it, you will receive something in $V$: the signal $\sum\limits_{n \in \mathbb{Z}} \langle x, \phi_n \rangle \phi_n = P_V(x)$, the projection of $x$ onto $V$. When implementing the transform $G$, note that (assuming $\phi$ is real) \begin{align*} \langle x,\phi_n \rangle &= \int_{-\infty}^\infty x(t) \phi_n(t)^\ast d t\\ &= \int_{-\infty}^\infty x(t) \phi_n(t) d t\\ &= \int_{-\infty}^\infty x(t) \phi(t - nT) dt\\ &= \int_{-\infty}^\infty x(t) \bar{\phi}(nT - t) dt\\ &= (x \ast \bar{\phi})(nT) \end{align*} A filter is a system $x(t) \rightarrow (x \ast g)(t)$. A sampler is a system which maps $x(t)$ to $x[n] = x(nT)$. So, the mapping $G$ is just a convolution with the filter $\bar{\phi}$ followed by a sampler. This type of system is called a decimator; applying a decimator is called analysis.


Now, we would like to implement $F^{-1}$, which maps a sequence $(y[n])_{n \in \mathbb{Z}}$ to $y(t) = \sum\limits_{n \in \mathbb{Z}} y[n] \phi_n = \sum\limits_{n \in \mathbb{Z}} y[n] \phi(t - nT)$. This operation is described using the delta function $\delta(t)$, which is $\infty$ at $0$ and $0$ elsewhere, with $\int_{-\infty}^\infty \delta(t) dt = 1$. If you integrate $\int_{-\infty}^\infty f(t) \delta(t) d t$ you get $f(0)$; if you integrate with a shifted delta $\int_{-\infty}^\infty f(t) \delta(t - t_0) d t$ you get $f(t_0)$. It follows that $f(t) \ast \delta(t) = f(t)$ and $f(t) \ast \delta( t - t_0) = f(t - t_0)$. The Fourier transform of $\delta(t)$ is $1$, and the Fourier transform of $\delta(t - t_0)$ is $e^{-j t_0 \Omega}$. “Impulse train conversion” maps $x[n]$ to $x_s(t)$ via $\sum\limits_{n \in \mathbb{Z}} x[n] \delta(t - nT)$. This places each sample value at $t = nT$ as a weighted impulse, re-mapping the sequence to continuous time. Now, \begin{align*} y(t) &= \sum_{n \in \mathbb{Z}} y[n] \phi(t - nT)\\ &= \sum_{n \in \mathbb{Z}} y[n] \delta(t - nT) \ast \phi(t)\\ &= y_s(t) \ast \phi(t) \end{align*} This system performs impulse train conversion, then filters with $\phi(t)$. This system is called an interpolator.

To summarize, mapping $x$ via $G$ and $F^{-1}$ to $y$ is equivalent to filtering by $\bar{\phi}(t)$, downsampling, performing impulse train conversion, and filtering by $\phi(t)$. Or, decimating and interpolating maps $x$ to the orthogonal projection $P_V(x(t))$.
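As a concrete illustration (my example, not from the lecture), here is the full orthogonal decimation-interpolation chain on a dense grid, using the normalized box $\phi(t) = 1/\sqrt{T}$ on $[0, T)$, whose shifts are orthonormal; the output is the projection of $x$ onto the space of piecewise-constant functions.

```python
import numpy as np

# Orthogonal decimation-interpolation with phi(t) = 1/sqrt(T) on [0, T).
# The input x(t) = sin(t) is not in V; the output is its projection P_V(x),
# which for the box generator is the local mean of x on each interval.
T = 1.0
t = np.linspace(0, 8, 8001)
dt = t[1] - t[0]
x = np.sin(t)

# analysis (decimator): x[n] = <x, phi_n>, approximated by a Riemann sum
coeffs = [np.sum(x[(n * T <= t) & (t < (n + 1) * T)]) * dt / np.sqrt(T) for n in range(8)]

# synthesis (interpolator): P_V(x) = sum_n x[n] phi_n(t), piecewise constant
proj = np.zeros_like(t)
for n, cn in enumerate(coeffs):
    proj[(n * T <= t) & (t < (n + 1) * T)] = cn / np.sqrt(T)

# on [0, 1) the projection equals the mean of sin there, 1 - cos(1)
assert np.isclose(proj[500], 1 - np.cos(1), atol=1e-3)
```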

Non-orthogonal Decimation Interpolation

Assume that $V = \mathrm{span} \{\phi_k\}_{k \in \mathbb{Z}}$ where $\phi_k(t) = \phi(t - kT)$ but $\{\phi_k\}_{k \in \mathbb{Z}}$ are not orthonormal. Assume that we can find a function $\tilde{\phi}(t)$ such that $(\tilde{\phi}_k)_{k \in \mathbb{Z}}$ is dual to $(\phi_k)_{k \in \mathbb{Z}}$ where $\tilde{\phi}_k(t) = \tilde{\phi}(t - kT)$. Now, consider the decimation-interpolation system which instead filters $x(t)$ by $\bar{\tilde{\phi}}(t)$ in the analysis step. Then we will have \begin{align*} y(t) &= \sum_{n \in \mathbb{Z}} x[n] \phi(t - nT)\\ &= \sum_{n \in \mathbb{Z}} x[n] \phi_n\\ &= \sum_{n \in \mathbb{Z}} \langle x, \tilde{\phi}_n \rangle \phi_n \end{align*} This is the generalized projection of $x$ onto $V$.

Example: Piecewise Linear Functions

Consider $V$, the space of continuous piecewise-linear functions on intervals $[kT, (k + 1)T]$ for $k \in \mathbb{Z}$. The “triangle” functions we used above are not orthonormal, so we must find the dual functions $\tilde{\phi}_n$.

Fourier Analysis

In the decimation-interpolation system, we can instead look at the effect of the system in the Fourier domain. In this view, the input $X(\Omega)$ is transformed to $Y(\Omega)$. The input is first transformed to $X_{\ell}(\Omega) = \Phi(\Omega)^\ast X(\Omega)$, since the time reversal $\bar{\phi}(t)$ becomes conjugation in the frequency domain (for real $\phi$). The final interpolation step sets $Y(\Omega) = \Phi(\Omega) X_S(\Omega)$. However, it's not clear how $X_\ell(\Omega)$ is transformed to $X_S(\Omega)$. In the time domain, $x_\ell(t)$ is downsampled to yield $x[k]$ by $x[k] = x_\ell(kT)$, then upsampled to yield \begin{align*} x_S(t) &= \sum\limits_{k \in \mathbb{Z}} x[k] \delta(t - kT)\\ &= \sum\limits_{k \in \mathbb{Z}} x_\ell(kT) \delta(t - kT) \end{align*} Recall that a property of $\delta(t)$ is that $\int_{-\infty}^\infty f(t) \delta(t - t_0)dt = f(t_0)$. We also have $\int_{-\infty}^\infty f(t_0) \delta(t - t_0) dt = f(t_0)$. So, we have $f(t)\delta(t - t_0) = f(t_0)\delta(t - t_0)$, which implies that $x_\ell(t) \delta(t - kT) = x_\ell(kT)\delta( t - kT)$. As a result, $x_S(t) = \sum\limits_{k \in \mathbb{Z}} x_\ell(t) \delta(t - kT)$. The factor $x_\ell(t)$ does not depend on $k$, so we have $x_S(t) = \left( \sum\limits_{k \in \mathbb{Z}} \delta(t - kT) \right)x_\ell(t)$. If we define $p_T(t) = \sum\limits_{k \in \mathbb{Z}} \delta(t - kT)$, then $x_S(t) = p_T(t)x_\ell(t)$. The function $p_T(t)$ is a $T$-periodic impulse train. So, the conversion $x_\ell(t) \rightarrow x[k] \rightarrow x_S(t)$ is equivalent to multiplying $x_\ell(t)$ by $p_T(t)$. This process is called impulse train modulation, which is equivalent to downsampling then upsampling. The Fourier transform maps a multiplication $x(t)y(t)$ to a convolution $\frac{1}{2\pi} X(\Omega) \ast Y(\Omega)$. So, $x_S(t) \rightarrow_\mathcal{F} \frac{1}{2\pi} P_T(\Omega) \ast X_\ell(\Omega)$ where $P_T(\Omega)$ is the Fourier transform of $p_T(t)$.
To find $P_T(\Omega)$, we first find another expression for $p_T(t)$ via the Fourier series, because it is $T$-periodic. So, we can expand it as $p_T(t) = \sum\limits_{k \in \mathbb{Z}} P_T[k] e^{j2\pi kt/T}$ where $P_T[k]$ are the Fourier coefficients. We know that $P_T[k] = \frac{1}{T} \int_I p_T(t) e^{-j2\pi k t/T} dt$ where $I$ is any interval of length $T$ (because if you integrate over any interval of size $T$ you get the same result). If you choose $I = [-T/2, T/2]$ then we have $P_T[k] = \frac{1}{T} \int_{-T/2}^{T/2} p_T(t) e^{-j2\pi k t/T} dt$. On this interval, $p_T(t)$ consists of the single impulse $\delta(t)$ at $t = 0$. So, we have $P_T[k] = \frac{1}{T} \int_{-T/2}^{T/2} \delta(t) e^{-j2\pi k t/T} dt$. Now, we can apply the formula $\int_{-\infty}^\infty f(t) \delta(t - t_0)dt = f(t_0)$ to get $P_T[k] = \frac{1}{T}$ - regardless of $k$! So, $p_T(t) = \frac{1}{T} \sum\limits_{k \in \mathbb{Z}} e^{j 2\pi k t/T}$ - an infinite sum of exponentials. We can look at $e^{j 2\pi kt/T}$ as $e^{jk \theta}$ where $\theta = 2\pi t/T$ - we are sampling the unit circle at angle intervals of $\theta$. Intuitively, the points cancel because they are uniformly distributed around the circle, unless $\theta$ is a multiple of $2\pi$, in which case every term equals $1$ and the sum blows up - an impulse. Now, it is easy to find the Fourier transform of this representation of $p_T(t)$, given that the Fourier transform of $e^{j\Omega_0t}$ is $2\pi \delta(\Omega - \Omega_0)$. So, we have $P_T(\Omega) = \frac{1}{T}\sum\limits_{k \in \mathbb{Z}} 2\pi \delta(\Omega - 2\pi k/T)$. If we define $\Omega_s = 2\pi/T$ then $$p_T(t) = \sum\limits_{k \in \mathbb{Z}} \delta(t - kT) \rightarrow_\mathcal{F} P_T(\Omega) = \Omega_s \sum_{k \in \mathbb{Z}} \delta( \Omega - k \Omega_s)$$ So, a $T$-periodic impulse train with amplitude $1$ transforms to an $\Omega_s$-periodic impulse train with amplitude $\Omega_s$.
Now, we can compute $$X_s(\Omega) = \frac{1}{2\pi} P_T(\Omega)\ast X_{\ell}(\Omega) = \frac{\Omega_s}{2\pi} \sum\limits_{k \in \mathbb{Z}} \delta(\Omega - k\Omega_s) \ast X_\ell(\Omega)$$ Recalling that $f(t) \ast \delta(t - t_0) = f(t - t_0)$, we have $X_s(\Omega) = \frac{1}{T} \sum\limits_{k \in \mathbb{Z}} X_\ell(\Omega - k\Omega_s)$: the spectrum is replicated at every multiple of $\Omega_s$. Where these replicas overlap, this results in what we call aliasing.
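Aliasing can be seen directly from samples: two frequencies that differ by a multiple of $\Omega_s$ fall on the same replica and are indistinguishable after sampling. A minimal sketch (the frequencies are arbitrary choices):

```python
import numpy as np

# With Omega_s = 2*pi/T, a sinusoid at Omega + Omega_s produces exactly the same
# samples x(kT) as one at Omega, because cos((Omega + Omega_s) k T) = cos(Omega k T + 2 pi k).
T = 0.25
Omega_s = 2 * np.pi / T
k = np.arange(50)
Omega = 3.0                               # a "low" frequency in rad/s

lo = np.cos(Omega * k * T)
hi = np.cos((Omega + Omega_s) * k * T)    # shifted up by one replica spacing
assert np.allclose(hi, lo)
```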

Fourier Characterization of Orthonormality

Given a function $\phi(t) \in L^2(\mathbb{R})$ and $T > 0$, we define $\phi_k(t) = \phi(t - kT)$. We would like to be able to tell if $\{\phi_k\}_{k \in \mathbb{Z}}$ is orthonormal. In the time domain, $\{\phi_k\}_{k \in \mathbb{Z}}$ is orthonormal if and only if $\langle \phi_k, \phi_l \rangle = \delta[k - l]$. Now, in $L^2(\mathbb{R})$, $\langle \phi_k, \phi_l \rangle = \int_{-\infty}^\infty \phi(t - kT) \phi(t - lT)^\ast dt = \int_{-\infty}^\infty \phi(t^\prime)\phi(t^\prime - (l - k)T)^\ast dt^\prime$ where $t^\prime = t - kT$ and so $t = t^\prime + kT$. So, it follows that $\langle \phi_k, \phi_l \rangle = \langle \phi, \phi_{l - k} \rangle$. So, that $\{\phi_k\}_{k \in \mathbb{Z}}$ is orthonormal means that $\langle \phi, \phi_{l - k} \rangle = \delta[l - k]$ for all $k, l \in \mathbb{Z}$, which is equivalent to saying that $\langle \phi, \phi_k \rangle = \delta[k]$ for all $k \in \mathbb{Z}$. If we define the autocorrelation as $r_\phi[k] = \langle \phi, \phi_k \rangle$, then we can say $\{\phi_k\}_{k \in \mathbb{Z}}$ is orthonormal if and only if $r_\phi[k] = \delta[k]$ for all $k \in \mathbb{Z}$. Now, in a decimation-interpolation system, we have by definition $x[k] = \langle x, \phi_k \rangle$. So, if we input the function $\phi(t)$, then $x[k]$ will always be $r_\phi[k] = \langle \phi, \phi_k \rangle$. So, one way to test whether $\{\phi_k\}$ is orthonormal is to input $\phi$ into the system and see whether we get $x[k] = \delta[k]$. We know that $\phi_s(t) = \sum\limits_{k \in \mathbb{Z}} r_\phi[k] \delta(t - kT)$, so $r_\phi[k] = \delta[k]$ if and only if $\phi_s(t) = \delta(t)$. So, $\{\phi_k\}_{k \in \mathbb{Z}}$ is orthonormal if and only if $\phi_s(t) = \delta(t)$, or equivalently in the Fourier domain $\Phi_s(\Omega) = 1$.
Since $\Phi_s(\Omega) = \frac{1}{T}\sum\limits_{k \in \mathbb{Z}} \Phi_\ell(\Omega - k\Omega_s)$ and $\phi_\ell(t) = (\bar{\phi}\ast \phi)(t)$, for real $\phi$ we have $\Phi_\ell(\Omega) = \Phi(\Omega)^\ast \Phi(\Omega) = |\Phi(\Omega)|^2$, so that $\Phi_s(\Omega) = \frac{1}{T}\sum_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2$. This proves that $\{\phi_k\}_{k \in \mathbb{Z}}$ is orthonormal if and only if $\sum\limits_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2 = T$ for all $\Omega \in \mathbb{R}$.


Take $\phi(t) = \frac{1}{\sqrt{T}} \mathrm{sinc}(\frac{t}{T})$. The Fourier transform $\Phi(\Omega)$ is $\sqrt{T}$ from $-\Omega_b/2$ to $\Omega_b/2$, where $\Omega_b = 2\pi/T = \Omega_s$, and $0$ elsewhere. Then when computing $\sum_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2$, for each $k$ we have this rectangular function (with amplitude $T = \sqrt{T}^2$) shifted over by $k\Omega_s$. Summing all of these up, we get a function equal to $T$ everywhere, so the shifts of this $\phi$ are orthonormal. In general, if $\sum_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2$ is constant for all $\Omega \in \mathbb{R}$, then $\{\phi_k\}_{k \in \mathbb{Z}}$ is orthogonal, because we can scale $\phi$ to make the sum constantly equal to $T$; this same scale converts $\{\phi_k\}_{k \in \mathbb{Z}}$ into the corresponding orthonormal family.
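The equivalent time-domain test $r_\phi[k] = \delta[k]$ can be run numerically for this $\phi$; the integration window and grid below are ad hoc choices, so the result is only approximate.

```python
import numpy as np

# Autocorrelation r_phi[k] = <phi, phi_k> for phi(t) = sinc(t/T)/sqrt(T),
# approximated by a Riemann sum on a large truncated window.
T = 1.0
t = np.linspace(-400, 400, 800001)
dt = t[1] - t[0]
phi = np.sinc(t / T) / np.sqrt(T)

def r(k):
    return np.sum(phi * np.sinc((t - k * T) / T) / np.sqrt(T)) * dt

assert abs(r(0) - 1) < 1e-2          # r_phi[0] = 1
assert abs(r(1)) < 1e-2 and abs(r(3)) < 1e-2   # r_phi[k] = 0 for k != 0
```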

Fourier Criterion for Dual

Let $\phi(t), \tilde{\phi}(t) \in L^2(\mathbb{R})$. We would like to be able to determine whether $\{\tilde{\phi}_k\}_{k \in \mathbb{Z}}$ is dual to $\{\phi_k\}_{k \in \mathbb{Z}}$. We know that $\{\tilde{\phi}_k\}_{k \in \mathbb{Z}}$ is dual to $\{\phi_k\}_{k \in \mathbb{Z}}$ if they have the same span and $\langle \phi_k, \tilde{\phi}_l \rangle = \delta[l - k]$. As above, we can show that $\langle \phi_k, \tilde{\phi}_l \rangle = \langle \phi, \tilde{\phi}_{l - k} \rangle$, so the biorthogonality condition is equivalent to $\langle \phi, \tilde{\phi}_k \rangle = \delta[k]$ for all $k \in \mathbb{Z}$. Repeating the argument of the previous section with the cross-correlation in place of the autocorrelation, this holds if and only if $\frac{1}{T} \sum\limits_{k \in \mathbb{Z}} (\Phi \tilde{\Phi}^\ast)(\Omega - k\Omega_s) = 1$. So, $\langle \phi_k, \tilde{\phi}_l \rangle = \delta[l - k]$ if and only if $\sum\limits_{k \in \mathbb{Z}} (\Phi \tilde{\Phi}^\ast)(\Omega - k\Omega_s) = T$.

Discrete-Time Signals

In the discrete-time case, convolution is defined by $x[n] \ast y[n] = \sum\limits_{k \in \mathbb{Z}} x[k]y[n - k]$. For the discrete-time delta function we have $x[n] \ast \delta[n] = x[n]$ and $x[n] \ast \delta[n - n_0] = x[n - n_0]$. The discrete-time Fourier transform takes a sequence $x[n]$ and maps it to a continuous frequency function $X(\omega) = \sum\limits_{n \in \mathbb{Z}} x[n]e^{-j\omega n}$, which is a $2\pi$-periodic function. The inverse discrete-time Fourier transform maps $X(\omega)$ back to $x[n]$ by $x[n] = \frac{1}{2\pi} \int_I X(\omega) e^{j \omega n} d\omega$ where $I$ is any interval of length $2\pi$. The sequence $x[n]$ consists of the Fourier coefficients of the $2\pi$-periodic function $X(\omega)$ (up to the sign of the exponent). Properties of the discrete-time Fourier transform include

  1. $x[n] \ast y[n] \rightarrow_{\mathcal{F}} X(\omega)Y(\omega)$
  2. $\delta[n] \rightarrow_{\mathcal{F}} 1$
  3. $\delta[n - n_0] \rightarrow_{\mathcal{F}} e^{-j \omega n_0}$ ($n_0$ is an integer so this is guaranteed to be $2\pi$-periodic)
  4. $x[n - n_0] \rightarrow_{\mathcal{F}} e^{-j\omega n_0}X(\omega)$
  5. $x[-n] \rightarrow_{\mathcal{F}} X(-\omega)$
  6. $x[n]^\ast \rightarrow_{\mathcal{F}} X(-\omega)^\ast$
  7. $x[n] \in \mathbb{R}$ if and only if $X(-\omega) = X(\omega)^\ast$ (Hermitian symmetry)
  8. $e^{j \omega_0 n} x[n] \rightarrow_{\mathcal{F}} X(\omega - \omega_0)$
  9. $\cos(\omega_0 n) x[n] \rightarrow_{\mathcal{F}} \frac{1}{2} X(\omega - \omega_0) + \frac{1}{2}X(\omega + \omega_0)$
  10. $\int_I X(\omega) d\omega = 2\pi x[0]$
  11. $\sum\limits_{n \in \mathbb{Z}} x[n] = X(0)$
  12. The Fourier transform is an isomorphism from $\ell^2(\mathbb{Z})$ to $L^2([-\pi, \pi])$ with $\langle X, Y \rangle = \frac{1}{2\pi} \int_{-\pi}^\pi X(\omega)Y(\omega)^\ast d\omega$
  13. $\sum\limits_{n \in \mathbb{Z}} x[n] y[n]^\ast = \frac{1}{2\pi} \int_{-\pi}^\pi X(\omega) Y(\omega)^\ast d\omega$ (generalized Parseval's equality)
  14. $\sum\limits_{n \in \mathbb{Z}} |x[n]|^2 = \frac{1}{2\pi} \int_{-\pi}^\pi |X(\omega)|^2 d\omega$ (Parseval's equality)
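Properties 11 and 14 are easy to verify for a finite-support sequence by evaluating the DTFT on a dense uniform $\omega$ grid (the test sequence and grid here are arbitrary choices, not from the notes):

```python
import numpy as np

# DTFT of a length-16 random sequence on a uniform omega grid over one period;
# checks property 11 (sum of x equals X(0)) and property 14 (Parseval).
rng = np.random.default_rng(0)
x = rng.standard_normal(16)
n = np.arange(16)

N = 20000
omega = np.linspace(-np.pi, np.pi, N, endpoint=False)   # omega[N//2] = 0
X = np.exp(-1j * np.outer(omega, n)) @ x                # X(omega) = sum_n x[n] e^{-j omega n}

assert np.isclose(X[N // 2].real, x.sum())                       # property 11
lhs = np.sum(np.abs(x)**2)
rhs = np.sum(np.abs(X)**2) * (2 * np.pi / N) / (2 * np.pi)       # Riemann sum over one period
assert np.isclose(lhs, rhs)                                      # property 14
```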

Mixed Signal Processing

In the decimation-interpolation system, we convert $x[n]$ to $y(t)$ via upsampling and filtering, such that $y(t) = \sum\limits_{k \in \mathbb{Z}} x[k] \phi(t - kT)$. In the frequency domain, we would like to compare $X(\omega)$ to $Y(\Omega)$. $\phi(t)$ has Fourier transform $\Phi(\Omega)$, and $y(t) = \sum\limits_{k \in \mathbb{Z}} x[k] \phi(t - kT)$ if and only if $Y(\Omega) = X(T\Omega)\Phi(\Omega)$. This follows from the fact that the sum “looks” like a convolution. Note that $X(\omega)$ is $2\pi$-periodic in $\omega$, but $X(T\Omega)$ is a function of $\Omega$ which is $2\pi/T$-periodic. To derive this, compute the Fourier transform $Y(\Omega)$ of $y(t)$ where $y(t) = \phi(t) \ast x_s(t)$ and $x_s(t) = \sum_{k \in \mathbb{Z}} x[k] \delta(t - kT)$. We know that $\delta(t - kT) \rightarrow_{\mathcal{F}} e^{-j(kT)\Omega}$. Since the Fourier transform is a linear operation, $x_s(t) = \sum_{k \in \mathbb{Z}} x[k] \delta(t - kT) \rightarrow_{\mathcal{F}} \sum_{k \in \mathbb{Z}} x[k]e^{-j(kT)\Omega}$. Now, the discrete-time Fourier transform of $x[k]$ is $X(\omega) = \sum_{k \in \mathbb{Z}} x[k]e^{-jk\omega}$. Comparing $X_S(\Omega)$ and $X(\omega)$ reveals that $X_S(\Omega) = X(T\Omega)$. This gives $Y(\Omega) = X(T\Omega)\Phi(\Omega)$. If there were a relation between $t$ and $T$ it would be $t = kT$, but $t$ is a continuous variable. Similarly, $\omega$ is related to $\Omega$ by $\omega = T\Omega$, but again $\omega$ can take any value. We can think of $\omega$ as having no time unit - it measures angle in radians. $\Omega$ is a “frequency” variable and thus has units radians/second, while $T$ is measured in seconds.

Construction of Dual Function

Given $\phi(t)$, we'd like to find $\tilde{\phi}(t)$ such that $\mathrm{span}\{\tilde{\phi}_k\}_{k \in \mathbb{Z}} = \mathrm{span}\{\phi_k\}_{k \in \mathbb{Z}}$ and $\langle \phi, \tilde{\phi}_k\rangle = \delta[k]$. The first requirement implies that $\tilde{\phi} \in \mathrm{span}\{\phi_k\}_{k \in \mathbb{Z}}$, so $\tilde{\phi}(t) = \sum\limits_{k \in \mathbb{Z}} a[k] \phi(t - kT)$. In the Fourier domain, we have $\tilde{\Phi}(\Omega) = A(T\Omega) \Phi(\Omega)$ where $A(\omega)$ is the $2\pi$-periodic Fourier transform of $a[n]$. From the Fourier criterion for duality, $\langle \phi, \tilde{\phi}_k \rangle = \delta[k]$ is equivalent to $\sum\limits_{k \in \mathbb{Z}} (\Phi \tilde{\Phi}^\ast) (\Omega - k\Omega_s) = T$ where $\Omega_s = 2\pi/T$, and $\tilde{\Phi}(\Omega - k\Omega_s) = A(T\Omega - Tk\Omega_s) \Phi(\Omega - k\Omega_s)$. Since $A$ is $2\pi$-periodic, we have $A(T\Omega - Tk\Omega_s) = A(T\Omega)$. So the duality condition becomes $A(T\Omega)^\ast \sum\limits_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2 = T$. So,

\begin{align*} A(T\Omega)^\ast &= \frac{T}{\sum\limits_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2}\\ \rightarrow A(\omega) &= \frac{T}{\sum\limits_{k \in \mathbb{Z}} \left|\Phi\left(\frac{\omega - k2\pi}{T}\right)\right|^2} \end{align*}

For $a[n]$ to exist, we need $\sum\limits_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2 \ge \alpha$ for some $\alpha > 0$ - this makes $A(T\Omega)$ bounded, so that it has an inverse Fourier transform. By the inverse Fourier transform, we can map $A(\omega)$ to $a[n]$ to get $\tilde{\phi}(t)$. This gives a general method to find $\tilde{\phi}$.

We can also find $\tilde{\phi}$ by noting that $\langle \phi, \tilde{\phi}_n \rangle = \delta[n]$ is equivalent to $\langle \tilde{\phi}, \phi_n \rangle = \delta[n]$. We can expand out the left side to

\begin{align*} \left\langle \tilde{\phi}, \phi_n \right\rangle &= \left\langle \sum\limits_{k \in \mathbb{Z}} a[k] \phi_k, \phi_n \right\rangle\\ &= \sum\limits_{k \in \mathbb{Z}} a[k]\langle \phi_k, \phi_n\rangle\\ &= \sum\limits_{k \in \mathbb{Z}} a[k] \langle \phi, \phi_{n - k}\rangle \\ &= \sum_{k \in \mathbb{Z}} a[k] r_\phi[n - k]\\ &= (a \ast r_\phi)[n] \end{align*}

So $\langle \tilde{\phi}, \phi_n \rangle = \delta[n]$ is equivalent to $(a \ast r_\phi)[n] = \delta[n]$ which is equivalent to $A(\omega)R_\phi(\omega) = 1$ or $A(\omega) = 1/R_\phi(\omega)$. So, we can find a dual $\tilde{\phi}$ by calculating $r_\phi[k]$ then taking its Fourier transform $R_\phi(\omega)$, computing $A(\omega) = 1/R_\phi(\omega)$ and the inverse Fourier transform $a[n]$, and finally setting $\tilde{\phi}(t) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - nT)$.
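As a worked instance of this recipe (my example, not from the lecture), take the triangle generator with $T = 1$, for which $r_\phi[0] = 2/3$ and $r_\phi[\pm 1] = 1/6$ by direct integration. A length-$N$ circular DFT stands in for the $2\pi$-periodic DTFT, with $N$ large enough that $a[n]$ has decayed:

```python
import numpy as np

# Dual-function recipe for the triangle generator (T = 1):
# R_phi(omega) = 2/3 + (1/3) cos(omega), A(omega) = 1/R_phi(omega),
# a[n] = inverse transform of A; a circular DFT approximates the DTFT.
N = 64
omega = 2 * np.pi * np.arange(N) / N
R = 2/3 + (1/3) * np.cos(omega)          # R_phi(omega)
a = np.fft.ifft(1.0 / R).real            # a[n] defining phi_tilde = sum_n a[n] phi(. - nT)

# duality check: (a * r_phi)[n] = delta[n] (circular convolution on N points)
r_phi = np.zeros(N); r_phi[0] = 2/3; r_phi[1] = 1/6; r_phi[-1] = 1/6
conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(r_phi)).real
assert abs(conv[0] - 1) < 1e-10 and np.max(np.abs(conv[1:])) < 1e-10
```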

Now, we need to show that $\tilde{\phi} \in \mathrm{span}\{\phi_n\}_{n \in \mathbb{Z}}$ and $\phi \in \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$ if and only if $\mathrm{span}\{\phi_n\}_{n \in \mathbb{Z}} = \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$. It's clear that if $\mathrm{span}\{\phi_n\}_{n \in \mathbb{Z}} = \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$, then $\tilde{\phi} \in \mathrm{span}\{\phi_n\}_{n \in \mathbb{Z}}$ and $\phi \in \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$. Now, if $\tilde{\phi} \in \mathrm{span}\{\phi_n\}_{n \in \mathbb{Z}}$ then $\tilde{\phi}(t) = \sum\limits_{n \in \mathbb{Z}} a[n]\phi(t - nT)$ and $\tilde{\phi}(t - kT) = \sum\limits_{n \in \mathbb{Z}} a[n]\phi(t - (n + k)T) = \sum\limits_{n \in \mathbb{Z}} a[n - k]\phi(t - nT)$, so it follows that $\tilde{\phi}_k \in \mathrm{span}\{\phi_n\}_{n \in \mathbb{Z}}$ for every $k$. So $\mathrm{span}\{\tilde{\phi}_k\}_{k \in \mathbb{Z}} \subset \mathrm{span}\{\phi_n\}_{n \in \mathbb{Z}}$. We can similarly show that if $\phi \in \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$ then $\phi_k \in \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$ for every $k$, so $\mathrm{span}\{\phi_k\}_{k \in \mathbb{Z}} \subset \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$. It follows that $\mathrm{span}\{\phi_k\}_{k \in \mathbb{Z}} = \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$. We also want $\phi \in \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$, so that $\phi(t) = \sum\limits_{n \in \mathbb{Z}} b[n] \tilde{\phi}(t - nT)$ for some $b[n]$. This is equivalent to requiring that $\Phi(\Omega) = B(T\Omega)\tilde{\Phi}(\Omega)$. Meanwhile, we have $\tilde{\Phi}(\Omega) = A(T\Omega)\Phi(\Omega)$. Then we have $B(T\Omega) = 1/A(T\Omega)$, or $B(\omega) = 1/A(\omega)$. So, again, we will have $\phi \in \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$ when $1/A(\omega)$ exists.
Now, $\frac{1}{A(T\Omega)} = \frac{1}{T}\sum\limits_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2$, so $\phi \in \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$ is guaranteed when $\sum\limits_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2 \le \beta$ for some $\beta$. So, to guarantee that $\{\tilde{\phi}_n\}_{n \in \mathbb{Z}}$ is dual to $\{\phi_n\}_{n \in \mathbb{Z}}$ we need $\alpha \le \sum\limits_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2 \le \beta$ with $0 < \alpha \le \beta$. We can also set $A(\omega) = 1/R_\phi(\omega)$ where $r_\phi[n] = \langle \phi, \phi_n \rangle \rightarrow_{\mathcal{F}} R_\phi(\omega)$ so it follows that $A(\omega) = T/\sum\limits_{k \in \mathbb{Z}}|\Phi\left(\frac{\omega - k2\pi}{T}\right)|^2$ so $R_\phi(\omega) = \frac{1}{T} \sum\limits_{k \in \mathbb{Z}} \Phi\left(\frac{\omega - k2\pi}{T}\right)|^2$. Note that a system which filters with $\bar{\tilde{\phi}}$ in the analysis step and filters with $\phi$ in the reconstruction step will do the same thing (project onto $P_V(x)$ where $V = \mathrm{span}\{\tilde{\phi}_n\}_{n \in \mathbb{Z}} = \mathrm{span}\{\phi_n\}_{n \in \mathbb{Z}}$) as a system which filters by $\bar{\phi}$ in the analysis step and filters by $\tilde{\phi}$ in the reconstruction step. But, we will get different representations for $x[n]$; in the first case, $x[n] = r_{\phi, \tilde{\phi}} = \langle \phi, \tilde{\phi}_k \rangle$ and in the second $x^\prime[n] = r_\phi[n] = \langle \phi, \phi_k \rangle$. Call $x_\ell(t) = \phi^\prime_\ell(t)$ and $x_S(t) = \phi^\prime_S(t)$ for the second system. Then $\Phi^\prime_S(\Omega) = \frac{1}{T}\sum\limits_{k \in \mathbb{Z}} \Phi^\prime_\ell (\Omega - k\Omega_S) = \frac{1}{T} \sum\limits_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2$. So performing impulse train interpolation on $r_\phi[k]$ with period $T$ gives $\phi^\prime_S(t)$. 
It follows that $\Phi^\prime_S(\Omega) = R_\phi(T\Omega)$ and $R_\phi(\omega) = \Phi^\prime_S(\omega/T)$, so $R_\phi(\omega) = \frac{1}{T} \sum\limits_{k \in \mathbb{Z}} \left|\Phi\left(\frac{\omega - k2\pi}{T}\right)\right|^2$. We know by definition (or from the system diagram) that $\phi(t) = \sum\limits_{k \in \mathbb{Z}} r_\phi[k] \tilde{\phi}(t - kT)$, or in the Fourier domain $\Phi(\Omega) = R_\phi(T\Omega)\tilde{\Phi}(\Omega)$, so $R_\phi(T\Omega) = 1/A(T\Omega)$.

Orthonormalization by Fourier

Suppose we have $V = \mathrm{span}\{\phi_k\}_{k \in \mathbb{Z}}$ where $\phi_n(t) = \phi(t - nT)$, but $\{\phi_n\}_{n \in \mathbb{Z}}$ is not orthogonal. We want to be able to compute an orthogonal projection onto $V$. We can compute $\tilde{\phi}(t)$ and compute the projection using the typical non-orthonormal decimation-interpolation system with $\tilde{\phi}$ in the analysis step and $\phi$ in the reconstruction step. Alternatively, we can compute some $\hat{\phi}$ which is orthonormal and $\mathrm{span}\{\hat{\phi}_k\}_{k \in \mathbb{Z}} = V$, and then use the orthonormal decimation-interpolation system. In this case, we need a way to compute $\hat{\phi}$ from $\phi$. That $\mathrm{span}\{\hat{\phi}_n\}_{n \in \mathbb{Z}} = V$ implies that $\hat{\phi} \in \mathrm{span}\{\phi_n\}_{n \in \mathbb{Z}}$, so $\hat{\phi}(t) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - nT)$ and $\hat{\Phi}(\Omega) = A(T\Omega)\Phi(\Omega)$. We know that $\{\hat{\phi}_n\}_{n \in \mathbb{Z}}$ is orthonormal if and only if $\sum\limits_{k \in \mathbb{Z}} |\hat{\Phi}(\Omega - k\Omega_s)|^2 = T$. We found previously that $A(T(\Omega - k\Omega_s)) = A(T\Omega - k2\pi) = A(T\Omega)$. So the requirement for orthonormality $\sum\limits_{k \in \mathbb{Z}} |\hat{\Phi}(\Omega - k\Omega_s)|^2 = T$ is equivalent to $|A(T\Omega)|^2 = T/\sum\limits_{k \in \mathbb{Z}} |\Phi(\Omega - k\Omega_s)|^2$ or $|A(\omega)|^2 = T/\sum\limits_{k \in \mathbb{Z}} \left|\Phi\left(\frac{\omega - k2\pi}{T}\right) \right|^2$. So, one solution is to take $A(\omega) = \sqrt{T/\sum\limits_{k \in \mathbb{Z}} \left|\Phi\left(\frac{\omega - k2\pi}{T}\right) \right|^2}$; you can take the inverse Fourier transform to get $a[n]$.
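As a quick numerical sanity check of this recipe (a sketch, not part of the notes: it assumes the triangular “hat” function with $T = 1$, whose transform is $\Phi(\Omega) = \mathrm{sinc}^2(\Omega/2\pi)$ in the convention $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$), we can evaluate the truncated periodization sum and the resulting orthonormalizing $A(\omega)$:

```python
import numpy as np

T = 1.0

def Phi(Omega):
    # FT of the hat function 1 - |t| on [-1, 1]; np.sinc(x) = sin(pi x)/(pi x)
    return np.sinc(Omega / (2 * np.pi)) ** 2

def S(omega, K=200):
    # truncated version of sum_k |Phi((omega - 2 pi k)/T)|^2
    k = np.arange(-K, K + 1)
    return np.sum(np.abs(Phi((omega - 2 * np.pi * k) / T)) ** 2)

A = lambda omega: np.sqrt(T / S(omega))  # one orthonormalizing choice of A(omega)
```

For the hat function the sum has the closed form $(2 + \cos\omega)/3$, so the code should report $S(0) = 1$, $S(\pi) = 1/3$, and $A(0) = 1$.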

Alternatively, we can say $\{\hat{\phi}_n\}_{n \in \mathbb{Z}}$ is orthonormal if and only if $\langle \hat{\phi}, \hat{\phi}_n \rangle = \delta[n]$. Now, \begin{align*} \langle \hat{\phi}, \hat{\phi}_n \rangle &= \left\langle \sum\limits_{k \in \mathbb{Z}} a[k] \phi_k, \hat{\phi}_n \right\rangle\\ &= \sum\limits_{k \in \mathbb{Z}} a[k] \langle \phi_k, \hat{\phi}_n \rangle\\ &= \sum\limits_{k \in \mathbb{Z}} a[k] \langle \phi_{k - n}, \hat{\phi} \rangle\\ &= \sum\limits_{k \in \mathbb{Z}} a[k + n] \langle \phi_k, \hat{\phi} \rangle\\ \end{align*} and \begin{align*} \langle \phi_k, \hat{\phi} \rangle &= \left\langle \phi_k, \sum\limits_{n \in \mathbb{Z}} a[n] \phi_n \right\rangle\\ &= \sum\limits_{n \in \mathbb{Z}} a[n]^\ast \langle \phi_k, \phi_n \rangle\\ &= \sum\limits_{n \in \mathbb{Z}} a[n]^\ast \langle \phi_{k - n}, \phi \rangle\\ &= \sum\limits_{n \in \mathbb{Z}} a[n]^\ast r_\phi[k - n]^\ast\\ &= (a\ast r_\phi)^\ast[k] \end{align*} Combining, we have \begin{align*} \langle \hat{\phi}, \hat{\phi}_n \rangle &= \sum\limits_{k \in \mathbb{Z}} a[k + n](a \ast r_\phi)^\ast [k]\\ &= \sum_{k \in \mathbb{Z}} \bar{a}[-n - k] (a \ast r_\phi)^\ast [k]\\ &= (\bar{a} \ast r_\phi^\ast \ast a^\ast)[-n]\\ &= (\bar{a} \ast r_\phi \ast a)[-n] \end{align*} where we made the last simplification because we want $\phi$ and $a$ real. So, $\{\hat{\phi}_n\}_{n \in \mathbb{Z}}$ is orthonormal if and only if $(\bar{a} \ast r_\phi \ast a)[n] = \delta[n]$, or equivalently $A(\omega)^\ast R_\phi(\omega) A(\omega) = 1$, so $|A(\omega)|^2 = 1/R_\phi(\omega)$.


Suppose you want to transmit a signal $x(t)$, but you only want to transmit a few samples. As a first approximation, we can transmit a piecewise constant version of $x(t)$. The piecewise approximation will have a constant value $a_i$ over the interval $[iT, (i + 1)T]$. To determine how to calculate $a_i$, we focus on the coefficient $a_0$ corresponding to the interval $[0, T]$, so that we are in the space $L^2([0, T])$. If we let $r(t)$ be the “rectangular function” which is $1$ from $0$ to $T$ and $0$ elsewhere (such that $\|r\|^2 = T$) and $V = \mathrm{span}(r(t))$ and $y = P_V(x)$, then we can compute $y(t) = \frac{\langle x, r \rangle}{T} r(t)$ where $a_0 = \frac{\langle x, r \rangle}{T} = \frac{1}{T} \int_0^T x(t) \, dt$. That is, we are simply setting $a_0$ to the mean of the signal over the interval $[0, T]$. We can do this similarly for all other intervals. Now, if we want a higher-resolution approximation $y^\prime(t)$ over the intervals $[iT/2, (i + 1)T/2]$ of half length, then you have to, for example, transmit two numbers $a^\prime_0$ and $a^\prime_1$ in place of $a_0$. So, in $L^2([0, T])$, $y$ is the orthogonal projection of $x$ onto $V = \mathrm{span}(r(t))$, and $y^\prime$ is the projection of $x$ onto $V^\prime = \mathrm{span}(r^\prime(t), r^\prime(t - T^\prime))$ where $r^\prime(t)$ is the function which is $1$ from $0$ to $T^\prime = T/2$ and $0$ elsewhere. Using the same formula, you can compute $y^\prime(t) = \frac{\langle x, r^\prime \rangle}{T^\prime}r^\prime(t) + \frac{\langle x, r_1^\prime \rangle}{T^\prime} r^\prime_1(t)$ where $r^\prime_1(t) = r^\prime(t - T^\prime)$. That is, we are re-doing the same computation twice over smaller intervals - computing the mean over each of the two smaller intervals. You are effectively doubling the amount of information sent. However, if the receiver has already received $a_0$, we can use this to our advantage, because $a_0$ is the average of $a_0^\prime$ and $a_1^\prime$. 
So, you can instead simply transmit the difference between $a_0$ and $a^\prime_0$; since $a_0$ is the average of $a_0^\prime$ and $a_1^\prime$, the receiver can use this single difference to recover both $a_0^\prime$ and $a_1^\prime$.
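This refinement idea can be sketched with two hypothetical numbers (the names `a0_prime`, `a1_prime` are illustrative, standing for the two half-interval means):

```python
# Hypothetical half-interval means a_0' and a_1' of the signal
a0_prime, a1_prime = 3.0, 7.0

# Coarse coefficient sent first: the mean over the full interval
a0 = 0.5 * (a0_prime + a1_prime)

# Refinement step: send only the difference d = a_0' - a_0
d = a0_prime - a0

# Receiver recovers both fine coefficients from (a0, d)
rec0 = a0 + d          # = a_0'
rec1 = a0 - d          # = a_1', since a_0 is the average of the two
```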

Mathematically, there is more to it than this. Note that we can express $V^\prime = \mathrm{span}(r^\prime(t), r^\prime_1(t))$ instead as $V^\prime = \mathrm{span}(r(t), s(t))$ where $s(t)$ is $1$ on $[0, T/2]$ and $-1$ on $[T/2, T]$. If we call $W = \mathrm{span}(s(t))$ then we can show that $V^\prime = V \oplus W$, where $\oplus$ denotes the set of all possible sums $v + w$ with $v \in V$ and $w \in W$, and $V \bot W$. In homework 6, we showed that $P_{V^\prime}(x) = P_V(x) + P_W(x)$. But, we've already transmitted $P_V(x) = y(t)$, so we only need to transmit $b_0 = \langle x, s \rangle/T$ from which we can compute $P_W(x) = z = \frac{\langle x, s \rangle}{T} s$ because $\|s\|^2 = T$. The signal $y(t)$ can be obtained by using a decimation-interpolation system with $\phi(t) = r(t)/\sqrt{T}$, with downsampling and impulse train interpolation by $T$. The normalization by $1/\sqrt{T}$ provides an orthonormal basis for $V$. Similarly, we can compute $z(t)$ using a decimation-interpolation system with $\phi(t) = s(t)/\sqrt{T}$. Now, if we want a higher resolution signal, we can do the same thing, by just replacing $T$ with $T^\prime$. But, $y^\prime$ can instead be computed by adding together $y(t)$ and $z(t)$. Transmitting $y^\prime(t)$ requires the same amount of bandwidth as transmitting $y(t) + z(t)$, but when transmitting $y(t) + z(t)$ we can first send $y(t)$ to get a lower-resolution version of the image, then transmit $z(t)$ to obtain a higher resolution. You can imagine that with the same principle, you can continue shrinking the intervals by a factor of $2$ to obtain a higher and higher resolution image.

The decomposition onto $V$ is not smooth; it would be nice if we could apply this multiresolution principle to smooth decompositions. More specifically, we are computing the approximation $y(t) = \sum\limits_{n \in \mathbb{Z}} a[n]r(t - nT)$ where $a[n] = \langle x, r_n \rangle/T$ with $r_n(t) = r(t - nT)$, or in the Fourier domain $Y(\Omega) = A(T\Omega)R(\Omega)$. Note that the Fourier transform of $r(t + T/2)$ is a sinc function with period $\Omega_0 = 2\pi/T$ and amplitude $T$, given by $T\mathrm{sinc}(\Omega/\Omega_0)$, so the transform of $r(t)$ will be this sinc function times a linear phase term. So, in the Fourier domain, $R(\Omega)$ will have slowly decaying high-frequency components, due to jumps in the signal. If we choose a “smoother” $r$, then we may be able to avoid the high-frequency effects. One possibility would be to approximate $x(t)$ with piecewise linear (affine) functions. We would like to again find some $y, z$ as above. In this case, $V = \mathrm{span}\{\phi(t - nT)\}$ where $\phi(t) = \frac{1}{T} (r \ast \bar{r})(t)$. Then $y(t) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - nT)$, which in the Fourier domain is $Y(\Omega) = A(T\Omega)\Phi(\Omega)$. Now, because $\phi(t) = \frac{1}{T} (r \ast \bar{r})(t)$ we have $|\Phi(\Omega)| = \frac{1}{T}|R(\Omega)|^2 = T|\mathrm{sinc}(\Omega/\Omega_0)|^2$. This will effectively force the high frequency content to decay faster. In the time domain, the function $\phi$ has larger support. We can continue in this way - increasing the support of our time-domain function to approximate with, which will force stronger decay in the Fourier domain. In the limit, we can use a sinc function in the time domain, which will result in a rectangular function in the Fourier domain. In the case where $V = \mathrm{span}\{\phi(t - nT)\}_{n \in \mathbb{Z}}$ with $\phi$ piecewise linear on the intervals $[kT, (k + 1)T]$, note that the space $V^\prime = \mathrm{span}\{\phi^\prime(t - nT^\prime)\}$ with $\phi^\prime$ piecewise linear on the intervals $[kT/2, (k+1)T/2]$ contains $V$. 
This implies that $\phi(t) \in V^\prime$. We would like to find $a[n]$ such that $\phi(t) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi^\prime(t - nT^\prime)$. It's easy to see that $\phi(t) = \frac{1}{2}\phi^\prime(t + T^\prime) + \phi^\prime(t) + \frac{1}{2}\phi^\prime(t - T^\prime)$. So, $a[-1] = 1/2, a[0] = 1, a[1] = 1/2$ and $a$ is zero elsewhere. This implies that $\phi(t - kT) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi^\prime(t - (n + 2k)T^\prime) = \sum_{n \in \mathbb{Z}} a[n - 2k] \phi^\prime(t - nT^\prime) \in V^\prime$. So, for all $k \in \mathbb{Z}$, $\phi(t - kT) \in V^\prime$, which implies that $V \subset V^\prime$. If $(a[n])_{n \in \mathbb{Z}}$ are the coefficients of $\phi(t)$ in $\{\phi^\prime(t - nT^\prime)\}_{n \in \mathbb{Z}}$ then $(a[n - 2k])_{n \in \mathbb{Z}}$ are the coefficients of $\phi(t - kT)$. Importantly, you can also express $\phi^\prime(t) = \phi(2t)$, so we have $\phi(t/2) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - nT)$. This is equivalent to $V \subset V^\prime$. We would also like to be able to find $W$ such that $V^\prime = V \oplus W$ where $W = \mathrm{span}\{\psi(t - nT)\}_{n \in \mathbb{Z}}$ with $\psi$ some other function.

Scaling Function

We say that $\phi(t)$ is a scaling function with respect to $T$ when $V \subset V^\prime$ where $V = \mathrm{span}\{\phi(t - nT)\}$, $V^\prime = \mathrm{span}\{\phi^\prime(t - nT^\prime)\}$ and $T^\prime = T/2$, $\phi^\prime(t) = \phi(2t)$. We saw that $V \subset V^\prime$ if and only if there exists a sequence $a[n]$ such that $\phi(t/2) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - nT)$. We would like to be able to create functions which satisfy this property. We can characterize this property in the Fourier domain by first observing that $x(\alpha t) \rightarrow_{\mathcal{F}} \frac{1}{\alpha} X(\Omega/\alpha)$. So, in the Fourier domain, our desired relation is $2\Phi(2\Omega) = A(T\Omega)\Phi(\Omega)$. Note that $A(T\Omega)$ is $\Omega_0 = 2\pi/T$-periodic and $\Phi$ is not periodic.


Consider $\phi(t) = \mathrm{sinc}(t/T)$. In the Fourier domain, $\Phi(\Omega)$ will be $T$ on the interval $[-\Omega_0/2, \Omega_0/2]$ where $\Omega_0 = 2\pi/T$. We'd like to find $a[n]$ such that $\phi(t/2) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - nT)$ or equivalently $2\Phi(2\Omega) = A(T\Omega)\Phi(\Omega)$ where $A(T\Omega)$ is $\Omega_0$-periodic. So, $\Phi(2\Omega)$ will be $T$ on the interval $[-\Omega_0/4, \Omega_0/4]$. This implies that $A(T\Omega) = 2$ for $|\Omega| \le \Omega_0/4$ and $0$ for $\Omega_0/4 < |\Omega| \le \Omega_0/2$, and it doesn't matter what the value is outside of these intervals. So, we must let $A(T\Omega)$ be $2$ on the interval $[-\Omega_0/4, \Omega_0/4]$ and be $\Omega_0$-periodic. All of the periodic copies will not overlap with $\Phi(\Omega)$, so we will achieve the desired effect. In this case, $A(\omega)$ will be $2$ when $|\omega| \le \pi/2$, $0$ when $\pi/2 < |\omega| \le \pi$, and will be $2\pi$-periodic. Now we can find $a[n] = \frac{1}{2\pi} \int_{-\pi}^\pi A(\omega) e^{j\omega n} d \omega = \mathrm{sinc}(n/2)$.
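The last computation can be verified numerically (a sketch: the midpoint rule below approximates the inverse Fourier series integral, using the fact that $A(\omega)$ vanishes on $\pi/2 < |\omega| \le \pi$):

```python
import numpy as np

def a_coeff(n, M=20000):
    # a[n] = (1/2pi) * integral of A(w) e^{jwn} over [-pi, pi];
    # A(w) = 2 on |w| <= pi/2 and 0 elsewhere, so integrate only [-pi/2, pi/2]
    dw = np.pi / M
    w = -np.pi / 2 + (np.arange(M) + 0.5) * dw   # midpoint rule nodes
    return np.sum(2 * np.exp(1j * w * n)).real * dw / (2 * np.pi)

ns = np.arange(-5, 6)
vals = np.array([a_coeff(n) for n in ns])
```

The values agree with $\mathrm{sinc}(n/2)$ in the `np.sinc` convention $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$.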

Let $\phi(t) \in L^2(\mathbb{R})$ and define $\phi_n(t) = \phi(t - nT)$ for a given $T$. Define for each $m \in \mathbb{Z}$ the space $V_m = \mathrm{span} \left\{\phi_n \left(\frac{t}{2^m}\right) \right\}_{n \in \mathbb{Z}}$. Then $V_0 = V = \mathrm{span}\{\phi_n(t)\}_{n \in \mathbb{Z}}$. If $x(t) \in V_0$ then $x\left(\frac{t}{2^m}\right) \in V_m$, because $x(t) \in V_0$ implies that $x(t) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi_n(t)$, so $x\left(\frac{t}{2^m}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi_n\left(\frac{t}{2^m}\right)$ and $x\left(\frac{t}{2^m}\right) \in V_m$. Equivalently, if $x(t) \in V_m$ then $x(2^m t) \in V_0$. So if $m$ is a positive integer, the signals of $V_0$ oscillate faster than those of $V_m$, and when $m$ is negative, the signals of $V_m$ oscillate faster than those of $V_0$. In general, if $m > m^\prime$ then the signals of $V_{m^\prime}$ oscillate faster than those of $V_m$: as $m$ decreases, the signals of $V_m$ oscillate faster.

Theorem The following statements are equivalent:

  1. $\phi(t)$ is a scaling function with respect to $T$
  2. There exists a sequence $a[n]$ such that $\phi\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi_n(t)$
  3. There exists a $2\pi$-periodic function $A(\omega)$ such that $2\Phi(2\Omega) = A(T\Omega)\Phi(\Omega)$
  4. $\ldots \subset V_2 \subset V_1 \subset V_0 \subset V_{-1} \subset V_{-2} \subset \ldots$; that is, $V_m \subset V_{m - 1}$ for all $m \in \mathbb{Z}$

Proof The definition of a scaling function is some $\phi(t)$ such that $V \subset V^\prime$ where $V^\prime = \mathrm{span}\{\phi^\prime(t - nT^\prime)\}_{n \in \mathbb{Z}}$, $\phi^\prime(t) = \phi(2t)$ and $T^\prime = \frac{T}{2}$. Now $\phi^\prime(t - nT^\prime) = \phi(2t - 2nT^\prime) = \phi(2t - nT) = \phi_n(2t) = \phi_n\left(\frac{t}{2^{-1}}\right)$. So, this definition implies that $V_0 \subset V_{-1}$. So, obviously the fourth condition implies the first condition. To show that the first condition implies the fourth, assume that $x(t) \in V_m$. Then we know that $x(2^m t) \in V_0$. But, we know $V_0 \subset V_{-1}$ so we know $x(2^m t) \in V_{-1}$. So, $x(2^m (2^{-1}t) ) = x(2^{m-1} t) \in V_0$ and so $x(t) \in V_{m - 1}$, which implies the fourth condition. We showed equivalence between the other conditions earlier.

Now, we'd like to figure out how to construct all possible scaling functions $\phi$. To do that, we are going to work in both the time domain and the frequency domain. Now, if there exists a sequence $a[n]$ such that $\phi\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi_n(t)$ then $\alpha \phi(t)$ is also a solution, so multiplication of a scaling function by a scalar produces another scaling function. Now, if we choose any sequence $a[n]$ there is not always some $\phi$ such that $\phi\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi_n(t)$. We need $a[n]$ to satisfy a number of conditions, which we can see in the Fourier domain. We know that a scaling function satisfies $2\Phi(2\Omega) = A(T\Omega)\Phi(\Omega)$. We can define $M(\Omega) = \frac{A(T\Omega)}{2}$ so that we can equivalently require that $\Phi(2\Omega) = M(\Omega) \Phi(\Omega)$ or $\Phi(\Omega) = M\left(\frac{\Omega}{2}\right) \Phi\left(\frac{\Omega}{2}\right)$. We can also write $\Phi(\Omega) = M\left(\frac{\Omega}{2}\right)M\left(\frac{\Omega}{4}\right)\Phi\left(\frac{\Omega}{4}\right)$. We can keep on doing this, producing $\Phi(\Omega) = M\left(\frac{\Omega}{2}\right)M\left(\frac{\Omega}{4}\right)M\left(\frac{\Omega}{8}\right) \cdots M\left(\frac{\Omega}{2^N}\right) \Phi\left(\frac{\Omega}{2^N}\right)$ for any integer $N \ge 1$. Assume that $\Phi(\Omega)$ is continuous at $\Omega = 0$; then $\lim_{N \rightarrow \infty} \Phi\left(\frac{\Omega}{2^N}\right) = \Phi(0)$, so that $\Phi(\Omega) = \left( \prod\limits_{i = 1}^\infty M\left(\frac{\Omega}{2^i}\right) \right) \Phi(0)$. Now, we need $\prod\limits_{i = 1}^\infty M\left(\frac{\Omega}{2^i}\right)$ to converge, which happens if and only if $M(\Omega)$ is continuous at $\Omega = 0$ and $M(0) = 1$. Therefore, because $M(\Omega) = \frac{A(T\Omega)}{2}$, for $a[n]$ to produce a scaling function we require that $A(\omega)$ be continuous at $\omega = 0$ with $A(0) = 2$.
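The truncated infinite product can be checked numerically in a simple case (a sketch assuming $T = 1$ and the rectangular function on $[0, 1]$, whose two-scale coefficients are $a[0] = a[1] = 1$, so that $A(\omega) = 1 + e^{-j\omega}$; with $\Phi(0) = 1$ the product should reproduce the Fourier transform of the rect):

```python
import numpy as np

Omega = np.linspace(-10.0, 10.0, 201)
M = lambda O: (1 + np.exp(-1j * O)) / 2      # M(Omega) = A(T Omega)/2 with T = 1

prod = np.ones_like(Omega, dtype=complex)
for i in range(1, 40):                       # truncated infinite product
    prod = prod * M(Omega / 2 ** i)

# FT of the rect on [0, 1], normalized so Phi(0) = 1:
target = np.exp(-1j * Omega / 2) * np.sinc(Omega / (2 * np.pi))
```

The agreement reflects Viète's product formula, $\prod_{k \ge 1} \cos(x/2^k) = \sin(x)/x$.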


Consider the rectangular function $\phi(t)$ which is $1$ on $t \in [0, T]$. Now, $\phi(t/2)$ is $1$ from $0$ to $2T$. We know that $\phi(t/2) = \phi(t) + \phi(t - T)$ so $a[n] = 1, n \in \{0, 1\}$ and $0$ otherwise. Then $A(\omega) = 1 + e^{-j\omega}$, so that $A(\omega)$ is continuous everywhere and $A(0) = 2$.


If $\phi(t)$ is $1 - |t|/T$ from $-T$ to $T$ and $0$ elsewhere, then $\phi(t/2) = \frac{1}{2} \phi(t + T) + \phi(t) + \frac{1}{2}\phi(t - T)$ so $A(\omega) = \frac{1}{2}e^{j \omega} + 1 + \frac{1}{2}e^{-j\omega} = 1 + \cos(\omega)$, and $A(\omega)$ is continuous everywhere with $A(0) = 2$.

Theoretically, you can generate $\phi(t)$ from $a[n]$ by choosing $a[n]$ such that $A(\omega)$ is continuous at $\omega = 0$ and $A(0) = 2$, defining $M(\Omega) = \frac{A(T\Omega)}{2}$, and deriving $\Phi(\Omega) = \left( \prod\limits_{i = 1}^\infty M\left(\frac{\Omega}{2^i}\right) \right) \Phi(0)$ for some arbitrary $\Phi(0)$. Finally, you can take the inverse Fourier transform of $\Phi(\Omega)$ to get $\phi(t)$. A more practical method is to start with $a[n]$ such that $A(\omega)$ is continuous at $\omega = 0$ and $A(0) = 2$, make a guess $\phi^{(0)}(t)$, and derive $\phi^{(1)}(t)$ such that $\phi^{(1)}\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi^{(0)}(t - nT)$. If $\phi^{(1)}(t) = \phi^{(0)}(t)$, then you're done, because $\phi^{(0)}(t)$ is a scaling function. If $\phi^{(1)}(t) \ne \phi^{(0)}(t)$, then iteratively find $\phi^{(i)}(t)$ such that $\phi^{(i + 1)}\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n]\phi^{(i)}(t - nT)$. If $\phi^{(i)}(t)$ converges to a limit $\phi^{(\infty)}(t)$ as $i \rightarrow \infty$, then $\phi^{(\infty)}\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi^{(\infty)} (t - nT)$, so $\phi^{(\infty)}(t)$ is a scaling function.
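This practical iteration can be run on a dyadic grid (a minimal sketch, assuming $T = 1$ and the hat-function coefficients $a[-1] = \frac{1}{2}$, $a[0] = 1$, $a[1] = \frac{1}{2}$ from the earlier example; starting from a single impulse, each iteration upsamples by two and convolves with $a$, producing samples of $\phi^{(i)}$ at spacing $2^{-i}$):

```python
import numpy as np

a = np.array([0.5, 1.0, 0.5])   # a[-1], a[0], a[1] for the hat function, T = 1

c = np.array([1.0])             # phi^(0): a single impulse at t = 0
iters = 8
for _ in range(iters):
    up = np.zeros(2 * len(c) - 1)
    up[::2] = c                 # upsample by 2
    c = np.convolve(up, a)      # refinement: phi^(i+1)(t) = sum_n a[n] phi^(i)(2t - n)

# c[k] holds samples of the iterate on the dyadic grid below
t = np.linspace(-(1 - 2.0 ** -iters), 1 - 2.0 ** -iters, len(c))
```

For these particular coefficients the iterates land exactly on samples of the hat function $1 - |t|$.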

In the Fourier domain, the iteration reads $2\Phi^{(i + 1)} (2 \Omega) = A(T\Omega)\Phi^{(i)}(\Omega)$, or $\Phi^{(i + 1)}(\Omega) = M\left(\frac{\Omega}{2}\right) \Phi^{(i)}\left(\frac{\Omega}{2}\right)$. Unrolling, $\Phi^{(N)} (\Omega) = \left( \prod_{i = 1}^N M\left(\frac{\Omega}{2^i}\right) \right) \Phi^{(0)}\left(\frac{\Omega}{2^N} \right)$. We know that $\prod_{i = 1}^N M\left(\frac{\Omega}{2^i}\right)$ converges by construction of $a[n]$, so $\Phi^{(N)}(\Omega)$ converges as $N \rightarrow \infty$ and $\phi^{(i)}(t)$ converges to a limit.

Now, in addition to requiring that $A(\omega)$ be continuous at $\omega = 0$ with $A(0) = 2$, we want the sequence $a[n]$ to generate a scaling function $\phi(t)$ such that $\{\phi(t - nT)\}_{n \in \mathbb{Z}}$ is orthonormal.

Theorem Define $\phi_n(t) = \phi(t - nT)$ and let $\beta > 0$ be a constant. Then $\{\phi_n(t)\}_{n \in \mathbb{Z}}$ is orthonormal if and only if $\{\sqrt{\beta} \phi_n(\beta t)\}_{n \in \mathbb{Z}}$ is orthonormal.

Proof Assume that $\{\phi_n(t)\}_{n \in \mathbb{Z}}$ is orthonormal. Then \begin{align*} \left\langle \sqrt{\beta} \phi_n(\beta t), \sqrt{\beta} \phi_{n^\prime} (\beta t) \right\rangle &= \beta \int_{-\infty}^\infty \phi_n(\beta t)\phi_{n^\prime} (\beta t)^\ast d t\\ & = \int_{-\infty}^\infty \phi_n(t) \phi_{n^\prime} (t)^\ast d t\\ &= \langle \phi_n, \phi_{n^\prime}\rangle\\ &= \delta[n - n^\prime] \end{align*} so $\{\sqrt{\beta} \phi_n(\beta t) \}_{n \in \mathbb{Z}}$ is orthonormal. The proof in the opposite direction follows similarly.

Theorem Assume that $\{\phi_n(t)\}_{n \in \mathbb{Z}}$ is orthonormal, let $a[n]$ be any sequence, and let $g(t)$ be such that $g\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - nT)$. Then $\{g(t - kT)\}_{k \in \mathbb{Z}}$ is orthonormal in $L^2(\mathbb{R})$ if and only if $\left\{\frac{1}{\sqrt{2}} a_k[n]\right\}_{k \in \mathbb{Z}}$, where $a_k[n] = a[n - 2k]$, is orthonormal in $\ell^2(\mathbb{Z})$.

Proof Let $V = \mathrm{span}\{\phi(t - nT)\}_{n \in \mathbb{Z}}$ where we have assumed that $\{\phi(t - nT)\}_{n \in \mathbb{Z}}$ is orthonormal. The fundamental isomorphism is $F: V \rightarrow \ell^2(\mathbb{Z})$ which maps $x(t) \rightarrow x[n] = \langle x, \phi_n \rangle$. Define $g_k(t) = g(t - kT)$. From above, we know that $\{g_k(t)\}_{k \in \mathbb{Z}}$ is orthonormal if and only if $\left \{ \frac{1}{\sqrt{2}} g_k \left(\frac{t}{2}\right) \right\}_{k \in \mathbb{Z}}$ is orthonormal, by setting $\beta = \frac{1}{2}$. Also from above, we can set $a[n] = \left \langle g\left(\frac{t}{2}\right), \phi_n(t) \right\rangle$. Now, we apply the fundamental isomorphism to $g_k\left(\frac{t}{2}\right)$. We have that $g_k\left(\frac{t}{2}\right) = g\left(\frac{t}{2} - kT \right) = g\left(\frac{t - k2T}{2}\right)$. We know that $g\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - nT)$ implies that $g\left(\frac{t - k2T}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - k2T - nT) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - (n + 2k)T)$. We also have that $g_k\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n - 2k] \phi(t - nT)$. This implies that $\left\langle g_k\left(\frac{t}{2}\right), \phi_n \right\rangle = a[n - 2k]$. To summarize, the fundamental isomorphism maps $g_k\left(\frac{t}{2}\right) \rightarrow a[n - 2k] = a_k[n]$. Since the isomorphism is linear, we also have that $\frac{1}{\sqrt{2}} g_k\left(\frac{t}{2}\right)$ maps to $\frac{1}{\sqrt{2}} a_k[n]$. So, the family $\left\{\frac{1}{\sqrt{2}} g_k \left(\frac{t}{2}\right) \right\}_{k \in \mathbb{Z}}$ is orthonormal in $V$ if and only if $\left\{\frac{1}{\sqrt{2}} a_k[n] \right\}_{k \in \mathbb{Z}}$ is orthonormal in $\ell^2(\mathbb{Z})$.

We want $\{\phi_n(t)\}_{n \in \mathbb{Z}}$ to be orthonormal. To apply the above, we set $g(t) = \phi(t)$; the required two-scale relation then holds automatically because $\phi$ is a scaling function. We can then conclude that $\left\{\frac{1}{\sqrt{2}} a[n - 2k]\right\}_{k \in \mathbb{Z}}$ is orthonormal in $\ell^2(\mathbb{Z})$. Conversely, that $\left\{\frac{1}{\sqrt{2}} a[n - 2k]\right\}_{k \in \mathbb{Z}}$ is orthonormal implies that $\{\phi(t - nT)\}_{n \in \mathbb{Z}}$ is orthonormal (up to a scaling factor), because we can construct $\phi^{(i + 1)}\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi^{(i)}(t - nT)$ and we know that $\phi^{(\infty)}$ satisfies our scaling function requirements. So, we can start with an initial guess $\phi^{(0)}(t)$ such that $\{\phi^{(0)}(t - nT)\}_{n \in \mathbb{Z}}$ is orthonormal (for example, the rectangular function). From above, and by induction, $\{\phi^{(i)}(t - nT)\}_{n \in \mathbb{Z}}$ is orthonormal since $\left\{\frac{1}{\sqrt{2}} a[n - 2k] \right\}_{k \in \mathbb{Z}}$ is orthonormal. So, in conclusion, if $a[n]$ satisfies $A(\omega)$ continuous at $\omega = 0$ and $A(0) = 2$ and $\left \{\frac{1}{\sqrt{2}} a[n - 2k] \right\}_{k \in \mathbb{Z}}$ is orthonormal in $\ell^2(\mathbb{Z})$, then $\phi\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - nT)$ has a solution $\phi(t)$ such that $\{\phi(t - nT)\}_{n \in \mathbb{Z}}$ is orthonormal.
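The even-shift orthonormality condition in $\ell^2(\mathbb{Z})$ is easy to check numerically (a sketch using the rectangular function's coefficients $a[0] = a[1] = 1$ and only the first few shifts, embedded in finite-length vectors):

```python
import numpy as np

a = np.array([1.0, 1.0])        # rectangular (Haar) two-scale coefficients

def shifted(seq, k, length=10):
    # embed seq/sqrt(2), shifted to start at index 2k, in a length-`length` vector
    out = np.zeros(length)
    out[2 * k : 2 * k + len(seq)] = seq / np.sqrt(2)
    return out

# Gram matrix of the first few even shifts: should be the identity
gram = np.array([[np.dot(shifted(a, k), shifted(a, kp)) for kp in range(4)]
                 for k in range(4)])
```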


To review, if we assume $\{\phi(t - nT)\}_{n \in \mathbb{Z}}$ to be orthonormal with $V = \mathrm{span}\{\phi(t - nT)\}_{n \in \mathbb{Z}}$, then we can consider the isomorphism $F$ which maps $V$ to $\ell^2(\mathbb{Z})$ by $x(t) \rightarrow x[n] = \langle x, \phi_n \rangle$. Now, let $a[n]$ be a sequence in $\ell^2(\mathbb{Z})$ and consider $g(t)$ such that $g\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi( t - nT)$. If we define $g_k(t) = g(t - kT)$, then $\{g_k(t)\}_{k \in \mathbb{Z}}$ will be orthonormal if and only if $\left\{\frac{1}{\sqrt{2}} g_k\left(\frac{t}{2}\right)\right\}_{k \in \mathbb{Z}}$ is orthonormal in $V$, or equivalently $\left\{\frac{1}{\sqrt{2}} a[n - 2k]\right\}_{k \in \mathbb{Z}}$ is orthonormal in $\ell^2(\mathbb{Z})$, since the isomorphism $F$ maps $g_k\left(\frac{t}{2}\right) \rightarrow a[n - 2k]$. With $a[n - 2k]$, we only consider “even shifts”, so the dimension is intuitively half of $\ell^2(\mathbb{Z})$. Similarly, the signals $g_k\left(\frac{t}{2}\right)$ will span “half” of $V$ in terms of dimension - intuitively, the “lower frequency half”.

If we assume moreover that $\phi(t)$ is a scaling function so that $\phi\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - nT)$ then from the above properties with $g(t) = \phi(t)$ we automatically can conclude that $\left\{\frac{1}{\sqrt{2}} a[n - 2k]\right\}_{k \in \mathbb{Z}}$ is orthonormal in $\ell^2(\mathbb{Z})$. Now, suppose we can find $b[n] \in \ell^2(\mathbb{Z})$ such that

  1. $\left\{\frac{1}{\sqrt{2}} b[n - 2k]\right\} $ is also orthonormal in $\ell^2(\mathbb{Z})$
  2. $b[n - 2k] \bot a[n - 2k^\prime]$ for all $k, k^\prime \in \mathbb{Z}$

Then clearly $\left\{\frac{1}{\sqrt{2}} a[n - 2k], \frac{1}{\sqrt{2}} b[n - 2k^\prime] \right\}_{k, k^\prime \in \mathbb{Z}}$ is orthonormal in $\ell^2(\mathbb{Z})$. Furthermore, it can be proved (by a dimensionality argument) that $\mathrm{span}\left\{\frac{1}{\sqrt{2}} a[n - 2k], \frac{1}{\sqrt{2}} b[n - 2k^\prime] \right\}_{k, k^\prime \in \mathbb{Z}} = \ell^2(\mathbb{Z})$. So, it follows that $\left\{\frac{1}{\sqrt{2}} a[n - 2k], \frac{1}{\sqrt{2}} b[n - 2k^\prime] \right\}_{k, k^\prime \in \mathbb{Z}}$ is automatically an orthonormal basis of $\ell^2(\mathbb{Z})$. Now, suppose we find such a $b[n]$, then define $\psi(t)$ such that $\psi\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} b[n] \phi(t - nT)$. Then our isomorphism $F$ which maps $\phi_k\left(\frac{t}{2}\right) \rightarrow a[n - 2k]$ will map $\psi_k\left(\frac{t}{2}\right) \rightarrow b[n - 2k]$. Now, because $F$ is an isomorphism and $\left\{\frac{1}{\sqrt{2}} a[n - 2k], \frac{1}{\sqrt{2}} b[n - 2k^\prime] \right\}_{k, k^\prime \in \mathbb{Z}}$ is an orthonormal basis of $\ell^2(\mathbb{Z})$, then the family $\left\{ \frac{1}{\sqrt{2}} \phi_k \left(\frac{t}{2}\right), \frac{1}{\sqrt{2}} \psi_{k^\prime}\left(\frac{t}{2}\right) \right\}_{k, k^\prime \in \mathbb{Z}}$ is an orthonormal basis of $V$. We call $\psi(t)$ a wavelet associated with scaling function $\phi$.

Recall that we defined for any given $m \in \mathbb{Z}$ the space $V_m = \mathrm{span}\left\{\phi_n \left(\frac{t}{2^m}\right) \right\}_{n \in \mathbb{Z}}$ and we saw that $x(t) \in V_m$ if and only if $x(2^m t) \in V_0$, where we define $V_0 = V$. Similarly, we define $W_m = \mathrm{span}\left\{\psi_n\left(\frac{t}{2^m}\right) \right\}_{n \in \mathbb{Z}}$ where $\psi_n = \psi(t - nT)$. Then similarly $y(t) \in W_m$ if and only if $y(2^m t) \in W_0$, where we define $W = W_0$. We know that $\left\{ \frac{1}{\sqrt{2}} \phi_k \left(\frac{t}{2}\right), \frac{1}{\sqrt{2}} \psi_{k^\prime}\left(\frac{t}{2}\right) \right\}_{k, k^\prime \in \mathbb{Z}}$ is an orthonormal basis of $V_0$, which implies that its scaled version $\left\{\frac{1}{\sqrt{2^m}} \phi_k\left(\frac{t}{2^m}\right), \frac{1}{\sqrt{2^m}} \psi_{k^\prime} \left(\frac{t}{2^m}\right) \right\}_{k, k^\prime \in \mathbb{Z}}$ is an orthonormal basis of $V_{m - 1}$. This implies that $V_m \oplus W_m = V_{m - 1}$ where $W_m = \mathrm{span} \left\{\frac{1}{\sqrt{2^m}} \psi_{k^\prime} \left(\frac{t}{2^m}\right) \right\}_{k^\prime \in \mathbb{Z}}$ and $V_m = \mathrm{span} \left\{\frac{1}{\sqrt{2^m}} \phi_k\left(\frac{t}{2^m}\right) \right\}_{k \in \mathbb{Z}}$. A particular case is $V_0 \oplus W_0 = V \oplus W = V_{-1}$.


Consider the case where $V$ is the space of piecewise constant functions on $[kT, (k + 1)T]$ for all $k \in \mathbb{Z}$. If $\phi(t)$ is $\frac{1}{\sqrt{T}}$ from $0$ to $T$ and $0$ otherwise, then $\{\phi(t - nT)\}_{n \in \mathbb{Z}}$ is an orthonormal basis of $V$. We also know that $\phi \left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} a[n] \phi(t - nT)$ where $a[n]$ is $1$ for $n = 0, 1$ and $0$ otherwise. This implies that $\left\{\frac{1}{\sqrt{2}} a[n - 2k]\right\}_{k \in \mathbb{Z}}$ is orthonormal in $\ell^2(\mathbb{Z})$, because the shifted copies $a[n - 2k]$ never overlap; $\mathrm{span}\left\{\frac{1}{\sqrt{2}} a[n - 2k] \right\}_{k \in \mathbb{Z}}$ consists of the sequences which are constant on each pair of indices $\{2k, 2k + 1\}$. Now, we can find $b[n]$ such that $\left\{\frac{1}{\sqrt{2}} a[n - 2k], \frac{1}{\sqrt{2}} b[n - 2k^\prime] \right\}_{k, k^\prime \in \mathbb{Z}}$ is an orthonormal basis of $\ell^2(\mathbb{Z})$ by constructing $b[0] = 1, b[1] = -1$ with $b[n] = 0$ otherwise. It is clear that these vectors are always orthogonal to $a[n - 2k]$ and that $\left\{\frac{1}{\sqrt{2}} b[n - 2k^\prime] \right\}_{k^\prime \in \mathbb{Z}}$ is an orthonormal family. The wavelet is then $\psi(t)$ such that $\psi\left(\frac{t}{2}\right) = \sum\limits_{n \in \mathbb{Z}} b[n] \phi(t - nT)$, which results in the signal which is $\frac{1}{\sqrt{T}}$ on $[0, T/2]$, $-\frac{1}{\sqrt{T}}$ on $[T/2, T]$, and $0$ otherwise.
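Both conditions on $b[n]$ can be verified numerically for this Haar pair (a sketch; only a few even shifts are checked, embedded in finite-length vectors):

```python
import numpy as np

a = np.array([1.0, 1.0]) / np.sqrt(2)    # normalized a[n]
b = np.array([1.0, -1.0]) / np.sqrt(2)   # normalized b[n]

def shift2(seq, k, length=12):
    # embed seq, shifted to start at index 2k, in a length-`length` vector
    out = np.zeros(length)
    out[2 * k : 2 * k + len(seq)] = seq
    return out

# the combined family {a_k, b_k'} for k, k' = 0..3 should be orthonormal
fam = [shift2(s, k) for s in (a, b) for k in range(4)]
gram = np.array([[np.dot(u, v) for v in fam] for u in fam])
```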

We know that $V_0 \oplus W_0 = V_{-1}$, that $\{\phi(t - kT)\}_{k \in \mathbb{Z}}$ is an orthonormal basis of $V$, and that $\{\psi(t - kT)\}_{k \in \mathbb{Z}}$ is an orthonormal basis of $W$. So, for some signal $x(t)$, if we compute $P_V(x)$ and $P_W(x)$ with two decimation-interpolation systems and add their results we will obtain $P_{V_{-1}} (x)$. In this way, if we have already transmitted $P_V(x)$ we can just transmit $P_W(x)$ to get $P_{V_{-1}}(x)$, which is higher resolution. We can continue in this way, transmitting $P_{W_{-1}}(x)$ to get $P_{V_{-2}}(x)$ and so on. Now, recall that $\left\{\frac{1}{\sqrt{2^m}} \phi_n \left(\frac{t}{2^m}\right) \right\}_{n \in \mathbb{Z}}$ is an orthonormal basis of $V_m$ and $\left \{\frac{1}{\sqrt{2^m}} \psi_n \left(\frac{t}{2^m}\right) \right\}_{n \in \mathbb{Z}}$ is an orthonormal basis of $W_m$. So, an orthonormal basis of $W_{-1}$ is $\{\sqrt{2}\psi_n(2t)\}_{n \in \mathbb{Z}}$ - the space has signals which oscillate faster. Now, we can define $\psi^\prime(t) = \sqrt{2} \psi(2t)$, and we have $\sqrt{2}\psi_n(2t) = \sqrt{2}\psi(2t - nT) = \sqrt{2}\psi(2(t - nT^\prime)) = \psi^\prime(t - nT^\prime)$ where $T^\prime = \frac{T}{2}$. So, $\{\psi^\prime(t - nT^\prime)\}_{n \in \mathbb{Z}}$ is an orthonormal basis of $W_{-1}$. So, if we pass $x(t)$ through the decimation-interpolation system filtering by $\psi^\prime(t) = \sqrt{2}\psi(2t)$ and sampling by $T^\prime = \frac{T}{2}$, then we will get the orthogonal projection $P_{W_{-1}}(x)$, which you can combine with $P_V(x) + P_W(x) = P_{V_{-1}}(x)$ to get $P_{V_{-2}}(x)$. We can continue on in this way to get higher and higher resolution signals.
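The identity $P_V(x) + P_W(x) = P_{V_{-1}}(x)$ can be illustrated on a discrete grid (a sketch for the piecewise constant case, with four samples per interval of length $T$, so block means play the role of the projections and a discrete Haar vector plays the role of $\psi$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)             # fine-grid samples, 4 per interval of length T

def block_mean(sig, n):
    # piecewise constant projection: replace each block of n samples by its mean
    return np.repeat(sig.reshape(-1, n).mean(axis=1), n)

PV  = block_mean(x, 4)                  # projection onto V      (constant on each T)
PV1 = block_mean(x, 2)                  # projection onto V_{-1} (constant on each T/2)

psi = np.array([1.0, 1.0, -1.0, -1.0]) / 2.0   # discrete Haar wavelet, unit norm
PW = np.zeros_like(x)
for k in range(4):                      # projection onto W: sum of <x, psi_k> psi_k
    seg = x[4 * k : 4 * k + 4]
    PW[4 * k : 4 * k + 4] = np.dot(seg, psi) * psi
```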

We have seen that the space $V_{-n}$ gets bigger as $n$ gets larger. We would like to know if it converges to the entire space. We know that $\ldots \subset V_2 \subset V_1 \subset V_0 \subset V_{-1} \subset V_{-2} \ldots \subset L^2(\mathbb{R})$. It turns out that when $\{\phi(t - nT)\}_{n \in \mathbb{Z}}$ is orthogonal, then $V_{\infty}$ is reduced to $\{\vec{0}\}$ (the zero function). If moreover $\Phi(\Omega)$ is continuous at $\Omega = 0$ with $\Phi(0) \ne 0$, then $V_{-\infty}$ is the whole space. That is, with our construction of $\phi(t)$, $V_{\infty} = \{\vec{0}\}$ and $V_{-\infty} = L^2(\mathbb{R})$ are always realized.

Assuming $m \ge 0$, we know that $V_{-m} = V_0 \oplus W_0 \oplus W_{-1} \oplus \ldots \oplus W_{-m + 1}$. Since $L^2(\mathbb{R}) = V_{-\infty}$, we have $L^2(\mathbb{R}) = V_0 \oplus \bigoplus_{m = 0}^\infty W_{-m}$. Meanwhile, $V_0 = V_1 \oplus W_1 = V_2 \oplus W_2 \oplus W_1 = V_m \oplus W_m \oplus \cdots \oplus W_2 \oplus W_1$, so that $V_{\infty} = \{\vec{0}\}$ gives $V_0 = \bigoplus_{m = 1}^\infty W_m$. It follows that $L^2(\mathbb{R}) = \bigoplus_{m = -\infty}^\infty W_m$. Now, $\left\{\frac{1}{\sqrt{2^m}} \psi_n \left(\frac{t}{2^m}\right) \right\}_{n \in \mathbb{Z}} = \{\psi_{m, n}\}_{n \in \mathbb{Z}}$ is an orthonormal basis of $W_m$ for a given $m$. But since the $W_m$ form a direct-sum decomposition of $L^2(\mathbb{R})$, we can conclude that $\{\psi_{m, n}(t) \}_{m, n \in \mathbb{Z}}$ is an orthonormal basis of $L^2(\mathbb{R})$.
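We can numerically confirm the cross-scale orthonormality of the Haar family $\{\psi_{m,n}\}$ (a sketch with $T = 1$; inner products are approximated on a grid of step $1/256$, which is dyadic, so the Haar inner products come out essentially exact):

```python
import numpy as np

T, h = 1.0, 1.0 / 256
t = np.arange(-8.0, 8.0, h)

def psi_mn(m, n):
    # Haar wavelet psi_{m,n}: +1 on [0, 1/2), -1 on [1/2, 1), scaled and shifted
    u = (t - n * 2.0 ** m * T) / 2.0 ** m
    return (((u >= 0) & (u < 0.5)).astype(float)
            - ((u >= 0.5) & (u < 1.0)).astype(float)) / np.sqrt(2.0 ** m)

idx = [(0, 0), (0, 1), (1, 0), (-1, 2), (2, -1)]
gram = np.array([[h * np.dot(psi_mn(m, n), psi_mn(mq, nq)) for (mq, nq) in idx]
                 for (m, n) in idx])
```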


Again, we consider the case where $V_0$ is the space of functions that are constant on every interval $[kT, (k + 1)T]$, $k \in \mathbb{Z}$. Then $x(t) \in V_m$ if and only if $x(2^m t) \in V_0$, so $V_m$ is the space of functions that are constant on every interval $[k2^m T, (k + 1)2^m T]$, $k \in \mathbb{Z}$. Now $W_m = \mathrm{span}\left\{\frac{1}{\sqrt{2^m}} \psi_n \left( \frac{t}{2^m}\right) \right\}_{n \in \mathbb{Z}}$ and $\psi_n\left(\frac{t}{2^m}\right) = \psi\left(\frac{t}{2^m} - nT \right) = \psi\left(\frac{1}{2^m}(t - n2^mT) \right)$. So if we set $T_m = 2^m T$, then $\psi_{m, n}(t) = \frac{1}{\sqrt{2^m}} \psi\left(\frac{1}{2^m} (t - n T_m)\right)$, which is a signal equal to $\frac{1}{\sqrt{T_m}}$ from $nT_m$ to $(n + 1/2)T_m$ and $-\frac{1}{\sqrt{T_m}}$ from $(n + 1/2)T_m$ to $(n + 1)T_m$. These functions range from extremely narrow and tall to very wide and short, with the scaling chosen so that the norm is always $1$ and they are all orthogonal to each other. Because these wavelets span $L^2(\mathbb{R})$, we can decompose any signal as a sum of them. Each wavelet has finite support; if you instead decompose a signal as a sum of sinusoids, all of the sinusoids have infinite support.
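The unit norms and mutual orthogonality of the Haar wavelets $\psi_{m,n}$ can be checked numerically. The sketch below assumes $T = 1$ and approximates the $L^2$ inner products by Riemann sums on a finite grid, so the results are only accurate to roughly the grid spacing:

```python
import numpy as np

# Dense grid covering the supports used below; inner products are Riemann sums
T = 1.0
dt = 1e-3
t = np.arange(-8.0, 8.0, dt)

def psi(u):
    """Haar mother wavelet for T = 1: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return np.where((u >= 0) & (u < T/2), 1.0, 0.0) \
         - np.where((u >= T/2) & (u < T), 1.0, 0.0)

def psi_mn(m, n):
    """psi_{m,n}(t) = 2^{-m/2} psi((t - n*T_m) / 2^m) with T_m = 2^m * T."""
    Tm = 2.0**m * T
    return 2.0**(-m/2) * psi((t - n*Tm) / 2.0**m)

ip = lambda f, g: dt * np.sum(f * g)   # approximate L^2 inner product

print(ip(psi_mn(0, 0), psi_mn(0, 0)))  # ~1: unit norm
print(ip(psi_mn(1, 0), psi_mn(1, 0)))  # ~1: norm preserved across scales
print(ip(psi_mn(0, 0), psi_mn(0, 1)))  # ~0: orthogonal shifts
print(ip(psi_mn(0, 0), psi_mn(1, 0)))  # ~0: orthogonal scales
```

The names `psi_mn` and `ip` are just illustrative; the point is that the $2^{-m/2}$ scaling is exactly what keeps every wavelet at unit norm.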

Continuous-Time Discrete Wavelet Transform

Let $\psi(t)$ be a wavelet with respect to $T$, with $\psi_{m, n}(t) = \frac{1}{\sqrt{2^m}} \psi\left(\frac{t - nT_m}{2^m}\right)$ where $T_m = 2^mT$. Then we know that $\{\psi_{m, n}(t)\}_{m, n \in \mathbb{Z}}$ is an orthonormal basis of $L^2(\mathbb{R})$. Therefore, we have the isomorphism $W : L^2(\mathbb{R}) \rightarrow \ell^2(\mathbb{Z}^2)$ where $x(t) \rightarrow X[m, n] = \langle x, \psi_{m, n} \rangle$. Note that the target space $\ell^2(\mathbb{Z}^2)$ consists of square-summable sequences indexed by pairs of integers ($\mathbb{Z}^2$ has the same cardinality as $\mathbb{Z}$), with inner product $\langle X, Y \rangle = \sum\limits_{m \in \mathbb{Z}} \sum\limits_{n \in \mathbb{Z}} X[m, n]Y[m, n]^\ast$. $W$ is called the wavelet transform; it maps a one-dimensional continuous-time function to a two-dimensional discrete function. The inverse wavelet transform $W^{-1} : \ell^2(\mathbb{Z}^2) \rightarrow L^2(\mathbb{R})$ maps $X[m, n] \rightarrow x(t) = \sum\limits_{m, n \in \mathbb{Z}} X[m, n] \psi_{m, n}(t)$. If you change the wavelet, you get a new wavelet transform (unlike the Fourier transform, which is unique). Once you take the wavelet transform, you have decomposed $x(t)$ into a sum of infinitely many scaled and shifted versions of $\psi(t)$ (built, e.g., from sinc functions, piecewise linear functions, splines, etc.). As $m$ increases, the time support increases and the frequency support decreases. Because the wavelet transform is an isomorphism, it obeys Parseval's equality: $\int_{-\infty}^\infty |x(t)|^2 \, dt = \sum_{m, n \in \mathbb{Z}} |X[m, n]|^2$. Wavelets are useful and widely used in practice.
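Parseval's equality is easy to observe on the discrete analogue of the Haar case. The sketch below (a minimal implementation, not the course's notation) runs an orthonormal Haar analysis for a few levels and checks that the energy of the coefficients equals the energy of the input:

```python
import numpy as np

def haar_dwt(x, levels):
    """Orthonormal discrete Haar transform: returns the detail (wavelet)
    coefficients of each level plus the final approximation.
    Assumes len(x) is a multiple of 2**levels."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # high-pass coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # low-pass coefficients
        details.append(d)
    return details, a

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
details, approx = haar_dwt(x, 3)

# Parseval: an orthonormal transform preserves energy
energy_in = np.sum(x**2)
energy_out = sum(np.sum(d**2) for d in details) + np.sum(approx**2)
print(np.isclose(energy_in, energy_out))  # True
```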

Discrete-Time Filter Banks

The discrete-time theory closely parallels the continuous-time theory developed above.

Discrete-time Decimation-Interpolation

In a discrete-time decimation-interpolation system, we input some discrete-time signal $x[n]$, which is filtered by $\bar{\tilde{g}}[n]$ (where $\bar{g}[n] = g[-n]$ denotes time reversal), downsampled by a factor of $M$ to yield $y[k]$, upsampled by a factor of $M$, and filtered by $g[n]$ to yield $z[n]$. The filtering is done by computing the convolution $g[n] \ast x[n] = \sum\limits_{k \in \mathbb{Z}} g[k] x[n - k]$. Convolution satisfies the following properties:

  1. $x[n] \ast y[n] = y[n] \ast x[n]$
  2. $\delta[n - n_0] \ast x[n] = x[n - n_0]$, where $\delta[n] = 1$ when $n = 0$ and $0$ otherwise. $g[n]$ is called the impulse response of the filter because $\delta[n] \ast g[n] = g[n]$.
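Both properties are easy to confirm numerically on finite sequences; the signals below are arbitrary examples:

```python
import numpy as np

x = np.array([1., 2., 3.])
g = np.array([0.5, -1., 0.25])

# Property 1 (commutativity): x * g == g * x
print(np.allclose(np.convolve(x, g), np.convolve(g, x)))  # True

# Property 2: convolving with a shifted impulse delays the signal
delta_shift2 = np.array([0., 0., 1.])   # delta[n - 2] on a length-3 support
print(np.convolve(delta_shift2, x))     # [0. 0. 1. 2. 3.] -- x delayed by 2
```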

Downsampling by a factor of $M$ corresponds to computing $y[k] = x[kM]$, in contrast to continuous-time where $y[k] = x(kT)$. In discrete time, both the input and output signals are discrete, but we shouldn't think of them as living in the same space. Upsampling is, in a sense, the opposite: given $y[k]$ we compute $z[n] = \sum\limits_{k \in \mathbb{Z}} y[k] \delta[n - kM]$. Upsampling essentially “stretches” the signal and fills in the gaps with zeros. So, the decimation portion of the system computes $y[k] = (\bar{\tilde{g}} \ast x)[kM]$ or alternatively \begin{align*} y[k] &= \sum\limits_{n \in \mathbb{Z}} x[n] \bar{\tilde{g}}[kM - n]\\ &= \sum_{n \in \mathbb{Z}} x[n] \tilde{g}[n - kM]\\ &= \sum_{n \in \mathbb{Z}} x[n] \tilde{g}_k[n]\\ &= \langle x, \tilde{g}_k \rangle \end{align*} where the inner product is in $\ell^2(\mathbb{Z})$ and $\tilde{g}_k[n] = \tilde{g}[n - kM]$. Similarly, the interpolation portion computes \begin{align*} z[n] &= g[n] \ast \left(\sum_{k \in \mathbb{Z}} y[k] \delta[n - kM] \right)\\ &= \sum\limits_{k \in \mathbb{Z}} y[k] \left(g[n] \ast \delta[n - kM]\right)\\ &= \sum_{k \in \mathbb{Z}} y[k] g[n - kM]\\ &= \sum_{k \in \mathbb{Z}} y[k] g_k[n]\\ &= \sum_{k \in \mathbb{Z}} y[k] g_k \in \mathrm{span} \{g_k\}_{k \in \mathbb{Z}} \end{align*} where we use vector notation, e.g. $z = (z[n])_{n \in \mathbb{Z}}$, and as above $g_k[n] = g[n - kM]$. So, in a Euclidian view, decimation computes $y = (\langle x, \tilde{g}_k \rangle)_{k \in \mathbb{Z}}$ and interpolation computes $z = \sum\limits_{k \in \mathbb{Z}} \langle x, \tilde{g}_k \rangle g_k$. So, if $\{g_k\}_{k \in \mathbb{Z}}$ and $\{\tilde{g}_k\}_{k \in \mathbb{Z}}$ are dual bases of each other, then $z = P_V(x)$ where $V$ is the common span of $\{g_k\}_{k \in \mathbb{Z}}$ and $\{\tilde{g}_k\}_{k \in \mathbb{Z}}$.
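The equivalence between the two views can be checked numerically. This sketch assumes the orthonormal Haar low-pass $g = (1, 1)/\sqrt{2}$ with $M = 2$ and takes $\tilde{g} = g$ (the self-dual case), and verifies that the filter-downsample-upsample-filter chain produces exactly $\sum_k \langle x, g_k \rangle g_k$:

```python
import numpy as np

M = 2
g = np.array([1., 1.]) / np.sqrt(2)   # orthonormal under shifts by M = 2
N = 16
rng = np.random.default_rng(1)
x = rng.standard_normal(N)

# Euclidian view: y[k] = <x, g_k>, z = sum_k y[k] g_k with g_k[n] = g[n - kM]
z_direct = np.zeros(N)
for k in range(N // M):
    g_k = np.zeros(N)
    g_k[k*M : k*M + len(g)] = g
    z_direct += np.dot(x, g_k) * g_k

# System view: filter by time-reversed g, downsample, upsample, filter by g
v = np.convolve(x, g[::-1], mode='full')
y = v[len(g) - 1 :: M]                 # y[k] = <x, g_k>: samples at n = kM
up = np.zeros(N)
up[::M] = y[: N // M]
z_system = np.convolve(up, g, mode='full')[:N]

print(np.allclose(z_direct, z_system))  # True
```

The offset `len(g) - 1` compensates for the delay that `np.convolve` introduces relative to the correlation $\langle x, g_k \rangle$.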

Review of z-transform

The z-transform is a convenient notation for representing filters. It maps $x[n]$ to $X(z) = \sum\limits_{n \in \mathbb{Z}}x[n]z^{-n}$, which is invertible (just read off the coefficients of the resulting series). Here $z$ is just a formal object such that $z^p z^q = z^{p + q}$ and $z^0 = 1$. Properties of the z-transform include

  1. $x[n] \ast y[n]$ is transformed to $X(z)Y(z)$ - it transforms convolution to multiplication.
  2. Linearity: $\alpha x[n] + \beta y[n] \rightarrow_z \alpha X(z) + \beta Y(z)$
  3. The Fourier transform $X(\omega) = \sum_{n \in \mathbb{Z}} x[n] e^{-j\omega n}$ can be seen as $X(z) |_{z = e^{j \omega}}$. So, properties of the z-transform imply properties of the Fourier transform.
  4. $\delta[n] \rightarrow_z 1$
  5. $\delta[n - n_0] \rightarrow_z z^{-n_0}$
  6. $x[n - n_0] \rightarrow_z z^{-n_0} X(z)$
  7. $x[-n] \rightarrow_z X(z^{-1})$
  8. $(-1)^n x[n] \rightarrow_z X(-z)$ (in the Fourier domain this gives $X(\omega + \pi)$) because $\sum_{n \in \mathbb{Z}} (-1)^n x[n] z^{-n} = \sum_{n \in \mathbb{Z}} (-1)^{-n} x[n] z^{-n} = \sum_{n \in \mathbb{Z}} x[n] (-z)^{-n} = X(-z)$.
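Property 1 is just the statement that multiplying polynomials in $z^{-1}$ convolves their coefficient sequences, which the sequences below (arbitrary examples) confirm:

```python
import numpy as np

# x[n] <-> X(z): the coefficient of z^{-n} is x[n].  Multiplying the
# polynomials (in z^{-1}) is the same as convolving the sequences.
x = np.array([1., 2., 1.])    # X(z) = 1 + 2 z^{-1} + z^{-2}
y = np.array([1., -1.])       # Y(z) = 1 - z^{-1}

prod = np.polynomial.polynomial.polymul(x, y)  # coefficients of X(z) Y(z)
conv = np.convolve(x, y)
print(np.allclose(prod, conv))  # True
print(conv)                     # [ 1.  1. -1. -1.]
```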

z-domain and Fourier domain analysis of decimation-interpolation

Recall that upsampling by $M$ computes $z[n] = \sum\limits_{k \in \mathbb{Z}} y[k]\delta[n - kM]$, which has z-transform \begin{align*} Z(z) &= \sum\limits_{k \in \mathbb{Z}}y[k] z^{-kM}\\ &= \sum_{k \in \mathbb{Z}} y[k](z^M)^{-k}\\ &= Y(z^M) \end{align*} So, replacing $z$ by $z^M$ corresponds to upsampling by a factor of $M$ in the time domain. In the Fourier domain, we have $Z(\omega) = Y(M\omega)$, so the Fourier representation is contracted by a factor of $M$; since $Z(\omega)$ remains $2\pi$-periodic, $M$ compressed copies of the original spectrum end up in the baseband. Downsampling is not the inverse of upsampling because of loss of information. As in the continuous-time case, we study the downsampling and upsampling operations grouped together, which sets $z[n] = x[n]$ when $n$ is a multiple of $M$ and $0$ otherwise. We denote this relation $z[n] = p_M[n] x[n]$ where $p_M$ is the $M$-periodic impulse train; this operation is called $M$-periodic impulse train modulation. We can therefore relate $X(\omega)$ and $Z(\omega)$ by computing the Fourier expansion of $p_M$. We can write $p_M[n] = \frac{1}{M}\sum\limits_{k = 0}^{M - 1} e^{j \omega_k n}$ where $\omega_k = k\frac{2\pi}{M}$: if we define $s[n] = \sum\limits_{k = 0}^{M - 1} e^{j \omega_k n}$, then $s[n]$ is $M$-periodic with $s[0] = M$, and $s[n] = \sum\limits_{k = 0}^{M - 1} e^{jk\frac{2\pi}{M}n} = 0$ when $n$ is not a multiple of $M$ (it is a geometric series), so $s[n] = Mp_M[n]$. So, we can write $z[n] = p_M[n] x[n] = \frac{1}{M} \sum\limits_{k = 0}^{M - 1} e^{j \omega_k n} x[n]$, which gives in the Fourier domain $Z(\omega) = \frac{1}{M} \sum\limits_{k = 0}^{M - 1} X(\omega - \omega_k)$ with $\omega_k = k\frac{2\pi}{M}$.
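The aliasing formula $Z(\omega) = \frac{1}{M} \sum_{k} X(\omega - \omega_k)$ has an exact finite-length analogue on the DFT, where a shift by $\omega_k = k\frac{2\pi}{M}$ becomes a circular shift by $kN/M$ bins. The sketch below (arbitrary random input, $M = 2$, $N$ a multiple of $M$) checks it:

```python
import numpy as np

M, N = 2, 16
rng = np.random.default_rng(2)
x = rng.standard_normal(N)

# M-periodic impulse train modulation: keep every M-th sample, zero the rest
z = x.copy()
z[np.arange(N) % M != 0] = 0.0

Z = np.fft.fft(z)
X = np.fft.fft(x)
# Z(w) = (1/M) sum_k X(w - 2*pi*k/M): each shift is a roll by k*N/M bins
Z_pred = sum(np.roll(X, k * N // M) for k in range(M)) / M
print(np.allclose(Z, Z_pred))  # True
```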

Given $g[n]$, we consider the family of sequences $\{g_k[n]\}_{k \in \mathbb{Z}}$ where $g_k[n] = g[n - kM]$ for some fixed integer $M \ge 1$. We saw that $\{g_k[n]\}_{k \in \mathbb{Z}}$ is orthonormal if and only if $\langle g, g_k \rangle = \delta[k]$. If you input $x[n]$ into a decimation-interpolation system which filters by $g[n]$ and down/upsamples by $M$, then after analysis we have $y[k] = \langle x, g_k \rangle$. So, if we input $x[n] = g[n]$ into the system, then after filtering we have $v[n] = (g \ast \bar{g})[n]$ and $y[k] = \langle g, g_k \rangle = (g \ast \bar{g})[kM]$. So, the family $\{g_k\}_{k \in \mathbb{Z}}$ is orthonormal if and only if $y[k] = \delta[k]$, or equivalently, after upsampling, $w[n] = \delta[n]$. So, we can check that $\{g_k\}_{k \in \mathbb{Z}}$ is orthonormal by checking whether $w[n] = \delta[n]$, or in the Fourier domain $W(\omega) = 1$. We know that $W(\omega) = \frac{1}{M}\sum\limits_{k = 0}^{M - 1} V(\omega - \omega_k)$ where $\omega_k = k\frac{2\pi}{M}$ and $V(\omega) = |G(\omega)|^2$. So, we can conclude that $\{g_k\}_{k \in \mathbb{Z}}$ is orthonormal if and only if $\frac{1}{M} \sum\limits_{k = 0}^{M - 1}|G(\omega - \omega_k)|^2 = 1$, or equivalently $\sum\limits_{k = 0}^{M - 1}|G(\omega - \omega_k)|^2 = M$. This is analogous to the continuous-time case, where we had an infinite sum.
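As an example, the frequency-domain orthonormality condition can be evaluated on a grid for the Haar low-pass filter (introduced below in the two-channel section), where it holds exactly:

```python
import numpy as np

# Haar low-pass filter: {g[n - 2k]} is orthonormal for M = 2
g = np.array([1., 1.]) / np.sqrt(2)
M = 2

def G(om):
    """Frequency response G(w) = sum_n g[n] e^{-j w n} on an array of frequencies."""
    n = np.arange(len(g))
    return np.sum(g[:, None] * np.exp(-1j * n[:, None] * om), axis=0)

w = np.linspace(0, 2*np.pi, 257)
# Check sum_{k=0}^{M-1} |G(w - 2*pi*k/M)|^2 = M at every frequency
total = sum(np.abs(G(w - 2*np.pi*k/M))**2 for k in range(M))
print(np.allclose(total, M))  # True
```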

Orthogonal Two-Channel Filterbanks

We would like to find two sequences $g[n]$ and $h[n]$ such that combining the outputs of two decimation-interpolation systems, one filtering with $g[n]$ and one filtering with $h[n]$, both downsampling/upsampling by a factor of 2, yields an output $x^\prime[n]$ which is a perfect reconstruction of the input $x[n]$. To do so, we start by designing some $g[n]$ such that $\{g_k\}_{k \in \mathbb{Z}}$ is orthonormal with $g_k[n] = g[n - 2k]$ (because $M = 2$). Then, we take $h[n] = (-1)^n g[L - n]$ where $L$ is any odd integer. With this design, perfect reconstruction is guaranteed, because $\{g[n - 2k], h[n - 2k^\prime]\}_{k, k^\prime \in \mathbb{Z}}$ turns out to be an orthonormal basis of $\ell^2(\mathbb{Z})$. It suffices to check this for $L = 1$: for any other odd $L = 1 + 2p$, $$ h[n - 2k^\prime] = (-1)^{n - 2k^\prime} g[1 - n + 2k^\prime] = (-1)^n g[(1 + 2p) - (n - 2(k^\prime - p))] $$ In other words, changing $L$ by an even amount merely shifts $h$ by an even number of samples, which does not change the family $\{h[n - 2k^\prime]\}_{k^\prime \in \mathbb{Z}}$. Combining the outputs of the two systems, we then have $$ x^\prime[n] = x[n] = \sum_{k \in \mathbb{Z}} \langle x, g_k \rangle g_k[n] + \sum_{k \in \mathbb{Z}} \langle x, h_k \rangle h_k[n] = x_{\ell}[n] + x_h[n] $$
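Perfect reconstruction can be verified directly for the Haar pair defined in the next paragraph. The helper `analysis_synthesis` below is a hypothetical name for the sum-of-projections form $x' = \sum_k \langle x, g_k\rangle g_k + \sum_k \langle x, h_k\rangle h_k$ on a finite signal of even length:

```python
import numpy as np

def analysis_synthesis(x, g, h):
    """Two-channel filter bank with M = 2: project onto the double shifts of
    g and of h, and add the two projections."""
    N = len(x)
    out = np.zeros(N)
    for f in (g, h):
        for k in range(N // 2):
            f_k = np.zeros(N)
            f_k[2*k : 2*k + len(f)] = f
            out += np.dot(x, f_k) * f_k
    return out

g = np.array([1., 1.]) / np.sqrt(2)    # Haar low-pass
h = np.array([1., -1.]) / np.sqrt(2)   # h[n] = (-1)^n g[1 - n], i.e. L = 1

rng = np.random.default_rng(3)
x = rng.standard_normal(32)
print(np.allclose(analysis_synthesis(x, g, h), x))  # True: perfect reconstruction
```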


If we take $g[n] = \frac{1}{\sqrt{2}} \delta[n] + \frac{1}{\sqrt{2}}\delta[n - 1]$ and $L = 1$, then $h[n] = \frac{1}{\sqrt{2}}\delta[n] - \frac{1}{\sqrt{2}} \delta[n - 1]$, and we call $g[n]$ and $h[n]$ Haar filters. In the z-domain, $G(z) = \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} z^{-1} = \frac{1}{\sqrt{2}}(1 + z^{-1})$ and $H(z) = \frac{1}{\sqrt{2}}(1 - z^{-1})$, or in the Fourier domain $G(\omega) = \frac{1}{\sqrt{2}}(1 + e^{-j\omega})$ and $H(\omega) = \frac{1}{\sqrt{2}} (1 - e^{-j\omega})$. We can compute $|G(\omega)|^2 = 2\cos^2\left(\frac{\omega}{2}\right)$ and $|H(\omega)|^2 = 2\sin^2\left(\frac{\omega}{2}\right)$. So, they are effectively “fuzzy” low-pass and high-pass filters. We would like to make them sharper so that you see fewer artifacts due to downsampling and upsampling. Note that $G(\omega = \pi) = 0$ because $G(z = -1) = 0$, but it is a “weak” zero because it is first-order - we'd like $z = -1$ to be a higher-order zero, which we can achieve by taking $G(z) = (1 + z^{-1})^N$ with $N \ge 1$. One way to make the zero higher order would be to compute $g[n] = g_0[n] \ast \bar{g}_0[n]$ with $G_0(z) = 1 + z^{-1}$ (ignoring normalization), which gives in the z-domain \begin{align*} G_1(z) &= G_0(z) G_0(z^{-1})\\ &= (1 + z^{-1})(1 + z)\\ &= z(1 + z^{-1})(z^{-1} + 1)\\ &= z(1 + z^{-1})^2 \end{align*} so that $z = -1$ is a double zero of $G_1(z)$. However, the resulting filters may no longer be orthonormal.
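The two squared magnitude responses above can be checked numerically on a frequency grid; they are exactly power-complementary, $|G(\omega)|^2 + |H(\omega)|^2 = 2$:

```python
import numpy as np

g = np.array([1., 1.]) / np.sqrt(2)    # Haar low-pass
h = np.array([1., -1.]) / np.sqrt(2)   # Haar high-pass

w = np.linspace(-np.pi, np.pi, 201)
n = np.arange(2)
Gw = g @ np.exp(-1j * np.outer(n, w))  # G(w) = sum_n g[n] e^{-j w n}
Hw = h @ np.exp(-1j * np.outer(n, w))

# |G|^2 = 2 cos^2(w/2) and |H|^2 = 2 sin^2(w/2): complementary "fuzzy" filters
print(np.allclose(np.abs(Gw)**2, 2*np.cos(w/2)**2))  # True
print(np.allclose(np.abs(Hw)**2, 2*np.sin(w/2)**2))  # True
```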

Daubechies Filters

If you try to take a filter whose z-transform is $G(z) = (1 + z^{-1})^N$, it will not be orthonormal. But, we can try to multiply by some function $R(z)$, taking $G(z) = R(z)(1 + z^{-1})^N$, so that $\{g[n - 2k]\}_{k \in \mathbb{Z}}$ becomes orthonormal. For example, if $N = 2$, then $G(z) = R(z)(1 + z^{-1})^2$. We'd like the correction $R(z)$ to be as small as possible, so we try the simplest candidate, $R(z) = a + bz^{-1}$. Then, we can try to find $a$ and $b$ so that $\{g[n - 2k]\}_{k \in \mathbb{Z}}$ is orthonormal, or equivalently $\langle g, g_k \rangle = (g \ast \bar{g})[2k] = \delta[k]$. We can derive $(g \ast \bar{g})[n]$ and force $(g \ast \bar{g})[2k]$ to be $\delta[k]$. Since $G(z) = R(z)(1 + z^{-1})^2$, in the time domain we have $r[n] = a\delta[n] + b\delta[n - 1]$ and $g_0[n] = \delta[n] + 2\delta[n - 1] + \delta[n - 2] \rightarrow_z (1 + z^{-1})^2$. So, we can compute $$ (g \ast \bar{g})[n] = (r \ast \bar{r})[n] \ast (g_0 \ast \bar{g}_0)[n] $$ where $$ (r \ast \bar{r})[n] = ab\delta[n + 1] + (a^2 + b^2)\delta[n] + ab\delta[n - 1] $$ and $$ (g_0 \ast \bar{g}_0)[n] = \delta[n + 2] + 4\delta[n + 1] + 6\delta[n] + 4\delta[n - 1] + \delta[n - 2] $$ The requirement that $(g \ast \bar{g})[2k] = \delta[k]$ means that we want the resulting signal to be $1$ at $0$, $0$ at all other even samples, and anything at the odd samples. All values of $n$ outside of $[-3, 3]$ are already $0$, and the sequence is even, so we only need to set $(g \ast \bar{g})[0] = 1$ and $(g \ast \bar{g})[2] = 0$. From above, we find that $(g \ast \bar{g})[0] = 6(a^2 + b^2) + 8ab$ and $(g \ast \bar{g})[2] = a^2 + b^2 + 4ab$. We have two equations and two unknowns, and solving gives $a = \frac{1 + \sqrt{3}}{4\sqrt{2}}$ and $b = \frac{1 - \sqrt{3}}{4\sqrt{2}}$.
So, in the end we find that $G(z) = \frac{1}{4\sqrt{2}} ((1 + \sqrt{3}) + (1 - \sqrt{3}) z^{-1})(1 + z^{-1})^2$, and we transform back to the time domain to compute $h[n] = (-1)^n g[L - n]$; it is convenient to take $L = 3$. These $g[n]$, $h[n]$ are called the Daubechies filters of length 4. We can generalize, achieving an $N$th-order zero at $z = -1$ by using a filter of length $2N$. One drawback of Daubechies filters is that they can never be symmetric, so they are never linear phase (i.e. never amount to just a time delay).
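A short numerical check of the derivation: build $g[n]$ by convolving $r[n]$ with $g_0[n]$ and confirm that the double shifts are orthonormal, i.e. that the autocorrelation is $1$ at lag $0$ and $0$ at lag $2$:

```python
import numpy as np

# Daubechies length-4 low-pass: G(z) = R(z)(1 + z^{-1})^2 with R(z) = a + b z^{-1}
a = (1 + np.sqrt(3)) / (4 * np.sqrt(2))
b = (1 - np.sqrt(3)) / (4 * np.sqrt(2))
g = np.convolve([a, b], [1., 2., 1.])   # r[n] * g_0[n]
print(g)

# Orthonormality of double shifts: (g * g-bar)[2k] = delta[k]
auto = np.convolve(g, g[::-1])          # autocorrelation; lag 0 sits at index 3
print(np.isclose(auto[3], 1.0))         # lag 0 -> 1 (unit norm)
print(np.isclose(auto[5], 0.0))         # lag 2 -> 0 (orthogonal to double shift)
```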

Iterated filter banks

Given an orthogonal perfect reconstruction filter bank (orthogonal because of the orthogonality of the filters), we can place a perfect reconstruction filter bank between the downsampling and upsampling steps of one of the decimation-interpolation systems. Nothing changes, because the inner bank reconstructs its input perfectly. In particular, we can nest the same overall perfect reconstruction filter bank inside one of the decimation-interpolation systems; the nested system then performs decimation and interpolation twice along its branches. We can further simplify the system with the following theorems:

Theorem We can replace filtering by $G(z)$ followed by upsampling by $M$ with upsampling by $M$ followed by filtering by $G(z^M)$.

Proof In order to interchange a filter and an upsampler, you must upsample the filter itself. If $y[n] = (x \ast g)[n]$ and $z[n]$ is $y[n]$ upsampled by $M$, then $Z(z) = Y(z^M)$ and $Y(z) = G(z)X(z)$, so $Z(z) = G(z^M)X(z^M)$. Since $X(z^M)$ is the z-transform of $x[n]$ upsampled by $M$, this is equivalent to upsampling by $M$ and then filtering by $G(z^M)$.

Theorem We can replace downsampling by $M$ followed by filtering by $G(z)$ with filtering by $G(z^M)$ followed by downsampling by $M$.

Proof Filtering $x[n]$ by $G(z^M)$ computes $Y^\prime(z) = G(z^M)X(z)$, which is the same as convolving $x[n]$ with $g[n]$ upsampled by $M$; in the time domain, $y^\prime[n] = \sum\limits_{k \in \mathbb{Z}} g[k] x[n - kM]$. So, downsampling gives $y^\prime[nM] = \sum\limits_{k \in \mathbb{Z}} g[k] x[nM - kM] = \sum\limits_{k \in \mathbb{Z}} g[k] x[(n - k)M]$. But with $y[n] = x[nM]$, this is $y^\prime[nM] = (g \ast y)[n]$, which is just filtering the downsampled signal $y$ by $g$.
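Both noble identities can be confirmed on finite sequences. The sketch below uses an arbitrary filter and $M = 2$; the slicing only trims the tails so that both sides of each identity cover the same samples:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 2
x = rng.standard_normal(12)
g = np.array([1., 0.5, -0.25])

def upsample(v, M):
    out = np.zeros(len(v) * M)
    out[::M] = v
    return out

g_up = upsample(g, M)[: (len(g) - 1) * M + 1]   # impulse response of G(z^M)

# Identity 1: filter by G(z) then upsample == upsample then filter by G(z^M)
lhs = upsample(np.convolve(x, g), M)
rhs = np.convolve(upsample(x, M), g_up)
print(np.allclose(lhs, rhs[: len(lhs)]))  # True

# Identity 2: filter by G(z^M) then downsample == downsample then filter by G(z)
lhs2 = np.convolve(x, g_up)[::M]
rhs2 = np.convolve(x[::M], g)
print(np.allclose(lhs2[: len(rhs2)], rhs2))  # True
```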

So, we can simplify our nested system by combining the filters and up/downsampling by $4$.

elen_e6860_course_notes.txt · Last modified: 2015/12/17 21:59 (external edit)