
# Columbia EE DQE

This page collects notes for the Columbia University Electrical Engineering Doctoral Qualifying Exam. The sections correspond to the topics listed in the official exam syllabus.

## Circuits and Electronics

H+H refers to “The Art of Electronics”, 2nd Edition, by Horowitz and Hill.

### Resistors, capacitors, inductors, ideal transformers, independent sources, dependent sources and operational amplifiers

“The current through a metallic conductor is proportional to the voltage across it (Ohm's Law): $R = V/I$.” H+H p.4

“The resistance of two resistors in series is $R = R_1 + R_2$, in parallel $R = \frac{1}{1/R_1 + 1/R_2}$.” H+H p.6

For a voltage divider, $V_{out} = \frac{R_2}{R_1 + R_2}V_{in}$. H+H p.8
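
As a quick numerical check of the divider formula (the component values here are arbitrary illustrations, not from H+H):

```python
# Voltage divider: V_out = R2/(R1 + R2) * V_in
# Equal resistors split the input voltage in half.
R1, R2 = 10e3, 10e3   # ohms (illustrative values)
V_in = 5.0            # volts

V_out = V_in * R2 / (R1 + R2)
print(V_out)  # 2.5
```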

“A perfect (independent) voltage source maintains a fixed voltage drop across its terminals, regardless of load resistance.” (same for current) H+H p.9

“Thevenin's theorem states that any two-terminal network of resistors and voltage sources is equivalent to a single resistor in series with a single voltage source.” This resistance is the equivalent internal resistance, which will cause the voltage output to change under load. H+H p.11

“Capacitors are devices that might be considered simply frequency-dependent resistors.” $I = C\frac{dV}{dt}$ H+H p.20

“The rate of current change in an inductor depends on the voltage applied across it.” $V = L\frac{dI}{dt}$ “An inductor is, in a real sense, the opposite of a capacitor.” H+H p.28

“A transformer is a device consisting of two closely coupled coils (called primary and secondary). An AC voltage applied to the primary appears across the secondary, with a voltage multiplication proportional to the turns ratio of the transformer and a current multiplication inversely proportional to the turns ratio.” H+H p.28

“Both capacitors and inductors are linear devices… the output of a linear circuit, driven with a sine wave at some frequency f, is itself a sine wave at the same frequency (with, at most, changed amplitude and phase).” H+H p.29

For a capacitor, the current is $I = \frac{V}{1/\omega C}$ - it behaves like a frequency-dependent resistor $R = 1/\omega C$, but in addition the current is 90 degrees out of phase with the voltage. H+H p.30 The complex reactance for a capacitor is $X_C = -j/\omega C = 1/j\omega C$; for an inductor, $X_L = j\omega L$.

Ohm's law for impedance is $V = IZ$; $Z$ is some impedance - a resistance or a reactance. H+H p.32
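
A small sketch of phasor analysis with complex impedances (the series-RC values and drive frequency are arbitrary assumptions):

```python
import numpy as np

# Series RC driven by a 1 V phasor at angular frequency w.
R, C = 1e3, 1e-6          # 1 kohm, 1 uF (illustrative values)
w = 2 * np.pi * 1e3       # 1 kHz

Z_C = 1 / (1j * w * C)    # capacitive reactance X_C = 1/(j*w*C)
Z = R + Z_C               # impedances in series add
V = 1.0
I = V / Z                 # Ohm's law for impedance: V = IZ
phase = np.angle(I)       # positive: current leads voltage in a capacitive circuit
```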

For reactive circuits and AC voltages, the power is defined as $P = \frac{1}{T}\int_0^T V(t)I(t) dt$ because it fluctuates over time. The average power is given by the real part of $VI^\ast$. H+H p.34

A voltage divider for impedance is the same: $V_{out} = V_{in}\frac{Z_2}{Z_1 + Z_2}$. H+H p.35

Operational amplifiers are “very high gain dc-coupled differential amplifiers with single-ended outputs.” “The output goes positive when the noninverting input (+) goes more positive than the inverting input (-), and vice versa.” H+H p.176

Op amp rules: “The output attempts to do whatever is necessary to make the voltage difference between the inputs zero. The inputs draw no current.” H+H p.177

The op amp rules “will be obeyed only if the op-amp is in the active region, i.e. inputs and outputs not saturated at one of the supply voltages, the feedback must be arranged so that it is negative, there must always be feedback at DC in an op-amp circuit, and many op-amps have a relatively small maximum differential input voltage limit.” H+H p.182

The ideal op amp: “1. Input impedance = infinity, 2. Output impedance (open loop) = 0, 3. Voltage gain = infinity, 4. Common-mode voltage gain = 0, 5. $V_{out} = 0$ when both inputs are at the same voltage, 6. output can change instantaneously (infinite slew rate).” H+H p.185

### Charge, current, voltage, and power

“The voltage between two points is the cost in energy (work done) required to move a unit of positive charge from the more negative point (lower potential) to the more positive point (higher potential).” H+H p.2

“Current is the rate of flow of electric charge past a point.” H+H p.2

“The power (work per unit time) consumed by a circuit device is $P = VI$.” H+H p.3

### Kirchhoff's voltage and current laws

“The sum of currents into a point in a circuit equals the sum of the currents out… for a series circuit, the current is the same everywhere.” H+H p.3

“Things hooked in parallel have the same voltage across them.” H+H p.3

### Analysis of RL, RC and RLC circuits (up to second order) using differential equations

For a resistor and capacitor in parallel, $C\frac{dV}{dt} = I = -\frac{V}{R}$, which is a differential equation with solution $V = Ae^{-t/RC}$, where $RC$ is the time constant of the circuit. For a resistor and capacitor “voltage divider”, the current at the junction is $I = C\frac{dV}{dt} = \frac{V_i - V}{R}$, which has solution $V = V_i + Ae^{-t/RC}$. For a general $V_i$, the output voltage is $V(t) = \frac{1}{RC}\int_{-\infty}^t V_i(\tau)e^{-(t-\tau)/RC} d\tau$ - the output averages the input's past voltage history with a weighting factor $e^{-\Delta t/RC}$ (integrator). For a capacitor and resistor “voltage divider”, the output is approximately proportional to the rate of change of the input signal (differentiator).
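
The convolution-integral solution can be checked numerically against the closed-form step response $V(t) = V_i(1 - e^{-t/RC})$; a rough sketch with arbitrary component values:

```python
import numpy as np

# RC lowpass response to a unit step V_i(t) = 1 for t >= 0 (values are illustrative).
R, C = 1e3, 1e-6
tau = R * C

t = np.linspace(0, 5 * tau, 1000)
dt = t[1] - t[0]

# Convolution-integral form: V(t) = (1/RC) * integral of V_i(s) e^{-(t-s)/RC} ds
V_conv = np.array([np.sum(np.exp(-(tt - t[t <= tt]) / tau)) * dt / tau for tt in t])

# Closed-form step response for comparison
V_exact = 1 - np.exp(-t / tau)
```

The crude rectangle-rule integration tracks the exponential to within about half a percent here.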

### Diodes and diode circuits

A diode has a forward voltage drop - as current flows from the anode to cathode, the voltage drops by this amount. The diode allows hardly any current to flow from cathode to anode until the reverse breakdown voltage is reached. H+H p.44

### Basic logic gate circuits (CMOS, TTL, ECL)

For an nMOS switch, the source (bottom leg) is typically tied to ground and is used to pull signals down: when Gate = 1, Out = 0; when Gate = 0, Out = Z (high impedance). For a pMOS switch, the source (top leg) is typically tied to Vdd and is used to pull signals up: when Gate = 0, Out = 1 (Vdd); when Gate = 1, Out = Z (high impedance).

### Circuits using operational amplifiers

Inverting amplifier: voltage gain = $-R_f/R_i$, input impedance $R_i$. H+H p.178

Noninverting amplifier: gain = $1 + R_f/R_i$, input impedance theoretically infinite. H+H p.178
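
Both gain formulas, with arbitrary illustrative resistor values:

```python
# Ideal op-amp closed-loop gains (resistor values are illustrative assumptions).
R_i, R_f = 1e3, 10e3

gain_inverting = -R_f / R_i        # inverting amplifier: -Rf/Ri = -10
gain_noninverting = 1 + R_f / R_i  # noninverting amplifier: 1 + Rf/Ri = 11
```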

## Signals, Systems, and Communications

O+S refers to “Discrete-Time Signal Processing”, 2nd Edition, by Oppenheim and Schafer.

### Continuous-time and discrete-time systems

“Continuous-time signals are defined along a continuum of times and thus are represented by a continuous independent variable. Continuous-time signals are often referred to as analog signals. Discrete-time signals are defined at discrete times, and thus, the independent variable has discrete values; i.e. discrete-time signals are represented as sequences of numbers. Signals such as speech or images may have either a continuous- or a discrete-variable representation, and if certain conditions hold, these representations are entirely equivalent.” O+S p.8

### Linear time-invariant systems

“A discrete-time system is defined mathematically as a transformation or operator that maps an input sequence with values $x[n]$ into an output sequence with values $y[n]$.” O+S p.16

If $x_1[n]$ and $x_2[n]$ are the inputs to a system $T$, it is linear if and only if $T\{ax_1[n] + bx_2[n]\} = aT\{x_1[n]\} + bT\{x_2[n]\}$ O+S p.18

“A time-invariant system … is a system for which a time shift or delay of the input sequence causes a corresponding time shift in the output sequence. Specifically, suppose that a system transforms the input sequence values $x[n]$ into the output sequence with values $y[n]$. Then the system is said to be time-invariant if, for all $n_0$, the input sequence with values $x_1[n] = x[n - n_0]$ produces the output sequence with values $y_1[n] = y[n - n_0]$.” O+S p.20

“A linear time-invariant system is completely characterized by its impulse response $h[n]$ in the sense that, given $h[n]$, it is possible to … compute the output $y[n]$ due to any input $x[n]$” by $y[n] = \sum_{k = -\infty}^{\infty} x[k]h[n-k]$. O+S p.23

“Two LTI systems in cascade correspond to an LTI system with an impulse response that is the convolution of the impulse responses of the two systems.” O+S p.29

“The connection of two LTI systems in parallel is equivalent to a single system whose impulse response is the sum of the individual impulse responses” O+S p.30

“LTI systems are stable if and only if the impulse response is absolutely summable, i.e., if $S = \sum_{k=-\infty}^{\infty} |h[k]| < \infty$.” O+S p.30

An LTI system is causal if $h[n] = 0$ when $n < 0$. O+S p.31

LTI systems with an impulse response with only a finite number of nonzero samples are FIR; they are always stable. An IIR system has an impulse response of infinite length, and may not be stable. O+S p.32

“If a LTI system has impulse response $h[n]$, then its inverse system (if it exists) has impulse response $h_i[n]$ defined by the relation $h[n] \ast h_i[n] = \delta[n]$ (an impulse)”. O+S p.34

“Complex exponential sequences are eigenfunctions of LTI systems and the response to a sinusoidal input is sinusoidal with the same frequency as the input and with amplitude and phase determined by the system.” O+S p.40 If the input $x[n]$ to an LTI system is $e^{j\omega n}$ then the output (via convolution with the impulse response) is $y[n] = \sum_{k = -\infty}^\infty h[k]e^{j\omega(n-k)} = \left(\sum_{k=-\infty}^{\infty} h[k]e^{-j\omega k}\right)e^{j \omega n} = H(e^{j\omega})e^{j\omega n}$. $H(e^{j\omega})$ is the eigenvalue (called the frequency response of the system) for eigenfunction $e^{j\omega n}$. O+S p.40
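
The eigenfunction property can be verified numerically: filtering $e^{j\omega n}$ with an arbitrary example $h[n]$ just scales it by $H(e^{j\omega})$ once the start-up transient has passed:

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])       # arbitrary FIR impulse response
w = 0.4 * np.pi                     # arbitrary frequency
n = np.arange(50)
x = np.exp(1j * w * n)              # complex exponential input

# Frequency response at w: H(e^{jw}) = sum_k h[k] e^{-jwk}
H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))

y = np.convolve(x, h)[:len(n)]      # LTI output

# After the first len(h)-1 samples, y[n] = H(e^{jw}) * x[n] exactly.
```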

“The frequency response of discrete LTI systems is always a periodic function of the frequency variable $\omega$ with period $2\pi$.” O+S p.42

### Convolution

The convolution of two discrete-time signals $x_1[n]$ and $x_2[n]$ is $x_1[n] \ast x_2[n] = \sum_{k=-\infty}^{\infty} x_1[k]x_2[n-k]$. It's the sum, over all lags, of $x_1$ times $x_2$ flipped and shifted. Convolution is commutative, associative, and distributes over addition. The output of an LTI system can be computed as the input convolved with its impulse response. O+S p.23-28
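
A direct evaluation of the convolution sum, checked against NumPy's built-in (the sequences are arbitrary examples):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([0.5, -1.0, 0.25, 2.0])

# y[n] = sum_k x1[k] * x2[n-k], nonzero for 0 <= n <= len(x1)+len(x2)-2
N = len(x1) + len(x2) - 1
y = np.zeros(N)
for n in range(N):
    for k in range(len(x1)):
        if 0 <= n - k < len(x2):
            y[n] += x1[k] * x2[n - k]
```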

### Discrete-time Fourier transform

“Many sequences can be represented by a Fourier integral of the form $x[n] = \frac{1}{2\pi} \int_{-\pi}^\pi X(e^{j\omega})e^{j\omega n} d\omega$ where $X(e^{j\omega}) = \sum_{n=-\infty}^{\infty}x[n]e^{-j\omega n}$.” O+S p.48
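
The transform pair can be sanity-checked numerically for $x[n] = a^n u[n]$, whose DTFT is the standard pair $X(e^{j\omega}) = 1/(1 - ae^{-j\omega})$; a rough sketch using a rectangle-rule approximation of the inversion integral:

```python
import numpy as np

a = 0.7                                         # arbitrary |a| < 1
w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dw = w[1] - w[0]

X = 1 / (1 - a * np.exp(-1j * w))               # DTFT of a^n u[n]

# x[n] = (1/2pi) * integral of X(e^{jw}) e^{jwn} dw over [-pi, pi)
n = np.arange(5)
x = np.array([np.sum(X * np.exp(1j * w * nn)) for nn in n]) * dw / (2 * np.pi)
```

Because the integrand is smooth and periodic, the rectangle rule recovers $a^n$ essentially to machine precision.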

“The frequency response of an LTI system is simply the Fourier transform of the impulse response.” O+S p.49

$X(e^{j\omega})$ is guaranteed to exist (the sum converges) when $x[n]$ is absolutely summable. “Any FIR system will be stable and therefore will have a finite and continuous frequency response.” O+S p.51

For any sequence $x[n]$ with Fourier Transform $X(e^{j\omega})$, we have $x^\ast[n] \rightarrow X^\ast(e^{-j\omega})$ and $x^\ast[-n] \rightarrow X^\ast(e^{j\omega})$. So, for real $x[n]$, the Fourier transform is conjugate symmetric ($X(e^{j\omega}) = X^\ast(e^{-j\omega})$), so the real part and magnitude are even and the imaginary part and phase are odd.

The Fourier transform is linear. O+S p.59

Shifting/delaying in time results in a linear phase term in the Fourier Transform: $x[n-n_d] \rightarrow e^{-j\omega n_d}X(e^{j\omega})$. O+S p.59

Shifting by frequency $e^{j\omega_0 n}$ results in a frequency shift in the frequency domain: $e^{j\omega_0 n}x[n] \rightarrow X(e^{j(\omega - \omega_0)})$. O+S p.59

Reversing a signal in time corresponds to reversing it in the frequency domain, and for real signals, corresponds to taking the conjugate. O+S p.60

Convolution in the time domain corresponds to multiplication in the frequency domain, and multiplication in the time domain corresponds to convolution in the frequency domain. O+S p.60

### Filters and difference and differential equations

A subclass of LTI systems are those that can be expressed as an $N$th-order linear constant-coefficient difference equation of the form $\sum_{k=0}^N a_ky[n-k] = \sum_{m=0}^Mb_mx[n-m]$. O+S p.34

A difference equation's output can only be determined uniquely if the prior $N$ output values are supplied - it is a recursive relation. O+S p.37

If a system is specified (through its auxiliary conditions - the previous values of $y[n]$) to ensure linearity, time-invariance, and causality, the solution to the equation is unique. This is typically done by specifying that the system be at rest initially. O+S p.39
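
A sketch of a first-order difference equation implemented both by direct recursion under initial-rest conditions and by `scipy.signal.lfilter` (the coefficients are arbitrary):

```python
import numpy as np
from scipy.signal import lfilter

# y[n] - 0.9 y[n-1] = x[n]  (arbitrary example coefficients)
a = np.array([1.0, -0.9])
b = np.array([1.0])

x = np.zeros(20)
x[0] = 1.0                              # impulse input

# Direct recursion assuming initial rest (y[n] = 0 for n < 0)
y_rec = np.zeros_like(x)
for n in range(len(x)):
    prev = y_rec[n - 1] if n > 0 else 0.0
    y_rec[n] = (b[0] * x[n] - a[1] * prev) / a[0]

y_lf = lfilter(b, a, x)                 # same system via lfilter
# Both give the impulse response 0.9^n.
```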

### One-sided and two-sided z-transforms (including the handling of initial conditions)

“The (two-sided) z-transform of a sequence $x[n]$ is defined by $Z\{x[n]\} = X(z) = \sum_{n=-\infty}^{\infty} x[n]z^{-n}$” with $z$ being a complex variable. O+S p.94

“The one-sided or unilateral z-transform is defined as $\mathcal{X}(z) = \sum_{n=0}^{\infty}x[n]z^{-n}$. Clearly, the bilateral and unilateral transforms are equivalent if and only if $x[n] = 0$ for $n < 0$.” O+S p.95

“For $|z| = 1$, the z-transform corresponds to the Fourier transform.” O+S p.95

“We can express the complex variable $z$ in polar form as $re^{j\omega}$” giving $X(re^{j\omega}) = \sum_{n=-\infty}^{\infty}(x[n]r^{-n})e^{-j\omega n}$, which can be interpreted as the Fourier transform of the product of $x[n]$ with the exponential sequence $r^{-n}$. O+S p.95

“For any given sequence, the set of values $z$ for which the z-transform converges is called the region of convergence.” “Convergence of the z-transform depends only on $|z|$” so the inner and outer boundaries of the ROC will be circles (potentially including $|z| = 0$ and $|z| = \infty$). O+S p.96

“The z-transform and all its derivatives must be continuous functions of $z$ within the ROC.” O+S p.97

When $X(z) = \frac{P(z)}{Q(z)}$, $P(z)$ and $Q(z)$ polynomials, is a rational function inside the ROC, values of $z$ where $X(z) = 0$ are the zeros and values of $z$ where $X(z) = \infty$ are the poles. O+S p.98

“Any sequence that can be represented as a sum of exponentials can equivalently be represented by a rational z-transform and can be determined to within a constant multiplier by its poles and zeros.” O+S p.98

The z-transform is linear, so finding the z-transform often involves describing it in terms of common z-transform pairs. The ROC of a sum of sequences is at least the intersection of the ROC of each sequence. O+S p.104

The ROC can't contain any poles because $X(z)$ is infinite at poles. O+S p.105

If $x[n]$ is finite-duration, the ROC is the entire z-plane. If $x[n] = 0$ for $n < N_1 < \infty$, the ROC extends outward from the largest magnitude finite pole in X(z). If $x[n] = 0$ for $n > N_2 > -\infty$, the ROC extends inward from the smallest magnitude pole. If $x[n]$ is two-sided, the ROC will be a ring in the z-plane bounded by two poles and containing no poles. O+S p.105

Computing an inverse z-transform can be done by inspection (based on common z-transform pairs). O+S p.111

If a z-transform can be expressed as the ratio of two polynomials (of $z^{-1}$), it can also be expressed as the product of terms of the form $\frac{1 - c_kz^{-1}}{1 - d_kz^{-1}}$, which can be expressed as a sum of terms of the form $\frac{A_k}{1 - d_kz^{-1}}$ where $A_k = (1 - d_kz^{-1})X(z)\big|_{z = d_k}$. O+S p.112
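
`scipy.signal.residuez` computes this partial-fraction expansion; a sketch with arbitrary example poles:

```python
import numpy as np
from scipy.signal import residuez

# X(z) = 1 / ((1 - 0.5 z^-1)(1 - 0.25 z^-1))  (arbitrary example)
b = [1.0]
a = np.convolve([1, -0.5], [1, -0.25])

# X(z) = sum_k r[k] / (1 - p[k] z^-1), plus direct terms k (empty here)
r, p, k = residuez(b, a)
# Expected by the residue formula A_k = (1 - d_k z^-1) X(z) at z = d_k:
# residue 2 at pole 0.5, residue -1 at pole 0.25.
```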

Time shifting: $x[n - n_0] \leftrightarrow z^{-n_0}X(z)$; the ROC is the same except that poles at $z = 0$ or $z = \infty$ may be added or removed. O+S p.120

Multiplication by an exponential: $z_0^nx[n] \leftrightarrow X(z/z_0)$, with ROC scaled by $|z_0|$. O+S p.121

Differentiating $X(z)$: $nx[n] \leftrightarrow -z\frac{dX(z)}{dz}$ with the same ROC. O+S p.122

Conjugation: $x^\ast[n] \leftrightarrow X^\ast(z^\ast)$ with the same ROC. O+S p.123

Time reversal: $x^\ast[-n] \leftrightarrow X^\ast (1/z^\ast)$; the ROC is inverted. O+S p.123

Convolution: $x_1[n]\ast x_2[n] \leftrightarrow X_1(z)X_2(z)$; the ROC contains at least the intersection of the two sequences' ROCs. O+S p.124

A causal sequence has $x[0] = \lim_{z \to \infty} X(z)$. O+S p.126

### Time and frequency characteristics of linear time-invariant systems

“The frequency response $H(e^{j\omega})$ of an LTI system is the complex gain (eigenvalue) that the system applies to the complex exponential input (eigenfunction) $e^{j\omega n}$. The Fourier transforms of the system input and output are related by $Y(e^{j\omega}) = H(e^{j\omega})X(e^{j\omega})$.” O+S p.241

The ideal filter has frequency response with magnitude 1 for some frequencies and 0 elsewhere, and has a phase response of zero, but is non-causal and not computationally feasible. O+S p.242

A linear phase response corresponds to a simple delay, so it can be acceptable. The group delay, $-\frac{d\angle H(e^{j\omega})}{d\omega}$, approximates the delay (in samples) of a narrowband signal's amplitude envelope at each frequency. O+S p.243

Linear, constant-coefficient difference equation systems with initial rest conditions of the form $\sum_{k=0}^{N}a_ky[n-k] = \sum_{k=0}^{M} b_k x[n-k]$ have transfer functions of the form $H(z) = \frac{Y(z)}{X(z)} = \frac{B(z)}{A(z)} = \frac{\sum_{k=0}^{M} b_kz^{-k}}{\sum_{k=0}^{N}a_k z^{-k}} = \left(\frac{b_0}{a_0}\right) \frac{\prod_{k=1}^M (1 - c_kz^{-1})}{\prod_{k=1}^{N}(1 - d_kz^{-1})}$. O+S p.246

If we assume a difference equation system is stable and causal, its ROC is uniquely specified: it must include the unit circle and extend outward from the outermost pole. O+S p.247

An inverse system $H_i(z)$ for $H(z)$ is one for which $H(z)H_i(z) = 1$. For a linear constant-coefficient difference equation system, $H_i(z) = \left(\frac{a_0}{b_0}\right) \frac{\prod_{k=1}^N (1 - d_kz^{-1})}{\prod_{k=1}^{M}(1 - c_kz^{-1})}$. The poles of $H_i(z)$ are the zeros $c_k$ of $H(z)$, so if the inverse is causal and stable its ROC is $|z| > \max_k|c_k|$ and we must have $\max_k|c_k| < 1$; combined with the stability and causality of $H(z)$ itself ($\max_k|d_k| < 1$), all poles and zeros must be inside the unit circle. Such systems are called minimum-phase systems. O+S p.248

If at least one nonzero pole of $H(z)$ is not canceled by a zero, the impulse response will be of infinite length (IIR). If a system has no poles (excluding cancelled ones) except at $z = 0$, the impulse response is finite in length (FIR). O+S p.251

The frequency response of a system is the z-transform of its impulse response evaluated on the unit circle, $z = e^{j\omega}$.

For a zero, the response at some frequency is affected by the vector from the zero to the point on the unit circle corresponding to that frequency. The gain is this vector's magnitude, and the phase is the difference between this vector's phase and the angle at that frequency. As a result, the gain gets small for frequencies close to the zero's frequency, and the phase approaches $\pi/2$ as the frequency approaches the zero's, then crosses zero at the zero's frequency so that it is symmetric about it. The behavior for a pole is the reciprocal for magnitude and the negative for phase. O+S p.258

For systems with more than one pole or zero, the magnitude response for a frequency will be the product of the magnitude responses for each pole and zero. O+S p.266

The frequency response magnitude at frequency $\omega$ is given by the product of the lengths of vectors drawn from the zeros to the point $e^{j\omega}$ divided by the product of the lengths of vectors drawn from the poles to $e^{j\omega}$. The phase response is the sum of the angles of the lines from the zeros to $e^{j\omega}$ minus the sum of the angles of the lines from the poles to $e^{j\omega}$.
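
This geometric evaluation can be checked numerically against direct evaluation of $H(z)$ (the pole/zero locations are arbitrary examples):

```python
import numpy as np

zeros = np.array([1.0, -1.0])                        # zeros at z = +-1
poles = np.array([0.9 * np.exp(1j * 0.3),
                  0.9 * np.exp(-1j * 0.3)])          # conjugate pole pair

w = 0.5                                              # evaluation frequency
zpt = np.exp(1j * w)                                 # point on the unit circle

# Geometric form: product of zero-vector lengths / product of pole-vector lengths,
# and sum of zero-vector angles minus sum of pole-vector angles.
mag_geo = np.prod(np.abs(zpt - zeros)) / np.prod(np.abs(zpt - poles))
phase_geo = np.sum(np.angle(zpt - zeros)) - np.sum(np.angle(zpt - poles))

# Direct evaluation of H(z) = prod(z - c_k) / prod(z - d_k)
H = np.prod(zpt - zeros) / np.prod(zpt - poles)
```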

### Sampling Theorem

“A sequence of samples $x[n]$ is obtained from a continuous-time signal $x_c(t)$ according to the relation $x[n] = x_c(nT)$, $-\infty < n < \infty$”. $f_s = 1/T$ is the sampling rate. O+S p.140

Because sampling is equivalent to multiplying the continuous-time signal with an impulse train, the Fourier spectrum is the input spectrum convolved with the impulse train. The resulting periodic spectral copies will not overlap if the continuous-time signal is bandlimited to 1/2 the sampling rate. This allows for perfect reconstruction using a perfect lowpass filter at 1/2 the sampling rate. O+S p.143
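
Aliasing can be seen directly: two sinusoids whose frequencies differ by the sampling rate produce identical samples (the rates chosen here are arbitrary):

```python
import numpy as np

fs = 100.0                 # sampling rate, Hz (illustrative)
T = 1 / fs
n = np.arange(64)

f0 = 10.0
x1 = np.cos(2 * np.pi * f0 * n * T)          # 10 Hz sampled at 100 Hz
x2 = np.cos(2 * np.pi * (f0 + fs) * n * T)   # 110 Hz sampled at 100 Hz
# The 110 Hz tone aliases onto the 10 Hz tone: the sample sequences match.
```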

The ideal lowpass filter for reconstruction has impulse response $h_r(t) = \frac{\sin(\pi t/T)}{\pi t/T}$. O+S p.150

Sampling rate reduction (downsampling) by a factor of $M$ can be done by defining a new sequence $x_d[n] = x[nM]$. This will result in aliasing if the signal is not appropriately bandlimited. O+S p.170

Upsampling (interpolation) by a factor of $L$ can be done by inserting zeros between samples: $x_e[n] = x[n/L]$ for $n = 0, \pm L, \pm 2L, \ldots$, and $0$ otherwise. This compresses the spectrum by a factor of $L$; the resulting spectral images can be removed with a lowpass (interpolation) filter. O+S p.172

### Frequency response, Bode plots

A Bode plot is a simple way of approximating the phase and log-magnitude response of a continuous LTI system on a log-frequency scale based on its poles and zeros. For the magnitude plot, at each $n$th-order pole/zero, the slope decreases/increases by $20n$ dB per decade. The initial level in dB is calculated from the magnitude at the initial frequency; the initial slope is determined by the poles and zeros below the initial frequency. For the phase plot, at each $n$th-order pole/zero, the slope decreases/increases by $45n$ degrees per decade, starting one decade before the pole/zero frequency. The initial phase is determined by whether the gain of the pole-zero representation is positive or negative. Wikipedia
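
`scipy.signal.bode` computes the exact curves that the straight-line Bode approximation sketches; for a single-pole lowpass $H(s) = 1/(1 + s/\omega_0)$ the exact response is $-3$ dB and $-45$ degrees at the corner frequency (the value of $\omega_0$ here is arbitrary):

```python
import numpy as np
from scipy import signal

w0 = 100.0                                              # corner frequency, rad/s
sys = signal.TransferFunction([1.0], [1.0 / w0, 1.0])   # H(s) = 1 / (s/w0 + 1)

# Evaluate at the corner and one decade above it.
w, mag, phase = signal.bode(sys, w=[w0, 10 * w0])
# mag is in dB, phase in degrees: ~(-3.01 dB, -45 deg) at w0,
# ~(-20.04 dB, -84.3 deg) at 10*w0, vs the asymptotes' 0/-20 dB.
```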

### Filtering, allpass, minimum-phase, linear phase systems

“A system for which the frequency-response magnitude is a constant is called an all-pass system”, $H_{ap}(z) = \frac{z^{-1} - a^\ast}{1-az^{-1}}$. O+S p.274
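
The constant-magnitude property can be checked numerically for an arbitrary $|a| < 1$:

```python
import numpy as np

# All-pass section H_ap(z) = (z^-1 - conj(a)) / (1 - a z^-1), |a| < 1.
a = 0.6 * np.exp(1j * 1.0)            # arbitrary pole inside the unit circle

w = np.linspace(-np.pi, np.pi, 101)
z = np.exp(1j * w)                    # points on the unit circle

H = (z**-1 - np.conj(a)) / (1 - a * z**-1)
# |H(e^{jw})| = 1 at every frequency: the zero at 1/conj(a) mirrors the pole at a.
```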

A minimum-phase system is a stable and causal system for which the inverse system is stable and causal - its poles and zeros are all inside the unit circle. O+S p.280

“Any rational system function can be expressed as $H(z) = H_{min}(z)H_{ap}(z)$.” O+S p.280 “We can form a minimum-phase system from a nonminimum-phase system by reflecting all zeros lying outside the unit circle to their conjugate reciprocal locations inside.” O+S p.281 “The reflection of zeros of $H_{min}(z)$ from inside the unit circle always decreases the continuous phase.” O+S p.288 A minimum-phase representation also has the minimum group delay. O+S p.188

A linear-phase system has frequency response $H(e^{j\omega}) = |H(e^{j\omega})|e^{-j\omega\alpha}$. “A system is referred to as a generalized linear-phase system if its frequency response can be expressed in the form $H(e^{j\omega}) = A(e^{j\omega})e^{-j\alpha\omega + j\beta}$ where $\alpha$ and $\beta$ are constants and $A(e^{j\omega})$ is a real function of $\omega$… If we ignore any discontinuities that result from the addition of constant phase over all or part of the band $|\omega| < \pi$, then such a system can be characterized by constant group delay.” O+S p.295 If $h[n] = h[M-n]$ for $0 \le n \le M$ and $0$ otherwise (type I for even $M$, type II for odd), or $h[n] = -h[M-n]$ for $0 \le n \le M$ and $0$ otherwise (type III for even $M$, type IV for odd), the system is linear phase. O+S p.297 “The system function for any FIR linear-phase system can be factored into a minimum-phase term, a maximum-phase term, and a term containing only zeros on the unit circle.” O+S p.308