columbia_ee_dqe_practice_exams

This page contains possible answers for the 2008-2010 DQEs, which were sent out for practice for the 2012 exam. Please note that these answers may be incorrect and/or incomplete; no answer key was provided.

a) The difference equation is <math>y[n] = x[n] - x[n - 4]</math> with impulse response <math>h[n] = \delta[n] - \delta[n-4]</math>. This corresponds to a transfer function of <math>H(z) = 1 - z^{-4}</math>. The magnitude approaches zero at the zeros, where <math>z^4 = 1</math>: the fourth roots of unity <math>z = e^{j0}, e^{\pm j\pi/2}, e^{j\pi}</math>. The transfer function approaches infinity as <math>z \rightarrow 0</math>, so there is a fourth-order pole at <math>z = 0</math>. On the unit circle, <math>|H(e^{j\omega})| = |1 - e^{-j4\omega}| = 2|\sin(2\omega)|</math>: the magnitude is 2 at <math>\omega = (2n + 1)\pi/4, n = 0, 1, 2, \ldots</math> and 0 at the zeros <math>\omega = n\pi/2</math>. The frequency response will look like a full-wave rectified sine wave with period <math>\pi/2</math>.
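As a quick numerical sanity check of the magnitude claim (a sketch, not part of the original answer):

```python
import numpy as np

# Evaluate H(e^{jw}) = 1 - e^{-j4w} on a dense grid and compare with 2|sin(2w)|
w = np.linspace(0, 2 * np.pi, 1000)
H = 1 - np.exp(-4j * w)

assert np.allclose(np.abs(H), 2 * np.abs(np.sin(2 * w)))
# Peak of 2 at w = pi/4, zero at w = pi/2
assert np.isclose(np.abs(1 - np.exp(-4j * np.pi / 4)), 2)
assert np.isclose(np.abs(1 - np.exp(-4j * np.pi / 2)), 0)
```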

b) Factoring, <math>H(e^{j\omega}) = 1 - e^{-j4\omega} = 2j\sin(2\omega)e^{-j2\omega}</math>, so wherever <math>\sin(2\omega) > 0</math> the phase is <math>\angle H(e^{j\omega}) = \pi/2 - 2\omega</math>, and each sign change of <math>\sin(2\omega)</math> (i.e., each zero) adds a jump of <math>\pi</math>. The phase therefore looks like a sawtooth wave with period <math>\pi/2</math>: starting at <math>\pi/2</math> just above <math>\omega = 0</math>, it slopes linearly down to <math>-\pi/2</math> at <math>\omega = \pi/2</math>, then jumps by <math>\pi</math> at each zero.

c) Any signals which are 4-sample periodic, i.e., <math>x[n] = x[n-4]</math>.

d) The difference equation is <math>y[n] = x[n] + \frac{4}{5}x[n - 4]</math> with impulse response <math>h[n] = \delta[n] + \frac{4}{5}\delta[n-4]</math>. This corresponds to a transfer function of <math>H(z) = 1 + \frac{4}{5}z^{-4} = z^{-4}(z^4 + \frac{4}{5}) = z^{-4}(z^2 + \frac{2i}{\sqrt{5}})(z^2 - \frac{2i}{\sqrt{5}})</math><math> = z^{-4}(z - \frac{1+i}{\sqrt[4]{5}})(z - \frac{1-i}{\sqrt[4]{5}})(z + \frac{1+i}{\sqrt[4]{5}})(z + \frac{1-i}{\sqrt[4]{5}})</math>. These four zeros all have magnitude <math>\frac{\sqrt{2}}{\sqrt[4]{5}}</math> and phases <math>\pi/4, 3\pi/4, 5\pi/4, 7\pi/4</math>. The output will be zero for signals where <math>x[n] = -\frac{4}{5}x[n-4]</math>. These signals flip sign every 4 samples and decay exponentially by a factor of <math>4/5</math> every 4 samples.
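The zero locations can be confirmed numerically (a sketch, not part of the original answer; the zeros of <math>H(z)</math> are the roots of <math>z^4 + 4/5</math>):

```python
import numpy as np

# Zeros of H(z) = 1 + (4/5) z^{-4}: the roots of z^4 + 4/5 = 0
zeros = np.roots([1.0, 0.0, 0.0, 0.0, 0.8])

# All four zeros have magnitude (4/5)^(1/4) = sqrt(2)/5^(1/4)
assert np.allclose(np.abs(zeros), (4 / 5) ** 0.25)
# ... at phases pi/4, 3pi/4, 5pi/4, 7pi/4
phases = np.sort(np.mod(np.angle(zeros), 2 * np.pi))
assert np.allclose(phases, [np.pi / 4, 3 * np.pi / 4, 5 * np.pi / 4, 7 * np.pi / 4])
```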

(1) <math>\phi(t) = f_1(t)\cos(\omega_c t) + f_2(t)\sin(\omega_c t)</math>. If the communications channel has an impulse response <math>h(t)</math>, then (treating the channel as effectively transparent at the carrier, so that multiplication by the carrier can be interchanged with convolution by <math>h(t)</math>) we have <math>g_1(t) = (h(t)\ast\phi(t))\cos(\omega_ct) = h(t)\ast f_1(t)\cos^2(\omega_c t) + h(t)\ast f_2(t)\sin(\omega_c t)\cos(\omega_c t)</math>. Similarly, <math>g_2(t) = h(t)\ast f_2(t)\sin^2(\omega_c t) + h(t)\ast f_1(t)\cos(\omega_c t)\sin(\omega_c t)</math>

(2) Taking Fourier transforms (unitary convention; the constant <math>\frac{1}{\sqrt{2\pi}}</math> factor from the product-convolution theorem is dropped below, since only proportionality matters here):

<math>\mathcal{F}\[f_1(t)\] = F_1(\omega)</math>, <math>\mathcal{F}\[f_2(t)\] = F_2(\omega)</math>

<math>\mathcal{F}\[\cos(\omega_c t)\] = \sqrt{\frac{\pi}{2}}(\delta(\omega - \omega_c) + \delta(\omega + \omega_c))</math>

<math>\mathcal{F}\[\cos(\omega_c t)\cos(\omega_c t)\] = \mathcal{F}\[\cos(\omega_c t)\]\ast\mathcal{F}\[\cos(\omega_c t)\]</math><math> = \frac{\pi}{2}(2\delta(\omega) + \delta(\omega + 2\omega_c) + \delta(\omega - 2\omega_c))</math>

<math>\mathcal{F}\[\sin(\omega_c t)\] = j\sqrt{\frac{\pi}{2}}(\delta(\omega - \omega_c) - \delta(\omega + \omega_c))</math>

<math>\mathcal{F}\[\sin(\omega_c t)\sin(\omega_c t)\] = \mathcal{F}\[\sin(\omega_c t)\]\ast\mathcal{F}\[\sin(\omega_c t)\]</math><math> = \frac{\pi}{2}(2\delta(\omega) - \delta(\omega + 2\omega_c) - \delta(\omega - 2\omega_c))</math>

<math>\mathcal{F}\[\sin(\omega_c t)\cos(\omega_c t)\] = \mathcal{F}\[\sin(\omega_c t)\]\ast\mathcal{F}\[\cos(\omega_c t)\]</math><math> = \frac{\pi j}{2}(- \delta(\omega + 2\omega_c) + \delta(\omega - 2\omega_c))</math>

Given this derivation, the equations derived above (for example <math>g_2(t) = h(t)\ast f_2(t)\sin^2(\omega_c t) + h(t)\ast f_1(t)\cos(\omega_c t)\sin(\omega_c t)</math>) show that each filter input is the corresponding message multiplied by either <math>\cos^2</math> or <math>\sin^2</math> (equivalently, convolved in the frequency domain with the transforms above), plus a cross term multiplied by <math>\sin\cos</math>. The former products contain a baseband copy of the message (due to the <math>\delta(\omega)</math> term in the <math>\cos^2</math> and <math>\sin^2</math> Fourier transforms), while the cross terms contain only components near <math>\pm 2\omega_c</math>. As a result, because <math>\omega_c \gg \omega_0</math> (the signal bandwidth), the lowpass filters isolate the baseband copies, and the resulting outputs are proportional to <math>h(t)\ast f_1(t)</math> and <math>h(t)\ast f_2(t)</math>.
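The demodulation argument can be sketched numerically. This is a minimal simulation, not from the exam: the sample rate, carrier, message frequencies, and the ideal channel (<math>h(t) = \delta(t)</math>) are all assumed values chosen so the FFT-based lowpass is exact.

```python
import numpy as np

fs, fc = 100_000, 5_000              # sample rate and carrier (assumed values)
t = np.arange(0, 0.05, 1 / fs)
f1 = np.cos(2 * np.pi * 40 * t)      # two messages with bandwidth << fc
f2 = np.sin(2 * np.pi * 60 * t)

# Transmitted signal, assuming an ideal channel h(t) = delta(t)
phi = f1 * np.cos(2 * np.pi * fc * t) + f2 * np.sin(2 * np.pi * fc * t)

def lowpass(x, cutoff_hz):
    """Crude ideal lowpass via an FFT mask (exact for this periodic test signal)."""
    X = np.fft.rfft(x)
    X[np.fft.rfftfreq(len(x), 1 / fs) > cutoff_hz] = 0
    return np.fft.irfft(X, n=len(x))

# Multiplying by each carrier puts half the matching message at baseband
# (the delta(w) term of cos^2 / sin^2); the cross term sits at +-2*fc only.
r1 = 2 * lowpass(phi * np.cos(2 * np.pi * fc * t), 1000)
r2 = 2 * lowpass(phi * np.sin(2 * np.pi * fc * t), 1000)

assert np.allclose(r1, f1, atol=1e-8)
assert np.allclose(r2, f2, atol=1e-8)
```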

(3) I don't know the terms “constellation and decision diagrams” or “symbol error probability expression”.

1. (a) <math>11\bar{1}0.\bar{1}\bar{1} = 27 + 9 - 3 + 0 - \frac{1}{3} - \frac{1}{9} = 32 \frac{5}{9}</math>

(b) <math>\bar{1}\bar{1}10.11 = -27 - 9 + 3 + 0 + \frac{1}{3} + \frac{1}{9} = -32 \frac{5}{9}</math>

(c) <math>\bar{1}\bar{1}10 = -27 - 9 + 3 + 0 = -33</math>

(d) <math>0.1111\ldots = \frac{1}{3} + \frac{1}{9} + \frac{1}{27} + \ldots = \sum_{n=1}^{\infty}\left(\frac{1}{3}\right)^n = \frac{1}{3}\frac{1}{1 - \frac{1}{3}} = \frac{1}{2}</math>

(e) <math>10\bar{1}\bar{1}01.10\bar{1}010\bar{1}010\bar{1}0\ldots =</math><math> 243 + 0 - 27 - 9 + 0 + 1 + \frac{1}{3} - \frac{1}{27} + \frac{1}{243} - \ldots =</math><math> 208 + \frac{1}{3}\sum_{n=0}^{\infty}\left(-\frac{1}{9}\right)^n = 208 + \frac{1}{3}\cdot\frac{1}{1 + \frac{1}{9}} = 208 \frac{3}{10}</math>

2. (a) <math>1\bar{1}0\bar{1} + 1\bar{1}0\bar{1} = (27 - 9 + 0 - 1) + (27 - 9 + 0 - 1) = 34 = 11\bar{1}1</math>

(b) <math>1\bar{1}0\bar{1} - \bar{1}110 = (27 - 9 + 0 - 1) - (-27 + 9 + 3 + 0) = 17 - (-15) = 32 = 11\bar{1}\bar{1}</math>

(c) <math>1\bar{1}0\bar{1} \times 1\bar{1}0\bar{1} = (27 - 9 + 0 - 1)\times(27 - 9 + 0 - 1) = 289 = 11\bar{1}\bar{1}01</math>

3. If the leftmost nonzero digit is 1, it's positive, otherwise, it's negative, because <math>3^n > \sum_{m = 0}^{n-1} 3^m</math>

4. All nonzero digits should switch sign - this corresponds to flipping all of the signs in the nonzero terms of the sum corresponding to its base-10 representation, thereby negating it.
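Parts 1 through 4 can be checked with a small balanced-ternary helper. This is a sketch (the function names `bt_value` and `bt_encode` are mine, not from the exam):

```python
from fractions import Fraction

def bt_value(int_digits, frac_digits=()):
    """Value of a balanced-ternary number; digits are in {-1, 0, 1}, MSB first."""
    v = Fraction(0)
    for d in int_digits:
        v = 3 * v + d
    for i, d in enumerate(frac_digits):
        v += Fraction(d, 3 ** (i + 1))
    return v

def bt_encode(n):
    """Balanced-ternary digits (MSB first) of an integer."""
    if n == 0:
        return [0]
    digits = []
    while n:
        r = n % 3
        if r == 2:              # represent 2 as -1 with a carry into the next digit
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits[::-1]

# Checks against parts 1 and 2 above
assert bt_value([1, 1, -1, 0], [-1, -1]) == Fraction(293, 9)   # 32 5/9
assert bt_encode(34) == [1, 1, -1, 1]
assert bt_encode(289) == [1, 1, -1, -1, 0, 1]
# Part 3: the leading digit carries the sign; part 4: negation flips every digit
assert bt_encode(17)[0] == 1 and bt_encode(-17)[0] == -1
assert bt_encode(-17) == [-d for d in bt_encode(17)]
```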

a) Truth table:

A | B | C | D | Y |
---|---|---|---|---|
0 | 0 | 0 | 0 | 1 |
0 | 0 | 0 | 1 | 1 |
0 | 0 | 1 | 0 | 1 |
0 | 0 | 1 | 1 | 1 |
0 | 1 | 0 | 0 | 0 |
0 | 1 | 0 | 1 | 1 |
0 | 1 | 1 | 0 | 0 |
0 | 1 | 1 | 1 | 1 |
1 | 0 | 0 | 0 | 1 |
1 | 0 | 0 | 1 | 0 |
1 | 0 | 1 | 0 | 1 |
1 | 0 | 1 | 1 | 1 |
1 | 1 | 0 | 0 | 0 |
1 | 1 | 0 | 1 | 0 |
1 | 1 | 1 | 0 | 0 |
1 | 1 | 1 | 1 | 0 |

<math>(A + \bar{B} + C + D)(A + \bar{B} + \bar{C} + D)(\bar{A} + B + C + \bar{D})(\bar{A} + \bar{B} + C + D)</math><math>(\bar{A} + \bar{B} + C + \bar{D})(\bar{A} + \bar{B} + \bar{C} + D)(\bar{A} + \bar{B} + \bar{C} + \bar{D})</math> This is not minimal form…
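The product-of-sums expression can be verified against the truth table exhaustively (a sketch, not part of the original answer):

```python
from itertools import product

# Rows of the truth table in part (a) where Y = 0, as (A, B, C, D)
ZERO_ROWS = {(0, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 0, 0),
             (1, 1, 0, 1), (1, 1, 1, 0), (1, 1, 1, 1)}

def pos(A, B, C, D):
    nA, nB, nC, nD = 1 - A, 1 - B, 1 - C, 1 - D
    return int(all([
        A or nB or C or D,       # maxterm for row 0100
        A or nB or nC or D,      # 0110
        nA or B or C or nD,      # 1001
        nA or nB or C or D,      # 1100
        nA or nB or C or nD,     # 1101
        nA or nB or nC or D,     # 1110
        nA or nB or nC or nD,    # 1111
    ]))

for row in product((0, 1), repeat=4):
    assert pos(*row) == (0 if row in ZERO_ROWS else 1)
```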

b) 2-input NAND gate truth table:

A | B | Y |
---|---|---|
0 | 0 | 1 |
0 | 1 | 1 |
1 | 0 | 1 |
1 | 1 | 0 |

An inverter can be made by tying the two inputs of a NAND gate together. An AND can be made by putting a NAND-based inverter at the output of a NAND. An OR can be made out of three NAND gates: two NANDs act as inverters on the two inputs (each with its inputs tied together), and their outputs feed a third NAND. Using these building blocks, the function can be realized with 19 NAND gates (and perhaps fewer with some care).

c) Using the B, C, and D inputs as the selection bits, each of the eight multiplexer data inputs is tied to whatever the truth table requires as a function of A: the A input directly, A through an inverter, or a constant 0 or 1 (for the select combinations where the output does not depend on A). For example, when B = C = D = 0 the output is 1 for both values of A, so that data input is tied high; when B = C = 0 and D = 1 the output is <math>\bar{A}</math>, so that input takes A through an inverter.
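The multiplexer realization can be checked against the truth table. This is a sketch; the list of data inputs is read off the table and is my reconstruction, not given in the exam:

```python
TRUTH = {  # (A, B, C, D) -> Y, from part (a)
    (0, 0, 0, 0): 1, (0, 0, 0, 1): 1, (0, 0, 1, 0): 1, (0, 0, 1, 1): 1,
    (0, 1, 0, 0): 0, (0, 1, 0, 1): 1, (0, 1, 1, 0): 0, (0, 1, 1, 1): 1,
    (1, 0, 0, 0): 1, (1, 0, 0, 1): 0, (1, 0, 1, 0): 1, (1, 0, 1, 1): 1,
    (1, 1, 0, 0): 0, (1, 1, 0, 1): 0, (1, 1, 1, 0): 0, (1, 1, 1, 1): 0,
}

def mux_y(A, B, C, D):
    # Data inputs for select BCD = 000..111, read off the truth table:
    # 1, not-A, 1, 1, 0, not-A, 0, not-A
    data = [1, 1 - A, 1, 1, 0, 1 - A, 0, 1 - A]
    return data[4 * B + 2 * C + D]

for (A, B, C, D), y in TRUTH.items():
    assert mux_y(A, B, C, D) == y
```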

a) For each of the three levels of one input, there are three possible levels for the other input, resulting in nine possible input combinations.

b) For each pair of input levels, there are three possible output levels. There are nine possible input level pairings. This results in <math>3^9</math> possible functions.

c) For each pair of input levels, there are two possible output levels. There are four possible input combinations, so there are <math>2^4</math> possible functions.

* The op amp will work to make its inputs equal - in this case, this corresponds to setting the negative input to zero volts. Since <math>V_p = 0</math>, <math>V_{out} = A(0-V_m) = -999V_m</math>. Note that <math>V_1 = V_m</math> because they are at the same node in the circuit. The current through the capacitor is expressed by <math>I_0 = C\frac{d(V_1-1V)}{dt}</math> because it is connected to the 1V power supply. I'm stuck at this point.

(a) The first pair of pFET and nFET transistors serves as an inverter: when <math>V_{in}</math> is low, the pFET pulls its output to 2.5V and the nFET is high-impedance; when <math>V_{in}</math> is high, the nFET pulls the output to 0V and the pFET is high-impedance. Similarly, for the second (feedback) pair of FETs, when <math>V_{out}</math> is low, the pFET drives 2.5V and the nFET is high-impedance, and when <math>V_{out}</math> is high, the nFET drives 0V and the pFET is high-impedance. So, when <math>V_{in}</math> is low, the output of the first pair of FETs will be high, the inverter will set <math>V_{out}</math> to low, and the second pFET will also drive 2.5V onto the inverter input. Assuming the inverter switches from high to low at exactly 1.25V, as <math>V_{in}</math> increases the feedback transistor keeps pulling the inverter input toward 2.5V, so the input must rise past a threshold above 1.25V before the output flips; symmetrically, on a falling input the effective threshold sits below 1.25V. This is the hysteresis of the Schmitt trigger.

The Schmitt trigger is useful for signals which are to be used as logic signals but are not always exactly “high” or “low”. In other words, if a signal which is meant to be treated as a logic “low” actually has some fluctuation, the Schmitt trigger will “ignore” spurious high signals caused by fluctuations as long as they are less than the “on” threshold - and this threshold can be more forgiving in either direction. It essentially requires a more affirmative “high” or “low” to make the corresponding change on the output.

(b) If <math>V_T</math> is 0.5V and -0.5V for the nFETs and pFETs respectively, then in saturation <math>I_{dst} = \frac{\beta}{2}(V_{gs} - 0.5V)^2</math> and <math>I_{dst} = \frac{\beta}{2}(V_{sg} - 0.5V)^2</math>, respectively.

(a) The Nyquist rate is twice the highest frequency present in <math>x(t)</math> (here 250 Hz, i.e. <math>500\pi</math> rad/s), which corresponds to <math>250(2) = 500</math> Hz.

(b) If the signal is sampled with period <math>T = 1/f</math>, then sampling is equivalent to convolving the frequency-domain representation with an impulse train with period <math>2\pi f</math> (and scaling by <math>1/T</math>). If <math>f</math> is above the Nyquist rate (500 Hz), the copies of the spectrum do not overlap, and we can achieve perfect reconstruction for general signals with a lowpass filter at half the sampling frequency. Because we are using a bandpass filter here, we can tolerate aliasing that would corrupt a lowpass reconstruction. However, we must avoid having the copies of the bandlimited spectral slices overlap the original band, or else we will not be able to recover it from their sum. So, we seek a frequency <math>f</math> such that the spectrum from <math>\pm 400\pi</math> to <math>\pm 500\pi</math> is left untouched by the aliased copies. The band runs from 200 Hz to 250 Hz, a bandwidth of 50 Hz, and <math>250/50 = 5</math> is an integer, so the bandpass-sampling bound gives a minimum rate of <math>2(250)/5 = 100</math> Hz. At this frequency the copies pack the spectrum edge-to-edge without overlapping, although there is aliasing in the baseband (between <math>-400\pi</math> and <math>400\pi</math>). If the bandpass filter is set with <math>\omega_1 = 400\pi</math> and <math>\omega_2 = 500\pi</math>, then only the uncorrupted original band is retained. The amplitude should be set to <math>A = T</math> to undo the <math>1/T</math> scaling introduced by impulse-train sampling.
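The bandpass-sampling claim can be checked by brute force: shift copies of the band by every multiple of a candidate rate and test whether any copy lands on the original band (a sketch assuming ideal brick-wall filters):

```python
# Band occupied by x(t): 200-250 Hz (400pi to 500pi rad/s)
F_LO, F_HI = 200.0, 250.0

def alias_free(fs):
    """True if no shifted copy of the band (or its mirror) overlaps 200-250 Hz."""
    for k in range(1, 40):
        images = [(F_LO + k * fs, F_HI + k * fs), (F_LO - k * fs, F_HI - k * fs),
                  (-F_HI + k * fs, -F_LO + k * fs), (-F_HI - k * fs, -F_LO - k * fs)]
        if any(lo < F_HI and hi > F_LO for lo, hi in images):
            return False
    return True

n = int(F_HI // (F_HI - F_LO))   # 250 / 50 = 5, an integer
fs_min = 2 * F_HI / n            # minimum alias-free bandpass sampling rate

assert fs_min == 100.0
assert alias_free(100.0)
assert not alias_free(90.0)      # at 90 Hz a copy lands exactly on 200-250 Hz
```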

(a) Assuming (without loss of generality) that <math>t</math> is a discrete variable and <math>T</math> is an even integer, we can compute the DTFT to determine the sum-of-sinusoids representation of <math>x(t)</math>: <math>X(\omega) = \sum_{n = -\infty}^{\infty}x(n)e^{-j\omega n}</math>. Defining <math>u_M(t) = 1, 0 \le t \le M; 0 \mathrm{\;otherwise}</math>, <math>U_M(\omega) = \sum_{n = -\infty}^{\infty}u_M(n)e^{-j\omega n} = \sum_{n=0}^M e^{-j\omega n}</math>. This is the sum of a geometric series, so we have <math>U_M(\omega) = \frac{1 - e^{-j\omega(M+1)}}{1 - e^{-j\omega}}</math>. Then <math>x(t) = \delta(t)\ast u_{T/2}(t) - \delta(t+T/2)\ast u_{T/2}(t) + \delta(t+T)\ast</math><math> u_{T/2}(t) - \delta(t - T/2)\ast u_{T/2}(t) + \delta(t-T)\ast u_{T/2}(t) - \delta(t + 3T/2)\ast u_{T/2}(t) + \ldots</math><math> = u_{T/2}(t) + u_{T/2}(t) \ast \sum_{n=1}^{\infty} (-1)^n(\delta(t + nT/2) + \delta(t - nT/2)) </math> which has a Fourier transform <math>X(\omega) = U_{T/2}(\omega) + U_{T/2}(\omega)\sum_{n=1}^{\infty}(-1)^n (e^{j\omega nT/2} + e^{-j\omega nT/2})</math>.

I know, from memory, that the harmonic series of a square wave is proportional to the sum of a sinusoid at the fundamental frequency (here <math>1/T</math>) and scaled odd harmonics.

(b) When the delay corresponds to an integer multiple of the period of one of the harmonics, that harmonic will be canceled out. As a result, for very short delays, the energy will be close to the full signal energy, as only very high harmonics (low energy) will be canceled out. As the delay approaches the square wave period, the energy cancellation corresponding to this delay-period match up will become more drastic, until a period of <math>T</math> is reached, at which point the signal will be fully cancelled out. After this point, the energy signal will be periodic, as the addition of a delay of one period will not affect the energy.

(c) Rather than having many “dips” in the energy plot corresponding to nulled harmonics, it would have dips where the delay is an integer multiple of the period of each of the three sinusoids (including the fundamental as the “zeroth” harmonic).
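The endpoint behavior in part (b) can be illustrated numerically. This sketch assumes the delayed copy is subtracted from the signal, i.e. the output is <math>x(t) - x(t - d)</math>:

```python
import numpy as np

T = 200                                   # square-wave period, in samples
n = np.arange(10 * T)
x = np.where(n % T < T // 2, 1.0, -1.0)   # +-1 square wave

def diff_energy(d):
    """Mean-square value of x[n] - x[n-d] (circular shift keeps the math exact)."""
    return np.mean((x - np.roll(x, d)) ** 2)

assert np.isclose(diff_energy(T), 0.0)        # delay of one period: full cancellation
assert np.isclose(diff_energy(T // 2), 4.0)   # half period: x[n-T/2] = -x[n]
assert np.isclose(diff_energy(T + 30), diff_energy(30))   # energy is periodic in d
```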

(Making all of the assumptions listed) As the input signal goes negative, <math>C_1</math> will be charged. When the signal goes positive, <math>D_1</math> will be reverse biased, so <math>C_1</math> will not be able to discharge. <math>D_2</math> then sees the input voltage in series with the voltage on <math>C_1</math>; this series voltage charges <math>C_2</math> toward twice the input voltage. After a few cycles, <math>C_2</math> will be fully charged. The load on <math>V_{out}</math> discharges <math>C_2</math> while <math>C_1</math> is charging.

(a) The four-resistor ladder on the left is creating a series of nodes where the voltages are <math>3V_{dd}/4</math>, <math>2V_{dd}/4</math>, and <math>V_{dd}/4</math>. So, when <math>V_{in}</math> is between <math>V_{dd}/4</math> and <math>2V_{dd}/4</math> (for example), the top two comparators will output a logical 0 because <math>V_{in} < 2V_{dd}/4 < 3V_{dd}/4</math>, but the bottom comparator will output a logical 1. This will cause the upper OR gate to output 0 and the lower OR gate to output a 1. This behavior is summarized (for <math>V_{dd} > 0</math>) for all input levels below (<math>C_1, C_2, C_3</math> correspond to the output of the bottom, middle, and top comparators respectively).

<math>V_{in}</math> | <math>C_1</math> | <math>C_2</math> | <math>C_3</math> | <math>B_1</math> | <math>B_0</math> |
---|---|---|---|---|---|
<math>V_{in} < V_{dd}/4</math> | 0 | 0 | 0 | 0 | 0 |
<math>V_{dd}/4 < V_{in} < 2V_{dd}/4</math> | 1 | 0 | 0 | 0 | 1 |
<math>2V_{dd}/4 < V_{in} < 3V_{dd}/4</math> | 1 | 1 | 0 | 1 | 1 |
<math>3V_{dd}/4 < V_{in}</math> | 1 | 1 | 1 | 1 | 1 |

Note that because <math>C_3</math> is only high when both <math>C_2</math> and <math>C_1</math> are high, <math>B_0</math> and <math>B_1</math> will already be high, so the output will not change once <math>V_{in} > 2V_{dd}/4</math>. However, when <math>V_{dd}</math> is negative (which is in some ways a misnomer as <math>V_{dd}</math> is normally used to denote the positive supply rail), the interpretation of the circuit changes as follows:

<math>V_{in}</math> | <math>C_1</math> | <math>C_2</math> | <math>C_3</math> | <math>B_1</math> | <math>B_0</math> |
---|---|---|---|---|---|
<math>V_{in} < 3V_{dd}/4</math> | 0 | 0 | 0 | 0 | 0 |
<math>3V_{dd}/4 < V_{in} < 2V_{dd}/4</math> | 0 | 0 | 1 | 1 | 1 |
<math>2V_{dd}/4 < V_{in} < V_{dd}/4</math> | 0 | 1 | 1 | 1 | 1 |
<math>V_{dd}/4 < V_{in}</math> | 1 | 1 | 1 | 1 | 1 |

The resistor ladder and comparators form a linear voltage-level quantizer (the analog front end of a flash analog-to-digital converter).

(b) I'm not sure why the outputs are connected to the OR gates in the way they are.
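The behavior described in part (a) can be simulated directly. This is a sketch: the OR-gate wiring (<math>B_1 = C_2 + C_3</math>, <math>B_0 = C_1 + C_3</math>) is an assumption reverse-engineered from the table above, and <math>V_{dd} = 1</math>V is an arbitrary choice:

```python
def flash_adc(v_in, v_dd=1.0):
    c1 = int(v_in > 1 * v_dd / 4)     # bottom comparator
    c2 = int(v_in > 2 * v_dd / 4)     # middle comparator
    c3 = int(v_in > 3 * v_dd / 4)     # top comparator
    return (c2 | c3, c1 | c3)         # (B1, B0): assumed OR-gate encoder

# Reproduces the first table (V_dd > 0)
assert flash_adc(0.10) == (0, 0)
assert flash_adc(0.40) == (0, 1)
assert flash_adc(0.60) == (1, 1)
assert flash_adc(0.90) == (1, 1)
```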

1. If the system is causal, then <math>h[n] = 0, n < 0</math>. If it's stable, <math>\sum_{n=-\infty}^{\infty}|h[n]| < \infty</math>. Note that <math>C_{hh}[l] = C_{hh}[-l]</math> because <math>C_{hh}[l] = \sum_{k=-\infty}^{\infty}h[k]h[k+l] = \sum_{m=-\infty}^{\infty}h[m-l]h[m] = C_{hh}[-l]</math>. For a simple case, say that we know that <math>h[0] = a, h[1] = b</math> (this could be deduced when <math>C_{hh}[l] = 0, |l| > 1</math>). Then <math>C_{hh}[0] = a^2 + b^2, C_{hh}[1] = ab, C_{hh}[-1] = ba</math>. Substituting, we have <math>C_{hh}[0] = a^2 + \frac{C_{hh}[1]^2}{a^2} \rightarrow 0 = a^4 - C_{hh}[0]a^2 + C_{hh}[1]^2</math>. Note that this is a quadratic equation in <math>a^2</math>, so there will be up to four solutions for <math>a</math>. This serves as a counterexample - in general we cannot recover <math>h[n]</math> from <math>C_{hh}</math>.

As another example, note that <math>C_{hh}</math> is the inverse transform of the squared magnitude response of the filter, because correlation <math>a\star a</math> corresponds to <math>\bar{A}A = |A|^2</math> in the frequency domain. In other words, only the magnitude response is specified. Given a filter with a certain magnitude response, it is possible to create another filter with the same magnitude response by cascading the original filter with an all-pass filter. As a result, the filter cannot be unique given only this information.
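A concrete instance of the two-tap counterexample above (a sketch, not from the exam): the filters <math>[2, 1]</math> and <math>[1, 2]</math> are distinct, both causal and stable, yet share the same autocorrelation.

```python
import numpy as np

h1 = np.array([2.0, 1.0])     # a = 2, b = 1
h2 = np.array([1.0, 2.0])     # the time-reversed filter

c1 = np.correlate(h1, h1, mode="full")
c2 = np.correlate(h2, h2, mode="full")

assert np.array_equal(c1, c2)         # both autocorrelations are [2, 5, 2]
assert not np.array_equal(h1, h2)     # yet the filters differ
```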

2. A system of this form is characterized by the difference equation <math>y[n] = x[n] + \sum_{k=1}^{N}a_k y[n-k]</math> and has an infinite impulse response <math>h[k]</math>. Note that stability requires the poles of the system to lie inside the unit circle; for the single-coefficient case below this means <math>|b| < 1</math>. For a simple case, assume <math>a_1 = b; a_k = 0, k > 1</math>. Then <math>h[n] = b^n</math> and <math>C_{hh}[0] = \sum_{k=0}^{\infty}h[k]h[k] = \sum_{k=0}^{\infty}b^{2k} = \frac{1}{1 - b^2}</math>. This gives us an explicit formula for <math>a_1</math>, and thus the whole system. If <math>a_1 = b; a_2 = c; a_k = 0, k > 2</math>, then <math>h[n] = 1, b, b^2 + c, b^3 + 2bc, b^4 + 3b^2c + c^2, b^5 + 4b^3c + 3bc^2, \ldots</math>
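The single-coefficient case can be verified numerically (a sketch; <math>b = 0.7</math> is an assumed example value):

```python
import numpy as np

# h[n] = b^n gives C_hh[0] = 1/(1 - b^2), which can be inverted to recover b
b = 0.7
h = b ** np.arange(2000)           # truncated impulse response; tail is negligible
c0 = float(np.sum(h * h))          # C_hh[0]

assert abs(c0 - 1 / (1 - b**2)) < 1e-10
b_recovered = np.sqrt(1 - 1 / c0)
assert abs(b_recovered - b) < 1e-10
```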

a) For a simple first-pass, assume that <math>0 < \Delta < T</math>. If <math>\Delta = T/N</math>, then this sampling is equivalent to sampling the signal <math>x(t/N) = y(t)</math> (in the figure, the signal is sampled with <math>\Delta = T/7</math>, which results in a signal with period <math>7T</math>). As proof, note that the samples will occur at <math>y(0) = x(0), y(1) = x(T + T/N) = x(T/N), y(2)</math><math> = x(2T + 2T/N) = x(2T/N), \ldots </math>, and <math>y(N) = x(NT + NT/N) = x(T) = x(0)</math>. <math>y(t)</math> repeats itself with a period of <math>N</math>. In other words, when <math>\Delta = T/N</math>, <math>a = 1/N = </math>
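The time-stretching argument can be demonstrated with any <math>T</math>-periodic signal (a sketch; the test waveform and <math>N = 7</math> are assumed values):

```python
import numpy as np

T, N = 1.0, 7
delta = T / N
# An arbitrary T-periodic test signal
x = lambda t: np.sin(2 * np.pi * t / T) + 0.5 * np.cos(4 * np.pi * t / T)

k = np.arange(3 * N)
samples = x(k * (T + delta))            # sample every T + delta

# By T-periodicity, x(k(T + delta)) = x(k * delta): the samples walk slowly
# through one period of the waveform, repeating every N samples.
assert np.allclose(samples, x(k * delta))
assert np.allclose(samples[:N], samples[N:2 * N])
```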

columbia_ee_dqe_practice_exams.txt · Last modified: 2015/12/17 21:59 (external edit)