Downsampling

In digital signal processing, downsampling and decimation are terms associated with the process of resampling in a multi-rate digital signal processing system. Both terms are used by various authors to describe the entire process, which includes lowpass filtering, or just the part of the process that does not include filtering.[1]  When downsampling (decimation) is performed on a sequence of samples of a signal or other continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a lower rate (or density, as in the case of a photograph). The decimation factor is usually an integer or a rational fraction greater than one. This factor multiplies the sampling interval or, equivalently, divides the sampling rate. For example, if compact disc audio at 44,100 samples/second is decimated by a factor of 5/4, the resulting sample rate is 35,280. A system component that performs decimation is called a decimator.

## Downsampling by an integer factor

Rate reduction by an integer factor M can be explained as a two-step process, with an equivalent implementation that is more efficient:

1. Reduce high-frequency signal components with a digital lowpass filter.
2. Decimate the filtered signal by M; that is, keep only every Mth sample.  A notation for this operation is:  ${\displaystyle x[Mn]={x[n]}_{\downarrow {}M}.}$[2]

Step 2 alone allows high-frequency signal components to be misinterpreted by subsequent users of the data, which is a form of distortion called aliasing. Step 1, when necessary, suppresses aliasing to an acceptable level. In this application, the filter is called an anti-aliasing filter, and its design is discussed below. Also see undersampling for information about decimating bandpass functions and signals.
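The need for step 1 can be illustrated numerically. In the sketch below (plain Python; the 1000 Hz rate, M = 4, and the 300 Hz tone are illustrative assumptions, not values from the text), keeping every 4th sample of a 300 Hz tone produces exactly the samples of a 50 Hz tone at the reduced rate; the high frequency has aliased:

```python
import math

fs = 1000.0     # original sample rate in Hz (illustrative)
M = 4           # decimation factor; new rate is 250 Hz, new Nyquist is 125 Hz
f_high = 300.0  # tone above the new Nyquist frequency

# Step 2 alone: keep every Mth sample of the 300 Hz tone.
decimated = [math.cos(2 * math.pi * f_high * (n * M) / fs) for n in range(50)]

# Samples of a 50 Hz tone taken directly at the reduced rate (250 Hz):
alias = [math.cos(2 * math.pi * 50.0 * n / (fs / M)) for n in range(50)]

# The two sequences coincide: after decimation, 300 Hz is indistinguishable
# from 50 Hz, which is exactly the distortion an anti-aliasing filter prevents.
assert all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(decimated, alias))
```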

When the anti-aliasing filter is an IIR design, it relies on feedback from output to input, prior to the second step. With FIR filtering, it is an easy matter to compute only every Mth output. The calculation performed by a decimating FIR filter for the nth output sample is a dot product:

${\displaystyle y[n]=\sum _{k=0}^{K-1}x[nM-k]\cdot h[k],}$

where the h[•] sequence is the impulse response, and K is its length. x[•] represents the input sequence being downsampled. In a general-purpose processor, after computing y[n], the easiest way to compute y[n+1] is to advance the starting index in the x[•] array by M and recompute the dot product. In the case M = 2, h[•] can be designed as a half-band filter, where almost half of the coefficients are zero and need not be included in the dot products.
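The dot product above can be sketched in plain Python. The function names, test input, and 3-tap filter below are illustrative assumptions; the point is that computing only every Mth output gives the same result as filtering at the full rate and then discarding samples:

```python
def decimating_fir(x, h, M):
    """Compute y[n] = sum_k x[nM - k] * h[k], producing only every Mth output.

    Input samples before the start of x are treated as zero.
    """
    K = len(h)
    y = []
    for n in range(len(x) // M):
        acc = 0.0
        for k in range(K):
            i = n * M - k
            if i >= 0:
                acc += x[i] * h[k]
        y.append(acc)
    return y

def filter_then_decimate(x, h, M):
    """Equivalent but wasteful: filter at the full rate, then discard samples."""
    K = len(h)
    full = []
    for m in range(len(x)):
        acc = 0.0
        for k in range(K):
            if m - k >= 0:
                acc += x[m - k] * h[k]
        full.append(acc)
    return full[::M]

x = [float(i % 7) for i in range(30)]  # arbitrary test input (assumption)
h = [0.25, 0.5, 0.25]                  # simple 3-tap lowpass (assumption)
M = 3
assert decimating_fir(x, h, M) == filter_then_decimate(x, h, M)
```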

Impulse response coefficients taken at intervals of M form a subsequence, and there are M such subsequences (phases) multiplexed together. The dot product is the sum of the dot products of each subsequence with the corresponding samples of the x[•] sequence. Furthermore, because of downsampling by M, the stream of x[•] samples involved in any one of the M dot products is never involved in the other dot products. Thus M low-order FIR filters are each filtering one of M multiplexed phases of the input stream, and the M outputs are being summed. This viewpoint offers a different implementation that might be advantageous in a multi-processor architecture. In other words, the input stream is demultiplexed and sent through a bank of M filters whose outputs are summed. When implemented that way, it is called a polyphase filter.
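A minimal sketch of the polyphase viewpoint (plain Python; function names and test values are assumptions): each of the M coefficient subsequences filters its own phase of the input, and the branch outputs are summed. The result matches the direct decimating dot product:

```python
def polyphase_decimate(x, h, M):
    """Decimate by M using the polyphase decomposition of h.

    Phase p holds coefficients h[p], h[p+M], h[p+2M], ... and filters the
    input subsequence x[p], x[p+M], x[p+2M], ...; the M phase outputs are
    summed.  Input samples before the start of x are treated as zero.
    """
    phases = [h[p::M] for p in range(M)]
    num_out = len(x) // M
    y = [0.0] * num_out
    for p, hp in enumerate(phases):
        for n in range(num_out):
            acc = 0.0
            for j, c in enumerate(hp):
                i = n * M - p - j * M   # index of the input sample for h[p + jM]
                if i >= 0:
                    acc += x[i] * c
            y[n] += acc
    return y

def direct_decimate(x, h, M):
    """Reference: y[n] = sum_k x[nM - k] * h[k]."""
    return [sum(x[n * M - k] * c for k, c in enumerate(h) if n * M - k >= 0)
            for n in range(len(x) // M)]

x = list(range(20))        # arbitrary integer test input (assumption)
h = [1, 2, 3, 4, 5]        # arbitrary 5-tap filter (assumption)
M = 2
assert polyphase_decimate(x, h, M) == direct_decimate(x, h, M)
```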

For completeness, we now mention that a possible, but unlikely, implementation of each phase is to replace the coefficients of the other phases with zeros in a copy of the h[•] array, process the original x[•] sequence at the input rate, and decimate the output by a factor of M. The equivalence of this inefficient method and the implementation described above is known as the first Noble identity.[3]
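The equivalence can be checked numerically. In this sketch (plain Python, illustrative values), each phase keeps its own coefficients in a zeroed copy of h, filters at the full input rate, and is then decimated by M; summing the branches reproduces the direct decimating filter:

```python
def fir(x, g):
    """Plain FIR filter at the input rate; input assumed zero before x[0]."""
    return [sum(x[m - k] * c for k, c in enumerate(g) if m - k >= 0)
            for m in range(len(x))]

def decimate_via_zeroed_phases(x, h, M):
    """The inefficient implementation: for each phase p, zero out every
    coefficient of h belonging to another phase, filter the full-rate input,
    decimate the result by M, and sum the M branch outputs."""
    num_out = len(x) // M
    y = [0] * num_out
    for p in range(M):
        g = [c if k % M == p else 0 for k, c in enumerate(h)]
        branch = fir(x, g)[::M][:num_out]
        y = [a + b for a, b in zip(y, branch)]
    return y

x = list(range(24))        # arbitrary integer test input (assumption)
h = [1, 2, 3, 4, 5, 6]     # arbitrary 6-tap filter (assumption)
M = 3
# Direct decimating filter: y[n] = sum_k x[nM - k] h[k]
direct = [sum(x[n * M - k] * c for k, c in enumerate(h) if n * M - k >= 0)
          for n in range(len(x) // M)]
assert decimate_via_zeroed_phases(x, h, M) == direct
```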

Fig. 1: Spectral effects of decimation compared on three popular frequency-scale conventions

### Anti-aliasing filter

The requirements of the anti-aliasing filter can be deduced from any of the three pairs of graphs in Fig. 1. Note that all three pairs are identical, except for the units of the abscissa variables. The upper graph of each pair is an example of the periodic frequency distribution of a sampled function, x(t), with Fourier transform, X(f). The lower graph is the new distribution that results when x(t) is sampled three times slower, or (equivalently) when the original sample sequence is decimated by a factor of M=3. In all three cases, the condition that ensures the copies of X(f) do not overlap each other is the same: ${\displaystyle B<{\tfrac {1}{M}}\cdot {\tfrac {1}{2T}},}$  where T is the interval between samples, 1/T is the sample-rate, and 1/(2T) is the Nyquist frequency. The anti-aliasing filter that can ensure the condition is met has a cutoff frequency less than ${\displaystyle {\tfrac {1}{M}}}$ times the Nyquist frequency.[note 1]

The abscissa of the top pair of graphs represents the discrete-time Fourier transform (DTFT), which is a Fourier series representation of a periodic summation of X(f):

${\displaystyle \sum _{n=-\infty }^{\infty }x(nT)\ \mathrm {e} ^{-\mathrm {i} 2\pi fnT}={\frac {1}{T}}\sum _{k=-\infty }^{\infty }X\left(f-{\tfrac {k}{T}}\right).}$     (Eq.1)

When T has units of seconds, ${\displaystyle \scriptstyle f}$ has units of hertz. Replacing T with MT in Eq.1 gives the DTFT of the decimated sequence, x[nM]:

${\displaystyle \sum _{n=-\infty }^{\infty }x(n\cdot MT)\ \mathrm {e} ^{-\mathrm {i} 2\pi fn(MT)}={\frac {1}{MT}}\sum _{k=-\infty }^{\infty }X\left(f-{\tfrac {k}{MT}}\right).}$

The periodic summation has been reduced in amplitude and periodicity by a factor of M, as depicted in the second graph of Fig. 1.  Aliasing occurs when adjacent copies of X(f) overlap. The purpose of the anti-aliasing filter is to ensure that the reduced periodicity does not create overlap.

In the middle pair of graphs, the frequency variable, ${\displaystyle \scriptstyle f,}$ has been replaced by normalized frequency, which creates a periodicity of 1 and a Nyquist frequency of ½. [note 2]  A common practice in filter design programs is to assume those values and request only the corresponding cutoff frequency in the same units. In other words, the cutoff frequency ${\displaystyle B_{max}={\tfrac {1}{M}}\cdot {\tfrac {1}{2T}},}$ is normalized to ${\displaystyle TB_{max}={\tfrac {1}{M}}\cdot {\tfrac {1}{2}}={\tfrac {0.5}{M}}.}$   The units of this quantity are (seconds/sample)×(cycles/second) = cycles/sample.

The bottom pair of graphs represents the Z-transforms of the original sequence and the decimated sequence, constrained to values of the complex variable z of the form ${\displaystyle z=\mathrm {e} ^{\mathrm {i} \omega }.}$  Then the transform of the x[n] sequence has the form of a Fourier series. By comparison with Eq.1, we deduce:

${\displaystyle \sum _{n=-\infty }^{\infty }x[n]\ z^{-n}=\sum _{n=-\infty }^{\infty }x(nT)\ \mathrm {e} ^{-\mathrm {i} \omega n}={\frac {1}{T}}\sum _{k=-\infty }^{\infty }\underbrace {X{\Bigl (}{\tfrac {\omega }{2\pi T}}-{\tfrac {k}{T}}{\Bigr )}} _{X{\Bigl (}{\frac {\omega -2\pi k}{2\pi T}}{\Bigr )}},}$

which is depicted by the fifth graph in Fig. 1.  Similarly, the sixth graph depicts:

${\displaystyle \sum _{n=-\infty }^{\infty }x[nM]\ z^{-n}=\sum _{n=-\infty }^{\infty }x(nMT)\ \mathrm {e} ^{-\mathrm {i} \omega n}={\frac {1}{MT}}\sum _{k=-\infty }^{\infty }\underbrace {X{\Bigl (}{\tfrac {\omega }{2\pi MT}}-{\tfrac {k}{MT}}{\Bigr )}} _{X{\Bigl (}{\frac {\omega -2\pi k}{2\pi MT}}{\Bigr )}}.}$

## By a rational factor

Let M/L denote the decimation factor, where: M, L ∈ ℤ; M > L.

1. Increase (resample) the sequence by a factor of L. This is called upsampling, or interpolation.
2. Decimate by a factor of M.

Step 1 requires a lowpass filter after increasing (expanding) the data rate, and step 2 requires a lowpass filter before decimation. Therefore, both operations can be accomplished by a single filter with the lower of the two cutoff frequencies. For the M > L case, the anti-aliasing filter cutoff,  ${\displaystyle {\tfrac {0.5}{M}}}$ cycles per intermediate sample, is the lower frequency.
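A minimal sketch of the two-step rational-factor scheme (plain Python; the moving-average lowpass is a crude placeholder, not a recommended filter design): zero-stuff by L, apply one filter at the intermediate rate, keep every Mth sample. The gain factor L compensates for the amplitude lost to zero-stuffing:

```python
def upsample(x, L):
    """Zero-stuffing: insert L-1 zeros after each sample (rate expands by L)."""
    y = []
    for v in x:
        y.append(v)
        y.extend([0] * (L - 1))
    return y

def fir(x, h):
    """FIR filter at the intermediate rate; input assumed zero before x[0]."""
    return [sum(x[m - k] * c for k, c in enumerate(h) if m - k >= 0)
            for m in range(len(x))]

def resample_by_M_over_L(x, h, M, L):
    """Decimate by the rational factor M/L: zero-stuff by L, apply a single
    lowpass filter h at the intermediate rate (scaled by L to restore the
    passband gain), then keep every Mth intermediate sample."""
    up = upsample(x, L)
    filtered = fir(up, [L * c for c in h])
    return filtered[::M]

# Decimate CD audio by 5/4: 44,100 -> 35,280 samples/s (example from the text).
M, L = 5, 4
x = [1.0] * 40                        # a constant (DC) input
h = [1.0 / (M * L)] * (M * L)         # crude moving-average placeholder lowpass
y = resample_by_M_over_L(x, h, M, L)
# 40 input samples become 40 * L / M = 32 output samples, and after the
# filter settles, the DC input comes through with gain ~1.
assert len(y) == 32
assert abs(y[-1] - 1.0) < 1e-9
```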

## By an irrational factor

Techniques for decimation (and sample-rate conversion in general) by a factor R ∈ ℝ⁺ include polynomial interpolation and the Farrow structure.[4]
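The simplest polynomial interpolator is first-order (linear). The sketch below (plain Python; a Farrow structure would realize this more efficiently in filter form) evaluates the input at positions m·R for an irrational R. Linear interpolation is exact on a linear ramp, which makes the result easy to check:

```python
import math

def resample_linear(x, R):
    """Resample x by an arbitrary (possibly irrational) factor R using
    first-order (linear) interpolation between neighboring samples.
    Output sample m is taken at input position m*R."""
    y = []
    t = 0.0
    while t <= len(x) - 1:
        i = int(t)          # integer part: left neighbor
        frac = t - i        # fractional part within the interval
        if i + 1 < len(x):
            y.append(x[i] * (1 - frac) + x[i + 1] * frac)
        else:
            y.append(x[i])  # landed exactly on the last sample
        t += R
    return y

# Linear interpolation reproduces a linear ramp exactly, so resampling
# x[n] = n by R = sqrt(2) must return y[m] = m*sqrt(2).
R = math.sqrt(2)
x = [float(n) for n in range(100)]
y = resample_linear(x, R)
assert all(math.isclose(v, m * R, abs_tol=1e-9) for m, v in enumerate(y))
```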

## Combined methods of decimation

An important factor in the development of digital antenna arrays for radars and Massive MIMO is the need to reduce the cost per channel. Combining the decimation process not only with an anti-aliasing filter but also with digital frequency shifting and I/Q demodulation can help to bring down this cost.

In the simpler case of decimation of OFDM signals by an integer factor M, the following algorithm[5] may be used:

${\displaystyle y[n]=\sum _{k=0}^{M-1}x[nM+k]\ \mathrm {e} ^{-\mathrm {i} 2\pi fkT},\quad n=0,1,\ldots ,N}$,

where ${\displaystyle T}$ is the interval between samples of the signal and ${\displaystyle f}$ is the central carrier frequency of the OFDM signal.

This algorithm amounts to a single filter (one bin) of the full discrete Fourier transform and can be useful for decimating samples in an ADC before digital beamforming in digital antenna arrays.

If more effective anti-aliasing filtering is required, this method may be modified to produce:

${\displaystyle y[n]=\sum _{k=0}^{M-1}x[nM+k]\ h[k]\ \mathrm {e} ^{-\mathrm {i} 2\pi fkT},\quad n=0,1,\ldots ,N}$.
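Both variants can be sketched together in plain Python (function names and test values are assumptions; with h[k] = 1 the weighted form reduces to the unweighted one):

```python
import cmath

def decimate_with_shift(x, h, M, f, T):
    """Combined decimation, frequency shift, and anti-alias weighting:
    y[n] = sum_{k=0}^{M-1} x[nM + k] * h[k] * exp(-i*2*pi*f*k*T).
    With h[k] = 1 for all k this is the unweighted algorithm in the text."""
    num_out = len(x) // M
    return [sum(x[n * M + k] * h[k] * cmath.exp(-2j * cmath.pi * f * k * T)
                for k in range(M))
            for n in range(num_out)]

# Sanity check: with zero frequency shift and unit weights, each output is
# simply the sum of one block of M consecutive input samples.
x = list(range(12))
M = 4
y = decimate_with_shift(x, [1.0] * M, M, f=0.0, T=1.0)
assert all(abs(v - sum(x[n * M:(n + 1) * M])) < 1e-12 for n, v in enumerate(y))
```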

## Notes

1. ^ Realizable low-pass filters have a "skirt", where the response diminishes from near one to near zero. So in practice the cutoff frequency is placed far enough below the theoretical cutoff that the filter's skirt is contained below the theoretical cutoff.
2. ^ Some programs (such as MATLAB) that design filters with real-valued coefficients use the Nyquist frequency (${\displaystyle {\tfrac {1}{2T}}}$) as the normalization constant. That results in a Nyquist frequency of 1 and a periodicity of 2.

## References

1. ^ Poularikas, Alexander D. (September 1998). Handbook of Formulas and Tables for Signal Processing (1 ed.). CRC Press. p. 42-8. ISBN 0849385792.
2. ^ Mitra, Sanjit Kumar, and Yonghong Kuo. Digital signal processing: a computer-based approach. New York: McGraw-Hill, 2006.
3. ^ Strang, Gilbert; Nguyen, Truong (1996-10-01). Wavelets and Filter Banks (2 ed.). Wellesley, MA: Wellesley-Cambridge Press. pp. 100-101. ISBN 0961408871.
4. ^ Milić, Ljiljana (2009). Multirate Filtering for Digital Signal Processing. New York: Hershey. p. 192. ISBN 978-1-60566-178-0. Generally, this approach is applicable when the ratio Fy/Fx is a rational, or an irrational number, and is suitable for the sampling rate increase and for the sampling rate decrease.
5. ^ Slyusar V. I. Synthesis of algorithms for measurement of range to M sources with the use of additional gating of the ADC readings // Radioelectronics and Communications Systems. Vol. 39, no. 5, 1996. P. 36-40.

• Oppenheim, Alan V.; Schafer, Ronald W.; Buck, John R. (1999). Discrete-Time Signal Processing (2nd ed.). Prentice Hall. ISBN 0-13-754920-2.
• Proakis, John G. (2000). Digital Signal Processing: Principles, Algorithms and Applications (3rd ed.). India: Prentice-Hall. ISBN 8120311299.
• Lyons, Richard (2001). Understanding Digital Signal Processing. Prentice Hall. p. 304. ISBN 0-201-63467-8. Decreasing the sampling rate is known as decimation.
• Antoniou, Andreas (2006). Digital Signal Processing. McGraw-Hill. p. 830. ISBN 0-07-145424-1. Decimators can be used to reduce the sampling frequency, whereas interpolators can be used to increase it.
• Milic, Ljiljana (2009). Multirate Filtering for Digital Signal Processing. New York: Hershey. p. 35. ISBN 978-1-60566-178-0. Sampling rate conversion systems are used to change the sampling rate of a signal. The process of sampling rate decrease is called decimation, and the process of sampling rate increase is called interpolation.
• Harris, Frederic J. (2004-05-24). "2.2". Multirate Signal Processing for Communication Systems. Upper Saddle River, NJ: Prentice Hall PTR. pp. 20-21. ISBN 0131465112. The process of down sampling can be visualized as a two-step progression indicated in Figure 2.9. The process starts as an input series x(n) that is processed by a filter h(n) to obtain the output sequence y(n) with reduced bandwidth. The sample rate of the output sequence is then reduced Q-to-1 to a rate commensurate with the reduced signal bandwidth.
• Tan, Li (2008-04-21). "Upsampling and downsampling". eetimes.com. EE Times. The process of reducing a sampling rate by an integer factor is referred to as downsampling of a data sequence. We also refer to downsampling as decimation. The term decimation used for the downsampling process has been accepted and used in many textbooks and fields.
• T. Schilcher. RF applications in digital signal processing // "Digital signal processing". Proceedings, CERN Accelerator School, Sigtuna, Sweden, May 31-June 9, 2007. Geneva, Switzerland: CERN (2008). P. 258. DOI: 10.5170/CERN-2008-003.
• Sliusar I.I., Slyusar V.I., Voloshko S.V., Smolyar V.G. Next Generation Optical Access based on N-OFDM with decimation // Third International Scientific-Practical Conference "Problems of Infocommunications. Science and Technology (PIC S&T'2016)". Kharkiv. October 3-6, 2016.
• Saska Lindfors, Aarno Pärssinen, Kari A. I. Halonen. A 3-V 230-MHz CMOS Decimation Subsampler // IEEE Transactions on Circuits and Systems. Vol. 52, No. 2, February 2005. P. 110.

This article uses material from the Wikipedia article "Downsampling". It is released under the Creative Commons Attribution-Share-Alike License 3.0.
