A MATHCAD ELECTRONIC BOOK
Topics in Electrical Engineering: Applications in Signal Processing -- Digitizing a Signal
Reprinted from Topics in Electrical Engineering, copyright © 1997 by MathSoft, Inc.
This document digitizes continuous or sampled signals. You input:
F(t), the calculated or sampled continuous-time signal
D, the sampling interval
w, the bandwidth limit of the signal
The output is a vector of integers representing the quantized level assigned to each digitized sample. The document plots the original samples, the quantized output, and the quantizing noise.
Background
It is often desirable to convert a continuous-time signal into a set of digital samples. Once digitized, the signal can be processed by digital circuitry and is easier to store and recall. This type of conversion is used in compact disc recording technology and other analog-to-digital applications.
Nyquist Sampling Theorem and Aliasing
There are two issues in digitization. The first is the sampling rate. A band-limited signal is completely characterized by sampling it at a rate of at least twice its bandwidth; this is known as the Nyquist sampling theorem. The signal can also be oversampled, which may be done to generate a smoother reconstruction of the signal.
If the signal is sampled at a lower rate, components at higher frequencies masquerade as components at lower frequencies. This phenomenon, called aliasing, distorts the information carried by the signal.
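The aliasing effect can be sketched numerically. This is a minimal Python illustration with made-up frequencies (not part of the original Mathcad document): a 9 kHz cosine sampled at only 12 kHz, below its Nyquist rate of 18 kHz, yields exactly the same samples as a 3 kHz cosine.

```python
# Aliasing sketch (hypothetical frequencies): a tone above half the
# sampling rate is indistinguishable, from its samples alone, from a
# lower-frequency tone folded down to fs - f.
import math

fs = 12_000            # sampling frequency, Hz (too low for a 9 kHz tone)
f_high = 9_000         # actual signal frequency, Hz
f_alias = fs - f_high  # frequency the samples appear to have: 3 kHz

for n in range(32):
    t = n / fs
    high = math.cos(2 * math.pi * f_high * t)
    low = math.cos(2 * math.pi * f_alias * t)
    assert abs(high - low) < 1e-9  # the two sample sequences coincide
```

Because cos(2π(fs − f)n/fs) = cos(2πfn/fs) at every integer n, the two sequences agree sample for sample.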
Quantization Levels
The second criterion for effective digitization is the size of the smallest quantized step in amplitude, known as the sampling interval. In theory, the greater the number of levels, the more accurately the amplitude of the signal can be represented. In practice, limitations on storage and computation dictate that the step be chosen at some reasonable size: one tenth of the signal amplitude or smaller. A sufficiently small interval ensures that the quantization error is independent of trends in the signal, and for a sufficiently large number of levels the maximum error is restricted to approximately one interval.
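The one-interval error bound can be checked directly. Here is a Python sketch with an assumed test signal (not from the Mathcad document): quantize a sine into N uniform levels and confirm the error never exceeds one step D.

```python
# Quantization-error sketch (illustrative signal and level count):
# floor-quantizing to uniform steps bounds the error by one step D.
import math

N = 16
samples = [math.sin(2 * math.pi * k / 100) for k in range(100)]
vmax, vmin = max(samples), min(samples)
D = (vmax - vmin) / N                          # amplitude step per level
q = [int((s - vmin) / D) for s in samples]     # level index for each sample
recon = [vmin + level * D for level in q]      # quantized approximation
err = max(abs(s - r) for s, r in zip(samples, recon))
assert err <= D                                # at most one interval of error
```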
Mathcad Implementation
Mathcad’s histogram function hist is used to digitize a continuous signal. The signal is sampled at a given rate, and each sample is assigned a quantized amplitude level. To use the example, define an input function F(t) (the signal to be digitized) with a specified bandwidth w. Define the length of time L for the sampled output, and a value for N, the number of levels in the quantized amplitude.
Enter signal and sampling information.
bandwidth, shown as highest frequency component (10 kHz as an example):
input signal:
The length of the sampled record is chosen to span 20 periods of the lowest frequency component. This is entered as a multiple of the bandwidth:
length of sample:
number of quantizing levels:
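The Mathcad input regions are not reproduced here. As a stand-in, the following Python sketch shows equivalent definitions, with an assumed example signal whose lowest component is at w/4 (both choices are illustrative, not from the document):

```python
# Input definitions (Python stand-in for the Mathcad regions; the
# signal F and the w/4 lowest component are assumptions).
import math

w = 10_000                                   # bandwidth: highest frequency, Hz

def F(t):                                    # illustrative signal within the bandwidth
    return math.sin(2 * math.pi * w * t) + 0.5 * math.cos(2 * math.pi * (w / 4) * t)

L = 20 / (w / 4)                             # 20 periods of the lowest component, s
N = 16                                       # number of quantizing levels
```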
Calculate the Digitizing Parameters and the Vector of Samples
sampling frequency (4X oversampling):
sampling rate:
vector of samples:
maximum and minimum values of the function (used to calculate the sample interval):
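A Python stand-in for these parameter calculations, assuming an illustrative signal F and the 10 kHz example bandwidth (all names and values here are assumptions; the block repeats the input definitions so it runs on its own):

```python
# Digitizing parameters (sketch): 4x oversampling of the 2w Nyquist
# rate gives fs = 8w; samples are taken every T seconds over length L.
import math

w = 10_000

def F(t):                                    # assumed example signal
    return math.sin(2 * math.pi * w * t) + 0.5 * math.cos(2 * math.pi * (w / 4) * t)

L = 20 / (w / 4)                             # record length, s

fs = 4 * (2 * w)                             # sampling frequency: 4x oversampled
T = 1 / fs                                   # sampling period between samples
n_samples = round(L * fs)                    # number of samples in the record
s = [F(n * T) for n in range(n_samples)]     # vector of samples
Fmax, Fmin = max(s), min(s)                  # extrema, used to size the step
```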
The application now constructs a vector a which stores the N possible quantized levels for amplitude. The vector codes contains the numbers 0 through N - 1, which identify the levels.
sampling interval:
vector of levels:
vector of codes:
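In Python terms, these regions might look like the following sketch (the extrema are illustrative placeholders, standing in for the values computed from the sampled function):

```python
# Levels and codes (sketch with placeholder extrema): D is the
# amplitude step, a holds the N quantized levels, codes identifies them.
N = 16
Fmax, Fmin = 1.5, -1.5                       # placeholder extrema of the samples
D = (Fmax - Fmin) / N                        # sampling interval in amplitude
a = [Fmin + i * D for i in range(N)]         # vector of levels
codes = list(range(N))                       # vector of codes: 0 .. N-1
```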
A straightforward application of the hist function returns a vector q giving the level (number of sampling intervals) for each sample value.
coded sample:
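The level assignment that the hist-based Mathcad region performs can be sketched without Mathcad as a floor division of each sample's offset above the minimum (Python, with an assumed sine input):

```python
# Coding sketch: each sample is coded as the number of amplitude steps
# above Fmin, clipped so the maximum falls in the top level N-1.
import math

N = 16
samples = [math.sin(2 * math.pi * k / 64) for k in range(64)]  # assumed input
Fmax, Fmin = max(samples), min(samples)
D = (Fmax - Fmin) / N
q = [min(int((x - Fmin) / D), N - 1) for x in samples]  # coded samples, 0..N-1
assert all(0 <= code < N for code in q)
```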
You can use q to reconstruct the digitized approximation to the original signal. The first plot shows this approximation o together with the original signal F . The second plot shows the quantizing noise, that is, the difference between the original sample values and the quantized amplitude values in o.
step-function output:
graphing parameters:
Fig. 9.1 Plot of output and original signal
The quantization noise is given by:
The space between the dotted lines is one sampling interval, D.
Fig. 9.2 Plot of quantization noise
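A Python sketch tying these pieces together (assumed sine input, not the document's F): reconstruct the step-function output o from the codes q and verify that the quantizing noise stays within one amplitude step D.

```python
# Reconstruction and noise-bound sketch: o is the quantized step
# function, and the quantizing noise s - o never exceeds one step D.
import math

N = 16
samples = [math.sin(2 * math.pi * k / 64) for k in range(64)]  # assumed input
Fmax, Fmin = max(samples), min(samples)
D = (Fmax - Fmin) / N
q = [min(int((x - Fmin) / D), N - 1) for x in samples]
o = [Fmin + code * D for code in q]            # step-function output
noise = [x - y for x, y in zip(samples, o)]    # quantizing noise
assert max(abs(e) for e in noise) <= D         # bounded by one interval
```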
To quantize any set of sample data, read the data into the vector s by replacing the first equation in this document with an Input Table component.
Let n run from 0 to length(s) − 1. You can then delete the original definition of s and the associated definitions for the sampling time and sample length.
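A Python sketch of this data-input variant (the inline values are hypothetical, standing in for an Input Table): parse sampled values into s, then let n run from 0 to length(s) − 1.

```python
# Data-input sketch: read external sample values into s the way an
# Input Table component would (hypothetical inline data).
import csv
import io

data = io.StringIO("0.0\n0.31\n0.59\n0.81\n")   # stand-in for a data file
s = [float(row[0]) for row in csv.reader(data)]  # vector of samples
n = range(len(s))                                # n runs 0 .. length(s) - 1
```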