
Chapter 4

Time Quantization and Sampling

Sampling may be one of the most confusing issues involved in data acquisition.

All digital instruments sample, including data acquisition boards. It's one of the things that make them digital. It also makes digital-instrument measurements harder to interpret and more prone to error than analog measurements. Digital measurements are not inherently less accurate—I dealt with that in Chapter 3—but digital measurements are more complicated, so there are more ways of messing them up.

The most obvious specification having to do with digital-instrument sampling is the sampling rate, and the most talked about rule regarding sampling rate is Nyquist's Theorem. Most people interpret Nyquist's Theorem to mean that a digital instrument cannot measure a signal whose frequency is equal to or greater than one half the sampling rate. (One half the sampling rate is popularly known as the Nyquist frequency.) Thus, if the sampling rate is 50 kS/s (kilosamples per second), you run into problems when the signal frequency gets up into the neighborhood of 25 kHz.

What Nyquist's Theorem actually states is that you cannot accurately reconstruct from sampled data the waveform of a signal whose frequency is equal to or greater than one half the sampling rate. People then, in an effort to figure out what Nyquist's Theorem means to Joe Research Scientist using a data acquisition system, assume that if you can't reconstruct it, you can't measure it.

That ain't necessarily so.

Nyquist's Theorem addresses the problem of aliasing, not measurement accuracy. To understand what Nyquist's theorem means for your particular data acquisition situation, you have to think about what effect aliasing has on the particular measurement you're making.

Imagine a signal made up of a 40 kHz fundamental with one overtone (80 kHz) whose amplitude equals that of the fundamental. This is not the sort of signal you run across every day, but it is a very instructive one nonetheless.

To get a baseline, let's look at the signal through the eyes of a really fast data acquisition system running at 4 MS/s. Figure 4.1a shows that we get a nice, clean, well characterized waveform. Figure 4.1b shows the actual Fourier transform calculated the hard way using an actual, honest-to-gosh Fourier integral evaluated numerically.


Figure 4.1: High-speed sampling (4 MS/s in this case) collects a data set that faithfully represents a waveform (a) and can be Fourier transformed into a good representation of the signal's spectrum (b).

The wavetrain shows a nice, clean waveshape. It is exactly what you would expect to see for a well-sampled waveform displayed by, say, a digital oscilloscope, and it is very similar to what you would expect to see on an analog oscilloscope. Note that the data record includes 1,000 samples and ten periods of the fundamental waveform. These will both become important facts later on.
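
If you want to see roughly how Figure 4.1 was cooked up, here is a minimal sketch in Python (using numpy): build the two-component test signal, sample it at 4 MS/s for 1,000 points, and evaluate the Fourier integral numerically at each analysis frequency. The unit amplitudes, sine phases, and the 0 to 200 kHz analysis grid are illustrative choices, not the exact values behind the figure.

    import numpy as np

    # The test signal: a 40 kHz fundamental plus an 80 kHz overtone of
    # equal amplitude. Unit amplitudes and sine phases are assumed here.
    f1, f2 = 40e3, 80e3        # component frequencies, Hz
    fs = 4e6                   # sampling rate, 4 MS/s
    n = 1000                   # record length: 1,000 samples = 10 periods

    t = np.arange(n) / fs
    signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

    # The Fourier transform "the hard way": evaluate the Fourier integral
    # numerically at each analysis frequency instead of calling an FFT.
    freqs = np.linspace(0, 200e3, 401)            # 0 to 200 kHz, 500 Hz steps
    kernel = np.exp(-2j * np.pi * np.outer(freqs, t))
    spectrum = np.abs(kernel @ signal) * 2 / n    # scaled so a unit sine reads ~1

    for f in (40e3, 80e3):
        print(f"{f/1e3:.0f} kHz line: {spectrum[np.argmin(np.abs(freqs - f))]:.3f}")

Both lines should print very close to 1.000, matching the clean spectrum of Figure 4.1b.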

Spectral Analysis

The Fourier transform displays the signal's spectrum. In this case, although the waveform is very nicely sampled, we don't see a very clean spectrum. That is because we have only 1,000 points in the waveform record, and although 1,000 points gives you a pretty good-looking waveform display, it really isn't very many points from which to compute a Fourier transform.

Also, note that the fundamental fits rather neatly into the data record—there are exactly 100 samples in each waveform period and exactly ten periods in the record. I've even contrived to have the waveform start at exactly zero phase. Without these nice coincidences (which, by the way, would be very difficult to achieve in a real data-acquisition situation), the spectrum would look even messier.

Generally, the longer the waveform record (i.e., the more periods in the record) and the better the waveform fits into the sampling window (i.e., the closer to an integer number of periods), the nicer the spectrum will appear.
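
A quick way to convince yourself of this is to transform the same tone twice, once with an integer number of periods in the record and once without. This sketch (my own numbers, not the chapter's figures) does just that:

    import numpy as np

    fs, f0 = 4e6, 40e3    # sampling rate and tone frequency, Hz

    for periods in (10.0, 10.5):
        n = int(round(periods * fs / f0))    # samples spanning that many periods
        t = np.arange(n) / fs
        spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * f0 * t))) * 2 / n
        print(f"{periods:>4} periods in the record: peak reads {spectrum.max():.3f}")

    # Expect ~1.000 when the tone fits the record exactly, and a visibly
    # lower, smeared peak when it doesn't, because the tone's power leaks
    # into neighboring frequency bins.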

The above paragraph should tell you that long data buffers in DAQ boards are not there just for convenience. Since the Windows environment (Windows 3.x/95/NT) is notorious for getting around to accepting data when it feels like it, rather than when the data is ready, you have to be concerned about latency: the lag between when the DAQ board puts the data out and when it gets stored, nice and safe, in the computer's RAM. If you're gonna use that data in a Fourier analysis, don't expect it to work if you use more than one buffer's worth.

Plan on starting with an empty buffer on the DAQ card, filling it in one go, then loading it into memory. Don't expect to do a Fourier transform on 10,000 data points obtained by filling a 1,000-point buffer ten times. Latency-induced phase errors will wipe you out! If your DAQ-card buffer carries 1,000 points, that's how long your data set for the Fourier transform is going to be, and there's nothing you can do about it. (Well, there is, but time-stamping every individual sample with sub-nanosecond resolution is not usually an option!)
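
To see how badly a stitched-together record behaves, try simulating it. Here two 1,000-point buffers of a 40 kHz tone get concatenated with a gap between them; the 137-sample gap is a made-up stand-in for transfer latency:

    import numpy as np

    fs, f0, n = 4e6, 40e3, 1000
    gap = 137    # hypothetical latency between buffers, in samples

    t1 = np.arange(n) / fs
    t2 = (np.arange(n) + n + gap) / fs    # second buffer starts late
    stitched = np.concatenate([np.sin(2 * np.pi * f0 * t1),
                               np.sin(2 * np.pi * f0 * t2)])

    spectrum = np.abs(np.fft.rfft(stitched)) * 2 / stitched.size
    print(f"40 kHz peak from the stitched record: {spectrum.max():.3f}")
    # The phase break knocks the peak well below the 1.000 that a clean,
    # single-buffer record would give.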

Back to the spectrum in Figure 4.1. For all its flaws, the essential features show up pretty well: we see two spectral lines at the appropriate frequencies (40 and 80 kHz) and they have approximately the right amplitude (both roughly equal to one). The amplitudes are a little bit low because the record's shortness spreads the spectral lines out, spilling some of their power into adjacent frequency bins.

When we reduce the sampling rate to 200 kS/s (Figure 4.2), the spectrum gets smeared out more because there are now only five samples tracing each period of the fundamental. Smearing out the spectral lines reduces the peak amplitude. In other words, the system's spectral resolution drops because it can't discern the waveshape as clearly. Again, lower resolution leads to a lower peak-amplitude measurement because some of the total spectral power gets shifted into adjacent frequency bins.


Figure 4.2: Marginal sampling (200 kS/s) makes for a less faithful representation of the spectrum, but does not lead to aliasing.

Sampling at 200 kS/s still gives a Nyquist frequency of 100 kHz, which is greater than the highest frequency component in the signal. Figure 4.3 shows what happens when we drop the sampling rate still further—to 100 kS/s, which gives a Nyquist frequency of 50 kHz. The Nyquist frequency is now below the highest frequency in the signal. Aliasing occurs, producing ghosts at 20 kHz and 60 kHz. Note that the aliasing does not affect the apparent strength of the real signal components. Furthermore, note that there's no way to tell which spectral components are real and which are aliases.


Figure 4.3: Undersampling (100 kS/s) causes aliasing. Ghost frequencies appear at 20 kHz and 60 kHz.
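
The ghost frequencies are pure arithmetic: every component gets folded back about multiples of the sampling rate. A few lines of Python show where the lines in Figure 4.3 come from:

    def alias(f, fs):
        """Apparent frequency of a tone at f when sampled at fs."""
        f_folded = f % fs
        return min(f_folded, fs - f_folded)

    fs = 100e3
    for f in (40e3, 80e3):
        print(f"{f/1e3:.0f} kHz shows up at {alias(f, fs)/1e3:.0f} kHz")
    # 40 kHz stays put (it's below the 50 kHz Nyquist frequency), while
    # 80 kHz folds down to 20 kHz. On a 0 to 100 kHz plot, their mirror
    # images at 60 and 80 kHz fill out the remaining lines.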

So, undersampling of a signal will mess up spectral measurements. Oversampling with a limited record length can also mess up the spectrum, as Figure 4.4 shows. To get Figure 4.4, I used a sampling rate fast enough to put all 1,000 data points into a single wave period. In other words, the entire record is one wave-period long. Needless to say, that makes working out the frequencies of the components pretty dicey for the Fourier transform. That vagueness translates into poor resolution and badly smeared-out lines. The peaks, however, are still right on the money as far as frequency is concerned. And, of course, aliasing is not a problem!


Figure 4.4: Extreme oversampling (40 MS/s) to reduce the waveform record to one fundamental period long (a) produces a clean spectrum, but with very poor resolution (b).
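
The resolution penalty is easy to quantify: the transform's bin spacing is the reciprocal of the record duration, no matter how fast you sample. For the record behind Figure 4.4:

    fs, n = 40e6, 1000         # 40 MS/s, 1,000 points
    duration = n / fs          # 25 microseconds = one 40 kHz period
    print(f"bin spacing = {1 / duration / 1e3:.0f} kHz")    # 40 kHz
    # With 40 kHz bins, the 40 and 80 kHz lines land in adjacent bins,
    # so nothing finer can be resolved, even though aliasing is impossible.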

Non-Spectral Analysis

Okay, undersampling ruins spectral measurements. But, frequency measurements are not exactly the bread and butter of data acquisition. Suppose, for example, what you really want are the maximum, minimum, average (of absolutes—the average of signed values is zero for non-DC signal components) and RMS values from the time-domain waveform. Table 4.1 shows these values for several different sampling rates. Clearly, Nyquist's theorem has nothing to do with measurements of these values. The values measured with a sampling rate of 125 kS/s are identical to the 2 MS/s values, despite the fact that 125 kS/s is undersampling by Nyquist's criterion. The 125 kS/s values are more accurate even than the 200 kS/s values, although Nyquist's Theorem says the latter should be fine.

Table 4.1: Signal-level values detected using various sampling rates

Sample Rate    2 MS/s    200 kS/s    125 kS/s    100 kS/s    75 kS/s    50 kS/s
Maximum         1.125     1.000       1.125       1.000       1.121      1.000
Minimum        -1.990    -1.760      -1.990      -1.760      -1.973     -1.760
Average         0.826     0.815       0.826       0.815       0.830      0.815
RMS             0.999     0.999       0.999       0.999       1.001      0.999

Something else is going on here.

The something else that's going on here is that, for non-spectral-analysis measurements, Nyquist's theorem doesn't say a whole lot. For those measurements, what you want is a lot of data points randomly scattered throughout the waveform. It doesn't make a bit of difference how often you take the data points, so long as they fall at effectively random points on the waveform and you have a lot of them.

Again, the ability to take a lot of data points (buffer size) is at least as important as being able to take them rapidly (sampling rate).

You can get good randomization simply by not having your sample rate make a simple ratio with your fundamental frequency. 125 kS/s is a good match with 40 kHz because the ratio (3.125) is non-integral to three significant figures. You should get even better mixing with a sample rate of, say, 123.456 kS/s. I tried it, and got a match (to the third decimal place) with all the 2 MS/s numbers except the minimum value. The nicely randomizing sample rate found at least one data point at -2.000, which the (unrandomized) 2 MS/s data set missed.
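
Here's a sketch of that experiment. The waveform below is my guess at the chapter's signal (two equal cosine components with a small, arbitrary start offset), so the numbers track Table 4.1's pattern rather than match it digit for digit:

    import numpy as np

    f1, f2, n = 40e3, 80e3, 10000
    t0 = 1.23e-6    # arbitrary start-time offset, an assumption

    def stats(fs):
        t = np.arange(n) / fs + t0
        s = -np.cos(2 * np.pi * f1 * t) - np.cos(2 * np.pi * f2 * t)
        return s.max(), s.min(), np.abs(s).mean(), np.sqrt((s ** 2).mean())

    for fs in (200e3, 123.456e3):
        mx, mn, avg, rms = stats(fs)
        print(f"{fs / 1e3:8.3f} kS/s  max {mx:+.3f}  min {mn:+.3f}  "
              f"avg {avg:.3f}  rms {rms:.3f}")
    # The commensurate 200 kS/s rate revisits the same five phases forever
    # and keeps missing the true minimum; the incommensurate rate sweeps
    # the whole waveshape and digs out a point at (or very near) -2.000.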

So, unless you're looking at the shape of a repetitive waveform or doing spectral analysis, Nyquist's theorem is not a good guide!

Transient Waveforms

If spectral analysis is not exactly the bread and butter of data acquisition, the same can truly be said of repetitive waveforms. Although lots of people use data acquisition systems for capturing repetitive waveforms, DAQ was invented for capturing transients, and that is still where its greatest strength lies.

A transient waveform is not just a single occurrence of a repetitive waveform. In a repetitive waveform the spectral bandwidth is more-or-less limited. That is, the amplitudes of the harmonics asymptotically approach zero. Thus, you can usually identify a finite frequency band and a definite cutoff frequency for a repetitive waveform and say that everything beyond that is of no interest. You can then (at least in principle) get a sampling rate high enough to keep your Nyquist frequency above the cutoff frequency.

As Figure 4.5 shows, the spectrum for even a simple step transient has infinite bandwidth. That is, the harmonics may asymptotically approach something, but it isn't necessarily zero. Of course, real transients can't rise (or, in the case of this negative step, fall) instantaneously, because real electronics can't react instantly. Real step functions have somewhat rounded steps (caused by poor high-frequency response) or overshoots (caused by poor low-frequency response) or even ringing (caused by resonance effects).


Figure 4.5: Transients have essentially unlimited spectral bandwidth.
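
You can watch the bandwidth refuse to die in a few lines of numpy. This sketch transforms a sampled negative step (record length and rate are arbitrary choices of mine) and probes the spectrum at ever-higher frequencies:

    import numpy as np

    fs, n = 4e6, 1000
    step = np.ones(n)
    step[n // 2:] = -1.0    # negative-going step in mid-record

    spectrum = np.abs(np.fft.rfft(step)) * 2 / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    for f in (36e3, 196e3, 996e3):
        i = np.argmin(np.abs(freqs - f))
        print(f"{freqs[i] / 1e3:6.0f} kHz: {spectrum[i]:.4f}")
    # The amplitude falls off only gradually, roughly as 1/frequency.
    # There is no cutoff beyond which the content simply stops.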

All of this doesn't change the fact that an ideal step function has an infinite bandwidth, and you ain't gonna capture it anywhere near perfectly with a real data acquisition system because you can't have an infinite sampling rate. The best you can do is sample as fast as you can and chop the bandwidth off ahead of the ADC with an anti-aliasing filter whose cutoff is below the Nyquist frequency. If you're lucky, your DAQ system's cutoff will be high enough to pass the important components in your transient.

Then, of course, you have to be honest and report what you did: what your cutoff frequency was, what the sampling rate was, etc. Being honest and admitting all this presupposes you had your thinking cap on and determined these things in the first place!

Time-Division Multiplexing

Now that I've cleared up what Nyquist's Theorem means to data acquisition mavens, it's time to point out that a four-channel, 300 kS/s data acquisition board does not necessarily give you a Nyquist frequency of 150 kHz! You see, most data acquisition boards have several input channels, but only one analog-to-digital converter (ADC). To funnel those n data-acquisition channels through that one ADC, they use time-division multiplexing (TDM).

Suppose you have that four-channel DAQ board with one ADC sampling at 300 kS/s. You start the acquisition at time zero and, for the first sampling interval, the ADC is connected to channel 1. For the second sampling interval (which starts 3.333 microseconds later), the board's signal-routing electronics connects the ADC input to channel 2. The third sample (starting at 6.667 microseconds) comes from channel 3 and the fourth from channel 4. For the fifth sample, the electronics reconnects the ADC to channel 1, and so forth.

That's called scanning. While the ADC is zipping along at 300 kS/s, scanning drops the actual sample rate for each of the input channels (which is what really counts) to only 75 kS/s! To get a real sampling rate of 300 kS/s, you have to run the board as a one-channel DAQ board.
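
The arithmetic is worth spelling out, because it produces the number the spec sheet doesn't print:

    adc_rate = 300e3    # the board's advertised aggregate rate, S/s

    for channels in (1, 2, 4):
        per_channel = adc_rate / channels    # the scan list divides the ADC's time
        print(f"{channels} channel(s): {per_channel / 1e3:5.1f} kS/s each, "
              f"Nyquist {per_channel / 2e3:5.1f} kHz")
    # Four channels: 75 kS/s each and a 37.5 kHz Nyquist frequency, a far
    # cry from the 150 kHz the board-level 300 kS/s figure suggests.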

That is why most multichannel boards let you select certain channels for use and leave others out. It's not just a case of simply not hooking a signal into the channel n input. You have to tell the board not to scan through the unused channels; otherwise, your true sampling rate suffers badly.

Until recently, nearly all DAQ boards multiplexed through a single ADC. Since the vendor couldn't predict how many signals you'd actually need for your application, they'd publish the ADC sampling rate in their specifications. It would be up to the customer (in other words, you) to figure out the sampling rate you'd actually get by dividing that published rate by the number of channels you planned to use.

Recently, in an effort to keep real sampling rates up with real user needs (especially for boards with 8, 16 or more input channels), manufacturers have begun installing additional ADCs. The reason they can do it now, whereas they couldn't do it before, is improved semiconductor integration. Cramming more electronics into each chip means they can now cram more ADCs into one chip. That means more ADCs can fit on a DAQ board along with all the associated electronics. Having two ADCs doubles the real sampling rate per channel. Four quadruples it, and so forth.

Of course, when a vendor tells you that they have a four-channel, 300 kS/s board with two ADCs, you now have to ask whether they have two 300 kS/s ADCs or two 150 kS/s ADCs. You also have to ask if all the ADCs are available to all of the channels.

In other words, if you connect signals to channels 1 and 2, do you have two channels operating at 150 kS/s (by fully utilizing both ADCs), or do you still only have 75 kS/s because you've stupidly (or by necessity, due to some constraint in the application) loaded both channels into one ADC while the other one sits idle? You may have the potential to get faster sampling, but to get it for real you need to apply a little more brain power as well.
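
Reduced to arithmetic, the vendor questions look like this (a sketch with illustrative numbers, not any particular vendor's spec sheet):

    def per_channel_rate(adc_rate, channels_sharing_that_adc):
        """Rate a channel actually gets from the ADC it's multiplexed into."""
        return adc_rate / channels_sharing_that_adc

    # Two signals on a board with two 150 kS/s ADCs:
    print(per_channel_rate(150e3, 1) / 1e3, "kS/s")    # 150.0: one signal per ADC
    print(per_channel_rate(150e3, 2) / 1e3, "kS/s")    # 75.0: both stuck on one ADC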
