When the Borg from Star Trek assimilated a world, they took all of its knowledge and technology and incorporated it into their own. In order to adapt the new information into the collective, they had to quantify the data against some point of reference. Once the information was quantified, measured, and recorded, the process of analysis and adaptation began. On completion of adaptation it was downloaded to the hive, and we all know what happened after that. But since that's just part of the historical record, and you can find it in any reputable archive, it might be a better use of our time to examine the quantification process. We'll start off with something fairly rudimentary: sampling data from the external environment and converting those pieces and parts into data that can be analyzed. I'm speaking of ADCs, of course.
ADC Introduction
ADCs have been among us since 1974, a creation of IBM innovation [1]. The basic premise is that humans live in a world of infinite values, a constant environment of change and flux. The digital world is anything but infinite. In fact, our digital friends have a precise definition of how much data they can hold. Also, values in the analog/human world have no fixed reference, the complete opposite of the digital world with its definite 0-or-1 evaluation. To bridge this gap we have the all-encompassing ADC, which seals the breach from analog to digital.
Process of Quantization
When we sample something from the analog world and convert it into the digital world, we call this process quantization. Makes sense: we want to quantify something observable. A few very large sacrifices have to be made when you take a world of infinite values into a world of confined values. Basically, we need to break the quantified values down into components. The basic parts of quantization are resolution, accuracy, and sampling rate.
Resolution
When we speak of resolution, in the analog world it doesn't exist. Resolution is a measure of how finely we want to quantize a sampling of an analog signal. Bits are a component of the digital world. The larger the bit count, the larger the numbers the MCU can work with. An 8-bit data type can hold numbers from -128 to 127, or 0 to 255. That means there are 256 different values that can be represented in an 8-bit value. If we need more variety, we can increase the bit count. For instance, a 16-bit number has 65,536 different values.
N = 2^M − 1, where M is an ADC's resolution in bits and N is the largest code it can output
If each number represents a measure, it makes sense that the more potential numbers, the more choices we have. In the ADC world we also have the ability to choose the number of bits represented in an ADC result. The most common choices are 10-bit, 12-bit, and 24-bit. So if we want to get closer in approximation to the infinite analog world, we choose the highest bit count available, right? Right, but at a cost. Of course! With the higher bit count we take more storage and, more importantly, more time. Time? Yes, time is an integral component of the analog world. We have infinite time periods inside the analog world, and if we take time to sample something from that world, infinite time values pass us by while we read it.
That brings us to accuracy.
Accuracy
While we twiddle our thumbs in the analog world, an infinite number of time periods are speeding past us. If we require time to read the analog value, we might miss something. One of the sacrifices made in ADC conversion is that we need to approximate the signal we are going to measure and try to read it fast enough to get a decent digital approximation. We have to balance the factors of resolution and speed. With lower resolution come faster samples; with higher resolution comes lower speed. But with higher speed also comes noise. Yes, noise. ADCs have a useful specification called SNR, or signal-to-noise ratio. This is a measure of how much noise we have relative to the signal we want to measure. The top of the noise becomes the floor from which we start to measure the desired signal.
The SNR figure you will see for ADCs is expressed in dB, or decibels, a logarithmic scale. Like any ratio, SNR is signal over noise:

SNR = Psignal / Pnoise

Expressed in decibels, that becomes:

SNRdB = 10 log10( Psignal / Pnoise )
We must take both the number of sample bits and SNR into account to determine how close our approximation to the "real" world is. We'll keep the "mathy" things to a minimum today, since quantization error analysis is beyond the scope of this introduction.
For accuracy, we are looking at quantization bits and error to arrive at an LSB, or least significant bit: simply put, the smallest unit of value that can be accurately measured. In the case of ADCs it is usually a quantity of voltage, often in mV.
Sampling Rate
Since time is flying by in a continuous line, and we do not have the luxury of infinite storage and infinite sampling, we have to approximate the quantity of measurements over time. We call this approximation the sampling rate. How many readings per unit of time is usually specified in samples per second, or as a frequency. If we know we want to read an audio signal that will probably be resonating at 600 Hz or less, we need to sample faster than twice 600 Hz, or information will be lost.
The black dots on the image represent the moments the signal was sampled. If we sample the ADC slower than the signal we are looking for, something like the following happens:
From this sampling, the digital representation would bear no likeness to the analog signal it was trying to capture. So how would I know what rate to sample my signal at? I'm going to save you some time and effort. Yes, you could calculate, chart, and imagine the frequency you need to sample at, but a very simple rule saves all that time:
For general applications, take the maximum frequency you expect to find and double it.
For high accuracy applications like high fidelity audio, quadruple it.
Example:
I need to know how many readings per second I need to measure the audio feedback coming from an electric motor. The motor rotates at 3860 rpm.
3860 / 60 sec = 64.33 revolutions per second or 64.33 Hz
So what minimum rate should I sample the ADC at? That's right: 64.33 x 2 = 128.67, which rounds up to 129 Hz.
Storage Considerations
We know we want to sample data at 129 times a second and are satisfied that a 10 bit resolution will suffice. What size of structure would I need to provide to my application to store that data?
For starters, there is no 10-bit data type. The nearest container that will hold our 10-bit values is our friend the 16-bit int, or 2 bytes.
For each second we need:
129 samples x 2 bytes = 258 bytes
It gets expensive when you want to store values over a longer period of time, or when the resolution is finer. For example, let's increase the resolution to 24 bits. The closest container is 32 bits, or 4 bytes, increasing our storage by a factor of 2, to 516 bytes per second.
The real question is always: "What do I do with just a few seconds of data? That's all I have room to store." That is a good question, and something we are going to answer in depth in the coming weeks. The short answer is: tons. Most uses revolve around signal analysis and pattern recognition. We might want to look for a particular sound or a frequency change that could represent any number of things. My favorite is just looking for anything out of the ordinary. That could mean impending failure, a switch or trigger for an event, tampering, etc. So many neat things to do.
How to Choose an ADC
We can take a stab at this, but it can get heated: oversampling versus Nyquist, internal versus external, PIC versus AVR versus ARM... I have my preferences, you keep to yours, but there are some factors you might want to consider.
First consideration: internal versus external:
Internal ADC: Most MCUs today have integrated ADCs, which have the advantage of close coupling to the MCU. This means power-saving features, as well as conversion speed, are all accessible through the MCU registers.
External ADC: External ADCs can offer lower noise and FIFO buffering capabilities. They do have the disadvantage of adding to the cost of a project, and they can suffer from bus latency.
Speed consideration:
Some ADCs can sample in excess of 2 million samples per second. Wow, great; where are you going to store all that? MCUs capable of sampling this fast usually carry a large RAM space, enough for several seconds of data logging.
Speed doesn't mean quality. As in most cases in life, you get fast but low quality, or slow but good quality. As you increase the speed of ADC sampling, you invite more noise.
Rule of thumb: my personal choice is that if I need 500k samples per second, I will only consider ADC units that can sample at least double what I need. In that case I would only look at ADCs with sampling rates of 1M samples per second or more.
Other considerations
Some added peripherals that increase the usefulness of ADCs are DMA and interrupts. If your MCU doesn't give you DMA transfers or interrupts directly connected to the ADC, then you, the application designer, get the privilege of managing all that data yourself. DMA allows simplified transfer of that rapidly growing dataset, and interrupts will inform the application when it's all done.
Summary
For this first part of our ADC introduction, we have introduced the idea of reading the analog world and converting it for evaluation in the digital world. There are some issues to consider, but for most purposes, today's ADC units are very capable of simple analog observation. In the coming articles we will examine using the incoming ADC data for more than just battery-level readings: for real analog analysis.
References
[1] O. A. Mukhanov, "History of Superconductor Analog-to-Digital Converters," in 100 Years of Superconductivity, H. Rogalla and P. Kes, Eds., Taylor & Francis, London, ISBN: 978-1-4398-4946-0.
[2] "Signal-to-noise ratio," Wikipedia, 2016.