24bit accuracy test
Some mp3 decoders claim to offer 24bit accurate decodes, compared to the usual 16bits.
Are these decoders really 24bit accurate, and does this make any difference to users of 16bit soundcards?
Decoder                              16bit output   24bit output
ACM l3codecx 1997                    16+1           -
ACM l3codecx 1999                    16+1           -
CEP FhG                              16+1           -
CEP GNU                              15+1           -
CEP GNU dithered                     19+0           -
CEP GNU 32                           -              26+0
l3dec 2.72                           16+1           26+0
l3dec 2.72 + external dither         19+0           -
l3dec 2.72 + external NS dither      22+0           -
MAD speed                            15+2           16+1
MAD speed dithered                   16+1           16+1
MAD default                          16+1           20+5
MAD default dithered                 17+2           20+5
MAD accuracy                         16+1           23+3
MAD accuracy dithered                17+2           23+3
MJB 5                                16+1           -
NAD 0.94                             16+1           -
Quicktime 4.1                        16+1           -
Siren 1.5                            16+1           -
Sonique 1.808                        16+1           -
Ultra Player 2.0                     16+1           -
Winamp 2.22                          16+1           -
Winamp 2.7                           16+1           -
Winamp MAD 0.12.2B                   17+2           20+5
Winamp MAD 0.14.1B                   20+3           20+5
Winamp mpg123 1.18                   16+1           -

key:

Each score reads "correct bits + distorted bits". For example, 16+1 means the first 16 bits are decoded correctly and 1 further bit is decoded with distortion; any bits beyond these are lost.


What are the important results of this test?
- Only CEP GNU 32 and l3dec give true 24bit accurate output.
- Since CEP GNU fails most other tests, l3dec is the only recommended 24bit decoder.
- As most people can't process the 24bit ASCII HEX files generated by l3dec and MAD, the Winamp MAD plugin is the only usable 24bit decoder.
Do we care?!
The number of bits a decoder reproduces correctly is a measure of its sound quality and numerical accuracy. Having more bits is the binary equivalent of having more decimal places. For example, saying pi=3.141592653 is more accurate than saying pi=3.14. This accuracy determines the quietest sound, or finest detail a decoder can reproduce. 16bits of accuracy (the same as a compact disc) is often thought to be enough, but it is possible to go further...
When decoding an mp3 file, the numerical results often run to many, many decimal places; rounding the result to 16bits is equivalent to adding a little extra distortion. A 24bit decoder allows owners of 24bit sound cards to keep an extra 8bits of accuracy (lowering the rounding distortion by a factor of 256). Owners of 16bit soundcards can use a program to dither the 24bit output down to 16bits, hiding the extra bits within the noise of the 16th bit, and avoiding the distortion introduced by simply rounding the result.
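To illustrate, here is a minimal Python sketch of the difference between simply rounding a 24bit sample down to 16bits and dithering it first. The TPDF (triangular) dither shown is a common choice, but this is an illustration of the idea, not the algorithm any particular decoder or dithering program uses:

```python
import random

def quantize_to_16bit(sample_24bit, dither=False):
    """Reduce a 24bit integer sample to the nearest 16bit value.

    With dither=False the bottom 8 bits are simply rounded away,
    which adds distortion correlated with the signal. With
    dither=True, triangular (TPDF) noise spanning +/-1 LSB at the
    16bit level is added first, trading that distortion for benign,
    signal-independent noise.
    """
    step = 256  # one 16bit LSB equals 256 steps at 24bit resolution
    x = float(sample_24bit)
    if dither:
        # The sum of two uniform values has a triangular distribution
        x += random.uniform(-step / 2, step / 2) + random.uniform(-step / 2, step / 2)
    # Snap to the nearest multiple of 256, i.e. the nearest 16bit value
    return int(round(x / step)) * step
```

The dithered result is still a 16bit value; the extra information survives only statistically, spread across many samples as noise rather than distortion.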
You may hear an improvement, you may not!
How was this test carried out?
I generated a 24bit .pcm file using Cool Edit Pro. First I generated a 1bit signal that contained a 1kHz sine wave and some noise, all represented by a single bit switching on and off. I then placed this bit at the 16th, 17th, 18th ... 23rd and 24th bits of the 24bit file in turn, 1 second at each bit. I added two additional bursts of noise containing even quieter tones, equivalent to 25 and 26 bit level. The resulting file was mp3 encoded using mp3enc (which accepts 24bit accurate input files), and decoded using the programs on test. If the tone at each bit came through OK (i.e. audibly fine, above the noise floor on a spectral plot, and free of any distortion), the decoder decodes that bit correctly (a correct bit in the table). If the tone is audible but distorted (audibly or visibly), the decoder decodes that bit with distortion (a distorted bit). If the tone is inaudible, replaced by silence, noise, or garbage, the decoder fails to decode that bit (a lost bit).
I used tones buried in noise (rather than pure tones) to test each bit because no real instrument or microphone will generate a pure noiseless tone at this amplitude. Hence this method gives a more realistic test.
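The actual test file can't be reproduced exactly here, but the idea of a 1bit tone-plus-noise signal placed at a chosen bit can be sketched as follows. This is my own rough approximation: the sign-of-(sine+noise) reduction and the noise level are assumptions, not the actual Cool Edit Pro procedure:

```python
import math
import random

def one_bit_tone(bit_position, n_samples=4410, rate=44100, freq=1000.0):
    """Sketch of a 1bit test signal placed at one bit of a 24bit word.

    A 1kHz sine plus noise is crudely reduced to a single bit by
    taking its sign, then scaled so that bit toggles at
    `bit_position` (1 = most significant bit, 24 = least significant)
    of a signed 24bit sample. The real test file used a more careful
    1bit reduction.
    """
    amplitude = 1 << (24 - bit_position)  # numeric weight of that bit
    samples = []
    for n in range(n_samples):
        ideal = math.sin(2 * math.pi * freq * n / rate) + random.gauss(0, 0.3)
        samples.append(amplitude if ideal >= 0 else -amplitude)
    return samples
```

Every sample uses only the chosen bit (plus sign), yet a spectral plot of the result shows the 1kHz tone standing above the noise, which is what makes the pass/fail judgement possible.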
Is this test reliable?
In a lossy format, there is no accepted definition of a given bit being accurate. The method I have chosen probably requires much more internal accuracy than the numerical accuracy result it yields. For instance, if a decoder rounded each internal calculation at the 24th bit, then the final result may only be accurate to 16 bits. If this sounds like nonsense, take a decimal example.
1.147+1.258+1.369+1.258+1.147=6.179
I only want the result to one decimal place. The correct answer rounds to 6.2. However, if I only take each individual number to one decimal place...
1.1+1.2+1.3+1.2+1.1=5.9
Oh dear: the result isn't even accurate to the nearest whole number, unless I round it. It certainly isn't accurate to one decimal place. In the same way, any decoder that I've recorded as exhibiting 24 bits of accuracy must have a great deal more internal accuracy to give this result.
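That decimal example is easy to check mechanically. This small Python fragment truncates each term to one decimal place, as in the text (note that truncation, not rounding, gives the 1.2 for 1.258):

```python
# The decimal example from the text: losing precision in the
# intermediate values costs far more accuracy than rounding only
# the final result.
terms = [1.147, 1.258, 1.369, 1.258, 1.147]

# Sum at full precision, then round once at the end:
exact_sum = sum(terms)  # 6.179, which rounds to 6.2 at one decimal place

# Truncate each term to one decimal place first (1.1+1.2+1.3+1.2+1.1):
truncated_sum = sum(int(t * 10) / 10 for t in terms)  # 5.9
```

The same effect, at the 24th binary place instead of the first decimal place, is why a decoder's internal precision must exceed its quoted output accuracy.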
On the other hand, when the 1bit test signal is encoded, then decoded, it isn't reconstructed perfectly (no .wav converted to an mp3 ever is), and sometimes it ranges over more than the 1bit it was designed to occupy. That's the reason why all 16 bit decoders score 16+1: the 17th bit signal actually excites the 16th bit upon decode, albeit in a distorted manner, if the 17th bit itself is discarded.
In conclusion, the criteria I have chosen are a good way to compare the performance of one decoder against another. In the strictest sense, each quoted accuracy value is itself only accurate to plus or minus one bit.
FAQs
 16bit, 24bit: what are you talking about?
Digital audio signals (the kind in PCs and on CDs) consist of a series of numbers. These numbers represent the instantaneous amplitude of the signal. They're not represented as decimal numbers (1, 2, 3 etc) but as binary numbers (001, 010, 011 etc), which is how computers work. They mean the same thing though; you can convert between decimal and binary pretty easily.
Now, each bit is like a decimal place. Let's imagine we try to measure something, using only 1 decimal place to store the result. How long is a piece of string? I've got some fantastic device that tells me it is 3.12567cm long. So I'll have to store that as 3.1, because I only have 1 decimal place. I've thrown away the 0.02567cm; or to put it another way, the number I've stored is incorrect by 0.02567cm! If I store an extra decimal place, I'll have 3.13cm as my stored measurement, which will only be 0.00433cm out. At this point, depending on why we wanted to measure the piece of string, we might decide that this is enough!
In audio, the small difference between the value we store, and the real world value, is perceived as noise or distortion. The more bits we store, the less noise is added to the signal, and the more accurately the signal is reproduced.
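As a rough illustration, the following Python sketch quantizes a full-scale sine wave to a given number of bits and measures the resulting error; each extra bit lowers the error floor by roughly 6dB. (This is a generic demonstration of quantization noise, not part of the original test.)

```python
import math

def quantization_error_db(bits, n=1000):
    """RMS error from storing a full-scale sine at `bits` bits,
    in dB relative to full scale. Each extra bit buys roughly 6dB
    less error.
    """
    scale = (1 << (bits - 1)) - 1  # largest positive signed value
    err_sq = 0.0
    for k in range(n):
        x = math.sin(2 * math.pi * k / n)  # ideal value in [-1, 1]
        stored = round(x * scale) / scale  # quantize, then read back
        err_sq += (x - stored) ** 2
    rms = math.sqrt(err_sq / n)
    return 20 * math.log10(rms)
```

Running this for 16 and 24 bits shows the error floor dropping by about 48dB, the factor-of-256 improvement mentioned below.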
 Aren't CDs 16bit? Isn't this enough?
CDs store 16bits of information, which was thought to be enough at the time they were developed. However, 24bits is used in recording studios now, and DVD-Audio will allow consumers to hear the result. It's not that you could hear the last bit of a 24bit signal on its own; it's too small a difference, or too quiet a signal, for the human ear to detect. However, measuring larger signals with that fine degree of accuracy makes them sound more realistic.
 My mp3s are made from 16bit CDs: why use a 24bit decoder?
As you probably know, when you encode a CD to mp3 format, you don't store an exact copy of the original signal. When an mp3 is decoded, you don't get those original 16bits back, but an approximation that should sound similar. When the decoder puts together all the elements held in the mp3 file, the arithmetical result can be very accurate in numerical terms, even if it's not exactly what was on the original CD. If you round it to 16bits, you add a small amount of extra distortion to this reconstructed signal, getting even further away from what was on the original CD. If you round it to 24bits, you're still adding distortion, but it's 256 times quieter than that added by rounding to 16bits.
 I only have a 16bit sound card: what use could a 24bit decoder be?
If you calculate the result to 24bit accuracy, and then round it to 16bits, you gain nothing; the result will match all the standard 16bit decoders. However, if you dither the result from 24bits down to 16bits, you can avoid all the distortion generated by rounding to 16bits, and the result may sound better. Please read this article about dither for a fuller explanation.
 I have a 32bit or 64bit or 128bit sound card: what can I do?
I'm sorry, but you don't have such a sound card. The numerical value in the title of a certain series of soundcards may double as you go up the range, but it doesn't tell you the output accuracy of the sound cards; it doesn't even claim to. The best digital to analogue converters are 24bit. Some very good ones (e.g. this one from dCS) may be linear down to the equivalent level of the 27th bit, but this is still using a 24bit input signal and 24bit converters. 99.9% of people still have 16bit sound cards.
 How can I use the 24bit output of l3dec?
l3dec stores 24bit accurate data in ASCII HEX files. No audio editor (that I know of) will read these files, but you can convert them to 24bit .wav files (compatible with Cool Edit Pro) using the 24bit ASCII HEX to WAV conversion utility from ff123.
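For the curious, a converter along these lines is simple to sketch in Python. Note that the input format here is an assumption (one signed 24bit sample per line, written as 6 hex digits); the actual l3dec output format may differ, so ff123's utility remains the practical choice:

```python
import struct
import wave

def hex_lines_to_wav(hex_lines, out, rate=44100):
    """Write 24bit mono PCM .wav data from ASCII HEX sample lines.

    Assumes one signed 24bit sample per line as 6 hex digits
    (a guess at the format; the real l3dec files may differ).
    `out` may be a filename or a writable binary file object.
    """
    frames = bytearray()
    for line in hex_lines:
        value = int(line.strip(), 16)
        if value >= 1 << 23:      # interpret as two's complement
            value -= 1 << 24
        # Little-endian signed 24bit: low 3 bytes of a packed 32bit int
        frames += struct.pack("<i", value)[:3]
    with wave.open(out, "wb") as w:
        w.setnchannels(1)         # mono
        w.setsampwidth(3)         # 3 bytes per sample = 24 bits
        w.setframerate(rate)
        w.writeframes(bytes(frames))
```

The two's complement step matters: hex values at or above 800000 represent negative samples, and mapping them incorrectly produces full-scale garbage rather than audio.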
Copyright 2000 David J M Robinson. All rights reserved. You may not republish any information or content from this site without the author's express permission.