PD MAD MPEG audio decoder

Decoder Source: Robert Leslie http://www.mars.org/home/rob/proj/mpeg/
Version: private beta © 2000 Rob Leslie
Price: Free
Settings: various
Similar products: Winamp MAD decoder plug-in.
Verdict: Very Good
VBR: not tested
Full file: Rarely
Major Flaws: None
Minor Flaws: Usually clips the last few samples off files
Output level: correct
1-bit relative accuracy: Good
1-bit absolute accuracy: Excellent+

MAD, the MPEG Audio Decoder, has been programmed from scratch by Rob Leslie. He has made the source code freely and openly available for others to compile and incorporate into their own software and audio devices. He has also written a Winamp decoder plug-in, based on the MAD decoder, which is reviewed separately.

The core MAD decoding algorithm yields 24-bit output, and can be optimised for "speed", "accuracy", or a combination of the two ("default", as used in the Winamp plug-in). MADPlay provides a command-line front end to MAD, and can also convert its output to 16 bits, via dithering or truncation. Rob kindly provided me with DOS executables compiled with each of these options in place. All in all, a lot of things to test!
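For readers unfamiliar with the two conversion methods, here is a minimal sketch of what truncation and triangular (TPDF) dithering of 24-bit samples down to 16 bits look like. This is an illustration in Python/NumPy only, not MADPlay's actual code, and the function names are my own:

```python
import numpy as np

def truncate_24_to_16(x24):
    """Requantise 24-bit integer samples to 16 bits by discarding the low 8 bits.

    The discarded detail is simply lost, and whatever signal sat below the
    16-bit step size turns into signal-correlated error (distortion).
    """
    return (np.asarray(x24) // 256).astype(np.int16)

def tpdf_dither_24_to_16(x24, rng=None):
    """Requantise 24-bit samples to 16 bits after adding triangular (TPDF) dither.

    The dither spans +/-1 LSB of the 16-bit target (+/-256 in 24-bit units),
    decorrelating the quantisation error from the signal so that low-level
    detail survives as a statistical average instead of being distorted away.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x24)
    d = rng.integers(0, 256, x.shape) + rng.integers(0, 256, x.shape) - 256
    y = (x + d + 128) // 256                    # round to the nearest 16-bit step
    return np.clip(y, -32768, 32767).astype(np.int16)
```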

The MAD Winamp plug-in was tested thoroughly. The MAD/MADPlay combination and its associated options were tested for accuracy in the 24-bit test (16-bit results are also included).

So, to the burning questions. First, how accurate is the MAD decoding algorithm? Secondly, does the 24-bit calculation, when dithered to 16 bits, really give us better sound quality than the standard 16-bit decoders?

As a reference, l3dec can output 24-bit accurate data. The 24-bit accuracy test revealed that the internal computation of l3dec was accurate to at least 26 bits. By contrast, MAD "accuracy optimised" gave accurate results up to around 23 bits, whilst a further 3 bits were present but distorted. MAD "default" gave accurate results up to around 20 bits; a further 5 bits' worth of information was present but distorted, while the final bit of information I tested for was entirely absent. MAD "speed optimised", even in 24-bit output mode, only gave 16 bits of information. In 16-bit mode, this translates as being slightly less accurate than a good 16-bit-only decoder.

The "accuracy optimised" and "default" versions of MAD are more accurate than a good 16-bit decoder. Does dithering the results down to 16 bits increase the perceived resolution? The answer is yes, but not as much as is theoretically possible. I believe that the dither used by MAD is not ideal. To demonstrate this, I took the 24-bit output from l3dec and used Cool Edit Pro to dither the result to 16 bits. One bit of triangular dither resulted in a 16-bit file which still retained the information held in the 19th bit of the 24-bit file. (If you're getting lost, just hold onto the fact that 19 bits of accuracy is what we would expect.) Using noise-shaped dither (the same technique as is used on "SuperBitMapped" CDs), the 21st bit of the 24-bit file was still audible, and undistorted, in the 16-bit version.
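Noise-shaped dither takes this a step further by pushing the requantisation noise towards high frequencies, where the ear is less sensitive. The sketch below shows the basic idea with a first-order error-feedback shaper; it is an illustration only, and is not the algorithm used by Cool Edit Pro or on "SuperBitMapped" CDs:

```python
import numpy as np

def noise_shaped_dither_24_to_16(x24, rng=None):
    """Requantise 24-bit samples to 16 bits using TPDF dither plus first-order
    noise shaping (error feedback).

    The quantisation error of each sample is subtracted from the next one,
    which tilts the error spectrum towards high frequencies where the ear is
    less sensitive, so more low-level detail remains audible after the
    conversion. Real products use higher-order, psychoacoustically weighted
    shapers; this is only the principle.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x24, dtype=np.float64)
    out = np.empty(len(x), dtype=np.int16)
    err = 0.0                                   # error fed back from the previous sample
    for i, s in enumerate(x):
        d = rng.uniform(-128.0, 128.0) + rng.uniform(-128.0, 128.0)  # TPDF, +/-1 16-bit LSB
        shaped = s - err                        # first-order error feedback
        q = int(np.clip(np.rint((shaped + d) / 256.0), -32768, 32767))
        err = q * 256.0 - shaped                # error (including dither) for the next sample
        out[i] = q
    return out
```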

The dither used by MAD to generate a 16-bit file from its 24-bit output retains the 17th bit accurately, and also a further 2 bits in a very distorted manner. These two bits should not be distorted; the distortion is due to the non-ideal dither used with MAD. Rob has said that if enough people care, he will look into fixing this.

I tested the accuracy of each decoder using intentionally noisy signals, which to a certain extent self-dither. With pure signals, however, the incorrect dither used by MADPlay is even more obvious. The test signal here is a 1kHz tone at -96dB.

[Graph] Decoded by l3dec to 24-bit accuracy, then dithered to 16 bits by Cool Edit Pro.
[Graph] Decoded by MAD "accuracy optimised", then dithered to 16 bits by MADPlay.

Note: the amplitude scales indicated on these graphs are incorrect. They actually show a 60dB range from -88dB to -148dB.

These graphs show that the MADPlay front-end is dithering incorrectly. The spiky waveform in the lower graph is severe harmonic distortion (guitarists often have a pedal to add such an effect). Correctly dithering a signal avoids this. See this article on dither for more information.
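As a rough illustration of what the lower graph shows, the following sketch (my own hypothetical test, not the procedure used for the graphs above) synthesises a 1kHz tone at -96dB in 24-bit form, requantises it to 16 bits both by plain truncation and with TPDF dither, and prints the level of the fundamental and its first two odd harmonics. The truncated version shows strong harmonics of the tone, while in the dithered version they stay down at the noise floor:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                                     # one second at 44.1 kHz
tone = 10 ** (-96 / 20) * np.sin(2 * np.pi * 1000 * t)     # 1 kHz sine at -96 dBFS
x24 = np.round(tone * (2 ** 23 - 1)).astype(np.int64)      # scale to 24-bit integers

rng = np.random.default_rng(0)
tpdf = rng.integers(0, 256, fs) + rng.integers(0, 256, fs) - 256  # TPDF, +/-1 16-bit LSB
trunc = x24 // 256                                          # plain truncation to 16 bits
dith = (x24 + tpdf + 128) // 256                            # TPDF-dithered requantisation

f = np.fft.rfftfreq(fs, 1 / fs)

def bin_level(spec, khz):
    """Level of the FFT bin nearest `khz`, roughly in dBFS for 16-bit audio
    (a full-scale sine through the Hann window reads about 0 dBFS)."""
    mag = spec[np.argmin(np.abs(f - khz * 1000))]
    return 20 * np.log10(mag / (32768 * fs / 4) + 1e-15)

for name, y in (("truncated", trunc), ("dithered ", dith)):
    spec = np.abs(np.fft.rfft(y * np.hanning(fs)))
    # report the 1 kHz fundamental and its first two odd harmonics
    print(name, ["%7.1f dB @ %d kHz" % (bin_level(spec, k), k) for k in (1, 3, 5)])
```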

In conclusion, the MAD decoder is significantly more accurate than any 16-bit decoder in its default and accuracy-optimised modes. Dithering the output to 16 bits via MADPlay yields a file containing more information than a standard 16-bit decode, but some of this information is distorted. This is preferable to the information being entirely absent (as it is in a typical 16-bit decoder's undithered output), but it would be better still if all detectable bits of information were preserved in a linear (non-distorted) manner.

EXTRA NOTES:

  1. The binaries tested here were provided by Rob Leslie, but are not available publicly (that I know of). You may obtain the source code from the web address at the top of this page, and compile them yourself.
  2. The Winamp MAD decoder plug-in is the easiest way to obtain and use the MAD decoder.
  3. Please see the 24-bit test for more details and explanation of the accuracy results quoted in this review.
  4. This review may sound quite negative because MAD claims to be a 24-bit decoder, but isn't quite that accurate. However, compared to all the other "perfect" 16-bit decoders, it is excellent.

Screenshots

[Screenshot] Using MADPlay (the command-line front end for MAD) to decode an mp3 file.
