Richard Black
11-15-2004, 05:30 AM
Hello Audioreview forum members, glad to be joining your number.
Got here via a protracted links route including Jon Risch's site, which seems to arouse passions. It mirrors, in some ways, work I've been doing for some years, and I'd just like to make the following observations:
If a system is linear, a single test will prove the point. In theory! Obviously you're limited by noise and other extraneous problems of resolution, but if a sine wave goes in and comes out _completely_ indistinguishable (apart from amplitude and phase) the system is linear.
In practice, because of the 3-decade bandwidth of audio, and given limits on measurement resolution, you need to try a few different frequencies to prove the point. For example, crossover distortion may be buried in noise at 200Hz but very clearly visible at 20kHz. Distortion due to stressed capacitors may be buried in noise above 1kHz but detectable at 20Hz. And so on. But if sinewave testing from 20-20k shows no discernible distortion, intermod testing ain't going to pick up anything. If this statement is untrue, a lot of people would be very interested to hear the exact conditions under which it breaks down!
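The single-tone idea above can be sketched numerically. This is just an illustrative toy (not anything from a real test rig): a memoryless system with a small, made-up cubic term, and THD measured from an FFT at several of the frequencies mentioned. Note that because this toy nonlinearity has no memory, it distorts identically at every frequency, whereas the real mechanisms described (crossover distortion, stressed capacitors) are frequency-dependent, which is exactly why several test frequencies are needed.

```python
# A minimal sketch, with assumed values: a weakly nonlinear system
# (small cubic term, coefficient chosen arbitrarily) tested with
# single sine waves at a few spot frequencies, THD read off the FFT.
import numpy as np

fs = 192_000            # sample rate, Hz (high enough to keep 20 kHz harmonics below Nyquist)
N = fs                  # one second of signal -> exactly 1 Hz FFT bins

def system(x):
    """Toy nonlinearity: mostly linear, plus a small cubic term."""
    return x + 0.001 * x**3

def thd(f0):
    """Total harmonic distortion at fundamental f0, via the FFT."""
    t = np.arange(N) / fs
    y = system(np.sin(2 * np.pi * f0 * t))
    spec = np.abs(np.fft.rfft(y)) / N        # bin k sits at exactly k Hz
    fund = spec[f0]
    harms = [spec[k * f0] for k in range(2, 6) if k * f0 < fs / 2]
    return np.sqrt(sum(h**2 for h in harms)) / fund

for f0 in (20, 200, 1000, 20000):
    print(f"{f0:>6} Hz: THD = {thd(f0):.2e}")
```

Because the toy system is memoryless, all four frequencies report the same THD (about 2.5e-4, from the sin-cubed identity); a real amplifier would not be so obliging.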
However, if a system is nonlinear - and in the end of course they all are - any sinewave will be afflicted by harmonic distortion, and any combination of them will exhibit intermod. The advantage of intermod is that it can show up, in a single measurement and single graph, more detail about the nature of the distortion than any small number of HD tests. However, they all need analysing! 19/20k intermod, for example, is great for testing digital recorders because at that sort of frequency any harmonics get ruthlessly filtered off, while audio-band IM tones survive (and of course are directly relevant to what we hear, unlike the 17th harmonic of 20kHz :) ). I don't really see anything wrong with Risch's test but in the end it's entirely horses for courses: some will find it simpler to apply and interpret than a variety of single-tone signals, others won't.
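The 19/20k point can be illustrated with another small sketch (again a toy, with arbitrary second- and third-order coefficients, nothing to do with any actual recorder): feed two tones at 19 kHz and 20 kHz through a nonlinearity and look at where the products land. The harmonics (38, 57 kHz and up) sit far above the audio band, but the difference tone at 1 kHz (20-19) and the third-order product at 18 kHz (2x19-20) fall squarely in-band.

```python
# Hedged sketch of the twin-tone 19/20 kHz IM idea. The quadratic and
# cubic coefficients below are invented for illustration only.
import numpy as np

fs = 192_000
N = fs                                   # one second -> 1 Hz FFT bins
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 19_000 * t) + np.sin(2 * np.pi * 20_000 * t)

# Toy nonlinearity with small second- and third-order terms.
y = x + 0.01 * x**2 + 0.001 * x**3
spec = np.abs(np.fft.rfft(y)) / N        # bin k sits at exactly k Hz

for label, f in [("f2-f1 (IM2, in-band)",  1_000),
                 ("2f1-f2 (IM3, in-band)", 18_000),
                 ("2nd harmonic of 19k",   38_000),
                 ("3rd harmonic of 19k",   57_000)]:
    print(f"{label:>22} at {f:>6} Hz: {spec[f]:.2e}")
```

On a recorder band-limited to ~20 kHz, everything from 38 kHz up would be filtered off before you ever saw it, while the 1 kHz and 18 kHz products would survive - which is the whole attraction of this particular test signal.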
By the way, UK hi-fi reviewer Paul Miller has been performing, and publishing the results of, 3- and 4-tone IM tests for several years. His are really quite funky, using typically a couple of steady tones plus one swept. His rainbow-coloured pseudo-3D graphs are, I suggest, a model of intelligent data presentation. He's at www.milleraudioresearch.com, though having just looked I can't see any of the graphs in question! Old back issues of Hi-Fi Choice used to have lots of them.
Rich