Quote Originally Posted by Sir Terrence the Terrible
... arrange a DBT with a source encoded at 16/44.1khz, 16/48khz and 16/96khz, and compare it to a master.
I'll admit I'm not 100% up on audio recording processes and techniques, but I have a few points.

I think it is reasonable to assume that when recorded analog sound is first converted to digital, a high sample rate and long word length are wise choices, to capture as much information as possible. Working with that high-resolution data would then minimize losses, with the final stage being a downconversion to Redbook CD, as sketched below.
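To make that last step concrete, here is a rough sketch of the downconversion in Python (my own illustration, not anybody's actual mastering chain; a real one would use a mastering-grade resampler and noise-shaped dither rather than plain TPDF):

import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly

def to_redbook(x, src_rate=96_000, dst_rate=44_100):
    """Mono float samples in [-1.0, 1.0] at src_rate -> 16-bit/44.1 kHz."""
    # Rational resampling ratio; 96k -> 44.1k reduces to 147/320.
    ratio = Fraction(dst_rate, src_rate)
    y = resample_poly(x, ratio.numerator, ratio.denominator)
    # TPDF dither of +/- 1 LSB at the 16-bit target, then requantize.
    lsb = 1.0 / 32768.0
    dither = (np.random.rand(len(y)) - np.random.rand(len(y))) * lsb
    y = np.clip(y + dither, -1.0, 1.0 - lsb)
    return np.round(y * 32768.0).astype(np.int16)

# Example: a 1 kHz test tone "captured" at 96 kHz.
t = np.arange(96_000) / 96_000
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
cd_samples = to_redbook(tone)   # 44100 int16 samples, i.e. one second of Redbook audio

The point being that if capture and processing happen at high resolution, this single step is the only place where information is deliberately thrown away.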

So in my mind, the real question is this: once you have your 24/192 digital master, is there an audible difference or degradation when you downconvert it to 16/44.1 (i.e. the Redbook CD format)?
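The only honest way I can see to answer that is blind, along the lines of the DBT Sir Terrence describes. As a rough illustration of the idea (my own sketch; the present_trial playback callback is hypothetical, not any real test software), the core of an ABX run is just randomizing which clip plays as X and then checking the score against chance:

import random
from scipy.stats import binomtest

def run_abx(clip_a, clip_b, present_trial, trials=16):
    # present_trial(a, b, x) is assumed to play the two references and the
    # unknown X for the listener, then return their guess, 'A' or 'B'.
    correct = 0
    for _ in range(trials):
        x_is_a = random.random() < 0.5              # X is secretly A or B
        guess = present_trial(clip_a, clip_b, clip_a if x_is_a else clip_b)
        correct += (guess == ('A' if x_is_a else 'B'))
    # Probability of scoring at least this well by pure guessing.
    p = binomtest(correct, trials, 0.5, alternative='greater').pvalue
    return correct, p

If the listener can't beat chance with levels matched and identities hidden, then for that material and that listener the downconversion is transparent, whatever the spec sheet implies.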

I have read lots of claims that SACDs sound better than Redbook CDs, but the whole story isn't being told. How do we know what source the engineer used to produce the SACD? Perhaps the Redbook master was not done as well as it could have been, while the SACD was better produced. In other words, the SACD may sound better, but not because it is a higher digital resolution.

I think this is highlighted when an existing CD is remastered and remixed and sounds better than the original CD. Anyway, being an engineer, I am always skeptical and want to investigate when I hear people claim "more is better": more bandwidth, higher data rates, longer word lengths, more jitter reduction, etc. Sometimes "more" doesn't do a damn thing for you other than waste money.

And in audio land, the need for "more" is very pervasive and may cloud people's judgement.