Quote Originally Posted by CyberStoic
Hi Thomas;

I agree, there are probably many here with enough background in rudimentary statistics or the scientific method to be able to discuss

Like ROJ, I am a clinical psychologist, trained in Cognitive-Behavioral Psychology, and I had more than my fair share of Sadistics (statistics) classes. In the example study that bturk667 performed, I have two concerns about its ability to support any kind of generalization, owing to design errors and sample size.

First, having all the listeners listen and report at the same time introduces a serious confounding error. It is well established in the social psychology of group dynamics that people tend to conform to group expectations and can easily read the social cues that others provide about what is expected. An evaluator presenting a stimulus can readily cue the respondent as to the socially desirable or expected answer. In addition, group members tend to conform their behavior and accounts to the group's demands and dynamics. In this case, since everyone was in the same listening situation and some endorsed hearing differences, the social expectation (conveyed verbally, visually, or non-verbally) may well have led others to endorse hearing differences as well. This is nothing new in group dynamics, and the phenomenon is well studied. For example, alcoholics in recovery groups tend to conform their stories to those that predominate in the group they attend (e.g., reporting additional symptoms, blackouts, etc.) because doing so elicits higher group approval.

The second confounding variable is the extremely small sample size. To do a decent study of perceptual differences that you could actually generalize from, you would need hundreds, if not thousands, of subjects. You would need to control for hearing, age, equipment, and probably many other factors to draw meaningful results. A result from a sample of four could be nothing more than a statistical aberration; it is impossible to generalize from such a small sample.
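To put a number on how little a sample of four can show, here is a minimal sketch (my own illustration, not from the study itself): treat each of the four listeners as one yes/no trial and compute the exact one-sided binomial probability under the null hypothesis that responses are pure chance. Even a unanimous result fails the conventional alpha = 0.05 criterion.

```python
from math import comb

def binom_p_value(successes, trials, p=0.5):
    """One-sided exact binomial p-value: P(X >= successes) under chance."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Even if all 4 listeners answered "correctly", the outcome is still
# consistent with guessing at the alpha = 0.05 level:
print(binom_p_value(4, 4))  # 0.0625
print(binom_p_value(3, 4))  # 0.3125
```

In other words, no possible outcome from four subjects could reach significance at 0.05, which is one concrete sense in which the sample is too small.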

Large studies of this type cost money. Big money. A decent, well-controlled study with a large sample can cost upwards of $300K. Most people don't have that kind of money, and studies of this type generally get funding from grants or industry. I would think it self-evident that the wire industry will never sponsor such studies: if a study failed to show a demonstrable benefit from their wire, they would be cutting their own throats. The risk is simply too great to fund such a study.

Take care
I am not aware of any study using blinded testing for audible differences in cables that was done with a scientific sample of listeners. You could obtain such a sample if you knew what population you wanted it to represent, but I don't think any of the cable studies have gone that far.

Most of the cable studies that look for statistical significance don't even state the hypothesis being tested. However, here is what is usually implied: hypothesis -- the cables are audibly different; null hypothesis -- the cables are not audibly different.
Note that the hypothesis does not say audibly different to most listeners, or to a certain proportion of listeners; it just says audibly different. Therefore, just one listener performing well enough could support the hypothesis.
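The "one listener" point can be made concrete with a hedged, illustrative calculation (the trial counts here are my own example, not figures from any actual cable study): a single listener in a 16-trial blind ABX test who identifies 13 of 16 correctly would reject the chance-only null hypothesis at alpha = 0.05.

```python
from math import comb

def binom_p_value(successes, trials, p=0.5):
    """One-sided exact binomial p-value: P(X >= successes) under chance."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# 13 of 16 correct identifications by one listener is unlikely
# under guessing, so that single listener's result alone would
# count against the null hypothesis:
p = binom_p_value(13, 16)
print(round(p, 4))  # 0.0106
```

This is why the implied hypothesis is so weak a claim: the whole panel need not hear a difference, only one listener, reliably.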

The few blinded studies that have been done on "comparable" cables have their flaws, but to my knowledge no listener has yet performed well enough in such a study to support the hypothesis. This has been enough evidence for some to conclude there are no audible differences in cables. I am among those who don't find such evidence convincing.

You are right about decent studies being expensive. I doubt we will be seeing many on cables.