Results 1 to 24 of 24
  1. #1
    Forum Regular
    Join Date
    Dec 2003
    Posts
    20

    Question about stereo imaging

    Here's how I'm seeing this: in any room, there are reflections from walls, furniture, whatever. The reflected waves can reflect again (higher-order reflections), although attenuated. These reflections are non-uniform over frequency: hard objects absorb low frequencies, soft objects absorb high frequencies. To make things even messier, high frequencies are more directional.
    So, what gets to your ears in the end: the direct wave from the speaker, plus delayed reflected waves with frequency content modified by the reflecting object, plus waves that reflect from more than one object, with longer delay because of the greater travelled distance and frequency content affected even more. Put simply, this is an acoustic mess.
    So my question is: why do speakers nevertheless provide better stereo imaging than headphones, which do not suffer from such acoustic problems?

  2. #2
    Forum Regular
    Join Date
    Nov 2003
    Posts
    123

    I am not 100% sure about this but:

    Quote Originally Posted by codebutcher
    Here's how I'm seeing this: in any room, there are reflections from walls, furniture, whatever. The reflected waves can reflect again (higher-order reflections), although attenuated. These reflections are non-uniform over frequency: hard objects absorb low frequencies, soft objects absorb high frequencies. To make things even messier, high frequencies are more directional.
    So, what gets to your ears in the end: the direct wave from the speaker, plus delayed reflected waves with frequency content modified by the reflecting object, plus waves that reflect from more than one object, with longer delay because of the greater travelled distance and frequency content affected even more. Put simply, this is an acoustic mess.
    So my question is: why do speakers nevertheless provide better stereo imaging than headphones, which do not suffer from such acoustic problems?
    As I see it, the worst thing about listening to music through headphones is that any sound coming from, say, the left-hand speaker only goes into your left ear only, and vice versa for the right ear. This is obviously not what happens in a "normal" listening environment. Even sounds that come out of the left-hand speaker only in a room are heard by both ears. It is the combination of the delay in the signal reaching your ears, plus the volume differences, that tells your brain where the sound is coming from.

    If anyone produced a headphone that played the left-ear signal into the right ear as well, with a delay and a volume reduction, that might well improve the stereo imaging.

    I would love to play with a set of headphones that did such a thing....

    In theory - if they can come up with the electronics to do this - then a set of headphones could produce a truly surround-sound experience second to none. I did try out a pair of Sony headphones a while back that were supposed to do this very thing - but the result was not particularly impressive.
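    The delayed, attenuated bleed described above is easy to prototype in software. Here is a minimal Python sketch of the idea; the ~0.3 ms delay and the -6 dB cut are illustrative guesses, not values taken from any real crossfeed product:

```python
SAMPLE_RATE = 44100

def crossfeed(left, right, delay_s=0.0003, atten_db=-6.0):
    """Return (left_out, right_out) with a delayed, attenuated copy
    of the opposite channel mixed into each ear."""
    delay = int(round(delay_s * SAMPLE_RATE))   # delay in whole samples
    gain = 10.0 ** (atten_db / 20.0)            # dB -> linear gain
    left_out = [s + gain * (right[i - delay] if i >= delay else 0.0)
                for i, s in enumerate(left)]
    right_out = [s + gain * (left[i - delay] if i >= delay else 0.0)
                 for i, s in enumerate(right)]
    return left_out, right_out

# A click hard-panned to the left channel:
left = [1.0] + [0.0] * 99
right = [0.0] * 100
l_out, r_out = crossfeed(left, right)
# The right channel now carries the click ~13 samples later, at about half level.
```

    A real crossfeed circuit would also low-pass the bleed, since the head shadows mostly high frequencies, but even this crude version moves a hard-panned sound off the ear.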

  3. #3
    Forum Regular
    Join Date
    Dec 2003
    Posts
    20
    There are very expensive special headphone amplifiers that simulate that effect. Check out the Orpheus combo from Sennheiser ($14,000)

  4. #4
    Forum Regular
    Join Date
    Nov 2003
    Posts
    123
    $14,000!!! for a pair of headphones???

    I think it is rather cheaper to redo the living room to get better acoustics - and throw in a fur coat for the wife - or a diamond ring if fur coats are still not PC...and maybe another 1000 records...

    I would like something closer to the $150 range if you can find anything....

  5. #5
    Forum Regular
    Join Date
    Dec 2003
    Posts
    20
    I don't want to go into philosophy or anything, but this raises the question of who actually buys this kind of stuff. And if people who can afford it really appreciate it. And if those who can appreciate it afford it. Check out the review for the German Physics speakers that you can find on this site (if you don't know what I mean, be prepared, they cost around $200,000)

  6. #6
    Sgt. At Arms Worf101's Avatar
    Join Date
    Nov 2003
    Location
    Troy, New York
    Posts
    4,288

    That, my son....

    Quote Originally Posted by codebutcher
    I don't want to go into philosophy or anything, but this raises the question of who actually buys this kind of stuff. And if people who can afford it really appreciate it. And if those who can appreciate it afford it. Check out the review for the German Physics speakers that you can find on this site (if you don't know what I mean, be prepared, they cost around $200,000)
    is a subject for another day. I'm sure that there are rich audiophiles who can afford the very best and ENJOY what they've bought, and I'm equally sure that there are some who spend money for "the best" without any appreciation of what "the best" actually sounds like. I have no opinion either way. I generally tend to avoid generalizations.

    Da Worfster

  7. #7
    Loving This kexodusc's Avatar
    Join Date
    Nov 2003
    Location
    Department of Heuristics and Research on Material Applications
    Posts
    9,025
    Quote Originally Posted by Worf101
    is a subject for another day. I'm sure that there are rich audiophiles who can afford the very best and ENJOY what they've bought, and I'm equally sure that there are some who spend money for "the best" without any appreciation of what "the best" actually sounds like. I have no opinion either way. I generally tend to avoid generalizations.

    Da Worfster
    You are an honorable man Worfster,

    If I had the biggest, best, and most expensive equipment, you can guarantee I'd be here every freakin' day bad mouthing every speaker that I currently wish I could afford to own, bragging about how superior my system is and how you have to have at least an $80,000 setup to either call yourself an audiophile, or expect anyone to take you seriously. I'd have a Bryston in my Dodge Viper.
    That which I hate is what I desire...

    As it stands, I'll continue to be happy to make monthly payments on my credit card, order small fries with that, and work some overtime to be happy with what I've got.

    I gots to hear me a set of these $14,000 headphones.

  8. #8
    Sgt. At Arms Worf101's Avatar
    Join Date
    Nov 2003
    Location
    Troy, New York
    Posts
    4,288

    Thanks for the love Kex....

    Quote Originally Posted by kexodusc
    You are an honorable man Worfster,

    If I had the biggest, best, and most expensive equipment, you can guarantee I'd be here every freakin' day bad mouthing every speaker that I currently wish I could afford to own, bragging about how superior my system is and how you have to have at least an $80,000 setup to either call yourself an audiophile, or expect anyone to take you seriously. I'd have a Bryston in my Dodge Viper.
    That which I hate is what I desire...

    As it stands, I'll continue to be happy to make monthly payments on my credit card, order small fries with that, and work some overtime to be happy with what I've got.

    I gots to hear me a set of these $14,000 headphones.
    Personally, I don't think I'd want to HANDLE headphones that cost that much. I'd be too worried about steppin' on the cords or scratchin' sumpthin'. How can you feel comfortable around gear that costs that much? Could you see playing a record on a turntable and cartridge combo that costs more than some folks' homes, when you know that playing a vinyl record is by its very nature a destructive act? I'd rather be a "midfi man" and be able to sleep at night than go that other route...

    Da Worfster

  9. #9
    Forum Regular Sealed's Avatar
    Join Date
    Feb 2004
    Posts
    189

    Imaging: headphone vs speaker

    The whole presentation is different here.

    With headphones, the "image" is a stereo soundstage projected across the plane of your head, from side to side only. This is very different, except when binaural recordings compensate for it.

    With speakers, the soundstage fans out in front of you.

  10. #10
    Forum Regular
    Join Date
    Dec 2003
    Posts
    20
    Quote Originally Posted by Sealed
    The whole presentation is different here.

    With headphones, the "image" is a stereo soundstage projected across the plane of your head, from side to side only. This is very different, except when binaural recordings compensate for it.

    With speakers, the soundstage fans out in front of you.
    I don't know details about this, but it seems to me that the whole difference is made in the mixing phase, right?

  11. #11
    Forum Regular
    Join Date
    Jan 2002
    Posts
    277

    There is a lot-o stuff that makes this all happen

    The goal is to trick your brain into thinking that something is happening in front of you that really isn't. Your brain processes all the information received and tries to assign a position to it in the space in front of you. What kept your caveman forefathers alive is used today for our entertainment. Kinda cool huh?

    If you have a really good system that disperses sound into a room somewhat evenly, keeps phase relationships somewhat intact, and offers clear information that your brain can correctly (actually "incorrectly") process, you get stereo imaging. The better it is, the more of a "picture" you get.

    This is your brain....This is your brain on hi-fi...
    Space

    The preceding comments have not been subjected to double-blind testing, and so must be taken as casual observations, not given the weight of actual scientific data to be used to prove a case in a court of law or a scientific journal. The comments represent my humble opinion, which, depending on the reader's perspective, will range from Gospel to heresy. So let it be.

  12. #12
    Forum Regular Chuck's Avatar
    Join Date
    Dec 2003
    Posts
    79
    The short answer is that we use not only the inner ear, but also the outer ear and head (among other things) to determine the direction sounds come from. Headphones remove important cues, so the brain doesn’t get the correct inputs for "out in the room" imaging.

  13. #13
    -cc
    AR Newbie Registered Member
    Join Date
    Feb 2004
    Posts
    2
    One of the things human hearing is most sensitive to is the arrival time of a sound. Almost all of our stereo imaging comes from the difference in arrival times between our two ears, not from the directionality or volume of the sounds. We are also extremely good at discriminating the original sound from echoes.


    This capability is why a single sub-woofer is not really the same as full range speakers. We can tell where the sub sound came from. If it is not the same place as the higher frequencies, our brain gets confused and the soundscape deteriorates. Even bass instruments have higher frequency elements to their sounds. This is also why "time aligning" your tweeter, mid, and woofer pays such big dividends.
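    To put rough numbers on the arrival-time cue, here is a small Python sketch using the classic Woodworth spherical-head approximation; the 8.75 cm head radius is a textbook average, assumed here for illustration:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, air at ~20 degrees C
HEAD_RADIUS = 0.0875     # m, assumed average adult head radius

def itd_seconds(azimuth_deg):
    """Interaural time difference for a distant source at the given
    azimuth (0 = straight ahead, 90 = directly to one side), using
    Woodworth's approximation: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} degrees -> {itd_seconds(az) * 1e6:5.0f} microseconds")
```

    The whole range of the cue is under a millisecond, which is some sense of how finely the ear/brain system resolves timing.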

    For those who want to experiment with cross-feed in headphones, try this website. These folks build high-quality headphone amps, including ones with cross-feed. All the instructions are there for DIY types, but you can also have someone build one for you. A couple of these guys have regular businesses selling these amps on eBay for $100 or so without cross-feed. But they'll build whatever you want.

    http://headwize2.powerpill.org/index.htm

    -cc

  14. #14
    M.P.S.E /AES/SMPTE member Sir Terrence the Terrible's Avatar
    Join Date
    Jul 2002
    Posts
    6,826
    Quote Originally Posted by codebutcher
    Here's how I'm seeing this: in any room, there are reflections from walls, furniture, whatever. The reflected waves can reflect again (higher-order reflections), although attenuated. These reflections are non-uniform over frequency: hard objects absorb low frequencies, soft objects absorb high frequencies. To make things even messier, high frequencies are more directional.
    You do not have this quite right. Hard objects REFLECT low frequencies, not absorb them. Soft objects, depending on their porousness (is that a word?), will absorb high frequencies. There are some soft materials with a tight porous structure that actually reflect high frequencies.
    Sir Terrence

    Titan Reference 3D 1080p projector
    200" SI Black Diamond II screen
    Oppo BDP-103D
    Datastat RS20I audio/video processor 12.4 audio setup
    9 Onkyo M-5099 power amp
    9 Onkyo M-510 power amp
    9 Onkyo M-508 power amp
    6 custom CAL amps for subs
    3 custom 3 way horn DSP hybrid monitors
    18 custom 3 way horn DSP hybrid surround/ceiling speakers
    2 custom 15" sealed FFEC servo subs
    4 custom 15" H-PAS FFEC servo subs
    THX Style Baffle wall

  15. #15
    Forum Regular Chuck's Avatar
    Join Date
    Dec 2003
    Posts
    79
    Quote Originally Posted by -cc
    One of the things human hearing is most sensitive to is the arrival time of a sound. Almost all of our stereo imaging comes from the difference in arrival times between our two ears, not from the directionality or volume of the sounds. We are also extremely good at discriminating the original sound from echoes.
    Differences in arrival time at the two ears is but one of the factors, and actually NOT a critical factor, in that people who are deaf in one ear can still detect the direction from which a sound is coming. It has been shown that such people will lose this capability if the pena of their good ear is filled with wax. The mechanism here is that the sound reflected from the pena into the ear canal has its phase shifted relative to the direct sound due to having traveled a slightly longer distance. Being shifted in phase and mixed produces a comb-filter effect which is analyzed by the brain to determine distance and direction. Visual inputs can also be factored in. Even amplitude differences can dominate if they are strong enough.
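    The delay-and-add geometry described here can be sketched in Python. A copy delayed by tau cancels the direct sound wherever the delay equals an odd number of half-periods, i.e. at f = (2k + 1) / (2 * tau); the 2 cm extra path length below is an illustrative assumption, not a measurement:

```python
SPEED_OF_SOUND = 343.0  # m/s

def notch_frequencies(extra_path_m, count=3):
    """First few cancellation (comb-filter notch) frequencies for a
    reflection that travels extra_path_m farther than the direct sound."""
    tau = extra_path_m / SPEED_OF_SOUND          # delay in seconds
    return [(2 * k + 1) / (2 * tau) for k in range(count)]

# ~2 cm of extra travel around the outer ear puts the first notch
# well up in the treble, where spectral-shape cues live.
for f in notch_frequencies(0.02):
    print(f"notch at {f / 1000:.1f} kHz")
```

    Notches spaced in a 1:3:5 ratio like this are the comb-filter signature the post refers to.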

    It remains to be seen if DSP or other processing can recreate the same conditions in the ear through headphones that are created in a more natural environment, but it is probably possible to create some convincing imaging with current technologies. I'll believe it when I hear it.

  16. #16
    Forum Regular
    Join Date
    Dec 2003
    Posts
    20
    The fact that the ear is not the only factor that gives the final sensation of directionality is very interesting. I had no idea about that. I'll search Google about this issue; I guess there are a lot of useful things to learn.

  17. #17
    Forum Regular Chuck's Avatar
    Join Date
    Dec 2003
    Posts
    79
    Quote Originally Posted by codebutcher
    The fact that the ear is not the only factor that gives the final sensation of directionality is very interesting. I had no idea about that. I'll search Google about this issue; I guess there are a lot of useful things to learn.
    If I can find some good links, I'll post them.

  18. #18
    Forum Regular Chuck's Avatar
    Join Date
    Dec 2003
    Posts
    79
    One interesting link:

    http://www.earaces.com/anatomy.htm

  19. #19
    Forum Regular Swerd's Avatar
    Join Date
    Jan 2003
    Location
    Gaithersburg, MD
    Posts
    185
    Quote Originally Posted by Chuck
    If I can find some good links, I'll post them.
    Hey Chuck

    I had never heard of this before you mentioned it. It is interesting so I looked it up - the effect of the pinna on human ears has been known for some 10 to 15 years. Summarizing from some references I found:


    • Sound localization relies on the neural processing of external acoustic cues.
    • Interaural differences in time and sound level provide robust information regarding sound-source azimuth.
    • The pinnae provide spectral shape cues, enabling extraction of sound elevation, and frontal versus rear locations.
    • Other factors may also contribute to spatial hearing. These include vision (e.g., the ‘ventriloquist illusion’), head movements, and expectation about upcoming target locations.
    • Apparently sound localization can even be influenced by the position of the eyes and/or head, even in total darkness.

    Two recent links:

    http://www.mbfys.kun.nl/mbfys/people/johnvo/papers/binauralweighting.pdf

    http://www.nature.com/cgi-taf/DynaPa...n0998_417.html

    The 2nd link describes "the existence of ongoing spatial calibration in the adult human auditory system. The spectral elevation cues of human subjects were disrupted by modifying their outer ears (pinnae) with molds. Although localization of sound elevation was dramatically degraded immediately after the modification, accurate performance was steadily reacquired. Interestingly, learning the new spectral cues did not interfere with the neural representation of the original cues, as subjects could localize sounds with both normal and modified pinnae."

    This clearly demonstrates the importance of the auditory cortex in sound localization. Sound perception is not just a physical phenomenon, but requires active participation (learning) by the brain.

  20. #20
    M.P.S.E /AES/SMPTE member Sir Terrence the Terrible's Avatar
    Join Date
    Jul 2002
    Posts
    6,826
    Quote Originally Posted by Chuck
    Differences in arrival time at the two ears is but one of the factors, and actually NOT a critical factor, in that people who are deaf in one ear can still detect the direction from which a sound is coming. It has been shown that such people will lose this capability if the pena of their good ear is filled with wax. The mechanism here is that the sound reflected from the pena into the ear canal has its phase shifted relative to the direct sound due to having traveled a slightly longer distance. Being shifted in phase and mixed produces a comb-filter effect which is analyzed by the brain to determine distance and direction. Visual inputs can also be factored in. Even amplitude differences can dominate if they are strong enough.

    It remains to be seen if DSP or other processing can recreate the same conditions in the ear through headphones that are created in a more natural environment, but it is probably possible to create some convincing imaging with current technologies. I'll believe it when I hear it.
    Chuck,

    I understand what you are trying to say, however you have some things kinda mixed up, and they need some clarification.

    Differences in arrival time at the two ears is but one of the factors, and actually NOT a critical factor, in that people who are deaf in one ear can still detect the direction from which a sound is coming.
    It is important when dealing with direction to break things down just a bit because the ear does different things when ascertaining L/R direction, and up and down directions.

    Left and right direction is determined by time of arrival AND amplitude at both ears. It is helpful that our ears are located on the sides of our heads, because it makes it easier to clue in on these two parameters. People deaf in one ear have only a limited ability to detect direction, and it is basically limited to sound directed toward the working ear. The head will usually block the high-frequency information (which aids us in determining direction) to the point where it is difficult to clearly tell direction on the deaf side.

    The time of arrival at our ears determines the direction. The amplitude determines the distance. The wavelength and frequency assist us with both. As the frequency drops and wavelengths get longer (than the distance between our ears), it becomes more difficult to determine direction. That is why we cannot hear the direction of the first seismic waves of earthquakes.

    The hardest sounds for the ears to localize are sounds that originate directly in front of, or in back of, us. That is because there is no time difference between the signals' (sounds') arrival at the ears.

    You can determine when a sound comes from above, below or in front of the face. This is especially true with high-frequency sounds. Height information is provided by a small amount of reflection off the back edge of the ear lobe. This reflection is out of phase for one specific sound frequency, and the elongated shape of the lobe causes that frequency to vary with the angle of the source of the sound. You can then tell the direction. Height detection does not work well for sounds originating to the side or back, or those lacking high-frequency content.
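    The wavelength argument can be made concrete with a few lines of Python; the 18 cm ear spacing is an assumed round figure for a typical adult head:

```python
SPEED_OF_SOUND = 343.0   # m/s, air at ~20 degrees C
EAR_SPACING = 0.18       # m, assumed typical distance between the ears

def wavelength(freq_hz):
    """Wavelength in metres of a tone at the given frequency."""
    return SPEED_OF_SOUND / freq_hz

def ambiguous_above():
    """Frequency above which the wavelength is shorter than the head,
    so interaural phase comparisons start to become ambiguous."""
    return SPEED_OF_SOUND / EAR_SPACING

print(f"Phase cue degrades above ~{ambiguous_above():.0f} Hz")
print(f"A 50 Hz bass note is {wavelength(50):.1f} m long")  # far larger than a head
```

    This is the same reason a subwoofer's placement is hard to localize by ear: deep bass wavelengths dwarf the head.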



    It has been shown that such people will lose this capability if the pena of their good ear is filled with wax.
    It is quite tough to fill the pinna with wax. The pinna is the outer edge of our ears, and it would be VERY unsightly to see it full of wax. It would also be impossible to hear at all.

    The mechanism here is that the sound reflected from the pena into the ear canal has its phase shifted relative to the direct sound due to having traveled a slightly longer distance. Being shifted in phase and mixed produces a comb-filter effect which is analyzed by the brain to determine distance and direction.
    The phenomenon you are describing actually interferes with our ability to determine direction clearly. It is called "interaural crosstalk". In audio it is advantageous to minimize this effect, and whole technologies have been invented for that purpose: Polk Audio's SDA line of speakers in the '80s, Bob Carver's Sonic Holography found in amps and receivers, and various processing boxes called image enhancers by BSR.

    Visual inputs can also be factored in. Even amplitude differences can dominate if they are strong enough.
    Visual input is a huge factor. I have already mentioned the role that amplitude plays.
    Once sounds enter the inner ear, the whole process becomes MUCH more complicated.
    Sir Terrence


  21. #21
    Forum Regular Chuck's Avatar
    Join Date
    Dec 2003
    Posts
    79
    Hi Terrence,

    The processing is not as compartmentalized as you think, though that thinking was popular 20 years ago. Before you say that I'm confused, and try to tell me what I'm talking about (I am NOT talking about IA crosstalk, so you're obviously confused), and before you post further long-winded arguments, please check out the following work (as I noted before, I cannot find Web links, so you'll have to do some homework):

    M. B. Gardner and R. S. Gardner, "Problems of Localization in the Median Plane: Effect of Pinnae Cavity Occlusion." Journal of the ASA, vol. 36, no. 3, pp 465-470, published 1966.

    D. W. Batteau, "The Role of the Pinna in Human Localization." Proceedings Royal Society, B168 (1011), pp 158-180, publication year unknown.

    J. Hebrank and D. Wright, "Are Two Ears Necessary for Localization of Sound Sources in the Median Plane?" Journal of the ASA, vol. 56, no. 3, pp 935-938, published 1974.

    C. P. A. Rogers, "Pinna Transformations and Sound Reproduction." JAES, vol. 29, no. 4, pp 226-234, published April 1981.

    There are many other articles that address other factors, but I think you'll find these very informative, considering how badly you missed the point of my post.

    Thanks for checking this stuff out before leaving a long post arguing about work you haven't seen yet.

  22. #22
    Forum Regular Chuck's Avatar
    Join Date
    Dec 2003
    Posts
    79
    Quote Originally Posted by Swerd
    Hey Chuck

    I had never heard of this before you mentioned it. It is interesting so I looked it up - the effect of the pinna on human ears has been known for some 10 to 15 years. Summarizing from some references I found:


    • Sound localization relies on the neural processing of external acoustic cues.
    • Interaural differences in time and sound level provide robust information regarding sound-source azimuth.
    • The pinnae provide spectral shape cues, enabling extraction of sound elevation, and frontal versus rear locations.
    • Other factors may also contribute to spatial hearing. These include vision (e.g., the ‘ventriloquist illusion’), head movements, and expectation about upcoming target locations.
    • Apparently sound localization can even be influenced by the position of the eyes and/or head, even in total darkness.

    Two recent links:

    http://www.mbfys.kun.nl/mbfys/people/johnvo/papers/binauralweighting.pdf

    http://www.nature.com/cgi-taf/DynaPa...n0998_417.html

    The 2nd link describes "the existence of ongoing spatial calibration in the adult human auditory system. The spectral elevation cues of human subjects were disrupted by modifying their outer ears (pinnae) with molds. Although localization of sound elevation was dramatically degraded immediately after the modification, accurate performance was steadily reacquired. Interestingly, learning the new spectral cues did not interfere with the neural representation of the original cues, as subjects could localize sounds with both normal and modified pinnae."

    This clearly demonstrates the importance of the auditory cortex in sound localization. Sound perception is not just a physical phenomenon, but requires active participation (learning) by the brain.
    Thanks for tracking down all those links! It's nice to know that not everyone will argue without checking their facts first.

    I posted some hard references to research that directly addresses the two-ear direction-finding issue in response to another post (where the post was much longer and much less well informed than yours). If you are interested in looking into this further, you might find it worthwhile to track down some of the reports. I will not repeat the references here, because typing them in accurately is a bear.

    Good links. I'll be putting them in my bookmarks.

    Thanks again,

    Chuck

  23. #23
    Forum Regular Swerd's Avatar
    Join Date
    Jan 2003
    Location
    Gaithersburg, MD
    Posts
    185
    Quote Originally Posted by Chuck
    Thanks for tracking down all those links! It's nice to know that not everyone will argue without checking their facts first.
    Chuck
    Sir T is usually very well informed, so I wouldn't dismiss what he says just yet.

    I did a PubMed search
    http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed and found that there is a fair amount of research on this subject. I found some 57 scientific references when I searched for "Sound Localization and Pinna". Some of them were done with human subjects and some studied cats, ferrets, or owls, so the results may vary with the test animal.

    I also found while scanning the titles and abstracts (no, I didn't read the papers in detail) that there seems to be an incomplete understanding of what part of sound localization is played by the time and/or tonal differences heard between the two ears, and what part is played by the external ear and head structures. The topic is actively being sorted out, so if you read a textbook or a review, what it says depends on when it was published and who wrote it.

    I think it is agreed that for lower frequencies, the time difference heard between the two ears is most important, especially in the horizontal plane (azimuth). For sounds above 4 kHz, the pinnae play an important role, especially for judging sound localization in the vertical plane.

    The most recent general review of this subject was published in 1991.

    Middlebrooks, J. C. and D. M. Green (1991). "Sound localization by human listeners." Annual Review of Psychology 42: 135-59.

    Their abstract says:
    In keeping with our promise earlier in this review, we summarize here the process by which we believe spatial cues are used for localizing a sound source in a free-field listening situation. We believe it entails two parallel processes: 1) The azimuth of the source is determined using differences in interaural time or interaural intensity, whichever is present. Wightman and colleagues (1989) believe the low-frequency temporal information is dominant if both are present. 2) The elevation of the source is determined from spectral shape cues. The received sound spectrum, as modified by the pinna, is in effect compared with a stored set of directional transfer functions. These are actually the spectra of a nearly flat source heard at various elevations. The elevation that corresponds to the best-matching transfer function is selected as the locus of the sound. Pinnae are similar enough between people that certain general rules (e.g. Blauert's boosted bands or Butler's covert peaks) can describe this process. Head motion is probably not a critical part of the localization process, except in cases where time permits a very detailed assessment of location, in which case one tries to localize the source by turning the head toward the putative location. Sound localization is only moderately more precise when the listener points directly toward the source. The process is not analogous to localizing a visual source on the fovea of the retina. Thus, head motion provides only a moderate increase in localization accuracy. Finally, current evidence does not support the view that auditory motion perception is anything more than detection of changes in static location over time.

    I am trying to get a copy of the pdf file for the article, but something from 1991 might not be readily available online.

    If you want the list of 57 references (26 are done with human subjects), email or PM me, and I'll send it to you.

  24. #24
    M.P.S.E /AES/SMPTE member Sir Terrence the Terrible's Avatar
    Join Date
    Jul 2002
    Posts
    6,826
    Quote Originally Posted by Chuck
    Hi Terrence,

    The processing is not as compartmentalized as you think, though that thinking was popular 20 years ago. Before you say that I'm confused, and try to tell me what I'm talking about (I am NOT talking about IA crosstalk, so you're obviously confused), and before you post further long-winded arguments, please check out the following work (as I noted before, I cannot find Web links, so you'll have to do some homework):

    M. B. Gardner and R. S. Gardner, "Problems of Localization in the Median Plane: Effect of Pinnae Cavity Occlusion." Journal of the ASA, vol. 36, no. 3, pp 465-470, published 1966.

    D. W. Batteau, "The Role of the Pinna in Human Localization." Proceedings Royal Society, B168 (1011), pp 158-180, publication year unknown.

    J. Hebrank and D. Wright, "Are Two Ears Necessary for Localization of Sound Sources in the Median Plane?" Journal of the ASA, vol. 56, no. 3, pp 935-938, published 1974.

    C. P. A. Rogers, "Pinna Transformations and Sound Reproduction." JAES, vol. 29, no. 4, pp 226-234, published April 1981.

    There are many other articles that address other factors, but I think you'll find these very informative, considering how badly you missed the point of my post.

    Thanks for checking this stuff out before leaving a long post arguing about work you haven't seen yet.
    Chuck,

    I think it is pretty simple of you to assume that I haven't read a great deal of this information. It is also rather simple of you to try and belittle my post(by mentioning its length and by referring to it as dated) as a way to spin that fact you are rather confused in your explaination of a KNOWN phenomina to anyone who as ever studied ear/brain functions. Its one thing to be able to quote crap you have gleaned from the internet, it's another to be able to convey that into a cohesive thought process that can be explained through a computer. That takes knowledge of the subject matter, and it is here where you are a failure.

    What I find really hilarious is that you do not even know a PINNA from the inner ear, yet you think you can debate me on such a complex subject as this. Here is a link that describes what the different parts of the ear are:

    http://twist.lib.uiowa.edu/radio/Greg's%20Folder/The%20Human%20Ear.html

    Notice how it describes the Pinna (correctly spelled, I might add, not pena as you spell it) as the "VISIBLE PART OF THE OUTER EAR". Have you (OR ANYONE) ever known your outer ear to be filled with wax as you state? That would be nasty to look at, and would totally prevent any localization from that ear. As I stated before, if the PINNA (correctly spelled) were filled with wax, you wouldn't be able to hear a dang thing.

    The mechanism here is that the sound reflected from the pena into the ear canal has its phase shifted relative to the direct sound due to having traveled a slightly longer distance. Being shifted in phase and mixed produces a comb-filter effect which is analyzed by the brain to determine distance and direction.
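    The comb-filter effect described in the passage above can be sketched numerically. The following is a minimal Python illustration, not anyone's actual model: the 60-microsecond reflection delay and 0.5 attenuation are hypothetical values chosen only to show where the notches land.

    ```python
    import numpy as np

    fs = 48_000        # audio bandwidth to examine, in Hz (illustrative)
    delay = 60e-6      # hypothetical extra travel time via the pinna reflection (~60 us)
    a = 0.5            # hypothetical attenuation of the reflected copy

    # The ear canal receives the direct sound plus a delayed, attenuated copy:
    #   y(t) = x(t) + a * x(t - delay)
    # whose magnitude response is |H(f)| = |1 + a * exp(-j * 2*pi*f * delay)|.
    f = np.linspace(0.0, fs / 2, 100_000)
    H = np.abs(1.0 + a * np.exp(-2j * np.pi * f * delay))

    # Notches fall where the delayed copy arrives out of phase with the direct sound:
    #   f_notch = (2k + 1) / (2 * delay), k = 0, 1, 2, ...
    first_notch = f[np.argmin(H)]
    print(f"first comb-filter notch near {first_notch / 1000:.2f} kHz")
    # -> first comb-filter notch near 8.33 kHz
    ```

    The point of the sketch is that the notch pattern shifts as the reflection delay changes with source direction, which is the kind of spectral cue the brain can exploit for localization.
    
    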
    http://www.ambiophonics.org/Ch_3_amb...nd_edition.htm

    http://ourworld.compuserve.com/homep...ice/franci.htm


    Go to these links (which I have provided for you instead of making you search for them... sarcasm off). Go down to where it describes "Stereo crosstalk" in the first link. Look at figure 1 as it instructs you to do. Now go to the second link and read there. Any person who is able to read with some comprehension could clearly see that what you described above has a glaring similarity to what they describe as "stereo crosstalk", not localization. Ray Charles could see this!

    I am going to stop right here, because an individual who claims to have such detailed knowledge of ear/brain interactivity yet cannot spell PINNA, or doesn't know where it is located, cannot debate the issue with any kind of intelligence. This is BASIC knowledge. If they don't know the difference between localization and interaural crosstalk (which is fundamental knowledge of auditory function), then they do not have the necessary BASIC tools to debate the subject.

    So now that it has been discovered that you are not quite as smart as you think, you can spare me the Mtry-like chest thumping and egomaniacal bravado, and try this again without all of the attitude and bull.

    Can you now see why it is not so bright to make assumptions? Next time you would like to debate an issue, first be knowledgeable about what you are referring to, and next be able to provide links to support your arguments. It is up to you to prove your point and support it, not for me to search for it.
    Last edited by Sir Terrence the Terrible; 03-04-2004 at 11:41 AM.
    Sir Terrence

    Titan Reference 3D 1080p projector
    200" SI Black Diamond II screen
    Oppo BDP-103D
    Datastat RS20I audio/video processor 12.4 audio setup
    9 Onkyo M-5099 power amp
    9 Onkyo M-510 power amp
    9 Onkyo M-508 power amp
    6 custom CAL amps for subs
    3 custom 3 way horn DSP hybrid monitors
    18 custom 3 way horn DSP hybrid surround/ceiling speakers
    2 custom 15" sealed FFEC servo subs
    4 custom 15" H-PAS FFEC servo subs
    THX Style Baffle wall
