Dolphin and boat sounds (compression) [Archive] - Audio & Video Forums



05-09-2007, 10:36 PM
Hi everyone,

I am completely new to audio-related things as well as compression and have only started reading up on it over the past few days, so I was hoping some of you could offer me some advice! :)

I'm currently doing an honours project which will involve recording (underwater with a hydrophone) dolphin vocalizations in the presence and absence of boat noise. What I will then do is compress the dolphin signals (with and without boat noise) using lossless compression and compare the sizes of the files to determine which compresses more. The idea behind this is that complex sounds should compress less, so dolphin signals should compress less than boat noise. Therefore, if dolphin signals in the presence of boat noise compress more than dolphin signals with no boat noise, this would mean the dolphins are sort of "losing some information". Does that make sense? :confused5:
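(Edit: in case it helps anyone picture the method, here's roughly how the size comparison could be prototyped on synthetic data, using Python's zlib on raw 16-bit samples as a crude stand-in for a real lossless audio codec like FLAC. The "whistle" frequency, noise level, and sample rate are all made-up numbers for illustration, not real hydrophone data.)

```python
import math
import random
import struct
import zlib

random.seed(1)
SR = 44100  # sample rate in Hz (assumed for the example)

def to_bytes(samples):
    """Pack float samples in [-1, 1] as 16-bit PCM, like an uncompressed WAV payload."""
    return b"".join(struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples)

def compression_ratio(data):
    """Compressed size / original size: lower means more redundancy in the signal."""
    return len(zlib.compress(data, 9)) / len(data)

n = SR  # one second of audio
# A pure 8 kHz tone standing in for a clean "vocalization" (exactly periodic)
whistle = [0.5 * math.sin(2 * math.pi * ((t * 8000) % SR) / SR) for t in range(n)]
# The same tone with uniform noise added, standing in for boat noise
noisy = [w + 0.3 * (random.random() * 2 - 1) for w in whistle]

clean_r = compression_ratio(to_bytes(whistle))
noisy_r = compression_ratio(to_bytes(noisy))
print(f"clean: {clean_r:.3f}")
print(f"noisy: {noisy_r:.3f}")  # the added noise makes the file less compressible
```

The point of the toy is just that the same "vocalization" yields a worse compression ratio once noise is mixed in, which is the quantity the project would be measuring.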

I brought this up somewhere else and some people suggested that boat noise will actually compress less than dolphin signals. Do you agree? And why do you think this would be so? As I mentioned, I am completely new to these things; the idea was basically brought up to me by my supervisor, and I have been trying my best to learn as much as possible about it over the last few days.

Any help, ideas or comments would be greatly appreciated!

05-11-2007, 01:18 PM
I am an EE, not a scientist or hearing specialist, so the following is mostly opinion.

All lossless compression algorithms rely on either silent spots or repetition to compress files. White noise, by its random nature, will hardly compress at all.
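A quick way to see this for yourself: compress a buffer of pseudo-random bytes and a buffer of repeating bytes with zlib (standing in here for any lossless compressor) and compare the ratios. Toy buffers, not audio, but the principle is the same:

```python
import random
import zlib

random.seed(0)

# 64 KiB of pseudo-random bytes: nothing for the compressor to exploit
noise = bytes(random.randrange(256) for _ in range(65536))
# 64 KiB of a short repeating pattern: maximally exploitable redundancy
pattern = b"dolphin!" * (65536 // 8)

noise_ratio = len(zlib.compress(noise, 9)) / len(noise)
pattern_ratio = len(zlib.compress(pattern, 9)) / len(pattern)

print(f"random bytes:    ratio {noise_ratio:.3f}")   # barely shrinks, may even grow slightly
print(f"repeating bytes: ratio {pattern_ratio:.3f}") # shrinks to a tiny fraction
```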

In any event, your experiment may have little value. No one knows exactly how dolphins hear and think, but human ears are very capable of separating coherent signals from random noise. At a party you rarely have trouble understanding what is being said in your circle, in spite of any number of other conversations and even in the presence of fairly loud music.

It would seem that human hearing doesn't really process the background bits but has the ability to separate relevant signals from the background noise and then process those signals. Dolphins can most likely do this as well; compression algorithms cannot. They could do it partially if they discarded low-level signals, but then they would not be lossless. And the human brain doesn't merely discard noise: it seems to use a fast sliding filter that follows the valid sound content while rejecting the useless stuff. This is closer to an autocorrelator than to a compressor.

IMHO, I don't think compression methods do a very good job of mimicking the way the ear-brain interface processes meaningful information while discarding interference. Since dolphins rely on sound processing to eat, and since they have a very large area of their brain dedicated to sound processing, it is quite likely that they are better at this than humans.

Still, it's possible your experiment could shed light on these abilities; worst case, it would show how it's not done.

05-11-2007, 08:45 PM
I agree this isn't as simple as it might look at first. The dolphins' vocalizations will probably come off as chirps with silent spaces between them (which would be compressible), but the actual signal itself is, I imagine, a fairly complex set of frequencies that standard compression algorithms aren't designed to handle. To compress a signal like that, I suppose the algorithms would have to be purpose-built to recognize the signal and respond accordingly, and I don't know how well that could be done.

The boat noise would remove some of the inter-signal silence, but I don't know how you expect that to help you. You could isolate the frequencies of the boat (probably a fairly uniform spectrum) and remove them, but what the dolphins are doing is probably similar to how we isolate signal from noise. In a sense we probably recognize and ignore constant background frequencies, and we are also probably running constant correlations of sounds and words we know and expect to hear against the signal we are receiving, in order to isolate and pay attention to what matters. So we're matching known signals against what we're receiving, and also using context and a built-in library of expected signals to pull out the right "words" during a party.
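That "running constant correlations against known signals" idea is essentially what signal-processing people call a matched filter. A bare-bones sketch, with a made-up chirp playing the role of a "known word" and an invented noise level:

```python
import math
import random

random.seed(2)

def best_match_offset(signal, template):
    """Slide the template along the signal; return the offset with the highest correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(signal) - len(template) + 1):
        score = sum(signal[lag + i] * template[i] for i in range(len(template)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A short chirp stands in for a "word" the listener already knows
template = [math.sin(0.3 * i + 0.01 * i * i) for i in range(50)]
# Background noise with the chirp buried at a known offset
signal = [random.gauss(0, 0.3) for _ in range(300)]
true_offset = 120
for i, t in enumerate(template):
    signal[true_offset + i] += t

found = best_match_offset(signal, template)
print(found)  # lands at (or very near) the true offset of 120
```

Even though the chirp is invisible to the eye in the noisy samples, correlating against the known template pulls its location right out, which is loosely what "listening for expected words at a party" amounts to.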

Like hermanv said, perhaps the dolphins are doing the same thing.

Similar methods are used when researchers try to recover text from the Dead Sea Scrolls. Matched filtering is used to rank the most likely characters, and then, based on the most likely combinations as well as the current context (and a little user assistance), the most likely character is chosen. For example, in English, if we had a "t", then an "e", and we didn't know what was in the middle due to damage to the signal, it would be a good guess that the letter in the middle was an "h". At the same time, an unknown three-letter word containing only one letter of "the" would probably be read as "the", assuming the context were right, because it's such a common word.
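That two-part guess (how well the smudged glyph matches each letter, times how likely the resulting word is) can be written down directly. All the numbers below are invented purely for the example:

```python
# Candidate words for the damaged pattern "t_e", with toy frequencies (invented)
WORD_FREQ = {"the": 0.90, "tie": 0.04, "toe": 0.03, "tee": 0.03}
# How well each candidate middle letter matches the smudged glyph (also invented)
GLYPH_MATCH = {"h": 0.5, "i": 0.3, "o": 0.2}

def best_guess(word_freq, glyph_match):
    """Score each candidate word by (word frequency) x (visual match of its middle letter)."""
    scores = {w: f * glyph_match.get(w[1], 0.01) for w, f in word_freq.items()}
    return max(scores, key=scores.get)

print(best_guess(WORD_FREQ, GLYPH_MATCH))  # "the" wins: a common word AND a good visual match
```

Note that even if the glyph evidence were weaker for "h", the word-frequency prior would still pull the guess toward "the", which is exactly the context effect described above.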

I would imagine we're doing something similar when listening to someone speak at a noisy party or concert.