DeirdreLLogan
06-25-2014, 01:10 AM
Technical question which intrigues me:
Normally, when video or audio is compressed lossily, the compressed file still has a frame rate or sampling rate, a resolution (in the case of video), and perhaps a bit depth. Compressed images have a resolution too.
Why is this?
Let's assume perfect sources and perfect display/playback technology, so that inputs have infinite sampling rate, resolution and bit depth, and so do outputs.
In this hypothetical situation would you still want to use technologies that compress at a particular sampling rate and resolution?
Say the original is described by a function o: T -> X, where T is the continuous time interval and X is the continuous thing being represented (an image, a sample value) at each point in time.
Why take finite approximations of T and X, call them Tf and Xf, project o down to of: Tf -> Xf, and only then approximate of with a compression algorithm operating within Tf -> Xf?
Why don't the best algorithms compress within T -> X directly?
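To make the projection concrete, here is a minimal sketch of the Tf -> Xf step for audio: Tf is a uniform grid of sample times and Xf is a set of 2^b quantization levels. The signal o (a 3 Hz sine), the sample rate, and the bit depth are all hypothetical choices for illustration, not anything a real codec mandates.

```python
import math

def o(t):
    # Stand-in "continuous" original o: T -> X (a 3 Hz sine, hypothetical).
    return math.sin(2 * math.pi * 3 * t)

def project(o, duration, sample_rate, bit_depth):
    """Project o: T -> X onto finite of: Tf -> Xf.

    Tf = {k / sample_rate : 0 <= k < duration * sample_rate}
    Xf = 2**bit_depth evenly spaced levels covering [-1, 1]
    """
    levels = 2 ** bit_depth
    n = int(duration * sample_rate)
    samples = []
    for k in range(n):
        t = k / sample_rate                     # pick a point of Tf
        x = o(t)                                # value in continuous X
        q = round((x + 1) / 2 * (levels - 1))   # nearest level of Xf
        samples.append(q)
    return samples

of = project(o, duration=1.0, sample_rate=48, bit_depth=4)
print(len(of), min(of), max(of))  # 48 samples, levels span 0..15
```

A real lossy codec then compresses the finite list `of`, which is exactly the question: the approximation error is baked in before the compressor ever runs.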