  1. #1
    AR Newbie Registered Member
    Join Date: Jun 2014
    Posts: 1

    Audio and video (and image) lossy compression:

    Technical question which intrigues me:
    Normally, when video or audio is compressed lossily, the compressed file still has a frame rate or sampling rate, in the case of video a resolution, and perhaps a bit depth. Compressed images also have a resolution.


    Why is this?


    Let's assume perfect sources and perfect display/playback technology, so that inputs have infinite sampling rate, resolution and bit depth, and so do outputs.


    In this hypothetical situation would you still want to use technologies that compress at a particular sampling rate and resolution?


    Say the original is described by a function o: T -> X, where T is the continuous time interval and X the continuous thing that is represented (image, sample) at each point in time.


    Why would you take finite approximations Tf and Xf of T and X, project o to of: Tf -> Xf, and then approximate of with a compression algorithm within Tf -> Xf?
    Why don't the best algorithms compress within T -> X?
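
    For concreteness, here is a minimal sketch in Python of the projection I mean; the 440 Hz sine, the 48 kHz sampling rate and the 16-bit depth are arbitrary illustrative choices, not anything a particular codec prescribes:

    ```python
    import numpy as np

    # Hypothetical continuous original o: T -> X (a 440 Hz sine as a stand-in).
    def o(t):
        return np.sin(2 * np.pi * 440.0 * t)

    # Finite approximations: Tf via a sampling rate, Xf via a bit depth.
    sample_rate = 48_000            # Hz   (illustrative choice for Tf)
    bit_depth = 16                  # bits (illustrative choice for Xf)
    duration = 1.0                  # seconds

    Tf = np.arange(0.0, duration, 1.0 / sample_rate)   # discrete time grid
    sampled = o(Tf)                                     # restrict o to Tf
    levels = 2 ** (bit_depth - 1) - 1
    of = np.round(sampled * levels) / levels            # quantise values into Xf

    # A lossy codec then approximates 'of' inside Tf -> Xf,
    # e.g. by transforming blocks of these samples and discarding detail.
    print(of.shape)                                     # (48000,)
    ```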





  2. #2
    M.P.S.E / AES / SMPTE member Sir Terrence the Terrible
    Join Date: Jul 2002
    Posts: 6,826
    Quote Originally Posted by DeirdreLLogan
    Technical question which intrigues me:
    Normally, when video or audio is compressed lossily, the compressed file still has a frame rate or sampling rate, in the case of video a resolution, and perhaps a bit depth. Compressed images also have a resolution.


    Why is this?
    It is likely because the video images are being compressed to fit the capacity of the format itself. All video formats have bandwidth and capacity limits that must be respected, or you could not get all of the data onto the disc. A Blu-ray has 50 GB divided between the video and audio.
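
    As a rough back-of-the-envelope (the frame rate, chroma format and runtime below are just example numbers, not a Blu-ray spec):

    ```python
    # Why raw video cannot fit on a 50 GB disc without compression (illustrative numbers).
    width, height = 1920, 1080
    fps = 24
    bits_per_pixel = 12              # 8-bit 4:2:0 chroma subsampling
    runtime_s = 2 * 60 * 60          # a two-hour film

    raw_gb = width * height * bits_per_pixel * fps * runtime_s / 8 / 1e9
    disc_gb = 50                     # dual-layer Blu-ray

    print(f"Uncompressed video: ~{raw_gb:.0f} GB")                            # ~537 GB
    print(f"Compression needed: ~{raw_gb / disc_gb:.0f}:1 for video alone")   # ~11:1
    ```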


    Let's assume perfect sources and perfect display/playback technology, so that inputs have infinite sampling rate, resolution and bit depth, and so do outputs.


    In this hypothetical situation would you still want to use technologies that compress at a particular sampling rate and resolution?
    You will still have issues with data storage, so the answer is yes, unless storage is infinite as well.
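
    To put some illustrative numbers on that (the channel counts and rates here are only examples):

    ```python
    # Raw PCM size scales linearly with channels, sampling rate and bit depth,
    # so "more resolution" always means more storage unless you compress.
    def pcm_gigabytes(channels, sample_rate_hz, bit_depth, seconds):
        return channels * sample_rate_hz * bit_depth * seconds / 8 / 1e9

    two_hours = 2 * 60 * 60
    print(pcm_gigabytes(8, 48_000, 24, two_hours))     # 7.1 @ 48 kHz / 24-bit  -> ~8.3 GB
    print(pcm_gigabytes(8, 192_000, 24, two_hours))    # 7.1 @ 192 kHz / 24-bit -> ~33 GB
    ```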


    Say the original is described by a function o: T -> X, where T is the continuous time interval and X the continuous thing that is represented (image, sample) at each point in time.


    Why would you take finite approximations Tf and Xf of T and X, project o to of: Tf -> Xf, and then approximate of with a compression algorithm within Tf -> Xf?
    Why don't the best algorithms compress within T -> X?
    This is a question best answered by the codec designers themselves. I am sure they design their codecs to work within a finite storage and bandwidth medium.
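
    A quick sketch of that kind of budgeting, with the share reserved for audio and overhead picked arbitrarily for illustration:

    ```python
    # Turning a finite disc capacity into an average bit-rate target (illustrative numbers).
    disc_bytes = 50e9                # BD-50 capacity
    runtime_s = 2 * 60 * 60          # two-hour title
    non_video_share = 0.15           # assume ~15% reserved for audio, subtitles, menus

    video_bits = disc_bytes * 8 * (1 - non_video_share)
    avg_video_bitrate_mbps = video_bits / runtime_s / 1e6
    print(f"Average video bit rate budget: ~{avg_video_bitrate_mbps:.0f} Mbit/s")   # ~47 Mbit/s
    ```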
    Sir Terrence

    Titan Reference 3D 1080p projector
    200" SI Black Diamond II screen
    Oppo BDP-103D
    Datastat RS20I audio/video processor 12.4 audio setup
    9 Onkyo M-5099 power amp
    9 Onkyo M-510 power amp
    9 Onkyo M-508 power amp
    6 custom CAL amps for subs
    3 custom 3 way horn DSP hybrid monitors
    18 custom 3 way horn DSP hybrid surround/ceiling speakers
    2 custom 15" sealed FFEC servo subs
    4 custom 15" H-PAS FFEC servo subs
    THX Style Baffle wall
