A Question About Compressing Files

Discussion in 'Audio Hardware' started by bzfgt, Nov 11, 2017.

Thread Status:
Not open for further replies.
  1. ToneLa

    ToneLa Forum Resident

    Compression, as in MP3, actually discards data you don't need, on the basis that you can leave some audio data out while keeping the integrity of the song (this might be easier to understand if you listen to a really poor, low-kbps MP3 - it'll be distorted and spotty, because too much data has been jettisoned for the song to remain clean!)

    A good book that covers the creation of the MP3 is "How Music Got Free" by Stephen Witt - generally an amazing book for a look at the industry in the 90s.

    PERFECT SOUND FOREVER! Killed by the MP3!
     
    JimmyCool, Grant and Shaddam IV like this.
  2. Tsomi

    Tsomi Forum Resident

    Location:
    Lille, France
    Your "massively long code language" has a huge cost. Let's take the example of FLAC.

    FLAC was created in 2000, with the aim of being compatible with < 100 MHz devices and < 64 MB of RAM (probably even lower, I can't find any precise numbers). It's also a general-purpose audio codec: it shouldn't take too many resources to encode or to decode. Most current lossless compression methods were created with the same intent (Monkey's Audio compresses a bit better by being more resource-intensive for both encoding and decoding, though).

    Your huge dictionary (how big by the way? megabytes? gigabytes? more?) would make the encoding phase much, much, much more CPU/RAM intensive and longer. The decoding phase would also need much more resources.

    Your everyday computer probably has the resources to do this kind of encoding nowadays (although your dictionary can't be as big as you might think...). Same thing for playing back your files: your computer or your smartphone would handle that just fine. But how about the older FLAC-compatible players, with a 75 MHz CPU and only a few megabytes of RAM? They couldn't play the new files anymore if the dictionary became "massively long".

    So it couldn't be called FLAC anymore, because it would create incompatibility. It would need to be called "FLAC2" or something, at least.

    So, this new kind of format would need to be disruptive. This can happen. HEVC did it for video, Google tried to launch WebP for pictures, etc. Your old browsers/computers/consoles/phones can't use them, so that's another cost.

    So you need:
    1. Someone who needs better compression (this probably exists... people storing petabytes of audio... Library of Congress or whatever).
    2. They also need to be OK to spend much more time encoding (and a bit more time decoding).
    3. A team of highly competent people to work on this (HEVC and WebP were not easy... it's not just a matter of building a bigger dictionary).
    4. A market ready to switch to a new format "just" for an improved compression.
    Remember VHS/Betamax, etc. If it's just "a bit" better, that's not enough.

    Another approach: remain compatible with the current formats and their low-resource requirements, but make the new encoder much, much more complex and slower within the rules of the existing algorithms. There are tools which do this for PNG files, for example: when some big websites update their UI, every single PNG file is aggressively re-optimized, and the whole thing takes minutes (instead of less than a second with your usual PNG encoder!). It's only done once in a while, so the additional time isn't really a problem, but it saves bandwidth and improves page load time for all your users; big websites do it because bandwidth and delays cost them a lot.
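
    Very roughly, the same "spend more CPU once to save bytes forever" tradeoff can be seen with Python's zlib module (DEFLATE, the same algorithm family PNG uses internally). Just a toy sketch with made-up stand-in data, not what the real PNG optimizers do:

        import time
        import zlib

        # Synthetic stand-in for raw image data (repetitive, so it compresses).
        data = bytes(range(256)) * 4000

        for level in (1, 6, 9):
            start = time.perf_counter()
            out = zlib.compress(data, level)
            print(f"level {level}: {len(out)} bytes in {time.perf_counter() - start:.3f} s")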

    tl;dr: There are multiple costs and you need some people to be ready to pay for them.

    Completely off-topic. Lossy ≠ lossless.
     
    Last edited: Nov 11, 2017
    Grant and stetsonic like this.
  3. garymc

    garymc Forum Resident

    Location:
    Florida, USA
    Actually, the processing power is in the *encoding* of the FLAC file, not in the *decoding*. So it takes more processing power to encode at "8" compression than at "5" compression. But once compressed, the *decoding* of the file back to lossless is identical regardless of the original compression level. And as noted by others, the compression level of a FLAC file has nothing whatsoever to do with "quality". It only relates to file size. FLAC is lossless, and once decoded for playback the bits are identical to the original, without regard to how the FLAC was compressed when it was created.
     
    Randoms likes this.
  4. bzfgt

    bzfgt The Grand High Exalted Mystic Ruler Thread Starter

    Right, I never said anything about my hypothetical files being "FLAC," just lossless. There are several distinct questions here -- whether my idea is possible, to which the answers at first tended toward "no" and are now trending toward "yes, but you wouldn't want to do it" or "yes, but no one would be interested in this." So the other two distinct questions are "Would you really want to do this?" and "Would enough people want to do this?" Then there is the response that it is literally possible but de facto impossible. I'm trying to sort all this out.

    My original post assumed that a good thing to do (although I didn't explicitly say it would be good) would be to create an enormously capacious language to code music into, and have super-powered processing machines to read the files. This way I could store more music on fewer storage devices. I figured it would be fine if this required a more souped-up machine to read the files.

    So your points are well-taken--I take you to be saying my idea is possible but maybe not feasible. On the other hand, others are saying it's impossible.

    This thread has clarified one thing -- I assumed this would be a desirable thing to do, and a lot of you think it may not be. I still think that because computers and iPods fill up with music files so quickly this would be cool, but not at any cost, of course, so the thread is convincing me it might not be such a great idea after all and I should just keep filling external hard drives.
     
  5. Tsomi

    Tsomi Forum Resident

    Location:
    Lille, France
    It's probably negligible, especially nowadays, but higher FLAC compression levels do have a small impact on decoding performance (although the biggest impact is on the encoding phase). Some structures will be represented in a way that's a bit more complex to decode when you use the higher compression settings. Hence a bit more CPU/RAM usage.

    From Josh Coalson himself (creator of FLAC):
    "flac -8 does take a little bit more computation to decode than -5. usually it is negligible, but some devices like the iaudio X5 are right on the borderline; my understanding is that the X5 with the latest firmware can decode -8. I should be getting a review unit soon and will have more info then."
     
    Grant, Randoms and boiledbeans like this.
  6. Stencil

    Stencil Forum Resident

    Location:
    Lockport, IL
    That is correct; however, the data still needs to be converted into binary 'words' of a specific length. This is where 8/16/24-bit audio comes in. Longer word lengths translate to more precise, better-sounding files. But there is still a word-length limitation.
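
    A toy sketch of what the extra word length buys you (made-up numbers, not tied to any particular format):

        # The same sample value stored at different word lengths: bigger words
        # leave a smaller rounding (quantization) error.
        x = 0.337731  # an arbitrary sample value between -1.0 and 1.0

        for bits in (8, 16, 24):
            steps = 2 ** (bits - 1)              # quantization levels on each side of zero
            stored = round(x * steps) / steps    # what actually gets written to disk
            print(f"{bits:2d}-bit: {stored:.9f}  (error {abs(x - stored):.2e})")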
     
  7. Stencil

    Stencil Forum Resident

    Location:
    Lockport, IL
    It would need an immense lookup table to be able to do that. So you would either have to add the size of the lookup table to the file, or keep the lookup table on your machine, where it could get corrupted. Plus there would be a longer and longer time lag in looking up the data as the lookup table got bigger.
     
  8. bzfgt

    bzfgt The Grand High Exalted Mystic Ruler Thread Starter

    Right, in the machine was my thought.
     
  9. bzfgt

    bzfgt The Grand High Exalted Mystic Ruler Thread Starter

    So we essentially need a moon shot! Work all this stuff out.
     
  10. boiledbeans

    boiledbeans Forum Resident

    Location:
    UK
    It depends on the music encoded. If you encode WAVs where both the L & R channels are the same or very similar (e.g. mono CDs), it's normal to get a 400-and-something kbps FLAC from the original 1411 kbps WAV -- a compression ratio far greater than 2:1.
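
    A toy sketch of why identical channels are nearly free (zlib standing in for FLAC, random numbers standing in for samples): store mid/side instead of left/right, and for mono material the side channel collapses to zeros.

        import random
        import zlib

        random.seed(0)
        left = [random.randint(-32768, 32767) for _ in range(44100)]   # one "second" of samples
        right = left[:]                                                # identical channel, like a mono CD

        def pack(samples):
            return b"".join(s.to_bytes(2, "little", signed=True) for s in samples)

        stereo = pack([s for frame in zip(left, right) for s in frame])
        side = pack([l - r for l, r in zip(left, right)])              # all zeros when L == R

        print(len(zlib.compress(stereo)), "bytes with both channels stored as-is")
        print(len(zlib.compress(side)), "bytes for the L-R 'side' channel alone")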
     
  11. Tsomi

    Tsomi Forum Resident

    Location:
    Lille, France
    It depends on the precise algorithm you're thinking about.

    Basically (I'm absolutely not an audio or math expert, just an IT guy... so take this with a grain of salt):
    • Achieving a (slightly!) better compression ratio with a "longer code language" (= dictionary): yes, that's possible.
      • Most current lossless audio compressors just use "smaller" dictionaries, because they were created 15 years ago, most of them want to be 1) compatible with low-power devices, 2) general purpose and 3) not too CPU/RAM intensive, and 4) they're still "good enough" for most people at the moment.
      • A bigger dictionary is one of the reasons why, for example, LZMA compresses files much better than Gzip does (a rough side-by-side is sketched after this list).
      • However, big dictionaries have a cost. That's why LZMA encoding is much slower and much more CPU/RAM intensive, so in the end multiple compression algorithms coexist, each of them satisfying different needs.
    • Significantly improving the current audio compression ratios thanks to newer and improved compression techniques: yes.
      • This has happened in recent years for pictures (WebP, lossy), video (HEVC, lossy) and general files (LZMA, lossless, although it's much easier to compress text files than random data, see below), so I don't see why it couldn't happen to audio.
      • The only problem is that the market wanting better lossless audio compression doesn't look big enough at the moment. Video had Netflix and YouTube, data had Big Data... but audio? Spotify's whole catalog is under a petabyte, and they serve lossy files to their end users anyway.
      • However! There is no way a lossless compression will save you as much space as a lossy compression. What can happen in the future is a very, very, very good lossy audio compression (Opus is already quite good in this regard). Your favourite "perfect" 4K video is lossy; only the studio has the gigantic, lossless raw stuff. Maybe the lossy algorithms will become so good that no one will complain about not having the lossless source in the future.
    • Requiring an "insanely long" / "zillion-line long" word list for your algorithm: infeasible with the current technology, and not as efficient as you might think.
      • There just needs to be a limit somewhere. There's no way your dictionary could hold terabytes of data at the moment, for example. Or you'd need a gigantic machine for decoding as well.
      • (I'm speaking of current, binary, Turing machines. Quantum computing or DNA digital data storage are not ready for you yet).
      • What constitutes music is a very, very complex noise. There's no way a 100 MB or 64 GB dictionary could work extreme miracles; it'd just improve things a bit. It would be possible to make a very, very, very small file that plays a very long series of prepared, controlled sounds. But compressing any arbitrary piece of music in an extreme, lossless way: no, impossible. There's just too much random, unpredictable noise in any natural sound. Hit the same piano note twice: to your computer it's just a very different noise, and thus a very different series of 0s and 1s. No dictionary on Earth could be big enough to hold all these natural nuances.
      • For the same reason: it's possible to craft a certain ZIP file that's only a few megabytes but produces a multi-gigabyte file when uncompressed. It has to be specially prepared on purpose (it has already been done). However, it's not possible to turn any arbitrary multi-gigabyte file into a 10 MB ZIP archive.
      • See also: Kolmogorov complexity, Shannon's source coding theorem.
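
    To make the LZMA/Gzip point above concrete, a rough side-by-side using Python's standard-library bindings (any big, patterned file will do as input; the path below is just an example):

        import lzma
        import time
        import zlib

        data = open("/usr/share/dict/words", "rb").read()

        for name, compress in (("gzip-ish (zlib -9)", lambda d: zlib.compress(d, 9)),
                               ("LZMA (preset 9)   ", lambda d: lzma.compress(d, preset=9))):
            start = time.perf_counter()
            out = compress(data)
            print(f"{name}: {len(out):>8} bytes in {time.perf_counter() - start:.2f} s")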
     
    Last edited: Nov 11, 2017
    bzfgt likes this.
  12. boiledbeans

    boiledbeans Forum Resident

    Location:
    UK
    That reminds me of MIDI files.
    MIDI - Wikipedia

    "MIDI symbolically represents a note. When the synth player presses a key on a keyboard, MIDI records which key was pressed, with which velocity and which duration, whereas digital audio represents the sound produced by the instrument."
     
    RomanZ and Tsomi like this.
  13. Shaddam IV

    Shaddam IV Forum Resident

    Location:
    Ca
    Think of any zipped (non-audio) file, say a Word document. It's been compressed, it's smaller, but there is no data loss. The same can be done to an audio file.

    There are lots of ways to do this. Think of a simple math equation or algorithm and you can see how data can be expressed in compressed form.

    Super simple example: You have 380 zeros in a row. Encoding this as "380 zeros" takes up much less space than "00000000..." (380 zeros in a row). Once decoded, nothing is lost. We have 380 zeros in a row.
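
    A minimal run-length encoder along those lines (just a sketch of the idea, not how any real audio codec works):

        def rle_encode(s):
            out = []
            i = 0
            while i < len(s):
                j = i
                while j < len(s) and s[j] == s[i]:
                    j += 1
                out.append((j - i, s[i]))   # (run length, character)
                i = j
            return out

        def rle_decode(runs):
            return "".join(ch * count for count, ch in runs)

        data = "0" * 380
        packed = rle_encode(data)           # [(380, '0')] -- two values instead of 380 characters
        assert rle_decode(packed) == data   # nothing was lost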
     
    Last edited: Nov 11, 2017
    bzfgt and garymc like this.
  14. Chris DeVoe

    Chris DeVoe RIP Vickie Mapes Williams (aka Equipoise)

    Which seems reasonable; you get a similar decrease in size when you select the optimum lossy settings.
     
  15. Andreas

    Andreas Senior Member

    Location:
    Frankfurt, Germany
    Because each character would have 16 different possible values, so you need 4 bits to store which character is used.
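
    The arithmetic behind that, as a tiny generic sketch (not specific to any format):

        import math

        # Each symbol from an alphabet of N distinct values costs log2(N) bits,
        # so a bigger "code language" makes every single character more expensive to store.
        for alphabet_size in (2, 16, 256, 65536):
            print(f"{alphabet_size:>6} possible characters -> {math.log2(alphabet_size):.0f} bits each")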
     
  16. Andreas

    Andreas Senior Member

    Location:
    Frankfurt, Germany
    No, that's not possible, for abstract reasons: the information entropy per "letter" or "character" in the compressed file would increase as well, so the compression ratio would not improve.

    In general, a lossless compression scheme has a lower bound determined by the information entropy of the source file.
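
    A back-of-the-envelope way to see this in Python (the estimate below treats each byte as an independent symbol, so a real compressor exploiting longer patterns can beat it, but nothing can beat the true entropy of the source; the file name is just a placeholder):

        import math
        import zlib
        from collections import Counter

        data = open("some_file.wav", "rb").read()   # hypothetical input file

        counts = Counter(data)
        total = len(data)
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())

        print(f"per-byte entropy estimate: {entropy:.2f} bits")
        print(f"zlib level 9 achieves    : {8 * len(zlib.compress(data, 9)) / total:.2f} bits per byte")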
     
    coffeetime and Chris DeVoe like this.
  17. Chris DeVoe

    Chris DeVoe RIP Vickie Mapes Williams (aka Equipoise)

    Thank you for saying that so succinctly and clearly.
     
  18. bzfgt

    bzfgt The Grand High Exalted Mystic Ruler Thread Starter

    Why would each character have 16 different values? I'm talking about adding more characters, so each one would cover less than in the old system.
     
  19. bzfgt

    bzfgt The Grand High Exalted Mystic Ruler Thread Starter

    Urgh now it's getting beyond my grasp.
     
  20. garymc

    garymc Forum Resident

    Location:
    Florida, USA
    Thanks. Good point.
     
  21. Chris DeVoe

    Chris DeVoe RIP Vickie Mapes Williams (aka Equipoise)

    Any comprehensible information, like music or text files, has patterns, which makes compression possible.

    Information entropy just means that a file compressed to the maximum is mathematically indistinguishable from random noise.
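
    A quick way to see both halves of that at once (a toy sketch, with zlib standing in for any lossless compressor):

        import os
        import zlib

        patterned = b"deja vu " * 100000          # obvious repetition
        randomish = os.urandom(len(patterned))    # nothing for the compressor to grab onto

        print(len(patterned), "bytes in")
        print(len(zlib.compress(patterned, 9)), "bytes out for the patterned data")
        print(len(zlib.compress(randomish, 9)), "bytes out for the random data")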
     
    Shaddam IV and bzfgt like this.
  22. bzfgt

    bzfgt The Grand High Exalted Mystic Ruler Thread Starter

    Ah, I see, thanks. I am dimly getting it a little.
     
    Chris DeVoe likes this.
  23. subtr

    subtr Forum Resident

    Yup, and I apologise for hastily writing that and getting it wrong, but at least the main point came through - compression level has nothing to do with audio quality.
     
    garymc likes this.
  24. Carl Swanson

    Carl Swanson Senior Member

    Isn't level 1 the least reduction in file size?
     
  25. Chris DeVoe

    Chris DeVoe RIP Vickie Mapes Williams (aka Equipoise)

    Again, don't depend on me - as I said, I read one book on the subject two decades ago. Andreas knows a lot more about it than I do.
     
    bzfgt likes this.