Why were CDs recorded in 16-bit/44.1kHz?

Discussion in 'Audio Hardware' started by MZ_RH1, Feb 5, 2017.

  1. Cherrycherry

    Cherrycherry Forum Resident

    Location:
    Le Froidtown
    @Grant And since I don't know what to listen for, that was one of the specific reasons for which I asked you, in order to learn something. I am sincere in thanking you.

    Sorry to everyone else for derailing the thread.
     
  2. Reamonnt

    Reamonnt Mr.T

    Location:
    Ireland
    Very interesting, and lots of seriously knowledgeable people on here.
     
  3. Black Elk

    Black Elk Music Lover

    Location:
    Bay Area, U.S.A.
    1990 - Philips CD840 Bitstream CD player (SAA7321GP chip, 256x oversampling)

    Because 1-bit is inherently linear, and requires no highly accurate resistor/current-source network, so it is cheaper to manufacture. Since you get nothing for nothing, 1-bit can have issues with idle tones and maximum level (if not handled correctly), so few-bit designs emerged as a middle ground: they need neither the high-precision components of multi-bit, nor do they suffer the 1-bit issues. Few-bit designs are not inherently linear, though, so, again, you pick your poison.

    No, it solved two problems: low-level linearity and cost of manufacture. It created a few problems of its own, but they were easily dealt with.

    I think so, and they still sound better (if you value linearity and accuracy).

    Pretty much, but computer modeling/simulation has developed in that period to allow higher-order modulators to be designed and implemented.

    Yes, virtually every modern ADC uses a delta-sigma modulator to effect the conversion to digital. DSD merely records the output of the high-speed 1-bit converter found in many ADCs, thus eliminating a decimation stage (recording) and interpolation stage (playback -- since virtually all DACs are also delta-sigma designs).
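
    As a rough illustration of the principle (a sketch, not any particular chip's design), a first-order 1-bit delta-sigma modulator fits in a few lines: an integrator accumulates the error between the input and the fed-back 1-bit output, and the density of 1s in the stream tracks the input level.

    ```python
    def delta_sigma_1bit(samples):
        """First-order delta-sigma modulator (illustrative sketch).

        Input samples are assumed to lie in [-1, 1]; output is a +/-1
        bitstream whose running average tracks the input signal."""
        out = []
        integ = 0.0   # integrator state
        fb = 0.0      # fed-back previous output bit
        for x in samples:
            integ += x - fb                   # accumulate input-vs-output error
            fb = 1.0 if integ >= 0 else -1.0  # 1-bit quantizer
            out.append(fb)
        return out

    # A constant input of 0.25 produces a bitstream averaging ~0.25.
    bits = delta_sigma_1bit([0.25] * 10000)
    print(sum(bits) / len(bits))
    ```

    Real converters run this loop at a high oversampling rate and use higher-order modulators (as mentioned above) to push the quantization noise out of the audio band.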
     
    anorak2, sunspot42 and EddieVanHalen like this.
  4. EddieVanHalen

    EddieVanHalen Forum Resident

    So this could be interpreted as what we are hearing now (and also then) is kind of PCM converted to DSD?
     
  5. sunspot42

    sunspot42 Forum Resident

    Location:
    San Francisco
    Yes.

    The downside to DSD on SACD is that you're stuck with the best delta-sigma conversion late-'90s technology could provide, roughly equivalent to 24-bit/88kHz PCM. Which is pretty darn good, although I'd imagine today's best PCM A/D and D/A chains could surpass it, especially at 24/192.

    Whether you could even hear the difference in practice though, that's another story... I suspect few if any listeners can, let alone express a preference. At some point, it's the analog stage of these gadgets that's coloring the sound far, far more than the digital part of the chain...and we probably passed that point 20 years ago. As in, it's an angels on the head of a pin exercise once you hit 24/88kHz or above, even if you have a $20K stereo system.

    For me, the huge advantage to PCM is, it's much easier for downstream equipment to process without funky conversions. That's important if you're applying speaker/room corrections in the digital domain, shunting signal to a subwoofer, deploying some multichannel processing or other trickery. Given the growing sophistication of all of those processes, I think they're already generally a good thing at the consumer level (bang for buck), and are probably going to be increasingly adopted by audiophiles. Algorithms march on...as does the hardware to run them...
     
    crispi and EddieVanHalen like this.
  6. Grant

    Grant Life is a rock, but the radio rolled me!


    I believe in the scientific method, too, but it's not all you should rely on. There is that pesky thing called listening...using your ears that gets in the way of the propeller-heads.
     
  7. Grant

    Grant Life is a rock, but the radio rolled me!

    You did not derail the thread at all. What we're talking about relates to the topic, as some people are talking about PCM vs. DSD.

    I don't know if it is in this thread, or if I posted it elsewhere, but last Wednesday morning, I did post about what I listen for: I listen for the width of the soundstage (how the sound extends beyond the speakers), the transients (the clarity of the dynamic attacks, especially with the highs), the depth of the sound (front-to-back dimensionality), and the weight and solidness of the bass. That's what I hear. When you reduce a hi-rez file, these are the things to listen for. With sample-rate reduction, the soundstage will be the first casualty. With the bit-rate reduction, everything will be affected. Depending on the methods and gear used, the differences can be anywhere from subtle to dramatic, but YMMV.
     
  8. EddieVanHalen

    EddieVanHalen Forum Resident

    With a good rock recording I can tell the difference between Red Book resolution and hi-res anytime; the general sound improves in hi-res, but the drums shout "hi-res!" out loud. I tested it with the Van Halen remasters from 2015, which I have at 192/24. I downconverted them using Weiss Saracon (one of the best sample rate/bit depth converters) to both 96/24 and 44.1/16, using dither rather than truncation. I did the blind test on two different setups: mine, with a Pioneer SC-LX76 A/V receiver (sold in the US under the Elite badge; in the 2012 range only one model sat above it), which plays hi-res and DSD via USB, and at a friend's (the same guy who switched files for me in the blind test). Differences between 192/24 and 96/24 were minimal; most of the time I couldn't tell one from the other. Differences between the original 192/24 files and the downconverted, dithered 44.1/16 WERE detectable. First, as I said before, were the drums, which at 44.1/16 sound fuzzy, mechanical, and lifeless; the hi-hats sound especially artificial; the soundstage is narrow; the overall sound is synthetic. The 192/24 sounds more natural, the soundstage is wider and more three-dimensional, and there's a sense of air, as if the setup is playing the music with ease.
     
    plextor and Shak Cohen like this.
  9. sunspot42

    sunspot42 Forum Resident

    Location:
    San Francisco
    The problem with using an A/V receiver to conduct tests like this is, god only knows what processing is either switched on or switched off at various sample rates. I'm also not sure how they handle 24-bit vs. 16-bit audio - is one automatically louder than the other, for example? If so, level matching could be tough.

    If I were going to do A/B tests, I'd use a much simpler setup than that.
     
    missan likes this.
  10. Archimago

    Archimago Forum Resident

    Hmmmm. Here's what I would do...

    Take the 24/192 --> downsample & dither to 44.1/16 in Saracon

    Take the 44.1/16 --> upsample to 24/192 again in Saracon

    Now A/B the original 24/192 and the upsampled 24/192. That way the receiver processes both signals in exactly the same way, and the 44.1/16 conversion uses Saracon's probably superior software filters. Get your friend to do the blind switching and see if you can accurately log the difference.

    Good luck!

    BTW: what good rock recording are you using?!
     
    Contact Lost, LarryP, crispi and 2 others like this.
  11. anorak2

    anorak2 Forum Resident

    Location:
    Berlin, Germany
    The only differences introduced by properly downsampled and dithered audio are hiss from the reduced bit depth and reduced bandwidth from the lower sampling frequency. Nothing else. Whatever other difference you perceive must be due to something else.
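
    This is easy to demonstrate: with TPDF dither added before 16-bit rounding, the total quantization error of a sine is signal-independent noise at roughly -96 dBFS, i.e. hiss. A minimal sketch (synthetic 1kHz tone, full scale = +/-1.0):

    ```python
    import math
    import random

    random.seed(1)
    LSB = 1.0 / 32768   # one 16-bit step (full scale = +/-1.0)

    def quantize16(x):
        # TPDF dither: the sum of two uniform(-0.5, 0.5) LSB-sized noises,
        # added before rounding so the total error is signal-independent.
        d = (random.random() - 0.5) + (random.random() - 0.5)
        return round(x / LSB + d) * LSB

    # Quantize a 1 kHz sine at -6 dBFS and measure the error it leaves behind.
    n = 48000
    sine = [0.5 * math.sin(2 * math.pi * 1000 * i / n) for i in range(n)]
    err = [quantize16(s) - s for s in sine]
    rms_db = 20 * math.log10(math.sqrt(sum(e * e for e in err) / n))
    print(round(rms_db, 1))   # about -96 dBFS: plain hiss
    ```

    Without the dither, the error would instead correlate with the signal and show up as distortion; with it, you only pay a tiny, constant noise floor.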
     
    missan likes this.
  12. Tim Müller

    Tim Müller Forum Resident

    Location:
    Germany
    Hello,

    Why Sony and Philips chose 44.1kHz and 16 bits comes down to two reasons:
    1.) They wanted the audio quality to be considerably better than what was available on LP. Science tells us human hearing spans 20Hz to 20kHz, and everything of musical interest lies within that range. 14 bits allow for a dynamic range of 84dB, 16 bits for 96dB. These numbers are sufficient for almost any music recording. LP reaches roughly 50Hz to 14kHz and about 50dB, and treble and bass could not be recorded as loud as mid frequencies on LP. On CD, a 20kHz tone can be recorded as loud as a 1kHz tone.
    2.) It would not have hurt audio quality if the specification had been, say, 20 bits and 60kHz, or any larger numbers. But that would have required more storage space on the disc and more elaborate, costly electronics, without improving the perceived audio quality.
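
    The dynamic-range figures in point 1.) follow from the rule that each bit contributes about 6dB, i.e. DR = 20·log10(2^bits); a quick check:

    ```python
    import math

    def dynamic_range_db(bits):
        # Ratio of full scale to one quantization step is 2**bits,
        # expressed in decibels: ~6.02 dB per bit.
        return 20 * math.log10(2 ** bits)

    for b in (14, 16, 20, 24):
        print(b, round(dynamic_range_db(b), 1))   # 14 -> 84.3, 16 -> 96.3, ...
    ```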

    That's why they chose to make the audio quality of the CD perfect, but not better than perfect. (16 bits at 44.1kHz allows for perfect audio quality.)
    Multichannel, such as 5.1 or quadraphonic, would provide a more realistic soundstage, but by the time the CD was introduced to the market, quadraphonic had already been a commercial failure, so it was not implemented.

    A few thoughts on dynamic range...

    If you want to enjoy the full 96dB of a CD, you actually need a sound recorded on the CD that is more than 90dB louder than the noise floor of the recording itself, and than the noise floor of your living room (where you listen to the music).
    The noise floor of a normal living room is about 30dB sound pressure level, or even more. So the playback level for the 0dBFS peak of your CD must be around 126dB. Sound at that level approaches the threshold of pain, and will certainly damage your hearing if listened to for long periods.
    Sound pressure levels above about 100dB are of no practical use for the enjoyment of music.
    If you set the playback level lower, the information in the least significant bits is buried under the noise floor of your living room. It would then make no difference if the recording were only 15 or 13 bits...

    In the studio...
    Every microphone has a noise floor, roughly equivalent to a 20dB sound pressure level for typical, not-too-expensive microphones, down to about 10dB for the best available (and most expensive) ones.
    Any sound in a musical performance that is softer than that 10 to 20dB SPL is buried below the microphone's noise floor. To record a performance that exhausts the CD's available dynamic range (96dB), the softest sound should be about 15dB SPL and the loudest around 110dB SPL.
    If the performance does not reach a sound level of around 110dB, then the number of bits actually required to capture everything between the performance's maximum level and the studio's noise floor (microphones, mixing desk, environment, ...) is less than 16.
    For example, if you record a singer/songwriter with acoustic guitar and vocals, the performance may peak at about 80dB. With the studio noise floor at about 15dB SPL, the actual dynamic range of the performance is only about 65dB, which requires only 11 bits...
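
    The inverse calculation (how many bits a given dynamic range needs) can be sketched the same way; the 65dB example above comes out at 11 bits:

    ```python
    import math

    def bits_needed(dynamic_range_db):
        # Invert DR = 20*log10(2**bits): bits = DR / (20*log10(2)) ~ DR / 6.02.
        return math.ceil(dynamic_range_db / (20 * math.log10(2)))

    # 80 dB SPL performance peak over a 15 dB SPL studio noise floor:
    print(bits_needed(80 - 15))   # 11
    ```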

    That's why a resolution of 16 bits is just about perfect for enjoyable music recordings. Any more bits would not improve it.

    However, larger numbers do not deteriorate the sound quality either.
     
    WDeranged, RomanZ, tumpux and 7 others like this.
  13. anorak2

    anorak2 Forum Resident

    Location:
    Berlin, Germany
    The Red Book defines a four-channel quadraphonic option, but no CDs were ever made in that format, and no machines capable of playing it were ever built.

    Four-channel Compact Disc Digital Audio - Wikipedia
     
    PhilBiker likes this.
  14. Tim Müller

    Tim Müller Forum Resident

    Location:
    Germany

    Yes, thank you, I know. But quadraphonic was never implemented in the CD system; neither quadro discs nor quadro players were ever available. Probably because quadraphonic was already a marketing and commercial failure, owing to cost and the poor performance of quadraphonic LP systems (channel separation was quite poor). Stereo CDs were a big commercial success, so there was no need to come up with quadraphonic CDs...

    That's why I like DVDs, they can provide multichannel audio.
     
  15. MZ_RH1

    MZ_RH1 Active Member Thread Starter

    Location:
    Angel Valley, CA
    There are 5.1 DTS compact discs. These sound better than standard CDs.
     
  16. EddieVanHalen

    EddieVanHalen Forum Resident

    They play 5.1 sound but they use lossy compression.
     
    Blank Frank, JimmyCool and sunspot42 like this.
  17. MrRom92

    MrRom92 Forum Supermodel

    Location:
    Long Island, NY
    Fixed that for you
     
    JimmyCool, rbbert, scobb and 2 others like this.
  18. Fastnbulbous

    Fastnbulbous Doubleplus Ungood

    Location:
    Washington DC USA
    Goodposting. For the vast majority of this forum, who are well past the age when hearing deteriorates, even 16kHz is probably overkill. But if people want to spend their money on media and equipment that produce sounds they can't possibly hear, it's their right.
     
  19. Tim Müller

    Tim Müller Forum Resident

    Location:
    Germany
    OldSoul and Blank Frank like this.
  20. MZ_RH1

    MZ_RH1 Active Member Thread Starter

    Location:
    Angel Valley, CA
    No it's not fixed. DTS is multichannel, something that a standard CD can never do.
     
  21. sunspot42

    sunspot42 Forum Resident

    Location:
    San Francisco
    Yes, but to cram multichannel audio onto a CD, DTS has to compress the hell out of it.
     
    Blank Frank likes this.
  22. Grant

    Grant Life is a rock, but the radio rolled me!

    No, those processes will always alter the sound to a degree.
     
    rbbert and Ham Sandwich like this.
  23. Metralla

    Metralla Joined Jan 13, 2002

    Location:
    San Jose, CA
    Somehow the person quoting you exchanged ", human hearing is between 20Hz an 20kHz. " for a link from the Daily Caller. Wonder how that happened.
     
  24. MrRom92

    MrRom92 Forum Supermodel

    Location:
    Long Island, NY
    So adding more channels of poor quality audio = better sound?
     
    sunspot42 likes this.
  25. Simon A

    Simon A Arrr!

    DTS still managed to do a superb job with many of their releases. I love the ones I have and play them often.
     
    Tim Müller likes this.
