100% Natural Sounding Digital Reproduction: Is It Possible???

Discussion in 'Audio Hardware' started by Khorn, Dec 15, 2004.

Thread Status:
Not open for further replies.
  1. Luke M

    Luke M New Member

    Location:
    Pittsburgh
    I'm really baffled as to how you can come to this conclusion. It defies all logic. What does the "leading edge of the signal" have to do with a test involving steady tones? They have no "leading edge". Also please explain how you came to the conclusion that amps and speakers are perfect and can't exhibit IM distortion? Frankly you seem to be reaching for far-fetched theories and ignoring the obvious.

    Did you read http://world.std.com/~griesngr/intermod.ppt? In particular slide 4, which depicts the signal produced via one speaker (clearly showing distortion)? You don't need ultrasonic equipment to produce that distortion or bat ears to hear it.

    You are on firmer ground if you confine your claims to non-periodic signals (which seems to be where you want to go with this "leading edge" business). At least that can't be trivially proven wrong.
     
  2. Taurus

    Taurus Senior Member

    Location:
    Houston, Texas
  3. LeeS

    LeeS Music Fan

    Location:
    Atlanta
    Taurus,

    I was hoping there was an actual paper regarding audibility. I have seen one recently that I will try to link to.

    This link just goes to an internet forum where Curve Dominant is saying that an A/B test of a "$1700" Sony SACD player sounded worse than a Redbook player playing the same title, over a system with Klipsch speakers. I'm no fan of Klipsch at all, but in any event this seems to show a relative lack of sophistication among the participants. I think as a result at least one participant is discussing PCM versus DSD without ever hearing the advantage of DSD, which in my mind is not subtle. There is no question in my mind that faster sampling is value-added for music playback, whether 96k+ PCM or DSD, relative to Red Book.

    There is a recent AES paper that discusses the effect of ultrasonic frequencies on the audible range. I will try to obtain that paper for this thread.
     
  4. Tony Plachy

    Tony Plachy Senior Member

    Location:
    Pleasantville, NY
    Luke, Go back to my original statement. I claim you can hear the difference between a periodic sine wave and a periodic square wave at 10 kHz, 12 kHz, or 14 kHz, up to the limit of your hearing range. Since these are periodic signals of a single frequency, IM distortion should be minimal. If you do not believe me, get a signal generator and try it for yourself.
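    For readers who want the arithmetic behind this claim, here is a minimal Python sketch (not from the thread; the fundamental is one of Tony's example frequencies): a square wave's Fourier series contains only odd harmonics at relative amplitude 1/n, so for a 10 kHz fundamental every overtone that distinguishes it from a sine wave lies above 20 kHz.

    ```python
    # A square wave contains only odd harmonics with relative amplitude 1/n.
    # For a 10 kHz fundamental, every harmonic above the first is ultrasonic,
    # which is exactly what makes the sine/square comparison a test of
    # ultrasonic audibility.

    fundamental = 10_000  # Hz

    for n in (1, 3, 5, 7, 9):
        freq = fundamental * n
        band = "audible" if freq <= 20_000 else "ultrasonic"
        print(f"harmonic {n}: {freq / 1000:.0f} kHz, relative amplitude 1/{n} ({band})")
    ```

    The same arithmetic holds at 12 kHz or 14 kHz: the lowest overtone sits at three times the fundamental, far outside the audible band.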
     
  5. Metralla

    Metralla Joined Jan 13, 2002

    Location:
    San Jose, CA
    LeeS, I think Taurus wanted us to read Eve Anna's post - which I did. I'm aware of DSD ultrasonic noise (we probably all are) and have seen it on the oscilloscope. It bugs me there - but not in listening.

    Curve Dominant posted his experience on 02-20-2002 06:04 PM, close to three years ago. Old news. I'm glad he's over there, and not over here!
     
  6. LeeS

    LeeS Music Fan

    Location:
    Atlanta
    Geoff,

    I agree. I read EveAnna's post carefully, but much of DSD's noise is above 50 kHz. Neither John Atkinson nor I can hear that noise. It's a theoretical rather than a practical issue, aside from some minor system-matching concerns.
     
  7. Luke M

    Luke M New Member

    Location:
    Pittsburgh
    And that's just plain wrong. It's contrary to established science, and there isn't a snowball's chance in hell of the science being wrong, since this is really a straightforward question.

    Everything you are hearing is below ~20 kHz. If the sound changes when you add tones above 20 kHz, it is because you are hearing the "minimal" distortion. This distortion is a proven fact, yet you seem to want to ignore it - why?
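    The mechanism Luke describes can be illustrated numerically. A minimal sketch (the quadratic nonlinearity is a toy stand-in for a real amp or speaker, and the tone frequencies are arbitrary illustrative values): two purely ultrasonic tones passed through a mild nonlinearity produce a second-order intermodulation product squarely inside the audible band.

    ```python
    import numpy as np

    # Two ultrasonic tones at 24 kHz and 30 kHz pass through a toy quadratic
    # nonlinearity; the second-order IM product at f2 - f1 = 6 kHz lands in
    # the audible band even though neither input tone is audible.

    fs = 192_000                        # sample rate high enough for both tones
    t = np.arange(fs) / fs              # one second of samples
    f1, f2 = 24_000, 30_000             # both above the ~20 kHz hearing limit

    clean = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    distorted = clean + 0.1 * clean**2  # mild second-order nonlinearity

    spectrum = np.abs(np.fft.rfft(distorted)) / len(t)
    freqs = np.fft.rfftfreq(len(t), 1 / fs)

    # The difference tone appears at 6 kHz in the distorted signal only.
    idx = int(np.argmin(np.abs(freqs - 6_000)))
    print(f"level at 6 kHz: {spectrum[idx]:.3f}")
    ```

    With the clean signal, the 6 kHz bin is empty; the nonlinearity alone puts energy there.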
     
  8. LeeS

    LeeS Music Fan

    Location:
    Atlanta
    Uh dude...it's frequently wrong. Even scientists cop to that.
     
  9. therockman

    therockman Senior Member In Memoriam


    I have to agree with this statement. Science is an ever-changing group of ideas that represents current technology and opinion.
     
  10. Tony Plachy

    Tony Plachy Senior Member

    Location:
    Pleasantville, NY
    Luke, You win, I give up. :sigh: Call it distortion. After all as a Ph.D. physicist I can assure you that science is always 100% right. ;)
     
  11. Luke M

    Luke M New Member

    Location:
    Pittsburgh
    Oh, the irony.
     
  12. therockman

    therockman Senior Member In Memoriam

    This whole discussion has gotten a little bit out of hand. The whole debate about analog vs. digital reminds me of my studies in physics and the debate about whether light exists as a particle or a wave. The particle theory described light as bits of matter, or rather a stream of photons, that emanated from the source and impinged upon the eye. The wave theory saw light as a feature of electromagnetism that utilized waves of energy to convey the image from the source to our eyes. So the debate raged: what was light? Energy or matter? Waves or particles? Enter quantum mechanics, which described light as both waves and particles. So what is sound?
     


  13. LeeS

    LeeS Music Fan

    Location:
    Atlanta
    This supports my point about science not knowing everything. Just as with jitter, the science explaining light has continually evolved.

    Science can't measure everything. Anyone who has listened to low-THD amplifiers that sound like crap can attest to that. :)
     
  14. OcdMan

    OcdMan Senior Member

    Location:
    Maryland
    ALP, Luke, What exactly would it take to prove whether humans can hear differences in a tone when ultrasonic signals are present as opposed to when they are not? Some sort of speaker that has measurably equal distortion regardless of what sound it's producing? Using separate speakers and amplifiers can't be completely conclusive one way or the other, because that setup doesn't exactly reproduce what you're trying to test in the first place. It's close but not the same. And because no one can know beforehand whether that slight change, no matter how insignificant, has a bearing on perception, it can't be dismissed. To assume that some seemingly inconsequential difference has no effect would be considered bad science, right?
     
  15. Luke M

    Luke M New Member

    Location:
    Pittsburgh
    Ocd, my view is that this particular question has been asked and answered satisfactorily, hence is not a fruitful area of research. Anyone is free to disagree, of course, as long as the disagreement is not based on obvious errors (e.g. forgetting to take into account amp/speaker distortion).
     
  16. Tony Plachy

    Tony Plachy Senior Member

    Location:
    Pleasantville, NY
    Matt, When I did the experiment we were not even trying to test human hearing; we were checking out our acoustic equipment. I used a powerful signal generator and piezoelectric transducers, feeding the generator signal directly to the transducer. We were very young (I was in my early 30's and my technician was barely 21) and we were close (3 feet) to the transducers. We were working in the range of 12 kHz to 14 kHz. The fact that we could hear the transducers was more irritating than pleasant (a very high-pitched ringing). My tech switched from sine wave to square wave by mistake, and when he did I immediately said, "What did you do? It sounds even worse!" :laugh: He agreed, and then found his mistake. I remember the data from the transducers: they were very linear and clean, and would probably satisfy Luke as being good enough. On the other hand, we never made any attempt to determine what distortion products the generator was putting out on sine wave or square wave (it wasn't important for what we were doing then), and I am sure Luke would argue that what we heard was distortion products, and not the change in the waveform caused by the ultrasonics that are needed to turn a sine wave into a square wave.

    If you wanted to repeat the experiment using audio equipment, it would take a very clean generator (probably lower power than what I used originally, so that it stays clean) and a very clean SS amp (this is no place for tubes). Determining how clean the amp is would be a challenge, because amp distortion (THD and IMD) is usually measured using sine waves, so figuring out how to determine the distortion with a square wave would have to be thought through. Finally, very linear and very efficient speakers would be needed, so that again we could keep everything at the lowest possible power to keep distortion products as low as possible.

    What I have realized in writing all of this is that I do not really care whether I am right or Luke is right. What I do care about is knowing whether I am listening to a square wave or a sine wave (whether it be because the human ear can detect the change in the waveform shape due to the ultrasonics, or because the human ear detects the very slight amplifier distortion caused by the ultrasonics). Since it has been shown (and can be mathematically proven) that a digitizing system like DSD captures the ultrasonic information that makes the difference between a sine wave and a square wave, this is the system I prefer. :)

    P.S. The original work was done over 20 years ago, and I have since changed companies and jobs, so I cannot go back and recreate it.
     
  17. arnie35

    arnie35 New Member

    Location:
    New Jersey
    You probably meant to but didn't make it clear that it is the accuracy of the relative amplitudes of the almost infinite number of frequencies (all those harmonics) which is critical, rather than the overall amplitude at any point in time.
    Arnie (new poster)

     
  18. arnie35

    arnie35 New Member

    Location:
    New Jersey
    I wondered when someone was going to point this out. About the only "natural" music one will find these days is the boys' choir at Evensong in any Cathedral, with only the physical surroundings for amplification.

    BTW, anyone interested in early jazz, please visit my Yahoo Group (just started) listed under my profile.

    Arnie
     
  19. Steve Hoffman

    Steve Hoffman Your Host

    Hi Arnie,

    WELCOME!!!!
     
  20. Mal

    Mal Phorum Physicist

    Welcome Arnie :wave:.

    I'm not sure what you are getting at here.

    The sampling theorem states that a continuous waveform can be reconstructed perfectly from samples so long as the sampling frequency is at least twice the maximum frequency in the waveform being sampled.

    Each sampled value is simply the amplitude value of the waveform at the time the sample is taken. These are the values that are assumed to be known exactly in the sampling theorem but which in practice are not known exactly thanks to the quantization error I talked about.
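    A minimal numerical sketch of this (illustrative values, assuming ideal unquantized samples): reconstructing a band-limited tone between its sample points with the Whittaker-Shannon interpolation formula recovers the original waveform, limited only by truncating the infinite sum to a finite window.

    ```python
    import numpy as np

    # Sampling-theorem sketch: a 10 kHz tone sampled at 48 kHz (comfortably
    # above twice the signal frequency) is reconstructed between sample
    # points by sinc interpolation: x(t) = sum_n x[n] * sinc(fs*t - n).

    fs = 48_000                          # sampling frequency, Hz
    f = 10_000                           # tone frequency, below fs/2
    n = np.arange(2048)                  # sample indices
    samples = np.sin(2 * np.pi * f * n / fs)

    t = 1024.5 / fs                      # a time midway between two samples
    reconstructed = np.sum(samples * np.sinc(fs * t - n))
    exact = np.sin(2 * np.pi * f * t)
    print(abs(reconstructed - exact))    # small; only truncation error remains
    ```

    With quantized (rather than exact) sample values, the reconstruction error would instead be bounded by the quantization noise Mal mentions.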

    Does that help?

    :)
     
  21. arnie35

    arnie35 New Member

    Location:
    New Jersey
    Not completely. Yes, you now refer to the "frequency(ies)" in the waveform, which you didn't before. If you look at the waveform in a typical wave editor, all you are seeing is the "overall amplitude" of the hundreds of frequencies being sampled/measured. What I was trying to say was that this overall amplitude is not all that important. What IS important is how the process captures the accuracy of each frequency (in terms of Hz) and the relative amplitudes of all those frequencies. This is what makes a piano sound different from a clarinet, for example. I really do assume you know all this, and we may be misunderstanding each other or talking at cross purposes.

    Merry Christmas to all,
    Arnie
     
  22. Mal

    Mal Phorum Physicist

    Arnie,

    what you say about the combination of different frequencies at different loudnesses making all the different sounds we hear is true. I say loudness rather than amplitude as the actual amplitude of an oscillating signal varies with time - this variation in time defines the frequency.

    What you are missing is that these different frequencies (at whatever sound pressure level they may be at) can each be represented as a waveform showing the variation in amplitude as a function of time, and that these waveforms all combine together to form one waveform - this is known as superposition. This composite waveform is simply a time-varying amplitude - this is what your eardrum is responding to, and this is what you see in your wave editor. All the information about the different frequencies, at whatever levels, that combined to make the sound you hear is found within this single waveform.

    By sampling the waveform at regular time intervals you are making discrete recordings of the amplitude as a function of time, which means that you are recording the frequency information up to half the sampling frequency, as well as the dynamic variation.
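    A small sketch of superposition and what the single composite waveform preserves (frequencies and levels here are arbitrary illustrative values, not from the thread): two tones at different levels sum into one time-varying amplitude, and analyzing that one waveform recovers both the frequencies and their relative amplitudes - the piano-vs-clarinet information Arnie is asking about.

    ```python
    import numpy as np

    # Superposition: two sinusoids at different amplitudes combine into a
    # single composite waveform. The FFT of that one waveform recovers both
    # the frequencies and their relative levels.

    fs = 48_000
    t = np.arange(fs) / fs               # one second, so bin k = k Hz
    composite = (1.0 * np.sin(2 * np.pi * 440 * t)
                 + 0.25 * np.sin(2 * np.pi * 880 * t))

    # Scale so each bin reads out the component's original amplitude.
    spectrum = np.abs(np.fft.rfft(composite)) / len(t) * 2
    print(f"440 Hz: {spectrum[440]:.2f}, 880 Hz: {spectrum[880]:.2f}")
    ```

    Nothing about the two components is lost in the sum: the composite waveform alone is enough to get both back.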

    :)
     