Time resolution of Red Book <=45ns

Discussion in 'Audio Hardware' started by Publius, Jul 6, 2006.

  1. lukpac

    lukpac Senior Member

    Location:
    Milwaukee, WI
    You've lost me.

    If the event can be captured correctly, it can be shifted. You seem to be talking about a spike that can't be captured regardless of its position.
     
  2. felimid

    felimid New Member

    Location:
    ulster
    Only when you can assume a synthetic template for the event.
    I.e., the spike is smooth and parabola-like, or the spike is an isosceles triangle or a right-angled triangle, and you have more than one sample of its known form to fit it with. The closer you get to the sample rate, the more assumptions are required.

    If you can't figure out entirely what I was writing about, you should still be able to answer the explicit query I made, that is, to define x and y in the circumstance specified. You should be able to do that before summarizing what I 'seem' to be talking about.

    Here is the challenge again, to be clear:
    The subsample levels between 0,8,0. Figure them out according to your subsample time resolution methods (you may even assume, for a head start, that all the preceding and following samples are 0).

    Surely you can see that is a game you can't win; if you were in my position, how would you play it?
     
  3. lukpac

    lukpac Senior Member

    Location:
    Milwaukee, WI
    All those things you mention would be above 22.05 kHz - of course you wouldn't be able to tell the difference between them.

    Assuming a band limited signal, you can certainly calculate the values at subsample levels. I'm not about to claim I'm good enough at the math to get those for you, though.
     
  4. felimid

    felimid New Member

    Location:
    ulster
    In the challenge I gave, the spike survives below the Nyquist frequency (or else it wouldn't be present). This thread claims the record can indicate its subsample timing profile; the challenge demonstrates that it *cannot*.
    That spike could be 0,8,8,0,0 or 0,0,8,8,0 or various other permutations, while represented at half the rate by 0,8,0. This is the fact that needs to be acknowledged, and it contradicts the claim of this thread.
     
  5. Publius

    Publius Forum Resident Thread Starter

    Location:
    Austin, TX
    Wrong. According to my resampler, the resampled output should be something like [-0.66, 6.23, 2.15].

    Well, [0,8,8,0,0] downsampled is [2.54, 6.54, -0.84]. And [0,0,8,8,0] downsampled is [-0.95, 6.86, 2.86]. So, I would say that, given an input of [0,X,8,Y,0] and a 1:2 downsampled output of [0,8,0], it would in fact definitely not be EITHER of those two.

    In fact, an approximate solution to [0,8,0] upsampled is [0, 5.09, 8, 5.09, 0]. However, if I downsample this back, I don't get [0,8,0] again but [1.08, 7.23, 1.08]. This happens because the upsampled signal requires more than 5 samples to represent the original signal! A more correct upsample would be something like [0,-0.42, 0, 0.53, 0, -0.70, 0, 1.00, 0, -1.69, 0, 5.09, 8, 5.09, 0, -1.69, 0, 1.00, 0, -0.70, 0, 0.53, 0, -0.42, 0].

    SO, given your [0,X,8,Y,0] question, the "answer" is that no values of X and Y would ever downsample to [0,8,0]. However, just to go along with your thought experiment, let's see how far we can adjust them while still keeping [0,8,0] - or at least something that rounds to [0,8,0]. As it turns out, if I adjust X while holding Y=5.09, I can increase X to about 6.3 without disturbing the output (X=6.4 yields [1, 8, 0]). Similarly I can decrease X to 3.9 without disturbing the result (3.8 gives [0, 7, 0]). Varying X and Y at the same time would make for a much narrower range of allowable changes - for X=Y, they can be no lower than 4.5, otherwise the result will be [0,7,0]. They can go as high as about 6.

    So round all those ranges, since it's clear that we're talking about integers only. If I assume that only X or Y is changing, then X/Y can be 5, or 6. If both X and Y are changing, then they can ONLY be 5 or 6. Tabularized... here are the allowable values of X and the allowable values of Y, given X.

    X, Y
    5, {5, 6}
    6, {5, 6}

    So there are only four possible combinations of X and Y that you could have dropped. Whoop dee freaking doo! Guess what, that's only three more combinations than knowing X and Y exactly. That's not much of a range for variance.

    I believe your thought experiment clearly demonstrates that inter-sample values can be known to a very fine degree of precision in a correctly functioning PCM implementation.
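
    (For anyone who wants to check numbers like these themselves, here is a minimal sketch assuming NumPy/SciPy, with scipy.signal.resample_poly standing in for my resampler. The exact figures depend on the antialiasing filter, so they won't match the values above digit for digit, but the qualitative result is the same: [0,8,8,0,0] and [0,0,8,8,0] do not both collapse to [0,8,0], and [0,8,0] upsampled shows the ripple described above.)

        import numpy as np
        from scipy.signal import resample_poly

        def down2(x, pad=64):
            """2:1 bandlimited downsample with zero padding on both sides."""
            xp = np.concatenate([np.zeros(pad), np.asarray(x, float), np.zeros(pad)])
            y = resample_poly(xp, up=1, down=2)
            return y[pad // 2 : pad // 2 + (len(x) + 1) // 2]

        def up2(x, pad=64, show=8):
            """1:2 bandlimited upsample, returning a window a bit wider than the input."""
            xp = np.concatenate([np.zeros(pad), np.asarray(x, float), np.zeros(pad)])
            y = resample_poly(xp, up=2, down=1)
            return y[2 * pad - show : 2 * pad + 2 * len(x) - 1 + show]

        print(down2([0, 8, 8, 0, 0]))   # not [0, 8, 0]
        print(down2([0, 0, 8, 8, 0]))   # not [0, 8, 0], and differs from the line above
        print(up2([0, 8, 0]))           # ~8 in the middle with small ripple either side, as above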
     
  6. felimid

    felimid New Member

    Location:
    ulster
    Do you understand the particular resampling algorithm you used to generate these numbers?

    No one who properly understood resampling processes would make assertions like this - or like the thread's thesis, really.

    In fact, just taking every other sample is a mathematically perfect and common method of resampling, although a lowpass is necessary for good-quality resampling of signals with frequencies higher than half the target sample rate. It is not necessary at all for signals already lowpassed, and that clause has been provided for (by yourself or lukpac, I think).

    It's obvious you are plugging data into programs you don't fully understand, and drawing conclusions wishfully.

    So 0,8,0 is a 'naturally impossible' sequence in PCM records?
    No configuration of 0,x,8,y,0 can downsample to 0,8,0?
    Can you find more of these? It seems these redundant sequences would be a mathematical breakthrough of great use for data compression.

    check....
    Yeah, I was sure I indicated the approximate nature of the figures. I credited you with being able to choose an appropriate resampling algorithm too. But you are quibbling over insignificant particulars.

    There are many variables in the case I sketched out - though not as many as in your described "experimental procedure" at the start of this thread.

    Finally: you have observed that at that sample rate the only values allowable under the standard interpretation are 5 or 6, depending on rounding.
    And you believe that your calculated values must be the 'right' values. Yet there is EVERY chance that these values are not the same as the values recordable at the higher sample rate (because the higher sample rate can hold any value for them while not lowpassed, and multiple configurations even if lowpassed). Despite the ability of the higher sample rate to record values different from the values which you must assume from the lower-rate record, you think the lower record can resolve them 'correctly'.

    To be clear, and please don't get more hung up on the many variables here - you have provided the answer yourself: you are admitting that there is only one valid solution for those values, actually a few depending on the algorithm used, but basically each algorithm has its ideal solution (that's how it works, because whether working in integer or floating point, the time index is an integer, so chicken-and-egg situations are unavoidable).
    Whichever interpretation or method you choose, the gaps between samples are defined (and constrained there) by the information at the samples. This means it is impossible to specify values between samples different from what is to be ideally inferred from the information stored. (There is only one consistent solution for each approach; the solution depends on the approach, and the accuracy depends on the quality of the approach but is fundamentally limited by the sample rate.) There is no intersample resolution - only one solution per approach. That means event signatures, such as a 'cliff' like this ......1,1,1,1,100,100,100,100..... cannot have their form resolved to less than a sample width. You cannot say whether the event which caused those 100s started half a sample before indicated or half a sample after. At twice the sample rate you obviously could. That's what I, and I expect others, think of as 'timing resolution', i.e. how accurately the timing of events can be discerned.
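
    (A minimal way to put the cliff case to the test, assuming NumPy/SciPy, with scipy.signal.resample_poly as a stand-in for an ideal bandlimiting resampler: build the step at eight times the target rate at two positions a quarter of a target sample either side of nominal, bandlimit both down to the target rate, and compare the resulting samples.)

        import numpy as np
        from scipy.signal import resample_poly

        OS = 8       # oversampling factor relative to the target rate
        N = 256      # length in target-rate samples

        def cliff_at(offset):
            """Step from 1 to 100 at N/2 + offset, measured in target-rate samples."""
            t = np.arange(N * OS) / OS                 # time axis in target-rate samples
            x = np.where(t < N / 2 + offset, 1.0, 100.0)
            return resample_poly(x, up=1, down=OS)     # bandlimit and decimate to the target rate

        a = cliff_at(-0.25)    # edge a quarter-sample early
        b = cliff_at(+0.25)    # edge a quarter-sample late
        print(np.max(np.abs(a - b)))   # ~0 would support the claim above; clearly nonzero would not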

    In defense it's been said that the subsample timing detail of events can't matter, or won't exist after appropriate lowpassing - I don't have to get into that. The details need to not matter for the very certain reason that it isn't possible to actually discern them, only to suppose what they most ideally would have been.

    I hope the penny drops.
     
  7. felimid

    felimid New Member

    Location:
    ulster
    PS: sorry for posting sarcastically there. I got the feeling I wasn't receiving fair attention. These debates/explanations/contentions can be frustrating, but I don't mean any offence to the quality of anyone's day.
     
  8. Publius

    Publius Forum Resident Thread Starter

    Location:
    Austin, TX
    I do not have the source code to the algorithm immediately available, but I can obtain it if need be. The documentation implies that an FIR lowpass filter is autogenerated at the target sample rate, given a fraction of Fs for the transition band, and a stopband rejection ratio. The signal is resampled, then the filter is applied.

    It's also worth noting that I had to zero pad the signal very heavily (IIRC, about 100-200 zeroes on each side) to get the output. This is a natural result of the FIR filters used - the length of the filters is dependent on the constraints mentioned above - and it's going to happen anytime you use a causal FIR filter to implement the bandlimited interpolation. I don't think it is relevant to the topic.

    If you want any more detail, I can hand-construct the FIR filter for you, and post the coefficients; do the convolution and decimation/zero padding myself for the upsampling and downsampling, and rerun the numbers.
    And you'd be halfway right; I do not work in a signal processing job, and in fact I never took a DSP course in college (drat). But I am an EE, and I do know how to read a textbook and crunch numbers. (And maybe not interpret them correctly, but so far I think I'm crunching them well enough.)
    I do agree with that. Perhaps I've been playing fast and loose with my terminology. The actual act of downsampling can be implemented with decimation, just as upsampling can be implemented by zero padding.
    Like I just mentioned, if the downsampling algorithm used is simple decimation, then you're clearly right. Nothing is going to get X and Y back in that case, and [0,x,8,y,0] will downsample to [0,8,0]. But my point here is that simple decimation is never used in audio engineering as the only step of a downsampler.

    Like you describe, it only makes sense when the signal is known to be bandlimited. But for the input stage of an ADC, clearly you don't know that. For a downsampler in an audio program, you don't know that. About the only time I'd know that is if what I was downsampling was the result of an upsampling operation of equal or lower bandwidth, compared to what I'm downsampling to.
    So when you take that filter into account, suddenly [0,8,0] can only mean that X and Y take one of a couple of different values. Otherwise the signal is no longer bandlimited.

    Eh, well, that wasn't what I meant, and I apologize if that was what was perceived. Obviously, if you have a signal at some sample rate, and downsample it to 50% of the original rate with an antialiasing filter that removes some significant signal, when you upsample back, you're never going to get that signal back again. BUT - just because you lost that high frequency information doesn't mean you've completely lost the phase information at the baseband.

    Example. Take a single-sample-width pulse, surrounded by zeroes. Lowpass it with an appropriately good FIR filter at, say, 10% of the sampling rate. You get a peak at that same location, surrounded by lobes. Just because the lowpass exists doesn't mean that the location of the peak changes! And moreover, the frequency components of the original signal that are under 10% of the sampling rate will by and large have exactly the same phase before the filtering as after the filtering. That's the whole point of FIR filtering. Constant group delay. If you used an IIR or analog filter this property would be lost, and you'd have a harder time getting good phase information. But once you're at that point, you might as well have considerably worse than 1-sample phase accuracy.

    So in this case, the position of the peak did not change after a major lowpass was applied. And so if you downsample 1:10, the position of the peak still doesn't change, since the signal was bandlimited to the final sampling rate, and Nyquist-Shannon guarantees you've kept all the info. And when you upsample back up, the peak is still at the same location. That's what I mean by keeping the subsample delay info.
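
    (Here is a minimal sketch of that pulse experiment, assuming NumPy/SciPy rather than the exact filter design described; it checks that the peak location survives both the linear-phase lowpass and a 1:10 round trip through the lower rate.)

        import numpy as np
        from scipy.signal import firwin, lfilter, resample_poly

        x = np.zeros(1000)
        x[500] = 1.0                              # single-sample pulse

        taps = firwin(201, 0.2)                   # linear-phase FIR, cutoff 0.1*Fs (0.2*Nyquist)
        y = lfilter(taps, 1.0, x)[100:]           # remove the filter's 100-sample group delay
        print(np.argmax(np.abs(y)))               # still 500: the lowpass does not move the peak

        lo = resample_poly(x, up=1, down=10)      # antialiased decimation to Fs/10
        hi = resample_poly(lo, up=10, down=1)     # bandlimited interpolation back to Fs
        print(np.argmax(np.abs(hi)))              # still 500 after the round trip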
    And well, that's the crux of the problem here. I keep trying to give numeric examples and counterexamples, showing that, for instance, this exact situation doesn't come up when both downsampling and upsampling are properly antialiased and bandlimited. But nothing seems to be sticking! I want to say that I've driven my point home well enough, and that I hope to have convinced enough people about this. We seem to be pretty much arguing in circles.
     
  9. Publius

    Publius Forum Resident Thread Starter

    Location:
    Austin, TX
    No prob. I've had the urge to bite my tongue in this thread too. ;)
     
  10. felimid

    felimid New Member

    Location:
    ulster
    Thanks - you're a better man than I.

    I have been developing signal analysis algorithms (frequency, ~wavelet parsing, data compression) for a year or so, which are coming along very well. I started from scratch from basic principles, because to start with I just couldn't accept claims commonly made about Nyquist-Shannon etc.; now that I've thought my way around the data a lot, I think I understand that approach generally, but believe it is widely misunderstood or overread.

    The thing is that 'this situation' theoretically and practically turns up (for my methods) around every single sample that differs from its neighbours. The (employed theory's) standards only describe an ideal path between each sample - a 'most normal' path, but not the only path that could have been downsampled from. It is deceptive to think of the normal path as 'accurate' when it's just a best guess that can be significantly off (significant relative to its singly sampled self, that is).

    The reason why frequencies' phases can be resolved more finely is as described - a frequency's dimensions are calculated from the extent of agreement with numerous level samples (hundreds or thousands, usually).
    Subsample changes in level *deviating from the standardised level path* require explicit subsample definition (extra samples). That's why the 'time resolution' of a PCM record has to have a pretty esoteric meaning to be considered finer than the sample interval.

    Aye, it'll dawn on ya' ;)
    but I don't expect this stuff to become common knowledge anytime soon.

    cheers'
    fe
     
  11. felimid

    felimid New Member

    Location:
    ulster
    Here is an illustration. It compares a high-definition recording of the start of a handclap, downsampled to 16kHz, with the same recording downsampled to 32kHz but also lowpassed at 8kHz (to match the 16kHz version).
    The two are then upsampled to 64kHz so we can view contemporary algorithms' interpretation of the series, and the renderings are superimposed so we can contrast the shape of each.

    The original recording at 96kHz is also shown downsampled to 64kHz.
    The resampling was done using Garf's polyphase resampler in 'ultra' quality mode (which should be near-perfect conversion in DSP terms), and the waveform rendering and lowpass were done in Audacity 1.2.4 (which should suffice, I think).

    The scale shown is one horizontal pixel per sample.
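
    (The comparison can be reproduced roughly as follows - a sketch assuming SciPy, with scipy.signal.resample_poly standing in for the PPHS resampler, a proper FIR standing in for the Audacity lowpass, and "clap_96k.wav" as a placeholder filename.)

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.io import wavfile
        from scipy.signal import resample_poly, firwin, filtfilt

        fs, x = wavfile.read("clap_96k.wav")          # 96 kHz source (placeholder name)
        x = x.astype(float)
        if x.ndim > 1:
            x = x[:, 0]                               # use one channel

        a = resample_poly(x, up=1, down=6)            # 96 kHz -> 16 kHz
        b = resample_poly(x, up=1, down=3)            # 96 kHz -> 32 kHz
        taps = firwin(511, 8000, fs=32000)            # 8 kHz lowpass for the 32 kHz version
        b = filtfilt(taps, [1.0], b)                  # zero-phase, so no added delay

        a64 = resample_poly(a, up=4, down=1)          # 16 kHz -> 64 kHz for viewing
        b64 = resample_poly(b, up=2, down=1)          # 32 kHz -> 64 kHz
        ref = resample_poly(x, up=2, down=3)          # 96 kHz -> 64 kHz reference

        n0, n1 = 0, 2000                              # pick a window around the clap onset
        plt.plot(ref[n0:n1]); plt.plot(a64[n0:n1]); plt.plot(b64[n0:n1]); plt.show()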

    [image: the superimposed waveform renderings of the clap onset, one horizontal pixel per sample]

    I think it is easily observed that the actual shape and distinguishing acute features of the waveforms have been significantly altered by the difference in downsampling, even though their bandwidths are the same (both lowpassed to 8kHz).

    How is the notion of theoretical subsample time resolution to defend against this kind of demonstration?
     
  12. Publius

    Publius Forum Resident Thread Starter

    Location:
    Austin, TX
    Audacity's waveform renderer uses linear interpolation rather than bandlimited interpolation, but I don't think that matters much here.

    "Garf's polyphase resampler": Where can I get this? Is this the PPHS resampler in foobar? What is its cutoff frequency?

    From what I can perceive visually, a lot of the high frequency content is in fact lost, but the 8khz-lowpassed signal definitely has more high-frequency content than the resampled signal. What exactly did you do in Audacity to do the 8khz lowpass?
     
  13. felimid

    felimid New Member

    Location:
    ulster
    That's why I upsampled for rendering.

    Yes, it's PPHS. I don't know the cutoff; I assumed it's practically half the sample rate. I also compared with SSRC set to high precision, whose output was almost identical to PPHS's.

    The appearance surprised me too, considering I was looking for just the odd small disparity to show subsample resolution is implausible.

    For the lowpass I just used Audacity's menu option (Effect > Low Pass Filter > cutoff frequency: 8000 > OK). Then I checked against SoX's default lowpass, which came out slightly differently but similar in definition (still indicating peaks which were lost by the resamplers).
     
  14. Publius

    Publius Forum Resident Thread Starter

    Location:
    Austin, TX
    And did you actually look at the spectral content of the output wavs?

    Because if you did, you would find that the "low pass filter" option in Audacity works for precisely jack squat. I just reproduced your test by creating a 30-second white noise signal at 96khz in Audacity, downsampling it to both 16khz and 32khz, using the Audacity lowpass at 8khz on the 32 and upsampling the 16 back to 32 in foobar. I then did a frequency magnitude plot of both wavs. The stopband attenuation of the lowpass filter is somewhere around 4db. The stopband attenuation of the filter involved with foo_pphs is 110db.

    The reason the lowpass-filtered signal looks like it has more information is because it is a crappy filter.
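
    (The spectral check can be done along these lines - a sketch assuming SciPy, with the two filenames as placeholders for the Audacity-lowpassed 32kHz file and the 16kHz file brought back up to 32kHz.)

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import welch

        def energy_above(fname, f0=8000.0):
            """Fraction of total power above f0, in dB."""
            fs, x = wavfile.read(fname)
            x = x.astype(float)
            if x.ndim > 1:
                x = x[:, 0]
            f, p = welch(x, fs=fs, nperseg=8192)
            return 10 * np.log10(np.sum(p[f >= f0]) / np.sum(p))

        print(energy_above("audacity_lowpass_32k.wav"))   # shallow stopband: not far below 0 dB
        print(energy_above("pphs_16k_to_32k.wav"))        # deep stopband: strongly negative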
     
  15. felimid

    felimid New Member

    Location:
    ulster
    OK, that's a bit of a gotcha.
    It's odd how SoX's default lowpass setting performs similarly.
    I should have spotted that; some wishful thinking of my own was involved.

    Since you are there, we have the issue of subsample timing measurement of features. Do peaks on the most downsampled conversions coincide exactly with peaks on the greatest? E.g., for half-sample timing resolution, the peaks on a 44kHz sample should agree with the peaks on an 88kHz one. For quarter-sample-width resolution, 22kHz should agree with 88kHz.

    I don't think they could, because those details would require higher-frequency energy to render. On one hand it is argued that higher frequencies become irrelevant; on the other hand they are required to render subsample accuracy of the waveform's features. After my messing up the graphs, are you big enough to observe/admit that is the case? ;)

    Good gotcha anyway; what a waste of time my graphs were :thumbsdn:
     
  16. felimid

    felimid New Member

    Location:
    ulster
    I suppose that what I am arguing against is the validity of conceptually separating 'time resolution' from waveform definition, which depends on bandwidth and is limited by the sample rate.

    Although the graphs are fundamentally flawed, the comparison can still be made between the 16kHz-sampled one and the 32kHz one with the insufficient lowpass. When we come to resolve actual distinguishing marks in the waveforms' representations, rather than the impressions of surviving frequencies used to render them, far from being able to measure subsample differences in timing, we can't even be sure what features have been discarded and what features have been smoothed.
    It's suggested that those features rendered by higher frequencies (and I think that is established, since the objection was that they weren't properly removed) don't matter because we can't hear them. But they do matter to the practicality of waveform time resolution - they are in fact what informs 'time resolution', audible or not, which is sublime because the period between samples is usually a period of time.

    We have near-precise (bar calculation and rounding error) reproduction of the frequency range which survived the downsample, but the fact that no-longer-knowable higher frequencies have been removed - which detailed not only themselves but also subtle differences in the shape of event signatures at the higher sample rate - makes the claim of precise time resolution insensible.

    So, as not to mislead about the capabilities of PCM interpretation, the time resolution of the discontinuous waveform should be indicated by the sample rate, for all the reasons described.

    To refer to the waveform's time resolution as having 'subsample' accuracy is worthy of marketing literature, in my book. It would be just like marketeers to omit the detail that it is the *surviving* waveform's time resolution which is subsample-accurate.

    But then, 'lowpass-surviving time resolution of Red Book <=45ns' doesn't have the same ring to it, eh?

    PS: I do still feel embarrassed about running with the faulty lowpass. :(
     
  17. Publius

    Publius Forum Resident Thread Starter

    Location:
    Austin, TX
    It all depends on exactly what you're trying to measure and what you're trying to apply. Obviously, you can lowpass a signal with a peak, and that peak is probably going to change. But in that event, there is *no* guarantee of accuracy in determining the peak, not even to within one sample. What's preventing the HF information from putting the peak at some other place entirely?

    If you're only concerned with low frequency peak info - that is, the high frequency info is extraneous - then it doesn't really matter where the peak really lies, and in fact, the HF info is really noise that ought to be removed anyway. Think of trying to determine the peak of a sinusoid in the presence of large amounts of noise mixed in. Or even determining the peak of a test tone from a record, with lots of tracking distortion at the highest levels. The high frequency information, if it is not actually correlated with the low frequency signal, will reduce the accuracy of the peak measurement in that case.

    Or moreover, what about the bazillion other DSP operations you could apply to a signal? Take an IIR filter with large amounts of group delay, or an allpass filter. You could wind up not knowing where your peak really was except to within 100 samples. Granted those things are not kosher for resampling, but they're going to crud up your peak detection just as much as any ideal lowpass filter would.

    I feel that sample-precise peak determination through a lowpass filter is really a specific example of a more general case, of a processing chain causing unexpected or perhaps unintuitive results. As such, it doesn't really feel like an indictment of resampling. It really doesn't have anything to do with Nyquist, per se. It's more of an engineering problem, that is perhaps of practical concern in a couple situations, and is solved by more careful tuning of the system. And yes, increasing the sample rate (or removing the resample) is one possible way to change the system.

    In the context of ADCs, if you absolutely must have high precision in your time measurements, it all really depends on how far you want to go with your algorithm, and especially how much you can assume about your input signals. From what I know about audio engineering, that sort of thing doesn't come up often in practice, compared to phase/delay accuracy (which I hope by now you agree is considerably better than 1 sample). I'm thinking of things like estimating the time duration of specific kinds of transients - like pops, and only pops. And even then, it's quite possible to get the algorithm working just as well at 44kHz as at 96kHz. Some algorithms really do work better if you upsample from 44 to 96, but it's not really because 44 is necessarily imprecise. It's because the algorithm just can't handle it.

    So I think we're agreeing a lot more than we're disagreeing right now. In fact, I think I can articulate a counterexample that sort of makes sense to me. Take an algorithm that is supposed to identify pops in a signal - which are assumed to be extremely short in duration, 1-2us - and outputs ranges of the signal that are "invalid" (ie contain the pops). The invalid ranges are supposed to be as tight as possible. However, the algorithm is clearly going to have to invalidate more of the signal at 44.1khz than at 96khz, simply because the sample period is so much longer. (Also, the odd resampling ratio would mean that ringing would occur, but that particular problem goes away at 88.2.) This particular issue would occur just as much if you worked in 44.1khz as compared to upsampling from 44.1 to 96.

    Note, however, that a lot of the potential issues with this scheme are not impossible to overcome at 44.1, given adjustment of the rules. If invalid ranges are made "fuzzier" - i.e., you identify pops as exact points in time that cause changes across multiple samples, so that those samples are only partially invalidated - you avoid nearly all of the issues, since the phase information of the pop is preserved at 44.1. Of course, then you might need to make more assumptions about the pop. It's all about engineering the system. You perhaps need to do it a lot more at 44.1 than at 96, but that doesn't mean the results at 44.1 are always going to be that much worse than at 96.

    Eh, I forgive you. To be honest, I stay away from Audacity for high-precision audio work. The processing system it uses internally (it's called Nyquist, ironically) is entirely single-precision.
     
  18. felimid

    felimid New Member

    Location:
    ulster
    I see you realise the limits of your report, and I do too now, but what less involved folks will assume is being measured is the reproduction of the original sound.
    Here is my problem with the wording of this thread (and it has travelled to the Hydrogenaudio wiki, for example):
    http://wiki.hydrogenaudio.org/index.php?title=Vinyl_Myths
    "PCM can encode time delays to any arbitrarily small length. Time delays of 1us or less - a tiny fraction of the sample rate - are easily achievable. The theoretical minimum delay is 1ns or less. (Proof here*[this thread]).)"

    ...Saying the time resolution is all but perfect is true when it relates to the record itself (and records like it, with the same sample rate).
    But this is ripe for misinterpretation, when it's clear to only a few that the precise delays encodable are in a waveform which is necessarily simplified, or at least limited in resolution, by the sample rate.

    While we look at things in the frequency domain, what limits the resolution of the waveform is not 'timing' but the complexity provided by the frequency range. Through the looking glass, in the 'level' domain, what limits the resolution of the waveform is the gaps in the specification of level - the sample interval.

    I couldn't see my point as clearly as this earlier, which is why I was struggling to communicate my own esoteric comprehension of the subject, but I think it has boiled down to an easily accessible point.

    If we define the 'time resolution' of the record in regard to its reproduction of the original, the correct value is surely the sampling interval - that would be nice and simple.

    In my esoteric terms, it is within the frequency domain that precise time-related measurements are made, because the frequency domain is (only) observable by relating many sampled instants in time simultaneously.

    Consider that this timing resolution, which relates to the record's own limits (rather than the original's), will ultimately depend on the rounding error of the sample word size; we could then claim that 24-bit ~10kHz samples have superior time resolution to 16-bit 44kHz ones, or something like that, depending on the exquisite math required to calculate the error limits exactly. Or maybe the accuracy depends on the window size/weighting of the measures applied - you see the confusion which ensues :p

    I see; I don't have such work to do normally. I'm writing algorithms and coding them, so most things are a distraction. This was interesting though, so thanks for your replies to my winding critique.

    :wave:
     
  20. Metoo

    Metoo Forum Hall Of Fame

    Location:
    Spain (EU)
    What I do not understand here is why you would be using 96kHz at any stage if you are studying 44.1kHz. Given that 96kHz is not a multiple of 44.1, wouldn't 88.2 be the ideal frequency rather than 96?
     
  21. felimid

    felimid New Member

    Location:
    ulster
    Hi Metoo, that example of mine was seriously flawed, but the conversion between 96kHz and other uneven multiples was not the problem. I just used a 96kHz recording as the 'original' because it was the highest-rate recording I could find. The consequence of precise time resolution of all details capable of being stored at any given sample rate is that precise conversions of *surviving details* are possible to other sample rates - no matter the 'evenness' of the conversion.
    The details being talked about here are not the conventional details which we usually think of, but details in 'the frequency domain'.

    In a current thread at Hydrogenaudio, the lead developer of the LAME mp3 encoder had this to say on a similar subject:
    We need to know about these two perspectives to understand what precise 'time resolution' in the frequency domain means, and that it does not translate to the same degree of time resolution in the time domain (the familiar arrangement of samples in a row through time).

    And as I keep repeating (like a bolshevik :sigh:), 'time' has been unified in the frequency domain, i.e. it is highly misleading, if not outright invalid, to talk of measurements made in the frequency domain as relating to 'time'.

    Another way to say it is that frequencies require sufficient resolution through time to separate (at least) the two halves of their cycle, which is why the Nyquist frequency is half the sample rate. As long as a frequency has the time resolution to describe two halves of its cycle, its dimensions (phase and power) can be discerned with great precision. It is that precision which is confusingly referred to here as 'time resolution' and 'delay'.
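
    (For what that kind of precision looks like in practice, here is a minimal sketch assuming NumPy: a least-squares fit of the phase of a 1kHz tone sampled at 44.1kHz recovers a 1 microsecond delay, a small fraction of the ~22.7 microsecond sample period.)

        import numpy as np

        fs, f0 = 44100.0, 1000.0
        n = np.arange(44100)                      # one second of samples
        delay = 1e-6                              # 1 us, roughly 1/23 of a sample period
        x = np.sin(2 * np.pi * f0 * (n / fs - delay))

        # Least-squares fit of A*sin + B*cos gives the tone's phase directly.
        s, c = np.sin(2 * np.pi * f0 * n / fs), np.cos(2 * np.pi * f0 * n / fs)
        A, B = np.linalg.lstsq(np.column_stack([s, c]), x, rcond=None)[0]
        est_delay = -np.arctan2(B, A) / (2 * np.pi * f0)
        print(est_delay * 1e6)                    # ~1.0 (microseconds)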

    best regards'
    fe
     
  22. felimid

    felimid New Member

    Location:
    ulster
    On second consideration, I think the time resolution being discussed here is dependent on the measure width employed, so it would be as much dependent on the length of the track as on the sample rate :p
     
  23. lukpac

    lukpac Senior Member

    Location:
    Milwaukee, WI
    How about this for a test...

    Take your 96kHz clap and convert it down to 44.1kHz. Bring it back up to 96kHz. Call that "file A".

    Shift "file A" by one sample, convert down to 44.1kHz, then back up to 96kHz. Call that "file B".

    See if A and B are correctly offset by 1 sample or not...

    Thoughts?
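
    (A sketch of that test, assuming SciPy, with "clap_96k.wav" as a placeholder filename; 96kHz to 44.1kHz is a 320:147 ratio, so resample_poly can do both legs, and cross-correlation reports the offset between A and B.)

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import resample_poly, correlate

        fs, x = wavfile.read("clap_96k.wav")           # 96 kHz source (placeholder name)
        x = x.astype(float)
        if x.ndim > 1:
            x = x[:, 0]

        def roundtrip(sig):
            lo = resample_poly(sig, up=147, down=320)  # 96 kHz -> 44.1 kHz
            return resample_poly(lo, up=320, down=147) # 44.1 kHz -> 96 kHz

        a = roundtrip(x)                               # "file A"
        b = roundtrip(np.concatenate([[0.0], x]))      # "file B": source shifted by one 96 kHz sample

        n = min(len(a), len(b))
        lag = np.argmax(correlate(b[:n], a[:n], mode="full")) - (n - 1)
        print(lag)                                     # 1 if the one-sample offset survived the trip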
     
  24. felimid

    felimid New Member

    Location:
    ulster
    Yes, this plainly will not work, except possibly through sheer coincidence, or through the total absence of frequencies higher than 22.05 kHz in the 96kHz clap (i.e. already lowpassed). I would carry out the test, but I am already certain of the outcome. To be even clearer, use white noise instead of a natural recording.

    To be fair, I know Publius would not expect this to work either, but we can see that this is what most would expect the report of precise 'time resolution' to mean.

    regards'
    fe
     
  25. lukpac

    lukpac Senior Member

    Location:
    Milwaukee, WI
    Why wouldn't it work? What would you expect to happen?
     