When is hi-rez overkill?

QuadraphonicQuad


ssully

2K Club - QQ Super Nova
QQ Supporter
Since 2002/2003
Joined
Jul 2, 2003
Messages
3,916
Location
in your face
Note that the latest edition of ITCOTCK is HDCD-encoded. It actually sounds better on an HDCD-compatible player.

Decoding that HDCD, like many HDCDs (including most, but not all, of the KC HDCDs), mainly just lowers the peak and average levels (in this case, by 6 dB). No peak extension was used, so there is no gain in dynamic range.

So if you were to level match undecoded and decoded outputs of the ITCOTCK HDCD, it's unlikely you'd be able to tell them apart in a blind comparison.
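For anyone who wants to try that matched comparison, the gain offset is simple to compute. A minimal sketch in plain Python (the helper name is mine, not from any HDCD tool) converting the 6 dB HDCD gain step into a linear scale factor:

```python
import math

def db_to_gain(db):
    """Convert a decibel change to a linear amplitude factor (20*log10 scale)."""
    return 10.0 ** (db / 20.0)

# The decoded HDCD output sits 6 dB below the undecoded one, so scaling
# the undecoded samples by this factor (~0.501, very nearly a halving)
# should level-match the two before a blind comparison.
gain = db_to_gain(-6.0)
```

With both versions matched this way, any remaining difference is what a blind comparison would actually have to detect.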
 
Well, to be honest I never performed a matched level comparison.

Why was it HDCD-encoded if the extra dynamics aren't used? I've viewed HDCD as "less than 16 bits" when played on a regular CD player, since bits are discarded, and "slightly more than 16 bits" when played on an HDCD player.

Oh, it's a stupid format. :mad:
 
Well, to be honest I never performed a matched level comparison.

Why was it HDCD-encoded if the extra dynamics aren't used? I've viewed HDCD as "less than 16 bits" when played on a regular CD player, since bits are discarded, and "slightly more than 16 bits" when played on an HDCD player.

Oh, it's a stupid format. :mad:

When an HDCD A/D converter is used, even when no 'enhancement' options are enabled, the digital master is still flagged as HDCD.

It is indeed a stupid format -- a solution looking for a problem. None of these old master tapes needs more than 16 bits to fully encompass their dynamic range (someone please, show me a King Crimson master tape recording that spans more than 94 dB of dynamic range). Back in the day when HDCD was introduced, the Keith Johnson A/D converter was among the best of breed -- and that, if anywhere, was where HDCD-specific sonic improvement came from. More of it probably came just from more careful mastering, though.
 
When an HDCD A/D converter is used, even when no 'enhancement' options are enabled, the digital master is still flagged as HDCD.

So, except for the 6 dB (which, I guess, still makes for less dynamic range, albeit in the LSBs?), in this case there is no difference between playing an HDCD in a CD player and in an HDCD player? And there is no difference from a Red Book CD made from the same master?

Is this valid for all KC HDCDs?
 
Not sure I agree with that. I suspect the 'crystal clarity' comes from compression and boosting the treble EQ, at least for the stereo version.

Here's the stereo 'Roundabout' from the DVD-A
...

I haven't listened to the stereo version (I don't see it happening soon either; not interested in it. I'd prefer to listen to Eddie Offord's version)...

As for the MCh:
You are right, but you forgot a very important factor regarding "sound clarity":
Yes, EQ.
Yes, compression.

One more thing: recording the multitracks to digital using a very high sampling rate (at least 96 kHz/24-bit), which I think is the key factor in the sound clarity.

Cheers!
 
I haven't listened to the stereo version (I don't see it happening soon either; not interested in it. I'd prefer to listen to Eddie Offord's version)...

The stereo version is Eddie Offord's version. It's just remastered, not remixed.

As for the MCh:
You are right, but you forgot a very important factor regarding "sound clarity":
Yes, EQ.
Yes, compression.

One more thing: recording the multitracks to digital using a very high sampling rate (at least 96 kHz/24-bit), which I think is the key factor in the sound clarity.
I think that has little or nothing to do with the sound clarity... it's really all in the mixing and mastering. And that's where EQ and compression are applied (though the choice of source tapes is important too). 24 bits of delivery resolution is pointless when the dynamic range has been compressed the way this version has. It's useful for digital processing, though, and I'm sure a fair bit of that has been applied here. 24 bits during transfer and production prevents the introduction of artifacts, but it doesn't intrinsically enhance 'clarity'.

You could take a dullish-sounding version of Fragile -- say, the first CD version -- and capture its output at 24/96. It wouldn't sound any clearer.
 
Could just be the recording itself, doesn't always have to be the specific mastering, but no way to know until you A/B.....


I've always felt that parts of Jethro Tull's STAND UP were better mixed and recorded than other tracks, but, at the time, who cared? If the CD/digital age brought anything to the fore, it was that audiophiles would find a lot wanting, and start comparing not only formats but their own equipment, ideas, sanity....been there, done that, came out the other side, and I'm here, the worse for wear, but much more understanding than I used to be.

I think FRAGILE is like that: some tracks easy to mix for 5.1, others? Maybe not. But IMO, the 5.1 is better for being there, but can't argue it coulda been better.

ED :)
 
I think that has little or nothing to do with the sound clarity... it's really all in the mixing and mastering. And that's where EQ and compression are applied (though the choice of source tapes is important too). 24 bits of delivery resolution is pointless when the dynamic range has been compressed the way this version has. It's useful for digital processing, though, and I'm sure a fair bit of that has been applied here. 24 bits during transfer and production prevents the introduction of artifacts, but it doesn't intrinsically enhance 'clarity'.

You could take a dullish-sounding version of Fragile -- say, the first CD version -- and capture its output at 24/96. It wouldn't sound any clearer.


Yes, the source tapes have to be good,
and of course;
GIGO

But I do not agree that higher resolution recording doesn't help.
Higher resolution ALWAYS gives it way more presence; if it's bad, it will be and stay bad (EQ can help too, but only so much!),
but if it's mediocre or better, it will help bucketloads!

:smokin
 
Yes, the source tapes have to be good,
and of course;
GIGO

But I do not agree that higher resolution recording doesn't help.
Higher resolution ALWAYS gives it way more presence-


No, it doesn't. If that's your experience, you're hearing either different EQ...or listener bias.

You can test this if you have a soundcard that records at 16 and 24 bits.
Or if you have something already at 24 bits, you can convert it to 16, and compare them. Blind, of course.
 
No, it doesn't. If that's your experience, you're hearing either different EQ...or listener bias.

You can test this if you have a soundcard that records at 16 and 24 bits.
Or if you have something already at 24 bits, you can convert it to 16, and compare them. Blind, of course.
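The 24-to-16 conversion being suggested is just requantization with dither. A rough per-sample sketch in plain Python (the function name is mine; real converters work on whole buffers, but the arithmetic is the same):

```python
import random

def dither_to_16bit(sample_24):
    """Requantize one signed 24-bit integer sample to 16 bits with TPDF dither.

    TPDF dither is the sum of two independent uniform values spanning a
    total of +/-1 LSB of the *target* (16-bit) format; adding it before
    rounding turns quantization error into benign noise instead of
    signal-correlated distortion.
    """
    lsb = 256  # one 16-bit LSB expressed in 24-bit integer units (2**8)
    tpdf = (random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)) * lsb
    q = int(round((sample_24 + tpdf) / lsb))
    return max(-32768, min(32767, q))  # clamp to the 16-bit range
```

Dither the file down this way, then do the comparison blind, as suggested.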

This reminds me of the Monty Python sketch about having an argument; a plain contradiction is not the same.

I don't know if you are contradicting me just for the fun of it (or because you need the energy, vampire-like) or just to see how I react.

This is my final word about this.

Your point is ridiculous.:confused:

Why record in 24 bit/96 K or higher if it sounds the same as 16/44?
A ruse to take more space in your hard drive?

With this same philosophy 30ips sounds the same as 15!(yeah right)

I'm a recording engineer(back when it was 24 tr 2"), and although I'm not taking a holier than thou attitude,

I can hear a huge difference when I record in 24 or higher than in 16.

A voice from a Neumann through a Focusrite in 24-bit is no comparison to 16-bit; if you can't face this... go back to cassettes.

I suggest you look somewhere else for an argument.

Geeeeez!!!!
 
This reminds me of the Monty Python sketch about having an argument; a plain contradiction is not the same.

I don't know if you are contradicting me just for the fun of it (or because you need the energy, vampire-like) or just to see how I react.

This is my final word about this.

Your point is ridiculous.:confused:

Why record in 24 bit/96 K or higher if it sounds the same as 16/44?
A ruse to take more space in your hard drive?

The question of why sample rates have gone up as high as 192 kHz is an interesting one, because there's surely no technical reason why they ever need to go much past 60 kHz, worst case. That's not just my opinion; it's the opinion of some high-powered ADC designers and experts on human hearing.

For bit depths, there are good reasons for recording and producing at 24 bits, particularly for live events... but a properly done 16-bit and 24-bit transfer of a source like the 'Fragile' analog master tape shouldn't sound different. 16 bits gives us 96 dB of 'resolution', which exceeds not only what old analog recordings like this have to offer, but also what even very good home listening environments require. 24 bits gives plenty of 'insurance', and even makes some sense as a delivery format nowadays, when signals are often streamed right into complex DSP chains in receivers, which would convert 16 to 24 anyway. But an *intrinsic* difference, like 'more presence' just by virtue of being at 24 versus 16? Nah. There's no experimental support for that.
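The 96 dB figure for 16 bits falls straight out of the definition: each bit contributes about 6.02 dB of range. A one-liner to check it (plain Python, function name mine):

```python
import math

def pcm_dynamic_range_db(bits):
    """Ideal dynamic range of an N-bit PCM format: 20*log10(2**N),
    i.e. about 6.02 dB per bit (the familiar +1.76 dB term applies to
    sine-wave SNR, not to this raw range figure)."""
    return 20.0 * math.log10(2 ** bits)

# 16 bits -> ~96.3 dB; 24 bits -> ~144.5 dB
```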

So am I saying that 16/44 should be 'good enough' if properly done, that the difference you hear in remasters is overwhelmingly due to the remastering EQ/levels, that the 'high rez' formats in home delivery media have a significant element of hype to them, and that the industry has no moral compunction about hyping 'specs' with little or no practical significance when it suits them for reselling the same product over and over?

Heh.


With this same philosophy 30ips sounds the same as 15!(yeah right)
What 'philosophy' are you imagining that I am promoting? That nothing makes a difference? That's not the case.

Sometimes a process likely makes a real audible difference. Sometimes not. It's not magic... there are well-founded technical reasons why one could expect 15 vs 30 ips to make a real audible difference. That doesn't mean every measured difference makes an audible difference.


I'm a recording engineer(back when it was 24 tr 2"), and although I'm not taking a holier than thou attitude,
Of course not. You're merely saying that whatever effect you think you hear must be real, and must be happening for the reason you say it is.


I can hear a huge difference when I record in 24 or higher than in 16.
There are a couple of good technical reasons to record and produce at 24 bits, and do digital processing at 24 bits. They don't have to do with 'presence'. They have to do with allowing enough headroom (esp for a live recording), and with dealing with accumulated digital errors during digital editing/mastering/processing, that could become audible if it were done at 16 bits. That's all.

So, sure, if you overloaded your 16-bit format during recording, or did a ton of digital processing on it afterwards, it could sound much worse than the same recording kept at 24 bits. But I have a feeling you aren't talking about that situation. If you record and edit at 24, then properly convert to 16 (with dither), the only difference you should hear would be if you took the very quietest part of the recording and listened to it at a level that would be earsplitting during the loud parts.

So this 'huge difference' you routinely hear, I gotta wonder, where does it come from? There's no technical reason for it to really exist between properly done 16 and 24 bit releases. It's a prime candidate for an ABX comparison, which would rule out the usual psychological biases (one of which is 'better' numbers must mean better sound!)


A voice from a Neumann through a Focusrite in 24-bit is no comparison to 16-bit; if you can't face this... go back to cassettes.
You seem fond of 'excluded middle' argument...if I disagree that 16 vs 24 should make a huge routine difference, therefore I'd be happy with cassette-quality sound?

It's also interesting that you're comparing live recording to transferring a 1971-era analog tape with an inherent dynamic range of... care to guess?

Again, this thread is about a 1971 analog tape recording, transferred at 24 bits, EQ'd and compressed to a fare-thee-well, and even remixed (for surround) -- and you're claiming you hear the 'added presence' that's specifically due to the extra 8 bits?

'Pros' aren't immune to normal psychological effects. So, take one of your 24 bit recordings (or rip a track from the Fragile DVD-A...it can be done). Dither it down to 16. Run an ABX comparison at normal listening levels and see if the 'presence' goes away. By your logic, it should. Therefore you should pass that ABX with flying colors.

And btw, I'm not 'looking for an argument'. I'm looking to counter a persistent and, to my mind, pernicious 'audiophile' mindset that puts maximum faith in an inherently flawed 'method' of comparing. It leads directly to things like 'intelligent pebbles', 'cable lifters', green markers on CDs, and other snake oil that users SWEAR makes a 'huge difference'.
 
as an addendum, luck has it that this very topic was being discussed on the ProAudio mailing list today. Here is one recording engineer's testimony
When I moved from 16 bit to 24 bit somewhere in the mid 90s and was recording a 66 piece orch, I couldn't hear any significant difference in quality, so I decided to do some tests. In the blind, people couldn't identify the raw 24 bit file with the dithered-to-16-bit file. I ran some null tests between the original and the dithered-to-16-bit file, and I have come to the conclusion that the reason why my subjects couldn't tell the difference is that when using a good bit reducer with good dither, the two files will null totally all the way down to the peak of the dither at around -93dB using TPDF. IOW, above -93dB there is no difference, audible or otherwise, between the 24 bit file and the same file dithered to 16 bit.

ITR, it's more important to capture at 24 bit than it is to play back at 24 bit. It's probably why final destination at 16 bit, for example Red Book CD, is still alive 27 years after the first CD was released, while we've had 24 bit fairly common for about 12 years now on the capturing side.
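That null test is easy to reproduce. A sketch in plain Python (names are mine; a real test would use actual audio buffers): dither each 24-bit sample down to 16 bits, expand back to the 24-bit scale, subtract, and look at the peak residual. With TPDF dither the residual is bounded by about 1.5 LSB of the 16-bit grid, i.e. below roughly -86 dBFS, the same ballpark as the ~-93 dB figure quoted above.

```python
import math
import random

def null_test_peak_dbfs(samples_24):
    """Peak of (original 24-bit) minus (TPDF-dithered 16-bit, re-expanded),
    in dB relative to 24-bit full scale."""
    full_scale = float(2 ** 23)
    lsb = 256  # 16-bit LSB in 24-bit integer units
    peak = 0.0
    for s in samples_24:
        tpdf = (random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)) * lsb
        q = max(-32768, min(32767, int(round((s + tpdf) / lsb))))
        peak = max(peak, abs(q * lsb - s))
    return 20.0 * math.log10(peak / full_scale) if peak else float("-inf")
```

Everything above that residual floor nulls to zero, which is the engineer's point.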
 
...a properly done 16 bit and 24 bit transfer of a source like the 'Fragile' analog master tape shouldn't sound different.
---
If you record and edit at 24, then properly convert to 16 (with dither), the only difference you should hear would be if you took the very quietest part of the recording, and listened to it at level that would be earsplitting during the loud parts.

I agree with this. I have never done any real (level-adjusted, blind) tests, apart from listening in the studio when recording (where I have not been able to hear any differences). But the correctly performed tests I have seen results from have null results, apart from, as mentioned, the 16-bit format's higher noise floor being audible at extremely high playback levels.

I am not saying that there isn't an audible difference at normal listening levels, it's just that I haven't yet seen any theoretical or practical evidence that supports that.
 
I believe the same about hi-res that I do about UFOs: extraordinary claims require extraordinary evidence. So far, that evidence is absent.

But don't I "believe my own ears"? No, not necessarily.
As another wise man said (quoted in James Randi's Flim Flam):

"Man's capacity for self-delusion is infinite."
 
I respectfully disagree. Perhaps the dynamic range of the original recording does not "require" the dynamic range "available" on an SACD or a DVD-A recording, but to my ears the 16-bit format is inherently flawed and limited.

A 24 bit recording does not improve nor take advantage of the dynamic range of the recording in question, it fixes some of the "noise" inherent in the 16 bit format. This results in a more dare I say analogue sounding midrange, improves imaging, the sound is more liquid and less tiresome.

I have spent a bit of time the past couple of years downloading hi rez live recordings, 24/48 and 24/96, which is just about the only way to get a dose of what hi resolution recordings sound like.

After a while you will say "Hey! this red book CD stuff sounds like SH#$%^T"

So, I find it interesting that this thread dates from 2003 and still no high rez King Crimson recordings. With any luck a 24/96 recording or two from the current tour will surface. Going to see them at the Keswick shortly and can't wait.

I do not think I will hold my breath for any SACD King Crimson releases, I am still holding it for the last Genesis box. Smart thing would be for Fripp or DGM to make these files available for download, then there is no expense associated with pressing and cover art etc.
:smokin
 
I respectfully disagree. Perhaps the dynamic range of the original recording does not "require" the dynamic range "available" on an SACD or a DVD-A recording, but to my ears the 16-bit format is inherently flawed and limited.

No doubt you determined this by a series of careful blind A/B comparisons of 16 vs 24 bit, where your correct score had p < 0.05? No? Then you probably weren't just using your ears.

Because when that *has* been done, differences only manifest themselves when you play the quietest parts back at unrealistically loud levels.
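For the curious, the p < 0.05 criterion is just the one-sided binomial test on the ABX score: the chance of doing at least that well by guessing. A quick sketch in plain Python (function name is mine):

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of scoring at least `correct` out of `trials` in an ABX
    test by pure guessing (each trial a fair coin flip)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 right out of 16 gives p ~ 0.038 -- conventionally significant at 0.05;
# 10 out of 16 gives p ~ 0.227 -- indistinguishable from guessing.
```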

And there's no reason to put require or available in quotes.


A 24 bit recording does not improve nor take advantage of the dynamic range of the recording in question, it fixes some of the "noise" inherent in the 16 bit format.
What 'noise' is that? Do you mean dither? Or do you mean rounding artifacts?

This results in a more dare I say analogue sounding midrange, improves imaging, the sound is more liquid and less tiresome.
I daresay none of those claims would hold up in a proper listening test.

I have spent a bit of time the past couple of years downloading hi rez live recordings, 24/48 and 24/96, which is just about the only way to get a dose of what hi resolution recordings sound like.
I've done that, and I can record my own, so I have yet another way.

The only real uses for higher-bit formats are 1) live recording, where actual peak levels may not be known in advance, and 2) digital production and processing, to keep rounding errors from becoming audible. Routinely, digital production for CDs is done in 24- or 32-bit domains, then transcoded down to 16 with dither. 16 vs 24 bit, on its own, is not audible at normal listening levels.

You can try this yourself, you know, if you have a good soundcard (i.e., one that doesn't resample everything). Take one of those 'hi rez' downloads, and get your hands on a decent software sample-rate/bit-depth converter (Adobe Audition's is excellent). Convert the hi-rez download to 16/44, then compare them using ABX software (a fine tool comes with the free foobar2000 player).


After a while you will say "Hey! this red book CD stuff sounds like SH#$%^T"
If I did say that, it certainly would not be because of the 'limitation' of 16 bit -- it would be due to poor mastering.

And you might want to investigate this paper by Meyer and Moran that was published in JAES this past year, where blind comparison was used to compare an SACD to the same SACD converted to redbook:
"Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback". E. Brad Meyer and David R. Moran. JAES 55(9) September 2007.

And more discussion of 16 vs 24 bit here, complete with nutty recording engineer input:

http://www.hydrogenaudio.org/forums/index.php?showtopic=53716&st=0
 
So, except for the 6 dB (which, I guess, still makes for less dynamic range, albeit in the LSBs?), in this case there is no difference between playing an HDCD in a CD player and in an HDCD player? And there is no difference from a Red Book CD made from the same master?

Now that we have hdcd.exe, you can level match the decoded and nondecoded signal, and see for yourself if there's an audible difference, with an ABX comparison.

Is this valid for all KC HDCDs?

No, a few of the KC HDCDs use peak extension -- Wake, Lizard, Islands.
 
Yes, you certainly have all the objectivist arguments trotted out. Double-blind listening tests do not necessarily reveal all, and just because something isn't revealed by such a method does not necessarily mean it isn't there.

That, to me, is part of the fun of audio; response to music can sometimes be an emotional thing, and there are still things that just don't seem to be measurable.

I stand by my assertions but appreciate your input.

thanks!
 
Yes, you certainly have all the objectivist arguments trotted out. Double-blind listening tests do not necessarily reveal all, and just because something isn't revealed by such a method does not necessarily mean it isn't there.

That, to me, is part of the fun of audio; response to music can sometimes be an emotional thing, and there are still things that just don't seem to be measurable.

I stand by my assertions but appreciate your input.

thanks!

One can have a different emotional response to the same exact recording, at different times. And one's emotional response is easily influenced by stuff having nothing to do with the actual sound. I'd bet good money that if I played you a 16-bit recording and a 24-bit recording, but told you the 16-bit one was 24-bit and vice-versa, your emotional response would be to favor the bogus '24 bit' one.
 
One can have a different emotional response to the same exact recording, at different times. And one's emotional response is easily influenced by stuff having nothing to do with the actual sound. I'd bet good money that if I played you a 16-bit recording and a 24-bit recording, but told you the 16-bit one was 24-bit and vice-versa, your emotional response would be to favor the bogus '24 bit' one.

I respectfully and wholeheartedly disagree. Once you have been exposed to, and perhaps trained to know, what to listen for, the differences are subtle but obvious. Midrange is more silky, high-end cymbals are less hashy, and, most obviously on audience recordings, the background chatter is much more annoying.

If you are a recording engineer I certainly would not hire you for any projects. Besides, this is a King Crimson thread about upcoming hi rez SACD, why are you here then if your current crop of CD's is not only good enough, but apparently perfect for you already?

So, back to the thread, the only hard news we have is something from one of RF's diaries stating that Discipline is being "worked on". Could be a while then.

Good thing I have a nice pile of 24-bit recordings that, at the very least, I have deluded myself into thinking sound very nice and better than 16-bit CDs. BTW, in an unscientific test a friend of mine was able to pick out the 24-bit version of the Grateful Dead 8-13-75 on a boom box.

the more I hear about double blind listening tests, the less I respect their validity. Useful for some things I am sure.

Thanks for your input though. I am looking forward to any remaining SACDs that might be coming out as I believe that they sound better than CDs. Same goes for the few DVD-A titles I managed to get.
 