> Atmos can already be played back on any cheap notebook PC--and if it has an HDMI port, you can route it to your AVR, to boot.

Yes, but it can't be decoded by most operating systems natively, so it's no mean feat to have Atmos play back using a computer audio interface (with sufficient output channels) and no dedicated hardware decoder (like an AVR).
And without the $400 Dolby software decoder. Cavern seems to have an upmixer and room EQ.
> Atmos can already be played back on any cheap notebook PC

That title was meant to show the decoder's performance. Dolby uses dual Xeon processors in their CP850; I did the same render (DCP Atmos) on a single core of an entry-level notebook CPU (Intel 5200U). While Dolby made great efforts to make DD+ really heavy on the CPU, pushing users to dedicated hardware, it still runs really well: about 8 seconds of DD+ can be decoded per second on a high-end PC. That is far from regular DD's 40x performance, and while real-time Atmos will need more optimization to run next to a video on cheaper PCs, it's well within reach.
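If you want to reproduce that "seconds decoded per second" figure on your own machine, the measurement itself is simple. A minimal timing sketch (Python; `decode` is a hypothetical stand-in for whatever decoder you are benchmarking, not an actual Cavern API):

```python
import time

def realtime_factor(decode, bitstream, sample_rate=48000):
    """Decode speed as a multiple of real time: 8.0 means
    8 seconds of audio decoded per second of wall-clock time."""
    start = time.perf_counter()
    frames = decode(bitstream)  # assumed to return the number of PCM sample frames produced
    elapsed = time.perf_counter() - start
    return (frames / sample_rate) / elapsed
```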
> I guess they mean this technology will give you Atmos-y effects on your two laptop speakers alone?

This video has Dolby Atmos for Headphones audio. While I also have my own virtualizer, I'm not advertising that, since it was made only for my ears. YouTube can't carry Atmos audio, so this was the closest. Even if it could use any home codec, this is 8x the object count they can handle.
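For context, headphone virtualizers of this kind are usually built on HRIR convolution: each speaker feed is filtered with a per-ear impulse response measured for one listener, then summed. A minimal sketch under that assumption (not Cavern's actual renderer; the channel names and `hrirs` data are hypothetical inputs):

```python
import numpy as np
from scipy.signal import fftconvolve

def virtualize(channels, hrirs):
    """Fold multichannel audio down to binaural stereo.

    channels: dict of channel name -> 1-D mono sample array
    hrirs:    dict of channel name -> (left_ir, right_ir) impulse
              responses, measured for one specific listener's ears
    """
    rendered = [(fftconvolve(s, hrirs[n][0]), fftconvolve(s, hrirs[n][1]))
                for n, s in channels.items()]
    length = max(max(len(l), len(r)) for l, r in rendered)
    out = np.zeros((2, length))
    for l, r in rendered:
        out[0, :len(l)] += l  # each channel contributes to both ears
        out[1, :len(r)] += r
    return out
```

This is also why a virtualizer tuned to the author's ears doesn't transfer: the HRIRs encode one person's head and ear geometry.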
> I think people should be aware of the current state of Cavern's limitations for decoding Atmos:
> 1) It can decode lossy Atmos, typically used for streaming, but not lossless Atmos with TrueHD, as on Blu-ray, etc.
> 2) Tests on music decoding have, to date, produced distorted output. A bug has been filed and is still open.
> 3) Live decoding is limited to 8 channels (think 5.1.2) or the above-mentioned headphone virtualization targeting the author's ears.

Do you have an opinion on Cavern?
Yes, these are true. A bit more info on these: TrueHD's documentation is kept secret by Dolby, and DD+ has a sparse and a coarse mode, one of which is documented incorrectly. This made every third party support only one of the DD+ modes, including the streaming encoders and Cavern. Only official Dolby encoders know both modes, and content created with them cannot be decoded with any third-party software - yet. However, such content is super rare; commercial movies work without issues.
Can you comment on the distorted output? I think I know about what zeerround mentioned, but I would like to hear your thoughts first.
Yes, the root cause is that only one of the DD+ methods is documented correctly, and content that uses both can't be played. This includes content exported with Dolby tools, but nothing else, as the other encoders were made from the same wrong documentation (and skip this method completely). There have been issues with the doc where I could just try again with different parameters, but this one is so complex that that's not possible. Dolby's help is needed; they said they'll update the doc, but I have no idea when that will happen.
> What I have seen is music from Tidal has the issue, and music that I have encoded with Dolby tools has the issue.
> I have not tested beyond that to date.
> I guess another case that I could test would be music that I have encoded on AWS Elemental. I will try that, and report back.

Do they show up as spikes in Audition? That is what I noticed when I looked at the four top channels in the waveform.
> It's been a while, but when I quickly looked in Audition, I checked for clipping with the default settings, and it didn't find any, yet the distortion was clearly audible. I didn't actually look at the waveforms.

The spikes aren't causing clipping, but they look out of place in the waveform. Maybe we are not talking about the same thing. I only tested one album to any extent, the new Björk, which has so much unique added distortion that I didn't notice any "clearly audible" distortion.
> Clipping (flat tops) is just one kind of distortion; spikes/transients are another.
> Like I said, I didn't spend a lot of time analyzing what I was hearing; I just reported it and provided samples in my bug report. It was accepted as a bug, so there's no need for me to look deeper.

Haha. Sorry for asking.
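That distinction is easy to check programmatically: a clip detector looks for flat tops (runs of samples stuck at the same extreme level), so isolated spikes slip right past it. A rough sketch for float samples in [-1, 1] (the `run`, `tol`, and `jump` thresholds are arbitrary placeholders):

```python
import numpy as np

def flat_tops(x, run=4, tol=1e-6):
    """Classic clipping: several consecutive samples pinned near the peak level."""
    near_peak = np.abs(x) >= np.abs(x).max() - tol
    stuck = np.convolve(near_peak.astype(int), np.ones(run, int), 'valid') == run
    return np.flatnonzero(stuck)

def spikes(x, jump=0.5):
    """Transient spikes: large sample-to-sample jumps a clip detector won't flag."""
    return np.flatnonzero(np.abs(np.diff(x)) > jump)
```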
> I guess we don't know if AWS is using Dolby encoders or their own code?

If there's distortion, then it's the Dolby encoder, which can only be played back with Dolby-supported stuff.
> FYI, .riff output has a bit depth of 16. Is that what was intended?

Yes, there's both: 16-bit (int) and 32-bit (float).
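For anyone who wants to check which one they got, the WAVE `fmt ` chunk states it directly: format tag 1 is integer PCM, 3 is IEEE float, alongside the bits per sample. A quick sketch (standard RIFF/WAVE layout, nothing Cavern-specific):

```python
import struct

def wav_format(path):
    """Return (format_tag, bits_per_sample) from a RIFF/WAVE file.
    format_tag 1 = integer PCM (e.g. 16-bit), 3 = IEEE float (e.g. 32-bit)."""
    with open(path, 'rb') as f:
        riff, _, wave = struct.unpack('<4sI4s', f.read(12))
        assert riff == b'RIFF' and wave == b'WAVE'
        while True:
            header = f.read(8)
            if len(header) < 8:
                raise ValueError('no fmt chunk found')
            chunk_id, size = struct.unpack('<4sI', header)
            if chunk_id == b'fmt ':
                fmt = f.read(size)
                tag, = struct.unpack_from('<H', fmt, 0)
                bits, = struct.unpack_from('<H', fmt, 14)
                return tag, bits
            f.seek(size + (size & 1), 1)  # chunks are word-aligned
```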