Was thinking about this a bit today...
When a mastering or authoring error results in a channel being inverted, and we correct that by inverting it back, we shift the phase of every frequency in that channel by 180 degrees. This brings the listening experience back to the mixing engineer's intent.
But what happens when we time shift an LFE channel?
Assuming the LFE delay was present in what the mixing engineer heard, the sound with the delay is the engineer's intent and therefore not an error. What happens when we "fix" the delay by bringing the amplitude peaks together in time?
When we shift an entire channel by, say, 5 ms, we shift every frequency in it by a different fraction of its cycle. A 200 Hz signal has a period of 5 ms, so a 5 ms shift moves it exactly one cycle over (no audible change). But a 100 Hz signal (10 ms period) moves half a cycle over (phase inversion), and a 300 Hz signal (3.33 ms period) moves 1.5 cycles over, which is also an effective phase inversion. The time-shifted channel may now be causing a perceived loss of signal in the very low bass range, because the LFE combines with the other channels to cancel out low bass frequencies from about 50 Hz to 120 Hz.
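The cycle arithmetic above can be sketched in a few lines of Python. This is just my own illustration of the math, assuming ideal pure tones:

```python
# Sketch: residual phase rotation caused by a fixed time shift.
# Whole cycles are inaudible for a steady tone; only the leftover
# fraction of a cycle matters.
def residual_phase_deg(freq_hz, delay_s):
    """Phase rotation after discarding whole cycles, in degrees (-180..180]."""
    cycles = freq_hz * delay_s   # how many cycles the delay spans
    frac = cycles % 1.0          # fractional cycle left over
    if frac > 0.5:
        frac -= 1.0              # fold into (-0.5, 0.5]
    return frac * 360.0

delay = 0.005  # 5 ms
for f in (100, 200, 300):
    print(f, "Hz ->", residual_phase_deg(f, delay), "degrees")
# 200 Hz lands on a whole cycle (0 degrees); 100 Hz and 300 Hz both
# land on a half cycle (180 degrees), i.e. effective phase inversion.
```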
The shift may appear to increase the signal's amplitude or "clean it up", but across the frequency spectrum the results are a mixed bag.
If we look just at the LFE band, the lowest frequency that gets fully phase-inverted is the one whose half-period equals the shift, i.e. f = 1 / (2 × shift):
5 ms = 100 Hz
6 ms ≈ 83 Hz
7 ms ≈ 71 Hz
8 ms ≈ 63 Hz
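Those numbers come straight from f = 1/(2 × shift); here is a quick Python check (my own sketch, delays in milliseconds):

```python
# The lowest fully-inverted frequency for each time shift is the one
# whose half-period equals the shift: f = 1 / (2 * delay).
for delay_ms in (5, 6, 7, 8):
    f = 1000.0 / (2 * delay_ms)       # delay in ms -> frequency in Hz
    print(f"{delay_ms} ms -> {f:.1f} Hz")
# prints 100.0, 83.3, 71.4, 62.5 Hz, matching the rounded list above
```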
The graph below shows phase shift vs. frequency for 5 ms to 8 ms time shifts. To read it: 0 means no phase change, and -1 to +1 correspond to -180 to +180 degrees of phase change; both -180 and +180 degrees are fully out of phase. You can clearly see that the meat of the LFE band is now completely out of phase, so tones in this range will be subdued rather than enhanced.
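For anyone who wants to reproduce the graph's values, here is a rough Python sketch of the same computation: normalized phase change in (-1, +1], sampled at a few LFE-band frequencies (my own sketch, not the tool that made the graph):

```python
# Normalized phase change (-1..+1 maps to -180..+180 degrees) vs.
# frequency for 5-8 ms shifts. Values near +/-1 mean the shifted
# tone is effectively fully out of phase.
def normalized_phase(freq_hz, delay_ms):
    frac = (freq_hz * delay_ms / 1000.0) % 1.0   # fractional cycles shifted
    if frac > 0.5:
        frac -= 1.0                              # fold into (-0.5, 0.5]
    return frac * 2.0                            # scale to (-1, 1]

for delay_ms in (5, 6, 7, 8):
    row = [round(normalized_phase(f, delay_ms), 2) for f in range(40, 121, 20)]
    print(delay_ms, "ms:", row)   # 40, 60, 80, 100, 120 Hz
```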
I think this explains the apparent paradox I experienced with Yoshimi in my post above that "fixing" an LFE delay caused a loss of low bass in the end result that I heard.
And now I leave this to the friendly crowd in here and hope that my admittedly rusty signal analysis skills did not fail me.