Inspired by Zeerround's recent thread on using Reaper for immersive up-remixing (https://www.quadraphonicquad.com/forums/threads/reaper-for-immersive-up-remixing.37036/) I thought it might be useful to have a general thread on everyone's current favourite tools. I would like to dip my toes into Reaper and have got as far as installing a trial in the past, but never got over the learning curve hump. I'm sure it is worth persevering with, and I intend to have a go in the near future. In the meantime, I'll share my current process using Adobe Audition.
For stem creation, I mostly use MVSEP, given that it is free, with a bit of LALA.AI for occasional tricky bits and also for separating out acoustic and electric guitars. Otherwise, I have found the constant development and addition of new algorithms on MVSEP to be fantastic, and I think the quality of stems achievable (with a bit of manual editing) is at least as good as LALA.AI, and is often better. I have also tried deMIX Pro in the past, but found that the quality was inferior to MVSEP. Opinions may vary on this, and I would love to hear others' thoughts.
For the best quality results I tend to separate one instrument at a time, creating an 'other' stem at each stage for further processing. I have found through trial and error that the following sequence works best:
Vocals
I get best results with 'MelBand Roformer (vocals, instrumental) ver 2024.10'. I listen through (ideally on headphones) and manually delete stuff that should not be there, as well as silencing gaps of any significant length. Sometimes there are bits of other instruments on top of vocals (particularly guitars) which the algorithm has missed - I use the 'MelBand Karaoke by viperx and aufr33' to further clean these up, which usually works quite well. I don't separate out lead and backing at this stage - that comes at the end. Once I am happy with the vocal stem, I create an instrumental stem by subtraction, ready for the next step.
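The subtraction step is just sample-by-sample arithmetic. Here is a minimal NumPy sketch of the idea, assuming the mix and the separated stem are time-aligned and at the same sample rate (in practice you would read/write the WAVs with a library such as soundfile; the synthetic data below is only there to demonstrate the maths):

```python
import numpy as np

def subtract_stem(mix: np.ndarray, stem: np.ndarray) -> np.ndarray:
    # Trim to the shorter length in case the separator padded its output,
    # then subtract sample-by-sample to leave the residual (instrumental).
    n = min(len(mix), len(stem))
    return mix[:n] - stem[:n]

# Quick check with synthetic stereo audio: stem + residual == mix,
# so subtracting the stem recovers the residual exactly.
rng = np.random.default_rng(0)
stem = rng.standard_normal((1000, 2)) * 0.3       # stand-in 'vocals'
residual = rng.standard_normal((1000, 2)) * 0.3   # stand-in 'instrumental'
mix = stem + residual
instrumental = subtract_stem(mix, stem)
assert np.allclose(instrumental, residual)
```

Note that this only works cleanly if the separator's output is phase-accurate against the source; any latency or resampling in the stem will leave audible residue.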
Bass
I use 'MVSEP Bass (bass, other)' set to 'BS + HTDemucs SCNet / Extract directly from mixture / Include results of independent models'. This way you get all three separations in one go, together with an ensemble, and you can audition each to find the best one - manually cleaning up and combining bits together if necessary. Again, once I'm happy, subtract from 'other' and go to the next instrument.
Piano / guitar
Which comes first depends on the track. I often try both to see which one comes out cleanest. The 'MVSEP Piano' has a number of options - I usually do both MelRoformer and SCNet Large and pick the best one or sometimes blend them together. 'MVSEP Guitar' also has options - again I run both MelRoformer and BSRoformer and compare. For guitars, if there are both acoustic and electric mixed together, I do a second pass through LALA.AI to see if I can create separate stems for each.
Drums
MVSEP Drums, like MVSEP Bass, allows the output to include the results of independent models, so you can select the best one.
Other stuff
Depending on the track, I use the MVSEP Wind, Organ and Strings to get as much stuff separated out as possible. Wind does well for sax and brass instruments. Organ and Strings can often give good results for synths as well as actual organs/strings. BandIt Plus, BandIt V2 and MVSep DnR v3 can also tease out interesting synth sounds. Several of the above have 'sub' options and can output independent models, so it's worth doing a few passes to get the best results.
Whatever is left over after creating all of the above stems, I generally listen through manually and copy/paste to the relevant stem (sometimes using spectral editing), or leave as an 'other' stem to mix back into the final remix.
Finally, after I am happy that I have got as many stems as possible, I use 'MelBand Karaoke by viperx and aufr33' to separate lead and backing vocals, and DrumSep set to the 5 stem model to split drums into Kick, Snare, Toms, HiHat and Cymbals. Optionally you can separate Cymbals into Crash and Ride by using the 6 stem model, but I don't tend to.
This can give me up to 12/13 stems to play with, depending on the source track.
Upmixing
I upmix all stems to both 5.0 and 4.0 using my own scripts based on CentreCutGui (details here: https://www.quadraphonicquad.com/fo...entrecutcl-stereo-to-5-1-script-v-0-2b.32788/). Results are similar to those achievable with Zeerround's SpecScript. Reviewing the stems, I decide which ones to use in the 'base' speakers and what to push to the heights. Some I keep as stereo (generally bass to the front L+R, and some of the drum stems).
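For anyone curious what a stereo-to-4.0 upmix looks like in principle: this is not the spectral method the CentreCutGui-based scripts use, just a classic Hafler-style matrix sketch as a stand-in, where the fronts carry the original channels and the rears carry the difference (ambience) signal:

```python
import numpy as np

def upmix_4_0(stereo: np.ndarray) -> np.ndarray:
    # Simple matrix upmix (NOT the CentreCutGui spectral method):
    # fronts = original L/R, rears = +/- half the difference signal.
    # Output columns: FL, FR, RL, RR.
    left, right = stereo[:, 0], stereo[:, 1]
    diff = (left - right) * 0.5
    return np.column_stack([left, right, diff, -diff])

# Sanity check: perfectly centred (mono) content produces silent rears,
# which is the intended behaviour - only ambience goes behind you.
mono = np.full((100, 1), 0.5)
stereo = np.repeat(mono, 2, axis=1)
quad = upmix_4_0(stereo)
assert np.allclose(quad[:, 2:], 0)
```

A spectral approach (as in CentreCutGui or SpecScript) does the equivalent per frequency bin, which avoids the phasey, one-size-fits-all character of a static matrix.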
Remixing
For remixing, I use Adobe Audition as part of a Creative Cloud subscription. Out of the box, Audition can only do 5.1 multichannel - my workaround is to create a 'main' and a 'top' bus and direct the tracks to one or the other - or sometimes a bit in each. I have template files set up for all the stems I usually create. For drums, I usually place stereo kick in front L+R, stereo snare and hihat slightly to the left, with hihat mixed approx. 75% to 'main' and 25% to 'top'. Cymbals are upmixed to 4.0, but with L+R to 'main' and the upmixed rears placed to top L+R. For toms I use the stereo stem but spread wide from back right to front left - mixed 50%-50% to 'main' and 'top'.
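The per-stem bus sends above boil down to a simple gain split, which can be sketched like this (a plain linear split is assumed here; an equal-power split would be another reasonable choice):

```python
import numpy as np

def split_to_buses(stem: np.ndarray, main_frac: float = 0.75):
    # Split a stem between the 'main' and 'top' buses by a linear gain
    # ratio, e.g. 75%/25% for the hihat as described above.
    return stem * main_frac, stem * (1.0 - main_frac)

# The two sends always sum back to the original stem, so nothing is
# lost when both buses play together.
hihat = np.ones((4, 2))
main_send, top_send = split_to_buses(hihat, 0.75)
assert np.allclose(main_send + top_send, hihat)
```

With a linear split the two sends sum exactly back to the source, which makes it easy to check the combined 5.1 balance later.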
I then mixdown the two buses separately and recombine them to a 5.1.4 file by copy/pasting to the relevant channels. This way, I can live monitor one or the other bus, or listen to both simultaneously as a 5.1 mixdown to check the overall balance. I play back the 'final' 5.1.4 wav file in Foobar to get the full immersive mix. Once I am happy with everything, I run the 5.1.4 file through the 'match loudness' tool, setting the target loudness to match the original stereo source. It's a bit Heath Robinson, but the results are good - although I might give Reaper another go using Zeerround's templates.
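The recombine step is just stacking channels side by side into one multichannel file. A minimal sketch, assuming a 6-channel 'main' mixdown and a 4-channel 'top' mixdown (the channel order shown is an assumption - check what your player expects before writing the file):

```python
import numpy as np

def combine_5_1_4(main_51: np.ndarray, top_40: np.ndarray) -> np.ndarray:
    # Stack the 6-channel 'main' bus and the 4-channel 'top' bus into one
    # 10-channel 5.1.4 array. Assumed order: L, R, C, LFE, Ls, Rs, then
    # the four height channels.
    n = min(len(main_51), len(top_40))
    return np.hstack([main_51[:n], top_40[:n]])

# Demo with dummy buses of unequal length - output trims to the shorter.
main_bus = np.zeros((10, 6))
top_bus = np.ones((8, 4))
combined = combine_5_1_4(main_bus, top_bus)
assert combined.shape == (8, 10)
```

With a library such as soundfile installed, the result can then be written out as a multichannel WAV, e.g. `sf.write("final_5_1_4.wav", combined, 48000)` (file name illustrative).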
Below is a recent example - Get Down Make Love by Queen from News of The World. From a stereo source, split to the following stems: bass, cymbals, hihat, kick, snare, toms, guitar, piano, sfx, vocals_lead and vocals_backing:
![1739647346485.png 1739647346485.png](https://cdn2.imagearchive.com/quadraphonicquad/data/attach/109/109175-1739647346485.png)
I know some people upmix first and then separate stems - I have not tried this and would like to hear the pros/cons of this method.