What's the future of "Transform to Rendered Audio"?
It has so many issues that alter the reality of the mix that I wonder if I should use it at all:
- Sidechains are lost (ouch!)
- Faders reset to 0, so there's no point in adjusting the volume of a rendered track
- No visual indication that a track is rendered audio (error prone!)
- Duplicating a rendered track loses the realtime information (you can't go back to realtime audio; it's almost like printing to another track via Song -> Stems)
- There are probably some Mix FX impacts too that I just don't know about, but I'm not using passthrough so I guess I'm OK here... If it were in channel mode, though...?
- Any mixdown uses the rendered audio and doesn't reprocess the sound... So to get the real sound (because of sidechains) I have to turn everything back to realtime audio before the mixdown... But if I do, my CPU runs at 100%+ until I can click the "Mixdown" button! Maybe a button to disable the audio engine would help here...
It feels to me as if the intent behind rendered audio isn't clear. I sort of disagree with the manual:
> ...it is sometimes necessary to render an Audio Track so that the Insert effects and automation moves become a part of the audio waveform on the Track. You might do this for creative purposes or simply to enable you to remove the Insert effects in order to save CPU power.
For creative purposes? Isn't that what the Song -> Stems menu should be used for? Or a new "Bounce" menu? For combining tracks or printing effects/automation into a single waveform, it sounds to me as if the Stems menu could be improved a bit and offer options such as "include sidechains", "bypass Mix FX", "bypass bus effects", "bypass master effects", "include FX sends", etc. This way, people would feel empowered to creatively print tracks the way they want, with consistent and expected results. Being transparent and agile here is much more powerful than offering some printing magic, IMO.
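To make that concrete, here's roughly the shape of the export options I'm imagining (all names made up, none of this exists today):

```python
# Purely hypothetical sketch of the stems-export options described above
from dataclasses import dataclass

@dataclass
class StemExportOptions:
    include_sidechains: bool = True      # keep cross-track sidechain triggers active
    include_fx_sends: bool = True        # print send effects into the stem
    bypass_mix_fx: bool = False          # skip Mix FX / console emulation
    bypass_bus_effects: bool = False     # skip effects on intermediate buses
    bypass_master_effects: bool = False  # skip the master chain
```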
If the whole creative process is moved to the Stems options, then Transform to Rendered Audio should be seen exclusively as a CPU-saving option. And if that's all it's supposed to do, it will probably do its job better. Of course, magic has its limits: if track A sidechains to track B's compressor, and I render track B, and then change the dynamics on track A, I don't expect track B to react to that unless track B is rendered again!
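That expectation is really just a dependency rule. Here's a rough Python sketch of it (made-up names, nothing from any real API):

```python
# Minimal model: rendering caches a track; editing a sidechain source
# marks its rendered listeners as stale (out of date).

class Track:
    def __init__(self, name):
        self.name = name
        self.rendered = False        # True once the track is "cached" as audio
        self.stale = False           # cached audio no longer matches reality
        self.cached_audio = None     # the printed waveform, when rendered
        self.sidechain_sources = []  # tracks feeding this track's sidechains

def mark_edited(edited, all_tracks):
    """Editing a track invalidates rendered tracks that listen to it."""
    for t in all_tracks:
        if t.rendered and edited in t.sidechain_sources:
            t.stale = True  # must re-render t to hear the change

# Example: A feeds B's compressor sidechain; B is rendered.
a, b = Track("A"), Track("B")
b.rendered = True
b.sidechain_sources.append(a)
mark_edited(a, [a, b])
assert b.stale  # B won't reflect A's new dynamics until re-rendered
```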
In an ideal world, here's how I picture the workflow when a project gets too big:
- You need to add this exciter but it makes your CPU go boom!
- So you find the tracks that you don't expect to modify soon and Transform them to Rendered Audio (frankly, I would call that "Cache Track" or something, because rendering to audio is not the goal here; we just want to save CPU)
- When CPU is available again, you continue working on the realtime tracks. When done, transform them too! Because why not! (At the moment there are tons of reasons why not...)
- After a few hours, you know that track interaction is no longer true to reality, since you have so many rendered/outdated tracks. So you hit the-super-convenient-button-that-doesn't-exist-yet: Update Rendered Audio Tracks :) (see the sketch after this list). Then you grab a coffee, and 30 minutes later you can resume your work knowing that all rendered audio was reprocessed correctly, with all the cross-track interactions.
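Continuing the Track sketch from above, that button could boil down to re-rendering the stale tracks in dependency order, so a track's sidechain sources are fresh before the track itself is printed:

```python
# Hypothetical "Update Rendered Audio Tracks" pass (made-up names again):
def update_rendered_tracks(tracks):
    done = set()

    def refresh(track):
        if track in done:  # also guards against sidechain cycles
            return
        done.add(track)
        for src in track.sidechain_sources:
            refresh(src)   # sources first, so their audio is up to date
        if track.rendered and track.stale:
            track.cached_audio = render(track)  # re-print inserts/automation
            track.stale = False

    for t in tracks:
        refresh(t)

def render(track):
    print(f"re-rendering {track.name}...")  # stand-in for the real render
    return f"<{track.name} waveform>"
```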
I also think you would save people TONS of time by offering the option to mix down from realtime audio even when rendered audio is in there. Typically I want to hit that "Update Mastering File" button and work from the Project view to compare the sound with the rest of an album. But to do so properly, I have to transform everything back to realtime audio, cringe under the CPU load, update the mastering file, then render everything back to rendered audio while the CPU is dying... So many clicks and manipulations, and about an hour lost :(
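Internally this could be as simple as a flag on the mixdown that ignores the caches. Using the same little Track model as above (again, just a sketch, not anyone's actual implementation):

```python
# Hypothetical mixdown that can bypass the rendered caches on demand:
def mixdown(tracks, from_realtime=False):
    master = []
    for t in tracks:
        if from_realtime or not t.rendered or t.stale:
            master.append(process_realtime(t))  # full insert/automation chain
        else:
            master.append(t.cached_audio)       # reuse the rendered waveform
    return master

def process_realtime(track):
    return f"<{track.name} processed in realtime>"  # stand-in for the DSP
```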
Some of this could probably be done with macros, in a limited way?