I understand this is rather a technical question, and it might need someone from the dev team to answer, but after many years I am finally clear on how S1 uses my CPU power: cores are assigned on a per-channel basis, and everything on a given channel is then processed serially on that one core.
I am curious what logic S1 uses to pick that core. Is it only something simple like "assign to core x if its CPU usage is below a certain threshold," or are more complex factors involved?
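To make the question concrete, here is a minimal sketch of the kind of "least-loaded core" heuristic I would have expected. This is purely hypothetical, not Studio One's actual code; `core_loads` and `pick_core` are names I made up for illustration:

```python
# Hypothetical sketch -- NOT Studio One's real scheduler.
# core_loads is an assumed snapshot of per-core CPU usage (0.0 to 1.0).

def pick_core(core_loads, threshold=0.8):
    """Return the index of the least-loaded core, or None if every
    core is already above the headroom threshold."""
    best = min(range(len(core_loads)), key=lambda i: core_loads[i])
    if core_loads[best] >= threshold:
        return None  # no core has usable headroom
    return best

# A 6-core machine: one core at 30%, the rest near 10%.
loads = [0.30, 0.10, 0.08, 0.12, 0.09, 0.11]
print(pick_core(loads))  # -> 2 (the least-loaded core), not core 0 at 30%
```

Under a scheme like this, a new bus would never land on the 30%-loaded core while quieter cores exist, which is why the behavior I describe below surprises me.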
The reason I ask: I am demoing some new guitar plugins (rather CPU-hungry ones, in fact). When I create a new bus and place an instance of the plugin on it, Studio One picks a core that was already loaded at 30% for the task, even though several of my six cores (hex-core Intel i7-6800K) were sitting at 10% load or less. And of course, when any single core breaks 100% usage, we get audio artifacts.
Thus it seems S1 is assigning this bus to an already-busy core, limiting my CPU headroom, instead of choosing one that is sitting at a lower load?
Any explanations or insight is appreciated, thanks in advance!
-TD