And if you’re getting the feeling of “what does this have to do with making music?”, you’d be right to keep ignoring the technicalities, because it truly is a non-issue in modern DAWs.
Many people (myself included) actually use analogue summing to reintroduce some of the inconsistencies that have been perfected away in the digital domain.
Yeah, so we’re not really disagreeing I guess. I’m not saying that in practice using two DAWs will produce the same results, because of course the built-in plugins are different, things are laid out differently, etc.
I was just quibbling with the idea that there are some unavoidable artifacts to exporting or summing audio in e.g. Ableton or whatever that are just baked in. At the end of the day, what you do to the audio is what you get. Any of them can produce good results, and I don’t think that just running audio through a particular DAW colours the audio. Imo
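To make that concrete, here’s a minimal sketch of what digital summing actually is (the sample values are made up for illustration): mixing is just adding samples position by position, so any two engines doing the same float math on the same samples get the same result.

```python
# Minimal sketch: digital summing is sample-wise addition.
# Sample values here are invented; chosen to be exact in binary floats.

track_a = [0.5, -0.25, 0.125, 0.0]
track_b = [0.25, 0.5, -0.5, 0.125]

def sum_tracks(*tracks):
    """Mix tracks by adding their samples, position by position."""
    return [sum(samples) for samples in zip(*tracks)]

mix = sum_tracks(track_a, track_b)
print(mix)  # [0.75, 0.25, -0.375, 0.125]
```

Same inputs, same arithmetic, same output every time, which is the “no baked-in colour” point.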
i swear if i bounce out stems in logic and play them back in ableton
something changes
but it might be logic
but imagine the transients of three stem tracks sit on top of each other in logic because the amplitude-to-silence point is at 0.whatever seconds
and you then render these three stems in ableton and the silent cut-off point moves slightly, because it interprets that differently
i think it’s something like that
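If it helps to picture it, here’s a toy sketch of that silence cut-off idea (the thresholds and sample values are invented, not what Logic or Ableton actually do): two programs that disagree on what counts as “silent” will start the same stem at different samples, so stacked transients no longer line up.

```python
# Toy sketch: the same stem, trimmed by two hypothetical programs
# that use different silence thresholds. Values are invented.

stem = [0.0, 0.0005, 0.002, 0.8, 0.4, 0.1]  # quiet ramp, then a transient

def first_audible(samples, threshold):
    """Index of the first sample the program considers non-silent."""
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            return i
    return len(samples)

# Hypothetical: program A calls anything under 0.001 silence,
# program B uses 0.0001, so B keeps one more "silent" sample.
print(first_audible(stem, 0.001))   # 2
print(first_audible(stem, 0.0001))  # 1
```

One sample of offset per stem is tiny, but stack three stems and the transients smear slightly, which would read as “colouring”.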
another example: how close you can zoom into a waveform in soundforge is different from how you used to be able to zoom into a waveform in normal daws
it could somehow be interpreting the waveform at a finer resolution than the others
if the same goes for silence or amplitude
then you get another kind of colouring just by rendering
Well, metering sucked in most trackers afaik. A lot of them were probably running at low bit depth too. I think most of the major DAWs were running at 16-bit until the early 00s.
should be talking about how to fuck up sound not how to keep it clean
BRUT tings ----------------------
I use this weird delay arp thing (obelisk) because it has poor delay compensation but sounds amazing; this means some of it goes out of time and I have to cut it up and place the FX in time
but it makes it sound cool and organic
bit like taking quantize off
**on drums** instead of compression
i zoom into a file and look at the waveform and the transient, then decide if i want to emphasize the tail portion or fuck around with the attack, and then just gain that part of the file inside the editor
a bit like using an enveloper / transient designer
but with way more control / finer-grained
if a kick and a bass sound interfere
you can go into the file whose attack you like least, and just lower that part of the file
it sounds way better than sidechain
and it’s more precise than a manual fade
so i would rec those sorts of approaches
because you quickly learn to listen instead of fucking around too much with questionable plugins
you can always make a copy before you make any of these destructive file level adjustments
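For anyone who wants the mechanics spelled out, here’s a rough sketch of the file-level gain move (the region bounds and gain amount are made up): you scale just the slice of samples you want quieter, leave the rest alone, and keep the original untouched.

```python
# Sketch of file-level gain riding: scale only one region of the
# samples (e.g. soften a bass attack where it clashes with the kick).
# Region bounds, gain and sample values are invented for illustration.

def gain_region(samples, start, end, gain):
    """Return a copy with samples[start:end] scaled by `gain`."""
    out = list(samples)  # work on a copy, like keeping a backup file
    for i in range(start, end):
        out[i] *= gain
    return out

bass = [0.1, 0.9, 0.8, 0.3, 0.2, 0.1]    # big attack around index 1-2
ducked = gain_region(bass, 1, 3, 0.5)    # halve just the attack
print(ducked)  # [0.1, 0.45, 0.4, 0.3, 0.2, 0.1]
```

That’s the whole idea behind “better than sidechain”: no detector, no timing constants, you just decide per-sample what gets turned down.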
it also matters because of how fast dance music is
i find a lot of compression and shaping tools still aren’t fast enough for high BPMs
all of this manual gaining at file level is also good for reverbed stuff
you can change the tails on reverbed sounds this way so they sound more natural than if you set the reverb to a shorter release, because you allow it a long reverberated tail/release setting but then cut it off
it’s a way of saying that sound editing at wave level is more precise than VSTs that handle dynamics
and it’s different from fading in the audio lane too, because it works in way smaller increments
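A rough sketch of the tail trick (lengths and sample values invented): keep the long reverb tail the plugin gave you, then apply a short linear fade at file level so the cut-off doesn’t click.

```python
# Sketch of cutting a long reverb tail at file level: rather than
# shortening the reverb's release, keep the full tail and fade the
# last few samples to zero. All values are invented for illustration.

def fade_out(samples, fade_len):
    """Linearly fade the last `fade_len` samples down to zero."""
    out = list(samples)
    n = len(out)
    for i in range(fade_len):
        idx = n - fade_len + i
        out[idx] *= 1.0 - (i + 1) / fade_len
    return out

tail = [0.8, 0.6, 0.4, 0.2]
print(fade_out(tail, 2))  # [0.8, 0.6, 0.2, 0.0]
```

In a real editor the fade would span milliseconds of samples, not two, but the principle is the same: the reverb stays dense right up to the cut.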
I’m using a clone of Fasttracker 2 and can confirm this, no real way of mixing in it. You can set the levels of each instrument/sample, but there are no faders in the main view.