Production Thoughts

Except I’m a visual learner. I need the graphic EQ in order to understand what I’m doing.

There are a few reasons for using it. Right now it’s very trendy (peaked a few years back) just as a sound unto itself - the pumping effect Hubb mentioned.

It’s also often a byproduct of the Loudness Wars and a way to physically make overloud mix elements not totally destroy each other in a compromised headroom situation.

But when it’s used creatively it can create cool movement effects like Harkat talks about.

A pal who’s a Top 40 type producer sent his hat signal out to sidechain a string section and tweaked the attack and such so that a formerly flat chord bed suddenly sounded like it had all kinds of dynamics. Pretty cool.

https://www.izotope.com/en/blog/music-production/11-creative-sidechain-compression-techniques.html
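If it helps to see the pumping thing in code terms, here's a rough sketch (assuming numpy; the function names are made up for illustration, this isn't any particular plugin): an envelope follower on a trigger signal like a kick or hat drives gain reduction on another channel, which is all sidechain ducking really is.

```python
import numpy as np

def envelope_follower(trigger, sr, attack_ms=5.0, release_ms=150.0):
    """One-pole attack/release follower on the rectified trigger signal."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(trigger)
    level = 0.0
    for i, x in enumerate(np.abs(trigger)):
        coeff = atk if x > level else rel   # fast on the way up, slow on the way down
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env

def sidechain_duck(signal, trigger, sr, depth=0.8):
    """Duck `signal` by up to `depth` whenever `trigger` is loud (the 'pump')."""
    env = envelope_follower(trigger, sr)
    env = env / (env.max() + 1e-12)   # normalise to 0..1
    gain = 1.0 - depth * env          # louder trigger -> more gain reduction
    return signal * gain

sr = 44100
t = np.arange(sr * 2) / sr
pad = 0.5 * np.sin(2 * np.pi * 220 * t)                              # flat, sustained chord bed
hat = np.random.randn(len(t)) * (np.sin(2 * np.pi * 2 * t) > 0.95)   # crude stand-in for hat hits
pumped = sidechain_duck(pad, hat, sr)
print(pumped.shape, float(pumped.max()))
```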

3 Likes

You know yourself best, but trust me man: I thought that graphic EQ shit was so important for years, but what you actually learn by focusing on it is a kind of understanding that obscures the useful one that actually works. Things can look very right and actually sound very wrong

The transients and envelope of your sounds, for example, are something looking at the EQ will give you zero sense of, but they're so, so important
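A quick way to convince yourself of that, assuming numpy: a sound and its time-reversed copy have exactly the same magnitude spectrum, so an EQ/analyzer view can't tell a sharp pluck from a slow swell, even though they sound completely different.

```python
import numpy as np

sr = 44100
t = np.arange(int(sr * 0.5)) / sr
pluck = np.sin(2 * np.pi * 440 * t) * np.exp(-8 * t)   # sharp attack, long decay
swell = pluck[::-1]                                     # slow swell, abrupt stop

same_spectrum = np.allclose(np.abs(np.fft.rfft(pluck)),
                            np.abs(np.fft.rfft(swell)))
print("identical magnitude spectra:", same_spectrum)    # True: the analyzer sees no difference
```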

1 BigUp

That's one way to do it, but a better and easier way is to use a channel strip as a send/return.

Basically the idea behind it is that you’d make a new channel strip in your DAW, put the fx you want on that channel strip.

Then you’d route audio from other channels that you want effected into that empty one

this video is for Ableton but the idea applies to all mixers

I found this one specifically for Reaper, which I can't vouch for since I don't use that DAW myself, but hopefully it helps ya out
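For what it's worth, here's the send/return idea written out as a rough numpy sketch (cheap_reverb is just a stand-in for whatever plugin sits on the return channel): every channel goes dry to the master and also sends some amount of itself to one shared FX bus, and the single effect instance on that bus gets mixed back in. The per-channel send levels are the only thing that differs per track.

```python
import numpy as np

def cheap_reverb(x, sr, delay_ms=80, feedback=0.4, taps=4):
    """Stand-in for whatever plugin lives on the return channel (not a real reverb)."""
    d = int(sr * delay_ms / 1000)
    out = x.copy()
    for i in range(1, taps + 1):
        out[i * d:] += x[:-i * d] * (feedback ** i)
    return out

def mix_with_send(channels, send_levels, sr):
    dry = sum(channels)                                               # every channel straight to the master
    fx_bus = sum(lvl * ch for lvl, ch in zip(send_levels, channels))  # ...and also to the send
    wet = cheap_reverb(fx_bus, sr)                                    # ONE shared effect instance on the return
    return dry + wet

sr = 44100
t = np.arange(sr) / sr
kick  = np.sin(2 * np.pi * 60 * t) * np.exp(-20 * t)
snare = np.random.randn(sr) * np.exp(-30 * t) * 0.3
master = mix_with_send([kick, snare], send_levels=[0.1, 0.5], sr=sr)  # different send amounts per channel
print(master.shape)
```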

3 Likes

That Reaper vid series is top notch.

1 BigUp

Damn, I thought I was being a doonce, but my version of Maschine doesn't allow for mono inputs on the channels… So I always get the return from my Dynacord, which is mono, on one end of the stereo field :badteeth: Apparently people just use an external plugin to make the signal mono.
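That "external plugin to make the signal mono" step is basically just a utility that averages left and right; a minimal sketch of the idea, assuming numpy and a (samples, 2) stereo buffer:

```python
import numpy as np

def to_mono(stereo):
    """Average L and R and write the result to both channels (centred, mono-ised stereo)."""
    mono = stereo.mean(axis=1, keepdims=True)
    return np.repeat(mono, 2, axis=1)

# pretend this is the hard-panned return: signal on the left, silence on the right
hard_left = np.zeros((1024, 2))
hard_left[:, 0] = np.random.randn(1024)

centred = to_mono(hard_left)
print(centred.shape)   # (1024, 2), identical L and R at half the original level
```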

1 BigUp

Hardware is more direct and more fun they said

Several years later I’m just now accepting that I have a cable problem

5 Likes

Thanks for the Reaper vids. Slowly getting my head round the whole send/buss etc shit.

Love that guys voice. He sounds like a Brooklyn mafia enforcer.

lol that guy’s accent was killing me when i checked that video for a bit

this guy’s accent is something else, but the content is gut

healthy vid to watch imo to just learn a bit more

Ok. Question time.

  1. If I route 4 drum tracks to a reverb send with XYZ parameters set up, can I only control the amount of reverb for each track going into the send? Or can I still adjust the individual XYZ reverb parameters for each track within the send?
    So for eg. if one of the drum tracks wants some modulation on the reverb but the other 3 don't, would that require its own dedicated reverb rather than it going to the send?

  2. If I route a snare (for eg.) to a reverb send, and to a delay send, and then route the delay through the reverb send, will that add reverb to the original snare or only to the delay elements of it?

  1. no, if you adjust the reverb plugin you will be changing parameters for everything being sent through it, so a track that wants its own reverb settings would need its own dedicated reverb (see answer 2 for ideas about signal flowing like water)

  2. depending on your routing, it's basically linear: think of water flow or something, opening and closing valves. You are just routing the audio to different places before it hits the master fader, so the reverb only processes whatever you actually route into it. In your example only the delay return gets reverb, unless you also send the dry snare into the reverb send (see the sketch below)
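A rough numpy sketch of that valve idea for question 2 (delay_fx and reverb_fx here are just toy stand-ins, not real plugins): the reverb only processes what gets routed into it, so if only the delay return is sent to the reverb, the dry snare stays dry.

```python
import numpy as np

def delay_fx(x, sr, ms=250, fb=0.5, repeats=3):
    """Toy delay return: echoes only, no dry signal (it's a send return)."""
    d = int(sr * ms / 1000)
    out = np.zeros_like(x)
    for i in range(1, repeats + 1):
        out[i * d:] += x[:-i * d] * (fb ** i)
    return out

def reverb_fx(x, sr, ms=60, fb=0.5, taps=5):
    """Toy 'reverb' return: a few short echoes of whatever is sent in."""
    d = int(sr * ms / 1000)
    out = np.zeros_like(x)
    for i in range(1, taps + 1):
        out[i * d:] += x[:-i * d] * (fb ** i)
    return out

sr = 44100
t = np.arange(sr * 2) / sr
snare = np.random.randn(len(t)) * np.exp(-40 * t)

delay_return  = delay_fx(snare * 1.0, sr)           # snare -> delay send (valve fully open)
reverb_return = reverb_fx(delay_return * 0.7, sr)   # delay return -> reverb send

# the dry snare gets no reverb unless you also open that valve, e.g.:
# reverb_return += reverb_fx(snare * 0.3, sr)

master = snare + delay_return + reverb_return       # everything sums before the master fader
print(master.shape)
```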

sends are kind of a strange concept and for some reason not often talked about compared to something like compression, but i cannot stress enough how important it is to get comfy with sends (bussing)

for example, bussing to one reverb unit or compressor, instead of using a different one on each track, will really help your tunes start to sound more holistic

beyond just effects, you can send multiple channels of sound to one channel strip and control the levels for all of them with one fader for example. Sounds a bit trivial at first, but this kind of stuff makes mixing down complex projects possible.
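That one-fader point as a toy sketch (again just numpy, nothing DAW-specific): route the whole drum group through a single bus gain instead of riding each channel separately.

```python
import numpy as np

def drum_bus(channels, bus_gain):
    """Sum a group of channels and scale them with one 'fader'."""
    return bus_gain * sum(channels)

sr = 44100
t = np.arange(sr) / sr
kick  = np.sin(2 * np.pi * 55 * t) * np.exp(-18 * t)
hat   = np.random.randn(sr) * np.exp(-60 * t) * 0.2
snare = np.random.randn(sr) * np.exp(-35 * t) * 0.4

drums = drum_bus([kick, hat, snare], bus_gain=0.8)   # turn the whole kit down with one move
print(float(drums.max()))
```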

and obviously, you aren't stuck with only having 1 reverb/fx send; you can do multiple/many and send varying amounts of your channels through them throughout your tune. Very effective with some automation. @harkat touched on it very nicely as well when he mentioned mixing tricks like keeping a new element loud in the mix for a few bars and then reducing it, so your ear follows it but it's not taking up the headroom anymore.

1 BigUp

this is a heavy duty postscript here, but

P.S. almost all of the tech we use was developed piece by piece to mix down and record music that is really quite different from a typical dance music track. The ideas are very similar, but the usage can be very, very different

1 BigUp

Damn these learning curves are steep!!

I spent years mixing tracks with 50-100 separate channels and just ended up with mud, with no concept of bussing.

Bussing techniques might seem a bit complex for ya at the moment, but once you have access to that problem-solving method you're gonna be in heaven. It opens a lot of new avenues of exploration and will really help your sanity as your projects get more complex :slight_smile:

2 Likes

Yeah those channel counts are creeping…
More and more each time haha

It would be cool if there was a plugin that replicated the sound of bass rattling a trunk rolling thru the hood.

:thinking:

With like separate mix controls for the amount of engine noise, delayed attack time, jankiness of the car, etc…

3 Likes

lowrider

1 BigUp

Found some old pics of the studio. This was using aux sends.

DSC01172

Sending them to some of these.

DSC01173

1 BigUp

copied a post from doa
(feeding cone)

So hearing loss is a matter of sensitivity rather than strictly frequency. Low volumes and higher frequencies require greater sensitivity, but with less sensitivity you’ll still be able to hear high frequencies if they are at a loud enough amplitude. Just about everyone (unless your hearing damage is brain-side) will be able to hear 13kHz if it’s reproduced loudly enough, which is why you might be able to hear the whine of a large old CRT even if your hearing is bad, because that whine is stupid loud from some sets.

This is why live sound engineers can still do their jobs without hearing protection. I’ve met some incredible engineers who are essentially deaf at conversational levels, yet they can still dial-in a great sounding show with balanced, crisp highs.

There's also another aspect: your brain constantly acclimates to what the senses send to it. If your every-day experience sends your brain nothing above, say, 10kHz, your brain will stretch out the spectrum so that 10kHz is subjectively experienced the way a healthy person experiences higher frequencies. As far as your brain is concerned, 10kHz is the high end.

When people get hearing restored with surgery, and their brain is suddenly getting all this high-frequency information overnight, they have to go through an uncomfortable/nauseating acclimation experience where they’re tone-deaf and everyone sounds like a cartoon chipmunk. If your brain never gets those higher frequencies, except for occasionally when they are loud enough, you can get a minor version of this where hearing those frequencies will be perceived as freakishly high and crisp, even if they might not even really be that high in the spectrum.

4 Likes
