Do you want to know more about audio mixing? How about sound design? Great! I recently had a chat with creative sound designer Dallas Taylor about the basics of audio mixing and sound design. Watch the full interview below to see Dallas pull back the curtain on why it’s important to send a video’s audio to a professional mixer and exactly what happens when they sweeten a mix.
Transcription
Chris Salters 0:00
Hey guys, it’s Chris with Better Editor, and today I’m doing something a little bit different. I’m talking to Dallas Taylor from Defacto Sound, host of the podcast Twenty Thousand Hertz, and he’s going to give us an introduction to some basics of audio mixing. Now, I have a feeling that Dallas is about to drop a ton of knowledge bombs on us, so get your pens and paper ready, and let’s see what he has to say.
Chris Salters 0:24
Hey, Dallas, welcome to Better Editor. Thank you for joining me. Do you mind telling my audience a little bit about who you are and why we should listen to you?
Dallas Taylor 0:32
Yeah, so I’m Dallas Taylor, and I’m a sound designer. I lead a company that I own, Defacto Sound, where I’m the creative director. Nowadays, we do lots and lots of trailers for Netflix and HBO and a lot of TV networks, as well as a lot of advertising agency type work like car spots and sneaker commercials, really all kinds of stuff, from things that need a ton of sound design all the way to things that just need to sound better than what the editor can do.
Chris Salters 1:06
Yeah, that was actually one of the big reasons I wanted to have you on today, so thank you for saying that. With everything else that you’ve done, you’ve probably done a lot of audio mixes in your life. So what’s one of the coolest things that you’ve been able to mix on?
Dallas Taylor 1:22
Oh, it’s so hard to pick, because it’s just a sliding scale all the time. I would say that some of my first big opportunities that really put things in another place were doing a lot of game trailers about a decade ago. That’s really where a lot of the work pivoted. Back then we were doing tons of Bethesda Softworks trailers for games like Fallout and Skyrim, just tons of Bethesda properties. Those were a lot of fun, just because I’m a huge fan of games in general. But I’d say that the things we really enjoy the most come down to a balance between really cool creative and really nice people, and I’d take really nice people over amazing creative any day. When we get both of those things, it’s just a bonus. So over the past five or six years we’ve been able to curate a lot of the people that we work with, to really hone in on personalities and kind people, because we always get the best out of all sides if we’re kind to each other.
Chris Salters 2:35
I got you, and I completely agree. So another question I have: in the audio world, there’s some nomenclature that I’m not too familiar with. Can you tell me a little bit about the difference between a sound designer and an audio engineer? Are they the same thing? Are they different? Can they be the same thing?
Dallas Taylor 2:52
Yeah, it kind of depends on what part of audio you’re in. At least on the post-production sound side, I tend to lean more toward words that line up with being a creative rather than an engineer. The terms really depend on who you ask; every audio person is going to have a different opinion on that. I think of an audio engineer as someone that’s probably doing more signal paths and cables and building out systems, because that’s very engineering, while there’s a lot of creativity to it as well. I think of a sound designer as someone contributing to the soundscape creatively in some way. That’s really the way that I define it. And then with movies and stuff, there are all kinds of roles like sound effects editors, dialogue editors, re-recording mixers, and all of those things.
Chris Salters 3:52
Yeah, and that makes a ton of sense. Thank you for that explanation. So a lot of my audience are new to video editing, and they’re excited by it. It’s a lot of fun, but they don’t know a lot of the technical aspects of finishing a video, like how, once a video is edited, they make the sound as good as it can be. That’s why I want to introduce the concept of an audio mix to them. So in your own words, can you tell us: what is an audio mix?
Dallas Taylor 4:18
Well, yeah, so when you’re mixing, you have a bunch of different elements that you want to think about. We have our music; sometimes we get music splits, which give us more ability to mix. We have ambiences, things like wind, longer things that tie scenes together. We have Foley, which are things that we have to perform, because you really can’t cut hand and feet performances from libraries and have them sound accurate. We have hard effects; those are usually pulled from libraries, and we have millions of files of stuff that we can all access really quickly. Then we have what I think a lot about as emotional effects; those are trailer-type effects, or effects that are really designed to be a gray area between music and sound design. And then we have narration, we have on-camera dialogue, et cetera. So there’s a big difference between just music playing and other elements playing with it. Because when you’re mixing, it’s like a soup: you have a bunch of ingredients, but different ingredients will bring out different highlights of other things. Corn by itself might taste great one way, but you can’t make corn the same way if you put it in a taco soup or something. You’ve got to play off of it.
So when we have dialogue and music, and we’re bringing volumes down and up and so on, all these frequencies relate to each other, kind of in real time, in a slightly different way. So it’s really about how you bring clarity. We’re always thinking about clarity of mix, making sure frequencies don’t build up on each other. So it’s really taking all of these elements and asking how we get clarity between each layer, with dialogue always being king. Gotcha. So it’s kind of like really defining what’s in that audio so that your ears can hear everything that’s in the video you’re watching, and really bringing out what’s most important at that time, right? Yeah, and really kind of drawing the listener’s ear around the screen. I think of sound design and mixing as the thing that can definitely highlight and spotlight places on the screen. If the story is taking us to one place, and we know there’s a little dog walking off over here, we might boost its little paws a bit if that’s where we feel like the story is going. But it not only serves as a spotlight for what’s on screen, it also serves everything outside of those barriers, because the screen is basically a picture frame on a wall. It doesn’t have the ability to wrap around you, but with sound we can imply things, we can bring people in. And the whole goal is really just to hyper-focus all of the attention into the center of the screen, or wherever we’re trying to draw the story.
Chris Salters 7:19
I got you, so you really use the sound to make people feel like they’re inside that rectangle that’s on the wall.
Dallas Taylor 7:25
Yeah, and really focusing on what is most important here. That’s the hardest thing about sound design, I would say, especially when you first start: getting past the tools aspect of it. I think that with anybody, like a great editor or a great cinematographer or a great artist or anything, there’s a lot of practice that goes into it, because eventually you want your tools to become transparent. And with sound, it’s the same way. You can really start to be creative once you’re not thinking about hotkeys or Pro Tools or certain plugins and stuff, when you just know how they work and you can fluidly focus on story.
Chris Salters 8:06
Yeah, and that’s one of the big pushes that I’m trying to make with Better Editor. I really want people to understand the mechanics of video editing, the button pushes and things like that, so that they can be better editors and be more creative in the videos that they make, and honestly just get to make something that’s fun to watch and fun to listen to. So you’ve already talked a little bit about this, regarding the frequencies piling up on each other, but can you talk a little more about why it’s important for video editors to send their audio out to a professional like yourself and have it professionally mixed?
Dallas Taylor 8:39
Well, I would even take a step back first and say that there is no gatekeeper to playing with sound. The best work that we do is in conjunction with an editor who is an amazing sound designer in their own right. So it’s not a case of, you just worry about this and let us worry about that. It’s a much more holistic process on the best projects in the world. So I would say you don’t need permission to start messing with sound design, you don’t need any permission to start messing around with sounds or plugins or anything. Do it, because that’s huge.
I actually don’t prefer working with editors who are really timid on that. I prefer working with editors who are like, I piled it full and did my best work, take the ball and run with it. Because then we have a baseline and we can start and go, okay, we need clarity here and whatnot. As far as knowing when to send things out: when you do bump up to some sort of agency level or TV level, there’s a lot there to really polish. And I think there are definitely limitations in the editing programs too. They don’t work super well with linear mixing, and mixing is a very linear process with a lot of things happening in real time. You have most of the plugins in there, but as far as I’m aware, and I could be completely wrong, I don’t know if you have buses, and buses that can bus to other buses.
Chris Salters 10:18
Yeah, to an extent.
Dallas Taylor 10:19
So yeah, you have those things, and maybe you have ways to put plugins on those buses and carry that all the way through, but it just gets really complicated. We work with a lot of people who are bridging the gap; say we’re working with a five-person company with two or three editors, maybe a post-production place that’s going from doing everything themselves to starting to hire out. The thing that I always argue is that you really need your editors doing what they’re best at, and that’s editing. Think of a cinematographer: some cinematographers are amazing editors, some are not, some don’t want to be doing that. So I think it’s much more about focusing on what you do best, because you can send things out, and we can do something in an hour that might take an editor eight hours to do halfway, just because we’re doing it all day every day, with ten projects a day.
So it’s important to know these tools, and it’s important to know the general spirit, of course, and I don’t want to discourage anybody from doing their own sound if they can’t afford to send it out. But I would say that when you start to get to a place where editors are worried about the next project they need to get onto, I always think it’s a better deal to send it out to a sound design and mix company, just so you’re not spinning your editors’ wheels. Get them on the next thing that they’re good at. With employees, I’m a big believer in really pinpointing the best thing that they’re doing and maximizing that.
Chris Salters 11:51
Yeah, and that makes a ton of sense. So taking a step back to the buses that you mentioned a bit ago: we editors have the ability to add submixes in the NLE so that we can group our audio that way, and typically I work with dialogue, music, and effects. That’s how I split things out when I send it to the guys that do my audio mixes. Now, there are a lot of times when editors, including myself, don’t have the budget to go out of house and have a professional mix done. When that’s the case, are there typical effects that you think we should become familiar with that would help us clean things up?
Dallas Taylor 12:27
Yeah, I can start with exactly what we do and how we prioritize, which is something you could certainly do yourself, kind of walking through our process when we first get something. The first thing we’re doing is editing: we’re making sure that every single bit of dialogue is cleanly edited. If it’s narration, usually you don’t have to put fades on everything because it’s usually so clean. You’re making a choice with narration: do we keep breaths or do we remove breaths? If it’s a faceless, voice-of-God type of person, we remove breaths. This is pretty standard; I worked at Discovery Channel for years and their policy was pretty much to remove breaths from narrators, so it’s this otherworldly voice of God. Now, if it’s a personality, if we had, I don’t know, Whoopi Goldberg narrating, we want to keep all of the nuances of that celebrity performance, so we would use the breaths as part of that. It’s really to taste: do you think the breaths work, are they part of the performance? Sometimes voice artists breathe in ways that aren’t performative, because they know the breaths are going to be cut. So that’s that.
So the first thing we’re doing, if we have narration, is just getting the edit clean, because we don’t want any pops or cut-up breaths or that type of thing. The next thing we’re doing is dialogue editing, which can get really complicated, especially if you’re dealing with any outdoor shots or really noisy indoor shots. There we’re really just trying to make sure there’s never a discernible moment where you hear an edit, so we’ll peel through every millisecond of dialogue. And we’ll keep everything really organized: we’ll keep narration on its own track top to bottom, and generally, if it’s the same microphone on someone, we’ll keep them on a single track. Now, if they bounce around into different places or locations, that’s where the session gets thicker with tracks. If another person comes in, like sit-down interviews, we’ll have sit-down person one here, sit-down person two here, all the way through. So we’re cleaning it up and making it as clean as possible.
The next thing we’re doing is probably going to the music, to make sure every music start or edit isn’t skipping beats or doing anything funky, real clean. We checkerboard music: we put one full cue on one track, and when another one comes in, we usually put it on a second track, and when a third one comes in, we put it back on the first. That checkerboarding is really helpful downstream if you’re working on anything that requires a music cue list or something, but it’s also just a nice habit for us if we want to bounce between the two. We actually do occasionally put EQ on different music tracks, because of the way music can change sonically when you pull it under dialogue, like you lose bass quickly.
Chris Salters 15:09
So when you say EQ, what do you mean?
Dallas Taylor 15:11
So, EQ, that’s frequency manipulation from 20 hertz all the way to 20,000 hertz. That’s, in general, what we can hear as humans. That’s really pushing toward the mixing phase, which we’re about to get to, so I’ll dive right into that. So usually it’s checkerboarding music, then sound effects, cleaning those up, again making sure everything is nice and clean, everything is speaking, and there are no weird edits. So now we have a clean edit. That’s what we always do first.
The next thing we’re usually doing is starting with the voice or the narration. And the two key things, the two number-one things to actually understand, are one, EQ, for sure, and two, compression, because those are 90% of most mixes, if not more. So that’s where we start, with the foundation of whatever the dialogue or narration is. We go in and, almost microscopically, put fingers on the fader top to bottom, making sure that every single thing speaks, then we do our EQ on the tracks. We might need to automate EQ if the positioning changes or whatnot. You want to be really careful with frequencies around 250 to 300 hertz, kind of in a wide band. That’s usually where, when you start to add elements up, music, sound effects, voice, you get a lot of midrange-y stuff, like you’re almost talking over yourself, and you start to get really muddy mixes. That’s the thing I hear from editors and very new sound designers the vast majority of the time: muddy mixes, because it’s kind of a hard concept to get. But really, we’re just trying to clean out two frequency ranges, usually, well, maybe three.
So one, we’re trying to roll off things like 100 hertz and below in dialogue; there are no rumbles that you need in dialogue, so we get rid of that. At 250 hertz, and this is all talking about EQ, we’re usually doing just a slight dip with a really wide band, to catch all of it and clear out that space a little bit for clarity. And then up in the 4,000 hertz range, that’s another place where you have clarity of speech, so usually we’re doing a little bit of a boost there. So if you could see our EQ, and I’m going to draw it on my screen here, it might be backwards: this is 20 hertz, and usually we’re rolling off below 100 hertz, then around 300 you’ll see this kind of three or four dB dip, and then around 4,000 you’ll see a two to four dB boost, and it’ll smooth out at the top end. That’s just to get clarity; it’s kind of the most simple thing you can do to get clarity on dialogue.
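To make that dialogue EQ curve a little more concrete, here’s a rough Python sketch of the shape Dallas describes: a roll-off below about 100 Hz, a wide dip around 300 Hz, and a gentle presence boost around 4 kHz. The exact center frequencies, widths, and the bell shape here are illustrative assumptions, not Defacto Sound’s actual settings, and this only computes a gain curve for visualization; it isn’t a working audio filter.

```python
import math

def bell(freq_hz, center_hz, gain_db, width_octaves):
    """Gaussian-shaped bell in log-frequency space (illustrative only)."""
    distance = math.log2(freq_hz / center_hz)
    return gain_db * math.exp(-(distance ** 2) / (2 * width_octaves ** 2))

def dialogue_eq_gain_db(freq_hz):
    """Approximate gain (dB) of the described dialogue EQ at a frequency."""
    gain = 0.0
    if freq_hz < 100:
        # High-pass roll-off: attenuate progressively below 100 Hz.
        gain -= 24 * math.log2(100 / freq_hz)   # roughly 24 dB per octave
    gain += bell(freq_hz, 300, -3.0, 1.0)        # wide dip to clear out mud
    gain += bell(freq_hz, 4000, 3.0, 1.0)        # presence boost for speech
    return gain

if __name__ == "__main__":
    for f in [50, 100, 300, 1000, 4000, 10000]:
        print(f"{f:>6} Hz: {dialogue_eq_gain_db(f):+.1f} dB")
```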
Then we go through every single track, finger on fader, on all the dialogue, because that’s really key for us. If something’s going to the web, we’re hitting -16 LUFS, and for broadcast, you’re usually doing -24 LUFS. That’s just an overall, top-to-bottom measurement.
Chris Salters 18:09
Gotcha.
Dallas Taylor 18:10
And it’s all kind of anchored around dialogue, for the most part. Then from there, we’re usually doing a music mix, finger on fader. You can certainly do it with keyframes, but you can’t feel it, and I like to feel it. Because when I’m mixing something like music under dialogue, I’m not thinking about the music or the dialogue; I’m watching the story and just feeling where that seems to lay.
Chris Salters 18:37
Sure. And at that moment, you’re using a physical mixer, right, and really feeling the music?
Dallas Taylor 18:42
Even if you have a one-fader thing. It’s weird, but the jump between no fader and one single fader, like a FaderPort or any of these single-fader controllers that actually have automation and will move, is the biggest jump in all of audio mixing. It’s the most quality boost that you’re going to get, because you can put one finger on it and feel the music underneath. That’s so cool. Yeah, and this is after we’ve already done... I skipped the compression part, because we are generally pushing our dialogue into a bus compressor. All compression does is, once the signal hits a certain threshold of volume, squish what’s above that threshold. If we say four to one, it means that above the threshold it’s going to squish it four to one. Or if it’s two to one, it’ll squish it halfway: where it was originally going to go way up here, with a two to one it’ll only go to right here. It’s really helpful in getting punchiness, too.
Chris Salters 19:44
So it like squeezes it almost?
Dallas Taylor 19:45
It squeezes it at a certain point. Not the whole thing, but once it hits a certain point, it starts to squeeze it. So compression is a friend; over-compression can get really taxing. There’s a delicate balance in that, but we always use a very fast attack. People have different opinions on how to do this; some sound designers and mixers have one compressor with a slow attack and one with a fast attack. I personally just like fast-attack compressors, because it’s one single point where I’m getting compression.
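To put numbers on that, here’s a minimal Python sketch of the compression math Dallas is describing: above the threshold, the overshoot is divided by the ratio (2:1 squishes it halfway, 4:1 to a quarter). The threshold and ratio values are just example settings, not anything from a real session.

```python
def compress_level(input_db, threshold_db=-18.0, ratio=4.0):
    """Output level of a simple hard-knee compressor, in dBFS."""
    if input_db <= threshold_db:
        return input_db                      # below threshold: untouched
    overshoot = input_db - threshold_db      # how far above the threshold
    return threshold_db + overshoot / ratio  # squish the overshoot by the ratio

if __name__ == "__main__":
    for level in [-30, -18, -10, -6]:
        print(f"in {level:>4} dB -> out {compress_level(level):.1f} dB")
```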
So yeah, then, stepping back, we have the dialogue and the narration all locked into a very clean, beautiful place. We’ve already done noise reduction at this point, we’ve cleaned clicks and pops in the editing phase, all that stuff to make it really clean. Then we’re feeling the music underneath it. Once we have a good mix on dialogue and music, that’s when we start to figure out how sound design is going to work in that, and kind of mix the sound design as is. And then it’s just getting really granular: adding more sound design, replacing sound design, boosting a second here, reducing there, feeling this. Then it’s really stepping back, feeling where the story’s going, and letting it breathe a little bit more. So that’s, very generally, the top to bottom of what we do on every single project.
Chris Salters 21:04
That’s amazing. There’s a lot that goes into it.
Dallas Taylor 21:06
It is a lot. But it makes a huge difference; every teeny, tiny one of 20,000 little decisions all adds up into something that’s just invisible.
Chris Salters 21:15
Absolutely. And I’ve got another question that feeds off of that a little bit. When editors start editing, no matter what program they’re in, Premiere Pro, Final Cut, DaVinci Resolve, Avid, doesn’t matter, they rely on the audio meters on the side of the screen, the ones that bounce with the dialogue and music and effects at whatever decibel level they’re listening to, and they adjust the audio according to that. Do you have any recommendations for them when that’s the tool they’re using to monitor their audio mix?
Dallas Taylor 21:46
Back in the day, a decade plus ago, that bar was something that we looked at a lot. A decade to two decades ago, it was very peak oriented; most specs were like, don’t go over -10, because of the way broadcast worked. So a lot of people just pushed it into -10 as hard as possible. We don’t get this as much today, but if you rewind a little bit, there was a lot of frustration and argument over how loud commercials were versus programming. That was about a decade ago, and it was a huge problem. Such a big problem that, thankfully, I guess the government had figured out all its other problems at that point, because they actually passed a law that loudness in commercials had to match programming. So Congress forced commercials to push more toward an LKFS/LUFS measurement. Basically, that’s an overall measurement, so if programming sits at this overall level, then commercials should also sit at that level. So now you hear a lot more consistency. And on the flip side, even if you send something that’s too loud to a network, or even to streaming platforms and such, it will automatically re-level it. So you can hurt yourself trying to make things too loud; it can actually sound terrible.
So nowadays, it’s not so much about those meters. I don’t really look much at peaks anymore; I think we have a -2 limiter just to catch it if it’s going to go out of control. Nowadays, it’s much more about that LUFS number, usually -24 for broadcast. What it’s really doing is, as your meters bounce around over the course of 30 minutes or 30 seconds, it takes the average through the whole thing, and by the end of it, that’s the number you need to hit. So over the past decade it has become much more about an overall, global loudness than about peaks, which are a little harder to gauge. For broadcast it’s usually -24 across the board; I think internationally it’s -23. For YouTube and the web, you can do anything you want, but the standard, what we always provide, is -16. That gives you plenty of headroom, but it’s still going to be loud enough when you pop it up. It’s going to compete with everything you hear on YouTube for the most part, but it’s also not going to get so loud that you can’t punch something up if you need it for a creative reason.
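If you want to check those numbers on your own exports, here’s a rough Python sketch that measures integrated loudness and reports the gain needed to hit each target Dallas mentions (-24 LUFS US broadcast, -23 international, -16 web). It assumes the third-party pyloudnorm and soundfile packages are installed, and "final_mix.wav" is just a placeholder file name.

```python
import soundfile as sf
import pyloudnorm as pyln

TARGETS_LUFS = {"broadcast_us": -24.0, "broadcast_intl": -23.0, "web": -16.0}

data, rate = sf.read("final_mix.wav")        # float samples + sample rate
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
measured = meter.integrated_loudness(data)   # overall (integrated) loudness

for name, target in TARGETS_LUFS.items():
    gain_db = target - measured              # gain needed to hit the target
    print(f"{name}: measured {measured:.1f} LUFS, "
          f"apply {gain_db:+.1f} dB to hit {target} LUFS")
```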
Chris Salters 24:20
Yeah, I totally understand that. So we’ve covered a lot of ground with audio mixing; thank you for everything that you’ve told us so far. I know we’re short on time, but I want to ask one more question regarding video editors. Is there anything that we do that just drives you absolutely insane when we send you an audio mix? Maybe we do it the wrong way, we don’t organize the tracks, we give you the wrong file type. Is there anything that we can avoid that will keep you from having to run your fist through a wall?
Dallas Taylor 24:48
Sometimes, I would say, and I don’t know if it’s necessarily editors, but we’re going through a bit of a bout with a very close collaborator at the moment where we’re getting the wrong versions. Stuff like we’re getting an OMF from one version and then a video from another, and we’re having to fix it. So organization: once it gets to us, if it’s not organized, if we have to go digging back through dailies to find a microphone or something like that, it’s going to negate all of the actual money and time you’re spending to send it out. So really good organization. Nowadays, we don’t get a lot of unorganized things, but occasionally we do. Very clean track organization is great.
The other thing I would always encourage editors to do is sound design. Do the best you can, because it only takes things further when it comes to us. We really get an idea of what you heard in your head, because you put things in, and then we know where we can bring clarity, change things, or use what’s there. So one, just be really clean with sessions, which most everyone that we work with does. The other thing is very much just not being timid. I don’t love when we get emails from directors or editors or producers that are like, you all are the geniuses, you just do what you do, here’s what we did. On our side, it’s not black magic. It’s not some art form that you can’t possibly grasp. You can definitely do it, and you can do it at high levels. I know editors who can do a pretty decent mix in the editing app. But yeah, I would say just don’t be timid.
And then when you get something back, I would say the thing that’s always the hardest to deal with is demo love. That happens in music, and it happens in sound design too, where we’ll hear something that sounds really amateur, and somebody doesn’t realize it sounds amateur because we hear so much of this, but they’ve fallen in love with it even though it’s so muddy and so hard to comprehend. We get people who just say, make it sound exactly like what we did. That happens maybe one out of every 20 times, basically, make it sound exactly like what I did. And I’m like, well, then why don’t you just do it? Our talent level here is just unbelievable. Knowing what I know from this side, if I were on the other side, I would be much more like, here’s what we did, but here are very nuanced details of what I would like to feel here, some pre-notes or things like that. And then when you get something back, be open-minded about it, because it’s very easy to knee-jerk and panic, especially when things sound different.
The other thing too, I’d say, is that sometimes they’ll go and approve things with the client, and the client doesn’t have any idea how sound works. So they’ll approve it with the client and basically say, make it sound exactly like this. Alternatively, let’s get in on that before the client hears it, so the client is hearing something that we’ve all collaborated on. On all the highest-level projects that we work on, it’s very strongly encouraged that we’re part of the first version of what the client is going to hear. And then nine times out of ten, it just makes everything so much smoother, because we had our pass, we got to bounce back and forth with the editor and the producer and the director, we’re all really happy, and then we present something and the client is just blown away, rather than trying to imagine something that people can’t imagine. People can’t imagine a good mix or good sound design; that’s too far for someone to imagine. So you really have to present it.
Chris Salters 28:25
Sure, they’ve got to hear it for themselves. Well, hey, I know we’ve got to go, Dallas. Thank you so much for your time; you’ve dropped so much knowledge on us today, and it’s been an honor to get to meet you. Thanks, it was a lot of fun to do this. Hey, before you go, do you want to plug Twenty Thousand Hertz or anything?
Dallas Taylor 28:40
Oh, yeah, so two things. On the Defacto Sound side, go follow us on Instagram, because we post a lot of cool-sounding things, and sometimes we even do joke posts too, so that’s a lot of fun. That’s the most entertaining place on the internet for Defacto Sound specifically. Of course, we also have defactosound.com, where you can see the highlights of all the stuff that we do. So that’s fun.
But we also started a passion project five years ago, a podcast that kind of blew up in its own right. That’s called Twenty Thousand Hertz, which is all spelled out. If you don’t have a project, or don’t want to start going down that road, just go subscribe to Twenty Thousand Hertz. That show is all about the stories behind the world’s most recognizable and interesting sounds, and every single episode takes around 250 to 300 hours to make, with writers and editing and sound design and music. So it’s incredibly highly produced. That’s the other thing that we do in conjunction with Defacto.
Chris Salters 29:41
I can vouch for that. Twenty Thousand Hertz is an amazing podcast, and it’s so much fun. Like you just heard Dallas say, he and his team put a ton of effort into every single episode, and they honestly answer questions about sounds that I didn’t even know I had. I hope that you liked today’s episode with Dallas and that you got some knowledge from it. If you enjoyed it, please leave me a note in the comments and hit that subscribe button. Also, check back soon, because I’m going to put all the tips and tricks that Dallas just taught us into practice so you can use them in your own workflow. See you then!
Transcribed by https://otter.ai