
Cyberpunk-themed action RPG The Ascent -- developed by Swedish indie game studio Neon Giant -- takes players on a bullet-slinging adventure through the underbelly of planet Veles (an alien world run by corrupt corporate overlords). Here, the Sweet Justice sound team talks about designing and implementing ambiences, weapons, and enemies. Plus, they discuss their technical and creative challenges in managing and mixing the game sounds - and so much more!


Interview by Jennifer Walden, photos courtesy of Neon Giant, Sweet Justice


Looking for a futuristic version of a hack 'n' slash? Check out Neon Giant's cyberpunk "spray 'n' slay" RPG The Ascent, where the player takes on the role of an indentured servant working off their passage to planet Veles by doing maintenance jobs for a corrupt corporation called The Ascent Group. The corporation mysteriously tanks, and the player is left to figure out why as they keep crime lords and other corporations from seizing control.

Neon Giant tapped Sweet Justice to be their full-service audio team, to provide complete sound design, implementation, and runtime mixing strategies, as well as handle full cinematic post-production responsibilities. Sweet Justice is one of the most senior and talented audio teams in the industry. They've created audio for titles such as Cuphead, Demon's Souls, Ratchet & Clank: Rift Apart, SOMA, Spider-Man: Miles Morales, Returnal, Half-Life: Alyx, and many more!

Here, Audio Director Samuel Justice, Lead Sound Designer (Environments) Joe Thom, Lead Sound Designer (Combat) Stefan Rutherford, Supervising Sound Designer (Cinematics) Csaba Wagner, Sound Designer Jay Jennings, Sound Designer (UI) Barney Oram, Senior Technical Sound Designer Lee Banyard, and Senior Technical Sound Designer John Tennant talk about their goal to take Neon Giant's cyberpunk aesthetic to a whole new level in terms of sound.

They delve into design details for weapons, ambiences, enemies, and UI. They discuss creative and technical challenges they faced in terms of asset management, implementation, and mixing and what solutions helped them to achieve an energetic, diverse, and satisfying shooter game!

The Ascent | LAUNCH TRAILER

What was Neon Giant's overall direction for sound on The Ascent? How would you describe the sound of this world? And what were some key descriptors that helped your sound team to stay in the right vein?

Samuel Justice (SJ): The Ascent is Neon Giant's first title. Even though the studio was founded in 2018, they are by no means new to game development. The studio comprises 12 extraordinarily talented industry veterans, including Art Director Tor Frick, who is a world-renowned 3D artist known for his incredible style, which is present throughout The Ascent.

We were approached by Neon Giant as they were aware of our reputation in the industry for creating unique and signature soundscapes. As such, we were given complete creative freedom over the audio; it was a true collaboration where Neon Giant gave us the keys to their castle and we didn't take that responsibility lightly.

The overall audio direction for the title was (as for every title we oversee) to create the most immersive, exciting, and striking soundscape possible. The world of Veles is visually evocative, and the audio needed to be equally vivid, if not more so, to sell the immersion of what's happening beyond the screen. Along with this, the knowledge we have as a team — from working on a multitude of titles over many years — allows us to approach these problems confidently, with solutions to a lot of the aesthetic, technical, and mix issues that might appear. A quick example is understanding the airy (not noisy) type of frequency content you need in ambient beds for combat-led games with an audio aesthetic grounded in grit/reality, so that flourishing tails from combat, IR first-order reflections, and pass-by tails all sit together correctly.

We focused our efforts on exploring and creating the sound of Veles whilst using our framework of knowledge to create the aesthetic; it's both grounded and livable, yet thrilling and huge. We were able to complete the bulk of the audio work for The Ascent in the space of six months, which is no small feat for an open-world title.

We also partnered with our great friends at Wushu Studios to help us get the title over the finish line. A huge shoutout to Alan McDermott, Alex Stopher (audio programmer), and Jack Cotter (technical sound designer)!

There's so much happening in the city: groups of other indents, shops, police, ads, announcements, and more! How did you tackle this area in the game from a design standpoint?

SJ: Early on we knew how dense the game was going to be, as the art style was beginning to come online. We did a lot of initial concepts, many of which made it into the final game. However, we held off doing too much work on this until the world was really fleshed out; that way, we could accurately see the interplay between areas (markets vs. clubs, districts vs. shanty towns, perhaps there's an apartment block opposite a music store and the alien residents are shouting at the music store to turn it down, etc.).

We then spent a good month spotting the game whilst playing the critical path — going through each town, hub, and area and making wish lists of what we'd like to hear in each of these areas.

We then had a good idea of how much work needed to go into the title to make it sound as rich as we wanted it to and we began to conceptualize and design some of these moments; there's so much detail in the world that we are so proud of.

For instance, Jay Jennings did an amazing alien immigration booth just outside of the cosmodrome where you hear the different species talking and being called up to show papers. Barney and Jordan did amazing work on the in-world advertisements, focusing not on the sync or specifics of the advert, but instead working in unique broad-stroke tones for each one. When you hear these across the city, it adds so much texture, rather than something that might feel too abstract if it were directly synced and you couldn't see what was going on in the moment. We spent a lot of time telling micro-stories in the world through sound, like aliens playing gambling games in a back alley, casino brawls just slightly off-screen, disgruntled security workers at cosmodromes, etc.

We also have to give a shoutout to Sound Particles for making our lives much easier in creating group alien walla for The Ascent. We couldn't record a normal group like you would for most games or films, because… aliens!

Instead, Sound Particles allowed us to take a whole slew of vocalised alien designs we had made and simply create these fantastic group loops that we could use, and scale the density of the crowds with the simple controls it provides.

We then took these loops and dirtied them up a bit — worldizing them, adding random scoop/sweep EQ's, sprinkling some first reflections for bouncing off of walls/buildings, etc. Then we could drop them into the game and they sat perfectly.

Jay Jennings (JJ): Because the world is so dense, every designed flavor had to have an "earworm" to catch the interest of the player as they passed through each district. There's the golden Buddha outside of a casino that lures people in with promises of riches and goodwill, or the catchy tune of a slot machine inside the casino as it hits a jackpot offscreen.

One of my favorite deep dives into the world is the capsule rooms inside the Japanese hotel. From the player's point of view, all you can see is a wall of doors, almost like lockers, but each one of those doors acts as an emitter. The direction was, "Imagine an alien traveler inside each room simply going about their day." So, if a player hangs around a capsule long enough they'll hear that, muffled through the door. One guy is on a conference call, another is singing in the shower, and yet another is watching cartoons.

The level of detail, interaction, and complexity of real-time mixing implementation in The Ascent is simply mind-boggling! Sam and his team did an absolutely amazing job bringing the world to life.

In terms of implementation, what were some of your challenges or hurdles for building up the city? What was your approach to making the city feel alive as the player moves through it?

SJ: One of the core pillars for the environmental sound was to have constant movement. The players move through the world with quite an intense speed, and we were challenged to keep the world interesting even when the moment-to-moment exploration was at a high velocity.

We coined the term 'pass-by environment' for an approach we adopted where we'd litter the world with interesting sounds beyond the screen, no matter if they matched the visual in front of the player or were in sync to anything specifically. This meant when the players ran through the world, they'd get snippets of interest passing them by, like an advertisement, a ship-by, an argument ensuing, a vendor shouting about selling goods, etc. These constant micro-stories really help lend the belief that this is a living, breathing arcology.

Neon Giant was very understanding of this as well; they knew that sound could sell the world beyond the screen far better than anything else.

One of my favourites is the dub barge in an area called Scrapland. Our own John Tennant writes fantastic deep-bass reggae and dub under the guise of Knautic. In Scrapland, floating barges collect scrap metal from old ships, so we placed some of John's music on them, along with Barney's fantastic alien walla, to give these aliens a deeper personality rather than just beat-up, rattly engine sounds (which they also have). We were constantly challenging ourselves to add depth to everything we approached in this manner.

Joe Thom (JT): In addition to the pass-by environments that Sam mentions above, we also added a selection of scripted off-screen stories that are told through sound. These vary based on the environment and progression level of the game. Upon entering certain areas, the player will hear a gunfight and screams in the distance, or glasses smashing and laughter coming from the next room of a bar. The selection of these scripted events really allowed us to control the feel of an area upon entry and guide the player's emotions and expectations of what's to come.

One of the main challenges from an environmental implementation standpoint was making the civilians sound alive and animated. To achieve this we created a number of bespoke systems, one of which was the alien-speech-style babble that you hear throughout the game. For this, we used a single paragraph of nonsense dialogue that was loosely based on Icelandic. We had multiple actors who played other characters in the game record takes of this paragraph in a happy, sad, and neutral style.

We also had almost the whole audio team record the same takes.

Each take was then chopped up into around 100 to 300 small snippets and added to looping random containers in Wwise and processed to suit the various alien species. We then exposed a control setup to the base civilian blueprint in UE4 to allow the designers to select a mood and length for each line. Later we added the ability to change mood mid-line. This setup can make for some great moments when exploring the hub areas.
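
For readers who think in code, here is a minimal, hypothetical sketch of that control surface; the event and switch names, and the idea of mapping the mood onto a Wwise switch, are assumptions for illustration rather than Sweet Justice's actual Blueprint setup.

```cpp
// Hypothetical sketch of the babble control described above: a designer picks
// a mood and a line length, the mood maps to a switch value so the looping
// random container draws from the matching snippet pool, and the line is
// stopped once its length elapses. All names are illustrative only.
#include <iostream>
#include <string>

enum class BabbleMood { Happy, Sad, Neutral };

struct BabbleLine {
    BabbleMood mood;
    float      lengthSeconds;  // how long the looping container keeps babbling
};

std::string MoodToSwitchValue(BabbleMood mood) {
    switch (mood) {
        case BabbleMood::Happy: return "Happy";
        case BabbleMood::Sad:   return "Sad";
        default:                return "Neutral";
    }
}

void StartBabbleLine(const BabbleLine& line) {
    // In-game this would set a switch on the civilian's game object and post
    // the looping-random-container event; here we just log the decision.
    std::cout << "Set switch BabbleMood=" << MoodToSwitchValue(line.mood)
              << ", post Play_Civilian_Babble, stop after "
              << line.lengthSeconds << "s\n";
    // A mid-line mood change (added later, per the text) would simply set the
    // switch again while the loop is still running.
}

int main() {
    StartBabbleLine({BabbleMood::Happy, 4.5f});
    StartBabbleLine({BabbleMood::Sad,   2.0f});
}
```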

We also designed a reactive walla system that selected and modulated walla samples based on the number of civilians present within a defined area. This worked by using overlap events based upon a placed trigger volume to essentially keep a tally of nearby civilians. We could place this actor in the world and define walla samples to fade between based upon the number of present civilians.

This made it incredibly easy and quick to cover large areas with appropriate walla, and if the player fired a gun and the civilians ran away then the system would automatically update the 2D environmental walla to be appropriate.
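
A rough sketch of that tally-and-crossfade logic might look like the following; the class name, the threshold, and the way the density value is normalised are illustrative assumptions, not the shipped implementation.

```cpp
// Sketch of a reactive walla volume: overlap events increment/decrement a
// tally of civilians inside the trigger, and that tally drives a 0..1 density
// value, exactly the kind of number you would feed to an RTPC so the 2D walla
// crossfades between sparse and dense loops.
#include <algorithm>
#include <iostream>

class ReactiveWallaVolume {
public:
    void OnCivilianEnter() { ++count_; Update(); }
    void OnCivilianLeave() { count_ = std::max(0, count_ - 1); Update(); }

private:
    void Update() const {
        const int   maxForFullDensity = 20;  // illustrative threshold
        const float density =
            std::min(1.0f, static_cast<float>(count_) / maxForFullDensity);
        std::cout << "Civilians: " << count_
                  << " -> walla density RTPC = " << density << "\n";
    }
    int count_ = 0;
};

int main() {
    ReactiveWallaVolume plaza;
    for (int i = 0; i < 5; ++i) plaza.OnCivilianEnter();  // crowd gathers
    plaza.OnCivilianLeave();                              // gunfire scatters one
}
```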

Since this game is all about shooting combatants, there is a large variety of guns — from small sidearms to excessive-damage weapons like 'The Dealbreaker.' Plus, there are custom upgrades for the weapons. From an aesthetic perspective, how did you want these weapons to sound?

Stefan Rutherford (SR): I didn't want the weapons to sound too refined. A bit of dirt and grit in the design was welcome so long as it added character rather than noise. And having some slight jank in the timing was also welcome as long as the weapons weren't stumbling!

There are, of course, gun recordings used in the designs (especially for gun tails, etc.) but I really wanted them to sound unique. Each weapon needed to sound punchy and satisfying as a bare minimum. The real goal was to get the guns sounding different from just punchy/processed gun recordings.

The original sound I made for the Dread was lovely and weighty and… boring. So I re-designed it by taking the punchy element and clamping it HARD with a limiter to bring out all sorts of grit (this was one of those times where I just played around with sounds and let the accidents happen rather than going for something specific). I then took that processed sound and worked with it to get it punchy and satisfying again.

Another example was the Rocket Launcher. I used some pretty harsh mechanical elements before the punch of the sound and then used tremolo and other effects to draw out the tail. The actual rocket ignition was a sound recorded from a video camera rather than any special audio recording equipment, which again gave it a lot of grit/aggression. The key as always was balancing these angrier elements with the parts which make it sound punchy and satisfying.

Distinguishing types of guns was also very important. The key types were ballistic, digital, and energy (though there were things like the flamethrower and weapons with explosive projectiles, etc).

The previous examples touched on what I might do with ballistic weapons. For energy weapons, I took a different route where I often used processed dance-kicks for the punch. I still wanted to retain a gritty feel so I used recorded elements such as whiz-by's and processed them to get lasery sounds as well as the more 'standard' processing techniques I might employ for getting a "zap sound."

As an example, the E77 superior (energy assault rifle) used a processed dance-kick as one element and then a laser zap element made using MeldaProduction's MSpectralDelay and pitched way up. This got the sound most of the way there, but it felt like it was missing something, so I made a loop out of arrow whiz-by's processed with one of the D16 plugins and noise reduction. Playing this over the top gave it a unique twist and a bit of needed grit. The final piece of the puzzle was the environment tails.

Can you break down your approach to building a library of sounds for the weapons? How did you divide and conquer the creation of the weapon sounds? What were some of your sound sources (field recordings? specific libraries?) Any specific software apps or plugins that were helpful in creating these sounds?

SR: Each time I started a weapon, I collected a little library of sounds. Some of it was just sounds I liked from the Sweet Justice Library. Some of it was sounds which I processed. I tried to make sure that the sounds specifically tackled particular elements I wanted to layer into the design such as punch, noise (the more distorted, overbearing elements of a gun sound), environment (how the gun sounds in different spaces), crack (snappy elements to give the gun presence), interesting tones/textures, mechanical elements, and so on.

There was no plugin that I used particularly for weapons on The Ascent. The goal was almost always to create the sound I had in my head. A lot of the time it was basic dynamics processing, noise reduction, transient shapers, and gates. Nothing special!

Some of the fruitier plugins were Kilohearts' Disperser, which was great for beefing up explosions by giving them a low sweep, and also good for making laser elements.

I LOVE the D16 plugins (Antresol, Godfazer, Syntorus, etc.). These are great for processing interesting tails and creating grittier sci-fi textures.

Glitchmachines' Byome was nice for hashing things out. For example, I used the granular part of Byome for the tonal element on the Digital shotgun, which was a pre-order bonus item.

Gatey Watey from Boz Digital Labs is a good plugin for really tightening up source material for punchy elements of a library. I reached for that a number of times!

I also used MeldaProduction's MSpectralDelay for creating tonal laser-sweeps (a tip Rob Blake gave me a little while back! Thanks Rob!).

What were some of your favorite weapons to design? What went into them?

SR: I loved getting a bit experimental with some of the automatic weapons. For example, the P9000 has a wood lathe underneath giving it a screaming tone. It sounds really aggressive!

The Minigun was fun to work on as I had to tackle the variable fire rate. I tried to keep the per-shot punch very simple and punchy for this and rely more on the loops. I wanted to get the rapid phut phut-phut of the pipes spinning on the minigun, which I did by using a tremolo with a sharp attack on a wind sample — surprisingly effective!

I also really enjoyed the shotgun, mainly because I feel like it ended up sounding beefy as f*** (if I do say so myself). I must admit though, I think I used some source that Sam processed in the design for that one!

Probably my favourite weapon was the Sweet Justice Grenade. Three people worked on the explosion sounds for that. For the explosion, Csaba made the original sound. The Neon Giant folks re-timed the VFX to make it much longer, so I made a beginning part of the sound by re-processing some sounds Sam had worked on and layering in some quickly panning bubbly elements that I designed. I then whacked Csaba's sound onto the end, where the implosion happens, and the end result appeared.

Csaba nailed the tonal implosion sound. It was a pleasure working with those sounds and building on them with bits that Sam and I had made (I also thought that Barney did a cracking job of the UI when you select it, MUHAHAHAHA)!

Oh.. did we mention that's a grenade called SWEET JUSTICE, and yes, it delivers. @AscentTheGame pic.twitter.com/SkMoJoLucN

— Sweet Justice Sound Ltd (@sweetjusticesnd) May 24, 2021

What was your approach to the UI sounds? From an aesthetic perspective, how did you want these to sound? And how did you create them?

Barney Oram (BO): The opportunity to create UI sounds for this game was a dream come true for me. The game features technology that is clunky, tactile, and generally quite low-tech. I really wanted the UI sounds to reflect this; I tried hard to make them feel real and functional and imbued with a tasteful vintage aesthetic.

I looked to classic sci-fi movies like Alien and Blade Runner to inform my approach to creating the UI designs. This happens to be a style I am familiar with, having long been a big fan of these wonderful clunky analog sounds.

I began the process with an extended period of experimentation and exploration to find a core set of sounds that would form The Ascent UI library. I have a growing collection of weird, old audio equipment that came in handy during this phase, including an old AM radio, cassette tape machines, a ¼" reel to reel, contact microphones, tube preamps, a spring reverb unit, various old broken guitar pedals and multi-effects units, EMF coil pickups, and plenty of other fun stuff.

I really like generating simple tones, mostly using soft synths and processing existing library material, and then running those sounds through hardware equipment in order to give it a bit of dirty warmth. I will often mess with crazy processing chains in Soundminer, and then use this hardware re-amping technique to breathe some life back into the heavily manipulated source. There's something nice that you can get from cranking clean tones through vintage gear — a natural attenuation of the high frequencies and a gorgeous warm distortion, often extremely useful for helping a design to cut through in a busy mix.



After that, I like to add tactility and clunkiness to the UI designs using recorded foley. My setup for this is just a Shure SM57 into my Nagra IV-S. It's basic and simple, but by recording directly to tape, you can give basic foley performances a nice old-school vibe. It seems to flatten out the peaks in a very pleasing way, which is a quality of sound I really like. I recorded switches, clicks, latches, metallic clunks, plastic knocks, and manipulations. These 'real' elements help to ground the designed UI sounds in the universe of The Ascent.

One of the standout UI suites I created for The Ascent is the hologram minimap in the main menu. As soon as I saw the wonderful visuals that Neon Giant had put together for it, the inspiration struck. It is so visually intriguing, with crackles and visual glitches in the projected image; I imagined some kind of mechanical hologram projector whirring into life when you select the map view, switching between physical slides in order to show different portions of the map. I used recordings of a vintage slide projector to vividly describe this mechanism, as well as recordings of a cathode ray tube TV, a 16mm film projector, and plenty of bubbling analog tonal synth sounds to bring it all to life. It was a pleasure to bring these wonderful visuals alive with audio!

Let's look at some of the main enemies the player encounters. There is typically a cinematic that introduces each one. What went into creating that sound for:

Papa Feral:

Csaba Wagner (CW): This is our first boss fight, and the very first scene I got to work on as well. The goal was simple: make it aggressive and scary. I wanted to make the vocals deep and guttural, so I used mainly recordings of seals and of pugs, but you can also hear some lions and tigers in there. In order to create the weight and size of the beast, I wanted to make sure that every time it jumps or smashes the ground, you hear a lingering rattle of the metal structure all around you.

Siege Mech:

CW: When I first saw this scene, my mind automatically drifted to ED-209 in Robocop, so my goal was to pay a bit of homage to that, but also to make it clunkier and heavier. I tried to make the servos and mechanics as grounded as possible, so I spent some time in Soundminer and Radium creating various patches and morphing simple things like forklift servos and train coupling sounds into something new and fresh sounding.

Megarachnoid:

CW: The cutscene of the Megarachnoid starts with some spooky sparks, but I wanted to avoid the old-fashioned electric sparks, so I processed veggie and dry pasta crunches to turn them into these big spooky electric arcs. The mysterious distant vocals of the robot are combinations of apes and metal cable grinding sounds. As it walks towards us, I wanted to give it the vibe of a train running at you. The tonal servos you hear when it stops are made out of animal vocalizations as well. For me, the most fun part of this scene is that you don't know whether you're hearing animal sounds or something mechanical, because I had some fun patches in Soundminer and Kilohearts' Snap Heap that helped me turn animals into servos and, vice versa, turn tonal metal sounds into creepy robot vocals.

SJ: The sounds for the Megarachnoid were relatively simple. The session was just 2-3 sounds layered together, sometimes single sounds — mainly mixing low resonance tones with high-frequency signature tones for the spider. I created a few chains in Byome for a fun, granular, low-middy kind of resonance processing that also saturated that area of the spectrum in an interesting way with some harmonic resonances, so that when it hit some kind of low-end processing, it would be super clean and sizable.

Dakyne:

CW: My thought process for this scene was, "what if mankind had the technology to build a stargate?" It's a big mechanical gate that has all these giant power sources that take some time to power up. So I wanted you to feel the various turbines/power sources firing up. And as we get closer, you see and hear the little pistons and mechanical parts moving.

The opening of the gate was definitely inspired by the iris in Stargate SG1. My goal whenever it comes to giant mechanical things is to give them some kind of an organic/vocal quality, so you almost feel like this is a huge mechanical monster opening its mouth.

The tentacles for the monster were morphed using recordings of mud, lion growls, rockslides, and leather stretching sounds. The morphed sounds were then processed through a chain in Soundminer to give them extra size and movement.

There are other enemies that players face in hordes, like the ferals and thugs. Since there are a bunch of these enemies coming at the player at one time, how did that affect your approach to their sounds?

SR: The most significant aspect of this was ensuring the mix adapted appropriately. There are two sides to this. One is the design of the sounds and the other is the mix. For The Ascent, we did not compromise on design to accommodate for mix issues (although sometimes sounds did change as the mix adapted). A single feral needed to sound great by itself and in a horde!

We used a mixture of game parameters and metered busses for controlling the mix of these sounds. Game parameters would include things like how many enemies there were actively pursuing the player at a given time. We used this to reduce the number of voices given to some sounds. For example, melee swipes get culled more heavily when there are more enemies.

We also turn some sounds down when there are a lot of enemies, including things like the vocals for any small and medium-sized enemies. Wwise also gives us the option to meter any bus and use the output of the bus to drive game parameters. We had quite a lot of busses and nearly as many meter parameters!! It was a bit spaghetti-like, but the end result was a very responsive mix.

If we look at the ducking of enemies as a case study, it gives some insight into the kind of things we did. Boss enemies ducked the large/medium/small ones. Large ducked medium/small, and the medium ones ducked the small. Alongside this, weapons and explosions ducked each of these by varying amounts.
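
To make the shape of that ducking matrix concrete, here is an illustrative sketch; the dB amounts, and the choice to sum contributions rather than take only the strongest one, are assumptions, since the real mix was built from Wwise bus ducking and meter-driven RTPCs.

```cpp
// Illustrative size-based ducking matrix: a bus is ducked by every enemy
// class "larger" than it that is currently loud. Values are made up.
#include <array>
#include <iostream>

enum Size { Boss = 0, Large, Medium, Small, kSizeCount };

// duckDb[target][source]: how much `source` pulls `target` down when active.
constexpr std::array<std::array<float, kSizeCount>, kSizeCount> duckDb = {{
    /* Boss   <- */ {{ 0.f,  0.f,  0.f, 0.f}},
    /* Large  <- */ {{-6.f,  0.f,  0.f, 0.f}},
    /* Medium <- */ {{-6.f, -4.f,  0.f, 0.f}},
    /* Small  <- */ {{-8.f, -6.f, -3.f, 0.f}},
}};

float DuckAmountDb(Size target, const std::array<bool, kSizeCount>& loud) {
    float total = 0.f;
    for (int src = 0; src < kSizeCount; ++src)
        if (loud[src]) total += duckDb[target][src];
    return total;
}

int main() {
    // A boss and some medium enemies are currently audible.
    std::array<bool, kSizeCount> loud = {true, false, true, false};
    std::cout << "Small-enemy bus ducked by "
              << DuckAmountDb(Small, loud) << " dB\n";  // -8 + -3 = -11 dB
}
```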

In order to get the combat sounding chaotic (but pleasantly so) we used suites of combat sweeteners. These trigger once there are a certain number of enemies present, the combat mix has been above a volume threshold for long enough, and the correct type of event (e.g. a gunshot) was playing. In these cases, some of the combat sounds are mixed down and we start playing these sweeteners. They were a mixture of distant combat shouts, designed bullet-bys that whizzed around the speakers, aggressive swells in gunfire, and so on.
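
Condensed into code, that gating logic reads roughly like this; every threshold below is invented for illustration.

```cpp
// Hypothetical sweetener gate: sweeteners only fire when enough enemies are
// present, the combat bus has stayed above a loudness threshold for long
// enough, and a qualifying event (e.g. a gunshot) just played.
#include <iostream>

struct CombatState {
    int   activeEnemies   = 0;
    float combatBusDb     = -60.f;  // metered level of the combat mix
    float secondsAboveDb  = 0.f;    // how long it has been above the threshold
    bool  qualifyingEvent = false;  // e.g. a gunshot event is playing
};

bool ShouldPlaySweetener(const CombatState& s) {
    constexpr int   kMinEnemies    = 6;
    constexpr float kLoudnessDb    = -18.f;
    constexpr float kMinSecondsHot = 3.f;
    return s.activeEnemies >= kMinEnemies &&
           s.combatBusDb >= kLoudnessDb &&
           s.secondsAboveDb >= kMinSecondsHot &&
           s.qualifyingEvent;
}

int main() {
    CombatState s{10, -12.f, 4.5f, true};
    if (ShouldPlaySweetener(s))
        std::cout << "Duck some combat layers, play bullet-bys, distant shouts, gunfire swells\n";
}
```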

Another interesting way we dealt with the issue of large swarms of enemies was to allow for more variety per instance of something than actually existed in-game. Each enemy weapon has several versions of its non-player-character fire sounds. A group of 3 goons all firing the same pistol with the same pistol sound for each gets boring after a while even if distance and angle do vary the sound a bit. For some things like ferals, it was as simple as having a very large number of variations for their sounds rather than needing separate sets.

The above demo has a group of over 10 enemies and they are only firing 2 different types of weapons (a burst rifle and a pistol). But despite the lack of variety in the weapons used, the soundscape copes with this by using sweeteners and additional sets of fire sounds for these weapons.

The Ascent was created using Unreal Engine 4. Sound-wise, was this a good fit for the audio department? Why or why not?

SR: UE4 was a boon for us on The Ascent. The Neon Giant folks used Blueprint scripting extensively on this game, which allowed us to prototype and implement without needing to wait for coders. This essentially removed the barriers between creating sounds in our DAW and hearing them in-game.

For the weapon firing system, I managed to set it up so that minimal changes needed to be made inside of UE4. Weapons were filtered by fire mode, and then only three events were needed: one for starting an automatic weapon firing, one for stopping it, and another for single-fire/semi-auto weapons. All the data required for the weapon was then fed to these events, including fire rate, the name of the weapon, who was firing it, and so on.

Being able to develop this system myself saved a lot of time and meant that no programmer was required to prototype. It also meant that I could set up the sounds for a new weapon with little or no changes inside of UE4. This kind of thing would normally have required a programmer to support me but I could just go ahead and do it myself!
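
A simplified sketch of that three-event interface is shown below; the event names and payload fields are assumptions based on the description above, not the project's actual data.

```cpp
// Three-event weapon interface: automatic weapons get a start and a stop
// event, everything else gets a single-fire event, and the weapon data rides
// along as parameters rather than per-weapon script changes in UE4.
#include <iostream>
#include <string>

enum class FireMode { SemiAuto, Automatic };

struct WeaponFireData {
    std::string weaponName;   // e.g. "Dread"
    float       fireRate;     // rounds per second
    bool        firedByPlayer;
};

void PostFireEvent(const std::string& eventName, const WeaponFireData& d) {
    // In-game: set RTPCs/switches from `d`, then post the corresponding event.
    std::cout << eventName << " (" << d.weaponName << ", rate=" << d.fireRate
              << ", player=" << d.firedByPlayer << ")\n";
}

void OnTriggerPulled(FireMode mode, const WeaponFireData& d) {
    PostFireEvent(mode == FireMode::Automatic ? "Weapon_Auto_Start"
                                              : "Weapon_Single_Fire", d);
}

void OnTriggerReleased(FireMode mode, const WeaponFireData& d) {
    if (mode == FireMode::Automatic) PostFireEvent("Weapon_Auto_Stop", d);
}

int main() {
    WeaponFireData dread{"Dread", 9.5f, true};
    OnTriggerPulled(FireMode::Automatic, dread);
    OnTriggerReleased(FireMode::Automatic, dread);
}
```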

Another reason I love Unreal Engine is for the reference viewer. Being able to quickly find where things are referenced is HUGE.

JT: UE4 is a great fit for us. It's an engine that a lot of the team has a great deal of experience with. We approached The Ascent with the intention of having Wwise be a sampling and mixing tool and using UE4's Blueprints for the majority of logic and switching, etc. We're also incredibly excited by the prospect of the audio tools that are being developed for Unreal Engine 5.

Lee Banyard (LB): Up until relatively recently, I'd managed to avoid UE4, at least compared to most of the rest of the Sweet Justice team. It wasn't a conscious thing, just a by-product of the projects I'd been involved with since UE4 hit the scene. Then suddenly, I was thrown in at the deep end with The Ascent!

Fortunately, I'd had a lot of experience in previous iterations of UE so a lot of what I had to do in terms of, say, mapping out volume, or audio geometry, that was familiar to me. Blueprints are clearly powerful, more so than what was available previously, and a lot more integrated with the rest of the game's core logic. Which is great! But it was a bit of a learning curve for me, being a bit more used to Kismet, which I found easier to trace through, especially to begin with. With the extra power that is afforded to sound designers, it's easier to break things, and being aware of that made me a bit warier.

Ultimately, the question of whether a particular engine is a good fit for the audio department is a bit moot, as part of our job is adjusting to the prevailing conditions. We were able to use it along with Wwise to get something together that ran well on multiple platforms, relatively painlessly, so in that respect, it was a good fit!

John Tennant (JTenn): Because The Ascent was almost entirely data-driven, the versatility of working with Blueprints gave us a ton of control to dig deep into the heart of things to get our Wwise event triggers in the right spots without needing much code-support for day-to-day hookups and implementation.

Sound-wise, what was the biggest challenge in working on The Ascent? Technically? And creatively?

JT: One of the biggest challenges environment-wise was scope. Veles is a huge area and every part of it is incredibly detailed. It was important to us that we did every environment justice, and to achieve this we had to come up with a toolset to optimize the workflow of implementing sound into the environment. This toolset was made up of systemic solutions to common environmental needs and custom placeable actors that allowed for speedy ambient zone and spot effects placement.

An example of a systemic solution would be the holograms that are present throughout a lot of Veles. The base Hologram Blueprint automatically chooses a sound based upon the hologram size and type, which can be overridden per instance if desired.

For ambient zone placement, we made custom trigger boxes that could be placed in the environment that included slots for ambience to play when inside the volume, RTPC's/States to set when entering/exiting the volume, etc. This minimized the need for repetitive scripting in Blueprints.

We also created an actor which acted as an 'Environment Sweetener.' This was another placeable volume to which we added a drop-down menu with a number of environment prefabs that could be selected, for example, 'Industrial,' 'Balcony With Traffic,' 'Bar,' etc. When one of these prefabs was selected, a streaming ambience and a selection of randomly positioned spot effects would play whenever the player was inside the volume. All of these sounds were overridable per instance and we could add new prefabs at any time. We used this a lot in addition to bespoke placed ambient sounds to really bring each environment to life.
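
As a rough illustration of how such a prefab-driven volume could be structured (the prefab contents and event names here are made up, borrowing only the prefab labels mentioned above):

```cpp
// Sketch of an 'Environment Sweetener' volume: a placeable actor exposes a
// prefab drop-down, and each prefab bundles a streaming ambience with spot
// effects scattered at random positions while the player is inside.
#include <iostream>
#include <string>
#include <vector>

struct EnvironmentPrefab {
    std::string name;                      // "Industrial", "Bar", ...
    std::string streamingAmbienceEvent;    // looping bed
    std::vector<std::string> spotEffects;  // randomly positioned one-shots
};

const std::vector<EnvironmentPrefab> kPrefabs = {
    {"Industrial",          "Play_Amb_Industrial", {"Spot_MetalCreak", "Spot_SteamVent"}},
    {"Balcony With Traffic","Play_Amb_Balcony",    {"Spot_VehiclePassBy", "Spot_HornDistant"}},
    {"Bar",                 "Play_Amb_Bar",        {"Spot_GlassClink", "Spot_Laughter"}},
};

void OnPlayerEnter(const EnvironmentPrefab& prefab) {
    std::cout << "Start " << prefab.streamingAmbienceEvent << "\n";
    for (const auto& spot : prefab.spotEffects)
        std::cout << "Schedule " << spot << " at a random offset around the player\n";
}

int main() { OnPlayerEnter(kPrefabs[2]); }  // designer picked "Bar" in the drop-down
```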

LB: I spent a lot of time trying to wrangle audio that was primarily developed with Series X/high-end PCs in mind so that it'd work as well on lower-end platforms. For us, the biggest functional differences between the newer consoles and the previous-gen consoles were in terms of memory and the variation in hard disk specs. The streaming bandwidth is just so much better for the new platforms! Memory is always a problem, of course, but setting stuff to stream for the top-end platforms, which already had more mem, and then finding you can't just leave stuff like that when stepping down to Xbox One — yeah, that was quite a challenge. There was less room to maneuver for sure, so it was a bit of a battle on two different fronts.

Trying to squeeze all this amazing content that's been authored, without affecting the quality too much, was a tough job at the best of times — making those hard decisions about how many variations of assets one can have on your top-tier platform. So I did a lot of zooming in and out, trying to figure out how often certain sounds trigger, testing them in context, seeing what can be dispensed with. Making those trade-offs is something I'm used to doing as I'm as much a content-creating sound designer as I am a technical one.

The Ascent is basically an open-world game in many ways, and we didn't have too much of a voice-management system available to us here. When I came on, a lot of sounds were looping and holding onto a lot of virtual voices, which is a familiar problem to many tech sound designers on open-world titles using Wwise I'm sure!

I did a lot of work granulating existing sounds into random/seq containers that would then kill voice at max distance and get out of the way. I kept running into the problem of upstream global limits set on these sounds causing those that are still audible to chop up on crossfade boundaries. And I don't know how many times I found crossfading between grains causing sounds to hit the volume threshold when that simply shouldn't have been the case. I found a workaround for this thankfully but it took a lot of experimentation. (I rarely use the crossfade transition type in looping random/seq containers anymore, in case you're wondering!)

So yep, a lot of the standard open-world audio issues were there to overcome on The Ascent, which is probably a surprise as it doesn't have that obvious open-world thing going on visually, just due to where the camera is. Overall, it was straddling two generations of platforms that caused the most problems though.

JTenn: On a broader scale, the team realized pretty fast that a general management system for all of the Audiokinetic Components (Ak Components) we were using was going to be needed. Without any management, we were counting over 1000 Ak Components in complex scenes! This was a huge hit to CPU and while our beefy dev PCs could handle the load, there was no chance for our lower-spec target platforms.

To tackle this problem, we worked with the coders at Neon Giant to create a "Scope Group" system where each Wwise Event (Ak Event) would be assigned to a group. Then each group would get a set of rules, like max number of Ak Components and Max Distance from the player. If a new Ak Event would attempt to play but it fell outside the defined rules of its assigned Scope Group, then it would be 'paused' and it would be updated with a slower tick-rate than the active ones. We ended up with 5 total Scope Groups, which have settings that can be tweaked per-platform.
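
Reduced to its decision rule, a Scope Group check might look something like this; the counts, distances, and single-group example are illustrative assumptions rather than the five shipped groups and their per-platform settings.

```cpp
// Scope Group rules: each group caps how many Ak Components may be active and
// how far from the player they may be; anything outside the rules is 'paused'
// and ticked less often. Values are invented for illustration.
#include <iostream>

struct ScopeGroupRules {
    int   maxActiveComponents;
    float maxDistanceFromPlayer;
};

enum class Decision { PlayActive, PauseSlowTick };

Decision EvaluateNewEvent(const ScopeGroupRules& rules,
                          int activeInGroup, float distanceToPlayer) {
    if (activeInGroup >= rules.maxActiveComponents ||
        distanceToPlayer > rules.maxDistanceFromPlayer)
        return Decision::PauseSlowTick;  // parked, updated at a slower tick rate
    return Decision::PlayActive;
}

int main() {
    ScopeGroupRules ambientSpots{32, 6000.f};  // e.g. one hypothetical group
    auto d = EvaluateNewEvent(ambientSpots, 32, 1200.f);
    std::cout << (d == Decision::PauseSlowTick ? "paused\n" : "active\n");
}
```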

Getting into more specifics, there were a lot of day-to-day implementation challenges. The open-world nature of the game meant it was sometimes difficult to set up meaningful test environments while still making use of developer short-cuts. (A lot of stuff needed to be tested in the built game instead of simply using Play-in-Editor.)

I specifically remember the TVs were tricky because they weren't just playing back mp4s. Instead, each advertisement that played was a hodge-podge of textures and materials animating around at runtime. Getting the audio to stay in sync with that kind of thing is tricky.

Another good one was finding the right balance between optimizing activity occurring off-screen without nerfing the sounds they should be making. The Ascent's camera is so far away from the player, so our attenuation curves were often much bigger than in a conventional game. When fast-movers (like drones or spaceships) approached, we needed them to actually get simulated far before the player camera could see them. We had to strike a balance between optimizing and simulating off-screen so these types of game objects faded up gradually instead of 'popping' in.

The monorail/subway system also had this popping-in trouble. In the end, we just had to predict when the train was coming and start playing its approach sound before it had actually spawned into the world.
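
The fade-up idea can be sketched as a simple distance-based gain ramp; the distances below are placeholders rather than The Ascent's actual attenuation settings.

```cpp
// Fast movers start being audible while still outside the camera, with gain
// ramped in over an outer band of the (deliberately large) attenuation radius
// so they fade up instead of popping in.
#include <iostream>

float OffscreenFadeGain(float distance, float maxAudible, float fullGainAt) {
    // 0.0 at maxAudible, 1.0 once the object is within fullGainAt.
    if (distance >= maxAudible) return 0.f;
    if (distance <= fullGainAt) return 1.f;
    return (maxAudible - distance) / (maxAudible - fullGainAt);
}

int main() {
    // A monorail heard approaching long before it appears on screen.
    const float samples[] = {9000.f, 7000.f, 5000.f, 2000.f};
    for (float d : samples)
        std::cout << "distance " << d << " -> gain "
                  << OffscreenFadeGain(d, 8000.f, 3000.f) << "\n";
}
```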

What were your biggest challenges in mixing The Ascent?

SR: I spent a lot of time working with Sam to make sure the game was punchy and also coherent. The most significant aspect of what I undertook with the mix was essentially ducking, with the goal of getting things to step aside when we didn't want to hear them and making sure the readability of each element we would expect to hear was not compromised as part of that process. I'd like to say that this was an elegant dance but the process was more of a wrestling match.

The gist is that loud and/or important things clear the less loud and/or important things out of the way. When we are dealing with a tree structure for doing all this mixing (Wwise's master mixer hierarchy), edge cases would often appear. For example, a child bus no longer wanted to be ducked by something, which meant we had to either apply inverse ducking or move that child bus into its own part of the mix hierarchy.

The result of that is that any inheritance the child lost by being moved would need to be re-created and kept updated as the project moved forwards!! The inverse of this is the occasions where we wanted something to be much more aggressive with its ducking compared with other sounds. This very reasoning led to the creation of a bus called "Explosion MegaDucker," where certain very big/obnoxious explosions (such as the Sweet Justice grenade explosion) would be bussed and would do things such as filter music, duck things much more heavily, and so forth.

Towards the end of the project, the naming of some busses started to lose meaning a bit. For example, Sam noticed that the Larkian hammer ground slams weren't quite making it through the mix. The solution was to send them to an explosion bus. Resisting such changes through development is often helpful when it comes to maintaining structures but at some point in the project, we found that "sounds good, is good" was reason enough to make a change like this.

In the end, we landed upon a fairly sensible (albeit relatively deep) structure to the mix hierarchy with most edge cases accounted for.

SJ: Due to the top-down view of the game, the majority of the combat takes place in the LCR speakers. One thing we did right away was pan combat music to the rears for the surround mix. When the combat is so front-heavy yet we have 2-4 additional channels of full-range not getting utilized, it made sense to fill these with combat music.

This is compensated for on the stereo mix as well. We dynamically turn the combat music up and down depending on a number of factors, and we remove it entirely when a certain number of enemies are on screen. We also embellished the weapon and explosion tails during the exploration state of the game, and when combat music kicks in they get pulled right back to retain the punch — as the swell of the environment isn't as important during that moment.

We licensed a few tracks of music for the big boss fight moments, and during those, the music takes center stage and the combat mix is dynamically turned down depending on the level of the boss music. We wanted to create that John Wick power fantasy with these pieces of music.

For the environmental/exploration mix, we took a similar approach. Initially, the exploration music would loop throughout the game, which got old really fast. We set up a system to dynamically bring the exploration music in and out, and when it hit over a certain threshold it would pan to the rears and surround the player, creating this rich, amazing synthwave sound that envelops the player. When this occurred, the environmental sound design was pulled down to let the music shine.

We also turned down TV advertisements and in-world music to avoid any tonal clashes. When the exploration music began to fade out, we'd pull the environment up, along with some dynamic EQs pulling up the high end a bit, allowing the detail, grit, and air to flourish a bit more — something that could otherwise be perceived as a noise floor when competing against the music.

There are a few tricks we use to retain punch and clarity. We run a dynamic EQ on the combat master, and when it reaches over a certain threshold we begin to chip away at the 2.5 kHz range by a few dB. This is perceived as quite a 'clicky' area of the spectrum, and that gives a bit more range to hit the limiter to give a perceived phatness to the combat.

Along with this, all NPC and player characters who aren't player 1 have their weapons high-passed at 200 Hz, so the low end doesn't get saturated. Again, this gives more headroom to hit the mastering and create a higher perceived loudness. The title has a lot of tricks and fun approaches to keep the punch. I think the combat mix alone has 60+ RTPCs driven by individual bus meters, creating a web of interaction.
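
Expressed as a tiny decision function (with placeholder threshold and dip values, not the shipped numbers), those two headroom tricks look roughly like this:

```cpp
// Non-player-1 weapons get a 200 Hz high-pass, and once the metered combat
// master exceeds a threshold a dynamic EQ dips the ~2.5 kHz region by a few
// dB. Threshold and dip depth below are illustrative placeholders.
#include <iostream>

struct WeaponEqSettings {
    float highPassHz;   // 0 = bypass
    float dip2500HzDb;  // negative = cut
};

WeaponEqSettings ComputeWeaponEq(bool isPlayerOne, float combatMasterDb) {
    WeaponEqSettings eq{0.f, 0.f};
    if (!isPlayerOne) eq.highPassHz = 200.f;           // keep the low end for player 1 only
    if (combatMasterDb > -14.f) eq.dip2500HzDb = -3.f; // shave the 'clicky' region
    return eq;
}

int main() {
    auto npc = ComputeWeaponEq(false, -10.f);
    std::cout << "NPC weapon: HPF " << npc.highPassHz << " Hz, 2.5 kHz dip "
              << npc.dip2500HzDb << " dB\n";
}
```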

LB: I spent a lot of time trying to set up options for those tasked with late-stage mixing. I know how difficult it can be to add some of that stuff retrospectively! Some of this is standard stuff, like setting up state groups exclusively for high-level mixing that key into wide-scale contexts, such as general locations. These were always intended to be used when it came to the point that other solutions weren't practical to implement due to the development phase of the game, rather than relied upon by every sound designer to set levels early on. One can't always predict the combination of contexts that a game will present, especially with an open-world title. That can make mixing — choosing what to prioritize sonically for the player's benefit — tricky to cater for. So having fall-backs is important, even if they're not used all the time.

I'm a total convert to using Wwise's asset-based loudness normalization, which for some people (and I used to be one of them!) just feels like something there to spoil the fun of high-level sounds not hitting you with the same energy that they're authored with. But it does such a good job of ensuring fader values in Wwise actually equate to the sonic energy present in the sounds they're affecting that it's invaluable for run-time mixing tech, and simplifying mixing overall — more so than peak normalization can ever do.

We had so many sound groups reliant upon RTPCs driven by Wwise Meter to carve out clarity, and these run-time mixing mechanisms are only as good as the data fed to them, in terms of how loud Wwise *thinks* they are. So those fader values suddenly have massive meaning, right? If they're just being set to madly disparate values because your assets aren't consistently loudness normalized, then these dynamic mixing methods fall over. I did have to do a fair bit of work making sure that loudness norm was ticked or, at least, not overridden. And often I'd simply do it offline as part of asset rationalization just to make doubly sure it was in place (sorry everyone!).

In terms of sound, what are you most proud of in your work on The Ascent?

SJ: I'm really proud of the way the team came together to create something totally unique and special in a really short period of time. It's very special seeing the entire team on autopilot, navigating development and completing the title. The end result speaks for itself, more so than any individual component, and it's a testament to how well the team works together and knows how to work together.

SR: I'm really proud of my contribution to the combat sandbox. From the individual assets all the way to the final rendering of the sounds in-game, I really enjoyed honing in on the sounds and mix for these parts of the game. It sounds good-chaotic and punchy most of the time without feeling like parts of the soundscape were missing.

In that sense, I think we made the right decisions. The sounds we mixed out or stopped from playing were mostly the right ones. I'm looking forward to taking what we learned here forwards and refining it further.

As a general note, I also really enjoyed how much downright trickery we got away with. The soundscape does much more than the game itself actually simulates in order to sound as good as it can.

JTenn: The Ascent has a great balance between actually simulating game events to generate individual effects triggers on the one hand and 'faking' large-scale sets of effects triggers on the other. For instance, each bullet impact of course generates a material-sensitive impact sound, but then we also had a system that triggers a cascade of debris after a fight that goes above a threshold intensity. So after a good-sized battle, you hear bits of the environment falling off the walls and onto the floor as all the chaos and debris settles — of course, none of this was actually simulated, but it's a garnish that adds character and depth after a big fight.
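
A hypothetical reduction of that debris garnish, with invented thresholds and counts, might look like this:

```cpp
// If the fight's accumulated intensity crossed a threshold, schedule a
// handful of debris one-shots over the next few seconds once combat ends.
#include <iostream>
#include <random>

void OnCombatEnded(float fightIntensity) {
    constexpr float kIntensityThreshold = 0.7f;  // 0..1 accumulated intensity
    if (fightIntensity < kIntensityThreshold) return;

    std::mt19937 rng{42};
    std::uniform_real_distribution<float> delay(0.5f, 6.f);
    const int pieces = 4 + static_cast<int>(fightIntensity * 6);
    for (int i = 0; i < pieces; ++i)
        std::cout << "Debris one-shot at +" << delay(rng)
                  << "s, random position near the arena\n";
}

int main() { OnCombatEnded(0.9f); }
```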

Another example of this approach is the 'Player Foley' system. Rather than simulating/keyframing a ton of different animations to trigger individual foley events, we instead used player 2D velocity + player rotational velocity to modulate the levels of some always-running foley loops which corresponded to upper body and lower body armour type being worn.

It's another example of a system that is simulating just enough game data to get the message of movement and materials across, but not more than is needed for believability for the player. Both of these systems were accomplished relatively quickly and without the need for coder involvement.
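
As a sketch of that velocity-driven approach (the mapping, caps, and loop names are assumptions, not the game's actual RTPC curves):

```cpp
// Player Foley model: 2D speed and rotational speed modulate the level of
// always-running foley loops chosen by upper/lower body armour type.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <string>

struct PlayerFoley {
    std::string upperArmourLoop;  // e.g. "Foley_Upper_HeavyPlate"
    std::string lowerArmourLoop;  // e.g. "Foley_Lower_LightCloth"

    void Update(float speed2D, float rotSpeed) const {
        const float maxSpeed = 600.f, maxRot = 360.f;       // illustrative caps
        const float movement = std::min(1.f, speed2D / maxSpeed);
        const float turning  = std::min(1.f, std::fabs(rotSpeed) / maxRot);
        const float level    = std::max(movement, turning); // either motion rustles the armour
        std::cout << upperArmourLoop << " / " << lowerArmourLoop
                  << " level RTPC = " << level << "\n";
    }
};

int main() {
    PlayerFoley foley{"Foley_Upper_HeavyPlate", "Foley_Lower_LightCloth"};
    foley.Update(450.f, 0.f);   // sprinting straight
    foley.Update(0.f, 270.f);   // turning on the spot
}
```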

BO: I'm really proud of the detailed world-building sound design that helps bring Veles to life. We spent a lot of time putting together bespoke advertising sounds for the various screens and billboards around the arcology. Everything has a unique sonic character, with extensive work going into crafting detailed sounds for all of the elements of the environment, and ultimately culminating in a rich soundscape for the player to explore. These are tiny features in the context of a large game-world, but we were really keen to give The Ascent that feeling of dense atmosphere that is so integral to cyberpunk experiences.

JJ: This game had no creative boundaries; nothing was too weird or outlandish to try. A sandbox world inhabited by smarmy aliens and unruly bots required thinking WAY outside the box, and the teams at Sweet Justice and Neon Giant encouraged that exploration and experimentation 1000%. Really flexing that muscle was hugely satisfying.

JT: The Ascent can be a beautifully chaotic game at times, with incredibly intense battles. That being said, I think the almost 'rubber band' style mixing system we implemented really manages to maintain a pleasant and cohesive listening experience throughout. We spent a lot of time ensuring that each element (combat effects and music, environment effects and music, in-game diegetic sound, etc.) really had its moment to shine, each one giving way to other elements at moments appropriate to the setting or action. We really threw out the rulebook in terms of mixing, at points even bringing the volume of combat effects right down for some of the biggest battles in the game, just to make a ton of room for the great licensed tracks we had available. I think things like that make for a really interesting and different soundscape, and that's definitely one of the things I'm most proud of.

LB: Being part of the technical underpinning so that everyone else's sounds can shine, I find that hugely satisfying. I know that sounds sickeningly selfless, but honestly, it's truly the case here and it can be immensely pleasurable to just let your co-workers do their thing and for it to turn out so well.

I can't stress enough just how good the core assets are from the rest of the team. I created the odd sound effect here and there in The Ascent but was more filling in gaps, and nothing of the scale/scope of the weapons, say, or the more involved world-building sounds — the fictional brand ads and little musical jingles, and the environmental content. It's such a rich-sounding experience in so many ways.

I guess the most pride I feel regarding The Ascent comes less from any individual 'bit' I may have assisted with, and more with being part of Sweet Justice, and in turn getting to work with teams such as Neon Giant that understand how much this team can contribute creatively to their own amazing projects when trust is extended to us. I give Neon Giant huge credit for that.

A big thanks to the Sweet Justice sound team for giving us a behind-the-scenes look at the sound of The Ascent and to Jennifer Walden for the interview!

Source: https://www.asoundeffect.com/the-ascent-game-audio/
