Real Time Print To Track

Logic and Audition users will be familiar with the term Bounce to Track. This process allows the user to perform an Off-line Mixdown of a selected group of Session Tracks without physically exporting. In most cases the Mixdown appears on a supplemental target Track.

Bouncing Off-line is a time saver. However, it can be precarious. It would be irresponsible to submit a finished piece of audio to a client without 100% confirmation that the bounced delivery file (most likely slated for distribution) is glitch free. In essence, it is imperative to thoroughly check your piece prior to submission.

Off-line Bounce (aka Bounce to Disk) was once notoriously absent from Pro Tools. Avid finally implemented support a few years ago.

In professional Post Production, engineers may perform a real time (On-line) Bounce of a mix Session. The process is commonly referred to as Printing. It requires the operator to sit through the Session in its entirety.

Besides the ability to catch glitches as they occur, you can edit clips before the playhead reaches their location. You can also edit clips and/or sub-segments within a previously completed Print and re-Print only the manipulated segment.

So how is this done? Simple – if the DAW or Interface supports it.

For instance in Pro Tools the user can assign Bus outputs to the input of a standard Audio Track. The key is you can ARM a standard Audio Track to record any signal that is passing through it. This would be the Print Track.

Adobe Audition CC does not support direct Bus Output —>> Audio Track assignments. However, it is still possible to implement a Print workflow (see attached image). You will need a supported Audio Interface with a Mix Return. Simply assign all Session Tracks and Buses to the Main Output. Then add a supplemental Audio Track. Set its input to Mix Return. ARM the Track to record and fire away.

-paul.


Adobe Audition CC Productivity

Below I’ve listed a few Adobe Audition CC (ver.2015.2.1) features/options that may be obscure and perhaps underutilized.


Usability

1- Maximize Active Frame (⌘↓). This command toggles full screen display accessibility of the active (blue outlined) UI Panel.

2- Lock In Time (Multitrack). When activated, selected clips are pinned to their current location. I mapped ⌥⌘L for this function.

3- Group (⌘G) (Multitrack). Multiple clips will be congregated and may be repositioned cumulatively.

4- Suspend Groups (⏎⌘G) (Multitrack). This function temporarily deactivates the Group. Actually, this command toggles the behavior between deactivate and activate. There are also options to Remove Focus Clip from Group and Ungroup Selected Clips. They both support custom shortcut mapping.

5- Right + Click on any Clip’s Fade Handle (Multitrack) to display the following customization menu:

– No Fade
– Fade In/Out
– Crossfade
– Symmetrical
– Asymmetrical
– Linear
– Cosine
– Automatic Crossfade Enabled

6- Bounce to New Track (Multitrack). This feature will process and combine multiple clips located on a single track or multiple tracks. This will free up system resources. The following options support custom shortcut mapping:

– Selected Track
– Time Selection
– Selected Clips In Time Selection
– Selected Clips Only

7- Convert To Unique Copy (Multitrack). This function creates a sub clip derived from the original trimmed source clip. Media Handles are no longer accessible in the converted copy (Multitrack and/or Waveform Editor environments). I mapped ⌥⌘C for this function.

Editing

1- Time Selection in all Tracks (Multitrack). This is a Ripple Delete variation (⏎⌘⌦) that will retain clip relevant Marker position(s).

2- Split All Clips Under Playhead (Multitrack). I mapped ⌥⌘R for this function.

3- Merge Clips (remove thru edits) (Multitrack). I mapped ⌥⌘J for this function.

Mixer/Track Inserts and Sends

1- Individual Tracks supply buttons that designate Sends and Inserts as Pre or Post Fader.

Markers

1- Markers implemented in the Waveform Editor may be Merged thus allowing easy selection of encapsulated audio.

2- Selected Range Markers present in the Waveform Editor may be exported as individual clips.

3- Selected Range Markers present in the Waveform Editor may be added to a Playlist where they may be reordered for auditioning.

Exporting

1- The (Multitrack) Session Export Dialog includes user defined Mixdown options:

– Master: Stereo, Mono, or 5.1
– Signal present on individual Tracks
– Signal present on individual Busses

2- Export with Adobe Media Encoder (Multitrack). This Export option runs Media Encoder and requires the user to select a predefined Media Encoder preset. Routing options are available as well.

-paul.


Loudness Measurements and Silence

Consider this: Two extended segments of audio, Loudness Normalized (or mixed in real time) to the same Integrated Loudness Target.

Segment (A) is fairly consistent, with a very limited amount of intermittent silence gaps.

Segment (B) is far less consistent, due to a multitude of intermittent silence gaps.

When passing both segments through a Loudness Meter (or measuring the segments offline), and recognizing Integrated Loudness is a reflection of the average perceptual Loudness of an entire segment – how will inherent silence affect the accuracy of the cumulative measurements?

In theory the silence gaps in Segment (B) should affect the overall measurement by returning a lower representation of average Integrated Loudness. If additional gain is added to compensate, Segment (B) would be perceptually louder than Segment (A).

Basically without some sort of active measurement threshold, the algorithms would factor in silence gaps and return an inaccurate representation of Integrated Loudness.

The Fix

In order to establish perceptual accuracy silence gaps must be removed from active measurements. Loudness Meters and their algorithms are designed to ignore silence gaps. The omission of silence is based on the relationship between the average signal level and a predefined threshold.

Loudness Meter (G10) Gate

The specification Gate (G10) is an aspect of the ITU Loudness Measurement algorithms included in compliant Loudness Meters. Its function is to temporarily pause Loudness measurements when the signal drops below a relative threshold, thus allowing only prominent foreground sound to be measured.

The relative threshold is -10 LU below ungated LUFS. Momentary and Short Term measurements are not gated. There is also a -70 LUFS Absolute Gate that will force metering to ignore extreme low level noise.
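
To make the gating logic concrete, here is a minimal Python sketch of the two-stage gate. It is not a compliant BS.1770 implementation – it assumes you already have per-block loudness values in LUFS (e.g. from 400 ms momentary blocks) and it ignores K-weighting and block overlap:

import numpy as np

def gated_integrated_loudness(block_lufs):
    # block_lufs: pre-computed per-block loudness values in LUFS
    blocks = np.asarray(block_lufs, dtype=float)
    # Absolute gate: discard blocks below -70 LUFS (extreme low level noise)
    survivors = blocks[blocks > -70.0]
    if survivors.size == 0:
        return float("-inf")
    # Ungated average of the surviving blocks (averaged in the power domain)
    ungated = 10 * np.log10(np.mean(10 ** (survivors / 10)))
    # Relative gate: discard blocks more than 10 LU below the ungated average
    gated = survivors[survivors > ungated - 10.0]
    return 10 * np.log10(np.mean(10 ** (gated / 10)))

Silence and low level pauses fall below the relative threshold, so they never make it into the final average.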

Most Loudness Meters reveal a visual indication of active gating (see attached image) and confirm the accuracy of displayed measurements.


Additional Gate Generalizations and Nomenclature

Common Noise Gate

A Downward Expander's applied attenuation is dependent on signal level once the signal drops below a user defined threshold. The Ratio dictates the amount of attenuation. A Noise Gate, by contrast, functions independently of signal level. When the level drops below the defined threshold, hard muting is applied.
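
The distinction is easier to see as gain math. Here is a minimal Python sketch using hypothetical threshold and ratio values (real devices add attack, release, and hold smoothing):

def downward_expander_gain_db(level_db, threshold_db=-50.0, ratio=3.0):
    # Above threshold: no gain change. Below: attenuation grows as the level drops further.
    if level_db >= threshold_db:
        return 0.0
    return (level_db - threshold_db) * (ratio - 1)  # negative dB, level dependent

def noise_gate_gain_db(level_db, threshold_db=-50.0):
    # Above threshold: no gain change. Below: hard mute, regardless of level.
    return 0.0 if level_db >= threshold_db else float("-inf")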

Silence Gate

This is a somewhat proprietary term. It is a parameter setting available on the Aphex 320A and 320D Compellor hardware Leveler/Compressor.

Compellor

When a passing signal level drops below the user defined Silence Gate threshold for 1 second or longer, the device’s VCA (Voltage Controlled Amplifier) gain is frozen. The Silence Gate will prevent the Leveling and Compression processing from releasing and inadvertently increasing the audibility of background noise.

-paul.


Hardware Inserts In Your DAW

It is possible to implement support for use of external hardware processing components within your software DAW. This support is common in music recording and audio post production environments.

When properly implemented, operators have the capability to insert an instance of an external component (or chain) on a DAW audio track just like any other installed third party software plugin.

Besides potential tonal advantages, routing through a specialized external component can be less taxing on the host system’s resources.

Requirements

1 – Your Interface must have an available output (mono or stereo) for routing audio to an external component. You will also need an available input (again, mono or stereo) to accept the processed audio.

2 – Your DAW must support the routing.

Pro Tools and Logic Pro X

In the Pro Tools I/O settings you must define a set of available (and matching) Interface inputs and outputs for signal routing. In Logic Pro X, there is an I/O routing option plugin included in the Utility plugins group.

Have a look at the routing configuration options for both DAWs:


The upper image displays a Pro Tools Insert Routing matrix. The default audio interface has a total of 8 inputs and outputs available as discrete I/O mono channels. They can remain as such. Alternatively, they can be paired to create four stereo signal paths.

I’ve defined three instances or parent paths of “Aphex” inserts using interface inputs and outputs 3 + 4. My processing chain supports a stereo signal flow or discrete dual mono.

The first Aphex instance is a stereo insert. Clicking the disclosure triangle reveals two associated mono channels that make up the stereo pair. This configuration translates in Pro Tools as a stereo hardware insert or as two discrete mono inserts.

At the bottom of the list I’ve also created two custom mono paths that will pass audio to discrete mono component channels. This alternative solution is unnecessary in this particular configuration, since the stereo instance above provides the same level of flexibility with support for mono accessibility. Just be aware the option exists.

The lower image displays a Logic Pro X stereo I/O instance as it would appear when inserted on any track. Notice how I am using the same combination of interface channels (3 + 4) to output the signal to external components, and to route the processed audio back into the DAW.

Use Case

Let’s say you are the proud owner of the very affordable and recommended dbx 266xs Dynamics Processor. You would like to use it to pre-process a discrete channel Skype session in realtime. This dbx Compressor, Limiter, and Gate can function as a dual mono processor. With routing properly configured, you can insert mono instances of the hardware processor on discrete tracks in your DAW session. Simply customize settings for each dbx channel and fire away.


My Chain

Over the years I’ve accumulated various analog audio processors by Telos, dbx, and Aphex. In the displayed diagram I disclose part of my current configuration with a few active components.


Before I get into the Pro Tools insert path configuration, let me explain the basic signal routing:

• I use a Mackie Onyx 1220i FW Mixer in combination with a Motu Audio Express USB/FW Interface. The Mackie controls a POTS line mix-minus using a Telos Digital Hybrid. The mixer also controls signal routing scenarios and recording on a Marantz CF Recorder. I use the mixer’s Control Room outputs to feed the inputs of a power amplifier to drive my JBL near-field monitors.

• The Motu’s Main Outputs are patched to the mixer. This audio is available on the Control Room outputs. I can easily switch back and forth between the mixer and the interface, designating one or the other as the default I/O.

• The mixer also functions as a secondary gain stage for the mic signal path. Notice how the mic is directly connected to the dbx 286A Voice Processor. Its balanced line output feeds the channel 1 line input on the Mackie. The balanced Mackie Main Outputs are set to deliver a Mic Level signal. They feed the Mic Level inputs on the Motu interface. These inputs can be linked and routed to a single stereo DAW track. Alternatively I can designate the inputs to deliver discrete mono. This is handy when a second mic is integrated.

• The dbx160a is a single channel (mono) compressor. It is connected to the Mackie’s channel 2 insert. I can use this device as a serial processor on mixer channel 2. I can also insert it on the channel that returns a telco caller’s POTS audio back to the mixer. In this scenario I can easily bypass its use on an insert and instead connect it in-line.

• All system connections are made with balanced XLR and TRS cables.

Not pictured: Aphex Expressor (mono) Compressor, Aphex 622 Expander/Gate, and Aphex two channel Parametric EQ.

Hardware Chain Insert

Let’s focus on the Pro Tools Insert path, instantiated on a stereo audio track:

The two (pictured) devices that I am currently using for external audio processing are by Aphex: 320a Compellor, and the 720 Dominator II. The 320a Compellor is widely used in radio broadcast facilities. This device can be configured to function as a Leveler, Compressor, or a mixture of both. A Process Balance setting controls the Leveling and Compression weighting. It supports stereo and dual mono processing. The current “D” version supports AES/EBU Digital I/O.

The Dominator II is a 3-band Peak Limiter with adjustable crossovers and zero overshoot. This device is also widely used in broadcast facilities and for live performances. The current 722 version features enhanced broadcast processing support, including Pre-Emphasis and De-Emphasis options.

With the Motu interface designated as the default I/O, its 3+4 Line Outputs route audio via insert from a Pro Tools audio track to the Compellor’s inputs. The Compellor’s outputs feed the Dominator II’s inputs. Its outputs feed the Motu’s Line Inputs, routing the processed audio back to the DAW track where the hardware insert was originally instantiated.

A Skype session would be an obvious use option. In this case I would implement discrete mono hardware processing using two separate insert instances. In fact I can use this configuration when recording any audio source, or as a realtime processing option for output, playback, and streaming.

As far as playback, the Motu interface supports a Mix 1 Return option. In essence I can patch my system’s output into Pro Tools. With Input Monitoring activated, I can route the signal through the external processors and monitor the wet audio. This is a handy feature during playback of poorly produced programs.

Audition

Unfortunately Adobe Audition does not support hardware inserts. However there are various ways to integrate your external components in a multitrack session. For example you can patch a track’s output (or outputs) to an available interface output that feeds an external component’s input (or inputs). The processed audio is then routed to available interface inputs. By defining this active interface input as a track input, you essentially route processed audio back into the session.

This signal routing option will work in any DAW. Be aware you run the risk of initiating feedback loops! To avoid this, please make sure the software routing utility for the particular interface is properly configured.

In Conclusion

It is easy to integrate your analog gear in your software DAW. Use case scenarios are endless. Of course support and effectiveness will vary across all components and applications. I will say it’s a pretty cool feature, especially when software versions of coveted analog devices simply do not exist.

-paul.


Understanding Pan Mode Options

Adobe Audition and Logic Pro X include Pan Mode preference options that determine track output gain for center panned mono clips included in stereo sessions. These options are often the source of confusion when working with a combination of mono and stereo clips, especially when clips are pre-Loudness Normalized prior to importing.

In Audition, the Left/Right Cut (Logarithmic) option retains center panned mono clip gain. The -3.0 dB Center option (which by the way is customizable) will attenuate center panned mono clip gain by the specified dB value.

For example if you were targeting -16.0 LUFS in a stereo session using a combination of pre-Loudness Normalized clips, and all channel faders were set to unity – the imported mono clips need to be -19.0 LUFS (Integrated). The stereo clips need to be -16.0 LUFS (Integrated). The Left/Right Cut Pan Mode option will not alter the gain of the center panned mono clips. This would result in a -16.0 LUFS stereo mixdown.

Conversely the -3.0 dB Center Pan Mode option will apply a -3 dB gain offset (it will subtract 3 dB of gain) to center panned mono clips resulting in a -19.0 LUFS stereo mixdown. In most cases this -3 LU discrepancy is not the desired target for a stereo mixdown. Note 1 LU == 1 dB.
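
The 3 dB figure is not arbitrary: a center panned mono clip feeds identical signals to both channels, and two coherent channels sum to twice the power, which is 10 x log10(2) ≈ 3.01 dB. A quick sketch of the arithmetic in Python, assuming unity faders and a -19.0 LUFS (Integrated) mono clip:

import math

mono_lufs = -19.0
center_gain_db = {"Left/Right Cut": 0.0, "-3.0 dB Center": -3.0}

for mode, gain in center_gain_db.items():
    # identical L and R copies sum to roughly +3 dB of additional loudness
    stereo_lufs = mono_lufs + gain + 10 * math.log10(2)
    print(f"{mode}: approx. {stereo_lufs:.1f} LUFS")
# Left/Right Cut: approx. -16.0 LUFS | -3.0 dB Center: approx. -19.0 LUFS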

As stated, Logic Pro X provides a similar level of Pan Mode flexibility. I’ve also tested Reaper, and its options are equally flexible.

Pro Tools

Pro Tools Pan Mode support (they call it Pan Depth) is somewhat restricted. The preference is limited to Center Pan Mode, with selectable dB compensation options (-2.5 dB, -3.0 dB, -4.5 dB, and -6.0 dB).

There are several ways to reconstitute the loss of gain that occurs in Pro Tools when working with center panned mono clips in stereo sessions. One option would be to duplicate a mono clip and place each instance of it on hard-panned discrete mono tracks (L+R respectively). Routing the mono tracks to a stereo output will reconstitute the loss of gain.

A second and much more efficient method is to route all individual instances of mono session clips to a stereo Auxiliary Input, and use it to apply the necessary compensating gain offset before the signal reaches the stereo Master Output. The gain offset can be applied using the Aux Input channel fader or by using an inserted gain trim plugin. Stereo clips included in the session can bypass this Aux and should be directly routed to the stereo Master Output. In essence stereo clips do not require compensation.

Example Session

Have a look at the attached Pro Tools session snapshot. In order to clearly display the signal path relative to its gain, I purposely implemented Pre-Fader Metering.


Notice how the mono spoken word clip included on track 1 is routed (by way of stereo Bus 1-2) to a stereo Auxiliary Input track (named to Stereo). Also notice how the stereo signal level displayed by the meters on the Stereo Auxiliary Input track is lower than the mono source that is feeding it. The level variation is clear due to Pre-Fader Metering. It is the direct result of the session’s Pan Depth setting, which subtracts 3 dB of gain on this center panned mono track.

Next, notice how the signal level on the Master Output has been reconstituted and is in fact equal to the original mono source. We’ve effectively added +3 dB of gain to compensate for the attenuation of the original center panned mono clip. The +3 dB gain compensation was applied to the signal on the Auxiliary Input track (via fader) before routing its output to the stereo Master Output.

So it’s: Center Panned mono resulting in a -3dB gain attenuation —>> to a stereo Aux Input with +3dB of gain compensation —>> to stereo Master Output at unity.

In case you are wondering – why not add +3 dB of gain to the mono clip and bypass all the fluff? By doing so you would be altering the inherent gain structure of the mono source clip, possibly resulting in clipping. My described workflow simply reconstitutes the attenuated gain after it occurs on center panned mono clips. It is all necessary due to Pro Tools’ Pan Depth methods and implementation.

-paul.


Utilizing Multiple Outputs for Recording

The vast majority of audio industry professionals use DAWs running on proficient computer systems to record audio directly to secondary hard disks. For some reason direct to disk recording is not widely endorsed in the Podcasting space. Many consultants (for various reasons) advise against this recording method. Instead, they recommend the use of inexpensive hand-held solid state Recorders.

For instance I’ve heard a few people state “computers cause ground loops”, hence the widespread Portable Recorder recommendation. In my opinion that is a half-baked assertion. In fact, ANY electronic component in a signal chain (including your electrical system) is capable of producing inherent noise. Often the replacement of cheaply manufactured components (interfaces, mixers, processors, cables, etc.) will solve audible noise problems. The key is to isolate the source and correct or replace it.

Portable Recorders are well suited for location interviews and video shoots. For in-studio sessions I feel direct to disk recording on a proficient system is much more flexible compared to the use of an external device. More so, the sole use of a Portable Recorder without a proper backup strategy is flat out risky.

That being said I thought I would document a basic Skype Recording session that I implemented in Pro Tools using a multi-output Motu Audio Interface. The incoming audio will be recorded on a secondary hard disk installed (or interfaced) on the host system. The real time session audio will also be routed to an alternate Interface Output, feeding an external Recorder for backup purposes.


Note a multi-output Mixer can be used in place of an Audio Interface. As far as software you can use any modern DAW to replicate the described session. If you are using a Mac, Rogue Amoeba’s distinctive Audio Hijack application is also highly capable.

Objectives:

1-Record Studio Host and Skype Participant on discrete mono tracks in real time.

2-Combine the discrete recordings and create a split-stereo clip with independent dynamics processing applied to each channel, all in real time.

3-Use a Pre-Fader Send to independently control the level of the split-stereo discrete recording, and patch the real time signal to the Interface S/PDIF Output. This will feed the external Recorder’s S/PDIF Input.

4-Monitor the session through Headphones and play out through Desktop near-field Monitors.

Please review the displayed Pro Tools session snapshot.

• The Input for the mono Host track is the Interface connected mic. The Input for the mono Skype track is “Mix 1 Return.” This is an Interface supported feature, allowing the operator to route the computer’s Output (in this case Skype) to an available DAW Input. This configuration effectively creates a mix-minus with discrete, unprocessed recordings on individual mono tracks.

• The mono recording tracks are routed to individual mono Aux Input tracks using Buses. The Aux Input tracks are hard-panned L+R and contain various inserted processing options, including a Gain Trim, Expander, and Compressor.

The processing applied in this session is not intended to replace what would normally occur in post. The Compressors are there just to tame dynamics in the event either participant exceeds nominal input levels. The Expander is set up to apply mild attenuation when the host is not speaking.

• The Aux Input tracks have their Outputs set to a common stereo Bus.

• Finally, a third standard stereo audio track (Rec-Sum) uses the stereo Bus Output(s) as its Inputs. By hard panning the channels L+R we are able to maintain discrete channel separation within any printed stereo clip.

To record the discrete raw audio and the processed split-stereo audio in real time, we simply arm all session Audio tracks to record and fire away. The session can be monitored through Headphones and played out through near fields via the Main Output.

Secondary Output

The Motu Interface used for this session has a total of 8 Outputs, including a stereo S/PDIF option. I implemented a Pre-Fader Send on the session’s Rec-Sum channel with its Output set to S/PDIF. This will route the track’s split-stereo audio to the S/PDIF stereo Input of an external Marantz CF Recorder. With the Send designated as Pre-Fader, its level control will be independent of the parent (Rec-Sum) channel fader, thus allowing discrete control of the real time signal being fed to the Recorder.

Note in the displayed Pro Tools session snapshot – the floating fader positioned to the left of the mixer is a user friendly and easily accessible copy of the much smaller Send fader displayed in the parent (Rec-Sum) track.

In summary, we can successfully initialize and capture 4 recordings in a single pass: the raw Host audio, the raw Skype participant audio, a split-stereo processed version of the Skype session, and a split-stereo copy of the processed Skype session stored on the Recorder.

The image below displays the completed session with the split-stereo clip playing through the Main Outputs.


My general recommendation: when it is feasible, use direct to disk and Portable recording options in unison on a proficient system to capture in-studio multitrack and single participant Podcast sessions.

-paul.


Bit Depth and Dithering

In a professional environment Dithering will be applied to audio clips when reducing word length. This process will mask errors that occur due to the removal of digital audio bits. I thought I’d cover the basics.


Digital Audio

Digital Audio incorporates individual samples consisting of bits created by the process of Quantization. This is essentially the conversion of a continuous, linear range of values present in analog audio into a fixed range of discrete values. Bit Depth (a.k.a. Word Length or Resolution) represents the number of bits stored in a sample’s measure of amplitude. It indicates the extent of inherent vertical precision. Higher bit depths (or bits per sample) encompass improved vertical dynamic resolution resulting in an extended Dynamic Range.

1 bit = 6dB of Dynamic Range. Theoretically 16bit audio has a quantified Dynamic Range of 96 dB. 24bit audio has a quantified Dynamic Range of 144 dB. However, in order to accurately assess Dynamic Range we must also recognize the amplitude of the highest spectral component of the inherent noise floor. Specifically, where it resides relative to the maximum Peak value that a system is capable of reproducing. Dynamic Range is the measurement of this ratio or range.

Signal to Noise Ratio (SNR) is the quantified range between the nominal average signal level and the average level of the noise floor. Audio with an extended Dynamic Range will exhibit a higher SNR compared to audio with a reduced Dynamic Range. In essence 24bit audio will allow you to work with additional headroom without any increase in noise compared to 16bit audio.
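
The 6 dB-per-bit figure mentioned above comes from simple doubling math: each added bit doubles the number of quantization steps, and 20 x log10(2) ≈ 6.02 dB. A quick check in Python (theoretical quantization range only – real converters, noise shaping, and analog stages will differ):

import math

for bits in (16, 24):
    theoretical_range_db = 20 * math.log10(2 ** bits)  # roughly 6.02 dB per bit
    print(f"{bits} bit: {theoretical_range_db:.1f} dB")
# 16 bit: 96.3 dB | 24 bit: 144.5 dB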

Word Length Reduction

Truncation is the removal of bits with no compensating replacement. The repositioning of samples after converting to a lower resolution creates Quantization Errors resulting in audible artifacts and distortion. Dithering is technology that adds minimal perceived noise to audio before word length reduction. This noise will minimize and/or mask the audibility of distortion caused by Quantization Errors. It will also help preserve the sound quality and Dynamic Range of a higher resolution clip when converting or exporting to a lower bit depth.

There is a trade off: you are replacing bad noise with alternative “good” noise that is smoother, less audible, and much more consistent.
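
Here is a minimal Python/NumPy sketch of the idea – requantizing to 16 bit with and without TPDF (triangular) dither added first. It illustrates the principle only; it is not how any particular DAW or mastering tool implements Dithering, and there is no Noise Shaping here:

import numpy as np

def to_16bit(x, dither=True):
    x = np.asarray(x, dtype=float)   # float audio in the range [-1.0, 1.0)
    q = 2 ** 15                      # 16 bit signed full scale
    if dither:
        # TPDF dither: difference of two uniform noises, roughly +/- 1 LSB peak
        x = x + (np.random.rand(x.size) - np.random.rand(x.size)) / q
    # Requantization - this is where the error (and potential distortion) is introduced
    return np.clip(np.round(x * q), -q, q - 1).astype(np.int16)

Note the dither noise is added before the word length reduction, consistent with the rule that Dither is the very last stage of the chain.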

Noise Shaping is a supplemental feature that pushes Dithering noise into frequency ranges that are less audible to humans, thus allowing greater Dither with reduced perceptual noise.

Take a look at the Noise Shaped frequency response curve in the attached image. There is a clear visual indication of increased gain at higher frequencies that we are less susceptible to.

Podcasting

So what does this all mean for the typical Podcast Producer? Is Dithering just another obscure aspect of professional Audio Mastering and Post Production that can be safely ignored?

Consider the following variables:

If you are recording spoken word in a well suited environment that is reasonably quiet, and you are using capable (and trouble free) gear that is properly configured, there is really no reason to record 24bit audio. In my honest opinion, with proper handling, 16bit audio from acquisition to distribution will be perfectly acceptable.

Remember, I’m specifically referring to spoken word audio slated for Podcast distribution. If you are tracking music, well then by all means make full use of the advantages of higher resolution audio recording.

If you elect to record 24bit audio, and you are not properly implementing the word length reduction to 16bit, you are essentially nulling the advantages of the original higher resolution audio. When down-converting, you will be unknowingly degrading the sound quality by introducing artifacts and distortion. That’s not my opinion – it is a fact.

Consider this: The stand-alone version of iZotope’s Ozone 7 Mastering Suite processes all imported audio to 32bit word length. The manual specifically states:

“If you select a bit depth other than 32-bit, you may want to apply dither to your export. Ozone processes files at 32-bit so dither is desirable for files being exported to values lower than 32-bit.”

Most DAWS include Dithering options. In some cases it’s by way of a plugin. You may also notice Dithering options included in application Preferences or Export dialogs. Hopefully after reading this article you will understand what it all means and whether you should consider implementing it. Please note that Dither must be applied at the very last stage of any processing chain.

-paul.


dbx 286s: Beyond The Basics …

The dbx brand has been a favorite of mine since the late 1970’s. My first piece of dbx kit was a stand-alone noise reduction unit that I coupled with an old Teac Reel to Reel Tape Deck. Through the years I’ve owned various EQ’s and Dynamics processors, including the highly regarded 160A Compressor. I purchased mine in 2006.


In January 2011 I was skimming through eBay listings looking for a dbx 286A Microphone Preamp Processor. At the time I had heard the original 286 model was co-designed by Bob Orban, and both models were widely used in Radio Broadcast facilities. I found it interesting that Radio Engineers would use a piece of gear that was not only cheap in terms of cost – but unconventional in terms of controls.


One piece was available on eBay, supposedly used for 4 hours at a party in Hollywood Hills California, and then boxed for resale. The seller had a positive reputation, so I grabbed it for $115. Upon arrival its condition was as described, and it’s been in my rack ever since.

The 286/286A has evolved into the 286s, quite frankly an outright steal priced at $199. Due to its straightforward approach and affordable price, the Podcasting community has embraced it and often classifies it as “drool-worthy.” Pretty amusing.


In this article I am going to discuss the attributes of the Compressor stage and the De-Esser. I’ll demystify the DeEsser and talk about the importance of the Output (Gain) Compensation setting.

Unconventional

I mentioned the processor is unconventional. For example the Compressor’s Drive and Density controls essentially replace Threshold, Ratio, Attack, and Release – present on most Compressors.

The De-Esser requires a user defined High-Pass Frequency designation and Threshold to reduce excessive sibilance. Setup can be time consuming due to the lack of any visual representation of problematic energy in need of attenuation.

Compressor:Drive

Compression results depend on the level (and dynamics) of the incoming signal and the corresponding settings. On a conventional compressor the Threshold monitors the incoming signal. When the signal surpasses the Threshold, processing engages and gain reduction is activated. The Ratio determines the amount of gain reduction. The Attack will affect how aggressively (or the speed at which) gain reduction initializes and then reaches maximum attenuation. The Release will control the speed of the transition from full attenuation back to the original level.

The Drive control on the 286s determines the amount of gain reduction (compression) that will be applied to the incoming signal. Higher settings will increase the input signal level and yield more aggressive compression (and noise).

How much gain reduction should you shoot for? Well that’s subjective. I would recommend experimenting with 6-12dB of gain reduction. Of course results will vary on a case by case basis due to obvious variables (mic selection, preamp level, etc.)

Compressor:Density

When using a compressor to process spoken word, improper Release settings can result in choppiness, often referred to as pumping. The key is to have the gain reduction occurrences smoothly transition between instances of audible sound and natural pauses (silence).

The 286s uses a variable program dependent Release. In the event you feel (and hear) the necessity to speed up or slow down the program dependent Release – the Density control will come in handy.

Note that the Density scale on the 286s is again somewhat unconventional. On a typical dynamics processor – setting the Release full counter-clockwise would result in a very fast Release. As the setting is adjusted clockwise, the Release duration would be extended. The scale usually transitions from milliseconds to full seconds.

On the 286s, think of Density as a linear speed controller, where “1” (counter-clockwise) is slow and “10” (full clockwise) is fast.

For normal speech I would recommend experimenting with the Density set between 3 and 5.

The De-Esser

If you check around you will notice a wide range of references regarding the frequency range where sibilance generally occurs. In reality there are so many variables and each instance of sibilance will need to be properly identified and handled accordingly.

The 286s De-Esser uses a variable high-pass filter that tells the processor where to begin to attenuate problematic energy. This Frequency control has a range of 800Hz-10kHz. The user manual states “… settings between 4-8kHz will yield the best results for vocal processing.” This is a good starting point. However, proper setup requires time consuming arbitrary tweaking that may result in a low level of accuracy. A visual representation of the frequency range of the excessive sibilant energy will solve this problem. Once you identify the frequencies and/or range where most of the energy is present, setting the Frequency on the 286s will be demystified.

The De-Esser’s Threshold setting controls the amount of attenuation (sensitivity) and will remain constant as the input level changes.

Have a look at the spectral analysis below:


Notice the excessive energy in the 2-6kHz range (Frequency Range is represented on the X axis). For this particular segment of audio I would initially set the Frequency control on the 286s to 5kHz. Next I would adjust the Threshold until the sibilant energy is attenuated. I would then sweep the Frequency setting within the visual range of the sibilant energy and fine tune both settings until I achieve the most pleasing results. The key is not to overdo it. Heavy attenuation will suppress vital energy and remove any hint of natural presence and sparkle.

To perform this analysis exercise, set the Threshold setting on the 286s to OFF. Pass the output of the processor to your DAW of choice and perform a real time spectral analysis of your voice using a software plugin that includes a Spectrum Analyzer. You can use any supported EQ plugin with its controls bypassed. You can also use something like the free (AU/VST) Span plugin by Voxengo (note that Span is CPU intensive).
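
If you would rather run the analysis on a recorded file instead of a real time plugin, here is a minimal Python sketch using SciPy and matplotlib. The filename is hypothetical – record a short pass of your voice through the 286s (Threshold OFF) and point the script at it:

import soundfile as sf
from scipy.signal import welch
import matplotlib.pyplot as plt

audio, sr = sf.read("voice_take.wav")    # hypothetical capture, post-286s
if audio.ndim > 1:
    audio = audio.mean(axis=1)           # fold to mono for analysis

freqs, power = welch(audio, fs=sr, nperseg=4096)
plt.semilogy(freqs, power)
plt.xlim(800, 10000)                     # the 286s De-Esser Frequency range
plt.xlabel("Frequency (Hz)")
plt.ylabel("Power")
plt.show()
# Look for a concentration of energy, typically somewhere in the 2-8kHz region.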

Output Gain Compensation

Gain Compensation is an integral part of Audio Compression. It is most commonly used to offset the gain reduction that occurs when audio is compressed. It is often referred to as Make-up Gain. When this gain offset is applied to compressed audio, the perceived, average level of the audio is increased. Excessive Make-up Gain can sometimes elevate noise that may have been previously inaudible at lower average levels.

Earlier I discussed how an elevated Drive control setting on the 286s will increase the input signal of low level source audio. In doing so you may pick up a suitable amount of compression. However you also run the risk of a noticeable increase in noise. In this particular scenario, try setting the Output Gain on the 286s to a negative value to offset the gain (and noise) that may have been introduced by the Drive setting.

Conclusion

I think it’s important to first learn the basics of Audio Compression from a conventional perspective. In doing so you will find it easier to get the most out of the unconventional controls on the dbx 286s, especially Drive and Density.

And let’s not forget that De-Essing is really nothing more than frequency band compression that will attenuate problematic energy. Establishing a visual reference to the energy will simplify the process of accurate correction.

-paul.


Skype, Logic Pro X, and Aggregate Devices …

Scenario:

Studio Host and Skype participant to be recorded inside Logic Pro X on a single machine (single pass) with no additional hardware other than a Mic Input Device.

Objectives:

– Two independent mono Host/Participant stems with no processing.

– One processed split-stereo mixdown of the session with the Host and Guest residing on discrete (L+R) channels.

– Real time Processing and Recording of all instances.


Of course the objectives noted above are easily attainable using two independent machines, with the recording box running Logic Pro X and the Skype machine handling the connection. In this case you would also need to use a mixer to set up a proper mix-minus.

You can also implement similar workflows by using two inexpensive USB audio interfaces connected to a single machine.

Considering the resourcefulness of today’s modern Macs, I’m confident the following workflow will be successful, freeing the user from complexities and added costs.

OSX Aggregate Devices

The foundation of this setup is based on a user created Aggregate Audio Device. Aggregate devices appear in the OSX System Preferences/Sound I/O options for system wide use. By wrapping supported “Subdevices” into a single Aggregate, you effectively create a sort of cumulative Input Device that can be designated in Logic as the default. We also need a software utility that supports routing of the Skype Output to an Input in Logic.

I originally created this workflow using SoundFlower, which was installed on my secondary iMac and carried over from previous versions of OSX. SoundFlower, along with the iMac’s Line Input, were wrapped into a single Aggregate Device, and then designated in Logic as the default Input.

This worked well. However, I had no plans to install the now unsupported SoundFlower on my production MacPro for further testing. And so I looked around for a suitable up to date (and actively developed) replacement for SoundFlower.

Sound Siphon

Sound Siphon by Static Z Software “… makes your Mac’s Audio Output available as an Audio Input Device. It enables you to send audio from one application to another where it can be processed, streamed, or recorded.”

Exactly what I needed.

Note that Sound Siphon is very diverse in terms of features. And the developer states that many useful enhancements are in the works. You can download a restricted demo. My hope is that you consider purchasing a $29.99 license. This will ensure the longevity of the application and continued development. Note that I have no affiliation and I gladly purchased a license.

This is a snapshot of Sound Siphon:


In the example above I display a user defined Device (“Capture Safari”) that is essentially a Custom Audio Input. I then associated the Safari Application with this device. This becomes a system wide option to capture Safari audio. For example QuickTime X will now display “Capture Safari” as an Input option for audio recording.

It’s important to note that this particular Sound Siphon feature is supplemental to the Skype recording implementation. In other words – it’s an entirely different use case scenario. My goal here is to disclose the flexibility of the application.

Creating the Aggregate Device

Input 1 on my Mackie Onyx 1220i Mixer receives the output from a dbx 286A Voice Processor. The studio Mic is connected to the processor for proper gain staging. I needed to wrap the Mic signal along with the Skype audio into a single Input Device and designate it in Logic’s Preferences for proper routing.

To create an Aggregate Device, open Audio MIDI Setup, located in ~/Applications/Utilities. When creating a new Aggregate, supported Subdevices appear in the right side setup table.


Notice that Sound Siphon is listed as a 2 in/2 out device in the left source view. This is created when you install the application. Once installed, it will be available to be wrapped into an Aggregate Device along with pre-existing devices.

For my implementation I created “Skype Tracker” as a new Aggregate and selected my mixer (Onyx-(2528)) and Sound Siphon as Subdevices. Up top you set your Sample Rate and the Clock Source. My system seems to perform better with Sound Siphon set as the Clock Source.

It’s important to review the Input Channel matrix of the new Aggregate Device. Notice that Sound Siphon will only support Input channels (17+18). When routing Inputs in Logic, I will use Input 1 for the studio Mic and Input 17 for Skype.

Skype

Here are the Skype settings that I am using:


The Microphone is set to the Aggregate Device. The Speakers option is set to Sound Siphon. This setting is imperative and, from what I can tell, not flexible.

Logic Pro X

The first thing we need to do is define the Input Device in Global Preferences/Audio/Devices. I set mine to the Aggregate Device:


Next we will address setup and routing. What’s important here is that I use an Object in Logic that may not be immediately obvious in your particular installation.

Specifically, I often use Input Channel Strip Objects in my projects. They are implemented in the Environment (aka “MIDI Environment”), which is accessible from the Logic Window Menu.

From the Logic Docs regarding Input Channel Strips:

“The Input Channel Strip allows you to directly route and control signals from your audio hardware’s Inputs. Once an Input Channel Strip is assigned to an Audio Channel Strip, it can be monitored and recorded directly into Logic Pro, along with its effect plug-ins.

The signal is processed, inclusive of plug-ins even while Logic Pro is not playing. In other words, Input Channel Strips can behave just like external hardware processors. Aux sends can be used pre- or post-fader.

Input Channel Strips can be used as live Inputs that can stream audio signals from external sources (such as MIDI synthesizers and sound modules) into a stereo mix (by bouncing an Output Channel Strip).”

You can also create Bus Channel Strip Objects in the Environment. They are not the same as Auxiliary Channel Strips and can be quite useful in certain instances. For more information about Bus Channel Strips please refer to this article.

The Environment

To expose the accessibility of the Logic Environment, open global Preferences and access the Advanced options. The MIDI option needs to be selected as part of the Advanced Tools:


Once that setting is ticked, “Open Midi Environment” will appear as an option in the Logic Window Menu.

Channel Strip Objects are added to the Environment from the New Menu/Channel Strip. Notice how the Environment emulates the Project Mixer:


Note that when adding Input Channel Strips in the Environment, you must define the corresponding (Aggregate) Device Inputs using the Channel Strip editor:


For this particular project I created two Input Channel Strips in the Environment using Inputs 1 and 17 respectively, based on Aggregate Subdevice availability (Input 1 = Mic, Input 17 = Skype).

You will also need 4 Audio Tracks (2 Mono, 1 Stereo, 1 PreListen), and 2 (Mono) Auxiliary Channel Strips. Create Audio Tracks using the Track/New Tracks option – located in the Logic Application Menu. Add Auxiliary Channel Strips using the Mixer’s Options Menu/Create New. Note that the Input Channel Strips created in the Environment should be designated Mono.

Here is my Project Mixer with all necessary Objects and Routing:


Routing

The reddish labeled channels are the two Input Channel Strips that I created in the Environment. If you look at the text at the very top of these Channel Strips, you will see their Input designations.

The signals coming in through the Inputs are routed to their own independent Aux Channels for processing. Notice I inserted a Gain Trim on the Mic Input Channel. All processing options are of course subjective. One example would be to insert two instances of a Compressor – one on each Aux Channel. You would set these up to apply real time, non-aggressive dynamic range compression as you record.

Moving forward – notice the Aux Channels are Mono and hard panned L+R respectively. This will maintain channel separation when recording the split-stereo version of the session. In this example each Aux Channel Output is routed to Audio Channel 3 (“Split Record”). This Stereo Audio Track is panned center. When armed it will record the Aux Channel Outputs to a split-stereo file.

Also study how I set up the remaining Audio Tracks – Audio Track 1 (“Rec. Mic”) and Audio Track 2 (“Rec. Skype”). Their Inputs are set to Bus 1 and 2 respectively, allowing these tracks to receive the unprocessed Outputs (“dry” audio) from the Input Channel Strips.

Keep in mind that if Effects are inserted on the Input Channel Strips, the audio routed to Audio Tracks 1+2 will be processed. In most cases I would not insert any Effects on the Input Channel Strips other than Gain. My intention here is to record dry stems.

I Grouped various aspects of these two channels, mainly Volume, Mute, Solo, and Record. This will link the faders and make it easy to control audibility of the mono stems cumulatively.

Wrap Up

That’s basically it. You can record/monitor all tracks in real time. And when you are done, there is no need to bounce, although you still can. You simply “Export” or “Export Region” as individual file(s).


Notes

You may have noticed the Outputs for the Auxiliary Channel Strips (1+2) and the Input for Audio Track 3 (“Split Record”) is Bus 3. This is in fact a virtual (permanent) Bus used to route the processed audio to Track 3 for recording.

When you select a permanent virtual Bus in Logic for routing, an Auxiliary Channel Strip is auto-created and will appear in the Mixer. For this particular workflow – we use two Auxiliary Channel Strips, one for Mic processing and a second for Skype processing.

Throughout this entire workflow no changes were made to my default OSX Audio I/O Settings located in System Preferences/Sound.

As I always say – Audio Tracking and Post are highly subjective arts. In fact many Logic “experts” have never heard of or utilized the options in the Environment. And your processing options are also subjective. My hope is this documentation will at the very least introduce you to the creation and usage of Aggregate Devices.

If by chance you develop a successful alternative solution, all well and good. In my tests I’ve found the documented implementation to work quite well.

Let me know if you have any questions.

I’d like to thank my friend Victor Cajiao for his help while testing this workflow.

-paul.


Asymmetric Waveforms: Should You Be Concerned?

In order to understand the attributes of asymmetric waveforms, it’s important to clarify the differences between DC Offset and Asymmetry …

Waveform Basics

A waveform consists of both a Positive and Negative side, separated by a center (X) axis or “Baseline.” This Baseline represents zero amplitude as displayed on the (Y) axis. The center portion of the waveform that is anchored to the Baseline may be referred to as the mean amplitude.


DC Offset

DC Offset occurs when the mean amplitude of a waveform is off the center axis due to differing amounts of the signal shifting to the positive or negative side of the waveform.

One common cause of this shift is when faulty electronics insert a DC current into the signal. This abnormality can be corrected in most file based editing applications and DAW’s. Left uncorrected, audio with DC Offset will exhibit compromised dynamic range and a loss of headroom.

Notice the displacement of the mean amplitude:


The same clip after applying DC Offset correction. Also, notice the preexisting placement of (+/-) energy:


Asymmetry

Unlike waveforms that indicate DC Offset, an Asymmetric waveform’s mean amplitude will reside on the center axis. However, the representations of positive and negative amplitude (energy) will be disproportionate. This can inhibit the amount of gain that can be safely applied to the audio.

In fact, the elevated side of a waveform will tap the target ceiling before its counterpart, resulting in possible distortion and the loss of headroom.

High-pass filters, and aggressive low-end processing are common causes of asymmetric waveforms. Adding gain to asymmetric waveforms will further intensify the disproportionate placement of energy.
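
If you want to quantify what you are seeing, here is a quick Python/NumPy check of both conditions (the filename is hypothetical):

import numpy as np
import soundfile as sf

x, sr = sf.read("clip.wav")    # hypothetical clip to inspect
if x.ndim > 1:
    x = x.mean(axis=1)

dc_offset = x.mean()                                  # a non-zero mean suggests DC Offset
asymmetry_db = 20 * np.log10(x.max() / abs(x.min()))  # 0 dB = symmetric positive/negative peaks

print(f"DC Offset: {dc_offset:+.6f}")
print(f"Peak asymmetry: {asymmetry_db:+.2f} dB")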

In this example I applied a high-pass filter resulting in asymmetry:


Broadcast Chains

Broadcast engineers closely monitor positive to negative energy distribution as their audio passes through various stages of transmission. Proper symmetry aids in the ability to process a signal more effectively downstream. In essence uniform gain improves clarity and maximizes loudness.

Podcasts

In spoken word – symmetry allows the voice to ride higher in the mix with a lower risk of distortion. Since many Podcast Producers will be adding gain to their mastered audio when loudness normalizing to targets, the benefits of symmetric waveforms are obvious.

In the event an asymmetric waveform represents audio with audible distortion and/or a loss of headroom, a Phase Rotator can be used to reestablish proper symmetry.

Below is a segment lifted from a distributed Podcast (full zoom out). Notice the lack of symmetry, with the positive side of the waveform limited much more aggressively than the negative:


The same clip after Phase Rotation:


(I processed the clip above using the Adaptive Phase Rotation option located in iZotope’s RX 4 Advanced Channel Ops module.)

In Conclusion

Please note that asymmetric waveforms are not necessarily bad. In fact the human voice (most notably male) is often asymmetric by nature. If your audio is well recorded, properly processed, and pleasing to the ear … there’s really no need to attempt to correct any indication of asymmetry.

However if you are noticing abnormal displacement of energy, it may be worth looking into. My suggestion would be to evaluate your workflow and determine possible causes. Listen carefully for any indication of distortion. Often a slight EQ tweak or a console setting modification is all that may be necessary to make noticeable (audible) improvements to your audio.

-paul.


Intermediate File Format for New Media Producers: MP2

If you are in the audio production business or involved in some sort of collaborative Podcast effort, moving large lossless audio files to and from various locations can be challenging.

Slow internet speeds, Hotel WiFi, and server bottlenecks have the potential to cripple efficient file management and ultimately impede timely delivery. And let’s not forget how quickly drive space can diminish when storing WAV and/or AIFF files for archival purposes.

The Requirements for a Suitable Intermediate

From the perspective of a Spoken Word New Media Producer, there are two requirements for Intermediate files: Size Reduction and Retention of Fidelity. The benefits of file size reduction are obvious. File transfers originating from locations with less than ideal connectivity would be much more efficient, and the consumption of local or remote disk/server space would be minimized. The key here is to use a flexible lossy codec that will reduce file sizes AND hold up well throughout various stages of encoding and decoding.

Consider the possible benefits of the following client/producer relationship: A client converts (encodes) lossless files to lossy and delivers the files to the producer via FTP, DropBox, etc. The Producer would then decode the files back to their original format in preparation for post production.

When the work is completed, the distribution file is created and delivered (in most cases) as an MP3. Finally with a bit of ingenuity, the producer can determine what needs to be retained for archival purposes, and convert these files back to the intermediate format for long term storage.

How about this scenario: Podcast Producer A is located in L.A. Producer B is located in NYC. Producer B handles the audio post for a double-ender that will consist of 2 individual WAV files recorded locally at each location.


Upon completion of a session, the person in L.A. must send the NY based audio producer a copy of the recorded lossless audio. The weekly published program typically runs upwards of 60 minutes. Needless to say the lossless files will be huge. Let’s hope the sender is not in a Hotel room or at Starbucks.

The good news is such a codec exists …

MPEG 1 Layer II (commonly referred to as MP2, with an .mp2 file extension) is in fact a lossy “perceptual” codec. What makes it so unique (by design) is the format’s ability to limit the introduction of artifacts throughout various stages of encoding and decoding. And get this – MP2s check in at about 1/5th the size of a lossless source. For example a 30 minute (16 bit/44.1kHz) Stereo WAV file currently residing on my desktop is 323.5 megabytes. Its MP2 counterpart is 58.7 megabytes.
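
The rough math backs up the 1/5th figure, as the short calculation below shows (ignoring headers and metadata):

wav_bytes_per_sec = 44100 * 2 * 2   # 44.1kHz x 16 bit (2 bytes) x 2 channels
mp2_bytes_per_sec = 256_000 / 8     # 256 kbps stereo MP2

minutes = 30
print(f"WAV: {wav_bytes_per_sec * 60 * minutes / 1e6:.1f} MB")    # ~317.5 MB
print(f"MP2: {mp2_bytes_per_sec * 60 * minutes / 1e6:.1f} MB")    # ~57.6 MB
print(f"Ratio: {wav_bytes_per_sec / mp2_bytes_per_sec:.1f} : 1")  # ~5.5 : 1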

Public Radio

If you look into the file submission requirements over at PRX (The Public Radio Exchange) and NPR (see requirements), you will notice MP2 audio files are what they ask for.

In fact during the early days of IT Conversations, founder and Executive Director Doug Kaye implemented the use of MP2 audio files as intermediates throughout the entire network based on recommendations by some of the most prominent engineers in the Public Radio space. We expected our show producers and content providers to convert their audio files to MP2 prior to submission to our servers using third party software applications.

Eventually a proprietary piece of software (encoder/uploader) was developed and distributed to our affiliates. The server side MP2s were downloaded by our audio engineers, decoded to lossless, produced, and then sent back up to the network as MP2 in preparation for server side distribution encoding (MP3).

From a personal perspective I was so impressed with the codec’s performance, I immediately began to ask my clients to submit MP2 audio files to me, and I’ve never looked back. I have never experienced a noticeable degradation of audio quality when converting a client’s MP2 back to WAV in preparation for post.

Storage

In my view it’s always a good idea to have unfettered access to all previously produced project files. Besides produced masters, let’s not forget the accumulation of individual project assets that were edited, saved, and mixed in post.

On average my project folders that include audio assets for a 30 minute program may consume upwards of 3 Gigabytes of storage space. Needless to say an efficient method of storage is imperative.

Fidelity Retention

If you are concerned about the possibility of audio quality degradation due to compression artifacts, well that’s understandable. In certain instances accessibility to raw, uncompressed audio will be more suitable. However I am convinced that you will be impressed with how well MP2 audio files hold up throughout various workflows.

In fact try this: (Suggested encoders listed below)

Convert a stereo WAV file to stereo MP2 (256 kbps). Compare the file sizes. Listen to the MP2 and assess fidelity retention. Then convert the stereo MP2 directly to stereo MP3 (128 kbps). Listen for any indication of noticeable artifacts.

Let me know what you think …

My recommendation would be to first experiment with converting a few of your completed project assets to MP2 in preparation for storage. I’ve found that I rarely need to dig back into old work. I have on a few occasions, and the decoded MP2s were perfectly fine. Note that I always save a copy of the produced lossless master.

Specifications and Software

The requirements for mono and stereo MP2 files:

Stereo: 256 kbps, 16 bit, 44.1kHz
Mono: 128 kbps, 16 bit, 44.1kHz
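
If you ever need to verify that a converted file meets those specs, FFmpeg’s companion tool ffprobe (FFmpeg itself is covered below) will report the codec, bit rate, sample rate, and channel count:

ffprobe -hide_banner yourFile.mp2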

There are many audio applications that support MP2 encoding. Since I have limited exposure to Windows based software, the scope of my awareness is narrow. I do know that Adobe Audition supports the format. In the past I’ve heard that dBPowerAmp is a suitable option.

On the Mac side, besides the cross platform Audition – there is a handy utility on the Mac App Store called Audio-Converter. It’s practically free, priced at $0.99. File encoding is also supported in FFmpeg either from the Command Line or through various third party front ends.

Here is the syntax (stereo, then mono) for Command Line use on a Mac. The converted file will land on your Desktop, named Output.mp2. Note the -ac 1 flag in the second command, which ensures a mono output even if your source file happens to be stereo:

ffmpeg -i yourInputFile.wav -acodec mp2 -ab 256k ~/Desktop/Output.mp2

ffmpeg -i yourInputFile.wav -ac 1 -acodec mp2 -ab 128k ~/Desktop/Output.mp2
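
And if you’d like to run the fidelity experiment described earlier entirely from the command line, the MP2 to MP3 step might look something like this (a sketch – it assumes your FFmpeg build includes the libmp3lame encoder):

ffmpeg -i ~/Desktop/Output.mp2 -acodec libmp3lame -ab 128k ~/Desktop/Output.mp3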

Here’s a good place to download pre-compiled FFmpeg binaries.

Many modern media applications support native playback of MP2 audio files, including iTunes and QuickTime.

In Conclusion

If you are in the business of moving around large Spoken Word audio files, or if you are struggling with disk space consumption issues, the use of MP2 audio files as intermediates is a worthy solution.

-paul.

Technorati Tags: ,

iZotope Ozone 6

iZotope has released a newly designed version of Ozone, their flagship Mastering processor. Notice I didn’t refer to Ozone [6] as a plugin? Well, I’m happy to report that Ozone [6] is now capable of running independently of a DAW as a stand-alone desktop processor.

oz6-480

Besides the stand-alone option and striking UI overhaul, Ozone’s flexibility has been greatly enhanced with added support for hosting third-party Audio Units and VST plugins. Preliminary tests here indicate that it functions very well in stand-alone mode. More on this in a moment …

I’ve been a customer and supporter of iZotope since early 2005. If I remember correctly, Ozone 3 was the first version that I had access to. In fact, back in the early days of Podcasting, many producers purchased an Ozone license based on my endorsement. This was an interesting scenario, due to the fact that most of the people in the community who bought it had no idea how to use it! And so a steady flow of user support inquiries began to trickle in.

I decided the best way to bring users up to speed was to design Presets. I would distribute the underlying XML file and have the users move it to the proper location on their systems. After doing so, the Preset would be accessible within Ozone’s Preset Manager.

The complexity of the Presets varied. Some people wanted basic Band-Pass filters. Others requested the simulation of a broadcast chain that would result in a signature sound for their recorded voice. In fact I remember one particular instance where the user requested a Preset that would make him sound like an “AM Radio DJ”. So I went to work and I think I made him happy.

As Ozone matured, its level of complexity increased, resulting in somewhat sluggish performance (at least for me). When iZotope released Alloy 2, I bought it – and found it to be much more responsive. And so I sort of moved away from Ozone, especially Ozone 5. My guess is if my systems were a bit more robust, poor performance would be less of an issue. Note that my personal experience with Ozone was not necessarily the general consensus. Up to this latest release, the plugin was highly regarded, with widespread use in the Mastering community.

Over the past 24 hours I’ve been paying close attention to how Ozone users are reacting to this new version. Note that a few key features have been removed. The Reverb module is totally gone. Gating/Expansion has been removed from the Dynamics Module, and the Dithering options have been minimized. The good news is these particular features are not game changers for me based on how I use this tool. I will say the community reaction has been tepid. Some users are passing on the release due to the omissions that I’ve mentioned and others that I’m sure I’ve overlooked.

For me personally – the $99 upgrade was a no-brainer. In my view the stand-alone functionality and the support for third party plugins makes up for what has been removed. In stand-alone mode you can import multiple files, save your work as projects, implement processing chains in a specific order, apply head/tail cuts/fades, and export your work.

Ozone [6] will accept WAV, AIFF, or MP3 files. If you are exporting to lossless, you can convert Sample Rates and apply Dither. This all worked quite well on my 2010 MacPro. In fact, performance was quite good, with no signs of sluggishness. I did notice some issues with plugin wrappers not scaling properly. Also, the Plugin Manager displayed duplicates of a few plugins. Neither hindered performance in any way. In fact, all of my plugins functioned well.

And so that’s my preliminary take. My guess is this new version of Ozone is well suited for advanced New Media Producers who have a basic understanding of how to process audio dynamics and apply EQ. Of course there’s much more to it, and I’m around to answer any questions that you might have.

Look for more information in future posts …

-paul.

Technorati Tags: , , ,

Skype in the Box …

Scenario:

Studio Host and Skype participant to be recorded inside your DAW utilizing a slightly advanced configuration.

The session will require a proper mix-minus using your mixer’s Aux Send to feed the Skype Input – minus the Skype participant.

Objectives:

– Two discrete mono Host/participant recordings with minimal or no processing.

– Host Mic routed through a voice processing chain using plugins.

– Incoming Skype routed through a compressor to tame levels, if necessary.

– One fully processed stereo mix of the session with the Host audio on the left channel and the Skype participant on the right channel.

– Real time recording and output.

There are certainly various ways to accomplish these objectives utilizing a Bounce to Track concept. The optional inserted plugins and even the routing decisions noted below are entirely subjective. And success with this implementation will depend on how well-resourced your system is. I would recommend that you send the session audio out in real time to an external recorder for backup.

Configuration:

This particular example works well for me in Pro Tools. I tried to make this design as generic as possible. My guess is you will have no trouble applying these concepts in any professional DAW. (Click to enlarge)

Skype-NEW-480

Setup:

First I’ll mention that I’m using a Mackie Onyx 1220i Firewire Mixer. This device is defined as my default system I/O. The mixer has a nifty feature that allows the creation of a mix-minus with the press of a button.

onyx-480

Pressing the Input button located on the mixer’s Line In 11-12 channel(s) sets the computer’s audio output as the channel’s input, passing the signal through Firewire 1-2. Disengaging this button sets the Input(s) to Line, and the channels’ 1/4″ Input jacks become active.

Skype recognizes the mixer as the default I/O. So I plug my mic into the mixer’s Channel 1 Input and hard-pan left. I then hard-pan Channel(s) 11-12 right. With the Input button pressed – I can hear Skype. In order to create a successful mix-minus you need to tell the mixer to prevent the Skype input from being inserted back into the Main Mix. These options are located in the mixer’s Source Matrix Control area.

This configuration translates into a Pro Tools session by setting the Track 1 Input (mono) to Onyx Channel 1 and the Track 2 Input (mono) to Onyx Channel 12. I now have discrete channels of audio coming into Pro Tools on independent tracks.

Typically I insert noise reduction plugins on the Mic Input Channel. A Gate basically mutes the channel when there is no signal, and iZotope’s Dialog DeNoiser handles problematic broadband noise in real time. At this stage the Skype Input is recorded with no processing.

Next, both Input Channels are bused out to independent mono Auxiliary Inputs that are hard-panned left + right respectively in preparation to route the passing audio to a Stereo Record bus. To process the mic signal passing through Aux 1 I usually insert something like Waves MaxxVolume, FabFilter’s Pro-DS, and Avid’s Impact Compressor.

For the Skype audio passing through Aux 2, I might insert a gain stage plugin and another instance of Avid’s Impact Compressor. This would keep the Skype audio in check in the event the guest’s delivery is problematic.

The last step is to bus out the processed audio to a Stereo Audio Track with its channels hard-panned left + right. This will maintain the channel separation that we established by hard-panning the Aux Inputs. On this track I may insert a Loudness Maximizer and a Peak Limiter. The processed and recorded stereo file will contain the Mic audio on the Left Channel and the Skype audio on the Right Channel.

Finally you’ll notice I have a Loudness Meter inserted on the Master in one of the Pro Tools Post Fader inserts. Once a session is completed I can disarm the “Record” track and monitor the stereo mixdown. Since the Loudness Meter will be operating Post Fader, I can apply a global gain offset using the Master Fader. Output measurements will be accurate. Of course at this point the channels that contain the original discrete mono recordings would need to be muted.
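
To recap the signal flow described above (the channel numbers and plugin choices are simply the ones from my setup – substitute your own as needed):

Host Mic -> Onyx Channel 1 -> Audio Track 1 (Gate, Dialog DeNoiser) -> Aux 1, panned left (MaxxVolume, Pro-DS, Impact)
Skype -> Onyx Channels 11-12 Firewire return -> Audio Track 2, input set to Onyx Channel 12 (no processing) -> Aux 2, panned right (gain stage, Impact)
Aux 1 + Aux 2 -> Stereo Record Track, channels panned left + right (Loudness Maximizer, Peak Limiter)
Master Fader -> Loudness Meter (Post Fader insert)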

Notes

All the recording and processing steps in this session can be executed in real time. You simply define your Inputs, add Inserts, set up panning/routing, and finally arm your tracks to record. You will be able to converse with the Skype guest as you monitor the session through the mixer’s headphone output with no latency issues. When the session ends you will have access to independent mono recordings for both participants and a processed stereo mix with discrete channels.

Note that you can also implement this workflow as a two step process by first recording the Host/Skype session as discrete mono files. Then Bounce to Track (or Disk) to create the stereo mixdown.

Again, the efficiency of this workflow will depend on how well-resourced your system is. You might consider running Skype on a separate computer. And I reiterate: as you record in the box, consider sending the session audio out to an external recorder for backup.

-paul.

Technorati Tags: ,

Avid Impact …

Since upgrading to Pro Tools 11, I lost access to one of my favorite plugins – The Glue by Cytomic. The Glue is an analog modeled console-style Mix Bus Compressor that supports Side-Chaining and features a classic needle type gain reduction meter. This plugin gets high marks in the music production community. In my work I find it very useful on mix buses and to tame dynamics in individual clips. At this time there is no AAX Native version available, although I’ve read a release may be imminent.

After using The Glue for about a year – I grew very fond of the form factor and ease of use. And, the analog gain reduction meter is just too cool. Here’s a video that demonstrates how The Glue can be used as a Limiter to tame transients.

I have a bunch of Compressors that I use in Pro Tools including C1 by Waves and Pro-C by FabFilter. I also use the Compressors included in the Dynamics modules in iZotope’s Ozone and Alloy plugins.

I decided to look around for a suitable replacement for The Glue that would work well in my Pro Tools environment. I was surprised when I stumbled upon something offered by Avid … Impact Mix Bus Compressor.

impact_blog

Before shelling out $300 for this plugin, I decided to check eBay. Sure enough I found a reliable reseller who was accepting offers for this previously registered plugin by way of an iLok license transfer. I secured the license for $80. I’m hoping this is legit.

Regardless, I’m looking forward to adding this new tool to my Pro Tools rig. We’ll see how well it stacks up against The Glue.

-paul.

Update: The license transfer worked out fine and from what I’ve heard the process is totally legit …

Technorati Tags:

Waves WLM Plus Loudness Meter …

Waves has just released a stellar update to their critically acclaimed WLM Loudness Meter. The new WLM Plus version – available for free to those who are eligible – includes a few new and very useful features.

The plugin now acts as both a Loudness Meter and a Loudness Processor. New controls (Gain/Trim) are located in the Processing Panel and are designed to apply loudness normalization and correction. There is also a new switchable True Peak Limiter that adheres to the True Peak parameter defined in the selected running preset.

Here’s how it works:

Notice below I am running WLM Plus using my own custom preset (figg -16 LUFS). Besides the obvious Integrated Loudness target (-16 LUFS), I’ve defined -1.0 dBTP as my True Peak ceiling.

wlm-blog

What you need to do is insert the plugin at the end of your chain. Turn on the True Peak Limiter. Now play through the entire segment that you wish to measure and correct. During playback, the text field value located on the WLM Plus Trim button will update in real time, displaying the proper amount of gain compensation that is necessary to meet the Integrated Loudness target (it’s +2.1 dB in this example).

When measurement is complete, simply press the Trim button. This will set the Gain slider to the proper value for accurate compensation. Finish up by bouncing the segment through WLM Plus, much the same as any processing plugin. The processed audio will now match the Integrated Loudness Preset target and True Peaks will be limited accordingly.

I haven’t tested this in Pro Tools but my guess is this also works when using WLM Plus as an Audio Suite process on individual clips.

Of course you can make a manual adjustment to the Gain slider as well. In this case you would use the displayed Trim Value to properly set the necessary amount of gain compensation.
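
In case it helps to see the arithmetic behind that Trim value: the required compensation is simply the target minus the measured Integrated Loudness. A minimal command line check, using a hypothetical measured value of -18.1 LUFS against the -16 LUFS target (back-figured from the +2.1 dB example above):

echo "-16 - (-18.1)" | bc # prints 2.1, the amount of positive gain (in dB) needed to hit the target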

Great update to this well designed Loudness Meter.

-paul.

Technorati Tags: , ,