In the Beginning
Consider that the earliest mixers were orchestra conductors shaping live concert performances. By the 19th century, conductors had advanced the art beyond simply dictating timing, developing a physical language to convey dynamics and relative loudness to each section of the orchestra. As acoustic phonograph recorders began to capture these performances into the 20th century, conductors continued to arrange the orchestra while also containing its dynamics within the bounds of the recording medium. Keep in mind that early recording technology was entirely passive: acoustic sound was funneled through a “recording horn” to a membrane attached to a cutting needle, which dug a groove into a piece of wax. “Mixing” in those days simply meant the conductor moved players closer to, or farther from, the recording horn.
After WWI, radio broadcasting became very popular, with sound being turned into voltage using microphones. Once this technology became available, it was integrated into the process of record cutting: performers were miked and the voltage was used to drive the cutting needle of the recorder. The film industry adopted this as well, along with another process called “optical sound,” in which the voltage pattern was photographed onto film and could then be reproduced in the theater. By the 1930s, the radio broadcasting industry had developed three- and four-channel vacuum tube-based mixers to sum multiple microphones, as well as line mixers to combine pre-amplified microphone signals with music from turntables for on-air programs. The film industry started recording multiple tracks of sound to a single piece of film, using broadcast-style four-channel tube mixers to combine music, dialog, and sound effects down to a mono print master for release.
These early tube consoles from RCA, Western Electric, and GE were boxes that sat on a desktop, providing easy access to basic switching and leveling controls. Instead of the familiar modern sliding faders, large knobs were used to control the balance of the incoming signals. These boxes provided routing, balancing, and a master level control, but for the most part the microphone amplifiers, compressors, and other equipment sat outside the console itself, mounted in additional racks. Consoles like the RCA 76-B6 and the Western Electric 25B became very popular because all of the amplifiers and switching electronics were housed in the console itself; however, equalizers, dynamic range compressors, and some other components were still largely modular.
This setup was manageable for broadcasting or for summing multiple microphones to a mono record cutter, but after WWII, tape recording, developed in Germany, began to spread to the rest of the world. Mono tape recordings eventually advanced to two-track: with voice printed to one track and instruments to another, the engineer had more options later during mixdown. In America, a three-track recording format was also developed. The standard process was to record two tracks and bounce them down to one; more tracks would then be recorded and bounced, and eventually all of the bounces would be mixed together. Recording this way demanded more mixing. Multiple signals had to be combined to each track of the recorder while, simultaneously, the signals returning from the tape machine had to be summed to a mono studio monitor. This meant that a tracking console and a separate mixing console were commonplace in one control room.
Birth of Modern Consoles
As larger eight-track and sixteen-track recorders became available, new mixing technology came with them. Around the time eight-track tape machines were catching on, Electrodyne created a straightforward alternative to purchasing multiple mixing consoles, preamplifiers, and equalizers. They would sell a frame that held a variety of modular channel strips. Each module had a preamplifier, a line-level amplifier, onboard EQ available in a few different varieties, and controls to route signals to different tracks of the multi-track recorder. Effectively, they merged all of these pieces into the modern I/O strip. Half of the I/Os could be used for sources, the other half for returns, forming a split-monitor console design. American console manufacturers like MCI and API, and British console manufacturers like Trident and Neve, all started making split-monitor consoles with complete channel strips similar to the Electrodyne designs.
This generation of consoles became legendary for its unique sound and tonal coloration, utilizing discrete amplifiers and transformer coupling and defining the “vintage console sound” going forward. Many of your favorite records owe their sound to these designs, and there are plenty of engineers who can name the console used to mix a record just by hearing it. In fact, the console at a given studio became as big a draw as the roster of engineers working there. Eventually, engineers started freelancing and choosing the studio they would bring a project to specifically based on the console flavor they were seeking for that particular record.
While the sonic characteristics of classic transformer-based consoles are still very popular for many applications today, particularly for “warming up” and smoothing some of the harsher elements of digital recording, other console features created a competitive market as the 1970s rolled on. For one, David Harrison’s consoles designed for MCI were the first to have send and return faders, to and from the tape machine, all in one channel strip. Rather than using half the desk for sources and the other half for tape returns, this provided a more ergonomic layout. This in-line architecture established a benchmark in contemporary console design, especially as twenty-four and thirty-two-track recorders were introduced.
Meanwhile, Solid State Logic (SSL) took ergonomics to the next level with their SL4000 series consoles. In addition to adopting an in-line design, they incorporated compressors and expander/gates, along with EQ, into the channel strip. Though extremely convenient, with all of these controls available on each of the forty-plus channels, documenting a mix so that it could be recalled at a later date became extremely tedious and rarely produced accurate results. Because of this, SSL introduced “Total Recall,” computer software that stored all of the control values and aided in an accurate recall when necessary. This benefit was too good for many studios to live without.
While SSLs became popular for their ease of use and flexibility, their sound came as somewhat of a shock to many engineers who were used to the traditional transformer-coupled tone. The SL4000 consoles used DC-coupled IC input stages along with op-amps fed from the summing buss (commonly referred to as “current summing”). The result was a different character from what Neve and API consoles had been producing: somewhat cleaner and punchier than what engineers were used to hearing.
While the SL4000E series console and its successors still have a home in many modern studios, the ’90s saw another trend largely dethrone them as the industry standard: Digital Audio Workstation, or DAW-based, mixing. When people realized that instead of buying giant, expensive tape machines and consoles, they could buy a computer, some software, an interface, and some outboard mic pres and EQs, the game started to change in a big way. Many studios ditched their analog consoles, which had always been extremely costly to maintain. In this new digital world, however, engineers quickly realized that a lot of the rules had changed. Dynamic range was no longer defined as the range from noise floor to distortion; it was now determined mathematically as a function of the bit depth of the recording.
The Rise of DAWs
All analog-to-digital (AD) converters and digital-to-analog (DA) converters by definition have analog circuits at their front or back end. Within the digital realm, however, dynamic range is based on the number of values that can be represented mathematically given the bit depth. Though this dynamic range can theoretically be very high, in reality it will be limited by the inherent noise floor of the analog components of the AD and DA converters. On the other end of the spectrum, the highest value that can potentially be sampled is referred to as full-scale. Once this ceiling is exceeded, digital clipping occurs. Where hitting tape aggressively results in a pleasant, compressed sound, signals nearing the ceiling of an AD converter gain no such flattery, and when they cross full-scale they incur a nasty, unusable distortion. To put it another way, in the analog world life exists above zero; in digital, zero is all there is – end of story.
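The bit-depth math can be made concrete with a short sketch (the function names here are ours, purely for illustration): an ideal quantizer gains roughly 6 dB of dynamic range per bit, and any sample pushed past full-scale is simply flattened against the ceiling.

```python
import math

def theoretical_dynamic_range_db(bits):
    """Dynamic range of an ideal quantizer: 20*log10(2**bits), about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

def clip_to_full_scale(samples):
    """Digital clipping: anything beyond full scale (here normalized to +/-1.0)
    is hard-limited, losing the waveform's shape entirely."""
    return [max(-1.0, min(1.0, s)) for s in samples]

print(round(theoretical_dynamic_range_db(16), 1))  # 96.3 dB for 16-bit
print(round(theoretical_dynamic_range_db(24), 1))  # 144.5 dB for 24-bit
print(clip_to_full_scale([0.5, 1.2, -1.7]))        # [0.5, 1.0, -1.0]
```

As the text notes, the real-world figure is always lower than the theoretical one, because the converter’s analog stages contribute their own noise floor.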
So when first implemented, digital recording as a tracking and mixing medium often led to dainty level-setting and a general sense of discomfort. With some adjustment, engineers learned to replace lost tape compression with their choice of analog compressors on the front end. Many also struggled to get a grip on the in-the-box approach to mixing. Blasting transients through transformers or slamming the stereo bus of an SSL and saturating the output stage has always felt different from pushing faders in a DAW. In fact, mixing in the box more often means pulling faders down than pushing them up. It was (and is) somewhat of a buzz-kill to have the mix rocking, then have to “select all” and pull the tracks down to avoid clipping the digital master fader.
DAW-based mixing offers some huge advances over mixing in the analog domain. With control surfaces providing a tactile interface to fader, mute, pan, and other automatable functions, and the software GUI allowing engineers to quickly draw in corrections, the power is undeniable. Opening a session and having 100% recall occur effortlessly is just one of the reasons the DAW is now the industry-standard platform. Recalling a mix in an analog studio can take a ridiculous amount of time: analog console recall software still requires the operator to manually configure the controls, with the computer merely guiding them as they do. Even then, the mix never sounds exactly the same after the recall has been performed, and a good amount of tweaking is always necessary to get things close to the original state of the mix.
Enter The Dangerous 2-Bus
Recognizing both the power of DAW mixing as well as the desirability of the headroom, tone, and feel of an analog console, Bob Muller and Chris Muth, the founders of Dangerous Music, created a solution that would merge these two worlds. DAW-equipped studios already had 8- or 16-channel interfaces whose A/D converters were being used for tracking. Those interfaces also had the same number of D/A converter outputs, which weren’t being used for anything in the mixing process except possibly monitoring through one stereo pair. The Dangerous 2-Bus was developed to permit the mix engineer to output sub-groups (or “stems”) of their mix from the DAW and sum them in the analog domain, then record this stereo mix back into the session without compromising any DAW functions like recall, use of plug-in DSP, or automation. It was the first device of its kind, and introduced the concept of out-of-the-box (OTB), or “hybrid,” mixing as an alternative to in-the-box (ITB) mixing. In listening tests, the engineers at Dangerous determined that the software mixers performed better sonically when they were doing separate, smaller packets of work, mixing sub-groups, instead of combining the entire track load to a stereo master fader. Utilizing multiple D/A converters to share the workload of the session and summing their outputs in a high-headroom, mastering-quality analog environment opened up an alternative way to use the DAW with no trade-offs.
Remember what one of the primary functions of a mixer has always been: balancing the levels of individual pieces while optimizing the overall product to fit the limitations of the release medium. When recording to analog tape, dynamic range is the range between the noise floor of the tape medium and the threshold of harmonic distortion incurred at the limiting amplifiers of the tape machine. If a tape recording is done properly, each track is recorded with signal that is hot enough to overcome the surface noise while leaving a reasonable amount of headroom to avoid distortion. If eight channels of sound are recorded at this level and then combined together equally, every doubling of the channel count adds roughly 3 dB of power for uncorrelated signals, or up to 6 dB of voltage for fully correlated ones, and thus the combined signal will be so strong that it will almost certainly distort the mix-down recorder.
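How quickly that level builds up can be sketched numerically (a rough back-of-the-envelope model with a made-up function name, not a description of any particular circuit): uncorrelated sources add as power, +10·log10(n) dB, while identical, fully correlated sources add as voltage, +20·log10(n) dB.

```python
import math

def summed_level_db(n_channels, per_channel_db, correlated=False):
    """Approximate level after summing n equal-level channels.
    Uncorrelated sources add as power:   +10*log10(n) dB.
    Fully correlated sources add as voltage: +20*log10(n) dB."""
    factor = 20 if correlated else 10
    return per_channel_db + factor * math.log10(n_channels)

# Eight tracks, each recorded a healthy 6 dB below the distortion ceiling:
print(round(summed_level_db(8, -6), 1))                   # 3.0: already past the ceiling
print(round(summed_level_db(8, -6, correlated=True), 1))  # 12.1: far past it
```

Either way, eight healthy tracks summed at unity land well above the ceiling, which is exactly why the sum must be pulled back down.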
One of the key functions of a mixing amplifier is to overcome this, using some sort of circuit that reduces the voltage at the end of the line to create a signal that fits within the confines of the mix-down medium.
Mixing vs. Summing, or When is a “Summing Box” Not Really a Summing Box?
Mixers perform three main functions: level balancing of multiple audio streams (level controls or faders), spatial placement (panning), and summing the audio to stereo. A summing amplifier simply sums multiple audio streams to stereo, while the DAW’s mixer performs the mixing functions of balancing and panning. A true summing amplifier designed to be a back end for a DAW mixer is a fixed-gain, fixed-pan device because, again, those functions have already been handled by the software mixer. Repeating these functions in hardware places additional, unnecessary electronics in the signal path, which degrades performance and also prevents the complete, 100% accurate recall provided by simply opening a session in your DAW. To put it simply, if it has level controls and/or pan pots on the inputs, it is a line mixer, not a summing mixer. Granted, a line mixer is essential if you are sub-mixing analog synths or combining tom mics to stereo, but it is not the ideal box to put at the back end of your DAW’s mixer.
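The distinction can be sketched in a few lines of Python (the function names and the simple linear pan law are ours, for illustration only): a summing amp adds already-balanced, already-panned stereo stems at fixed gain, while a line mixer re-applies level and pan to every input.

```python
def summing_amp(stems):
    """True summing amplifier: fixed gain, no pan pots.
    Each stem is a (left, right) pair of sample lists, already balanced
    and panned by the DAW's software mixer; the box simply adds them."""
    lefts, rights = zip(*stems)
    left = [sum(samples) for samples in zip(*lefts)]
    right = [sum(samples) for samples in zip(*rights)]
    return left, right

def line_mixer(mono_inputs, levels, pans):
    """Line mixer: repeats balancing (levels) and panning (pans, 0 = hard
    left, 1 = hard right, naive linear pan law) in hardware. That is extra
    electronics in the path that a DAW back end does not need."""
    n = len(mono_inputs[0])
    left, right = [0.0] * n, [0.0] * n
    for src, lvl, pan in zip(mono_inputs, levels, pans):
        for i, s in enumerate(src):
            left[i] += s * lvl * (1.0 - pan)
            right[i] += s * lvl * pan
    return left, right
```

Feeding `summing_amp` two stems, `([1, 2], [0, 0])` and `([3, 4], [5, 6])`, returns `([4, 6], [5, 6])`: a straight unity-gain sum, with every creative decision left in the software mixer where it can be recalled perfectly.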
The summing amps found in the 2-Bus, the 2-Bus LT, and the D-Box each use the same design and components, featuring fully active circuit paths that maintain a hot signal with plenty of headroom from input through to output. That way, you can kick back, mix with your ears, and not worry about clipping. Enjoy the new paradigm of mixing with a hybrid workflow, merging the easy recall and automation of the DAW world with the classic, comfortable feel that consoles brought to the table.