Posts by EdwardArnold

    Awesome read, thank you. I've gotten as far as racking up a Qu-SB, DBX DriveRack, wireless router and Shure PSM system into an 8U case with power distribution and a drawer for the iPad etc. Seeing your videos, I'm back to wondering if I can squeeze in an Art-Net controller for lights as well... Cool stuff, well done.

    Hmmm. The spec page quoting +4dBu implies to me that this is stating the operating level, i.e. both units use the -20dBFS = +4dBu line-up standard, which puts the output voltage at 1.228 V RMS for +4dBu.


    The specs for maximum output level are, as far as I understand it, the maximum output level that can be driven prior to the onset of distortion. So, the Stage's output drivers are just a touch over 6dB lower (7dB according to the published figures, but I'd put that down to rounding discrepancies between the two). 6dB is the voltage difference between a balanced and unbalanced line, so I think the output driver has been redesigned on the Stage to deliberately align the levels for the balanced and unbalanced outputs, thus avoiding any unexpected leaps in level when different interconnections are used. Otherwise, I guess it would drive +21/22 dBu cleanly as well.
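    To make the arithmetic behind those figures concrete, here's a minimal Python sketch of the standard dBu-to-volts conversion (0 dBu being 0.7746 V RMS) and the level difference between two voltages. The function names are just illustrative, not from any particular library.

```python
import math

DBU_REF_V = 0.7746  # 0 dBu reference: the voltage across 600 ohms dissipating 1 mW

def dbu_to_volts(dbu: float) -> float:
    """Convert a level in dBu to RMS volts."""
    return DBU_REF_V * 10 ** (dbu / 20)

def db_difference(v_a: float, v_b: float) -> float:
    """Level difference in dB between two voltages."""
    return 20 * math.log10(v_a / v_b)

# The +4 dBu nominal operating level works out to ~1.228 V RMS:
print(round(dbu_to_volts(4), 3))          # 1.228

# Halving the voltage (single leg of a balanced pair) is a ~6 dB drop:
print(round(db_difference(1.0, 0.5), 2))  # 6.02
```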

    Not sure about the board swap - it seems highly likely that the SPDIF circuit is integrated into the main PCB. That aside for a moment, I wouldn't get too hung up on master clocks. It's common for manufacturers to imply that the clock/PLL chip they've used is going to create some sort of leap in sound quality, when the reality is that clocking chips just need to be reliable, hold a known sample rate without drifting, and keep jitter low. SPDIF embeds the clock signal in the data stream, the same as AES. When that embedded clock signal is received, it is regenerated. So the clock in your interface would be slaved to an external device (the Kemper), but the interface would still be regenerating its own clock signal, as far as I understand it.

    Instead, I would always think about the workflows that your chosen clocking scheme dictates. It's unfortunate that in this case you will need to slave to the Kemper whenever you power it/attach it and want to use it. If you use your interface frequently without it, then keep in mind that any blats that occur as the interface re-clocks can reach your speakers or headphones. Ideally, the interface would lift its output relays when re-clocking to avoid this, but I don't know whether the Apollo does - worth finding out though, as it makes much of the issue go away.

    This sort of stuff is going to bug you much more than the perceived sound of one clocking chip vs another.

    I agree that I prefer to work in samples, ms and dB, but then I suppose not everyone does.

    On the subject of delay for widening purposes though, I find just delaying a leg isn’t ideal. The Haas effect means that the earlier signal is prioritised and the stereo image subjectively shifts, so you then either try and compensate by level or by filtering off lower frequencies from the earlier signal to try and take the energy out of it. This also helps reduce the loss of low end due to phase cancellation for anyone listening in mono (front fills, repeaters, recording or livestream in mono etc.).
    If I’m ‘manually’ widening sources when mixing, I’ll create mid+side channels so that the centre signal can be the earliest signal and provide mono compatibility. And on a digital mixer, you can deal exclusively in Hz, ms, samples and dBFS 🙂
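    The mono-compatibility problem above is easy to demonstrate numerically. This is a minimal NumPy sketch (function names are my own, purely illustrative) of delaying one leg for width, then folding to mono: a 0.5 ms delay at 48 kHz puts the first comb-filter null at 1 kHz, so a 1 kHz tone largely cancels in the fold-down.

```python
import numpy as np

def haas_widen(mono: np.ndarray, delay_samples: int) -> tuple[np.ndarray, np.ndarray]:
    """Naive widening: delay one leg (delay_samples > 0). The earlier leg dominates the image."""
    left = mono
    right = np.concatenate([np.zeros(delay_samples), mono[:-delay_samples]])
    return left, right

def mono_fold(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """What a mono fill or mono livestream hears: the delayed copy causes comb filtering."""
    return 0.5 * (left + right)

def mid_side(left: np.ndarray, right: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Mid/side split: mid carries the mono-compatible centre, side carries the width."""
    return 0.5 * (left + right), 0.5 * (left - right)

# 1 kHz sine, widened with a 24-sample (0.5 ms) delay at 48 kHz:
fs = 48000
sine = np.sin(2 * np.pi * 1000 * np.arange(4800) / fs)
left, right = haas_widen(sine, 24)
print(np.sqrt(np.mean(mono_fold(left, right)[24:] ** 2)))  # ~0: cancelled in mono
```

    Note that the mid signal of the mid/side split is exactly the mono fold-down, which is why keeping the earliest, unfiltered signal in the mid channel preserves mono compatibility.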

    There is a delay widener available that you can put after the stack (in one of the stereo slots). If the PA destinations have any mono fills, just get the engineer to check mono compatibility is passable by folding down during sound check. Alternatively, you may have more success with the two other widening effects available. Check out page 167 of the manual for more. The wideners are under the 'EQ' section, which isn't immediately obvious.

    Thanks for finding the thread - having looked through again, it was 44.1 kHz operation that produced the extra latency, but this was solved in subsequent releases as quoted.

    Yep, give it a go. It’s likely that the checkbox will stop the software from pulling up/down the sample rate on load. Go for whichever is the easiest workflow option.

    I don't think there are any compelling reasons to go back to 16bit for recording purposes; the 144dB of dynamic range that 24bit recording offers (vs 96dB for 16bit) dramatically improves the signal to noise ratio and lessens the requirement to drive inputs near the maximum threshold and risk clipping a good take. The cumulative reduction in noise over multiple tracks is beneficial across budget and pro audio interfaces alike, and presents more noticeable, audible improvements than, say, doubling the sample rate.
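    Those dynamic-range figures fall straight out of the bit depth: each bit doubles the number of quantisation levels, which is roughly 6 dB per bit. A quick sketch:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an N-bit quantiser: 20*log10(2**N), i.e. ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3
print(round(dynamic_range_db(24), 1))  # 144.5
```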

    Regarding the extra latency at sample rates other than 44.1 kHz, can you remember what the values creep up to? I remember there was a discussion on it but can't locate the thread now. I thought it was addressed in one of the beta releases but looking back through the change log I can only spot the 'reduced latency at 44.1kHz' note from v7.5. Is it possible this was the other way around, as in, the latency was high when operating at 44.1kHz?

    Under normal operating conditions (i.e. no S/PDIF involved) I measured latency at just over 2.5 ms to main outs, which is a good result considering the A/D and D/A stages. When recording I monitor via an analogue desk, so direct from the Kemper and mixed with the DAW playback, then shuffle the audio regions earlier in Pro Tools as required. If you have a means of reliably measuring your input latency, I guess you can apply this numerically rather than visually.
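    If you want to measure that latency yourself, one common approach is to play a test impulse out of the interface, loop it back into an input, and find the offset between the two by cross-correlation. A minimal NumPy sketch of the offline part (the capture itself would be done in your DAW or audio API; the simulated 123-sample delay here is purely illustrative):

```python
import numpy as np

def measure_latency_samples(sent: np.ndarray, recorded: np.ndarray) -> int:
    """Estimate loopback latency by locating the sent signal within the recording."""
    corr = np.correlate(recorded, sent, mode="full")
    return int(np.argmax(corr)) - (len(sent) - 1)

fs = 48000
impulse = np.zeros(4800)
impulse[0] = 1.0
# Simulate a round trip of 123 samples; in practice `recorded` is the captured return.
recorded = np.concatenate([np.zeros(123), impulse])[:len(impulse)]
lag = measure_latency_samples(impulse, recorded)
print(lag, round(1000 * lag / fs, 2))  # 123 samples, ~2.56 ms at 48 kHz
```

    Once you know the figure in samples, you can nudge recorded regions earlier by exactly that amount rather than lining them up by eye.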


    As for which clock should be master, unless your interface lifts its output relays when re-clocking, I'd keep it on internal and avoid it blatting through your speakers/headphones. Some software also tries to control the audio interface settings (Pro Tools, for example), so loading up a session can change the sample rate to match the previous settings. If the interface is the master, this can happen more elegantly without you needing to manually change the Kemper and then re-clock the interface; the Kemper should just follow.

    Hi, is the noise the same no matter where you plug into the mains in your house? For example, does the upstairs mains ring sound any different to downstairs? It seems you've already identified that some equipment is dumping noise onto your mains ground or similar, so if audio equipment is using that as a reference, it could become audible. An electrician is likely to compare the earth connection on the mains to a copper stake in the ground outside (or at least they did at a studio install I was working on) to check for a differential, but not likely to put an analyser on the mains to look for noise. There might be someone more mains-savvy on here who can advise.
    A DI box can lift the ground connection just like the Kemper does, but this is for prevention of ground loops, where errant 50/60Hz signal flows partially through the mains from socket to socket, then via audio cabling between the bits of equipment powered from those sockets (I'm perhaps telling you what you already know here). If the noise is part of the audio signal though, a ground lift won't touch it, as you've found. A passive DI box is transformer-based and will provide physical de-coupling in addition to ground lift, but again, if the noise is already part of the audio signal, it will get transformed too. Still worth a try if someone can lend you one.

    It might be possible that a mains RF filtered socket extension would help. Quite a few affordable mains extension leads feature this, filtering errant high frequencies that can sometimes cause issues (I have some D-Link powerline network equipment that operates by dumping 2.5MHz signals over the mains in my house, which is great for networking but not so much for some of my audio equipment).
    Best of luck with it.

    No gigs for me, but I’ve thought about just busking around my local town to get my kicks. Started travelling again with work (technical rather than musical) to some theatres and concert halls in Vienna and they intend to open at 50% capacity next month.
    Back in the UK, there are rumours that the live hire places I deal with are dusting off their consoles and wheeling them out in anticipation of things starting to pick up a little bit. I really hope so, but it’s difficult to imagine still.

    It’s highly likely that the stagebox inputs onstage are feeding mic preamps, not line ins, as Ingolf has suggested. Many desks don’t have mic/line switching for their XLR inputs, and sometimes also no input PAD to drop the signal down. Plenty of mic pres have a minimum gain value of 15-25 dB or so though. What desks are you typically running into at different venues, if you recall?


    If the Kemper’s -12dB isn’t enough attenuation, what you require is a DI box that can step down the signal level further e.g. -20dB and -40dB steps are common. I’ve said it before on these forums and been challenged, but a DI box is not completely replaced by the output features of the Kemper, nor the input features of smarter consoles, and this is one of the examples.
    If you use a passive, transformer-based DI, you’ll have the added benefit of being decoupled from the FOH rig as well, providing some level of protection for your kit if there is a floating earth at a venue or a mixer PSU that decides to let go.
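    Since pad attenuations simply add in dB, it's easy to sanity-check what actually reaches the preamp. A small sketch (the +10 dBu source level and function names here are hypothetical, just for illustration):

```python
import math

def pad_ratio(db: float) -> float:
    """Voltage ratio for a pad of `db` dB attenuation."""
    return 10 ** (-db / 20)

def remaining_level_dbu(source_dbu: float, *pads_db: float) -> float:
    """Level after chaining attenuation pads: attenuations in dB simply add."""
    return source_dbu - sum(pads_db)

print(pad_ratio(20))                     # 0.1  -> a -20 dB pad divides voltage by 10
print(pad_ratio(40))                     # 0.01 -> a -40 dB pad divides voltage by 100
# Hypothetical hot source at +10 dBu through the Kemper's -12 dB plus a -20 dB DI pad:
print(remaining_level_dbu(10, 12, 20))   # -22 dBu into the mic preamp
```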


    DI boxes are also ideal for wedging venue doors open, tilting back real amp cabs, tripping up people who shouldn’t be wandering the stage, throwing at drummers and muting banjos etc. so they really are the musician’s Swiss Army knife that should be taken to all gigs.

    Low cut = High Pass filter

    High cut = Low pass filter


    This is referring to the high and low frequency content of the signal. As you move the cutoff frequency around (the only control in this case), you will hear all frequencies below/above it being attenuated for the HPF/LPF respectively. Give it a try and you’ll hear what it’s doing right away.
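    For anyone curious what's happening under the hood, here's a minimal first-order filter sketch in plain Python (my own illustrative implementation, not what any particular unit uses). The high-pass is simply the input minus its low-passed copy, which makes the low cut/high cut naming intuitive:

```python
import math

def one_pole_coeff(fc: float, fs: float) -> float:
    """Smoothing coefficient for a first-order filter with cutoff fc (Hz) at sample rate fs."""
    return math.exp(-2 * math.pi * fc / fs)

def low_pass(x: list[float], fc: float, fs: float) -> list[float]:
    """'High cut': attenuates content above fc."""
    a = one_pole_coeff(fc, fs)
    y, out = 0.0, []
    for s in x:
        y = a * y + (1 - a) * s  # exponential smoothing removes fast changes
        out.append(y)
    return out

def high_pass(x: list[float], fc: float, fs: float) -> list[float]:
    """'Low cut': input minus its low-passed copy leaves only content above fc."""
    return [s - l for s, l in zip(x, low_pass(x, fc, fs))]
```

    Feed a steady (DC) signal in and the low-pass passes it untouched while the high-pass removes it entirely, which is the 'low cut' doing its job at the extreme.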

    What’s the workflow challenge that you’re trying to solve with more metering? E.g. preventing overload of analogue inputs on a mixing desk or audio interface?


    Is the metering you imagine a bar graph-style meter with a dBFS or a dBu scale? The former would be common to most digital desks and DAW metering, whereas analogue equipment would feature the latter.

    Looks like Lindy make a TOSLINK to Coaxial SPDIF converter, available from Thomann and others. Also cheaper (possibly lower quality) items on Amazon.

    If you’re new to re-amping, have you already explored the analogue method and ruled it out?