As our church has no cameras, microphones, or internet connection, we have little choice but to use video conferencing from our homes for live worship services, even with often sub-standard internet speeds.
I needed to supply, from my home over the internet, music of the same type and quality (usually organ, sometimes piano) used in our regular services. Several possibilities came to mind, but we ended up playing music before and after the service. Eventually we might add singing, but limitations in video conferencing and latency over the internet may make that impossible.
Regular services at the church have me playing the music live, which I tried to do using my MIDI controller. The MIDI controller I have is a 61-note, organ-like keyboard that triggers whatever sound I select on my computer. I first used a layered sound of piano, organ, and violin ensemble. One problem I found was that the sustain pedal didn't work well with the organ and violin sounds. Never mind the fact that I only have 61 notes while a piano has 88.
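One way around the sustain-pedal problem would be to route the pedal only to the layer that expects it. Here is a minimal sketch of that idea; the function and the message format are hypothetical (not any particular MIDI library's API), but the logic is standard: the sustain pedal is MIDI Control Change 64, and only the piano layer should respond to it.

```python
# Hypothetical sketch: send sustain-pedal (MIDI CC 64) messages only to the
# piano layer of a layered patch, so the organ and strings ignore the pedal.

SUSTAIN_CC = 64

def route_message(msg, layers):
    """Return the names of the layers that should receive this message.

    msg: dict with a 'type' key and, for control changes, a 'control' key.
    layers: dict mapping layer name -> True if that layer responds to sustain.
    """
    is_sustain = (msg.get("type") == "control_change"
                  and msg.get("control") == SUSTAIN_CC)
    if is_sustain:
        # Only sustain-aware layers (here, just the piano) get the pedal.
        return [name for name, uses_pedal in layers.items() if uses_pedal]
    # Notes and all other controllers go to every layer.
    return list(layers)

layers = {"piano": True, "organ": False, "violins": False}

note_on = {"type": "note_on", "note": 60, "velocity": 90}
pedal_down = {"type": "control_change", "control": 64, "value": 127}

print(route_message(note_on, layers))    # all three layers
print(route_message(pedal_down, layers)) # only the piano
```

In a real setup the same effect is usually achieved by disabling sustain response on the organ and string layers inside the sampler or plugin host, rather than filtering messages yourself.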
There were also issues with not having weighted keys like a piano does. However, since the keyboard is like an organ I figured organ music would work well. Alas, with only one keyboard, even if I split the keyboard into two parts, I had no way to play the pedal (low note) parts nor could I change registrations (sounds) in the middle of a piece as organ music often requires. That made playing organ music difficult and in some cases impossible. The selection of music I have (or that is available) that doesn’t use pedals is sparse and almost always requires two manuals.
How then do I provide music of the same type and standard we were used to? Enter the world of making music on a computer. Recording the music and playing the recording via my DAW (Digital Audio Workstation) has been the best way so far to provide music similar to what the congregation was used to hearing. Even so, I had problems getting everything to work ideally.
In testing the audio in Zoom, I tried at first using the microphone in my web camera to pick up the music from my speakers. Since my speakers are near-field and don't produce sound below 100 Hz, they don't provide the best sound to start with. Similarly, few web camera microphones are good at picking up music with low frequencies like an organ, and they tend to pick up noise from the room. Yes, if people are listening through their computer or phone speakers the sound would probably be acceptable, but anyone listening through headphones or good speakers would be disappointed, as it wouldn't be as good as the original.
So, I had to find a way to get the sound from my DAW (I use Reaper) directly to the input in Zoom. There were a couple of ways that might work. The most obvious would be to send the output of Reaper to my audio interface as I normally do, then take the output from the audio interface and plug it into another input on the interface, making that input the microphone source for Zoom. Except for the mixing engineers reading this, that is probably confusing, and it actually turned out not to be a good way to do it. More research was necessary.
I found a piece of software – Voicemeeter – that creates a virtual mixer in the computer with three inputs, one of which is the audio from the computer itself. Connecting the audio to Zoom was now simple. Routing the output from Reaper via the built-in ASIO driver directly to Voicemeeter did the job. I also connected my condenser microphone to my audio interface and told Voicemeeter to use it as the first channel input for when I needed to speak. Connections in the software allowed me to send each input to my speakers, to Zoom, or to both. I found that sending my microphone to my speakers sometimes resulted in feedback, but since I didn't need to hear myself, I turned that off. So now I had audio going directly from Reaper to Zoom. I should mention that I changed my default system output to Voicemeeter so that any audio software not using an ASIO driver could also send directly to Zoom.
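The routing described above boils down to a small matrix of inputs and output buses. This sketch models my settings in plain Python (the names are illustrative, not Voicemeeter's actual controls): each input can feed the speaker bus, the bus Zoom reads as its microphone, or both, and the microphone is kept out of the speakers to avoid feedback.

```python
# A minimal model of the Voicemeeter-style routing I settled on.
# Each input maps to a pair of flags: (send to speakers, send to Zoom).

routing = {
    "condenser_mic": (False, True),  # mic to speakers caused feedback, so it's off
    "reaper_asio":   (True,  True),  # Reaper's output goes everywhere
    "system_audio":  (True,  True),  # default Windows output, for non-ASIO apps
}

def bus_inputs(bus_index):
    """List the inputs feeding a given bus (0 = speakers, 1 = Zoom)."""
    return [name for name, sends in routing.items() if sends[bus_index]]

print(bus_inputs(0))  # speakers: no mic, so no feedback
print(bus_inputs(1))  # Zoom hears the mic, Reaper, and system audio
```

The key design point is the asymmetry: everything I want the congregation to hear goes to the Zoom bus, but only the program material (not my own voice) comes back out of my speakers.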
In testing this setup in an actual Zoom meeting, it worked pretty well. Next I needed to make sure my recordings were at a uniform volume. Ultimately I discovered that I needed to make my recordings a little louder than I normally would. Some websites suggested -16 LUFS for streaming, but I found over the weeks that -16 to as high as -14 LUFS worked better. Additional compression on the master channel when recording in some cases helped level out the signal and make it better for streaming.
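Since LUFS is a logarithmic loudness scale, hitting a target is simple arithmetic: the gain to apply (in dB) is just the target minus the measured loudness. Here's a small sketch; the function names and the example measurement are my own, not part of any loudness-metering tool.

```python
# Loudness targeting in a nutshell: LUFS is logarithmic, so the correction
# is target minus measured, in dB. The -20 LUFS measurement is made up
# purely for illustration.

def gain_to_target(measured_lufs, target_lufs=-14.0):
    """Gain in dB to apply so the integrated loudness lands on target."""
    return target_lufs - measured_lufs

def db_to_linear(db):
    """Convert a dB gain to a linear amplitude multiplier."""
    return 10 ** (db / 20.0)

# A mix measured at -20 LUFS needs +6 dB to reach a -14 LUFS target:
print(gain_to_target(-20.0))           # 6.0
print(round(db_to_linear(6.0), 3))     # ~1.995x amplitude
```

In practice a LUFS meter plugin on the master bus reports the measured value, and the gain is applied during the final render rather than computed by hand.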
In planning what pieces to use each week, I abandoned my usual method of picking music. Sometimes, in the past, I'd try to pick pieces that matched the hymns selected for a service, and normally I'd do about 5 minutes of pre-service music. Now, though, it made more sense to do 8 to 10 minutes. This meant some weeks I'd have to have as many as 4 pieces recorded for the prelude.
Since copyright laws would require us to pay royalties if we included any copyrighted music, I could only include music that was not under copyright. The somewhat draconian copyright laws are confusing, so I won't go into them here. Please be aware that even though it is a church service being streamed and recorded, one cannot simply include copyrighted material in the stream or recording. Performances of copyrighted material in an actual church worship service (one that isn't recorded) are permitted, but streaming or recording them (video or audio) is not permitted without the appropriate license.
One problem the copyright restrictions presented was finding enough music for each week. Repeating the same piece each week, or with much regularity, is not something I like to do. This meant the best thing to do was to reach into my catalog of compositions and arrangements. Even though I've made recordings of some of my music, I did not have enough to meet the weekly need. During the week I would look for titles from my catalog to use and make recordings of them, or I would compose or arrange a new piece specifically for the upcoming service. Being a bit of a perfectionist, I don't like churning out material so quickly. You could compare the quality of what I'm doing to a live performance, where there is background noise, people talking, coughing, or a few wrong notes: things that would never make it onto an album (assuming anyone makes albums any more).
The fact that I have so many different sample libraries, synthesizers, physical modeling software and the like meant I could put together music that wasn't just piano or organ. However, was that a good idea? Every week I've included at least one piece that was not piano or organ. For regular services at the church I sometimes used a Korg sound module to incorporate orchestral music into the preludes and communion music, so the congregation was used to hearing some non-piano, non-organ music. As of this date I've received no negative feedback about the different sounds, mostly strings and woodwinds. But in a test during our coffee hour, where I played a very contemporary setting of I Have Decided To Follow Jesus, complete with drums and synthesizers, I pretty much knew everyone would say they preferred more traditional settings, and that's what they said.
Continuing on, it looks like we'll be doing services via Zoom for at least a while. The method of pre-recording what I play – and I do play in all the sounds that are heard – is the best I can come up with. You may be interested to know that on most broadcast TV that includes live music – like the Macy's Thanksgiving Day Parade or any live concert – you are rarely hearing the performers perform live; it is almost always pre-recorded. So, for now, that's how I'm going to be doing services. I will continue to experiment and see if I can come up with a way to play live using my MIDI controller. Any comments or questions? I'd love to hear them. Leave them below.