
The Role of Time Base Correctors (TBCs) in Professional Video Tape Transfers
June 11, 2025

The production company would shoot in the field and record to the Betacam SP format. The time code would be set at one-hour increments: 01:00:00:00, 02:00:00:00, etc. That way there was no duplicate time code, & the tapes were labelled correspondingly. A time code window burn is when you display the time code on the output of the source deck & make a copy. So, after the project was shot, time code window burns would be recorded to VHS. My job was to sit with the producer & do the offline edit. Online editing was around $500 per hour. In that process they were using 2 x Betacam SP VTRs as source machines & a 1” VTR was the recorder. The edit controller was made by CMX & the character generator by Chyron. It was such a niche business back then. The 1” machine would cost over $75K; the other decks, edit controller, switcher, patch bays, etc. all added up to $500K easy. Plus, you needed a full-time engineer to keep it all running.
I would take the final edited program that was created by transferring video from the NV 8500 player to the recorder. The editing system was accurate to the frame: one thirtieth of a second. Once we had finalized the show, I would write down the in point on the master tape & the time code displayed on the screen. That time represents the time code on the source tape, i.e., the “Rushes” time code. With the in point on the recorder & the in point (time code) on the source deck, I would build an Edit Decision List, or EDL. That EDL was imported into the CMX edit controller at the “Online Editing Facility”.
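The shape of one of those EDL events can be sketched in a few lines of Python. This is a hedged illustration of a CMX 3600-style layout (event number, source reel, track, edit type, then source and record in/out time codes); the exact field widths varied by system, and the helper name here is mine.

```python
# Hedged sketch of one cut ("C") event in a CMX 3600-style EDL.
# Field widths are approximate; real edit controllers varied.

def edl_event(num, reel, src_in, src_out, rec_in, rec_out,
              track="V", kind="C"):
    """Format one EDL line: event number, reel, track, edit type,
    then source in/out and record in/out time codes."""
    return (f"{num:03d}  {reel:<8} {track:<4} {kind:<6} "
            f"{src_in} {src_out} {rec_in} {rec_out}")

# One 5-second cut: source in/out from the Rushes, record in/out
# from the master tape.
line = edl_event(1, "TAPE1", "01:00:10:00", "01:00:15:00",
                 "00:59:30:00", "00:59:35:00")
print(line)
```

Hand-built lists like this are exactly what got typed (or imported) into the CMX controller at the online facility.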
The edit controller talks to the machines via the RS-422 protocol. You could also set GPI (General Purpose Interface) triggers to signal a dissolve between two sources on the switcher, trigger graphic keys, trigger slow-motion playback with ramps to alter the speed, or control any other device with a GPI input. The video signal path was from the player through a loop-through patch bay, then to the switcher. The output of the switcher went to the recorder. At the beginning of the edit you would enter the in point on the recorder and the in point on the player, and select that source on the switcher so the video signal passed to the recorder. Then you would perform a preview edit, where the two machines would pre-roll 5 seconds, start playing, & then drop the record head at the in point. Then you can trim the in points until you are happy. To end the edit, you would hit Match Frame on the edit controller & it would record the out point on the player & the recorder. For the next edit, if you subtracted the same number of seconds & frames from the EDL time code of both the source & the recorder, you could pick up the edit seamlessly. In order to perform a dissolve, you would enter the in point for the second VCR & set a GPI trigger to signal the switcher to perform the dissolve between the two sources. Preview, & remember to reload the switcher back to the A deck, ready to dissolve to the B deck. In order to dissolve back to the A deck, you would preload the switcher to dissolve from the B deck back to the A deck. Set the GPI & bingo. This was called an A/B roll edit suite. Post production houses would also have an offline editing system. This was a 3/4” Umatic system that was cuts only: two machines hooked up directly with a simple edit controller.
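That "subtract the same number of seconds & frames" trick is just frame arithmetic. Here is a minimal sketch at 30 fps non-drop-frame (real NTSC drop-frame time code is slightly more involved); the function names are mine:

```python
# Hedged sketch: SMPTE time code arithmetic at 30 fps non-drop-frame,
# as used to back both decks up for a seamless match-frame pickup.

FPS = 30  # non-drop; NTSC drop-frame counting skips frames and differs

def tc_to_frames(tc: str) -> int:
    """Convert HH:MM:SS:FF to a total frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(frames: int) -> str:
    """Convert a total frame count back to HH:MM:SS:FF."""
    f = frames % FPS
    s = frames // FPS
    h, s = divmod(s, 3600)
    m, s = divmod(s, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

def back_up(tc: str, seconds: int = 5) -> str:
    """Subtract N seconds so the next edit picks up where the last ended."""
    return frames_to_tc(tc_to_frames(tc) - seconds * FPS)

# Back both the recorder and the player up 5 seconds from the out point:
print(back_up("01:02:10:15"))  # 01:02:05:15
```

Apply the same subtraction to both machines' time codes and the pickup lands on the matching frame.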
Time code window burns were made from the “Rushes”. As protocol on the master tape, you would set the time code to 00:59:00:00, lay down 30 seconds of color bars & tone, then “black” the tape. You are laying down 7.5 IRE (Institute of Radio Engineers) black and, more importantly, laying down a clean control track. The control track is created during the assemble edit or “crash record”. That track sets the sync for the machine. That is called an assemble edit. The rest of the editing process is called insert editing; the record head doesn’t touch the control track. All of the devices are also supplied 75-ohm black burst. Genlock locks all the devices together. The video signals are aligned using the subcarrier and sync timing adjustments on each device. The process of timing the system takes hours. The use of a waveform monitor & vectorscope is imperative. The waveform monitor measures the IRE value of the video, 100 being the highest brightness accepted in the broadcast industry. Black was set at 7.5 IRE when running color bar video through the scopes. The vectorscope measures color saturation & phase, or hue. The signal runs through a proc amp. That device altered the timing of the R, the G, and the B, as well as black level & brightness. That is why it was so important to have genlock distributed to every online device. The proc amp affects the hue, brightness, color saturation & black level. It is like a race between R, G, and B: by changing the timing of the three colors, the hue changes.
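For reference, IRE units map directly to voltage on a standard NTSC composite signal: the full 1 V peak-to-peak swing runs from sync tip at -40 IRE to peak white at 100 IRE, so 140 IRE spans 1000 mV and 1 IRE is roughly 7.14 mV. A small sketch of that conversion (the function name is mine):

```python
# Hedged sketch: converting IRE units to millivolts for a standard
# 1 V peak-to-peak NTSC composite signal. Sync tip sits at -40 IRE and
# peak white at 100 IRE, so 140 IRE spans 1000 mV.

MV_PER_IRE = 1000 / 140  # about 7.143 mV per IRE unit

def ire_to_mv(ire: float) -> float:
    """Millivolts above blanking (0 IRE) for a given IRE level."""
    return ire * MV_PER_IRE

print(round(ire_to_mv(7.5), 1))   # NTSC setup/black: ~53.6 mV
print(round(ire_to_mv(100), 1))   # peak white: ~714.3 mV
```

That 53.6 mV pedestal above blanking is the "7.5 IRE setup" being laid down when you black a tape.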
Before you transferred any footage from the player to the recorder, protocol was to line up the signal. Play the color bars & set the black setup to 7.5 IRE and the maximum white level to 100 IRE. Then, on the vectorscope, put the colors of the rainbow in their assigned little boxes by using the color saturation & hue controls on the proc amp. This accounted for the variety of different recording devices, cameras, and VTRs. Once you had lined the tape up you would advance to the in point, set the in point on the player & the recorder, preview, adjust in points, and, when happy with the preview, perform the edit. Often you would eject that tape, put in another tape, rewind to the color bars, line up the tape & repeat, repeat, repeat. As a process, the idea of adding or removing any of the selected clips on the edited master was very hard. You would have to go back to the in point on the clip that was to be removed & re-lay all the shots after that. This is where having a clean EDL separates the men from the boys. The only other option was to take the edited master, make it into a source tape, & make a new edited sub master with the clip removed. You could see the generational loss on the screen. Protocol was to lose as few generations as possible. You have already transferred the video from the Rushes, i.e., the field tapes, AKA raw footage, to the master. Usually the master is split audio: narration & music or natural sound. From the master would be created a Sub Master or Dub Master, mixing the audio through the sound board. The Sub Master would have mixed, split mono audio, and the video would be transferred using an RGB connection, looping through the patch bay, one patch cable for each color value. Then a Dub Master would be made. The VHS copies would be made from the Dub Master. You can see how losing another generation would really affect the video quality. There was also a composite signal that went to the monitors. There were two video patch bays.
One for RGB & the other for composite, input at the top, output on the bottom. When the facility was built, all devices looped through the patch bay. You need the flexibility to change the configuration as things change. An RS-422 patch bay was handy to have too. The CMX edit controller only has three RS-422 ports: two for the players & one for the recorder. The machines have a maximum of four audio tracks, so the audio patch bay had 4 patch cables per 4-channel device. Most machines had only 2 channels; only much later devices had four.
For film distribution it had to be four tracks: narration, music, nat sound, & the last track was for the foreign language translation. Soho in London was the European center for film distribution on tape. It is littered with high-tech postproduction facilities that cost millions of pounds to build. The mixing part of the process was when you copied the audio & video over to the sub master. This was real time and also involved a lot of match frame editing to really get a good mix, i.e., stop the edit, readjust, & start the edit back up using the matching time code from the edit controller. Basically, your out point becomes your next in point on both the recorder & the player. So, subtract, let’s say, 5 seconds from the time code of both the recorder & the player, pick the edit back up, and adjust the blend of the audio channels. Audio people call video people Vidiots. On the average mobile truck that, for instance, would shoot a basketball game, there would be 15 video people and one audio guy. I always thought it was so unfair. So, a good editor had a diverse skill set: creative, technical, musically inclined, and good with people. A real rare editor also had the ability to create nice-looking graphics. The Video Toaster changed everything. It was the first real computer to be integrated into postproduction. It was really the first digital switcher. It made postproduction more affordable. Then came the Pinnacle Alladin. You could get an RGB version that produced clean video. The way I configured my switcher was that each source looped through the switcher and looped through the Alladin. This all relied on distribution amplifiers to distribute multiple outputs from one source. So you could set up a superimposed graphic on the Alladin and dissolve from the virgin source to the Alladin output. It really was an amazing edit suite. The last of the online edit suites. It took a long day to time the system & align it!
Another feature I used a lot was the chromakey on the Alladin. I could roll a tape behind the talent & record it live. So the actor would stand in front of the green screen and I would play a tape that I chromakeyed the actor over. At the same time, I recorded the footage to tape. It gave me a pretty clean chroma key, much cleaner than trying to perform the key in post. The Alladin also had a lot of transitions that really made the work look modern & professional. It also used Inscriber for the character generation and was, for its time, very advanced. It was around 1996 that I first used an Avid NLE, or Non Linear Editor. It blew my mind. Initially the resolution was so low you simply used it to create an EDL. Then the show would be put together tape to tape. By this time 1” reel to reel was gone. Betacam SP was king. Soon the digital revolution was happening. You could see that the audio business had been redefined by Pro Tools & digital technology. I could see the video business was next. Sony & Panasonic both came out with 1/4” digital tape: the DV format. They had agreed on the format but not the cassette. So Sony had DVCAM & Panasonic DVCPRO. Both formats could be ingested into an NLE via FireWire. So a DVCAM deck like a DSR 80 has composite, RGB, S-Video, & IEEE 1394 FireWire on the output side. It was ingested in real time. D1 & Digital Betacam gave the editor the ability to perform a pre-read edit. The recorder pulled the footage off the play head & dissolved to the new footage. So now you could do a dissolve with only 2 machines. Plus you were not a slave to chasing time code & match frame edits. You could build multi-layer effects. Just do not make a mistake. No Ctrl-Z. Just preview, preview, preview. D2 (Sony/Ampex) & D3 (Panasonic) made online editing easier & higher quality. The Avids were very expensive: Mac based, & memory was small & expensive. Then some clever people at DPS came up with the Velocity and later the Velocity Q.
This had FireWire, composite, RGB, & S-Video inputs. Capture was in real time. The real secret was the codec they used. Because of the limited memory, as programs were completed they would get laid off to tape. When outputting media to be distributed to a bunch of TV stations, you could connect the DPS to a distribution amp & record to multiple Betacam SP machines. I was shipping over 50 tapes a week. It was an interesting time: still using the edit controller & the Alladin and VTRs, but using an NLE to edit. It is only in the last ten to fifteen years that media can be distributed through the internet. Sony, the manufacturer of Betacam SP, announced the discontinuation of the format in 2001, stating that they had sold 450,000 units worldwide. Other manufacturers, such as Maxell, Ampex, Panasonic, etc., manufactured large amounts of tape. The estimate is that over two million Betacam SP tapes were produced. The ¾” Umatic tape format was in use from its introduction in 1971 to the late 1990s. Some institutions, such as the INA archives, hold large collections of Umatic tapes, with around 200,000 large-type Umatic tapes. It is impossible to tell how many tapes there are; estimates say as many as four million.
1” reel-to-reel was long used in TV production, including for instant replays and creating program titles, for almost 20 years. When sending out commercials, one would take a big reel of new tape & run off the spot, then cut the tape, put on a new reel, & repeat. So the number of 1” reels is estimated at over six million. From 1965 to around 2010, videotape ruled the media market. All TV stations relied on tape to record programs via satellite, and also for playback, editing, & shooting. TV commercials & infomercials were sent on tape to the TV station. It’s impossible to know the exact number of cassette tapes ever produced, but it’s estimated that nearly 30 billion have been manufactured since the 1960s.
Basic Timeline:
1935 Magnetophon K1, the first practical tape recorder.
Early 1950s ¼” open-reel audio tape becomes the most common size for amateur recordings; reel-to-reel use would not taper off until the 1980s, with the rising popularity of cassette tapes.
1962 Philips comes up with the compact cassette.
1965 ½-inch reel-to-reel video.
1971 Umatic.
1975 Betamax.
1976 Type C one inch.
1982 Betacam (Betacam SP followed in 1986).
1991 Panasonic D3.
1989 Avid Media Composer, an early NLE.
Today: The take-home message is that prior to around 2010, nearly all media lived on tape. This is our history. So many stories, so many important events. So much to archive.
At Broadcast Tapes we love what we do. Preserving the past. Call or email for Digital Archiving. www.BroadcastTapes.com
804 398 3838




