This paper compares a show control system with a media control system and makes observations about when each is appropriate. The author presents the state of the art of show control programming in the system called Medallion manager.
Media control is the electronic glue that connects disparate media pieces; it is typically operated manually by someone pushing buttons.
Show control is a linkage of separate live entertainment control systems into a meta-system that links together multiple production elements such as lighting, sound, video, and special effects.
In addition, show control must:
- Handle multiple time streams, or timelines, all running independently, separately, and asynchronously.
- Have the ability to “pause” an entire show so that precise edits can be made at the pause point.
- Start at a randomly selected point anywhere in the show, by repositioning or cuing up all the controlled devices.
- Offer support for a variety of device protocols, striving for a plug-and-play approach with a broad mix of audio, video, lighting, and automation equipment, while still allowing new devices to be added.
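The requirements above can be illustrated with a small sketch. This is not the paper's implementation; the `Timeline` and `ShowController` classes and their methods are hypothetical, meant only to show what "multiple asynchronous timelines," "pause," and "cue to an arbitrary point" might look like in code:

```python
class Timeline:
    """One independently running time stream (hypothetical sketch)."""

    def __init__(self, name, cues):
        # cues: list of (time_in_seconds, label) pairs
        self.name = name
        self.cues = sorted(cues)
        self.position = 0.0   # current playback position in seconds
        self.running = False

    def start(self):
        self.running = True

    def pause(self):
        # Freeze this timeline so precise edits can be made at this point.
        self.running = False

    def seek(self, t):
        # Reposition ("cue up") to an arbitrary point in the show.
        self.position = t

    def advance(self, dt):
        # Advance by dt seconds; return the labels of any cues crossed.
        if not self.running:
            return []
        start, self.position = self.position, self.position + dt
        return [label for (t, label) in self.cues if start <= t < self.position]


class ShowController:
    """Meta-system coordinating several asynchronous timelines."""

    def __init__(self, timelines):
        self.timelines = timelines

    def pause_all(self):
        for tl in self.timelines:
            tl.pause()

    def seek_all(self, t):
        # Starting at a random point means repositioning every device.
        for tl in self.timelines:
            tl.seek(t)

    def tick(self, dt):
        # One clock tick of the meta-system: each timeline advances on
        # its own, and fired cues are collected per subsystem.
        return {tl.name: tl.advance(dt) for tl in self.timelines}
```

For example, pausing the controller freezes every subsystem at once, and a single `seek_all` re-cues lighting and sound to the same show position.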
The author makes several useful observations and statements:
1. In show programming, you are assured of only one thing: changes will not stop until all resources (money, time, and so on) have been exhausted.
2. A show control programmer, on the other hand, may get minimal opportunity to configure or program the system before technical rehearsals begin. At that point, the show control programmer is an integral part of the show creation process: if the creative director, lighting designer, sound designer, video designer, or facility owner wants changes, the programmer has to implement them immediately, right in the venue. Changes must be tested with the full production, including performers, and this cycle of changes and rehearsals may even continue through opening day, while the show is running in front of an audience.
The costs in the comparison are predictably lower for the show control system than for the media control system.
Medallion uses a timeline-based user interface and achieves precise synchronization through SMPTE timecode.
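SMPTE timecode labels each frame as HH:MM:SS:FF, so synchronizing timelines amounts to converting between timecode and an absolute frame count. A minimal sketch of that conversion, assuming non-drop-frame timecode at an integer frame rate (drop-frame 29.97 fps handling is omitted):

```python
def smpte_to_frames(tc, fps=30):
    """Convert a non-drop-frame SMPTE timecode string 'HH:MM:SS:FF'
    to an absolute frame count at the given integer frame rate."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff


def frames_to_smpte(frames, fps=30):
    """Convert an absolute frame count back to 'HH:MM:SS:FF'."""
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```

With a shared timecode stream, every subsystem can compute the same frame number and stay locked to the same show position.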
What both systems lack is a compositional element (à la Max/MSP) and a more robust interface for dealing with sensor input.