This paper describes a system for managing the flow of data from sensors to media control in a live performance context. The research is taking place at the Intelligent Stage.


The "media flow network" is a useful construction (Figure 3), similar to the one I am designing. I would add one component: 'I' for interpretation. They have 'S' for sensor, 'F' for filter, 'X' for fusion operator, and 'A' for actuator. There is also a 'T' for translator, a 'D' for media driver, and perhaps one more, but it's a good model of the path from sensor to the control divide.
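As a sketch of how this node vocabulary might be modeled in code (the names and the graph shape are my own shorthand, not the paper's), the node types plus my proposed 'I' could be an enumeration over a directed pipeline:

```python
from enum import Enum

# Node types from the paper's media flow network, plus my proposed
# 'I' (interpretation) node. Names are my own shorthand.
class NodeType(Enum):
    SENSOR = "S"          # raw input from the environment
    FILTER = "F"          # conditions/cleans a single stream
    FUSION = "X"          # combines several streams into one
    INTERPRETATION = "I"  # proposed: assigns meaning to fused data
    TRANSLATOR = "T"      # maps interpreted data to control messages
    DRIVER = "D"          # media driver on the far side of the divide
    ACTUATOR = "A"        # produces output in the space

# A media flow network is then just a directed graph over typed nodes.
network = [
    (NodeType.SENSOR, NodeType.FILTER),
    (NodeType.FILTER, NodeType.FUSION),
    (NodeType.FUSION, NodeType.INTERPRETATION),
    (NodeType.INTERPRETATION, NodeType.TRANSLATOR),
    (NodeType.TRANSLATOR, NodeType.DRIVER),
    (NodeType.DRIVER, NodeType.ACTUATOR),
]
```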


Their processing model is also similar to the one I am designing, though again it is missing some elements. The vertices/nodes are limited as noted above. The representation of data passing between sensing nodes is not formalized, which leaves open an ad-hoc approach to passing data around; until it is defined, the system will not be generalizable and therefore not scalable. The "Beta" operator is used, I think, essentially for QoS analysis, which could be useful.
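A minimal sketch of what a formalized inter-node representation might look like (my own format, purely illustrative, not anything the paper specifies): a fixed, self-describing record instead of ad-hoc values.

```python
from dataclasses import dataclass, field
import time

@dataclass
class SensorMessage:
    """A fixed, self-describing record passed between sensing nodes.

    Formalizing this (instead of passing ad-hoc values around) is what
    would keep the network generalizable as it scales.
    """
    source: str    # id of the emitting node, e.g. "ir-cam-1"
    kind: str      # semantic type of the payload, e.g. "position.2d"
    payload: tuple # the actual values
    timestamp: float = field(default_factory=time.time)

msg = SensorMessage(source="ir-cam-1", kind="position.2d", payload=(0.4, 0.7))
```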


They make reference to GrIPD, a cross-platform version of PD, which I need to look into.


They present an "optimizer" for use with the sensor network. This is a great idea in the context of a full working system with lots of reusable parts, but without that support structure it's a bit premature. Still, it is a good reason to provide ways to reuse code between live performances. (There is also a problem with the grain of the algorithms: they show each processing grain as being the same across implementations that gain a particular piece of knowledge or reach a particular fusion operator. This may not be the case.)


They bring in the idea of "quality-aware signal processing algorithms". This allows them to run the optimization algorithm, measure the delays of the system and its components, and generally debug the network. Three quality measures are noted: Precision, Delay, and Resources. These are very useful things to be able to track in the network for debugging.
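To make the idea concrete, here is a hypothetical sketch (my own, not the paper's mechanism) of a node wrapper that reports the three measures alongside its result, so the network can be profiled and debugged:

```python
import time

def quality_aware(precision, func, *args):
    """Run a processing step and report the three quality measures.

    precision is declared by the algorithm itself; delay is measured
    per invocation; 'resources' here is a trivial stand-in for a real
    cost model.
    """
    start = time.perf_counter()
    result = func(*args)
    delay = time.perf_counter() - start
    quality = {
        "precision": precision,
        "delay": delay,
        "resources": len(args),
    }
    return result, quality

# e.g. a trivial smoothing filter reporting its quality measures
result, q = quality_aware(0.95, lambda xs: sum(xs) / len(xs), [1.0, 2.0, 3.0])
```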


Their multimedia delivery model is a bit too generalized for theatrical use. Neither a person nor a computer should be optimizing anything during a performance; the optimal strategy should already be in use. However, I can see how, in the face of limited resources, if several computers are servicing a particular medium, there may need to be a way to do the optimization for the user of the space.


They have defaulted to "streaming" media. I wonder exactly what this means here.


The idea of removing a level of media management from the user by having the computer optimize the streaming of resources would take away one chore of organizing media. If the media were of high enough quality and could be delivered over a network to a rendering device, it could essentially be stored in a database and tagged for reference by the user.
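A minimal sketch of that storage model, assuming a hypothetical tagged media database (the identifiers and URIs below are invented for illustration): media is stored once and retrieved by user-assigned tags rather than managed by hand per performance.

```python
# Hypothetical tagged media database; entries are invented examples.
media_db = {
    "clip-001": {"tags": {"act1", "storm"}, "uri": "nfs://media/clip-001.mov"},
    "clip-002": {"tags": {"act2", "calm"},  "uri": "nfs://media/clip-002.mov"},
}

def find_by_tag(tag):
    """Return URIs of all media the user has tagged for reference."""
    return [m["uri"] for m in media_db.values() if tag in m["tags"]]
```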


The model also fails to recognize the need for sequence and timing as part of a greater whole.


A related note: separating the data itself from its indexing.


It's also missing an algorithmic and translation component in the interplay between sensing and media, which can add a very large level of complexity to the way media is accessed. Streaming media is not random-access media, and it implements the idea of control transport very poorly. Control transport is essential to creating more sophisticated interactions between sensing and recorded media.
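To illustrate the distinction, here is a sketch (my own, with hypothetical names) of the control-transport operations that sensing-driven interaction needs. A random-access clip can satisfy all of them; a one-way stream honors only forward play.

```python
class TransportControl:
    """Control transport over a random-access clip of frame_count frames."""

    def __init__(self, frame_count):
        self.frame_count = frame_count
        self.position = 0
        self.rate = 1.0  # 1.0 = forward, -1.0 = reverse, 0.0 = pause

    def play(self, rate=1.0):
        self.rate = rate

    def seek(self, frame):
        # Random access: jump anywhere, clamped to the clip's bounds.
        # A stream cannot do this; a stored clip can.
        self.position = max(0, min(frame, self.frame_count - 1))

    def step(self):
        # Advance by the current (possibly negative) rate.
        self.seek(int(self.position + self.rate))

clip = TransportControl(frame_count=300)
clip.seek(150)   # e.g. map a performer's x-position onto the clip
clip.play(-2.0)  # scrub backward
clip.step()
```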