
RFC 5707

Media Server Markup Language (MSML)


5. Execution Flow

   MSML assumes a model where there is a single control context within
   a media server for MSML processing.  That context may have one or
   many SIP [n1] dialogs associated with it.  It is assumed that any
   SIP dialogs associated with the MSML control context have been
   authorized, as appropriate, by mechanisms outside the scope of MSML.

   A media server control context maintains information about the state
   of all media objects and media streams within a media server.  It
   receives and processes all MSML requests from authorized SIP
   dialogs, receives all events generated internally by media objects,
   and sends them on the appropriate SIP dialog.  An MSML request is
   able to create new media objects and streams, and to modify or
   destroy any existing media objects and streams.

   An MSML request may simply specify a single action for a media
   server to undertake.  In this case, the document is very similar to
   a simple command request.  Often, though, it may be more natural for
   a client to request multiple actions at one time, or the client
   would like several actions to be closely coordinated by the media
   server.  Multiple MSML elements received in a single request MUST be
   processed sequentially in document order.

   An example of the first scenario would be to create a conference and
   join it with an initial participant.  An example of the second case
   would be to unjoin one or more participants from a main conference
   and join them to a sidebar conference.  In the first scenario,
   network latencies may not be an issue, but it is simpler for the
   client to combine the requests.  In the second case, the added
   network latency between separate requests could mean perceptible
   audio loss to the participant.
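
   As an illustration of the second case, a client might move a
   participant from a main conference to a sidebar with a single
   request so that the media server performs the two operations back to
   back.  The sketch below uses hypothetical object names and the
   <unjoin> and <join> elements defined later in this document:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <unjoin id1="conn:participant1" id2="conf:main"/>
         <join id1="conn:participant1" id2="conf:sidebar"/>
      </msml>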

   Each MSML request is processed as a single transaction.  A media
   server MUST ensure that it has the necessary resources available to
   carry out the complete transaction before executing any elements of
   the request.  If it does not have sufficient resources, it MUST
   return a 520 response and MUST NOT execute the transaction.

   The MSML request MUST be checked for well-formedness and validated
   against the schema prior to executing any elements.  This allows XML
   [n2] errors to be reported immediately and minimizes failures within
   a
   transaction and the corresponding execution of only part of the
   transaction.

   Each element is expected to execute immediately.  Elements such as
   <dialogstart>, which take an unpredictable amount of time, are
   "forked" and executed in a separate thread (see MSML Dialog
   Packages).  Once successfully forked, execution continues with the
   element following the </dialogstart>.  As such, MSML does not provide
   mechanisms to sequence or coordinate other operations with dialog
   elements.

   Processing within a transaction MUST stop if any errors occur.
   Elements that were executed prior to the error are not rolled back.
   It is the responsibility of the client to determine appropriate
   actions based upon the results indicated in the response.  Most
   elements MAY contain an optional "mark" attribute.  The value of that
   attribute from the last successfully executed element MUST be
   returned in an error response.  Note that errors that occur during
   the execution of a dialog occur outside the context of an MSML
   transaction.  These errors will be indicated in an asynchronous
   event.
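
   For instance, a client might tag each element in a request with a
   mark value so that a failure can be located.  The sketch below uses
   hypothetical object names and the <join> element defined in section
   8.8:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <join id1="conn:a" id2="conf:main" mark="1"/>
         <join id1="conn:b" id2="conf:main" mark="2"/>
      </msml>

   If the second <join> failed, the error response would carry
   mark="1", indicating that only the first stream was established.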

   Transaction results are returned as part of the SIP request response.
   The transaction results indicate the success or failure of the
   transaction.  The result MUST also include identifiers for any
   objects created by a media server for which the client did not
   provide an instance name.  Additionally, if the transaction fails,
   the reason for the failure MUST be returned, and an indication of
   how much of the transaction was executed before the failure occurred
   SHOULD be returned.

6. Media Server Object Model

   Media servers are general-purpose platforms for executing real-time
   media processing tasks.  These tasks range in complexity from simple
   ones, such as serving announcements, to complex ones, such as speech
   interfaces, centralized multimedia conferencing, and sophisticated
   gaming applications.

   Calls are established to a media server using SIP.  Clients will
   often use SIP third party call control (3PCC) [i4] to establish
   calls to a media server on behalf of end users.  However, MSML does
   not require that 3PCC be used, only that the client and the media
   server share a common identifier for the call and its associated RTP
   [i3] sessions.

   Objects represent entities that source, sink, or modify media
   streams.  A media stream is a bidirectional or unidirectional media
   flow between objects on a media server.  The following subsections
   define the classes of objects that exist on a media server and the
   way these are identified in MSML.

6.1. Objects

   A media object is an endpoint of one or more media streams.  It may
   be a connection that terminates RTP sessions from the network or a
   resource that transforms or manipulates media.  MSML defines four
   classes of media objects.  Each class defines the basic properties
   of how object instances are used within a media server.  However,
   most classes require that the function of specific instances be
   defined by the client, using MSML or other languages such as
   VoiceXML.

   The following classes of media processing objects are defined.  The
   class names are given in parentheses:

      o  network connection (conn)

      o  conference (conf)

      o  dialog (dialog)

      o  operator (oper)

   Network connection is an abstraction for the media processing
   resources involved in terminating the RTP session(s) of a call.  For
   audio services, a connection instance presents a full-duplex audio
   stream interface within a media server.  Multimedia connections have
   multiple media streams of different media types, each corresponding
   to an RTP session.  Network connections are instantiated through SIP
   [n1].

   A conference represents the media resources and state information
   required for a single logical mix of each media type in the
   conference (e.g., audio and video).  MSML models multiple mixes/views
   of the same media type as separate conferences.  Each conference has
   multiple inputs.  Inputs may be divided into classes that allow an
   application to request different media treatment for different
   participants.  For example, the video streams for some participants
   may be assigned to fixed regions of the screen while those for other
   participants may only be shown when they are speaking.

   A conference has a single logical output per media type.  For each
   participant, it consists of the audio conference mix, less any
   contributed audio of the participant, and the video mix shared by all
   conference participants.  Video conferences using voice activated
   switching have an optional ability to show the previous speaker to
   the current speaker.

   Conferences are instantiated using the <createconference> element.
   The content of the <createconference> element specifies the
   parameters of the audio and/or video mixes.

   Dialogs are a class of objects that represent automated participants.
   They are similar to network connections from a media flow perspective
   and may have one or more media streams as the abstraction for their
   interface within a media server.  Unlike connections, however,
   dialogs are created and destroyed through MSML, and the media server
   itself implements the dialog participant.  Dialogs are instantiated
   through the <dialogstart> element.  Contents of the <dialogstart>
   element define the desired or expected dialog behavior.  Dialogs may
   also be invoked by referencing VoiceXML as the dialog description
   language.

   Operators are functions that are used to filter or transform a media
   stream.  The function that an instance of an operator fulfills is
   defined as a property of the media stream.  Operators may be
   unidirectional or bidirectional and have a media type.
   Unidirectional operators reflect simple atomic functions such as
   automatic gain control, filtering tones from conferences, or applying
   specific gain values to a stream.  Unidirectional operators have a
   single media input, which is connected to the media stream from one
   object, and a single media output, which is connected to the media
   stream of a different object.

   Bidirectional operators have two media inputs and two media outputs.
   One media input and output is associated with the stream to one
   object, and the other input and output is associated with a stream to
   a different object.  Bidirectional objects may treat the media
   differently in each direction.  For example, an operator could be
   defined that changed the media sent to a connection based upon
   recognized speech or dual-tone multi-frequency (DTMF) received from
   the connection.  Operators are implicitly instantiated when streams
   are created or modified using the elements <join> and <modifystream>,
   respectively.

   The relationships between the different object classes (conf, conn,
   and dialog) are shown in the figure below.

              +--------------------------------------+
              |           Media Server               |
              |                                      |
              |------+                      ,---.    |
              |      |      +------+       /     \   |
   <== RTP ==>| conn |<---->| oper |<---->( conf  )  |
              |      |      +------+       \     /   |
              |------+                      `---'    |
              |   ^                           ^      |
              |   |                           |      |
              |   |   +------+    +------+    |      |
              |   |   |      |    |      |    |      |
              |   +-->|dialog|    |dialog|<---+      |
              |       |      |    |      |           |
              |       +------+    +------+           |
              +--------------------------------------+

   A single, full-duplex instance of each object class is shown together
   with common relationships between them.  An operator (such as gain)
   is shown between a connection and a conference and dialogs are shown
   participating both with an individual connection and with a
   conference.  The figure is not meant to imply only one-to-one
   relationships.  Conferences will often have hundreds of participants,
   and either connections or conferences may be interacting with more
   than one dialog.  For example, one dialog may be recording a
   conference while other dialogs announce participants joining or
   leaving the conference.

6.2. Identifiers

   Objects are referenced using identifiers that are composed of one or
   more terms.  Each term specifies an object class and names a
   specific instance within that class.  The object class and instance
   are separated by a colon ":" in an identifier term.

   Identifiers are assigned to objects when they are first created.  In
   general, either the MSML client or a media server may specify the
   instance name for an object.  Objects for which a client does not
   assign an instance name will be assigned one by a media server.
   Media server assigned instance names are returned to the client as a
   complete object identifier in the response to the request that
   created the object.

   It is meaningful for some classes of objects to exist independently
   on a media server.  Network connections may be created through SIP at
   any time.  MSML can then be used to associate their media with other
   objects as required to create services.  Conferences may be created
   and have specific resources reserved waiting for participant
   connections.

   Objects from these two classes, connections and conferences, are
   considered independent objects since they can exist on a standalone
   basis.  Identifiers for independent objects consist of a single term
   as defined above.  For example, identifiers for a conference and
   connection could be "conf:abc" or "conn:1234" respectively.  Clients
   that choose to assign instance names to independent objects must use
   globally unique instance names.  One way to create globally unique
   names is to include the domain name of the client as part of the
   name.

   Dialogs are created to provide a service to independent objects.
   Dialogs may act as a participant in a conference or interact with a
   connection similar to a two-participant call.  Dialogs depend upon
   the existence of independent objects, and this is reflected in the
   composition of their identifiers.  Operators modify the media flow
   between other objects, such as application of gain between a
   connection and a conference.  As operators are merely media transform
   primitives defined as properties of the media stream, they are not
   represented by identifiers and are created implicitly.

   Identifiers for dialogs are composed of a structured list of slash
   ('/') separated terms.  The left-most term of the identifier must
   specify a conference or connection.  This serves as the root for the
   identifier.  An example of an identifier for a dialog acting as a
   conference participant could be:

      conf:abc/dialog:recorder

   All objects except connections are created using MSML.  Connections
   are created when media sessions get established through SIP.  There
   are several options clients and media servers can use to establish a
   shared instance name for a connection and its media streams.

   When media servers support multiple media types, the instance name
   SHOULD be a call identifier that can be used to identify the
   collection of RTP sessions associated with a call.  When MSML is used
   in conjunction with SIP and third party call control, the call
   identifier MUST be the same as the local tag assigned by the media
   server to identify the SIP dialog.  This will be the tag the media
   server adds to the "To" header in its response to an initial invite
   transaction.  RFC 3261 requires the tag values to be globally unique.

   An example of a connection identifier is: conn:74jgd63956ts.

   With third party call control, the MSML client acts as a back-to-back
   user agent (B2BUA) to establish the media sessions.  SIP dialogs are
   established between the client and the media server allowing the use
   of the media server local tag as a connection identifier.  If third
   party call control is not used, a SIP event package MAY be used to
   allow a media server to notify a client that has subscribed to this
   information when new sessions are established.

   Identifiers as described above allow every object in a media server
   to be uniquely addressed.  They can also be used to refer to multiple
   objects.  There are two ways in which this can currently be done:

      wildcards

      common instance names

   An identifier can reference multiple objects when a wildcard is used
   as an instance name.  MSML reserves the instance name composed of a
   single asterisk ('*') to mean all objects that have the same
   identifier root and class.  Instance names containing an asterisk
   cannot be created.  Wildcards MUST only be used as the right-most
   term of an identifier and MUST NOT be used as part of the root for
   dialog identifiers.  Wildcards are only allowed where explicitly
   indicated below.

   The following are examples of valid wildcards:

      conf:abc/dialog:*

      conn:*

   An example of illegal wildcard usage is:

      conf:*/dialog:73849

   Although identifiers share a common syntax, MSML elements restrict
   the class of objects that are valid in a given context.  As an
   example, although it is valid to join two connections together, it is
   not valid to join two IVR dialogs.

7. MSML Core Package

This section describes the core MSML package that MUST be supported in order to use any other MSML packages. The core MSML package defines a framework, without explicit functionality, over which functional packages are used.

7.1. <msml>

   <msml> is the root element.  When received by a media server, it
   defines the set of operations that form a single MSML request.
   Operations are requested by the contents of the element.  Each
   operation MAY appear zero or more times as children of <msml>.
   Specific operations are defined within the conference package and in
   the set of dialog packages.

   The results of a request or the contents of events sent by a media
   server are also enclosed within the <msml> element.  The results of
   the transaction are included as a body in the response to the SIP
   request that contained the transaction.  This response will contain
   any identifiers that the media server assigned to newly created
   objects.  All messages that a media server generates are correlated
   to an object identifier.  Objects and identifiers are discussed in
   section 6 (Media Server Object Model).

   Attributes:

      version: "1.1".  Mandatory.

7.2. <send>

   Events are used to affect the behavior of different objects within a
   media server.  The <send> element is used to send an event to the
   specified recipient within the media server.

   Attributes:

      event: the name of an event.  Mandatory.

      target: an object identifier.  When the identifier is for a
      dialog, it may optionally be appended with a slash "/" followed
      by the target to be included in an MSML dialog <send>.
      Mandatory.

      valuelist: a list of zero or more parameters that are included
      with the event.

      mark: a token that can be used to identify execution progress in
      the case of errors.  The value of the mark attribute from the last
      successfully executed MSML element is returned in an error
      response.  Therefore, the value of all mark attributes within an
      MSML document should be unique.
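
   For illustration, the sketch below sends an event to a dialog
   running on a conference.  The conference and dialog instance names
   and the event name "app.stop" are hypothetical; the meaning of the
   event is defined by the dialog that receives it:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <send event="app.stop" target="conf:example/dialog:recorder"/>
      </msml>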

7.3. <result>

   The <result> element is used to report the results of an MSML
   transaction.  It is included as a body in the final response to the
   SIP request that initiated the transaction.  An optional child
   element <description> may include text that expands on the meaning
   of error responses.  Response codes are defined in section 11
   (Response Codes).

   Attributes:

      response: a numeric code indicating the overall success or
      failure of the transaction, and in the case of failure, an
      indication of the reason.  Mandatory.

      mark: in the case of an error, the value of the mark attribute
      from the last successfully executed element that included the
      mark attribute.

   In the case of failure, a description of the reason SHOULD be
   provided using the child element <description>.  Three other child
   elements allow the response to include identifiers for objects
   created by the request but that did not have instance names
   specified by the client.  Those elements are <confid> and
   <dialogid>, for objects created through a <createconference> and
   <dialogstart> respectively.
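
   For illustration, a success response for a request that created a
   conference without a client-supplied name, and an error response for
   a failed request, might look like the sketches below.  The response
   codes, identifiers, mark value, and description text shown are
   examples only:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <result response="200">
            <confid>conf:6dg46hnnj</confid>
         </result>
      </msml>

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <result response="432" mark="2">
            <description>conference name already in use</description>
         </result>
      </msml>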

7.4. <event>

   The <event> element is used to notify a media server client of an
   event.  Three types of events are defined by the MSML Core Package:
   "msml.dialog.exit", "msml.conf.nomedia", and "msml.conf.asn".  These
   correspond to the termination of an executing dialog, a conference
   being automatically deleted when the last participant has left, and
   the notification of the current set of active speakers for a
   conference, respectively.

   Events may also be generated by an executing dialog.  In this case,
   the event type is specified by the dialog (see MSML Dialog Core
   Package <send>).

   Attributes:

      name: the type of event.  If the event is generated because of
      the execution of the MSML dialog <send>, the value MUST be the
      value of the "event" attribute from the <send> element within the
      MSML Dialog Core Package.  If the event is generated because of
      the execution
      of an <exit>, the value MUST be "moml.exit".  If the event is
      generated because of the execution of a <disconnect>, the value
      MUST be "moml.disconnect".  If the event is generated because of
      an error, the value must be "moml.error".  Mandatory.

      id: the identifier of the conference or dialog that generated the
      event or caused the event to be generated.  Mandatory.

   <event> has two children, <name> and <value>, which contain the name
   and value, respectively, of each namelist item associated with the
   event.
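
   For example, when a conference created with deletewhen="nomedia" is
   automatically deleted, the media server might send a notification
   such as the following sketch; the conference name is hypothetical
   and no namelist items are included:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <event name="msml.conf.nomedia" id="conf:example"/>
      </msml>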

8. MSML Conference Core Package

8.1. Conferences

   A conference has a mixer for each type of media that the conference
   supports.  Each mix has a corresponding description that defines how
   the media from participants contributes to that mix.  A mixer has
   multiple inputs that are combined in a media specific way to create
   a single logical output.

   The elements that describe the mix for each media type are called
   mixer description elements.  They are:

      <audiomix> defines the parameters for mixing audio media.

      <videolayout> defines the composition of a video window.

   These elements, defined in sections 8.6 (Audio Mix) and 8.7 (Video
   Layout) respectively, are used as content of the <createconference>
   element to establish the initial properties of a conference.  The
   elements are used within the <modifyconference> element to change
   the properties of a conference once it has been created, or within
   the <destroyconference> element to remove individual mixes from the
   conference.

   Conferences may be terminated by an MSML client using the
   <destroyconference> element to remove the entire conference or by
   removing the last mixer(s) associated with the conference.
   Conferences can also be terminated automatically by a media server
   based on criteria specified when the conference is created.  When the
   conference is deleted, any remaining participants will have their
   associated SIP dialogs left unchanged or deleted based on the value
   of the "term" attribute specified when the conference was created.

8.2. Media Streams

   Objects have at least one media input and output for each type of
   media that they support.  Each object class defines the number of
   inputs and outputs that objects of that class support.

   Media streams are created when objects are joined, either explicitly
   using <join> or implicitly when dialogs are created using
   <dialogstart>.  Dialog creation has two stages: allocating and
   configuring the resources required for the dialog instance, and
   implicitly joining those resources to the dialog target during the
   dialog execution.  Refer to the MSML Dialog Base Package.

   A join operation by default creates a bidirectional audio stream
   between two objects.  Video and unidirectional streams may also be
   created.  A media stream is created by connecting the output from
   one object to the input of another object and vice versa (assuming a
   bidirectional or full-duplex join).

   Many objects may only support a single input for each type of media.
   Within this specification, only the conference object class supports
   an arbitrary number of inputs.  When a stream is requested to be
   created to an object that already has a stream of the same type
   connected to its single input, the result of the request depends
   upon the type of the media stream.

   Audio mixing is done by summing audio signals.  Automatically mixing
   audio streams has common and straightforward applications.  For
   example, the ability to bridge two streams allows for the easy
   creation of simple three-way calls or to bridge private
   announcements with a (whispered) conference mix for an individual
   participant.  In the case of general conferences, however, an MSML
   client SHOULD create an audio conference and then join participants
   to the conference.  Conference mixers SHOULD subtract the audio of
   each participant from the mix so that they do not hear themselves.

   A media server receiving a request that requires joining an audio
   stream to the single audio input of an object that already has an
   audio stream connected SHOULD automatically bridge the new stream
   with the existing stream, creating a mix of the two audio streams.
   The maximum number of streams that may be bridged in this manner is
   implementation specific.  It is RECOMMENDED that a media server
   support bridging at least two streams.  A media server that cannot
   bridge a new stream with any existing streams MUST fail the
   operation requesting the join.

   Unlike audio mixing, there are many different ways that two video
   streams may be combined and presented.  For example, they may be
   presented side by side in separate panes, picture in picture, or in a
   single pane that displays only a single stream at a time based on a
   heuristic such as active speaker.  Each of these options creates a
   very different presentation and requires significantly different
   media resources.

   A join operation does not describe how a new stream can be combined
   with an existing stream.  Therefore, automatic bridging of video is
   not supported.  A media server MUST fail requests to join a new video
   stream to an object that only supports a single video input and
   already has a video stream connected to that input.  For an object to
   have multiple video streams joined to it, the object itself must be
   capable of supporting multiple video streams.  Conference objects can
   support multiple video streams and provide a way to specify the
   mixing presentation for the video streams.

   A media server MUST NOT establish any streams unless the media server
   is able to create all the streams requested by an operation.  Streams
   are only able to be created if both objects support a media type and
   at least one of the following conditions is true:

      1. Each object that is to receive media is not already receiving a
         stream of that type.

      2. Any object that is to receive media and is already receiving a
         stream of that type supports receiving an additional stream of
         that type.  The only class of objects defined in this
         specification that directly support receiving multiple streams
         of the same type are conferences.

      3. The media server is able to automatically bridge media streams
         for an object that is to receive media and that is already
         receiving a stream of the requested type.  The only type of
         media defined in this specification that MAY be automatically
         bridged is audio.

   The directionality of media streams associated with a connection is
   modeled independently from what SDP [n9] allows for the corresponding
   RTP [i3] sessions.  Media servers MUST respect the SDP in what they
   actually transmit but MUST NOT allow the SDP to affect the
   directionality when joining streams internal to the media server.
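
   As a sketch of the automatic audio bridging described above, the
   following plays a private announcement to a participant who is
   already joined to a conference; the media server bridges the dialog
   audio with the conference mix for that connection.  The connection
   identifier and announcement file are hypothetical, and <dialogstart>
   is defined in the MSML dialog packages:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <dialogstart target="conn:participant1"
                      type="application/moml+xml">
            <play>
               <audio uri="file://private_announcement.wav"/>
            </play>
         </dialogstart>
      </msml>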

8.3. <createconference>

   <createconference> is used to allocate and configure the media
   mixing resources for conferences.  A description of the properties
   for each type of media mix required for the conference is defined
   within the content of the <createconference> element.  Mixer
   descriptions are described in the Audio Mix and Video Layout
   sections.  When no mixer descriptions are specified, the default
   behavior MUST be equivalent to inclusion of a single <audiomix>.

   Clients can request that a media server automatically delete a
   conference when a specified condition occurs by using the
   "deletewhen" attribute.  A value of "nomedia" indicates that the
   conference MUST be deleted when no participants remain in the
   conference.  When this occurs, an "msml.conf.nomedia" event MUST be
   notified to the MSML client.  A value of "nocontrol" indicates that
   the conference MUST be deleted when the SIP [n1] dialog that carries
   the <createconference> element is terminated.  When this occurs, a
   media server MUST terminate all participant dialogs by sending a BYE
   for their associated SIP dialog.  A value of "never" MUST leave the
   ability to delete a conference under the control of the MSML client.

   Attributes:

      name: the instance name of the conference.  If the attribute is
      not present, the media server MUST assign a globally unique name
      for the conference.  If the attribute is present but the name is
      already in use, an error (432) will result and MSML document
      execution MUST stop.  Events that the conference generates use
      this name as the value of their "id" attribute (see section 7.4
      (<event>)).

      deletewhen: defines whether a media server should automatically
      delete the conference.  Possible values are "nomedia",
      "nocontrol", and "never".  Default is "nomedia".

      term: when true, the media server MUST send a BYE request on all
      SIP dialogs still associated with the conference when the
      conference is deleted.  Setting term equal to false allows
      clients to start dialogs on connections once the conference has
      completed.  Default is "true".

      mark: a token that MAY be used to identify execution progress in
      the case of errors.  The value of the mark attribute from the
      last successfully executed MSML element is returned in an error
      response.  Therefore, the value of all mark attributes within an
      MSML document should be unique.

   An example of creating an audio conference is shown below.  This
   conference allows at most three participants to contend to be heard
   and
   reports the set of active speakers no more frequently than every 10
   seconds.

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <createconference name="example">
            <audiomix>
               <n-loudest n="3"/>
               <asn ri="10s"/>
            </audiomix>
         </createconference>
      </msml>

8.3.1. <reserve>

   Conference resources may be reserved by including the <reserve>
   element as a child of <createconference>.  <reserve> allows the
   specification of a set of resources that a media server will reserve
   for the conference.  Any requests for resources beyond those that
   have been reserved should be honored on a best-effort basis by a
   media server.

   Attributes:

      required: boolean that specifies whether <createconference>
      should fail if the requested resources are not available.  When
      set to false, the conference will be created, with no reserved
      resources, if the complete reservation cannot be honored.
      Default is "true".

8.3.1.1. <resource>

   The resources to be reserved are defined using <resource>.  The
   contents of these elements describe a resource that is to be
   reserved.  Descriptions are implementation dependent.  Media servers
   that support MSML dialogs may use the elements from that package as
   the basis for resource descriptions.  Each resource element may use
   the attribute "n" to define the quantity of the resource to reserve.

   For example, the following creates a conference and reserves two
   types of resources.  One resource element may represent resources
   that are shared by all participants of the conference, while the
   other may represent resources that are reserved for each of the
   expected participants.

   Attributes:

      n: number of resources to be reserved.  Default is 1.

      type: specifies whether the resource is to be reserved by each
      individual participant or reserved as a shared conference
      resource.  Valid values for this attribute are "individual" or
      "shared".  Default is "individual".

      <createconference>
         <reserve>
            <resource n="20">
              <!--description of resources used by each participant-->
            </resource>
            <resource n="2" type="shared">
              <!--description of the shared conference resources-->
            </resource>
         </reserve>
      </createconference>

8.4. <modifyconference>

   All of the properties of an audio mix or the presentation of a video
   mix may be changed during the life of a conference using the
   <modifyconference> element.

   Changes to an audio mix are requested by including an <audiomix>
   element as a child of <modifyconference>.  This may also be used to
   add an audio mixer to the conference if none was previously
   allocated.  Changes to a video presentation are requested by
   including a <videolayout> element as a child of <modifyconference>.
   Similar to an audio mixer, this may be used to add a video mixer if
   none was previously allocated.  Mixers are removed by including a
   mixer description element within <destroyconference/>.

   Features and presentation aspects are enabled/added or modified by
   including the element(s) that define the feature or presentation
   aspect within a mixer description.  The complete specification of
   the element must be included just as it would be included when the
   conference is created.  The new definition completely replaces any
   previous definition that existed.  Only things that are defined by
   elements included in the mixer descriptions are affected.  Any
   existing configuration aspects of a conference, which are not
   specified within the <modifyconference/> element, MUST maintain
   their current state in the media server.

   For example, if an MSML client wanted to change the minimum reporting
   interval for active speaker notification from that shown in the
   Conference Examples section (<createconference>) it would send the
   following to the media server:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <modifyconference id="conf:example">
            <audiomix>
               <asn ri="4"/>
            </audiomix>
         </modifyconference>
      </msml>

   This would also enable active speaker notification if it had not
   previously been enabled.  The N-loudest mixing is unaffected.

   Multiple elements MAY be included in the mixer descriptions similar
   to when conferences are created.  For example, in a video conference,
   the video mix description (<videolayout>) could specify that the
   layout of the video being displayed should change such that the
   regions currently displaying participants get smaller and new
   region(s) are created to support additional participants.  A media
   server MUST make all of the requested changes or none of the
   requested changes.

   Additional examples of modifying conferences are presented in the
   Conference Examples section.

   Attributes:

      id: the identifier for a conference.  Wildcards MUST NOT be used.
      Mandatory.

      mark: a token that can be used to identify execution progress in
      the case of errors.  The value of the mark attribute from the last
      successfully executed MSML element is returned in an error
      response.  Therefore, the value of all "mark" attributes within an
      MSML document SHOULD be unique.

8.5. <destroyconference>

   Destroy conference is used to delete mixers or to delete the entire
   conference and all state and shared resources.  When a mixer is
   removed, all of the streams joined to that mixer are unjoined.  When
   a conference is destroyed, SIP dialogs for any remaining
   participants MUST be maintained or removed based on the value of the
   "term" attribute when the conference was created.

   When there is no element content, <destroyconference/> deletes the
   entire conference.  Individual mixers are removed by including a
   mixer description element identifying the mix (or mixes) to be
   removed as content to <destroyconference/>.  <audiomix/> is used to
   remove audio mixers and <videolayout/> is used to remove video
   mixers.  When one or more mixer descriptions are specified, the
   media server MUST only delete the specified mixer(s) and MUST NOT
   affect any other existing mixers.  When <audiomix/> or
   <videolayout/> is identified
   for individual removal, other feature aspects of the mix MUST NOT be
   included.  If specified, the media server MUST ignore any such
   elements.  When the last mixer is removed from a conference, a media
   server MUST remove all conference state, leaving or removing any
   remaining SIP dialogs as described above.

   Attributes:

      id: the identifier for a conference.  Mandatory.

      mark: a token that can be used to identify execution progress in
      the case of errors.  The value of the mark attribute from the last
      successfully executed MSML element is returned in an error
      response.  Therefore, the value of all "mark" attributes within an
      MSML document SHOULD be unique.
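
   For example, the following two sketches remove only the video mixer
   from a conference and then delete the entire conference; the
   conference name is carried over from the earlier examples:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <destroyconference id="conf:example">
            <videolayout/>
         </destroyconference>
      </msml>

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <destroyconference id="conf:example"/>
      </msml>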

8.6. <audiomix>

   The properties of the overall audio mix are specified using the
   <audiomix> element.

   Attributes:

      id: an optional identifier for the audio mix.

      samplerate: integer value that specifies the sample rate (in Hz)
      for the audio mixer.  Optional, default value of 8000.

   An example of the description for an audio mix is:

      <audiomix id="mix1">
         <asn ri="10s"/>
         <n-loudest n="3"/>
      </audiomix>

8.6.1. <n-loudest>

   The <n-loudest> element defines that participants contend to be
   included in the conference mix based upon their audio energy.  When
   the element is not present, all participants are mixed.

   Attributes:

      n: the number of participants that will be included in the audio
      mix based upon having the greatest audio energy.  Mandatory.

8.6.2. <asn>

   The <asn> element enables notification of active speakers.  Active
   speakers MUST be notified using the <event> element with an event
   name of "msml.conf.asn".  The namelist of the event consists of the
   set of active speakers.  The name of each item is the string
   "speaker" with a value of the connection identifier for the
   connection.

   Attributes:

      ri: the minimum reporting interval defines the minimum duration
      of time that must pass before changes to active speakers will be
      reported.  A value of zero disables active speaker notification.

      asth: specifies the active speaker threshold (in units of dBm0).
      Valid value range is 0 to -96.  Optional, default is -96.

   An example of an active speaker notification is:

      <event name="msml.conf.asn" id="conf:example">
         <name>speaker</name>
         <value>conn:hd93tg5hdf</value>
         <name>speaker</name>
         <value>conn:w8cn59vei7</value>
         <name>speaker</name>
         <value>conn:p78fnh6sek47fg</value>
      </event>

8.7. <videolayout>

   A video layout is specified using the <videolayout> element.  It is
   used as a container to hold elements that describe all of the
   properties of a video mix.

   The parameters of the window that displays the video mix are defined
   by the <root> element.  When the video mix is composed of multiple
   panes, the location and characteristics of the panes are defined by
   one or more <region> elements.  A <region> element is not required
   when only a single video stream is displayed at one time and none of
   the visual attributes of regions are required.

   Some regions may be used to display a video stream based on
   selection criteria rather than having a video stream of a single
   participant continuously presented in the region.  One such
   example is a distance learning lecture where the instructor sees each
   of the students periodically displayed in a region.  When a region is
   used to display one of a number of streams, it is placed as a child
   of a <selector> element.

   Attributes:

      type: specifies the language used to define the layout.  Layouts
      defined using MSML MUST use the value "text/msml-basic-layout".
      This is the same convention as defined for the layout package from
      the W3C SMIL 2.0 specification [i6].  The default when omitted is
      "text/msml-basic-layout".

      id: an optional identifier for the video layout.

8.7.1. <root>

   The <root> element describes the root window or virtual screen in
   which the conference video mix will be displayed.  Simple
   conferences can display participant video directly within the root
   window, but more complex conferences will use regions for this
   purpose.  Areas of the window that are not used to display video
   will show the root window background.

   All video presentations require a root window.  It MUST be present
   when a video mix is created and it cannot be deleted; however, its
   attributes MAY be changed using the <modifyconference> element.

   Attributes:

      size: the size of the root window specified as one of the five
      standard common intermediate formats (e.g., CIF, QCIF).

      backgroundcolor: the color for the root window background defined
      using the values for the "background-color" property of the CSS2
      specification [n10].

      backgroundimage: the URI for an image to be displayed as the root
      window background.  Transparent portions of the image allow the
      background color to show through.
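
   A sketch of a root window definition that uses these attributes is
   shown below; the background color value follows CSS2 and the image
   URI is hypothetical:

      <videolayout type="text/msml-basic-layout">
         <root size="CIF" backgroundcolor="black"
               backgroundimage="http://example.com/background.png"/>
      </videolayout>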

8.7.2. <region>

   <region> elements define video panes that are used to display
   participant video streams.  Regions are rendered on top of the root
   window.

   The size of a region is specified relative to the size of the root
   window using the "relativesize" attribute.  Relative sizes are
   expressed as fractions (e.g., 1/4, 1/3) that preserve the aspect
   ratio of the original video stream while allowing for efficient
   scaling implementations.

   Regions are located on the root window based on the value of the
   position attributes "top" and "left".  These attributes define the
   position of the top left corner of the region as an offset from the
   top left corner of the root window.  Their values may be expressed
   either as a number of pixels or as a percent of the vertical or
   horizontal dimension of the root window.  Percent values are appended
   with a percent ('%') character.  Percent values of "33%" and "67%"
   should be interpreted as "1/3" and "2/3" to allow easy alignment of
   regions whose size is expressed relative to the size of the root
   window.

   An example of a video layout with six regions is:

      +-------+---+
      |       | 2 |
      |   1   +---+
      |       | 3 |
      +---+---+---+
      | 6 | 5 | 4 |
      +---+---+---+

      <videolayout type="text/msml-basic-layout">
         <root size="CIF"/>
         <region id="1" left="0" top="0" relativesize="2/3"/>
         <region id="2" left="67%" top="0" relativesize="1/3"/>
         <region id="3" left="67%" top="33%" relativesize="1/3">
         <region id="4" left="67%" top="67%" relativesize="1/3"/>
         <region id="5" left="33%" top="67%" relativesize="1/3"/>
         <region id="6" left="0" top="67%" relativesize="1/3"/>
      </videolayout>

   The area of the root window covered by a region is a function of the
   region's position and its size.  When areas of different regions
   overlap, they are layered in order of their "priority" attribute.
   The region with the highest value for the "priority" attribute is
   below all other regions and will be hidden by overlapping regions.
   The region with the lowest non-zero value for the "priority"
   attribute is on top of all other regions and will not be hidden by
   overlapping regions.  The priority attribute may be assigned values
   between 0 and 1.  A value of zero disables the region, freeing any
   resources associated with the region, and unjoining any video stream
   displayed in the region.

   Regions that do not specify a priority will be assigned a priority by
   a media server when a conference is created.  The first region within
   the <videolayout> element that does not specify a priority will be
   assigned a priority of one, the second a priority of two, etc.  In
   this way, all regions that do not explicitly specify a priority will
   be underneath all regions that do specify a priority.  As well,
   within those regions that do not specify a priority, they will be
   layered from top to bottom, in the order they appear within the
   <videolayout> element.

   For example, if a layout was specified as follows:

      <videolayout>
         <root size="CIF"/>
         <region id="a" ... priority=".3" .../>
         <region id="b" ... />
         <region id="c" ... priority=".2" ...>
         <region id="d" ... />
      </videolayout>

   Then the regions would be layered, from top to bottom, c, a, b, d.

   Portions of regions that extend beyond the root window will be
   cropped.  For example, a layout specified as:

      <videolayout>
         <root size="CIF"/>
         <region id="foo" left="50%" top="50%" relativesize="2/3"/>
      </videolayout>

   would appear similar to:

      +-----------+
      |   root    |
      |background |
      |     +-----+--
      |     |     |//
      |     | foo |//
      +-----+-----+//
            |////////

   Visual attributes are used to define aspects of the visual appearance
   of individual regions.  A border may be defined together with a title
   and/or logo.  Text and logos are displayed as images on top of the
   region's video, below all regions with a lower priority.  The visual
   attributes are "title", "titletextcolor", "titlebackgroundcolor",
   "bordercolor", "borderwidth", and "logo".
Top   ToC   RFC5707 - Page 40
   Visual attributes can also be defined for individual streams (Video
   Stream Properties).  When visual attributes are specified as part of
   both a region and a stream, those associated with the stream MUST
   take precedence.  This allows streams that are chosen for display
   automatically (Stream Selection) to have proper text and logos
   displayed.  The region visual attributes are displayed when no stream
   is associated with the region.

   Two other attributes associated with a region, "blank" and "freeze",
   define the state of the video displayed in the region.  When the
   blank or freeze attribute is assigned the value "true", then the
   media server MUST display the region either as a blank region, or the
   video image frozen at the last received frame.

   These attributes are specified for a region and not allowed for
   streams because that appears to be the common use case.  Applying
   them to streams would allow only that stream to be affected within a
   selector while other streams continue to display normally.  Except
   for personal mixing scenarios, the same effect can be achieved by
   having the participant mute their own transmission to the media
   server.

   Attributes associated with each region:

      id: a name that can be used to refer to the region.

      left: the position of the region from the left side of the root
      window.

      top: the position of the region from the top of the root window.

      relativesize: the size of the region expressed as a fraction of
      the root window size.

      priority: a number between 0 and 1 that is used to define the
      precedence when rendering overlapping regions.  A value of zero
      disables the region.

      title: text to be displayed as the title for the region

      titletextcolor: the color of the text

      titlebackgroundcolor: the color of the text background

      bordercolor: the color of the region border

      borderwidth: the width of the region border

      logo: the URI of an image file to be displayed

      freeze: a boolean value, with a default of "false", that defines
      whether the video image should be frozen at the currently
      displayed frame

      blank: a boolean value, with a default of "false", that defines
      whether the region should display black instead of the associated
      video stream
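
   As an illustrative sketch, the following freezes the video shown in
   region "2" of the earlier six-region layout by re-specifying that
   region within a <modifyconference> request (section 8.4); all other
   regions are unaffected:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <modifyconference id="conf:example">
            <videolayout type="text/msml-basic-layout">
               <region id="2" left="67%" top="0" relativesize="1/3"
                       freeze="true"/>
            </videolayout>
         </modifyconference>
      </msml>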

8.7.3. <selector>

   It is often desired that one of several video streams be
   automatically selected to be displayed.  The <selector> element is
   used to define the selection criteria and its associated parameters.
   The selection algorithm is specified by the "method" attribute.
   Currently defined selection methods allow for voice activated
   switching and to iterate sequentially through the set of associated
   video streams.

   The regions that will display the selected video stream are placed
   as child elements of the <selector> element.  Including regions
   within a <selector> element does not affect their layout with
   respect to regions not subject to the selection.  For simple video
   conferences that display the video directly in the root window, the
   <root> element can be placed as a child of <selector>.  Region
   elements MUST NOT be used in this case.

   For example, below is a common video layout that allows the video
   stream from the currently active speaker to be displayed in the
   large region ("1") at the top left of the layout while the streams
   from five other participants are displayed in regions located at the
   layout periphery.

      +-------+---+
      |       | 2 |
      |   1   +---+
      |       | 3 |
      +---+---+---+
      | 6 | 5 | 4 |
      +---+---+---+

      <videolayout type="text/msml-basic-layout">
         <root size="CIF"/>
         <selector id="switch" method="vas">
            <region id="1" left="0" top="0" relativesize="2/3"/>
         </selector>
         <region id="2" left="67%" top="0" relativesize="1/3"/>
         <region id="3" left="67%" top="33%" relativesize="1/3">
         <region id="4" left="67%" top="67%" relativesize="1/3"/>
         <region id="5" left="33%" top="67%" relativesize="1/3"/>
         <region id="6" left="0" top="67%" relativesize="1/3"/>
      </videolayout>

   All selector methods must be defined so that they work if only a
   single region is a child of the selector.  Selector methods that
   support more than one child region MUST specify how the method works
   across multiple regions.  Media server implementations MAY support
   only a single region for methods that are defined to allow multiple
   regions.

   The selector or region for a participant's video is defined using the
   "display" attribute of <stream> during a join operation.  Specifying
   a selector allows the stream to be displayed according to the
   criteria defined by the selector method.  Specifying a region
   supports continuous presence display of participants.  Some streams
   may be joined with both a selector and a region.  In this case, the
   value of the "blankothers" attribute defines whether the streams
   associated with a continuous presence region should be blanked when
   the stream is selected for display in one of the selector regions.

   Attributes common to all selector methods are:

      id: a name that can be used to refer to the selector.

      method: the name of the method used to select the video stream.  A
      value of "vas" (see the following section, Voice Activated
      Switching) MAY be specified.

      status: specifies whether the selector is "active" or "disabled".

      blankothers: when "true", video streams that are also displayed in
      continuous presence regions will have the continuous presence
      regions blanked when the stream is displayed in a selection
      region.

8.7.3.1. Voice Activated Switching ("vas")

   Voice activated switching (VAS) is used to display the video stream
   that correlates with the participant who is currently speaking.  It
   is specified using a selector method value of "vas".

   If the video stream associated with the active speaker is not
   currently displayed in a selection region, then it replaces the
   video in the region that is displaying the video of the speaker that
   was least recently active.  If the video of the active speaker is
   currently displayed in a selection region, then there is no change
   to any region.  When VAS is applied to a single region, this has the
   effect that the current speaker is displayed in that region.

   Attributes:

      si: switching interval is the minimum period of time that must
      elapse before allowing the video to switch to the active speaker.

      speakersees: defines whether the active speaker sees the
      "current" speaker (themselves) or the "previous" speaker.

8.8. <join>

   <join> is used to create one or more streams between two independent
   objects.  Streams may be audio or video and may be bidirectional or
   unidirectional.  A bidirectional stream is implicitly composed of
   two unidirectional streams that can be manipulated independently.
   The streams to be established are specified by <stream> elements
   (section <stream>) as the content of <join>.  Without any content,
   <join> by default establishes a bidirectional audio stream.

   When only a stream of a single type has previously been created
   between two objects, or when only a unidirectional stream exists,
   <join> can be used to add a stream of another media type or make the
   stream bidirectional by including the necessary <stream> elements.
   Bidirectional streams are made unidirectional by using <unjoin>
   (section <unjoin>) to remove the unidirectional stream for the
   direction that is no longer required.

   In addition to defining the media type and direction of streams,
   <stream> elements are also used to establish the properties of
   streams, such as gain, voice masking, or tone clamping of audio
   streams, or labels and other visual characteristics of video
   streams.  Properties are often defined asymmetrically for a single
   direction of a stream.  Creating a bidirectional stream requires two
   <stream> elements within the <join>, one for each direction, if one
   direction is to have different properties from the other direction.

   If a media server can provide services using either compressed or
   uncompressed media, the MSML client may need to distinguish within
   requests which format is to be used.  When compressed streams are
   created, both objects must use the same media format or an error
   response (450) is generated.

   Attributes:

      id1: an identifier of either a connection or conference.
      Wildcards MUST NOT be used.  Mandatory.  Any other object class
      results in a 440 error.

      id2: an identifier of either a connection or conference.
      Wildcards MUST NOT be used.  Mandatory.  Any other object class
      results in a 440 error.

      mark: a token that can be used to identify execution progress in
      the case of errors.  The value of the mark attribute from the last
      successfully executed MSML element is returned in an error
      response.  Therefore, the value of all mark attributes within an
      MSML document SHOULD be unique.

   For example, consider a call center coaching scenario where a
   supervisor can listen to the conversation between an agent and a
   customer and provide hints to the agent, which are not heard by the
   customer.  One join establishes a stream between the agent and the
   customer and another join establishes a stream between the agent and
   the supervisor.  A third join is used to establish a half-duplex
   stream from the customer to the supervisor.  The media server
   automatically bridges the media streams from the customer and the
   supervisor for the agent, and from the customer and the agent for the
   supervisor.

   Assuming the following connections, each with a single audio stream:

      conn:supervisor

      conn:agent

      conn:customer

   The following would create the media flows previously described:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <join id1="conn:supervisor" id2="conn:agent"/>
         <join id1="conn:agent" id2="conn:customer"/>
         <join id1="conn:supervisor" id2="conn:customer">
            <stream media="audio" dir="to-id1"/>
         </join>
      </msml>

   The following example shows joining a participant to a multimedia
   conference.  It assumes that the conference has a video presentation
   region named "topright".  The "display" attribute is explained in
   the section Video Stream Properties.

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <join id1="conn:hd83t5hf7g3" id2="conf:example">
            <stream media="audio"/>
            <stream media="video" dir="from-id1" display="topright"/>
            <stream media="video" dir="to-id1"/>
         </join>
      </msml>

8.9. <modifystream>

   Media streams can have different properties such as the gain for an
   audio stream or a visual label for a video stream.  These properties
   are specified as the content of <stream> elements (section
   <stream>).

   <modifystream> is used to change the properties of a stream by
   including one or more <stream> elements that are to have their
   properties changed.  Stream properties MUST be set as specified by
   the <stream> element included as a child of the <modifystream>
   element.  Any properties not included in the <stream> element when
   modifying a stream MUST remain unchanged.  Setting a property for
   only one direction of a bidirectional stream MUST NOT affect the
   other direction.  The directionality of streams can be changed by
   issuing an <unjoin> followed by a <join>.  Any streams that exist
   between the two objects that are not included within <modifystream>
   MUST NOT be affected.

   Attributes:

      id1: an identifier of either a conference or a connection.  The
      instance name MUST NOT contain a wildcard if "id2" contains a
      wildcard.  Mandatory.

      id2: an identifier of either a conference or a connection.  The
      instance name MUST NOT contain a wildcard if "id1" contains a
      wildcard.  Mandatory.

      mark: a token that can be used to identify execution progress in
      the case of errors.  The value of the mark attribute from the last
      successfully executed MSML element is returned in an error
      response.  Therefore, the value of all mark attributes within an
      MSML document SHOULD be unique.

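   The following sketch (the connection and conference identifiers are
   illustrative) shows how <modifystream> might be used to mute the
   audio that a participant contributes to a conference, using the
   <gain> element described in the section Audio Stream Properties:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <modifystream id1="conn:participant1" id2="conf:example">
            <stream media="audio" dir="from-id1">
               <gain amt="mute"/>
            </stream>
         </modifystream>
      </msml>
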
8.10. <unjoin>

   Unjoin removes one or more media streams between two objects.  In
   the absence of any <stream> elements within <unjoin>, all media
   streams between the objects MUST be removed.  Individual streams may
   be removed by specifying them using <stream> elements; streams that
   are not specified MUST NOT be removed.  A bidirectional stream is
   changed to a unidirectional stream by unjoining the direction that
   is no longer required, using the <unjoin> element.

   Operator elements MUST NOT be specified within <stream> elements
   when streams are being unjoined using the <unjoin> element.  Any
   specified stream operators MUST be ignored.  <unjoin> and <join> may
   be used together to move a media stream, such as from a main
   conference to a sidebar conference.

   Attributes:

      id1: an identifier of either a conference or a connection.  The
      instance name MUST NOT contain a wildcard if "id2" contains a
      wildcard.  Mandatory.

      id2: an identifier of either a conference or a connection.  The
      instance name MUST NOT contain a wildcard if "id1" contains a
      wildcard.  Mandatory.

      mark: a token that can be used to identify execution progress in
      the case of errors.  The value of the mark attribute from the
      last successfully executed MSML element is returned in an error
      response.  Therefore, the value of all mark attributes within an
      MSML document SHOULD be unique.

   The following removes a participant from a conference and plays a
   leave tone for the remaining participants in the conference.

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <unjoin id1="conn:jd73ht89sf489f" id2="conf:1"/>
         <dialogstart target="conf:1" type="application/moml+xml">
            <play>
               <audio uri="file://leave_tone.wav"/>
            </play>
         </dialogstart>
      </msml>

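   As noted above, <unjoin> and <join> may be combined in a single
   request to move a participant between conferences.  A sketch of such
   a request (all identifiers are illustrative) could move a
   participant from a main conference to a sidebar conference as
   follows:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <unjoin id1="conn:participant1" id2="conf:main"/>
         <join id1="conn:participant1" id2="conf:sidebar"/>
      </msml>
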
8.11. <monitor>

   Monitor is a specialized unidirectional join that copies the media
   that is destined for a connection object.  One example of the use of
   <monitor> is quality monitoring within a conference.  The media
   stream may be removed using the <unjoin> element (see the section
   <unjoin>).

   Attributes:

      id1: an identifier of the connection to be monitored.  Mandatory.
      Any other object class results in a 440 error.  Wildcards MUST
      NOT be used.

      id2: an identifier of the object that is to receive the copy of
      the media destined to id1.  id2 may be a connection or a
      conference.  Mandatory.  Any other object class results in a 440
      error.  Wildcards MUST NOT be used.

      compressed: "true" or "false".  Specifies whether the join should
      occur before or after compression.  When "true", id2 must be a
      connection using the same media format as id1 or an error
      response (450) is generated.  Default is "false".

      mark: a token that can be used to identify execution progress in
      the case of errors.  The value of the mark attribute from the
      last successfully executed MSML element is returned in an error
      response.  Therefore, the value of all mark attributes within an
      MSML document SHOULD be unique.

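   For example, the following sketch (identifiers are illustrative)
   copies the media that is being sent to a customer connection and
   delivers the copy to a second connection, which might terminate on a
   quality-monitoring system:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <monitor id1="conn:customer" id2="conn:qualitymonitor"/>
      </msml>
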
8.12. <stream>

   Individual streams are specified using the <stream> element.  They
   MAY be included as a child element in any of the stream manipulation
   elements <join>, <modifystream>, or <unjoin>.

   The type of the stream is specified using a "media" attribute that
   uses values corresponding to the top-level MIME media types as
   defined in RFC 2046 [i7].  This specification only addresses audio
   and video media.  Other specifications may define procedures for
   additional types.

   A bidirectional stream is identified when no direction attribute
   "dir" is present.  A unidirectional stream is identified when a
   direction attribute is present.  The "dir" attribute MUST have a
   value of "from-id1" or "to-id1" depending on the required direction.
   These values are relative to the identifier attributes of the parent
   element.

   The compressed attribute is used to distinguish the compressed
   nature of the stream when necessary.  When the attribute is not
   present, the behavior is implementation specific.  Joining
   compressed streams acts much like an RTP [i3] relay.

   The properties of the media streams are specified as the content of
   <stream> elements when the element is used as a child of <join> or
   <modifystream>.  Stream elements MUST NOT have any content when they
   are used as a child of <unjoin> to identify specific streams to
   remove.

   Some properties are defined within MSML as additional attributes or
   child elements of <stream> that are media type specific.  Those for
   audio streams and video streams are defined in the following two
   subsections.  Operators, viewed as properties of the media stream,
   MAY be specified as child elements of the <stream> element.

   Attributes:

      media: "audio" or "video".  Mandatory.

      dir: "from-id1" or "to-id1".

      compressed: "true" or "false".  Specifies whether the stream uses
      compressed media.  Default is implementation specific.

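   As a sketch of the compressed case (the connection identifiers are
   illustrative, and both connections are assumed to use the same audio
   format), the following joins two connections so that the media
   server relays the compressed audio between them:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <join id1="conn:caller" id2="conn:callee">
            <stream media="audio" compressed="true"/>
         </join>
      </msml>
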
8.12.1. Audio Stream Properties

   Audio mixes can be specified to only mix the N-loudest participants.
   However, there may be some "preferred" participants that are always
   able to contribute.  When audio streams are joined to a conference
   that uses N-loudest audio mixing, preferred streams need to be
   identified.

   A preferred audio stream is identified using the "preferred"
   attribute.  The "preferred" attribute MAY be used for an audio stream
   that is input to a conference and MUST NOT be used for other streams.

   Additional attributes of the <stream> element for audio streams are:

   Attributes:

      preferred: a boolean value that defines whether the stream does
      not contend for N-loudest mixing.  A value of "true" means that
      the stream MUST always be mixed while a value of "false" means
      that the stream MAY contend for mixing into a conference when
      N-loudest mixing is enabled.  Default is "false".

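   For example, the following sketch (identifiers are illustrative)
   joins a moderator to a conference so that the audio the moderator
   contributes is always mixed, while the moderator still receives the
   normal conference mix:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <join id1="conn:moderator" id2="conf:example">
            <stream media="audio" dir="from-id1" preferred="true"/>
            <stream media="audio" dir="to-id1"/>
         </join>
      </msml>
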
   There are two elements that can be used to change the characteristics
   of an audio stream as defined below.

8.12.1.1. <gain>

   The <gain> element may be used to adjust the volume of an audio
   media stream.  It may be set to a specific gain amount, to
   automatically adjust the gain to a desired target level, or to mute
   the stream.

   Attributes:

      id: an optional identifier that may be referenced elsewhere for
      sending events to the gain primitive.

      amt: a specific gain to apply, specified in dB, or the string
      "mute" indicating that the stream should be muted.  This
      attribute MUST NOT be used if "agc" is present.

      agc: boolean indicating whether automatic gain control is to be
      used.  This attribute MUST NOT be used if "amt" is present.

      tgtlvl: the desired target level for AGC, specified in dBm0.
      This attribute MUST be specified if "agc" is set to "true".  This
      attribute MUST NOT be specified if "agc" is not present.

      maxgain: the maximum gain that AGC may apply, specified in dB.
      This attribute MUST be used if "agc" is present and MUST NOT be
      used when "agc" is not present.
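
   As a sketch of using automatic gain control (identifiers and
   attribute values are illustrative), the following joins a
   participant to a conference and applies AGC to the audio the
   participant contributes:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <join id1="conn:participant1" id2="conf:example">
            <stream media="audio" dir="from-id1">
               <gain agc="true" tgtlvl="-20" maxgain="10"/>
            </stream>
            <stream media="audio" dir="to-id1"/>
         </join>
      </msml>
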
8.12.1.2. <clamp>

   The <clamp> element is used to filter tones and/or audio-band DTMF
   from a media stream.

   Attributes:

      dtmf: boolean indicating whether DTMF tones should be removed.

      tone: boolean indicating whether other tones should be removed.

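   For example, the following sketch (identifiers are illustrative)
   joins a connection to a conference while removing DTMF tones from
   the audio that the connection contributes, so that key presses are
   not heard by the other participants:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <join id1="conn:participant1" id2="conf:example">
            <stream media="audio" dir="from-id1">
               <clamp dtmf="true" tone="false"/>
            </stream>
            <stream media="audio" dir="to-id1"/>
         </join>
      </msml>
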
8.12.2. Video Stream Properties

   Video mixes define a presentation that may have multiple regions,
   such as a quad-split.  Each region displays the video from one or
   more participants.  When video streams are joined to such a
   conference, the region that will display the video needs to be
   specified as part of the join operation.

   The region that will display the video is specified using the
   "display" attribute.  The "display" attribute MUST be used for a
   video stream that is input to a conference and MUST NOT be used for
   other streams.  The value of the attribute MUST identify a <region>
   (see the section <region>) or a <selector> (see the section
   <selector>) that is defined for the conference.  A stream MUST NOT
   be directly joined to a region that is defined within a selector.
   Changing the value of the "display" attribute can be used to change
   where in a video presentation layout a video stream is displayed.

   Additional attributes of the <stream> element for video streams are:

   Attributes:

      display: the identifier of a video layout region or selector that
      is to be used to display the video stream.

      override: specifies whether or not the given video stream is the
      override source in the region defined by the "display" attribute.
      Valid values are "true" or "false".  Optional; the default value
      is "false".  Only a video stream that is input to a conference
      can be the override source.  A particular region can have at most
      one override source at a time.  The most recently joined video
      stream with this attribute set to "true" becomes the override
      source.  When there is an override source in place, its video is
      always displayed in the region, regardless of which video
      selection algorithm (either a selector or continuous presence
      mode) is configured for that region.  Once the override source is
      cleared, the conference MUST revert to the original video
      selection algorithm.
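
   For example, the following sketch (identifiers and the region name
   are illustrative) joins a presenter to a conference so that the
   presenter's video becomes the override source for the "topright"
   region of the layout:

      <?xml version="1.0" encoding="UTF-8"?>
      <msml version="1.1">
         <join id1="conn:presenter" id2="conf:example">
            <stream media="audio"/>
            <stream media="video" dir="from-id1" display="topright"
                    override="true"/>
            <stream media="video" dir="to-id1"/>
         </join>
      </msml>
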
8.12.2.1. <visual>

   Some regions of video conferences may display different streams
   automatically, such as when voice-activated switching is used.
   Connections MAY also be joined directly without the use of video
   mixing.  In these cases, the <visual> element may be used to define
   visual display properties for a stream.

   The <visual> element MAY use any of the visual attributes defined
   for regions (see the section <region>).  This allows the visual
   aspects of regions within a <selector> to be tailored to the
   selected video stream, or for streams that are directly joined to
   display a name or logo.

