Introduction to UST and UST/MSC - Lurker's Guide - lurkertech.com

Support This Site

This hobby site is supported by readers like you. To guarantee future updates, please support the site in one of these ways:
donate now   Donate Now
Use your credit card or PayPal to donate in support of the site.
get anything from amazon.com
Use this link to Amazon—you pay the same, I get 4%.
get my thai dictionary app
Learn Thai with my Talking Thai-English-Thai Dictionary app: iOS, Android, Windows.
get my thai phrasebook app
Experience Thailand richly with my Talking Thai-English-Thai Phrasebook app.
get my chinese phrasebook app
Visit China easily with my Talking Chinese-English-Chinese Phrasebook app.
get thailand fever
I co-authored this bilingual cultural guide to Thai-Western romantic relationships.

Submit This Site

Like what you see?
Help spread the word on social media:

Note: Updated Lurker's Guide available (but not this page!)

This page belongs to the old 1990s SGI Lurker's Guide. As of 2008, several of the Lurker's Guide pages have been updated for HDTV and for modern OS platforms like Windows and Mac. This particular page is not one of those, but you can see what new stuff is available here. Thanks!

Introduction to UST and UST/MSC

By Chris Pirazzi. Some material stolen from Wiltse Carpenter, Doug Cook, Bryan James, and Bruce Karsh.

This document describes the UST support in the VL, AL, GL, CL, MD, tserialio, DM, and other libraries (see What Are the SGI Video-Related Libraries?). 

What Is It?

SGI's UST support lets you measure the time at which signals arrived at an input jack of the machine, and schedule the time at which your data will hit an output jack of the machine. You can use this support to:
  • record or play audio, video, and/or MIDI in sync.
  • perform field-accurate laydowns to and captures from a VTR.
  • slave audio, video and/or MIDI playback to incoming timecode.
  • implement highly accurate punch-in and punch-out capability via a MIDI or serial device.
  • measure the total input-jack-to-output-jack delay of your real-time signal processing program.
In many cases you will be able to synchronize signals to an accuracy of tens of microseconds.

 For a taste of why synchronization is hard and how the UST support solves the problem, see Why Use UST?.

Accuracy vs. Latency

The UST support has no interactions with the UNIX process scheduler. Therefore, the UST support will not affect your program's latency:
  • It will not affect the upper bound, lower bound, or average time it takes for your program to react to an incoming signal by producing a corresponding output signal.
  • It will not make your program run any more or less often.
  • It will not affect the amount of "advance warning" an output device needs in order to output your data on time.
You can affect these latencies in the average case using the tricks described in Seizing Higher Scheduling Priority.

The UST Clock

Every SGI machine has a UST clock. UST, which stands for "Universal System Time" or "Unadjusted System Time":
  • has a 64-bit signed value (use the C typedef "stamp_t" from sys/time.h).
  • has units of nanoseconds (ie, has 1ns numerical precision).
  • is initialized to 0 at system startup.
  • is never adjusted or reset while the system is running, unlike gettimeofday(3).
  • is monotonically increasing, meaning that in any pair of UST measurements by a device or your software, the UST clock's value never decreases.
  • has a resolution that is 1us or better (ie, takes on a new value at least once every 1us).
  • has an accuracy or error that is ±100 parts per million or better, meaning that its actual rate over the long term differs at most that much from its nominal rate (as measured by your favorite cesium clock).
  • has a wrap period of 292 years, so practical code does not have to worry about USTs wrapping.
Every component of the system (your program, audio subsystems, video subsystems, serial subsystems, etc.) can snap the value of the UST clock.

How to Use UST

You can read the UST clock directly from your program with dmGetUST(). However, it's much more useful to get USTs from the same library you're using to input or output data:
  • When a piece of data such as a video field, an audio frame, or a MIDI event arrives at an input jack of the machine, an input subsystem will snap the value of the UST clock at that instant. The subsystem will then provide you with the data and give you a way to determine its UST. Your program doesn't have to be running at the instant the data arrives. You get the full accuracy of the input subsystem's USTs, which can have as fine a granularity as ±1.5us for some of SGI's audio subsystems, without needing to schedule your process to that granularity.
  • Similarly, when you want to output a piece of data, the subsystem gives you a way to control the UST at which each piece of your data hits the output jack of the machine. Again, since you have provided the data and its UST ahead of time, your process does not have to be running at the crucial moment of output. Some data types (audio and video for example) inherently only let you output data at certain instants. The subsystem provides you with the possible choices in terms of UST.

The point within an audio frame, video field, MIDI event, etc. which the UST refers to is called its synchronization point:
  • For video and graphics signals, this is defined in videosync(3dm).
  • For serial bytes, this is the half-amplitude point of the leading edge of the byte's start bit.
  • For MIDI events, this is the synchronization point of the first serial byte of the MIDI message (except sysex messages, which the MIDI library currently does not accurately timestamp).
  • For analog audio signals, this is the instant at which the corresponding voltage level is sampled or produced.
  • For digital audio signals, this is the edge of the recovered sample clock from the input signal or the driving sample clock for the output signal.

 You can perform all of the timing and synchronization tasks listed above by simply comparing USTs you get from or give to different subsystems. For example:

  • To record audio and video in sync, you use the UST support in the AL to pair incoming audio data with its USTs, and the UST support in the VL to pair incoming video data with its USTs. You can then compare the USTs from the AL and VL to determine how each frame of audio data coincided with each field/frame of video data at the audio and video jacks of the machine. This tells you how to create a movie file where the two streams are synchronized.

  • To play audio and video data from a movie file in sync, you look at how the movie file wants each audio frame and video field/frame to line up in time, and schedule these items of data to go out at USTs which line up in the same way.

  • To measure the input-jack-to-output-jack delay (or latency) of your real-time signal processing program, request the UST of a piece of input data, determine the UST at which your program outputs the processed version of that data, and subtract! If you wanted to increase the delay of your processing program to a precise value, you could add more and more buffering in your program and repeat the UST measurement until you reached the desired delay. Don't forget that the UST support just gives you a way to measure your latency; it won't inherently increase or reduce your program's latency.

UST Timestamping in the MIDI Library

The MIDI library represents MIDI events with this structure:
typedef struct __mdevent {
        char msg[4];
        char *sysexmsg;
        long long stamp;            /* time stamp in ns */
        int msglen;                 /* length of data, sysex only */
} MDevent;
The MDevent contains a MIDI event and a field "stamp" that identifies the time of that event. The library supports lots of different stamp modes that affect the interpretation of "stamp," but the one you want to use is the simplest one:
{
  MDport port;
  /* ... open port ... */
  mdSetStampMode(port, MD_RELATIVESTAMP);
  mdSetStartPoint(port, 0, 0);
  /* ... send or receive MIDI ... */
}
In this mode, "stamp" is simply a UST time.

 To receive data, you call mdReceive() and you get back an MDevent with the event and UST stamp filled in. To send data, you provide an MDevent to mdSend(), and the MIDI subsystem will output the specified MIDI message at the specified UST.

UST in the Timestamped Serial I/O Library

The tserialio(3) library (currently supported on IRIX 6.3) lets you measure the time at which bytes arrive at a serial port to within ±1ms, and lets you schedule serial bytes for output to within ±1ms. When you read serial bytes with tsRead(3), you receive each byte paired with its UST. When you write serial bytes with tsWrite(3), you specify the UST at which you want each byte to go out the serial port.

UST Timestamping for Video Library Input

When you receive a field or frame in the classic VL API using vlGetNextValid(), you can pass the returned VLInfoPtr into vlGetDMediaInfo(). This returns a DMediaInfo structure. DMediaInfo.ustime contains the UST of that field or frame. DMediaInfo.sequence contains a coincident snap of a counter that increments once per video field time. You can tell if you dropped any input data by seeing if DMediaInfo.sequence increments by more than 1 (if the time spacing of your video buffers is a field time) or 2 (if the time spacing of your buffers is a frame time).

 When you receive a field or frame in a DMbuffer using the O2 or cross-platform VL buffering API (see What Are the SGI Video-Related Libraries? for more info), you can pass the DMbuffer into dmBufferGetUSTMSCpair(). This returns a USTMSCpair structure. USTMSCpair.ust contains the UST of the field or frame. USTMSCpair.msc contains a coincident snap of a counter that increments once per buffer time (field or frame, depending on the time spacing of the data in your buffers). You can tell if you dropped any data by seeing if USTMSCpair.msc has incremented by more than 1. This MSC counter has some useful properties which we describe below along with UST/MSC.

 The VL also supports UST/MSC, another API for determining the UST of data which works for input and output. We describe UST/MSC below.

 We will explain how to distinguish dominant and non-dominant fields along with UST/MSC below.

UST Timestamping in Other Libraries

The Image Conversion library dmIC (part of libdmedia) lets you pass DMbuffers into a converter. The DMbuffers which exit the converter will have an identical USTMSCpair (retrievable with dmBufferGetUSTMSCpair()) to the corresponding input DMbuffer.

UST and gettimeofday()

You can use dmGetUSTCurrentTimePair() to get a UST and a struct timeval which represent the same instant of time. Unlike UST, the clock returned by gettimeofday() may occasionally be adjusted forwards or backwards by system administrators and network time daemons. For this reason and other reasons given in Clocking and Clock Drift: Relating Multiple MSC or UST Clocks below, you'll probably want to call this function periodically to maintain a fresh correlation between UST and gettimeofday().

 This function provides one possible way to relate the UST clocks of two different machines. See Synchronization Across Machines for more information.

UST for Sampled Data Types: UST/MSC

UST/MSC is a way to get at UST for sampled data types like audio and video. These data types have a certain unit (an audio frame, a video field or frame) which repeats at a regular interval. The input or output of each unit is driven not by your software, but by an electrical oscillator in your machine or in some external piece of equipment. For example, a video output device which is genlocked to an external signal must precisely align each outgoing field with those in the external signal.

 As a result of this, you can think of your signal as having discrete "slots" which each hold one unit of data. Here's an example with an audio and video signal:

The UST of a slot is the UST of the synchronization point of the data in that slot (see How to Use UST above). Each slot in a given signal is numbered with an MSC ("media stream count" or "media sequence count").

 If your program inputs a sampled signal, your job is to read the data out of each slot. If your program outputs a sampled signal, your job is to place the right data into each slot. In both cases, you need a way to figure out the UST of any given slot.

 The UST/MSC support in the AL and VL consists of two operations which let you do this:

  • alGetFrameNumber(), vlGetFrontierMSC(): This operation gives you a way to identify the MSC of the next piece of data you are about to read from an AL port or VL path, or the next piece of data you are about to write to an AL port or VL path. This MSC is called the "frontier MSC." We'll define the read and write operation for each library below.

     Until we say otherwise, this document will assume that your program reads or writes every slot. On input, this means that you read data out of your buffer frequently enough that the device always has room to deposit new data, so no data is ever dropped. On output, this means that you write data to your buffer frequently enough that the device will never starve for data, so the output will never glitch (e.g., audio click, video flash or pause). As we'll see later, the frontier MSC is also useful for detecting these error conditions.

  • alGetFrameTime(), vlGetUSTMSCPair(): This operation returns a "UST/MSC pair" for your AL port or VL path. This pair has an MSC M and a UST U. M identifies a particular slot in the signal, and U is the UST of that slot. M could be any recent MSC. It's not necessarily the MSC that's crossing the jack "now," or the MSC that you just input or output. It's not necessarily the frontier MSC. It could even be an MSC that the machine has not yet input or has already output. The important part is that the library has paired together M and U so you can read them atomically.

The first operation links your data to an MSC. The second operation links an MSC to a UST. You can use both operations to get from your data to UST. This works even if the MSC in the pair differs from the MSC whose UST you want, because you know how the slots are spaced in time (the audio sampling rate, the video frame/field rate). For example, to determine the UST of the next piece of data you're going to read or write (that is, the frontier MSC), do this:
{
  double ust_per_msc = get_ust_per_msc();
  USTMSCpair pair = get_ust_msc_pair();
  stamp_t frontier_msc = get_frontier_msc();
  
  /* step 1: figure out which MSC you want a UST for */

  stamp_t desired_msc = frontier_msc;

  /* step 2: compute the UST of that MSC */

  stamp_t desired_ust = pair.ust + ust_per_msc*(desired_msc - pair.msc);
}
When extrapolating from a UST/MSC pair as above, be careful to use sufficiently precise C types which will not overflow.

Determining the ust_per_msc figure can sometimes be tricky, because of various idiosyncrasies of audio hardware, video hardware, and video itself. Each library provides a simple way to get a nominal ust_per_msc figure. For any UST/MSC device, you can also measure ust_per_msc empirically instead of computing the nominal ust_per_msc figure. This often has accuracy advantages and works for any sync source. We will describe this more fully in Clocking and Clock Drift: Relating Multiple MSC or UST Clocks below.

 You don't have to compute ust_per_msc or get a UST/MSC pair every time you read or write data. Typically, applications compute ust_per_msc only once, and get a new UST/MSC pair every second or so.

 MSC values you get from a particular AL port or VL path are specific to that port/path only. For example, if you input the same video signal using two different VL paths, you must not assume that MSC 123 on one path refers to the same video field/frame as MSC 123 on the other path. When you need to compare the time of data on two different paths or ports, convert to UST.

 After giving the specifics of UST/MSC for each library, we will provide some pictorial examples of the above pseudocode.

UST/MSC in the Audio Library

The AL UST/MSC support is useful for paths which bring audio data into memory and out of memory (ie, not for monitoring paths):
type of data in each slot (MSC):  one audio frame
time spacing of each slot (MSC):  one audio frame period
read (dequeue):                   alReadFrames()
write (enqueue):                  alWriteFrames()
get frontier MSC:                 alGetFrameNumber()
get UST/MSC pair:                 alGetFrameTime()
get nominal UST per MSC:          sample code below
Below we will provide many pictorial examples of using the AL's UST/MSC support.

 To get a nominal ust_per_msc figure, do this:

{
  ALpv pv;
  double samprate;
  double ust_per_msc;
  
  pv.param = AL_RATE;
  if (alGetParams(res, &pv, 1) < 0 || pv.value.ll <= 0)
    {
      /* cannot determine nominal sample-rate */
    }
  else
    {
      samprate = alFixedToDouble(pv.value.ll);
      ust_per_msc = 1.E9 / samprate;
    }
}
The "res" parameter to alGetParams() is the resource id of your AL clock generator, your AL device that uses that clock generator, or your AL port that uses that device.

 The alGetParams() call will succeed but return a negative sampling rate if the nominal sample-rate cannot be determined. This only happens if the AL device's clock generator's master clock is an AES or ADAT digital signal which does not contain rate information (e.g. an AES source which has not set the rate bits in the subcode).

 You can measure ust_per_msc instead, which works in every case and gives you a more accurate figure. This will be described in Clocking and Clock Drift: Relating Multiple MSC or UST Clocks below.

UST/MSC in the Video Library

The VL UST/MSC support is useful for paths which have a VL_MEM node which brings video data into memory or out of memory:
type of data in each slot (MSC):  one buffer entry (VLInfoPtr or DMbuffer)
time spacing of each slot (MSC):  VL_RATE (which is in buffer entries per second)
read (dequeue):                   vlGetNextValid() or vlEventRecv()
write (enqueue):                  vlPutValid() or vlDMBufferSend()
get frontier MSC:                 vlGetFrontierMSC()
get UST/MSC pair:                 vlGetUSTMSCPair()
get nominal UST per MSC:          vlGetUSTPerMSC()
On older devices (pre DIVO) VL UST/MSC requires that VL_RATE is set to the maximum rate for the current video timing and cap type:
VL_TIMING   VL_CAP_TYPE                                       required VL_RATE
525-line    VL_CAPTURE_NONINTERLEAVED or VL_CAPTURE_FIELDS    60/1
525-line    VL_CAPTURE_INTERLEAVED                            30/1
525-line    VL_CAPTURE_EVEN_FIELDS                            30/1
525-line    VL_CAPTURE_ODD_FIELDS                             30/1
625-line    VL_CAPTURE_NONINTERLEAVED or VL_CAPTURE_FIELDS    50/1
625-line    VL_CAPTURE_INTERLEAVED                            25/1
625-line    VL_CAPTURE_EVEN_FIELDS                            25/1
625-line    VL_CAPTURE_ODD_FIELDS                             25/1
On older devices, the MSC increases by 1 for each slot in the stream to/from your application:
 
Timing       VL_CAP_TYPE                                       MSC rate           MSC increment per buffer
Interlaced   VL_CAPTURE_NONINTERLEAVED or VL_CAPTURE_FIELDS    once per field     1
Interlaced   VL_CAPTURE_INTERLEAVED                            once per 2 fields  1
Interlaced   VL_CAPTURE_EVEN_FIELDS or VL_CAPTURE_ODD_FIELDS   once per 2 fields  1
Progressive  not supported
On newer devices (DIVO, HDIO, and those released after 1999), MSC increases by one for each slot in the stream in/out of the machine:
 
Timing       VL_CAP_TYPE                                                             MSC rate        MSC increment per buffer
Interlaced   VL_CAPTURE_NONINTERLEAVED or VL_CAPTURE_FIELDS                          once per field  1
Interlaced   VL_CAPTURE_INTERLEAVED, VL_CAPTURE_EVEN_FIELDS, or VL_CAPTURE_ODD_FIELDS  once per field  2
Progressive  VL_CAPTURE_INTERLEAVED                                                  once per frame  1
In addition to its UST/MSC API, the VL has a conceptually simpler UST timestamping scheme for video input, which we described in UST Timestamping for Video Library Input above. Consider using UST timestamping instead of UST/MSC if your application only does video input. If you use UST timestamping, don't forget that DMediaInfo.sequence (classic VL API) is always in video field times and cannot be compared with MSCs, whereas USTMSCpair.msc (O2/cross-platform VL API) has the same units and offset as MSCs you get from the UST/MSC API. USTMSCpair.msc and the frontier MSC may deviate if your input buffer overflows, as described in UST/MSC: Using the Frontier MSC to Detect Errors.

Dominant/Non-Dominant and F1/F2

For VL_CAPTURE_NONINTERLEAVED paths, each buffer contains either a dominant or non-dominant field (see Definitions: F1/F2, Interleave, Field Dominance, and More). The first field you dequeue on input or enqueue on output after beginning a VL transfer will be a dominant field. Subsequent fields you enqueue or dequeue will alternate between non-dominant and dominant. Therefore, if the device must drop video (input overflow, output underflow, input signal problems), it will skip an even number of fields at the video jack to maintain the sequence. The fields skipped may begin with either a dominant or non-dominant field. Some VL devices allow you to choose F1 or F2 dominance. All other VL devices fix F1 dominance.

 For VL_CAPTURE_FIELDS paths, the device may skip an even or odd number of fields during an input overflow, output underflow, or input signal problem. Therefore, you need a way to determine what field type you are currently enqueuing or dequeuing. This is currently only possible on vino, ev1, ev3, and mvp, where you can use any of DMediaInfo->sequence, USTMSCpair.msc, or an MSC from UST/MSC to determine the field type:

  • (msc%2)==0 indicates an F1 field
  • (msc%2)==1 indicates an F2 field

Jack-to-Jack Paths

Some VL devices support jack-to-jack monitoring or processing paths that do not contain a VL_MEM node. On these paths you can use vlGetPathDelay() to determine the path's jack-to-jack latency in nanoseconds.

Sample UST/MSC Calculation: Audio Input

First we'll show how to compute the UST of any audio frame you read from an AL input port. The first step is to open the port:

This diagram shows your program sitting atop the audio subsystem (the AL and the audio hardware). We show an incoming audio signal as a stack of slots, each of which holds one frame of audio. Some of those frames have not entered the computer yet, and some are sitting in the computer waiting for you to read them. As in the diagram above, each slot has an MSC and a UST. The UST is the UST at which the data in that slot will hit, or did hit, the audio input jack of the machine.

 The next thing your program does is to read some data with alReadFrames:

The AL examples in this document will use reads and writes of 2-5 frames for typographical convenience. Typical programs transfer 10ms or more of audio frames with each alReadFrames() or alWriteFrames(). UST/MSC works the same either way.

 Now you have some data. The next thing you need to do is figure out the MSC of each of your frames. You do that with one call to alGetFrameNumber():

alGetFrameNumber() returns a frontier MSC of 80. This is the MSC of the next frame you're about to read from the port. Since audio frames come at regular intervals, and since we're assuming that we read the data out of every slot of the signal (ie, we never drop any audio frames on input), we then know that the MSCs of the 4 frames we just read are 76, 77, 78, and 79.

 Now that we know the MSC of each of our frames, we can figure out the UST of each of the frames using alGetFrameTime():

alGetFrameTime() returns a pair of numbers to us: MSC 98 and UST u. This tells us that MSC 98 hits the jack at UST u. We want to know when MSC 76, 77, 78, and 79 hit the jack. We find the UST of MSC 76 by taking u and adjusting it by (76-98) audio frame periods. The audio frame period T is (1/audio sampling rate) in nanoseconds. T is the same as ust_per_msc in the pseudocode above. You can use this simple extrapolation trick to find the UST of any recent MSC, and we do it for MSC 77, 78, and 79 above.

 It doesn't really matter which MSC alGetFrameTime() tells us about. It could return a UST/MSC pair for MSC 100, MSC 96, MSC 80, MSC 76, or any other recent MSC, and your code will still give you a UST typically within a few audio frames of accuracy (on some devices, the UST will be better than audio frame accurate). The reason you need a recent MSC (an MSC within a few seconds of the MSC currently hitting the jack) has to do with a secondary effect called clock drift, which will be explained later.

Sample UST/MSC Calculation: Audio Output

Now we'll show how to compute the UST of the next audio frame you write to an AL output port. This tells you when that audio frame will hit the jack of the computer. If you want a piece of audio data to play at a certain UST, you write some other data to the port (perhaps silence) until you see that the UST of the next audio frame to go into the port is as close as possible to your desired UST. You then switch to writing your desired data. This is the basic mechanism for synchronizing the output of an audio signal with the input or output of other signals (such as video).

 Remember our assumption that your program fills every slot of the output signal with data. This means that every time the output device needs a new audio frame to output, you will have made one available with alWriteFrames(). If the output device begins to starve for data, we say that the buffer is "underflowing." When you first open an AL output port, there are no frames in the buffer, so the buffer is underflowing. The first thing you need to do, before attempting to use UST/MSC, is to get some audio frames in there! You need to enqueue at least enough silence to handle the worst-case amount of time between enqueuing the silence and taking the first UST measurement.

 Let's assume that you have opened an AL output port and you have written enough data to it to keep the port happy for a while. You want to determine which audio data you should write to the port next, so you want to know the UST of the next audio frame you write to the port:

Note that MSCs now decrease (data gets older) as you move down the diagram, since the data flows in the opposite direction. The slot shown in dotted lines is the next audio frame you will write to the AL. It's not an audio frame that is currently in the AL buffer.

 As with the input case, one call to alGetFrameNumber() tells you the MSC of each of your audio frames:

alGetFrameNumber() returns a frontier MSC of 65. This is the MSC of the next frame you're about to write to the port. So the MSCs of your four frames are 65, 66, 67, and 68.

 Be careful: In the input case, the audio frame with the frontier MSC is in the AL buffer. In the output case, the audio frame with the frontier MSC is in your buffer.

 Continuing to follow the pattern, we get a UST/MSC pair and use it to compute the UST of each frame we are about to write:

The explanation here is identical to that for input.

 Now we know the UST of each audio frame, so we can choose some data to output and proceed with the alWriteFrames().

UST/MSC: Using the Frontier MSC to Detect Errors

So far in this document we have been carefully assuming that:
  • On input, your AL or VL buffer never overflows. This means that you read data out of your buffer frequently enough that the device always has room to deposit new data, so no data is ever dropped.

  • On output, your AL or VL buffer never underflows. This means that you write data to your buffer frequently enough that the device will never starve for data, so the output will never glitch (audio click, video flash or pause).

These assumptions allow us to use the frontier MSC (returned by alGetFrameNumber() and vlGetFrontierMSC()) to associate an MSC with each piece of data we read or write, so we can get a UST for our data.

 You can also use the frontier MSC to detect overflow and underflow conditions, and determine their precise length.

 Here's an AL example of how to detect overflow in an input buffer:

{     
  stamp_t newmsc, oldmsc=-1;
  
  /* this is your main data-reading loop, not a special one */
  while (1)
    {
      alReadFrames(port, buf, nframes);
     
      alGetFrameNumber(port, &newmsc);
      if (oldmsc >= 0)
        {
          stamp_t M = (newmsc-oldmsc) - nframes;
          if (M != 0)
            printf("we overflowed for %lld MSCs!\n", M);
        }
      oldmsc = newmsc;
    }
}
Each time you read nframes frames, check to see how much the frontier MSC has incremented. If the frontier MSC has incremented by nframes+M, then you know that an overflow just occurred, and that it lasted exactly M sample periods.

 Use this basically identical code to detect underflow in an AL output buffer:

{     
  stamp_t newmsc, oldmsc=-1;
  
  /* this is your main data-writing loop, not a special one */
  while (1)
    {
      alWriteFrames(port, buf, nframes);
     
      alGetFrameNumber(port, &newmsc);
      if (oldmsc >= 0)
        {
          stamp_t M = (newmsc-oldmsc) - nframes;
          if (M != 0)
            printf("we underflowed for %lld MSCs!\n", M);
        }
      oldmsc = newmsc;
    }
}
Each time you write nframes frames, check to see how much the frontier MSC has incremented. If the frontier MSC has incremented by nframes+M, then you know that an underflow just occurred, and that it lasted exactly M sample periods.

 This technique works just as well with the VL.

 For more information on how and why this works, check out UST/MSC: Using the Frontier MSC to Detect Errors.

UST/MSC in the Graphics Libraries

UST/MSC fits quite nicely into the double-buffered model of OpenGL and IRIS GL, but currently there is no graphics UST/MSC API. For more discussion of this and some methods you can use to get USTs now, see UST and Graphics.

UST in the CL for cosmo1 and cosmo2

The cosmo1 and cosmo2 devices support direct paths from video to compression to memory, and memory to decompression to video. You use clGetNextImageInfo() to retrieve UST and field count information:
typedef struct {
    unsigned size;       /* size of compressed image in bytes */
    unsigned long long ustime;
    unsigned imagecount;
    unsigned status;     /* additional status information */
} CLimageInfo;
The API is UST timestamping for input and something bizarre for output:
 
 
  • Video-to-compression-to-memory: when you call clGetNextImageInfo() to get the size of the next compressed JPEG field, you can read the field's UST out of CLimageInfo.ustime and you can read a field counter out of CLimageInfo.imagecount. A jump in CLimageInfo.imagecount by more than 1 indicates dropped data. CLimageInfo is like DMediaInfo for video input.
  • Memory-to-decompression-to-video: again you call clGetNextImageInfo(). CLimageInfo.ustime contains the UST at which the next JPEG field you enqueue will hit the output jack of the machine. If you miss a field on output, the UST will jump by one field time. CLimageInfo.imagecount will not. CLimageInfo.imagecount contains a delayed count of the number of fields you put into the CL buffer. If you want to determine how many images you missed on output, divide the jump in CLimageInfo.ustime by the field time (being careful to round to nearest, since the UST may have small amounts of jitter).
The cosmo1 and cosmo2 boards violate the rule about UST being at the video jack. To do analog video I/O with cosmo1, you plug your cosmo1 into an ev1 board (see ev1 and cosmo1). To do digital video I/O with cosmo2, you wire your cosmo2 to an ev3 digital video jack with the VL (data flows over the GIO or XIO bus). In both cases, the UST returned by clGetNextImageInfo() specifies when the associated field hits the cosmo1/cosmo2 board's jack/bus, not the ev1/ev3 board's video jack. Your software must compensate for the delay through the ev1/ev3 board. For video to compression to memory, this delay is on the order of a few video lines. For memory to decompression to video, it is either a few video lines, or one frame time plus or minus a few lines. In both cases you can query the delay with vlGetPathDelay().

 The CL interface for cosmo1 and cosmo2 always deals in fields. The cosmo1 board is always F1 dominant (see Definitions: F1/F2, Interleave, Field Dominance, and More). You set cosmo2's dominance using VL_MGC_DOMINANCE_FIELD on the VL_CODEC node (the default is F1 dominance). Here is how to tell which of your fields is dominant and which is non-dominant:
 
 

  • For cosmo2, the first field you enqueue or dequeue from the CL buffer after creating a CL compressor or decompressor is a dominant field. Subsequent fields you enqueue or dequeue will alternate between non-dominant and dominant. Therefore, if the CL buffer fills up on input or the CL device starves for data on output, cosmo2 will skip an even number of fields at the video jack to maintain the sequence.
  • The cosmo1 device may skip an even or odd number of fields. You must determine the type of each field you input or output:
    • For cosmo1 video to compression to memory, look at the low bit of CLimageInfo.ustime. Yes, ustime, not imagecount. Yes, this bit normally is nanoseconds. Yes, this is a heinous hack. A 1 indicates a dominant field (which is always F1 for cosmo1). A 0 indicates a non-dominant field (F2 for cosmo1).
    • For cosmo1 memory to decompression to video, the first field you enqueue on the CL buffer will go out as a dominant field (which is always F1 for cosmo1). From then on, monitor jumps in CLimageInfo.ustime to see what kind of field you are addressing next. The ustime low bit hack will not work.

Clocking and Clock Drift: Relating Multiple MSC or UST Clocks

All the clocks in your system are driven by electrical oscillators. This includes the UST clock, the gettimeofday() clock, an audio port's clock (which drives its MSCs), and a video path's clock (which drives its MSCs). Sometimes the oscillator is outside your machine. Oscillators are imperfect. Two oscillators with the same nominal rate may run at slightly different actual rates depending on their manufacture, their age, or even the temperature.

 If your software makes an assumption about the rate ratio of any two clocks, then it must ensure that either:

  1. the two clocks are driven by the same oscillator, or
  2. the two clocks are driven by different oscillators, and the drift caused by the worst-case rate error of the two clocks over the worst-case running time:
     a. is acceptable for your application, or
     b. is compensated for by code in your application.

Movie Playback Example

An example will make this clear. A movie file stores audio at some sampling rate, say 44100 frames per second, and video at some field rate, say 50 fields per second. The file contains exactly 44100 audio frames for every 50 video fields. Say you want to play such a file. Your program sets up an AL port at 44100 Hz and a VL path at 50 Hz. Your program uses UST/MSC to prime the AL port and VL path so that the next audio and video data written will be simultaneous at the jack, and then it begins to write groups of 44100 audio frames and 50 video fields from the movie file. The 44100 Hz heartbeat that tells your audio hardware when to output a new frame comes from an oscillator, and so does the 50 Hz heartbeat that tells your video hardware when to output a new field.

 Say your audio system and your video system are clocked off of the same oscillator (case 1 above). This would be the case if, for example, your video system and your audio system are locked to a common external blackburst signal. If that oscillator is running a bit slow or fast compared to some other oscillator (say, the one in your wristwatch), it's no big deal: your audio system will not be running at exactly 44100 Hz, but your video system will have the exact same error, so the audio and video you provide to them will stay in sync.

 Now say your audio system and your video system are not clocked off of the same oscillator (case 2). This is a common situation on lower-end SGI systems, where users may not have the knowledge, desire, or equipment to slave their audio and video devices to a common oscillator. Then the ratio of the rates of your audio and video system may not actually be 44100/50. As your program runs, the sound coming out the audio jack will slowly drift ahead of the image coming out the video jack, or vice versa. This drift will continue to worsen without bound as playback proceeds.

 To take a worst-case example for typical hardware, say the audio oscillator is 50 parts per million fast (44102.205 Hz) and the video oscillator is 50 parts per million slow (49.9975 Hz). After playing for 10 minutes, the video will have drifted 60ms (3 fields) behind the audio, which is quite noticeable. What's worse, since your program continues to send 44100 audio frames for every 50 video fields, but since the devices consume 44100 audio frames for every 49.995 (roughly) video fields, an unbounded amount of video data slowly builds up between your program and the video device (in this case that data will build up in a VL buffer). Eventually you will run out of memory.

 Say your application requires field accuracy (audio and video in sync within ±10ms). If your application never deals with movie files longer than 1 minute, then your audio and video will never drift apart more than 10ms in either direction, and no more than one extra field will collect in the VL buffer, so you have no problem. This is case 2a.

 If your application can deal with movies of any length, then you have case 2b. In this case, you cannot achieve any upper bound on synchronization error or memory usage. You must measure the actual ratio of audio frames per video field, and adjust the data coming out of the movie file so that it actually has that ratio going into the devices. Depending on the quality constraints, this may require sampling rate converting the audio data, dropping or duplicating video fields, or other tricks. UST/MSC gives you a way to measure the ratio. To do this, grab two audio UST/MSC pairs spaced a second or more apart (call them (aust1, amsc1) and (aust2, amsc2)). Also grab two video UST/MSC pairs spaced a second or more apart (call them (vust1, vmsc1) and (vust2, vmsc2)). The ratio of audio frames to video fields is:
 
 

    ((amsc2-amsc1)/(aust2-aust1)) / ((vmsc2-vmsc1)/(vust2-vust1)).

Other Examples

The clock drift problem described above happens in many other scenarios:
  • If you are recording audio and video to a movie file, the movie file constrains you to storing 44100 audio frames per 50 video fields. If your input devices are not driven from a common clock, they may not produce data at this rate ratio. Depending on your synchronization requirement and maximum record time, you may need to tweak the data.
  • If your program processes an input audio or video signal by producing a corresponding output signal, and the two AL ports or VL paths are not clocked off the same oscillator, then either your output buffer will starve (if the output is faster) or your input buffer will overflow (if the input is faster). Again you may have to tweak the incoming data to match the output rate.
  • Whenever you use the UST/MSC API, you use code like this:
    {
      double ust_per_msc = get_ust_per_msc();
      USTMSCpair pair = get_ust_msc_pair();
      stamp_t frontier_msc = get_frontier_msc();
      
      /* step 1: figure out which MSC you want a UST for */
    
      stamp_t desired_msc = frontier_msc;
    
      /* step 2: compute the UST of that MSC */
    
      stamp_t desired_ust = pair.ust + ust_per_msc*(desired_msc - pair.msc);
    }
    ust_per_msc is a rate ratio between the UST clock and a device's MSC clock. UST and MSC clocks are not driven off of the same oscillator in any current SGI system, so their rate ratio is uncertain. desired_ust will have an error which is proportional to the error in ust_per_msc and to the distance you are extrapolating the UST/MSC pair (desired_msc - pair.msc):
     
     
    • You can reduce the error in ust_per_msc by measuring the actual rate ratio instead of using the nominal ust_per_msc figures described above. To do this, grab two UST/MSC pairs spaced a second or more apart (call them (ust1, msc1) and (ust2, msc2)), and compute:
        ust_per_msc = (ust2-ust1) / (msc2-msc1).
      You may want to measure ust_per_msc periodically for particularly long transfers.
       
       
    • You can reduce (desired_msc - pair.msc) by remembering that alGetFrameTime() and vlGetUSTMSCPair() return a "recent" pair, whose UST is within a few seconds of the current UST. The less you have to extrapolate a UST/MSC pair, the more accurate your UST will be.
  • If you want to relate the UST clock to the gettimeofday() clock, you can use dmGetUSTCurrentTimePair() as described above. These clocks are not derived from the same oscillator. Even if you assume the gettimeofday() clock is never adjusted by system administrators or network time daemons, you still cannot assume that 1 billion USTs represents the same amount of time as one timeval.tv_sec. You probably want to take multiple, sufficiently-spaced UST/Current Time pairs to determine the actual rate ratio.
  • If you want to relate the UST clocks of two different machines, keep in mind that their rate ratio may not be 1/1. We describe techniques for inter-machine synchronization in Synchronization Across Machines.

Where's the Oscillator?

This chart helps you locate the oscillator which drives the clocks used by your program. All of the crystal oscillators mentioned are different oscillators:
 
 
Clock                            Where's the oscillator?

  UST                            crystal in your machine

  gettimeofday()
    no time daemon               crystal in your machine
    time daemon running          in the timed master or NTP stratum-0 clock

  video input                    signal contains clock
                                 (oscillator is in upstream equipment)

  video output
    internal sync                crystal in your machine
    genlock/slave sync           sync source signal contains clock
                                 (oscillator is in upstream equipment)

  digital audio input            signal contains clock
  (AES/ADAT)                     (oscillator is in upstream equipment)

  analog audio input,
  analog audio output,
  digital audio output (AES/ADAT)
    sync source set to           sync source signal contains clock
    AES/ADAT input signal        (oscillator is in upstream equipment)
    sync source set to           video board provides clock internally
    internal video               (see video above)
    sync source set to           external blackburst signal contains clock
    external blackburst          (oscillator is in upstream equipment)
    sync source set to internal  crystal in your machine

If you have not used apanel, vcp, or some AL or VL code to explicitly specify a sync source for video output, analog audio input, analog audio output, or digital audio output, then your oscillator is most probably a crystal inside your machine.
 
 

Copyright

All text and images copyright 1999-2017 Chris Pirazzi unless otherwise indicated.