Converting STREAM_START_TIME_STAMP to Calendar Time

cmwarre
Converting STREAM_START_TIME_STAMP to Calendar Time

I was trying to follow the documentation at https://labjack.com/support/datasheets/t-series/communication/stream-mode#system-clock for converting STREAM_START_TIME_STAMP to a calendar time, and I'm fairly confused by what it has to say.

I'm trying to set up a triggered stream and correlate STREAM_START_TIME_STAMP to my system's calendar time. I'm streaming the CORE_TIMER register to try to correlate the two, but it looks like it overflows/resets very quickly, and I'm struggling to find a consistent way to relate it to my system's start time (especially because I don't know exactly when the stream is triggered).

Are there any good examples of doing this in any language?  

 

LabJack Support

Unfortunately, we do not have any programming examples for this. The STREAM_START_TIMESTAMP corresponds to the start time of the first stream scan. The overall timestamp implementation depends on details such as how you want to handle NTP updates and how much drift you can tolerate. The main requirements are as follows:

  1. You need to know how CORE_TIMER ticks translate to time. The CORE_TIMER runs at 40MHz, so 1 tick corresponds to 1/40000000 s. It is best to store the core timer frequency as a constant and use it in calculations where needed. It also helps to think of the time in terms of the scan rate: with a scan rate of 1000, you get a new scan every 1 ms, which corresponds to CoreFreq/ScanRate = 40000000/1000 = 40000 CORE_TIMER ticks between scans.
  2. You need to know how to handle rollover. This is trivial if the rollover behavior of an unsigned 32-bit integer is well defined, as it is in C/C++. In that case, you can just save the timestamp CORE_TIMER tick values in uint32 variables and increment by CoreFreq/ScanRate to get the next CORE_TIMER value. If you cannot rely on that rollover behavior, the logic is as follows:
    // If true, the core timer rolled over and the next value is CoreFreq/ScanRate + lastCoreTimerVal - 2^32;
    // otherwise the next value is simply lastCoreTimerVal + CoreFreq/ScanRate.
    if (4294967296 - 1 - CoreFreq/ScanRate < lastCoreTimerVal)
        nextCoreTimerVal = CoreFreq/ScanRate + lastCoreTimerVal - 4294967296;
  3. You need to synchronize the system time and the core timer. Since the CORE_TIMER value should correspond to roughly the halfway point between sending the read command and receiving the result, you can take the system time before and after the CORE_TIMER read and say the CORE_TIMER value corresponds to startTime + (endTime-startTime)/2. Do this multiple times so you are not synchronizing against a CORE_TIMER read that is an outlier in response latency; a sketch of this is shown after this list.
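
To illustrate requirement 3, here is a rough C sketch of that synchronization. It assumes the LJM library's LJM_eReadName call and a POSIX clock; the NowSeconds and SyncCoreTimer names and the ten-read loop are illustrative choices, not anything from the LJM API.

    #include <stdint.h>
    #include <time.h>
    #include <LabJackM.h>

    /* Current system time in seconds (POSIX real-time clock). */
    static double NowSeconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* Read CORE_TIMER several times, keep the lowest-latency read, and pair
       its value with the midpoint of the system timestamps around that read. */
    static int SyncCoreTimer(int handle, uint32_t *coreTick, double *sysTime)
    {
        double bestLatency = 1e9;
        for (int i = 0; i < 10; i++) {
            double value;
            double start = NowSeconds();
            int err = LJM_eReadName(handle, "CORE_TIMER", &value);
            double end = NowSeconds();
            if (err != LJME_NOERROR)
                return err;
            if (end - start < bestLatency) {
                bestLatency = end - start;
                *coreTick = (uint32_t)value;            /* raw 32-bit tick count */
                *sysTime = start + (end - start) / 2.0; /* midpoint of the read */
            }
        }
        return LJME_NOERROR;
    }

The resulting coreTick/sysTime pair is what the example below builds on.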

With all of that, you synchronize the core timer and adjust CORE_TIMER timestamps based on the synchronized value. For example, if at the start of stream we correlate a CORE_TIMER value of 123456 to local time 8:00:00.00 (h:mm:ss) and the STREAM_START_TIMESTAMP reading is 120000, we can say the first scan started 3456 ticks before our system timestamp, or 3456/40000000 = 86.4 microseconds before it. The first scan timestamp would then be 7:59:59.9999136.

If the scan rate is 1000, the second scan should happen about 1000 microseconds later at 8:00:00.0009136, or we could calculate the CORE_TIMER value it should have (40000000/1000 = 40000, so the CORE_TIMER value should be 120000 + 40000 = 160000). The CORE_TIMER value for each scan should always track according to the stream start timestamp, so in this example the scans would have CORE_TIMER values of 120000, 160000, 200000, 240000, etc. until the value rolls over.

The rest of the implementation is a matter of when or where you want to synchronize the CORE_TIMER to the system timestamp as described in requirement 3 above. You could synchronize just once at the start of stream, but you would have increasing clock drift the longer you stream.
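
Putting that together, here is a hedged sketch of converting a scan index to a calendar (epoch) timestamp from a synchronized coreTick/sysTime pair and the stream start timestamp. The ScanTimeSeconds function and its parameters are illustrative, not part of the LJM API.

    #include <stdint.h>

    #define CORE_FREQ 40000000.0  /* CORE_TIMER runs at 40 MHz */

    /* Epoch time (in seconds) of scan number scanIndex, given the CORE_TIMER
       value latched at stream start and a synchronized pair (syncedTick,
       syncedEpochSeconds) from the CORE_TIMER/system-clock synchronization. */
    double ScanTimeSeconds(uint32_t streamStartTick, uint32_t syncedTick,
                           double syncedEpochSeconds, double scanRate,
                           uint64_t scanIndex)
    {
        /* uint32 subtraction handles rollover; the int32 cast keeps the sign,
           so this works whether stream start was before or after the sync. */
        int32_t startOffsetTicks = (int32_t)(streamStartTick - syncedTick);
        double startEpoch = syncedEpochSeconds + startOffsetTicks / CORE_FREQ;
        /* Each scan is 1/scanRate seconds after the previous one. */
        return startEpoch + scanIndex / scanRate;
    }

Plugging in the numbers from the example above (streamStartTick = 120000, syncedTick = 123456, scanRate = 1000), scan 0 lands 86.4 microseconds before the synchronized system time and scan 1 lands 1000 microseconds after scan 0, matching the arithmetic above.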