In my application, I am plotting data from multiple LabJacks, but when plotting on the same timescale I found that the data from each device is slightly shifted (I am pretty sure this is just because looping through and starting each stream means they start at slightly different times).
So I am trying to fix this by letting each LabJack have its own timescale, or at least its own offset from the first LabJack's timescale, by implementing System Clock Scan Time from here: https://labjack.com/support/datasheets/t-series/communication/stream-mod....
However, I am a bit confused about correlating CORE_TIMER with my system time. The page says to read CORE_TIMER and my system time in a loop five times and discard any reads that took too long. So now I have five core times and five system times... I don't quite understand what I am supposed to do with them.
First of all, how will I tell if a read took too long? Will it just have a longer gap between it and the previous read than the others? And what if it was the first CORE_TIMER read I got and the outbound communication was slow; how would I tell then?
Second, how exactly am I supposed to correlate these system times and CORE_TIMER values? All I can think of is reading the system time, then CORE_TIMER, then the system time again, and taking the midpoint of the two system times as the time the CORE_TIMER value was obtained. Then I could take the difference between that CORE_TIMER value and STREAM_START_TIME_STAMP to get the system time the stream started, and calculate timestamps from there using the scan rate. If that is the idea, why do I have five CORE_TIMER reads and five system time reads, each taken after a CORE_TIMER read?
All clocks involved are going to be slightly different. The error from one clock to the next will be several parts per million (i.e., a really small difference). This means that no matter what you do, even if two devices were started simultaneously, a second on one device would be slightly different from a second on another similar device. Typically this isn't a problem for short collections, but as collection time increases the difference becomes more and more noticeable, and one signal will appear to lag the other.
In practice, this means you have to choose one clock to be the master for time and use that clock to decide when to sample. That means running a clock pulse from one LabJack (the master) to the other LabJacks (slaves). The slave LabJacks have to be configured to trigger on the external clock pulse from the master rather than on the internal clock they normally use. Then configure the master LabJack to run the collection, start the collection on the slaves, and finally start the master. When the master starts, the other LabJacks will begin pulling in data exactly when the master samples, i.e. all the data will line up as expected.
The only real problem then is managing the data streams so you always return the same amount of data for each channel from each device, but for the most part this should be straightforward. The good news is that with the new configuration you'll always know that sample x on channel 1 of LabJack 1 was sampled at exactly the same time as sample x on channel 1 of LabJack n.
(This isn't 100% true as the LabJack is not simultaneously sampling and the microsecond delays between channel samples will be slightly different because the clocks are still ever so slightly different. In practice it doesn't make a difference as that level of time accuracy usually isn't needed.)
I've never done this with LabJacks, so see this forum topic on how to set up the varying stream: Externally Clocked Stream | LabJack
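For what it's worth, here is a rough sketch of what that setup might look like in software, assuming T-series devices and the LJM Python library. The register names come from the T-series datasheet, but the pin choices (FIO0 for the master's clock output, CIO3 for the slaves' clock input) and the T7's 80 MHz core clock are assumptions to verify against the linked topic:

```python
# Sketch only: one master outputs a clock pulse per scan; slaves stream off
# that external clock. Assumes T7s and the LJM Python library (labjack-ljm).
from labjack import ljm

def configure_master_clock(handle, scan_rate):
    """Output a 50% duty-cycle PWM on FIO0 at scan_rate to serve as the shared
    stream clock. Assumes the T7's 80 MHz core clock; wire FIO0 to each
    slave's CIO3."""
    roll = int(80e6 / scan_rate)                     # clock ticks per pulse
    ljm.eWriteName(handle, "DIO_EF_CLOCK0_ENABLE", 0)
    ljm.eWriteName(handle, "DIO_EF_CLOCK0_DIVISOR", 1)
    ljm.eWriteName(handle, "DIO_EF_CLOCK0_ROLL_VALUE", roll)
    ljm.eWriteName(handle, "DIO_EF_CLOCK0_ENABLE", 1)
    ljm.eWriteName(handle, "DIO0_EF_ENABLE", 0)      # DIO0 = FIO0
    ljm.eWriteName(handle, "DIO0_EF_INDEX", 0)       # feature 0 = PWM out
    ljm.eWriteName(handle, "DIO0_EF_CONFIG_A", roll // 2)  # 50% duty cycle
    ljm.eWriteName(handle, "DIO0_EF_ENABLE", 1)

def configure_slave_clock(handle):
    """Clock this device's stream from the external signal on CIO3 instead of
    the internal crystal (STREAM_CLOCK_SOURCE 2 = external, per the T7 docs)."""
    ljm.eWriteName(handle, "STREAM_CLOCK_SOURCE", 2)
```

Following the ordering described above, you would configure and start the slave streams first (externally clocked, they sit idle until pulses arrive), then run the master clock configuration so every device starts sampling on the same edges.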
If you do not need the device clocks synchronized, the core timer reads are not necessary. If you do need synchronization, the clocks from each device will drift slightly over time, so you need to read the core timer to resynchronize the device clock back to whatever system time you are aligning to.
If you grab a timestamp before reading the core timer and a timestamp after it, the core timer reading should represent the device's core timer value at about the halfway point between the two timestamps. So say you take timestamp1, coreRead1, timestamp2 in your program. If timestamp1 = 9:00:00.00 and timestamp2 = 9:00:00.02, you can say coreRead1 corresponds to a system time around 9:00:00.01. That is mostly hypothetical though; due to resolution constraints of system time functions, you will often see the same value for timestamp1 and timestamp2. Typical communication overhead for the core timer read is somewhere between 0.5-2 ms; if you see a difference between the start and end timestamps much greater than this, you should likely toss the reading.
Reading the core timer five times is just a suggestion; a single core timer reading and system timestamp is sometimes all you need. The idea is that the core timer read can take a variable amount of time and system timing functions often have poor resolution, so averaging a few readings may improve accuracy. It also builds in a mechanism for tossing "bad" readings.
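As a concrete example, the read loop might look something like this (a minimal sketch assuming the LJM Python library and the T7's 40 MHz core timer rate, i.e. half the 80 MHz core clock; check the datasheet for your device):

```python
import time
from labjack import ljm

CORE_TICK_HZ = 40e6   # T7 core timer rate (half the 80 MHz core clock)
MAX_READ_S = 0.005    # generous cutoff; typical reads take 0.5-2 ms

def correlate_core_timer(handle, attempts=5):
    """Read CORE_TIMER a few times, bracketing each read with system
    timestamps. Keep only fast reads, pairing each core value with the
    system time halfway between its two timestamps."""
    pairs = []
    for _ in range(attempts):
        t1 = time.time()
        core = ljm.eReadName(handle, "CORE_TIMER")
        t2 = time.time()
        if t2 - t1 <= MAX_READ_S:   # discard reads that took too long
            pairs.append(((t1 + t2) / 2.0, core))
    return pairs

def core_to_system_offset(pairs):
    """Average offset so that system_time ~= core_ticks / CORE_TICK_HZ + offset
    (ignores rollover, which is fine for reads taken close together)."""
    return sum(t - c / CORE_TICK_HZ for t, c in pairs) / len(pairs)
```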
Thank you doug3, that is an alternative way to solve my problem, and I think I might try it too if I can't get the other method to work the way I would like.
About that method: when using externally clocked streams, do you have to run a wire from one LabJack to another to send the signal, or can it be sent through the computer they are all connected to?
Regarding LabJack Support's reply: using the system time, core time, system time calls I can easily determine the system time at a certain core time, and use that to find a system time for the start of the stream. But how exactly would I go about resynchronizing the clocks every so often? I'm guessing something along the lines of reading the core time, seeing how far from the start time it is, and checking whether that differs from how much time the system clock says has passed. On that note, I know the core timer wraps back to zero at a certain value; what is that number? Finally, if I find they have drifted, say the system clock thinks 0.1 seconds less time has passed since the start than the core timer does, how would I actually resynchronize them? I don't really get what I am supposed to do if I'm partway into a stream and find they have drifted a little.
By the way, I'm streaming at a scan rate of 50,000 Hz, so I'm calculating the timestamp for each point as 0.00002 seconds after the previous one, starting from the system start time, and each stream read returns 25,000 * (number of channels used) data points every 0.5 seconds.
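In other words, my per-read timestamp calculation is roughly the following (plain Python; stream_start_sys is the stream start time in system time that I would get from the core timer correlation):

```python
SCAN_RATE = 50000        # Hz, so scans are 0.00002 s apart
SCANS_PER_READ = 25000   # 0.5 s of data per stream read

def read_timestamps(stream_start_sys, read_index):
    """System timestamps for every scan in the read_index-th stream read;
    all channels within a scan share that scan's timestamp."""
    first_scan = read_index * SCANS_PER_READ
    return [stream_start_sys + (first_scan + i) / SCAN_RATE
            for i in range(SCANS_PER_READ)]
```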
With externally clocked stream, you would need a wire from the main clock to every device you are synchronizing.
First, realize you can determine the core timer value for every future scan using the stream start timestamp. With that, you synchronize the core time to system time and use it to calculate the system timestamp for the first scan of a future stream read. You could synchronize every x stream reads or scans. To get timestamp data for the scans between synchronizations, you would use a calculated value based on the scan rate.
The core timer is an unsigned 32-bit integer register, and as such will roll over after every 2^32 - 1 ticks.
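Putting those two points together, the rollover-safe arithmetic might look like this (a sketch that assumes the T7's 40 MHz core timer rate and that STREAM_START_TIME_STAMP reports a CORE_TIMER value; check both against the datasheet):

```python
CORE_TICK_HZ = 40e6   # T7 core timer rate; rollover is ~107 s at this rate
WRAP = 2**32          # CORE_TIMER is an unsigned 32-bit counter

def core_ticks_elapsed(start_ticks, now_ticks):
    """Ticks elapsed since start_ticks, correct across a single rollover."""
    return (now_ticks - start_ticks) % WRAP

def scan_core_timer(start_ticks, scan_index, scan_rate):
    """Predicted CORE_TIMER value (mod 2**32) at a given future scan, using
    the stream start timestamp as described above."""
    return (start_ticks + round(scan_index * CORE_TICK_HZ / scan_rate)) % WRAP
```

The resynchronization step is then just recomputing the core-to-system offset and applying it to the first-scan timestamp of a future read, rather than shifting data you have already collected.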
LabJack Support has the answers (as always, the support here is great).
However, at the 50 kHz sample rate, I'd urge you to use the external clock to sync the sampling. It will save you a ton of headaches trying to realign the data; truthfully, that's not really possible over long runs, as the differences between the clocks add up quickly.
I ran into this phenomenon years ago and thought I could fix it, since I could always get the data to align locally by shifting it around in time. But if I did that, then at places a fair bit away in time (say 300 seconds out for a 1 kHz signal) the data would no longer match, as one trace would be ahead of or behind the other. Fix it up there and go back to the original point, and now the data doesn't match there. I suppose you could resample over and over again, but that's a lot of work. Once we went to a single clock source, all the misery, pain, and time wasted on calculations to "fix" the problem went away, and it just works.
Thank you for the help, that makes sense.
I think I will also test the external clock method as recommended; it seems like the safer option, with no worrying about shifting and realigning.