Hi,
I've been trying to have my app automatically select the maximum possible scan rate given the input setting constraints. I've looked through the forum and the topic comes up repeatedly, but there is never an actual answer as to what the formula is for determining the sample rate in stream mode given arbitrary inputs.
I've tried estimating it by using an interchannel delay lookup table as described in the documentation, adding 5 µs of sample time per input and 10 µs of settling time, but that clearly doesn't work for some combinations, as I'm still getting stream overlap errors.
My question, using the input settings below as an example:
AIN0: ±1 V range, resolution index 4
AIN1: ±10 V range, resolution index 1
AIN2: ±10 V range, resolution index 1
Obviously the resolution index 1 settings will not be honored in stream mode, since only STREAM_RESOLUTION_INDEX exists and it appears to be a global setting rather than a per-input one. In that case I set the stream resolution to the maximum of the given input set, so 4 for the example above.
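For reference, this is roughly how I'm configuring it (a minimal sketch using the LJM Python wrapper; the register names are from the T7 datasheet, and the channel list is just the example above):

```python
# Minimal configuration sketch (LJM Python wrapper). Per-channel ranges are
# honored in stream mode, but resolution is global, so I take the maximum.
from labjack import ljm

handle = ljm.openS("T7", "ANY", "ANY")

# (channel name, range in volts, desired resolution index) -- my example set
channels = [("AIN0", 1.0, 4), ("AIN1", 10.0, 1), ("AIN2", 10.0, 1)]

for name, vrange, _res in channels:
    ljm.eWriteName(handle, name + "_RANGE", vrange)

# Global stream settings: use the most demanding resolution, auto settling.
ljm.eWriteName(handle, "STREAM_RESOLUTION_INDEX", max(r for _, _, r in channels))
ljm.eWriteName(handle, "STREAM_SETTLING_US", 0)  # 0 = auto
```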
I haven't been able to find out exactly what happens when you mix different gain ranges in stream mode as above, or how to programmatically estimate the needed sample time. STREAM_SETTLING_US is set to 0/auto, but how does that work when different inputs have different gains?
If the interchannel delays don't determine the needed sample time for a scan, what does? I'm essentially trying to build a lookup table of the needed 1) sample time, 2) settling time, and 3) interchannel delay, sum up all the timings, and have the app always select the maximum sample/scan rate possible.
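In other words, the model I'm trying to validate looks something like the sketch below. The per-sample and settling numbers are my own guesses (the 5 µs / 10 µs figures from above), not documented values, which is exactly what I'd like to pin down:

```python
# Rough per-scan timing model I'm trying to validate. All constants here are
# guesses/placeholders, not documented values.
SAMPLE_TIME_US = 5.0  # guessed fixed cost per sampled channel

# Hypothetical settling lookup keyed by (range_volts, resolution_index).
SETTLING_US = {
    (10.0, 1): 10.0,  # placeholder
    (1.0, 4): 50.0,   # placeholder
}

def estimated_max_scan_rate(channels):
    """channels: list of (range_volts, resolution_index) tuples."""
    scan_time_us = sum(SAMPLE_TIME_US + SETTLING_US[(rng, res)]
                       for rng, res in channels)
    return 1e6 / scan_time_us  # scans per second

# Example: the AIN0/AIN1/AIN2 configuration above.
print(estimated_max_scan_rate([(1.0, 4), (10.0, 1), (10.0, 1)]))
```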
In the documentation there's this sentence:
"...The stream interrupt will fire every 1000 μs, at which time it takes about 5 μs until the 1st channel is sampled.."
Where does that 5µs value come from? How does it scale across gains and resolutions?
Thanks in advance.
There is no hard and fast equation for the maximum supported rates for arbitrary settings; there are too many potential factors to allow a simple equation. I would recommend treating the maximum rate as the value given in our tables for your highest-resolution and highest-gain channel settings. The values in our tables are not necessarily the maximums you could see; they are derated to help ensure they hold regardless of firmware changes or other features running:
http://labjack.com/support/datasheets/t-series/appendix-a-1#t7-stream-rates
We expect you should be able to sample somewhere between 45 and 1300 samples per second with three channels at ±1 V and resolution index 4. Since two of the channels in your example are ±10 V, you should be closer to the 1300 value. Experimentally, with factory defaults (apart from the three AIN settings), I am able to sample at around 1700 samples per second before seeing overlap errors.
I'm not looking for a simple equation, but for a more complete understanding of all the different factors, so I can model something that roughly works.
The table isn't particularly helpful, because for my purposes I need at least one channel at ±1 V while the others are at ±10 V. This is a bit disappointing, as it leaves me only with the option of a try loop with decreasing scan rates until something works.
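For completeness, the try loop I mean is roughly this (a sketch against the LJM Python wrapper, assuming `handle` and `aScanList` are already configured as in my first post):

```python
# Brute-force fallback: step the requested scan rate down until a stream starts
# and runs for a few reads without errors. Assumes `handle` and `aScanList`
# (stream addresses) are already set up.
from labjack import ljm

def find_working_scan_rate(handle, aScanList, start_rate, step=0.95):
    rate = float(start_rate)
    while rate > 1:
        scans_per_read = max(1, int(rate / 60))
        try:
            actual = ljm.eStreamStart(handle, scans_per_read, len(aScanList),
                                      aScanList, rate)
            for _ in range(10):          # make sure overlap errors don't appear
                ljm.eStreamRead(handle)  # raises ljm.LJMError on stream errors
            ljm.eStreamStop(handle)
            return actual
        except ljm.LJMError:
            try:
                ljm.eStreamStop(handle)
            except ljm.LJMError:
                pass
            rate *= step                 # back off and try again
    return None
```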
With the above-mentioned settings, I'm getting a max scan rate of 843 Hz.
Removing one ±10 V input, I'm able to scan at 1587 Hz. That's where I don't really get it: jumping from one ±10 V channel to another is supposed to be minimally taxing, assuming the ±1 V input and its settling require the largest share of the scan time.
Eventually I'll resolve this by getting better external instrumentation amplifiers and running the T7 at ±10 V, gain 1 all the time, but for now I want to streamline the current process to maximize scan rates.
Let's see, things that affect max stream rate:
I think the point that will help with the above confusion is #3.
This was helpful to know, thank you.
This one doesn't make any sense to me. If you're running at the maximum possible scan rate, then it shouldn't matter where the high-gain channel sits in the read order. I tested it, and it made no difference: both orders behaved the same down to the microsecond of scan duration.
Knowing that the settling time applied to every sample within a scan is the maximum across the input list simplifies things a lot. I can simply take max(gain) and max(res) of a set and rebuild Table A.1.7 without the inaccuracies / derating.
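So the lookup I have in mind is just this (the rate entries below are placeholders; the real values would come from the rebuilt table):

```python
# Lookup keyed by the most demanding settings in the scan list. The rates here
# are placeholders; the real values come from rebuilding Table A.1.7 by
# measurement.
REBUILT_TABLE_SAMPLES_PER_S = {
    # (gain, resolution_index): max samples per second
    (1, 1): 100000,  # placeholder
    (10, 4): 5000,   # placeholder
}

def max_scan_rate(channels):
    """channels: list of (gain, resolution_index) tuples."""
    key = (max(g for g, _ in channels), max(r for _, r in channels))
    return REBUILT_TABLE_SAMPLES_PER_S[key] / len(channels)
```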
Experimentally building a new table gives me this:
The above table lists the scan rates the library accepts without throwing invalid-scan-rate or scan-overlap errors. Unfortunately it is not accurate at all: verifying the rates against actual data shows massive time dilation at some sample rates, up to a 1.3x factor at the high rates and a 2-3x factor at some of the very low rates.
Re-verifying the real scan rates against wall-clock time, and backing off until no time dilation appears, gives this table (scan reads were done at 60 Hz, so the scans per read is scanRate / 60):
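The verification step itself is roughly this (again the LJM Python wrapper, with `handle` and `aScanList` already set up): stream for a fixed wall-clock window, count the scans actually delivered, and compare against the nominal rate:

```python
# Measure the real delivered scan rate versus the nominal one to detect time
# dilation. Assumes `handle` and `aScanList` are already configured.
import time
from labjack import ljm

def measure_real_scan_rate(handle, aScanList, nominal_rate, seconds=10):
    scans_per_read = max(1, int(nominal_rate / 60))  # reads at roughly 60 Hz
    reported = ljm.eStreamStart(handle, scans_per_read, len(aScanList),
                                aScanList, nominal_rate)
    scans = 0
    t0 = time.monotonic()
    while time.monotonic() - t0 < seconds:
        aData, _, _ = ljm.eStreamRead(handle)
        scans += len(aData) // len(aScanList)
    elapsed = time.monotonic() - t0
    ljm.eStreamStop(handle)
    return reported, scans / elapsed  # (rate LJM reports, rate actually seen)
```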
So far it seems to be working well. The rates of interest to me are quite a bit higher than the documentation's rate table, sometimes by large margins.
I'd appreciate it if you could publish my other comment as well, in case it's helpful for others to read.