Hi everyone!
According to the documentation, a data item of type NSec contains two 4-byte integers. The first integer provides the number of seconds that have elapsed since 1990-01-01 00:00:00. That works fine.
The second integer should provide the number of nanoseconds that have elapsed since the start of the current second. However, it does not: the numbers are obviously microseconds (see below)!
The output below shows the results of reading the datalogger clock in quick succession. When the clock changes from 12:45:06 to 12:45:07, the second integer flips from 980000 to 0:
CR1000 clock: 2009-07-31 12:45:06
Logger timestamp (NSec): (617892306, 970000)
CR1000 clock: 2009-07-31 12:45:06
Logger timestamp (NSec): (617892306, 980000)
CR1000 clock: 2009-07-31 12:45:07
Logger timestamp (NSec): (617892307, 0) <---- !!!!!
CR1000 clock: 2009-07-31 12:45:07
Logger timestamp (NSec): (617892307, 20000)
CR1000 clock: 2009-07-31 12:45:07
Logger timestamp (NSec): (617892307, 40000)
If these numbers were nanoseconds, they would have to be 1000 times larger. Is this just a documentation error? Very strange, especially since there is also a separate data type for microsecond resolution. The difference matters if you need sub-second time resolution (as I do).
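For reference, here is a minimal Python sketch of how I decode the NSec field. Only the 1990 epoch and the two 4-byte integers come from the documentation; the big-endian layout, the function name, and the assumption of correctly behaving firmware are mine:

```python
import struct
from datetime import datetime, timedelta

# Epoch used by the NSec data type (per the documentation).
NSEC_EPOCH = datetime(1990, 1, 1)

def decode_nsec(raw: bytes) -> datetime:
    """Decode an 8-byte NSec value into a datetime.

    Assumes two big-endian signed 4-byte integers: seconds since
    1990-01-01 00:00:00 and (on correct firmware) nanoseconds into
    the current second.
    """
    seconds, nanos = struct.unpack(">ii", raw)
    return NSEC_EPOCH + timedelta(seconds=seconds, microseconds=nanos // 1000)

# With a correct nanoseconds field, the first reading above decodes to
# 2009-07-31 12:45:06.970000:
ts = decode_nsec(struct.pack(">ii", 617892306, 970000000))
```

If the sub-second value is really microseconds, as my logger reports, this decode is off by a factor of 1000 in the fractional part.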
My model: OSVersion = "CR1000.Std.15", OSDate = "080115"
Kind regards
Dietrich
Hi dgf,
The first thing to do is update your two-generations-old OS to "CR1000 OS 17", available here: http://www.campbellsci.com/downloads, and then try again.
We have noticed huge improvements between OS versions.
Let us know if that made any difference please.
Cheers
Stewart.
I do the documentation on the logger OSes, so this may very well be a documentation error :)
I went back and checked the initial info I had from the engineer, and the documentation reflects it correctly. I have sent a request for further clarification and will let you know.
Regards,
Dana
Dear Dana,
thanks for passing this on. I also think that the documentation is correct, but the output from the get-clock response packet (MsgType 0x97) is not. I did some more tests:
- When I read the clock, the second four-byte number in the NSec field is always between 0 and 999'999. This suggests microseconds, not nanoseconds as the documentation says.
- However, if I adjust the clock by 500'000 ticks, it is only adjusted by 0.0005 seconds. To adjust by 0.5 seconds, I have to set that part of the NSec "Adjustment" field (MsgType 0x17) to 500'000'000 - exactly as the documentation says!
It looks like only the read-clock command uses microseconds instead of nanoseconds. It is strange that nobody ever noticed. My CR1000 has shown this behaviour from the original OS version (~12?) up to the currently installed OS version 15.
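As a workaround on the affected OS, I simply scale the value from the read-clock reply myself. A minimal sketch (the function name and the flag are my own; the factor of 1000 is just the µs-to-ns conversion):

```python
def read_subsec_to_ns(raw_subsec: int, buggy_os: bool = True) -> int:
    """Normalize the sub-second part of a clock-read (MsgType 0x97) reply.

    On the affected OS versions (up to 15 on my CR1000) the field holds
    microseconds, so it must be scaled by 1000 to get real nanoseconds.
    On fixed firmware, pass buggy_os=False to use the value unchanged.
    """
    return raw_subsec * 1000 if buggy_os else raw_subsec

# The set/adjust command (MsgType 0x17) always takes nanoseconds,
# so a 0.5 s adjustment is written as 500_000_000 regardless of OS.
```

This keeps the rest of my code working purely in nanoseconds, matching what the documentation specifies for the NSec type.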
Kind regards
Dietrich
I was wondering where you were decoding the nanoseconds value. That is now clear, and now that we can see your OS version I can confirm this is not a documentation error but an issue with the firmware of some of the loggers: they incorrectly reported microseconds in the Get/Set clock messages in PakBus.
This was fixed last year in OS16 for the CR1000 (and equivalent OSes for the CR800 and CR3000). The change log reports this as:
"Fixed sub 1 second error when setting the clock via LoggerNet."
Sorry for the confusion. I suggest you get the latest OS from: www.campbellsci.com/downloads