Sensors, Electronics and Prototyping

Time Scale

Viewing 15 posts - 1 through 15 (of 18 total)
    Jason Greenberg

I have been running some experiments and have noticed that the elapsed time reported by the IMU is incorrect. Specifically, the difference between the processed gyro (or accelerometer) time in the first binary packet I capture and that in the last packet is off. For instance, I timed the collection of packets for 20 seconds using an external timer, then compared the time difference between the first and last packets and got roughly 10.2 seconds. In another case I collected packets for 40 seconds, and the time difference from the IMU came to roughly 27.3 seconds.

I have been assuming that the hex value that comes from the time fields in the binary packets is in seconds, but I have not been able to find any mention of the reported scale in the UM7 datasheet. I am starting to suspect there might be some scaling factor I am not considering, e.g. a TICKS_PER_SECOND conversion factor.
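For reference, here is how I am interpreting the field (a sketch assuming the value is an IEEE-754 float in seconds, with the UM7's usual most-significant-byte-first ordering; `um7_time_seconds` is my own helper name):

```cpp
#include <cstdint>
#include <cstring>

// Interpret the 4 data bytes of a UM7 time register as an IEEE-754
// float in seconds. The UM7 sends multi-byte values MSB first.
float um7_time_seconds(const uint8_t bytes[4]) {
    uint32_t word = (uint32_t(bytes[0]) << 24) |
                    (uint32_t(bytes[1]) << 16) |
                    (uint32_t(bytes[2]) << 8)  |
                     uint32_t(bytes[3]);
    float t;
    std::memcpy(&t, &word, sizeof t);  // reinterpret the bits safely
    return t;
}
```

If this assumption is wrong, the computed intervals would of course be off by a constant factor — which is what I thought I was seeing.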

A clarification of this issue would be appreciated.

    Thank You


    Hi Jason,

    There is no scaling factor required. The system time is based on the rate of an internal crystal oscillator, so it should be pretty solid.

The fact that your two intervals imply two different internal clock rates suggests that something is happening on the logging end. What are you using to collect and log the data? My suspicion is that data from the sensor is being dropped as it comes in because it isn’t being collected fast enough: the most recent packets would be dropped, and the logged timestamps would correspond to old packets.

    Jason Greenberg

    Thank You for the hint.

I am using a BeagleBone Black with a C++ program to capture the IMU packets. I know from experience running this code that not all received IMU packets are valid. For instance, roughly 1 out of 5 packets received is valid (i.e. has the correct header sequence and checksum). Would that be something wrong with my code, or is that normal?

I tried lowering the rate at which the IMU broadcasts processed IMU data from 200 Hz to 150 Hz, but that did not seem to have much of an effect. Are there other areas of tweaking you could suggest, based on experience?

    Thank You again for the help.


Interesting. No, it is definitely not normal for the majority of the packets to be invalid. Bad checksum errors should be very rare. What baud rate are you running? You might try lowering it to see whether that improves the bit error rate.

    Jason Greenberg

I have been using a baud rate of 115200. I just double-checked my ratio of valid to total packets (I hadn’t looked at this in a while and gave a rough guess earlier). For 3 runs that should have lasted roughly 30 seconds each, I was getting on average 1144 valid packets out of 2860 total, or roughly 4 out of 10 valid packets. The IMU was broadcasting at 100 Hz. The 30 seconds was the time difference from the IMU packets; the code stops once the time difference exceeds 30 seconds.

I reduced the baud rate to 57600 and also reduced the broadcast frequency to 30 Hz. The results were worse. For 2 runs that should have lasted roughly 30 seconds each, I got 531 valid out of 13787 total and 136 valid out of 30346 total. Again, the 30 seconds was the time difference from the IMU packets; the code stops once the time difference exceeds 30 seconds.

I know, based on the output counter message I have in my code, that when running the IMU at both 57600 and 115200 there are periods when the code gets stuck waiting for a valid packet after running for a few seconds. At 115200 this pause is very brief, but at 57600 the pause can last what seems to be roughly 20-40 seconds at times.
I believe this is an indication that the baud-rate timing of the BeagleBone and the IMU is not staying aligned correctly. I am not sure what might be causing this or what I might be able to do to correct it.

I should also mention that currently my code receives each character/byte from the IMU one at a time, filling a buffer manually with my own software function, rather than using a pre-built library function that accesses the hardware buffer. I originally converted this from Arduino code and did not find it necessary to switch to different functions. I will try switching to a library function that reads the buffer status from hardware to see if this results in a significant improvement.
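For completeness, this is roughly the validation I am doing (a sketch based on my reading of the datasheet’s framing — 's','n','p', a packet-type byte, an address byte, optional data, then a 16-bit sum-of-bytes checksum; `um7_packet_valid` is my own function name):

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Validate one complete packet: 's','n','p', packet-type (PT) byte,
// address byte, optional data, then a 16-bit checksum equal to the
// sum of every preceding byte (sent MSB first).
bool um7_packet_valid(const std::vector<uint8_t>& pkt) {
    if (pkt.size() < 7) return false;  // shortest packet: header + PT + addr + checksum
    if (pkt[0] != 's' || pkt[1] != 'n' || pkt[2] != 'p') return false;
    const uint8_t pt = pkt[3];
    const bool has_data = pt & 0x80;
    const bool is_batch = pt & 0x40;
    const std::size_t data_len = is_batch ? 4u * ((pt >> 2) & 0x0F)
                                          : (has_data ? 4u : 0u);
    if (pkt.size() != 5 + data_len + 2) return false;
    uint16_t sum = 0;
    for (std::size_t i = 0; i + 2 < pkt.size(); ++i) sum += pkt[i];
    const uint16_t rx = uint16_t((pkt[pkt.size() - 2] << 8) | pkt[pkt.size() - 1]);
    return sum == rx;
}
```

A packet failing this check gets discarded, which is where my low valid-packet counts come from.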

    If you have other suggestions/observations please let me know, any additional hints will be greatly appreciated.


    I wouldn’t think that the BeagleBone would have a hard time synthesizing a good clock rate, nor has the UM7 had those kinds of issues before. If there is a lot of bad data coming in, here is where I would look (in order of likelihood):

    1. Make sure that the BeagleBone and the UM7 share a common ground.
    2. Make sure that the logic level of the BeagleBone is compatible with the 3.3V logic level of the UM7.
    3. Check for an intermittent/bad connection on the TX and RX lines of the UART. An oscilloscope can help you determine whether the signals are clean.
    4. Check your parsing software. If it fails to retrieve bytes from the receive buffer fast enough, bytes could be dropped.


    Dear Caleb,

I think I am seeing approximately the same behavior from my UM7 as Jason described.

What I have is a UM7-LT with U71C firmware, connected through a Pololu USB-to-Serial Adapter (based on the CP2102). TX, RX, GND, and Vin are connected; 3.3V is not. It is a long story how I arrived at this, but the result is that, in my opinion, at least my UM7 does not handle the rates correctly. Let me explain what I mean.

Typically, and I think you will agree with me, if it is stated that a device works at 100 Hz, that implies that:
    1) at that frequency (strictly speaking, at least at that frequency) the MCU on the device reads current values from the sensors;
    2) at that frequency it transmits the collected data to the logging end in some type of packet.

In other words, and I underline this, the time between two measurements (measurements from the sensor) has to be 10 ms (at a 100 Hz frequency).

If the time at which the data was acquired from the sensors is available to the user, it implies that the difference between two measurements (the time values recovered from two consecutive packets; let’s call it DT) has to be the same 10 ms. Yes, strictly speaking, due to the conversion to double and the precision of the MCU, those 10 ms could have some stochastic addition, but its variation definitely has to be much less than 1 ms, and the mean of such noise has to be zero.

What I have: a PC with Windows 10 x32 and a laptop with the same OS. I am trying to collect raw accelerometer data with AD = 0x59. It arrives as a batch packet in which the third batch register is a single float representing DREG_ACCEL_RAW_TIME. With a number of such values in a row, we can compute the difference between each consecutive pair of time values. From these, we can compute the average frequency at which (according to the UM7’s internal timer) the accelerometer measurements were made. Taking the time values from the first and last packets, together with the total number of received packets, we can also compute the average frequency of receiving (transmitting) packets.
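My computation is essentially this (a small sketch; the function names are mine):

```cpp
#include <vector>
#include <cstddef>

// DT between consecutive packet timestamps (seconds).
std::vector<double> intervals(const std::vector<double>& t) {
    std::vector<double> dt;
    for (std::size_t i = 1; i < t.size(); ++i) dt.push_back(t[i] - t[i - 1]);
    return dt;
}

// Average sample rate implied by the first/last timestamps.
double average_rate_hz(const std::vector<double>& t) {
    if (t.size() < 2) return 0.0;
    return double(t.size() - 1) / (t.back() - t.front());
}
```

For an ideal 100 Hz stream, every DT would be 10 ms and `average_rate_hz` would return 100.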

And here is the problem. Below is a table of the desired frequency versus the average measurement frequency computed from the obtained time values. The average transmitting frequency is almost equal to the measurement frequency, so I do not list it here.
    50 Hz ===== ~48 Hz
    100 Hz ===== ~94 Hz
    150 Hz ===== ~121 Hz
    200 Hz ===== ~163 Hz
    250 Hz ===== ~192 Hz.

These results are consistent but differ a little from run to run. In addition, the computed DTs are not right on average; they are shifted by up to 2 ms (instead of being zero-mean), which is actually what causes the frequency drop.

An additional observation: changing the frequency by 1 or 2 Hz in the CHR Serial Interface does not change the observed frequency of the collected data. As an example: if I set 100 Hz and collect data, the frequencies I compute will be around 94 Hz. If I then change to 101, 102 … 105, or 95, 96 – anywhere within about ±5 Hz – the computed frequencies stay at the same 94 Hz. However, if I change to 110 (more than 5-7 Hz away), they change significantly, to ~102 Hz.

The conversion of DREG_ACCEL_RAW_TIME from 4 bytes to float is correct; otherwise the values would look like a complete mess. There are no bad packets with wrong checksums. No other data (including the health packet) is transmitted from the UM7 during this time. Its communication settings are set correctly, with a baud rate of 115200 (I also ran tests at 25600 – the result is the same). Nothing else in Windows (at least nothing that depends on user activity) is consuming system resources. All of this was done with my program written in C++/Qt 5.5.

To exclude any coding mistake on my part, I used the CHR Serial Interface, applied the same setup to the UM7, and logged the incoming data from the Log tab. Then I wrote Matlab code that takes the log file, parses it, and performs the same computations as before. The checksums were also verified. The results are the same.

My considerations are the following.
    1) If something were wrong with the connection of the UM7 to the Pololu and the PC, any communication problem would result in a massive mess in the incoming data, and I am definitely not at that point. By the way, I also tried a much shorter USB-B to USB-A cable, but the results were the same. The Pololu adapter should not affect it.
    2) Effects related to communication speed should not show up at low frequencies, say 100 Hz (it is pretty easy to communicate at such a frequency). However, we definitely see a frequency drop even at 50 Hz. So this is also not the cause of the problem.
    3) The conversion from 4 unsigned bytes to float is completely correct; I checked it programmatically.
    4) Since the average DT is consistent, there are no packets missed at the Windows serial port buffer.

To me it looks like the UM7 does not run on internal timer events, or else the frequency we set from the CHR Serial Interface is transformed incorrectly internally.

So, do you have any idea what is going on? I can send you the Matlab code that performs the computations described above on the logs obtained from the CHR Serial Interface. It produces some graphs, so maybe they will help make sense of what I wrote here…

Sorry for the long post, but I wanted to lay out all the details in order to solve the problem. Thank you in advance!



    Hi Andrey,

    You wrote:

    but the variation of [the delay] definitely has to be much less than 1 ms and the mean of such noise has to be zero.

The delay from your PC’s serial port hardware and OS will most certainly not be zero-mean, nor will it be less than one millisecond. Delays from the machine can be extremely lengthy and unpredictable.

The actual transmission rates can be expected to be less than the target due to limitations of the serial port’s bandwidth. To get data faster, you can:
    1) reduce the number of packets being broadcast by the UM7;
    2) increase the baud rate;
    3) connect to the UM7 with something running an RTOS to remove additional unpredictable latency.

But there is nothing wrong with the UM7; it is performing as expected.


    Hi Caleb,

Thank you for the quick response.

I agree that transmission rates can drop, but they cannot drop as much as I observed. Two months ago I worked with a roughly similar device (not from CH Robotics), and I was able to read (say, the same 20 bytes as for AD = 0x59) from it at rates of about 250 Hz without any problems. Yes, there can be a delay due to transmission, but it is consistent and does not grow as the frequency increases. I also agree that the transmission delay will not be zero-mean; I measure it at about 0.2 ms.

Strictly speaking, at 115200 baud (≈11520 B/s with 8N1 framing, i.e. 10 bits on the wire per byte), it is completely possible to transmit 20 bytes (an AD = 0x59 packet) × 100 Hz = 2000 B/s. That consumes only about 17% of the channel.
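In code, the arithmetic is simply (a sketch assuming standard 8N1 framing, i.e. one start and one stop bit around each data byte):

```cpp
// Channel load of a 20-byte packet stream at 100 Hz over 115200 baud,
// assuming 8N1 framing (10 bits on the wire per data byte).
constexpr double baud         = 115200.0;
constexpr double bytes_per_s  = baud / 10.0;               // 11520 bytes/s
constexpr double payload_Bps  = 20.0 * 100.0;              // 2000 bytes/s
constexpr double channel_load = payload_Bps / bytes_per_s; // about 0.17
```

So the link itself has plenty of headroom at these rates.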

The UM7 provides me with DREG_ACCEL_RAW_TIME. Assuming that the time in DREG_ACCEL_RAW_TIME is the internal time at which the UM7 read the data from the sensors, and assuming that the UM7 does this at the same frequency as is set for transmitting the data, the time values (for instance at 100 Hz) should look like this:
    1 packet. Time: 00.0 ms. (Yes, the UM7 sends it as seconds.)
    2 packet. Time: 10.0 ms.
    3 packet. Time: 20.0 ms.
    4 packet. Time: 30.0 ms.
    5 packet. Time: 40.0 ms.
    6 packet. Time: 50.0 ms.
    and so on…

However, in fact, they are not like this. They look approximately like this:
    1 packet. Time: 00.7 ms.
    2 packet. Time: 10.2 ms.
    3 packet. Time: 21.2 ms.
    4 packet. Time: 30.4 ms.
    5 packet. Time: 40.5 ms.
    6 packet. Time: 50.2 ms.

And they are shifted, which they should not be if the internal timer works at exactly 100 Hz. Strictly speaking, yes, they cannot be exactly 10, 20, 30 and so on; they may carry some noise, but it cannot be comparable to 1 ms, and on average it should be zero. Therefore, I expect to see something like this:
    1 packet. Time: 00.00007 ms.
    2 packet. Time: 10.00002 ms.
    3 packet. Time: 20.00089 ms.
    4 packet. Time: 30.00004 ms.
    5 packet. Time: 40.000405 ms.
    6 packet. Time: 50.000452 ms.

Transmission frequency has nothing to do with this; these time values come from the UM7.


    The extra delay is a result of the way the UM7’s architecture is set up. Internally, the UM7 uses a cooperative multi-tasking architecture. On each iteration through its main loop, the UM7 samples sensors (if they are ready), computes state estimates if needed, and initiates transmission of packets that need to be sent. Packet transmission is not driven asynchronously – meaning, when it comes time to transmit the next packet, it has to finish whatever it is currently doing before it pushes the packet to the serial transmit buffer.

    Another factor affecting delay is the number of packets being transmitted and the baud rate of the port. When packets need to be transmitted, they are added to a queue, and the packets are sent to the serial hardware one at a time by the DMA controller. The more packets that are being transmitted, and the lower the baud rate, the longer the delay.

    So yes, you can expect the timestamps to be more than a couple microseconds away from nominal.
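To make that concrete, here is a toy model (not the actual firmware code – just one simplified way a cooperative loop can behave, where the clock is checked only between tasks and the next transmit ends up scheduled relative to the late send; all names are invented):

```cpp
#include <vector>
#include <cstddef>

// Toy model of a cooperative main loop: the clock is only checked
// between tasks, and the next transmit is scheduled relative to the
// (late) send, so the effective packet rate lands below nominal.
std::vector<double> simulate_tx_times(double period_s, double task_s, int n) {
    std::vector<double> tx;
    double now = 0.0, next_due = 0.0;
    while ((int)tx.size() < n) {
        now += task_s;              // one pass through the main loop
        if (now >= next_due) {      // the only place the clock is checked
            tx.push_back(now);      // packet leaves late by (now - next_due)
            next_due = now + period_s;
        }
    }
    return tx;
}
```

With `period_s = 0.02` (a 50 Hz target) and 3 ms task passes, the intervals come out at ~21 ms (≈48 Hz) – the kind of drop shown in the table earlier in this thread.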



OK. So, if I understand correctly, it works like this: start iteration -> read sensors (record the time) -> compute all the stuff -> form a packet (place the time value into the packet) -> send the packet. Correct?

Suppose that, with a properly set baud rate for the desired packet size, we have no problems with transmission delays. That should be the case at 115200 with a 20-byte packet at 200 Hz.

The next question that arises is how the main loop is organized. I see two cases (taking 100 Hz as an example):
    1) The internal timer sends a signal -> start a loop iteration -> do all the stuff (read sensors, form a packet, send the packet) -> done -> wait for the next signal from the internal timer, and so on. The internal timer here generates the signal at exactly 100 Hz, meaning that the time between two iterations is 10 ms to within the accuracy of the internal timer. Since the internal clock runs at 10 MHz, this accuracy is good; it should be no more than microseconds.
    2) Based on the frequency (100 Hz) we get a time interval of 10 ms, and then: start the loop -> first iteration -> do all the stuff (read sensors, form a packet, send the packet) -> done -> wait 10 ms -> start the second iteration -> and so on.

I think I confused you a little. In my previous message, I wrote that I get something like this:
    1 packet. Time: 00.7 ms.
    2 packet. Time: 10.2 ms.
    3 packet. Time: 21.2 ms.
    4 packet. Time: 30.4 ms.
    5 packet. Time: 40.5 ms.
    6 packet. Time: 50.2 ms.

However, I meant something slightly different. Sorry – let me clarify. I get (for 50 Hz):
    1 packet. T1: 4804.4799804688 s => T1 – T0 = 20.5078 ms (T0 is not shown)
    2 packet. T2: 4804.5009765625 s => T2 – T1 = 20.9961 ms
    3 packet. T3: 4804.5214843750 s => T3 – T2 = 20.5078 ms
    4 packet. T4: 4804.5424804688 s => T4 – T3 = 20.9961 ms
    5 packet. T5: 4804.5629882813 s => T5 – T4 = 20.5078 ms
    6 packet. T6: 4804.5834960938 s => T6 – T5 = 20.5078 ms

Here we come back to the question of how the main loop is organized. These results show that it works in the second way, with a delay of 20 ms between the end of the previous iteration and the beginning of the next, where roughly 1 ms on average is the time a single iteration takes to complete.

In comparison, in the first case, where the loop is driven by the timer signal, yes, there could be a small delay between the start of the iteration and the moment the sensor data arrives. But that delay would be microseconds, and it would be almost constant across iterations, so the difference between T(i) and T(i-1) would on average equal the interval at which the timer starts iterations, say 20 ms. So in this case I would expect to get:
    1 packet. T1: 4804.4799804688 s => T1 – T0 = 20.0 ms (T0 is not shown)
    2 packet. T2: 4804.4999804688 s => T2 – T1 = 20.0 ms
    3 packet. T3: 4804.5199804688 s => T3 – T2 = 20.0 ms
    4 packet. T4: 4804.5399804688 s => T4 – T3 = 20.0 ms
    5 packet. T5: 4804.5599804688 s => T5 – T4 = 20.0 ms
    6 packet. T6: 4804.5799804688 s => T6 – T5 = 20.0 ms



    Hi Andrey,

    No, the main loop runs continuously, and independently of the system time. The system time is maintained by a hardware counter that runs continuously. Each time through the main loop, the system checks the system time to determine what, if any, events should be executed – including transmission of packets.

Most of the time, the core is busy estimating states in the extended Kalman filter. So here is the order of events that would cause delay:

    1) The system checks the current time. No packets need to be transmitted.
    2) The system samples sensors
    3) The system takes the new sensor data and computes updated states
    4) Greater than 1 millisecond has passed since the last time we checked the time. We check again. It is now past time to transmit a COM packet.
    5) The packet TX task is added to a queue, along with all other TX packets. The packet is transmitted as soon as the UART is available.
    6) When the packet is transmitted, it takes the most recent data, which will have been sampled after the 20 millisecond target period.


    Dear Caleb,

    Ok. Now it is clear.

So in this case the frequency set via CREG_COM_RATES* cannot be relied on. The UM7 has only a small chance of hitting the exact moment when it needs to send data; it will always have a delay, which is exactly what I see in my experiments. This delay (caused by step 3 – reading sensors and updating states) affects the real frequency at which the packets are transmitted, and the real frequency will always be less than the desired one. That is why the transmission rate differs from the desired rate. Moreover, since the transmitted packet takes the last available sensor data, and the time of data collection is not fixed (in terms of the time scale), that is why I get such a big difference between the time values in two consecutive packets.

Again, that means CREG_COM_RATES* cannot be taken at face value. For 50 Hz it may be acceptable that a delay of ~0.3 ms causes a frequency drop of 1-2 Hz, but at 100 Hz and above the drop becomes significant – as with my 200 Hz setting, which in reality gives about 163 Hz.

Is there any way to avoid this? Suppose I cancel all cyclic UM7 output by setting all the rates to zero, and instead work with the UM7 as follows: send a command asking the UM7 to send data back, then read the response. In this case, yes, I would have to implement the timing myself, periodically sending commands to the UM7. However, without cyclic output, will the UM7 immediately read the sensors, compute the states, and send the data to me on demand (when I ask it to)?


    Polling data registers instead of having the UM7 broadcast them automatically won’t change anything. If anything, it will make the delay worse.

    You can increase the broadcast rate until you get something closer to what you need, but in all cases you’ll need to utilize the reported system time in the packet to determine exactly how much time has elapsed between samples. You’d have to do that anyway to properly account for unpredictable serial port latency, which can be substantial (tens to hundreds of milliseconds, depending on what the system is doing) on a non-RTOS OS like Windows or even Linux.
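In code terms, the idea is just to difference the device timestamps rather than host receive times (a sketch; the struct is illustrative, not a real API):

```cpp
// Compute elapsed time from the timestamps inside the packets, not
// from when the host happened to read them.
struct Sample {
    double device_time_s;  // e.g. DREG_ACCEL_RAW_TIME parsed from the packet
    double host_time_s;    // host receive time -- jittery, don't integrate with this
};

// Elapsed device time between consecutive samples, immune to serial latency.
double device_dt(const Sample& prev, const Sample& cur) {
    return cur.device_time_s - prev.device_time_s;
}
```

Anything you integrate (velocity, orientation, etc.) should use `device_dt`, never the host clock.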


    Hi Caleb,

OK, thanks. Possibly a stupid question, but is the firmware source code for the UM7 available, so that I can program it the way I want?

  • The forum ‘UM7 Product Support’ is closed to new topics and replies.