Odometry
Originally written by Team 5225A, the E-Bots PiLons. Thanks!
This work is licensed under a Creative Commons Attribution-ShareAlike 2.0 Generic License
The Absolute Positioning System (APS) had been a concept that the team's programmers had thought about and discussed for a few years, mostly hypothetically. The APS is a system that keeps track of the absolute position (i.e. Cartesian coordinates) and orientation of the robot during a game autonomous or programming skills run. After the conclusion of the Nothing But Net season, the PiLons programmers decided to start developing it for real. Over the following ~2 years, they went through several iterations of both the tracking algorithm itself (which uses sensors to keep track of position and orientation) and the coupled motion control algorithms (which use the position and orientation information to follow a planned route). Function calls to the motion control algorithms were then used as building blocks for game autonomous and programming skills runs.
After the use of this system played a major role in the PiLons becoming the 2018 World Tournament Champions and Robot Skills World Champions, they have had many requests from various other members of the VEX community for information on how their system was made. While they are always tempted to boil it down to "thousands of hours of time and most of [their] sanity", they also decided to write and release this document as a resource for all.
It is recommended to use inches as units for distance (and thus position), as all VEX field measurements are in imperial/customary units. For angles, radians (at least internally) work better because they are the native output of the algorithm, and also the native unit for built-in trigonometric functions. Above all, however, it is extremely important to use the same units everywhere (i.e. if a part of the code uses inches, then everything distance-related is in inches, speed is in inches per second or per millisecond, etc.). Not doing so will require additional unit conversions, which are easy to mess up, and also require more calculations than necessary (this can already push the limits of the Cortex M3).
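As a minimal illustration of this, unit conversions can be confined to a few helper functions at the boundaries of the code, so that everything internal stays in inches and radians (the names below are purely illustrative):

```cpp
// Keep inches and radians internally; convert only at the boundaries.
const double PI = 3.14159265358979323846;

inline double degToRad(double deg) { return deg * PI / 180.0; }
inline double radToDeg(double rad) { return rad * 180.0 / PI; }
inline double feetToInches(double ft) { return ft * 12.0; }
```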
On this page, a column notation for vectors is used (it looks like a 2x1 matrix); for example, a vector with x-value 𝑎 and y-value 𝑏 would be written as:
$$\begin{bmatrix} a \\ b \end{bmatrix} \tag{1}$$
The following variables are used in this document to represent physical parameters and other known values:
$s_L$ is the left-right distance from the tracking center to the left tracking wheel
$s_R$ is the left-right distance from the tracking center to the right tracking wheel
$s_S$ is the forward-backward distance from the tracking center to the back tracking wheel
$\overrightarrow{d_0}$ is the previous global position vector
$\theta_0$ is the previous global orientation
$\theta_r$ is the global orientation at the last reset
This is the core of the position tracking system, providing the rest of the robot's code with live data on the current position and orientation (represented as a position vector $\overrightarrow{d}$ and an orientation $\theta$) of a predefined point on the robot (called the "tracking center"; see Figure 1). Note that the tracking wheels can be placed anywhere along the dotted lines without affecting the math; it is the perpendicular distance to the tracking center that matters, as explained below.
Position tracking works by modelling the robot's motion (i.e. the motion of the tracking center) as an arc over an infinitely short time interval. Figure 2 shows the same robot as above following an arc with a 5' radius for 15°. Note the assumptions for this simple example that the robot does not move toward or away from the arc center during the maneuver, and that the tracking wheels are lined up with the center of the robot; it will be shown later that neither assumption is necessary. The final robot position is shown in light gray. The left and right wheel paths are both arcs that are concentric with the tracking center's arc. (Concentric means that the arcs have the same center point; this applies here, as the robot is effectively rotating around the arc center.)
There are a few important points to realize from this diagram. Firstly, the arc angle is the same as the change in orientation of the robot. This angle (call it $\Delta\theta$) can easily be calculated by starting with the arc length formula, applied to both wheel arcs (with radii $r_L$ and $r_R$):

$$\Delta L = r_L \Delta\theta \qquad \Delta R = r_R \Delta\theta \tag{2}$$

Since the arcs are concentric, $r_L = r_R + s_L + s_R$. These two equations can be combined, eliminating the radius (the only unknown):

$$\Delta\theta = \frac{\Delta L - \Delta R}{s_L + s_R} \tag{3}$$

Note that this gives a value in radians. Testing it with this example, where both wheels sweep 15° ($\frac{\pi}{12}$ rad) along their arcs, one gets:

$$\Delta\theta = \frac{(r_R + s_L + s_R)\frac{\pi}{12} - r_R \frac{\pi}{12}}{s_L + s_R} = \frac{\pi}{12}\ \text{rad} = 15°$$

As can be seen on the diagram, this is correct. This is an example of a "state function": if any two of the following are known (starting orientation, ending orientation, and the difference between left and right wheel travel, $\Delta L - \Delta R$), it is possible to find the third. Since any translation without rotation does not change this difference (it changes both travels by the same amount), the formula applies regardless of where the robot is on the field, or how it got there. This means that orientation can be calculated as an absolute quantity, rather than as a relative change from the last arc approximation.

Also based on this arc, the translation (change in position) can be calculated. To do this, the radius of the tracking center's arc needs to be known. This can be calculated as the radius of the right side arc plus the offset of the right tracking wheel from the tracking center:

$$r = \frac{\Delta R}{\Delta\theta} + s_R \tag{4}$$

A local coordinate system is also needed. Theoretically, any will work, but one that makes the math easier is such that the straight-line path from the initial position to the final position is in the positive y direction; this is an offset from the robot's initial "forward" by $\frac{\Delta\theta}{2}$. Using this convention, the local translation x-coordinate is zero (as the back wheel is still being ignored), while the y-coordinate is the chord length, i.e. the straight-line distance between the two endpoints of the arc. The chord length is based on the radius and the angle change ($\text{chord} = 2r\sin\frac{\Delta\theta}{2}$); both are known, therefore the local y-axis translation can be calculated:

$$\Delta d_y = 2\sin\frac{\Delta\theta}{2}\left(\frac{\Delta R}{\Delta\theta} + s_R\right) \tag{5}$$

Now, it is important to consider what happens if the robot strays from the arc somewhat, resulting in additional translation. This forms a second arc, representing a second component of the robot's movement, which is measured by the back tracking wheel. This time, the axis is offset from the robot's initial "right" by $\frac{\Delta\theta}{2}$, making it perpendicular to the y component above (and thus representative of the local x-axis). This arc's radius and chord length can be calculated in much the same way as the y component, giving the complete local translation vector:

$$\Delta\overrightarrow{d_l} = \begin{bmatrix} 2\sin\frac{\Delta\theta}{2}\left(\frac{\Delta S}{\Delta\theta} + s_S\right) \\ 2\sin\frac{\Delta\theta}{2}\left(\frac{\Delta R}{\Delta\theta} + s_R\right) \end{bmatrix} \tag{6}$$

This local coordinate system is offset from the global coordinate system by the average orientation over the arc, $\theta_0 + \frac{\Delta\theta}{2}$. Therefore, by rotating the local translation vector back by that amount, the global translation vector can be calculated. The current position at any time is simply the starting position, plus the summation of all global translation vectors up to that time.
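As a quick sanity check of Equations 3 and 5, the short program below (with invented geometry values) simulates a perfect arc, computes the wheel travels directly from the concentric radii, and recovers the orientation change and the chord length:

```cpp
// A sanity check of Equations 3 and 5 with invented geometry values:
// simulate a perfect arc, compute the wheel travels from the concentric
// radii, then recover the orientation change and the chord length.
#include <cstdio>
#include <cmath>

int main() {
    const double PI = 3.14159265358979323846;
    const double sL = 7.0, sR = 7.0;      // assumed wheel offsets, inches
    const double rCenter = 24.0;          // arc radius of the tracking center
    const double dTheta = PI / 6.0;       // a 30 degree arc, in radians

    // the wheel arcs are concentric with the tracking center's arc
    double dL = (rCenter + sL) * dTheta;  // outer (left) wheel travel
    double dR = (rCenter - sR) * dTheta;  // inner (right) wheel travel

    // Equation 3: recover the orientation change from wheel travel alone
    double recovered = (dL - dR) / (sL + sR);
    std::printf("angle: %.6f rad (expected %.6f)\n", recovered, dTheta);

    // Equation 5: chord length computed from the right wheel's travel
    double chord = 2.0 * std::sin(dTheta / 2.0) * (dR / dTheta + sR);
    std::printf("chord: %.4f in (expected %.4f)\n",
                chord, 2.0 * std::sin(dTheta / 2.0) * rCenter);
}
```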
It was mentioned earlier in this document that the tracking wheels can be moved along their line of motion (the dotted lines in Figure 1) without affecting the math at all. To demonstrate this, it can be proven mathematically that the measured motion of an omni wheel does not depend on its position along its line of motion, only on the line itself.

There are two types of motion that the robot can undergo: translational and rotational. It is trivial to show that the position of the wheel does not matter for translational motion, as the velocity is the same at all points on the robot. The proof for rotational motion, however, is more involved.

As a scenario, the set-up is a wheel that is rotating around a point located at $(x_0, y_0)$ relative to the center of the wheel (using the forward direction of the wheel as the positive y-axis), as shown in Figure 3.

By combining the radius of the arc followed by the wheel, $\sqrt{x_0^2 + y_0^2}$, with the angular velocity with respect to the rotation center, $\omega$, the tangential velocity of the wheel, $v$, can be found:

$$v = \omega\sqrt{x_0^2 + y_0^2} \tag{7}$$

This can then be used to calculate the y-component of the velocity (i.e. what the wheel will measure), $v_y$, and simplified using trigonometric ratios; since the velocity is perpendicular to the radius, the cosine of the angle between the velocity and the wheel's line of motion is $\frac{x_0}{\sqrt{x_0^2 + y_0^2}}$:

$$v_y = v\cdot\frac{x_0}{\sqrt{x_0^2 + y_0^2}} = \omega\sqrt{x_0^2 + y_0^2}\cdot\frac{x_0}{\sqrt{x_0^2 + y_0^2}} = \omega x_0 \tag{8}$$

As can be seen, the radius (which is dependent on the y-axis offset of the wheel) cancels out of the equation; the motion of the wheel (as described by its measured velocity) is thus independent of its position along its line of motion.
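The same result can be checked numerically. The short program below (with invented values for the angular velocity and the offsets) computes the velocity of the wheel for several positions along one line of motion and shows that the measured component never changes:

```cpp
// A numerical check of the result above: keep the wheel's line of motion
// fixed (perpendicular distance x0 from the rotation center) and slide the
// wheel along that line (varying y0). Coordinates are centered on the
// rotation center, with the line of motion parallel to the y-axis.
#include <cstdio>

int main() {
    const double omega = 0.5;   // angular velocity in rad/s (assumed value)
    const double x0 = 7.0;      // perpendicular offset of the line of motion
    const double y0s[] = {-4.0, 0.0, 6.0};  // positions along that line

    for (double y0 : y0s) {
        // velocity of the point (x0, y0) rotating about the origin:
        // v = omega cross r, so (vx, vy) = (-omega * y0, omega * x0)
        double vx = -omega * y0;
        double vy = omega * x0;   // the component the omni wheel measures
        std::printf("y0 = %5.1f: vx = %6.3f, measured vy = %.3f\n", y0, vx, vy);
    }
    // vy = 3.500 for every y0: position along the line of motion is irrelevant
}
```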
The algorithm itself consists of a single procedure to perform calculations, which should be called frequently (a cycle period of no more than 10 milliseconds, although it should be noted that keeping the cycle time too low will cause undershoots due to the limited resolution of the optical shaft encoders). A dedicated runtime task/thread is recommended for this process. The procedure can be broken down into the following steps:
1. Store the current encoder values in local variables.
2. Calculate the change in each encoder's value since the last cycle, and convert to distances of wheel travel (for example, if the wheel has a radius of 2" and turned 5°, then it travelled approximately 0.1745"); call these $\Delta L$, $\Delta R$, and $\Delta S$.
3. Update the stored "previous values" of the encoders.
4. Calculate the total change in the left and right encoder values since the last reset, and convert to distances of wheel travel; call these $\Delta L_r$ and $\Delta R_r$.
5. Calculate the new absolute orientation $\theta_1 = \theta_r + \frac{\Delta L_r - \Delta R_r}{s_L + s_R}$; please note that the second term will be in radians, regardless of the units of the other variables.
6. Calculate the change in angle $\Delta\theta = \theta_1 - \theta_0$.
7. If $\Delta\theta = 0$ (i.e. $\theta_1 = \theta_0$), then calculate the local offset $\Delta\overrightarrow{d_l} = \begin{bmatrix} \Delta S \\ \Delta R \end{bmatrix}$.
8. Otherwise, calculate the local offset using Equation 6: $\Delta\overrightarrow{d_l} = 2\sin\frac{\Delta\theta}{2}\begin{bmatrix} \frac{\Delta S}{\Delta\theta} + s_S \\ \frac{\Delta R}{\Delta\theta} + s_R \end{bmatrix}$.
9. Calculate the average orientation $\theta_m = \theta_0 + \frac{\Delta\theta}{2}$.
10. Calculate the global offset $\Delta\overrightarrow{d}$ as $\Delta\overrightarrow{d_l}$ rotated by $-\theta_m$; this can be done by converting the existing Cartesian coordinates to polar coordinates, changing the angle, then converting back.
11. Calculate the new absolute position $\overrightarrow{d_1} = \overrightarrow{d_0} + \Delta\overrightarrow{d}$.
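Putting these steps together, below is a minimal sketch of one tracking cycle in C++. The encoder accessors (leftTicks, rightTicks, backTicks) and the geometry constants are hypothetical placeholders that must be replaced with the platform's own API and measurements from the actual robot; sign conventions (which way each encoder counts, which turn direction is positive) must also be verified against the real setup.

```cpp
// A minimal sketch of one tracking cycle. Distances in inches, angles in
// radians. All sensor accessors and constants are assumed placeholders.
#include <cmath>

const double PI = 3.14159265358979323846;
const double WHEEL_RADIUS = 2.0;            // tracking wheel radius, inches
const double TICKS_PER_REV = 360.0;         // quadrature encoder resolution
const double sL = 7.0, sR = 7.0, sS = 5.0;  // offsets from the tracking center
const double INCHES_PER_TICK = 2.0 * PI * WHEEL_RADIUS / TICKS_PER_REV;

long leftTicks(); long rightTicks(); long backTicks();  // platform-specific

double x = 0.0, y = 0.0;     // global position vector d
double theta = 0.0;          // global orientation, theta_0 between cycles
double thetaReset = 0.0;     // orientation at the last reset, theta_r
long prevL = 0, prevR = 0, prevS = 0;  // previous encoder values
long resetL = 0, resetR = 0;           // encoder values at the last reset

void trackingCycle() {
    // 1. store the current encoder values in local variables
    long curL = leftTicks(), curR = rightTicks(), curS = backTicks();

    // 2. change since the last cycle, converted to inches of wheel travel
    double dL = (curL - prevL) * INCHES_PER_TICK;
    double dR = (curR - prevR) * INCHES_PER_TICK;
    double dS = (curS - prevS) * INCHES_PER_TICK;

    // 3. update the stored previous values
    prevL = curL; prevR = curR; prevS = curS;

    // 4. total change since the last reset
    double dLr = (curL - resetL) * INCHES_PER_TICK;
    double dRr = (curR - resetR) * INCHES_PER_TICK;

    // 5. new absolute orientation (the second term is naturally in radians)
    double theta1 = thetaReset + (dLr - dRr) / (sL + sR);

    // 6. change in angle over this cycle
    double dTheta = theta1 - theta;

    // 7./8. local offset: straight-line case, or the chord of the arc (Eq. 6)
    double localX, localY;
    if (dTheta == 0.0) {
        localX = dS;
        localY = dR;
    } else {
        double twoSinHalf = 2.0 * std::sin(dTheta / 2.0);
        localX = twoSinHalf * (dS / dTheta + sS);
        localY = twoSinHalf * (dR / dTheta + sR);
    }

    // 9. average orientation over the cycle
    double thetaM = theta + dTheta / 2.0;

    // 10. rotate the local offset by -thetaM into global coordinates
    double c = std::cos(-thetaM), s = std::sin(-thetaM);
    double globalX = localX * c - localY * s;
    double globalY = localX * s + localY * c;

    // 11. accumulate the global offset and store the new orientation
    x += globalX;
    y += globalY;
    theta = theta1;
}
```

In practice this function runs in its own task, sleeping roughly 10 milliseconds per iteration; a reset simply copies the current encoder values into resetL and resetR and the current orientation into thetaReset.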
The team has decided not to release the complete motion algorithms that they have created and used. There are several reasons for this, including the hundreds, if not thousands, of person-hours that went into their cumulative development, and the fact that, while they are more than happy to help out teams that are interested and wish to learn, they do not want to become technical support for a large suite of complex algorithms. Most importantly, the goal is not to release a library for other teams to use; rather, it is to facilitate an incredible learning opportunity that involves complex mathematics and control theory.
That being said, they do want to offer as much of a guideline as possible. Below is an arm's-length tutorial for developing such an algorithm:
A team will need at least two algorithms: one for driving to a point, and one for turning on the spot. Let's start with the turning-on-the-spot algorithm.
Pick a convention. It can be more intuitive to specify angles in degrees, but remember that everything uses radians internally. Pick one or the other as the units for angles passed as parameters to your functions, and stick with it. Also, try to keep the order of equivalent parameters consistent (X before Y, etc.).
Start with something the team has used before. There's no right answer, and the math can get very complicated very quickly, so start with something that people on the team are familiar with (it could be PID or TBH, but a plain proportional loop works just as well to start).
Every situation is different. The algorithm shouldn't be a black box that is assumed to work perfectly for every scenario; remember that there's no practical limit on how many parameters a function can have. This doesn't necessarily mean passing custom PID constants (or equivalent) to each call, but there should be some way to fine-tune the algorithm(s) for one particular use case without messing up others. A more customizable algorithm means a more useful one.
Deal in absolutes. Generally speaking, no parameters should indicate a value relative to the robot's current position or orientation; they should be field-centric. This means the code doesn't say "turn 25 degrees to the right", it says "turn to face 90 degrees from the audience direction" or "turn to face the middle post".
Think about making it more versatile in the future. Start with turning to face an orientation, but turning to face a point might be useful too. It also might be helpful to add support for automatically turning to the closest equivalent angle (to avoid turning 270° clockwise when 90° counterclockwise would work; the sketch after this list shows one way to do this). This might require changes to how the code is structured. The more possibilities, the better.
Try a wide range of scenarios to see what should be adjustable. The PiLons tested angles from 10 to 360 degrees, with and without a mobile goal, for their final version in In The Zone. In the end, they made a spreadsheet of "ideal" parameters to pass depending on the scenario, which were then further tuned for each individual use.
Take it step by step. Get each feature working before starting on the next. Not only does this help with time and source code management, but the team will probably learn something that helps with the next feature. Moving on to the algorithm for driving to a target: start by driving until the robot is past the target before worrying about correcting the angle. Then, consider which of the two major types of algorithm is wanted: one that continuously tries to point at the target (fastest; sketched after this list), or one that tries to follow a predefined line as closely as possible (most consistent). Also, think about what might happen if the robot is too far off, and make sure it can't end up in an infinite loop.
Play it safe. Regardless of how perfect the testing environment is, at some point the team will probably want to use this at a competition. The field might differ in ways that are not as negligible as one might think, or there might be another robot (even an alliance partner) in the way. There's no limit to the number of things that can go wrong at competition, so make sure to consider what the robot will do in these scenarios. A good starting point: if the robot is trying to move but isn't moving, stop the autonomous. It might also be easiest to write this as a separate task that runs continuously, regardless of what the autonomous is doing.
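To make the starting point concrete, below is a sketch of the two minimum algorithms using a plain proportional loop. Everything here is an assumption for illustration: the accessor and motor functions (currentX, currentY, currentTheta, setDrive, delayMs) stand in for the platform's own API and the tracking variables described earlier, the gains are placeholders to be tuned, and the heading convention (zero facing the positive y direction, increasing clockwise) must be adapted to the robot's own setup. This is not the PiLons' actual implementation.

```cpp
// Sketches of turn-to-heading and drive-to-point using plain proportional
// control. All names, gains, and conventions here are assumptions.
#include <cmath>

const double PI = 3.14159265358979323846;

double currentX(); double currentY(); double currentTheta();  // tracking data
void setDrive(double left, double right);   // motor powers, -1..1
void delayMs(int ms);                       // platform sleep

// wrap an angle into (-PI, PI] so turns always take the shortest direction
double wrapAngle(double a) {
    while (a > PI)   a -= 2.0 * PI;
    while (a <= -PI) a += 2.0 * PI;
    return a;
}

// turn on the spot to face an absolute, field-centric heading (radians)
void turnToHeading(double target, double kP, double tolerance) {
    double error = wrapAngle(target - currentTheta());
    while (std::fabs(error) > tolerance) {
        double power = kP * error;
        setDrive(power, -power);            // opposite sides: rotate in place
        delayMs(10);
        error = wrapAngle(target - currentTheta());
    }
    setDrive(0.0, 0.0);
}

// drive to a point by continuously aiming at it (the "fastest" style above)
void driveToPoint(double tx, double ty, double kDist, double kTurn,
                  double tolerance) {
    double dx = tx - currentX(), dy = ty - currentY();
    while (std::sqrt(dx * dx + dy * dy) > tolerance) {  // stop when close
        // heading that points at the target, assuming theta = 0 faces +y
        // and increases clockwise (adapt to the robot's own convention)
        double angErr = wrapAngle(std::atan2(dx, dy) - currentTheta());
        double dist = std::sqrt(dx * dx + dy * dy);
        double fwd = kDist * dist * std::cos(angErr); // back off if misaligned
        double turn = kTurn * angErr;
        setDrive(fwd + turn, fwd - turn);
        delayMs(10);
        dx = tx - currentX(); dy = ty - currentY();
    }
    setDrive(0.0, 0.0);
}
```

Note how both functions deal in absolutes (field-centric headings and coordinates), wrap the angle error so the robot always takes the closest equivalent angle, and expose gains and tolerances as parameters so each call can be tuned for its scenario.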
The PiLons have also decided to include some general suggestions that aren't strictly related to position tracking, but (they hope) will help other teams with this and future projects nonetheless.
"Make it work, make it right, make it fast."

This quote, attributed to programmer Kent Beck, provides a good starter priority list for any non-trivial software project. The first priority should be to fulfil the basic functionality requirements by any means possible. Then, without changing the functionality or core algorithms, rewrite/refactor the code until it is easy to use, easy to understand, and thus easy to maintain (within reason). Finally, make it fast; in this context, this applies both to the efficiency of the code (factoring out repeated identical calculations is always a good idea) and to the time it takes to reach the target, which can always be faster. It might be better to apply this philosophy at a smaller scale over each iteration or feature addition of your algorithms, but really it is up to you. As always, there is no one right way to do things.
They highly recommend learning to (properly) use a version control system (VCS) such as Git. Not only is it a great way to maintain backups and enable collaboration, but with tools such as 'merge' and 'revert' it is possible to manage changes to the code very precisely without messing with source files directly.
Parallel programming has different names on different platforms. In ROBOTC, it's called "tasks"; on modern desktop, mobile, and real-time operating systems, including PROS, it's typically called "threads". Regardless of the name, parallelization offers many benefits; for example, a loop can wait on a sensor value without blocking other important code (such as the position tracking algorithm) from running, as it would in a traditional, "synchronous" program. That being said, it is a very easy technology to overuse. Programs that have too much parallelization can quickly become bug-prone and unmaintainable. For those who are relatively new to programming, it is best to limit (or avoid entirely) the use of tasks or threads, at least when starting out. Regardless, before using them heavily, read up on race conditions and deadlocks; there are several online resources on those topics, and if in doubt, Wikipedia is always a good place to start.
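For example, one race to guard against in this very project is the tracking task writing x, y, and θ while motion code reads them mid-update. Below is a sketch using a mutex (std::mutex here; ROBOTC and PROS provide their own equivalents), with names following the tracking sketch above:

```cpp
// Guarding shared tracking state between the tracking task (writer) and
// the motion algorithms (readers). A sketch; all names are assumptions.
#include <mutex>

std::mutex trackingMutex;
double x = 0.0, y = 0.0, theta = 0.0;   // shared tracking state

// called by the tracking task at the end of each cycle
void publishPose(double nx, double ny, double nt) {
    std::lock_guard<std::mutex> lock(trackingMutex);
    x = nx; y = ny; theta = nt;
}

// called by motion code whenever it needs a consistent snapshot
void readPose(double& ox, double& oy, double& ot) {
    std::lock_guard<std::mutex> lock(trackingMutex);
    ox = x; oy = y; ot = theta;
}
```

Without the lock, a reader could see a freshly updated x paired with a stale y; with it, every snapshot is consistent.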