Odometry

Originally written by Team 5225A, the E-Bots PiLons. Thanks!


Motivation

The Absolute Positioning System (APS) had been a concept that the team's programmers had thought about and discussed for a few years, mostly hypothetically. The APS is a system that keeps track of the absolute position (i.e. Cartesian coordinates) and orientation of the robot during an autonomous or programming skills run. After the conclusion of the Nothing But Net season, the PiLons programmers decided to start developing it for real. Over the following ~2 years, they went through several iterations of both the tracking algorithm itself (which uses sensors to keep track of position and orientation) and the coupled motion control algorithms (which use the position and orientation information to follow a planned route). Function calls to the motion control algorithms were then used as building blocks for game autonomous and programming skills runs.

After this system played a major role in the PiLons becoming the 2018 World Tournament Champions and Robot Skills World Champions, they received many requests from other members of the VEX community for information on how it was made. While they are always tempted to boil it down to "thousands of hours of time and most of [their] sanity", they decided to write and release this document as a resource for all.

Units

It is recommended to use inches as units for distance (and thus position), as all VEX field measurements are in imperial/customary units. For angles, radians (at least internally) work better because they are the native output of the algorithm, and also the native unit for built-in trigonometric functions. Above all, however, it is extremely important to use the same units everywhere (i.e. if a part of the code uses inches, then everything distance-related is in inches, speed is in inches per second or per millisecond, etc.). Not doing so will require additional unit conversions, which are easy to mess up, and also require more calculations than necessary (this can already push the limits of the Cortex M3).
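One low-effort way to enforce this consistency is to funnel every conversion through a few named helper functions, so the rest of the code never multiplies by magic numbers. This is only a minimal sketch (the names are illustrative, and a VEX foam field tile is taken as 24 inches):

```cpp
#include <cmath>

// Convention for the rest of the code: distances in inches, angles in radians.
constexpr double kFieldTileInches = 24.0;  // one foam field tile (2 ft)

constexpr double degToRad(double deg) { return deg * M_PI / 180.0; }
constexpr double radToDeg(double rad) { return rad * 180.0 / M_PI; }
constexpr double tilesToInches(double tiles) { return tiles * kFieldTileInches; }
```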

Definitions and Conventions

On this page, a column notation for vectors is used (it looks like a 2x1 matrix); for example, a vector with x-value 𝑎 and y-value 𝑏 would be written as:

$$\begin{bmatrix} a \\ b \end{bmatrix} \tag{1}$$

The following variables are used in this document to represent physical parameters and other known values:

  • $s_L$ is the left-right distance from the tracking center to the left tracking wheel

  • $s_R$ is the left-right distance from the tracking center to the right tracking wheel

  • $s_S$ is the forward-backward distance from the tracking center to the back tracking wheel

  • $\overrightarrow{d_0}$ is the previous global position vector

  • $\theta_0$ is the previous global orientation

  • $\theta_r$ is the global orientation at the last reset
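For concreteness, these physical parameters and the tracked state might be grouped into a couple of small structs in code. This is only an illustrative sketch; the names (`OdomConfig`, `Pose`) and layout are not from the original PiLons code:

```cpp
// Physical parameters of the tracking wheel layout (all distances in inches).
struct OdomConfig {
    double s_L;  // left-right distance from the tracking center to the left tracking wheel
    double s_R;  // left-right distance from the tracking center to the right tracking wheel
    double s_S;  // forward-backward distance from the tracking center to the back tracking wheel
};

// The tracked state: absolute position (inches) and absolute orientation (radians).
struct Pose {
    double x = 0.0;
    double y = 0.0;
    double theta = 0.0;
};
```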

Tracking Theory (aka Odometry)

This is the core of the position tracking system, providing the rest of the robot's code with live data on the current position and orientation (represented as a position vector $\overrightarrow{d}$ and an orientation $\theta$) of a predefined point on the robot (called the "tracking center", see Figure 1). Note that the tracking wheels can be placed anywhere along the dotted lines without affecting the math; it is the perpendicular distance to the tracking center that matters, as explained below.

Figure 1: A sample robot base with tracking wheels

Tracking Using Arcs

Position tracking works by modelling the robot's motion (i.e. the motion of the tracking center) as an arc over an infinitely short time interval. Figure 2 shows the same robot as above following an arc with a 5' radius for 15°. Note the assumptions made for this simple example: that the robot does not move toward or away from the arc center during the maneuver, and that the tracking wheels are lined up with the center of the robot; it will be shown later that neither of these is actually necessary. The final robot position is shown in light gray. The left and right wheel paths are both arcs that are concentric with the tracking center's arc. (Concentric means that the arcs have the same center point; this applies here because the robot is effectively rotating around the arc center.)

Figure 2: A simple example maneuver, with the left and right wheel paths shown

There are a few important points to realize from this diagram. Firstly, the arc angle is the same as the change in orientation of the robot. This angle (call it $\Delta\theta$) can easily be calculated by starting with the arc length formula for each wheel path, where $r_L$ and $r_R$ are the radii of the left and right arcs:

$$\Delta L = r_L\,\Delta\theta \qquad \Delta R = r_R\,\Delta\theta \tag{2}$$

Since the left and right arcs are concentric with the tracking center's arc (radius $r_A$), these radii can be rewritten as $r_A + s_L$ and $r_A - s_R$, and each equation solved for $r_A$:

$$\Delta L = (r_A + s_L)\,\Delta\theta \qquad \Delta R = (r_A - s_R)\,\Delta\theta$$

$$\frac{\Delta L}{\Delta\theta} = r_A + s_L \qquad \frac{\Delta R}{\Delta\theta} = r_A - s_R$$

$$r_A = \frac{\Delta L}{\Delta\theta} - s_L \qquad r_A = \frac{\Delta R}{\Delta\theta} + s_R$$

These two equations can be combined, eliminating the radius (the only unknown):

$$\frac{\Delta L}{\Delta\theta} - s_L = \frac{\Delta R}{\Delta\theta} + s_R \tag{3}$$

$$\Delta L - s_L\,\Delta\theta = \Delta R + s_R\,\Delta\theta$$

$$\Delta L - \Delta R = \Delta\theta\,(s_L + s_R)$$

$$\Delta\theta = \frac{\Delta L - \Delta R}{s_L + s_R}$$

Note that this gives a value in radians. Testing it with this example, one gets:

$$\Delta\theta = \frac{17.606 - 13.810}{7.250 + 7.250}$$

$$\Delta\theta = 0.2618\ \text{rad} = 15.00\degree$$

As can be seen on the diagram, this is correct. This is an example of a "state function": if any two of the following are known (the starting orientation, the ending orientation, and the difference between the left and right wheel travel, $\Delta L - \Delta R$), it is possible to find the third. Since any translation without rotation does not change the difference between these numbers (it changes both by the same amount), this formula applies regardless of where the robot is on the field, or how it got there. This means that orientation can be calculated as an absolute quantity, rather than as a relative change from the last arc approximation.

Also based on this arc, the translation (change in position) can be calculated. To do this, the radius of the tracking center's arc needs to be known. This can be calculated as the radius of the right side arc plus the offset of the right tracking wheel from the tracking center:

$$r_A = \frac{\Delta R}{\Delta\theta} + s_R$$

A local coordinate system is also needed. Theoretically, any will work, but one that makes the math easier is such that the straight-line path from the initial position to the final position points in the positive y direction; this is an offset from the robot's initial "forward" by $\frac{\Delta\theta}{2}$. Using this convention, the local translation x-coordinate is zero (as we are still ignoring the back wheel), while the y-coordinate is the chord length. The chord length (the straight-line distance between the two endpoints of the arc) is based on the radius and the angle change; both are known, so the local y-axis translation can be calculated:

$$\Delta\overrightarrow{d_{ly}} = \begin{bmatrix} 0 \\ 2\sin\frac{\Delta\theta}{2}\left(\frac{\Delta R}{\Delta\theta} + s_R\right) \end{bmatrix} \tag{5}$$

Now, it is important to consider what happens if the robot strays from the arc somewhat, resulting in additional translation. This forms a second arc, representing a second component of the robot's movement, which is measured by the back tracking wheel. This time, the axis is offset from the robot's initial "right" by $\frac{\Delta\theta}{2}$, making it perpendicular to the y component above (and thus representative of the local x axis). This arc's radius and chord length can be calculated in much the same way as for the y component, giving the complete local translation vector:

$$\Delta\overrightarrow{d_l} = 2\sin\frac{\Delta\theta}{2}\begin{bmatrix} \frac{\Delta S}{\Delta\theta} + s_S \\ \frac{\Delta R}{\Delta\theta} + s_R \end{bmatrix} \tag{6}$$

This coordinate system is offset by $\theta_0 + \frac{\Delta\theta}{2}$ from the global coordinate system. Therefore, by rotating $\Delta\overrightarrow{d_l}$ back by that amount, the global translation vector can be calculated. The current position at any time is then simply the starting position plus the summation of all of the global translation vectors up to that time.
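As a quick sanity check of the worked example above, the same numbers can be dropped into a few lines of code; the wheel travels (17.606" and 13.810") and the offsets (7.250") come straight from the example, and everything else is just scaffolding:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Wheel travels and tracking wheel offsets from the 15-degree example arc (inches).
    const double delta_L = 17.606, delta_R = 13.810;
    const double s_L = 7.250, s_R = 7.250;

    const double delta_theta = (delta_L - delta_R) / (s_L + s_R);  // radians
    std::printf("delta theta = %.4f rad = %.2f deg\n",
                delta_theta, delta_theta * 180.0 / M_PI);
    // Expected output: delta theta = 0.2618 rad = 15.00 deg
}
```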

Proof of Axial Position Independence of Tracking Wheels

It was mentioned earlier in this document that the tracking wheels can be moved along their line of motion (the dotted lines in Figure 1) without affecting the math at all. To demonstrate this, it can be proven mathematically that the measured rotational motion of an omni tracking wheel does not depend on where the wheel sits along its line of motion; it depends only on the line of motion itself.

There are two types of motion that the robot can undergo: translational and rotational. It is trivial to show that the position of the wheel does not matter for translational motion, as the velocity is the same at all points on the robot. The proof does get more complicated for rotational motion, however.

As a scenario, consider a wheel that is rotating around a point located at $\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}$ from the center of the wheel (using the forward direction of the wheel as the positive y-axis), as shown in Figure 3.

Figure 3: Scenario for position independence proof

By combining the radius of the arc followed by the wheel, $r$, with the angular velocity with respect to the rotation center, $\omega$, the tangential velocity of the wheel, $v_t$, can be found:

$$v_t = \omega r \tag{7}$$

This can then be used to calculate the y-component of the velocity (i.e. what the wheel will actually measure), $v_y$, and simplified using trigonometric ratios, since the angle $\beta$ between the tangential velocity and the wheel's forward direction satisfies $\cos\beta = \frac{\Delta x}{r}$:

$$v_y = v_t \cos\beta \tag{8}$$

$$v_y = \omega r \times \frac{\Delta x}{r}$$

$$v_y = \omega\,\Delta x$$

As can be seen, the radius (which is dependent on the y-axis offset of the wheel) cancels out of the equation; the motion of the wheel (as described by its measured velocity) is thus independent of its position along its line of motion.

Tracking Algorithm

The algorithm itself consists of a single procedure to perform the calculations, which should be called frequently (a cycle period of no more than 10 milliseconds), although it should be noted that making the cycle time too short will cause undershoots due to the limited resolution of the optical shaft encoders. We recommend using a dedicated runtime task/thread for this process; a sketch of such a loop is given after the list. The procedure can be broken down into the following steps:

  1. Store the current encoder values in local variables.

  2. Calculate the change in each encoder's value since the last cycle, and convert it to a distance of wheel travel (for example, if a wheel with a 2" radius turned 5°, then it travelled approximately 0.1745"); call these $\Delta L$, $\Delta R$, and $\Delta S$.

  3. Update the stored "previous values" of the encoders.

  4. Calculate the total change in the left and right encoder values since the last reset, and convert to a distance of wheel travel; call these $\Delta L_r$ and $\Delta R_r$.

  5. Calculate the new absolute orientation $\theta_1 = \theta_r + \frac{\Delta L_r - \Delta R_r}{s_L + s_R}$; please note that the second term will be in radians, regardless of the units of the other variables.

  6. Calculate the change in angle $\Delta\theta = \theta_1 - \theta_0$.

  7. If $\Delta\theta = 0$ (i.e. $\Delta L = \Delta R$), then calculate the local offset $\Delta\overrightarrow{d_l} = \begin{bmatrix} \Delta S \\ \Delta R \end{bmatrix}$.

  8. Otherwise, calculate the local offset using Equation 6: $\Delta\overrightarrow{d_l} = 2\sin\frac{\Delta\theta}{2}\begin{bmatrix} \frac{\Delta S}{\Delta\theta} + s_S \\ \frac{\Delta R}{\Delta\theta} + s_R \end{bmatrix}$.

  9. Calculate the average orientation $\theta_m = \theta_0 + \frac{\Delta\theta}{2}$.

  10. Calculate the global offset $\Delta\overrightarrow{d}$ as $\Delta\overrightarrow{d_l}$ rotated by $-\theta_m$; this can be done by converting the existing Cartesian coordinates to polar coordinates, changing the angle, then converting back.

  11. Calculate the new absolute position $\overrightarrow{d_1} = \overrightarrow{d_0} + \Delta\overrightarrow{d}$.
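A minimal sketch of this loop in C++ is shown below, building on the `OdomConfig` and `Pose` structs sketched earlier. It is not the PiLons' implementation: `readLeftInches()`, `readRightInches()`, `readBackInches()`, and `delay()` are illustrative placeholders for the platform's encoder reads and task delay, and reset handling is omitted.

```cpp
#include <cmath>

// Illustrative placeholders: cumulative tracking wheel travel (inches) since the
// last reset, and a task delay. Swap in the real encoder/RTOS calls for your platform.
double readLeftInches();
double readRightInches();
double readBackInches();
void delay(int ms);

// Runs forever, updating `pose` in place; intended to live in its own task/thread.
void odometryTask(const OdomConfig& cfg, Pose& pose) {
    const double theta_r = pose.theta;  // orientation at the last reset
    double prevR = 0.0, prevS = 0.0;    // stored "previous values"

    while (true) {
        // Steps 1-3: read the encoders, compute per-cycle changes, update previous values.
        double L = readLeftInches(), R = readRightInches(), S = readBackInches();
        double dR = R - prevR, dS = S - prevS;  // delta L is not needed separately here,
        prevR = R; prevS = S;                   // since orientation uses the totals below

        // Steps 4-6: absolute orientation from total travel since reset, then change in angle.
        double theta1 = theta_r + (L - R) / (cfg.s_L + cfg.s_R);
        double dTheta = theta1 - pose.theta;

        // Steps 7-8: local offset (x from the back wheel, y from the right wheel).
        double localX, localY;
        if (dTheta == 0.0) {
            localX = dS;
            localY = dR;
        } else {
            double chord = 2.0 * std::sin(dTheta / 2.0);
            localX = chord * (dS / dTheta + cfg.s_S);
            localY = chord * (dR / dTheta + cfg.s_R);
        }

        // Steps 9-10: rotate the local offset by -(theta_0 + dTheta/2) into global
        // coordinates (equivalent to the polar-coordinate conversion described in step 10).
        double thetaM = pose.theta + dTheta / 2.0;
        double c = std::cos(-thetaM), s = std::sin(-thetaM);
        double dX = localX * c - localY * s;
        double dY = localX * s + localY * c;

        // Step 11: accumulate into the absolute pose.
        pose.x += dX;
        pose.y += dY;
        pose.theta = theta1;

        delay(10);  // keep the cycle period at roughly 10 ms
    }
}
```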

Writing Motion Algorithms

The team has decided not to release the complete motion algorithms that they have created and used. There are several reasons for this, including the thousands of person-hours that went into their cumulative development, and that, while they are more than happy to help out teams that are interested and wish to learn, they do not want to become technical support for a large suite of complex algorithms. Most importantly, the goal is not to release a library for other teams to use; rather, it is to facilitate an incredible learning opportunity that involves complex mathematics and control theory.

That being said, they do want to offer as much of a guideline as possible. Below is an arm's-length tutorial for developing such an algorithm:

  1. A team will need at least two algorithms: one for driving to a point, and one for turning on the spot. Let's start with the turning on the spot algorithm.

  2. Pick a convention. It can be more intuitive to specify angles in degrees, but remember that everything uses radians internally. Pick one or the other as the units for angles passed as parameters to your functions, and stick with it. Also, try to keep the order of equivalent parameters consistent (X before Y, etc.).

  3. Start with something the team has used before. There's no right answer, and the math can get very complicated very quickly, so start with something that people on the team are familiar with (it could be PID or TBH, but a plain proportional loop works just as well to start; a minimal example of such a loop is sketched after this list).

  4. Every situation is different. The algorithm shouldn't be a black box that is assumed to work perfectly for every scenario; remember that there's no practical limit on how many parameters a function can have. This doesn't necessarily mean passing custom PID constants (or equivalent) to each call, but there should be some way to fine-tune the algorithm(s) for one particular use case without messing up others. A more customizable algorithm means a more useful one.

  5. Deal in absolutes. Generally speaking, no parameters should indicate a value relative to the robot's current position or orientation; they should be field-centric. This means the code doesn't say "turn 25 degrees to the right", it says "turn to face 90 degrees from the audience direction" or "turn to face the middle post".

  6. Think about making it more versatile in the future. Start with turning to face an orientation, but turning to face a point might be useful too. It also might be helpful to add support for automatically turning to the closest equivalent angle (to avoid turning 270° clockwise when 90° counterclockwise would work). This might require changes to how the code is structured. The more possibilities, the better.

  7. Try a wide range of scenarios to see what should be adjustable. The PiLons tested angles from 10 to 360 degrees, with and without a mobile goal, for their final version in In The Zone. In the end, they made a spreadsheet showing ‘ideal’ parameters to pass depending on the scenario, and they were further tuned based on that for each use.

  8. Take it step by step. Get each feature working before starting on the next. Not only does this help with time and source code management, but the team will probably learn something that might help with the next feature. Moving on to the algorithm for driving to a target: start by driving until the robot is past the target before worrying about correcting the angle. Then, consider which of the two major types of algorithm is wanted: one that continuously tries to point at the target (fastest), or one that tries to follow a predefined line as closely as possible (most consistent). Also, think about what might happen if the robot ends up too far off course, and make sure it can't end up in an infinite loop.

  9. Play it safe. Regardless of how perfect the testing environment is, at some point the team will probably want to use this at a competition. The field might be different in ways that are not as negligible as you might think, or there might be another robot (even your alliance partner) in the way. There's no limit to the number of things that can go wrong at a competition, so make sure to consider what the robot will do in these scenarios. A good starting point is: if the robot is trying to move but isn't actually moving, stop the autonomous. It might also be easier to write this as a separate task that runs continuously, regardless of what the autonomous is doing.
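As referenced in step 3, the following is a minimal sketch of a plain proportional turn-to-heading loop built on top of the tracked pose. It is not the PiLons' algorithm: `getPose()`, `setDriveVoltage()`, the gain `kP`, and the exit tolerance are all illustrative placeholders. The target is an absolute, field-centric heading (per step 5), and the error is wrapped to the closest equivalent angle (per step 6).

```cpp
#include <cmath>

// Illustrative placeholders, not part of the original system.
Pose getPose();                                   // current tracked pose (theta in radians)
void setDriveVoltage(double left, double right);  // -1.0 .. 1.0
void delay(int ms);

// Wrap an angle error into (-pi, pi] so the robot always takes the shorter turn.
double wrapAngle(double a) {
    while (a > M_PI)   a -= 2.0 * M_PI;
    while (a <= -M_PI) a += 2.0 * M_PI;
    return a;
}

// Turn on the spot to face an absolute (field-centric) heading, in radians.
void turnToHeading(double targetTheta) {
    const double kP = 1.5;          // proportional gain; tune per robot and scenario
    const double tolerance = 0.02;  // roughly 1 degree exit tolerance

    double error = wrapAngle(targetTheta - getPose().theta);
    while (std::fabs(error) > tolerance) {
        double output = kP * error;
        setDriveVoltage(-output, output);  // spin in place
        delay(10);
        error = wrapAngle(targetTheta - getPose().theta);
        // Per step 9, competition code should also add a timeout / stall check here.
    }
    setDriveVoltage(0.0, 0.0);
}
```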

General Programming Tips

The PiLons have also decided to include some general suggestions that aren't strictly related to position tracking, but (they hope) will help other teams with this and future projects nonetheless.

• Make it work. Make it right. Make it fast.

This quote, attributed to programmer Kent Beck, provides a good starter priority list for any non-trivial software project. The first priority should be to fulfil the basic functionality requirements by any means possible. Then, without changing the functionality or core algorithms, rewrite/refactor the code until it is easy to use, easy to understand, and thus easy to maintain (within reason). Finally, make it fast; in this context, this applies to both efficiency of the code (factoring out repeated identical calculations is always a good idea), and to the time it takes to reach the target, which can always be faster. It might be better to apply this philosophy at a smaller scale over each iteration or feature addition of your algorithms, but really it is up to you. As always, there is no one right way to do things.

• Take advantage of version control systems.

They highly recommend that you learn to (properly) use a version control system (VCS) such as Git. Not only is it a great way to maintain backups and allow collaboration, but with tools such as 'merge' and 'revert' it is also possible to manage changes to the code very precisely without editing source files directly.

• Use parallel programming wisely.

Parallel programming has different names on different platforms. In ROBOTC, it's called "tasks"; on modern desktop, mobile, and real-time operating systems, including PROS, it's typically called "threads". Regardless of the name, parallelization offers many benefits; for example, a loop can wait on a sensor value in its own task, whereas in a traditional "synchronous" program that loop would prevent other important code (such as the position tracking algorithm) from running. That being said, it is a very easy technology to overuse, and programs with too much parallelization can quickly become bug-prone and unmaintainable. For those who are relatively new to programming, they suggest limiting (or avoiding entirely) the use of tasks or threads, at least when starting out. Regardless, before starting to use them heavily, read up on race conditions and deadlocks; there are several online resources on those topics, and if in doubt, Wikipedia is always a good place to start.
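As a small illustration of the kind of race condition worth reading up on: if the tracking loop runs in one task and the motion code reads the pose from another, the pose should never be read while it is half-updated. The sketch below uses standard C++ primitives purely for illustration and reuses the `Pose` struct from earlier; on a V5 brain the same idea would be expressed with the platform's own task and mutex facilities (for example those provided by PROS), not `std::mutex` directly.

```cpp
#include <mutex>

// Shared between the tracking task (the writer) and the motion algorithms (readers).
struct SharedPose {
    std::mutex m;
    Pose pose{};

    void set(const Pose& p) {
        std::lock_guard<std::mutex> lock(m);
        pose = p;        // writer: the tracking loop
    }
    Pose get() {
        std::lock_guard<std::mutex> lock(m);
        return pose;     // readers: motion algorithms, telemetry, ...
    }
};
```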

Contributing Teams to this Article: QUEEN (Queen's University) and 5225A (E-Bots PiLons)
