# Basic Pure Pursuit

Originally written by Sarah Xiang from VRC team 97963A and VEXU team ILLINI

### What Do You Need to Know Before Trying to Implement the Pure Pursuit Controller Yourself?

Working knowledge of feedback control loops such as PID

A functioning Position Tracking (odometry) system (also see Odometry)

Basic programming knowledge (arrays and loops)

High school level math such as trigonometry

### Important Notes

If you have any comments or suggestions, have encountered issues with the code, or have questions about the document, please contact Sarah (vexforum `@sarah_97963a`, discord `@Sarah | iCRY | 97963A#2509`).

*Wiki dev's note: GitBook doesn't have a way to run code natively (at least that I know of), but there are some references to running/writing the code throughout this article. If you'd like to try pure pursuit yourself, or just see what the output of the code should look like, **here's a link** to Sarah's original Google Colab.*

### Import Necessary Libraries
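The import cell itself isn't shown in this wiki version. A minimal set of imports that would support the code samples in this article might look like the following (`numpy` and `matplotlib` are assumptions based on the graphing and animation references, not a confirmed list from the original Colab):

```python
import math                      # atan2, sqrt, hypot for the controller math

import numpy as np               # assumed: convenient for arrays of path points
import matplotlib.pyplot as plt  # assumed: used only for graphing/animation
```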

### Helper Functions for Graphing

Below are some helper functions to visualize the output of the pure pursuit controller and the path generator. Feel free to modify these functions to graph lines in the style you like.
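The original helper functions aren't embedded here. As a stand-in, below is one possible sketch (the function name `plot_path_and_trajectory` and its styling are hypothetical, loosely matching the colors described in the demo later in this article):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt

def plot_path_and_trajectory(path, trajectory, robot_pos=None):
    """Graph the pre-computed path (dotted gray) and the robot's trajectory (solid black).

    Hypothetical helper: adjust the styling to your liking.
    """
    fig, ax = plt.subplots()
    ax.plot([p[0] for p in path], [p[1] for p in path],
            ls="--", color="gray", label="path")
    ax.plot([p[0] for p in trajectory], [p[1] for p in trajectory],
            color="black", label="trajectory")
    if robot_pos is not None:
        # big yellow dot for the robot's current position
        ax.plot(robot_pos[0], robot_pos[1], "o", color="gold", markersize=10)
    ax.set_aspect("equal")
    ax.legend()
    return fig, ax
```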

### What is the Pure Pursuit Controller?

The pure pursuit controller is an automatic steering method for wheeled mobile robots: it computes the angular velocity needed for the robot to stay on a pre-computed path, while the linear velocity is assumed to be constant. Therefore, an additional velocity controller of your choice is needed if you wish to slow the robot down as it approaches the target (something as simple as a proportional controller would work). During the 2020-2021 VRC season, our team used the pure pursuit controller for the programming skills challenge and achieved great results. Videos of our robot in action can be found here.

Read a brief introduction to the pure pursuit controller by MathWorks here.

A quick demonstration of path tracking using the pure pursuit controller:

In the animation shown above, the dotted gray line is the pre-computed path that the robot needs to follow and the solid black line is the robot's trajectory. The big yellow dot and the short red line represent the robot's current position and heading. There is another solid line connecting the point the robot is "chasing after" and the robot's current position, but it's very hard to see in this demo. Although the robot's trajectory does not match with the path perfectly, the pure pursuit controller still performed decently well.

In the sections below, we will implement a basic pure pursuit controller for a differential drive (NOT the type of differential commonly used for power sharing in VRC), a robot model fairly similar to the 4 wheel tank drives in VRC.

### Limitations

As shown above, the pure pursuit controller cannot trace a path perfectly due to the non-zero look ahead distance. The more sharp corners the path contains, the worse the performance. There are two ways to achieve better performance under the presence of sharp corners:

Optimize the path during the path generation process

Make improvements to the pure pursuit controller itself

For most circumstances in the VRC competition, the basic pure pursuit controller will suffice even without the above two improvements.

### How does it work?

The key idea of the pure pursuit controller is to calculate a goal point on the path that is a certain distance $l_d$, called the look ahead distance, from the robot's current position. The goal point is then used to calculate the angular velocity needed for the robot to move toward that point. As the robot moves forward, a new goal point on the path is chosen and the robot's angular velocity is updated. Since the robot can never reach this goal point and the goal point stays on the path, the end result is the robot tracing the path.

To find the appropriate goal point for the robot to follow mathematically, a circle centered at the robot's current position with radius $l_d$ is made. The circle's intersections with the path are the potential goal points. We will return to this part of the algorithm later in the document.

A more intuitive way of understanding and visualizing the pure pursuit controller would be the GIF below:

The carrot is always at a constant distance away from the donkey so the donkey moves in the direction the rider wants by chasing after the carrot.

In our implementation of the pure pursuit controller, the path is a 2D array of equally spaced points, where each point is a 1D array `[x, y]`. The basic algorithm uses a for loop to go through each pair of consecutive points and determine whether there are any intersections within each line segment. If goal points are found, the robot follows the most appropriate one. These steps are repeated until the end of the path is reached.
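The overall loop described above can be sketched roughly as follows (pseudo code, using names that appear later in this article):

```
while end of path not reached:
    update currentPos and currentHeading from odometry
    goalPt, lastFoundIndex = search path segments after lastFoundIndex
                             for intersections with the look-ahead circle
    compute linear and turn velocities toward goalPt
    command the drivetrain with those velocities
```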

### Line-Circle Intersection

Let's start by looking at the basic math behind choosing goal points -- the line-circle intersection. There are multiple ways to find the intersection; we have found that this method (the screenshot below) was the easiest to implement and debug. (Side note: the math here might look a bit terrifying, but it's okay if you don't fully understand how it works. This statement only applies to this section.)

In the method presented above, the circle is assumed to be centered at the origin. In our application, the circle would be centered at the robot's current position `[currentX, currentY]`, and the two points that define the line would be arbitrary points `pt1 = [x1, y1]` and `pt2 = [x2, y2]`. Therefore, to apply this method, we first need to subtract `currentX` and `currentY` from `[x1, y1]` and `[x2, y2]` to center the system at the origin. Again, we perform these offsets so that we can use the method from the screenshot above, which requires the circle to be centered at the origin.

In the code cell below, you are encouraged to implement such a line-circle intersection algorithm yourself. If you do not know Python at all, complete code with comments is provided.

#### Line-Circle Intersection: Commented Code Example
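The commented Colab cell isn't reproduced in this wiki version. Below is a minimal sketch of one possible implementation, following the offset-to-origin approach described above (the function name and signature are assumptions, not necessarily the original code):

```python
import math

def line_circle_intersection(current_pos, pt1, pt2, lookahead):
    """Intersections of a circle (center current_pos, radius lookahead)
    with the infinite line through pt1 and pt2. Returns [] if none exist."""
    # offset both points so the circle is centered at the origin
    x1 = pt1[0] - current_pos[0]
    y1 = pt1[1] - current_pos[1]
    x2 = pt2[0] - current_pos[0]
    y2 = pt2[1] - current_pos[1]
    dx, dy = x2 - x1, y2 - y1
    dr = math.sqrt(dx ** 2 + dy ** 2)
    D = x1 * y2 - x2 * y1
    # negative discriminant means the line never touches the circle
    discriminant = lookahead ** 2 * dr ** 2 - D ** 2
    if discriminant < 0:
        return []
    sqrt_d = math.sqrt(discriminant)
    sgn = -1 if dy < 0 else 1
    solutions = []
    for s in (1, -1):
        ix = (D * dy + s * sgn * dx * sqrt_d) / dr ** 2
        iy = (-D * dx + s * abs(dy) * sqrt_d) / dr ** 2
        # offset back into the original (field) coordinate frame
        solutions.append([ix + current_pos[0], iy + current_pos[1]])
    return solutions
```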

### Line-Circle Intersection with Bounds

You might have already noticed something not quite right when messing with the algorithm above. Under certain conditions, although the intersections found are on the infinite line defined by `pt1` and `pt2`, they are not within the range of `[x1, y1]` and `[x2, y2]`. Consider the situation illustrated below:

Although intersections are found, they are not within any of the line segments in the path. Those are not the points we want the robot to follow! So how do we prevent such a situation from happening?

The solution is simple. After intersections are found, we check to see if their x values are within the range of `[min(x1, x2), max(x1, x2)]` and their y values are within the range of `[min(y1, y2), max(y1, y2)]`. That is, for a solution to be valid and `intersectFound` to be `True`, the following conditions need to be satisfied:

You can modify your `line_circle_intersection` function in the code cell below. Completed and commented code is also provided.

(Side note: use `(a >= x) && (x >= b)` in VEXCode.)

#### Line-Circle Intersection with Bounds: Commented Code Example
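Again, the original cell isn't reproduced here. A sketch of the bounded version might look like the following (a small `eps` tolerance is an assumption added so that perfectly horizontal or vertical segments, where one coordinate range collapses to a single value, still pass the check despite floating-point error):

```python
import math

def line_circle_intersection_bounded(current_pos, pt1, pt2, lookahead):
    """Intersections of the look-ahead circle with the SEGMENT pt1 -> pt2 only."""
    # offset both points so the circle is centered at the origin
    x1 = pt1[0] - current_pos[0]
    y1 = pt1[1] - current_pos[1]
    x2 = pt2[0] - current_pos[0]
    y2 = pt2[1] - current_pos[1]
    dx, dy = x2 - x1, y2 - y1
    dr = math.sqrt(dx ** 2 + dy ** 2)
    D = x1 * y2 - x2 * y1
    discriminant = lookahead ** 2 * dr ** 2 - D ** 2
    if discriminant < 0:
        return []  # no intersection with the infinite line at all
    sqrt_d = math.sqrt(discriminant)
    sgn = -1 if dy < 0 else 1
    candidates = []
    for s in (1, -1):
        ix = (D * dy + s * sgn * dx * sqrt_d) / dr ** 2 + current_pos[0]
        iy = (-D * dx + s * abs(dy) * sqrt_d) / dr ** 2 + current_pos[1]
        candidates.append([ix, iy])
    # keep only intersections inside the segment's bounding ranges
    eps = 1e-9  # tolerance for horizontal/vertical segments
    min_x, max_x = min(pt1[0], pt2[0]) - eps, max(pt1[0], pt2[0]) + eps
    min_y, max_y = min(pt1[1], pt2[1]) - eps, max(pt1[1], pt2[1]) + eps
    return [p for p in candidates
            if min_x <= p[0] <= max_x and min_y <= p[1] <= max_y]
```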

### Choosing the Goal Point

All possible outcomes of the line-circle intersection (with a single line segment) are the following:

**Situation 1:** No intersection. If this is the case, the discriminant will be negative.

**Situation 2:** Intersections are found, but they are not in between `(x1, y1)` and `(x2, y2)`. The discriminant is positive, but we should still consider this situation as "no intersection".

**Situation 3:** There is one and only one intersection inside the range of `(x1, y1)` and `(x2, y2)`. The discriminant is positive, and the robot should follow the intersection found.

**Situation 4:** There are two intersections and both are in between `(x1, y1)` and `(x2, y2)`. In this situation, we have to determine which point is better for the robot to follow. One method we can use is to calculate the distance between each intersection and the second point `(x2, y2)` (the path goes in the direction of `(x1, y1) -> (x2, y2)`) and pick the point that is closer to the second point (in other words, the point closer to the end of the path).

In some extreme cases where the robot is traveling through sharp corners, it might create multiple intersections with multiple line segments (as shown in the picture below). We can code our program in a way that the robot follows the first valid point it finds. This way, in extreme cases where the path overlaps itself or there exist multiple sharp corners, the robot will not skip a portion of the path altogether. To prevent the robot from going backward along the path, we can create a variable `lastFoundIndex` to store the index of the point it just passed. Every time the loop runs, it will only check the points located after `path[lastFoundIndex]`. This way, **the segments the robot has already traveled through will not be checked again for intersections**. In the case that no new intersection is found (the robot deviates from the path), the robot will follow the point at `lastFoundIndex`.

Let's take a closer look at how `lastFoundIndex` and the goal point choosing function should work.

When the robot first enters the path (the 1st loop iteration), `lastFoundIndex = 0`. The line-circle intersection search starts from `path[0]`; an intersection is found and selected as the goal point for the robot to move toward.

At the end of the 1st loop iteration, `lastFoundIndex` is updated to 0 since the goal point was found in between `path[0]` and `path[1]`. When the 2nd loop iteration starts, the robot has moved closer to `path[1]`. Since `lastFoundIndex` is still 0, the line-circle intersection search starts from `path[0]` again. This time, there are two intersections with the path: one located in between `path[0]` and `path[1]` and the other located in between `path[1]` and `path[2]`.

Following the normal procedure, the algorithm would choose the intersection in between `path[0]` and `path[1]` as the goal point and break out of the search loop. As we can tell from the picture below, that is not a very good choice, since it would cause the robot to go backward. To avoid such a bad goal point choice, we can add an additional condition to evaluate the goal point found by the algorithm: **the search loop break statement can only be reached if the distance between the chosen goal point and the next point in the path is shorter than the distance between the robot's current position and the next point in the path**. If the above statement is not true, the search loop continues. We can also increment `lastFoundIndex` in case the robot fails to find intersections in the next line segment. This prevents the robot from going backward, since `path[lastFoundIndex]` will become the goal point when no intersection can be found.

Equivalent pseudo code:
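Since the pseudo code cell isn't rendered in this wiki version, here is one way to sketch the logic described above (`dist` stands for a Euclidean-distance helper):

```
for i in range(lastFoundIndex, len(path) - 1):
    find intersections between the look-ahead circle and segment path[i] -> path[i+1]
    if one or more valid intersections are found:
        goalPt = the valid intersection closest to path[i+1]
        if dist(goalPt, path[i+1]) < dist(currentPos, path[i+1]):
            lastFoundIndex = i
            break            # good goal point found, stop searching
        else:
            # this point would send the robot backward; keep searching
            lastFoundIndex = i + 1
    else:
        # no intersection in this segment; fall back to the last point passed
        goalPt = path[lastFoundIndex]
```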

In the situation illustrated below, it is obvious that the distance between the `goalPt` (the left intersection) and `path[1]` is greater than the distance between `currentPos` and `path[1]` (both distances are marked with dotted red lines). Hence, the search loop continues, and the intersection between `path[1]` and `path[2]` will be chosen in the next search loop iteration.

At the end of the 2nd loop iteration, `lastFoundIndex` is updated to 1. When the 3rd loop iteration starts, the next search will start from `path[1]`, so the intersection in between `path[0]` and `path[1]` will be omitted. The portion of the path omitted from goal point searching is marked in brown.

The code cell below can be used to implement the `goal_pt_search` algorithm yourself (for people who are not familiar enough with Python, commented code is provided as before). A sample path has been provided.

#### Choosing the Goal Point: Commented Code Example
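The commented Colab cell isn't reproduced here. The sketch below is one possible implementation of the search described above, combining the bounded line-circle intersection math with the `lastFoundIndex` rules; the names `goal_pt_search` and `pt_to_pt_distance` follow the article's naming style but are otherwise assumptions:

```python
import math

def pt_to_pt_distance(pt1, pt2):
    """Euclidean distance between two [x, y] points."""
    return math.hypot(pt2[0] - pt1[0], pt2[1] - pt1[1])

def goal_pt_search(path, current_pos, lookahead, last_found_index):
    """Return (goal_pt, last_found_index) for the current loop iteration."""
    eps = 1e-9
    found = False
    goal_pt = path[last_found_index]          # default if nothing better is found
    for i in range(last_found_index, len(path) - 1):
        # shift the segment so the look-ahead circle sits at the origin
        x1 = path[i][0] - current_pos[0]
        y1 = path[i][1] - current_pos[1]
        x2 = path[i + 1][0] - current_pos[0]
        y2 = path[i + 1][1] - current_pos[1]
        dx, dy = x2 - x1, y2 - y1
        dr = math.sqrt(dx ** 2 + dy ** 2)
        D = x1 * y2 - x2 * y1
        discriminant = lookahead ** 2 * dr ** 2 - D ** 2
        if discriminant < 0:
            continue                          # Situation 1: no intersection at all
        sqrt_d = math.sqrt(discriminant)
        sgn = -1 if dy < 0 else 1
        candidates = [
            [(D * dy + s * sgn * dx * sqrt_d) / dr ** 2 + current_pos[0],
             (-D * dx + s * abs(dy) * sqrt_d) / dr ** 2 + current_pos[1]]
            for s in (1, -1)
        ]
        # Situation 2 filter: keep intersections inside this segment's bounds
        lo_x = min(path[i][0], path[i + 1][0]) - eps
        hi_x = max(path[i][0], path[i + 1][0]) + eps
        lo_y = min(path[i][1], path[i + 1][1]) - eps
        hi_y = max(path[i][1], path[i + 1][1]) + eps
        valid = [p for p in candidates if lo_x <= p[0] <= hi_x and lo_y <= p[1] <= hi_y]
        if not valid:
            continue
        # Situation 4 tie-break: prefer the intersection closer to the segment's end
        best = min(valid, key=lambda p: pt_to_pt_distance(p, path[i + 1]))
        # only break if this goal point is actually "ahead" of the robot
        if pt_to_pt_distance(best, path[i + 1]) < pt_to_pt_distance(current_pos, path[i + 1]):
            goal_pt = best
            last_found_index = i
            found = True
            break
        # otherwise this point would send us backward; advance and keep looking
        last_found_index = i + 1
    if not found:
        goal_pt = path[last_found_index]      # robot deviated: chase the last known point
    return goal_pt, last_found_index
```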

### Move Toward Target Point - Part 1

Now that we have a goal point determined, the next step is to make the robot move toward that point. Let's first look at how to move the robot to a fixed target point. Consider the situation illustrated below, where the robot is located at `[currentX, currentY]` and we would like to make it move to point `[targetX, targetY]`:

The robot needs to perform two actions at the same time: moving toward the target point and turning toward the target point. To achieve this, we calculate the linear and turn velocities separately and add them together to obtain the final speed. Obtaining the linear error is easy in this case. In the picture above, the total distance the robot needs to travel is marked with the dotted gray line and has the value `sqrt(pow(targetY - currentY, 2) + pow(targetX - currentX, 2))`.

Obtaining the turn error, however, is slightly more complicated. The total amount the robot needs to turn is marked in blue. Since the direction of the turn is counterclockwise, the blue angle (referred to as `turnError` from this point on) should be positive. We can obtain the magnitude of this angle by performing subtraction:

The `turnError` can be calculated by subtracting the orange angle (which is just `currentHeading`) from the grey angle, which is the angle the vector `[targetX - currentX, targetY - currentY]` makes with the global X axis. From this point on, we will call this grey angle `absTargetAngle`. We can use the built-in `atan2` function to calculate its value. Since `atan2`'s output ranges from -180 to 180 degrees but our Cartesian coordinate system's range is 0 to 360 degrees, we simply add 360 to the `atan2` output if it is negative.

Before we move on, there are some special cases worth mentioning. If `currentHeading` is 1 degree while `absTargetAngle` is 359 degrees, we would get `turnError = absTargetAngle - currentHeading = 359 - 1 = 358 deg`, which is clearly not the turn we want: the robot should just turn 2 degrees clockwise instead. To make the motion as efficient as possible, the robot should turn to the target from the other direction whenever the magnitude of `turnError` is greater than 180 degrees.

The code cell below is a space for you to write a `find_min_angle` function that computes the minimum turn error the robot needs to turn toward the target point. If your implementation is correct, the angle returned should stay in the range of `[-180, 180]` under all circumstances. If you are more familiar with using `rotation` and modulo (%) instead of `heading`, feel free to implement the function that way. The animation function provided in later sections uses `heading` by default, but code that's compatible with `rotation` and modulo will also be provided.

#### Find Minimum Turn Error: Commented Code Example
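As a reference point, here is one possible `heading`-based implementation (a sketch, not necessarily the article's original code); both inputs are assumed to be headings in `[0, 360)` measured from the global X axis:

```python
def find_min_angle(abs_target_angle, current_heading):
    """Minimum signed turn error in degrees, wrapped into [-180, 180].

    Positive means a counterclockwise turn.
    """
    turn_error = abs_target_angle - current_heading  # raw difference in (-360, 360)
    if turn_error > 180:
        turn_error -= 360  # shorter to turn the other way (clockwise)
    elif turn_error < -180:
        turn_error += 360  # shorter to turn counterclockwise
    return turn_error
```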

### Moving Toward Target Point - Part 2

With the linear error and the turn error obtained, we can calculate the robot's left and right side velocities. Since the turn error is positive when the robot turns counterclockwise, the left side should have velocity `linearVel - turnVel` and the right side should have velocity `linearVel + turnVel`. In the code example below, a basic proportional controller is used to compute these two velocities. Please note that the animation code assumes `linearVel` and `turnVel` are in `velocityUnits::pct`.
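Since the code cell isn't rendered in this wiki version, below is a sketch of one such proportional controller (the function name `move_to_point_step` and the gains `kp_lin` and `kp_turn` are made-up placeholders to tune on a real robot):

```python
import math

def move_to_point_step(current_x, current_y, current_heading,
                       target_x, target_y, kp_lin=70, kp_turn=1.5):
    """One control-loop iteration: returns (left_vel, right_vel)."""
    # linear error: straight-line distance to the target
    linear_error = math.sqrt((target_y - current_y) ** 2 + (target_x - current_x) ** 2)
    # absolute angle of the target from the global X axis, mapped into [0, 360)
    abs_target_angle = math.degrees(math.atan2(target_y - current_y,
                                               target_x - current_x))
    if abs_target_angle < 0:
        abs_target_angle += 360
    # minimum signed turn error, wrapped into [-180, 180]
    turn_error = abs_target_angle - current_heading
    if turn_error > 180:
        turn_error -= 360
    elif turn_error < -180:
        turn_error += 360
    # basic proportional control on both errors
    linear_vel = kp_lin * linear_error
    turn_vel = kp_turn * turn_error
    # positive turn error = counterclockwise: slow the left side, speed up the right
    left_vel = linear_vel - turn_vel
    right_vel = linear_vel + turn_vel
    return left_vel, right_vel
```

On a real robot the returned values would still need to be clamped to the drivetrain's velocity range (e.g. -100 to 100 pct).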

To recap, the move-to-target-point algorithm goes as follows (not real code):
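```
while target not reached:
    read currentX, currentY, currentHeading from odometry
    linearError = sqrt((targetX - currentX)^2 + (targetY - currentY)^2)
    absTargetAngle = atan2(targetY - currentY, targetX - currentX)
    if absTargetAngle < 0: absTargetAngle += 360
    turnError = find_min_angle(absTargetAngle, currentHeading)
    linearVel = kP_linear * linearError
    turnVel   = kP_turn * turnError
    left side velocity  = linearVel - turnVel
    right side velocity = linearVel + turnVel
```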