DifferentialDrivePoseEstimator

class wpimath.estimator.DifferentialDrivePoseEstimator(*args, **kwargs)

Bases: pybind11_object

This class wraps an Unscented Kalman Filter to fuse latency-compensated vision measurements with differential drive encoder measurements. It will correct for noisy vision measurements and encoder drift. It is intended to be an easy drop-in for DifferentialDriveOdometry. In fact, if you never call addVisionMeasurement(), and only call update(), this will behave exactly the same as DifferentialDriveOdometry.

update() should be called every robot loop (if your robot loops are faster or slower than the default, then you should change the nominal delta time via the constructor).

addVisionMeasurement() can be called as infrequently as you want; if you never call it, then this class will behave like regular encoder odometry.

The state-space system used internally has the following states (x), inputs (u), and outputs (y):

\(x = [x, y, \theta, dist_l, dist_r]^T\) in the field coordinate system containing x position, y position, heading, left encoder distance, and right encoder distance.

\(u = [v_l, v_r, d\theta]^T\) containing left wheel velocity, right wheel velocity, and change in gyro heading.

NB: Using velocities makes things considerably easier, because it means that teams don’t have to worry about getting an accurate model. Basically, we suspect that it’s easier for teams to get good encoder data than it is for them to perform system identification well enough to get a good model.

\(y = [x, y, \theta]^T\) from vision containing x position, y position, and heading; or \(y = [dist_l, dist_r, \theta]^T\) containing left encoder position, right encoder position, and gyro heading.

Overloaded function.

  1. __init__(self: wpimath._controls._controls.estimator.DifferentialDrivePoseEstimator, kinematics: wpimath.kinematics._kinematics.DifferentialDriveKinematics, gyroAngle: wpimath.geometry._geometry.Rotation2d, leftDistance: meters, rightDistance: meters, initialPose: wpimath.geometry._geometry.Pose2d) -> None

Constructs a DifferentialDrivePoseEstimator with default standard deviations for the model and vision measurements.

The default standard deviations of the model states are 0.02 meters for x, 0.02 meters for y, and 0.01 radians for heading. The default standard deviations of the vision measurements are 0.1 meters for x, 0.1 meters for y, and 0.1 radians for heading.

Parameters:
  • kinematics – A correctly-configured kinematics object for your drivetrain.

  • gyroAngle – The gyro angle of the robot.

  • leftDistance – The distance traveled by the left encoder.

  • rightDistance – The distance traveled by the right encoder.

  • initialPose – The estimated initial pose.

  2. __init__(self: wpimath._controls._controls.estimator.DifferentialDrivePoseEstimator, kinematics: wpimath.kinematics._kinematics.DifferentialDriveKinematics, gyroAngle: wpimath.geometry._geometry.Rotation2d, leftDistance: meters, rightDistance: meters, initialPose: wpimath.geometry._geometry.Pose2d, stateStdDevs: Tuple[float, float, float], visionMeasurementStdDevs: Tuple[float, float, float]) -> None

Constructs a DifferentialDrivePoseEstimator.

Parameters:
  • kinematics – A correctly-configured kinematics object for your drivetrain.

  • gyroAngle – The gyro angle of the robot.

  • leftDistance – The distance traveled by the left encoder.

  • rightDistance – The distance traveled by the right encoder.

  • initialPose – The estimated initial pose.

  • stateStdDevs – Standard deviations of the pose estimate (x position in meters, y position in meters, and heading in radians). Increase these numbers to trust your state estimate less.

  • visionMeasurementStdDevs – Standard deviations of the vision pose measurement (x position in meters, y position in meters, and heading in radians). Increase these numbers to trust the vision pose measurement less.

addVisionMeasurement(*args, **kwargs)

Overloaded function.

  1. addVisionMeasurement(self: wpimath._controls._controls.estimator.DifferentialDrivePoseEstimator, visionRobotPose: wpimath.geometry._geometry.Pose2d, timestamp: seconds) -> None

Adds a vision measurement to the Kalman Filter. This will correct the odometry pose estimate while still accounting for measurement noise.

This method can be called as infrequently as you want, as long as you are calling update() every loop.

To promote stability of the pose estimate and make it robust to bad vision data, we recommend only adding vision measurements that are already within one meter or so of the current pose estimate.

Parameters:
  • visionRobotPose – The pose of the robot as measured by the vision camera.

  • timestamp – The timestamp of the vision measurement in seconds. If you don’t provide your own time source by calling updateWithTime(), then this timestamp must use an epoch since FPGA startup (i.e., the same epoch as wpilib.Timer.getFPGATimestamp()), so in that case you should use wpilib.Timer.getFPGATimestamp() as your time source.

  2. addVisionMeasurement(self: wpimath._controls._controls.estimator.DifferentialDrivePoseEstimator, visionRobotPose: wpimath.geometry._geometry.Pose2d, timestamp: seconds, visionMeasurementStdDevs: Tuple[float, float, float]) -> None

Adds a vision measurement to the Kalman Filter. This will correct the odometry pose estimate while still accounting for measurement noise.

This method can be called as infrequently as you want, as long as you are calling update() every loop.

To promote stability of the pose estimate and make it robust to bad vision data, we recommend only adding vision measurements that are already within one meter or so of the current pose estimate.

Note that the vision measurement standard deviations passed into this method will continue to apply to future measurements until a subsequent call to setVisionMeasurementStdDevs() or this overload.

Parameters:
  • visionRobotPose – The pose of the robot as measured by the vision camera.

  • timestamp – The timestamp of the vision measurement in seconds. If you don’t provide your own time source by calling updateWithTime(), then this timestamp must use an epoch since FPGA startup (i.e., the same epoch as wpilib.Timer.getFPGATimestamp()), so in that case you should use wpilib.Timer.getFPGATimestamp() as your time source.

  • visionMeasurementStdDevs – Standard deviations of the vision pose measurement (x position in meters, y position in meters, and heading in radians). Increase these numbers to trust the vision pose measurement less.

getEstimatedPosition() → wpimath.geometry._geometry.Pose2d

Gets the estimated robot pose.

Returns:

The estimated robot pose.

resetPosition(gyroAngle: wpimath.geometry._geometry.Rotation2d, leftDistance: meters, rightDistance: meters, pose: wpimath.geometry._geometry.Pose2d) → None

Resets the robot’s position on the field.

Parameters:
  • gyroAngle – The current gyro angle.

  • leftDistance – The distance traveled by the left encoder.

  • rightDistance – The distance traveled by the right encoder.

  • pose – The estimated pose of the robot on the field.

setVisionMeasurementStdDevs(visionMeasurementStdDevs: Tuple[float, float, float]) → None

Sets the pose estimator’s trust in vision measurements. This might be used to change trust in vision measurements after the autonomous period, or to change trust as distance to a vision target increases.

Parameters:

visionMeasurementStdDevs – Standard deviations of the vision pose measurement (x position in meters, y position in meters, and heading in radians). Increase these numbers to trust the vision pose measurement less.

update(gyroAngle: wpimath.geometry._geometry.Rotation2d, leftDistance: meters, rightDistance: meters) → wpimath.geometry._geometry.Pose2d

Updates the pose estimator with wheel encoder and gyro information. This should be called every loop.

Parameters:
  • gyroAngle – The current gyro angle.

  • leftDistance – The distance traveled by the left encoder.

  • rightDistance – The distance traveled by the right encoder.

Returns:

The estimated pose of the robot.

updateWithTime(currentTime: seconds, gyroAngle: wpimath.geometry._geometry.Rotation2d, leftDistance: meters, rightDistance: meters) → wpimath.geometry._geometry.Pose2d

Updates the pose estimator with wheel encoder and gyro information. This should be called every loop.

Parameters:
  • currentTime – The time at which this method was called.

  • gyroAngle – The current gyro angle.

  • leftDistance – The distance traveled by the left encoder.

  • rightDistance – The distance traveled by the right encoder.

Returns:

The estimated pose of the robot.