Articles by tag: control

    Balancing and PID

    Balancing and PID By Tycho

    Task: Test and improve the PID system and balance code

    We're currently testing code to give Argos a balancing system so that we can demo it. This also tests the PID support in the new REV Robotics Expansion Hubs, which we plan to switch to this season if they prove reliable. Example code is below.

    public void BalanceArgos(double Kp, double Ki, double Kd, double pwr, double currentAngle, double targetAngle)
     {
         //sanity check - exit balance mode if we are out of recovery range
         if (isBalanceMode()){ //only balance in the right mode
     
             setHeadTilt(nod);
     
             //servo steering should be locked straight ahead
             servoSteerFront.setPosition(.5);
             servoSteerBack.setPosition(0.5);
     
             //double pwr = clampMotor((roll-staticBalance)*-.05);
     
             balancePID.setOutputRange(-.5,.5);
             balancePID.setPID(Kp, Ki, Kd);
             balancePID.setSetpoint(staticBalance);
             balancePID.enable();
             balancePID.setInput(currentAngle);
             double correction = balancePID.performPID();
     
             logger.UpdateLog(Long.toString(System.nanoTime()) + ","
                     + Double.toString(balancePID.getDeltaTime()) + ","
                     + Double.toString(currentAngle) + ","
                     + Double.toString(balancePID.getError()) + ","
                     + Double.toString(balancePID.getTotalError()) + ","
                     + Double.toString(balancePID.getDeltaError()) + ","
                     + Double.toString(balancePID.getPwrP()) + ","
                     + Double.toString(balancePID.getPwrI()) + ","
                     + Double.toString(balancePID.getPwrD()) + ","
                     + Double.toString(correction));
     
             timeStamp=System.nanoTime();
             motorFront.setPower(correction);
         }
     }
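    For reference, here is a minimal sketch of the update we believe performPID() runs internally, inferred from the fields we log above (error, total error, delta error, and the per-term powers). This is an assumption for illustration, not the actual PIDController source:

    public class SimplePID {
        private double kP, kI, kD;
        private double setpoint;
        private double totalError; //accumulated error for the I term
        private double lastError;  //previous error for the D term

        public SimplePID(double kP, double kI, double kD, double setpoint) {
            this.kP = kP; this.kI = kI; this.kD = kD; this.setpoint = setpoint;
        }

        //deltaTime is the elapsed time in seconds since the previous call
        public double update(double input, double deltaTime) {
            double error = setpoint - input;
            totalError += error * deltaTime;
            double deltaError = (error - lastError) / deltaTime;
            lastError = error;
            //these three terms mirror getPwrP(), getPwrI() and getPwrD() above
            return kP * error + kI * totalError + kD * deltaError;
        }
    }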

    PID Calibration and Testing

    PID Calibration and Testing By Tycho

    Task: Allow user to change PID coefficients from the controller

    To let each driver create their own settings, we're designing a way to tune the PID coefficients directly from the controller. This also makes debugging our robot easier.

    public void PIDTune(PIDController pid, boolean pidIncrease, boolean pidDecrease, boolean magnitudeIncrease, boolean magnitudeDecrease, boolean shouldStateIncrement) {
     if (shouldStateIncrement) {
      pidTunerState = stateIncrement(pidTunerState, 0, 2, true);
     }
     if (magnitudeIncrease) {
      pidTunerMagnitude *= 10;
     }
     if (magnitudeDecrease) {
      pidTunerMagnitude /= 10;
     }
     double dir;
     if (pidIncrease) dir = 1;
     else if (pidDecrease) dir = -1;
     else dir = 0;
     switch (pidTunerState) {
      case 0: //adjust the proportional coefficient
       pid.setPID(pid.getP() + pidTunerMagnitude * dir, pid.getI(), pid.getD());
       break;
      case 1: //adjust the integral coefficient
       pid.setPID(pid.getP(), pid.getI() + pidTunerMagnitude * dir, pid.getD());
       break;
      case 2: //adjust the derivative coefficient
       pid.setPID(pid.getP(), pid.getI(), pid.getD() + pidTunerMagnitude * dir);
       break;
     }
    }
    public double getPidTunerMagnitude() {
     return pidTunerMagnitude;
    }
    public int getPidTunerState() {
     return pidTunerState;
    }
    public int stateIncrement(int val, int minVal, int maxVal, boolean increase) {
     if (increase) {
      if (val == maxVal) {
       return minVal;
      }
      val++;
      return val;
     } else {
      if (val == minVal) {
       return maxVal;
      }
      val--;
      return val;
     }
    }
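
    Here's a hedged example of how PIDTune might be wired into the TeleOp loop. The button mapping and the toggleAllowed() debounce helper are hypothetical stand-ins, not our final control scheme:

    if (gamepad1.x) { //hold to stay in PID tuning mode
        PIDTune(balancePID,
                toggleAllowed(gamepad1.dpad_up),    //increase current coefficient
                toggleAllowed(gamepad1.dpad_down),  //decrease current coefficient
                toggleAllowed(gamepad1.dpad_right), //magnitude x10
                toggleAllowed(gamepad1.dpad_left),  //magnitude /10
                toggleAllowed(gamepad1.a));         //cycle P -> I -> D
        telemetry.addData("PID tuner", "state: %d, magnitude: %f",
                getPidTunerState(), getPidTunerMagnitude());
    }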
    

    Testing Materials

    Testing Materials By Austin, Evan, and Tycho

    Task: Test Materials for V2 Gripper

    Though our current gripper works sufficiently well, there are some issues we would like to improve in our second version. The mounting system is unstable and easily comes out of alignment because the REV rail keeps bending. Another issue we've encountered is the servo pulling the grippers so that they begin to cave inwards, releasing any blocks held at the bottom. By far the biggest problem is our intake: our drivers have to align the robot with a block so precisely to stack it that it eats up a majority of our game time. However, this gripper has some advantages, such as light weight and adjustability, that we would like to carry over into the second version.

      We tested out a few different materials:
    • Silicone Baking Mats - The mats were a neutral option; they didn't have any huge advantages or disadvantages (other than not adhering well). They could have been used, but there were better options.
    • Shelf Liner - It was far too slippery. Also, when thinking about actually making the grippers, there was no good way to attach it to them. Using this material would have been too much work for little gain.
    • Baking Pan Lining (picked) - It was made of durable rubber but was still very malleable, which is a big advantage: we need the grippers to compress and 'grip' the block without causing any damage.
    • Rubber Bands on Wheels - This material was closest to our original version and, unexpectedly, carried over one of its problems. It still requires very specific orientations to pick up blocks, which would defeat the purpose of this entire task.

    This testing feeds into our future grabber design, which will need to be relatively light, as our string is currently breaking under the stress of the weight. The material must also hold up under direct shear and direct tension, as the grabber will have rotating arms that move in and out to grasp blocks. We're also replacing the Tetrix parts with REV, as they're smaller and a little lighter, with the added bonus of more mounting points.

    Machine Vision Goals – Part 1

    Machine Vision Goals – Part 1 By Tycho

    We’ve been using machine vision for a couple of years now and have a plan to use it in Relic Rescue for a number of things. I mostly haven’t gotten to it because college application deadlines have a higher priority for me this year. But since we already have experience with color blob tracking in OpenCV and Vuforia tracking, I hope this won’t be too difficult. We have 5 different things we want to try:

    VuMark decode – this is obvious since it gives us a chance to regularly get the glyph crypto bonus. From looking at the code, it seems to be a single line different from the Vuforia tracking code we’ve already got. It’s probably a good idea to signal the completed decode by flashing our lights or something like that. That will make it more obvious to judges and competitors.

    Jewel Identification – most teams seem to be using the REV color sensor on their jewel displacement arm. We'll probably start out doing that too, but I'd also like to use machine vision to identify the correct jewel. Just because we can. Just looking at the arrangement, we should be able to get both the jewels and the Vuforia target in the same frame at the beginning of autonomous.

    Alignment – it is not legal to extend a part of the robot outside of the 18” dimensions during match setup. So we can’t put the jewel arm out to make sure it is between the jewels. But there is nothing preventing us from using the camera to assist with alignment. We can even draw on the screen where the jewels should appear, like inside the orange box below. This will also help with Jewel ID – we won’t have to hunt for the relevant pixels – we can just compare the average hue of the two regions around the wiffle balls.
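
    Here's a rough sketch of that hue comparison in OpenCV for Java. The two sampling rectangles are hypothetical; we'd position them where the jewels should appear on screen:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    public class JewelId {
        //returns true if the left region looks redder than the right one
        public static boolean leftIsRed(Mat rgb, Rect left, Rect right) {
            Mat hsv = new Mat();
            Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);
            double leftHue = Core.mean(hsv.submat(left)).val[0];
            double rightHue = Core.mean(hsv.submat(right)).val[0];
            //in OpenCV's 0-179 hue range red sits near both ends, so
            //"distance from red" is min(h, 180 - h); smaller means redder
            return Math.min(leftHue, 180 - leftHue) < Math.min(rightHue, 180 - rightHue);
        }
    }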

    Autonomous Deposition – this is the most ambitious use for machine vision. The dividers on the crypto boxes should make pretty clear color blob regions. If we can find the center points between these regions, we should be able to code an automatically centering glyph-depositing behavior.

    Autonomous glyph collection – ok this is actually harder. Teams seem to spend most of their time retrieving glyphs. Most of that time seems to be spent getting the robot and the glyphs square with each other. Our drivers have a lot of trouble with this even though we have a very maneuverable mecanum drive. What if we could create a behavior that would automatically align the robot to a target glyph on approach? With our PID routines we should be able to do this pretty efficiently. The trouble is we need to figure out the glyph orientation by analyzing frames on approach. And it probably means shape analysis – something we’ve never done before. If we get to this, it won’t be until pretty late in the season. Maybe we’ll come up with a better mechanical approach to aligning glyphs with our bot and this won’t be needed.

    Tools for Experimenting

    Machine vision folks tend to think about image analysis as a pipeline that strings together different image processing algorithms in order to understand something about the source image or video feed. These algorithms are often things like convolution filters that isolate different parts of the image. You have to decide which stages to put into a pipeline depending on what that pipeline is meant to detect or decide. To make it easier to experiment, it’s good to use tools that let you create these pipelines and play around with them before you try to hard-code it into your robot.

    I've been using a tool called ImagePlay. http://imageplay.io/ It's open source and based on OpenCV. I used it to create a pipeline that has some potential to help navigation in this year's challenge. Since ImagePlay is open source, once you have a pipeline you can figure out the calls it makes to OpenCV to construct the stages. It's based on the C++ implementation of OpenCV, so we'll have to translate that to Java for Android. It has a very nice pipeline editor that supports branching. The downside is that this tool is buggy and doesn't have anywhere near the number of filters and algorithms that RoboRealm supports.

    RoboRealm is what we wanted to use. We've been pretty closely connected with the Dallas Personal Robotics Group (DPRG) for years, and Carl Ott is a member who has taught a couple of sessions on using RoboRealm to solve the club's expert line-following course. Based on his recommendation we contacted the RoboRealm folks and they gave us a 5-user commercial license. I think that's valued at $2,500. They seemed happy to support FTC teams.

    RoboRealm is much easier to experiment with, and they have great documentation, so we now have an improved pipeline. It's going to take more work to figure out how to implement that pipeline in OpenCV, because it's not always clear what a particular stage in RoboRealm does at a low level. But this improved pipeline isn't all that different from the ImagePlay version.

    Candidate Pipeline

    So here is a picture of a red cryptobox sitting against a wall with a bunch of junk in the background. This image ended up upside down, but that doesn't matter for experimenting. I wanted a challenging image, because I want to know early whether we'll need a clean background for the cryptoboxes. If so, we might need to ask the FTA if we can put an opaque background behind them:

    Stage 1 – Color Filter – this selects only the reddest pixels

    Stage 2 – GreyScale – We don't need the color information anymore, and dropping it reduces the data size

    Stage 3 – Flood Fill – This simplifies a region by flooding it with the average color of nearby pixels. It's the same thing as the posterize effect in Photoshop. This also tends to remove some of the background noise.

    Stage 4 – Auto Threshold – Turns the image into a B/W image with no grey values based on a thresholding algorithm that only the RoboRealm folks know.

    Stage 5 – Blob Size – A blob is a set of connected pixels with a similar value. Here we are limiting the output to the 4 largest blobs, because normally there are 4 dividers visible. In this case there is an error: the small blob on the far right is classified as a divider even though it is just some other red thing in the background. That happened because the leftmost column was mostly cut out of the frame and wasn't lit very well, so it ended up being erased by this pipeline.

    Stages 6 & 7 – Moment Statistics – Moments are calculations that can help to classify parts of images. We’ve used Hu Moments since our first work with machine vision on our robot named Argos. They can calculate the center of a blob (center of gravity), its eccentricity, and its area. Here the center of gravity is the little red square at the center of each blob. Now we can calculate the midpoint between each blob to find the center of a column and use that as a navigation target if we can do all this in real-time. We may have to reduce image resolution to speed things up.
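
    To start translating this into our codebase, here's a hedged sketch of the same pipeline in OpenCV for Java (the flavor we'd use on Android). The stage structure follows the RoboRealm version above, but the HSV red range, the median blur standing in for flood fill, and the class and method names are all assumptions we'd have to tune against real field images:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.imgproc.Moments;
    import java.util.ArrayList;
    import java.util.List;

    public class CryptoboxPipeline {

        //returns the centroid of each detected divider blob, at most 4
        public List<Point> findDividerCenters(Mat rgb) {
            //Stage 1: color filter - keep only the reddest pixels (range is a guess)
            Mat hsv = new Mat();
            Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);
            Mat mask = new Mat();
            Core.inRange(hsv, new Scalar(0, 100, 100), new Scalar(10, 255, 255), mask);

            //Stages 2-4: inRange already yields a single-channel B/W mask, so
            //greyscale and auto threshold collapse into it; a median blur stands
            //in for the flood fill's noise removal
            Imgproc.medianBlur(mask, mask, 9);

            //Stage 5: blob size - find contours and keep the 4 largest
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            contours.sort((a, b) -> Double.compare(Imgproc.contourArea(b), Imgproc.contourArea(a)));
            if (contours.size() > 4) contours = contours.subList(0, 4);

            //Stages 6-7: moments - each blob's center of gravity is (m10/m00, m01/m00)
            List<Point> centers = new ArrayList<>();
            for (MatOfPoint contour : contours) {
                Moments m = Imgproc.moments(contour);
                if (m.get_m00() > 0) {
                    centers.add(new Point(m.get_m10() / m.get_m00(), m.get_m01() / m.get_m00()));
                }
            }
            return centers;
        }
    }

    The midpoints between adjacent centers would then give us the column targets described above, and dropping the camera resolution should keep this fast enough to run in real time.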

    Working on Autonomous

    Working on Autonomous By Tycho

    Task: Create a temporary autonomous for the bot

    We attempted to create an autonomous for our first scrimmage. It aimed to make the robot drive forward into the safe zone. However, we forgot to align the robot, so it failed at the scrimmage.

    Instead of walking through the code like usual, we've documented its main functions directly in comments so that anyone can understand what it does without prior coding knowledge.

     public void autonomous2 (){
    
            switch(autoState){
                case 0: //first move toward the safe zone
                    if (robot.driveStrafe(false, .60, .35)) {
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                case 1: //scan jewels and decide which one to hit
                    if (robot.driveForward(false, .25, .35)) {
                        autoTimer = futureTime(1f);
                        robot.resetMotors(true);
                        autoState++;
                    }
    
                    break;
                case 2: //short move to knock off jewel
    
                    robot.glyphSystem.ToggleGrip();
                    autoTimer = futureTime(1f);
    
                    robot.resetMotors(true);
                    autoState++;
                    break;
                case 3: //back off of the balance stone
                    if (robot.driveForward(true, .10, .35)) {
                        autoTimer = futureTime(3f);
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                case 4: //re-orient the robot
                    autoState++;
                    break;
                case 5: //drive to proper crypto box column based on vuforia target
                    autoState++;
                    break;
                case 6: // turn towards crypto box
                    autoState++;
                    break;
                case 7: //drive to crypto box
                    autoState++;
                    break;
                case 8: //deposit glyph
                    autoState++;
                    break;
                case 9: //back away from crypto box
                    autoState++;
                    break;
            }
        }
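
    One helper worth calling out: several cases above set autoTimer with futureTime(). A likely shape for that helper, assuming it converts a delay in seconds into an absolute System.nanoTime() deadline (the real one lives elsewhere in our codebase):

    //convert a delay in seconds into an absolute nanosecond deadline that
    //the state machine can compare against System.nanoTime()
    long futureTime(float seconds) {
        return System.nanoTime() + (long) (seconds * 1e9);
    }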
    

    Adding Code Fixes to the Robot

    Adding Code Fixes to the Robot By Tycho

    Task: Add code updates

    Today's commits add the following functionality:

    • Pre-game logic - joystick control
    • Fix PID settings
    • Autonomous resets motor
    • Jewel Arm functionality
    • Autonomous changes
    • Tests servos

    These commits improve quality of life for our drivers, help the robot run more smoothly in both autonomous and TeleOp, let us score the jewels, and let us test servos.

    Jewel Arm


    package org.firstinspires.ftc.teamcode;
    
    import com.qualcomm.robotcore.hardware.NormalizedColorSensor;
    import com.qualcomm.robotcore.hardware.Servo;
    
    /**
     * Created by 2938061 on 11/10/2017.
     */
    
    public class JewelArm {
    
        private Servo servoJewel;
        private NormalizedColorSensor colorJewel;
        private int jewelUpPos;
        private int jewelDownPos;
    
        public JewelArm(Servo servoJewel, NormalizedColorSensor colorJewel, int jewelUpPos, int jewelDownPos){
            this.servoJewel = servoJewel;
            this.colorJewel = colorJewel;
            this.jewelUpPos = jewelUpPos;
            this.jewelDownPos = jewelDownPos;
        }
    
        public void liftArm(){
            servoJewel.setPosition(ServoNormalize(jewelUpPos));
        }
        public void lowerArm(){
            servoJewel.setPosition(ServoNormalize(jewelDownPos));
        }
    
        public static double ServoNormalize(int pulse){
            double normalized = (double)pulse;
        return (normalized - 750.0) / 1500.0; //convert MR servo controller pulse width to a double on the 0 - 1 scale
        }
    
    }
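
    ServoNormalize maps a Modern Robotics servo controller pulse width (750 to 2250) onto the SDK's 0 to 1 position scale. Here's a hypothetical wiring example for an op mode's init; the device names and pulse values are placeholders we'd calibrate on the robot:

    //hypothetical setup - device names and pulse widths are placeholders
    Servo servoJewel = hardwareMap.servo.get("servoJewel");
    NormalizedColorSensor colorJewel =
            hardwareMap.get(NormalizedColorSensor.class, "colorJewel");
    JewelArm jewel = new JewelArm(servoJewel, colorJewel, 900, 1800);
    jewel.lowerArm(); //setPosition((1800 - 750) / 1500.0) = 0.7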
    

    Autonomous

    		public void autonomous(){
            switch(autoState){
                case 0: //scan vuforia target and deploy jewel arm
                    robot.jewel.lowerArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        relicCase = getRelicCodex();
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break;
                case 1: //small turn to knock off jewel
                    if ((isBlue && jewelMatches)||(!isBlue && !jewelMatches)){
                        if(robot.RotateIMU(10, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    else{
                        if(robot.RotateIMU(350, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    break;
                case 2: //lift jewel arm
                    robot.jewel.liftArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break;
                case 3: //turn parallel to the wall
                    if(isBlue){
                        if(robot.RotateIMU(270, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 4: //drive off the balance stone
                    if(robot.driveForward(true, .3, .5)) {
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                case 5: //re-orient robot
                    if(isBlue){
                        if(robot.RotateIMU(270, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 6: //drive to proper crypto box column based on vuforia target
                    switch (relicCase) {
                        case 0:
                            if(robot.driveForward(true, .5, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 1:
                            if(robot.driveForward(true, .75, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 2:
                            if(robot.driveForward(true, 1.0, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                    }
                    break;
                case 7: //turn to crypto box
                    if(isBlue){
                        if(robot.RotateIMU(315, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(45, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 8: //deposit glyph
                    if(robot.driveForward(true, 1.0, .50)) {
                        robot.resetMotors(true);
                        robot.glyphSystem.ReleaseGrip();
                        autoState++;
                    }
                    break;
                case 9: //back away from crypto box
                    if(robot.driveForward(false, .5, .50)){
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                default:
                    robot.resetMotors(true);
                    autoState = 0;
                    active = false;
                    state = 0;
                    break;
            }
        }
        public void autonomous2 (){
    
            switch(autoState){
                case 0: //scan vuforia target and deploy jewel arm
                    robot.jewel.lowerArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        relicCase = getRelicCodex();
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break;
                case 1: //small turn to knock off jewel
                    if ((isBlue && jewelMatches)||(!isBlue && !jewelMatches)){
                        if(robot.RotateIMU(10, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    else{
                        if(robot.RotateIMU(350, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    break;
                case 2: //lift jewel arm
                    robot.jewel.liftArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break;
                case 3: //turn parallel to the wall
                    if(isBlue){
                        if(robot.RotateIMU(270, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 4: //drive off the balance stone
                    if(robot.driveForward(true, .3, .5)) {
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                case 5: //re-orient robot
                    if(isBlue){
                        if(robot.RotateIMU(270, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 6: //drive to proper crypto box column based on vuforia target
                    switch (relicCase) {
                        case 0:
                            if(robot.driveStrafe(true, .00, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 1:
                            if(robot.driveStrafe(true, .25, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 2:
                            if(robot.driveStrafe(true, .50, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                    }
                    break;
                case 7: //turn to crypto box
                    if(isBlue){
                        if(robot.RotateIMU(215, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(135, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 8: //deposit glyph
                    if(robot.driveForward(true, 1.0, .50)) {
                        robot.resetMotors(true);
                        robot.glyphSystem.ReleaseGrip();
                        autoState++;
                    }
                    break;
                case 9: //back away from crypto box
                    if(robot.driveForward(false, .5, .50)){
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                default:
                    robot.resetMotors(true);
                    autoState = 0;
                    active = false;
                    state = 0;
                    break;
            }
        }
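
    Both routines lean on robot.doesJewelMatch(isBlue) to pick the turn direction in case 1. Here's a sketch of what that check might look like using the jewel arm's NormalizedColorSensor; this is an assumption for illustration, as the real method lives in our robot class:

    //sketch of a doesJewelMatch() implementation (assumption, not the committed
    //code): compare the red and blue channels from the jewel arm's color sensor
    //and report whether the sensed jewel matches our alliance color
    public boolean doesJewelMatch(boolean isBlue) {
        NormalizedRGBA colors = colorJewel.getNormalizedColors();
        boolean sensedBlue = colors.blue > colors.red;
        return sensedBlue == isBlue;
    }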
    

    Driving Struggles

    Driving Struggles By Abhi

    Task: Drive the Robot

    Today we tried to drive the robot on the practice field for the first time since the qualifier last Saturday. However, we couldn't get in very much quality drive practice because the robot kept breaking down. We decided to dig a bit deeper and found some issues.

    As seen above, the first problem was that the lift was tilted. Because the grabber arm's plank is cantilevered off the vertical axis, the structure had only one bar supporting the lift. As a result, ever since the robot was built, the REV rail of the mount had been wearing down, to the point where it finally broke. The single-rod mounting also let the lift system rotate about the vertical axis, forcing drivers like myself to rotate into the cryptobox every time we needed to score. This was not a good way for the robot to function, and it frustrated us.

    Another issue was that the lift system's string often got caught in the robot's wiring. The friction between the string and the wiring, including the jewel system, breaks the string and creates a safety issue. As a result, we need to fix either the wiring of the robot or the lift system altogether.

    Reflections

    We hope to make improvements over this week before the Oklahoma qualifier. Hopefully, we will have a more reliable robot, making things easier on our drivers.

    Code Fixes and Readability

    Code Fixes and Readability By Tycho

    Task: Make the code more readable

    We can't include all the code changes we made today, but all of them involved cleaning up our code: removing functions we didn't use, refactoring, adding comments, and making the codebase more readable for the tournament. We had almost 80k deletions and 80k additions. This marks a turning point in the readability of our code, so that less experienced team members can read it. We went through methodically and commented each function and method, as we will have to pass the codebase on to next year's team.

    Drive Practice

    Drive Practice By Karina, Charlotte, and Abhi

    Task: Become experts at driving the robot and scoring glyphs

    Iron Reign's robot drivers Abhi, Charlotte, and I have been working hard to decrease our team's glyph-scoring time. Over the past few meets, we have spent many hours practicing maneuvering on the field and around blocks, something that is crucial if we want to go far this competition season. When we first started driving the robot, we took approximately 4 minutes to complete a single column of the cryptobox, but now we can fill one and a half columns in two minutes.

    When we first started practicing, we had trouble aligning with the glyphs to grab them. The fact that we were using our prototype arms was partially at fault for our inability to move quickly and efficiently. We also had some human error to blame. Personally, it was difficult for me not to confuse my orientation with the robot's orientation. In addition, our drive team had yet to establish a communication system between the driver and the coach, so the driver had no guidance as to which glyphs were the easiest to go for or whether the robot was in position to grab a glyph. Below is a video that shows our shaky beginning:

    Our driving has improved significantly. We have done mock teleop runs, timed ourselves on how long we take to complete different tasks, and have repeatedly tried stacking blocks and parking on the balancing stone. When our robot doesn't break, we can fill up to two columns of the cryptobox!

    Reflections

    Overall, we feel that we can further improve our driving skills with more drive practice. Driving the robot really does require being familiar with your robot and its quirks, as well as the controls to move the robot. Abhi, Charlotte, and I know we are still far from being driving experts, but we are putting forth our time and effort so that we can give it our best at tournaments.

    Control Award

    Control Award By Janavi

    Task: Write the Control Award submission

    Last Saturday, after our qualifier, we had a team meeting where we created a list of what we needed to do before our second qualifier this Saturday. One of those tasks was to write the Control Award submission, which we were unfortunately unable to complete in time for our last competition.

    Autonomous Objective:

    1. Knock off opponent's jewel, place glyphs in the correct location based on the image, park in safe zone (85 pts)
    2. Park in safe zone, place glyph in cryptobox (25 pts)

    Autonomous B can be delayed for a set amount of time, allowing for better coordination with alliance partners. If our partner's autonomous is more reliable, we can give them freedom to move while still adding points to our alliance score.

    Sensors Used

    1. Phone Camera - Allows the robot to determine where to place glyphs using Vuforia, taking advantage of the wide range of data provided by the pattern detection, as well as using OpenCV to analyze the image.
    2. Color Sensor - The robot selects the correct jewel using the sensor's passive mode. This feedback lets us determine whether the robot needs to move forwards or backwards to knock off the opposing team's jewel.
    3. Inertial Measurement Unit (IMU) - The three-axis gyroscope and accelerometer return the robot's heading for station keeping and straight-line driving in autonomous, while letting us orient to specific headings for navigation, cryptobox placement, and balancing.
    4. Motor Encoders - Using returned motor odometry, we track how many rotations the wheels have made and convert that into meters travelled (a sketch of this conversion follows the list). We use this in combination with feedback from the IMU to calculate our location on the field relative to where we started.
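
    The encoder-to-distance conversion in item 4 is simple; the tick count and wheel diameter below are placeholders for our actual drivetrain constants:

    //encoder ticks to meters - both constants are placeholders to calibrate
    static final double TICKS_PER_REV = 1120;      //e.g. a NeveRest 40 output shaft
    static final double WHEEL_DIAMETER_M = 0.1016; //4-inch wheel
    static double metersTravelled(int ticks) {
        return (ticks / TICKS_PER_REV) * Math.PI * WHEEL_DIAMETER_M;
    }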

    Key Algorithms:

    1. Integrate motor odometry with the IMU gyroscope and accelerometer using trigonometry so the robot knows its location at all times (see the dead-reckoning sketch after this list)
    2. Use Proportional/Integral/Derivative (PID) combined with IMU readouts to maintain heading. The robot corrects any differences between actual and desired heading at a power level appropriate for the difference and amount of error built up. This allows us to navigate the field accurately during autonomous.
    3. We use Vuforia to track and maintain distance from the patterns on the wall using the robot controller phone's camera. This combines two machine vision libraries with trigonometry and PID motion control.
    4. All code is non-blocking to allow multiple operations to happen at the same time. We extensively use state machines to prevent conflicts over priorities in low-level behaviors
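
    The dead-reckoning update from item 1 looks roughly like this; the field names are illustrative, not our actual robot class:

    //integrate encoder-derived distance along the IMU heading
    double fieldX, fieldY; //position in meters relative to the starting point
    void updatePose(double metersTravelled, double headingRadians) {
        fieldX += metersTravelled * Math.cos(headingRadians);
        fieldY += metersTravelled * Math.sin(headingRadians);
    }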

    Driver Controlled Enhancements:

    1. If the lift has been raised, movement by the jewel arm is blocked to avoid a collision
    2. The robot has a slow mode, which lets our drivers maneuver precisely and pick up glyphs easily.
    3. The robot also has a turbo mode, activated when the bumper is pressed, allowing the driver to quickly traverse the field.
    Autonomous Field