Articles by tag: software

    Balancing and PID

    Balancing and PID By Tycho

    Task: Test and improve the PID system and balance code

    We're currently testing code to give Argos a balancing system so that we can demo it. This is also a test of PID control on the new REV Robotics Expansion Hubs, which we plan to switch to this season if they prove reliable. Example code is below.

    public void BalanceArgos(double Kp, double Ki, double Kd, double pwr, double currentAngle, double targetAngle)
     {
         //sanity check - exit balance mode if we are out of recovery range (not yet implemented)
         if (isBalanceMode()){ //only balance in the right mode

             setHeadTilt(nod);

             //servo steering should be locked straight ahead
             servoSteerFront.setPosition(.5);
             servoSteerBack.setPosition(0.5);

             //double pwr = clampMotor((roll-staticBalance)*-.05);

             balancePID.setOutputRange(-.5,.5);
             balancePID.setPID(Kp, Ki, Kd);
             balancePID.setSetpoint(staticBalance); //note: the targetAngle and pwr parameters are not used yet
             balancePID.enable();
             balancePID.setInput(currentAngle);
             double correction = balancePID.performPID();

             logger.UpdateLog(Long.toString(System.nanoTime()) + ","
                     + Double.toString(balancePID.getDeltaTime()) + ","
                     + Double.toString(currentAngle) + ","
                     + Double.toString(balancePID.getError()) + ","
                     + Double.toString(balancePID.getTotalError()) + ","
                     + Double.toString(balancePID.getDeltaError()) + ","
                     + Double.toString(balancePID.getPwrP()) + ","
                     + Double.toString(balancePID.getPwrI()) + ","
                     + Double.toString(balancePID.getPwrD()) + ","
                     + Double.toString(correction));

             timeStamp=System.nanoTime();
             motorFront.setPower(correction);
         } //end balance mode check
     }

    
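    balancePID is our homegrown PID class rather than a library, so for anyone following along, here is a minimal sketch of the kind of update performPID() runs each cycle, shown as a method excerpt like the code above. The field names are assumptions inferred from the getters in the logging call; the real class may differ in details:

        //minimal sketch of a PID update - assumes class fields implied by the getters above
        public double performPID() {
            long now = System.nanoTime();
            double deltaTime = (now - prevTime) / 1E9; //seconds since the last cycle
            if (deltaTime <= 0) deltaTime = 1e-3;      //guard the very first cycle
            prevTime = now;

            double error = setpoint - input;       //feeds the P term
            totalError += error * deltaTime;       //accumulates for the I term
            double deltaError = error - prevError; //change used by the D term
            prevError = error;

            double output = Kp * error + Ki * totalError + Kd * (deltaError / deltaTime);
            return Math.max(minOutput, Math.min(maxOutput, output)); //honors setOutputRange()
        }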

    REV Robot Reveal

    REV Robot Reveal By Tycho, Austin, Charlotte, Omar, Evan, and Janavi

    Argos V2 - a REV Robot Reveal

    This video was pulled from Argos's visits to the NSTA STEM Expo in Kissimmee, FL; the path of eclipse totality in Tennessee; and, around North Texas, the Dallas Makerspace, the Southwest Center Mall, Southside on Lamar, and the Frontiers of Flight Museum. We hope you find it interesting:

    Machine Vision Goals – Part 1

    Machine Vision Goals – Part 1 By Tycho

    We’ve been using machine vision for a couple of years now and have a plan to use it in Relic Rescue for a number of things. I mostly haven’t gotten to it because college application deadlines have a higher priority for me this year. But since we already have experience with color blob tracking in OpenCV and Vuforia tracking, I hope this won’t be too difficult. We have 5 different things we want to try:

    VuMark decode – this is obvious since it gives us a chance to regularly get the glyph crypto bonus. From looking at the code, it seems to be a single line different from the Vuforia tracking code we’ve already got. It’s probably a good idea to signal the completed decode by flashing our lights or something like that. That will make it more obvious to judges and competitors.
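
    For reference, the decode really is about one line on top of the standard Vuforia setup. Here relicTemplate is the VuforiaTrackable we already load for tracking, and the lights call is a hypothetical placeholder for whatever signal we pick:

        import org.firstinspires.ftc.robotcore.external.navigation.RelicRecoveryVuMark;

        RelicRecoveryVuMark vuMark = RelicRecoveryVuMark.from(relicTemplate);
        if (vuMark != RelicRecoveryVuMark.UNKNOWN) {
            //vuMark is now LEFT, CENTER, or RIGHT - the glyph column for the crypto bonus
            flashSignalLights(); //hypothetical - make the decode obvious to judges
        }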

    Jewel Identification – most teams seem to be using the REV color sensor on their jewel displacement arm. We’ll probably start out doing that too, but I’d also like to use machine vision to identify the correct jewel, just because we can. Just looking at the arrangement, we should be able to get both the jewels and the Vuforia target in the same frame at the beginning of autonomous.

    Alignment – it is not legal to extend a part of the robot outside of the 18” dimensions during match setup. So we can’t put the jewel arm out to make sure it is between the jewels. But there is nothing preventing us from using the camera to assist with alignment. We can even draw on the screen where the jewels should appear, like inside the orange box below. This will also help with Jewel ID – we won’t have to hunt for the relevant pixels – we can just compare the average hue of the two regions around the wiffle balls.
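
    We haven't written this yet, but a sketch of that hue comparison in OpenCV might look like the following. leftBox and rightBox are hypothetical hand-placed regions matching where the jewels should appear in the preview:

        import org.opencv.core.*;
        import org.opencv.imgproc.Imgproc;

        //compare the average hue of two hand-placed jewel regions (the rects are assumptions)
        public static boolean leftJewelIsRed(Mat rgbFrame, Rect leftBox, Rect rightBox) {
            Mat hsv = new Mat();
            Imgproc.cvtColor(rgbFrame, hsv, Imgproc.COLOR_RGB2HSV);
            double leftHue = Core.mean(hsv.submat(leftBox)).val[0];
            double rightHue = Core.mean(hsv.submat(rightBox)).val[0];
            //red sits near 0 and wraps near 180 on OpenCV's 0-179 hue scale,
            //so treat "redness" as distance from the nearest wrap point
            double leftRedness = Math.min(leftHue, 180 - leftHue);
            double rightRedness = Math.min(rightHue, 180 - rightHue);
            return leftRedness < rightRedness;
        }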

    Autonomous Deposition – this is the most ambitious use for machine vision. The dividers on the crypto boxes should make pretty clear color blob regions. If we can find the center points between these regions, we should be able to code an automatically centering glyph-depositing behavior.

    Autonomous glyph collection – ok this is actually harder. Teams seem to spend most of their time retrieving glyphs. Most of that time seems to be spent getting the robot and the glyphs square with each other. Our drivers have a lot of trouble with this even though we have a very maneuverable mecanum drive. What if we could create a behavior that would automatically align the robot to a target glyph on approach? With our PID routines we should be able to do this pretty efficiently. The trouble is we need to figure out the glyph orientation by analyzing frames on approach. And it probably means shape analysis – something we’ve never done before. If we get to this, it won’t be until pretty late in the season. Maybe we’ll come up with a better mechanical approach to aligning glyphs with our bot and this won’t be needed.

    Tools for Experimenting

    Machine vision folks tend to think about image analysis as a pipeline that strings together different image processing algorithms in order to understand something about the source image or video feed. These algorithms are often things like convolution filters that isolate different parts of the image. You have to decide which stages to put into a pipeline depending on what that pipeline is meant to detect or decide. To make it easier to experiment, it’s good to use tools that let you create these pipelines and play around with them before you try to hard-code it into your robot.

    I've been using a tool called ImagePlay. http://imageplay.io/ It's open source and based on OpenCV. I used it to create a pipeline that has some potential to help navigation in this year's challenge. Since ImagePlay is open source, once you have a pipeline you can figure out the calls it makes to OpenCV to construct the stages. It's based on the C++ implementation of OpenCV, so we'll have to translate that to Java for Android. It has a very nice pipeline editor that supports branching. The downside is that this tool is buggy and doesn't have anywhere near the number of filters and algorithms that RoboRealm supports.

    RoboRealm is what we wanted to use. We’ve been pretty closely connected with the Dallas Personal Robotics Group (DPRG) for years, and Carl Ott is a member who has taught a couple of sessions on using RoboRealm to solve the club’s expert line following course. Based on his recommendation we contacted the RoboRealm folks, and they gave us a 5-user commercial license, which I think is valued at $2,500. They seemed happy to support FTC teams.

    RoboRealm is much easier to experiment with and has great documentation, so we now have an improved pipeline. It's going to take more work to figure out how to implement that pipeline in OpenCV, because it’s not always clear what a particular stage in RoboRealm does at a low level. But this improved pipeline isn’t all that different from the ImagePlay version.

    Candidate Pipeline

    So here is a picture of a red cryptobox sitting against a wall with a bunch of junk in the background. This image ended up upside down, but that doesn’t matter for just experimenting. I wanted a challenging image, because I want to know early if we need to have a clean background for the cryptoboxes. If so, we might need to ask the FTA if we can put an opaque background behind the cryptoboxes:

    Stage 1 – Color Filter – this selects only the reddest pixels

    Stage 2 – GreyScale – we don’t need the color information anymore, and this reduces the data size

    Stage 3 – Flood Fill – This simplifies a region by flooding it with the average color of nearby pixels. It’s the same thing that happens when you use the posterize effect in Photoshop. It also tends to remove some of the background noise.

    Stage 4 – Auto Threshold – Turns the image into a black-and-white image with no grey values, based on a thresholding algorithm that only the RoboRealm folks know (presumably something along the lines of Otsu’s method).

    Stage 5 – Blob Size – A blob is a set of connected pixels with a similar value. Here we are limiting the output to the 4 largest blobs, because normally there are 4 dividers visible. In this case there is an error: the small blob on the far right is classified as a divider even though it is just some other red thing in the background. That happened because the leftmost divider was mostly cut out of the frame and wasn’t lit very well, so it ended up being erased by this pipeline, which freed up a slot for the background blob.

    Stages 6 & 7 – Moment Statistics – Moments are calculations that can help to classify parts of images. We’ve used Hu Moments since our first work with machine vision on our robot named Argos. They can calculate the center of a blob (center of gravity), its eccentricity, and its area. Here the center of gravity is the little red square at the center of each blob. Now we can calculate the midpoint between each blob to find the center of a column and use that as a navigation target if we can do all this in real-time. We may have to reduce image resolution to speed things up.
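
    To get a head start on the OpenCV translation, here is roughly how those stages might map to OpenCV calls on Android. This is a sketch, not our final implementation: every constant is a placeholder to tune, and the flood fill stage is approximated with a median blur:

        import org.opencv.core.*;
        import org.opencv.imgproc.Imgproc;
        import org.opencv.imgproc.Moments;
        import java.util.ArrayList;
        import java.util.List;

        //rough OpenCV translation of the RoboRealm pipeline - all constants are guesses to tune
        public class DividerFinder {
            public static List<Point> findDividerCenters(Mat rgb) {
                Mat hsv = new Mat(), mask = new Mat();
                //Stage 1: keep only the reddest pixels (may need a second range near hue 180)
                Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);
                Core.inRange(hsv, new Scalar(0, 100, 100), new Scalar(10, 255, 255), mask);
                //Stages 2-4: mask is already single-channel; smooth it and force pure black/white
                Imgproc.medianBlur(mask, mask, 9);
                Imgproc.threshold(mask, mask, 128, 255, Imgproc.THRESH_BINARY);
                //Stage 5: find blobs and drop the small ones
                List<MatOfPoint> contours = new ArrayList<>();
                Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
                //Stages 6-7: centers of gravity from moments
                List<Point> centers = new ArrayList<>();
                for (MatOfPoint c : contours) {
                    if (Imgproc.contourArea(c) < 500) continue; //placeholder blob-size cutoff
                    Moments m = Imgproc.moments(c);
                    centers.add(new Point(m.m10 / m.m00, m.m01 / m.m00));
                }
                return centers; //midpoints between adjacent centers give the column targets
            }
        }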

    Working on Autonomous

    Working on Autonomous By Tycho

    Task: Create a temporary autonomous for the bot

    We attempted to create an autonomous for our first scrimmage. It aimed to make the robot drive forward into the safe zone. However, we forgot to align the robot, and it failed at the scrimmage.

    Instead of talking through the code like usual, we've documented the code's main functions inline so that anyone can understand what it does without prior knowledge of coding.

     public void autonomous2 (){
    
            switch(autoState){
            case 0: //strafe the robot toward the safe zone
                    if (robot.driveStrafe(false, .60, .35)) {
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
            case 1: //drive forward into the safe zone
                    if (robot.driveForward(false, .25, .35)) {
                        autoTimer = futureTime(1f);
                        robot.resetMotors(true);
                        autoState++;
                    }
    
                    break;
            case 2: //toggle the glyph grip
    
                    robot.glyphSystem.ToggleGrip();
                    autoTimer = futureTime(1f);
    
                    robot.resetMotors(true);
                    autoState++;
                    break;
                case 3: //back off of the balance stone
                    if (robot.driveForward(true, .10, .35)) {
                        autoTimer = futureTime(3f);
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                case 4: //re-orient the robot
                    autoState++;
                    break;
                case 5: //drive to proper crypto box column based on vuforia target
                    autoState++;
                    break;
                case 6: // turn towards crypto box
                    autoState++;
                    break;
                case 7: //drive to crypto box
                    autoState++;
                    break;
                case 8: //deposit glyph
                    autoState++;
                    break;
                case 9: //back away from crypto box
                    autoState++;
                    break;
            }
        }
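
    The state machine above leans on a futureTime() helper we haven't shown. Given how autoTimer gets compared against System.nanoTime() in the full autonomous further down, it's presumably just a seconds-to-nanoseconds conversion, something like:

        //returns a nanosecond timestamp the given number of seconds from now
        long futureTime(float seconds) {
            return System.nanoTime() + (long) (seconds * 1e9);
        }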
    

    Adding Code Fixes to the Robot

    Adding Code Fixes to the Robot By Tycho

    Task: Add code updates

    These commits add the following functionality:

    • Pre-game logic - joystick control
    • Fix PID settings
    • Autonomous resets motor
    • Jewel Arm functionality
    • Autonomous changes
    • Tests servos

    These commits improve quality of life for our drivers, let our robot function more smoothly both in autonomous and during TeleOp, allow us to score the jewels, and let us test servos.

    Jewel Arm


    package org.firstinspires.ftc.teamcode;
    
    import com.qualcomm.robotcore.hardware.NormalizedColorSensor;
    import com.qualcomm.robotcore.hardware.Servo;
    
    /**
     * Created by 2938061 on 11/10/2017.
     */
    
    public class JewelArm {
    
        private Servo servoJewel;
        private NormalizedColorSensor colorJewel;
        private int jewelUpPos;
        private int jewelDownPos;
    
        public JewelArm(Servo servoJewel, NormalizedColorSensor colorJewel, int jewelUpPos, int jewelDownPos){
            this.servoJewel = servoJewel;
            this.colorJewel = colorJewel;
            this.jewelUpPos = jewelUpPos;
            this.jewelDownPos = jewelDownPos;
        }
    
        public void liftArm(){
            servoJewel.setPosition(ServoNormalize(jewelUpPos));
        }
        public void lowerArm(){
            servoJewel.setPosition(ServoNormalize(jewelDownPos));
        }
    
        public static double ServoNormalize(int pulse){
            double normalized = (double)pulse;
        return (normalized - 750.0) / 1500.0; //convert MR servo controller pulse width (750-2250) to a 0 - 1 scale
        }
    
    }
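
    Construction and use are straightforward. The pulse widths below are hypothetical; the real values come from tuning on the robot (per ServoNormalize, 750 maps to 0.0 and 2250 to 1.0):

        //hypothetical positions - the real pulse widths come from tuning
        JewelArm jewel = new JewelArm(servoJewel, colorJewel, 1600, 2200);
        jewel.lowerArm(); //deploy the arm before reading the color sensor
        // ... read colorJewel and decide which jewel to knock ...
        jewel.liftArm();  //retract before driving away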
    

    Autonomous

    		public void autonomous(){
            switch(autoState){
                case 0: //scan vuforia target and deploy jewel arm
                    robot.jewel.lowerArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        relicCase = getRelicCodex();
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break;
                case 1: //small turn to knock off jewel
                    if ((isBlue && jewelMatches)||(!isBlue && !jewelMatches)){
                        if(robot.RotateIMU(10, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    else{
                        if(robot.RotateIMU(350, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    break;
                case 2: //lift jewel arm
                    robot.jewel.liftArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break; //without this break we fall straight through to case 3
                case 3: //turn parallel to the wall
                    if(isBlue){
                        if(robot.RotateIMU(270, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 4: //drive off the balance stone
                    if(robot.driveForward(true, .3, .5)) {
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                case 5: //re-orient robot
                    if(isBlue){
                        if(robot.RotateIMU(270, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 6: //drive to proper crypto box column based on vuforia target
                    switch (relicCase) {
                        case 0:
                            if(robot.driveForward(true, .5, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 1:
                            if(robot.driveForward(true, .75, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 2:
                            if(robot.driveForward(true, 1.0, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                    }
                    break;
                case 7: //turn to crypto box
                    if(isBlue){
                        if(robot.RotateIMU(315, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(45, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 8: //deposit glyph
                    if(robot.driveForward(true, 1.0, .50)) {
                        robot.resetMotors(true);
                        robot.glyphSystem.ReleaseGrip();
                        autoState++;
                    }
                    break;
                case 9: //back away from crypto box
                    if(robot.driveForward(false, .5, .50)){
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                default:
                    robot.resetMotors(true);
                    autoState = 0;
                    active = false;
                    state = 0;
                    break;
            }
        }
        public void autonomous2 (){
    
            switch(autoState){
                case 0: //scan vuforia target and deploy jewel arm
                    robot.jewel.lowerArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        relicCase = getRelicCodex();
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break;
                case 1: //small turn to knock off jewel
                    if ((isBlue && jewelMatches)||(!isBlue && !jewelMatches)){
                        if(robot.RotateIMU(10, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    else{
                        if(robot.RotateIMU(350, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    break;
                case 2: //lift jewel arm
                    robot.jewel.liftArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break; //without this break we fall straight through to case 3
                case 3: //turn parallel to the wall
                    if(isBlue){
                        if(robot.RotateIMU(270, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 4: //drive off the balance stone
                    if(robot.driveForward(true, .3, .5)) {
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                case 5: //re-orient robot
                    if(isBlue){
                        if(robot.RotateIMU(270, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 6: //drive to proper crypto box column based on vuforia target
                    switch (relicCase) {
                        case 0:
                            if(robot.driveStrafe(true, .00, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 1:
                            if(robot.driveStrafe(true, .25, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 2:
                            if(robot.driveStrafe(true, .50, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                    }
                    break;
                case 7: //turn to crypto box
                    if(isBlue){
                        if(robot.RotateIMU(215, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(135, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 8: //deposit glyph
                    if(robot.driveForward(true, 1.0, .50)) {
                        robot.resetMotors(true);
                        robot.glyphSystem.ReleaseGrip();
                        autoState++;
                    }
                    break;
                case 9: //back away from crypto box
                    if(robot.driveForward(false, .5, .50)){
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                default:
                    robot.resetMotors(true);
                    autoState = 0;
                    active = false;
                    state = 0;
                    break;
            }
        }
    

    Code Fixes and Readability

    Code Fixes and Readability By Tycho

    Task: Make the code more readable

    So, we can't include all the code changes we made today, but all of it involved cleaning up our code, removing extra functions we didn't use, refactoring, adding comments, and making it more readable for the tournament. We had almost 80k deletions and 80k additions. This marks a turning point in the readability of our code, so that less experienced team members can read it. We went through methodically and commented each function and method for future readability, as we will have to pass the codebase on to next year's team.

    Field Oriented Control

    Field Oriented Control By Abhi

    Task: Implement a drive system depending on field perspective

    We are always looking for ways to make it easier to drive. One way to do that is to modify our code so that no matter where the front of the robot is pointing, moving the joystick in a certain direction will move the entire robot in that direction. This lets our drivers think only about the field and align with the cryptobox more easily. I knew that some FRC teams used the WPILib libraries to implement this sort of drive. Reading their code, I figured out how to implement field-oriented drive in our codebase.

    The code began by getting the joystick axis readings. This data then needed to be processed to account for the robot's heading, which required a special method, sketched below.

    Some math needed to be done for the angle. This is no easy feat, so I will explain it in case any other teams want to use this code. The first thing we need to do is find the sine and cosine of the heading. This allows us to find the power along the x-axis and the y-axis relative to the angle.

    Now that the trig is done, we needed to apply these values to the joystick axes. In this method, x represented the forward direction and y represented the strafing direction. That is why out[0], the output forward direction, takes the joystick's y reading and modifies it with the x reading, so that the joystick inputs get converted to their respective field axes. The same applies to the strafing direction.
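
    Since the original listing didn't survive the post, here is a sketch of that conversion method as described. The method name is ours, and the exact signs depend on your heading convention:

        //rotate the joystick vector by the robot's heading so "forward" on the
        //stick always means "away from the driver" on the field.
        //Per the text above, x is the forward axis and y is the strafe axis.
        double[] fieldOriented(double x, double y, double headingRadians) {
            double cos = Math.cos(headingRadians);
            double sin = Math.sin(headingRadians);
            double[] out = new double[2];
            out[0] = y * sin + x * cos; //output forward direction
            out[1] = y * cos - x * sin; //output strafe direction
            return out;
        }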

    Going back to the original method, the directions output from the method are applied to the actual powers of the motors. Before this happens, in case any dimension is over 1.0 (the max speed), it needs to be scaled down to 1. This is what the normalize and clampMotors methods do. Therefore, in the end, the code allows drivers to control the bot relative to the field.
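
    For completeness, normalize() presumably mirrors the WPILib version we borrowed the idea from: find the largest magnitude and scale everything down by it if it exceeds 1.0. A sketch:

        //scale all wheel powers proportionally so the largest magnitude is at most 1.0
        void normalize(double[] wheelSpeeds) {
            double max = 0;
            for (double s : wheelSpeeds) max = Math.max(max, Math.abs(s));
            if (max > 1.0) {
                for (int i = 0; i < wheelSpeeds.length; i++) wheelSpeeds[i] /= max;
            }
        }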

    Next Steps:

    Now the drive team just needs to test the code out and see what happens.

    Controller Mapping

    Controller Mapping By Janavi

    Task: Map the controller layout

    At this point, we are training the next generation of drivers on our team, and since we have so many buttons with so many different functions, it can be difficult for new drivers to determine which button does what. So Karina and I created a map of the controller. By doing this, we not only help drivers find the button they need, but also help the coders understand the wants and needs of the drivers. Often, when we are coding, we will assign a function to any available button just to test whether it works. But we frequently don't change the button assignment after that, and then there are either far too many buttons for the driver to easily control the robot, or the buttons are too far apart for easy access. With this map, the drivers were able to sit down with the coders and lay out the most effective controller scheme together.

    Next Steps:

    We have planned this as part of our post-mortem discussion, along with a discussion of what could have been done to improve the robot's functions in code. We will also sit down with the controller map and determine whether any of the buttons can be switched for easier driver control. This will not only lead to better, more efficient driving but will also lead to better communication between groups.

    Importance of Documentation

    Importance of Documentation By Abhi and Tycho

    Task: Explain commits

    As explained in a previous post, we were having many issues with git commits and fixing the errors in them. After a lot of merge conflicts, we had to fix all the commits without knowing exactly what had been changed in the code. Part of the reason this was so hard was our lack of good naming conventions. Though we always try to write a good title and description for every commit, this doesn't always happen. This is precisely why it is important to check in all changes at every session with good descriptions. Someone had to spend their time mechanically resolving merge conflicts without understanding the intent behind the code, so it took them longer and they may have made mistakes, an issue that good documentation would have prevented in the first place.

    This post is dedicated to explaining some of the errors and what the commits' true intentions were.

    Stuff:

    That one is mostly about code for the 3rd servo in the gripper open/close methods. It created the servo in Pose and added code for it in GlyphSystem2.

    4a6b7dbfff573a72bfee2f7e481654cb6c26b6b2:

    This one tuned the field-oriented code until it worked. There were some errors with arrays in the way power was fed to the motors (a null pointer exception), so I (Abhi) had to fix that. I also made some touch-up edits to formatting in the methods. After all this, Tycho made (or may have edited an existing) method in Pose for the Vuforia demo. Minor changes were made to account for this.

    c8ffc8592cd1583e3b71c39ba5106d48da887c66:

    The first part was all Argos edits at the museum to make it functional and fine-tune some measurements. The second part involved the conveyor belt flipper. Tycho changed the dpad up and down to return to the home position rather than carry out the complete motion (not sure if this carried over in all the commit mess, but it was done in theory). Driver practice will have to confirm the changes.

    Next Steps

    Don't name things badly so this doesn't happen.

    Autonomous Updates, Multi-glyph

    Autonomous Updates, Multi-glyph By Abhi

    Task: Score extra autonomous glyphs

    At super regionals, we saw all the good teams running multi-glyph autonomi. In fact, Viperbots Hydra, the winning alliance captain, had a 3-glyph autonomous. I believed Iron Reign could get some of this 100-point autonomous action, so I sat down to create a 2-glyph autonomous. We now have 3 autonomi, one of which is multi-glyph.

    I made a new method called autonomous3(). For the starting settings (like driving off the balancing stone and jewel points), I copied code from our existing autonomous program. After that, I realized that 10 seconds of the autonomous period had already been used by the time the robot had driven off the stone. That led me to think about ways to optimize autonomous after that point. I realized that if the gripper deployed as the robot was aligning itself with the balancing stone, it would save a lot of time. I also sped up the drive train speeds for maximum efficiency. Getting the fix right took many runs, though.

    First time through, I ran the code and nothing happened. I realized that I forgot to call the actual state machine in the main loop. Dumb mistake, quick fix.

    Second run: The robot drove off the balancing stone properly and was ready to pick up extra glyphs. Unfortunately, I had flipped the motor directions, so the robot rammed into the cryptobox instead of driving into the glyph pit. Another quick fix.

    Third run: The robot drove off the stone and into the glyph pit. However, it almost went into the blue zone (I was testing from the red side). Also, the robot would rotate while in the glyph pit, causing glyphs to get under the wiring and pull stuff out. I had to rewire a couple of things, then I went back to coding.

    Fourth run: The robot drove off stone and into pit and collected one more glyph. The issue was that once the glyph was collected, the bot kept driving forward because I forgot to check the speeds again.

    Fifth run: All the initial motions worked. However, I realized that the robot didn't strafe off as far as I needed it to reach the glyph pit. I added about .3 meters to the robot distance and tested again.

    Sixth run: I don't know if anyone says the 6th time is the charm, but it was definitely a successful one for me. The robot did everything correctly and placed the glyph in the cryptobox. The only issue I had was that the robot kept backing away and ramming the cryptobox at the end of auto. I fixed this easily by adding another autoState++ to the code.

    Before I made the fix after the 6th run, I decided to take a wonderful video of the robot moving. It is linked below.

    Next Steps:

    Everything is ready for a multi-glyph autonomous. However, the robot doesn't yet pick the correct column based on the decoded codex. I need to implement that with IMU angles before Champs.

    Autonomous Updates, Multiglyph Part 2

    Autonomous Updates, Multiglyph Part 2 By Abhi, Karina, and Tycho

    Task: Develop multiglyph for far Stone

    We had a functional autonomous for the balancing stone close to the audience. However, chances are that our alliance partner would want that same stone, since they could get more glyphs during autonomous. This meant we needed a multi-glyph autonomous for the far balancing stone. We went on an adventure to make this happen.

    We programmed the robot to drive off the balancing stone and deploy the grippers as it went. To ensure the robot was off the stone before deploying, we used the roll sensor in the REV hub to determine whether the robot was sitting flat on the ground. This let our autonomous absorb placement error on the balancing stone in the forward-backward direction relative to the field. Next, we made an IMU turn into the glyph pit to increase our chances of picking up a second glyph. Then we backed away and turned parallel to the cryptobox. The following motion drove the robot against the field perimeter for a sustained push, which corrected any accumulated heading error and squared the grippers to the field wall. Then the robot backs up and scores the glyphs. Here is a video of it working:
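
    The roll check itself is simple. A sketch, assuming we read roll in degrees from the REV hub's IMU and that the stone's ramp tips the robot by a noticeable amount (the threshold is a guess to tune):

        //deploy the grippers only once the robot is flat on the mat
        boolean isFlatOnGround(double rollDegrees) {
            return Math.abs(rollDegrees) < 2.0; //still on the stone's ramp otherwise
        }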

    Next Steps

    Now we are speeding auto up and correcting IMU angles.