Articles by tag: control

    Balancing and PID

    Balancing and PID By Tycho

    Task: Test and improve the PID system and balance code

    We're currently testing code to give Argos a balancing system so that we can demo it. This is also a test of our PID code on the new REV Robotics Expansion Hubs, which we plan to switch to this season if they prove reliable. Example code is below.

    public void BalanceArgos(double Kp, double Ki, double Kd, double pwr, double currentAngle, double targetAngle)
    {
        //sanity check - exit balance mode if we are out of recovery range
        //(the range check itself is omitted from this excerpt)

        if (isBalanceMode()){ //only balance in the right mode

            setHeadTilt(nod);

            //servo steering should be locked straight ahead
            servoSteerFront.setPosition(0.5);
            servoSteerBack.setPosition(0.5);

            //double pwr = clampMotor((roll-staticBalance)*-.05);

            balancePID.setOutputRange(-.5, .5);
            balancePID.setPID(Kp, Ki, Kd);
            balancePID.setSetpoint(staticBalance);
            balancePID.enable();
            balancePID.setInput(currentAngle);
            double correction = balancePID.performPID();

            //log each PID term so we can tune against real data
            logger.UpdateLog(Long.toString(System.nanoTime()) + ","
                    + Double.toString(balancePID.getDeltaTime()) + ","
                    + Double.toString(currentAngle) + ","
                    + Double.toString(balancePID.getError()) + ","
                    + Double.toString(balancePID.getTotalError()) + ","
                    + Double.toString(balancePID.getDeltaError()) + ","
                    + Double.toString(balancePID.getPwrP()) + ","
                    + Double.toString(balancePID.getPwrI()) + ","
                    + Double.toString(balancePID.getPwrD()) + ","
                    + Double.toString(correction));

            timeStamp = System.nanoTime();
            motorFront.setPower(correction);
        }
    }

    PID Calibration and Testing

    PID Calibration and Testing By Tycho

    Task: Allow user to change PID coefficients from the controller

    To let each driver create their own settings, we're designing a way to tune the PID coefficients directly from the controller. This also helps us debug the robot.

    public void PIDTune(PIDController pid, boolean pidIncrease, boolean pidDecrease, boolean magnitudeIncrease, boolean magnitudeDecrease, boolean shouldStateIncrement) {
     if (shouldStateIncrement) {
      pidTunerState = stateIncrement(pidTunerState, 0, 2, true);
     }
     if (magnitudeIncrease) {
      pidTunerMagnitude *= 10;
     }
     if (magnitudeDecrease) {
      pidTunerMagnitude /= 10;
     }
     double dir;
 if (pidIncrease) dir = 1;
 else if (pidDecrease) dir = -1;
 else dir = 0;
     switch (pidTunerState) {
      case 0:
    pid.setPID(pid.getP() + pidTunerMagnitude * dir, pid.getI(), pid.getD());
       break;
      case 1:
    pid.setPID(pid.getP(), pid.getI() + pidTunerMagnitude * dir, pid.getD());
       break;
      case 2:
    pid.setPID(pid.getP(), pid.getI(), pid.getD() + pidTunerMagnitude * dir);
       break;
     }
    }
    public double getPidTunerMagnitude() {
     return pidTunerMagnitude;
    }
    public int getPidTunerState() {
     return pidTunerState;
    }
    public int stateIncrement(int val, int minVal, int maxVal, boolean increase) {
     if (increase) {
      if (val == maxVal) {
       return minVal;
      }
      val++;
      return val;
     } else {
      if (val == minVal) {
       return maxVal;
      }
      val--;
      return val;
     }
    }
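
    As a rough illustration of how this plugs into TeleOp (the button assignments here are hypothetical, not our actual mapping, and a real loop would edge-detect the buttons so one press doesn't fire every cycle):

    //hypothetical TeleOp wiring for the PID tuner
    PIDTune(balancePID, gamepad2.dpad_up, gamepad2.dpad_down,
            gamepad2.right_bumper, gamepad2.left_bumper, gamepad2.x);
    //surface the tuner state so the driver can see which term is selected
    telemetry.addData("PID tuner", "term %d, magnitude %.4f",
            getPidTunerState(), getPidTunerMagnitude());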
    

    Testing Materials

    Testing Materials By Austin, Evan, and Tycho

    Task: Test Materials for V2 Gripper

    Though our current gripper is working sufficiently, there are some issues we would like to improve in our second version. The mounting system is unstable and easily comes out of alignment because the REV rail keeps bending. Another issue we've encountered is the servo pulling the grippers so that they begin to cave inwards, releasing any blocks held at the bottom. By far the biggest problem is our intake: our drivers have to align the robot with the block so precisely before stacking it that it eats up a majority of our game time. However, this gripper has some advantages, such as light weight and adjustability, that we would like to carry over into the second version.

      We tested out a few different materials:
    • Silicone Baking Mats - The mats were a very neutral option because they didn't have any huge advantages or disadvantages (other than not adhering well). They could have been used, but there were better options.
    • Shelf Liner - It was far too slippery. Also, when we thought about actually making the grippers, there was no good way to attach it to them. Using this material would have been too much work for little gain.
    • Baking Pan Lining (picked) - It was made out of durable rubber but is still very malleable, which is a big advantage. We need the grippers to compress and 'grip' the block without causing any damage.
    • Rubber Bands on Wheels - This material was closest to our original version and, unexpectedly, carried over one of its problems: it still requires very specific orientations to pick up blocks, which would defeat the purpose of this entire task.

    This feeds into our future grabber design, which will need to be relatively light, as our string is currently breaking under the stress of the weight. The material must also have good shear and tensile strength, as the grabber will have rotating arms that move in and out to grasp blocks. We're also replacing the Tetrix parts with REV, as they're smaller and a little lighter, with the additional bonus of more mounting points.

    Machine Vision Goals – Part 1

    Machine Vision Goals – Part 1 By Tycho

    We’ve been using machine vision for a couple of years now and have a plan to use it in Relic Rescue for a number of things. I mostly haven’t gotten to it because college application deadlines have a higher priority for me this year. But since we already have experience with color blob tracking in OpenCV and Vuforia tracking, I hope this won’t be too difficult. We have 5 different things we want to try:

    VuMark decode – this is obvious since it gives us a chance to regularly get the glyph crypto bonus. From looking at the code, it seems to be a single line different from the Vuforia tracking code we’ve already got. It’s probably a good idea to signal the completed decode by flashing our lights or something like that. That will make it more obvious to judges and competitors.

    Jewel Identification – most teams seem to be using the REV color sensor on their jewel displacement arm. We'll probably start out doing that too, but I'd also like to use machine vision to identify the correct jewel. Just because we can. Just looking at the arrangement, we should be able to get both the jewels and the Vuforia target in the same frame at the beginning of autonomous.

    Alignment – it is not legal to extend a part of the robot outside of the 18” dimensions during match setup. So we can’t put the jewel arm out to make sure it is between the jewels. But there is nothing preventing us from using the camera to assist with alignment. We can even draw on the screen where the jewels should appear, like inside the orange box below. This will also help with Jewel ID – we won’t have to hunt for the relevant pixels – we can just compare the average hue of the two regions around the wiffle balls.
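
    A minimal sketch of that region comparison, assuming OpenCV's Java bindings; the ROI rectangles are hypothetical stand-ins for wherever we end up drawing the overlay:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    //compare the average hue of two regions we expect to contain the jewels
    public static boolean leftJewelIsRed(Mat rgbFrame, Rect leftRoi, Rect rightRoi) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(rgbFrame, hsv, Imgproc.COLOR_RGB2HSV);
        double leftHue = Core.mean(hsv.submat(leftRoi)).val[0];
        double rightHue = Core.mean(hsv.submat(rightRoi)).val[0];
        //OpenCV hue runs 0-179: red sits near 0 (wrapping at 179), blue near 120,
        //so a real version should handle the red wraparound before comparing
        return leftHue < rightHue;
    }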

    Autonomous Deposition – this is the most ambitious use for machine vision. The dividers on the cryptoboxes should make pretty clear color blob regions. If we can find the center points between these regions, we should be able to code an automatically centering glyph-depositing behavior.

    Autonomous glyph collection – ok this is actually harder. Teams seem to spend most of their time retrieving glyphs. Most of that time seems to be spent getting the robot and the glyphs square with each other. Our drivers have a lot of trouble with this even though we have a very maneuverable mecanum drive. What if we could create a behavior that would automatically align the robot to a target glyph on approach? With our PID routines we should be able to do this pretty efficiently. The trouble is we need to figure out the glyph orientation by analyzing frames on approach. And it probably means shape analysis – something we’ve never done before. If we get to this, it won’t be until pretty late in the season. Maybe we’ll come up with a better mechanical approach to aligning glyphs with our bot and this won’t be needed.

    Tools for Experimenting

    Machine vision folks tend to think about image analysis as a pipeline that strings together different image processing algorithms in order to understand something about the source image or video feed. These algorithms are often things like convolution filters that isolate different parts of the image. You have to decide which stages to put into a pipeline depending on what that pipeline is meant to detect or decide. To make it easier to experiment, it’s good to use tools that let you create these pipelines and play around with them before you try to hard-code it into your robot.

    I've been using a tool called ImagePlay. http://imageplay.io/ It's open source and based on OpenCV. I used it to create a pipeline that has some potential to help navigation in this year's challenge. Since ImagePlay is open source, once you have a pipeline, you can figure out the calls it makes to OpenCV to construct the stages. It's based on the C++ implementation of OpenCV, so we'll have to translate that to Java for Android. It has a very nice pipeline editor that supports branching. The downside is that this tool is buggy and doesn't have anywhere near the number of filters and algorithms that RoboRealm supports.

    RoboRealm is what we wanted to use. We've been pretty closely connected with the Dallas Personal Robotics Group (DPRG) for years, and Carl Ott is a member who has taught a couple of sessions on using RoboRealm to solve the club's expert line following course. Based on his recommendation we contacted the RoboRealm folks and they gave us a 5-user commercial license. I think that's valued at $2,500. They seemed happy to support FTC teams.

    RoboRealm is much easier to experiment with, and they have great documentation, so we now have an improved pipeline. It's going to take more work to figure out how to implement that pipeline in OpenCV, because it's not always clear what a particular stage in RoboRealm does at a low level. But this improved pipeline isn't all that different from the ImagePlay version.

    Candidate Pipeline

    So here is a picture of a red cryptobox sitting against a wall with a bunch of junk in the background. This image ended up upside down, but that doesn’t matter for just experimenting. I wanted a challenging image, because I want to know early if we need to have a clean background for the cryptoboxes. If so, we might need to ask the FTA if we can put an opaque background behind the cryptoboxes:

    Stage 1 – Color Filter – this selects only the reddest pixels

    Stage 2 – GreyScale – We don't need the color information anymore, and this reduces the data size

    Stage 3 – Flood Fill – This simplifies a region by flooding it with the average color of nearby pixels, the same thing that happens when you use the posterize effect in Photoshop. This also tends to remove some of the background noise.

    Stage 4 – Auto Threshold – Turns the image into a B/W image with no grey values based on a thresholding algorithm that only the RoboRealm folks know.

    Stage 5 – Blob Size – A blob is a set of connected pixels with a similar value. Here we are limiting the output to the 4 largest blobs, because normally there are 4 dividers visible. In this case there is an error: the small blob on the far right is classified as a divider even though it is just some other red thing in the background. This happened because the leftmost column was mostly cut out of the frame and wasn't lit very well, so it ended up being erased by this pipeline.

    Stages 6 & 7 – Moment Statistics – Moments are calculations that can help to classify parts of images. We’ve used Hu Moments since our first work with machine vision on our robot named Argos. They can calculate the center of a blob (center of gravity), its eccentricity, and its area. Here the center of gravity is the little red square at the center of each blob. Now we can calculate the midpoint between each blob to find the center of a column and use that as a navigation target if we can do all this in real-time. We may have to reduce image resolution to speed things up.

    Working on Autonomous

    Working on Autonomous By Tycho

    Task: Create a temporary autonomous for the bot

    We attempted to create an autonomous for our first scrimmage. It aimed to make the robot drive forward into the safe zone. However, we forgot to align the robot, and it failed at the scrimmage.

    Instead of talking through the code like usual, we've documented its main functions so that anyone can understand what it does without prior knowledge of coding.

     public void autonomous2 (){
    
            switch(autoState){
                case 0: //strafe .6 meters
                    if (robot.driveStrafe(false, .60, .35)) {
    
                        robot.resetMotors(true);
                        autoState++;
                    }
                        break;
                case 1: //drive forward .25 meters
                    if (robot.driveForward(false, .25, .35)) {
                        autoTimer = futureTime(1f);
                        robot.resetMotors(true);
                        autoState++;
                    }
    
                    break;
                case 2: //toggle the gripper
    
                    robot.glyphSystem.ToggleGrip();
                    autoTimer = futureTime(1f);
    
                    robot.resetMotors(true);
                    autoState++;
                    break;
                case 3: //back off of the balance stone
                    if (robot.driveForward(true, .10, .35)) {
                        autoTimer = futureTime(3f);
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                case 4: //re-orient the robot
                    autoState++;
                    break;
                case 5: //drive to proper crypto box column based on vuforia target
                    autoState++;
                    break;
                case 6: // turn towards crypto box
                    autoState++;
                    break;
                case 7: //drive to crypto box
                    autoState++;
                    break;
                case 8: //deposit glyph
                    autoState++;
                    break;
                case 9: //back away from crypto box
                    autoState++;
                    break;
            }
        }
    

    Adding Code Fixes to the Robot

    Adding Code Fixes to the Robot By Tycho

    Task: Add code updates

    These commits add the following functionality:

    • Pre-game logic - joystick control
    • Fix PID settings
    • Autonomous resets motor
    • Jewel Arm functionality
    • Autonomous changes
    • Tests servos

    These commits improve quality of life for our drivers, let our robot function more smoothly in both autonomous and TeleOp, allow us to score the jewels, and let us test servos.

    Jewel Arm


    package org.firstinspires.ftc.teamcode;
    
    import com.qualcomm.robotcore.hardware.NormalizedColorSensor;
    import com.qualcomm.robotcore.hardware.Servo;
    
    /**
     * Created by 2938061 on 11/10/2017.
     */
    
    public class JewelArm {
    
        private Servo servoJewel;
        private NormalizedColorSensor colorJewel;
        private int jewelUpPos;
        private int jewelDownPos;
    
        public JewelArm(Servo servoJewel, NormalizedColorSensor colorJewel, int jewelUpPos, int jewelDownPos){
            this.servoJewel = servoJewel;
            this.colorJewel = colorJewel;
            this.jewelUpPos = jewelUpPos;
            this.jewelDownPos = jewelDownPos;
        }
    
        public void liftArm(){
            servoJewel.setPosition(ServoNormalize(jewelUpPos));
        }
        public void lowerArm(){
            servoJewel.setPosition(ServoNormalize(jewelDownPos));
        }
    
        public static double ServoNormalize(int pulse){
            double normalized = (double)pulse;
        return (normalized - 750.0) / 1500.0; //convert MR servo controller pulse width to a double on the 0 - 1 scale
        }
    
    }
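
    For example, a mid-scale pulse width of 1500 maps to (1500 - 750) / 1500 = 0.5, the servo's center position, while the 750 and 2250 endpoints map to 0 and 1.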
    

    Autonomous

    		public void autonomous(){
            switch(autoState){
                case 0: //scan vuforia target and deploy jewel arm
                    robot.jewel.lowerArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        relicCase = getRelicCodex();
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break;
                case 1: //small turn to knock off jewel
                    if ((isBlue && jewelMatches)||(!isBlue && !jewelMatches)){
                        if(robot.RotateIMU(10, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    else{
                        if(robot.RotateIMU(350, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    break;
                case 2: //lift jewel arm
                    robot.jewel.liftArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break; //without this break we'd fall straight into case 3
                case 3: //turn parallel to the wall
                    if(isBlue){
                        if(robot.RotateIMU(270, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 4: //drive off the balance stone
                    if(robot.driveForward(true, .3, .5)) {
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                case 5: //re-orient robot
                    if(isBlue){
                        if(robot.RotateIMU(270, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 6: //drive to proper crypto box column based on vuforia target
                    switch (relicCase) {
                        case 0:
                            if(robot.driveForward(true, .5, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 1:
                            if(robot.driveForward(true, .75, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 2:
                            if(robot.driveForward(true, 1.0, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                    }
                    break;
                case 7: //turn to crypto box
                    if(isBlue){
                        if(robot.RotateIMU(315, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(45, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 8: //deposit glyph
                    if(robot.driveForward(true, 1.0, .50)) {
                        robot.resetMotors(true);
                        robot.glyphSystem.ReleaseGrip();
                        autoState++;
                    }
                    break;
                case 9: //back away from crypto box
                    if(robot.driveForward(false, .5, .50)){
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                default:
                    robot.resetMotors(true);
                    autoState = 0;
                    active = false;
                    state = 0;
                    break;
            }
        }
        public void autonomous2 (){
    
            switch(autoState){
                case 0: //scan vuforia target and deploy jewel arm
                    robot.jewel.lowerArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        relicCase = getRelicCodex();
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break;
                case 1: //small turn to knock off jewel
                    if ((isBlue && jewelMatches)||(!isBlue && !jewelMatches)){
                        if(robot.RotateIMU(10, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    else{
                        if(robot.RotateIMU(350, .5)){
                            robot.resetMotors(true);
                            autoState++; //advance once the turn completes
                        }
                    }
                    break;
                case 2: //lift jewel arm
                    robot.jewel.liftArm();
                    autoTimer = futureTime(1.5f);
                    if(autoTimer < System.nanoTime()) {
                        jewelMatches = robot.doesJewelMatch(isBlue);
                        autoState++;
                    }
                    break; //without this break we'd fall straight into case 3
                case 3: //turn parallel to the wall
                    if(isBlue){
                        if(robot.RotateIMU(270, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 2.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 4: //drive off the balance stone
                    if(robot.driveForward(true, .3, .5)) {
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                case 5: //re-orient robot
                    if(isBlue){
                        if(robot.RotateIMU(270, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(90, 1.0)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 6: //drive to proper crypto box column based on vuforia target
                    switch (relicCase) {
                        case 0:
                            if(robot.driveStrafe(true, .00, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 1:
                            if(robot.driveStrafe(true, .25, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                        case 2:
                            if(robot.driveStrafe(true, .50, .35)) {
                                robot.resetMotors(true);
                                autoState++;
                            }
                            break;
                    }
                    break;
                case 7: //turn to crypto box
                    if(isBlue){
                        if(robot.RotateIMU(215, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    else{
                        if(robot.RotateIMU(135, 1.5)){
                            robot.resetMotors(true);
                            autoState++;
                        }
                    }
                    break;
                case 8: //deposit glyph
                    if(robot.driveForward(true, 1.0, .50)) {
                        robot.resetMotors(true);
                        robot.glyphSystem.ReleaseGrip();
                        autoState++;
                    }
                    break;
                case 9: //back away from crypto box
                    if(robot.driveForward(false, .5, .50)){
                        robot.resetMotors(true);
                        autoState++;
                    }
                    break;
                default:
                    robot.resetMotors(true);
                    autoState = 0;
                    active = false;
                    state = 0;
                    break;
            }
        }
    

    Driving Struggles

    Driving Struggles By Abhi

    Task: Drive the Robot

    Today we tried to drive the robot on the practice field for the first time since the qualifier last Saturday. However, we couldn't get in very much quality drive practice because the robot kept breaking down. We decided to dig a bit deeper and found some issues.

    As seen above, the first thing wrong was that the lift was tilted. Because the plank of the grabber arm is cantilevered off the vertical axis, the structure had only one bar supporting the lift. As a result, the REV rail of the mount had been wearing out ever since we built the robot, to the point where it finally broke. Also, because of the single-rod mounting, the lift system rotated about the vertical axis, so drivers like myself had to rotate into the cryptobox every time we wanted to score. This was not a good way for the robot to function, and it frustrated us.

    Another issue was that the lift system's string often got caught in the robot's wiring. The friction between the string and the wiring, including the jewel system, kept breaking the string and also created a safety issue. As a result, we need to fix either the wiring of the robot or the lift system altogether.

    Reflections

    We hope to make improvements over this week before the Oklahoma qualifier. Hopefully, we will have a more proficient robot making it easier on our drivers.

    Code Fixes and Readability

    Code Fixes and Readability By Tycho

    Task: Make the code more readable

    We can't include all the code changes we made today, but all of them involved cleaning up our code: removing unused functions, refactoring, adding comments, and making it more readable for the tournament. We had almost 80k deletions and 80k additions. This marks a turning point in the readability of our code, so that less experienced team members can read it. We went through methodically and commented each function and method for future readability, as we will have to pass the codebase on to next year's team.

    Drive Practice

    Drive Practice By Karina, Charlotte, and Abhi

    Task: Become experts at driving the robot and scoring glyphs

    Iron Reign's robot drivers Abhi, Charlotte, and I have been working hard to decrease our team's glyph-scoring time. The past few meets, we have spent many hours practicing maneuvering on the field and around blocks, something that is crucial if we want to go far this competition season. When we first started driving the robot, we took approximately 4 minutes to complete a single column of the cryptobox, but now we can fill one and a half columns in two minutes.

    When we first started practicing, we had trouble aligning with the glyphs to grab them. The fact that we were using our prototype arms was partially at fault for our inability to move quickly and efficiently, but we also had some human error to blame. Personally, I found it difficult not to confuse my orientation with the robot's. In addition, our drive team had yet to establish a communication system between the driver and the coach, so the driver had no guidance as to which glyphs were the easiest to go for or whether the robot was in position to grab a glyph. Below is a video that shows our shaky beginning:

    Our driving has improved significantly. We have done mock teleop runs, timed ourselves on how long we take to complete different tasks, and have repeatedly tried stacking blocks and parking on the balancing stone. When our robot doesn't break, we can fill up to two columns of the cryptobox!

    Reflections

    Overall, we feel that we can further improve our driving skills with more drive practice. Driving the robot really does require being familiar with your robot and its quirks, as well as the controls to move the robot. Abhi, Charlotte, and I know we are still far from being driving experts, but we are putting forth our time and effort so that we can give it our best at tournaments.

    Control Award

    Control Award By Janavi

    Task:

    Last Saturday, after our qualifier, we had a team meeting where we created a list of what we needed to do before our second qualifier this Saturday. One of the tasks was to write the control award submission, which we were unfortunately unable to complete in time for our last competition.

    Autonomous Objective:

    1. Knock off opponent's jewel, place glyphs in correct location based on image, park in safe zone (85 pts)
    2. Park in Zone, place glyph in cryptobox (25 pts)

    Autonomous B has the ability to be delayed for a certain amount of time, allowing for better coordination with alliance mates. If our partner team is more reliable, we can give them freedom to move, but still add points to our team score.

    Sensors Used

    1. Phone Camera - Allows the robot to determine where to place glyphs using Vuforia, taking advantage of the wide range of data provided from the pattern detection, as well as using Open Computer Vision (OpenCV) to analyze the pattern of the image.
    2. Color Sensor - Robot selects correct jewel using the passive mode of the sensor. This feedback allows us to determine whether the robot needs to move forwards or backwards to knock off the opposing team's jewel.
    3. Inertial Measurement Unit (IMU) - 3 Gyroscopes and Accelerometers return the robot’s heading for station keeping and straight-line driving in autonomous, while letting us orient ourselves to specific headings for proper navigation, crypt placing, and balancing
    4. Motor Encoders - Using returned motor odometry, we track how many rotations the wheels have made and convert that into meters travelled. We use this in combination with feedback from the IMU to calculate our location on the field relative to where we started.
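
    As a worked illustration of that conversion (the constants here are examples, not necessarily our actual hardware values):

    //example conversion from encoder ticks to meters travelled
    static final double TICKS_PER_REV = 1120;      //e.g. a NeveRest 40 output shaft
    static final double WHEEL_DIAMETER_M = 0.1016; //4-inch wheel

    double metersTravelled(int encoderTicks) {
        return (encoderTicks / TICKS_PER_REV) * Math.PI * WHEEL_DIAMETER_M;
    }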

    Key Algorithms:

    1. Integrate motor odometry, the IMU gyroscope, and accelerometer using trigonometry so the robot knows its location at all times
    2. Use Proportional/Integral/Derivative (PID) combined with IMU readouts to maintain heading. The robot corrects any differences between actual and desired heading at a power level appropriate for the difference and amount of error built up. This allows us to navigate the field accurately during autonomous.
    3. We use Vuforia to track and maintain distance from the patterns on the wall based on the robot controller phone's camera. This combines two machine vision libraries, trigonometry, and PID motion control.
    4. All code is non-blocking to allow multiple operations to happen at the same time. We extensively use state machines to prevent conflicts over priorities in low-level behaviors.
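
    A minimal sketch of the heading-hold idea from item 2, reusing our PIDController interface; the IMU call and the drive mixer are assumed names rather than exact excerpts from our codebase:

    //wrap the heading error to [-180, 180) so we always turn the short way
    double error = desiredHeading - imu.getAngularOrientation().firstAngle;
    while (error >= 180) error -= 360;
    while (error < -180) error += 360;

    headingPID.setSetpoint(0);
    headingPID.setInput(error);
    double turnPower = headingPID.performPID(); //combined P/I/D correction

    driveMixer(forwardPower, strafePower, turnPower); //hypothetical mecanum mixer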

    Driver Controlled Enhancements:

    1. If the lift has been raised, movement of the jewel arm is blocked to avoid a collision.
    2. The robot has a slow mode, which allows our drivers to maneuver precisely and pick up glyphs easily.
    3. The robot also has a turbo mode, activated when the bumper is pressed, allowing the driver to quickly traverse the field.
    Autonomous Field

    Robot Drive Team

    Robot Drive Team By Charlotte, Tycho, Karina, and Evan

    Task: Build a solid drive team.

    One of the leading problems Iron Reign faces is allotting time for effective driving practice. Driving practice is essential for our success in the robot game, but it is sometimes difficult to find time because other team members are working on various robot improvements. We have created two different drive teams, a main team and a backup team, so that regardless of who is available at a meeting we can always have some kind of drive practice going on. The bulk of driving practice is spent putting glyphs in the cryptobox, trying to better our previous time and complete as many columns as we can. We focus on performing and scoring timed runs, and sometimes, when our sister team 3734 is available, we scrimmage our robots against each other. Another smaller, yet equally essential, part of drive practice is setting up the robot in the correct orientation for every situation and running our autonomous. It is important that we make all of our mistakes during practice, so that when it is time to compete we run autonomous perfectly every time. The main challenges we face in driving practice are consistency in filling the cryptobox, adjusting to significant robot design changes, and time management (actually finding the time to get in good practice).

    In the future, the drive team is going to meet more often and hold more efficient practices. Our main goal is to significantly decrease the time that it takes to fill the cryptobox, and to accomplish this we will need to clock in many hours so that we are very comfortable in driving the robot. Ideally, any error that might occur during competition will be mechanical errors that are out of the drivers' control. We have improved a lot, but we still have a long way to go.

    Control Award Updates

    Control Award Updates By Janavi

    Task:

    In the past few months we've made a lot of improvements and updates to our robot and code. For example, we changed our gripper system again; it now includes an internal lift, which makes it easier to deposit our collected glyphs into the cryptobox. So we have decided to update our control award submission to reflect these changes.

    Autonomous Objective:

    1. Knock off opponent's jewel, place glyphs in correct location based on image, park in safe zone (85 pts)
    2. Park in Zone, place glyph in cryptobox (25 pts)

    Autonomous B has the ability to be delayed for a certain amount of time, allowing for better coordination with alliance mates. If our partner team is more reliable, we can give them freedom to move, but still add points to our team score.

    Sensors Used

    1. Phone Camera - Allows robot to determine where to place glyphs using Vuforia, taking advantage of the wide range of data provided from the pattern detection, as well as using Open Computer Vision (OpenCV) to analyze the pattern of the image.
    2. Color Sensor - Robot selects correct jewel using the passive mode of the sensor. Feedback determines whether the robot needs to move forwards or backwards to knock off opposing team's jewel.
    3. Inertial Measurement Unit (IMU) - 3 Gyroscopes and Accelerometers return robot’s heading for station keeping and straight-line driving in autonomous, while orienting ourselves to specific headings for proper navigation, crypt placing, and balancing.
    4. Motor Encoders - Returned motor odometry tracks how many rotations the wheels have made and converts into meters travelled. In combination with feedback from the IMU, can calculate location on the field relative to starting point.

    Key Algorithms:

    1. Integrate motor odometry, IMU gyroscope, and accelerometer with trigonometry so the robot knows its location at all times.
    2. Use Proportional/Integral/Derivative (PID) control combined with IMU readouts to maintain heading, correcting any difference between actual and desired heading at a power level appropriate to the difference and the amount of error built up. This allows us to navigate the field accurately during autonomous.
    3. Vuforia tracks and maintains distance from patterns on the wall using the robot controller phone's camera, combining two machine vision libraries, trigonometry, and PID motion control.
    4. All code is non-blocking, allowing multiple operations to happen at the same time. We extensively use state machines to prevent conflicts over priorities in low-level behaviors.

    Driver Controlled Enhancements:

    1. Internal Lift System is a conveyor-belt-like system that moves blocks from the bottom of the grippers to the top, making it easier for the drivers to deposit the glyphs in the cryptobox.
    2. If the lift has been raised, jewel arm movement is blocked to avoid a collision.
    3. The robot's slow mode allows our drivers to accurately maneuver around the field as well as gather glyphs easily and accurately.
    4. The robot also has a turbo mode. This speed is activated when the bumper is pressed, allowing the driver to quickly navigate the field.
    Autonomous Field

    Kraken LED Modes

    Kraken LED Modes By Tycho and Janavi

    Task: Attach and Code LEDs

    We added LEDs to Kraken's base and coded the lights to change color depending on which mode we are in. Though a small addition, it helps take stress off of our drivers: by glancing at the robot, they can immediately tell what mode we're in and adjust accordingly. It also keeps us from making crucial mistakes like activating our autonomous for the blue alliance when we're on red. The mode-to-color mapping is listed below, followed by a sketch of how it might look in code.

    • Cyan - End-game mode, changes control scheme to support relic arm control. Resets forward direction so drivers can think of relic gripper as forward. Enables automatic balancing mode.
    • Magenta - Glyph-scoring mode for higher rows. Reverses the motor directions and slows the motors down.
    • Blue/Red - Blue or red depending on alliance. Regular driver mode, collects glyphs for the lower columns.
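
    A sketch of the mapping (setLED and the mode flags are hypothetical names standing in for however our mode-switch logic drives the strip):

    //pick an LED color from the current drive mode
    void updateModeLights() {
        if (isEndGameMode)          setLED(0, 255, 255); //cyan: relic/balance controls
        else if (isHighScoringMode) setLED(255, 0, 255); //magenta: upper-row scoring
        else if (isBlueAlliance)    setLED(0, 0, 255);   //blue alliance driving
        else                        setLED(255, 0, 0);   //red alliance driving
    }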
    Here is Kraken in end-game mode:

    Controller Mapping

    Controller Mapping By Janavi

    Task: Map the controller layout

    At this point, we are training the next generation of drivers on our team. Since we have so many buttons with so many different functions, it can be difficult for new drivers to determine which button does what, so Karina and I created a map of the controller. This not only helps others figure out which button to press for each action, but also helps the coders understand the wants and needs of the drivers. Often, when we are coding, we will assign a function to any available button just to test whether it works, and we don't change the button assignment afterwards; the result is either far too many buttons for the driver to easily control the robot, or buttons too far apart for easy access. After creating this map, the drivers were able to sit down with the coders and map out the most effective controller layout together.

    Next Steps:

    We plan to make this part of our post-mortem discussions, along with reviewing what could be improved in the robot's code-driven functions. We will also take out the controller map and determine whether any of the buttons can be switched for easier driver control. This will not only lead to better, more efficient driving but also to better communication between groups.

    Importance of Documentation

    Importance of Documentation By Abhi and Tycho

    Task: Explain commits

    As explained in a previous post, we were having many issues with git commits and fixing our errors in them. After a lot of merge conflicts, we had to fix all the commits without exactly knowing what had changed in the code. Part of the reason this was so hard was our lack of good naming conventions. Though we always try to give every commit a good title and description, this doesn't always happen. This is precisely why it is important to check in all changes at every session with good descriptions: someone had to spend their time mechanically resolving merge conflicts without understanding the intent behind the code, so it took longer and they may have made mistakes, an issue that good documentation would have prevented in the first place.

    This post is dedicated to explaining some of the errors and what the commits true intentions were.

    Stuff:

    That one is mostly about code for the 3rd servo in the gripper open/close methods. It created the servo in Pose and added code for it in GlyphSystem2.

    4a6b7dbfff573a72bfee2f7e481654cb6c26b6b2:

    This was for tuning the field-oriented code. There were some errors with arrays in the way power was fed to the motors (a null pointer exception), so I (Abhi) had to fix that. I also made some touch-up edits to formatting in the methods. After all this, Tycho made (or may have edited an existing) method in Pose for the Vuforia demo, and minor changes were made to account for this.

    c8ffc8592cd1583e3b71c39ba5106d48da887c66:

    The first part was all Argos edits at the museum to make it functional and fine-tune some measurements. The second part involved the conveyor belt flipper. Tycho changed dpad up and down to return to the home position rather than carry out the complete motion (not sure if this survived all the commit mess, but it was done in theory). Driver practice will have to confirm the changes.

    Next Steps

    Don't name things badly so this doesn't happen.

    Autonomous Updates, Multi-glyph

    Autonomous Updates, Multi-glyph By Abhi

    Task: Score extra autonomous glyphs

    At super regionals, we saw all the good teams running multi-glyph autonomi. In fact, Viperbots Hydra, the winning alliance captain, had a 3-glyph autonomous. I believed Iron Reign could get some of this 100-point autonomous action, so I sat down to create a 2-glyph autonomous. We now have 3 autonomi, one of which is multi-glyph.

    I made a new method called autonomous3(). For the starting actions (like driving off the balancing stone and the jewel routine), I copied code from our existing autonomous program. After that, I realized that 10 seconds of the autonomous period had already been used by the time the robot had driven off the stone, which led me to think about ways to optimize everything after that point. I realized that if the gripper deployed as the robot was aligning itself with the balancing stone, it would save a lot of time. I also sped up the drive train for maximum efficiency. It took many runs to get the fix right, though.

    First time through, I ran the code and nothing happened. I realized that I forgot to call the actual state machine in the main loop. Dumb mistake, quick fix.

    Second run: The robot drove off the balancing stone properly and was ready to pick up extra glyphs. Unfortunately, I had flipped the motor directions, so the robot rammed into the cryptobox instead of driving into the glyph pit. Another quick fix.

    Third run: The robot drove off the stone and into the glyph pit. However, it went almost into the blue zone (I was testing from the red side). Also, the robot would rotate while in the glyph pit, causing glyphs to get under the wiring and pull stuff out. I had to rewire a couple of things, then I went back to coding.

    Fourth run: The robot drove off stone and into pit and collected one more glyph. The issue was that once the glyph was collected, the bot kept driving forward because I forgot to check the speeds again.

    Fifth run: All the initial motions worked. However, I realized that the robot didn't strafe off as far as I needed it to reach the glyph pit. I added about .3 meters to the robot distance and tested again.

    Sixth run: I don't know if anyone says the 6th time is the charm, but it was definitely a successful one for me. The robot did everything correctly and placed the glyph in the cryptobox. The only issue was that the robot kept backing away and ramming the cryptobox at the end of auto. I fixed this easily by adding another autoState++ to the code.

    Before I made the fix after the 6th run, I decided to take a wonderful video of the robot moving. It is linked below.

    Next Steps:

    Everything is ready to go for a multi-glyph autonomous. However, the robot doesn't yet score in the correct column according to the VuMark codex. I need to implement that with IMU angles before Champs.

    Autonomous Updates, Multiglyph Part 2

    Autonomous Updates, Multiglyph Part 2 By Abhi, Karina, and Tycho

    Task: Develop multiglyph for far Stone

    We had a functional autonomous for the balancing stone close to the audience. However, chances are that our alliance partner would want that same stone, since they could get more glyphs during autonomous from it. This meant we needed a multi-glyph autonomous for the far balancing stone. We went on an adventure to make this happen.

    We programmed the robot to deploy the grippers as it drove off the balancing stone. To ensure the robot was off the stone before deploying, we used the roll sensor in the REV hub to determine whether the robot was flat on the ground; this lets the autonomous tolerate placement error on the balancing stone in the field's forward-backward direction. Next, we used an IMU turn into the glyph pit to increase our chances of picking up a second glyph. Then we backed away and turned parallel to the cryptobox. The following motion was to drive into the field perimeter for long enough that the robot ends up pushing against it, correcting any angle error and squaring the grippers up perpendicular to the field wall. Then the robot backs up and scores the glyphs. Here is a video of it working:
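
    A sketch of that flatness check (the threshold and axis choice are illustrative; which reported angle is roll depends on how the hub is mounted):

    import org.firstinspires.ftc.robotcore.external.navigation.*;

    //true once the robot is flat on the mat, i.e. fully off the balancing stone
    boolean isFlatOnField() {
        Orientation angles = imu.getAngularOrientation(
                AxesReference.INTRINSIC, AxesOrder.ZYX, AngleUnit.DEGREES);
        return Math.abs(angles.secondAngle) < 2.0; //roll within ~2 degrees of level
    }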

    Next Steps

    Now we are speeding auto up and correcting IMU angles.

    Position Tracking

    Position Tracking By Abhi

    Task: Design a way to track the robot's location

    Throughout the Relic Recovery season, we had many issues with autonomous being inaccurate, simply because scoring depended on perfectly aligning the robot on the balancing stone. This was prone to failure, as evidenced by numerous matches in which our autonomous missed. Thus far, we had relied on the encoders on the mecanum chassis to measure distances. Though this worked to a significant degree, the bot was still prone to position loss from drift and from running into the glyph pit. We don't know if glyphs will be reused next season or not, but we definitely needed a better way to track the robot on the field to be more efficient.

    After some investigation online and discussing with other teams, I thought about a way to make a tracker. For the sake of testing, we built a small chassis with two perpendicular REV rails. Then, with the help of new trainees for Iron Reign, we attached two omni wheels on opposite sides of the chassis, as seen in the image above. To this, we added axle encoders to track the movement of the omni wheels.

    We kept the axles of these omnis independent of any motors because we wanted to avoid any error from the motors themselves. Since the omni wheels are free-spinning, they always move in whichever direction the robot is actually moving, no matter what the drive encoders read, so they generally give a more accurate reading of position.

    To test the concept, we attached the apparatus to ARGOS. With some upgrades to the ARGOS code to use the IMU and omni wheels, we added some basic trigonometry to accurately track position. The omni setup was relatively accurate and may be used for future projects and robots.
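
    The trigonometry is essentially a rotation of the two wheel deltas into field coordinates. A minimal sketch (names hypothetical, and this ignores second-order effects from turning while driving):

    //dead-wheel position update: rotate local encoder deltas into the field frame
    void updatePose(int forwardTicks, int lateralTicks, double headingRad) {
        double dx = ticksToMeters(forwardTicks); //delta from the forward-facing omni
        double dy = ticksToMeters(lateralTicks); //delta from the sideways omni
        fieldX += dx * Math.cos(headingRad) - dy * Math.sin(headingRad);
        fieldY += dx * Math.sin(headingRad) + dy * Math.cos(headingRad);
    }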

    Next Steps

    Now that we have a prototype to track position without using too many resources, we need to test it on an actual FTC chassis. Depending on whether or not there is terrain in Rover Ruckus, the use of this system will change. Until then, we can still experiment with this and develop a useful multipurpose sensor.