Articles by tag: control

    Swerve Drive Experiment

    Swerve Drive Experiment By Abhi

    Task: Consider a Swerve Drive base

    During the entire season of Relic Recovery, we saw many robots both in and outside our region that had a swerve drive. Iron Reign had never considered a swerve drive in the past, but seeing all those robots, I wanted to find out whether it was feasible for us. One motivation was that I didn't like how slow mecanum wheels were. Swerves generally use traction wheels and reach higher speeds than mecanum usually can. It also seemed that swerve could provide the mobility that a mecanum drive gave us. This is why I wanted to consider the possibility of a swerve drive and why I did more investigation.

    I first came across the PRINT swerve for FTC by team 9773. They had a very detailed explanation of all the parts and assembly tools. After reading into it more, I decided that the system they created wasn't the best fit for us. First, the final cost of the drive train was very expensive; we did not have a very high budget despite help from our sponsors. If this drive train didn't work after playing with it over the summer, or if the chassis didn't make sense to use in Rover Ruckus, we would have almost no money for an alternate drive train, since we wanted to preserve Kraken. Also, the parts used by 9773 involved X-rail rather than extrusion rail from REV. This would cause problems in the future, as we would need to redesign the REVolution system for X-rail. In the end, I decided this was not worth pursuing.

    After further investigation, I found a chassis by team 9048. The swerve they developed looked like a more feasible option. By using REV rail and many of the parts we already had, I thought this could be a possible prototype for Iron Reign. Because they didn't have a parts list, we had to build a rough cost estimate from the REV and AndyMark websites. Upon further analysis, we realized that the cost, though cheaper than the chassis of 9773, would still be a considerable chunk of our budget. But I am still motivated to find a way to make this happen.

    Next Steps

    Possibly scavenge for parts in the house and Robodojo to make swerve modules.

    Swerve Drive Prototype

    Swerve Drive Prototype By Abhi and Christian

    Task: Build a Swerve Drive base

    During the discussion about swerve drive, Imperial Robotics, our sister team, was also interested in the designs. Since we needed to conserve resources and prototype quickly, I worked with Christian and another member of Imperial to prototype a drive train.

    Due to our limited resources, we decided to use Tetrix parts since we had an abundance of those. We decided to make the swerve such that a servo would turn each swerve module and the motors would be attached directly to the wheels. This system would be mounted to a square base. We decided to go ahead and build the base.

    Immediately, we noticed it was very feeble. The servos were working very hard to turn the heavy modules, and the motors had trouble staying aligned. Programming the drive train was also a challenge. After further experimenting, the base even broke. This was a moment of realization: not only was swerve expensive and complicated, we would also need to be able to replace a module very quickly at competition, which demanded more resources and an immaculate design. With all these considerations, I ultimately decided that swerve wasn't worth using as a drive chassis.

    Next Steps

    Wait until Rover Ruckus starts so that we can think of a new chassis.

    Position Tracking

    Position Tracking By Abhi

    Task: Design a way to track the robot's location

    Throughout the Relic Recovery season, we had many issues with our autonomous being inaccurate, simply because scoring depended on perfectly aligning the robot on the balancing stone. This was prone to many issues, as evidenced by the numerous matches in which our autonomous failed. Thus far, we had relied on the encoders on the mecanum chassis to measure distances. Though this worked to a significant degree, the bot was still prone to drift and to running into the glyph pit. We don't know whether glyphs will be reused or not, but we definitely needed a better way to track the robot on the field.

    After some investigation online and discussing with other teams, I thought about a way to make a tracker. For the sake of testing, we built a small chassis with two perpendicular REV rails. Then, with the help of new trainees for Iron Reign, we attached two omni wheels on opposite sides of the chassis, as seen in the image above. To this, we added axle encoders to track the movement of the omni wheels.

    The reason the axles of these omnis were not connected to any motors was that we wanted to avoid any error introduced by the motors themselves. Because the omni wheels are free-spinning, no matter how the drive wheels slip, the omni wheels will always roll in whichever direction the robot is actually moving. Therefore, the omni wheels will generally give a more accurate reading of position.

    To test the concept, we attached the apparatus to ARGOS. With some upgrades to the ARGOS code by using the IMU and omni wheels, we added some basic trigonometry to the code to accurately track the position. The omni setup was relatively accurate and may be used for future projects and robots.
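
    To illustrate the idea (a simplified sketch with hypothetical names, not our actual ARGOS code), the update that turns the two free-spinning omni encoders plus the IMU heading into field coordinates looks roughly like this:

    // Minimal dead-reckoning sketch. Assumes one omni wheel aligned with the robot's
    // forward axis and one with the strafe axis, each with its own encoder, plus an IMU.
    public class OmniTracker {
        private static final double TICKS_PER_CM = 38.5; // depends on wheel size and encoder CPR

        private double x = 0, y = 0; // field-frame position, cm
        private int lastForwardTicks = 0, lastStrafeTicks = 0;

        /** Call every loop with fresh encoder counts and the IMU heading in radians. */
        public void update(int forwardTicks, int strafeTicks, double headingRadians) {
            double dForward = (forwardTicks - lastForwardTicks) / TICKS_PER_CM;
            double dStrafe = (strafeTicks - lastStrafeTicks) / TICKS_PER_CM;
            lastForwardTicks = forwardTicks;
            lastStrafeTicks = strafeTicks;

            // Rotate the robot-frame displacement into the field frame using the IMU heading.
            x += dForward * Math.cos(headingRadians) - dStrafe * Math.sin(headingRadians);
            y += dForward * Math.sin(headingRadians) + dStrafe * Math.cos(headingRadians);
        }

        public double getX() { return x; }
        public double getY() { return y; }
    }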

    Next Steps

    Now that we have a prototype to track position without using too many resources, we need to test it on an actual FTC chassis. Depending on whether or not there is terrain in Rover Ruckus, the use of this system will change. Until then, we can still experiment with this and develop a useful multipurpose sensor.

    Replay Autonomous

    Replay Autonomous By Arjun

    Task: Design a program to record and replay a driver run

    One of the difficulties in writing an autonomous program is the long development cycle. We have to unplug the robot controller, plug it into a computer, make a few changes to the code, recompile and download the code, and then retest our program. All this must be done over and over again, until the autonomous is perfected. Each autonomous takes ~4 hours to write and tune. Over the entire season, we spend over 40 hours working on autonomous programs.

    One possible solution for this is to record a driver running through the autonomous, and then replay it. I used this solution on my previous robotics team. Since we had no access to a field, we had to write our entire autonomous at a competition. After some brainstorming, we decided to write a program to record our driver as he ran through our autonomous routine and then execute it during a match. It worked very well, and got us a few extra points each match.

    Using this program, writing an autonomous program is reduced to a matter of minutes. We just need to run through our autonomous routine a few times until we are happy with it, then take the data from the console and paste it into our program. Then we recompile the program and run it.

    There are two parts to our replay program. One part (a Tele-op Opmode) records the driver's motions and outputs it into the Android console. The next part (an Autonomous Opmode) reads in that data, and turns it into a working autonomous program.
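
    As a rough illustration of the recording half (hypothetical names and controls, not our exact opmode), each loop we log a timestamp and the drive powers so that the console output can later be pasted into the autonomous and played back on the same timeline:

    import com.qualcomm.robotcore.eventloop.opmode.OpMode;
    import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
    import com.qualcomm.robotcore.util.RobotLog;

    @TeleOp(name = "RecordRun")
    public class RecordRun extends OpMode {
        private long startTime;

        @Override
        public void init() {
            startTime = System.currentTimeMillis();
        }

        @Override
        public void loop() {
            double left = -gamepad1.left_stick_y;
            double right = -gamepad1.right_stick_y;
            // robot.driveTank(left, right); // placeholder for our actual drive call

            // Each line below is what gets copied out of the console into the autonomous,
            // which then replays the same powers at the same elapsed times.
            long elapsed = System.currentTimeMillis() - startTime;
            RobotLog.d("REPLAY," + elapsed + "," + left + "," + right);
        }
    }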

    Next Steps

    Our current replay program requires one recompilation. While it is very quick, one possible next step is to save the autonomous data straight into the phone's internal memory, so that we do not have to recompile the program. This could further reduce the time required to create an autonomous.

    One more next step could be a way to easily edit the autonomous. The output data is just a big list of numbers, and it is very difficult to edit it. If we need to tune the autonomous due to wear and tear on the robot, it is difficult to do so without rerecording. If we can figure out a mechanism for editing the generated autonomous, we can further reduce the time we spend creating autonomous programs.

    Rover Ruckus Brainstorming & Initial Thoughts

    Rover Ruckus Brainstorming & Initial Thoughts By Ethan, Charlotte, Kenna, Evan, Abhi, Arjun, Karina, and Justin

    Task: Come up with ideas for the 2018-19 season

    So, today was the first meeting in the Rover Ruckus season! On top of that, we had our first round of new recruits (20!). So, it was an extremely hectic session, but we came up with a lot of new ideas.

    Building

    • A One-way Intake System

    • This suggestion uses a plastic flap to "trap" game elements inside it, similar to the lid of a soda cup. You can put marbles through the straw-hole, but you can't easily get them back out.
    • Crater Bracing
    • In the past, we've had center-of-balance issues with our robot. To counteract this, we plan to attach shaped braces to our robot such that it can hold on to the walls and not tip over.
    • Extendable Arm + Silicone Grip

    • This one is simple - a linear slide arm attached to a motor so that it can pick up game elements and rotate. We fear, however, that many teams will adopt this strategy, so we probably won't do it. One unique part of our design would be the silicone grips, so that the "claws" can firmly grasp the silver and gold.
    • Binder-ring Hanger

    • When we did Res-Q, we dropped our robot more times than we'd like to admit. To prevent that, we're designing an interlocking mechanism that the robot can use to hang. It'll have an indent and a corresponding recess that resists lateral force by nature of the indent, but can be opened easily.
    • Passive Intake
    • Inspired by a few FRC Stronghold intake systems, we designed a passive intake. Attached to a weak spring, it would have the ability to move over game elements before falling back down to capture them. The benefit of this design is that we wouldn't have to use an extra motor for intake, but we risk controlling more than two elements at the same time.
    • Mecanum
    • Mecanum is our Ol' Faithful. We've used it for the past three years, so we're loath to abandon it this year. It's still a good option, but strafing isn't as important this season, and we may need to emphasize speed instead. Plus, we're not exactly sure how to get over the crater walls with mecanum.
    • Tape Measure
    • In Res-Q, we used a tape-measure system to pull our robot up, and we believe that we could do the same again this year. One issue is that our tape measure system is ridiculously heavy (~5 lbs) and with the new weight limits, this may not be ideal.
    • Mining
    • We're currently thinking of a "mining mechanism" that can score two minerals at a time extremely quickly, in exchange for not being able to climb. It'll involve a conveyor belt and a set of linear slides so that the objects in the crater can automatically be transferred to either the low-scoring zone or the higher one.

    Journal

    This year, we may switch to weekly summaries instead of meeting logs so that our journal is more reasonable for judges to read. In particular, we were inspired by team Nonstandard Deviation, which has an amazing engineering journal that we recommend readers check out.

    Programming

    Luckily, this year seems to have a more-easily programmed autonomous. We're working on some autonomous diagrams that we'll release in the next couple weeks. Aside from that, we have such a developed codebase that we don't really need to update it any further.

    Next Steps

    We're going to prototype these ideas in the coming weeks and develop our thoughts more thoroughly.

    Vision Discussion

    Vision Discussion By Arjun and Abhi

    Task: Consider potential vision approaches for sampling

    Part of this year’s game requires us to be able to detect the location of minerals on the field. The main use for this is in sampling. During autonomous, we need to move only the gold mineral, without touching the silver minerals, in order to earn points for sampling. There are a few ways we could detect the location of the gold mineral.

    First, we could use OpenCV to run transformations on the image that the camera sees. We would have to design an OpenCV pipeline which identifies yellow blobs, filters out those that aren’t minerals, and finds the centers of the blobs which are minerals. This is most likely the approach that many teams will use. The benefit of this approach is that it is easy enough to write. However, it may not work in lighting conditions that we did not test while designing the pipeline.

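    As a sketch of what such a pipeline might look like (illustrative only; the thresholds and names here are assumptions, not our final pipeline):

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import java.util.ArrayList;
    import java.util.List;

    public class YellowBlobDetector {
        /** Returns the centroid of the largest yellow blob, or null if none is found. */
        public static Point findGold(Mat rgbFrame) {
            // 1. Convert to HSV, where "yellow" is easier to threshold than in RGB.
            Mat hsv = new Mat();
            Imgproc.cvtColor(rgbFrame, hsv, Imgproc.COLOR_RGB2HSV);

            // 2. Keep only yellow-ish pixels (rough range; needs tuning for field lighting).
            Mat mask = new Mat();
            Core.inRange(hsv, new Scalar(15, 100, 100), new Scalar(35, 255, 255), mask);

            // 3. Find blobs and ignore small specks that are unlikely to be minerals.
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            MatOfPoint best = null;
            double bestArea = 500;
            for (MatOfPoint c : contours) {
                double area = Imgproc.contourArea(c);
                if (area > bestArea) {
                    bestArea = area;
                    best = c;
                }
            }
            if (best == null) return null;

            // 4. Return the center of the largest blob's bounding box.
            Rect box = Imgproc.boundingRect(best);
            return new Point(box.x + box.width / 2.0, box.y + box.height / 2.0);
        }
    }
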
    Another approach is to use a Convolutional Neural Network (CNN) to identify the location of the gold mineral. Convolutional Neural Networks are a class of machine learning algorithms that “learn” to find patterns in images by looking at large numbers of sample images. In order to develop a CNN to identify minerals, we must take lots of photos of the sampling minerals in different arrangements (and lighting conditions), and then manually label them. Then, the algorithm will “learn” how to differentiate gold minerals from other objects on the field. A CNN should be able to work in many different lighting conditions; however, it is also more difficult to write.

    Next Steps

    As of now, Iron Reign is going to attempt both methods of classification and compare their performance.

    CNN Training

    CNN Training By Arjun and Abhi

    Task: Capture training data for a Convolutional Neural Network

    In order to train a Convolutional Neural Network, we need a whole bunch of training images. So we got out into the field, and took 125 photos of the sampling setup in different positions and angles. Our next step is to label the gold minerals in all of these photos, so that we can train a Convolutional Neural Network to label the gold minerals by learning from the patterns of the training data.

    Next Steps

    Next, we will go through and designate gold minerals. In addition, we must create a program to process these.

    Autonomous Path Planning

    Autonomous Path Planning By Abhi

    Task: Map Autonomous paths

    Ahhhhhhh! Rover Ruckus has been around for a while now and it's time to figure out our autonomous plans! This year's autonomous is a lot more hectic than last year's, since there is the detaching-from-the-lander component, which can take an unknown amount of time right now. That cuts down the potential speed for the rest of the auto, but only time will tell.

    Until then, I can only dream of the potential autonomous paths our robot can take. First, we need to know some basic facts. One, the field is exactly the same for both the red and blue alliance, meaning I don't need to rewrite the code to act on the other side of the field. Two, we have to account for our alliance partner's autonomous, if they have one, and adapt to their path so we don't crash into them. Three, we have to avoid the other alliance's bots to avoid penalties. There are no explicit boundaries this year for auto, but if we somehow interrupt the opponent's auto, we get heavily penalized. Now, with this in mind, let's look at these paths.

    This path plan is the simplest of all the autonomi. I assume that our alliance partner has an autonomous and that our robot only takes care of half the functions. It starts with a simple detach from the lander, sampling the proper mineral, deploying the team marker, and parking in the crater. The reason I chose the opposite crater instead of the one on our near side is that it is a shorter distance and there is less chance of messing with our alliance partner. The issue with this plan is that it may interfere with the opponent's autonomous, but if we drive strategically, hugging the wall, we shouldn't have issues.

    This path is also a "simple" path but is clearly more involved. The issue is that the team marker depot is not on the same side as the lander, forcing us to drive all the way down and back to park in the crater. I could also change this one to go to the opposite crater, but that may interfere with our alliance partner's code.

    This is one of the autonomi that assumes our alliance partners don't have an autonomous and is built for multifunctionality. The time restriction makes this autonomous unlikely, but it is still nice to plan out a path for it.

    This is also one of the autonomi that assumes our alliance partners don't have an autonomous. This is the simpler of the two methods but still has the same restrictions.

    Next Steps

    Although it's great to think these paths will actually work out in the end, we might need to change them a lot. With potential collisions with alliance partners and opponents, we might need a drop-down menu of sorts on the driver station that lets us put together a lot of different pieces so we can pick and choose the auto plan. Maybe we could even draw out the path during init. All this is only at the speculation stage right now.

    CNN Training Program

    CNN Training Program By Arjun and Abhi

    Task: Designing a program to label training data for our Convolutional Neural Network

    In order to use the captured training data, we need to label it by identifying the location of the gold mineral in each image. We also need to normalize it by resizing the training images to a constant size (320x240 pixels). While we could do this by hand, it would be a pain: we would have to resize each individual picture, identify the coordinates of the center of the gold mineral, and then create a file to store the resized image and coordinates.

    Instead of doing this, we decided to write a program to do this for us. That way, we could just click on the gold mineral on the screen, and the program would do the resizing and coordinate-finding for us. Thus, the process of labeling the images will be much easier.
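
    Our actual tool is a small GUI application; as a rough sketch of the core idea (everything here, file names included, is hypothetical and not our real code, and the real tool also saves the resized image, which this sketch skips), clicking the mineral in a resized image just appends a line to a label file:

    import javax.imageio.ImageIO;
    import javax.swing.*;
    import java.awt.*;
    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import java.io.File;
    import java.io.FileWriter;

    public class LabelSketch {
        private static final int W = 320, H = 240; // normalized image size

        public static void main(String[] args) throws Exception {
            File[] images = new File("training-images").listFiles();
            FileWriter labels = new FileWriter("labels.csv");

            JFrame frame = new JFrame("Label the gold mineral");
            JLabel view = new JLabel();
            frame.add(view);
            frame.setSize(W, H + 40);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);

            final int[] index = {0}; // which image is currently on screen
            showImage(view, images[0]);

            view.addMouseListener(new MouseAdapter() {
                @Override
                public void mouseClicked(MouseEvent e) {
                    try {
                        // The click is already in resized (320x240) coordinates.
                        labels.write(images[index[0]].getName() + "," + e.getX() + "," + e.getY() + "\n");
                        labels.flush();
                        if (++index[0] < images.length) {
                            showImage(view, images[index[0]]);
                        } else {
                            labels.close();
                            System.exit(0);
                        }
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    }
                }
            });
        }

        private static void showImage(JLabel view, File file) throws Exception {
            Image resized = ImageIO.read(file).getScaledInstance(W, H, Image.SCALE_SMOOTH);
            view.setIcon(new ImageIcon(resized));
        }
    }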

    Throughout the weekend, I worked on this program. The end result is shown above.

    Next Steps

    Now that the program has been developed, we need to actually use it to label the training images we have. Then, we can train the Convolutional Neural Network.

    Labelling Minerals - CNN

    Labelling Minerals - CNN By Arjun and Abhi

    Task: Label training images to train a Neural Network

    Now that we have software to make labeling the training data easier, we have to actually use it to label the training images. Abhi and I split up our training data into two halves, and we each labeled one half. Then, when we had completed the labeling, we recombined the images. The images we labeled are publicly available at https://github.com/arjvik/RoverRuckusTrainingData.

    Next Steps

    We need to actually write a Convolutional Neural Network using the training data we collected.

    Upgrading to FTC SDK version 4.0

    Upgrading to FTC SDK version 4.0 By Arjun

    Task: Upgrade our code to the latest version of the FTC SDK

    FTC recently released version 4.0 of their SDK, with initial support for external cameras, better PIDF motor control, improved wireless connectivity, new sensors, and other general improvements. Our code was based on last year's SDK version 3.7, so we needed to merge the new SDK with our repository.

    The merge was slightly difficult, as there were some issues with the Gradle build system. However, after a little fiddling with the configuration, as well as fixing some errors in the internal code we changed, we were able to successfully merge the new SDK.

    After the merge, we tested that our code still worked on Kraken, last year's competition robot. It ran with no problems.

    Developing a CNN

    Developing a CNN By Arjun and Abhi

    Task: Begin developing a Convolutional Neural Network using TensorFlow and Python

    Now that we have gathered and labeled our training data, we began writing our Convolutional Neural Network. Since Abhi had used Python and TensorFlow to write a neural network in the past during his visit to MIT over the summer, we decided to do the same now.

    After running our model, however, we noticed that it was not very accurate. Though we knew this was due to a bad choice of layer structure or hyperparameters, we were not able to determine the exact cause. (Hyperparameters are configuration values, such as the learning rate or layer sizes, that have to be chosen before training; if they are off, the network will not learn well.) We fiddled with many of the hyperparameters and layer structure options, but were unable to fix the inaccuracy.

    # Imports assumed for this snippet; the original excerpt omitted them.
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    # n_rows and n_cols are the training image dimensions, defined elsewhere.
    model = Sequential()
    # Stacked convolutions extract visual features from the single-channel input
    model.add(Conv2D(64, activation="relu", input_shape=(n_rows, n_cols, 1), kernel_size=(3,3)))
    model.add(Conv2D(32, activation="relu", kernel_size=(3,3)))
    model.add(MaxPooling2D(pool_size=(8, 8), padding="same"))
    model.add(Conv2D(8, activation="tanh", kernel_size=(3,3)))
    model.add(MaxPooling2D(pool_size=(8, 8), padding="same"))
    model.add(Conv2D(4, activation="relu", kernel_size=(3,3)))
    model.add(Conv2D(4, activation="tanh", kernel_size=(1,1)))
    model.add(Flatten())
    # Two linear outputs: the predicted (x, y) center of the gold mineral
    model.add(Dense(2, activation="linear"))
    model.summary()

    Next Steps

    We have not fully given up, though. We plan to keep attempting to improve the accuracy of our neural network model.

    Rewriting CNN

    Rewriting CNN By Arjun and Abhi

    Task: Begin rewriting the Convolutional Neural Network using Java and DL4J

    While we were using Python and TensorFlow to train our convolutional neural network, we decided to attempt rewriting it in Java, since the code for our robot is entirely in Java, and before we can use our neural network on the robot, it must be available in Java.

    We also decided to try DL4J, a competing library to TensorFlow, to write our neural network, to determine whether it was easier to write a neural network using DL4J or TensorFlow. We found that both DL4J and TensorFlow were similarly easy to use, and while each had a different style, code written with either was equally easy to read and maintain.

    		//Download dataset
    		DataDownloader downloader = new DataDownloader();
    		File rootDir = downloader.downloadFilesFromGit("https://github.com/arjvik/RoverRuckusTrainingData.git", "data/RoverRuckusTrainingData", "TrainingData");
    		
    		//Read in dataset
    		DataSetIterator iterator = new CustomDataSetIterator(rootDir, 1);
    		
    		//Normalization
    		DataNormalization scaler = new ImagePreProcessingScaler(0, 1);
    		scaler.fit(iterator);
    		iterator.setPreProcessor(scaler);
    		
    		//Read in test dataset
    		DataSetIterator testIterator = new CustomDataSetIterator(new File(rootDir, "Test"), 1);
    			
    		//Test Normalization
    		DataNormalization testScaler = new ImagePreProcessingScaler(0, 1);
    		testScaler.fit(testIterator);
    		testIterator.setPreProcessor(testScaler);
    		
    		//Layer Configuration
    		MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
    				.seed(SEED)
    				.l2(0.005)
    				.weightInit(WeightInit.XAVIER)
    				.list()
    				.layer(0, new ConvolutionLayer.Builder()
    						.nIn(1)
    						.kernelSize(3, 3)
    						.stride(1, 1)
    						.activation(Activation.RELU)
    						.build())
    				.layer(1, new ConvolutionLayer.Builder()
    						.nIn(1)
    						.kernelSize(3, 3)
    						.stride(1, 1)
    						.activation(Activation.RELU)
    						.build())
    				/* ...more layer code... */
    				.build();
    

    Next Steps

    We still need to attempt to fix the inaccuracy in the predictions made by our neural network.

    Pose BigWheel

    Pose BigWheel By Abhi

    Task: New Pose for Big Wheel robot

    Historically, Iron Reign has used a class called "Pose" to control all the hardware mapping of our robot instead of putting it directly into our opmodes. This has created cleaner code and smoother integration with our crazy functions. However, we used the same Pose for the past two years since both had an almost identical drive base. Since there wasn't a viable differential drive Pose in the past, I made a new one using inspiration from the mecanum one.

    We start by initializing everything, including PID constants and all our motors and sensors. I will skip this part in this post since it is repetitive across all team code.

    In the init, I made the hardware mapping for the motors we have on BigWheel right now. Other functions will come in later.

    Here is where a lot of the work happens. This is what allows our robot to move accurately using IMU and encoder values.
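
    As an illustration of the kind of method that lives here (a hedged sketch with hypothetical names and gains, not the actual Pose code), driving straight on a heading combines an encoder distance check with a proportional IMU correction:

    import com.qualcomm.robotcore.hardware.DcMotor;

    /** Sketch of a differential-drive "drive straight on a heading" helper. */
    public class DiffDriveHelper {
        private static final double KP_HEADING = 0.02; // proportional gain, needs tuning

        private final DcMotor left, right;

        public DiffDriveHelper(DcMotor left, DcMotor right) {
            this.left = left;
            this.right = right;
        }

        /**
         * Call once per loop. headingErrorDegrees is (target heading - IMU heading),
         * wrapped to [-180, 180]. Returns true when the average encoder count hits the target.
         */
        public boolean driveOnHeading(double basePower, double headingErrorDegrees, int targetTicks) {
            int traveled = (Math.abs(left.getCurrentPosition()) + Math.abs(right.getCurrentPosition())) / 2;
            if (traveled >= targetTicks) {
                left.setPower(0);
                right.setPower(0);
                return true;
            }
            // If we have drifted off the heading, slow one side and speed up the other.
            double correction = KP_HEADING * headingErrorDegrees;
            left.setPower(basePower - correction);
            right.setPower(basePower + correction);
            return false;
        }
    }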

    There are a lot of other methods beyond these, but they mostly involve technical trigonometry. I won't bore you with the details, but our code is open source, so you can find the necessary help if you just look at our GitHub!

    RIP CNN

    RIP CNN By Abhi

    Task: Farewell Iron Reign's CNN

    So FTC released a new software update that added TensorFlow support. With it came a class that uses TensorFlow to autonomously detect the minerals. This meant all our progress was undercut by a software update. The silver lining is that we have done enough research into how CNNs work that it will help us understand the mind of the FTC app better. We're still gonna need an F in the chat tho.

    Next Steps

    We gotta figure out how to use the autonomous detection of the minerals to path plan.

    Code Post-Mortem after Conrad Qualifier

    Code Post-Mortem after Conrad Qualifier By Arjun and Abhi

    Task: Analyze code failure at Conrad Qualifier

    Iron Reign has been working hard on our robot, and we expected to do fairly well at our last competition. However, we couldn't have been more wrong; our robot came in last place in the robot game. While we did win the Inspire Award, we still would like to make some changes to ensure that this doesn't happen again.

    Our autonomous plan was fairly simple: perform sampling, deploy the team marker, then drive to the crater to park. We planned to use the built-in TensorFlow object detection for our sampling, and thus assumed that our autonomous would be fairly easy. Unfortunately, we didn't begin writing the code for our autonomous until the Thursday before our competition.

    On Thursday, I worked on writing a class to help us detect the location of the gold mineral using the built-in TensorFlow object detection. While testing this class, I noticed that it produced an error rather than outputting the location of the gold mineral. This error was not diagnosed until the morning of the competition.

    On Friday, Abhi worked on writing code for the driving part of the autonomous. He wrote three different autonomous routines, one for each position of the gold mineral. His code did not yet select which routine to use, leaving it open for us to connect to the TensorFlow class to determine which position the gold mineral was in.

    On Saturday, the morning of the competition, we debugged the TensorFlow class that was written earlier and determined the cause of the error. We had misused the API for the TensorFlow object detection, and after we corrected that, our code didn't spit out an error anymore. Then, we realized that TensorFlow only worked at certain camera positions and angles, so we had to adjust the position of our robot on the field until it could reliably see the minerals.

    Our code failure was mostly due to the fact that we only started working on our autonomous two days before the competition. Next time, we plan to make our autonomous an integral part of our robot and focus on it much earlier. I plan to fix our autonomous soon, well before our next competition, so that we do not have to show up to another competition without a working autonomous.

    Next Steps

    Spend more time focusing on code and autonomous, to ensure that we enter our next competition with a fully working autonomous.

    DPRG Vision Presentation

    DPRG Vision Presentation By Arjun and Abhi

    Task: Present to the Dallas Personal Robotics Group about computer vision

    We presented to the DPRG about our computer vision work, touching on subjects including OpenCV, Vuforia, TensorFlow, and training our own Convolutional Neural Network. Everyone we presented to was very interested in our work, and they asked us many questions. We also received quite a few suggestions on ways we could improve the performance of our vision solutions. The presentation can be seen below.

    Next Steps

    Refactoring Vision Code

    Refactoring Vision Code By Arjun

    Task: Refactor Vision Code

    Iron Reign has been working on multiple vision pipelines, including TensorFlow, OpenCV, and a home-grown Convolutional Neural Network. Until now, all our code assumed that we only used TensorFlow, and we wanted to be able to switch out vision implementations quickly. As such, we decided to abstract away the actual vision pipeline used, which allows us to be able to choose between vision implementations at runtime.

    We did this by creating a Java interface, VisionProvider, seen below. We then made our TensorFlowIntegration class (our code for detecting mineral positions using TensorFlow) implement VisionProvider.

    Next, we changed our opmode to use the new VisionProvider interface. We added code to allow us to switch vision implementations using the left button on the dpad.

    Our code for VisionProvider is shown below.

    public interface VisionProvider {
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry);
        public void shutdownVision();
        public GoldPos detect();
    }
    

    These methods are implemented in the integration classes.
    Our new code for TensorflowIntegration is shown below:

    public class TensorflowIntegration implements VisionProvider {
        private static final String TFOD_MODEL_ASSET = "RoverRuckus.tflite";
        private static final String LABEL_GOLD_MINERAL = "Gold Mineral";
        private static final String LABEL_SILVER_MINERAL = "Silver Mineral";
    
        private List<Recognition> cacheRecognitions = null;
      
        /**
         * {@link #vuforia} is the variable we will use to store our instance of the Vuforia
         * localization engine.
         */
        private VuforiaLocalizer vuforia;
        /**
         * {@link #tfod} is the variable we will use to store our instance of the Tensor Flow Object
         * Detection engine.
         */
        public TFObjectDetector tfod;
    
        /**
         * Initialize the Vuforia localization engine.
         */
        public void initVuforia() {
            /*
             * Configure Vuforia by creating a Parameter object, and passing it to the Vuforia engine.
             */
            VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();
            parameters.vuforiaLicenseKey = RC.VUFORIA_LICENSE_KEY;
            parameters.cameraDirection = CameraDirection.FRONT;
            //  Instantiate the Vuforia engine
            vuforia = ClassFactory.getInstance().createVuforia(parameters);
        }
    
        /**
         * Initialize the Tensor Flow Object Detection engine.
         */
        private void initTfod(HardwareMap hardwareMap) {
            int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                    "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
            TFObjectDetector.Parameters tfodParameters = new TFObjectDetector.Parameters(tfodMonitorViewId);
            tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
            tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABEL_GOLD_MINERAL, LABEL_SILVER_MINERAL);
        }
    
        @Override
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            initVuforia();
    
            if (ClassFactory.getInstance().canCreateTFObjectDetector()) {
                initTfod(hardwareMap);
            } else {
                telemetry.addData("Sorry!", "This device is not compatible with TFOD");
            }
    
            if (tfod != null) {
                tfod.activate();
            }
        }
    
        @Override
        public void shutdownVision() {
            if (tfod != null) {
                tfod.shutdown();
            }
        }
    
        @Override
        public GoldPos detect() {
            List<Recognition> updatedRecognitions = tfod.getUpdatedRecognitions();
            if (updatedRecognitions != null) {
                cacheRecognitions = updatedRecognitions;
            }
            if (cacheRecognitions != null && cacheRecognitions.size() == 3) {
                int goldMineralX = -1;
                int silverMineral1X = -1;
                int silverMineral2X = -1;
                for (Recognition recognition : cacheRecognitions) {
                    if (recognition.getLabel().equals(LABEL_GOLD_MINERAL)) {
                        goldMineralX = (int) recognition.getLeft();
                    } else if (silverMineral1X == -1) {
                        silverMineral1X = (int) recognition.getLeft();
                    } else {
                        silverMineral2X = (int) recognition.getLeft();
                    }
                }
                if (goldMineralX != -1 && silverMineral1X != -1 && silverMineral2X != -1)
                    if (goldMineralX < silverMineral1X && goldMineralX < silverMineral2X) {
                        return GoldPos.LEFT;
                    } else if (goldMineralX > silverMineral1X && goldMineralX > silverMineral2X) {
                        return GoldPos.RIGHT;
                    } else {
                        return GoldPos.MIDDLE;
                    }
            }
            return GoldPos.NONE_FOUND;
    
        }
    
    }
    

    Next Steps

    We need to implement detection using OpenCV, and make our class conform to VisionProvider, so that we can easily swap it out for TensorflowIntegration.

    We also need to do the same using our Convolutional Neural Network.

    Finally, it might be beneficial to have a dummy implementation that always “detects” the gold as being in the middle, so that if we know that all our vision implementations are failing, we can use this dummy one to prevent our autonomous from failing.
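
    Such a dummy backend would only be a few lines, since it just has to satisfy the interface above (sketch below; hard-coding GoldPos.MIDDLE as the fallback is our assumption about the statistically safest guess):

    import com.qualcomm.robotcore.hardware.HardwareMap;
    import org.firstinspires.ftc.robotcore.external.Telemetry;

    public class DummyVisionProvider implements VisionProvider {
        @Override
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            telemetry.addData("Vision", "Dummy provider active; always reporting MIDDLE");
        }

        @Override
        public void shutdownVision() {
            // nothing to clean up
        }

        @Override
        public GoldPos detect() {
            return GoldPos.MIDDLE; // fixed guess so the autonomous never stalls
        }
    }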

    OpenCV Support

    OpenCV Support By Arjun

    Task: Add OpenCV support to vision pipeline

    We recently refactored our vision code to allow us to easily swap out vision implementations. We had already implemented TensorFlow, but we hadn't implemented code for using OpenCV instead of TensorFlow. Using the GRIP pipeline we designed earlier, we wrote a class called OpenCVIntegration, which implements VisionProvider. This new class allows us to use OpenCV instead of TensorFlow for our vision implementation.
    Our code for OpenCVIntegration is shown below.

    public class OpenCVIntegration implements VisionProvider {
    
        private VuforiaLocalizer vuforia;
        private Queue<VuforiaLocalizer.CloseableFrame> q;
        private int state = -3;
        private Mat mat;
        private List<MatOfPoint> contours;
        private Point lowest;
    
        private void initVuforia() {
            VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();
            parameters.vuforiaLicenseKey = RC.VUFORIA_LICENSE_KEY;
            parameters.cameraDirection = VuforiaLocalizer.CameraDirection.FRONT;
            vuforia = ClassFactory.getInstance().createVuforia(parameters);
        }
    
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            initVuforia();
            q = vuforia.getFrameQueue();
            state = -2;
    
        }
    
        public void shutdownVision() {}
    
        public GoldPos detect() {
            if (state == -2) {
                if (q.isEmpty())
                    return GoldPos.HOLD_STATE;
                VuforiaLocalizer.CloseableFrame frame = q.poll();
                Image img = VisionUtils.getImageFromFrame(frame, PIXEL_FORMAT.RGB565);
                Bitmap bm = Bitmap.createBitmap(img.getWidth(), img.getHeight(), Bitmap.Config.RGB_565);
                bm.copyPixelsFromBuffer(img.getPixels());
                mat = VisionUtils.bitmapToMat(bm, CvType.CV_8UC3);
            } else if (state == -1) {
                RoverRuckusGripPipeline pipeline = new RoverRuckusGripPipeline();
                pipeline.process(mat);
                contours = pipeline.filterContoursOutput();
            } else if (state == 0) {
                if (contours.size() == 0)
                    return GoldPos.NONE_FOUND;
                lowest = centroidish(contours.get(0));
            } else if (state < contours.size()) {
                Point centroid = centroidish(contours.get(state));
                if (lowest.y > centroid.y)
                    lowest = centroid;
            } else if (state == contours.size()) {
                if (lowest.x < 320d / 3)
                    return GoldPos.LEFT;
                else if (lowest.x < 640d / 3)
                    return GoldPos.MIDDLE;
                else
                    return GoldPos.RIGHT;
            } else {
                return GoldPos.ERROR2;
            }
            state++;
            return GoldPos.HOLD_STATE;
        }
    
        private static Point centroidish(MatOfPoint matOfPoint) {
            Rect br = Imgproc.boundingRect(matOfPoint);
            return new Point(br.x + br.width/2,br.y + br.height/2);
        }
    }
    

    Debug OpenCV Errors

    Debug OpenCV Errors By Arjun

    Task: Use black magic to fix errors in our code

    We recently implemented OpenCV support in our code, but we hadn’t tested it until now. Upon testing, we realized that while our code worked in theory, it misbehaved in practice. Thus, we began the time-tested ritual of debugging our code. From past experience we know that debugging is 90% luck and 10% hoping that you have pleased the gods of programming. We crossed our fingers and hoped that we were able to correctly diagnose the problem.

    The first problem we found was that Vuforia wasn’t reading in our frames. The queue which holds Vuforia frames was always empty. After making lots of small changes, we realized that this was due to not initializing our Vuforia correctly. After fixing this, we got a new error.

    The error message changed! This meant that we fixed one problem, but there was another problem hiding behind it. The new error we found was that our code was unable to access the native OpenCV libraries, namely it could not link to libopencv_java320.so. Unfortunately, we could not debug this any further.

    Next Steps

    We need to continue debugging this problem and find the root cause of it.

    Auto Paths

    Auto Paths By Abhi

    Task: Map and code auto for depot side start

    So, at the beginning of the season we created a bunch of auto paths with the hope of being able to run whichever one we want, allowing us to adapt to our alliance partner's capabilities. I decided to spend today completely coding one of them. Since we still didn't have complete vision software, I made these selections manually so that we can integrate vision later without issues. Here are videos of all of the paths. For the sake of debugging, the bot stops after turning towards the crater, but in reality it will drive on and park in the far crater.

    Center

    Left

    Right

    Next Steps

    We gotta get vision integrated into the paths.

    Issues with Driving

    Issues with Driving By Karina

    Task: Get ready for Regionals

    Regionals is coming up, and there are some driving issues that need to be addressed. Going back to November, one notable issue we had at the Conrad qualifier was the lack of friction between Bigwheel's wheels and the field tiles. There was not enough weight resting on the wheels, which made it hard to move suddenly.

    Since then many changes have been made to Bigwheel in terms of the lift. For starters, we switched out the REV extrusion linear slide for the MGN12H linear slide. We have also added more components to intake and carry minerals. These steps have fixed the previous issue if we keep the lift at a position not exceeding ~70 degrees while moving, but having added a lot of weight to the end of the slide makes rotating around the elbow joint of Bigwheel problematic. As you can see below, Bigwheel's chassis is not heavy enough to stay grounded when deploying the arm (and so I had to step on the back end of Bigwheel like a fool).

    Another issue I encountered during driver practice was trying to deposit minerals in the lander. By "having issues" I mean I couldn't. Superman broke as soon as I tried going into the up position, and this mechanism was intended to raise Bigwheel enough so that it would reach the lander. Regardless of Superman's condition, the container for the minerals was still loose and not attached to the servo. Consequently, I could not rotate the lift past vertical without dropping the minerals I had collected.

    Next Steps

    To run a full practice match, Superman and the container will need to be fixed, as well as the weight issue. Meanwhile, I will practice getting minerals out of the crater.

    Vision Summary

    Vision Summary By Arjun and Abhi

    Task: Reflect on our vision development

    One of our priorities this season was our autonomous, as a perfect autonomous could score us a considerable amount of points. A large portion of these points come from sampling, so that was one of our main focuses within autonomous. Throughout the season, we developed a few different approaches to sampling.

    Early on in the season, we began experimenting with using a Convolutional Neural Network to detect the location of the gold mineral. A Convolutional Neural Network, or CNN, is a machine learning algorithm that uses multiple layers which "vote" on what the output should be based on the outputs of previous layers. We developed a tool to label training images for use in training a CNN, publicly available at https://github.com/arjvik/MineralLabler. We then began training a CNN with the training data we labeled. However, our CNN was unable to reach a high accuracy level, despite us spending lots of time tuning it. A large part of this came down to our lack of training data. We haven't given up on it, though, and we hope to improve this approach in the coming weeks.

    We then turned to other alternatives. At this time, the built-in TensorFlow object detection code was released in the FTC SDK. We tried out TensorFlow, but we were unable to use it reliably: in our testing, it could not always find the location of the gold mineral. We attempted to modify some of the parameters, but since only the trained model was provided to us by FIRST, we were unable to increase its accuracy. We are currently looking into whether we can determine the sampling order even if we only detect some of the sampling minerals. We still have code to use TensorFlow on our robot, but it is only one of a few different vision backends available for selection at runtime.

    Another alternative vision framework we tried was OpenCV. OpenCV is a collection of vision processing algorithms which can be combined to form powerful pipelines. OpenCV pipelines perform sequential transformations on their input image, until it ends up in a desired form, such as a set of contours or boundaries of all minerals detected in the image. We developed an OpenCV pipeline to find the center of the gold mineral given an image of the sampling order. To create our pipeline, we used a tool called GRIP, which allows us to visualize and tune our pipeline. However, since we have found the lighting conditions to greatly influence the quality of detection, we hope to add LED lights to the top of our phone mount so we can get consistent lighting on the field, hopefully further increasing our performance.

    Since we wanted to be able to switch easily between these vision backends, we decided to write a modular framework which allows us to swap out vision implementations with ease. As such, we are now able to choose which vision backend we would like to use during the match, with just a single button press. Because of this, we can also work in parallel on all of the vision backends.

    Next Steps

    We would like to continue improving on and testing our vision software so that we can reliably sample during our autonomous.

    Minor Code Change

    Minor Code Change By Karina

    Task: Save Bigwheel from self destruction

    The other day, when running through Bigwheel's controls, I came across an error in the code. The motors on the elbow did not have min and max values for their range of motion, causing the gears to grind when careless team members got hold of the controller. Needless to say, Iron Reign has gone through a few gears already. Adding stops in the code was simple enough:

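    The actual change lives in our repo; a minimal sketch of the idea looks something like this (the tick limits here are hypothetical placeholders, found in practice by reading encoder telemetry at each extreme):

    import com.qualcomm.robotcore.hardware.DcMotor;

    public class ElbowSafety {
        // Hypothetical limits; the real values come from encoder telemetry at each extreme.
        private static final int ELBOW_MIN_TICKS = 0;
        private static final int ELBOW_MAX_TICKS = 2200;

        private final DcMotor elbow;

        public ElbowSafety(DcMotor elbow) {
            this.elbow = elbow;
        }

        /** Clamp any requested target into the safe range before commanding the motor. */
        public void setTarget(int requestedTicks) {
            int clamped = Math.max(ELBOW_MIN_TICKS, Math.min(ELBOW_MAX_TICKS, requestedTicks));
            elbow.setTargetPosition(clamped);
        }
    }
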
    Testing the code revealed immediate success. I went through the full range of motion and there were no scary noises. Yay for the drive team!

    Next Steps

    Going forward, we will continue to debug code through drive practice.

    Code Updates

    Code Updates By Abhi and Arjun

    Task: Week before competition changes

    Oh boy! It is almost time for competition, and with that comes a super duper autonomous. For the past couple of weeks and today, we focused on making our depot-side auto rock solid. Because our robot wasn't fully built, we couldn't do auto-delatching. Today, we integrated our vision pipelines into the auto and tested all the paths with vision. They seemed to work at home base, but the field we have isn't built to exact specifications.

    Next Steps

    At Wylie, we will have to tune auto paths so they actually work.

    Competition Day Code

    Competition Day Code By Abhi and Arjun

    Task: Update our code

    Woo hoo! We finally made it to the Wylie qualifier and advanced to regionals. However, a lot happened at the competition itself. The reason so many code changes needed to happen at competition was that our robot actually broke the night before. Rather than staying up while the builders fixed the robot, I decided to rest and do some super duper work the following day.

    The first thing that happened was adding the belt code. Previously, we had relied on gravity and the polycarb locks we had on the slides, but we quickly realized that the slides needed to articulate in order not to break Superman. As a result, we added the belts into our collector class and used the encoders to drive them.

    Next, we added manual overrides for all functions of our robot. Simply due to lack of time, we didn't add any presets and I was focusing on just making the robot functional enough for competition. During competition, Karina was actually able to latch during endgame with purely the manual overrides :)

    Finally, we did a lot of auto path tuning. We ended up using a modified DogeCV pipeline, and we were able to accurately detect the gold mineral at all times. However, our practice field wasn't set up to the exact specifications needed, so I spent the majority of my day at the Wylie practice field tuning the depot-side auto (by the end of the day it worked almost perfectly every time).

    Next Steps

    We were very lucky to have qualified early in the season so I could use this competition as just a code testing day. However, it will be very hard to proceed like this so I hope we can have a build freeze to work on code.

    Code Updates

    Code Updates By Abhi

    Task: DISD STEM EXPO

    On today's episode of code changes, we have some cool stuff to look at.

    The picture above is a good representation of our work today. After making sure all the manual drive controls were working, I asked Karina to find the positions she liked for intake, deposit, and latch. Taking these encoder values from telemetry, I created new methods for the robot to run to those positions. As a result, the robot was very functional. We could latch onto the lander in 10 seconds (a much faster endgame than we had ever done).

    Next Steps

    The code is still a little messy so we will have to do further testing before any competition.

    Code Updates

    Code Updates By Abhi

    Task: Woodrow Scrimmage

    Welcome back folks! Today on Code Updates I will tell you all about our changes at Woodrow!

    In the past few days, I remodeled our code so all of our teleop functions would be in specialized functions. Today, I was able to test all these changes as well as add some new stuff. We made a choice to switch out our nylon latch for a forged hook (pretty cool right) so I integrated that servo as well.

    Next Steps

    Great day for changes! We need to do some more code cleaning before we are ready!

    Autonomous Non-Blocking State Machines

    Autonomous Non-Blocking State Machines By Arjun

    Task: Design a state machine class to make autonomous easier

    In the past, writing autonomous routines was an exercise in patience. The way we structured them was tedious and difficult to change. Adding one step to the beginning of an autonomous would require changing the indexes of every single step afterwards, which could take a long time depending on the size of the routine. In addition, simple typos could go undetected, and cause lots of problems. Finally, there was so much repetitive code, making our routines over 400 lines long.

    In order to remedy this, we decided to create a state machine class that takes care of the repetitive parts of our autonomous code. We created a StateMachine class, which allows us to build autonomous routines as sequences of "states", or individual steps. This new state machine system makes autonomous routines much easier to code and tune, as well as removing the possibility for small bugs. We also were able to shorten our code by converting it to the new system, reducing each routine from over 400 lines to approximately 30 lines.

    Internally, StateMachine uses instances of the functional interface State (or some of its subclasses, SingleState for states that only need to be run once, TimedState, for states that are run on a timer, or MineralState, for states that do different things depending on the sampling order). Using a functional interface lets us use lambdas, which further reduce the length of our code. When it is executed, the state machine takes the current state and runs it. If the state is finished, the current state index (stored in a class called Stage) is incremented, and a state switch action is run, which stops all motors.

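    Stripped to its essentials, the idea looks something like the sketch below (simplified, with hypothetical names; the real StateMachine and Stage classes in our repo have more features):

    import java.util.ArrayList;
    import java.util.List;

    public class SimpleStateMachine {

        /** A state is just "do some work, say whether you're done", which is why lambdas fit so well. */
        @FunctionalInterface
        public interface State {
            boolean run(); // return true once this step has finished
        }

        private final List<State> states = new ArrayList<>();
        private int index = 0; // plays the role of our Stage class

        public SimpleStateMachine addState(State s) {
            states.add(s);
            return this; // builder-style chaining, like addMineralState/addTimedState
        }

        /** Call once per loop; returns true when every state has completed. */
        public boolean execute() {
            if (index >= states.size()) return true;
            if (states.get(index).run()) {
                index++;
                // the real class also runs a "state switch action" here that stops all motors
            }
            return index >= states.size();
        }
    }
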
    Here is an autonomous routine which has been converted to the new system:

    private StateMachine auto_depotSample = getStateMachine(autoStage)
                .addNestedStateMachine(auto_setup) //common states to all autonomous
                .addMineralState(mineralStateProvider, //turn to mineral, depending on mineral
                        () -> robot.rotateIMU(39, TURN_TIME), //turn left
                        () -> true, //don't turn if mineral is in the middle
                        () -> robot.rotateIMU(321, TURN_TIME)) //turn right
                .addMineralState(mineralStateProvider, //move to mineral
                        () -> robot.driveForward(true, .604, DRIVE_POWER), //move more on the sides
                        () -> robot.driveForward(true, .47, DRIVE_POWER), //move less in the middle
                        () -> robot.driveForward(true, .604, DRIVE_POWER))
                .addMineralState(mineralStateProvider, //turn to depot
                        () -> robot.rotateIMU(345, TURN_TIME),
                        () -> true,
                        () -> robot.rotateIMU(15, TURN_TIME))
                .addMineralState(mineralStateProvider, //move to depot
                        () -> robot.driveForward(true, .880, DRIVE_POWER),
                        () -> robot.driveForward(true, .762, DRIVE_POWER),
                        () -> robot.driveForward(true, .890, DRIVE_POWER))
                .addTimedState(4, //turn on intake for 4 seconds
                        () -> robot.collector.eject(),
                        () -> robot.collector.stopIntake())
                .build();
    

    Control Mapping

    Control Mapping By Bhanaviya, Abhi, Ben, and Karina

    Task: Map and test controls

    With regionals being a literal week away, the robot needs to be in drive testing phase. So, we started out by mapping out controls as depicted above.

    Upon testing the controls, we realized that when the robot went into Superman-mode, it collapsed due to the lopsided structure of the base since the presets were not as accurate as they could be. The robot had trouble trying to find the right position when attempting to deposit and intake minerals.

    After we found a preset for the intake mechanism we had to test it out to ensure that the arm extended far enough to sample. Our second task was ensuring that the robot could go into superman while still moving forward. To do this, we had to find the position which allowed the smaller wheel at the base of the robot to move forward while the robot was in motion.

    Next Steps

    We plan to revisit the robot's balancing issue in the next meet and find the accurate presets to fix the problem.

    Big Wheel Articulations

    Big Wheel Articulations By Abhi

    Task: Summary of all Big Wheel movements

    The two main subsystems for our robot this year are the Superman arm and the elbows. Because our entire robot relies on these two working together well, we paired them in our Pose class and created a set of methods to articulate the subsystems. The presets were created by using manual control to find the encoder values and building a set of state machines to move to them.

    The reason each articulation is necessary and complicated is that we have to manage our center of gravity. When we move the arm, the center of gravity moves with it, since the lift is so heavy. When transitioning from intake to deposit, we can't go directly, because the robot would tip over; we have to go to safe drive first and then to deposit. We are still managing this constantly with driver control, but we hope to handle it easily with distance sensors in the future.

    The position seen above is called "safe drive". During normal match play, our drivers can go to this position to navigate the field quickly and with the arm out of the way.

    When the driver control period starts, we normally navigate to the crater then enter the intake position shown above. From this position, we can safely pick up minerals from the crater.

    From the intake position, the robot goes to safe drive to fix the weight balance then goes to the deposit position shown above. The arm can still extend upwards above the lander and our automatic sorter can place the minerals appropriately.

    During the end game, we enter a latchable position where our hook can easily slide into the latch. After hooking on, our robot can lift itself slightly off the ground.

    At the beginning of the match, we can completely close the arm and Superman to fit in the sizing cube and latch onto the lander.

    As you can see, there are a lot of articulations that need to work together during the course of a match. By putting this information in a state machine, we can easily toggle between articulations, as sketched below. Refer to our code snippets for more details.

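    To give a flavor of how that can be expressed (names are illustrative; the real presets and state machines live in our Pose class), the articulations boil down to an enum plus a rule that routes intake-to-deposit through safe drive:

    public class BigWheelArticulations {

        public enum Articulation {
            FOLDED,     // fits in the sizing cube, hook on the lander
            INTAKE,     // arm low over the crater lip
            SAFE_DRIVE, // arm tucked in, center of gravity over the wheels
            DEPOSIT,    // Superman up, arm extended above the lander
            LATCH       // hook aligned with the lander bracket for end game
        }

        private Articulation current = Articulation.FOLDED;

        /** Intake-to-deposit is routed through SAFE_DRIVE so the weight shift doesn't tip the robot. */
        public void requestArticulation(Articulation target) {
            if (current == Articulation.INTAKE && target == Articulation.DEPOSIT) {
                current = Articulation.SAFE_DRIVE; // settle here first, then request DEPOSIT again
            } else {
                current = target;
            }
            // in the real code, each articulation kicks off a state machine that drives
            // the elbows and the Superman arm to their stored encoder presets
        }

        public Articulation getCurrent() {
            return current;
        }
    }
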
    Next Steps

    At this point, we have 4 cycles in 1 minute 30 seconds. By adding some upgrades to the articulations using our new distance sensors, we hope to speed this up even more.

    Cart Hack

    Cart Hack By Arjun

    Task: Tweaking ftc_app to allow us to drive robots without a Driver Station phone

    As you already know, Iron Reign has a mechanized cart called Cartbot that we bring to competitions. We used the FTC control system to build it, so we could gain experience. However, this has one issue: we can only use one pair of Robot Controller and Driver Station phones at a competition, because of WiFi interference problems.

    To avoid this pitfall, we decided to tweak the ftc_app our team uses to allow us to plug in a controller directly into the Robot Controller. This cuts out the need for a Driver Station phone, which means that we can drive around Cartbot without worrying about breaking any rules.

    Another use for this tweak could be for testing, since with this new system we don't need a Driver Station when we are testing our tele-op.

    As of now this modification lives in a separate branch of our code, since we don't know how it may affect our match code. We hope to merge this later once we confirm it doesn't cause any issues.

    Localization

    Localization By Ben

    Localization

    A feature that is essential to many advanced autonomous sequences is the ability to know the robot's absolute location (x position, y position, heading). For our localization, we determine the robot's position relative to the field's coordinate frame. To track our position, we use encoders (to determine displacement) and a gyro (to determine heading).

    Our robot's translational velocity can be determined by seeing how our encoder counts change over time. Heading velocity is simply how our heading changes over time. Thus, our actual velocity can be represented by the following equation.

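    Written out (the notation here is our reconstruction: s is the distance reported by the encoders and θ is the gyro heading), the velocity in the field frame is:

    \dot{x} = \dot{s}\,\cos\theta, \qquad
    \dot{y} = \dot{s}\,\sin\theta, \qquad
    \omega  = \dot{\theta}
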
    Integrating that to find our position yields
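
    The integral below is our reconstruction of that step (same assumed symbols), together with the discrete update we would actually compute each loop:

    x(t) = x_0 + \int_0^t \dot{s}\,\cos\theta \, dt, \qquad
    y(t) = y_0 + \int_0^t \dot{s}\,\sin\theta \, dt

    x_{k+1} = x_k + \Delta s_k \cos\theta_k, \qquad
    y_{k+1} = y_k + \Delta s_k \sin\theta_k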

    Using this new equation, we can obtain the robot's updated x and y coordinates.

    Balancing Robot

    Balancing Robot By Ben

    Initial Work on Balancing Robot

    We debated whether to use model-based control (to factor in the robot's dynamics) or to simply run a PID loop and stringently tune the gains. We decided to use a PID loop due to time constraints, and because we got close to getting the robot to balance with a PID loop. Next time we will continue fine-tuning the gains, and use a graph plotting our current pitch versus the desired pitch to determine how we should tweak the gains to smoothly reach the setpoint. Another factor we need to account for is the varying loop times; we should factor these loop times into the PID calculations to ensure consistency.

    UPDATE: Today we managed to get our robot to balance for 30 seconds after spending about an hour tuning the PID gains. We made significant progress, but there is a flaw in our algorithm that needs to be addressed. At the moment, we have a fixed pitch that we want the robot to balance at, but due to the weight distribution of the robot, forcing it to balance at a fixed setpoint will not work well and will cause it to continually oscillate around that pitch instead of maintaining it.
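
    A minimal sketch of the loop-time-aware PID we have in mind (gains and names here are hypothetical, not our tuned values):

    public class BalancePid {
        private final double kP, kI, kD;
        private double integral = 0, lastError = 0;
        private long lastTimeNanos = System.nanoTime();

        public BalancePid(double kP, double kI, double kD) {
            this.kP = kP;
            this.kI = kI;
            this.kD = kD;
        }

        /** Returns a motor power in [-1, 1] given the desired and measured pitch in degrees. */
        public double update(double targetPitch, double currentPitch) {
            long now = System.nanoTime();
            double dt = (now - lastTimeNanos) / 1e9; // actual loop time in seconds
            lastTimeNanos = now;

            double error = targetPitch - currentPitch;
            integral += error * dt;                       // scale the I term by real loop time
            double derivative = (error - lastError) / dt; // and the D term as well
            lastError = error;

            double output = kP * error + kI * integral + kD * derivative;
            return Math.max(-1, Math.min(1, output));
        }
    }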