Articles by tag: control

    Swerve Drive Experiment

    Swerve Drive Experiment By Abhi

    Task: Consider a Swerve Drive base

    Last season, we saw many robots that utilized a swerve drive rather than a mecanum drive for omnidirectional movement. To expand Iron Reign's repertoire of drive bases, I wanted to investigate this chassis further. Swerve was considered as an alternative to mecanum because its traction wheels on pivoting modules offer higher speed while keeping the maneuverability needed for quick scoring. Before we could consider making a prototype, we investigated several existing examples.

    Among the examples considered was the PRINT swerve for FTC by team 9773. After reading their detailed assembly instructions, I moved away from their design for several reasons. First, the final cost of the drive train was very high; we did not have a large budget despite help from our sponsors, and if this drive train was not functional, or if the chassis didn't make sense to use in Rover Ruckus, we would have almost no money left for an alternate drive train. Also, the parts used by 9773 involved X-rail rather than extrusion rail from REV. This would cause problems down the road, as we would need to redesign the REVolution system for X-rail.

    Another example, from team 9048, appeared more feasible because it used REV rail and many 3D-printed parts. Since they didn't publish a parts list, we had to roughly estimate the cost from the REV and AndyMark websites. Upon further analysis, we realized that the cost, though lower than that of 9773's chassis, would still be a considerable chunk of our budget.

    At this point it was evident that most swerve drives in use are very expensive. Wary of making this investment, I worked with our sister team 3734 to create a budget swerve from materials around the house. A basic sketch is shown below.

    Next Steps

    Scavenge for parts in the house and Robodojo to make swerve modules.

    Swerve Drive Prototype

    Swerve Drive Prototype By Abhi and Christian

    Task: Build a Swerve Drive base

    Over the past week, I worked with Christian and another member of Imperial to prototype a drive train. Due to limited resources, we decided to use Tetrix parts since we had an abundance of them. We designed the swerve so that a servo would turn each module and the motors would be attached directly to the wheels.

    Immediately we noticed it was very feeble. The servos were working very hard to turn the heavy modules, and the motors had trouble staying aligned. Programming the chassis was also a challenge. After further experimenting, the base broke. This was a moment of realization: not only was swerve expensive and complicated, we would also need to be able to replace a module very quickly at competition, which would require more spare parts and an immaculate design. With all these considerations, I ultimately decided that swerve wasn't worth using as a drive chassis at this time.

    Next Steps

    Consider and prototype other chassis designs until Rover Ruckus begins.

    Position Tracking

    Position Tracking By Abhi

    Task: Design a way to track the robot's location

    During Relic Recovery season, we had many problems with our autonomous due to slippage in the mecanum wheels and our need to align to the balancing stone, both of which created high error in our encoder feedback. To address this recurring issue, we searched for an alternative way to identify our position on the field. Upon researching online and discussing with other teams, we discovered an alternative: a tracking sensor built from unpowered omni wheels. This tracker may be used during Rover Ruckus or beyond, depending on what our chassis will be.

    We designed the tracker by building a small right-angle REV rail assembly. On this, we attached two omni wheels at 90 degrees to one another and added axle encoders. The omni wheels are not driven because we simply want them to glide along the floor and report encoder values as the robot moves. This method of tracking is commonly referred to as "dead wheel tracking". Since the omnis are always touching the ground, any movement of the robot is sensed by them, which prevents reading errors caused by defense or drive-wheel slippage.

    To test the concept, we attached the apparatus to ARGOS. After upgrading the ARGOS code to use the IMU and the omni wheel encoders, we added some basic trigonometry to accurately track position. The omni setup was relatively accurate and may be used for future projects and robots.
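    To illustrate the trigonometry involved, here is a minimal sketch of the per-loop update such a tracker performs. The tick counts would come from the two encoders and the heading from the IMU; the ticks-per-centimeter constant is a placeholder, not a calibrated value.

    // Minimal two-wheel dead-wheel odometry sketch: one encoder measures forward
    // rolling, the other sideways rolling, and the IMU supplies the heading.
    public class DeadWheelTracker {
        static final double TICKS_PER_CM = 13.7; // placeholder: depends on wheel and encoder choice

        double x = 0, y = 0;                 // field-relative position in cm
        int lastForward = 0, lastStrafe = 0;

        public void update(int forwardTicks, int strafeTicks, double headingRadians) {
            // robot-relative displacement since the last update
            double dForward = (forwardTicks - lastForward) / TICKS_PER_CM;
            double dStrafe  = (strafeTicks  - lastStrafe)  / TICKS_PER_CM;
            lastForward = forwardTicks;
            lastStrafe  = strafeTicks;

            // rotate the local displacement into field coordinates using the IMU heading
            x += dForward * Math.cos(headingRadians) - dStrafe * Math.sin(headingRadians);
            y += dForward * Math.sin(headingRadians) + dStrafe * Math.cos(headingRadians);
        }
    }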

    Next Steps

    Now that we have a prototype to track position without using too many resources, we need to test it on an actual FTC chassis. Depending on whether or not there is terrain in Rover Ruckus, the use of this system will change. Until then, we can still experiment with this and develop a useful multipurpose sensor.

    Replay Autonomous

    Replay Autonomous By Arjun

    Task: Design a program to record and replay a driver run

    One of the difficulties in writing an autonomous program is the long development cycle. We have to unplug the robot controller, plug it into a computer, make a few changes to the code, recompile and download the code, and then retest our program. All this must be done over and over again, until the autonomous is perfected. Each autonomous takes ~4 hours to write and tune. Over the entire season, we spend over 40 hours working on autonomous programs.

    One possible solution for this is to record a driver running through the autonomous, and then replay it. I used this solution on my previous robotics team. Since we had no access to a field, we had to write our entire autonomous at a competition. After some brainstorming, we decided to write a program to record our driver as he ran through our autonomous routine and then execute it during a match. It worked very well, and got us a few extra points each match.

    Using this program, writing an autonomous program is reduced to a matter of minutes. We just need to run through our autonomous routine a few times until we're happy with it, and then take the data from the console and paste it into our program. Then we recompile the program and run it.

    There are two parts to our replay program. One part (a Tele-op Opmode) records the driver's motions and outputs them to the Android console. The second part (an Autonomous Opmode) reads in that data and turns it into a working autonomous program.
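    As a rough illustration of the idea (not our actual competition code), the sketch below records the powers applied to a simple two-motor tank drive every 50 ms and writes them to the Android log; a second opmode replays a pasted-in copy of that log. The motor names and sampling interval are assumptions.

    import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
    import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
    import com.qualcomm.robotcore.eventloop.opmode.OpMode;
    import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
    import com.qualcomm.robotcore.hardware.DcMotor;
    import com.qualcomm.robotcore.util.ElapsedTime;

    @TeleOp(name = "RecordDriverRun")
    public class RecordDriverRun extends OpMode {
        private DcMotor left, right;
        private ElapsedTime timer = new ElapsedTime();

        @Override public void init() {
            left = hardwareMap.dcMotor.get("left");
            right = hardwareMap.dcMotor.get("right");
        }

        @Override public void loop() {
            double l = -gamepad1.left_stick_y, r = -gamepad1.right_stick_y;
            left.setPower(l);
            right.setPower(r);
            if (timer.milliseconds() > 50) {                 // sample roughly every 50 ms
                android.util.Log.i("REPLAY", l + "," + r);   // appears in the Android console
                timer.reset();
            }
        }
    }

    // In a separate file: replays the pasted-in recording at the same interval.
    @Autonomous(name = "ReplayDriverRun")
    public class ReplayDriverRun extends LinearOpMode {
        private static final double[][] RECORDING = { {0.5, 0.5}, {0.5, 0.4} /* ...pasted from the log... */ };

        @Override public void runOpMode() throws InterruptedException {
            DcMotor left = hardwareMap.dcMotor.get("left");
            DcMotor right = hardwareMap.dcMotor.get("right");
            waitForStart();
            for (double[] step : RECORDING) {
                left.setPower(step[0]);
                right.setPower(step[1]);
                sleep(50);                                   // same interval used while recording
            }
            left.setPower(0);
            right.setPower(0);
        }
    }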

    Next Steps

    Our current replay program requires one recompilation. While it is very quick, one possible next step is to save the autonomous data straight into the phone's internal memory, so that we do not have to recompile the program. This could further reduce the time required to create an autonomous.

    One more next step could be a way to easily edit the autonomous. The output data is just a big list of numbers, and it is very difficult to edit it. If we need to tune the autonomous due to wear and tear on the robot, it is difficult to do so without rerecording. If we can figure out a mechanism for editing the generated autonomous, we can further reduce the time we spend creating autonomous programs.

    Rover Ruckus Brainstorming & Initial Thoughts

    Rover Ruckus Brainstorming & Initial Thoughts By Ethan, Charlotte, Kenna, Evan, Abhi, Arjun, Karina, and Justin

    Task: Come up with ideas for the 2018-19 season

    Today was the first meeting of the Rover Ruckus season! On top of that, we had our first round of new recruits (20!). It was an extremely hectic session, but we came up with a lot of new ideas.

    Building

    • A One-way Intake System

    • This suggestion uses a plastic flap to "trap" game elements inside it, similar to the lid of a soda cup. You can put marbles through the straw-hole, but you can't easily get them back out.
    • Crater Bracing
    • In the past, we've had center-of-balance issues with our robot. To counteract this, we plan to attach shaped braces to our robot such that it can hold on to the walls and not tip over.
    • Extendable Arm + Silicone Grip

    • This one is simple - a linear slide arm attached to a motor so that it can pick up game elements and rotate. We fear, however, that many teams will adopt this strategy, so we probably won't do it. One unique part of our design would be the silicone grips, so that the "claws" can firmly grasp the silver and gold.
    • Binder-ring Hanger

    • When we did Res-Q, we dropped our robot more times than we'd like to admit. To prevent that, we're designing an interlocking mechanism that the robot can use to hang. It'll have an indent and a corresponding recess that resists lateral force by nature of the indent, but can be opened easily.
    • Passive Intake
    • Inspired by a few FRC Stronghold intake systems, we designed a passive intake. Attached to a weak spring, it would have the ability to move over game elements before falling back down to capture them. The benefit of this design is that we wouldn't have to use an extra motor for intake, but we risk controlling more than two elements at the same time.
    • Mecanum
    • Mecanum is our Ol' Faithful. We've used it for the past three years, so we're loath to abandon it this year. It's still a good option, but strafing isn't as important this season, and we may need to emphasize speed instead. Plus, we're not exactly sure how to get over the crater walls with mecanum.
    • Tape Measure
    • In Res-Q, we used a tape-measure system to pull our robot up, and we believe that we could do the same again this year. One issue is that our tape measure system is ridiculously heavy (~5 lbs) and with the new weight limits, this may not be ideal.
    • Mining
    • We're currently thinking of a "mining mechanism" that can score two minerals at a time extremely quickly in exchange for not being able to climb. It'll involve a conveyor belt and a set of linear slides such that the objects in the crater can automatically be transferred to either the low-scoring zone or the higher one.

    Journal

    This year, we may switch to weekly summaries instead of meeting logs so that our journal is more reasonable for judges to read. In particular, we were inspired by team Nonstandard Deviation, which has an amazing engineering journal that we recommend readers check out.

    Programming

    Luckily, this year's autonomous seems easier to program. We're working on some autonomous diagrams that we'll release in the next couple of weeks. Aside from that, we have such a developed code base that we don't really need to update it much further.

    Next Steps

    We're going to prototype these ideas in the coming weeks and develop our thoughts more thoroughly.

    Vision Discussion

    Vision Discussion By Arjun and Abhi

    Task: Consider potential vision approaches for sampling

    Part of this year's game requires us to detect the location of minerals on the field. The main use for this is sampling: during autonomous, we need to move only the gold mineral, without touching the silver minerals, in order to earn points. There are a few ways we could detect the location of the gold mineral.

    First, we could use OpenCV to run transformations on the image that the camera sees. We would have to design an OpenCV pipeline which identifies yellow blobs, filters out those that aren't minerals, and finds the centers of the blobs which are minerals. This is most likely the approach that many teams will use, and its benefit is that it is easy enough to write. However, it may not work in lighting conditions that weren't tested while the pipeline was being designed.
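    A minimal sketch of such a pipeline is shown below, assuming OpenCV's Java bindings are available; the HSV thresholds and minimum blob area are placeholder values that would need tuning on a real field.

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    import java.util.ArrayList;
    import java.util.List;

    public class GoldBlobFinder {
        // Returns the center of the largest yellow blob, or null if none is found.
        public static Point findGoldCenter(Mat rgbFrame) {
            Mat hsv = new Mat();
            Imgproc.cvtColor(rgbFrame, hsv, Imgproc.COLOR_RGB2HSV);

            // keep only yellow-ish pixels (ranges are assumptions to tune)
            Mat mask = new Mat();
            Core.inRange(hsv, new Scalar(15, 100, 100), new Scalar(35, 255, 255), mask);

            // find the outlines of the remaining blobs
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

            // ignore small blobs that are unlikely to be minerals and keep the largest
            MatOfPoint best = null;
            double bestArea = 500; // minimum area in pixels, an assumed tuning value
            for (MatOfPoint c : contours) {
                double area = Imgproc.contourArea(c);
                if (area > bestArea) { bestArea = area; best = c; }
            }
            if (best == null) return null;

            // center of the bounding box of the winning contour
            Rect box = Imgproc.boundingRect(best);
            return new Point(box.x + box.width / 2.0, box.y + box.height / 2.0);
        }
    }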

    Another approach is to use a Convolutional Neural Network (CNN) to identify the location of the gold mineral. Convolutional Neural Networks are a class of machine learning algorithms that "learn" to find patterns in images by looking at large numbers of samples. In order to develop a CNN to identify minerals, we must take lots of photos of the sampling setup in different arrangements and lighting conditions, and then manually label them. Then, the algorithm will "learn" how to differentiate gold minerals from other objects on the field. A CNN should work in many different lighting conditions; however, it is also more difficult to develop.

    Next Steps

    As of now, Iron Reign is going to attempt both methods of classification and compare their performance.

    CNN Training

    CNN Training By Arjun and Abhi

    Task: Capture training data for a Convolutional Neural Network

    In order to train a Convolutional Neural Network, we need a whole bunch of training images. So we got out on the field and took 125 photos of the sampling setup in different positions and angles. Our next step is to label the gold minerals in all of these photos, so that we can train a Convolutional Neural Network to locate gold minerals by learning from the patterns in the training data.

    Next Steps

    Next, we will go through and label the gold minerals. In addition, we must create a program to process these images.

    Autonomous Path Planning

    Autonomous Path Planning By Abhi

    Task: Map Autonomous paths

    With the high point potential available in this year's autonomous, it is essential to plan autonomous paths now. This year's auto is more complicated due to potential collisions with alliance partners, in addition to an unknown period of time spent delatching from the lander. To address both these concerns, I developed four autonomous paths we will investigate for use during competition.

    When making auto paths, there are some things to consider. First, the field is identical for the red and blue alliances, meaning we don't need to rewrite the code to run on the other side of the field. Second, we have to account for our alliance partner's autonomous, if they have one, and adapt to their path so we don't crash into them. Third, we have to avoid the other alliance's robots to avoid penalties. There are no explicit boundaries for auto this year, but if we somehow interrupt the opponent's auto we get heavily penalized. Now, with these in mind, let's look at the paths.

    This path plan is the simplest of all the autonomi. It assumes that our alliance partner has an autonomous and that our robot only takes care of half the functions. It starts with detaching from the lander, then sampling the proper mineral, deploying the team marker, and parking in the crater. The reason I chose the opposite crater instead of the near one is that it is a shorter drive and there is less chance of interfering with our alliance partner. The risk is that this plan may interfere with the opponent's autonomous, but if we drive strategically, hugging the wall, we shouldn't have issues.

    This path is also a "simple" path, but it is clearly more involved. The issue is that the team marker depot is not on the same side as the lander, forcing us to drive all the way down and back to park in the crater. I could also change this one to go to the opposite crater, but that may interfere with our alliance partner's path.

    This is one of the autonomi that assumes our alliance partners don't have an autonomous and is built for multi-functionality. The time restriction makes this autonomous unlikely but it is still nice to plan out a path for it.

    This is also one of the autonomi that assumes our alliance partners don't have an autonomous. It is the simpler of the two but still has the same restrictions.

    Next Steps

    Although it's great to think these paths will work out in the end, we might need to change them a lot. With potential collisions with alliance partners and opponents, we might need a drop-down menu of sorts on the driver station that lets us put together different pieces so we can pick and choose the auto plan. Maybe we could even draw out the path during init. All of this is only at the speculation stage right now.

    CNN Training Program

    CNN Training Program By Arjun and Abhi

    Task: Designing a program to label training data for our Convolutional Neural Network

    In order to use the captured training data, we need to label it by identifying the location of the gold mineral in each image. We also need to normalize it by resizing the training images to a constant size (320x240 pixels). While we could do this by hand, it would be a pain: we would have to resize each individual picture, identify the coordinates of the center of the gold mineral, and then create a file to store the resized image and coordinates.

    Instead of doing this, we decided to write a program to do this for us. That way, we could just click on the gold mineral on the screen, and the program would do the resizing and coordinate-finding for us. Thus, the process of labeling the images will be much easier.

    Throughout the weekend, I worked on this program. The end result is shown above.
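    The core resize-and-rescale step amounts to something like the sketch below, using standard Java imaging APIs; the class and method names here are illustrative, not the labeler's actual code.

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class LabelScaler {
        static final int OUT_W = 320, OUT_H = 240;   // normalized image size

        // Resize the source image to 320x240 and save it.
        static void resize(File src, File dst) throws Exception {
            BufferedImage in = ImageIO.read(src);
            BufferedImage out = new BufferedImage(OUT_W, OUT_H, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = out.createGraphics();
            g.drawImage(in, 0, 0, OUT_W, OUT_H, null); // scale to the target size
            g.dispose();
            ImageIO.write(out, "png", dst);
        }

        // Convert a click on the full-size image into coordinates in the resized image.
        static int[] scaleClick(int clickX, int clickY, int srcW, int srcH) {
            return new int[] { clickX * OUT_W / srcW, clickY * OUT_H / srcH };
        }
    }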

    Next Steps

    Now that the program has been developed, we need to actually use it to label the training images we have. Then, we can train the Convolutional Neural Network.

    Labelling Minerals - CNN

    Labelling Minerals - CNN By Arjun and Abhi

    Task: Label training images to train a Neural Network

    Now that we have software to make labeling the training data easier, we have to actually use it to label the training images. Abhi and I split up our training data into two halves, and we each labeled one half. Then, when we had completed the labeling, we recombined the images. The images we labeled are publicly available at https://github.com/arjvik/RoverRuckusTrainingData.

    Next Steps

    We need to actually write a Convolutional Neural Network using the training data we collected.

    Upgrading to FTC SDK version 4.0

    Upgrading to FTC SDK version 4.0 By Arjun

    Task: Upgrade our code to the latest version of the FTC SDK

    FTC recently released version 4.0 of their SDK, with initial support for external cameras, better PIDF motor control, improved wireless connectivity, new sensors, and other general improvements. Our code was based on last year's SDK version 3.7, so we needed to merge the new SDK with our repository.

    The merge was slightly difficult, as there were some issues with the Gradle build system. However, after a little fiddling with the configuration, as well as fixing some errors in the internal SDK code we had modified, we were able to successfully merge the new SDK.

    After the merge, we tested that our code still worked on Kraken, last year's competition robot. It ran with no problems.

    Developing a CNN

    Developing a CNN By Arjun and Abhi

    Task: Begin developing a Convolutional Neural Network using TensorFlow and Python

    Now that we have gathered and labeled our training data, we began writing our Convolutional Neural Network. Since Abhi had used Python and TensorFlow to write a neural network in the past during his visit to MIT over the summer, we decided to do the same now.

    After running our model, however, we noticed that it was not very accurate. Though we knew this was due to a bad choice of layer structure or hyperparameters, we were not able to determine the exact cause. (Hyperparameters are settings, such as learning rate and layer sizes, that must be chosen before training; if they are off, the neural network will not learn well.) We fiddled with many of the hyperparameters and layer structure options, but were unable to fix the inaccuracy.

    # Keras imports (omitted from the original snippet)
    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    # n_rows and n_cols are the training image dimensions, defined elsewhere
    model = Sequential()
    model.add(Conv2D(64, activation="relu", input_shape=(n_rows, n_cols, 1), kernel_size=(3,3)))
    model.add(Conv2D(32, activation="relu", kernel_size=(3,3)))
    model.add(MaxPooling2D(pool_size=(8, 8), padding="same"))
    model.add(Conv2D(8, activation="tanh", kernel_size=(3,3)))
    model.add(MaxPooling2D(pool_size=(8, 8), padding="same"))
    model.add(Conv2D(4, activation="relu", kernel_size=(3,3)))
    model.add(Conv2D(4, activation="tanh", kernel_size=(1,1)))
    model.add(Flatten())
    model.add(Dense(2, activation="linear"))  # outputs the (x, y) of the gold mineral
    model.summary()
    

    Next Steps

    We have not fully given up, though. We plan to keep attempting to improve the accuracy of our neural network model.

    Rewriting CNN

    Rewriting CNN By Arjun and Abhi

    Task: Begin rewriting the Convolutional Neural Network using Java and DL4J

    While we had been using Python and TensorFlow to train our convolutional neural network, we decided to attempt writing it in Java as well, since the code for our robot is entirely in Java, and before we can use our neural network on the robot it must be callable from Java.

    We also decided to try using DL4J, a competing library to TensorFlow, to write our neural network, to determine whether it was easier to write a neural network using DL4J or TensorFlow. We found that both were similarly easy to use, and while each had a different style, code written with either was equally easy to read and maintain.

    		//Download dataset
    		DataDownloader downloader = new DataDownloader();
    		File rootDir = downloader.downloadFilesFromGit("https://github.com/arjvik/RoverRuckusTrainingData.git", "data/RoverRuckusTrainingData", "TrainingData");
    		
    		//Read in dataset
    		DataSetIterator iterator = new CustomDataSetIterator(rootDir, 1);
    		
    		//Normalization
    		DataNormalization scaler = new ImagePreProcessingScaler(0, 1);
    		scaler.fit(iterator);
    		iterator.setPreProcessor(scaler);
    		
    		//Read in test dataset
    		DataSetIterator testIterator = new CustomDataSetIterator(new File(rootDir, "Test"), 1);
    			
    		//Test Normalization
    		DataNormalization testScaler = new ImagePreProcessingScaler(0, 1);
    		testScaler.fit(testIterator);
    		testIterator.setPreProcessor(testScaler);
    		
    		//Layer Configuration
    		MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
    				.seed(SEED)
    				.l2(0.005)
    				.weightInit(WeightInit.XAVIER)
    				.list()
    				.layer(0, new ConvolutionLayer.Builder()
    						.nIn(1)
    						.kernelSize(3, 3)
    						.stride(1, 1)
    						.activation(Activation.RELU)
    						.build())
    				.layer(1, new ConvolutionLayer.Builder()
    						.nIn(1)
    						.kernelSize(3, 3)
    						.stride(1, 1)
    						.activation(Activation.RELU)
    						.build())
    				/* ...more layer code... */
    				.build();
    

    Next Steps

    We still need to fix the inaccuracy in the predictions made by our neural network.

    Pose BigWheel

    Pose BigWheel By Abhi

    Task: New Pose for Big Wheel robot

    Historically, Iron Reign has used a class called "Pose" to hold all of the hardware mapping of our robot instead of putting it directly into our opmodes. This has led to cleaner code and smoother integration with our crazier functions. However, we have used the same Pose for the past two years, since both robots had an almost identical drive base. Since there wasn't a viable differential-drive Pose, I made a new one, drawing inspiration from the mecanum version. This Pose will be used for robot setup in our code from this point onwards.

    We start by initializing everything, including PID constants and all of our motors and sensors. I will skip the details in this post since this part is boilerplate common to most teams' code.

    In the init, I made the hardware mapping for the motors we have on BigWheel right now. Other functions will come in later.

    Here is where a lot of the work happens. This is what allows our robot to move accurately using IMU and encoder values.
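    As a rough sketch of the kind of method involved (not our actual Pose code), a simple proportional turn toward an IMU heading looks like this; the helpers getHeading and setMotorPowers, the gain, and the tolerance are placeholders.

    // Turn in place toward a target heading using the IMU; returns true when done,
    // matching the convention our state machine expects from movement methods.
    public boolean turnToHeading(double targetAngle, double maxPower) {
        double error = wrapAngle(targetAngle - getHeading());    // shortest signed difference
        if (Math.abs(error) < 2.0) {                             // within tolerance: stop and report done
            setMotorPowers(0, 0);
            return true;
        }
        double power = Math.max(-maxPower, Math.min(maxPower, 0.02 * error)); // P-control
        setMotorPowers(-power, power);                           // opposite wheel powers spin the robot
        return false;                                            // keep calling until this returns true
    }

    // Keep angles in (-180, 180] so the robot always turns the short way around.
    private double wrapAngle(double degrees) {
        while (degrees > 180) degrees -= 360;
        while (degrees <= -180) degrees += 360;
        return degrees;
    }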

    There are many other methods beyond these, but they mostly involve technical trigonometry. I won't bore you with the details; our code is open source, so you can find whatever you need on our GitHub!

    RIP CNN

    RIP CNN By Abhi

    Task: Farewell Iron Reign's CNN

    FTC released new SDK code supporting TensorFlow, which can automatically detect minerals with a model they trained. Unfortunately, all of our CNN work was undercut by this update. The silver lining is that we have done enough research into how CNNs work to better understand what the FTC app is doing under the hood. In addition, we may retrain the model if we feel it doesn't work well. But for now, it is time to bid farewell to our CNN.

    Next Steps

    From this point, we will analyze the provided TensorFlow model to determine how well it detects the minerals. At the same time, we will also look into OpenCV detection.

    Code Post-Mortem after Conrad Qualifier

    Code Post-Mortem after Conrad Qualifier By Arjun and Abhi

    Task: Analyze code failure at Conrad Qualifier

    Iron Reign has been working hard on our robot, but despite that, we did not perform as well as we had hoped, largely owing to our autonomous.

    Our autonomous plan was fairly simple: perform sampling, deploy the team marker, then drive to the crater to park. We planned to use the built-in TensorFlow object detection for our sampling, and thus assumed that our autonomous would be fairly easy.

    On Thursday, I worked on writing a class to help us detect the location of the gold mineral using the built-in TensorFlow object detection. While testing this class, I noticed that it produced an error rather than outputting the location of the gold mineral. This error was not diagnosed until the morning of the competition.

    On Friday, Abhi worked on writing code for the driving part of the autonomous. He wrote three different autonomous routines, one for each position of the gold mineral. His code did not yet select which routine to use, leaving it open for us to connect to the TensorFlow class to determine which position the gold mineral was in.

    On Saturday, the morning of the competition, we debugged the TensorFlow class written earlier and determined the cause of the error. We had misused the TensorFlow object detection API, and after we corrected that, our code no longer produced an error. Then we realized that TensorFlow only worked at certain camera positions and angles, so we had to adjust the position of our robot on the field so that the camera could reliably see the sampling minerals.

    Our code failure was mostly due to the fact that we only started working on our autonomous two days before the competition. Next time, we plan to make our autonomous an integral part of our robot, and focus on it much earlier.

    Next Steps

    We will spend more time focusing on code and autonomous, to ensure that we enter our next competition with a fully working autonomous.

    DPRG Vision Presentation

    DPRG Vision Presentation By Arjun and Abhi

    Task: Present to the Dallas Personal Robotics Group about computer vision

    We presented to the DPRG about our computer vision, touching on subjects including OpenCV, Vuforia, TensorFlow, and training our own Convolutional Neural Network. Everyone we presented to was very interested in our work, and they asked us many questions. We also received quite a few suggestions on ways we could improve the performance of our vision solutions. The presentation can be seen below.

    Next Steps

    We plan to research what they suggested, such as retraining our neural networks and reusing our old training images.

    Refactoring Vision Code

    Refactoring Vision Code By Arjun

    Task: Refactor Vision Code

    Iron Reign has been working on multiple vision pipelines, including TensorFlow, OpenCV, and a home-grown Convolutional Neural Network. Until now, all our code assumed that we only used TensorFlow, and we wanted to be able to switch out vision implementations quickly. As such, we decided to abstract away the actual vision pipeline used, which allows us to choose between vision implementations at runtime.

    We did this by creating a Java interface, VisionProvider, seen below. We then made our TensorflowIntegration class (our code for detecting mineral positions using TensorFlow) implement VisionProvider.

    Next, we changed our opmode to use the new VisionProvider interface. We added code to allow us to switch vision implementations using the left button on the dpad.
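    As a rough sketch (not our exact opmode code), the switching logic looks something like this, assuming an array of available providers and a simple rising-edge check on the dpad button:

    VisionProvider[] providers = { new TensorflowIntegration() /* more providers as we write them */ };
    int currentProvider = 0;
    boolean dpadLeftWasPressed = false;

    // inside the opmode's loop():
    if (gamepad1.dpad_left && !dpadLeftWasPressed) {          // trigger once per press
        providers[currentProvider].shutdownVision();          // stop the old backend
        currentProvider = (currentProvider + 1) % providers.length;
        providers[currentProvider].initializeVision(hardwareMap, telemetry);
        telemetry.addData("Vision", providers[currentProvider].getClass().getSimpleName());
    }
    dpadLeftWasPressed = gamepad1.dpad_left;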

    Our code for VisionProvider is shown below.

    public interface VisionProvider {
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry);
        public void shutdownVision();
        public GoldPos detect();
    }

    These methods are implemented in the integration classes.
    Our new code for TensorflowIntegration is shown below:

    public class TensorflowIntegration implements VisionProvider {
        private static final String TFOD_MODEL_ASSET = "RoverRuckus.tflite";
        private static final String LABEL_GOLD_MINERAL = "Gold Mineral";
        private static final String LABEL_SILVER_MINERAL = "Silver Mineral";
    
        private List<Recognition> cacheRecognitions = null;
      
        /**
         * {@link #vuforia} is the variable we will use to store our instance of the Vuforia
         * localization engine.
         */
        private VuforiaLocalizer vuforia;
        /**
         * {@link #tfod} is the variable we will use to store our instance of the Tensor Flow Object
         * Detection engine.
         */
        public TFObjectDetector tfod;
    
        /**
         * Initialize the Vuforia localization engine.
         */
        public void initVuforia() {
            /*
             * Configure Vuforia by creating a Parameter object, and passing it to the Vuforia engine.
             */
            VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();
            parameters.vuforiaLicenseKey = RC.VUFORIA_LICENSE_KEY;
            parameters.cameraDirection = CameraDirection.FRONT;
            //  Instantiate the Vuforia engine
            vuforia = ClassFactory.getInstance().createVuforia(parameters);
        }
    
        /**
         * Initialize the Tensor Flow Object Detection engine.
         */
        private void initTfod(HardwareMap hardwareMap) {
            int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                    "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
            TFObjectDetector.Parameters tfodParameters = new TFObjectDetector.Parameters(tfodMonitorViewId);
            tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
            tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABEL_GOLD_MINERAL, LABEL_SILVER_MINERAL);
        }
    
        @Override
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            initVuforia();
    
            if (ClassFactory.getInstance().canCreateTFObjectDetector()) {
                initTfod(hardwareMap);
            } else {
                telemetry.addData("Sorry!", "This device is not compatible with TFOD");
            }
    
            if (tfod != null) {
                tfod.activate();
            }
        }
    
        @Override
        public void shutdownVision() {
            if (tfod != null) {
                tfod.shutdown();
            }
        }
    
        @Override
        public GoldPos detect() {
            List<Recognition> updatedRecognitions = tfod.getUpdatedRecognitions();
            if (updatedRecognitions != null) {
                cacheRecognitions = updatedRecognitions;
            }
            if (cacheRecognitions != null && cacheRecognitions.size() == 3) { // guard against the case where no recognitions have arrived yet
                int goldMineralX = -1;
                int silverMineral1X = -1;
                int silverMineral2X = -1;
                for (Recognition recognition : cacheRecognitions) {
                    if (recognition.getLabel().equals(LABEL_GOLD_MINERAL)) {
                        goldMineralX = (int) recognition.getLeft();
                    } else if (silverMineral1X == -1) {
                        silverMineral1X = (int) recognition.getLeft();
                    } else {
                        silverMineral2X = (int) recognition.getLeft();
                    }
                }
                if (goldMineralX != -1 && silverMineral1X != -1 && silverMineral2X != -1)
                    if (goldMineralX < silverMineral1X && goldMineralX < silverMineral2X) {
                        return GoldPos.LEFT;
                    } else if (goldMineralX > silverMineral1X && goldMineralX > silverMineral2X) {
                        return GoldPos.RIGHT;
                    } else {
                        return GoldPos.MIDDLE;
                    }
            }
            return GoldPos.NONE_FOUND;
    
        }
    
    }
    

    Next Steps

    We need to implement detection using OpenCV, and make our class conform to VisionProvider, so that we can easily swap it out for TensorflowIntegration.

    We also need to do the same using our Convolutional Neural Network.

    Finally, it might be beneficial to have a dummy implementation that always “detects” the gold as being in the middle, so that if we know that all our vision implementations are failing, we can use this dummy one to prevent our autonomous from failing.
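    Such a fallback could be as simple as the sketch below, which implements the VisionProvider interface above and always reports the middle position:

    public class DummyMiddleVisionProvider implements VisionProvider {
        @Override
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            telemetry.addData("Vision", "Dummy provider active: always reports MIDDLE");
        }

        @Override
        public void shutdownVision() {
            // nothing to clean up
        }

        @Override
        public GoldPos detect() {
            return GoldPos.MIDDLE;   // safe default when real detection is unavailable
        }
    }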

    OpenCV Support

    OpenCV Support By Arjun

    Task: Add OpenCV support to vision pipeline

    We recently refactored our vision code to allow us to easily swap out vision implementations. We had already implemented TensorFlow, but we hadn't yet written an OpenCV backend. Using the GRIP pipeline we designed earlier, we wrote a class called OpenCVIntegration, which implements VisionProvider. This new class allows us to use OpenCV instead of TensorFlow for our vision implementation.
    Our code for OpenCVIntegration is shown below.

    public class OpenCVIntegration implements VisionProvider {
    
        private VuforiaLocalizer vuforia;
        private Queue<VuforiaLocalizer.CloseableFrame> q;
        private int state = -3;
        private Mat mat;
        private List<MatOfPoint> contours;
        private Point lowest;
    
        private void initVuforia() {
            VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();
            parameters.vuforiaLicenseKey = RC.VUFORIA_LICENSE_KEY;
            parameters.cameraDirection = VuforiaLocalizer.CameraDirection.FRONT;
            vuforia = ClassFactory.getInstance().createVuforia(parameters);
        }
    
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            initVuforia();
            q = vuforia.getFrameQueue();
            state = -2;
    
        }
    
        public void shutdownVision() {}
    
        public GoldPos detect() {
            if (state == -2) {
                if (q.isEmpty())
                    return GoldPos.HOLD_STATE;
                VuforiaLocalizer.CloseableFrame frame = q.poll();
                Image img = VisionUtils.getImageFromFrame(frame, PIXEL_FORMAT.RGB565);
                Bitmap bm = Bitmap.createBitmap(img.getWidth(), img.getHeight(), Bitmap.Config.RGB_565);
                bm.copyPixelsFromBuffer(img.getPixels());
                mat = VisionUtils.bitmapToMat(bm, CvType.CV_8UC3);
            } else if (state == -1) {
                RoverRuckusGripPipeline pipeline = new RoverRuckusGripPipeline();
                pipeline.process(mat);
                contours = pipeline.filterContoursOutput();
            } else if (state == 0) {
                if (contours.size() == 0)
                    return GoldPos.NONE_FOUND;
                lowest = centroidish(contours.get(0));
            } else if (state < contours.size()) {
                Point centroid = centroidish(contours.get(state));
                if (lowest.y > centroid.y)
                    lowest = centroid;
            } else if (state == contours.size()) {
                if (lowest.x < 320d / 3)
                    return GoldPos.LEFT;
                else if (lowest.x < 640d / 3)
                    return GoldPos.MIDDLE;
                else
                    return GoldPos.RIGHT;
            } else {
                return GoldPos.ERROR2;
            }
            state++;
            return GoldPos.HOLD_STATE;
        }
    
        private static Point centroidish(MatOfPoint matOfPoint) {
            Rect br = Imgproc.boundingRect(matOfPoint);
            return new Point(br.x + br.width/2,br.y + br.height/2);
        }
    }
    

    Debug OpenCV Errors

    Debug OpenCV Errors By Arjun

    Task: Use black magic to fix errors in our code

    We implemented OpenCV support in our code, but we hadn’t tested it until now. Upon testing, we realized it didn't work.

    The first problem we found was that Vuforia wasn’t reading in our frames. The queue which holds Vuforia frames was always empty. After making lots of small changes, we realized that this was due to not initializing our Vuforia correctly. After fixing this, we got a new error.

    The error message changed, meaning that we fixed one problem, but there was another problem hiding behind it. The new error we found was that our code was unable to access the native OpenCV libraries, namely it could not link to libopencv_java320.so. Unfortunately, we could not debug this any further.

    Next Steps

    We need to continue debugging this problem and find the root cause of it.

    Auto Paths

    Auto Paths By Abhi

    Task: Map and code auto for depot side start

    Today, we implemented our first autonomous paths. Since we still didn't have complete vision software, we selected the paths manually so we could later integrate vision without issues. Here are videos of all of the paths. For the sake of debugging, the bot stops after turning towards the crater, but in a match it will continue driving and park in the far crater. These paths will help us score highly during autonomous.

    Center

    Left

    Right

    Next Steps

    We will get vision integrated into the paths.

    Issues with Driving

    Issues with Driving By Karina

    Task: Get ready for Regionals

    Regionals is coming up, and there are some driving issues that need to be addressed. Going back to November, one notable issue we had at the Conrad qualifier was the lack of friction between Bigwheel's wheels and the field tiles. There was not enough weight resting on the wheels, which made it hard to move suddenly.

    Since then many changes have been made to Bigwheel in terms of the lift. For starters, we switched out the REV extrusion linear slide for the MGN12H linear slide. We have also added more components to intake and carry minerals. These steps have fixed the previous issue if we keep the lift at a position not exceeding ~70 degrees while moving, but having added a lot of weight to the end of the slide makes rotating around the elbow joint of Bigwheel problematic. As you can see below, Bigwheel's chassis is not heavy enough to stay grounded when deploying the arm (and so I had to step on the back end of Bigwheel like a fool).

    Another issue I encountered during driver practice was depositing minerals in the lander. By "having issues" I mean I couldn't. Superman broke as soon as I tried going into the up position, and this mechanism was intended to raise Bigwheel enough so that it would reach the lander. Regardless of Superman's condition, the container for the minerals was still loose and not attached to the servo. Consequently, I could not rotate the lift past vertical without dropping the minerals I had collected.

    Next Steps

    To run a full practice match, Superman and the container will need to be fixed, as well as the weight issue. Meanwhile, I will practice getting minerals out of the crater.

    Vision Summary

    Vision Summary By Arjun and Abhi

    Task: Reflect on our vision development

    One of our priorities this season was our autonomous, as a perfect autonomous could score us a considerable amount of points. A large portion of these points come from sampling, so that was one of our main focuses within autonomous. Throughout the season, we developed a few different approaches to sampling.

    Early on in the season, we began experimenting with using a Convolutional Neural Network to detect the location of the gold mineral. A Convolutional Neural Network, or CNN, is a machine learning algorithm that uses multiple layers which "vote" on what the output should be based on the outputs of previous layers. We developed a tool to label training images for use in training a CNN, publicly available at https://github.com/arjvik/MineralLabler. We then began training a CNN with the training data we labeled. However, our CNN was unable to reach a high accuracy level, despite us spending lots of time tuning it. A large part of this came down to our lack of training data. We haven't given up on it, though, and we hope to improve this approach in the coming weeks.

    We then turned to other alternatives. At this time, the built-in TensorFlow Object Detection code was released in the FTC SDK. We tried out TensorFlow, but we were unable to use it reliably. Our testing revealed that the detection provided by TensorFlow was not always able to detect the location of the gold mineral. We attempted to modify some of the parameters, however, since only the trained model was provided to us by FIRST, we were unable to increase its accuracy. We are currently looking to see if we can detect the sampling order even if we only detect some of the sampling minerals. We still have code to use TensorFlow on our robot, but it is only one of a few different vision backends available for selection during runtime.

    Another alternative vision framework we tried was OpenCV. OpenCV is a collection of vision processing algorithms which can be combined to form powerful pipelines. OpenCV pipelines perform sequential transformations on their input image, until it ends up in a desired form, such as a set of contours or boundaries of all minerals detected in the image. We developed an OpenCV pipeline to find the center of the gold mineral given an image of the sampling order. To create our pipeline, we used a tool called GRIP, which allows us to visualize and tune our pipeline. However, since we have found that bad lighting conditions greatly influence the quality of detection, we hope to add LED lights to the top of our phone mount so we can get consistent lighting on the field, hopefully further increasing our performance in dark field conditions.

    Since we wanted to be able to switch easily between these vision backends, we decided to write a modular framework which allows us to swap out vision implementations with ease. As such, we are now able to choose which vision backend we would like to use during the match, with just a single button press. Because of this, we can also work in parallel on all of the vision backends.

    Another abstraction we made was the ability to switch between different viewpoints, or cameras. This allows us to decide at runtime which viewpoint we wish to use, either the front/back camera of the phone, or external webcam. Of course, while there is no good reason to change this during competition (hopefully by then the placement of the phone and webcam on the robot will be finalized), it is extremely useful during the development of the robot, because we don't have everything about our robot finalized.

      Summary of what we have done:
    • Designed a convolutional neural network to perform sampling.
    • Tested out the provided TensorFlow model for sampling.
    • Developed an OpenCV pipeline to perform sampling.
    • Created a framework to switch between different Vision Providers at runtime.
    • Created a framework to switch between different camera viewpoints at runtime.

    Next Steps

    We would like to continue improving on and testing our vision software so that we can reliably sample during our autonomous.

    Minor Code Change

    Minor Code Change By Karina

    Task: Save Bigwheel from self destruction

    The other day, when running through Bigwheel's controls, we came across an error in the code. The motors on the elbow did not have min and max values for their range of motion, causing the gears to grind when the joint was driven past its physical limits. Needless to say, Iron Reign has gone through a few gears already. Adding stops in the code was simple enough:
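    In essence, the stop amounts to clamping the requested elbow position before it is sent to the motor, roughly as in the sketch below; the limit constants and names are placeholders rather than our actual tuned values.

    private static final int ELBOW_MIN_TICKS = 0;     // placeholder lower limit
    private static final int ELBOW_MAX_TICKS = 2200;  // placeholder upper limit
    private int elbowTarget = 0;

    public void setElbowTarget(int requestedTicks) {
        // clamp the request so the gears can never be driven past the physical stops
        elbowTarget = Math.max(ELBOW_MIN_TICKS, Math.min(ELBOW_MAX_TICKS, requestedTicks));
    }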

    Testing the code revealed immediate success: we went through the full range of motion and no further grinding occurred.

    Next Steps

    Going forward, we will continue to debug code through drive practice.

    Code Updates

    Code Updates By Abhi and Arjun

    Task: Detail last-minute code changes to autonomous

    It is almost time for competition, and with that comes a super duper autonomous. For the past couple of weeks, and again today, we focused on making our depot-side autonomous work consistently. Because our robot wasn't fully built, we couldn't do auto-delatching. Today, we integrated our vision pipelines into the auto and tested all the paths with vision. They seemed to work at home base, but our practice field isn't built to exact specifications.

    Next Steps

    At Wylie, we will have to tune the auto paths to adjust for our field's discrepancies.

    Competition Day Code

    Competition Day Code By Abhi and Arjun

    Task: Update our code

    While at the Wylie qualifier, we had to make many changes because our robot broke the night before.

    The first change was adding the belt code. Previously, we had relied on gravity and the polycarb locks on the slides, but we quickly realized that the slides needed to articulate in order to preserve Superman. As a result, we added the belts to our collector class and drove them using their encoders.

    Next, we added manual overrides for all functions of our robot. Simply due to lack of time, we didn't add any presets and instead focused on making the robot functional enough for competition. During the competition, Karina was able to latch during endgame using purely the manual overrides.

    Finally, we did auto path tuning. We ended up using an OpenCV pipeline, and we were able to accurately detect the gold mineral every time. However, our practice field wasn't set up to the exact specifications needed, so we spent the majority of the day at the Wylie practice field tuning the depot-side auto (by the end of the day it worked almost perfectly every time).

    Next Steps

    We were lucky to have qualified early in the season, so we could make room for mistakes such as this. However, it will be hard to sustain this, so we must implement build freezes in the future.

    Code Updates

    Code Updates By Abhi

    Task: DISD STEM EXPO

    The picture above is a representation of our work today. After making sure all the manual drive controls were working, Karina found the positions she preferred for intake, deposit, and latch. Taking these encoder values from telemetry, we created new methods for the robot to run to those positions. As a result, the robot was very functional: we could latch onto the lander in 10 seconds, a much faster endgame than we had ever managed.
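    Each preset boils down to sending the recorded encoder value to the motor in RUN_TO_POSITION mode, roughly like the sketch below; the tick value, motor, and power here are illustrative placeholders rather than our actual constants.

    private void goToLatchPosition(DcMotor elbow) {
        final int LATCH_TICKS = 1650;                   // value read off telemetry during driver practice
        elbow.setTargetPosition(LATCH_TICKS);           // target must be set before switching modes
        elbow.setMode(DcMotor.RunMode.RUN_TO_POSITION);
        elbow.setPower(0.6);                            // the built-in controller holds the motor at the target
    }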

    Next Steps

    The code is still a little messy so we will have to do further testing before any competition.

    Autonomous Non-Blocking State Machines

    Autonomous Non-Blocking State Machines By Arjun

    Task: Design a state machine class to make autonomous easier

    In the past, our autonomous routines were tedious and difficult to change. Adding one step to the beginning of an autonomous would require changing the indexes of every single step afterwards, which could take a long time depending on the size of the routine. In addition, simple typos could go undetected and cause lots of problems. Finally, there was so much repetitive code that our routines ran over 400 lines long.

    In order to remedy this, we decided to create a state machine class that takes care of the repetitive parts of our autonomous code. We created a StateMachine class, which allows us to build autonomous routines as sequences of "states", or individual steps. This new state machine system makes autonomous routines much easier to code and tune, and it reduces the opportunity for small bugs. Converting to the new system also shortened each routine from over 400 lines to approximately 30 lines.

    Internally, StateMachine uses instances of the functional interface State (or some of its subclasses, SingleState for states that only need to be run once, TimedState, for states that are run on a timer, or MineralState, for states that do different things depending on the sampling order). Using a functional interface lets us use lambdas, which further reduce the length of our code. When it is executed, the state machine takes the current state and runs it. If the state is finished, the current state index (stored in a class called Stage) is incremented, and a state switch action is run, which stops all motors.
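    To illustrate the structure (the real classes in our repository have more features, such as timed and mineral-dependent states), a stripped-down version might look like this:

    @FunctionalInterface
    interface State {
        boolean run();   // returns true when this step is finished
    }

    class SimpleStateMachine {
        private final java.util.List<State> states = new java.util.ArrayList<>();
        private int index = 0;                    // plays the role of the Stage class

        SimpleStateMachine add(State state) { states.add(state); return this; }

        // Call repeatedly from the opmode loop; returns true once the routine is done.
        boolean execute() {
            if (index >= states.size()) return true;
            if (states.get(index).run()) {
                index++;                          // current state finished: advance
                stopAllMotors();                  // the state-switch action described above
            }
            return index >= states.size();
        }

        private void stopAllMotors() { /* hook for the state-switch action */ }
    }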

    Here is an autonomous routine which has been converted to the new system:

    private StateMachine auto_depotSample = getStateMachine(autoStage)
                .addNestedStateMachine(auto_setup) //common states to all autonomous
                .addMineralState(mineralStateProvider, //turn to mineral, depending on mineral
                        () -> robot.rotateIMU(39, TURN_TIME), //turn left
                        () -> true, //don't turn if mineral is in the middle
                        () -> robot.rotateIMU(321, TURN_TIME)) //turn right
                .addMineralState(mineralStateProvider, //move to mineral
                        () -> robot.driveForward(true, .604, DRIVE_POWER), //move more on the sides
                        () -> robot.driveForward(true, .47, DRIVE_POWER), //move less in the middle
                        () -> robot.driveForward(true, .604, DRIVE_POWER))
                .addMineralState(mineralStateProvider, //turn to depot
                        () -> robot.rotateIMU(345, TURN_TIME),
                        () -> true,
                        () -> robot.rotateIMU(15, TURN_TIME))
                .addMineralState(mineralStateProvider, //move to depot
                        () -> robot.driveForward(true, .880, DRIVE_POWER),
                        () -> robot.driveForward(true, .762, DRIVE_POWER),
                        () -> robot.driveForward(true, .890, DRIVE_POWER))
                .addTimedState(4, //turn on intake for 4 seconds
                        () -> robot.collector.eject(),
                        () -> robot.collector.stopIntake())
                .build();
    

    Control Mapping

    Control Mapping By Bhanaviya, Abhi, Ben, and Karina

    Task: Map and test controls

    With regionals a week away, the robot needs to be in its drive-testing phase. So, we started by mapping out the controls as depicted above.

    Upon testing the controls, we realized that when the robot went into Superman mode, it collapsed due to the lopsided structure of the base, since the presets were not as accurate as they could be. The robot had trouble finding the right position when attempting to deposit and intake minerals.

    After we found a preset for the intake mechanism, we had to test it to ensure that the arm extended far enough to sample. Our second task was ensuring that the robot could go into Superman while still moving forward. To do this, we had to find the position which allowed the smaller wheel at the base of the robot to move forward while the robot was in motion.

    Next Steps

    We plan to revisit the robot's balancing issue in the next meet and find the accurate presets to fix the problem.

    Big Wheel Articulations

    Big Wheel Articulations By Abhi

    Task: Summary of all Big Wheel movements

    When in motion, our robot shifts multiple major subsystems (the elbow and Superman), which makes it difficult to keep the robot from tipping. Therefore, through driver practice, we determined the 5 major deployment modes that make it easier for the driver to transition from mode to mode. Each articulation is necessary to maintain the robot's center of gravity as its mode of operation shifts.

    The position seen above is called "safe drive". During normal match play, our drivers can go to this position to navigate the field quickly and with the arm out of the way.

    When the driver control period starts, we normally navigate to the crater then enter the intake position shown above. From this position, we can safely pick up minerals from the crater.

    From the intake position, the robot goes to safe drive to fix the weight balance then goes to the deposit position shown above. The arm can still extend upwards above the lander and our automatic sorter can place the minerals appropriately.

    During the end game, we enter a latchable position where our hook can easily slide into the latch. After hooking on, our robot can lift itself slightly off the ground.

    At the beginning of the match, we can completely close the arm and Superman to fit in the sizing cube and latch onto the lander.

    As you can see, there are a lot of articulations that need to work together over the course of the match. By putting this logic in a state machine, we can easily toggle between articulations; a simplified sketch is shown below, and our code snippets have the full details.
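
    Here is that simplified sketch of how an articulation request can be handled each loop. The enum values mirror the modes above, but the preset numbers and motor calls are placeholders rather than our actual code.

    import com.qualcomm.robotcore.hardware.DcMotor;

    // Hypothetical sketch; the real presets live in our pose class and assume RUN_TO_POSITION mode.
    public class ArticulationManager {
        public enum Articulation { FOLDED, SAFE_DRIVE, INTAKE, DEPOSIT, LATCHABLE }

        // Placeholder preset pairs: { elbow ticks, Superman ticks }
        private static final int[][] PRESETS = {
            { 0, 0 },      // FOLDED - fits in the sizing cube
            { 400, 150 },  // SAFE_DRIVE
            { 900, 50 },   // INTAKE
            { 650, 600 },  // DEPOSIT
            { 300, 700 },  // LATCHABLE
        };

        private Articulation target = Articulation.SAFE_DRIVE;

        public void setTarget(Articulation articulation) { target = articulation; }

        // Called every control loop; drives both joints toward the preset for the requested mode.
        public void update(DcMotor elbow, DcMotor superman) {
            int[] preset = PRESETS[target.ordinal()];
            elbow.setTargetPosition(preset[0]);
            superman.setTargetPosition(preset[1]);
        }
    }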

    Next Steps

    At this point, we have 4 cycles in 1 minute 30 seconds. By adding some upgrades to the articulations using our new distance sensors, we hope to speed this up even more.

    Cart Hack

    Cart Hack By Arjun

    Task: Tweaking ftc_app to allow us to drive robots without a Driver Station phone

    As you already know, Iron Reign has a mechanized cart called Cartbot that we bring to competitions. We used the FTC control system to build it, so we could gain experience. However, this has one issue: we can only use one pair of Robot Controller and Driver Station phones at a competition, because of WiFi interference problems.

    To avoid this pitfall, we decided to tweak the ftc_app our team uses so that we can plug a controller directly into the Robot Controller. This cuts out the need for a Driver Station phone, which means we can drive Cartbot around without worrying about breaking any rules.

    Another use for this tweak could be for testing, since with this new system we don't need a Driver Station when we are testing our tele-op.

    As of now this modification lives in a separate branch of our code, since we don't know how it may affect our match code. We hope to merge this later once we confirm it doesn't cause any issues.

    Road to Worlds Document

    Road to Worlds Document By Ethan, Charlotte, Evan, Karina, Janavi, Jose, Ben, Justin, Arjun, and Abhi

    Task: Consider what we need to do in the coming months

    ROAD TO WORLDS - What we need to do

     

    OVERALL:

    • New social media manager (Janavi/Ben) and photographer (Ethan, Paul, and Charlotte)

     

    ENGINEERING JOURNAL: - Charlotte, Ethan, & all freshmen

     

    • Big one - freshmen get to start doing a lot more

     

    • Engineering section revamp
      • Decide on major subsystems to focus on
        • Make summary pages and guides for judges to find relevant articles
      • Code section
        • Finalize state diagram
          • Label diagram to refer to the following print out of different parts of the code
        • Create plan to print out classes
        • Monthly summaries
      • Meeting Logs
        • Include meeting planning sessions at the beginning of every log
          • Start doing planning sessions!
        • Create monthly summaries
      • Biweekly Doodle Polls
        • record of supposed attendance rather than word of mouth
      • Design and format revamping
        • Start doing actual descriptions for blog commits
        • More bullet points to be more technical
        • Award highlights [Ethan][Done]
        • Page numbers [Ethan][Done]
        • Awards on indexPrintable [Ethan][Done]
      • Irrelevant/distracting content
        • Packing list
        • Need a miscellaneous section
          • content
      • Details and dimensions
        • Could you build robot with our journal?
        • CAD models
        • More technical language, it is readable but not technical currently
    • Outreach
      • More about the impact and personal connections
      • What went wrong
      • Make content more concise and make it convey our message better



    ENGINEERING TEAM:

     

    • Making a new robot - All build team (Karina & Jose over spring break)

     

      • Need to organize motors (used, etc)
      • Test harness for motors (summer project)
    • Re-do wiring -Janavi and Abhi
    • Elbow joint needs to be redone (is at a slight angle) - Justin/Ben
      • 3D print as a prototype
        • Cut out of aluminum
      • Needs to be higher up and pushed forward
      • More serviceable
        • Can’t plug in servos
    • Sorter -Evan, Karina, and Justin
      • Sorter redesign
    • Intake -Evan, Karina, Abhi, Jose
      • Take video of performance to gauge how issues are happening and how we can fix
      • Subteam to tackle intake issues
    • Superman -Evan and Ben
      • Widen superman wheel
    • Lift
      • Transfer pulley (1:1 to 3:4)
      • Larger drive pulley
        • Mount motors differently to make room
    • Chassis -Karina and a freshman
      • Protection for LED strips
      • Battery mount
      • Phone mount
      • Camera mount
      • New 20:1 motors
      • Idler sprocket to take up slack in chain (caused by small sprocket driving large one)
    • CAD Model



    CODE TEAM: -Abhi and Arjun

    • add an autorecover function to our robot for when it tips over
      • it happened twice and we couldn’t recover fast enough to climb
    • something in the update loop to maintain balance
      • we were supposed to do this for regionals but we forgot to do it and we faced the consequences
    • fix IMU corrections such that we can align to field wall instead of me eyeballing a parallel position
    • use distance sensors to do wall following and crater detection
    • auto paths need to be expanded such that we can avoid alliance partners and have enough flexibility to pick and choose what path needs to be followed
      • In both auto paths, we can facilitate double sampling
    • Tuning with PID (tuning constants)
    • Autonomous optimization



    DRIVE TEAM:

    • Driving Logs
      • Every time there is driving practice, a driver will fill out a log that records the overall record time, the record time for that day, the number of cycles for each run, and other helpful stats to track the progress of driving practice
    • actual driving practice lol
    • Multiple drive teams

     

    COMPETITION PREP:

    • Pit setup
      • Clean up tent and make sure we have everything to put it together
      • Activities
        • Robotics related
      • Find nuts and bolts based on the online list
    • Helping other teams
    • Posters
    • Need a handout
    • Conduct in pits - need to be focused
    • MXP or no?
    • Spring break - who is here and what can we accomplish
    • Scouting

     

    Code Refactor

    Code Refactor By Abhi and Arjun

    Task: Code cleanup and season analysis

    At this point in the season, we have time to clean up our code before development for worlds begins. This is important to do now so that the code remains understandable as we make many changes for worlds.

    No new features were added in these commits. In total, there were 12 files changed, with 149 additions and 253 deletions.

    Here is a brief graph of our commit history over the past season. As you can see, there was a spike during this code refactor.

    Here is a graph of additions and deletions over the course of the season. There was also another spike during this time as we made changes.

    Next Steps

    Hopefully this cleanup will help us on our journey to worlds.

    Localization

    Localization By Ben

    Localization

    A feature that is essential to many advanced autonomous sequences is the ability to know the robot's absolute location (x position, y position, heading). For our localization, we determine the robot's position relative to the field's coordinate frame. To track our position, we use encoders (to determine displacement) and a gyro (to determine heading).

    Our robot's translational velocity can be determined by seeing how our encoder counts change over time, and heading velocity is simply how our angle changes over time. Thus, our actual velocity can be represented by the following equation.
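
    A typical form of this relationship, assuming translational speed v (taken from the encoders) and heading θ (taken from the gyro), is:

    \dot{x} = v \cos\theta, \qquad \dot{y} = v \sin\theta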

    Integrating that to find our position yields
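
    Keeping the same assumed notation, with (x_0, y_0) as the starting position:

    x(t) = x_0 + \int_0^t v(\tau) \cos\theta(\tau) \, d\tau, \qquad y(t) = y_0 + \int_0^t v(\tau) \sin\theta(\tau) \, d\tau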

    Using this new equation, we can obtain the robot's updated x and y coordinates.
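
    In code this becomes a simple discrete update each loop. Here is a minimal sketch, assuming the caller supplies the averaged drive encoder count and the IMU heading on every iteration:

    // Hypothetical sketch of the discrete dead-reckoning update; units are encoder ticks and radians.
    public class DeadReckoner {
        private double x = 0, y = 0; // field-relative position
        private long lastTicks = 0;

        // Call once per control loop with the averaged drive encoder count and the IMU heading.
        public void update(long currentTicks, double headingRadians) {
            double displacement = currentTicks - lastTicks; // forward travel since the last update
            lastTicks = currentTicks;
            x += displacement * Math.cos(headingRadians);
            y += displacement * Math.sin(headingRadians);
        }

        public double getX() { return x; }
        public double getY() { return y; }
    }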

    Balancing Robot

    Balancing Robot By Abhi and Ben

    Initial Work on Balancing Robot

    Since our robot has two wheels and a long arm, we decided to take on an interesting problem: balancing our robot on two wheels, as modern hoverboards and Segways do. Though the problem has already been solved by others, we tried our own approach.

    We first tried a PID control loop, since we were already accustomed to that model from our autonomous code. This proved to be a large challenge, as lag in our loop times didn't give us the sensitivity we needed, but we kept working to optimize the model.

    Next time we will continue fine-tuning the gains, using a graph of our current pitch versus the desired pitch to determine how we should tweak them to smoothly reach the setpoint. Another factor we need to account for is the varying loop time, which we can factor into the PID calculations to keep the corrections consistent. In addition, we may try state space control for the balancing instead of PID.
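
    As a bare-bones sketch of that loop-time compensation idea, where the gains are placeholders rather than our tuned values:

    // Placeholder PID balance loop; kP/kI/kD are illustrative, not our tuned gains.
    public class BalanceController {
        private final double kP = 0.03, kI = 0.0, kD = 0.002;
        private double integral = 0, lastError = 0;
        private long lastTime = System.nanoTime();

        // Returns a motor power correction given the current and desired pitch (degrees).
        public double correction(double currentPitch, double targetPitch) {
            long now = System.nanoTime();
            double dt = (now - lastTime) / 1e9; // seconds since the last loop, to compensate for lag
            lastTime = now;

            double error = targetPitch - currentPitch;
            integral += error * dt;
            double derivative = (error - lastError) / dt;
            lastError = error;

            return kP * error + kI * integral + kD * derivative;
        }
    }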

    Balancing Robot Updates

    Balancing Robot Updates By Abhi and Ben

    Updates on Balancing Robot

    Today we managed to get our robot to balance for 30 seconds after spending about an hour tuning the PID gains. We made significant progress, but there is a flaw in our algorithm that needs to be addressed. At the moment, we have a fixed pitch that we want the robot to balance at, but because of the robot's weight distribution, forcing it to balance at a fixed setpoint will not work well and causes it to continually oscillate around that pitch instead of maintaining it.

    To address this issue, there are a number of solutions. As mentioned in the previous post, one approach is to use state space control; though it may be more accurate, it is computationally intensive and more difficult to implement. Another solution is to have the elbow run to a vertical angle rather than a preset value. For this, we would need another IMU sensor on the arm, and it also adds another variable to consider in our algorithm.

    To learn more about this problem, we looked into this paper developed at Harvard and MIT, which used Lagrangian mechanics to relate the variables, combined with state space control. Lagrangian mechanics lets you represent the physics of the robot in terms of energy rather than Newtonian forces. The main equation, the Lagrangian, is given as follows:
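
    In its standard form, this is the system's kinetic energy T minus its potential energy V:

    \mathcal{L} = T - V

    The equations of motion for each generalized coordinate q_i then follow from the Euler-Lagrange equation, \frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{q}_i} - \frac{\partial \mathcal{L}}{\partial q_i} = 0.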

    To actually represent the Lagrangian in terms of our problem, there is a set of differential equations that can be fed into the state space control equations. For the sake of this post, I will not list them here; refer to the paper above for more info.

    Next Steps:

    This problem will be on hold until we finish the necessary code for our robot but we have a lot of new information we can use to solve the problem.

    Icarus Code Support

    Icarus Code Support By Abhi

    Task: Implement dual robot code

    With the birth of Icarus came a new job for the programmers: supporting both Bigwheel and Icarus. We needed the code to work both ways so that new logic could be developed on Bigwheel while the builders completed Icarus.

    This was done by simply creating an Enum for the robot type and feeding it into PoseBigWheel initialization. This value was fed into all the subsystems so they could be initialized properly. During init, we could now select the robot type and test with it. The change to the init loop is shown below.
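
    A simplified sketch of the pattern is below; the actual PoseBigWheel constructor and init loop differ, so treat this as illustrative only.

    // Illustrative sketch; the real init loop reads gamepad input and passes the type to every subsystem.
    public enum RobotType { BIGWHEEL, ICARUS }

    public class RobotSelector {
        private RobotType activeRobot = RobotType.BIGWHEEL;

        // During init, a button toggles which robot the code should configure itself for.
        public void initLoop(boolean togglePressed) {
            if (togglePressed) {
                activeRobot = (activeRobot == RobotType.BIGWHEEL) ? RobotType.ICARUS : RobotType.BIGWHEEL;
            }
        }

        public PoseBigWheel buildPose() {
            // The robot type is passed to PoseBigWheel so each subsystem can pick the right motor
            // directions, geometry, and presets for the chassis it is actually running on.
            return new PoseBigWheel(activeRobot);
        }
    }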

    Next Steps

    After testing, it appears that our logic is functional for now. Coders can now further develop our base without Icarus.

    Reverse Articulations

    Reverse Articulations By Abhi

    Task: Summary of Icarus Movements

    In post E-116, I showed all the big wheel articulations. As we shifted our robot to Icarus, we decided to change to a new set of articulations that would better maintain the center of gravity of our robot. Once again, we made 5 major deployment modes. Each articulation is necessary to maintain the robot's center of gravity as its mode of operation shifts.

    The position seen above is called "safe drive". During normal match play, our drivers can go to this position to navigate the field quickly and with the arm out of the way. In addition, we use this articulation as we approach the lander to deposit.

    When the driver control period starts, we normally navigate to the crater then enter the intake position shown above. From this position, we can safely pick up minerals from the crater. Note that there are two articulations shown here. These show the intake position both contracted and extended during intake.

    During the end game, we enter a latchable position where our hook can easily slide into the latch. After hooking on, our robot can lift itself slightly off the ground. This is the same articulation as before.

    At the beginning of the match, we can completely close the arm and Superman to fit in the sizing cube and latch onto the lander. This is the same articulation as before.

    These articulations were integrated into our control loop just as before, which made for a smooth transition.

    Next Steps

    As the final build of Icarus is completed, we can test these articulations and their implications.

    Center of Gravity calculations

    Center of Gravity calculations By Arjun

    Task: Determine equations to find robot Center of Gravity

    Because our robot tends to tip over often, we decided to start working on a dynamic anti-tip algorithm. In order to do so, we needed to be able to find the center of gravity of the robot. We did this by modeling the robot as 5 separate components, finding the center of gravity of each, and then using those to find the overall center of gravity. This will allow us to programmatically detect when our robot is starting to tip.

    The five components we modeled the robot as are the main chassis, the arm, the intake, Superman, and the wheels. We then assumed that each of these components had an even weight distribution and found their individual centers of gravity. Finally, we took the weighted average of the individual centers of gravity, weighting each by that component's share of the robot's total weight.
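
    Here is a minimal sketch of that weighted-average step, with hypothetical component masses and positions:

    // Hypothetical sketch: each component contributes its mass-weighted center of gravity.
    public class CenterOfGravity {
        // x and y hold each component's own center of gravity in robot coordinates; masses are in kg.
        public static double[] combine(double[] masses, double[] x, double[] y) {
            double totalMass = 0, weightedX = 0, weightedY = 0;
            for (int i = 0; i < masses.length; i++) {
                totalMass += masses[i];
                weightedX += masses[i] * x[i];
                weightedY += masses[i] * y[i];
            }
            return new double[] { weightedX / totalMass, weightedY / totalMass };
        }
    }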

    By having equations to find the center of gravity of our robot, we can continuously find it programmatically. Because of this, we can take corrective action to prevent tipping earlier than we would be able to by just looking at the IMU angle of our robot.

    Next Steps

    We now need to implement these equations in the code for our robot, so we can actually use them.

    Code updates at UIL

    Code updates at UIL By Arjun, Abhi, and Ben O

    Task: Update code to get ready for UIL

    It's competition time again, and that means updating our code. We have made quite a few changes to our robot in the past few weeks, and so we needed to update our code to reflect those changes.

    Unfortunately, because the robot build was completed very late, we did not have much time to code. That meant that we not only needed to stay at the UIL event center until the minute it closed to use their practice field (we were literally the last team in the FTC pits), but we also needed to stay up from 11 pm to 4 am coding.

    One of our main priorities was autonomous. We decided early on to focus on our crater-side autonomous, because in our experience, most teams who only had one autonomous chose depot-side because it was easier to code.

    Unfortunately, we were quite wrong about that. We were forced to run our untested depot-side auto multiple times throughout the course of the day, and it caused us many headaches. Because of it, we missampled, got stuck in the crater, and tipped over in some of our matches where we were forced to run depot-side. Towards the end of the competition, we tried to quickly hack together a better depot-side autonomous, but we ran out of time to do so.

    Some of the changes we made to our crater-side auto were:

    • Updating to use our new reverse articulations
    • Moving vision detection during the de-latch sequence
    • Speeding up our autonomous by replacing driving with belt extensions
    • Sampling using the belt extensions instead of driving to prevent accidental missamples
    • Using PID for all turns to improve accuracy

    We also made some enhancements to teleop. We added a system to correct the elbow angle in accordance with the belt extensions so that we don't fall over during intake when drivers adjust the belts. We also performed more tuning on our articulations to make them easier to use.

    Finally, we added support for the LEDs to the code. After attaching the Blinkin LED controller late Friday night, we included LED color changes in the code. We use them to signal to drivers what mode we are in, and to indicate when our robot is ready to go.

    Control Hub First Impressions

    Control Hub First Impressions By Arjun and Abhi

    Task: Test the REV Control Hub ahead of the REV trial

    Iron Reign was recently selected to attend a REV Control Hub trial along with select other teams in the region. We wanted to do this so that we could get a good look at the control system that FTC would likely be switching to in the near future, as well as get another chance to test our robot in tournament conditions before Worlds.

    We received our Control Hub a few days ago, and today we started testing it. We noticed that while the Control Hub seems to use the same exterior as the FIRST Global control hubs, it is different on the inside. For example, in the port labeled Micro USB, there was a USB-C connector. We are glad that REV listened to teams and made this change, as switching to USB-C means there will be less wear and tear on the port. The other ports include a Mini USB port (we don't know what it is for), an HDMI port should we ever need to view the screen of the Control Hub, and two USB ports, presumably for webcams and other accessories. The inclusion of two USB ports means that a USB hub is no longer needed. One port appears to be USB 2.0, while the other appears to be USB 3.0.

    Getting started with programming it was quite easy. We tested using Android Studio, but both OnBot Java and Blocks should work fine as well, since we were able to access the programming webpage. We just plugged the battery into the Control Hub and then connected it to a computer via the provided USB-C cable. The Control Hub immediately showed up in ADB. (Of course, if you forget to plug in the battery like we did at first, you won't be able to program it.)

    REV provided us with a separate SDK to use to program the Control Hub. Unfortunately, we are not allowed to redistribute it. We did note, however, that much of the visible internals look the same. We performed a diff between the original ftc_app's FtcRobotControllerActivity.java and the one in the new Control Hub SDK, and saw nothing notable except for mentions of permissions such as Read/Write External Storage Devices and Access Camera. These look like standard Android permissions and are likely there because you can't accept permission prompts on a device without a screen.

    While testing it, we didn't have time to copy over our entire codebase, so we made a quick OpMode that moved one wheel of one of our old robots. Because the provided SDK is almost identical to ftc_app, no changes were needed to the existing sample OpModes. We successfully tested our OpMode, proving that it works fine with the new system.
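
    A minimal OpMode along these lines looks roughly like the sketch below (not our exact code; the motor name is whatever appears in the robot configuration file):

    import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
    import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
    import com.qualcomm.robotcore.hardware.DcMotor;

    @TeleOp(name = "ControlHubWheelTest")
    public class ControlHubWheelTest extends LinearOpMode {
        @Override
        public void runOpMode() {
            DcMotor wheel = hardwareMap.get(DcMotor.class, "testWheel"); // name from the configuration
            waitForStart();
            while (opModeIsActive()) {
                wheel.setPower(0.3); // spin one wheel slowly to confirm the new control system works
            }
        }
    }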

    Pairing the DS phone to the Control Hub was very quick, with no hurdles; it just required us to select "Control Hub" as the pairing method and connect to the hub's WiFi network. We were told that for the purposes of this test, the WiFi password was "password". This worked, but we hope that REV changes this in the future, as it means a malicious team could connect to our Control Hub too.

    We also tested ADB wireless debugging. We connected to the Control Hub's WiFi from our laptop and then made the hub listen for ADB connections over the network via adb tcpip 5555. However, since the Control Hub doesn't use WiFi Direct, we were unable to connect to it via adb connect 192.168.49.1:5555; the IP address 192.168.49.1 is used mainly by devices for WiFi Direct. We saw that our Control Hub used 192.168.43.1 instead (found using the ip route command on Linux, or ipconfig if you are on Windows). We aren't sure whether the address 192.168.43.1 is the same for all Control Hubs or different per hub. After finding this IP address, we connected via adb connect 192.168.43.1:5555, and ADB worked as expected.

    Next Steps

    Overall, our testing was a success. We hope to perform further testing before we attend the REV test on Saturday. We would like to test using Webcams, OpenCV, libraries such as FtcDashboard, and more.

    We will be posting a form where you can let us know about things you would like us to test. Stay tuned for that!

    Auto Paths, Updated

    Auto Paths, Updated By Abhi

    Task: Reflect and develop auto paths

    It has been a very long time since we have reconsidered our auto paths. Between my last post and now, we have made numerous changes to both the hardware and the articulations. As a result, we should rethink the paths we used and optimize them for scoring. After testing multiple paths and observing other teams, I identified 3 auto paths we will try to perfect for championships.

    These two paths represent the crater-side auto. Earlier in the season, I drew one of the paths to do double sampling; however, because of the time our delatch sequence takes, I determined we simply don't have enough time to double sample. The left path above is a safe auto path that we had tested many times and used at UIL. However, it doesn't allow us to score the sampled mineral into the lander, which would give us 5 extra points during auto. That's why we created the theoretical path seen on the right, which deposits the team marker before sampling. This allows us to score the sampled mineral rather than just pushing it.

    This is the depot path I was thinking about. Though it looks very similar to the past auto path, there are some notable differences. After the robot delatches from the lander, the lift will simply extend into the depot rather than driving into it. This allows us to extend back and pick up the sampling mineral and score it. Then the robot can turn to the crater and park.

    Next Steps

    One of the crater paths is already coded. Our first priority is to get the depot auto functional for worlds. If we still have time remaining, we can try the second crater path.