Articles by tag: software

    Position Tracking By Abhi

    Task: Design a way to track the robot's location

    Throughout the Relic Recovery season, we had many issues with autonomous accuracy simply because scoring depended on perfectly aligning the robot on the balancing stone. This was error-prone, as evidenced by the numerous matches in which our autonomous failed. Until now, we had relied on the drive encoders on the mecanum chassis to measure distances. Though this worked to a significant degree, the robot was still prone to position loss from wheel drift and from running into the glyph pit. We don't know whether glyphs will be reused next season, but we definitely needed a better way to track the robot on the field.

    After some investigation online and discussions with other teams, I came up with a way to make a tracker. For the sake of testing, we built a small chassis with two perpendicular REV rails. Then, with the help of Iron Reign's new trainees, we attached two omni wheels on opposite sides of the chassis, as seen in the image above. To these, we added axle encoders to track the movement of the omni wheels.

    The axles of these omnis are not driven by any motors because we wanted to avoid any error introduced by the motors themselves. Since the omni wheels are free-spinning, they always roll in whichever direction the robot is actually moving, regardless of what the drive motor encoders read. Therefore, the omni wheels generally give a more accurate reading of position.

    To test the concept, we attached the apparatus to ARGOS. After upgrading the ARGOS code to combine the IMU heading with the omni wheel encoders, we added some basic trigonometry to accurately track the robot's position. The omni setup was relatively accurate and may be used for future projects and robots.
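
    To illustrate the trigonometry involved, here is a minimal sketch of the kind of update loop such a tracker runs. This is not our actual code; the class name, constants, and wheel geometry are assumptions for illustration.

    ```java
    // Minimal dead-wheel tracking sketch: two free-spinning perpendicular omni
    // wheels plus an IMU heading. The constants below are placeholders, not ours.
    public class PositionTracker {
        static final double TICKS_PER_REV = 1440.0;  // encoder counts per wheel revolution (assumed)
        static final double WHEEL_RADIUS_M = 0.035;  // omni wheel radius in meters (assumed)
        static final double M_PER_TICK = 2.0 * Math.PI * WHEEL_RADIUS_M / TICKS_PER_REV;

        double x, y;                   // field-frame position in meters
        int lastForward, lastStrafe;   // previous encoder readings

        // Call every loop with the raw counts from the forward-facing and
        // sideways-facing omni encoders, plus the IMU heading in radians.
        public void update(int forwardTicks, int strafeTicks, double headingRad) {
            double dForward = (forwardTicks - lastForward) * M_PER_TICK;
            double dStrafe  = (strafeTicks  - lastStrafe)  * M_PER_TICK;
            lastForward = forwardTicks;
            lastStrafe  = strafeTicks;

            // Rotate the robot-frame displacement into the field frame.
            x += dForward * Math.cos(headingRad) - dStrafe * Math.sin(headingRad);
            y += dForward * Math.sin(headingRad) + dStrafe * Math.cos(headingRad);
        }
    }
    ```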

    Next Steps

    Now that we have a prototype to track position without using too many resources, we need to test it on an actual FTC chassis. Depending on whether or not there is terrain in Rover Ruckus, the use of this system will change. Until then, we can still experiment with this and develop a useful multipurpose sensor.

    Replay Autonomous By Arjun

    Task: Design a program to record and replay a driver run

    One of the difficulties in writing an autonomous program is the long development cycle. We have to unplug the robot controller, plug it into a computer, make a few changes to the code, recompile and download the code, and then retest our program. All this must be done over and over again, until the autonomous is perfected. Each autonomous takes ~4 hours to write and tune. Over the entire season, we spend over 40 hours working on autonomous programs.

    One possible solution for this is to record a driver running through the autonomous, and then replay it. I used this solution on my previous robotics team. Since we had no access to a field, we had to write our entire autonomous at a competition. After some brainstorming, we decided to write a program to record our driver as he ran through our autonomous routine and then execute it during a match. It worked very well, and got us a few extra points each match.

    Using this program, writing an autonomous program is reduced to a matter of minutes. We just need to run through our autonomous routine a few times until we are happy with it, then take the data from the console and paste it into our program. Then we recompile the program and run it.

    There are two parts to our replay program. One part (a Tele-op Opmode) records the driver's motions and outputs them to the Android console. The other part (an Autonomous Opmode) reads in that data and turns it into a working autonomous program.
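
    To make the idea concrete, here is a stripped-down sketch of what the recording side might look like, assuming a simple tank drive with motors named "left" and "right". Our actual opmode records more channels; this just shows the record-to-console pattern.

    ```java
    import com.qualcomm.robotcore.eventloop.opmode.OpMode;
    import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
    import com.qualcomm.robotcore.hardware.DcMotor;

    @TeleOp(name = "RecordRun")
    public class RecordRun extends OpMode {
        private DcMotor left, right;
        private final StringBuilder log = new StringBuilder();

        @Override
        public void init() {
            left = hardwareMap.dcMotor.get("left");
            right = hardwareMap.dcMotor.get("right");
        }

        @Override
        public void loop() {
            double l = -gamepad1.left_stick_y;
            double r = -gamepad1.right_stick_y;
            left.setPower(l);
            right.setPower(r);
            // Record this loop's motor powers; one sample per loop iteration.
            log.append(l).append(',').append(r).append(';');
        }

        @Override
        public void stop() {
            // Dump the whole run to the Android console (logcat); we copy this
            // output into the autonomous opmode's data by hand.
            android.util.Log.i("RecordRun", log.toString());
        }
    }
    ```

    The playback side is just the reverse: the autonomous opmode parses a pasted string of these samples and replays the powers in order, one sample per loop.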

    Next Steps

    Our current replay program requires one recompilation. While it is very quick, one possible next step is to save the autonomous data straight into the phone's internal memory, so that we do not have to recompile the program. This could further reduce the time required to create an autonomous.
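
    As a sketch of that idea, the stop() method above could write to a file instead of logcat. The path and class here are assumptions for illustration, though /sdcard/FIRST is where the FTC app keeps its other files.

    ```java
    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;
    import com.qualcomm.robotcore.util.RobotLog;

    public class RecordingStore {
        // Persist the recorded samples so the autonomous opmode could load them
        // later without a recompile. The path is an assumption.
        public static void save(String data) {
            File recording = new File("/sdcard/FIRST/recordedAuto.csv");
            try (FileWriter writer = new FileWriter(recording)) {
                writer.write(data);
            } catch (IOException e) {
                RobotLog.e("Failed to save recording: " + e.getMessage());
            }
        }
    }
    ```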

    One more next step could be a way to easily edit the autonomous. The output data is just a big list of numbers, and it is very difficult to edit it. If we need to tune the autonomous due to wear and tear on the robot, it is difficult to do so without rerecording. If we can figure out a mechanism for editing the generated autonomous, we can further reduce the time we spend creating autonomous programs.

    Upgrading to FTC SDK version 4.0 By Arjun

    Task: Upgrade our code to the latest version of the FTC SDK

    FTC recently released version 4.0 of their SDK, with initial support for external cameras, better PIDF motor control, improved wireless connectivity, new sensors, and other general improvements. Our code was based on last year's SDK version 3.7, so we needed to merge the new SDK with our repository.

    The merge was slightly difficult, as there were some issues with the Gradle build system. However, after a little fiddling with the configuration, as well as fixing some conflicts in the SDK internals we had modified, we were able to successfully merge the new SDK.

    After the merge, we tested that our code still worked on Kraken, last year's competition robot. It ran with no problems.

    Pose BigWheel By Abhi

    Task: New Pose for Big Wheel robot

    Historically, Iron Reign has used a class called "Pose" to hold all the hardware mapping of our robot instead of putting it directly into our opmodes. This has created cleaner code and smoother integration with our crazy functions. However, we used the same Pose for the past two years, since both robots had an almost identical drive base. Since we didn't have a viable differential-drive Pose, I made a new one, drawing inspiration from the mecanum version.

    We start by initializing everything, including PID constants and all our motors and sensors. I will skip the details here since this boilerplate is repetitive across team code.

    In the init, I set up the hardware mapping for the motors we have on BigWheel right now. Other functions will come in later.

    The drive methods are where a lot of the work happens. They are what allow our robot to move accurately using IMU and encoder values.
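
    Our real Pose class is much larger, but a minimal sketch of the idea looks something like this. The motor and sensor names are assumptions, and the heading-hold logic is simplified to a bare proportional correction.

    ```java
    import com.qualcomm.hardware.bosch.BNO055IMU;
    import com.qualcomm.robotcore.hardware.DcMotor;
    import com.qualcomm.robotcore.hardware.DcMotorSimple;
    import com.qualcomm.robotcore.hardware.HardwareMap;

    public class PoseBigWheel {
        private DcMotor motorLeft, motorRight;
        private BNO055IMU imu;

        // Map all hardware in one place so opmodes never touch hardwareMap directly.
        public void init(HardwareMap map) {
            motorLeft  = map.dcMotor.get("motorLeft");
            motorRight = map.dcMotor.get("motorRight");
            motorRight.setDirection(DcMotorSimple.Direction.REVERSE);
            imu = map.get(BNO055IMU.class, "imu");
            imu.initialize(new BNO055IMU.Parameters());
        }

        // Drive at basePower while holding targetAngle using the IMU heading,
        // the core of moving accurately with IMU and encoder feedback.
        public void driveIMU(double basePower, double targetAngle, double kP) {
            double heading = imu.getAngularOrientation().firstAngle;
            double correction = kP * (targetAngle - heading);
            motorLeft.setPower(basePower - correction);
            motorRight.setPower(basePower + correction);
        }
    }
    ```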

    There are a lot of other methods beyond these, but they mostly boil down to technical trigonometry. I won't bore you with the details, but our code is open source, so you can find the necessary help if you just look at our GitHub!

    RIP CNN By Abhi

    Task: Farewell Iron Reign's CNN

    So FTC released a new software update that added TensorFlow support. With it came a class that uses TensorFlow to autonomously detect the minerals. This meant all our progress was undercut by a software update. The silver lining is that we have done enough research into how CNNs work that we can understand the mind of the FTC app better. We're still gonna need an F in the chat tho.

    Next Steps

    We gotta figure out how to use the autonomous detection of the minerals to path plan.

    Code Post-Mortem after Conrad Qualifier By Arjun and Abhi

    Task: Analyze code failure at the Conrad Qualifier

    Iron Reign has been working hard on our robot, and we expected to do fairly well at our latest competition. However, we couldn't have been more wrong: our robot came in last place in the robot game. While we did win the Inspire Award, we still would like to make some changes to ensure that this doesn't happen again.

    Our autonomous plan was fairly simple: perform sampling, deploy the team marker, then drive to the crater to park. We planned to use the built-in TensorFlow object detection for our sampling, and thus assumed that our autonomous would be fairly easy. Unfortunately, we didn't begin writing the code for our autonomous until the Thursday before our competition.

    On Thursday, I worked on writing a class to help us detect the location of the gold mineral using the built-in TensorFlow object detection. While testing this class, I noticed that it produced an error rather than outputting the location of the gold mineral. This error was not diagnosed until the morning of the competition.

    On Friday, Abhi worked on writing code for the driving part of the autonomous. He wrote three different routines, one for each position of the gold mineral. His code did not yet select which routine to use, leaving it open for us to connect the TensorFlow class to determine which position the gold mineral was in.
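
    The connection we had planned is simple; a sketch of it might look like the following, where the detector and routine names are hypothetical.

    ```java
    // Hypothetical glue between the vision detector and the three routines.
    switch (goldDetector.getLocation()) { // LEFT, CENTER, or RIGHT
        case LEFT:   runLeftRoutine();   break;
        case CENTER: runCenterRoutine(); break;
        case RIGHT:  runRightRoutine();  break;
    }
    ```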

    On Saturday, the morning of the competition, we debugged the TensorFlow class that was written earlier and determined the cause of the error. We had misused the API for the TensorFlow object detection, and after we corrected that, our code didn't spit out an error anymore. Then we realized that TensorFlow only worked at certain camera positions and angles, so we had to adjust the position of our robot on the field until the camera could reliably see the sampling minerals.

    Our code failure was mostly due to the fact that we only started working on our autonomous two days before the competition. Next time, we plan to make our autonomous an integral part of our robot and focus on it much earlier. I plan to fix our autonomous soon, well before our next competition, so that we do not show up without a working autonomous again.

    Next Steps

    Spend more time focusing on code and autonomous, to ensure that we enter our next competition with a fully working autonomous.

    Auto Paths By Abhi

    Task: Map and code auto for depot side start

    We created a bunch of auto paths at the beginning of the season, with the hope of being able to run whichever one suits our alliance partner's capabilities. I decided to spend today completely coding one of them. Since we still didn't have complete vision software, I selected the paths manually so we can integrate vision later without issues. Here are videos of all of the paths. For the sake of debugging, the bot stops after turning towards the crater, but in reality it will drive and park in the far crater.

    Next Steps

    We gotta get vision integrated into the paths.

    Vision Summary By Arjun and Abhi

    Task: Reflect on our vision development

    One of our priorities this season was our autonomous, as a perfect autonomous could score us a considerable amount of points. A large portion of these points come from sampling, so that was one of our main focuses within autonomous. Throughout the season, we developed a few different approaches to sampling.

    Early on in the season, we began experimenting with a Convolutional Neural Network to detect the location of the gold mineral. A Convolutional Neural Network, or CNN, is a machine learning algorithm that uses multiple layers which "vote" on what the output should be based on the output of previous layers. We developed a tool to label training images for use in training a CNN, which we have made publicly available. We then began training a CNN on the training data we labeled. However, our CNN was unable to reach a high accuracy level, despite the time we spent tuning it; a large part of this came down to our lack of training data. We haven't given up on it, though, and we hope to improve this approach in the coming weeks.

    We then turned to other alternatives. At this time, the built-in TensorFlow Object Detection code was released in the FTC SDK. We tried out TensorFlow, but we were unable to use it reliably; our testing revealed that TensorFlow could not always detect the location of the gold mineral. We attempted to modify some of the parameters, but since FIRST provided only the trained model, we were unable to increase its accuracy. We are currently looking to see if we can determine the sampling order even when we only detect some of the sampling minerals. We still have code to use TensorFlow on our robot, but it is only one of a few different vision backends available for selection at runtime.
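
    For reference, using the built-in detector looks roughly like the sketch below. The model asset and label strings come from FIRST's Rover Ruckus sample code; the wrapper class itself is just for illustration and assumes an already-initialized VuforiaLocalizer.

    ```java
    import java.util.List;
    import org.firstinspires.ftc.robotcore.external.ClassFactory;
    import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
    import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
    import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

    public class TFSampler {
        private TFObjectDetector tfod;

        // TFOD pulls its camera frames from an existing Vuforia instance.
        public void init(VuforiaLocalizer vuforia) {
            tfod = ClassFactory.getInstance().createTFObjectDetector(
                    new TFObjectDetector.Parameters(), vuforia);
            tfod.loadModelFromAsset("RoverRuckus.tflite", "Gold Mineral", "Silver Mineral");
            tfod.activate();
        }

        // Returns the horizontal center of the gold mineral, or -1 if not detected.
        public double goldCenterX() {
            List<Recognition> recognitions = tfod.getUpdatedRecognitions();
            if (recognitions == null) return -1;  // null means no new frame yet
            for (Recognition r : recognitions) {
                if ("Gold Mineral".equals(r.getLabel())) {
                    return (r.getLeft() + r.getRight()) / 2.0;
                }
            }
            return -1;
        }
    }
    ```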

    Another vision framework we tried was OpenCV. OpenCV is a collection of vision processing algorithms which can be combined to form powerful pipelines. An OpenCV pipeline performs sequential transformations on its input image until it ends up in a desired form, such as a set of contours marking the boundaries of all minerals detected in the image. We developed an OpenCV pipeline to find the center of the gold mineral given an image of the sampling order. To create our pipeline, we used a tool called GRIP, which lets us visualize and tune the pipeline. However, we have found lighting conditions to greatly influence the quality of detection, so we hope to add LED lights to the top of our phone mount to get consistent lighting on the field, hopefully further improving performance.
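
    A simplified version of that kind of pipeline is sketched below: threshold on the gold mineral's hue, find contours, and take the largest blob's center. The HSV bounds here are illustrative and would need tuning to real field lighting.

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    public class GoldPipeline {
        // Returns the pixel center of the largest gold-colored blob, or null.
        public Point findGoldCenter(Mat input) {
            // Convert to HSV so the yellow-orange hue is easy to threshold.
            Mat hsv = new Mat();
            Imgproc.cvtColor(input, hsv, Imgproc.COLOR_RGB2HSV);

            // Keep only pixels in an (assumed) gold hue range.
            Mat mask = new Mat();
            Core.inRange(hsv, new Scalar(10, 100, 100), new Scalar(30, 255, 255), mask);

            // Trace the boundaries of each surviving blob.
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(),
                    Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

            // Pick the largest contour, assuming it is the gold mineral.
            MatOfPoint largest = null;
            double maxArea = 0;
            for (MatOfPoint c : contours) {
                double area = Imgproc.contourArea(c);
                if (area > maxArea) { maxArea = area; largest = c; }
            }
            if (largest == null) return null;

            Rect box = Imgproc.boundingRect(largest);
            return new Point(box.x + box.width / 2.0, box.y + box.height / 2.0);
        }
    }
    ```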

    Since we wanted to be able to switch easily between these vision backends, we decided to write a modular framework which allows us to swap out vision implementations with ease. As such, we are now able to choose which vision backend we would like to use during the match, with just a single button press. Because of this, we can also work in parallel on all of the vision backends.
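
    The shape of that framework is roughly the following; the interface name and methods here are illustrative rather than our exact API.

    ```java
    import com.qualcomm.robotcore.hardware.HardwareMap;

    // Each vision approach (CNN, TensorFlow, OpenCV) implements this interface,
    // so the opmode can swap backends with a single button press by holding a
    // different VisionProvider instance.
    public interface VisionProvider {
        enum GoldPosition { LEFT, CENTER, RIGHT, NONE }

        void initialize(HardwareMap hardwareMap);
        GoldPosition detect();   // latest belief about the sampling position
        void shutdown();
    }
    ```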

    Next Steps

    We would like to continue improving on and testing our vision software so that we can reliably sample during our autonomous.