Articles by tag: software

    Position Tracking

    Position Tracking By Abhi

    Task: Design a way to track the robot's location

    Throughout the Relic Recovery season, we had many issues with our autonomous being inaccurate, simply because scoring depended on perfectly aligning the robot on the balancing stone. This was prone to error, as evidenced by the numerous matches in which our autonomous failed. Up to this point, we had relied on the encoders on the mecanum chassis to measure distances. Though this worked to a significant degree, the robot was still prone to drift and to running into the glyph pit. We don't know whether glyphs will be reused next season, but we definitely needed a better way of tracking our position on the field.

    After some investigation online and discussion with other teams, I came up with a way to make a tracker. For the sake of testing, we built a small chassis with two perpendicular REV rails. Then, with the help of Iron Reign's new trainees, we attached two omni wheels on opposite sides of the chassis, as seen in the image above. To this, we added axle encoders to track the movement of the omni wheels.

    The axles of these omnis are not driven by any motors because we wanted to avoid any error introduced by the motors themselves. Since the omni wheels are free-spinning, they always roll in whichever direction the robot is actually moving, regardless of what the drive motors are commanding. Therefore, the omni wheels give a generally more accurate reading of position.

    To test the concept, we attached the apparatus to ARGOS. After upgrading the ARGOS code to use the IMU and the omni wheel encoders, we added some basic trigonometry to accurately track the robot's position. The omni setup was relatively accurate and may be used for future projects and robots.
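
    To give a sense of the math involved, here is a minimal sketch of the tracking update, assuming two free-spinning omni encoders mounted perpendicular to each other plus the IMU heading. The class, names, and constants are illustrative, not our actual ARGOS code.

    // Illustrative dead-wheel tracking sketch: one "forward" omni encoder, one
    // "strafe" omni encoder, and the IMU heading. Names and constants are hypothetical.
    public class OmniTracker {
        static final double TICKS_PER_REV = 1440;                    // assumed encoder resolution
        static final double WHEEL_CIRCUMFERENCE_IN = Math.PI * 2.0;  // assumed 2" omni wheel

        double xField = 0, yField = 0;                 // tracked field position, inches
        int lastForwardTicks = 0, lastStrafeTicks = 0;

        // call every loop with fresh encoder counts and the IMU heading in radians
        public void update(int forwardTicks, int strafeTicks, double headingRad) {
            double dForward = (forwardTicks - lastForwardTicks) / TICKS_PER_REV * WHEEL_CIRCUMFERENCE_IN;
            double dStrafe  = (strafeTicks  - lastStrafeTicks)  / TICKS_PER_REV * WHEEL_CIRCUMFERENCE_IN;
            lastForwardTicks = forwardTicks;
            lastStrafeTicks  = strafeTicks;

            // rotate the robot-relative displacement into field coordinates
            xField += dForward * Math.cos(headingRad) - dStrafe * Math.sin(headingRad);
            yField += dForward * Math.sin(headingRad) + dStrafe * Math.cos(headingRad);
        }
    }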

    Next Steps

    Now that we have a prototype to track position without using too many resources, we need to test it on an actual FTC chassis. Depending on whether or not there is terrain in Rover Ruckus, the use of this system will change. Until then, we can still experiment with this and develop a useful multipurpose sensor.

    Replay Autonomous

    Replay Autonomous By Arjun

    Task: Design a program to record and replay a driver run

    One of the difficulties in writing an autonomous program is the long development cycle. We have to unplug the robot controller, plug it into a computer, make a few changes to the code, recompile and download the code, and then retest our program. All this must be done over and over again, until the autonomous is perfected. Each autonomous takes ~4 hours to write and tune. Over the entire season, we spend over 40 hours working on autonomous programs.

    One possible solution for this is to record a driver running through the autonomous, and then replay it. I used this solution on my previous robotics team. Since we had no access to a field, we had to write our entire autonomous at a competition. After some brainstorming, we decided to write a program to record our driver as he ran through our autonomous routine and then execute it during a match. It worked very well, and got us a few extra points each match.

    Using this program, writing an autonomous program is reduced to a matter of minutes. We just need to run through our autonomous routine a few times until we are happy with it, then take the data from the console and paste it into our program. Then we recompile the program and run it.

    There are two parts to our replay program. One part (a tele-op opmode) records the driver's motions and outputs them to the Android console. The other part (an autonomous opmode) reads that data back in and turns it into a working autonomous program.
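
    As a rough sketch of the recording half, assuming we simply log the commanded drive powers once per loop and copy them out of the console afterwards (the opmode, motor names, and log format here are made up for illustration):

    import com.qualcomm.robotcore.eventloop.opmode.OpMode;
    import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
    import com.qualcomm.robotcore.hardware.DcMotor;
    import com.qualcomm.robotcore.util.RobotLog;

    // Illustrative recorder: drive normally while logging one line per loop tick,
    // then dump the whole log to the Android console when the opmode stops.
    @TeleOp(name = "RecordRun")
    public class RecordRun extends OpMode {
        private DcMotor left, right;
        private final StringBuilder log = new StringBuilder();

        @Override
        public void init() {
            left = hardwareMap.get(DcMotor.class, "left");
            right = hardwareMap.get(DcMotor.class, "right");
        }

        @Override
        public void loop() {
            double l = -gamepad1.left_stick_y;
            double r = -gamepad1.right_stick_y;
            left.setPower(l);
            right.setPower(r);
            // one line per loop tick: timestamp, left power, right power
            log.append(String.format("%d,%.2f,%.2f%n", System.currentTimeMillis(), l, r));
        }

        @Override
        public void stop() {
            RobotLog.i(log.toString()); // paste this output into the autonomous opmode
        }
    }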

    Next Steps

    Our current replay program requires one recompilation. While it is very quick, one possible next step is to save the autonomous data straight into the phone's internal memory, so that we do not have to recompile the program. This could further reduce the time required to create an autonomous.
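
    If we go that route, the change is probably as small as writing the same log to a file instead of the console. Continuing the recorder sketch above, it might look roughly like this; the /sdcard/FIRST/ path and CSV format are assumptions, not something we have implemented yet.

    // Hypothetical: save the recorded run to the phone's storage so the autonomous
    // can read it back later without recompiling. Requires java.io imports.
    try (java.io.PrintWriter out = new java.io.PrintWriter(new java.io.FileWriter("/sdcard/FIRST/replay.csv"))) {
        out.print(log.toString());
    } catch (java.io.IOException e) {
        RobotLog.e("Failed to save replay data: " + e.getMessage());
    }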

    One more next step could be a way to easily edit the autonomous. The output data is just a big list of numbers, and it is very difficult to edit it. If we need to tune the autonomous due to wear and tear on the robot, it is difficult to do so without rerecording. If we can figure out a mechanism for editing the generated autonomous, we can further reduce the time we spend creating autonomous programs.

    Upgrading to FTC SDK version 4.0

    Upgrading to FTC SDK version 4.0 By Arjun

    Task: Upgrade our code to the latest version of the FTC SDK

    FTC recently released version 4.0 of their SDK, with initial support for external cameras, better PIDF motor control, improved wireless connectivity, new sensors, and other general improvements. Our code was based on last year's SDK version 3.7, so we needed to merge the new SDK with our repository.

    The merge was slightly difficult, as there were some issues with the Gradle build system. However, after a little fiddling with the configuration, as well as fixing some errors in the internal SDK code we had changed, we were able to successfully merge the new SDK.

    After the merge, we tested that our code still worked on Kraken, last year's competition robot. It ran with no problems.

    Pose BigWheel

    Pose BigWheel By Abhi

    Task: New Pose for Big Wheel robot

    Historically, Iron Reign has used a class called "Pose" to handle all the hardware mapping of our robot instead of putting it directly into our opmodes. This has given us cleaner code and smoother integration with our crazy functions. However, we used the same Pose for the past two years since both robots had an almost identical drive base. Since there wasn't a viable differential drive Pose from past seasons, I made a new one, taking inspiration from the mecanum version.

    We start by initializing everything, including PID constants and all our motors and sensors. I will skip over that in this post since it is repetitive across all team code.

    In the init, I made the hardware mapping for the motors we have on BigWheel right now. Other functions will come in later.
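
    The original post showed our actual init here; as a rough stand-in, the hardware-mapping portion looks something like the sketch below. The motor and sensor names are placeholders, not necessarily what is on BigWheel.

    // Illustrative hardware-mapping init for a BigWheel-style Pose class.
    // Field names ("driveLeft", "elbow", "superman", etc.) are placeholders.
    public void init(HardwareMap hwMap) {
        driveLeft  = hwMap.get(DcMotor.class, "driveLeft");
        driveRight = hwMap.get(DcMotor.class, "driveRight");
        elbow      = hwMap.get(DcMotor.class, "elbow");
        superman   = hwMap.get(DcMotor.class, "superman");

        driveRight.setDirection(DcMotor.Direction.REVERSE); // mirror one side of the drivetrain

        imu = hwMap.get(BNO055IMU.class, "imu");
        BNO055IMU.Parameters params = new BNO055IMU.Parameters();
        params.angleUnit = BNO055IMU.AngleUnit.RADIANS;
        imu.initialize(params);
    }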

    Here is where a lot of the work happens. This is what allows our robot to move accurately using IMU and encoder values.
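
    As a hedged example of the kind of method that lives here, driving a set distance while holding a heading with the IMU might look roughly like this. The gain, conversion factor, and getHeading() helper are invented for illustration; our real methods are tuned and live in the repo.

    // Illustrative IMU-corrected drive: hold a target heading while driving until the
    // encoders say we've covered the requested distance. Returns true when done.
    public boolean driveIMUDistance(double targetHeadingRad, double inches, double power) {
        final double TICKS_PER_INCH = 50;   // assumed conversion
        final double KP = 0.6;              // assumed proportional gain on heading error

        double error = targetHeadingRad - getHeading();  // getHeading() wraps the IMU read
        double correction = KP * error;

        driveLeft.setPower(power - correction);
        driveRight.setPower(power + correction);

        // done when the average encoder count reaches the target distance
        double avgTicks = (driveLeft.getCurrentPosition() + driveRight.getCurrentPosition()) / 2.0;
        if (Math.abs(avgTicks) >= inches * TICKS_PER_INCH) {
            driveLeft.setPower(0);
            driveRight.setPower(0);
            return true;
        }
        return false;
    }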

    There are a lot of other methods beyond these, but there is just a lot of technical trigonometry behind them. I won't bore you with the details, but our code is open source, so you can find the necessary help if you just look at our GitHub!

    RIP CNN

    RIP CNN By Abhi

    Task: Farewell Iron Reign's CNN

    So FTC released a new software update that added TensorFlow support. With it came a class that uses TensorFlow to autonomously detect the minerals. This meant all our progress was undercut by a software update. The silver lining is that we have done enough research into how CNNs work that it will let us understand the mind of the FTC app better. We're still gonna need an F in the chat tho.

    Next Steps

    We gotta figure out how to use the autonomous detection of the minerals to path plan.

    Code Post-Mortem after Conrad Qualifier

    Code Post-Mortem after Conrad Qualifier By Arjun and Abhi

    Task: Analyze code failure at Conrad Qualifier

    Iron Reign has been working hard on our robot, and we expected to do fairly well at our last competition. However, we couldn't have been more wrong: our robot came in last place in the robot game. While we did win the Inspire Award, we still would like to make some changes to ensure that this doesn't happen again.

    Our autonomous plan was fairly simple: perform sampling, deploy the team marker, then drive to the crater to park. We planned to use the built-in TensorFlow object detection for our sampling, and thus assumed that our autonomous would be fairly easy. Unfortunately, we didn't begin writing the code for our autonomous until the Thursday before our competition.

    On Thursday, I worked on writing a class to help us detect the location of the gold mineral using the built-in TensorFlow object detection. While testing this class, I noticed that it produced an error rather than outputting the location of the gold mineral. This error was not diagnosed until the morning of the competition.

    On Friday, Abhi worked on writing code for the driving part of the autonomous. He wrote three different autonomous routines, one for each position of the gold mineral. His code did not select the routine to use yet, leaving it open for us to connect to the TensorFlow class to determine which position the gold mineral was in.

    On Saturday, the morning of the competition, we debugged the TensorFlow class written earlier and determined the cause of the error. We had misused the API for the TensorFlow object detection, and after we corrected that, our code didn't spit out an error anymore. Then, we realized that TensorFlow only worked at certain camera positions and angles, so we had to adjust the position of our robot on the field so that the camera could actually see the minerals.

    Our code failure was mostly due to the fact that we only started working on our autonomous two days before the competition. Next time, we plan to make our autonomous an integral part of our robot and focus on it much earlier. I plan to fix our autonomous soon, well before our next competition, so that we do not arrive at our next competition without a working autonomous.

    Next Steps:

    Spend more time focusing on code and autonomous, to ensure that we enter our next competition with a fully working autonomous.

    Auto Paths

    Auto Paths By Abhi

    Task: Map and code auto for depot side start

    So we created a bunch of auto paths at the beginning of the season, with the hope of being able to run whichever one we want and thereby adapt to our alliance partner's capabilities. I decided to spend today completely coding one of them. Since we still didn't have complete vision software, I made these paths manually so we can integrate vision later without issues. Here are videos of all of the paths. For the sake of debugging, the bot stops after turning towards the crater, but in reality it will drive and park in the far crater.

    Center

    Left

    Right

    Next Steps

    We gotta get vision integrated into the paths.

    Vision Summary

    Vision Summary By Arjun and Abhi

    Task: Reflect on our vision development

    One of our priorities this season was our autonomous, as a perfect autonomous could score us a considerable amount of points. A large portion of these points come from sampling, so that was one of our main focuses within autonomous. Throughout the season, we developed a few different approaches to sampling.

    Early on in the season, we began experimenting with using a Convolutional Neural Network to detect the location of the gold mineral. A Convolutional Neural Network, or CNN, is a machine learning algorithm that uses multiple layers which "vote" on what the output should be based on the output of previous layers. We developed a tool to label training images for use in training a CNN, publicly available at https://github.com/arjvik/MineralLabler. We then began training a CNN with the training data we labeled. However, our CNN was unable to reach a high accuracy level, despite the time we spent tuning it. A large part of this came down to our lack of training data. We haven't given up on it, though, and we hope to improve this approach in the coming weeks.

    We then turned to other alternatives. Around this time, the built-in TensorFlow Object Detection code was released in the FTC SDK. We tried out TensorFlow, but we were unable to use it reliably: our testing revealed that it was not always able to detect the location of the gold mineral. We attempted to modify some of the parameters; however, since only the trained model was provided to us by FIRST, we were unable to increase its accuracy. We are currently looking to see if we can determine the sampling order even if we only detect some of the sampling minerals. We still have code to use TensorFlow on our robot, but it is only one of a few different vision backends available for selection at runtime.
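
    For reference, our use of the built-in detector roughly follows the SDK's Rover Ruckus sample opmode. A trimmed sketch of that flow, assuming a Vuforia localizer has already been created, looks something like this:

    // Trimmed sketch of the SDK's built-in TensorFlow detection, loosely following
    // the official ConceptTensorFlowObjectDetection sample. Assumes `vuforia` exists.
    final String TFOD_MODEL_ASSET = "RoverRuckus.tflite";
    final String LABEL_GOLD = "Gold Mineral";
    final String LABEL_SILVER = "Silver Mineral";

    int viewId = hardwareMap.appContext.getResources().getIdentifier(
            "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
    TFObjectDetector.Parameters params = new TFObjectDetector.Parameters(viewId);
    TFObjectDetector tfod = ClassFactory.getInstance().createTFObjectDetector(params, vuforia);
    tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABEL_GOLD, LABEL_SILVER);
    tfod.activate();

    // each loop: grab any new recognitions and look for the gold mineral
    List<Recognition> recognitions = tfod.getUpdatedRecognitions();
    if (recognitions != null) {
        for (Recognition r : recognitions) {
            if (r.getLabel().equals(LABEL_GOLD)) {
                double goldCenterX = (r.getLeft() + r.getRight()) / 2;
                // compare goldCenterX against screen thirds to decide left/center/right
            }
        }
    }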

    Another vision framework we tried was OpenCV. OpenCV is a collection of vision processing algorithms which can be combined to form powerful pipelines. An OpenCV pipeline performs sequential transformations on its input image until it ends up in a desired form, such as a set of contours or the boundaries of all minerals detected in the image. We developed an OpenCV pipeline to find the center of the gold mineral given an image of the sampling order. To create our pipeline, we used a tool called GRIP, which allows us to visualize and tune it. However, since we have found that lighting conditions greatly influence the quality of detection, we hope to add LED lights to the top of our phone mount so we can get consistent lighting on the field, hopefully further improving performance.
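
    A simplified version of the kind of pipeline GRIP generates for us is sketched below: threshold the image in HSV for the gold color, find contours, and take the centroid of the largest blob. The threshold values and class name are placeholders, not our tuned pipeline.

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.imgproc.Moments;
    import java.util.ArrayList;
    import java.util.List;

    // Simplified OpenCV pipeline sketch: HSV threshold for gold, find contours,
    // return the centroid of the largest blob (or null if nothing is found).
    public class GoldPipelineSketch {
        public Point findGoldCenter(Mat input) {
            Mat hsv = new Mat(), mask = new Mat();
            Imgproc.cvtColor(input, hsv, Imgproc.COLOR_RGB2HSV);
            Core.inRange(hsv, new Scalar(10, 100, 100), new Scalar(30, 255, 255), mask); // placeholder bounds

            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL,
                    Imgproc.CHAIN_APPROX_SIMPLE);

            MatOfPoint largest = null;
            double maxArea = 0;
            for (MatOfPoint c : contours) {
                double area = Imgproc.contourArea(c);
                if (area > maxArea) { maxArea = area; largest = c; }
            }
            if (largest == null) return null;

            Moments m = Imgproc.moments(largest);
            return new Point(m.get_m10() / m.get_m00(), m.get_m01() / m.get_m00());
        }
    }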

    Since we wanted to be able to switch easily between these vision backends, we decided to write a modular framework which allows us to swap out vision implementations with ease. As such, we are now able to choose which vision backend we would like to use during the match, with just a single button press. Because of this, we can also work in parallel on all of the vision backends.
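
    The framework itself boils down to a small interface that every backend implements, plus a way to cycle through the implementations from the gamepad. Here is a sketch of the idea; the names are illustrative, and the real classes are on our GitHub.

    // Sketch of the modular vision idea: every backend implements the same small
    // interface, so opmodes can swap implementations at runtime with one button.
    public enum GoldPosition { LEFT, MIDDLE, RIGHT, UNKNOWN }

    public interface VisionBackend {
        void initialize(HardwareMap hardwareMap);
        GoldPosition detect();   // latest sampling result, UNKNOWN if nothing seen yet
        void shutdown();
    }

    // In the opmode: keep an array of backends and cycle through them on a button press, e.g.
    //   VisionBackend[] backends = { new TensorFlowBackend(), new OpenCVBackend(), new DummyBackend() };
    //   if (gamepad1.x) current = (current + 1) % backends.length;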

    Next Steps

    We would like to continue improving on and testing our vision software so that we can reliably sample during our autonomous.

    Code Updates

    Code Updates By Abhi and Arjun

    Task: Week before competition changes

    Oh boy! It is almost time for competition, and with that comes a super duper autonomous. For the past couple of weeks and today, we focused on making our depot-side auto rock solid. Because our robot wasn't fully built, we couldn't do auto-delatching. Today, we integrated our vision pipelines into the auto and tested all the paths with vision. They seemed to work at home base, but the field we have isn't built to exact specifications.

    Next Steps

    At Wylie, we will have to tune auto paths so they actually work.

    Competition Day Code

    Competition Day Code By Abhi and Arjun

    Task: Update our code

    Woo hoo! We finally made it to the Wylie qualifier and advanced to regionals. However, a lot happened at the competition itself. The reason so many code changes needed to happen at the competition was that our robot actually broke the night before. Rather than staying up while the builders fixed the robot, I decided to rest and do some super duper work the following day.

    The first thing that happened was that the belt code was added. Previously, we had relied on gravity and the polycarb locks we had on the slides, but we quickly realized that the slides needed to articulate in order to not break Superman. As a result, we added the belts into our collector class and used the encoders to control them.
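
    A minimal sketch of what that belt code amounts to, assuming RUN_TO_POSITION control on the belt motor (the motor name, target, and power are placeholders):

    // Illustrative belt control inside the collector class: drive the belt motor to
    // an encoder target instead of relying on gravity and the polycarb locks.
    public void setBeltTarget(int targetTicks) {
        beltMotor.setTargetPosition(targetTicks);              // placeholder target
        beltMotor.setMode(DcMotor.RunMode.RUN_TO_POSITION);
        beltMotor.setPower(0.8);                               // placeholder power
    }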

    Next, we added manual overrides for all functions of our robot. Simply due to lack of time, we didn't add any presets and I was focusing on just making the robot functional enough for competition. During competition, Karina was actually able to latch during endgame with purely the manual overrides :)

    Finally, we did a lot of auto path tuning. We ended up using a modified DogeCV pipeline, and we were able to accurately detect the gold mineral at all times. However, our practice field wasn't set up to the exact specifications needed, so I spent the majority of my day at the Wylie practice field tuning the depot-side auto (by the end of the day it worked almost perfectly every time).

    Next Steps

    We were very lucky to have qualified early in the season, so I could use this competition as just a code-testing day. However, it will be very hard to keep proceeding like this, so I hope we can have a build freeze to work on code.

    Code Updates

    Code Updates By Abhi

    Task: DISD STEM EXPO

    On today's episode of code changes, we have some cool stuff to look at.

    The picture above is a good representation of our work today. After making sure all the manual drive controls were working, I asked Karina to find the positions she liked for intake, deposit, and latch. Taking those encoder values from telemetry, I created new methods for the robot to run to those positions. As a result, the robot was very functional. We could latch onto the lander in 10 seconds (a much faster endgame than we had ever done).
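
    The presets are essentially just recorded encoder targets; a hedged sketch of one such method is below. The tick counts and motor names are placeholders, not the actual positions Karina picked.

    // Illustrative preset: drive the arm motors to encoder values recorded from
    // telemetry while the driver held the desired position. Numbers are placeholders.
    public void goToLatchPosition() {
        elbow.setTargetPosition(1450);      // placeholder tick count
        superman.setTargetPosition(620);    // placeholder tick count
        elbow.setMode(DcMotor.RunMode.RUN_TO_POSITION);
        superman.setMode(DcMotor.RunMode.RUN_TO_POSITION);
        elbow.setPower(0.9);
        superman.setPower(0.9);
    }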

    Next Steps

    The code is still a little messy so we will have to do further testing before any competition.

    Code Updates

    Code Updates By Abhi

    Task: Woodrow Scrimmage

    Welcome back folks! Today on Code Updates I will tell you all about our changes at Woodrow!

    Over the past few days, I restructured our code so that all of our tele-op behaviors live in specialized functions. Today, I was able to test all these changes as well as add some new stuff. We decided to switch out our nylon latch for a forged hook (pretty cool, right?), so I integrated that servo as well.

    Next Steps

    Great day for changes! We need to do some more code cleaning before we are ready!

    Autonomous Non-Blocking State Machines

    Autonomous Non-Blocking State Machines By Arjun

    Task: Design a state machine class to make autonomous easier

    In the past, writing autonomous routines was an exercise in patience. The way we structured them was tedious and difficult to change. Adding one step to the beginning of an autonomous would require changing the indexes of every single step afterwards, which could take a long time depending on the size of the routine. In addition, simple typos could go undetected and cause lots of problems. Finally, there was a lot of repetitive code, making our routines over 400 lines long.

    In order to remedy this, we decided to create a state machine class that takes care of the repetitive parts of our autonomous code. We created a StateMachine class, which allows us to build autonomous routines as sequences of "states", or individual steps. This new state machine system makes autonomous routines much easier to code and tune, as well as removing the possibility for small bugs. We also were able to shorten our code by converting it to the new system, reducing each routine from over 400 lines to approximately 30 lines.

    Internally, StateMachine uses instances of the functional interface State (or one of its subclasses: SingleState for states that only need to run once, TimedState for states that run on a timer, and MineralState for states that do different things depending on the sampling order). Using a functional interface lets us use lambdas, which further reduce the length of our code. When it is executed, the state machine takes the current state and runs it. If the state is finished, the current state index (stored in a class called Stage) is incremented, and a state-switch action is run, which stops all motors.
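
    To make that concrete, here is a simplified sketch of the functional interface and one of its wrappers. The real classes in our repository have more to them; this just shows the shape.

    // Simplified sketch of the functional interface behind the StateMachine.
    @FunctionalInterface
    public interface State {
        boolean run(); // advance this step by one iteration; return true when finished
    }

    // SingleState wraps an action that only needs to happen once
    public class SingleState implements State {
        private final Runnable action;
        public SingleState(Runnable action) { this.action = action; }
        @Override
        public boolean run() {
            action.run();
            return true; // finished after a single execution
        }
    }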

    Here is an autonomous routine which has been converted to the new system:

    private StateMachine auto_depotSample = getStateMachine(autoStage)
                .addNestedStateMachine(auto_setup) //common states to all autonomous
                .addMineralState(mineralStateProvider, //turn to mineral, depending on mineral
                        () -> robot.rotateIMU(39, TURN_TIME), //turn left
                        () -> true, //don't turn if mineral is in the middle
                        () -> robot.rotateIMU(321, TURN_TIME)) //turn right
                .addMineralState(mineralStateProvider, //move to mineral
                        () -> robot.driveForward(true, .604, DRIVE_POWER), //move more on the sides
                        () -> robot.driveForward(true, .47, DRIVE_POWER), //move less in the middle
                        () -> robot.driveForward(true, .604, DRIVE_POWER))
                .addMineralState(mineralStateProvider, //turn to depot
                        () -> robot.rotateIMU(345, TURN_TIME),
                        () -> true,
                        () -> robot.rotateIMU(15, TURN_TIME))
                .addMineralState(mineralStateProvider, //move to depot
                        () -> robot.driveForward(true, .880, DRIVE_POWER),
                        () -> robot.driveForward(true, .762, DRIVE_POWER),
                        () -> robot.driveForward(true, .890, DRIVE_POWER))
                .addTimedState(4, //turn on intake for 4 seconds
                        () -> robot.collector.eject(),
                        () -> robot.collector.stopIntake())
                .build();
    

    Big Wheel Articulations

    Big Wheel Articulations By Abhi

    Task: Summary of all Big Wheel movements

    The two main subsystems of our robot this year are the Superman arm and the elbows. Because so much of our robot relies on these two working together, we paired them in our Pose class and created a set of methods to articulate the subsystems. The presets were created by using manual control to find the encoder values we wanted and then building a set of state machines to move to them.

    The reason why each articulation is necessary and complicated is that we have to manage our center of gravity. When moving our arm, the center of gravity moves with it since the lift is so heavy. When transitioning from intake to deposit, we can't go directly because our robot would tip over, so we have to go to safe drive first and then deposit. We are still managing this constantly with driver control, but we hope to handle it automatically with distance sensors in the future.

    The position seen above is called "safe drive". During normal match play, our drivers can go to this position to navigate the field quickly and with the arm out of the way.

    When the driver control period starts, we normally navigate to the crater then enter the intake position shown above. From this position, we can safely pick up minerals from the crater.

    From the intake position, the robot goes to safe drive to fix the weight balance then goes to the deposit position shown above. The arm can still extend upwards above the lander and our automatic sorter can place the minerals appropriately.

    During the end game, we enter a latchable position where our hook can easily slide into the latch. After hooking on, our robot can lift itself slightly off the ground.

    At the beginning of the match, we can completely close the arm and Superman to fit within the sizing cube and latch onto the lander.

    As you can see, there are a lot of articulations that need to work together over the course of the match. By putting this info in a state machine, we can easily toggle between articulations; a simplified sketch of the idea is below. Refer to our code snippets for more details.
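
    In code, the articulations boil down to an enum plus a dispatch method inside our Pose class. This is a simplified sketch; the helper methods and the routing through safe drive are placeholders for the real state machines.

    // Simplified sketch of how the articulations are toggled. The real state machine
    // also handles timing and intermediate steps; the helpers here are hypothetical.
    public enum Articulation {
        CLOSED, SAFE_DRIVE, INTAKE, DEPOSIT, LATCH
    }

    public void articulate(Articulation target) {
        switch (target) {
            case INTAKE:     goToIntakePosition();  break;
            case DEPOSIT:    goToSafeDrive();       // pass through safe drive first to manage the CoG
                             goToDepositPosition(); break;
            case LATCH:      goToLatchPosition();   break;
            case SAFE_DRIVE: goToSafeDrive();       break;
            case CLOSED:     foldForSizingCube();   break;
        }
    }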

    Next Steps

    At this point, we have 4 cycles in 1 minute 30 seconds. By adding some upgrades to the articulations using our new distance sensors, we hope to speed this up even more.

    Cart Hack

    Cart Hack By Arjun

    Task: Tweaking ftc_app to allow us to drive robots without a Driver Station phone

    As you already know, Iron Reign has a mechanized cart called Cartbot that we bring to competitions. We used the FTC control system to build it, so we could gain experience. However, this has one issue: we can only use one pair of Robot Controller and Driver Station phones at a competition, because of WiFi interference problems.

    To avoid this pitfall, we decided to tweak the ftc_app our team uses to allow us to plug in a controller directly into the Robot Controller. This cuts out the need for a Driver Station phone, which means that we can drive around Cartbot without worrying about breaking any rules.

    Another use for this tweak could be for testing, since with this new system we don't need a Driver Station when we are testing our tele-op.

    As of now this modification lives in a separate branch of our code, since we don't know how it may affect our match code. We hope to merge this later once we confirm it doesn't cause any issues.