Articles by tag: control


    Swerve Drive Experiment

    Swerve Drive Experiment By Abhi

    Task: Consider a Swerve Drive base

    Throughout the Relic Recovery season, we saw many robots, both in and outside our region, that used a swerve drive. Iron Reign had never considered a swerve drive before, but seeing all those robots made me want to find out whether one was feasible for us. One motivation was that I didn't like how slow mecanum drives were; swerve drives generally use traction wheels and reach higher speeds than a mecanum drive usually can. It also seemed that swerve could provide the same mobility a mecanum drive gave us. That is why I wanted to consider the possibility of a swerve drive and investigate further.

    I first came across the PRINT swerve for FTC by team 9773. They had a very detailed explanation of all the parts and assembly tools. After reading into it more, I decided that the system they created wasn't the best fit for us. First, the final cost of the drive train was very high; we did not have a large budget despite help from our sponsors. If this drive train didn't work for some reason after playing with it over the summer, or if the chassis didn't make sense to use in Rover Ruckus, we would have almost no money left for an alternate drive train, since we wanted to preserve Kraken. Also, the parts used by 9773 involved X-rail rather than extrusion rail from REV. This would cause problems down the road, as we would need to redesign the REVolution system for X-rail. In the end, I decided this was not worth pursuing.

    After further investigation, I found a chassis by team 9048. The swerve they developed looked like a more feasible option. By using REV rail and many of the parts we already had, I thought this could be a possible prototype for Iron Reign. Because they didn't have a parts list, we had to put together a rough cost estimate from the REV and AndyMark websites. Upon further analysis, we realized that the cost, though cheaper than the chassis of 9773, would still be a considerable chunk of our budget. But I am still motivated to find a way to make this happen.

    Next Steps

    Possibly scavenge for parts in the house and Robodojo to make swerve modules.

    Swerve Drive Prototype

    Swerve Drive Prototype By Abhi and Christian

    Task: Build a Swerve Drive base

    During the discussion about swerve drive, Imperial Robotics, our sister team, was also interested in the design. Since we needed to conserve resources, I worked with Christian and another member of Imperial to prototype a drive train.

    Due to the limited resources, we decided to use Tetrix parts since we had an abundance of those. We designed the swerve so that a servo would turn each swerve module and the motors would be attached directly to the wheels, with the whole system mounted on a square base. We went ahead and built the base.

    Immediately we noticed it was very feeble. The servos were working very hard to turn the heavy modules, and the motors had trouble staying aligned. Programming the drive train was also a challenge. After further experimenting, the base even broke. This was a moment of realization: not only was swerve expensive and complicated, we would also need to be able to replace a module very quickly at competition, which would demand more spare parts and an immaculate design. With all these considerations, I ultimately decided that swerve wasn't worth using as our drive chassis.

    Next Steps

    Wait until Rover Ruckus starts so that we can think of a new chassis.

    Position Tracking

    Position Tracking By Abhi

    Task: Design a way to track the robot's location

    Throughout the Relic Recovery season, we had many issues with our autonomous being inaccurate, simply because scoring depended on perfectly aligning the robot on the balancing stone. This was prone to many issues, as evidenced by numerous matches in which our autonomous failed. Thus far, we had relied on the encoders on the mecanum chassis to measure distances. Though this worked to a significant degree, the bot was still prone to error from drift and from running into the glyph pit. We don't know whether glyphs will be reused or not, but we definitely needed a better way to track the robot on the field to be more efficient.

    After some investigation online and discussions with other teams, I thought of a way to build a tracker. For the sake of testing, we built a small chassis with two perpendicular REV rails. Then, with the help of new trainees for Iron Reign, we attached two omni wheels on opposite sides of the chassis, as seen in the image above. To this, we added axle encoders to track the movement of the omni wheels.

    The axles of these omnis were not driven by any motors because we wanted to avoid any error from the motors themselves. Since the omni wheels are free-spinning, no matter what the drive encoders read, the omni wheels always move in whichever direction the robot is moving. Therefore, the omni wheels generally give a more accurate reading of position.

    To test the concept, we attached the apparatus to ARGOS. After upgrading the ARGOS code to use the IMU and the omni wheel encoders, we added some basic trigonometry to accurately track position. The omni setup was relatively accurate and may be used for future projects and robots.
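
    To give an idea of the trigonometry involved, here is a minimal dead-reckoning sketch, assuming two free-spinning omni pods mounted perpendicular to each other (one tracking forward motion, one tracking strafe) and an IMU heading in radians; the class name and the ticks-per-centimeter constant are made up for illustration, not taken from the ARGOS code.

    // Minimal odometry sketch: convert encoder deltas from two perpendicular,
    // free-spinning omni wheels into a field-relative position using the IMU heading.
    public class OmniTracker {
        static final double TICKS_PER_CM = 13.7; // hypothetical conversion factor

        private double x, y; // estimated field position in cm
        private int lastForwardTicks, lastStrafeTicks;

        /** Call every loop with fresh encoder counts and the current IMU heading (radians). */
        public void update(int forwardTicks, int strafeTicks, double heading) {
            double dForward = (forwardTicks - lastForwardTicks) / TICKS_PER_CM;
            double dStrafe = (strafeTicks - lastStrafeTicks) / TICKS_PER_CM;
            lastForwardTicks = forwardTicks;
            lastStrafeTicks = strafeTicks;

            // Rotate the robot-relative displacement into field coordinates.
            x += dForward * Math.cos(heading) - dStrafe * Math.sin(heading);
            y += dForward * Math.sin(heading) + dStrafe * Math.cos(heading);
        }

        public double getX() { return x; }
        public double getY() { return y; }
    }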

    Next Steps

    Now that we have a prototype to track position without using too many resources, we need to test it on an actual FTC chassis. Depending on whether or not there is terrain in Rover Ruckus, the use of this system will change. Until then, we can still experiment with this and develop a useful multipurpose sensor.

    Replay Autonomous

    Replay Autonomous By Arjun

    Task: Design a program to record and replay a driver run

    One of the difficulties in writing an autonomous program is the long development cycle. We have to unplug the robot controller, plug it into a computer, make a few changes to the code, recompile and download the code, and then retest our program. All this must be done over and over again, until the autonomous is perfected. Each autonomous takes ~4 hours to write and tune. Over the entire season, we spend over 40 hours working on autonomous programs.

    One possible solution for this is to record a driver running through the autonomous, and then replay it. I used this solution on my previous robotics team. Since we had no access to a field, we had to write our entire autonomous at a competition. After some brainstorming, we decided to write a program to record our driver as he ran through our autonomous routine and then execute it during a match. It worked very well, and got us a few extra points each match.

    With this program, writing an autonomous program is reduced to a matter of minutes. We just need to run through our autonomous routine a few times until we are happy with it, then take the data from the console and paste it into our program. Then we recompile the program and run it.

    There are two parts to our replay program. One part (a Tele-op Opmode) records the driver's motions and outputs it into the Android console. The next part (an Autonomous Opmode) reads in that data, and turns it into a working autonomous program.
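
    As a rough illustration of the recording half (not our exact opmode; the motor names, the 50 ms sample interval, and the log format are assumptions), the tele-op logs the drive powers at a fixed interval so the resulting lines can be pasted into the autonomous and played back on the same timing:

    import com.qualcomm.robotcore.eventloop.opmode.OpMode;
    import com.qualcomm.robotcore.hardware.DcMotor;
    import com.qualcomm.robotcore.util.RobotLog;

    // Sketch of the recording tele-op: drive normally while power samples are written
    // to the log, ready to be copied into a replay autonomous as a hard-coded list.
    public class RecordingTeleop extends OpMode {
        private DcMotor left, right;
        private long lastSample = 0;

        @Override
        public void init() {
            left = hardwareMap.dcMotor.get("left");
            right = hardwareMap.dcMotor.get("right");
        }

        @Override
        public void loop() {
            double l = -gamepad1.left_stick_y;
            double r = -gamepad1.right_stick_y;
            left.setPower(l);
            right.setPower(r);

            // Every 50 ms, emit one sample; the replay opmode steps through these
            // values on the same 50 ms cadence.
            if (System.currentTimeMillis() - lastSample >= 50) {
                lastSample = System.currentTimeMillis();
                RobotLog.d("REPLAY," + l + "," + r);
            }
        }
    }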

    Next Steps

    Our current replay program requires one recompilation. While it is very quick, one possible next step is to save the autonomous data straight into the phone's internal memory, so that we do not have to recompile the program. This could further reduce the time required to create an autonomous.

    One more next step could be a way to easily edit the autonomous. The output data is just a big list of numbers, and it is very difficult to edit it. If we need to tune the autonomous due to wear and tear on the robot, it is difficult to do so without rerecording. If we can figure out a mechanism for editing the generated autonomous, we can further reduce the time we spend creating autonomous programs.

    Rover Ruckus Brainstorming & Initial Thoughts

    Rover Ruckus Brainstorming & Initial Thoughts By Ethan, Charlotte, Kenna, Evan, Abhi, Arjun, Karina, and Justin

    Task: Come up with ideas for the 2018-19 season

    So, today was the first meeting in the Rover Ruckus season! On top of that, we had our first round of new recruits (20!). So, it was an extremely hectic session, but we came up with a lot of new ideas.

    Building

    • A One-way Intake System
    • This suggestion uses a plastic flap to "trap" game elements inside it, similar to the lid of a soda cup. You can put marbles through the straw-hole, but you can't easily get them back out.
    • Crater Bracing
    • In the past, we've had center-of-balance issues with our robot. To counteract this, we plan to attach shaped braces to our robot such that it can hold on to the walls and not tip over.
    • Extendable Arm + Silicone Grip
    • This one is simple - a linear slide arm attached to a motor so that it can pick up game elements and rotate. We fear, however, that many teams will adopt this strategy, so we probably won't do it. One unique part of our design would be the silicone grips, so that the "claws" can firmly grasp the silver and gold.
    • Binder-ring Hanger
    • When we did Res-Q, we dropped our robot more times than we'd like to admit. To prevent that, we're designing an interlocking mechanism that the robot can use to hang. It'll have an indent and a corresponding recess that resists lateral force by nature of the indent, but can be opened easily.
    • Passive Intake
    • Inspired by a few FRC Stronghold intake systems, we designed a passive intake. Attached to a weak spring, it would have the ability to move over game elements before falling back down to capture them. The benefit of this design is that we wouldn't have to use an extra motor for intake, but we risk controlling more than two elements at the same time.
    • Mecanum
    • Mecanum is our Ol' Faithful. We've used it for the past three years, so we're loath to abandon it for this year. It's still a good idea for this year, but strafing isn't as important, and we may need to emphasize speed instead. Plus, we're not exactly sure how to get over the crater walls with mecanum.
    • Tape Measure
    • In Res-Q, we used a tape-measure system to pull our robot up, and we believe that we could do the same again this year. One issue is that our tape measure system is ridiculously heavy (~5 lbs) and with the new weight limits, this may not be ideal.
    • Mining
    • We're currently thinking of a "mining mechanism" that can score two minerals at a time extremely quickly in exchange for not being able to climb. It'll involve a conveyor belt and a set of linear slides such that the objects in the crater can automatically be transferred to either the low-scoring zone or the higher one.

    Journal

    This year, we may switch to weekly summaries instead of meeting logs so that our journal is more reasonable for judges to read. In particular, we were inspired by team Nonstandard Deviation, which has an amazing engineering journal that we recommend readers check out.

    Programming

    Luckily, this year seems to have a more-easily programmed autonomous. We're working on some autonomous diagrams that we'll release in the next couple weeks. Aside from that, we have such a developed codebase that we don't really need to update it any further.

    Next Steps

    We're going to prototype these ideas in the coming weeks and develop our thoughts more thoroughly.

    Vision Discussion

    Vision Discussion By Arjun and Abhi

    Task: Consider potential vision approaches for sampling

    Part of this year’s game requires us to detect the location of minerals on the field. The main use for this is sampling. During autonomous, we need to move only the gold mineral, without touching the silver minerals, in order to earn points for sampling. There are a few ways we could detect the location of the gold mineral.

    First, we could use OpenCV to run transformations on the image that the camera sees. We would have to design an OpenCV pipeline which identifies yellow blobs, filters out those that aren’t minerals, and finds the centers of the blobs which are minerals. This is most likely the approach that many teams will use. The benefit of this approach is that it is easy enough to write. However, it may not work in lighting conditions that were not tested during the design of the OpenCV pipeline.
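
    A minimal sketch of the kind of pipeline we have in mind, using OpenCV's Java bindings, is shown below; the HSV bounds and the minimum blob area are placeholder values that would need tuning, and the class name is made up.

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: threshold for yellow, find contours, and return the center of the largest blob.
    public class GoldBlobFinder {
        /** Returns the center of the largest yellow blob, or null if none is found. */
        public static Point findGold(Mat rgbFrame) {
            Mat hsv = new Mat();
            Imgproc.cvtColor(rgbFrame, hsv, Imgproc.COLOR_RGB2HSV);

            // Placeholder bounds for "gold" yellow; these would be tuned on the field.
            Mat mask = new Mat();
            Core.inRange(hsv, new Scalar(15, 100, 100), new Scalar(35, 255, 255), mask);

            // Keep only the largest contour that is big enough to plausibly be a mineral.
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

            MatOfPoint best = null;
            double bestArea = 500; // minimum area in pixels; placeholder
            for (MatOfPoint c : contours) {
                double area = Imgproc.contourArea(c);
                if (area > bestArea) {
                    bestArea = area;
                    best = c;
                }
            }
            if (best == null) return null;

            Rect box = Imgproc.boundingRect(best);
            return new Point(box.x + box.width / 2.0, box.y + box.height / 2.0);
        }
    }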

    Another approach is to use Convolutional Neural Networks (CNNs) to identify the location of the gold mineral. Convolutional Neural Networks are a class of machine learning algorithms that “learn” to find patterns in images by looking at large numbers of samples. In order to develop a CNN to identify minerals, we must take lots of photos of the sampling setup in different arrangements (and lighting conditions), and then manually label them. Then, the algorithm will “learn” how to differentiate gold minerals from other objects on the field. A CNN should be able to work in many different lighting conditions; however, it is also more difficult to write.

    Next Steps

    As of now, Iron Reign is going to attempt both methods of classification and compare their performance.

    CNN Training

    CNN Training By Arjun and Abhi

    Task: Capture training data for a Convolutional Neural Network

    In order to train a Convolutional Neural Network, we need a whole bunch of training images. So we got out into the field, and took 125 photos of the sampling setup in different positions and angles. Our next step is to label the gold minerals in all of these photos, so that we can train a Convolutional Neural Network to label the gold minerals by learning from the patterns of the training data.

    Next Steps

    Next, we will go through and designate gold minerals. In addition, we must create a program to process these.

    Autonomous Path Planning

    Autonomous Path Planning By Abhi

    Task: Map Autonomous paths

    Ahhhhhhh! Rover Ruckus has been around for a while now and it's time to figure out our autonomous plans! This year's autonomous is a lot more hectic than last year's, since there is the detaching-from-the-lander step, which can take an unknown amount of time right now. That cuts down the potential speed for the rest of the auto, but only time will tell.

    Until then, I can only dream of the potential autonomous paths our robot can take. First, we need to know some basic facts. One, the field is exactly the same for both the red and blue alliance, meaning I don't need to rewrite the code to act on the other side of the field. Second, we have to account for our alliance partner's autonomous if they have one, and adapt to their path so we don't crash into them. Third, we have to avoid the other alliance's bots to avoid penalties. There are no explicit boundaries this year for auto, but if we somehow interrupt the opponent's auto we get heavily penalized. Now, with this in mind, let's look at these paths.

    This path plan is the simplest of all the autonomi. I assume that our alliance partner has an autonomous and our robot only takes care of half the functions. It starts with a simple detach from the lander, then sampling the proper mineral, deploying the team marker, and parking in the crater. The reason I chose the opposite crater instead of the one on our near side was that it is a shorter distance and there is less chance of messing with our alliance partner. The issue with this plan is that it may interfere with the opponent's autonomous, but if we drive strategically, hugging the wall, we shouldn't have issues.

    This path is also a "simple" path but is obviously more complicated. The issue is that the team marker depot is not on the same side as the lander, forcing us to drive all the way down and back to park in the crater. I could also change this one to go to the opposite crater, but that may interfere with our alliance partner's code.

    This is one of the autonomi that assumes our alliance partners don't have an autonomous and is built for multifunctionality. The time restriction makes this autonomous unlikely, but it is still nice to plan out a path for it.

    This is also one of the autonomi that assumes our alliance partners don't have an autonomous. This is the simpler of the two methods, but it still has the same restrictions.

    Next Steps

    Although it's great to think these paths will actually work out in the end, we might need to change them a lot. With potential collisions with alliance partners and opponents, we might need a drop-down menu of sorts on the driver station that lets us put together a lot of different pieces so we can pick and choose the auto plan. Maybe we could even draw out the path in init. All this is only at the speculation stage right now.

    CNN Training Program

    CNN Training Program By Arjun and Abhi

    Task: Designing a program to label training data for our Convolutional Neural Network

    In order to use the captured training data, we need to label it by identifying the location of the gold mineral in each image. We also need to normalize it by resizing the training images to a constant size (320x240 pixels). While we could do this by hand, it would be a pain: we would have to resize each individual picture, identify the coordinates of the center of the gold mineral, and then create a file to store the resized image and coordinates.

    Instead of doing this, we decided to write a program to do this for us. That way, we could just click on the gold mineral on the screen, and the program would do the resizing and coordinate-finding for us. Thus, the process of labeling the images will be much easier.
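
    A rough sketch of what such a tool boils down to is shown below; this is not the actual program, and the class name, file layout, and output format are made up for illustration. The core steps are resizing the image to 320x240 and scaling the clicked gold-mineral location into the resized coordinate space.

    import javax.imageio.ImageIO;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.FileWriter;

    // Sketch of the labeling step: resize one training image and record the clicked
    // gold-mineral location, scaled to the resized image.
    public class LabelHelper {
        static final int OUT_W = 320, OUT_H = 240;

        public static void label(File input, int clickX, int clickY, File outDir) throws Exception {
            BufferedImage original = ImageIO.read(input);

            // Resize to the constant training size.
            BufferedImage resized = new BufferedImage(OUT_W, OUT_H, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = resized.createGraphics();
            g.drawImage(original, 0, 0, OUT_W, OUT_H, null);
            g.dispose();

            // Scale the click coordinates from the original image into the resized one.
            int labelX = clickX * OUT_W / original.getWidth();
            int labelY = clickY * OUT_H / original.getHeight();

            // Write the resized image and a matching label file next to it.
            ImageIO.write(resized, "png", new File(outDir, input.getName() + ".png"));
            try (FileWriter w = new FileWriter(new File(outDir, input.getName() + ".txt"))) {
                w.write(labelX + "," + labelY);
            }
        }
    }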

    Throughout the weekend, I worked on this program. The end result is shown above.

    Next Steps

    Now that the program has been developed, we need to actually use it to label the training images we have. Then, we can train the Convolutional Neural Network.

    Labelling Minerals - CNN

    Labelling Minerals - CNN By Arjun and Abhi

    Task: Label training images to train a Neural Network

    Now that we have software to make labeling the training data easier, we have to actually use it to label the training images. Abhi and I split up our training data into two halves, and we each labeled one half. Then, when we had completed the labeling, we recombined the images. The images we labeled are publicly available at https://github.com/arjvik/RoverRuckusTrainingData.

    Next Steps

    We need to actually write a Convolutional Neural Network using the training data we collected.

    Upgrading to FTC SDK version 4.0

    Upgrading to FTC SDK version 4.0 By Arjun

    Task: Upgrade our code to the latest version of the FTC SDK

    FTC recently released version 4.0 of their SDK, with initial support for external cameras, better PIDF motor control, improved wireless connectivity, new sensors, and other general improvements. Our code was based on last year's SDK version 3.7, so we needed to merge the new SDK with our repository.

    The merge was slightly difficult, as there were some issues with the Gradle build system. However, after a little fiddling with the configuration, as well as fixing some errors in the internal code we had changed, we were able to successfully merge the new SDK.

    After the merge, we tested that our code still worked on Kraken, last year's competition robot. It ran with no problems.

    Developing a CNN

    Developing a CNN By Arjun and Abhi

    Task: Begin developing a Convolutional Neural Network using TensorFlow and Python

    Now that we have gathered and labeled our training data, we began writing our Convolutional Neural Network. Since Abhi had used Python and TensorFlow to write a neural network in the past during his visit to MIT over the summer, we decided to do the same now.

    After running our model, however, we noticed that it was not very accurate. Though we knew that was due to a bad choice of layer structure or hyperparameters, we were not able to determine the exact cause. (Hyperparameters are special parameters that need to be just right for the neural network to do well. If they are off, the neural network will not work well.) We fiddled with many of the hyperparameters and layer structure options, but were unable to fix the inaccuracy levels.

    # Keras (TensorFlow backend) imports, added here for completeness.
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    # n_rows, n_cols: dimensions of the (grayscale) training images, 240x320 in our case.
    model = Sequential()
    model.add(Conv2D(64, activation="relu", input_shape=(n_rows, n_cols, 1), kernel_size=(3,3)))
    model.add(Conv2D(32, activation="relu", kernel_size=(3,3)))
    model.add(MaxPooling2D(pool_size=(8, 8), padding="same"))
    model.add(Conv2D(8, activation="tanh", kernel_size=(3,3)))
    model.add(MaxPooling2D(pool_size=(8, 8), padding="same"))
    model.add(Conv2D(4, activation="relu", kernel_size=(3,3)))
    model.add(Conv2D(4, activation="tanh", kernel_size=(1,1)))
    model.add(Flatten())
    # Two linear outputs: the predicted (x, y) coordinates of the gold mineral.
    model.add(Dense(2, activation="linear"))
    model.summary()
    

    Next Steps

    We have not fully given up, though. We plan to keep attempting to improve the accuracy of our neural network model.

    Rewriting CNN

    Rewriting CNN By Arjun and Abhi

    Task: Begin rewriting the Convolutional Neural Network using Java and DL4J

    While we had been using Python and TensorFlow to train our convolutional neural network, we decided to attempt writing it in Java as well, since the code for our robot is entirely in Java, and before we can use our neural network on the robot, it must be available in Java.

    We also decided to try DL4J, a competing library to TensorFlow, to write our neural network, to determine whether it was easier to write a neural network using DL4J or TensorFlow. We found that both DL4J and TensorFlow were similarly easy to use, and while each had a different style, code written with either was equally easy to read and maintain.

    		//Download dataset
    		DataDownloader downloader = new DataDownloader();
    		File rootDir = downloader.downloadFilesFromGit("https://github.com/arjvik/RoverRuckusTrainingData.git", "data/RoverRuckusTrainingData", "TrainingData");
    		
    		//Read in dataset
    		DataSetIterator iterator = new CustomDataSetIterator(rootDir, 1);
    		
    		//Normalization
    		DataNormalization scaler = new ImagePreProcessingScaler(0, 1);
    		scaler.fit(iterator);
    		iterator.setPreProcessor(scaler);
    		
    		//Read in test dataset
    		DataSetIterator testIterator = new CustomDataSetIterator(new File(rootDir, "Test"), 1);
    			
    		//Test Normalization
    		DataNormalization testScaler = new ImagePreProcessingScaler(0, 1);
    		testScaler.fit(testIterator);
    		testIterator.setPreProcessor(testScaler);
    		
    		//Layer Configuration
    		MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
    				.seed(SEED)
    				.l2(0.005)
    				.weightInit(WeightInit.XAVIER)
    				.list()
    				.layer(0, new ConvolutionLayer.Builder()
    						.nIn(1)
    						.kernelSize(3, 3)
    						.stride(1, 1)
    						.activation(Activation.RELU)
    						.build())
    				.layer(1, new ConvolutionLayer.Builder()
    						.nIn(1)
    						.kernelSize(3, 3)
    						.stride(1, 1)
    						.activation(Activation.RELU)
    						.build())
    				/* ...more layer code... */
    				.build();
    

    Next Steps

    We still need to attempt to fix the inaccuracy in the predictions made by our neural network.

    Strategy and Business Whitepaper

    Strategy and Business Whitepaper By Ethan

    Task: Write the Strategy+Business Whitepaper for the Journal

    For teams who don't know, this kind of paper is suggested for judging. Iron Reign usually completes one every year. You can download the pdf of this post here.

    Intro

    This year is Iron Reign’s eleventh season in FIRST, our ninth year overall. We’ve participated in five years of FLL and seven years of FTC:



    FLL

    • Body Forward
    • Food Factor
    • Senior Solution
    • Nature’s Fury
    • World Class



    FTC

    • Ring It Up!
    • Block Party
    • Cascade Effect
    • RES-Q
    • Velocity Vortex
    • Relic Recovery
    • Rover Ruckus

     

    While our team originated at WB Travis Vanguard and Academy, as our members became older (such is the passage of time), we moved to the School of Science and Engineering at Townview (SEM) in DISD. Despite being 66% economically disadvantaged and a Title 1 school, our school consistently ranks in the top 10 nationwide academically. Our school also has numerous other award-winning extracurricular clubs, including CX Debate, Math/Science UIL, and more. Our school employs a rigorous STEM-based curriculum, which provides our students access to specialized class schedules, such as Engineering, Computer Science, and Math, as well as paying for AP classes that our students would normally not be able to afford. The average SEM student takes at least 10 APs.

     

    A History of Iron Reign

     

    Iron Reign has been a team for nine years. We initially started as a First Lego League (FLL) team, plateauing at regionals every year we competed. This was usually not due to the actual “robot game” in FLL, but because of our presentations. From then on, Iron Reign was defined by its focus on creative and innovative designs. We also did Google’s Lunar X Prize program every summer, achieving finalist status in 2011 and 2012. Upon moving to high school, we started doing FTC, as FRC was too cost-prohibitive for a parent-run team.

    We have been an FTC team for 7 years, advancing further and further each year. In Velocity Vortex, we got to the South Super Regionals, qualifying by winning the North Texas Inspire Award, which recognizes excellence in all parts of the competition, from teamwork, to the presentation, to creativity, to the actual game. The same year, in Georgia, we were the first alternate for Worlds if another team dropped out.

     

    Then, last year, we finally got to Worlds. We got there in two ways: we won the 2nd-place Innovate Award at Supers, and we also got in through the lottery, on the prior merit of having been a FIRST team for so long. There, we got the recognition that we’d been seeking – we won the Worlds Motivate Award.

    In the same vein, we compete in the Texas UIL State Championships. For those unfamiliar with UIL, it is the main organizing body for all public school academic and athletic events in the state of Texas. We competed in the first pilot of the UIL Robotics program and have competed in every subsequent tournament since. This year, it finally got out of the trial period and became a full-fledged competition.

     

    Outreach

     

    Our outreach stands out from other teams through our mode of presentation. Last year, we renovated a 90’s Seaview Skyline RV, took out the “home” components, such as the bathroom and bedroom, and turned it into a mobile tech lab, so that we can bring STEM to underprivileged demographics within our community. Our RV currently holds 4 3D printers, 30+ computers, 3 widescreen TVs, and 1 microwave. Our current curriculum consists of teaching kids 3D modelling in the back of the RV, using Google SketchUp, as it is free and available to any family with a computer. We usually help them design keychains, as they are memorable but don’t take excessive time to print on our printers. In the front, we teach kids how to use EV3 robots and the EV3 programming language to compete in a sumo-bot competition. We also give advice to parents and educators on how to start FIRST teams.

     

    To make Iron Reign’s history entirely clear, we built the RV two years ago. We do not claim any credit for the actual construction of the RV in this journal; however, we do share the goals of this program: making the RV run as a standalone program, expanding the program to other communities, and serving more and more underprivileged communities in Dallas. To our own standards, we have achieved this.

     

    Our current funding for the operation of the RV comes from Best Buy, which purchased the thirty-plus laptops and four 3D printers. We also receive grants from non-profits such as BigThought and Dallas City of Learning to fund events and provide staff (even though our team provides the staffing).

     

    This year, we have obtained $150k in additional funds to expand our outreach program by building a second Mobile Learning Lab. This is an unprecedented level of funding - it can cover the majority of buying an RV, staffing it, and filling it to the brim with technology. So far, this is the highlight of the Iron Reign season.

     

    When not in outreach service, we can transform our RV into tournament mode. We have taken numerous long-distance road trips aboard our RV, to locations such as Austin, Arkansas, Oklahoma, and Florida. We swap out the laptops for a band saw and drill press, use the flat screens to program instead of teach, and bring our higher-quality personal 3D printer. At tournaments, we encourage other teams to board our RV, not only to encourage them to start their own similar programs, but also to help them with mechanical and building issues.

    Iron Reign spends a lot of time on outreach. So far, we’ve spent 84.5 man-hours and talked to just under 2000 people (1995) within our community. The goal of this outreach is to reach disadvantaged children who would not normally have the opportunity to participate in STEM programs, in order to spark their interest in STEM for future learning. Some of our major outreach events this year include Love Field Turn Up!, where we reached 1100 children from around the Metroplex. We’ve also worked for our school district in various capacities, including bringing back-to-school STEM education to children and running orientations for our high school.

    We also represent FIRST in a variety of ways. At our Mobile Learning Lab events, we talk to parents and educators about starting their own FLL and FTC teams. We currently mentor our school’s FRC team Robobusters and are in the process of founding another. We are the mentors for our sister team, FTC 3734. We also provide help as requested to FLL teams, going back to our roots. As well, we’ve historically hosted underfunded teams for late-night-before-tournament workshops.

     

    Date       | Event                      | Team Members                                    | Hours | People Reached
    2018-04-26 | SEM Orientation            | Shaggy                                          | 6     | 200
    2018-06-23 | Turn Up! Dallas Love Field | Justin, Ethan, Charlotte, Kenna, Abhi, Evan     | 24    | 1100
    2018-07-14 | Dallas Public Library      | Ethan, Kenna, Charlotte, Evan                   | 16    | 190
    2018-07-21 | MoonDay                    | Karina, Ethan, Janavi, Charlotte                | 26    | 200
    2018-07-22 | Summer Chassis             | Kenna, Ethan, Charlotte, Karina, Shaggy, Abhi   | 24    | 25
    2018-08-01 | SEM Summer Camp            | Arjun                                           | 6     | 175
    2018-08-18 | Back to School Fair        | Ethan, Kenna                                    | 6.5   | 130
    2018-10-13 | SEM STEM Spark             | Ethan, Charlotte, Janavi, Abhi, Karina, Justin  | 80    | 140
    2018-10-16 | Travis High School Night   | Ethan, Evan, Kenna, Charlotte, Karina           | 12.5  | 120
    Total      |                            |                                                 | 201   | 2280

    Business and Funding

     

    Iron Reign, for the past two years, has increasingly ramped up its funding. We aggressively seek out new sponsors so that we can continue to keep Iron Reign great. Currently, these include:

    • BigThought - RV materials, staffing, and upkeep
    • Dallas City of Learning (DCOL) – RV materials and upkeep
    • Best Buy – 4x3D Printers, Laptops for RV
    • DISD STEM – Practice field and tournament funding
    • RoboRealm - $1500 of machine vision software
    • Dallas Makerspace – Access to machining tools
    • DPRG – Robot assistance
    • Mark Cuban - $2500
    • DEKA - Rookie team funding for our two new teams
    • Texas Workforce Commission - $525 for our team, $2350 for new ones



    We are always seeking more funding. We apply for the FIRST and FIRST in Texas grants every year, and seek grants from STEM-curious companies and individuals in the Dallas area. We have applied for grants from Orix and Mark Cuban, receiving personal funding from the latter. We receive staffing and upkeep from a local Dallas non-profit, BigThought. Currently, we are seeking funding and assistance from Ernst and Young, an international company with a Dallas branch, that a team member works for.

     

    In previous years, we lacked the ability to get significant transportation funding to get to tournaments. However, through our partnership with DISD, we have solved that problem, and when DISD is unable to provide transportation due to short notice, we can provide our own transportation thanks to the RV we built.

     

    Reference Business Letter

     

    “To whomever it may concern,

              My name is Abhijit Bhattaru, and I am currently a member of Iron Reign Robotics at the School of Science and Engineering at Townview, a DISD magnet school whose population is 66% economically disadvantaged. We have been a FIRST team for about nine years, over half of some of our members’ lives. For the past six years, we have operated as FTC Team 6832, Iron Reign. We’ve achieved various forms of success in these years, culminating with our rise to the Houston World Championship this year, winning the Motivate Award, an award for outstanding outreach within our community.

     

              What makes our team stand out from other teams is our dedication to our community. Two years ago, we converted a Sea View RV into a Mobile Learning Lab equipped with 4 3D printers, 15 EV3 robots, and 30 laptops to teach children basic programming and 3D modelling. The purpose of all of this is to start a spark of STEM in underserved communities so that these children can later go into STEM. And, we have expanded this program nationwide, presenting at the National Science Teachers’ Association national conference in 2017. We have partnered with local nonprofits such as Big Thought to fund our outreach expenses, and to reach out to interested communities across Dallas, and the nation, to expand our program.

     

          So, why do we need your help? Our school is 66% economically disadvantaged, and adding to that, DISD is facing up to an $81 million budget gap. The district’s funding for robotics has been dropping to the point where only the basics are covered, and even then the funds come too late in the season due to red tape. The one silver lining is that the DISD STEM Department is still able to handle most of our competition travel expenses, which offsets our largest expense category. But we still have to fund the development of our robot, and we aim high. Our robot earned an Innovation Award at the twelve-state South Super Regional Championship this year. We try to push the boundaries of design and execution, and this requires a different level of funding for parts, materials, and tools.

     

            To achieve this higher level of funding, Iron Reign is aiming to create a 501(c)(3) foundation to avoid the level of red tape and financial mismanagement from DISD that we have experienced for the past several years. This is where you come in, Mr. Cuban. We are asking for a seed donation for this non-profit, so that our team can become a free-standing team unhampered by DISD’s bureaucracy. Our mission would still be to serve our school and community, as it has been for the past eight years, but we would be able to avoid DISD’s mismanagement.

     

            If the money is not utilized as a seed donation, we would allocate it for new robot parts and equipment. A starter kit for FTC is at least $600, but this is nowhere close to the cost of a World Championship robot. To become more successful in the robot game in the following seasons, we would need a higher investment in parts, considering how many things can go wrong in an 8-month season. Your donation to the cause would allow us to become more successful.

     

            In return for your investment, Iron Reign will set out to accomplish what you desire from us. We can promote you and your companies on our website, in presentations, etc. However, this is just one option. We are open to helping you in whatever way you would like in return for your help to our team.

     

               Thank you for taking the time to consider our request, and if you happen to have additional time, we would like you to look over our previous Engineering Journals here to see our team’s engineering process and history. To see a video about our robot, please visit https://www.youtube.com/watch?v=TBlGXSf_-8A.

     

            Also, since you were not able to meet with us, we thought we would bring ourselves to you. Here is a video of our team and the FIRST Tech Challenge program.

    Thanks for your consideration,

    Iron Reign (6832)

    Looking Back, Moving Forward

     

    Recently, Iron Reign has put a large emphasis on recruitment. We have alternating years with high turnover due to graduation, so we hold recruitment meetings at our school every year for both Iron Reign and Imperial Robotics.

     

    We already have another team in our school, team 3734 Imperial Robotics. 3734 is an entirely separate team, with different sponsors, members, robots, journal, outreach, and codebase. That being said, we recruit the more accomplished members of that team. The teams’ relationship is most similar to that between a Junior Varsity team and a Varsity team.

     

    We tend to recruit based on robotics experience, but having robotics experience alone is not a guarantee of joining our team. Iron Reign has a specific culture, and we tend to recruit people whose personalities fit that culture. We also do not accept people who only want to join robotics as a resume booster. While robotics is indeed a resume booster, and we allow every member to claim co-captain on their college applications, members of Iron Reign ought to join out of a genuine passion for robotics, not because it gets them ahead in the rat race of college applications.

     

    This year has been an unprecedented year in recruitment for Iron Reign. We recruited approximately 30 new freshmen, expanding the Iron Reign program from two teams to four; from Iron Reign and Imperial Robotics, to adding Iron Star Robotics and Iron Core. And, our efforts have been recognized by our donors: we have been supplied four additional REV kits, and two fields so that we can support the larger program.

     

    Build

     

    Iron Reign utilizes a variety of parts and kits. At the moment, Iron Reign prefers the REV kit due to its simplicity - everything seems to just fit together, while still being minimalist. However, Iron Reign’s old standby is 3D printing. We’ve used 3D printing before it became widespread within FTC, and we’ve become sort of pros at specialized design. We even have our own 3D-print kits such as REVolution, a system to turn REV extrusions into axles.

     

    This year, we’re using a new base that’s more adapted to the challenge. Its working name is MiniChassis. It is approximately 6”x6” at the base with an additional 4” extension for mounting. It uses four 4” AndyMark mecanum wheels mounted low to the ground, with NeveRest 20 motors with planetary gearboxes attached to each wheel. So, the robot is astoundingly small and fast.

     

    We have two main attachments to our robot, the lift and the intake. First, the intake is a small square with silicone oven mitts attached to it. It knocks the particles upward into racks spaced 68mm apart. This spacing allows the blocks to fall through while the balls move upwards into the lift. Then, the lift. The lift is a series of REV rails attached through a linear slide kit with a hook and particle holder on the end. This extends, allowing the robot to deposit particles in the lander while also being able to hook onto the lander.

     

    In addition to this design, we have also developed BigWheel, aptly named for its 6-inch wheels at the back with a front-facing omniwheel. At the front of the robot, we installed two “arms” which brace an intake system named “CornCob” for its lumpy, cylindrical appearance. This is mounted at a height such that it only contacts the silver particles, not the gold. But what truly differentiates this robot is its lift mechanism. Unlike the majority of FTC robots we’ve encountered this year, BigWheel has no lift, extending arm, or linear slide. Instead, we have a central lever mounted to two high-torque motors, with a ridiculous 3:1 gear ratio for a cumulative 19.4 N*m of torque. This serves to rotate the robot into a near-total-vertical position, allowing the arms of the robot to reach the lip of the lander. We feel that this differentiates our team’s robot from the majority of other robots within the current FTC season.
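
    (As a sanity check on that figure: it is consistent with each motor contributing roughly 3.2 N*m, since 2 motors x ~3.2 N*m x 3 ≈ 19.4 N*m; the per-motor torque here is inferred from the total, not a measured spec.)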

     

    Code

     

    Iron Reign has a large pre-existing codebase. We’ve been improving off of our prior code for years. The particulars we want to focus on are thus:

    • Pose
      • This class uses the IMU to approximate the location of the robot on the field relative to the starting position. The math behind this is simple; we use trigonometry to calculate the straight-line distance between the robot’s prior location and its current one.
    • OpenCV
      • We use OpenCV to recognize particles in autonomous. To do this, we trained the software to differentiate between gold and silver particles. To extend our knowledge of computer vision, we ran tests of OpenCV vs TensorFlow CNN in Python to see if there would be a meaningful runtime difference.
    • PID
      • At this point, PID is common among FTC teams. However, as we moved to a new driving base for the first time in three years, we had to retune it, so we rewrote our code to account for the changes in behavior (a minimal sketch of the idea follows this list).
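
    For readers unfamiliar with it, here is a minimal sketch of the proportional-integral-derivative idea as applied to holding a heading; this is not our tuned implementation, and the gains and class name are placeholders.

    // Minimal PID sketch for holding a target heading; gains are placeholders, not our tuned values.
    public class HeadingPid {
        private final double kP, kI, kD;
        private double integral, lastError;

        public HeadingPid(double kP, double kI, double kD) {
            this.kP = kP;
            this.kI = kI;
            this.kD = kD;
        }

        /** Returns a turn-power correction given the heading error (degrees) and loop time (seconds). */
        public double update(double error, double dt) {
            integral += error * dt;
            double derivative = (error - lastError) / dt;
            lastError = error;
            return kP * error + kI * integral + kD * derivative;
        }
    }

    The correction is then added to one side of the drive and subtracted from the other, turning the robot back toward the target heading.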

     

    Design Process

     

    Iron Reign uses two design processes in conjunction with each other to create efficient and reliable parts. First, we use the Kaizen design process, also used in industrial corporations such as Toyota. The philosophy behind Kaizen is the idea of continual improvement, that there is always some modification to each system on our robot that will make it more efficient or more reliable. As well, design competitions are a focal point of Iron Reign’s design process. In these design competitions, team members choose their favored designs that all complete some field challenge, and build them individually. Upon completion of each mechanism, the designs are tested against each other, considering weight, maneuverability, reliability, and efficiency.

     

    This year, we have exemplified this process. Since kickoff, we’ve had two separate design paths, allowing us to explore the most efficient and workable design. Here, we will describe each segment in detail.

     

    First, we explored chassis designs. Over the summer, we created BigWheel, the aforementioned paragon of uniqueness - operating off of just two wheels. Then, we created the MiniChassis to compete against it, letting the best robot win. As of now, this is undecided, but we’re entering BigWheel to compete, as we feel that this is our more technically-impressive robot through its ability to rotate into a vertical position.

     

    Then, we compared intake mechanisms. First, we created the Corn-Cob intake - a silicone ice cube tray - and mounted it on a beater bar that would ensure sorting through the height difference between blocks and balls. We found that if we mounted it at about 6.5 cm above the ground, it would only consume the silver particles. After, we felt that this wasn’t our best work. So, we created a second intake. As described previously, we attached silicone oven mitts to a beater bar, and added lower fins as a ramp separated 68mm apart so that blocks would fly through, even as balls entered the intake system.

     

    The best thing about Kaizen is that we can mix-and-match these systems for the ultimate robot. At the moment, we’re considering removing the second intake from MiniChassis so that we can replace the Corn-Cob. The fact that we can even casually consider this kind of system matching demonstrates the power of the Kaizen approach.

     

    Pose BigWheel

    Pose BigWheel By Abhi

    Task: New Pose for Big Wheel robot

    Historically, Iron Reign has used a class called "Pose" to control all the hardware mapping of our robot instead of putting it directly into our opmodes. This has created cleaner code and smoother integration with our crazy functions. However, we used the same Pose for the past two years since both had an almost identical drive base. Since there wasn't a viable differential drive Pose in the past, I made a new one using inspiration from the mecanum one.

    We start with initializing everything including PID constants and all our motors/sensors. I will skip all this for this post since this is repetitive in all team code.

    In the init, I made the hardware mapping for the motors we have on BigWheel right now. Other functions will come in later.

    Here is where a lot of the work happens. This is what allows our robot to move accurately using IMU and encoder values.
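
    As a heavily simplified illustration of that update step for a differential (two-wheel) drive, the sketch below projects the average encoder movement along the IMU heading; the names and the ticks-per-meter constant are illustrative, not the actual Pose code.

    // Simplified differential-drive pose update: average the two drive encoders and
    // project that distance along the IMU heading to track field position.
    public class BigWheelPoseSketch {
        static final double TICKS_PER_METER = 2200; // illustrative constant

        private double fieldX, fieldY;  // estimated position on the field, in meters
        private long lastLeftTicks, lastRightTicks;

        public void update(long leftTicks, long rightTicks, double headingRadians) {
            // Distance travelled since the last update, averaged across both sides.
            double dLeft = (leftTicks - lastLeftTicks) / TICKS_PER_METER;
            double dRight = (rightTicks - lastRightTicks) / TICKS_PER_METER;
            double distance = (dLeft + dRight) / 2.0;
            lastLeftTicks = leftTicks;
            lastRightTicks = rightTicks;

            // Project that distance along the current heading.
            fieldX += distance * Math.cos(headingRadians);
            fieldY += distance * Math.sin(headingRadians);
        }
    }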

    There are a lot of other methods beyond the ones described here, but they mostly involve technical math and trigonometry. I won't bore you with the details, but our code is open source, so you can find the necessary help if you just look at our GitHub!

    RIP CNN

    RIP CNN By Abhi

    Task: Farewell Iron Reign's CNN

    So FTC released a new software update that added TensorFlow support. With it came a class that uses TensorFlow to autonomously detect both minerals. This meant all our progress was undercut by a software update. The silver lining is that we have done enough research into how CNNs work that it will allow us to understand the mind of the FTC app better. We're still gonna need an F in the chat tho.

    Next Steps

    We gotta figure out how to use the autonomous detection of the minerals to path plan.

    Code Post-Mortem after Conrad Qualifier

    Code Post-Mortem after Conrad Qualifier By Arjun and Abhi

    Task: Analyze code failure at Conrad Qualifier

    Iron Reign has been working hard on our robot, and we expected to do fairly well at our last competition. However, we couldn't have been more wrong, since our robot came in last place in the robot game. While we did win the Inspire Award, we still would like to make some changes to ensure that this doesn't happen again.

    Our autonomous plan was fairly simple: perform sampling, deploy the team marker, then drive to the crater to park. We planned to use the built-in TensorFlow object detection for our sampling, and thus assumed that our autonomous would be fairly easy. Unfortunately, we didn't begin writing the code for our autonomous until the Thursday before our competition.

    On Thursday, I worked on writing a class to help us detect the location of the gold mineral using the built-in TensorFlow object detection. While testing this class, I noticed that it produced an error rather than outputting the location of the gold mineral. This error was not diagnosed until the morning of the competition.

    On Friday, Abhi worked on writing code for the driving part of the autonomous. He wrote three different autonomous routines, one for each position of the gold mineral. His code did not yet select which routine to use, leaving it open for us to connect to the TensorFlow class to determine which position the gold mineral was in.

    On Saturday, the morning of the competition, we debugged the TensorFlow class that was written earlier and determined the cause of the error. We had misused the API for the TensorFlow object detection, and after we corrected that, our code didn't spit out an error anymore. Then, we realized that TensorFlow only worked at certain camera positions and angles. We then had to adjust the position of our robot on the field so that the camera could actually see the sampling minerals.

    Our code failure was mostly due to the fact that we only started working on our autonomous two days before the competition. Next time, we plan to make our autonomous an integral part of our robot and focus on it much earlier. I plan to fix our autonomous soon, well before our next competition, so that we do not have to go to our next competition without a working autonomous.

    Next Steps

    Spend more time focusing on code and autonomous, to ensure that we enter our next competition with a fully working autonomous.

    Refactoring Vision Code

    Refactoring Vision Code By Arjun

    Task: Refactor Vision Code

    Iron Reign has been working on multiple vision pipelines, including TensorFlow, OpenCV, and a home-grown Convolutional Neural Network. Until now, all our code assumed that we only used TensorFlow, and we wanted to be able to switch out vision implementations quickly. As such, we decided to abstract away the actual vision pipeline used, which allows us to be able to choose between vision implementations at runtime.

    We did this by creating a Java interface, VisionProvider, seen below. We then made our TensorflowIntegration class (our code for detecting mineral positions using TensorFlow) implement VisionProvider.

    Next, we changed our opmode to use the new VisionProvider interface. We added code to allow us to switch vision implementations using the left button on the dpad.
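
    Roughly, the switching logic looks something like the sketch below; the provider array, method name, and button edge detection are illustrative assumptions, not our exact opmode code.

    import com.qualcomm.robotcore.eventloop.opmode.OpMode;

    // Illustrative sketch of cycling between vision implementations with dpad-left.
    public abstract class VisionSwitchingOpMode extends OpMode {
        protected VisionProvider[] providers = { new TensorflowIntegration() /* , other implementations... */ };
        protected int currentProvider = 0;
        private boolean dpadLeftWasPressed = false;

        /** Call from init_loop() so the driver can pick a vision implementation before the match. */
        protected void handleVisionSwitch() {
            if (gamepad1.dpad_left && !dpadLeftWasPressed) {
                currentProvider = (currentProvider + 1) % providers.length;
                telemetry.addData("Vision", providers[currentProvider].getClass().getSimpleName());
            }
            dpadLeftWasPressed = gamepad1.dpad_left;
        }
    }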

    Our code for VisionProvider is shown below.

    public interface VisionProvider {
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry);
        public void shutdownVision();
        public GoldPos detect();
    }
    

    These methods are implemented in the integration classes.
    Our new code for TensorflowIntegration is shown below:

    public class TensorflowIntegration implements VisionProvider {
        private static final String TFOD_MODEL_ASSET = "RoverRuckus.tflite";
        private static final String LABEL_GOLD_MINERAL = "Gold Mineral";
        private static final String LABEL_SILVER_MINERAL = "Silver Mineral";
    
        private List<Recognition> cacheRecognitions = null;
      
        /**
         * {@link #vuforia} is the variable we will use to store our instance of the Vuforia
         * localization engine.
         */
        private VuforiaLocalizer vuforia;
        /**
         * {@link #tfod} is the variable we will use to store our instance of the Tensor Flow Object
         * Detection engine.
         */
        public TFObjectDetector tfod;
    
        /**
         * Initialize the Vuforia localization engine.
         */
        public void initVuforia() {
            /*
             * Configure Vuforia by creating a Parameter object, and passing it to the Vuforia engine.
             */
            VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();
            parameters.vuforiaLicenseKey = RC.VUFORIA_LICENSE_KEY;
            parameters.cameraDirection = CameraDirection.FRONT;
            //  Instantiate the Vuforia engine
            vuforia = ClassFactory.getInstance().createVuforia(parameters);
        }
    
        /**
         * Initialize the Tensor Flow Object Detection engine.
         */
        private void initTfod(HardwareMap hardwareMap) {
            int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                    "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
            TFObjectDetector.Parameters tfodParameters = new TFObjectDetector.Parameters(tfodMonitorViewId);
            tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
            tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABEL_GOLD_MINERAL, LABEL_SILVER_MINERAL);
        }
    
        @Override
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            initVuforia();
    
            if (ClassFactory.getInstance().canCreateTFObjectDetector()) {
                initTfod(hardwareMap);
            } else {
                telemetry.addData("Sorry!", "This device is not compatible with TFOD");
            }
    
            if (tfod != null) {
                tfod.activate();
            }
        }
    
        @Override
        public void shutdownVision() {
            if (tfod != null) {
                tfod.shutdown();
            }
        }
    
        @Override
        public GoldPos detect() {
            List<Recognition> updatedRecognitions = tfod.getUpdatedRecognitions();
            if (updatedRecognitions != null) {
                cacheRecognitions = updatedRecognitions;
            }
        if (cacheRecognitions != null && cacheRecognitions.size() == 3) {
                int goldMineralX = -1;
                int silverMineral1X = -1;
                int silverMineral2X = -1;
                for (Recognition recognition : cacheRecognitions) {
                    if (recognition.getLabel().equals(LABEL_GOLD_MINERAL)) {
                        goldMineralX = (int) recognition.getLeft();
                    } else if (silverMineral1X == -1) {
                        silverMineral1X = (int) recognition.getLeft();
                    } else {
                        silverMineral2X = (int) recognition.getLeft();
                    }
                }
                if (goldMineralX != -1 && silverMineral1X != -1 && silverMineral2X != -1)
                    if (goldMineralX < silverMineral1X && goldMineralX < silverMineral2X) {
                        return GoldPos.LEFT;
                    } else if (goldMineralX > silverMineral1X && goldMineralX > silverMineral2X) {
                        return GoldPos.RIGHT;
                    } else {
                        return GoldPos.MIDDLE;
                    }
            }
            return GoldPos.NONE_FOUND;
    
        }
    
    }
    

    Next Steps

    We need to implement detection using OpenCV and make that class conform to VisionProvider, so that we can easily swap between it and TensorflowIntegration.

    We also need to do the same using our Convolutional Neural Network.

    Finally, it might be beneficial to have a dummy implementation that always “detects” the gold as being in the middle, so that if we know that all our vision implementations are failing, we can use this dummy one to prevent our autonomous from failing.
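
    Such a fallback could be as simple as the sketch below (the class name is hypothetical).

    // Hypothetical fallback provider that always reports the gold in the middle,
    // so the autonomous can still run if every real vision pipeline is failing.
    public class DummyMiddleVisionProvider implements VisionProvider {
        @Override
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            telemetry.addData("Vision", "Dummy provider active; always reporting MIDDLE");
        }

        @Override
        public void shutdownVision() {}

        @Override
        public GoldPos detect() {
            return GoldPos.MIDDLE;
        }
    }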

    OpenCV Support

    OpenCV Support By Arjun

    Task: Add OpenCV support to vision pipeline

    We recently refactored our vision code to allow us to easily swap out vision implementations. We had already implemented TensorFlow, but we hadn't implemented code for using OpenCV instead of TensorFlow. Using the GRIP pipeline we designed earlier, we wrote a class called OpenCVIntegration, which implements VisionProvider. This new class allows us to use OpenCV instead of TensorFlow for our vision implementation.
    Our code for OpenCVIntegration is shown below.

    public class OpenCVIntegration implements VisionProvider {

        private VuforiaLocalizer vuforia;
        private Queue<VuforiaLocalizer.CloseableFrame> q;
        // detect() is written as a small state machine so each call does one cheap step:
        // -2 grabs a frame, -1 runs the GRIP pipeline, 0 seeds the contour search,
        // 1..n-1 scan the remaining contours, and the final state converts the chosen
        // contour into a GoldPos.
        private int state = -3;
        private Mat mat;
        private List<MatOfPoint> contours;
        private Point lowest;

        private void initVuforia() {
            VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();
            parameters.vuforiaLicenseKey = RC.VUFORIA_LICENSE_KEY;
            parameters.cameraDirection = VuforiaLocalizer.CameraDirection.FRONT;
            vuforia = ClassFactory.getInstance().createVuforia(parameters);
        }

        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            initVuforia();
            // Grab the Vuforia frame queue; detect() polls it one step at a time.
            q = vuforia.getFrameQueue();
            state = -2;
        }

        public void shutdownVision() {}

        public GoldPos detect() {
            if (state == -2) {
                // Wait for Vuforia to hand us a frame, then convert it to an OpenCV Mat.
                if (q.isEmpty())
                    return GoldPos.HOLD_STATE;
                VuforiaLocalizer.CloseableFrame frame = q.poll();
                Image img = VisionUtils.getImageFromFrame(frame, PIXEL_FORMAT.RGB565);
                Bitmap bm = Bitmap.createBitmap(img.getWidth(), img.getHeight(), Bitmap.Config.RGB_565);
                bm.copyPixelsFromBuffer(img.getPixels());
                mat = VisionUtils.bitmapToMat(bm, CvType.CV_8UC3);
            } else if (state == -1) {
                // Run the GRIP-generated pipeline and keep its filtered contours.
                RoverRuckusGripPipeline pipeline = new RoverRuckusGripPipeline();
                pipeline.process(mat);
                contours = pipeline.filterContoursOutput();
            } else if (state == 0) {
                if (contours.size() == 0)
                    return GoldPos.NONE_FOUND;
                lowest = centroidish(contours.get(0));
            } else if (state < contours.size()) {
                // Keep the contour whose centroid has the smallest y value.
                Point centroid = centroidish(contours.get(state));
                if (lowest.y > centroid.y)
                    lowest = centroid;
            } else if (state == contours.size()) {
                // Bin the chosen centroid's x coordinate into left/middle/right
                // using fixed thresholds.
                if (lowest.x < 320d / 3)
                    return GoldPos.LEFT;
                else if (lowest.x < 640d / 3)
                    return GoldPos.MIDDLE;
                else
                    return GoldPos.RIGHT;
            } else {
                return GoldPos.ERROR2;
            }
            state++;
            return GoldPos.HOLD_STATE;
        }

        private static Point centroidish(MatOfPoint matOfPoint) {
            // Approximate the centroid as the center of the contour's bounding box.
            Rect br = Imgproc.boundingRect(matOfPoint);
            return new Point(br.x + br.width / 2, br.y + br.height / 2);
        }
    }
    

    Debug OpenCV Errors

    Debug OpenCV Errors By Arjun

    Task: Use black magic to fix errors in our code

    We recently implemented OpenCV support in our code, but we hadn’t tested it until now. Upon testing, we realized that while our code worked in theory, it misbehaved in practice. Thus, we began the time-tested ritual of debugging our code. From past experience we know that debugging is 90% luck and 10% hoping that you have pleased the gods of programming. We crossed our fingers and hoped that we were able to correctly diagnose the problem.

    The first problem we found was that Vuforia wasn't reading in our frames: the queue that holds Vuforia frames was always empty. After making lots of small changes, we realized that this was because we had not initialized Vuforia correctly. After fixing this, we got a new error.
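
    For reference, the kind of frame-queue setup our detect() loop depends on looks roughly like the sketch below; our exact fix may have differed, but without the setFrameFormat and setFrameQueueCapacity calls the queue returned by getFrameQueue() stays empty. RC.VUFORIA_LICENSE_KEY is our own constant; the rest is the standard FTC SDK / Vuforia API.

    private void initVuforiaForFrameGrabbing() {
        VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();
        parameters.vuforiaLicenseKey = RC.VUFORIA_LICENSE_KEY;
        parameters.cameraDirection = VuforiaLocalizer.CameraDirection.FRONT;
        vuforia = ClassFactory.getInstance().createVuforia(parameters);

        // Ask Vuforia to deliver RGB565 frames and keep only the newest one in the queue.
        com.vuforia.Vuforia.setFrameFormat(PIXEL_FORMAT.RGB565, true);
        vuforia.setFrameQueueCapacity(1);

        q = vuforia.getFrameQueue();
    }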

    The error message changed! This meant that we had fixed one problem, but there was another problem hiding behind it. The new error was that our code was unable to access the native OpenCV libraries; specifically, it could not link to libopencv_java320.so. Unfortunately, we could not debug this any further.

    Next Steps

    We need to continue debugging this problem and find the root cause of it.

    Auto Paths

    Auto Paths By Abhi

    Task: Map and code auto for depot side start

    At the beginning of the season, we mapped out a number of auto paths with the goal of being able to run whichever one best complements our alliance partner's capabilities. I decided to spend today fully coding one of them. Since we still didn't have complete vision software, I coded these paths manually so that vision can be integrated later without issues. Here are videos of all of the paths. For the sake of debugging, the bot stops after turning towards the crater, but in a real match it will drive on and park in the far crater.

    Center

    Left

    Right

    Next Steps

    We need to get vision integrated into the paths.
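
    As a rough sketch of what that integration could look like, the vision result would simply pick which of the three manually coded paths to run. The runDepotSideAuto and runLeftPath/runCenterPath/runRightPath names below are hypothetical stand-ins, and visionProvider is whichever backend is active.

    private void runDepotSideAuto() {
        // Assumes detect() has already settled on a final answer (not HOLD_STATE).
        GoldPos goldPos = visionProvider.detect();
        switch (goldPos) {
            case LEFT:
                runLeftPath();
                break;
            case RIGHT:
                runRightPath();
                break;
            case MIDDLE:
            default:
                // Fall back to the center path if sampling failed (NONE_FOUND or an error).
                runCenterPath();
                break;
        }
    }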

    Issues with driving

    Issues with driving By Karina

    Task: Get ready for Regionals

    Regionals is coming up, and there are some driving issues that need to be addressed. Going back to November, one notable issue we had at the Conrad qualifier was the lack of friction between Bigwheel's wheels and the field tiles. There was not enough weight resting on the wheels, which made sudden movements difficult.

    Since then, many changes have been made to Bigwheel in terms of the lift. For starters, we switched out the REV extrusion linear slide for the MGN12H linear slide. We have also added more components to intake and carry minerals. These changes have fixed the traction issue, as long as we keep the lift below roughly 70 degrees while moving, but the weight we added to the end of the slide makes rotating around Bigwheel's elbow joint problematic. As you can see below, Bigwheel's chassis is not heavy enough to stay grounded when deploying the arm (and so I had to step on the back end of Bigwheel like a fool).

    Another issue I encountered during driver practice was trying to deposit minerals in the lander. By "having issues" I mean I couldn't. Superman broke as soon as I tried going into the up position, and this mechanism was intended to raise Bigwheel enough so that it would reach the lander. Regardless of Superman's condition, the container for the minerals was still loose and not attached to the servo. Consequently, I could not rotate the lift past the vertical without dropping the minerals I had collected.

    Next Steps

    To run a full practice match, Superman and the container will need to be fixed, as well as the weight issue. Meanwhile, I will practice getting minerals out of the crater.

    Vision Summary

    Vision Summary By Arjun and Abhi

    Task: Reflect on our vision development

    One of our priorities this season was our autonomous, as a perfect autonomous could score us a considerable number of points. A large portion of those points comes from sampling, so that was one of our main focuses within autonomous. Throughout the season, we developed a few different approaches to sampling.

    Early on in the season, we began experimenting with a Convolutional Neural Network to detect the location of the gold mineral. A Convolutional Neural Network, or CNN, is a machine learning algorithm that uses multiple layers which "vote" on what the output should be based on the outputs of previous layers. We developed a tool to label training images for use in training a CNN, publicly available at https://github.com/arjvik/MineralLabler. We then began training a CNN with the training data we labeled. However, our CNN was unable to reach a high accuracy level, despite the time we spent tuning it. A large part of this came down to our lack of training data. We haven't given up on it, though, and we hope to improve this approach in the coming weeks.

    We then turned to other alternatives. Around this time, the built-in TensorFlow Object Detection code was released in the FTC SDK. We tried out TensorFlow, but we were unable to use it reliably: our testing revealed that TensorFlow could not always detect the location of the gold mineral. We attempted to modify some of the parameters; however, since FIRST only provides the trained model, we were unable to increase its accuracy. We are currently looking into whether we can determine the sampling order even if we only detect some of the sampling minerals. We still have code to use TensorFlow on our robot, but it is only one of a few different vision backends available for selection at runtime.

    Another vision framework we tried was OpenCV. OpenCV is a collection of vision processing algorithms which can be combined to form powerful pipelines. An OpenCV pipeline performs sequential transformations on its input image until the image ends up in a desired form, such as a set of contours or boundaries of all minerals detected in the image. We developed an OpenCV pipeline to find the center of the gold mineral given an image of the sampling order. To create our pipeline, we used a tool called GRIP, which allows us to visualize and tune it. However, since we found that lighting conditions greatly influence the quality of detection, we hope to add LED lights to the top of our phone mount so we can get consistent lighting on the field, hopefully further improving performance.

    Since we wanted to switch easily between these vision backends, we wrote a modular framework that lets us swap out vision implementations behind a common interface. We can now choose which vision backend to use during a match with a single button press, and we can also work on all of the backends in parallel.
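
    For reference, the shared interface is essentially the three methods the backends above implement; a minimal sketch is below (our actual declaration may differ slightly).

    // Minimal sketch of the common interface every vision backend implements.
    // Method names mirror TensorflowIntegration and OpenCVIntegration above.
    public interface VisionProvider {
        void initializeVision(HardwareMap hardwareMap, Telemetry telemetry);
        void shutdownVision();
        GoldPos detect();
    }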

    Next Steps

    We would like to continue improving on and testing our vision software so that we can reliably sample during our autonomous.

    Minor Code Change

    Minor Code Change By Karina

    Task: Save Bigwheel from self destruction

    The other day, when running through Bigwheel's controls, I came across an error in the code. The elbow motors did not have min and max values for their range of motion, so the gears would grind whenever careless team members got hold of the controller. Needless to say, Iron Reign has gone through a few gears already. Adding stops in the code was simple enough.
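
    A rough sketch of the idea is below; the constant names, tick values, and elbowMotor field are illustrative, and Range.clip is the FTC SDK's clamping helper.

    // Clamp requested elbow positions so the motor can never drive past its limits.
    private static final int ELBOW_MIN_TICKS = 0;     // fully retracted (illustrative value)
    private static final int ELBOW_MAX_TICKS = 2000;  // just short of the hard stop (illustrative value)

    public void setElbowTargetPosition(int requestedTicks) {
        int safeTicks = Range.clip(requestedTicks, ELBOW_MIN_TICKS, ELBOW_MAX_TICKS);
        elbowMotor.setTargetPosition(safeTicks);
    }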

    Testing the code revealed immediate success. I went through the full range of motion and there were no scary noises. Yay for the drive team!

    Next Steps

    Going forward, we will continue to debug code through drive practice.