Articles by tag: control

    Hoverboards and PID

    Hoverboards and PID By Caitlin, Omar, Darshan, Tycho, and Max

    Task: Continue with the Hoverboard and tweak PID

    After the long weekend last week, today was a reasonably relaxed practice. We decided that we could work on anything, as long as we stayed focused. The two main foci were the "Robot on a Hoverboard" and fine-tuning our PID for autonomous.

    Reflections

    We experimented with balancing the robot more evenly on the hoverboard to keep it on a straight path, and then got creative with the controls to speed it up. We found that we could effectively rocket it forward by extending and angling the cliff climber while flipping the block dispenser forward at the same time. While it was easy to send the robot forward, there was little we could do to recover other than pushing it back by hand. This created a game of hot potato as we passed the robot around from person to person, which ended rather abruptly when it careened into the table.

    If we're going to get our autonomous functional for UIL, then we need to fix our PID. We used parts of the current autonomous demo to check the straight-line gyro drive, and went from barely correcting, to crazy oscillations, to a good level in between. This took a decent amount of tweaking of K-proportional, and once we felt we were straddling the line between too much and too little correction, we played with K-derivative to be better prepared. After the initial gyro-guided line, the robot is programmed to make a 45-degree turn towards the beacon and then fine-tune its angle using color blob detection. The color blob detection seemed to track the selected color accurately, outlining the area in neon green, but for some reason the robot didn't turn to aim at it; if anything, it turned away from the beacon. We found a mistake in our error calculation, where leftover error wasn't properly cleared before the guided turn, that we believe caused at least some of the odd turning behavior.
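
    For reference, here is a minimal sketch of the kind of proportional-derivative heading correction we were tuning. The gains, method names (getHeading, setLeftPower, setRightPower), and structure are illustrative placeholders, not our actual code:

```java
// Sketch of a PD correction for gyro-guided straight driving.
// Gains and helper methods here are hypothetical placeholders.
double Kp = 0.03;      // proportional gain: too high oscillates, too low barely corrects
double Kd = 0.01;      // derivative gain: damps oscillation near the setpoint
double lastError = 0;

void driveStraight(double targetHeading, double basePower) {
    double error = targetHeading - getHeading();  // degrees off course
    double correction = Kp * error + Kd * (error - lastError);
    lastError = error;
    setLeftPower(basePower + correction);
    setRightPower(basePower - correction);
}

// Clearing the accumulated error before starting the next guided turn
// matters: leftover error skews the first correction, which is the
// kind of bug we found in our own error calculation.
void resetPid() { lastError = 0; }
```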

    Programming our New Robot

    Programming our New Robot By Tycho, Caitlin, Ethan, and Jayesh

    Task: Program our new mecanum wheel driving platform

    Now that our new robot has been built on a mecanum wheel platform, we can start writing our drive code and figure out how to make our robot perform three basic motions: forwards and backwards, side to side, and rotation. We decided that, in order to get the best understanding of our robot, how it moves, and our code, we would try to write our drive code through trial and error. However, we did reference guides written by various other FTC and FRC teams when we got stuck on something and needed to figure out where to start in solving the problem.

    Reflections

    In order to drive our mecanum wheels properly, we first need to discuss how each wheel is placed on the robot, and how each wheel needs to move with respect to the others in order to move in a certain direction. Each wheel has small rollers that point 45 degrees off of the larger wheel itself. In order to properly set up mecanum wheels, the rollers on each wheel have to point towards the center of the robot.

    This is important because, if the wheels are not pointing in the proper direction, the rollers will fight against each other, causing strange driving patterns that aren't very useful. We learned this the hard way: when reconstructing the wheel mounts, the positions of two wheels on the robot were flipped, causing the robot to drive in circles when we tried to drive sideways.

    The main reason we decided to go with mecanum wheels is that they open up so many different ways to navigate the field. Robots that properly use mecanum wheels can not only drive forwards and backwards and turn, they can also move side to side. These three types of movement can be mixed with each other to do even cooler things, like moving diagonally or strafing, which is when the robot moves in an arc while moving sideways. Of course, we cannot use mecanum wheels to their full potential if we do not first understand the three basic types of movement. Driving forwards and backwards is pretty simple, and it's the same as on any other robot: all of the wheels have to move in the same direction. Turning also remains unchanged from other platforms: the left side of the robot has to move one way, and the right side has to move the other.

    However, moving side to side is not as intuitive as the others. In order to move side to side, the wheels on each side have to move opposite each other. For example, if I wanted to, from a top-down perspective, drive to the right, the wheels on the left side of the robot would have to drive away from each other, while the wheels on the right side would have to drive towards each other. The rollers then spin away from the center and to the right on the left side of the robot, and towards the center and to the right on the right side. The forwards and backwards components of the wheels and rollers cancel each other out, and the robot moves to the right, as in the sketch below.
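
    To make that concrete, here is the sign pattern for strafing right, assuming all four motors are oriented so that positive power drives the robot forward; the motor names and sign convention are assumptions, not our exact setup:

```java
// Strafing right with mecanum wheels (hypothetical motor names).
// Assumes positive power = robot-forward for every motor.
double power = 0.5;
frontLeft.setPower(power);    // left side: wheels drive away from each other
backLeft.setPower(-power);
frontRight.setPower(-power);  // right side: wheels drive toward each other
backRight.setPower(power);
```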

    Mecanum Driving

    Mecanum Driving By Tycho

    Task: Code driving under mecanum wheels

    Today, I wrote the whole code for controlling our mecanum wheels. It is entirely from scratch, and it worked right off the bat. This code allows us to strafe, move backwards and forwards, and rotate, all in one method.
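
    The post doesn't include the code itself, but a common way to blend forward, strafe, and rotate inputs into the four wheel powers in a single method looks like the sketch below; the motor names and sign conventions are assumptions, not necessarily my exact implementation:

```java
// Generic mecanum mixing: combine three driver inputs into four wheel
// powers in one method. Motor names are hypothetical.
void mecanumDrive(double forward, double strafe, double rotate) {
    double fl = forward + strafe + rotate;
    double fr = forward - strafe - rotate;
    double bl = forward - strafe + rotate;
    double br = forward + strafe - rotate;

    // Scale so no wheel power exceeds 1.0 while preserving the ratios.
    double max = Math.max(1.0, Math.max(Math.abs(fl),
                 Math.max(Math.abs(fr), Math.max(Math.abs(bl), Math.abs(br)))));
    frontLeft.setPower(fl / max);
    frontRight.setPower(fr / max);
    backLeft.setPower(bl / max);
    backRight.setPower(br / max);
}
```

    Because the three inputs simply add together, diagonal motion and strafing arcs fall out for free: a mix of forward and strafe gives a diagonal, and a mix of all three gives an arc while sliding sideways.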

    Reflections

    We still have a lot of coding to do, as we're currently working on a particle-launching system. We also need to start thinking about autonomous soon.

    Fixing Faulty Encoder

    Fixing Faulty Encoder By Tycho and Jayesh

    Task: Fix a faulty encoder on our robot

    This shows a test of our encoder issues. About a month ago, we noticed strange behavior in our autonomous code when the robot was moving forward at low speed: it would curve to the right when we were telling it to go straight. We probably would have noticed the problem earlier if we had any kind of subtlety in our driving, but we didn't, partly because the problem goes away when driving at full speed.

    We suspected the problem was in the encoder feedback of the front left drive wheel. In this video you can see how it ramps up to full speed much faster than the other motors while we drive the robot with encoder PID active. When driving backward, though, the motor and wheel behave properly. This indicated to us that one channel of the encoder was working normally while the other was skipping ticks. The fault could still have been in the encoder itself, the encoder cable, or the motor controller.

    Since our motors take some time to change out and we were heading off to the Arkansas State Championship, we decided to simply turn off PID control. The problem goes away when we drive the motors with open-loop power levels, proving that the encoder was the culprit and that motor imbalance was not an issue. The problem with just turning off PID control is that we were still getting bad odometry, since our method of calculating distance traveled is based on averaging the encoders across the four motors. So we had to adjust our target distances in autonomous by trial and error instead of proper calculation.
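
    A sketch of that averaging approach shows why one dead encoder corrupts the estimate; the names and constants below are illustrative, not our exact values:

```java
// Distance traveled estimated by averaging the four drive encoders.
// A single encoder stuck at zero drags the average down by a quarter,
// which is why the dead front-left encoder broke our odometry.
static final double TICKS_PER_ROTATION = 1120;              // e.g. a NeveRest 40
static final double WHEEL_CIRCUMFERENCE_M = 0.1 * Math.PI;  // assuming a 10 cm wheel

double averageDistanceMeters() {
    double avgTicks = (frontLeft.getCurrentPosition()
                     + frontRight.getCurrentPosition()
                     + backLeft.getCurrentPosition()
                     + backRight.getCurrentPosition()) / 4.0;
    return avgTicks / TICKS_PER_ROTATION * WHEEL_CIRCUMFERENCE_M;
}
```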

    Here we are trying to identify the source of our encoder issues. We swapped the encoder cable and the ports on the motor controller, but the problem stayed with the front left motor, which told us that the motor's own encoder was the issue. We confirmed through telemetry that the encoder was giving no ticks when driving forward but worked fine in reverse. So we replaced the whole motor, and this time it sped up faster than the other motors in both directions. That really mystified us for a bit; it turns out the substitute motor had an encoder that was dead on both channels. What were the odds of that? We'd simply pulled a motor from our inventory and assumed it would be OK, but it must have been one we hadn't needed to run under encoder control.

    We finally found a third motor, this time tested its encoder before putting it on the robot, and all is well now. We can drive the robot under PID control and it behaves as expected at various power levels. We still need to re-calibrate our blended travel distances with the working encoder, but we feel much better about our performance. We can even reliably use run-to-position commands if we want, though our navigation code doesn't require that.
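
    For reference, a minimal run-to-position sequence in the FTC SDK looks like the sketch below (written as if inside a LinearOpMode); the tick target is illustrative:

```java
// Using the FTC SDK's built-in run-to-position mode, now that the
// encoders are trustworthy. The tick count here is illustrative.
int target = motorFrontLeft.getCurrentPosition() + 2240;  // ~2 rotations at 1120 ticks/rev
motorFrontLeft.setTargetPosition(target);
motorFrontLeft.setMode(DcMotor.RunMode.RUN_TO_POSITION);
motorFrontLeft.setPower(0.5);
while (opModeIsActive() && motorFrontLeft.isBusy()) {
    idle();  // the controller's internal PID drives the motor to the target
}
motorFrontLeft.setPower(0);
```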

    This just shows the proper behavior of the robot after our encoder troubles were resolved:

    Mapping Out Autonomous

    Mapping Out Autonomous By Janavi, Tycho, Omar, Evan, and Darshan

    Task: Mapping Out Autonomous

    To tell the robot how far to move forward, we had to calculate our motors' RPM. We did this by telling the robot to move 10 rotations forward and measuring how far it travelled. After finding the RPM, we created a model field on which we designed a set path for the robot during autonomous: one path for red and one for blue. Both paths allow the robot to shoot two balls and then push the beacon buttons. After testing this, we realised that our alliance partner may be better or worse than us at shooting balls, so we created a method that lets us press a, b, or x to change the number of times the catapult fires, as sketched below.
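
    A minimal sketch of that selection logic; the gamepad buttons match the post, but the field names and surrounding structure are hypothetical:

```java
// Let the drivers pick the number of catapult shots before autonomous
// starts, to coordinate with our alliance partner.
int launchCount = 2;  // default: shoot two particles

void pollLaunchSelection() {
    if (gamepad1.a) launchCount = 0;  // partner shoots better, we skip
    if (gamepad1.b) launchCount = 1;
    if (gamepad1.x) launchCount = 2;
    telemetry.addData("Catapult shots", launchCount);
}
```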

    Reflections

    We are constantly working to improve the design of our autonomous. Before this, while our autonomous may have worked, it didn't allow us to collaborate with our alliance partners to create the best path, stopping us from earning the most points possible. Now, with these changes, we can work together with our alliance partners to complement each other's strengths and weaknesses, helping us earn more points. This will also encourage us to scout around and interact with other teams before and in between matches, letting us create an even more detailed scouting sheet.

    Vuforia

    Vuforia By Janavi and Tycho

    Task: Use Vuforia to enhance autonomous

    We use Vuforia and OpenCV vision to autonomously drive our robot to the beacon and then press the button corresponding to our team's colour. We started by getting the robot to recognize the image below the beacon and keep it within its line of vision. Vuforia uses the phone's camera to inspect its surroundings and locate target images. When an image is located, Vuforia is able to determine the position and orientation of the image relative to the camera.
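
    For context, the basic setup in the FTC SDK of that season looks roughly like the sketch below; the license key is a placeholder, and the asset name is the Velocity Vortex target set used by the SDK samples:

```java
// Minimal Vuforia setup (FTC SDK, Velocity Vortex era), written as if
// inside an OpMode. "YOUR_KEY_HERE" is a placeholder from Vuforia's
// developer portal.
VuforiaLocalizer.Parameters params = new VuforiaLocalizer.Parameters();
params.vuforiaLicenseKey = "YOUR_KEY_HERE";
params.cameraDirection = VuforiaLocalizer.CameraDirection.BACK;
VuforiaLocalizer vuforia = ClassFactory.createVuforiaLocalizer(params);

// Load the image targets placed below the beacons and start tracking.
VuforiaTrackables targets = vuforia.loadTrackablesFromAsset("FTC_2016-17");
targets.activate();

// Each trackable's listener reports the image's pose relative to the
// camera whenever the image is in view (null when it is not).
OpenGLMatrix pose = ((VuforiaTrackableDefaultListener)
        targets.get(0).getListener()).getPose();
```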

    To start setting up our robot's vision, we watched Team 3491 FixIt's videos on Vuforia to help us understand how to set it up. After finishing the code for following the image, we went to test it out. It turned out we had accidentally coded the robot to follow the picture by moving up and down, because we had configured the phone for portrait mode instead of landscape. After fixing that, we tested the robot again, and it ended up attacking Tycho by running at him and the image at full speed: we had accidentally told the robot to go much farther than it was supposed to by placing a parenthesis in the wrong spot. We tested the code one more time, only this time I held the picture while standing on top of a chair. Luckily, the robot worked this time and was able to follow the image both ways.

    Reflections

    We would like to explore the uses of Vuforia + OpenCV further. We are considering using it to determine particle color, as well as to view the images beneath the beacons.

    OpenCV

    OpenCV By Ethan and Tycho

    Task: Implement OpenCV in autonomous

    Last year, we had some experience using OpenCV to press the beacons, and this year we decided to do the same. We use OpenCV in conjunction with Vuforia to find the color we are looking for on the beacon: first, Vuforia detects the target image in the camera view, then we isolate that area and find the side of the beacon with the correct color. Our code is based on that of FTC team 3491, FixIt.
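
    A sketch of that "which side is our color" check using OpenCV's Java bindings: threshold the cropped beacon image for blue in HSV, then compare how much blue lands in each half. The HSV thresholds and method name are illustrative, not our exact values:

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

// Returns true if the blue half of the beacon is on the left of the
// cropped RGB image that Vuforia isolated for us.
boolean isBlueOnLeft(Mat beaconRgb) {
    Mat hsv = new Mat();
    Imgproc.cvtColor(beaconRgb, hsv, Imgproc.COLOR_RGB2HSV);

    // Rough blue range on OpenCV's 0-179 hue scale.
    Mat mask = new Mat();
    Core.inRange(hsv, new Scalar(100, 100, 50), new Scalar(130, 255, 255), mask);

    // Count blue pixels in the left and right halves and compare.
    int mid = mask.cols() / 2;
    int leftBlue  = Core.countNonZero(mask.submat(0, mask.rows(), 0, mid));
    int rightBlue = Core.countNonZero(mask.submat(0, mask.rows(), mid, mask.cols()));
    return leftBlue > rightBlue;
}
```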

    In a previous post, we talked about our OpenCV and Vuforia research, as well as how we plan to use it in the robot game. And now, I am happy to say, we have partially implemented it in our autonomous. We can now figure out which side of the beacon a specified color is on using our new phones, the Galaxy S5. (Side note: do not update them to Android Lollipop if you want to use OpenCV; it will not work.)

    Reflections

    In the future, we would like to use OpenCV to detect the color of a particle during autonomous and pick it up to shoot it. However, we probably don't have time to program that before Super Regionals.