During the winter term of my junior year I enrolled in Junior Design. In the fall term, students work on two projects, each lasting three weeks; in the winter term, students work on a single project for the full ten weeks. I was partnered with two other electrical engineering students to build an autonomous robot.
Requirements of the Project
For this project we were given five requirements that the robot had to meet. We were also tasked with defining two additional requirements that would set our robot apart from the other groups' robots. With our additional requirements, our group wanted to build an end product that was impressive and would challenge us as engineers.
Requirement 1: The robot should be able to find and connect to its charger at close distances.
Testable Outcome: At least 9 out of 10 times, the system can successfully navigate and connect to the charger without human intervention when placed at least 2 feet away.
Requirement 2: The robot should be able to find and connect to its charger at long distances.
Testable Outcome: At least 2 out of 5 times, the system can successfully navigate and connect to the charger without human intervention when placed at least 20 feet away.
Requirement 3: The robot can navigate on multiple surfaces.
Testable Outcome: The system can travel across tile, carpet, and concrete floors with no more than 10% difference in speed between all surfaces.
Requirement 4: The robot has reasonable battery life.
Testable Outcome: The system will move for at least 10 continuous minutes on a single charge.
Requirement 5: The robot looks aesthetically pleasing.
Testable Outcome: At least 9 out of 10 people agree that the robot is more aesthetically pleasing than 4 other commercial robots from the market.
Additional Requirement 1: The robot will generate a mapping of the room.
Testable Outcome: The robot will locate the walls of a room that are at least a foot wide. 3 out of 5 times, it will map each wall within half a foot of where the wall actually is in the room and within half a foot of its actual length.
Additional Requirement 2: The robot will have a real time display.
Testable Outcome: The robot will send data to a computer over a wireless interface and the computer will display the information in a GUI. The display will be accurate within three seconds of reality 3 out of 5 times.
This project is more complex than any other project my groupmates or I have worked on. To manage this complexity, we used diagrams to plan the architecture of the robot, which helped us break the project into smaller, more manageable pieces. First, we made a black box diagram, the simplest view of the system: it shows how the robot interacts with its environment. From there we made a block diagram, which expands on the black box diagram and gives more detail about the system. For each interface between two blocks, we wrote an interface description. The interface descriptions allow each group member to work on a part of the project independently of what the others are doing. While the diagrams changed throughout the project, they gave us a clear understanding of what the system would do and a roadmap to follow when building it.
Figure: Black box diagram of the system. This figure shows what the robot takes in from the environment and what it will produce.
Figure: Block Diagram of the system. Each block is assigned to one person.
Figure: Interface descriptions
Our robot had many different components, and each one needed power. We made a power flow diagram to help us understand the power requirements of each part of the robot. Our charging station plugs into the wall and is powered by 110 V AC. The charging base regulates the voltage so the robot can charge itself. Power enters the robot and passes through the battery charger module to the batteries. When the robot is not charging, power is drawn from the batteries to run the electronics.
Figure: Diagram of power flow in the system
The main processor for our robot is a Jetson TK1. We chose this board for its ability to use the onboard GPU for computer vision. In the end, we were not able to use the GPU because of time constraints, but the board had the processing power to do everything we needed. We ran ROS on the Jetson and used SSH for remote access to the robot.
For the movement of the robot we used stepper motors controlled by an AVR microcontroller. A ROS node running on the Jetson sends a message to the microcontroller over a serial port; the message includes the direction and speed at which the robot needs to move. The microcontroller then interfaces with stepper motor drivers to move the robot at the desired speed.
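The Jetson-to-microcontroller link can be sketched as a small helper that packs direction and speed into a serial message. The message format (`D<dir>S<speed>\n`), the direction letters, and the speed range here are all assumptions for illustration, not our team's actual protocol.

```python
# Hypothetical sketch of the drive-command path: pack direction and speed
# into an ASCII message for the AVR motor controller. The format is an
# assumption, not the protocol our team actually used.

def make_drive_command(direction: str, speed: int) -> bytes:
    """Encode a drive command for the motor microcontroller.

    direction: one of 'F', 'B', 'L', 'R' (forward, back, left, right)
    speed: commanded step rate for the stepper drivers
    """
    if direction not in ("F", "B", "L", "R"):
        raise ValueError("unknown direction: %r" % direction)
    if not 0 <= speed <= 1000:
        raise ValueError("speed out of range: %d" % speed)
    return ("D%sS%d\n" % (direction, speed)).encode("ascii")

# On the robot, the ROS node would write this to the serial device, e.g.:
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 115200)
#   port.write(make_drive_command("F", 200))
```

Keeping the encoding in one function like this makes the wire format easy to change on both ends without touching the rest of the node.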
Two of the requirements for the robot involved finding the charging station autonomously. To do this, we attached a camera to the robot and used computer vision. The charging station has a pattern of red, green, and blue stripes on it. The robot uses its camera to take a picture of the environment, and our algorithm looks for that specific pattern. If the pattern is found, the algorithm returns the coordinates of the center of the charging station and the robot drives in that direction. If the pattern is not found, the robot turns and takes another picture.
Figure: Testing the computer vision algorithm. The original image is on the top right. The left column is the algorithm trying to isolate each color. The center column is where the algorithm thinks each pure color is.
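The color-isolation step can be sketched in simplified form: threshold the image around a target color and return the centroid of the matching pixels. This is a single-color version for illustration only; our actual algorithm searched for the full red/green/blue stripe pattern, and the tolerance value here is an assumption.

```python
# Simplified sketch of the charger-finding idea: isolate pixels near a
# target color and return the centroid of the match. Single-color matching
# and the tolerance are illustrative assumptions; the real algorithm
# looked for the full red/green/blue stripe pattern.
import numpy as np

def find_color_center(image, target, tol=40):
    """Return the (row, col) centroid of pixels within tol of target.

    image: H x W x 3 uint8 RGB array
    target: (r, g, b) tuple
    Returns None when no pixels match (robot turns and retries).
    """
    diff = np.abs(image.astype(int) - np.array(target, dtype=int))
    mask = np.all(diff <= tol, axis=2)  # True where all 3 channels are close
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (int(ys.mean()), int(xs.mean()))

# Example: a red patch in an otherwise black frame.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[4:6, 6:8] = (255, 0, 0)
center = find_color_center(frame, (255, 0, 0))
```

Returning the centroid rather than a bounding box keeps the steering logic simple: the robot only needs to know whether the target is left or right of the image center.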
One of our additional requirements was to map the room. We used a LIDAR attached to the top of the robot to accomplish this. A ROS node for the LIDAR was provided online by another robotics enthusiast, which made it simple to set up and use. RViz, another ROS tool available online, was used for generating maps and visualizing the robot's position. The standard message types that ROS provides allowed the two nodes to interface without any issues or code revisions.
Figure: A map generated by the robot. The area on the left side of the room is from the LIDAR not being able to accurately measure the distance of a window.
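The geometry underneath LIDAR mapping is straightforward: each (angle, range) sample from the scanner is converted to an (x, y) point in the robot's frame. The ROS pipeline handled this for us; the sketch below shows the idea, with the angle convention as an assumption. Invalid returns, like the window in the map above, show up as infinite or NaN ranges and get skipped.

```python
# Sketch of the geometry behind LIDAR mapping: convert a polar laser scan
# to Cartesian points in the robot's frame. The ROS LIDAR node and RViz
# handled this for us; the angle convention here is an assumption.
import math

def scan_to_points(angle_min, angle_increment, ranges):
    """Convert (angle, range) samples to (x, y) points.

    angle_min: angle of the first sample, radians
    angle_increment: angle step between samples, radians
    ranges: list of measured distances; inf/NaN means no valid return
    """
    points = []
    for i, r in enumerate(ranges):
        if math.isinf(r) or math.isnan(r):
            continue  # no return, e.g. a window the LIDAR can't measure
        a = angle_min + i * angle_increment
        points.append((r * math.cos(a), r * math.sin(a)))
    return points
```

Accumulating these points across scans, together with the robot's estimated pose, is what produces a map like the one in the figure.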
My team was able to work with a mechanical engineering student, Sam, to create the frame of the robot. Sam created a 3D model of the robot and planned how it would be assembled. His model included all of the major components and where they would be placed inside the robot. He also helped us build the frame, because ECE students at Oregon State University do not have access to the machine shop.
Figure: A render of the robot’s frame
Figure: An image of the completed robot
We faced many difficulties during this project. One was that, at the start, no one on the team knew ROS. We spent a lot of time early on setting up ROS and figuring out how to use it, and we reached out to graduate students and faculty for help. By the end, we were all competent with ROS.
Another difficulty we faced was with the battery holders. There was an ohm of resistance between the contact and the wire lead. When the Jetson booted, it drew 2 amps of current, which produced a 2 volt drop across the battery holder connectors. Because of this large resistance, the voltage the Jetson saw dropped below its minimum power threshold and the Jetson shut off. The issue was difficult to track down: at first we thought it was our batteries, but after more investigation we found the real problem. We solved it by putting two battery packs in parallel, so that on boot the Jetson drew only 1 amp from each pack.
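The numbers from the battery-holder problem work out directly from Ohm's law: the drop across each holder is the per-pack current times the contact resistance, and paralleling the packs halves the per-pack current.

```python
# Ohm's-law check on the battery-holder brownout.
# drop = (boot current / number of packs) * holder contact resistance

def holder_drop(boot_current_a, holder_resistance_ohm, packs_in_parallel):
    """Voltage drop across each battery holder's contact resistance."""
    per_pack_current = boot_current_a / packs_in_parallel
    return per_pack_current * holder_resistance_ohm

single = holder_drop(2.0, 1.0, 1)    # one pack: 2 A * 1 ohm = 2 V drop
parallel = holder_drop(2.0, 1.0, 2)  # two packs: 1 A each * 1 ohm = 1 V drop
```

The 2 V drop with a single pack was enough to push the Jetson below its minimum input voltage at boot; halving it to 1 V kept the board up.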
We put a lot of work into the project, and I think it turned out really well. We completed six of the seven requirements we were tasked with. We all learned about teamwork, managing a large project, and understanding how much work can be taken on and completed in a given timeframe.