Training an Object Detector with TensorFlow – Kevin

Have you ever heard of Tesla’s Model S sedan? It is one of the few cars on the road capable of highly automated driving. Although U.S. laws do not yet permit fully autonomous operation, the Model S could in principle pick you up at your house and drop you off at school without you ever touching the steering wheel. To create a self-driving vehicle, Tesla engineers had to employ many machine learning techniques, including an object detector that recognizes and classifies objects around the car. For example, the on-board camera can recognize a pedestrian and instruct the car to stop. Likewise, the object detector recognizes other vehicles on the road, keeping the Tesla from colliding with them.

With the TensorFlow Object Detection API, creating such a model (though probably not as accurate as the one Tesla developed) can now be done with consumer-grade equipment such as a personal computer. As promised in last week’s blog, I will discuss how to create a customized object detector with the TensorFlow API.

First of all, this tutorial assumes that you have installed Python and the TensorFlow library. If you have not installed TensorFlow, please follow my tutorial on how to install TensorFlow on a Windows 10 PC. Using TensorFlow with a GPU is recommended, as it will significantly accelerate the training of your model.

[Image: “Kites Detections Output” — sample detections produced by the TensorFlow Object Detection API (Nealwu)]

What model are we training?

One that detects robots. Why? As a member of FIRST Robotics Competition (FRC) Team 1391 (The Metal Moose), I have been looking for new ideas that would improve our robot’s performance during the autonomous period. (Each match runs 150 seconds: the first 15 seconds are the autonomous period, and the remainder is the human-operated period.) To provide the robot with fully autonomous driving capability, an object detector is required for collision avoidance, that is, to prevent the robot from crashing into other robots.

Step 1: Gather Training Dataset

When I first set out to train this model, I collected images of FRC robots from Google Images. This approach, however, failed to produce good results: the images came in different dimensions and qualities, many contained only one robot, and most lacked the complex backgrounds that help a model effectively distinguish robots from other objects. I then came up with the idea of taking screenshots of the recorded live streams of competitions available on YouTube.

Since the project was experimental and was not to be deployed onto our robot, I collected only forty screenshots in my first attempt. With the understanding that robots look different when the live-streaming cameras are placed at different angles, I extracted screenshots from not one but two different competitions. The first 20 screenshots had a camera angle of roughly 20º to 30º; the remaining 20 had a camera angle of about 80º, showing the robots from almost directly above the competition field. The results were not exactly ideal but were usable.
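The screenshot extraction described above can also be done programmatically by sampling frames from a downloaded recording at a fixed interval. Here is a rough sketch, assuming OpenCV (`cv2`) is installed and the recording has been saved locally; the file names and the 5-second interval are my own choices, not from the original workflow:

```python
from pathlib import Path


def sample_indices(total_frames, fps, every_seconds=5.0):
    """Indices of the frames to keep: one every `every_seconds` of video."""
    step = max(1, int(round(fps * every_seconds)))
    return list(range(0, total_frames, step))


def extract_frames(video_path, out_dir, every_seconds=5.0):
    """Save one JPEG every `every_seconds` seconds of the recording."""
    import cv2  # imported here so the sampling helper stays dependency-free

    cap = cv2.VideoCapture(str(video_path))
    fps = cap.get(cv2.CAP_PROP_FPS)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i in sample_indices(total, fps, every_seconds):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)  # seek to frame i
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(str(out / f"frame_{i:06d}.jpg"), frame)
    cap.release()
```

Sampling at a fixed interval keeps consecutive screenshots from being nearly identical, which matters when the dataset is this small.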

To make the model a little more robust, I incorporated 100 images with 6 robots in each. With these 600 (6 × 100) robot samples and a single camera angle, I was able to get much better results. I moved away from the idea of using multiple angles in the training set because the model will be deployed on board a robot on the field, not on a camera directly above the field.
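Once the screenshots are collected, they can be organized into training and evaluation sets before labelling. A minimal sketch using only the standard library; the directory names and the 80/20 split ratio are my assumptions, not from the post:

```python
import random
import shutil
from pathlib import Path


def split_dataset(image_dir, out_dir, train_ratio=0.8, seed=42):
    """Shuffle images and copy them into train/ and eval/ subfolders."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)  # fixed seed for reproducibility
    n_train = int(len(images) * train_ratio)
    splits = {"train": images[:n_train], "eval": images[n_train:]}
    for name, files in splits.items():
        dest = Path(out_dir) / name
        dest.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, dest / f.name)
    return {name: len(files) for name, files in splits.items()}
```

Shuffling before splitting matters here: screenshots taken in sequence from the same match are highly correlated, and a naive chronological split would leak near-duplicate frames between the two sets.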

To access the videos, simply search for “FRC” on YouTube and look for videos with durations longer than 1 hour. I have also listed two competition recordings that you might want to use for the purpose of this tutorial:

In next week’s blog post, I will continue this tutorial with the next steps, including image labelling, choosing a pre-trained model, training the model & monitoring with TensorBoard, and testing the model. Thanks for reading and see you next week!

Works Cited 

Nealwu. Kites Detections Output. GitHub, 21 Sept. 2017, github.com/tensorflow/models/blob/master/research/object_detection/g3doc/img/kites_detections_output.jpg. Accessed 6 Apr. 2018.

Tesla Inc. “Autopilot.” Tesla, http://www.tesla.com/autopilot. Accessed 6 Apr. 2018.

Tran, Dat. “How to train your own Object Detector with TensorFlow’s Object Detector API.” Towards Data Science, 28 July 2017, towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9. Accessed 6 Apr. 2018.

 

