Press Release: Introducing Argus – Peripheral Recognizer

WEST CHESTER, Pennsylvania — January 20, 2018 — Today, Kevin Wang, designer of Westtown Resort and Polaris, announced Argus, an innovative iOS application that uses machine learning to perform scene and object recognition and speaks what it detects aloud to the user.

Designed with Accessibility
Argus is built from the ground up with accessibility in mind. It is the first app to combine an object and scene recognizer with a speech synthesizer, and its intended audience is blind and legally blind users. Argus augments the vision of users with visual impairments and increases their overall peripheral awareness. Additionally, Argus has a simple interface that consists of only two buttons for activating speech utterance. Both buttons work with VoiceOver, which helps users locate them on the screen.

In the future, Argus will be integrated with ARKit. Not only will the app be able to recognize objects and scenes, but it will also be able to estimate distance and gain new abilities such as providing lane guidance for its users.

Machine Learning
Argus is a mobile application with on-device machine learning capability. It utilizes two open-source neural network classifiers, both with industry-leading performance and reliability. The Inceptionv3 model is used for object recognition and recognizes up to 1,000 categories of common objects. The Places205-GoogLeNet model is used for scene detection and can recognize up to 205 scene categories, such as airport, home office, and classroom.
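As an illustration of how a classifier's output might be consumed, the sketch below picks the most likely label from a set of label-confidence scores. This is an assumption about the pipeline for illustration only, not Argus's actual code; the `Prediction` type and `topPrediction` helper are hypothetical.

```swift
import Foundation

// Hypothetical type pairing a class label with the model's confidence in it.
struct Prediction {
    let label: String
    let confidence: Double
}

// Given label-confidence scores from a classifier such as Inceptionv3 or
// Places205-GoogLeNet, return the single most likely prediction.
func topPrediction(from scores: [String: Double]) -> Prediction? {
    guard let best = scores.max(by: { $0.value < $1.value }) else { return nil }
    return Prediction(label: best.key, confidence: best.value)
}

// Example: a tiny slice of a scene classifier's output.
let sceneScores = ["classroom": 0.72, "home office": 0.18, "airport": 0.10]
if let best = topPrediction(from: sceneScores) {
    print("\(best.label): \(best.confidence)")
}
```

In a full app, the winning label would then be handed to the speech synthesizer to be spoken aloud.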

Performance & Efficiency
At its core, Argus uses Apple’s newly introduced machine learning framework, Core ML. The framework is designed for machine learning applications on mobile devices. It optimizes Argus’ performance and efficiency, allowing the app to perform image classification in real time.

Additionally, Core ML doubles image-processing throughput, allowing multiple machine learning models to be used simultaneously in Argus. An additional object recognition model, MobileNet, is employed in situations where Inceptionv3’s confidence is particularly low. Argus feeds the video frames into both models and cross-compares their predictions and confidence levels to achieve the highest accuracy. Users will see increased accuracy when recognizing objects in video feeds that are not easily discernible, especially in low-light situations.
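The cross-comparison step described above can be sketched as follows. This is a minimal illustration under assumed names and a made-up threshold, not Argus's actual implementation: when the primary model (Inceptionv3) is not confident enough, the result from the secondary model (MobileNet) is considered, and whichever prediction carries the higher confidence wins.

```swift
import Foundation

// Illustrative sketch of cross-comparing two classifiers' outputs.
struct Prediction {
    let label: String
    let confidence: Double
}

// Hypothetical cutoff below which the primary model's answer is
// considered unreliable and the secondary model is consulted.
let lowConfidenceThreshold = 0.5

func reconcile(primary: Prediction, secondary: Prediction) -> Prediction {
    // Trust the primary model when it is confident enough on its own.
    if primary.confidence >= lowConfidenceThreshold {
        return primary
    }
    // Otherwise, report whichever model is more confident.
    return primary.confidence >= secondary.confidence ? primary : secondary
}

// Example: a dim frame where Inceptionv3 is unsure but MobileNet is not.
let fromInception = Prediction(label: "lamp", confidence: 0.31)
let fromMobileNet = Prediction(label: "desk lamp", confidence: 0.64)
print(reconcile(primary: fromInception, secondary: fromMobileNet).label)
```

Running both models on every frame costs throughput, which is why the press release ties this feature to Core ML's performance headroom.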

Argus will be marketed as “Argus – Peripheral Recognizer” on the App Store and will be available for download in late January.

Press Contact:
Kevin Wang ’18
Designer, Argus
kevin.wang@westtown.edu

Works Cited

B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. “Learning Deep Features for Scene Recognition using Places Database.” Advances in Neural Information Processing Systems 27 (NIPS), 2014.

Google. “Image Recognition.” TensorFlow, www.tensorflow.org/tutorials/image_recognition. Accessed 8 Jan. 2018.


Note
1. Company and product names may be trademarks of their respective owners.
2. Argus is currently in beta and should not be relied upon as a substitute for sight.
