Tagging IoT Data in a Drone View
Author(s)
Yu-Chee Tseng
Biography
Yu-Chee Tseng received his Ph.D. in Computer and Information Science from the Ohio State University. He is the Founding Dean of the College of Artificial Intelligence, National Chiao-Tung University, Taiwan. A Lifetime Chair Professor at NCTU, he has an h-index higher than 60. Recently, he has published several works on integrating IoT and computer vision.
Academy/University/Organization
National Chiao-Tung University
Source
Demo paper "Tagging IoT Data in a Drone View"
Share this article
You are free to share this article under the Attribution 4.0 International license
- ENGINEERING & TECHNOLOGIES
- Text & Image
- October 22, 2019
Cameras and IoT devices each have their own capabilities for tracking moving objects, but the correlation between their observations has remained unclear. With state-of-the-art deep learning technologies, tracking human objects in videos is no longer a difficult job. At the same time, people use wearable smart watches to track their own daily activities. In this work, we consider using a drone to track ground objects. We demonstrate how to retrieve IoT data from wearable devices attached to human objects and correctly tag that data on the corresponding human objects captured in a drone view. As a result, observing and tracking humans "from the air" becomes possible, and we can even see their identities and personal profiles directly in drone videos. This is the first work to correlate IoT data with computer vision from a drone camera. Potential applications of this work include aerial surveillance, people tracking, and intelligent human-drone interaction.
Drones (also known as Unmanned Aerial Vehicles, UAVs) have been applied to a wide range of security and surveillance applications. When conducting video surveillance, one fundamental issue is person identification (PID), where human objects in videos need to be immediately tagged with their IDs and personal profiles. The cover image, which was taken by a drone, is an example. With a Convolutional Neural Network (CNN) alone, we can only tell that there are four persons. With our IoT-video fusion technology, we can not only detect the people but also tag their names, personal profiles, and the day's past activities on the image. This augmented information is far more user-friendly and enables very intuitive human-computer interfaces.
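To make the detection-and-tagging step concrete, here is a minimal sketch of how a drone frame could be annotated once the fusion step has produced a profile string per person. It assumes an off-the-shelf pretrained person detector from torchvision and OpenCV for drawing; the detector choice and the `profiles` input are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: detect persons in a drone frame with a pretrained CNN detector
# and overlay profile text. Detector and profile lookup are assumptions.
import cv2
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

def tag_frame(frame_bgr, profiles):
    """Draw a box and a profile label for every detected person.

    `profiles` is a hypothetical list of strings, one per detected person,
    already associated with the detections by the IoT-video fusion step.
    """
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = detector([tensor])[0]
    person_idx = 0
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if label.item() != 1 or score.item() < 0.8:  # COCO class 1 = person
            continue
        x1, y1, x2, y2 = map(int, box.tolist())
        cv2.rectangle(frame_bgr, (x1, y1), (x2, y2), (0, 255, 0), 2)
        if person_idx < len(profiles):
            cv2.putText(frame_bgr, profiles[person_idx], (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        person_idx += 1
    return frame_bgr
```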
Traditional technologies such as RFID and fingerprint/iris/face recognition have their limitations or require close contact with specific devices. Hence, they are not applicable to drones with changing heights and view angles. In this work, we present an approach that detects human objects in videos taken by a drone and correctly tags them with the personal profiles retrieved, through wireless communications, from the wearable devices on those human objects. By combining IoT and AI, we can display IoT data directly on the recognized image. Our target application is future aerial surveillance with much richer information. The displayed information can include the past activities of users even from before they entered the camera view.
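The article does not spell out how a wearable device is paired with the right person in the video. One plausible fusion step, shown only as a sketch under assumptions, is to compare a motion feature reported wirelessly by each device with the image-plane motion of each visual track and then solve the pairing globally; the correlation-based cost and the Hungarian assignment below are illustrative choices, not necessarily the demo paper's method.

```python
# Sketch of one possible IoT-video fusion step: match wearables to visual
# tracks by motion similarity over a shared time window (assumed approach).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_devices_to_tracks(device_signals, track_signals):
    """Return {device_id: track_id} pairing wearables with visual tracks.

    device_signals: {device_id: 1-D array}, e.g. per-second step intensity
                    reported by each smart watch (hypothetical feature).
    track_signals:  {track_id: 1-D array}, e.g. per-second image-plane speed
                    of each tracked person, sampled on the same time grid.
    """
    device_ids = list(device_signals)
    track_ids = list(track_signals)
    cost = np.zeros((len(device_ids), len(track_ids)))
    for i, d in enumerate(device_ids):
        for j, t in enumerate(track_ids):
            # Cost = negative correlation: more similar motion, lower cost.
            corr = np.corrcoef(device_signals[d], track_signals[t])[0, 1]
            cost[i, j] = -np.nan_to_num(corr)
    rows, cols = linear_sum_assignment(cost)
    return {device_ids[i]: track_ids[j] for i, j in zip(rows, cols)}
```

Once a device is matched to a track, the profile carried by that device can be handed to the tagging sketch above for display on the drone video.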