Video analytics to code naturalistic driving data
Naturalistic driving data offers a unique window into driving and promises to greatly improve our understanding of what contributes to safe and unsafe driving. Video data recorded from vehicles over months can be particularly valuable. Unfortunately, the current practice of manually coding video data demands many hours of frame-by-frame work, making systematic analysis of millions of hours of video infeasible.
We address this challenge with automatic video coding in a collaboration with faculty and students from Industrial Engineering (R. Radwin), Electrical and Computer Engineering (Y.H. Hu), Computer Science (Li Zhang), and Civil Engineering (D. Noyce, M. Chitturi).
Optic flow to assess driver engagement
We applied optical flow analysis to two SHRP2 NDS sample videos to derive a spatio-temporal characterization of driver behavior in terms of distraction and engagement. The hypothesis is that a tired driver may exhibit little movement while driving, whereas an engaged driver may check the side and rear-view mirrors and scan the road, and hence exhibit moderate movement. Rather than estimating exact head-pose angles and directions, optical flow provides an overall description of behavior that may be a more robust indicator of driver engagement.
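The sketch below illustrates this idea, assuming OpenCV's dense Farneback optical flow and a hypothetical driver-facing clip driver_face.mp4; the per-frame mean flow magnitude stands in for overall driver movement. It is a minimal illustration, not the project's exact pipeline.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("driver_face.mp4")  # hypothetical sample clip
    ok, frame = cap.read()
    assert ok, "could not read video"
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    movement = []  # per-frame mean optical-flow magnitude

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback flow between consecutive frames
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        movement.append(float(np.mean(mag)))
        prev_gray = gray

    cap.release()
    # Low sustained movement may suggest drowsiness; moderate, varied
    # movement (mirror checks, road scanning) suggests engagement.
    print(f"mean movement: {np.mean(movement):.3f}")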
Check out the optic flow videos below -
Driver 1 -
Driver 2 -
Hand tracking to quantify manual distractions
Even with poor-quality video, we successfully tracked drivers' hands. Hand tracking makes it possible to automatically code instances where drivers remove their hands from the steering wheel to reach for the radio, CD player, or phone.
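The project's own tracker is not shown here; as one illustration, the sketch below uses MediaPipe Hands to flag frames in which a detected hand leaves an assumed steering-wheel region of interest (the clip name and ROI coordinates are hypothetical).

    import cv2
    import mediapipe as mp

    WHEEL_ROI = (0.2, 0.5, 0.8, 1.0)  # assumed (x1, y1, x2, y2), normalized

    def hand_off_wheel(hand_landmarks):
        """True if the wrist landmark lies outside the assumed wheel ROI."""
        wrist = hand_landmarks.landmark[mp.solutions.hands.HandLandmark.WRIST]
        x1, y1, x2, y2 = WHEEL_ROI
        return not (x1 <= wrist.x <= x2 and y1 <= wrist.y <= y2)

    cap = cv2.VideoCapture("driver_cabin.mp4")  # hypothetical clip
    off_wheel_frames = []
    with mp.solutions.hands.Hands(max_num_hands=2) as hands:
        frame_idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks and any(
                    hand_off_wheel(h) for h in results.multi_hand_landmarks):
                off_wheel_frames.append(frame_idx)
            frame_idx += 1
    cap.release()
    print(f"{len(off_wheel_frames)} frames with a hand off the wheel")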
Check out videos below -
Video 1 -
Video 2 -
Extracting facial features

Identifying and tracking facial features enables automatic coding of a range of behaviors relevant to identifying driver distraction as well as drowsiness. Distractions that require a head rotation can be coded automatically. Likewise, drivers' attention to other traffic, particularly at intersections, can be identified by tracking the head rotations drivers make to check for traffic on the left and right as well as in their blind spot.
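As an illustration of the coding step, the sketch below turns per-frame head-yaw estimates (from any face tracker) into glance events; the 30-degree threshold, minimum duration, and positive-yaw-means-left convention are assumptions, not the project's values.

    import numpy as np

    def code_glances(yaw_deg, fps=15, thresh=30.0, min_sec=0.3):
        """Return (start_s, end_s, direction) for sustained glances."""
        yaw = np.asarray(yaw_deg, dtype=float)
        off_road = np.abs(yaw) > thresh
        events, start = [], None
        for i, flag in enumerate(off_road):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if (i - start) / fps >= min_sec:
                    side = "left" if yaw[start:i].mean() > 0 else "right"
                    events.append((start / fps, i / fps, side))
                start = None
        # Flush a glance still in progress at the end of the clip
        if start is not None and (len(yaw) - start) / fps >= min_sec:
            side = "left" if yaw[start:].mean() > 0 else "right"
            events.append((start / fps, len(yaw) / fps, side))
        return events

    yaw = [0] * 10 + [45] * 8 + [0] * 10  # a brief check to the left
    print(code_glances(yaw))  # one glance left, roughly 0.67-1.2 s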
Check out the videos below and the presentation here.
Example 1
Example 2
Example 3
Example 4
Identifying environmental features

Coding the driving context is often as important for understanding behavior as coding the driver's actions. This video shows automatic coding of semantically meaningful features, such as buildings, roadway, signs, and traffic signals.
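As a sketch of how such output could be summarized, the snippet below converts a per-pixel label map (from any road-scene segmentation model, e.g. one trained on Cityscapes) into per-frame coverage fractions for the classes of interest; the class IDs are illustrative assumptions.

    import numpy as np

    # Assumed mapping from segmentation class IDs to context labels
    CLASSES = {0: "road", 1: "building", 2: "sign", 3: "traffic_signal"}

    def context_summary(label_map):
        """Fraction of the frame covered by each class of interest."""
        return {name: float(np.mean(label_map == cid))
                for cid, name in CLASSES.items()}

    # Example with a toy 4x4 label map standing in for model output
    labels = np.array([[0, 0, 1, 1],
                       [0, 0, 1, 1],
                       [0, 0, 2, 3],
                       [0, 0, 0, 0]])
    print(context_summary(labels))
    # {'road': 0.625, 'building': 0.25, 'sign': 0.0625, 'traffic_signal': 0.0625}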
Check out the video and presentation here.
Driving simulator

A Realtime Technologies driving simulator is equipped with a Ford Fusion cab, a one degree-of-freedom pitch motion base, and 240 degrees of computer-generated scenery (powered by six projectors, eight-foot-tall screens, and several LCD monitors). The 240-degree arc of projector screens and a surround-sound system simulate the visual and auditory experience of driving on-road, and the motion platform produces the movement and vibration that accompany on-road driving.