Surveillance camera video in traffic-collision reconstruction

Surveillance cameras are everywhere in commercial districts, and their video can be useful when accidents occur in nearby traffic lanes – but act fast!

Kurt D. Weiss
May 2016

Surveillance cameras have become so widespread that they are often present without the motoring public’s knowledge. Most common are theft-deterring cameras in the upper corners of convenience stores, cameras in bank automated-teller machines, and cameras under gas-station pump overhangs (see fig 1).

Virtually every street corner of a densely populated metropolitan city will have surveillance cameras mounted above commercial properties, and these cameras also monitor pedestrian traffic and vehicles in adjacent traffic lanes. These technologies have proven useful to the traffic-collision reconstructionist.

This article explains how surveillance-camera and some onboard vehicle-camera videos may be applied to traffic-collision reconstruction. Several real-world examples illustrate how images, some of poor quality, may be used to supplement conventional traffic-collision reconstruction methodology.

Traffic-collision investigators frequently use momentum or damage-energy analyses, collision reconstruction or simulation software, or crash data retrieved from the vehicle’s airbag control module (ACM). The analysis of surveillance-camera video is another tool at their disposal.

The following material is not intended to be comprehensive, but it may provide useful insight into integrating surveillance-camera video into collision analysis. These concepts may be applied to a wide range of collision types, as long as the camera’s view captures reference landmarks and the pre- or post-impact movements of the involved vehicles. The resulting accuracy may depend on image quality and playback frame rate.

Collision information sources

Police routinely respond to traffic collisions. The responding officers write traffic reports documenting the involved parties and vehicles, including a collision overview, and they sometimes take photographs of the property damage and the physical evidence on the roadway. These two items are the primary sources of information for a traffic-collision reconstructionist (“Photographic Techniques for Accident Reconstruction,” John F. Kerkhoff, SAE Paper No. 850248). But if the adjacent area is thoroughly canvassed, especially for collisions involving severe injuries or fatalities, the investigator may find surveillance cameras.

The challenge of obtaining a surveillance camera video

Finding a surveillance camera in the area does not necessarily mean the video will help the investigation: some cameras are decoys, and others may not point toward the relevant area. When a working camera is located, several concerns arise in obtaining a copy of the surveillance video. The foremost is the time between the event and the investigation. Most surveillance systems record to a storage drive of finite capacity; when the digital media is full, new images overwrite the oldest ones, like a circular buffer. Accordingly, efforts to retrieve video files must be made expeditiously. Then, assuming access to the recording equipment is obtained, the next challenge the investigator faces is how to copy the video without a loss of image quality. It is at this step that the most diverse video quality is observed.

One example is using a cell phone to record the playback monitor while it displays the incident in question. Another is copying the raw digital file from the recording equipment directly to a USB flash drive, though often in a format that requires special viewing software. Videos obtained by both methods can be used to some degree and may provide meaningful results to the accident reconstructionist. Even the poorest-quality video may yield something useful to the analysis.

Video image quality: resolution and frame rate

Surveillance-camera video quality varies between systems. Resolution is the video frame dimensions, in units of pixels, identifying the width and height of the video image (e.g., 1440x900 or 1280x1024). Image quality increases with pixel count. Frame rate, in units of frames per second (fps), is the number of images the system records (or plays back) per second at a specified resolution. A common capture rate for home surveillance cameras is 30fps.
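As a quick illustration, both properties reduce to simple arithmetic. The short Python sketch below uses the example values cited above; it is not tied to any particular camera system.

```python
# A minimal sketch of the two properties described above, using the
# example values cited in the text (not tied to any particular system).

def pixel_count(width, height):
    """Total pixels per frame; image quality increases with pixel count."""
    return width * height

def frame_interval(fps):
    """Seconds between consecutive frames at a given frame rate."""
    return 1.0 / fps

print(pixel_count(1280, 1024))  # 1310720 pixels per frame
print(frame_interval(30))       # ~0.0333 s between frames at 30 fps
```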

Environmental conditions may affect video-image quality despite a high-resolution system. The sun’s azimuth and altitude continually change, and sunlight striking the surveillance-camera lens may wash out portions of the video image. Accumulated dust on the lens may obscure otherwise apparent detail. Low-light conditions at the time of recording may yield a video so dark that objects are no longer visible on a playback monitor. The method used to obtain a copy of the video can also diminish image quality: recording the playback monitor with another video device can reduce the image quality so much that necessary detail is lost (see fig 2).

Video distortion

Surveillance videos may appear distorted, a result of commonly used wide-angle lenses. This radial distortion is a quadratic function that increases as the square of the distance from the image center (“Forensic Engineering Usage of Surveillance Video in Accident Reconstruction,” Richard M. Ziernicki, PhD, et al., Journal of the National Academy of Forensic Engineers, Vol. 31, December 2014). The effect causes the image magnification to decrease with distance from the image center, so otherwise straight lines appear curved as they approach the edges of the image. This lens distortion can be corrected, or at least minimized. If the camera and lens properties are known, many software applications can correct it automatically; adjusting lens-correction filters manually within Adobe Photoshop is an alternative when the camera and lens properties are unknown. Depending on the surveillance video quality and the details of the project, the effect of lens distortion on the analysis may be minimal.
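For readers who want to see the correction in concrete terms, the sketch below inverts a simple one-coefficient radial-distortion model in Python. The model form, the coefficient value, and the sample point are assumptions chosen for illustration; calibrated tools such as those mentioned above apply the correction to every pixel using measured lens parameters.

```python
import numpy as np

# A hedged sketch of correcting a simple one-coefficient radial
# ("barrel") distortion model. The model form, the coefficient k1, and
# the sample point are assumptions for illustration; real corrections
# use calibrated coefficients and operate on every pixel of the image.

def distort(xy, k1):
    """Forward model: map an undistorted point (normalized coordinates,
    origin at the image center) to its distorted location."""
    r2 = np.dot(xy, xy)          # squared distance from the image center
    return xy * (1.0 + k1 * r2)  # radial displacement grows with r^2

def undistort(xy_d, k1, iterations=10):
    """Invert the forward model by fixed-point iteration: repeatedly
    divide out the distortion factor evaluated at the current estimate."""
    xy = np.array(xy_d, dtype=float)
    for _ in range(iterations):
        r2 = np.dot(xy, xy)
        xy = xy_d / (1.0 + k1 * r2)
    return xy

point = np.array([0.8, 0.6])        # a point away from the image center
warped = distort(point, k1=-0.15)   # negative k1 produces barrel distortion
print(undistort(warped, k1=-0.15))  # recovers approximately [0.8, 0.6]
```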

Video playback speed

Video playback speed is the rate at which images are displayed relative to time; an example is 30fps, or one frame every 0.0333 seconds. In one example examined below, an onboard vehicle camera system recorded at 4fps, i.e., one frame every quarter second, but plays back at 30fps. In that video, the movement of objects is choppy, unlike the fluid motion seen at higher recording speeds.

Surveillance videos commonly display a timestamp, and this timestamp may help determine the frame rate (“Speed Calculation from a Video Tape,” Sergeant Mark Kimsey, Accident Reconstruction Journal, July/August 2008). I frequently use Windows Live Movie Maker because it easily creates image snapshots, and it conveniently displays a timestamp with 1/100-second resolution. The video can be advanced forward or backward one frame at a time using the keyboard arrow keys. The frame rate may be calculated by counting keystrokes (frames) between timestamp increments and then dividing the frame count by the time increment.
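The arithmetic is simple enough to sketch in a few lines of Python. The keystroke count and timestamp increment below are assumed values chosen only to illustrate the calculation.

```python
# A minimal sketch of the frame-rate check described above. The keystroke
# count and timestamp increment are assumed values for illustration.

def frame_rate(frames_counted, time_increment_s):
    """Frames per second = frames advanced / elapsed timestamp time."""
    return frames_counted / time_increment_s

# e.g., 15 arrow-key steps advance the timestamp by exactly 1.00 second:
fps = frame_rate(15, 1.0)
print(fps)        # 15.0 frames per second
print(1.0 / fps)  # ~0.067 s between frames
```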

Vehicle speed from video analysis

Surveillance videos may be used to authenticate eyewitness statements, determine the color of traffic lights in cases of conflicting accounts, or verify headlight use at the time of collision when damage precludes standard forensic analysis. Aside from these ancillary benefits, however, surveillance videos are commonly used to determine vehicle speed before or at impact. In simple terms, determining vehicle speed (s) requires the time (t) and distance (d) over which the vehicle travels: s = d/t. Both parameters must be determined as accurately as possible to yield reliable results.

Distance (d) is determined by tracking the vehicle’s movements relative to stationary landmarks or reference lines drawn between objects in the video image. The intent is to establish the vehicle’s position on the roadway at a particular time. A time-position analysis may then be performed using the distance traveled between consecutive vehicle positions and the corresponding time interval.

Video image quality (lighting conditions, dust on the lens, etc.) may limit landmark selection. Landmarks such as lane lines, raised pavement markers, or signposts may be chosen because they are conveniently adjacent to the vehicle passing in view of the camera. Alternatively, reference lines may be used. Reference lines are established between two points in the video: the camera location and a stationary object, or two stationary objects. The vehicle’s position may then be determined relative to these lines.

If the collision site can be inspected, a total station or 3D laser scanner can be used to create a scale drawing, or collision diagram. On this diagram, landmarks are identified and reference lines are drawn; then each unique vehicle position (and its corresponding time) may be placed. Where the roadway or intersection has changed (e.g., restriping or new construction) or travel cost prohibits direct site measurement, scaled high-resolution aerial photographs may yield suitable results.

Using the scale diagram (or aerial photograph), the distance traveled between vehicle positions may be measured directly. The time between vehicle positions is determined by subtraction: the time corresponding to one vehicle position is subtracted from the time corresponding to the next. The vehicle speed between positions is then calculated as distance divided by time. One must pay particular attention to the units of measure: if distance is measured in feet and time in seconds, the calculated speed will have units of feet per second, not miles per hour.
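The conversion is worth making explicit. The short Python sketch below uses the familiar benchmark that 88 feet per second equals exactly 60 miles per hour.

```python
# The unit conversion flagged above: 1 mile = 5280 ft and 1 hour = 3600 s,
# so speed in mph = speed in ft/s multiplied by 3600/5280.

FT_PER_S_TO_MPH = 3600.0 / 5280.0   # ~0.6818

def ft_per_s_to_mph(speed_ft_per_s):
    return speed_ft_per_s * FT_PER_S_TO_MPH

print(ft_per_s_to_mph(88.0))  # 60.0 -- 88 ft/s is exactly 60 mph
```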

The time over which the vehicle travels between landmarks or sight lines can be determined by stepping through the video, starting with the vehicle at one landmark and counting frames until the vehicle reaches the second. Once the frame count is determined, the time (seconds) is calculated by dividing the frame count by the playback speed (fps).
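Combining the distance measurement, the frame count, and the playback speed, the entire calculation can be sketched in a few lines of Python. The distance and frame count below are assumed values for illustration only.

```python
# Putting the pieces together: a minimal sketch of the speed calculation
# described above. The distance and frame count are assumed values; in
# practice the distance comes from the scale diagram and the frame count
# from stepping through the video between the two landmarks.

def speed_from_video(distance_ft, frame_count, playback_fps):
    """Average vehicle speed between two landmarks captured on video."""
    elapsed_s = frame_count / playback_fps   # frames -> seconds
    speed_ft_per_s = distance_ft / elapsed_s
    return speed_ft_per_s * 3600.0 / 5280.0  # ft/s -> mph (see above)

# e.g., a vehicle covers 66 ft in 30 frames of 30 fps video (1.0 s):
print(speed_from_video(66.0, 30, 30.0))  # 45.0 mph
```

Note that at 30fps a one-frame miscount changes the elapsed time by only 0.033 seconds, but that error is a larger share of the total over short intervals, which is why careful frame counting matters most when the measured distance is small.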

Surveillance video analysis — Case-study examples

Example #1 — Motorcycle vs. vehicle collision

This case involves a driver who briefly stopped on the right shoulder of a four-lane boulevard and then attempted an illegal U-turn. During the U-turn, the vehicle crossed the path of a motorcycle approaching from behind in the adjacent traffic lane. The vehicle was nearly perpendicular to the traffic lanes when the motorcycle slammed into the driver’s door, killing the rider.

A surveillance camera was discovered during the investigation. The camera faced the street and captured the moments leading up to and including the collision (see fig 3). The investigator was unable to obtain the raw video file; instead, a recording was made of the playback monitor as it displayed the collision event. Note the light fixture’s reflection on the monitor (see fig 4). The video quality is so poor that the impact speeds of the vehicle and motorcycle could not be accurately determined. However, the video still provides important timing and pre-impact information.

The video shows the vehicle stopped along the curb for about 32 seconds while the driver waits for traffic to clear (see fig 5). The vehicle’s headlights and taillights are illuminated, and brake-light function is confirmed as the vehicle inches forward several times before commencing the U-turn (see fig 6). The U-turn lasts about 3 seconds until the vehicle is observed to roll clockwise from the force of impact (see fig 7). The vehicle continues after impact and eventually stops on the far-side shoulder.

The motorcycle’s approach is announced by its headlight beam on the roadway for more than a second before the motorcycle enters the video image, confirming that the motorcycle’s headlights were functioning (see fig 8). The motorcycle travels across the video screen for less than a second until impact (see fig 9). Just before impact, however, an increase in the intensity of the headlight beam on the roadway is detected. This critical observation indicates the rider at least applied the front brake: the front suspension compressed, the motorcycle pitched downward, and the headlight beam angle changed. The surveillance video thus confirmed the rider’s perception of the impact hazard, even though the motorcycle had ABS and no tire friction marks were observed or documented at the scene.

Example #2 — Semi tractor-trailer vs. automobile collision

This case involves a collision between a semi tractor-trailer and a passenger vehicle that was merging onto a highway. As the automobile merged left, it slowed until its left-rear bumper corner was struck by the right edge of the tractor’s front bumper. Upon contact, the vehicle rotated counterclockwise and was redirected to the left. The vehicle then crossed the adjacent traffic lanes and collided with another vehicle in what became a three-car event.

The trucking company installed the DriveCam event recorder in its fleet vehicles (see fig 10). The DriveCam unit simultaneously records one camera’s view forward through the windshield and a second camera’s view of the driver (see figs 11-14). When a pre-determined condition, or “trigger,” is met, the event recorder stores the event. In this case, the video recording assisted the analysis of the developing collision sequence, including the driver’s actions. Notably, since the date of the collision, DriveCam has become the flagship product of Lytx, a company marketing the science of driver behavior (www.lytx.com).

Based on the timestamp, the DriveCam video plays back in real time at 30fps. But the camera records at 4fps, so new video frames appear in quarter-second increments. At the bottom of the DriveCam image, the tractor’s speed is displayed along with forward and lateral acceleration and a timestamp relative to the trigger event. Vehicle speed was not in dispute, since the tractor’s speed is based on GPS satellite signals and is reasonably accurate.
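One practical consequence of the 4fps recording rate can be sketched in a few lines of Python: any event time read from the video is resolved only to the quarter-second recording increment. The frame-to-time mapping below is an illustrative assumption based on the rates cited above.

```python
import math

# A hedged sketch of the timing implication noted above: with a 4 fps
# recording played back at 30 fps, each recorded frame is held on screen
# for several playback frames, so event times read from the video are
# resolved only to the 0.25 s recording interval. The rates are those
# cited above; the mapping itself is an illustrative assumption.

RECORD_FPS = 4.0     # recorded frames per second
PLAYBACK_FPS = 30.0  # playback (display) frames per second

def recorded_frame_time(playback_frame_index):
    """Time of the recorded frame shown at a given playback frame,
    snapped to the most recent 0.25 s recording increment."""
    t_playback = playback_frame_index / PLAYBACK_FPS
    return math.floor(t_playback * RECORD_FPS) / RECORD_FPS

print(recorded_frame_time(37))  # 37/30 s ~ 1.23 s -> shows the 1.00 s frame
```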

The passenger vehicle comes alongside the tractor and initiates the merge left. Approximately two seconds into the merge, the vehicle’s left-side tires roll over the lane separator line (see fig 12). The vehicle continues moving left, then slows until contact with the tractor’s bumper is made (see fig 13). Impact is identified as the moment the vehicle’s heading change is first detected. After impact, the vehicle rotates and translates left until it enters the adjacent left lane, an action that takes approximately 1.25 seconds (see fig 14).

Without the onboard video camera, piecing together the pre-impact and impact events would not have been possible due to the lack of documented physical evidence. In this case, no tire marking or other roadway evidence was recorded, and the areas of impact (AOIs) were only estimated because access to the roadway was limited by vehicle traffic speed.

Example #3 — Wrong-way DUI driver head-on collision

This case involves a law-enforcement deputy responding to the report of a wrong-way driver. Up ahead, several vehicles had successfully avoided a head-on collision before the deputy entered the highway, driving toward the offending SUV. Approaching from behind the deputy, in the same lane, were two vehicles, one behind the other. These vehicles changed lanes to the left and passed the slowly merging law-enforcement vehicle; the deputy had not yet switched on the overhead lights (see fig 15). In the left lane, the passing vehicles were now on a collision course with the offending vehicle. At the last moment, the lead vehicle swerved right, but the trailing vehicle plowed head-on into the wrong-way vehicle (see fig 16).

The law-enforcement agency used the Coban Technologies onboard video camera (www.cobantech.com), installed on the windshield header adjacent to the interior rearview mirror (see fig 17). The camera captures images continuously but does not record until the overhead lights are activated; once activated, the system also records the previous 60 seconds of captured images. When the deputy noticed the vehicles pass, he activated the overhead lights and, in doing so, captured the fatal collision, which occurred at a closing speed in excess of 120mph. Without the Coban video, the vehicles’ speeds and their relative positions before impact would have been only speculation.

A time-position analysis was employed to determine 1) the speed of the law-enforcement vehicle at the time of the merge and 2) the speeds of the passing vehicles. The roadway position of the law-enforcement vehicle during its approach was determined using the push guard, because its position is fixed relative to the video camera (see fig 18). The position of this vehicle reference was tracked relative to artifacts on the roadway, such as asphalt patches, lane lines, and raised reflectors. The passing vehicles were tracked in a similar fashion; however, their headlight beams on the roadway were tracked throughout the video sequence to establish their roadway positions in time (see fig 19).

The frame rate of the video playback provided the time increments between the artifacts. The on-ramp and highway were surveyed by a licensed professional land surveyor, who prepared a scale roadway diagram on which the distances between artifacts, i.e., corresponding vehicle positions, were measured. The law-enforcement vehicle was determined to have merged onto the highway at approximately 32mph, less than half the posted speed limit of 65mph. The vehicles that passed it were determined to be traveling at approximately 60mph.

Conclusion

Traffic-collision reconstruction is a multi-disciplinary field that encompasses the proper application of physics and engineering principles to quantify the motion and collision dynamics of traffic collisions. Conventional reconstruction methods include, but are not limited to, momentum or damage-energy analyses, collision reconstruction or simulation software, interpreting crash data, and evaluating physical evidence at the collision scene, such as tire friction marks and other debris. The analysis of surveillance-camera video is another available tool.

Surveillance videos vary greatly in quality, but regardless of quality, they often reveal some collision detail that would not have been discovered otherwise. Conventional reconstruction methods do not fail for lack of a surveillance camera capturing the event in question; but when surveillance videos are available, they have been shown to advance the understanding of traffic collisions.

Kurt D. Weiss

Kurt D. Weiss is a collision reconstruction specialist and forensic engineer with Automotive Safety Research, Inc., in Santa Barbara. He holds a Master of Science degree in mechanical engineering and is ACTAR accredited. Since 1986, Mr. Weiss has reconstructed hundreds of traffic collisions, and he is familiar with the wide diversity of physical evidence present in crash reconstruction. He is a frequent speaker on traffic-collision reconstruction and on the forensic analysis and performance evaluation of automotive seat-belt systems. He regularly attends scientific conferences and has authored numerous peer-reviewed papers on topics relating to traffic-collision reconstruction.

Figures 1-19

Copyright © 2024 by the author.
For reprint permission, contact the publisher: Advocate Magazine