How Moblr measures On-Ramp acceleration

The 0-60 mph acceleration run is, by far, the best-known ‘classic’ test of a car’s performance. The quickest times it produces – which are now down to a few seconds for modern high-performance cars – are the first thing car enthusiasts want to know. But does it have any relevance?

Yeah … if the point is to win a bet with a friend. But in reality, most of the accelerating we do requires only a small fraction of a car's capability. The rest slumbers in the remaining travel of the accelerator pedal, waiting for the rare emergency.

However, one instance where that reservoir is tapped more deeply than usual is the stress of accelerating onto a freeway on-ramp; at metered ones, you're required to go from a dead stop to the ambient speed of hustling traffic – at least 65 mph – in order to merge, and the typical on-ramp is about 1,000 feet long.

To illustrate a car's acceleration potential and interpret its real-world relevance, we portray a 0-60 mph run in this on-ramp setting to see what it looks like. Along the way, we add a few comments to color in the experience, the 0-60 mph time itself (weather-corrected), a calculation that places that time in the context of all other cars, and the percentage of this car's power that would be needed to accelerate smoothly to merging speed (what you'd actually be doing in real-world driving).
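For the technically curious, that last figure can be roughed out from the ramp length and merge speed alone. Here's a minimal sketch in Python – the car's mass and power are illustrative, and aerodynamic drag and rolling resistance are ignored, so the real share would be somewhat higher:

```python
# Back-of-the-envelope estimate: what fraction of a car's rated power is
# needed to reach merge speed by the end of the ramp? (Drag and rolling
# resistance are ignored, so this slightly understates the real need.)

MPH_TO_MS = 0.44704   # mph -> m/s
FT_TO_M = 0.3048      # ft -> m

def merge_power_fraction(mass_kg, rated_power_kw, ramp_length_ft, merge_speed_mph):
    v = merge_speed_mph * MPH_TO_MS       # merge speed, m/s
    s = ramp_length_ft * FT_TO_M          # ramp length, m
    a = v ** 2 / (2 * s)                  # constant acceleration needed (v^2 = 2as)
    p_needed_kw = mass_kg * a * v / 1000  # power at the merge point (P = m*a*v)
    return p_needed_kw / rated_power_kw

# Illustrative numbers: an 1,800 kg car rated at 224 kW (~300 hp)
print(f"{merge_power_fraction(1800, 224, 1000, 65):.0%} of rated power")  # ~32%
```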

An important detail to discuss here is that our acceleration runs commence by simply releasing the brake, promptly moving the foot to the right, and mashing the accelerator pedal (during that interval, cars with start-stop systems will be restarting their engines). This method produces times that are a bit slower – but much more realistic – than what you'll see from car-enthusiast sources, which attempt to get the fastest time possible by any means, some of them weird and abusive, and none of them bearing any relationship to how people actually operate cars. (In the future, we'll add a pedal-request-to-60-mph time to capture the powertrain's response time as well.)

Critically, we don't subtract the 1-foot 'rollout' time – a practice still followed elsewhere for arcane, historical reasons – though we do show it for reference. Beyond distorting the reality of a car's acceleration time, that subtracted number isn't even consistent in size, with the consequence that it garbles comparisons between cars. Testing experts will notice that our 1-foot times are lengthier than they're accustomed to seeing – and that's precisely due to the absence of those driver antics at the start.
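To see how that subtraction works mechanically, here's a minimal sketch that computes a 0-60 mph time from logged samples, with and without the legacy 1-foot rollout (the log itself is synthetic, standing in for real test data):

```python
import numpy as np

def zero_to_sixty(t, v_mph, d_ft, subtract_rollout=False):
    # Interpolate the moment the car reaches 60 mph (speed must be rising).
    t60 = np.interp(60.0, v_mph, t)
    if not subtract_rollout:
        return t60
    # Legacy convention: start the clock once the car has already moved 1 ft.
    t_rollout = np.interp(1.0, d_ft, t)
    return t60 - t_rollout

# Synthetic log: 100 Hz samples of a steady ~0.45 g launch from brake release
t = np.arange(0, 7, 0.01)            # s
v_mph = 4.4 * t / 0.44704            # constant 4.4 m/s^2, expressed in mph
d_ft = 0.5 * 4.4 * t ** 2 / 0.3048   # distance covered, in ft

print(f"from brake release: {zero_to_sixty(t, v_mph, d_ft):.2f} s")        # ~6.10
print(f"with 1-ft rollout:  {zero_to_sixty(t, v_mph, d_ft, True):.2f} s")  # ~5.72
```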

Our visual depiction of the acceleration animates the background scene instead of the car itself (which is the perspective you have from behind the wheel). To do it, the on-ramp scene is re-timed, frame by frame, to exactly reenact the car's physical performance.
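A minimal sketch of that re-timing idea, assuming the reference scene was filmed in a single pass at a known constant speed (all names and numbers here are illustrative):

```python
import numpy as np

def retime_frames(t_car, d_car, ref_speed_ms, ref_fps, playback_fps=30):
    """Choose which reference-video frame to show at each playback instant.

    t_car, d_car -- the test car's logged time (s) and distance (m)
    ref_speed_ms -- the constant speed the scene was filmed at (m/s)
    ref_fps      -- frame rate of the reference video
    """
    t_play = np.arange(0, t_car[-1], 1 / playback_fps)
    d_play = np.interp(t_play, t_car, d_car)         # car's position at each instant
    ref_time = d_play / ref_speed_ms                 # when the camera was at that spot
    return np.round(ref_time * ref_fps).astype(int)  # frame indices to display

# Illustrative: a 6-second, ~0.4 g run re-timed against a 10 m/s, 60 fps pass
t = np.linspace(0, 6, 601)
d = 0.5 * 4.0 * t ** 2
frames = retime_frames(t, d, ref_speed_ms=10.0, ref_fps=60)
```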


How Moblr measures Lane Centering performance

Figuring out how to evaluate Automatic Lane Centering (ALC) systems has been an ongoing challenge for the automotive media, which lacks the resources of automakers. Typically, the press has had to describe ALC performance almost completely subjectively, hence the popular phrases “it ping-pongs between lane edges” or “is rock-steady in the middle of the lane.”

Moblr is the first media entity to replace ALC's mushy subjectivity with precise, objective measurements, and we have tested and quantified ALC performance on more than 30 vehicles. Here's how we do it:

We record the vehicle's position in the lane with video cameras (see above) and analyze each frame post-test with a proprietary machine-learning computer-vision tool. The cameras are attached to either side of the car and aimed downward at the adjacent roadway so the lane markers are captured in the scene. Using calibration videos shot while the vehicle is motionless, we correlate the lane markings' image positions with their actual distances from the car's centerline; the cameras are triggered simultaneously to keep them synchronized. Approximately 16 miles of driving (including mild curves) are recorded while traveling at 70 mph.
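Here's a minimal sketch of that calibration step, with hypothetical pixel columns and distances standing in for real calibration data:

```python
import numpy as np

# Hypothetical calibration: pixel column at which a marker appears when placed
# at known lateral distances from the car's centerline (car stationary).
cal_pixels = np.array([120, 310, 480, 640, 790])
cal_dist_m = np.array([1.2, 1.5, 1.8, 2.1, 2.4])

def marker_offset_m(pixel_x):
    """Lateral distance (m) from the car's centerline to a detected marker."""
    return np.interp(pixel_x, cal_pixels, cal_dist_m)

def lane_offset_m(left_px, right_px):
    """Car's offset from lane center, from one frame of each side camera."""
    left = marker_offset_m(left_px)    # centerline to left lane marker
    right = marker_offset_m(right_px)  # centerline to right lane marker
    return (right - left) / 2          # > 0 means the car sits left of center
```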

Later, the results are filtered to eliminate any questionable data. Then, the data are used to create a histogram, which shows graphically how the car's position is distributed across the lane. To score each histogram, we normalize for the size of the data set, apply a linearly increasing penalty to car positions according to how far they are offset from the lane's centerline, and then add it all up (the smaller the number, the better).
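A minimal sketch of that scoring step – the bin width, offset range, and test data are assumptions, not our production values:

```python
import numpy as np

def centering_score(offsets_m, bin_width=0.05, max_offset=1.0):
    """Score a run from per-frame lateral offsets (m); smaller is better."""
    edges = np.arange(-max_offset, max_offset + bin_width, bin_width)
    counts, edges = np.histogram(offsets_m, bins=edges)
    fractions = counts / counts.sum()         # normalize for data-set size
    centers = (edges[:-1] + edges[1:]) / 2
    penalty = np.abs(centers)                 # linearly increasing with offset
    return float(np.sum(fractions * penalty))

# Illustrative comparison: a tight system vs. a ping-ponging one
rng = np.random.default_rng(0)
tight = rng.normal(0.0, 0.08, 50_000)         # hugs the center
pingpong = rng.normal(0.0, 0.30, 50_000)      # wanders widely
print(centering_score(tight), centering_score(pingpong))  # ~0.06 vs ~0.24
```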

Of course, the performance of all lane-centering systems depends on the road's quality, curviness, lighting, and the distinctness of its markings. Consequently, these tests are initially being done on a public road in good condition, with clear markings, at a consistent time of day. In the future, we intend to add more challenging circumstances, characterize the test vehicle's path around corners, examine automatic offsetting away from an intimidating adjacent vehicle, and evaluate automatic lane-changing.


What’s behind Moblr’s Ride Quality testing

Ride quality can be evaluated either subjectively (by drivers who rate their experiences) or objectively (using measurements from instruments). Both approaches have their advantages. But while car manufacturers have the necessary means to conduct meaningful subjective evaluations, it's not unusual for weeks (or months) to pass between subjective comparisons by third-party automotive reviewers. During that time, their memories become unreliable – and, if the cars weren't experienced on very similar road surfaces, the comparison is completely meaningless.

Moblr's ride-quality testing is measurement-based and performed on consistent road surfaces, so results gathered even months apart can be compared directly and accurately. Next, these measurements are interpreted into human experience and finally – in a process that is ongoing and ever-improving – into human opinion.

Our test environments aim to capture key experiences people have in their daily lives: a neighborhood speed hump at 15 mph; a culvert at 25 mph; a typical city street at 40 mph; and a highway at 70 mph.

The instrumentation measures vertical acceleration at the driver's seat-bottom cushion, fore-aft and lateral head motions (recognizing the significance of elevated-seat SUVs and crossovers), and the vertical motions of an infant child seat fitted in the second row. All channels are logged at a minimum of 100 Hz (samples per second).

The analysis begins by converting the data into their frequency spectra – something akin to a ride's DNA – to reveal their component parts: slow oscillations, fast shaking, and even faster vibration. Focusing on the range between 1 and 25 Hz (the realm of motion recognized as ride quality), weighting curves representing human sensitivity to the spectrum's contents are then applied, and the results are combined. A final layer, which reflects human psychology, recognizes how an instance of a larger, stand-out 'g' tends to ring longer in our memory than more typical, frequent, and predictable ones.
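For the curious, here's a minimal sketch of that pipeline. The weighting curve is a simplified stand-in (full weight where people are most sensitive to vertical motion, tapering on either side), loosely in the spirit of published comfort weightings rather than our production curve:

```python
import numpy as np

def weighted_ride_level(accel_g, fs=100):
    """Reduce a vertical-acceleration trace (g, sampled at fs Hz) to one number."""
    spectrum = np.fft.rfft(accel_g - np.mean(accel_g))
    freqs = np.fft.rfftfreq(len(accel_g), d=1 / fs)

    # Keep the 1-25 Hz band -- the range felt as 'ride quality'.
    band = (freqs >= 1.0) & (freqs <= 25.0)

    # Simplified human-sensitivity weighting: full weight from 4-8 Hz,
    # tapering below and above that band.
    w = np.ones_like(freqs)
    low, high = freqs < 4.0, freqs > 8.0
    w[low] = freqs[low] / 4.0
    w[high] = 8.0 / freqs[high]

    # Combine: RMS of the weighted, in-band spectral content.
    mag = np.abs(spectrum[band]) * w[band]
    return float(np.sqrt(np.mean(mag ** 2)))
```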

All of this analysis forms the numerical basis for ascribing an emotion rating to the test. Initially, these ratings will be based on where a result falls on the bell curve of all results. However, they'll be continuously tuned and nuanced by human, subjective scoring in controlled conditions. By carefully merging objective data and subjective scoring – the best of both worlds – vehicle evaluations conducted at considerably different times become meaningful, easy-to-understand emotion ratings.
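A minimal sketch of that initial bell-curve placement – the fleet statistics and the score-to-emotion bands are purely illustrative:

```python
import math

def percentile_rank(score, fleet_mean, fleet_std):
    """Where a car's weighted ride score falls among all results, assuming the
    fleet's scores are roughly normal. Lower scores are better here."""
    z = (score - fleet_mean) / fleet_std
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF

def emotion_label(pct):
    # Illustrative bands only -- the real mapping is tuned by human scoring.
    if pct < 0.25:
        return "serene"
    if pct < 0.75:
        return "composed"
    return "busy"

pct = percentile_rank(score=0.042, fleet_mean=0.050, fleet_std=0.010)
print(f"{pct:.0%} -> {emotion_label(pct)}")  # ~21% -> serene
```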