21 Comments

This article seems to really miss the mark on predicting the future, which is why autonomous systems are almost certain to fail if this very limited perspective is what the average "engineer" takes, head stuck in the clouds, when the reality of expectations is so much broader. It's hard enough to assure over 30% of the general public that vaccines are safe when the data suggests the proportion of the population rejecting vaccines should be lower, based on the expected rate of adverse outcomes. Just because the data suggests some rational outcome doesn't mean you will get that outcome, unless you take into account a multitude of other factors. Just imagine how the public will receive robot cars when they hear about a slew of fatal accidents, even if we're talking small probabilities. And what if work as we know it is transformed by automation and AI? We're already seeing permanent WFH and subscriber-based concepts proliferating. It seems highly plausible that the automotive industry will consolidate due to a substantial decline in vehicle ownership. Maybe we get L5, but the competitive landscape will be far different from what is depicted here, and Tesla survives for very different reasons.

This article assumes that L5 autonomy is even possible, or will ever be accepted by consumers as a necessary "feature". Imagine an L5 system that has been tested against 99.99% of scenarios, and assume that the remaining 0.01% of scenarios can never be mitigated without human intervention (freak rain storm, snow storm, flying object/vehicle that cannot be avoided, sinkhole in the road, failing to correctly interpret a cliff in poor weather, something the system has never experienced, etc.). Assuming 200M vehicles at 200 driving days each, that's 40 billion vehicle-days, and 0.01% of those is 4M failures. If we assume only 1% of those failures result in fatal accidents, that's 40k fatalities related to L5 failures. Maybe my math is wrong or my expectations are flawed, but even based on more conservative versions of these numbers (like 10k fatalities), it seems highly unlikely that we can engineer enough public confidence in these systems "initially", given a certain minimum level of failure that cannot be mitigated. Logically, given the rare nature of catastrophic scenarios, more exposure to these scenarios wouldn't help the system learn how to handle them either. It would just end up with PTSD and drive exceedingly cautiously around anything that remotely resembles a scenario in which it previously failed. Don't forget the arrogance of Boeing in engineering MCAS. You cannot entirely predicate success on the tech alone.
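
The back-of-envelope estimate above can be checked in a few lines. Every figure here is an assumption taken from the comment (fleet size, driving days, unmitigated-scenario rate, fatal fraction), not data:

```python
# Back-of-envelope estimate of L5 failure exposure.
# All inputs are the comment's hypothetical assumptions, not real data.
vehicles = 200_000_000          # assumed L5 fleet size
driving_days = 200              # assumed driving days per vehicle per year
unmitigated_rate = 0.0001       # 0.01% of scenarios assumed unhandleable
fatal_fraction = 0.01           # assumed 1% of failures are fatal

vehicle_days = vehicles * driving_days           # 40 billion vehicle-days
failures = vehicle_days * unmitigated_rate       # 4 million failures
fatalities = failures * fatal_fraction           # 40,000 fatalities

print(f"{failures:,.0f} failures, {fatalities:,.0f} fatalities")
```

The 40k number follows directly from the stated assumptions; the whole argument therefore hinges on how realistic the 0.01% unmitigated rate and the 1% fatal fraction are.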

So to me the question is more: how do you re-engineer the vehicle itself to withstand potentially catastrophic circumstances, so that low-probability events become survivable, or come up with better algorithms for how autonomous systems behave when dealing with unexpected circumstances (other than stopping in the middle of the road)? Tesla should start with commercial autonomous systems, like those for bus drivers and semi truckers, which have lower risk/publicity expectations and simpler cockpit environments, and make it work under some radical failure scenarios. Once you can get L5 broadly into that market, it will be far more widely accepted by retail consumers. As it is, you already have people being triggered by Teslas. If the engineers in charge don't proceed carefully, they could just end up with another Google Glass or Boeing 737 MAX.

Are you an idiot? Waymo and Cruise can work anywhere in the world once they map the area. The main problem is weather, and the infrastructure to handle it. For example, suppose you drive from point A to point B (50 miles or so): if there is snow or rain on the road, the system needs to handle it, and no car in the world can; even humans are not able to drive in all weather conditions. User safety is the main concern. You can make any product, but if a product kills a human, nobody will accept it. Simple.

Having had some experience in this space, Tesla's CV is quite impressive. That's not taking anything away from LiDAR. The future is promising, and Tesla's data treasure cannot be dismissed.

Hi Joe blow, I would simply question whether you are underestimating the march of nines: why stop the reliability at just 3 nines instead of 6 or 7?
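
To make the "march of nines" concrete, here is how the earlier back-of-envelope fleet estimate shrinks with each additional nine of scenario coverage (fleet numbers are the same hypothetical assumptions as before):

```python
# How annual failures shrink as scenario coverage gains "nines".
# Fleet figures are illustrative assumptions, not data.
vehicle_days = 200_000_000 * 200    # 40 billion vehicle-days per year

for nines in (3, 4, 5, 6, 7):
    unhandled = 10 ** -nines        # fraction of scenarios not covered
    failures = vehicle_days * unhandled
    print(f"{nines} nines of coverage -> {failures:,.0f} failures/year")
```

Each extra nine cuts the failure count by a factor of ten, which is why the difference between 3 and 7 nines is the difference between a catastrophe and a rounding error.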

The way the author quickly dismisses Mobileye's switch from 100% CV to including LiDAR really calls into question their willingness to ask why. How about we find out from Mobileye why they felt they had to add LiDAR?

This is a joke article. All SDC companies use NNs on lidar outputs to detect, classify, track, and predict static and dynamic objects. The perception and prediction stacks of all SDC companies are almost 100% NN-driven. You are 100% clueless.

Neural networks could also be used with LIDAR.

I think you’re misclassifying and underestimating Mobileye. They’ve been collecting data for years on all cars with their hardware and I would bet money their computer vision model (no lidar) is currently more performant than Tesla’s and will ultimately be scalable to the nines.

A 'dark horse' here is Nvidia: not only are their GPUs powering the vast majority of the deep learning boom, but they've repeatedly shown off their own version of self-driving training: render a simulated environment, and train the AI model on that. Sure, it's not actual real-world imagery, but as anyone who's seen the last year or two of hybrid raytraced rendering knows, it's not far off, and it has the advantage that the environment can be tweaked to throw edge case after edge case at the AI in a way that the real world cannot.
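
The edge-case tweaking described above is essentially domain randomization: sample simulator parameters so rare conditions show up far more often than they would in logged real-world data. A minimal sketch of the idea, with every parameter name and distribution invented purely for illustration:

```python
import random

# Toy domain randomization for a hypothetical driving simulator:
# oversample rare conditions (heavy rain, road debris) relative to
# their real-world frequency. All names/distributions are made up.
def sample_scenario(rng: random.Random) -> dict:
    return {
        "rain_mm_per_hr": rng.choice([0, 0, 0, 5, 25, 80]),  # storms oversampled
        "sun_angle_deg": rng.uniform(-10, 90),               # includes low glare
        "pedestrian_count": rng.randint(0, 20),
        "debris_on_road": rng.random() < 0.2,                # ~20% vs near-0% in logs
    }

rng = random.Random(42)
scenarios = [sample_scenario(rng) for _ in range(1000)]
heavy_rain = sum(s["rain_mm_per_hr"] >= 25 for s in scenarios)
print(f"{heavy_rain} heavy-rain scenarios out of 1000")
```

In a real pipeline the sampled parameters would drive the renderer and physics engine; the point is that the training distribution is under the engineer's control instead of the weather's.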

The big question for me is that none of these systems works well during heavy rain. Lidar is severely affected, and cameras don't have eyelids to wipe water away.
