This article misses the mark on predicting the future, and if this limited perspective is typical of the average "engineer" with their head stuck in the clouds, autonomous systems are almost certain to fail, because the reality of public expectations is so much broader. It's hard enough to assure the 30%+ of the general public who reject vaccines that they are safe, even when the data suggests the rejection rate should be lower given the expected rate of adverse outcomes. Just because the data suggests some rational outcome doesn't mean you will get that outcome unless you take into account a multitude of other factors. Just imagine how the public will receive robot cars when they hear about a slew of fatal accidents, even if we're talking about small probabilities. And what if work as we know it is transformed by automation and AI? We're already seeing permanent WFH and subscriber-based concepts proliferate. It seems highly plausible that the automotive industry will consolidate due to a substantial decline in vehicle ownership. Maybe we get L5, but the competitive landscape will be far different from what is depicted here, and Tesla survives for very different reasons.
This article assumes that L5 autonomy is even possible, or will ever be accepted by consumers as a necessary "feature". Imagine an L5 system that has been tested against 99.99% of scenarios, and assume the remaining 0.01% can never be mitigated without human intervention (freak rain storm, snow storm, a flying object/vehicle that cannot be avoided, a sinkhole in the road, failing to correctly interpret a cliff in poor weather, something the system has never experienced, etc.). Assuming 200M vehicles at 200 driving days each, you end up with 4M failures. If only 1% of those failures result in fatal accidents, that's 40k fatalities related to L5 failures. Maybe my math is wrong or my expectations are flawed, but even with more conservative versions of these numbers (say 10k fatalities), it seems highly unlikely that we can engineer enough public confidence in these systems "initially", given a certain minimum level of failure that cannot be mitigated. Logically, given the rare nature of catastrophic scenarios, more exposure to them wouldn't help the system learn how to handle them either. It would just end up with PTSD and drive exceedingly cautiously around anything that remotely resembles a scenario in which it previously failed. Don't forget the arrogance of Boeing in engineering MCAS. You cannot predicate success entirely on the tech alone.
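The back-of-envelope math in the comment above can be reproduced in a few lines (a sketch using the commenter's own assumed figures, not measured data):

```python
# All figures are the commenter's illustrative assumptions, not real-world data.
vehicles = 200_000_000        # hypothetical L5 fleet size
driving_days = 200            # driving days per vehicle per year
unmitigable_rate = 0.0001     # 0.01% of vehicle-days hit an unhandleable scenario
fatal_fraction = 0.01         # 1% of those failures assumed fatal

vehicle_days = vehicles * driving_days        # 40 billion vehicle-days per year
failures = vehicle_days * unmitigable_rate    # ~4 million failures per year
fatalities = failures * fatal_fraction        # ~40,000 fatalities per year

print(f"{failures:,.0f} failures -> {fatalities:,.0f} fatalities per year")
```

Note that the result is a straight product of the assumed rates: halving the unmitigable rate halves the estimate, which is why the "more conservative" 10k case is just a rescaling of the same inputs.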
So to me the question is more: how do you re-engineer the vehicle itself so that low-probability catastrophic circumstances are survivable, or come up with better algorithms for how autonomous systems behave when dealing with unexpected circumstances (other than stopping in the middle of the road)? Tesla should start with commercial autonomous systems, such as buses and semi trucks, which have lower risk/publicity expectations and simpler cockpit environments, and make those work under some radical failure scenarios. Once L5 is broadly established in that market, it will be far more widely accepted by retail consumers. As it is, you already have people being triggered by Teslas. If the engineers in charge don't proceed carefully, they could end up with another Google Glass or Boeing 737 MAX.
Yes, there will probably still be fatal accidents even with L5 vehicles.
And just as the media now focuses on EV fires while ignoring the far more frequent ICE fires, the media (and hence the public) will focus on L5 accidents while ignoring the millions more accidents caused by humans in non-autonomous vehicles... "'Cause that's just the way it is, right?"
The question is: would you rather ride in an autonomous car with an accident rate of one in a million, or drive a car yourself, with an accident rate of one in 50,000?
(I made these numbers up completely, just to illustrate my point.) It is the difference in accident rates between L5 and non-autonomous cars that matters, not the absolute number of L5-related accidents.
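The point about comparing rates rather than absolute counts can be made concrete (using the admittedly made-up numbers from the comment above):

```python
# Illustrative rates from the comment above; both are explicitly made up.
l5_accident_rate = 1 / 1_000_000     # autonomous: one accident per million trips
human_accident_rate = 1 / 50_000     # human driver: one accident per 50,000 trips

relative_risk = human_accident_rate / l5_accident_rate
print(f"With these rates, human driving is about {relative_risk:.0f}x riskier per trip")
```

Under those assumptions the L5 fleet would have 20 times fewer accidents per trip, even though the absolute count of L5 accidents across a large fleet could still be large.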
Hakan, I agree 100%. Example: my late-night flight arrived in Phoenix last night, and my wife and I had to drive across town for an hour at 1am. We BOTH commented how nice it would have been to be in a robotaxi: much cheaper AND safer, when they are finally available. Arizona roads are so well marked and maintained that I'm not too worried about freak stormy weather in a location like this.
Yes, Joe Blow's "math is wrong or my expectations are flawed", as he stated. The safety numbers won't lie, and comparing autopilot uptake to vaccines is a poor comparison. Robotaxis are not just about safety BUT CONVENIENCE! Also, if the numbers end up anywhere close to as cheap as ARK Invest and others predict, there would be ZERO reason for me to pay $450 for a rental car for a week just to use it a few miles a day!
I'll trust Elon's (and his best-in-breed team's) proven intuition, market prediction, and acumen over random commenters'.
There are currently 1.4M road fatalities per year globally, and 30+ million injuries. Your estimates of L5 proficiency would surely make fantastic reading to the public: fatalities around 35 times lower with L5 than with human drivers.
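The "around 35 times lower" figure follows directly from dividing the cited global fatality count by the earlier commenter's hypothetical L5 estimate (a sketch; both inputs come from the comments, not verified statistics):

```python
global_road_fatalities = 1_400_000   # per year, as cited in the comment
hypothetical_l5_fatalities = 40_000  # from the earlier back-of-envelope estimate

improvement = global_road_fatalities / hypothetical_l5_fatalities
print(f"Roughly {improvement:.0f}x fewer fatalities under those assumptions")
```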
Are you an idiot? Waymo and Cruise can work anywhere in the world once they map the area. The main problem is weather, and the infrastructure to handle it. For example, suppose you drive from point A to point B (50 miles or so): if there is snow or rain on the road, the system needs to handle it, and no car in the world can; even humans are not able to drive in all weather conditions. User safety is the main concern. You can make any product, but if a product kills a human, nobody will accept it. Simple.
Having had some experience in this space, I find Tesla's CV quite impressive. That's not taking anything away from lidar. The future is promising, and Tesla's data treasure cannot be dismissed.
Hi Joe Blow, I would simply question whether you are underestimating the march of nines, stopping at just 3 nines of reliability instead of 6 or 7.
The way the author quickly dismisses Mobileye's switch from 100% CV to including lidar really calls into question their willingness to ask why. How about we find out from Mobileye why they felt they had to add lidar?
He did explain it: Mobileye lacks the huge amount of data that Tesla gets every day from its cars. CV requires a lot of data to be well trained; if you can't get it, you need another system, so in this case they went with lidar.
It's obvious from the article: Mobileye does not have anything near Tesla's data collection capabilities, so they had to pivot to lidar + HD mapping to move forward.
Only an OEM can go the distance with computer vision. However, to do so, they must commit to installing hardware on each and every new vehicle and also master OTAs, which VW is finding quite difficult.
The reason is that Mobileye doesn't have a large vehicle fleet on the road for data collection, as the author clearly states.
This is a joke article. All SDC companies use NNs on lidar outputs to detect, classify, track, and predict static and dynamic objects. The perception and prediction stacks of all SDC companies are almost 100% NN-driven. You are 100% clueless.
Neural networks could also be used with LIDAR.
Tesla has already tested that. Tesla's CV can already deliver pseudo-lidar output (I think that's what they call it), hence no need for lidar.
I think you’re misclassifying and underestimating Mobileye. They’ve been collecting data for years on all cars with their hardware and I would bet money their computer vision model (no lidar) is currently more performant than Tesla’s and will ultimately be scalable to the nines.
When Mobileye dumped Musk, the writing was on the wall. The people at Google and Mobileye both don't think the TSLA approach will ever work to the nines, and both have far more advanced AI teams than TSLA (though not the data). What is actually going to happen is:
1. Waymo is really going to expand their robotaxi service throughout the US.
2. TSLA's FSD will get better and better, but it will be years before you would feel safe letting one take your kid somewhere.
3. By the time TSLA's FSD gets close, lidar will be cheap, and everyone will license the Waymo system from Google the way they do Android.
4. Consumers will then get to choose between a lidar system with billions of proven robotaxi miles from every other manufacturer, or a CV-only system that has been buggy for years from TSLA.
A 'dark horse' here is Nvidia: not only are their GPUs powering the vast majority of the deep learning boom, but they've repeatedly shown off their own approach to self-driving training: render a simulated environment and train the AI model on that. Sure, it's not actual real-world imagery, but as anyone who's seen the last year or two of hybrid ray-traced rendering knows, it's not far off, and it has the advantage that the environment can be tweaked to throw edge case after edge case at the AI in a way that the real world cannot.
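The "tweak the environment to throw edge cases at the AI" idea is essentially domain randomization. A minimal sketch of the concept (all parameter names, values, and ranges here are hypothetical illustrations, not Nvidia's actual tooling):

```python
import random

def random_scenario(rng: random.Random) -> dict:
    """Sample one synthetic driving scenario; every field is a made-up example."""
    return {
        "weather": rng.choice(["clear", "rain", "snow", "fog"]),
        "time_of_day_h": rng.uniform(0.0, 24.0),
        "pedestrians": rng.randint(0, 50),
        "hazard": rng.choice([None, "debris", "stalled_car", "sinkhole"]),
    }

rng = random.Random(42)  # seeded for reproducibility
batch = [random_scenario(rng) for _ in range(1_000)]

# The simulator's advantage: rare hazards can be sampled far more often than
# they occur on real roads (here, roughly 1 in 4 scenarios contains a sinkhole).
sinkholes = sum(1 for s in batch if s["hazard"] == "sinkhole")
print(f"{sinkholes} sinkhole scenarios out of {len(batch)}")
```

The design choice is that the training distribution is under the engineer's control, so an event that might occur once per million real miles can be made a routine training example.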
> in a way that the real world cannot.
Oh, but the real world really can, because there are Teslas driving billions of miles, with cameras harvesting edge cases and sending them back to the mother ship.
Tesla's own chips are apparently a few years ahead of Nvidia's. Simulations are not a proxy for real-world action: simulating the world's roads and human behaviours is just not as good as teaching a NN with millions of real-world examples. Eventually the NN teaches itself while watching the real world. You get closer to 99.99% that way; a simulator cannot.
The big question for me is that none of these systems works well during heavy rain. Lidar is severely affected, and cameras don't have eyelids to wipe water away.
Humans don't do well during heavy rain either. I would bet that vision-based NN systems could do better than humans in the same situation, though.