ebonyandivory said:
Knurled said:
Oh, there are laws on the books, if your autopiloted car hits somebody, you are at fault because you were supposed to be paying attention to what was going on.
While the creator of the software that failed to do what it was designed to do shrugs it off as productive research?
Eee-yup.
Granted, there's precedent in aviation, where the planes can be on autopilot practically from gate to gate, but the pilots are there for when autopilot is insufficient.
The issue that I see is that in an airliner, you generally aren't passing other planes with a margin of 10 feet or less. And you generally don't have to worry about avoiding road debris or stopped traffic...
In reply to Knurled. :
But there aren't many survivors in a 30,000 foot fender-bender.
In reply to BlueInGreen - Jon :
To build on your question, what makes it different than GM Super Cruise? I thought all (or at least most) of the automated cruise controls (lane keeping with adaptive cruise) require the driver to be attentive (hands on the wheel, monitoring eye movements, etc). Why does Tesla not have those same driver interaction requirements.
Also, comparing this to the Boeing 737 Max grounding, I'm surprised that after multiple software/vehicle mishaps with Teslas, Autopilot hasn't been disabled by NHTSA. The FAA must have different expectations than NHTSA.
I also recall an autonomous driving expert criticizing Tesla for saying the hardware in their cars was capable of autonomous driving without Lidar. Tesla is the only company saying Lidar isn't required for autonomous driving. As I understand it, Lidar has a longer and/or wider range of view.
Nate90LX said: The FAA must have different expectations than NHTSA.
The FAA has a lot more balls than the NHTSA. "Nit-suh" is pretty berking worthless as a regulating body.
The FAA can and will ground an entire model and type until they feel comfortable letting them back in the air. NHTSA, at best, will write strongly worded letters, but they'll feel like they are overstepping their bounds when doing it.
Nate90LX said:
In reply to BlueInGreen - Jon :
To build on your question, what makes it different than GM Super Cruise? I thought all (or at least most) of the automated cruise controls (lane keeping with adaptive cruise) require the driver to be attentive (hands on the wheel, monitoring eye movements, etc). Why does Tesla not have those same driver interaction requirements.
I think Teslas warn you to keep your hands on the wheel x amount of times, then they'll deactivate all the autonomous stuff for a time period.
yupididit said:
Nate90LX said:
In reply to BlueInGreen - Jon :
To build on your question, what makes it different than GM Super Cruise? I thought all (or at least most) of the automated cruise controls (lane keeping with adaptive cruise) require the driver to be attentive (hands on the wheel, monitoring eye movements, etc). Why does Tesla not have those same driver interaction requirements.
I think Teslas warn you to keep your hands on the wheel x amount of times, then they'll deactivate all the autonomous stuff for a time period.
They warn you twice. It won't allow autopilot back on until you've parked and shut the car off.
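The warn-twice-then-lockout behavior described above is essentially a small state machine. A toy sketch (this is an illustration only, not Tesla's actual implementation; the class, method names, and messages are all invented):

```python
# Toy sketch of a warn-twice-then-lockout attention policy, as described
# above. Not Tesla's actual logic; counts and messages are assumptions.

class AttentionMonitor:
    MAX_WARNINGS = 2  # warn twice, then lock out until the car is parked

    def __init__(self):
        self.warnings = 0
        self.locked_out = False

    def hands_off_detected(self):
        """Called each time the driver ignores a hands-on-wheel prompt."""
        if self.locked_out:
            return "autopilot unavailable"
        self.warnings += 1
        if self.warnings > self.MAX_WARNINGS:
            self.locked_out = True
            return "autopilot disengaged until next drive cycle"
        return f"warning {self.warnings} of {self.MAX_WARNINGS}"

    def park_and_shut_off(self):
        """Lockout clears only after the car is parked and shut off."""
        self.warnings = 0
        self.locked_out = False

m = AttentionMonitor()
print(m.hands_off_detected())  # warning 1 of 2
print(m.hands_off_detected())  # warning 2 of 2
print(m.hands_off_detected())  # autopilot disengaged until next drive cycle
```

The point of the lockout resetting only on `park_and_shut_off` is that the penalty survives the rest of the drive, which is what makes it a deterrent at all.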
In reply to Knurled. :
Right? Jalopnik's been doing a great job of showing how bad it really is.
Nugi
Reader
12/9/19 12:11 a.m.
In reply to GIRTHQUAKE :
Never mind the many products sold to simulate hands so you can bypass this. Tesla even sued one, IIRC, so they just changed the way it was marketed. They are easy to find on AliExpress, Amazon, or many independent retailers found with a cursory Google search. Tesla knows without a doubt that people are misusing it at scale.
STM317
UltraDork
12/9/19 5:14 a.m.
Cadillac's SuperCruise is essentially the same thing. It would be really interesting to compare accident rates per mile driven between the two systems.
I do think that the marketing has something to do with it. Tesla calls their system Autopilot. Cadillac calls their system SuperCruise. If an average person hears "Autopilot", they probably think something different than if they hear "SuperCruise" even though both systems do essentially the same thing.
Consumer Reports compared both systems and found GM's system to be much more safety focused.
Another good comparison that shows the benefits/drawbacks of SuperCruise and Autopilot. The good thing about the Tesla system is that it has fewer limitations. The bad thing about the Tesla system is that it has fewer limitations.
Tesla's silly marketing is what's created a lot of this froth. They have convinced buyers that their cars truly drive themselves. A friend of mine wanted one because "it can drive me to work in traffic!" and I'm just shaking my head. Autopilot can't do much more than what Ford's CoPilot 360 will allow on an F-150.
That said, calling your system "Autopilot" is much sexier than "CoPilot" or "EyeSight" or whatever others use. And Mr. Musk has a tendency to oversell a bit anyway, so it's very on brand for him. The systems need to be developed more to be fully "self driving" and even then, I don't see them working that way 100% of the time.
The thing about "autopilot" is that, in aircraft, it actually works more analogously to what the Tesla version does than what people think it does. That is, the name is honest, people are just idiots. :) It works better in the air than it does on the road because the air is a much more controlled environment with a lot fewer random, unexpected events. (also because there's more money in individual airplanes than in cars)
Frankly, I think it's a mistake to call the systems we have today any stage of "autonomous". Truly autonomous cars are ones that could be made without steering wheels, where you could strap your 10 year old into the seat, tell the car to drive her 20 miles to home, and send it off without any more worry about it than if you were driving the car yourself. This requires "strong AI", a computer that can actually think the way people do. We don't have that, at least partly because we don't even know how human thinking actually works...
So while we can incrementally improve today's driver aid programs, that's a dead end process that will never yield true autonomy. It's a bit like the kinds of incremental improvements that were being made to carburetors in the late 80s -- they got better and better, but to get to EFI you needed to rip the whole thing out and go to a completely different approach.
I foresee this technology swerving to miss an empty refrigerator box and careening into oncoming traffic.
When an algorithm is able to decide which object is the least-bad thing to crash into in a no-win situation, then I might consider it viable.
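That "least-bad object" decision is, at bottom, a cost-minimization problem, and the hard part isn't the minimization. A deliberately naive sketch (the obstacles and harm scores are made up for illustration):

```python
# Naive sketch of choosing the least-bad collision target in a no-win
# situation. Harm scores are invented; a real system would have to
# estimate them from noisy perception data, which is exactly the part
# that fails when a fridge box gets mistaken for something worse.

def least_bad_target(obstacles):
    """Return the obstacle with the lowest estimated harm score."""
    return min(obstacles, key=lambda o: o["harm"])

obstacles = [
    {"name": "empty cardboard box", "harm": 1},
    {"name": "guardrail", "harm": 4},
    {"name": "oncoming traffic", "harm": 10},
]
print(least_bad_target(obstacles)["name"])  # empty cardboard box
```

The `min` call is trivial; everything interesting lives in producing the `harm` numbers, and garbage estimates in mean garbage swerves out.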
b13990
Reader
12/9/19 7:37 p.m.
Nate90LX said:
I also recall an autonomous driving expert criticizing Tesla for saying the hardware in their cars was capable of autonomous driving without Lidar. Tesla is the only company saying Lidar isn't required for autonomous driving. As I understand it, Lidar has a longer and/or wider range of view.
I read something like that, likely the same thing. The "AI" weenies think that their computer programs can acquire enough data (as part of a so-called machine learning process) to perform autonomously using simple camera feeds, unassisted by LIDAR or anything like that.
I don't buy it. A system like that faces an incredible dearth of data compared to a LIDAR system. It's being fed collections of colored pixels whose position in 3D space is unknown, vs. real data about other objects and their position. Will it get better at not killing you over time? Yeah. That's why those systems rack up so many miles in development.
Will such a system ever reach acceptable performance? Not in my book. There are all sorts of visual tricks that a camera is susceptible to, and getting to 99.9% vs. 99.0% only breeds driver complacency. The people arguing otherwise are worshippers of the buzzword and the shibboleth that is "AI" and are being dangerously irresponsible.
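The data gap being described can be made concrete: a camera pixel is a color at a 2D grid location, while a lidar return already carries a measured 3D position. A sketch with made-up values:

```python
# Illustration of what each sensor actually reports per sample.
# All values are made up; shapes and fields are illustrative only.

# One camera pixel: a color at a 2D grid location. Where that color
# came from in 3D space is unknown -- depth has to be inferred.
pixel = {"row": 540, "col": 960, "rgb": (128, 64, 32)}

# One lidar return: a measured 3D position plus return intensity.
# Distance to the object is part of the data itself.
lidar_point = {"x": 12.4, "y": -1.1, "z": 0.3, "intensity": 0.87}

def has_measured_depth(sample):
    """True if the sample directly encodes a 3D position."""
    return all(k in sample for k in ("x", "y", "z"))

print(has_measured_depth(pixel))        # False
print(has_measured_depth(lidar_point))  # True
```

A camera-only system has to recover the `x, y, z` from stacks of `rgb` values, which is the machine-learning problem being argued about above.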
It's worth noting that non-autonomous vehicles use a single pair of cameras that don't work at all outside the visible spectrum. Their optics are notoriously bad as well, often requiring external lenses to bring them back into spec.
In 1996, I had a computer prof explain to me in great detail how full screen streaming video was impossible.
tuna55
MegaDork
12/10/19 10:39 a.m.
It may be useful someday. It blows my mind that the NHTSA allows what is essentially very risky beta testing on public roads without anyone's express consent. If anyone else tried this, they would be bankrupted.
tuna55 said:
It may be useful someday. It blows my mind that the NHTSA allows what is essentially very risky beta testing on public roads without anyone's express consent. If anyone else tried this, they would be bankrupted.
So much this. Wanting to use the data acquired to "train the AI" is not a valid justification for these kinds of operations by the general public on public roads.
tuna55 said:
It may be useful someday. It blows my mind that the NHTSA allows what is essentially very risky beta testing on public roads without anyone's express consent. If anyone else tried this, they would be bankrupted.
GM has this with Cadillac Super Cruise.
tuna55 said:
It may be useful someday. It blows my mind that the NHTSA allows what is essentially very risky beta testing on public roads without anyone's express consent. If anyone else tried this, they would be bankrupted.
It's really just lane keeping and adaptive cruise. How do you draw a line? That's a serious question, because it's not an easy answer. My mother's Sportwagen will do a very similar thing - keep a set distance to the car in front and adjust speed, and steer to keep itself between the lines.
The NHTSA/NTSB does require proof of driver involvement. Tesla requires the driver to apply a little bit of torque on request to prove that the driver has their hands on the wheel, but the NTSB doesn't think this is sufficient because you don't have to look up to do it. NTSB doesn't set regs, though. Other manufacturers are using in-car cameras to watch the driver. Interestingly, the Model 3 does have an interior camera that is not currently used.
STM317
UltraDork
12/10/19 11:46 a.m.
In reply to dculberson :
SuperCruise only operates in areas specifically mapped/scanned in advance by GM, so general things about the road are known going in, and they specifically allow the tech to be used only on "limited access" roads (interstates and divided highways) where there are fewer threats from intersections, pedestrians, etc. It's essentially fancy cruise control (as its name states) for situations when you might already be using cruise control, rather than self-driving tech for any driving condition. GM's system does a much better job of clarifying when the system is in use or not, requires more constant driver attention, and isn't communicating back to GM to contribute to the "learning," so it's not being used for test purposes on the general public like Tesla's is. It's a more restrictive, but more fleshed out and responsible, deployment of the tech in my opinion.
Posting the comparison between Autopilot and SuperCruise again because I think it's an honest read that brings up real strengths/weaknesses of both systems.
tuna55 said:
It may be useful someday. It blows my mind that the NHTSA allows what is essentially very risky beta testing on public roads without anyone's express consent. If anyone else tried this, they would be bankrupted.
I don't have any problem with NHTSA allowing these systems on the road, but people still need to be aware that these are beta systems, not finished, "I'll drive you to work while you nap or eat a bowl of cereal" type systems. It's not communicated that way, at least on Tesla's website. This is unfinished tech, and with any under-tested tech the best way to hammer it out is real-world experience by people who know it's not the last line of defense.
STM317
UltraDork
12/10/19 12:03 p.m.
iansane said:
tuna55 said:
It may be useful someday. It blows my mind that the NHTSA allows what is essentially very risky beta testing on public roads without anyone's express consent. If anyone else tried this, they would be bankrupted.
I don't have any problem with NHTSA allowing these systems on the road, but people still need to be aware that these are beta systems, not finished, "I'll drive you to work while you nap or eat a bowl of cereal" type systems. It's not communicated that way, at least on Tesla's website. This is unfinished tech, and with any under-tested tech the best way to hammer it out is real-world experience by people who know it's not the last line of defense.
The issue with that is that the user may "opt in" to being a beta tester but the people on the road around them have no voice in the matter and they've got "skin in the game" too.
Good read, thanks. It is two years old, so it would be interesting to read a follow-up given the improvements made to the systems.
I do find it amusing that Tesla makes every other car on the road look like a Tesla in their display.
Keith Tanner said:
It's worth noting that non-autonomous vehicles use a single pair of cameras that don't work at all outside the visible spectrum. Their optics are notoriously bad as well, often requiring external lenses to bring them back into spec.
In 1996, I had a computer prof explain to me in great detail how full screen streaming video was impossible.
Those cameras also have a billion or so years worth of signal processing R&D, and they still take a few years after they come on-line for the spatial acuity to be properly calibrated.
STM317 said:
iansane said:
tuna55 said:
It may be useful someday. It blows my mind that the NHTSA allows what is essentially very risky beta testing on public roads without anyone's express consent. If anyone else tried this, they would be bankrupted.
I don't have any problem with NHTSA allowing these systems on the road, but people still need to be aware that these are beta systems, not finished, "I'll drive you to work while you nap or eat a bowl of cereal" type systems. It's not communicated that way, at least on Tesla's website. This is unfinished tech, and with any under-tested tech the best way to hammer it out is real-world experience by people who know it's not the last line of defense.
The issue with that is that the user may "opt in" to being a beta tester but the people on the road around them have no voice in the matter and they've got "skin in the game" too.
But is under-tested code any worse than a teenager with no experience and bad decision-making skills? We all have skin in the game every time we get behind the wheel. I'm not saying I'm ready and willing to get t-boned by a smooth-looking toaster that mistook an empty Burger King bag for a toddler, but we have to move forward somehow, right?
STM317
UltraDork
12/10/19 1:00 p.m.
In reply to iansane :
I guess I'd ask if it has to be under-tested code. There are plenty of places where code is thoroughly tested in simulations and controlled environments before it's released into the public realm. Other automakers are doing it and rolling their tech out in much more controlled/limited ways. "As bad as an inexperienced teenager" shouldn't be the standard we set for tech being sent into the world.
A modern passenger aircraft has 150 million lines of code. How many aircraft are released with "under tested code"? 737 Max jumps to mind, but nothing else really (there are a bunch of plane people here that might clarify). What kind of track record do those aircraft have in the field? Now consider that there are far fewer "threats" for an aircraft in the air than a vehicle moving through traffic. Airliners are typically traveling very far apart, and have 3 dimensions of movement to maneuver around potential issues. Vehicles travel in close proximity to one another, at different rates of speed, in two dimensions, while also dealing with stop/go situations, hazards in the path of travel and intersecting paths, etc.