
Science Thread


Nigel

Recommended Posts

It isn't a hypothetical question at all, inasmuch as it is what the makers of these cars are looking at doing.

There were people on the telly box a few weeks ago (one an academic, the other involved in the development of self-driving cars) who were talking about the difficulty of taking this 'trolley problem' and encoding it into the workings of the cars.

It isn't just about what other cars do, it's also about other actors on the roads (cyclists, pedestrians, cats, dogs, horses, pheasants, &c.) and their input. This isn't about machines buggering up (whether due to problems within or attacks from without) but it's about the ethics supporting the decision making processes put in place in these 'self-driving' machines.

If a company develops a car which deliberately hurts people, they won't be in business for long.

If the vehicle is self-driving and it is faced with the simple trolley problem decision tree of hitting one person or hitting two then both actions are deliberate.
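
To put that another way, here is a purely hypothetical sketch in Python (not taken from any actual manufacturer's software; the names and numbers are made up) of what such a pre-made decision looks like once it is written down:

```python
def choose_unavoidable_collision(paths):
    """Pick the pre-programmed 'least harm' path from a set of bad options.

    `paths` is a list of (path_id, people_hit) tuples, one per available
    manoeuvre.
    """
    # This comparison *is* the deliberate decision: hitting one person rather
    # than two was decided when this line was written, long before any real
    # incident occurs.
    return min(paths, key=lambda p: p[1])


# Example: swerve (hits one person) vs carry straight on (hits two).
print(choose_unavoidable_collision([("swerve_left", 1), ("straight_on", 2)]))
# -> ('swerve_left', 1)
```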

Edited by snowychap

I think we are talking about different things here. Autonomous vehicles won't be a driver's aid. There won't be a steering wheel. They will go wrong, like drivers do today. Any liability will be met by insurers, who will seek recompense from whoever they can prove to be the guilty party, if any. Horror stories about accidents are unfortunately written every day. As insurance forces more vehicles to be autonomous, the number of accidents will almost certainly reduce, but will therefore be perceived as "worse". The only science there is psychology.

When I first started thinking about the legality of automated vehicles I thought we'd need new legislation. I no longer believe that we do.

People can talk about "moral decisions", but drivers involved in collisions rarely make moral decisions.

Perhaps we'll have to use Asimov's rules :)


People can talk about "moral decisions", but drivers involved in collisions rarely make moral decisions.

That's because they're making decisions on the hoof as events unfold.

Encoding those decisions (to be made by the autonomous vehicles) before the event even occurs can't be viewed in the same terms.

If the decision is already made via the algorithm encoded into the autonomous vehicle, meaning it has already been decided that the one person rather than the two will be hit (or the one instead of the fifty, or the six-year-old-looking person instead of the eighty-year-old dodderer), then it isn't just a case of insurance liability (or it may be, but it would require the law to make it so). Otherwise those making the algorithms could surely be held criminally liable and, perhaps, so could those who purchased one car rather than another (should the decision-making algorithms not be universal and the purchaser of the vehicle be deemed to have made a choice that made the outcome worse, whether or not they had detailed knowledge about it).

Edit: As I said in my post above, this isn't dealing with autonomous vehicles when they bugger up. This is about encoding decision making into the normal practice of these vehicles.

Edited by snowychap

That's because they're making decisions on the hoof as events unfold.

Encoding those decisions (to be made by the autonomous vehicles) before the event even occurs can't be viewed in the same terms.

I disagree with your assertion. I don't understand what other terms it can be viewed in. Do you feel that responses generated by one set of humans are different from those generated by another set of humans simply because they've been implemented in silicon?

There won't be a single algorithm; it'll be continuously learning from all of the other vehicles. Data will be reviewed and lessons will be learned far quicker than we can train humans. If the desired goal in the event of inevitable human injury is "least damage" then I think I'll be happy with that, regardless of the detail used to derive the values for "damage". Because the computer is continually monitoring everything, it's going to be aware far earlier of a problem and have substantially more time to react and to keep reacting.
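
To make the "least damage" idea concrete, here's a hypothetical sketch. The manoeuvres, injury estimates and weights are all invented for illustration and aren't taken from any real system; how those values get derived is exactly what's up for debate:

```python
# Invented predicted outcomes for each candidate manoeuvre (illustration only).
CANDIDATE_MANOEUVRES = {
    "brake_straight": {"occupant_injury": 0.2, "pedestrian_injury": 0.6},
    "swerve_left":    {"occupant_injury": 0.5, "pedestrian_injury": 0.1},
    "swerve_right":   {"occupant_injury": 0.3, "pedestrian_injury": 0.4},
}

# How much each kind of harm 'counts' - the contentious detail.
DAMAGE_WEIGHTS = {"occupant_injury": 1.0, "pedestrian_injury": 1.0}


def expected_damage(outcome):
    """Score one manoeuvre's predicted outcome; lower is better."""
    return sum(DAMAGE_WEIGHTS[kind] * severity for kind, severity in outcome.items())


def least_damage_choice(candidates):
    """Pick the manoeuvre whose predicted outcome scores lowest."""
    return min(candidates, key=lambda name: expected_damage(candidates[name]))


print(least_damage_choice(CANDIDATE_MANOEUVRES))  # -> 'swerve_left'
```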


I disagree with your assertion. I don't understand what other terms it can be viewed in. Do you feel that responses generated by one set of humans are different from those generated by another set of humans simply because they've been implemented in silicon?

No. It's to do with timing, i.e. that the action would be deliberate and decided beforehand. You would appear to be wilfully misreading what I wrote if your last sentence is what you inferred from my comments.

There won't be a single algorithm; it'll be continuously learning from all of the other vehicles. Data will be reviewed and lessons will be learned far quicker than we can train humans.

 Will the initial algorithm be universal? Will the data be reviewed (and those lessons learned) universally?

Your two sentences appear to represent the most bizarre level of naive optimism.

If the desired goal in the event of inevitable human injury is "least damage" then I think I'll be happy with that, regardless of the detail used to derive the values for "damage". Because the computer is continually monitoring everything, it's going to be aware far earlier of a problem and have substantially more time to react and to keep reacting.

There are a few things there.

What counts as 'least damage'? The breaking of bones? Some calculation of potential future injury? In the age-old 'trolley problem' style, are you happy with the 'least damage' scenario where it swerves to avoid two eighty-year-olds to hit a four-year-old child? Or hits the school bus carrying four to avoid the Darby and Joan daytripper bus carrying 16?

 


The initial algorithms will be based on existing data relating to crashes, of which there is plenty.

 

I tend to think that, in general, what Limpid said earlier is dead on. These cars won't be driving down narrow streets around blind corners at inappropriate speeds. Modern vehicles can stop within about a car length at the sorts of speeds they'd be travelling in cities and suburbs.
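
As a rough back-of-the-envelope check of that claim, using the standard braking-distance formula d = v^2 / (2a) and an assumed deceleration of about 8 m/s^2 on a dry road (just an illustration, not data from any particular car):

```python
def braking_distance(speed_kmh, deceleration=8.0):
    """Metres needed to stop from speed_kmh at a constant deceleration (m/s^2)."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v ** 2 / (2 * deceleration)


for speed in (30, 50):
    print(f"{speed} km/h -> {braking_distance(speed):.1f} m")
# 30 km/h -> 4.3 m (roughly a car length); 50 km/h -> 12.1 m
```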

 

Vehicles won't be able to make a moral decision in a split second. The best decision, almost always, is to drive at a safe speed, and if something comes out in front of you, just apply the brakes as hard as you can. Some vehicles will already do this if they detect an object in front of you that you're going to hit. These vehicles will have a lot more sensory equipment and will be able to sense objects in all directions.
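
Something like the following toy sketch is all I mean: a time-to-collision trigger rather than any moral calculus. The threshold is an assumption for illustration; real emergency-braking systems are far more involved:

```python
def should_emergency_brake(gap_m, own_speed_ms, obstacle_speed_ms, ttc_threshold_s=1.5):
    """Brake hard if the time to reach the obstacle drops below the threshold."""
    closing_speed = own_speed_ms - obstacle_speed_ms
    if closing_speed <= 0:
        return False  # not closing on the obstacle at all
    time_to_collision = gap_m / closing_speed
    return time_to_collision < ttc_threshold_s


# A stationary object 10 m ahead while travelling at 30 km/h (~8.3 m/s):
print(should_emergency_brake(gap_m=10, own_speed_ms=8.3, obstacle_speed_ms=0.0))  # True
```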

 

In the real world, with these safe driving practices already in place, how often is this decision of "do I kill the two eighty-year-olds or the six-year-old?" actually going to come up? If you're driving according to the road rules and keeping safe distances etc., it's actually quite difficult to get yourself into an accident, let alone one where you're travelling at such a speed, with elderly people and children walking on the road, that you've come across them so suddenly you couldn't avoid one of them.


I disagree with your assertion. I don't understand what other terms it can be viewed in. Do you feel that responses generated by one set of humans are different from those generated by another set of humans simply because they've been implemented in silicon?

No. It's to do with timing, i.e. that the action would be deliberate and decided beforehand. You would appear to be wilfully misreading what I wrote if your last sentence is what you inferred from my comments.

I inferred nothing from your comments. That's an accurate portrayal of my position. Nothing more. 

There won't be a single algorithm; it'll be continuously learning from all of the other vehicles. Data will be reviewed and lessons will be learned far quicker than we can train humans.

 Will the initial algorithm be universal? Will the data be reviewed (and those lessons learned) universally?

Your two sentences appear to represent the most bizarre level of naive optimism.

No and No. 

Optimism in what precisely?

We're quite good at these kinds of systems. Look at the aircraft industry. We investigate accidents and change procedures. Not all aircraft manufacturers use the advice in a universal way. Not every airline implements the same procedures in a universal way. People still trust aircraft. Sufficient trust is all that's required.

If the desired goal in the event of inevitable human injury is "least damage" then I think I'll be happy with that, regardless of the detail used to derive the values for "damage". Because the computer is continually monitoring everything, it's going to be aware far earlier of a problem and have substantially more time to react and to keep reacting.

There are a few things there.

What counts as 'least damage'? The breaking of bones? Some calculation of potential future injury? In the age-old 'trolley problem' style, are you happy with the 'least damage' scenario where it swerves to avoid two eighty-year-olds to hit a four-year-old child? Or hits the school bus carrying four to avoid the Darby and Joan daytripper bus carrying 16?

A court (more likely a judiciary-led public enquiry) answers that based on the evidence. If the algorithm is programmed in such a way that it causes anything other than what THAT COURT decides is least damage, then the court also has to decide whether the mistake was wilful and therefore criminal behaviour, and refer as necessary.


What about cause?

If a group of people are situated so that they'd get hit and killed by a car that is obeying the laws of the road (which we'd assume an automated car would) then is it right to save them and kill the innocent passenger in the car, who's done nothing wrong?

 

Edit: this isn't in response to any particular post. Just random musings.

Edited by Stevo985

I inferred nothing from your comments. That's an accurate portrayal of my position. Nothing more. 

You asked me a question about what I felt (about responses generated ...) which didn't address the point that I was making (i.e. that it is a timing, and thus a premeditation, issue) and which wasn't simply an accurate portrayal of your position.

No and No. 

Optimism in what precisely?

We're quite good at these kinds of systems. Look at the aircraft industry. We investigate accidents and change procedures. Not all aircraft manufacturers use the advice in a universal way. Not every airline implements the same procedures in a universal way. People still trust aircraft. Sufficient trust is all that's required.

The lack of universality surely casts doubt upon the suggested efficacy of the 'continuously learning from all of the other vehicles'.

I'm not sure that 'quite good' (however English one is being about the description) is good enough. Anyway, surely when one is talking about aircraft safety procedures we aren't comparing like with like, as autonomous vehicles (as you've already said) aren't intended to be aids to drivers (as with aircraft safety things) but driver substitutes. I may have misunderstood (or demonstrated a severe lack of knowledge about) aircraft procedures here, though.

A court (more likely a judiciary-led public enquiry) answers that based on the evidence. If the algorithm is programmed in such a way that it causes anything other than what THAT COURT decides is least damage, then the court also has to decide whether the mistake was wilful and therefore criminal behaviour, and refer as necessary.

True, but the point that I was making is that the decision-making process is, at least, rather different to the decision-making process that a person would reasonably go through in a certain situation. The very fact that the protocols governing the decision making are written into the coding, which decides what happens before the event takes place, means that the way in which courts do/may decide the wilfulness is likely to be very different to how they look at people's reactions now. That the protocols are things to be put into place in advance of events might also suggest that, rather than retrospectively dealing with deviance, the law in this area might be more about regulation and compliance.

It's also up to that court to decide whether utilitarianism would become enshrined in law. ;)

 

Again, apologies for not continuing the discussion before and also sorry if I came across as (even more than usually) argumentative.

Edited by snowychap

Good discussion, this one. My 2 cents is based on the idea that a car driven sensibly and legally by a machine cannot get into an accident. I am happy to concede that accidents may be less likely and that cars crashing into each other could become a thing of the past. However, I had a day just this week when pedestrians around Oxford seemed to collectively decide to suicide themselves under my car. Fortunately I managed to avoid them, but only by a matter of inches in one case, and not a single one of them seemed to understand that the roads may have cars on them. It is not hard to conceive of a time when pedestrians, or a group of them, put a self-driving car in a position where it does have to decide who to kill.

I can even imagine people deliberately throwing themselves in the way as a dare or indeed a protest. People can be idiots and the software needs to be prepared to deal with them.


@snowychap I'll offer another example.

Hospitals make life or death decisions all the time. They get it right, they get it wrong. There is no "universality" in the decisions they make, only best practice. Why do you think we should hold technology to a higher level of accountability? Someone in the medical profession wouldn't be struck off for making a single wrong decision. A pattern needs to be formed at which point someone might be deemed unfit for purpose. Autonomous cars are the same, except that it can be proven that a particular bad decision will not happen a second time.

How do hospitals mitigate against unforeseen circumstances? Insurance. And if the insurance company isn't satisfied that a practitioner is capable, they cease being a practitioner.

I hope that analogy is clearer :)

Edited by limpid

@snowychap I'll offer another example.

Hospitals make life or death decisions all the time. They get it right, they get it wrong. There is no "universality" in the decisions they make, only best practice. Why do you think we should hold technology to a higher level of accountability? Someone in the medical profession wouldn't be struck off for making a single wrong decision. A pattern needs to be formed at which point someone might be deemed unfit for purpose. Autonomous cars are the same, except that it can be proven that a particular bad decision will not happen a second time.

How do hospitals mitigate against unforeseen circumstances? Insurance. And if the insurance company isn't satisfied that a practitioner is capable, they cease being a practitioner.

I hope that analogy is clearer :)

I'm not sure the analogy holds, tbh. I didn't have any problem with the clarity of your previous comparisons, or that they clearly put forward your opinion; the problem I had was that I disagreed with the comparison. I'm not suggesting holding technology to a higher level of accountability but that this kind of technology and decision making ought to be held to a different kind of accountability.

The mention of universality was not because I thought it was a prerequisite but rather in response to your point

There won't be a single algorithm; it'll be continuously learning from all of the other vehicles. Data will be reviewed and lessons will be learned far quicker than we can train humans.

 And it actually applies to this one, too

Autonomous cars are the same, except that it can be proven that a particular bad decision will not happen a second time.

Ignoring whether or not that could be proven (unless the use of 'particular' is meant to indicate that all events are particular and thus a second one cannot be the same as the first), in order for all autonomous vehicles to benefit from it not occurring again, the algorithm(s)/protocols/processes would need to be universal.

Interesting, though, to speak of hospitals and medical decisions. If we look at the decision making of doctors about their patients, their actions are supposed to be made in the best interests of their patient. On that basis, perhaps each autonomous vehicle should act in the best interests of its occupant(s)/owner(s) (referencing Stevo's point earlier)? Again, that would encounter the trolley problem issue if a choice made in order to safeguard those within the vehicle meant that people who would otherwise not have been participants in the original scenario may be hurt/killed/whatever.

In all of this, too, who is responsible for the actions of the autonomous vehicles? The initial programmers? The companies which sold the vehicles? Or the owners of/passengers in the vehicles?

 


In all of this, too, who is responsible for the actions of the autonomous vehicles? The initial programmers? The companies which sold the vehicles? Or the owners of/passengers in the vehicles?

I think that the manufacturers, the retailers and the owners would all be responsible, depending on what has gone wrong. Some, all or none of them might be culpable, again depending on what has gone wrong. Same as for any other product.

I'm still struggling to see why you think this is different. I don't believe we need new legislation as I don't believe this is something different.

It's a fact that we are going to have autonomous vehicles and it would be nice to see some government / legislature confirmation of whether the current frameworks are adequate so that our engineering base can lead the way rather than follow.


Would you buy a car, Snowy, that would potentially "sacrifice" you in a given set of circumstances? I don't think many people would, so that should probably answer your questions, because manufacturers aren't going to build something that people don't want.

 

 

Edited by villaglint

  • 2 weeks later...

Has anybody seen this story about Flamini setting up a company to mass produce Levulinic Acid (a possible replacement for oil)?

http://www.squawka.com/news/mathieu-flaminis-pioneering-company-could-revolutionise-energy-industry/518962

“After a while we found out about Levulinic Acid: it’s a molecule identified by the US Department of Energy as one of the 12 molecules with the potential to replace petrol in all its forms.”
 

 

Edited by villaglint

1 hour ago, villaglint said:

Has anybody seen this story about Flamini setting up a company to mass produce Levulinic Acid (a possible replacement for oil)?

http://www.squawka.com/news/mathieu-flaminis-pioneering-company-could-revolutionise-energy-industry/518962

This will be useful for creating plastics when the oil is gone. Burning it in cars is just as polluting as petrol, and probably worse considering the costs of land/water to cultivate the crop it comes from.

