
Science Thread


Nigel


It's a little more nuanced than the PR spin. They have not found flowing water!

Science guy talks to the media relations guy, who then creates an article for the general public, which all too often means things get lost in translation, and/or NASA has a tendency to big things up... it's cool, but hold your horses.

They have found salts that appear to be linked to the creation of dynamically changing channels in the soil on a seasonal basis, which are consistent with originating in a fluid flow, most likely water-ish. Now, the question is where the water comes from: (i) sub-surface ice melting seasonally, or (ii) water condensing in some fashion from the atmosphere seasonally. Fun times ahead!



And just at the time Hollywood has a film out about men on Mars

who'd have thunk it 


NASA also made a similar announcement about an asteroid hitting the Earth just in time for the release of Deep Impact....


And just at the time Hollywood has a film out about men on Mars

who'd have thunk it 


NASA also made a similar announcement about an asteroid hitting the Earth just in time for the release of Deep Impact....


Cui bono?


5 weeks later...

Should A Self-Driving Car Kill Its Passengers In A “Greater Good” Scenario?

Picture the scene: You’re in a self-driving car and, after turning a corner, find that you are on course for an unavoidable collision with a group of 10 people in the road with walls on either side. Should the car swerve to the side into the wall, likely seriously injuring or killing you, its sole occupant, and saving the group? Or should it make every attempt to stop, knowing full well it will hit the group of people while keeping you safe?

 IFL Science
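To make the dilemma concrete, here's a minimal sketch of the pure "greater good" rule the article describes - the function and its inputs are hypothetical, not any manufacturer's actual logic:

```python
# Hypothetical sketch of the article's dilemma - not any real vendor's code.
# Assumes the car has already classified its two options and counted heads.

def choose_maneuver(occupants: int, pedestrians: int) -> str:
    """Pick whichever option puts fewer people at risk ('greater good')."""
    # Option A: brake hard but stay on course - the pedestrians are at risk.
    # Option B: swerve into the wall - the occupant(s) are at risk.
    if pedestrians > occupants:
        return "swerve"  # sacrifice the occupant(s) to save the group
    return "brake"       # protect the occupant(s)

print(choose_maneuver(occupants=1, pedestrians=10))  # -> swerve
```

Even this toy version shows the problem: someone has to choose the comparison in advance, and the sole occupant is on the losing side of it.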

You could see gangs of robbers exploiting this in some places in the world.


I agree with Limpid. It's an interesting hypothetical question, but I doubt the car would get itself into a position where that would happen.

I read recently that in all of the testing they've done with self-driving vehicles (and they're testing them on real public roads, with real traffic, pedestrians, etc.) the only accidents they've had have been people hitting their car, not the other way round (and they've all been extremely minor).


No decision to make. It won't make a blind turn into a narrow road at an inappropriate speed.

Machines never go wrong, right?  Yes, you could make a good case that it would be less likely to... than a car with a "bag of water" in control of it, but not that it won't.


No decision to make. It won't make a blind turn into a narrow road at an inappropriate speed.

Machines never go wrong, right?  Yes, you could make a good case that it would be less likely to... than a car with a "bag of water" in control of it, but not that it won't.

The post wasn't asking what a computer should do when it goes wrong. It was asking what it should do if it's working correctly.


This will come, and I reckon it will be safer, and with traffic management, the only way some places will move at all.

I also reckon it will be exploited, particularly where carjacking is a problem.


No decision to make. It won't make a blind turn into a narrow road at an inappropriate speed.

Machines never go wrong, right?  Yes, you could make a good case that it would be less likely to... than a car with a "bag of water" in control of it, but not that it won't.

The post wasn't asking what a computer should do when it goes wrong. It was asking what it should do if it's working correctly.

OK. Understood. It looked like you were saying it wouldn't wrongly turn at an inappropriate speed.

But it will go wrong. Then there's a decision to make (albeit a pre-programmed one, based on a set of criteria and sensor inputs etc., or even one derived via learning from a neural network). And then...?
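As a rough illustration of the "pre-programmed, based on a set of criteria and sensor inputs" case - what it might look like once something has already failed - a minimal sketch with invented names, not anyone's real fail-safe logic:

```python
# Hypothetical fault-handling sketch: the "decision" is a fixed, ordered
# list of fallbacks written in advance, not something invented at the scene.
# All names here are made up for illustration.

def fault_response(steering_ok: bool, braking_ok: bool) -> str:
    """Walk down pre-programmed fallback criteria when a fault is detected."""
    if braking_ok:
        return "controlled stop in lane"          # preferred fail-safe
    if steering_ok:
        return "coast to the verge, hazards on"   # degraded option
    return "cut drive power, sound horn"          # last resort

print(fault_response(steering_ok=True, braking_ok=False))
```

The point being: somebody has to write those criteria down in advance, which is exactly where the liability question comes in.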


But it will go wrong. Then there's a decision to make (albeit a pre-programmed one, based on a set of criteria and sensor inputs etc., or even one derived via learning from a neural network). And then...?

What decision do we use now when drivers / cars go wrong? It's either an accident or deliberate. If it's an accident then there is no blame. Everything else is a matter of insurance. If a company develops a car which deliberately hurts people, they won't be in business for long.


We might be talking about different aspects, Simon.

When you wrote "The alternative question is do you trust a bag of water to make a better "moral" decision?", that really is the issue at the core of your question "What decision do we use now when drivers/cars go wrong? It's either an accident or deliberate. If it's an accident then there is no blame. Everything else is a matter of insurance. If a company develops a car which deliberately hurts people, they won't be in business for long." Until the decision-making process of an autonomous car is settled, and it is deemed to be as capable or more capable of making the "moral" decision, there's going to be an issue about liability - as in "I tried to make it stop/steer but it wouldn't let me - it's Google's fault because their car software is faulty." So it has to be established that "accidents" are not the fault of the tech.

So in the event that the hardware or software goes wrong (yes, I know software doesn't actually go wrong, it just has, er, "undocumented design features") and the car veers towards the innocent children clutching puppies, the mechanism by which a whole mess of puppy/kid injuries is avoided is no longer solely the driver's thing - it's now the driver's actions and those of the autonomy function - there are two "brains". The human part is assessed via driving tests, alcohol levels and all that - the machine brain needs to be assessed as fit for purpose, or proven to always be subordinate to the driver's authority.
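On the "proven to always be subordinate to the driver's authority" option, the simplest form that proof obligation could take is an arbitration rule like this - a sketch with invented names, not any certified design:

```python
# Hypothetical sketch of "machine brain always subordinate to the driver":
# any human input pre-empts the autonomy function at the actuators.

def arbitrate(driver_cmd, autonomy_cmd):
    """Return the command that actually reaches the actuators."""
    if driver_cmd is not None:   # driver touched the wheel or pedals
        return driver_cmd        # human authority always wins
    return autonomy_cmd          # otherwise let the machine drive

print(arbitrate(driver_cmd=None, autonomy_cmd="steer 2 deg left"))
print(arbitrate(driver_cmd="brake hard", autonomy_cmd="steer 2 deg left"))
```

The catch, of course, is that "I tried to make it stop but it wouldn't let me" is precisely a claim that this rule wasn't honoured.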

If you take an unmanned aircraft, it is still under the legal responsibility of a pilot. If the thing crashes, the pilot is responsible, BUT, if the aircraft design is faulty, then the liability is with the designers/builders. So the same thing will have to be accepted legally by regulators for cars - they will need to answer the question asked (what happens if/when it goes wrong?), or "prove that the probability that it will go wrong is less than X" (X is likely to be in the order of 1x10^-9).
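For a sense of scale, here's a quick back-of-the-envelope on what a 1x10^-9 target would mean if, as in aviation, it's read as a probability per operating hour - the per-hour reading, fleet size and usage figures below are my assumptions for illustration, not anyone's real numbers:

```python
# Illustrative arithmetic only - the figures are assumptions, not data.

p_per_hour = 1e-9        # target probability of catastrophic failure per hour
cars = 10_000_000        # assumed fleet size
hours_per_car = 400      # assumed driving hours per car per year

expected_events_per_year = p_per_hour * cars * hours_per_car
print(expected_events_per_year)  # 4.0 - still a handful of events per year
```

Even at that kind of target, a big enough fleet still sees the occasional event, which is why regulators will want the number proven rather than asserted.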

I've maybe not put it across very well, but it is something that will have to be covered by law. Currently I'm not sure the law and the tech are at the same place. Hence the point I made.

