TheNugget 10 #13 Posted February 4, 2018

I'd go for option 3.
tzijlstra 11 #14 Posted February 4, 2018

Rubbish. OK, the example might not have been well written (so this is just a dig at a member, as you usually do), but this is a general philosophical question that has been around far longer than SF, AI, the net, or even Jesus. If anything, it has probably never been a more pertinent question than since AI arrived. If computers are going to control things, then they have to make decisions, and this is the type of thing they will have to be taught what to do. (Yes, we've established that if a person runs out in front, then one brakes; I would think that is one of the algorithms written into the software.)

It wasn't a dig at the OP, it's a dig at the idea that this should be decided by algorithms: it shouldn't. Anybody who understands how these 'AIs' work knows that the self-driving car will, software-wise, make better and more stable decisions than human drivers can. Not based on ethics, but pragmatics.

---------- Post added 04-02-2018 at 09:27 ----------

Imagine how terrible things could have been if you had eaten at McDonalds??

Thanks for pointing out the typo and making me smile.
ANGELFIRE1 10 #15 Posted February 4, 2018

As a former professional driver I know exactly what I would do. Basic instincts come to the fore: stamp on the brake and swerve. In a real-life scenario there is NO TIME to think about any options whatever.

Angel1.
ENG601PM 10 #16 Posted February 4, 2018 (edited)

It wasn't a dig at the OP, it's a dig at the idea that this should be decided by algorithms: it shouldn't. Anybody who understands how these 'AIs' work knows that the self-driving car will, software-wise, make better and more stable decisions than human drivers can. Not based on ethics, but pragmatics.

It is absolutely based on ethics, not pragmatics. The most sophisticated AI possible will adhere to the ethical framework decided by humans, who put it into law for it to be translated into algorithms.

If human ethics says kill the child, the child dies. If human ethics says kill yourself and your family, you and your family die.

The answer to the above question seems obvious until you change the decision from a single child on the bonnet to a line of children. Does your car kill you instead of them? Would you buy that car?

The AI kill decision isn't pragmatic, it is programmed ethics, and it is indeed a deeply philosophical question that has to be considered and decided very soon, by humans.

Edited February 4, 2018 by ENG601PM
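ENG601PM's point, that "programmed ethics" is just a human-chosen rule set baked into software, can be sketched in a few lines. This is a hypothetical illustration, not real autonomous-vehicle code: every name and every number below is invented, and the point is precisely that the humans who pick the numbers decide who dies.

```python
# Hypothetical sketch: "programmed ethics" as a human-chosen cost table.
# The software does no moral reasoning of its own; it minimises whatever
# costs humans wrote down. All names and weights here are invented.

OUTCOME_COST = {
    "hit_child": 300,       # a human chose this number, not the AI
    "harm_occupants": 100,  # change it and the car's "decision" flips
}

def choose_action(actions):
    """Return the action whose predicted outcomes have the lowest total cost."""
    return min(actions, key=lambda name: sum(OUTCOME_COST[o] for o in actions[name]))

# One unavoidable-harm scenario: braking alone still hits the child,
# swerving into the wall harms the car's occupants instead.
scenario = {
    "brake": ["hit_child"],
    "swerve_into_wall": ["harm_occupants"],
}

print(choose_action(scenario))
```

Swap the two weights and the same code kills the child instead: the "decision" lives entirely in the table, which is the post's argument that it is programmed ethics, not pragmatics.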
Staunton 18 #17 Posted February 4, 2018

Driverless cars will use multimedia to access the credit ratings of each potential victim in the event of such a choice, and by means of a financial algorithm, select the poorest people as expendable. This is the wonder of digital technology and superfast information flows.
ENG601PM 10 #18 Posted February 4, 2018

Driverless cars will use multimedia to access the credit ratings of each potential victim in the event of such a choice, and by means of a financial algorithm, select the poorest people as expendable. This is the wonder of digital technology and superfast information flows.

In that case the child is killed every time. Individual children have no economic value.
davyboy 19 #19 Posted February 4, 2018

I should have asked in my post not just what you would do, but WHY. Have a look at the lecture in the link that Ash gave. ENG601PM hit the nail on the head.
barleycorn 10 #20 Posted February 4, 2018

Being an observant driver, I would know if it was safe to swerve. I would also have already clocked the child and adjusted my speed accordingly. As such I would be able to brake and, whilst I may still hit it, avoid killing the child.
davyboy 19 #21 Posted February 4, 2018

Being an observant driver, I would know if it was safe to swerve. I would also have already clocked the child and adjusted my speed accordingly. As such I would be able to brake and, whilst I may still hit it, avoid killing the child.

Saint barleycorn.
ENG601PM 10 #22 Posted February 4, 2018

Being an observant driver, I would know if it was safe to swerve. I would also have already clocked the child and adjusted my speed accordingly. As such I would be able to brake and, whilst I may still hit it, avoid killing the child.

That's what they all say before they kill a child or themselves. We're talking about machines, though.
tzijlstra 11 #23 Posted February 4, 2018

It is absolutely based on ethics, not pragmatics. The most sophisticated AI possible will adhere to the ethical framework decided by humans, who put it into law for it to be translated into algorithms.

If human ethics says kill the child, the child dies. If human ethics says kill yourself and your family, you and your family die.

The answer to the above question seems obvious until you change the decision from a single child on the bonnet to a line of children. Does your car kill you instead of them? Would you buy that car?

The AI kill decision isn't pragmatic, it is programmed ethics, and it is indeed a deeply philosophical question that has to be considered and decided very soon, by humans.

The day we have programmed ethics is the day that AIs take over. It is impossible without quantum computing in a far more advanced state than it currently is.

AIs today are purely pragmatic and respond to their environment based on predetermined parameters. As simple as that. The moment you introduce an option into those parameters, you complicate things beyond our current control without achieving anything.
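tzijlstra's "predetermined parameters" model amounts to threshold checks on sensor values. A minimal sketch of that idea (the function name, thresholds, and braking figure are all invented for illustration, and bear no resemblance to production vehicle software):

```python
def control_step(obstacle_distance_m, speed_mps, adjacent_lane_clear):
    """Purely pragmatic response: compare sensor readings against fixed,
    predetermined thresholds and emit an action. There is no ethical
    reasoning here; every number was chosen by an engineer in advance."""
    # Distance needed to stop, assuming ~7 m/s^2 of braking (invented figure).
    stopping_distance_m = speed_mps ** 2 / (2 * 7.0)
    if obstacle_distance_m > 1.5 * stopping_distance_m:
        return "continue"          # plenty of margin, keep going
    if obstacle_distance_m > stopping_distance_m:
        return "brake"             # can still stop in time
    # Cannot stop in time: swerve only if the sensors say the lane is clear.
    return "swerve" if adjacent_lane_clear else "emergency_brake"
```

Everything such a car "decides" is already encoded in those comparisons; introducing an ethical option into the parameters would, as the post says, complicate the system well beyond this simple, testable form.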
The Joker 10 #24 Posted February 4, 2018

In that case the child is killed every time. Individual children have no economic value.

Do children have less economic value than the OAPs who are costing the nation billions in healthcare costs, pensions and other benefits?