Asimov had some good ideas, but now we need to actually write some laws:
As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously.
As that happens, they will be presented with ethical dilemmas. Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? Such questions have led to the emergence of the field of “machine ethics”, which aims to give machines the ability to make such choices appropriately—in other words, to tell right from wrong.