Moral Machine

Moral Machine is an online platform, developed by Iyad Rahwan's Scalable Cooperation group at the Massachusetts Institute of Technology, that generates moral dilemmas and collects information on the decisions people make between two destructive outcomes.[1][2] The presented scenarios are often variations of the trolley problem, and the collected information is used for further research on the decisions that machine intelligence must make in the future.[3][4][5][6][7][8] For example, as artificial intelligence plays an increasingly significant role in autonomous driving technology, research projects like Moral Machine help find solutions to the challenging life-and-death decisions that self-driving vehicles will face.[9]

References

  1. "Driverless cars face a moral dilemma: Who lives and who dies?". NBC News. Retrieved 2017-02-16.
  2. Brogan, Jacob (2016-08-11). "Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?". Slate. ISSN 1091-2339. Retrieved 2017-02-16.
  3. "Moral Machine | MIT Media Lab". www.media.mit.edu. Retrieved 2017-02-16.
  4. "MIT Seeks 'Moral' to the Story of Self-Driving Cars". VOA. Retrieved 2017-02-16.
  5. "Moral Machine". Moral Machine. Retrieved 2017-02-16.
  6. Clark, Bryan (2017-01-16). "MIT's 'Moral Machine' wants you to decide who dies in a self-driving car accident". The Next Web. Retrieved 2017-02-16.
  7. "MIT Game Asks Who Driverless Cars Should Kill". Popular Science. Retrieved 2017-02-16.
  8. Constine, Josh. "Play this killer self-driving car ethics game". TechCrunch. Retrieved 2017-02-16.
  9. Chopra, Ajay. "What's Taking So Long for Driverless Cars to Go Mainstream?". Fortune. Retrieved 2017-08-01.