Google says its algorithms will do no harm
The company has created a set of seven principles for its artificial intelligence researchers to live by—and they prohibit weapons technology.
An AI code of ethics: Google’s new guidelines prohibit it from helping develop autonomous weapons in future, but leave sufficient wiggle-room for it to benefit from lucrative defense deals. Google’s CEO, Sundar Pichai, announced the new code in a blog post yesterday.
The principles state that Google's AI should:
- Benefit society
- Avoid algorithmic bias
- Respect privacy
- Be tested for safety
- Be accountable to the public
- Maintain scientific rigor
- Be made available to others in accordance with the same principles
Some background: The announcement comes in the wake of employee protests over the Department of Defense's use of Google's AI to improve the accuracy of drone strikes, among other things.