Machine-learning systems can be duped or confounded by situations they haven’t seen before, and a system retrained for a new task is liable to lose the expertise it had in the original one. Computer scientists call that second problem “catastrophic forgetting.”
These shortcomings exist because AI systems don’t understand causation. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain. Understanding cause and effect is a big aspect of what we call common sense, and it’s an area in which AI systems today “are clueless,” says Elias Bareinboim. He should know: as the director of the new Causal Artificial Intelligence Lab at Columbia University, he’s at the forefront of efforts to fix this problem. Read the full story.
—Brian Bergstein