“Sometimes, an AI’s brilliant problem-solving rules actually rely on mistaken assumptions. For example, some of my weirdest AI experiments have involved Microsoft’s image recognition product, which allows you to submit any image for the AI to tag and caption. Generally, this algorithm gets things right — identifying clouds, subway trains, and even a kid doing some sweet skateboarding tricks. But one day I noticed something odd about its results: **it was tagging sheep in pictures that definitely did not contain any sheep. When I investigated further, I discovered that it tended to see sheep in landscapes that had lush green fields — whether or not the sheep were actually there. Why the persistent — and specific — error? Maybe during training this AI had mostly been shown sheep that were in fields of this sort and had failed to realize that the “sheep” caption referred to the animals, not to the grassy landscape.** In other words, **the AI had been looking at the wrong thing**. And sure enough, when I showed it examples of sheep that were not in lush green fields, it tended to get confused. If I showed it pictures of sheep in cars, it would tend to label them as dogs or cats instead. Sheep in living rooms also got labeled as dogs and cats, as did sheep held in people’s arms. And sheep on leashes were identified as dogs. **The AI also had similar problems with goats — when they climbed into trees, as they sometimes do, the algorithm thought they were giraffes (and another similar algorithm called them birds)**.”

---

**Tags** — [[quotes]], [[artificial-intelligence]], [[ai-problems]], [[teaching-anecdotes]]

**Source** — [[202307171347 — B — You Look Like a Thing and I Love You]]
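
Below is a minimal toy sketch (my own illustration, not from the book) of the shortcut-learning failure the quote describes: when a spurious cue (“lush green field”) co-occurs with the label more reliably than the genuine cue (“woolly animal”), a classifier weights the background over the animal and then fails on exactly the sheep-in-a-car cases. The feature names, the synthetic data, and the scikit-learn setup are all assumptions made for the sketch.

```python
# Toy sketch of shortcut learning, assuming scikit-learn is available.
# Feature names ("woolly", "green") are invented; the data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Label: 1 = sheep present, 0 = no sheep.
y = rng.integers(0, 2, n)

# The genuine cue ("a woolly animal is visible") is noisy:
# it agrees with the label only 80% of the time.
woolly = np.where(rng.random(n) < 0.8, y, 1 - y)

# The spurious cue ("lush green field") co-occurs with sheep 95% of the
# time in training, so to the optimizer it looks like the *better* signal.
green = np.where(rng.random(n) < 0.95, y, 1 - y)

X = np.column_stack([woolly, green])
clf = LogisticRegression().fit(X, y)

# The weight on the background cue ends up larger than the animal cue.
print("weights (woolly, green):", clf.coef_[0])

# Sheep in a car: woolly animal present, no green field.
print("sheep in a car  ->", clf.predict([[1, 0]])[0])  # typically 0: sheep missed
# Empty green hillside: no animal, lush grass.
print("empty green field ->", clf.predict([[0, 1]])[0])  # typically 1: phantom sheep
```

With these co-occurrence rates the learned weight on the background dominates, so the model sees a sheep in an empty green field and misses the sheep in the car, mirroring the misclassifications in the quote.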