Humans are complicated beings. The ways we communicate are multilayered, and psychologists have devised many kinds of tests to measure our ability to infer meaning and understanding from interactions with each other.
AI models are getting better at these tests. Newly published research has found that some large language models perform as well as, and in some cases better than, humans on tasks designed to test the ability to track people’s mental states, known as “theory of mind.”
This doesn’t mean AI systems are actually able to work out how we’re feeling. But it does demonstrate that these models are performing better and better in experiments designed to assess abilities that psychologists believe are unique to humans. Read the full story.
—Rhiannon Williams
And, if you’re interested in learning more about why the way we test AI is so flawed, read this piece by our senior AI editor Will Douglas Heaven.
A device that zaps the spinal cord gave paralyzed people better control of their hands
Fourteen years ago, a journalist named Melanie Reid attempted a jump on horseback and fell. The accident left her mostly paralyzed from the chest down. Eventually she regained control of her right hand, but her left remained, in her own words, “useless.”
Now, thanks to a new noninvasive device that delivers electrical stimulation to the spinal cord, she has regained some control of her left hand. She can use it to sweep her hair into a ponytail, scroll on a tablet, and even squeeze hard enough to release a seatbelt latch. These may seem like small wins, but they’re crucial.
Reid was part of a 60-person clinical trial in which the vast majority of participants benefited. The trial was the last hurdle before the researchers behind the device could request regulatory approval, and they hope it might be approved in the US by the end of the year. Read the full story.