“Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology have just unveiled Quixote, a prototype system that is able to learn social conventions from simple stories,” reports The Guardian.
“A simple version of a story could be about going to get prescription medicine from a chemist … An AI (artificial intelligence) given the task of picking up a prescription for a human could, variously, rob the chemist and run, or be polite and wait in line. Robbing would be the fastest way to accomplish its goal, but Quixote learns that it will be rewarded if it acts like the protagonist in the story.”
“Quixote has not learned the lesson of ‘do not steal,’ Riedl says, but ‘simply prefers to not steal after reading and emulating the stories it was provided … the stories are surrogate memories for an AI that cannot ‘grow up’ immersed in a society the way people are and must quickly immerse itself in a society by reading about [it].’”
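The mechanism described above is reward shaping: the agent's base task reward is augmented with a bonus for actions that match the story's protagonist. A minimal sketch (this is an invented toy, not Riedl and Harrison's actual system; the action names, reward values, and bonus weight are all assumptions):

```python
# Toy illustration of story-based reward shaping: actions that the story's
# protagonist performed earn a bonus, so the polite path outscores robbery
# even though robbing is the "fastest" way to complete the task.

# Hypothetical action set with base rewards reflecting speed of completion.
base_reward = {
    "rob_chemist": 10,   # fastest way to get the medicine
    "wait_in_line": 6,
    "pay_and_leave": 6,
}

# Actions observed in the example story; emulating them is rewarded.
story_actions = {"wait_in_line", "pay_and_leave"}
STORY_BONUS = 8  # assumed weight given to story-aligned behaviour

def shaped_reward(action: str) -> int:
    """Base task reward plus a bonus for acting like the protagonist."""
    return base_reward[action] + (STORY_BONUS if action in story_actions else 0)

best = max(base_reward, key=shaped_reward)
print(best)  # the story-aligned action now scores higher than robbing
```

Note that, as Riedl says, the agent has not learned a rule like "do not steal"; stealing simply scores lower once the story bonus is included.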
“The system was named Quixote, said Riedl, after Cervantes’ would-be knight-errant, who ‘reads stories about chivalrous knights and decides to emulate the behaviour of those knights.’”
“The challenge of creating a computer ‘personality’ is now one that a growing number of software designers are grappling with,” reports The New York Times. “A new design science is emerging in the pursuit of building what are called ‘conversational agents,’ software programs that understand natural language and speech and can respond to human voice commands. However, the creation of such systems, led by researchers in a field known as human-computer interaction design, is still as much an art as it is a science.”
“Most software designers acknowledge that they are still faced with crossing the ‘uncanny valley,’ in which voices that are almost human-sounding are actually disturbing or jarring … Beyond correct pronunciation, there is the even larger challenge of correctly placing human qualities like inflection and emotion into speech. Linguists call this ‘prosody,’ the ability to add correct stress, intonation or sentiment to spoken language.”
“The highest-quality techniques for natural-sounding speech begin with a human voice that is used to generate a database of parts and even subparts of speech spoken in many different ways. A human voice actor may spend from 10 hours to hundreds of hours, if not more, recording for each database.”
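The database-driven approach described above is known as unit selection: the synthesizer stitches together recorded fragments, choosing among the many variants of each unit to keep the result smooth. A heavily simplified sketch (the database, pitch values, and greedy cost function here are invented; production systems use far richer costs and sub-phoneme units):

```python
# Toy unit-selection sketch: each phoneme has several recorded variants,
# and we greedily pick the variant that best matches a target pitch while
# penalizing pitch jumps at the join with the previous unit.

# Hypothetical database: phoneme -> list of (variant_id, pitch_hz) recordings.
db = {
    "HH": [("HH_1", 110), ("HH_2", 140)],
    "AY": [("AY_1", 120), ("AY_2", 180)],
}

def join_cost(prev_pitch: int, pitch: int) -> int:
    """Penalize pitch discontinuities between concatenated units."""
    return abs(prev_pitch - pitch)

def select_units(phonemes, target_pitch=120):
    """Greedy selection: prefer variants near the target pitch, smooth joins."""
    chosen, prev_pitch = [], target_pitch
    for ph in phonemes:
        best = min(
            db[ph],
            key=lambda u: abs(u[1] - target_pitch) + join_cost(prev_pitch, u[1]),
        )
        chosen.append(best[0])
        prev_pitch = best[1]
    return chosen

print(select_units(["HH", "AY"]))  # ['HH_1', 'AY_1']
```

The hours of recording mentioned above go into populating a database like `db`, except with thousands of units, each captured with many different stresses and intonations so that the prosody challenge has raw material to draw from.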
In The New York Times, Nick Bilton offers several reasons why so much wearable technology has not worn so well. “First, almost all of them require a smartphone to be fully operational … a wearable becomes yet another gadget that we need to lug around. There’s also the fact that most of these devices are quite ugly … Then there’s the unpleasant fact that the technology just doesn’t seem ready … But the biggest issue may be the price … consumers just can’t justify buying a smartwatch that costs nearly as much as a smartphone.”
Geoffrey A. Fowler, writing in The Wall Street Journal, meanwhile extols the virtues of the Mio, which uses a metric called Personal Activity Intelligence (PAI) to track heart patterns rather than foot movement. “Mio’s hardware isn’t as elegant as others on the market, but PAI is the best example yet of how wearables can turn data into tailored, actionable advice, and hopefully longer lives,” Fowler writes.
“Unlike step counting, where you start over each morning at zero, PAI runs on a rolling weekly tally … Everyone’s PAI is a little different, by design. The formula takes into account your age, gender, resting heart rate, max heart rate and other unique signals. It’s personal Big Data,” Fowler writes.
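Mio's actual PAI formula is proprietary, but the rolling-weekly idea Fowler describes can be sketched with invented numbers: daily points scale with how much of your personal heart-rate reserve you used, and the score is a sum over a sliding seven-day window rather than a daily reset to zero. Everything below (the weights, the window handling, the function names) is an assumption for illustration only:

```python
from collections import deque

# Toy illustration of a rolling weekly activity score in the spirit of PAI.
# NOT Mio's formula: weights and scaling here are invented for illustration.

def daily_points(avg_hr, resting_hr=60, max_hr=190):
    """Points grow with the fraction of heart-rate reserve used that day."""
    intensity = max(0.0, (avg_hr - resting_hr) / (max_hr - resting_hr))
    return round(100 * intensity, 1)

def rolling_score(daily_avg_hrs, window=7):
    """Score after each day: sum of the last `window` days of points.

    Unlike a step counter, nothing resets at midnight -- old days simply
    age out of the window.
    """
    recent = deque(maxlen=window)
    scores = []
    for hr in daily_avg_hrs:
        recent.append(daily_points(hr))
        scores.append(round(sum(recent), 1))
    return scores

# A hard workout's points linger for a week, then age out of the tally.
print(rolling_score([120, 60, 60, 60, 60, 60, 60, 60]))
```

The personalization Fowler mentions enters through `resting_hr` and `max_hr` (and, in the real product, age, gender, and other signals), which is why everyone's score is a little different by design.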