Siri — a form of artificial intelligence used in iPhones — can understand a lot; however, if you ask Siri if it “grasps” the concepts lobbed its way, the intelligence reveals its artificiality.
A recent study by campus researchers explores how metaphorical language developed in humans and maps how such phrases came to be, moving artificial intelligence, or AI, closer to grasping these concepts.
Artificial intelligence software such as Siri and Cortana possesses a limited lexicon, and as a result its ability to respond to subtleties and double meanings is, for now, restricted.
Siri might be able to respond when asked if it “understands” what is said to it, but it is unable to “grasp” which meaning of “grasp” is being used in the sentence.
This is because “grasp” in the sense of holding an object belongs to the concrete domain, while “grasp” in the sense of understanding belongs to the abstract domain, according to Mahesh Srinivasan, lead author of the study and a campus assistant professor of psychology.
Artificial intelligence struggles with these complexities because human beings can conceive multiple meanings of a word and move it from one domain to another, an ability machines have not yet developed, according to the study.
The challenge of programming AI to understand these complexities stems from uncertainty about how exactly humans accomplish the task, according to Srinivasan, though linguists have proposed many theories in the past.
The research, by Srinivasan, Yang Xu and Barbara C. Malt, compared polling data from subjects with an online database from the Oxford English Dictionary called the Metaphor Map of English, according to Srinivasan.
Subjects were asked to rate a term on a scale of one to seven across a number of categories, such as external vs. internal, embodied vs. disembodied, animate vs. inanimate and concrete vs. abstract.
Responses were then attached to words to determine each word’s source domain, or original interpretation, and the domain the word would be used in next, Srinivasan said.
The data were used to create an algorithm that correctly predicted where a word began and where it would move on the map 75 percent of the time.
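The intuition behind such a prediction can be sketched in a few lines of code. This is a toy illustration, not the researchers’ actual model: it assumes hypothetical concreteness ratings on the study’s one-to-seven scale and applies the general tendency, described in the article, for metaphorical mappings to run from concrete source domains toward abstract targets.

```python
# Toy sketch of the prediction idea (hypothetical ratings, not study data):
# metaphorical mappings tend to start in the more concrete domain, so
# given two word senses we guess the more concrete one as the source.

# Hypothetical mean ratings on a 1 (abstract) to 7 (concrete) scale.
CONCRETENESS = {
    "holding": 6.4,        # "grasp" as physically gripping an object
    "understanding": 2.1,  # "grasp" as comprehending an idea
    "seeing": 6.0,         # "see" as visual perception
    "knowing": 2.5,        # "see" as realizing ("I see what you mean")
}

def predict_source(domain_a: str, domain_b: str) -> str:
    """Predict which of two domains a metaphorical mapping began in:
    the one rated more concrete."""
    return max((domain_a, domain_b), key=CONCRETENESS.get)

print(predict_source("holding", "understanding"))  # holding
print(predict_source("seeing", "knowing"))         # seeing
```

On this rule, “grasp” is predicted to have started as physical holding before extending to understanding, matching the direction the article describes.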
“The idea is to take lessons from how humans have created new words in the past and taking that into making new natural language processing systems,” Srinivasan said.
Members of the campus community expressed differing opinions on the possibility of Siri understanding more creative usage of language.
Varun Agarwal, a campus sophomore and computer science major, said AI’s “lack of ability to understand subtlety is a barrier” and that “having that barrier is a way for (him) to remain secure.”
Agarwal added, however, that this development would have positive aspects for “people who are learning (English)” who may not have enough formal language skills to communicate with AI.
Campus sophomore Anchit Sood said he does not use Siri often but would be more inclined to if there were significant changes in its comprehension capabilities.