In many households in the U.S., when someone runs out of dish soap or wants to look up the lyrics to the latest Taylor Swift song, they call out to Siri or Alexa. Many times, it’s a seamless experience — the soothing voice responds with the appropriate information, and users don’t have to think twice. These smart home assistants rely on artificial intelligence to complete tasks. The same is true when we’re talking or texting with a chatbot, such as when we need to return an online purchase or resolve a billing issue. How can these technologies understand us and communicate with us as if we’re having a conversation? Read on to learn more about natural language processing and artificial intelligence.
Natural Language Processing: What Is It?
Computers “learn” the rules of a language thanks to artificial intelligence and natural language processing (NLP). Put another way: NLP is how Alexa and Siri know how to understand and respond to a written or spoken question. NLP is a complicated business — we’re only skimming the surface here. But by learning about the process computers undergo to learn language, we may be able to better understand our own methods of communication.
The Fundamentals of NLP
How does one go about teaching a computer a language? There are a few key steps. For grammarians, many of these processes will likely sound familiar, if only because they’re done manually with a red pen.
The first part of the process involves breaking up a sentence into individual words to make each one easier to understand. (Multilingual readers may use a similar process when learning a new language.) In order to understand what each word means, computers need a lot of data. Take the word “dog.” Someone may describe it as an animal with four legs and floppy ears. Based on those criteria, a computer could pull up an image of an elephant — four legs, floppy ears, check!
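To make that first step concrete, here is a minimal sketch of the splitting step (often called tokenization) in Python. The function name, the regular expression, and the sample sentence are all illustrative choices, not how any particular assistant does it; real systems use far more careful tokenizers.

```python
import re

def tokenize(sentence):
    """Split a sentence into lowercase word tokens, dropping punctuation."""
    return re.findall(r"[a-z']+", sentence.lower())

print(tokenize("How do you buckle a shoe?"))
# ['how', 'do', 'you', 'buckle', 'a', 'shoe']
```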
Does that mean the only other option is to write out every detail that identifies a dog (and every type of dog)? The possibilities are almost infinite. That’s where machine learning comes in. An analyst will give a computer plenty of examples for it to reference; a machine-learning algorithm then takes all that data and makes inferences about new information it receives. As it receives feedback, it learns what does and doesn’t count as “dog.”
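As a rough illustration of that idea, the sketch below “learns” from a handful of labeled examples and classifies something new by finding the most similar example it has already seen. The features (legs, floppy ears, weight) and the simple nearest-neighbor rule are invented for this example; real systems learn from far richer data, such as millions of images.

```python
# Toy "dog vs. not dog" learner: each example is (legs, floppy_ears, weight_kg).
# These numbers and the 1-nearest-neighbor rule are purely illustrative.

examples = [
    ((4, 1, 30),   "dog"),       # beagle-sized dog
    ((4, 1, 5),    "dog"),       # small spaniel
    ((4, 1, 4000), "not dog"),   # elephant: floppy ears, but the weight gives it away
    ((2, 0, 0.5),  "not dog"),   # parrot
]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(features):
    # Pick the label of the closest known example.
    return min(examples, key=lambda ex: distance(ex[0], features))[1]

print(classify((4, 1, 25)))    # -> "dog"
print(classify((4, 1, 3500)))  # -> "not dog"
```

The more labeled examples (and corrections) the learner gets, the better its guesses about animals it has never seen before, which is the feedback loop described above.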
Once a computer can define each of the words in a sentence, it then has to process the meaning of the sentence. This process — which humans do, too — relies on understanding the context clues around a word that may have multiple definitions.
Using a machine-learning algorithm, a computer looks at how a word is being used to identify its most likely definition: Are there any clues in the sentence that point to a specific meaning? Take the word "buckle," which means different things in different sentences. A user could mean “to fasten” or “to collapse under pressure.” Context clues help identify the correct meaning, as the short sketch after the examples below shows.
Example: How do you buckle a shoe?
Example: How does a foundation buckle?
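Here is one simplified way a program might use those context clues: score each sense of “buckle” by how many of its clue words appear in the question, and pick the sense with the most matches. The clue-word lists are hand-written for illustration; a real system would learn these associations from large amounts of text rather than a short list.

```python
# Tiny word-sense picker for "buckle": score each sense by how many of its
# clue words appear in the sentence. The clue lists are illustrative only.

SENSES = {
    "fasten":   {"shoe", "belt", "strap", "seatbelt", "clasp"},
    "collapse": {"foundation", "bridge", "knees", "pressure", "weight", "heat"},
}

def disambiguate(sentence):
    words = set(sentence.lower().replace("?", "").split())
    # Choose the sense whose clue words overlap most with the sentence.
    return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

print(disambiguate("How do you buckle a shoe?"))      # -> "fasten"
print(disambiguate("How does a foundation buckle?"))  # -> "collapse"
```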
Then there’s pragmatic analysis. When someone asks if you can go to the grocery store, they’re really asking two things: Are you able to do this and would you do it? The process we go through to understand the literal meaning of a sentence, plus the implied meaning, is how we know to respond, “Yes, do you need me to pick up something for you?”
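As a very rough sketch of that idea, the snippet below treats questions that begin with “can you” or “could you” as requests to act rather than literal questions about ability. The patterns and responses are invented for illustration; real assistants learn intent from enormous amounts of conversation data instead of a few hand-written rules.

```python
# Sketch of the pragmatic step: treat "can/could/would you ..." as a request
# to act, not a literal yes/no question. Patterns here are illustrative only.

def interpret(utterance):
    text = utterance.lower().rstrip("?")
    if text.startswith(("can you ", "could you ", "would you ")):
        action = text.split(" ", 2)[2]  # the part after "can you"
        return f"Request to: {action}"
    return "Literal question"

print(interpret("Can you go to the grocery store?"))
# -> "Request to: go to the grocery store"
```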
Algorithms — plus all that data — are how our virtual assistants are able to speak naturally. So, the next time Siri says, “I don’t know what you are asking,” try rephrasing your request with a few more context clues so the AI can figure out what you mean.