- Alicia Hernández (@por_puesto)
- BBC News World
A thinking and feeling machine.
This is how Blake Lemoine, a Google engineer, described LaMDA, Google’s artificial intelligence system. The claim spread quickly; we read it everywhere.
But how does this machine actually work?
Thinking of old science fiction movies, you might imagine LaMDA as a humanoid robot that opens its eyes, perceives and speaks. Or like HAL 9000, the supercomputer from 2001: A Space Odyssey, which The Simpsons parodied with the voice of Pierce Brosnan: a machine that loves Marge and wants to kill Homer.
The truth is more complicated: LaMDA is an artificial brain, hosted in the cloud, that feeds on trillions of texts and trains itself.
And yet, at the same time, it is like a parrot.
Complex? Let’s try to break it down to understand it better.
Super brain
LaMDA (Language Model for Dialogue Applications), designed by Google in 2017, is built on a transformer, that is, a deep artificial neural network.
This neural network is trained with huge amounts of text. But the learning has an objective, and it is framed as a game. “It takes a complete sentence, removes one word, and the system has to guess it,” explains Julio Gonzalo Arroyo, professor at UNED (National University of Distance Education) in Spain and principal investigator in the Department of Natural Language Processing and Information Retrieval.
It plays against itself. The system tries words by trial and error and, when it gets one wrong, checks the correct answer, like a child flipping to the solutions at the back of an activity book, and then corrects itself and adjusts its parameters.
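As a rough illustration of that guessing game, here is a minimal, purely hypothetical Python sketch. It is not Google’s training code: a simple co-occurrence counter stands in for the billions of parameters of a real neural network. It hides a word in a sentence, guesses it from the neighbouring words, and adjusts its counts whenever it guesses wrong.

```python
# Toy sketch of the "hide a word and guess it" training game described above.
# A simple counter stands in for the billions of parameters of a real network.
import random
from collections import defaultdict

sentences = [
    "i started playing the guitar".split(),
    "she started playing the piano".split(),
    "he enjoys playing the guitar".split(),
]

# "Parameters": how often each word appears between a pair of neighbours.
params = defaultdict(lambda: defaultdict(int))

for _ in range(100):                              # training rounds
    for sent in sentences:
        i = random.randrange(1, len(sent) - 1)    # hide one inner word
        context = (sent[i - 1], sent[i + 1])      # its left/right neighbours
        answer = sent[i]
        guesses = params[context]
        guess = max(guesses, key=guesses.get) if guesses else None
        if guess != answer:
            # Wrong guess: peek at the answer key and adjust the parameters.
            params[context][answer] += 1

# After training, the model can fill in the blank "playing ___ guitar":
blank = params[("playing", "guitar")]
print(max(blank, key=blank.get))                  # -> "the"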
At the same time, “it defines the meaning of each word and pays attention to the words around it,” says Gonzalo Arroyo.
It thus becomes a specialist at predicting patterns and words: just like the predictive text on your cell phone, only taken to the nth degree and with far more memory.
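To see what “predictive text” means here, the following hypothetical sketch suggests the next word by counting which word most often follows the current one in some training text. A bigram table is an assumption made for brevity; systems like LaMDA draw on far longer contexts.

```python
# Minimal "predictive text" sketch: suggest the next word by counting which
# word most often followed the current one in the training text.
from collections import Counter, defaultdict

text = "i love music . i love playing guitar . we love music and dancing .".split()

next_word = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    next_word[prev][nxt] += 1

def suggest(word: str) -> str:
    """Return the word most frequently seen after `word`."""
    return next_word[word].most_common(1)[0][0]

print(suggest("love"))   # -> "music" (seen twice, vs "playing" once)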
Quality, specific and interesting responses
But LaMDA also generates responses that are fluid rather than stilted and that, according to Google, can recreate the dynamism and recognize the nuances of human conversation. In short: it does not sound like a machine.
That fluidity is one of Google’s goals, as it explains on its technology blog. The company says it achieves this by checking that the answers are of high quality, specific, and show interest.
To be high quality, an answer must make sense. For example, if you tell LaMDA “I started playing the guitar,” it should reply with something related to that, not some nonsense.
To meet the second goal, it should not simply answer “okay,” but with something more specific, such as “Which brand of guitar do you prefer, Gibson or Fender?”
And for its answers to show interest and insight, it must go a level further, for example: “A Fender Stratocaster is a good guitar, but Brian May’s Red Special is unique.”
Why can it answer in such detail? As we said, it trains itself. “After reading billions of words, it has an extraordinary ability to guess which words are most appropriate in each context.”
For experts in artificial intelligence, transformers such as LaMDA were a milestone because they “allowed very efficient processing (of information and text) and produced a genuine revolution in the field of natural language processing.”
Safety and bias
Another goal of LaMDA’s training, according to Google, is to avoid creating “violent or gory content, promoting slurs or hateful stereotypes towards groups of people, or containing profanity,” as described on its artificial intelligence (AI) blog.
Google also requires answers to be grounded in facts and backed by known external sources.
“With LaMDA, we are taking a measured, careful approach to better consider valid concerns about fairness and factual accuracy,” says Brian Gabriel, a Google spokesperson.
He asserts that the system has undergone 11 separate reviews against Google’s AI Principles, “along with rigorous research and testing based on key metrics of quality, safety, and the system’s ability to produce statements based on facts.”
How do you make a system like LaMDA free of prejudice and hate speech?
“The key is selecting which data (which text sources) you feed it,” says Gonzalo.
But that is not easy: “The way we communicate reflects our biases, and machines learn them. They are hard to remove from the training data without removing its representativeness,” he explains.
In other words, biases can still show up.
“If you feed it news items about Queen Letizia (of Spain) and every one of them comments on the clothes she wears, it is likely that when the model is asked about her it will repeat that sexist pattern and talk about clothes rather than anything else,” says the expert.
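The toy next-word model sketched earlier makes this easy to see. In the hypothetical example below, every training sentence about a person mentions clothing, so clothing is all the model can ever predict about her.

```python
# Hypothetical demonstration of learned bias: if every training sentence
# about a person mentions clothes, the model's predictions about her will too.
from collections import Counter, defaultdict

biased_corpus = [
    "letizia wore an elegant dress",
    "letizia wore a red dress",
    "letizia wore a designer gown",
]

follows = defaultdict(Counter)
for sentence in biased_corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

# The only continuation ever observed after "letizia" is about clothing,
# so that is the only thing the model can say about her.
print(follows["letizia"].most_common())   # -> [('wore', 3)]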
The parrot that sings the tango
In 1966 the ELIZA system was designed; it applied very simple patterns to simulate a psychotherapist’s dialogue. “The system encouraged the patient to keep talking, whatever the topic of conversation, and applied patterns of the type: if they mention the word family, ask them about their relationship with their mother,” Gonzalo says.
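A minimal, assumption-laden sketch of that style of rule (illustrative Python, not the original 1966 program) shows how little machinery is needed:

```python
# ELIZA-style pattern matching: keyword rules turn whatever the user
# says into a canned follow-up question that keeps them talking.
import re

RULES = [
    (re.compile(r"\bfamily\b", re.I), "What is your relationship with your mother like?"),
    (re.compile(r"\bmother\b", re.I), "Tell me more about your mother."),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
]

def eliza_reply(utterance: str) -> str:
    """Return a canned follow-up based on the first matching keyword rule."""
    for pattern, response in RULES:
        match = pattern.search(utterance)
        if match:
            return response.format(*match.groups())
    return "Please, tell me more."  # default: keep the patient talking

print(eliza_reply("I had an argument with my family"))
# -> "What is your relationship with your mother like?"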
Some people believed that ELIZA really was a psychotherapist; some even claimed that it had helped them.
“It is relatively easy to fool people,” maintains Gonzalo, who considers Lemoine’s claim that LaMDA has become self-aware an “exaggeration.”
In Professor Gonzalo’s opinion, statements like Lemoine’s do not help foster a healthy debate about artificial intelligence.
“Paying attention to this kind of nonsense does no good. We run the risk of people becoming obsessed, thinking we are living in the Matrix and that machines are smarter than us and are going to kill us. That is far-fetched, it is fantasy, and I don’t think it helps us have a calm conversation about the benefits of AI.”
Because however fluid, high-quality and specific the conversation may be, “it is nothing more than a gigantic formula that adjusts its parameters to better predict the next word. It has no idea what it is talking about.”
Google’s response is similar. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic: if you ask what it’s like to be an ice-cream dinosaur, they can generate text about melting and roaring, and so on,” says Google’s Gabriel.
Researchers Emily Bender and Timnit Gebru have described these language-generation systems as “stochastic parrots,” which repeat words at random.
So, as researchers Ariel Guersenzvaig and Ramon Sangüesa put it, transformers like LaMDA understand what they write about as much as a parrot that sings the tango “El día que me quieras” (“The Day You Love Me”).