The new chatbots could change the world. Can you trust them?

Jeremy Howard, an artificial intelligence researcher, introduced his 7-year-old daughter to an online chatbot called ChatGPT. The system had been released a few days earlier by OpenAI, one of the world's most ambitious AI labs.

He told her to ask the chatbot whatever came to mind. She asked what trigonometry was good for, where black holes came from, and why chickens incubate their eggs. It answered each time in clear, well-punctuated prose. It also produced a computer program that could calculate the trajectory of a ball thrown through the air.
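The ball-trajectory program the article describes would amount to basic projectile physics. The sketch below is an illustrative version under the usual simplifying assumption of no air resistance, not the chatbot's actual output:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def trajectory(speed, angle_deg, steps=5):
    """Return (x, y) points along the flight of a thrown ball.

    speed is in m/s, angle_deg in degrees above horizontal.
    Air resistance is ignored, so the path is a simple parabola.
    """
    angle = math.radians(angle_deg)
    vx = speed * math.cos(angle)
    vy = speed * math.sin(angle)
    flight_time = 2 * vy / G  # time until the ball returns to y = 0
    points = []
    for i in range(steps + 1):
        t = flight_time * i / steps
        points.append((vx * t, vy * t - 0.5 * G * t * t))
    return points

for x, y in trajectory(20, 45):
    print(f"x={x:.1f} m, y={y:.1f} m")
```

The ball starts at the origin and lands when its height returns to zero; the horizontal spacing of the points shows its constant forward speed.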

Over the next few weeks, Dr. Howard, a data scientist and professor whose work inspired the creation of ChatGPT and similar technologies, came to see the chatbot as a new kind of personal tutor. It could teach his daughter math, science, and English, along with a few other important lessons. Chief among them: do not believe everything you are told.

“It is a thrill to see her learn this way,” he said. “But I also told her: Don’t trust everything it gives you. It can make mistakes.”

OpenAI is just one of the many companies, academic labs, and independent researchers working to build more advanced chatbots. These systems cannot chat exactly like a human, but they often seem to. They can also retrieve and repackage information at speeds humans never could. They can be thought of as digital assistants, like Siri or Alexa, that are better at understanding what you are looking for and giving it to you.

More than a million people have used ChatGPT since its release. Many experts believe such chatbots will eventually replace internet search engines like Google and Bing.

They can supply information in tight sentences rather than long lists of blue links. They can explain concepts in ways people find easy to understand. And they can deliver facts while also generating business plans, term-paper topics, and other new ideas.

“You now have a computer that can answer any question in a way that makes sense to a human,” said Aaron Levie, chief executive of the Silicon Valley company Box and one of several executives exploring how these chatbots will change the technology landscape. “It can combine ideas from different contexts.”

Today’s chatbots do all of this with complete confidence. But they do not always tell the truth. Sometimes they fail at simple arithmetic. They blend fact with fiction. And as they become more accessible, people could use them to generate and spread lies.

Google recently built a system for conversation called LaMDA, or Language Model for Dialogue Applications. This spring, a Google engineer claimed the system was sentient. He was wrong, but the claim captured the public’s imagination.

Aaron Margolis, a data scientist in Arlington, was among the small number of people outside Google allowed to use LaMDA through Google’s experimental app, AI Test Kitchen. He was consistently amazed by its talent for open-ended conversation. It kept him entertained. But he warned that what it said was not always true, as is to be expected from a system trained on the enormous amount of information posted to the internet.

“What it gives you is kind of like an Aaron Sorkin movie,” he said. Mr. Sorkin wrote “The Social Network,” a film often criticized for stretching the truth about the origins of Facebook. “Parts of it will be true, and parts won’t.”

He asked both ChatGPT and LaMDA to chat with him as if they were Mark Twain. When he asked LaMDA to describe a meeting between Twain and Levi Strauss, it replied that the writer had worked for the blue-jeans mogul while living in San Francisco in the mid-1800s. It seemed correct. But it was not. Twain and Strauss lived in San Francisco at the same time, but they never worked together.

Scientists refer to this as “hallucinations”. Much like a good storyteller, chatbots have a way of taking what they’ve learned and reshaping it into something new — without any regard for whether or not it’s true.

LaMDA is what AI researchers call a neural network, a mathematical system loosely modeled on the brain’s network of neurons. It is the same technology that translates between French and English on services like Google Translate and that identifies pedestrians as self-driving cars navigate city streets.

A neural network learns skills by analyzing data. By pinpointing patterns in thousands upon thousands of cat photos, for example, it can learn to recognize a cat.

About five years ago, researchers at Google and labs like OpenAI started designing neural networks that analyze enormous amounts of digital text, including books, Wikipedia articles, news stories, and online chat logs. Scientists call them “large language models.” By identifying billions of distinct patterns in the way people connect words, numbers, and symbols, these systems learned to generate text on their own.
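The pattern-finding idea can be illustrated with a toy. The sketch below is only a bigram counter that learns which word tends to follow which and then emits the most common successor; real large language models use neural networks over billions of patterns, but the principle of generating text from observed word associations is the same:

```python
from collections import defaultdict, Counter

def train(text):
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, length=5):
    """Generate text by repeatedly emitting the most common successor."""
    out = [start]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(successors.most_common(1)[0][0])
    return " ".join(out)

model = train("the cat sat on the mat because the cat was tired")
print(generate(model, "the"))
```

Even this tiny model reproduces a plausible-looking phrase from its training text, which hints at why much larger models can sound so fluent without any notion of truth.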

Their ability to generate language has astonished many researchers in the field, including many of the researchers who built them. The technology can mimic what people have written and combine disparate concepts. You could ask it for a “Seinfeld” scene in which Jerry learns an obscure mathematical technique called the bubble sort algorithm, and it would produce one.
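Bubble sort, the algorithm in that hypothetical scene, is simple enough to state in a few lines: repeatedly compare adjacent values and swap any pair that is out of order, so larger values “bubble” toward the end of the list.

```python
def bubble_sort(items):
    """Return a sorted copy of items using bubble sort."""
    values = list(items)  # work on a copy
    n = len(values)
    for i in range(n):
        swapped = False
        # After pass i, the last i values are already in place.
        for j in range(n - 1 - i):
            if values[j] > values[j + 1]:
                values[j], values[j + 1] = values[j + 1], values[j]
                swapped = True
        if not swapped:  # no swaps means the list is sorted; stop early
            break
    return values

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```

It is rarely used in practice because of its quadratic running time, which is part of why the article calls it obscure.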

With ChatGPT, OpenAI has worked to refine the technology. It does not handle free-flowing conversation as well as Google’s LaMDA. It was designed to work more like Alexa, Siri, and other digital assistants. Like LaMDA, ChatGPT was trained on a sea of digital text culled from the internet.

As people tested the system, they were asked to rate its responses. Were they convincing? Were they useful? Were they truthful? Then, through a technique called reinforcement learning, OpenAI used the ratings to fine-tune the system and more carefully define what it would and would not do.
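A loose illustration of learning from human ratings follows. This is a toy bandit-style update over a handful of canned replies, not OpenAI's actual fine-tuning pipeline; the class name and scoring rule are invented for the example:

```python
class RatedResponder:
    """Toy policy: keep a score per candidate reply, nudged by ratings."""

    def __init__(self, candidates, learning_rate=0.5):
        self.scores = {c: 0.0 for c in candidates}
        self.lr = learning_rate

    def respond(self):
        # Greedily pick the highest-scoring candidate.
        return max(self.scores, key=self.scores.get)

    def rate(self, candidate, rating):
        # rating in [-1, 1]: negative ratings push the score down.
        self.scores[candidate] += self.lr * rating

bot = RatedResponder(["made-up answer", "honest answer"])
# Raters repeatedly penalize fabrication and reward honesty.
for _ in range(3):
    bot.rate("made-up answer", -1.0)
    bot.rate("honest answer", +1.0)

print(bot.respond())  # → honest answer
```

The point of the sketch is only that repeated human feedback reshapes which behavior the system prefers, which is the idea behind the rating step the article describes.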

Mira Murati is OpenAI’s chief technology officer. “This allows us to get to the point where the model can interact with you and admit when it’s wrong,” she said. “It can reject something that is inappropriate, and it can challenge a question or premise that is incorrect.”

The method is not perfect. OpenAI warns ChatGPT users that it may “occasionally generate incorrect information” and “produce harmful instructions or biased content.” The company plans to keep improving the technology and reminds those who use it that it is still a research project.

Google, Meta, and other companies are also wrestling with accuracy issues. Meta recently removed an online preview of its chatbot, Galactica, because it repeatedly generated incorrect and biased information.

Experts warn that companies do not control the fate of these technologies. Systems like ChatGPT, LaMDA, and Galactica are based on ideas, research papers, and computer code that have circulated freely for years.

Companies like OpenAI and Google can push the technology forward at a faster pace than others. But their latest techniques have been widely copied and distributed, and they cannot prevent people from using these systems to spread misinformation.

Just as Dr. Howard hoped his daughter would learn not to trust everything she read online, he hoped society would learn the same lesson.

“You could program millions of these bots to look like humans, having conversations designed to convince people of a particular point of view,” he said. “I have warned about this for years. Now it is obvious that this is just waiting to happen.”


