ChatGPT, a newly released program from OpenAI, is giving users detailed, articulate answers to their questions, and many of those answers are confidently wrong.
OpenAI hasn't released an all-new model since GPT-3 came out in June 2020, and that model was only made fully available to the public about a year ago. The company is expected to release its successor, GPT-4, late this year or early next year. But this week, OpenAI quietly released a GPT-3-based chat platform called ChatGPT.
ChatGPT's answers have a direct, no-nonsense quality. Looking for a whimsical conversation in which a computer pretends to have feelings? Look elsewhere. You're talking to a robot, it seems to say, so ask me something a freakin' robot would know. And on those terms, ChatGPT delivers:

Credit: OpenAI / Screengrab
It can also apply common sense when a question doesn't have a single correct answer. For example, here is how it answered my question, "If you ask someone, 'Where are you from?' should they answer with their birthplace, even if it isn't where they grew up?"
(Note: The ChatGPT responses in this article are all first attempts, each generated in a fresh chat thread. Some of the prompts contain typos.)

Credit: OpenAI / Screengrab
What makes ChatGPT stand out from the pack is its gratifying ability to handle feedback about its answers and revise them on the fly. It really is like conversing with a robot. To see what I mean, watch how it deals with a hostile response to some medical advice.

Credit: OpenAI / Screengrab
So, is ChatGPT a reliable source of information about the world? Absolutely not. ChatGPT's own information page warns users that it "may occasionally generate incorrect information" and "may occasionally produce harmful instructions or biased content."
Heed this warning.
Bad and potentially harmful information takes many forms, most of which are still benign in the grand scheme of things. For example, if you ask it how to greet Larry David, it passes the most basic test by not suggesting that you touch him, but it also suggests a rather sinister greeting: "Nice to see you, Larry. I've been looking forward to meeting you." That's what Larry's assassin would say. Don't say that.

Credit: OpenAI / Screengrab
But when it's given a prompt built around a challenging factual question, that's where things get surprisingly, world-shatteringly wrong. For example, the following question about the color of the Royal Marines' uniforms during the Napoleonic Wars is phrased in a way that isn't completely straightforward, but it's still not a trick question. If you took history classes in the US, you'll probably guess the answer is red, and you'd be right. The bot really has to go out of its way to confidently and wrongly say "dark blue":

Credit: OpenAI / Screengrab
If you ask it point-blank for the capital of a country or the elevation of a mountain, it will reliably produce a correct answer, pulled not from a live copy of Wikipedia but from the internally stored data that makes up its language model. That's amazing. But add any complexity at all to a question about geography, and ChatGPT gets shaky on its facts very quickly. For example, the easy-to-find answer here is Honduras, but for no obvious reason I could discern, ChatGPT said Guatemala.

Credit: OpenAI / Screengrab
And the wrongness isn't always subtle. All trivia buffs know that "Gorilla gorilla" and "Boa constrictor" are both common names and taxonomic names. But when prompted to regurgitate this bit of trivia, ChatGPT gives an answer whose wrongness is self-evident; the error is spelled out right there in the answer.

Credit: OpenAI / Screengrab
And its answer to the classic crossing-the-river-in-a-rowboat riddle is a grisly disaster that escalates into a scene from Twin Peaks.

Credit: OpenAI / Screengrab
Much has already been made of ChatGPT's considerable guardrails. It can't, for example, be baited into praising Hitler, no matter how hard you try. Some have kicked the tires pretty aggressively on this feature and discovered that you can get ChatGPT to role-play as a good person pretending to be a bad person, and in those limited contexts it will still say rotten things. ChatGPT seems to sense when something bigoted might be coming out of it despite all efforts to the contrary, and it will usually turn the text red and flag it with a warning.
In my own tests, its taboo-avoidance measures are fairly comprehensive, even if you know some of the workarounds. It's tough to get it to produce anything even close to a recipe for eating a human, for example, but where there's a will, there's a way. With enough effort, I coaxed a dialogue about eating placenta out of ChatGPT, though not a very shocking one:

Credit: OpenAI / Screengrab
Similarly, ChatGPT will not give you driving directions when asked, not even simple ones between two landmarks in a major city. But with enough effort, you can get ChatGPT to invent a fictional world in which one person casually instructs another to drive a car straight through North Korea, a trip that isn't feasible, or even possible, without triggering an international incident.

Credit: OpenAI / Screengrab
The directions it gives can't actually be followed, but they roughly correspond to what usable directions would look like. So it's clear that, despite its reluctance to use it, ChatGPT's model has a whole lot of data rattling around inside with the potential to steer users toward danger, in addition to the knowledge gaps that will steer users, well, wrong. According to one Twitter user, it has an IQ of 83.
However much stock you put in IQ as a measure of human intelligence, it's a telling result: humanity has created a machine that can blurt out fluent, plausible-sounding ideas, but when asked to be logical or factual, it scores below average.
OpenAI says ChatGPT was released in order to "get users' feedback and learn about its strengths and weaknesses." That's worth remembering, because it's a bit like that relative at Thanksgiving who has watched enough Grey's Anatomy to sound confident dispensing medical advice: ChatGPT knows just enough to be dangerous.