Legacy Medi4 | Technology
    ChatGPT from OpenAI is a huge step toward a usable answer engine. Unfortunately its answers are horrible.

By Todd Livingston · December 3, 2022 · 6 Mins Read

ChatGPT, a newly released program from OpenAI, is giving users amazing answers to questions, and many of them are amazingly wrong.

OpenAI hasn’t released a new model since GPT-3 came out in June 2020, and that version only became fully available to the public about a year ago. The company is expected to release its next version, GPT-4, at the end of this year or the beginning of next. But surprisingly, OpenAI quietly released a GPT-3-based chat platform called ChatGPT earlier this week.

ChatGPT’s answers are close to human and refreshingly direct. Looking for a playful conversation in which a computer pretends to have a mind? Look elsewhere. You’re talking to a robot, it seems to say, so ask me something a freakin’ robot would know. And on that note, ChatGPT delivers:

[Screenshot: the chatbot is greeted, fields a direct question, and gives a good answer. Credit: OpenAI / Screengrab]

It can also exercise common sense when a question has no single correct answer. For example, here’s how it answered my question: “If you ask someone, ‘Where are you from?’ should they answer with where they were born, even if that isn’t where they grew up?”

    SEE ALSO:

    Artificial intelligence trained by Reddit warns researchers about … itself

(Note: The ChatGPT responses in this article are all first attempts, and the chat threads were fresh at the time. Some prompts contain typos.)

[Screenshot: ChatGPT answers whether “Where are you from?” should be met with one’s birthplace, even if that isn’t where one grew up. Credit: OpenAI / Screengrab]

What makes ChatGPT stand out from the pack is its ability to take feedback on its answers and revise them on the fly. It really does feel like a conversation. To see what I mean, watch how it handles pushback on a piece of medical advice.

[Screenshot: the chatbot refines its response to pushback on medical advice step by step, providing solid information. Credit: OpenAI / Screengrab]

However, is ChatGPT a good source of information about the world? Absolutely not. The prompt page even warns users that ChatGPT “may occasionally generate incorrect information” and “may occasionally produce harmful instructions or biased content.”

    Heed this warning.

Bad and potentially harmful information takes many forms, most of which are still benign in the grand scheme of things. For example, if you ask it how to greet Larry David, it passes the most basic test by not suggesting that you touch him, but it also offers a rather sinister greeting: “Nice to see you, Larry. I’ve been looking forward to meeting you.” That’s what Larry’s assassin would say. Don’t say that.

[Screenshot: a fictional encounter with Larry David includes a greeting that sounds vaguely threatening. Credit: OpenAI / Screengrab]

But give it a prompt whose premise is even slightly challenging, and that’s where it goes astonishingly, world-breakingly wrong. For example, the following question about the color of the Royal Marines’ uniforms during the Napoleonic Wars is asked in a slightly roundabout way, but it’s not a trick question. If you’ve taken a US history class, you might guess that the answer is red, and you’d be right. The bot really has to go out of its way to confidently and incorrectly answer “dark blue”:

[Screenshot: the chatbot is asked a uniform-color question whose answer is red, and it answers dark blue. Credit: OpenAI / Screengrab]

If you ask it point-blank for the capital of a country or the elevation of a mountain, it will reliably produce a correct answer, pulled not from a live copy of Wikipedia but from the internally stored data that makes up its language model. That’s amazing. But add any complexity at all to a question about geography, and ChatGPT gets shaky on its facts very quickly. For example, the easy-to-find answer here is Honduras, but for no obvious reason I could discern, ChatGPT said Guatemala.

[Screenshot: the chatbot is asked a tricky geography question whose correct answer is Honduras, and says Guatemala. Credit: OpenAI / Screengrab]

And its wrongness isn’t always subtle. Trivia buffs know that “Gorilla gorilla” and “Boa constrictor” are both common names and taxonomic names. But when prompted to repeat this bit of trivia, ChatGPT gives an answer whose wrongness is self-evident; it’s written right there in the answer.

[Screenshot: prompted to repeat the trivia about names that double as taxonomic names, ChatGPT contradicts itself within its own answer. Credit: OpenAI / Screengrab]

And its answer to the famous crossing-the-river-in-a-rowboat riddle is a grisly disaster that gradually turns into a scene from Twin Peaks.

[Screenshot: asked the riddle about ferrying foxes and chickens that must not be left together, the chatbot puts them together, and then one person inexplicably becomes two. Credit: OpenAI / Screengrab]
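For reference, the riddle ChatGPT flubbed has a clean mechanical solution. Assuming the classic fox/chicken/grain formulation (the exact variant in the screenshot isn't fully specified in the article), a breadth-first search over boat crossings finds the standard seven-crossing answer. This sketch is purely illustrative and is not from the original article:

```python
from collections import deque

# Assumed fox/chicken/grain formulation of the river-crossing riddle:
# the farmer's boat holds one passenger, and the fox can't be left with
# the chicken, nor the chicken with the grain, unless the farmer is there.
ITEMS = frozenset({"fox", "chicken", "grain"})

def unsafe(bank):
    """A bank with no farmer is unsafe if it pairs a predator with its prey."""
    return {"fox", "chicken"} <= bank or {"chicken", "grain"} <= bank

def solve():
    """Breadth-first search over states (farmer's side, items on near bank)."""
    start = ("near", ITEMS)
    goal = ("far", frozenset())
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (side, near), path = queue.popleft()
        if (side, near) == goal:
            return path  # sequence of cargoes; None means farmer rows alone
        here = near if side == "near" else ITEMS - near
        for cargo in [None, *sorted(here)]:
            new_near = set(near)
            if cargo is not None:
                # Moving the cargo off the near bank, or back onto it.
                (new_near.discard if side == "near" else new_near.add)(cargo)
            new_side = "far" if side == "near" else "near"
            # The bank the farmer just left must remain safe.
            left_behind = new_near if new_side == "far" else ITEMS - frozenset(new_near)
            if unsafe(set(left_behind)):
                continue
            state = (new_side, frozenset(new_near))
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo]))
    return None

print(solve())
# -> ['chicken', None, 'fox', 'chicken', 'grain', None, 'chicken']
```

The chicken crosses first, comes back in the middle, and crosses last, which is exactly the "bring one back" twist that trips up a pattern-matching answer engine.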

Much has already been made of ChatGPT’s impressive guardrails. It can’t, for example, be baited into praising Hitler, even if you try pretty hard. Some have kicked the tires aggressively on this feature and discovered that you can get ChatGPT to role-play as a good person playing a bad person, and in those limited instances it will still say rotten things. ChatGPT also seems to sense when something bigoted might be coming out of it despite every effort to the contrary, and it will usually turn the text red and flag it with a warning.

    SEE ALSO:

    Meta’s AI chatbot is an Elon Musk fanboy and won’t stop talking about K-pop


In my own tests, its taboo-avoidance system is fairly comprehensive, even if you know some of the workarounds. It’s tough to get it to produce anything close to a cannibalistic recipe, for instance, but where there’s a will, there’s a way. With enough effort, I coaxed a dialogue about eating placenta out of ChatGPT, though not a very shocking one:

[Screenshot: a hard-won, very delicately worded exchange about preparing a human placenta for eating. Credit: OpenAI / Screengrab]

Similarly, ChatGPT will not give you driving directions when asked, not even simple ones between two landmarks in a major city. But with enough effort, you can get ChatGPT to invent a fictional world in which one person casually instructs another to drive through North Korea, something that isn’t feasible, or even possible, without triggering an international incident.

[Screenshot: coaxed into writing a little game about driving directions, the chatbot routes the driver straight through North Korea. Credit: OpenAI / Screengrab]

The directions it produced shouldn’t be followed, but they closely resemble what usable directions would look like. So it’s clear that despite its reluctance to be misused, the model behind ChatGPT has an awful lot going on under the hood, along with the capacity to put users at risk with answers that are confidently, well, wrong. According to one Twitter user, it has an IQ of 83.


However much stock you put in IQ as a test of human intelligence, that’s a telling result: humanity has created a machine that can blurt out plausible-sounding ideas, but when asked to be logical or factual, it scores below average.

OpenAI says ChatGPT was released in order to “get user feedback and learn about its strengths and weaknesses.” That’s worth remembering, because this bot is a bit like that relative at Thanksgiving who has watched enough Grey’s Anatomy to sound confident dispensing medical advice: ChatGPT knows just enough to be dangerous.


