Since its release at the end of November, ChatGPT has become an internet phenomenon. In just 5 days it crossed 1 million users, a feat that took Netflix 41 months, Facebook 10 months, and Instagram 2.5 months.
What makes ChatGPT different from existing chatbots is its ability to answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests.
The internet is strewn with examples of people asking ChatGPT for, and receiving, meaningful investment advice, help with generating and debugging code, weight-loss plans, movie scripts, thought-provoking essays, and much more.
ChatGPT is built by OpenAI, whose engineers are behind some other notable Artificial Intelligence (AI) models, such as Codex, which powers GitHub Copilot, and DALL·E 2, an image generator that creates artwork from user-entered prompts.
ChatGPT, which is free for anyone to use, is based on what's known as a large language model, which has been trained on huge quantities of text scooped from the web, from books, and from other sources. It's very adept at synthesizing that material to answer prompts like a human, with stunning and unprecedented eloquence.
While ChatGPT is new, AI at its core isn't. The brain behind ChatGPT is GPT-3.5, a new variant of OpenAI's popular GPT-3 model.
Not so fast
Despite ChatGPT’s unprecedented success, it still has its flaws, primarily because it mimics human-like responses based on statistical probability rather than real understanding. In fact, users have found ChatGPT to often confidently present false information as fact, exhibit biases, and fail at logical reasoning.
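The idea of "responses based on statistical probability" can be made concrete with a toy sketch. The bigram model below is a deliberate simplification, not how GPT-3.5 actually works (that is a large transformer neural network trained on vast corpora), but it shows the core mechanism: pick the next word in proportion to how often it has followed the previous one, with no understanding involved.

```python
import random
from collections import Counter, defaultdict

# Toy "statistical next-word prediction": a bigram model built from a tiny
# made-up corpus. This is an illustration only; real large language models
# are far more sophisticated, but share the same basic goal of predicting
# a plausible next token.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

rng = random.Random(0)  # fixed seed so runs are repeatable

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    return rng.choices(list(counts), weights=list(counts.values()))[0]

# Generate a short continuation starting from "the". The output looks
# vaguely fluent, yet nothing here "knows" what a cat or a mat is --
# it is frequency statistics all the way down.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

A model like this can only reproduce patterns it has seen, which is exactly why fluent output and factual accuracy are two different things.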
ChatGPT’s web interface notes that it has been put online to “get external feedback in order to improve our systems and make them safer.”
Its developers also acknowledge that while the bot has certain guardrails in place, “the system may occasionally generate incorrect or misleading information and produce offensive or biased content.” There are other limitations as well, such as its “limited knowledge” of the world after 2021.
However, even with these limitations, ChatGPT is an impressive bot with an uncanny knack for presenting relevant, and often accurate, answers to queries. Many people have in fact called for systems like ChatGPT to replace traditional search engines.
According to an engineer at Google’s parent company Alphabet, AI-powered systems in their current state take a lot of computing power. This means that while they’d work on a small scale fielding a few million queries, handling billions of user requests daily would be prohibitively expensive, with one AI answer costing 10-100 times more than a typical web search.
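That scale argument is easy to check with back-of-envelope arithmetic. The per-search cost and daily query volume below are round illustrative assumptions, not published figures; only the 10x-100x multiplier comes from the claim above.

```python
# Rough estimate of why LLM-backed search is expensive at scale.
# ASSUMPTIONS (illustrative, not published data): the baseline cost of a
# classic web search and the daily query volume are round numbers; the
# 10x-100x multiplier is the figure cited in the article.
searches_per_day = 8_500_000_000   # assumed daily query volume
cost_per_search = 0.0003           # assumed cost of one classic search, USD

for multiplier in (10, 100):
    llm_cost = cost_per_search * multiplier
    daily = searches_per_day * llm_cost
    print(f"{multiplier:>3}x: ${llm_cost:.3f}/query -> "
          f"${daily / 1e6:,.1f}M per day, ${daily * 365 / 1e9:,.1f}B per year")
```

Even at the low end of the multiplier, the assumed numbers work out to tens of millions of dollars per day, which is why cost, not capability, is the bottleneck cited here.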
There’s no denying the fact that ChatGPT is a tipping point for AI. It’ll help advance the field of AI, and will likely lead to the development of more sophisticated applications.
No surprise then that intrepid entrepreneurs are excited about ChatGPT’s potential and are trying to adapt it to play different roles.
For instance, DoNotPay, which provides a chatbot for legal services, just put out a video of its chatbot conversing with a representative of an internet service provider. The bot complained about poor internet service and successfully negotiated a $10/month discount. DoNotPay’s bot is built on the same AI model that’s at the heart of ChatGPT.
ChatGPT isn’t perfect. But it has an undeniable potential for disrupting all aspects of computer-enabled interaction. It might appear like a novelty at the moment, but we expect to see language-based AI models put to more interesting uses in the very near future.