5 jaw-dropping things GPT-4 can do that ChatGPT couldn’t
By Samantha Murphy Kelly, CNN Business
Within a day of its unveiling, GPT-4 had stunned many users in early tests and a company demo with its ability to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.
On Tuesday, OpenAI announced the next-generation version of the artificial intelligence technology that underpins its viral chatbot tool, ChatGPT. The more powerful GPT-4 promises to blow previous iterations out of the water, potentially changing the way we use the internet to work, play and create. But it could also add to challenging questions around how AI tools can upend professions, enable students to cheat, and shift our relationship with technology.
GPT-4 is an updated version of the company’s large language model, which is trained on vast amounts of online data to generate complex responses to user prompts. It is now available via a waitlist and has already made its way into some third-party products, including Microsoft’s new AI-powered Bing search engine. Some users with early access to the tool are sharing their experiences and highlighting some of its most compelling use cases.
Here’s a closer look at the potential of GPT-4:
Analyzing more than text
At its core, the biggest change to GPT-4 is its ability to work with photos that users upload.
One of the most jaw-dropping use cases so far came from an OpenAI video demo that showed how a drawing could be turned into a functional website within minutes. The demonstrator uploaded a photo of the hand-drawn sketch to GPT-4, which returned the code for the page; pasted into a preview, that code rendered as a working website.
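For readers curious about the mechanics, here is a rough sketch of how a developer might attempt something similar through OpenAI’s API once image uploads are available. The model name and the image-input format are assumptions based on OpenAI’s published chat API, not the exact setup used in the demo.

# Hypothetical sketch of the sketch-to-website demo using OpenAI's Python SDK.
# The model name and image-input availability are assumptions.
import base64
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Encode the photo of the hand-drawn mockup so it can be sent inline.
with open("napkin_sketch.jpg", "rb") as f:
    sketch_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumed image-capable model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Turn this hand-drawn sketch into a single HTML file "
                     "with working JavaScript. Return only the code."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{sketch_b64}"}},
        ],
    }],
)

# Save the generated markup and open it in a browser to preview the site.
with open("site.html", "w") as f:
    f.write(response.choices[0].message.content)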
In its announcement, OpenAI also showed how GPT-4, asked to explain a joke from a series of images featuring a smartphone with the wrong charger, described why the setup was funny. While that might sound straightforward, dissecting a joke is harder for artificial intelligence tools because it requires context the images only imply.
In another test, The New York Times showed GPT-4 a picture of the interior of a refrigerator and prompted it to come up with a meal based on the ingredients.
The photos feature isn’t live yet, but OpenAI is expected to roll it out in the coming weeks.
Coding made even easier
Some early GPT-4 users with little to no prior coding knowledge have also used it to recreate iconic games such as Pong, Tetris or Snake by following the tool’s step-by-step instructions. Others have made their own original games. (GPT-4 can write code in all major programming languages, according to OpenAI.)
“The powerful language capabilities of GPT-4 will be used for everything from storyboarding, character creation to gaming content creation,” said Arun Chandrasekaran, an analyst at Gartner Research. “This could give rise to more independent gaming providers in the future. But beyond the game itself, GPT-4 and similar models can be used for creating marketing content around game previews, generating news articles and even moderating gaming discussion boards.”
Similar to gaming, GPT-4 could change the way people develop apps. One user on Twitter said they made a simple drawing app in minutes, while another claimed to have coded an app that recommends five new movies every day, along with trailers and details on where to watch them.
“Coding is like learning how to drive — as long as the beginner gets some guidance, anyone can code,” said Lian Jye Su, an analyst at ABI Research. “AI can be a good teacher.”
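For readers who want to see what that guidance looks like in practice, here is a minimal, hypothetical example of asking GPT-4 through OpenAI’s Python SDK for a playable game along with beginner-friendly setup instructions. The model name and the prompt are illustrative assumptions, not what any particular user typed.

# Hypothetical example: ask GPT-4 for a complete Snake game plus the
# instructions a beginner would need to run it. Model name is an assumption.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a patient coding tutor for complete beginners."},
        {"role": "user",
         "content": "Write a complete Snake game in Python using pygame, "
                    "then give numbered, step-by-step instructions for "
                    "installing Python and pygame and running the game."},
    ],
)

# The reply contains both the game code and the setup steps; a beginner
# would copy the code into a file such as snake.py and follow the steps.
print(response.choices[0].message.content)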
Passing tests with flying colors
Although OpenAI said the update is “less capable” than humans in many real-world scenarios, it exhibits “human-level performance” on various professional and academic tests. The company said GPT-4 recently passed a simulated law school bar exam with a score around the top 10% of test takers. By contrast, the prior version, GPT-3.5, scored around the bottom 10%. The latest version also performed strongly on the LSAT, GRE, SATs and many AP exams, according to OpenAI.
In January, ChatGPT made headlines for its ability to pass prestigious graduate-level exams, such as one from the University of Pennsylvania’s Wharton School of Business, but not with particularly high marks. The company said it spent months using lessons from its testing program and ChatGPT to improve the system’s accuracy and ability to stay on topic.
Providing more precise responses
Compared to the prior version, GPT-4 is able to produce longer, more detailed and more reliable written responses, according to the company.
The latest version can now give responses of up to 25,000 words, up from about 4,000 previously, and can provide detailed instructions for even unusual scenarios, from how to clean a piranha’s fish tank to how to extract DNA from a strawberry. One early user said it provided in-depth suggestions for pickup lines based on a question listed on a dating profile.
Streamlining work across various industries
Joshua Browder, CEO of legal services chatbot DoNotPay, said his company is already working on using the tool to generate “one click lawsuits” to sue robocallers, in an early indication of the vast potential for GPT-4 to change how people work across industries.
“Imagine receiving a call, clicking a button, [the] call is transcribed and 1,000 word lawsuit is generated. GPT-3.5 was not good enough, but GPT-4 handles the job extremely well,” Browder tweeted.
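DoNotPay has not published its implementation, but the pipeline Browder describes maps onto two widely available API calls: speech-to-text on the recorded call, followed by a GPT-4 prompt that turns the transcript into a draft complaint. The sketch below is an assumption about how such a flow could be wired together, not DoNotPay’s actual code.

# Hypothetical sketch of the "one click lawsuit" flow Browder describes:
# transcribe a recorded robocall, then ask GPT-4 to draft a complaint.
# File names, prompts and the whisper-1/gpt-4 model choices are assumptions.
from openai import OpenAI

client = OpenAI()

# Step 1: transcribe the recorded call with OpenAI's speech-to-text API.
with open("robocall.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
    ).text

# Step 2: turn the transcript into a roughly 1,000-word draft lawsuit.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Using this transcript of an illegal robocall, draft a "
                   "roughly 1,000-word small-claims complaint against the "
                   f"caller:\n\n{transcript}",
    }],
)

print(response.choices[0].message.content)  # a draft for a lawyer to review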
Meanwhile, Jake Kozloski, CEO of dating site Keeper, said his company is using the tool to better match its users.
According to Su at ABI Research, it’s possible we’ll also see major advancements in “connected car [dashboards], remote diagnosis in healthcare, and other AI applications that were previously not possible.”
A work in progress
Although the company has made vast improvements to its AI model, GPT-4 has similar limitations to previous versions. OpenAI said the technology lacks knowledge of events that occurred after its data set cuts off (September 2021) and does not learn from its experience. It can also make “simple reasoning errors,” be “overly gullible in accepting obvious false statements from a user” and fail to double-check its work, the company said.
Gartner’s Chandrasekaran said this is also reflective of many AI models today. “Let us not forget that these AI models aren’t perfect,” Chandrasekaran said. “They can produce inaccurate information from time to time and can be black-box in nature.”
For now, OpenAI said GPT-4 users should take “great care,” particularly “in high-stakes contexts.”
The-CNN-Wire
™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.