Promising — But Not Perfect
ChatGPT’s latest hip-check to Google — its own search engine — is delighting untold numbers across the Web.
Accessible with a simple click on a new globe icon in the ChatGPT message box, the new tool brings back summaries of searches for you — complete with hotlinks to the sources of the summaries.
The only problem: Sometimes those summaries get the facts wrong — even while linking to news stories and other content that directly contradict those summaries.
Bottom line: The new tool will probably be used by some writers as a quick, down-and-dirty way to generate a rough text draft from a Web search.
But as far as trusting that rough draft to be completely accurate: Not so much.
Put another way: ChatGPT can be an incredibly powerful auto-writer, as long as you use the right prompts — and as long as you strictly limit its writing to facts that you know are true.
For a great demo of how to use ChatGPT’s new Web search tool, check out this video:
*How To Use ChatGPT Search
For real-world looks at how SearchGPT can get it wrong, check out these videos:
*ChatGPT Search Tested
*ChatGPT with Search
In other news and analysis on AI writing:
*In-Depth Guide: Apple Intelligence’s New Writing Tools: Slick on Interface, Less So on Brains: PC Magazine offers an in-depth look at how to use Apple Intelligence’s new writing tools.
Capabilities include AI-powered writing, rewriting, summarization and proofreading.
One caveat: Despite the gaga reception many are giving the release of the tools, it turns out they’re much less powerful than the AI writing available from industry leaders like ChatGPT, Gemini and Claude.
*ChatGPT’s Truth-O-Meter: Mostly Set to Fiction: While ChatGPT’s maker freely admits that the AI may make up facts, a new study finds that ChatGPT actually gets the facts wrong most of the time.
The report — issued by the chatbot’s maker, OpenAI — found that ChatGPT’s new o1 AI engine only came back with correct answers 43% of the time.
Observes writer Matthias Bastian: “Anthropic’s Claude models (competitors to ChatGPT) performed even worse. Their top model, Claude-3.5-Sonnet, got 28.9% right and 36.1% wrong.”
The takeaway: Among other wonder uses, ChatGPT and its close competitors are incredibly powerful auto-writing tools.
But they’re woefully inadequate as research tools.
*Memory Bonanza: All Your Old ChatGPT Chats Accessible Soon: Avid ChatGPT users, rejoice: All those gems of insight buried in your old chats with ChatGPT will soon be easily accessible.
This piece in Tom’s Guide offers detail on how you’ll get to them.
Observes writer Ryan Morrison: “A new magnifying glass icon at the top of the sidebar will open a search box.
“And from there, you can see your history, start a new chat, or search for a specific chat you’ve previously created.”
*Pro Prompts: Or How To Unleash Your Inner Einstein: Writer Aditya Kumar offers an in-depth look at high-level tools designed to offer you killer prompts to use with ChatGPT and its competitors.
Tools detailed include:
~OpenPrompt
~AIPRM
~PromptBase
~PromptChainer
*School Policies on AI: For Many, Cross Fingers, Hope for Best: When it comes to the dos and don’ts regarding AI use in schools, many instructors are flying blind.
Observes lead writer Steph Machado: “Two years after ChatGPT became widely available, states have been slow to roll out guidance on the use of artificial intelligence.
“That leaves many teachers and schools to grapple with AI on their own.”
*Gartner: 2025 Will See the Rise of AI Agents: Expect to see hordes of writers and others using AI agents to automate much of their everyday workflows in 2025, according to IT consulting firm Gartner.
Initially, AI agents will be used to automate the most mundane of repetitive tasks, according to writer Taryn Plumb.
But ultimately, AI agents will also be elevated to the role of digital co-worker, enabling them to make ever-more-impactful business decisions sans constant human oversight.
*Free AI: The Ultimate Winner?: Writer Matt Marshall reports that open source AI — freely available for download from the Web — may become the preferred AI for the world’s companies.
Observes Marshall: “While closed models like OpenAI’s GPT-4 dominated early adoption, open source models have since closed the gap in quality, and are growing at least as quickly in the enterprise.”
Facebook parent Meta has been a leader in offering open source AI with its own AI engine, Llama.
Meta is betting that the real money will be in designing applications that run atop Llama.
*AI Killed the Radio Star: The days when early AI adopters took great pains to pretend the tech would never replace humans are apparently far behind us.
Case in point: A Polish radio station recently fired all of its journalist announcers — quickly replacing them with AI-generated ‘presenters.’
Observes writer Carla St. Louis: “The station, based in Krakow, recently re-launched with three AI avatars, in hopes of attracting younger listeners to talk about cultural, art and social topics like LGBTQ+ issues.”
*Your Face, Their Chatbot: Drew Crecente found out the hard way that anyone can steal your image these days and turn it into a Web chatbot.
Specifically: Someone pirated an image of Crecente’s deceased daughter and callously used it to turn her into a video game journalist chatbot, courtesy of Character.ai.
While the chatbot has since been deleted, “this enforcement was just a quick fix in a never-ending game of whack-a-mole in the land of generative AI, where new pieces of media are churned out every day using derivatives of other media scraped haphazardly from the Web,” according to writer Megan Farokhmanesh.
*Flawed Berkeley Study Concludes Humans More Creative Writers Than AI: A poorly designed Berkeley study has misleadingly concluded that ChatGPT and its competitors are less creative than their human counterparts.
In her study, researcher Nina Begus gave both the AI and the humans a single prompt to write a short story.
The problem: As any AI insider has known for years, you need to guide an AI’s writing with a few more prompts along the way to fully tap into its motherlode of creativity.
Essentially, with the study, it was as if Begus placed a human and a Ferrari on a high school running track, turned the ignition key and then fired the starting gun.
No surprise: Under that scenario, the human would easily beat the Ferrari, since the race car would be stuck in park as the human casually loped around the race track to victory.
However, if you had put a human behind the wheel of the Ferrari, started the engine and then fired the starting gun, the driver would have thrown the Ferrari into drive, gunned the gas pedal and actually been a participant in the race.
The question here, in each case — ChatGPT and Ferrari — is which is the fairest way to test a technology:
- The way the technology was designed to be operated?
- A method that deliberately undermines the way that technology was designed to be operated?
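For readers curious what “guiding the AI with a few more prompts along the way” looks like in practice, here’s a minimal sketch. It assumes the widely used chat-message format (a list of role/content pairs, as in OpenAI-style chat APIs) and a hypothetical helper, guided_story_prompts, invented for illustration; it builds the turn-by-turn prompts a writer might send, rather than firing one single prompt at the model and hoping for the best.

```python
# Sketch: multi-turn "guided" prompting vs. a single one-shot prompt.
# The message format ({"role": ..., "content": ...}) mirrors common
# chat APIs; guided_story_prompts is a hypothetical helper for this demo.

def guided_story_prompts(premise: str) -> list[dict]:
    """Build the sequence of user prompts a writer might send,
    steering the draft step by step instead of in one shot."""
    turns = [
        "Brainstorm three unexpected angles on this premise: " + premise,
        "Pick the most original angle and outline a short story around it.",
        "Draft the opening scene from that outline, in a vivid voice.",
        "Revise the draft: tighten the prose and sharpen the ending.",
    ]
    # Each turn becomes one user message in the running conversation.
    return [{"role": "user", "content": t} for t in turns]

prompts = guided_story_prompts("a Ferrari races a human on foot")
for p in prompts:
    print(p["content"])
```

In a real session, each of these prompts would be sent in order, with the model’s reply kept in the conversation history before the next one, which is the “human behind the wheel” approach the Ferrari analogy above describes.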
*AI Big Picture: AI Our Next Nukes?: The chances that China and similar rogue nations will out-militarize the U.S. when it comes to AI just got a bit more remote.
Reuters reports the Biden Administration is pushing the U.S. military to embed AI into its systems, while simultaneously ensuring that the technology remains fully controllable by humans.
Says Jake Sullivan, White House national security advisor:
“We have to get this right, because there is probably no other technology that will be more critical to our national security in the years ahead.”
Share a Link: Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com helps everyone interested in AI-generated writing.
–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.