Sam Altman Back As CEO

Consumers Win As ChatGPT Maker Rights Ship

At least for now, our future’s so bright, we gotta wear shades

Avid users of ChatGPT scored a major victory last week as Sam Altman reclaimed his title as CEO of ChatGPT's maker, OpenAI.

The definitive end to OpenAI's short-lived power struggle, which lasted five days and transfixed AI insiders the world over, virtually guarantees ChatGPT will continue as one of the most successful software products in human history.


Gone are worries that a prolonged 'game of thrones' at OpenAI might have seriously impeded its healthy growth, along with the steady stream of ChatGPT upgrades and new features that avid users have come to expect.

Gone, too, are worries that without Altman, a new and much more timid OpenAI CEO might have slowed the evolution of ChatGPT so considerably that the product would ultimately be crowned 'could have been a contender.'

And gone are the worries that, had Altman not returned, the roughly 95% of employees who threatened to quit would have left the ChatGPT maker a hollowed-out remnant of its former self.

Instead, Altman arrives back at OpenAI — in the eyes of many — a much stronger CEO.

Widely supported by 700+ OpenAI employees, Altman now has much freer rein to continually enhance the company's AI, and to transform those gains into ever-more-sophisticated iterations of ChatGPT and similar products.

In addition, Altman has neutralized three of the four members of OpenAI's former board who voted to toss him overboard last week.

Ilya Sutskever, Tasha McCauley and Helen Toner — who reportedly tried to deep-six Altman as a way to put the brakes on the company’s fast-track development of AI — no longer have seats on OpenAI’s newly reconstituted board.

Of course, long-term, we'll only know in hindsight whether this resolution of OpenAI's upheaval turns out to be good for the planet.

The reason: Thinkers like Sutskever, McCauley and Toner, loosely identified as 'effective altruists,' reportedly tried to oust Altman out of fear that his breakneck development of AI could lead to the 'paper clip' nightmare scenario.

Under that doomsday vision, or one like it, an extremely advanced AI system is instructed so poorly by humans to 'produce as many paper clips as possible' that it completely exhausts the earth's resources in a manic, hell-bent, single-minded attempt to achieve ultimate paper clip dominance.

Granted, the effective altruists have a point here.

It’s impossible to disagree that the human race needs to be cautious in how it handles the evolution of AI — given the stakes.

But the problem with effective altruists is that their solution is naive: slow down, or even halt, the enhancement of AI until all of mankind can be absolutely certain the proper guardrails are in place.

It's a Kumbaya fix that, in theory, sounds awesome.

But as Bruce Springsteen once sang, "If dreams came true, aw, wouldn't that be nice."

Here in the real world, the fatal flaw of effective altruism is that while some of the most brilliant minds in AI play patty-cake, endlessly debating what's safe and what's not, plenty of other players couldn't care less about safety.

These 'what-me-worry' AI programmers, under the thumbs of charmers like China's Xi Jinping, Russia's Vladimir Putin, Iran's Ali Khamenei and North Korea's Kim Jong Un, along with any number of similar programmers serving similar, sordid strongmen, are not pausing for a nanosecond to fret over prudence.

Instead, these pawns of despots wake up every day driven by a single-minded, take-no-prisoners directive: develop the most powerful version of AI possible, worldwide consequences be damned.

Bottom line: Yes, in a perfect world, finding a way to continually enhance AI while ensuring the technology does not end up destroying or enslaving the human race is a wonderful, inspiring, pragmatic ideal everyone should take seriously.

But nearly destroying OpenAI, the world's leader in AI, in pursuit of that hope, as the effective altruists on its former board almost did, is neither effective nor altruistic.


Instead, it simply robs writers and others of a product that millions have deemed ‘magical.’

Moreover: Writ large, effective altruism would ultimately leave high-minded countries at the mercy of snickering, despotic regimes that develop AI by any means necessary and end up with the world's most powerful AI.

Essentially: Play the game like an effective altruist, and you'll find yourself helplessly subservient to 'humanity-be-damned' regimes that, when it comes to AI, hold all the cards.

Might as well bring a knife to a gunfight.



Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years' experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.
