UPDATED 22:08 EDT / DECEMBER 17 2023


What do we have after the OpenAI-nniversary? Not much more than palace intrigue and another finger

Since the one legitimate technology revolution of the century, the convergence of mature and networked software interfaces with GPS-enabled smartphone cameras, we’ve been waiting for the next technology ship to come in.

It wasn’t 3D printing, which has mostly disrupted the ghost gun industry. Augmented reality technology hasn’t gotten there either, though it did briefly manage to put a Pokémon on every city block. Crypto and blockchain adherents will tell you that decentralization will finally lead to the Mad Max Bartertown world we’ve always wanted, though it has mostly been effective at temporarily transferring great amounts of wealth and JPEGs between the idle rich.

These promised technology revolutions have all been, to some extent, well, lacking. The greatest hopes and fears around these can’t-miss, soon-to-be-pervasive, absolutely transformative tech trends weren’t even close to realized. That’s not to say they still cannot be, latching onto the right moment in time or mixed with the right additive to change our relationship with technology or society, but the time scale is looking closer to decades than years.

All these hype trains that missed their stops pale in comparison with what we’ve experienced in the last year with artificial intelligence, generative AI, large language models, deep learning, machine learning or whatever you want to call it. We’ve just passed 1 A.C. (1 Year After ChatGPT), and after countless dire headlines, what do we really have to show for it?

Well, we have some idea of how a foreign intelligence would perceive us, even if it has just been fed the data of our own creation, and based on what DALL-E’s text-to-image model generates, it seems to think a sixth finger would really come in handy. It can do a passable impersonation of a Drake or Grimes song if you strip those songs of any shred of humanity, and sure, if you think about it, that’s what those songs really needed. We also have our first fully AI singer, and the singularity looks and sounds a lot like the sexy baby Taylor Swift was so worried about.

In terms of how we might actually use AI in our day-to-day lives, it certainly ain’t Skynet. Some estimates put generative AI at less than 1% of enterprise cloud spend, far from a day-to-day driver. On the consumer side, it’s being used not much differently from how we’ve already used personal assistants such as Siri and Alexa for the last several years: guiding consumers to generate cooking ideas, shopping ideas and other ways to liberate money from their wallets.

Ah, yes, liberating money from wallets. Now we’re getting somewhere. There’s money to be made from the mere idea of the promise and peril AI technology represents. As with the previous tech hype trains, there’s enough money in just the glimmer of that future, in investing in the theory of its inevitability, no matter what actually comes to pass.

Just check how many times the phrase “if you build it, they will come” is used in the tech hustle-and-grind blog of your choice. I love you, Silicon Valley, but sometimes you are not serious people.

At least OpenAI has tangibly given us the all-too-human drama of the departure and return of Sam Altman (pictured), like Steve Jobs’ return to Apple for the TikTok-and-Adderall generation. I saw one tech influencer breathlessly describe this palace drama as the “wildest five days in AI,” and if the recursive five-day loop of firing, replacing and then rehiring one research organization’s CEO is the wildest five days in a given industry, what is that industry actually producing, really?

With pretty much every iteration, OpenAI has warned us to pay no attention to the supercomputer behind the curtain, insisting that the results would be disastrous for us as a species if it were released. In reality, those iterations have been released, and the results have been… mostly functional to amusing, especially when the AI’s “hallucinations” spit out Dadaist sentence constructions.

We have a new version of some high-functioning cut-and-paste that also needs to be vetted more strictly than your standard Wikipedia entry. It’s interesting for coders in reducing or eliminating boilerplate drudgery, and maybe for the rest of us too as we think about constructing written language or incorporating ideas into other forms of art, but that’s about it.

Before we get carried away with the prospect of an AI apocalypse (noted fact enthusiast Elon Musk has called it worse than actual nuclear war), maybe we should lean on good old-fashioned human skepticism, the kind many neglected during the rise of next-generation “disrupters” such as Elizabeth Holmes and Sam Bankman-Fried.

Perhaps what the new AI “wave,” at least here at the end of 2023, really represents is the all-too-human tendency to fret about the apocalypse that never comes and get too high on the prospect of a utopia that never does either. Wrapped up in those notions too, and specifically AI, is the false promise that the hard work of keeping a society running, and profiting from it, can all be done without actually doing any of the work. Just pay the smartest kids in the room to build the perfect algorithm, hit the “start” button, and it’s beach reading forever.

The prospect of AI is also more beguiling than the standard-issue tech hype because it also speaks to deeper existential questions if we allow it. What constitutes a fulfilling life? What constitutes a successful life? What constitutes a life that not only meets some kind of basic moral prerequisite, but is also somehow rooted in leaving the world better than we found it? How does the notion of “work” fit into that?

Take away the drivers of human agency as we’ve known them for the last few centuries, and what do we have left? What was the point of spending hours on something that a machine could do in fractions of a second?

Even for a large language model trained on infinite data sources, that is something AI is never going to be able to answer for us, no matter how badly we might want it to. As the German philosopher Immanuel Kant said, out of the crooked timber of humanity, no straight thing was ever made, and even at its most optimistic, all AI might be able to do better and faster than us is sort the crooked timber. Where we go from there is still a responsibility that falls squarely on our shoulders, and our shoulders alone.

But it’ll do as the focus of the latest and greatest distractive tech news cycle until the flying cars get here. Wait a minute, I’m now just hearing Sam Altman has been fired again….

Ian Chaffee is a technology and startup media relations consultant based in Los Angeles who has worked with AI brands and researchers. He wrote this article for SiliconANGLE.

Photo: Sam Altman/Twitter
