5 Things CEOs Need To Know About ChatGPT And Generative AI

OBSERVATIONS FROM THE FINTECH SNARK TANK

If you’ve been to any industry conferences this year, you know that ChatGPT and Generative AI—and artificial intelligence, in general—dominate the agendas.

A lot of the content, however, is preachy and vacuous—e.g., “AI is going to be disruptive” or “AI is a game changer.”

CEOs (and other senior executives for that matter) need—and want—more specific viewpoints on what the impact of these new technologies will be and on how to move forward with them.

So here are five things CEOs need to know about ChatGPT and Generative AI:

1) Cost Reduction Is Not The Goal of Generative AI

The early focus of Generative AI tool and technology deployment should be on productivity improvement, specifically process acceleration.

Estimates of staff cutbacks vary by role and position, ranging from 20% to as much as 80%. While there are isolated examples of companies completely (or nearly completely) replacing employees with Generative AI, they’re few and far between—and the results have been less than spectacular.

The impact of Generative AI on business isn’t staff replacement—it’s the acceleration of human productivity and creativity. According to Charles Morris, Microsoft’s Chief Data Scientist for Financial Services: “Don’t think about Gen AI as an automation tool, but as a co-pilot—humans do it, and the co-pilot helps them do it faster.”

From executing marketing campaigns to building websites to writing code for new data models, the benefit of these Generative AI use cases isn’t cost reduction; it’s faster time to market.

2) You Have to Evaluate Large Language Model Risks

Although ChatGPT may be the best-known large language model (LLM) out there (UC Berkeley’s Gorilla and Meta’s Llama are coming on strong), nearly every major technology vendor has an LLM in the works or has recently launched one.

By the end of the decade, you should expect to be relying on anywhere from 10 to 100 LLMs, depending on your industry and the size of your business. There are two things you can bet on: 1) Tech vendors will claim to be incorporating Generative AI technology into their offerings when they really aren’t, and 2) Tech vendors won’t tell you the weaknesses and limitations of their LLMs (if they really have one).

As a result, companies will need to evaluate the strengths, weaknesses, and risks of each model themselves. According to Chris Nichols, Director of Capital Markets at South State Bank:

“There are certain standards that companies should apply to each model. Risk groups need to track these models and rate them on their accuracy, potential for bias, security, transparency, data privacy, audit approach/frequency, and ethical considerations (e.g., infringement of intellectual property, deep fake creation, etc.).”

3) ChatGPT Is to 2023 What Lotus 1-2-3 Was to 1983

Remember the spreadsheet Lotus 1-2-3? Although it wasn’t the first PC-based spreadsheet on the market, when it was introduced in early 1983 it sparked a boom in the adoption of personal computers, and was considered the “killer app” for PCs.

Lotus 1-2-3 also sparked a boom in employee productivity. It enabled people to track, calculate, and manage numerical data like nothing before it. Few people in the working ranks today remember how we (oops—I meant “they”) had to rely on HP calculators to make calculations and then write stuff down.

Despite the huge gain in productivity, there were some issues: 1) Users hardcoded errors into calculations, which caused big problems for some companies; 2) Documentation of the assumptions going into spreadsheets was weak (more like non-existent), creating a lack of transparency; and 3) There was little consistency or standardization in how spreadsheets were designed and used.

The same issues companies wrestled with 40 years ago with Lotus 1-2-3 are present today with ChatGPT and other Generative AI tools: There’s a reliance on ChatGPT’s often-incorrect output, there’s no documentation (or “paper trail”) of how the tool is used, and there’s no consistency in its use across employees in the same department, let alone the same company.

Back in its day, Lotus 1-2-3 spawned a number of plugins that enhanced the spreadsheet’s functionality. Similarly, hundreds of plugins already exist for ChatGPT. In fact, much of the power to generate output like audio, video, programming code, and other forms of non-text output comes from these plugins, not ChatGPT itself.

4) Data Quality Makes or Breaks Generative AI Efforts

Consultants have been urging you to get your internal data house in order for years, and when you start using Generative AI tools you’ll see how well you’ve done. The adage “garbage in, garbage out” was tailor-made for Generative AI.

For LLMs trained on public Internet data, open source or otherwise, you’ve got to be very wary of data quality. While the Internet is a data gold mine, it’s a gold mine sitting in the middle of a data landfill. Stick your hand in for some data, and you won’t be sure whether you’ve got a gold nugget or a handful of garbage.

Companies have wrestled for decades with giving their employees access to the data they need to make decisions and do their jobs. Part of the challenge is having tools to access the data and getting employees trained and up to speed on them.

Generative AI tools abstract away some of the difficulty of using data access and reporting applications. That’s a big benefit (and one reason these new tools help to accelerate human performance).

What’s left, though, is the quality of the data.

Paradoxically, however, you need to stop talking about “data” generically. Instead, evaluate the quality, availability, and accessibility of specific types of data: customer data, customer interaction data, transaction data, financial performance data, operational performance data, and so on.

Each one of these types of data is fodder for Generative AI tools.

5) Generative AI Requires New Behaviors

You can’t ban the use of Generative AI tools. What you can—and should—do is to establish guidelines for their use. For example, require employees to: 1) Document the prompts they use to generate results; 2) Proofread Generative AI output (and prove that they did); and 3) Adhere to internal document guidelines that include the use of keywords, clear headings, graphics with alt tags, short sentences, and formatting requirements.

That’s a tall order, but according to South State Bank’s Nichols, “poorly structured documents cause the bulk of Generative AI inaccuracies.”

Management’s focus will change over the rest of the decade, as well.

Businesses have spent the past 10 years on a “digital transformation” journey, where the focus has been on digitizing high-volume transaction processes like account opening and customer support.

That focus is changing—expanding would be a better word—to enhancing the productivity of knowledge workers in the organization—IT, legal, marketing, etc.

In the short term, you’d be crazy to trust Generative AI tools to run the company without human intervention and oversight. There’s too much bad data leading to too many “hallucinations.”

In the long run, Generative AI will be “disruptive” and a “game changer.” CEOs need to be proactive and take big steps to ensure these disruptions and changes are positive for their organizations.
