
In spite of the hype, many companies are moving cautiously when it comes to generative AI


Vendors would have you believe that we are in the midst of an AI revolution, one that is changing the very nature of how we work. But several recent studies suggest the truth is much more nuanced than that.

Companies are extremely interested in generative AI as vendors push its potential benefits, but turning that interest from a proof of concept into a working product is proving far more challenging: They're running up against the technical complexity of implementation, whether that stems from technical debt in an older technology stack or from simply lacking people with the appropriate skills.

In fact, a recent study by Gartner found that the top two barriers to implementing AI solutions were difficulty estimating and demonstrating value (cited by 49% of respondents) and a lack of talent (42%). Those two hurdles could prove to be key obstacles for many companies.

Consider that a study by LucidWorks, an enterprise search technology company, found that just 1 in 4 of those surveyed reported successfully implementing a generative AI project.

Aamer Baig, a senior partner at McKinsey and Company, speaking at the MIT Sloan CIO Symposium in May, said a recent survey by his firm also found that just 10% of companies are implementing generative AI projects at scale. He also reported that just 15% were seeing any positive impact on earnings. That suggests the hype may be running far ahead of the reality most companies are experiencing.

What’s the holdup?

Baig sees complexity as the primary factor slowing companies down: Even a simple project can require 20 to 30 technology elements, with the right LLM being just the starting point. Companies also need things like proper data and security controls, and employees may have to learn new skills such as prompt engineering and how to implement IP controls, among other things.

Ancient tech stacks can also hold companies back, he says. “In our survey, one of the top obstacles that was cited to achieving generative AI at scale was actually too many technology platforms,” Baig said. “It wasn’t the use case, it wasn’t data availability, it wasn’t path to value; it was actually tech platforms.”

Mike Mason, chief AI officer at consulting firm Thoughtworks, says his firm spends a lot of time getting companies ready for AI — and their current technology setup is a big part of that. “So the question is, how much technical debt do you have, how much of a deficit? And the answer is always going to be: It depends on the organization, but I think organizations are increasingly feeling the pain of this,” Mason told TechCrunch.

It starts with good data

A big part of that readiness deficit is data, with 39% of respondents to the Gartner survey citing a lack of data as a top barrier to successful AI implementation. “Data is a huge and daunting challenge for many, many organizations,” Baig said. He recommends focusing on a limited set of data with an eye toward reuse.

“A simple lesson we’ve learned is to actually focus on data that helps you with multiple use cases, and that usually ends up being three or four domains in most companies that you can actually get started on and apply it to your high-priority business challenges with business values and deliver something that actually gets to production and scale,” he said.

Mason says a big part of being able to execute AI successfully is related to data readiness, but that’s only part of it. “Organizations quickly realize that in most cases they need to do some AI readiness work, some platform building, data cleansing, all of that kind of stuff,” he said. “But you don’t have to do an all-or-nothing approach, you don’t have to spend two years before you can get any value.”

When it comes to data, companies also have to respect where the data comes from — and whether they have permission to use it. Akira Bell, CIO at Mathematica, a consultancy that works with companies and governments to collect and analyze data related to various research initiatives, says her company has to move carefully when it comes to putting that data to work in generative AI.

“As we look at generative AI, certainly there are going to be possibilities for us, and looking across the ecosystem of data that we use, but we have to do that cautiously,” Bell told TechCrunch. That’s partly because the firm handles a lot of private data under strict data use agreements, and partly because it sometimes works with vulnerable populations and has to be cognizant of that.

“I came to a company that really takes being a trusted data steward seriously, and in my role as a CIO, I have to be very grounded in that, both from a cybersecurity perspective, but also from how we deal with our clients and their data, so I know how important governance is,” she said.

She says that right now it’s hard not to feel excited about the possibilities generative AI brings to the table; the technology could give her organization and its customers significantly better ways to understand the data they are collecting. But it’s also her job to move cautiously without getting in the way of real progress, a challenging balancing act.

Finding the value

Much like when the cloud was emerging a decade and a half ago, CIOs are naturally cautious. They see the potential that generative AI brings, but they also need to take care of basics like governance and security. They also need to see real ROI, which is sometimes hard to measure with this technology.

In a January TechCrunch article on AI pricing models, Juniper CIO Sharon Mandell said that it was proving challenging to measure return on generative AI investment.

“In 2024, we’re going to be testing the genAI hype, because if those tools can produce the types of benefits that they say, then the ROI on those is high and may help us eliminate other things,” she said. So she and other CIOs are running pilots, moving cautiously and trying to find ways to measure whether there is truly a productivity increase to justify the increased cost.

Baig says that it’s important to have a centralized approach to AI across the company and avoid what he calls “too many skunkworks initiatives,” where small groups are working independently on a number of projects.

“You need the scaffolding from the company to actually make sure that the product and platform teams are organized and focused and working at pace. And, of course, it needs the visibility of top management,” he said.

None of that is a guarantee that an AI initiative is going to be successful or that companies will find all the answers right away. Both Mason and Baig said it’s important for teams to avoid trying to do too much, and both stress reusing what works. “Reuse directly translates to delivery speed, keeping your businesses happy and delivering impact,” Baig said.

However companies execute generative AI projects, they shouldn’t become paralyzed by the challenges related to governance, security and technology. But neither should they be blinded by the hype: There are going to be obstacles aplenty for just about every organization.

The best approach could be to get something working that shows value and build from there. And remember that, in spite of the hype, many other companies are struggling, too.


