IBM and Watson 2.0: A progress report

By Tony Baer, dbInsight and Merv Adrian, IT Market Strategy

It’s been barely six months since IBM unveiled the new watsonx family of products targeting enterprise clients, AI builders, data scientists, and data professionals. Since May, IBM has made generally available all three pillars of the new AI lifecycle stool: watsonx.ai for AI builders; watsonx.data, the data lakehouse for data professionals; and just now, the last major piece: watsonx.governance for overseeing bias, ethics, risk, and compliance issues across the lifecycle. And to boot, we saw the logo slide showing over three dozen clients and partners that have already signed on to watsonx. An ecosystem is building, and customers are buying into watsonx, just months out of the gate.

Coming back from a day spent with IBM reviewing its AI and quantum computing strategy at its Thomas J. Watson Research Center, our overriding impression was that the company has gotten the message on the need to pick up the pace of introducing products and innovation.

It shouldn’t be surprising that a big part of the story was organizational. For a technology provider as long-lived and diverse as IBM, silos have long been part of the picture. It’s not that IBM couldn’t put products together; in the past, we saw great examples of where it fused capabilities spanning multiple acquisitions such as Lotus, Rational, and Tivoli into coherent products.

So it was significant when Dr. Dario Gil, who heads IBM Research, and Rob Thomas, who leads IBM Software, stood up on stage together and described how their organizations now align around watsonx, and how the company’s research labs are now reading off a common sheet of AI music. It was also significant when Thomas underlined his policy of keeping his product organization strictly devoted to product development rather than overlapping with the marketing organization to develop marketing.

These efforts have been prompted by an urgency to commercialize research coming out of the labs. With fast-moving technologies such as AI, where new industry giants like OpenAI, with its reported $80 billion pre-IPO valuation, can suddenly change the pecking order, getting innovation out to market fast becomes a matter of both relevancy and survival.

Unless you’ve been living under a rock, it’s been hard to miss the generative AI theme that has come to dominate the discussion this year. And effective commercialization is not an unfamiliar IBM theme – we have heard it said several times in the past few decades. It was supported here with timelines showing the release of numerous watsonx deliverables (some of which were under NDA) on a very aggressive schedule that will complete a deep stack for the delivery of business outcomes. We are accustomed to discussing the deepening data stack these days, and IBM has already had much to say about that, with more in the offing. However, IBM’s deeper stack presented at this event subsumes that one, as it spans from silicon to applications, with trusted offerings – even indemnification of its delivered models – along with development and deployment capabilities that span all of it.

To some extent, IBM’s strategy echoes that of its partners, frenemies, and outright rivals. Each of the major hyperscaler and enterprise technology providers is announcing its various partnerships with the OpenAIs, Anthropics, and Hugging Faces of the world; most are in the process of creating marketplaces or exchanges of foundation models, and most are either busily partnering with Nvidia and/or developing their own silicon as second sources for the optimized processors that will perform the heavy lifting. Most of them also have their own cloud infrastructures, AI and data development and deployment platforms, and their own R&D operations, to boot.

Watsonx.ai, essentially an IDE – running on-premises and in the cloud – showcases IBM’s focus on delivery, not vague science projects. It leverages a foundation model library, prompt lab, tuning studio, data science components, and MLOps. It works with those partners in typical IBM fashion – supporting models like Llama 2 from Meta and a giant library from Hugging Face (in which IBM invested), as well as company-created models, like IBM Granite. “We don’t care where the models come from, but ours are good,” was the message at the IBM analyst event.

IBM’s differences are evident. Unlike the hyperscalers, its offerings are designed to work almost anywhere: on-premises, in hybrid cloud, and/or any of the public clouds. That’s largely attributable to IBM’s Red Hat OpenShift cloud deployment environment. Even more significant is that, compared to the hyperscalers, IBM has far more troops on the ground close to its clients. Its 160,000-person global consulting organization gives IBM more visibility in its customers’ organizations and intimate knowledge of their business requirements.

While hyperscalers stop short of tailoring their data and AI platforms to the needs of different vertical or industry domains, IBM is already taking that concept further and will continue to do so. In fact, we wouldn’t be surprised if, in the future, IBM gets prescriptive about fitting the right foundation models, schemas, and best practices to the particular needs of different industries.

And one more thing. The company’s delivery focus extends to deployment and modernization as well – using these tools at last to enhance delivery and optimization of existing and new systems, for example, using Red Hat’s Ansible to enhance playbooks (with a Canadian bank already in deployment). This opens an entirely new conversation. Returning to the lab where the Jeopardy Watson was born, the other side of the story is the lessons that IBM learned from the last go-round. The original Watson was a highly complex system that proved costly to customize. It was, in former CEO Ginni Rometty’s words, IBM’s “Riverboat Gamble” on AI. A gamble it was, as IBM strove to give AI “cognitive” capabilities at a time when the world was just starting to get up to speed on machine learning.

To our knowledge, few if any first-generation Watsons successfully made it to commercial production. Since then, Watson has survived as a brand – not a unitary product. On this go-round with watsonx, IBM chose to focus both on ease of use and actual customer requirements. Where many other vendors view existing, mission-critical systems in operation as technical debt to be eliminated, IBM instead offers to service that debt – treating it rather as an asset whose value persists and can be enhanced. And it all starts, as it should, with data. Watsonx.governance provides what Rob Thomas called “nutrition labeling for your data,” and Dario Gil noted that beyond indemnification, explanations of models suitable for regulatory reporting are part of the package.

This is a far cry from the company’s past slow momentum, and it will be accelerated by a newly created team of 500 success engineers who will fan out into and beyond the IBM customer base to accelerate design and delivery. Dare we call it Watson 2.0?
