Generative AI roundup: IBM, Wolters Kluwer and others offer new products and services
Each day, more companies add generative artificial intelligence assistants to their products and services. This week, some of the biggest names in software announced integrations and implementations they say are safe and transparent uses of AI, while a healthcare IT research and advisory firm published guidance on automation, ethics and trust.
IBM introduces watsonx legal protections
To ease customer apprehension about the use of generative AI, IBM announced that it will indemnify clients against intellectual property claims related to IBM-developed watsonx models.
The company’s standard contractual intellectual property protections for IBM products will apply to its specialized Granite models, which apply generative AI to language and code, according to its announcement last week.
Clients can develop AI applications using their own data along with the client protections afforded by IBM foundation models.
IBM also said that it would publish its underlying training data sets.
Trained on business-relevant datasets spanning internet, academic, code, legal and finance sources, IBM-developed foundation models are curated for business use.
“When it comes to today’s AI innovation boom, the businesses that are positioned for success are the ones outfitted with AI technologies that demonstrate success at scale and have built-in guardrails and practices that enable their responsible use,” Dinesh Nirmal, IBM Software’s senior vice president of products, said in a statement.
On Monday, IBM announced on its website that it has partnered with telemedicine company Ovum Health to scale web and mobile app-based chat and scheduling solutions on Ovum's family-building platform, which provides pregnancy, prenatal and postnatal healthcare.
IBM assisted Ovum Health in creating a no-code platform for an AI assistant that leverages natural language models, according to a blog post. Ovum then fully integrated watsonx Assistant into its web interface and iOS app in less than two months.
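IBM's documentation describes both web-chat and API integration paths for watsonx Assistant. As a rough illustration only, and not drawn from Ovum Health's actual implementation, the sketch below shows how an application might send a user message to an assistant instance through the ibm-watson Python SDK's stateless message API; the API key, service URL, version date and assistant ID are all placeholders.

```python
# Minimal sketch: sending a user message to an IBM watsonx Assistant instance
# via the ibm-watson Python SDK (AssistantV2 stateless API). All credentials
# and IDs below are placeholders, not Ovum Health's actual configuration.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder
assistant = AssistantV2(version="2021-11-27", authenticator=authenticator)
assistant.set_service_url(
    "https://api.us-south.assistant.watson.cloud.ibm.com"  # placeholder region URL
)

response = assistant.message_stateless(
    assistant_id="YOUR_ASSISTANT_ID",  # placeholder
    input={"message_type": "text",
           "text": "I'd like to schedule a prenatal visit."},
).get_result()

# The assistant's reply text is returned under output.generic in the payload.
for item in response["output"]["generic"]:
    if item["response_type"] == "text":
        print(item["text"])
```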
Wolters Kluwer unveils AI Labs
On Tuesday, the Waltham, Massachusetts-based company introduced a new UpToDate integration called AI Labs.
More than two million users at over 44,000 healthcare organizations in 190 countries rely on the clinical decision support system, viewing more than 650 million topics per year, according to Wolters Kluwer Health.
“Bringing together the power of UpToDate and generative AI can help drive value for both clinicians and patients,” Greg Samios, president and CEO of Clinical Effectiveness at Wolters Kluwer Health, said in the announcement.
“With this advanced capability, we have an implementation of generative AI that could help clinicians make better and more informed decisions to deliver the best care everywhere.”
Dr. Peter Bonis, the company's chief medical officer, said Wolters Kluwer has long incorporated AI to synthesize medical literature and the experience of physicians into its 12,400 clinical topics.
To help hospitals and health plans better aggregate data from disparate electronic health records after mergers, Wolters Kluwer developed a machine learning model to improve the process of mapping lab results and other data to standardized LOINC codes.
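Wolters Kluwer has not published the details of that model, but the general task, matching free-text local lab test names to standardized LOINC codes, can be illustrated with a simple similarity-based approach. The sketch below is a hypothetical, simplified example using scikit-learn, not the company's production system, and its tiny reference table is made up for demonstration.

```python
# Hypothetical, simplified illustration of mapping local lab test names to
# LOINC codes with TF-IDF character n-grams and nearest-neighbor matching.
# This is NOT Wolters Kluwer's model; real LOINC mapping would use the full
# LOINC table plus richer features such as units and specimen type.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Small illustrative slice of LOINC long common names -> codes.
loinc_reference = {
    "Hemoglobin [Mass/volume] in Blood": "718-7",
    "Glucose [Mass/volume] in Serum or Plasma": "2345-7",
    "Creatinine [Mass/volume] in Serum or Plasma": "2160-0",
}
names = list(loinc_reference.keys())

# Character n-grams tolerate abbreviations and local naming quirks.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
reference_vectors = vectorizer.fit_transform(names)
index = NearestNeighbors(n_neighbors=1, metric="cosine").fit(reference_vectors)

def suggest_loinc(local_test_name: str) -> tuple[str, str, float]:
    """Return (loinc_code, matched_name, distance) for a local lab test label."""
    query = vectorizer.transform([local_test_name])
    distance, idx = index.kneighbors(query)
    matched = names[idx[0][0]]
    return loinc_reference[matched], matched, float(distance[0][0])

print(suggest_loinc("HGB blood"))      # likely maps to 718-7
print(suggest_loinc("serum glucose"))  # likely maps to 2345-7
```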
“We are committed to setting a standard for the responsible application of generative AI to the complex realities of front-line healthcare,” Bonis said.
“The approach they have taken is the right one and I look forward to seeing how it evolves,” Julio Ramirez, chief scientific officer at Norton Infectious Diseases Institute, added.
Chilmark offers an industry guide for AI adoption
Also this week, Chilmark Research released its first eBook, “Building Responsible AI in Healthcare: A Journey into Automation, Ethics and Trust.”
The eBook from the healthcare IT research and advisory firm explores how to develop trust in AI technologies and implementations that create a positive impact on patients, providers and organizations.
The content combines the firm's public and premium articles and reports from the past three years, and covers:
- The evolving regulatory landscape and the need for guardrails.
- Emerging best practices on developing and implementing AI.
- Bias in AI and how to address health equity mandates.
“We’re still in the early days of mass adoption, so most use cases are low-risk, focused more on administrative and operations use cases,” John Moore, Chilmark’s managing partner, said in a statement.
“With overtures being made about broader adoption for clinical decision support, understanding the limitations of these tools and the need for human interpretation is critical.”
“Organizations will need to have a deep understanding of fairness and equity from a political philosophy or anthropological perspective, develop design expertise relevant to machine learning, and consciously monitor applications over their entire lifespan in order to improve and maintain trust of users and patients,” added lead author Dr. Jody Ranck.
Ranck, the firm’s senior analyst, has explored state-of-the-art processes for AI bias and risk mitigation and how to develop more trustworthy machine learning tools for healthcare.
Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.