{ "version": "https://jsonfeed.org/version/1.1", "user_comment": "This feed allows you to read the posts from this site in any feed reader that supports the JSON Feed format. To add this feed to your reader, copy the following URL -- https://www.pymnts.com/category/news/artificial-intelligence/feed/json/ -- and add it your reader.", "next_url": "https://www.pymnts.com/category/news/artificial-intelligence/feed/json/?paged=2", "home_page_url": "https://www.pymnts.com/category/news/artificial-intelligence/", "feed_url": "https://www.pymnts.com/category/news/artificial-intelligence/feed/json/", "language": "en-US", "title": "Artificial Intelligence Archives | PYMNTS.com", "description": "What's next in payments and commerce", "icon": "https://www.pymnts.com/wp-content/uploads/2022/11/cropped-PYMNTS-Icon-512x512-1.png", "items": [ { "id": "https://www.pymnts.com/?p=2689987", "url": "https://www.pymnts.com/news/artificial-intelligence/2025/google-gemini-reaches-350-million-monthly-active-users-lags-rivals/", "title": "Google\u2019s Gemini Reaches 350 Million Monthly Active Users but Lags Rivals", "content_html": "

Gemini, Google\u2019s chatbot, grew to reach 350 million monthly active users in March but still lagged behind ChatGPT and Meta AI, TechCrunch reported Wednesday (April 23), citing data revealed during Google\u2019s antitrust suit.

\n

The data also showed that Gemini increased its number of daily active users fourfold since October, according to the report. The chatbot had 35 million daily active users in March, up from 9 million in October.

\n

While the chatbot has gained widespread consumer adoption, it lags behind its two most popular competitors, the report said.

\n

OpenAI\u2019s ChatGPT had 600 million monthly active users in March, according to the Google data, and Meta AI had about 500 million monthly active users in September, Meta CEO Mark Zuckerberg said at the time, per the TechCrunch report.

\n

It was reported April 3 that Google replaced the head of Gemini as the AI chatbot continued to lag behind ChatGPT. Sissie Hsiao stepped down and was replaced by Josh Woodward, the head of Google Labs who oversaw the launch of NotebookLM.

\n

Google DeepMind CEO Demis Hassabis reportedly told staffers in a memo that the change in leadership would \u201csharpen our focus on the next evolution of the Gemini app.\u201d

\n

During a February earnings call, Google CEO Sundar Pichai said the company plans to insert ads in the Gemini multimodal model, as it did in the AI Overviews portion of its search engine, in an attempt to recoup the high costs of processing AI workloads.

\n

Gemini ads will not come in 2025, and Google will instead focus on offering free and paid versions of Gemini, Pichai said.

\n

\u201cWe do have very good ideas for native ad concepts,\u201d Pichai said during the call. \u201cBut you will see us lead with the user experience\u201d and first make sure it works at scale.

\n

In December, Pichai told Google staff that 2025 will be a crucial year for the company and \u201cthe stakes are high\u201d as Google focuses on unlocking the benefits of technology and solving real user problems.

\n

Gemini is a top priority, and Google believes this will be its next app to reach 500 million users, Pichai said.

\n

\u201cBut we have some work to do in 2025 to close the gap and establish a leadership position there as well,\u201d Pichai said.

\n

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

\n

The post Google\u2019s Gemini Reaches 350 Million Monthly Active Users but Lags Rivals appeared first on PYMNTS.com.

\n", "content_text": "Gemini, Google\u2019s chatbot, grew to reach 350 million monthly active users in March but still lagged behind ChatGPT and Meta AI, TechCrunch reported Wednesday (April 23), citing data revealed during Google\u2019s antitrust suit.\nThe data also showed that Gemini increased its number of daily active users fourfold since October, according to the report. The chatbot had 35 million daily active users in March, up from 9 million in October.\nWhile the chatbot has gained widespread consumer adoption, it lags behind its two most popular competitors, the report said.\nOpenAI\u2019s ChatGPT had 600 million monthly active users in March, according to the Google data, and Meta AI had about 500 million monthly active users in September, Meta CEO Mark Zuckerberg said at the time, per the TechCrunch report.\nIt was reported April 3 that Google removed the head of Gemini as the AI chatbot continues to lag behind ChatGPT. Sissie Hsiao stepped down to be replaced by Josh Woodward, the head of Google Labs who oversaw the launch of NotebookLM.\nGoogle DeepMind CEO Demis Hassabis reportedly told staffers in a memo that the change in leadership would \u201csharpen our focus on the next evolution of the Gemini app.\u201d\nDuring a February earnings call, Google CEO Sundar Pichai said the company plans to insert ads in the Gemini multimodal model, as it did in the AI Overviews portion of its search engine in an attempt to recoup the high costs of processing AI workloads.\nGemini ads will not come in 2025, and Google will focus on offering a free and paid version of Gemini, Pichai said.\n\u201cWe do have very good ideas for native ad concepts,\u201d Pichai said during the call. 
\u201cBut you will see us lead with the user experience\u201d and first make sure it works at scale.\nIn December, Pichai told Google staff that 2025 will be a crucial year for the company and \u201cthe stakes are high\u201d as Google focuses on unlocking the benefits of technology and solving real user problems.\nGemini is a top priority, and Google believes this will be its next app to reach 500 million users, Pichai said.\n\u201cBut we have some work to do in 2025 to close the gap and establish a leadership position there as well,\u201d Pichai said.\nFor all PYMNTS AI coverage, subscribe to the daily AI Newsletter.\nThe post Google\u2019s Gemini Reaches 350 Million Monthly Active Users but Lags Rivals appeared first on PYMNTS.com.", "date_published": "2025-04-23T15:42:31-04:00", "date_modified": "2025-04-23T15:42:31-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2024/09/Google-AI-Gemini-1.jpg", "tags": [ "Artificial Intelligence", "chatbots", "Gemini", "GenAI", "Google", "Innovation", "News", "PYMNTS News", "Technology", "What's Hot" ] }, { "id": "https://www.pymnts.com/?p=2687621", "url": "https://www.pymnts.com/news/artificial-intelligence/2025/this-week-ai-tariffs-tech-giants-antitrust-woes-more-adept-chatgpt/", "title": "This Week in AI: Tariffs Hit AI, Tech Giants\u2019 Antitrust Woes and a More Adept ChatGPT", "content_html": "

It\u2019s a big week for artificial intelligence news. AI hardware is being dinged by tariff troubles while two juggernauts in the AI field \u2014 Meta and Google \u2014 are in the crosshairs of the U.S. government. Meanwhile, OpenAI unveiled a more capable and smarter ChatGPT and revealed plans for an AI software engineer.

\n

Tariff Uncertainty Hits AI Hardware

\n

The unpredictable tariff policy of President Donald Trump\u2019s administration is creating long-term uncertainty for the AI industry.

\n

The tariffs don\u2019t target digital services and intellectual property like AI software. AI pioneer Andrew Ng said intellectual property is hard to tax due to its intangible nature and ease of cross-border transfer. A Morgan Stanley note said major software firms like Adobe and Salesforce have not yet seen demand impacts.

\n

However, AI models require powerful hardware to function \u2014 and that\u2019s where tariffs may bite.

\n

Although chips are exempted, tariffs on essential infrastructure like servers, cooling systems and networking gear could disrupt AI development. Ng said bringing computer equipment manufacturing back to the United States isn\u2019t feasible due to a lack of domestic expertise and supply chain capacity.

\n

Meta\u2019s Anticompetition Trial Begins

\n

The Federal Trade Commission\u2019s 2020 anticompetition lawsuit against Meta went to trial this week, with the U.S. government seeking a divestiture of Instagram and WhatsApp, among other remedies.

\n

The FTC sued Meta (formerly Facebook) for allegedly engaging in \u201canticompetitive conduct\u201d to weaken or squash rivals as it protects its \u201cmonopoly\u201d in social media, according to the revised complaint.

\n

But Meta Chief Legal Officer Jennifer Newstead said in a blog post that the FTC\u2019s \u201cweak antitrust lawsuit \u2026 ignores how the market actually works and chases a theory that doesn\u2019t hold up in the real world.\u201d

\n

In today\u2019s digital landscape, Meta competes with TikTok, YouTube and X for eyeballs and engagement.

\n

\u201cIn reality, more time is spent on TikTok and YouTube than on either Facebook or Instagram,\u201d Newstead said in the post.

\n

Court Rules That Google Broke Law to Dominate in Ads

\n

A Virginia district court judge ruled that Google broke the law to dominate the online advertising technology market, one of two major antitrust lawsuits brought by the U.S. government against the search giant.

\n

The government sued Google for having a monopoly in three parts of the online ad market: online publishers\u2019 tools; advertiser tools; and software that makes this market work.

\n

While it is not illegal to dominate a market by innovating, Google entrenched its monopolies and tied them together, a classic antitrust violation, experts told The New York Times.

\n

Meanwhile, a D.C. judge ruled last year in a separate case that Google holds an online search monopoly. The judge is considering a request by the Department of Justice to force Google to sell Chrome, the world\u2019s dominant browser. A ruling is expected by August.

\n

ChatGPT to Become Smarter, All-Around AI Agent

\n

OpenAI unveiled two new AI models in its reasoning model family that will power ChatGPT: o3 and o4-mini.

\n

The AI startup said these are its smartest models to date. They can use all the tools at ChatGPT\u2019s disposal and even incorporate images into their thinking. This would be helpful for businesses looking to analyze PDFs and faxes. OpenAI said the models can even read blurry or upside-down images.

\n

OpenAI said for most real-world uses, o3 and o4-mini will be cheaper than o1 and o3-mini while outperforming them on tasks.

\n

The models are now available for ChatGPT Plus, Pro and Team users. ChatGPT Enterprise and Edu users will get them in a week. Free users can try o4-mini by selecting \u201cThink\u201d before entering a prompt.

\n

OpenAI Developing AI Software Engineer

\n

OpenAI Chief Financial Officer Sarah Friar said the company is building an AI agent that can do all the work of software engineers, not just augment their skills.

\n

\u201cThis is not just augmenting the current software engineers in your workforce \u2026 it\u2019s literally an agentic software engineer that can build an app for you,\u201d Friar said at Goldman Sachs\u2019 Disruptive Technology Symposium in London.

\n

\u201cNot only does it build it, it does all the things that software engineers hate to do,\u201d such as quality assurance tests, bug testing and bashing, as well as the accompanying documentation, she said. \u201cSo suddenly, you can force multiply your software engineering workforce.\u201d

\n

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

\n

The post This Week in AI: Tariffs Hit AI, Tech Giants\u2019 Antitrust Woes and a More Adept ChatGPT appeared first on PYMNTS.com.

\n", "content_text": "It\u2019s a big week for artificial intelligence news. AI hardware is being dinged by tariff troubles while two juggernauts in the AI field \u2014 Meta and Google \u2014 are in the crosshairs of the U.S. government. Meanwhile, OpenAI unveiled a more capable and smarter ChatGPT and revealed plans for an AI software engineer.\nTariff Uncertainty Hits AI Hardware\nThe President Donald Trump administration\u2019s unpredictable tariff policy is creating long-term uncertainty for the AI industry.\nThe tariffs don\u2019t target digital services and intellectual property like AI software. AI pioneer Andrew Ng said intellectual property is hard to tax due to its intangible nature and ease of cross-border transfer. A Morgan Stanley note said major software firms like Adobe and Salesforce have not yet seen demand impacts.\nHowever, AI models require powerful hardware to function \u2014 and that\u2019s where tariffs may bite.\nAlthough chips are exempted, tariffs on essential infrastructure like servers, cooling systems and networking gear could disrupt AI development. Ng said bringing computer equipment manufacturing back to the United States isn\u2019t feasible due to a lack of domestic expertise and supply chain capacity.\nMeta\u2019s Anticompetition Trial Begins\nThe Federal Trade Commission\u2019s 2020 anticompetition lawsuit against Meta went to trial this week, with the U.S. 
government seeking a divestiture of Instagram and WhatsApp, among other remedies.\nThe FTC sued Meta (formerly Facebook) for allegedly engaging in \u201canticompetitive conduct\u201d to weaken or squash rivals as it protects its \u201cmonopoly\u201d in social media, according to the revised complaint.\nBut Meta Chief Legal Officer Jennifer Newstead said in a blog post that the FTC\u2019s \u201cweak antitrust lawsuit \u2026 ignores how the market actually works and chases a theory that doesn\u2019t hold up in the real world.\u201d\nIn today\u2019s digital landscape, Meta competes with TikTok, YouTube and X for eyeballs and engagement.\n\u201cIn reality, more time is spent on TikTok and YouTube than on either Facebook or Instagram,\u201d Newstead said in the post.\nCourt Rules That Google Broke Law to Dominate in Ads\nA Virginia district court judge ruled that Google broke the law to dominate the online advertising technology market, one of two major antitrust lawsuits brought by the U.S. government against the search giant.\nThe government sued Google for having a monopoly in three parts of the online ad market: online publishers\u2019 tools; advertiser tools; and software that makes this market work.\nWhile it is not illegal to dominate a market by innovating, Google entrenched its monopolies and tied them together, a classic antitrust violation, experts told The New York Times.\nMeanwhile, a D.C. judge ruled last year in a separate case that Google holds an online search monopoly. The judge is considering a request by the Department of Justice to force Google to sell Chrome, the world\u2019s dominant browser. A ruling is expected by August.\nChatGPT to Become Smarter, All-Around AI Agent\nOpenAI unveiled two new AI models in its reasoning model family that will power ChatGPT: o3 and o4-mini.\nThe AI startup said these are its smartest models to date. They can use all the tools at ChatGPT\u2019s disposal and even incorporate images into their thinking. 
This would be helpful for businesses looking to analyze PDFs and faxes. OpenAI said the models can even read blurry or upside-down images.\nOpenAI said for most real-world uses, o3 and o4-mini will be cheaper than o1 and o3-mini while outperforming them on tasks.\nThe models are now available for ChatGPT Plus, Pro and Team users. ChatGPT Enterprise and Edu users will get them in a week. Free users can try o4-mini by selecting \u201cThink\u201d before entering a prompt.\nOpenAI Developing AI Software Engineer\nOpenAI Chief Financial Officer Sarah Friar said the company is building an AI agent that can do all the work of software engineers, not just augment their skills.\n\u201cThis is not just augmenting the current software engineers in your workforce \u2026 it\u2019s literally an agentic software engineer that can build an app for you,\u201d Friar said at Goldman Sachs\u2019 Disruptive Technology Symposium in London.\n\u201cNot only does it build it, it does all the things that software engineers hate to do,\u201d such as quality assurance tests, bug testing and bashing, as well as the accompanying documentation, she said. 
\u201cSo suddenly, you can force multiply your software engineering workforce.\u201d\nFor all PYMNTS AI coverage, subscribe to the daily AI Newsletter.\nThe post This Week in AI: Tariffs Hit AI, Tech Giants\u2019 Antitrust Woes and a More Adept ChatGPT appeared first on PYMNTS.com.", "date_published": "2025-04-18T14:26:17-04:00", "date_modified": "2025-04-18T14:26:56-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/AI-artificial-intelligence.jpg", "tags": [ "AI agents", "Antitrust", "Artificial Intelligence", "Big Tech", "ChatGPT", "GenAI", "Google", "Government", "Innovation", "jobs", "Meta", "News", "OpenAI", "personnel", "PYMNTS News", "software", "tariffs", "taxes", "Technology" ] }, { "id": "https://www.pymnts.com/?p=2687456", "url": "https://www.pymnts.com/news/artificial-intelligence/2025/johnson-15percent-ai-use-cases-deliver-80percent-value/", "title": "Johnson & Johnson: 15% of AI Use Cases Deliver 80% of Value", "content_html": "

Johnson & Johnson reportedly found that 10% to 15% of artificial intelligence use cases deliver 80% of the value.

\n

The medicine and MedTech company reached that conclusion after encouraging employees to experiment with AI for about three years and reviewing the results of nearly 900 use cases, The Wall Street Journal (WSJ) reported Friday (April 18).

\n

Johnson & Johnson is now devoting resources to the highest-value projects and cutting the rest, according to the report.

\n

\u201cWe\u2019re prioritizing, we\u2019re scaling, we\u2019re looking at the things that make the most sense,\u201d Johnson & Johnson Chief Information Officer Jim Swanson said, per the report. \u201cThat was part of the maturation process we went through.\u201d

\n

The use cases that are delivering the most value include a generative AI copilot that coaches sales representatives on how to engage with healthcare professionals, and an internal chatbot that answers employees\u2019 questions about company policies and benefits, according to the report.

\n

Johnson & Johnson is also developing an AI tool that will facilitate drug discovery and another that will identify and mitigate supply chain risks, per the report.

\n

Swanson said in the report that the broader approach was necessary three years ago to gain familiarity with the technology but that it\u2019s now time to focus on the use cases that can be successfully implemented, are widely adopted and deliver value.

\n

The PYMNTS Intelligence report \u201cThe AI MonitorEdge Report: Healthcare Firms Going Long on GenAI Investment\u201d found that 90% of C-suite executives at healthcare firms with at least $1 billion in annual revenue said that their previous generative AI investments have already achieved a positive return on investment.

\n

The two applications that represent the most frequent use cases of generative AI \u2014 product and service innovation and real-time automated customer service responses \u2014 are being used by about 6 in 10 of these healthcare firms, per the report.

\n

It was reported in March that Apple is working on an AI agent that can dispense medical advice.

\n

Healthcare AI firm Suki said in October that it raised $70 million in new funding to invest in the development of its products, which include an AI-powered voice assistant used by clinicians.

\n

In October 2023, Microsoft unveiled AI-powered products that help doctors glean insights from medical data.

\n

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

\n

The post Johnson & Johnson: 15% of AI Use Cases Deliver 80% of Value appeared first on PYMNTS.com.

\n", "content_text": "Johnson & Johnson reportedly found that 10% to 15% of artificial intelligence use cases deliver 80% of the value.\nThe medicine and MedTech company determined that after encouraging employees to experiment with AI and, after about three years, seeing the results of their pursuits of nearly 900 use cases, The Wall Street Journal (WSJ) reported Friday (April 18).\nJohnson & Johnson is now devoting resources to the highest-value projects and cutting the rest, according to the report.\n\u201cWe\u2019re prioritizing, we\u2019re scaling, we\u2019re looking at the things that make the most sense,\u201d Johnson & Johnson Chief Information Officer Jim Swanson said, per the report. \u201cThat was part of the maturation process we went through.\u201d\nThe use cases that are delivering the most value include a generative AI copilot that coaches sales representatives on how to engage with healthcare professionals, and an internal chatbot that answers employees\u2019 questions about company policies and benefits, according to the report.\nJohnson & Johnson is also developing an AI tool that will facilitate drug discovery and another that will identify and mitigate supply chain risks, per the report.\nSwanson said in the report that the broader approach was necessary three years ago to gain familiarity with the technology but that it\u2019s now time to focus on the use cases that can be successfully implemented, are widely adopted and deliver value.\nThe PYMNTS Intelligence report \u201cThe AI MonitorEdge Report: Healthcare Firms Going Long on GenAI Investment\u201d found that 90% of C-suite executives at healthcare firms with at least $1 billion in annual revenue said that their previous generative AI investments have already achieved a positive return on investment.\nThe two applications that represent the most frequent use cases of generative AI \u2014 product and service innovation and real-time automated customer service responses \u2014 are being used 
by about 6 in 10 of these healthcare firms, per the report.\nIt was reported in March that Apple is working on an AI agent that can dispense medical advice.\nHealthcare AI firm Suki said in October that it raised $70 million in new funding to invest in the development of its products, which include an AI-powered voice assistant used by clinicians.\nIn October 2023, Microsoft unveiled AI-powered products that help doctors glean insights from medical data.\nFor all PYMNTS AI coverage, subscribe to the daily AI Newsletter.\nThe post Johnson & Johnson: 15% of AI Use Cases Deliver 80% of Value appeared first on PYMNTS.com.", "date_published": "2025-04-18T10:15:54-04:00", "date_modified": "2025-04-18T10:18:57-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/Johnson-and-Johnson.jpg", "tags": [ "Artificial Intelligence", "chatbots", "GenAI", "Healthcare", "Innovation", "Johnson & Johnson", "News", "PYMNTS News", "Technology", "What's Hot" ] }, { "id": "https://www.pymnts.com/?p=2686589", "url": "https://www.pymnts.com/news/artificial-intelligence/2025/ai-explained-sorting-through-models-alphabet-soup/", "title": "From Buzzwords to Bottom Lines: Understanding the AI Model Types", "content_html": "

There\u2019s an alphabet soup of terms floating around when it comes to artificial intelligence models. There are foundation and frontier models, large and small language models, multimodal models \u2014 and the AI model term du jour, reasoning models.

\n

These buzzwords show up in blogs, company announcements, executive speeches, conference panels and quarterly earnings calls, but what do they actually mean? More importantly, why should business users care?

\n

This guide explains key AI model types in plain English and how each affects cost, capability and risk for organizations.

\n

Here are the different types of models often encountered.

\n

Foundation Models: The Base Layer of Generative AI

\n

Foundation models are large, general-purpose AI systems trained on massive datasets such as the entire internet. They serve as the \u201cbase model\u201d that can be adapted to perform a wide variety of tasks.

\n

They can include large language models, vision language models, code models and more. They are typically trained to predict the next word in a sentence, the next pixel in an image or the next token in a code sequence.

\n

A foundation model is also considered a frontier model if it introduces major new capabilities and pushes the state of the art.

\n

Examples: OpenAI\u2019s GPT family of models; Google\u2019s Gemini; Meta\u2019s Llama; Anthropic\u2019s Claude

\n

Why it matters for business: These models power everything from customer service chatbots to content generation tools. You can either use them as they are, through APIs, or fine-tune (retrain) them on your company\u2019s data to create more specialized applications.

\n

Pros: Versatile, fast to deploy, broadly knowledgeable

\n

Cons: Expensive to run at scale, may hallucinate or generate inaccurate content, are not inherently secure or compliant with regulations

\n
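Using a foundation model through an API usually means sending a JSON request in a chat-completion shape. The sketch below builds such a request body with Python\u2019s standard library only; the field layout follows the widely used OpenAI-style schema, the model name is a placeholder, and no network call is made.

```python
import json


def build_chat_request(prompt: str, model: str = "example-model") -> str:
    """Return a JSON request body in the common chat-completion shape.

    The field names follow the widely used OpenAI-style schema; other
    providers use similar but not identical layouts. "example-model"
    is a placeholder, not a real model name.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # lower values favor more predictable output
    }
    return json.dumps(body)


payload = build_chat_request("Summarize this contract clause.")
print(json.loads(payload)["model"])  # prints: example-model
```

Fine-tuning replaces none of this plumbing; it only changes which model name the request points at.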

Large Language vs. Small Language Models

\n

Large language models are AI models trained on huge amounts of data to learn language patterns. They power generative AI to create prose, poems, business emails and other language tasks. They are behind today\u2019s most popular chatbots and AI assistants.

\n

Small language models are smaller, cheaper and usually more specialized versions of large language models.

\n

Large language models are often used by AI agents to execute tasks. The agent, which is a system and not a model, is layered on top of the large language model.

\n

Examples: OpenAI\u2019s GPT series; Google\u2019s Gemini; Meta\u2019s Llama; Anthropic\u2019s Claude

\n

Why it matters for business: Large language models can handle several administrative and creative tasks quickly and at scale to save employees hours of work and make business operations more efficient.

\n

Pros: Highly capable in general tasks, can be fine-tuned to specialize in an industry or task

\n

Cons: Expensive to run, prone to hallucinations, may absorb biases from their training data

\n

Reasoning Models: Thinking and Ruminating

\n

Reasoning models are usually fine-tuned versions of large language models designed to think through problems step-by-step. This makes them ideal as a second opinion on decisions, and for answering complex queries or handling more in-depth tasks.

\n

Examples: OpenAI\u2019s o series of models; Google\u2019s Gemini 2.5; Meta\u2019s Llama 3.2 series; Anthropic\u2019s Claude 3.7 Sonnet

\n

Why it matters for business: It\u2019s a smarter AI that can dive into more complex tasks, such as explaining a legal contract and its ramifications, not just summarizing the document.

\n

Pros: Greater accuracy, deeper insight, less human oversight needed

\n

Cons: Slower responses, higher compute cost per query

\n

Multimodal Models: Diversity of Inputs

\n

Multimodal models are AI models that can ingest different forms of data (text, video, images and audio).

\n

Examples: OpenAI\u2019s GPT-4o and GPT-4 with Vision; Google\u2019s Gemini family of models; Meta\u2019s Llama 4

\n

Why it matters for business: AI models can now read, analyze and interpret data in many forms, which is practical for businesses using PDFs, Excel sheets, PowerPoints, faxes and other forms of documents.

\n

Pros: Better understanding of context, leading to wider usefulness

\n

Cons: Needs more data and computing power to train and deploy

\n
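In practice, multimodal input is often expressed as a list of typed content parts inside one chat message. The sketch below builds such a message with Python\u2019s standard library; the part layout follows the widely used OpenAI-style schema (other providers differ), and the image URL is a placeholder.

```python
import json


def build_multimodal_message(text: str, image_url: str) -> dict:
    """Return one chat message mixing text and an image reference,
    in the OpenAI-style content-parts layout (other providers differ)."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }


msg = build_multimodal_message(
    "What does this invoice total to?",
    "https://example.com/invoice.png",  # placeholder image URL
)
print(json.dumps(msg, indent=2))
```

The model receives the text and the image together, which is what lets it answer questions about scanned documents or photos rather than text alone.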

Open Source vs. Closed or Proprietary Models

\n

Open-source AI models are generally free to use, modify and share, with any restrictions varying by license type. Their code and weights are publicly available.

\n

Closed or proprietary AI models are not free, and they are developed by private companies. Users cannot see inside or modify these models.

\n

Examples:

\n

-Open\u00a0source: Meta\u2019s Llama family; Google\u2019s Gemma family; several Mistral models; EleutherAI\u2019s GPT-NeoX

\n

-Closed: OpenAI\u2019s GPT-3 and later models; Google\u2019s Gemini; Anthropic\u2019s Claude

\n

Why it matters for business: Closed models are usually more capable and more convenient to use, with a company behind them for support. Open models can be cheaper, and users have more control and customization opportunities. Companies can deploy both types, depending on the use case.

\n

Pros:

\n

-Open\u00a0source: Free, transparent, customizable, more control

\n

-Closed: More powerful with support from the company that developed it

\n

Cons:

\n

-Open source: More DIY and responsibility, may be less powerful or safe

\n

-Closed: Limited transparency, more expensive, less customizable

\n
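The \u201cdeploy both types, depending on the use case\u201d idea can be sketched as a simple routing policy: send sensitive or cost-constrained work to a self-hosted open model and accuracy-critical work to a hosted proprietary one. This is a hypothetical illustration; the deployment names and criteria are placeholders, not recommendations.

```python
from dataclasses import dataclass


@dataclass
class Task:
    description: str
    contains_sensitive_data: bool = False
    needs_top_accuracy: bool = False


def choose_model(task: Task) -> str:
    """Toy routing policy between open and closed model deployments.

    Deployment names are placeholders for illustration only.
    """
    if task.contains_sensitive_data:
        return "self-hosted-open-model"  # data never leaves the company
    if task.needs_top_accuracy:
        return "hosted-closed-model"  # usually the more capable option
    return "self-hosted-open-model"  # cheaper default


print(choose_model(Task("summarize patient notes", contains_sensitive_data=True)))
```

Real policies weigh more factors (latency, cost per token, regulatory scope), but the shape is the same: the model choice is a per-task decision, not a one-time one.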

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

\n

The post From Buzzwords to Bottom Lines: Understanding the AI Model Types appeared first on PYMNTS.com.

\n", "content_text": "There\u2019s an alphabet soup of terms floating around when it comes to artificial intelligence models. There are foundation and frontier models, large and small language models, multimodal models \u2014 and the AI model term du jour, reasoning models.\nThese buzzwords show up in blogs, company announcements, executive speeches, conference panels and quarterly earnings calls, but what do they actually mean? More importantly, why should business users care?\nThis guide explains key AI model types in plain English and how each affects cost, capability and risk for organizations.\nHere are the different types of models often encountered.\nFoundation Models: The Base Layer of Generative AI\nFoundation models are large, general-purpose AI systems trained on massive datasets such as the entire internet. They serve as the \u201cbase model\u201d that can be adapted to perform a wide variety of tasks.\nThey can include large language models, vision language models, code models and more. They are typically trained to predict the next word in a sentence, the next pixel in an image or the next token in a code sequence.\nFoundation models can be frontier models if they bring major new capabilities and push the boundaries of foundation models.\nExamples: OpenAI\u2019s GPT family of models; Google\u2019s Gemini; Meta\u2019s Llama; Anthropic\u2019s Claude\nWhy it matters for business: These models power everything from customer service chatbots to content generation tools. You can either use them as they are, through APIs, or fine-tune (retrain) them on your company\u2019s data to create more specialized applications.\nPros: Versatile, fast to deploy, broadly knowledgeable\nCons: Expensive to run at scale, may hallucinate or generate inaccurate content, are not inherently secure or compliant with regulations\nLarge Language vs. Small Language Models\nLarge language models are AI models trained on huge amounts of data to learn language patterns. 
They power generative AI to create prose, poems and business emails, and to handle other language tasks. They are behind today\u2019s most popular chatbots and AI assistants.\nSmall language models are smaller, cheaper and usually more specialized versions of large language models.\nLarge language models are often used by AI agents to execute tasks. The agent, which is a system and not a model, is layered on top of the large language model.\nExamples: OpenAI\u2019s GPT series; Google\u2019s Gemini; Meta\u2019s Llama; Anthropic\u2019s Claude\nWhy it matters for business: Large language models can handle many administrative and creative tasks quickly and at scale, saving employees hours of work and making business operations more efficient.\nPros: Highly capable in general tasks, can be fine-tuned to specialize in an industry or task\nCons: Expensive to run, prone to hallucinations, may absorb biases from their training data\nReasoning Models: Thinking and Ruminating\nReasoning models are usually fine-tuned versions of large language models designed to think through problems step-by-step. 
This makes them ideal as a second opinion on decisions, and for answering complex queries or handling more in-depth tasks.\nExamples: OpenAI\u2019s o series of models; Google\u2019s Gemini 2.5; Meta\u2019s Llama 3.2 series; Anthropic\u2019s Claude 3.7 Sonnet\nWhy it matters for business: It\u2019s a smarter AI that can dive into more complex tasks, such as explaining a legal contract and its ramifications, not just summarizing the document.\nPros: Greater accuracy, deeper insight, less human oversight needed\nCons: Slower responses, higher compute cost per query\nMultimodal Models: Diversity of Inputs\nMultimodal models are AI models that can ingest different forms of data (text, video, images and audio).\nExamples: OpenAI\u2019s GPT-4o and GPT-4 with Vision; Google\u2019s Gemini family of models; Meta\u2019s Llama 4\nWhy it matters for business: AI models can now read, analyze and interpret data in many forms, which is practical for businesses using PDFs, Excel sheets, PowerPoints, faxes and other forms of documents.\nPros: Better understanding of context, leading to wider usefulness\nCons: Needs more data and computing power to train and deploy\nOpen Source vs. Closed or Proprietary Models\nOpen-source AI models generally are free to use, modify and share, with any restrictions varying by license type. Their code and weights are publicly available to use.\nClosed or proprietary AI models are not free, and they are developed by private companies. Users cannot see inside or modify these models.\nExamples:\n-Open\u00a0source: Meta\u2019s Llama family; Google\u2019s Gemma family; several Mistral models; EleutherAI\u2019s GPT-NeoX\n-Closed: OpenAI\u2019s GPT-3 and later models; Google\u2019s Gemini; Anthropic\u2019s Claude\nWhy it matters for business: Closed models are usually more capable and more convenient to use, with a company behind them for support. Open models can be cheaper, and users have more control and customization opportunities. 
Companies can deploy both types, depending on the use case.\nPros:\n-Open\u00a0source: Free, transparent, customizable, more control\n-Closed: More powerful with support from the company that developed it\nCons:\n-Open source: More DIY and responsibility, may be less powerful or safe\n-Closed: Limited transparency, more expensive, less customizable\nFor all PYMNTS AI coverage, subscribe to the daily AI Newsletter.\nThe post From Buzzwords to Bottom Lines: Understanding the AI Model Types appeared first on PYMNTS.com.", "date_published": "2025-04-17T13:00:59-04:00", "date_modified": "2025-04-17T13:59:37-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/AI-models-artificial-intelligence.jpg", "tags": [ "AI agents", "Anthropic", "APIs", "Artificial Intelligence", "chatbots", "ChatGPT", "Claude", "EleutherAI", "Gemini", "GenAI", "Google", "Innovation", "LLAMA", "Meta", "Mistral AI", "News", "OpenAI", "PYMNTS News", "Technology" ] }, { "id": "https://www.pymnts.com/?p=2685771", "url": "https://www.pymnts.com/news/artificial-intelligence/2025/how-to-manage-risks-when-employees-use-ai-secretly-work/", "title": "How to Manage Risks When Employees Use AI Secretly for Work", "content_html": "

Employees who use generative artificial intelligence tools in the workplace without company approval or oversight \u2014 a practice known as \u201cbring your own AI,\u201d or BYOAI \u2014 could introduce risks into the enterprise, according to MIT researchers.

\n

\u201cMake no mistake, I\u2019m not talking theory here,\u201d said Nick van der Meulen, a research scientist at MIT\u2019s Center for Information Systems Research, during an MIT Sloan Management Review webinar. \u201cThis has been happening for quite some time now.\u201d

\n

The temptation to BYOAI could be especially acute at companies that have banned publicly available AI chatbots, such as ChatGPT. Samsung, Verizon, J.P. Morgan Chase and other companies have banned or limited the use of external AI chatbots due to regulatory and security concerns.

\n

The issue is gaining urgency among business leaders as AI models become more powerful and freely available to anyone, according to MIT.

\n

Research from van der Meulen and fellow research scientist Barbara Wixom showed that about 16% of employees in large organizations were already using AI tools last year, with that number expected to rise to 72% by 2027. This includes sanctioned and unsanctioned use of AI.

\n

They warned about the risks arising when employees use these tools without guidance.

\n

\u201cWhat happens when sensitive data gets entered into platforms that you don\u2019t control? When business decisions are made based on outputs that no one quite understands?\u201d van der Meulen said.

\n

The researchers said there are two types of generative AI implementations:

\n

GenAI tools, like ChatGPT or Microsoft Copilot, are used to enhance individual employee productivity and efficiency. They are freely available, but it\u2019s harder to translate their use into ROI.

\n

GenAI solutions are company-wide deployments of AI across processes and business units to bring value to the enterprise.

\n

Separating the two uses of generative AI is useful because that \u201chelps us tackle each differently and manage their value properly,\u201d Wixom said.

\n

Tools are a cost management play and need to be handled similarly to spreadsheets and word processing.

\n

\u201cIn a way, they simply represent, for most organizations, the new cost of doing business,\u201d van der Meulen said.

\n

Solutions help different areas of the company, whether it\u2019s the call center, marketing, software development or another business unit.

\n

\u201cThey offer measurable lift in either efficiencies or sales,\u201d Wixom said.

\n

For example, IT services provider Wolters Kluwer developed a generative AI tool that can read raw text directly from scanned images of lien documents. Banks using this tool were able to cut their loan processing time from weeks to days.

\n

\u201cThat is not something that an individual employee at either Wolters Kluwer or the bank could have done on their own with a GenAI tool,\u201d van der Meulen said. \u201cIt takes effort from many stakeholders to create these solutions to integrate them into systems.\u201d

\n

When AI is used as a tool, the employee is responsible for its successful use. When AI is used in the company as a solution, the organization owns its success, the researchers said.

\n

This is another important distinction because it guides how to govern these two types of generative AI in a company, they said.

\n

Read also: MIT Discovers AI Training Paradox That Could Boost Robot Intelligence

\n

Tips to Manage BYOAI

\n

Simply banning these tools is neither practical nor effective.

\n

\u201cEmployees won\u2019t just stop using GenAI; they\u2019ll start looking for workarounds,\u201d said van der Meulen. \u201cThey\u2019ll turn to personal devices, use unsanctioned accounts, hidden tools. So instead of mitigating risk, we\u2019d have made it harder to detect and manage.\u201d

\n

The researchers recommended three key approaches to managing BYOAI.

\n
\n

1. Establish clear guardrails and guidelines.

\n

Organizations should tell employees which uses are always acceptable, like searching for publicly available information, and those that are not approved, such as inputting proprietary information into a publicly available AI chatbot. In a survey of senior data and technology leaders, 30% reported having well-developed policies regarding workers\u2019 AI use, the researchers said.

\n
\n

2. Invest in training and education.

\n

Employees need what the researchers called \u201cAI direction and evaluation skills\u201d (AIDE skills). If they don\u2019t know how to use the tools well, they won\u2019t be as effective. It\u2019s not enough to do an online tutorial; employees must practice.

\n

For example, at Zoetis, a global animal health company, the data analytics unit runs hands-on AI practice sessions three times a week, each attended by more than 100 employees.

\n

The researchers said J.D. Williams, Zoetis\u2019 chief data and analytics officer, likened it to teaching people how to change tires \u2014 by making them change tires.

\n
\n

3. Provide approved tools from trusted vendors.

\n

Since banning AI tools won\u2019t work, and allowing employees to use any AI tool isn\u2019t feasible because safe use can\u2019t be ensured, organizations should instead provide approved AI tools to employees.

\n

Zoetis implemented a \u201cGenAI app store\u201d where employees apply for a licensed seat. They have to say why they need the app, and then share their experiences using it. This helps the company identify valuable applications while managing costs.

\n

\u201cIt\u2019s how you avoid paying $50 a month for Joe from Finance who \u2026 used it exactly once to write a birthday card,\u201d van der Meulen said.

\n

For organizations just beginning their GenAI journey, Wixom also recommended establishing a center of excellence \u2014 which could be a single person or a small team \u2014 to provide an enterprise-wide perspective and coordinate efforts across departments.

\n

But most of all, \u201cit is important to remind everyone what the end game is here,\u201d Wixom said. \u201cThe point of AI, regardless of its flavor, should be to create value for our organizations and ideally value that hits our books.\u201d

\n

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

\n

The post How to Manage Risks When Employees Use AI Secretly for Work appeared first on PYMNTS.com.

\n", "content_text": "Employees who use generative artificial intelligence tools in the workplace without company approval or oversight \u2014 a practice known as \u201cbring your own AI\u2019 or BYOAI \u2014 could introduce risks into the enterprise, according to MIT researchers.\n\u201cMake no mistake, I\u2019m not talking theory here,\u201d said Nick van der Meulen, a research scientist at MIT\u2019s Center for Information Systems Research, during an MIT Sloan Management Review webinar. \u201cThis has been happening for quite some time now.\u201d\nThe temptation to BYOAI could be especially acute at companies that have banned using AI chatbots that are publicly available, such as ChatGPT. Samsung, Verizon, J.P. Morgan Chase and other banks have banned or limited the use of external AI chatbots due to regulatory and security concerns.\nThe issue is gaining urgency among business leaders as AI models become more powerful and freely available to anyone, according to MIT.\nResearch from van der Meulen and fellow research scientist Barbara Wixom showed that about 16% of employees in large organizations were already using AI tools last year, with that number expected to rise to 72% by 2027. This includes sanctioned and unsanctioned use of AI.\nThey warned about the risks arising when employees use these tools without guidance.\n\u201cWhat happens when sensitive data gets entered into platforms that you don\u2019t control? When business decisions are made based on outputs that no one quite understands?\u201d van der Meulen said.\nThe researchers said there are two types of generative AI implementations:\n\nGenAI tools like ChatGPT or Microsoft Copilot are used to enhance individual employee productivity and efficiency. 
They are freely available, but it\u2019s harder to translate their use into ROI.\nGenAI solutions are company-wide deployment of AI across processes and business units to bring value to the enterprise.\n\nSeparating the two uses of generative AI is useful because that \u201chelps us tackle each differently and manage their value properly,\u201d Wixom said.\nTools is a cost management play and needs to be handled similarly to spreadsheets and word processing.\n\u201cIn a way, they simply represent, for most organizations, the new cost of doing business,\u201d van der Meulen said.\nSolutions help different areas of the company, whether it\u2019s the call center, marketing, software development or another business unit.\n\u201cThey offer measurable lift in either efficiencies or sales,\u201d Wixom said.\nFor example, IT services provider Wolters Kluwer developed a generative AI tool that can read raw text directly from scanned images of lien documents. Banks using this tool were able to cut their loan processing time from weeks to days.\n\u201cThat is not something that an individual employee at either Wolters Kluwer or the bank could have done on their own with a GenAI tool,\u201d van der Meulen said. \u201cIt takes effort from many stakeholders to create these solutions to integrate them into systems.\u201d\nWhen AI is used as a tool, the employee is responsible for its successful use. When AI is used in the company as a solution, the organization owns its success, the researchers said.\nThis is another important distinction because it guides how to govern these two types of generative AI in a company, they said.\nRead also: MIT Discovers AI Training Paradox That Could Boost Robot Intelligence\nTips to Manage BYOAI\nSimply banning these tools is neither practical nor effective.\n\u201cEmployees won\u2019t just stop using GenAI; they\u2019ll start looking for workarounds,\u201d said van der Meulen. 
\u201cThey\u2019ll turn to personal devices, use unsanctioned accounts, hidden tools. So instead of mitigating risk, we\u2019d have made it harder to detect and manage.\u201d\nThe researchers recommended three key approaches to managing BYOAI.\n\nEstablish clear guardrails and guidelines.\n\nOrganizations should tell employees which uses are always acceptable, like searching for publicly available information, and those that are not approved, such as inputting proprietary information into a publicly available AI chatbot. In a survey of senior data and technology leaders, 30% reported having well-developed policies regarding workers\u2019 AI use, the researchers said.\n\nInvest in training and education.\n\nEmployees need what the researchers called \u201cAI direction and evaluation skills\u201d (AIDE skills). If they don\u2019t know how to use the tools well, they won\u2019t be as effective. It\u2019s not enough to do an online tutorial; employees must practice.\nFor example, at Zoetis, a global animal health company, the data analytics unit runs sessions three times a week that are attended by over 100 employees at each session for hands-on AI practice.\nThe researchers said J.D. Williams, Zoetis\u2019 chief data and analytics officer, likened it to teaching people how to change tires \u2014 by making them change tires.\n\nProvide approved tools from trusted vendors.\n\nSince banning AI tools won\u2019t work and allowing the use of any AI tools isn\u2019t feasible either because it becomes impossible to use them safely, organizations should instead provide approved AI tools to employees.\nZoetis implemented a \u201cGenAI app store\u201d where employees apply for a licensed seat. They have to say why they need the app, and then share their experiences using it. 
This helps the company identify valuable applications while managing costs.\n\u201cIt\u2019s how you avoid paying $50 a month for Joe from Finance who \u2026 used it exactly once to write a birthday card,\u201d van der Meulen said.\nFor organizations just beginning their GenAI journey, Wixom also recommended establishing a center of excellence \u2014 which could be a single person or a small team \u2014 to provide an enterprise-wide perspective and coordinate efforts across departments.\nBut most of all, \u201cit is important to remind everyone what the end game is here,\u201d Wixom said. \u201cThe point of AI, regardless of its flavor, should be to create value for our organizations and ideally value that hits our books.\u201d\nFor all PYMNTS AI coverage, subscribe to the daily AI Newsletter.\nThe post How to Manage Risks When Employees Use AI Secretly for Work appeared first on PYMNTS.com.", "date_published": "2025-04-17T09:00:26-04:00", "date_modified": "2025-04-16T11:15:18-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/Bring-Your-Own-AI.jpg", "tags": [ "Artificial Intelligence", "chatbots", "GenAI", "Innovation", "MIT", "News", "PYMNTS News", "Technology", "What's Hot" ] }, { "id": "https://www.pymnts.com/?p=2681754", "url": "https://www.pymnts.com/news/artificial-intelligence/2025/how-cfos-are-unlocking-the-power-of-advanced-analytics/", "title": "How CFOs Are Unlocking the Power of Advanced Analytics", "content_html": "

Some opportunities are too great to ignore. For chief financial officers looking to give their procurement processes a shot in the arm, generative artificial intelligence (GenAI) represents one of those opportunities.

\n

According to the latest numbers from the PYMNTS Intelligence March 2025 CAIO Report, \u201cThe Investment Impact of GenAI Operating Standards on Enterprise Adoption,\u201d 73% of enterprises are actively exploring GenAI to enhance procurement efficiency.

\n

This interest spans various industries where procurement processes are often mired in manual procedures, inefficiencies and communication gaps. By integrating GenAI tools, companies can work to streamline vendor management, accelerate contract analysis and improve demand forecasting.

\n

High-impact enterprises, particularly those with a robust technological infrastructure, are leading the charge. They see GenAI as a way to reduce operational bottlenecks and drive strategic decision making. For these companies, using AI\u2019s capabilities is not just about cost-cutting but about enhancing overall business agility and competitiveness.

\n

From automating mundane tasks to providing insights businesses wouldn\u2019t have seen otherwise, the possibilities of integrating GenAI across procurement function workflows could be immense. The businesses experimenting now could find themselves with an advantage in the future.

\n

Read also: 3 Ways Embedded Finance Solutions Are Remaking B2B Procurement

\n

The High Potential of GenAI in Procurement

\n

Procurement is a function that, while often overlooked, serves as the backbone of strategic sourcing, supplier management and cost optimization. Yet by their nature, procurement processes involve repetitive, time-intensive tasks such as drafting contracts, processing invoices and maintaining supplier relationships.

\n

If automated effectively, these can unlock value for organizations. GenAI\u2019s ability to process vast amounts of data and generate human-like language can enhance procurement efficiency through natural language processing and intelligent automation.

\n

The PYMNTS Intelligence report found that among firms deploying GenAI for high-impact automation, 30% have already adopted it for procure-to-pay processes, while 48% are still evaluating it.

\n

Beyond automation, GenAI can offer firms an advantage in generating actionable insights from vast and often unstructured datasets. Procurement departments, which are frequently tasked with managing thousands of supplier relationships, could stand to benefit from AI\u2019s predictive and prescriptive analytics capabilities.

\n

By streamlining procurement processes and enhancing decision making, organizations can achieve improved efficiency and reduce operational costs. For example, AI can suggest supplier consolidation strategies, uncover rogue spending, or highlight areas where alternative sourcing could result in financial benefits.

\n

See also: Better Standards Outshine Flashier Tech as Winning GenAI Recipe for Procurement

\n

Governance Concerns Could Slow GenAI Adoption

\n

Despite the potential of GenAI across procurement, CFO enthusiasm for its implementation is tempered by concerns over the lack of clear operating standards. Without transparent governance frameworks, companies risk exposure to data breaches, algorithmic bias and reputational damage.

\n

It\u2019s challenging for finance leaders to make confident investment decisions when the rules of the game aren\u2019t clear, and the PYMNTS report found that 38% of CFOs identified this ambiguity as a moderate or significant obstacle to adoption.

\n

Concerns over accountability and traceability are equally prevalent. Many CFOs worry that GenAI models, particularly those built on proprietary data, could generate outputs that are difficult to audit or explain. The absence of clear standards also raises questions about intellectual property rights and ethical use.

\n

Despite these concerns, most CFOs are not discounting GenAI\u2019s transformative potential. Rather, they are urging the development of more robust standards to guide responsible innovation. Many are calling for frameworks that ensure privacy, reliability and sustainability without stifling creativity.

\n

The future of GenAI adoption will likely depend on the ability of industry leaders to collaborate in developing transparent, enforceable standards. Whether these guidelines come from government regulators, industry consortia or individual enterprises themselves remains to be seen.

\n

For now, CFOs are left to navigate a landscape rich with potential but fraught with uncertainty. As more companies experiment with GenAI, the insights they gather will undoubtedly shape the frameworks of the future.

\n

For all PYMNTS AI and B2B coverage, subscribe to the daily AI and B2B Newsletters.

\n

The post How CFOs Are Unlocking the Power of Advanced Analytics appeared first on PYMNTS.com.

\n", "content_text": "Some opportunities are too great to ignore. For chief financial officers looking to give their procurement processes a shot in the arm, generative artificial intelligence (GenAI) represents one of those opportunities.\nAccording to the latest numbers from the PYMNTS Intelligence March 2025 CAIO Report, \u201cThe Investment Impact of GenAI Operating Standards on Enterprise Adoption,\u201d 73% of enterprises are actively exploring GenAI to enhance procurement efficiency.\nThis interest spans various industries where procurement processes are often mired in manual procedures, inefficiencies and communication gaps. By integrating GenAI tools, companies can work to streamline vendor management, accelerate contract analysis and improve demand forecasting.\nHigh-impact enterprises, particularly those with a robust technological infrastructure, are leading the charge. They see GenAI as a way to reduce operational bottlenecks and drive strategic decision making. For these companies, using AI\u2019s capabilities is not just about cost-cutting but about enhancing overall business agility and competitiveness.\nFrom automating mundane tasks to providing insights businesses wouldn\u2019t have seen otherwise, the possibilities of integrating GenAI across procurement function workflows could be immense. The businesses experimenting now could find themselves with an advantage in the future.\nRead also: 3 Ways Embedded Finance Solutions Are Remaking B2B Procurement\nThe High Potential of GenAI in Procurement\nProcurement is a function that, while often overlooked, serves as the backbone of strategic sourcing, supplier management and cost optimization. Yet by their nature, procurement processes involve repetitive, time-intensive tasks such as drafting contracts, processing invoices and maintaining supplier relationships.\nIf automated effectively, these can unlock value for organizations. 
GenAI\u2019s ability to process vast amounts of data and generate human-like language can enhance procurement efficiency through natural language processing and intelligent automation.\nThe PYMNTS Intelligence report found that among firms deploying GenAI for high-impact automation, 30% have already adopted it for procure-to-pay processes, while 48% are still evaluating it.\nBeyond automation, GenAI can offer firms an advantage in generating actionable insights from vast and often unstructured datasets. Procurement departments, which are frequently tasked with managing thousands of supplier relationships, could stand to benefit from AI\u2019s predictive and prescriptive analytics capabilities.\nBy streamlining procurement processes and enhancing decision making, organizations can achieve improved efficiency and reduce operational costs. For example, AI can suggest supplier consolidation strategies, uncover rogue spending, or highlight areas where alternative sourcing could result in financial benefits.\nSee also: Better Standards Outshine Flashier Tech as Winning GenAI Recipe for Procurement\nGovernance Concerns Could Slow GenAI Adoption\nDespite the potential of GenAI across procurement, CFO enthusiasm for its implementation is tempered by concerns over the lack of clear operating standards. Without transparent governance frameworks, companies risk exposure to data breaches, algorithmic bias and reputational damage.\nIt\u2019s challenging for finance leaders to make confident investment decisions when the rules of the game aren\u2019t clear, and the PYMNTS report found that 38% of CFOs identified this ambiguity as a moderate or significant obstacle to adoption.\nConcerns over accountability and traceability are equally prevalent. Many CFOs worry that GenAI models, particularly those built on proprietary data, could generate outputs that are difficult to audit or explain. 
The absence of clear standards also raises questions about intellectual property rights and ethical use.\nDespite these concerns, most CFOs are not discounting GenAI\u2019s transformative potential. Rather, they are urging the development of more robust standards to guide responsible innovation. Many are calling for frameworks that ensure privacy, reliability and sustainability without stifling creativity.\nThe future of GenAI adoption will likely depend on the ability of industry leaders to collaborate in developing transparent, enforceable standards. Whether these guidelines come from government regulators, industry consortia or individual enterprises themselves remains to be seen.\nFor now, CFOs are left to navigate a landscape rich with potential but fraught with uncertainty. As more companies experiment with GenAI, the insights they gather will undoubtedly shape the frameworks of the future.\nFor all PYMNTS AI and B2B coverage, subscribe to the daily AI and B2B Newsletters.\nThe post How CFOs Are Unlocking the Power of Advanced Analytics appeared first on PYMNTS.com.", "date_published": "2025-04-16T04:00:21-04:00", "date_modified": "2025-04-15T23:03:21-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/GenAI-procurement-B2B-payments.jpg", "tags": [ "Artificial Intelligence", "automation", "B2B", "B2B Payments", "CFO", "commercial payments", "Featured News", "GenAI", "Innovation", "News", "procurement", "PYMNTS Intelligence", "PYMNTS News", "PYMNTS Study", "Technology", "The Investment Impact of GenAI Operating Standards on 
Enterprise Adoption" ] }, { "id": "https://www.pymnts.com/?p=2684245", "url": "https://www.pymnts.com/news/artificial-intelligence/2025/nvidia-and-partners-to-produce-ai-supercomputers-in-us/", "title": "Nvidia and Partners to Produce AI Supercomputers in US", "content_html": "

Nvidia said Monday (April 14) that its artificial intelligence (AI) supercomputers will be built in the U.S. for the first time, starting in the next 12 to 15 months.

\n

The company is working with manufacturing partners to build two manufacturing plants in Texas \u2014 one in Houston with\u00a0Foxconn and one in Dallas with\u00a0Wistron \u2014 and expects to begin mass production within that timeframe, Nvidia said in a Monday\u00a0press release.

\n

\u201cAdding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain and boosts our resiliency,\u201d Nvidia Founder and CEO\u00a0Jensen Huang said in the release.

\n

Nvidia Blackwell chips are now being produced at the plants of another manufacturing partner,\u00a0TSMC, in Phoenix, Arizona, according to the release.

\n

Those plants and the new supercomputer facilities will add up to a million square feet of manufacturing space, the release said.

\n

Nvidia also partners with\u00a0Amkor\u00a0and\u00a0SPIL for packaging and testing operations in Arizona, per the release.

\n

Within the next four years, Nvidia plans to produce up to $500 billion\u00a0of AI infrastructure in the U.S. through its partnerships with TSMC, Foxconn, Wistron, Amkor and SPIL, according to the release.

\n

\u201cManufacturing Nvidia AI chips and supercomputers for American AI factories is expected to create hundreds of thousands of jobs and drive trillions of dollars in economic security over the coming decades,\u201d the release said.

\n

Huang said in March that Nvidia will procure \u201cseveral hundred billion\u201d dollars\u2019 worth of chips and other electronics\u00a0manufactured in the U.S. over the next four years.

\n

The chip designer will source these products from suppliers like TSMC and Foxconn, which can manufacture its latest systems in the U.S.

\n

By doing so, Nvidia will avoid tariffs and improve the resiliency of its supply chain, Huang said at the time.

\n

Asked about the Nvidia announcement during a Monday press conference, President Donald Trump attributed the company\u2019s move to tariffs.

\n

\u201cI knew it was going to happen, but not to the extent that it happened. It\u2019s\u00a0big,\u201d Trump said in a\u00a0video\u00a0of the press conference posted on X by the White House\u2019s Rapid Response 47 account. \u201cAnd the reason they did it is because of the election on Nov. 5 and because of a thing called tariffs.\u201d

\n

The post Nvidia and Partners to Produce AI Supercomputers in US appeared first on PYMNTS.com.

\n", "content_text": "Nvidia said Monday (April 14) that its artificial intelligence (AI) supercomputers will be built in the U.S. for the first time, starting in the next 12 to 15 months.\nThe company is working with manufacturing partners to build two manufacturing plants in Texas \u2014 one in Houston with\u00a0Foxconn and one in Dallas with\u00a0Wistron \u2014 and expects to begin mass production within that timeframe, Nvidia said in a Monday\u00a0press release.\n\u201cAdding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain and boosts our resiliency,\u201d Nvidia Founder and CEO\u00a0Jensen Huang said in the release.\nNvidia Blackwell chips are now being produced at the plants of another manufacturing partner,\u00a0TSMC, in Phoenix, Arizona, according to the release.\nThose plants and the new supercomputer facilities will add up to a million square feet of manufacturing space, the release said.\nNvidia also partners with\u00a0Amkor\u00a0and\u00a0SPIL for packing and testing operations in Arizona, per the release.\nWithin the next four years, Nvidia plans to produce up to $500 billion\u00a0of AI infrastructure in the U.S. through its partnerships with TSMC, Foxconn, Wistron, Amkor and SPIL, according to the release.\n\u201cManufacturing Nvidia AI chips and supercomputers for American AI factories is expected to create hundreds of thousands of jobs and drive trillions of dollars in economic security over the coming decades,\u201d the release said.\nHuang said in March that Nvidia will procure \u201cseveral hundred billion\u201d dollars\u2019 worth of chips and other electronics\u00a0manufactured in the U.S. 
over the next four years.\nThe chip designer will source these products from suppliers like TSMC and Foxconn, which can manufacture its latest systems in the U.S.\nBy doing so, Nvidia will avoid tariffs and improve the resiliency of its supply chain, Huang said at the time.\nAsked during a Monday press conference about the Nvidia announcement, President Donald Trump attributed the company\u2019s move to tariffs.\n\u201cI knew it was going to happen, but not to the extent that it happened. It\u2019s\u00a0big,\u201d Trump said in a\u00a0video\u00a0of the press conference posted on X by the White House\u2019s Rapid Response 47 account. \u201cAnd the reason they did it is because of the election on Nov. 5 and because of a thing called tariffs.\u201d\nThe post Nvidia and Partners to Produce AI Supercomputers in US appeared first on PYMNTS.com.", "date_published": "2025-04-14T15:16:12-04:00", "date_modified": "2025-04-14T15:16:12-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2024/09/Nvidia-AI-2.jpg", "tags": [ "AI", "AI Chips", "AI infrastructure", "Artificial Intelligence", "Jensen Huang", "News", "NVIDIA", "PYMNTS News", "What's Hot" ] }, { "id": "https://www.pymnts.com/?p=2684126", "url": "https://www.pymnts.com/news/artificial-intelligence/2025/meta-begin-training-ai-user-data-european-union/", "title": "Meta to Begin Training AI on User Data in EU", "content_html": "

Meta will begin using content shared by adults in the European Union to train its artificial intelligence models after the European Data Protection Board\u00a0said the company\u2019s approach met its legal requirements.

\n

The company plans to train its AI using public content \u2014 including public posts and comments shared by adults on its products in the EU \u2014 and people\u2019s questions, queries and other interactions with Meta AI, Meta said in a Monday (April 14) blog post.

\n

Meta will not use public data from account holders in the EU who are under the age of 18, people\u2019s private messages with family and friends, or data from users who submit an objection form provided by the company, according to the post.

\n

The company launched Meta AI in the EU in March and plans to make its chat function available for free across the region within its messaging apps: Facebook, Instagram, WhatsApp and Messenger, per the post.

\n

\u201cWe believe we have a responsibility to build AI that\u2019s not just available to Europeans, but is actually built for them,\u201d the post said.

\n

By training its generative AI models on data from EU users, the company will be better able to understand European dialects, colloquialisms, hyper-local knowledge and other regional nuances so it can serve the region\u2019s users, according to the post.

\n

\u201cIt\u2019s important to note that the kind of AI training we\u2019re doing is not unique to Meta, nor will it be unique to Europe,\u201d the post said. \u201cThis is how we have been training our generative AI models for other regions since launch. We\u2019re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.\u201d

\n

It was reported in July that Meta decided to withhold what was at the time its latest multimodal AI model from the EU, citing an \u201cunpredictable\u201d regulatory environment in the region.

\n

The company\u2019s retreat stemmed from uncertainties surrounding compliance with the General Data Protection Regulation (GDPR), particularly AI model training using user data from its products.

\n

In June, a European privacy group said it filed complaints with 11 European countries, arguing that Meta\u2019s use of user data in its proposed AI practices violated the GDPR.

\n

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

\n

The post Meta to Begin Training AI on User Data in EU appeared first on PYMNTS.com.

\n", "content_text": "Meta will begin using content shared by adults in the European Union to train its artificial intelligence models after the European Data Protection Board\u00a0said the company\u2019s approach met its legal requirements.\nThe company plans to train its AI using public content \u2014 including public posts and comments shared by adults on its products in the EU \u2014 and people\u2019s questions, queries and other interactions with Meta AI, Meta said in a Monday (April 14) blog post.\nMeta will not use public data from account holders in the EU who are under the age of 18, people\u2019s private messages with family and friends, or data from users who submit an objection form provided by the company, according to the post.\nThe company launched Meta AI in the EU in March and plans to make its chat function available for free across the region within its messaging apps: Facebook, Instagram, WhatsApp and Messenger, per the post.\n\u201cWe believe we have a responsibility to build AI that\u2019s not just available to Europeans, but is actually built for them,\u201d the post said.\nBy training its generative AI models on data from EU users, it will be better able to understand European dialects, colloquialisms, hyper-local knowledge and other aspects so it can serve the region\u2019s users, according to the post.\n\u201cIt\u2019s important to note that the kind of AI training we\u2019re doing is not unique to Meta, nor will it be unique to Europe,\u201d the post said. \u201cThis is how we have been training our generative AI models for other regions since launch. 
We\u2019re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.\u201d\nIt was reported in July that Meta decided to withhold what was at the time its latest multimodal AI model from the EU, citing an \u201cunpredictable\u201d regulatory environment in the region.\nThe company\u2019s retreat stemmed from uncertainties surrounding compliance with the General Data Protection Regulation (GDPR), particularly AI model training using user data from its products.\nIn June, a European privacy group said it filed complaints with 11 European countries, arguing that Meta\u2019s use of user data in its proposed AI practices violated the GDPR.\nFor all PYMNTS AI coverage, subscribe to the daily AI Newsletter.\nThe post Meta to Begin Training AI on User Data in EU appeared first on PYMNTS.com.", "date_published": "2025-04-14T13:30:25-04:00", "date_modified": "2025-04-14T13:30:25-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2023/08/meta-AI.jpg", "tags": [ "Artificial Intelligence", "data", "data privacy", "EMEA", "EU", "GenAI", "Innovation", "international", "Meta", "News", "privacy", "PYMNTS News", "regulations", "Social Media", "Technology", "What's Hot" ] }, { "id": "https://www.pymnts.com/?p=2682904", "url": "https://www.pymnts.com/news/artificial-intelligence/2025/bank-england-warns-higher-market-volatility-from-ai-driven-trading/", "title": "Bank of England Warns of Higher Market Volatility From AI-Driven Trading", "content_html": "

The use of artificial intelligence in algorithmic trading could exacerbate market volatility and amplify financial instability, according to a policy paper by the Bank of England released this week.

\n

As global markets reel from President Donald Trump\u2019s tariff policy changes, the United Kingdom\u2019s central bank warned that widespread use of AI for trading and investing could lead to \u201cherding\u201d behavior, raising the chance of sudden market drops, especially during times of stress, when firms might sell off assets all at once.

\n

As more firms use AI for investing and trading, there\u2019s a risk that many will end up making the same decisions at the same time, the paper said.

\n

\u201cGreater use of AI to inform trading and investment decisions could help increase market efficiency,\u201d per the paper. \u201cBut it could also lead market participants inadvertently to take actions collectively in such a way that reduces stability.\u201d

\n

For example, the use of more advanced AI-based trading strategies could result in firms \u201ctaking increasingly correlated positions and acting in a similar way during a stress, thereby amplifying shocks,\u201d according to the paper.

\n

Such market instability can affect the amount of capital available to businesses since they can\u2019t raise as much when markets are down.

\n

The report comes as global equity and bond markets have been on a roller coaster since the Trump administration announced a minimum of 10% tariffs on imports from all countries, with China, the European Union and a few other countries getting hit with higher rates.

\n

The Dow Jones Industrial Average has fallen 6.2% since Trump\u2019s April 2 announcement, while the S&P 500 gave up 7.1% and the Nasdaq Composite fell 6.9%. The benchmark 10-year Treasury yield rose from 4.053% to 4.509% over the same period as investors sold off Treasurys.

\n

Federal Reserve Chair Jerome Powell said tariffs are \u201clikely to raise inflation in coming quarters\u201d and \u201cit is also possible that the effects could be more persistent,\u201d according to a transcript of his April 4 speech before the Society for Advancing Business Editing and Writing. Inflation is a key statistic influencing monetary policy such as the direction of the Fed funds rate.

\n

Powell\u2019s comments came five days before Trump decided to pause tariffs for 90 days for nearly 60 countries, except China.

\n

Read also: Trump Boosts Tariffs on Low-Value Packages Again After China Retaliates

\n

AI and Systemic Shocks

\n

The use of AI in algorithmic trading could exacerbate these extremes because many companies rely on the same AI models or data, leading them to act similarly, according to the BoE paper.

\n

Although AI might make markets more efficient by processing information faster than humans, it could also make them more fragile and less able to handle shocks, the paper said.

\n

The central bank said the International Monetary Fund (IMF) identified herding and market concentration as the top risks that could come from wider adoption of generative AI in the capital markets.

\n

The IMF\u2019s 2024 report said the adoption of AI in trading and investing is \u201clikely to increase significantly in the near future.\u201d While AI may reduce some financial stability risks through improved risk management and market monitoring, at the same time \u201cnew risks may arise, including increased market speed and volatility under stress\u201d and others.

\n

On the positive side, AI could help financial services firms manage risk more effectively by making better use of the data they already have, the BoE paper said. With stronger risk management, firms are less likely to be caught off guard when prices suddenly drop.

\n

That means they might not need to rush into selling off assets all at once, which is what happens during a fire sale. The resulting damage caused by market selloffs could be mitigated or even avoided.

\n

The central bank also pointed to another potential mitigating factor. If investment managers use AI to tailor strategies to each client, markets could become more stable, since investors would be less likely to hold the same assets.

\n

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

\n

The post Bank of England Warns of Higher Market Volatility From AI-Driven Trading appeared first on PYMNTS.com.

\n", "content_text": "The use of artificial intelligence in algorithmic trading could exacerbate market volatility and amplify financial instability, according to a policy paper by the Bank of England released this week.\nAs global markets reel from President Donald Trump\u2019s tariff policy changes, the United Kingdom\u2019s central bank warned that the widespread use of AI for trading and investing could lead to a \u201cherding\u201d behavior that could raise the chance of sudden market drops, especially during times of stress because firms might sell off assets at once.\nAs more firms use AI for investing and trading, there\u2019s a risk that many will end up making the same decisions at the same time, the paper said.\n\u201cGreater use of AI to inform trading and investment decisions could help increase market efficiency,\u201d per the paper. \u201cBut it could also lead market participants inadvertently to take actions collectively in such a way that reduces stability.\u201d\nFor example, the use of more advanced AI-based trading strategies could lead firms to \u201ctaking increasingly correlated positions and acting in a similar way during a stress, thereby amplifying shocks,\u201d according to the paper.\nSuch market instability can affect the amount of capital available to businesses since they can\u2019t raise as much when markets are down.\nThe report comes as global equity and bond markets have been on a roller coaster since the Trump administration announced a minimum of 10% tariffs on imports from all countries, with China, the European Union and a few other countries getting hit with higher rates.\nThe Dow Jones Industrial Average has fallen by 6.2% since Trump\u2019s April 2 announcement, while the S&P 500 gave up 7.1% and the Nasdaq Composite fell by 6.9%. 
The benchmark 10-year Treasury yields rose from 4.053% to 4.509% over the same time frame as investors flocked to safety.\nFederal Reserve Chair Jerome Powell said tariffs are \u201clikely to raise inflation in coming quarters\u201d and \u201cit is also possible that the effects could be more persistent,\u201d according to a transcript of his April 4 speech before the Society for Advancing Business Editing and Writing. Inflation is a key statistic influencing monetary policy such as the direction of the Fed funds rate.\nPowell\u2019s comments came five days before Trump decided to pause tariffs for 90 days for nearly 60 countries, except China.\nRead also: Trump Boosts Tariffs on Low-Value Packages Again After China Retaliates\nAI and Systemic Shocks\nThe use of AI in algorithmic trading could exacerbate these extremes because many companies rely on the same AI models or data, leading them to act similarly, according to the BoE paper.\nAlthough AI might make markets more efficient by processing information faster than humans, it could also make them more fragile and less able to handle shocks, the paper said.\nThe central banker said the International Monetary Fund (IMF) identified herding and market concentration as the top risks that could come from wider adoption of generative AI in the capital markets.\nThe IMF\u2019s 2024 report said the adoption of AI in trading and investing is \u201clikely to increase significantly in the near future.\u201d While AI may reduce some financial stability risks through improved risk management and market monitoring, at the same time \u201cnew risks may arise, including increased market speed and volatility under stress\u201d and others.\nOn the positive side, AI could help financial services firms manage risk more effectively by making better use of the data they already have, the BoE paper said. 
With stronger risk management, firms are less likely to be caught off guard when prices suddenly drop.\nThat means they might not need to rush into selling off assets all at once, which is what happens during a fire sale. The resulting damage caused by market selloffs could be mitigated or even avoided.\nThe central banker also pointed to another potential mitigating factor. If investment managers use AI to tailor strategies specifically for each client, it could lead to more market stability since people won\u2019t hold the same assets.\nFor all PYMNTS AI coverage, subscribe to the daily AI Newsletter.\nThe post Bank of England Warns of Higher Market Volatility From AI-Driven Trading appeared first on PYMNTS.com.", "date_published": "2025-04-11T14:42:44-04:00", "date_modified": "2025-04-11T14:42:44-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2025/04/Bank-England-AI-trading.jpg", "tags": [ "Artificial Intelligence", "Bank of England", "economy", "GenAI", "inflation", "Innovation", "Investments", "News", "PYMNTS News", "stock market", "Stocks", "tariffs", "taxes", "Technology" ] }, { "id": "https://www.pymnts.com/?p=2682815", "url": "https://www.pymnts.com/news/artificial-intelligence/2025/this-week-in-ai-mitigate-tariff-uncertainty-bank-america-tech-investment/", "title": "This Week in AI: Using AI to Mitigate Tariff Uncertainty and Bank of America\u2019s Big Bet", "content_html": "

Artificial intelligence continues to dominate headlines as businesses accelerate their digital transformations. From banking to AI models, here are the top stories PYMNTS published this week.

\n

Companies Use AI to Help Mitigate Tariff Impacts

\n

The ability of AI to make businesses more efficient is coming in handy as President Donald Trump\u2019s back-and-forth on tariffs is making the markets swoon.

\n

A Zilliant survey found that 83% of U.S. C-suite leaders are using AI to adapt their pricing strategies to economic volatility.

\n

AI can help by monitoring and understanding tariffs in real time; finding new suppliers or sources for raw materials; improving scenario planning; raising worker productivity; and reducing costs.

\n

Bank of America Invests in New Initiatives Like AI

\n

Bank of America is allocating $4 billion in 2025 toward new initiatives, including AI, or nearly a third of its overall tech budget.

\n

The financial services giant is seeing the benefits of using AI and machine learning, a journey it began in 2018 after launching an AI-powered virtual assistant called Erica to help consumers with financial matters. That\u2019s four years before ChatGPT became a household name.

\n

Gains across its business include a 50% reduction in calls to IT support after employees began using Erica for Employees, an internal AI chatbot. Developers were able to raise their efficiency by 20%. Employees save tens of thousands of hours per year by using AI to prepare materials ahead of client meetings, while sales and trading teams are more quickly and efficiently finding and summarizing Bank of America research and market commentary.

\n

AI Helps Businesses Streamline Payment Processes

\n

AI is becoming the equivalent of a corporate \u201cpacemaker\u201d as the technology helps enterprises manage their financial operations by automating and regulating billions of dollars in disbursements.

\n

The result is that AI is becoming a profit center that helps businesses streamline their payment processes, ensure disbursements flow on time and in the right amount, and better manage their capital.

\n

More than 80% of chief financial officers at large companies are either already using AI or considering adopting it for a core financial function, according to a forthcoming PYMNTS Intelligence report, \u201cSmart Spending: How AI Is Transforming Financial Decision Making.\u201d

\n

Salesforce Sees Massive Growth in Data Cloud Platform

\n

Salesforce is experiencing explosive growth in its data cloud platform, driven by enterprise demand for generative and agentic AI, technologies that rely on clean, unified, real-time data to be effective.

\n

In an interview with PYMNTS, Gabrielle Tao, senior vice president of product management at Salesforce, said most companies struggle to unlock the full value of their scattered and siloed data.

\n

Meta\u2019s Open-Source Llama 4 Bad for Rivals Like OpenAI

\n

Meta released its open-source Llama 4 models this week: Llama 4 Scout and Maverick.

\n

They are the first multimodal models from Meta, meaning they can ingest images as well as text. Scout has a 10 million token context window (the amount of text the model can consider in a single prompt). The previous record holder was Google\u2019s Gemini 2.5, which offers a 1 million token window, with plans to expand to 2 million.

\n

Llama 4 is a challenge to proprietary models from OpenAI and Google.

\n

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

\n

The post This Week in AI: Using AI to Mitigate Tariff Uncertainty and Bank of America\u2019s Big Bet appeared first on PYMNTS.com.

\n", "content_text": "Artificial intelligence continues to dominate headlines as businesses accelerate their digital transformations. From banking to AI models, here are the top stories PYMNTS published this week.\nCompanies Use AI to Help Mitigate Tariff Impacts\nThe ability of AI to make businesses more efficient is coming in handy as President Donald Trump\u2019s back-and-forth on tariffs is making the markets swoon.\nA Zilliant survey found that 83% of U.S. C-suite leaders are using AI to adapt their pricing strategies to economic volatility.\nAI can help by monitoring and understanding tariffs in real time; finding new suppliers or sources for raw materials; improving scenario planning; raising worker productivity; and reducing costs.\nBank of America Invests in New Initiatives Like AI\nBank of America is allocating $4 billion toward new initiatives including AI in 2025, or nearly a third of its overall tech budget.\nThe financial services giant is seeing the benefits of using AI and machine learning, a journey it began in 2018 after launching an AI-powered virtual assistant called Erica to help consumers with financial matters. That\u2019s four years before ChatGPT became a household name.\nGains across its business include a 50% reduction in calls to IT support after employees began using Erica for Employees, an internal AI chatbot. Developers were able to raise their efficiency by 20%. 
Employees save tens of thousands of hours per year by using AI to prepare materials ahead of client meetings, while sales and trading teams are more quickly and efficiently finding and summarizing Bank of America research and market commentary.\nAI Helps Businesses Streamline Payment Processes\nAI is becoming the equivalent of a corporate \u201cpacemaker\u201d as the technology helps enterprises manage their financial operations by automating and regulating billions of dollars in disbursements.\nThe result is that AI is becoming a profit center that helps businesses streamline their payment processes, ensure disbursements flow on time and in the right amount, and better manage their capital.\nMore than 80% of chief financial officers at large companies are either already using AI or considering adopting it for a core financial function, according to a forthcoming PYMNTS Intelligence report, \u201cSmart Spending: How AI Is Transforming Financial Decision Making.\u201d\nSalesforce Sees Massive Growth in Data Cloud Platform\nSalesforce is experiencing explosive growth in its data cloud platform, driven by enterprise demand for generative and agentic AI, technologies that rely on clean, unified, real-time data to be effective.\nIn an interview with PYMNTS, Gabrielle Tao, senior vice president of product management at Salesforce, said most companies struggle to unlock the full value of their scattered and siloed data.\nMeta\u2019s Open-Source Llama 4 Bad for Rivals Like OpenAI\nMeta released its open-source Llama 4 models this week: Llama 4 Scout and Maverick.\nThey are the first multimodal models from Meta, meaning they can ingest images, not only text. Scout has a 10 million token context window (the amount of space for prompts). 
The previous record holder was Google\u2019s Gemini 2.5, with 1 million and going up to 2 million.\nLlama 4 is a challenge to proprietary models from OpenAI and Google.\nFor all PYMNTS AI coverage, subscribe to the daily AI Newsletter.\nThe post This Week in AI: Using AI to Mitigate Tariff Uncertainty and Bank of America\u2019s Big Bet appeared first on PYMNTS.com.", "date_published": "2025-04-11T13:20:52-04:00", "date_modified": "2025-04-11T13:20:52-04:00", "authors": [ { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" } ], "author": { "name": "PYMNTS", "url": "https://www.pymnts.com/author/pymnts/", "avatar": "https://secure.gravatar.com/avatar/679fcf5c2ed5358e99e8e23b22e3b5d761e37bdb76fa7b0e13d8ecd9ff01bf88?s=512&d=blank&r=g" }, "image": "https://www.pymnts.com/wp-content/uploads/2022/10/Bank-of-America-1.jpg", "tags": [ "Artificial Intelligence", "automation", "Bank of America", "chatbots", "disbursements", "GenAI", "Innovation", "Meta", "News", "PYMNTS News", "Salesforce", "tariffs", "Technology" ] } ] }