Artificial Intelligence Archives | PYMNTS.com

Google’s Gemini Reaches 350 Million Monthly Active Users but Lags Rivals
April 23, 2025

Gemini, Google’s chatbot, reached 350 million monthly active users in March but still lagged behind ChatGPT and Meta AI, TechCrunch reported Wednesday (April 23), citing data revealed during Google’s antitrust suit.

The data also showed that Gemini increased its number of daily active users fourfold since October, according to the report. The chatbot had 35 million daily active users in March, up from 9 million in October.

While the chatbot has gained widespread consumer adoption, it lags behind its two most popular competitors, the report said.

OpenAI’s ChatGPT had 600 million monthly active users in March, according to the Google data, and Meta AI had about 500 million monthly active users in September, Meta CEO Mark Zuckerberg said at the time, per the TechCrunch report.

It was reported April 3 that Google removed the head of Gemini as the AI chatbot continued to lag behind ChatGPT. Sissie Hsiao stepped down and was replaced by Josh Woodward, the head of Google Labs who oversaw the launch of NotebookLM.

Google DeepMind CEO Demis Hassabis reportedly told staffers in a memo that the change in leadership would “sharpen our focus on the next evolution of the Gemini app.”

During a February earnings call, Google CEO Sundar Pichai said the company plans to insert ads into the Gemini multimodal model, as it did in the AI Overviews portion of its search engine, in an attempt to recoup the high costs of processing AI workloads.

Gemini ads will not come in 2025, and Google will focus on offering a free and paid version of Gemini, Pichai said.

“We do have very good ideas for native ad concepts,” Pichai said during the call. “But you will see us lead with the user experience” and first make sure it works at scale.

In December, Pichai told Google staff that 2025 will be a crucial year for the company and “the stakes are high” as Google focuses on unlocking the benefits of technology and solving real user problems.

Gemini is a top priority, and Google believes this will be its next app to reach 500 million users, Pichai said.

“But we have some work to do in 2025 to close the gap and establish a leadership position there as well,” Pichai said.

This Week in AI: Tariffs Hit AI, Tech Giants’ Antitrust Woes and a More Adept ChatGPT
April 18, 2025

It’s a big week for artificial intelligence news. AI hardware is being dinged by tariff troubles while two juggernauts in the AI field — Meta and Google — are in the crosshairs of the U.S. government. Meanwhile, OpenAI unveiled a more capable and smarter ChatGPT and revealed plans for an AI software engineer.

Tariff Uncertainty Hits AI Hardware

The unpredictable tariff policy of President Donald Trump’s administration is creating long-term uncertainty for the AI industry.

The tariffs don’t target digital services and intellectual property like AI software. AI pioneer Andrew Ng said intellectual property is hard to tax due to its intangible nature and ease of cross-border transfer. A Morgan Stanley note said major software firms like Adobe and Salesforce have not yet seen demand impacts.

However, AI models require powerful hardware to function — and that’s where tariffs may bite.

Although chips are exempted, tariffs on essential infrastructure like servers, cooling systems and networking gear could disrupt AI development. Ng said bringing computer equipment manufacturing back to the United States isn’t feasible due to a lack of domestic expertise and supply chain capacity.

Meta’s Antitrust Trial Begins

The Federal Trade Commission’s 2020 antitrust lawsuit against Meta went to trial this week, with the U.S. government seeking a divestiture of Instagram and WhatsApp, among other remedies.

The FTC sued Meta (formerly Facebook) for allegedly engaging in “anticompetitive conduct” to weaken or squash rivals and protect its “monopoly” in social media, according to the revised complaint.

But Meta Chief Legal Officer Jennifer Newstead said in a blog post that the FTC’s “weak antitrust lawsuit … ignores how the market actually works and chases a theory that doesn’t hold up in the real world.”

In today’s digital landscape, Meta competes with TikTok, YouTube and X for eyeballs and engagement.

“In reality, more time is spent on TikTok and YouTube than on either Facebook or Instagram,” Newstead said in the post.

Court Rules That Google Broke Law to Dominate in Ads

A Virginia district court judge ruled that Google broke the law to dominate the online advertising technology market, one of two major antitrust lawsuits brought by the U.S. government against the search giant.

The government sued Google for monopolizing three parts of the online ad market: the tools publishers use to sell ad space, the tools advertisers use to buy it, and the exchange software that connects the two.

While it is not illegal to dominate a market by innovating, Google entrenched its monopolies and tied them together, a classic antitrust violation, experts told The New York Times.

Meanwhile, a D.C. judge ruled last year in a separate case that Google holds an online search monopoly. The judge is considering a request by the Department of Justice to force Google to sell Chrome, the world’s dominant browser. A ruling is expected by August.

ChatGPT to Become Smarter, All-Around AI Agent

OpenAI unveiled two new AI models in its reasoning model family that will power ChatGPT: o3 and o4-mini.

The AI startup said these are its smartest models to date. They can use all the tools at ChatGPT’s disposal and even incorporate images into their thinking. This would be helpful for businesses looking to analyze PDFs and faxes. OpenAI said the models can even read blurry or upside-down images.

OpenAI said for most real-world uses, o3 and o4-mini will be cheaper than o1 and o3-mini while outperforming them on tasks.

The models are now available for ChatGPT Plus, Pro and Team users. ChatGPT Enterprise and Edu users will get them in a week. Free users can try o4-mini by selecting “Think” before entering a prompt.

OpenAI Developing AI Software Engineer

OpenAI Chief Financial Officer Sarah Friar said the company is building an AI agent that can do all the work of software engineers, not just augment their skills.

“This is not just augmenting the current software engineers in your workforce … it’s literally an agentic software engineer that can build an app for you,” Friar said at Goldman Sachs’ Disruptive Technology Symposium in London.

“Not only does it build it, it does all the things that software engineers hate to do,” such as quality assurance tests, bug testing and bashing, as well as the accompanying documentation, she said. “So suddenly, you can force multiply your software engineering workforce.”

Johnson & Johnson: 15% of AI Use Cases Deliver 80% of Value
April 18, 2025

Johnson & Johnson reportedly found that 10% to 15% of artificial intelligence use cases deliver 80% of the value.

The medicine and MedTech company reached that conclusion after encouraging employees to experiment with AI for about three years and reviewing the results of their pursuit of nearly 900 use cases, The Wall Street Journal (WSJ) reported Friday (April 18).

Johnson & Johnson is now devoting resources to the highest-value projects and cutting the rest, according to the report.

“We’re prioritizing, we’re scaling, we’re looking at the things that make the most sense,” Johnson & Johnson Chief Information Officer Jim Swanson said, per the report. “That was part of the maturation process we went through.”

The use cases that are delivering the most value include a generative AI copilot that coaches sales representatives on how to engage with healthcare professionals, and an internal chatbot that answers employees’ questions about company policies and benefits, according to the report.

Johnson & Johnson is also developing an AI tool that will facilitate drug discovery and another that will identify and mitigate supply chain risks, per the report.

Swanson said in the report that the broader approach was necessary three years ago to gain familiarity with the technology but that it’s now time to focus on the use cases that can be successfully implemented, are widely adopted and deliver value.

The PYMNTS Intelligence report “The AI MonitorEdge Report: Healthcare Firms Going Long on GenAI Investment” found that 90% of C-suite executives at healthcare firms with at least $1 billion in annual revenue said that their previous generative AI investments have already achieved a positive return on investment.

The two applications that represent the most frequent use cases of generative AI — product and service innovation and real-time automated customer service responses — are being used by about 6 in 10 of these healthcare firms, per the report.

It was reported in March that Apple is working on an AI agent that can dispense medical advice.

Healthcare AI firm Suki said in October that it raised $70 million in new funding to invest in the development of its products, which include an AI-powered voice assistant used by clinicians.

In October 2023, Microsoft unveiled AI-powered products that help doctors glean insights from medical data.

From Buzzwords to Bottom Lines: Understanding the AI Model Types
April 17, 2025

There’s an alphabet soup of terms floating around when it comes to artificial intelligence models. There are foundation and frontier models, large and small language models, multimodal models — and the AI model term du jour, reasoning models.

These buzzwords show up in blogs, company announcements, executive speeches, conference panels and quarterly earnings calls, but what do they actually mean? More importantly, why should business users care?

This guide explains key AI model types in plain English and how each affects cost, capability and risk for organizations.

Here are the different types of models often encountered.

Foundation Models: The Base Layer of Generative AI

Foundation models are large, general-purpose AI systems trained on massive datasets such as the entire internet. They serve as the “base model” that can be adapted to perform a wide variety of tasks.

They can include large language models, vision language models, code models and more. They are typically trained to predict the next word in a sentence, the next pixel in an image or the next token in a code sequence.

Foundation models are considered frontier models when they introduce major new capabilities and push the boundaries of what AI systems can do.

Examples: OpenAI’s GPT family of models; Google’s Gemini; Meta’s Llama; Anthropic’s Claude

Why it matters for business: These models power everything from customer service chatbots to content generation tools. You can either use them as they are, through APIs, or fine-tune (retrain) them on your company’s data to create more specialized applications.

Pros: Versatile, fast to deploy, broadly knowledgeable

Cons: Expensive to run at scale, may hallucinate or generate inaccurate content, are not inherently secure or compliant with regulations
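
To make the “use them as they are, through APIs” point concrete, below is a minimal sketch of calling a hosted foundation model from Python. The provider SDK, model name and prompts are illustrative assumptions rather than a recommendation; fine-tuning on company data is a separate, provider-specific workflow.

```python
# Minimal sketch: call a hosted foundation model through a provider API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name and prompts are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any general-purpose foundation model the provider offers
    messages=[
        {"role": "system", "content": "You are a concise customer-support assistant."},
        {"role": "user", "content": "Summarize our refund policy in two sentences."},
    ],
    temperature=0.2,  # lower temperature keeps answers more predictable
)

print(response.choices[0].message.content)
```

The same call pattern underlies most chatbot and content-generation products; swapping the model name or the provider SDK is usually the only change needed to move between foundation models.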

Large Language vs. Small Language Models

Large language models are AI models trained on huge amounts of data to learn language patterns. They power generative AI to create prose, poems, business emails and other language tasks. They are behind today’s most popular chatbots and AI assistants.

Small language models are tinier, cheaper and usually more specialized versions of large language models.

Large language models are often used by AI agents to execute tasks. The agent, which is a system and not a model, is layered on top of the large language model.

Examples: OpenAI’s GPT series; Google’s Gemini; Meta’s Llama; Anthropic’s Claude

Why it matters for business: Large language models can handle several administrative and creative tasks quickly and at scale to save employees hours of work and make business operations more efficient.

Pros: Highly capable in general tasks, can be fine-tuned to specialize in an industry or task

Cons: Expensive to run, prone to hallucinations, may absorb biases from their training data
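
The “agent layered on top of the model” distinction can be shown with a toy sketch. Everything here is invented for illustration (the task, the tool and the stubbed model call); a real agent system would send the conversation to a large language model API and typically loop until the task is done.

```python
# Toy sketch of an agent system layered on top of a large language model.
# The model call is stubbed so the example runs on its own; the task, tool and
# canned replies are invented purely for illustration.
import json
from typing import Callable

def call_llm(prompt: str) -> str:
    """Stand-in for a real large language model API call."""
    if "Tool result:" in prompt:
        return "The total on invoice INV-42 is $1,250.00."
    return 'TOOL get_invoice_total {"invoice_id": "INV-42"}'

# Tools belong to the agent system, not the model; the model only asks for them.
TOOLS: dict[str, Callable[[dict], str]] = {
    "get_invoice_total": lambda args: f"Invoice {args['invoice_id']} total: $1,250.00",
}

def run_agent(task: str) -> str:
    # Step 1: the model decides whether it needs a tool or can answer directly.
    reply = call_llm(f"Task: {task}\nAnswer directly or reply 'TOOL <name> <json args>'.")
    if reply.startswith("TOOL "):
        _, name, raw_args = reply.split(" ", 2)
        # Step 2: the agent system executes the tool and hands the result back.
        result = TOOLS[name](json.loads(raw_args))
        reply = call_llm(f"Task: {task}\nTool result: {result}\nGive the final answer.")
    return reply

print(run_agent("What is the total on invoice INV-42?"))
```

The model supplies the language understanding; the surrounding system supplies the tools, permissions and control flow, which is why the agent is described as a system rather than a model.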

Reasoning Models: Thinking and Ruminating

Reasoning models are usually fine-tuned versions of large language models designed to think through problems step-by-step. This makes them well suited to offering a second opinion on decisions, answering complex queries and handling more in-depth tasks.

Examples: OpenAI’s o series of reasoning models, such as o1, o3 and o4-mini; Google’s Gemini 2.5; Meta’s Llama 3.2 series; Anthropic’s Claude 3.7 Sonnet

Why it matters for business: It’s a smarter AI that can dive into more complex tasks, such as explaining a legal contract and its ramifications, not just summarizing the document.

Pros: Greater accuracy, deeper insight, less human oversight needed

Cons: Slower responses, higher compute cost per query

Multimodal Models: Diversity of Inputs

Multimodal models are AI models that can ingest different forms of data (text, video, images and audio).

Examples: OpenAI’s GPT-4o and GPT-4 with Vision; Google’s Gemini family of models; Meta’s Llama 4

Why it matters for business: AI models can now read, analyze and interpret data in many forms, which is practical for businesses using PDFs, Excel sheets, PowerPoints, faxes and other forms of documents.

Pros: Better understanding of context, leading to wider usefulness

Cons: Needs more data and computing power to train and deploy
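
As a concrete illustration of mixed inputs, here is a minimal sketch of sending an image and a question to a multimodal model through an API. The provider SDK, model name, image URL and prompt are assumptions for illustration only.

```python
# Minimal sketch: ask a multimodal model a question about an image.
# Assumes the `openai` package and an API key in the environment; the model
# name, image URL and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # a model that accepts both text and images
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is the total amount shown on this scanned invoice?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/scanned-invoice.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The only structural difference from a text-only call is that the message content becomes a list mixing text parts and image parts.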

Open Source vs. Closed or Proprietary Models

Open-source AI models are generally free to use, modify and share, with any restrictions varying by the type of license they use. Their code and weights are publicly available.

Closed or proprietary AI models are not free, and they are developed by private companies. Users cannot see inside or modify these models.

Examples:

-Open source: Meta’s Llama family; Google’s Gemma family; several Mistral models; EleutherAI’s GPT-NeoX

-Closed: OpenAI’s GPT-3 and later models; Google’s Gemini; Anthropic’s Claude

Why it matters for business: Closed models are usually more capable and more convenient to use, with a company behind them for support. Open models can be cheaper, and users have more control and customization opportunities. Companies can deploy both types, depending on the use case.

Pros:

-Open source: Free, transparent, customizable, more control

-Closed: More powerful with support from the company that developed it

Cons:

-Open source: More DIY and responsibility, may be less powerful or safe

-Closed: Limited transparency, more expensive, less customizable
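
To show what “more control” can look like in practice, here is a minimal sketch of running an open-weight model locally instead of calling a closed model’s API. The specific model ID is an illustrative choice, not an endorsement, and some open models (including Llama and Gemma) require accepting a license on the model hub before download.

```python
# Minimal sketch: run an open-weight model locally with Hugging Face transformers.
# Assumes the `transformers` and `torch` packages are installed; the model ID is
# an illustrative choice and may require accepting a license before download.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",  # swap for any open model that fits your hardware and license terms
)

output = generator(
    "List three questions to ask a vendor about data retention.",
    max_new_tokens=120,
)
print(output[0]["generated_text"])
```

Because the weights run on infrastructure you control, prompts and outputs never leave your environment, which is the trade-off against the convenience and support of a closed, hosted model.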

How to Manage Risks When Employees Use AI Secretly for Work
April 17, 2025

Employees who use generative artificial intelligence tools in the workplace without company approval or oversight — a practice known as “bring your own AI,” or BYOAI — could introduce risks into the enterprise, according to MIT researchers.

“Make no mistake, I’m not talking theory here,” said Nick van der Meulen, a research scientist at MIT’s Center for Information Systems Research, during an MIT Sloan Management Review webinar. “This has been happening for quite some time now.”

The temptation to BYOAI could be especially acute at companies that have banned publicly available AI chatbots such as ChatGPT. Samsung, Verizon, JPMorgan Chase and other companies have banned or limited the use of external AI chatbots due to regulatory and security concerns.

The issue is gaining urgency among business leaders as AI models become more powerful and freely available to anyone, according to MIT.

Research from van der Meulen and fellow research scientist Barbara Wixom showed that about 16% of employees in large organizations were already using AI tools last year, with that number expected to rise to 72% by 2027. This includes sanctioned and unsanctioned use of AI.

They warned about the risks arising when employees use these tools without guidance.

“What happens when sensitive data gets entered into platforms that you don’t control? When business decisions are made based on outputs that no one quite understands?” van der Meulen said.

The researchers said there are two types of generative AI implementations:

  • GenAI tools like ChatGPT or Microsoft Copilot are used to enhance individual employee productivity and efficiency. They are freely available, but it’s harder to translate their use into ROI.
  • GenAI solutions are company-wide deployment of AI across processes and business units to bring value to the enterprise.

Separating the two uses of generative AI is useful because that “helps us tackle each differently and manage their value properly,” Wixom said.

Tools are a cost management play and need to be handled similarly to spreadsheets and word processing.

“In a way, they simply represent, for most organizations, the new cost of doing business,” van der Meulen said.

Solutions help different areas of the company, whether it’s the call center, marketing, software development or another business unit.

“They offer measurable lift in either efficiencies or sales,” Wixom said.

For example, IT services provider Wolters Kluwer developed a generative AI tool that can read raw text directly from scanned images of lien documents. Banks using this tool were able to cut their loan processing time from weeks to days.

“That is not something that an individual employee at either Wolters Kluwer or the bank could have done on their own with a GenAI tool,” van der Meulen said. “It takes effort from many stakeholders to create these solutions to integrate them into systems.”

When AI is used as a tool, the employee is responsible for its successful use. When AI is used in the company as a solution, the organization owns its success, the researchers said.

This is another important distinction because it guides how to govern these two types of generative AI in a company, they said.

Read also: MIT Discovers AI Training Paradox That Could Boost Robot Intelligence

Tips to Manage BYOAI

Simply banning these tools is neither practical nor effective.

“Employees won’t just stop using GenAI; they’ll start looking for workarounds,” said van der Meulen. “They’ll turn to personal devices, use unsanctioned accounts, hidden tools. So instead of mitigating risk, we’d have made it harder to detect and manage.”

The researchers recommended three key approaches to managing BYOAI.

  1. Establish clear guardrails and guidelines.

Organizations should tell employees which uses are always acceptable, like searching for publicly available information, and which are not approved, such as inputting proprietary information into a publicly available AI chatbot (a simple screening check along these lines is sketched after this list). In a survey of senior data and technology leaders, 30% reported having well-developed policies regarding workers’ AI use, the researchers said.

  2. Invest in training and education.

Employees need what the researchers called “AI direction and evaluation skills” (AIDE skills). If they don’t know how to use the tools well, they won’t be as effective. It’s not enough to do an online tutorial; employees must practice.

For example, at Zoetis, a global animal health company, the data analytics unit runs hands-on AI practice sessions three times a week, each attended by more than 100 employees.

The researchers said J.D. Williams, Zoetis’ chief data and analytics officer, likened it to teaching people how to change tires — by making them change tires.

  3. Provide approved tools from trusted vendors.

Since banning AI tools won’t work, and allowing any and every tool makes safe use impossible, organizations should instead provide approved AI tools to employees.

Zoetis implemented a “GenAI app store” where employees apply for a licensed seat. They have to say why they need the app, and then share their experiences using it. This helps the company identify valuable applications while managing costs.

“It’s how you avoid paying $50 a month for Joe from Finance who … used it exactly once to write a birthday card,” van der Meulen said.
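
As referenced under the first recommendation, a guardrail can be as simple as screening a prompt before it leaves the company. The patterns and policy below are invented for illustration; a real deployment would lean on proper data-loss-prevention tooling and the organization’s own rules rather than this sketch.

```python
# Toy illustration of a BYOAI guardrail: screen a prompt for obviously sensitive
# material before it is sent to an external, public chatbot. The patterns and the
# project codename are invented for illustration only.
import re

BLOCKED_PATTERNS = {
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "internal codename": re.compile(r"\bproject\s+nightingale\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); block the prompt if any pattern matches."""
    reasons = [label for label, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]
    return (not reasons, reasons)

if __name__ == "__main__":
    ok, reasons = screen_prompt("Summarize the Q3 plan for Project Nightingale")
    print("allowed" if ok else "blocked: " + ", ".join(reasons))
```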

For organizations just beginning their GenAI journey, Wixom also recommended establishing a center of excellence — which could be a single person or a small team — to provide an enterprise-wide perspective and coordinate efforts across departments.

But most of all, “it is important to remind everyone what the end game is here,” Wixom said. “The point of AI, regardless of its flavor, should be to create value for our organizations and ideally value that hits our books.”

How CFOs Are Unlocking the Power of Advanced Analytics
April 16, 2025

Some opportunities are too great to ignore. For chief financial officers looking to give their procurement processes a shot in the arm, generative artificial intelligence (GenAI) represents one of those opportunities.

According to the latest numbers from the PYMNTS Intelligence March 2025 CAIO Report, “The Investment Impact of GenAI Operating Standards on Enterprise Adoption,” 73% of enterprises are actively exploring GenAI to enhance procurement efficiency.

This interest spans various industries where procurement processes are often mired in manual procedures, inefficiencies and communication gaps. By integrating GenAI tools, companies can work to streamline vendor management, accelerate contract analysis and improve demand forecasting.

High-impact enterprises, particularly those with a robust technological infrastructure, are leading the charge. They see GenAI as a way to reduce operational bottlenecks and drive strategic decision making. For these companies, using AI’s capabilities is not just about cost-cutting but about enhancing overall business agility and competitiveness.

From automating mundane tasks to providing insights businesses wouldn’t have seen otherwise, the possibilities of integrating GenAI across procurement function workflows could be immense. The businesses experimenting now could find themselves with an advantage in the future.

Read also: 3 Ways Embedded Finance Solutions Are Remaking B2B Procurement

The High Potential of GenAI in Procurement

Procurement is a function that, while often overlooked, serves as the backbone of strategic sourcing, supplier management and cost optimization. Yet by their nature, procurement processes involve repetitive, time-intensive tasks such as drafting contracts, processing invoices and maintaining supplier relationships.

If automated effectively, these can unlock value for organizations. GenAI’s ability to process vast amounts of data and generate human-like language can enhance procurement efficiency through natural language processing and intelligent automation.

The PYMNTS Intelligence report found that among firms deploying GenAI for high-impact automation, 30% have already adopted it for procure-to-pay processes, while 48% are still evaluating it.

Beyond automation, GenAI can offer firms an advantage in generating actionable insights from vast and often unstructured datasets. Procurement departments, which are frequently tasked with managing thousands of supplier relationships, could stand to benefit from AI’s predictive and prescriptive analytics capabilities.

By streamlining procurement processes and enhancing decision making, organizations can achieve improved efficiency and reduce operational costs. For example, AI can suggest supplier consolidation strategies, uncover rogue spending, or highlight areas where alternative sourcing could result in financial benefits.

See also: Better Standards Outshine Flashier Tech as Winning GenAI Recipe for Procurement

Governance Concerns Could Slow GenAI Adoption

Despite the potential of GenAI across procurement, CFO enthusiasm for its implementation is tempered by concerns over the lack of clear operating standards. Without transparent governance frameworks, companies risk exposure to data breaches, algorithmic bias and reputational damage.

It’s challenging for finance leaders to make confident investment decisions when the rules of the game aren’t clear, and the PYMNTS report found that 38% of CFOs identified this ambiguity as a moderate or significant obstacle to adoption.

Concerns over accountability and traceability are equally prevalent. Many CFOs worry that GenAI models, particularly those built on proprietary data, could generate outputs that are difficult to audit or explain. The absence of clear standards also raises questions about intellectual property rights and ethical use.

Despite these concerns, most CFOs are not discounting GenAI’s transformative potential. Rather, they are urging the development of more robust standards to guide responsible innovation. Many are calling for frameworks that ensure privacy, reliability and sustainability without stifling creativity.

The future of GenAI adoption will likely depend on the ability of industry leaders to collaborate in developing transparent, enforceable standards. Whether these guidelines come from government regulators, industry consortia or individual enterprises themselves remains to be seen.

For now, CFOs are left to navigate a landscape rich with potential but fraught with uncertainty. As more companies experiment with GenAI, the insights they gather will undoubtedly shape the frameworks of the future.

Nvidia and Partners to Produce AI Supercomputers in US
April 14, 2025

Nvidia said Monday (April 14) that its artificial intelligence (AI) supercomputers will be built in the U.S. for the first time, starting in the next 12 to 15 months.

The company is working with manufacturing partners to build two manufacturing plants in Texas — one in Houston with Foxconn and one in Dallas with Wistron — and expects to begin mass production within that timeframe, Nvidia said in a Monday press release.

“Adding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain and boosts our resiliency,” Nvidia Founder and CEO Jensen Huang said in the release.

Nvidia Blackwell chips are now being produced at the plants of another manufacturing partner, TSMC, in Phoenix, Arizona, according to the release.

Those plants and the new supercomputer facilities will add up to a million square feet of manufacturing space, the release said.

Nvidia also partners with Amkor and SPIL for packaging and testing operations in Arizona, per the release.

Within the next four years, Nvidia plans to produce up to $500 billion of AI infrastructure in the U.S. through its partnerships with TSMC, Foxconn, Wistron, Amkor and SPIL, according to the release.

“Manufacturing Nvidia AI chips and supercomputers for American AI factories is expected to create hundreds of thousands of jobs and drive trillions of dollars in economic security over the coming decades,” the release said.

Huang said in March that Nvidia will procure “several hundred billion” dollars’ worth of chips and other electronics manufactured in the U.S. over the next four years.

The chip designer will source these products from suppliers like TSMC and Foxconn, which can manufacture its latest systems in the U.S.

By doing so, Nvidia will avoid tariffs and improve the resiliency of its supply chain, Huang said at the time.

Asked during a Monday press conference about the Nvidia announcement, President Donald Trump attributed the company’s move to tariffs.

“I knew it was going to happen, but not to the extent that it happened. It’s big,” Trump said in a video of the press conference posted on X by the White House’s Rapid Response 47 account. “And the reason they did it is because of the election on Nov. 5 and because of a thing called tariffs.”

Meta to Begin Training AI on User Data in EU
April 14, 2025

Meta will begin using content shared by adults in the European Union to train its artificial intelligence models after the European Data Protection Board said the company’s approach met its legal requirements.

The company plans to train its AI using public content — including public posts and comments shared by adults on its products in the EU — and people’s questions, queries and other interactions with Meta AI, Meta said in a Monday (April 14) blog post.

Meta will not use public data from account holders in the EU who are under the age of 18, people’s private messages with family and friends, or data from users who submit an objection form provided by the company, according to the post.

The company launched Meta AI in the EU in March and plans to make its chat function available for free across the region within its apps: Facebook, Instagram, WhatsApp and Messenger, per the post.

“We believe we have a responsibility to build AI that’s not just available to Europeans, but is actually built for them,” the post said.

By training its generative AI models on data from EU users, it will be better able to understand European dialects, colloquialisms, hyper-local knowledge and other aspects so it can serve the region’s users, according to the post.

“It’s important to note that the kind of AI training we’re doing is not unique to Meta, nor will it be unique to Europe,” the post said. “This is how we have been training our generative AI models for other regions since launch. We’re following the example set by others including Google and OpenAI, both of which have already used data from European users to train their AI models.”

It was reported in July that Meta decided to withhold what was at the time its latest multimodal AI model from the EU, citing an “unpredictable” regulatory environment in the region.

The company’s retreat stemmed from uncertainties surrounding compliance with the General Data Protection Regulation (GDPR), particularly AI model training using user data from its products.

In June, a European privacy group said it filed complaints with 11 European countries, arguing that Meta’s use of user data in its proposed AI practices violated the GDPR.

Bank of England Warns of Higher Market Volatility From AI-Driven Trading
April 11, 2025

The use of artificial intelligence in algorithmic trading could exacerbate market volatility and amplify financial instability, according to a policy paper by the Bank of England released this week.

As global markets reel from President Donald Trump’s tariff policy changes, the United Kingdom’s central bank warned that the widespread use of AI for trading and investing could lead to “herding” behavior that raises the chance of sudden market drops, especially during times of stress, because firms might all sell off assets at once.

As more firms use AI for investing and trading, there’s a risk that many will end up making the same decisions at the same time, the paper said.

“Greater use of AI to inform trading and investment decisions could help increase market efficiency,” per the paper. “But it could also lead market participants inadvertently to take actions collectively in such a way that reduces stability.”

For example, the use of more advanced AI-based trading strategies could lead firms to “taking increasingly correlated positions and acting in a similar way during a stress, thereby amplifying shocks,” according to the paper.
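
The amplification mechanism can be illustrated with a toy simulation (not from the Bank of England paper): each simulated firm sells when its trading signal falls below a threshold, and when those signals share a large common component, as they would if many firms relied on the same models or data, selling becomes synchronized and the worst daily moves get much bigger.

```python
# Toy simulation (not from the Bank of England paper): the more weight firms'
# trading signals put on a shared component (same models, same data), the more
# synchronized their selling becomes and the deeper the worst one-day drop gets.
import random

def worst_daily_move(common_weight: float, n_firms: int = 200, n_days: int = 2000,
                     sell_threshold: float = -1.5, seed: int = 7) -> float:
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(n_days):
        common = rng.gauss(0, 1)  # shared signal component
        sellers = 0
        for _ in range(n_firms):
            own = rng.gauss(0, 1)  # firm-specific signal component
            signal = common_weight * common + (1 - common_weight) * own
            if signal < sell_threshold:
                sellers += 1
        price_move = -5.0 * sellers / n_firms  # more simultaneous sellers, bigger drop (in %)
        worst = min(worst, price_move)
    return worst

for weight in (0.1, 0.5, 0.9):
    print(f"common-signal weight {weight:.1f}: worst simulated one-day move {worst_daily_move(weight):.2f}%")
```

Nothing changes between the runs except how much the firms’ signals overlap, which is the paper’s point about correlated positions amplifying shocks.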

Such market instability can affect the amount of capital available to businesses since they can’t raise as much when markets are down.

The report comes as global equity and bond markets have been on a roller coaster since the Trump administration announced a minimum of 10% tariffs on imports from all countries, with China, the European Union and a few other countries getting hit with higher rates.

The Dow Jones Industrial Average has fallen by 6.2% since Trump’s April 2 announcement, while the S&P 500 gave up 7.1% and the Nasdaq Composite fell by 6.9%. The benchmark 10-year Treasury yields rose from 4.053% to 4.509% over the same time frame as investors flocked to safety.

Federal Reserve Chair Jerome Powell said tariffs are “likely to raise inflation in coming quarters” and “it is also possible that the effects could be more persistent,” according to a transcript of his April 4 speech before the Society for Advancing Business Editing and Writing. Inflation is a key statistic influencing monetary policy such as the direction of the Fed funds rate.

Powell’s comments came five days before Trump decided to pause tariffs for 90 days for nearly 60 countries, except China.

Read also: Trump Boosts Tariffs on Low-Value Packages Again After China Retaliates

AI and Systemic Shocks

The use of AI in algorithmic trading could exacerbate these extremes because many companies rely on the same AI models or data, leading them to act similarly, according to the BoE paper.

Although AI might make markets more efficient by processing information faster than humans, it could also make them more fragile and less able to handle shocks, the paper said.

The central bank said the International Monetary Fund (IMF) identified herding and market concentration as the top risks that could come from wider adoption of generative AI in the capital markets.

The IMF’s 2024 report said the adoption of AI in trading and investing is “likely to increase significantly in the near future.” While AI may reduce some financial stability risks through improved risk management and market monitoring, at the same time “new risks may arise, including increased market speed and volatility under stress” and others.

On the positive side, AI could help financial services firms manage risk more effectively by making better use of the data they already have, the BoE paper said. With stronger risk management, firms are less likely to be caught off guard when prices suddenly drop.

That means they might not need to rush into selling off assets all at once, which is what happens during a fire sale. The resulting damage caused by market selloffs could be mitigated or even avoided.

The central bank also pointed to another potential mitigating factor. If investment managers use AI to tailor strategies specifically for each client, it could lead to more market stability since people won’t hold the same assets.

This Week in AI: Using AI to Mitigate Tariff Uncertainty and Bank of America’s Big Bet
April 11, 2025

Artificial intelligence continues to dominate headlines as businesses accelerate their digital transformations. From banking to AI models, here are the top stories PYMNTS published this week.

Companies Use AI to Help Mitigate Tariff Impacts

The ability of AI to make businesses more efficient is coming in handy as President Donald Trump’s back-and-forth on tariffs is making the markets swoon.

A Zilliant survey found that 83% of U.S. C-suite leaders are using AI to adapt their pricing strategies to economic volatility.

AI can help by monitoring and understanding tariffs in real time; finding new suppliers or sources for raw materials; improving scenario planning; raising worker productivity; and reducing costs.

Bank of America Invests in New Initiatives Like AI

Bank of America is allocating $4 billion toward new initiatives including AI in 2025, or nearly a third of its overall tech budget.

The financial services giant is seeing the benefits of using AI and machine learning, a journey it began in 2018 after launching an AI-powered virtual assistant called Erica to help consumers with financial matters. That’s four years before ChatGPT became a household name.

Gains across its business include a 50% reduction in calls to IT support after employees began using Erica for Employees, an internal AI chatbot. Developers were able to raise their efficiency by 20%. Employees save tens of thousands of hours per year by using AI to prepare materials ahead of client meetings, while sales and trading teams are more quickly and efficiently finding and summarizing Bank of America research and market commentary.

AI Helps Businesses Streamline Payment Processes

AI is becoming the equivalent of a corporate “pacemaker” as the technology helps enterprises manage their financial operations by automating and regulating billions of dollars in disbursements.

The result is that AI is becoming a profit center that helps businesses streamline their payment processes, ensure disbursements flow on time and in the right amount, and better manage their capital.

More than 80% of chief financial officers at large companies are either already using AI or considering adopting it for a core financial function, according to a forthcoming PYMNTS Intelligence report, “Smart Spending: How AI Is Transforming Financial Decision Making.”

Salesforce Sees Massive Growth in Data Cloud Platform

Salesforce is experiencing explosive growth in its data cloud platform, driven by enterprise demand for generative and agentic AI, technologies that rely on clean, unified, real-time data to be effective.

In an interview with PYMNTS, Gabrielle Tao, senior vice president of product management at Salesforce, said most companies struggle to unlock the full value of their scattered and siloed data.

Meta’s Open-Source Llama 4 Bad for Rivals Like OpenAI

Meta released its open-source Llama 4 models this week: Llama 4 Scout and Maverick.

They are the first multimodal models from Meta, meaning they can ingest images, not only text. Scout has a 10 million token context window (the amount of text the model can take in at once). The previous record holder was Google’s Gemini 2.5, with a 1 million token window set to go up to 2 million.

Llama 4 is a challenge to proprietary models from OpenAI and Google.
