Thursday, April 2, 2026

Agentic AI & Proactive Intelligence

In a recent APOS webinar, we explored the topic of proactive intelligence in relation to APOS Publisher for Cloud (the SAP Analytics Cloud-specific version) and APOS PowerBurst Publisher (the Power BI-specific edition).* The term "proactive intelligence" refers to the APOS publishing/bursting/broadcasting technology platform's ability to integrate agentic AI with your analytics workflows for greater efficiency and effectiveness.

Proactive intelligence uses the intelligence from your analytics platform to trigger proactive actions by an AI agent before human analysts are aware of the need for such actions.

Here is one proactive intelligence scenario from the webinar:

Customer Success – UPD New Clicks - APOS Publisher for Cloud

Inventory management is a critical operational function, and an obvious candidate for improved efficiency and effectiveness. In this scenario, individual warehouse managers are empowered to replenish stock in advance of inventory shortages, but they may not be aware of the need to replenish if they have not reviewed analytics online.

An AI agent can be tasked with monitoring each warehouse for potential shortages and passing them to the APOS publishing platform, which then generates and sends reports to the impacted warehouse managers.
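The monitoring step in this scenario can be sketched in a few lines. This is a minimal illustration, not APOS code: the reorder threshold, the inventory structure, and the hand-off function are all hypothetical stand-ins for whatever interfaces the agent and the publishing platform actually expose.

```python
# Hypothetical sketch of the shortage-monitoring loop described above.
# All names and thresholds are illustrative, not part of any APOS API.

REORDER_POINT = 100  # illustrative replenishment threshold (units)

def find_potential_shortages(inventory):
    """Return (warehouse, sku, on_hand) rows below the reorder point."""
    return [
        (wh, sku, qty)
        for (wh, sku), qty in inventory.items()
        if qty < REORDER_POINT
    ]

def notify_publishing_platform(shortages):
    """Stand-in for handing shortages to the publishing platform,
    which would generate and burst reports to the warehouse managers."""
    return [f"Report queued for {wh}: {sku} at {qty} units"
            for wh, sku, qty in shortages]

inventory = {
    ("Warehouse A", "SKU-1042"): 35,
    ("Warehouse A", "SKU-2210"): 480,
    ("Warehouse B", "SKU-1042"): 92,
}

shortages = find_potential_shortages(inventory)
for line in notify_publishing_platform(shortages):
    print(line)
```

The key point is the division of labor: the agent only detects and forwards the condition; report generation and delivery remain with the publishing platform.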

For more proactive intelligence scenarios and demonstrations, view the webinar on demand.

*Note: While these products are differentiated and marketed to specific audiences, they both in fact feature multi-platform support for SAP Analytics, Power BI, Tableau and Looker.

ROI & Risk

It might be argued that a more advanced agentic AI scenario would have the AI agent ordering and replenishing stock directly, omitting the warehouse managers entirely. While this revised scenario may be more efficient, it may not be more effective. If the AI agent orders stock that is unnecessary, the ROI of the scenario is questionable.

The scenario described above includes a "human-in-the-loop" (HITL) step. If it is ultimately the warehouse manager's responsibility to ensure appropriate levels of inventory, then that manager is precisely the human who should be in the loop.
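The HITL pattern itself is simple to express: the agent proposes, the human disposes. The sketch below is purely illustrative; the order structure and approval call are hypothetical, but they show where the manager's judgment gates the agent's output.

```python
# Minimal sketch of the human-in-the-loop gate described above.
# The agent proposes replenishment orders; nothing is placed until the
# responsible warehouse manager approves. All names are illustrative.

from dataclasses import dataclass

@dataclass
class ProposedOrder:
    warehouse: str
    sku: str
    quantity: int
    approved: bool = False

def approve(order, manager_decision):
    """The manager, not the agent, makes the final call."""
    order.approved = manager_decision
    return order

def place_approved_orders(orders):
    """Only orders a human has approved ever reach the supplier."""
    return [o for o in orders if o.approved]

proposals = [
    ProposedOrder("Warehouse A", "SKU-1042", 200),
    ProposedOrder("Warehouse B", "SKU-2210", 150),
]
approve(proposals[0], True)   # manager confirms the shortage is real
approve(proposals[1], False)  # manager rejects an unnecessary order

placed = place_approved_orders(proposals)
```

The rejected proposal is exactly the case the ROI argument turns on: without the approval gate, the unnecessary order would have been placed.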

The use of APOS publishing technology in proactive intelligence scenarios offers strong ROI potential with minimal risk, and with AI, delivering low-risk ROI is everything.

Stanford University reports that corporate investments in AI reached $252.3 billion in 2024. Considering the investments organizations are making in AI, you would expect to see equally enticing reports of realized ROI. And yet, according to Deloitte, the ROI of AI is "slow to materialize and hard to measure."

The reason ROI is slow to materialize may come down to how organizations weigh the benefits against the risks.

The Benefits Are Real

A recent article on the McKinsey & Company website (Nov. 3, 2025) summarizes a McKinsey survey of CFOs on how finance teams are using AI "to deliver faster insight, stronger controls, and measurable results." While the article is not comprehensive in its review of AI use cases, it notes that the most tangible results for finance teams have come from strategic planning and control, working-capital management, and cost optimization.

In one use case, a company used agentic AI workflows to police invoice-to-contract compliance, preventing "value leakage when vendors miss or misapply terms such as early payment discounts, tiered pricing, and volume rebates." The result was elimination of contract leakage amounting to 4% of total spend - hypothetically a $40 million margin improvement on a $1 billion spend, not to mention the time savings on manual number crunching.
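The compliance check at the heart of that use case is, at its core, a comparison of what was paid against what the contract says should have been paid. The sketch below illustrates one such check, flagging invoices where a contracted early-payment discount was not applied; the contract terms and invoice fields are hypothetical, and a real implementation would pull both from the ERP and contract systems.

```python
# Illustrative sketch of an invoice-to-contract compliance check:
# flag invoices where a contracted early-payment discount was missed.
# Contract terms and invoice fields are hypothetical examples.

CONTRACT_TERMS = {
    "VendorCo": {"early_payment_discount": 0.02},  # 2% if paid early
}

def flag_missed_discounts(invoices):
    """Return (vendor, overpayment) pairs where the discount was missed."""
    flagged = []
    for inv in invoices:
        terms = CONTRACT_TERMS.get(inv["vendor"], {})
        discount = terms.get("early_payment_discount", 0.0)
        if inv["paid_early"] and discount:
            expected = round(inv["gross"] * (1 - discount), 2)
            if inv["amount_paid"] > expected:
                flagged.append((inv["vendor"], inv["amount_paid"] - expected))
    return flagged

invoices = [
    # Paid early at full price: the 2% discount was missed.
    {"vendor": "VendorCo", "gross": 10_000.0,
     "amount_paid": 10_000.0, "paid_early": True},
    # Paid early with the discount correctly applied.
    {"vendor": "VendorCo", "gross": 5_000.0,
     "amount_paid": 4_900.0, "paid_early": True},
]

leakage = flag_missed_discounts(invoices)
```

Run across every invoice, small overpayments like the $200 flagged here are what accumulate into the 4%-of-spend leakage figure the article cites.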

And yet, AI adoption in real-world use cases is slower than one might expect, considering the huge upside shown in such use cases. Why are AI finance implementations not scaling quickly?

So Are the Risks

There is no question that Finance is one of the most risk-averse silos within any enterprise, and understandably so, since the results of misplaced trust can be catastrophic.

The essential problem with agentic AI is that it increases the attack surface of your data infrastructure in ways we do not yet fully understand, so scaling agentic AI is a double-edged sword: you realize greater benefits, but you may also create unacceptable risk.

Risk Mitigation - OWASP

Risk mitigation must be a founding imperative in your AI strategy. The Open Worldwide Application Security Project (OWASP) Foundation "works to improve the security of software through its community-led open-source software projects, hundreds of chapters worldwide, tens of thousands of members, and by hosting local and global conferences." OWASP's Top Ten lists are a good place to start your risk analysis.

Zenity is a company that specializes in AI agent security. Their recent webinar series on the foundations of AI security applies OWASP principles to real-life scenarios and is worth a look.

 

Learn more about APOS Publisher for Cloud
