Unlocking the benefits of agentic AI in the enterprise

Agentic AI promises to revolutionize how enterprises operate, enabling systems to make decisions and take actions independently. But without high-quality, harmonized data, even the most advanced AI initiatives are likely to fail.

6/5/2025  |  4 min

Your contact


Dr. Steele Arbeeny

CTO North America at SNP

Tags

  • Data analytics
  • Data migration
  • Cloud move

A new era for user interfaces


Enterprise software vendors are shifting toward agentic AI user interfaces – systems that replace traditional GUIs with intelligent, autonomous agents. These agents can execute tasks, make decisions, and work toward goals based on parameters like cost, time, and complexity. Many even support voice interactions, similar to the smart assistants we use at home.

Major ERP vendors are planning for agentic AI user interfaces to largely replace the traditional graphical user interfaces we use today. However, the autonomous behavior and orchestration performed by agentic AI are heavily dependent on data quality and consistency. Industry sources report that a whopping 87% of AI projects never reach “production” status in the enterprise – and the single largest contributor to this is bad data quality. The adage of garbage in, garbage out applies now more than ever.

The rise (and risk) of agentic AI


Over 70% of the Fortune 500 are in the process of implementing some form of agent-enabled solution in the enterprise. However, given the 87% failure rate cited above, we can sadly assume that most of these initiatives will not succeed. Even so, the proportion of agent-enabled products in the market is expected to reach 33% by 2028, compared with less than 1% in 2024. This speaks to the promise of and appetite for such solutions, and realized productivity gains of over 200% only add to the momentum.

Agentic AI in enterprise asset management


Enterprise asset management (EAM) is one of the areas most primed to benefit from agentic AI. Industries like automotive, oil and gas, mining, aerospace, and other complex manufacturing sectors are the most common users of these processes: they rely heavily on equipment, downtime is costly, and the risk of severe injury increases with improper maintenance.

For years, we have been applying agentic-like orchestration to preventative maintenance work orders. These work orders, which combine a request and the instructions to complete a specific EAM task, can be generated automatically from a meter reading or even an IoT sensor transmitting equipment metrics back to a central system. When certain conditions are met, a new work order is created to address the problem, the required parts and tools are ordered, and the work is dispatched to a technician with the appropriate skills. By the description above, this certainly qualifies as some level of agentic behavior, even though we have been doing it for decades without the help of AI.
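To make this concrete, here is a minimal sketch of that condition-based pattern. The equipment IDs, thresholds, parts, and skill codes are purely illustrative and not taken from any particular EAM product:

```python
from dataclasses import dataclass

@dataclass
class MeterReading:
    equipment_id: str
    metric: str          # e.g. "vibration_mm_s" or "run_hours"
    value: float

@dataclass
class WorkOrder:
    equipment_id: str
    task: str
    required_parts: list[str]
    required_skill: str
    assigned_to: str | None = None

# Hypothetical maintenance rules: trigger threshold, task, parts, and skill needed.
RULES = {
    ("PUMP-104", "vibration_mm_s"): {
        "threshold": 7.1,
        "task": "Inspect and rebalance impeller",
        "parts": ["BRG-6205", "SEAL-KIT-12"],
        "skill": "rotating_equipment",
    },
}

TECHNICIANS = {"rotating_equipment": ["J. Alvarez"], "electrical": ["M. Chen"]}

def evaluate(reading: MeterReading) -> WorkOrder | None:
    """Generate and dispatch a work order when a reading breaches its rule."""
    rule = RULES.get((reading.equipment_id, reading.metric))
    if rule is None or reading.value < rule["threshold"]:
        return None
    order = WorkOrder(
        equipment_id=reading.equipment_id,
        task=rule["task"],
        required_parts=rule["parts"],
        required_skill=rule["skill"],
    )
    # Dispatch to the first available technician with the required skill.
    candidates = TECHNICIANS.get(order.required_skill, [])
    order.assigned_to = candidates[0] if candidates else None
    return order

if __name__ == "__main__":
    print(evaluate(MeterReading("PUMP-104", "vibration_mm_s", 9.3)))
```

The point is simply that a fixed rule, a threshold, and a dispatch table already produce goal-directed behavior; an agentic system layers decision-making on top of exactly this kind of data.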

Unfortunately, traditional preventative maintenance processes and new intelligent, agent-enabled ones alike can easily be disrupted by poor data quality. Incorrect meter readings, bad triggering thresholds, and inconsistent equipment metadata lead to bad decisions, missed or incorrect maintenance, and potentially injury or even fatalities. It is common in modern enterprises to have data spread across multiple systems from multiple vendors. Separate ERP, asset management, work dispatch, and purchasing systems make it virtually impossible, and unacceptably risky, for any automated process to aggregate all the data, harmonize it, and make decisions that could have a life-safety impact. Many incidents in 2024 were attributed to unsuitable EAM procedures and data: in April 2024, equipment in a tire manufacturing plant was improperly activated while being worked on, and in October, a cable snapped on a crane in a serious mining accident. A survey by Plant Maintenance Services magazine found that 41% of equipment failures were due to inaccurate or missing data, and that poor data quality is a root cause of preventable equipment failures.
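A simple defensive layer can catch some of these issues before an automated agent acts on them. The sketch below assumes hypothetical plausibility ranges and master-data fields; real validation would be far more extensive:

```python
# A minimal sketch of checks an automated maintenance agent might run before
# acting on a reading. Field names and ranges are illustrative only.

PLAUSIBLE_RANGES = {"vibration_mm_s": (0.0, 50.0), "temperature_c": (-40.0, 400.0)}

def validate_reading(reading: dict, equipment_master: dict) -> list[str]:
    """Return a list of data-quality issues; an empty list means the reading is usable."""
    issues = []
    metric, value = reading["metric"], reading["value"]

    low, high = PLAUSIBLE_RANGES.get(metric, (float("-inf"), float("inf")))
    if not (low <= value <= high):
        issues.append(f"{metric}={value} outside plausible range [{low}, {high}]")

    master = equipment_master.get(reading["equipment_id"])
    if master is None:
        issues.append(f"unknown equipment {reading['equipment_id']}")
    elif metric not in master.get("monitored_metrics", []):
        issues.append(f"{metric} is not a monitored metric for this asset")

    return issues

# Example: a sensor glitch reports an impossible vibration value.
master_data = {"PUMP-104": {"monitored_metrics": ["vibration_mm_s"]}}
print(validate_reading(
    {"equipment_id": "PUMP-104", "metric": "vibration_mm_s", "value": 480.0},
    master_data,
))
```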


Disconnected systems, disjointed insights: The challenge of EAM data silos


One factor plaguing EAM in large enterprises is that EAM data is spread across multiple systems. There are work management systems such as ERP, where work is generated and financially tracked, and often more than one of them. There are systems that control access to running assets, the so-called lockout-tagout (LOTO) systems. And there are supervisory control and data acquisition (SCADA) systems that control machines and collect monitoring data. All of these systems often hold disconnected views of the same equipment, its metadata, and its characteristics. This can be a dangerous problem.
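The sketch below illustrates the problem with a hypothetical pump that exists in three systems under slightly different tags and with conflicting attributes; the record layouts are invented for the example:

```python
# Illustrative only: three systems each hold a record for the "same" pump,
# keyed differently and with conflicting metadata. A simple cross-check like
# this surfaces discrepancies an agent would otherwise act on blindly.

erp   = {"equipment_id": "10004711", "tag": "P-104", "criticality": "high", "location": "Unit 3"}
scada = {"point_id": "U3.P104", "tag": "P-104", "criticality": "medium"}
loto  = {"isolation_id": "LOTO-88", "tag": "P104", "location": "Unit 03"}

def normalize_tag(tag: str) -> str:
    """Very naive tag normalization so records from different systems can be matched."""
    return tag.replace("-", "").replace(" ", "").upper()

records = {"ERP": erp, "SCADA": scada, "LOTO": loto}

# Match records on the normalized tag, then flag attributes the systems disagree on.
by_tag: dict[str, dict[str, dict]] = {}
for system, rec in records.items():
    by_tag.setdefault(normalize_tag(rec["tag"]), {})[system] = rec

for tag, recs in by_tag.items():
    for attr in ("criticality", "location"):
        values = {sys: r[attr] for sys, r in recs.items() if attr in r}
        if len(set(values.values())) > 1:
            print(f"{tag}: systems disagree on {attr}: {values}")
```

Even in this toy case, an agent deciding whether the pump is safe to isolate would see two criticality ratings and two locations, with no way to know which is correct.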

Centralization, where data is merged into a single system, is an essential step toward safer, smarter asset management. After all, all of the leading modern ERP systems support complex EAM processes, condition-based maintenance, and automatic work generation and dispatch. Consolidating data, however, is not a simple task. Different systems often execute processes in different ways, so both master and transactional data must be carefully aligned and standardized, and legacy processes must be deliberately mapped onto their new counterparts. Historical data is also required so that predictive failure models can be trained on real-world information. Additionally, it is advisable to standardize on industry equipment taxonomies such as ISO 14224. All of this means a significant program of data analysis, business and technical mapping, migration, and verification. These efforts are not simple and can be very costly, but the benefits are clear: a single set of master data to maintain, consistent failure prediction and resolution, and ultimately lower equipment costs and a safer work environment.
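As a small illustration of what standardization involves, the following sketch maps free-text legacy equipment classes onto common taxonomy codes. The codes and mapping table are invented for the example, loosely modeled on an ISO 14224-style classification rather than an actual extract of the standard:

```python
# Hypothetical mapping of legacy equipment classes to a common taxonomy.
LEGACY_TO_TAXONOMY = {
    "centrifugal pump": "PU",   # pumps
    "cf pump": "PU",
    "induction motor": "EM",    # electric motors
    "3-ph motor": "EM",
    "heat exch.": "HE",         # heat exchangers
}

def standardize_class(legacy_class: str) -> str:
    """Map a legacy class string to a taxonomy code, flagging anything unmapped for review."""
    key = legacy_class.strip().lower()
    return LEGACY_TO_TAXONOMY.get(key, "UNMAPPED-REVIEW")

legacy_records = ["Centrifugal Pump", "3-ph motor", "Screw Compressor"]
for rec in legacy_records:
    print(f"{rec!r:25} -> {standardize_class(rec)}")
```

The unmapped entries are the real work: every "UNMAPPED-REVIEW" is a business and technical mapping decision that someone has to make before the consolidated system can be trusted.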


Scaling success with automation


Automation is key to achieving this outcome. In our experience at SNP, manual approaches to system consolidation and data migration deliver poor data quality and typically take two or more years to complete, if they are completed at all. Both Gartner and academic research (Wong, Scarbrough, Chau & Davison, "Critical Failure Factors in ERP Implementation") cite failure rates of 70–90% for such projects, and SNP can corroborate these figures because we are often called in to remediate exactly these situations.

The key to success lies in repeatability and the use of predefined content. Reusing proven migration rules and methodologies from past projects avoids reinventing the wheel, saving time and reducing risk. A typical system consolidation and heterogeneous move project follows familiar waterfall-like stages: analysis, design, implementation, and verification. For true repeatability, each of these steps must be accelerated through software automation. This not only shortens project timelines but also delivers results that have already been tested, minimizing testing effort and disruption. This is exactly what SNP delivers with the Kyano platform: 30 years of enterprise data transformation experience built into a platform to plan, move, and manage your multi-vendor environment and unlock the power of agentic AI.
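Conceptually, predefined content means capturing migration rules as reusable, verifiable building blocks rather than one-off scripts. The following sketch shows the idea with two invented rules; it is a simplification and does not reflect the actual rule format used in Kyano:

```python
# Illustrative sketch: migration rules as data, each pairing a transformation
# with an automatic verification check, so the same content can be reused and
# re-verified on every project.
from typing import Callable

# A rule is a name, a transformation, and a verification check.
Rule = tuple[str, Callable[[dict], dict], Callable[[dict], bool]]

RULE_LIBRARY: list[Rule] = [
    ("normalize_uom",
     lambda r: {**r, "uom": r["uom"].upper()},
     lambda r: r["uom"].isupper()),
    ("default_plant",
     lambda r: {**r, "plant": r.get("plant") or "1000"},
     lambda r: bool(r["plant"])),
]

def migrate(records: list[dict], rules: list[Rule]) -> list[dict]:
    """Apply every rule to every record, verifying each result before loading."""
    out = []
    for rec in records:
        for name, transform, check in rules:
            rec = transform(rec)
            assert check(rec), f"verification failed for rule {name}: {rec}"
        out.append(rec)
    return out

legacy = [{"material": "BRG-6205", "uom": "ea", "plant": None}]
print(migrate(legacy, RULE_LIBRARY))
```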

Ready to streamline your transformation journey? Contact us to learn how SNP can help.
