Build Trust Like a Product: Connected Compliance and Modern Audit by Design

Regulators, financial institutions, and boards all seek more transparency, accountability, and clearer control over decision-making. AI use is now common: 88% of respondents say their organizations use it regularly in at least one business function (McKinsey, The state of AI in 2025: Agents, innovation, and transformation, 2025). Yet value at the enterprise level is uneven. Nearly two-thirds of companies have not begun to scale AI across their organizations, which suggests that adoption is outpacing the readiness of operating models (McKinsey, 2025).

This gap matters because AI risks are rarely contained. When models shape workflows, customer interactions, and compliance decisions, risks can spread quickly across systems and departments. EY quantifies the downside of poor governance: 99% of surveyed organizations reported financial losses from AI-related risks, with an average reported loss of US$4.4 million (EY, Responsible AI Pulse survey, 2025). The most common risks were non-compliance with AI regulations (57%), negative effects on sustainability goals (just over half of organizations), and biased outputs (slightly more than half). Failures of trust are now easier to measure, and increasingly financial (EY, Responsible AI Pulse survey, 2025).

Meanwhile, the compliance environment itself has become a barrier to growth. Roughly three quarters of respondents say their company has experienced negative effects from compliance complexity, while 59% report greater confidence in compliance decision-making thanks to better coordination, evidence of the value of a connected model (PwC, Global Compliance Survey 2025, 2025). The message is clear: trust cannot be a mere statement or an annual task. It must be a deliberate capability, built into workflows, supported by connected compliance, and continually verified through modern audit practices.
The AI Value–Risk Paradox

AI generates value when accountability, governance, and monitoring are integrated into its design, implementation, and operation (EY, Responsible AI Pulse survey, 8 Oct 2025). However, the market shows a contradiction. Even with widespread use, large-scale value is still uneven because many organizations are still moving from pilot projects to larger transformations. In a global survey by McKinsey, 88% of participants say they regularly use AI in at least one business function. Yet, nearly two-thirds of them say their companies have not started using AI across the entire organization (McKinsey, The state of AI in 2025: Agents, innovation, and transformation, 2025).

McKinsey adds that although AI tools are now commonplace, most organizations still haven’t embedded them deeply enough into workflows and processes to generate material enterprise-level benefits and that workflow redesign is a key success factor among AI high performers (McKinsey, The state of AI in 2025: Agents, innovation, and transformation, 2025).
This is where “building trust like a product” becomes operational: trust is treated as a design requirement, translated into data quality, traceability, governance, monitoring and auditable controls built directly into systems and workflows. EY’s Responsible AI Pulse survey reinforces the same direction: organizations implementing more advanced responsible AI measures report tangible performance upside:
81% report improved innovation.
79% observe productivity and efficiency gains.
54% report revenue growth.
48% identify cost savings.
(EY, Responsible AI Pulse survey, 8 Oct 2025; McKinsey, The state of AI in 2025: Agents, innovation, and transformation, 5 Nov 2025)
These findings support a wider market view: AI drives real transformation when governance is part of design and operations, rather than a reaction after the fact (EY, Responsible AI Pulse survey, 8 Oct 2025). Regulatory approaches to AI are also diverging across jurisdictions, combining soft law, hard law, and regulatory sandboxes, which raises the bar for consistent governance and traceability as organizations scale AI (World Bank, Global Trends in AI Governance: Evolving Country Approaches, 2024). The cost of poor control, however, is high. Almost all surveyed organizations report financial losses from AI-related risks, with an average estimated loss of US$4.4 million. The most common risks include non-compliance with AI regulations (57%), along with sustainability impacts and biased outputs, each reported by just over half of organizations (EY, Responsible AI Pulse survey, 2025). The issue is not ambition but the maturity of governance and the design of controls. A notable gap in leadership capability exists: when asked to match appropriate controls to five AI-related risks, only 12% of C-suite respondents did so correctly, a sign of how hard controls are to implement at scale (EY, Responsible AI Pulse survey, 2025). At the same time, AI deployments are becoming more autonomous: 62% of organizations are at least experimenting with AI agents (McKinsey, The state of AI in 2025: Agents, innovation, and transformation, 2025).

In supervisory practice, AI is increasingly treated as a capability that must operate within robust governance and risk management frameworks, strengthening the case for controls and evidence that are designed into systems rather than assembled after the fact (IMF, AI Projects in Financial Supervisory Authorities, 2025).

Agentic AI raises the stakes. It shifts AI from merely producing outputs to taking control of multi-step actions within workflows. Because adoption remains uneven, organizations govern AI and its applications in very different ways. As autonomy grows, clear boundaries for delegation and oversight at crucial decision points become essential, as does evidence that is audit-ready by default: logging, traceability, and approval records (McKinsey, The state of AI in 2025: Agents, innovation, and transformation, 2025; McKinsey, Agentic AI advances, 2026; Deloitte, Agentic AI strategy, 2026).

Periodic, manually gathered assurance cannot keep pace with how autonomous decisions and actions flow through workflows, so continuous, built-in controls are needed to enforce governance and keep evidence consistently available. Connected compliance and audit by design move assurance from occasional, manual reviews to ongoing, system-integrated controls: risks are mapped, and evidence such as logging, lineage, approvals, and monitoring is generated by design. Ultimately, responsible AI is not just a compliance exercise but a performance lever, closely tied to redesigning workflows and embedding AI throughout the organization (EY, Responsible AI Pulse survey, 2025; McKinsey, The state of AI in 2025: Agents, innovation, and transformation, 2025).
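The "evidence generated by design" idea can be made concrete with a minimal sketch: an append-only, hash-chained evidence log. This is an illustration only, not any vendor's or framework's implementation; the actor and action names are hypothetical. Each record of an approval or model action carries the hash of the previous record, so deletions or edits break the chain and are detectable on verification.

```python
import hashlib
import json
from datetime import datetime, timezone

class EvidenceLog:
    """Append-only, hash-chained log: each record stores the hash of the
    previous record, making tampering or removal detectable."""

    GENESIS = "0" * 64

    def __init__(self):
        self.records = []
        self._last_hash = self.GENESIS

    def append(self, actor, action, payload):
        """Record who did what, with the decision context preserved for lineage."""
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,      # user, model, or agent that acted
            "action": action,    # e.g. "approval", "model_inference"
            "payload": payload,  # inputs/outputs kept for traceability
            "prev_hash": self._last_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = record["hash"]
        self.records.append(record)
        return record

    def verify(self):
        """Recompute the whole chain; True only if nothing was altered or removed."""
        prev = self.GENESIS
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if r["prev_hash"] != prev or r["hash"] != expected:
                return False
            prev = r["hash"]
        return True
```

In practice such a log would sit behind the workflow engine, so approvals and model calls emit records automatically rather than relying on teams to collect evidence after the event.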
EY’s 2025 Responsible AI survey also highlights a maturity gap between defining responsible AI principles and fully implementing governance mechanisms. While 80% of organizations report having defined responsible AI principles, only 63% conduct independent assessments of AI governance practices, showing that operational trust frameworks are still evolving.

According to Ernst & Young (2025), organizations adopting responsible AI practices report significant improvements in performance and innovation.
Connected compliance

As AI moves from experimentation to core infrastructure, compliance cannot remain a set of disconnected activities spread across functions, tools, and territories. A “connected compliance” model is the operating layer that links data, controls, workflows, and evidence across the organization so governance is embedded and reusable, not recreated in silos each time a new regulation, model, or use case appears. The urgency is clear in the market evidence. PwC’s Global Compliance Survey 2025 found that 77% of respondents believe their company has suffered due to compliance complexity.
Additionally, 59% have more confidence in compliance decision-making because of better coordination (PwC, Global Compliance Survey 2025, 2025). These findings indicate that the cost comes not just from the volume of regulation, but from the extra effort needed to reconcile fragmented ownership, inconsistent interpretations, and duplicated control activities across functions and territories. In practice, complexity becomes costly when the same obligation requires different controls and evidence standards in different places, leaving teams reconciling results instead of managing risk.

The issue is even more acute in the context of scaling AI. McKinsey’s 2025 State of AI report highlights that AI tools are now widespread, but many organizations have not integrated them deeply enough into their workflows to achieve significant benefits. When AI is added to processes without redesigning them, controls are attached after the fact, evidence is collected after the event, and accountability is unclear across the workflow. Connected compliance addresses this gap by creating shared, risk-based control definitions and a consistent evidence layer, so that the important controls execute where the AI operates and assurance is repeatable rather than rebuilt case by case.

PwC’s “Connected Tax Compliance” provides a concrete illustration of the connected-compliance logic: compliance reimagined as “fully integrated, automated, and AI-powered” across jurisdictions, with a strong emphasis that the focus is no longer only the process but also the data that makes reporting reliable and repeatable (PwC, Connected Tax Compliance, accessed 4 March 2026). Even though the scope is tax, the architectural principle generalizes well: build a shared data-and-controls backbone that multiple obligations can draw on, rather than duplicating effort function by function.
EY reinforces the same direction in its connected approach to compliance and reporting, emphasizing a rethink of the operating model and the synergy between data, technology, talent, and processes. Critically, as a bridge to audit by design, EY describes technology driven from a common data platform that provides a clear audit trail from a single source, exactly the kind of evidence layer needed for scalable assurance (EY, Domestic Tax Compliance & Reporting Services, accessed 4 March 2026).

So in practical terms, “connected compliance” means:
one reliable data backbone supporting multiple reporting and regulatory needs
harmonized control definitions mapped consistently to risks
workflow-embedded approvals and accountability (not email-based afterthoughts)
automation and monitoring where possible to reduce periodic, manual assurance
shared evidence that audit, risk, compliance, IT, and the business can rely on
Connected compliance creates the shared data and evidence foundation that makes modern audit possible, shifting assurance from periodic, manual checks to continuous, system-embedded audit by design.
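The harmonized-control idea above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions: the control IDs, risk names, and obligation labels are all hypothetical, not drawn from any real framework. The point is structural: one shared control library that multiple obligations query, instead of each function redefining its own controls.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    control_id: str
    description: str
    risks: tuple        # risk IDs this control mitigates
    obligations: tuple  # obligations/regulations it helps satisfy

# Hypothetical shared control library: each control is defined once and
# mapped to the risks and obligations it serves.
CONTROLS = (
    Control("CTL-001", "Human approval before automated credit decision",
            risks=("R-bias", "R-regulatory"),
            obligations=("OBL-AI-REG", "OBL-FAIR-LENDING")),
    Control("CTL-002", "Model output logging with input lineage",
            risks=("R-traceability",),
            obligations=("OBL-AI-REG", "OBL-FIN-REPORTING")),
)

def controls_for(obligation: str):
    """Answer 'which existing controls already cover this obligation?'
    instead of rebuilding controls per team or territory."""
    return [c for c in CONTROLS if obligation in c.obligations]
```

A new regulation then becomes a mapping exercise against the shared library, and any genuinely new control is added once and reused everywhere, which is the duplication the PwC survey respondents describe paying for.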
Modern audit by design

Modern audit by design is the shift from periodic, manually assembled assurance to continuous, system-embedded assurance, where auditability is engineered into how data is captured, transformed, controlled, and reviewed. It relies on an AI-first, data-driven audit approach that expands coverage beyond samples by transforming and analysing larger datasets, applying analytics to identify anomalies earlier, and automating both simpler and more complex audit activities to improve audit quality and speed.
A central principle is that risk assessment becomes dynamic rather than static, able to adapt as risks evolve, supported by data acquisition, anomaly detection, and real-time signals that surface issues sooner and focus effort where it matters most (PwC, Next generation audit, accessed 4 March 2026).
Operationally, audit by design depends on the right technology foundation: modern audit teams increasingly move toward cloud-based platforms, open APIs, and connected ecosystems that make integrations easier, reduce tool fragmentation, and accelerate adoption and ROI (Trullion, How to build a modern audit tech stack: Insights from industry leaders, 2025).
In practice, “audit by design” typically translates into a small set of embedded capabilities:
Evidence-by-default (audit-ready logs and standardized trails)
Seamless data extraction and transformation to reduce manual handling
Continuous anomaly detection and dynamic risk assessment
A connected audit tech stack (cloud delivery + API-enabled integrations)
(PwC, Next generation audit, accessed 4 March 2026; Trullion, 2025)
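As a minimal illustration of continuous anomaly detection over a full population rather than a sample (a sketch only, not PwC's or Trullion's actual tooling), the snippet below flags transactions whose amounts deviate sharply from the population mean using a simple z-score. The field names and threshold are hypothetical; real pipelines would use richer features and models.

```python
import math

def z_scores(values):
    """Standard score for each value against the full population,
    the kind of whole-dataset analytic that replaces sampling."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(variance)
    return [(v - mean) / std if std else 0.0 for v in values]

def flag_anomalies(transactions, threshold=3.0):
    """Return transactions whose amount is more than `threshold`
    standard deviations from the mean, for earlier review."""
    amounts = [t["amount"] for t in transactions]
    return [t for t, z in zip(transactions, z_scores(amounts))
            if abs(z) > threshold]
```

Run continuously against a live feed instead of a year-end extract, a check like this turns the audit from a retrospective sample into an always-on signal, which is the shift the "next generation audit" material describes.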

As AI becomes part of essential processes and takes on more autonomous roles, the main challenge shifts from how well the models perform to how well they are governed. Market evidence shows that scattered compliance efforts and occasional checks do not hold up in AI-driven workflows. A connected compliance operating model provides the shared control definitions and evidence needed to make governance repeatable. Modern audit by design builds on this foundation by embedding checks into systems, so that risk signals and audit evidence are generated continuously instead of assembled later. The practical goal is clear: make trust measurable, enforceable, and scalable.


