As AI moves from pilots into core workflows, the question shifts from what can be automated to who is responsible when automated decisions cause harm. Software executes tasks, but only human beings or legal entities can be questioned, investigated, or sanctioned.
That gap between execution and responsibility gives accountability a central role in automated economies. Even when models make day-to-day decisions, institutions still require identifiable people and organizations to oversee systems, preserve evidence, and provide remedies when outcomes are contested.
Executive Summary
- In McKinsey’s 2025 State of AI survey, 78 percent of respondents say their organizations use AI in at least one business function, up from earlier years.
- AI systems are not legal persons, so liability for AI-related harm attaches to human or corporate actors under existing legal frameworks.
- The EU AI Act requires high-risk AI systems to support effective human oversight and log retention of at least six months, with key obligations for deployers starting from 2026.
- OECD AI Principles link accountability to traceability and risk management across the AI lifecycle, while NIST’s AI Risk Management Framework highlights documentation as a way to enhance transparency and human review.
- Organizations that invest in logging, audits, and clear escalation paths can respond faster to incidents and turn AI governance into a source of trust with regulators, customers, and employees.
Why Liability Still Defaults to Humans
Eversheds Sutherland notes that under English law, only a legal person can be held liable for breach of contract or negligence. AI systems, however advanced, are not legal persons and therefore cannot themselves be held liable for harms.
In practice, that means responsibility for AI outputs falls back on human or corporate actors. If an automated trading system misprices securities, the counterparty’s claim is brought against the firm that placed the order, not the codebase or the model.
Scholars writing in the Journal of Responsible Technology argue that discussions of AI governance must identify where responsibility lies for the outputs and impacts of AI-enabled systems. Without a clear assignment of responsibility, they note, it is difficult to achieve justice, compensation, or meaningful guidance for engineering practice.
Contract and insurance arrangements reinforce this structure by assuming that a named entity ultimately stands behind automated systems. This legal and commercial reality is why accountability work concentrates on human and organizational decision makers rather than on models themselves.
Oversight Mandates Solidify
The EU AI Act is described by the European Commission as the first comprehensive legal framework on AI worldwide. It sets risk-based rules for developers and deployers of AI systems, including requirements for logging and human oversight of high-risk uses, according to the Commission’s AI Act overview.
Article 14 of the AI Act requires that high-risk AI systems be designed and developed so that natural persons can effectively oversee them. The consolidated text explains that oversight must allow human supervisors to understand system capabilities and limitations, monitor operation, interpret outputs, and decide to stop or override the system when necessary.
The same text records that Article 14 applies from 2026, following the Act’s entry into force in 2024. This phasing turns what might have been soft expectations about supervision into enforceable duties for providers and deployers of high-risk AI systems.
Article 26 places explicit obligations on deployers. They must assign human oversight to natural persons with the necessary competence, training, and authority, and they must monitor system operation in line with the provider’s instructions.
The same provision requires deployers to keep automatically generated logs for a period appropriate to the system’s intended purpose, of at least six months, unless other Union or national law provides otherwise. Together with the design requirements in Article 14, this creates a two-level structure of accountability: systems must be built to support oversight, and organizations must assign people and keep records.
Assurance in Practice: Logs, Audits, and Control Rooms
Article 26’s logging obligation is intended to make high-risk AI use reconstructable over time. By keeping logs for at least six months, deployers preserve information that can show which version of a system was running, what inputs it received, and how outputs influenced downstream decisions.
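To make that concrete, here is a minimal sketch in Python of what such a log record could look like. The field names, JSON-lines format, and log_decision helper are illustrative assumptions, not a format the AI Act prescribes:

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: dict,
                 log_path: str = "decisions.jsonl") -> str:
    """Append one automated decision to a JSON-lines audit log.

    Hypothetical sketch: each record keeps enough context to reconstruct
    the decision later -- which model version ran, what it received,
    and what it returned.
    """
    record = {
        "decision_id": str(uuid.uuid4()),  # stable reference if the decision is disputed
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,    # which build produced the output
        "inputs": inputs,                  # the data the system actually saw
        "output": output,                  # decision plus any score or confidence
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Example: record a hypothetical credit-scoring decision.
log_decision(
    model_version="scoring-model:1.4.2",
    inputs={"applicant_id": "A-1001", "income": 52000},
    output={"decision": "refer", "score": 0.48},
)
```

The AI Act sets the minimum retention period; enforcing it would typically fall to the surrounding log infrastructure rather than to application code like this.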
The US National Institute of Standards and Technology’s AI Risk Management Framework notes that documentation can support this type of assurance. NIST writes that documentation "can enhance transparency, improve human review processes, and bolster accountability in AI system teams" in its AI RMF 1.0 publication.
In concrete terms, assurance can include versioned records of models and configuration, documented data preprocessing pipelines, and periodic reviews of model performance in production. Organizations can also conduct structured testing, such as stress tests and adversarial evaluations, to identify failure modes before they appear in real use.
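One way to hold such versioned records together is a deployment manifest. The sketch below shows one hypothetical shape for such a record; the ModelManifest class and its field names are illustrative assumptions, not a mandated schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelManifest:
    """Hypothetical versioned record linking a deployed model to its lineage."""
    model_version: str                # registry tag or semantic version
    training_data_snapshot: str       # identifier of the dataset version used
    preprocessing_steps: list[str] = field(default_factory=list)
    approved_by: str = ""             # named person who signed off on deployment
    next_review_due: str = ""         # date of the next scheduled production review

manifest = ModelManifest(
    model_version="scoring-model:1.4.2",
    training_data_snapshot="applications-2025-06",
    preprocessing_steps=["impute_missing_income", "normalize_currency"],
    approved_by="head_of_model_risk",
    next_review_due="2026-01-15",
)
```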
Assurance activities like these require time and specialist capacity, and they often span legal, risk, engineering, and operations teams. The combination of mandated logs and internal assurance practices determines how ready an organization is to explain its automated decisions when challenged.
Incident Response Lessons
Even with controls in place, some automated decisions will be inaccurate, harmful, or contested. When that happens, the presence or absence of logs and clear oversight responsibilities determines whether investigation is orderly and evidence based or improvised and prolonged.
Where logs from high-risk systems are preserved as required by Article 26, teams can reconstruct the sequence of events around a disputed decision. This supports root cause analysis and helps regulators or courts assess whether the deployer used the system in line with instructions and legal duties.
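Continuing the hypothetical JSON-lines format sketched earlier, reconstruction can be as simple as scanning the log for the disputed decision’s identifier:

```python
import json
from typing import Optional

def reconstruct_decision(log_path: str, decision_id: str) -> Optional[dict]:
    """Find the audit record behind a disputed decision (hypothetical format)."""
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("decision_id") == decision_id:
                return record  # model version, inputs, and output at decision time
    return None

# Example: pull the evidence trail for a contested decision.
evidence = reconstruct_decision("decisions.jsonl", "example-decision-id")
if evidence:
    print(evidence["model_version"], evidence["output"])
```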
By contrast, organizations that have not invested in logging and escalation paths may struggle to provide a coherent account when automated decisions are questioned. Investigations can then depend on partial recollections or ad hoc sampling, increasing uncertainty for affected individuals and for the organization itself.
Over time, these differences in incident readiness influence the net cost of automation. Systems backed by evidence trails and clear human decision points may recover trust faster after failures than systems that cannot show how decisions were made.
Strategic Value of Accountability
Accountability is often framed as a compliance obligation, but it can also shape market access and competitive positioning. Public sector tenders and regulated industry contracts increasingly request information about AI governance, including oversight structures and logging practices.
Investors and lenders factor governance into assessments of operational and legal risk, especially when business models depend on large scale algorithmic decision making. Clear lines of responsibility and well documented controls can reduce perceived risk and support access to capital on better terms.
Employees also encounter AI systems as part of their work environments, in hiring, performance management, and workflow tools. For trust and adoption, staff often look for evidence that they can raise concerns, trigger review, and obtain explanations when automated systems influence outcomes that matter to them.
In this context, meticulous oversight and documentation become differentiators rather than only safeguards. Organizations that can demonstrate how automated systems are monitored, audited, and corrected signal reliability to customers, regulators, and employees.
What Cannot Be Automated Yet
Automation can optimize tasks within a defined objective and constraint set, but defining those objectives and constraints remains a human responsibility. Decisions about what to optimize, which risks are acceptable, and how to weigh competing interests sit upstream of model training and deployment.
The OECD AI Principles emphasize that AI actors should be accountable for the proper functioning of AI systems and for respect of the broader values-based principles, based on their roles and context. The same text highlights traceability across datasets, processes, and decisions so that outputs can be analyzed and subject to inquiry.
Those expectations assume that someone can explain why a system was built the way it was and how it should behave in borderline cases. Tasks such as negotiating trade-offs between fairness and accuracy, deciding when to defer to human review, and setting escalation thresholds still rely on context-sensitive judgment.
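Once set, such thresholds can be encoded as simple routing rules. In this minimal sketch, the 0.6 cutoff is a hypothetical value standing in for a judgment a human has to make and periodically revisit:

```python
def route_decision(score: float, review_threshold: float = 0.6) -> str:
    """Send low-confidence outputs to a human reviewer instead of auto-acting.

    The threshold value is itself a human judgment: where it sits reflects
    how the organization weighs error costs against review capacity.
    """
    if score < review_threshold:
        return "escalate_to_human_review"
    return "auto_approve"

print(route_decision(0.48))  # -> escalate_to_human_review
print(route_decision(0.92))  # -> auto_approve
```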
Relationships with customers, citizens, or employees also involve elements that are hard to formalize fully in code. Explaining a complex decision, offering an apology, or agreeing on remediation are closely tied to legitimacy and consent, and they depend on accountable human decision makers.
As organizations automate more execution steps, a larger share of value creation and risk management sits in roles that design rules, interpret obligations, and authorize responses. Governance, risk, compliance, and public policy teams therefore become more central to how automated systems are selected, deployed, and overseen.
Looking Ahead
Both the EU AI Act and the OECD accountability principle frame AI governance as a lifecycle task that spans design, deployment, and post-deployment monitoring. The AI Act’s high-risk rules, which the Commission indicates will apply from 2026 for key obligations, include post-market monitoring and incident reporting duties alongside oversight and logging.
For companies, this shifts the strategic question from whether a task can be automated to whether the organization can explain and stand behind automated behavior years after deployment. That includes showing how models were trained, how they were used, and how human supervisors were expected to intervene.
Execution can be scaled with AI, but accountability, traceability, and remedy remain anchored in human and institutional roles. As regulatory frameworks mature and adoption deepens, the capacity to design and operate accountable AI systems is likely to define how much value organizations can safely extract from automation.
Sources
- McKinsey & Company. "The state of AI: How organizations are rewiring to capture value." McKinsey & Company, 2025.
- Eversheds Sutherland. "Who's liable? Legal accountability in the age of AI: Part 1." Eversheds Sutherland, 2025.
- Zoe Porter, Philippa Ryan et al. "Unravelling responsibility for AI." Journal of Responsible Technology, 2025.
- EU Artificial Intelligence Act. "Article 14: Human Oversight." artificialintelligenceact.eu, 2026.
- EU Artificial Intelligence Act. "Article 26: Obligations of deployers of high-risk AI systems." AI Act Service Desk, 2024.
- OECD. "Accountability (Principle 1.5)." OECD.AI, 2026.
- NIST. "Artificial Intelligence Risk Management Framework (AI RMF 1.0)." National Institute of Standards and Technology, 2023.
- European Commission. "AI Act." European Commission, 2026.
- EU AI Act Service Desk. "AI Act Single Information Platform." ai-act-service-desk.ec.europa.eu, n.d.
