Deontological approaches to AI and machine ethics: principles, guidelines, and debates (2010–2025)

  1. Machine ethics framed as a practical design problem

    Labels: Machine ethics, Deontological ethics

    Work in machine ethics argued that, as software systems and robots increasingly make decisions affecting people, designers may need to encode explicit moral rules. Deontological ethics (duty- and rule-based ethics) became a common reference point because it focuses on constraints—what systems must and must not do—even when outcomes are uncertain.
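
    To make the contrast concrete, here is a minimal, hypothetical sketch of the constraint-first pattern deontological approaches suggest: hard "must not" rules are checked before any outcome-based scoring, and a violation vetoes an action regardless of its expected benefit. The rule set and action model are illustrative inventions, not drawn from any published standard.

      from dataclasses import dataclass, field
      from typing import Callable

      @dataclass
      class Action:
          """A candidate action the system could take (illustrative)."""
          name: str
          expected_benefit: float           # outcome-based score
          tags: set[str] = field(default_factory=set)

      # Deontological rules: predicates that forbid actions outright.
      # Each returns True if the action violates the duty it encodes.
      FORBIDDEN: list[tuple[str, Callable[[Action], bool]]] = [
          ("no deception",      lambda a: "deceives_user" in a.tags),
          ("no privacy breach", lambda a: "exposes_personal_data" in a.tags),
      ]

      def choose_action(candidates: list[Action]) -> Action | None:
          """Veto rule-violating actions first, then optimize.

          The veto is unconditional: a forbidden action is never chosen,
          even if its expected benefit is highest.
          """
          permissible = [
              a for a in candidates
              if not any(violates(a) for _, violates in FORBIDDEN)
          ]
          if not permissible:
              return None  # refuse to act rather than break a duty
          return max(permissible, key=lambda a: a.expected_benefit)

      # The highest-benefit action is vetoed; the permissible one wins.
      actions = [
          Action("nudge_with_dark_pattern", 0.9, {"deceives_user"}),
          Action("plain_recommendation", 0.6),
      ]
      print(choose_action(actions).name)  # -> plain_recommendation

    A consequentialist variant would instead fold rule violations into the benefit score as penalties; the deontological version never trades a violation against an outcome.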

  2. Partnership on AI publicly announced

    Labels: Partnership on AI, Technology companies

    Major technology companies and partners announced the Partnership on AI to study best practices and societal impacts of AI. The move reflected a shift from purely academic debate toward organized governance efforts, including rule-like commitments (such as transparency and responsibility) that align with deontological thinking.

  3. Beneficial AI meeting at Asilomar convened

    Labels: Beneficial AI, Asilomar conference

    Researchers and other stakeholders gathered at the Beneficial AI conference at Asilomar to discuss how to steer AI toward public benefit. This kind of convening helped translate high-level duties—like preventing harm and enabling oversight—into shared governance language.

  4. Asilomar AI Principles published

    Labels: Asilomar AI Principles

    The Asilomar AI Principles set out widely cited guidance about safety, transparency, responsibility, and human control. Although not a strict deontological code, many principles read like duties (for example, making failures explainable and ensuring safety over an operational lifetime), helping to normalize “must/should” constraints for AI.

  5. AIES conference launches to link AI and ethics

    Labels: AIES conference, AAAI

    AAAI and ACM launched the Conference on AI, Ethics, and Society (AIES) to create a regular venue for interdisciplinary research on AI’s societal impacts. This helped move debates about rule-based constraints (for example, rights, due process, and accountability) into peer-reviewed technical and policy discussions.

  6. EU issues “Artificial Intelligence for Europe” communication

    Labels: European Commission, Artificial Intelligence for Europe

    The European Commission set out a European approach to AI that included building an ethical and legal framework alongside investment and innovation goals. This positioned rights- and duty-based constraints as a central part of policy planning, not just an optional add-on.

  7. UK publishes its Data Ethics Framework

    Labels: UK Data Ethics Framework, UK government

    The UK government published a Data Ethics Framework to guide public-sector data work, foregrounding transparency, accountability, and fairness. These are often treated as duty-like requirements that apply even when projects promise beneficial outcomes.

  8. EU appoints High-Level Expert Group on AI

    Labels: High-Level Expert Group, European AI Alliance

    The European Commission appointed a High-Level Expert Group on AI and launched the European AI Alliance to support broad consultation. The group’s mandate included drafting ethics guidance rooted in fundamental rights—an approach closely connected to deontological ideas about duties and constraints.

  9. Alan Turing Institute publishes public-sector AI ethics guide

    Labels: Alan Turing Institute, Public-sector guide

    The Alan Turing Institute published guidance on responsible design and implementation of AI in the public sector. It made ethical duties operational by discussing concrete measures to anticipate harms and support accountable, fair, and safe systems in government settings.

  10. IEEE releases Ethically Aligned Design (1st ed.)

    Labels: IEEE, Ethically Aligned Design

    IEEE launched the first edition of Ethically Aligned Design, a major multi-stakeholder document offering high-level principles and practical recommendations for autonomous and intelligent systems. It reinforced the idea that designers and organizations have ongoing duties—such as respecting human rights and prioritizing well-being—throughout the AI lifecycle.

  11. EU publishes Ethics Guidelines for Trustworthy AI

    Labels: EU Ethics Guidelines, High-Level Expert Group

    The EU High-Level Expert Group released Ethics Guidelines for Trustworthy AI and described “trustworthy AI” as lawful, ethical, and robust. The guidelines’ emphasis on fundamental rights, human oversight, and accountability supported a deontological framing: AI systems should respect duties to people, not only optimize outcomes.

  12. OECD AI Principles adopted; later echoed by G20

    Labels: OECD AI, G20

    The OECD AI Principles were adopted as an international consensus framework for “trustworthy AI,” grounded in human rights and democratic values. The G20 later issued AI Principles drawn from the OECD work, helping spread duty-like expectations—such as responsibility and transparency—across governments.

  13. UNESCO adopts Recommendation on the Ethics of AI

    Labels: UNESCO Recommendation, Member states

    UNESCO’s member states adopted a global recommendation on AI ethics, framed around human rights and human dignity. The recommendation strengthened deontological approaches by treating certain constraints—like respect for rights and protection of vulnerable groups—as baseline obligations across countries.

  14. NIST releases AI Risk Management Framework 1.0

    Labels: NIST, AI RMF

    NIST released AI RMF 1.0 as voluntary guidance for managing AI risks to individuals, organizations, and society. While risk management is not identical to deontology, the framework supports “duty to govern” ideas by emphasizing repeatable processes for accountability, measurement, and mitigation across the AI lifecycle.
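
    The RMF's four core functions (Govern, Map, Measure, Manage) are named in the framework itself; the risk-register structure below is a hypothetical sketch of how a team might record repeatable, auditable entries against those functions, not an official NIST schema.

      from dataclasses import dataclass
      from enum import Enum

      class RmfFunction(Enum):
          """The four core functions named in NIST AI RMF 1.0."""
          GOVERN = "govern"    # policies, roles, accountability
          MAP = "map"          # context and risk identification
          MEASURE = "measure"  # analysis and tracking of risks
          MANAGE = "manage"    # prioritization and mitigation

      @dataclass
      class RiskRegisterEntry:
          """One repeatable, auditable record in a team's risk register.

          The field names are an illustrative convention, not a NIST schema.
          """
          system: str
          function: RmfFunction
          description: str
          owner: str              # a named role supports accountability
          mitigation: str
          review_cadence_days: int

      entry = RiskRegisterEntry(
          system="resume-screening-model",
          function=RmfFunction.MEASURE,
          description="Possible disparate impact across applicant groups",
          owner="ml-governance-lead",
          mitigation="Quarterly bias audit against a holdout benchmark",
          review_cadence_days=90,
      )
      print(f"[{entry.function.value}] {entry.system}: {entry.description}")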

  15. U.S. issues Executive Order 14110 on AI

    Labels: U.S.

    The United States issued an executive order on safe, secure, and trustworthy AI, directing federal actions related to safety testing, privacy, civil rights, and responsible government use. This broadened debates about deontological constraints from “best practice” into formal public governance expectations.

  16. OECD Principles updated to address generative AI era

    Labels: OECD Principles, Generative AI

    OECD countries adopted revisions to the OECD AI Principles to reflect rapid developments such as general-purpose and generative AI. The update shows how deontological-style duties (privacy, safety, information integrity) are being reinterpreted and strengthened as new capabilities change what compliance and accountability require.

  17. EU AI Act enters into force

    Labels: EU AI Act, European Union

    The EU AI Act entered into force, setting a legal framework with obligations that scale by risk and with some outright prohibitions. It marked a shift from voluntary principles to enforceable rules—bringing deontological “must not” constraints (like bans on certain practices) into binding law.
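
    As a rough illustration of "obligations that scale by risk," the sketch below encodes the Act's general tier structure (prohibited, high, limited, minimal) and rejects prohibited practices before any obligations are even computed. The use-case mapping and obligation lists are simplified examples; real determinations require the legal text, not a lookup table.

      from enum import Enum

      class RiskTier(Enum):
          """Risk tiers reflecting the AI Act's general structure."""
          PROHIBITED = "prohibited"  # banned outright ("must not")
          HIGH = "high"              # heavy pre-market obligations
          LIMITED = "limited"        # mainly transparency duties
          MINIMAL = "minimal"        # no extra obligations

      # Simplified, illustrative classification of example use cases.
      USE_CASE_TIERS = {
          "social_scoring_by_authorities": RiskTier.PROHIBITED,
          "cv_screening_for_hiring": RiskTier.HIGH,
          "customer_service_chatbot": RiskTier.LIMITED,
          "spam_filter": RiskTier.MINIMAL,
      }

      OBLIGATIONS = {
          RiskTier.HIGH: ["risk management system", "human oversight",
                          "logging and traceability", "conformity assessment"],
          RiskTier.LIMITED: ["disclose AI interaction to users"],
          RiskTier.MINIMAL: [],
      }

      def required_obligations(use_case: str) -> list[str]:
          """Return duty-like obligations that scale with the risk tier."""
          tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
          if tier is RiskTier.PROHIBITED:
              # The deontological "must not": no benefit weighing applies.
              raise ValueError(f"{use_case}: practice is prohibited")
          return OBLIGATIONS[tier]

      print(required_obligations("cv_screening_for_hiring"))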

  18. Council of Europe opens legally binding AI convention for signature

    Labels: Council of Europe, Framework Convention

    The Council of Europe opened the Framework Convention on AI for signature as a legally binding treaty focused on human rights, democracy, and the rule of law. This reinforced a strongly deontological posture at the international level: AI governance should be built around duties to protect rights, not only around promised benefits.

  19. U.S. Executive Order 14110 rescinded

    Labels: Executive Order, NIST

    NIST reported that Executive Order 14110 was rescinded on January 20, 2025. This highlighted a central debate in applied deontological AI governance: whether duties like safety, privacy, and civil-rights protections should rely on shifting executive policy or be stabilized through longer-lasting laws and standards.

  20. First EU AI Act prohibitions and literacy obligations apply

    Labels: EU AI Act, EU

    Early AI Act requirements began to apply, including prohibitions on certain AI practices and obligations related to AI literacy. These steps operationalized the idea that some AI uses are categorically unacceptable—an approach consistent with deontological “constraints first” governance.
