## Introduction to the Ethics of Artificial Intelligence

The lecture commences with a standard academic welcome, establishing the setting as a university course titled "The Ethics of AI." The speaker, one of the course's lecturers, begins by building rapport with the large group of students, confirming that the sound system is audible so that everyone present can follow along.

Before delving into the subject matter, the lecturer performs an Acknowledgement of Country. This is a formal and respectful practice in Australia, recognizing the traditional custodians of the land upon which the university campus is built. Specifically, the Parkville campus is identified as the land of the Wurundjeri people of the Kulin nations. This act pays respect to their Elders—past, present, and future—and situates the academic endeavor within a broader historical and cultural context, acknowledging the thousands of years of Indigenous heritage that precede the university's existence.

Following this formal opening, the lecturer introduces a more informal and interactive element to the class: a system of rewarding student participation with small candies, or "lollies." A student assistant, Bill, is designated as the distributor of these rewards. This pedagogical technique is designed to lower the barrier to participation, encouraging students to ask and answer questions by creating a low-stakes, positive reinforcement loop.

The lecturer then transitions to the course's substance by reflecting on preliminary tutorial discussions. These initial interactions give the teaching team valuable insights into the student cohort, particularly their interests and motivations for enrolling in the subject. The lecturer remarks on the students' favorite science-fiction AI characters, which reveal a shared cultural touchstone and a passion for the topic among both students and the teaching staff. More significantly, the lecturer identifies three primary motivations students have for taking this course.

The first is a desire to avoid subjects that require intensive computer programming, or "coding." The lecturer confirms that this course will indeed meet that expectation, as it is not a technical subject focused on building AI systems. Instead, its primary focus is on the ethical dimensions of AI. This distinction is crucial: while some technical familiarity is helpful, the course is designed to be accessible to students without a deep computer science background, concentrating on the "why" and "should we" questions rather than the "how-to."

The second, and more humorous, motivation is the perception that an ethics course might be an "easy" subject. The lecturer deliberately refrains from confirming or denying this, instead proposing to revisit the question at the end of the semester. The implication is that while the subject may lack the objective, right-or-wrong answers of a technical field, it presents its own form of difficulty: grappling with complex, ambiguous moral problems that require rigorous critical thinking and the ability to analyze issues from multiple perspectives, thereby "exercising a different part of the brain" beyond a purely computational or information-systems lens.

The third and most common reason students enroll is a genuine and pressing interest in learning about AI ethics.
This reflects the growing prominence of AI in society and the increasing public awareness of its profound social, political, and personal implications. Students are interested not only out of intellectual curiosity but also for its practical relevance to their future careers, where they may be involved in designing, implementing, or managing AI systems.

### The Teaching Team and Course Philosophy

To provide a more personal context for the course, the lecturer shares his own unconventional career path. He began his professional life as a veterinarian, a field seemingly distant from AI and philosophy. This background is symbolized by a stethoscope and a picture of his dog, Imogen, a former border-force dog he adopted. This personal anecdote serves multiple purposes: it humanizes the lecturer and illustrates that paths to engaging with technology and ethics are not always linear. His journey into ethics was sparked by an encounter with the works of ancient Greek philosophers, specifically Plato and his student Aristotle. These two figures, depicted in a famous painting (Raphael's *The School of Athens*), are foundational to Western philosophy and, by extension, to the systematic study of ethics. The lecturer clarifies that his encounter was with their writings, as they lived over two millennia ago. This interest in classical ethics evolved into a focus on more contemporary issues, such as robot ethics and the ethics of technology, ultimately leading him to his current role.

The lecturer then introduces the rest of the teaching team, emphasizing the interdisciplinary strength they bring to the subject. The other primary lecturer, Simon, is both a philosopher and a computer scientist, embodying the bridge between the humanities and technology that is central to the course. The team also includes a head tutor, Madeleine, and a group of other tutors, all of whom are described as passionate about AI ethics. The lecturer underscores the importance of the tutorial sessions as a space for discussion and debate. He explicitly encourages students to engage actively with their tutors and peers, stressing that a key goal of the subject is to explore diverse viewpoints on complex ethical issues. This collaborative and discursive approach is fundamental to ethical inquiry, which thrives on the examination of different perspectives rather than the memorization of facts. To reinforce a previous point and maintain the interactive atmosphere, the lecturer poses a quiz question about his dog's name, "Imogen," which is correctly answered by a student, leading to a lighthearted moment involving the throwing of a lolly.

## The Contemporary Landscape and Core Questions of AI Ethics

The lecture transitions to the rapidly evolving and often tumultuous world of AI ethics, highlighting several recent events to demonstrate the topic's currency and urgency. These examples serve as concrete illustrations of the abstract ethical principles the course will explore.

### Recent Developments and Controversies

The lecturer points to a series of news items from the past year or two that have brought AI ethics to the forefront of public conversation.

* **Competition and Innovation:** The emergence of systems like DeepSeek, positioned as a low-cost competitor to established models like ChatGPT, highlights the intense economic and geopolitical competition driving AI development.
* **Content Moderation:** A significant policy shift by Mark Zuckerberg's Meta was mentioned, in which the company decided to reduce its monitoring and removal of certain types of content on Facebook and Instagram. This decision, framed as a move against "censorship," raises profound ethical questions about the responsibility of social media platforms. It forces a confrontation between the values of free expression and the need to protect users from harm, such as misinformation, hate speech, and incitement to violence. The timing of such decisions, often influenced by the political climate, underscores the deep entanglement of technology, ethics, and power.
* **Algorithmic Bias:** An incident from the previous year involving an image-generation model is used to illustrate the problem of algorithmic bias. (The lecturer refers to the model as DALL-E, but the widely reported incident described here, depicting racially diverse Nazi-era soldiers, involved Google's Gemini.) In an attempt to promote diversity, the model was trained to avoid generating exclusively white individuals. However, this crude intervention resulted in historically inaccurate and offensive images, such as depicting Black and Asian people as Nazi soldiers. This example provides a powerful lesson: simply trying to "fix" bias without a deep, nuanced understanding of the data and the social context can lead to absurd and harmful outcomes. It demonstrates that AI systems do not possess genuine understanding but rather reflect, and can amplify, the patterns and flaws in their training data.
* **Deepfakes and Malicious Use:** The proliferation of pornographic deepfakes, specifically those targeting the celebrity Taylor Swift, is cited as an example of the malicious use of AI. Deepfake technology, which uses AI to create realistic but fabricated images and videos, poses severe threats related to privacy, consent, defamation, and the spread of disinformation. This issue remains a persistent and difficult challenge for both technology platforms and legal systems.

### Global Governance and Geopolitical Tensions

The discussion then broadens to the international stage, focusing on the global effort to govern AI. The lecturer refers to the recent Paris AI Action Summit, where numerous countries convened to discuss the future of artificial intelligence. A key outcome was a declaration signed by 60 countries, including Australia, China, and many European nations. The declaration committed the signatories to ensuring that AI is developed and used in a way that is "open, inclusive, transparent, ethical, safe, secure, and trustworthy." This represents a global consensus on the high-level principles that should guide AI.

However, the lecturer immediately highlights a critical point of divergence by asking which major countries *notably* did not sign this document. The correct answers, provided by students, are the United States and the United Kingdom. This refusal to sign signifies a major schism in the global approach to AI regulation. On one side, nations like those in the European Union are pursuing a more cautious, regulation-heavy approach, exemplified by the EU's comprehensive Artificial Intelligence Act, which aims to establish clear rules and "guardrails" to ensure safety and trustworthiness. On the other side, the US and UK appear to be prioritizing rapid, uninhibited innovation, driven by the fear of falling behind in the global "AI race."
This "full steam ahead" approach, with fewer regulatory constraints, reflects a different calculation of risks and rewards, where geopolitical and economic dominance are weighed heavily against potential societal harms. This illustrates that AI ethics is not just a philosophical debate but a domain of intense international politics and economic strategy. ### Hopes and Fears for an AI-Powered Future To ground the discussion in the students' own perspectives, the lecturer solicits their hopes and fears regarding AI. This exercise reveals the dual nature of powerful technologies, which often hold the promise of immense benefit alongside the risk of significant harm. The hopes expressed by students are ambitious and transformative: 1. **Revolutionizing Medicine:** A student suggests that AI could dramatically improve healthcare, transforming diagnostics, prognostics (predicting disease outcomes), and therapeutics (developing treatments). The lecturer affirms this, noting that some AI systems are already outperforming human doctors in specific diagnostic tasks, such as interpreting medical scans. 2. **Alleviating Tedium and Error:** Another hope is that AI can automate redundant tasks and reduce human error. This speaks to the potential for AI to increase efficiency and safety in various industries by taking over repetitive or dangerous jobs. 3. **Breaking Down Language Barriers:** A student proposes that AI could enable seamless, real-time, and accurate translation between human languages. The potential social and cultural impact of such a technology would be immense, fostering greater global communication and understanding. 4. **Personalized Education:** The idea of using AI to create individualized learning experiences is also raised. AI tutors could adapt to each student's pace and learning style, potentially making education more effective and accessible. Conversely, the students also voice significant concerns: 1. **Mass Job Displacement:** A primary fear is widespread job loss. The lecturer elaborates on this, noting that unlike previous technological revolutions that primarily displaced manual labor (the Industrial Revolution), the current AI wave threatens to automate "white-collar" jobs that require intellectual work and cognitive skills. This raises the prospect of a fundamental restructuring of the economy and the nature of work itself. 2. **Automated Decision-Making and Loss of Agency:** A student expresses concern about computers making critical decisions that were previously the domain of humans. The lecturer provides concrete examples: an AI system deciding who receives welfare benefits, who is granted bail, or who gets a loan. This raises profound ethical issues about accountability, transparency, fairness, and the right to an explanation or appeal when a machine makes a life-altering decision. The lecturer concludes this section by presenting a series of pointed ethical questions that the course will explore, such as whether social media algorithms should be regulated, if AI should be used in the criminal justice system, whether facial recognition should be banned, and even the meta-question of whether it is morally acceptable to pursue research into human-level Artificial General Intelligence (AGI), given the potential existential risks that some prominent AI researchers themselves have warned about. 
## Course Structure and Administration

The lecture briefly pauses to cover essential administrative details and assessment requirements, ensuring students have a clear understanding of the course structure.

* **Assessment Breakdown:** The course assessment is composed of four main components:
    * **Tutorial Participation (20%):** Marks are awarded for in-person attendance at tutorials, starting from the second week. The system is designed with some flexibility; students can miss one of the eleven marked tutorials without penalty, as the total marks available (22) exceed the component's weight (20). For further absences, a formal special consideration process is required.
    * **Essay 1 (30%):** Due in Week 7.
    * **Essay 2 (30%):** A research essay due at the end of Week 12.
    * **Final Exam (20%):** A 90-minute exam held during the official examination period.
* **No Hurdles:** A key point of relief for many students is the confirmation that there are no "hurdle requirements," meaning a student does not need to pass each individual assessment component to pass the overall subject.
* **Discussion Board:** An online discussion board is available for students to ask questions, answer their peers' queries, and share points of interest related to each week's topic, fostering a continuous learning community outside of scheduled class times.
* **Class Representative:** A call is made for a volunteer to act as a class representative, a role that facilitates communication between the student body and the teaching team. The lecturer humorously offers an "automatic H1" (the highest possible grade) as an incentive, before clarifying that only the promise of being considered "cool" is true. This role is presented as a valuable experience for a student's CV.

## A Foundational History of Computing and Artificial Intelligence

The core of the lecture begins with a historical overview, designed to provide the necessary context for understanding modern AI. The narrative traces the conceptual and technological evolution from simple calculating devices to the complex systems of today.

### The Pre-History of Computing: A Quiz

The lecturer engages the class with a multiple-choice question: "In what year was the first programmable computer built?" The options span from antiquity (87 BCE) to the mid-20th century (1951). This quiz serves as a framework for walking through key milestones in the history of computation.

1. **87 BCE - The Antikythera Mechanism:** Discovered in an ancient Greek shipwreck, this intricate device of bronze gears was an astronomical calculator. It could predict the positions of the sun, moon, and planets. While a remarkable feat of ancient engineering, the lecturer explains it is not a programmable computer. It is a *special-purpose* calculating device; its function is fixed by its physical construction and it cannot be reprogrammed to perform other tasks.
2. **1642 - The Pascaline:** Invented by French mathematician and philosopher Blaise Pascal, this was one of the first mechanical calculators. It could perform arithmetic operations (addition and subtraction) through a series of interlocking gears. Like the Antikythera mechanism, it was a significant step in automating calculation, but it was not programmable and was limited to a narrow range of mathematical functions.
3. **1805 - The Jacquard Loom:** This device represents a crucial conceptual leap. It was a loom for weaving textiles that used a series of punched cards to control the pattern. By changing the stack of cards, one could change the woven design. The lecturer explains that this is the birth of the concept of a *program*—a set of stored, modifiable instructions that direct a machine's operation (a toy sketch after this list illustrates the idea). However, the Jacquard loom is not a *computer*, because its domain is limited to a single task: weaving. It cannot perform general-purpose calculations.
4. **1837 - The Analytical Engine:** This was the theoretical invention of English mathematician Charles Babbage. The lecturer identifies this as the true conceptual forerunner of the modern computer. The Analytical Engine was designed to be a *general-purpose, programmable* machine. It had a "store" (memory) to hold numbers and a "mill" (a central processing unit) to perform operations on them, controlled by punched cards. Crucially, the lecturer highlights the contribution of Ada Lovelace, who worked with Babbage. Lovelace recognized that the machine's potential went beyond mere number-crunching. She understood that if symbols like musical notes or letters could be represented by numbers, the machine could manipulate them. She famously predicted that it might one day compose music, a vision that is only now being fully realized with generative AI. The lecturer poses a trick question: was this the first programmable computer *built*? The answer is no. The Analytical Engine was never constructed in Babbage's lifetime due to the limitations of Victorian-era mechanical engineering. It remained a brilliant design on paper.
5. **1943 - The Colossus:** Moving into the electronic era, the lecturer discusses the Colossus computer, built by the British at Bletchley Park during World War II. Its purpose was to crack the sophisticated codes generated by the German Lorenz cipher machine (a different device from the more famous Enigma). Colossus was electronic, using vacuum tubes, and was programmable to a degree using switches and plugs. Because it was built, electronic, and could be reconfigured for different logical tasks, the lecturer identifies this as a strong candidate for the answer to the quiz: the first *built, programmable, electronic* computer.
6. **1951 - CSIRAC:** The lecturer concludes the historical computing tour with Australia's first computer, CSIRAC, one of the earliest stored-program computers in the world. It is now housed in the Melbourne Museum. A notable achievement of CSIRAC was that it was one of the first computers to play music, fulfilling Ada Lovelace's prediction from over a century earlier.
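To make the loom's conceptual leap concrete, here is a minimal Python sketch of the idea the lecturer attributes to the Jacquard loom: a fixed machine whose output is directed entirely by a stack of stored, swappable instructions. The "loom," its card format, and the patterns are invented for illustration; this is not a historical simulation.

```python
# A toy illustration of the Jacquard idea: the machine stays fixed, and only
# the stack of punched cards -- the "program" -- changes what it produces.

def weave(cards):
    """Run the fixed 'machine': each card is one row of holes (1) and blanks (0)."""
    for card in cards:
        # A hole lifts a warp thread ('#'); no hole leaves it down ('.').
        print("".join("#" if hole else "." for hole in card))

# Two different "programs" for the same machine:
diamond = [
    (0, 0, 1, 0, 0),
    (0, 1, 0, 1, 0),
    (1, 0, 0, 0, 1),
    (0, 1, 0, 1, 0),
    (0, 0, 1, 0, 0),
]
stripes = [(1, 0, 1, 0, 1)] * 5

weave(diamond)  # swapping in `stripes` changes the cloth...
print()
weave(stripes)  # ...without modifying the machine itself
```

Swapping one deck of cards for another changes the product without touching the machine, which is precisely what distinguishes a program from a fixed mechanism like the Pascaline.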
### The Defining Eras of Artificial Intelligence

With the history of the computer established, the lecture turns to the history of the field that sought to make those computers intelligent.

#### The Genesis: The Turing Test and the Dartmouth Conference

* **The Turing Test (1950):** The lecturer introduces Alan Turing, a central figure in both computer science and AI. Turing proposed what he called the "Imitation Game," now universally known as the Turing Test. The lecturer explains Turing's brilliant philosophical move: faced with the vague and perhaps unanswerable question, "Can machines think?", Turing replaced it with a concrete, operational test. The test involves a human interrogator communicating via text with two unseen entities: one a human, the other a machine. If the interrogator cannot reliably distinguish the machine from the human, the machine is said to have passed the test. The lecturer notes that this test has become a famous, if controversial, benchmark for AI, sparking decades of debate about whether successfully imitating intelligence is the same as possessing genuine intelligence or consciousness.
* **The Dartmouth Conference (1956):** This is identified as the official birthplace of "artificial intelligence" as a field. The lecturer reads from the original proposal for the workshop, written by John McCarthy and his colleagues. The proposal's core premise was the "conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." This optimistic statement laid out an ambitious research program: to make machines use language, form concepts, solve problems reserved for humans, and even improve themselves. The lecturer then directs the students to look at a photograph of the conference attendees and asks what they have in common. Students correctly identify that they are all white, male, English-speaking, middle-class academics (computer scientists or mathematicians). This observation is a crucial foreshadowing of the lecture's final section on diversity and representation, highlighting that the very foundations of the field were laid by a demographically homogeneous group.

#### The Golden Age and the First AI Winter (1950s-1980)

* **The Golden Age:** Following Dartmouth, the field entered a period of great optimism and foundational discoveries. The dominant paradigm was "symbolic AI," or "Good Old-Fashioned AI" (GOFAI), which treated intelligence as a process of manipulating symbols according to logical rules. A key technique developed was **reasoning as search**, where a problem is represented as a tree of possibilities and the AI searches for a path to the solution. A famous example of this era was **Shakey the Robot**. Shakey could perceive its environment (a room with large blocks) using a camera, build an internal model of that world, and use a search algorithm (the A* algorithm, which is still used today) to plan a path to a destination; a minimal sketch of A* appears after this list. However, the lecturer points out Shakey's critical limitation: it was incredibly slow, taking hours to compute a short journey. This hinted at the problems to come. Another key invention was the **perceptron**, an early, simple model of an artificial neuron, which foreshadowed the later rise of neural networks.
* **The First AI Winter (c. 1974-1980):** The initial hype and promises of the Golden Age failed to materialize into practical, real-world applications. The lecturer explains the key reasons for this downturn:
    * **The Scaling Problem:** Techniques that worked in simplified "microworlds" like Shakey's room failed when faced with the complexity and messiness of the real world. The number of possibilities would explode, making search-based methods computationally intractable.
    * **The Common Sense Problem:** It proved immensely difficult to program a computer with the vast ocean of unspoken, "common sense" knowledge that humans use effortlessly to navigate the world.
    * **Moravec's Paradox:** This is the observation that, contrary to expectation, it is easy for AI to perform tasks that humans find hard (like advanced mathematics or formal logic), but extremely difficult for AI to perform tasks that humans find easy (like perceiving the world, walking, or recognizing a face). This is because our "easy" skills are the product of millions of years of evolution, while formal logic is a recent cultural invention.
    * As a result of these failures, government funding (especially from military sources like DARPA in the US) dried up, and public interest waned. The field went into a period of hibernation.
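Since the lecture singles out A* as Shakey's planning algorithm and notes that it is still in use today, a minimal sketch may help. The grid world, 4-connected movement, and Manhattan-distance heuristic below are illustrative assumptions, not details of Shakey's actual implementation.

```python
# Minimal A* path-finding on a small grid, in the spirit of Shakey's planner.
# The grid, heuristic, and API are illustrative choices, not lecture details.
import heapq

def a_star(grid, start, goal):
    """Return a shortest path from start to goal, or None. grid[r][c] == 1 is a block."""
    def h(cell):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # entries are (f = g + h, g, cell, path)
    best_g = {start: 0}                         # cheapest known cost to each cell
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

room = [  # 0 = free floor, 1 = a block in Shakey's room
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
print(a_star(room, (0, 0), (2, 4)))
```

The heuristic is what distinguishes A* from blind search: by steering exploration toward the goal, it tames, though as the AI winter showed it does not eliminate, the combinatorial explosion of possibilities.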
#### The Knowledge Era and the Second AI Winter (1980-1993)

* **The Knowledge Era (1980-1987):** AI re-emerged with a new focus: **knowledge-based systems**, particularly **expert systems**. The idea was to overcome the common sense problem by manually encoding the knowledge of human experts in a specific, narrow domain into a computer. The lecturer gives the example of **MYCIN**, an expert system that could diagnose bacterial blood infections at a level comparable to human doctors.
* **The Second AI Winter (c. 1987-1993):** Once again, the hype outstripped the reality. Expert systems proved to be "brittle": they worked well within their narrow domain but failed catastrophically when faced with a problem just outside it. Maintaining and updating their vast knowledge bases was also incredibly expensive and difficult (the "knowledge acquisition bottleneck"). The same fundamental problems—scaling and common sense—persisted. Consequently, funding dried up again, AI companies went bankrupt, and the field entered its second major downturn.

#### The Revival Era (1994-Present)

The lecture then describes the current, ongoing "AI summer," which began in the mid-1990s. This revival was not caused by a single breakthrough but by a powerful convergence of three factors:

1. **New Algorithms:** The most important shift was the rise of **machine learning**, particularly the resurgence of **neural networks** and the development of **deep learning**. Instead of having humans explicitly program rules, machine learning systems learn patterns and rules directly from vast amounts of data, a fundamental paradigm shift away from symbolic AI (a minimal perceptron sketch appears below).
2. **Big Data:** The explosion of the **internet and the World Wide Web** provided the massive datasets that these new algorithms needed to be effective. For the first time, there was enough raw material to train powerful models.
3. **Increased Computational Power:** **Moore's Law**—the observation that the number of transistors on a chip doubles roughly every two years—provided the exponential increase in computing power necessary to run these complex algorithms on these huge datasets. This was later augmented by the development of specialized hardware (like GPUs) and cloud computing.

This "three-legged stool" of algorithms, data, and compute power is the foundation of the modern AI revolution.
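To make the paradigm shift in the first factor concrete, here is a minimal sketch of the perceptron mentioned in the Golden Age section, the ancestor of today's neural networks. Rather than a human writing the rule for logical OR, the rule is learned from labeled examples; the toy data, learning rate, and epoch count are our own illustrative choices.

```python
# A minimal perceptron learning a rule from examples instead of having the
# rule hand-coded. Toy task and hyperparameters are invented for illustration.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = y - pred          # perceptron update rule: nudge weights
            w1 += lr * err * x1     # toward examples the model got wrong
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Learn logical OR purely from labeled data:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
for (x1, x2), y in data:
    print((x1, x2), "->", 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0, "(target", y, ")")
```

Nothing in `train_perceptron` mentions OR; the behavior comes entirely from the data. That is the essence of machine learning, and also why, at scale, flaws in the data become flaws in the system.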
The lecturer then poses a critical question to the students: given this history of boom-and-bust cycles, are we due for another AI winter?

### The Hype Cycle: Are We Due for Another AI Winter?

The students engage in a discussion, offering arguments for and against the possibility of a new winter.

* **Arguments for a Winter:** One student suggests that the quality of training data may be becoming a bottleneck, and that current systems are hitting a plateau. Another argues that the current hype around Artificial General Intelligence (AGI) is reminiscent of the overblown promises of past eras, and a failure to deliver could trigger a collapse in confidence and investment. A third student raises the possibility of a public or political backlash against AI due to concerns about job loss or other harms, which could lead to stricter regulation and a slowdown in development.
* **Arguments Against a Winter:** A student points to the intense geopolitical and corporate competition—an "arms race"—as a powerful force that will continue to drive massive investment and research, regardless of short-term setbacks.

The lecturer then contextualizes this debate by presenting a history of AI predictions, many of which have proven to be wildly optimistic and incorrect. From Marvin Minsky predicting human-level AI within a decade in 1970, to Elon Musk repeatedly predicting, from 2014 onward, the imminent arrival of fully self-driving cars, the field has a long track record of over-promising. This leads to a discussion of **Amara's Law**: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." The lecturer suggests that while the short-term hype around AI may be inflated, its long-term transformative potential could be even greater than we currently imagine. The section concludes with quotes from experts warning about the dangers of this hype, with Pedro Domingos's quip being particularly memorable: "People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world." This highlights the immediate, practical danger of deploying flawed, biased, and not-truly-intelligent systems at a massive scale.

## History, Representation, and the Importance of Diversity in AI

The final section of the lecture returns to the theme of representation that was introduced with the Dartmouth conference photograph. It argues that the history of AI, and the technology it produces, cannot be separated from the culture and society in which it is developed.

### Recognizing the Unseen and Overcoming Bias

The lecturer profiles several key figures in the history of computing and AI who do not fit the dominant stereotype of the white, male computer scientist, and who often faced significant barriers.

* **Ada Lovelace (1815-1852):** A woman whose visionary contributions to the concept of general-purpose computing were not fully appreciated for a century.
* **Alan Turing (1912-1954):** A gay man who was a foundational genius of AI and computing but was chemically castrated by the British government for his sexuality and died young. His persecution represents a tragic loss of talent driven by societal prejudice.
* **Grace Hopper (1906-1992):** A pioneering woman in the US Navy who developed the first compiler, a crucial technology that allows programmers to write in higher-level languages. Her ideas were initially not taken seriously in a male-dominated field.
* **Fei-Fei Li (b. 1976):** An immigrant from China who overcame poverty to become a leading researcher in computer vision and a vocal advocate for diversity in AI.
* **Timnit Gebru:** A prominent Black AI ethics researcher who was famously fired from Google after co-authoring a paper that raised concerns about the biases and environmental costs of large language models. Her story highlights the ongoing struggles of marginalized voices within powerful tech corporations.

The lecturer presents data showing the persistent gender imbalance in the AI field, where men vastly outnumber women. This lack of diversity, he argues, is not just a matter of social justice but has direct consequences for the quality and safety of the technology being built.
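A small synthetic experiment can make the mechanism behind this claim concrete. The sketch below is a hypothetical illustration, not a real recognition system: a trivial classifier is fitted to data drawn overwhelmingly from one group, and its error rate is then measured separately for each group. All distributions and numbers are invented.

```python
# A toy, synthetic demonstration of how a training set dominated by one group
# yields higher error for the underrepresented group. All numbers are invented.
import random

random.seed(0)

def sample(group, label, n):
    """Draw n 1-D feature values; group 'B' is shifted relative to group 'A'."""
    shift = 0.0 if group == "A" else 1.5
    centre = (2.0 if label else 0.0) + shift
    return [(random.gauss(centre, 0.7), label) for _ in range(n)]

# Training set: 95% group A, 5% group B.
train = sample("A", 0, 475) + sample("A", 1, 475) + sample("B", 0, 25) + sample("B", 1, 25)

# "Train" the simplest possible model: a threshold halfway between class means.
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

def error_rate(data):
    return sum((x > threshold) != y for x, y in data) / len(data)

# Evaluate on balanced test sets for each group:
test_a = sample("A", 0, 500) + sample("A", 1, 500)
test_b = sample("B", 0, 500) + sample("B", 1, 500)
print(f"group A error: {error_rate(test_a):.1%}")  # low: the model fits group A
print(f"group B error: {error_rate(test_b):.1%}")  # far higher: B is mis-calibrated
```

Because the threshold is fitted almost entirely to group A, group B's error rate comes out several times higher, the same pattern reported for facial recognition systems trained primarily on white faces, discussed below.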
### The Harms of Homogeneity and the Benefits of Diversity

A lack of diversity in design teams can lead to a host of problems, including systems that are unfair, unsafe, inaccessible, or biased. The lecturer cites Virginia Eubanks's book, *Automating Inequality*, which documents how automated decision-making systems in areas like welfare and child protective services disproportionately harm the poor and marginalized, often amplifying existing societal inequalities.

The lecture concludes by exploring *why* diversity leads to better, safer, and fairer products.

1. **Diverse Perspectives and Blind Spots:** People from different backgrounds (gender, ethnicity, socioeconomic status, disability, etc.) bring different lived experiences. A homogeneous team may have "blind spots," failing to consider how a product might affect or fail people unlike themselves. For example, a facial recognition system trained primarily on white faces will have a higher error rate for people of color, a problem a more diverse team is more likely to anticipate and address.
2. **Constructive Skepticism:** Counter-intuitively, research suggests that diverse teams are more rigorous because people tend to be more skeptical of those who are different from them. This "social friction" challenges groupthink, forcing team members to question assumptions and justify their reasoning more thoroughly, leading to a more robust final product.

Therefore, diversity is not only an ethical imperative but also a practical benefit that can lead to better technology and better business outcomes. The history of AI is inextricably linked to the history of culture, and creating a more equitable and beneficial future with AI requires actively including a more diverse group of people in its creation and governance.

## Conclusion and Future Topics

The lecture concludes with a summary of the day's key takeaways: the birth of AI at Dartmouth, the cyclical nature of AI winters caused by unmet hype, the distinct eras of AI development, and the critical importance of understanding AI's history through the lens of societal culture and representation.

Finally, the lecturer provides a roadmap for the coming weeks. The next lecture will delve into the specific harms and benefits of AI and introduce the concept of ethics as a formal discipline. The week after will focus on core philosophical frameworks—Utilitarianism, Deontology, and Virtue Ethics—that provide structured methods for analyzing and resolving the complex ethical dilemmas that artificial intelligence presents.