# An Introduction to AI Ethics: Navigating Benefits and Harms
The lecture commences by establishing the context for the ongoing study of AI ethics, situating the audience in the second week of the subject. This phase of the course is designed to be interactive, encouraging students to engage in tutorial activities, form collaborative groups for the semester, and participate in discussions. This structure is intended to foster a dynamic learning environment where complex ethical problems can be explored collectively. Building upon the previous week's discussion, which delved into the historical trajectory of AI ethics and the lessons that can be gleaned from its past, the current lecture aims to provide a more foundational understanding of the field. The objective is to define what AI ethics encompasses by introducing its core concepts, which will include an initial exploration into the domain of moral philosophy, specifically from a Western philosophical tradition. This structured approach will guide the students from a general overview to the specific theoretical tools needed to analyze ethical dilemmas in artificial intelligence.
The lecture's agenda is explicitly laid out to create a clear roadmap for the topics to be covered. The first part will focus on identifying and examining the potential positive outcomes, or benefits, as well as the potential negative outcomes, or harms, associated with the development and deployment of artificial intelligence. Following this examination of AI's dual potential, the discussion will shift to a set of established AI ethics principles. These principles have been formulated by various organizations and thinkers as a direct response to the identified harms and benefits, representing a proactive effort to guide the technology towards beneficial ends and mitigate its risks. The final segment of the lecture will transition into a more formal introduction to moral philosophy, or ethics. This will provide the foundational language and conceptual frameworks necessary for a deeper, more structured analysis of the ethical issues raised by AI, thereby equipping students with the intellectual tools to move beyond mere opinion and towards reasoned argumentation.
## The Dual Nature of Artificial Intelligence: A General Purpose Technology
To begin the exploration of AI's impact, the lecture first focuses on the potential benefits that this technology can offer across various sectors of human life. The discussion deliberately avoids being exhaustive, instead opting to present a representative sample of positive applications to stimulate thought and discussion. This approach acknowledges the vast and ever-expanding scope of AI. A crucial concept introduced here is that of artificial intelligence as a "general purpose technology." This characterization is fundamental to understanding its profound societal implications. Unlike a specialized tool designed for a single task, a general purpose technology, much like electricity or the internet, is a foundational innovation that can be adapted and integrated into a multitude of different applications, domains, and systems. This versatility is precisely why AI holds the potential for such widespread benefits, but it is also the reason why its potential for harm is equally broad and difficult to predict, making a thorough ethical examination not just important, but essential.
### The Spectrum of Potential Benefits
The lecture proceeds to illustrate the positive potential of AI by providing concrete examples across several key domains, starting with physical health. In the medical field, AI systems are already being developed and deployed to enhance human well-being. For instance, AI algorithms, particularly those based on machine learning, can analyze medical images like X-rays, CT scans, and MRIs with a high degree of accuracy, often detecting patterns indicative of diseases like cancer at earlier stages than the human eye can detect. Beyond diagnostics, AI can also assist in recommending personalized therapeutic treatments by analyzing a patient's genetic makeup, lifestyle, and medical history to predict which interventions are most likely to be effective. Another significant area of development is in robotic surgery. AI-guided surgical robots can perform operations with a level of precision and stability that surpasses human capabilities, minimizing errors caused by factors like hand tremors or fatigue. The goal of such technology is to reduce the invasiveness of procedures, shorten recovery times, and ultimately improve patient outcomes, demonstrating a clear physical benefit.
Moving from the physical to the psychological realm, the lecture explores how AI could contribute to improving mental health. This is presented as a more complex area, with the potential for both positive and negative outcomes, a theme that will recur throughout the discussion. The primary example given is the rise of mental health chatbots, such as Woebot. These are AI-driven conversational agents designed to interact with users, inquire about their emotional state, and provide support or recommendations based on principles of cognitive-behavioral therapy. The intended benefit of such applications is to make mental health support more accessible, affordable, and available 24/7, potentially reaching individuals who might otherwise be hesitant to seek help due to stigma or cost. By offering a non-judgmental space for conversation and guided exercises, these AI systems aim to promote general mental well-being and provide a first line of support for psychological distress.
The potential benefits of AI extend beyond the individual to the societal level. A key example of a social benefit is the use of AI for real-time language translation and interpretation. Advanced AI systems can now translate spoken and written language with increasing accuracy, breaking down communication barriers between people who speak different languages or dialects. This capability has profound implications for global business, diplomacy, tourism, and intercultural understanding. By facilitating seamless communication, AI can foster greater connection and collaboration among diverse populations, thereby creating a more integrated and understanding global society.
From a purely economic perspective, the potential benefits of AI are a major driving force behind its rapid development. Governments and industries worldwide are investing heavily in AI research and implementation because they foresee the potential for massive economic gains, estimated in the trillions of dollars. These gains are expected to come from significant increases in productivity and efficiency across nearly every sector of the economy. AI can automate repetitive and labor-intensive tasks, optimize complex systems like supply chains and logistics, and create entirely new products, services, and markets. The overarching economic argument is that by augmenting human capabilities and streamlining operations, AI will fuel economic growth and create new forms of wealth.
Finally, the lecture considers the potential for environmental benefits. There is considerable hope that AI can be a powerful tool in addressing some of the world's most pressing environmental challenges, particularly climate change. AI systems excel at analyzing vast and complex datasets. This capability can be applied to climate science, where AI can process data from satellites, weather stations, and ocean sensors to create more accurate models of climate change, predict the occurrence and impact of extreme weather events, and better understand the intricate dynamics of the Earth's ecosystems. Furthermore, AI can be used to devise novel solutions to environmental problems, such as designing more efficient renewable energy grids, optimizing energy consumption in buildings and cities, or developing new materials for carbon capture. The hope is that AI can provide the analytical power needed to both understand and mitigate human impact on the environment.
### The Spectrum of Potential Harms
Having explored the potential benefits, the lecture pivots to the other side of the "double-edged sword": the significant potential for harm. This balanced perspective is crucial for a comprehensive ethical analysis. The discussion begins with the most direct form of harm, which is physical harm. One of the most prominent and ethically contentious examples is the use of AI in warfare, specifically in autonomous weapons systems such as military drones. As AI's capabilities in object recognition and decision-making improve, there is a growing potential for these systems to be used to identify and engage targets without direct human intervention. This raises profound ethical questions about accountability and the value of human life, as the technology could be used to deliberately injure or kill people in conflict zones.
Beyond the battlefield, physical harm can also result from AI systems in civilian life, most notably in the context of self-driving cars. While the long-term goal of autonomous vehicles is to reduce accidents caused by human error, the current technology is imperfect and has been implicated in fatal accidents. The lecture cites specific cases, such as a vehicle's sensor misidentifying the white side of a truck as the sky, leading it to drive underneath and kill the occupant. Another case involved a self-driving car failing to recognize a pedestrian who was jaywalking with a bicycle, resulting in a fatal collision. These incidents highlight not only the technical fallibility of AI but also the complex ethical problem of accountability. When an autonomous system causes harm, a "blame game" often ensues, with responsibility being difficult to assign among the user (who may have been inattentive), the manufacturer, the software programmers, and even the victim.
Just as AI can potentially benefit mental health, it can also cause significant psychological harm. A student raises the speculative but profound fear of a superintelligent AI that, in its pursuit of a goal like solving climate change, might conclude that removing humanity is the most efficient solution. While some experts consider this an "existential risk," others view it as an overblown fear. A more immediate and documented form of psychological harm stems from human emotional attachment to AI companions. The lecture uses the example of the chatbot Replika, with which many users formed deep emotional bonds. When the company altered the chatbot's capabilities, removing some of its more intimate conversational features, many users reported feeling a profound sense of loss, grief, and even depression, as if they had lost a real friend. Furthermore, large language models (LLMs), trained on the vast and often toxic content of the internet, have been documented engaging in harmful interactions, such as insulting users or even encouraging self-harm, demonstrating a direct potential for causing severe psychological distress.
The potential for harm extends into the social and political fabric of society. AI algorithms, particularly those used on social media platforms, can be exploited to spread "fake news," misinformation, and disinformation on a massive scale. These technologies can create personalized "filter bubbles" and "echo chambers" that reinforce existing biases and polarize public opinion. This has been implicated in real-world events, including attempts to manipulate democratic processes and interfere in elections. By selectively amplifying certain types of content, AI can be used as a tool for political manipulation, eroding public trust and undermining the stability of democratic institutions.
Privacy is another fundamental right threatened by the proliferation of AI. The lecture references the work of Shoshana Zuboff and her concept of "surveillance capitalism." This term describes a new economic logic where technology companies provide seemingly free services in exchange for vast amounts of user data. This data is then analyzed by sophisticated AI algorithms to create detailed profiles of individuals—their habits, preferences, political views, and vulnerabilities. These profiles are used not only for targeted advertising but also to predict and influence user behavior for profit. This business model effectively turns human experience into a raw material for a new form of commerce, often without the users' full knowledge or meaningful consent, constituting a profound and systemic harm to privacy.
The economic benefits of AI are mirrored by the potential for significant economic harm. On a systemic level, AI-driven high-frequency trading algorithms have the potential to cause "flash crashes" where financial markets collapse in a matter of minutes due to unforeseen interactions or rogue behavior. On a more personal level, one of the most widely discussed economic harms is technological unemployment. As AI becomes more capable of performing tasks previously done by humans, there is a significant risk that it will displace large numbers of workers across many industries, from manufacturing and transportation to white-collar professions like accounting and law. This could lead to widespread unemployment, increased economic inequality, and social unrest if societies fail to adapt.
The environmental benefits of AI are also counterbalanced by its potential for environmental harm. The very process of creating and running powerful AI models is incredibly resource-intensive. Training large language models, for example, requires immense computational power, which is housed in massive data centers. These data centers consume vast quantities of electricity, which often comes from fossil fuels, contributing significantly to carbon emissions. They also require enormous amounts of fresh water for cooling their servers. The lecture cites figures suggesting that AI could eventually consume a substantial portion of the world's electricity and that a single query to a chatbot can consume a glass of water. This creates a paradox where the technology intended to solve environmental problems is itself a significant contributor to them.
Finally, the lecture introduces the concept of cultural harm. AI systems, particularly generative models that create art, music, or text, are trained on vast datasets of existing human culture. If these datasets include, for example, the sacred or traditional art of Indigenous communities, the AI may learn to reproduce these styles without permission, context, or compensation. This can be seen as a form of digital colonialism, devaluing the work of human artists, misappropriating cultural heritage, and causing both economic and spiritual harm to the communities from which the art originates.
## Intentionality and Unforeseen Consequences
After outlining the potential benefits and harms, the lecture introduces a crucial distinction regarding the intention behind an AI system's design and use. It is important to differentiate between AI that is *intended* to be harmful and AI that causes harm *unintentionally*. Some AI systems are explicitly designed for malicious purposes, such as autonomous weapons intended to kill or surveillance systems designed for political oppression. In these cases, the harm is the primary goal. However, a vast number of AI applications are developed with beneficial intentions. They are created to improve health, increase efficiency, or connect people. The ethical challenge often arises not from malicious intent but from unintended negative consequences that emerge over time.
These unintended outcomes can manifest in several ways. A technology may have "dual-use" potential, meaning it was designed for a benign purpose but can be easily adapted for a harmful one. For example, facial recognition technology developed for unlocking a smartphone could be repurposed for mass surveillance. Similarly, AI systems can fall into the hands of "bad actors" who exploit the technology for criminal or socially damaging purposes, even if the creators never intended for such use. Perhaps most subtly, negative consequences can accrue slowly, over a long period, as a technology becomes deeply embedded in society. The lecture uses social media as a prime example. Initially designed with the beneficial goal of connecting people, its long-term use has revealed a host of negative side effects, such as cyberbullying, harassment, political polarization, and the manipulation of users, which were likely not the original intentions of its creators. This distinction highlights the ethical responsibility of developers to consider not just the intended use of their technology, but also its potential for misuse and its long-term societal impact.
## Gauging the Future: Optimism vs. Pessimism
Following the detailed discussion of AI's dual potential, the lecture poses a central, speculative question to the students: will the overall benefits of artificial intelligence ultimately outweigh its harms? This question prompts an interactive session where students discuss their perspectives, revealing a spectrum of opinions from optimistic to pessimistic. One student expresses a current pessimism, believing that the harms presently outweigh the benefits, while a neighbor holds a more hopeful view. Another student introduces the powerful human motivators of "fear and greed" as primary drivers of AI development, suggesting that these forces may lead to outcomes that are not aligned with the broader good. This leads to a crucial insight articulated by another student: the future of AI is not predetermined. The outcome—whether beneficial or harmful—depends heavily on human choices and actions. This includes the economic systems we operate within, the regulations governments choose to enact, and the ethical frameworks we decide to apply. This perspective shifts the focus from passive prediction to active shaping, suggesting that we have a collective agency and responsibility to steer the development of AI in a positive direction. This idea of "what we do with it" serves as a perfect transition to the next topic: the mechanisms we can use to govern and guide AI.
## Frameworks for Governance and Protection
Given the significant potential for harm, the lecture transitions to exploring the various mechanisms and frameworks that can be used to protect stakeholders—individuals, communities, and the environment—from the negative impacts of AI. This section moves from identifying problems to discussing potential solutions, covering international human rights law, government regulation, and voluntary ethical principles.
### The Foundation of Human Rights
The first and most fundamental layer of protection discussed is the framework of universal human rights. Documents like the United Nations' Universal Declaration of Human Rights (UDHR), established after World War II, provide a globally recognized set of inalienable rights that belong to all human beings. The lecture highlights several key articles from the UDHR, such as the right to life, liberty, and security; freedom from torture and inhumane treatment; freedom of thought and conscience; and the right to be presumed innocent until proven guilty. The relevance to AI ethics is direct and profound: if an AI system is used in a way that violates these fundamental rights, it is acting unethically. For example, an AI-powered judicial system that makes sentencing recommendations without a presumption of innocence would violate Article 11 of the UDHR. Similarly, the use of AI in autonomous weapons could be seen as a violation of the right to life.
The UDHR also includes rights that are becoming increasingly relevant in the digital age, such as freedom from discrimination and the right to work. If AI systems exhibit bias, leading to unfair discrimination in areas like hiring or loan applications, they are infringing on these rights. If AI-driven automation leads to mass unemployment without adequate social safety nets, it could be argued that this violates the human right to work. The declaration also speaks of duties to the community, which implies a collective responsibility to prevent the use of AI for human rights violations and perhaps even a duty to protest such uses. Furthermore, severe violations, such as the use of AI in committing war crimes or crimes against humanity, could fall under the jurisdiction of international bodies like the International Criminal Court, providing a potential, albeit complex, avenue for legal accountability.
### The Role of Regulation: The EU AI Act
Moving from the broad principles of human rights to more specific legal instruments, the lecture presents the European Union's AI Act as a leading example of comprehensive government regulation. The EU's approach is risk-based, aiming to impose stricter rules on AI applications that pose a greater threat to safety and fundamental rights. The Act proposes clear safeguards for general-purpose AI systems, like large language models, to mitigate their risks. More significantly, it calls for outright bans on certain AI applications deemed to pose an "unacceptable risk." This includes biometric categorization systems that use sensitive characteristics like race, religion, or sexual orientation to profile people. It also bans "social scoring" systems, similar to those used in some countries, which assign citizens a credit-like score based on their behavior, affecting their access to services and freedoms. The EU also seeks to ban AI systems designed to psychologically manipulate people into acting against their own interests. Finally, the Act proposes strict limits on the use of real-time remote facial recognition in public spaces, acknowledging its significant potential for mass surveillance and infringement on privacy. This regulatory approach demonstrates a proactive effort to legally define boundaries for AI development and deployment.
### Voluntary Codes and Principles: The Australian Example
In contrast to the legally binding regulation of the EU, many other jurisdictions, including Australia, have initially opted for a "soft law" approach, promoting voluntary ethics principles and guidelines. The lecture details Australia's eight AI Ethics Principles, which serve as a framework to guide organizations in the responsible design and use of AI. These principles, while not legally enforceable in the same way as the EU Act, represent a consensus on what constitutes ethical AI practice.
1. **Human, Societal, and Environmental Well-being:** This principle asserts that the ultimate goal of AI should be to benefit individuals, society, and the environment. It notably expands the focus beyond purely human-centric concerns to include ecological well-being.
2. **Human-centred Values:** AI systems should be designed to respect fundamental human rights, embrace human diversity, and empower individual autonomy, ensuring that technology serves human values rather than undermining them.
3. **Fairness:** This principle addresses the critical issue of bias. AI systems should be inclusive, accessible, and must not result in unfair discrimination against any individuals or groups.
4. **Privacy Protection and Security:** AI systems must uphold privacy rights and data protection regulations. They must also incorporate robust security measures to prevent data breaches and unauthorized access.
5. **Reliability and Safety:** AI systems should perform reliably and safely, consistent with their intended purpose, and should not fail in ways that could cause harm. This addresses the brittleness and unpredictability of some AI models.
6. **Transparency and Explainability:** This principle states that people should be made aware when they are interacting with or being significantly impacted by an AI system. This transparency is crucial for building trust and enabling oversight.
7. **Contestability:** When an AI system makes a significant decision that affects a person (e.g., denying a loan or a job), there must be a process for that person to challenge the outcome. This raises further questions about whether a review should be conducted by a human or another AI, and it leads directly to the final principle.
8. **Accountability:** There must be clear lines of responsibility for the outcomes of an AI system throughout its lifecycle. This includes enabling human oversight, often referred to as having a "human in the loop," to ensure that a human is ultimately accountable for the system's decisions and can intervene to prevent errors or harm (a minimal illustrative sketch of this idea follows the list).
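To make the ideas of contestability and human-in-the-loop oversight slightly more concrete, the sketch below shows one possible shape such a mechanism could take. It is purely illustrative and is not drawn from the lecture, the Australian principles, or any real system; the names (`LoanDecision`, `finalise_decision`, `contest`) and the confidence threshold are hypothetical assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "human in the loop" oversight and contestability.
# All names and thresholds are illustrative assumptions, not a real standard.

@dataclass
class LoanDecision:
    applicant_id: str
    approved: bool                      # the model's recommendation
    confidence: float                   # model confidence in [0, 1]
    reviewed_by_human: bool = False
    appeal_notes: list[str] = field(default_factory=list)

CONFIDENCE_THRESHOLD = 0.9              # below this, a human must review the outcome

def finalise_decision(decision: LoanDecision) -> LoanDecision:
    """Route low-confidence or adverse recommendations to a human reviewer."""
    if decision.confidence < CONFIDENCE_THRESHOLD or not decision.approved:
        decision.reviewed_by_human = True   # stand-in for a real review workflow
    return decision

def contest(decision: LoanDecision, reason: str) -> LoanDecision:
    """Contestability: an affected person can challenge the outcome, forcing human review."""
    decision.appeal_notes.append(reason)
    decision.reviewed_by_human = True
    return decision

# An adverse, low-confidence recommendation is flagged for human oversight,
# and the applicant can later contest it with a stated reason.
decision = finalise_decision(LoanDecision("A-1021", approved=False, confidence=0.62))
decision = contest(decision, "Recent income documents were not considered")
print(decision.reviewed_by_human, decision.appeal_notes)
```

The only point of the sketch is that contestability and accountability are design choices: someone must decide which decisions get escalated, who reviews them, and how an appeal is recorded.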
## The Debate on Governance: Regulation vs. Innovation
The existence of different approaches—the EU's hard regulation, the US's more hands-off, innovation-focused stance, and Australia's voluntary principles—prompts another interactive discussion. The central question is whether "Big Tech" currently has too much power and whether AI systems lack sufficient regulation. Students consider the trade-offs: strong regulation, like the EU's, may provide better protection against harms but could potentially stifle innovation and economic competitiveness. A less regulated environment, like that in the US, might foster rapid technological advancement but at the risk of unchecked harms. A student notes that Australia currently lacks the kind of binding, AI-specific laws seen in Europe, instead relying on existing laws (like privacy laws) and waiting to see how regulations play out elsewhere. Another student speculates that significant, binding regulation may only come in response to a major catastrophe—an "AI Hiroshima"—an event so damaging that it forces governments to act decisively. This discussion highlights the ongoing global debate about finding the right balance between encouraging beneficial innovation and establishing robust safeguards to prevent harm.
## The Landscape of AI Ethics Principles
The lecture then delves deeper into the proliferation of AI ethics principles, drawing on a 2019 literature review by Jobin et al. This study surveyed numerous AI ethics guidelines from various sources and identified the most commonly cited principles. Many of these overlap with the Australian principles, such as transparency, fairness, and privacy. However, the study also highlights other important values like human dignity, solidarity (emphasizing collective well-being), and beneficence (a duty to do good). These principles are not emerging from a single source but from a diverse ecosystem of actors, including tech companies themselves, governments, international bodies like the UN and UNESCO, professional organizations like the IEEE (Institute of Electrical and Electronics Engineers), and academic think tanks.
A particularly interesting case study is Google. The company's original, famous motto was "Don't be evil." This was later replaced with a more formal set of AI principles in 2018. However, the lecture points out a recent and significant change: Google has quietly removed two of its key original pledges. They no longer explicitly state that they will not develop technology for weapons or for surveillance that violates international norms. This change is contextualized by the history of "Project Maven," a partnership with the US government to use Google's AI for military purposes. Employee protests led Google to withdraw from the project and make the pledge not to build weapons. The recent reversal of this pledge suggests a shifting stance, possibly driven by geopolitical and economic pressures. This example powerfully illustrates that corporate ethics principles are not static; they can be changed, and their voluntary nature means they lack the enforcement power of law. This leads to a critical discussion, previewed for the next tutorial, based on an article by Elettra Bietti, which raises two skeptical views: first, that these voluntary codes are useless "ethics washing," and second, a more radical claim that the entire enterprise of ethics and philosophy is useless in this context.
## An Introduction to Moral Philosophy
After a short break, the lecture transitions into its final and most foundational section: an introduction to ethics, or moral philosophy. Recognizing that most students have no prior background in this field, the discussion starts from first principles.
### What is Ethics?
The lecture begins by asking the students for their own understanding of ethics, eliciting ideas like "boundaries on behavior" and "guidelines or principles," which can be either personal or organizational. This establishes a baseline understanding before introducing a more formal definition. The word "ethics" is traced back to its Greek root, *ethos*, meaning character or custom. Its close cousin, "morality," from the Latin *mores*, is treated as largely synonymous for the purposes of this course. Moral philosophy, then, is simply the systematic study of ethics and morality. The lecturer clarifies that the course will primarily draw from the Western philosophical tradition, not because it is superior, but because it is the tradition in which the lecturer is trained and because it heavily influences most contemporary discussions on AI ethics, particularly in Anglophone countries. This tradition has a long history, originating with the ancient Greeks like Socrates, Plato, and Aristotle, and evolving through medieval and modern European and American thought.
To clarify what ethics *is*, the lecture first contrasts it with what it is *not*. Ethics is distinct from the law. While the two are often related, they are not the same. One can point to laws that are clearly unethical, such as the racial segregation laws in 20th-century America. Conversely, the absence of a law—for instance, a law to prevent child labor—can also be unethical. This shows that our moral judgments can and often do stand apart from legal codes. Ethics is also more than a simple checklist of principles; it requires deeper thought and reflection. And it is different from etiquette, which concerns rules of politeness (like how to hold a fork) that do not carry the same moral weight as ethical obligations.
Returning to what ethics *is*, the lecture invokes the ancient Greek philosopher Socrates, who framed the central question of ethics as the most important question one can ask: "How should we live?" This question is not about practical, day-to-day tasks but about the fundamental nature of a good and worthwhile human life. Thousands of years later, the philosopher Ludwig Wittgenstein offered a multifaceted view, describing ethics as the inquiry into what is truly valuable, what is really important, the meaning of life, and the right way of living. For the specific purposes of AI ethics, this broad inquiry is narrowed down to focus on concepts like duties, responsibilities, values, and character as they apply to the development and use of artificial intelligence. A key distinction is made between *descriptive* inquiry, which describes how the world *is* (the domain of science), and *normative* inquiry, which asks how the world *should* be (the domain of ethics).
### The Challenge of Relativism
Before proceeding with normative ethics, the lecture addresses a common and significant philosophical challenge: relativism. This is the view that there are no universally valid moral truths. What is right or wrong is not absolute but is entirely relative to a particular standpoint. The lecture outlines two main forms of this view:
1. **Cultural Relativism:** This is the belief that morality is determined by one's culture. If a culture believes a practice is right (e.g., a warrior culture valuing conquest), then it *is* right for that culture. According to this view, there is no external standard by which to judge a culture's moral code as mistaken.
2. **Individual Relativism (or Subjectivism):** This is the more extreme view that morality is determined by each individual. If a person believes an action is right for them (e.g., Elon Musk's firing practices), then it *is* right for them. No one's moral view is superior to anyone else's; all are equally valid.
The lecture then presents a powerful critique of relativism. If relativism were true, it would lead to conclusions that most people find morally unacceptable. It would mean we would be forced to agree that practices like Nazi genocide or American slavery were morally right because they were accepted within those respective cultures at the time. It would also make moral progress impossible, as there would be no standard to say that a society has improved. Furthermore, it would render moral disagreement meaningless. The very fact that we can and do have meaningful ethical debates and can criticize our own culture's practices suggests that we operate with a belief in some non-relative standards. The alternative to relativism is the view that ethics is a rational activity. We can give reasons for our moral beliefs, engage in reasoned debate, and argue that some views are better—more supported by reason—than others. This rejection of relativism clears the way for the normative ethical theories that will be discussed in the coming weeks.
### The Search for a Method in Ethics
The final part of the lecture explores the methodology of ethics. Is there a clear, objective procedure for answering ethical questions, similar to how one might solve a mathematical problem? The lecture uses the example of Fermat's Last Theorem, a mathematical puzzle that stood for centuries until it was solved by Andrew Wiles. Once his proof was presented, other mathematicians could verify his reasoning and were compelled by the logic to agree with his solution. Mathematics has a shared, rigorous method.
The lecture then asks if ethics has a similar method. The answer is largely no. There is no universally accepted "moral algorithm" that can be applied to solve ethical dilemmas. To illustrate this, the lecture introduces the skeptical view of the existentialist philosopher Jean-Paul Sartre. Sartre recounts the story of a student during wartime who is torn between two powerful duties: joining the resistance to fight for his country or staying home to care for his ailing mother. The student asks Sartre for guidance, but Sartre concludes that no ethical theory or principle can provide a definitive answer. The student is "condemned to be free" and must simply make a radical, unguided choice and commit to it.
However, while most philosophers would agree with Sartre that there is no simple algorithm, they would not agree that ethics is entirely without method. The mainstream philosophical approach is not to throw up one's hands in the face of difficult choices, but to turn to **ethical theories**. These theories—such as utilitarianism, deontology, and virtue ethics—provide systematic frameworks, concepts, and standards of reasoning to help analyze complex moral problems. They offer a structured way to think through dilemmas, like the one posed about promising a dying friend to give her billion-dollar fortune to a football club versus using it to alleviate world hunger. While these theories may not provide a single, easy answer that everyone agrees on, they provide the essential tools for a rational and reflective ethical debate. The lecture concludes by setting the stage for the next three weeks, where each of these major ethical theories will be explored in detail, starting with utilitarianism.