## Introduction and Course Context
The lecture commences with the speaker, Phillip Dad, welcoming the attendees to the second lecture of the course. He briefly addresses some logistical matters, including a technical issue, experienced in the previous session, with the polling software "Poll Everywhere". This issue serves as a practical, real-world example of a concept central to software development: identifying and responding to bugs.
### A Practical Example of a Software "Hack"
The speaker explains that the polling software provider has implemented a "hack" to temporarily fix a known bug. A "bug" in a software context is an error, flaw, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. The bug in this case manifested under specific circumstances, preventing the polls from being displayed correctly. The provider, aware of the problem but not yet having a permanent solution, released a "hack." A hack, in this context, is not a malicious security breach but rather a quick, often crude, and temporary solution to a problem. It is a workaround that addresses the immediate symptoms of the bug without necessarily fixing the underlying cause.
The speaker points out the visible evidence of this hack: the presentation slide is reduced in size and not scaled correctly, making it fit on the screen but in a visually imperfect way. This situation is presented as an interesting case study in itself. It demonstrates a common scenario in software maintenance where a team understands the requirements (the poll should be visible), identifies a clear bug (it is not visible), and, while working on a proper fix, releases an interim solution to restore core functionality for users. This highlights the trade-offs that are often made between perfection and practicality in the software industry. The speaker then confirms that interactive exercises and question-and-answer sessions will be part of the lecture, encouraging student participation.
### Recapping the Nature of Projects and Success
Before delving into the new material, the speaker briefly recaps the topics from the first week. The core concepts discussed were the definition of a "project" and the nuanced meaning of project "success" versus "failure." A project is a temporary endeavor undertaken to create a unique product, service, or result. It has a defined beginning and end, and therefore defined scope and resources.
The recap emphasizes that traditional metrics for success, such as adhering strictly to the initial budget and timeframe, are not always straightforward or well-defined. The speaker adds a new, crucial insight from his personal experience: in the context of large, particularly public, projects, there is immense political and reputational pressure to declare the project a "success" regardless of the actual outcome. Even if a project significantly exceeds its budget or schedule, the stakeholders involved—the sponsors, managers, and organizations who proposed and ran it—have a vested interest in framing the narrative positively. They will often highlight the delivered product and downplay the overruns to protect their reputations and justify the expenditure. This creates a subjective element to project success, where the official declaration may differ from an objective assessment. The speaker encourages students to develop their own critical judgment to evaluate a project's success based on all available evidence, not just the official pronouncements. The recap also mentions a review of project management methodologies and the "Project Charter," a document that formally authorizes a project and outlines its objectives and key stakeholders.
## The Rationale for Studying Software Development History
The lecture's main agenda is then outlined. The focus will be on software processes and the historical evolution of software development life cycles (SDLCs). An SDLC is a framework that defines the tasks performed at each step in the software development process. The speaker clarifies that this historical review is not merely an academic exercise. It serves two primary, practical purposes.
First, understanding the history of SDLCs reveals the evolutionary path that led to the current dominance of "Agile" methodologies. Agile is a modern approach to software development that emphasizes flexibility, collaboration, and responding to change. By examining the preceding models, one can see how specific elements and ideas that are now central to Agile were gradually introduced over time as responses to the limitations of earlier methods. This historical context provides a deeper understanding of *why* Agile works the way it does.
Second, the speaker stresses that new methodologies do not necessarily render old ones obsolete. The evolution of SDLCs is not a simple replacement of inferior methods with superior ones. Instead, it is an expansion of a "toolbox" of approaches. Each historical model, from the most rigid to the most flexible, has its own distinct strengths and weaknesses. In the professional world, it is rare for an organization to adopt a single, "off-the-shelf" methodology in its pure form. More commonly, experienced teams create "hybrid" models, combining elements from different approaches to suit the specific needs of their project, organization, and client. Therefore, understanding the entire spectrum of historical models equips a professional to make informed decisions about which process, or combination of processes, is most appropriate for a given situation.
The lecture will specifically focus on the Agile family of methodologies, with a deep dive into "Scrum," one of the most popular Agile frameworks. To ground these concepts in a real-world context, the lecture will also include a case study of how PayPal, a large and established organization, introduced an Agile approach. The speaker notes that the slide deck for this case study is extensive, and he may not cover it all in detail, assigning some of it as reading material.
## Foundational Concepts: Process, Defined vs. Empirical
To build a solid foundation for understanding different SDLCs, the speaker starts with the most fundamental concept: "process." He provides a dictionary definition: "a series of actions or steps taken in order to achieve a particular end." This definition is then applied directly to software development. Any act of creating software, from a simple student assignment to a massive enterprise system, inherently follows a process because it involves a sequence of steps, even if those steps are not formally planned or documented.
An "ad hoc" process is one that is improvised, created on the fly without a formal structure or external guidance. While it is a type of process, the lecture will focus on more structured and intentional approaches. Specifically, the speaker introduces a critical distinction between two fundamental types of processes: defined processes and empirical processes.
### The Dichotomy of Defined and Empirical Processes
A **defined process** is a process with a well-defined, explicit, and fixed set of steps. The core philosophy behind a defined process is *repeatability* and *predictability*. The underlying assumption is that if you start with the same inputs and meticulously follow the prescribed steps, you will consistently produce the same outputs. This model is heavily influenced by manufacturing and traditional engineering, where consistency and control are paramount. For a defined process to be effective, the environment must be stable, the requirements must be well-understood, and the steps must be clear and executable without significant variation. The goal is to minimize deviation and guarantee a predictable outcome.
In contrast, an **empirical process** is one that is based on observation, experience, and experimentation. The term "empirical" means "based on, concerned with, or verifiable by observation or experience rather than theory or pure logic." This type of process acknowledges that many endeavors, including software development, are too complex and unpredictable to be fully planned out in advance. Instead of relying on a fixed plan, an empirical process operates in a cycle of transparency, inspection, and adaptation. You perform a small amount of work, observe the outcome (inspect), and then adjust your plan and actions based on what you have learned (adapt). This approach is designed to handle uncertainty and unexpected events, allowing for flexibility and continuous learning. It prioritizes responding to change over following a rigid plan.
It is important to understand that these two process types represent two ends of a spectrum, not a strict binary choice. Most real-world processes contain elements of both.
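To make the distinction concrete, here is a minimal, hypothetical Python sketch; it is not from the lecture, and the step functions, target value, and convergence rule are invented purely for illustration.

```python
import random

# Hypothetical illustration (not from the lecture) of the two process types.

def defined_process(start):
    """Defined: a fixed, prescribed sequence of steps.
    The same input run through the same steps always yields the same output."""
    value = start
    for step in (lambda v: v + 10, lambda v: v * 2, lambda v: v - 3):
        value = step(value)
    return value

def empirical_process(start, target):
    """Empirical: short cycles of doing a little work, inspecting the result,
    and adapting the next action to what was just observed."""
    value = start
    while abs(value - target) > 1:             # inspect: how far off are we?
        planned_step = (target - value) * 0.5  # adapt: plan the next small step
        effect = random.uniform(0.5, 1.5)      # the environment is unpredictable
        value += planned_step * effect         # do a small increment of work
    return value

print(defined_process(5))        # always 27
print(empirical_process(5, 27))  # the path differs each run, but ends near 27
```

The arithmetic itself is unimportant; what matters is the shape of the two routines. The defined version can be planned completely in advance, whereas the empirical version cannot, because each step's outcome is only known once it has been observed.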
### The Role of the Process Model
The speaker then introduces the concept of a **process model**. A process model is a simplified, abstract description of a process. It is not the process itself, but rather a blueprint, a map, or a set of instructions that describes the essential steps, their order, and their relationships. By abstracting away the minute details of a proven practice, a process model makes that practice understandable, communicable, and repeatable by others. It provides a framework that guides people on how to apply a particular process to achieve their desired goals. The various SDLCs that will be discussed, such as Waterfall and Agile, are all examples of process models.
## Illustrating Process Types with Real-World Analogies
To make the abstract distinction between defined and empirical processes more concrete and intuitive, the speaker provides two powerful real-world analogies.
### The McDonald's Burger: A Defined Process in Action
The process of assembling a standard McDonald's burger is presented as a quintessential example of a **defined process**. Every step is meticulously standardized: the type of bun, the specific condiments, their exact placement, and the order of assembly are all prescribed. The kitchen staff are trained to execute this sequence of steps repeatedly and consistently. The result is that a Big Mac purchased in Melbourne is, ideally, identical to one purchased in New York. This high degree of repeatability is essential to the McDonald's business model, which relies on delivering a predictable product at massive scale and high speed. The speaker points out that if every burger required a detailed, custom consultation with the customer, the entire operation would collapse. The success of the system depends on the rigidity and efficiency of its defined process.
### Navigating an Aircraft: An Empirical Process in Practice
In contrast, the task of a pilot navigating an aircraft from a starting point (A) to a destination (B) is used to illustrate an **empirical process**. While there is a defined component—the flight plan, which often follows a standard, optimized route like a great circle—the pilot cannot simply follow this plan mechanically. The environment is dynamic and unpredictable. The pilot must constantly *observe* current conditions, such as weather patterns. If a severe storm appears on the planned route, the pilot will not fly into it. They will *inspect* the situation and *adapt* their course, flying around the storm to ensure safety. In extreme cases, they might even decide to turn back. This continuous cycle of observation and adjustment in response to unexpected events is the hallmark of an empirical process. The pilot is not just executing a plan; they are using the plan as a guide while actively managing a complex, changing system. This example effectively shows how a process can have both defined and empirical elements, but the need to react to unforeseen circumstances pushes it towards the empirical end of the spectrum.
## The Software Development Life Cycle (SDLC)
With the foundational concepts of process types established, the discussion now narrows to their application in software development. A **software process** is a specific type of process model that dictates the activities to be performed and the timing of those activities (whether time-based or event-triggered) for the purpose of building a software product. The ultimate aim of a good software process is to ensure that the final product not only functions correctly but also genuinely meets the customer's needs and provides value.
The speaker makes a critical point that having a good process does not eliminate the need for good project management. A process provides the "how," but a manager is still needed to ensure the "when" and "with what resources." The manager's role is described as an active, continuous effort of "nudging the project back on track." Projects naturally tend to deviate from their plan due to unforeseen problems or changing circumstances. The manager's job is to constantly monitor the project's health, detect these deviations early, and take corrective action to guide it back towards its goals of being on time and on budget.
### The Universal Activities of Software Development
Regardless of the specific process model chosen, any software development effort will inevitably involve a set of fundamental activities. These are often visualized in a Software Development Life Cycle (SDLC) diagram. The speaker presents a diagram showing these core activities:
1. **Requirements:** This phase involves identifying, documenting, and understanding what the stakeholders (customers, users, etc.) want the product to do. It defines the system's functionality, features, and constraints.
2. **Analysis:** This involves a deeper examination of the requirements to understand their implications, resolve ambiguities, and model them precisely. This is particularly crucial for complex systems with stringent non-functional requirements, such as performance, security, or safety.
3. **Design:** This is the "how" phase. It involves planning the software's architecture, breaking the system down into smaller, manageable components (subsystems), defining the interfaces between them, and making key technical decisions.
4. **Implementation:** This is the actual coding phase, where developers write the source code to build the components as specified in the design.
5. **Testing:** This phase involves verifying that the implemented software works as intended and meets the specified requirements. It includes various levels of testing, from individual components to the integrated system.
6. **Evolution (or Maintenance):** This phase occurs after the initial release of the software. It encompasses all post-delivery activities, such as fixing bugs, making enhancements, adding new features, and eventually, retiring the system.
The speaker emphasizes a crucial point: this list of activities does not imply a strict sequential order. All SDLC models, including modern Agile ones, must address all of these activities in some form. The key difference between models lies in *how* they organize, sequence, and interleave these activities. For example, a traditional model might perform all requirements gathering upfront, while an Agile model might revisit requirements in every development cycle.
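One way to visualise that difference is the following sketch, which is purely illustrative and not taken from the lecture: the activity names come from the list above, while the `perform` function and the increment labels are invented placeholders. Both functions address the same six activities; the Waterfall-style function runs each one once for the whole system, while the Agile-style function revisits all of them in every short cycle for a small slice of scope.

```python
# Illustrative sketch only: same activities, different sequencing.

ACTIVITIES = ["requirements", "analysis", "design",
              "implementation", "testing", "evolution"]

def perform(activity, scope):
    print(f"{activity:>14} -> {scope}")

def waterfall(system):
    """Each activity happens once, in order, for the entire system."""
    for activity in ACTIVITIES:
        perform(activity, scope=system)

def agile(system, iterations):
    """The same activities are revisited in every iteration,
    each time for a small increment of the overall system."""
    for i in range(1, iterations + 1):
        for activity in ACTIVITIES:
            perform(activity, scope=f"{system}, increment {i}")

waterfall("payroll system")
agile("payroll system", iterations=3)
```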
## The "Code and Fix" Model and the Software Crisis
To begin the historical journey through SDLCs, the speaker starts with the most primitive and informal process, which he illustrates with an analogy that is likely familiar to the students.
### An Analogy: The Student Programming Assignment Process
A process model for completing a student programming assignment is presented, half-jokingly. The steps are:
1. Read the assignment specification (noted as "optional," a humorous nod to a common student pitfall).
2. Write some code ("coding").
3. Compile the code. If the compiler reports errors, go back to step 2 and fix them.
4. Once the code compiles without errors, test it using the single example input and output provided by the lecturer.
5. If the output matches the example, the work is considered "done." No further testing is performed.
6. Submit the assignment.
A poll is conducted on whether this process "works." The results are mixed: yes, no, and sometimes. The speaker concludes that this process might indeed work for very simple, first-year-level exercises. However, for more complex projects, it is fraught with problems. The key issues identified are:
* **Incomplete Testing:** The program might work for the one provided test case but fail on other, more complex or edge-case inputs that the lecturer will use for marking (a minimal example follows this list).
* **Ignoring Non-Functional Requirements:** The assignment specification might include requirements beyond simple input-output behavior, such as the use of specific design patterns, coding standards, or architectural principles. The "code and fix" approach, focused solely on passing one test, often overlooks these.
* **Poor Quality:** The resulting code may be poorly designed, badly structured, and lack comments, leading to a low grade even if it produces the correct output for the given example.
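To make the first of these pitfalls concrete, here is a small hypothetical example; the `average` function and its test cases are invented for illustration. The code passes the one sample case from the assignment sheet, yet breaks on inputs a marker is likely to try.

```python
# Hypothetical illustration of "incomplete testing": one passing sample case
# is taken as proof that the program is finished.

def average(numbers):
    """Return the arithmetic mean of a list of numbers."""
    return sum(numbers) / len(numbers)

# The single example provided with the assignment -- it passes, so under the
# "code and fix" process the work is declared done.
assert average([2, 4, 6]) == 4

# Cases that were never tried, but that a marking script might include:
#   average([])              -> raises ZeroDivisionError on an empty list
#   average([1e308, 1e308])  -> returns inf (the intermediate sum overflows)
```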
### From Student Analogy to Historical Practice: The "Code and Fix" Era
This simplistic student process is a perfect analogy for the earliest form of software development, commonly known as **Code and Fix**. This was the default, ad hoc approach used in the nascent days of computing. There were no formal processes; developers would write code, test it, and if it didn't work, they would modify ("fix") it and try again. This cycle would repeat until the program appeared to work correctly, at which point it was delivered.
The speaker explains the historical context in which this approach was viable. In the early days of computing (pre-1960s):
* **Programs were simple:** By today's standards, the software being written was small and had a limited scope.
* **Developers were experts:** Access to the enormously expensive and complex early computers was limited to highly skilled academics, scientists, and engineers.
* **The developer was the user:** Often, the person writing the code was also the client and the end-user. They were building a tool for their own specific calculation or experiment, so they had a perfect understanding of the requirements.
* **Programs were often single-use:** A program might be written to perform a single complex calculation. Once the result was obtained and verified, the program (often a physical deck of punch cards) might never be used again.
### The Dawn of the Software Crisis
This idyllic state did not last. As computer technology advanced and became more accessible, the landscape changed dramatically. Programs became larger and more complex. The customer base expanded from scientific users to businesses, which had more sophisticated and diverse needs. The group of stakeholders broadened to include managers, end-users, and customers who were not technically savvy.
As a result, the "Code and Fix" approach began to fail spectacularly. The speaker describes this period, beginning in the 1960s, as the **"software crisis."** A large majority of software projects were delivered late, went massively over budget, failed to meet the users' needs, or were simply never completed at all. The resulting software was often unreliable and difficult to maintain, analogous to a ramshackle hut built without a plan or a car jury-rigged with a shopping trolley for a wheel. This crisis signaled that the informal, ad hoc methods of the past were completely inadequate for the challenges of modern software development.
## The Formalization of Software Development: Software Engineering and the Waterfall Model
The widespread project failures of the software crisis prompted a search for a more disciplined and structured approach to building software.
### The NATO Conferences and the Birth of a Discipline
A pivotal moment in this search was a series of conferences sponsored by NATO, starting in 1968. These conferences brought together the leading minds in the computing profession at the time, including luminaries like Edsger Dijkstra and Tony Hoare. It was at these conferences that the term **"software engineering"** was popularized. The core idea was to address the software crisis by applying the principles and rigor of traditional engineering disciplines (like civil, mechanical, or electrical engineering) to the development of software. The prevailing sentiment, captured in a famous quote from the time, was that software was being built like the Wright brothers built airplanes: "build the whole thing, push it off a cliff, let it crash, and start over again." The goal of software engineering was to replace this chaotic, trial-and-error approach with a predictable, manageable, and professional discipline.
### The Waterfall Model: A Linear, Defined Process
The first major process model to emerge from this new "software engineering" mindset was the **Waterfall model**. First described in a 1970 paper by Winston W. Royce (though he did not use the term "waterfall"), this model is the epitome of a **defined process** applied to software. It is based directly on the linear, sequential processes used in manufacturing and civil engineering.
The Waterfall model breaks the development process into a series of distinct, non-overlapping phases. The output of one phase becomes the input for the next, flowing downwards like a waterfall. The typical phases are:
1. **Requirements:** All system requirements are gathered, analyzed, and documented in a comprehensive specification document. This document is then "frozen" and signed off by the client.
2. **Design:** Based on the frozen requirements, a complete architectural and detailed design for the entire system is created and documented.
3. **Implementation (Coding):** Developers take the design documents and write the code for the entire system.
4. **Testing (Verification):** The completed system is tested against the original requirements to find and fix bugs.
5. **Maintenance:** The system is deployed and enters a maintenance phase for ongoing support and enhancements.
The defining characteristic of the pure Waterfall model is its rigidity. A phase must be 100% complete before the next phase can begin. There is no going back. Quality assurance activities are performed at the end of each phase to ensure the deliverables are correct before proceeding.
However, the speaker notes that even early proponents recognized the impracticality of this strict linearity. In reality, errors are often discovered late in the process. For example, a contradiction in the requirements might only become apparent during the design or implementation phase. This necessitates feedback loops, allowing developers to go back to an earlier phase to make corrections. The problem was that in a Waterfall project, the cost of such changes was extraordinarily high. Because each phase ended with a formal sign-off, often tied to a contract, any change required a complex and expensive process of renegotiation, re-planning, and re-doing large amounts of work. This created a massive barrier to change, making the model inflexible and poorly suited for projects where requirements were not perfectly understood from the very beginning.
## Evolving Beyond Waterfall: Iteration and Prototyping
The inflexibility and high risk of the Waterfall model's "big bang" approach—building the entire system in one single pass—led to the development of more adaptive process models.
### The Iterative Model: A Series of Mini-Waterfalls
The **Iterative model** was a direct response to the problems of Waterfall. Instead of building the entire system at once, the project is broken down into smaller, manageable chunks or increments. Each increment is then developed using its own "mini-waterfall" process (requirements, design, code, test). A working, albeit partial, version of the system is delivered at the end of each iteration. Subsequent iterations add more functionality, gradually building up the complete system.
This approach has significant advantages. It spreads the project risk across multiple smaller builds, rather than concentrating it all in one large delivery. It also allows for some value to be delivered to the customer earlier in the project's life. However, the speaker points out a key weakness: the model doesn't provide a clear principle for *how* to break down the system. A common but dangerous approach is to build the easy, well-understood parts first, deferring the difficult or uncertain parts. This can lead to major problems later, when it is discovered that the requirements for a later increment fundamentally conflict with the design choices made in an earlier one.
### The Prototyping Model: Reducing Requirements Risk
Another major challenge that Waterfall failed to address is that customers often don't truly know what they want until they see a working system. The speaker emphasizes that the adage "the customer is always right" is a poor guide for software developers. A developer's professional responsibility is not to blindly implement what the customer initially asks for, but to engage with them to discover what they truly *need*.
To illustrate this, the speaker shares a powerful anecdote from his own experience. A client working on indigenous health data collection in remote communities requested an app to be built for iPads. On the surface, this seemed like a reasonable request. However, after analysis, the speaker's team realized that iPads were a poor choice for the environment due to a lack of support infrastructure (e.g., Apple Stores) and challenges with securely transmitting sensitive data from remote locations. When they discussed these concerns with the client, the real motivation was revealed: the project members simply thought it would be a nice perk to get an iPad at the end of the project. The team then guided the client towards a more suitable and robust platform. This story highlights the critical gap between a customer's stated "want" and their actual "need."
The **Prototyping model** was developed to address this exact problem. In this model, before committing to a full-scale Waterfall development, a preliminary, simplified version of the system—a **prototype**—is built. This prototype, which might be a series of mock-up screens or a simple working model, is shown to the customer to elicit feedback and clarify the requirements. The goal is to ensure that everyone has a shared and accurate understanding of what needs to be built *before* the expensive and rigid development process begins. The speaker notes that prototyping has since evolved into a more sophisticated discipline, used not just for user interfaces but also for testing technical feasibility and other aspects of a system.
### Rapid Prototyping: An Iterative Approach to Prototyping
A refinement of this idea is **Rapid Prototyping**. Instead of creating a single, throwaway prototype, this model involves an iterative cycle: a prototype is built, reviewed by the customer, and then refined based on their feedback. This "prototype-review-refine" loop continues until the prototype evolves into a stable representation of the customer's requirements, at which point the team can move on to building the final system with much greater confidence.
## The Spiral Model: A Risk-Driven Approach
The final model discussed before the break is the **Spiral Model**, introduced by Barry Boehm in 1986. This model represents a significant leap in sophistication, combining the iterative nature of the Iterative model with the risk-reduction focus of prototyping. It was widely used, particularly in high-stakes environments like the defense industry, where the speaker has personal experience.
The core innovation of the Spiral Model is that it is explicitly **risk-driven**. It directly answers the question that the simple Iterative model left open: "How do you decide what to build in each iteration?" The Spiral Model's answer is: in each cycle, you identify and tackle the highest-risk elements of the project first. Instead of avoiding uncertainty, you actively seek it out and resolve it as early as possible. This principle of actively pursuing risk is a key idea that was later inherited by Agile methodologies.
The model is visualized as a spiral, starting from the center and expanding outwards with each iteration. Each full loop of the spiral consists of four main activities:
1. **Determine Objectives:** Identify the objectives, alternatives, and constraints for that iteration.
2. **Evaluate Alternatives, Identify and Resolve Risks:** This is the heart of the model. A thorough risk analysis is performed. To mitigate the identified risks, activities like prototyping, simulation, or further analysis are undertaken.
3. **Develop and Verify:** The next level of the product is engineered and tested based on the resolved risks.
4. **Plan:** The results are evaluated, and the next loop of the spiral is planned.
To illustrate this, the speaker discusses the concept of a **"Concept of Operations" (CONOPS)**, a common practice in the defense industry. A CONOPS is a detailed document that describes how an organization currently operates and how its operations will change with the introduction of a new system. The speaker provides a detailed example from a project involving the acquisition of a new airborne surveillance aircraft (the kind with a "mushroom on top," which is a radar dome). Before purchasing a specific aircraft, the client considered a wide range of alternatives, from different types of planes to much cheaper options like instrumented balloons. For each alternative, a different CONOPS was developed to understand its operational implications, costs, and benefits. This process allowed them to systematically analyze the risks and trade-offs of each option. Once they decided to pursue an aircraft, the speaker's company developed a high-fidelity simulation of the proposed platform. This simulation was effectively a sophisticated software prototype that allowed the client to explore the CONOPS in a virtual environment, further reducing risk before committing to a multi-billion dollar purchase.
The speaker concludes this section by noting that the Spiral Model's focus on risk-driven iteration was a major step in the evolution of software development processes and a direct precursor to many of the ideas found in modern Agile frameworks. He then pauses the lecture for a short break.