Software Engineering
Complete notes & cheatsheet – detailed explanations in simple language for every topic in the syllabus.
Introduction to Software Engineering & Process Models
What is Software Engineering?
The Problem Domain
Software today is used everywhere – from banking to healthcare to social media. As software systems became larger and more complex, building them became harder. Early programmers wrote code without any planning or structure, which led to what we call the "Software Crisis" – projects were late, over budget, and full of bugs.
The problem domain refers to the real-world area that the software is meant to serve. For example, if you're building a hospital management system, the problem domain is healthcare administration. Understanding the problem domain deeply is the very first step in building good software.
Software Engineering Challenges
Building software is fundamentally different from building physical things. Here are the key challenges:
- Complexity: Software systems can have millions of lines of code with intricate interactions. Unlike a bridge, you can't "see" software, making it harder to understand.
- Changeability: Software is expected to change constantly – new features, bug fixes, platform updates. This makes managing change a core challenge.
- Invisibility: You cannot visualize software the way you can a building blueprint. This makes communication between developers and clients difficult.
- Conformity: Software must conform to existing systems, regulations, hardware, and user expectations – things that software engineers have no control over.
- Scale: A small script behaves differently from a system with 10 million users. Scaling introduces performance, reliability, and coordination challenges.
The Software Engineering Approach
The software engineering approach tackles these challenges using a phased, structured methodology. Instead of diving directly into coding, a systematic approach is followed:
- Understand the problem – Gather requirements from stakeholders.
- Plan a solution – Design the architecture and modules.
- Build the solution – Write the code according to the design.
- Verify and validate – Test the software to ensure it works correctly.
- Maintain and evolve – Fix bugs and add features over time.
Software Process & Desired Characteristics
A good software process should have these desired characteristics:
- Predictability: You should be able to estimate time, effort, and cost with reasonable accuracy.
- Testability & Maintainability: The process should produce software that is easy to test and maintain.
- Early Error Detection: The process should catch errors as early as possible (remember: cost increases exponentially).
- Support for Change: Since requirements change, the process should accommodate changes gracefully.
- Facilitates Verification & Validation: The process should have built-in checkpoints (reviews, testing phases).
Waterfall Model
The Waterfall Model is the oldest and most straightforward SDLC model. It follows a linear, sequential flow – like water flowing down a waterfall – where each phase must be completed before the next begins.
Fig: The Waterfall Model – linear, sequential phases
Phases:
- Requirements: Gather and document all requirements upfront.
- Design: Create the system architecture and detailed design.
- Implementation: Write the actual code.
- Testing: Verify the software against requirements.
- Maintenance: Deploy and maintain the system.
✅ Advantages
- Simple and easy to understand
- Well-documented at every stage
- Works well when requirements are clear and fixed
- Easy to manage (milestones are clear)
❌ Disadvantages
- No working software until late
- Cannot handle changing requirements
- High risk – errors found late are very costly
- Not suitable for complex or long-term projects
Prototyping Model
In the Prototyping Model, a simplified version (prototype) of the software is built quickly to help understand and refine requirements. Think of it as a "rough draft" that you show to the client to get feedback before building the real thing.
How it works:
- Gather initial requirements (even if incomplete).
- Build a quick prototype (just the UI or basic features).
- Show it to the user and collect feedback.
- Refine the prototype based on feedback.
- Repeat steps 3-4 until the user is satisfied.
- Build the final system using the refined requirements.
Fig: The Prototyping Model – iterative refinement with user feedback
Iterative Development
In Iterative Development, the software is built in repeated cycles (iterations). Each iteration produces a working version of the software with added features. The idea: don't try to build everything at once.
How it works: Each iteration goes through the full cycle: requirements → design → code → test. After each iteration, the software is evaluated and the next iteration plans improvements.
Fig: Iterative Development – repeated cycles of Plan → Design → Code → Test
✅ Advantages
- Working software early
- Easy to accommodate changes
- Lower risk – problems caught early
- User feedback at each iteration
❌ Disadvantages
- Harder to manage (no clear end date)
- Requires skilled planning
- Architecture may need rework
- Documentation may lag behind
- Software Engineering = systematic approach to develop, operate, and maintain software
- Software Crisis = projects late, over budget, buggy – led to SE as a discipline
- Waterfall = linear, sequential; good for fixed requirements; no flexibility
- Prototyping = build a quick mock; refine with user feedback; throw away prototype
- Iterative = build in cycles; each cycle adds features; early working software
- Cost of fixing bugs increases exponentially with each phase
- Good process = predictable, testable, supports change, catches errors early
Software Requirements Analysis & Specification
Need for SRS
Why do we need an SRS? Without it:
- Developers and clients have different expectations – the final product doesn't match what the client wanted.
- There's no basis for testing – how do you test something that was never clearly defined?
- Scope creep – requirements keep growing because nothing was agreed upon.
- Disputes – without a written agreement, disagreements can't be resolved objectively.
Requirement Process
The requirement process is the overall workflow for creating an SRS. It has four major steps:
- Requirement Gathering (Elicitation): Collect information from stakeholders about what they need.
- Requirement Analysis: Analyze the gathered information, resolve conflicts, and model the system.
- Requirement Specification: Write the SRS document in a clear, unambiguous format.
- Requirement Validation: Review the SRS with stakeholders to ensure it's correct and complete.
Fig: Requirement Process – Gathering → Analysis → Specification → Validation
Requirement Gathering Techniques
How do you find what the client actually needs? Here are the common techniques:
| Technique | Description | Best For |
|---|---|---|
| Interviews | One-on-one discussions with stakeholders | Understanding specific needs and concerns |
| Questionnaires | Written surveys sent to many users | Large user groups; quantitative data |
| Observation | Watch users perform their current tasks | Understanding workflows; finding hidden needs |
| Document Analysis | Study existing forms, reports, manuals | Understanding current system; data requirements |
| Brainstorming | Group sessions to generate ideas | Creative solutions; new features |
| Prototyping | Build a mock-up to clarify requirements | Unclear or complex user interfaces |
Problem Analysis
After gathering raw requirements, we need to analyze them. Problem analysis involves:
- Identifying conflicts: Two stakeholders may want contradictory things. These must be resolved.
- Removing ambiguity: "The system should be fast" is ambiguous. How fast? Under what conditions?
- Modeling the system: Create visual models (like Data Flow Diagrams) to understand data and processes.
- Prioritizing requirements: Not everything is equally important. Use MoSCoW (Must have, Should have, Could have, Won't have) to prioritize.
Types of Requirements
Functional Requirements
- Describe what the system should do
- Specific behaviors, functions, services
- Example: "The system shall allow users to log in with email and password"
- Example: "The system shall generate monthly sales reports"
Non-Functional Requirements
- Describe how well the system should do it
- Quality attributes and constraints
- Example: "The system shall respond within 2 seconds" (Performance)
- Example: "The system shall be available 99.9% of the time" (Reliability)
Non-functional requirements include: Performance, Reliability, Usability, Security, Portability, and Maintainability.
Characteristics of a Good SRS
A well-written SRS should be:
- Correct: Every requirement stated actually represents what is needed.
- Unambiguous: Each requirement has only one possible interpretation.
- Complete: All requirements are documented; nothing is missing.
- Consistent: No two requirements contradict each other.
- Verifiable: You can test whether each requirement is met. "The system should be user-friendly" is NOT verifiable.
- Modifiable: Easy to update when requirements change.
- Traceable: Each requirement can be traced to its origin and to the design/code that implements it.
- Ranked for importance: Requirements are prioritized (essential vs. desirable).
Components & Structure of an SRS Document
A standard SRS document (following IEEE 830) has these major sections:
- Introduction: Purpose, scope, definitions, overview of the document.
- Overall Description: Product perspective, product functions, user characteristics, constraints, assumptions.
- Specific Requirements: Functional requirements (listed individually), non-functional requirements (performance, security, etc.), interface requirements (user, hardware, software, communication interfaces).
- Appendices: Supporting information, data models, glossary.
Example requirements for a login feature:
FR-002: On three consecutive failed login attempts, the system shall lock the account for 30 minutes.
NFR-001: The login page shall load within 1.5 seconds on a 4G connection.
- SRS = contract between client and developer; describes WHAT to build
- Requirement Process = Gathering → Analysis → Specification → Validation
- Functional = what the system does; Non-functional = how well it does it
- Good SRS = Correct, Unambiguous, Complete, Consistent, Verifiable, Traceable
- Gathering techniques = interviews, questionnaires, observation, prototyping
- IEEE 830 = standard format for SRS documents
- Requirements must be testable – avoid vague terms like "user-friendly"
Software Design
Design Principles
Software design is the process of transforming requirements (the "what") into a blueprint (the "how"). Good design makes the system easier to build, test, and maintain. The key principles are:
- Abstraction: Hide complex details and expose only what's necessary. Example: When you use a TV remote, you don't need to know how the circuit works β you just press buttons.
- Decomposition (Divide & Conquer): Break a complex problem into smaller, manageable sub-problems. Solve each one independently.
- Modularity: Divide the software into independent modules, each handling one specific function.
- Information Hiding: Each module should hide its internal workings from other modules. Other modules only interact through a well-defined interface.
- Separation of Concerns: Different aspects of the software (UI, business logic, database) should be handled by separate parts of the system.
Modularity
Think of modules like LEGO blocks β each block has a clear purpose and connects to others through a standard interface. Benefits:
- Easier development: Teams can work on different modules simultaneously.
- Easier testing: Test each module independently (unit testing).
- Easier maintenance: Fix or update one module without breaking others.
- Reusability: Well-designed modules can be reused in other projects.
Top-Down and Bottom-Up Strategies
🔽 Top-Down Design
- Start with the big picture (main module)
- Progressively break it into sub-modules
- Like writing an essay: outline → sections → paragraphs → sentences
- Good for understanding overall structure
- Risk: low-level details may be overlooked
🔼 Bottom-Up Design
- Start with basic, low-level components
- Combine them to form higher-level modules
- Like building with LEGO: small bricks → sections → complete model
- Good for reusing existing components
- Risk: may not fit together at the top level
In practice, most projects use a combination of both – top-down for overall architecture and bottom-up for implementing individual components.
Coupling
Types of coupling (from BEST to WORST):
| Type | Description | Quality |
|---|---|---|
| Data Coupling | Modules share data through parameters (only what's needed) | 🟢 Best |
| Stamp Coupling | Modules share a composite data structure but use only part of it | 🟡 OK |
| Control Coupling | One module controls the flow of another by passing control info (flags) | 🟠 Poor |
| Common Coupling | Modules share global data | 🔴 Bad |
| Content Coupling | One module directly modifies the internals of another | 🔴 Worst |
Fig: Types of Coupling – from best (Data) to worst (Content)
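The top and bottom of this scale can be sketched in a few lines of Python (the interest-calculation functions are illustrative, not from the notes):

```python
# Data coupling (best): the module receives exactly the data it needs,
# as parameters, and returns a result.
def simple_interest(principal, rate, years):
    return principal * rate * years

# Common coupling (bad): modules communicate through shared global state,
# so any module can silently change the data another one depends on.
ACCOUNT = {"principal": 1000.0, "rate": 0.05, "years": 2}

def simple_interest_global():
    return ACCOUNT["principal"] * ACCOUNT["rate"] * ACCOUNT["years"]
```

Both functions compute the same value, but only the first can be understood, tested, and reused without knowing anything about `ACCOUNT`.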
Cohesion
Types of cohesion (from WORST to BEST):
| Type | Description | Quality |
|---|---|---|
| Coincidental | Elements are randomly grouped (no relation) | 🔴 Worst |
| Logical | Elements perform similar things (e.g., all I/O functions) | 🔴 Poor |
| Temporal | Elements are executed at the same time (e.g., initialization) | 🟠 Low |
| Procedural | Elements follow a specific sequence of execution | 🟡 Medium |
| Communicational | Elements operate on the same data | 🟡 Good |
| Sequential | Output of one element is input to the next | 🟢 High |
| Functional | All elements contribute to a single, well-defined function | 🟢 Best |
Fig: Types of Cohesion – from worst (Coincidental) to best (Functional)
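The best and worst rows can likewise be sketched as two hypothetical Python functions:

```python
# Functional cohesion (best): every statement contributes to one
# well-defined task - validating a password.
def is_valid_password(pw):
    long_enough = len(pw) >= 8
    has_digit = any(c.isdigit() for c in pw)
    has_upper = any(c.isupper() for c in pw)
    return long_enough and has_digit and has_upper

# Coincidental cohesion (worst): unrelated operations dumped into one
# grab-bag function with no common purpose.
def misc_utils(action, value):
    if action == "square":
        return value * value
    if action == "reverse":
        return value[::-1]
    if action == "shout":
        return str(value).upper() + "!"
```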
Structure Charts
A Structure Chart is a diagram that shows the hierarchy of modules in a system. It shows which module calls which, and what data flows between them.
Symbols used:
- Rectangle: Represents a module
- Arrow (line with arrowhead): Shows the calling relationship (A calls B)
- Small arrow with open circle: Data flow
- Small arrow with filled circle: Control flow (flag)
- Diamond: Conditional call (module is called only if a condition is true)
- Curved arrow: Repetitive call (loop)
Fig: Structure Chart – module hierarchy with data and control flow
Data Flow Diagrams (DFD)
A Data Flow Diagram shows how data moves through a system. It focuses on the processes that transform data and the data stores where data is kept.
Four symbols:
| Symbol | Name | Meaning |
|---|---|---|
| Rectangle | External Entity | Source or destination of data (outside the system) |
| Rounded Rectangle / Circle | Process | Transforms input data into output data |
| Arrow | Data Flow | Direction of data movement |
| Open Rectangle (two lines) | Data Store | Where data is stored (database, file) |
Levels of DFD:
- Level 0 (Context Diagram): Shows the entire system as one process with external entities. The "bird's eye view."
- Level 1: Breaks the main process into sub-processes. Shows major functions.
- Level 2+: Further decomposition of each sub-process.
Fig: Level-0 Context Diagram – the system as a single process with external entities
- Design principles = Abstraction, Decomposition, Modularity, Info Hiding
- Goal = Low Coupling + High Cohesion
- Coupling (best→worst) = Data → Stamp → Control → Common → Content
- Cohesion (worst→best) = Coincidental → Logical → Temporal → Procedural → Communicational → Sequential → Functional
- Top-Down = big picture first; Bottom-Up = small parts first
- Structure Chart = module hierarchy + data/control flow
- DFD = how data flows; Level 0 (context) → Level 1 → Level 2+
Coding
Programming Principles & Guidelines
Coding is the phase where the design is translated into a working program. While it may seem like "just writing code," professional software engineering treats coding as a disciplined activity with clear principles:
- Readability: Code is read far more often than it is written. Write code that another developer (or future you) can easily understand. Use meaningful variable names, proper indentation, and comments where the "why" isn't obvious.
- Simplicity (KISS): Keep It Simple, Stupid. Choose the simplest solution that works. Clever, tricky code is hard to debug and maintain.
- DRY (Don't Repeat Yourself): If you find yourself copying the same code in multiple places, extract it into a function. Duplicate code means duplicate bugs.
- Single Responsibility: Each function or module should do one thing and do it well.
- Input Validation: Always validate data coming from the user or external systems. Never trust input blindly β it's one of the most common sources of bugs and security vulnerabilities.
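A short sketch showing DRY and input validation together; the discount function is a hypothetical example, not from the syllabus:

```python
# Before: "price * (1 - 0.10)" was copy-pasted wherever a discount applied.
# After: one function - one place to fix bugs, one place to validate input.
def apply_discount(price, rate):
    if not 0 <= rate <= 1:   # input validation: never trust input blindly
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

cart_total = apply_discount(200.0, 0.10)     # 180.0
invoice_total = apply_discount(59.99, 0.10)  # 53.99
```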
Common Coding Errors
Being aware of common errors helps you avoid them. Here are the most frequent coding mistakes:
| Error Type | Description | Example |
|---|---|---|
| Off-by-one | Loop runs one too many or one too few times | Using <= instead of < in a loop |
| Uninitialized variables | Using a variable before assigning it a value | Reading from a variable that was never set |
| Null reference | Trying to access a member of an object that is null | Calling a method on a null pointer |
| Buffer overflow | Writing data beyond the allocated memory | Copying a 100-char string into a 50-char array |
| Type mismatch | Using incompatible data types in operations | Comparing a string to an integer |
| Logic errors | Code runs without crashing but produces wrong results | Using AND when you should use OR |
| Resource leaks | Not closing files, connections, or freeing memory | Opening a file but never closing it |
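Two rows of the table as runnable Python, buggy version next to the fix (function names are illustrative):

```python
# Off-by-one: the buggy version loops n + 1 times instead of n.
def sum_first_n_buggy(values, n):
    total = 0
    for i in range(n + 1):    # bug: one iteration too many
        total += values[i]
    return total

def sum_first_n(values, n):
    total = 0
    for i in range(n):        # fixed: exactly n iterations
        total += values[i]
    return total

# Resource leak avoided: 'with' guarantees the file is closed,
# even if an exception is raised while reading.
def first_line(path):
    with open(path) as f:
        return f.readline()
```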
Structured Programming
Structured programming restricts a program's control flow to three basic constructs and avoids goto statements, which make code hard to follow (called "spaghetti code").
The three building blocks:
- Sequence: Statements executed one after another in order.
- Selection: Choose between paths based on a condition (if-else, switch).
- Iteration: Repeat a block of code (for, while, do-while).
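The three building blocks are enough for real logic, as this small illustrative function shows – no goto required:

```python
# Sequence, selection, and iteration combined in one function.
def passing_scores(scores, cutoff=40):
    passed = []                # sequence: statements run one after another
    for s in scores:           # iteration: repeat the body for each score
        if s >= cutoff:        # selection: choose a path per element
            passed.append(s)
    return passed

print(passing_scores([35, 70, 40, 90]))  # [70, 40, 90]
```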
Information Hiding
Information hiding (proposed by David Parnas) is the principle that a module should hide its internal details from the outside world. Other modules only interact through a defined interface β they don't know how things work inside, only what they can do.
Benefits:
- Changes to one module don't cascade to others
- Reduces complexity – developers only need to understand interfaces, not internals
- Increases security – internal state can't be tampered with from outside
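A minimal Python sketch of the idea (in Python, hiding is by convention: the leading underscore marks `_balance` as internal):

```python
class Account:
    """Callers use the methods below and never touch _balance directly."""

    def __init__(self):
        self._balance = 0            # internal detail, hidden from callers

    def deposit(self, amount):       # the interface can enforce its rules
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance

acct = Account()
acct.deposit(50)
print(acct.balance())  # 50
```

Because other code depends only on `deposit` and `balance`, the internal representation could change (say, to a list of transactions) without breaking any caller.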
Programming Practices & Coding Standards
Coding Standards
Coding standards are agreed-upon rules a development team follows for consistency. Common standards include:
- Naming conventions: camelCase for variables, PascalCase for classes, UPPER_SNAKE for constants.
- Indentation & spacing: Consistent use of tabs or spaces (usually 4 spaces).
- Comment guidelines: Every function should have a header comment explaining purpose, parameters, and return value.
- File organization: Imports at top, constants next, then class/function definitions.
- Error handling: Use try-catch blocks, handle errors gracefully, never silently swallow exceptions.
The Coding Process
Professional coding follows a structured process:
- Understand the design document for the module you're coding.
- Write pseudocode or outline the logic before writing actual code.
- Write the code following coding standards.
- Self-review the code for errors and style issues.
- Compile and fix syntax errors.
- Unit test the module.
- Code review by a peer (another developer reviews your code).
- Key principles = Readability, KISS, DRY, Single Responsibility, Input Validation
- Common errors = Off-by-one, null reference, buffer overflow, logic errors, resource leaks
- Structured programming = Sequence + Selection + Iteration (no goto)
- Information hiding = modules hide internals, expose only interfaces
- Coding process = Design → Pseudocode → Code → Self-Review → Compile → Unit Test → Peer Review
- Standards = consistent naming, indentation, comments, error handling
Testing
Error, Fault, and Failure
These three terms are often confused but mean different things. Understanding the difference is important:
| Term | Definition | Example |
|---|---|---|
| Error | A human mistake made during development (wrong logic, misunderstanding) | Developer uses ">" instead of ">=" |
| Fault (Bug/Defect) | The manifestation of an error in the code β the incorrect code itself | The line if (x > 10) when it should be if (x >= 10) |
| Failure | When the faulty code actually produces wrong output during execution | User enters 10, system says "invalid" when it should be "valid" |
Fig: Error → Fault → Failure chain
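The table's running example, written out as illustrative code:

```python
# Error: the developer thought "greater than" when the rule was "at least".
# Fault: the resulting wrong line of code.
def is_valid_faulty(x):
    return x > 10     # fault: should be x >= 10

def is_valid_fixed(x):
    return x >= 10

# Failure: the fault only shows up when the boundary value is executed.
print(is_valid_faulty(10))  # False - the wrong output the user observes
print(is_valid_fixed(10))   # True
print(is_valid_faulty(11))  # True - for x = 11 the fault never fails
```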
Psychology of Testing
Testing is fundamentally a destructive activity – the goal is to find bugs, not to prove the software works. This is psychologically difficult because the developer naturally wants to prove their code is correct. Best practice: have someone other than the developer test the code. An independent tester approaches the software with a "break it" mindset.
Test Oracles, Test Cases & Test Criteria
Test Oracle: Any mechanism that tells you the expected output for a given input – a specification, a formula, or a human expert – so the actual output can be judged correct or incorrect.
Test Case: A set of inputs, execution conditions, and expected results designed to test a specific aspect of the software. A good test case has:
- Test Case ID – unique identifier
- Description – what is being tested
- Pre-conditions – state of the system before the test
- Input data – what to feed the system
- Expected output – what the system should produce
- Post-conditions – state of the system after the test
Test Criteria: Rules that define when testing is "sufficient." Examples:
- Coverage criteria: All statements, all branches, or all paths must be tested.
- Fault-based criteria: Testing should detect specific types of faults.
- Error-based criteria: Focus testing on areas where errors are most likely.
Black-Box Testing
In black-box testing, test cases are designed from the requirements and specifications alone – the software is treated as a closed box whose internal code is never examined.
Fig: Black-Box vs White-Box Testing comparison
Key Techniques:
1. Equivalence Class Partitioning
Divide all possible inputs into groups (classes) where each class is expected to behave the same way. Test one value from each class instead of testing every possible input.
2. Boundary Value Analysis
Bugs tend to cluster at the boundaries of input ranges. Test values at, just below, and just above each boundary.
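Both techniques applied to a hypothetical rule, "age must be between 18 and 60 inclusive":

```python
def is_eligible(age):
    return 18 <= age <= 60

# Equivalence class partitioning: three classes, one representative each.
assert not is_eligible(5)    # class 1: below the valid range
assert is_eligible(30)       # class 2: inside the valid range
assert not is_eligible(75)   # class 3: above the valid range

# Boundary value analysis: at, just below, and just above each boundary.
for age, expected in [(17, False), (18, True), (19, True),
                      (59, True), (60, True), (61, False)]:
    assert is_eligible(age) == expected
```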
White-Box Testing
In white-box testing, test cases are derived from the internal structure of the code – its statements, branches, and paths.
Key Coverage Criteria:
- Statement Coverage: Every line of code is executed at least once. (Weakest criterion)
- Branch Coverage: Every decision (if/else) takes both the true and false path at least once.
- Path Coverage: Every possible path through the code is tested. (Strongest but often impractical for large programs)
- Condition Coverage: Each individual boolean condition is evaluated to both true and false.
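A two-line function is enough to show why branch coverage is stronger than statement coverage (an illustrative sketch):

```python
def absolute(x):
    if x < 0:        # the branch under test
        x = -x
    return x

# One negative input executes every statement (100% statement coverage)...
assert absolute(-5) == 5
# ...but branch coverage also demands the false path of the if, so a
# second test case is required.
assert absolute(3) == 3
```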
Levels of Testing
| Level | What is Tested | Who Tests | Purpose |
|---|---|---|---|
| Unit Testing | Individual modules/functions | Developer | Verify each unit works correctly in isolation |
| Integration Testing | Interactions between modules | Developer / QA | Verify modules work together correctly |
| System Testing | The complete system | QA team | Verify the system meets all requirements |
| Acceptance Testing | The system by the client | Client/User | Verify the system is acceptable for delivery |
Integration Testing Strategies:
- Big Bang: Integrate all modules at once and test. Simple but very hard to isolate errors.
- Top-Down: Start from the top module, stub lower modules. Tests high-level logic first.
- Bottom-Up: Start from the lowest modules, use drivers. Tests basic components first.
- Sandwich (Hybrid): Combine top-down and bottom-up approaches.
Fig: Levels of Testing – Unit → Integration → System → Acceptance
- Error → Fault → Failure (human mistake → code bug → wrong output)
- Test Oracle = mechanism that tells you the expected output
- Black-box = test inputs/outputs without looking at code
- White-box = test based on internal code structure
- Equivalence Partitioning = divide inputs into classes, test one per class
- Boundary Value Analysis = test at edges of input ranges
- Coverage: Statement → Branch → Path; Branch is practical minimum
- Levels: Unit → Integration → System → Acceptance
- Testing is destructive – the goal is to find bugs, not prove correctness
Software Testing Process, Maintenance & Metrics
Test Plan
A typical test plan includes:
- Test Plan ID & Name
- Scope: What features will and will not be tested.
- Test Strategy: Types of testing (unit, integration, system, acceptance) and techniques (black-box, white-box).
- Entry & Exit Criteria: Conditions to start and stop testing. E.g., "Entry: all modules coded; Exit: 95% test cases pass, no critical bugs."
- Test Environment: Hardware, software, network configuration needed.
- Resources & Schedule: Who tests what, and the timeline.
- Risk Assessment: What could go wrong and how to mitigate it.
Test Case Specifications & Execution
Test Case Specification documents the detailed test cases in a structured format:
| Field | Description | Example |
|---|---|---|
| TC-ID | Unique identifier | TC-LOGIN-001 |
| Description | What this test verifies | Verify login with valid credentials |
| Pre-conditions | Required setup | User account exists in database |
| Steps | Actions to perform | 1. Open login page 2. Enter email 3. Enter password 4. Click Login |
| Input Data | Test inputs | Email: test@mail.com, Pwd: Pass@123 |
| Expected Result | What should happen | Redirected to dashboard, welcome message shown |
| Status | Pass / Fail / Blocked | Pass |
Test Execution & Analysis: After running test cases, results are recorded and analyzed. If a test fails, the tester documents the actual vs. expected output, the environment, and steps to reproduce the issue.
Defect Logging & Tracking
When a test case fails, a defect report is filed. It includes:
- Defect ID – unique identifier
- Summary – one-line description
- Severity – how bad is it? (Critical / Major / Minor / Cosmetic)
- Priority – how urgently should it be fixed? (High / Medium / Low)
- Steps to Reproduce – exact steps to trigger the bug
- Environment – OS, browser, device used
- Attachments – screenshots, logs
- Status – New → Assigned → Fixed → Verified → Closed
Fig: Defect Lifecycle – New → Open → Assigned → Fixed → Retested → Closed
Software Maintenance
Four types of maintenance:
| Type | Purpose | Example | % of Effort |
|---|---|---|---|
| Corrective | Fix bugs found after delivery | Fixing a crash that occurs on leap years | ~20% |
| Adaptive | Adapt to environment changes | Updating for a new OS version or database | ~25% |
| Perfective | Improve performance or add features | Adding a search feature users requested | ~50% |
| Preventive | Prevent future problems | Refactoring code to improve maintainability | ~5% |
Fig: Software Maintenance Types – Perfective dominates at ~50%
COCOMO Model
Three levels of COCOMO:
| Level | Based On | Accuracy |
|---|---|---|
| Basic COCOMO | Just project size (KLOC) | Rough estimate |
| Intermediate COCOMO | Size + 15 cost drivers (product, hardware, personnel, project attributes) | Better estimate |
| Detailed COCOMO | Size + cost drivers + phase-by-phase estimation | Most accurate |
Three project modes:
| Mode | Description | Effort Formula | Time Formula |
|---|---|---|---|
| Organic | Small teams, familiar problem, relaxed deadlines | E = 2.4 × (KLOC)^1.05 | T = 2.5 × E^0.38 |
| Semi-detached | Medium teams, mix of experienced and new people | E = 3.0 × (KLOC)^1.12 | T = 2.5 × E^0.35 |
| Embedded | Tight constraints, complex, hardware-coupled | E = 3.6 × (KLOC)^1.20 | T = 2.5 × E^0.32 |
Fig: COCOMO Modes – Organic, Semi-Detached, Embedded
Example: an Organic-mode project of 32 KLOC.
Effort = 2.4 × (32)^1.05 ≈ 2.4 × 38.05 ≈ 91.3 person-months
Time = 2.5 × (91.3)^0.38 ≈ 2.5 × 5.56 ≈ 13.9 months
People needed = 91.3 / 13.9 ≈ 7 people
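The mode table translates directly into a small calculator; this sketch uses the (a, b, c, d) constants exactly as given above:

```python
# (a, b, c, d) per mode: E = a * KLOC^b, T = c * E^d
MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b      # person-months
    time = c * effort ** d      # months
    people = effort / time      # average team size
    return effort, time, people
```

For a 32 KLOC organic project this yields roughly 91 person-months over about 14 months.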
Function Point Metric
Five function types counted:
| Type | Description | Example |
|---|---|---|
| External Inputs (EI) | Data entering the system from outside | Login form, registration form |
| External Outputs (EO) | Data leaving the system to outside | Reports, invoices, error messages |
| External Inquiries (EQ) | Input-output pairs (query β result) | Search by name, view profile |
| Internal Logical Files (ILF) | Data stored and maintained by the system | User table, product database |
| External Interface Files (EIF) | Data referenced from external systems | Third-party API data, shared database |
Fig: Function Point Analysis – 5 function types (EI, EO, EQ, ILF, EIF)
Calculation process:
- Count each function type and classify as Simple, Average, or Complex.
- Multiply counts by standard weights to get Unadjusted Function Points (UFP).
- Evaluate 14 General System Characteristics (GSCs) – each rated 0-5 – to get the Value Adjustment Factor (VAF).
- Calculate: FP = UFP × (0.65 + 0.01 × sum of GSCs)
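The four steps, sketched in Python with the standard average complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7) and a made-up set of counts:

```python
AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, gsc_ratings):
    # Step 2: counts x weights -> Unadjusted Function Points (UFP)
    ufp = sum(AVERAGE_WEIGHTS[t] * n for t, n in counts.items())
    # Step 3: 14 GSCs, each rated 0-5 -> Value Adjustment Factor (VAF)
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    # Step 4: FP = UFP x VAF
    return ufp * vaf

counts = {"EI": 3, "EO": 2, "EQ": 1, "ILF": 2, "EIF": 1}  # hypothetical system
fp = function_points(counts, [3] * 14)  # all 14 GSCs rated "average" (3)
print(round(fp, 2))  # 56.71
```

Here UFP = 12 + 10 + 4 + 20 + 7 = 53 and VAF = 0.65 + 0.42 = 1.07, so FP ≈ 56.71.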
- Test Plan = scope, strategy, entry/exit criteria, environment, schedule
- Test Case = ID + preconditions + steps + input + expected result + status
- Defect lifecycle = New → Open → Assigned → Fixed → Retested → Closed
- Maintenance types = Corrective (fix bugs), Adaptive (new env), Perfective (improve), Preventive (refactor)
- Maintenance consumes 60-70% of total lifecycle cost; Perfective is ~50% of maintenance
- COCOMO modes = Organic, Semi-detached, Embedded (increasing constraint)
- COCOMO formula = E = a × (KLOC)^b; T = c × E^d
- Function Points = measure size by functionality (EI, EO, EQ, ILF, EIF)
- FP formula = UFP × (0.65 + 0.01 × Σ GSCs)