πŸ“˜ Exam Notes

Software Engineering

Complete notes & cheatsheet β€” detailed explanations in simple language for every topic in the syllabus.

πŸ“„ 6 Units ⏱ Credits: 3 πŸ“š Pressman (Textbook)
Unit I

Introduction to Software Engineering & Process Models

What is Software Engineering?

πŸ“– Definition
Software Engineering is the systematic, disciplined, and quantifiable approach to the development, operation, and maintenance of software. It applies engineering principles to software creation so that the final product is reliable, efficient, and easy to maintain.

The Problem Domain

Software today is used everywhere β€” from banking to healthcare to social media. As software systems became larger and more complex, building them became harder. Early programmers wrote code without any planning or structure, which led to what we call the "Software Crisis" β€” projects were late, over budget, and full of bugs.

The problem domain refers to the real-world area that the software is meant to serve. For example, if you're building a hospital management system, the problem domain is healthcare administration. Understanding the problem domain deeply is the very first step in building good software.

Software Engineering Challenges

Building software is fundamentally different from building physical things. Here are the key challenges:

  • Complexity: Software systems can have millions of lines of code with intricate interactions. Unlike a bridge, you can't "see" software, making it harder to understand.
  • Changeability: Software is expected to change constantly β€” new features, bug fixes, platform updates. This makes managing change a core challenge.
  • Invisibility: You cannot visualize software the way you can a building blueprint. This makes communication between developers and clients difficult.
  • Conformity: Software must conform to existing systems, regulations, hardware, and user expectations β€” things that software engineers have no control over.
  • Scale: A small script behaves differently from a system with 10 million users. Scaling introduces performance, reliability, and coordination challenges.

The Software Engineering Approach

The software engineering approach tackles these challenges using a phased, structured methodology. Instead of diving directly into coding, a systematic approach is followed:

  1. Understand the problem β€” Gather requirements from stakeholders.
  2. Plan a solution β€” Design the architecture and modules.
  3. Build the solution β€” Write the code according to the design.
  4. Verify and validate β€” Test the software to ensure it works correctly.
  5. Maintain and evolve β€” Fix bugs and add features over time.
⚠️ Key Insight
The cost of fixing a defect grows roughly exponentially with each phase it survives. A requirement error that would cost 1 unit to fix during requirements costs about 5Γ— as much if caught in design, about 50Γ— in testing, and about 200Γ— after deployment. That's why the early phases are so critical.

Software Process & Desired Characteristics

πŸ“– Definition
A software process (or Software Development Life Cycle β€” SDLC) is a structured set of activities required to develop a software system. It defines WHO does WHAT, WHEN, and HOW.

A good software process should have these desired characteristics:

  • Predictability: You should be able to estimate time, effort, and cost with reasonable accuracy.
  • Testability & Maintainability: The process should produce software that is easy to test and maintain.
  • Early Error Detection: The process should catch errors as early as possible (remember: cost increases exponentially).
  • Support for Change: Since requirements change, the process should accommodate changes gracefully.
  • Facilitates Verification & Validation: The process should have built-in checkpoints (reviews, testing phases).

Waterfall Model

The Waterfall Model is the oldest and most straightforward SDLC model. It follows a linear, sequential flow β€” like water flowing down a waterfall β€” where each phase must be completed before the next begins.

Waterfall Model Diagram

Fig: The Waterfall Model β€” linear, sequential phases

Phases:

  1. Requirements: Gather and document all requirements upfront.
  2. Design: Create the system architecture and detailed design.
  3. Implementation: Write the actual code.
  4. Testing: Verify the software against requirements.
  5. Maintenance: Deploy and maintain the system.

βœ… Advantages

  • Simple and easy to understand
  • Well-documented at every stage
  • Works well when requirements are clear and fixed
  • Easy to manage (milestones are clear)

❌ Disadvantages

  • No working software until late
  • Cannot handle changing requirements
  • High risk β€” errors found late are very costly
  • Not suitable for complex or long-term projects

Prototyping Model

In the Prototyping Model, a simplified version (prototype) of the software is built quickly to help understand and refine requirements. Think of it as a "rough draft" that you show to the client to get feedback before building the real thing.

How it works:

  1. Gather initial requirements (even if incomplete).
  2. Build a quick prototype (just the UI or basic features).
  3. Show it to the user and collect feedback.
  4. Refine the prototype based on feedback.
  5. Repeat steps 3-4 until the user is satisfied.
  6. Build the final system using the refined requirements.
Prototyping Model Diagram

Fig: The Prototyping Model β€” iterative refinement with user feedback

πŸ’‘ When to Use
Use prototyping when requirements are unclear or the client isn't sure what they want. It is especially useful for UI/UX-heavy applications where the user needs to "see" the system before they can articulate requirements.
⚠️ Common Pitfall
Clients often mistake the prototype for the final product and pressure the team to ship it. A throwaway prototype is built with shortcuts and quick-and-dirty code; it should be discarded and the real system built properly from the refined requirements.

Iterative Development

In Iterative Development, the software is built in repeated cycles (iterations). Each iteration produces a working version of the software with added features. The idea: don't try to build everything at once.

How it works: Each iteration goes through the full cycle β€” requirements β†’ design β†’ code β†’ test. After each iteration, the software is evaluated and the next iteration plans improvements.

Iterative Development Model

Fig: Iterative Development β€” repeated cycles of Plan β†’ Design β†’ Code β†’ Test

βœ… Advantages

  • Working software early
  • Easy to accommodate changes
  • Lower risk β€” problems caught early
  • User feedback at each iteration

❌ Disadvantages

  • Harder to manage (no clear end date)
  • Requires skilled planning
  • Architecture may need rework
  • Documentation may lag behind
πŸ“‹ Unit I β€” Quick Cheatsheet
  • Software Engineering = systematic approach to develop, operate, and maintain software
  • Software Crisis = projects late, over budget, buggy β€” led to SE as a discipline
  • Waterfall = linear, sequential; good for fixed requirements; no flexibility
  • Prototyping = build a quick mock; refine with user feedback; throw away prototype
  • Iterative = build in cycles; each cycle adds features; early working software
  • Cost of fixing bugs increases exponentially with each phase
  • Good process = predictable, testable, supports change, catches errors early
Unit II

Software Requirements Analysis & Specification

Need for SRS

πŸ“– Definition
An SRS (Software Requirements Specification) is a document that describes what the software should do. It is a contract between the client and the development team β€” "Here is exactly what we'll build."

Why do we need an SRS? Without it:

  • Developers and clients have different expectations β†’ the final product doesn't match what the client wanted.
  • There's no basis for testing β†’ how do you test something that was never clearly defined?
  • Scope creep β†’ requirements keep growing because nothing was agreed upon.
  • Disputes β†’ without a written agreement, disagreements can't be resolved objectively.

Requirement Process

The requirement process is the overall workflow for creating an SRS. It has four major steps:

  1. Requirement Gathering (Elicitation): Collect information from stakeholders about what they need.
  2. Requirement Analysis: Analyze the gathered information, resolve conflicts, and model the system.
  3. Requirement Specification: Write the SRS document in a clear, unambiguous format.
  4. Requirement Validation: Review the SRS with stakeholders to ensure it's correct and complete.
Requirement Process Flow

Fig: Requirement Process β€” Gathering β†’ Analysis β†’ Specification β†’ Validation

Requirement Gathering Techniques

How do you find what the client actually needs? Here are the common techniques:

| Technique | Description | Best For |
|---|---|---|
| Interviews | One-on-one discussions with stakeholders | Understanding specific needs and concerns |
| Questionnaires | Written surveys sent to many users | Large user groups; quantitative data |
| Observation | Watch users perform their current tasks | Understanding workflows; finding hidden needs |
| Document Analysis | Study existing forms, reports, manuals | Understanding current system; data requirements |
| Brainstorming | Group sessions to generate ideas | Creative solutions; new features |
| Prototyping | Build a mock-up to clarify requirements | Unclear or complex user interfaces |

Problem Analysis

After gathering raw requirements, we need to analyze them. Problem analysis involves:

  • Identifying conflicts: Two stakeholders may want contradictory things. These must be resolved.
  • Removing ambiguity: "The system should be fast" is ambiguous. How fast? Under what conditions?
  • Modeling the system: Create visual models (like Data Flow Diagrams) to understand data and processes.
  • Prioritizing requirements: Not everything is equally important. Use MoSCoW (Must have, Should have, Could have, Won't have) to prioritize.

Types of Requirements

Functional Requirements

  • Describe what the system should do
  • Specific behaviors, functions, services
  • Example: "The system shall allow users to log in with email and password"
  • Example: "The system shall generate monthly sales reports"

Non-Functional Requirements

  • Describe how well the system should do it
  • Quality attributes and constraints
  • Example: "The system shall respond within 2 seconds" (Performance)
  • Example: "The system shall be available 99.9% of the time" (Reliability)

Non-functional requirements include: Performance, Reliability, Usability, Security, Portability, and Maintainability.

Characteristics of a Good SRS

A well-written SRS should be:

  • Correct: Every requirement stated actually represents what is needed.
  • Unambiguous: Each requirement has only one possible interpretation.
  • Complete: All requirements are documented; nothing is missing.
  • Consistent: No two requirements contradict each other.
  • Verifiable: You can test whether each requirement is met. "The system should be user-friendly" is NOT verifiable.
  • Modifiable: Easy to update when requirements change.
  • Traceable: Each requirement can be traced to its origin and to the design/code that implements it.
  • Ranked for importance: Requirements are prioritized (essential vs. desirable).

Components & Structure of an SRS Document

A standard SRS document (following IEEE 830) has these major sections:

  1. Introduction: Purpose, scope, definitions, overview of the document.
  2. Overall Description: Product perspective, product functions, user characteristics, constraints, assumptions.
  3. Specific Requirements: Functional requirements (listed individually), non-functional requirements (performance, security, etc.), interface requirements (user, hardware, software, communication interfaces).
  4. Appendices: Supporting information, data models, glossary.
πŸ” Example
FR-001: The system shall allow a registered user to log in using their email and password.
FR-002: On three consecutive failed login attempts, the system shall lock the account for 30 minutes.
NFR-001: The login page shall load within 1.5 seconds on a 4G connection.
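
FR-002 above can be turned directly into testable code. A minimal sketch, assuming a hypothetical `LoginGuard` class; the name, the explicit clock parameter, and the reset-on-success behavior are my own illustration, not part of the SRS:

```python
from datetime import datetime, timedelta

class LoginGuard:
    """Sketch of FR-002: lock the account for 30 minutes after
    three consecutive failed login attempts. (Hypothetical design.)"""
    MAX_FAILURES = 3
    LOCKOUT = timedelta(minutes=30)

    def __init__(self):
        self.failures = 0
        self.locked_until = None

    def is_locked(self, now):
        return self.locked_until is not None and now < self.locked_until

    def record_attempt(self, success, now):
        if self.is_locked(now):
            return False          # attempts while locked are rejected
        if success:
            self.failures = 0     # "consecutive" resets on any success
            return True
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self.locked_until = now + self.LOCKOUT
        return False
```

Because the requirement was written verifiably (three attempts, 30 minutes), each clause maps to an assertion a tester can check.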
πŸ“‹ Unit II β€” Quick Cheatsheet
  • SRS = contract between client and developer; describes WHAT to build
  • Requirement Process = Gathering β†’ Analysis β†’ Specification β†’ Validation
  • Functional = what the system does; Non-functional = how well it does it
  • Good SRS = Correct, Unambiguous, Complete, Consistent, Verifiable, Traceable
  • Gathering techniques = interviews, questionnaires, observation, prototyping
  • IEEE 830 = standard format for SRS documents
  • Requirements must be testable β€” avoid vague terms like "user-friendly"
Unit III

Software Design

Design Principles

Software design is the process of transforming requirements (the "what") into a blueprint (the "how"). Good design makes the system easier to build, test, and maintain. The key principles are:

  • Abstraction: Hide complex details and expose only what's necessary. Example: When you use a TV remote, you don't need to know how the circuit works β€” you just press buttons.
  • Decomposition (Divide & Conquer): Break a complex problem into smaller, manageable sub-problems. Solve each one independently.
  • Modularity: Divide the software into independent modules, each handling one specific function.
  • Information Hiding: Each module should hide its internal workings from other modules. Other modules only interact through a well-defined interface.
  • Separation of Concerns: Different aspects of the software (UI, business logic, database) should be handled by separate parts of the system.

Modularity

πŸ“– Definition
Modularity is the degree to which a system is divided into independent, self-contained modules. Each module performs a specific task and interacts with other modules through well-defined interfaces.

Think of modules like LEGO blocks β€” each block has a clear purpose and connects to others through a standard interface. Benefits:

  • Easier development: Teams can work on different modules simultaneously.
  • Easier testing: Test each module independently (unit testing).
  • Easier maintenance: Fix or update one module without breaking others.
  • Reusability: Well-designed modules can be reused in other projects.
⚠️ Important
There's a sweet spot for modularity. Too few modules = each module is too complex. Too many modules = too much overhead in communication between modules. The goal is to find the right balance.

Top-Down and Bottom-Up Strategies

πŸ”½ Top-Down Design

  • Start with the big picture (main module)
  • Progressively break it into sub-modules
  • Like writing an essay: outline β†’ sections β†’ paragraphs β†’ sentences
  • Good for understanding overall structure
  • Risk: low-level details may be overlooked

πŸ”Ό Bottom-Up Design

  • Start with basic, low-level components
  • Combine them to form higher-level modules
  • Like building with LEGO: small bricks β†’ sections β†’ complete model
  • Good for reusing existing components
  • Risk: may not fit together at the top level

In practice, most projects use a combination of both β€” top-down for overall architecture and bottom-up for implementing individual components.

Coupling

πŸ“– Definition
Coupling measures how much one module depends on another. Low coupling is good β€” it means modules are independent and changes in one won't break another.

Types of coupling (from BEST to WORST):

| Type | Description | Quality |
|---|---|---|
| Data Coupling | Modules share data through parameters (only what's needed) | 🟒 Best |
| Stamp Coupling | Modules share a composite data structure but use only part of it | 🟑 OK |
| Control Coupling | One module controls the flow of another by passing control info (flags) | 🟠 Poor |
| Common Coupling | Modules share global data | πŸ”΄ Bad |
| Content Coupling | One module directly modifies the internals of another | πŸ”΄ Worst |
Coupling Types

Fig: Types of Coupling β€” from best (Data) to worst (Content)
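
The gap between common coupling and data coupling shows up in a few lines. A Python sketch (function names are illustrative):

```python
# Common coupling (bad): modules communicate through shared global state.
_tax_rate = 0.18  # mutated by one function, silently read by another

def set_tax_rate(rate):
    global _tax_rate
    _tax_rate = rate

def total_with_global(price):
    # Depends on whoever last called set_tax_rate() anywhere in the program.
    return price * (1 + _tax_rate)

# Data coupling (good): everything the function needs arrives as a parameter.
def total_with_param(price, tax_rate):
    return price * (1 + tax_rate)
```

`total_with_param` can be tested and reused in isolation; `total_with_global` cannot be understood without knowing the whole program's history.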

Cohesion

πŸ“– Definition
Cohesion measures how closely related the elements within a single module are. High cohesion is good β€” it means the module does one thing well.

Types of cohesion (from WORST to BEST):

| Type | Description | Quality |
|---|---|---|
| Coincidental | Elements are randomly grouped (no relation) | πŸ”΄ Worst |
| Logical | Elements perform similar things (e.g., all I/O functions) | πŸ”΄ Poor |
| Temporal | Elements are executed at the same time (e.g., initialization) | 🟠 Low |
| Procedural | Elements follow a specific sequence of execution | 🟑 Medium |
| Communicational | Elements operate on the same data | 🟑 Good |
| Sequential | Output of one element is input to the next | 🟒 High |
| Functional | All elements contribute to a single, well-defined function | 🟒 Best |
Cohesion Types

Fig: Types of Cohesion β€” from worst (Coincidental) to best (Functional)
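
A quick illustrative contrast, with hypothetical function names: the first groups unrelated jobs (coincidental cohesion), the second serves exactly one purpose (functional cohesion):

```python
# Coincidental cohesion (worst): unrelated jobs lumped into one "module".
def misc_utils(action, value):
    if action == "square":
        return value * value
    if action == "greet":
        return f"Hello, {value}!"
    if action == "reverse":
        return value[::-1]

# Functional cohesion (best): every line serves one well-defined function.
def monthly_payment(principal, annual_rate, months):
    """Fixed loan installment (standard annuity formula)."""
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)
```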

πŸ’‘ Remember
Low Coupling + High Cohesion = Good Design. Think of it as: each module should mind its own business (high cohesion) and not be nosy about other modules (low coupling).

Structure Charts

A Structure Chart is a diagram that shows the hierarchy of modules in a system. It shows which module calls which, and what data flows between them.

Symbols used:

  • Rectangle: Represents a module
  • Arrow (line with arrowhead): Shows the calling relationship (A calls B)
  • Small arrow with open circle: Data flow
  • Small arrow with filled circle: Control flow (flag)
  • Diamond: Conditional call (module is called only if a condition is true)
  • Curved arrow: Repetitive call (loop)
Structure Chart Example

Fig: Structure Chart β€” module hierarchy with data and control flow

Data Flow Diagrams (DFD)

A Data Flow Diagram shows how data moves through a system. It focuses on the processes that transform data and the data stores where data is kept.

Four symbols:

| Symbol | Name | Meaning |
|---|---|---|
| Rectangle | External Entity | Source or destination of data (outside the system) |
| Rounded Rectangle / Circle | Process | Transforms input data into output data |
| Arrow | Data Flow | Direction of data movement |
| Open Rectangle (two lines) | Data Store | Where data is stored (database, file) |

Levels of DFD:

  • Level 0 (Context Diagram): Shows the entire system as one process with external entities. The "bird's eye view."
  • Level 1: Breaks the main process into sub-processes. Shows major functions.
  • Level 2+: Further decomposition of each sub-process.
Level-0 DFD Context Diagram

Fig: Level-0 Context Diagram β€” the system as a single process with external entities

πŸ“‹ Unit III β€” Quick Cheatsheet
  • Design principles = Abstraction, Decomposition, Modularity, Info Hiding
  • Goal = Low Coupling + High Cohesion
  • Coupling (bestβ†’worst) = Data β†’ Stamp β†’ Control β†’ Common β†’ Content
  • Cohesion (worstβ†’best) = Coincidental β†’ Logical β†’ Temporal β†’ Procedural β†’ Communicational β†’ Sequential β†’ Functional
  • Top-Down = big picture first; Bottom-Up = small parts first
  • Structure Chart = module hierarchy + data/control flow
  • DFD = how data flows; Level 0 (context) β†’ Level 1 β†’ Level 2+
Unit IV

Coding

Programming Principles & Guidelines

Coding is the phase where the design is translated into a working program. While it may seem like "just writing code," professional software engineering treats coding as a disciplined activity with clear principles:

  • Readability: Code is read far more often than it is written. Write code that another developer (or future you) can easily understand. Use meaningful variable names, proper indentation, and comments where the "why" isn't obvious.
  • Simplicity (KISS): Keep It Simple, Stupid. Choose the simplest solution that works. Clever, tricky code is hard to debug and maintain.
  • DRY (Don't Repeat Yourself): If you find yourself copying the same code in multiple places, extract it into a function. Duplicate code means duplicate bugs.
  • Single Responsibility: Each function or module should do one thing and do it well.
  • Input Validation: Always validate data coming from the user or external systems. Never trust input blindly β€” it's one of the most common sources of bugs and security vulnerabilities.

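The DRY and input-validation principles can be sketched together. `parse_age` and the `register_*` helpers below are hypothetical names, used only to illustrate the idea:

```python
def parse_age(raw):
    """Input validation: never trust external data blindly."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"age must be a number, got {raw!r}")
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

# DRY: the same validation is reused everywhere instead of copy-pasted,
# so a fix to the rule happens in exactly one place.
def register_patient(raw_age):
    return {"age": parse_age(raw_age)}

def register_visitor(raw_age):
    return {"age": parse_age(raw_age)}
```
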
Common Coding Errors

Being aware of common errors helps you avoid them. Here are the most frequent coding mistakes:

| Error Type | Description | Example |
|---|---|---|
| Off-by-one | Loop runs one too many or one too few times | Using <= instead of < in a loop |
| Uninitialized variables | Using a variable before assigning it a value | Reading from a variable that was never set |
| Null reference | Trying to access a member of an object that is null | Calling a method on a null pointer |
| Buffer overflow | Writing data beyond the allocated memory | Copying a 100-char string into a 50-char array |
| Type mismatch | Using incompatible data types in operations | Comparing a string to an integer |
| Logic errors | Code runs without crashing but produces wrong results | Using AND when you should use OR |
| Resource leaks | Not closing files, connections, or freeing memory | Opening a file but never closing it |
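
The off-by-one entry is worth seeing live. A Python sketch: here the `<=` bug silently sums one extra element rather than crashing, which makes it a logic error too:

```python
def sum_first_n_buggy(items, n):
    """Off-by-one: '<=' visits one index too many."""
    total = 0
    i = 0
    while i <= n:        # BUG: should be i < n
        total += items[i]
        i += 1
    return total

def sum_first_n(items, n):
    """Correct: range(n) stops at index n - 1."""
    total = 0
    for i in range(n):
        total += items[i]
    return total
```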

Structured Programming

πŸ“– Definition
Structured programming is a programming paradigm that uses only three control structures: sequence (do A then B), selection (if-else), and iteration (loops). It avoids the use of goto statements, which make code hard to follow (called "spaghetti code").

The three building blocks:

  1. Sequence: Statements executed one after another in order.
  2. Selection: Choose between paths based on a condition (if-else, switch).
  3. Iteration: Repeat a block of code (for, while, do-while).
πŸ’‘ Key Theorem
The BΓΆhm-Jacopini theorem proves that any algorithm can be written using only these three structures. You never need goto β€” the three structures are sufficient for any logic.
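
All three building blocks fit in one short, goto-free function (a made-up grading example):

```python
def letter_grades(scores):
    """Sequence, selection, and iteration; no goto required."""
    grades = []                      # sequence: statements run in order
    for s in scores:                 # iteration: repeat once per score
        if s >= 80:                  # selection: choose a path
            grades.append("A")
        elif s >= 50:
            grades.append("B")
        else:
            grades.append("F")
    return grades
```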

Information Hiding

Information hiding (proposed by David Parnas) is the principle that a module should hide its internal details from the outside world. Other modules only interact through a defined interface β€” they don't know how things work inside, only what they can do.

πŸ” Example
Think of a vending machine. You interact through buttons and a coin slot (interface). You don't know how the internal mechanism selects and drops your snack (implementation). If the manufacturer upgrades the internal mechanism, your interaction doesn't change.

Benefits:

  • Changes to one module don't cascade to others
  • Reduces complexity β€” developers only need to understand interfaces, not internals
  • Increases security β€” internal state can't be tampered with from outside

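The vending-machine idea maps naturally onto a class whose state stays internal. A sketch, with `VendingMachine` and its underscore-prefixed attributes as my own illustration (Python marks "internal" by convention, not enforcement):

```python
class VendingMachine:
    """Interface = insert_coin()/buy(); stock and credit are hidden details."""

    def __init__(self, stock):
        self._stock = dict(stock)   # leading underscore: internal state
        self._credit = 0

    def insert_coin(self, amount):
        self._credit += amount

    def buy(self, item, price):
        """Returns the item, or None if out of stock / not enough credit."""
        if self._credit < price or self._stock.get(item, 0) == 0:
            return None
        self._stock[item] -= 1
        self._credit -= price
        return item
```

The internal mechanism (a dict here) could be swapped for a database without changing any caller, which is exactly the benefit listed above.
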
Programming Practices & Coding Standards

Coding Standards

Coding standards are agreed-upon rules a development team follows for consistency. Common standards include:

  • Naming conventions: camelCase for variables, PascalCase for classes, UPPER_SNAKE for constants.
  • Indentation & spacing: Consistent use of tabs or spaces (usually 4 spaces).
  • Comment guidelines: Every function should have a header comment explaining purpose, parameters, and return value.
  • File organization: Imports at top, constants next, then class/function definitions.
  • Error handling: Use try-catch blocks, handle errors gracefully, never silently swallow exceptions.

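A short module written to the standards above; the function name and the key=value config format are hypothetical:

```python
"""Example module laid out per the team standards (illustrative)."""

MAX_LINE_LENGTH = 120  # UPPER_SNAKE for constants


def load_config(path):
    """Read a key=value config file.

    Args:
        path: file to read; lines starting with '#' are comments.
    Returns:
        dict of settings; empty dict if the file is missing.
    """
    settings = {}
    try:
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                line = line.strip()
                if line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    settings[key.strip()] = value.strip()
    except FileNotFoundError:
        # Handled deliberately (missing file means "use defaults");
        # a real project would also log this, never swallow it silently.
        return {}
    return settings
```
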
The Coding Process

Professional coding follows a structured process:

  1. Understand the design document for the module you're coding.
  2. Write pseudocode or outline the logic before writing actual code.
  3. Write the code following coding standards.
  4. Self-review the code for errors and style issues.
  5. Compile and fix syntax errors.
  6. Unit test the module.
  7. Code review by a peer (another developer reviews your code).
πŸ“‹ Unit IV β€” Quick Cheatsheet
  • Key principles = Readability, KISS, DRY, Single Responsibility, Input Validation
  • Common errors = Off-by-one, null reference, buffer overflow, logic errors, resource leaks
  • Structured programming = Sequence + Selection + Iteration (no goto)
  • Information hiding = modules hide internals, expose only interfaces
  • Coding process = Design β†’ Pseudocode β†’ Code β†’ Self-Review β†’ Compile β†’ Unit Test β†’ Peer Review
  • Standards = consistent naming, indentation, comments, error handling
Unit V

Testing

Error, Fault, and Failure

These three terms are often confused but mean different things. Understanding the difference is important:

| Term | Definition | Example |
|---|---|---|
| Error | A human mistake made during development (wrong logic, misunderstanding) | Developer uses ">" instead of ">=" |
| Fault (Bug/Defect) | The manifestation of an error in the code β€” the incorrect code itself | The line if (x > 10) when it should be if (x >= 10) |
| Failure | When the faulty code actually produces wrong output during execution | User enters 10, system says "invalid" when it should be "valid" |
⚠️ Key Insight
Not every fault causes a failure. A bug may exist in code that is rarely executed. But every failure is caused by a fault, and every fault is caused by an error. Error β†’ Fault β†’ Failure
Error Fault Failure Chain

Fig: Error β†’ Fault β†’ Failure chain
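
The chain can be demonstrated with the table's own `>` vs `>=` example. Note how input 11 hides the fault while input 10 turns it into a failure:

```python
def is_valid_quantity_faulty(x):
    return x > 10      # fault: the spec says 10 itself is valid

def is_valid_quantity_fixed(x):
    return x >= 10     # what the developer meant
```

For x = 11 both versions agree, so the fault causes no failure; only the boundary input 10 exposes it.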

Psychology of Testing

Testing is fundamentally a destructive activity β€” the goal is to find bugs, not to prove the software works. This is psychologically difficult because the developer naturally wants to prove their code is correct. Best practice: have someone other than the developer test the code. An independent tester approaches the software with a "break it" mindset.

Test Oracles, Test Cases & Test Criteria

πŸ“– Definition
Test Oracle: A mechanism to determine if a test has passed or failed. It provides the expected output for a given input. Without an oracle, you can run tests but can't know if the results are correct.

Test Case: A set of inputs, execution conditions, and expected results designed to test a specific aspect of the software. A good test case has:

  • Test Case ID β€” unique identifier
  • Description β€” what is being tested
  • Pre-conditions β€” state of the system before the test
  • Input data β€” what to feed the system
  • Expected output β€” what the system should produce
  • Post-conditions β€” state of the system after the test

Test Criteria: Rules that define when testing is "sufficient." Examples:

  • Coverage criteria: All statements, all branches, or all paths must be tested.
  • Fault-based criteria: Testing should detect specific types of faults.
  • Error-based criteria: Focus testing on areas where errors are most likely.

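A property-based oracle is often cheaper than hard-coding expected outputs. Sketch for a square-root routine (`check_with_oracle` is an illustrative name): instead of storing every expected value, the oracle checks the defining property result * result ~ input:

```python
import math

def check_with_oracle(sqrt_fn, inputs, tolerance=1e-9):
    """Oracle: a square-root result is correct iff result**2 matches the input."""
    failures = []
    for x in inputs:
        result = sqrt_fn(x)
        if abs(result * result - x) > tolerance:   # the oracle's verdict
            failures.append((x, result))
    return failures
```
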
Black-Box Testing

Black-Box vs White-Box Testing

Fig: Black-Box vs White-Box Testing comparison

πŸ“– Definition
Black-box testing (also called functional testing) tests the software without looking at the internal code. You only know the inputs and expected outputs β€” the system is a "black box."

Key Techniques:

1. Equivalence Class Partitioning

Divide all possible inputs into groups (classes) where each class is expected to behave the same way. Test one value from each class instead of testing every possible input.

πŸ” Example
A field accepts ages 18-60. Classes: Valid class: 18-60 (test: 30). Invalid class 1: <18 (test: 10). Invalid class 2: >60 (test: 75). You only need 3 tests instead of testing every number.

2. Boundary Value Analysis

Bugs tend to cluster at the boundaries of input ranges. Test values at, just below, and just above each boundary.

πŸ” Example
For ages 18-60: test 17, 18, 19 (lower boundary) and 59, 60, 61 (upper boundary). These 6 tests catch most boundary-related bugs.
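
Both techniques can be written down as small test tables for the age rule above (`is_valid_age` and `run_cases` are illustrative names):

```python
def is_valid_age(age):
    """The rule under test: ages 18-60 inclusive are valid."""
    return 18 <= age <= 60

# Equivalence class partitioning: one representative per class.
ecp_cases = [(10, False), (30, True), (75, False)]

# Boundary value analysis: at, just below, and just above each boundary.
bva_cases = [(17, False), (18, True), (19, True),
             (59, True), (60, True), (61, False)]

def run_cases(fn, cases):
    """Return the (input, expected) pairs the function gets wrong."""
    return [(x, expected) for x, expected in cases if fn(x) != expected]
```

An off-by-one bug such as `18 < age <= 60` passes every equivalence-class test but is caught by the boundary case 18, which is exactly why BVA complements ECP.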

White-Box Testing

πŸ“– Definition
White-box testing (also called structural testing) tests the internal structure of code. The tester has full access to the source code and designs tests to exercise specific code paths.

Key Coverage Criteria:

  • Statement Coverage: Every line of code is executed at least once. (Weakest criterion)
  • Branch Coverage: Every decision (if/else) takes both the true and false path at least once.
  • Path Coverage: Every possible path through the code is tested. (Strongest but often impractical for large programs)
  • Condition Coverage: Each individual boolean condition is evaluated to both true and false.
πŸ’‘ Remember
Statement Coverage βŠ‚ Branch Coverage βŠ‚ Path Coverage. 100% statement coverage does NOT guarantee 100% branch coverage. Branch coverage is usually the practical minimum standard.
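
A two-line function shows why statement coverage is weaker than branch coverage (illustrative example, assuming members pay half price):

```python
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.5   # members pay half
    return price

# The single test apply_discount(100, True) executes every statement
# (100% statement coverage) yet never takes the False branch of the if.
# Branch coverage demands a second test with is_member=False.
```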

Levels of Testing

| Level | What is Tested | Who Tests | Purpose |
|---|---|---|---|
| Unit Testing | Individual modules/functions | Developer | Verify each unit works correctly in isolation |
| Integration Testing | Interactions between modules | Developer / QA | Verify modules work together correctly |
| System Testing | The complete system | QA team | Verify the system meets all requirements |
| Acceptance Testing | The complete system, against the client's needs | Client/User | Verify the system is acceptable for delivery |

Integration Testing Strategies:

  • Big Bang: Integrate all modules at once and test. Simple but very hard to isolate errors.
  • Top-Down: Start from the top module, stub lower modules. Tests high-level logic first.
  • Bottom-Up: Start from the lowest modules, use drivers. Tests basic components first.
  • Sandwich (Hybrid): Combine top-down and bottom-up approaches.
Testing Levels Pyramid

Fig: Levels of Testing β€” Unit β†’ Integration β†’ System β†’ Acceptance
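
A top-down integration stub can be as small as a function returning canned answers. A sketch with hypothetical names: the high-level `place_order` logic is exercised before any real payment module exists:

```python
def charge_card_stub(card, amount):
    """Stub standing in for the unfinished payment module."""
    return {"status": "approved", "amount": amount}

def place_order(cart, card, charge_fn=charge_card_stub):
    """High-level module under test; cart is a list of (name, price) pairs."""
    total = sum(price for _, price in cart)
    receipt = charge_fn(card, total)
    if receipt["status"] != "approved":
        return None
    return {"items": [name for name, _ in cart], "paid": receipt["amount"]}
```

In bottom-up integration the roles reverse: the low-level module is real and a throwaway *driver* script calls it.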

πŸ“‹ Unit V β€” Quick Cheatsheet
  • Error β†’ Fault β†’ Failure (human mistake β†’ code bug β†’ wrong output)
  • Test Oracle = mechanism that tells you the expected output
  • Black-box = test inputs/outputs without looking at code
  • White-box = test based on internal code structure
  • Equivalence Partitioning = divide inputs into classes, test one per class
  • Boundary Value Analysis = test at edges of input ranges
  • Coverage: Statement βŠ‚ Branch βŠ‚ Path; Branch is practical minimum
  • Levels: Unit β†’ Integration β†’ System β†’ Acceptance
  • Testing is destructive β€” the goal is to find bugs, not prove correctness
Unit VI

Software Testing Process, Maintenance & Metrics

Test Plan

πŸ“– Definition
A Test Plan is a document that describes the scope, approach, resources, and schedule of testing activities. It answers: What will be tested? How? By whom? When? What are the pass/fail criteria?

A typical test plan includes:

  • Test Plan ID & Name
  • Scope: What features will and will not be tested.
  • Test Strategy: Types of testing (unit, integration, system, acceptance) and techniques (black-box, white-box).
  • Entry & Exit Criteria: Conditions to start and stop testing. E.g., "Entry: all modules coded; Exit: 95% test cases pass, no critical bugs."
  • Test Environment: Hardware, software, network configuration needed.
  • Resources & Schedule: Who tests what, and the timeline.
  • Risk Assessment: What could go wrong and how to mitigate it.

Test Case Specifications & Execution

Test Case Specification documents the detailed test cases in a structured format:

| Field | Description | Example |
|---|---|---|
| TC-ID | Unique identifier | TC-LOGIN-001 |
| Description | What this test verifies | Verify login with valid credentials |
| Pre-conditions | Required setup | User account exists in database |
| Steps | Actions to perform | 1. Open login page 2. Enter email 3. Enter password 4. Click Login |
| Input Data | Test inputs | Email: test@mail.com, Pwd: Pass@123 |
| Expected Result | What should happen | Redirected to dashboard, welcome message shown |
| Status | Pass / Fail / Blocked | Pass |

Test Execution & Analysis: After running test cases, results are recorded and analyzed. If a test fails, the tester documents the actual vs. expected output, the environment, and steps to reproduce the issue.

Defect Logging & Tracking

When a test case fails, a defect report is filed. It includes:

  • Defect ID β€” unique identifier
  • Summary β€” one-line description
  • Severity β€” how bad is it? (Critical / Major / Minor / Cosmetic)
  • Priority β€” how urgently should it be fixed? (High / Medium / Low)
  • Steps to Reproduce β€” exact steps to trigger the bug
  • Environment β€” OS, browser, device used
  • Attachments β€” screenshots, logs
  • Status β€” New β†’ Assigned β†’ Fixed β†’ Verified β†’ Closed
πŸ’‘ Defect Lifecycle
New β†’ Open β†’ Assigned β†’ Fixed β†’ Retested β†’ Verified β†’ Closed. If the fix doesn't work, the defect is Reopened and goes back to Assigned.

Fig: Defect Lifecycle β€” New β†’ Open β†’ Assigned β†’ Fixed β†’ Retested β†’ Closed
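The lifecycle above can be sketched as a transition table that rejects illegal jumps. The state names come from the notes; everything else is illustrative:

```python
# Sketch of the defect lifecycle as a state machine: each state maps to the
# states a defect may legally move to next.

TRANSITIONS = {
    "New": ["Open"],
    "Open": ["Assigned"],
    "Assigned": ["Fixed"],
    "Fixed": ["Retested"],
    "Retested": ["Verified", "Reopened"],  # retest fails -> Reopened
    "Verified": ["Closed"],
    "Reopened": ["Assigned"],              # goes back to the developer
    "Closed": [],
}

def advance(state, next_state):
    """Move a defect to next_state, rejecting illegal transitions."""
    if next_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {next_state}")
    return next_state

# Happy path: the fix works on the first retest.
state = "New"
for step in ["Open", "Assigned", "Fixed", "Retested", "Verified", "Closed"]:
    state = advance(state, step)
```

Encoding the lifecycle this way is how bug trackers typically enforce their workflows: a tester cannot close a defect that was never retested.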

Software Maintenance

πŸ“– Definition
Software maintenance is the process of modifying software after delivery to correct faults, improve performance, or adapt to a changed environment. Maintenance consumes 60-70% of total software lifecycle cost.

Four types of maintenance:

| Type | Purpose | Example | % of Effort |
| --- | --- | --- | --- |
| Corrective | Fix bugs found after delivery | Fixing a crash that occurs on leap years | ~20% |
| Adaptive | Adapt to environment changes | Updating for a new OS version or database | ~25% |
| Perfective | Improve performance or add features | Adding a search feature users requested | ~50% |
| Preventive | Prevent future problems | Refactoring code to improve maintainability | ~5% |

Fig: Software Maintenance Types β€” Perfective dominates at ~50%

COCOMO Model

πŸ“– Definition
COCOMO (Constructive Cost Model), developed by Barry Boehm, estimates the effort, time, and cost of a software project based on the size of the code (measured in KLOC β€” thousands of lines of code).

Three levels of COCOMO:

| Level | Based On | Accuracy |
| --- | --- | --- |
| Basic COCOMO | Just project size (KLOC) | Rough estimate |
| Intermediate COCOMO | Size + 15 cost drivers (product, hardware, personnel, project attributes) | Better estimate |
| Detailed COCOMO | Size + cost drivers + phase-by-phase estimation | Most accurate |

Three project modes:

| Mode | Description | Effort Formula | Time Formula |
| --- | --- | --- | --- |
| Organic | Small teams, familiar problem, relaxed deadlines | E = 2.4 Γ— (KLOC)^1.05 | T = 2.5 Γ— E^0.38 |
| Semi-detached | Medium teams, mix of experienced and new people | E = 3.0 Γ— (KLOC)^1.12 | T = 2.5 Γ— E^0.35 |
| Embedded | Tight constraints, complex, hardware-coupled | E = 3.6 Γ— (KLOC)^1.20 | T = 2.5 Γ— E^0.32 |

Fig: COCOMO Modes β€” Organic, Semi-Detached, Embedded

πŸ” Example
A project has 32 KLOC and is developed in Organic mode.
Effort = 2.4 Γ— (32)^1.05 = 2.4 Γ— 38.06 β‰ˆ 91.3 person-months
Time = 2.5 Γ— (91.3)^0.38 = 2.5 Γ— 5.56 β‰ˆ 13.9 months
People needed = 91.3 / 13.9 β‰ˆ 7 people
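The worked example above can be wrapped in a small calculator. A sketch in Python; the (a, b, c, d) coefficients are the standard Boehm values from the modes table:

```python
# Basic COCOMO: effort E = a * KLOC^b (person-months),
# development time T = c * E^d (months), average staffing = E / T.

MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Return (effort in person-months, time in months, avg. team size)."""
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b
    time = c * effort ** d
    return effort, time, effort / time

# The 32 KLOC organic example from the notes.
effort, time, people = basic_cocomo(32, "organic")
```

Running it reproduces the example: roughly 91 person-months over about 14 months with a team of about 7.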

Function Point Metric

πŸ“– Definition
Function Points (FP) measure the size of software based on functionality delivered to the user, not lines of code. This makes it language-independent β€” a feature has the same function points whether coded in Java, Python, or C.

Five function types counted:

| Type | Description | Example |
| --- | --- | --- |
| External Inputs (EI) | Data entering the system from outside | Login form, registration form |
| External Outputs (EO) | Data leaving the system to outside | Reports, invoices, error messages |
| External Inquiries (EQ) | Input-output pairs (query β†’ result) | Search by name, view profile |
| Internal Logical Files (ILF) | Data stored and maintained by the system | User table, product database |
| External Interface Files (EIF) | Data referenced from external systems | Third-party API data, shared database |

Fig: Function Point Analysis β€” 5 function types (EI, EO, EQ, ILF, EIF)

Calculation process:

  1. Count each function type and classify as Simple, Average, or Complex.
  2. Multiply counts by standard weights to get Unadjusted Function Points (UFP).
  3. Evaluate 14 General System Characteristics (GSCs) β€” each rated 0-5 β€” to get the Value Adjustment Factor (VAF).
  4. Calculate: FP = UFP Γ— (0.65 + 0.01 Γ— sum of GSCs)
Function Point Formula: FP = UFP Γ— (0.65 + 0.01 Γ— Ξ£ GSCα΅’)
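Steps 1-4 above can be sketched in Python. The average-complexity weights used here (EI=4, EO=5, EQ=4, ILF=10, EIF=7) are the standard IFPUG values; the counts and GSC ratings are made-up inputs:

```python
# Function Point calculation sketch, using average-complexity weights only
# (a full count would classify each function as Simple/Average/Complex).

AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, gsc_ratings):
    """counts: {function type: how many}; gsc_ratings: 14 values, each 0-5."""
    # Step 1-2: weighted sum of counts gives Unadjusted Function Points.
    ufp = sum(AVG_WEIGHTS[t] * n for t, n in counts.items())
    # Step 3: Value Adjustment Factor from the 14 GSC ratings.
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    # Step 4: adjusted function points.
    return ufp * vaf

# Example: a small system with all 14 GSCs rated 3 (VAF = 0.65 + 0.42 = 1.07).
fp = function_points({"EI": 6, "EO": 4, "EQ": 3, "ILF": 2, "EIF": 1}, [3] * 14)
```

Here UFP = 24 + 20 + 12 + 20 + 7 = 83, so FP = 83 Γ— 1.07 = 88.81 — the same number regardless of the implementation language, which is the whole point of the metric.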
πŸ“‹ Unit VI β€” Quick Cheatsheet
  • Test Plan = scope, strategy, entry/exit criteria, environment, schedule
  • Test Case = ID + preconditions + steps + input + expected result + status
  • Defect lifecycle = New β†’ Open β†’ Assigned β†’ Fixed β†’ Retested β†’ Verified β†’ Closed
  • Maintenance types = Corrective (fix bugs), Adaptive (new env), Perfective (improve), Preventive (refactor)
  • Maintenance consumes 60-70% of total lifecycle cost; Perfective is ~50% of maintenance
  • COCOMO modes = Organic, Semi-detached, Embedded (increasing constraint)
  • COCOMO formula = E = a Γ— (KLOC)^b; T = c Γ— E^d
  • Function Points = measure size by functionality (EI, EO, EQ, ILF, EIF)
  • FP formula = UFP Γ— (0.65 + 0.01 Γ— Ξ£ GSCs)
Made with ❀ by Ansh Sharma