
Equivalence partitioning, boundary analysis, and exploratory heuristics

How to Use AI to Generate Equivalence Partition Sets from Requirements?

Equivalence partitioning is a systematic test design technique that divides the input domain into classes where members of each class are expected to be treated identically by the system. The technique reduces the total number of test cases needed while maintaining coverage of all distinct behaviors. AI handles equivalence partitioning well when you give it the right frame, but it will generate overlapping or incomplete partitions if you prompt it naively.

The Core Principle AI Needs to Work With

For AI to generate good equivalence partitions, it needs to understand that the goal is to identify groups where any one value from the group is as good as any other for testing purposes. The natural language framing that works best: "Identify classes of input where the system should behave identically for any value in the class."

Partition Analysis for a Single Field

Prompt:

Apply equivalence partitioning to the following input field. Identify all valid and invalid equivalence classes, select one representative value from each class, and explain why that value represents the entire class.

Field: [field name]
Rules:
- [List all validation rules from the spec]

For each equivalence class, output:
- Class ID (EC-01, EC-02, etc.)
- Class description (what all values in this class have in common)
- Valid or Invalid class
- Representative test value
- Behavior expected when this value is used
- Which validation rule or behavior distinguishes this class from adjacent classes

Example output for a "user age" field with rules (integer, min 13, max 120):

Class ID | Description | Type | Representative Value | Expected Behavior
EC-01 | Age below minimum (< 13) | Invalid | 12 | Validation error: age below minimum
EC-02 | Age at or above minimum, at or below maximum (13–120) | Valid | 25 | Accepted
EC-03 | Age above maximum (> 120) | Invalid | 121 | Validation error: age above maximum
EC-04 | Non-integer input | Invalid | 17.5 | Validation error: must be whole number
EC-05 | Non-numeric input | Invalid | "abc" | Validation error: must be a number
EC-06 | Empty/null | Invalid | (empty) | Validation error: required field
EC-07 | Negative number | Invalid | -1 | Validation error: age below minimum
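
A table like this maps almost directly onto a data-driven test. Here is a minimal pytest sketch of that mapping; `validate_age` is a hypothetical stand-in for whatever validation entry point your system actually exposes, so treat the function name, return shape, and error strings as placeholders:

```python
# Minimal sketch: turning AI-generated equivalence classes into a
# parametrized pytest suite. `validate_age` is a hypothetical placeholder
# for the real validation entry point.
import pytest

def validate_age(value):
    """Placeholder validator: returns (accepted, error_message)."""
    if value is None or value == "":
        return False, "required field"
    try:
        number = float(value)
    except (TypeError, ValueError):
        return False, "must be a number"
    if number != int(number):
        return False, "must be whole number"
    if number < 13:
        return False, "age below minimum"
    if number > 120:
        return False, "age above maximum"
    return True, ""

# One row per equivalence class: (class_id, input_value, expected_ok, expected_error)
EQUIVALENCE_CLASSES = [
    ("EC-01", 12,    False, "age below minimum"),
    ("EC-02", 25,    True,  ""),
    ("EC-03", 121,   False, "age above maximum"),
    ("EC-04", 17.5,  False, "must be whole number"),
    ("EC-05", "abc", False, "must be a number"),
    ("EC-06", "",    False, "required field"),
    ("EC-07", -1,    False, "age below minimum"),
]

@pytest.mark.parametrize("class_id,value,expected_ok,expected_error", EQUIVALENCE_CLASSES)
def test_age_equivalence_class(class_id, value, expected_ok, expected_error):
    ok, error = validate_age(value)
    assert ok is expected_ok, f"{class_id}: unexpected validity for {value!r}"
    if not expected_ok:
        assert expected_error in error, f"{class_id}: wrong error for {value!r}"
```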

Partition Analysis for Multi-Field Forms

When a form has multiple interacting fields, equivalence partitions need to account for combinations. AI can help map these systematically:

Prompt:

Perform equivalence partitioning for the following multi-field form. First, partition each field independently. Then identify the most important field combinations to test, focusing on combinations where the interaction between fields changes the expected system behavior.

Form fields and rules:
[PASTE FIELD LIST WITH VALIDATION RULES]

Step 1: List equivalence classes for each field independently.
Step 2: Identify field interaction scenarios — cases where the value in one field affects how another field is validated.
Step 3: For each interaction scenario, generate the specific combination of representative values that exercises that interaction.
Step 4: Generate a final test case list that covers all independent classes plus all interaction scenarios, minimizing redundancy.
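
Step 3 is where most of the value is. As a concrete, hypothetical illustration of an interaction scenario, here is a small pytest sketch for a form where the country field changes how the postal code field is validated; the rules, field names, and `validate_shipping_address` function are invented for the example:

```python
# Minimal sketch of an interaction scenario: one field (country) changes how
# another field (postal code) is validated. All rules here are assumed.
import re
import pytest

POSTAL_RULES = {
    "US": r"^\d{5}$",                    # assumed rule: 5 digits
    "CA": r"^[A-Z]\d[A-Z] \d[A-Z]\d$",   # assumed rule: A1A 1A1
}

def validate_shipping_address(country, postal_code):
    """Placeholder validator: postal format depends on the country field."""
    pattern = POSTAL_RULES.get(country)
    if pattern is None:
        return False, "unsupported country"
    if not re.match(pattern, postal_code):
        return False, "postal code invalid for country"
    return True, ""

# Each row exercises an interaction, not just an independent class:
# the same postal code flips between valid and invalid as the country changes.
INTERACTION_CASES = [
    ("US", "94105",   True),
    ("CA", "94105",   False),   # valid US format, wrong for Canada
    ("CA", "M5V 2T6", True),
    ("US", "M5V 2T6", False),   # valid Canadian format, wrong for US
    ("FR", "75001",   False),   # country outside the supported set
]

@pytest.mark.parametrize("country,postal_code,expected_ok", INTERACTION_CASES)
def test_country_postal_interaction(country, postal_code, expected_ok):
    ok, _ = validate_shipping_address(country, postal_code)
    assert ok is expected_ok
```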

Partitioning Non-Numeric Domains

Equivalence partitioning isn't just for numbers. AI can partition string formats, enumerated types, and state-based domains:

Prompt:

Apply equivalence partitioning to the following non-numeric input domain.

Field: Email address
System behavior to partition:
- The system accepts email addresses that conform to RFC 5322 format
- The system rejects addresses that don't conform
- The domain portion is used for email routing (some domains are blocked)
- The system normalizes to lowercase before storing

Identify equivalence classes for:
1. Format validity (structurally valid, structurally invalid)
2. Domain validity (allowed domain, blocked domain)
3. Normalization behavior (mixed case, all lowercase, all uppercase)
4. Length constraints (if any)
5. Special character handling

For each class, provide a representative value and explain what distinct system behavior it exercises.

Validating AI-Generated Partitions

After generating partitions, ask the AI to validate its own work:

Prompt:

Review the equivalence partitions you generated for completeness:
1. Are there any gaps — input values that don't fall into any defined class?
2. Are there any overlaps — values that could belong to more than one class?
3. Is each class truly equivalent — would the system treat all values in the class identically?
4. Are there any classes where the expected behavior isn't clearly defined by the requirements?

List any gaps, overlaps, or ambiguities found.

Learning Tip: After using AI to generate equivalence partitions, run a "what about X?" session: spend five minutes listing any inputs you can think of that don't cleanly fit into one of the AI-generated classes. If you find values that don't fit, the partitioning is incomplete and you need a new class. This quick adversarial check catches the partitioning gaps that AI commonly misses on complex domains, especially string-typed fields with intricate format rules.


How to Automate Boundary Value Analysis with AI?

Boundary value analysis (BVA) is the complement to equivalence partitioning: while EP identifies representative values from each class, BVA focuses on the boundaries between classes, where defects are most likely to lurk. AI can generate exhaustive boundary sets in seconds — but you need to understand the different BVA models to prompt correctly.

Two-Value vs. Three-Value BVA

Two-value BVA: Test the boundary value itself and the value just outside it, in the adjacent partition.
- For min=13: test 13 (valid boundary) and 12 (just outside, invalid)

Three-value BVA: Test just below boundary, at boundary, and just above boundary.
- For min=13: test 12 (invalid), 13 (valid boundary), 14 (just inside valid)

Three-value BVA is more thorough and catches off-by-one errors in both directions. Always specify which model you want in your prompt.
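
The mechanics of three-value BVA are simple enough to express in a few lines of code, which also makes a handy cross-check on AI-generated boundary sets. A minimal Python sketch for closed integer ranges, reusing the min 13 / max 120 age rule as the example:

```python
# Minimal sketch: deriving three-value boundary test values from integer
# min/max constraints on a closed range. The example constraints are illustrative.
def three_value_boundaries(minimum, maximum, step=1):
    """Return (value, is_valid) pairs for three-value BVA on a closed range."""
    candidates = [
        minimum - step, minimum, minimum + step,   # around the lower boundary
        maximum - step, maximum, maximum + step,   # around the upper boundary
    ]
    return [(value, minimum <= value <= maximum) for value in candidates]

if __name__ == "__main__":
    # "User age" field with rules: integer, min 13, max 120
    for value, is_valid in three_value_boundaries(13, 120):
        print(f"{value:>4}  {'valid' if is_valid else 'invalid'}")
    # Expected: 12 invalid, 13 valid, 14 valid, 119 valid, 120 valid, 121 invalid
```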

Prompt for three-value BVA:

Apply three-value boundary value analysis to the following field constraints. For each boundary, generate three test values: one just below (or before) the boundary, one at the boundary, and one just above (or after) the boundary. Specify whether each value is valid or invalid.

Field constraints:
[PASTE CONSTRAINTS]

For numeric ranges, "just below/above" means ±1 for integers, ±0.001 for decimals (use the smallest meaningful unit for the domain).
For string lengths, "just below/above" means ±1 character.
For date ranges, "just below/above" means ±1 day (or ±1 second if time is relevant).

Output: a table with columns: Boundary Type | Test Value | Valid/Invalid | Expected Result

BVA for Different Data Types

Prompt for date boundaries:

Apply boundary value analysis to the following date field rules:
- Date must be in the future (after today's date)
- Date must be within the next 365 days
- Time component is ignored; only date matters
- Today's date for this analysis: [CURRENT DATE]

Generate boundary test cases for:
1. The lower boundary (today / tomorrow)
2. The upper boundary (365 days from now / 366 days from now)
3. Common edge dates: last day of month, last day of year, Feb 28/29 in leap year

For each test value, specify: the date, whether it's valid, and the expected system response.
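
Date arithmetic is exactly where AI-generated boundaries deserve a cross-check (see the Learning Tip at the end of this section). A minimal Python sketch that computes the boundary dates this prompt asks for, with today's date passed in explicitly to mirror the [CURRENT DATE] placeholder:

```python
# Minimal sketch: computing the date boundary values the prompt asks for,
# so you can verify the AI's arithmetic. `today` is supplied explicitly.
from datetime import date, timedelta

def date_boundaries(today):
    """Boundary values for: date must be after today and within the next 365 days."""
    return {
        "today (invalid, not in future)":          today,
        "tomorrow (lower boundary, valid)":        today + timedelta(days=1),
        "365 days out (upper boundary, valid)":    today + timedelta(days=365),
        "366 days out (just past limit, invalid)": today + timedelta(days=366),
    }

if __name__ == "__main__":
    for label, value in date_boundaries(date(2024, 2, 28)).items():
        print(f"{value.isoformat()}  {label}")
    # Running this with a date near Feb 29 in a leap year is a quick way to
    # catch off-by-one mistakes in AI-generated boundary dates.
```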

Prompt for string length boundaries:

Apply boundary value analysis to all text input fields in the following form. For each field with a length constraint, generate test cases at: max length - 1 character, max length (exact), max length + 1 character. Also test min length - 1, min length, min length + 1.

Fields:
- Username: 3–30 characters
- Bio: 0–500 characters (optional)
- Password: 8–72 characters
- Display Name: 1–50 characters

For each boundary test, provide: the exact string length being tested, a sample string of that length, and the expected validation result.
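
Length boundaries are also easy to generate programmatically, which makes a good double-check on AI output. A minimal pytest sketch using the field limits above; `is_valid_length` is a placeholder for the real form validation:

```python
# Minimal sketch: exact-length strings at each length boundary of the form fields.
import pytest

FIELD_LIMITS = {          # (min_length, max_length) from the form rules
    "username":     (3, 30),
    "bio":          (0, 500),
    "password":     (8, 72),
    "display_name": (1, 50),
}

def is_valid_length(field, value):
    """Placeholder length check standing in for the real form validation."""
    minimum, maximum = FIELD_LIMITS[field]
    return minimum <= len(value) <= maximum

def length_boundary_cases(field):
    minimum, maximum = FIELD_LIMITS[field]
    lengths = {minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1}
    return [
        (field, "x" * length, minimum <= length <= maximum)
        for length in sorted(lengths)
        if length >= 0   # a string of length -1 cannot exist (bio has min 0)
    ]

ALL_CASES = [case for field in FIELD_LIMITS for case in length_boundary_cases(field)]

@pytest.mark.parametrize("field,value,expected_valid", ALL_CASES)
def test_length_boundary(field, value, expected_valid):
    assert is_valid_length(field, value) is expected_valid
```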

Boundary Analysis for API Endpoints

BVA is equally important for API testing. AI can generate boundary test data for API parameters:

Prompt:

Apply boundary value analysis to the following API endpoint parameters. Generate a test data set that exercises all boundaries.

Endpoint: POST /api/products
Parameters:
- price: number, min 0.01, max 99999.99, 2 decimal places
- quantity: integer, min 1, max 10000
- sku: string, exactly 8 characters, alphanumeric
- category_id: integer, must reference existing category, range 1-999

For each parameter:
1. Generate boundary test values (at, just-below, just-above for each limit)
2. Specify the expected HTTP status code and response for each boundary value
3. Flag any boundary combinations that might interact (e.g., maximum price AND maximum quantity simultaneously)

Output as a test data table, then as a set of complete request bodies I can use directly in API testing.
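
As a sketch of what the resulting test data might look like, here is a Python script that builds one complete request body per boundary value for this endpoint. The expected status codes (201/400), the baseline body, and the assumption that category IDs 1 through 999 exist are all placeholders to adjust to your API's actual behavior:

```python
# Minimal sketch: boundary request bodies for POST /api/products, derived from
# the parameter limits above. Expected statuses and the baseline body are assumed.
import json

VALID_BODY = {"price": 10.00, "quantity": 5, "sku": "ABCD1234", "category_id": 42}

# (description, field, boundary value, assumed expected HTTP status)
# Just-inside values (0.02, 99999.98, quantity 2 and 9999, ...) are omitted for brevity.
BOUNDARY_CASES = [
    ("price at minimum",              "price",       0.01,        201),
    ("price just below minimum",      "price",       0.00,        400),
    ("price at maximum",              "price",       99999.99,    201),
    ("price just above maximum",      "price",       100000.00,   400),
    ("quantity at minimum",           "quantity",    1,           201),
    ("quantity just below minimum",   "quantity",    0,           400),
    ("quantity at maximum",           "quantity",    10000,       201),
    ("quantity just above maximum",   "quantity",    10001,       400),
    ("sku exactly 8 chars",           "sku",         "ABCD1234",  201),
    ("sku 7 chars",                   "sku",         "ABCD123",   400),
    ("sku 9 chars",                   "sku",         "ABCD12345", 400),
    ("category_id at lower bound",    "category_id", 1,           201),
    ("category_id below range",       "category_id", 0,           400),
    ("category_id at upper bound",    "category_id", 999,         201),
    ("category_id above range",       "category_id", 1000,        400),
]

def build_request_bodies():
    """Yield complete request bodies, each varying one field to a boundary value."""
    for description, field, value, expected_status in BOUNDARY_CASES:
        body = dict(VALID_BODY, **{field: value})
        yield description, body, expected_status

if __name__ == "__main__":
    for description, body, expected_status in build_request_bodies():
        print(f"{expected_status}  {description}: {json.dumps(body)}")
    # To execute against a real environment, send each body with e.g.
    # requests.post(f"{BASE_URL}/api/products", json=body) and assert on the status.
```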

Combining EP and BVA in One Prompt

For maximum coverage efficiency, run both techniques in sequence:

Prompt:

Perform a combined equivalence partitioning and boundary value analysis for the following field or system. First identify all equivalence classes (with representatives), then identify all boundaries between classes and generate three-value boundary test cases at each boundary.

[PASTE FIELD/SYSTEM DESCRIPTION WITH RULES]

Output structure:
## Equivalence Classes
[Table of classes with representative values]

## Boundaries
[For each class boundary: boundary name, three test values, expected behavior for each]

## Final Test Set
[Deduplicated list of all test cases from both techniques, with ID, value, expected result]

Learning Tip: When you use AI for BVA on financial or time-sensitive data, always manually verify the boundary values before executing the tests. AI occasionally makes off-by-one errors on date calculations, gets confused about whether a range is inclusive or exclusive, or uses the wrong unit (milliseconds vs. seconds, or cents vs. dollars). For a field where "max value is $9,999.99," the AI must correctly understand whether the system stores this as a float or integer cents — and it may not have that context unless you provide it. Add a "verify boundary values before executing" note to your BVA test case set.
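
The float-versus-integer-cents point is easy to demonstrate. A short, illustrative Python sketch (the $9,999.99 limit is just the example from the tip, not any particular system's rule):

```python
# Minimal sketch of why the storage representation matters when verifying a
# "$9,999.99 maximum" boundary. The limit and names are illustrative.
from decimal import Decimal

# Binary floats accumulate rounding error, so money boundaries built with
# float arithmetic can drift off by the smallest unit:
print(0.1 + 0.2 == 0.3)                                     # False
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))    # True

# If the system stores integer cents, the boundary set should be in cents:
MAX_PRICE_CENTS = 999_999            # $9,999.99 as integer cents
print([MAX_PRICE_CENTS - 1, MAX_PRICE_CENTS, MAX_PRICE_CENTS + 1])
# [999998, 999999, 1000000]

# If it stores a decimal dollars value instead, step by the smallest unit:
MAX_PRICE_DOLLARS = Decimal("9999.99")
print([MAX_PRICE_DOLLARS - Decimal("0.01"),
       MAX_PRICE_DOLLARS,
       MAX_PRICE_DOLLARS + Decimal("0.01")])
# [Decimal('9999.98'), Decimal('9999.99'), Decimal('10000.00')]
```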


How to Apply Exploratory Heuristics Like SFDPOT and FCC CUTS VIDS with AI?

Exploratory testing heuristics are mental frameworks that guide experienced testers toward areas where bugs are likely to hide. Heuristics like SFDPOT (Structure, Function, Data, Platform, Operations, Time) and FCC CUTS VIDS encode decades of collective testing experience into systematic checklists. AI can apply these heuristics systematically to any feature description, surfacing in minutes test ideas that a tester working through the checklist by hand would take far longer to produce.

SFDPOT Applied with AI

SFDPOT is a risk analysis heuristic that covers six dimensions of a system:
- S — Structure: What is the feature made of? (UI elements, data stores, integrations)
- F — Function: What does the feature do? (calculations, transformations, state changes)
- D — Data: What data does it use? (inputs, outputs, persistence, formats)
- P — Platform: Where does it run? (browsers, OS, devices, screen sizes)
- O — Operations: How is it used? (workflows, sequences, frequency, concurrency)
- T — Time: When does it behave differently? (timeouts, caching, expiry, scheduled events)

Prompt:

Apply the SFDPOT heuristic to the following feature to generate exploratory test ideas. For each SFDPOT dimension, generate specific test ideas relevant to this feature — not generic checklist items, but actionable ideas based on the feature's specific behaviors.

Feature: [PASTE FEATURE DESCRIPTION AND AC]

For each dimension, output:
- 3–6 specific test ideas
- The specific aspect of the feature this tests
- Why this area is risky (what kind of bug this might find)

Format:
## Structure
[Test ideas about UI elements, components, data structures]

## Function
[Test ideas about calculations, logic, state transitions]

## Data
[Test ideas about input formats, data boundaries, persistence]

## Platform
[Test ideas about browser/device/OS behavior]

## Operations
[Test ideas about user workflows, sequences, concurrent use]

## Time
[Test ideas about timeouts, caching, expiry, scheduling]

FCC CUTS VIDS Applied with AI

FCC CUTS VIDS is a more granular heuristic covering:
- F — Fake Data: What happens with spoofed, injected, or simulated data?
- C — Calculations: What happens with arithmetic, aggregations, and derived values?
- C — Configuration: What changes with different settings, feature flags, or environments?
- U — User: What varies by user type, permission, locale, or accessibility needs?
- T — Time: What changes with different dates, timezones, or timing sequences?
- S — Storage: What happens at capacity limits, with corrupt data, or across persistence boundaries?
- V — Volume: What happens with large datasets, many records, or bulk operations?
- I — Interruptions: What happens if the user interrupts a flow (back button, refresh, close, network drop)?
- D — Data: What varies in data format, encoding, special characters, or localization?
- S — Sequence: What happens when actions are performed out of expected order?

Prompt:

Apply the FCC CUTS VIDS heuristic to the following feature. For each dimension, generate specific test scenarios relevant to this feature's risks.

Feature: [PASTE FEATURE DESCRIPTION]
Integration points: [List APIs, databases, or external services]
User roles: [List roles and permissions]

For each heuristic dimension, output:
- Dimension name and definition in the context of this feature
- 2–4 specific test scenarios with actionable test steps
- Risk level for this feature (High/Medium/Low) and why

Focus on dimensions most relevant to this feature's risks — don't pad low-risk dimensions.

Using Heuristics for Exploratory Session Charter Generation

Heuristics are most powerful when used to generate exploratory testing charters — structured session plans that guide time-boxed exploration:

Prompt:

Using the SFDPOT heuristic, generate 6 exploratory testing session charters for the following feature. Each charter should be time-boxed to 45–60 minutes and focus on a specific risk area.

Feature: [PASTE FEATURE DESCRIPTION]

For each charter, use this format:
**Charter #**: [Number]
**Mission**: [One sentence: "Explore [feature area] to find [type of defect]"]
**SFDPOT Dimension**: [Which dimension this charter focuses on]
**Test Ideas**: [5–8 specific test ideas to explore in this session]
**Oracles**: [How you'll know if something is wrong — reference AC, design spec, or common UX patterns]
**Time Box**: 45 minutes
**Notes/Risks**: [Anything specific to watch for in this area]

Combining Heuristics for Maximum Coverage

For a high-risk feature, combine SFDPOT and FCC CUTS VIDS in a single analysis:

Prompt:

I want to generate comprehensive exploratory test coverage for a high-risk feature. Apply both SFDPOT and FCC CUTS VIDS heuristics. After generating ideas from both, consolidate the output by removing duplicates and ranking all test ideas by risk level.

Feature: [PASTE FEATURE DESCRIPTION]
Risk factors: [List any known risk areas: "This feature handles payments," "This is used by tens of thousands of concurrent users," etc.]

Output:
1. Full SFDPOT analysis (3+ ideas per dimension)
2. Full FCC CUTS VIDS analysis (2+ ideas per dimension)
3. Consolidated risk-ranked test idea list (top 20 ideas by risk level, with rationale for each ranking)

Learning Tip: Heuristics are frameworks that encode expert judgment, not checklists you run mechanically. When AI applies a heuristic, it generates a list — your job is to read the list with your domain expertise and identify which items are genuinely risky for this specific feature and team. For a payment feature, the "Calculations" dimension of FCC CUTS VIDS deserves 3x more attention than "Configuration." For a user profile feature, the "Platform" dimension of SFDPOT (browser/device behavior) matters far more than "Time." Calibrate the AI's output with your knowledge of where bugs actually live in your system.


How to Combine Multiple Test Techniques in a Single AI Prompt?

Real-world test design rarely uses a single technique. A well-designed test suite for a meaningful feature combines equivalence partitioning (for representative coverage), BVA (for boundary coverage), heuristics (for risk-based exploration), and scenario-based testing (for end-to-end behavior). The challenge is getting AI to apply all of these coherently to a single feature without producing redundant or conflicting output.

The Technique Composition Pattern

The most effective approach is a two-phase prompt: analysis phase (identify what to test using all techniques) then generation phase (convert the analysis into test cases).

Phase 1: Multi-technique analysis

Analyze the following feature using three test design techniques in sequence. Do not generate test cases yet — only perform the analysis.

Feature: [PASTE FEATURE DESCRIPTION AND AC]
Field constraints: [PASTE CONSTRAINTS]

## Technique 1: Equivalence Partitioning
List all equivalence classes for each input/output domain. Mark each class as valid or invalid.

## Technique 2: Boundary Value Analysis
For each boundary between classes identified above, list the three test points (just below, at, just above).

## Technique 3: Risk-Based Heuristics (SFDPOT)
Apply SFDPOT to identify risk areas not covered by the above two techniques. Focus only on risks NOT already represented by EP/BVA.

## Consolidated Analysis
List all unique test ideas from all three techniques, eliminating duplicates. Rank by risk level.

Phase 2: Generate test cases from analysis

Based on the analysis above, now generate a complete test case set. Use the equivalence classes as the basis for data-driven tests, the boundary values as specific test data, and the SFDPOT risk areas as additional scenarios.

Output each test case using this format: [YOUR FORMAT]

Ensure:
- Each equivalence class has at least one representative test case
- Each identified boundary has at least one test case
- Each SFDPOT risk area that isn't covered by EP/BVA has at least one dedicated test case
- Positive, negative, and edge cases are clearly labeled

Decision Table Technique with AI

For features with complex conditional logic (multiple inputs that combine to determine an outcome), decision tables are the most systematic technique. AI can generate complete decision tables:

Prompt:

Create a decision table for the following feature with complex conditional logic. Identify all conditions and actions, enumerate the condition combinations, and specify the expected action for each combination.

Feature logic: [DESCRIBE THE CONDITIONAL LOGIC, e.g., "Discount is applied based on: user membership tier (Gold/Silver/Basic), cart total, and whether a promo code is applied"]

Step 1: List all conditions and their possible values
Step 2: Calculate the number of unique condition combinations (and collapse combinations that are infeasible or where a condition does not affect the outcome)
Step 3: Create the decision table with all combination columns
Step 4: Fill in the expected outcome (action) for each combination
Step 5: Generate one test case per unique combination, using representative values

Flag any combinations that the requirements don't explicitly specify (these are coverage gaps requiring clarification).
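
Once the table exists, each column becomes one row of a parametrized test. A minimal pytest sketch for the discount example; the specific rates and the `calculate_discount` implementation are hypothetical and exist only to make the table executable:

```python
# Minimal sketch: a decision table as data, driving one test per combination.
# The discount rules are assumed; replace the table and `calculate_discount`
# with the logic your requirements actually specify.
import pytest

def calculate_discount(tier, cart_total, has_promo):
    """Placeholder implementation of the rules encoded in the table below."""
    discount = {"Gold": 0.10, "Silver": 0.05, "Basic": 0.0}[tier]
    if cart_total >= 100:
        discount += 0.05
    if has_promo:
        discount += 0.10
    return round(discount, 2)

# Conditions: tier (Gold/Silver/Basic), cart total >= 100?, promo code applied?
# Action: expected discount rate. One decision-table column per row here.
DECISION_TABLE = [
    ("Gold",   150, True,  0.25),
    ("Gold",   150, False, 0.15),
    ("Gold",    50, True,  0.20),
    ("Gold",    50, False, 0.10),
    ("Silver", 150, True,  0.20),
    ("Silver", 150, False, 0.10),
    ("Silver",  50, True,  0.15),
    ("Silver",  50, False, 0.05),
    ("Basic",  150, True,  0.15),
    ("Basic",  150, False, 0.05),
    ("Basic",   50, True,  0.10),
    ("Basic",   50, False, 0.00),
]

@pytest.mark.parametrize("tier,cart_total,has_promo,expected", DECISION_TABLE)
def test_discount_decision_table(tier, cart_total, has_promo, expected):
    assert calculate_discount(tier, cart_total, has_promo) == expected
```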

State Transition Testing with AI

For features with distinct states and transitions (shopping cart, order lifecycle, user account status), state transition testing is the right technique:

Prompt:

Apply state transition testing to the following feature. Identify all states, all valid transitions, all invalid transitions, and generate test cases that cover every state and every transition.

Feature: [PASTE DESCRIPTION]
States (if known): [LIST STATES or ask AI to infer them]

Step 1: Create a state transition diagram (text representation: State A → [event] → State B)
Step 2: Create a state transition table showing all valid and invalid transitions
Step 3: Generate test cases for:
   - Every valid transition (one test per transition)
   - Every invalid transition (one test per invalid attempt — what is blocked and what error is shown)
   - Every state (at least one test that verifies the behavior while IN that state)

Format each test case with the source state, event, expected destination state, and any expected side effects.
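
A state transition table also translates naturally into data-driven tests. A minimal pytest sketch using a hypothetical order lifecycle; the states, events, and `OrderStateMachine` class are invented for illustration:

```python
# Minimal sketch: a state transition table as data, with one test per valid
# transition and a representative set of invalid (blocked) transitions.
import pytest

# (source state, event) -> destination state; anything absent is invalid.
VALID_TRANSITIONS = {
    ("created", "pay"):     "paid",
    ("created", "cancel"):  "cancelled",
    ("paid",    "ship"):    "shipped",
    ("paid",    "cancel"):  "cancelled",
    ("shipped", "deliver"): "delivered",
}

class OrderStateMachine:
    def __init__(self, state="created"):
        self.state = state

    def apply(self, event):
        key = (self.state, event)
        if key not in VALID_TRANSITIONS:
            raise ValueError(f"invalid transition: {event} from {self.state}")
        self.state = VALID_TRANSITIONS[key]
        return self.state

@pytest.mark.parametrize("source,event,destination",
                         [(s, e, d) for (s, e), d in VALID_TRANSITIONS.items()])
def test_valid_transition(source, event, destination):
    order = OrderStateMachine(source)
    assert order.apply(event) == destination

@pytest.mark.parametrize("source,event", [
    ("delivered", "cancel"),   # cannot cancel after delivery
    ("cancelled", "ship"),     # cancelled orders never ship
    ("created",   "deliver"),  # cannot skip payment and shipping
])
def test_invalid_transition_is_blocked(source, event):
    order = OrderStateMachine(source)
    with pytest.raises(ValueError):
        order.apply(event)
    assert order.state == source   # state unchanged after a blocked transition
```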

A Complete Technique Selection Guide

Ask the AI to recommend which techniques to apply:

Prompt:

Given the following feature description, recommend which test design techniques are most appropriate and in what priority order. Explain why each recommended technique is suitable for this feature's risk profile.

Feature: [PASTE FEATURE DESCRIPTION]
Risk areas (known): [LIST ANY KNOWN RISKS]

Available techniques: Equivalence Partitioning, Boundary Value Analysis, Decision Table, State Transition, SFDPOT, FCC CUTS VIDS, Pairwise, Cause-Effect Graphing

Output: ranked list of applicable techniques with rationale for each, and an estimated test case count each technique would add.

Learning Tip: Combining techniques in a single prompt works well for moderate-complexity features. For very complex features (many fields, complex state machines, multiple user roles), you'll get better output by applying each technique in a separate prompt and then consolidating. The consolidation prompt — "here are test case sets from three separate techniques; merge them, remove duplicates, and rank by risk" — is itself a powerful step that AI handles well. Trying to do everything in one 5,000-word prompt often results in the AI applying techniques superficially rather than thoroughly.