Overview
Data modeling and system analysis sit at the intersection of business requirements and technical architecture. A business analyst working in this space must translate between business language — what the business needs to do — and systems language — what the technology must support. This translation work requires analytical rigor, a solid understanding of system design concepts, and the ability to synthesize complex technical documentation into artifacts that both business and technology stakeholders can engage with productively.
AI has become a powerful accelerator for this work. Given a requirements narrative or a system description, AI can generate first-draft data flow diagrams, entity-relationship models, system context diagrams, integration point analyses, and gap analysis reports in minutes rather than hours or days. The quality of these outputs is directly proportional to the quality of the inputs — detailed, specific requirements and system descriptions produce detailed, specific models; vague inputs produce vague models that require extensive revision.
This topic covers four core capabilities of AI-assisted data modeling and system analysis. First, generating the foundational modeling artifacts — DFDs, ER models, and system context diagrams — from requirements text. Second, analyzing existing system documentation to identify integration points, technical debt, and architecture concerns. Third, producing structured gap analysis reports that compare current-state and desired-state capabilities. Fourth, translating system analysis findings into actionable requirements, with particular attention to non-functional requirements that emerge from architectural analysis.
The goal is not to replace the systems architect or the senior technical BA. The goal is to enable any competent BA to produce high-quality first-draft technical artifacts that can be validated and refined by technical experts — compressing days of modeling work into hours and shifting the expert's role from generation to review and refinement.
Generating Data Flow Diagrams, Entity-Relationship Models, and System Context Diagrams with AI
The three foundational modeling artifacts in business and systems analysis serve distinct purposes. A data flow diagram (DFD) shows how data moves through a system — where it comes from, what processes transform it, where it is stored, and where it goes. An entity-relationship (ER) model shows the structure of the data itself — what entities exist, what attributes they have, and how entities relate to each other. A system context diagram shows the system's boundaries — what external actors (users, other systems) interact with the system, and what data flows pass across those boundaries. Together, these three artifacts give a complete picture of the data dimension of any system.
Generating these artifacts from text with AI requires a specific prompting approach. The starting point is always a requirements narrative or a business process description — the richer the input, the higher the quality of the output. For DFDs, the key inputs are: the processes being modeled, the data they consume and produce, the data stores involved, and the external entities that send or receive data. For ER models, the key inputs are: the business entities mentioned in the requirements, the attributes that need to be tracked for each entity, and the relationships between entities including cardinality. For system context diagrams, the key inputs are: the system boundary being modeled, the external systems and user roles that interact with it, and the data flows that cross the boundary in each direction.
AI outputs for these artifacts are text-based representations — structured descriptions that map to the visual notation of the diagram type, rather than actual rendered diagrams. This is not a limitation; it is a strength. Text-based diagram descriptions are immediately usable as input to diagramming tools (Lucidchart, draw.io, Miro), as structured descriptions in documentation systems, and as the basis for technical review conversations. Many teams also use AI-generated descriptions to feed directly into PlantUML or Mermaid diagram syntax, which can then be rendered automatically.
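As an illustration of how a text-based DFD description maps onto renderable syntax, the sketch below converts a small structured description into Mermaid flowchart text. The entity, process, and store names are illustrative placeholders loosely based on the procurement example, not a definitive DFD of any system.

```python
# Sketch: convert a structured DFD description into Mermaid flowchart syntax.
# All names here are illustrative, not a complete model of the procurement system.

external_entities = ["Employee", "Supplier Portal"]
processes = ["Create Purchase Request", "Route for Approval"]
data_stores = ["Approval Matrix"]
flows = [
    ("Employee", "Create Purchase Request", "purchase request details"),
    ("Create Purchase Request", "Route for Approval", "validated request"),
    ("Approval Matrix", "Route for Approval", "approval thresholds"),
    ("Route for Approval", "Supplier Portal", "approved PO"),
]

def node_id(name):
    # Mermaid node IDs cannot contain spaces.
    return name.replace(" ", "_")

def to_mermaid(entities, procs, stores, flows):
    lines = ["flowchart LR"]
    for e in entities:                       # external entities: rectangles
        lines.append(f'    {node_id(e)}["{e}"]')
    for p in procs:                          # processes: circles
        lines.append(f'    {node_id(p)}(("{p}"))')
    for s in stores:                         # data stores: cylinders
        lines.append(f'    {node_id(s)}[("{s}")]')
    for src, dst, label in flows:
        lines.append(f'    {node_id(src)} -->|{label}| {node_id(dst)}')
    return "\n".join(lines)

print(to_mermaid(external_entities, processes, data_stores, flows))
```

The printed block can be pasted into any Mermaid-capable documentation system and rendered directly, which is exactly the hand-off described above: AI produces the structured description, and lightweight tooling turns it into a diagram.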
Hands-On Steps
- Gather the requirements narrative or business process description that the model will represent. Ensure it includes: the key business processes, the data entities involved, the external actors (users and external systems), and the business rules that govern data transformation.
- Run a DFD generation prompt, specifying the level of decomposition required: Level 0 (context diagram showing the system as a single process), Level 1 (major processes within the system), or Level 2 (sub-processes within each Level 1 process).
- Review the DFD output. Validate each data flow: does data actually flow in this direction? Is this data store real? Is this external entity genuinely external to the system boundary?
- Run an ER model generation prompt from the same requirements narrative, specifying: entities, attributes (key attributes at minimum), relationships, and cardinality (one-to-one, one-to-many, many-to-many).
- Review the ER model with a data architect or developer to validate entity definitions, attribute completeness, and relationship cardinality.
- Run a system context diagram prompt focusing specifically on the system boundary, external actors, and cross-boundary data flows.
- For each artifact, create a PlantUML or Mermaid syntax version if your documentation system supports rendering, or import the structured description into your diagramming tool.
- Include all three artifacts in your requirements documentation package with a brief narrative explanation of what each diagram shows and why it is relevant to the project.
Prompt Examples
Prompt:
You are a senior business analyst and systems analyst creating modeling artifacts from requirements.
Here is the requirements narrative for a digital procurement platform:
"The system allows employees to create purchase requests specifying an item, quantity, estimated cost, cost code, and preferred supplier. The system validates the cost code against the Finance cost code master, checks available budget for the cost center associated with the cost code, and routes the request for approval based on the approval matrix (which defines who can approve requests at different dollar thresholds). Approved requests are converted to purchase orders and sent to the supplier via the supplier portal. Suppliers acknowledge the PO and provide an expected delivery date. When goods are received, the warehouse team records the receipt in the system against the original PO. The system matches the receipt to the PO and, if matched, generates a goods receipt notification to Finance for payment processing. Finance processes the payment in SAP."
Generate a Level 1 Data Flow Diagram (DFD) for this system. Structure your output as follows:
External Entities (actors outside the system boundary):
- List each external entity with a brief description
Processes (numbered, within the system boundary):
- List each major process with: Process ID, Process Name, Input data flows, Output data flows, Data stores accessed
Data Stores (within the system boundary):
- List each data store with: what data it holds, which processes read from it, which processes write to it
Data Flows:
- List each data flow with: from (entity/process/store), to (entity/process/store), data description
Also generate the Mermaid flowchart syntax for this diagram.
Expected output: A structured DFD Level 1 with typically 5-8 processes (Create Purchase Request, Validate Cost Code, Check Budget, Route for Approval, Generate PO, Record Goods Receipt, Match Receipt to PO, Generate GR Notification), 4-5 data stores (Cost Code Master, Budget Register, Approval Matrix, Purchase Order Register, Goods Receipt Register), and 4-6 external entities (Employee, Finance Cost Code System, Finance/SAP, Supplier Portal, Warehouse Team). Plus a Mermaid flowchart block that can be pasted into documentation to render the diagram.
Prompt:
You are a senior data modeler creating an entity-relationship model from business requirements.
Here is the business requirements narrative for the procurement platform:
"The system manages purchase orders issued by employees on behalf of cost centers. Each purchase order references one supplier and one cost center, and may contain multiple line items. Each line item specifies a catalog item (or free-text description for non-catalog items), quantity, unit price, and the cost code to be charged. Suppliers have a profile with their contact information, payment terms, and approved status. The system tracks approval history for each purchase order, recording who approved it, when, and the approval level. When goods are received, a goods receipt record is created referencing the original purchase order, with line-item-level receipt quantities. Finance processes payments against approved and received purchase orders."
Generate an Entity-Relationship model. For each entity:
1. Entity name
2. Attributes (mark primary key with PK, foreign keys with FK)
3. Relationships to other entities (with cardinality notation: 1:1, 1:N, M:N)
4. Any business rules embedded in the relationship
Also generate PlantUML class diagram syntax for this ER model.
Flag any entities or relationships that are ambiguous in the requirements and need clarification.
Expected output: An ER model with entities including: PurchaseOrder (with PK, FK to CostCenter, FK to Supplier), PurchaseOrderLineItem (with FK to PurchaseOrder, FK to CatalogItem or free-text), Supplier (with approval status attribute), CostCenter (with FK to Employee as owner), CatalogItem, ApprovalHistory (with FK to PurchaseOrder, FK to Approver/Employee), GoodsReceipt (with FK to PurchaseOrder), GoodsReceiptLineItem. Cardinality should include 1:N between PurchaseOrder and LineItems, M:N between PurchaseOrder and Approvals. Ambiguity flags should include "Is a cost center a single entity or can a PO span multiple cost centers?" and "Are catalog items maintained centrally or per supplier?"
Learning Tip: Always generate the DFD and ER model from the same requirements narrative in separate prompts — do not ask for both in a single prompt. The DFD and ER model serve different purposes and require different analytical lenses: the DFD focuses on behavior (how data flows) while the ER model focuses on structure (what data is). A single prompt tends to produce shallow versions of both. Separate prompts produce deeper, more useful artifacts.
Analyzing Existing System Documentation and Identifying Integration Points
When working on projects that involve existing systems — which is the majority of enterprise BA work — the starting point is not a blank slate but an existing landscape of legacy systems, documented APIs, database schemas, and integration contracts that both constrain and enable the new solution. Analyzing this existing landscape to understand what integration points exist, where data is exchanged between systems, and where technical debt and architecture problems reside is one of the most valuable contributions a BA can make to the project's technical design.
AI is highly effective at comparative analysis of system documentation. Given documentation for two or more systems, AI can identify where those systems exchange data, what the format and protocol of those exchanges are (or should be), where the data models are compatible and where they conflict, and what gaps exist in the current integration landscape that will need to be addressed. This analysis typically requires hours of manual documentation review — cross-referencing API specifications, data dictionaries, and integration architecture diagrams. AI compresses this work significantly.
Technical debt detection is a high-value application of AI in system analysis. Technical debt manifests in architecture documentation through patterns: duplicated data across systems with no clear master, undocumented integrations (systems that exchange data with no formal contract or specification), point-to-point integrations that have grown beyond their original scope, missing error handling specifications, and security or access control gaps. AI can identify these patterns in documentation systematically, whereas human reviewers often overlook them because they are buried in volume or because the reviewer is too close to the system to see the patterns clearly.
Integration point analysis should produce a structured catalog: for each integration between systems, document the integration name, the systems involved, the data exchanged, the direction of flow, the protocol (API, file transfer, database query), the frequency, the owner (which team maintains this integration), and any known issues or constraints. This catalog is a prerequisite for any solution design that involves systems integration.
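The catalog structure described above can be captured in a lightweight, machine-checkable form. The sketch below uses a Python dataclass whose fields mirror the attributes listed; the example entry's values are illustrative, loosely based on the legacy landscape discussed later, not a real catalog.

```python
from dataclasses import dataclass, field

# Sketch of one integration catalog entry; field names mirror the
# attributes listed above, and the sample values are purely illustrative.
@dataclass
class IntegrationPoint:
    name: str
    source_system: str
    target_system: str
    data_exchanged: str
    direction: str            # "one-way" or "bi-directional"
    protocol: str             # e.g. "REST API", "file transfer", "DB query"
    frequency: str            # e.g. "real-time", "nightly batch"
    owner: str                # team that maintains the integration
    known_issues: list[str] = field(default_factory=list)

catalog = [
    IntegrationPoint(
        name="PO export to ERP",
        source_system="Legacy Procurement",
        target_system="SAP",
        data_exchanged="purchase order headers and line items",
        direction="one-way",
        protocol="file transfer (CSV)",
        frequency="nightly batch",
        owner="Procurement IT",
        known_issues=["no formal interface contract", "no error handling spec"],
    ),
]

# Simple completeness check before the catalog feeds solution design:
# surface every integration that carries known issues.
flagged = [i.name for i in catalog if i.known_issues]
print(flagged)
```

Holding the catalog in a structured form like this makes the later steps (risk register entries, architecture inputs) a filter or sort rather than a manual re-read of the documentation.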
Hands-On Steps
- Gather all available documentation for each system being analyzed: API specifications, data dictionaries, integration diagrams, system architecture documents, database schemas, and any existing interface control documents.
- For each system, create a structured system summary: name, purpose, data it owns, APIs or interfaces it exposes, and key dependencies on other systems.
- Run a pairwise integration analysis prompt for each pair of systems that are expected to interact. Provide the system documentation for both systems as input.
- Compile the integration findings into a master integration catalog.
- Run a technical debt identification prompt on each system's documentation separately, then across the full integration landscape.
- Validate integration findings with system owners: some integration points may exist but be undocumented, and some documented integrations may be deprecated.
- For each identified technical debt item, create a risk entry in the project risk register. Technical debt that intersects with the project scope becomes a project risk.
- Use the integration catalog as a key input to the solution architecture design and non-functional requirements definition.
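The pairwise analysis step above grows as n(n-1)/2 for n systems, which is worth knowing before committing to it. A short sketch of enumerating the pairs, using system names from the example landscape purely for illustration:

```python
from itertools import combinations

# Illustrative system list; each pair that may interact gets one
# integration-analysis prompt per the hands-on steps above.
systems = ["SAP ERP", "Salesforce CRM", "Legacy Procurement", "Supplier Portal"]

pairs = list(combinations(systems, 2))
for a, b in pairs:
    print(f"Analyze integration points between {a} and {b}")

# 4 systems yield 6 pairwise analyses; 8 systems would yield 28.
print(len(pairs))
```

In practice, many pairs can be excluded up front because the systems demonstrably never interact, so the catalog stays manageable, but enumerating the full set first prevents silently missed pairs.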
Prompt Examples
Prompt:
You are a senior integration analyst reviewing system documentation to identify integration points and potential issues.
Here is the documentation for two systems that will need to exchange data in the new procurement platform project:
System A — Salesforce CRM (current state):
- Purpose: Customer and prospect relationship management
- Data entities: Accounts, Contacts, Opportunities, Cases, Products (catalog), Price Books
- Exposed APIs: REST API v55.0, supports CRUD operations on all entities, bulk API for large data sets
- Current integrations: SAP billing system (one-way, daily batch, invoice data pushed from SAP to Salesforce), Marketing automation platform (bi-directional, contact and campaign data)
- Authentication: OAuth 2.0
- Data ownership: Salesforce is the system of record for Customer data
System B — SAP ERP (current state):
- Purpose: Financial management, procurement, inventory, and HR
- Data entities: Vendors (maps to Salesforce Accounts for suppliers), Materials (maps to Salesforce Products), Cost Centers, Purchase Orders, Goods Receipts, Financial Documents
- Exposed APIs: SAP OData API (REST-based), limited to read operations on most entities; write operations require SAP BAPI calls via RFC (older technology)
- Current integrations: Salesforce billing data (as above), legacy procurement system (file-based, nightly batch, PO data)
- Authentication: SAP technical user with Basic Auth
- Data ownership: SAP is the system of record for financial, inventory, and procurement data
Analysis request: We are building a new procurement platform that needs to:
1. Check customer (supplier) data from Salesforce when creating POs
2. Check material availability from SAP inventory when processing orders
3. Write approved POs into SAP for financial processing
4. Read cost center and budget data from SAP for approval routing
For each of these four integration scenarios, analyze:
1. Integration feasibility: Is the required data available via the exposed APIs?
2. Data compatibility: Are the data entities compatible (e.g., does Salesforce Account map cleanly to SAP Vendor)?
3. Protocol and complexity: What technology approach is required, and how complex is it?
4. Risks and constraints: What technical, security, or data quality risks should the project be aware of?
5. Open questions that need to be answered by the system owners before integration design can proceed
Expected output: A detailed integration analysis for each of the four scenarios. Integration 3 (write POs to SAP) should flag as the most complex because SAP's write operations require BAPI calls via RFC rather than the modern REST API, adding significant integration complexity. Integration 1 should flag the Salesforce Account/SAP Vendor data compatibility question — they may use different IDs, names, or status values. Each scenario should include specific open questions such as "Does SAP's OData API expose real-time inventory levels or only batch-updated data?" and "What is the SAP approval cycle for BAPI-based write integration — is there an existing middleware layer?"
Prompt:
You are a senior systems analyst performing a technical debt assessment on an existing system integration landscape.
Here is a description of the current integration landscape for a manufacturing company's enterprise systems:
System landscape:
- ERP System (SAP S/4HANA): Financial management, procurement, inventory
- CRM System (Salesforce): Customer management, sales pipeline
- Legacy Procurement System (custom-built, 12 years old): Purchase order creation, approval workflows
- Supplier Portal (custom-built, 8 years old): Supplier communication, PO acknowledgment
- Finance Reporting Tool (Power BI): Spend analysis and financial dashboards
- HR System (Workday): Employee data, organizational structure, approval authorities
Current integration map (as described by the IT team):
- Legacy Procurement → SAP: Nightly batch file, CSV format, PO data
- Legacy Procurement → Supplier Portal: Direct database connection (no API)
- SAP → Power BI: Direct database query (read replica)
- SAP → Workday: Manual export/import, weekly frequency
- Salesforce → SAP: No formal integration (data is manually duplicated by the Finance team)
- Workday → Legacy Procurement: No integration (approval matrix is maintained manually in Legacy Procurement)
Perform a technical debt analysis of this integration landscape. For each identified technical debt item:
1. Technical debt type: Data duplication, Point-to-point integration, Missing integration, Deprecated technology, Security vulnerability, or Missing documentation
2. Description of the debt
3. Business impact: what operational problem does this debt cause today?
4. Risk exposure: what project or operational risk does this create?
5. Remediation priority: High (blocks new solution design), Medium (should be addressed in this project), Low (should be logged in tech debt backlog)
Also identify any "integration anti-patterns" that are architectural concerns beyond individual debt items.
Expected output: A technical debt register with 6-10 debt items. High-priority items should include: the direct database connection between Legacy Procurement and Supplier Portal (a critical point-to-point integration with no formal contract that will break during migration), the SAP-Workday manual export/import (causing stale approval authority data that creates approval routing errors), and the Salesforce-SAP manual duplication (causing data inconsistency in supplier records). The integration anti-pattern analysis should identify the absence of an integration middleware layer as an architectural concern — every system integration is point-to-point with no central message bus, creating a maintenance burden that grows quadratically with the number of systems.
Learning Tip: When analyzing existing system documentation with AI, always include information about the age and maintenance status of each system. A 12-year-old custom-built system with "no known API" is a completely different integration risk than a modern SaaS platform with a published REST API. AI that knows the system age and maintenance context will flag the right risks — AI that treats all systems as equivalent will miss the most important architectural concerns.
Generating Gap Analysis Reports with AI
A gap analysis report answers a specific question: given where we are today (current state) and where we need to be (desired state), what is missing, what needs to change, and how significant are the gaps? In business analysis, gap analysis is performed at multiple levels: capability gaps (what the organization cannot currently do), process gaps (where current processes fail to meet future requirements), system gaps (where current systems lack required functionality), and data gaps (where required data is not available or not in the required form).
AI can generate a structured gap analysis report when given a clear description of both the current state and the desired state. The quality of the output depends critically on the specificity of both inputs. A vague current-state description ("we currently use a manual process") produces a vague gap analysis. A specific current-state description ("we currently use a paper-based PO form that requires physical signature, takes 24-48 hours for approval, and has no integration with SAP") produces a specific, actionable gap analysis.
The gap analysis format that produces the most useful output for stakeholder communication and project planning follows a five-column structure: Capability (the business capability being assessed), Current State (a specific description of how the capability is currently provided or where it is missing), Desired State (a specific description of the required capability after the project), Gap Description (what specifically is missing or needs to change), and Priority (relative importance of closing this gap — High/Medium/Low — and the justification for that priority). This format allows stakeholders to quickly scan the entire capability landscape and understand where the most significant work needs to happen.
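One row of the five-column format described above can be sketched as a simple data structure, which also makes the analysis easy to sort, filter, and summarize for stakeholders. The capability and state descriptions below are illustrative, drawn loosely from the procurement example:

```python
from dataclasses import dataclass

# Sketch of one row in the five-column gap analysis format described above.
# The descriptions are illustrative placeholders, not a real analysis.
@dataclass
class GapRow:
    capability: str
    current_state: str
    desired_state: str
    gap_description: str
    priority: str   # "High" | "Medium" | "Low" (justification kept alongside)

row = GapRow(
    capability="Budget and Cost Code Management",
    current_state="No budget check before PO issuance; overspends found monthly.",
    desired_state="Real-time budget check before PO approval.",
    gap_description="Budget validation capability is entirely missing.",
    priority="High",
)

# A scan-friendly one-line summary per row, as stakeholders would see it:
print(f"{row.capability}: {row.priority} - {row.gap_description}")
```

A table of such rows, sorted by priority, is the artifact stakeholders scan in the review meeting; the narrative summary is generated from it, not instead of it.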
Gap analysis validation is the step that separates a useful analysis from a theoretical exercise. Before the gap analysis is used to scope a project or plan a roadmap, it should be reviewed by both the stakeholders who provided the desired-state requirements and the system owners or process owners who own the current state. AI can generate validation summary artifacts — stakeholder-friendly summaries of the gap analysis that surface the most critical gaps and the implications of not closing them — that make this validation process more efficient.
Hands-On Steps
- Document the current state systematically: for each capability area in scope, describe precisely what exists today — not what was intended to exist, but what actually exists and how it works.
- Document the desired state from your validated requirements: for each capability area, describe specifically what the future state must deliver to meet the business requirements.
- Organize both current and desired state descriptions by the same capability taxonomy — this makes the gap identification systematic rather than ad-hoc.
- Run a gap analysis generation prompt with both the current and desired state descriptions as inputs, structured by capability area.
- Review the generated gap analysis. Validate each gap with the relevant current-state owners (system owners, process owners) to confirm the current-state description is accurate.
- Assign priority ratings to each gap using a consistent scoring rubric: does this gap block the primary use case (High)? Does it impact a significant subset of users or processes (Medium)? Is it a nice-to-have improvement (Low)?
- Generate a stakeholder-facing gap analysis summary using AI: a narrative version that highlights the top 5 gaps with their business implications, suitable for a steering committee presentation.
- Use the prioritized gap analysis as a key input to project scoping, sprint planning, and roadmap development.
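The scoring rubric in the steps above can be expressed as a small function so that priority ratings are applied consistently across every gap. The boolean flags are hypothetical inputs a BA would set while reviewing each gap:

```python
# Sketch of the consistent scoring rubric from the hands-on steps above.
# The two flags are hypothetical per-gap judgments made during review.

def gap_priority(blocks_primary_use_case: bool, impacts_many_users: bool) -> str:
    """High if the gap blocks the primary use case, Medium if it impacts a
    significant subset of users or processes, Low if it is a nice-to-have."""
    if blocks_primary_use_case:
        return "High"
    if impacts_many_users:
        return "Medium"
    return "Low"

print(gap_priority(True, False))    # gap blocks the primary use case
print(gap_priority(False, True))    # gap impacts many users, but not blocking
print(gap_priority(False, False))   # nice-to-have improvement
```

Encoding the rubric this way is less about automation and more about forcing the priority question into two explicit, answerable judgments per gap.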
Prompt Examples
Prompt:
You are a senior business analyst producing a gap analysis report for a procurement platform project.
Current State (existing manual procurement process):
- Purchase requests are created by employees on paper forms or by email to the procurement team
- Cost code lookup requires a PDF spreadsheet published monthly by Finance — no system integration
- Approval routing is manual: the form is sent by email to the required approvers based on a manually maintained approval matrix document
- No budget checking before PO issuance — overspends are only detected during monthly budget reviews
- POs are emailed to suppliers as PDF attachments — no structured data exchange
- Supplier acknowledgments are received by email and manually tracked in a spreadsheet by the procurement team
- Goods receipt is recorded on a paper delivery docket and filed physically
- No spend analytics capability — reports are built manually by Finance from SAP export data, taking 2-3 days per report
- Current SLA: 24-48 hours for standard PO approval, no defined SLA for urgent purchases
Desired State (from validated requirements):
- Digital PO creation from any device with automatic cost code suggestion
- Real-time budget checking before PO approval
- Automated approval routing based on approval matrix maintained in the system
- Structured supplier portal with electronic PO delivery, acknowledgment tracking, and delivery date capture
- Mobile-capable goods receipt recording at warehouse level
- Real-time spend analytics dashboard with drill-down by department, category, and supplier
- Target SLA: 4 hours for standard PO approval, 30 minutes for urgent purchases
Produce a gap analysis report using the following capability areas:
1. Purchase Request and PO Creation
2. Budget and Cost Code Management
3. Approval Workflow Management
4. Supplier Communication and Management
5. Goods Receipt and Delivery Management
6. Spend Analytics and Reporting
7. System Integration and Data Management
For each capability area, provide:
- Current State summary (2-3 sentences)
- Desired State summary (2-3 sentences)
- Gap Description (specific and actionable)
- Priority (High/Medium/Low with one-sentence rationale)
- Indicative effort (High/Medium/Low — implementation complexity)
Expected output: A seven-section gap analysis with capability-by-capability current state, desired state, and specific gap descriptions. The highest priority gaps should include budget checking (currently zero capability — a High priority gap that creates direct financial risk) and approval workflow management (manual email routing with no SLA enforcement is a fundamental process failure). Spend analytics should appear as Medium priority (the business currently gets the data — it is just slow and manual). Each gap should include a one-sentence effort indicator (e.g., "Real-time budget checking requires deep SAP integration — High implementation effort").
Prompt:
You are a senior business analyst preparing a gap analysis for stakeholder review.
Here is the full gap analysis report:
[Paste the gap analysis generated above]
Produce two stakeholder-facing documents:
Document 1 — Executive Summary (for Steering Committee)
- One paragraph overview of the gap analysis scope and approach
- A summary table: Capability Area | Gap Severity (Critical/Significant/Moderate) | Priority
- Top 3 gaps that represent the highest business risk if not addressed
- Total number of gaps by priority: High / Medium / Low
- Recommended next step
Document 2 — Validation Checklist (for Current-State Owners)
For each capability area, produce 3-4 yes/no validation questions that the current-state owners (system owners, process managers) can answer to confirm that the current-state description is accurate.
Example format: "Purchase Request and PO Creation: (1) Is it correct that all purchase requests currently require a paper form or email? (2) Is it correct that there is no electronic submission option for any purchase category? (3) Is it correct that the current SLA is 24-48 hours with no exceptions for urgent purchases?"
Expected output: An executive summary with a clean gap severity table (mapping the 7 capability areas to Critical/Significant/Moderate severity) and a validation checklist with 3-4 targeted yes/no questions per capability area. The checklist format makes validation fast — current-state owners can respond in 15 minutes rather than requiring a 90-minute review meeting.
Learning Tip: Write the current state description before writing the desired state when preparing a gap analysis input. It sounds obvious, but many BAs start with the desired state (the requirements) and then write a current state that is implicitly framed as "everything the desired state doesn't have." This creates a biased current state description. Start from the actual current state, describe it as accurately as possible, then independently describe the desired state — then let the gaps emerge from the comparison.
Translating System Analysis into Actionable Requirements with AI
System analysis work — DFDs, ER models, integration point analysis, gap analysis — produces technical findings that must ultimately be translated into requirements that development teams can act on. This translation is where many BA deliverables fall short: the analysis is rigorous but the requirements it generates are not specific enough for engineering to implement without additional interpretation. AI can help bridge this gap by systematically converting each system analysis finding into a precisely specified requirement.
The translation approach differs by finding type. A DFD finding (a data flow that needs to be implemented) translates into a functional requirement specifying the trigger, the data involved, the transformation applied, and the destination. An ER finding (an entity and its relationships) translates into data requirements: what entities must be created, what attributes are required, what constraints apply, and what the referential integrity rules are. An integration point finding (a data exchange between systems) translates into integration requirements: the protocol, the data format, the frequency, the error handling, and the security requirements for that exchange. A gap analysis finding (a missing capability) translates into functional or non-functional requirements that describe what the new capability must do.
Non-functional requirements derived from system analysis deserve particular attention because they are frequently underspecified or omitted entirely from requirements produced through stakeholder interviews alone. Performance requirements (how fast must the system process a transaction?), scalability requirements (how many concurrent users must the system support?), security requirements (what access controls are required for sensitive data flows?), and availability requirements (what is the acceptable downtime?) are all informed by system analysis findings and are best specified at this stage, when the system's data flows and integration points are clearly understood.
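The finding-type-to-requirement-type mapping and the NFR checklist described above can be made explicit so nothing is skipped per finding. In the sketch below, the category keys and checklist wording are illustrative encodings of the text, not a standard taxonomy:

```python
# Mapping from system-analysis finding type to the requirement type it yields,
# as described above. Keys and checklist wording are illustrative.

FINDING_TO_REQUIREMENT = {
    "dfd_flow": "functional requirement (trigger, data, transformation, destination)",
    "er_entity": "data requirement (entities, attributes, constraints, integrity rules)",
    "integration_point": "integration requirement (protocol, format, frequency, "
                         "error handling, security)",
    "capability_gap": "functional or non-functional capability requirement",
}

NFR_CHECKLIST = [
    "Performance: how fast must the system process a transaction?",
    "Scalability: how many concurrent users must the system support?",
    "Security: what access controls are required for sensitive data flows?",
    "Availability: what is the acceptable downtime?",
]

def derive(finding_type: str) -> str:
    # Fall back to a manual-review flag for anything outside the taxonomy.
    return FINDING_TO_REQUIREMENT.get(finding_type, "unclassified - review manually")

print(derive("integration_point"))
for item in NFR_CHECKLIST:
    print("-", item)
```

Running every finding through the same mapping, and every resulting requirement through the same four-question checklist, is what turns NFR coverage from an afterthought into a systematic step.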
Hands-On Steps
- Take each system analysis finding (from DFD, ER model, integration analysis, or gap analysis) and create a one-sentence finding statement that describes what was found and why it is relevant to requirements.
- Run a requirements translation prompt for each finding category: DFD findings → functional requirements, ER findings → data requirements, integration findings → integration requirements, gap findings → capability requirements.
- For each translated requirement, run a non-functional requirements derivation prompt: "Given this functional requirement, what performance, security, scalability, and availability requirements must also be specified?"
- Review each non-functional requirement against the system context: Is the performance target achievable with the chosen architecture? Is the security requirement proportionate to the data sensitivity?
- Ensure every requirement generated from system analysis is cross-referenced to its source finding — this traceability is essential for change impact analysis when requirements evolve.
- Add all requirements to the master requirements register, tagged by source (system analysis, stakeholder interview, regulatory, etc.) and type (functional, non-functional, constraint, assumption).
- Review the requirements register with the solution architect or lead developer to validate technical feasibility before committing to the requirements baseline.
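The traceability and tagging rules in the steps above can be sketched as a minimal register. The field names, tag values, and validation rule are assumptions for illustration, not a prescribed data model:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str                    # "The system shall ..." statement
    req_type: str                # functional | non-functional | constraint | assumption
    source: str                  # system-analysis | stakeholder-interview | regulatory
    source_finding_ids: list = field(default_factory=list)  # cross-references

register: list[Requirement] = []

def add_requirement(req: Requirement) -> None:
    # Enforce the traceability rule: requirements derived from system
    # analysis must cite at least one source finding, so that change
    # impact analysis is possible when requirements evolve.
    if req.source == "system-analysis" and not req.source_finding_ids:
        raise ValueError(f"{req.req_id} has no source finding reference")
    register.append(req)

def impacted_by(finding_id: str) -> list:
    """Which requirements must be revisited when a finding changes?"""
    return [r.req_id for r in register if finding_id in r.source_finding_ids]
```

The `impacted_by` query is the payoff of step five: when a finding is revised, the register answers "which requirements does this touch?" in one lookup instead of a manual trawl.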
Prompt Examples
Prompt:
You are a senior business analyst translating system analysis findings into actionable requirements.
Here are 5 findings from the integration analysis of the procurement platform project:
Finding 1: The SAP OData API supports read operations for cost centers and budget data, but write operations (creating a PO in SAP) require BAPI calls via RFC — an older SAP integration technology that requires a dedicated middleware component.
Finding 2: Salesforce stores Supplier data as Accounts, but SAP stores the same suppliers as Vendors with a different ID scheme. There is currently no master data synchronization between the two systems — supplier records are manually maintained in both.
Finding 3: The current Legacy Procurement system connects to the Supplier Portal via a direct database connection with no formal API contract. The Legacy Procurement system will be decommissioned as part of this project, which will break this connection.
Finding 4: The approval authority matrix is currently maintained manually in the Legacy Procurement system. There is no API or data export available — the data would need to be manually re-keyed into the new system.
Finding 5: The SAP-to-Power BI integration uses a direct database query against SAP's read replica. This integration will continue to work but will need to be updated to include data from the new procurement platform.
For each finding, generate:
1. A functional or integration requirement that addresses the finding (use "The system shall..." format)
2. Any associated non-functional requirements (performance, security, data integrity)
3. An implementation assumption that must be confirmed before this requirement can be baselined
4. A dependency: what must be true or built before this requirement can be implemented
Expected output: Five sets of requirements derived from the integration findings. Finding 1 should generate a requirement like "The system shall create Purchase Orders in SAP using the BAPI RFC integration layer, with a maximum response time of 5 seconds under normal load," plus a non-functional requirement for error handling when the SAP BAPI call fails (what does the user see? Is the PO held in a queue?). Finding 2 should generate a master data synchronization requirement for Supplier/Vendor records, plus a data integrity non-functional requirement. Finding 4 should generate a data migration requirement for the approval matrix, with a dependency on the approval matrix being exported and validated before the new system goes live.
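The master data synchronization requirement implied by Finding 2 ultimately rests on a cross-reference between the two ID schemes. A minimal sketch, with entirely invented IDs and a hypothetical canonical supplier key:

```python
# Salesforce Account IDs and SAP Vendor IDs identify the same supplier
# under different schemes, so synchronization needs a mapping keyed on
# one canonical supplier identity. All IDs below are invented.
xref = {
    # canonical_id: (salesforce_account_id, sap_vendor_id)
    "SUP-0001": ("001A000001abcde", "0000100045"),
    "SUP-0002": ("001A000001fghij", "0000100072"),
}

def sap_vendor_for_account(account_id: str):
    """Resolve a Salesforce Account to its SAP Vendor, if mapped."""
    for sf_id, sap_id in xref.values():
        if sf_id == account_id:
            return sap_id
    return None  # unmapped supplier -> flag for master data steward review
```

Returning `None` rather than guessing is a deliberate choice: an unmapped supplier is a data quality finding in its own right, not something the integration should silently invent.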
Prompt:
You are a senior business analyst specifying non-functional requirements derived from system analysis.
The procurement platform processes the following transaction volumes (from current-state analysis):
- 250 purchase orders created per week (peak: 60 POs on Monday morning)
- 15-20 supplier portal interactions per PO (acknowledgment, status updates, delivery confirmation)
- 200 internal users accessing the system (85 active daily users during business hours)
- 85 external supplier organizations with varying portal usage frequency
- 8 system integrations: SAP (bidirectional), Salesforce (read), Workday (read), Supplier Portal (bidirectional), Power BI (write), Email (outbound), SSO/Azure AD (authentication), Mobile devices (bidirectional)
From the DFD analysis, the following data flows have been identified as sensitive:
- Budget data (financial sensitivity)
- Supplier pricing data (commercial sensitivity)
- Employee approval authority data (HR sensitivity)
- Payment terms and banking details in supplier profiles (high financial sensitivity)
Derive a complete set of non-functional requirements for the procurement platform covering:
1. Performance: response time, throughput, and latency requirements
2. Scalability: capacity limits and growth headroom
3. Availability: uptime requirements and maintenance window constraints
4. Security: access control, data encryption, and audit requirements for each sensitive data category
5. Integration reliability: retry, failover, and error notification requirements for each of the 8 integrations
For each non-functional requirement:
- Write it in a testable format: "The system shall [measurable behavior] under [specified conditions]"
- Provide the rationale from the system analysis that justifies this requirement
- Assign a priority: Mandatory (must be met for go-live), Important (should be met in Phase 1), Desirable (Phase 2 or later)
Expected output: A comprehensive non-functional requirements set with 15-25 specific, testable requirements. Performance requirements should reference the peak load (60 POs on Monday morning) to specify peak throughput. Security requirements should differentiate by data sensitivity — banking details require encryption at rest and in transit, with restricted access and access logging, while general PO data carries lighter controls. Integration reliability requirements should specify retry behavior (e.g., "The SAP integration shall retry failed BAPI calls up to 3 times with exponential backoff before alerting the procurement team") to prevent silent data loss.
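The retry behavior named in that example requirement can be sketched generically. This is a simplified illustration, not SAP integration code — `call` stands in for any zero-argument wrapper around a BAPI call, and the delays are placeholders:

```python
import time

def call_with_retry(call, max_attempts=3, base_delay=1.0, on_failure=None):
    """Retry a failing integration call with exponential backoff.

    `call` is any zero-argument callable (e.g. a hypothetical SAP BAPI
    wrapper); `on_failure` is invoked after the final attempt so the
    failure is never silent.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception as exc:
            if attempt == max_attempts:
                if on_failure:
                    on_failure(exc)  # e.g. alert the procurement team
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s...
```

Note that the requirement is testable precisely because these parameters (3 attempts, exponential backoff, mandatory alert on exhaustion) are stated — a tester can force failures and count the retries.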
Learning Tip: Non-functional requirements derived from system analysis are almost always more defensible than NFRs derived from general best practice. When you can say "the performance requirement of 3 seconds response time under 60 concurrent users comes from the peak load analysis of Monday morning PO creation activity," stakeholders and developers have context that motivates the requirement. Abstract NFRs ("the system shall be performant") get deprioritized. Evidence-based NFRs get built.
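The evidence-based derivation the Learning Tip describes is back-of-envelope arithmetic. The 60-PO peak and 15-20 interactions per PO come from the volumes above; the two-hour peak window is an assumption added purely for illustration:

```python
peak_pos = 60                 # POs in the Monday-morning peak (from the analysis)
window_minutes = 120          # ASSUMED length of that peak window
interactions_per_po = 20      # upper bound of the 15-20 portal interactions

po_per_minute = peak_pos / window_minutes       # 0.5 PO created per minute
portal_events = peak_pos * interactions_per_po  # 1200 portal events over those POs' lifecycle

# A testable NFR anchored to this evidence might read:
# "The system shall sustain 1 PO creation every 2 minutes during the
#  Monday-morning peak with response times not exceeding 3 seconds."
```

The numbers matter less than the trail: each figure in the requirement traces to a line in the current-state analysis, which is exactly what makes it defensible in review.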
Key Takeaways
- Generate DFD, ER model, and system context diagram in separate prompts — each requires a different analytical lens and produces better artifacts when not combined.
- AI-generated modeling artifacts are text-based representations. Use them as inputs to diagramming tools or as PlantUML/Mermaid syntax for rendering in documentation systems.
- Integration point analysis should always include system age, API maturity, and maintenance status. These contextual factors determine integration risk more reliably than the formal integration specification alone.
- Technical debt detection is most effective when applied to the full integration landscape, not individual systems. The most damaging technical debt is usually found at the boundaries between systems.
- Gap analysis quality depends on the specificity of both current-state and desired-state inputs. Write the current state description first, as accurately as possible, before describing the desired state.
- Validate the current-state description with the actual current-state owners before using the gap analysis for project scoping — inaccurate current-state descriptions generate fictional gaps.
- Every system analysis finding (DFD, ER, integration, gap) should be translated into a specific requirement. Use the finding-to-requirement translation pattern to ensure nothing is lost between analysis and specification.
- Non-functional requirements derived from system analysis are more credible and actionable than generic NFRs. Always anchor them to the volume, sensitivity, and integration complexity findings from the analysis.
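The second takeaway — that AI-generated models are text and should feed diagramming tools — can be sketched as a small helper that turns data-flow tuples into Mermaid flowchart syntax. The flow names here are invented for illustration:

```python
def flows_to_mermaid(flows):
    """Convert (source, label, target) data-flow tuples into Mermaid
    flowchart syntax suitable for pasting into documentation systems."""
    lines = ["flowchart LR"]
    for source, label, target in flows:
        lines.append(f'    {source} -->|"{label}"| {target}')
    return "\n".join(lines)

print(flows_to_mermaid([
    ("Requester", "PO request", "ProcurementPlatform"),
    ("ProcurementPlatform", "PO via BAPI", "SAP"),
]))
```

Asking the AI to emit Mermaid or PlantUML directly usually works too; the point is that the diagram source stays text, so it can be versioned and diffed alongside the requirements it depicts.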