
Mobile testing

How to generate a device matrix test plan with AI — OS versions, screen sizes, orientations?

Device fragmentation is one of the defining challenges of mobile QA. Android alone has thousands of device/OS combinations in active use; iOS is more controlled but introduces its own complexity with notch variants, Dynamic Island, ProMotion displays, and split-screen multitasking. Without a principled strategy, device coverage becomes either dangerously thin or unreasonably expensive.

AI doesn't replace analytics or device farm access — but it dramatically accelerates the reasoning work required to build a defensible, risk-stratified device matrix.

The inputs you need before prompting

A good device matrix starts with real data, not guesses. Before prompting AI, gather the inputs below (a short scripting sketch for turning them into tier assignments follows the list):

  • Analytics data: Your app's actual device/OS distribution from Firebase Analytics, Crashlytics, or your analytics platform (last 90 days is a reasonable window)
  • Crash distribution: Which device/OS combos have the highest crash rate in production
  • OS market share: Cross-reference with public data (StatCounter, Android Studio device distribution chart)
  • Business-critical regions: Country-specific device preferences (e.g., Samsung dominance in Korea, Xiaomi in India, older OS prevalence in emerging markets)
  • App store minimum OS requirement: Your declared minSdkVersion (Android) / deployment target (iOS)
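
Once these inputs are in hand, the tier assignment itself is mostly mechanical. Here is a minimal TypeScript sketch of that step; the thresholds, field names, and export shape are illustrative assumptions, not a standard:

```typescript
// Sketch: derive tier candidates from an analytics export plus crash data.
// The 5% / 2% / 1.5x thresholds are illustrative; tune them to your risk appetite.

interface DeviceStat {
  model: string;
  os: string;
  sessionShare: number;      // fraction of sessions, last 90 days
  crashRateMultiple: number; // crash rate relative to fleet average
}

function assignTier(d: DeviceStat): 1 | 2 | 3 {
  if (d.sessionShare >= 0.05) return 1;                               // every build
  if (d.sessionShare >= 0.02 || d.crashRateMultiple >= 1.5) return 2; // release candidates
  return 3;                                                           // major releases only
}

const fleet: DeviceStat[] = [
  { model: 'iPhone 15 Pro',      os: 'iOS 17.4',   sessionShare: 0.18, crashRateMultiple: 1.0 },
  { model: 'Samsung Galaxy A54', os: 'Android 13', sessionShare: 0.07, crashRateMultiple: 2.3 },
  { model: 'OnePlus Nord',       os: 'Android 11', sessionShare: 0.02, crashRateMultiple: 1.1 },
];

for (const d of fleet) {
  console.log(`${d.model} (${d.os}) -> Tier ${assignTier(d)}`);
}
```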

Prompt: Generate a prioritized device matrix

I'm building a device test matrix for a fintech mobile app. Generate a prioritized 
test matrix with rationale for each tier.

App context:
- Platform: iOS + Android
- minSdkVersion: Android 10 (API 29); iOS minimum deployment target: iOS 16
- Key markets: United States (60%), United Kingdom (20%), India (15%), Brazil (5%)
- User base analytics (top devices by session count, last 90 days):
  1. iPhone 15 Pro — iOS 17.4 — 18%
  2. iPhone 14 — iOS 17.3 — 12%
  3. Samsung Galaxy S24 — Android 14 — 9%
  4. iPhone 13 — iOS 17.2 — 8%
  5. Samsung Galaxy A54 — Android 13 — 7%
  6. Pixel 8 — Android 14 — 5%
  7. iPhone SE (3rd gen) — iOS 16.7 — 4%
  8. Xiaomi Redmi Note 12 — Android 12 — 3%
  9. Samsung Galaxy S22 — Android 13 — 3%
  10. OnePlus Nord — Android 11 — 2%

Crash hotspots from Crashlytics:
  - Samsung Galaxy A-series Android 12: 2.3× average crash rate
  - iPhone SE iOS 16: 1.8× average crash rate

Generate:
1. Tier 1 (run on every build): Top 5 devices with rationale
2. Tier 2 (run on every release candidate): 10-15 devices covering OS spread
3. Tier 3 (run on major releases only): Long tail devices and crash hotspots
4. Screen-size coverage check: Ensure matrix covers small (4.7"), mid (6.1"), large (6.7"+), and tablet
5. Orientation coverage: Which test types need portrait-only vs. portrait+landscape

Generating an Android-specific fragmentation matrix

Android fragmentation requires additional depth around OEM customizations:

Generate an Android-specific test matrix that addresses OEM fragmentation risks.
For our app's top Android device segments, identify:

1. OEM-specific behaviors to test per manufacturer:
   - Samsung (One UI): notification permission flows, background app restrictions, 
     Bixby button interference, Samsung Internet vs. Chrome WebView behavior
   - Xiaomi (MIUI): aggressive battery optimization that kills background processes, 
     custom permission dialogs, MIUI-specific notification channels
   - OnePlus (OxygenOS): Shelf panel interference, parallel apps feature
   - Huawei (EMUI, no Google Services): HMS vs. GMS service availability

2. Android version-specific behaviors to test:
   - Android 10: Scoped storage introduced (apps may still opt out)
   - Android 11: Scoped storage enforced; one-time permissions, foreground service changes
   - Android 12: Approximate location permission, app hibernation
   - Android 13: Per-app notification permission (POST_NOTIFICATIONS)
   - Android 14: Selected photos access, health permissions

3. Screen density variations to test:
   - mdpi (160dpi), hdpi (240dpi), xhdpi (320dpi), xxhdpi (480dpi), xxxhdpi (640dpi)
   - Foldable screens: Galaxy Z Fold (unfolded inner display + cover display)

App type: [describe your app type]
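
Several of the version-specific behaviors above can also be guarded in application code, which gives tests a concrete seam to assert against. A sketch of the Android 13+ notification-permission gate, assuming React Native's built-in PermissionsAndroid module (POST_NOTIFICATIONS is exposed there in recent RN versions; verify against yours):

```typescript
// Sketch: request POST_NOTIFICATIONS only where it exists (Android 13+, API 33).
// Assumes React Native's PermissionsAndroid; check availability in your RN version.
import { PermissionsAndroid, Platform } from 'react-native';

export async function ensureNotificationPermission(): Promise<boolean> {
  if (Platform.OS !== 'android') return true;
  // Before API 33, notifications require no runtime permission.
  if (Number(Platform.Version) < 33) return true;
  const result = await PermissionsAndroid.request(
    PermissionsAndroid.PERMISSIONS.POST_NOTIFICATIONS,
  );
  return result === PermissionsAndroid.RESULTS.GRANTED;
}
```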

Generating an iOS device matrix with platform-specific features

Generate an iOS device test matrix that covers Apple's hardware-specific 
feature set. Our app uses: Face ID, push notifications, Siri Shortcuts, 
WidgetKit, and StoreKit 2.

Generate test device recommendations that cover:
1. Face ID devices (iPhone X+) vs. Touch ID devices (iPhone SE)
2. Dynamic Island devices (iPhone 14 Pro+) vs. notch devices (iPhone 12-14 standard)
3. ProMotion 120Hz display (iPhone 13 Pro+) — test animations and scroll physics
4. iPhone SE form factor (4.7" screen): layout critical, smallest supported viewport
5. iPad variants: iPad mini, iPad standard, iPad Pro (M-series) — 
   if your app supports iPadOS multitasking

For each device category, specify which test scenarios are uniquely required 
(e.g., Dynamic Island live activity tests only on 14 Pro+).

Learning Tip: A device matrix that is never updated is worse than no matrix — it creates false confidence. Schedule a quarterly review of your device matrix using analytics data. Devices fall out of relevance as users upgrade; new device categories emerge (foldables, large-screen Android). Prompt AI to help you refresh the matrix quarterly: feed it your new analytics export and ask it to compare against your existing matrix, flag any devices that have dropped below your usage threshold, and suggest replacements.
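
The comparison step in that quarterly refresh can be partially scripted before AI gets involved. A small sketch, assuming a flat analytics export and a 2% usage threshold (both assumptions; adjust to your data):

```typescript
// Sketch: flag matrix devices that fell below the usage threshold and
// surface replacement candidates. Field names are illustrative.
interface UsageRow { model: string; sessionShare: number; }

export function refreshMatrix(matrix: string[], latest: UsageRow[], threshold = 0.02) {
  const usage = new Map(latest.map(r => [r.model, r.sessionShare]));
  const dropped = matrix.filter(m => (usage.get(m) ?? 0) < threshold);
  const candidates = latest
    .filter(r => r.sessionShare >= threshold && !matrix.includes(r.model))
    .map(r => r.model);
  return { dropped, candidates }; // feed both lists into your AI review prompt
}
```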


How to use AI to generate mobile-specific failure scenarios — connectivity, battery, permissions?

Mobile apps operate in an environment that is fundamentally hostile compared to a controlled desktop browser: networks drop mid-request, batteries run low and restrict background activity, permissions get revoked unexpectedly, and the OS aggressively kills background processes. Most mobile apps are undertested for these conditions because they're hard to simulate manually and even harder to systematize.

AI is particularly good at generating the taxonomy of mobile-specific failure scenarios: because these patterns are extensively covered in platform documentation and public failure reports, AI can enumerate them comprehensively and generate test cases for each.

Connectivity failure scenarios

Generate a comprehensive test suite for network connectivity failure scenarios 
in our mobile app. The app performs these network operations:
- Authentication (JWT refresh)
- Feed loading (paginated API calls)
- Real-time messaging (WebSocket)
- File upload (multipart, up to 50MB)
- In-app purchases (StoreKit 2 / Play Billing)

Generate test scenarios for:
1. Full offline mode — device has no connectivity:
   - App launch when offline
   - Transition to offline mid-session
   - Cached content availability
   - Queue/retry behavior for write operations

2. Poor connectivity — simulate with Xcode Network Link Conditioner / Android emulator network throttling:
   - 3G (750Kbps, 100ms RTT)
   - Edge (100Kbps, 500ms RTT)
   - High latency WiFi (5Mbps, 2000ms RTT)

3. Network interruption during critical operations:
   - JWT refresh interrupted → what state does the user end up in?
   - File upload at 60% completion → does app resume or restart?
   - In-app purchase confirmation interrupted → is purchase correctly recovered?
   - WebSocket disconnection during active message send

4. Network type switching:
   - WiFi → Cellular mid-session
   - Cellular → WiFi (app should adopt the better connection seamlessly, without disrupting active operations)
   - VPN activation/deactivation mid-session

For each scenario, specify: test tool (Xcode Network Link Conditioner, 
Android Emulator network controls, or Charles Proxy), expected behavior, 
and observable failure indicators.
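
The "queue/retry behavior for write operations" scenario in part 1 is worth sketching in code, because it pins down what "expected behavior" actually means. A minimal offline write queue, assuming the @react-native-community/netinfo library; a real implementation also needs persistence and backoff:

```typescript
// Sketch: buffer writes while offline, flush when connectivity returns.
// Assumes @react-native-community/netinfo; deliberately has no persistence.
import NetInfo from '@react-native-community/netinfo';

type PendingWrite = () => Promise<void>;
const queue: PendingWrite[] = [];

export function enqueueWrite(write: PendingWrite): void {
  queue.push(write);
}

NetInfo.addEventListener(state => {
  if (state.isConnected && state.isInternetReachable !== false) {
    void flush();
  }
});

async function flush(): Promise<void> {
  while (queue.length > 0) {
    try {
      await queue[0]();
      queue.shift(); // dequeue only after a confirmed success
    } catch {
      break; // stop; wait for the next connectivity event to retry
    }
  }
}
```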

Battery and power state scenarios

Generate test scenarios for battery and power state behavior in our app.
The app has: background sync, location tracking (when-in-use), 
push notification handling, and background audio playback.

Generate tests for:
1. Low Power Mode (iOS) / Battery Saver (Android):
   - Background refresh suspended — does the app handle stale data gracefully?
   - Location updates reduced — does location-dependent feature degrade gracefully?
   - Animations reduced (iOS Low Power Mode) — any UI glitches?

2. Background app lifecycle:
   - App backgrounded for 5 minutes, foregrounded — state restoration correct?
   - App backgrounded for 30 minutes — session still valid? Data still fresh?
   - App terminated by OS (memory pressure) — deep link opens correct state?

3. Doze mode (Android):
   - Device in Doze mode for 1 hour — do push notifications deliver when Doze exits?
   - Background sync behavior during Doze windows

4. Battery percentage edge cases:
   - Critically low battery (≤5%) — app behavior? Warning displayed?

Specify: how to simulate each condition on iOS Simulator and Android Emulator, 
and which conditions require physical device testing.
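
For the Android conditions, much of the simulation the prompt asks about can be driven from a test helper over adb. A sketch in TypeScript (Node); the adb commands are standard, but run them only against emulators or dedicated test devices, since they change global device state:

```typescript
// Sketch: drive Android power-state simulation over adb from a Node test helper.
import { execSync } from 'node:child_process';

const adb = (cmd: string): string =>
  execSync(`adb shell ${cmd}`, { encoding: 'utf8' });

export function simulateLowBattery(level = 5): void {
  adb('dumpsys battery unplug');             // pretend the charger is disconnected
  adb(`dumpsys battery set level ${level}`); // report the given battery level
}

export function enterDoze(): void {
  adb('dumpsys deviceidle force-idle');      // force the device into Doze
}

export function resetPowerState(): void {
  adb('dumpsys battery reset');
  adb('dumpsys deviceidle unforce');
}
```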

Permission denial and revocation scenarios

This is one of the most commonly undertested areas in mobile QA:

Generate a comprehensive permission testing matrix for our iOS and Android app.
Required permissions: Camera, Microphone, Photo Library, Location (precise + approximate),
Notifications, Contacts, and Biometrics (Face ID / Fingerprint).

For each permission, generate test cases for:
1. First-time permission request:
   - User grants permission → feature available
   - User denies permission → graceful degradation (no crash, helpful explanation shown)
   - User taps "not now" / dismisses without deciding (iOS only)

2. Post-denial recovery:
   - User denied → navigates to Settings and grants → feature available on return
   - User denied → feature entry point shows "enable in settings" UI with deep link to settings

3. Runtime revocation:
   - Permission granted → user revokes in Settings → returns to app → graceful handling
   - App should detect revocation on foreground (not only at launch)

4. iOS-specific permission tiers:
   - Location: Always vs. When In Use vs. Precise vs. Approximate
   - Photos: Full access vs. Selected Photos (iOS 14+)
   - Notifications: alert + badge + sound vs. partial combinations

5. Android 13+ per-app notification permission (POST_NOTIFICATIONS):
   - App targets SDK 33: dialog shown at first notification-triggering action
   - User denies: in-app fallback messaging

For each test case, include: precondition setup (permission state), 
steps to reproduce, and expected result.
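
The post-denial recovery flow in part 2 maps almost one-to-one onto code. A sketch assuming the react-native-permissions library, with camera as the example permission:

```typescript
// Sketch: deny -> "enable in settings" recovery, using react-native-permissions.
import { Platform } from 'react-native';
import { check, request, openSettings, PERMISSIONS, RESULTS } from 'react-native-permissions';

const CAMERA =
  Platform.OS === 'ios' ? PERMISSIONS.IOS.CAMERA : PERMISSIONS.ANDROID.CAMERA;

export async function ensureCamera(): Promise<'ok' | 'settings' | 'denied'> {
  const status = await check(CAMERA);
  if (status === RESULTS.GRANTED) return 'ok';
  if (status === RESULTS.BLOCKED) {
    await openSettings(); // user must re-enable manually; re-check on return
    return 'settings';
  }
  const result = await request(CAMERA);
  return result === RESULTS.GRANTED ? 'ok' : 'denied';
}
```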

OS interruption scenarios

Generate test scenarios for OS-level interruptions during active app use:
1. Phone call received mid-session → app backgrounded → call ends → app returns
2. Siri/Google Assistant invoked mid-session
3. Incoming push notification tapped → app jumps to different section → back navigation correct
4. Emergency alert (government emergency broadcast) interruption
5. Screen rotation mid-form entry → form state preserved
6. Screenshot taken (relevant for sensitive content apps — content masking)
7. App switcher opened and closed without switching apps
8. Dynamic Island interaction interrupts active video playback (iPhone 14 Pro+)
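
Most of these interruptions end the same way: the app returns to the foreground and must re-validate its state. A sketch of that hook using React Native's AppState; the re-check helpers are hypothetical placeholders:

```typescript
// Sketch: re-validate app state whenever the app returns to the foreground.
import { AppState, AppStateStatus } from 'react-native';

let lastState: AppStateStatus = AppState.currentState;

AppState.addEventListener('change', next => {
  const returnedToForeground =
    /inactive|background/.test(lastState) && next === 'active';
  lastState = next;
  if (returnedToForeground) {
    // Hypothetical helpers; wire these to your own modules:
    // recheckPermissions();          // catches runtime revocation
    // restoreInterruptedPlayback();  // resumes media after a call/Siri
  }
});
```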

Learning Tip: Don't try to run all mobile failure scenario tests manually on every build — that's infeasible. Use AI to help you classify these scenarios into three tiers: automated (run every build via device farm or emulator), semi-automated (manual execution with automated assertion capture), and manual-only (physical device required). The connectivity and permission tests are often fully automatable on emulators; battery/power state tests usually require physical devices.


How to test iOS vs. Android consistency with AI assistance?

Building a consistent experience across iOS and Android is one of the most nuanced challenges in mobile QA. "Consistent" doesn't mean identical — it means users on both platforms get an experience that meets their platform conventions, behaves reliably, and delivers the same business value. AI helps you build systematic cross-platform test strategies that respect platform differences while catching genuine divergence.

Defining "consistency" correctly

Before prompting, establish what type of consistency you're testing:

Consistency type | Description | Example
Functional parity | Both platforms support the same features | Biometric auth on both
Data consistency | Same data, same business rules | Order history identical
Visual consistency | Brand-consistent, not pixel-identical | Same color palette, slightly different fonts
Behavioral consistency | Interactions behave predictably | Both platforms scroll smoothly
Platform convention compliance | Each platform follows its own HIG/Material guidelines | iOS uses swipe-to-go-back; Android uses back button

Functional parity and data consistency are testable systematically. Visual and behavioral consistency require judgment. Platform convention compliance requires platform expertise.

Generating a cross-platform parity checklist with AI

I'm building a cross-platform consistency test plan for our iOS and Android apps.
The app is a project management tool with: task creation/editing, team messaging, 
file attachments, notifications, and biometric lock.

Generate a cross-platform parity test checklist that covers:
1. Feature parity: List all features and mark expected iOS/Android availability
2. Data parity test cases: 
   - Actions taken on iOS should be reflected on Android (and vice versa)
   - Include eventual consistency cases (push notification delivery, sync)
3. Navigation model differences to explicitly test:
   - iOS: swipe-back gesture, tab bar navigation, sheet presentation
   - Android: back button/gesture, bottom nav, activity stack
4. Input method differences:
   - iOS keyboard behavior vs. Android keyboard
   - iOS long-press context menus vs. Android contextual action bar
5. Platform-specific feature implementations:
   - iOS: Share Sheet vs. Android: Share Intent
   - iOS: Haptic feedback vs. Android: Vibration API

For each category, specify which tests must produce identical outcomes 
(functional/data) vs. which tests verify platform-appropriate behavior 
(navigation, input conventions).
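
To make "run the same test on both platforms" literal, many teams drive one spec through two Appium sessions. A sketch using WebdriverIO; the device names, app paths, and server address are placeholders for your own setup:

```typescript
// Sketch: one session factory, two platform capability sets, shared parity specs.
import { remote } from 'webdriverio';

const capabilities = {
  ios: {
    platformName: 'iOS',
    'appium:automationName': 'XCUITest',
    'appium:deviceName': 'iPhone 15 Pro', // placeholder
    'appium:app': '/path/to/MyApp.app',   // placeholder
  },
  android: {
    platformName: 'Android',
    'appium:automationName': 'UiAutomator2',
    'appium:deviceName': 'Pixel 8',       // placeholder
    'appium:app': '/path/to/my-app.apk',  // placeholder
  },
};

export async function paritySession(platform: 'ios' | 'android') {
  return remote({
    hostname: 'localhost', // assumes a local Appium server
    port: 4723,
    capabilities: capabilities[platform],
  });
}
```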

Generating platform-specific bug scenario prompts

AI is good at enumerating the specific places where iOS/Android divergence commonly causes bugs:

Generate a targeted test case list for known iOS/Android divergence hotspots.
Our tech stack: React Native 0.73 with Expo.

For each area, generate specific test cases that catch real divergence bugs in 
React Native apps:

1. Text rendering:
   - Font fallback behavior (iOS uses San Francisco; Android uses Roboto)
   - Line height differences between platforms
   - Text truncation at the same string length on different screen densities

2. Touch handling:
   - TouchableOpacity vs. Pressable feedback differences
   - Scroll event velocity differences
   - Keyboard avoiding view behavior

3. Date/time handling:
   - Date picker UI differs significantly — test all user flows on both platforms
   - Locale-specific date formatting (MM/DD vs. DD/MM)

4. Permissions UX:
   - iOS shows system permission dialog; Android 13+ shows both system and in-app
   - Camera/microphone permission request flows differ

5. Deep link handling:
   - iOS Universal Links vs. Android App Links (both need server-side verification)
   - Custom URL scheme fallback behavior

6. Background behavior:
   - iOS suspends JS execution more aggressively
   - Android task stack behavior differs
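
Two of these hotspots have well-known code-level handling worth showing, since the fix is often a single Platform check. A sketch using standard React Native patterns, not project-specific code:

```tsx
// Sketch: two common divergence points and their conventional handling.
import React from 'react';
import { KeyboardAvoidingView, Platform, Text } from 'react-native';

// 1. Keyboard avoidance: 'padding' is the usual choice on iOS, 'height' on Android.
export const FormWrapper = ({ children }: { children: React.ReactNode }) => (
  <KeyboardAvoidingView behavior={Platform.OS === 'ios' ? 'padding' : 'height'}>
    {children}
  </KeyboardAvoidingView>
);

// 2. Dates: never hard-code MM/DD; let the user's locale decide the format.
export const OrderDate = ({ date, locale }: { date: Date; locale: string }) => (
  <Text>{date.toLocaleDateString(locale)}</Text>
);
```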

Using AI to analyze cross-platform bug reports

When the same feature produces different defects on each platform, AI helps you reason through the root cause:

I have two bug reports for the same feature (file upload) on different platforms.
Analyze both and identify:
1. Whether these are the same underlying bug or different bugs
2. Which platform implementation is non-standard
3. The most likely root cause in our React Native code
4. Suggested test cases that would catch this regression

iOS bug: "When uploading a file > 10MB on iOS, the app shows 'Upload complete' 
but the file appears corrupted on the server. Reproducible on iPhone 12+ running 
iOS 16+. Not reproducible on older devices."

Android bug: "When uploading a file > 10MB on Android 12+ (Scoped Storage), 
the upload fails silently with no error message. The file upload progress bar 
reaches 100% but then the progress bar disappears without a success message."

Learning Tip: Cross-platform parity testing is most effective when run simultaneously — run the same test scenario on an iOS device and an Android device at the same time, comparing behavior in real time. This reveals behavioral differences that async testing often misses. Use a shared test session with screen recording on both devices and walk through the same flow in parallel. AI can help you script these parallel test sessions in advance.


How to test deep links, push notifications, and background state with AI?

Deep links, push notifications, and background state management are the three mobile-specific capabilities that most frequently break in production — because they're hard to exercise in development environments, they involve OS-level coordination, and they depend on server infrastructure (APNs, FCM) that isn't always available in test environments.

AI helps you generate comprehensive test matrices for all three areas and build the tooling to make them testable.

Deep link test scenario generation

Generate a comprehensive deep link test suite for our e-commerce app.
Deep link scheme: myapp:// (custom URL scheme) and https://myapp.com (Universal Links/App Links)

App destinations that support deep linking:
- Product page: /products/{productId}
- Category listing: /categories/{categorySlug}
- Order detail: /orders/{orderId} (requires authentication)
- Promotional offer: /offers/{offerId} (can be time-limited/expired)
- User profile: /profile (requires authentication)
- Checkout with pre-filled cart: /checkout?items={itemIds}&promo={promoCode}

Generate test cases covering:
1. Basic navigation:
   - Deep link to each destination while app is in foreground
   - Deep link to each destination while app is in background
   - Deep link to each destination while app is terminated (cold launch)

2. Authentication gates:
   - Deep link to authenticated screen while logged out → redirect to login → return to destination
   - Deep link to authenticated screen while logged in → direct navigation

3. Parameter handling:
   - Valid parameters → correct destination loaded with correct data
   - Invalid productId (non-existent) → graceful 404 handling
   - Expired offerId → expired offer message shown, not generic error
   - Malformed URL → app doesn't crash, falls back to home

4. Universal Links (iOS) / App Links (Android) specific:
   - Link opens in-app (not browser) when app is installed
   - Link falls back to browser when app is not installed
   - AASA / Digital Asset Links file verification

5. Edge cases:
   - Deep link received while another deep link navigation is in progress
   - Deep link to a screen deep in the navigation stack (correct back stack built?)
   - Duplicate deep link (same link tapped twice rapidly)
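
The malformed-URL and invalid-parameter cases in part 3 are easiest to test when link parsing is a pure function. A sketch for the https (Universal Link / App Link) form; custom-scheme URLs parse differently under WHATWG URL rules, and the route shapes mirror the example app:

```typescript
// Sketch: defensive deep link parsing; never crash, always fall back to home.
type Route =
  | { screen: 'product'; productId: string }
  | { screen: 'order'; orderId: string; requiresAuth: true }
  | { screen: 'home' };

export function parseDeepLink(raw: string): Route {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return { screen: 'home' }; // malformed URL: safe fallback
  }
  const [head, id] = url.pathname.split('/').filter(Boolean);
  if (head === 'products' && id) return { screen: 'product', productId: id };
  if (head === 'orders' && id) return { screen: 'order', orderId: id, requiresAuth: true };
  return { screen: 'home' }; // unknown destination: safe fallback
}
```

Because parsing is pure, the invalid-productId, expired-offer, and malformed-URL cases from the prompt become ordinary unit tests rather than end-to-end flows.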

Push notification test scenario generation

Generate a push notification test matrix for our app.
Notification types: transactional (order updates), marketing (promotional), 
system (security alerts), and real-time (new messages).

Test matrix covering:

1. Delivery scenarios:
   - App in foreground: notification received → in-app banner shown, not system notification
   - App in background: system notification shown → tap → app opens to correct screen
   - App terminated: system notification shown → tap → app cold-launches to correct screen
   - Notification received while Do Not Disturb active (iOS) / Priority mode (Android)

2. Permission states:
   - Notifications permitted: all notification types delivered
   - Notifications denied: no system notifications; in-app fallbacks working
   - Provisional authorization (iOS 12+): quiet delivery to notification center

3. Notification content:
   - Correct title, body, and data payload for each notification type
   - Localized content for each supported locale
   - Deep link in notification opens correct screen
   - Rich notifications: image attachment renders (iOS rich push, Android BigPicture)

4. Notification actions:
   - Action buttons in notification (e.g., "Mark as Read", "Reply") function correctly
   - Action taken on notification reflected in app state when opened

5. APNs (iOS) / FCM (Android) specific:
   - Token refresh handling: when push token rotates, new token registered to server
   - Silent push (background refresh): app data updated without user notification
   - FCM priority: high priority vs. normal priority delivery timing

For each scenario, provide: test setup (how to trigger), steps, 
expected result, and how to verify the assertion.
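
The token-refresh scenario in part 5 is the one most often missed in code review, so it's worth pinning down. A sketch assuming @react-native-firebase/messaging; the registration endpoint is hypothetical:

```typescript
// Sketch: push-token lifecycle, including rotation. Endpoint is hypothetical.
import messaging from '@react-native-firebase/messaging';

async function registerTokenWithServer(token: string): Promise<void> {
  await fetch('https://api.example.com/devices', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ token }),
  });
}

export async function initPushToken(): Promise<void> {
  // On iOS, request notification permission before relying on the token.
  await messaging().requestPermission();
  await registerTokenWithServer(await messaging().getToken());

  // Fires when FCM/APNs rotates the token: exactly the case the matrix tests.
  messaging().onTokenRefresh(newToken => {
    void registerTokenWithServer(newToken);
  });
}
```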

Background state and app lifecycle test generation

Generate a test suite for background app lifecycle and state restoration.
Our app has: active user sessions (JWT), real-time WebSocket connection, 
offline-capable data cache, and background location tracking.

Generate tests for:
1. State restoration after backgrounding:
   - Active form (partially completed) → app backgrounded → foregrounded → form state preserved
   - Scroll position in feed → backgrounded → foregrounded → scroll position maintained
   - Active video playback → backgrounded → foregrounded → playback resumes at correct position

2. Session management:
   - App backgrounded for 15 minutes → foregrounded → session still valid, WebSocket reconnected
   - App backgrounded for 60 minutes → foregrounded → session expired → user prompted to re-authenticate
   - Session expiry during backgrounded state → notification or in-app message on foreground

3. Data sync on foreground:
   - Data modified on another device while app is backgrounded → foreground sync occurs
   - Conflict resolution when offline changes conflict with server changes

4. Background task completion:
   - File upload started → app backgrounded → upload completes → success notification delivered
   - File upload started → app terminated → upload state on re-launch (resume or restart?)

5. iOS-specific background modes:
   - Background fetch: enabled/disabled → data freshness on foreground
   - Background processing (BGTaskScheduler): heavy tasks complete in background

6. Android-specific:
   - WorkManager constraints: task runs only on WiFi and while charging — correct deferral behavior
   - Foreground service notification: visible during background operation, dismisses on complete
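
The session-expiry scenarios in part 2 reduce to tracking how long the app was away. A sketch using AppState; the TTL and the re-auth/reconnect helpers are assumptions standing in for your auth and WebSocket layers:

```typescript
// Sketch: decide between silent reconnect and re-authentication on foreground.
import { AppState } from 'react-native';

const SESSION_TTL_MS = 30 * 60 * 1000; // assumed 30-minute session window
let backgroundedAt: number | null = null;

AppState.addEventListener('change', state => {
  if (state === 'background') {
    backgroundedAt = Date.now();
  } else if (state === 'active' && backgroundedAt !== null) {
    const awayMs = Date.now() - backgroundedAt;
    backgroundedAt = null;
    if (awayMs > SESSION_TTL_MS) {
      // promptReauthentication(); // hypothetical helper
    } else {
      // reconnectWebSocket();     // hypothetical helper
    }
  }
});
```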

Learning Tip: Push notification testing is one area where physical device testing is non-negotiable. Simulators and emulators can deliver push notifications in limited ways (Xcode Simulator supports local push simulation via drag-and-drop of .apns files; Android emulator supports FCM with the Google Play services build), but real-world APNs/FCM delivery timing, token rotation, and Doze-mode delivery are only reliable on physical devices. Build at least one physical device into your notification test workflow.