How We Test New Web and Mobile Apps Before Delivery: Our Full QA and Confirmation Process
Every application Arekan delivers goes through a structured multi-layer testing process before it reaches your users. Testing is not a single phase — it is a discipline that runs in parallel with development, from the first component to the final deployment. This guide documents exactly how we test newly built web and mobile applications, what tools and frameworks we use, and the specific confirmation steps we require before any go-live.
Why Testing Phases Cannot Be Skipped
The most expensive bugs are the ones found by your users, not your team. A post-launch bug in a payment flow, authentication system, or data processing pipeline costs orders of magnitude more to fix than one found in development. Our testing process exists to catch issues at the cheapest possible point: before users see them.
- Bug detection cost multiplier: a bug caught in development costs 1x to fix. In staging it costs 6x. In production it costs 15–100x — factoring in incident response, data recovery, customer communication, and reputational damage
- Regression prevention: as applications grow, new code breaks old functionality. Automated test suites catch regressions before they ship, eliminating the 'we broke something we didn't touch' class of production incidents
- Client confidence: a test suite with documented coverage gives the client objective proof of quality — not just the team's verbal assurance that 'it works'
- Compliance requirements: enterprise clients in finance, healthcare, and government sectors often require demonstrable testing evidence as part of their vendor qualification process
- Onboarding speed: new developers get up to speed significantly faster in a well-tested codebase, because tests serve as living documentation of how the system is supposed to behave
The 5 Layers of Web Application Testing
Web applications require testing at multiple layers, each catching a different class of problem. Skipping any layer leaves a category of bugs undetected until production. Here is what we run on every web application we deliver:
- Unit tests: individual functions, utilities, and business logic components tested in isolation. We target 80%+ branch coverage on business-critical code (payment processing, authentication, data transformation). Stack: Vitest for Vue/Nuxt, Jest for Node.js, pytest for Python (first sketch after this list)
- Integration tests: verify that multiple components work correctly in combination, such as API routes with database queries, authentication middleware with route guards, and third-party service integrations. These catch the wiring errors that unit tests miss (second sketch below)
- End-to-end (E2E) tests: simulate real user journeys through a browser, covering login, navigation, form submission, checkout flows, and error states. Stack: Playwright with automated screenshot comparison. We cover the 5–10 most critical user journeys for each application (third sketch below)
- Load and performance tests: simulate concurrent users to verify the application handles production-level traffic. We use k6 to model realistic usage patterns. Every endpoint is profiled; slow queries (>200ms) are flagged before launch (fourth sketch below)
- Security scanning: automated OWASP Top 10 scanning using OWASP ZAP integrated into the CI pipeline, plus manual review of authentication, authorization, and data validation logic. Our security team signs off on any application handling sensitive user data
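To make these layers concrete, here is a minimal unit-test sketch in Vitest. convertAmount is a hypothetical business-logic function that converts an amount at a given rate and rounds to two decimals; the point is the shape of the test, covering a happy path, an edge case, and an invalid input:

```typescript
// currency.test.ts: convertAmount is a hypothetical business-logic function
// that converts an amount at a given rate and rounds to 2 decimal places.
import { describe, expect, it } from 'vitest';
import { convertAmount } from '../src/utils/currency';

describe('convertAmount', () => {
  it('converts at the given rate and rounds to 2 decimals', () => {
    expect(convertAmount(100, 3.6725)).toBe(367.25); // e.g. USD to AED
  });

  it('handles a zero amount', () => {
    expect(convertAmount(0, 3.6725)).toBe(0);
  });

  it('rejects negative amounts', () => {
    expect(() => convertAmount(-5, 3.6725)).toThrow();
  });
});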
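At the integration layer, the value comes from exercising real wiring rather than mocks. Here is a minimal sketch using Vitest with Supertest; Supertest and the createApp factory are illustrative assumptions, not a fixed part of our stack:

```typescript
// login.integration.test.ts: createApp is a hypothetical factory that wires
// routes, middleware, and a test database into an Express-style app.
import { beforeAll, describe, expect, it } from 'vitest';
import request from 'supertest';
import { createApp } from '../src/app';

describe('POST /api/login', () => {
  let app: Awaited<ReturnType<typeof createApp>>;

  beforeAll(async () => {
    app = await createApp(); // real routes + real (test) database, no mocks
  });

  it('rejects invalid credentials with 401', async () => {
    await request(app)
      .post('/api/login')
      .send({ email: 'user@example.com', password: 'wrong' })
      .expect(401);
  });

  it('returns a session token for valid credentials', async () => {
    const res = await request(app)
      .post('/api/login')
      .send({ email: 'user@example.com', password: 'correct-horse' })
      .expect(200);
    expect(res.body.token).toBeDefined(); // middleware + route + DB all wired
  });
});
```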
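For E2E, each critical journey becomes its own Playwright spec. A minimal sketch of a checkout journey follows; the staging URL, form labels, and baseline screenshot name are illustrative:

```typescript
// checkout.spec.ts: one critical journey as a Playwright spec. URLs, labels,
// and the baseline screenshot name are illustrative.
import { test, expect } from '@playwright/test';

test('logged-in user can complete checkout', async ({ page }) => {
  await page.goto('https://staging.example.com/login');
  await page.getByLabel('Email').fill('qa-user@example.com');
  await page.getByLabel('Password').fill('test-password');
  await page.getByRole('button', { name: 'Sign in' }).click();

  await page.goto('https://staging.example.com/products/42');
  await page.getByRole('button', { name: 'Add to cart' }).click();
  await page.getByRole('link', { name: 'Checkout' }).click();
  await page.getByRole('button', { name: 'Place order' }).click();

  await expect(page.getByText('Order confirmed')).toBeVisible();

  // Visual regression: fails if the page drifts from the committed baseline.
  await expect(page).toHaveScreenshot('order-confirmation.png');
});
```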
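And for load testing, a minimal k6 sketch that ramps to a steady number of virtual users and turns a 200ms response-time budget into a hard pass/fail threshold. The endpoint and user counts are illustrative; real scripts model the client's actual usage patterns:

```typescript
// load-test.ts: a k6 script that ramps to a steady level of virtual users
// and fails the run if the 95th-percentile response time exceeds 200ms.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '1m', target: 50 }, // ramp up to 50 virtual users
    { duration: '3m', target: 50 }, // hold steady-state load
    { duration: '1m', target: 0 },  // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<200'], // the 200ms budget as a hard gate
  },
};

export default function () {
  const res = http.get('https://staging.example.com/api/products');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between requests per virtual user
}
```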
Mobile-Specific Testing: What Is Different and Why It Matters
Mobile applications face a fundamentally different testing challenge than web applications. Device fragmentation, OS version diversity, network variability, and hardware constraints create failure modes that do not exist in browser-based applications. Here is our mobile-specific testing layer:
- Device matrix testing: we test on a defined matrix of real physical devices (not just emulators), covering the top 80% of the target market's actual device types. For Middle East clients this means current and two-generation-old iPhones, Samsung Galaxy (S and A series) phones, and Xiaomi devices, which dominate the market
- OS version coverage: we test on iOS 16, 17, and 18 and Android 12, 13, 14, and 15. The most common production crashes we see are caused by OS-specific API changes that only surface on older versions still in wide use
- Network condition simulation: we test under 4G, 3G, and weak WiFi conditions using network throttling. Applications that work on fast connections frequently break on slow ones — timeouts, missing loading states, and broken retry logic are the top findings
- Permission and lifecycle testing: mobile apps must handle permission denials (camera, location, notifications), app backgrounding and foreground resumption, low memory conditions, and interruptions (calls, notifications). We test all of these transitions explicitly (see the sketch after this list)
- Crash reporting integration: before any mobile release, we verify that Sentry or Firebase Crashlytics is correctly integrated and reporting crashes to the right project — so that if something does go wrong post-launch, it is immediately visible
- App store compliance pre-check: we run Apple App Store and Google Play Store guideline checks before submission, including privacy manifest requirements, permission justifications, and content rating declarations, avoiding the multi-day review rejection cycle that catches teams off guard
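To illustrate the permission and lifecycle layer, here is a minimal sketch assuming a React Native app instrumented with Detox. The screen and test IDs are hypothetical, and the same checks can be written with Appium or XCUITest:

```typescript
// camera-permission.e2e.ts: Detox sketch; screen and test IDs are hypothetical.
import { by, device, element, expect } from 'detox';

describe('camera permission and lifecycle', () => {
  it('degrades gracefully when camera access is denied', async () => {
    // Launch with the camera permission pre-denied (iOS simulator).
    await device.launchApp({
      newInstance: true,
      permissions: { camera: 'NO' },
    });

    await element(by.id('scan-document-button')).tap();

    // The app must surface a recoverable fallback state, not crash.
    await expect(element(by.id('camera-permission-fallback'))).toBeVisible();
  });

  it('survives backgrounding and resumption mid-flow', async () => {
    await device.launchApp({ newInstance: true });
    await element(by.id('scan-document-button')).tap();

    await device.sendToHome();                      // background the app
    await device.launchApp({ newInstance: false }); // resume the same instance

    await expect(element(by.id('scan-document-screen'))).toBeVisible();
  });
});
```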
The Go-Live Confirmation Checklist
Before any application is handed over to the client or deployed to production, our team runs a structured go-live checklist. This is the final gate before launch. Everything on this list must pass — there are no exceptions for deadline pressure.
- All automated test suites pass on the production build (not just the development build): environment differences between dev and production have caused go-live failures on projects that 'passed all tests locally'
- Error monitoring is active and verified: we trigger an intentional test error and confirm it appears in Sentry or Crashlytics within 60 seconds before launch (see the first script after this list)
- Performance baseline documented: page load times (web), app launch time (mobile), and critical API response times are recorded. This baseline enables fast regression detection post-launch
- Backups and rollback plan confirmed: for web applications, we confirm the deployment pipeline has a one-command rollback to the previous version. For mobile, the previous build is retained in the app store for emergency rollback
- Client walkthrough completed: a screen-share session where the client tests the application themselves in a staging environment. This is not optional — it catches expectation gaps that no technical test can find
- Staging-to-production parity verified: database schema, environment variables, feature flags, and third-party API keys are all confirmed correct for the production environment before the switch is made (see the second script after this list)
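Two of these gates lend themselves to small scripts. First, the monitoring check: a minimal sketch using the Sentry Node SDK that fires a tagged test error and verifies the event left the process. The 60-second confirmation itself is still done by a human watching the Sentry project:

```typescript
// verify-monitoring.ts: fires a tagged test error through the Sentry Node SDK
// and confirms the event left the process. SENTRY_DSN is the production DSN.
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: 'production',
});

async function main(): Promise<void> {
  const marker = `go-live smoke test ${new Date().toISOString()}`;
  Sentry.captureException(new Error(marker));

  // flush() resolves true once queued events have been sent.
  const delivered = await Sentry.flush(5000);
  if (!delivered) {
    throw new Error('Sentry did not confirm delivery within 5 seconds');
  }
  console.log(`Sent "${marker}". Now confirm it appears in the Sentry project UI.`);
}

main();
```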
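Second, the parity gate: a minimal sketch of a pipeline script, with hypothetical variable names, that refuses to proceed if production configuration is missing or still carries staging values:

```typescript
// check-env-parity.ts: fails the deploy if required production configuration
// is missing or still carries staging values. Variable names are hypothetical.
const REQUIRED_VARS = [
  'DATABASE_URL',
  'SENTRY_DSN',
  'STRIPE_SECRET_KEY',
  'FEATURE_FLAGS_URL',
] as const;

const missing = REQUIRED_VARS.filter((name) => !process.env[name]);
if (missing.length > 0) {
  console.error(`Missing production variables: ${missing.join(', ')}`);
  process.exit(1); // hard-fails the pipeline before the switch is made
}

// Guard against staging credentials leaking into production config:
// Stripe test-mode keys are prefixed sk_test_.
if (process.env.STRIPE_SECRET_KEY?.startsWith('sk_test_')) {
  console.error('STRIPE_SECRET_KEY is a test-mode key; aborting go-live.');
  process.exit(1);
}

console.log('Environment parity check passed.');
```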