
QA Analyst (AI)
- Philippines
- Contract
- Full-time
- Ensure the quality and compatibility of AI and software components across development, staging, and production environments.
- Design, develop, and maintain automated test scripts for functional, regression, performance, and integration testing.
- Execute end-to-end test cycles including component, system, integration, UAT, and performance testing based on defined test strategies and plans.
- Track and report defects, working closely with developers and project stakeholders to ensure timely resolution.
- Support the standardization of test processes and contribute to the evolution of testing best practices within the AI domain.
- Identify, report, and escalate risks that may impact testing timelines or deliverables.
- Collaborate closely with data scientists, AI engineers, developers, business analysts, and product managers to validate features and functionality.
- Generate high-quality test documentation including test plans, strategies, test cases, execution reports, and defect logs.
- Continuously assess and recommend improvements to testing strategies, particularly in the context of AI/ML feature validation and automation.
- Apply critical thinking to simulate real-world scenarios from an end-user and data pipeline perspective.
- 6+ years of experience in Quality Assurance, with at least 3 years in test automation.
- Solid grasp of modern software development and testing methodologies (Agile, Scrum, Waterfall).
- Experience with test automation frameworks (Selenium, Playwright, TestNG).
- Proficiency in Java/Groovy or other object-oriented programming languages for test automation.
- Strong understanding of defect triage, root cause analysis, and bug lifecycle using tools like JIRA.
- Experience testing web-based platforms; AI or ML product testing experience is a strong plus.
- Bachelor’s degree in Computer Science, Information Systems, Engineering, or related discipline.
- Excellent English communication skills (verbal and written) with the ability to interface across technical and non-technical teams.
- Experience working in global delivery environments or IT consulting settings.
- Identify and report bugs using tools such as JIRA.
- Integrate testing procedures into CI/CD pipelines (e.g., Jenkins, GitLab CI).
- Test and validate AI data pipelines (ETL), model explainability, and fairness.
- Perform adversarial, boundary, and robustness testing for AI applications.
- Must be fully available to operate within the MST (UTC-07:00) time zone.
- Work at one of the fastest-growing companies in the United States.
- Be part of a friendly and professional team.
- Work fully remotely.