Autodesk

Autodesk

Design and make software for architecture, engineering, construction, and entertainment industries.

11,600Building DesignConstructionAutomotiveBuilding Product Manufacturing3D AnimationArchitectureEngineeringConstruction ProfessionalsMechanical EngineeringMechanical CADThermal SimulationElectronic Design AutomationPrint Circuit Board DesignMechanical, Electrical, and Plumbing (MEP)HVACFabricationEstimationInfrastructureCivil EngineeringGenetic Engineering (Life Sciences)Website

Principal AI Quality Developer

Lead AI quality strategy, testing, and safety for Autodesk's Fusion Industry Cloud

Ontario, Canada
Full Time
Expert & Leadership (13+ years)

Job Highlights

Environment
Office Full-Time

About the Role

Autodesk is seeking a principal quality engineering leader to design, build, and operationalize testing for applied AI and machine‑learning systems, with a primary focus on the Fusion Industry Cloud. The role ensures the quality, safety, reliability, and performance of AI features and agentic workflows shipped by the Applied AI engineering team. You will create repeatable evaluation approaches for predictive and generative AI, develop automated test harnesses integrated into CI/CD pipelines, and partner closely with Applied AI engineers, Product, Security, and Platform teams to meet defined acceptance thresholds and customer expectations. In this role you will define and drive the quality strategy for AI experiences that directly affect product experience and customer outcomes, shaping how the organization tests and trusts AI at scale while raising the bar for reliability and responsible AI practices. • Define and drive the quality strategy for AI and agentic features in Fusion Industry Cloud. • Design test plans, acceptance criteria, and release readiness processes for AI‑powered capabilities. • Build automated evaluation frameworks for LLM and ML outputs using gold, synthetic, and adversarial test sets integrated into CI/CD. • Create repeatable metrics (relevance, hallucination, bias, latency, cost) and set pass/fail thresholds with engineering and product partners. • Validate retrieval‑augmented generation workflows, tool usage, and agent behavior across edge cases and prompt sensitivities. • Lead AI safety and abuse testing, including prompt injection, jailbreak, and data leakage risk assessments. • Establish production monitoring, drift detection, failure taxonomy, and alerting tied to customer impact and SLA commitments. • Perform deep issue triage and root‑cause analysis for AI‑related defects across data, prompts, retrieval, model versions, and integration logic. • Mentor engineers and QA practitioners on AI testing methods, improve team standards, and contribute to hiring for quality roles. • Collaborate with cross‑functional partners to embed responsible AI practices, document test methodology, and secure release sign‑off.

Key Responsibilities

  • test automation
  • ci/cd integration
  • metrics development
  • safety testing
  • production monitoring
  • root cause

What You Bring

The ideal candidate brings over seven years of quality engineering or software engineering experience, strong Python programming skills, hands‑on expertise in testing AI/ML systems, and a proven ability to influence cross‑functional teams in a matrixed environment. Preferred experience includes adversarial AI testing, cloud platforms (AWS), containerized test environments, and a background in responsible AI evaluation. • 7+ years of experience in quality engineering, test automation, or software engineering with ownership of complex test strategies. • Strong Python skills and experience with pytest/unittest and CI/CD integrated test harnesses. • Hands‑on experience testing AI/ML systems, including model output evaluation, regression testing, and ongoing quality monitoring. • Practical experience validating LLM‑based systems, prompt testing, hallucination detection, and robustness across edge cases. • Proven ability to define measurable acceptance criteria and translate them into evaluation metrics and release gates. • Experience debugging cross‑stack issues spanning application logic, APIs, data flows, and model behavior. • Excellent communication and influencing skills in a collaborative, matrixed environment. • Preferred: adversarial AI testing, AI security testing, observability tooling, AWS cloud platforms, Docker/Kubernetes, responsible AI testing, and domain knowledge in AEC, design, or manufacturing.

Requirements

  • python
  • pytest
  • ci/cd
  • ai/ml
  • llm
  • aws

Benefits

Autodesk offers a competitive compensation package with base salary, bonuses, stock grants, and comprehensive benefits, alongside a culture that values optimism, outcome focus, risk transparency, and innovative problem solving. The company is committed to belonging and encourages candidates to join a talent community for future opportunities.

Work Environment

Office Full-Time

Apply Now