Leidos

Leidos AI Engineer Case Interview: Secure Mission-Systems ML Architecture and MLOps

This case simulates how Leidos designs, deploys, and sustains AI for mission-critical programs across its Defense, Civil (e.g., FAA), and Health portfolios. You'll be given a problem brief typical of Leidos capture/delivery work and asked to architect an end-to-end AI solution that meets real-world constraints: security and compliance (NIST RMF), reliability in contested or regulated environments, cost/schedule realism, and stakeholder-driven outcomes.

Format (approx.):

• 10 min problem brief from the interviewers (Hiring Manager + Chief Engineer)
• 30–35 min whiteboard/system design
• 15–20 min deep-dive Q&A (security, MLOps, testing, data rights)
• 5 min candidate questions

Sample scenario: Build a multi-sensor computer vision pipeline for small-UAS detection and classification at an airbase, with an edge compute footprint and a GovCloud enclave. Requirements include:

• Latency ≤150 ms at the edge
• Model confidence calibration and uncertainty reporting
• ATO readiness under RMF, with auditability
• Data separation across impact levels (ILs)
• Integration with a command-and-control (C2) interface
• A fail-safe concept for degraded comms

What we expect you to cover:

(1) Problem framing and assumptions: mission objectives, operational environment, success metrics (ROC/AUPRC, latency, MTBF), and constraints (size/weight/power, export controls, customer-available hardware).

(2) System architecture: data ingest (EO/IR/RF), preprocessing, model selection and training (e.g., YOLO/DETR variants), fusion strategy, inference serving (e.g., Triton/ONNX Runtime), edge-to-cloud data flows, and C2 integration patterns.

(3) Secure MLOps in classified or regulated settings: repo structure; CI/CD in air-gapped/OpenShift environments; container hardening (DISA STIGs); SBOM generation; vulnerability scanning; model registry, versioning, lineage, and rollback; drift monitoring; and promotion gates tied to test evidence for the ATO.

(4) Data strategy: labeling plan, synthetic data augmentation, handling of PII and export-controlled data, cross-domain solutions, and IL boundary movement with audit trails.

(5) Reliability and testing: test harnesses (unit/integration/simulation-in-the-loop), red-teaming and adversarial robustness, calibration, bias/fairness considerations for operational use, and operational acceptance tests tied to customer KPPs.

(6) Deployment and operations: telemetry, health checks, canarying, incident-response runbooks, and a sustainment plan aligned to firm-fixed-price or T&M realities.

(7) Trade-offs and estimation: schedule (phased MVP → pilot → IOC), risks and mitigations, and rough order-of-magnitude resourcing.

Interview style reflects Leidos culture: mission-first pragmatism, systems thinking, secure-by-design, evidence-based decision making, and clear communication with multi-disciplinary stakeholders (PM, security, ops).

Deliverables we'll look for on the whiteboard or verbally: a block diagram with data pathways and trust boundaries, a brief MLOps workflow, a test/validation matrix, and a risk register with mitigations. Illustrative, hedged code sketches for several of these topics (evaluation metrics, edge inference latency, sensor fusion, promotion gating, drift monitoring, confidence calibration, and a degraded-comms fail-safe) follow below.
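A minimal sketch of the headline detection metrics named in the brief (ROC AUC and AUPRC), computed here on synthetic stand-in scores; a real evaluation would use labeled operational imagery:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, 1000)                              # 1 = small UAS present
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 1000), 0.0, 1.0)

print(f"ROC AUC: {roc_auc_score(y_true, y_score):.3f}")
# AUPRC (average precision) is the more informative headline number when
# true tracks are rare relative to clutter, since it ignores true negatives.
print(f"AUPRC:   {average_precision_score(y_true, y_score):.3f}")
```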
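For the ≤150 ms edge budget, one serving option named in the brief is ONNX Runtime. The sketch below measures p99 inference latency against that budget; the model file name ("detector.onnx") and input tensor name ("images") are placeholders, not artifacts of any real program:

```python
import time
import numpy as np
import onnxruntime as ort

LATENCY_BUDGET_MS = 150.0

# Placeholder model; the exported detector and its input name are assumptions.
sess = ort.InferenceSession(
    "detector.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],  # CPU fallback
)
frame = np.random.rand(1, 3, 640, 640).astype(np.float32)  # stand-in EO frame

sess.run(None, {"images": frame})  # warm-up so lazy init does not skew timing

latencies_ms = []
for _ in range(100):
    t0 = time.perf_counter()
    sess.run(None, {"images": frame})
    latencies_ms.append((time.perf_counter() - t0) * 1000.0)

p99 = float(np.percentile(latencies_ms, 99))
print(f"p99 latency: {p99:.1f} ms (budget {LATENCY_BUDGET_MS} ms)")
```

Reporting a tail percentile rather than the mean is the safer framing here, since the operational requirement is a hard per-frame bound.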
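One simple fusion strategy for EO/IR/RF detections is late fusion of per-sensor confidences in log-odds space. The weights below are made up for illustration; in practice they would be learned or tuned on validation data:

```python
import numpy as np

def fuse(eo_conf, ir_conf, rf_conf, weights=(1.0, 0.8, 0.5)):
    # Convert each per-sensor confidence to log-odds, combine with fixed
    # weights, and map back to a probability. Epsilons guard p = 0 or 1.
    logits = [np.log(p / (1.0 - p + 1e-9) + 1e-9)
              for p in (eo_conf, ir_conf, rf_conf)]
    fused = sum(w * l for w, l in zip(weights, logits))
    return 1.0 / (1.0 + np.exp(-fused))

print(f"fused confidence: {fuse(0.9, 0.7, 0.55):.3f}")
```

Log-odds fusion keeps each modality's contribution interpretable and degrades gracefully when a sensor drops out, since its weight can simply be zeroed.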
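Promotion gates tied to test evidence can be as simple as a CI step that refuses to advance a model unless recorded metrics clear agreed thresholds. The metrics file name and keys here are hypothetical:

```python
import json
import sys

THRESHOLDS = {
    "auprc": 0.85,          # detection quality floor
    "ece": 0.05,            # calibration ceiling (lower is better)
    "p99_latency_ms": 150,  # edge latency budget from the KPPs
}

def gate(metrics_path="metrics.json"):
    with open(metrics_path) as f:
        m = json.load(f)
    failures = []
    if m["auprc"] < THRESHOLDS["auprc"]:
        failures.append(f"AUPRC {m['auprc']:.3f} < {THRESHOLDS['auprc']}")
    if m["ece"] > THRESHOLDS["ece"]:
        failures.append(f"ECE {m['ece']:.3f} > {THRESHOLDS['ece']}")
    if m["p99_latency_ms"] > THRESHOLDS["p99_latency_ms"]:
        failures.append(f"p99 {m['p99_latency_ms']} ms > {THRESHOLDS['p99_latency_ms']} ms")
    if failures:
        print("PROMOTION BLOCKED:\n  " + "\n  ".join(failures))
        sys.exit(1)  # nonzero exit fails the CI stage and blocks promotion
    print("Promotion gate passed; attach metrics to the ATO evidence package.")

if __name__ == "__main__":
    gate(sys.argv[1] if len(sys.argv) > 1 else "metrics.json")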
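For drift monitoring, one common statistic (a choice of ours, not prescribed by the brief) is the population stability index over a model input or score distribution:

```python
import numpy as np

def psi(reference, live, n_bins=10, eps=1e-6):
    # Bin edges come from the reference (training-time) distribution.
    edges = np.quantile(reference, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference) + eps
    live_frac = np.histogram(live, bins=edges)[0] / len(live) + eps
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

# Rule of thumb (an assumption, not from the brief): PSI > 0.2 is a red flag.
rng = np.random.default_rng(1)
score = psi(rng.normal(0.0, 1.0, 10_000), rng.normal(0.5, 1.2, 10_000))
print(f"PSI = {score:.3f}" + (" -> flag for retraining review" if score > 0.2 else ""))
```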
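Confidence calibration and uncertainty reporting are explicit scenario requirements. A standard post-hoc approach is temperature scaling fit on a held-out set, with expected calibration error (ECE) as the reported metric; the toy data below is purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(temperature, logits, labels):
    # Negative log-likelihood of the true class after temperature scaling.
    probs = softmax(logits / temperature)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits, labels):
    # One-dimensional search over T > 0; a single scalar leaves accuracy unchanged.
    res = minimize_scalar(nll, bounds=(0.05, 10.0), args=(logits, labels),
                          method="bounded")
    return res.x

def expected_calibration_error(probs, labels, n_bins=15):
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # Weighted gap between accuracy and confidence within the bin.
            ece += mask.mean() * abs((pred[mask] == labels[mask]).mean()
                                     - conf[mask].mean())
    return ece

# Toy demo: overconfident logits with roughly 80% true accuracy.
rng = np.random.default_rng(0)
logits = rng.normal(size=(500, 4)) * 3.0
labels = np.where(rng.random(500) < 0.8, logits.argmax(axis=1),
                  rng.integers(0, 4, 500))
T = fit_temperature(logits, labels)
print(f"fitted T = {T:.2f}, ECE after scaling = "
      f"{expected_calibration_error(softmax(logits / T), labels):.3f}")
```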
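Finally, one fail-safe concept for degraded comms is edge-side store-and-forward: the node keeps detecting autonomously, queues results locally, and drains the backlog when the enclave link returns. Class and method names are illustrative:

```python
import collections
import time

class StoreAndForward:
    def __init__(self, max_backlog=10_000):
        # Bounded deque: once full, the oldest detections are dropped first.
        # The backlog size is itself a design decision worth surfacing.
        self.backlog = collections.deque(maxlen=max_backlog)

    def record(self, detection):
        self.backlog.append(detection)

    def flush(self, uplink_send):
        # Drain oldest-first through the restored uplink; keep anything that
        # fails to send for the next attempt.
        while self.backlog:
            det = self.backlog.popleft()
            if not uplink_send(det):
                self.backlog.appendleft(det)
                break

q = StoreAndForward()
q.record({"ts": time.time(), "cls": "quadcopter", "conf": 0.91})
q.flush(lambda det: True)  # stand-in uplink that always succeeds
```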

Engineering

60 minutes

Practice with our AI-powered interview system to improve your skills.

About This Interview

Interview Type

SYSTEM DESIGN

Difficulty Level

4/5

Interview Tips

• Research the company thoroughly

• Practice common questions

• Prepare your STAR method responses

• Dress appropriately for the role