
CACI AI Engineer Case Interview: Deploying ISR Computer Vision in a Cleared, Air‑Gapped Environment
This case simulates a typical CACI mission-delivery scenario for an AI Engineer supporting a Defense customer within Domestic Operations. You will design an end‑to‑end approach for detecting and tracking small UAS in full‑motion video (FMV) from C4ISR platforms, to be fielded in a secure, partially air‑gapped environment. The session emphasizes CACI's mission‑first culture, brief‑and‑build communication style (BLUF), and the constraints of classified workflows.

Format (75 minutes):
1) 5‑minute BLUF readout of your approach
2) 20‑minute solution proposal (data, model, architecture, and deployment)
3) 25‑minute technical deep‑dive with a panel (AI lead, systems engineer, mission lead)
4) 15‑minute whiteboard pivot on an edge‑deployment variation
5) 10‑minute culture/ethics and stakeholder‑alignment discussion

Case prompt: Given FMV feeds (unclassified samples for the interview), propose a system to detect and track Class 1–2 UAS with low false‑negative rates under variable conditions (day/night, clutter, occlusion), with analyst‑in‑the‑loop review and dissemination to a mission portal. The target environment includes a SCIF, disconnected dev/test enclaves, and a cross‑domain guard for moving products across classification levels.

Provided inputs during the interview:
• brief CONOPS
• data profile (frame rates/resolutions, sensor variability)
• latency SLOs for analyst workflows
• initial performance thresholds (e.g., ≥0.85 recall, ≤0.10 false‑positive rate at the operating point)
• hardware constraints (edge NVIDIA GPU available, limited power/size)
• RMF/ATO considerations
• the need for full auditability, an SBOM, and reproducible builds
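When briefing against the stated thresholds (≥0.85 recall, ≤0.10 false‑positive rate), it helps to show you can reduce them to a mechanical check on a confusion matrix at a chosen score threshold. A minimal sketch, assuming simple per‑detection counts (function name and example numbers are illustrative, not from any CACI system):

```python
def meets_operating_point(tp, fn, fp, tn,
                          min_recall=0.85, max_fpr=0.10):
    """Check detection counts against recall/FPR thresholds.

    tp/fn: detected vs. missed UAS; fp/tn: false alarms vs.
    correctly rejected background at the chosen operating point.
    Returns (passes, recall, fpr).
    """
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return recall >= min_recall and fpr <= max_fpr, recall, fpr

# Example: 90 detected, 10 missed, 8 false alarms, 92 rejections
ok, recall, fpr = meets_operating_point(90, 10, 8, 92)
# recall = 0.90, fpr = 0.08 -> meets both thresholds
```

Sweeping the detector's score threshold and re-running this check at each point is one way to pick the operating point the case asks you to defend.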
What you should cover:
• data ingestion and labeling plan, including synthetic data and augmentation
• model selection and training approach (e.g., detection plus multi‑object tracking; handling class imbalance)
• evaluation methodology (precision/recall, PR curves, robustness testing, adversarial considerations)
• MLOps for air‑gapped environments (artifact registries, signed containers, CI/CD mirroring)
• deployment architecture (edge vs. enclave inference, streaming, message bus)
• cross‑domain transfer strategy
• human‑in‑the‑loop UX for analysts
• model governance and NIST‑aligned AI risk management
• privacy and responsible‑AI controls
• fail‑safe and degraded modes
• ops/sustainment (monitoring, drift detection, retraining triggers)
• a rough delivery plan (MVP milestones, team skills, risks/mitigations, cost/schedule trade‑offs)

Evaluation criteria, aligned to CACI's style:
• mission impact and stakeholder empathy
• security‑by‑design in cleared settings
• technical rigor and the ability to reason under constraints
• practicality and operability (maintainability, logging, metrics)
• communication (a clear BLUF; briefing clarity for a government PM)
• collaboration and integrity (trade‑offs, data rights, OSS/licensing posture)
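For the ops/sustainment topic, a common way to turn drift detection into a concrete retraining trigger is to compare the live detector‑confidence distribution against a training‑time baseline. A hedged pure‑Python sketch using the population stability index; the 10‑bin layout and the 0.2 alert threshold are common conventions, not requirements from the case:

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between two score samples.

    Scores are assumed to lie in [0, 1] (e.g., detector
    confidences). PSI > 0.2 conventionally signals drift.
    """
    eps = 1e-6  # floor empty bins to avoid log(0)

    def proportions(sample):
        counts = [0] * bins
        for s in sample:
            counts[min(int(s * bins), bins - 1)] += 1
        total = len(sample)
        return [max(c / total, eps) for c in counts]

    p, q = proportions(baseline), proportions(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical distributions give PSI of zero; a live feed whose
# scores collapse toward one bin pushes PSI past the alert level.
base = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9] * 50
assert psi(base, base) == 0.0
assert psi(base, [0.9] * 300) > 0.2  # drift -> retraining trigger
```

In an air‑gapped setting this kind of check matters because the trigger must run inside the enclave on logged scores alone, without calling out to a hosted monitoring service.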
75 minutes
Practice with our AI-powered interview system to improve your skills.
About This Interview
Interview Type
PRODUCT SENSE
Difficulty Level
4/5
Interview Tips
• Research the company thoroughly
• Practice common questions
• Prepare your STAR method responses
• Dress appropriately for the role