
Atlassian AI Engineer Case Interview: Designing an LLM-powered Jira/Confluence Assistant for Issue Triage and Knowledge Retrieval
This Atlassian case interview simulates partnering with a PM and an EM to design, ship, and iterate on an AI capability that accelerates software teams using Jira and Confluence. The prompt centers on building an LLM-powered assistant that (a) summarizes and triages new Jira issues across projects, and (b) answers developer and support questions by retrieving and grounding responses in Confluence spaces, Jira issues, and Bitbucket PRs.

What the interview covers (Atlassian-specific focus areas):
- Customer-first framing: Identify the core user personas (support engineer, triage lead, developer) and success metrics aligned to "Don't #@!% the customer" (e.g., time-to-first-response, issue-routing accuracy, answer helpfulness, reduced context-switching for on-call).
- Data and integration surfaces: Discuss ingesting and securing content via the Jira/Confluence Cloud REST APIs, webhooks, and event streams; handling multi-tenant isolation, data residency, and permission-aware retrieval, so answers come only from pages and issues the user can access (see the retrieval sketch after the format outline below).
- System design for LLM + retrieval: Propose a RAG architecture over Atlassian content (indexing, chunking, embeddings, metadata filters by space/project/labels/permissions); compare hosted vs. open-source models; set latency and cost targets for an interactive UX; plan fallback and caching strategies; and design prompt templates and tool use for JQL assistance or issue-field extraction (a tool-declaration sketch appears below).
- Model evaluation and safety: Define offline and online evals (groundedness, hallucination rate, routing precision/recall, answer satisfaction; see the metrics sketch below), red-teaming for PII leakage and project confidentiality, prompt hardening and content filtering, human-in-the-loop corrections, and in-product feedback capture.
- Experimentation and rollout: Plan feature flags, gradual cohort rollout, and A/B tests on triage accuracy and user task completion; add telemetry and guardrails to rapidly disable regressions; capture post-incident learnings.
- Reliability and observability: Establish SLOs (e.g., p95 < 1.5 s for short answers), tracing across retrieval and model steps, rate limiting, backpressure, and graceful degradation when upstreams fail (a timeout-and-fallback sketch appears below).
- Collaboration and ways of working: Operate in Atlassian's collaborative style (the interviewer acts as a teammate), narrate trade-offs clearly, document decisions as you go (think of sketching a Confluence page), and show how you would partner with PM, Design, and Security. Reflect Atlassian values: "Open company, no bullshit"; "Build with heart and balance"; "Play, as a team"; "Be the change you seek".

Format and flow (based on real Atlassian experiences):
- 5 min: Brief; align on the customer problem and constraints (enterprise cloud context, permissions, multi-product data).
- 25–30 min: Whiteboard the end-to-end architecture (ingest, index, retrieval, policy checks, LLM orchestration) with trade-offs (model choice, cost/latency, tenancy, data residency).
- 15 min: Deep dive on the evaluation plan and safety/abuse prevention; define measurable launch criteria and dashboards.
- 10 min: Rollout plan, on-call readiness, and an incident-response story (how you would debug a bad answer or a reported permission leak).
- 10 min: Values and collaboration check: communicate assumptions, clarify unknowns, and summarize decisions with crisp next steps.
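To make the permission-aware retrieval point concrete, here is a minimal Python sketch of the pattern: pre-filter candidate chunks by space/project metadata before scoring, then re-check per-document permissions at read time. All names here (`Chunk`, `allowed_scopes`, `can_view`) are hypothetical; in a real system `can_view` would be backed by the Confluence/Jira permission APIs and the embeddings would come from your indexing pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    source_id: str  # hypothetical: a Confluence page ID or Jira issue key
    scope: str      # metadata used for pre-filtering, e.g. space or project key
    embedding: list[float] = field(default_factory=list)

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_emb: list[float], chunks: list[Chunk],
             allowed_scopes: set[str], can_view, k: int = 5) -> list[Chunk]:
    # 1) Cheap metadata pre-filter by space/project before any scoring.
    candidates = [c for c in chunks if c.scope in allowed_scopes]
    # 2) Rank candidates by embedding similarity.
    ranked = sorted(candidates,
                    key=lambda c: cosine(query_emb, c.embedding),
                    reverse=True)
    # 3) Re-check per-document permission at read time so a stale index
    #    never leaks content the user has since lost access to.
    return [c for c in ranked if can_view(c.source_id)][:k]
```

The double gate (index-time metadata filter plus read-time permission call) is the part interviewers tend to probe: index-time ACL snapshots go stale, so the final check has to live at query time.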
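For the tool-use point, the orchestration layer typically exposes JQL search to the model as a declared tool. A hedged sketch of what such a declaration might look like, in the JSON-schema style common to function-calling LLM APIs; the name `run_jql` and its parameters are assumptions for illustration, not a real Atlassian endpoint:

```python
# Hypothetical tool declaration. The orchestrator validates the generated
# JQL and executes it with the requesting user's credentials (never a
# shared service account), preserving permission boundaries.
JQL_SEARCH_TOOL = {
    "name": "run_jql",  # assumed name
    "description": (
        "Search Jira issues with a JQL query. Results are automatically "
        "scoped to projects the requesting user can see."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "jql": {
                "type": "string",
                "description": "A valid JQL query, e.g. "
                               "'project = SUP AND status = Open ORDER BY created DESC'",
            },
            "max_results": {"type": "integer", "default": 20},
        },
        "required": ["jql"],
    },
}
```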
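For the evaluation bullet, routing precision/recall is easy to pin down offline once you have a labeled set of issue-to-team assignments. A minimal sketch in pure Python; the data shapes and team labels are illustrative assumptions:

```python
from collections import Counter

def routing_metrics(predicted: dict[str, str], gold: dict[str, str]) -> dict:
    """Per-team precision/recall for single-label issue routing.

    predicted/gold map a Jira issue key to the team it was routed to.
    """
    pred_counts = Counter(predicted.values())
    gold_counts = Counter(gold.values())
    # True positives: issues routed to the same team the label specifies.
    tp = Counter(team for key, team in predicted.items() if gold.get(key) == team)
    return {
        team: {
            "precision": tp[team] / pred_counts[team] if pred_counts[team] else 0.0,
            "recall": tp[team] / gold_counts[team] if gold_counts[team] else 0.0,
        }
        for team in set(pred_counts) | set(gold_counts)
    }

# Example: two of three predictions match the gold routing.
print(routing_metrics(
    {"SUP-1": "billing", "SUP-2": "auth", "SUP-3": "billing"},
    {"SUP-1": "billing", "SUP-2": "auth", "SUP-3": "infra"},
))
```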
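Finally, the graceful-degradation bullet usually comes down to budgeted calls with a deterministic fallback. A sketch under assumed names: `llm_answer` stands in for whatever function runs the retrieval-plus-generation path, and `cache` is any key-value store; the 1.5 s budget mirrors the example SLO above.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

_pool = ThreadPoolExecutor(max_workers=8)  # sized to expected concurrency

def answer_with_budget(question: str, llm_answer, cache: dict,
                       budget_s: float = 1.5) -> str:
    """Return the LLM answer if it lands within the latency budget,
    otherwise degrade to a cached answer or a plain-search handoff."""
    future = _pool.submit(llm_answer, question)
    try:
        answer = future.result(timeout=budget_s)
        cache[question] = answer  # warm the cache for next time
        return answer
    except FutureTimeout:
        # The slow call keeps running in the background; the user gets a
        # degraded but immediate response instead of a spinner.
        return cache.get(question,
                         "This is taking too long; here are matching documents instead.")
```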
What great looks like: Clear problem framing tied to customer impact; pragmatic architecture that respects tenant boundaries and permissions; a concrete, testable evaluation plan; thoughtful safety mitigations; and open, concise communication with trade-offs and data-driven decisions in Atlassian’s team-first style.
About This Interview
Interview Type: Product Sense
Difficulty Level: 4/5
Duration: 8 minutes
Interview Tips
• Research the company thoroughly
• Practice common questions
• Prepare your STAR method responses
• Dress appropriately for the role