
Bloomberg Software Engineer Case Interview: Real‑Time Market Data Ingestion and Alerting

This Bloomberg‑style case simulates building a production‑ready microservice that ingests real‑time trades and quotes from multiple venues, normalizes and enriches them, computes rolling analytics (e.g., VWAP, top movers), and pushes alerts to Terminal subscribers. It mirrors Bloomberg's culture of practical problem solving, accuracy, and customer impact, blending systems design with hands‑on coding.

Structure (typical flow):

- 5 min: Context and candidate clarifying questions (latency/throughput targets, data shapes, users).
- 10 min: Requirements and API design.
- 25 min: System design (ingestion → normalization → stateful analytics → fan‑out).
- 20 min: Focused coding exercise implementing a core component.
- 10 min: Reliability, observability, trade‑offs, and extensions.
- 5 min: Wrap‑up and Q&A.

Case prompt: "Design a service to generate price alerts for selected symbols on the Bloomberg Terminal. The service consumes tick data (trades/quotes) from multiple exchanges, deduplicates and reorders out‑of‑order events, computes a rolling 60‑second VWAP per symbol, and triggers alerts when price deviates from VWAP by a configurable threshold. It must support snapshot requests and streaming subscriptions, respect user entitlements, and achieve p99 < 50 ms end‑to‑end at 100k msgs/sec per node."

Focus areas specific to Bloomberg's interview style:

- Data modeling and normalization: canonical symbol/ID mapping, handling corporate actions, late/out‑of‑order ticks, sequence numbers, and dedup keys.
- API design oriented to Terminal workflows:
  - Streaming: POST /subscribe with {"symbols": ["AAPL US Equity"], "threshold_bps": 50}
  - Snapshot: GET /symbols/{id}/vwap?window=60s
  - Alerts: server‑push over pub/sub; discuss unsubscribe, backfill on reconnect, and entitlement checks.
- Practical algorithms and data structures: rolling VWAP with time‑bucketed queues/deques; top‑K via heaps; O(1) symbol‑state updates; ring buffers vs. linked lists; hash maps for per‑symbol state.
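One practical way to address the late/out‑of‑order ticks and dedup‑key concerns above is a small per‑symbol reorder buffer keyed by sequence number. The sketch below is illustrative only; the class name, field names, and `max_pending` bound are assumptions, not part of any real feed‑handler API:

```python
import heapq

class ReorderBuffer:
    """Per-symbol reorder buffer: holds up to `max_pending` out-of-order
    ticks, releases them in sequence order, and drops duplicate or stale
    sequence numbers. (Illustrative sketch; names/sizes are assumptions.)"""

    def __init__(self, max_pending=32):
        self.next_seq = None   # next sequence number we expect to emit
        self.pending = []      # min-heap of (seq, tick) awaiting their turn
        self.max_pending = max_pending

    def push(self, seq, tick):
        """Return the list of ticks now safe to process, in seq order."""
        if self.next_seq is None:
            self.next_seq = seq  # first tick seen defines the starting point
        if seq < self.next_seq or any(s == seq for s, _ in self.pending):
            return []  # duplicate or already-emitted (stale) sequence number
        heapq.heappush(self.pending, (seq, tick))
        out = []
        # Emit the contiguous run; if the buffer overflows, skip the gap
        # (in production this is where a gap-detection counter would fire).
        while self.pending and (self.pending[0][0] == self.next_seq
                                or len(self.pending) > self.max_pending):
            s, t = heapq.heappop(self.pending)
            self.next_seq = s + 1
            out.append(t)
        return out
```

A bounded heap keeps the reorder window small (matching the prompt's "within a small window" constraint) while letting in‑order traffic pass through in O(log k) per tick.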
- Systems design for low‑latency, high‑throughput streams: batching vs. immediacy, backpressure, at‑least‑once semantics with idempotency, replay from a durable log, per‑partition ordering, horizontal sharding by symbol, fan‑out to thousands of subscribers.
- Reliability and correctness: health checks, circuit breakers, fallbacks when a venue feed is degraded, multi‑AZ failover, warm start with snapshot + replay, data‑quality signals (gap detection, staleness, dropped‑message counters).
- Observability and SLOs: p50/p95/p99 latencies, alert generation rates, consumer lag, dedup ratio, entitlement denials; discuss dashboards and canaries.
- Trade‑offs candidates are expected to articulate at Bloomberg: C++ vs. Python for the hot path, memory layout (AoS vs. SoA) and GC considerations, compression vs. latency, schema evolution (e.g., Protobuf) and backward compatibility for long‑lived clients.

Hands‑on coding (representative task):

- Implement a function/class that maintains a rolling 60‑second VWAP per symbol from a stream of (ts, price, size, seq) and returns the current VWAP and whether a deviation threshold is crossed. It must handle out‑of‑order events within a small window, dedup by seq, and run in O(1) amortized time per tick. Provide unit tests and discuss edge cases (no trades, symbol switch, window expiry).

Evaluation rubric:

- Problem discovery: clarifies ambiguous market/latency/entitlement requirements and defines measurable SLOs.
- API and data contracts: clear, client‑centric endpoints with idempotency and versioning.
- Correctness under real‑world data: robust handling of late data, corporate actions, and sequence gaps.
- Performance and scalability: coherent sharding, batching, and state management; anticipates hot symbols and burstiness.
- Reliability/observability: failure modes, metrics, alarms, and safe degradation paths.
- Code quality: clean, tested, incremental; prefers simple, reliable structures over clever but brittle solutions.
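A minimal sketch of the representative coding task above, assuming a 60‑second window and a threshold in basis points (the class and parameter names are illustrative, not a prescribed interface). Amortized O(1) comes from maintaining running sums and evicting expired ticks from the front of a deque; slightly late ticks are simply appended, so under a small reorder window they expire marginally late, which is an acceptable approximation for this sketch:

```python
import collections

class RollingVWAP:
    """Rolling-window VWAP for one symbol, with seq-based dedup.
    Illustrative sketch; names and defaults are assumptions."""

    def __init__(self, window_s=60.0, threshold_bps=50):
        self.window_s = window_s
        self.threshold_bps = threshold_bps
        self.ticks = collections.deque()  # (ts, price*size, size, seq)
        self.seen = set()                 # seqs currently inside the window
        self.pv = 0.0                     # running sum of price*size
        self.vol = 0.0                    # running sum of size

    def _evict(self, now):
        # Drop ticks older than the window; forget their seqs so the
        # dedup set stays bounded by the window's contents.
        while self.ticks and self.ticks[0][0] <= now - self.window_s:
            _, pv, v, seq = self.ticks.popleft()
            self.pv -= pv
            self.vol -= v
            self.seen.discard(seq)

    def vwap(self, now):
        """Current windowed VWAP, or None if no trades in the window."""
        self._evict(now)
        return self.pv / self.vol if self.vol > 0 else None

    def on_tick(self, ts, price, size, seq):
        """Apply one tick; return (vwap, alert). Duplicate seqs are dropped."""
        if seq in self.seen:
            return self.vwap(ts), False
        self.seen.add(seq)
        self.ticks.append((ts, price * size, size, seq))
        self.pv += price * size
        self.vol += size
        vwap = self.vwap(ts)
        # Note: the VWAP here includes the current tick, which damps the
        # measured deviation; excluding it is an equally valid choice.
        alert = (vwap is not None
                 and abs(price - vwap) / vwap * 10_000 > self.threshold_bps)
        return vwap, alert
```

Unit tests for this sketch would cover the edge cases the prompt names: an empty window (`vwap` returns None), duplicate seqs, a threshold crossing, and window expiry.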
Common follow‑ups:

- Add top‑N intraday movers by volume/price change.
- Introduce entitlements caching and audit logging.
- Reduce p99 from 80 ms to 50 ms without overprovisioning (discuss batching, zero‑copy, contention hot spots).
- Support snapshot‑on‑subscribe and catch‑up replay after disconnect.

What "good" looks like at Bloomberg: pragmatic design that protects accuracy and latency, thoughtful handling of market‑data quirks, incremental delivery with clear metrics, and production‑minded code you could ship today.
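For the top‑N intraday movers follow‑up, one common pattern is a heap‑based selection over per‑symbol state. The state shape below (symbol mapped to open and last price) is an assumption for illustration:

```python
import heapq

def top_movers(state, n=5):
    """Return the n symbols with the largest absolute fractional price
    change. `state` maps symbol -> (open_price, last_price); this shape
    is an assumption for the sketch."""
    def pct_change(item):
        _, (open_px, last_px) = item
        return abs(last_px - open_px) / open_px
    return [sym for sym, _ in heapq.nlargest(n, state.items(), key=pct_change)]
```

`heapq.nlargest` runs in O(S log n) over S symbols, which is cheap enough to recompute on a timer; a streaming variant would instead maintain a bounded heap updated per tick.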




About This Interview

Interview Type

PRODUCT SENSE

Difficulty Level

4/5

Interview Tips

• Research the company thoroughly

• Practice common questions

• Prepare your STAR method responses

• Dress appropriately for the role