In anticipation of the SBC Summit in Lisbon (16–18 September), Frogo’s CTO, Maksym Tkach, shares how the company engineered a fraud engine that is as fast as it is intelligent.
- Solving Impossible Latency
Frogo achieved a major breakthrough with its proprietary LTee (Layered Tree) engine. By organizing rule execution into layers, the platform achieves maximum parallelism, delivering complex fraud detection workflows with p99 latency under 300 ms.
- Pre-computed Intelligence for Speed
Real-time metrics such as percentiles and averages are pre-calculated and stored in Aerospike, turning expensive computations into instant lookups. High-cardinality estimates are optimized for speed in the same way.
- Go-Powered Microservices Architecture
Built in Go for concurrent processing and simplicity, Frogo’s microservices architecture enables independent scaling and robust fault isolation, ensuring smooth performance even under peak load.
- Elasticity That Reacts Instantly
Traffic surges are managed via:
- A NATS streaming queue that decouples ingestion and processing,
- In-app autoscaling that dynamically increases worker throughput,
- Horizontal Azure/Kubernetes scaling when CPU load rises,
- Graceful degradation to sustain performance during unprecedented loads.
- Layered Analytics Stack
- Aerospike for high-speed enrichment,
- AWS Neptune for graph intelligence on flagged transactions,
- Elasticsearch for logging and investigative analytics, empowering fraud teams with depth and context.
- Why Frogo Stands Out
- Hybrid Intelligence: rules + ML + graph analytics.
- Extreme Adaptability: policy changes in hours.
- Client Partnership: tech delivered with strategic, adaptive collaboration.
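To make the LTee idea from the first section concrete, here is a generic sketch of layered rule execution. The types and rules are hypothetical, not Frogo’s implementation: rules within a layer have no dependencies on each other and run in parallel, while each layer can read the verdicts of the layers before it.

```go
package main

import (
	"fmt"
	"sync"
)

// rule is one fraud check: it reads the transaction plus any
// verdicts produced by earlier layers.
type rule func(tx map[string]any, prior map[string]bool) (name string, flagged bool)

// runLayers executes rules layer by layer. Rules inside a layer run
// concurrently; a WaitGroup acts as a barrier between layers, so a
// later layer can safely consume earlier results.
func runLayers(tx map[string]any, layers [][]rule) map[string]bool {
	prior := make(map[string]bool)
	for _, layer := range layers {
		results := make(map[string]bool)
		var mu sync.Mutex
		var wg sync.WaitGroup
		for _, r := range layer {
			wg.Add(1)
			go func(r rule) {
				defer wg.Done()
				name, flagged := r(tx, prior) // prior is read-only here
				mu.Lock()
				results[name] = flagged
				mu.Unlock()
			}(r)
		}
		wg.Wait() // barrier: layer fully done before the next starts
		for k, v := range results {
			prior[k] = v
		}
	}
	return prior
}

func main() {
	tx := map[string]any{"amount": 9000.0, "country": "XX"}
	layers := [][]rule{
		{ // layer 1: independent cheap checks, fully parallel
			func(tx map[string]any, _ map[string]bool) (string, bool) {
				return "high_amount", tx["amount"].(float64) > 5000
			},
			func(tx map[string]any, _ map[string]bool) (string, bool) {
				return "risky_country", tx["country"] == "XX"
			},
		},
		{ // layer 2: combines layer-1 verdicts
			func(_ map[string]any, prior map[string]bool) (string, bool) {
				return "escalate", prior["high_amount"] && prior["risky_country"]
			},
		},
	}
	verdicts := runLayers(tx, layers)
	fmt.Println("escalate:", verdicts["escalate"]) // prints "escalate: true"
}
```

The latency win comes from the barrier structure: total time is roughly the sum over layers of each layer’s slowest rule, rather than the sum of all rules.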
