Building a Predictive Analytics Dashboard: Our Stack & Lessons
The Brief
A retail client with 14 stores across Kerala needed a dashboard to forecast weekly demand per SKU, flag slow-moving inventory, and surface replenishment recommendations — all updated daily from their POS system.
scikit-learn vs PyTorch: How We Decided
We ran a two-week spike. Gradient-boosted trees via XGBoost (a standalone library, used through its scikit-learn-compatible API) reached 91% forecast accuracy on historical data in 3 days. A PyTorch LSTM reached 93% but required 3 weeks of tuning and a GPU instance. For a daily-refresh use case on structured tabular data, the two-point accuracy gain did not justify the infrastructure cost. We shipped XGBoost.
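The spike's training loop is simple enough to sketch. The version below uses scikit-learn's GradientBoostingRegressor as a stand-in for XGBoost's XGBRegressor (the fit/predict interface is the same), with synthetic weekly sales in place of the client's POS data; the lag features and column names are illustrative, not the production feature set.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error

# Synthetic weekly sales for one SKU -- stands in for the POS history.
rng = np.random.default_rng(42)
weeks = 104
df = pd.DataFrame({
    "week": np.arange(weeks),
    "units": 50 + 10 * np.sin(np.arange(weeks) * 2 * np.pi / 52)
             + rng.normal(0, 3, weeks),
})

# Lag features: demand 1, 2, and 52 weeks back (illustrative choices).
for lag in (1, 2, 52):
    df[f"lag_{lag}"] = df["units"].shift(lag)
df = df.dropna()

X = df[["lag_1", "lag_2", "lag_52"]]
y = df["units"]

# Time-ordered split: train on the past, validate on the future.
split = int(len(df) * 0.8)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                  learning_rate=0.05)
model.fit(X.iloc[:split], y.iloc[:split])

preds = model.predict(X.iloc[split:])
mape = mean_absolute_percentage_error(y.iloc[split:], preds)
print(f"holdout MAPE: {mape:.1%}")
```

The time-ordered split matters more than the model choice: a random split would leak future weeks into training and inflate the accuracy numbers the decision rests on.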
Tech Stack
- ML model: XGBoost (scikit-learn-compatible API)
- Data pipeline: Apache Airflow (daily ETL from POS CSV exports)
- Backend: FastAPI + PostgreSQL
- Dashboard: React + Recharts
- Hosting: Google Cloud Run
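The Airflow side of this stack is mostly a daily Python callable that rolls raw POS lines up to the weekly grain the model expects. A minimal sketch of that aggregation step, using an in-memory CSV in place of the real export; the column names are assumptions, not the client's schema.

```python
import io
import pandas as pd

# In-memory CSV standing in for one day's POS export (columns illustrative).
pos_csv = io.StringIO(
    "timestamp,store_id,sku,units,price\n"
    "2024-03-04T10:15:00,KL-01,SKU-100,2,49.0\n"
    "2024-03-05T11:02:00,KL-01,SKU-100,1,49.0\n"
    "2024-03-06T09:40:00,KL-02,SKU-200,5,120.0\n"
)

def aggregate_weekly(raw: pd.DataFrame) -> pd.DataFrame:
    """Roll raw POS lines up to one row per (store, SKU, ISO week)."""
    raw = raw.copy()
    raw["timestamp"] = pd.to_datetime(raw["timestamp"])
    # Weekly periods start on Monday; keep the week's start date as the key.
    raw["week"] = raw["timestamp"].dt.to_period("W").dt.start_time
    raw["revenue"] = raw["units"] * raw["price"]
    return (
        raw.groupby(["store_id", "sku", "week"], as_index=False)
           .agg(units=("units", "sum"), revenue=("revenue", "sum"))
    )

weekly = aggregate_weekly(pd.read_csv(pos_csv))
print(weekly)
```

In production this callable sits inside an Airflow task, reads the day's CSV drop, and upserts the weekly rows into PostgreSQL, which the FastAPI layer then serves to the dashboard.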
What We'd Do Differently
We built the feature engineering manually. In hindsight, using a feature store (like Feast) from day one would have saved significant time when the client later asked to add weather and local event data as features. We now include a feature store in every ML project scoping call.
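The pain shows up at join time: every new source means another hand-written, keyed join onto the training frame. A hedged sketch of the kind of manual merge a feature store like Feast would formalize; all column names and values here are invented for illustration.

```python
import pandas as pd

# Core demand features, keyed by (store_id, week) -- values illustrative.
demand = pd.DataFrame({
    "store_id": ["KL-01", "KL-01", "KL-02"],
    "week": pd.to_datetime(["2024-03-04", "2024-03-11", "2024-03-04"]),
    "lag_1_units": [42, 45, 80],
})

# Later-added external sources, each requiring its own keyed join.
weather = pd.DataFrame({
    "store_id": ["KL-01", "KL-02"],
    "week": pd.to_datetime(["2024-03-04", "2024-03-04"]),
    "rain_mm": [12.5, 30.0],
})
events = pd.DataFrame({
    "store_id": ["KL-01"],
    "week": pd.to_datetime(["2024-03-11"]),
    "local_festival": [True],
})

# Each new source is one more merge to write, test, and keep point-in-time
# correct -- the bookkeeping a feature store takes off your hands.
features = (
    demand.merge(weather, on=["store_id", "week"], how="left")
          .merge(events, on=["store_id", "week"], how="left")
)
features["local_festival"] = features["local_festival"].eq(True)
print(features)
```

With two extra sources this is manageable; by the time the client asked for weather plus event calendars across 14 stores, the merge code had become its own small codebase, which is the cost the feature store recommendation is about.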
The Outcome
Within 90 days, the client reported a 17% reduction in overstock write-offs and a 22% improvement in in-stock availability on high-velocity SKUs.
Building Something Similar?
Predictive dashboards work best when the data pipeline is solid before the model is built. Talk to our team about a free data readiness assessment.

