When we provide recommendation and predictive services to our enterprise customers, we need to track and optimise both the performance of the infrastructure and the impact of the recommendations and predictions on KPIs. We’re excited to include our favourite open-source analytics dashboard as a fully integrated component in Seldon’s machine learning infrastructure.
The new 1.3.2 release of Seldon includes an open-source Grafana dashboard for each client showing real-time analytics of the API calls running through the system. The dashboards currently show:
- API Request Time by REST endpoint.
- API Request Count by REST endpoint.
- Content Recommendation Stats:
  - Overall Impression and Click count along with CTR.
  - Impressions and Clicks by rectag and variation.
  - CTR by rectag and variation. You would use this to monitor running A/B tests.
- Prediction Stats:
  - Counts for each predicted class.
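The CTR figures above follow directly from the impression and click counts. As a minimal sketch (the event schema and names here are illustrative, not Seldon's actual data model), the per-rectag, per-variation aggregation could look like:

```python
from collections import defaultdict

def aggregate_ctr(events):
    """Aggregate impression/click events into CTR per (rectag, variation).

    `events` is an iterable of (rectag, variation, kind) tuples, where
    kind is "impression" or "click". This schema is hypothetical.
    """
    stats = defaultdict(lambda: {"impressions": 0, "clicks": 0})
    for rectag, variation, kind in events:
        if kind == "impression":
            stats[(rectag, variation)]["impressions"] += 1
        elif kind == "click":
            stats[(rectag, variation)]["clicks"] += 1
    # CTR = clicks / impressions for each (rectag, variation) bucket.
    return {
        key: {**counts,
              "ctr": counts["clicks"] / counts["impressions"]
                     if counts["impressions"] else 0.0}
        for key, counts in stats.items()
    }

events = [
    ("default", "A", "impression"),
    ("default", "A", "impression"),
    ("default", "A", "click"),
    ("default", "B", "impression"),
]
print(aggregate_ctr(events)[("default", "A")]["ctr"])  # 0.5
```

Comparing the per-variation CTR buckets side by side is what makes the dashboard useful for monitoring a running A/B test.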
Example analytics for a running content recommendation endpoint are shown below:
Here is an example graph showing a test of the simple Iris prediction demo:
The Grafana endpoint is exposed on port 30002 when running with NodePort settings, or via an external load balancer if that option is chosen when configuring Seldon.
These dashboards are an initial release; in coming iterations we plan to extend the metrics they show to cover more areas of interest for monitoring real-time predictions running through a Seldon deployment.
For the technically minded: the dashboards are powered by Spark Streaming jobs that read data from Kafka, where it is pushed by Fluentd from the front-end REST servers. The processed data is sent to InfluxDB for display by a Grafana frontend. The dashboards and InfluxDB data are held on persistent storage volumes, so they remain available outside the Kubernetes cluster for backup or other use.
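At the end of that pipeline, the streaming job writes time-series points into InfluxDB. As a minimal sketch of that step, the snippet below formats a windowed request count as an InfluxDB line-protocol point (the measurement and tag names are illustrative assumptions, not Seldon's actual schema):

```python
import time

def to_influx_line(measurement, tags, fields, ts_ns=None):
    """Format a metric as an InfluxDB line-protocol string:
    measurement,tag=val,... field=val,... timestamp
    Tags and fields are sorted for a deterministic output."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

# One windowed count of API requests for a (hypothetical) REST endpoint:
point = to_influx_line(
    "api_requests",
    tags={"endpoint": "/recommend", "client": "demo"},
    fields={"count": 42},
    ts_ns=1_600_000_000_000_000_000,
)
print(point)
# api_requests,client=demo,endpoint=/recommend count=42 1600000000000000000
```

Grafana then queries these measurements from InfluxDB to render the per-endpoint request graphs.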
We recently re-architected how Seldon is packaged, deployed and maintained. Seldon is now provided as a fully dockerized set of containers running inside Kubernetes. To get started, please follow our updated install guide and please post to our users group if you have any questions or feedback. We hope you find this release useful.