This post was originally published on the Ambassador API Gateway blog.

Can you tell us about yourself and what your company does?

I am Clive Cox, CTO of Seldon. Seldon's focus is to help organisations successfully deploy machine learning solutions to production. We provide an open-source product, Seldon Core, which is a Kubernetes-based platform for deploying machine learning runtime inference graphs. Our goal is to help data scientists, DevOps engineers, and data managers work together to make a machine learning project a success.

ML is currently a hot topic. Why should a data scientist choose Seldon over other frameworks?

Seldon is focused on deploying data science models so that the work of data scientists can easily be put into production. Data scientists can continue to use whatever frameworks they choose to train their models (such as TensorFlow, scikit-learn, H2O, or R) and then easily package up their runtime inference graph so it can be managed by Seldon. We help the data science and DevOps teams work together to ensure data science projects go past the PoC stage and into production successfully.

Seldon Core, our open-source product, runs on top of Kubernetes and fits in well with the wider ecosystem of Kubernetes-based ML tooling such as Kubeflow and IBM FfDL, both of which Seldon is integrated with.
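To give a flavour of what deploying a packaged model looks like, here is a minimal sketch of a Seldon Core inference graph manifest. The model name, image name, and version are illustrative assumptions, not details taken from the interview:

```yaml
# Hypothetical sketch of a single-model Seldon Core deployment.
# "example-model" and the container image are assumed names for illustration.
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: example-model
spec:
  predictors:
  - name: default
    replicas: 1
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          # Image built by wrapping the trained model with Seldon's packaging tooling
          image: my-org/my-sklearn-model:0.1
    graph:
      name: classifier
      type: MODEL
```

Applying a manifest like this with `kubectl` asks Seldon Core's operator to stand up the inference graph and expose it for REST and gRPC requests.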

What was your pre-Ambassador API Gateway strategy? What challenges did you face?

We provided a simple OAuth API gateway to allow clients to connect to their running ML model inference graphs over REST and gRPC. However, we wanted to focus on the core ML challenges, and we hoped that correctly managing ingress/egress from Kubernetes to our services could be better solved by an existing solution in the cloud-native space.

Why did you choose Ambassador, and what benefits have you seen since adopting Ambassador?

We chose Ambassador so we could focus on our core ML challenges in deployment and utilize a reverse proxy that meets our customers' needs. We like the single gateway Ambassador provides for REST and gRPC and its simple yet powerful configuration. We also appreciate that it plays well with other emerging technologies such as Istio. Another major benefit is the pluggable authentication: the organisations we work with have differing needs in this area, so we need a solution flexible enough to handle different authentication mechanisms.
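As a rough illustration of that configuration style, routing traffic to a model service through Ambassador can be done with an annotation-based Mapping on a Kubernetes Service. The service name, prefix, and port below are assumptions for the sake of the sketch:

```yaml
# Hypothetical sketch: exposing a Seldon model service through Ambassador.
# "seldon-model" and the "/seldon/example-model/" prefix are assumed names.
apiVersion: v1
kind: Service
metadata:
  name: seldon-model
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: seldon-model-mapping
      prefix: /seldon/example-model/
      service: seldon-model:8000
spec:
  selector:
    app: seldon-model
  ports:
  - port: 8000
    targetPort: 8000
```

The appeal of this approach is that routing lives alongside the service definition itself, so adding a new model endpoint is just another annotation rather than a change to a central gateway config.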

Do you have any advice for people looking to adopt Ambassador?

Ambassador is easy to deploy and use so I would encourage people to give it a go and see if it fits their needs.

How can people get involved with Seldon Core, and what does the future hold for Seldon?

We are always looking for feedback from users deploying ML in their organisations, so we would welcome contact from anyone looking to put ML into production. We provide many examples for getting started with Seldon Core on our GitHub page and are building a developer community on Slack. Also, for people based in London, we run the TensorFlow London meetup, so you can always meet us there for a chat.

We are continuing to work on extending the capabilities of the open source Seldon Core product to provide a general control plane for ML deployment as well as working towards our beta release of Seldon Deploy.
