Seldon Core

Seldon Core 1.2 Release

The Seldon team are delighted to announce the release of version 1.2 of our open source model deployment platform for Kubernetes. The release follows our win of the ‘Best Innovation in Open Source Technology’ award at the 2020 CogX Innovation Awards and brings several updates, outlined below.

Following the release, Seldon CTO Clive Cox commented “With the Seldon Core 1.2 release, powerful features such as batch inferencing and model metadata become available for machine learning services deployed using Seldon”.

Batch inference is now integrated into Seldon, allowing users to run both batch and real-time (RPC) workloads against their models seamlessly. The batch functionality can easily be included in any workflow manager to provision and run a batch inference job against a particular Seldon Deployment resource. The architecture is shown below.
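The essence of the batch pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not Seldon's actual batch processor: the `predict` function here is a hypothetical stand-in for a POST to a deployed model's REST endpoint, and the fan-out/collect logic mirrors the general idea of dispatching many instances to parallel workers while preserving input order.

```python
from concurrent.futures import ThreadPoolExecutor

def predict(instance):
    # Hypothetical stand-in for a call to a deployed Seldon model's
    # REST prediction endpoint; a real workflow would POST the payload
    # to the deployment's API gateway instead.
    return {"data": {"ndarray": [x * 2 for x in instance]}}

def run_batch(instances, workers=4):
    """Fan a batch of inputs out to a pool of parallel workers and
    collect the predictions in the original input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(predict, instances))

results = run_batch([[1, 2], [3, 4], [5, 6]])
```

Because `pool.map` preserves ordering, outputs line up with inputs even though the requests run concurrently, which is what lets a workflow manager treat the whole batch job as a single step.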

We provide two hands-on tutorials, for Argo Workflows and Kubeflow Pipelines, illustrating the batch functionality in action.

With Seldon 1.2, model creators can expose the metadata of their models. This allows both core input/output metadata and lineage metadata from external systems such as Pachyderm or DVC to be exposed for interrogation over REST while the model is running. The general architecture is shown below.
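To make this concrete, here is a sketch of how a model wrapped with Seldon's Python server might declare its metadata. The class name and all field values are illustrative assumptions, and the exact metadata schema and wrapper hooks should be checked against the Seldon documentation; the shape shown (a plain Python class with a `predict` method and a metadata method returning a dictionary) is the general pattern.

```python
class MyModel:
    """Plain Python class in the style used by Seldon's Python wrapper;
    the wrapper duck-types, so no base class is required."""

    def predict(self, X, features_names=None):
        # Identity model, purely for illustration.
        return X

    def init_metadata(self):
        # Dictionary of model metadata (illustrative values). Lineage
        # fields could reference, e.g., a Pachyderm commit or a DVC
        # revision so the provenance of the model is queryable.
        return {
            "name": "my-model",
            "versions": ["v1.0"],
            "platform": "seldon",
            "inputs": [{"name": "input", "datatype": "FP32", "shape": [2]}],
            "outputs": [{"name": "output", "datatype": "FP32", "shape": [2]}],
        }
```

Once the model is deployed, this dictionary is what the running service would serve back over REST when its metadata endpoint is interrogated.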

This release also updates our RedHat Seldon Core operator so it can be used by our upcoming Seldon Deploy Enterprise release on the RedHat Marketplace. Both will appear in the RedHat distribution streams in the near future.

As usual, we look forward to hearing your feedback and reading your blogs on how you are deploying models using Seldon. Check out the project on GitHub or our quick guide to getting started with the platform.

Interested in a 14-day trial of our enterprise product Seldon Deploy? Fill out the form below to get started:
