Seldon Core

The open-source framework for easily and quickly deploying models and experiments at scale.

Get it from GitHub

Data-Centric Deployment Pipelines

Enables data-centric workflows and simpler model deployment through experiments, ensembles, and transformers.

Runs on any cloud, is framework agnostic, and supports top ML libraries, toolkits, and languages.

Backed by a thriving MLOps community with over 3,000 members. Join them here.

Blazing Fast, Industry-Ready ML

Seldon Core converts your ML models (TensorFlow, PyTorch, H2O, etc.) or language wrappers (Python, Java, etc.) into production REST/gRPC microservices.

Seldon handles scaling to thousands of production machine learning models and provides advanced capabilities out of the box, including advanced metrics, request logging, explainers, outlier detectors, A/B tests, canaries, and more.
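The language-wrapper path above can be sketched with Seldon Core's Python wrapper convention: a plain class exposing a predict() method, which Seldon packages into a REST/gRPC microservice at build time (the class is discovered via the MODEL_NAME setting when the image is built). The class name and the fixed-offset logic below are illustrative assumptions; a real wrapper would load serialized weights.

```python
# Minimal sketch of a Seldon Core Python language wrapper.
# Seldon Core wraps a class like this behind REST/gRPC endpoints.

class IrisClassifier:
    """Illustrative model class; the name is ours, not a Seldon API."""

    def __init__(self):
        # A real wrapper would deserialize trained weights here,
        # e.g. joblib.load("model.joblib"); we use a fixed bias instead.
        self.bias = 1.0

    def predict(self, X, features_names=None):
        # X is the parsed request payload (typically a numpy array in
        # production); the return value is serialized into the response.
        return [[x + self.bias for x in row] for row in X]
```

Packaged with Seldon's s2i builder or a Dockerfile, a class like this becomes a standalone microservice that a SeldonDeployment resource can serve and scale.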

Runs anywhere

Built on Docker and Kubernetes, Seldon Core runs on your local machine, on any cloud, and on premises.

Agnostic and independent

Framework agnostic; supports top ML libraries, toolkits, and languages.

Runtime inference graphs

Advanced deployments with experiments, ensembles, and transformers.
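The transformers in these runtime inference graphs follow the same wrapper convention as models: a class exposing transform_input() (or transform_output()), which Seldon Core chains into the graph ahead of the model node. The scaler below is a hypothetical example; the class name and divisor are assumptions.

```python
# Sketch of an input transformer for a Seldon inference graph.
# Seldon Core calls transform_input() on each request before passing
# the result to the next node (e.g. the model) in the graph.

class MinMaxScaler:
    """Hypothetical transformer; the scaling constant is illustrative."""

    def __init__(self, max_value=255.0):
        self.max_value = max_value

    def transform_input(self, X, features_names=None):
        # Scale raw feature values into [0, 1] before they reach the model.
        return [[x / self.max_value for x in row] for row in X]
```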

Platform Integrations

Kubeflow
Seamlessly integrate Seldon’s advanced deployment features into your existing Kubeflow training pipelines

RedHat: OpenShift
Extend your application delivery platform to also support machine learning workloads with Seldon Core

Supported Toolkits

See it for yourself

Serve, monitor, explain, and manage your models today.

© 2022 Seldon Technologies. All Rights Reserved.

Rise London
41 Luke Street
Shoreditch
EC2A 4DP

UK: +44 (20) 7193-6752
US: +1 (646) 397-9911

Email: hello@seldon.io