Seldon Enterprise Platform v2.2 Release: Focus on Usability with Enhanced Batch Jobs

We are continuously working to make MLOps and the future of AI more accessible with elevated experiences across our platform. In our latest release, we are solving critical challenges with enhancements to Batch Job functionality and improvements to your overall experience with Enterprise Platform. Here’s everything you need to know:

Enhancing Operational Efficiency

Streamlined Model Deployment

Deploying custom models has never been easier, with support for specifying model URIs rather than Docker images via the UI. We have also added support for deploying models from the HuggingFace Hub directly through the UI, simplifying the model deployment process.
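
For illustration, here is a minimal sketch of what a URI-based deployment looks like under the hood, expressed as a Seldon Core 2-style Model resource applied with the Kubernetes Python client. The API group, field names, and bucket path are assumptions for the sake of the example; the UI achieves the same result without any code.

```python
# Hypothetical sketch: deploying a model by pointing at its artefacts (a model
# URI) rather than building a custom Docker image. Assumes a Seldon Core 2-style
# Model custom resource with a `storageUri` field; names may differ in your install.
from kubernetes import client, config

config.load_kube_config()

model = {
    "apiVersion": "mlops.seldon.io/v1alpha1",
    "kind": "Model",
    "metadata": {"name": "iris", "namespace": "seldon"},
    "spec": {
        "storageUri": "gs://my-bucket/models/iris",  # model artefacts, not an image
        "requirements": ["sklearn"],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="mlops.seldon.io",
    version="v1alpha1",
    namespace="seldon",
    plural="models",
    body=model,
)
```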

Enhanced Access Management and Logging

This update brings support for differentiating between users and machines via OIDC configuration, facilitating programmatic model deployment. In addition, a new API endpoint gives access to the full history of inference request logs, aiding in model retraining and enhancing your ability to monitor and optimize model performance.
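
As a rough illustration of programmatic access to those logs, the snippet below pulls recent inference request logs using a machine token obtained via OIDC. The endpoint path, query parameters, and response shape are assumptions for the example rather than the documented API, so check the Enterprise Platform documentation for the exact route.

```python
# Illustrative only: fetching inference request logs from a hypothetical
# Enterprise Platform endpoint using an OIDC access token for a machine client.
import requests

BASE_URL = "https://seldon.example.com"   # your Enterprise Platform host (placeholder)
TOKEN = "..."                             # OIDC access token for the machine identity

resp = requests.get(
    f"{BASE_URL}/api/request-logs",       # hypothetical endpoint name
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"deployment": "income-classifier", "limit": 100},
    timeout=30,
)
resp.raise_for_status()
for entry in resp.json():
    print(entry)
```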

Improved Error Messaging and Customizability

We’ve significantly upgraded error messages for failed model deployments, enhancing troubleshooting capabilities. This makes it easier for teams to identify and resolve issues quickly, improving the efficiency of model deployment processes.

We’ve also introduced a configurable prefix for the Consumer Group ID in Kafka. This enhancement not only simplifies the collaboration between model development and operations teams but also offers greater control over data streaming configurations, allowing for a more tailored and efficient data management approach.
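
Conceptually, the prefix keeps consumer groups from different installations or teams distinct when they share one Kafka cluster. The sketch below shows the idea using the confluent-kafka client; the prefix value, topic name, and the way the group ID is composed are illustrative, not the platform's actual configuration keys.

```python
# Conceptual sketch: a configurable prefix on the Consumer Group ID lets several
# installs or teams consume from a shared Kafka cluster without colliding.
from confluent_kafka import Consumer

CONSUMER_GROUP_PREFIX = "team-a"          # assumed configurable prefix
pipeline = "income-classifier-pipeline"

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": f"{CONSUMER_GROUP_PREFIX}-{pipeline}",  # prefix keeps groups distinct
    "auto.offset.reset": "earliest",
})
consumer.subscribe([f"{pipeline}-inputs"])
```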

Support for NaN Values

We’ve introduced support for handling NaN values during the encoding of a request using a codec from MLServer, ensuring your data handling is more robust.
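
For example, a payload containing NaNs can now round-trip through a codec. The minimal sketch below assumes MLServer's NumpyRequestCodec and is only meant to show the shape of the workflow.

```python
# Minimal sketch: encoding and decoding a payload that contains NaN values
# with an MLServer codec. Assumes mlserver.codecs.NumpyRequestCodec.
import numpy as np
from mlserver.codecs import NumpyRequestCodec

payload = np.array([[1.0, float("nan")], [3.0, 4.0]])

request = NumpyRequestCodec.encode_request(payload)  # build a V2 inference request
decoded = NumpyRequestCodec.decode_request(request)  # recover the array, NaNs included
print(decoded)
```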

Batch Job Enhancements Bring More Efficiency and Flexibility

Support of Automatic Mini-Batching

The development of this release was driven by customer feedback, with a sharp focus on the challenges large batch inference requests can pose, from straining memory resources to affecting performance. We have introduced automatic mini-batching, a feature designed to intelligently manage large batch inference requests so that memory constraints are no longer a bottleneck.
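
The sketch below illustrates the idea behind mini-batching; the chunk size and helper are purely illustrative, since the platform now performs the splitting for you.

```python
# Conceptual sketch of mini-batching: a large set of inference inputs is split
# into fixed-size chunks so no single request has to hold everything in memory.
from typing import Iterable, Iterator, List


def mini_batches(rows: Iterable[dict], size: int = 100) -> Iterator[List[dict]]:
    """Yield successive chunks of at most `size` rows."""
    batch: List[dict] = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch


rows = [{"feature": i} for i in range(1_050)]  # stand-in for a large batch input
for i, chunk in enumerate(mini_batches(rows, size=100)):
    # In the platform, each chunk becomes its own inference request.
    print(f"mini-batch {i}: {len(chunk)} rows")
```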

Directory-based Batch Job Specification

Users are no longer restricted to specifying batch jobs through a single text file. Now, you can specify batch jobs using a directory full of files, offering you unparalleled flexibility and ease in managing batch inference jobs.
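
As a rough illustration, every file under an input directory can contribute rows to the job rather than everything living in one file. The directory layout and file format below are assumptions for the example.

```python
# Illustrative sketch: gathering batch inputs from a directory of files
# (e.g. batch-inputs/part-000.txt, part-001.txt, ...) instead of a single file.
from pathlib import Path

input_dir = Path("batch-inputs")

rows = []
for part in sorted(input_dir.glob("*.txt")):
    rows.extend(part.read_text().splitlines())  # one inference request per line (assumed format)

print(f"Collected {len(rows)} requests from {input_dir}")
```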

Configurable Resource Allocation

With the new ability to configure CPU and RAM allocation for batch jobs, you can match resource usage to the demands of large batch jobs: more power where you need it, and less when you’re optimizing for cost. This ensures your jobs run efficiently, giving you greater control over performance and budget.
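
As an illustration, a batch-job specification might carry Kubernetes-style resource requests and limits alongside the job inputs. The field names below are assumptions rather than the documented schema, so treat this as a shape, not a reference.

```python
# Hypothetical sketch of a batch-job spec with explicit CPU and memory allocation,
# expressed as Kubernetes-style requests and limits.
batch_job_spec = {
    "deploymentName": "income-classifier",          # placeholder deployment name
    "inputDataPath": "s3://my-bucket/batch-inputs/",
    "resources": {
        "requests": {"cpu": "2", "memory": "4Gi"},  # baseline for the job
        "limits": {"cpu": "4", "memory": "8Gi"},    # cap for the largest batches
    },
}
print(batch_job_spec)
```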

Continuous Improvements to MLServer 1.4 & Core 2.8

Our community continues to play an active role in driving important improvements and fixes that enhance the capabilities of MLServer and Core 2. 

MLServer Enhancements in Functionality and User Experience

The most recent updates include an integration with OpenTelemetry that provides transparency on requests, helping you trace the journey of a request through Core 2 and Enterprise Platform and identify any bottlenecks. Check out the MLServer 1.4 release notes for a full list of fixes and improvements.
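
As a client-side illustration, the sketch below wraps an inference call in an OpenTelemetry span so it can appear alongside the server-side spans in your collector. The endpoint, model name, and payload are placeholders, and the console exporter stands in for a real OTLP exporter.

```python
# Illustrative client-side tracing of an inference request with OpenTelemetry.
import requests
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("batch-client")

with tracer.start_as_current_span("infer"):
    resp = requests.post(
        "http://mlserver.example.com/v2/models/iris/infer",  # placeholder host and model
        json={"inputs": [{"name": "input", "shape": [1, 4], "datatype": "FP64",
                          "data": [5.1, 3.5, 1.4, 0.2]}]},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())
```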

Improvements to Dataflow and Scheduler

As part of our pioneering efforts to leverage a dataflow architecture for data-centric model pipelines, we are putting our integration with Kafka through its paces and have been fine-tuning the scheduler configuration to improve its performance.

In addition, updating and testing dependencies is a critical part of managing CVEs. See the Core 2.8 release notes for more info.

We are excited to bring you this important release to Enterprise Platform so you can achieve more with your models to drive further innovation and shape the future of MLOps. For a deeper dive into these updates and to understand how they can benefit your operations, visit our documentation.
