Seldon has recently launched a new version of its lightweight offering: Seldon Core+. Core+ is designed especially for organizations needing support for their unique ML challenges, like those in highly regulated industries that must pay particular attention to governance, risk, and compliance.
This new version, Core+, provides our open-source software, Seldon Core v2, backed by an enterprise-grade warranty from Seldon covering the security and reliability of the entire software lifecycle. This blog post will help you understand the key features, benefits, and customer use cases that have been successfully put into production with Core+.
What is Core+?
Core+ is an enterprise-grade solution for machine learning deployments, paired with the reassurance of warranted binaries that have been rigorously tested by Seldon. Core+ makes it faster and easier to serve and manage machine learning models in production.
It is an ideal solution for MLOps practitioners, ML engineers, and data scientists looking to productionize their ML models. Core+ empowers users with a shared lexicon and consistent resource definitions across the ML lifecycle, regardless of where a model is running. By removing these common communication barriers, Core+ encourages more automation and closer collaboration, thereby enabling DevOps practices.
Moreover, Core+ offers direct access to our enterprise support portal with defined service level agreements (SLAs) and fast-track Slack access dedicated to getting your issues resolved. You will have priority access to engineering and delivery teams who can offer dedicated workshops and system evaluations. On top of this, you will receive access to product management for feature requests and proposals that you would like Seldon to consider. Say goodbye to all those challenges and risks that come with building on top of open-source software with no guarantees.
Why choose Core+?
To start with, Core+ has all the benefits of Seldon Core v2, including:
- Reduced operational costs driven by efficiencies from multi-model serving (MMS) and server overcommit
- Accelerated time to value with smaller, lighter model artifacts to build and deploy
- Out-of-the-box traceability with end-to-end data flows
- Flexible to your needs, so it fits into your infrastructure and processes
- A data-centric architecture to focus on what really drives value
The key differentiator for choosing Core+ over Core v2 is the extra enterprise-level support for your organization, from your first steps and all throughout your journey of scaling up.
When you first start using Core+, you will receive:
- A dedicated Customer Success Manager to support and accelerate your onboarding
- Exclusive support communications channels for quick troubleshooting
- A two-week trial with system evaluations and setup checklists
With Core+ fully integrated, you continue to benefit from:
- Our enterprise support portal with defined SLAs
- ‘How to’ documentation and ‘Getting started’ videos to onboard new users
- An enterprise-grade warranty to minimize downtime and provide peace of mind
As your level of operational maturity with ML systems increases, leveraging Core+ means:
- Reliability in your mission-critical ML deployments
- Availability of increasingly in-depth customer success and system evaluations
- Access to Seldon IQ for additional training and support to nurture your growth
What does Core+ provide?
The key features of Core+ include:
- Platform-agnostic: run Seldon on Kubernetes, Docker, or Docker Compose and choose the service mesh that suits your needs
- Run the ML ecosystem you want: Seldon supports numerous common ML runtimes and leverages the Open Inference Protocol so your models can easily interconnect
- Simple resource definitions allow you to express your needs clearly and concisely: Core v2 natively supports models, pipelines, experiments, and servers
- Flexible, extensible definitions allow you to start small and grow your ML ecosystem to match your requirements, regardless of whether you start on your laptop or on Kubernetes
- Understand your system by generating explanations for not just models, but also entire inference pipelines with traceability and auditing enabled by tracked data flows
- Maximize efficiency with MMS and overcommit to reduce the overheads of running many models; model and server auto-scaling so you only use what you need, when you need it; and model sharing between pipelines to avoid unnecessary duplication
- Release with confidence by running experiments like canaries and shadows to test new models and find the best-performing ones, and by monitoring your data flows to ensure your models continue to accurately reflect the ever-changing nature of the real world
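To give a flavour of how concise these resource definitions are, here is a sketch of a Model and a canary Experiment as Seldon Core v2 Kubernetes resources. The names, weights, and storage URI are illustrative placeholders, not a real deployment:

```yaml
# Illustrative sketch: a Model pointing at a stored artifact,
# with the runtime requirement it needs to be served.
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "gs://my-models/iris"   # placeholder bucket path
  requirements:
  - sklearn
---
# Illustrative sketch: a canary Experiment splitting traffic
# between the current model and a candidate.
apiVersion: mlops.seldon.io/v1alpha1
kind: Experiment
metadata:
  name: iris-canary
spec:
  default: iris
  candidates:
  - name: iris
    weight: 90
  - name: iris-v2
    weight: 10
```

Because these are ordinary Kubernetes resources, they can be applied with `kubectl`, versioned in Git, and reviewed like any other infrastructure change, which is what makes the definitions easy to grow from a laptop experiment into a production estate.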
A requirement for regulated industries
Machine learning is rapidly being adopted in several high-stakes sectors. Industries like healthcare and financial services are already subject to strict regulations and compliance requirements to ensure the safety and security of their customers and patients. With growing concerns over the potential misuse of AI, further regulatory controls are imminent across both these and other sectors.
Such highly regulated industries must rely on production-grade software that is robust, reliable, and fully supported. The software they use must furthermore be thoroughly tested and warrantied. This provides them with the assurance that the models they put into production meet the highest quality standards and can be relied upon in critical situations.
Failure to quickly adapt and adhere to current and anticipated requirements can lead to severe penalties and reputational damage, breaking their users’ trust and confidence in the organization. That’s why it is imperative for regulated industries to implement Core+ in order to ensure smooth machine learning operations whilst maintaining the highest levels of security and compliance.
Core+ in action
Core+ benefits a wide range of industries and use cases. Our customers already employ it to power autonomous vehicles’ real-time decision making, to detect fraudulent transactions in financial activities, and to accelerate drug discovery.
Capital One, a leading US retail bank, selected Seldon to accelerate machine learning deployments across multiple different areas of their business.
Before the introduction of Seldon, Steve Evangelista, Director of Product Management, was faced with delays of up to a month for ML models to reach production! This made scaling projects up nearly impossible without occupying even more developer resources on teams that were already overstretched.
With Seldon as their underlying infrastructure, Capital One created a ‘Model as a Service’ (MaaS) platform and sped up their model deployments from months to minutes. Core+ helped them reach their machine learning deployment goals faster, with full-service support to resolve problems that might otherwise have taken double or even triple the time to understand on their own. In an industry where time-to-value is so vital, every second in production counts.
Read more about Capital One’s journey with Seldon here.
Try Core+ today
If you’re looking for a powerful, enterprise-level solution for serving your machine learning models in production, look no further than Core+. It’s the perfect choice for those who want the power of our open-source Seldon Core technology with the support of our team’s knowledge and expertise.
With Core+, you’ll be able to significantly reduce your deployment time, boost your productivity and efficiency, and feel confident in your ML journey, no matter what challenges you face.
When you need to ensure your models maintain peak performance through advanced monitoring and explainability, Seldon Enterprise Platform is the solution for you. Find out more about our industry-leading platform for the enterprise.
Ready to see it for yourself? Book a demo and see the difference it can make for your team!
Alex is a Software Engineer at Seldon involved in the design and implementation of both Core and Deploy. He read his Bachelor’s and Master’s degrees in Computer Science at Cambridge, with an emphasis on data science and machine learning. His investigations into incorporating temporal information into medical imaging data culminated in publication in the open-access journal PLOS ONE. After working as a data scientist on high-volume click-stream data for e-commerce, he moved into software engineering for financial markets. You can find him talking about shell scripting, keyboard and chair ergonomics, and his love of patisserie.