Machine Learning

Automating and Governing AI over Production Data on Azure - MLOps Live #14 - with Microsoft

Many enterprises today face numerous challenges in handling data for AI/ML: they find themselves manually extracting datasets from a variety of sources, which wastes time and resources. In this session, we discuss end-to-end automation of the production pipeline and how to govern AI in an automated way. We touch upon setting up a feedback loop, generating explainable AI, and doing all of this at scale.

Industrializing Enterprise AI with the Right Platform - MLOps Live #9 - with NVIDIA

We discuss why enterprises need a platform that brings together tools to streamline the data science workflow with leading-edge infrastructure that can tackle the most complex ML models, one that brings innovative concepts into production sooner and integrates with your existing IT/DevOps-grounded approach.

Simplifying Deployment of ML in Federated Cloud and Edge Environments - MLOps Live #12 - with AWS

We discuss some common applications for machine learning at the edge and the main challenges associated with deploying distributed cloud and edge applications. We then wrap up the session with a live demo showing how to run a distributed cloud or edge application on the AWS cloud and AWS Outposts with the Iguazio Data Science Platform.

How Feature Stores Accelerate & Simplify Deployment of AI to Production - MLOps Live #13

The breakdown:

00:00 - Intro
02:15 - MLOps Overview
05:03 - Feature Engineering
07:44 - MLOps Workflow
10:44 - Solution: Feature Store
14:25 - Feature Store Competitive Landscape
17:03 - Features of a Feature Store
21:01 - CTO: Feature Store Sneak Peek
25:55 - Python Code example
27:57 - ML Pipeline example
30:07 - COVID-19 Patient Deterioration
33:26 - Live Demo
52:45 - Q&A


Why and when enterprises should care about Model Explainability

Machine learning models are often used for decision support: recommending which products to offer next, predicting when equipment is due for maintenance, and even assessing whether a patient is at risk. The question is, do organizations know how these models arrive at their predictions and outcomes? As the application of ML becomes more widespread, there are instances where answering this question becomes essential. This is what model explainability addresses.
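To make the idea concrete, here is a minimal sketch of one common, model-agnostic explainability technique: permutation importance, which measures how much a trained model's accuracy degrades when each feature is shuffled. This example uses scikit-learn on a public dataset and is illustrative only; it is not tied to any specific tool or session mentioned above.

```python
# Sketch: answering "how does the model arrive at its predictions?"
# with permutation importance, a simple model-agnostic explainability method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Public tabular dataset (30 numeric features, binary target).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature column in turn on held-out data and record the
# drop in accuracy: features whose shuffling hurts most drive predictions.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

In a decision-support setting like the patient-risk example, this kind of ranking is a first step toward explainability: it tells stakeholders which inputs the model actually relies on, independent of the model's internal structure.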