Walkthrough to Set Up the Deep Learning Toolkit for Splunk with Amazon EKS

The Splunk Deep Learning Toolkit (DLTK) is a powerful tool that lets you offload compute to external container environments, including GPU- or Spark-enabled ones. In a recent Splunk blog post, The Power of Deep Learning Analytics and GPU Acceleration, you can learn more about building a GPU-based environment. Splunk DLTK supports Docker, Kubernetes, and OpenShift as container environments.
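As a rough sketch of what the EKS side of such a setup might involve (the walkthrough itself has the authoritative steps), assuming eksctl and the AWS CLI are installed and configured — the cluster name, region, and GPU instance type below are placeholders:

```shell
# Hypothetical example: create an EKS cluster with a GPU node for DLTK.
# Names, region, and instance type are placeholders; adjust to your account.
eksctl create cluster \
  --name dltk-cluster \
  --region us-east-1 \
  --node-type p3.2xlarge \
  --nodes 1

# Point kubectl at the new cluster and confirm the node is ready.
aws eks update-kubeconfig --name dltk-cluster --region us-east-1
kubectl get nodes
```

Once the cluster is up, DLTK's container environment settings in Splunk would be pointed at this Kubernetes endpoint.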

What risk managers need to know about AI governance

As more businesses begin to realize the full potential of AI to deliver business results from their data, they're starting to bump up against their ability to manage it all. As the amount of data and number of models grow, organizations can accrue significant technical debt. Chief risk officers (CROs) and model risk managers can be left asking themselves, "Do I spend more to keep up with model demand, or do I accept more risk?"

100+ AI Predictions for 2021 from Industry Leaders

The technological landscape today is evolving faster than ever before. 2020 was no doubt a tough year that pushed businesses across verticals to innovate and adapt quickly to overcome the challenges brought on by the COVID-19 pandemic. It accelerated digital transformation and the adoption of emerging technologies like artificial intelligence at an incredible pace.

Dissecting the need for ethical AI

Until recently, topics like data ethics and ethics in AI were limited to academic circles and non-profit organizations rallying for citizen data rights. Fast forward to 2020, and the scenario is very different; AI ethics has become a mainstream topic that's a top priority for big organizations. With data collection and processing capabilities growing by the day, it's become easier than ever to train machine learning (ML) models on this collected data. However, organizations have come to realize that, without building transparency, explainability, and impartiality into their AI models, they're likely to do more harm than good to their business. This podcast will explore why ethical AI is the need of the hour, and what key factors AI leaders should consider before implementing AI in their organization's ecosystem.

Lessons Learned on Operationalizing Machine Learning at Scale with IHS Markit

According to Gartner, over 80% of data science projects never make it to production. This is the main problem enterprises face today when bringing data science into their organizations or scaling existing projects. In this session, Senior Data Scientist Nick Brown will share his lessons learned from operationalizing machine learning at IHS Markit. He will discuss the functional requirements for operationalizing machine learning at scale, and what you need to focus on to ensure you have a reliable solution for developing and deploying AI.
Allegro AI

Trains Has Left the Station for the Last Time

We have three big announcements for our community today: one, Allegro Trains is changing its name; two, we're adding a completely new way to use Trains; and three, we're announcing a bunch of features that make Trains an even better product for you! Read all about it on our blog, on our new website for our open source suite of tools.


The Importance of Data Storytelling in Shaping a Data Science Product

Artificial intelligence and machine learning are relentlessly revolutionizing marketplaces and ushering in radical, disruptive changes that threaten incumbent companies with obsolescence. To maintain a competitive edge and gain entry into new business segments, many companies are racing to build and deploy AI applications.