<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Thu, 23 Apr 2026 07:15:41 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Pipeline Conversations - Episodes Tagged with “Monitoring”</title>
    <link>https://podcast.zenml.io/tags/monitoring</link>
    <pubDate>Thu, 04 Aug 2022 10:00:00 +0200</pubDate>
    <description>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>MLOps and LLMOps, from the trenches</itunes:subtitle>
    <itunes:author>ZenML GmbH</itunes:author>
    <itunes:summary>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/cover.jpg?v=3"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>machine-learning, machinelearning, mlops, deeplearning, ai, artificialintelligence, artificial-intelligence, technology, tech, llmops</itunes:keywords>
    <itunes:owner>
      <itunes:name>ZenML GmbH</itunes:name>
      <itunes:email>podcast@zenml.io</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
<item>
  <title>Safe and Testable Computer Vision with Lakera</title>
  <link>https://podcast.zenml.io/safe-testable-computer-vision-lakera</link>
  <guid isPermaLink="false">6300d5ea-04f5-45a5-8c81-ca184b3d5bd4</guid>
  <pubDate>Thu, 04 Aug 2022 10:00:00 +0200</pubDate>
  <author>ZenML GmbH</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/4d525632-f8ef-47c1-9321-20f5c498b1ac/6300d5ea-04f5-45a5-8c81-ca184b3d5bd4.mp3" length="42191444" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:season>2</itunes:season>
  <itunes:author>ZenML GmbH</itunes:author>
  <itunes:subtitle>This week I spoke with Mateo Rojas-Carulla, the CTO and a co-founder of Lakera, and Matthias Kraft, also a co-founder and the CPO there. Lakera is an AI safety company that does much of its work in the computer vision domain, building a platform and tools that give users more confidence in the output and functionality of their models.</itunes:subtitle>
  <itunes:duration>57:32</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/episodes/6/6300d5ea-04f5-45a5-8c81-ca184b3d5bd4/cover.jpg?v=1"/>
  <description>This week I spoke with Mateo Rojas-Carulla, the CTO and a co-founder of Lakera (https://www.lakera.ai/), and Matthias Kraft, also a co-founder and the CPO there. Lakera is an AI safety company that does much of its work in the computer vision domain, building a platform and tools that give users more confidence in the output and functionality of their models.
We discuss how they think about testing machine learning models, and how putting this safety element up front shapes the way you approach testing and ensuring robustness. We dive specifically into how to test computer vision models and the various pitfalls to be found in that domain. Special Guests: Mateo Rojas-Carulla and Matthias Kraft.
</description>
  <itunes:keywords>mlops, monitoring, data, machine-learning, computer-vision, testing, safety</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>This week I spoke with Mateo Rojas-Carulla, the CTO and a co-founder of <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a>, and Matthias Kraft, also a co-founder and the CPO there. <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a> is an AI safety company that does much of its work in the computer vision domain, building a platform and tools that give users more confidence in the output and functionality of their models.</p>

<p>We discuss how they think about testing machine learning models, and how putting this safety element up front shapes the way you approach testing and ensuring robustness. We dive specifically into how to test computer vision models and the various pitfalls to be found in that domain.</p><p>Special Guests: Mateo Rojas-Carulla and Matthias Kraft.</p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>This week I spoke with Mateo Rojas-Carulla, the CTO and a co-founder of <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a>, and Matthias Kraft, also a co-founder and the CPO there. <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a> is an AI safety company that does much of its work in the computer vision domain, building a platform and tools that give users more confidence in the output and functionality of their models.</p>

<p>We discuss how they think about testing machine learning models, and how putting this safety element up front shapes the way you approach testing and ensuring robustness. We dive specifically into how to test computer vision models and the various pitfalls to be found in that domain.</p><p>Special Guests: Mateo Rojas-Carulla and Matthias Kraft.</p>]]>
  </itunes:summary>
</item>
<item>
  <title>ML Monitoring with Emeli Dral</title>
  <link>https://podcast.zenml.io/monitoring-evidently-emeli-dral</link>
  <guid isPermaLink="false">57e441f1-021f-42f6-b676-fd8077e4eca1</guid>
  <pubDate>Thu, 07 Jul 2022 11:00:00 +0200</pubDate>
  <author>ZenML GmbH</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/4d525632-f8ef-47c1-9321-20f5c498b1ac/57e441f1-021f-42f6-b676-fd8077e4eca1.mp3" length="34563048" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:season>2</itunes:season>
  <itunes:author>ZenML GmbH</itunes:author>
  <itunes:subtitle>I'll be having some conversations with the people behind the tools that ZenML offers as integrations. We spoke with Ben Wilson a few weeks back, and today I'm pleased to publish this conversation with Emeli Dral, co-founder and CTO of Evidently, an open-source tool tackling the problem of monitoring models and data for machine learning.</itunes:subtitle>
  <itunes:duration>46:57</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/episodes/5/57e441f1-021f-42f6-b676-fd8077e4eca1/cover.jpg?v=1"/>
  <description>I'll be having some conversations with the people behind the tools that ZenML offers as integrations. We spoke with Ben Wilson a few weeks back, and today I'm pleased to publish this conversation with Emeli Dral, co-founder and CTO of Evidently, an open-source tool tackling the problem of monitoring models and data for machine learning.
We discussed the challenges of building a tool that is straightforward to use while also customisable and powerful. We also got into the thinking behind how they grew their community and blog along the way. Special Guest: Emeli Dral.
</description>
  <itunes:keywords>mlops, monitoring, data, machine-learning</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>I&#39;ll be having some conversations with the people behind the tools that ZenML offers as integrations. We spoke with Ben Wilson a few weeks back, and today I&#39;m pleased to publish this conversation with Emeli Dral, co-founder and CTO of Evidently, an open-source tool tackling the problem of monitoring models and data for machine learning.</p>

<p>We discussed the challenges of building a tool that is straightforward to use while also customisable and powerful. We also got into the thinking behind how they grew their community and blog along the way.</p><p>Special Guest: Emeli Dral.</p><p>Links:</p><ul><li><a title="Emeli Dral (LinkedIn)" rel="nofollow" href="https://www.linkedin.com/in/emelidral/">Emeli Dral (LinkedIn)</a></li><li><a title="Emeli Dral (@EmeliDral) / Twitter" rel="nofollow" href="https://twitter.com/EmeliDral">Emeli Dral (@EmeliDral) / Twitter</a></li><li><a title="Evidently AI - Open-Source Machine Learning Monitoring" rel="nofollow" href="https://evidentlyai.com/">Evidently AI - Open-Source Machine Learning Monitoring</a></li><li><a title="Evidently Documentation" rel="nofollow" href="https://docs.evidentlyai.com/">Evidently Documentation</a></li><li><a title="Evidently AI Blog - Machine Learning in Production" rel="nofollow" href="https://evidentlyai.com/blog">Evidently AI Blog - Machine Learning in Production</a></li><li><a title="Evidently AI - Community &amp; Support" rel="nofollow" href="https://evidentlyai.com/community">Evidently AI - Community &amp; Support</a></li><li><a title="Emeli Dral - How Your ML Model Will Fail and How to Prepare for It - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=35A7P03wA0Y">Emeli Dral - How Your ML Model Will Fail and How to Prepare for It - YouTube</a></li><li><a title="Emeli Dral: The day after deployment: how to set up your model monitoring - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=vTkQHLLX3rQ">Emeli Dral: The day after deployment: how to set up your model monitoring - YouTube</a></li><li><a title="Is My Data Drifting? Early Monitoring for Machine Learning Models in Production | PyData Global 2021 - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=ukWc6mv8ojw">Is My Data Drifting? Early Monitoring for Machine Learning Models in Production | PyData Global 2021 - YouTube</a></li><li><a title="Monitoring Machine Learning Systems in Production - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=Z8b64sgTmaU">Monitoring Machine Learning Systems in Production - YouTube</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>I&#39;ll be having some conversations with the people behind the tools that ZenML offers as integrations. We spoke with Ben Wilson a few weeks back, and today I&#39;m pleased to publish this conversation with Emeli Dral, co-founder and CTO of Evidently, an open-source tool tackling the problem of monitoring models and data for machine learning.</p>

<p>We discussed the challenges of building a tool that is straightforward to use while also customisable and powerful. We also got into the thinking behind how they grew their community and blog along the way.</p><p>Special Guest: Emeli Dral.</p><p>Links:</p><ul><li><a title="Emeli Dral (LinkedIn)" rel="nofollow" href="https://www.linkedin.com/in/emelidral/">Emeli Dral (LinkedIn)</a></li><li><a title="Emeli Dral (@EmeliDral) / Twitter" rel="nofollow" href="https://twitter.com/EmeliDral">Emeli Dral (@EmeliDral) / Twitter</a></li><li><a title="Evidently AI - Open-Source Machine Learning Monitoring" rel="nofollow" href="https://evidentlyai.com/">Evidently AI - Open-Source Machine Learning Monitoring</a></li><li><a title="Evidently Documentation" rel="nofollow" href="https://docs.evidentlyai.com/">Evidently Documentation</a></li><li><a title="Evidently AI Blog - Machine Learning in Production" rel="nofollow" href="https://evidentlyai.com/blog">Evidently AI Blog - Machine Learning in Production</a></li><li><a title="Evidently AI - Community &amp; Support" rel="nofollow" href="https://evidentlyai.com/community">Evidently AI - Community &amp; Support</a></li><li><a title="Emeli Dral - How Your ML Model Will Fail and How to Prepare for It - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=35A7P03wA0Y">Emeli Dral - How Your ML Model Will Fail and How to Prepare for It - YouTube</a></li><li><a title="Emeli Dral: The day after deployment: how to set up your model monitoring - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=vTkQHLLX3rQ">Emeli Dral: The day after deployment: how to set up your model monitoring - YouTube</a></li><li><a title="Is My Data Drifting? Early Monitoring for Machine Learning Models in Production | PyData Global 2021 - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=ukWc6mv8ojw">Is My Data Drifting? Early Monitoring for Machine Learning Models in Production | PyData Global 2021 - YouTube</a></li><li><a title="Monitoring Machine Learning Systems in Production - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=Z8b64sgTmaU">Monitoring Machine Learning Systems in Production - YouTube</a></li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
