<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Thu, 23 Apr 2026 07:09:42 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Pipeline Conversations - Episodes Tagged with “Data”</title>
    <link>https://podcast.zenml.io/tags/data</link>
    <pubDate>Thu, 04 Aug 2022 10:00:00 +0200</pubDate>
    <description>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>MLOps and LLMOps, from the trenches</itunes:subtitle>
    <itunes:author>ZenML GmbH</itunes:author>
    <itunes:summary>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/cover.jpg?v=3"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>machine-learning, machinelearning, mlops, llmops, deeplearning, ai, artificialintelligence, artificial-intelligence, technology, tech</itunes:keywords>
    <itunes:owner>
      <itunes:name>ZenML GmbH</itunes:name>
      <itunes:email>podcast@zenml.io</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
<item>
  <title>Safe and Testable Computer Vision with Lakera</title>
  <link>https://podcast.zenml.io/safe-testable-computer-vision-lakera</link>
  <guid isPermaLink="false">6300d5ea-04f5-45a5-8c81-ca184b3d5bd4</guid>
  <pubDate>Thu, 04 Aug 2022 10:00:00 +0200</pubDate>
  <author>ZenML GmbH</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/4d525632-f8ef-47c1-9321-20f5c498b1ac/6300d5ea-04f5-45a5-8c81-ca184b3d5bd4.mp3" length="42191444" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:season>2</itunes:season>
  <itunes:author>ZenML GmbH</itunes:author>
  <itunes:subtitle>This week I spoke with Mateo Rojas-Carulla, the CTO and a co-founder of Lakera and Matthias Kraft, also a co-founder and the CPO there. Lakera is an AI safety company that does a lot of work in the computer vision domain, building a platform and tools for users to gain more confidence in the output and functionality of their models.</itunes:subtitle>
  <itunes:duration>57:32</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/episodes/6/6300d5ea-04f5-45a5-8c81-ca184b3d5bd4/cover.jpg?v=1"/>
  <description>This week I spoke with Mateo Rojas-Carulla, the CTO and a co-founder of Lakera (https://www.lakera.ai/) and Matthias Kraft, also a co-founder and the CPO there. Lakera (https://www.lakera.ai/) is an AI safety company that does a lot of work in the computer vision domain, building a platform and tools for users to gain more confidence in the output and functionality of their models.
We discuss how they think about the testing of machine learning models, and about how having this safety element upfront has implications for how you go about the testing and ensuring robustness. We specifically dive into how to go about testing computer vision models and the various pitfalls that are to be found in that domain. Special Guests: Mateo Rojas-Carulla and Matthias Kraft.
</description>
  <itunes:keywords>mlops, monitoring, data, machine-learning, computer-vision, testing, safety</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>This week I spoke with Mateo Rojas-Carulla, the CTO and a co-founder of <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a> and Matthias Kraft, also a co-founder and the CPO there. <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a> is an AI safety company that does a lot of work in the computer vision domain, building a platform and tools for users to gain more confidence in the output and functionality of their models.</p>

<p>We discuss how they think about the testing of machine learning models, and about how having this safety element upfront has implications for how you go about the testing and ensuring robustness. We specifically dive into how to go about testing computer vision models and the various pitfalls that are to be found in that domain.</p><p>Special Guests: Mateo Rojas-Carulla and Matthias Kraft.</p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>This week I spoke with Mateo Rojas-Carulla, the CTO and a co-founder of <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a> and Matthias Kraft, also a co-founder and the CPO there. <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a> is an AI safety company that does a lot of work in the computer vision domain, building a platform and tools for users to gain more confidence in the output and functionality of their models.</p>

<p>We discuss how they think about the testing of machine learning models, and about how having this safety element upfront has implications for how you go about the testing and ensuring robustness. We specifically dive into how to go about testing computer vision models and the various pitfalls that are to be found in that domain.</p><p>Special Guests: Mateo Rojas-Carulla and Matthias Kraft.</p>]]>
  </itunes:summary>
</item>
<item>
  <title>ML Monitoring with Emeli Dral</title>
  <link>https://podcast.zenml.io/monitoring-evidently-emeli-dral</link>
  <guid isPermaLink="false">57e441f1-021f-42f6-b676-fd8077e4eca1</guid>
  <pubDate>Thu, 07 Jul 2022 11:00:00 +0200</pubDate>
  <author>ZenML GmbH</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/4d525632-f8ef-47c1-9321-20f5c498b1ac/57e441f1-021f-42f6-b676-fd8077e4eca1.mp3" length="34563048" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:season>2</itunes:season>
  <itunes:author>ZenML GmbH</itunes:author>
  <itunes:subtitle>I'll be having some conversations with the people behind the tools that ZenML offers as integrations. We spoke with Ben Wilson a few weeks back, and today I'm pleased to publish this conversation with Emeli Dral, co-founder and CTO of Evidently, an open-source tool tackling the problem of monitoring of models and data for machine learning.</itunes:subtitle>
  <itunes:duration>46:57</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/episodes/5/57e441f1-021f-42f6-b676-fd8077e4eca1/cover.jpg?v=1"/>
  <description>I'll be having some conversations with the people behind the tools that ZenML offers as integrations. We spoke with Ben Wilson a few weeks back, and today I'm pleased to publish this conversation with Emeli Dral, co-founder and CTO of Evidently, an open-source tool tackling the problem of monitoring of models and data for machine learning.
We discussed the challenges around building a tool that is both straightforward to use while also customisable and powerful. We also got into the thinking behind how they grew their community and blog along the way. Special Guest: Emeli Dral.
</description>
  <itunes:keywords>mlops, monitoring, data, machine-learning</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>I&#39;ll be having some conversations with the people behind the tools that ZenML offers as integrations. We spoke with Ben Wilson a few weeks back, and today I&#39;m pleased to publish this conversation with Emeli Dral, co-founder and CTO of Evidently, an open-source tool tackling the problem of monitoring of models and data for machine learning.</p>

<p>We discussed the challenges around building a tool that is both straightforward to use while also customisable and powerful. We also got into the thinking behind how they grew their community and blog along the way.</p><p>Special Guest: Emeli Dral.</p><p>Links:</p><ul><li><a title="Emeli Dral (LinkedIn)" rel="nofollow" href="https://www.linkedin.com/in/emelidral/">Emeli Dral (LinkedIn)</a></li><li><a title="Emeli Dral (@EmeliDral) / Twitter" rel="nofollow" href="https://twitter.com/EmeliDral">Emeli Dral (@EmeliDral) / Twitter</a></li><li><a title="Evidently AI - Open-Source Machine Learning Monitoring" rel="nofollow" href="https://evidentlyai.com/">Evidently AI - Open-Source Machine Learning Monitoring</a></li><li><a title="Evidently Documentation" rel="nofollow" href="https://docs.evidentlyai.com/">Evidently Documentation</a></li><li><a title="Evidently AI Blog - Machine Learning in Production" rel="nofollow" href="https://evidentlyai.com/blog">Evidently AI Blog - Machine Learning in Production</a></li><li><a title="Evidently AI - Community &amp; Support" rel="nofollow" href="https://evidentlyai.com/community">Evidently AI - Community &amp; Support</a></li><li><a title="Emeli Dral - How Your ML Model Will Fail and How to Prepare for It - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=35A7P03wA0Y">Emeli Dral - How Your ML Model Will Fail and How to Prepare for It - YouTube</a></li><li><a title="Emeli Dral: The day after deployment: how to set up your model monitoring - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=vTkQHLLX3rQ">Emeli Dral: The day after deployment: how to set up your model monitoring - YouTube</a></li><li><a title="Is My Data Drifting? Early Monitoring for Machine Learning Models in Production | PyData Global 2021 - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=ukWc6mv8ojw">Is My Data Drifting? 
Early Monitoring for Machine Learning Models in Production | PyData Global 2021 - YouTube</a></li><li><a title="Monitoring Machine Learning Systems in Production - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=Z8b64sgTmaU">Monitoring Machine Learning Systems in Production - YouTube</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>I&#39;ll be having some conversations with the people behind the tools that ZenML offers as integrations. We spoke with Ben Wilson a few weeks back, and today I&#39;m pleased to publish this conversation with Emeli Dral, co-founder and CTO of Evidently, an open-source tool tackling the problem of monitoring of models and data for machine learning.</p>

<p>We discussed the challenges around building a tool that is both straightforward to use while also customisable and powerful. We also got into the thinking behind how they grew their community and blog along the way.</p><p>Special Guest: Emeli Dral.</p><p>Links:</p><ul><li><a title="Emeli Dral (LinkedIn)" rel="nofollow" href="https://www.linkedin.com/in/emelidral/">Emeli Dral (LinkedIn)</a></li><li><a title="Emeli Dral (@EmeliDral) / Twitter" rel="nofollow" href="https://twitter.com/EmeliDral">Emeli Dral (@EmeliDral) / Twitter</a></li><li><a title="Evidently AI - Open-Source Machine Learning Monitoring" rel="nofollow" href="https://evidentlyai.com/">Evidently AI - Open-Source Machine Learning Monitoring</a></li><li><a title="Evidently Documentation" rel="nofollow" href="https://docs.evidentlyai.com/">Evidently Documentation</a></li><li><a title="Evidently AI Blog - Machine Learning in Production" rel="nofollow" href="https://evidentlyai.com/blog">Evidently AI Blog - Machine Learning in Production</a></li><li><a title="Evidently AI - Community &amp; Support" rel="nofollow" href="https://evidentlyai.com/community">Evidently AI - Community &amp; Support</a></li><li><a title="Emeli Dral - How Your ML Model Will Fail and How to Prepare for It - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=35A7P03wA0Y">Emeli Dral - How Your ML Model Will Fail and How to Prepare for It - YouTube</a></li><li><a title="Emeli Dral: The day after deployment: how to set up your model monitoring - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=vTkQHLLX3rQ">Emeli Dral: The day after deployment: how to set up your model monitoring - YouTube</a></li><li><a title="Is My Data Drifting? Early Monitoring for Machine Learning Models in Production | PyData Global 2021 - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=ukWc6mv8ojw">Is My Data Drifting? 
Early Monitoring for Machine Learning Models in Production | PyData Global 2021 - YouTube</a></li><li><a title="Monitoring Machine Learning Systems in Production - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=Z8b64sgTmaU">Monitoring Machine Learning Systems in Production - YouTube</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>Humans in the Loop with Iva Gumnishka</title>
  <link>https://podcast.zenml.io/humans-in-loop-iva-gumnishka</link>
  <guid isPermaLink="false">2b5720f5-ce03-4cd5-a077-87780513ee1d</guid>
  <pubDate>Thu, 23 Jun 2022 10:00:00 +0200</pubDate>
  <author>ZenML GmbH</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/4d525632-f8ef-47c1-9321-20f5c498b1ac/2b5720f5-ce03-4cd5-a077-87780513ee1d.mp3" length="37416318" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:season>2</itunes:season>
  <itunes:author>ZenML GmbH</itunes:author>
  <itunes:subtitle>We were lucky to get to talk to Iva Gumnishka, the founder of Humans in the Loop. They are an organisation that provides data annotation and collection services. Their teams are primarily made up of those who have been affected by conflict and now are asylum seekers or refugees.</itunes:subtitle>
  <itunes:duration>50:55</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/episodes/2/2b5720f5-ce03-4cd5-a077-87780513ee1d/cover.jpg?v=1"/>
  <description>In this episode, I'm really happy to be able to continue the dialogue we've been having with our users and community around the role of data annotation and labeling in MLOps.
We were lucky to get to talk to Iva Gumnishka (https://www.linkedin.com/in/ivagumnishka/), the founder of Humans in the Loop (https://humansintheloop.org/). They are an organisation that provides data annotation and collection services. Their teams are primarily made up of those who have been affected by conflict and now are asylum seekers or refugees.
Iva has a ton of experience working with annotation and has seen how different companies build this into their production machine learning lifecycles. We're continuing to work on a feature that will allow you to do this as part of your MLOps workflow when using ZenML, and I welcome any feedback you might have on the back of this podcast or the articles we've been publishing on the ZenML blog. Special Guest: Iva Gumnishka.
</description>
  <itunes:keywords>data-annotation, labeling, annotation, data, data-centric-ai, machine-learning</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>In this episode, I&#39;m really happy to be able to continue the dialogue we&#39;ve been having with our users and community around the role of data annotation and labeling in MLOps.</p>

<p>We were lucky to get to talk to <a href="https://www.linkedin.com/in/ivagumnishka/" rel="nofollow">Iva Gumnishka</a>, the founder of <a href="https://humansintheloop.org/" rel="nofollow">Humans in the Loop</a>. They are an organisation that provides data annotation and collection services. Their teams are primarily made up of those who have been affected by conflict and now are asylum seekers or refugees.</p>

<p>Iva has a ton of experience working with annotation and has seen how different companies build this into their production machine learning lifecycles. We&#39;re continuing to work on a feature that will allow you to do this as part of your MLOps workflow when using ZenML, and I welcome any feedback you might have on the back of this podcast or the articles we&#39;ve been publishing on the ZenML blog.</p><p>Special Guest: Iva Gumnishka.</p><p>Links:</p><ul><li><a title="Humans in the Loop | Image annotation for ethical AI" rel="nofollow" href="https://humansintheloop.org/">Humans in the Loop | Image annotation for ethical AI</a></li><li><a title="Blog | Humans in the Loop" rel="nofollow" href="https://humansintheloop.org/resources/blog/">Blog | Humans in the Loop</a></li><li><a title="10 of the best open-source annotation tools for computer vision 2022 | Humans in the Loop" rel="nofollow" href="https://humansintheloop.org/10-of-the-best-open-source-annotation-tools-for-computer-vision-2022/">10 of the best open-source annotation tools for computer vision 2022 | Humans in the Loop</a></li><li><a title="zenml-io/awesome-open-data-annotation: Open Source Data Annotation &amp; Labeling Tools" rel="nofollow" href="https://github.com/zenml-io/awesome-open-data-annotation">zenml-io/awesome-open-data-annotation: Open Source Data Annotation &amp; Labeling Tools</a></li><li><a title="Need an open-source data annotation tool? We’ve got you covered! | ZenML Blog" rel="nofollow" href="https://blog.zenml.io/open-source-data-annotation-tools/">Need an open-source data annotation tool? We’ve got you covered! 
| ZenML Blog</a></li><li><a title="How to get the most out of data annotation | ZenML Blog" rel="nofollow" href="https://blog.zenml.io/data-labelling-annotation/">How to get the most out of data annotation | ZenML Blog</a></li><li><a title="Foundation | Humans in the Loop" rel="nofollow" href="https://humansintheloop.org/why-us/foundation/">Foundation | Humans in the Loop</a></li><li><a title="Your Data Needs a Human Touch. The story of Iva Gumnishka, a Bulgarian… | by Antoaneta Manko | womenintechglobal | Medium" rel="nofollow" href="https://medium.com/bulgarianwomenintech/your-data-needs-a-human-touch-5bc2ee70d548">Your Data Needs a Human Touch. The story of Iva Gumnishka, a Bulgarian… | by Antoaneta Manko | womenintechglobal | Medium</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>In this episode, I&#39;m really happy to be able to continue the dialogue we&#39;ve been having with our users and community around the role of data annotation and labeling in MLOps.</p>

<p>We were lucky to get to talk to <a href="https://www.linkedin.com/in/ivagumnishka/" rel="nofollow">Iva Gumnishka</a>, the founder of <a href="https://humansintheloop.org/" rel="nofollow">Humans in the Loop</a>. They are an organisation that provides data annotation and collection services. Their teams are primarily made up of those who have been affected by conflict and now are asylum seekers or refugees.</p>

<p>Iva has a ton of experience working with annotation and has seen how different companies build this into their production machine learning lifecycles. We&#39;re continuing to work on a feature that will allow you to do this as part of your MLOps workflow when using ZenML, and I welcome any feedback you might have on the back of this podcast or the articles we&#39;ve been publishing on the ZenML blog.</p><p>Special Guest: Iva Gumnishka.</p><p>Links:</p><ul><li><a title="Humans in the Loop | Image annotation for ethical AI" rel="nofollow" href="https://humansintheloop.org/">Humans in the Loop | Image annotation for ethical AI</a></li><li><a title="Blog | Humans in the Loop" rel="nofollow" href="https://humansintheloop.org/resources/blog/">Blog | Humans in the Loop</a></li><li><a title="10 of the best open-source annotation tools for computer vision 2022 | Humans in the Loop" rel="nofollow" href="https://humansintheloop.org/10-of-the-best-open-source-annotation-tools-for-computer-vision-2022/">10 of the best open-source annotation tools for computer vision 2022 | Humans in the Loop</a></li><li><a title="zenml-io/awesome-open-data-annotation: Open Source Data Annotation &amp; Labeling Tools" rel="nofollow" href="https://github.com/zenml-io/awesome-open-data-annotation">zenml-io/awesome-open-data-annotation: Open Source Data Annotation &amp; Labeling Tools</a></li><li><a title="Need an open-source data annotation tool? We’ve got you covered! | ZenML Blog" rel="nofollow" href="https://blog.zenml.io/open-source-data-annotation-tools/">Need an open-source data annotation tool? We’ve got you covered! 
| ZenML Blog</a></li><li><a title="How to get the most out of data annotation | ZenML Blog" rel="nofollow" href="https://blog.zenml.io/data-labelling-annotation/">How to get the most out of data annotation | ZenML Blog</a></li><li><a title="Foundation | Humans in the Loop" rel="nofollow" href="https://humansintheloop.org/why-us/foundation/">Foundation | Humans in the Loop</a></li><li><a title="Your Data Needs a Human Touch. The story of Iva Gumnishka, a Bulgarian… | by Antoaneta Manko | womenintechglobal | Medium" rel="nofollow" href="https://medium.com/bulgarianwomenintech/your-data-needs-a-human-touch-5bc2ee70d548">Your Data Needs a Human Touch. The story of Iva Gumnishka, a Bulgarian… | by Antoaneta Manko | womenintechglobal | Medium</a></li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
