<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Thu, 23 Apr 2026 07:02:26 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Pipeline Conversations - Episodes Tagged with “Testing”</title>
    <link>https://podcast.zenml.io/tags/testing</link>
    <pubDate>Thu, 04 Aug 2022 10:00:00 +0200</pubDate>
    <description>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>MLOps and LLMOps, from the trenches</itunes:subtitle>
    <itunes:author>ZenML GmbH</itunes:author>
    <itunes:summary>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/cover.jpg?v=3"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>machine-learning, machinelearning, mlops, deeplearning, ai, artificialintelligence, artificial-intelligence, technology, tech, llmops</itunes:keywords>
    <itunes:owner>
      <itunes:name>ZenML GmbH</itunes:name>
      <itunes:email>podcast@zenml.io</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
    <item>
  <title>Safe and Testable Computer Vision with Lakera</title>
  <link>https://podcast.zenml.io/safe-testable-computer-vision-lakera</link>
  <guid isPermaLink="false">6300d5ea-04f5-45a5-8c81-ca184b3d5bd4</guid>
  <pubDate>Thu, 04 Aug 2022 10:00:00 +0200</pubDate>
  <author>ZenML GmbH</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/4d525632-f8ef-47c1-9321-20f5c498b1ac/6300d5ea-04f5-45a5-8c81-ca184b3d5bd4.mp3" length="42191444" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:season>2</itunes:season>
  <itunes:author>ZenML GmbH</itunes:author>
  <itunes:subtitle>This week I spoke with Mateo Rojas-Carulla, CTO and co-founder of Lakera, and Matthias Kraft, also a co-founder and the company's CPO. Lakera is an AI safety company that does much of its work in the computer vision domain, building a platform and tools that give users more confidence in the output and behavior of their models.</itunes:subtitle>
  <itunes:duration>57:32</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/episodes/6/6300d5ea-04f5-45a5-8c81-ca184b3d5bd4/cover.jpg?v=1"/>
  <description>This week I spoke with Mateo Rojas-Carulla, CTO and co-founder of Lakera (https://www.lakera.ai/), and Matthias Kraft, also a co-founder and the company's CPO. Lakera (https://www.lakera.ai/) is an AI safety company that does much of its work in the computer vision domain, building a platform and tools that give users more confidence in the output and behavior of their models.
We discuss how they think about testing machine learning models, and how putting safety first shapes the way you approach testing and ensuring robustness. We dive specifically into how to test computer vision models and the various pitfalls to be found in that domain. Special Guests: Mateo Rojas-Carulla and Matthias Kraft.
</description>
  <itunes:keywords>mlops, monitoring, data, machine-learning, computer-vision, testing, safety</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>This week I spoke with Mateo Rojas-Carulla, CTO and co-founder of <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a>, and Matthias Kraft, also a co-founder and the company's CPO. <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a> is an AI safety company that does much of its work in the computer vision domain, building a platform and tools that give users more confidence in the output and behavior of their models.</p>

<p>We discuss how they think about testing machine learning models, and how putting safety first shapes the way you approach testing and ensuring robustness. We dive specifically into how to test computer vision models and the various pitfalls to be found in that domain.</p><p>Special Guests: Mateo Rojas-Carulla and Matthias Kraft.</p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>This week I spoke with Mateo Rojas-Carulla, CTO and co-founder of <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a>, and Matthias Kraft, also a co-founder and the company's CPO. <a href="https://www.lakera.ai/" rel="nofollow">Lakera</a> is an AI safety company that does much of its work in the computer vision domain, building a platform and tools that give users more confidence in the output and behavior of their models.</p>

<p>We discuss how they think about testing machine learning models, and how putting safety first shapes the way you approach testing and ensuring robustness. We dive specifically into how to test computer vision models and the various pitfalls to be found in that domain.</p><p>Special Guests: Mateo Rojas-Carulla and Matthias Kraft.</p>]]>
  </itunes:summary>
    </item>
  </channel>
</rss>
