<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Sat, 18 Apr 2026 05:09:14 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Pipeline Conversations - Episodes Tagged with “Ethics”</title>
    <link>https://podcast.zenml.io/tags/ethics</link>
    <pubDate>Thu, 14 Apr 2022 11:00:00 +0200</pubDate>
    <description>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>MLOps and LLMOps, from the trenches</itunes:subtitle>
    <itunes:author>ZenML GmbH</itunes:author>
    <itunes:summary>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/cover.jpg?v=3"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>machine-learning, machinelearning, mlops, llmops, deeplearning, ai, artificialintelligence, artificial-intelligence, technology, tech</itunes:keywords>
    <itunes:owner>
      <itunes:name>ZenML GmbH</itunes:name>
      <itunes:email>podcast@zenml.io</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
<item>
  <title>Trustworthy ML with Kush Varshney</title>
  <link>https://podcast.zenml.io/trustworthy-ml-kush-varshney</link>
  <guid isPermaLink="false">3b306917-5653-40d1-b3c7-85c92ac80ad3</guid>
  <pubDate>Thu, 14 Apr 2022 11:00:00 +0200</pubDate>
  <author>ZenML GmbH</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/4d525632-f8ef-47c1-9321-20f5c498b1ac/3b306917-5653-40d1-b3c7-85c92ac80ad3.mp3" length="28933081" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>ZenML GmbH</itunes:author>
  <itunes:subtitle>I enthusiastically read Kush Varshney's book when it was released for free to the world several months back. Trustworthy Machine Learning is a concise and clear overview of many of the ways that machine learning can go wrong, and so I was especially keen to get Kush on to talk more about his work and research.</itunes:subtitle>
  <itunes:duration>39:08</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/episodes/3/3b306917-5653-40d1-b3c7-85c92ac80ad3/cover.jpg?v=1"/>
  <description>I enthusiastically read Kush Varshney's book when it was released for free to the world several months back. Trustworthy Machine Learning (http://www.trustworthymachinelearning.com/) is a concise and clear overview of many of the ways that machine learning can go wrong, and so I was especially keen to get Kush (http://krvarshney.github.io/) on to talk more about his work and research.
I also got a stronger sense of appreciation for how good MLOps practices and workflows offered a clear path to ensuring that your machine learning models and behaviours could become more trustworthy. Kush has done a lot of interesting work, particularly with the AI Fairness 360 (https://ai-fairness-360.org/) and AI Explainability 360 (https://ai-explainability-360.org/) toolkits that I'm sure listeners of this podcast would find worth checking out. Special Guest: Kush Varshney.
</description>
  <itunes:keywords>machine-learning, data-science, ai, artificial-intelligence, ethics, fairness, bias</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>I enthusiastically read Kush Varshney&#39;s book when it was released for free to the world several months back. <a href="http://www.trustworthymachinelearning.com/" rel="nofollow">Trustworthy Machine Learning</a> is a concise and clear overview of many of the ways that machine learning can go wrong, and so I was especially keen to get <a href="http://krvarshney.github.io/" rel="nofollow">Kush</a> on to talk more about his work and research.</p>

<p>I also got a stronger sense of appreciation for how good MLOps practices and workflows offered a clear path to ensuring that your machine learning models and behaviours could become more trustworthy. Kush has done a lot of interesting work, particularly with the <a href="https://ai-fairness-360.org/" rel="nofollow">AI Fairness 360</a> and <a href="https://ai-explainability-360.org/" rel="nofollow">AI Explainability 360</a> toolkits that I&#39;m sure listeners of this podcast would find worth checking out.</p><p>Special Guest: Kush Varshney.</p><p>Links:</p><ul><li><a title="Trustworthy Machine Learning by Kush R. Varshney" rel="nofollow" href="http://www.trustworthymachinelearning.com/">Trustworthy Machine Learning by Kush R. Varshney</a></li><li><a title="Home - AI Explainability 360" rel="nofollow" href="https://ai-explainability-360.org/">Home - AI Explainability 360</a></li><li><a title="Home - AI Fairness 360" rel="nofollow" href="https://ai-fairness-360.org/">Home - AI Fairness 360</a></li><li><a title="Kush Varshney" rel="nofollow" href="http://krvarshney.github.io/">Kush Varshney</a></li><li><a title="Kush Varshney (@krvarshney) / Twitter" rel="nofollow" href="https://twitter.com/krvarshney">Kush Varshney (@krvarshney) / Twitter</a></li><li><a title="Kush Varshney | LinkedIn" rel="nofollow" href="https://www.linkedin.com/in/kushvarshney/">Kush Varshney | LinkedIn</a></li><li><a title="Trustworthy Machine Learning: Varshney, Kush R.: 9798411903959: Amazon.com: Books" rel="nofollow" href="https://www.amazon.com/Trustworthy-Machine-Learning-Kush-Varshney/dp/B09SL5GPCD">Trustworthy Machine Learning: Varshney, Kush R.: 9798411903959: Amazon.com: Books</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>I enthusiastically read Kush Varshney&#39;s book when it was released for free to the world several months back. <a href="http://www.trustworthymachinelearning.com/" rel="nofollow">Trustworthy Machine Learning</a> is a concise and clear overview of many of the ways that machine learning can go wrong, and so I was especially keen to get <a href="http://krvarshney.github.io/" rel="nofollow">Kush</a> on to talk more about his work and research.</p>

<p>I also got a stronger sense of appreciation for how good MLOps practices and workflows offered a clear path to ensuring that your machine learning models and behaviours could become more trustworthy. Kush has done a lot of interesting work, particularly with the <a href="https://ai-fairness-360.org/" rel="nofollow">AI Fairness 360</a> and <a href="https://ai-explainability-360.org/" rel="nofollow">AI Explainability 360</a> toolkits that I&#39;m sure listeners of this podcast would find worth checking out.</p><p>Special Guest: Kush Varshney.</p><p>Links:</p><ul><li><a title="Trustworthy Machine Learning by Kush R. Varshney" rel="nofollow" href="http://www.trustworthymachinelearning.com/">Trustworthy Machine Learning by Kush R. Varshney</a></li><li><a title="Home - AI Explainability 360" rel="nofollow" href="https://ai-explainability-360.org/">Home - AI Explainability 360</a></li><li><a title="Home - AI Fairness 360" rel="nofollow" href="https://ai-fairness-360.org/">Home - AI Fairness 360</a></li><li><a title="Kush Varshney" rel="nofollow" href="http://krvarshney.github.io/">Kush Varshney</a></li><li><a title="Kush Varshney (@krvarshney) / Twitter" rel="nofollow" href="https://twitter.com/krvarshney">Kush Varshney (@krvarshney) / Twitter</a></li><li><a title="Kush Varshney | LinkedIn" rel="nofollow" href="https://www.linkedin.com/in/kushvarshney/">Kush Varshney | LinkedIn</a></li><li><a title="Trustworthy Machine Learning: Varshney, Kush R.: 9798411903959: Amazon.com: Books" rel="nofollow" href="https://www.amazon.com/Trustworthy-Machine-Learning-Kush-Varshney/dp/B09SL5GPCD">Trustworthy Machine Learning: Varshney, Kush R.: 9798411903959: Amazon.com: Books</a></li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
