<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Fri, 15 May 2026 17:26:58 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Pipeline Conversations - Episodes Tagged with “Platforms”</title>
    <link>https://podcast.zenml.io/tags/platforms</link>
    <pubDate>Mon, 05 Sep 2022 09:00:00 +0200</pubDate>
    <description>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>MLOps and LLMOps, from the trenches</itunes:subtitle>
    <itunes:author>ZenML GmbH</itunes:author>
    <itunes:summary>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/cover.jpg?v=3"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>machine-learning, machinelearning, mlops, deeplearning, ai, artificialintelligence, artificial-intelligence, technology, tech, llmops</itunes:keywords>
    <itunes:owner>
      <itunes:name>ZenML GmbH</itunes:name>
      <itunes:email>podcast@zenml.io</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
<item>
  <title>ML Abstractions with Phil Howes</title>
  <link>https://podcast.zenml.io/ml-abstractions-phil-howes</link>
  <guid isPermaLink="false">38e30182-2cf3-4295-9294-629edca09548</guid>
  <pubDate>Mon, 05 Sep 2022 09:00:00 +0200</pubDate>
  <author>ZenML GmbH</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/4d525632-f8ef-47c1-9321-20f5c498b1ac/38e30182-2cf3-4295-9294-629edca09548.mp3" length="39799563" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:season>2</itunes:season>
  <itunes:author>ZenML GmbH</itunes:author>
  <itunes:subtitle>This week we dive into the abstractions that we're all trying to layer on top of the core ML processes and workflows. I spoke with Phil Howes, co-founder and chief scientist at BaseTen. BaseTen is a platform that allows data scientists to go from an initial model to an MVP web app quickly.</itunes:subtitle>
  <itunes:duration>54:13</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/episodes/3/38e30182-2cf3-4295-9294-629edca09548/cover.jpg?v=1"/>
  <description>&lt;p&gt;This week we dive into the abstractions that we're all trying to layer on top of the core ML processes and workflows. I spoke with Phil Howes, co-founder and chief scientist at BaseTen. BaseTen is a platform that allows data scientists to go from an initial model to an MVP web app quickly.&lt;/p&gt;

&lt;p&gt;We got into some of the big challenges he had working to build out the platform, as well as the core issue of iteration speed that motivates why they're building BaseTen.&lt;/p&gt;

&lt;p&gt;Phil has experienced quite a few of the industry's end-to-end patterns in the years that he's been working on machine learning and it was great to have that context inform the conversation, too. Special Guest: Phil Howes.&lt;/p&gt;
</description>
  <itunes:keywords>mlops, machine-learning, data-science, ai, infrastructure, pipelines, tools, platforms</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>This week we dive into the abstractions that we&#39;re all trying to layer on top of the core ML processes and workflows. I spoke with Phil Howes, co-founder and chief scientist at BaseTen. BaseTen is a platform that allows data scientists to go from an initial model to an MVP web app quickly.</p>

<p>We got into some of the big challenges he had working to build out the platform, as well as the core issue of iteration speed that motivates why they&#39;re building BaseTen.</p>

<p>Phil has experienced quite a few of the industry&#39;s end-to-end patterns in the years that he&#39;s been working on machine learning and it was great to have that context inform the conversation, too.</p><p>Special Guest: Phil Howes.</p><p>Links:</p><ul><li><a title="Baseten | Turn ML models into full-stack apps" rel="nofollow" href="https://www.baseten.co/">Baseten | Turn ML models into full-stack apps</a></li><li><a title="Welcome to Baseten! - Baseten" rel="nofollow" href="https://docs.baseten.co/">Welcome to Baseten! - Baseten</a></li><li><a title="Blog | Baseten" rel="nofollow" href="https://www.baseten.co/blog">Blog | Baseten</a></li><li><a title="Gallery | Baseten" rel="nofollow" href="https://www.baseten.co/gallery">Gallery | Baseten</a></li><li><a title="basetenlabs/truss: Serve any model without boilerplate code" rel="nofollow" href="https://github.com/basetenlabs/truss">basetenlabs/truss: Serve any model without boilerplate code</a></li><li><a title="Baseten" rel="nofollow" href="https://github.com/basetenlabs">Baseten</a></li><li><a title="Phil Howes (LinkedIn)" rel="nofollow" href="https://www.linkedin.com/in/philhowes/">Phil Howes (LinkedIn)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>This week we dive into the abstractions that we&#39;re all trying to layer on top of the core ML processes and workflows. I spoke with Phil Howes, co-founder and chief scientist at BaseTen. BaseTen is a platform that allows data scientists to go from an initial model to an MVP web app quickly.</p>

<p>We got into some of the big challenges he had working to build out the platform, as well as the core issue of iteration speed that motivates why they&#39;re building BaseTen.</p>

<p>Phil has experienced quite a few of the industry&#39;s end-to-end patterns in the years that he&#39;s been working on machine learning and it was great to have that context inform the conversation, too.</p><p>Special Guest: Phil Howes.</p><p>Links:</p><ul><li><a title="Baseten | Turn ML models into full-stack apps" rel="nofollow" href="https://www.baseten.co/">Baseten | Turn ML models into full-stack apps</a></li><li><a title="Welcome to Baseten! - Baseten" rel="nofollow" href="https://docs.baseten.co/">Welcome to Baseten! - Baseten</a></li><li><a title="Blog | Baseten" rel="nofollow" href="https://www.baseten.co/blog">Blog | Baseten</a></li><li><a title="Gallery | Baseten" rel="nofollow" href="https://www.baseten.co/gallery">Gallery | Baseten</a></li><li><a title="basetenlabs/truss: Serve any model without boilerplate code" rel="nofollow" href="https://github.com/basetenlabs/truss">basetenlabs/truss: Serve any model without boilerplate code</a></li><li><a title="Baseten" rel="nofollow" href="https://github.com/basetenlabs">Baseten</a></li><li><a title="Phil Howes (LinkedIn)" rel="nofollow" href="https://www.linkedin.com/in/philhowes/">Phil Howes (LinkedIn)</a></li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
