<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Tue, 14 Apr 2026 19:05:51 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Pipeline Conversations - Episodes Tagged with “Prompts”</title>
    <link>https://podcast.zenml.io/tags/prompts</link>
    <pubDate>Wed, 11 Dec 2024 06:30:00 +0100</pubDate>
    <description>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>MLOps and LLMOps, from the trenches</itunes:subtitle>
    <itunes:author>ZenML GmbH</itunes:author>
    <itunes:summary>Pipeline Conversations brings you interviews with platform engineers, ML practitioners, and technical leaders building production AI systems. We dig into the real challenges of MLOps and LLMOps: orchestrating complex workflows on Kubernetes, fine-tuning and evaluating models at scale, and shipping AI that actually works. From ZenML.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/cover.jpg?v=3"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>machine-learning, machinelearning, mlops, deeplearning, ai, artificialintelligence, artificial-intelligence, technology, tech, llmops</itunes:keywords>
    <itunes:owner>
      <itunes:name>ZenML GmbH</itunes:name>
      <itunes:email>podcast@zenml.io</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
<item>
  <title>Prompt Engineering &amp; Management in Production: Practical Lessons from the LLMOps Database</title>
  <link>https://podcast.zenml.io/llmops-db-prompt-engineering</link>
  <guid isPermaLink="false">a1117f0f-33e9-464b-a6e5-7c7eee9d39a0</guid>
  <pubDate>Wed, 11 Dec 2024 06:30:00 +0100</pubDate>
  <author>ZenML GmbH</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/4d525632-f8ef-47c1-9321-20f5c498b1ac/a1117f0f-33e9-464b-a6e5-7c7eee9d39a0.mp3" length="19883141" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:season>3</itunes:season>
  <itunes:author>ZenML GmbH</itunes:author>
  <itunes:subtitle>Prompt engineering is the art and science of crafting instructions that unlock the potential of large language models (LLMs). It's a critical skill for anyone working with LLMs, whether you're building cutting-edge applications or conducting fundamental research. But what does effective prompt engineering look like in practice, and how can we systematically improve our prompts over time?</itunes:subtitle>
  <itunes:duration>29:34</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/4/4d525632-f8ef-47c1-9321-20f5c498b1ac/episodes/a/a1117f0f-33e9-464b-a6e5-7c7eee9d39a0/cover.jpg?v=2"/>
  <description>Prompt engineering is the art and science of crafting instructions that unlock the potential of large language models (LLMs). It's a critical skill for anyone working with LLMs, whether you're building cutting-edge applications or conducting fundamental research. But what does effective prompt engineering look like in practice, and how can we systematically improve our prompts over time?
To answer these questions, we've distilled key insights and techniques from a collection of LLMOps case studies spanning diverse industries and applications. From designing robust prompts to iterative refinement, optimization strategies to management infrastructure, these battle-tested lessons provide a roadmap for prompt engineering mastery.
Please read the full blog post here (https://www.zenml.io/blog/prompt-engineering-management-in-production-practical-lessons-from-the-llmops-database) and the associated LLMOps database entries here (https://zenml.io/llmops-database).
</description>
  <itunes:keywords>llmops, llms, ai, mlops, genai, prompts, prompt-engineering</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Prompt engineering is the art and science of crafting instructions that unlock the potential of large language models (LLMs). It&#39;s a critical skill for anyone working with LLMs, whether you&#39;re building cutting-edge applications or conducting fundamental research. But what does effective prompt engineering look like in practice, and how can we systematically improve our prompts over time?</p>

<p>To answer these questions, we&#39;ve distilled key insights and techniques from a collection of LLMOps case studies spanning diverse industries and applications. From designing robust prompts to iterative refinement, optimization strategies to management infrastructure, these battle-tested lessons provide a roadmap for prompt engineering mastery.</p>

<p>Please read the full blog post <a href="https://www.zenml.io/blog/prompt-engineering-management-in-production-practical-lessons-from-the-llmops-database" rel="nofollow">here</a> and the associated LLMOps database entries <a href="https://zenml.io/llmops-database" rel="nofollow">here</a>.</p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Prompt engineering is the art and science of crafting instructions that unlock the potential of large language models (LLMs). It&#39;s a critical skill for anyone working with LLMs, whether you&#39;re building cutting-edge applications or conducting fundamental research. But what does effective prompt engineering look like in practice, and how can we systematically improve our prompts over time?</p>

<p>To answer these questions, we&#39;ve distilled key insights and techniques from a collection of LLMOps case studies spanning diverse industries and applications. From designing robust prompts to iterative refinement, optimization strategies to management infrastructure, these battle-tested lessons provide a roadmap for prompt engineering mastery.</p>

<p>Please read the full blog post <a href="https://www.zenml.io/blog/prompt-engineering-management-in-production-practical-lessons-from-the-llmops-database" rel="nofollow">here</a> and the associated LLMOps database entries <a href="https://zenml.io/llmops-database" rel="nofollow">here</a>.</p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
