This week, we spoke with Danny Leybzon, currently working with WhyLabs to help data scientists monitor their models in production and prevent model performance from degrading. He previously worked as a kind of roving data scientist and engineer, helping companies put their models into production.
As such, we had a really interesting discussion of some of the ways that tooling, and the general context for data science, sometimes let practitioners down.
Of course, we also discussed why monitoring and logging are actually baseline practices that should be part of any and every data scientist's toolkit. Luckily for us, Danny added a bunch of examples from his wide experience doing all this in the real world.
- Danny D. Leybzon
- whylogs · PyPI
- Data and AI Observability Platform - enabling MLOps | WhyLabs
- SLCPython December 2020: Monitoring Machine Learning with Danny Leybzon - YouTube
- Monitoring ML Models - YouTube
- Monitoring ML Models in Production - YouTube
- Machine Learning Models in Production - YouTube
- Danny on LinkedIn
- Women's Clothes | Men's Clothes | Kid's Clothing Boxes | Stitch Fix
- zenml-io/zenml: ZenML 🙏: MLOps framework to create reproducible ML pipelines for production machine learning.
- Terraform by HashiCorp
- Zillow — A Cautionary Tale of Machine Learning - causaLens
- Cloud Monitoring as a Service | Datadog
- Prometheus - Monitoring system & time series database