Transcript
Introduction
Patrick
My guest this week is one of my best and oldest friends, Jeremiah Lowin. Jeremiah has had a fascinating career, starting with advanced work in statistics before moving into risk management in the hedge fund world. Throughout his career, he has studied data, risk, statistics, and machine learning, the last of which is the topic of our conversation today. He has now left the world of finance to found a company called Prefect, which is a framework for building data infrastructure. Prefect was inspired by observing frictions between data scientists and data engineers, and it solves these problems with a functional API for defining and executing data workflows. These problems, while wonky, are ones I can relate to from working in the quantitative investing world, and others who suffer from them will be nodding their heads right now.
In full and fair disclosure, both my family and I are investors in Jeremiah's business. You won't have to worry about that potential conflict of interest in today's conversation, though, because our focus is on the deployment of machine learning technologies in the realm of investing. What I love about talking to Jeremiah is that he is both an optimist and a skeptic. He loves working with new statistical learning technologies, but he often thinks they are overhyped or entirely unsuited to the tasks they are being used for. We get into some deep detail on how tests are set up in this world, the importance of data, and how the minimization of error is a guiding light in machine learning, and perhaps in all of human learning, too. Let's dive in.