Google explains what it takes to create a Production-Scale #MachineLearning Platform using #TensorFlow #TFX

Creating and maintaining a platform for reliably producing and deploying machine learning models requires careful orchestration of many components: a learner for generating models based on training data, modules for analyzing and validating both data and models, and finally infrastructure for serving models in production. This becomes particularly challenging when data changes over time and fresh models need to be produced continuously. Unfortunately, such orchestration is often done ad hoc using glue code and custom scripts developed by individual teams for specific use cases, leading to duplicated effort and fragile systems with high technical debt. We present the anatomy of a general-purpose machine learning platform and one implementation of such a platform at Google. By integrating the aforementioned components into one platform, we were able to standardize the components, simplify the platform configuration, and reduce the time to production from the order of months to weeks, while providing platform stability that minimizes service disruptions. We present the case study of one deployment of the platform in the Google Play app store, where the machine learning models are refreshed continuously as new data arrive. Deploying the platform led to reduced custom code, faster experiment cycles, and a 2% increase in app installs resulting from improved data and model analysis.

Sourced through Scoop.it from: www.kdd.org
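The paper itself predates the open-source TFX release, but as a rough illustration of how the components it describes (data ingestion, data analysis and validation, a learner, and a push to serving infrastructure) get orchestrated as one pipeline, here is a minimal sketch using today's open-source TFX Python API. The directory paths, the pipeline name, and the trainer.py module referenced below are placeholders for this sketch, not anything taken from the paper.

```python
# Minimal sketch of a TFX pipeline wiring together the components the
# abstract describes. Assumes the open-source TFX library (`pip install tfx`);
# DATA_ROOT, TRAINER_MODULE, and SERVING_DIR are placeholder paths.
from tfx import v1 as tfx

DATA_ROOT = './data'             # directory of CSV training data (placeholder)
TRAINER_MODULE = './trainer.py'  # user module defining run_fn() for the learner (placeholder)
SERVING_DIR = './serving_model'  # directory watched by the model server (placeholder)

# Ingest training data.
example_gen = tfx.components.CsvExampleGen(input_base=DATA_ROOT)

# Data analysis and validation: compute statistics, infer a schema,
# and flag anomalies in fresh data against that schema.
statistics_gen = tfx.components.StatisticsGen(
    examples=example_gen.outputs['examples'])
schema_gen = tfx.components.SchemaGen(
    statistics=statistics_gen.outputs['statistics'])
example_validator = tfx.components.ExampleValidator(
    statistics=statistics_gen.outputs['statistics'],
    schema=schema_gen.outputs['schema'])

# The learner: trains a model from the ingested examples.
trainer = tfx.components.Trainer(
    module_file=TRAINER_MODULE,
    examples=example_gen.outputs['examples'],
    schema=schema_gen.outputs['schema'],
    train_args=tfx.proto.TrainArgs(num_steps=1000),
    eval_args=tfx.proto.EvalArgs(num_steps=100))

# Serving: push the trained model to a directory watched by a model server
# such as TensorFlow Serving. (A model Evaluator step would normally gate
# this push on model-quality checks.)
pusher = tfx.components.Pusher(
    model=trainer.outputs['model'],
    push_destination=tfx.proto.PushDestination(
        filesystem=tfx.proto.PushDestination.Filesystem(
            base_directory=SERVING_DIR)))

# Orchestrate all components as one pipeline and run it locally.
pipeline = tfx.dsl.Pipeline(
    pipeline_name='tfx_sketch',
    pipeline_root='./pipeline_root',
    metadata_connection_config=(
        tfx.orchestration.metadata.sqlite_metadata_connection_config(
            './metadata.db')),
    components=[example_gen, statistics_gen, schema_gen,
                example_validator, trainer, pusher])

if __name__ == '__main__':
    tfx.orchestration.LocalDagRunner().run(pipeline)
```

In a production deployment like the one described for Google Play, the same pipeline definition would be handed to a scheduler-backed orchestrator instead of the local runner, so fresh models are produced continuously as new data arrive.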

WHY THIS MATTERS

This article explains the different components that Google had to build to make its TensorFlow machine learning system production-grade and able to support a disciplined software engineering development process. We are entering a brave new world of tooling.

Farid Mheir
farid@mheir.com