Deploying production ML models with TensorFlow Serving: Overview
http://youtube.com/watch?v=1d4BzR_7Nbc
Wei Wei, Developer Advocate at Google, gives an overview of deploying ML models into production with TensorFlow Serving, a framework that makes it easy to serve production ML models with low latency and high throughput. He covers what TF Serving is, its architecture, and its general workflow, and shows how to start a TF Serving model server and send POST requests to it from the command line (a minimal sketch of both steps appears at the end of this description).

Stay tuned for upcoming episodes in the Deploying Production ML Models with TensorFlow Serving series, in which Wei will cover how to customize TF Serving, tune performance, perform A/B testing and monitoring, and more.

Resources:
TensorFlow Serving → https://goo.gle/3tLWkqr
TensorFlow Serving with Docker → https://goo.gle/3tQHyi0
Training and serving a TensorFlow model with TF Serving → https://goo.gle/3HE2e2F
Deploying Production ML Models with TensorFlow Serving playlist → https://goo.gle/tf-serving

Subscribe to TensorFlow → https://goo.gle/TensorFlow

#TensorFlow #MachineLearning #ML
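As a companion to the video, here is a minimal command-line sketch of the two steps it demonstrates, following the TensorFlow Serving with Docker guide linked above. The model name my_model and the host path /path/to/my_model are placeholders, and the input shape in the request body must match whatever your exported SavedModel expects.

# Start a TF Serving model server in Docker, exposing the REST API on port 8501.
# /path/to/my_model is a placeholder for a directory containing a versioned SavedModel.
docker pull tensorflow/serving
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model -t tensorflow/serving

# Send a POST request to the model's predict endpoint with curl.
# The [[1.0, 2.0, 3.0]] instance is an assumed example input, not from the video.
curl -d '{"instances": [[1.0, 2.0, 3.0]]}' \
  -X POST http://localhost:8501/v1/models/my_model:predict

If the server is running and the input shape matches the model, the response is a JSON object whose "predictions" field holds the model outputs.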
