Tuning and scaling your ML models


Hyperparameter tuning codelab → https://goo.gle/3AQi5Jo
Distributed training codelab → https://goo.gle/3pKBF3E
Distributed training with TensorFlow video → https://goo.gle/3QSJtfr

A huge part of the machine learning process is experimentation. Luckily, there are a few Vertex AI features that can help you tune and scale your ML models. In this episode of Prototype to Production, Developer Advocate Nikita Namjoshi takes a look at hyperparameter tuning, distributed training, and experiment tracking. Watch this episode to learn how you can get models out of experimentation and into production with Vertex AI.

Chapters:
0:00 – Intro
0:49 – Hyperparameter tuning
1:28 – Hyperparameter tuning on Vertex AI
3:25 – Distributed training
5:16 – Configuring worker pools
6:00 – Experimentation with TensorBoard
7:03 – Vertex AI experiment tracking service
7:30 – Wrap up

Hyperparameter tuning on Vertex AI docs → https://goo.gle/3RiRxpT
Distributed training on Vertex AI docs → https://goo.gle/3QNgdXz
Vertex AI TensorBoard docs → https://goo.gle/3CBiMb4
Vertex AI Experiments docs → https://goo.gle/3pP0U4O
Vertex AI Experiments notebooks → https://goo.gle/3Kw7cQj

Prototype To Production playlist → https://goo.gle/PrototypeToProduction
Subscribe to Google Cloud Tech → https://goo.gle/GoogleCloudTech
#Prototype2Production


Duration: 00:08:24
Publisher: Google Cloud