Create your AI applications once, then run them easily both on GCP and on-premises.
Take your machine learning projects to production
AI Platform makes it easy for machine learning developers, data scientists, and data engineers to take their ML projects from ideation to production and deployment, quickly and cost-effectively. From data engineering to “no lock-in” flexibility, AI Platform’s integrated tool chain helps you build and run your own machine learning applications.
AI Platform supports Kubeflow, Google’s open-source platform, which lets you build portable ML pipelines that you can run on-premises or on Google Cloud without significant code changes. And you’ll have access to cutting-edge Google AI technology like TensorFlow, TPUs, and TFX tools as you deploy your AI applications to production.
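As a rough illustration of such a portable pipeline, the sketch below defines a two-step workflow, assuming the Kubeflow Pipelines v1 SDK (kfp); the component functions, pipeline name, and parameters are illustrative placeholders rather than anything provided by AI Platform itself.

```python
# Minimal sketch of a portable pipeline, assuming the Kubeflow Pipelines v1 SDK (kfp).
# The component functions and names below are illustrative placeholders.
import kfp
from kfp import dsl
from kfp.components import create_component_from_func


def preprocess(message: str) -> str:
    """Toy preprocessing step; replace with real feature engineering."""
    return message.upper()


def train(features: str) -> str:
    """Toy training step; replace with a real model-training routine."""
    return f"model trained on: {features}"


# Wrap the plain Python functions as container-based pipeline components.
preprocess_op = create_component_from_func(preprocess)
train_op = create_component_from_func(train)


@dsl.pipeline(name="demo-pipeline", description="Portable two-step demo pipeline.")
def demo_pipeline(message: str = "raw data"):
    prep_task = preprocess_op(message)
    train_op(prep_task.output)  # runs after the preprocessing step completes
```

The same definition can then be compiled with kfp.compiler.Compiler() and run on a Kubeflow deployment on GKE or on your own on-premises cluster, which is what makes the pipeline portable.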
Prepare
You can use Cloud Storage or BigQuery to store your data. Then use the built-in data labeling service to label your training data, applying classification, object detection, and entity extraction annotations to images, video, audio, and text. You can also import the labeled data into AutoML and train a model directly.
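As a minimal sketch of this preparation step, the snippet below stages training files in Cloud Storage and pulls a tabular training slice from BigQuery, assuming the google-cloud-storage and google-cloud-bigquery client libraries; the bucket, table, and file names are placeholders.

```python
# Minimal sketch of staging training data, assuming the google-cloud-storage and
# google-cloud-bigquery client libraries; bucket, dataset, and file names are placeholders.
from google.cloud import bigquery, storage

# Upload raw training files (e.g. images or CSVs) to a Cloud Storage bucket.
storage_client = storage.Client()
bucket = storage_client.bucket("my-training-data-bucket")   # placeholder bucket name
blob = bucket.blob("images/cat_001.jpg")
blob.upload_from_filename("local_data/cat_001.jpg")

# Or keep structured data in BigQuery and pull a training slice with SQL.
bq_client = bigquery.Client()
query = """
    SELECT features, label
    FROM `my_project.my_dataset.training_examples`
    LIMIT 1000
"""
rows = bq_client.query(query).result()
for row in rows:
    print(row["label"])
```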
Build and run
You can build your ML applications on GCP with a managed Jupyter Notebook service that provides fully configured environments for different ML frameworks via Deep Learning VM Images. Then use the AI Platform Training and Prediction services to train your models and deploy them to production on GCP in a serverless environment, or do so on-premises with the training and prediction microservices provided by Kubeflow.
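The sketch below shows what this looks like against the AI Platform Training and Prediction REST API (service name "ml", version "v1") via the Google API client library; the project, job ID, trainer package, and model names are placeholders, and a deployed model version is assumed for the prediction call.

```python
# Minimal sketch of submitting a training job and requesting online predictions through
# the AI Platform Training and Prediction REST API (service name "ml", version "v1").
# Project, job, model, and package paths below are placeholders.
from googleapiclient import discovery

ml = discovery.build("ml", "v1")
project_id = "projects/my-project"  # placeholder project

# Submit a serverless training job; AI Platform provisions and tears down the machines.
training_job = {
    "jobId": "my_training_job_001",
    "trainingInput": {
        "scaleTier": "BASIC",
        "packageUris": ["gs://my-training-data-bucket/packages/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "region": "us-central1",
        "runtimeVersion": "2.11",
        "pythonVersion": "3.7",
    },
}
ml.projects().jobs().create(parent=project_id, body=training_job).execute()

# Once a trained model version is deployed, request online predictions from it.
response = (
    ml.projects()
    .predict(
        name="projects/my-project/models/my_model/versions/v1",
        body={"instances": [{"feature_1": 0.25, "feature_2": 1.8}]},
    )
    .execute()
)
print(response.get("predictions"))
```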
Manage
You can manage your models, experiments, and end-to-end workflows using the AI Platform interface within the GCP console, or do so on-premises using Kubeflow Pipelines. AI Platform offers advanced tooling to help you understand your model results and explain them to business users.
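For the on-premises path, the short sketch below launches and tracks a pipeline run with the kfp client, reusing the demo_pipeline function from the Kubeflow sketch above; the endpoint URL and experiment name are placeholders for your own Kubeflow Pipelines deployment.

```python
# Minimal sketch of managing runs on a Kubeflow Pipelines deployment with the kfp client;
# the host URL and experiment name are placeholders for your own cluster.
import kfp

client = kfp.Client(host="https://my-kfp-endpoint.example.com")  # placeholder endpoint

# Launch the pipeline defined in the earlier sketch as a tracked run in a named experiment.
run = client.create_run_from_pipeline_func(
    demo_pipeline,                      # pipeline function from the sketch above
    arguments={"message": "raw data"},
    experiment_name="ai-platform-demo",
)
print(run.run_id)
```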
More information and official website:
https://cloud.google.com/ai-platform