MLOps with Vertex AI
1 hour 30 minutes
9 Credits
GSP274

Overview
Taking a TensorFlow model that you trained in your Jupyter notebook and deploying the SavedModel to Vertex AI doesn't scale to hundreds of models and large teams. Retraining becomes difficult because the ops team has to bolt monitoring, scheduling, and operational tooling onto a workflow that was never designed for automation.

For an ML model to be placed into production, it must meet the following requirements:

The model should be under version control. Source code control systems such as GitHub work better with text files (such as .py files) than with mixtures of text and binaries (which is what .ipynb files are).

The entire process, from dataset creation to training to deployment, must be driven by code. This makes it possible to automatically retrigger a training run using GitHub Actions or GitLab Continuous Integration whenever changed code is checked in.

The entire process should be invokable from a single entry point, so that retraining can be triggered by non-code changes such as the arrival of new data in a Cloud Storage bucket.

It should be easy to monitor the performance of models and endpoints and take measures to fix some subset of issues that arise without having to modify the model code.

Together, these criteria go by the name MLOps. Google Cloud in general, and Vertex AI in particular, provide a number of MLOps capabilities. However, to take advantage of these built-in capabilities, you must separate the model code from the ops code and express everything in Python rather than in notebooks.
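As a concrete illustration of the "single entry point" idea, here is a minimal sketch (illustrative only, not the lab's code) in which every stage is a plain Python function, so one call can be triggered by CI, by a scheduler, or by new data landing in a bucket:

import logging

def create_dataset(bucket: str) -> str:
    # Placeholder for dataset creation (e.g., exporting training data to GCS).
    logging.info('Creating dataset in gs://%s', bucket)
    return f'gs://{bucket}/data/train*'

def train_model(data_pattern: str) -> str:
    # Placeholder for model training; would return a SavedModel path.
    logging.info('Training on %s', data_pattern)
    return 'trained-model'

def deploy_model(model: str, project: str) -> None:
    # Placeholder for deployment to a serving endpoint.
    logging.info('Deploying %s in project %s', model, project)

def run_pipeline(project: str, bucket: str) -> None:
    # One function drives the whole flow, so retraining needs no manual steps.
    deploy_model(train_model(create_dataset(bucket)), project)

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    run_pipeline('my-project', 'my-project-dsongcp')  # hypothetical names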

In this lab, you automate the entire Machine Learning (ML) process by creating a Vertex training pipeline. You create a single entry point for the end-to-end training run, which is triggered whenever new code is checked in, when new data is received, or when changes in feature distribution or model evaluation are detected.

This lab uses a set of code samples and scripts developed for Data Science on the Google Cloud Platform, 2nd Edition from O'Reilly Media, Inc.

What you'll learn
Develop and deploy a model using Python

Make predictions from the deployed model

Setup and requirements







Task 2. Run a standalone model

Jupyter notebooks are great for development, but putting those notebooks directly into production is not ideal, even though Vertex AI allows you to do this.

It's good practice to convert your initial prototype model code into a Python file and then continue all development in it.

In the startup-vm terminal, use cat to examine the file model.py:

cat model.py

The file model.py was created by extracting all the previously developed Keras model code from the Jupyter notebook. See chapter 9 of Data Science on the Google Cloud Platform, 2nd Edition for more information about the Keras model code.

Run model.py in development mode to make sure it works:
export PROJECT_ID=$(gcloud info --format='value(config.project)')
export BUCKET_NAME=$PROJECT_ID-dsongcp
python3 model.py --bucket $BUCKET_NAME --develop


You could run it on the full dataset by dropping the --develop flag, but it would take much longer to complete.
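A --develop flag like this typically just shrinks the run. As a minimal sketch of the pattern (illustrative values, not the lab's exact code):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--bucket', required=True)
parser.add_argument('--develop', action='store_true')
args = parser.parse_args()

# In develop mode, read fewer examples and train for fewer epochs so the
# full code path can be verified in minutes rather than hours.
num_examples = 5_000 if args.develop else None  # None -> full dataset (hypothetical)
epochs = 2 if args.develop else 10              # hypothetical values
print(f'bucket={args.bucket} examples={num_examples} epochs={epochs}')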


  
 

student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/10_mlops$ export PROJECT_ID=$(gcloud info --format='value(config.project)')
export BUCKET_NAME=$PROJECT_ID-dsongcp
python3 model.py --bucket $BUCKET_NAME --develop

2023-08-07 04:39:51.735192: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-08-07 04:39:51.795523: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-08-07 04:39:51.796298: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-08-07 04:39:52.774801: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
INFO:root:Writing checkpoints and other outputs to gs://qwiklabs-gcp-04-35a7487fc1f1-dsongcp/ch9/train_output
INFO:root:Exporting trained model to gs://qwiklabs-gcp-04-35a7487fc1f1-dsongcp/ch9/train_output/export/flights_20230807-043953
INFO:root:Reading training data from gs://qwiklabs-gcp-04-35a7487fc1f1-dsongcp/ch9/data/train*
INFO:root:Writing trained model to gs://qwiklabs-gcp-04-35a7487fc1f1-dsongcp/ch9/train_output/export/flights_20230807-043953
WARNING:tensorflow:From /home/student-03-95f0ae3493c6/.local/lib/python3.9/site-packages/tensorflow/python/data/experimental/ops/readers.py:573: ignore_errors (from tensorflow.python.data.experimental.ops.error_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.ignore_errors` instead.
INFO:root:Checkpointing to gs://qwiklabs-gcp-04-35a7487fc1f1-dsongcp/ch9/train_output/checkpoints/flights.cpt
WARNING:tensorflow:From /home/student-03-95f0ae3493c6/data-science-on-gcp/10_mlops/model.py:70: numeric_column (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
Use Keras preprocessing layers instead, either directly or via the `tf.keras.utils.FeatureSpace` utility. Each of `tf.feature_column.*` has a functional equivalent in `tf.keras.layers` for feature preprocessing when training a Keras model.
WARNING:tensorflow:From /home/student-03-95f0ae3493c6/data-science-on-gcp/10_mlops/model.py:79: categorical_column_with_vocabulary_list (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
Use Keras preprocessing layers instead, either directly or via the `tf.keras.utils.FeatureSpace` utility. Each of `tf.feature_column.*` has a functional equivalent in `tf.keras.layers` for feature preprocessing when training a Keras model.
WARNING:tensorflow:From /home/student-03-95f0ae3493c6/data-science-on-gcp/10_mlops/model.py:82: categorical_column_with_hash_bucket (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
Use Keras preprocessing layers instead, either directly or via the `tf.keras.utils.FeatureSpace` utility. Each of `tf.feature_column.*` has a functional equivalent in `tf.keras.layers` for feature preprocessing when training a Keras model.
WARNING:tensorflow:From /home/student-03-95f0ae3493c6/data-science-on-gcp/10_mlops/model.py:99: bucketized_column (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
Use Keras preprocessing layers instead, either directly or via the `tf.keras.utils.FeatureSpace` utility. Each of `tf.feature_column.*` has a functional equivalent in `tf.keras.layers` for feature preprocessing when training a Keras model.
WARNING:tensorflow:From /home/student-03-95f0ae3493c6/data-science-on-gcp/10_mlops/model.py:108: crossed_column (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.keras.layers.experimental.preprocessing.HashedCrossing` instead for feature crossing when preprocessing data to train a Keras model.
WARNING:tensorflow:From /home/student-03-95f0ae3493c6/data-science-on-gcp/10_mlops/model.py:116: embedding_column (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
Use Keras preprocessing layers instead, either directly or via the `tf.keras.utils.FeatureSpace` utility. Each of `tf.feature_column.*` has a functional equivalent in `tf.keras.layers` for feature preprocessing when training a Keras model.
WARNING:tensorflow:From /home/student-03-95f0ae3493c6/data-science-on-gcp/10_mlops/model.py:123: indicator_column (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
Use Keras preprocessing layers instead, either directly or via the `tf.keras.utils.FeatureSpace` utility. Each of `tf.feature_column.*` has a functional equivalent in `tf.keras.layers` for feature preprocessing when training a Keras model.
INFO:root:Training on gs://qwiklabs-gcp-04-35a7487fc1f1-dsongcp/ch9/data/train*; eval on gs://qwiklabs-gcp-04-35a7487fc1f1-dsongcp/ch9/data/eval*; 2 epochs; 3
Epoch 1/2
2023-08-07 04:40:08.952885: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 1075 of 2560
2023-08-07 04:40:18.954461: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:422] Filling up shuffle buffer (this may take a while): 2279 of 2560
2023-08-07 04:40:21.253962: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:450] Shuffle buffer filled.
3/3 [==============================] - ETA: 0s - loss: 1.0713 - accuracy: 0.2734 - rmse: 0.6270 - auc: 0.4290 
Epoch 1: saving model to gs://qwiklabs-gcp-04-35a7487fc1f1-dsongcp/ch9/train_output/checkpoints/flights.cpt
INFO:root:Epoch 0: val_rmse = 0.45602381229400635
3/3 [==============================] - 39s 6s/step - loss: 1.0713 - accuracy: 0.2734 - rmse: 0.6270 - auc: 0.4290 - val_loss: 0.6094 - val_accuracy: 0.6457 - val_rmse: 0.4560 - val_auc: 0.4486
Epoch 2/2
1/3 [=========>....................] - ETA: 0s - loss: 0.6230 - accuracy: 0.6055 - rmse: 0.4667 - auc: 0.4087
Epoch 2: saving model to gs://qwiklabs-gcp-04-35a7487fc1f1-dsongcp/ch9/train_output/checkpoints/flights.cpt
INFO:root:Epoch 1: val_rmse = 0.40551894903182983
3/3 [==============================] - 12s 6s/step - loss: 0.5516 - accuracy: 0.7721 - rmse: 0.4240 - auc: 0.4485 - val_loss: 0.6199 - val_accuracy: 0.8230 - val_rmse: 0.4055 - val_auc: 0.4899
INFO:root:Exporting to gs://qwiklabs-gcp-04-35a7487fc1f1-dsongcp/ch9/train_output/export/flights_20230807-043953
WARNING:tensorflow:From /home/student-03-95f0ae3493c6/.local/lib/python3.9/site-packages/keras/src/feature_column/base_feature_layer.py:129: serialize_feature_column (from tensorflow.python.feature_column.serialization) is deprecated and will be removed in a future version.
Instructions for updating:
Use Keras preprocessing layers instead, either directly or via the `tf.keras.utils.FeatureSpace` utility. Each of `tf.feature_column.*` has a functional equivalent in `tf.keras.layers` for feature preprocessing when training a Keras model.
INFO:tensorflow:Assets written to: gs://qwiklabs-gcp-04-35a7487fc1f1-dsongcp/ch9/train_output/export/flights_20230807-043953/assets
INFO:root:Validation metric val_rmse on 1000 samples = 0.40551894903182983
INFO:root:Skipping evaluation on full test dataset
INFO:root:Done
student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/10_mlops$ history
    1  git clone https://github.com/GoogleCloudPlatform/data-science-on-gcp
    2  cd data-science-on-gcp/
    5  pip3 install google-cloud-aiplatform cloudml-hypertune kfp numpy tensorflow requests-toolbelt==0.10.1
     9   cd 10_mlops/
   11  cat model.py 
   12  export PROJECT_ID=$(gcloud info --format='value(config.project)')
   13  export BUCKET_NAME=$PROJECT_ID-dsongcp
   14  python3 model.py --bucket $BUCKET_NAME --develop   

Task 3. Develop and deploy a model using Vertex AI

To develop and deploy a model on Vertex AI, the training pipeline must do five things in code:

  • Load up a managed dataset in Vertex AI
  • Set up training infrastructure to run model.py
  • Train the model by invoking functions in model.py on the managed dataset
  • Find the endpoint to which to deploy the model
  • Deploy the model to the endpoint
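
For orientation, here is a condensed sketch of how these five steps map onto the Vertex AI Python SDK. This is a simplification, not the lab's exact train_on_vertexai.py code; the display names, container images, data paths, and machine types are illustrative:

from google.cloud import aiplatform

PROJECT_ID = 'your-project-id'          # illustrative
BUCKET_NAME = f'{PROJECT_ID}-dsongcp'   # illustrative

aiplatform.init(project=PROJECT_ID, location='us-central1',
                staging_bucket=f'gs://{BUCKET_NAME}')

# 1. Load a managed dataset in Vertex AI.
dataset = aiplatform.TabularDataset.create(
    display_name='flights',
    gcs_source=[f'gs://{BUCKET_NAME}/ch9/data/all.csv'])  # illustrative path

# 2. Set up training infrastructure to run model.py.
job = aiplatform.CustomTrainingJob(
    display_name='flights-train',
    script_path='model.py',
    container_uri='us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-6:latest',
    model_serving_container_image_uri='us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-6:latest')

# 3. Train the model by running model.py on the managed dataset.
model = job.run(dataset=dataset, args=['--bucket', BUCKET_NAME],
                machine_type='n1-standard-4', replica_count=1)

# 4. Create (or look up) the endpoint to deploy to.
endpoint = aiplatform.Endpoint.create(display_name='flights-endpoint')

# 5. Deploy the model to the endpoint.
model.deploy(endpoint=endpoint, machine_type='n1-standard-2', min_replica_count=1)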



1) In the startup-vm terminal, examine the train_on_vertexai.py script used to develop and deploy the pipeline on Vertex AI. The sed command first rewrites the script's kfp.v2 imports to kfp so they match the installed KFP version:
sed -i -e "s/kfp.v2/kfp/g" train_on_vertexai.py
cat train_on_vertexai.py


2) Run the training pipeline in develop mode using the following command, which passes in the project ID ($PROJECT_ID) and the storage bucket name ($BUCKET_NAME). The --cpuonly flag trains without a GPU, and --tfversion 2.6 selects the TensorFlow 2.6 containers.


python3 train_on_vertexai.py --project $PROJECT_ID --bucket $BUCKET_NAME --develop --cpuonly --tfversion 2.6





student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/10_mlops$ echo $BUCKET_NAME
qwiklabs-gcp-04-35a7487fc1f1-dsongcp

student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/10_mlops$ echo $PROJECT_ID
qwiklabs-gcp-04-35a7487fc1f1

student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/10_mlops$   python3 train_on_vertexai.py --project $PROJECT_ID --bucket $BUCKET_NAME --develop --cpuonly --tfversion 2.6

2023-08-07 04:43:57.486476: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-08-07 0

<TRUNCATED>

PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:CustomTrainingJob projects/305157610540/locations/us-central1/trainingPipelines/7002842952338767872 current state:
PipelineState.PIPELINE_STATE_RUNNING
INFO:google.cloud.aiplatform.training_jobs:View backing custom job:
https://console.cloud.google.com/ai/platform/locations/us-central1/training/5286355767799054336?project=305157610540
INFO:google.cloud.aiplatform.training_jobs:CustomTrainingJob run completed. Resource name: projects/305157610540/locations/us-central1/trainingPipelines/7002842952338767872
INFO:google.cloud.aiplatform.training_jobs:Model available at projects/305157610540/locations/us-central1/models/80776721246191616
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/305157610540/locations/us-central1/endpoints/3762190140668116992/operations/265176572254748672
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/305157610540/locations/us-central1/endpoints/3762190140668116992
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/305157610540/locations/us-central1/endpoints/3762190140668116992')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/305157610540/locations/us-central1/endpoints/3762190140668116992
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/305157610540/locations/us-central1/endpoints/3762190140668116992/operations/1421475776582123520
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/305157610540/locations/us-central1/endpoints/3762190140668116992
student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/10_mlops$
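
As the log notes, you can reconnect to this endpoint from any later session with aiplatform.Endpoint('projects/305157610540/locations/us-central1/endpoints/3762190140668116992') rather than redeploying. You can also inspect the training pipeline, model, and endpoint in the Cloud Console under Vertex AI.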

































Task 4. Make predictions from the deployed model

Sending a prediction request to the deployed model endpoint returns a response containing the model's predictions.

There are two ways to call the model: using a bash script or using Python. You can use either one to query the deployed endpoint.

Call the model using bash

You can send prediction requests using the bash script call_predict.sh by entering the following commands:
cd ../09_vertexai
bash ./call_predict.sh
cd ../10_mlops



student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/10_mlops$ cd ../09_vertexai/

student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/09_vertexai$ bash ./call_predict.sh 
Using endpoint [https://us-central1-aiplatform.googleapis.com/]
3762190140668116992
Using endpoint [https://us-central1-prediction-aiplatform.googleapis.com/]
[[0.673846781], [0.715811491]]
student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/09_vertexai$ cd ../10_mlops/
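
Call the model using Python

You can also send a prediction request using the Python script call_predict.py:
python3 call_predict.py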

student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/10_mlops$ python3 call_predict.py
Prediction(predictions=[[0.673846781], [0.715811491]], deployed_model_id='5056964038420856832', model_version_id='1', model_resource_name='projects/305157610540/locations/us-central1/models/80776721246191616', explanations=None)
student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/10_mlops$ 
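
The two values in predictions are the model's outputs for the two example flights sent by the scripts. For reference, here is a minimal sketch of what a Python prediction client like call_predict.py can look like. This is an illustration, not the lab's exact code: the endpoint resource name is taken from the deployment log above, and the feature names and values are hypothetical placeholders that would need to match the model's actual serving signature.

from google.cloud import aiplatform

# Reconnect to the deployed endpoint by the full resource name printed
# in the deployment log.
endpoint = aiplatform.Endpoint(
    'projects/305157610540/locations/us-central1/endpoints/3762190140668116992')

# Each instance must match the model's serving signature; these feature
# names and values are hypothetical placeholders.
instances = [
    {'dep_delay': 14.0, 'taxi_out': 13.0, 'distance': 319.0},
]

response = endpoint.predict(instances=instances)
print(response.predictions)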


student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp/10_mlops$ history
    1  git clone https://github.com/GoogleCloudPlatform/data-science-on-gcp
    2  cd data-science-on-gcp/
    3  pip3
    4  pip3 list
    5  pip3 install google-cloud-aiplatform cloudml-hypertune kfp numpy tensorflow requests-toolbelt==0.10.1
    6  pip3 list
    7  cat model.py
    8  pwd
    9  ls
   10  cd 10_mlops/
   11  cat model.py 
   12  export PROJECT_ID=$(gcloud info --format='value(config.project)')
   13  export BUCKET_NAME=$PROJECT_ID-dsongcp
   14  python3 model.py --bucket $BUCKET_NAME --develop
   15  history
   16  sed -i -e "s/kfp.v2/kfp/g" train_on_vertexai.py
   17  cat train_on_vertexai.py
   18  echo $BUCKET_NAME
   19  echo $PROJECT_ID
   20  cd ../09_vertexai/
   21  bash ./call_predict.sh 
   22  cd ../10_mlops/
   23  python3 call_predict.py
   24  history
 












EXTRAS:



student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp$ pip3 list
Package             Version
------------------- --------------
certifi             2020.6.20
chardet             4.0.0
dbus-python         1.2.16
distro-info         1.0
httplib2            0.18.1
idna                2.10
pip                 20.3.4
pycurl              7.43.0.6
PySimpleSOAP        1.16.2
python-apt          2.2.1
python-debian       0.1.39
python-debianbts    3.1.0
reportbug           7.10.3+deb11u1
requests            2.25.1
setuptools          52.0.0
six                 1.16.0
unattended-upgrades 0.1
urllib3             1.26.5
wheel               0.34.2

student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp$ pip3 install google-cloud-aiplatform cloudml-hypertune kfp numpy tensorflow requests-toolbelt==0.10.1



student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp$ pip3 list
Package                       Version
----------------------------- --------------
absl-py                       1.4.0
astunparse                    1.6.3
attrs                         23.1.0
cachetools                    5.3.1
certifi                       2020.6.20
chardet                       4.0.0
click                         8.1.6
cloudml-hypertune             0.1.0.dev6
cloudpickle                   2.2.1
dbus-python                   1.2.16
Deprecated                    1.2.14
distro-info                   1.0
docstring-parser              0.15
fire                          0.5.0
flatbuffers                   23.5.26
gast                          0.4.0
google-api-core               2.11.1
google-auth                   2.22.0
google-auth-oauthlib          1.0.0
google-cloud-aiplatform       1.29.0
google-cloud-bigquery         3.11.4
google-cloud-core             2.3.3
google-cloud-resource-manager 1.10.3
google-cloud-storage          2.10.0
google-crc32c                 1.5.0
google-pasta                  0.2.0
google-resumable-media        2.5.0
googleapis-common-protos      1.60.0
grpc-google-iam-v1            0.12.6
grpcio                        1.56.2
grpcio-status                 1.56.2
h5py                          3.9.0
httplib2                      0.18.1
idna                          2.10
importlib-metadata            6.8.0
jsonschema                    4.18.6
jsonschema-specifications     2023.7.1
keras                         2.13.1
kfp                           1.4.0
kfp-pipeline-spec             0.1.5
kfp-server-api                1.8.5
kubernetes                    11.0.0
libclang                      16.0.6
Markdown                      3.4.4
MarkupSafe                    2.1.3
numpy                         1.24.3
oauthlib                      3.2.2
opt-einsum                    3.3.0
packaging                     23.1
pip                           20.3.4
proto-plus                    1.22.3
protobuf                      4.23.4
pyasn1                        0.5.0
pyasn1-modules                0.3.0
pycurl                        7.43.0.6
PySimpleSOAP                  1.16.2
python-apt                    2.2.1
python-dateutil               2.8.2
python-debian                 0.1.39
python-debianbts              3.1.0
PyYAML                        6.0.1
referencing                   0.30.2
reportbug                     7.10.3+deb11u1
requests                      2.25.1
requests-oauthlib             1.3.1
requests-toolbelt             0.10.1
rpds-py                       0.9.2
rsa                           4.9
setuptools                    52.0.0
Shapely                       1.8.5.post1
six                           1.16.0
strip-hints                   0.1.10
tabulate                      0.9.0
tensorboard                   2.13.0
tensorboard-data-server       0.7.1
tensorflow                    2.13.0
tensorflow-estimator          2.13.0
tensorflow-io-gcs-filesystem  0.33.0
termcolor                     2.3.0
typing-extensions             4.5.0
unattended-upgrades           0.1
urllib3                       1.26.5
websocket-client              1.6.1
Werkzeug                      2.3.6
wheel                         0.34.2
wrapt                         1.15.0
zipp                          3.16.2
student-03-95f0ae3493c6@startup-vm:~/data-science-on-gcp$ 



