Conversational Agents: Bot Building Basics

Build conversational interfaces (for example, chatbots and voice-powered apps and devices).


ENABLE API:




Task 1. Create your agent

Visit the Conversational Agents console, then select your Cloud Project named qwiklabs-gcp-03-d324fb1857f7.

https://dialogflow.cloud.google.com/v2/projects


An agent handles conversations with your end-users. It is a natural language understanding module that interprets the nuances of human language.






Click Create agent > Build your own.




Name your agent Flight booker.

Pick global from the Location drop-down.

Click Create.







Conversational Agents Console
In Conversational Agents, you can build AI agents that chat naturally and serve your needs

Agent graph
Get a big picture view of your agent with the graph. It shows how your Playbooks and Flows link up.

Playbooks
Playbooks guide your AI agent. They’re instructions for how agents handle generative chats and tasks.

Flows
Flows can add more structure to conversations. Define rules that deal with chat topics and paths in consistent ways.

Tools
Tools let agents do even more. Connect playbooks to data sources and external systems to expand your agent’s knowledge and abilities.

Settings
Set up agent-level settings here. For example, the LLM model you want to use, or a list of banned phrases.

Simulator
Want to test your agent? Use the simulator to see how it responds in a live conversation.



After creating the agent, navigate to Settings > Agent Settings > General > Logging settings and check the boxes next to Enable Cloud Logging and Enable Conversation History. This generates logs for the agent.



Task 2. Intents

Intents are the reasons an end-user has for interacting with the agent, for example, ordering something. You can create an intent for every topic end-users may want to discuss.

Intents can be reused across Pages and Flows. Each intent is defined by training phrases end-users typically ask. These can be annotated or "labeled" to collect specific parameters, such as arrival city or departure date.

Conversational Agents suggests annotations as you add training phrases to the intent; you can also annotate phrases manually to collect the parameter values you want to extract from the end-user's interaction with your agent.

Recommended: to make intents reusable and easier to maintain, give them clear and explicit names.

Intent name format: category.some_description

Example of formatting:

Core intents: main.book_a_flight
Common but not core: supplemental.flight_emissions
Reusable intents: confirmation.yes, confirmation.no, redirect.live_agent
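The category.some_description convention above is easy to check mechanically; here is a minimal sketch in Python (the regex and helper are illustrative, not part of the lab):

```python
import re

# One lowercase category, a dot, then a snake_case description,
# e.g. main.book_a_flight or confirmation.yes.
INTENT_NAME = re.compile(r"^[a-z]+(_[a-z]+)*\.[a-z]+(_[a-z]+)*$")

def is_valid_intent_name(name: str) -> bool:
    """Return True if the name follows category.some_description."""
    return INTENT_NAME.fullmatch(name) is not None

for name in ["main.book_a_flight", "confirmation.yes", "confirmation_no"]:
    print(name, is_valid_intent_name(name))
```

Note that confirmation_no fails the check: it is missing the dot separator, which is exactly the kind of naming slip the convention is meant to catch.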
Create your first intent
From the left navigation pane, click Flows.

Click Manage > Intents > + Create :







Under the Training Phrases header, add each of the following phrases into Conversational Agents, pressing Enter after each phrase:

  • Book a flight
  • Can you book my flight to San Francisco next month
  • I want to use my reward points to book a flight from Milan in October
  • My family is visiting next week and we need to book 6 round trip tickets
  • Four business class tickets from Taiwan to Dubai for June 2nd to 30th
  • I need a flight Saturday from LAX to San Jose
  • Book SFO to MIA on August 10th one way
  • Help me book a ticket from 4/10 to 4/15 from Mexico City to Medellin Colombia please
  • I am booking a surprise trip for my mom, can you help arrange that for May 10th to May 25th to Costa Rica
  • Do you have any cheap flights to NYC for this weekend
  • I want to fly in my cousin from Montreal on August 8th
  • I want to find two seats to Panama City on July 4th
  • For my wedding anniversary we want to go to Seattle for Christmas
Note: For higher model accuracy, 20-50 training phrases that mix short and long phrasings are recommended.





Task 3. Flows and pages
Flows are used to define topics and the associated conversational paths. Every agent has one flow called the Default Start Flow. This single flow may be all you need for a simple agent.

More complicated agents may require additional flows, and different development team members can be responsible for building and maintaining these flows.



Every flow starts with a page and may include additional pages to handle the rest of the conversation within that flow. The page the end-user is currently on is considered the "active page". Each page can be configured to collect any required information from the end-user.

Build from your Default Start Flow
Your agent starts from the Start Page of the Default Start Flow. Pages store routing logic, responses (known as fulfillment), and specific actions to take if an intent cannot be matched (a no-match) or the agent does not receive a response in time (a no-input).

Click Build.

Click Start Page to open the page.



From the expanded options on the Start page, select the + icon next to Routes.

Select the intent main.book_a_flight from the drop-down, then click Save.


Next, in the Routes section, click the main.book_a_flight route.

Scroll down to Transition and choose + new Page from the drop-down.

Name the page Ticket information and click Save.

Exit out of the windows to return to the main display of flows to see your new Ticket information page connected to the Start page.




The beginning of the flow now includes a greeting, and will then proceed to the Ticket information page when the main.book_a_flight intent is matched. On the Ticket Information page you will collect parameters from the end-user so they can book their flight.

Task 4. Entities and parameters

Entities define the type of information you wish to extract from an end-user, for example, the city you want to fly to. Use Conversational Agents' built-in "system entities" for matching dates, times, colors, email addresses, and so on.

System entities can also be “extended” to include values that are not part of the default system values. If you need to create a fully customized entity, you can do so by creating a Custom Entity type for matching data that is custom to your business and not found as a system entity.

Parameters are information supplied by the end-user during a session, such as date, time, and destination city. Each parameter has a name and an entity type, and parameter names are written in snake_case (lowercase with underscores between words).




Create your first set of parameters

Next you will use an entity to extract a required parameter from the end-user.

Click on the page Ticket Information, then the + by Parameters to collect flight data.

Enter departure_city in the Display name field.

Choose @sys.geo-city from the Entity type drop-down.




Scroll down to Initial prompt fulfillment > Agent responses > + Add dialogue response > Agent dialogue and add What city would you like the flight to depart from?




Click Save.

Exit out of this window to make another parameter.




Click the + by Parameters again to create 4 additional parameters one by one with the following name, entity type, and how the agent will prompt the end-user.


Display name       Entity type      Agent dialogue
departure_date     @sys.date        What is the month and day of the departure?
destination_city   @sys.geo-city    What is your destination city?
return_date        @sys.date        What is the month and day for the returning flight?
passenger_name     @sys.any         What is the passenger's name?


When finished they are listed like this:

Quiz: The primary reason that an end-user is interacting with your agent is captured by which resource type?

Answer: Intents


Task 5. Conditions

Once the agent has collected the necessary 5 flight booking parameters, you want to route the end user to another page using a routing condition, which you will create next.

Exit out of the parameter window to return to the Ticket information page again.

Scroll down to locate Routes and click the + sign next to it.

Scroll down to Condition > Condition rules > select "Match AT LEAST ONE rule (OR)"

In the Parameter field enter $page.params.status.

Choose the = sign in the Operand drop-down.

In the Value field enter: "FINAL" (ensure you include the double quotes).

Click Save.
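Conceptually, the $page.params.status = "FINAL" route fires once every required parameter on the page has been filled. A small Python sketch of that logic (the parameter names mirror the ones created above; the status function itself is illustrative, not how the product is implemented):

```python
REQUIRED_PARAMS = [
    "departure_city", "departure_date",
    "destination_city", "return_date", "passenger_name",
]

def page_status(params: dict) -> str:
    """Mimic the page's form status: FINAL once all required
    parameters have a value, otherwise still collecting."""
    missing = [p for p in REQUIRED_PARAMS if not params.get(p)]
    return "FINAL" if not missing else "COLLECTING"

booking = {"departure_city": "Austin", "departure_date": "2025-06-02"}
print(page_status(booking))          # prints COLLECTING
booking.update(destination_city="Boston",
               return_date="2025-06-09",
               passenger_name="Mickey Mouse")
print(page_status(booking))          # prints FINAL
```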


Task 6. Fulfillment
Now add a response to say to the end-user when all 5 of their booking parameters are collected. These responses are called Fulfillment.

From the condition you just made, scroll down a bit and locate the section called Fulfillment.

Under Agent responses click Add dialogue response, select Agent dialogue and then type the following: Thank you for that information. Let me check on the availability of your ticket and click Add.

Click Save.

(Now stay on this page while you read on to the next step of confirming information.)



Task 8. Testing

To test that your agent works as intended, click on Toggle Simulator in the upper right corner of the screen.

In the Start resource dropdown, select Default Start Flow.

Interact with the agent as if you were the end-user. As you move through the main flow, notice the pages, intents, and transitions you created.

Depending on how you arranged your parameter collection, you can try typing in the following sample dialogue:

I'd like to book a flight
Austin
Tomorrow
Boston
Next Friday
Mickey Mouse
Yes
This should result in a successful transaction through your agent, commonly known as the “happy path”.

Here is an example of the above agent testing in the Test Agent console:





Task 9. Exporting your agent

When you build an agent for one project, you can export it to use in a different project. You can export your agent and save it to use in future labs or to continue building in your own personal project!

In the Agent drop down at the top of the Conversational Agents console, click View all agents.






Now open the downloaded file.




Service Accounts and Roles: Fundamentals


============ ============= ====================

In Azure, the equivalents to a GCP service account are Managed Identities and Service Principals. The specific choice depends on the use case, as Azure provides two distinct options where GCP generally uses a single concept. 

Azure Equivalents to GCP Service Accounts

Managed Identities: These are recommended for services running within Azure (e.g., on an Azure Virtual Machine, Azure App Service, or Azure Function) that need to authenticate to other Azure resources. Managed Identities simplify authentication by automatically managing credentials through Azure's metadata service, eliminating the need for developers to manage secrets, passwords, or keys.
System-assigned: The identity has a one-to-one relationship with the specific Azure resource and is deleted when the resource is deleted.

User-assigned: The identity is created as a standalone, independent Azure resource and can be assigned to multiple Azure resources.

Service Principals: A service principal is the non-human identity used by applications, services, and automation tools to access Azure resources, typically when running outside of Azure or when a managed identity is not an option (e.g., on-premises applications, CI/CD pipelines). They require the management of credentials, such as client secrets or certificates. Creating an App Registration in Microsoft Entra ID (formerly Azure AD) will create an associated service principal. 
============ ============== ===================


Service accounts are a special type of Google account that grant permissions to virtual machines instead of end users. Service accounts are primarily used to ensure safe, managed connections to APIs and Google Cloud services. Granting access to trusted connections and rejecting malicious ones is a must-have security feature for any Google Cloud project.

In this lab, you:
  • Create and manage service accounts.
  • Create a virtual machine and associate it with a service account.
  • Use client libraries to access BigQuery from a service account.
  • Run a query on a BigQuery public dataset from a Compute Engine instance.

What are service accounts?

A service account is a special Google account that belongs to your application or a virtual machine (VM) instead of an individual end user. Your application uses the service account to call the Google API of a service, so that the users aren't directly involved.

For example, a Compute Engine VM may run as a service account, and that account can be given permissions to access the resources it needs. This way the service account is the identity of the service, and the service account's permissions control which resources the service can access.

A service account is identified by its email address, which is unique to the account.



Types of service accounts

  • User-managed service accounts
  • Google-managed service accounts
  • Google APIs service account

User-managed service accounts

When you create a new Cloud project using the Google Cloud console, if the Compute Engine API is enabled for your project, a Compute Engine service account is created for you by default. It is identifiable by the email:

PROJECT_NUMBER-compute@developer.gserviceaccount.com

If your project contains an App Engine application, a default App Engine service account is created in your project. It is identifiable by the email:

PROJECT_ID@appspot.gserviceaccount.com

Google-managed service accounts

In addition to the user-managed service accounts, you might see some additional service accounts in your project’s IAM policy or in the console. These service accounts are created and owned by Google. These accounts represent different Google services and each account is automatically granted IAM roles to access your Google Cloud project.

Google APIs service account

An example of a Google-managed service account is a Google API service account identifiable using the email:

PROJECT_NUMBER@cloudservices.gserviceaccount.com
This service account is designed specifically to run internal Google processes on your behalf and is not listed in the Service Accounts section of the console. By default, the account is automatically granted the project editor role on the project and is listed in the IAM section of the console. This service account is deleted only when the project is deleted.

Note: Google services rely on the account having access to your project, so you should not remove or change the service account’s role on your project.
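The three default identities above differ only in their email patterns. A quick Python sketch that assembles them (the project ID and project number used below are placeholders):

```python
def default_service_accounts(project_id: str, project_number: int) -> dict:
    """Build the well-known default service-account emails for a project."""
    return {
        # Default Compute Engine service account
        "compute": f"{project_number}-compute@developer.gserviceaccount.com",
        # Default App Engine service account
        "app_engine": f"{project_id}@appspot.gserviceaccount.com",
        # Google APIs service account (Google-managed)
        "google_apis": f"{project_number}@cloudservices.gserviceaccount.com",
    }

emails = default_service_accounts("my-project", 123456789)
for kind, email in emails.items():
    print(f"{kind:11s} {email}")
```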


Understanding IAM roles

When an identity calls a Google Cloud API, Google Cloud Identity and Access Management requires that the identity has the appropriate permissions to use the resource. You can grant permissions by granting roles to a user, a group, or a service account.

Types of roles
There are three types of roles in Cloud IAM:

Primitive roles, which include the Owner, Editor, and Viewer roles that existed prior to the introduction of Cloud IAM.

Predefined roles, which provide granular access for a specific service and are managed by Google Cloud.

Custom roles, which provide granular access according to a user-specified list of permissions.




student_04_ac2c687f8546@cloudshell:~ (qwiklabs-gcp-02-41dc3caed530)$ gcloud iam service-accounts create ketan-sa-123 --display-name "ketan service account"
Created service account [ketan-sa-123].

student_04_ac2c687f8546@cloudshell:~ (qwiklabs-gcp-02-41dc3caed530)$ gcloud iam service-accounts list | grep ketan
DISPLAY NAME: ketan service account
EMAIL: ketan-sa-123@qwiklabs-gcp-02-41dc3caed530.iam.gserviceaccount.com
student_04_ac2c687f8546@cloudshell:~ (qwiklabs-gcp-02-41dc3caed530)$ 


Task 1. Create and manage service accounts

When you create a new Cloud project, Google Cloud automatically creates one Compute Engine service account and one App Engine service account under that project. You can create up to 98 additional service accounts in your project to control access to your resources.

Creating a service account

Creating a service account is similar to adding a member to your project, but the service account belongs to your applications rather than an individual end user.

To create a service account, run the following command in Cloud Shell:

gcloud iam service-accounts create my-sa-123 --display-name "my service account"


Granting roles to a service account for specific resources

You grant roles to a service account so that the service account has permission to complete specific actions on the resources in your Cloud Platform project. For example, you might grant the storage.admin role to a service account so that it has control over objects and buckets in Cloud Storage.

Run the following in Cloud Shell to grant roles to the service account you just made:

gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID \
    --member serviceAccount:my-sa-123@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/editor

The output displays a list of roles the service account now has:




















Create a VM instance
Go to Navigation menu > Compute Engine > VM Instances, and click Create Instance.

In the Machine configuration:

Set the following values:

Configuration    Value
Name             bigquery-instance
Region           us-west4
Zone             us-west4-b
Series           E2
Machine Type     e2-medium
Click OS and storage.

If the boot disk is not already set, click Change and select

Boot Disk: Debian GNU/Linux 12 (bookworm)
Click Select.

Click Security.

Set the following values:

Configuration    Value
Service account  bigquery-qwiklab
Access scopes    Set access for each API
BigQuery         Enabled
Note: If the bigquery-qwiklab service account doesn't appear in the drop-down list, try typing the name into the "Filter" section.
Click Create.












Put the example code on a Compute Engine instance


Now, you will be able to see the bigquery-instance under VM Instances.

SSH into bigquery-instance by clicking on the SSH button.

Note: While connecting to SSH, you can click on Connect without Identity-Aware Proxy.
Install Python and create a virtual environment by running the following commands:


sudo apt install python3 python3-pip python3.11-venv -y
python3 -m venv myvenv
source myvenv/bin/activate


In the SSH window, install the necessary dependencies by running the following commands:

sudo apt-get update

sudo apt-get install -y git python3-pip

pip3 install --upgrade pip
pip3 install google-cloud-bigquery
pip3 install pyarrow
pip3 install pandas
pip3 install db-dtypes

Now create the example Python file:


echo "
from google.auth import compute_engine
from google.cloud import bigquery

credentials = compute_engine.Credentials(
    service_account_email='YOUR_SERVICE_ACCOUNT')

query = '''
SELECT
  year,
  COUNT(1) as num_babies
FROM
  publicdata.samples.natality
WHERE
  year > 2000
GROUP BY
  year
'''

client = bigquery.Client(
    project='qwiklabs-gcp-02-41dc3caed530',
    credentials=credentials)
print(client.query(query).to_dataframe())
" > query.py


Add the Project ID to query.py with:

sed -i -e "s/qwiklabs-gcp-02-41dc3caed530/$(gcloud config get-value project)/g" query.py



Run the following to make sure that the sed command has successfully changed the Project ID in the file:

cat query.py


Example output (yours may differ):

from google.auth import compute_engine
from google.cloud import bigquery

credentials = compute_engine.Credentials(
    service_account_email='YOUR_SERVICE_ACCOUNT')

query = '''
SELECT
  year,
  COUNT(1) as num_babies
FROM
  publicdata.samples.natality
WHERE
  year > 2000
GROUP BY
  year
'''

client = bigquery.Client(
    project='qwiklabs-gcp-02-41dc3caed530',
    credentials=credentials)
print(client.query(query).to_dataframe())

Add the service account email to query.py with:

sed -i -e "s/YOUR_SERVICE_ACCOUNT/bigquery-qwiklab@$(gcloud config get-value project).iam.gserviceaccount.com/g" query.py


Run the following to make sure that the sed command has successfully changed the service account email in the file:

cat query.py



Example output (yours may differ):

from google.auth import compute_engine
from google.cloud import bigquery
credentials = compute_engine.Credentials(
    service_account_email='bigquery-qwiklab@qwiklabs-gcp-02-41dc3caed530.iam.gserviceaccount.com')

query = '''
SELECT
  year,
  COUNT(1) as num_babies
FROM
  publicdata.samples.natality
WHERE
  year > 2000
GROUP BY
  year
'''

client = bigquery.Client(
    project='qwiklabs-gcp-02-41dc3caed530',
    credentials=credentials)
print(client.query(query).to_dataframe())


The application now uses the permissions that are associated with this service account. Run the query with the following Python command:

python3 query.py


The query should return the following output (your numbers may vary):

Row year  num_babies
0   2008  4255156
1   2006  4273225
2   2003  4096092
3   2004  4118907
4   2002  4027376
5   2005  4145619
6   2001  4031531
7   2007  4324008

Note: Your rows might appear in a different order than in the above output; however, the number of babies per year should match.

Awesome work! You made a request to a BigQuery public dataset with a bigquery-qwiklab service account.




Use `gcloud config set project [PROJECT_ID]` to change to a different project.
student_04_ac2c687f8546@cloudshell:~ (qwiklabs-gcp-02-41dc3caed530)$ history
    1  gcloud auth list
    2  gcloud config list 
    3  gcloud config list  project
    4  gcloud config list compute
    5  gcloud config list compute/region
    6  gcloud config set compute/region
    7  gcloud config set compute/region --help
    8  gcloud config set compute/region us-westd
    9  gcloud config set compute/region us-west4
   10  gcloud config list compute/region
   11  gcloud config list 
   12  gcloud iam service-accounts create my-sa-123 --display-name "my service account"
   13  gcloud iam service-account list
   14  gcloud iam 
   15  gcloud iam  roles list
   16  gcloud iam service account list
   17  gcloud iam service-accounts list
   18  gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID     --member serviceAccount:my-sa-123@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/editor
   19  gcloud iam service-accounts create ketan-sa-123 --display-name "ketan service account"
   20  gcloud iam service-account list | grep ketan
   21  gcloud iam service-accounts list | grep ketan
   22  history
student_04_ac2c687f8546@cloudshell:~ (qwiklabs-gcp-02-41dc3caed530)$ 

BigQuery

BigQuery is a fully-managed enterprise data warehouse that enables super-fast SQL queries.

The BigQuery UI helps you complete tasks like running queries, loading data, and even creating and training ML models. 

Storing and querying massive datasets can be time consuming and expensive without the right hardware and infrastructure. 

BigQuery is an enterprise data warehouse that solves this problem by enabling super-fast SQL queries using the processing power of Google's infrastructure. 

Simply move your data into BigQuery and let us handle the hard work. 

You can control access to both the project and your data based on your business needs, such as giving others the ability to view or query your data.

You can access BigQuery in the Console, the command-line tool, or by making calls to the BigQuery REST API using a variety of client libraries such as Java, .NET, or Python. 

There are also a variety of third-party tools that you can use to interact with BigQuery, such as visualizing the data or loading the data.

In this lab, you learn how to query public tables and load sample data into BigQuery:

  • Query a public dataset
  • Create a new dataset
  • Load data into a new table
  • Query a custom table

Task 1. Open BigQuery

The BigQuery console provides an interface to query tables, including public datasets offered by BigQuery. The query you will run accesses a table from a public dataset that BigQuery provides. It uses standard SQL to search the dataset and limits the results returned to 10.

Open the BigQuery console

In the Google Cloud Console, select Navigation menu > BigQuery.
The Welcome to BigQuery in the Cloud Console message box opens. This message box provides a link to the quickstart guide and the release notes.

Click Done.

The BigQuery console opens.





Task 2. Query a public dataset
Click + (SQL query) to create a new query. Copy and paste the following query into the BigQuery Query editor:


#standardSQL
SELECT
 weight_pounds, state, year, gestation_weeks
FROM
 `bigquery-public-data.samples.natality`
ORDER BY weight_pounds DESC LIMIT 10;





Task 3. Create a new dataset

To load custom data into a table, you first need to create a BigQuery dataset.

Datasets help control access to tables and views in a project. This lab uses only one table, but you still need a dataset to hold the table.

In the Explorer pane, next to your project ID, click View actions (view actions icon), then click Create dataset. Set the Dataset ID to babynames and click Create dataset.








Task 4. Load data into a new table

Next you create a table inside the babynames dataset, then load the data file from your storage bucket into the new table.

The custom data file you'll use contains approximately 7 MB of data about popular baby names, provided by the US Social Security Administration.

In the Cloud Console, select Navigation menu > BigQuery to return to the BigQuery console.

Navigate to the babynames dataset by clicking View actions (view actions icon) next to your dataset, then click Create table.




In the Create table dialog, set the following fields, leave all others at the default value:


Field                        Value
Create table from            Google Cloud Storage
Select file from GCS bucket  spls/gsp072/baby-names/yob2014.txt
File format                  CSV
Table                        names_2014
Schema > Edit as text        Slide on, then add the following in the textbox: name:string,gender:string,count:integer

Click the Create table button.
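The Edit as text schema is just a comma-separated list of name:type pairs. A small standard-library sketch that parses it into fields (purely illustrative, not part of the lab steps):

```python
def parse_schema(text: str) -> list[tuple[str, str]]:
    """Split an Edit-as-text schema like 'name:string,gender:string'
    into (column, type) pairs, upper-casing the type names."""
    fields = []
    for part in text.split(","):
        column, _, col_type = part.strip().partition(":")
        fields.append((column, col_type.upper()))
    return fields

print(parse_schema("name:string,gender:string,count:integer"))
# prints [('name', 'STRING'), ('gender', 'STRING'), ('count', 'INTEGER')]
```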



Task 5. Preview the table
Check your table! View the first few rows of the data.

Click the names_2014 table in the left panel, then click Preview.




Task 6. Query a custom dataset
Running a query against custom data is identical to the public-dataset query you ran earlier, except that now you're querying your own table instead of a public table.

In BigQuery, click the + (SQL query) icon at the top.

Paste or type the following query into the query Editor.


#standardSQL
SELECT
 name, count
FROM
 `babynames.names_2014`
WHERE
 gender = 'M'
ORDER BY count DESC LIMIT 5;




Idea to App - Prompting

 







Everything starts with a prompt.


When you first use Vertex AI Studio, you tell the AI what you want to do.

You can ask a question or give an instruction using natural language.

This is called a prompt.




Simply put, a prompt is a natural language request to an AI model.

The request can be a question, a task, or anything in between.

Once the model receives the prompt, it generates text, code, images, videos, music, and more.

However, just like how we communicate with one another, the way you communicate with AI makes a difference in what you get.

This process of creating prompts to get the response you want is called prompt design.





And the iterative process of repeatedly drafting and refining prompts and assessing the model's responses is called prompt engineering.




So what makes a good prompt?


Zero-Shot Prompting: the model is given only an instruction or question, with no worked examples.









Few-Shot Prompting: the prompt includes a few example input-output pairs that show the model the pattern to follow.
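A few-shot prompt simply prepends worked examples to the new input so the model can copy the pattern. A minimal sketch of assembling such a prompt string (the sentiment task and example reviews are invented for illustration):

```python
def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          new_input: str) -> str:
    """Concatenate the instruction, the worked examples, and the
    new input, leaving the final Output: for the model to complete."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
    lines.append(f"Input: {new_input}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("The flight was smooth and on time.", "positive"),
     ("My luggage was lost for three days.", "negative")],
    "The crew was friendly and helpful.",
)
print(prompt)
```

With an empty examples list, the same function produces a zero-shot prompt, which is a handy way to compare the two styles on the same task.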


























