Create a trigger using Terraform

This document describes how to use Terraform and the google_eventarc_trigger resource to create Eventarc triggers for the following Google Cloud targets:

• Cloud Run
• Google Kubernetes Engine (GKE)
• Workflows

To learn more about using Terraform, see the Terraform on Google Cloud documentation.

The code samples in this guide route direct events from Cloud Storage, but they can be adapted for any event provider. For example, to learn how to route direct events from Pub/Sub to Cloud Run, see the Terraform quickstart.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.

  4. Enable the Cloud Resource Manager and Identity and Access Management (IAM) APIs.

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

    Enable the APIs

  5. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    A Cloud Shell session starts and displays a command-line prompt at the bottom of the Google Cloud console. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  6. Terraform is integrated into the Cloud Shell environment, so you can use Cloud Shell to deploy Terraform resources without having to install Terraform.

Prepare to deploy Terraform

Before deploying any Terraform resources, you must create a Terraform configuration file. A Terraform configuration file lets you use the Terraform syntax to define the preferred end state for your infrastructure.

Prepare Cloud Shell

In Cloud Shell, set the default Google Cloud project where you want to apply your Terraform configurations. You only need to run this command once per project, and you can run it in any directory:

export GOOGLE_CLOUD_PROJECT=PROJECT_ID

Replace PROJECT_ID with the ID of your Google Cloud project.

Note that the environment variable is overridden if you set explicit values in the Terraform configuration file.
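For example, a provider block like the following sketch (the project ID shown is a placeholder, not part of this guide's configuration) takes precedence over the GOOGLE_CLOUD_PROJECT environment variable:

```hcl
# Illustrative only: an explicit project set in the provider block
# overrides the GOOGLE_CLOUD_PROJECT environment variable.
provider "google" {
  project = "my-project-id" # placeholder project ID
}
```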

Prepare the directory

Each Terraform configuration file must have its own directory (also called a root module). In Cloud Shell, create a directory and a new file within that directory:

mkdir DIRECTORY && cd DIRECTORY && touch main.tf

The file name must have the .tf extension. For example, in this document the file is referred to as main.tf.

Define your Terraform configuration

Copy the applicable Terraform code sample into your newly created main.tf file. Optionally, you can copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.

Typically, you apply the entire configuration at once. However, you can also target a specific resource. For example:

terraform apply -target="google_eventarc_trigger.default"

Note that the Terraform code samples use interpolation, for example, to reference variables, reference resource attributes, and call functions.
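As an illustrative sketch (the variable, bucket, and output names here are hypothetical and not part of this guide's configuration), the three interpolation forms look like this:

```hcl
# Hypothetical snippet showing the Terraform interpolation forms
variable "region" {
  default = "us-central1"
}

resource "google_storage_bucket" "example" {
  # Referencing a variable
  location = var.region
  # Calling a function
  name     = format("example-bucket-%s", var.region)

  uniform_bucket_level_access = true
}

output "bucket_url" {
  # Referencing a resource attribute
  value = google_storage_bucket.example.url
}
```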

Enable the APIs

Terraform samples typically assume that the required APIs are enabled in your Google Cloud project. Use the following code to enable the APIs:

Cloud Run

# Enable Cloud Run API
resource "google_project_service" "run" {
  service            = "run.googleapis.com"
  disable_on_destroy = false
}

# Enable Eventarc API
resource "google_project_service" "eventarc" {
  service            = "eventarc.googleapis.com"
  disable_on_destroy = false
}

# Enable Pub/Sub API
resource "google_project_service" "pubsub" {
  service            = "pubsub.googleapis.com"
  disable_on_destroy = false
}

GKE

# Enable GKE API
resource "google_project_service" "container" {
  service            = "container.googleapis.com"
  disable_on_destroy = false
}

# Enable Eventarc API
resource "google_project_service" "eventarc" {
  service            = "eventarc.googleapis.com"
  disable_on_destroy = false
}

# Enable Pub/Sub API
resource "google_project_service" "pubsub" {
  service            = "pubsub.googleapis.com"
  disable_on_destroy = false
}

Workflows

# Enable Workflows API
resource "google_project_service" "workflows" {
  service            = "workflows.googleapis.com"
  disable_on_destroy = false
}

# Enable Eventarc API
resource "google_project_service" "eventarc" {
  service            = "eventarc.googleapis.com"
  disable_on_destroy = false
}

# Enable Pub/Sub API
resource "google_project_service" "pubsub" {
  service            = "pubsub.googleapis.com"
  disable_on_destroy = false
}

Create a service account and configure its access

Each Eventarc trigger is associated with an IAM service account at the time the trigger is created. Use the following code to create a dedicated service account and grant the user-managed service account specific Identity and Access Management roles to manage events:

Cloud Run

# Used to retrieve project information later
data "google_project" "project" {}

# Create a dedicated service account
resource "google_service_account" "default" {
  account_id   = "eventarc-trigger-sa"
  display_name = "Eventarc Trigger Service Account"
}

# Grant permission to receive Eventarc events
resource "google_project_iam_member" "eventreceiver" {
  project = data.google_project.project.id
  role    = "roles/eventarc.eventReceiver"
  member  = "serviceAccount:${google_service_account.default.email}"
}

# Grant permission to invoke Cloud Run services
resource "google_project_iam_member" "runinvoker" {
  project = data.google_project.project.id
  role    = "roles/run.invoker"
  member  = "serviceAccount:${google_service_account.default.email}"
}

Enabling the Pub/Sub API automatically creates the Pub/Sub service agent. If the Pub/Sub service agent was created on or before April 8, 2021, and the service account doesn't have the Cloud Pub/Sub Service Agent role (roles/pubsub.serviceAgent), grant the Service Account Token Creator role (roles/iam.serviceAccountTokenCreator) to the service agent. For more information, see Create and grant roles to service agents.

resource "google_project_iam_member" "tokencreator" {
  project  = data.google_project.project.id
  role     = "roles/iam.serviceAccountTokenCreator"
  member   = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-pubsub.iam.gserviceaccount.com"
}

GKE

  1. Before creating the service account, enable Eventarc to manage GKE clusters:

    # Used to retrieve project_number later
    data "google_project" "project" {}
    
    # Enable Eventarc to manage GKE clusters
    # This is usually done with: gcloud eventarc gke-destinations init
    #
    # Eventarc creates a separate Event Forwarder pod for each trigger targeting a
    # GKE service, and requires explicit permissions to make changes to the
    # cluster. This is done by granting permissions to a special service account
    # (the Eventarc P4SA) to manage resources in the cluster. This needs to be done
    # once per Google Cloud project.
    
    # This identity is created with: gcloud beta services identity create --service eventarc.googleapis.com
    # This local variable is used for convenience
    locals {
      eventarc_sa = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-eventarc.iam.gserviceaccount.com"
    }
    
    resource "google_project_iam_member" "computeViewer" {
      project = data.google_project.project.id
      role    = "roles/compute.viewer"
      member  = local.eventarc_sa
    }
    
    resource "google_project_iam_member" "containerDeveloper" {
      project = data.google_project.project.id
      role    = "roles/container.developer"
      member  = local.eventarc_sa
    }
    
    resource "google_project_iam_member" "serviceAccountAdmin" {
      project = data.google_project.project.id
      role    = "roles/iam.serviceAccountAdmin"
      member  = local.eventarc_sa
    }
  2. Create the service account:

    # Create a service account to be used by GKE trigger
    resource "google_service_account" "eventarc_gke_trigger_sa" {
      account_id   = "eventarc-gke-trigger-sa"
      display_name = "Eventarc GKE Trigger Service Account"
    }
    
    # Grant permission to receive Eventarc events
    resource "google_project_iam_member" "eventreceiver" {
      project = data.google_project.project.id
      role    = "roles/eventarc.eventReceiver"
      member  = "serviceAccount:${google_service_account.eventarc_gke_trigger_sa.email}"
    }
    
    # Grant permission to subscribe to Pub/Sub topics
    resource "google_project_iam_member" "pubsubscriber" {
      project = data.google_project.project.id
      role    = "roles/pubsub.subscriber"
      member  = "serviceAccount:${google_service_account.eventarc_gke_trigger_sa.email}"
    }
    

Workflows

# Used to retrieve project information later
data "google_project" "project" {}

# Create a service account for Eventarc trigger and Workflows
resource "google_service_account" "eventarc" {
  account_id   = "eventarc-workflows-sa"
  display_name = "Eventarc Workflows Service Account"
}

# Grant permission to invoke Workflows
resource "google_project_iam_member" "workflowsinvoker" {
  project = data.google_project.project.id
  role    = "roles/workflows.invoker"
  member  = "serviceAccount:${google_service_account.eventarc.email}"
}

# Grant permission to receive events
resource "google_project_iam_member" "eventreceiver" {
  project = data.google_project.project.id
  role    = "roles/eventarc.eventReceiver"
  member  = "serviceAccount:${google_service_account.eventarc.email}"
}

# Grant permission to write logs
resource "google_project_iam_member" "logwriter" {
  project = data.google_project.project.id
  role    = "roles/logging.logWriter"
  member  = "serviceAccount:${google_service_account.eventarc.email}"
}

Enabling the Pub/Sub API automatically creates the Pub/Sub service agent. If the Pub/Sub service agent was created on or before April 8, 2021, and the service account doesn't have the Cloud Pub/Sub Service Agent role (roles/pubsub.serviceAgent), grant the Service Account Token Creator role (roles/iam.serviceAccountTokenCreator) to the service agent. For more information, see Create and grant roles to service agents.

resource "google_project_iam_member" "tokencreator" {
  project  = data.google_project.project.id
  role     = "roles/iam.serviceAccountTokenCreator"
  member   = "serviceAccount:service-${data.google_project.project.number}@gcp-sa-pubsub.iam.gserviceaccount.com"
}

Create a Cloud Storage bucket as an event provider

Use the following code to create a Cloud Storage bucket, and grant the Pub/Sub Publisher role (roles/pubsub.publisher) to the Cloud Storage service agent.

Cloud Run

# Cloud Storage bucket names must be globally unique
resource "random_id" "bucket_name_suffix" {
  byte_length = 4
}

# Create a Cloud Storage bucket
resource "google_storage_bucket" "default" {
  name          = "trigger-cloudrun-${data.google_project.project.name}-${random_id.bucket_name_suffix.hex}"
  location      = google_cloud_run_v2_service.default.location
  force_destroy = true

  uniform_bucket_level_access = true
}

# Grant the Cloud Storage service account permission to publish pub/sub topics
data "google_storage_project_service_account" "gcs_account" {}
resource "google_project_iam_member" "pubsubpublisher" {
  project = data.google_project.project.id
  role    = "roles/pubsub.publisher"
  member  = "serviceAccount:${data.google_storage_project_service_account.gcs_account.email_address}"

  depends_on = [data.google_storage_project_service_account.gcs_account]
}

GKE

# Cloud Storage bucket names must be globally unique
resource "random_id" "bucket_name_suffix" {
  byte_length = 4
}

# Create a Cloud Storage bucket
resource "google_storage_bucket" "default" {
  name          = "trigger-gke-${data.google_project.project.name}-${random_id.bucket_name_suffix.hex}"
  location      = "us-central1"
  force_destroy = true

  uniform_bucket_level_access = true
}

# Grant the Cloud Storage service account permission to publish pub/sub topics
data "google_storage_project_service_account" "gcs_account" {}
resource "google_project_iam_member" "pubsubpublisher" {
  project = data.google_project.project.id
  role    = "roles/pubsub.publisher"
  member  = "serviceAccount:${data.google_storage_project_service_account.gcs_account.email_address}"
}

Workflows

# Cloud Storage bucket names must be globally unique
resource "random_id" "bucket_name_suffix" {
  byte_length = 4
}

# Create a Cloud Storage bucket
resource "google_storage_bucket" "default" {
  name          = "trigger-workflows-${data.google_project.project.name}-${random_id.bucket_name_suffix.hex}"
  location      = google_workflows_workflow.default.region
  force_destroy = true

  uniform_bucket_level_access = true
}

# Grant the Cloud Storage service account permission to publish Pub/Sub topics
data "google_storage_project_service_account" "gcs_account" {}
resource "google_project_iam_member" "pubsubpublisher" {
  project = data.google_project.project.id
  role    = "roles/pubsub.publisher"
  member  = "serviceAccount:${data.google_storage_project_service_account.gcs_account.email_address}"
}

Create an event receiver as the event destination

Use one of the following Terraform resources to create an event receiver:

Cloud Run

Create a Cloud Run service as the event destination for the Eventarc trigger:

# Deploy Cloud Run service
resource "google_cloud_run_v2_service" "default" {
  name     = "hello-events"
  location = "us-central1"

  deletion_protection = false # set to "true" in production

  template {
    containers {
      # This container will log received events
      image = "us-docker.pkg.dev/cloudrun/container/hello"
    }
    service_account = google_service_account.default.email
  }

  depends_on = [google_project_service.run]
}

GKE

To simplify this guide, create a Google Kubernetes Engine service as the event destination outside of Terraform, before applying your Terraform configuration.

  1. If you haven't previously created a trigger in this Google Cloud project, run the following command to create the Eventarc service agent:

    gcloud beta services identity create --service eventarc.googleapis.com
  2. Create a GKE cluster:

    # Create an auto-pilot GKE cluster
    resource "google_container_cluster" "gke_cluster" {
      name     = "eventarc-cluster"
      location = "us-central1"
    
      enable_autopilot = true
    
      depends_on = [
        google_project_service.container
      ]
    }
  3. Deploy a Kubernetes service on GKE that will receive HTTP requests and log events, using the prebuilt Cloud Run image us-docker.pkg.dev/cloudrun/container/hello:

    1. Get authentication credentials to interact with the cluster:

      gcloud container clusters get-credentials eventarc-cluster \
         --region=us-central1
      
    2. Create a deployment named hello-gke:

      kubectl create deployment hello-gke \
         --image=us-docker.pkg.dev/cloudrun/container/hello
      
    3. Expose the deployment as a Kubernetes service:

      kubectl expose deployment hello-gke \
         --type ClusterIP --port 80 --target-port 8080
      
    4. Make sure the pod is running:

      kubectl get pods
      

      The output should be similar to the following:

      NAME                         READY   STATUS    RESTARTS   AGE
      hello-gke-5b6574b4db-rzzcr   1/1     Running   0          2m45s
      

      If the STATUS is Pending or ContainerCreating, the pod is still being deployed. Wait a minute for the deployment to complete, and then check the status again.

    5. Make sure the service is running:

      kubectl get svc
      

      The output should be similar to the following:

      NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
      hello-gke    ClusterIP   34.118.230.123   <none>        80/TCP    4m46s
      kubernetes   ClusterIP   34.118.224.1     <none>        443/TCP   14m
      

Workflows

Deploy a workflow that executes when an object is updated in a Cloud Storage bucket:

# Create a workflow
resource "google_workflows_workflow" "default" {
  name            = "storage-workflow-tf"
  region          = "us-central1"
  description     = "Workflow that returns information about storage events"
  service_account = google_service_account.eventarc.email

  deletion_protection = false # set to "true" in production

  # Note that $$ is needed for Terraform
  source_contents = <<EOF
  main:
    params: [event]
    steps:
      - log_event:
          call: sys.log
          args:
            text: $${event}
            severity: INFO
      - gather_data:
          assign:
            - bucket: $${event.data.bucket}
            - name: $${event.data.name}
            - message: $${"Received event " + event.type + " - " + bucket + ", " + name}
      - return_data:
          return: $${message}
  EOF

  depends_on = [
    google_project_service.workflows
  ]
}

Define an Eventarc trigger

An Eventarc trigger routes events from an event provider to an event destination. Use the google_eventarc_trigger resource to specify CloudEvents attributes in matching_criteria and filter events. For more information, follow the instructions when creating a trigger for a specific provider, event type, and destination. Events that match all the filters are sent to the destination.

Cloud Run

Create an Eventarc trigger that routes Cloud Storage events to the hello-events Cloud Run service.

# Create an Eventarc trigger, routing Cloud Storage events to Cloud Run
resource "google_eventarc_trigger" "default" {
  name     = "trigger-storage-cloudrun-tf"
  location = google_cloud_run_v2_service.default.location

  # Capture objects changed in the bucket
  matching_criteria {
    attribute = "type"
    value     = "google.cloud.storage.object.v1.finalized"
  }
  matching_criteria {
    attribute = "bucket"
    value     = google_storage_bucket.default.name
  }

  # Send events to Cloud Run
  destination {
    cloud_run_service {
      service = google_cloud_run_v2_service.default.name
      region  = google_cloud_run_v2_service.default.location
    }
  }

  # Specify a single delivery attempt with no retries
  retry_policy {
    max_attempts = 1
  }

  service_account = google_service_account.default.email
  depends_on = [
    google_project_service.eventarc,
    google_storage_bucket.default,
    google_project_iam_member.pubsubpublisher
  ]
}

GKE

Create an Eventarc trigger that routes Cloud Storage events to the hello-gke GKE service.

# Create an Eventarc trigger, routing Storage events to GKE
resource "google_eventarc_trigger" "default" {
  name     = "trigger-storage-gke-tf"
  location = "us-central1"

  # Capture objects changed in the bucket
  matching_criteria {
    attribute = "type"
    value     = "google.cloud.storage.object.v1.finalized"
  }
  matching_criteria {
    attribute = "bucket"
    value     = google_storage_bucket.default.name
  }

  # Send events to GKE service
  destination {
    gke {
      cluster   = "eventarc-cluster"
      location  = "us-central1"
      namespace = "default"
      path      = "/"
      service   = "hello-gke"
    }
  }

  service_account = google_service_account.eventarc_gke_trigger_sa.email
}

Workflows

Create an Eventarc trigger that routes Cloud Storage events to the workflow named storage-workflow-tf.

# Create an Eventarc trigger, routing Cloud Storage events to Workflows
resource "google_eventarc_trigger" "default" {
  name     = "trigger-storage-workflows-tf"
  location = google_workflows_workflow.default.region

  # Capture objects changed in the bucket
  matching_criteria {
    attribute = "type"
    value     = "google.cloud.storage.object.v1.finalized"
  }
  matching_criteria {
    attribute = "bucket"
    value     = google_storage_bucket.default.name
  }

  # Send events to Workflows
  destination {
    workflow = google_workflows_workflow.default.id
  }

  service_account = google_service_account.eventarc.email

  depends_on = [
    google_project_service.eventarc,
    google_project_service.workflows,
  ]
}

Apply Terraform

Use the Terraform CLI to provision infrastructure based on the configuration file.

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

  1. Initialize Terraform. You only need to do this once per directory.

    terraform init

    (Optional) To use the latest Google provider version, include the -upgrade option:

    terraform init -upgrade
  2. Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:

    terraform plan

    Make corrections to the configuration as necessary.

  3. Apply the Terraform configuration by running the following command and entering yes at the prompt:

    terraform apply

    Wait until Terraform displays the "Apply complete!" message.

Verify the creation of resources

Cloud Run

  1. Confirm that the service has been created:

    gcloud run services list --region us-central1
    
  2. Confirm that the trigger has been created:

    gcloud eventarc triggers list --location us-central1
    

    The output should be similar to the following:

    NAME: trigger-storage-cloudrun-tf
    TYPE: google.cloud.storage.object.v1.finalized
    DESTINATION: Cloud Run service: hello-events
    ACTIVE: Yes
    LOCATION: us-central1
    

GKE

  1. Confirm that the service has been created:

    kubectl get service hello-gke
    
  2. Confirm that the trigger has been created:

    gcloud eventarc triggers list --location us-central1
    

    The output should be similar to the following:

    NAME: trigger-storage-gke-tf
    TYPE: google.cloud.storage.object.v1.finalized
    DESTINATION: GKE: hello-gke
    ACTIVE: Yes
    LOCATION: us-central1
    

Workflows

  1. Confirm that the workflow has been created:

    gcloud workflows list --location us-central1
    
  2. Confirm that the Eventarc trigger has been created:

    gcloud eventarc triggers list --location us-central1
    

    The output should be similar to the following:

    NAME: trigger-storage-workflows-tf
    TYPE: google.cloud.storage.object.v1.finalized
    DESTINATION: Workflows: storage-workflow-tf
    ACTIVE: Yes
    LOCATION: us-central1
    

Generate and view an event

You can generate an event and confirm that the Eventarc trigger is working as expected.

  1. Retrieve the name of the Cloud Storage bucket that you previously created:

    gcloud storage ls
    
  2. Upload a text file to the Cloud Storage bucket:

    echo "Hello World" > random.txt
    gcloud storage cp random.txt gs://BUCKET_NAME/random.txt
    

    Replace BUCKET_NAME with the name of the Cloud Storage bucket that you retrieved in the previous step.

    The upload generates an event, and the event receiver service logs the event's message.

  3. Verify that an event was received:

    Cloud Run

    1. Filter the log entries created by your service:

      gcloud logging read 'jsonPayload.message: "Received event of type google.cloud.storage.object.v1.finalized."'
      
    2. Look for a log entry similar to the following:

      Received event of type google.cloud.storage.object.v1.finalized.
      Event data: { "kind": "storage#object", "id": "trigger-cloudrun-BUCKET_NAME/random.txt", ...}
      

    GKE

    1. Find the pod ID:

      POD_NAME=$(kubectl get pods -o custom-columns=":metadata.name" --no-headers)
      

      This command uses the formatted output of kubectl.

    2. View the pod's logs:

      kubectl logs $POD_NAME
      
    3. Look for a log entry similar to the following:

      {"severity":"INFO","eventType":"google.cloud.storage.object.v1.finalized","message":
      "Received event of type google.cloud.storage.object.v1.finalized. Event data: ...}
      

    Workflows

    1. Verify that a workflow execution was triggered by listing the last five executions:

      gcloud workflows executions list storage-workflow-tf --limit=5
      

      The output should include a list of executions with a NAME, STATE, START_TIME, and END_TIME.

    2. Get the result of the most recent execution:

      EXECUTION_NAME=$(gcloud workflows executions list storage-workflow-tf --limit=1 --format "value(name)")
      gcloud workflows executions describe $EXECUTION_NAME
      
    3. Confirm that the output is similar to the following:

      ...
      result: '"Received event google.cloud.storage.object.v1.finalized - BUCKET_NAME, random.txt"'
      startTime: '2024-12-13T17:23:50.451316533Z'
      state: SUCCEEDED
      ...
      

Clean up

Remove resources previously applied with your Terraform configuration by running the following command and entering yes at the prompt:

terraform destroy

You can also delete your Google Cloud project to avoid incurring charges. Deleting your Google Cloud project stops billing for all the resources used within that project.

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.
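Alternatively, you can delete the project from Cloud Shell with the gcloud CLI:

```shell
# Replace PROJECT_ID with the ID of the project to delete.
# The command prompts for confirmation before shutting the project down.
gcloud projects delete PROJECT_ID
```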

What's next