Customize and scale reinforcement learning with verl on GKE

This tutorial shows you how to set up a distributed training environment for reinforcement learning on Google Kubernetes Engine (GKE). You use Ray and the verl (Volcano Engine Reinforcement Learning) framework to prepare a distributed training environment for fine-tuning the Qwen2.5-32B-Instruct model.

This tutorial focuses on a Group Relative Policy Optimization (GRPO) training pipeline on GKE with Ray and verl. GRPO is a reinforcement learning algorithm designed to improve a model's reasoning capabilities. This memory-efficient algorithm simplifies the reinforcement learning (RL) process by removing the Critic, or value model, and using group-relative computations instead.

This tutorial is a good starting point if you need to set up a distributed training environment where data, model weights, and the training machines are decoupled for efficiency.

Background

The following sections provide a brief overview of the concepts used in this tutorial.

Reinforcement learning

RL teaches a model through experience, exploration, and feedback rather than static imitation. While pre-training teaches a model what to say, RL, and in particular Reinforcement Learning from Human Feedback (RLHF), teaches it how to be helpful, safe, and logical. RL serves as the bridge between a base model and a model fine-tuned for specialized use cases.

For more information, see What is reinforcement learning?

Volcano Engine Reinforcement Learning (verl)

verl is a high-performance framework designed to handle the complex compute and memory patterns of LLM-based RL.

For more information, see verl.

Group Relative Policy Optimization (GRPO)

GRPO, an algorithm popularized by DeepSeek, offers a memory-efficient alternative to LLM alignment with Proximal Policy Optimization (PPO) by removing the Critic model. Instead of a Critic network, GRPO generates a group of responses to the same prompt and uses the group's average reward as the baseline.

For more information, see GRPO.
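
As a concrete illustration of the group-relative baseline, the following Python sketch computes advantages for one group of sampled responses. This is illustrative only, not verl's implementation; the function name, reward values, and group size are made up for the example:

```python
# Illustrative sketch of GRPO's group-relative advantage (not verl's code).
# For one prompt, sample G responses, score each with a reward function,
# then normalize the rewards within the group: no Critic model is needed.

def group_relative_advantages(rewards, eps=1e-6):
    """Compute A_i = (r_i - mean(r)) / (std(r) + eps) for one group."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    return [(r - mean) / (var ** 0.5 + eps) for r in rewards]

# Example: 4 sampled responses to the same prompt, scored 1.0 (correct)
# or 0.0 (incorrect). Responses above the group mean get a positive
# advantage; the rest get a negative one.
advantages = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```

In the full GRPO objective, these per-response advantages then weight the policy-gradient update for every token of the corresponding response.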

Objectives

This tutorial shows you how to set up reinforcement learning on GKE with verl, by completing the following steps:

  1. Set up a GKE cluster with B200 or H200 GPUs.
  2. Configure KubeRay to manage a distributed Ray cluster.
  3. Use Cloud Storage FUSE to mount a Cloud Storage bucket on all nodes.
  4. Run a GRPO training job that uses verl to align the Qwen2.5-32B-Instruct model with the GSM8K dataset.

Before you begin

  • Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  • Install the Google Cloud CLI.

  • If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

  • To initialize the gcloud CLI, run the following command:

    gcloud init
  • Create or select a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role; you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
    • Create a Google Cloud project:

      gcloud projects create PROJECT_ID

      Replace PROJECT_ID with a name for the Google Cloud project that you're creating.

    • Select the Google Cloud project that you created:

      gcloud config set project PROJECT_ID

      Replace PROJECT_ID with your Google Cloud project name.

  • Verify that billing is enabled for your Google Cloud project.

  • Enable the required APIs:

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

    gcloud services enable container.googleapis.com storage.googleapis.com compute.googleapis.com
  • Grant roles to your user account. Run the following command once for each of the following IAM roles: roles/container.admin, roles/iam.serviceAccountAdmin, roles/storage.admin

    gcloud projects add-iam-policy-binding PROJECT_ID --member="user:USER_IDENTIFIER" --role=ROLE

    Replace the following:

    • PROJECT_ID: your project ID.
    • USER_IDENTIFIER: the identifier for your user account. For example, myemail@example.com.
    • ROLE: the IAM role that you grant to your user account.
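
The role bindings above can also be granted in a single loop. This is only a convenience sketch of the same command, and it assumes PROJECT_ID and USER_IDENTIFIER are already set as shell variables:

```shell
# Grant each required IAM role to your user account in turn.
for ROLE in roles/container.admin roles/iam.serviceAccountAdmin roles/storage.admin; do
  gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
      --member="user:${USER_IDENTIFIER}" \
      --role="${ROLE}"
done
```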

Set up your environment

In this tutorial, you use Cloud Shell.

  1. Open the Google Cloud console.

  2. At the top of the Google Cloud console window, click the Activate Cloud Shell button.

  3. Set the following environment variables:

    export PROJECT_ID=$(gcloud config get project)
    export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT_ID} --format="value(projectNumber)")
    export GPU_TYPE=GPU_TYPE
    export CONTROL_PLANE_LOCATION=CONTROL_PLANE_LOCATION
    export NODE_LOCATION=NODE_LOCATION
    export CLUSTER_NAME=CLUSTER_NAME
    export KSA_NAME=KSA_NAME
    export GS_BUCKET=BUCKET_NAME-${PROJECT_ID}
    export NAMESPACE=default
    export HF_TOKEN=YOUR_HUGGING_FACE_TOKEN
    export MACHINE_TYPE=MACHINE_TYPE
    export GKE_VERSION=GKE_VERSION
    

    Replace the following values:

    • CONTROL_PLANE_LOCATION: the Compute Engine region for the GKE cluster control plane.
    • GPU_TYPE: the accelerator that you reserved in your Compute Engine capacity reservation. Must be one of the following values:
      • nvidia-b200: NVIDIA B200 (180GB)
      • nvidia-h200-141gb: NVIDIA H200 (141GB)
    • NODE_LOCATION: the zone for the GKE nodes. Choose a zone where NVIDIA B200 or H200 GPUs are available.
    • CLUSTER_NAME: the name of your GKE cluster.
    • KSA_NAME: the name of your Kubernetes ServiceAccount.
    • BUCKET_NAME: the base name for your Cloud Storage bucket. You don't need to specify the gs:// prefix.
    • YOUR_HUGGING_FACE_TOKEN: your Hugging Face token for model access.
    • MACHINE_TYPE: the machine type to use. Valid options are c2-standard-8 or c2-standard-16.
    • GKE_VERSION: the GKE version to use:
      • For NVIDIA B200 (180 GB) GPUs, use 1.32.2-gke.1422000 or later.
      • For NVIDIA H200 (141 GB) GPUs, use 1.31.4-gke.1183000 or later.
  4. Create the following environment variables for networking:

    export GVNIC_NETWORK_PREFIX="GVNIC-NAME"
    export RDMA_NETWORK_PREFIX="RDMA-NAME"
    

    Replace the following values:

    • GVNIC-NAME: the prefix for the gVNIC network name. You can use any prefix that you want.
    • RDMA-NAME: the prefix for the remote direct memory access (RDMA) network. You can use any prefix that you want.

Set up the infrastructure

In this section, you create the RDMA networks and the GKE cluster.

Create the RDMA networks and subnets

  1. Create a VPC network for the gVNIC interface:

    gcloud compute networks create ${GVNIC_NETWORK_PREFIX}-net \
        --subnet-mode=custom \
        --project=${PROJECT_ID}
    gcloud compute networks subnets create ${GVNIC_NETWORK_PREFIX}-sub \
        --network=${GVNIC_NETWORK_PREFIX}-net \
        --region=${CONTROL_PLANE_LOCATION} \
        --range=192.168.0.0/24
    gcloud compute firewall-rules create ${GVNIC_NETWORK_PREFIX}-internal \
        --network=${GVNIC_NETWORK_PREFIX}-net \
        --action=ALLOW \
        --rules=tcp:0-65535,udp:0-65535,icmp \
        --source-ranges=192.168.0.0/16
    
  2. Create the VPC network and subnets for RDMA, with 8 subnets for 8 GPUs:

    gcloud beta compute networks create ${RDMA_NETWORK_PREFIX}-net \
        --network-profile=${NODE_LOCATION}-vpc-roce \
        --subnet-mode=custom
    
    for N in $(seq 0 7); do
      gcloud compute networks subnets create ${RDMA_NETWORK_PREFIX}-sub-$N \
        --network=${RDMA_NETWORK_PREFIX}-net \
        --region=${CONTROL_PLANE_LOCATION} \
        --range=192.168.$((N+1)).0/24 &
    done
    wait
    
  3. Clone the sample repository:

    git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples.git
    cd kubernetes-engine-samples
    
  4. Go to the working directory:

    cd ai-ml/verl-on-gke
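
Optionally, before creating the cluster, you can confirm that the gVNIC network and all eight RDMA subnets were created. This check is not part of the original steps; it assumes the environment variables from the previous section are still set:

```shell
# List the subnets of the gVNIC and RDMA networks; expect one gVNIC
# subnet and eight RDMA subnets (sub-0 through sub-7).
gcloud compute networks subnets list \
    --filter="network ~ ${GVNIC_NETWORK_PREFIX}-net OR network ~ ${RDMA_NETWORK_PREFIX}-net" \
    --format="table(name,region,ipCidrRange)"
```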
    

Create a GKE cluster

You can deploy verl on a GKE Autopilot or Standard cluster. We recommend an Autopilot cluster for a fully managed Kubernetes experience. To choose the GKE mode of operation that's the best fit for your workloads, see Choose a GKE mode of operation.

Autopilot

  1. Create an Autopilot cluster:

    gcloud container clusters create-auto ${CLUSTER_NAME} \
        --location=${CONTROL_PLANE_LOCATION} \
        --enable-multi-networking  \
        --enable-ray-operator
    
  2. Get credentials for your cluster:

    gcloud container clusters get-credentials ${CLUSTER_NAME} \
        --location=${CONTROL_PLANE_LOCATION}
    
  3. Install the NCCL RDMA installer for Autopilot:

    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/refs/heads/master/gpudirect-rdma/nccl-rdma-installer-autopilot.yaml
    

Standard

  1. Create a Standard cluster:

    gcloud container clusters create ${CLUSTER_NAME} \
        --location=${CONTROL_PLANE_LOCATION} \
        --cluster-version=${GKE_VERSION} \
        --enable-dataplane-v2 \
        --enable-ip-alias \
        --enable-multi-networking \
        --addons=RayOperator,GcsFuseCsiDriver \
        --machine-type=${MACHINE_TYPE} \
        --num-nodes=1 \
        --min-nodes=1 \
        --max-nodes=5 \
        --enable-autoscaling
    
  2. Get credentials for your cluster:

    gcloud container clusters get-credentials ${CLUSTER_NAME} --location=${CONTROL_PLANE_LOCATION}
    
  3. Create the GPU node pool (using Spot instances for cost efficiency):

    gcloud container node-pools create gpu-pool \
        --cluster=${CLUSTER_NAME} \
        --location=${CONTROL_PLANE_LOCATION} \
        --node-locations=${NODE_LOCATION} \
        --machine-type=${MACHINE_TYPE} \
        --accelerator=type=${GPU_TYPE},count=8,gpu-driver-version=DEFAULT \
        --spot \
        --enable-autoscaling \
        --num-nodes=0 \
        --total-max-nodes=10 \
        --additional-node-network=network=${GVNIC_NETWORK_PREFIX}-net,subnetwork=${GVNIC_NETWORK_PREFIX}-sub \
        --additional-node-network=network=${RDMA_NETWORK_PREFIX}-net,subnetwork=${RDMA_NETWORK_PREFIX}-sub-0 \
        --additional-node-network=network=${RDMA_NETWORK_PREFIX}-net,subnetwork=${RDMA_NETWORK_PREFIX}-sub-1 \
        --additional-node-network=network=${RDMA_NETWORK_PREFIX}-net,subnetwork=${RDMA_NETWORK_PREFIX}-sub-2 \
        --additional-node-network=network=${RDMA_NETWORK_PREFIX}-net,subnetwork=${RDMA_NETWORK_PREFIX}-sub-3 \
        --additional-node-network=network=${RDMA_NETWORK_PREFIX}-net,subnetwork=${RDMA_NETWORK_PREFIX}-sub-4 \
        --additional-node-network=network=${RDMA_NETWORK_PREFIX}-net,subnetwork=${RDMA_NETWORK_PREFIX}-sub-5 \
        --additional-node-network=network=${RDMA_NETWORK_PREFIX}-net,subnetwork=${RDMA_NETWORK_PREFIX}-sub-6 \
        --additional-node-network=network=${RDMA_NETWORK_PREFIX}-net,subnetwork=${RDMA_NETWORK_PREFIX}-sub-7
    
  4. Install the NCCL RDMA installer used for Standard clusters:

    kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/refs/heads/master/gpudirect-rdma/nccl-rdma-installer.yaml
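
For either mode, you can then check that the cluster nodes are ready and which accelerators they expose. When autoscaling from zero, GPU nodes only appear after a workload requests them:

```shell
# Show each node together with its accelerator label, if any.
kubectl get nodes -L cloud.google.com/gke-accelerator
```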
    

Configure the network mapping

  1. Review the network-mapping.yaml manifest:

    # Copyright 2026 Google LLC. All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: gvnic-1
    spec:
      vpc: ${GVNIC_NETWORK_PREFIX}-net
      vpcSubnet: ${GVNIC_NETWORK_PREFIX}-sub
      deviceMode: NetDevice
    ---
    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: gvnic-1
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: gvnic-1
    ---
    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: rdma-0
    spec:
      vpc: ${RDMA_NETWORK_PREFIX}-net
      vpcSubnet: ${RDMA_NETWORK_PREFIX}-sub-0
      deviceMode: RDMA
    ---
    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: rdma-0
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: rdma-0
    ---
    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: rdma-1
    spec:
      vpc: ${RDMA_NETWORK_PREFIX}-net
      vpcSubnet: ${RDMA_NETWORK_PREFIX}-sub-1
      deviceMode: RDMA
    ---
    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: rdma-1
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: rdma-1
    ---
    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: rdma-2
    spec:
      vpc: ${RDMA_NETWORK_PREFIX}-net
      vpcSubnet: ${RDMA_NETWORK_PREFIX}-sub-2
      deviceMode: RDMA
    ---
    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: rdma-2
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: rdma-2
    ---
    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: rdma-3
    spec:
      vpc: ${RDMA_NETWORK_PREFIX}-net
      vpcSubnet: ${RDMA_NETWORK_PREFIX}-sub-3
      deviceMode: RDMA
    ---
    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: rdma-3
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: rdma-3
    ---
    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: rdma-4
    spec:
      vpc: ${RDMA_NETWORK_PREFIX}-net
      vpcSubnet: ${RDMA_NETWORK_PREFIX}-sub-4
      deviceMode: RDMA
    ---
    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: rdma-4
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: rdma-4
    ---
    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: rdma-5
    spec:
      vpc: ${RDMA_NETWORK_PREFIX}-net
      vpcSubnet: ${RDMA_NETWORK_PREFIX}-sub-5
      deviceMode: RDMA
    ---
    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: rdma-5
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: rdma-5
    ---
    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: rdma-6
    spec:
      vpc: ${RDMA_NETWORK_PREFIX}-net
      vpcSubnet: ${RDMA_NETWORK_PREFIX}-sub-6
      deviceMode: RDMA
    ---
    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: rdma-6
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: rdma-6
    ---
    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: rdma-7
    spec:
      vpc: ${RDMA_NETWORK_PREFIX}-net
      vpcSubnet: ${RDMA_NETWORK_PREFIX}-sub-7
      deviceMode: RDMA
    ---
    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: rdma-7
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: rdma-7
    
  2. Apply the manifest:

    kubectl apply -f network-mapping.yaml
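
You can confirm that the objects from the manifest were created; both lists should contain gvnic-1 and rdma-0 through rdma-7:

```shell
# The GKENetworkParamSet and Network objects created by network-mapping.yaml.
kubectl get gkenetworkparamsets.networking.gke.io
kubectl get networks.networking.gke.io
```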
    

Set up data and storage

  1. Create a Cloud Storage bucket:

    gcloud storage buckets create gs://${GS_BUCKET} --location=${CONTROL_PLANE_LOCATION} --enable-hierarchical-namespace --uniform-bucket-level-access
    
  2. Create a Kubernetes ServiceAccount (KSA) and bind it to the bucket:

    kubectl create serviceaccount ${KSA_NAME} --namespace ${NAMESPACE}
    
    gcloud storage buckets add-iam-policy-binding gs://${GS_BUCKET} \
        --member "principal://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${PROJECT_ID}.svc.id.goog/subject/ns/${NAMESPACE}/sa/${KSA_NAME}" \
        --role "roles/storage.objectUser"
    
  3. Create a Secret for Hugging Face:

    kubectl create secret generic hf-secret --from-literal=hf_api_token=${HF_TOKEN}
    
  4. Review the gcsfuse-storage.yaml manifest:

    # Copyright 2026 Google LLC. All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: training-bucket-pv
    spec:
      accessModes:
      -   ReadWriteMany
      capacity:
        storage: 768Gi
      persistentVolumeReclaimPolicy: Delete
      storageClassName: gcsfuse-sc
      mountOptions:
      -   implicit-dirs
      -   metadata-cache:negative-ttl-secs:0
      -   metadata-cache:ttl-secs:0
      -   metadata-cache:stat-cache-max-size-mb:-1
      -   metadata-cache:type-cache-max-size-mb:-1
      -   file-cache:max-size-mb:-1
      -   file-cache:cache-file-for-range-read:true
      -   file-cache:enable-parallel-downloads:true
      -   read_ahead_kb=1024
      -   write:enable-streaming-writes:true
      -   write:global-max-blocks:200000
      csi:
        driver: gcsfuse.csi.storage.gke.io
        volumeHandle: ${GS_BUCKET}
        volumeAttributes:
          skipCSIBucketAccessCheck: "true"
          gcsfuseMetadataPrefetchOnMount: "true"
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: training-bucket-pvc
    spec:
      accessModes:
      -   ReadWriteMany
      resources:
        requests:
          storage: 768Gi
      storageClassName: gcsfuse-sc
    
  5. Apply the manifest:

    kubectl apply -f gcsfuse-storage.yaml
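
To check the storage setup, verify that the PersistentVolumeClaim bound to the PersistentVolume:

```shell
# STATUS should be Bound for both objects.
kubectl get pv training-bucket-pv
kubectl get pvc training-bucket-pvc --namespace ${NAMESPACE}
```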
    

Prepare the model and data

You can run these commands locally or in a GKE Pod to populate the bucket.

  1. Clone the verl repository:

    git clone https://github.com/volcengine/verl.git
    
  2. Download the Qwen2.5-32B-Instruct model by using the Hugging Face CLI:

    huggingface-cli download Qwen/Qwen2.5-32B-Instruct --local-dir Qwen2.5-32B-Instruct
    
  3. Preprocess the GSM8K dataset:

    python verl/examples/data_preprocess/gsm8k.py --local_save_dir ~/data/gsm8k
    
  4. Upload the model, data, and verl code to your Cloud Storage bucket:

    gcloud storage cp --recursive verl gs://${GS_BUCKET}/verl
    gcloud storage cp --recursive Qwen2.5-32B-Instruct gs://${GS_BUCKET}/Qwen2.5-32B-Instruct
    gcloud storage cp --recursive ~/data/gsm8k/* gs://${GS_BUCKET}
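
After the uploads finish, you can list the top level of the bucket to confirm that the model weights, the GSM8K files, and the verl source tree are present:

```shell
# Expect the Qwen2.5-32B-Instruct/ and verl/ prefixes plus the dataset files.
gcloud storage ls gs://${GS_BUCKET}
```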
    

Deploy the RayCluster custom resource

Deploy the RayCluster custom resource, which typically consists of a head Pod and multiple worker Pods.

Autopilot

  1. Deploy the RayCluster. Save the following code to ray-cluster-auto.yaml:

    # Copyright 2026 Google LLC. All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    apiVersion: ray.io/v1
    kind: RayCluster
    metadata:
      name: b200-ray-cluster
      annotations:
    spec:
      rayVersion: '2.47.0'
      headGroupSpec:
        rayStartParams:
          dashboard-host: '0.0.0.0'
        template:
          metadata:
            annotations:
              gke-gcsfuse/volumes: "true"
          spec:
            serviceAccountName: ${KSA_NAME}
            nodeSelector:
              cloud.google.com/gke-spot: "true"
              cloud.google.com/machine-family: "c2"
              cloud.google.com/compute-class: Performance
            containers:
            - name: ray-head
              image: verlai/verl:vllm011.latest 
              ports:
                - containerPort: 6379
                  name: gcs-server
                - containerPort: 8265
                  name: dashboard
                - containerPort: 10001
                  name: client
              resources:
                limits:
                  cpu: "12"
                  memory: "32G"
                  ephemeral-storage: "9Gi"
                requests:
                  cpu: "12"
                  memory: "32G"
                  ephemeral-storage: "9Gi"
              volumeMounts:
                - mountPath: /tmp/ray
                  name: ray-logs
                - name: training-bucket-vol
                  mountPath: /data
            volumes:
              - name: ray-logs
                emptyDir: {}
              - name: training-bucket-vol
                persistentVolumeClaim:
                  claimName: training-bucket-pvc
      workerGroupSpecs:
      - replicas: 2
        minReplicas: 2
        maxReplicas: 2
        groupName: gpu-group
        rayStartParams:
          num-cpus: "220"
        template:
          metadata:
            annotations:
              gke-gcsfuse/volumes: "true"
              networking.gke.io/default-interface: 'eth0'
              networking.gke.io/interfaces: |
                [
                  {"interfaceName":"eth0","network":"default"},
                  {"interfaceName":"eth1","network":"gvnic-1"},
                  {"interfaceName":"eth2","network":"rdma-0"},
                  {"interfaceName":"eth3","network":"rdma-1"},
                  {"interfaceName":"eth4","network":"rdma-2"},
                  {"interfaceName":"eth5","network":"rdma-3"},
                  {"interfaceName":"eth6","network":"rdma-4"},
                  {"interfaceName":"eth7","network":"rdma-5"},
                  {"interfaceName":"eth8","network":"rdma-6"},
                  {"interfaceName":"eth9","network":"rdma-7"}
                ]
          spec:
            initContainers:
            - name: verl-setup
              image: verlai/verl:vllm011.latest
              command: ["/bin/bash", "-c"]
              args:
                - |
                  echo "Performing local editable install..."
                  cd /data/verl && pip3 install --no-deps -e .
              volumeMounts:
              - name: training-bucket-vol
                mountPath: /data
            serviceAccountName: ${KSA_NAME}
            nodeSelector:
              cloud.google.com/gke-accelerator: ${GPU_TYPE}
              cloud.google.com/gke-accelerator-count: "8"
              cloud.google.com/gke-spot: "true"
              cloud.google.com/compute-class: Performance
            tolerations:
              - key: "nvidia.com/gpu"
                operator: "Exists"
                effect: "NoSchedule"
            containers:
            - name: ray-worker
              image: verlai/verl:vllm011.latest
              env:
               - name: LD_LIBRARY_PATH
                 value: /usr/local/nvidia/lib64
              resources:
                limits:
                  cpu: "220"
                  memory: "2800Gi"
                  nvidia.com/gpu: "8"
                  ephemeral-storage: "1000Gi"
                requests:
                  cpu: "220"
                  memory: "2800Gi"
                  nvidia.com/gpu: "8"
                  ephemeral-storage: "1000Gi"
              volumeMounts:
              - name: nvidia
                mountPath: /usr/local/nvidia
                readOnly: true
              - name: gib
                mountPath: /usr/local/gib
                readOnly: true
              - name: shared-memory
                mountPath: /dev/shm
              - name: ray-tmp-storage
                mountPath: /tmp
              - name: training-bucket-vol
                mountPath: /data
            volumes:
            - name: gib
              hostPath:
                path: /home/kubernetes/bin/gib
            - name: nvidia
              hostPath:
                path: /home/kubernetes/bin/nvidia
            - name: lib64
              hostPath:
                path: /lib64
            - name: shared-memory
              emptyDir:
                medium: "Memory"
                sizeLimit: 250Gi 
            - name: sys
              hostPath:
                path: /sys
            - name: proc-sys
              hostPath:
                path: /proc/sys
            - name: ray-tmp-storage
              emptyDir: {}
            - name: training-bucket-vol
              persistentVolumeClaim:
                claimName: training-bucket-pvc
    
  2. Apply the RayCluster:

    kubectl apply -f ray-cluster-auto.yaml
    

Standard

  1. Deploy the RayCluster. Save the following code to ray-cluster.yaml:

    # Copyright 2026 Google LLC. All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: ray.io/v1
    kind: RayCluster
    metadata:
      name: b200-ray-cluster
      annotations:
    spec:
      rayVersion: '2.47.0'
      headGroupSpec:
        rayStartParams:
          dashboard-host: '0.0.0.0'
        template:
          metadata:
            annotations:
              gke-gcsfuse/volumes: "true"
          spec:
            serviceAccountName: ${KSA_NAME}
            nodeSelector:
              cloud.google.com/gke-nodepool: "default-pool"
            containers:
            - name: ray-head
              image: verlai/verl:vllm011.latest 
              ports:
                - containerPort: 6379
                  name: gcs-server
                - containerPort: 8265
                  name: dashboard
                - containerPort: 10001
                  name: client
              resources:
                limits:
                  cpu: "12"
                  memory: "32G"
                  ephemeral-storage: "9Gi"
                requests:
                  cpu: "12"
                  memory: "32G"
                  ephemeral-storage: "9Gi"
              volumeMounts:
                - mountPath: /tmp/ray
                  name: ray-logs
                - name: training-bucket-vol
                  mountPath: /data
            volumes:
              - name: ray-logs
                emptyDir: {}
              - name: training-bucket-vol
                persistentVolumeClaim:
                  claimName: training-bucket-pvc
      workerGroupSpecs:
      - replicas: 2
        minReplicas: 2
        maxReplicas: 2
        groupName: gpu-group
        rayStartParams:
          num-cpus: "220"
        template:
          metadata:
            annotations:
              gke-gcsfuse/volumes: "true"
              networking.gke.io/default-interface: 'eth0'
              networking.gke.io/interfaces: |
                [
                  {"interfaceName":"eth0","network":"default"},
                  {"interfaceName":"eth1","network":"gvnic-1"},
                  {"interfaceName":"eth2","network":"rdma-0"},
                  {"interfaceName":"eth3","network":"rdma-1"},
                  {"interfaceName":"eth4","network":"rdma-2"},
                  {"interfaceName":"eth5","network":"rdma-3"},
                  {"interfaceName":"eth6","network":"rdma-4"},
                  {"interfaceName":"eth7","network":"rdma-5"},
                  {"interfaceName":"eth8","network":"rdma-6"},
                  {"interfaceName":"eth9","network":"rdma-7"}
                ]
          spec:
            initContainers:
            - name: verl-setup
              image: verlai/verl:vllm011.latest
              command: ["/bin/bash", "-c"]
              args:
                - |
                  echo "Performing local editable install..."
                  cd /data/verl && pip3 install --no-deps -e .
              volumeMounts:
              - name: training-bucket-vol
                mountPath: /data
            serviceAccountName: ${KSA_NAME}
            nodeSelector:
              cloud.google.com/gke-accelerator: ${GPU_TYPE}
            tolerations:
              - key: "nvidia.com/gpu"
                operator: "Exists"
                effect: "NoSchedule"
            containers:
            - name: ray-worker
              image: verlai/verl:vllm011.latest
              env:
               - name: LD_LIBRARY_PATH
                 value: /usr/local/nvidia/lib64
              resources:
                limits:
                  cpu: "220"
                  memory: "2800Gi"
                  nvidia.com/gpu: "8"
                  ephemeral-storage: "1000Gi"
                requests:
                  cpu: "220"
                  memory: "2800Gi"
                  nvidia.com/gpu: "8"
                  ephemeral-storage: "1000Gi"
              volumeMounts:
              - name: nvidia
                mountPath: /usr/local/nvidia
              - name: gib
                mountPath: /usr/local/gib
              - name: shared-memory
                mountPath: /dev/shm
              - name: ray-tmp-storage
                mountPath: /tmp
              - name: training-bucket-vol
                mountPath: /data
            volumes:
            - name: gib
              hostPath:
                path: /home/kubernetes/bin/gib
            - name: nvidia
              hostPath:
                path: /home/kubernetes/bin/nvidia
            - name: lib64
              hostPath:
                path: /lib64
            - name: shared-memory
              emptyDir:
                medium: "Memory"
                sizeLimit: 250Gi 
            - name: sys
              hostPath:
                path: /sys
            - name: proc-sys
              hostPath:
                path: /proc/sys
            - name: ray-tmp-storage
              emptyDir: {}
            - name: training-bucket-vol
              persistentVolumeClaim:
                claimName: training-bucket-pvc
    
  2. Apply the RayCluster:

    kubectl apply -f ray-cluster.yaml
    

Launch the GRPO job

  1. Set up port forwarding to the Ray dashboard service:

    kubectl port-forward svc/b200-ray-cluster-head-svc 8265:8265
    
  2. Inspect the runtime-env.yaml manifest:

    # Copyright 2026 Google LLC. All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    py_modules: ["."]
    working_dir: "."
    py_executable: "uv run"
    setup_hook: runtime_env.uv_runtime_env_hook.hook
    env_vars:
      PYTHONPATH: "/data/verl"
      LD_LIBRARY_PATH: "/usr/local/nvidia/lib64"
      NCCL_DEBUG: "INFO"
      NUM_WORKERS: "2"
      CPUS_PER_WORKER: "192"
      GPUS_PER_WORKER: "8"
      NCCL_NET_PLUGIN: "/usr/local/gib/lib64/libnccl-net_internal.so"
      NCCL_CROSS_NIC: "0"
      NCCL_NET_GDR_LEVEL: "PIX"
      NCCL_P2P_NET_CHUNKSIZE: "131072"
      NCCL_NVLS_CHUNKSIZE: "524288"
      NCCL_IB_ADAPTIVE_ROUTING: "1"
      NCCL_IB_QPS_PER_CONNECTION: "4"
      NCCL_IB_TC: "52"
      NCCL_IB_FIFO_TC: "84"
      NCCL_TUNER_CONFIG_PATH: "/usr/local/gib/configs/tuner_config_a4.txtpb" 
      HF_HOME: "/data/huggingface_cache"
      GLOO_SOCKET_IFNAME: "eth0" 
    pip:
      packages:
        - torch 
        - torchvision
    

    If you use H200 GPUs, change NCCL_TUNER_CONFIG_PATH to /usr/local/gib/configs/tuner_config_a3u.txtpb.

    This file is used by the Ray client. You don't need to apply this manifest to the cluster.
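    A single malformed key (for example, a stray quote after working_dir) makes Ray reject the runtime environment at submission time, so it can save a failed job to parse the file locally first. The following optional sanity check is not part of the tutorial's toolchain; it assumes PyYAML is installed, and it inlines a trimmed-down copy of the manifest purely for illustration:

    ```python
    # Optional local sanity check: confirm the runtime-env YAML parses
    # cleanly and that env_vars values are strings, as Ray requires.
    # Assumes PyYAML is installed (pip install pyyaml).
    import yaml

    # Trimmed-down, inlined copy of the runtime-env.yaml shown above.
    RUNTIME_ENV = """
    py_modules: ["."]
    working_dir: "."
    py_executable: "uv run"
    env_vars:
      PYTHONPATH: "/data/verl"
      NCCL_DEBUG: "INFO"
    """

    env = yaml.safe_load(RUNTIME_ENV)

    # Ray expects every env_vars value to be a string.
    assert all(isinstance(v, str) for v in env["env_vars"].values())
    print(sorted(env))
    ```

    Running `python3 -c "import yaml, sys; yaml.safe_load(open('runtime-env.yaml'))"` against the real file is the equivalent one-liner.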

  3. Submit the job by using ray job submit:

    ray job submit \
    --address "http://localhost:8265" \
    --runtime-env runtime-env.yaml \
    -- \
    bash -c "
        cd /data/verl && PYTHONUNBUFFERED=1 python3 -m verl.trainer.main_ppo \
        data.train_files=/data/gsm8k/train.parquet \
        data.val_files=/data/gsm8k/test.parquet \
        data.train_batch_size=256 \
        data.max_prompt_length=512 \
        data.max_response_length=512 \
        actor_rollout_ref.model.path=Qwen/Qwen2.5-32B-Instruct \
        actor_rollout_ref.actor.optim.lr=1e-5 \
        actor_rollout_ref.actor.ppo_mini_batch_size=256 \
        actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=64 \
        actor_rollout_ref.rollout.name=vllm \
        actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=8 \
        actor_rollout_ref.rollout.tensor_model_parallel_size=8 \
        actor_rollout_ref.rollout.gpu_memory_utilization=0.6 \
        actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4 \
        actor_rollout_ref.actor.strategy=fsdp2 \
        algorithm.kl_ctrl.kl_coef=0.001 \
        trainer.logger=console \
        trainer.val_before_train=False \
        trainer.n_gpus_per_node=8 \
        trainer.nnodes=2 \
        trainer.save_freq=10 \
        trainer.test_freq=10 \
        algorithm.adv_estimator=grpo \
        actor_rollout_ref.rollout.n=8 \
        trainer.total_epochs=2" 2>&1 | tee verl_demo.log
    

    Monitor the logs in the Ray Dashboard or in the command output. Look for an increasing critic/score/mean value, which indicates that the model is learning.
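    The settings algorithm.adv_estimator=grpo and actor_rollout_ref.rollout.n=8 mean that each prompt yields 8 sampled responses, and each response's advantage is computed relative to its own group's reward statistics instead of a critic's value estimate. The sketch below illustrates that group-relative computation (mean-centered, standard-deviation-normalized); it is a simplified illustration, not verl's exact implementation:

    ```python
    # Simplified sketch of GRPO's group-relative advantage: score a group
    # of responses sampled for one prompt, then normalize each reward
    # against the group mean and standard deviation. No critic is involved.
    from statistics import mean, stdev

    def grpo_advantages(group_rewards, eps=1e-6):
        """Advantage of each response relative to its own group."""
        mu = mean(group_rewards)
        sigma = stdev(group_rewards)  # group size must be >= 2
        return [(r - mu) / (sigma + eps) for r in group_rewards]

    # One prompt, rollout.n = 8 responses, binary correctness rewards
    # (for example, whether the GSM8K answer was right).
    rewards = [1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0]
    advs = grpo_advantages(rewards)

    # Correct responses get positive advantages, incorrect ones negative,
    # and the advantages sum to roughly zero across the group.
    assert all(a > 0 for a, r in zip(advs, rewards) if r == 1.0)
    assert abs(sum(advs)) < 1e-3
    ```

    Because the baseline is just the group mean, memory that PPO would spend on a critic network is freed for larger rollouts, which is why GRPO is described as memory-efficient.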

Clean up

To avoid incurring charges, delete the resources:

kubectl delete raycluster b200-ray-cluster
gcloud container clusters delete ${CLUSTER_NAME} --location=${CONTROL_PLANE_LOCATION}
gcloud storage rm -r gs://${GS_BUCKET}

What's next