Hosting a static website using HTTP

This tutorial describes how to configure a Cloud Storage bucket to host a static website for a domain you own. Static web pages can contain client-side technologies such as HTML, CSS, and JavaScript. They cannot contain dynamic content such as server-side scripts like PHP.

This tutorial shows how to serve content over HTTP. For a tutorial that uses HTTPS, see Hosting a static website.

For examples and tips on static web pages, including how to host static assets for a dynamic website, see the Static website page.

Connecting your domain to Cloud Storage

To connect your domain to Cloud Storage, create a CNAME record through your domain registration service. A CNAME record is a type of DNS record that directs traffic requesting a URL from your domain to the resources you want to serve, in this case objects in your Cloud Storage bucket. For www.example.com, the CNAME record might contain the following information:

NAME                  TYPE     DATA
www                   CNAME    c.storage.googleapis.com.

For more information about CNAME redirects, see URI for CNAME aliases.

To connect your domain to Cloud Storage:

  1. Create a CNAME record that points to c.storage.googleapis.com.

    Your domain registration service should have a way for you to administer your domain, including adding a CNAME record. For example, if you use Cloud DNS, instructions for adding resource records can be found on the Add, modify, and delete records page. Once the record is in place, you can optionally verify it with the lookup sketch shown after this step.
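    A quick way to confirm the record after it propagates is to query DNS from a short script. The following is a minimal sketch, assuming the third-party dnspython package (pip install dnspython) and the hypothetical www.example.com domain used in this tutorial:

    import dns.resolver  # third-party package: dnspython

    # Look up the CNAME record for the domain and print its target,
    # which should be c.storage.googleapis.com.
    for record in dns.resolver.resolve("www.example.com", "CNAME"):
        print(record.target)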

Creating a bucket

Create a bucket whose name matches the CNAME you created for your domain.

For example, if you added a CNAME record pointing the www subdomain of example.com to c.storage.googleapis.com., then create a bucket named www.example.com. Using the Google Cloud CLI, the command looks similar to the following:

gcloud storage buckets create gs://www.example.com --location=US

For complete instructions on creating buckets using different tools, see Create buckets.
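If you prefer to create the bucket programmatically, the following is a minimal sketch using the Python client library; it assumes the google-cloud-storage package, Application Default Credentials, and that you have verified ownership of the domain used in the bucket name:

from google.cloud import storage

# Create a bucket whose name matches the CNAME record, in the US multi-region.
client = storage.Client()
bucket = client.create_bucket("www.example.com", location="US")
print(f"Created bucket {bucket.name} in {bucket.location}")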

Uploading your site's files

Add the files you want your website to serve to the bucket:

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, click the name of the bucket that you created.

  3. In the Objects tab, click the Upload files button.

  4. In the file dialog, browse to the desired file and select it.

After the upload completes, you should see the file name along with file information displayed in the bucket.

Command line

Use the gcloud storage cp command to copy files to your bucket. For example, to copy the file index.html from its current location Desktop:

gcloud storage cp Desktop/index.html gs://www.example.com

If successful, the response resembles the following:

Completed files 1/1 | 164.3kiB/164.3kiB

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& file_name,
   std::string const& bucket_name, std::string const& object_name) {
  // Note that the client library automatically computes a hash on the
  // client-side to verify data integrity during transmission.
  StatusOr<gcs::ObjectMetadata> metadata = client.UploadFile(
      file_name, bucket_name, object_name, gcs::IfGenerationMatch(0));
  if (!metadata) throw std::move(metadata).status();

  std::cout << "Uploaded " << file_name << " to object " << metadata->name()
            << " in bucket " << metadata->bucket()
            << "\nFull metadata: " << *metadata << "\n";
}

C#

For more information, see the Cloud Storage C# API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.


using Google.Cloud.Storage.V1;
using System;
using System.IO;

public class UploadFileSample
{
    public void UploadFile(
        string bucketName = "your-unique-bucket-name",
        string localPath = "my-local-path/my-file-name",
        string objectName = "my-file-name")
    {
        var storage = StorageClient.Create();
        using var fileStream = File.OpenRead(localPath);
        storage.UploadObject(bucketName, objectName, null, fileStream);
        Console.WriteLine($"Uploaded {objectName}.");
    }
}

Go

For more information, see the Cloud Storage Go API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"
	"io"
	"os"
	"time"

	"cloud.google.com/go/storage"
)

// uploadFile uploads an object.
func uploadFile(w io.Writer, bucket, object string) error {
	// bucket := "bucket-name"
	// object := "object-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	// Open local file.
	f, err := os.Open("notes.txt")
	if err != nil {
		return fmt.Errorf("os.Open: %w", err)
	}
	defer f.Close()

	ctx, cancel := context.WithTimeout(ctx, time.Second*50)
	defer cancel()

	o := client.Bucket(bucket).Object(object)

	// Optional: set a generation-match precondition to avoid potential race
	// conditions and data corruptions. The request to upload is aborted if the
	// object's generation number does not match your precondition.
	// For an object that does not yet exist, set the DoesNotExist precondition.
	o = o.If(storage.Conditions{DoesNotExist: true})
	// If the live object already exists in your bucket, set instead a
	// generation-match precondition using the live object's generation number.
	// attrs, err := o.Attrs(ctx)
	// if err != nil {
	// 	return fmt.Errorf("object.Attrs: %w", err)
	// }
	// o = o.If(storage.Conditions{GenerationMatch: attrs.Generation})

	// Upload an object with storage.Writer.
	wc := o.NewWriter(ctx)
	if _, err = io.Copy(wc, f); err != nil {
		return fmt.Errorf("io.Copy: %w", err)
	}
	if err := wc.Close(); err != nil {
		return fmt.Errorf("Writer.Close: %w", err)
	}
	fmt.Fprintf(w, "Blob %v uploaded.\n", object)
	return nil
}

Java

For more information, see the Cloud Storage Java API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

The following sample uploads an individual object:


import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.IOException;
import java.nio.file.Paths;

public class UploadObject {
  public static void uploadObject(
      String projectId, String bucketName, String objectName, String filePath) throws IOException {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    // The ID of your GCS object
    // String objectName = "your-object-name";

    // The path to your file to upload
    // String filePath = "path/to/your/file"

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    BlobId blobId = BlobId.of(bucketName, objectName);
    BlobInfo blobInfo = BlobInfo.newBuilder(blobId).build();

    // Optional: set a generation-match precondition to avoid potential race
    // conditions and data corruptions. The request returns a 412 error if the
    // preconditions are not met.
    Storage.BlobWriteOption precondition;
    if (storage.get(bucketName, objectName) == null) {
      // For a target object that does not yet exist, set the DoesNotExist precondition.
      // This will cause the request to fail if the object is created before the request runs.
      precondition = Storage.BlobWriteOption.doesNotExist();
    } else {
      // If the destination already exists in your bucket, instead set a generation-match
      // precondition. This will cause the request to fail if the existing object's generation
      // changes before the request runs.
      precondition =
          Storage.BlobWriteOption.generationMatch(
              storage.get(bucketName, objectName).getGeneration());
    }
    storage.createFrom(blobInfo, Paths.get(filePath), precondition);

    System.out.println(
        "File " + filePath + " uploaded to bucket " + bucketName + " as " + objectName);
  }
}

The following sample uploads multiple objects concurrently:

import com.google.cloud.storage.transfermanager.ParallelUploadConfig;
import com.google.cloud.storage.transfermanager.TransferManager;
import com.google.cloud.storage.transfermanager.TransferManagerConfig;
import com.google.cloud.storage.transfermanager.UploadResult;
import java.io.IOException;
import java.nio.file.Path;
import java.util.List;

class UploadMany {

  public static void uploadManyFiles(String bucketName, List<Path> files) throws IOException {
    TransferManager transferManager = TransferManagerConfig.newBuilder().build().getService();
    ParallelUploadConfig parallelUploadConfig =
        ParallelUploadConfig.newBuilder().setBucketName(bucketName).build();
    List<UploadResult> results =
        transferManager.uploadFiles(files, parallelUploadConfig).getUploadResults();
    for (UploadResult result : results) {
      System.out.println(
          "Upload for "
              + result.getInput().getName()
              + " completed with status "
              + result.getStatus());
    }
  }
}

The following sample concurrently uploads all objects with a common prefix:

import com.google.cloud.storage.transfermanager.ParallelUploadConfig;
import com.google.cloud.storage.transfermanager.TransferManager;
import com.google.cloud.storage.transfermanager.TransferManagerConfig;
import com.google.cloud.storage.transfermanager.UploadResult;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

class UploadDirectory {

  public static void uploadDirectoryContents(String bucketName, Path sourceDirectory)
      throws IOException {
    TransferManager transferManager = TransferManagerConfig.newBuilder().build().getService();
    ParallelUploadConfig parallelUploadConfig =
        ParallelUploadConfig.newBuilder().setBucketName(bucketName).build();

    // Create a list to store the file paths
    List<Path> filePaths = new ArrayList<>();
    // Get all files in the directory
    // try-with-resource to ensure pathStream is closed
    try (Stream<Path> pathStream = Files.walk(sourceDirectory)) {
      pathStream.filter(Files::isRegularFile).forEach(filePaths::add);
    }
    List<UploadResult> results =
        transferManager.uploadFiles(filePaths, parallelUploadConfig).getUploadResults();
    for (UploadResult result : results) {
      System.out.println(
          "Upload for "
              + result.getInput().getName()
              + " completed with status "
              + result.getStatus());
    }
  }
}

Node.js

For more information, see the Cloud Storage Node.js API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

The following sample uploads an individual object:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The path to your file to upload
// const filePath = 'path/to/your/file';

// The new ID for your GCS file
// const destFileName = 'your-new-file-name';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function uploadFile() {
  // This sample assumes the destination object does not exist yet, so a
  // generation-match precondition of 0 is used (see the comment below).
  const generationMatchPrecondition = 0;

  const options = {
    destination: destFileName,
    // Optional:
    // Set a generation-match precondition to avoid potential race conditions
    // and data corruptions. The request to upload is aborted if the object's
    // generation number does not match your precondition. For a destination
    // object that does not yet exist, set the ifGenerationMatch precondition to 0
    // If the destination object already exists in your bucket, set instead a
    // generation-match precondition using its generation number.
    preconditionOpts: {ifGenerationMatch: generationMatchPrecondition},
  };

  await storage.bucket(bucketName).upload(filePath, options);
  console.log(`${filePath} uploaded to ${bucketName}`);
}

uploadFile().catch(console.error);

The following sample uploads multiple objects concurrently:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The ID of the first GCS file to upload
// const firstFilePath = 'your-first-file-name';

// The ID of the second GCS file to upload
// const secondFilePath = 'your-second-file-name';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function uploadManyFilesWithTransferManager() {
  // Uploads the files
  await transferManager.uploadManyFiles([firstFilePath, secondFilePath]);

  for (const filePath of [firstFilePath, secondFilePath]) {
    console.log(`${filePath} uploaded to ${bucketName}.`);
  }
}

uploadManyFilesWithTransferManager().catch(console.error);

The following sample concurrently uploads all objects with a common prefix:

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The local directory to upload
// const directoryName = 'your-directory';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function uploadDirectoryWithTransferManager() {
  // Uploads the directory
  await transferManager.uploadManyFiles(directoryName);

  console.log(`${directoryName} uploaded to ${bucketName}.`);
}

uploadDirectoryWithTransferManager().catch(console.error);

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\Storage\StorageClient;

/**
 * Upload a file.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $objectName The name of your Cloud Storage object.
 *        (e.g. 'my-object')
 * @param string $source The path to the file to upload.
 *        (e.g. '/path/to/your/file')
 */
function upload_object(string $bucketName, string $objectName, string $source): void
{
    $storage = new StorageClient();
    if (!$file = fopen($source, 'r')) {
        throw new \InvalidArgumentException('Unable to open file for reading');
    }
    $bucket = $storage->bucket($bucketName);
    $object = $bucket->upload($file, [
        'name' => $objectName
    ]);
    printf('Uploaded %s to gs://%s/%s' . PHP_EOL, basename($source), $bucketName, $objectName);
}

Python

For more information, see the Cloud Storage Python API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

The following sample uploads an individual object:

from google.cloud import storage


def upload_blob(bucket_name, source_file_name, destination_blob_name):
    """Uploads a file to the bucket."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"
    # The path to your file to upload
    # source_file_name = "local/path/to/file"
    # The ID of your GCS object
    # destination_blob_name = "storage-object-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)

    # Optional: set a generation-match precondition to avoid potential race conditions
    # and data corruptions. The request to upload is aborted if the object's
    # generation number does not match your precondition. For a destination
    # object that does not yet exist, set the if_generation_match precondition to 0.
    # If the destination object already exists in your bucket, set instead a
    # generation-match precondition using its generation number.
    generation_match_precondition = 0

    blob.upload_from_filename(source_file_name, if_generation_match=generation_match_precondition)

    print(
        f"File {source_file_name} uploaded to {destination_blob_name}."
    )

The following sample uploads multiple objects concurrently:

def upload_many_blobs_with_transfer_manager(
    bucket_name, filenames, source_directory="", workers=8
):
    """Upload every file in a list to a bucket, concurrently in a process pool.

    Each blob name is derived from the filename, not including the
    `source_directory` parameter. For complete control of the blob name for each
    file (and other aspects of individual blob metadata), use
    transfer_manager.upload_many() instead.
    """

    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # A list (or other iterable) of filenames to upload.
    # filenames = ["file_1.txt", "file_2.txt"]

    # The directory on your computer that is the root of all of the files in the
    # list of filenames. This string is prepended (with os.path.join()) to each
    # filename to get the full path to the file. Relative paths and absolute
    # paths are both accepted. This string is not included in the name of the
    # uploaded blob; it is only used to find the source files. An empty string
    # means "the current working directory". Note that this parameter allows
    # directory traversal (e.g. "/", "../") and is not intended for unsanitized
    # end user input.
    # source_directory=""

    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    results = transfer_manager.upload_many_from_filenames(
        bucket, filenames, source_directory=source_directory, max_workers=workers
    )

    for name, result in zip(filenames, results):
        # The results list is either `None` or an exception for each filename in
        # the input list, in order.

        if isinstance(result, Exception):
            print("Failed to upload {} due to exception: {}".format(name, result))
        else:
            print("Uploaded {} to {}.".format(name, bucket.name))

The following sample concurrently uploads all objects with a common prefix:

def upload_directory_with_transfer_manager(bucket_name, source_directory, workers=8):
    """Upload every file in a directory, including all files in subdirectories.

    Each blob name is derived from the filename, not including the `directory`
    parameter itself. For complete control of the blob name for each file (and
    other aspects of individual blob metadata), use
    transfer_manager.upload_many() instead.
    """

    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # The directory on your computer to upload. Files in the directory and its
    # subdirectories will be uploaded. An empty string means "the current
    # working directory".
    # source_directory=""

    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from pathlib import Path

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    # Generate a list of paths (in string form) relative to the `directory`.
    # This can be done in a single list comprehension, but is expanded into
    # multiple lines here for clarity.

    # First, recursively get all files in `directory` as Path objects.
    directory_as_path_obj = Path(source_directory)
    paths = directory_as_path_obj.rglob("*")

    # Filter so the list only includes files, not directories themselves.
    file_paths = [path for path in paths if path.is_file()]

    # These paths are relative to the current working directory. Next, make them
    # relative to `directory`
    relative_paths = [path.relative_to(source_directory) for path in file_paths]

    # Finally, convert them all to strings.
    string_paths = [str(path) for path in relative_paths]

    print("Found {} files.".format(len(string_paths)))

    # Start the upload.
    results = transfer_manager.upload_many_from_filenames(
        bucket, string_paths, source_directory=source_directory, max_workers=workers
    )

    for name, result in zip(string_paths, results):
        # The results list is either `None` or an exception for each filename in
        # the input list, in order.

        if isinstance(result, Exception):
            print("Failed to upload {} due to exception: {}".format(name, result))
        else:
            print("Uploaded {} to {}.".format(name, bucket.name))

Ruby

For more information, see the Cloud Storage Ruby API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

def upload_file bucket_name:, local_file_path:, file_name: nil
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  # The path to your file to upload
  # local_file_path = "/local/path/to/file.txt"

  # The ID of your GCS object
  # file_name = "your-file-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket  = storage.bucket bucket_name, skip_lookup: true

  file = bucket.create_file local_file_path, file_name

  puts "Uploaded #{local_file_path} as #{file.name} in bucket #{bucket_name}"
end

REST API

JSON API

  1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Use cURL to call the JSON API with a POST Object request. For the index page of www.example.com:

    curl -X POST --data-binary @index.html \
      -H "Content-Type: text/html" \
      -H "Authorization: $(gcloud auth print-access-token)" \
      "https://storage.googleapis.com/upload/storage/v1/b/www.example.com/o?uploadType=media&name=index.html"

XML API

  1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Use cURL to call the XML API with a PUT Object request. For the index page of www.example.com:

    curl -X PUT --data-binary @index.html \
      -H "Authorization: $(gcloud auth print-access-token)" \
      -H "Content-Type: text/html" \
      "https://storage.googleapis.com/www.example.com/index.html"

Sharing your files

To make all objects in your bucket readable to anyone on the public internet:

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, click the name of the bucket that you want to make public.

  3. Select the Permissions tab near the top of the page.

  4. If the Public access pane reads Not public, click the button labeled Remove public access prevention and click Confirm in the dialog that appears.

  5. Click the Grant access button.

    The Add principals dialog appears.

  6. In the New principals field, enter allUsers.

  7. In the Select a role drop-down, select the Cloud Storage submenu, and click the Storage Object Viewer option.

  8. Click Save.

  9. Click Allow public access.

Once shared publicly, a link icon appears for each object in the public access column. You can click this icon to get the URL for the object.

Command line

Use the buckets add-iam-policy-binding command:

gcloud storage buckets add-iam-policy-binding gs://www.example.com --member=allUsers --role=roles/storage.objectViewer

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name) {
  auto current_policy = client.GetNativeBucketIamPolicy(
      bucket_name, gcs::RequestedPolicyVersion(3));
  if (!current_policy) throw std::move(current_policy).status();

  current_policy->set_version(3);
  current_policy->bindings().emplace_back(
      gcs::NativeIamBinding("roles/storage.objectViewer", {"allUsers"}));

  auto updated =
      client.SetNativeBucketIamPolicy(bucket_name, *current_policy);
  if (!updated) throw std::move(updated).status();

  std::cout << "Policy successfully updated: " << *updated << "\n";
}

C#

For more information, see the Cloud Storage C# API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.


using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;
using System.Collections.Generic;

public class MakeBucketPublicSample
{
    public void MakeBucketPublic(string bucketName = "your-unique-bucket-name")
    {
        var storage = StorageClient.Create();

        Policy policy = storage.GetBucketIamPolicy(bucketName);

        policy.Bindings.Add(new Policy.BindingsData
        {
            Role = "roles/storage.objectViewer",
            Members = new List<string> { "allUsers" }
        });

        storage.SetBucketIamPolicy(bucketName, policy);
        Console.WriteLine(bucketName + " is now public ");
    }
}

Go

For more information, see the Cloud Storage Go API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"
	"io"

	"cloud.google.com/go/iam"
	"cloud.google.com/go/iam/apiv1/iampb"
	"cloud.google.com/go/storage"
)

// setBucketPublicIAM makes all objects in a bucket publicly readable.
func setBucketPublicIAM(w io.Writer, bucketName string) error {
	// bucketName := "bucket-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	policy, err := client.Bucket(bucketName).IAM().V3().Policy(ctx)
	if err != nil {
		return fmt.Errorf("Bucket(%q).IAM().V3().Policy: %w", bucketName, err)
	}
	role := "roles/storage.objectViewer"
	policy.Bindings = append(policy.Bindings, &iampb.Binding{
		Role:    role,
		Members: []string{iam.AllUsers},
	})
	if err := client.Bucket(bucketName).IAM().V3().SetPolicy(ctx, policy); err != nil {
		return fmt.Errorf("Bucket(%q).IAM().SetPolicy: %w", bucketName, err)
	}
	fmt.Fprintf(w, "Bucket %v is now publicly readable\n", bucketName)
	return nil
}

Java

For more information, see the Cloud Storage Java API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.cloud.Identity;
import com.google.cloud.Policy;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.storage.StorageRoles;

public class MakeBucketPublic {
  public static void makeBucketPublic(String projectId, String bucketName) {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    Policy originalPolicy = storage.getIamPolicy(bucketName);
    storage.setIamPolicy(
        bucketName,
        originalPolicy.toBuilder()
            .addIdentity(StorageRoles.objectViewer(), Identity.allUsers()) // All users can view
            .build());

    System.out.println("Bucket " + bucketName + " is now publicly readable");
  }
}

Node.js

For more information, see the Cloud Storage Node.js API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function makeBucketPublic() {
  await storage.bucket(bucketName).makePublic();

  console.log(`Bucket ${bucketName} is now publicly readable`);
}

makeBucketPublic().catch(console.error);

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\Storage\StorageClient;

/**
 * Update the specified bucket's IAM configuration to make it publicly accessible.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 */
function set_bucket_public_iam(string $bucketName): void
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);

    $policy = $bucket->iam()->policy(['requestedPolicyVersion' => 3]);
    $policy['version'] = 3;

    $role = 'roles/storage.objectViewer';
    $members = ['allUsers'];

    $policy['bindings'][] = [
        'role' => $role,
        'members' => $members
    ];

    $bucket->iam()->setPolicy($policy);

    printf('Bucket %s is now public', $bucketName);
}

Python

For more information, see the Cloud Storage Python API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from typing import List

from google.cloud import storage


def set_bucket_public_iam(
    bucket_name: str = "your-bucket-name",
    members: List[str] = ["allUsers"],
):
    """Set a public IAM Policy to bucket"""
    # bucket_name = "your-bucket-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)

    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append(
        {"role": "roles/storage.objectViewer", "members": members}
    )

    bucket.set_iam_policy(policy)

    print(f"Bucket {bucket.name} is now publicly readable")

Ruby

For more information, see the Cloud Storage Ruby API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

def set_bucket_public_iam bucket_name:
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket = storage.bucket bucket_name

  bucket.policy do |p|
    p.add "roles/storage.objectViewer", "allUsers"
  end

  puts "Bucket #{bucket_name} is now publicly readable"
end

REST API

JSON API

  1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Create a JSON file that contains the following information:

    {
      "bindings":[
        {
          "role": "roles/storage.objectViewer",
          "members":["allUsers"]
        }
      ]
    }
  3. Use cURL to call the JSON API with a PUT Bucket request:

    curl -X PUT --data-binary @JSON_FILE_NAME \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/iam"

    Where:

    • JSON_FILE_NAME is the path for the JSON file that you created in Step 2.
    • BUCKET_NAME is the name of the bucket whose objects you want to make public. For example, my-bucket.

XML API

Making all objects in a bucket publicly readable is not supported by the XML API. Use the Google Cloud console or gcloud storage instead.

If needed, you can instead make portions of your bucket publicly accessible, for example a single object, as in the sketch below.
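The following is a minimal sketch that makes a single object public with the Python client library. It relies on object ACLs, so it assumes the bucket uses fine-grained (non-uniform) access control; the bucket and object names are the hypothetical ones used throughout this tutorial:

from google.cloud import storage

# Grant allUsers read access to one object only, rather than the whole bucket.
client = storage.Client()
blob = client.bucket("www.example.com").blob("index.html")
blob.make_public()
print(f"Object is now readable at {blob.public_url}")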

Visitors receive an http 403 response code when requesting the URL for a non-public or non-existent file. See the next section for information on how to add an error page that uses an http 404 response code.

Recommended: assigning specialty pages

You can assign an index page suffix, which is controlled by the MainPageSuffix property, and a custom error page, which is controlled by the NotFoundPage property. Assigning either is optional, but without an index page, nothing is served when users access your top-level site, for example, http://www.example.com. For more information, see Website configuration examples.

In the following samples, the MainPageSuffix is set to index.html and NotFoundPage is set to 404.html:

Console

  1. In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  2. In the list of buckets, find the bucket you created.

  3. Click the Bucket overflow menu () associated with the bucket and select Edit website configuration.

  4. In the website configuration dialog, specify the main page and error page.

  5. Click Save.

Command line

Use the buckets update command with the --web-main-page-suffix and --web-error-page flags:

gcloud storage buckets update gs://www.example.com --web-main-page-suffix=index.html --web-error-page=404.html

If successful, the command returns:

Updating gs://www.example.com/...
  Completed 1

Client libraries

C++

For more information, see the Cloud Storage C++ API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name,
   std::string const& main_page_suffix, std::string const& not_found_page) {
  StatusOr<gcs::BucketMetadata> original =
      client.GetBucketMetadata(bucket_name);

  if (!original) throw std::move(original).status();
  StatusOr<gcs::BucketMetadata> patched = client.PatchBucket(
      bucket_name,
      gcs::BucketMetadataPatchBuilder().SetWebsite(
          gcs::BucketWebsite{main_page_suffix, not_found_page}),
      gcs::IfMetagenerationMatch(original->metageneration()));
  if (!patched) throw std::move(patched).status();

  if (!patched->has_website()) {
    std::cout << "Static website configuration is not set for bucket "
              << patched->name() << "\n";
    return;
  }

  std::cout << "Static website configuration successfully set for bucket "
            << patched->name() << "\nNew main page suffix is: "
            << patched->website().main_page_suffix
            << "\nNew not found page is: "
            << patched->website().not_found_page << "\n";
}

C#

For more information, see the Cloud Storage C# API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.


using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;
using System;

public class BucketWebsiteConfigurationSample
{
    public Bucket BucketWebsiteConfiguration(
        string bucketName = "your-bucket-name",
        string mainPageSuffix = "index.html",
        string notFoundPage = "404.html")
    {
        var storage = StorageClient.Create();
        var bucket = storage.GetBucket(bucketName);

        if (bucket.Website == null)
        {
            bucket.Website = new Bucket.WebsiteData();
        }
        bucket.Website.MainPageSuffix = mainPageSuffix;
        bucket.Website.NotFoundPage = notFoundPage;

        bucket = storage.UpdateBucket(bucket);
        Console.WriteLine($"Static website bucket {bucketName} is set up to use {mainPageSuffix} as the index page and {notFoundPage} as the 404 not found page.");
        return bucket;
    }
}

Go

For more information, see the Cloud Storage Go API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import (
	"context"
	"fmt"
	"io"
	"time"

	"cloud.google.com/go/storage"
)

// setBucketWebsiteInfo sets website configuration on a bucket.
func setBucketWebsiteInfo(w io.Writer, bucketName, indexPage, notFoundPage string) error {
	// bucketName := "www.example.com"
	// indexPage := "index.html"
	// notFoundPage := "404.html"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	ctx, cancel := context.WithTimeout(ctx, time.Second*10)
	defer cancel()

	bucket := client.Bucket(bucketName)
	bucketAttrsToUpdate := storage.BucketAttrsToUpdate{
		Website: &storage.BucketWebsite{
			MainPageSuffix: indexPage,
			NotFoundPage:   notFoundPage,
		},
	}
	if _, err := bucket.Update(ctx, bucketAttrsToUpdate); err != nil {
		return fmt.Errorf("Bucket(%q).Update: %w", bucketName, err)
	}
	fmt.Fprintf(w, "Static website bucket %v is set up to use %v as the index page and %v as the 404 page\n", bucketName, indexPage, notFoundPage)
	return nil
}

Java

For more information, see the Cloud Storage Java API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class SetBucketWebsiteInfo {
  public static void setBucketWebsiteInfo(
      String projectId, String bucketName, String indexPage, String notFoundPage) {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your static website bucket
    // String bucketName = "www.example.com";

    // The index page for a static website bucket
    // String indexPage = "index.html";

    // The 404 page for a static website bucket
    // String notFoundPage = "404.html";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    Bucket bucket = storage.get(bucketName);
    bucket.toBuilder().setIndexPage(indexPage).setNotFoundPage(notFoundPage).build().update();

    System.out.println(
        "Static website bucket "
            + bucketName
            + " is set up to use "
            + indexPage
            + " as the index page and "
            + notFoundPage
            + " as the 404 page");
  }
}

Node.js

For more information, see the Cloud Storage Node.js API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The name of the main page
// const mainPageSuffix = 'http://example.com';

// The Name of a 404 page
// const notFoundPage = 'http://example.com/404.html';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function addBucketWebsiteConfiguration() {
  await storage.bucket(bucketName).setMetadata({
    website: {
      mainPageSuffix,
      notFoundPage,
    },
  });

  console.log(
    `Static website bucket ${bucketName} is set up to use ${mainPageSuffix} as the index page and ${notFoundPage} as the 404 page`
  );
}

addBucketWebsiteConfiguration().catch(console.error);

PHP

For more information, see the Cloud Storage PHP API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

use Google\Cloud\Storage\StorageClient;

/**
 * Update the given bucket's website configuration.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $indexPageObject the name of an object in the bucket to use as
 *        (e.g. 'index.html')
 *     an index page for a static website bucket.
 * @param string $notFoundPageObject the name of an object in the bucket to use
 *        (e.g. '404.html')
 *     as the 404 Not Found page.
 */
function define_bucket_website_configuration(string $bucketName, string $indexPageObject, string $notFoundPageObject): void
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);

    $bucket->update([
        'website' => [
            'mainPageSuffix' => $indexPageObject,
            'notFoundPage' => $notFoundPageObject
        ]
    ]);

    printf(
        'Static website bucket %s is set up to use %s as the index page and %s as the 404 page.',
        $bucketName,
        $indexPageObject,
        $notFoundPageObject
    );
}

Python

For more information, see the Cloud Storage Python API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

from google.cloud import storage


def define_bucket_website_configuration(bucket_name, main_page_suffix, not_found_page):
    """Configure website-related properties of bucket"""
    # bucket_name = "your-bucket-name"
    # main_page_suffix = "index.html"
    # not_found_page = "404.html"

    storage_client = storage.Client()

    bucket = storage_client.get_bucket(bucket_name)
    bucket.configure_website(main_page_suffix, not_found_page)
    bucket.patch()

    print(
        "Static website bucket {} is set up to use {} as the index page and {} as the 404 page".format(
            bucket.name, main_page_suffix, not_found_page
        )
    )
    return bucket

Ruby

For more information, see the Cloud Storage Ruby API reference documentation.

To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

def define_bucket_website_configuration bucket_name:, main_page_suffix:, not_found_page:
  # The ID of your static website bucket
  # bucket_name = "www.example.com"

  # The index page for a static website bucket
  # main_page_suffix = "index.html"

  # The 404 page for a static website bucket
  # not_found_page = "404.html"

  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket = storage.bucket bucket_name

  bucket.update do |b|
    b.website_main = main_page_suffix
    b.website_404 = not_found_page
  end

  puts "Static website bucket #{bucket_name} is set up to use #{main_page_suffix} as the index page and " \
       "#{not_found_page} as the 404 page"
end

REST API

JSON API

  1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Create a JSON file that sets the mainPageSuffix and notFoundPage properties in a website object to the desired pages:

    {
      "website":{
        "mainPageSuffix": "index.html",
        "notFoundPage": "404.html"
      }
    }
  3. Use cURL to call the JSON API with a PATCH Bucket request. For www.example.com:

    curl -X PATCH --data-binary @web-config.json \
      -H "Authorization: $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://storage.googleapis.com/storage/v1/b/www.example.com"

XML API

  1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

  2. Create an XML file that sets the MainPageSuffix and NotFoundPage elements in a WebsiteConfiguration element to the desired pages:

    <WebsiteConfiguration>
      <MainPageSuffix>index.html</MainPageSuffix>
      <NotFoundPage>404.html</NotFoundPage>
    </WebsiteConfiguration>
  3. Use cURL to call the XML API with a PUT Bucket request and websiteConfig query string parameter. For www.example.com:

    curl -X PUT --data-binary @web-config.xml \
      -H "Authorization: $(gcloud auth print-access-token)" \
      https://storage.googleapis.com/www.example.com?websiteConfig

Testing the website

Verify that content is served from the bucket by requesting the domain name in a browser. You can do this with a path to an object or with just the domain name, if you set the MainPageSuffix property.

For example, if you have an object named test.html stored in a bucket named www.example.com, check that it's accessible by going to www.example.com/test.html in your browser.
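For a scripted check, the following is a minimal sketch using only the Python standard library; the URL is the hypothetical example from this tutorial. Expect a 200 response for a public object, 403 for a non-public object, and 404 (or your NotFoundPage) when the object doesn't exist:

import urllib.error
import urllib.request

# Request the test object over HTTP and print the status code and the start of the body.
url = "http://www.example.com/test.html"
try:
    with urllib.request.urlopen(url) as response:
        print(response.status, response.read(100))
except urllib.error.HTTPError as err:
    print(err.code, err.reason)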