Run LLM inference on Cloud Run with Hugging Face TGI
The following example shows how to run a backend service that uses the Hugging Face Text Generation Inference (TGI) toolkit to serve Llama 3. Hugging Face TGI is a toolkit for deploying and serving open large language models (LLMs), and it can be deployed on a Cloud Run service with GPUs enabled.
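A deployment along these lines can be sketched with the gcloud CLI. This is a minimal sketch, not the definitive setup: the service name, region, resource sizes, environment variables, and secret name are hypothetical placeholders, and the exact GPU flags and TGI image tag may vary by gcloud version and TGI release.

```shell
# Sketch: deploy the TGI container image to Cloud Run with one NVIDIA L4 GPU.
# Assumptions: service name "tgi-llama3", region us-central1, and a Secret
# Manager secret "hf-token" holding a Hugging Face access token (Llama 3 is
# a gated model, so a valid token is required to download the weights).
gcloud run deploy tgi-llama3 \
  --image=ghcr.io/huggingface/text-generation-inference:latest \
  --region=us-central1 \
  --cpu=8 \
  --memory=32Gi \
  --gpu=1 \
  --gpu-type=nvidia-l4 \
  --set-env-vars=MODEL_ID=meta-llama/Meta-Llama-3-8B-Instruct \
  --set-secrets=HF_TOKEN=hf-token:latest \
  --no-cpu-throttling \
  --max-instances=1
```

Once the service is up, you can exercise it by POSTing to TGI's `/generate` endpoint with a JSON body such as `{"inputs": "...", "parameters": {"max_new_tokens": 50}}`, authenticating the request as your Cloud Run service requires.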
Last updated 2025-12-03 UTC.