Run LLM inference on Cloud Run with Hugging Face TGI

The following example shows how to run a backend service that runs the Hugging Face Text Generation Inference (TGI) toolkit using Llama 3. Hugging Face TGI is an open source toolkit for deploying and serving Large Language Models (LLMs), and it can be deployed and served on a Cloud Run service with GPU enabled.
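As a rough sketch of what such a deployment looks like, the following `gcloud` command deploys a TGI container image to Cloud Run with an attached GPU. The service name, container image, region, resource sizes, and model ID are illustrative assumptions, not values from this guide; refer to the linked example for the exact configuration.

```shell
# Sketch: deploy a TGI container to Cloud Run with one NVIDIA L4 GPU.
# IMAGE_URL, the region, and MODEL_ID below are placeholder assumptions.
gcloud beta run deploy llama-tgi \
  --image=IMAGE_URL \
  --region=us-central1 \
  --port=8080 \
  --cpu=8 \
  --memory=32Gi \
  --gpu=1 \
  --gpu-type=nvidia-l4 \
  --no-cpu-throttling \
  --set-env-vars=MODEL_ID=meta-llama/Meta-Llama-3.1-8B-Instruct \
  --no-allow-unauthenticated
```

GPU support on Cloud Run requires the `beta` command group, and `--no-cpu-throttling` keeps the CPU allocated outside of request handling, which model servers like TGI generally need.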

See the entire example at Deploy Llama 3.1 8B with TGI DLC on Cloud Run.