Class GkeInferenceQuickstartGrpc.GkeInferenceQuickstartBlockingV2Stub (0.1.0)

public static final class GkeInferenceQuickstartGrpc.GkeInferenceQuickstartBlockingV2Stub extends AbstractBlockingStub<GkeInferenceQuickstartGrpc.GkeInferenceQuickstartBlockingV2Stub>

A stub that allows clients to make synchronous RPC calls to the GkeInferenceQuickstart service.

GKE Inference Quickstart (GIQ) service provides profiles with performance metrics for popular models and model servers across multiple accelerators. These profiles help generate optimized best practices for running inference on GKE.
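The stub is not constructed directly; gRPC's generated code exposes a factory method for the blocking V2 stub. The following is a minimal sketch, assuming the conventional newBlockingV2Stub factory, a gkerecommender.googleapis.com endpoint, and Application Default Credentials with the cloud-platform scope; the endpoint host and scope are assumptions, not taken from this page.

import com.google.auth.oauth2.GoogleCredentials;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.auth.MoreCallCredentials;

public class GiqBlockingStubExample {
  public static void main(String[] args) throws Exception {
    // Endpoint host is an assumption; confirm the service's published endpoint.
    ManagedChannel channel =
        ManagedChannelBuilder.forTarget("gkerecommender.googleapis.com:443").build();

    // Application Default Credentials with the cloud-platform scope (assumed).
    GoogleCredentials credentials = GoogleCredentials.getApplicationDefault()
        .createScoped("https://www.googleapis.com/auth/cloud-platform");

    GkeInferenceQuickstartGrpc.GkeInferenceQuickstartBlockingV2Stub stub =
        GkeInferenceQuickstartGrpc.newBlockingV2Stub(channel)
            .withCallCredentials(MoreCallCredentials.from(credentials));

    // ... issue calls on `stub` (see the method sketches below), then release the channel.
    channel.shutdown();
  }
}

The per-method sketches below reuse the `stub` variable built here.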

Inheritance

java.lang.Object > io.grpc.stub.AbstractStub > io.grpc.stub.AbstractBlockingStub > GkeInferenceQuickstartGrpc.GkeInferenceQuickstartBlockingV2Stub

Methods

build(Channel channel, CallOptions callOptions)

protected GkeInferenceQuickstartGrpc.GkeInferenceQuickstartBlockingV2Stub build(Channel channel, CallOptions callOptions)
Parameters
Name Description
channel io.grpc.Channel
callOptions io.grpc.CallOptions
Returns
Type Description
GkeInferenceQuickstartGrpc.GkeInferenceQuickstartBlockingV2Stub
Overrides
io.grpc.stub.AbstractStub.build(io.grpc.Channel,io.grpc.CallOptions)

fetchBenchmarkingData(FetchBenchmarkingDataRequest request)

public FetchBenchmarkingDataResponse fetchBenchmarkingData(FetchBenchmarkingDataRequest request)

Fetches all of the benchmarking data available for a profile. The benchmarking data includes all of the performance metrics available for a given model server setup on a given instance type.

Parameter
Name Description
request FetchBenchmarkingDataRequest
Returns
Type Description
FetchBenchmarkingDataResponse
Exceptions
Type Description
io.grpc.StatusException
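
A minimal sketch of a call, reusing the stub from the class-level example. FetchBenchmarkingDataRequest's fields are not listed on this page, so the request is left at its defaults; a real call would set the fields that identify the profile.

try {
  FetchBenchmarkingDataResponse response =
      stub.fetchBenchmarkingData(FetchBenchmarkingDataRequest.newBuilder().build());
  // Protobuf messages print in text format, which is enough for inspection.
  System.out.println(response);
} catch (io.grpc.StatusException e) {
  // Blocking V2 stub methods surface RPC failures as checked StatusExceptions.
  System.err.println("fetchBenchmarkingData failed: " + e.getStatus());
}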

fetchModelServerVersions(FetchModelServerVersionsRequest request)

public FetchModelServerVersionsResponse fetchModelServerVersions(FetchModelServerVersionsRequest request)

Fetches available model server versions. Open-source servers use their own versioning schemas (e.g., vllm uses semver, such as v1.0.0). Some model servers use different versioning schemas depending on the accelerator; for example, vllm uses semver on GPUs but returns nightly build tags on TPUs. When different schemas are present, all available versions are returned.

Parameter
Name Description
request FetchModelServerVersionsRequest
Returns
Type Description
FetchModelServerVersionsResponse
Exceptions
Type Description
io.grpc.StatusException
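
A sketch along the same lines, again assuming the stub built in the class-level example; the request fields that would narrow results to one model server are not documented here, so they are left unset.

try {
  FetchModelServerVersionsResponse response =
      stub.fetchModelServerVersions(FetchModelServerVersionsRequest.newBuilder().build());
  System.out.println(response);
} catch (io.grpc.StatusException e) {
  System.err.println("fetchModelServerVersions failed: " + e.getStatus());
}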

fetchModelServers(FetchModelServersRequest request)

public FetchModelServersResponse fetchModelServers(FetchModelServersRequest request)

Fetches available model servers. Open-source model servers use simplified, lowercase names (e.g., vllm).

Parameter
Name Description
request FetchModelServersRequest
Returns
Type Description
FetchModelServersResponse
Exceptions
Type Description
io.grpc.StatusException
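
A minimal sketch, assuming the stub from the class-level example and a default request, since FetchModelServersRequest's fields are not listed on this page.

try {
  FetchModelServersResponse response =
      stub.fetchModelServers(FetchModelServersRequest.newBuilder().build());
  // Expect simplified, lowercase server names (e.g., vllm) in the response.
  System.out.println(response);
} catch (io.grpc.StatusException e) {
  System.err.println("fetchModelServers failed: " + e.getStatus());
}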

fetchModels(FetchModelsRequest request)

public FetchModelsResponse fetchModels(FetchModelsRequest request)

Fetches available models. Open-source models follow the Hugging Face Hub owner/model_name format.

Parameter
Name Description
request FetchModelsRequest
Returns
Type Description
FetchModelsResponse
Exceptions
Type Description
io.grpc.StatusException
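
A minimal sketch, reusing the stub built earlier; the request is left at its defaults because FetchModelsRequest's fields are not listed on this page.

try {
  FetchModelsResponse response =
      stub.fetchModels(FetchModelsRequest.newBuilder().build());
  // Expect open-source model names in Hugging Face Hub owner/model_name form.
  System.out.println(response);
} catch (io.grpc.StatusException e) {
  System.err.println("fetchModels failed: " + e.getStatus());
}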

fetchProfiles(FetchProfilesRequest request)

public FetchProfilesResponse fetchProfiles(FetchProfilesRequest request)

Fetches available profiles. A profile contains performance metrics and cost information for a specific model server setup. Profiles can be filtered by request parameters; if no filters are provided, all profiles are returned. Each profile reports a single value per performance metric, based on the provided performance requirements; if no requirements are given, the metrics represent the inflection point. See Run best practice inference with GKE Inference Quickstart recipes for details.

Parameter
Name Description
request FetchProfilesRequest
Returns
Type Description
FetchProfilesResponse
Exceptions
Type Description
io.grpc.StatusException
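
A sketch of an unfiltered call, assuming the stub from the class-level example. Per the description above, a default request returns all profiles with metrics at the inflection point; the filter and performance-requirement field names are not listed here, so none are set.

try {
  FetchProfilesResponse response =
      stub.fetchProfiles(FetchProfilesRequest.newBuilder().build());
  System.out.println(response);
} catch (io.grpc.StatusException e) {
  System.err.println("fetchProfiles failed: " + e.getStatus());
}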

generateOptimizedManifest(GenerateOptimizedManifestRequest request)

public GenerateOptimizedManifestResponse generateOptimizedManifest(GenerateOptimizedManifestRequest request)

Generates an optimized deployment manifest for a given model and model server, based on the specified accelerator, performance targets, and configurations. See Run best practice inference with GKE Inference Quickstart recipes for deployment details.

Parameter
Name Description
request GenerateOptimizedManifestRequest
Returns
Type Description
GenerateOptimizedManifestResponse
Exceptions
Type Description
io.grpc.StatusException
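
A final sketch, assuming the same stub. The request fields naming the model, model server, accelerator, and performance targets are not listed on this page, so the builder is shown bare; a real call would populate them before sending.

try {
  GenerateOptimizedManifestResponse response =
      stub.generateOptimizedManifest(GenerateOptimizedManifestRequest.newBuilder().build());
  // Per the method description, the response should carry the optimized deployment manifest.
  System.out.println(response);
} catch (io.grpc.StatusException e) {
  System.err.println("generateOptimizedManifest failed: " + e.getStatus());
}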