MCP Tools Reference: monitoring.googleapis.com

Tool: list_timeseries

Lists time series data from the Google Cloud Monitoring API.

The following sample demonstrates how to use curl to invoke the list_timeseries MCP tool.

Curl Request
                  
curl --location 'https://monitoring.googleapis.com/mcp' \
--header 'content-type: application/json' \
--header 'accept: application/json, text/event-stream' \
--data '{
  "method": "tools/call",
  "params": {
    "name": "list_timeseries",
    "arguments": {
      // provide these details according to the tool's MCP specification
    }
  },
  "jsonrpc": "2.0",
  "id": 1
}'
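
For example, assuming the tool's arguments mirror the ListTimeSeriesRequest fields documented under Input Schema below, a filled-in request might look like the following sketch. The project ID, metric type, time range, and view value are illustrative placeholders; adjust them to your environment.

curl --location 'https://monitoring.googleapis.com/mcp' \
--header 'content-type: application/json' \
--header 'accept: application/json, text/event-stream' \
--data '{
  "method": "tools/call",
  "params": {
    "name": "list_timeseries",
    "arguments": {
      "name": "projects/my-project",
      "filter": "metric.type = \"compute.googleapis.com/instance/cpu/usage_time\"",
      "interval": {
        "startTime": "2024-01-01T00:00:00Z",
        "endTime": "2024-01-01T01:00:00Z"
      },
      "view": "FULL"
    }
  },
  "jsonrpc": "2.0",
  "id": 1
}'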

Input Schema

The ListTimeSeries request.

ListTimeSeriesRequest

JSON representation
{
  "name": string,
  "filter": string,
  "interval": {
    object (TimeInterval)
  },
  "aggregation": {
    object (Aggregation)
  },
  "secondaryAggregation": {
    object (Aggregation)
  },
  "orderBy": string,
  "view": enum (TimeSeriesView),
  "pageSize": integer,
  "pageToken": string
}
Fields
name

string

Required. The project, organization or folder on which to execute the request. The format is:

projects/[PROJECT_ID_OR_NUMBER]
organizations/[ORGANIZATION_ID]
folders/[FOLDER_ID]
filter

string

Required. A monitoring filter that specifies which time series should be returned. The filter must specify a single metric type, and can additionally specify metric labels and other information. For example:

metric.type = "compute.googleapis.com/instance/cpu/usage_time" AND
    metric.labels.instance_name = "my-instance-name"
interval

object (TimeInterval)

Required. The time interval for which results should be returned. Only time series that contain data points in the specified interval are included in the response.

aggregation

object (Aggregation)

Specifies the alignment of data points in individual time series as well as how to combine the retrieved time series across specified labels.

By default (if no aggregation is explicitly specified), the raw time series data is returned.

secondaryAggregation

object (Aggregation)

Apply a second aggregation after aggregation is applied. May only be specified if aggregation is specified.

orderBy

string

Unsupported: must be left blank. The points in each time series are currently returned in reverse time order (most recent to oldest).

view

enum (TimeSeriesView)

Required. Specifies which information is returned about the time series.

pageSize

integer

A positive number that is the maximum number of results to return. If page_size is empty or more than 100,000 results, the effective page_size is 100,000 results. If view is set to FULL, this is the maximum number of Points returned. If view is set to HEADERS, this is the maximum number of TimeSeries returned.

pageToken

string

If this field is not empty then it must contain the nextPageToken value returned by a previous call to this method. Using this field causes the method to return additional results from the previous method call.
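
As an illustrative sketch of the pagination flow (all values are placeholders), the arguments for a follow-up page repeat the original query and add pageToken, set to the nextPageToken value returned by the previous call:

{
  "name": "projects/my-project",
  "filter": "metric.type = \"compute.googleapis.com/instance/cpu/usage_time\"",
  "interval": {
    "startTime": "2024-01-01T00:00:00Z",
    "endTime": "2024-01-01T01:00:00Z"
  },
  "view": "FULL",
  "pageToken": "<nextPageToken from the previous response>"
}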

TimeInterval

JSON representation
{
  "endTime": string,
  "startTime": string
}
Fields
endTime

string (Timestamp format)

Required. The end of the time interval.

Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".

startTime

string (Timestamp format)

Optional. The beginning of the time interval. The default value for the start time is the end time. The start time must not be later than the end time.

Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".

Timestamp

JSON representation
{
  "seconds": string,
  "nanos": integer
}
Fields
seconds

string (int64 format)

Represents seconds of UTC time since Unix epoch 1970-01-01T00:00:00Z. Must be between -62135596800 and 253402300799 inclusive (which corresponds to 0001-01-01T00:00:00Z to 9999-12-31T23:59:59Z).

nanos

integer

Non-negative fractions of a second at nanosecond resolution. This field is the nanosecond portion of the duration, not an alternative to seconds. Negative second values with fractions must still have non-negative nanos values that count forward in time. Must be between 0 and 999,999,999 inclusive.

Aggregation

JSON representation
{
  "alignmentPeriod": string,
  "perSeriesAligner": enum (Aligner),
  "crossSeriesReducer": enum (Reducer),
  "groupByFields": [
    string
  ]
}
Fields
alignmentPeriod

string (Duration format)

The alignment_period specifies a time interval, in seconds, that is used to divide the data in all the time series into consistent blocks of time. This will be done before the per-series aligner can be applied to the data.

The value must be at least 60 seconds. If a per-series aligner other than ALIGN_NONE is specified, this field is required or an error is returned. If no per-series aligner is specified, or the aligner ALIGN_NONE is specified, then this field is ignored.

The maximum value of the alignment_period is 104 weeks (2 years) for charts, and 90,000 seconds (25 hours) for alerting policies.

A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".

perSeriesAligner

enum (Aligner)

An Aligner describes how to bring the data points in a single time series into temporal alignment. Except for ALIGN_NONE, all alignments cause all the data points in an alignment_period to be mathematically grouped together, resulting in a single data point for each alignment_period with end timestamp at the end of the period.

Not all alignment operations may be applied to all time series. The valid choices depend on the metric_kind and value_type of the original time series. Alignment can change the metric_kind or the value_type of the time series.

Time series data must be aligned in order to perform cross-time series reduction. If cross_series_reducer is specified, then per_series_aligner must be specified and not equal to ALIGN_NONE and alignment_period must be specified; otherwise, an error is returned.

crossSeriesReducer

enum (Reducer)

The reduction operation to be used to combine time series into a single time series, where the value of each data point in the resulting series is a function of all the already aligned values in the input time series.

Not all reducer operations can be applied to all time series. The valid choices depend on the metric_kind and the value_type of the original time series. Reduction can yield a time series with a different metric_kind or value_type than the input time series.

Time series data must first be aligned (see per_series_aligner) in order to perform cross-time series reduction. If cross_series_reducer is specified, then per_series_aligner must be specified, and must not be ALIGN_NONE. An alignment_period must also be specified; otherwise, an error is returned.

groupByFields[]

string

The set of fields to preserve when cross_series_reducer is specified. The group_by_fields determine how the time series are partitioned into subsets prior to applying the aggregation operation. Each subset contains time series that have the same value for each of the grouping fields. Each individual time series is a member of exactly one subset. The cross_series_reducer is applied to each subset of time series. It is not possible to reduce across different resource types, so this field implicitly contains resource.type. Fields not specified in group_by_fields are aggregated away. If group_by_fields is not specified and all the time series have the same resource type, then the time series are aggregated into a single output time series. If cross_series_reducer is not defined, this field is ignored.
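
As one illustrative combination (ALIGN_MEAN and REDUCE_MEAN are members of the Aligner and Reducer enums; the grouping field is a placeholder), an Aggregation that averages each series into 5-minute points and then averages those aligned series per zone could look like:

{
  "alignmentPeriod": "300s",
  "perSeriesAligner": "ALIGN_MEAN",
  "crossSeriesReducer": "REDUCE_MEAN",
  "groupByFields": [
    "resource.labels.zone"
  ]
}

Because crossSeriesReducer is set, both perSeriesAligner (not ALIGN_NONE) and alignmentPeriod are also set, as required above.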

Duration

JSON representation
{
  "seconds": string,
  "nanos": integer
}
Fields
seconds

string (int64 format)

Signed seconds of the span of time. Must be from -315,576,000,000 to +315,576,000,000 inclusive. Note: these bounds are computed from: 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years

nanos

integer

Signed fractions of a second at nanosecond resolution of the span of time. Durations less than one second are represented with a 0 seconds field and a positive or negative nanos field. For durations of one second or more, a non-zero value for the nanos field must be of the same sign as the seconds field. Must be from -999,999,999 to +999,999,999 inclusive.

Output Schema

The ListTimeSeries response.

ListTimeSeriesResponse

JSON representation
{
  "timeSeries": [
    {
      object (TimeSeries)
    }
  ],
  "nextPageToken": string,
  "executionErrors": [
    {
      object (Status)
    }
  ],
  "unit": string,
  "unreachable": [
    string
  ]
}
Fields
timeSeries[]

object (TimeSeries)

One or more time series that match the filter included in the request.

nextPageToken

string

If there are more results than have been returned, then this field is set to a non-empty value. To see the additional results, use that value as page_token in the next call to this method.

executionErrors[]

object (Status)

Query execution errors that may have caused the time series data returned to be incomplete.

unit

string

The unit in which all time_series point values are reported. unit follows the UCUM format for units as seen in https://unitsofmeasure.org/ucum.html. If different time_series have different units (for example, because they come from different metric types, or a unit is absent), then unit will be "{not_a_unit}".

unreachable[]

string

Cloud regions that were unreachable which may have caused incomplete data to be returned.
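
A trimmed, illustrative response that follows this schema is sketched below; every value, including the metric kind, value type, and unit, is a placeholder rather than real data.

{
  "timeSeries": [
    {
      "metric": {
        "type": "compute.googleapis.com/instance/cpu/usage_time",
        "labels": {
          "instance_name": "my-instance-name"
        }
      },
      "resource": {
        "type": "gce_instance",
        "labels": {
          "project_id": "my-project",
          "instance_id": "1234567890123456789",
          "zone": "us-central1-a"
        }
      },
      "metricKind": "DELTA",
      "valueType": "DOUBLE",
      "points": [
        {
          "interval": {
            "startTime": "2024-01-01T00:55:00Z",
            "endTime": "2024-01-01T01:00:00Z"
          },
          "value": {
            "doubleValue": 12.34
          }
        }
      ],
      "unit": "s{CPU}"
    }
  ]
}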

TimeSeries

JSON representation
{
  "metric": {
    object (Metric)
  },
  "resource": {
    object (MonitoredResource)
  },
  "metadata": {
    object (MonitoredResourceMetadata)
  },
  "metricKind": enum (MetricKind),
  "valueType": enum (ValueType),
  "points": [
    {
      object (Point)
    }
  ],
  "unit": string,
  "description": string
}
Fields
metric

object (Metric)

The associated metric. A fully-specified metric used to identify the time series.

resource

object (MonitoredResource)

The associated monitored resource. Custom metrics can use only certain monitored resource types in their time series data. For more information, see Monitored resources for custom metrics.

metadata

object (MonitoredResourceMetadata)

Output only. The associated monitored resource metadata. When reading a time series, this field will include metadata labels that are explicitly named in the reduction. When creating a time series, this field is ignored.

metricKind

enum (MetricKind)

The metric kind of the time series. When listing time series, this metric kind might be different from the metric kind of the associated metric if this time series is an alignment or reduction of other time series.

When creating a time series, this field is optional. If present, it must be the same as the metric kind of the associated metric. If the associated metric's descriptor must be auto-created, then this field specifies the metric kind of the new descriptor and must be either GAUGE (the default) or CUMULATIVE.

valueType

enum (ValueType)

The value type of the time series. When listing time series, this value type might be different from the value type of the associated metric if this time series is an alignment or reduction of other time series.

When creating a time series, this field is optional. If present, it must be the same as the type of the data in the points field.

points[]

object (Point)

The data points of this time series. When listing time series, points are returned in reverse time order.

When creating a time series, this field must contain exactly one point and the point's type must be the same as the value type of the associated metric. If the associated metric's descriptor must be auto-created, then the value type of the descriptor is determined by the point's type, which must be BOOL, INT64, DOUBLE, or DISTRIBUTION.

unit

string

The units in which the metric value is reported. It is only applicable if the value_type is INT64, DOUBLE, or DISTRIBUTION. The unit defines the representation of the stored metric values. This field can only be changed through CreateTimeSeries when it is empty.

description

string

Input only. A detailed description of the time series that will be associated with the google.api.MetricDescriptor for the metric. Once set, this field cannot be changed through CreateTimeSeries.

Metric

JSON representation
{
  "type": string,
  "labels": {
    string: string,
    ...
  }
}
Fields
type

string

An existing metric type, see google.api.MetricDescriptor. For example, custom.googleapis.com/invoice/paid/amount.

labels

map (key: string, value: string)

The set of label values that uniquely identify this metric. All labels listed in the MetricDescriptor must be assigned values.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

LabelsEntry

JSON representation
{
  "key": string,
  "value": string
}
Fields
key

string

value

string

MonitoredResource

JSON representation
{
  "type": string,
  "labels": {
    string: string,
    ...
  }
}
Fields
type

string

Required. The monitored resource type. This field must match the type field of a MonitoredResourceDescriptor object. For example, the type of a Compute Engine VM instance is gce_instance. For a list of types, see Monitoring resource types and Logging resource types.

labels

map (key: string, value: string)

Required. Values for all of the labels listed in the associated monitored resource descriptor. For example, Compute Engine VM instances use the labels "project_id", "instance_id", and "zone".

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

LabelsEntry

JSON representation
{
  "key": string,
  "value": string
}
Fields
key

string

value

string

MonitoredResourceMetadata

JSON representation
{
  "systemLabels": {
    object
  },
  "userLabels": {
    string: string,
    ...
  }
}
Fields
systemLabels

object (Struct format)

Output only. Values for predefined system metadata labels. System labels are a kind of metadata extracted by Google, including "machine_image", "vpc", "subnet_id", "security_group", "name", etc. System label values can be only strings, Boolean values, or a list of strings. For example:

{ "name": "my-test-instance",
  "security_group": ["a", "b", "c"],
  "spot_instance": false }
userLabels

map (key: string, value: string)

Output only. A map of user-defined metadata labels.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

Struct

JSON representation
{
  "fields": {
    string: value,
    ...
  }
}
Fields
fields

map (key: string, value: value (Value format))

Unordered map of dynamically typed values.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

FieldsEntry

JSON representation
{
  "key": string,
  "value": value
}
Fields
key

string

value

value (Value format)

Value

JSON representation
{

  // Union field kind can be only one of the following:
  "nullValue": null,
  "numberValue": number,
  "stringValue": string,
  "boolValue": boolean,
  "structValue": {
    object
  },
  "listValue": array
  // End of list of possible types for union field kind.
}
Fields
Union field kind. The kind of value. kind can be only one of the following:
nullValue

null

Represents a null value.

numberValue

number

Represents a double value.

stringValue

string

Represents a string value.

boolValue

boolean

Represents a boolean value.

structValue

object (Struct format)

Represents a structured value.

listValue

array (ListValue format)

Represents a repeated Value.

ListValue

JSON representation
{
  "values": [
    value
  ]
}
Fields
values[]

value (Value format)

Repeated field of dynamically typed values.

UserLabelsEntry

JSON representation
{
  "key": string,
  "value": string
}
Fields
key

string

value

string

Point

JSON representation
{
  "interval": {
    object (TimeInterval)
  },
  "value": {
    object (TypedValue)
  }
}
Fields
interval

object (TimeInterval)

The time interval to which the data point applies. For GAUGE metrics, the start time is optional, but if it is supplied, it must equal the end time. For DELTA metrics, the start and end time should specify a non-zero interval, with subsequent points specifying contiguous and non-overlapping intervals. For CUMULATIVE metrics, the start and end time should specify a non-zero interval, with subsequent points specifying the same start time and increasing end times, until an event resets the cumulative value to zero and sets a new start time for the following points.

value

object (TypedValue)

The value of the data point.
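
To illustrate the CUMULATIVE case with placeholder values, two consecutive points from the same time series share a start time and have increasing end times; they are listed most recent first, matching the order in which this method returns points.

[
  {
    "interval": {
      "startTime": "2024-01-01T00:00:00Z",
      "endTime": "2024-01-01T00:10:00Z"
    },
    "value": {
      "int64Value": "250"
    }
  },
  {
    "interval": {
      "startTime": "2024-01-01T00:00:00Z",
      "endTime": "2024-01-01T00:05:00Z"
    },
    "value": {
      "int64Value": "100"
    }
  }
]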

TimeInterval

JSON representation
{
  "endTime": string,
  "startTime": string
}
Fields
endTime

string (Timestamp format)

Required. The end of the time interval.

Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".

startTime

string (Timestamp format)

Optional. The beginning of the time interval. The default value for the start time is the end time. The start time must not be later than the end time.

Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".

Timestamp

JSON representation
{
  "seconds": string,
  "nanos": integer
}
Fields
seconds

string (int64 format)

Represents seconds of UTC time since Unix epoch 1970-01-01T00:00:00Z. Must be between -62135596800 and 253402300799 inclusive (which corresponds to 0001-01-01T00:00:00Z to 9999-12-31T23:59:59Z).

nanos

integer

Non-negative fractions of a second at nanosecond resolution. This field is the nanosecond portion of the duration, not an alternative to seconds. Negative second values with fractions must still have non-negative nanos values that count forward in time. Must be between 0 and 999,999,999 inclusive.

TypedValue

JSON representation
{

  // Union field value can be only one of the following:
  "boolValue": boolean,
  "int64Value": string,
  "doubleValue": number,
  "stringValue": string,
  "distributionValue": {
    object (Distribution)
  }
  // End of list of possible types for union field value.
}
Fields
Union field value. The typed value field. value can be only one of the following:
boolValue

boolean

A Boolean value: true or false.

int64Value

string (int64 format)

A 64-bit integer. Its range is approximately ±9.2x10^18.

doubleValue

number

A 64-bit double-precision floating-point number. Its magnitude is approximately ±10^±300 and it has 16 significant digits of precision.

stringValue

string

A variable-length string value.

distributionValue

object (Distribution)

A distribution value.

Distribution

JSON representation
{
  "count": string,
  "mean": number,
  "sumOfSquaredDeviation": number,
  "range": {
    object (Range)
  },
  "bucketOptions": {
    object (BucketOptions)
  },
  "bucketCounts": [
    string
  ],
  "exemplars": [
    {
      object (Exemplar)
    }
  ]
}
Fields
count

string (int64 format)

The number of values in the population. Must be non-negative. This value must equal the sum of the values in bucket_counts if a histogram is provided.

mean

number

The arithmetic mean of the values in the population. If count is zero then this field must be zero.

sumOfSquaredDeviation

number

The sum of squared deviations from the mean of the values in the population. For values x_i this is:

Sum[i=1..n]((x_i - mean)^2)

Knuth, "The Art of Computer Programming", Vol. 2, page 232, 3rd edition describes Welford's method for accumulating this sum in one pass.

If count is zero then this field must be zero.

range

object (Range)

If specified, contains the range of the population values. The field must not be present if the count is zero. This field is presently ignored by the Cloud Monitoring API v3.

bucketOptions

object (BucketOptions)

Required in the Cloud Monitoring API v3. Defines the histogram bucket boundaries.

bucketCounts[]

string (int64 format)

Required in the Cloud Monitoring API v3. The values for each bucket specified in bucket_options. The sum of the values in bucketCounts must equal the value in the count field of the Distribution object. The order of the bucket counts follows the numbering schemes described for the three bucket types. The underflow bucket has number 0; the finite buckets, if any, have numbers 1 through N-2; and the overflow bucket has number N-1. The size of bucket_counts must not be greater than N. If the size is less than N, then the remaining buckets are assigned values of zero.

exemplars[]

object (Exemplar)

Must be in increasing order of value field.
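
The sketch below (placeholder numbers) illustrates the bucket numbering described for bucketCounts: the bounds [1, 5, 10] define four buckets, namely the underflow bucket for values below 1, the finite buckets [1, 5) and [5, 10), and the overflow bucket for values of 10 and above. The four bucketCounts entries follow that order and sum to count; the mean and sumOfSquaredDeviation values are illustrative only.

{
  "count": "10",
  "mean": 6.2,
  "sumOfSquaredDeviation": 48.5,
  "bucketOptions": {
    "explicitBuckets": {
      "bounds": [1, 5, 10]
    }
  },
  "bucketCounts": ["0", "3", "5", "2"]
}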

Range

JSON representation
{
  "min": number,
  "max": number
}
Fields
min

number

The minimum of the population values.

max

number

The maximum of the population values.

BucketOptions

JSON representation
{

  // Union field options can be only one of the following:
  "linearBuckets": {
    object (Linear)
  },
  "exponentialBuckets": {
    object (Exponential)
  },
  "explicitBuckets": {
    object (Explicit)
  }
  // End of list of possible types for union field options.
}
Fields
Union field options. Exactly one of these three fields must be set. options can be only one of the following:
linearBuckets

object (Linear)

The linear bucket.

exponentialBuckets

object (Exponential)

The exponential buckets.

explicitBuckets

object (Explicit)

The explicit buckets.

Linear

JSON representation
{
  "numFiniteBuckets": integer,
  "width": number,
  "offset": number
}
Fields
numFiniteBuckets

integer

Must be greater than 0.

width

number

Must be greater than 0.

offset

number

Lower bound of the first bucket.
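
Based on the standard Distribution definition of linear buckets (a numFiniteBuckets value of M yields M + 2 buckets in total), the placeholder spec below produces an underflow bucket for values below 0, finite buckets [0, 10), [10, 20), and [20, 30), and an overflow bucket for values of 30 and above.

{
  "numFiniteBuckets": 3,
  "width": 10,
  "offset": 0
}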

Exponential

JSON representation
{
  "numFiniteBuckets": integer,
  "growthFactor": number,
  "scale": number
}
Fields
numFiniteBuckets

integer

Must be greater than 0.

growthFactor

number

Must be greater than 1.

scale

number

Must be greater than 0.
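
Similarly, based on the standard Distribution definition of exponential buckets (finite bucket i spans [scale * growthFactor^(i-1), scale * growthFactor^i)), the placeholder spec below produces an underflow bucket for values below 1, finite buckets [1, 2), [2, 4), and [4, 8), and an overflow bucket for values of 8 and above.

{
  "numFiniteBuckets": 3,
  "growthFactor": 2,
  "scale": 1
}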

Explicit

JSON representation
{
  "bounds": [
    number
  ]
}
Fields
bounds[]

number

The values must be monotonically increasing.

Exemplar

JSON representation
{
  "value": number,
  "timestamp": string,
  "attachments": [
    {
      "@type": string,
      field1: ...,
      ...
    }
  ]
}
Fields
value

number

Value of the exemplar point. This value determines to which bucket the exemplar belongs.

timestamp

string (Timestamp format)

The observation (sampling) time of the above value.

Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".

attachments[]

object

Contextual information about the example value. Examples are:

Trace: type.googleapis.com/google.monitoring.v3.SpanContext

Literal string: type.googleapis.com/google.protobuf.StringValue

Labels dropped during aggregation: type.googleapis.com/google.monitoring.v3.DroppedLabels

There may be only a single attachment of any given message type in a single exemplar, and this is enforced by the system.

An object containing fields of an arbitrary type. An additional field "@type" contains a URI identifying the type. Example: { "id": 1234, "@type": "types.example.com/standard/id" }.

Any

JSON representation
{
  "typeUrl": string,
  "value": string
}
Fields
typeUrl

string

A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one "/" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration). The name should be in a canonical form (e.g., leading "." is not accepted).

In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http, https, or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows:

  • If no scheme is provided, https is assumed.
  • An HTTP GET on the URL must yield a google.protobuf.Type value in binary format, or produce an error.
  • Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.)

Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one.

Schemes other than http, https (or the empty scheme) might be used with implementation specific semantics.

value

string (bytes format)

Must be a valid serialized protocol buffer of the above specified type.

A base64-encoded string.

Status

JSON representation
{
  "code": integer,
  "message": string,
  "details": [
    {
      "@type": string,
      field1: ...,
      ...
    }
  ]
}
Fields
code

integer

The status code, which should be an enum value of google.rpc.Code.

message

string

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

details[]

object

A list of messages that carry the error details. There is a common set of message types for APIs to use.

An object containing fields of an arbitrary type. An additional field "@type" contains a URI identifying the type. Example: { "id": 1234, "@type": "types.example.com/standard/id" }.

Tool Annotations

Destructive Hint: ❌ | Idempotent Hint: ✅ | Read Only Hint: ✅ | Open World Hint: ❌