Reference documentation and code samples for the Google Cloud Speech v1p1beta1 API class StreamingRecognitionConfig.

```csharp
public sealed class StreamingRecognitionConfig : IMessage<StreamingRecognitionConfig>, IEquatable<StreamingRecognitionConfig>, IDeepCloneable<StreamingRecognitionConfig>, IBufferMessage, IMessage
```
Provides information to the recognizer that specifies how to process the request.
Implements

IMessage&lt;StreamingRecognitionConfig&gt;, IEquatable&lt;StreamingRecognitionConfig&gt;, IDeepCloneable&lt;StreamingRecognitionConfig&gt;, IBufferMessage, IMessage

Namespace

Google.Cloud.Speech.V1P1Beta1

Assembly

Google.Cloud.Speech.V1P1Beta1.dll
Constructors
StreamingRecognitionConfig()

```csharp
public StreamingRecognitionConfig()
```

StreamingRecognitionConfig(StreamingRecognitionConfig)

```csharp
public StreamingRecognitionConfig(StreamingRecognitionConfig other)
```

| Name | Description |
|---|---|
| other | StreamingRecognitionConfig |
Properties
Config
```csharp
public RecognitionConfig Config { get; set; }
```

Required. Provides information to the recognizer that specifies how to process the request.

| Type | Description |
|---|---|
| RecognitionConfig | |
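To show how the required `Config` property is typically populated, here is a minimal sketch, assuming the Google.Cloud.Speech.V1P1Beta1 NuGet package; the encoding, sample rate, and language values are illustrative choices, not requirements of this class.

```csharp
using Google.Cloud.Speech.V1P1Beta1;

// Build a streaming config whose nested RecognitionConfig tells the
// recognizer how to interpret the incoming audio.
var streamingConfig = new StreamingRecognitionConfig
{
    Config = new RecognitionConfig
    {
        Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
        SampleRateHertz = 16000,
        LanguageCode = "en-US"
    },
    // Request tentative hypotheses (is_final=false) as audio is processed.
    InterimResults = true
};
```

This object is sent in the first request of a streaming recognize call, before any audio content.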
EnableVoiceActivityEvents
```csharp
public bool EnableVoiceActivityEvents { get; set; }
```

If true, responses with voice activity speech events will be returned as they are detected.

| Type | Description |
|---|---|
| bool | |
InterimResults
```csharp
public bool InterimResults { get; set; }
```

If true, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the is_final=false flag). If false or omitted, only is_final=true result(s) are returned.

| Type | Description |
|---|---|
| bool | |
SingleUtterance
```csharp
public bool SingleUtterance { get; set; }
```

If false or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. It may return multiple StreamingRecognitionResults with the is_final flag set to true.

If true, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one StreamingRecognitionResult with the is_final flag set to true.

The single_utterance field can only be used with specified models, otherwise an error is thrown. The model field in [RecognitionConfig][google.cloud.speech.v1p1beta1.RecognitionConfig] must be set to one of:

- command_and_search
- phone_call AND the additional field useEnhanced=true
- The model field is left undefined. In this case the API auto-selects a model based on any other parameters that you set in RecognitionConfig.

| Type | Description |
|---|---|
| bool | |
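The model restriction above can be illustrated with a sketch, again assuming the Google.Cloud.Speech.V1P1Beta1 package; pairing SingleUtterance with the phone_call model plus UseEnhanced is one of the combinations the list permits.

```csharp
using Google.Cloud.Speech.V1P1Beta1;

// Single-utterance mode with the enhanced phone_call model: recognition
// stops after the first detected utterance, so at most one is_final=true
// StreamingRecognitionResult is returned.
var streamingConfig = new StreamingRecognitionConfig
{
    Config = new RecognitionConfig
    {
        LanguageCode = "en-US",
        Model = "phone_call",
        UseEnhanced = true  // required when SingleUtterance is used with phone_call
    },
    SingleUtterance = true
};
```

Using command_and_search, or leaving Model unset so the API auto-selects one, would also satisfy the restriction.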
VoiceActivityTimeout
```csharp
public StreamingRecognitionConfig.Types.VoiceActivityTimeout VoiceActivityTimeout { get; set; }
```

If set, the server will automatically close the stream after the specified duration has elapsed after the last VOICE_ACTIVITY speech event has been sent. The field voice_activity_events must also be set to true.

| Type | Description |
|---|---|
| StreamingRecognitionConfig.Types.VoiceActivityTimeout | |
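A sketch of wiring the two voice-activity properties together, assuming the Google.Cloud.Speech.V1P1Beta1 package and that the nested VoiceActivityTimeout message exposes SpeechStartTimeout and SpeechEndTimeout fields (as in the v1p1beta1 proto); the timeout values are arbitrary examples.

```csharp
using System;
using Google.Cloud.Speech.V1P1Beta1;
using Google.Protobuf.WellKnownTypes;

// EnableVoiceActivityEvents must be true for VoiceActivityTimeout to apply.
var streamingConfig = new StreamingRecognitionConfig
{
    Config = new RecognitionConfig { LanguageCode = "en-US" },
    EnableVoiceActivityEvents = true,
    VoiceActivityTimeout = new StreamingRecognitionConfig.Types.VoiceActivityTimeout
    {
        // Close the stream if speech never starts within 5 seconds...
        SpeechStartTimeout = Duration.FromTimeSpan(TimeSpan.FromSeconds(5)),
        // ...or if no speech is detected for 2 seconds after it ends.
        SpeechEndTimeout = Duration.FromTimeSpan(TimeSpan.FromSeconds(2))
    }
};
```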