Reference documentation and code samples for the Cloud Data Loss Prevention (DLP) V2 API class Google::Cloud::Dlp::V2::OutputStorageConfig.
Cloud repository for storing output.
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#output_schema
def output_schema() -> ::Google::Cloud::Dlp::V2::OutputStorageConfig::OutputSchema
Returns
- (::Google::Cloud::Dlp::V2::OutputStorageConfig::OutputSchema) — Schema used for writing the findings for Inspect jobs. This field is only used for Inspect and must be unspecified for Risk jobs. Columns are derived from the Finding object. If appending to an existing table, any columns from the predefined schema that are missing will be added. No columns in the existing table will be deleted. If unspecified, then all available columns will be used for a new table or an (existing) table with no schema, and no changes will be made to an existing table that has a schema. Only for use with external storage.
#output_schema=
def output_schema=(value) -> ::Google::Cloud::Dlp::V2::OutputStorageConfig::OutputSchema
Parameter
- value (::Google::Cloud::Dlp::V2::OutputStorageConfig::OutputSchema) — Schema used for writing the findings for Inspect jobs. This field is only used for Inspect and must be unspecified for Risk jobs. Columns are derived from the Finding object. If appending to an existing table, any columns from the predefined schema that are missing will be added. No columns in the existing table will be deleted. If unspecified, then all available columns will be used for a new table or an (existing) table with no schema, and no changes will be made to an existing table that has a schema. Only for use with external storage.
Returns
- (::Google::Cloud::Dlp::V2::OutputStorageConfig::OutputSchema) — Schema used for writing the findings for Inspect jobs. This field is only used for Inspect and must be unspecified for Risk jobs. Columns are derived from the Finding object. If appending to an existing table, any columns from the predefined schema that are missing will be added. No columns in the existing table will be deleted. If unspecified, then all available columns will be used for a new table or an (existing) table with no schema, and no changes will be made to an existing table that has a schema. Only for use with external storage.
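For illustration, a minimal sketch of setting a predefined schema when findings are written to BigQuery; the project, dataset, and table IDs below are placeholder values:

require "google/cloud/dlp/v2"

# Placeholder destination table.
table = Google::Cloud::Dlp::V2::BigQueryTable.new(
  project_id: "my-project",
  dataset_id: "dlp_results",
  table_id:   "inspect_findings"
)

output_config = Google::Cloud::Dlp::V2::OutputStorageConfig.new(
  table: table,
  # BASIC_COLUMNS limits output to the most common finding columns.
  output_schema: :BASIC_COLUMNS
)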
#storage_path
def storage_path() -> ::Google::Cloud::Dlp::V2::CloudStoragePath
Returns
- (::Google::Cloud::Dlp::V2::CloudStoragePath) — Store findings in an existing Cloud Storage bucket. Files will be generated with the job ID and file part number as the filename and will contain findings in textproto format as SaveToGcsFindingsOutput. The filename will follow the naming convention <job_id>-<shard_number>. Example: my-job-id-2. Supported for Inspect jobs. The bucket must not be the same as the bucket being inspected. If storing findings to Cloud Storage, the output schema field should not be set; if set, it will be ignored.
Note: The following fields are mutually exclusive: storage_path, table. If a field in that set is populated, all other fields in the set will automatically be cleared.
#storage_path=
def storage_path=(value) -> ::Google::Cloud::Dlp::V2::CloudStoragePath
Parameter
- value (::Google::Cloud::Dlp::V2::CloudStoragePath) — Store findings in an existing Cloud Storage bucket. Files will be generated with the job ID and file part number as the filename and will contain findings in textproto format as SaveToGcsFindingsOutput. The filename will follow the naming convention <job_id>-<shard_number>. Example: my-job-id-2. Supported for Inspect jobs. The bucket must not be the same as the bucket being inspected. If storing findings to Cloud Storage, the output schema field should not be set; if set, it will be ignored.
Note: The following fields are mutually exclusive: storage_path, table. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dlp::V2::CloudStoragePath) — Store findings in an existing Cloud Storage bucket. Files will be generated with the job ID and file part number as the filename and will contain findings in textproto format as SaveToGcsFindingsOutput. The filename will follow the naming convention <job_id>-<shard_number>. Example: my-job-id-2. Supported for Inspect jobs. The bucket must not be the same as the bucket being inspected. If storing findings to Cloud Storage, the output schema field should not be set; if set, it will be ignored.
Note: The following fields are mutually exclusive: storage_path, table. If a field in that set is populated, all other fields in the set will automatically be cleared.
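As a sketch of the mutually exclusive behavior described above (the bucket and table identifiers are placeholders), assigning storage_path and then table clears the first field:

require "google/cloud/dlp/v2"

output_config = Google::Cloud::Dlp::V2::OutputStorageConfig.new
output_config.storage_path = Google::Cloud::Dlp::V2::CloudStoragePath.new(
  path: "gs://my-dlp-findings-bucket"
)

# Populating table clears storage_path, since the two fields belong to the
# same mutually exclusive set.
output_config.table = Google::Cloud::Dlp::V2::BigQueryTable.new(
  project_id: "my-project",
  dataset_id: "dlp_results",
  table_id:   "inspect_findings"
)
output_config.storage_path #=> nil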
#table
def table() -> ::Google::Cloud::Dlp::V2::BigQueryTable
Returns
- (::Google::Cloud::Dlp::V2::BigQueryTable) — Store findings in an existing table or a new table in an existing dataset. If table_id is not set, a new one will be generated for you with the following format: dlp_googleapis_yyyy_mm_dd_[dlp_job_id]. The Pacific time zone will be used for generating the date details. For Inspect, each column in an existing output table must have the same name, type, and mode as a field in the Finding object. For Risk, an existing output table should be the output of a previous Risk analysis job run on the same source table, with the same privacy metric and quasi-identifiers. Risk jobs that analyze the same table but compute a different privacy metric, or use different sets of quasi-identifiers, cannot store their results in the same table.
Note: The following fields are mutually exclusive: table, storage_path. If a field in that set is populated, all other fields in the set will automatically be cleared.
#table=
def table=(value) -> ::Google::Cloud::Dlp::V2::BigQueryTable
Parameter
- value (::Google::Cloud::Dlp::V2::BigQueryTable) — Store findings in an existing table or a new table in an existing dataset. If table_id is not set, a new one will be generated for you with the following format: dlp_googleapis_yyyy_mm_dd_[dlp_job_id]. The Pacific time zone will be used for generating the date details. For Inspect, each column in an existing output table must have the same name, type, and mode as a field in the Finding object. For Risk, an existing output table should be the output of a previous Risk analysis job run on the same source table, with the same privacy metric and quasi-identifiers. Risk jobs that analyze the same table but compute a different privacy metric, or use different sets of quasi-identifiers, cannot store their results in the same table.
Note: The following fields are mutually exclusive: table, storage_path. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dlp::V2::BigQueryTable) — Store findings in an existing table or a new table in an existing dataset. If table_id is not set, a new one will be generated for you with the following format: dlp_googleapis_yyyy_mm_dd_[dlp_job_id]. The Pacific time zone will be used for generating the date details. For Inspect, each column in an existing output table must have the same name, type, and mode as a field in the Finding object. For Risk, an existing output table should be the output of a previous Risk analysis job run on the same source table, with the same privacy metric and quasi-identifiers. Risk jobs that analyze the same table but compute a different privacy metric, or use different sets of quasi-identifiers, cannot store their results in the same table.
Note: The following fields are mutually exclusive: table, storage_path. If a field in that set is populated, all other fields in the set will automatically be cleared.
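For context, a sketch of how an OutputStorageConfig built around table is typically attached to an inspect job's SaveFindings action; the IDs are placeholders, and if table_id were omitted the service would generate one in the dlp_googleapis_yyyy_mm_dd_[dlp_job_id] format:

require "google/cloud/dlp/v2"

output_config = Google::Cloud::Dlp::V2::OutputStorageConfig.new(
  table: Google::Cloud::Dlp::V2::BigQueryTable.new(
    project_id: "my-project",
    dataset_id: "dlp_results",
    table_id:   "inspect_findings"
  )
)

# Attach the output configuration to a job action that saves findings.
action = Google::Cloud::Dlp::V2::Action.new(
  save_findings: Google::Cloud::Dlp::V2::Action::SaveFindings.new(
    output_config: output_config
  )
)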