Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataplex/v1.DataScan
Creates a DataScan resource. Auto-naming is currently not supported for this resource.
Create DataScan Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new DataScan(name: string, args: DataScanArgs, opts?: CustomResourceOptions);
@overload
def DataScan(resource_name: str,
args: DataScanArgs,
opts: Optional[ResourceOptions] = None)
@overload
def DataScan(resource_name: str,
opts: Optional[ResourceOptions] = None,
data: Optional[GoogleCloudDataplexV1DataSourceArgs] = None,
data_scan_id: Optional[str] = None,
data_profile_spec: Optional[GoogleCloudDataplexV1DataProfileSpecArgs] = None,
data_quality_spec: Optional[GoogleCloudDataplexV1DataQualitySpecArgs] = None,
description: Optional[str] = None,
display_name: Optional[str] = None,
execution_spec: Optional[GoogleCloudDataplexV1DataScanExecutionSpecArgs] = None,
labels: Optional[Mapping[str, str]] = None,
location: Optional[str] = None,
project: Optional[str] = None)
func NewDataScan(ctx *Context, name string, args DataScanArgs, opts ...ResourceOption) (*DataScan, error)
public DataScan(string name, DataScanArgs args, CustomResourceOptions? opts = null)
public DataScan(String name, DataScanArgs args)
public DataScan(String name, DataScanArgs args, CustomResourceOptions options)
type: google-native:dataplex/v1:DataScan
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args DataScanArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args DataScanArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args DataScanArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args DataScanArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args DataScanArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var dataScanResource = new GoogleNative.Dataplex.V1.DataScan("dataScanResource", new()
{
Data = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataSourceArgs
{
Entity = "string",
Resource = "string",
},
DataScanId = "string",
DataProfileSpec = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecArgs
{
ExcludeFields = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs
{
FieldNames = new[]
{
"string",
},
},
IncludeFields = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs
{
FieldNames = new[]
{
"string",
},
},
PostScanActions = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecPostScanActionsArgs
{
BigqueryExport = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportArgs
{
ResultsTable = "string",
},
},
RowFilter = "string",
SamplingPercent = 0,
},
DataQualitySpec = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecArgs
{
Rules = new[]
{
new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleArgs
{
Dimension = "string",
RangeExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRangeExpectationArgs
{
MaxValue = "string",
MinValue = "string",
StrictMaxEnabled = false,
StrictMinEnabled = false,
},
Description = "string",
IgnoreNull = false,
Name = "string",
NonNullExpectation = null,
Column = "string",
RegexExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRegexExpectationArgs
{
Regex = "string",
},
RowConditionExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationArgs
{
SqlExpression = "string",
},
SetExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleSetExpectationArgs
{
Values = new[]
{
"string",
},
},
StatisticRangeExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationArgs
{
MaxValue = "string",
MinValue = "string",
Statistic = GoogleNative.Dataplex.V1.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic.StatisticUndefined,
StrictMaxEnabled = false,
StrictMinEnabled = false,
},
TableConditionExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationArgs
{
SqlExpression = "string",
},
Threshold = 0,
UniquenessExpectation = null,
},
},
PostScanActions = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecPostScanActionsArgs
{
BigqueryExport = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportArgs
{
ResultsTable = "string",
},
},
RowFilter = "string",
SamplingPercent = 0,
},
Description = "string",
DisplayName = "string",
ExecutionSpec = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataScanExecutionSpecArgs
{
Field = "string",
Trigger = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerArgs
{
OnDemand = null,
Schedule = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerScheduleArgs
{
Cron = "string",
},
},
},
Labels =
{
{ "string", "string" },
},
Location = "string",
Project = "string",
});
example, err := dataplex.NewDataScan(ctx, "dataScanResource", &dataplex.DataScanArgs{
Data: &dataplex.GoogleCloudDataplexV1DataSourceArgs{
Entity: pulumi.String("string"),
Resource: pulumi.String("string"),
},
DataScanId: pulumi.String("string"),
DataProfileSpec: &dataplex.GoogleCloudDataplexV1DataProfileSpecArgs{
ExcludeFields: &dataplex.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs{
FieldNames: pulumi.StringArray{
pulumi.String("string"),
},
},
IncludeFields: &dataplex.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs{
FieldNames: pulumi.StringArray{
pulumi.String("string"),
},
},
PostScanActions: &dataplex.GoogleCloudDataplexV1DataProfileSpecPostScanActionsArgs{
BigqueryExport: &dataplex.GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportArgs{
ResultsTable: pulumi.String("string"),
},
},
RowFilter: pulumi.String("string"),
SamplingPercent: pulumi.Float64(0),
},
DataQualitySpec: &dataplex.GoogleCloudDataplexV1DataQualitySpecArgs{
Rules: dataplex.GoogleCloudDataplexV1DataQualityRuleArray{
&dataplex.GoogleCloudDataplexV1DataQualityRuleArgs{
Dimension: pulumi.String("string"),
RangeExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleRangeExpectationArgs{
MaxValue: pulumi.String("string"),
MinValue: pulumi.String("string"),
StrictMaxEnabled: pulumi.Bool(false),
StrictMinEnabled: pulumi.Bool(false),
},
Description: pulumi.String("string"),
IgnoreNull: pulumi.Bool(false),
Name: pulumi.String("string"),
NonNullExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleNonNullExpectationArgs{},
Column: pulumi.String("string"),
RegexExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleRegexExpectationArgs{
Regex: pulumi.String("string"),
},
RowConditionExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationArgs{
SqlExpression: pulumi.String("string"),
},
SetExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleSetExpectationArgs{
Values: pulumi.StringArray{
pulumi.String("string"),
},
},
StatisticRangeExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationArgs{
MaxValue: pulumi.String("string"),
MinValue: pulumi.String("string"),
Statistic: dataplex.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticStatisticUndefined,
StrictMaxEnabled: pulumi.Bool(false),
StrictMinEnabled: pulumi.Bool(false),
},
TableConditionExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationArgs{
SqlExpression: pulumi.String("string"),
},
Threshold: pulumi.Float64(0),
UniquenessExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationArgs{},
},
},
PostScanActions: &dataplex.GoogleCloudDataplexV1DataQualitySpecPostScanActionsArgs{
BigqueryExport: &dataplex.GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportArgs{
ResultsTable: pulumi.String("string"),
},
},
RowFilter: pulumi.String("string"),
SamplingPercent: pulumi.Float64(0),
},
Description: pulumi.String("string"),
DisplayName: pulumi.String("string"),
ExecutionSpec: &dataplex.GoogleCloudDataplexV1DataScanExecutionSpecArgs{
Field: pulumi.String("string"),
Trigger: &dataplex.GoogleCloudDataplexV1TriggerArgs{
OnDemand: &dataplex.GoogleCloudDataplexV1TriggerOnDemandArgs{},
Schedule: &dataplex.GoogleCloudDataplexV1TriggerScheduleArgs{
Cron: pulumi.String("string"),
},
},
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Location: pulumi.String("string"),
Project: pulumi.String("string"),
})
var dataScanResource = new DataScan("dataScanResource", DataScanArgs.builder()
.data(GoogleCloudDataplexV1DataSourceArgs.builder()
.entity("string")
.resource("string")
.build())
.dataScanId("string")
.dataProfileSpec(GoogleCloudDataplexV1DataProfileSpecArgs.builder()
.excludeFields(GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs.builder()
.fieldNames("string")
.build())
.includeFields(GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs.builder()
.fieldNames("string")
.build())
.postScanActions(GoogleCloudDataplexV1DataProfileSpecPostScanActionsArgs.builder()
.bigqueryExport(GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportArgs.builder()
.resultsTable("string")
.build())
.build())
.rowFilter("string")
.samplingPercent(0)
.build())
.dataQualitySpec(GoogleCloudDataplexV1DataQualitySpecArgs.builder()
.rules(GoogleCloudDataplexV1DataQualityRuleArgs.builder()
.dimension("string")
.rangeExpectation(GoogleCloudDataplexV1DataQualityRuleRangeExpectationArgs.builder()
.maxValue("string")
.minValue("string")
.strictMaxEnabled(false)
.strictMinEnabled(false)
.build())
.description("string")
.ignoreNull(false)
.name("string")
.nonNullExpectation()
.column("string")
.regexExpectation(GoogleCloudDataplexV1DataQualityRuleRegexExpectationArgs.builder()
.regex("string")
.build())
.rowConditionExpectation(GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationArgs.builder()
.sqlExpression("string")
.build())
.setExpectation(GoogleCloudDataplexV1DataQualityRuleSetExpectationArgs.builder()
.values("string")
.build())
.statisticRangeExpectation(GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationArgs.builder()
.maxValue("string")
.minValue("string")
.statistic("STATISTIC_UNDEFINED")
.strictMaxEnabled(false)
.strictMinEnabled(false)
.build())
.tableConditionExpectation(GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationArgs.builder()
.sqlExpression("string")
.build())
.threshold(0)
.uniquenessExpectation()
.build())
.postScanActions(GoogleCloudDataplexV1DataQualitySpecPostScanActionsArgs.builder()
.bigqueryExport(GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportArgs.builder()
.resultsTable("string")
.build())
.build())
.rowFilter("string")
.samplingPercent(0)
.build())
.description("string")
.displayName("string")
.executionSpec(GoogleCloudDataplexV1DataScanExecutionSpecArgs.builder()
.field("string")
.trigger(GoogleCloudDataplexV1TriggerArgs.builder()
.onDemand()
.schedule(GoogleCloudDataplexV1TriggerScheduleArgs.builder()
.cron("string")
.build())
.build())
.build())
.labels(Map.of("string", "string"))
.location("string")
.project("string")
.build());
data_scan_resource = google_native.dataplex.v1.DataScan("dataScanResource",
data={
"entity": "string",
"resource": "string",
},
data_scan_id="string",
data_profile_spec={
"exclude_fields": {
"field_names": ["string"],
},
"include_fields": {
"field_names": ["string"],
},
"post_scan_actions": {
"bigquery_export": {
"results_table": "string",
},
},
"row_filter": "string",
"sampling_percent": 0,
},
data_quality_spec={
"rules": [{
"dimension": "string",
"range_expectation": {
"max_value": "string",
"min_value": "string",
"strict_max_enabled": False,
"strict_min_enabled": False,
},
"description": "string",
"ignore_null": False,
"name": "string",
"non_null_expectation": {},
"column": "string",
"regex_expectation": {
"regex": "string",
},
"row_condition_expectation": {
"sql_expression": "string",
},
"set_expectation": {
"values": ["string"],
},
"statistic_range_expectation": {
"max_value": "string",
"min_value": "string",
"statistic": google_native.dataplex.v1.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic.STATISTIC_UNDEFINED,
"strict_max_enabled": False,
"strict_min_enabled": False,
},
"table_condition_expectation": {
"sql_expression": "string",
},
"threshold": 0,
"uniqueness_expectation": {},
}],
"post_scan_actions": {
"bigquery_export": {
"results_table": "string",
},
},
"row_filter": "string",
"sampling_percent": 0,
},
description="string",
display_name="string",
execution_spec={
"field": "string",
"trigger": {
"on_demand": {},
"schedule": {
"cron": "string",
},
},
},
labels={
"string": "string",
},
location="string",
project="string")
const dataScanResource = new google_native.dataplex.v1.DataScan("dataScanResource", {
data: {
entity: "string",
resource: "string",
},
dataScanId: "string",
dataProfileSpec: {
excludeFields: {
fieldNames: ["string"],
},
includeFields: {
fieldNames: ["string"],
},
postScanActions: {
bigqueryExport: {
resultsTable: "string",
},
},
rowFilter: "string",
samplingPercent: 0,
},
dataQualitySpec: {
rules: [{
dimension: "string",
rangeExpectation: {
maxValue: "string",
minValue: "string",
strictMaxEnabled: false,
strictMinEnabled: false,
},
description: "string",
ignoreNull: false,
name: "string",
nonNullExpectation: {},
column: "string",
regexExpectation: {
regex: "string",
},
rowConditionExpectation: {
sqlExpression: "string",
},
setExpectation: {
values: ["string"],
},
statisticRangeExpectation: {
maxValue: "string",
minValue: "string",
statistic: google_native.dataplex.v1.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic.StatisticUndefined,
strictMaxEnabled: false,
strictMinEnabled: false,
},
tableConditionExpectation: {
sqlExpression: "string",
},
threshold: 0,
uniquenessExpectation: {},
}],
postScanActions: {
bigqueryExport: {
resultsTable: "string",
},
},
rowFilter: "string",
samplingPercent: 0,
},
description: "string",
displayName: "string",
executionSpec: {
field: "string",
trigger: {
onDemand: {},
schedule: {
cron: "string",
},
},
},
labels: {
string: "string",
},
location: "string",
project: "string",
});
type: google-native:dataplex/v1:DataScan
properties:
data:
entity: string
resource: string
dataProfileSpec:
excludeFields:
fieldNames:
- string
includeFields:
fieldNames:
- string
postScanActions:
bigqueryExport:
resultsTable: string
rowFilter: string
samplingPercent: 0
dataQualitySpec:
postScanActions:
bigqueryExport:
resultsTable: string
rowFilter: string
rules:
- column: string
description: string
dimension: string
ignoreNull: false
name: string
nonNullExpectation: {}
rangeExpectation:
maxValue: string
minValue: string
strictMaxEnabled: false
strictMinEnabled: false
regexExpectation:
regex: string
rowConditionExpectation:
sqlExpression: string
setExpectation:
values:
- string
statisticRangeExpectation:
maxValue: string
minValue: string
statistic: STATISTIC_UNDEFINED
strictMaxEnabled: false
strictMinEnabled: false
tableConditionExpectation:
sqlExpression: string
threshold: 0
uniquenessExpectation: {}
samplingPercent: 0
dataScanId: string
description: string
displayName: string
executionSpec:
field: string
trigger:
onDemand: {}
schedule:
cron: string
labels:
string: string
location: string
project: string
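Beyond the placeholder reference above, the following sketch shows a more realistic Python program. It assumes a BigQuery table as the data source; the project, dataset, table, rule, and cron values are illustrative and should be adjusted for your environment.
import pulumi
import pulumi_google_native as google_native

# A scheduled data quality scan over a BigQuery table (illustrative values throughout).
orders_scan = google_native.dataplex.v1.DataScan("ordersQualityScan",
    # Auto-naming is not supported, so the DataScan identifier must be set explicitly.
    data_scan_id="orders-quality-scan",
    project="my-project",
    location="us-central1",
    data={
        # Assumed BigQuery resource path format; point this at your own table.
        "resource": "//bigquery.googleapis.com/projects/my-project/datasets/sales/tables/orders",
    },
    data_quality_spec={
        "rules": [{
            "column": "order_id",
            "dimension": "COMPLETENESS",
            "non_null_expectation": {},
            "threshold": 1,
        }],
        "sampling_percent": 10,
    },
    execution_spec={
        "trigger": {
            # Run the scan every day at 03:00.
            "schedule": {"cron": "0 3 * * *"},
        },
    },
    labels={"team": "data-platform"})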
DataScan Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
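For example, the data source input can be written either way; the two forms below are equivalent sketches (the args class path follows the constructor signature above, and the table path is a placeholder):
import pulumi_google_native as google_native

# As a typed args class ...
source = google_native.dataplex.v1.GoogleCloudDataplexV1DataSourceArgs(
    resource="//bigquery.googleapis.com/projects/my-project/datasets/sales/tables/orders")

# ... or as a plain dictionary literal with snake_case keys.
source = {
    "resource": "//bigquery.googleapis.com/projects/my-project/datasets/sales/tables/orders",
}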
The DataScan resource accepts the following input properties:
- Data Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataSource
- The data source for DataScan.
- DataScanId string
- Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- DataProfileSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpec
- DataProfileScan related setting.
- DataQualitySpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpec
- DataQualityScan related setting.
- Description string
- Optional. Description of the scan. Must be between 1-1024 characters.
- DisplayName string
- Optional. User friendly display name. Must be between 1-256 characters.
- ExecutionSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataScanExecutionSpec
- Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- Labels Dictionary<string, string>
- Optional. User-defined labels for the scan.
- Location string
- Project string
- Data GoogleCloudDataplexV1DataSourceArgs
- The data source for DataScan.
- DataScanId string
- Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- DataProfileSpec GoogleCloudDataplexV1DataProfileSpecArgs
- DataProfileScan related setting.
- DataQualitySpec GoogleCloudDataplexV1DataQualitySpecArgs
- DataQualityScan related setting.
- Description string
- Optional. Description of the scan. Must be between 1-1024 characters.
- DisplayName string
- Optional. User friendly display name. Must be between 1-256 characters.
- ExecutionSpec GoogleCloudDataplexV1DataScanExecutionSpecArgs
- Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- Labels map[string]string
- Optional. User-defined labels for the scan.
- Location string
- Project string
- data GoogleCloudDataplexV1DataSource
- The data source for DataScan.
- dataScanId String
- Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- dataProfileSpec GoogleCloudDataplexV1DataProfileSpec
- DataProfileScan related setting.
- dataQualitySpec GoogleCloudDataplexV1DataQualitySpec
- DataQualityScan related setting.
- description String
- Optional. Description of the scan. Must be between 1-1024 characters.
- displayName String
- Optional. User friendly display name. Must be between 1-256 characters.
- executionSpec GoogleCloudDataplexV1DataScanExecutionSpec
- Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- labels Map<String,String>
- Optional. User-defined labels for the scan.
- location String
- project String
- data GoogleCloudDataplexV1DataSource
- The data source for DataScan.
- dataScanId string
- Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- dataProfileSpec GoogleCloudDataplexV1DataProfileSpec
- DataProfileScan related setting.
- dataQualitySpec GoogleCloudDataplexV1DataQualitySpec
- DataQualityScan related setting.
- description string
- Optional. Description of the scan. Must be between 1-1024 characters.
- displayName string
- Optional. User friendly display name. Must be between 1-256 characters.
- executionSpec GoogleCloudDataplexV1DataScanExecutionSpec
- Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- labels {[key: string]: string}
- Optional. User-defined labels for the scan.
- location string
- project string
- data GoogleCloudDataplexV1DataSourceArgs
- The data source for DataScan.
- data_scan_id str
- Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- data_profile_spec GoogleCloudDataplexV1DataProfileSpecArgs
- DataProfileScan related setting.
- data_quality_spec GoogleCloudDataplexV1DataQualitySpecArgs
- DataQualityScan related setting.
- description str
- Optional. Description of the scan. Must be between 1-1024 characters.
- display_name str
- Optional. User friendly display name. Must be between 1-256 characters.
- execution_spec GoogleCloudDataplexV1DataScanExecutionSpecArgs
- Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- labels Mapping[str, str]
- Optional. User-defined labels for the scan.
- location str
- project str
- data Property Map
- The data source for DataScan.
- dataScanId String
- Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
- dataProfileSpec Property Map
- DataProfileScan related setting.
- dataQualitySpec Property Map
- DataQualityScan related setting.
- description String
- Optional. Description of the scan. Must be between 1-1024 characters.
- displayName String
- Optional. User friendly display name. Must be between 1-256 characters.
- executionSpec Property Map
- Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
- labels Map<String>
- Optional. User-defined labels for the scan.
- location String
- project String
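The dataScanId constraints listed above (lowercase letters, digits and hyphens; starts with a letter; ends with a letter or digit; 1-63 characters) can be checked locally before a deployment. The helper below is a hypothetical sketch, not part of the provider:
import re

# Mirrors the documented dataScanId rules; the helper name and regex are illustrative.
_DATA_SCAN_ID_RE = re.compile(r"[a-z]([a-z0-9-]{0,61}[a-z0-9])?")

def is_valid_data_scan_id(candidate: str) -> bool:
    return _DATA_SCAN_ID_RE.fullmatch(candidate) is not None

assert is_valid_data_scan_id("orders-quality-scan")
assert not is_valid_data_scan_id("Orders-Scan")  # uppercase is not allowed
assert not is_valid_data_scan_id("scan-")        # must end with a letter or digit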
Outputs
All input properties are implicitly available as output properties. Additionally, the DataScan resource produces the following output properties:
- CreateTime string
- The time when the scan was created.
- DataProfileResult Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataProfileResultResponse
- The result of the data profile scan.
- DataQualityResult Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataQualityResultResponse
- The result of the data quality scan.
- ExecutionStatus Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataScanExecutionStatusResponse
- Status of the data scan execution.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- State string
- Current state of the DataScan.
- Type string
- The type of DataScan.
- Uid string
- System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- UpdateTime string
- The time when the scan was last updated.
- CreateTime string
- The time when the scan was created.
- DataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
- The result of the data profile scan.
- DataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
- The result of the data quality scan.
- ExecutionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
- Status of the data scan execution.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- State string
- Current state of the DataScan.
- Type string
- The type of DataScan.
- Uid string
- System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- UpdateTime string
- The time when the scan was last updated.
- createTime String
- The time when the scan was created.
- dataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
- The result of the data profile scan.
- dataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
- The result of the data quality scan.
- executionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
- Status of the data scan execution.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- state String
- Current state of the DataScan.
- type String
- The type of DataScan.
- uid String
- System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- updateTime String
- The time when the scan was last updated.
- createTime string
- The time when the scan was created.
- dataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
- The result of the data profile scan.
- dataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
- The result of the data quality scan.
- executionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
- Status of the data scan execution.
- id string
- The provider-assigned unique ID for this managed resource.
- name string
- The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- state string
- Current state of the DataScan.
- type string
- The type of DataScan.
- uid string
- System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- updateTime string
- The time when the scan was last updated.
- create_time str
- The time when the scan was created.
- data_profile_result GoogleCloudDataplexV1DataProfileResultResponse
- The result of the data profile scan.
- data_quality_result GoogleCloudDataplexV1DataQualityResultResponse
- The result of the data quality scan.
- execution_status GoogleCloudDataplexV1DataScanExecutionStatusResponse
- Status of the data scan execution.
- id str
- The provider-assigned unique ID for this managed resource.
- name str
- The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- state str
- Current state of the DataScan.
- type str
- The type of DataScan.
- uid str
- System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- update_time str
- The time when the scan was last updated.
- createTime String
- The time when the scan was created.
- dataProfileResult Property Map
- The result of the data profile scan.
- dataQualityResult Property Map
- The result of the data quality scan.
- executionStatus Property Map
- Status of the data scan execution.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
- state String
- Current state of the DataScan.
- type String
- The type of DataScan.
- uid String
- System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
- updateTime String
- The time when the scan was last updated.
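The output properties can be exported like any other Pulumi outputs. A minimal Python sketch follows; the project, table, and empty profile spec are illustrative placeholders:
import pulumi
import pulumi_google_native as google_native

# Illustrative resource; see the inputs above for the full set of options.
scan = google_native.dataplex.v1.DataScan("exampleScan",
    data_scan_id="example-profile-scan",
    project="my-project",
    location="us-central1",
    data={"resource": "//bigquery.googleapis.com/projects/my-project/datasets/sales/tables/orders"},
    data_profile_spec={})

# Server-generated properties become available once the resource is created.
pulumi.export("scanName", scan.name)   # projects/{project}/locations/{location_id}/dataScans/{datascan_id}
pulumi.export("scanState", scan.state)
pulumi.export("scanUid", scan.uid)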
Supporting Types
GoogleCloudDataplexV1DataProfileResultPostScanActionsResultBigQueryExportResultResponse, GoogleCloudDataplexV1DataProfileResultPostScanActionsResultBigQueryExportResultResponseArgs
GoogleCloudDataplexV1DataProfileResultPostScanActionsResultResponse, GoogleCloudDataplexV1DataProfileResultPostScanActionsResultResponseArgs
- BigqueryExportResult Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultPostScanActionsResultBigQueryExportResultResponse
- The result of BigQuery export post scan action.
- BigqueryExportResult GoogleCloudDataplexV1DataProfileResultPostScanActionsResultBigQueryExportResultResponse
- The result of BigQuery export post scan action.
- bigqueryExportResult GoogleCloudDataplexV1DataProfileResultPostScanActionsResultBigQueryExportResultResponse
- The result of BigQuery export post scan action.
- bigqueryExportResult GoogleCloudDataplexV1DataProfileResultPostScanActionsResultBigQueryExportResultResponse
- The result of BigQuery export post scan action.
- bigquery_export_result GoogleCloudDataplexV1DataProfileResultPostScanActionsResultBigQueryExportResultResponse
- The result of BigQuery export post scan action.
- bigqueryExportResult Property Map
- The result of BigQuery export post scan action.
GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponseArgs
- Average double
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- Max double
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- Min double
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- Quartiles List<double>
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- StandardDeviation double
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- Average float64
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- Max float64
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- Min float64
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- Quartiles []float64
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- StandardDeviation float64
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average Double
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max Double
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min Double
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles List<Double>
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation Double
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average number
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max number
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min number
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles number[]
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation number
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average float
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max float
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min float
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles Sequence[float]
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standard_deviation float
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average Number
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max Number
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min Number
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles List<Number>
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation Number
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
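As a quick illustration of the Q1 / median / Q3 ordering described above, Python's standard library computes the same three cut points for a small sample. The values Dataplex reports are computed over the scanned data and may be approximate, so this only demonstrates the ordering:
import statistics

values = [2, 4, 4, 5, 7, 9, 11, 12]

# quantiles(..., n=4) returns three cut points in ascending order: Q1, median, Q3,
# which matches the order of the quartiles list in the profile result.
q1, median, q3 = statistics.quantiles(values, n=4)
print(q1, median, q3)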
GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponseArgs
- Average double
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- Max string
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- Min string
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- Quartiles List<string>
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
- StandardDeviation double
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- Average float64
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- Max string
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- Min string
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- Quartiles []string
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
- StandardDeviation float64
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average Double
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max String
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min String
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles List<String>
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation Double
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average number
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max string
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min string
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles string[]
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation number
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average float
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max str
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min str
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles Sequence[str]
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
- standard_deviation float
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
- average Number
- Average of non-null values in the scanned data. NaN, if the field has a NaN.
- max String
- Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
- min String
- Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
- quartiles List<String>
- A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. Three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles is provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
- standardDeviation Number
- Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponseArgs
- DistinctRatio double
- Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- DoubleProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
- Double type field information.
- IntegerProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
- Integer type field information.
- NullRatio double
- Ratio of rows with null value against total scanned rows.
- StringProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
- String type field information.
- TopNValues List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse>
- The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- DistinctRatio float64
- Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- DoubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
- Double type field information.
- IntegerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
- Integer type field information.
- NullRatio float64
- Ratio of rows with null value against total scanned rows.
- StringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
- String type field information.
- TopNValues []GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse
- The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- distinctRatio Double
- Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- doubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
- Double type field information.
- integerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
- Integer type field information.
- nullRatio Double
- Ratio of rows with null value against total scanned rows.
- stringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
- String type field information.
- topNValues List<GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse>
- The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- distinctRatio number
- Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- doubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
- Double type field information.
- integerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
- Integer type field information.
- nullRatio number
- Ratio of rows with null value against total scanned rows.
- stringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
- String type field information.
- topNValues GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse[]
- The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- distinct_ratio float
- Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- double_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
- Double type field information.
- integer_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
- Integer type field information.
- null_ratio float
- Ratio of rows with null value against total scanned rows.
- string_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
- String type field information.
- top_n_values Sequence[GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse]
- The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- distinctRatio Number
- Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
- doubleProfile Property Map
- Double type field information.
- integerProfile Property Map
- Integer type field information.
- nullRatio Number
- Ratio of rows with null value against total scanned rows.
- stringProfile Property Map
- String type field information.
- topNValues List<Property Map>
- The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponseArgs
- AverageLength double
- Average length of non-null values in the scanned data.
- MaxLength string
- Maximum length of non-null values in the scanned data.
- MinLength string
- Minimum length of non-null values in the scanned data.
- AverageLength float64
- Average length of non-null values in the scanned data.
- MaxLength string
- Maximum length of non-null values in the scanned data.
- MinLength string
- Minimum length of non-null values in the scanned data.
- averageLength Double
- Average length of non-null values in the scanned data.
- maxLength String
- Maximum length of non-null values in the scanned data.
- minLength String
- Minimum length of non-null values in the scanned data.
- averageLength number
- Average length of non-null values in the scanned data.
- maxLength string
- Maximum length of non-null values in the scanned data.
- minLength string
- Minimum length of non-null values in the scanned data.
- average_length float
- Average length of non-null values in the scanned data.
- max_length str
- Maximum length of non-null values in the scanned data.
- min_length str
- Minimum length of non-null values in the scanned data.
- averageLength Number
- Average length of non-null values in the scanned data.
- maxLength String
- Maximum length of non-null values in the scanned data.
- minLength String
- Minimum length of non-null values in the scanned data.
GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponseArgs
GoogleCloudDataplexV1DataProfileResultProfileFieldResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldResponseArgs
- Mode string
- The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- Name string
- The name of the field.
- Profile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
- Profile information for the corresponding field.
- Type string
- The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
- Mode string
- The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- Name string
- The name of the field.
- Profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
- Profile information for the corresponding field.
- Type string
- The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
- mode String
- The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- name String
- The name of the field.
- profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
- Profile information for the corresponding field.
- type String
- The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
- mode string
- The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- name string
- The name of the field.
- profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
- Profile information for the corresponding field.
- type string
- The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
- mode str
- The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- name str
- The name of the field.
- profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
- Profile information for the corresponding field.
- type str
- The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
- mode String
- The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
- name String
- The name of the field.
- profile Property Map
- Profile information for the corresponding field.
- type String
- The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
GoogleCloudDataplexV1DataProfileResultProfileResponse, GoogleCloudDataplexV1DataProfileResultProfileResponseArgs
- Fields List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldResponse>
- List of fields with structural and profile information for each field.
- Fields []GoogleCloudDataplexV1DataProfileResultProfileFieldResponse
- List of fields with structural and profile information for each field.
- fields List<GoogleCloudDataplexV1DataProfileResultProfileFieldResponse>
- List of fields with structural and profile information for each field.
- fields GoogleCloudDataplexV1DataProfileResultProfileFieldResponse[]
- List of fields with structural and profile information for each field.
- fields Sequence[GoogleCloudDataplexV1DataProfileResultProfileFieldResponse]
- List of fields with structural and profile information for each field.
- fields List<Property Map>
- List of fields with structural and profile information for each field.
GoogleCloudDataplexV1DataProfileResultResponse, GoogleCloudDataplexV1DataProfileResultResponseArgs
- PostScanActionsResult Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultPostScanActionsResultResponse
- The result of post scan actions.
- Profile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileResponse
- The profile information per field.
- RowCount string
- The count of rows scanned.
- ScannedData Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1ScannedDataResponse
- The data scanned for this result.
- PostScanActionsResult GoogleCloudDataplexV1DataProfileResultPostScanActionsResultResponse
- The result of post scan actions.
- Profile GoogleCloudDataplexV1DataProfileResultProfileResponse
- The profile information per field.
- RowCount string
- The count of rows scanned.
- ScannedData GoogleCloudDataplexV1ScannedDataResponse
- The data scanned for this result.
- postScanActionsResult GoogleCloudDataplexV1DataProfileResultPostScanActionsResultResponse
- The result of post scan actions.
- profile GoogleCloudDataplexV1DataProfileResultProfileResponse
- The profile information per field.
- rowCount String
- The count of rows scanned.
- scannedData GoogleCloudDataplexV1ScannedDataResponse
- The data scanned for this result.
- postScanActionsResult GoogleCloudDataplexV1DataProfileResultPostScanActionsResultResponse
- The result of post scan actions.
- profile GoogleCloudDataplexV1DataProfileResultProfileResponse
- The profile information per field.
- rowCount string
- The count of rows scanned.
- scannedData GoogleCloudDataplexV1ScannedDataResponse
- The data scanned for this result.
- post_scan_actions_result GoogleCloudDataplexV1DataProfileResultPostScanActionsResultResponse
- The result of post scan actions.
- profile GoogleCloudDataplexV1DataProfileResultProfileResponse
- The profile information per field.
- row_count str
- The count of rows scanned.
- scanned_data GoogleCloudDataplexV1ScannedDataResponse
- The data scanned for this result.
- postScanActionsResult Property Map
- The result of post scan actions.
- profile Property Map
- The profile information per field.
- rowCount String
- The count of rows scanned.
- scannedData Property Map
- The data scanned for this result.
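These Response types are output-only: they are populated on the resource after a data profile scan job has run. As a minimal, non-authoritative sketch (assuming the resource surfaces a data_profile_result output and using placeholder project, dataset, and table names), the row count and profiled field names could be read like this in Python:

```python
import pulumi
import pulumi_google_native.dataplex.v1 as dataplex

# Placeholder identifiers; substitute your own project, dataset, and table.
scan = dataplex.DataScan("profileScan",
    data_scan_id="orders-profile-scan",
    location="us-central1",
    data=dataplex.GoogleCloudDataplexV1DataSourceArgs(
        resource="//bigquery.googleapis.com/projects/my-project/datasets/sales/tables/orders",
    ),
    data_profile_spec=dataplex.GoogleCloudDataplexV1DataProfileSpecArgs(),
)

# data_profile_result is output-only and only populated once a scan job has completed.
pulumi.export("rowCount", scan.data_profile_result.apply(
    lambda r: r.row_count if r else None))
pulumi.export("profiledFields", scan.data_profile_result.apply(
    lambda r: [f.name for f in r.profile.fields] if r and r.profile else None))
```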
GoogleCloudDataplexV1DataProfileSpec, GoogleCloudDataplexV1DataProfileSpecArgs
- ExcludeFields Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFields
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- IncludeFields Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFields
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- PostScanActions Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecPostScanActions
- Optional. Actions to take upon job completion.
- RowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- SamplingPercent double
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- ExcludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- IncludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- PostScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActions
- Optional. Actions to take upon job completion.
- RowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- SamplingPercent float64
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- excludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- includeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- postScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActions
- Optional. Actions to take upon job completion.
- rowFilter String
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent Double
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- excludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- includeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- postScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActions
- Optional. Actions to take upon job completion.
- rowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent number
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- exclude_fields GoogleCloudDataplexV1DataProfileSpecSelectedFields
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- include_fields GoogleCloudDataplexV1DataProfileSpecSelectedFields
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- post_scan_actions GoogleCloudDataplexV1DataProfileSpecPostScanActions
- Optional. Actions to take upon job completion.
- row_filter str
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- sampling_percent float
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- excludeFields Property Map
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- includeFields Property Map
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- postScanActions Property Map
- Optional. Actions to take upon job completion.
- rowFilter String
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent Number
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
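As a minimal, non-authoritative Python sketch of these inputs (column names and the filter expression are placeholders), a profile spec combining field selection, a row filter, and sampling could look like this; it would then be passed to the resource as the data_profile_spec argument:

```python
import pulumi_google_native.dataplex.v1 as dataplex

# Illustrative column names and filter; substitute values from your own schema.
profile_spec = dataplex.GoogleCloudDataplexV1DataProfileSpecArgs(
    # Profile only these top-level columns...
    include_fields=dataplex.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs(
        field_names=["order_id", "amount", "customer"],
    ),
    # ...but always skip this one, even though it also appears in include_fields.
    exclude_fields=dataplex.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs(
        field_names=["customer"],
    ),
    # BigQuery standard SQL WHERE-clause expression applied to every row.
    row_filter="amount >= 0 AND amount < 10000",
    # Profile roughly 10% of the rows; 0 or 100 disables sampling.
    sampling_percent=10.0,
)

# Used as: dataplex.DataScan("scan", ..., data_profile_spec=profile_spec)
```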
GoogleCloudDataplexV1DataProfileSpecPostScanActions, GoogleCloudDataplexV1DataProfileSpecPostScanActionsArgs
- BigqueryExport Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExport
- Optional. If set, results will be exported to the provided BigQuery table.
- BigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExport
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExport
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExport
- Optional. If set, results will be exported to the provided BigQuery table.
- bigquery_export GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExport
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport Property Map
- Optional. If set, results will be exported to the provided BigQuery table.
GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExport, GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportArgs
- ResultsTable string
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- ResultsTable string
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable String
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable string
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- results_table str
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable String
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
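A minimal, non-authoritative Python sketch of a post-scan BigQuery export (the project, dataset, and table IDs are placeholders in the documented //bigquery.googleapis.com/... format):

```python
import pulumi_google_native.dataplex.v1 as dataplex

# Export profile results to a BigQuery table after each scan job.
post_scan_actions = dataplex.GoogleCloudDataplexV1DataProfileSpecPostScanActionsArgs(
    bigquery_export=dataplex.GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportArgs(
        results_table="//bigquery.googleapis.com/projects/my-project/datasets/dq_exports/tables/profile_results",
    ),
)

# Attached via: GoogleCloudDataplexV1DataProfileSpecArgs(post_scan_actions=post_scan_actions)
```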
GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse, GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponseArgs
- ResultsTable string
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- ResultsTable string
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable String
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable string
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- results_table str
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable String
- Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse, GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponseArgs
- BigqueryExport Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
- Optional. If set, results will be exported to the provided BigQuery table.
- BigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
- Optional. If set, results will be exported to the provided BigQuery table.
- bigquery_export GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport Property Map
- Optional. If set, results will be exported to the provided BigQuery table.
GoogleCloudDataplexV1DataProfileSpecResponse, GoogleCloudDataplexV1DataProfileSpecResponseArgs
- ExcludeFields Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- IncludeFields Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- PostScanActions Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
- Optional. Actions to take upon job completion.
- RowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- SamplingPercent double
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- ExcludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- IncludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- PostScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
- Optional. Actions to take upon job completion.
- RowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- SamplingPercent float64
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- excludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- includeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- postScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
- Optional. Actions to take upon job completion.
- rowFilter String
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent Double
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- excludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- includeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- postScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
- Optional. Actions to take upon job completion.
- rowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent number
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- exclude_fields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- include_fields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- post_scan_actions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
- Optional. Actions to take upon job completion.
- row_filter str
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- sampling_percent float
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- excludeFields Property Map
- Optional. The fields to exclude from data profile. If specified, the fields will be excluded from data profile, regardless of include_fields value.
- includeFields Property Map
- Optional. The fields to include in data profile. If not specified, all fields at the time of profile scan job execution are included, except for ones listed in exclude_fields.
- postScanActions Property Map
- Optional. Actions to take upon job completion.
- rowFilter String
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent Number
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
GoogleCloudDataplexV1DataProfileSpecSelectedFields, GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs
- FieldNames List<string>
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
- FieldNames []string
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
- fieldNames List<String>
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
- fieldNames string[]
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
- field_names Sequence[str]
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
- fieldNames List<String>
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
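A short, non-authoritative Python sketch of the top-level-name rule above (the column names and the nested "address" record are hypothetical):

```python
import pulumi_google_native.dataplex.v1 as dataplex

# Only top-level column names are accepted. Listing the nested column
# "address" is fine; a leaf path such as "address.zip" is not supported.
selected = dataplex.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs(
    field_names=["order_id", "address"],
    # field_names=["address.zip"],  # would be rejected per the rule above
)
```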
GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse, GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponseArgs
- FieldNames List<string>
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
- FieldNames []string
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
- fieldNames List<String>
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
- fieldNames string[]
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
- field_names Sequence[str]
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
- fieldNames List<String>
- Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
GoogleCloudDataplexV1DataQualityColumnResultResponse, GoogleCloudDataplexV1DataQualityColumnResultResponseArgs
GoogleCloudDataplexV1DataQualityDimensionResponse, GoogleCloudDataplexV1DataQualityDimensionResponseArgs
- Name string
- The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- Name string
- The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- name String
- The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- name string
- The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- name str
- The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- name String
- The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
GoogleCloudDataplexV1DataQualityDimensionResultResponse, GoogleCloudDataplexV1DataQualityDimensionResultResponseArgs
- Dimension Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityDimensionResponse
- The dimension config specified in the DataQualitySpec, as is.
- Passed bool
- Whether the dimension passed or failed.
- Score double
- The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between [0, 100] (up to two decimal points).
- Dimension GoogleCloudDataplexV1DataQualityDimensionResponse
- The dimension config specified in the DataQualitySpec, as is.
- Passed bool
- Whether the dimension passed or failed.
- Score float64
- The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between [0, 100] (up to two decimal points).
- dimension GoogleCloudDataplexV1DataQualityDimensionResponse
- The dimension config specified in the DataQualitySpec, as is.
- passed Boolean
- Whether the dimension passed or failed.
- score Double
- The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between [0, 100] (up to two decimal points).
- dimension GoogleCloudDataplexV1DataQualityDimensionResponse
- The dimension config specified in the DataQualitySpec, as is.
- passed boolean
- Whether the dimension passed or failed.
- score number
- The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between [0, 100] (up to two decimal points).
- dimension GoogleCloudDataplexV1DataQualityDimensionResponse
- The dimension config specified in the DataQualitySpec, as is.
- passed bool
- Whether the dimension passed or failed.
- score float
- The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between [0, 100] (up to two decimal points).
- dimension Property Map
- The dimension config specified in the DataQualitySpec, as is.
- passed Boolean
- Whether the dimension passed or failed.
- score Number
- The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between [0, 100] (up to two decimal points).
GoogleCloudDataplexV1DataQualityResultPostScanActionsResultBigQueryExportResultResponse, GoogleCloudDataplexV1DataQualityResultPostScanActionsResultBigQueryExportResultResponseArgs
GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse, GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponseArgs
- BigqueryExportResult Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityResultPostScanActionsResultBigQueryExportResultResponse
- The result of BigQuery export post scan action.
- BigqueryExportResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultBigQueryExportResultResponse
- The result of BigQuery export post scan action.
- bigqueryExportResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultBigQueryExportResultResponse
- The result of BigQuery export post scan action.
- bigqueryExportResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultBigQueryExportResultResponse
- The result of BigQuery export post scan action.
- bigquery_export_result GoogleCloudDataplexV1DataQualityResultPostScanActionsResultBigQueryExportResultResponse
- The result of BigQuery export post scan action.
- bigqueryExportResult Property Map
- The result of BigQuery export post scan action.
GoogleCloudDataplexV1DataQualityResultResponse, GoogleCloudDataplexV1DataQualityResultResponseArgs
- Columns List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityColumnResultResponse>
- A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
- Dimensions List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityDimensionResultResponse>
- A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
- Passed bool
- Overall data quality result -- true if all rules passed.
- PostScanActionsResult Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
- The result of post scan actions.
- RowCount string
- The count of rows processed.
- Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResultResponse>
- A list of all the rules in a job, and their results.
- ScannedData Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1ScannedDataResponse
- The data scanned for this result.
- Score double
- The overall data quality score. The score ranges between [0, 100] (up to two decimal points).
- Columns []GoogleCloudDataplexV1DataQualityColumnResultResponse
- A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
- Dimensions []GoogleCloudDataplexV1DataQualityDimensionResultResponse
- A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
- Passed bool
- Overall data quality result -- true if all rules passed.
- PostScanActionsResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
- The result of post scan actions.
- RowCount string
- The count of rows processed.
- Rules []GoogleCloudDataplexV1DataQualityRuleResultResponse
- A list of all the rules in a job, and their results.
- ScannedData GoogleCloudDataplexV1ScannedDataResponse
- The data scanned for this result.
- Score float64
- The overall data quality score. The score ranges between [0, 100] (up to two decimal points).
- columns List<GoogleCloudDataplexV1DataQualityColumnResultResponse>
- A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
- dimensions List<GoogleCloudDataplexV1DataQualityDimensionResultResponse>
- A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
- passed Boolean
- Overall data quality result -- true if all rules passed.
- postScanActionsResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
- The result of post scan actions.
- rowCount String
- The count of rows processed.
- rules List<GoogleCloudDataplexV1DataQualityRuleResultResponse>
- A list of all the rules in a job, and their results.
- scannedData GoogleCloudDataplexV1ScannedDataResponse
- The data scanned for this result.
- score Double
- The overall data quality score. The score ranges between [0, 100] (up to two decimal points).
- columns GoogleCloudDataplexV1DataQualityColumnResultResponse[]
- A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
- dimensions GoogleCloudDataplexV1DataQualityDimensionResultResponse[]
- A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
- passed boolean
- Overall data quality result -- true if all rules passed.
- postScanActionsResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
- The result of post scan actions.
- rowCount string
- The count of rows processed.
- rules GoogleCloudDataplexV1DataQualityRuleResultResponse[]
- A list of all the rules in a job, and their results.
- scannedData GoogleCloudDataplexV1ScannedDataResponse
- The data scanned for this result.
- score number
- The overall data quality score. The score ranges between [0, 100] (up to two decimal points).
- columns Sequence[GoogleCloudDataplexV1DataQualityColumnResultResponse]
- A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
- dimensions Sequence[GoogleCloudDataplexV1DataQualityDimensionResultResponse]
- A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
- passed bool
- Overall data quality result -- true if all rules passed.
- post_scan_actions_result GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
- The result of post scan actions.
- row_count str
- The count of rows processed.
- rules Sequence[GoogleCloudDataplexV1DataQualityRuleResultResponse]
- A list of all the rules in a job, and their results.
- scanned_data GoogleCloudDataplexV1ScannedDataResponse
- The data scanned for this result.
- score float
- The overall data quality score. The score ranges between [0, 100] (up to two decimal points).
- columns List<Property Map>
- A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
- dimensions List<Property Map>
- A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
- passed Boolean
- Overall data quality result -- true if all rules passed.
- postScanActionsResult Property Map
- The result of post scan actions.
- rowCount String
- The count of rows processed.
- rules List<Property Map>
- A list of all the rules in a job, and their results.
- scannedData Property Map
- The data scanned for this result.
- score Number
- The overall data quality score. The score ranges between [0, 100] (up to two decimal points).
GoogleCloudDataplexV1DataQualityRule, GoogleCloudDataplexV1DataQualityRuleArgs
- Dimension string
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- Column string
- Optional. The unnested column which this rule is evaluated against.
- Description string
- Optional. Description of the rule. The maximum length is 1,024 characters.
- IgnoreNull bool
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- Name string
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- NonNullExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleNonNullExpectation
- Row-level rule which evaluates whether each column value is null.
- RangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRangeExpectation
- Row-level rule which evaluates whether each column value lies between a specified range.
- RegexExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRegexExpectation
- Row-level rule which evaluates whether each column value matches a specified regex.
- RowConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- SetExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleSetExpectation
- Row-level rule which evaluates whether each column value is contained by a specified set.
- StatisticRangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- TableConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation
- Aggregate rule which evaluates whether the provided expression is true for a table.
- Threshold double
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- UniquenessExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation
- Row-level rule which evaluates whether each column value is unique.
- Dimension string
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- Column string
- Optional. The unnested column which this rule is evaluated against.
- Description string
- Optional. Description of the rule. The maximum length is 1,024 characters.
- IgnoreNull bool
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- Name string
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- NonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation
- Row-level rule which evaluates whether each column value is null.
- RangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation
- Row-level rule which evaluates whether each column value lies between a specified range.
- RegexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation
- Row-level rule which evaluates whether each column value matches a specified regex.
- RowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- SetExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectation
- Row-level rule which evaluates whether each column value is contained by a specified set.
- StatisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- TableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation
- Aggregate rule which evaluates whether the provided expression is true for a table.
- Threshold float64
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- UniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation
- Row-level rule which evaluates whether each column value is unique.
- dimension String
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- column String
- Optional. The unnested column which this rule is evaluated against.
- description String
- Optional. Description of the rule. The maximum length is 1,024 characters.
- ignoreNull Boolean
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- name String
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation
- Row-level rule which evaluates whether each column value is null.
- rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation
- Row-level rule which evaluates whether each column value lies between a specified range.
- regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation
- Row-level rule which evaluates whether each column value matches a specified regex.
- rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectation
- Row-level rule which evaluates whether each column value is contained by a specified set.
- statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation
- Aggregate rule which evaluates whether the provided expression is true for a table.
- threshold Double
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation
- Row-level rule which evaluates whether each column value is unique.
- dimension string
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- column string
- Optional. The unnested column which this rule is evaluated against.
- description string
- Optional. Description of the rule. The maximum length is 1,024 characters.
- ignoreNull boolean
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- name string
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation
- Row-level rule which evaluates whether each column value is null.
- rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation
- Row-level rule which evaluates whether each column value lies between a specified range.
- regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation
- Row-level rule which evaluates whether each column value matches a specified regex.
- rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectation
- Row-level rule which evaluates whether each column value is contained by a specified set.
- statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation
- Aggregate rule which evaluates whether the provided expression is true for a table.
- threshold number
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation
- Row-level rule which evaluates whether each column value is unique.
- dimension str
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- column str
- Optional. The unnested column which this rule is evaluated against.
- description str
- Optional. Description of the rule. The maximum length is 1,024 characters.
- ignore_null bool
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- name str
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- non_null_expectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation
- Row-level rule which evaluates whether each column value is null.
- range_expectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation
- Row-level rule which evaluates whether each column value lies between a specified range.
- regex_expectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation
- Row-level rule which evaluates whether each column value matches a specified regex.
- row_condition_expectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- set_expectation GoogleCloudDataplexV1DataQualityRuleSetExpectation
- Row-level rule which evaluates whether each column value is contained by a specified set.
- statistic_range_expectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- table_condition_expectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation
- Aggregate rule which evaluates whether the provided expression is true for a table.
- threshold float
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- uniqueness_expectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation
- Row-level rule which evaluates whether each column value is unique.
- dimension String
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- column String
- Optional. The unnested column which this rule is evaluated against.
- description String
- Optional. Description of the rule. The maximum length is 1,024 characters.
- ignoreNull Boolean
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- name String
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- nonNullExpectation Property Map
- Row-level rule which evaluates whether each column value is null.
- rangeExpectation Property Map
- Row-level rule which evaluates whether each column value lies between a specified range.
- regexExpectation Property Map
- Row-level rule which evaluates whether each column value matches a specified regex.
- rowConditionExpectation Property Map
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- setExpectation Property Map
- Row-level rule which evaluates whether each column value is contained by a specified set.
- statisticRangeExpectation Property Map
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- tableConditionExpectation Property Map
- Aggregate rule which evaluates whether the provided expression is true for a table.
- threshold Number
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- uniquenessExpectation Property Map
- Row-level rule which evaluates whether each column value is unique.
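As a minimal, non-authoritative Python sketch of two such rules (the "amount" and "order_id" columns are hypothetical; NonNullExpectation and UniquenessExpectation carry no fields of their own, so they are constructed empty). Rules like these are attached to the scan through the data-quality spec's rules list:

```python
import pulumi_google_native.dataplex.v1 as dataplex

# Row-level completeness rule: at least 99% of rows must have a non-null "amount".
non_null_rule = dataplex.GoogleCloudDataplexV1DataQualityRuleArgs(
    column="amount",
    dimension="COMPLETENESS",
    non_null_expectation=dataplex.GoogleCloudDataplexV1DataQualityRuleNonNullExpectationArgs(),
    threshold=0.99,
)

# Row-level uniqueness rule on the hypothetical primary-key column.
uniqueness_rule = dataplex.GoogleCloudDataplexV1DataQualityRuleArgs(
    column="order_id",
    dimension="UNIQUENESS",
    uniqueness_expectation=dataplex.GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationArgs(),
)
```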
GoogleCloudDataplexV1DataQualityRuleRangeExpectation, GoogleCloudDataplexV1DataQualityRuleRangeExpectationArgs
- MaxValue string
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- StrictMaxEnabled bool
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- MaxValue string
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- StrictMaxEnabled bool
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled Boolean
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue string
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue string
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled boolean
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled boolean
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- max_value str
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- min_value str
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strict_max_enabled bool
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strict_min_enabled bool
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled Boolean
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
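A short, non-authoritative Python sketch of a range rule built from these inputs (the "amount" column is hypothetical; note that min_value and max_value are passed as strings):

```python
import pulumi_google_native.dataplex.v1 as dataplex

# Validity rule: 0 <= amount < 10000 (0 itself passes, 10000 itself fails).
range_rule = dataplex.GoogleCloudDataplexV1DataQualityRuleArgs(
    column="amount",
    dimension="VALIDITY",
    range_expectation=dataplex.GoogleCloudDataplexV1DataQualityRuleRangeExpectationArgs(
        min_value="0",
        max_value="10000",
        strict_min_enabled=False,  # equality with the minimum is allowed
        strict_max_enabled=True,   # values must be strictly below the maximum
    ),
)
```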
GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponseArgs
- MaxValue string
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- StrictMaxEnabled bool
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- MaxValue string
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- StrictMaxEnabled bool
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled Boolean
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue string
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue string
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled boolean
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled boolean
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- max_value str
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- min_value str
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strict_max_enabled bool
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strict_min_enabled bool
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
- Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
- Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- strictMaxEnabled Boolean
- Optional. Whether each value needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
- Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
GoogleCloudDataplexV1DataQualityRuleRegexExpectation, GoogleCloudDataplexV1DataQualityRuleRegexExpectationArgs
- Regex string
- Optional. A regular expression the column value is expected to match.
- Regex string
- Optional. A regular expression the column value is expected to match.
- regex String
- Optional. A regular expression the column value is expected to match.
- regex string
- Optional. A regular expression the column value is expected to match.
- regex str
- Optional. A regular expression the column value is expected to match.
- regex String
- Optional. A regular expression the column value is expected to match.
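To show how the regex field above is typically used, here is a small Python sketch of a regex-expectation rule. The column name and pattern are placeholders only.
import pulumi_google_native.dataplex.v1 as dataplex

# Hypothetical rule: values in a placeholder "email" column must match a simple pattern.
regex_rule = dataplex.GoogleCloudDataplexV1DataQualityRuleArgs(
    column="email",
    dimension="VALIDITY",
    regex_expectation=dataplex.GoogleCloudDataplexV1DataQualityRuleRegexExpectationArgs(
        regex=r"^[^@\s]+@[^@\s]+\.[^@\s]+$",  # placeholder e-mail pattern
    ),
)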
GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponseArgs
- Regex string
- Optional. A regular expression the column value is expected to match.
- Regex string
- Optional. A regular expression the column value is expected to match.
- regex String
- Optional. A regular expression the column value is expected to match.
- regex string
- Optional. A regular expression the column value is expected to match.
- regex str
- Optional. A regular expression the column value is expected to match.
- regex String
- Optional. A regular expression the column value is expected to match.
GoogleCloudDataplexV1DataQualityRuleResponse, GoogleCloudDataplexV1DataQualityRuleResponseArgs
- Column string
- Optional. The unnested column which this rule is evaluated against.
- Description string
- Optional. Description of the rule. The maximum length is 1,024 characters.
- Dimension string
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- IgnoreNull bool
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- Name string
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- NonNullExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
- Row-level rule which evaluates whether each column value is null.
- RangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
- Row-level rule which evaluates whether each column value lies between a specified range.
- RegexExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
- Row-level rule which evaluates whether each column value matches a specified regex.
- RowConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- SetExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
- Row-level rule which evaluates whether each column value is contained by a specified set.
- StatisticRangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- TableConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
- Aggregate rule which evaluates whether the provided expression is true for a table.
- Threshold double
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- UniquenessExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
- Row-level rule which evaluates whether each column value is unique.
- Column string
- Optional. The unnested column which this rule is evaluated against.
- Description string
- Optional. Description of the rule. The maximum length is 1,024 characters.
- Dimension string
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- IgnoreNull bool
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- Name string
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- NonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
- Row-level rule which evaluates whether each column value is null.
- RangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
- Row-level rule which evaluates whether each column value lies between a specified range.
- RegexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
- Row-level rule which evaluates whether each column value matches a specified regex.
- RowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- SetExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
- Row-level rule which evaluates whether each column value is contained by a specified set.
- StatisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- TableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
- Aggregate rule which evaluates whether the provided expression is true for a table.
- Threshold float64
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- UniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
- Row-level rule which evaluates whether each column value is unique.
- column String
- Optional. The unnested column which this rule is evaluated against.
- description String
- Optional. Description of the rule. The maximum length is 1,024 characters.
- dimension String
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- ignoreNull Boolean
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- name String
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
- Row-level rule which evaluates whether each column value is null.
- rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
- Row-level rule which evaluates whether each column value lies between a specified range.
- regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
- Row-level rule which evaluates whether each column value matches a specified regex.
- rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
- Row-level rule which evaluates whether each column value is contained by a specified set.
- statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
- Aggregate rule which evaluates whether the provided expression is true for a table.
- threshold Double
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
- Row-level rule which evaluates whether each column value is unique.
- column string
- Optional. The unnested column which this rule is evaluated against.
- description string
- Optional. Description of the rule. The maximum length is 1,024 characters.
- dimension string
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- ignoreNull boolean
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- name string
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
- Row-level rule which evaluates whether each column value is null.
- rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
- Row-level rule which evaluates whether each column value lies between a specified range.
- regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
- Row-level rule which evaluates whether each column value matches a specified regex.
- rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
- Row-level rule which evaluates whether each column value is contained by a specified set.
- statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
- Aggregate rule which evaluates whether the provided expression is true for a table.
- threshold number
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
- Row-level rule which evaluates whether each column value is unique.
- column str
- Optional. The unnested column which this rule is evaluated against.
- description str
- Optional. Description of the rule. The maximum length is 1,024 characters.
- dimension str
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- ignore_null bool
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- name str
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- non_null_expectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
- Row-level rule which evaluates whether each column value is null.
- range_expectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
- Row-level rule which evaluates whether each column value lies between a specified range.
- regex_expectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
- Row-level rule which evaluates whether each column value matches a specified regex.
- row_condition_expectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- set_expectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
- Row-level rule which evaluates whether each column value is contained by a specified set.
- statistic_range_expectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- table_condition_expectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
- Aggregate rule which evaluates whether the provided expression is true for a table.
- threshold float
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- uniqueness_expectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
- Row-level rule which evaluates whether each column value is unique.
- column String
- Optional. The unnested column which this rule is evaluated against.
- description String
- Optional. Description of the rule. The maximum length is 1,024 characters.
- dimension String
- The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
- ignoreNull Boolean
- Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
- name String
- Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
- nonNullExpectation Property Map
- Row-level rule which evaluates whether each column value is null.
- rangeExpectation Property Map
- Row-level rule which evaluates whether each column value lies between a specified range.
- regexExpectation Property Map
- Row-level rule which evaluates whether each column value matches a specified regex.
- rowConditionExpectation Property Map
- Row-level rule which evaluates whether each row in a table passes the specified condition.
- setExpectation Property Map
- Row-level rule which evaluates whether each column value is contained by a specified set.
- statisticRangeExpectation Property Map
- Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
- tableConditionExpectation Property Map
- Aggregate rule which evaluates whether the provided expression is true for a table.
- threshold Number
- Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates default value (i.e. 1.0). This field is only valid for row-level type rules.
- uniquenessExpectation Property Map
- Row-level rule which evaluates whether each column value is unique.
GoogleCloudDataplexV1DataQualityRuleResultResponse, GoogleCloudDataplexV1DataQualityRuleResultResponseArgs
- EvaluatedCount string
- The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- FailingRowsQuery string
- The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
- NullCount string
- The number of rows with null values in the specified column.
- PassRatio double
- The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
- Passed bool
- Whether the rule passed or failed.
- PassedCount string
- The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
- Rule Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResponse
- The rule specified in the DataQualitySpec, as is.
- EvaluatedCount string
- The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- FailingRowsQuery string
- The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
- NullCount string
- The number of rows with null values in the specified column.
- PassRatio float64
- The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
- Passed bool
- Whether the rule passed or failed.
- PassedCount string
- The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
- Rule GoogleCloudDataplexV1DataQualityRuleResponse
- The rule specified in the DataQualitySpec, as is.
- evaluatedCount String
- The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- failingRowsQuery String
- The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
- nullCount String
- The number of rows with null values in the specified column.
- passRatio Double
- The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
- passed Boolean
- Whether the rule passed or failed.
- passedCount String
- The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
- rule GoogleCloudDataplexV1DataQualityRuleResponse
- The rule specified in the DataQualitySpec, as is.
- evaluatedCount string
- The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- failingRowsQuery string
- The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
- nullCount string
- The number of rows with null values in the specified column.
- passRatio number
- The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
- passed boolean
- Whether the rule passed or failed.
- passedCount string
- The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
- rule GoogleCloudDataplexV1DataQualityRuleResponse
- The rule specified in the DataQualitySpec, as is.
- evaluated_count str
- The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- failing_rows_query str
- The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
- null_count str
- The number of rows with null values in the specified column.
- pass_ratio float
- The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
- passed bool
- Whether the rule passed or failed.
- passed_count str
- The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
- rule GoogleCloudDataplexV1DataQualityRuleResponse
- The rule specified in the DataQualitySpec, as is.
- evaluatedCount String
- The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured to either include all rows (default) - with null rows automatically failing rule evaluation, or exclude null rows from the evaluated_count, by setting ignore_nulls = true.
- failingRowsQuery String
- The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
- nullCount String
- The number of rows with null values in the specified column.
- passRatio Number
- The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
- passed Boolean
- Whether the rule passed or failed.
- passedCount String
- The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
- rule Property Map
- The rule specified in the DataQualitySpec, as is.
GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation, GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationArgs
- SqlExpression string
- Optional. The SQL expression.
- SqlExpression string
- Optional. The SQL expression.
- sqlExpression String
- Optional. The SQL expression.
- sqlExpression string
- Optional. The SQL expression.
- sql_expression str
- Optional. The SQL expression.
- sqlExpression String
- Optional. The SQL expression.
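To make the row-condition field concrete, the following Python sketch defines a rule whose SQL expression is evaluated against each row; the column names in the expression are placeholders.
import pulumi_google_native.dataplex.v1 as dataplex

# Hypothetical rule: every row must keep a placeholder "discount_pct" column within 0-100.
row_condition_rule = dataplex.GoogleCloudDataplexV1DataQualityRuleArgs(
    dimension="CONSISTENCY",
    row_condition_expectation=dataplex.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationArgs(
        sql_expression="discount_pct BETWEEN 0 AND 100",
    ),
)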
GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponseArgs
- SqlExpression string
- Optional. The SQL expression.
- SqlExpression string
- Optional. The SQL expression.
- sqlExpression String
- Optional. The SQL expression.
- sqlExpression string
- Optional. The SQL expression.
- sql_expression str
- Optional. The SQL expression.
- sqlExpression String
- Optional. The SQL expression.
GoogleCloudDataplexV1DataQualityRuleSetExpectation, GoogleCloudDataplexV1DataQualityRuleSetExpectationArgs
- Values List<string>
- Optional. Expected values for the column value.
- Values []string
- Optional. Expected values for the column value.
- values List<String>
- Optional. Expected values for the column value.
- values string[]
- Optional. Expected values for the column value.
- values Sequence[str]
- Optional. Expected values for the column value.
- values List<String>
- Optional. Expected values for the column value.
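The set expectation above simply enumerates the allowed values for a column. A minimal Python sketch, with a placeholder column name and value list:
import pulumi_google_native.dataplex.v1 as dataplex

# Hypothetical rule: a placeholder "currency" column may only contain these values.
set_rule = dataplex.GoogleCloudDataplexV1DataQualityRuleArgs(
    column="currency",
    dimension="VALIDITY",
    set_expectation=dataplex.GoogleCloudDataplexV1DataQualityRuleSetExpectationArgs(
        values=["USD", "EUR", "GBP"],
    ),
)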
GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse, GoogleCloudDataplexV1DataQualityRuleSetExpectationResponseArgs
- Values List<string>
- Optional. Expected values for the column value.
- Values []string
- Optional. Expected values for the column value.
- values List<String>
- Optional. Expected values for the column value.
- values string[]
- Optional. Expected values for the column value.
- values Sequence[str]
- Optional. Expected values for the column value.
- values List<String>
- Optional. Expected values for the column value.
GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationArgs
- MaxValue string
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- Statistic Pulumi.GoogleNative.Dataplex.V1.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
- Optional. The aggregate metric to evaluate.
- StrictMaxEnabled bool
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- MaxValue string
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- Statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
- Optional. The aggregate metric to evaluate.
- StrictMaxEnabled bool
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
- Optional. The aggregate metric to evaluate.
- strictMaxEnabled Boolean
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue string
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue string
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
- Optional. The aggregate metric to evaluate.
- strictMaxEnabled boolean
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled boolean
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- max_value str
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- min_value str
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
- Optional. The aggregate metric to evaluate.
- strict_max_enabled bool
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strict_min_enabled bool
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic "STATISTIC_UNDEFINED" | "MEAN" | "MIN" | "MAX"
- Optional. The aggregate metric to evaluate.
- strictMaxEnabled Boolean
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
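Unlike the row-level expectations, this is an aggregate rule: the statistic is computed over the whole column and compared against the range. A Python sketch, assuming the statistic enum is exposed as the class listed above (passing the plain string "MEAN" is the equivalent YAML/JSON value); the column name and bounds are placeholders.
import pulumi_google_native.dataplex.v1 as dataplex

# Hypothetical aggregate rule: the mean of a placeholder "latency_ms" column must stay in [0, 250).
statistic_rule = dataplex.GoogleCloudDataplexV1DataQualityRuleArgs(
    column="latency_ms",
    dimension="ACCURACY",
    statistic_range_expectation=dataplex.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationArgs(
        statistic=dataplex.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic.MEAN,
        min_value="0",
        max_value="250",
        strict_max_enabled=True,  # the mean must be strictly below max_value
    ),
)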
GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponseArgs
- MaxValue string
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- Statistic string
- Optional. The aggregate metric to evaluate.
- StrictMaxEnabled bool
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- MaxValue string
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- MinValue string
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- Statistic string
- Optional. The aggregate metric to evaluate.
- StrictMaxEnabled bool
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- StrictMinEnabled bool
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic String
- Optional. The aggregate metric to evaluate.
- strictMaxEnabled Boolean
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue string
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue string
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic string
- Optional. The aggregate metric to evaluate.
- strictMaxEnabled boolean
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled boolean
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- max_value str
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- min_value str
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic str
- Optional. The aggregate metric to evaluate.
- strict_max_enabled bool
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strict_min_enabled bool
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
- maxValue String
- Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- minValue String
- Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
- statistic String
- Optional. The aggregate metric to evaluate.
- strictMaxEnabled Boolean
- Optional. Whether column statistic needs to be strictly lesser than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
- strictMinEnabled Boolean
- Optional. Whether column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticArgs
- StatisticUndefined
- STATISTIC_UNDEFINED: Unspecified statistic type
- Mean
- MEAN: Evaluate the column mean
- Min
- MIN: Evaluate the column min
- Max
- MAX: Evaluate the column max
- GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticStatisticUndefined
- STATISTIC_UNDEFINED: Unspecified statistic type
- GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMean
- MEAN: Evaluate the column mean
- GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMin
- MIN: Evaluate the column min
- GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMax
- MAX: Evaluate the column max
- StatisticUndefined
- STATISTIC_UNDEFINED: Unspecified statistic type
- Mean
- MEAN: Evaluate the column mean
- Min
- MIN: Evaluate the column min
- Max
- MAX: Evaluate the column max
- StatisticUndefined
- STATISTIC_UNDEFINED: Unspecified statistic type
- Mean
- MEAN: Evaluate the column mean
- Min
- MIN: Evaluate the column min
- Max
- MAX: Evaluate the column max
- STATISTIC_UNDEFINED
- STATISTIC_UNDEFINED: Unspecified statistic type
- MEAN
- MEAN: Evaluate the column mean
- MIN
- MIN: Evaluate the column min
- MAX
- MAX: Evaluate the column max
- "STATISTIC_UNDEFINED"
- STATISTIC_UNDEFINED: Unspecified statistic type
- "MEAN"
- MEAN: Evaluate the column mean
- "MIN"
- MIN: Evaluate the column min
- "MAX"
- MAX: Evaluate the column max
GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation, GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationArgs
- SqlExpression string
- Optional. The SQL expression.
- SqlExpression string
- Optional. The SQL expression.
- sqlExpression String
- Optional. The SQL expression.
- sqlExpression string
- Optional. The SQL expression.
- sql_expression str
- Optional. The SQL expression.
- sqlExpression String
- Optional. The SQL expression.
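A table-condition rule evaluates its SQL expression once per table rather than per row. A minimal Python sketch with a placeholder expression:
import pulumi_google_native.dataplex.v1 as dataplex

# Hypothetical aggregate rule: the scanned table must not be empty.
table_condition_rule = dataplex.GoogleCloudDataplexV1DataQualityRuleArgs(
    dimension="COMPLETENESS",
    table_condition_expectation=dataplex.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationArgs(
        sql_expression="COUNT(*) > 0",
    ),
)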
GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse, GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponseArgs
- SqlExpression string
- Optional. The SQL expression.
- SqlExpression string
- Optional. The SQL expression.
- sqlExpression String
- Optional. The SQL expression.
- sqlExpression string
- Optional. The SQL expression.
- sql_expression str
- Optional. The SQL expression.
- sqlExpression String
- Optional. The SQL expression.
GoogleCloudDataplexV1DataQualitySpec, GoogleCloudDataplexV1DataQualitySpecArgs
- Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRule>
- The list of rules to evaluate against a data source. At least one rule is required.
- PostScanActions Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecPostScanActions
- Optional. Actions to take upon job completion.
- RowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- SamplingPercent double
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- Rules []GoogleCloudDataplexV1DataQualityRule
- The list of rules to evaluate against a data source. At least one rule is required.
- PostScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActions
- Optional. Actions to take upon job completion.
- RowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- SamplingPercent float64
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rules List<GoogleCloudDataplexV1DataQualityRule>
- The list of rules to evaluate against a data source. At least one rule is required.
- postScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActions
- Optional. Actions to take upon job completion.
- rowFilter String
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent Double
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rules GoogleCloudDataplexV1DataQualityRule[]
- The list of rules to evaluate against a data source. At least one rule is required.
- postScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActions
- Optional. Actions to take upon job completion.
- rowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent number
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rules Sequence[GoogleCloudDataplexV1DataQualityRule]
- The list of rules to evaluate against a data source. At least one rule is required.
- post_scan_actions GoogleCloudDataplexV1DataQualitySpecPostScanActions
- Optional. Actions to take upon job completion.
- row_filter str
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- sampling_percent float
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- rules List<Property Map>
- The list of rules to evaluate against a data source. At least one rule is required.
- postScanActions Property Map
- Optional. Actions to take upon job completion.
- rowFilter String
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- samplingPercent Number
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
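Pulling the pieces together, the following Python sketch builds a GoogleCloudDataplexV1DataQualitySpecArgs value that combines one rule with a row filter and sampling. The column name, filter, and percentages are placeholder values.
import pulumi_google_native.dataplex.v1 as dataplex

# Hypothetical spec: one non-null completeness rule, applied to a filtered 10% sample.
quality_spec = dataplex.GoogleCloudDataplexV1DataQualitySpecArgs(
    rules=[
        dataplex.GoogleCloudDataplexV1DataQualityRuleArgs(
            column="order_id",
            dimension="COMPLETENESS",
            non_null_expectation=dataplex.GoogleCloudDataplexV1DataQualityRuleNonNullExpectationArgs(),
            threshold=0.99,  # at least 99% of evaluated rows must pass
        ),
    ],
    row_filter="order_date >= '2023-01-01'",  # placeholder WHERE-clause filter
    sampling_percent=10.0,
)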
GoogleCloudDataplexV1DataQualitySpecPostScanActions, GoogleCloudDataplexV1DataQualitySpecPostScanActionsArgs
- BigqueryExport Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExport
- Optional. If set, results will be exported to the provided BigQuery table.
- BigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExport
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExport
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExport
- Optional. If set, results will be exported to the provided BigQuery table.
- bigquery_export GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExport
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport Property Map
- Optional. If set, results will be exported to the provided BigQuery table.
GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExport, GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportArgs
- ResultsTable string
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- ResultsTable string
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable String
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable string
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- results_table str
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable String
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
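A minimal Python sketch of the post-scan export, following the results_table format shown above; the project, dataset, and table IDs are placeholders.
import pulumi_google_native.dataplex.v1 as dataplex

# Hypothetical post-scan action: write DataQualityScan results to an existing BigQuery table.
post_scan_actions = dataplex.GoogleCloudDataplexV1DataQualitySpecPostScanActionsArgs(
    bigquery_export=dataplex.GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportArgs(
        results_table="//bigquery.googleapis.com/projects/my-project/datasets/dq_results/tables/scan_output",
    ),
)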
GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse, GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponseArgs
- ResultsTable string
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- ResultsTable string
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable String
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable string
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- results_table str
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- resultsTable String
- Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse, GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponseArgs
- BigqueryExport Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
- Optional. If set, results will be exported to the provided BigQuery table.
- BigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
- Optional. If set, results will be exported to the provided BigQuery table.
- bigquery_export GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
- Optional. If set, results will be exported to the provided BigQuery table.
- bigqueryExport Property Map
- Optional. If set, results will be exported to the provided BigQuery table.
GoogleCloudDataplexV1DataQualitySpecResponse, GoogleCloudDataplexV1DataQualitySpecResponseArgs
- PostScanActions Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
- Optional. Actions to take upon job completion.
- RowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResponse>
- The list of rules to evaluate against a data source. At least one rule is required.
- SamplingPercent double
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- PostScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
- Optional. Actions to take upon job completion.
- RowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- Rules []GoogleCloudDataplexV1DataQualityRuleResponse
- The list of rules to evaluate against a data source. At least one rule is required.
- SamplingPercent float64
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- postScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
- Optional. Actions to take upon job completion.
- rowFilter String
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules List<GoogleCloudDataplexV1DataQualityRuleResponse>
- The list of rules to evaluate against a data source. At least one rule is required.
- samplingPercent Double
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- postScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
- Optional. Actions to take upon job completion.
- rowFilter string
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules GoogleCloudDataplexV1DataQualityRuleResponse[]
- The list of rules to evaluate against a data source. At least one rule is required.
- samplingPercent number
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- post_scan_actions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
- Optional. Actions to take upon job completion.
- row_filter str
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules Sequence[GoogleCloudDataplexV1DataQualityRuleResponse]
- The list of rules to evaluate against a data source. At least one rule is required.
- sampling_percent float
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
- postScanActions Property Map
- Optional. Actions to take upon job completion.
- rowFilter String
- Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
- rules List<Property Map>
- The list of rules to evaluate against a data source. At least one rule is required.
- samplingPercent Number
- Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
GoogleCloudDataplexV1DataScanExecutionSpec, GoogleCloudDataplexV1DataScanExecutionSpecArgs
- Field string
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- Trigger Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1Trigger
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
- Field string
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- Trigger GoogleCloudDataplexV1Trigger
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
- field String
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1Trigger
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
- field string
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1Trigger
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
- field str
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1Trigger
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
- field String
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger Property Map
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
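A short TypeScript sketch (assumed identifiers and IDs throughout) of how the execution spec fields above are used: an incremental field plus a schedule trigger instead of the default on-demand behavior.
import * as google_native from "@pulumi/google-native";

// Illustrative only: an execution spec with an incremental field and a cron schedule.
// "last_modified", the table, and the project are placeholder names.
const scheduledScan = new google_native.dataplex.v1.DataScan("scheduledScan", {
    dataScanId: "events-profile-scan",
    location: "us-central1",
    data: {
        resource: "//bigquery.googleapis.com/projects/my-project/datasets/app/tables/events",
    },
    dataProfileSpec: {},                         // empty spec: use the default profiling behavior
    executionSpec: {
        field: "last_modified",                  // immutable; limits each run to newer data
        trigger: {
            schedule: { cron: "TZ=America/New_York 0 3 * * *" }, // daily at 03:00 Eastern
        },
    },
});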
GoogleCloudDataplexV1DataScanExecutionSpecResponse, GoogleCloudDataplexV1DataScanExecutionSpecResponseArgs
- Field string
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- Trigger Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerResponse
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
- Field string
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- Trigger GoogleCloudDataplexV1TriggerResponse
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
- field String
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1TriggerResponse
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
- field string
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1TriggerResponse
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
- field str
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger GoogleCloudDataplexV1TriggerResponse
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
- field String
- Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
- trigger Property Map
- Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
GoogleCloudDataplexV1DataScanExecutionStatusResponse, GoogleCloudDataplexV1DataScanExecutionStatusResponseArgs
- LatestJobEndTime string
- The time when the latest DataScanJob ended.
- LatestJobStartTime string
- The time when the latest DataScanJob started.
- LatestJobEndTime string
- The time when the latest DataScanJob ended.
- LatestJobStartTime string
- The time when the latest DataScanJob started.
- latestJobEndTime String
- The time when the latest DataScanJob ended.
- latestJobStartTime String
- The time when the latest DataScanJob started.
- latestJobEndTime string
- The time when the latest DataScanJob ended.
- latestJobStartTime string
- The time when the latest DataScanJob started.
- latest_job_end_time str
- The time when the latest DataScanJob ended.
- latest_job_start_time str
- The time when the latest DataScanJob started.
- latestJobEndTime String
- The time when the latest DataScanJob ended.
- latestJobStartTime String
- The time when the latest DataScanJob started.
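The execution status above is output-only. A TypeScript sketch of exporting it from a stack follows; the scan configuration and IDs are placeholders, and the executionStatus output property is assumed to mirror the listing above.
import * as google_native from "@pulumi/google-native";

// Illustrative only: export the latest job timestamps of a DataScan.
const scan = new google_native.dataplex.v1.DataScan("statusDemo", {
    dataScanId: "status-demo-scan",
    location: "us-central1",
    data: { resource: "//bigquery.googleapis.com/projects/my-project/datasets/app/tables/events" },
    dataProfileSpec: {},
    executionSpec: { trigger: { onDemand: {} } },
});

// Both values stay empty until at least one DataScanJob has actually run.
export const latestJobStartTime = scan.executionStatus.latestJobStartTime;
export const latestJobEndTime = scan.executionStatus.latestJobEndTime;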
GoogleCloudDataplexV1DataSource, GoogleCloudDataplexV1DataSourceArgs
- Entity string
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- Resource string
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- Entity string
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- Resource string
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity String
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource String
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity string
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource string
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity str
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource str
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity String
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource String
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
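A TypeScript sketch of the two data source forms above; a DataScan sets exactly one of entity or resource. All project, lake, zone, dataset, and table identifiers are placeholders.
import * as google_native from "@pulumi/google-native";

// Illustrative only: point the scan directly at a BigQuery table via `resource` ...
const scanByResource = new google_native.dataplex.v1.DataScan("scanByResource", {
    dataScanId: "orders-scan-by-resource",
    location: "us-central1",
    data: {
        resource: "//bigquery.googleapis.com/projects/my-project/datasets/sales/tables/orders",
    },
    dataProfileSpec: {},
    executionSpec: { trigger: { onDemand: {} } },
});

// ... or at the Dataplex entity that already represents the same table via `entity`.
const scanByEntity = new google_native.dataplex.v1.DataScan("scanByEntity", {
    dataScanId: "orders-scan-by-entity",
    location: "us-central1",
    data: {
        entity: "projects/1234567890/locations/us-central1/lakes/my-lake/zones/raw/entities/orders",
    },
    dataProfileSpec: {},
    executionSpec: { trigger: { onDemand: {} } },
});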
GoogleCloudDataplexV1DataSourceResponse, GoogleCloudDataplexV1DataSourceResponseArgs
- Entity string
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- Resource string
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- Entity string
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- Resource string
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity String
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource String
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity string
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource string
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity str
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource str
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
- entity String
- Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
- resource String
- Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field could be: BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse, GoogleCloudDataplexV1ScannedDataIncrementalFieldResponseArgs
GoogleCloudDataplexV1ScannedDataResponse, GoogleCloudDataplexV1ScannedDataResponseArgs
- IncrementalField Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
- The range denoted by values of an incremental field.
- IncrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
- The range denoted by values of an incremental field.
- incrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
- The range denoted by values of an incremental field.
- incrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
- The range denoted by values of an incremental field.
- incremental_field GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
- The range denoted by values of an incremental field.
- incrementalField Property Map
- The range denoted by values of an incremental field.
GoogleCloudDataplexV1Trigger, GoogleCloudDataplexV1TriggerArgs
- OnDemand Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerOnDemand
- The scan runs once via the RunDataScan API.
- Schedule Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerSchedule
- The scan is scheduled to run periodically.
- OnDemand GoogleCloudDataplexV1TriggerOnDemand
- The scan runs once via the RunDataScan API.
- Schedule GoogleCloudDataplexV1TriggerSchedule
- The scan is scheduled to run periodically.
- onDemand GoogleCloudDataplexV1TriggerOnDemand
- The scan runs once via the RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerSchedule
- The scan is scheduled to run periodically.
- onDemand GoogleCloudDataplexV1TriggerOnDemand
- The scan runs once via the RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerSchedule
- The scan is scheduled to run periodically.
- on_demand GoogleCloudDataplexV1TriggerOnDemand
- The scan runs once via the RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerSchedule
- The scan is scheduled to run periodically.
- onDemand Property Map
- The scan runs once via the RunDataScan API.
- schedule Property Map
- The scan is scheduled to run periodically.
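The two trigger variants above are mutually exclusive. A minimal TypeScript sketch of each (all values are placeholders):
// Illustrative only: either run manually via the RunDataScan API ...
const onDemandTrigger = { onDemand: {} };

// ... or run periodically on a cron schedule (every 6 hours here).
const scheduledTrigger = { schedule: { cron: "CRON_TZ=Etc/UTC 0 */6 * * *" } };

// Either object can be passed as executionSpec.trigger when declaring a DataScan.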
GoogleCloudDataplexV1TriggerResponse, GoogleCloudDataplexV1TriggerResponseArgs
- OnDemand Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerOnDemandResponse
- The scan runs once via the RunDataScan API.
- Schedule Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerScheduleResponse
- The scan is scheduled to run periodically.
- OnDemand GoogleCloudDataplexV1TriggerOnDemandResponse
- The scan runs once via the RunDataScan API.
- Schedule GoogleCloudDataplexV1TriggerScheduleResponse
- The scan is scheduled to run periodically.
- onDemand GoogleCloudDataplexV1TriggerOnDemandResponse
- The scan runs once via the RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerScheduleResponse
- The scan is scheduled to run periodically.
- onDemand GoogleCloudDataplexV1TriggerOnDemandResponse
- The scan runs once via the RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerScheduleResponse
- The scan is scheduled to run periodically.
- on_demand GoogleCloudDataplexV1TriggerOnDemandResponse
- The scan runs once via the RunDataScan API.
- schedule GoogleCloudDataplexV1TriggerScheduleResponse
- The scan is scheduled to run periodically.
- onDemand Property Map
- The scan runs once via the RunDataScan API.
- schedule Property Map
- The scan is scheduled to run periodically.
GoogleCloudDataplexV1TriggerSchedule, GoogleCloudDataplexV1TriggerScheduleArgs
- Cron string
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- Cron string
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron String
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron string
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron str
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron String
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
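A small TypeScript sketch of cron values using the two timezone prefixes described above (the expressions and zones are arbitrary examples):
// Illustrative only: both prefixes pin the schedule to an IANA timezone.
const hourlyEastern = { cron: "CRON_TZ=America/New_York 1 * * * *" }; // minute 1 of every hour, Eastern
const mondayMorningParis = { cron: "TZ=Europe/Paris 0 6 * * 1" };     // Mondays at 06:00, Paris time

// Either value can be used as the `cron` field of a GoogleCloudDataplexV1TriggerSchedule.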
GoogleCloudDataplexV1TriggerScheduleResponse, GoogleCloudDataplexV1TriggerScheduleResponseArgs
- Cron string
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- Cron string
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron String
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron string
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron str
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
- cron String
- Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a timezone in the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from the IANA time zone database (see the Wikipedia list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0