
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.dataplex/v1.DataScan


    Creates a DataScan resource. Auto-naming is currently not supported for this resource.

    Create DataScan Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new DataScan(name: string, args: DataScanArgs, opts?: CustomResourceOptions);
    @overload
    def DataScan(resource_name: str,
                 args: DataScanArgs,
                 opts: Optional[ResourceOptions] = None)
    
    @overload
    def DataScan(resource_name: str,
                 opts: Optional[ResourceOptions] = None,
                 data: Optional[GoogleCloudDataplexV1DataSourceArgs] = None,
                 data_scan_id: Optional[str] = None,
                 data_profile_spec: Optional[GoogleCloudDataplexV1DataProfileSpecArgs] = None,
                 data_quality_spec: Optional[GoogleCloudDataplexV1DataQualitySpecArgs] = None,
                 description: Optional[str] = None,
                 display_name: Optional[str] = None,
                 execution_spec: Optional[GoogleCloudDataplexV1DataScanExecutionSpecArgs] = None,
                 labels: Optional[Mapping[str, str]] = None,
                 location: Optional[str] = None,
                 project: Optional[str] = None)
    func NewDataScan(ctx *Context, name string, args DataScanArgs, opts ...ResourceOption) (*DataScan, error)
    public DataScan(string name, DataScanArgs args, CustomResourceOptions? opts = null)
    public DataScan(String name, DataScanArgs args)
    public DataScan(String name, DataScanArgs args, CustomResourceOptions options)
    
    type: google-native:dataplex/v1:DataScan
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args DataScanArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args DataScanArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args DataScanArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args DataScanArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args DataScanArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.
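
    The opts/options parameter accepts the standard Pulumi resource options (protection, explicit dependencies, providers, and so on). The following Python sketch is illustrative only: it reuses placeholder property values from this page and assumes you want to protect the scan from accidental deletion.

    import pulumi
    import pulumi_google_native as google_native

    # Placeholder property values, matching the reference example below.
    # pulumi.ResourceOptions carries the resource options described above.
    data_scan = google_native.dataplex.v1.DataScan(
        "dataScanResource",
        data_scan_id="string",
        data={"resource": "string"},
        location="string",
        project="string",
        opts=pulumi.ResourceOptions(protect=True),
    )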

    Constructor example

    The following reference example uses placeholder values for all input properties.

    var dataScanResource = new GoogleNative.Dataplex.V1.DataScan("dataScanResource", new()
    {
        Data = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataSourceArgs
        {
            Entity = "string",
            Resource = "string",
        },
        DataScanId = "string",
        DataProfileSpec = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecArgs
        {
            ExcludeFields = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs
            {
                FieldNames = new[]
                {
                    "string",
                },
            },
            IncludeFields = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs
            {
                FieldNames = new[]
                {
                    "string",
                },
            },
            PostScanActions = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecPostScanActionsArgs
            {
                BigqueryExport = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportArgs
                {
                    ResultsTable = "string",
                },
            },
            RowFilter = "string",
            SamplingPercent = 0,
        },
        DataQualitySpec = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecArgs
        {
            Rules = new[]
            {
                new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleArgs
                {
                    Dimension = "string",
                    RangeExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRangeExpectationArgs
                    {
                        MaxValue = "string",
                        MinValue = "string",
                        StrictMaxEnabled = false,
                        StrictMinEnabled = false,
                    },
                    Description = "string",
                    IgnoreNull = false,
                    Name = "string",
                    NonNullExpectation = null,
                    Column = "string",
                    RegexExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRegexExpectationArgs
                    {
                        Regex = "string",
                    },
                    RowConditionExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationArgs
                    {
                        SqlExpression = "string",
                    },
                    SetExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleSetExpectationArgs
                    {
                        Values = new[]
                        {
                            "string",
                        },
                    },
                    StatisticRangeExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationArgs
                    {
                        MaxValue = "string",
                        MinValue = "string",
                        Statistic = GoogleNative.Dataplex.V1.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic.StatisticUndefined,
                        StrictMaxEnabled = false,
                        StrictMinEnabled = false,
                    },
                    TableConditionExpectation = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationArgs
                    {
                        SqlExpression = "string",
                    },
                    Threshold = 0,
                    UniquenessExpectation = null,
                },
            },
            PostScanActions = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecPostScanActionsArgs
            {
                BigqueryExport = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportArgs
                {
                    ResultsTable = "string",
                },
            },
            RowFilter = "string",
            SamplingPercent = 0,
        },
        Description = "string",
        DisplayName = "string",
        ExecutionSpec = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataScanExecutionSpecArgs
        {
            Field = "string",
            Trigger = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerArgs
            {
                OnDemand = null,
                Schedule = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerScheduleArgs
                {
                    Cron = "string",
                },
            },
        },
        Labels = 
        {
            { "string", "string" },
        },
        Location = "string",
        Project = "string",
    });
    
    example, err := dataplex.NewDataScan(ctx, "dataScanResource", &dataplex.DataScanArgs{
    	Data: &dataplex.GoogleCloudDataplexV1DataSourceArgs{
    		Entity:   pulumi.String("string"),
    		Resource: pulumi.String("string"),
    	},
    	DataScanId: pulumi.String("string"),
    	DataProfileSpec: &dataplex.GoogleCloudDataplexV1DataProfileSpecArgs{
    		ExcludeFields: &dataplex.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs{
    			FieldNames: pulumi.StringArray{
    				pulumi.String("string"),
    			},
    		},
    		IncludeFields: &dataplex.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs{
    			FieldNames: pulumi.StringArray{
    				pulumi.String("string"),
    			},
    		},
    		PostScanActions: &dataplex.GoogleCloudDataplexV1DataProfileSpecPostScanActionsArgs{
    			BigqueryExport: &dataplex.GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportArgs{
    				ResultsTable: pulumi.String("string"),
    			},
    		},
    		RowFilter:       pulumi.String("string"),
    		SamplingPercent: pulumi.Float64(0),
    	},
    	DataQualitySpec: &dataplex.GoogleCloudDataplexV1DataQualitySpecArgs{
    		Rules: dataplex.GoogleCloudDataplexV1DataQualityRuleArray{
    			&dataplex.GoogleCloudDataplexV1DataQualityRuleArgs{
    				Dimension: pulumi.String("string"),
    				RangeExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleRangeExpectationArgs{
    					MaxValue:         pulumi.String("string"),
    					MinValue:         pulumi.String("string"),
    					StrictMaxEnabled: pulumi.Bool(false),
    					StrictMinEnabled: pulumi.Bool(false),
    				},
    				Description:        pulumi.String("string"),
    				IgnoreNull:         pulumi.Bool(false),
    				Name:               pulumi.String("string"),
    				NonNullExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleNonNullExpectationArgs{},
    				Column:             pulumi.String("string"),
    				RegexExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleRegexExpectationArgs{
    					Regex: pulumi.String("string"),
    				},
    				RowConditionExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationArgs{
    					SqlExpression: pulumi.String("string"),
    				},
    				SetExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleSetExpectationArgs{
    					Values: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    				},
    				StatisticRangeExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationArgs{
    					MaxValue:         pulumi.String("string"),
    					MinValue:         pulumi.String("string"),
    					Statistic:        dataplex.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticStatisticUndefined,
    					StrictMaxEnabled: pulumi.Bool(false),
    					StrictMinEnabled: pulumi.Bool(false),
    				},
    				TableConditionExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationArgs{
    					SqlExpression: pulumi.String("string"),
    				},
    				Threshold:             pulumi.Float64(0),
    				UniquenessExpectation: &dataplex.GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationArgs{},
    			},
    		},
    		PostScanActions: &dataplex.GoogleCloudDataplexV1DataQualitySpecPostScanActionsArgs{
    			BigqueryExport: &dataplex.GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportArgs{
    				ResultsTable: pulumi.String("string"),
    			},
    		},
    		RowFilter:       pulumi.String("string"),
    		SamplingPercent: pulumi.Float64(0),
    	},
    	Description: pulumi.String("string"),
    	DisplayName: pulumi.String("string"),
    	ExecutionSpec: &dataplex.GoogleCloudDataplexV1DataScanExecutionSpecArgs{
    		Field: pulumi.String("string"),
    		Trigger: &dataplex.GoogleCloudDataplexV1TriggerArgs{
    			OnDemand: &dataplex.GoogleCloudDataplexV1TriggerOnDemandArgs{},
    			Schedule: &dataplex.GoogleCloudDataplexV1TriggerScheduleArgs{
    				Cron: pulumi.String("string"),
    			},
    		},
    	},
    	Labels: pulumi.StringMap{
    		"string": pulumi.String("string"),
    	},
    	Location: pulumi.String("string"),
    	Project:  pulumi.String("string"),
    })
    
    var dataScanResource = new DataScan("dataScanResource", DataScanArgs.builder()
        .data(GoogleCloudDataplexV1DataSourceArgs.builder()
            .entity("string")
            .resource("string")
            .build())
        .dataScanId("string")
        .dataProfileSpec(GoogleCloudDataplexV1DataProfileSpecArgs.builder()
            .excludeFields(GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs.builder()
                .fieldNames("string")
                .build())
            .includeFields(GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs.builder()
                .fieldNames("string")
                .build())
            .postScanActions(GoogleCloudDataplexV1DataProfileSpecPostScanActionsArgs.builder()
                .bigqueryExport(GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportArgs.builder()
                    .resultsTable("string")
                    .build())
                .build())
            .rowFilter("string")
            .samplingPercent(0)
            .build())
        .dataQualitySpec(GoogleCloudDataplexV1DataQualitySpecArgs.builder()
            .rules(GoogleCloudDataplexV1DataQualityRuleArgs.builder()
                .dimension("string")
                .rangeExpectation(GoogleCloudDataplexV1DataQualityRuleRangeExpectationArgs.builder()
                    .maxValue("string")
                    .minValue("string")
                    .strictMaxEnabled(false)
                    .strictMinEnabled(false)
                    .build())
                .description("string")
                .ignoreNull(false)
                .name("string")
                .nonNullExpectation()
                .column("string")
                .regexExpectation(GoogleCloudDataplexV1DataQualityRuleRegexExpectationArgs.builder()
                    .regex("string")
                    .build())
                .rowConditionExpectation(GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationArgs.builder()
                    .sqlExpression("string")
                    .build())
                .setExpectation(GoogleCloudDataplexV1DataQualityRuleSetExpectationArgs.builder()
                    .values("string")
                    .build())
                .statisticRangeExpectation(GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationArgs.builder()
                    .maxValue("string")
                    .minValue("string")
                    .statistic("STATISTIC_UNDEFINED")
                    .strictMaxEnabled(false)
                    .strictMinEnabled(false)
                    .build())
                .tableConditionExpectation(GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationArgs.builder()
                    .sqlExpression("string")
                    .build())
                .threshold(0)
                .uniquenessExpectation()
                .build())
            .postScanActions(GoogleCloudDataplexV1DataQualitySpecPostScanActionsArgs.builder()
                .bigqueryExport(GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportArgs.builder()
                    .resultsTable("string")
                    .build())
                .build())
            .rowFilter("string")
            .samplingPercent(0)
            .build())
        .description("string")
        .displayName("string")
        .executionSpec(GoogleCloudDataplexV1DataScanExecutionSpecArgs.builder()
            .field("string")
            .trigger(GoogleCloudDataplexV1TriggerArgs.builder()
                .onDemand()
                .schedule(GoogleCloudDataplexV1TriggerScheduleArgs.builder()
                    .cron("string")
                    .build())
                .build())
            .build())
        .labels(Map.of("string", "string"))
        .location("string")
        .project("string")
        .build());
    
    data_scan_resource = google_native.dataplex.v1.DataScan("dataScanResource",
        data={
            "entity": "string",
            "resource": "string",
        },
        data_scan_id="string",
        data_profile_spec={
            "exclude_fields": {
                "field_names": ["string"],
            },
            "include_fields": {
                "field_names": ["string"],
            },
            "post_scan_actions": {
                "bigquery_export": {
                    "results_table": "string",
                },
            },
            "row_filter": "string",
            "sampling_percent": 0,
        },
        data_quality_spec={
            "rules": [{
                "dimension": "string",
                "range_expectation": {
                    "max_value": "string",
                    "min_value": "string",
                    "strict_max_enabled": False,
                    "strict_min_enabled": False,
                },
                "description": "string",
                "ignore_null": False,
                "name": "string",
                "non_null_expectation": {},
                "column": "string",
                "regex_expectation": {
                    "regex": "string",
                },
                "row_condition_expectation": {
                    "sql_expression": "string",
                },
                "set_expectation": {
                    "values": ["string"],
                },
                "statistic_range_expectation": {
                    "max_value": "string",
                    "min_value": "string",
                    "statistic": google_native.dataplex.v1.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic.STATISTIC_UNDEFINED,
                    "strict_max_enabled": False,
                    "strict_min_enabled": False,
                },
                "table_condition_expectation": {
                    "sql_expression": "string",
                },
                "threshold": 0,
                "uniqueness_expectation": {},
            }],
            "post_scan_actions": {
                "bigquery_export": {
                    "results_table": "string",
                },
            },
            "row_filter": "string",
            "sampling_percent": 0,
        },
        description="string",
        display_name="string",
        execution_spec={
            "field": "string",
            "trigger": {
                "on_demand": {},
                "schedule": {
                    "cron": "string",
                },
            },
        },
        labels={
            "string": "string",
        },
        location="string",
        project="string")
    
    const dataScanResource = new google_native.dataplex.v1.DataScan("dataScanResource", {
        data: {
            entity: "string",
            resource: "string",
        },
        dataScanId: "string",
        dataProfileSpec: {
            excludeFields: {
                fieldNames: ["string"],
            },
            includeFields: {
                fieldNames: ["string"],
            },
            postScanActions: {
                bigqueryExport: {
                    resultsTable: "string",
                },
            },
            rowFilter: "string",
            samplingPercent: 0,
        },
        dataQualitySpec: {
            rules: [{
                dimension: "string",
                rangeExpectation: {
                    maxValue: "string",
                    minValue: "string",
                    strictMaxEnabled: false,
                    strictMinEnabled: false,
                },
                description: "string",
                ignoreNull: false,
                name: "string",
                nonNullExpectation: {},
                column: "string",
                regexExpectation: {
                    regex: "string",
                },
                rowConditionExpectation: {
                    sqlExpression: "string",
                },
                setExpectation: {
                    values: ["string"],
                },
                statisticRangeExpectation: {
                    maxValue: "string",
                    minValue: "string",
                    statistic: google_native.dataplex.v1.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic.StatisticUndefined,
                    strictMaxEnabled: false,
                    strictMinEnabled: false,
                },
                tableConditionExpectation: {
                    sqlExpression: "string",
                },
                threshold: 0,
                uniquenessExpectation: {},
            }],
            postScanActions: {
                bigqueryExport: {
                    resultsTable: "string",
                },
            },
            rowFilter: "string",
            samplingPercent: 0,
        },
        description: "string",
        displayName: "string",
        executionSpec: {
            field: "string",
            trigger: {
                onDemand: {},
                schedule: {
                    cron: "string",
                },
            },
        },
        labels: {
            string: "string",
        },
        location: "string",
        project: "string",
    });
    
    type: google-native:dataplex/v1:DataScan
    properties:
        data:
            entity: string
            resource: string
        dataProfileSpec:
            excludeFields:
                fieldNames:
                    - string
            includeFields:
                fieldNames:
                    - string
            postScanActions:
                bigqueryExport:
                    resultsTable: string
            rowFilter: string
            samplingPercent: 0
        dataQualitySpec:
            postScanActions:
                bigqueryExport:
                    resultsTable: string
            rowFilter: string
            rules:
                - column: string
                  description: string
                  dimension: string
                  ignoreNull: false
                  name: string
                  nonNullExpectation: {}
                  rangeExpectation:
                    maxValue: string
                    minValue: string
                    strictMaxEnabled: false
                    strictMinEnabled: false
                  regexExpectation:
                    regex: string
                  rowConditionExpectation:
                    sqlExpression: string
                  setExpectation:
                    values:
                        - string
                  statisticRangeExpectation:
                    maxValue: string
                    minValue: string
                    statistic: STATISTIC_UNDEFINED
                    strictMaxEnabled: false
                    strictMinEnabled: false
                  tableConditionExpectation:
                    sqlExpression: string
                  threshold: 0
                  uniquenessExpectation: {}
            samplingPercent: 0
        dataScanId: string
        description: string
        displayName: string
        executionSpec:
            field: string
            trigger:
                onDemand: {}
                schedule:
                    cron: string
        labels:
            string: string
        location: string
        project: string
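
    The reference example above fills in every input with a placeholder. In practice a scan typically sets only the fields it needs. The Python sketch below creates a scheduled data-quality scan over a BigQuery table; the project, dataset, table, region, cron expression, and rule values are illustrative assumptions, not values taken from this page.

    import pulumi
    import pulumi_google_native as google_native

    # Illustrative values throughout; substitute your own project, dataset, and table.
    # Auto-naming is not supported, so data_scan_id must be set explicitly.
    quality_scan = google_native.dataplex.v1.DataScan(
        "ordersQualityScan",
        data_scan_id="orders-quality-scan",
        location="us-central1",
        project="my-project",
        data={
            # Fully qualified BigQuery table resource name (placeholder).
            "resource": "//bigquery.googleapis.com/projects/my-project/datasets/sales/tables/orders",
        },
        data_quality_spec={
            "rules": [{
                "column": "order_id",
                "dimension": "COMPLETENESS",
                "non_null_expectation": {},
                "threshold": 1.0,
            }],
            "sampling_percent": 10,
        },
        execution_spec={
            "trigger": {
                # Cron expression; runs daily at 03:00 in this sketch.
                "schedule": {"cron": "0 3 * * *"},
            },
        },
    )

    pulumi.export("dataScanName", quality_scan.name)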
    

    DataScan Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
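
    For example, the data source input can be written either way; the two forms below are equivalent. The args class name is taken from the constructor overloads above, and the sketch assumes it is exported from the versioned module, as is typical for generated Pulumi SDKs.

    import pulumi_google_native.dataplex.v1 as dataplex_v1

    # As a typed argument class (placeholder value):
    source_as_args = dataplex_v1.GoogleCloudDataplexV1DataSourceArgs(resource="string")

    # As an equivalent dictionary literal:
    source_as_dict = {"resource": "string"}

    # Either value can be passed as the DataScan `data` input.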

    The DataScan resource accepts the following input properties:

    Data Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataSource
    The data source for DataScan.
    DataScanId string
    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
    DataProfileSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpec
    DataProfileScan related setting.
    DataQualitySpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpec
    DataQualityScan related setting.
    Description string
    Optional. Description of the scan. Must be between 1-1024 characters.
    DisplayName string
    Optional. User-friendly display name. Must be between 1-256 characters.
    ExecutionSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataScanExecutionSpec
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    Labels Dictionary<string, string>
    Optional. User-defined labels for the scan.
    Location string
    Project string
    Data GoogleCloudDataplexV1DataSourceArgs
    The data source for DataScan.
    DataScanId string
    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
    DataProfileSpec GoogleCloudDataplexV1DataProfileSpecArgs
    DataProfileScan related setting.
    DataQualitySpec GoogleCloudDataplexV1DataQualitySpecArgs
    DataQualityScan related setting.
    Description string
    Optional. Description of the scan. Must be between 1-1024 characters.
    DisplayName string
    Optional. User-friendly display name. Must be between 1-256 characters.
    ExecutionSpec GoogleCloudDataplexV1DataScanExecutionSpecArgs
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    Labels map[string]string
    Optional. User-defined labels for the scan.
    Location string
    Project string
    data GoogleCloudDataplexV1DataSource
    The data source for DataScan.
    dataScanId String
    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
    dataProfileSpec GoogleCloudDataplexV1DataProfileSpec
    DataProfileScan related setting.
    dataQualitySpec GoogleCloudDataplexV1DataQualitySpec
    DataQualityScan related setting.
    description String
    Optional. Description of the scan. Must be between 1-1024 characters.
    displayName String
    Optional. User-friendly display name. Must be between 1-256 characters.
    executionSpec GoogleCloudDataplexV1DataScanExecutionSpec
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    labels Map<String,String>
    Optional. User-defined labels for the scan.
    location String
    project String
    data GoogleCloudDataplexV1DataSource
    The data source for DataScan.
    dataScanId string
    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
    dataProfileSpec GoogleCloudDataplexV1DataProfileSpec
    DataProfileScan related setting.
    dataQualitySpec GoogleCloudDataplexV1DataQualitySpec
    DataQualityScan related setting.
    description string
    Optional. Description of the scan. Must be between 1-1024 characters.
    displayName string
    Optional. User-friendly display name. Must be between 1-256 characters.
    executionSpec GoogleCloudDataplexV1DataScanExecutionSpec
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    labels {[key: string]: string}
    Optional. User-defined labels for the scan.
    location string
    project string
    data GoogleCloudDataplexV1DataSourceArgs
    The data source for DataScan.
    data_scan_id str
    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
    data_profile_spec GoogleCloudDataplexV1DataProfileSpecArgs
    DataProfileScan related setting.
    data_quality_spec GoogleCloudDataplexV1DataQualitySpecArgs
    DataQualityScan related setting.
    description str
    Optional. Description of the scan. Must be between 1-1024 characters.
    display_name str
    Optional. User-friendly display name. Must be between 1-256 characters.
    execution_spec GoogleCloudDataplexV1DataScanExecutionSpecArgs
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    labels Mapping[str, str]
    Optional. User-defined labels for the scan.
    location str
    project str
    data Property Map
    The data source for DataScan.
    dataScanId String
    Required. DataScan identifier. Must contain only lowercase letters, numbers and hyphens. Must start with a letter. Must end with a number or a letter. Must be between 1-63 characters. Must be unique within the customer project / location.
    dataProfileSpec Property Map
    DataProfileScan related setting.
    dataQualitySpec Property Map
    DataQualityScan related setting.
    description String
    Optional. Description of the scan. Must be between 1-1024 characters.
    displayName String
    Optional. User-friendly display name. Must be between 1-256 characters.
    executionSpec Property Map
    Optional. DataScan execution settings. If not specified, the fields in it will use their default values.
    labels Map<String>
    Optional. User-defined labels for the scan.
    location String
    project String

    Outputs

    All input properties are implicitly available as output properties. Additionally, the DataScan resource produces the following output properties (see the usage sketch after this list):

    CreateTime string
    The time when the scan was created.
    DataProfileResult Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataProfileResultResponse
    The result of the data profile scan.
    DataQualityResult Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataQualityResultResponse
    The result of the data quality scan.
    ExecutionStatus Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1DataScanExecutionStatusResponse
    Status of the data scan execution.
    Id string
    The provider-assigned unique ID for this managed resource.
    Name string
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    State string
    Current state of the DataScan.
    Type string
    The type of DataScan.
    Uid string
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    UpdateTime string
    The time when the scan was last updated.
    CreateTime string
    The time when the scan was created.
    DataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
    The result of the data profile scan.
    DataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
    The result of the data quality scan.
    ExecutionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
    Status of the data scan execution.
    Id string
    The provider-assigned unique ID for this managed resource.
    Name string
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    State string
    Current state of the DataScan.
    Type string
    The type of DataScan.
    Uid string
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    UpdateTime string
    The time when the scan was last updated.
    createTime String
    The time when the scan was created.
    dataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
    The result of the data profile scan.
    dataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
    The result of the data quality scan.
    executionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
    Status of the data scan execution.
    id String
    The provider-assigned unique ID for this managed resource.
    name String
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    state String
    Current state of the DataScan.
    type String
    The type of DataScan.
    uid String
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    updateTime String
    The time when the scan was last updated.
    createTime string
    The time when the scan was created.
    dataProfileResult GoogleCloudDataplexV1DataProfileResultResponse
    The result of the data profile scan.
    dataQualityResult GoogleCloudDataplexV1DataQualityResultResponse
    The result of the data quality scan.
    executionStatus GoogleCloudDataplexV1DataScanExecutionStatusResponse
    Status of the data scan execution.
    id string
    The provider-assigned unique ID for this managed resource.
    name string
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    state string
    Current state of the DataScan.
    type string
    The type of DataScan.
    uid string
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    updateTime string
    The time when the scan was last updated.
    create_time str
    The time when the scan was created.
    data_profile_result GoogleCloudDataplexV1DataProfileResultResponse
    The result of the data profile scan.
    data_quality_result GoogleCloudDataplexV1DataQualityResultResponse
    The result of the data quality scan.
    execution_status GoogleCloudDataplexV1DataScanExecutionStatusResponse
    Status of the data scan execution.
    id str
    The provider-assigned unique ID for this managed resource.
    name str
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    state str
    Current state of the DataScan.
    type str
    The type of DataScan.
    uid str
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    update_time str
    The time when the scan was last updated.
    createTime String
    The time when the scan was created.
    dataProfileResult Property Map
    The result of the data profile scan.
    dataQualityResult Property Map
    The result of the data quality scan.
    executionStatus Property Map
    Status of the data scan execution.
    id String
    The provider-assigned unique ID for this managed resource.
    name String
    The relative resource name of the scan, of the form: projects/{project}/locations/{location_id}/dataScans/{datascan_id}, where project refers to a project_id or project_number and location_id refers to a GCP region.
    state String
    Current state of the DataScan.
    type String
    The type of DataScan.
    uid String
    System generated globally unique ID for the scan. This ID will be different if the scan is deleted and re-created with the same name.
    updateTime String
    The time when the scan was last updated.
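
    The sketch below shows how these output properties might be consumed from Python once the resource is declared; the placeholder inputs mirror the reference example, and the comments paraphrase the outputs listed above.

    import pulumi
    import pulumi_google_native as google_native

    scan = google_native.dataplex.v1.DataScan(
        "dataScanResource",
        data_scan_id="string",
        data={"resource": "string"},
        location="string",
        project="string",
    )

    # Output properties resolve after the scan is created.
    pulumi.export("scanName", scan.name)    # relative resource name of the scan
    pulumi.export("scanState", scan.state)  # current state of the DataScan
    pulumi.export("scanType", scan.type)    # type of DataScan (profile vs. quality)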

    Supporting Types

    GoogleCloudDataplexV1DataProfileResultPostScanActionsResultBigQueryExportResultResponse, GoogleCloudDataplexV1DataProfileResultPostScanActionsResultBigQueryExportResultResponseArgs

    Message string
    Additional information about the BigQuery exporting.
    State string
    Execution state for the BigQuery exporting.
    Message string
    Additional information about the BigQuery exporting.
    State string
    Execution state for the BigQuery exporting.
    message String
    Additional information about the BigQuery exporting.
    state String
    Execution state for the BigQuery exporting.
    message string
    Additional information about the BigQuery exporting.
    state string
    Execution state for the BigQuery exporting.
    message str
    Additional information about the BigQuery exporting.
    state str
    Execution state for the BigQuery exporting.
    message String
    Additional information about the BigQuery exporting.
    state String
    Execution state for the BigQuery exporting.

    GoogleCloudDataplexV1DataProfileResultPostScanActionsResultResponse, GoogleCloudDataplexV1DataProfileResultPostScanActionsResultResponseArgs

    bigqueryExportResult Property Map
    The result of BigQuery export post scan action.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponseArgs

    Average double
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    Max double
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    Min double
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    Quartiles List<double>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
    StandardDeviation double
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    Average float64
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    Max float64
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    Min float64
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    Quartiles []float64
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
    StandardDeviation float64
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average Double
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max Double
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min Double
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles List<Double>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation Double
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average number
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max number
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min number
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles number[]
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation number
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average float
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max float
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min float
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles Sequence[float]
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
    standard_deviation float
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average Number
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max Number
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min Number
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles List<Number>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation Number
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponseArgs

    Average double
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    Max string
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    Min string
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    Quartiles List<string>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    StandardDeviation double
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    Average float64
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    Max string
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    Min string
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    Quartiles []string
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    StandardDeviation float64
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average Double
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max String
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min String
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles List<String>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation Double
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average number
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max string
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min string
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles string[]
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation number
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average float
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max str
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min str
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles Sequence[str]
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    standard_deviation float
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.
    average Number
    Average of non-null values in the scanned data. NaN, if the field has a NaN.
    max String
    Maximum of non-null values in the scanned data. NaN, if the field has a NaN.
    min String
    Minimum of non-null values in the scanned data. NaN, if the field has a NaN.
    quartiles List<String>
    A quartile divides the number of data points into four parts, or quarters, of more-or-less equal size. The three main quartiles used are: The first quartile (Q1) splits off the lowest 25% of data from the highest 75%. It is also known as the lower or 25th empirical quartile, as 25% of the data is below this point. The second quartile (Q2) is the median of a data set. So, 50% of the data lies below this point. The third quartile (Q3) splits off the highest 25% of data from the lowest 75%. It is known as the upper or 75th empirical quartile, as 75% of the data lies below this point. Here, the quartiles are provided as an ordered list of approximate quartile values for the scanned data, occurring in order Q1, median, Q3.
    standardDeviation Number
    Standard deviation of non-null values in the scanned data. NaN, if the field has a NaN.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponseArgs

    DistinctRatio double
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    DoubleProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
    Double type field information.
    IntegerProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
    Integer type field information.
    NullRatio double
    Ratio of rows with null value against total scanned rows.
    StringProfile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
    String type field information.
    TopNValues List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse>
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    DistinctRatio float64
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    DoubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
    Double type field information.
    IntegerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
    Integer type field information.
    NullRatio float64
    Ratio of rows with null value against total scanned rows.
    StringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
    String type field information.
    TopNValues []GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    distinctRatio Double
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    doubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
    Double type field information.
    integerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
    Integer type field information.
    nullRatio Double
    Ratio of rows with null value against total scanned rows.
    stringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
    String type field information.
    topNValues List<GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse>
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    distinctRatio number
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    doubleProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
    Double type field information.
    integerProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
    Integer type field information.
    nullRatio number
    Ratio of rows with null value against total scanned rows.
    stringProfile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
    String type field information.
    topNValues GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse[]
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    distinct_ratio float
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    double_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoDoubleFieldInfoResponse
    Double type field information.
    integer_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoIntegerFieldInfoResponse
    Integer type field information.
    null_ratio float
    Ratio of rows with null value against total scanned rows.
    string_profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse
    String type field information.
    top_n_values Sequence[GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse]
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    distinctRatio Number
    Ratio of rows with distinct values against total scanned rows. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.
    doubleProfile Property Map
    Double type field information.
    integerProfile Property Map
    Integer type field information.
    nullRatio Number
    Ratio of rows with null value against total scanned rows.
    stringProfile Property Map
    String type field information.
    topNValues List<Property Map>
    The list of top N non-null values, frequency and ratio with which they occur in the scanned data. N is 10 or equal to the number of distinct values in the field, whichever is smaller. Not available for complex non-groupable field type RECORD and fields with REPEATABLE mode.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoStringFieldInfoResponseArgs

    AverageLength double
    Average length of non-null values in the scanned data.
    MaxLength string
    Maximum length of non-null values in the scanned data.
    MinLength string
    Minimum length of non-null values in the scanned data.
    AverageLength float64
    Average length of non-null values in the scanned data.
    MaxLength string
    Maximum length of non-null values in the scanned data.
    MinLength string
    Minimum length of non-null values in the scanned data.
    averageLength Double
    Average length of non-null values in the scanned data.
    maxLength String
    Maximum length of non-null values in the scanned data.
    minLength String
    Minimum length of non-null values in the scanned data.
    averageLength number
    Average length of non-null values in the scanned data.
    maxLength string
    Maximum length of non-null values in the scanned data.
    minLength string
    Minimum length of non-null values in the scanned data.
    average_length float
    Average length of non-null values in the scanned data.
    max_length str
    Maximum length of non-null values in the scanned data.
    min_length str
    Minimum length of non-null values in the scanned data.
    averageLength Number
    Average length of non-null values in the scanned data.
    maxLength String
    Maximum length of non-null values in the scanned data.
    minLength String
    Minimum length of non-null values in the scanned data.

    GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoTopNValueResponseArgs

    Count string
    Count of the corresponding value in the scanned data.
    Ratio double
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    Value string
    String value of a top N non-null value.
    Count string
    Count of the corresponding value in the scanned data.
    Ratio float64
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    Value string
    String value of a top N non-null value.
    count String
    Count of the corresponding value in the scanned data.
    ratio Double
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    value String
    String value of a top N non-null value.
    count string
    Count of the corresponding value in the scanned data.
    ratio number
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    value string
    String value of a top N non-null value.
    count str
    Count of the corresponding value in the scanned data.
    ratio float
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    value str
    String value of a top N non-null value.
    count String
    Count of the corresponding value in the scanned data.
    ratio Number
    Ratio of the corresponding value in the field against the total number of rows in the scanned data.
    value String
    String value of a top N non-null value.

    GoogleCloudDataplexV1DataProfileResultProfileFieldResponse, GoogleCloudDataplexV1DataProfileResultProfileFieldResponseArgs

    Mode string
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    Name string
    The name of the field.
    Profile Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
    Profile information for the corresponding field.
    Type string
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
    Mode string
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    Name string
    The name of the field.
    Profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
    Profile information for the corresponding field.
    Type string
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
    mode String
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    name String
    The name of the field.
    profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
    Profile information for the corresponding field.
    type String
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
    mode string
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    name string
    The name of the field.
    profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
    Profile information for the corresponding field.
    type string
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
    mode str
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    name str
    The name of the field.
    profile GoogleCloudDataplexV1DataProfileResultProfileFieldProfileInfoResponse
    Profile information for the corresponding field.
    type str
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).
    mode String
    The mode of the field. Possible values include: REQUIRED, if it is a required field. NULLABLE, if it is an optional field. REPEATED, if it is a repeated field.
    name String
    The name of the field.
    profile Property Map
    Profile information for the corresponding field.
    type String
    The data type retrieved from the schema of the data source. For instance, for a BigQuery native table, it is the BigQuery Table Schema (https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#tablefieldschema). For a Dataplex Entity, it is the Entity Schema (https://cloud.google.com/dataplex/docs/reference/rpc/google.cloud.dataplex.v1#type_3).

    GoogleCloudDataplexV1DataProfileResultProfileResponse, GoogleCloudDataplexV1DataProfileResultProfileResponseArgs

    Fields []GoogleCloudDataplexV1DataProfileResultProfileFieldResponse
    List of fields with structural and profile information for each field.
    fields List<GoogleCloudDataplexV1DataProfileResultProfileFieldResponse>
    List of fields with structural and profile information for each field.
    fields GoogleCloudDataplexV1DataProfileResultProfileFieldResponse[]
    List of fields with structural and profile information for each field.
    fields Sequence[GoogleCloudDataplexV1DataProfileResultProfileFieldResponse]
    List of fields with structural and profile information for each field.
    fields List<Property Map>
    List of fields with structural and profile information for each field.

    GoogleCloudDataplexV1DataProfileResultResponse, GoogleCloudDataplexV1DataProfileResultResponseArgs

    postScanActionsResult Property Map
    The result of post scan actions.
    profile Property Map
    The profile information per field.
    rowCount String
    The count of rows scanned.
    scannedData Property Map
    The data scanned for this result.

    GoogleCloudDataplexV1DataProfileSpec, GoogleCloudDataplexV1DataProfileSpecArgs

    ExcludeFields Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFields
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    IncludeFields Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFields
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    PostScanActions Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecPostScanActions
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    SamplingPercent double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    ExcludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    IncludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    PostScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActions
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    SamplingPercent float64
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    excludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    includeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    postScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActions
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent Double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    excludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    includeFields GoogleCloudDataplexV1DataProfileSpecSelectedFields
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    postScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActions
    Optional. Actions to take upon job completion.
    rowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    exclude_fields GoogleCloudDataplexV1DataProfileSpecSelectedFields
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    include_fields GoogleCloudDataplexV1DataProfileSpecSelectedFields
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    post_scan_actions GoogleCloudDataplexV1DataProfileSpecPostScanActions
    Optional. Actions to take upon job completion.
    row_filter str
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    sampling_percent float
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    excludeFields Property Map
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    includeFields Property Map
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    postScanActions Property Map
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent Number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
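
    To make these fields concrete, the following TypeScript sketch (using the @pulumi/google-native SDK) creates an on-demand profile scan over a BigQuery table with sampling and a row filter, then exports the scanned row count of the latest job. The project, dataset, table, and column names are placeholders, not values taken from this reference.

    import * as google_native from "@pulumi/google-native";

    // Minimal sketch: an on-demand data profile scan over a BigQuery table.
    // The resource names, region, and column names below are placeholders.
    const profileScan = new google_native.dataplex.v1.DataScan("profileScan", {
        dataScanId: "orders-profile-scan",
        location: "us-central1",
        data: {
            // Service-qualified BigQuery table name for the data source.
            resource: "//bigquery.googleapis.com/projects/my-project/datasets/my_dataset/tables/orders",
        },
        executionSpec: {
            trigger: { onDemand: {} }, // run only when a job is triggered explicitly
        },
        dataProfileSpec: {
            samplingPercent: 10,                                    // profile roughly 10% of rows
            rowFilter: "order_status = 'COMPLETE' AND amount >= 0", // BigQuery standard SQL WHERE clause
        },
    });

    // The latest job's profile result is surfaced as a read-only output.
    export const scannedRowCount = profileScan.dataProfileResult.apply(r => r?.rowCount);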

    GoogleCloudDataplexV1DataProfileSpecPostScanActions, GoogleCloudDataplexV1DataProfileSpecPostScanActionsArgs

    BigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExport
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExport
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExport
    Optional. If set, results will be exported to the provided BigQuery table.
    bigquery_export GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExport
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport Property Map
    Optional. If set, results will be exported to the provided BigQuery table.

    GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExport, GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportArgs

    ResultsTable string
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    ResultsTable string
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable string
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    results_table str
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
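
    As a minimal sketch, a dataProfileSpec fragment that exports each job's results to the table named by this field could look like the following; the project, dataset, and table IDs are placeholders.

    // Sketch of a dataProfileSpec input that writes results to BigQuery after each job.
    // The results table uses the service-qualified format documented above.
    const profileSpecWithExport = {
        postScanActions: {
            bigqueryExport: {
                resultsTable: "//bigquery.googleapis.com/projects/my-project/datasets/dataplex_results/tables/profile_results",
            },
        },
    };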

    GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse, GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponseArgs

    ResultsTable string
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    ResultsTable string
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable string
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    results_table str
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataProfileScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse, GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponseArgs

    BigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigquery_export GoogleCloudDataplexV1DataProfileSpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport Property Map
    Optional. If set, results will be exported to the provided BigQuery table.

    GoogleCloudDataplexV1DataProfileSpecResponse, GoogleCloudDataplexV1DataProfileSpecResponseArgs

    ExcludeFields Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    IncludeFields Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    PostScanActions Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    SamplingPercent double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    ExcludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    IncludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    PostScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    SamplingPercent float64
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    excludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    includeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    postScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent Double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    excludeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    includeFields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    postScanActions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    rowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    exclude_fields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    include_fields GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    post_scan_actions GoogleCloudDataplexV1DataProfileSpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    row_filter str
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    sampling_percent float
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.
    excludeFields Property Map
    Optional. The fields to exclude from the data profile. If specified, the fields will be excluded from the data profile, regardless of the include_fields value.
    includeFields Property Map
    Optional. The fields to include in the data profile. If not specified, all fields at the time of profile scan job execution are included, except for those listed in exclude_fields.
    postScanActions Property Map
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent Number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, 0 or 100.

    GoogleCloudDataplexV1DataProfileSpecSelectedFields, GoogleCloudDataplexV1DataProfileSpecSelectedFieldsArgs

    FieldNames List<string>
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    FieldNames []string
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    fieldNames List<String>
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    fieldNames string[]
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    field_names Sequence[str]
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    fieldNames List<String>
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
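
    A small sketch of combining includeFields and excludeFields is shown below; the field names are hypothetical top-level columns of the scanned table, and the example illustrates that excludeFields takes precedence when a field appears in both lists.

    // Sketch: profile only selected top-level fields, and always skip one of them.
    // Nested paths such as "customer.address.zip" are not accepted here.
    const profileSpecWithFieldSelection = {
        includeFields: {
            fieldNames: ["order_id", "amount", "customer"], // top-level field names only
        },
        excludeFields: {
            fieldNames: ["customer"], // excluded even though it also appears in includeFields
        },
    };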

    GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponse, GoogleCloudDataplexV1DataProfileSpecSelectedFieldsResponseArgs

    FieldNames List<string>
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    FieldNames []string
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    fieldNames List<String>
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    fieldNames string[]
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    field_names Sequence[str]
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.
    fieldNames List<String>
    Optional. Expected input is a list of fully qualified names of fields as in the schema. Only top-level field names for nested fields are supported. For instance, if 'x' is of nested field type, listing 'x' is supported but 'x.y.z' is not supported. Here 'y' and 'y.z' are nested fields of 'x'.

    GoogleCloudDataplexV1DataQualityColumnResultResponse, GoogleCloudDataplexV1DataQualityColumnResultResponseArgs

    Column string
    The column specified in the DataQualityRule.
    Score double
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).
    Column string
    The column specified in the DataQualityRule.
    Score float64
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).
    column String
    The column specified in the DataQualityRule.
    score Double
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).
    column string
    The column specified in the DataQualityRule.
    score number
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).
    column str
    The column specified in the DataQualityRule.
    score float
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).
    column String
    The column specified in the DataQualityRule.
    score Number
    The column-level data quality score for this data scan job if and only if the 'column' field is set. The score ranges between 0 and 100 (up to two decimal points).

    GoogleCloudDataplexV1DataQualityDimensionResponse, GoogleCloudDataplexV1DataQualityDimensionResponseArgs

    Name string
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    Name string
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    name String
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    name string
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    name str
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    name String
    The dimension name a rule belongs to. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"

    GoogleCloudDataplexV1DataQualityDimensionResultResponse, GoogleCloudDataplexV1DataQualityDimensionResultResponseArgs

    Dimension Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityDimensionResponse
    The dimension config specified in the DataQualitySpec, as is.
    Passed bool
    Whether the dimension passed or failed.
    Score double
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).
    Dimension GoogleCloudDataplexV1DataQualityDimensionResponse
    The dimension config specified in the DataQualitySpec, as is.
    Passed bool
    Whether the dimension passed or failed.
    Score float64
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).
    dimension GoogleCloudDataplexV1DataQualityDimensionResponse
    The dimension config specified in the DataQualitySpec, as is.
    passed Boolean
    Whether the dimension passed or failed.
    score Double
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).
    dimension GoogleCloudDataplexV1DataQualityDimensionResponse
    The dimension config specified in the DataQualitySpec, as is.
    passed boolean
    Whether the dimension passed or failed.
    score number
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).
    dimension GoogleCloudDataplexV1DataQualityDimensionResponse
    The dimension config specified in the DataQualitySpec, as is.
    passed bool
    Whether the dimension passed or failed.
    score float
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).
    dimension Property Map
    The dimension config specified in the DataQualitySpec, as is.
    passed Boolean
    Whether the dimension passed or failed.
    score Number
    The dimension-level data quality score for this data scan job if and only if the 'dimension' field is set. The score ranges between 0 and 100 (up to two decimal points).

    GoogleCloudDataplexV1DataQualityResultPostScanActionsResultBigQueryExportResultResponse, GoogleCloudDataplexV1DataQualityResultPostScanActionsResultBigQueryExportResultResponseArgs

    Message string
    Additional information about the BigQuery exporting.
    State string
    Execution state for the BigQuery exporting.
    Message string
    Additional information about the BigQuery exporting.
    State string
    Execution state for the BigQuery exporting.
    message String
    Additional information about the BigQuery exporting.
    state String
    Execution state for the BigQuery exporting.
    message string
    Additional information about the BigQuery exporting.
    state string
    Execution state for the BigQuery exporting.
    message str
    Additional information about the BigQuery exporting.
    state str
    Execution state for the BigQuery exporting.
    message String
    Additional information about the BigQuery exporting.
    state String
    Execution state for the BigQuery exporting.

    GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse, GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponseArgs

    bigqueryExportResult Property Map
    The result of BigQuery export post scan action.

    GoogleCloudDataplexV1DataQualityResultResponse, GoogleCloudDataplexV1DataQualityResultResponseArgs

    Columns List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityColumnResultResponse>
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    Dimensions List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityDimensionResultResponse>
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    Passed bool
    Overall data quality result -- true if all rules passed.
    PostScanActionsResult Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
    The result of post scan actions.
    RowCount string
    The count of rows processed.
    Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResultResponse>
    A list of all the rules in a job, and their results.
    ScannedData Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1ScannedDataResponse
    The data scanned for this result.
    Score double
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).
    Columns []GoogleCloudDataplexV1DataQualityColumnResultResponse
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    Dimensions []GoogleCloudDataplexV1DataQualityDimensionResultResponse
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    Passed bool
    Overall data quality result -- true if all rules passed.
    PostScanActionsResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
    The result of post scan actions.
    RowCount string
    The count of rows processed.
    Rules []GoogleCloudDataplexV1DataQualityRuleResultResponse
    A list of all the rules in a job, and their results.
    ScannedData GoogleCloudDataplexV1ScannedDataResponse
    The data scanned for this result.
    Score float64
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).
    columns List<GoogleCloudDataplexV1DataQualityColumnResultResponse>
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    dimensions List<GoogleCloudDataplexV1DataQualityDimensionResultResponse>
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    passed Boolean
    Overall data quality result -- true if all rules passed.
    postScanActionsResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
    The result of post scan actions.
    rowCount String
    The count of rows processed.
    rules List<GoogleCloudDataplexV1DataQualityRuleResultResponse>
    A list of all the rules in a job, and their results.
    scannedData GoogleCloudDataplexV1ScannedDataResponse
    The data scanned for this result.
    score Double
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).
    columns GoogleCloudDataplexV1DataQualityColumnResultResponse[]
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    dimensions GoogleCloudDataplexV1DataQualityDimensionResultResponse[]
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    passed boolean
    Overall data quality result -- true if all rules passed.
    postScanActionsResult GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
    The result of post scan actions.
    rowCount string
    The count of rows processed.
    rules GoogleCloudDataplexV1DataQualityRuleResultResponse[]
    A list of all the rules in a job, and their results.
    scannedData GoogleCloudDataplexV1ScannedDataResponse
    The data scanned for this result.
    score number
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).
    columns Sequence[GoogleCloudDataplexV1DataQualityColumnResultResponse]
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    dimensions Sequence[GoogleCloudDataplexV1DataQualityDimensionResultResponse]
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    passed bool
    Overall data quality result -- true if all rules passed.
    post_scan_actions_result GoogleCloudDataplexV1DataQualityResultPostScanActionsResultResponse
    The result of post scan actions.
    row_count str
    The count of rows processed.
    rules Sequence[GoogleCloudDataplexV1DataQualityRuleResultResponse]
    A list of all the rules in a job, and their results.
    scanned_data GoogleCloudDataplexV1ScannedDataResponse
    The data scanned for this result.
    score float
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).
    columns List<Property Map>
    A list of results at the column level. A column will have a corresponding DataQualityColumnResult if and only if there is at least one rule with the 'column' field set to it.
    dimensions List<Property Map>
    A list of results at the dimension level. A dimension will have a corresponding DataQualityDimensionResult if and only if there is at least one rule with the 'dimension' field set to it.
    passed Boolean
    Overall data quality result -- true if all rules passed.
    postScanActionsResult Property Map
    The result of post scan actions.
    rowCount String
    The count of rows processed.
    rules List<Property Map>
    A list of all the rules in a job, and their results.
    scannedData Property Map
    The data scanned for this result.
    score Number
    The overall data quality score. The score ranges between 0 and 100 (up to two decimal points).

    GoogleCloudDataplexV1DataQualityRule, GoogleCloudDataplexV1DataQualityRuleArgs

    Dimension string
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    Column string
    Optional. The unnested column which this rule is evaluated against.
    Description string
    Optional. Description of the rule. The maximum length is 1,024 characters.
    IgnoreNull bool
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    Name string
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    NonNullExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleNonNullExpectation
    Row-level rule which evaluates whether each column value is null.
    RangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRangeExpectation
    Row-level rule which evaluates whether each column value lies between a specified range.
    RegexExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRegexExpectation
    Row-level rule which evaluates whether each column value matches a specified regex.
    RowConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    SetExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleSetExpectation
    Row-level rule which evaluates whether each column value is contained by a specified set.
    StatisticRangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    TableConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation
    Aggregate rule which evaluates whether the provided expression is true for a table.
    Threshold double
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates the default value (i.e. 1.0). This field is only valid for row-level type rules.
    UniquenessExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation
    Row-level rule which evaluates whether each column value is unique.
    Dimension string
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    Column string
    Optional. The unnested column which this rule is evaluated against.
    Description string
    Optional. Description of the rule. The maximum length is 1,024 characters.
    IgnoreNull bool
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    Name string
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    NonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation
    Row-level rule which evaluates whether each column value is null.
    RangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation
    Row-level rule which evaluates whether each column value lies between a specified range.
    RegexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation
    Row-level rule which evaluates whether each column value matches a specified regex.
    RowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    SetExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectation
    Row-level rule which evaluates whether each column value is contained by a specified set.
    StatisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    TableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation
    Aggregate rule which evaluates whether the provided expression is true for a table.
    Threshold float64
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates the default value (i.e. 1.0). This field is only valid for row-level type rules.
    UniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation
    Row-level rule which evaluates whether each column value is unique.
    dimension String
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    column String
    Optional. The unnested column which this rule is evaluated against.
    description String
    Optional. Description of the rule. The maximum length is 1,024 characters.
    ignoreNull Boolean
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name String
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation
    Row-level rule which evaluates whether each column value is null.
    rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation
    Row-level rule which evaluates whether each column value lies between a specified range.
    regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation
    Row-level rule which evaluates whether each column value matches a specified regex.
    rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectation
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold Double
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates the default value (i.e. 1.0). This field is only valid for row-level type rules.
    uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation
    Row-level rule which evaluates whether each column value is unique.
    dimension string
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    column string
    Optional. The unnested column which this rule is evaluated against.
    description string
    Optional. Description of the rule. The maximum length is 1,024 characters.
    ignoreNull boolean
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name string
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation
    Row-level rule which evaluates whether each column value is null.
    rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation
    Row-level rule which evaluates whether each column value lies between a specified range.
    regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation
    Row-level rule which evaluates whether each column value matches a specified regex.
    rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectation
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold number
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. 0 indicates the default value (i.e. 1.0). This field is only valid for row-level type rules.
    uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation
    Row-level rule which evaluates whether each column value is unique.
    dimension str
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    column str
    Optional. The unnested column which this rule is evaluated against.
    description str
    Optional. Description of the rule. The maximum length is 1,024 characters.
    ignore_null bool
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name str
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    non_null_expectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectation
    Row-level rule which evaluates whether each column value is null.
    range_expectation GoogleCloudDataplexV1DataQualityRuleRangeExpectation
    Row-level rule which evaluates whether each column value lies between a specified range.
    regex_expectation GoogleCloudDataplexV1DataQualityRuleRegexExpectation
    Row-level rule which evaluates whether each column value matches a specified regex.
    row_condition_expectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    set_expectation GoogleCloudDataplexV1DataQualityRuleSetExpectation
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statistic_range_expectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    table_condition_expectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold float
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0). This field is only valid for row-level type rules.
    uniqueness_expectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectation
    Row-level rule which evaluates whether each column value is unique.
    dimension String
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    column String
    Optional. The unnested column which this rule is evaluated against.
    description String
    Optional. Description of the rule. The maximum length is 1,024 characters.
    ignoreNull Boolean
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name String
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    nonNullExpectation Property Map
    Row-level rule which evaluates whether each column value is null.
    rangeExpectation Property Map
    Row-level rule which evaluates whether each column value lies between a specified range.
    regexExpectation Property Map
    Row-level rule which evaluates whether each column value matches a specified regex.
    rowConditionExpectation Property Map
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    setExpectation Property Map
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statisticRangeExpectation Property Map
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    tableConditionExpectation Property Map
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold Number
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0). This field is only valid for row-level type rules.
    uniquenessExpectation Property Map
    Row-level rule which evaluates whether each column value is unique.
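
    As a concrete illustration, the TypeScript sketch below combines the shared rule fields with one expectation type. The column name, rule name, and threshold are placeholder values, and the object would be placed in the rules list of a dataQualitySpec.

    // Minimal sketch of a single data quality rule (hypothetical column and values).
    // A rule selects exactly one expectation (here nonNullExpectation) plus the shared fields above.
    const emailNotNullRule = {
        column: "email",                 // hypothetical column in the scanned table
        dimension: "COMPLETENESS",
        name: "email-not-null",
        description: "email should be populated for at least 95% of rows",
        ignoreNull: false,               // null rows count against the rule
        threshold: 0.95,                 // passing_rows / total_rows must reach 0.95
        nonNullExpectation: {},          // the expectation type itself carries no fields
    };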

    GoogleCloudDataplexV1DataQualityRuleRangeExpectation, GoogleCloudDataplexV1DataQualityRuleRangeExpectationArgs

    MaxValue string
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    StrictMaxEnabled bool
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    MaxValue string
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    StrictMaxEnabled bool
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strictMaxEnabled Boolean
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue string
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue string
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strictMaxEnabled boolean
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled boolean
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    max_value str
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    min_value str
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strict_max_enabled bool
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strict_min_enabled bool
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strictMaxEnabled Boolean
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
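
    A hedged TypeScript sketch of a rule using rangeExpectation follows; the column and bounds are placeholders. Note that the bounds are passed as strings even for numeric columns.

    // Hypothetical rule: order_total must be greater than 0 and at most 10000.
    const orderTotalRangeRule = {
        column: "order_total",
        dimension: "VALIDITY",
        threshold: 0.99,
        rangeExpectation: {
            minValue: "0",                // bounds are strings per the property reference above
            maxValue: "10000",
            strictMinEnabled: true,       // values must be strictly greater than 0
            strictMaxEnabled: false,      // 10000 itself still passes
        },
    };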

    GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponseArgs

    MaxValue string
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    StrictMaxEnabled bool
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    MaxValue string
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    StrictMaxEnabled bool
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strictMaxEnabled Boolean
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue string
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue string
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strictMaxEnabled boolean
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled boolean
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    max_value str
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    min_value str
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strict_max_enabled bool
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strict_min_enabled bool
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    strictMaxEnabled Boolean
    Optional. Whether each value needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether each value needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.

    GoogleCloudDataplexV1DataQualityRuleRegexExpectation, GoogleCloudDataplexV1DataQualityRuleRegexExpectationArgs

    Regex string
    Optional. A regular expression the column value is expected to match.
    Regex string
    Optional. A regular expression the column value is expected to match.
    regex String
    Optional. A regular expression the column value is expected to match.
    regex string
    Optional. A regular expression the column value is expected to match.
    regex str
    Optional. A regular expression the column value is expected to match.
    regex String
    Optional. A regular expression the column value is expected to match.
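
    For example, a regexExpectation rule could validate a formatted identifier; the column name and pattern below are placeholders in this TypeScript sketch.

    // Hypothetical rule: order_id should look like "AB-1234".
    const orderIdFormatRule = {
        column: "order_id",
        dimension: "VALIDITY",
        regexExpectation: {
            regex: "^[A-Z]{2}-[0-9]{4}$",   // pattern evaluated against each column value
        },
    };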

    GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponseArgs

    Regex string
    Optional. A regular expression the column value is expected to match.
    Regex string
    Optional. A regular expression the column value is expected to match.
    regex String
    Optional. A regular expression the column value is expected to match.
    regex string
    Optional. A regular expression the column value is expected to match.
    regex str
    Optional. A regular expression the column value is expected to match.
    regex String
    Optional. A regular expression the column value is expected to match.

    GoogleCloudDataplexV1DataQualityRuleResponse, GoogleCloudDataplexV1DataQualityRuleResponseArgs

    Column string
    Optional. The unnested column which this rule is evaluated against.
    Description string
    Optional. Description of the rule. The maximum length is 1,024 characters.
    Dimension string
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    IgnoreNull bool
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    Name string
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    NonNullExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
    Row-level rule which evaluates whether each column value is null.
    RangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
    Row-level rule which evaluates whether each column value lies between a specified range.
    RegexExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
    Row-level rule which evaluates whether each column value matches a specified regex.
    RowConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    SetExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
    Row-level rule which evaluates whether each column value is contained by a specified set.
    StatisticRangeExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    TableConditionExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
    Aggregate rule which evaluates whether the provided expression is true for a table.
    Threshold double
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0). This field is only valid for row-level type rules.
    UniquenessExpectation Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
    Row-level rule which evaluates whether each column value is unique.
    Column string
    Optional. The unnested column which this rule is evaluated against.
    Description string
    Optional. Description of the rule. The maximum length is 1,024 characters.
    Dimension string
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    IgnoreNull bool
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    Name string
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    NonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
    Row-level rule which evaluates whether each column value is null.
    RangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
    Row-level rule which evaluates whether each column value lies between a specified range.
    RegexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
    Row-level rule which evaluates whether each column value matches a specified regex.
    RowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    SetExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
    Row-level rule which evaluates whether each column value is contained by a specified set.
    StatisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    TableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
    Aggregate rule which evaluates whether the provided expression is true for a table.
    Threshold float64
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0). This field is only valid for row-level type rules.
    UniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
    Row-level rule which evaluates whether each column value is unique.
    column String
    Optional. The unnested column which this rule is evaluated against.
    description String
    Optional. Description of the rule. The maximum length is 1,024 characters.
    dimension String
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    ignoreNull Boolean
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name String
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
    Row-level rule which evaluates whether each column value is null.
    rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
    Row-level rule which evaluates whether each column value lies between a specified range.
    regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
    Row-level rule which evaluates whether each column value matches a specified regex.
    rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold Double
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0). This field is only valid for row-level type rules.
    uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
    Row-level rule which evaluates whether each column value is unique.
    column string
    Optional. The unnested column which this rule is evaluated against.
    description string
    Optional. Description of the rule. The maximum length is 1,024 characters.
    dimension string
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    ignoreNull boolean
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name string
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    nonNullExpectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
    Row-level rule which evaluates whether each column value is null.
    rangeExpectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
    Row-level rule which evaluates whether each column value lies between a specified range.
    regexExpectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
    Row-level rule which evaluates whether each column value matches a specified regex.
    rowConditionExpectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    setExpectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statisticRangeExpectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    tableConditionExpectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold number
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0). This field is only valid for row-level type rules.
    uniquenessExpectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
    Row-level rule which evaluates whether each column value is unique.
    column str
    Optional. The unnested column which this rule is evaluated against.
    description str
    Optional. Description of the rule. The maximum length is 1,024 characters.
    dimension str
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    ignore_null bool
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name str
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    non_null_expectation GoogleCloudDataplexV1DataQualityRuleNonNullExpectationResponse
    Row-level rule which evaluates whether each column value is null.
    range_expectation GoogleCloudDataplexV1DataQualityRuleRangeExpectationResponse
    Row-level rule which evaluates whether each column value lies between a specified range.
    regex_expectation GoogleCloudDataplexV1DataQualityRuleRegexExpectationResponse
    Row-level rule which evaluates whether each column value matches a specified regex.
    row_condition_expectation GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    set_expectation GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statistic_range_expectation GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    table_condition_expectation GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold float
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0). This field is only valid for row-level type rules.
    uniqueness_expectation GoogleCloudDataplexV1DataQualityRuleUniquenessExpectationResponse
    Row-level rule which evaluates whether each column value is unique.
    column String
    Optional. The unnested column which this rule is evaluated against.
    description String
    Optional. Description of the rule. The maximum length is 1,024 characters.
    dimension String
    The dimension a rule belongs to. Results are also aggregated at the dimension level. Supported dimensions are "COMPLETENESS", "ACCURACY", "CONSISTENCY", "VALIDITY", "UNIQUENESS", "INTEGRITY"
    ignoreNull Boolean
    Optional. Rows with null values will automatically fail a rule, unless ignore_null is true. In that case, such null rows are trivially considered passing. This field is only valid for row-level type rules.
    name String
    Optional. A mutable name for the rule. The name must contain only letters (a-z, A-Z), numbers (0-9), or hyphens (-). The maximum length is 63 characters. Must start with a letter. Must end with a number or a letter.
    nonNullExpectation Property Map
    Row-level rule which evaluates whether each column value is null.
    rangeExpectation Property Map
    Row-level rule which evaluates whether each column value lies between a specified range.
    regexExpectation Property Map
    Row-level rule which evaluates whether each column value matches a specified regex.
    rowConditionExpectation Property Map
    Row-level rule which evaluates whether each row in a table passes the specified condition.
    setExpectation Property Map
    Row-level rule which evaluates whether each column value is contained by a specified set.
    statisticRangeExpectation Property Map
    Aggregate rule which evaluates whether the column aggregate statistic lies between a specified range.
    tableConditionExpectation Property Map
    Aggregate rule which evaluates whether the provided expression is true for a table.
    threshold Number
    Optional. The minimum ratio of passing_rows / total_rows required to pass this rule, with a range of [0.0, 1.0]. A value of 0 indicates the default (i.e. 1.0). This field is only valid for row-level type rules.
    uniquenessExpectation Property Map
    Row-level rule which evaluates whether each column value is unique.

    GoogleCloudDataplexV1DataQualityRuleResultResponse, GoogleCloudDataplexV1DataQualityRuleResultResponseArgs

    EvaluatedCount string
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    FailingRowsQuery string
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    NullCount string
    The number of rows with null values in the specified column.
    PassRatio double
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    Passed bool
    Whether the rule passed or failed.
    PassedCount string
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    Rule Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResponse
    The rule specified in the DataQualitySpec, as is.
    EvaluatedCount string
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    FailingRowsQuery string
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    NullCount string
    The number of rows with null values in the specified column.
    PassRatio float64
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    Passed bool
    Whether the rule passed or failed.
    PassedCount string
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    Rule GoogleCloudDataplexV1DataQualityRuleResponse
    The rule specified in the DataQualitySpec, as is.
    evaluatedCount String
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    failingRowsQuery String
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    nullCount String
    The number of rows with null values in the specified column.
    passRatio Double
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    passed Boolean
    Whether the rule passed or failed.
    passedCount String
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    rule GoogleCloudDataplexV1DataQualityRuleResponse
    The rule specified in the DataQualitySpec, as is.
    evaluatedCount string
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    failingRowsQuery string
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    nullCount string
    The number of rows with null values in the specified column.
    passRatio number
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    passed boolean
    Whether the rule passed or failed.
    passedCount string
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    rule GoogleCloudDataplexV1DataQualityRuleResponse
    The rule specified in the DataQualitySpec, as is.
    evaluated_count str
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    failing_rows_query str
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    null_count str
    The number of rows with null values in the specified column.
    pass_ratio float
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    passed bool
    Whether the rule passed or failed.
    passed_count str
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    rule GoogleCloudDataplexV1DataQualityRuleResponse
    The rule specified in the DataQualitySpec, as is.
    evaluatedCount String
    The number of rows a rule was evaluated against. This field is only valid for row-level type rules. Evaluated count can be configured either to include all rows (default), with null rows automatically failing rule evaluation, or to exclude null rows from the evaluated_count by setting ignore_nulls = true.
    failingRowsQuery String
    The query to find rows that did not pass this rule. This field is only valid for row-level type rules.
    nullCount String
    The number of rows with null values in the specified column.
    passRatio Number
    The ratio of passed_count / evaluated_count. This field is only valid for row-level type rules.
    passed Boolean
    Whether the rule passed or failed.
    passedCount String
    The number of rows which passed a rule evaluation. This field is only valid for row-level type rules.
    rule Property Map
    The rule specified in the DataQualitySpec, as is.

    GoogleCloudDataplexV1DataQualityRuleRowConditionExpectation, GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationArgs

    SqlExpression string
    Optional. The SQL expression.
    SqlExpression string
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.
    sqlExpression string
    Optional. The SQL expression.
    sql_expression str
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.
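
    A rowConditionExpectation evaluates a boolean SQL expression per row. The TypeScript sketch below uses a hypothetical discount_pct column.

    // Hypothetical rule: each row's discount must be between 0 and 100 percent.
    const discountRangeRule = {
        dimension: "VALIDITY",
        rowConditionExpectation: {
            sqlExpression: "discount_pct >= 0 AND discount_pct <= 100",
        },
    };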

    GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponse, GoogleCloudDataplexV1DataQualityRuleRowConditionExpectationResponseArgs

    SqlExpression string
    Optional. The SQL expression.
    SqlExpression string
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.
    sqlExpression string
    Optional. The SQL expression.
    sql_expression str
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.

    GoogleCloudDataplexV1DataQualityRuleSetExpectation, GoogleCloudDataplexV1DataQualityRuleSetExpectationArgs

    Values List<string>
    Optional. Expected values for the column value.
    Values []string
    Optional. Expected values for the column value.
    values List<String>
    Optional. Expected values for the column value.
    values string[]
    Optional. Expected values for the column value.
    values Sequence[str]
    Optional. Expected values for the column value.
    values List<String>
    Optional. Expected values for the column value.
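
    A setExpectation rule checks each column value against an allowed list; the column and values in this TypeScript sketch are placeholders.

    // Hypothetical rule: status must be one of the known states.
    const statusSetRule = {
        column: "status",
        dimension: "VALIDITY",
        setExpectation: {
            values: ["ACTIVE", "INACTIVE", "SUSPENDED"],
        },
    };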

    GoogleCloudDataplexV1DataQualityRuleSetExpectationResponse, GoogleCloudDataplexV1DataQualityRuleSetExpectationResponseArgs

    Values List<string>
    Optional. Expected values for the column value.
    Values []string
    Optional. Expected values for the column value.
    values List<String>
    Optional. Expected values for the column value.
    values string[]
    Optional. Expected values for the column value.
    values Sequence[str]
    Optional. Expected values for the column value.
    values List<String>
    Optional. Expected values for the column value.

    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectation, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationArgs

    MaxValue string
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    Statistic Pulumi.GoogleNative.Dataplex.V1.GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
    Optional. The aggregate metric to evaluate.
    StrictMaxEnabled bool
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    MaxValue string
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    Statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
    Optional. The aggregate metric to evaluate.
    StrictMaxEnabled bool
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
    Optional. The aggregate metric to evaluate.
    strictMaxEnabled Boolean
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue string
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue string
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
    Optional. The aggregate metric to evaluate.
    strictMaxEnabled boolean
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled boolean
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    max_value str
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    min_value str
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic
    Optional. The aggregate metric to evaluate.
    strict_max_enabled bool
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strict_min_enabled bool
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic "STATISTIC_UNDEFINED" | "MEAN" | "MIN" | "MAX"
    Optional. The aggregate metric to evaluate.
    strictMaxEnabled Boolean
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
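
    Because this is an aggregate rule, it applies to a column statistic rather than to individual rows, so row-level fields such as threshold and ignoreNull do not apply. The TypeScript sketch below uses a placeholder column and bounds to keep the column mean inside a band.

    // Hypothetical aggregate rule: the mean of response_time_ms must lie in (0, 500].
    const meanLatencyRule = {
        column: "response_time_ms",
        dimension: "VALIDITY",
        statisticRangeExpectation: {
            statistic: "MEAN",            // one of STATISTIC_UNDEFINED, MEAN, MIN, MAX
            minValue: "0",
            maxValue: "500",
            strictMinEnabled: true,
            strictMaxEnabled: false,
        },
    };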

    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponse, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationResponseArgs

    MaxValue string
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    Statistic string
    Optional. The aggregate metric to evaluate.
    StrictMaxEnabled bool
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    MaxValue string
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    MinValue string
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    Statistic string
    Optional. The aggregate metric to evaluate.
    StrictMaxEnabled bool
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    StrictMinEnabled bool
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic String
    Optional. The aggregate metric to evaluate.
    strictMaxEnabled Boolean
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue string
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue string
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic string
    Optional. The aggregate metric to evaluate.
    strictMaxEnabled boolean
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled boolean
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    max_value str
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    min_value str
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic str
    Optional. The aggregate metric to evaluate.
    strict_max_enabled bool
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strict_min_enabled bool
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.
    maxValue String
    Optional. The maximum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    minValue String
    Optional. The minimum column statistic value allowed for a row to pass this validation. At least one of min_value and max_value need to be provided.
    statistic String
    Optional. The aggregate metric to evaluate.
    strictMaxEnabled Boolean
    Optional. Whether the column statistic needs to be strictly less than ('<') the maximum, or if equality is allowed. Only relevant if a max_value has been defined. Default = false.
    strictMinEnabled Boolean
    Optional. Whether the column statistic needs to be strictly greater than ('>') the minimum, or if equality is allowed. Only relevant if a min_value has been defined. Default = false.

    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatistic, GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticArgs

    StatisticUndefined
    STATISTIC_UNDEFINED - Unspecified statistic type
    Mean
    MEAN - Evaluate the column mean
    Min
    MIN - Evaluate the column min
    Max
    MAX - Evaluate the column max
    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticStatisticUndefined
    STATISTIC_UNDEFINED - Unspecified statistic type
    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMean
    MEAN - Evaluate the column mean
    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMin
    MIN - Evaluate the column min
    GoogleCloudDataplexV1DataQualityRuleStatisticRangeExpectationStatisticMax
    MAX - Evaluate the column max
    StatisticUndefined
    STATISTIC_UNDEFINED - Unspecified statistic type
    Mean
    MEAN - Evaluate the column mean
    Min
    MIN - Evaluate the column min
    Max
    MAX - Evaluate the column max
    StatisticUndefined
    STATISTIC_UNDEFINED - Unspecified statistic type
    Mean
    MEAN - Evaluate the column mean
    Min
    MIN - Evaluate the column min
    Max
    MAX - Evaluate the column max
    STATISTIC_UNDEFINED
    STATISTIC_UNDEFINED - Unspecified statistic type
    MEAN
    MEAN - Evaluate the column mean
    MIN
    MIN - Evaluate the column min
    MAX
    MAX - Evaluate the column max
    "STATISTIC_UNDEFINED"
    STATISTIC_UNDEFINED - Unspecified statistic type
    "MEAN"
    MEAN - Evaluate the column mean
    "MIN"
    MIN - Evaluate the column min
    "MAX"
    MAX - Evaluate the column max

    GoogleCloudDataplexV1DataQualityRuleTableConditionExpectation, GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationArgs

    SqlExpression string
    Optional. The SQL expression.
    SqlExpression string
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.
    sqlExpression string
    Optional. The SQL expression.
    sql_expression str
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.
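
    A tableConditionExpectation evaluates a single boolean SQL expression over the whole table, typically with aggregate functions; the expression in this TypeScript sketch is a placeholder.

    // Hypothetical aggregate rule: the scanned table should not be empty.
    const nonEmptyTableRule = {
        dimension: "VALIDITY",
        tableConditionExpectation: {
            sqlExpression: "COUNT(*) > 0",
        },
    };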

    GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponse, GoogleCloudDataplexV1DataQualityRuleTableConditionExpectationResponseArgs

    SqlExpression string
    Optional. The SQL expression.
    SqlExpression string
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.
    sqlExpression string
    Optional. The SQL expression.
    sql_expression str
    Optional. The SQL expression.
    sqlExpression String
    Optional. The SQL expression.

    GoogleCloudDataplexV1DataQualitySpec, GoogleCloudDataplexV1DataQualitySpecArgs

    Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRule>
    The list of rules to evaluate against a data source. At least one rule is required.
    PostScanActions Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecPostScanActions
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    SamplingPercent double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    Rules []GoogleCloudDataplexV1DataQualityRule
    The list of rules to evaluate against a data source. At least one rule is required.
    PostScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActions
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    SamplingPercent float64
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    rules List<GoogleCloudDataplexV1DataQualityRule>
    The list of rules to evaluate against a data source. At least one rule is required.
    postScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActions
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent Double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    rules GoogleCloudDataplexV1DataQualityRule[]
    The list of rules to evaluate against a data source. At least one rule is required.
    postScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActions
    Optional. Actions to take upon job completion.
    rowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    rules Sequence[GoogleCloudDataplexV1DataQualityRule]
    The list of rules to evaluate against a data source. At least one rule is required.
    post_scan_actions GoogleCloudDataplexV1DataQualitySpecPostScanActions
    Optional. Actions to take upon job completion.
    row_filter str
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    sampling_percent float
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    rules List<Property Map>
    The list of rules to evaluate against a data source. At least one rule is required.
    postScanActions Property Map
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    samplingPercent Number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
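
    Putting the pieces together, the TypeScript sketch below wires a dataQualitySpec into a DataScan resource. The project, dataset, table, scan ID, and column names are placeholders, and the remaining constructor arguments (for example executionSpec) follow the reference at the top of this page.

    import * as google_native from "@pulumi/google-native";

    // Minimal sketch of a data quality scan over a hypothetical BigQuery table.
    const ordersQualityScan = new google_native.dataplex.v1.DataScan("ordersQualityScan", {
        dataScanId: "orders-quality-scan",
        location: "us-central1",
        data: {
            resource: "//bigquery.googleapis.com/projects/my-project/datasets/sales/tables/orders",
        },
        dataQualitySpec: {
            rowFilter: "order_date >= '2023-01-01'",   // BigQuery standard SQL WHERE-clause expression
            samplingPercent: 10,                       // scan roughly 10% of the rows
            rules: [{
                column: "order_total",
                dimension: "VALIDITY",
                threshold: 0.99,
                rangeExpectation: { minValue: "0", maxValue: "10000" },
            }],
        },
    });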

    GoogleCloudDataplexV1DataQualitySpecPostScanActions, GoogleCloudDataplexV1DataQualitySpecPostScanActionsArgs

    BigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExport
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExport
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExport
    Optional. If set, results will be exported to the provided BigQuery table.
    bigquery_export GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExport
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport Property Map
    Optional. If set, results will be exported to the provided BigQuery table.

    GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExport, GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportArgs

    ResultsTable string
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    ResultsTable string
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable string
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    results_table str
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
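
    As a sketch, the post-scan export is a small nested object passed as dataQualitySpec.postScanActions on the DataScan resource; the project, dataset, and table IDs below are placeholders.

    // Post-scan export fragment; the results table must use the service-qualified form
    // //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID.
    const postScanActions = {
        bigqueryExport: {
            resultsTable: "//bigquery.googleapis.com/projects/my-project/datasets/dq_results/tables/orders_scan_results",
        },
    };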

    GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse, GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponseArgs

    ResultsTable string
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    ResultsTable string
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable string
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    results_table str
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    resultsTable String
    Optional. The BigQuery table to export DataQualityScan results to. Format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse, GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponseArgs

    BigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigquery_export GoogleCloudDataplexV1DataQualitySpecPostScanActionsBigQueryExportResponse
    Optional. If set, results will be exported to the provided BigQuery table.
    bigqueryExport Property Map
    Optional. If set, results will be exported to the provided BigQuery table.

    GoogleCloudDataplexV1DataQualitySpecResponse, GoogleCloudDataplexV1DataQualitySpecResponseArgs

    PostScanActions Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    Rules List<Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1DataQualityRuleResponse>
    The list of rules to evaluate against a data source. At least one rule is required.
    SamplingPercent double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    PostScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    RowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    Rules []GoogleCloudDataplexV1DataQualityRuleResponse
    The list of rules to evaluate against a data source. At least one rule is required.
    SamplingPercent float64
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    postScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    rules List<GoogleCloudDataplexV1DataQualityRuleResponse>
    The list of rules to evaluate against a data source. At least one rule is required.
    samplingPercent Double
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    postScanActions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    rowFilter string
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    rules GoogleCloudDataplexV1DataQualityRuleResponse[]
    The list of rules to evaluate against a data source. At least one rule is required.
    samplingPercent number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    post_scan_actions GoogleCloudDataplexV1DataQualitySpecPostScanActionsResponse
    Optional. Actions to take upon job completion.
    row_filter str
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    rules Sequence[GoogleCloudDataplexV1DataQualityRuleResponse]
    The list of rules to evaluate against a data source. At least one rule is required.
    sampling_percent float
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.
    postScanActions Property Map
    Optional. Actions to take upon job completion.
    rowFilter String
    Optional. A filter applied to all rows in a single DataScan job. The filter needs to be a valid SQL expression for a WHERE clause in BigQuery standard SQL syntax. Example: col1 >= 0 AND col2 < 10
    rules List<Property Map>
    The list of rules to evaluate against a data source. At least one rule is required.
    samplingPercent Number
    Optional. The percentage of the records to be selected from the dataset for DataScan. Value can range between 0.0 and 100.0 with up to 3 significant decimal digits. Sampling is not applied if sampling_percent is not specified, or is 0 or 100.

    GoogleCloudDataplexV1DataScanExecutionSpec, GoogleCloudDataplexV1DataScanExecutionSpecArgs

    Field string
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    Trigger Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1Trigger
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    Field string
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    Trigger GoogleCloudDataplexV1Trigger
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field String
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger GoogleCloudDataplexV1Trigger
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field string
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger GoogleCloudDataplexV1Trigger
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field str
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger GoogleCloudDataplexV1Trigger
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field String
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger Property Map
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
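
    A sketch of an execution spec that combines an incremental field with a scheduled trigger; the column name and cron expression are placeholders. The object is passed as the executionSpec argument of the DataScan resource.

    // Incremental scans keyed on an assumed "last_modified" timestamp column,
    // triggered once a day at 03:00 in the scheduler's default time zone.
    const executionSpec = {
        field: "last_modified",
        trigger: {
            schedule: {
                cron: "0 3 * * *",
            },
        },
    };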

    GoogleCloudDataplexV1DataScanExecutionSpecResponse, GoogleCloudDataplexV1DataScanExecutionSpecResponseArgs

    Field string
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    Trigger Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TriggerResponse
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    Field string
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    Trigger GoogleCloudDataplexV1TriggerResponse
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field String
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger GoogleCloudDataplexV1TriggerResponse
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field string
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger GoogleCloudDataplexV1TriggerResponse
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field str
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger GoogleCloudDataplexV1TriggerResponse
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.
    field String
    Immutable. The unnested field (of type Date or Timestamp) that contains values which monotonically increase over time. If not specified, a data scan will run for all data in the table.
    trigger Property Map
    Optional. Spec related to how often and when a scan should be triggered. If not specified, the default is OnDemand, which means the scan will not run until the user calls the RunDataScan API.

    GoogleCloudDataplexV1DataScanExecutionStatusResponse, GoogleCloudDataplexV1DataScanExecutionStatusResponseArgs

    LatestJobEndTime string
    The time when the latest DataScanJob ended.
    LatestJobStartTime string
    The time when the latest DataScanJob started.
    LatestJobEndTime string
    The time when the latest DataScanJob ended.
    LatestJobStartTime string
    The time when the latest DataScanJob started.
    latestJobEndTime String
    The time when the latest DataScanJob ended.
    latestJobStartTime String
    The time when the latest DataScanJob started.
    latestJobEndTime string
    The time when the latest DataScanJob ended.
    latestJobStartTime string
    The time when the latest DataScanJob started.
    latest_job_end_time str
    The time when the latest DataScanJob ended.
    latest_job_start_time str
    The time when the latest DataScanJob started.
    latestJobEndTime String
    The time when the latest DataScanJob ended.
    latestJobStartTime String
    The time when the latest DataScanJob started.
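
    A sketch of surfacing these timestamps as stack outputs; it assumes the DataScan resource exposes an executionStatus output of this response type, and all identifiers are placeholders.

    import * as google_native from "@pulumi/google-native";

    // A bare profile scan declared only to illustrate reading the status back.
    const statusDemoScan = new google_native.dataplex.v1.DataScan("statusDemoScan", {
        dataScanId: "status-demo-scan",
        location: "us-central1",
        data: {
            resource: "//bigquery.googleapis.com/projects/my-project/datasets/sales/tables/orders",
        },
        dataProfileSpec: {},
        executionSpec: { trigger: { onDemand: {} } },
    });

    // Export the latest job timestamps once a job has run.
    export const latestJobStartTime = statusDemoScan.executionStatus.apply(s => s.latestJobStartTime);
    export const latestJobEndTime = statusDemoScan.executionStatus.apply(s => s.latestJobEndTime);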

    GoogleCloudDataplexV1DataSource, GoogleCloudDataplexV1DataSourceArgs

    Entity string
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    Resource string
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    Entity string
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    Resource string
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity String
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource String
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity string
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource string
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity str
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource str
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity String
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource String
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
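
    The data source identifies the table either directly or through a Dataplex entity. A sketch of each form, with placeholder project, lake, zone, and table identifiers:

    // Option 1: scan a BigQuery table directly via its service-qualified resource name.
    const bigQuerySource = {
        resource: "//bigquery.googleapis.com/projects/my-project/datasets/sales/tables/orders",
    };

    // Option 2: scan through a Dataplex entity that represents the same data.
    const dataplexEntitySource = {
        entity: "projects/1234567890/locations/us-central1/lakes/my-lake/zones/raw-zone/entities/orders",
    };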

    GoogleCloudDataplexV1DataSourceResponse, GoogleCloudDataplexV1DataSourceResponseArgs

    Entity string
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    Resource string
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    Entity string
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    Resource string
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity String
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource String
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity string
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource string
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity str
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource str
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID
    entity String
    Immutable. The Dataplex entity that represents the data source (e.g. BigQuery table) for DataScan, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/zones/{zone_id}/entities/{entity_id}.
    resource String
    Immutable. The service-qualified full resource name of the cloud resource for a DataScan job to scan against. The field can be a BigQuery table of type "TABLE" for DataProfileScan/DataQualityScan, in the format: //bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID

    GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse, GoogleCloudDataplexV1ScannedDataIncrementalFieldResponseArgs

    End string
    Value that marks the end of the range.
    Field string
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    Start string
    Value that marks the start of the range.
    End string
    Value that marks the end of the range.
    Field string
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    Start string
    Value that marks the start of the range.
    end String
    Value that marks the end of the range.
    field String
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    start String
    Value that marks the start of the range.
    end string
    Value that marks the end of the range.
    field string
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    start string
    Value that marks the start of the range.
    end str
    Value that marks the end of the range.
    field str
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    start str
    Value that marks the start of the range.
    end String
    Value that marks the end of the range.
    field String
    The field that contains values which monotonically increase over time (e.g. a timestamp column).
    start String
    Value that marks the start of the range.

    GoogleCloudDataplexV1ScannedDataResponse, GoogleCloudDataplexV1ScannedDataResponseArgs

    IncrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
    The range denoted by values of an incremental field.
    incrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
    The range denoted by values of an incremental field.
    incrementalField GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
    The range denoted by values of an incremental field.
    incremental_field GoogleCloudDataplexV1ScannedDataIncrementalFieldResponse
    The range denoted by values of an incremental field.
    incrementalField Property Map
    The range denoted by values of an incremental field.

    GoogleCloudDataplexV1Trigger, GoogleCloudDataplexV1TriggerArgs

    OnDemand GoogleCloudDataplexV1TriggerOnDemand
    The scan runs once via the RunDataScan API.
    Schedule GoogleCloudDataplexV1TriggerSchedule
    The scan is scheduled to run periodically.
    onDemand GoogleCloudDataplexV1TriggerOnDemand
    The scan runs once via the RunDataScan API.
    schedule GoogleCloudDataplexV1TriggerSchedule
    The scan is scheduled to run periodically.
    onDemand GoogleCloudDataplexV1TriggerOnDemand
    The scan runs once via the RunDataScan API.
    schedule GoogleCloudDataplexV1TriggerSchedule
    The scan is scheduled to run periodically.
    on_demand GoogleCloudDataplexV1TriggerOnDemand
    The scan runs once via the RunDataScan API.
    schedule GoogleCloudDataplexV1TriggerSchedule
    The scan is scheduled to run periodically.
    onDemand Property Map
    The scan runs once via the RunDataScan API.
    schedule Property Map
    The scan is scheduled to run periodically.
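
    onDemand and schedule are the two trigger forms. A sketch of the on-demand form, which leaves execution to explicit RunDataScan calls (a scheduled form is sketched under the schedule type below):

    // On-demand trigger: the scan only runs when RunDataScan is invoked.
    // Passed as executionSpec.trigger on the DataScan resource.
    const onDemandTrigger = {
        onDemand: {},
    };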

    GoogleCloudDataplexV1TriggerResponse, GoogleCloudDataplexV1TriggerResponseArgs

    OnDemand GoogleCloudDataplexV1TriggerOnDemandResponse
    The scan runs once via the RunDataScan API.
    Schedule GoogleCloudDataplexV1TriggerScheduleResponse
    The scan is scheduled to run periodically.
    onDemand GoogleCloudDataplexV1TriggerOnDemandResponse
    The scan runs once via the RunDataScan API.
    schedule GoogleCloudDataplexV1TriggerScheduleResponse
    The scan is scheduled to run periodically.
    onDemand GoogleCloudDataplexV1TriggerOnDemandResponse
    The scan runs once via the RunDataScan API.
    schedule GoogleCloudDataplexV1TriggerScheduleResponse
    The scan is scheduled to run periodically.
    on_demand GoogleCloudDataplexV1TriggerOnDemandResponse
    The scan runs once via the RunDataScan API.
    schedule GoogleCloudDataplexV1TriggerScheduleResponse
    The scan is scheduled to run periodically.
    onDemand Property Map
    The scan runs once via the RunDataScan API.
    schedule Property Map
    The scan is scheduled to run periodically.

    GoogleCloudDataplexV1TriggerSchedule, GoogleCloudDataplexV1TriggerScheduleArgs

    Cron string
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    Cron string
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron String
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron string
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron str
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron String
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
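
    A sketch of a schedule that pins the cron expression to an explicit time zone with the CRON_TZ prefix described above; the zone and timing are placeholders.

    // Nightly scan at 02:00 America/New_York, independent of the scheduler's default time zone.
    const nightlySchedule = {
        cron: "CRON_TZ=America/New_York 0 2 * * *",
    };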

    GoogleCloudDataplexV1TriggerScheduleResponse, GoogleCloudDataplexV1TriggerScheduleResponseArgs

    Cron string
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    Cron string
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron String
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron string
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron str
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.
    cron String
    Cron (https://en.wikipedia.org/wiki/Cron) schedule for running scans periodically. To explicitly set a time zone, prefix the cron expression with "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}", where ${IANA_TIME_ZONE} must be a valid name from the IANA time zone database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List). For example, CRON_TZ=America/New_York 1 * * * * or TZ=America/New_York 1 * * * *. This field is required for Schedule scans.

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0