Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.aiplatform/v1.ModelDeploymentMonitoringJob

    Creates a ModelDeploymentMonitoringJob. It will run periodically at a configured interval. Auto-naming is currently not supported for this resource.

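    As a quick orientation, here is a minimal sketch in Python that monitors one deployed model on an existing endpoint. It is an illustration rather than the reference example below: the project, location, endpoint path, deployed model ID, sampling rate, and interval are placeholders to replace with your own values.

    import pulumi
    import pulumi_google_native as google_native

    # Minimal sketch: monitor a single deployed model on an existing endpoint.
    # All IDs and numeric values below are illustrative placeholders.
    monitoring_job = google_native.aiplatform.v1.ModelDeploymentMonitoringJob(
        "exampleMonitoringJob",
        display_name="example-monitoring-job",
        project="my-project",
        location="us-central1",
        endpoint="projects/my-project/locations/us-central1/endpoints/1234567890",
        # Log a fraction of prediction requests/responses for analysis.
        logging_sampling_strategy={
            "random_sample_config": {"sample_rate": 0.8},
        },
        # Run the monitoring pipeline once per hour.
        model_deployment_monitoring_schedule_config={
            "monitor_interval": "3600s",
        },
        model_deployment_monitoring_objective_configs=[{
            "deployed_model_id": "0987654321",
            "objective_config": {
                "prediction_drift_detection_config": {
                    "default_drift_threshold": {"value": 0.05},
                },
            },
        }])

    pulumi.export("monitoringJobState", monitoring_job.state)
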
    Create ModelDeploymentMonitoringJob Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new ModelDeploymentMonitoringJob(name: string, args: ModelDeploymentMonitoringJobArgs, opts?: CustomResourceOptions);
    @overload
    def ModelDeploymentMonitoringJob(resource_name: str,
                                     args: ModelDeploymentMonitoringJobArgs,
                                     opts: Optional[ResourceOptions] = None)
    
    @overload
    def ModelDeploymentMonitoringJob(resource_name: str,
                                     opts: Optional[ResourceOptions] = None,
                                     logging_sampling_strategy: Optional[GoogleCloudAiplatformV1SamplingStrategyArgs] = None,
                                     display_name: Optional[str] = None,
                                     model_deployment_monitoring_schedule_config: Optional[GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigArgs] = None,
                                     model_deployment_monitoring_objective_configs: Optional[Sequence[GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigArgs]] = None,
                                     endpoint: Optional[str] = None,
                                     analysis_instance_schema_uri: Optional[str] = None,
                                     labels: Optional[Mapping[str, str]] = None,
                                     location: Optional[str] = None,
                                     log_ttl: Optional[str] = None,
                                     encryption_spec: Optional[GoogleCloudAiplatformV1EncryptionSpecArgs] = None,
                                     enable_monitoring_pipeline_logs: Optional[bool] = None,
                                     model_monitoring_alert_config: Optional[GoogleCloudAiplatformV1ModelMonitoringAlertConfigArgs] = None,
                                     predict_instance_schema_uri: Optional[str] = None,
                                     project: Optional[str] = None,
                                     sample_predict_instance: Optional[Any] = None,
                                     stats_anomalies_base_directory: Optional[GoogleCloudAiplatformV1GcsDestinationArgs] = None)
    func NewModelDeploymentMonitoringJob(ctx *Context, name string, args ModelDeploymentMonitoringJobArgs, opts ...ResourceOption) (*ModelDeploymentMonitoringJob, error)
    public ModelDeploymentMonitoringJob(string name, ModelDeploymentMonitoringJobArgs args, CustomResourceOptions? opts = null)
    public ModelDeploymentMonitoringJob(String name, ModelDeploymentMonitoringJobArgs args)
    public ModelDeploymentMonitoringJob(String name, ModelDeploymentMonitoringJobArgs args, CustomResourceOptions options)
    
    type: google-native:aiplatform/v1:ModelDeploymentMonitoringJob
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args ModelDeploymentMonitoringJobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args ModelDeploymentMonitoringJobArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args ModelDeploymentMonitoringJobArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args ModelDeploymentMonitoringJobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args ModelDeploymentMonitoringJobArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Constructor example

    The following reference example uses placeholder values for all input properties.

    var modelDeploymentMonitoringJobResource = new GoogleNative.Aiplatform.V1.ModelDeploymentMonitoringJob("modelDeploymentMonitoringJobResource", new()
    {
        LoggingSamplingStrategy = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1SamplingStrategyArgs
        {
            RandomSampleConfig = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigArgs
            {
                SampleRate = 0,
            },
        },
        DisplayName = "string",
        ModelDeploymentMonitoringScheduleConfig = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigArgs
        {
            MonitorInterval = "string",
            MonitorWindow = "string",
        },
        ModelDeploymentMonitoringObjectiveConfigs = new[]
        {
            new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigArgs
            {
                DeployedModelId = "string",
                ObjectiveConfig = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigArgs
                {
                    ExplanationConfig = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigArgs
                    {
                        EnableFeatureAttributes = false,
                        ExplanationBaseline = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineArgs
                        {
                            Bigquery = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1BigQueryDestinationArgs
                            {
                                OutputUri = "string",
                            },
                            Gcs = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1GcsDestinationArgs
                            {
                                OutputUriPrefix = "string",
                            },
                            PredictionFormat = GoogleNative.Aiplatform.V1.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat.PredictionFormatUnspecified,
                        },
                    },
                    PredictionDriftDetectionConfig = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigArgs
                    {
                        AttributionScoreDriftThresholds = 
                        {
                            { "string", "string" },
                        },
                        DefaultDriftThreshold = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ThresholdConfigArgs
                        {
                            Value = 0,
                        },
                        DriftThresholds = 
                        {
                            { "string", "string" },
                        },
                    },
                    TrainingDataset = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetArgs
                    {
                        BigquerySource = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1BigQuerySourceArgs
                        {
                            InputUri = "string",
                        },
                        DataFormat = "string",
                        Dataset = "string",
                        GcsSource = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1GcsSourceArgs
                        {
                            Uris = new[]
                            {
                                "string",
                            },
                        },
                        LoggingSamplingStrategy = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1SamplingStrategyArgs
                        {
                            RandomSampleConfig = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigArgs
                            {
                                SampleRate = 0,
                            },
                        },
                        TargetField = "string",
                    },
                    TrainingPredictionSkewDetectionConfig = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigArgs
                    {
                        AttributionScoreSkewThresholds = 
                        {
                            { "string", "string" },
                        },
                        DefaultSkewThreshold = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ThresholdConfigArgs
                        {
                            Value = 0,
                        },
                        SkewThresholds = 
                        {
                            { "string", "string" },
                        },
                    },
                },
            },
        },
        Endpoint = "string",
        AnalysisInstanceSchemaUri = "string",
        Labels = 
        {
            { "string", "string" },
        },
        Location = "string",
        LogTtl = "string",
        EncryptionSpec = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1EncryptionSpecArgs
        {
            KmsKeyName = "string",
        },
        EnableMonitoringPipelineLogs = false,
        ModelMonitoringAlertConfig = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringAlertConfigArgs
        {
            EmailAlertConfig = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigArgs
            {
                UserEmails = new[]
                {
                    "string",
                },
            },
            EnableLogging = false,
            NotificationChannels = new[]
            {
                "string",
            },
        },
        PredictInstanceSchemaUri = "string",
        Project = "string",
        SamplePredictInstance = "any",
        StatsAnomaliesBaseDirectory = new GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1GcsDestinationArgs
        {
            OutputUriPrefix = "string",
        },
    });
    
    example, err := aiplatform.NewModelDeploymentMonitoringJob(ctx, "modelDeploymentMonitoringJobResource", &aiplatform.ModelDeploymentMonitoringJobArgs{
    	LoggingSamplingStrategy: &aiplatform.GoogleCloudAiplatformV1SamplingStrategyArgs{
    		RandomSampleConfig: &aiplatform.GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigArgs{
    			SampleRate: pulumi.Float64(0),
    		},
    	},
    	DisplayName: pulumi.String("string"),
    	ModelDeploymentMonitoringScheduleConfig: &aiplatform.GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigArgs{
    		MonitorInterval: pulumi.String("string"),
    		MonitorWindow:   pulumi.String("string"),
    	},
    	ModelDeploymentMonitoringObjectiveConfigs: aiplatform.GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigArray{
    		&aiplatform.GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigArgs{
    			DeployedModelId: pulumi.String("string"),
    			ObjectiveConfig: &aiplatform.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigArgs{
    				ExplanationConfig: &aiplatform.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigArgs{
    					EnableFeatureAttributes: pulumi.Bool(false),
    					ExplanationBaseline: &aiplatform.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineArgs{
    						Bigquery: &aiplatform.GoogleCloudAiplatformV1BigQueryDestinationArgs{
    							OutputUri: pulumi.String("string"),
    						},
    						Gcs: &aiplatform.GoogleCloudAiplatformV1GcsDestinationArgs{
    							OutputUriPrefix: pulumi.String("string"),
    						},
    						PredictionFormat: aiplatform.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormatPredictionFormatUnspecified,
    					},
    				},
    				PredictionDriftDetectionConfig: &aiplatform.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigArgs{
    					AttributionScoreDriftThresholds: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    					DefaultDriftThreshold: &aiplatform.GoogleCloudAiplatformV1ThresholdConfigArgs{
    						Value: pulumi.Float64(0),
    					},
    					DriftThresholds: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    				TrainingDataset: &aiplatform.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetArgs{
    					BigquerySource: &aiplatform.GoogleCloudAiplatformV1BigQuerySourceArgs{
    						InputUri: pulumi.String("string"),
    					},
    					DataFormat: pulumi.String("string"),
    					Dataset:    pulumi.String("string"),
    					GcsSource: &aiplatform.GoogleCloudAiplatformV1GcsSourceArgs{
    						Uris: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    					},
    					LoggingSamplingStrategy: &aiplatform.GoogleCloudAiplatformV1SamplingStrategyArgs{
    						RandomSampleConfig: &aiplatform.GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigArgs{
    							SampleRate: pulumi.Float64(0),
    						},
    					},
    					TargetField: pulumi.String("string"),
    				},
    				TrainingPredictionSkewDetectionConfig: &aiplatform.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigArgs{
    					AttributionScoreSkewThresholds: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    					DefaultSkewThreshold: &aiplatform.GoogleCloudAiplatformV1ThresholdConfigArgs{
    						Value: pulumi.Float64(0),
    					},
    					SkewThresholds: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    			},
    		},
    	},
    	Endpoint:                  pulumi.String("string"),
    	AnalysisInstanceSchemaUri: pulumi.String("string"),
    	Labels: pulumi.StringMap{
    		"string": pulumi.String("string"),
    	},
    	Location: pulumi.String("string"),
    	LogTtl:   pulumi.String("string"),
    	EncryptionSpec: &aiplatform.GoogleCloudAiplatformV1EncryptionSpecArgs{
    		KmsKeyName: pulumi.String("string"),
    	},
    	EnableMonitoringPipelineLogs: pulumi.Bool(false),
    	ModelMonitoringAlertConfig: &aiplatform.GoogleCloudAiplatformV1ModelMonitoringAlertConfigArgs{
    		EmailAlertConfig: &aiplatform.GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigArgs{
    			UserEmails: pulumi.StringArray{
    				pulumi.String("string"),
    			},
    		},
    		EnableLogging: pulumi.Bool(false),
    		NotificationChannels: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    	},
    	PredictInstanceSchemaUri: pulumi.String("string"),
    	Project:                  pulumi.String("string"),
    	SamplePredictInstance:    pulumi.Any("any"),
    	StatsAnomaliesBaseDirectory: &aiplatform.GoogleCloudAiplatformV1GcsDestinationArgs{
    		OutputUriPrefix: pulumi.String("string"),
    	},
    })
    
    var modelDeploymentMonitoringJobResource = new ModelDeploymentMonitoringJob("modelDeploymentMonitoringJobResource", ModelDeploymentMonitoringJobArgs.builder()
        .loggingSamplingStrategy(GoogleCloudAiplatformV1SamplingStrategyArgs.builder()
            .randomSampleConfig(GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigArgs.builder()
                .sampleRate(0)
                .build())
            .build())
        .displayName("string")
        .modelDeploymentMonitoringScheduleConfig(GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigArgs.builder()
            .monitorInterval("string")
            .monitorWindow("string")
            .build())
        .modelDeploymentMonitoringObjectiveConfigs(GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigArgs.builder()
            .deployedModelId("string")
            .objectiveConfig(GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigArgs.builder()
                .explanationConfig(GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigArgs.builder()
                    .enableFeatureAttributes(false)
                    .explanationBaseline(GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineArgs.builder()
                        .bigquery(GoogleCloudAiplatformV1BigQueryDestinationArgs.builder()
                            .outputUri("string")
                            .build())
                        .gcs(GoogleCloudAiplatformV1GcsDestinationArgs.builder()
                            .outputUriPrefix("string")
                            .build())
                        .predictionFormat("PREDICTION_FORMAT_UNSPECIFIED")
                        .build())
                    .build())
                .predictionDriftDetectionConfig(GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigArgs.builder()
                    .attributionScoreDriftThresholds(Map.of("string", "string"))
                    .defaultDriftThreshold(GoogleCloudAiplatformV1ThresholdConfigArgs.builder()
                        .value(0)
                        .build())
                    .driftThresholds(Map.of("string", "string"))
                    .build())
                .trainingDataset(GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetArgs.builder()
                    .bigquerySource(GoogleCloudAiplatformV1BigQuerySourceArgs.builder()
                        .inputUri("string")
                        .build())
                    .dataFormat("string")
                    .dataset("string")
                    .gcsSource(GoogleCloudAiplatformV1GcsSourceArgs.builder()
                        .uris("string")
                        .build())
                    .loggingSamplingStrategy(GoogleCloudAiplatformV1SamplingStrategyArgs.builder()
                        .randomSampleConfig(GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigArgs.builder()
                            .sampleRate(0)
                            .build())
                        .build())
                    .targetField("string")
                    .build())
                .trainingPredictionSkewDetectionConfig(GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigArgs.builder()
                    .attributionScoreSkewThresholds(Map.of("string", "string"))
                    .defaultSkewThreshold(GoogleCloudAiplatformV1ThresholdConfigArgs.builder()
                        .value(0)
                        .build())
                    .skewThresholds(Map.of("string", "string"))
                    .build())
                .build())
            .build())
        .endpoint("string")
        .analysisInstanceSchemaUri("string")
        .labels(Map.of("string", "string"))
        .location("string")
        .logTtl("string")
        .encryptionSpec(GoogleCloudAiplatformV1EncryptionSpecArgs.builder()
            .kmsKeyName("string")
            .build())
        .enableMonitoringPipelineLogs(false)
        .modelMonitoringAlertConfig(GoogleCloudAiplatformV1ModelMonitoringAlertConfigArgs.builder()
            .emailAlertConfig(GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigArgs.builder()
                .userEmails("string")
                .build())
            .enableLogging(false)
            .notificationChannels("string")
            .build())
        .predictInstanceSchemaUri("string")
        .project("string")
        .samplePredictInstance("any")
        .statsAnomaliesBaseDirectory(GoogleCloudAiplatformV1GcsDestinationArgs.builder()
            .outputUriPrefix("string")
            .build())
        .build());
    
    model_deployment_monitoring_job_resource = google_native.aiplatform.v1.ModelDeploymentMonitoringJob("modelDeploymentMonitoringJobResource",
        logging_sampling_strategy={
            "random_sample_config": {
                "sample_rate": 0,
            },
        },
        display_name="string",
        model_deployment_monitoring_schedule_config={
            "monitor_interval": "string",
            "monitor_window": "string",
        },
        model_deployment_monitoring_objective_configs=[{
            "deployed_model_id": "string",
            "objective_config": {
                "explanation_config": {
                    "enable_feature_attributes": False,
                    "explanation_baseline": {
                        "bigquery": {
                            "output_uri": "string",
                        },
                        "gcs": {
                            "output_uri_prefix": "string",
                        },
                        "prediction_format": google_native.aiplatform.v1.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat.PREDICTION_FORMAT_UNSPECIFIED,
                    },
                },
                "prediction_drift_detection_config": {
                    "attribution_score_drift_thresholds": {
                        "string": "string",
                    },
                    "default_drift_threshold": {
                        "value": 0,
                    },
                    "drift_thresholds": {
                        "string": "string",
                    },
                },
                "training_dataset": {
                    "bigquery_source": {
                        "input_uri": "string",
                    },
                    "data_format": "string",
                    "dataset": "string",
                    "gcs_source": {
                        "uris": ["string"],
                    },
                    "logging_sampling_strategy": {
                        "random_sample_config": {
                            "sample_rate": 0,
                        },
                    },
                    "target_field": "string",
                },
                "training_prediction_skew_detection_config": {
                    "attribution_score_skew_thresholds": {
                        "string": "string",
                    },
                    "default_skew_threshold": {
                        "value": 0,
                    },
                    "skew_thresholds": {
                        "string": "string",
                    },
                },
            },
        }],
        endpoint="string",
        analysis_instance_schema_uri="string",
        labels={
            "string": "string",
        },
        location="string",
        log_ttl="string",
        encryption_spec={
            "kms_key_name": "string",
        },
        enable_monitoring_pipeline_logs=False,
        model_monitoring_alert_config={
            "email_alert_config": {
                "user_emails": ["string"],
            },
            "enable_logging": False,
            "notification_channels": ["string"],
        },
        predict_instance_schema_uri="string",
        project="string",
        sample_predict_instance="any",
        stats_anomalies_base_directory={
            "output_uri_prefix": "string",
        })
    
    const modelDeploymentMonitoringJobResource = new google_native.aiplatform.v1.ModelDeploymentMonitoringJob("modelDeploymentMonitoringJobResource", {
        loggingSamplingStrategy: {
            randomSampleConfig: {
                sampleRate: 0,
            },
        },
        displayName: "string",
        modelDeploymentMonitoringScheduleConfig: {
            monitorInterval: "string",
            monitorWindow: "string",
        },
        modelDeploymentMonitoringObjectiveConfigs: [{
            deployedModelId: "string",
            objectiveConfig: {
                explanationConfig: {
                    enableFeatureAttributes: false,
                    explanationBaseline: {
                        bigquery: {
                            outputUri: "string",
                        },
                        gcs: {
                            outputUriPrefix: "string",
                        },
                        predictionFormat: google_native.aiplatform.v1.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat.PredictionFormatUnspecified,
                    },
                },
                predictionDriftDetectionConfig: {
                    attributionScoreDriftThresholds: {
                        string: "string",
                    },
                    defaultDriftThreshold: {
                        value: 0,
                    },
                    driftThresholds: {
                        string: "string",
                    },
                },
                trainingDataset: {
                    bigquerySource: {
                        inputUri: "string",
                    },
                    dataFormat: "string",
                    dataset: "string",
                    gcsSource: {
                        uris: ["string"],
                    },
                    loggingSamplingStrategy: {
                        randomSampleConfig: {
                            sampleRate: 0,
                        },
                    },
                    targetField: "string",
                },
                trainingPredictionSkewDetectionConfig: {
                    attributionScoreSkewThresholds: {
                        string: "string",
                    },
                    defaultSkewThreshold: {
                        value: 0,
                    },
                    skewThresholds: {
                        string: "string",
                    },
                },
            },
        }],
        endpoint: "string",
        analysisInstanceSchemaUri: "string",
        labels: {
            string: "string",
        },
        location: "string",
        logTtl: "string",
        encryptionSpec: {
            kmsKeyName: "string",
        },
        enableMonitoringPipelineLogs: false,
        modelMonitoringAlertConfig: {
            emailAlertConfig: {
                userEmails: ["string"],
            },
            enableLogging: false,
            notificationChannels: ["string"],
        },
        predictInstanceSchemaUri: "string",
        project: "string",
        samplePredictInstance: "any",
        statsAnomaliesBaseDirectory: {
            outputUriPrefix: "string",
        },
    });
    
    type: google-native:aiplatform/v1:ModelDeploymentMonitoringJob
    properties:
        analysisInstanceSchemaUri: string
        displayName: string
        enableMonitoringPipelineLogs: false
        encryptionSpec:
            kmsKeyName: string
        endpoint: string
        labels:
            string: string
        location: string
        logTtl: string
        loggingSamplingStrategy:
            randomSampleConfig:
                sampleRate: 0
        modelDeploymentMonitoringObjectiveConfigs:
            - deployedModelId: string
              objectiveConfig:
                explanationConfig:
                    enableFeatureAttributes: false
                    explanationBaseline:
                        bigquery:
                            outputUri: string
                        gcs:
                            outputUriPrefix: string
                        predictionFormat: PREDICTION_FORMAT_UNSPECIFIED
                predictionDriftDetectionConfig:
                    attributionScoreDriftThresholds:
                        string: string
                    defaultDriftThreshold:
                        value: 0
                    driftThresholds:
                        string: string
                trainingDataset:
                    bigquerySource:
                        inputUri: string
                    dataFormat: string
                    dataset: string
                    gcsSource:
                        uris:
                            - string
                    loggingSamplingStrategy:
                        randomSampleConfig:
                            sampleRate: 0
                    targetField: string
                trainingPredictionSkewDetectionConfig:
                    attributionScoreSkewThresholds:
                        string: string
                    defaultSkewThreshold:
                        value: 0
                    skewThresholds:
                        string: string
        modelDeploymentMonitoringScheduleConfig:
            monitorInterval: string
            monitorWindow: string
        modelMonitoringAlertConfig:
            emailAlertConfig:
                userEmails:
                    - string
            enableLogging: false
            notificationChannels:
                - string
        predictInstanceSchemaUri: string
        project: string
        samplePredictInstance: any
        statsAnomaliesBaseDirectory:
            outputUriPrefix: string
    

    ModelDeploymentMonitoringJob Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.

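    For instance, the logging sampling strategy can be supplied either as typed args classes or as a plain dictionary; the 0.5 sample rate below is purely illustrative:

    import pulumi_google_native as google_native

    # Option 1: typed argument classes
    strategy_as_args = google_native.aiplatform.v1.GoogleCloudAiplatformV1SamplingStrategyArgs(
        random_sample_config=google_native.aiplatform.v1.GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigArgs(
            sample_rate=0.5,
        ),
    )

    # Option 2: dictionary literal with snake_case keys
    strategy_as_dict = {"random_sample_config": {"sample_rate": 0.5}}

    Either value can then be supplied as the logging_sampling_strategy input listed below.
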
    The ModelDeploymentMonitoringJob resource accepts the following input properties:

    DisplayName string
    The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    Endpoint string
    Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
    LoggingSamplingStrategy Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1SamplingStrategy
    Sample Strategy for logging.
    ModelDeploymentMonitoringObjectiveConfigs List<Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfig>
    The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
    ModelDeploymentMonitoringScheduleConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfig
    Schedule config for running the monitoring job.
    AnalysisInstanceSchemaUri string
    YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, as all the fields in the predict instance are formatted as strings.
    EnableMonitoringPipelineLogs bool
    If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur a cost, which is subject to Cloud Logging pricing.
    EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1EncryptionSpec
    Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
    Labels Dictionary<string, string>
    The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    Location string
    LogTtl string
    The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and we take the ceiling of TTL/86400 (a day); e.g. { second: 3600 } indicates a TTL of 1 day.
    ModelMonitoringAlertConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringAlertConfig
    Alert config for model monitoring.
    PredictInstanceSchemaUri string
    YAML schema file URI describing the format of a single instance, which is given to this Endpoint's prediction (and explanation) requests. If not set, the predict schema will be generated from collected predict requests.
    Project string
    SamplePredictInstance object
    Sample predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema will be generated from collected predict requests.
    StatsAnomaliesBaseDirectory Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1GcsDestination
    Stats anomalies base folder path.
    DisplayName string
    The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    Endpoint string
    Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
    LoggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategyArgs
    Sample Strategy for logging.
    ModelDeploymentMonitoringObjectiveConfigs []GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigArgs
    The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
    ModelDeploymentMonitoringScheduleConfig GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigArgs
    Schedule config for running the monitoring job.
    AnalysisInstanceSchemaUri string
    YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, as all the fields in the predict instance are formatted as strings.
    EnableMonitoringPipelineLogs bool
    If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur a cost, which is subject to Cloud Logging pricing.
    EncryptionSpec GoogleCloudAiplatformV1EncryptionSpecArgs
    Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
    Labels map[string]string
    The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    Location string
    LogTtl string
    The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and we take the ceiling of TTL/86400 (a day); e.g. { second: 3600 } indicates a TTL of 1 day.
    ModelMonitoringAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigArgs
    Alert config for model monitoring.
    PredictInstanceSchemaUri string
    YAML schema file URI describing the format of a single instance, which is given to this Endpoint's prediction (and explanation) requests. If not set, the predict schema will be generated from collected predict requests.
    Project string
    SamplePredictInstance interface{}
    Sample predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema will be generated from collected predict requests.
    StatsAnomaliesBaseDirectory GoogleCloudAiplatformV1GcsDestinationArgs
    Stats anomalies base folder path.
    displayName String
    The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    endpoint String
    Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
    loggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategy
    Sample Strategy for logging.
    modelDeploymentMonitoringObjectiveConfigs List<GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfig>
    The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
    modelDeploymentMonitoringScheduleConfig GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfig
    Schedule config for running the monitoring job.
    analysisInstanceSchemaUri String
    YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, as all the fields in the predict instance are formatted as strings.
    enableMonitoringPipelineLogs Boolean
    If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur a cost, which is subject to Cloud Logging pricing.
    encryptionSpec GoogleCloudAiplatformV1EncryptionSpec
    Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
    labels Map<String,String>
    The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location String
    logTtl String
    The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and we take the ceiling of TTL/86400 (a day); e.g. { second: 3600 } indicates a TTL of 1 day.
    modelMonitoringAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfig
    Alert config for model monitoring.
    predictInstanceSchemaUri String
    YAML schema file URI describing the format of a single instance, which is given to this Endpoint's prediction (and explanation) requests. If not set, the predict schema will be generated from collected predict requests.
    project String
    samplePredictInstance Object
    Sample predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema will be generated from collected predict requests.
    statsAnomaliesBaseDirectory GoogleCloudAiplatformV1GcsDestination
    Stats anomalies base folder path.
    displayName string
    The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    endpoint string
    Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
    loggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategy
    Sample Strategy for logging.
    modelDeploymentMonitoringObjectiveConfigs GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfig[]
    The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
    modelDeploymentMonitoringScheduleConfig GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfig
    Schedule config for running the monitoring job.
    analysisInstanceSchemaUri string
    YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, as all the fields in the predict instance are formatted as strings.
    enableMonitoringPipelineLogs boolean
    If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur a cost, which is subject to Cloud Logging pricing.
    encryptionSpec GoogleCloudAiplatformV1EncryptionSpec
    Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
    labels {[key: string]: string}
    The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location string
    logTtl string
    The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and we take the ceiling of TTL/86400 (a day); e.g. { second: 3600 } indicates a TTL of 1 day.
    modelMonitoringAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfig
    Alert config for model monitoring.
    predictInstanceSchemaUri string
    YAML schema file URI describing the format of a single instance, which is given to this Endpoint's prediction (and explanation) requests. If not set, the predict schema will be generated from collected predict requests.
    project string
    samplePredictInstance any
    Sample predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema will be generated from collected predict requests.
    statsAnomaliesBaseDirectory GoogleCloudAiplatformV1GcsDestination
    Stats anomalies base folder path.
    display_name str
    The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    endpoint str
    Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
    logging_sampling_strategy GoogleCloudAiplatformV1SamplingStrategyArgs
    Sample Strategy for logging.
    model_deployment_monitoring_objective_configs Sequence[GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigArgs]
    The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
    model_deployment_monitoring_schedule_config GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigArgs
    Schedule config for running the monitoring job.
    analysis_instance_schema_uri str
    YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, as all the fields in the predict instance are formatted as strings.
    enable_monitoring_pipeline_logs bool
    If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur a cost, which is subject to Cloud Logging pricing.
    encryption_spec GoogleCloudAiplatformV1EncryptionSpecArgs
    Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
    labels Mapping[str, str]
    The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location str
    log_ttl str
    The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and we take the ceiling of TTL/86400 (a day); e.g. { second: 3600 } indicates a TTL of 1 day.
    model_monitoring_alert_config GoogleCloudAiplatformV1ModelMonitoringAlertConfigArgs
    Alert config for model monitoring.
    predict_instance_schema_uri str
    YAML schema file URI describing the format of a single instance, which is given to this Endpoint's prediction (and explanation) requests. If not set, the predict schema will be generated from collected predict requests.
    project str
    sample_predict_instance Any
    Sample predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema will be generated from collected predict requests.
    stats_anomalies_base_directory GoogleCloudAiplatformV1GcsDestinationArgs
    Stats anomalies base folder path.
    displayName String
    The user-defined display name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    endpoint String
    Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
    loggingSamplingStrategy Property Map
    Sample Strategy for logging.
    modelDeploymentMonitoringObjectiveConfigs List<Property Map>
    The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
    modelDeploymentMonitoringScheduleConfig Property Map
    Schedule config for running the monitoring job.
    analysisInstanceSchemaUri String
    YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, as all the fields in the predict instance are formatted as strings.
    enableMonitoringPipelineLogs Boolean
    If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur a cost, which is subject to Cloud Logging pricing.
    encryptionSpec Property Map
    Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
    labels Map<String>
    The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location String
    logTtl String
    The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and we take the ceiling of TTL/86400 (a day); e.g. { second: 3600 } indicates a TTL of 1 day.
    modelMonitoringAlertConfig Property Map
    Alert config for model monitoring.
    predictInstanceSchemaUri String
    YAML schema file URI describing the format of a single instance, which is given to this Endpoint's prediction (and explanation) requests. If not set, the predict schema will be generated from collected predict requests.
    project String
    samplePredictInstance Any
    Sample predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema will be generated from collected predict requests.
    statsAnomaliesBaseDirectory Property Map
    Stats anomalies base folder path.

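    To make a few of the optional inputs above concrete, the Python snippet below sketches plausible values for the retention, labeling, and alerting settings; the email address, notification channel, and TTL are placeholders rather than defaults.

    # Illustrative values for some optional inputs (all placeholders).
    labels = {"team": "ml-platform", "env": "dev"}
    log_ttl = "86400s"  # keep the logging BigQuery tables for roughly one day
    enable_monitoring_pipeline_logs = True  # send pipeline status/anomalies to Cloud Logging (incurs cost)
    model_monitoring_alert_config = {
        "email_alert_config": {"user_emails": ["ml-oncall@example.com"]},
        "enable_logging": True,
        "notification_channels": ["projects/my-project/notificationChannels/1234567890"],
    }

    These values would be passed to the constructor under the correspondingly named arguments.
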
    Outputs

    All input properties are implicitly available as output properties. Additionally, the ModelDeploymentMonitoringJob resource produces the following output properties:

    BigqueryTables List<Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse>
    The BigQuery tables created for the job under the customer project; customers can run their own queries and analysis on them. There can be at most 4 log tables: 1. training data logging predict request/response 2. serving data logging predict request/response
    CreateTime string
    Timestamp when this ModelDeploymentMonitoringJob was created.
    Error Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleRpcStatusResponse
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    Id string
    The provider-assigned unique ID for this managed resource.
    LatestMonitoringPipelineMetadata Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse
    Latest triggered monitoring pipeline metadata.
    Name string
    Resource name of a ModelDeploymentMonitoringJob.
    NextScheduleTime string
    Timestamp when this monitoring pipeline will be scheduled to run for the next round.
    ScheduleState string
    Schedule state when the monitoring job is in Running state.
    State string
    The detailed state of the monitoring job. While the job is still being created, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. If the job is paused, the state will be 'PAUSED'; if it is resumed, the state will return to 'RUNNING'.
    UpdateTime string
    Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
    BigqueryTables []GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse
    The BigQuery tables created for the job under the customer project; customers can run their own queries and analysis on them. There can be at most 4 log tables: 1. training data logging predict request/response 2. serving data logging predict request/response
    CreateTime string
    Timestamp when this ModelDeploymentMonitoringJob was created.
    Error GoogleRpcStatusResponse
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    Id string
    The provider-assigned unique ID for this managed resource.
    LatestMonitoringPipelineMetadata GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse
    Latest triggered monitoring pipeline metadata.
    Name string
    Resource name of a ModelDeploymentMonitoringJob.
    NextScheduleTime string
    Timestamp when this monitoring pipeline will be scheduled to run for the next round.
    ScheduleState string
    Schedule state when the monitoring job is in Running state.
    State string
    The detailed state of the monitoring job. While the job is still being created, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. If the job is paused, the state will be 'PAUSED'; if it is resumed, the state will return to 'RUNNING'.
    UpdateTime string
    Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
    bigqueryTables List<GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse>
    The BigQuery tables created for the job under the customer project; customers can run their own queries and analysis on them. There can be at most 4 log tables: 1. training data logging predict request/response 2. serving data logging predict request/response
    createTime String
    Timestamp when this ModelDeploymentMonitoringJob was created.
    error GoogleRpcStatusResponse
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    id String
    The provider-assigned unique ID for this managed resource.
    latestMonitoringPipelineMetadata GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse
    Latest triggered monitoring pipeline metadata.
    name String
    Resource name of a ModelDeploymentMonitoringJob.
    nextScheduleTime String
    Timestamp when this monitoring pipeline will be scheduled to run for the next round.
    scheduleState String
    Schedule state when the monitoring job is in Running state.
    state String
    The detailed state of the monitoring job. While the job is still being created, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. If the job is paused, the state will be 'PAUSED'; if it is resumed, the state will return to 'RUNNING'.
    updateTime String
    Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
    bigqueryTables GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse[]
    The BigQuery tables created for the job under the customer project; customers can run their own queries and analysis on them. There can be at most 4 log tables: 1. training data logging predict request/response 2. serving data logging predict request/response
    createTime string
    Timestamp when this ModelDeploymentMonitoringJob was created.
    error GoogleRpcStatusResponse
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    id string
    The provider-assigned unique ID for this managed resource.
    latestMonitoringPipelineMetadata GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse
    Latest triggered monitoring pipeline metadata.
    name string
    Resource name of a ModelDeploymentMonitoringJob.
    nextScheduleTime string
    Timestamp when this monitoring pipeline will be scheduled to run for the next round.
    scheduleState string
    Schedule state when the monitoring job is in Running state.
    state string
    The detailed state of the monitoring job. While the job is still being created, the state will be 'PENDING'. Once the job is successfully created, the state will be 'RUNNING'. If the job is paused, the state will be 'PAUSED'; if it is resumed, the state will return to 'RUNNING'.
    updateTime string
    Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
    bigquery_tables Sequence[GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse]
    The created bigquery tables for the job under customer project. Customer could do their own query & analysis. There could be 4 log tables in maximum: 1. Training data logging predict request/response 2. Serving data logging predict request/response
    create_time str
    Timestamp when this ModelDeploymentMonitoringJob was created.
    error GoogleRpcStatusResponse
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    id str
    The provider-assigned unique ID for this managed resource.
    latest_monitoring_pipeline_metadata GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse
    Latest triggered monitoring pipeline metadata.
    name str
    Resource name of a ModelDeploymentMonitoringJob.
    next_schedule_time str
    Timestamp when this monitoring pipeline will be scheduled to run for the next round.
    schedule_state str
    Schedule state when the monitoring job is in Running state.
    state str
    The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state becomes 'RUNNING'. When the job is paused, the state is 'PAUSED'; when it is resumed, the state returns to 'RUNNING'.
    update_time str
    Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
    bigqueryTables List<Property Map>
    The BigQuery tables created for the job in the customer's project; customers can run their own queries and analysis on them. There can be at most four log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response
    createTime String
    Timestamp when this ModelDeploymentMonitoringJob was created.
    error Property Map
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    id String
    The provider-assigned unique ID for this managed resource.
    latestMonitoringPipelineMetadata Property Map
    Latest triggered monitoring pipeline metadata.
    name String
    Resource name of a ModelDeploymentMonitoringJob.
    nextScheduleTime String
    Timestamp when this monitoring pipeline will be scheduled to run for the next round.
    scheduleState String
    Schedule state when the monitoring job is in Running state.
    state String
    The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state becomes 'RUNNING'. When the job is paused, the state is 'PAUSED'; when it is resumed, the state returns to 'RUNNING'.
    updateTime String
    Timestamp when this ModelDeploymentMonitoringJob was updated most recently.
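    To see how these outputs surface in a program, here is a minimal TypeScript sketch of creating the job and exporting two of the output properties. The top-level argument names follow the constructor syntax shown earlier; the nested shape of the sampling strategy (randomSampleConfig with a sampleRate) and all project, endpoint, and deployed-model IDs are placeholder assumptions, not values from this page.

    import * as google_native from "@pulumi/google-native";

    // Placeholder identifiers; substitute your own project, endpoint, and deployed model.
    const monitoringJob = new google_native.aiplatform.v1.ModelDeploymentMonitoringJob("drift-monitor", {
        displayName: "drift-monitor",
        project: "my-project",
        location: "us-central1",
        endpoint: "projects/my-project/locations/us-central1/endpoints/1234567890",
        // Assumed shape: sample a fraction of prediction requests for logging.
        loggingSamplingStrategy: { randomSampleConfig: { sampleRate: 0.8 } },
        modelDeploymentMonitoringScheduleConfig: { monitorInterval: "3600s" },
        modelDeploymentMonitoringObjectiveConfigs: [{
            deployedModelId: "9876543210",
            objectiveConfig: {
                predictionDriftDetectionConfig: { defaultDriftThreshold: { value: 0.3 } },
            },
        }],
    });

    // The output properties listed above become available on the resource once it is created.
    export const jobState = monitoringJob.state;
    export const nextRun = monitoringJob.nextScheduleTime;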

    Supporting Types

    GoogleCloudAiplatformV1BigQueryDestination, GoogleCloudAiplatformV1BigQueryDestinationArgs

    OutputUri string
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
    OutputUri string
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
    outputUri String
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
    outputUri string
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
    output_uri str
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
    outputUri String
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
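    As a quick illustration of the three accepted outputUri forms, a TypeScript sketch follows; the project, dataset, and table names are placeholders.

    // Project-level destination: the dataset and table are created for you.
    const toProject = { outputUri: "bq://my-project" };
    // Dataset-level destination.
    const toDataset = { outputUri: "bq://my-project.monitoring_logs" };
    // Full table reference: the dataset must already exist and the table must not.
    const toTable = { outputUri: "bq://my-project.monitoring_logs.stats" };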

    GoogleCloudAiplatformV1BigQueryDestinationResponse, GoogleCloudAiplatformV1BigQueryDestinationResponseArgs

    OutputUri string
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
    OutputUri string
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
    outputUri String
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
    outputUri string
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
    output_uri str
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
    outputUri String
    BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.

    GoogleCloudAiplatformV1BigQuerySource, GoogleCloudAiplatformV1BigQuerySourceArgs

    InputUri string
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
    InputUri string
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
    inputUri String
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
    inputUri string
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
    input_uri str
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
    inputUri String
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.

    GoogleCloudAiplatformV1BigQuerySourceResponse, GoogleCloudAiplatformV1BigQuerySourceResponseArgs

    InputUri string
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
    InputUri string
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
    inputUri String
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
    inputUri string
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
    input_uri str
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
    inputUri String
    BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.

    GoogleCloudAiplatformV1EncryptionSpec, GoogleCloudAiplatformV1EncryptionSpecArgs

    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kms_key_name str
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
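    For example, a customer-managed encryption key reference might look like the following TypeScript sketch; the project, region, key ring, and key names are placeholders.

    // The key must be in the same region as the resource it protects.
    const encryptionSpec = {
        kmsKeyName: "projects/my-project/locations/us-central1/keyRings/my-kr/cryptoKeys/my-key",
    };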

    GoogleCloudAiplatformV1EncryptionSpecResponse, GoogleCloudAiplatformV1EncryptionSpecResponseArgs

    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kms_key_name str
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.

    GoogleCloudAiplatformV1GcsDestination, GoogleCloudAiplatformV1GcsDestinationArgs

    OutputUriPrefix string
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    OutputUriPrefix string
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix String
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix string
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    output_uri_prefix str
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix String
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
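    A minimal sketch of a Cloud Storage destination, as used for example by stats_anomalies_base_directory; the bucket and path are placeholders.

    // A trailing '/' is appended automatically if missing, and the directory is created if absent.
    const statsBaseDirectory = { outputUriPrefix: "gs://my-bucket/model-monitoring/" };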

    GoogleCloudAiplatformV1GcsDestinationResponse, GoogleCloudAiplatformV1GcsDestinationResponseArgs

    OutputUriPrefix string
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    OutputUriPrefix string
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix String
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix string
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    output_uri_prefix str
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix String
    Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.

    GoogleCloudAiplatformV1GcsSource, GoogleCloudAiplatformV1GcsSourceArgs

    Uris List<string>
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
    Uris []string
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
    uris List<String>
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
    uris string[]
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
    uris Sequence[str]
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
    uris List<String>
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
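    A minimal sketch of a Cloud Storage source with a wildcard pattern; the bucket and path are placeholders.

    // Wildcards are allowed in object names.
    const trainingFiles = { uris: ["gs://my-bucket/training/*.jsonl"] };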

    GoogleCloudAiplatformV1GcsSourceResponse, GoogleCloudAiplatformV1GcsSourceResponseArgs

    Uris List<string>
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
    Uris []string
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
    uris List<String>
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
    uris string[]
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
    uris Sequence[str]
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
    uris List<String>
    Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.

    GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponse, GoogleCloudAiplatformV1ModelDeploymentMonitoringBigQueryTableResponseArgs

    BigqueryTablePath string
    The BigQuery table created to store logs; customers can run their own queries and analysis on it. Format: bq://.model_deployment_monitoring_._
    LogSource string
    The source of the log.
    LogType string
    The type of the log.
    BigqueryTablePath string
    The BigQuery table created to store logs; customers can run their own queries and analysis on it. Format: bq://.model_deployment_monitoring_._
    LogSource string
    The source of the log.
    LogType string
    The type of the log.
    bigqueryTablePath String
    The BigQuery table created to store logs; customers can run their own queries and analysis on it. Format: bq://.model_deployment_monitoring_._
    logSource String
    The source of the log.
    logType String
    The type of the log.
    bigqueryTablePath string
    The BigQuery table created to store logs; customers can run their own queries and analysis on it. Format: bq://.model_deployment_monitoring_._
    logSource string
    The source of the log.
    logType string
    The type of the log.
    bigquery_table_path str
    The BigQuery table created to store logs; customers can run their own queries and analysis on it. Format: bq://.model_deployment_monitoring_._
    log_source str
    The source of the log.
    log_type str
    The type of the log.
    bigqueryTablePath String
    The BigQuery table created to store logs; customers can run their own queries and analysis on it. Format: bq://.model_deployment_monitoring_._
    logSource String
    The source of the log.
    logType String
    The type of the log.

    GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse, GoogleCloudAiplatformV1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponseArgs

    RunTime string
    The time of the most recent monitoring pipeline run related to this job.
    Status Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleRpcStatusResponse
    The status of the most recent monitoring pipeline.
    RunTime string
    The time of the most recent monitoring pipeline run related to this job.
    Status GoogleRpcStatusResponse
    The status of the most recent monitoring pipeline.
    runTime String
    The time of the most recent monitoring pipeline run related to this job.
    status GoogleRpcStatusResponse
    The status of the most recent monitoring pipeline.
    runTime string
    The time of the most recent monitoring pipeline run related to this job.
    status GoogleRpcStatusResponse
    The status of the most recent monitoring pipeline.
    run_time str
    The time of the most recent monitoring pipeline run related to this job.
    status GoogleRpcStatusResponse
    The status of the most recent monitoring pipeline.
    runTime String
    The time of the most recent monitoring pipeline run related to this job.
    status Property Map
    The status of the most recent monitoring pipeline.

    GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfig, GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigArgs

    DeployedModelId string
    The DeployedModel ID of the objective config.
    ObjectiveConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfig
    The objective config for the model monitoring job of this deployed model.
    DeployedModelId string
    The DeployedModel ID of the objective config.
    ObjectiveConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfig
    The objective config for the model monitoring job of this deployed model.
    deployedModelId String
    The DeployedModel ID of the objective config.
    objectiveConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfig
    The objective config for the model monitoring job of this deployed model.
    deployedModelId string
    The DeployedModel ID of the objective config.
    objectiveConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfig
    The objective config for the model monitoring job of this deployed model.
    deployed_model_id str
    The DeployedModel ID of the objective config.
    objective_config GoogleCloudAiplatformV1ModelMonitoringObjectiveConfig
    The objective config for the model monitoring job of this deployed model.
    deployedModelId String
    The DeployedModel ID of the objective config.
    objectiveConfig Property Map
    The objective config for the model monitoring job of this deployed model.

    GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigResponse, GoogleCloudAiplatformV1ModelDeploymentMonitoringObjectiveConfigResponseArgs

    DeployedModelId string
    The DeployedModel ID of the objective config.
    ObjectiveConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse
    The objective config for the model monitoring job of this deployed model.
    DeployedModelId string
    The DeployedModel ID of the objective config.
    ObjectiveConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse
    The objective config for the model monitoring job of this deployed model.
    deployedModelId String
    The DeployedModel ID of the objective config.
    objectiveConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse
    The objective config for the model monitoring job of this deployed model.
    deployedModelId string
    The DeployedModel ID of the objective config.
    objectiveConfig GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse
    The objective config for the model monitoring job of this deployed model.
    deployed_model_id str
    The DeployedModel ID of the objective config.
    objective_config GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse
    The objective config for the model monitoring job of this deployed model.
    deployedModelId String
    The DeployedModel ID of the objective config.
    objectiveConfig Property Map
    The objective config for the model monitoring job of this deployed model.

    GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfig, GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigArgs

    MonitorInterval string
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    MonitorWindow string
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
    MonitorInterval string
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    MonitorWindow string
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
    monitorInterval String
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    monitorWindow String
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
    monitorInterval string
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    monitorWindow string
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
    monitor_interval str
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    monitor_window str
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
    monitorInterval String
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    monitorWindow String
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
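    For example, to run the pipeline every hour while aggregating the previous two hours of prediction data, the config could look like the following sketch; the values are illustrative, and the durations use the standard seconds-suffixed string form.

    const scheduleConfig = {
        monitorInterval: "3600s", // rounded up to the next full hour
        monitorWindow: "7200s",   // defaults to monitorInterval when omitted
    };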

    GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigResponse, GoogleCloudAiplatformV1ModelDeploymentMonitoringScheduleConfigResponseArgs

    MonitorInterval string
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    MonitorWindow string
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
    MonitorInterval string
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    MonitorWindow string
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
    monitorInterval String
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    monitorWindow String
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
    monitorInterval string
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    monitorWindow string
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
    monitor_interval str
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    monitor_window str
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
    monitorInterval String
    The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
    monitorWindow String
    The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. e.g. If currently the cutoff time is 2022-01-08 14:30:00 and the monitor_window is set to be 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.

    GoogleCloudAiplatformV1ModelMonitoringAlertConfig, GoogleCloudAiplatformV1ModelMonitoringAlertConfigArgs

    EmailAlertConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfig
    Email alert config.
    EnableLogging bool
    Dump the anomalies to Cloud Logging. The anomalies are written as a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. These logs can be further routed to Pub/Sub or any other service supported by Cloud Logging.
    NotificationChannels List<string>
    Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
    EmailAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfig
    Email alert config.
    EnableLogging bool
    Dump the anomalies to Cloud Logging. The anomalies are written as a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. These logs can be further routed to Pub/Sub or any other service supported by Cloud Logging.
    NotificationChannels []string
    Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
    emailAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfig
    Email alert config.
    enableLogging Boolean
    Dump the anomalies to Cloud Logging. The anomalies are written as a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. These logs can be further routed to Pub/Sub or any other service supported by Cloud Logging.
    notificationChannels List<String>
    Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
    emailAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfig
    Email alert config.
    enableLogging boolean
    Dump the anomalies to Cloud Logging. The anomalies are written as a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. These logs can be further routed to Pub/Sub or any other service supported by Cloud Logging.
    notificationChannels string[]
    Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
    email_alert_config GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfig
    Email alert config.
    enable_logging bool
    Dump the anomalies to Cloud Logging. The anomalies are written as a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. These logs can be further routed to Pub/Sub or any other service supported by Cloud Logging.
    notification_channels Sequence[str]
    Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
    emailAlertConfig Property Map
    Email alert config.
    enableLogging Boolean
    Dump the anomalies to Cloud Logging. The anomalies are written as a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. These logs can be further routed to Pub/Sub or any other service supported by Cloud Logging.
    notificationChannels List<String>
    Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
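    A hedged TypeScript sketch of an alert config that emails an on-call alias, writes anomalies to Cloud Logging, and notifies a Cloud Monitoring channel; all addresses, project names, and channel IDs are placeholders.

    const alertConfig = {
        emailAlertConfig: { userEmails: ["ml-oncall@example.com"] },
        enableLogging: true,
        // Hypothetical channel; real values follow projects/<project>/notificationChannels/<channel-id>.
        notificationChannels: ["projects/my-project/notificationChannels/1234567890"],
    };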

    GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfig, GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigArgs

    UserEmails List<string>
    The email addresses to send the alert.
    UserEmails []string
    The email addresses to send the alert.
    userEmails List<String>
    The email addresses to send the alert.
    userEmails string[]
    The email addresses to send the alert.
    user_emails Sequence[str]
    The email addresses to send the alert.
    userEmails List<String>
    The email addresses to send the alert.

    GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse, GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponseArgs

    UserEmails List<string>
    The email addresses to send the alert.
    UserEmails []string
    The email addresses to send the alert.
    userEmails List<String>
    The email addresses to send the alert.
    userEmails string[]
    The email addresses to send the alert.
    user_emails Sequence[str]
    The email addresses to send the alert.
    userEmails List<String>
    The email addresses to send the alert.

    GoogleCloudAiplatformV1ModelMonitoringAlertConfigResponse, GoogleCloudAiplatformV1ModelMonitoringAlertConfigResponseArgs

    EmailAlertConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse
    Email alert config.
    EnableLogging bool
    Dump the anomalies to Cloud Logging. The anomalies will be put to json payload encoded from proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further sinked to Pub/Sub or any other services supported by Cloud Logging.
    NotificationChannels List<string>
    Resource names of the NotificationChannels to send alert. Must be of the format projects//notificationChannels/
    EmailAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse
    Email alert config.
    EnableLogging bool
    Dump the anomalies to Cloud Logging. The anomalies will be put to json payload encoded from proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further sinked to Pub/Sub or any other services supported by Cloud Logging.
    NotificationChannels []string
    Resource names of the NotificationChannels to send alert. Must be of the format projects//notificationChannels/
    emailAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse
    Email alert config.
    enableLogging Boolean
    Dump the anomalies to Cloud Logging. The anomalies will be put to json payload encoded from proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further sinked to Pub/Sub or any other services supported by Cloud Logging.
    notificationChannels List<String>
    Resource names of the NotificationChannels to send alert. Must be of the format projects//notificationChannels/
    emailAlertConfig GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse
    Email alert config.
    enableLogging boolean
    Dump the anomalies to Cloud Logging. The anomalies will be put to json payload encoded from proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further sinked to Pub/Sub or any other services supported by Cloud Logging.
    notificationChannels string[]
    Resource names of the NotificationChannels to send alert. Must be of the format projects//notificationChannels/
    email_alert_config GoogleCloudAiplatformV1ModelMonitoringAlertConfigEmailAlertConfigResponse
    Email alert config.
    enable_logging bool
    Dump the anomalies to Cloud Logging. The anomalies will be put to json payload encoded from proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further sinked to Pub/Sub or any other services supported by Cloud Logging.
    notification_channels Sequence[str]
    Resource names of the NotificationChannels to send alert. Must be of the format projects//notificationChannels/
    emailAlertConfig Property Map
    Email alert config.
    enableLogging Boolean
    Dump the anomalies to Cloud Logging. The anomalies will be put to json payload encoded from proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further sinked to Pub/Sub or any other services supported by Cloud Logging.
    notificationChannels List<String>
    Resource names of the NotificationChannels to send alert. Must be of the format projects//notificationChannels/

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfig, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigArgs

    ExplanationConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfig
    The config for integrating with Vertex Explainable AI.
    PredictionDriftDetectionConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfig
    The config for drift of prediction data.
    TrainingDataset Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDataset
    Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
    TrainingPredictionSkewDetectionConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfig
    The config for skew between training data and prediction data.
    explanationConfig Property Map
    The config for integrating with Vertex Explainable AI.
    predictionDriftDetectionConfig Property Map
    The config for drift of prediction data.
    trainingDataset Property Map
    Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
    trainingPredictionSkewDetectionConfig Property Map
    The config for skew between training data and prediction data.
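    A sketch of a fuller objective config combining skew detection against a training dataset with prediction-drift detection. This page documents only the four top-level fields, so the nested field names used here (gcsSource, dataFormat, targetField, defaultSkewThreshold, and the numeric value on ThresholdConfig) are assumptions drawn from the Vertex AI API, not from this page.

    const objectiveConfig = {
        trainingDataset: {
            gcsSource: { uris: ["gs://my-bucket/training/*.csv"] }, // assumed nested field names
            dataFormat: "csv",
            targetField: "label",
        },
        trainingPredictionSkewDetectionConfig: {
            defaultSkewThreshold: { value: 0.3 }, // assumed ThresholdConfig shape
        },
        predictionDriftDetectionConfig: {
            defaultDriftThreshold: { value: 0.3 },
        },
    };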

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfig, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigArgs

    EnableFeatureAttributes bool
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    ExplanationBaseline Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaseline
    Predictions generated by the BatchPredictionJob using the baseline dataset.
    EnableFeatureAttributes bool
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    ExplanationBaseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaseline
    Predictions generated by the BatchPredictionJob using the baseline dataset.
    enableFeatureAttributes Boolean
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    explanationBaseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaseline
    Predictions generated by the BatchPredictionJob using the baseline dataset.
    enableFeatureAttributes boolean
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    explanationBaseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaseline
    Predictions generated by the BatchPredictionJob using the baseline dataset.
    enable_feature_attributes bool
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    explanation_baseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaseline
    Predictions generated by the BatchPredictionJob using the baseline dataset.
    enableFeatureAttributes Boolean
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    explanationBaseline Property Map
    Predictions generated by the BatchPredictionJob using the baseline dataset.

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaseline, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineArgs

    Bigquery GoogleCloudAiplatformV1BigQueryDestination
    BigQuery location for BatchExplain output.
    Gcs GoogleCloudAiplatformV1GcsDestination
    Cloud Storage location for BatchExplain output.
    PredictionFormat GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat
    The storage format of the predictions generated by the BatchPrediction job.
    bigquery GoogleCloudAiplatformV1BigQueryDestination
    BigQuery location for BatchExplain output.
    gcs GoogleCloudAiplatformV1GcsDestination
    Cloud Storage location for BatchExplain output.
    predictionFormat GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat
    The storage format of the predictions generated by the BatchPrediction job.
    bigquery GoogleCloudAiplatformV1BigQueryDestination
    BigQuery location for BatchExplain output.
    gcs GoogleCloudAiplatformV1GcsDestination
    Cloud Storage location for BatchExplain output.
    predictionFormat GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat
    The storage format of the predictions generated by the BatchPrediction job.
    bigquery GoogleCloudAiplatformV1BigQueryDestination
    BigQuery location for BatchExplain output.
    gcs GoogleCloudAiplatformV1GcsDestination
    Cloud Storage location for BatchExplain output.
    prediction_format GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat
    The storage format of the predictions generated by the BatchPrediction job.
    bigquery Property Map
    BigQuery location for BatchExplain output.
    gcs Property Map
    Cloud Storage location for BatchExplain output.
    predictionFormat "PREDICTION_FORMAT_UNSPECIFIED" | "JSONL" | "BIGQUERY"
    The storage format of the predictions generated by the BatchPrediction job.
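    For example, a baseline pointing at BatchExplain output stored as JSONL in Cloud Storage might look like the following sketch; the bucket path is a placeholder, and the enum values are listed in the next section.

    const explanationBaseline = {
        gcs: { outputUriPrefix: "gs://my-bucket/batch-explain/" },
        predictionFormat: "JSONL", // one of PREDICTION_FORMAT_UNSPECIFIED, JSONL, BIGQUERY
    };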

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormatArgs

    PredictionFormatUnspecified
    PREDICTION_FORMAT_UNSPECIFIED. Should not be set.
    Jsonl
    JSONL. Predictions are in JSONL files.
    Bigquery
    BIGQUERY. Predictions are in BigQuery.
    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormatPredictionFormatUnspecified
    PREDICTION_FORMAT_UNSPECIFIED. Should not be set.
    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormatJsonl
    JSONL. Predictions are in JSONL files.
    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormatBigquery
    BIGQUERY. Predictions are in BigQuery.
    PredictionFormatUnspecified
    PREDICTION_FORMAT_UNSPECIFIED. Should not be set.
    Jsonl
    JSONL. Predictions are in JSONL files.
    Bigquery
    BIGQUERY. Predictions are in BigQuery.
    PredictionFormatUnspecified
    PREDICTION_FORMAT_UNSPECIFIED. Should not be set.
    Jsonl
    JSONL. Predictions are in JSONL files.
    Bigquery
    BIGQUERY. Predictions are in BigQuery.
    PREDICTION_FORMAT_UNSPECIFIED
    PREDICTION_FORMAT_UNSPECIFIED. Should not be set.
    JSONL
    JSONL. Predictions are in JSONL files.
    BIGQUERY
    BIGQUERY. Predictions are in BigQuery.
    "PREDICTION_FORMAT_UNSPECIFIED"
    PREDICTION_FORMAT_UNSPECIFIED. Should not be set.
    "JSONL"
    JSONL. Predictions are in JSONL files.
    "BIGQUERY"
    BIGQUERY. Predictions are in BigQuery.

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponseArgs

    Bigquery Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1BigQueryDestinationResponse
    BigQuery location for BatchExplain output.
    Gcs Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1GcsDestinationResponse
    Cloud Storage location for BatchExplain output.
    PredictionFormat string
    The storage format of the predictions generated by the BatchPrediction job.
    Bigquery GoogleCloudAiplatformV1BigQueryDestinationResponse
    BigQuery location for BatchExplain output.
    Gcs GoogleCloudAiplatformV1GcsDestinationResponse
    Cloud Storage location for BatchExplain output.
    PredictionFormat string
    The storage format of the predictions generated by the BatchPrediction job.
    bigquery GoogleCloudAiplatformV1BigQueryDestinationResponse
    BigQuery location for BatchExplain output.
    gcs GoogleCloudAiplatformV1GcsDestinationResponse
    Cloud Storage location for BatchExplain output.
    predictionFormat String
    The storage format of the predictions generated by the BatchPrediction job.
    bigquery GoogleCloudAiplatformV1BigQueryDestinationResponse
    BigQuery location for BatchExplain output.
    gcs GoogleCloudAiplatformV1GcsDestinationResponse
    Cloud Storage location for BatchExplain output.
    predictionFormat string
    The storage format of the predictions generated by the BatchPrediction job.
    bigquery GoogleCloudAiplatformV1BigQueryDestinationResponse
    BigQuery location for BatchExplain output.
    gcs GoogleCloudAiplatformV1GcsDestinationResponse
    Cloud Storage location for BatchExplain output.
    prediction_format str
    The storage format of the predictions generated by the BatchPrediction job.
    bigquery Property Map
    BigQuery location for BatchExplain output.
    gcs Property Map
    Cloud Storage location for BatchExplain output.
    predictionFormat String
    The storage format of the predictions generated by the BatchPrediction job.

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigResponse, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigResponseArgs

    EnableFeatureAttributes bool
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    ExplanationBaseline Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse
    Predictions generated by the BatchPredictionJob using the baseline dataset.
    EnableFeatureAttributes bool
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    ExplanationBaseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse
    Predictions generated by the BatchPredictionJob using the baseline dataset.
    enableFeatureAttributes Boolean
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    explanationBaseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse
    Predictions generated by the BatchPredictionJob using the baseline dataset.
    enableFeatureAttributes boolean
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    explanationBaseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse
    Predictions generated by the BatchPredictionJob using the baseline dataset.
    enable_feature_attributes bool
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    explanation_baseline GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse
    Predictions generated by the BatchPredictionJob using the baseline dataset.
    enableFeatureAttributes Boolean
    Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
    explanationBaseline Property Map
    Predictions generated by the BatchPredictionJob using the baseline dataset.

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfig, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigArgs

    AttributionScoreDriftThresholds Dictionary<string, string>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    DefaultDriftThreshold Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ThresholdConfig
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    DriftThresholds Dictionary<string, string>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.
    AttributionScoreDriftThresholds map[string]string
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    DefaultDriftThreshold GoogleCloudAiplatformV1ThresholdConfig
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    DriftThresholds map[string]string
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.
    attributionScoreDriftThresholds Map<String,String>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    defaultDriftThreshold GoogleCloudAiplatformV1ThresholdConfig
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    driftThresholds Map<String,String>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.
    attributionScoreDriftThresholds {[key: string]: string}
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    defaultDriftThreshold GoogleCloudAiplatformV1ThresholdConfig
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    driftThresholds {[key: string]: string}
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.
    attribution_score_drift_thresholds Mapping[str, str]
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    default_drift_threshold GoogleCloudAiplatformV1ThresholdConfig
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    drift_thresholds Mapping[str, str]
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.
    attributionScoreDriftThresholds Map<String>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    defaultDriftThreshold Property Map
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    driftThresholds Map<String>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.
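    The per-feature maps and the default threshold express the same limit at different granularities, so a simple drift config can rely on the default alone. A minimal TypeScript sketch with a placeholder threshold value:

    // Sketch: alert when any feature's distribution distance between
    // consecutive monitoring windows exceeds 0.3 (placeholder value).
    const predictionDriftDetectionConfig = {
        defaultDriftThreshold: { value: 0.3 },
        // driftThresholds and attributionScoreDriftThresholds can instead set
        // per-feature limits, keyed by feature name.
    };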

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponseArgs

    AttributionScoreDriftThresholds Dictionary<string, string>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    DefaultDriftThreshold Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ThresholdConfigResponse
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    DriftThresholds Dictionary<string, string>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.
    AttributionScoreDriftThresholds map[string]string
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    DefaultDriftThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    DriftThresholds map[string]string
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.
    attributionScoreDriftThresholds Map<String,String>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    defaultDriftThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    driftThresholds Map<String,String>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.
    attributionScoreDriftThresholds {[key: string]: string}
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    defaultDriftThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    driftThresholds {[key: string]: string}
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.
    attribution_score_drift_thresholds Mapping[str, str]
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    default_drift_threshold GoogleCloudAiplatformV1ThresholdConfigResponse
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    drift_thresholds Mapping[str, str]
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.
    attributionScoreDriftThresholds Map<String>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between different time windows.
    defaultDriftThreshold Property Map
    Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    driftThresholds Map<String>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between different time windows.

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponse, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigResponseArgs

    ExplanationConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigExplanationConfigResponse
    The config for integrating with Vertex Explainable AI.
    PredictionDriftDetectionConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse
    The config for drift of prediction data.
    TrainingDataset Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetResponse
    Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
    TrainingPredictionSkewDetectionConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse
    The config for skew between training data and prediction data.
    explanationConfig Property Map
    The config for integrating with Vertex Explainable AI.
    predictionDriftDetectionConfig Property Map
    The config for drift of prediction data.
    trainingDataset Property Map
    Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
    trainingPredictionSkewDetectionConfig Property Map
    The config for skew between training data and prediction data.
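    This is the response view of the objective config; on the input side the same shape is supplied per deployed model through modelDeploymentMonitoringObjectiveConfigs. A minimal TypeScript sketch of a job wiring the pieces together, where the project, endpoint, deployed model ID, bucket path, email address and the monitor interval are all placeholders for illustration only:

    import * as google_native from "@pulumi/google-native";

    // Sketch: one monitoring objective combining a training dataset with skew
    // and drift detection. All identifiers below are placeholders.
    const job = new google_native.aiplatform.v1.ModelDeploymentMonitoringJob("monitoring-job", {
        displayName: "example-monitoring-job",
        location: "us-central1",
        endpoint: "projects/my-project/locations/us-central1/endpoints/1234567890",
        loggingSamplingStrategy: { randomSampleConfig: { sampleRate: 0.8 } },
        modelDeploymentMonitoringScheduleConfig: { monitorInterval: "3600s" },
        modelMonitoringAlertConfig: {
            emailAlertConfig: { userEmails: ["alerts@example.com"] },
        },
        modelDeploymentMonitoringObjectiveConfigs: [{
            deployedModelId: "1234567890",
            objectiveConfig: {
                trainingDataset: {
                    gcsSource: { uris: ["gs://my-bucket/training.csv"] },
                    dataFormat: "csv",
                    targetField: "label",
                },
                trainingPredictionSkewDetectionConfig: {
                    defaultSkewThreshold: { value: 0.3 },
                },
                predictionDriftDetectionConfig: {
                    defaultDriftThreshold: { value: 0.3 },
                },
            },
        }],
    });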

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDataset, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetArgs

    BigquerySource Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1BigQuerySource
    The BigQuery table of the unmanaged Dataset used to train this Model.
    DataFormat string
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    Dataset string
    The resource name of the Dataset used to train this Model.
    GcsSource Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1GcsSource
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    LoggingSamplingStrategy Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1SamplingStrategy
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    TargetField string
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
    BigquerySource GoogleCloudAiplatformV1BigQuerySource
    The BigQuery table of the unmanaged Dataset used to train this Model.
    DataFormat string
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    Dataset string
    The resource name of the Dataset used to train this Model.
    GcsSource GoogleCloudAiplatformV1GcsSource
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    LoggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategy
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    TargetField string
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
    bigquerySource GoogleCloudAiplatformV1BigQuerySource
    The BigQuery table of the unmanaged Dataset used to train this Model.
    dataFormat String
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    dataset String
    The resource name of the Dataset used to train this Model.
    gcsSource GoogleCloudAiplatformV1GcsSource
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    loggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategy
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    targetField String
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
    bigquerySource GoogleCloudAiplatformV1BigQuerySource
    The BigQuery table of the unmanaged Dataset used to train this Model.
    dataFormat string
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    dataset string
    The resource name of the Dataset used to train this Model.
    gcsSource GoogleCloudAiplatformV1GcsSource
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    loggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategy
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    targetField string
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
    bigquery_source GoogleCloudAiplatformV1BigQuerySource
    The BigQuery table of the unmanaged Dataset used to train this Model.
    data_format str
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    dataset str
    The resource name of the Dataset used to train this Model.
    gcs_source GoogleCloudAiplatformV1GcsSource
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    logging_sampling_strategy GoogleCloudAiplatformV1SamplingStrategy
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    target_field str
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
    bigquerySource Property Map
    The BigQuery table of the unmanaged Dataset used to train this Model.
    dataFormat String
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    dataset String
    The resource name of the Dataset used to train this Model.
    gcsSource Property Map
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    loggingSamplingStrategy Property Map
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    targetField String
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
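    For a Cloud Storage-backed training dataset, the input reduces to a source, a data format, the target column and an optional sampling rate. A minimal TypeScript sketch with a placeholder bucket path and column name:

    // Sketch: training data read from Cloud Storage as CSV, downsampled to 50%.
    const trainingDataset = {
        gcsSource: { uris: ["gs://my-bucket/training-data.csv"] },
        dataFormat: "csv",
        targetField: "label",
        loggingSamplingStrategy: { randomSampleConfig: { sampleRate: 0.5 } },
    };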

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetResponse, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingDatasetResponseArgs

    BigquerySource Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1BigQuerySourceResponse
    The BigQuery table of the unmanaged Dataset used to train this Model.
    DataFormat string
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    Dataset string
    The resource name of the Dataset used to train this Model.
    GcsSource Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1GcsSourceResponse
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    LoggingSamplingStrategy Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1SamplingStrategyResponse
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    TargetField string
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
    BigquerySource GoogleCloudAiplatformV1BigQuerySourceResponse
    The BigQuery table of the unmanaged Dataset used to train this Model.
    DataFormat string
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    Dataset string
    The resource name of the Dataset used to train this Model.
    GcsSource GoogleCloudAiplatformV1GcsSourceResponse
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    LoggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategyResponse
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    TargetField string
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
    bigquerySource GoogleCloudAiplatformV1BigQuerySourceResponse
    The BigQuery table of the unmanaged Dataset used to train this Model.
    dataFormat String
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    dataset String
    The resource name of the Dataset used to train this Model.
    gcsSource GoogleCloudAiplatformV1GcsSourceResponse
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    loggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategyResponse
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    targetField String
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
    bigquerySource GoogleCloudAiplatformV1BigQuerySourceResponse
    The BigQuery table of the unmanaged Dataset used to train this Model.
    dataFormat string
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    dataset string
    The resource name of the Dataset used to train this Model.
    gcsSource GoogleCloudAiplatformV1GcsSourceResponse
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    loggingSamplingStrategy GoogleCloudAiplatformV1SamplingStrategyResponse
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    targetField string
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
    bigquery_source GoogleCloudAiplatformV1BigQuerySourceResponse
    The BigQuery table of the unmanaged Dataset used to train this Model.
    data_format str
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    dataset str
    The resource name of the Dataset used to train this Model.
    gcs_source GoogleCloudAiplatformV1GcsSourceResponse
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    logging_sampling_strategy GoogleCloudAiplatformV1SamplingStrategyResponse
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    target_field str
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
    bigquerySource Property Map
    The BigQuery table of the unmanaged Dataset used to train this Model.
    dataFormat String
    Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
    dataset String
    The resource name of the Dataset used to train this Model.
    gcsSource Property Map
    The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
    loggingSamplingStrategy Property Map
    Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
    targetField String
    The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfig, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigArgs

    AttributionScoreSkewThresholds Dictionary<string, string>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    DefaultSkewThreshold Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ThresholdConfig
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    SkewThresholds Dictionary<string, string>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
    AttributionScoreSkewThresholds map[string]string
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    DefaultSkewThreshold GoogleCloudAiplatformV1ThresholdConfig
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    SkewThresholds map[string]string
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
    attributionScoreSkewThresholds Map<String,String>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    defaultSkewThreshold GoogleCloudAiplatformV1ThresholdConfig
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    skewThresholds Map<String,String>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
    attributionScoreSkewThresholds {[key: string]: string}
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    defaultSkewThreshold GoogleCloudAiplatformV1ThresholdConfig
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    skewThresholds {[key: string]: string}
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
    attribution_score_skew_thresholds Mapping[str, str]
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    default_skew_threshold GoogleCloudAiplatformV1ThresholdConfig
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    skew_thresholds Mapping[str, str]
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
    attributionScoreSkewThresholds Map<String>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    defaultSkewThreshold Property Map
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    skewThresholds Map<String>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
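    As with drift detection, a skew config can lean on the default threshold when per-feature limits are not needed. A minimal TypeScript sketch with a placeholder value:

    // Sketch: flag skew when the training/serving distribution distance for
    // any monitored feature exceeds 0.2 (placeholder value).
    const trainingPredictionSkewDetectionConfig = {
        defaultSkewThreshold: { value: 0.2 },
        // skewThresholds and attributionScoreSkewThresholds can instead set
        // per-feature limits, keyed by feature name.
    };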

    GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse, GoogleCloudAiplatformV1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponseArgs

    AttributionScoreSkewThresholds Dictionary<string, string>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    DefaultSkewThreshold Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1ThresholdConfigResponse
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    SkewThresholds Dictionary<string, string>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
    AttributionScoreSkewThresholds map[string]string
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    DefaultSkewThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    SkewThresholds map[string]string
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
    attributionScoreSkewThresholds Map<String,String>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    defaultSkewThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    skewThresholds Map<String,String>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
    attributionScoreSkewThresholds {[key: string]: string}
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    defaultSkewThreshold GoogleCloudAiplatformV1ThresholdConfigResponse
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    skewThresholds {[key: string]: string}
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
    attribution_score_skew_thresholds Mapping[str, str]
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    default_skew_threshold GoogleCloudAiplatformV1ThresholdConfigResponse
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    skew_thresholds Mapping[str, str]
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
    attributionScoreSkewThresholds Map<String>
    Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
    defaultSkewThreshold Property Map
    Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
    skewThresholds Map<String>
    Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.

    GoogleCloudAiplatformV1SamplingStrategy, GoogleCloudAiplatformV1SamplingStrategyArgs

    RandomSampleConfig GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfig
    Random sample config. Will support more sampling strategies later.
    randomSampleConfig GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfig
    Random sample config. Will support more sampling strategies later.
    randomSampleConfig GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfig
    Random sample config. Will support more sampling strategies later.
    random_sample_config GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfig
    Random sample config. Will support more sampling strategies later.
    randomSampleConfig Property Map
    Random sample config. Will support more sampling strategies later.
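    Since the strategy currently only wraps the random-sample config, it is effectively a one-liner. A minimal TypeScript sketch:

    // Sketch: log roughly 80% of prediction requests for monitoring.
    const loggingSamplingStrategy = {
        randomSampleConfig: { sampleRate: 0.8 },
    };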

    GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfig, GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigArgs

    SampleRate double
    Sample rate (0, 1]
    SampleRate float64
    Sample rate (0, 1]
    sampleRate Double
    Sample rate (0, 1]
    sampleRate number
    Sample rate (0, 1]
    sample_rate float
    Sample rate (0, 1]
    sampleRate Number
    Sample rate (0, 1]

    GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigResponse, GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigResponseArgs

    SampleRate double
    Sample rate (0, 1]
    SampleRate float64
    Sample rate (0, 1]
    sampleRate Double
    Sample rate (0, 1]
    sampleRate number
    Sample rate (0, 1]
    sample_rate float
    Sample rate (0, 1]
    sampleRate Number
    Sample rate (0, 1]

    GoogleCloudAiplatformV1SamplingStrategyResponse, GoogleCloudAiplatformV1SamplingStrategyResponseArgs

    RandomSampleConfig GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigResponse
    Random sample config. Will support more sampling strategies later.
    randomSampleConfig GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigResponse
    Random sample config. Will support more sampling strategies later.
    randomSampleConfig GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigResponse
    Random sample config. Will support more sampling strategies later.
    random_sample_config GoogleCloudAiplatformV1SamplingStrategyRandomSampleConfigResponse
    Random sample config. Will support more sampling strategies later.
    randomSampleConfig Property Map
    Random sample config. Will support more sampling strategies later.

    GoogleCloudAiplatformV1ThresholdConfig, GoogleCloudAiplatformV1ThresholdConfigArgs

    Value double
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
    Value float64
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
    value Double
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
    value number
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
    value float
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
    value Number
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-infinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
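    The threshold itself is a single scalar wrapped in an object, compared against the L-infinity distance for categorical features and the Jensen–Shannon divergence for numerical ones. A minimal TypeScript sketch:

    // Sketch: trigger an alert once the distribution distance exceeds 0.3.
    const thresholdConfig = { value: 0.3 };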

    GoogleCloudAiplatformV1ThresholdConfigResponse, GoogleCloudAiplatformV1ThresholdConfigResponseArgs

    Value double
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-inifinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.
    Value float64
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-inifinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.
    value Double
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-inifinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.
    value number
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-inifinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.
    value float
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-inifinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.
    value Number
    Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For categorical feature, the distribution distance is calculated by L-inifinity norm. 2. For numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if they need to be monitored. Otherwise no alert will be triggered for that feature.

    GoogleRpcStatusResponse, GoogleRpcStatusResponseArgs

    Code int
    The status code, which should be an enum value of google.rpc.Code.
    Details List<ImmutableDictionary<string, string>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    Message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    Code int
    The status code, which should be an enum value of google.rpc.Code.
    Details []map[string]string
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    Message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code Integer
    The status code, which should be an enum value of google.rpc.Code.
    details List<Map<String,String>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message String
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code number
    The status code, which should be an enum value of google.rpc.Code.
    details {[key: string]: string}[]
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code int
    The status code, which should be an enum value of google.rpc.Code.
    details Sequence[Mapping[str, str]]
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message str
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code Number
    The status code, which should be an enum value of google.rpc.Code.
    details List<Map<String>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message String
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
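    This status appears on the job's outputs once it reaches a failed or cancelled state. A minimal TypeScript sketch, assuming the resource exposes the google.rpc.Status as an error output; jobErrorSummary is an illustrative helper name, not part of the SDK:

    import * as pulumi from "@pulumi/pulumi";
    import * as google_native from "@pulumi/google-native";

    // Sketch: summarize a monitoring job's terminal error status. Assumes the
    // job exposes the google.rpc.Status as an `error` output property.
    function jobErrorSummary(
        job: google_native.aiplatform.v1.ModelDeploymentMonitoringJob,
    ): pulumi.Output<string> {
        return job.error.apply(e => `code=${e.code} message=${e.message}`);
    }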

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0