Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.aiplatform/v1beta1.ModelDeploymentMonitoringJob
Creates a ModelDeploymentMonitoringJob. It will run periodically on a configured interval. Auto-naming is currently not supported for this resource.
Create ModelDeploymentMonitoringJob Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new ModelDeploymentMonitoringJob(name: string, args: ModelDeploymentMonitoringJobArgs, opts?: CustomResourceOptions);
@overload
def ModelDeploymentMonitoringJob(resource_name: str,
                                 args: ModelDeploymentMonitoringJobArgs,
                                 opts: Optional[ResourceOptions] = None)
@overload
def ModelDeploymentMonitoringJob(resource_name: str,
                                 opts: Optional[ResourceOptions] = None,
                                 logging_sampling_strategy: Optional[GoogleCloudAiplatformV1beta1SamplingStrategyArgs] = None,
                                 display_name: Optional[str] = None,
                                 model_deployment_monitoring_schedule_config: Optional[GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigArgs] = None,
                                 model_deployment_monitoring_objective_configs: Optional[Sequence[GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigArgs]] = None,
                                 endpoint: Optional[str] = None,
                                 analysis_instance_schema_uri: Optional[str] = None,
                                 labels: Optional[Mapping[str, str]] = None,
                                 location: Optional[str] = None,
                                 log_ttl: Optional[str] = None,
                                 encryption_spec: Optional[GoogleCloudAiplatformV1beta1EncryptionSpecArgs] = None,
                                 enable_monitoring_pipeline_logs: Optional[bool] = None,
                                 model_monitoring_alert_config: Optional[GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigArgs] = None,
                                 predict_instance_schema_uri: Optional[str] = None,
                                 project: Optional[str] = None,
                                 sample_predict_instance: Optional[Any] = None,
                                 stats_anomalies_base_directory: Optional[GoogleCloudAiplatformV1beta1GcsDestinationArgs] = None)
func NewModelDeploymentMonitoringJob(ctx *Context, name string, args ModelDeploymentMonitoringJobArgs, opts ...ResourceOption) (*ModelDeploymentMonitoringJob, error)
public ModelDeploymentMonitoringJob(string name, ModelDeploymentMonitoringJobArgs args, CustomResourceOptions? opts = null)
public ModelDeploymentMonitoringJob(String name, ModelDeploymentMonitoringJobArgs args)
public ModelDeploymentMonitoringJob(String name, ModelDeploymentMonitoringJobArgs args, CustomResourceOptions options)
type: google-native:aiplatform/v1beta1:ModelDeploymentMonitoringJob
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args ModelDeploymentMonitoringJobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args ModelDeploymentMonitoringJobArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args ModelDeploymentMonitoringJobArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args ModelDeploymentMonitoringJobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args ModelDeploymentMonitoringJobArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var google_nativeModelDeploymentMonitoringJobResource = new GoogleNative.Aiplatform.V1Beta1.ModelDeploymentMonitoringJob("google-nativeModelDeploymentMonitoringJobResource", new()
{
LoggingSamplingStrategy = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SamplingStrategyArgs
{
RandomSampleConfig = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigArgs
{
SampleRate = 0,
},
},
DisplayName = "string",
ModelDeploymentMonitoringScheduleConfig = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigArgs
{
MonitorInterval = "string",
MonitorWindow = "string",
},
ModelDeploymentMonitoringObjectiveConfigs = new[]
{
new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigArgs
{
DeployedModelId = "string",
ObjectiveConfig = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigArgs
{
ExplanationConfig = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigArgs
{
EnableFeatureAttributes = false,
ExplanationBaseline = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineArgs
{
Bigquery = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1BigQueryDestinationArgs
{
OutputUri = "string",
},
Gcs = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestinationArgs
{
OutputUriPrefix = "string",
},
PredictionFormat = GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat.PredictionFormatUnspecified,
},
},
PredictionDriftDetectionConfig = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigArgs
{
AttributionScoreDriftThresholds =
{
{ "string", "string" },
},
DefaultDriftThreshold = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ThresholdConfigArgs
{
Value = 0,
},
DriftThresholds =
{
{ "string", "string" },
},
},
TrainingDataset = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetArgs
{
BigquerySource = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1BigQuerySourceArgs
{
InputUri = "string",
},
DataFormat = "string",
Dataset = "string",
GcsSource = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsSourceArgs
{
Uris = new[]
{
"string",
},
},
LoggingSamplingStrategy = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SamplingStrategyArgs
{
RandomSampleConfig = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigArgs
{
SampleRate = 0,
},
},
TargetField = "string",
},
TrainingPredictionSkewDetectionConfig = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigArgs
{
AttributionScoreSkewThresholds =
{
{ "string", "string" },
},
DefaultSkewThreshold = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ThresholdConfigArgs
{
Value = 0,
},
SkewThresholds =
{
{ "string", "string" },
},
},
},
},
},
Endpoint = "string",
AnalysisInstanceSchemaUri = "string",
Labels =
{
{ "string", "string" },
},
Location = "string",
LogTtl = "string",
EncryptionSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpecArgs
{
KmsKeyName = "string",
},
EnableMonitoringPipelineLogs = false,
ModelMonitoringAlertConfig = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigArgs
{
EmailAlertConfig = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigArgs
{
UserEmails = new[]
{
"string",
},
},
EnableLogging = false,
NotificationChannels = new[]
{
"string",
},
},
PredictInstanceSchemaUri = "string",
Project = "string",
SamplePredictInstance = "any",
StatsAnomaliesBaseDirectory = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestinationArgs
{
OutputUriPrefix = "string",
},
});
example, err := aiplatformv1beta1.NewModelDeploymentMonitoringJob(ctx, "google-nativeModelDeploymentMonitoringJobResource", &aiplatformv1beta1.ModelDeploymentMonitoringJobArgs{
LoggingSamplingStrategy: &aiplatform.GoogleCloudAiplatformV1beta1SamplingStrategyArgs{
RandomSampleConfig: &aiplatform.GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigArgs{
SampleRate: pulumi.Float64(0),
},
},
DisplayName: pulumi.String("string"),
ModelDeploymentMonitoringScheduleConfig: &aiplatform.GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigArgs{
MonitorInterval: pulumi.String("string"),
MonitorWindow: pulumi.String("string"),
},
ModelDeploymentMonitoringObjectiveConfigs: aiplatform.GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigArray{
&aiplatform.GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigArgs{
DeployedModelId: pulumi.String("string"),
ObjectiveConfig: &aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigArgs{
ExplanationConfig: &aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigArgs{
EnableFeatureAttributes: pulumi.Bool(false),
ExplanationBaseline: &aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineArgs{
Bigquery: &aiplatform.GoogleCloudAiplatformV1beta1BigQueryDestinationArgs{
OutputUri: pulumi.String("string"),
},
Gcs: &aiplatform.GoogleCloudAiplatformV1beta1GcsDestinationArgs{
OutputUriPrefix: pulumi.String("string"),
},
PredictionFormat: aiplatformv1beta1.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormatPredictionFormatUnspecified,
},
},
PredictionDriftDetectionConfig: &aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigArgs{
AttributionScoreDriftThresholds: pulumi.StringMap{
"string": pulumi.String("string"),
},
DefaultDriftThreshold: &aiplatform.GoogleCloudAiplatformV1beta1ThresholdConfigArgs{
Value: pulumi.Float64(0),
},
DriftThresholds: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
TrainingDataset: &aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetArgs{
BigquerySource: &aiplatform.GoogleCloudAiplatformV1beta1BigQuerySourceArgs{
InputUri: pulumi.String("string"),
},
DataFormat: pulumi.String("string"),
Dataset: pulumi.String("string"),
GcsSource: &aiplatform.GoogleCloudAiplatformV1beta1GcsSourceArgs{
Uris: pulumi.StringArray{
pulumi.String("string"),
},
},
LoggingSamplingStrategy: &aiplatform.GoogleCloudAiplatformV1beta1SamplingStrategyArgs{
RandomSampleConfig: &aiplatform.GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigArgs{
SampleRate: pulumi.Float64(0),
},
},
TargetField: pulumi.String("string"),
},
TrainingPredictionSkewDetectionConfig: &aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigArgs{
AttributionScoreSkewThresholds: pulumi.StringMap{
"string": pulumi.String("string"),
},
DefaultSkewThreshold: &aiplatform.GoogleCloudAiplatformV1beta1ThresholdConfigArgs{
Value: pulumi.Float64(0),
},
SkewThresholds: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
},
},
},
Endpoint: pulumi.String("string"),
AnalysisInstanceSchemaUri: pulumi.String("string"),
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Location: pulumi.String("string"),
LogTtl: pulumi.String("string"),
EncryptionSpec: &aiplatform.GoogleCloudAiplatformV1beta1EncryptionSpecArgs{
KmsKeyName: pulumi.String("string"),
},
EnableMonitoringPipelineLogs: pulumi.Bool(false),
ModelMonitoringAlertConfig: &aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigArgs{
EmailAlertConfig: &aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigArgs{
UserEmails: pulumi.StringArray{
pulumi.String("string"),
},
},
EnableLogging: pulumi.Bool(false),
NotificationChannels: pulumi.StringArray{
pulumi.String("string"),
},
},
PredictInstanceSchemaUri: pulumi.String("string"),
Project: pulumi.String("string"),
SamplePredictInstance: pulumi.Any("any"),
StatsAnomaliesBaseDirectory: &aiplatform.GoogleCloudAiplatformV1beta1GcsDestinationArgs{
OutputUriPrefix: pulumi.String("string"),
},
})
var google_nativeModelDeploymentMonitoringJobResource = new ModelDeploymentMonitoringJob("google-nativeModelDeploymentMonitoringJobResource", ModelDeploymentMonitoringJobArgs.builder()
.loggingSamplingStrategy(GoogleCloudAiplatformV1beta1SamplingStrategyArgs.builder()
.randomSampleConfig(GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigArgs.builder()
.sampleRate(0)
.build())
.build())
.displayName("string")
.modelDeploymentMonitoringScheduleConfig(GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigArgs.builder()
.monitorInterval("string")
.monitorWindow("string")
.build())
.modelDeploymentMonitoringObjectiveConfigs(GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigArgs.builder()
.deployedModelId("string")
.objectiveConfig(GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigArgs.builder()
.explanationConfig(GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigArgs.builder()
.enableFeatureAttributes(false)
.explanationBaseline(GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineArgs.builder()
.bigquery(GoogleCloudAiplatformV1beta1BigQueryDestinationArgs.builder()
.outputUri("string")
.build())
.gcs(GoogleCloudAiplatformV1beta1GcsDestinationArgs.builder()
.outputUriPrefix("string")
.build())
.predictionFormat("PREDICTION_FORMAT_UNSPECIFIED")
.build())
.build())
.predictionDriftDetectionConfig(GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigArgs.builder()
.attributionScoreDriftThresholds(Map.of("string", "string"))
.defaultDriftThreshold(GoogleCloudAiplatformV1beta1ThresholdConfigArgs.builder()
.value(0)
.build())
.driftThresholds(Map.of("string", "string"))
.build())
.trainingDataset(GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetArgs.builder()
.bigquerySource(GoogleCloudAiplatformV1beta1BigQuerySourceArgs.builder()
.inputUri("string")
.build())
.dataFormat("string")
.dataset("string")
.gcsSource(GoogleCloudAiplatformV1beta1GcsSourceArgs.builder()
.uris("string")
.build())
.loggingSamplingStrategy(GoogleCloudAiplatformV1beta1SamplingStrategyArgs.builder()
.randomSampleConfig(GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigArgs.builder()
.sampleRate(0)
.build())
.build())
.targetField("string")
.build())
.trainingPredictionSkewDetectionConfig(GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigArgs.builder()
.attributionScoreSkewThresholds(Map.of("string", "string"))
.defaultSkewThreshold(GoogleCloudAiplatformV1beta1ThresholdConfigArgs.builder()
.value(0)
.build())
.skewThresholds(Map.of("string", "string"))
.build())
.build())
.build())
.endpoint("string")
.analysisInstanceSchemaUri("string")
.labels(Map.of("string", "string"))
.location("string")
.logTtl("string")
.encryptionSpec(GoogleCloudAiplatformV1beta1EncryptionSpecArgs.builder()
.kmsKeyName("string")
.build())
.enableMonitoringPipelineLogs(false)
.modelMonitoringAlertConfig(GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigArgs.builder()
.emailAlertConfig(GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigArgs.builder()
.userEmails("string")
.build())
.enableLogging(false)
.notificationChannels("string")
.build())
.predictInstanceSchemaUri("string")
.project("string")
.samplePredictInstance("any")
.statsAnomaliesBaseDirectory(GoogleCloudAiplatformV1beta1GcsDestinationArgs.builder()
.outputUriPrefix("string")
.build())
.build());
google_native_model_deployment_monitoring_job_resource = google_native.aiplatform.v1beta1.ModelDeploymentMonitoringJob("google-nativeModelDeploymentMonitoringJobResource",
logging_sampling_strategy={
"random_sample_config": {
"sample_rate": 0,
},
},
display_name="string",
model_deployment_monitoring_schedule_config={
"monitor_interval": "string",
"monitor_window": "string",
},
model_deployment_monitoring_objective_configs=[{
"deployed_model_id": "string",
"objective_config": {
"explanation_config": {
"enable_feature_attributes": False,
"explanation_baseline": {
"bigquery": {
"output_uri": "string",
},
"gcs": {
"output_uri_prefix": "string",
},
"prediction_format": google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat.PREDICTION_FORMAT_UNSPECIFIED,
},
},
"prediction_drift_detection_config": {
"attribution_score_drift_thresholds": {
"string": "string",
},
"default_drift_threshold": {
"value": 0,
},
"drift_thresholds": {
"string": "string",
},
},
"training_dataset": {
"bigquery_source": {
"input_uri": "string",
},
"data_format": "string",
"dataset": "string",
"gcs_source": {
"uris": ["string"],
},
"logging_sampling_strategy": {
"random_sample_config": {
"sample_rate": 0,
},
},
"target_field": "string",
},
"training_prediction_skew_detection_config": {
"attribution_score_skew_thresholds": {
"string": "string",
},
"default_skew_threshold": {
"value": 0,
},
"skew_thresholds": {
"string": "string",
},
},
},
}],
endpoint="string",
analysis_instance_schema_uri="string",
labels={
"string": "string",
},
location="string",
log_ttl="string",
encryption_spec={
"kms_key_name": "string",
},
enable_monitoring_pipeline_logs=False,
model_monitoring_alert_config={
"email_alert_config": {
"user_emails": ["string"],
},
"enable_logging": False,
"notification_channels": ["string"],
},
predict_instance_schema_uri="string",
project="string",
sample_predict_instance="any",
stats_anomalies_base_directory={
"output_uri_prefix": "string",
})
const google_nativeModelDeploymentMonitoringJobResource = new google_native.aiplatform.v1beta1.ModelDeploymentMonitoringJob("google-nativeModelDeploymentMonitoringJobResource", {
loggingSamplingStrategy: {
randomSampleConfig: {
sampleRate: 0,
},
},
displayName: "string",
modelDeploymentMonitoringScheduleConfig: {
monitorInterval: "string",
monitorWindow: "string",
},
modelDeploymentMonitoringObjectiveConfigs: [{
deployedModelId: "string",
objectiveConfig: {
explanationConfig: {
enableFeatureAttributes: false,
explanationBaseline: {
bigquery: {
outputUri: "string",
},
gcs: {
outputUriPrefix: "string",
},
predictionFormat: google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat.PredictionFormatUnspecified,
},
},
predictionDriftDetectionConfig: {
attributionScoreDriftThresholds: {
string: "string",
},
defaultDriftThreshold: {
value: 0,
},
driftThresholds: {
string: "string",
},
},
trainingDataset: {
bigquerySource: {
inputUri: "string",
},
dataFormat: "string",
dataset: "string",
gcsSource: {
uris: ["string"],
},
loggingSamplingStrategy: {
randomSampleConfig: {
sampleRate: 0,
},
},
targetField: "string",
},
trainingPredictionSkewDetectionConfig: {
attributionScoreSkewThresholds: {
string: "string",
},
defaultSkewThreshold: {
value: 0,
},
skewThresholds: {
string: "string",
},
},
},
}],
endpoint: "string",
analysisInstanceSchemaUri: "string",
labels: {
string: "string",
},
location: "string",
logTtl: "string",
encryptionSpec: {
kmsKeyName: "string",
},
enableMonitoringPipelineLogs: false,
modelMonitoringAlertConfig: {
emailAlertConfig: {
userEmails: ["string"],
},
enableLogging: false,
notificationChannels: ["string"],
},
predictInstanceSchemaUri: "string",
project: "string",
samplePredictInstance: "any",
statsAnomaliesBaseDirectory: {
outputUriPrefix: "string",
},
});
type: google-native:aiplatform/v1beta1:ModelDeploymentMonitoringJob
properties:
analysisInstanceSchemaUri: string
displayName: string
enableMonitoringPipelineLogs: false
encryptionSpec:
kmsKeyName: string
endpoint: string
labels:
string: string
location: string
logTtl: string
loggingSamplingStrategy:
randomSampleConfig:
sampleRate: 0
modelDeploymentMonitoringObjectiveConfigs:
- deployedModelId: string
objectiveConfig:
explanationConfig:
enableFeatureAttributes: false
explanationBaseline:
bigquery:
outputUri: string
gcs:
outputUriPrefix: string
predictionFormat: PREDICTION_FORMAT_UNSPECIFIED
predictionDriftDetectionConfig:
attributionScoreDriftThresholds:
string: string
defaultDriftThreshold:
value: 0
driftThresholds:
string: string
trainingDataset:
bigquerySource:
inputUri: string
dataFormat: string
dataset: string
gcsSource:
uris:
- string
loggingSamplingStrategy:
randomSampleConfig:
sampleRate: 0
targetField: string
trainingPredictionSkewDetectionConfig:
attributionScoreSkewThresholds:
string: string
defaultSkewThreshold:
value: 0
skewThresholds:
string: string
modelDeploymentMonitoringScheduleConfig:
monitorInterval: string
monitorWindow: string
modelMonitoringAlertConfig:
emailAlertConfig:
userEmails:
- string
enableLogging: false
notificationChannels:
- string
predictInstanceSchemaUri: string
project: string
samplePredictInstance: any
statsAnomaliesBaseDirectory:
outputUriPrefix: string
ModelDeploymentMonitoringJob Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
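As a sketch of that note, the `logging_sampling_strategy` input could be written either way. The snippet below assumes the `pulumi-google-native` Python SDK is installed and that it sits inside a Pulumi program; the `0.2` sample rate is an illustrative value, not a recommendation.

```python
import pulumi_google_native as google_native

# Form 1: typed argument classes, matching the constructor signature above.
strategy_args = google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1SamplingStrategyArgs(
    random_sample_config=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigArgs(
        sample_rate=0.2,
    ),
)

# Form 2: an equivalent dictionary literal with snake_case keys.
strategy_dict = {
    "random_sample_config": {
        "sample_rate": 0.2,
    },
}

# Either value may be passed as logging_sampling_strategy when
# constructing a ModelDeploymentMonitoringJob.
```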
The ModelDeploymentMonitoringJob resource accepts the following input properties:
- DisplayName string - The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. Display name of a ModelDeploymentMonitoringJob.
- Endpoint string - Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- LoggingSamplingStrategy Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SamplingStrategy - Sample Strategy for logging.
- ModelDeploymentMonitoringObjectiveConfigs List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfig> - The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
- ModelDeploymentMonitoringScheduleConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfig - Schedule config for running the monitoring job.
- AnalysisInstanceSchemaUri string - YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as prediction request/response. If there are any data type differences between predict instance and TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set as all the fields in predict instance formatted as string.
- EnableMonitoringPipelineLogs bool - If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note the logs incur cost, which are subject to Cloud Logging pricing.
- EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpec - Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
- Labels Dictionary<string, string> - The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- Location string
- LogTtl string - The TTL of BigQuery tables in user projects which stores logs. A day is the basic unit of the TTL and we take the ceil of TTL/86400 (a day). e.g. { second: 3600 } indicates ttl = 1 day.
- ModelMonitoringAlertConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfig - Alert config for model monitoring.
- PredictInstanceSchemaUri string - YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
- Project string
- SamplePredictInstance object - Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
- StatsAnomaliesBaseDirectory Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestination - Stats anomalies base folder path.
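The LogTtl rounding described above (ceil of TTL/86400, with a day as the basic unit) can be sketched as a quick check; the helper name below is illustrative, not part of any SDK:

```python
import math

def log_ttl_days(seconds: int) -> int:
    """Days of BigQuery table TTL for a given duration, per ceil(TTL / 86400)."""
    return math.ceil(seconds / 86400)

print(log_ttl_days(3600))    # the doc's example: { second: 3600 } -> 1 day
print(log_ttl_days(86400))   # exactly one day -> 1
print(log_ttl_days(90000))   # just over a day rounds up -> 2
```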
- DisplayName string - The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. Display name of a ModelDeploymentMonitoringJob.
- Endpoint string - Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- LoggingSamplingStrategy GoogleCloudAiplatformV1beta1SamplingStrategyArgs - Sample Strategy for logging.
- ModelDeploymentMonitoringObjectiveConfigs []GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigArgs - The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
- ModelDeploymentMonitoringScheduleConfig GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigArgs - Schedule config for running the monitoring job.
- AnalysisInstanceSchemaUri string - YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as prediction request/response. If there are any data type differences between predict instance and TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set as all the fields in predict instance formatted as string.
- EnableMonitoringPipelineLogs bool - If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note the logs incur cost, which are subject to Cloud Logging pricing.
- EncryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpecArgs - Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
- Labels map[string]string - The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- Location string
- LogTtl string - The TTL of BigQuery tables in user projects which stores logs. A day is the basic unit of the TTL and we take the ceil of TTL/86400 (a day). e.g. { second: 3600 } indicates ttl = 1 day.
- ModelMonitoringAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigArgs - Alert config for model monitoring.
- PredictInstanceSchemaUri string - YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
- Project string
- SamplePredictInstance interface{} - Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
- StatsAnomaliesBaseDirectory GoogleCloudAiplatformV1beta1GcsDestinationArgs - Stats anomalies base folder path.
- displayName String - The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters. Display name of a ModelDeploymentMonitoringJob.
- endpoint String - Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- loggingSamplingStrategy GoogleCloudAiplatformV1beta1SamplingStrategy - Sample Strategy for logging.
- modelDeploymentMonitoringObjectiveConfigs List<GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfig> - The config for monitoring objectives. This is a per DeployedModel config. Each DeployedModel needs to be configured separately.
- modelDeploymentMonitoringScheduleConfig GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfig - Schedule config for running the monitoring job.
- analysisInstanceSchemaUri String - YAML schema file uri describing the format of a single instance that you want Tensorflow Data Validation (TFDV) to analyze. If this field is empty, all the feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as prediction request/response. If there are any data type differences between predict instance and TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set as all the fields in predict instance formatted as string.
- enableMonitoringPipelineLogs Boolean - If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and anomalies detected. Please note the logs incur cost, which are subject to Cloud Logging pricing.
- encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec - Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all sub-resources of this ModelDeploymentMonitoringJob will be secured by this key.
- labels Map<String,String> - The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location String
- logTtl String - The TTL of BigQuery tables in user projects which stores logs. A day is the basic unit of the TTL and we take the ceil of TTL/86400 (a day). e.g. { second: 3600 } indicates ttl = 1 day.
- modelMonitoringAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfig - Alert config for model monitoring.
- predictInstanceSchemaUri String - YAML schema file uri describing the format of a single instance, which are given to format this Endpoint's prediction (and explanation). If not set, we will generate predict schema from collected predict requests.
- project String
- samplePredictInstance Object - Sample Predict instance, same format as PredictRequest.instances, this can be set as a replacement of ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, we will generate predict schema from collected predict requests.
- statsAnomaliesBaseDirectory GoogleCloudAiplatformV1beta1GcsDestination - Stats anomalies base folder path.
- displayName string - The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- endpoint string - Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- loggingSamplingStrategy GoogleCloudAiplatformV1beta1SamplingStrategy - Sampling strategy for logging.
- modelDeploymentMonitoringObjectiveConfigs GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfig[] - The config for monitoring objectives. This is a per-DeployedModel config; each DeployedModel needs to be configured separately.
- modelDeploymentMonitoringScheduleConfig GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfig - Schedule config for running the monitoring job.
- analysisInstanceSchemaUri string - YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV uses the data in the exact format (data type) of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all the fields in the predict instance formatted as strings.
- enableMonitoringPipelineLogs boolean - If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur costs, which are subject to Cloud Logging pricing.
- encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec - Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all of its sub-resources will be secured by this key.
- labels {[key: string]: string} - The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints) and can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location string
- logTtl string - The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and we take the ceiling of TTL/86400 (one day). For example, { second: 3600 } indicates a TTL of 1 day.
- modelMonitoringAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfig - Alert config for model monitoring.
- predictInstanceSchemaUri string - YAML schema file URI describing the format of a single instance, which is used to format this Endpoint's prediction (and explanation). If not set, the predict schema is generated from collected predict requests.
- project string
- samplePredictInstance any - Sample predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema is generated from collected predict requests.
- statsAnomaliesBaseDirectory GoogleCloudAiplatformV1beta1GcsDestination - Stats anomalies base folder path.
- display_name str - The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- endpoint str - Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- logging_sampling_strategy GoogleCloudAiplatformV1beta1SamplingStrategyArgs - Sampling strategy for logging.
- model_deployment_monitoring_objective_configs Sequence[GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigArgs] - The config for monitoring objectives. This is a per-DeployedModel config; each DeployedModel needs to be configured separately.
- model_deployment_monitoring_schedule_config GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigArgs - Schedule config for running the monitoring job.
- analysis_instance_schema_uri str - YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV uses the data in the exact format (data type) of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all the fields in the predict instance formatted as strings.
- enable_monitoring_pipeline_logs bool - If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur costs, which are subject to Cloud Logging pricing.
- encryption_spec GoogleCloudAiplatformV1beta1EncryptionSpecArgs - Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all of its sub-resources will be secured by this key.
- labels Mapping[str, str] - The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints) and can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location str
- log_ttl str - The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and we take the ceiling of TTL/86400 (one day). For example, { second: 3600 } indicates a TTL of 1 day.
- model_monitoring_alert_config GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigArgs - Alert config for model monitoring.
- predict_instance_schema_uri str - YAML schema file URI describing the format of a single instance, which is used to format this Endpoint's prediction (and explanation). If not set, the predict schema is generated from collected predict requests.
- project str
- sample_predict_instance Any - Sample predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema is generated from collected predict requests.
- stats_anomalies_base_directory GoogleCloudAiplatformV1beta1GcsDestinationArgs - Stats anomalies base folder path.
- displayName String - The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- endpoint String - Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- loggingSamplingStrategy Property Map - Sampling strategy for logging.
- modelDeploymentMonitoringObjectiveConfigs List<Property Map> - The config for monitoring objectives. This is a per-DeployedModel config; each DeployedModel needs to be configured separately.
- modelDeploymentMonitoringScheduleConfig Property Map - Schedule config for running the monitoring job.
- analysisInstanceSchemaUri String - YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV uses the data in the exact format (data type) of the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all the fields in the predict instance formatted as strings.
- enableMonitoringPipelineLogs Boolean - If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur costs, which are subject to Cloud Logging pricing.
- encryptionSpec Property Map - Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all of its sub-resources will be secured by this key.
- labels Map<String> - The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode codepoints) and can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location String
- logTtl String - The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and we take the ceiling of TTL/86400 (one day). For example, { second: 3600 } indicates a TTL of 1 day.
- modelMonitoringAlertConfig Property Map - Alert config for model monitoring.
- predictInstanceSchemaUri String - YAML schema file URI describing the format of a single instance, which is used to format this Endpoint's prediction (and explanation). If not set, the predict schema is generated from collected predict requests.
- project String
- samplePredictInstance Any - Sample predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema is generated from collected predict requests.
- statsAnomaliesBaseDirectory Property Map - Stats anomalies base folder path.
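The logTtl rounding rule above (a day is the basic unit; the service takes the ceiling of TTL/86400) can be sketched as follows. effective_log_ttl_days is a hypothetical helper for illustration, not part of the API:

```python
import math

def effective_log_ttl_days(ttl_seconds: int) -> int:
    """Round a TTL in seconds up to whole days, mirroring the
    service's ceil(TTL / 86400) behavior for log table TTLs."""
    return math.ceil(ttl_seconds / 86400)

# A TTL of { second: 3600 } (one hour) still yields a one-day table TTL.
print(effective_log_ttl_days(3600))   # 1
print(effective_log_ttl_days(86400))  # 1
print(effective_log_ttl_days(86401))  # 2
```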
Outputs
All input properties are implicitly available as output properties. Additionally, the ModelDeploymentMonitoringJob resource produces the following output properties:
- BigqueryTables List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringBigQueryTableResponse> - The BigQuery tables created for the job under the customer project. Customers can run their own queries and analysis. There can be at most four log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- CreateTime string - Timestamp when this ModelDeploymentMonitoringJob was created.
- Error Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleRpcStatusResponse - Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- Id string - The provider-assigned unique ID for this managed resource.
- LatestMonitoringPipelineMetadata Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse - Latest triggered monitoring pipeline metadata.
- Name string - Resource name of a ModelDeploymentMonitoringJob.
- NextScheduleTime string - Timestamp when this monitoring pipeline is scheduled to run for the next round.
- ScheduleState string - Schedule state when the monitoring job is in the Running state.
- State string - The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state is 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming it returns the state to 'RUNNING'.
- UpdateTime string - Timestamp when this ModelDeploymentMonitoringJob was most recently updated.
- BigqueryTables []GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringBigQueryTableResponse - The BigQuery tables created for the job under the customer project. Customers can run their own queries and analysis. There can be at most four log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- CreateTime string - Timestamp when this ModelDeploymentMonitoringJob was created.
- Error GoogleRpcStatusResponse - Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- Id string - The provider-assigned unique ID for this managed resource.
- LatestMonitoringPipelineMetadata GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse - Latest triggered monitoring pipeline metadata.
- Name string - Resource name of a ModelDeploymentMonitoringJob.
- NextScheduleTime string - Timestamp when this monitoring pipeline is scheduled to run for the next round.
- ScheduleState string - Schedule state when the monitoring job is in the Running state.
- State string - The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state is 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming it returns the state to 'RUNNING'.
- UpdateTime string - Timestamp when this ModelDeploymentMonitoringJob was most recently updated.
- bigqueryTables List<GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringBigQueryTableResponse> - The BigQuery tables created for the job under the customer project. Customers can run their own queries and analysis. There can be at most four log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- createTime String - Timestamp when this ModelDeploymentMonitoringJob was created.
- error GoogleRpcStatusResponse - Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id String - The provider-assigned unique ID for this managed resource.
- latestMonitoringPipelineMetadata GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse - Latest triggered monitoring pipeline metadata.
- name String - Resource name of a ModelDeploymentMonitoringJob.
- nextScheduleTime String - Timestamp when this monitoring pipeline is scheduled to run for the next round.
- scheduleState String - Schedule state when the monitoring job is in the Running state.
- state String - The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state is 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming it returns the state to 'RUNNING'.
- updateTime String - Timestamp when this ModelDeploymentMonitoringJob was most recently updated.
- bigqueryTables GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringBigQueryTableResponse[] - The BigQuery tables created for the job under the customer project. Customers can run their own queries and analysis. There can be at most four log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- createTime string - Timestamp when this ModelDeploymentMonitoringJob was created.
- error GoogleRpcStatusResponse - Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id string - The provider-assigned unique ID for this managed resource.
- latestMonitoringPipelineMetadata GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse - Latest triggered monitoring pipeline metadata.
- name string - Resource name of a ModelDeploymentMonitoringJob.
- nextScheduleTime string - Timestamp when this monitoring pipeline is scheduled to run for the next round.
- scheduleState string - Schedule state when the monitoring job is in the Running state.
- state string - The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state is 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming it returns the state to 'RUNNING'.
- updateTime string - Timestamp when this ModelDeploymentMonitoringJob was most recently updated.
- bigquery_tables Sequence[GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringBigQueryTableResponse] - The BigQuery tables created for the job under the customer project. Customers can run their own queries and analysis. There can be at most four log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- create_time str - Timestamp when this ModelDeploymentMonitoringJob was created.
- error GoogleRpcStatusResponse - Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id str - The provider-assigned unique ID for this managed resource.
- latest_monitoring_pipeline_metadata GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse - Latest triggered monitoring pipeline metadata.
- name str - Resource name of a ModelDeploymentMonitoringJob.
- next_schedule_time str - Timestamp when this monitoring pipeline is scheduled to run for the next round.
- schedule_state str - Schedule state when the monitoring job is in the Running state.
- state str - The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state is 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming it returns the state to 'RUNNING'.
- update_time str - Timestamp when this ModelDeploymentMonitoringJob was most recently updated.
- bigqueryTables List<Property Map> - The BigQuery tables created for the job under the customer project. Customers can run their own queries and analysis. There can be at most four log tables: 1. Training data logging predict request/response 2. Serving data logging predict request/response
- createTime String - Timestamp when this ModelDeploymentMonitoringJob was created.
- error Property Map - Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id String - The provider-assigned unique ID for this managed resource.
- latestMonitoringPipelineMetadata Property Map - Latest triggered monitoring pipeline metadata.
- name String - Resource name of a ModelDeploymentMonitoringJob.
- nextScheduleTime String - Timestamp when this monitoring pipeline is scheduled to run for the next round.
- scheduleState String - Schedule state when the monitoring job is in the Running state.
- state String - The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state is 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming it returns the state to 'RUNNING'.
- updateTime String - Timestamp when this ModelDeploymentMonitoringJob was most recently updated.
Supporting Types
GoogleCloudAiplatformV1beta1BigQueryDestination, GoogleCloudAiplatformV1beta1BigQueryDestinationArgs
- OutputUri string - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- OutputUri string - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri String - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri string - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- output_uri str - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri String - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
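The three accepted URI forms above can be checked with a small sketch; parse_bq_uri is a hypothetical helper for illustration, not part of the provider:

```python
def parse_bq_uri(uri: str):
    """Split a bq:// URI into (project, dataset, table).
    Dataset and table are None when the shorter forms are used."""
    if not uri.startswith("bq://"):
        raise ValueError("BigQuery URIs must start with bq://")
    parts = uri[len("bq://"):].split(".")
    if not 1 <= len(parts) <= 3 or not all(parts):
        raise ValueError("expected bq://project[.dataset[.table]]")
    return tuple(parts + [None] * (3 - len(parts)))

print(parse_bq_uri("bq://my-project"))                # ('my-project', None, None)
print(parse_bq_uri("bq://my-project.logs.requests"))  # ('my-project', 'logs', 'requests')
```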
GoogleCloudAiplatformV1beta1BigQueryDestinationResponse, GoogleCloudAiplatformV1beta1BigQueryDestinationResponseArgs
- OutputUri string - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- OutputUri string - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri String - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri string - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- output_uri str - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri String - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
GoogleCloudAiplatformV1beta1BigQuerySource, GoogleCloudAiplatformV1beta1BigQuerySourceArgs
- InputUri string - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- InputUri string - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri String - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri string - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- input_uri str - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri String - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
GoogleCloudAiplatformV1beta1BigQuerySourceResponse, GoogleCloudAiplatformV1beta1BigQuerySourceResponseArgs
- InputUri string - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- InputUri string - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri String - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri string - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- input_uri str - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri String - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
GoogleCloudAiplatformV1beta1EncryptionSpec, GoogleCloudAiplatformV1beta1EncryptionSpecArgs
- KmsKeyName string - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- KmsKeyName string - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName string - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kms_key_name str - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
GoogleCloudAiplatformV1beta1EncryptionSpecResponse, GoogleCloudAiplatformV1beta1EncryptionSpecResponseArgs
- KmsKeyName string - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- KmsKeyName string - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName string - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kms_key_name str - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
GoogleCloudAiplatformV1beta1GcsDestination, GoogleCloudAiplatformV1beta1GcsDestinationArgs
- OutputUriPrefix string - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- OutputUriPrefix string - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix string - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- output_uri_prefix str - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
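The trailing-slash behavior described above (a '/' is appended when missing) can be mirrored client-side before comparing or constructing paths. normalize_gcs_prefix is a hypothetical helper, not a provider function:

```python
def normalize_gcs_prefix(uri: str) -> str:
    """Append a trailing '/' to a gs:// directory URI when missing,
    matching how the service treats an output URI prefix."""
    if not uri.startswith("gs://"):
        raise ValueError("Cloud Storage URIs must start with gs://")
    return uri if uri.endswith("/") else uri + "/"

print(normalize_gcs_prefix("gs://my-bucket/monitoring"))   # gs://my-bucket/monitoring/
print(normalize_gcs_prefix("gs://my-bucket/monitoring/"))  # gs://my-bucket/monitoring/
```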
GoogleCloudAiplatformV1beta1GcsDestinationResponse, GoogleCloudAiplatformV1beta1GcsDestinationResponseArgs
- OutputUriPrefix string - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- OutputUriPrefix string - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix string - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- output_uri_prefix str - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
GoogleCloudAiplatformV1beta1GcsSource, GoogleCloudAiplatformV1beta1GcsSourceArgs
- Uris List<string> - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- Uris []string - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris List<String> - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris string[] - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris Sequence[str] - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris List<String> - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
GoogleCloudAiplatformV1beta1GcsSourceResponse, GoogleCloudAiplatformV1beta1GcsSourceResponseArgs
- Uris List<string> - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- Uris []string - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris List<String> - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris string[] - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris Sequence[str] - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris List<String> - Google Cloud Storage URI(-s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringBigQueryTableResponse, GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringBigQueryTableResponseArgs
- BigqueryTablePath string - The created BigQuery table to store logs. Customers can run their own queries and analyses. Format: bq://.model_deployment_monitoring_._
- LogSource string - The source of the log.
- LogType string - The type of the log.
- BigqueryTablePath string - The created BigQuery table to store logs. Customers can run their own queries and analyses. Format: bq://.model_deployment_monitoring_._
- LogSource string - The source of the log.
- LogType string - The type of the log.
- bigqueryTablePath String - The created BigQuery table to store logs. Customers can run their own queries and analyses. Format: bq://.model_deployment_monitoring_._
- logSource String - The source of the log.
- logType String - The type of the log.
- bigqueryTablePath string - The created BigQuery table to store logs. Customers can run their own queries and analyses. Format: bq://.model_deployment_monitoring_._
- logSource string - The source of the log.
- logType string - The type of the log.
- bigquery_table_path str - The created BigQuery table to store logs. Customers can run their own queries and analyses. Format: bq://.model_deployment_monitoring_._
- log_source str - The source of the log.
- log_type str - The type of the log.
- bigqueryTablePath String - The created BigQuery table to store logs. Customers can run their own queries and analyses. Format: bq://.model_deployment_monitoring_._
- logSource String - The source of the log.
- logType String - The type of the log.
GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse, GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponseArgs
- RunTime string - The time of the most recent monitoring pipeline that is related to this run.
- Status Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleRpcStatusResponse - The status of the most recent monitoring pipeline.
- RunTime string - The time of the most recent monitoring pipeline that is related to this run.
- Status GoogleRpcStatusResponse - The status of the most recent monitoring pipeline.
- runTime String - The time of the most recent monitoring pipeline that is related to this run.
- status GoogleRpcStatusResponse - The status of the most recent monitoring pipeline.
- runTime string - The time of the most recent monitoring pipeline that is related to this run.
- status GoogleRpcStatusResponse - The status of the most recent monitoring pipeline.
- run_time str - The time of the most recent monitoring pipeline that is related to this run.
- status GoogleRpcStatusResponse - The status of the most recent monitoring pipeline.
- runTime String - The time of the most recent monitoring pipeline that is related to this run.
- status Property Map - The status of the most recent monitoring pipeline.
GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfig, GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigArgs
- DeployedModelId string - The DeployedModel ID of the objective config.
- ObjectiveConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfig - The objective config for the model monitoring job of this deployed model.
- DeployedModelId string - The DeployedModel ID of the objective config.
- ObjectiveConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfig - The objective config for the model monitoring job of this deployed model.
- deployedModelId String - The DeployedModel ID of the objective config.
- objectiveConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfig - The objective config for the model monitoring job of this deployed model.
- deployedModelId string - The DeployedModel ID of the objective config.
- objectiveConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfig - The objective config for the model monitoring job of this deployed model.
- deployed_model_id str - The DeployedModel ID of the objective config.
- objective_config GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfig - The objective config for the model monitoring job of this deployed model.
- deployedModelId String - The DeployedModel ID of the objective config.
- objectiveConfig Property Map - The objective config for the model monitoring job of this deployed model.
GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigResponse, GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigResponseArgs
- DeployedModelId string - The DeployedModel ID of the objective config.
- ObjectiveConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse - The objective config for the model monitoring job of this deployed model.
- DeployedModelId string - The DeployedModel ID of the objective config.
- ObjectiveConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse - The objective config for the model monitoring job of this deployed model.
- deployedModelId String - The DeployedModel ID of the objective config.
- objectiveConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse - The objective config for the model monitoring job of this deployed model.
- deployedModelId string - The DeployedModel ID of the objective config.
- objectiveConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse - The objective config for the model monitoring job of this deployed model.
- deployed_model_id str - The DeployedModel ID of the objective config.
- objective_config GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse - The objective config for the model monitoring job of this deployed model.
- deployedModelId String - The DeployedModel ID of the objective config.
- objectiveConfig Property Map - The objective config for the model monitoring job of this deployed model.
GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfig, GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigArgs
- MonitorInterval string - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- MonitorWindow string - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- MonitorInterval string - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- MonitorWindow string - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval String - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow String - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval string - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow string - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitor_interval str - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitor_window str - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval String - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow String - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigResponse, GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigResponseArgs
- MonitorInterval string - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- MonitorWindow string - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- MonitorInterval string - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- MonitorWindow string - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval String - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow String - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval string - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow string - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitor_interval str - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitor_window str - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval String - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow String - The time window of the prediction data being included in each prediction dataset. This window specifies how long the data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. For example, if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
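As a sanity check on the monitor_window semantics described above, the cutoff arithmetic can be reproduced with plain Python (illustrative only; the service performs this aggregation server-side, and `monitoring_window` is just a local helper, not part of any API):

```python
from datetime import datetime, timedelta

def monitoring_window(cutoff: datetime, monitor_window_seconds: int):
    """Return the (start, end) interval of prediction data aggregated for one run."""
    return cutoff - timedelta(seconds=monitor_window_seconds), cutoff

# The example from the field description: cutoff 2022-01-08 14:30:00, window 3600.
start, end = monitoring_window(datetime(2022, 1, 8, 14, 30), 3600)
print(start, end)  # 2022-01-08 13:30:00 2022-01-08 14:30:00
```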
GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfig, GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigArgs
- EmailAlertConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfig - Email alert config.
- EnableLogging bool - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- NotificationChannels List<string> - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- EmailAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfig - Email alert config.
- EnableLogging bool - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- NotificationChannels []string - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfig - Email alert config.
- enableLogging Boolean - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- notificationChannels List<String> - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfig - Email alert config.
- enableLogging boolean - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- notificationChannels string[] - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- email_alert_config GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfig - Email alert config.
- enable_logging bool - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- notification_channels Sequence[str] - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig Property Map - Email alert config.
- enableLogging Boolean - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- notificationChannels List<String> - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfig, GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigArgs
- UserEmails List<string> - The email addresses to send the alert to.
- UserEmails []string - The email addresses to send the alert to.
- userEmails List<String> - The email addresses to send the alert to.
- userEmails string[] - The email addresses to send the alert to.
- user_emails Sequence[str] - The email addresses to send the alert to.
- userEmails List<String> - The email addresses to send the alert to.
GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse, GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponseArgs
- UserEmails List<string> - The email addresses to send the alert to.
- UserEmails []string - The email addresses to send the alert to.
- userEmails List<String> - The email addresses to send the alert to.
- userEmails string[] - The email addresses to send the alert to.
- user_emails Sequence[str] - The email addresses to send the alert to.
- userEmails List<String> - The email addresses to send the alert to.
GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigResponse, GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigResponseArgs
- EmailAlertConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse - Email alert config.
- EnableLogging bool - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- NotificationChannels List<string> - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- EmailAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse - Email alert config.
- EnableLogging bool - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- NotificationChannels []string - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse - Email alert config.
- enableLogging Boolean - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- notificationChannels List<String> - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse - Email alert config.
- enableLogging boolean - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- notificationChannels string[] - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- email_alert_config GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse - Email alert config.
- enable_logging bool - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- notification_channels Sequence[str] - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig Property Map - Email alert config.
- enableLogging Boolean - Dump the anomalies to Cloud Logging. The anomalies will be put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can then be routed to Pub/Sub or any other service supported by Cloud Logging.
- notificationChannels List<String> - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfig, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigArgs
- ExplanationConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfig - The config for integrating with Vertex Explainable AI.
- PredictionDriftDetectionConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfig - The config for drift of prediction data.
- TrainingDataset Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDataset - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- TrainingPredictionSkewDetectionConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfig - The config for skew between training data and prediction data.
- ExplanationConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfig - The config for integrating with Vertex Explainable AI.
- PredictionDriftDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfig - The config for drift of prediction data.
- TrainingDataset GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDataset - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- TrainingPredictionSkewDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfig - The config for skew between training data and prediction data.
- explanationConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfig - The config for integrating with Vertex Explainable AI.
- predictionDriftDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfig - The config for drift of prediction data.
- trainingDataset GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDataset - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- trainingPredictionSkewDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfig - The config for skew between training data and prediction data.
- explanationConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfig - The config for integrating with Vertex Explainable AI.
- predictionDriftDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfig - The config for drift of prediction data.
- trainingDataset GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDataset - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- trainingPredictionSkewDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfig - The config for skew between training data and prediction data.
- explanation_config GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfig - The config for integrating with Vertex Explainable AI.
- prediction_drift_detection_config GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfig - The config for drift of prediction data.
- training_dataset GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDataset - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- training_prediction_skew_detection_config GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfig - The config for skew between training data and prediction data.
- explanationConfig Property Map - The config for integrating with Vertex Explainable AI.
- predictionDriftDetectionConfig Property Map - The config for drift of prediction data.
- trainingDataset Property Map - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- trainingPredictionSkewDetectionConfig Property Map - The config for skew between training data and prediction data.
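Putting the pieces above together, a minimal Pulumi (Python) sketch of a job wiring the schedule, alerting, and per-deployed-model objective configs might look like the following. The project, location, endpoint, deployed-model ID, and email address are placeholders, and the RandomSampleConfig sub-type of the sampling strategy is assumed from the broader Vertex AI API surface rather than documented in this section:

```python
import pulumi_google_native.aiplatform.v1beta1 as aiplatform

job = aiplatform.ModelDeploymentMonitoringJob(
    "example-monitoring-job",
    display_name="example-monitoring-job",
    project="my-project",        # placeholder
    location="us-central1",      # placeholder
    endpoint="projects/my-project/locations/us-central1/endpoints/1234",  # placeholder
    # Sample 80% of prediction requests for analysis (sub-type assumed).
    logging_sampling_strategy=aiplatform.GoogleCloudAiplatformV1beta1SamplingStrategyArgs(
        random_sample_config=aiplatform.GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigArgs(
            sample_rate=0.8,
        ),
    ),
    # Run hourly; aggregate the preceding hour of prediction data on each run.
    model_deployment_monitoring_schedule_config=aiplatform.GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigArgs(
        monitor_interval="3600s",
        monitor_window="3600s",
    ),
    # Email alerts plus anomaly entries in Cloud Logging.
    model_monitoring_alert_config=aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigArgs(
        enable_logging=True,
        email_alert_config=aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigArgs(
            user_emails=["alerts@example.com"],  # placeholder
        ),
    ),
    # One objective config per deployed model on the endpoint.
    model_deployment_monitoring_objective_configs=[
        aiplatform.GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigArgs(
            deployed_model_id="1234567890",      # placeholder
            objective_config=aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigArgs(
                prediction_drift_detection_config=aiplatform.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigArgs(),
            ),
        ),
    ],
)
```

Note that a drift detection config with no per-feature thresholds, as sketched here, is only a starting point; in practice you would populate its threshold fields for the features you want monitored.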
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfig, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigArgs
- Enable
FeatureAttributes bool - Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI logs the feature attributions from the explain response and runs skew/drift detection on them.
- ExplanationBaseline GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaseline - Predictions generated by the BatchPredictionJob using the baseline dataset.

(Each SDK exposes these properties with its own casing, e.g. enable_feature_attributes in Python; in YAML the baseline is a Property Map.)
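As a hedged illustration, the explanation config above can be sketched as a plain mapping that mirrors the API's JSON shape; the Cloud Storage path below is hypothetical, not a real bucket:

```python
# Illustrative sketch of an explanationConfig payload (not a Pulumi call).
explanation_config = {
    # Log feature attributions from the explain response and run
    # skew/drift detection on them.
    "enableFeatureAttributes": True,
    # Baseline predictions produced by a BatchPredictionJob over the
    # baseline dataset, written as JSONL to Cloud Storage.
    "explanationBaseline": {
        "gcs": {"outputUriPrefix": "gs://example-bucket/batch-explain/"},
        "predictionFormat": "JSONL",
    },
}
```

The same mapping can be passed (with SDK-appropriate casing) wherever an explanation config input is expected.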
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaseline, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineArgs
- Bigquery GoogleCloudAiplatformV1beta1BigQueryDestination - BigQuery location for BatchExplain output.
- Gcs GoogleCloudAiplatformV1beta1GcsDestination - Cloud Storage location for BatchExplain output.
- PredictionFormat GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat ("PREDICTION_FORMAT_UNSPECIFIED" | "JSONL" | "BIGQUERY" in YAML) - The storage format of the predictions generated by the BatchPrediction job.

(Each SDK exposes these properties with its own casing, e.g. prediction_format in Python; in YAML the destinations are Property Maps.)
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormat, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselinePredictionFormatArgs
- PredictionFormatUnspecified ("PREDICTION_FORMAT_UNSPECIFIED") - Should not be set.
- Jsonl ("JSONL") - Predictions are in JSONL files.
- Bigquery ("BIGQUERY") - Predictions are in BigQuery.

(Each SDK exposes these values under its own constant names, prefixed with the fully qualified type name in Java and Go; YAML uses the raw string values.)
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponseArgs
- Bigquery GoogleCloudAiplatformV1beta1BigQueryDestinationResponse - BigQuery location for BatchExplain output.
- Gcs GoogleCloudAiplatformV1beta1GcsDestinationResponse - Cloud Storage location for BatchExplain output.
- PredictionFormat string - The storage format of the predictions generated by the BatchPrediction job.

(Each SDK exposes these properties with its own casing, e.g. prediction_format in Python; in YAML the destinations are Property Maps.)
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigResponse, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigResponseArgs
- EnableFeatureAttributes bool - Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI logs the feature attributions from the explain response and runs skew/drift detection on them.
- ExplanationBaseline GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse - Predictions generated by the BatchPredictionJob using the baseline dataset.

(Each SDK exposes these properties with its own casing, e.g. enable_feature_attributes in Python; in YAML the baseline is a Property Map.)
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfig, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigArgs
- AttributionScoreDriftThresholds Dictionary&lt;string, string&gt; - Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- DefaultDriftThreshold GoogleCloudAiplatformV1beta1ThresholdConfig - Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- DriftThresholds Dictionary&lt;string, string&gt; - Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.

(Each SDK uses its own map type and casing, e.g. Mapping[str, str] and drift_thresholds in Python; in YAML the threshold config is a Property Map.)
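A minimal sketch of a prediction drift detection config following the shapes listed above; the feature names are hypothetical, and the per-feature maps use the string-valued thresholds this reference lists:

```python
# Illustrative drift detection payload (not a Pulumi call).
drift_detection_config = {
    # ThresholdConfig fallback applied to every feature without a
    # per-feature entry below.
    "defaultDriftThreshold": {"value": 0.3},
    # Per-feature thresholds against the feature distribution distance
    # between time windows; keys are made-up feature names.
    "driftThresholds": {"age": "0.2", "country": "0.1"},
    # Thresholds against the attribution score distance between windows.
    "attributionScoreDriftThresholds": {"age": "0.2"},
}
```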
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponseArgs
- AttributionScoreDriftThresholds Dictionary&lt;string, string&gt; - Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- DefaultDriftThreshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- DriftThresholds Dictionary&lt;string, string&gt; - Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.

(Each SDK uses its own map type and casing, e.g. Mapping[str, str] and drift_thresholds in Python; in YAML the threshold config is a Property Map.)
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponseArgs
- ExplanationConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigResponse - The config for integrating with Vertex Explainable AI.
- PredictionDriftDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse - The config for drift of prediction data.
- TrainingDataset GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetResponse - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- TrainingPredictionSkewDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse - The config for skew between training data and prediction data.

(Each SDK exposes these properties with its own casing, e.g. training_prediction_skew_detection_config in Python; in YAML each nested config is a Property Map.)
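Taken together, the pieces above compose a single monitoring objective config. A hedged sketch follows; all paths and field names are illustrative, and the skew threshold field is an assumption not documented in this section:

```python
# Illustrative objective config assembling the sub-configs above.
objective_config = {
    # Required when skew detection is configured.
    "trainingDataset": {
        "gcsSource": {"uris": ["gs://example-bucket/train.csv"]},
        "dataFormat": "csv",
        "targetField": "label",
    },
    # Assumed field name for the skew threshold; see the skew detection
    # config type for the authoritative shape.
    "trainingPredictionSkewDetectionConfig": {
        "defaultSkewThreshold": {"value": 0.3},
    },
    "predictionDriftDetectionConfig": {
        "defaultDriftThreshold": {"value": 0.3},
    },
    "explanationConfig": {"enableFeatureAttributes": True},
}
```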
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDataset, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetArgs
- BigquerySource GoogleCloudAiplatformV1beta1BigQuerySource - The BigQuery table of the unmanaged Dataset used to train this Model.
- DataFormat string - Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" (the source file is a TFRecord file), "csv" (the source file is a CSV file), and "jsonl" (the source file is a JSONL file).
- Dataset string - The resource name of the Dataset used to train this Model.
- GcsSource GoogleCloudAiplatformV1beta1GcsSource - The Google Cloud Storage URI of the unmanaged Dataset used to train this Model.
- LoggingSamplingStrategy GoogleCloudAiplatformV1beta1SamplingStrategy - Strategy to sample data from the training dataset. If not set, the whole dataset is processed.
- TargetField string - The target field name the model is to predict. This field is excluded when doing Predict and/or Explain on the training data.

(Each SDK exposes these properties with its own casing, e.g. logging_sampling_strategy in Python; in YAML the nested configs are Property Maps.)
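A training dataset entry can be sketched as follows; the bucket URI and target field are hypothetical, and the random-sampling shape is an assumption about the SamplingStrategy type:

```python
# Illustrative trainingDataset payload (not a Pulumi call).
training_dataset = {
    # Unmanaged dataset in Cloud Storage (made-up URI).
    "gcsSource": {"uris": ["gs://example-bucket/train.jsonl"]},
    # Only meaningful for Cloud Storage input: "tf-record", "csv", "jsonl".
    "dataFormat": "jsonl",
    # Column the model predicts; excluded from Predict/Explain calls.
    "targetField": "churned",
    # Assumed shape: sample 10% of the dataset instead of all of it.
    "loggingSamplingStrategy": {"randomSampleConfig": {"sampleRate": 0.1}},
}
```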
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetResponse, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetResponseArgs
- BigquerySource GoogleCloudAiplatformV1beta1BigQuerySourceResponse - The BigQuery table of the unmanaged Dataset used to train this Model.
- DataFormat string - Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" (the source file is a TFRecord file), "csv" (the source file is a CSV file), and "jsonl" (the source file is a JSONL file).
- Dataset string - The resource name of the Dataset used to train this Model.
- GcsSource GoogleCloudAiplatformV1beta1GcsSourceResponse - The Google Cloud Storage URI of the unmanaged Dataset used to train this Model.
- LoggingSamplingStrategy GoogleCloudAiplatformV1beta1SamplingStrategyResponse - Strategy to sample data from the training dataset. If not set, the whole dataset is processed.
- TargetField string - The target field name the model is to predict. This field is excluded when doing Predict and/or Explain on the training data.

(Each SDK exposes these properties with its own casing, e.g. logging_sampling_strategy in Python; in YAML the nested configs are Property Maps.)
- data_
format str - Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- dataset str
- The resource name of the Dataset used to train this Model.
- gcs_
source GoogleCloud Aiplatform V1beta1Gcs Source Response - The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- logging_
sampling_ Googlestrategy Cloud Aiplatform V1beta1Sampling Strategy Response - Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- target_
field str - The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
- bigquery
Source Property Map - The BigQuery table of the unmanaged Dataset used to train this Model.
- data
Format String - Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- dataset String
- The resource name of the Dataset used to train this Model.
- gcs
Source Property Map - The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- logging
Sampling Property MapStrategy - Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- target
Field String - The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
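Taken together, these fields describe where the training data lives and how it is sampled for skew analysis. A minimal sketch of such a reference, written as plain Python dicts that mirror the camelCase API shape (the bucket path, target field name, and sample rate below are hypothetical illustration values, not defaults):

```python
# Hypothetical sketch: a training-dataset reference built from the fields above.
# The GCS path and target field are made-up illustration values.
training_dataset = {
    "gcsSource": {"uris": ["gs://example-bucket/training/data.csv"]},
    # dataFormat only applies to Cloud Storage inputs: "tf-record", "csv", or "jsonl".
    "dataFormat": "csv",
    # The target field is excluded when running Predict/Explain on training data.
    "targetField": "churned",
    # If loggingSamplingStrategy is omitted, the whole dataset is processed.
    "loggingSamplingStrategy": {"randomSampleConfig": {"sampleRate": 0.8}},
}

assert training_dataset["dataFormat"] in {"tf-record", "csv", "jsonl"}
```

Exactly one of `gcsSource`, `bigquerySource`, or `dataset` would typically identify the source; the dict form here is only meant to show how the fields fit together.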
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfig, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigArgs
- AttributionScoreSkewThresholds Dictionary&lt;string, string&gt; - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- DefaultSkewThreshold Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ThresholdConfig - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- SkewThresholds Dictionary&lt;string, string&gt; - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- AttributionScoreSkewThresholds map[string]string - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- DefaultSkewThreshold GoogleCloudAiplatformV1beta1ThresholdConfig - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- SkewThresholds map[string]string - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds Map&lt;String,String&gt; - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold GoogleCloudAiplatformV1beta1ThresholdConfig - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds Map&lt;String,String&gt; - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds {[key: string]: string} - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold GoogleCloudAiplatformV1beta1ThresholdConfig - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds {[key: string]: string} - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attribution_score_skew_thresholds Mapping[str, str] - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- default_skew_threshold GoogleCloudAiplatformV1beta1ThresholdConfig - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skew_thresholds Mapping[str, str] - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds Map&lt;String&gt; - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold Property Map - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds Map&lt;String&gt; - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
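A skew-detection config combining the three fields above can be sketched as plain Python dicts mirroring the camelCase API shape. The feature names and threshold values here are hypothetical illustration values; each entry carries a ThresholdConfig-style `value`:

```python
# Hypothetical sketch: training/prediction skew detection with a fallback
# threshold plus per-feature overrides. Feature names are made up.
skew_detection_config = {
    # Fallback threshold applied to any feature without a per-feature entry.
    "defaultSkewThreshold": {"value": 0.3},
    # Per-feature thresholds on feature-distribution distance.
    "skewThresholds": {
        "age": {"value": 0.2},
        "income": {"value": 0.25},
    },
    # Per-feature thresholds on attribution-score distance.
    "attributionScoreSkewThresholds": {
        "age": {"value": 0.1},
    },
}
```

Features without an entry in `skewThresholds` fall back to `defaultSkewThreshold`; a feature with no applicable threshold at all is not monitored.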
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse, GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponseArgs
- AttributionScoreSkewThresholds Dictionary&lt;string, string&gt; - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- DefaultSkewThreshold Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- SkewThresholds Dictionary&lt;string, string&gt; - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- AttributionScoreSkewThresholds map[string]string - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- DefaultSkewThreshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- SkewThresholds map[string]string - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds Map&lt;String,String&gt; - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds Map&lt;String,String&gt; - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds {[key: string]: string} - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds {[key: string]: string} - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attribution_score_skew_thresholds Mapping[str, str] - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- default_skew_threshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skew_thresholds Mapping[str, str] - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds Map&lt;String&gt; - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold Property Map - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds Map&lt;String&gt; - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
GoogleCloudAiplatformV1beta1SamplingStrategy, GoogleCloudAiplatformV1beta1SamplingStrategyArgs
- RandomSampleConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfig - Random sample config. Will support more sampling strategies later.
- RandomSampleConfig GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfig - Random sample config. Will support more sampling strategies later.
- randomSampleConfig GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfig - Random sample config. Will support more sampling strategies later.
- randomSampleConfig GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfig - Random sample config. Will support more sampling strategies later.
- random_sample_config GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfig - Random sample config. Will support more sampling strategies later.
- randomSampleConfig Property Map - Random sample config. Will support more sampling strategies later.
GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfig, GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigArgs
- SampleRate double - Sample rate (0, 1]
- SampleRate float64 - Sample rate (0, 1]
- sampleRate Double - Sample rate (0, 1]
- sampleRate number - Sample rate (0, 1]
- sample_rate float - Sample rate (0, 1]
- sampleRate Number - Sample rate (0, 1]
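The (0, 1] constraint on the sample rate is easy to misread: 0 is excluded, 1 is allowed. A small helper (hypothetical, not part of the provider SDK) makes the check explicit:

```python
def validate_sample_rate(rate: float) -> float:
    """Check that a sampling rate lies in the half-open interval (0, 1]."""
    if not 0.0 < rate <= 1.0:
        raise ValueError(f"sample rate {rate} is outside (0, 1]")
    return rate

# A rate of 1.0 means "log every prediction request"; 0.0 is rejected.
full_rate = validate_sample_rate(1.0)
```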
GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse, GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponseArgs
- SampleRate double - Sample rate (0, 1]
- SampleRate float64 - Sample rate (0, 1]
- sampleRate Double - Sample rate (0, 1]
- sampleRate number - Sample rate (0, 1]
- sample_rate float - Sample rate (0, 1]
- sampleRate Number - Sample rate (0, 1]
GoogleCloudAiplatformV1beta1SamplingStrategyResponse, GoogleCloudAiplatformV1beta1SamplingStrategyResponseArgs
- RandomSampleConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse - Random sample config. Will support more sampling strategies later.
- RandomSampleConfig GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse - Random sample config. Will support more sampling strategies later.
- randomSampleConfig GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse - Random sample config. Will support more sampling strategies later.
- randomSampleConfig GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse - Random sample config. Will support more sampling strategies later.
- random_sample_config GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse - Random sample config. Will support more sampling strategies later.
- randomSampleConfig Property Map - Random sample config. Will support more sampling strategies later.
GoogleCloudAiplatformV1beta1ThresholdConfig, GoogleCloudAiplatformV1beta1ThresholdConfigArgs
- Value double - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
- Value float64 - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
- value Double - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
- value number - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
- value float - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
- value Number - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
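Both distance measures named in the description can be computed directly over binned distributions. A minimal sketch of how a threshold value would be compared against them (the distributions and the threshold below are made-up illustration values, not service defaults):

```python
import math

def l_infinity(p, q):
    # L-infinity norm between two categorical distributions
    # (the measure used for categorical features).
    return max(abs(a - b) for a, b in zip(p, q))

def js_divergence(p, q):
    # Jensen-Shannon divergence between two binned numerical distributions
    # (the measure used for numerical features).
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(x, y):
        return sum(a * math.log(a / b) for a, b in zip(x, y) if a > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# An alert fires when the distance exceeds the configured threshold value.
training = [0.5, 0.3, 0.2]   # feature distribution at training time
serving = [0.4, 0.3, 0.3]    # feature distribution at serving time
threshold = 0.05             # hypothetical ThresholdConfig value
skew_detected = l_infinity(training, serving) > threshold
```

Identical distributions yield a distance of zero under both measures, which is why a zero threshold effectively disables monitoring for a feature.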
GoogleCloudAiplatformV1beta1ThresholdConfigResponse, GoogleCloudAiplatformV1beta1ThresholdConfigResponseArgs
- Value double - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
- Value float64 - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
- value Double - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
- value number - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
- value float - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
- value Number - Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by the L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored; otherwise no alert will be triggered for that feature.
GoogleRpcStatusResponse, GoogleRpcStatusResponseArgs
- Code int - The status code, which should be an enum value of google.rpc.Code.
- Details List&lt;ImmutableDictionary&lt;string, string&gt;&gt; - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- Code int - The status code, which should be an enum value of google.rpc.Code.
- Details []map[string]string - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Integer - The status code, which should be an enum value of google.rpc.Code.
- details List&lt;Map&lt;String,String&gt;&gt; - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code number - The status code, which should be an enum value of google.rpc.Code.
- details {[key: string]: string}[] - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message string - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code int - The status code, which should be an enum value of google.rpc.Code.
- details Sequence[Mapping[str, str]] - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message str - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Number - The status code, which should be an enum value of google.rpc.Code.
- details List&lt;Map&lt;String&gt;&gt; - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
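As a rough illustration of the google.rpc.Status shape these fields describe (the code and message below are hypothetical; code 3 corresponds to INVALID_ARGUMENT in google.rpc.Code, and 0 to OK):

```python
# Hypothetical sketch of a google.rpc.Status-shaped error payload as a plain dict.
error_status = {
    "code": 3,  # google.rpc.Code.INVALID_ARGUMENT (made-up example)
    "message": "Sample rate must be in (0, 1].",
    "details": [],  # structured detail messages, when the API provides them
}

def is_ok(status: dict) -> bool:
    # google.rpc.Code.OK is 0; any non-zero code indicates an error.
    return status["code"] == 0
```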
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0