We recommend new projects start with resources from the AWS provider.
aws-native.sagemaker.DataQualityJobDefinition
Resource Type definition for AWS::SageMaker::DataQualityJobDefinition
Create DataQualityJobDefinition Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
TypeScript:
new DataQualityJobDefinition(name: string, args: DataQualityJobDefinitionArgs, opts?: CustomResourceOptions);

Python:
@overload
def DataQualityJobDefinition(resource_name: str,
                             args: DataQualityJobDefinitionArgs,
                             opts: Optional[ResourceOptions] = None)
@overload
def DataQualityJobDefinition(resource_name: str,
                             opts: Optional[ResourceOptions] = None,
                             data_quality_app_specification: Optional[DataQualityJobDefinitionDataQualityAppSpecificationArgs] = None,
                             data_quality_job_input: Optional[DataQualityJobDefinitionDataQualityJobInputArgs] = None,
                             data_quality_job_output_config: Optional[DataQualityJobDefinitionMonitoringOutputConfigArgs] = None,
                             job_resources: Optional[DataQualityJobDefinitionMonitoringResourcesArgs] = None,
                             role_arn: Optional[str] = None,
                             data_quality_baseline_config: Optional[DataQualityJobDefinitionDataQualityBaselineConfigArgs] = None,
                             endpoint_name: Optional[str] = None,
                             job_definition_name: Optional[str] = None,
                             network_config: Optional[DataQualityJobDefinitionNetworkConfigArgs] = None,
                             stopping_condition: Optional[DataQualityJobDefinitionStoppingConditionArgs] = None,
                             tags: Optional[Sequence[_root_inputs.CreateOnlyTagArgs]] = None)

Go:
func NewDataQualityJobDefinition(ctx *Context, name string, args DataQualityJobDefinitionArgs, opts ...ResourceOption) (*DataQualityJobDefinition, error)

C#:
public DataQualityJobDefinition(string name, DataQualityJobDefinitionArgs args, CustomResourceOptions? opts = null)

Java:
public DataQualityJobDefinition(String name, DataQualityJobDefinitionArgs args)
public DataQualityJobDefinition(String name, DataQualityJobDefinitionArgs args, CustomResourceOptions options)

YAML:
type: aws-native:sagemaker:DataQualityJobDefinition
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string (resource_name str in Python) - The unique name of the resource.
- args DataQualityJobDefinitionArgs - The arguments to resource properties.
- opts CustomResourceOptions (ResourceOptions in Python; options in Java) - Bag of options to control the resource's behavior.
- ctx Context (Go only) - Context object for the current deployment.
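Putting the constructor together, a minimal Pulumi YAML sketch might look like the following. The image URI, bucket names, role ARN, and instance type are placeholders, and the nested keys under dataQualityJobOutputConfig and jobResources (monitoringOutputs, s3Output, clusterConfig) follow the underlying AWS::SageMaker::DataQualityJobDefinition CloudFormation schema rather than being documented on this page, so treat them as assumptions:

```yaml
resources:
  dataQualityJob:
    type: aws-native:sagemaker:DataQualityJobDefinition
    properties:
      dataQualityAppSpecification:
        imageUri: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-monitor-image  # placeholder image
      dataQualityJobInput:
        batchTransformInput:
          dataCapturedDestinationS3Uri: s3://my-bucket/captured-data  # placeholder bucket
          datasetFormat:
            csv:
              header: true
          localPath: /opt/ml/processing/input
      dataQualityJobOutputConfig:
        monitoringOutputs:           # assumed nesting (CloudFormation schema)
          - s3Output:
              localPath: /opt/ml/processing/output
              s3Uri: s3://my-bucket/monitoring-results  # placeholder bucket
      jobResources:
        clusterConfig:               # assumed nesting (CloudFormation schema)
          instanceCount: 1
          instanceType: ml.m5.xlarge # placeholder instance type
          volumeSizeInGb: 20
      roleArn: arn:aws:iam::123456789012:role/SageMakerMonitoringRole  # placeholder role
```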
DataQualityJobDefinition Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The DataQualityJobDefinition resource accepts the following input properties, shown here with canonical names. Each SDK uses its own casing convention, e.g. DataQualityAppSpecification in C#, dataQualityAppSpecification in TypeScript and Java, and data_quality_app_specification in Python:
- DataQualityAppSpecification DataQualityJobDefinitionDataQualityAppSpecification - Specifies the container that runs the monitoring job.
- DataQualityJobInput DataQualityJobDefinitionDataQualityJobInput - A list of inputs for the monitoring job. Currently, endpoints are supported as monitoring inputs.
- DataQualityJobOutputConfig DataQualityJobDefinitionMonitoringOutputConfig - The output configuration for monitoring jobs.
- JobResources DataQualityJobDefinitionMonitoringResources - Identifies the resources to deploy for a monitoring job.
- RoleArn string - The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
- DataQualityBaselineConfig DataQualityJobDefinitionDataQualityBaselineConfig - Configures the constraints and baselines for the monitoring job.
- EndpointName string
- JobDefinitionName string - The name for the monitoring job definition.
- NetworkConfig DataQualityJobDefinitionNetworkConfig - Specifies networking configuration for the monitoring job.
- StoppingCondition DataQualityJobDefinitionStoppingCondition - A time limit for how long the monitoring job is allowed to run before stopping.
- Tags List<CreateOnlyTag> - An array of key-value pairs to apply to this resource.
Outputs
All input properties are implicitly available as output properties. Additionally, the DataQualityJobDefinition resource produces the following output properties:
- CreationTime string - The time at which the job definition was created.
- Id string - The provider-assigned unique ID for this managed resource.
- JobDefinitionArn string - The Amazon Resource Name (ARN) of the job definition.
Supporting Types
CreateOnlyTag, CreateOnlyTagArgs
DataQualityJobDefinitionBatchTransformInput, DataQualityJobDefinitionBatchTransformInputArgs
- DataCapturedDestinationS3Uri string - A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
- DatasetFormat DataQualityJobDefinitionDatasetFormat - The dataset format for your batch transform job.
- LocalPath string - Path to the filesystem where the endpoint data is available to the container.
- ExcludeFeaturesAttribute string - Indexes or names of the features to be excluded from analysis.
- S3DataDistributionType DataQualityJobDefinitionBatchTransformInputS3DataDistributionType - Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- S3InputMode DataQualityJobDefinitionBatchTransformInputS3InputMode - Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
DataQualityJobDefinitionBatchTransformInputS3DataDistributionType, DataQualityJobDefinitionBatchTransformInputS3DataDistributionTypeArgs
- FullyReplicated - FullyReplicated
- ShardedByS3Key - ShardedByS3Key
Each SDK exposes these values in its own enum convention, e.g. FULLY_REPLICATED in Python or the string "FullyReplicated" in YAML.
DataQualityJobDefinitionBatchTransformInputS3InputMode, DataQualityJobDefinitionBatchTransformInputS3InputModeArgs
- Pipe - Pipe
- File - File
Each SDK exposes these values in its own enum convention, e.g. PIPE in Python or the string "Pipe" in YAML.
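A batchTransformInput block combining the two enums above might look like the following Pulumi YAML sketch; the bucket name and paths are placeholders:

```yaml
batchTransformInput:
  dataCapturedDestinationS3Uri: s3://my-bucket/captured-data  # placeholder bucket
  datasetFormat:
    csv:
      header: true                   # CSV file includes a header row
  localPath: /opt/ml/processing/input
  s3DataDistributionType: ShardedByS3Key  # default is FullyReplicated
  s3InputMode: Pipe                       # default is File
```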
DataQualityJobDefinitionClusterConfig, DataQualityJobDefinitionClusterConfigArgs
- InstanceCount int - The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
- InstanceType string - The ML compute instance type for the processing job.
- VolumeSizeInGb int - The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
- VolumeKmsKeyId string - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
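As a Pulumi YAML sketch, a jobResources block using this cluster configuration might look like the following. The instance type and KMS key are placeholders, and the clusterConfig nesting under jobResources is assumed from the type name DataQualityJobDefinitionClusterConfig rather than stated on this page:

```yaml
jobResources:
  clusterConfig:                     # assumed nesting under MonitoringResources
    instanceCount: 1                 # default value
    instanceType: ml.m5.xlarge       # placeholder instance type
    volumeSizeInGb: 20
    volumeKmsKeyId: alias/my-kms-key # optional; placeholder key
```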
DataQualityJobDefinitionConstraintsResource, DataQualityJobDefinitionConstraintsResourceArgs
- S3Uri string - The Amazon S3 URI for the baseline constraints file that the current monitoring job should be validated against.
DataQualityJobDefinitionCsv, DataQualityJobDefinitionCsvArgs
- Header bool - A Boolean flag indicating whether the given CSV file has a header row.
DataQualityJobDefinitionDataQualityAppSpecification, DataQualityJobDefinitionDataQualityAppSpecificationArgs
- ImageUri string - The container image to be run by the monitoring job.
- ContainerArguments List<string> - An array of arguments for the container used to run the monitoring job.
- ContainerEntrypoint List<string> - Specifies the entrypoint for a container used to run the monitoring job.
- Environment object - Sets the environment variables in the Docker container.
- PostAnalyticsProcessorSourceUri string - An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first-party) containers.
- RecordPreprocessorSourceUri string - An Amazon S3 URI to a script that is called per row prior to running analysis. It can Base64-decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first-party) containers.
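A dataQualityAppSpecification block using these properties might look like the following Pulumi YAML sketch; the image URI, entrypoint, arguments, and environment values are all placeholders:

```yaml
dataQualityAppSpecification:
  imageUri: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-monitor-image  # placeholder image
  containerEntrypoint:
    - python3                        # placeholder entrypoint
  containerArguments:
    - --log-level                    # placeholder arguments
    - info
  environment:
    LOG_LEVEL: INFO                  # placeholder environment variable
```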
DataQualityJobDefinitionDataQualityBaselineConfig, DataQualityJobDefinitionDataQualityBaselineConfigArgs
- BaseliningJobName string - The name of the job that performs baselining for the data quality monitoring job.
- ConstraintsResource Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionConstraintsResource - The constraints resource for a monitoring job.
- StatisticsResource Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionStatisticsResource - Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
- BaseliningJobName string - The name of the job that performs baselining for the data quality monitoring job.
- ConstraintsResource DataQualityJobDefinitionConstraintsResource - The constraints resource for a monitoring job.
- StatisticsResource DataQualityJobDefinitionStatisticsResource - Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
- baseliningJobName String - The name of the job that performs baselining for the data quality monitoring job.
- constraintsResource DataQualityJobDefinitionConstraintsResource - The constraints resource for a monitoring job.
- statisticsResource DataQualityJobDefinitionStatisticsResource - Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
- baseliningJobName string - The name of the job that performs baselining for the data quality monitoring job.
- constraintsResource DataQualityJobDefinitionConstraintsResource - The constraints resource for a monitoring job.
- statisticsResource DataQualityJobDefinitionStatisticsResource - Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
- baselining_job_name str - The name of the job that performs baselining for the data quality monitoring job.
- constraints_resource DataQualityJobDefinitionConstraintsResource - The constraints resource for a monitoring job.
- statistics_resource DataQualityJobDefinitionStatisticsResource - Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
- baseliningJobName String - The name of the job that performs baselining for the data quality monitoring job.
- constraintsResource Property Map - The constraints resource for a monitoring job.
- statisticsResource Property Map - Configuration for monitoring constraints and monitoring statistics. These baseline resources are compared against the results of the current job from the series of jobs scheduled to collect data periodically.
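As a concrete shape, a baseline config pointing at previously generated baseline files can be sketched as a plain Python dict. The job name and S3 URIs below are hypothetical placeholders.

```python
# Sketch of a dataQualityBaselineConfig property map. The constraints and
# statistics resources each take an S3 URI to a baseline file (see the
# ConstraintsResource and StatisticsResource types on this page).
# All names and URIs are hypothetical placeholders.
data_quality_baseline_config = {
    "baseliningJobName": "my-baselining-job",
    "constraintsResource": {"s3Uri": "s3://my-bucket/baseline/constraints.json"},
    "statisticsResource": {"s3Uri": "s3://my-bucket/baseline/statistics.json"},
}
```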
DataQualityJobDefinitionDataQualityJobInput, DataQualityJobDefinitionDataQualityJobInputArgs
- BatchTransformInput Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionBatchTransformInput - Input object for the batch transform job.
- EndpointInput Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionEndpointInput - Input object for the endpoint.
- BatchTransformInput DataQualityJobDefinitionBatchTransformInput - Input object for the batch transform job.
- EndpointInput DataQualityJobDefinitionEndpointInput - Input object for the endpoint.
- batchTransformInput DataQualityJobDefinitionBatchTransformInput - Input object for the batch transform job.
- endpointInput DataQualityJobDefinitionEndpointInput - Input object for the endpoint.
- batchTransformInput DataQualityJobDefinitionBatchTransformInput - Input object for the batch transform job.
- endpointInput DataQualityJobDefinitionEndpointInput - Input object for the endpoint.
- batch_transform_input DataQualityJobDefinitionBatchTransformInput - Input object for the batch transform job.
- endpoint_input DataQualityJobDefinitionEndpointInput - Input object for the endpoint.
- batchTransformInput Property Map - Input object for the batch transform job.
- endpointInput Property Map - Input object for the endpoint.
DataQualityJobDefinitionDatasetFormat, DataQualityJobDefinitionDatasetFormatArgs
- csv Property Map
- json Property Map
- parquet Boolean
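The dataset format selects exactly one of the three representations above. A minimal sketch declaring JSON Lines input via the `json.line` flag:

```python
# Sketch of a datasetFormat property map: set exactly one of csv, json, or
# parquet. Here JSON Lines input is declared via the json.line boolean flag.
dataset_format = {"json": {"line": True}}
```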
DataQualityJobDefinitionEndpointInput, DataQualityJobDefinitionEndpointInputArgs
- EndpointName string - An endpoint in the customer's account which has DataCaptureConfig enabled.
- LocalPath string - Path to the filesystem where the endpoint data is available to the container.
- ExcludeFeaturesAttribute string - Indexes or names of the features to be excluded from analysis.
- S3DataDistributionType Pulumi.AwsNative.SageMaker.DataQualityJobDefinitionEndpointInputS3DataDistributionType - Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- S3InputMode Pulumi.AwsNative.SageMaker.DataQualityJobDefinitionEndpointInputS3InputMode - Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- EndpointName string - An endpoint in the customer's account which has DataCaptureConfig enabled.
- LocalPath string - Path to the filesystem where the endpoint data is available to the container.
- ExcludeFeaturesAttribute string - Indexes or names of the features to be excluded from analysis.
- S3DataDistributionType DataQualityJobDefinitionEndpointInputS3DataDistributionType - Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- S3InputMode DataQualityJobDefinitionEndpointInputS3InputMode - Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- endpointName String - An endpoint in the customer's account which has DataCaptureConfig enabled.
- localPath String - Path to the filesystem where the endpoint data is available to the container.
- excludeFeaturesAttribute String - Indexes or names of the features to be excluded from analysis.
- s3DataDistributionType DataQualityJobDefinitionEndpointInputS3DataDistributionType - Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3InputMode DataQualityJobDefinitionEndpointInputS3InputMode - Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- endpointName string - An endpoint in the customer's account which has DataCaptureConfig enabled.
- localPath string - Path to the filesystem where the endpoint data is available to the container.
- excludeFeaturesAttribute string - Indexes or names of the features to be excluded from analysis.
- s3DataDistributionType DataQualityJobDefinitionEndpointInputS3DataDistributionType - Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3InputMode DataQualityJobDefinitionEndpointInputS3InputMode - Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- endpoint_name str - An endpoint in the customer's account which has DataCaptureConfig enabled.
- local_path str - Path to the filesystem where the endpoint data is available to the container.
- exclude_features_attribute str - Indexes or names of the features to be excluded from analysis.
- s3_data_distribution_type DataQualityJobDefinitionEndpointInputS3DataDistributionType - Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3_input_mode DataQualityJobDefinitionEndpointInputS3InputMode - Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
- endpointName String - An endpoint in the customer's account which has DataCaptureConfig enabled.
- localPath String - Path to the filesystem where the endpoint data is available to the container.
- excludeFeaturesAttribute String - Indexes or names of the features to be excluded from analysis.
- s3DataDistributionType "FullyReplicated" | "ShardedByS3Key" - Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
- s3InputMode "Pipe" | "File" - Whether the Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
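Putting the five fields together, an endpoint input can be sketched as a plain Python dict. The endpoint name and container path below are hypothetical placeholders; the two enum fields show their documented defaults.

```python
# Sketch of an endpointInput property map, using the fields listed above.
# Endpoint name and paths are hypothetical placeholders.
endpoint_input = {
    "endpointName": "my-endpoint",            # must have DataCaptureConfig enabled
    "localPath": "/opt/ml/processing/input",  # where endpoint data is mounted in the container
    "excludeFeaturesAttribute": "0,1",        # feature indexes/names to exclude from analysis
    "s3DataDistributionType": "FullyReplicated",  # the default; or "ShardedByS3Key"
    "s3InputMode": "File",                        # the default; "Pipe" for large datasets
}
```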
DataQualityJobDefinitionEndpointInputS3DataDistributionType, DataQualityJobDefinitionEndpointInputS3DataDistributionTypeArgs
- FullyReplicated - FullyReplicated
- ShardedByS3Key - ShardedByS3Key
- DataQualityJobDefinitionEndpointInputS3DataDistributionTypeFullyReplicated - FullyReplicated
- DataQualityJobDefinitionEndpointInputS3DataDistributionTypeShardedByS3Key - ShardedByS3Key
- FullyReplicated - FullyReplicated
- ShardedByS3Key - ShardedByS3Key
- FullyReplicated - FullyReplicated
- ShardedByS3Key - ShardedByS3Key
- FULLY_REPLICATED - FullyReplicated
- SHARDED_BY_S3_KEY - ShardedByS3Key
- "FullyReplicated" - FullyReplicated
- "ShardedByS3Key" - ShardedByS3Key
DataQualityJobDefinitionEndpointInputS3InputMode, DataQualityJobDefinitionEndpointInputS3InputModeArgs
- Pipe
- Pipe
- File
- File
- DataQualityJobDefinitionEndpointInputS3InputModePipe - Pipe
- DataQualityJobDefinitionEndpointInputS3InputModeFile - File
- Pipe
- Pipe
- File
- File
- Pipe
- Pipe
- File
- File
- PIPE
- Pipe
- FILE
- File
- "Pipe"
- Pipe
- "File"
- File
DataQualityJobDefinitionJson, DataQualityJobDefinitionJsonArgs
- Line bool - A boolean flag indicating if it is JSON line format.
- Line bool - A boolean flag indicating if it is JSON line format.
- line Boolean - A boolean flag indicating if it is JSON line format.
- line boolean - A boolean flag indicating if it is JSON line format.
- line bool - A boolean flag indicating if it is JSON line format.
- line Boolean - A boolean flag indicating if it is JSON line format.
DataQualityJobDefinitionMonitoringOutput, DataQualityJobDefinitionMonitoringOutputArgs
- S3Output Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionS3Output - The Amazon S3 storage location where the results of a monitoring job are saved.
- S3Output DataQualityJobDefinitionS3Output - The Amazon S3 storage location where the results of a monitoring job are saved.
- s3Output DataQualityJobDefinitionS3Output - The Amazon S3 storage location where the results of a monitoring job are saved.
- s3Output DataQualityJobDefinitionS3Output - The Amazon S3 storage location where the results of a monitoring job are saved.
- s3_output DataQualityJobDefinitionS3Output - The Amazon S3 storage location where the results of a monitoring job are saved.
- s3Output Property Map - The Amazon S3 storage location where the results of a monitoring job are saved.
DataQualityJobDefinitionMonitoringOutputConfig, DataQualityJobDefinitionMonitoringOutputConfigArgs
- MonitoringOutputs List<Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionMonitoringOutput> - Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- KmsKeyId string - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- MonitoringOutputs []DataQualityJobDefinitionMonitoringOutput - Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- KmsKeyId string - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- monitoringOutputs List<DataQualityJobDefinitionMonitoringOutput> - Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- kmsKeyId String - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- monitoringOutputs DataQualityJobDefinitionMonitoringOutput[] - Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- kmsKeyId string - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- monitoring_outputs Sequence[DataQualityJobDefinitionMonitoringOutput] - Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- kms_key_id str - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
- monitoringOutputs List<Property Map> - Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
- kmsKeyId String - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
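Combining the output config with the S3Output type documented below, a full output configuration can be sketched as a plain Python dict. The bucket, paths, and KMS key ARN are hypothetical placeholders.

```python
# Sketch of a dataQualityJobOutputConfig property map: a list of monitoring
# outputs, each wrapping an S3Output, plus an optional KMS key for encryption
# at rest. All URIs and ARNs are hypothetical placeholders.
data_quality_job_output_config = {
    "monitoringOutputs": [
        {
            "s3Output": {
                "s3Uri": "s3://my-bucket/monitoring/output",
                "localPath": "/opt/ml/processing/output",
                "s3UploadMode": "EndOfJob",  # or "Continuous"
            }
        }
    ],
    "kmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/example",  # hypothetical
}
```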
DataQualityJobDefinitionMonitoringResources, DataQualityJobDefinitionMonitoringResourcesArgs
- ClusterConfig Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionClusterConfig - The configuration for the cluster resources used to run the processing job.
- ClusterConfig DataQualityJobDefinitionClusterConfig - The configuration for the cluster resources used to run the processing job.
- clusterConfig DataQualityJobDefinitionClusterConfig - The configuration for the cluster resources used to run the processing job.
- clusterConfig DataQualityJobDefinitionClusterConfig - The configuration for the cluster resources used to run the processing job.
- cluster_config DataQualityJobDefinitionClusterConfig - The configuration for the cluster resources used to run the processing job.
- clusterConfig Property Map - The configuration for the cluster resources used to run the processing job.
DataQualityJobDefinitionNetworkConfig, DataQualityJobDefinitionNetworkConfigArgs
- EnableInterContainerTrafficEncryption bool - Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- EnableNetworkIsolation bool - Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- VpcConfig Pulumi.AwsNative.SageMaker.Inputs.DataQualityJobDefinitionVpcConfig - Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
- EnableInterContainerTrafficEncryption bool - Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- EnableNetworkIsolation bool - Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- VpcConfig DataQualityJobDefinitionVpcConfig - Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
- enableInterContainerTrafficEncryption Boolean - Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- enableNetworkIsolation Boolean - Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- vpcConfig DataQualityJobDefinitionVpcConfig - Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
- enableInterContainerTrafficEncryption boolean - Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- enableNetworkIsolation boolean - Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- vpcConfig DataQualityJobDefinitionVpcConfig - Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
- enable_inter_container_traffic_encryption bool - Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- enable_network_isolation bool - Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- vpc_config DataQualityJobDefinitionVpcConfig - Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
- enableInterContainerTrafficEncryption Boolean - Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
- enableNetworkIsolation Boolean - Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
- vpcConfig Property Map - Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
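Taken together, a network configuration that keeps inter-container traffic encrypted while running inside a VPC can be sketched as follows. The security group and subnet IDs are hypothetical placeholders.

```python
# Sketch of a networkConfig property map with the three fields listed above.
# Security group and subnet IDs are hypothetical placeholders.
network_config = {
    "enableInterContainerTrafficEncryption": True,  # more secure; processing may take longer
    "enableNetworkIsolation": False,                # containers may make network calls
    "vpcConfig": {
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "subnets": ["subnet-0123456789abcdef0"],
    },
}
```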
DataQualityJobDefinitionS3Output, DataQualityJobDefinitionS3OutputArgs
- LocalPath string - The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- S3Uri string - A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- S3UploadMode Pulumi.AwsNative.SageMaker.DataQualityJobDefinitionS3OutputS3UploadMode - Whether to upload the results of the monitoring job continuously or after the job completes.
- LocalPath string - The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- S3Uri string - A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- S3UploadMode DataQualityJobDefinitionS3OutputS3UploadMode - Whether to upload the results of the monitoring job continuously or after the job completes.
- localPath String - The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- s3Uri String - A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- s3UploadMode DataQualityJobDefinitionS3OutputS3UploadMode - Whether to upload the results of the monitoring job continuously or after the job completes.
- localPath string - The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- s3Uri string - A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- s3UploadMode DataQualityJobDefinitionS3OutputS3UploadMode - Whether to upload the results of the monitoring job continuously or after the job completes.
- local_path str - The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- s3_uri str - A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- s3_upload_mode DataQualityJobDefinitionS3OutputS3UploadMode - Whether to upload the results of the monitoring job continuously or after the job completes.
- localPath String - The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
- s3Uri String - A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
- s3UploadMode "Continuous" | "EndOfJob" - Whether to upload the results of the monitoring job continuously or after the job completes.
DataQualityJobDefinitionS3OutputS3UploadMode, DataQualityJobDefinitionS3OutputS3UploadModeArgs
- Continuous - Continuous
- EndOfJob - EndOfJob
- DataQualityJobDefinitionS3OutputS3UploadModeContinuous - Continuous
- DataQualityJobDefinitionS3OutputS3UploadModeEndOfJob - EndOfJob
- Continuous - Continuous
- EndOfJob - EndOfJob
- Continuous - Continuous
- EndOfJob - EndOfJob
- CONTINUOUS - Continuous
- END_OF_JOB - EndOfJob
- "Continuous" - Continuous
- "EndOfJob" - EndOfJob
DataQualityJobDefinitionStatisticsResource, DataQualityJobDefinitionStatisticsResourceArgs
- S3Uri string - The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- S3Uri string - The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- s3Uri String - The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- s3Uri string - The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- s3_uri str - The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
- s3Uri String - The Amazon S3 URI for the baseline statistics file in Amazon S3 that the current monitoring job should be validated against.
DataQualityJobDefinitionStoppingCondition, DataQualityJobDefinitionStoppingConditionArgs
- MaxRuntimeInSeconds int - The maximum runtime allowed in seconds.
- MaxRuntimeInSeconds int - The maximum runtime allowed in seconds.
- maxRuntimeInSeconds Integer - The maximum runtime allowed in seconds.
- maxRuntimeInSeconds number - The maximum runtime allowed in seconds.
- max_runtime_in_seconds int - The maximum runtime allowed in seconds.
- maxRuntimeInSeconds Number - The maximum runtime allowed in seconds.
DataQualityJobDefinitionVpcConfig, DataQualityJobDefinitionVpcConfigArgs
- SecurityGroupIds List<string> - The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- Subnets List<string> - The ID of the subnets in the VPC to which you want to connect to your monitoring jobs.
- SecurityGroupIds []string - The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- Subnets []string - The ID of the subnets in the VPC to which you want to connect to your monitoring jobs.
- securityGroupIds List<String> - The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- subnets List<String> - The ID of the subnets in the VPC to which you want to connect to your monitoring jobs.
- securityGroupIds string[] - The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- subnets string[] - The ID of the subnets in the VPC to which you want to connect to your monitoring jobs.
- security_group_ids Sequence[str] - The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- subnets Sequence[str] - The ID of the subnets in the VPC to which you want to connect to your monitoring jobs.
- securityGroupIds List<String> - The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
- subnets List<String> - The ID of the subnets in the VPC to which you want to connect to your monitoring jobs.
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0