We recommend new projects start with resources from the AWS provider.
aws-native.sagemaker.InferenceExperiment
Resource Type definition for AWS::SageMaker::InferenceExperiment
Create InferenceExperiment Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new InferenceExperiment(name: string, args: InferenceExperimentArgs, opts?: CustomResourceOptions);
@overload
def InferenceExperiment(resource_name: str,
args: InferenceExperimentArgs,
opts: Optional[ResourceOptions] = None)
@overload
def InferenceExperiment(resource_name: str,
opts: Optional[ResourceOptions] = None,
model_variants: Optional[Sequence[InferenceExperimentModelVariantConfigArgs]] = None,
type: Optional[InferenceExperimentType] = None,
role_arn: Optional[str] = None,
endpoint_name: Optional[str] = None,
name: Optional[str] = None,
kms_key: Optional[str] = None,
data_storage_config: Optional[InferenceExperimentDataStorageConfigArgs] = None,
desired_state: Optional[InferenceExperimentDesiredState] = None,
schedule: Optional[InferenceExperimentScheduleArgs] = None,
shadow_mode_config: Optional[InferenceExperimentShadowModeConfigArgs] = None,
status_reason: Optional[str] = None,
tags: Optional[Sequence[_root_inputs.TagArgs]] = None,
description: Optional[str] = None)
func NewInferenceExperiment(ctx *Context, name string, args InferenceExperimentArgs, opts ...ResourceOption) (*InferenceExperiment, error)
public InferenceExperiment(string name, InferenceExperimentArgs args, CustomResourceOptions? opts = null)
public InferenceExperiment(String name, InferenceExperimentArgs args)
public InferenceExperiment(String name, InferenceExperimentArgs args, CustomResourceOptions options)
type: aws-native:sagemaker:InferenceExperiment
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
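For orientation, here is a hedged YAML sketch of a complete declaration. All names and values are illustrative (the role ARN, endpoint, model, and variant names are assumptions, not from this page), and the fields inside `shadowModelVariants` follow the `InferenceExperimentShadowModelVariantConfig` schema, which is not shown on this page:

```yaml
type: aws-native:sagemaker:InferenceExperiment
properties:
  type: ShadowMode
  roleArn: arn:aws:iam::123456789012:role/my-sagemaker-role  # illustrative
  endpointName: my-endpoint                                  # illustrative
  modelVariants:
    - modelName: my-model
      variantName: shadow-variant
      infrastructureConfig:
        infrastructureType: RealTimeInference
        realTimeInferenceConfig:
          instanceType: ml.m5.xlarge
          instanceCount: 1
  shadowModeConfig:
    sourceModelVariantName: production-variant
    shadowModelVariants: []   # shadow variant entries; schema not shown on this page
```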
Parameters
- name string
- The unique name of the resource.
- args InferenceExperimentArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args InferenceExperimentArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args InferenceExperimentArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args InferenceExperimentArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args InferenceExperimentArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
InferenceExperiment Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The InferenceExperiment resource accepts the following input properties:
- EndpointName string - The name of the endpoint.
- ModelVariants List<Pulumi.AwsNative.SageMaker.Inputs.InferenceExperimentModelVariantConfig> - An array of ModelVariantConfig objects. Each ModelVariantConfig object in the array describes the infrastructure configuration for the corresponding variant.
- RoleArn string - The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to access model artifacts and container images, and manage Amazon SageMaker Inference endpoints for model deployment.
- Type Pulumi.AwsNative.SageMaker.InferenceExperimentType - The type of the inference experiment that you want to run.
- DataStorageConfig Pulumi.AwsNative.SageMaker.Inputs.InferenceExperimentDataStorageConfig - The Amazon S3 location and configuration for storing inference request and response data.
- Description string - The description of the inference experiment.
- DesiredState Pulumi.AwsNative.SageMaker.InferenceExperimentDesiredState - The desired state of the experiment after starting or stopping operation.
- KmsKey string - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
- Name string - The name for the inference experiment.
- Schedule Pulumi.AwsNative.SageMaker.Inputs.InferenceExperimentSchedule - The duration for which the inference experiment ran or will run. The maximum duration that you can set for an inference experiment is 30 days.
- ShadowModeConfig Pulumi.AwsNative.SageMaker.Inputs.InferenceExperimentShadowModeConfig - The configuration of ShadowMode inference experiment type, which shows the production variant that takes all the inference requests, and the shadow variant to which Amazon SageMaker replicates a percentage of the inference requests. For the shadow variant it also shows the percentage of requests that Amazon SageMaker replicates.
- StatusReason string - The error message or client-specified reason from the StopInferenceExperiment API, that explains the status of the inference experiment.
- Tags List<Pulumi.AwsNative.Inputs.Tag> - An array of key-value pairs to apply to this resource.
- EndpointName string - The name of the endpoint.
- ModelVariants []InferenceExperimentModelVariantConfigArgs - An array of ModelVariantConfig objects. Each ModelVariantConfig object in the array describes the infrastructure configuration for the corresponding variant.
- RoleArn string - The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to access model artifacts and container images, and manage Amazon SageMaker Inference endpoints for model deployment.
- Type InferenceExperimentType - The type of the inference experiment that you want to run.
- DataStorageConfig InferenceExperimentDataStorageConfigArgs - The Amazon S3 location and configuration for storing inference request and response data.
- Description string - The description of the inference experiment.
- DesiredState InferenceExperimentDesiredState - The desired state of the experiment after starting or stopping operation.
- KmsKey string - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
- Name string - The name for the inference experiment.
- Schedule InferenceExperimentScheduleArgs - The duration for which the inference experiment ran or will run. The maximum duration that you can set for an inference experiment is 30 days.
- ShadowModeConfig InferenceExperimentShadowModeConfigArgs - The configuration of ShadowMode inference experiment type, which shows the production variant that takes all the inference requests, and the shadow variant to which Amazon SageMaker replicates a percentage of the inference requests. For the shadow variant it also shows the percentage of requests that Amazon SageMaker replicates.
- StatusReason string - The error message or client-specified reason from the StopInferenceExperiment API, that explains the status of the inference experiment.
- Tags []TagArgs - An array of key-value pairs to apply to this resource.
- endpointName String - The name of the endpoint.
- modelVariants List<InferenceExperimentModelVariantConfig> - An array of ModelVariantConfig objects. Each ModelVariantConfig object in the array describes the infrastructure configuration for the corresponding variant.
- roleArn String - The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to access model artifacts and container images, and manage Amazon SageMaker Inference endpoints for model deployment.
- type InferenceExperimentType - The type of the inference experiment that you want to run.
- dataStorageConfig InferenceExperimentDataStorageConfig - The Amazon S3 location and configuration for storing inference request and response data.
- description String - The description of the inference experiment.
- desiredState InferenceExperimentDesiredState - The desired state of the experiment after starting or stopping operation.
- kmsKey String - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
- name String - The name for the inference experiment.
- schedule InferenceExperimentSchedule - The duration for which the inference experiment ran or will run. The maximum duration that you can set for an inference experiment is 30 days.
- shadowModeConfig InferenceExperimentShadowModeConfig - The configuration of ShadowMode inference experiment type, which shows the production variant that takes all the inference requests, and the shadow variant to which Amazon SageMaker replicates a percentage of the inference requests. For the shadow variant it also shows the percentage of requests that Amazon SageMaker replicates.
- statusReason String - The error message or client-specified reason from the StopInferenceExperiment API, that explains the status of the inference experiment.
- tags List<Tag> - An array of key-value pairs to apply to this resource.
- endpointName string - The name of the endpoint.
- modelVariants InferenceExperimentModelVariantConfig[] - An array of ModelVariantConfig objects. Each ModelVariantConfig object in the array describes the infrastructure configuration for the corresponding variant.
- roleArn string - The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to access model artifacts and container images, and manage Amazon SageMaker Inference endpoints for model deployment.
- type InferenceExperimentType - The type of the inference experiment that you want to run.
- dataStorageConfig InferenceExperimentDataStorageConfig - The Amazon S3 location and configuration for storing inference request and response data.
- description string - The description of the inference experiment.
- desiredState InferenceExperimentDesiredState - The desired state of the experiment after starting or stopping operation.
- kmsKey string - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
- name string - The name for the inference experiment.
- schedule InferenceExperimentSchedule - The duration for which the inference experiment ran or will run. The maximum duration that you can set for an inference experiment is 30 days.
- shadowModeConfig InferenceExperimentShadowModeConfig - The configuration of ShadowMode inference experiment type, which shows the production variant that takes all the inference requests, and the shadow variant to which Amazon SageMaker replicates a percentage of the inference requests. For the shadow variant it also shows the percentage of requests that Amazon SageMaker replicates.
- statusReason string - The error message or client-specified reason from the StopInferenceExperiment API, that explains the status of the inference experiment.
- tags Tag[] - An array of key-value pairs to apply to this resource.
- endpoint_name str - The name of the endpoint.
- model_variants Sequence[InferenceExperimentModelVariantConfigArgs] - An array of ModelVariantConfig objects. Each ModelVariantConfig object in the array describes the infrastructure configuration for the corresponding variant.
- role_arn str - The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to access model artifacts and container images, and manage Amazon SageMaker Inference endpoints for model deployment.
- type InferenceExperimentType - The type of the inference experiment that you want to run.
- data_storage_config InferenceExperimentDataStorageConfigArgs - The Amazon S3 location and configuration for storing inference request and response data.
- description str - The description of the inference experiment.
- desired_state InferenceExperimentDesiredState - The desired state of the experiment after starting or stopping operation.
- kms_key str - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
- name str - The name for the inference experiment.
- schedule InferenceExperimentScheduleArgs - The duration for which the inference experiment ran or will run. The maximum duration that you can set for an inference experiment is 30 days.
- shadow_mode_config InferenceExperimentShadowModeConfigArgs - The configuration of ShadowMode inference experiment type, which shows the production variant that takes all the inference requests, and the shadow variant to which Amazon SageMaker replicates a percentage of the inference requests. For the shadow variant it also shows the percentage of requests that Amazon SageMaker replicates.
- status_reason str - The error message or client-specified reason from the StopInferenceExperiment API, that explains the status of the inference experiment.
- tags Sequence[TagArgs] - An array of key-value pairs to apply to this resource.
- endpointName String - The name of the endpoint.
- modelVariants List<Property Map> - An array of ModelVariantConfig objects. Each ModelVariantConfig object in the array describes the infrastructure configuration for the corresponding variant.
- roleArn String - The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to access model artifacts and container images, and manage Amazon SageMaker Inference endpoints for model deployment.
- type "ShadowMode" - The type of the inference experiment that you want to run.
- dataStorageConfig Property Map - The Amazon S3 location and configuration for storing inference request and response data.
- description String - The description of the inference experiment.
- desiredState "Running" | "Completed" | "Cancelled" - The desired state of the experiment after starting or stopping operation.
- kmsKey String - The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
- name String - The name for the inference experiment.
- schedule Property Map - The duration for which the inference experiment ran or will run. The maximum duration that you can set for an inference experiment is 30 days.
- shadowModeConfig Property Map - The configuration of ShadowMode inference experiment type, which shows the production variant that takes all the inference requests, and the shadow variant to which Amazon SageMaker replicates a percentage of the inference requests. For the shadow variant it also shows the percentage of requests that Amazon SageMaker replicates.
- statusReason String - The error message or client-specified reason from the StopInferenceExperiment API, that explains the status of the inference experiment.
- tags List<Property Map> - An array of key-value pairs to apply to this resource.
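As noted above, Python inputs that are objects can be passed either as Args classes or as dictionary literals. The following sketch assembles the four required inputs (`endpoint_name`, `model_variants`, `role_arn`, `type`) as plain dictionaries; the helper function and all values are illustrative, and no Pulumi engine is needed to build them:

```python
# Hypothetical helper (not part of the provider): collect the required
# InferenceExperiment inputs as plain dictionaries, mirroring the snake_case
# property names in the table above.
def build_inference_experiment_args(endpoint_name, role_arn, model_variants):
    return {
        "endpoint_name": endpoint_name,
        "role_arn": role_arn,
        "type": "ShadowMode",          # one of the InferenceExperimentType values
        "model_variants": model_variants,
    }

args = build_inference_experiment_args(
    "my-endpoint",                                        # illustrative
    "arn:aws:iam::123456789012:role/sagemaker-role",      # illustrative
    [{
        "model_name": "my-model",
        "variant_name": "shadow-variant",
        "infrastructure_config": {
            "infrastructure_type": "RealTimeInference",
            "real_time_inference_config": {
                "instance_type": "ml.m5.xlarge",
                "instance_count": 1,
            },
        },
    }],
)
```

The resulting `args` dict could then be splatted into the constructor shown earlier (`InferenceExperiment("my-experiment", **args)`).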
Outputs
All input properties are implicitly available as output properties. Additionally, the InferenceExperiment resource produces the following output properties:
- Arn string - The Amazon Resource Name (ARN) of the inference experiment.
- CreationTime string - The timestamp at which you created the inference experiment.
- EndpointMetadata Pulumi.AwsNative.SageMaker.Outputs.InferenceExperimentEndpointMetadata
- Id string - The provider-assigned unique ID for this managed resource.
- LastModifiedTime string - The timestamp at which you last modified the inference experiment.
- Status Pulumi.AwsNative.SageMaker.InferenceExperimentStatus - The status of the inference experiment.
- Arn string - The Amazon Resource Name (ARN) of the inference experiment.
- CreationTime string - The timestamp at which you created the inference experiment.
- EndpointMetadata InferenceExperimentEndpointMetadata
- Id string - The provider-assigned unique ID for this managed resource.
- LastModifiedTime string - The timestamp at which you last modified the inference experiment.
- Status InferenceExperimentStatus - The status of the inference experiment.
- arn String - The Amazon Resource Name (ARN) of the inference experiment.
- creationTime String - The timestamp at which you created the inference experiment.
- endpointMetadata InferenceExperimentEndpointMetadata
- id String - The provider-assigned unique ID for this managed resource.
- lastModifiedTime String - The timestamp at which you last modified the inference experiment.
- status InferenceExperimentStatus - The status of the inference experiment.
- arn string - The Amazon Resource Name (ARN) of the inference experiment.
- creationTime string - The timestamp at which you created the inference experiment.
- endpointMetadata InferenceExperimentEndpointMetadata
- id string - The provider-assigned unique ID for this managed resource.
- lastModifiedTime string - The timestamp at which you last modified the inference experiment.
- status InferenceExperimentStatus - The status of the inference experiment.
- arn str - The Amazon Resource Name (ARN) of the inference experiment.
- creation_time str - The timestamp at which you created the inference experiment.
- endpoint_metadata InferenceExperimentEndpointMetadata
- id str - The provider-assigned unique ID for this managed resource.
- last_modified_time str - The timestamp at which you last modified the inference experiment.
- status InferenceExperimentStatus - The status of the inference experiment.
- arn String - The Amazon Resource Name (ARN) of the inference experiment.
- creationTime String - The timestamp at which you created the inference experiment.
- endpointMetadata Property Map
- id String - The provider-assigned unique ID for this managed resource.
- lastModifiedTime String - The timestamp at which you last modified the inference experiment.
- status "Creating" | "Created" | "Updating" | "Starting" | "Stopping" | "Running" | "Completed" | "Cancelled" - The status of the inference experiment.
Supporting Types
InferenceExperimentCaptureContentTypeHeader, InferenceExperimentCaptureContentTypeHeaderArgs
- CsvContentTypes List<string> - The list of all content type headers that SageMaker will treat as CSV and capture accordingly.
- JsonContentTypes List<string> - The list of all content type headers that SageMaker will treat as JSON and capture accordingly.
- CsvContentTypes []string - The list of all content type headers that SageMaker will treat as CSV and capture accordingly.
- JsonContentTypes []string - The list of all content type headers that SageMaker will treat as JSON and capture accordingly.
- csvContentTypes List<String> - The list of all content type headers that SageMaker will treat as CSV and capture accordingly.
- jsonContentTypes List<String> - The list of all content type headers that SageMaker will treat as JSON and capture accordingly.
- csvContentTypes string[] - The list of all content type headers that SageMaker will treat as CSV and capture accordingly.
- jsonContentTypes string[] - The list of all content type headers that SageMaker will treat as JSON and capture accordingly.
- csv_content_types Sequence[str] - The list of all content type headers that SageMaker will treat as CSV and capture accordingly.
- json_content_types Sequence[str] - The list of all content type headers that SageMaker will treat as JSON and capture accordingly.
- csvContentTypes List<String> - The list of all content type headers that SageMaker will treat as CSV and capture accordingly.
- jsonContentTypes List<String> - The list of all content type headers that SageMaker will treat as JSON and capture accordingly.
InferenceExperimentDataStorageConfig, InferenceExperimentDataStorageConfigArgs
- Destination string - The Amazon S3 bucket where the inference request and response data is stored.
- ContentType Pulumi.AwsNative.SageMaker.Inputs.InferenceExperimentCaptureContentTypeHeader - Configuration specifying how to treat different headers. If no headers are specified SageMaker will by default base64 encode when capturing the data.
- KmsKey string - The AWS Key Management Service key that Amazon SageMaker uses to encrypt captured data at rest using Amazon S3 server-side encryption.
- Destination string - The Amazon S3 bucket where the inference request and response data is stored.
- ContentType InferenceExperimentCaptureContentTypeHeader - Configuration specifying how to treat different headers. If no headers are specified SageMaker will by default base64 encode when capturing the data.
- KmsKey string - The AWS Key Management Service key that Amazon SageMaker uses to encrypt captured data at rest using Amazon S3 server-side encryption.
- destination String - The Amazon S3 bucket where the inference request and response data is stored.
- contentType InferenceExperimentCaptureContentTypeHeader - Configuration specifying how to treat different headers. If no headers are specified SageMaker will by default base64 encode when capturing the data.
- kmsKey String - The AWS Key Management Service key that Amazon SageMaker uses to encrypt captured data at rest using Amazon S3 server-side encryption.
- destination string - The Amazon S3 bucket where the inference request and response data is stored.
- contentType InferenceExperimentCaptureContentTypeHeader - Configuration specifying how to treat different headers. If no headers are specified SageMaker will by default base64 encode when capturing the data.
- kmsKey string - The AWS Key Management Service key that Amazon SageMaker uses to encrypt captured data at rest using Amazon S3 server-side encryption.
- destination str - The Amazon S3 bucket where the inference request and response data is stored.
- content_type InferenceExperimentCaptureContentTypeHeader - Configuration specifying how to treat different headers. If no headers are specified SageMaker will by default base64 encode when capturing the data.
- kms_key str - The AWS Key Management Service key that Amazon SageMaker uses to encrypt captured data at rest using Amazon S3 server-side encryption.
- destination String - The Amazon S3 bucket where the inference request and response data is stored.
- contentType Property Map - Configuration specifying how to treat different headers. If no headers are specified SageMaker will by default base64 encode when capturing the data.
- kmsKey String - The AWS Key Management Service key that Amazon SageMaker uses to encrypt captured data at rest using Amazon S3 server-side encryption.
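The capture rule above (non-CSV/non-JSON payloads are base64 encoded by default) can be sketched in plain Python. This is an illustration of the described behaviour, not SageMaker's actual implementation:

```python
import base64

# Illustration of the documented default: payloads whose Content-Type matches a
# configured CSV or JSON header are captured as text; anything else is base64
# encoded before storage.
def capture_payload(payload: bytes, content_type: str,
                    csv_content_types=(), json_content_types=()):
    if content_type in csv_content_types or content_type in json_content_types:
        return payload.decode("utf-8")        # captured as-is
    return base64.b64encode(payload).decode("ascii")  # default: base64 encoded

# A CSV payload with a matching header is stored as text; an unknown binary
# payload falls back to base64.
text = capture_payload(b"a,b\n1,2", "text/csv", csv_content_types=("text/csv",))
blob = capture_payload(b"\x00\x01", "application/octet-stream")
```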
InferenceExperimentDesiredState, InferenceExperimentDesiredStateArgs
- Running
- Running
- Completed
- Completed
- Cancelled
- Cancelled
- InferenceExperimentDesiredStateRunning - Running
- InferenceExperimentDesiredStateCompleted - Completed
- InferenceExperimentDesiredStateCancelled - Cancelled
- Running
- Running
- Completed
- Completed
- Cancelled
- Cancelled
- Running
- Running
- Completed
- Completed
- Cancelled
- Cancelled
- RUNNING
- Running
- COMPLETED
- Completed
- CANCELLED
- Cancelled
- "Running"
- Running
- "Completed"
- Completed
- "Cancelled"
- Cancelled
InferenceExperimentEndpointMetadata, InferenceExperimentEndpointMetadataArgs
- EndpointName string - The name of the endpoint.
- EndpointConfigName string - The name of the endpoint configuration.
- EndpointStatus Pulumi.AwsNative.SageMaker.InferenceExperimentEndpointMetadataEndpointStatus - The status of the endpoint. For possible values of the status of an endpoint.
- EndpointName string - The name of the endpoint.
- EndpointConfigName string - The name of the endpoint configuration.
- EndpointStatus InferenceExperimentEndpointMetadataEndpointStatus - The status of the endpoint. For possible values of the status of an endpoint.
- endpointName String - The name of the endpoint.
- endpointConfigName String - The name of the endpoint configuration.
- endpointStatus InferenceExperimentEndpointMetadataEndpointStatus - The status of the endpoint. For possible values of the status of an endpoint.
- endpointName string - The name of the endpoint.
- endpointConfigName string - The name of the endpoint configuration.
- endpointStatus InferenceExperimentEndpointMetadataEndpointStatus - The status of the endpoint. For possible values of the status of an endpoint.
- endpoint_name str - The name of the endpoint.
- endpoint_config_name str - The name of the endpoint configuration.
- endpoint_status InferenceExperimentEndpointMetadataEndpointStatus - The status of the endpoint. For possible values of the status of an endpoint.
- endpointName String - The name of the endpoint.
- endpointConfigName String - The name of the endpoint configuration.
- endpointStatus "Creating" | "Updating" | "SystemUpdating" | "RollingBack" | "InService" | "OutOfService" | "Deleting" | "Failed" - The status of the endpoint. For possible values of the status of an endpoint.
InferenceExperimentEndpointMetadataEndpointStatus, InferenceExperimentEndpointMetadataEndpointStatusArgs
- Creating - Creating
- Updating - Updating
- SystemUpdating - SystemUpdating
- RollingBack - RollingBack
- InService - InService
- OutOfService - OutOfService
- Deleting - Deleting
- Failed - Failed
- InferenceExperimentEndpointMetadataEndpointStatusCreating - Creating
- InferenceExperimentEndpointMetadataEndpointStatusUpdating - Updating
- InferenceExperimentEndpointMetadataEndpointStatusSystemUpdating - SystemUpdating
- InferenceExperimentEndpointMetadataEndpointStatusRollingBack - RollingBack
- InferenceExperimentEndpointMetadataEndpointStatusInService - InService
- InferenceExperimentEndpointMetadataEndpointStatusOutOfService - OutOfService
- InferenceExperimentEndpointMetadataEndpointStatusDeleting - Deleting
- InferenceExperimentEndpointMetadataEndpointStatusFailed - Failed
- Creating - Creating
- Updating - Updating
- SystemUpdating - SystemUpdating
- RollingBack - RollingBack
- InService - InService
- OutOfService - OutOfService
- Deleting - Deleting
- Failed - Failed
- Creating - Creating
- Updating - Updating
- SystemUpdating - SystemUpdating
- RollingBack - RollingBack
- InService - InService
- OutOfService - OutOfService
- Deleting - Deleting
- Failed - Failed
- CREATING - Creating
- UPDATING - Updating
- SYSTEM_UPDATING - SystemUpdating
- ROLLING_BACK - RollingBack
- IN_SERVICE - InService
- OUT_OF_SERVICE - OutOfService
- DELETING - Deleting
- FAILED - Failed
- "Creating" - Creating
- "Updating" - Updating
- "SystemUpdating" - SystemUpdating
- "RollingBack" - RollingBack
- "InService" - InService
- "OutOfService" - OutOfService
- "Deleting" - Deleting
- "Failed" - Failed
InferenceExperimentModelInfrastructureConfig, InferenceExperimentModelInfrastructureConfigArgs
- InfrastructureType Pulumi.AwsNative.SageMaker.InferenceExperimentModelInfrastructureConfigInfrastructureType - The type of the inference experiment that you want to run.
- RealTimeInferenceConfig Pulumi.AwsNative.SageMaker.Inputs.InferenceExperimentRealTimeInferenceConfig - The infrastructure configuration for deploying the model to real-time inference.
- InfrastructureType InferenceExperimentModelInfrastructureConfigInfrastructureType - The type of the inference experiment that you want to run.
- RealTimeInferenceConfig InferenceExperimentRealTimeInferenceConfig - The infrastructure configuration for deploying the model to real-time inference.
- infrastructureType InferenceExperimentModelInfrastructureConfigInfrastructureType - The type of the inference experiment that you want to run.
- realTimeInferenceConfig InferenceExperimentRealTimeInferenceConfig - The infrastructure configuration for deploying the model to real-time inference.
- infrastructureType InferenceExperimentModelInfrastructureConfigInfrastructureType - The type of the inference experiment that you want to run.
- realTimeInferenceConfig InferenceExperimentRealTimeInferenceConfig - The infrastructure configuration for deploying the model to real-time inference.
- infrastructure_type InferenceExperimentModelInfrastructureConfigInfrastructureType - The type of the inference experiment that you want to run.
- real_time_inference_config InferenceExperimentRealTimeInferenceConfig - The infrastructure configuration for deploying the model to real-time inference.
- infrastructureType "RealTimeInference" - The type of the inference experiment that you want to run.
- realTimeInferenceConfig Property Map - The infrastructure configuration for deploying the model to real-time inference.
InferenceExperimentModelInfrastructureConfigInfrastructureType, InferenceExperimentModelInfrastructureConfigInfrastructureTypeArgs
- RealTimeInference - RealTimeInference
- InferenceExperimentModelInfrastructureConfigInfrastructureTypeRealTimeInference - RealTimeInference
- RealTimeInference - RealTimeInference
- RealTimeInference - RealTimeInference
- REAL_TIME_INFERENCE - RealTimeInference
- "RealTimeInference" - RealTimeInference
InferenceExperimentModelVariantConfig, InferenceExperimentModelVariantConfigArgs
- InfrastructureConfig Pulumi.AwsNative.SageMaker.Inputs.InferenceExperimentModelInfrastructureConfig - The configuration for the infrastructure that the model will be deployed to.
- ModelName string - The name of the Amazon SageMaker Model entity.
- VariantName string - The name of the variant.
- InfrastructureConfig InferenceExperimentModelInfrastructureConfig - The configuration for the infrastructure that the model will be deployed to.
- ModelName string - The name of the Amazon SageMaker Model entity.
- VariantName string - The name of the variant.
- infrastructureConfig InferenceExperimentModelInfrastructureConfig - The configuration for the infrastructure that the model will be deployed to.
- modelName String - The name of the Amazon SageMaker Model entity.
- variantName String - The name of the variant.
- infrastructureConfig InferenceExperimentModelInfrastructureConfig - The configuration for the infrastructure that the model will be deployed to.
- modelName string - The name of the Amazon SageMaker Model entity.
- variantName string - The name of the variant.
- infrastructure_config InferenceExperimentModelInfrastructureConfig - The configuration for the infrastructure that the model will be deployed to.
- model_name str - The name of the Amazon SageMaker Model entity.
- variant_name str - The name of the variant.
- infrastructureConfig Property Map - The configuration for the infrastructure that the model will be deployed to.
- modelName String - The name of the Amazon SageMaker Model entity.
- variantName String - The name of the variant.
InferenceExperimentRealTimeInferenceConfig, InferenceExperimentRealTimeInferenceConfigArgs
- InstanceCount int - The number of instances of the type specified by InstanceType.
- InstanceType string - The instance type the model is deployed to.
- InstanceCount int - The number of instances of the type specified by InstanceType.
- InstanceType string - The instance type the model is deployed to.
- instanceCount Integer - The number of instances of the type specified by InstanceType.
- instanceType String - The instance type the model is deployed to.
- instanceCount number - The number of instances of the type specified by InstanceType.
- instanceType string - The instance type the model is deployed to.
- instance_count int - The number of instances of the type specified by InstanceType.
- instance_type str - The instance type the model is deployed to.
- instanceCount Number - The number of instances of the type specified by InstanceType.
- instanceType String - The instance type the model is deployed to.
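The real-time inference block reduces to these two fields; as a sketch in Pulumi YAML (the values shown are illustrative, not recommendations):

```yaml
realTimeInferenceConfig:
  instanceType: ml.m5.large  # illustrative instance type
  instanceCount: 2           # number of instances of that type
```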
InferenceExperimentSchedule, InferenceExperimentScheduleArgs
- end_time str - The timestamp at which the inference experiment ended or will end.
- start_time str - The timestamp at which the inference experiment started or will start.
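The schedule bounds the experiment's run window with a start and end timestamp; a Pulumi YAML sketch with hypothetical ISO-8601 values:

```yaml
schedule:
  startTime: "2024-01-15T00:00:00Z"  # hypothetical start of the run window
  endTime: "2024-01-22T00:00:00Z"    # hypothetical end of the run window
```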
InferenceExperimentShadowModeConfig, InferenceExperimentShadowModeConfigArgs
- ShadowModelVariants List<Pulumi.AwsNative.SageMaker.Inputs.InferenceExperimentShadowModelVariantConfig> - List of shadow variant configurations.
- SourceModelVariantName string - The name of the production variant, which takes all the inference requests.
- ShadowModelVariants []InferenceExperimentShadowModelVariantConfig - List of shadow variant configurations.
- SourceModelVariantName string - The name of the production variant, which takes all the inference requests.
- shadowModelVariants List<InferenceExperimentShadowModelVariantConfig> - List of shadow variant configurations.
- sourceModelVariantName String - The name of the production variant, which takes all the inference requests.
- shadowModelVariants InferenceExperimentShadowModelVariantConfig[] - List of shadow variant configurations.
- sourceModelVariantName string - The name of the production variant, which takes all the inference requests.
- shadow_model_variants Sequence[InferenceExperimentShadowModelVariantConfig] - List of shadow variant configurations.
- source_model_variant_name str - The name of the production variant, which takes all the inference requests.
- shadowModelVariants List<Property Map> - List of shadow variant configurations.
- sourceModelVariantName String - The name of the production variant, which takes all the inference requests.
InferenceExperimentShadowModelVariantConfig, InferenceExperimentShadowModelVariantConfigArgs
- SamplingPercentage int - The percentage of inference requests that Amazon SageMaker replicates from the production variant to the shadow variant.
- ShadowModelVariantName string - The name of the shadow variant.
- SamplingPercentage int - The percentage of inference requests that Amazon SageMaker replicates from the production variant to the shadow variant.
- ShadowModelVariantName string - The name of the shadow variant.
- samplingPercentage Integer - The percentage of inference requests that Amazon SageMaker replicates from the production variant to the shadow variant.
- shadowModelVariantName String - The name of the shadow variant.
- samplingPercentage number - The percentage of inference requests that Amazon SageMaker replicates from the production variant to the shadow variant.
- shadowModelVariantName string - The name of the shadow variant.
- sampling_percentage int - The percentage of inference requests that Amazon SageMaker replicates from the production variant to the shadow variant.
- shadow_model_variant_name str - The name of the shadow variant.
- samplingPercentage Number - The percentage of inference requests that Amazon SageMaker replicates from the production variant to the shadow variant.
- shadowModelVariantName String - The name of the shadow variant.
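A sketch tying the two shapes together: the production variant named in sourceModelVariantName takes all inference requests, and each shadow variant receives a replicated share of them. The variant names and percentage below are hypothetical:

```yaml
shadowModeConfig:
  sourceModelVariantName: production-variant  # hypothetical production variant
  shadowModelVariants:
    - shadowModelVariantName: shadow-variant  # hypothetical shadow variant
      samplingPercentage: 50                  # replicate half of the production traffic
```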
InferenceExperimentStatus, InferenceExperimentStatusArgs
- Creating - Creating
- Created - Created
- Updating - Updating
- Starting - Starting
- Stopping - Stopping
- Running - Running
- Completed - Completed
- Cancelled - Cancelled
- InferenceExperimentStatusCreating - Creating
- InferenceExperimentStatusCreated - Created
- InferenceExperimentStatusUpdating - Updating
- InferenceExperimentStatusStarting - Starting
- InferenceExperimentStatusStopping - Stopping
- InferenceExperimentStatusRunning - Running
- InferenceExperimentStatusCompleted - Completed
- InferenceExperimentStatusCancelled - Cancelled
- Creating - Creating
- Created - Created
- Updating - Updating
- Starting - Starting
- Stopping - Stopping
- Running - Running
- Completed - Completed
- Cancelled - Cancelled
- Creating - Creating
- Created - Created
- Updating - Updating
- Starting - Starting
- Stopping - Stopping
- Running - Running
- Completed - Completed
- Cancelled - Cancelled
- CREATING - Creating
- CREATED - Created
- UPDATING - Updating
- STARTING - Starting
- STOPPING - Stopping
- RUNNING - Running
- COMPLETED - Completed
- CANCELLED - Cancelled
- "Creating" - Creating
- "Created" - Created
- "Updating" - Updating
- "Starting" - Starting
- "Stopping" - Stopping
- "Running" - Running
- "Completed" - Completed
- "Cancelled" - Cancelled
InferenceExperimentType, InferenceExperimentTypeArgs
- ShadowMode - ShadowMode
- InferenceExperimentTypeShadowMode - ShadowMode
- ShadowMode - ShadowMode
- ShadowMode - ShadowMode
- SHADOW_MODE - ShadowMode
- "ShadowMode" - ShadowMode
Tag, TagArgs
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0