aws-native.ecs.getService
We recommend new projects start with resources from the AWS provider.
The AWS::ECS::Service resource creates an Amazon Elastic Container Service (Amazon ECS) service that runs and maintains the requested number of tasks and associated load balancers.
The stack update fails if you change any properties that require replacement and at least one ECS Service Connect ServiceConnectConfiguration property is configured. This is because AWS CloudFormation creates the replacement service first, but each ServiceConnectService must have a name that is unique in the namespace.
Starting April 15, 2023, AWS will not onboard new customers to Amazon Elastic Inference (EI), and will help current customers migrate their workloads to options that offer better price and performance. After April 15, 2023, new customers will not be able to launch instances with Amazon EI accelerators in Amazon SageMaker, ECS, or EC2. However, customers who have used Amazon EI at least once during the past 30-day period are considered current customers and will be able to continue using the service.
Using getService
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getService(args: GetServiceArgs, opts?: InvokeOptions): Promise<GetServiceResult>
function getServiceOutput(args: GetServiceOutputArgs, opts?: InvokeOptions): Output<GetServiceResult>
def get_service(cluster: Optional[str] = None,
                service_arn: Optional[str] = None,
                opts: Optional[InvokeOptions] = None) -> GetServiceResult
def get_service_output(cluster: Optional[pulumi.Input[str]] = None,
                       service_arn: Optional[pulumi.Input[str]] = None,
                       opts: Optional[InvokeOptions] = None) -> Output[GetServiceResult]
func LookupService(ctx *Context, args *LookupServiceArgs, opts ...InvokeOption) (*LookupServiceResult, error)
func LookupServiceOutput(ctx *Context, args *LookupServiceOutputArgs, opts ...InvokeOption) LookupServiceResultOutput
> Note: This function is named LookupService in the Go SDK.
public static class GetService
{
public static Task<GetServiceResult> InvokeAsync(GetServiceArgs args, InvokeOptions? opts = null)
public static Output<GetServiceResult> Invoke(GetServiceInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetServiceResult> getService(GetServiceArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
  function: aws-native:ecs:getService
  arguments:
    # arguments dictionary
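For example, a minimal TypeScript invocation might look like the following sketch. The cluster name and service ARN are placeholder values, and the @pulumi/aws-native package is assumed to be installed:

import * as awsnative from "@pulumi/aws-native";

// Output form: accepts Input-wrapped arguments and returns an Output-wrapped result.
// The cluster name and service ARN below are placeholders; substitute your own.
const service = awsnative.ecs.getServiceOutput({
    cluster: "my-cluster",
    serviceArn: "arn:aws:ecs:us-east-1:123456789012:service/my-cluster/my-service",
});

// Result properties are Outputs that resolve once the lookup completes.
export const taskDefinition = service.taskDefinition;
export const desiredCount = service.desiredCount;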
The following arguments are supported:
- Cluster string
- The short name or full Amazon Resource Name (ARN) of the cluster that you run your service on. If you do not specify a cluster, the default cluster is assumed.
- ServiceArn string - Not currently supported in AWS CloudFormation.
- Cluster string
- The short name or full Amazon Resource Name (ARN) of the cluster that you run your service on. If you do not specify a cluster, the default cluster is assumed.
- ServiceArn string - Not currently supported in AWS CloudFormation.
- cluster String
- The short name or full Amazon Resource Name (ARN) of the cluster that you run your service on. If you do not specify a cluster, the default cluster is assumed.
- serviceArn String - Not currently supported in AWS CloudFormation.
- cluster string
- The short name or full Amazon Resource Name (ARN) of the cluster that you run your service on. If you do not specify a cluster, the default cluster is assumed.
- serviceArn string - Not currently supported in AWS CloudFormation.
- cluster str
- The short name or full Amazon Resource Name (ARN) of the cluster that you run your service on. If you do not specify a cluster, the default cluster is assumed.
- service_arn str - Not currently supported in AWS CloudFormation.
- cluster String
- The short name or full Amazon Resource Name (ARN) of the cluster that you run your service on. If you do not specify a cluster, the default cluster is assumed.
- serviceArn String - Not currently supported in AWS CloudFormation.
getService Result
The following output properties are available:
- AvailabilityZoneRebalancing Pulumi.AwsNative.Ecs.ServiceAvailabilityZoneRebalancing
- CapacityProviderStrategy List<Pulumi.AwsNative.Ecs.Outputs.ServiceCapacityProviderStrategyItem> - The capacity provider strategy to use for the service. If a capacityProviderStrategy is specified, the launchType parameter must be omitted. If no capacityProviderStrategy or launchType is specified, the defaultCapacityProviderStrategy for the cluster is used. A capacity provider strategy may contain a maximum of 6 capacity providers.
- DeploymentConfiguration Pulumi.AwsNative.Ecs.Outputs.ServiceDeploymentConfiguration - Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
- DesiredCount int - The number of instantiations of the specified task definition to place and keep running in your service. For new services, if a desired count is not specified, a default value of 1 is used. When using the DAEMON scheduling strategy, the desired count is not required. For existing services, if a desired count is not specified, it is omitted from the operation.
- EnableEcsManagedTags bool - Specifies whether to turn on Amazon ECS managed tags for the tasks within the service. For more information, see Tagging your Amazon ECS resources in the Amazon Elastic Container Service Developer Guide. When you use Amazon ECS managed tags, you need to set the propagateTags request parameter.
- EnableExecuteCommand bool - Determines whether the execute command functionality is turned on for the service. If true, the execute command functionality is turned on for all containers in tasks as part of the service.
- HealthCheckGracePeriodSeconds int - The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of 0 is used. If you do not use an Elastic Load Balancing, we recommend that you use the startPeriod in the task definition health check parameters. For more information, see Health check. If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up.
- LoadBalancers List<Pulumi.AwsNative.Ecs.Outputs.ServiceLoadBalancer> - A list of load balancer objects to associate with the service. If you specify the Role property, LoadBalancers must be specified as well. For information about the number of load balancers that you can specify per service, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide.
- Name string - The name of the Amazon ECS service, such as sample-webapp.
- NetworkConfiguration Pulumi.AwsNative.Ecs.Outputs.ServiceNetworkConfiguration - The network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide.
- PlacementConstraints List<Pulumi.AwsNative.Ecs.Outputs.ServicePlacementConstraint> - An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
- PlacementStrategies List<Pulumi.AwsNative.Ecs.Outputs.ServicePlacementStrategy> - The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules for each service.
- PlatformVersion string - The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the LATEST platform version is used. For more information, see platform versions in the Amazon Elastic Container Service Developer Guide.
- PropagateTags Pulumi.AwsNative.Ecs.ServicePropagateTags - Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action. You must set this to a value other than NONE when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide. The default is NONE.
- ServiceArn string - Not currently supported in AWS CloudFormation.
- ServiceRegistries List<Pulumi.AwsNative.Ecs.Outputs.ServiceRegistry> - The details of the service discovery registry to associate with this service. For more information, see Service discovery. Each service may be associated with one service registry. Multiple service registries for each service isn't supported.
- Tags List<Pulumi.AwsNative.Outputs.Tag> - The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8
- Maximum value length - 256 Unicode characters in UTF-8
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case-sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
- TaskDefinition string - The family and revision (family:revision) or full ARN of the task definition to run in your service. If a revision isn't specified, the latest ACTIVE revision is used. A task definition must be specified if the service uses either the ECS or CODE_DEPLOY deployment controllers. For more information about deployment types, see Amazon ECS deployment types.
- VpcLatticeConfigurations List<Pulumi.AwsNative.Ecs.Outputs.ServiceVpcLatticeConfiguration>
- Availability
Zone ServiceRebalancing Availability Zone Rebalancing - Capacity
Provider []ServiceStrategy Capacity Provider Strategy Item - The capacity provider strategy to use for the service.
If a
capacityProviderStrategy
is specified, thelaunchType
parameter must be omitted. If nocapacityProviderStrategy
orlaunchType
is specified, thedefaultCapacityProviderStrategy
for the cluster is used. A capacity provider strategy may contain a maximum of 6 capacity providers. - Deployment
Configuration ServiceDeployment Configuration - Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
- Desired
Count int - The number of instantiations of the specified task definition to place and keep running in your service.
For new services, if a desired count is not specified, a default value of
1
is used. When using theDAEMON
scheduling strategy, the desired count is not required. For existing services, if a desired count is not specified, it is omitted from the operation. - bool
- Specifies whether to turn on Amazon ECS managed tags for the tasks within the service. For more information, see Tagging your Amazon ECS resources in the Amazon Elastic Container Service Developer Guide.
When you use Amazon ECS managed tags, you need to set the
propagateTags
request parameter. - Enable
Execute boolCommand - Determines whether the execute command functionality is turned on for the service. If
true
, the execute command functionality is turned on for all containers in tasks as part of the service. - Health
Check intGrace Period Seconds - The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of
0
is used. If you do not use an Elastic Load Balancing, we recommend that you use thestartPeriod
in the task definition health check parameters. For more information, see Health check. If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up. - Load
Balancers []ServiceLoad Balancer - A list of load balancer objects to associate with the service. If you specify the
Role
property,LoadBalancers
must be specified as well. For information about the number of load balancers that you can specify per service, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide. - Name string
- The name of the Amazon ECS service, such as
sample-webapp
. - Network
Configuration ServiceNetwork Configuration - The network configuration for the service. This parameter is required for task definitions that use the
awsvpc
network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide. - Placement
Constraints []ServicePlacement Constraint - An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
- Placement
Strategies []ServicePlacement Strategy - The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules for each service.
- Platform
Version string - The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the
LATEST
platform version is used. For more information, see platform versions in the Amazon Elastic Container Service Developer Guide. - Service
Propagate Tags - Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action.
You must set this to a value other than
NONE
when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide. The default isNONE
. - Service
Arn string - Not currently supported in AWS CloudFormation .
- Service
Registries []ServiceRegistry - The details of the service discovery registry to associate with this service. For more information, see Service discovery. Each service may be associated with one service registry. Multiple service registries for each service isn't supported.
- Tag
- The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8
- Maximum value length - 256 Unicode characters in UTF-8
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case-sensitive.
- Do not use
aws:
,AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
- Task
Definition string - The
family
andrevision
(family:revision
) or full ARN of the task definition to run in your service. If arevision
isn't specified, the latestACTIVE
revision is used. A task definition must be specified if the service uses either theECS
orCODE_DEPLOY
deployment controllers. For more information about deployment types, see Amazon ECS deployment types. - Vpc
Lattice []ServiceConfigurations Vpc Lattice Configuration
- availability
Zone ServiceRebalancing Availability Zone Rebalancing - capacity
Provider List<ServiceStrategy Capacity Provider Strategy Item> - The capacity provider strategy to use for the service.
If a
capacityProviderStrategy
is specified, thelaunchType
parameter must be omitted. If nocapacityProviderStrategy
orlaunchType
is specified, thedefaultCapacityProviderStrategy
for the cluster is used. A capacity provider strategy may contain a maximum of 6 capacity providers. - deployment
Configuration ServiceDeployment Configuration - Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
- desired
Count Integer - The number of instantiations of the specified task definition to place and keep running in your service.
For new services, if a desired count is not specified, a default value of
1
is used. When using theDAEMON
scheduling strategy, the desired count is not required. For existing services, if a desired count is not specified, it is omitted from the operation. - Boolean
- Specifies whether to turn on Amazon ECS managed tags for the tasks within the service. For more information, see Tagging your Amazon ECS resources in the Amazon Elastic Container Service Developer Guide.
When you use Amazon ECS managed tags, you need to set the
propagateTags
request parameter. - enable
Execute BooleanCommand - Determines whether the execute command functionality is turned on for the service. If
true
, the execute command functionality is turned on for all containers in tasks as part of the service. - health
Check IntegerGrace Period Seconds - The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of
0
is used. If you do not use an Elastic Load Balancing, we recommend that you use thestartPeriod
in the task definition health check parameters. For more information, see Health check. If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up. - load
Balancers List<ServiceLoad Balancer> - A list of load balancer objects to associate with the service. If you specify the
Role
property,LoadBalancers
must be specified as well. For information about the number of load balancers that you can specify per service, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide. - name String
- The name of the Amazon ECS service, such as
sample-webapp
. - network
Configuration ServiceNetwork Configuration - The network configuration for the service. This parameter is required for task definitions that use the
awsvpc
network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide. - placement
Constraints List<ServicePlacement Constraint> - An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
- placement
Strategies List<ServicePlacement Strategy> - The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules for each service.
- platform
Version String - The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the
LATEST
platform version is used. For more information, see platform versions in the Amazon Elastic Container Service Developer Guide. - Service
Propagate Tags - Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action.
You must set this to a value other than
NONE
when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide. The default isNONE
. - service
Arn String - Not currently supported in AWS CloudFormation .
- service
Registries List<ServiceRegistry> - The details of the service discovery registry to associate with this service. For more information, see Service discovery. Each service may be associated with one service registry. Multiple service registries for each service isn't supported.
- List<Tag>
- The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8
- Maximum value length - 256 Unicode characters in UTF-8
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case-sensitive.
- Do not use
aws:
,AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
- task
Definition String - The
family
andrevision
(family:revision
) or full ARN of the task definition to run in your service. If arevision
isn't specified, the latestACTIVE
revision is used. A task definition must be specified if the service uses either theECS
orCODE_DEPLOY
deployment controllers. For more information about deployment types, see Amazon ECS deployment types. - vpc
Lattice List<ServiceConfigurations Vpc Lattice Configuration>
- availability
Zone ServiceRebalancing Availability Zone Rebalancing - capacity
Provider ServiceStrategy Capacity Provider Strategy Item[] - The capacity provider strategy to use for the service.
If a
capacityProviderStrategy
is specified, thelaunchType
parameter must be omitted. If nocapacityProviderStrategy
orlaunchType
is specified, thedefaultCapacityProviderStrategy
for the cluster is used. A capacity provider strategy may contain a maximum of 6 capacity providers. - deployment
Configuration ServiceDeployment Configuration - Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
- desired
Count number - The number of instantiations of the specified task definition to place and keep running in your service.
For new services, if a desired count is not specified, a default value of
1
is used. When using theDAEMON
scheduling strategy, the desired count is not required. For existing services, if a desired count is not specified, it is omitted from the operation. - boolean
- Specifies whether to turn on Amazon ECS managed tags for the tasks within the service. For more information, see Tagging your Amazon ECS resources in the Amazon Elastic Container Service Developer Guide.
When you use Amazon ECS managed tags, you need to set the
propagateTags
request parameter. - enable
Execute booleanCommand - Determines whether the execute command functionality is turned on for the service. If
true
, the execute command functionality is turned on for all containers in tasks as part of the service. - health
Check numberGrace Period Seconds - The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of
0
is used. If you do not use an Elastic Load Balancing, we recommend that you use thestartPeriod
in the task definition health check parameters. For more information, see Health check. If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up. - load
Balancers ServiceLoad Balancer[] - A list of load balancer objects to associate with the service. If you specify the
Role
property,LoadBalancers
must be specified as well. For information about the number of load balancers that you can specify per service, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide. - name string
- The name of the Amazon ECS service, such as
sample-webapp
. - network
Configuration ServiceNetwork Configuration - The network configuration for the service. This parameter is required for task definitions that use the
awsvpc
network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide. - placement
Constraints ServicePlacement Constraint[] - An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
- placement
Strategies ServicePlacement Strategy[] - The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules for each service.
- platform
Version string - The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the
LATEST
platform version is used. For more information, see platform versions in the Amazon Elastic Container Service Developer Guide. - Service
Propagate Tags - Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action.
You must set this to a value other than
NONE
when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide. The default isNONE
. - service
Arn string - Not currently supported in AWS CloudFormation .
- service
Registries ServiceRegistry[] - The details of the service discovery registry to associate with this service. For more information, see Service discovery. Each service may be associated with one service registry. Multiple service registries for each service isn't supported.
- Tag[]
- The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8
- Maximum value length - 256 Unicode characters in UTF-8
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case-sensitive.
- Do not use
aws:
,AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
- task
Definition string - The
family
andrevision
(family:revision
) or full ARN of the task definition to run in your service. If arevision
isn't specified, the latestACTIVE
revision is used. A task definition must be specified if the service uses either theECS
orCODE_DEPLOY
deployment controllers. For more information about deployment types, see Amazon ECS deployment types. - vpc
Lattice ServiceConfigurations Vpc Lattice Configuration[]
- availability_
zone_ Servicerebalancing Availability Zone Rebalancing - capacity_
provider_ Sequence[Servicestrategy Capacity Provider Strategy Item] - The capacity provider strategy to use for the service.
If a
capacityProviderStrategy
is specified, thelaunchType
parameter must be omitted. If nocapacityProviderStrategy
orlaunchType
is specified, thedefaultCapacityProviderStrategy
for the cluster is used. A capacity provider strategy may contain a maximum of 6 capacity providers. - deployment_
configuration ServiceDeployment Configuration - Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
- desired_
count int - The number of instantiations of the specified task definition to place and keep running in your service.
For new services, if a desired count is not specified, a default value of
1
is used. When using theDAEMON
scheduling strategy, the desired count is not required. For existing services, if a desired count is not specified, it is omitted from the operation. - bool
- Specifies whether to turn on Amazon ECS managed tags for the tasks within the service. For more information, see Tagging your Amazon ECS resources in the Amazon Elastic Container Service Developer Guide.
When you use Amazon ECS managed tags, you need to set the
propagateTags
request parameter. - enable_
execute_ boolcommand - Determines whether the execute command functionality is turned on for the service. If
true
, the execute command functionality is turned on for all containers in tasks as part of the service. - health_
check_ intgrace_ period_ seconds - The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of
0
is used. If you do not use an Elastic Load Balancing, we recommend that you use thestartPeriod
in the task definition health check parameters. For more information, see Health check. If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up. - load_
balancers Sequence[ServiceLoad Balancer] - A list of load balancer objects to associate with the service. If you specify the
Role
property,LoadBalancers
must be specified as well. For information about the number of load balancers that you can specify per service, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide. - name str
- The name of the Amazon ECS service, such as
sample-webapp
. - network_
configuration ServiceNetwork Configuration - The network configuration for the service. This parameter is required for task definitions that use the
awsvpc
network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide. - placement_
constraints Sequence[ServicePlacement Constraint] - An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
- placement_
strategies Sequence[ServicePlacement Strategy] - The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules for each service.
- platform_
version str - The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the
LATEST
platform version is used. For more information, see platform versions in the Amazon Elastic Container Service Developer Guide. - Service
Propagate Tags - Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action.
You must set this to a value other than
NONE
when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide. The default isNONE
. - service_
arn str - Not currently supported in AWS CloudFormation .
- service_
registries Sequence[ServiceRegistry] - The details of the service discovery registry to associate with this service. For more information, see Service discovery. Each service may be associated with one service registry. Multiple service registries for each service isn't supported.
- Sequence[root_Tag]
- The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8
- Maximum value length - 256 Unicode characters in UTF-8
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case-sensitive.
- Do not use
aws:
,AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
- task_
definition str - The
family
andrevision
(family:revision
) or full ARN of the task definition to run in your service. If arevision
isn't specified, the latestACTIVE
revision is used. A task definition must be specified if the service uses either theECS
orCODE_DEPLOY
deployment controllers. For more information about deployment types, see Amazon ECS deployment types. - vpc_
lattice_ Sequence[Serviceconfigurations Vpc Lattice Configuration]
- availability
Zone "ENABLED" | "DISABLED"Rebalancing - capacity
Provider List<Property Map>Strategy - The capacity provider strategy to use for the service.
If a
capacityProviderStrategy
is specified, thelaunchType
parameter must be omitted. If nocapacityProviderStrategy
orlaunchType
is specified, thedefaultCapacityProviderStrategy
for the cluster is used. A capacity provider strategy may contain a maximum of 6 capacity providers. - deployment
Configuration Property Map - Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.
- desired
Count Number - The number of instantiations of the specified task definition to place and keep running in your service.
For new services, if a desired count is not specified, a default value of
1
is used. When using theDAEMON
scheduling strategy, the desired count is not required. For existing services, if a desired count is not specified, it is omitted from the operation. - Boolean
- Specifies whether to turn on Amazon ECS managed tags for the tasks within the service. For more information, see Tagging your Amazon ECS resources in the Amazon Elastic Container Service Developer Guide.
When you use Amazon ECS managed tags, you need to set the
propagateTags
request parameter. - enable
Execute BooleanCommand - Determines whether the execute command functionality is turned on for the service. If
true
, the execute command functionality is turned on for all containers in tasks as part of the service. - health
Check NumberGrace Period Seconds - The period of time, in seconds, that the Amazon ECS service scheduler ignores unhealthy Elastic Load Balancing target health checks after a task has first started. This is only used when your service is configured to use a load balancer. If your service has a load balancer defined and you don't specify a health check grace period value, the default value of
0
is used. If you do not use an Elastic Load Balancing, we recommend that you use thestartPeriod
in the task definition health check parameters. For more information, see Health check. If your service's tasks take a while to start and respond to Elastic Load Balancing health checks, you can specify a health check grace period of up to 2,147,483,647 seconds (about 69 years). During that time, the Amazon ECS service scheduler ignores health check status. This grace period can prevent the service scheduler from marking tasks as unhealthy and stopping them before they have time to come up. - load
Balancers List<Property Map> - A list of load balancer objects to associate with the service. If you specify the
Role
property,LoadBalancers
must be specified as well. For information about the number of load balancers that you can specify per service, see Service Load Balancing in the Amazon Elastic Container Service Developer Guide. - name String
- The name of the Amazon ECS service, such as
sample-webapp
. - network
Configuration Property Map - The network configuration for the service. This parameter is required for task definitions that use the
awsvpc
network mode to receive their own elastic network interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon Elastic Container Service Developer Guide. - placement
Constraints List<Property Map> - An array of placement constraint objects to use for tasks in your service. You can specify a maximum of 10 constraints for each task. This limit includes constraints in the task definition and those specified at runtime.
- placement
Strategies List<Property Map> - The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules for each service.
- platform
Version String - The platform version that your tasks in the service are running on. A platform version is specified only for tasks using the Fargate launch type. If one isn't specified, the
LATEST
platform version is used. For more information, see platform versions in the Amazon Elastic Container Service Developer Guide. - "SERVICE" | "TASK_DEFINITION"
- Specifies whether to propagate the tags from the task definition to the task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the task during task creation. To add tags to a task after task creation, use the TagResource API action.
You must set this to a value other than
NONE
when you use Cost Explorer. For more information, see Amazon ECS usage reports in the Amazon Elastic Container Service Developer Guide. The default isNONE
. - service
Arn String - Not currently supported in AWS CloudFormation .
- service
Registries List<Property Map> - The details of the service discovery registry to associate with this service. For more information, see Service discovery. Each service may be associated with one service registry. Multiple service registries for each service isn't supported.
- List<Property Map>
- The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8
- Maximum value length - 256 Unicode characters in UTF-8
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case-sensitive.
- Do not use
aws:
,AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
- task
Definition String - The
family
andrevision
(family:revision
) or full ARN of the task definition to run in your service. If arevision
isn't specified, the latestACTIVE
revision is used. A task definition must be specified if the service uses either theECS
orCODE_DEPLOY
deployment controllers. For more information about deployment types, see Amazon ECS deployment types. - vpc
Lattice List<Property Map>Configurations
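As a further TypeScript sketch of consuming nested result properties (the identifiers are placeholders, and the awsvpcConfiguration and subnets property names are assumed from the supporting types below; networkConfiguration is only set for services that use awsvpc networking):

import * as awsnative from "@pulumi/aws-native";

const svc = awsnative.ecs.getServiceOutput({
    cluster: "my-cluster",
    serviceArn: "arn:aws:ecs:us-east-1:123456789012:service/my-cluster/my-service",
});

// networkConfiguration may be undefined, so guard before reading nested fields.
export const subnets = svc.networkConfiguration.apply(
    nc => nc?.awsvpcConfiguration?.subnets ?? [],
);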
Supporting Types
ServiceAvailabilityZoneRebalancing
ServiceAwsVpcConfiguration
- AssignPublicIp Pulumi.AwsNative.Ecs.ServiceAwsVpcConfigurationAssignPublicIp - Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
- SecurityGroups List<string> - The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration. All specified security groups must be from the same VPC.
- Subnets List<string> - The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration. All specified subnets must be from the same VPC.
- AssignPublicIp ServiceAwsVpcConfigurationAssignPublicIp - Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
- SecurityGroups []string - The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration. All specified security groups must be from the same VPC.
- Subnets []string - The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration. All specified subnets must be from the same VPC.
- assignPublicIp ServiceAwsVpcConfigurationAssignPublicIp - Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
- securityGroups List<String> - The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration. All specified security groups must be from the same VPC.
- subnets List<String> - The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration. All specified subnets must be from the same VPC.
- assignPublicIp ServiceAwsVpcConfigurationAssignPublicIp - Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
- securityGroups string[] - The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration. All specified security groups must be from the same VPC.
- subnets string[] - The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration. All specified subnets must be from the same VPC.
- assign_public_ip ServiceAwsVpcConfigurationAssignPublicIp - Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
- security_groups Sequence[str] - The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration. All specified security groups must be from the same VPC.
- subnets Sequence[str] - The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration. All specified subnets must be from the same VPC.
- assignPublicIp "DISABLED" | "ENABLED" - Whether the task's elastic network interface receives a public IP address. The default value is DISABLED.
- securityGroups List<String> - The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. There's a limit of 5 security groups that can be specified per awsvpcConfiguration. All specified security groups must be from the same VPC.
- subnets List<String> - The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified per awsvpcConfiguration. All specified subnets must be from the same VPC.
ServiceAwsVpcConfigurationAssignPublicIp
ServiceCapacityProviderStrategyItem
- Base int - The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
- CapacityProvider string - The short name of the capacity provider.
- Weight int - The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied. If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
- Base int - The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
- CapacityProvider string - The short name of the capacity provider.
- Weight int - The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied. If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
- base Integer - The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
- capacityProvider String - The short name of the capacity provider.
- weight Integer - The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied. If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
- base number - The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
- capacityProvider string - The short name of the capacity provider.
- weight number - The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied. If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
- base int - The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of 0 is used.
- capacity_provider str - The short name of the capacity provider.
- weight int - The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The weight value is taken into consideration after the base value, if defined, is satisfied. If no weight value is specified, the default value of 0 is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of 0 can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of 0, any RunTask or CreateService actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
- base Number
- The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined. If no value is specified, the default value of
0
is used. - capacity
Provider String - The short name of the capacity provider.
- weight Number
- The weight value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The
weight
value is taken into consideration after thebase
value, if defined, is satisfied. If noweight
value is specified, the default value of0
is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of0
can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of0
, anyRunTask
orCreateService
actions using the capacity provider strategy will fail. An example scenario for using weights is defining a strategy that contains two capacity providers and both have a weight of1
, then when thebase
is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of1
for capacityProviderA and a weight of4
for capacityProviderB, then for every one task that's run using capacityProviderA, four tasks would use capacityProviderB.
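As a rough illustration of how base and weight interact, the following TypeScript sketch asks that the first two tasks run on the FARGATE capacity provider and that remaining tasks be split 1:4 between FARGATE and FARGATE_SPOT. It assumes the corresponding aws-native.ecs.Service resource accepts a capacityProviderStrategy input shaped like the items documented above; the cluster and task definition names are placeholders.
import * as aws_native from "@pulumi/aws-native";

// Hypothetical service: the first two tasks satisfy the FARGATE base value;
// after that, tasks are placed 1:4 across FARGATE and FARGATE_SPOT by weight.
const service = new aws_native.ecs.Service("web", {
    cluster: "my-cluster",           // placeholder cluster name
    taskDefinition: "web-task:1",    // placeholder task definition
    desiredCount: 10,
    capacityProviderStrategy: [
        { capacityProvider: "FARGATE", base: 2, weight: 1 },
        { capacityProvider: "FARGATE_SPOT", weight: 4 },
    ],
});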
ServiceDeploymentAlarms
- AlarmNames List<string>
- One or more CloudWatch alarm names. Use a "," to separate the alarms.
- Enable bool
- Determines whether to use the CloudWatch alarm option in the service deployment process.
- Rollback bool
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
- AlarmNames []string
- One or more CloudWatch alarm names. Use a "," to separate the alarms.
- Enable bool
- Determines whether to use the CloudWatch alarm option in the service deployment process.
- Rollback bool
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
- alarmNames List<String>
- One or more CloudWatch alarm names. Use a "," to separate the alarms.
- enable Boolean
- Determines whether to use the CloudWatch alarm option in the service deployment process.
- rollback Boolean
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
- alarmNames string[]
- One or more CloudWatch alarm names. Use a "," to separate the alarms.
- enable boolean
- Determines whether to use the CloudWatch alarm option in the service deployment process.
- rollback boolean
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
- alarm_names Sequence[str]
- One or more CloudWatch alarm names. Use a "," to separate the alarms.
- enable bool
- Determines whether to use the CloudWatch alarm option in the service deployment process.
- rollback bool
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
- alarmNames List<String>
- One or more CloudWatch alarm names. Use a "," to separate the alarms.
- enable Boolean
- Determines whether to use the CloudWatch alarm option in the service deployment process.
- rollback Boolean
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is used, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
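For example, a deployment that should roll back automatically when either of two CloudWatch alarms fires could be configured roughly as follows (TypeScript sketch; the alarm, cluster, and task definition names are placeholders, and the nested shape mirrors the alarmNames, enable, and rollback properties above).
import * as aws_native from "@pulumi/aws-native";

// Roll the service back if either alarm goes into ALARM state during a deployment.
const service = new aws_native.ecs.Service("api", {
    cluster: "my-cluster",
    taskDefinition: "api-task:3",
    desiredCount: 4,
    deploymentConfiguration: {
        alarms: {
            alarmNames: ["api-5xx-rate", "api-p99-latency"],  // placeholder alarm names
            enable: true,
            rollback: true,
        },
    },
});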
ServiceDeploymentCircuitBreaker
- Enable bool
- Determines whether to use the deployment circuit breaker logic for the service.
- Rollback bool
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
- Enable bool
- Determines whether to use the deployment circuit breaker logic for the service.
- Rollback bool
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
- enable Boolean
- Determines whether to use the deployment circuit breaker logic for the service.
- rollback Boolean
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
- enable boolean
- Determines whether to use the deployment circuit breaker logic for the service.
- rollback boolean
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
- enable bool
- Determines whether to use the deployment circuit breaker logic for the service.
- rollback bool
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
- enable Boolean
- Determines whether to use the deployment circuit breaker logic for the service.
- rollback Boolean
- Determines whether to configure Amazon ECS to roll back the service if a service deployment fails. If rollback is on, when a service deployment fails, the service is rolled back to the last deployment that completed successfully.
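A minimal sketch of enabling the circuit breaker with automatic rollback is shown below (TypeScript; the names are placeholders, and the deploymentCircuitBreaker shape mirrors the enable and rollback properties above).
import * as aws_native from "@pulumi/aws-native";

// Fail the deployment if the service can't reach a steady state, then roll back
// to the last deployment that completed successfully.
const service = new aws_native.ecs.Service("worker", {
    cluster: "my-cluster",
    taskDefinition: "worker-task:7",
    desiredCount: 2,
    deploymentConfiguration: {
        deploymentCircuitBreaker: { enable: true, rollback: true },
    },
});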
ServiceDeploymentConfiguration
- Alarms Pulumi.AwsNative.Ecs.Inputs.ServiceDeploymentAlarms
- Information about the CloudWatch alarms.
- DeploymentCircuitBreaker Pulumi.AwsNative.Ecs.Inputs.ServiceDeploymentCircuitBreaker
- The deployment circuit breaker can only be used for services using the rolling update (ECS) deployment type. The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide.
- MaximumPercent int
- If a service is using the rolling update (ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom maximumPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
- MinimumHealthyPercent int
- If a service is using the rolling update (ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. For services that do not use a load balancer, the following should be noted:
- A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
- If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
- If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
- If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
- If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the AWS SDKs, and the APIs and 50% for the AWS Management Console. The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom minimumHealthyPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
- Alarms ServiceDeploymentAlarms
- Information about the CloudWatch alarms.
- DeploymentCircuitBreaker ServiceDeploymentCircuitBreaker
- The deployment circuit breaker can only be used for services using the rolling update (ECS) deployment type. The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide.
- MaximumPercent int
- If a service is using the rolling update (ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom maximumPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
- MinimumHealthyPercent int
- If a service is using the rolling update (ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. For services that do not use a load balancer, the following should be noted:
- A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
- If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
- If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
- If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
- If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the AWS SDKs, and the APIs and 50% for the AWS Management Console. The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom minimumHealthyPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
- alarms ServiceDeploymentAlarms
- Information about the CloudWatch alarms.
- deploymentCircuitBreaker ServiceDeploymentCircuitBreaker
- The deployment circuit breaker can only be used for services using the rolling update (ECS) deployment type. The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide.
- maximumPercent Integer
- If a service is using the rolling update (ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom maximumPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
- minimumHealthyPercent Integer
- If a service is using the rolling update (ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. For services that do not use a load balancer, the following should be noted:
- A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
- If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
- If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
- If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
- If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the AWS SDKs, and the APIs and 50% for the AWS Management Console. The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom minimumHealthyPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
- alarms ServiceDeploymentAlarms
- Information about the CloudWatch alarms.
- deploymentCircuitBreaker ServiceDeploymentCircuitBreaker
- The deployment circuit breaker can only be used for services using the rolling update (ECS) deployment type. The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide.
- maximumPercent number
- If a service is using the rolling update (ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom maximumPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
- minimumHealthyPercent number
- If a service is using the rolling update (ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. For services that do not use a load balancer, the following should be noted:
- A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
- If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
- If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
- If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
- If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the AWS SDKs, and the APIs and 50% for the AWS Management Console. The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom minimumHealthyPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
- alarms ServiceDeploymentAlarms
- Information about the CloudWatch alarms.
- deployment_circuit_breaker ServiceDeploymentCircuitBreaker
- The deployment circuit breaker can only be used for services using the rolling update (ECS) deployment type. The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide.
- maximum_percent int
- If a service is using the rolling update (ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom maximumPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
- minimum_healthy_percent int
- If a service is using the rolling update (ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. For services that do not use a load balancer, the following should be noted:
- A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
- If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
- If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
- If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
- If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the AWS SDKs, and the APIs and 50% for the AWS Management Console. The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom minimumHealthyPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
- alarms Property Map
- Information about the CloudWatch alarms.
- deploymentCircuitBreaker Property Map
- The deployment circuit breaker can only be used for services using the rolling update (ECS) deployment type. The deployment circuit breaker determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see Rolling update in the Amazon Elastic Container Service Developer Guide.
- maximumPercent Number
- If a service is using the rolling update (ECS) deployment type, the maximumPercent parameter represents an upper limit on the number of your service's tasks that are allowed in the RUNNING or PENDING state during a deployment, as a percentage of the desiredCount (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the REPLICA service scheduler and has a desiredCount of four tasks and a maximumPercent value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default maximumPercent value for a service using the REPLICA service scheduler is 200%. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types, and tasks in the service use the EC2 launch type, the maximum percent value is set to the default value. The maximum percent value is used to define the upper limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom maximumPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If the tasks in the service use the Fargate launch type, the maximum percent value is not used, although it is returned when describing your service.
- minimumHealthyPercent Number
- If a service is using the rolling update (ECS) deployment type, the minimumHealthyPercent represents a lower limit on the number of your service's tasks that must remain in the RUNNING state during a deployment, as a percentage of the desiredCount (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a desiredCount of four tasks and a minimumHealthyPercent of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks. For services that do not use a load balancer, the following should be noted:
- A service is considered healthy if all essential containers within the tasks in the service pass their health checks.
- If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a RUNNING state before the task is counted towards the minimum healthy percent total.
- If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.
For services that do use a load balancer, the following should be noted:
- If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
- If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.
The default value for a replica service for minimumHealthyPercent is 100%. The default minimumHealthyPercent value for a service using the DAEMON service schedule is 0% for the CLI, the AWS SDKs, and the APIs and 50% for the AWS Management Console. The minimum number of healthy tasks during a deployment is the desiredCount multiplied by the minimumHealthyPercent/100, rounded up to the nearest integer value. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the EC2 launch type, the minimum healthy percent value is set to the default value. The minimum healthy percent value is used to define the lower limit on the number of the tasks in the service that remain in the RUNNING state while the container instances are in the DRAINING state. You can't specify a custom minimumHealthyPercent value for a service that uses either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and has tasks that use the EC2 launch type. If a service is using either the blue/green (CODE_DEPLOY) or EXTERNAL deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.
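To make the percentages concrete: with a desiredCount of 4, a minimumHealthyPercent of 50, and a maximumPercent of 200, at least ceil(4 * 50 / 100) = 2 tasks must stay RUNNING and at most floor(4 * 200 / 100) = 8 tasks may be RUNNING or PENDING during a rolling update. A TypeScript sketch of that configuration follows, with placeholder names and assuming the aws-native.ecs.Service resource accepts a deploymentConfiguration input shaped like the properties above.
import * as aws_native from "@pulumi/aws-native";

// Rolling update bounds: the scheduler may stop 2 old tasks and run up to 8 tasks in total.
const service = new aws_native.ecs.Service("frontend", {
    cluster: "my-cluster",              // placeholder
    taskDefinition: "frontend-task:12", // placeholder
    desiredCount: 4,
    deploymentConfiguration: {
        minimumHealthyPercent: 50,
        maximumPercent: 200,
    },
});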
ServiceLoadBalancer
- ContainerName string
- The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
- ContainerPort int
- The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
- LoadBalancerName string
- The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
- TargetGroupArn string
- The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide. For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide. If your service's task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
- ContainerName string
- The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
- ContainerPort int
- The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
- LoadBalancerName string
- The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
- TargetGroupArn string
- The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide. For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide. If your service's task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
- containerName String
- The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
- containerPort Integer
- The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
- loadBalancerName String
- The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
- targetGroupArn String
- The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide. For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide. If your service's task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
- containerName string
- The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
- containerPort number
- The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
- loadBalancerName string
- The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
- targetGroupArn string
- The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide. For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide. If your service's task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
- container_name str
- The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
- container_port int
- The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
- load_balancer_name str
- The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
- target_group_arn str
- The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide. For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide. If your service's task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
- containerName String
- The name of the container (as it appears in a container definition) to associate with the load balancer. You need to specify the container name when configuring the target group for an Amazon ECS load balancer.
- containerPort Number
- The port on the container to associate with the load balancer. This port must correspond to a containerPort in the task definition the tasks in the service are using. For tasks that use the EC2 launch type, the container instance they're launched on must allow ingress traffic on the hostPort of the port mapping.
- loadBalancerName String
- The name of the load balancer to associate with the Amazon ECS service or task set. If you are using an Application Load Balancer or a Network Load Balancer the load balancer name parameter should be omitted.
- targetGroupArn String
- The full Amazon Resource Name (ARN) of the Elastic Load Balancing target group or groups associated with a service or task set.
A target group ARN is only specified when using an Application Load Balancer or Network Load Balancer.
For services using the ECS deployment controller, you can specify one or multiple target groups. For more information, see Registering multiple target groups with a service in the Amazon Elastic Container Service Developer Guide. For services using the CODE_DEPLOY deployment controller, you're required to define two target groups for the load balancer. For more information, see Blue/green deployment with CodeDeploy in the Amazon Elastic Container Service Developer Guide. If your service's task definition uses the awsvpc network mode, you must choose ip as the target type, not instance. Do this when creating your target groups because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. This network mode is required for the Fargate launch type.
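The sketch below (TypeScript) attaches a service's web container on port 8080 to an existing Application Load Balancer target group; because an ALB is used, loadBalancerName is omitted and only targetGroupArn is given. The ARN and other names are placeholders.
import * as aws_native from "@pulumi/aws-native";

// Register the "web" container with an existing ALB target group.
const service = new aws_native.ecs.Service("web", {
    cluster: "my-cluster",
    taskDefinition: "web-task:5",
    desiredCount: 3,
    loadBalancers: [{
        containerName: "web",
        containerPort: 8080,
        targetGroupArn: "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef",  // placeholder ARN
    }],
});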
ServiceNetworkConfiguration
- AwsvpcConfiguration Pulumi.AwsNative.Ecs.Inputs.ServiceAwsVpcConfiguration
- The VPC subnets and security groups that are associated with a task. All specified subnets and security groups must be from the same VPC.
- AwsvpcConfiguration ServiceAwsVpcConfiguration
- The VPC subnets and security groups that are associated with a task. All specified subnets and security groups must be from the same VPC.
- awsvpcConfiguration ServiceAwsVpcConfiguration
- The VPC subnets and security groups that are associated with a task. All specified subnets and security groups must be from the same VPC.
- awsvpcConfiguration ServiceAwsVpcConfiguration
- The VPC subnets and security groups that are associated with a task. All specified subnets and security groups must be from the same VPC.
- awsvpc_configuration ServiceAwsVpcConfiguration
- The VPC subnets and security groups that are associated with a task. All specified subnets and security groups must be from the same VPC.
- awsvpcConfiguration Property Map
- The VPC subnets and security groups that are associated with a task. All specified subnets and security groups must be from the same VPC.
ServicePlacementConstraint
- Type Pulumi.AwsNative.Ecs.ServicePlacementConstraintType
- The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
- Expression string
- A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
- Type ServicePlacementConstraintType
- The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
- Expression string
- A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
- type ServicePlacementConstraintType
- The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
- expression String
- A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
- type ServicePlacementConstraintType
- The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
- expression string
- A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
- type ServicePlacementConstraintType
- The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
- expression str
- A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
- type "distinctInstance" | "memberOf"
- The type of constraint. Use distinctInstance to ensure that each task in a particular group is running on a different container instance. Use memberOf to restrict the selection to a group of valid candidates.
- expression String
- A cluster query language expression to apply to the constraint. The expression can have a maximum length of 2000 characters. You can't specify an expression if the constraint type is distinctInstance. For more information, see Cluster query language in the Amazon Elastic Container Service Developer Guide.
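Putting constraints together with the placement strategy shape documented below, a TypeScript sketch for an EC2-backed service might look like the following; the cluster and task definition names and the memberOf expression are illustrative.
import * as aws_native from "@pulumi/aws-native";

// Each task on a distinct instance, restricted to t3 instance types, spread across Availability Zones.
const service = new aws_native.ecs.Service("batch", {
    cluster: "my-ec2-cluster",
    taskDefinition: "batch-task:1",
    desiredCount: 6,
    placementConstraints: [
        { type: "distinctInstance" },  // an expression can't be combined with distinctInstance
        { type: "memberOf", expression: "attribute:ecs.instance-type =~ t3.*" },
    ],
    placementStrategies: [
        { type: "spread", field: "attribute:ecs.availability-zone" },
    ],
});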
ServicePlacementConstraintType
ServicePlacementStrategy
- Type
Pulumi.
Aws Native. Ecs. Service Placement Strategy Type - The type of placement strategy. The
random
placement strategy randomly places tasks on available candidates. Thespread
placement strategy spreads placement across available candidates evenly based on thefield
parameter. Thebinpack
strategy places tasks on available candidates that have the least available amount of the resource that's specified with thefield
parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task. - Field string
- The field to apply the placement strategy against. For the
spread
placement strategy, valid values areinstanceId
(orhost
, which has the same effect), or any platform or custom attribute that's applied to a container instance, such asattribute:ecs.availability-zone
. For thebinpack
placement strategy, valid values arecpu
andmemory
. For therandom
placement strategy, this field is not used.
- Type
Service
Placement Strategy Type - The type of placement strategy. The
random
placement strategy randomly places tasks on available candidates. Thespread
placement strategy spreads placement across available candidates evenly based on thefield
parameter. Thebinpack
strategy places tasks on available candidates that have the least available amount of the resource that's specified with thefield
parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task. - Field string
- The field to apply the placement strategy against. For the
spread
placement strategy, valid values areinstanceId
(orhost
, which has the same effect), or any platform or custom attribute that's applied to a container instance, such asattribute:ecs.availability-zone
. For thebinpack
placement strategy, valid values arecpu
andmemory
. For therandom
placement strategy, this field is not used.
- type ServicePlacementStrategyType
- The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
- field String
- The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
- type ServicePlacementStrategyType
- The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
- field string
- The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
- type ServicePlacementStrategyType
- The type of placement strategy. The random placement strategy randomly places tasks on available candidates. The spread placement strategy spreads placement across available candidates evenly based on the field parameter. The binpack strategy places tasks on available candidates that have the least available amount of the resource that's specified with the field parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task.
- field str
- The field to apply the placement strategy against. For the spread placement strategy, valid values are instanceId (or host, which has the same effect), or any platform or custom attribute that's applied to a container instance, such as attribute:ecs.availability-zone. For the binpack placement strategy, valid values are cpu and memory. For the random placement strategy, this field is not used.
- type "binpack" | "random" | "spread"
- The type of placement strategy. The
random
placement strategy randomly places tasks on available candidates. Thespread
placement strategy spreads placement across available candidates evenly based on thefield
parameter. Thebinpack
strategy places tasks on available candidates that have the least available amount of the resource that's specified with thefield
parameter. For example, if you binpack on memory, a task is placed on the instance with the least amount of remaining memory but still enough to run the task. - field String
- The field to apply the placement strategy against. For the
spread
placement strategy, valid values areinstanceId
(orhost
, which has the same effect), or any platform or custom attribute that's applied to a container instance, such asattribute:ecs.availability-zone
. For thebinpack
placement strategy, valid values arecpu
andmemory
. For therandom
placement strategy, this field is not used.
ServicePlacementStrategyType
ServicePropagateTags
ServiceRegistry
- ContainerName string
- The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- ContainerPort int
- The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- Port int
- The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
- RegistryArn string
- The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is AWS Cloud Map. For more information, see CreateService.
- ContainerName string
- The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- ContainerPort int
- The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- Port int
- The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
- RegistryArn string
- The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is AWS Cloud Map. For more information, see CreateService.
- containerName String
- The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- containerPort Integer
- The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- port Integer
- The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
- registryArn String
- The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is AWS Cloud Map. For more information, see CreateService.
- containerName string
- The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- containerPort number
- The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- port number
- The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
- registryArn string
- The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is AWS Cloud Map. For more information, see CreateService.
- container_name str
- The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- container_port int
- The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- port int
- The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
- registry_arn str
- The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is AWS Cloud Map. For more information, see CreateService.
- containerName String
- The container name value to be used for your service discovery service. It's already specified in the task definition. If the task definition that your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition that your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- containerPort Number
- The port value to be used for your service discovery service. It's already specified in the task definition. If the task definition your service task specifies uses the bridge or host network mode, you must specify a containerName and containerPort combination from the task definition. If the task definition your service task specifies uses the awsvpc network mode and a type SRV DNS record is used, you must specify either a containerName and containerPort combination or a port value. However, you can't specify both.
- port Number
- The port value used if your service discovery service specified an SRV record. This field might be used if both the awsvpc network mode and SRV records are used.
- registryArn String
- The Amazon Resource Name (ARN) of the service registry. The currently supported service registry is AWS Cloud Map. For more information, see CreateService.
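For orientation, here is a minimal TypeScript sketch of the service registry shape above, wired to an AWS Cloud Map service for a task definition that uses the awsvpc network mode with an SRV record. It assumes the values go to an aws-native ecs.Service through its serviceRegistries argument; every ARN and name below is a placeholder.

import * as awsNative from "@pulumi/aws-native";

// Hypothetical Cloud Map registration; for SRV records choose either a
// containerName/containerPort combination or a port value -- never both.
const discoveredService = new awsNative.ecs.Service("discovered", {
    cluster: "my-cluster",              // placeholder cluster name
    taskDefinition: "my-task-def:1",    // placeholder task definition
    serviceRegistries: [{
        registryArn: "arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example", // placeholder Cloud Map service ARN
        containerName: "web",           // container name from the task definition
        containerPort: 8080,            // container port from the task definition
    }],
});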
ServiceVpcLatticeConfiguration
- PortName string
- RoleArn string
- TargetGroupArn string
- PortName string
- RoleArn string
- TargetGroupArn string
- portName String
- roleArn String
- targetGroupArn String
- portName string
- roleArn string
- targetGroupArn string
- port_name str
- role_arn str
- target_group_arn str
- portName String
- roleArn String
- targetGroupArn String
Tag
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0