Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.bigtableadmin/v2.Cluster
Creates a cluster within an instance. Note that exactly one of Cluster.serve_nodes and Cluster.cluster_config.cluster_autoscaling_config can be set. If serve_nodes is set to non-zero, then the cluster is manually scaled. If cluster_config.cluster_autoscaling_config is non-empty, then autoscaling is enabled.
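For illustration, the following minimal TypeScript sketch contrasts the two scaling modes. The project, instance, and zone identifiers are placeholders, not values from this reference.

import * as google_native from "@pulumi/google-native";

// Manually scaled cluster: serveNodes is set, so no autoscaling config is provided.
const manualCluster = new google_native.bigtableadmin.v2.Cluster("manual-cluster", {
    clusterId: "manual-cluster",
    instanceId: "my-instance",                                  // placeholder instance ID
    location: "projects/my-project/locations/us-central1-b",    // zonal location, documented form
    defaultStorageType: google_native.bigtableadmin.v2.ClusterDefaultStorageType.Ssd,
    serveNodes: 3,
});

// Autoscaled cluster: clusterConfig.clusterAutoscalingConfig is set, so serveNodes is omitted.
const autoscaledCluster = new google_native.bigtableadmin.v2.Cluster("autoscaled-cluster", {
    clusterId: "autoscaled-cluster",
    instanceId: "my-instance",
    location: "projects/my-project/locations/us-central1-c",
    defaultStorageType: google_native.bigtableadmin.v2.ClusterDefaultStorageType.Ssd,
    clusterConfig: {
        clusterAutoscalingConfig: {
            autoscalingLimits: { minServeNodes: 1, maxServeNodes: 5 },
            autoscalingTargets: { cpuUtilizationPercent: 60 },
        },
    },
});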
Create Cluster Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
TypeScript:

new Cluster(name: string, args: ClusterArgs, opts?: CustomResourceOptions);

Python:

@overload
def Cluster(resource_name: str,
            args: ClusterArgs,
            opts: Optional[ResourceOptions] = None)
@overload
def Cluster(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            cluster_id: Optional[str] = None,
            instance_id: Optional[str] = None,
            cluster_config: Optional[ClusterConfigArgs] = None,
            default_storage_type: Optional[ClusterDefaultStorageType] = None,
            encryption_config: Optional[EncryptionConfigArgs] = None,
            location: Optional[str] = None,
            name: Optional[str] = None,
            project: Optional[str] = None,
            serve_nodes: Optional[int] = None)

Go:

func NewCluster(ctx *Context, name string, args ClusterArgs, opts ...ResourceOption) (*Cluster, error)

C#:

public Cluster(string name, ClusterArgs args, CustomResourceOptions? opts = null)

Java:

public Cluster(String name, ClusterArgs args)
public Cluster(String name, ClusterArgs args, CustomResourceOptions options)

YAML:

type: google-native:bigtableadmin/v2:Cluster
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
TypeScript:
- name string: The unique name of the resource.
- args ClusterArgs: The arguments to resource properties.
- opts CustomResourceOptions: Bag of options to control resource's behavior.

Python:
- resource_name str: The unique name of the resource.
- args ClusterArgs: The arguments to resource properties.
- opts ResourceOptions: Bag of options to control resource's behavior.

Go:
- ctx Context: Context object for the current deployment.
- name string: The unique name of the resource.
- args ClusterArgs: The arguments to resource properties.
- opts ResourceOption: Bag of options to control resource's behavior.

C#:
- name string: The unique name of the resource.
- args ClusterArgs: The arguments to resource properties.
- opts CustomResourceOptions: Bag of options to control resource's behavior.

Java:
- name String: The unique name of the resource.
- args ClusterArgs: The arguments to resource properties.
- options CustomResourceOptions: Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
C#:

var exampleclusterResourceResourceFromBigtableadminv2 = new GoogleNative.BigtableAdmin.V2.Cluster("exampleclusterResourceResourceFromBigtableadminv2", new()
{
    ClusterId = "string",
    InstanceId = "string",
    ClusterConfig = new GoogleNative.BigtableAdmin.V2.Inputs.ClusterConfigArgs
    {
        ClusterAutoscalingConfig = new GoogleNative.BigtableAdmin.V2.Inputs.ClusterAutoscalingConfigArgs
        {
            AutoscalingLimits = new GoogleNative.BigtableAdmin.V2.Inputs.AutoscalingLimitsArgs
            {
                MaxServeNodes = 0,
                MinServeNodes = 0,
            },
            AutoscalingTargets = new GoogleNative.BigtableAdmin.V2.Inputs.AutoscalingTargetsArgs
            {
                CpuUtilizationPercent = 0,
                StorageUtilizationGibPerNode = 0,
            },
        },
    },
    DefaultStorageType = GoogleNative.BigtableAdmin.V2.ClusterDefaultStorageType.StorageTypeUnspecified,
    EncryptionConfig = new GoogleNative.BigtableAdmin.V2.Inputs.EncryptionConfigArgs
    {
        KmsKeyName = "string",
    },
    Location = "string",
    Name = "string",
    Project = "string",
    ServeNodes = 0,
});
Go:

example, err := bigtableadmin.NewCluster(ctx, "exampleclusterResourceResourceFromBigtableadminv2", &bigtableadmin.ClusterArgs{
    ClusterId:  pulumi.String("string"),
    InstanceId: pulumi.String("string"),
    ClusterConfig: &bigtableadmin.ClusterConfigArgs{
        ClusterAutoscalingConfig: &bigtableadmin.ClusterAutoscalingConfigArgs{
            AutoscalingLimits: &bigtableadmin.AutoscalingLimitsArgs{
                MaxServeNodes: pulumi.Int(0),
                MinServeNodes: pulumi.Int(0),
            },
            AutoscalingTargets: &bigtableadmin.AutoscalingTargetsArgs{
                CpuUtilizationPercent:        pulumi.Int(0),
                StorageUtilizationGibPerNode: pulumi.Int(0),
            },
        },
    },
    DefaultStorageType: bigtableadmin.ClusterDefaultStorageTypeStorageTypeUnspecified,
    EncryptionConfig: &bigtableadmin.EncryptionConfigArgs{
        KmsKeyName: pulumi.String("string"),
    },
    Location:   pulumi.String("string"),
    Name:       pulumi.String("string"),
    Project:    pulumi.String("string"),
    ServeNodes: pulumi.Int(0),
})
Java:

var exampleclusterResourceResourceFromBigtableadminv2 = new Cluster("exampleclusterResourceResourceFromBigtableadminv2", ClusterArgs.builder()
    .clusterId("string")
    .instanceId("string")
    .clusterConfig(ClusterConfigArgs.builder()
        .clusterAutoscalingConfig(ClusterAutoscalingConfigArgs.builder()
            .autoscalingLimits(AutoscalingLimitsArgs.builder()
                .maxServeNodes(0)
                .minServeNodes(0)
                .build())
            .autoscalingTargets(AutoscalingTargetsArgs.builder()
                .cpuUtilizationPercent(0)
                .storageUtilizationGibPerNode(0)
                .build())
            .build())
        .build())
    .defaultStorageType("STORAGE_TYPE_UNSPECIFIED")
    .encryptionConfig(EncryptionConfigArgs.builder()
        .kmsKeyName("string")
        .build())
    .location("string")
    .name("string")
    .project("string")
    .serveNodes(0)
    .build());
Python:

examplecluster_resource_resource_from_bigtableadminv2 = google_native.bigtableadmin.v2.Cluster("exampleclusterResourceResourceFromBigtableadminv2",
    cluster_id="string",
    instance_id="string",
    cluster_config={
        "cluster_autoscaling_config": {
            "autoscaling_limits": {
                "max_serve_nodes": 0,
                "min_serve_nodes": 0,
            },
            "autoscaling_targets": {
                "cpu_utilization_percent": 0,
                "storage_utilization_gib_per_node": 0,
            },
        },
    },
    default_storage_type=google_native.bigtableadmin.v2.ClusterDefaultStorageType.STORAGE_TYPE_UNSPECIFIED,
    encryption_config={
        "kms_key_name": "string",
    },
    location="string",
    name="string",
    project="string",
    serve_nodes=0)
TypeScript:

const exampleclusterResourceResourceFromBigtableadminv2 = new google_native.bigtableadmin.v2.Cluster("exampleclusterResourceResourceFromBigtableadminv2", {
    clusterId: "string",
    instanceId: "string",
    clusterConfig: {
        clusterAutoscalingConfig: {
            autoscalingLimits: {
                maxServeNodes: 0,
                minServeNodes: 0,
            },
            autoscalingTargets: {
                cpuUtilizationPercent: 0,
                storageUtilizationGibPerNode: 0,
            },
        },
    },
    defaultStorageType: google_native.bigtableadmin.v2.ClusterDefaultStorageType.StorageTypeUnspecified,
    encryptionConfig: {
        kmsKeyName: "string",
    },
    location: "string",
    name: "string",
    project: "string",
    serveNodes: 0,
});
YAML:

type: google-native:bigtableadmin/v2:Cluster
properties:
    clusterConfig:
        clusterAutoscalingConfig:
            autoscalingLimits:
                maxServeNodes: 0
                minServeNodes: 0
            autoscalingTargets:
                cpuUtilizationPercent: 0
                storageUtilizationGibPerNode: 0
    clusterId: string
    defaultStorageType: STORAGE_TYPE_UNSPECIFIED
    encryptionConfig:
        kmsKeyName: string
    instanceId: string
    location: string
    name: string
    project: string
    serveNodes: 0
Cluster Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The Cluster resource accepts the following input properties:
Property names below are shown in the camelCase form used by TypeScript and YAML (clusterId); C#, Go, and Java use the corresponding capitalized forms, and Python uses snake_case (cluster_id). A short example using these inputs follows the list.

- clusterId (string): Required. The ID to be used when referring to the new cluster within its instance, e.g., just mycluster rather than projects/myproject/instances/myinstance/clusters/mycluster.
- instanceId (string)
- clusterConfig (ClusterConfig): Configuration for this cluster.
- defaultStorageType (ClusterDefaultStorageType): Immutable. The type of storage used by this cluster to serve its parent instance's tables, unless explicitly overridden.
- encryptionConfig (EncryptionConfig): Immutable. The encryption configuration for CMEK-protected clusters.
- location (string): Immutable. The location where this cluster's nodes and storage reside. For best performance, clients should be located as close as possible to this cluster. Currently only zones are supported, so values should be of the form projects/{project}/locations/{zone}.
- name (string): The unique name of the cluster. Values are of the form projects/{project}/instances/{instance}/clusters/a-z*.
- project (string)
- serveNodes (int): The number of nodes in the cluster. If no value is set, Cloud Bigtable automatically allocates nodes based on your data footprint, optimized for 50% storage utilization.
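As a hedged illustration of these inputs, the sketch below creates a manually scaled, CMEK-protected cluster. Every identifier (project, instance, key ring, key) is a placeholder; the location and kmsKeyName values only demonstrate the documented resource-name forms.

import * as google_native from "@pulumi/google-native";

const cmekCluster = new google_native.bigtableadmin.v2.Cluster("cmek-cluster", {
    clusterId: "cmek-cluster",
    instanceId: "my-instance",
    project: "my-project",
    // Only zonal locations are supported: projects/{project}/locations/{zone}.
    location: "projects/my-project/locations/us-east1-b",
    defaultStorageType: google_native.bigtableadmin.v2.ClusterDefaultStorageType.Hdd,
    // The CMEK key must be regional and in the same region as the cluster (us-east1 here).
    encryptionConfig: {
        kmsKeyName: "projects/my-project/locations/us-east1/keyRings/my-keyring/cryptoKeys/my-key",
    },
    // Manual scaling: a fixed node count instead of an autoscaling config.
    serveNodes: 3,
});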
Outputs
All input properties are implicitly available as output properties. In addition, the Cluster resource produces further output properties, including the provider-assigned id.
Supporting Types
AutoscalingLimits, AutoscalingLimitsArgs
- maxServeNodes (int): Maximum number of nodes to scale up to.
- minServeNodes (int): Minimum number of nodes to scale down to.
AutoscalingLimitsResponse, AutoscalingLimitsResponseArgs
- maxServeNodes (int): Maximum number of nodes to scale up to.
- minServeNodes (int): Minimum number of nodes to scale down to.
AutoscalingTargets, AutoscalingTargetsArgs
- cpuUtilizationPercent (int): The CPU utilization that the Autoscaler should be trying to achieve. This number is on a scale from 0 (no utilization) to 100 (total utilization) and must be between 10 and 80; otherwise the request returns an INVALID_ARGUMENT error.
- storageUtilizationGibPerNode (int): The storage utilization that the Autoscaler should be trying to achieve. This number must be between 2560 (2.5 TiB) and 5120 (5 TiB) for an SSD cluster and between 8192 (8 TiB) and 16384 (16 TiB) for an HDD cluster; otherwise the request returns an INVALID_ARGUMENT error. If this value is set to 0, it is treated as if it were set to the default value: 2560 for SSD, 8192 for HDD.

A sketch with values inside these ranges follows.
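The sketch below shows one way to express an autoscaling configuration whose targets stay inside the ranges described above; the specific numbers are illustrative, not recommendations.

// Reusable clusterConfig value for an SSD cluster (values chosen to satisfy the documented ranges).
const ssdAutoscaling = {
    clusterAutoscalingConfig: {
        autoscalingLimits: {
            minServeNodes: 1,   // floor the autoscaler may scale down to
            maxServeNodes: 10,  // ceiling the autoscaler may scale up to
        },
        autoscalingTargets: {
            cpuUtilizationPercent: 60,           // allowed range: 10-80
            storageUtilizationGibPerNode: 2560,  // SSD: 2560-5120; HDD: 8192-16384; 0 means "use the default"
        },
    },
};

// Passed as the clusterConfig input of a Cluster resource, e.g. { ...otherInputs, clusterConfig: ssdAutoscaling }.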
AutoscalingTargetsResponse, AutoscalingTargetsResponseArgs
- cpuUtilizationPercent (int): The CPU utilization that the Autoscaler should be trying to achieve. This number is on a scale from 0 (no utilization) to 100 (total utilization) and must be between 10 and 80; otherwise the request returns an INVALID_ARGUMENT error.
- storageUtilizationGibPerNode (int): The storage utilization that the Autoscaler should be trying to achieve. This number must be between 2560 (2.5 TiB) and 5120 (5 TiB) for an SSD cluster and between 8192 (8 TiB) and 16384 (16 TiB) for an HDD cluster; otherwise the request returns an INVALID_ARGUMENT error. If this value is set to 0, it is treated as if it were set to the default value: 2560 for SSD, 8192 for HDD.
ClusterAutoscalingConfig, ClusterAutoscalingConfigArgs
- autoscalingLimits (AutoscalingLimits): Autoscaling limits for this cluster.
- autoscalingTargets (AutoscalingTargets): Autoscaling targets for this cluster.
ClusterAutoscalingConfigResponse, ClusterAutoscalingConfigResponseArgs
- autoscalingLimits (AutoscalingLimitsResponse): Autoscaling limits for this cluster.
- autoscalingTargets (AutoscalingTargetsResponse): Autoscaling targets for this cluster.
ClusterConfig, ClusterConfigArgs
- clusterAutoscalingConfig (ClusterAutoscalingConfig): Autoscaling configuration for this cluster.
ClusterConfigResponse, ClusterConfigResponseArgs
- clusterAutoscalingConfig (ClusterAutoscalingConfigResponse): Autoscaling configuration for this cluster.
ClusterDefaultStorageType, ClusterDefaultStorageTypeArgs
- StorageTypeUnspecified ("STORAGE_TYPE_UNSPECIFIED"): The user did not specify a storage type.
- Ssd ("SSD"): Flash (SSD) storage should be used.
- Hdd ("HDD"): Magnetic drive (HDD) storage should be used.

The generated constant names vary by SDK (for example ClusterDefaultStorageType.Ssd in TypeScript and C#, ClusterDefaultStorageTypeSsd in Go, ClusterDefaultStorageType.SSD in Python); in Java and YAML the raw string values shown in quotes are used.
EncryptionConfig, EncryptionConfigArgs
- kmsKeyName (string): Describes the Cloud KMS encryption key that will be used to protect the destination Bigtable cluster. The requirements for this key are: 1) the Cloud Bigtable service account associated with the project that contains this cluster must be granted the cloudkms.cryptoKeyEncrypterDecrypter role on the CMEK key; 2) only regional keys can be used, and the region of the CMEK key must match the region of the cluster. Values are of the form projects/{project}/locations/{location}/keyRings/{keyring}/cryptoKeys/{key}. A sketch that assembles a key name in this form follows.
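A small sketch, assuming hypothetical Pulumi config keys (project, region, keyRing, cmekKey), that assembles a kmsKeyName in the documented form; it only builds the string that would be set on encryptionConfig.

import * as pulumi from "@pulumi/pulumi";

const cfg = new pulumi.Config();
const project = cfg.require("project");   // hypothetical config keys
const region = cfg.require("region");     // must match the cluster's region
const keyRing = cfg.require("keyRing");
const key = cfg.require("cmekKey");

// Assemble the documented resource-name form for kmsKeyName.
const kmsKeyName = `projects/${project}/locations/${region}/keyRings/${keyRing}/cryptoKeys/${key}`;

// Used as: encryptionConfig: { kmsKeyName }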
EncryptionConfigResponse, EncryptionConfigResponseArgs
- kmsKeyName (string): Describes the Cloud KMS encryption key that will be used to protect the destination Bigtable cluster. The requirements for this key are: 1) the Cloud Bigtable service account associated with the project that contains this cluster must be granted the cloudkms.cryptoKeyEncrypterDecrypter role on the CMEK key; 2) only regional keys can be used, and the region of the CMEK key must match the region of the cluster. Values are of the form projects/{project}/locations/{location}/keyRings/{keyring}/cryptoKeys/{key}.
Package Details
- Repository: Google Cloud Native (pulumi/pulumi-google-native)
- License: Apache-2.0