
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.dataproc/v1.WorkflowTemplate


    Creates a new workflow template. Auto-naming is currently not supported for this resource.

    Create WorkflowTemplate Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    TypeScript:

    new WorkflowTemplate(name: string, args: WorkflowTemplateArgs, opts?: CustomResourceOptions);

    Python:

    @overload
    def WorkflowTemplate(resource_name: str,
                         args: WorkflowTemplateArgs,
                         opts: Optional[ResourceOptions] = None)

    @overload
    def WorkflowTemplate(resource_name: str,
                         opts: Optional[ResourceOptions] = None,
                         jobs: Optional[Sequence[OrderedJobArgs]] = None,
                         placement: Optional[WorkflowTemplatePlacementArgs] = None,
                         dag_timeout: Optional[str] = None,
                         encryption_config: Optional[GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs] = None,
                         id: Optional[str] = None,
                         labels: Optional[Mapping[str, str]] = None,
                         location: Optional[str] = None,
                         parameters: Optional[Sequence[TemplateParameterArgs]] = None,
                         project: Optional[str] = None,
                         version: Optional[int] = None)

    Go:

    func NewWorkflowTemplate(ctx *Context, name string, args WorkflowTemplateArgs, opts ...ResourceOption) (*WorkflowTemplate, error)

    C#:

    public WorkflowTemplate(string name, WorkflowTemplateArgs args, CustomResourceOptions? opts = null)

    Java:

    public WorkflowTemplate(String name, WorkflowTemplateArgs args)
    public WorkflowTemplate(String name, WorkflowTemplateArgs args, CustomResourceOptions options)

    YAML:

    type: google-native:dataproc/v1:WorkflowTemplate
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    TypeScript:

    name string
    The unique name of the resource.
    args WorkflowTemplateArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.

    Python:

    resource_name str
    The unique name of the resource.
    args WorkflowTemplateArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.

    Go:

    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args WorkflowTemplateArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.

    C#:

    name string
    The unique name of the resource.
    args WorkflowTemplateArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.

    Java:

    name String
    The unique name of the resource.
    args WorkflowTemplateArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Constructor example

    The following reference examples use placeholder values for all input properties.

    C#:

    var workflowTemplateResource = new GoogleNative.Dataproc.V1.WorkflowTemplate("workflowTemplateResource", new()
    {
        Jobs = new[]
        {
            new GoogleNative.Dataproc.V1.Inputs.OrderedJobArgs
            {
                StepId = "string",
                PrestoJob = new GoogleNative.Dataproc.V1.Inputs.PrestoJobArgs
                {
                    ClientTags = new[]
                    {
                        "string",
                    },
                    ContinueOnFailure = false,
                    LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
                    {
                        DriverLogLevels = 
                        {
                            { "string", "string" },
                        },
                    },
                    OutputFormat = "string",
                    Properties = 
                    {
                        { "string", "string" },
                    },
                    QueryFileUri = "string",
                    QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
                    {
                        Queries = new[]
                        {
                            "string",
                        },
                    },
                },
                HiveJob = new GoogleNative.Dataproc.V1.Inputs.HiveJobArgs
                {
                    ContinueOnFailure = false,
                    JarFileUris = new[]
                    {
                        "string",
                    },
                    Properties = 
                    {
                        { "string", "string" },
                    },
                    QueryFileUri = "string",
                    QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
                    {
                        Queries = new[]
                        {
                            "string",
                        },
                    },
                    ScriptVariables = 
                    {
                        { "string", "string" },
                    },
                },
                Labels = 
                {
                    { "string", "string" },
                },
                PigJob = new GoogleNative.Dataproc.V1.Inputs.PigJobArgs
                {
                    ContinueOnFailure = false,
                    JarFileUris = new[]
                    {
                        "string",
                    },
                    LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
                    {
                        DriverLogLevels = 
                        {
                            { "string", "string" },
                        },
                    },
                    Properties = 
                    {
                        { "string", "string" },
                    },
                    QueryFileUri = "string",
                    QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
                    {
                        Queries = new[]
                        {
                            "string",
                        },
                    },
                    ScriptVariables = 
                    {
                        { "string", "string" },
                    },
                },
                PrerequisiteStepIds = new[]
                {
                    "string",
                },
                FlinkJob = new GoogleNative.Dataproc.V1.Inputs.FlinkJobArgs
                {
                    Args = new[]
                    {
                        "string",
                    },
                    JarFileUris = new[]
                    {
                        "string",
                    },
                    LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
                    {
                        DriverLogLevels = 
                        {
                            { "string", "string" },
                        },
                    },
                    MainClass = "string",
                    MainJarFileUri = "string",
                    Properties = 
                    {
                        { "string", "string" },
                    },
                    SavepointUri = "string",
                },
                PysparkJob = new GoogleNative.Dataproc.V1.Inputs.PySparkJobArgs
                {
                    MainPythonFileUri = "string",
                    ArchiveUris = new[]
                    {
                        "string",
                    },
                    Args = new[]
                    {
                        "string",
                    },
                    FileUris = new[]
                    {
                        "string",
                    },
                    JarFileUris = new[]
                    {
                        "string",
                    },
                    LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
                    {
                        DriverLogLevels = 
                        {
                            { "string", "string" },
                        },
                    },
                    Properties = 
                    {
                        { "string", "string" },
                    },
                    PythonFileUris = new[]
                    {
                        "string",
                    },
                },
                Scheduling = new GoogleNative.Dataproc.V1.Inputs.JobSchedulingArgs
                {
                    MaxFailuresPerHour = 0,
                    MaxFailuresTotal = 0,
                },
                SparkJob = new GoogleNative.Dataproc.V1.Inputs.SparkJobArgs
                {
                    ArchiveUris = new[]
                    {
                        "string",
                    },
                    Args = new[]
                    {
                        "string",
                    },
                    FileUris = new[]
                    {
                        "string",
                    },
                    JarFileUris = new[]
                    {
                        "string",
                    },
                    LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
                    {
                        DriverLogLevels = 
                        {
                            { "string", "string" },
                        },
                    },
                    MainClass = "string",
                    MainJarFileUri = "string",
                    Properties = 
                    {
                        { "string", "string" },
                    },
                },
                SparkRJob = new GoogleNative.Dataproc.V1.Inputs.SparkRJobArgs
                {
                    MainRFileUri = "string",
                    ArchiveUris = new[]
                    {
                        "string",
                    },
                    Args = new[]
                    {
                        "string",
                    },
                    FileUris = new[]
                    {
                        "string",
                    },
                    LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
                    {
                        DriverLogLevels = 
                        {
                            { "string", "string" },
                        },
                    },
                    Properties = 
                    {
                        { "string", "string" },
                    },
                },
                SparkSqlJob = new GoogleNative.Dataproc.V1.Inputs.SparkSqlJobArgs
                {
                    JarFileUris = new[]
                    {
                        "string",
                    },
                    LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
                    {
                        DriverLogLevels = 
                        {
                            { "string", "string" },
                        },
                    },
                    Properties = 
                    {
                        { "string", "string" },
                    },
                    QueryFileUri = "string",
                    QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
                    {
                        Queries = new[]
                        {
                            "string",
                        },
                    },
                    ScriptVariables = 
                    {
                        { "string", "string" },
                    },
                },
                HadoopJob = new GoogleNative.Dataproc.V1.Inputs.HadoopJobArgs
                {
                    ArchiveUris = new[]
                    {
                        "string",
                    },
                    Args = new[]
                    {
                        "string",
                    },
                    FileUris = new[]
                    {
                        "string",
                    },
                    JarFileUris = new[]
                    {
                        "string",
                    },
                    LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
                    {
                        DriverLogLevels = 
                        {
                            { "string", "string" },
                        },
                    },
                    MainClass = "string",
                    MainJarFileUri = "string",
                    Properties = 
                    {
                        { "string", "string" },
                    },
                },
                TrinoJob = new GoogleNative.Dataproc.V1.Inputs.TrinoJobArgs
                {
                    ClientTags = new[]
                    {
                        "string",
                    },
                    ContinueOnFailure = false,
                    LoggingConfig = new GoogleNative.Dataproc.V1.Inputs.LoggingConfigArgs
                    {
                        DriverLogLevels = 
                        {
                            { "string", "string" },
                        },
                    },
                    OutputFormat = "string",
                    Properties = 
                    {
                        { "string", "string" },
                    },
                    QueryFileUri = "string",
                    QueryList = new GoogleNative.Dataproc.V1.Inputs.QueryListArgs
                    {
                        Queries = new[]
                        {
                            "string",
                        },
                    },
                },
            },
        },
        Placement = new GoogleNative.Dataproc.V1.Inputs.WorkflowTemplatePlacementArgs
        {
            ClusterSelector = new GoogleNative.Dataproc.V1.Inputs.ClusterSelectorArgs
            {
                ClusterLabels = 
                {
                    { "string", "string" },
                },
                Zone = "string",
            },
            ManagedCluster = new GoogleNative.Dataproc.V1.Inputs.ManagedClusterArgs
            {
                ClusterName = "string",
                Config = new GoogleNative.Dataproc.V1.Inputs.ClusterConfigArgs
                {
                    AutoscalingConfig = new GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigArgs
                    {
                        PolicyUri = "string",
                    },
                    AuxiliaryNodeGroups = new[]
                    {
                        new GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupArgs
                        {
                            NodeGroup = new GoogleNative.Dataproc.V1.Inputs.NodeGroupArgs
                            {
                                Roles = new[]
                                {
                                    GoogleNative.Dataproc.V1.NodeGroupRolesItem.RoleUnspecified,
                                },
                                Labels = 
                                {
                                    { "string", "string" },
                                },
                                Name = "string",
                                NodeGroupConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
                                {
                                    Accelerators = new[]
                                    {
                                        new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                                        {
                                            AcceleratorCount = 0,
                                            AcceleratorTypeUri = "string",
                                        },
                                    },
                                    DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
                                    {
                                        BootDiskSizeGb = 0,
                                        BootDiskType = "string",
                                        LocalSsdInterface = "string",
                                        NumLocalSsds = 0,
                                    },
                                    ImageUri = "string",
                                    InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
                                    {
                                        InstanceSelectionList = new[]
                                        {
                                            new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                                            {
                                                MachineTypes = new[]
                                                {
                                                    "string",
                                                },
                                                Rank = 0,
                                            },
                                        },
                                    },
                                    MachineTypeUri = "string",
                                    MinCpuPlatform = "string",
                                    MinNumInstances = 0,
                                    NumInstances = 0,
                                    Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                                    StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
                                    {
                                        RequiredRegistrationFraction = 0,
                                    },
                                },
                            },
                            NodeGroupId = "string",
                        },
                    },
                    ConfigBucket = "string",
                    DataprocMetricConfig = new GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigArgs
                    {
                        Metrics = new[]
                        {
                            new GoogleNative.Dataproc.V1.Inputs.MetricArgs
                            {
                                MetricSource = GoogleNative.Dataproc.V1.MetricMetricSource.MetricSourceUnspecified,
                                MetricOverrides = new[]
                                {
                                    "string",
                                },
                            },
                        },
                    },
                    EncryptionConfig = new GoogleNative.Dataproc.V1.Inputs.EncryptionConfigArgs
                    {
                        GcePdKmsKeyName = "string",
                        KmsKey = "string",
                    },
                    EndpointConfig = new GoogleNative.Dataproc.V1.Inputs.EndpointConfigArgs
                    {
                        EnableHttpPortAccess = false,
                    },
                    GceClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GceClusterConfigArgs
                    {
                        ConfidentialInstanceConfig = new GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigArgs
                        {
                            EnableConfidentialCompute = false,
                        },
                        InternalIpOnly = false,
                        Metadata = 
                        {
                            { "string", "string" },
                        },
                        NetworkUri = "string",
                        NodeGroupAffinity = new GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinityArgs
                        {
                            NodeGroupUri = "string",
                        },
                        PrivateIpv6GoogleAccess = GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
                        ReservationAffinity = new GoogleNative.Dataproc.V1.Inputs.ReservationAffinityArgs
                        {
                            ConsumeReservationType = GoogleNative.Dataproc.V1.ReservationAffinityConsumeReservationType.TypeUnspecified,
                            Key = "string",
                            Values = new[]
                            {
                                "string",
                            },
                        },
                        ServiceAccount = "string",
                        ServiceAccountScopes = new[]
                        {
                            "string",
                        },
                        ShieldedInstanceConfig = new GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfigArgs
                        {
                            EnableIntegrityMonitoring = false,
                            EnableSecureBoot = false,
                            EnableVtpm = false,
                        },
                        SubnetworkUri = "string",
                        Tags = new[]
                        {
                            "string",
                        },
                        ZoneUri = "string",
                    },
                    GkeClusterConfig = new GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigArgs
                    {
                        GkeClusterTarget = "string",
                        NodePoolTarget = new[]
                        {
                            new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetArgs
                            {
                                NodePool = "string",
                                Roles = new[]
                                {
                                    GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem.RoleUnspecified,
                                },
                                NodePoolConfig = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigArgs
                                {
                                    Autoscaling = new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigArgs
                                    {
                                        MaxNodeCount = 0,
                                        MinNodeCount = 0,
                                    },
                                    Config = new GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigArgs
                                    {
                                        Accelerators = new[]
                                        {
                                            new GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigArgs
                                            {
                                                AcceleratorCount = "string",
                                                AcceleratorType = "string",
                                                GpuPartitionSize = "string",
                                            },
                                        },
                                        BootDiskKmsKey = "string",
                                        LocalSsdCount = 0,
                                        MachineType = "string",
                                        MinCpuPlatform = "string",
                                        Preemptible = false,
                                        Spot = false,
                                    },
                                    Locations = new[]
                                    {
                                        "string",
                                    },
                                },
                            },
                        },
                    },
                    InitializationActions = new[]
                    {
                        new GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionArgs
                        {
                            ExecutableFile = "string",
                            ExecutionTimeout = "string",
                        },
                    },
                    LifecycleConfig = new GoogleNative.Dataproc.V1.Inputs.LifecycleConfigArgs
                    {
                        AutoDeleteTime = "string",
                        AutoDeleteTtl = "string",
                        IdleDeleteTtl = "string",
                    },
                    MasterConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
                    {
                        Accelerators = new[]
                        {
                            new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                            {
                                AcceleratorCount = 0,
                                AcceleratorTypeUri = "string",
                            },
                        },
                        DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
                        {
                            BootDiskSizeGb = 0,
                            BootDiskType = "string",
                            LocalSsdInterface = "string",
                            NumLocalSsds = 0,
                        },
                        ImageUri = "string",
                        InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
                        {
                            InstanceSelectionList = new[]
                            {
                                new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                                {
                                    MachineTypes = new[]
                                    {
                                        "string",
                                    },
                                    Rank = 0,
                                },
                            },
                        },
                        MachineTypeUri = "string",
                        MinCpuPlatform = "string",
                        MinNumInstances = 0,
                        NumInstances = 0,
                        Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                        StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
                        {
                            RequiredRegistrationFraction = 0,
                        },
                    },
                    MetastoreConfig = new GoogleNative.Dataproc.V1.Inputs.MetastoreConfigArgs
                    {
                        DataprocMetastoreService = "string",
                    },
                    SecondaryWorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
                    {
                        Accelerators = new[]
                        {
                            new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                            {
                                AcceleratorCount = 0,
                                AcceleratorTypeUri = "string",
                            },
                        },
                        DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
                        {
                            BootDiskSizeGb = 0,
                            BootDiskType = "string",
                            LocalSsdInterface = "string",
                            NumLocalSsds = 0,
                        },
                        ImageUri = "string",
                        InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
                        {
                            InstanceSelectionList = new[]
                            {
                                new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                                {
                                    MachineTypes = new[]
                                    {
                                        "string",
                                    },
                                    Rank = 0,
                                },
                            },
                        },
                        MachineTypeUri = "string",
                        MinCpuPlatform = "string",
                        MinNumInstances = 0,
                        NumInstances = 0,
                        Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                        StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
                        {
                            RequiredRegistrationFraction = 0,
                        },
                    },
                    SecurityConfig = new GoogleNative.Dataproc.V1.Inputs.SecurityConfigArgs
                    {
                        IdentityConfig = new GoogleNative.Dataproc.V1.Inputs.IdentityConfigArgs
                        {
                            UserServiceAccountMapping = 
                            {
                                { "string", "string" },
                            },
                        },
                        KerberosConfig = new GoogleNative.Dataproc.V1.Inputs.KerberosConfigArgs
                        {
                            CrossRealmTrustAdminServer = "string",
                            CrossRealmTrustKdc = "string",
                            CrossRealmTrustRealm = "string",
                            CrossRealmTrustSharedPasswordUri = "string",
                            EnableKerberos = false,
                            KdcDbKeyUri = "string",
                            KeyPasswordUri = "string",
                            KeystorePasswordUri = "string",
                            KeystoreUri = "string",
                            KmsKeyUri = "string",
                            Realm = "string",
                            RootPrincipalPasswordUri = "string",
                            TgtLifetimeHours = 0,
                            TruststorePasswordUri = "string",
                            TruststoreUri = "string",
                        },
                    },
                    SoftwareConfig = new GoogleNative.Dataproc.V1.Inputs.SoftwareConfigArgs
                    {
                        ImageVersion = "string",
                        OptionalComponents = new[]
                        {
                            GoogleNative.Dataproc.V1.SoftwareConfigOptionalComponentsItem.ComponentUnspecified,
                        },
                        Properties = 
                        {
                            { "string", "string" },
                        },
                    },
                    TempBucket = "string",
                    WorkerConfig = new GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigArgs
                    {
                        Accelerators = new[]
                        {
                            new GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigArgs
                            {
                                AcceleratorCount = 0,
                                AcceleratorTypeUri = "string",
                            },
                        },
                        DiskConfig = new GoogleNative.Dataproc.V1.Inputs.DiskConfigArgs
                        {
                            BootDiskSizeGb = 0,
                            BootDiskType = "string",
                            LocalSsdInterface = "string",
                            NumLocalSsds = 0,
                        },
                        ImageUri = "string",
                        InstanceFlexibilityPolicy = new GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyArgs
                        {
                            InstanceSelectionList = new[]
                            {
                                new GoogleNative.Dataproc.V1.Inputs.InstanceSelectionArgs
                                {
                                    MachineTypes = new[]
                                    {
                                        "string",
                                    },
                                    Rank = 0,
                                },
                            },
                        },
                        MachineTypeUri = "string",
                        MinCpuPlatform = "string",
                        MinNumInstances = 0,
                        NumInstances = 0,
                        Preemptibility = GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                        StartupConfig = new GoogleNative.Dataproc.V1.Inputs.StartupConfigArgs
                        {
                            RequiredRegistrationFraction = 0,
                        },
                    },
                },
                Labels = 
                {
                    { "string", "string" },
                },
            },
        },
        DagTimeout = "string",
        EncryptionConfig = new GoogleNative.Dataproc.V1.Inputs.GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs
        {
            KmsKey = "string",
        },
        Id = "string",
        Labels = 
        {
            { "string", "string" },
        },
        Location = "string",
        Parameters = new[]
        {
            new GoogleNative.Dataproc.V1.Inputs.TemplateParameterArgs
            {
                Fields = new[]
                {
                    "string",
                },
                Name = "string",
                Description = "string",
                Validation = new GoogleNative.Dataproc.V1.Inputs.ParameterValidationArgs
                {
                    Regex = new GoogleNative.Dataproc.V1.Inputs.RegexValidationArgs
                    {
                        Regexes = new[]
                        {
                            "string",
                        },
                    },
                    Values = new GoogleNative.Dataproc.V1.Inputs.ValueValidationArgs
                    {
                        Values = new[]
                        {
                            "string",
                        },
                    },
                },
            },
        },
        Project = "string",
        Version = 0,
    });
    
    Go:

    example, err := dataproc.NewWorkflowTemplate(ctx, "workflowTemplateResource", &dataproc.WorkflowTemplateArgs{
    	Jobs: dataproc.OrderedJobArray{
    		&dataproc.OrderedJobArgs{
    			StepId: pulumi.String("string"),
    			PrestoJob: &dataproc.PrestoJobArgs{
    				ClientTags: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				ContinueOnFailure: pulumi.Bool(false),
    				LoggingConfig: &dataproc.LoggingConfigArgs{
    					DriverLogLevels: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    				OutputFormat: pulumi.String("string"),
    				Properties: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    				QueryFileUri: pulumi.String("string"),
    				QueryList: &dataproc.QueryListArgs{
    					Queries: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    				},
    			},
    			HiveJob: &dataproc.HiveJobArgs{
    				ContinueOnFailure: pulumi.Bool(false),
    				JarFileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				Properties: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    				QueryFileUri: pulumi.String("string"),
    				QueryList: &dataproc.QueryListArgs{
    					Queries: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    				},
    				ScriptVariables: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    			},
    			Labels: pulumi.StringMap{
    				"string": pulumi.String("string"),
    			},
    			PigJob: &dataproc.PigJobArgs{
    				ContinueOnFailure: pulumi.Bool(false),
    				JarFileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				LoggingConfig: &dataproc.LoggingConfigArgs{
    					DriverLogLevels: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    				Properties: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    				QueryFileUri: pulumi.String("string"),
    				QueryList: &dataproc.QueryListArgs{
    					Queries: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    				},
    				ScriptVariables: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    			},
    			PrerequisiteStepIds: pulumi.StringArray{
    				pulumi.String("string"),
    			},
    			FlinkJob: &dataproc.FlinkJobArgs{
    				Args: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				JarFileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				LoggingConfig: &dataproc.LoggingConfigArgs{
    					DriverLogLevels: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    				MainClass:      pulumi.String("string"),
    				MainJarFileUri: pulumi.String("string"),
    				Properties: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    				SavepointUri: pulumi.String("string"),
    			},
    			PysparkJob: &dataproc.PySparkJobArgs{
    				MainPythonFileUri: pulumi.String("string"),
    				ArchiveUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				Args: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				FileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				JarFileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				LoggingConfig: &dataproc.LoggingConfigArgs{
    					DriverLogLevels: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    				Properties: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    				PythonFileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    			Scheduling: &dataproc.JobSchedulingArgs{
    				MaxFailuresPerHour: pulumi.Int(0),
    				MaxFailuresTotal:   pulumi.Int(0),
    			},
    			SparkJob: &dataproc.SparkJobArgs{
    				ArchiveUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				Args: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				FileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				JarFileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				LoggingConfig: &dataproc.LoggingConfigArgs{
    					DriverLogLevels: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    				MainClass:      pulumi.String("string"),
    				MainJarFileUri: pulumi.String("string"),
    				Properties: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    			},
    			SparkRJob: &dataproc.SparkRJobArgs{
    				MainRFileUri: pulumi.String("string"),
    				ArchiveUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				Args: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				FileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				LoggingConfig: &dataproc.LoggingConfigArgs{
    					DriverLogLevels: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    				Properties: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    			},
    			SparkSqlJob: &dataproc.SparkSqlJobArgs{
    				JarFileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				LoggingConfig: &dataproc.LoggingConfigArgs{
    					DriverLogLevels: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    				Properties: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    				QueryFileUri: pulumi.String("string"),
    				QueryList: &dataproc.QueryListArgs{
    					Queries: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    				},
    				ScriptVariables: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    			},
    			HadoopJob: &dataproc.HadoopJobArgs{
    				ArchiveUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				Args: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				FileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				JarFileUris: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				LoggingConfig: &dataproc.LoggingConfigArgs{
    					DriverLogLevels: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    				MainClass:      pulumi.String("string"),
    				MainJarFileUri: pulumi.String("string"),
    				Properties: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    			},
    			TrinoJob: &dataproc.TrinoJobArgs{
    				ClientTags: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				ContinueOnFailure: pulumi.Bool(false),
    				LoggingConfig: &dataproc.LoggingConfigArgs{
    					DriverLogLevels: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    				OutputFormat: pulumi.String("string"),
    				Properties: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    				QueryFileUri: pulumi.String("string"),
    				QueryList: &dataproc.QueryListArgs{
    					Queries: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    				},
    			},
    		},
    	},
    	Placement: &dataproc.WorkflowTemplatePlacementArgs{
    		ClusterSelector: &dataproc.ClusterSelectorArgs{
    			ClusterLabels: pulumi.StringMap{
    				"string": pulumi.String("string"),
    			},
    			Zone: pulumi.String("string"),
    		},
    		ManagedCluster: &dataproc.ManagedClusterArgs{
    			ClusterName: pulumi.String("string"),
    			Config: &dataproc.ClusterConfigArgs{
    				AutoscalingConfig: &dataproc.AutoscalingConfigArgs{
    					PolicyUri: pulumi.String("string"),
    				},
    				AuxiliaryNodeGroups: dataproc.AuxiliaryNodeGroupArray{
    					&dataproc.AuxiliaryNodeGroupArgs{
    						NodeGroup: &dataproc.NodeGroupTypeArgs{
    							Roles: dataproc.NodeGroupRolesItemArray{
    								dataproc.NodeGroupRolesItemRoleUnspecified,
    							},
    							Labels: pulumi.StringMap{
    								"string": pulumi.String("string"),
    							},
    							Name: pulumi.String("string"),
    							NodeGroupConfig: &dataproc.InstanceGroupConfigArgs{
    								Accelerators: dataproc.AcceleratorConfigArray{
    									&dataproc.AcceleratorConfigArgs{
    										AcceleratorCount:   pulumi.Int(0),
    										AcceleratorTypeUri: pulumi.String("string"),
    									},
    								},
    								DiskConfig: &dataproc.DiskConfigArgs{
    									BootDiskSizeGb:    pulumi.Int(0),
    									BootDiskType:      pulumi.String("string"),
    									LocalSsdInterface: pulumi.String("string"),
    									NumLocalSsds:      pulumi.Int(0),
    								},
    								ImageUri: pulumi.String("string"),
    								InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
    									InstanceSelectionList: dataproc.InstanceSelectionArray{
    										&dataproc.InstanceSelectionArgs{
    											MachineTypes: pulumi.StringArray{
    												pulumi.String("string"),
    											},
    											Rank: pulumi.Int(0),
    										},
    									},
    								},
    								MachineTypeUri:  pulumi.String("string"),
    								MinCpuPlatform:  pulumi.String("string"),
    								MinNumInstances: pulumi.Int(0),
    								NumInstances:    pulumi.Int(0),
    								Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
    								StartupConfig: &dataproc.StartupConfigArgs{
    									RequiredRegistrationFraction: pulumi.Float64(0),
    								},
    							},
    						},
    						NodeGroupId: pulumi.String("string"),
    					},
    				},
    				ConfigBucket: pulumi.String("string"),
    				DataprocMetricConfig: &dataproc.DataprocMetricConfigArgs{
    					Metrics: dataproc.MetricArray{
    						&dataproc.MetricArgs{
    							MetricSource: dataproc.MetricMetricSourceMetricSourceUnspecified,
    							MetricOverrides: pulumi.StringArray{
    								pulumi.String("string"),
    							},
    						},
    					},
    				},
    				EncryptionConfig: &dataproc.EncryptionConfigArgs{
    					GcePdKmsKeyName: pulumi.String("string"),
    					KmsKey:          pulumi.String("string"),
    				},
    				EndpointConfig: &dataproc.EndpointConfigArgs{
    					EnableHttpPortAccess: pulumi.Bool(false),
    				},
    				GceClusterConfig: &dataproc.GceClusterConfigArgs{
    					ConfidentialInstanceConfig: &dataproc.ConfidentialInstanceConfigArgs{
    						EnableConfidentialCompute: pulumi.Bool(false),
    					},
    					InternalIpOnly: pulumi.Bool(false),
    					Metadata: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    					NetworkUri: pulumi.String("string"),
    					NodeGroupAffinity: &dataproc.NodeGroupAffinityArgs{
    						NodeGroupUri: pulumi.String("string"),
    					},
    					PrivateIpv6GoogleAccess: dataproc.GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified,
    					ReservationAffinity: &dataproc.ReservationAffinityArgs{
    						ConsumeReservationType: dataproc.ReservationAffinityConsumeReservationTypeTypeUnspecified,
    						Key:                    pulumi.String("string"),
    						Values: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    					},
    					ServiceAccount: pulumi.String("string"),
    					ServiceAccountScopes: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    					ShieldedInstanceConfig: &dataproc.ShieldedInstanceConfigArgs{
    						EnableIntegrityMonitoring: pulumi.Bool(false),
    						EnableSecureBoot:          pulumi.Bool(false),
    						EnableVtpm:                pulumi.Bool(false),
    					},
    					SubnetworkUri: pulumi.String("string"),
    					Tags: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    					ZoneUri: pulumi.String("string"),
    				},
    				GkeClusterConfig: &dataproc.GkeClusterConfigArgs{
    					GkeClusterTarget: pulumi.String("string"),
    					NodePoolTarget: dataproc.GkeNodePoolTargetArray{
    						&dataproc.GkeNodePoolTargetArgs{
    							NodePool: pulumi.String("string"),
    							Roles: dataproc.GkeNodePoolTargetRolesItemArray{
    								dataproc.GkeNodePoolTargetRolesItemRoleUnspecified,
    							},
    							NodePoolConfig: &dataproc.GkeNodePoolConfigArgs{
    								Autoscaling: &dataproc.GkeNodePoolAutoscalingConfigArgs{
    									MaxNodeCount: pulumi.Int(0),
    									MinNodeCount: pulumi.Int(0),
    								},
    								Config: &dataproc.GkeNodeConfigArgs{
    									Accelerators: dataproc.GkeNodePoolAcceleratorConfigArray{
    										&dataproc.GkeNodePoolAcceleratorConfigArgs{
    											AcceleratorCount: pulumi.String("string"),
    											AcceleratorType:  pulumi.String("string"),
    											GpuPartitionSize: pulumi.String("string"),
    										},
    									},
    									BootDiskKmsKey: pulumi.String("string"),
    									LocalSsdCount:  pulumi.Int(0),
    									MachineType:    pulumi.String("string"),
    									MinCpuPlatform: pulumi.String("string"),
    									Preemptible:    pulumi.Bool(false),
    									Spot:           pulumi.Bool(false),
    								},
    								Locations: pulumi.StringArray{
    									pulumi.String("string"),
    								},
    							},
    						},
    					},
    				},
    				InitializationActions: dataproc.NodeInitializationActionArray{
    					&dataproc.NodeInitializationActionArgs{
    						ExecutableFile:   pulumi.String("string"),
    						ExecutionTimeout: pulumi.String("string"),
    					},
    				},
    				LifecycleConfig: &dataproc.LifecycleConfigArgs{
    					AutoDeleteTime: pulumi.String("string"),
    					AutoDeleteTtl:  pulumi.String("string"),
    					IdleDeleteTtl:  pulumi.String("string"),
    				},
    				MasterConfig: &dataproc.InstanceGroupConfigArgs{
    					Accelerators: dataproc.AcceleratorConfigArray{
    						&dataproc.AcceleratorConfigArgs{
    							AcceleratorCount:   pulumi.Int(0),
    							AcceleratorTypeUri: pulumi.String("string"),
    						},
    					},
    					DiskConfig: &dataproc.DiskConfigArgs{
    						BootDiskSizeGb:    pulumi.Int(0),
    						BootDiskType:      pulumi.String("string"),
    						LocalSsdInterface: pulumi.String("string"),
    						NumLocalSsds:      pulumi.Int(0),
    					},
    					ImageUri: pulumi.String("string"),
    					InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
    						InstanceSelectionList: dataproc.InstanceSelectionArray{
    							&dataproc.InstanceSelectionArgs{
    								MachineTypes: pulumi.StringArray{
    									pulumi.String("string"),
    								},
    								Rank: pulumi.Int(0),
    							},
    						},
    					},
    					MachineTypeUri:  pulumi.String("string"),
    					MinCpuPlatform:  pulumi.String("string"),
    					MinNumInstances: pulumi.Int(0),
    					NumInstances:    pulumi.Int(0),
    					Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
    					StartupConfig: &dataproc.StartupConfigArgs{
    						RequiredRegistrationFraction: pulumi.Float64(0),
    					},
    				},
    				MetastoreConfig: &dataproc.MetastoreConfigArgs{
    					DataprocMetastoreService: pulumi.String("string"),
    				},
    				SecondaryWorkerConfig: &dataproc.InstanceGroupConfigArgs{
    					Accelerators: dataproc.AcceleratorConfigArray{
    						&dataproc.AcceleratorConfigArgs{
    							AcceleratorCount:   pulumi.Int(0),
    							AcceleratorTypeUri: pulumi.String("string"),
    						},
    					},
    					DiskConfig: &dataproc.DiskConfigArgs{
    						BootDiskSizeGb:    pulumi.Int(0),
    						BootDiskType:      pulumi.String("string"),
    						LocalSsdInterface: pulumi.String("string"),
    						NumLocalSsds:      pulumi.Int(0),
    					},
    					ImageUri: pulumi.String("string"),
    					InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
    						InstanceSelectionList: dataproc.InstanceSelectionArray{
    							&dataproc.InstanceSelectionArgs{
    								MachineTypes: pulumi.StringArray{
    									pulumi.String("string"),
    								},
    								Rank: pulumi.Int(0),
    							},
    						},
    					},
    					MachineTypeUri:  pulumi.String("string"),
    					MinCpuPlatform:  pulumi.String("string"),
    					MinNumInstances: pulumi.Int(0),
    					NumInstances:    pulumi.Int(0),
    					Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
    					StartupConfig: &dataproc.StartupConfigArgs{
    						RequiredRegistrationFraction: pulumi.Float64(0),
    					},
    				},
    				SecurityConfig: &dataproc.SecurityConfigArgs{
    					IdentityConfig: &dataproc.IdentityConfigArgs{
    						UserServiceAccountMapping: pulumi.StringMap{
    							"string": pulumi.String("string"),
    						},
    					},
    					KerberosConfig: &dataproc.KerberosConfigArgs{
    						CrossRealmTrustAdminServer:       pulumi.String("string"),
    						CrossRealmTrustKdc:               pulumi.String("string"),
    						CrossRealmTrustRealm:             pulumi.String("string"),
    						CrossRealmTrustSharedPasswordUri: pulumi.String("string"),
    						EnableKerberos:                   pulumi.Bool(false),
    						KdcDbKeyUri:                      pulumi.String("string"),
    						KeyPasswordUri:                   pulumi.String("string"),
    						KeystorePasswordUri:              pulumi.String("string"),
    						KeystoreUri:                      pulumi.String("string"),
    						KmsKeyUri:                        pulumi.String("string"),
    						Realm:                            pulumi.String("string"),
    						RootPrincipalPasswordUri:         pulumi.String("string"),
    						TgtLifetimeHours:                 pulumi.Int(0),
    						TruststorePasswordUri:            pulumi.String("string"),
    						TruststoreUri:                    pulumi.String("string"),
    					},
    				},
    				SoftwareConfig: &dataproc.SoftwareConfigArgs{
    					ImageVersion: pulumi.String("string"),
    					OptionalComponents: dataproc.SoftwareConfigOptionalComponentsItemArray{
    						dataproc.SoftwareConfigOptionalComponentsItemComponentUnspecified,
    					},
    					Properties: pulumi.StringMap{
    						"string": pulumi.String("string"),
    					},
    				},
    				TempBucket: pulumi.String("string"),
    				WorkerConfig: &dataproc.InstanceGroupConfigArgs{
    					Accelerators: dataproc.AcceleratorConfigArray{
    						&dataproc.AcceleratorConfigArgs{
    							AcceleratorCount:   pulumi.Int(0),
    							AcceleratorTypeUri: pulumi.String("string"),
    						},
    					},
    					DiskConfig: &dataproc.DiskConfigArgs{
    						BootDiskSizeGb:    pulumi.Int(0),
    						BootDiskType:      pulumi.String("string"),
    						LocalSsdInterface: pulumi.String("string"),
    						NumLocalSsds:      pulumi.Int(0),
    					},
    					ImageUri: pulumi.String("string"),
    					InstanceFlexibilityPolicy: &dataproc.InstanceFlexibilityPolicyArgs{
    						InstanceSelectionList: dataproc.InstanceSelectionArray{
    							&dataproc.InstanceSelectionArgs{
    								MachineTypes: pulumi.StringArray{
    									pulumi.String("string"),
    								},
    								Rank: pulumi.Int(0),
    							},
    						},
    					},
    					MachineTypeUri:  pulumi.String("string"),
    					MinCpuPlatform:  pulumi.String("string"),
    					MinNumInstances: pulumi.Int(0),
    					NumInstances:    pulumi.Int(0),
    					Preemptibility:  dataproc.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
    					StartupConfig: &dataproc.StartupConfigArgs{
    						RequiredRegistrationFraction: pulumi.Float64(0),
    					},
    				},
    			},
    			Labels: pulumi.StringMap{
    				"string": pulumi.String("string"),
    			},
    		},
    	},
    	DagTimeout: pulumi.String("string"),
    	EncryptionConfig: &dataproc.GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs{
    		KmsKey: pulumi.String("string"),
    	},
    	Id: pulumi.String("string"),
    	Labels: pulumi.StringMap{
    		"string": pulumi.String("string"),
    	},
    	Location: pulumi.String("string"),
    	Parameters: dataproc.TemplateParameterArray{
    		&dataproc.TemplateParameterArgs{
    			Fields: pulumi.StringArray{
    				pulumi.String("string"),
    			},
    			Name:        pulumi.String("string"),
    			Description: pulumi.String("string"),
    			Validation: &dataproc.ParameterValidationArgs{
    				Regex: &dataproc.RegexValidationArgs{
    					Regexes: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    				},
    				Values: &dataproc.ValueValidationArgs{
    					Values: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    				},
    			},
    		},
    	},
    	Project: pulumi.String("string"),
    	Version: pulumi.Int(0),
    })
    
    Java:

    var workflowTemplateResource = new WorkflowTemplate("workflowTemplateResource", WorkflowTemplateArgs.builder()
        .jobs(OrderedJobArgs.builder()
            .stepId("string")
            .prestoJob(PrestoJobArgs.builder()
                .clientTags("string")
                .continueOnFailure(false)
                .loggingConfig(LoggingConfigArgs.builder()
                    .driverLogLevels(Map.of("string", "string"))
                    .build())
                .outputFormat("string")
                .properties(Map.of("string", "string"))
                .queryFileUri("string")
                .queryList(QueryListArgs.builder()
                    .queries("string")
                    .build())
                .build())
            .hiveJob(HiveJobArgs.builder()
                .continueOnFailure(false)
                .jarFileUris("string")
                .properties(Map.of("string", "string"))
                .queryFileUri("string")
                .queryList(QueryListArgs.builder()
                    .queries("string")
                    .build())
                .scriptVariables(Map.of("string", "string"))
                .build())
            .labels(Map.of("string", "string"))
            .pigJob(PigJobArgs.builder()
                .continueOnFailure(false)
                .jarFileUris("string")
                .loggingConfig(LoggingConfigArgs.builder()
                    .driverLogLevels(Map.of("string", "string"))
                    .build())
                .properties(Map.of("string", "string"))
                .queryFileUri("string")
                .queryList(QueryListArgs.builder()
                    .queries("string")
                    .build())
                .scriptVariables(Map.of("string", "string"))
                .build())
            .prerequisiteStepIds("string")
            .flinkJob(FlinkJobArgs.builder()
                .args("string")
                .jarFileUris("string")
                .loggingConfig(LoggingConfigArgs.builder()
                    .driverLogLevels(Map.of("string", "string"))
                    .build())
                .mainClass("string")
                .mainJarFileUri("string")
                .properties(Map.of("string", "string"))
                .savepointUri("string")
                .build())
            .pysparkJob(PySparkJobArgs.builder()
                .mainPythonFileUri("string")
                .archiveUris("string")
                .args("string")
                .fileUris("string")
                .jarFileUris("string")
                .loggingConfig(LoggingConfigArgs.builder()
                    .driverLogLevels(Map.of("string", "string"))
                    .build())
                .properties(Map.of("string", "string"))
                .pythonFileUris("string")
                .build())
            .scheduling(JobSchedulingArgs.builder()
                .maxFailuresPerHour(0)
                .maxFailuresTotal(0)
                .build())
            .sparkJob(SparkJobArgs.builder()
                .archiveUris("string")
                .args("string")
                .fileUris("string")
                .jarFileUris("string")
                .loggingConfig(LoggingConfigArgs.builder()
                    .driverLogLevels(Map.of("string", "string"))
                    .build())
                .mainClass("string")
                .mainJarFileUri("string")
                .properties(Map.of("string", "string"))
                .build())
            .sparkRJob(SparkRJobArgs.builder()
                .mainRFileUri("string")
                .archiveUris("string")
                .args("string")
                .fileUris("string")
                .loggingConfig(LoggingConfigArgs.builder()
                    .driverLogLevels(Map.of("string", "string"))
                    .build())
                .properties(Map.of("string", "string"))
                .build())
            .sparkSqlJob(SparkSqlJobArgs.builder()
                .jarFileUris("string")
                .loggingConfig(LoggingConfigArgs.builder()
                    .driverLogLevels(Map.of("string", "string"))
                    .build())
                .properties(Map.of("string", "string"))
                .queryFileUri("string")
                .queryList(QueryListArgs.builder()
                    .queries("string")
                    .build())
                .scriptVariables(Map.of("string", "string"))
                .build())
            .hadoopJob(HadoopJobArgs.builder()
                .archiveUris("string")
                .args("string")
                .fileUris("string")
                .jarFileUris("string")
                .loggingConfig(LoggingConfigArgs.builder()
                    .driverLogLevels(Map.of("string", "string"))
                    .build())
                .mainClass("string")
                .mainJarFileUri("string")
                .properties(Map.of("string", "string"))
                .build())
            .trinoJob(TrinoJobArgs.builder()
                .clientTags("string")
                .continueOnFailure(false)
                .loggingConfig(LoggingConfigArgs.builder()
                    .driverLogLevels(Map.of("string", "string"))
                    .build())
                .outputFormat("string")
                .properties(Map.of("string", "string"))
                .queryFileUri("string")
                .queryList(QueryListArgs.builder()
                    .queries("string")
                    .build())
                .build())
            .build())
        .placement(WorkflowTemplatePlacementArgs.builder()
            .clusterSelector(ClusterSelectorArgs.builder()
                .clusterLabels(Map.of("string", "string"))
                .zone("string")
                .build())
            .managedCluster(ManagedClusterArgs.builder()
                .clusterName("string")
                .config(ClusterConfigArgs.builder()
                    .autoscalingConfig(AutoscalingConfigArgs.builder()
                        .policyUri("string")
                        .build())
                    .auxiliaryNodeGroups(AuxiliaryNodeGroupArgs.builder()
                        .nodeGroup(NodeGroupArgs.builder()
                            .roles("ROLE_UNSPECIFIED")
                            .labels(Map.of("string", "string"))
                            .name("string")
                            .nodeGroupConfig(InstanceGroupConfigArgs.builder()
                                .accelerators(AcceleratorConfigArgs.builder()
                                    .acceleratorCount(0)
                                    .acceleratorTypeUri("string")
                                    .build())
                                .diskConfig(DiskConfigArgs.builder()
                                    .bootDiskSizeGb(0)
                                    .bootDiskType("string")
                                    .localSsdInterface("string")
                                    .numLocalSsds(0)
                                    .build())
                                .imageUri("string")
                                .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                                    .instanceSelectionList(InstanceSelectionArgs.builder()
                                        .machineTypes("string")
                                        .rank(0)
                                        .build())
                                    .build())
                                .machineTypeUri("string")
                                .minCpuPlatform("string")
                                .minNumInstances(0)
                                .numInstances(0)
                                .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
                                .startupConfig(StartupConfigArgs.builder()
                                    .requiredRegistrationFraction(0)
                                    .build())
                                .build())
                            .build())
                        .nodeGroupId("string")
                        .build())
                    .configBucket("string")
                    .dataprocMetricConfig(DataprocMetricConfigArgs.builder()
                        .metrics(MetricArgs.builder()
                            .metricSource("METRIC_SOURCE_UNSPECIFIED")
                            .metricOverrides("string")
                            .build())
                        .build())
                    .encryptionConfig(EncryptionConfigArgs.builder()
                        .gcePdKmsKeyName("string")
                        .kmsKey("string")
                        .build())
                    .endpointConfig(EndpointConfigArgs.builder()
                        .enableHttpPortAccess(false)
                        .build())
                    .gceClusterConfig(GceClusterConfigArgs.builder()
                        .confidentialInstanceConfig(ConfidentialInstanceConfigArgs.builder()
                            .enableConfidentialCompute(false)
                            .build())
                        .internalIpOnly(false)
                        .metadata(Map.of("string", "string"))
                        .networkUri("string")
                        .nodeGroupAffinity(NodeGroupAffinityArgs.builder()
                            .nodeGroupUri("string")
                            .build())
                        .privateIpv6GoogleAccess("PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED")
                        .reservationAffinity(ReservationAffinityArgs.builder()
                            .consumeReservationType("TYPE_UNSPECIFIED")
                            .key("string")
                            .values("string")
                            .build())
                        .serviceAccount("string")
                        .serviceAccountScopes("string")
                        .shieldedInstanceConfig(ShieldedInstanceConfigArgs.builder()
                            .enableIntegrityMonitoring(false)
                            .enableSecureBoot(false)
                            .enableVtpm(false)
                            .build())
                        .subnetworkUri("string")
                        .tags("string")
                        .zoneUri("string")
                        .build())
                    .gkeClusterConfig(GkeClusterConfigArgs.builder()
                        .gkeClusterTarget("string")
                        .nodePoolTarget(GkeNodePoolTargetArgs.builder()
                            .nodePool("string")
                            .roles("ROLE_UNSPECIFIED")
                            .nodePoolConfig(GkeNodePoolConfigArgs.builder()
                                .autoscaling(GkeNodePoolAutoscalingConfigArgs.builder()
                                    .maxNodeCount(0)
                                    .minNodeCount(0)
                                    .build())
                                .config(GkeNodeConfigArgs.builder()
                                    .accelerators(GkeNodePoolAcceleratorConfigArgs.builder()
                                        .acceleratorCount("string")
                                        .acceleratorType("string")
                                        .gpuPartitionSize("string")
                                        .build())
                                    .bootDiskKmsKey("string")
                                    .localSsdCount(0)
                                    .machineType("string")
                                    .minCpuPlatform("string")
                                    .preemptible(false)
                                    .spot(false)
                                    .build())
                                .locations("string")
                                .build())
                            .build())
                        .build())
                    .initializationActions(NodeInitializationActionArgs.builder()
                        .executableFile("string")
                        .executionTimeout("string")
                        .build())
                    .lifecycleConfig(LifecycleConfigArgs.builder()
                        .autoDeleteTime("string")
                        .autoDeleteTtl("string")
                        .idleDeleteTtl("string")
                        .build())
                    .masterConfig(InstanceGroupConfigArgs.builder()
                        .accelerators(AcceleratorConfigArgs.builder()
                            .acceleratorCount(0)
                            .acceleratorTypeUri("string")
                            .build())
                        .diskConfig(DiskConfigArgs.builder()
                            .bootDiskSizeGb(0)
                            .bootDiskType("string")
                            .localSsdInterface("string")
                            .numLocalSsds(0)
                            .build())
                        .imageUri("string")
                        .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                            .instanceSelectionList(InstanceSelectionArgs.builder()
                                .machineTypes("string")
                                .rank(0)
                                .build())
                            .build())
                        .machineTypeUri("string")
                        .minCpuPlatform("string")
                        .minNumInstances(0)
                        .numInstances(0)
                        .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
                        .startupConfig(StartupConfigArgs.builder()
                            .requiredRegistrationFraction(0)
                            .build())
                        .build())
                    .metastoreConfig(MetastoreConfigArgs.builder()
                        .dataprocMetastoreService("string")
                        .build())
                    .secondaryWorkerConfig(InstanceGroupConfigArgs.builder()
                        .accelerators(AcceleratorConfigArgs.builder()
                            .acceleratorCount(0)
                            .acceleratorTypeUri("string")
                            .build())
                        .diskConfig(DiskConfigArgs.builder()
                            .bootDiskSizeGb(0)
                            .bootDiskType("string")
                            .localSsdInterface("string")
                            .numLocalSsds(0)
                            .build())
                        .imageUri("string")
                        .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                            .instanceSelectionList(InstanceSelectionArgs.builder()
                                .machineTypes("string")
                                .rank(0)
                                .build())
                            .build())
                        .machineTypeUri("string")
                        .minCpuPlatform("string")
                        .minNumInstances(0)
                        .numInstances(0)
                        .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
                        .startupConfig(StartupConfigArgs.builder()
                            .requiredRegistrationFraction(0)
                            .build())
                        .build())
                    .securityConfig(SecurityConfigArgs.builder()
                        .identityConfig(IdentityConfigArgs.builder()
                            .userServiceAccountMapping(Map.of("string", "string"))
                            .build())
                        .kerberosConfig(KerberosConfigArgs.builder()
                            .crossRealmTrustAdminServer("string")
                            .crossRealmTrustKdc("string")
                            .crossRealmTrustRealm("string")
                            .crossRealmTrustSharedPasswordUri("string")
                            .enableKerberos(false)
                            .kdcDbKeyUri("string")
                            .keyPasswordUri("string")
                            .keystorePasswordUri("string")
                            .keystoreUri("string")
                            .kmsKeyUri("string")
                            .realm("string")
                            .rootPrincipalPasswordUri("string")
                            .tgtLifetimeHours(0)
                            .truststorePasswordUri("string")
                            .truststoreUri("string")
                            .build())
                        .build())
                    .softwareConfig(SoftwareConfigArgs.builder()
                        .imageVersion("string")
                        .optionalComponents("COMPONENT_UNSPECIFIED")
                        .properties(Map.of("string", "string"))
                        .build())
                    .tempBucket("string")
                    .workerConfig(InstanceGroupConfigArgs.builder()
                        .accelerators(AcceleratorConfigArgs.builder()
                            .acceleratorCount(0)
                            .acceleratorTypeUri("string")
                            .build())
                        .diskConfig(DiskConfigArgs.builder()
                            .bootDiskSizeGb(0)
                            .bootDiskType("string")
                            .localSsdInterface("string")
                            .numLocalSsds(0)
                            .build())
                        .imageUri("string")
                        .instanceFlexibilityPolicy(InstanceFlexibilityPolicyArgs.builder()
                            .instanceSelectionList(InstanceSelectionArgs.builder()
                                .machineTypes("string")
                                .rank(0)
                                .build())
                            .build())
                        .machineTypeUri("string")
                        .minCpuPlatform("string")
                        .minNumInstances(0)
                        .numInstances(0)
                        .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
                        .startupConfig(StartupConfigArgs.builder()
                            .requiredRegistrationFraction(0)
                            .build())
                        .build())
                    .build())
                .labels(Map.of("string", "string"))
                .build())
            .build())
        .dagTimeout("string")
        .encryptionConfig(GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs.builder()
            .kmsKey("string")
            .build())
        .id("string")
        .labels(Map.of("string", "string"))
        .location("string")
        .parameters(TemplateParameterArgs.builder()
            .fields("string")
            .name("string")
            .description("string")
            .validation(ParameterValidationArgs.builder()
                .regex(RegexValidationArgs.builder()
                    .regexes("string")
                    .build())
                .values(ValueValidationArgs.builder()
                    .values("string")
                    .build())
                .build())
            .build())
        .project("string")
        .version(0)
        .build());
    
    workflow_template_resource = google_native.dataproc.v1.WorkflowTemplate("workflowTemplateResource",
        jobs=[{
            "step_id": "string",
            "presto_job": {
                "client_tags": ["string"],
                "continue_on_failure": False,
                "logging_config": {
                    "driver_log_levels": {
                        "string": "string",
                    },
                },
                "output_format": "string",
                "properties": {
                    "string": "string",
                },
                "query_file_uri": "string",
                "query_list": {
                    "queries": ["string"],
                },
            },
            "hive_job": {
                "continue_on_failure": False,
                "jar_file_uris": ["string"],
                "properties": {
                    "string": "string",
                },
                "query_file_uri": "string",
                "query_list": {
                    "queries": ["string"],
                },
                "script_variables": {
                    "string": "string",
                },
            },
            "labels": {
                "string": "string",
            },
            "pig_job": {
                "continue_on_failure": False,
                "jar_file_uris": ["string"],
                "logging_config": {
                    "driver_log_levels": {
                        "string": "string",
                    },
                },
                "properties": {
                    "string": "string",
                },
                "query_file_uri": "string",
                "query_list": {
                    "queries": ["string"],
                },
                "script_variables": {
                    "string": "string",
                },
            },
            "prerequisite_step_ids": ["string"],
            "flink_job": {
                "args": ["string"],
                "jar_file_uris": ["string"],
                "logging_config": {
                    "driver_log_levels": {
                        "string": "string",
                    },
                },
                "main_class": "string",
                "main_jar_file_uri": "string",
                "properties": {
                    "string": "string",
                },
                "savepoint_uri": "string",
            },
            "pyspark_job": {
                "main_python_file_uri": "string",
                "archive_uris": ["string"],
                "args": ["string"],
                "file_uris": ["string"],
                "jar_file_uris": ["string"],
                "logging_config": {
                    "driver_log_levels": {
                        "string": "string",
                    },
                },
                "properties": {
                    "string": "string",
                },
                "python_file_uris": ["string"],
            },
            "scheduling": {
                "max_failures_per_hour": 0,
                "max_failures_total": 0,
            },
            "spark_job": {
                "archive_uris": ["string"],
                "args": ["string"],
                "file_uris": ["string"],
                "jar_file_uris": ["string"],
                "logging_config": {
                    "driver_log_levels": {
                        "string": "string",
                    },
                },
                "main_class": "string",
                "main_jar_file_uri": "string",
                "properties": {
                    "string": "string",
                },
            },
            "spark_r_job": {
                "main_r_file_uri": "string",
                "archive_uris": ["string"],
                "args": ["string"],
                "file_uris": ["string"],
                "logging_config": {
                    "driver_log_levels": {
                        "string": "string",
                    },
                },
                "properties": {
                    "string": "string",
                },
            },
            "spark_sql_job": {
                "jar_file_uris": ["string"],
                "logging_config": {
                    "driver_log_levels": {
                        "string": "string",
                    },
                },
                "properties": {
                    "string": "string",
                },
                "query_file_uri": "string",
                "query_list": {
                    "queries": ["string"],
                },
                "script_variables": {
                    "string": "string",
                },
            },
            "hadoop_job": {
                "archive_uris": ["string"],
                "args": ["string"],
                "file_uris": ["string"],
                "jar_file_uris": ["string"],
                "logging_config": {
                    "driver_log_levels": {
                        "string": "string",
                    },
                },
                "main_class": "string",
                "main_jar_file_uri": "string",
                "properties": {
                    "string": "string",
                },
            },
            "trino_job": {
                "client_tags": ["string"],
                "continue_on_failure": False,
                "logging_config": {
                    "driver_log_levels": {
                        "string": "string",
                    },
                },
                "output_format": "string",
                "properties": {
                    "string": "string",
                },
                "query_file_uri": "string",
                "query_list": {
                    "queries": ["string"],
                },
            },
        }],
        placement={
            "cluster_selector": {
                "cluster_labels": {
                    "string": "string",
                },
                "zone": "string",
            },
            "managed_cluster": {
                "cluster_name": "string",
                "config": {
                    "autoscaling_config": {
                        "policy_uri": "string",
                    },
                    "auxiliary_node_groups": [{
                        "node_group": {
                            "roles": [google_native.dataproc.v1.NodeGroupRolesItem.ROLE_UNSPECIFIED],
                            "labels": {
                                "string": "string",
                            },
                            "name": "string",
                            "node_group_config": {
                                "accelerators": [{
                                    "accelerator_count": 0,
                                    "accelerator_type_uri": "string",
                                }],
                                "disk_config": {
                                    "boot_disk_size_gb": 0,
                                    "boot_disk_type": "string",
                                    "local_ssd_interface": "string",
                                    "num_local_ssds": 0,
                                },
                                "image_uri": "string",
                                "instance_flexibility_policy": {
                                    "instance_selection_list": [{
                                        "machine_types": ["string"],
                                        "rank": 0,
                                    }],
                                },
                                "machine_type_uri": "string",
                                "min_cpu_platform": "string",
                                "min_num_instances": 0,
                                "num_instances": 0,
                                "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
                                "startup_config": {
                                    "required_registration_fraction": 0,
                                },
                            },
                        },
                        "node_group_id": "string",
                    }],
                    "config_bucket": "string",
                    "dataproc_metric_config": {
                        "metrics": [{
                            "metric_source": google_native.dataproc.v1.MetricMetricSource.METRIC_SOURCE_UNSPECIFIED,
                            "metric_overrides": ["string"],
                        }],
                    },
                    "encryption_config": {
                        "gce_pd_kms_key_name": "string",
                        "kms_key": "string",
                    },
                    "endpoint_config": {
                        "enable_http_port_access": False,
                    },
                    "gce_cluster_config": {
                        "confidential_instance_config": {
                            "enable_confidential_compute": False,
                        },
                        "internal_ip_only": False,
                        "metadata": {
                            "string": "string",
                        },
                        "network_uri": "string",
                        "node_group_affinity": {
                            "node_group_uri": "string",
                        },
                        "private_ipv6_google_access": google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED,
                        "reservation_affinity": {
                            "consume_reservation_type": google_native.dataproc.v1.ReservationAffinityConsumeReservationType.TYPE_UNSPECIFIED,
                            "key": "string",
                            "values": ["string"],
                        },
                        "service_account": "string",
                        "service_account_scopes": ["string"],
                        "shielded_instance_config": {
                            "enable_integrity_monitoring": False,
                            "enable_secure_boot": False,
                            "enable_vtpm": False,
                        },
                        "subnetwork_uri": "string",
                        "tags": ["string"],
                        "zone_uri": "string",
                    },
                    "gke_cluster_config": {
                        "gke_cluster_target": "string",
                        "node_pool_target": [{
                            "node_pool": "string",
                            "roles": [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.ROLE_UNSPECIFIED],
                            "node_pool_config": {
                                "autoscaling": {
                                    "max_node_count": 0,
                                    "min_node_count": 0,
                                },
                                "config": {
                                    "accelerators": [{
                                        "accelerator_count": "string",
                                        "accelerator_type": "string",
                                        "gpu_partition_size": "string",
                                    }],
                                    "boot_disk_kms_key": "string",
                                    "local_ssd_count": 0,
                                    "machine_type": "string",
                                    "min_cpu_platform": "string",
                                    "preemptible": False,
                                    "spot": False,
                                },
                                "locations": ["string"],
                            },
                        }],
                    },
                    "initialization_actions": [{
                        "executable_file": "string",
                        "execution_timeout": "string",
                    }],
                    "lifecycle_config": {
                        "auto_delete_time": "string",
                        "auto_delete_ttl": "string",
                        "idle_delete_ttl": "string",
                    },
                    "master_config": {
                        "accelerators": [{
                            "accelerator_count": 0,
                            "accelerator_type_uri": "string",
                        }],
                        "disk_config": {
                            "boot_disk_size_gb": 0,
                            "boot_disk_type": "string",
                            "local_ssd_interface": "string",
                            "num_local_ssds": 0,
                        },
                        "image_uri": "string",
                        "instance_flexibility_policy": {
                            "instance_selection_list": [{
                                "machine_types": ["string"],
                                "rank": 0,
                            }],
                        },
                        "machine_type_uri": "string",
                        "min_cpu_platform": "string",
                        "min_num_instances": 0,
                        "num_instances": 0,
                        "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
                        "startup_config": {
                            "required_registration_fraction": 0,
                        },
                    },
                    "metastore_config": {
                        "dataproc_metastore_service": "string",
                    },
                    "secondary_worker_config": {
                        "accelerators": [{
                            "accelerator_count": 0,
                            "accelerator_type_uri": "string",
                        }],
                        "disk_config": {
                            "boot_disk_size_gb": 0,
                            "boot_disk_type": "string",
                            "local_ssd_interface": "string",
                            "num_local_ssds": 0,
                        },
                        "image_uri": "string",
                        "instance_flexibility_policy": {
                            "instance_selection_list": [{
                                "machine_types": ["string"],
                                "rank": 0,
                            }],
                        },
                        "machine_type_uri": "string",
                        "min_cpu_platform": "string",
                        "min_num_instances": 0,
                        "num_instances": 0,
                        "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
                        "startup_config": {
                            "required_registration_fraction": 0,
                        },
                    },
                    "security_config": {
                        "identity_config": {
                            "user_service_account_mapping": {
                                "string": "string",
                            },
                        },
                        "kerberos_config": {
                            "cross_realm_trust_admin_server": "string",
                            "cross_realm_trust_kdc": "string",
                            "cross_realm_trust_realm": "string",
                            "cross_realm_trust_shared_password_uri": "string",
                            "enable_kerberos": False,
                            "kdc_db_key_uri": "string",
                            "key_password_uri": "string",
                            "keystore_password_uri": "string",
                            "keystore_uri": "string",
                            "kms_key_uri": "string",
                            "realm": "string",
                            "root_principal_password_uri": "string",
                            "tgt_lifetime_hours": 0,
                            "truststore_password_uri": "string",
                            "truststore_uri": "string",
                        },
                    },
                    "software_config": {
                        "image_version": "string",
                        "optional_components": [google_native.dataproc.v1.SoftwareConfigOptionalComponentsItem.COMPONENT_UNSPECIFIED],
                        "properties": {
                            "string": "string",
                        },
                    },
                    "temp_bucket": "string",
                    "worker_config": {
                        "accelerators": [{
                            "accelerator_count": 0,
                            "accelerator_type_uri": "string",
                        }],
                        "disk_config": {
                            "boot_disk_size_gb": 0,
                            "boot_disk_type": "string",
                            "local_ssd_interface": "string",
                            "num_local_ssds": 0,
                        },
                        "image_uri": "string",
                        "instance_flexibility_policy": {
                            "instance_selection_list": [{
                                "machine_types": ["string"],
                                "rank": 0,
                            }],
                        },
                        "machine_type_uri": "string",
                        "min_cpu_platform": "string",
                        "min_num_instances": 0,
                        "num_instances": 0,
                        "preemptibility": google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
                        "startup_config": {
                            "required_registration_fraction": 0,
                        },
                    },
                },
                "labels": {
                    "string": "string",
                },
            },
        },
        dag_timeout="string",
        encryption_config={
            "kms_key": "string",
        },
        id="string",
        labels={
            "string": "string",
        },
        location="string",
        parameters=[{
            "fields": ["string"],
            "name": "string",
            "description": "string",
            "validation": {
                "regex": {
                    "regexes": ["string"],
                },
                "values": {
                    "values": ["string"],
                },
            },
        }],
        project="string",
        version=0)
    
    const workflowTemplateResource = new google_native.dataproc.v1.WorkflowTemplate("workflowTemplateResource", {
        jobs: [{
            stepId: "string",
            prestoJob: {
                clientTags: ["string"],
                continueOnFailure: false,
                loggingConfig: {
                    driverLogLevels: {
                        string: "string",
                    },
                },
                outputFormat: "string",
                properties: {
                    string: "string",
                },
                queryFileUri: "string",
                queryList: {
                    queries: ["string"],
                },
            },
            hiveJob: {
                continueOnFailure: false,
                jarFileUris: ["string"],
                properties: {
                    string: "string",
                },
                queryFileUri: "string",
                queryList: {
                    queries: ["string"],
                },
                scriptVariables: {
                    string: "string",
                },
            },
            labels: {
                string: "string",
            },
            pigJob: {
                continueOnFailure: false,
                jarFileUris: ["string"],
                loggingConfig: {
                    driverLogLevels: {
                        string: "string",
                    },
                },
                properties: {
                    string: "string",
                },
                queryFileUri: "string",
                queryList: {
                    queries: ["string"],
                },
                scriptVariables: {
                    string: "string",
                },
            },
            prerequisiteStepIds: ["string"],
            flinkJob: {
                args: ["string"],
                jarFileUris: ["string"],
                loggingConfig: {
                    driverLogLevels: {
                        string: "string",
                    },
                },
                mainClass: "string",
                mainJarFileUri: "string",
                properties: {
                    string: "string",
                },
                savepointUri: "string",
            },
            pysparkJob: {
                mainPythonFileUri: "string",
                archiveUris: ["string"],
                args: ["string"],
                fileUris: ["string"],
                jarFileUris: ["string"],
                loggingConfig: {
                    driverLogLevels: {
                        string: "string",
                    },
                },
                properties: {
                    string: "string",
                },
                pythonFileUris: ["string"],
            },
            scheduling: {
                maxFailuresPerHour: 0,
                maxFailuresTotal: 0,
            },
            sparkJob: {
                archiveUris: ["string"],
                args: ["string"],
                fileUris: ["string"],
                jarFileUris: ["string"],
                loggingConfig: {
                    driverLogLevels: {
                        string: "string",
                    },
                },
                mainClass: "string",
                mainJarFileUri: "string",
                properties: {
                    string: "string",
                },
            },
            sparkRJob: {
                mainRFileUri: "string",
                archiveUris: ["string"],
                args: ["string"],
                fileUris: ["string"],
                loggingConfig: {
                    driverLogLevels: {
                        string: "string",
                    },
                },
                properties: {
                    string: "string",
                },
            },
            sparkSqlJob: {
                jarFileUris: ["string"],
                loggingConfig: {
                    driverLogLevels: {
                        string: "string",
                    },
                },
                properties: {
                    string: "string",
                },
                queryFileUri: "string",
                queryList: {
                    queries: ["string"],
                },
                scriptVariables: {
                    string: "string",
                },
            },
            hadoopJob: {
                archiveUris: ["string"],
                args: ["string"],
                fileUris: ["string"],
                jarFileUris: ["string"],
                loggingConfig: {
                    driverLogLevels: {
                        string: "string",
                    },
                },
                mainClass: "string",
                mainJarFileUri: "string",
                properties: {
                    string: "string",
                },
            },
            trinoJob: {
                clientTags: ["string"],
                continueOnFailure: false,
                loggingConfig: {
                    driverLogLevels: {
                        string: "string",
                    },
                },
                outputFormat: "string",
                properties: {
                    string: "string",
                },
                queryFileUri: "string",
                queryList: {
                    queries: ["string"],
                },
            },
        }],
        placement: {
            clusterSelector: {
                clusterLabels: {
                    string: "string",
                },
                zone: "string",
            },
            managedCluster: {
                clusterName: "string",
                config: {
                    autoscalingConfig: {
                        policyUri: "string",
                    },
                    auxiliaryNodeGroups: [{
                        nodeGroup: {
                            roles: [google_native.dataproc.v1.NodeGroupRolesItem.RoleUnspecified],
                            labels: {
                                string: "string",
                            },
                            name: "string",
                            nodeGroupConfig: {
                                accelerators: [{
                                    acceleratorCount: 0,
                                    acceleratorTypeUri: "string",
                                }],
                                diskConfig: {
                                    bootDiskSizeGb: 0,
                                    bootDiskType: "string",
                                    localSsdInterface: "string",
                                    numLocalSsds: 0,
                                },
                                imageUri: "string",
                                instanceFlexibilityPolicy: {
                                    instanceSelectionList: [{
                                        machineTypes: ["string"],
                                        rank: 0,
                                    }],
                                },
                                machineTypeUri: "string",
                                minCpuPlatform: "string",
                                minNumInstances: 0,
                                numInstances: 0,
                                preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                                startupConfig: {
                                    requiredRegistrationFraction: 0,
                                },
                            },
                        },
                        nodeGroupId: "string",
                    }],
                    configBucket: "string",
                    dataprocMetricConfig: {
                        metrics: [{
                            metricSource: google_native.dataproc.v1.MetricMetricSource.MetricSourceUnspecified,
                            metricOverrides: ["string"],
                        }],
                    },
                    encryptionConfig: {
                        gcePdKmsKeyName: "string",
                        kmsKey: "string",
                    },
                    endpointConfig: {
                        enableHttpPortAccess: false,
                    },
                    gceClusterConfig: {
                        confidentialInstanceConfig: {
                            enableConfidentialCompute: false,
                        },
                        internalIpOnly: false,
                        metadata: {
                            string: "string",
                        },
                        networkUri: "string",
                        nodeGroupAffinity: {
                            nodeGroupUri: "string",
                        },
                        privateIpv6GoogleAccess: google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
                        reservationAffinity: {
                            consumeReservationType: google_native.dataproc.v1.ReservationAffinityConsumeReservationType.TypeUnspecified,
                            key: "string",
                            values: ["string"],
                        },
                        serviceAccount: "string",
                        serviceAccountScopes: ["string"],
                        shieldedInstanceConfig: {
                            enableIntegrityMonitoring: false,
                            enableSecureBoot: false,
                            enableVtpm: false,
                        },
                        subnetworkUri: "string",
                        tags: ["string"],
                        zoneUri: "string",
                    },
                    gkeClusterConfig: {
                        gkeClusterTarget: "string",
                        nodePoolTarget: [{
                            nodePool: "string",
                            roles: [google_native.dataproc.v1.GkeNodePoolTargetRolesItem.RoleUnspecified],
                            nodePoolConfig: {
                                autoscaling: {
                                    maxNodeCount: 0,
                                    minNodeCount: 0,
                                },
                                config: {
                                    accelerators: [{
                                        acceleratorCount: "string",
                                        acceleratorType: "string",
                                        gpuPartitionSize: "string",
                                    }],
                                    bootDiskKmsKey: "string",
                                    localSsdCount: 0,
                                    machineType: "string",
                                    minCpuPlatform: "string",
                                    preemptible: false,
                                    spot: false,
                                },
                                locations: ["string"],
                            },
                        }],
                    },
                    initializationActions: [{
                        executableFile: "string",
                        executionTimeout: "string",
                    }],
                    lifecycleConfig: {
                        autoDeleteTime: "string",
                        autoDeleteTtl: "string",
                        idleDeleteTtl: "string",
                    },
                    masterConfig: {
                        accelerators: [{
                            acceleratorCount: 0,
                            acceleratorTypeUri: "string",
                        }],
                        diskConfig: {
                            bootDiskSizeGb: 0,
                            bootDiskType: "string",
                            localSsdInterface: "string",
                            numLocalSsds: 0,
                        },
                        imageUri: "string",
                        instanceFlexibilityPolicy: {
                            instanceSelectionList: [{
                                machineTypes: ["string"],
                                rank: 0,
                            }],
                        },
                        machineTypeUri: "string",
                        minCpuPlatform: "string",
                        minNumInstances: 0,
                        numInstances: 0,
                        preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                        startupConfig: {
                            requiredRegistrationFraction: 0,
                        },
                    },
                    metastoreConfig: {
                        dataprocMetastoreService: "string",
                    },
                    secondaryWorkerConfig: {
                        accelerators: [{
                            acceleratorCount: 0,
                            acceleratorTypeUri: "string",
                        }],
                        diskConfig: {
                            bootDiskSizeGb: 0,
                            bootDiskType: "string",
                            localSsdInterface: "string",
                            numLocalSsds: 0,
                        },
                        imageUri: "string",
                        instanceFlexibilityPolicy: {
                            instanceSelectionList: [{
                                machineTypes: ["string"],
                                rank: 0,
                            }],
                        },
                        machineTypeUri: "string",
                        minCpuPlatform: "string",
                        minNumInstances: 0,
                        numInstances: 0,
                        preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                        startupConfig: {
                            requiredRegistrationFraction: 0,
                        },
                    },
                    securityConfig: {
                        identityConfig: {
                            userServiceAccountMapping: {
                                string: "string",
                            },
                        },
                        kerberosConfig: {
                            crossRealmTrustAdminServer: "string",
                            crossRealmTrustKdc: "string",
                            crossRealmTrustRealm: "string",
                            crossRealmTrustSharedPasswordUri: "string",
                            enableKerberos: false,
                            kdcDbKeyUri: "string",
                            keyPasswordUri: "string",
                            keystorePasswordUri: "string",
                            keystoreUri: "string",
                            kmsKeyUri: "string",
                            realm: "string",
                            rootPrincipalPasswordUri: "string",
                            tgtLifetimeHours: 0,
                            truststorePasswordUri: "string",
                            truststoreUri: "string",
                        },
                    },
                    softwareConfig: {
                        imageVersion: "string",
                        optionalComponents: [google_native.dataproc.v1.SoftwareConfigOptionalComponentsItem.ComponentUnspecified],
                        properties: {
                            string: "string",
                        },
                    },
                    tempBucket: "string",
                    workerConfig: {
                        accelerators: [{
                            acceleratorCount: 0,
                            acceleratorTypeUri: "string",
                        }],
                        diskConfig: {
                            bootDiskSizeGb: 0,
                            bootDiskType: "string",
                            localSsdInterface: "string",
                            numLocalSsds: 0,
                        },
                        imageUri: "string",
                        instanceFlexibilityPolicy: {
                            instanceSelectionList: [{
                                machineTypes: ["string"],
                                rank: 0,
                            }],
                        },
                        machineTypeUri: "string",
                        minCpuPlatform: "string",
                        minNumInstances: 0,
                        numInstances: 0,
                        preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
                        startupConfig: {
                            requiredRegistrationFraction: 0,
                        },
                    },
                },
                labels: {
                    string: "string",
                },
            },
        },
        dagTimeout: "string",
        encryptionConfig: {
            kmsKey: "string",
        },
        id: "string",
        labels: {
            string: "string",
        },
        location: "string",
        parameters: [{
            fields: ["string"],
            name: "string",
            description: "string",
            validation: {
                regex: {
                    regexes: ["string"],
                },
                values: {
                    values: ["string"],
                },
            },
        }],
        project: "string",
        version: 0,
    });
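
    The stubs above enumerate every accepted field with placeholder values. As a hedged illustration only, a minimal template that runs a single Spark step on existing clusters matching a label selector might look like the following sketch; the concrete values (template id, region, labels, jar path, timeout) are assumptions for the example, not values taken from this page:

    import * as google_native from "@pulumi/google-native";

    // Minimal sketch: one Spark step, targeted at clusters carrying the
    // "env: staging" label. All literal values below are illustrative.
    const sparkWordcountTemplate = new google_native.dataproc.v1.WorkflowTemplate("sparkWordcountTemplate", {
        id: "spark-wordcount",       // template id (auto-naming is not supported for this resource)
        location: "us-central1",     // Dataproc region hosting the template
        dagTimeout: "1800s",         // cancel the workflow DAG after 30 minutes
        jobs: [{
            stepId: "wordcount",
            sparkJob: {
                mainClass: "org.apache.spark.examples.JavaWordCount",
                jarFileUris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
                args: ["gs://my-bucket/input.txt"],
            },
        }],
        placement: {
            clusterSelector: {
                clusterLabels: {
                    env: "staging",
                },
            },
        },
        labels: {
            team: "data-eng",
        },
    });

    export const templateName = sparkWordcountTemplate.name;

    Using clusterSelector keeps the template decoupled from cluster lifecycle; swapping it for managedCluster (as in the full stub above) instead provisions an ephemeral cluster per workflow run.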
    
    type: google-native:dataproc/v1:WorkflowTemplate
    properties:
        dagTimeout: string
        encryptionConfig:
            kmsKey: string
        id: string
        jobs:
            - flinkJob:
                args:
                    - string
                jarFileUris:
                    - string
                loggingConfig:
                    driverLogLevels:
                        string: string
                mainClass: string
                mainJarFileUri: string
                properties:
                    string: string
                savepointUri: string
              hadoopJob:
                archiveUris:
                    - string
                args:
                    - string
                fileUris:
                    - string
                jarFileUris:
                    - string
                loggingConfig:
                    driverLogLevels:
                        string: string
                mainClass: string
                mainJarFileUri: string
                properties:
                    string: string
              hiveJob:
                continueOnFailure: false
                jarFileUris:
                    - string
                properties:
                    string: string
                queryFileUri: string
                queryList:
                    queries:
                        - string
                scriptVariables:
                    string: string
              labels:
                string: string
              pigJob:
                continueOnFailure: false
                jarFileUris:
                    - string
                loggingConfig:
                    driverLogLevels:
                        string: string
                properties:
                    string: string
                queryFileUri: string
                queryList:
                    queries:
                        - string
                scriptVariables:
                    string: string
              prerequisiteStepIds:
                - string
              prestoJob:
                clientTags:
                    - string
                continueOnFailure: false
                loggingConfig:
                    driverLogLevels:
                        string: string
                outputFormat: string
                properties:
                    string: string
                queryFileUri: string
                queryList:
                    queries:
                        - string
              pysparkJob:
                archiveUris:
                    - string
                args:
                    - string
                fileUris:
                    - string
                jarFileUris:
                    - string
                loggingConfig:
                    driverLogLevels:
                        string: string
                mainPythonFileUri: string
                properties:
                    string: string
                pythonFileUris:
                    - string
              scheduling:
                maxFailuresPerHour: 0
                maxFailuresTotal: 0
              sparkJob:
                archiveUris:
                    - string
                args:
                    - string
                fileUris:
                    - string
                jarFileUris:
                    - string
                loggingConfig:
                    driverLogLevels:
                        string: string
                mainClass: string
                mainJarFileUri: string
                properties:
                    string: string
              sparkRJob:
                archiveUris:
                    - string
                args:
                    - string
                fileUris:
                    - string
                loggingConfig:
                    driverLogLevels:
                        string: string
                mainRFileUri: string
                properties:
                    string: string
              sparkSqlJob:
                jarFileUris:
                    - string
                loggingConfig:
                    driverLogLevels:
                        string: string
                properties:
                    string: string
                queryFileUri: string
                queryList:
                    queries:
                        - string
                scriptVariables:
                    string: string
              stepId: string
              trinoJob:
                clientTags:
                    - string
                continueOnFailure: false
                loggingConfig:
                    driverLogLevels:
                        string: string
                outputFormat: string
                properties:
                    string: string
                queryFileUri: string
                queryList:
                    queries:
                        - string
        labels:
            string: string
        location: string
        parameters:
            - description: string
              fields:
                - string
              name: string
              validation:
                regex:
                    regexes:
                        - string
                values:
                    values:
                        - string
        placement:
            clusterSelector:
                clusterLabels:
                    string: string
                zone: string
            managedCluster:
                clusterName: string
                config:
                    autoscalingConfig:
                        policyUri: string
                    auxiliaryNodeGroups:
                        - nodeGroup:
                            labels:
                                string: string
                            name: string
                            nodeGroupConfig:
                                accelerators:
                                    - acceleratorCount: 0
                                      acceleratorTypeUri: string
                                diskConfig:
                                    bootDiskSizeGb: 0
                                    bootDiskType: string
                                    localSsdInterface: string
                                    numLocalSsds: 0
                                imageUri: string
                                instanceFlexibilityPolicy:
                                    instanceSelectionList:
                                        - machineTypes:
                                            - string
                                          rank: 0
                                machineTypeUri: string
                                minCpuPlatform: string
                                minNumInstances: 0
                                numInstances: 0
                                preemptibility: PREEMPTIBILITY_UNSPECIFIED
                                startupConfig:
                                    requiredRegistrationFraction: 0
                            roles:
                                - ROLE_UNSPECIFIED
                          nodeGroupId: string
                    configBucket: string
                    dataprocMetricConfig:
                        metrics:
                            - metricOverrides:
                                - string
                              metricSource: METRIC_SOURCE_UNSPECIFIED
                    encryptionConfig:
                        gcePdKmsKeyName: string
                        kmsKey: string
                    endpointConfig:
                        enableHttpPortAccess: false
                    gceClusterConfig:
                        confidentialInstanceConfig:
                            enableConfidentialCompute: false
                        internalIpOnly: false
                        metadata:
                            string: string
                        networkUri: string
                        nodeGroupAffinity:
                            nodeGroupUri: string
                        privateIpv6GoogleAccess: PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
                        reservationAffinity:
                            consumeReservationType: TYPE_UNSPECIFIED
                            key: string
                            values:
                                - string
                        serviceAccount: string
                        serviceAccountScopes:
                            - string
                        shieldedInstanceConfig:
                            enableIntegrityMonitoring: false
                            enableSecureBoot: false
                            enableVtpm: false
                        subnetworkUri: string
                        tags:
                            - string
                        zoneUri: string
                    gkeClusterConfig:
                        gkeClusterTarget: string
                        nodePoolTarget:
                            - nodePool: string
                              nodePoolConfig:
                                autoscaling:
                                    maxNodeCount: 0
                                    minNodeCount: 0
                                config:
                                    accelerators:
                                        - acceleratorCount: string
                                          acceleratorType: string
                                          gpuPartitionSize: string
                                    bootDiskKmsKey: string
                                    localSsdCount: 0
                                    machineType: string
                                    minCpuPlatform: string
                                    preemptible: false
                                    spot: false
                                locations:
                                    - string
                              roles:
                                - ROLE_UNSPECIFIED
                    initializationActions:
                        - executableFile: string
                          executionTimeout: string
                    lifecycleConfig:
                        autoDeleteTime: string
                        autoDeleteTtl: string
                        idleDeleteTtl: string
                    masterConfig:
                        accelerators:
                            - acceleratorCount: 0
                              acceleratorTypeUri: string
                        diskConfig:
                            bootDiskSizeGb: 0
                            bootDiskType: string
                            localSsdInterface: string
                            numLocalSsds: 0
                        imageUri: string
                        instanceFlexibilityPolicy:
                            instanceSelectionList:
                                - machineTypes:
                                    - string
                                  rank: 0
                        machineTypeUri: string
                        minCpuPlatform: string
                        minNumInstances: 0
                        numInstances: 0
                        preemptibility: PREEMPTIBILITY_UNSPECIFIED
                        startupConfig:
                            requiredRegistrationFraction: 0
                    metastoreConfig:
                        dataprocMetastoreService: string
                    secondaryWorkerConfig:
                        accelerators:
                            - acceleratorCount: 0
                              acceleratorTypeUri: string
                        diskConfig:
                            bootDiskSizeGb: 0
                            bootDiskType: string
                            localSsdInterface: string
                            numLocalSsds: 0
                        imageUri: string
                        instanceFlexibilityPolicy:
                            instanceSelectionList:
                                - machineTypes:
                                    - string
                                  rank: 0
                        machineTypeUri: string
                        minCpuPlatform: string
                        minNumInstances: 0
                        numInstances: 0
                        preemptibility: PREEMPTIBILITY_UNSPECIFIED
                        startupConfig:
                            requiredRegistrationFraction: 0
                    securityConfig:
                        identityConfig:
                            userServiceAccountMapping:
                                string: string
                        kerberosConfig:
                            crossRealmTrustAdminServer: string
                            crossRealmTrustKdc: string
                            crossRealmTrustRealm: string
                            crossRealmTrustSharedPasswordUri: string
                            enableKerberos: false
                            kdcDbKeyUri: string
                            keyPasswordUri: string
                            keystorePasswordUri: string
                            keystoreUri: string
                            kmsKeyUri: string
                            realm: string
                            rootPrincipalPasswordUri: string
                            tgtLifetimeHours: 0
                            truststorePasswordUri: string
                            truststoreUri: string
                    softwareConfig:
                        imageVersion: string
                        optionalComponents:
                            - COMPONENT_UNSPECIFIED
                        properties:
                            string: string
                    tempBucket: string
                    workerConfig:
                        accelerators:
                            - acceleratorCount: 0
                              acceleratorTypeUri: string
                        diskConfig:
                            bootDiskSizeGb: 0
                            bootDiskType: string
                            localSsdInterface: string
                            numLocalSsds: 0
                        imageUri: string
                        instanceFlexibilityPolicy:
                            instanceSelectionList:
                                - machineTypes:
                                    - string
                                  rank: 0
                        machineTypeUri: string
                        minCpuPlatform: string
                        minNumInstances: 0
                        numInstances: 0
                        preemptibility: PREEMPTIBILITY_UNSPECIFIED
                        startupConfig:
                            requiredRegistrationFraction: 0
                labels:
                    string: string
        project: string
        version: 0
    

    WorkflowTemplate Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.

    The WorkflowTemplate resource accepts the following input properties:

    Jobs List<Pulumi.GoogleNative.Dataproc.V1.Inputs.OrderedJob>
    The Directed Acyclic Graph of Jobs to submit.
    Placement Pulumi.GoogleNative.Dataproc.V1.Inputs.WorkflowTemplatePlacement
    WorkflowTemplate scheduling information.
    DagTimeout string
    Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
    EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GoogleCloudDataprocV1WorkflowTemplateEncryptionConfig
    Optional. Encryption settings for encrypting workflow template job arguments.
    Id string
    Labels Dictionary<string, string>
    Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
    Location string
    Parameters List<Pulumi.GoogleNative.Dataproc.V1.Inputs.TemplateParameter>
    Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
    Project string
    Version int
    Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
    Jobs []OrderedJobArgs
    The Directed Acyclic Graph of Jobs to submit.
    Placement WorkflowTemplatePlacementArgs
    WorkflowTemplate scheduling information.
    DagTimeout string
    Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
    EncryptionConfig GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs
    Optional. Encryption settings for encrypting workflow template job arguments.
    Id string
    Labels map[string]string
    Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
    Location string
    Parameters []TemplateParameterArgs
    Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
    Project string
    Version int
    Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
    jobs List<OrderedJob>
    The Directed Acyclic Graph of Jobs to submit.
    placement WorkflowTemplatePlacement
    WorkflowTemplate scheduling information.
    dagTimeout String
    Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
    encryptionConfig GoogleCloudDataprocV1WorkflowTemplateEncryptionConfig
    Optional. Encryption settings for encrypting workflow template job arguments.
    id String
    labels Map<String,String>
    Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
    location String
    parameters List<TemplateParameter>
    Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
    project String
    version Integer
    Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
    jobs OrderedJob[]
    The Directed Acyclic Graph of Jobs to submit.
    placement WorkflowTemplatePlacement
    WorkflowTemplate scheduling information.
    dagTimeout string
    Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
    encryptionConfig GoogleCloudDataprocV1WorkflowTemplateEncryptionConfig
    Optional. Encryption settings for encrypting workflow template job arguments.
    id string
    labels {[key: string]: string}
    Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
    location string
    parameters TemplateParameter[]
    Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
    project string
    version number
    Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
    jobs Sequence[OrderedJobArgs]
    The Directed Acyclic Graph of Jobs to submit.
    placement WorkflowTemplatePlacementArgs
    WorkflowTemplate scheduling information.
    dag_timeout str
    Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
    encryption_config GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs
    Optional. Encryption settings for encrypting workflow template job arguments.
    id str
    labels Mapping[str, str]
    Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
    location str
    parameters Sequence[TemplateParameterArgs]
    Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
    project str
    version int
    Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
    jobs List<Property Map>
    The Directed Acyclic Graph of Jobs to submit.
    placement Property Map
    WorkflowTemplate scheduling information.
    dagTimeout String
    Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
    encryptionConfig Property Map
    Optional. Encryption settings for encrypting workflow template job arguments.
    id String
    labels Map<String>
    Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
    location String
    parameters List<Property Map>
    Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
    project String
    version Number
    Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
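
    A minimal sketch of these inputs in Python, passing the object-typed properties as dictionary literals (the generated argument classes such as OrderedJobArgs work the same way). The project, region, template ID, labels, and job below are illustrative assumptions, not values taken from this reference.

    import pulumi_google_native as google_native

    # Sketch only: "my-project", "us-central1", the template ID, and the example
    # Hadoop job are assumed values for illustration.
    template = google_native.dataproc.v1.WorkflowTemplate(
        "example-template",
        id="example-template",
        project="my-project",
        location="us-central1",
        dag_timeout="1800s",  # must be between "600s" and "86400s"
        labels={"env": "dev"},
        placement={
            "cluster_selector": {
                "cluster_labels": {"goog-dataproc-cluster-name": "my-cluster"},
            },
        },
        jobs=[{
            "step_id": "teragen",
            "hadoop_job": {
                "main_jar_file_uri": "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
                "args": ["teragen", "1000", "hdfs:///gen/"],
            },
        }],
    )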

    Outputs

    All input properties are implicitly available as output properties. Additionally, the WorkflowTemplate resource produces the following output properties:

    CreateTime string
    The time template was created.
    Id string
    The provider-assigned unique ID for this managed resource.
    Name string
    The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
    UpdateTime string
    The time template was last updated.
    CreateTime string
    The time template was created.
    Id string
    The provider-assigned unique ID for this managed resource.
    Name string
    The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
    UpdateTime string
    The time template was last updated.
    createTime String
    The time template was created.
    id String
    The provider-assigned unique ID for this managed resource.
    name String
    The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
    updateTime String
    The time template was last updated.
    createTime string
    The time template was created.
    id string
    The provider-assigned unique ID for this managed resource.
    name string
    The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
    updateTime string
    The time template was last updated.
    create_time str
    The time template was created.
    id str
    The provider-assigned unique ID for this managed resource.
    name str
    The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
    update_time str
    The time template was last updated.
    createTime String
    The time template was created.
    id String
    The provider-assigned unique ID for this managed resource.
    name String
    The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
    updateTime String
    The time template was last updated.
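
    Continuing the sketch shown after the inputs list, the documented output properties can be read from the created resource and exported from the program.

    import pulumi  # in addition to the import in the earlier sketch

    # "template" is the WorkflowTemplate resource from the earlier sketch.
    pulumi.export("template_name", template.name)  # e.g. projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
    pulumi.export("template_create_time", template.create_time)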

    Supporting Types

    AcceleratorConfig, AcceleratorConfigArgs

    AcceleratorCount int
    The number of the accelerator cards of this type exposed to this instance.
    AcceleratorTypeUri string
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
    AcceleratorCount int
    The number of the accelerator cards of this type exposed to this instance.
    AcceleratorTypeUri string
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
    acceleratorCount Integer
    The number of the accelerator cards of this type exposed to this instance.
    acceleratorTypeUri String
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
    acceleratorCount number
    The number of the accelerator cards of this type exposed to this instance.
    acceleratorTypeUri string
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
    accelerator_count int
    The number of the accelerator cards of this type exposed to this instance.
    accelerator_type_uri str
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
    acceleratorCount Number
    The number of the accelerator cards of this type exposed to this instance.
    acceleratorTypeUri String
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
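
    For illustration, an accelerator entry inside an instance group config dictionary might look like the following; the machine type, count, and short accelerator name are assumptions, and the short-name form relies on Auto Zone Placement choosing the zone.

    # Fragment of a worker (or master) InstanceGroupConfig; values are examples only.
    worker_config = {
        "num_instances": 2,
        "machine_type_uri": "n1-standard-4",
        "accelerators": [{
            "accelerator_count": 1,
            # Short-name form; a full or partial acceleratorTypes URI also works
            # when a specific zone is configured.
            "accelerator_type_uri": "nvidia-tesla-t4",
        }],
    }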

    AcceleratorConfigResponse, AcceleratorConfigResponseArgs

    AcceleratorCount int
    The number of the accelerator cards of this type exposed to this instance.
    AcceleratorTypeUri string
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
    AcceleratorCount int
    The number of the accelerator cards of this type exposed to this instance.
    AcceleratorTypeUri string
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
    acceleratorCount Integer
    The number of the accelerator cards of this type exposed to this instance.
    acceleratorTypeUri String
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
    acceleratorCount number
    The number of the accelerator cards of this type exposed to this instance.
    acceleratorTypeUri string
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
    accelerator_count int
    The number of the accelerator cards of this type exposed to this instance.
    accelerator_type_uri str
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
    acceleratorCount Number
    The number of the accelerator cards of this type exposed to this instance.
    acceleratorTypeUri String
    Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/v1/acceleratorTypes). Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 projects/[project_id]/zones/[zone]/acceleratorTypes/nvidia-tesla-k80 nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

    AutoscalingConfig, AutoscalingConfigArgs

    PolicyUri string
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
    PolicyUri string
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
    policyUri String
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
    policyUri string
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
    policy_uri str
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
    policyUri String
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
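
    A short sketch of the partial resource name this field expects; the project, region, and policy ID below are assumptions, and the policy must live in the same project and region as the workflow's managed cluster.

    # The full https URL form listed above is also accepted.
    autoscaling_config = {
        "policy_uri": "projects/my-project/locations/us-central1/autoscalingPolicies/my-policy",
    }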

    AutoscalingConfigResponse, AutoscalingConfigResponseArgs

    PolicyUri string
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
    PolicyUri string
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
    policyUri String
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
    policyUri string
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
    policy_uri str
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.
    policyUri String
    Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] Note that the policy must be in the same project and Dataproc region.

    AuxiliaryNodeGroup, AuxiliaryNodeGroupArgs

    NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroup
    Node group configuration.
    NodeGroupId string
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
    NodeGroup NodeGroupType
    Node group configuration.
    NodeGroupId string
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
    nodeGroup NodeGroup
    Node group configuration.
    nodeGroupId String
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
    nodeGroup NodeGroup
    Node group configuration.
    nodeGroupId string
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
    node_group NodeGroup
    Node group configuration.
    node_group_id str
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
    nodeGroup Property Map
    Node group configuration.
    nodeGroupId String
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
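
    As a hedged example of the shape of this input, an auxiliary node group entry written as a dictionary could look like this; the DRIVER role, machine type, and node group ID are assumptions used only for illustration.

    auxiliary_node_group = {
        # 3 to 33 characters: letters, numbers, underscores, and hyphens,
        # not beginning or ending with an underscore or hyphen.
        "node_group_id": "driver-pool-0",
        "node_group": {
            "roles": ["DRIVER"],  # assumed role value
            "node_group_config": {
                "num_instances": 2,
                "machine_type_uri": "n1-standard-8",
            },
        },
    }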

    AuxiliaryNodeGroupResponse, AuxiliaryNodeGroupResponseArgs

    NodeGroup Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupResponse
    Node group configuration.
    NodeGroupId string
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
    NodeGroup NodeGroupResponse
    Node group configuration.
    NodeGroupId string
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
    nodeGroup NodeGroupResponse
    Node group configuration.
    nodeGroupId String
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
    nodeGroup NodeGroupResponse
    Node group configuration.
    nodeGroupId string
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
    node_group NodeGroupResponse
    Node group configuration.
    node_group_id str
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.
    nodeGroup Property Map
    Node group configuration.
    nodeGroupId String
    Optional. A node group ID. Generated if not specified. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of from 3 to 33 characters.

    ClusterConfig, ClusterConfigArgs

    AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfig
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroup>
    Optional. The node group settings.
    ConfigBucket string
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfig
    Optional. The config for Dataproc metrics.
    EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfig
    Optional. Encryption settings for the cluster.
    EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfig
    Optional. Port/endpoint configuration for this cluster
    GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfig
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfig
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationAction>
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfig
    Optional. Lifecycle setting for the cluster.
    MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
    Optional. The Compute Engine config settings for the cluster's master instance.
    MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfig
    Optional. Metastore configuration.
    SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
    Optional. The Compute Engine config settings for a cluster's secondary worker instances
    SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfig
    Optional. Security settings for the cluster.
    SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfig
    Optional. The config settings for cluster software.
    TempBucket string
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
    Optional. The Compute Engine config settings for the cluster's worker instances.
    AutoscalingConfig AutoscalingConfig
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    AuxiliaryNodeGroups []AuxiliaryNodeGroup
    Optional. The node group settings.
    ConfigBucket string
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    DataprocMetricConfig DataprocMetricConfig
    Optional. The config for Dataproc metrics.
    EncryptionConfig EncryptionConfig
    Optional. Encryption settings for the cluster.
    EndpointConfig EndpointConfig
    Optional. Port/endpoint configuration for this cluster
    GceClusterConfig GceClusterConfig
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    GkeClusterConfig GkeClusterConfig
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    InitializationActions []NodeInitializationAction
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    LifecycleConfig LifecycleConfig
    Optional. Lifecycle setting for the cluster.
    MasterConfig InstanceGroupConfig
    Optional. The Compute Engine config settings for the cluster's master instance.
    MetastoreConfig MetastoreConfig
    Optional. Metastore configuration.
    SecondaryWorkerConfig InstanceGroupConfig
    Optional. The Compute Engine config settings for a cluster's secondary worker instances.
    SecurityConfig SecurityConfig
    Optional. Security settings for the cluster.
    SoftwareConfig SoftwareConfig
    Optional. The config settings for cluster software.
    TempBucket string
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    WorkerConfig InstanceGroupConfig
    Optional. The Compute Engine config settings for the cluster's worker instances.
    autoscalingConfig AutoscalingConfig
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    auxiliaryNodeGroups List<AuxiliaryNodeGroup>
    Optional. The node group settings.
    configBucket String
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    dataprocMetricConfig DataprocMetricConfig
    Optional. The config for Dataproc metrics.
    encryptionConfig EncryptionConfig
    Optional. Encryption settings for the cluster.
    endpointConfig EndpointConfig
    Optional. Port/endpoint configuration for this cluster.
    gceClusterConfig GceClusterConfig
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    gkeClusterConfig GkeClusterConfig
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    initializationActions List<NodeInitializationAction>
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    lifecycleConfig LifecycleConfig
    Optional. Lifecycle setting for the cluster.
    masterConfig InstanceGroupConfig
    Optional. The Compute Engine config settings for the cluster's master instance.
    metastoreConfig MetastoreConfig
    Optional. Metastore configuration.
    secondaryWorkerConfig InstanceGroupConfig
    Optional. The Compute Engine config settings for a cluster's secondary worker instances.
    securityConfig SecurityConfig
    Optional. Security settings for the cluster.
    softwareConfig SoftwareConfig
    Optional. The config settings for cluster software.
    tempBucket String
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    workerConfig InstanceGroupConfig
    Optional. The Compute Engine config settings for the cluster's worker instances.
    autoscalingConfig AutoscalingConfig
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    auxiliaryNodeGroups AuxiliaryNodeGroup[]
    Optional. The node group settings.
    configBucket string
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    dataprocMetricConfig DataprocMetricConfig
    Optional. The config for Dataproc metrics.
    encryptionConfig EncryptionConfig
    Optional. Encryption settings for the cluster.
    endpointConfig EndpointConfig
    Optional. Port/endpoint configuration for this cluster.
    gceClusterConfig GceClusterConfig
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    gkeClusterConfig GkeClusterConfig
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    initializationActions NodeInitializationAction[]
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    lifecycleConfig LifecycleConfig
    Optional. Lifecycle setting for the cluster.
    masterConfig InstanceGroupConfig
    Optional. The Compute Engine config settings for the cluster's master instance.
    metastoreConfig MetastoreConfig
    Optional. Metastore configuration.
    secondaryWorkerConfig InstanceGroupConfig
    Optional. The Compute Engine config settings for a cluster's secondary worker instances.
    securityConfig SecurityConfig
    Optional. Security settings for the cluster.
    softwareConfig SoftwareConfig
    Optional. The config settings for cluster software.
    tempBucket string
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    workerConfig InstanceGroupConfig
    Optional. The Compute Engine config settings for the cluster's worker instances.
    autoscaling_config AutoscalingConfig
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    auxiliary_node_groups Sequence[AuxiliaryNodeGroup]
    Optional. The node group settings.
    config_bucket str
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    dataproc_metric_config DataprocMetricConfig
    Optional. The config for Dataproc metrics.
    encryption_config EncryptionConfig
    Optional. Encryption settings for the cluster.
    endpoint_config EndpointConfig
    Optional. Port/endpoint configuration for this cluster.
    gce_cluster_config GceClusterConfig
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    gke_cluster_config GkeClusterConfig
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    initialization_actions Sequence[NodeInitializationAction]
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    lifecycle_config LifecycleConfig
    Optional. Lifecycle setting for the cluster.
    master_config InstanceGroupConfig
    Optional. The Compute Engine config settings for the cluster's master instance.
    metastore_config MetastoreConfig
    Optional. Metastore configuration.
    secondary_worker_config InstanceGroupConfig
    Optional. The Compute Engine config settings for a cluster's secondary worker instances.
    security_config SecurityConfig
    Optional. Security settings for the cluster.
    software_config SoftwareConfig
    Optional. The config settings for cluster software.
    temp_bucket str
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    worker_config InstanceGroupConfig
    Optional. The Compute Engine config settings for the cluster's worker instances.
    autoscalingConfig Property Map
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    auxiliaryNodeGroups List<Property Map>
    Optional. The node group settings.
    configBucket String
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    dataprocMetricConfig Property Map
    Optional. The config for Dataproc metrics.
    encryptionConfig Property Map
    Optional. Encryption settings for the cluster.
    endpointConfig Property Map
    Optional. Port/endpoint configuration for this cluster.
    gceClusterConfig Property Map
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    gkeClusterConfig Property Map
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    initializationActions List<Property Map>
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    lifecycleConfig Property Map
    Optional. Lifecycle setting for the cluster.
    masterConfig Property Map
    Optional. The Compute Engine config settings for the cluster's master instance.
    metastoreConfig Property Map
    Optional. Metastore configuration.
    secondaryWorkerConfig Property Map
    Optional. The Compute Engine config settings for a cluster's secondary worker instances.
    securityConfig Property Map
    Optional. Security settings for the cluster.
    softwareConfig Property Map
    Optional. The config settings for cluster software.
    tempBucket String
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    workerConfig Property Map
    Optional. The Compute Engine config settings for the cluster's worker instances.
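
    Taken together, the ClusterConfig fields above describe the managed cluster a workflow can create. The following minimal TypeScript sketch shows how a few of them might be wired into a template's placement; the bucket names, region, machine types, and job details are placeholders, not values from this reference.

    import * as google_native from "@pulumi/google-native";

    // Sketch only: a workflow template whose placement creates a managed cluster.
    const template = new google_native.dataproc.v1.WorkflowTemplate("etl-template", {
        id: "etl-template",
        location: "us-central1",
        placement: {
            managedCluster: {
                clusterName: "etl-cluster",
                config: {
                    configBucket: "my-staging-bucket",   // bucket name, not a gs:// URI
                    tempBucket: "my-temp-bucket",        // bucket name, not a gs:// URI
                    gceClusterConfig: { zoneUri: "us-central1-a" },
                    masterConfig: { numInstances: 1, machineTypeUri: "n1-standard-4" },
                    workerConfig: { numInstances: 2, machineTypeUri: "n1-standard-4" },
                    initializationActions: [{
                        executableFile: "gs://my-init-bucket/setup.sh",
                        executionTimeout: "600s",
                    }],
                },
            },
        },
        jobs: [{
            stepId: "spark-step",
            sparkJob: {
                mainClass: "org.example.Main",
                jarFileUris: ["gs://my-jars/app.jar"],
            },
        }],
    });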

    ClusterConfigResponse, ClusterConfigResponseArgs

    AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.AutoscalingConfigResponse
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    AuxiliaryNodeGroups List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AuxiliaryNodeGroupResponse>
    Optional. The node group settings.
    ConfigBucket string
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    DataprocMetricConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DataprocMetricConfigResponse
    Optional. The config for Dataproc metrics.
    EncryptionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EncryptionConfigResponse
    Optional. Encryption settings for the cluster.
    EndpointConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EndpointConfigResponse
    Optional. Port/endpoint configuration for this cluster.
    GceClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GceClusterConfigResponse
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeClusterConfigResponse
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    InitializationActions List<Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeInitializationActionResponse>
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    LifecycleConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LifecycleConfigResponse
    Optional. Lifecycle setting for the cluster.
    MasterConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for the cluster's master instance.
    MetastoreConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.MetastoreConfigResponse
    Optional. Metastore configuration.
    SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for a cluster's secondary worker instances.
    SecurityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SecurityConfigResponse
    Optional. Security settings for the cluster.
    SoftwareConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SoftwareConfigResponse
    Optional. The config settings for cluster software.
    TempBucket string
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    WorkerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for the cluster's worker instances.
    AutoscalingConfig AutoscalingConfigResponse
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    AuxiliaryNodeGroups []AuxiliaryNodeGroupResponse
    Optional. The node group settings.
    ConfigBucket string
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    DataprocMetricConfig DataprocMetricConfigResponse
    Optional. The config for Dataproc metrics.
    EncryptionConfig EncryptionConfigResponse
    Optional. Encryption settings for the cluster.
    EndpointConfig EndpointConfigResponse
    Optional. Port/endpoint configuration for this cluster.
    GceClusterConfig GceClusterConfigResponse
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    GkeClusterConfig GkeClusterConfigResponse
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    InitializationActions []NodeInitializationActionResponse
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    LifecycleConfig LifecycleConfigResponse
    Optional. Lifecycle setting for the cluster.
    MasterConfig InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for the cluster's master instance.
    MetastoreConfig MetastoreConfigResponse
    Optional. Metastore configuration.
    SecondaryWorkerConfig InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for a cluster's secondary worker instances.
    SecurityConfig SecurityConfigResponse
    Optional. Security settings for the cluster.
    SoftwareConfig SoftwareConfigResponse
    Optional. The config settings for cluster software.
    TempBucket string
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    WorkerConfig InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for the cluster's worker instances.
    autoscalingConfig AutoscalingConfigResponse
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    auxiliaryNodeGroups List<AuxiliaryNodeGroupResponse>
    Optional. The node group settings.
    configBucket String
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    dataprocMetricConfig DataprocMetricConfigResponse
    Optional. The config for Dataproc metrics.
    encryptionConfig EncryptionConfigResponse
    Optional. Encryption settings for the cluster.
    endpointConfig EndpointConfigResponse
    Optional. Port/endpoint configuration for this cluster.
    gceClusterConfig GceClusterConfigResponse
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    gkeClusterConfig GkeClusterConfigResponse
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    initializationActions List<NodeInitializationActionResponse>
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    lifecycleConfig LifecycleConfigResponse
    Optional. Lifecycle setting for the cluster.
    masterConfig InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for the cluster's master instance.
    metastoreConfig MetastoreConfigResponse
    Optional. Metastore configuration.
    secondaryWorkerConfig InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for a cluster's secondary worker instances.
    securityConfig SecurityConfigResponse
    Optional. Security settings for the cluster.
    softwareConfig SoftwareConfigResponse
    Optional. The config settings for cluster software.
    tempBucket String
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    workerConfig InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for the cluster's worker instances.
    autoscalingConfig AutoscalingConfigResponse
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    auxiliaryNodeGroups AuxiliaryNodeGroupResponse[]
    Optional. The node group settings.
    configBucket string
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    dataprocMetricConfig DataprocMetricConfigResponse
    Optional. The config for Dataproc metrics.
    encryptionConfig EncryptionConfigResponse
    Optional. Encryption settings for the cluster.
    endpointConfig EndpointConfigResponse
    Optional. Port/endpoint configuration for this cluster.
    gceClusterConfig GceClusterConfigResponse
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    gkeClusterConfig GkeClusterConfigResponse
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    initializationActions NodeInitializationActionResponse[]
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    lifecycleConfig LifecycleConfigResponse
    Optional. Lifecycle setting for the cluster.
    masterConfig InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for the cluster's master instance.
    metastoreConfig MetastoreConfigResponse
    Optional. Metastore configuration.
    secondaryWorkerConfig InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for a cluster's secondary worker instances.
    securityConfig SecurityConfigResponse
    Optional. Security settings for the cluster.
    softwareConfig SoftwareConfigResponse
    Optional. The config settings for cluster software.
    tempBucket string
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    workerConfig InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for the cluster's worker instances.
    autoscaling_config AutoscalingConfigResponse
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    auxiliary_node_groups Sequence[AuxiliaryNodeGroupResponse]
    Optional. The node group settings.
    config_bucket str
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    dataproc_metric_config DataprocMetricConfigResponse
    Optional. The config for Dataproc metrics.
    encryption_config EncryptionConfigResponse
    Optional. Encryption settings for the cluster.
    endpoint_config EndpointConfigResponse
    Optional. Port/endpoint configuration for this cluster.
    gce_cluster_config GceClusterConfigResponse
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    gke_cluster_config GkeClusterConfigResponse
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    initialization_actions Sequence[NodeInitializationActionResponse]
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    lifecycle_config LifecycleConfigResponse
    Optional. Lifecycle setting for the cluster.
    master_config InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for the cluster's master instance.
    metastore_config MetastoreConfigResponse
    Optional. Metastore configuration.
    secondary_worker_config InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for a cluster's secondary worker instances.
    security_config SecurityConfigResponse
    Optional. Security settings for the cluster.
    software_config SoftwareConfigResponse
    Optional. The config settings for cluster software.
    temp_bucket str
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    worker_config InstanceGroupConfigResponse
    Optional. The Compute Engine config settings for the cluster's worker instances.
    autoscalingConfig Property Map
    Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
    auxiliaryNodeGroups List<Property Map>
    Optional. The node group settings.
    configBucket String
    Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    dataprocMetricConfig Property Map
    Optional. The config for Dataproc metrics.
    encryptionConfig Property Map
    Optional. Encryption settings for the cluster.
    endpointConfig Property Map
    Optional. Port/endpoint configuration for this cluster.
    gceClusterConfig Property Map
    Optional. The shared Compute Engine config settings for all instances in a cluster.
    gkeClusterConfig Property Map
    Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options, such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
    initializationActions List<Property Map>
    Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
    lifecycleConfig Property Map
    Optional. Lifecycle setting for the cluster.
    masterConfig Property Map
    Optional. The Compute Engine config settings for the cluster's master instance.
    metastoreConfig Property Map
    Optional. Metastore configuration.
    secondaryWorkerConfig Property Map
    Optional. The Compute Engine config settings for a cluster's secondary worker instances.
    securityConfig Property Map
    Optional. Security settings for the cluster.
    softwareConfig Property Map
    Optional. The config settings for cluster software.
    tempBucket String
    Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
    workerConfig Property Map
    Optional. The Compute Engine config settings for the cluster's worker instances.

    ClusterSelector, ClusterSelectorArgs

    ClusterLabels Dictionary<string, string>
    The cluster labels. Cluster must have all labels to match.
    Zone string
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
    ClusterLabels map[string]string
    The cluster labels. Cluster must have all labels to match.
    Zone string
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
    clusterLabels Map<String,String>
    The cluster labels. Cluster must have all labels to match.
    zone String
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
    clusterLabels {[key: string]: string}
    The cluster labels. Cluster must have all labels to match.
    zone string
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
    cluster_labels Mapping[str, str]
    The cluster labels. Cluster must have all labels to match.
    zone str
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
    clusterLabels Map<String>
    The cluster labels. Cluster must have all labels to match.
    zone String
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
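
    As an alternative to creating a managed cluster, a placement can select an existing cluster by label. A hedged TypeScript sketch follows; the label key/value, query file, and region are placeholders.

    import * as google_native from "@pulumi/google-native";

    // Sketch only: run workflow jobs on an existing cluster matched by labels.
    const selectorTemplate = new google_native.dataproc.v1.WorkflowTemplate("selector-template", {
        id: "selector-template",
        location: "us-central1",
        placement: {
            clusterSelector: {
                clusterLabels: { env: "prod-analytics" },
                // zone only affects where the workflow process runs,
                // not which cluster is selected
                zone: "us-central1-a",
            },
        },
        jobs: [{
            stepId: "hive-step",
            hiveJob: { queryFileUri: "gs://my-queries/report.hql" },
        }],
    });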

    ClusterSelectorResponse, ClusterSelectorResponseArgs

    ClusterLabels Dictionary<string, string>
    The cluster labels. Cluster must have all labels to match.
    Zone string
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
    ClusterLabels map[string]string
    The cluster labels. Cluster must have all labels to match.
    Zone string
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
    clusterLabels Map<String,String>
    The cluster labels. Cluster must have all labels to match.
    zone String
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
    clusterLabels {[key: string]: string}
    The cluster labels. Cluster must have all labels to match.
    zone string
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
    cluster_labels Mapping[str, str]
    The cluster labels. Cluster must have all labels to match.
    zone str
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
    clusterLabels Map<String>
    The cluster labels. Cluster must have all labels to match.
    zone String
    Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.

    ConfidentialInstanceConfig, ConfidentialInstanceConfigArgs

    EnableConfidentialCompute bool
    Optional. Defines whether the instance should have confidential compute enabled.
    EnableConfidentialCompute bool
    Optional. Defines whether the instance should have confidential compute enabled.
    enableConfidentialCompute Boolean
    Optional. Defines whether the instance should have confidential compute enabled.
    enableConfidentialCompute boolean
    Optional. Defines whether the instance should have confidential compute enabled.
    enable_confidential_compute bool
    Optional. Defines whether the instance should have confidential compute enabled.
    enableConfidentialCompute Boolean
    Optional. Defines whether the instance should have confidential compute enabled.
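
    For illustration, a confidential-compute cluster config might look like the fragment below. It assumes a machine family that supports Confidential VMs (the N2D types shown are placeholders), and the object would be supplied as placement.managedCluster.config when declaring the WorkflowTemplate.

    // Sketch only: enable Confidential Computing on a managed cluster's VMs.
    const confidentialClusterConfig = {
        gceClusterConfig: {
            zoneUri: "us-central1-a",
            confidentialInstanceConfig: { enableConfidentialCompute: true },
        },
        masterConfig: { numInstances: 1, machineTypeUri: "n2d-standard-4" },
        workerConfig: { numInstances: 2, machineTypeUri: "n2d-standard-4" },
    };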

    ConfidentialInstanceConfigResponse, ConfidentialInstanceConfigResponseArgs

    EnableConfidentialCompute bool
    Optional. Defines whether the instance should have confidential compute enabled.
    EnableConfidentialCompute bool
    Optional. Defines whether the instance should have confidential compute enabled.
    enableConfidentialCompute Boolean
    Optional. Defines whether the instance should have confidential compute enabled.
    enableConfidentialCompute boolean
    Optional. Defines whether the instance should have confidential compute enabled.
    enable_confidential_compute bool
    Optional. Defines whether the instance should have confidential compute enabled.
    enableConfidentialCompute Boolean
    Optional. Defines whether the instance should have confidential compute enabled.

    DataprocMetricConfig, DataprocMetricConfigArgs

    Metrics []Metric
    Metrics sources to enable.
    metrics List<Metric>
    Metrics sources to enable.
    metrics Metric[]
    Metrics sources to enable.
    metrics Sequence[Metric]
    Metrics sources to enable.
    metrics List<Property Map>
    Metrics sources to enable.
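
    A short TypeScript sketch of a metrics block follows; the metric sources and the override string are illustrative, and the object would be set as placement.managedCluster.config.dataprocMetricConfig.

    // Sketch only: collect Spark metrics plus one specific HDFS metric.
    const metricConfig = {
        metrics: [
            { metricSource: "SPARK" },
            {
                metricSource: "HDFS",
                metricOverrides: ["hdfs:NameNode:FSNamesystem:CapacityTotalGB"],
            },
        ],
    };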

    DataprocMetricConfigResponse, DataprocMetricConfigResponseArgs

    Metrics []MetricResponse
    Metrics sources to enable.
    metrics List<MetricResponse>
    Metrics sources to enable.
    metrics MetricResponse[]
    Metrics sources to enable.
    metrics Sequence[MetricResponse]
    Metrics sources to enable.
    metrics List<Property Map>
    Metrics sources to enable.

    DiskConfig, DiskConfigArgs

    BootDiskSizeGb int
    Optional. Size in GB of the boot disk (default is 500GB).
    BootDiskType string
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    LocalSsdInterface string
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    NumLocalSsds int
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
    BootDiskSizeGb int
    Optional. Size in GB of the boot disk (default is 500GB).
    BootDiskType string
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    LocalSsdInterface string
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    NumLocalSsds int
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
    bootDiskSizeGb Integer
    Optional. Size in GB of the boot disk (default is 500GB).
    bootDiskType String
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    localSsdInterface String
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    numLocalSsds Integer
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
    bootDiskSizeGb number
    Optional. Size in GB of the boot disk (default is 500GB).
    bootDiskType string
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    localSsdInterface string
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    numLocalSsds number
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
    boot_disk_size_gb int
    Optional. Size in GB of the boot disk (default is 500GB).
    boot_disk_type str
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    local_ssd_interface str
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    num_local_ssds int
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
    bootDiskSizeGb Number
    Optional. Size in GB of the boot disk (default is 500GB).
    bootDiskType String
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    localSsdInterface String
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    numLocalSsds Number
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
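
    For example, a worker instance group might attach NVMe local SSDs and an SSD boot disk as sketched below; the values are placeholders chosen within the documented ranges, and the object would be set as placement.managedCluster.config.workerConfig.

    // Sketch only: SSD boot disk plus two NVMe local SSDs per worker.
    const workerConfig = {
        numInstances: 2,
        machineTypeUri: "n1-standard-8",
        diskConfig: {
            bootDiskType: "pd-ssd",
            bootDiskSizeGb: 200,
            numLocalSsds: 2,
            localSsdInterface: "nvme",
        },
    };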

    DiskConfigResponse, DiskConfigResponseArgs

    BootDiskSizeGb int
    Optional. Size in GB of the boot disk (default is 500GB).
    BootDiskType string
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    LocalSsdInterface string
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    NumLocalSsds int
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
    BootDiskSizeGb int
    Optional. Size in GB of the boot disk (default is 500GB).
    BootDiskType string
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    LocalSsdInterface string
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    NumLocalSsds int
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
    bootDiskSizeGb Integer
    Optional. Size in GB of the boot disk (default is 500GB).
    bootDiskType String
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    localSsdInterface String
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    numLocalSsds Integer
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
    bootDiskSizeGb number
    Optional. Size in GB of the boot disk (default is 500GB).
    bootDiskType string
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    localSsdInterface string
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    numLocalSsds number
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
    boot_disk_size_gb int
    Optional. Size in GB of the boot disk (default is 500GB).
    boot_disk_type str
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    local_ssd_interface str
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    num_local_ssds int
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
    bootDiskSizeGb Number
    Optional. Size in GB of the boot disk (default is 500GB).
    bootDiskType String
    Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
    localSsdInterface String
    Optional. Interface type of local SSDs (default is "scsi"). Valid values: "scsi" (Small Computer System Interface), "nvme" (Non-Volatile Memory Express). See local SSD performance (https://cloud.google.com/compute/docs/disks/local-ssd#performance).
    numLocalSsds Number
    Optional. Number of attached SSDs, from 0 to 8 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries. Note: Local SSD options may vary by machine type and number of vCPUs selected.
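
    A minimal TypeScript sketch of how the disk fields above might be supplied when declaring a WorkflowTemplate. Only the diskConfig field names come from this table; the surrounding nesting (placement.managedCluster.config.workerConfig), the template id, region, cluster name, and the placeholder step are illustrative assumptions.

    import * as google_native from "@pulumi/google-native";

    // Sketch: a workflow template whose managed cluster gives each worker a 500 GB
    // pd-ssd boot disk plus two NVMe local SSDs for HDFS and shuffle data.
    const template = new google_native.dataproc.v1.WorkflowTemplate("disk-config-example", {
        id: "disk-config-example",
        location: "us-central1",
        placement: {
            managedCluster: {
                clusterName: "ephemeral-cluster",
                config: {
                    workerConfig: {
                        numInstances: 2,
                        diskConfig: {
                            bootDiskSizeGb: 500,        // default is 500
                            bootDiskType: "pd-ssd",     // "pd-standard" | "pd-balanced" | "pd-ssd"
                            numLocalSsds: 2,            // 0-8; 0 keeps HDFS on the boot disk
                            localSsdInterface: "nvme",  // "scsi" (default) or "nvme"
                        },
                    },
                },
            },
        },
        jobs: [{ stepId: "noop", sparkSqlJob: { queryList: { queries: ["SELECT 1"] } } }],
    });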

    EncryptionConfig, EncryptionConfigArgs

    GcePdKmsKeyName string
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    KmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
    GcePdKmsKeyName string
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    KmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
    gcePdKmsKeyName String
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    kmsKey String
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
    gcePdKmsKeyName string
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    kmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
    gce_pd_kms_key_name str
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    kms_key str
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
    gcePdKmsKeyName String
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    kmsKey String
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
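
    For orientation, a TypeScript sketch of the cluster-level CMEK settings described above. This EncryptionConfig (gcePdKmsKeyName/kmsKey) belongs to the managed cluster's config rather than to the template's own encryption setting; the project, key ring, and key names are placeholders, and the nesting path is an assumption.

    // Sketch: customer-managed encryption keys for the template's managed cluster.
    const clusterEncryptionConfig = {
        gcePdKmsKeyName: "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/pd-key",
        kmsKey: "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/cluster-key",
    };
    // Typically passed as placement.managedCluster.config.encryptionConfig when
    // constructing the WorkflowTemplate resource.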

    EncryptionConfigResponse, EncryptionConfigResponseArgs

    GcePdKmsKeyName string
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    KmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
    GcePdKmsKeyName string
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    KmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
    gcePdKmsKeyName String
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    kmsKey String
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
    gcePdKmsKeyName string
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    kmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
    gce_pd_kms_key_name str
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    kms_key str
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.
    gcePdKmsKeyName String
    Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
    kmsKey String
    Optional. The Cloud KMS key name to use for encrypting customer core content in spanner and cluster PD disk for all instances in the cluster.

    EndpointConfig, EndpointConfigArgs

    EnableHttpPortAccess bool
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
    EnableHttpPortAccess bool
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
    enableHttpPortAccess Boolean
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
    enableHttpPortAccess boolean
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
    enable_http_port_access bool
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
    enableHttpPortAccess Boolean
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
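
    A short TypeScript sketch of the single flag above, shown inside a cluster config fragment (the surrounding nesting is assumed). The httpPorts map appears only on the response type below.

    // Sketch: turn on HTTP access to the cluster's web UIs for the managed cluster.
    const clusterConfigFragment = {
        endpointConfig: {
            enableHttpPortAccess: true, // defaults to false
        },
    };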

    EndpointConfigResponse, EndpointConfigResponseArgs

    EnableHttpPortAccess bool
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
    HttpPorts Dictionary<string, string>
    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
    EnableHttpPortAccess bool
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
    HttpPorts map[string]string
    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
    enableHttpPortAccess Boolean
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
    httpPorts Map<String,String>
    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
    enableHttpPortAccess boolean
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
    httpPorts {[key: string]: string}
    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
    enable_http_port_access bool
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
    http_ports Mapping[str, str]
    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
    enableHttpPortAccess Boolean
    Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
    httpPorts Map<String>
    The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.

    FlinkJob, FlinkJobArgs

    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    SavepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    SavepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri String
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    main_jar_file_uri str
    The HCFS URI of the jar file that contains the main class.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepoint_uri str
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri String
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
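
    A hedged TypeScript sketch of one entry in the template's jobs list carrying a FlinkJob. The stepId, gs:// URIs, and property values are placeholders; exactly one of mainJarFileUri or mainClass (with jarFileUris) is normally supplied, and the loggingConfig shape follows the LoggingConfig type documented elsewhere on this page.

    // Sketch: an ordered workflow step running a Flink job from a jar in Cloud Storage.
    const flinkStep = {
        stepId: "flink-wordcount",
        flinkJob: {
            mainJarFileUri: "gs://my-bucket/jars/wordcount.jar", // or mainClass + jarFileUris
            args: ["--input", "gs://my-bucket/input/", "--output", "gs://my-bucket/output/"],
            jarFileUris: ["gs://my-bucket/jars/extra-lib.jar"],
            properties: { "taskmanager.numberOfTaskSlots": "2" },
            savepointUri: "gs://my-bucket/savepoints/wordcount",
            loggingConfig: { driverLogLevels: { root: "INFO" } },
        },
    };
    // flinkStep would be appended to the jobs array passed to the WorkflowTemplate constructor.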

    FlinkJobResponse, FlinkJobResponseArgs

    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    SavepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    SavepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri String
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri string
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    main_jar_file_uri str
    The HCFS URI of the jar file that contains the main class.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepoint_uri str
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Flink driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Flink. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/flink/conf/flink-defaults.conf and classes in user code.
    savepointUri String
    Optional. HCFS URI of the savepoint, which contains the last saved progress for starting the current job.

    GceClusterConfig, GceClusterConfigArgs

    ConfidentialInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfig
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    InternalIpOnly bool
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    Metadata Dictionary<string, string>
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    NetworkUri string
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinity
    Optional. Node Group Affinity for sole-tenant clusters.
    PrivateIpv6GoogleAccess Pulumi.GoogleNative.Dataproc.V1.GceClusterConfigPrivateIpv6GoogleAccess
    Optional. The type of IPv6 access for a cluster.
    ReservationAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.ReservationAffinity
    Optional. Reservation Affinity for consuming Zonal reservation.
    ServiceAccount string
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    ServiceAccountScopes List<string>
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfig
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    SubnetworkUri string
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    Tags List<string>
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    ZoneUri string
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
    ConfidentialInstanceConfig ConfidentialInstanceConfig
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    InternalIpOnly bool
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    Metadata map[string]string
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    NetworkUri string
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    NodeGroupAffinity NodeGroupAffinity
    Optional. Node Group Affinity for sole-tenant clusters.
    PrivateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
    Optional. The type of IPv6 access for a cluster.
    ReservationAffinity ReservationAffinity
    Optional. Reservation Affinity for consuming Zonal reservation.
    ServiceAccount string
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    ServiceAccountScopes []string
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    ShieldedInstanceConfig ShieldedInstanceConfig
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    SubnetworkUri string
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    Tags []string
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    ZoneUri string
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
    confidentialInstanceConfig ConfidentialInstanceConfig
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    internalIpOnly Boolean
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    metadata Map<String,String>
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    networkUri String
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    nodeGroupAffinity NodeGroupAffinity
    Optional. Node Group Affinity for sole-tenant clusters.
    privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
    Optional. The type of IPv6 access for a cluster.
    reservationAffinity ReservationAffinity
    Optional. Reservation Affinity for consuming Zonal reservation.
    serviceAccount String
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    serviceAccountScopes List<String>
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    shieldedInstanceConfig ShieldedInstanceConfig
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    subnetworkUri String
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    tags List<String>
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    zoneUri String
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
    confidentialInstanceConfig ConfidentialInstanceConfig
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    internalIpOnly boolean
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    metadata {[key: string]: string}
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    networkUri string
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    nodeGroupAffinity NodeGroupAffinity
    Optional. Node Group Affinity for sole-tenant clusters.
    privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
    Optional. The type of IPv6 access for a cluster.
    reservationAffinity ReservationAffinity
    Optional. Reservation Affinity for consuming Zonal reservation.
    serviceAccount string
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    serviceAccountScopes string[]
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    shieldedInstanceConfig ShieldedInstanceConfig
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    subnetworkUri string
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    tags string[]
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    zoneUri string
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
    confidential_instance_config ConfidentialInstanceConfig
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    internal_ip_only bool
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    metadata Mapping[str, str]
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    network_uri str
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    node_group_affinity NodeGroupAffinity
    Optional. Node Group Affinity for sole-tenant clusters.
    private_ipv6_google_access GceClusterConfigPrivateIpv6GoogleAccess
    Optional. The type of IPv6 access for a cluster.
    reservation_affinity ReservationAffinity
    Optional. Reservation Affinity for consuming Zonal reservation.
    service_account str
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    service_account_scopes Sequence[str]
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    shielded_instance_config ShieldedInstanceConfig
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    subnetwork_uri str
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    tags Sequence[str]
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    zone_uri str
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
    confidentialInstanceConfig Property Map
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    internalIpOnly Boolean
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    metadata Map<String>
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    networkUri String
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    nodeGroupAffinity Property Map
    Optional. Node Group Affinity for sole-tenant clusters.
    privateIpv6GoogleAccess "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED" | "INHERIT_FROM_SUBNETWORK" | "OUTBOUND" | "BIDIRECTIONAL"
    Optional. The type of IPv6 access for a cluster.
    reservationAffinity Property Map
    Optional. Reservation Affinity for consuming Zonal reservation.
    serviceAccount String
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    serviceAccountScopes List<String>
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    shieldedInstanceConfig Property Map
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    subnetworkUri String
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    tags List<String>
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    zoneUri String
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
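
    A TypeScript sketch of a gceClusterConfig for the template's managed cluster: an internal-IP-only cluster on a custom subnetwork with a dedicated VM service account. All project, subnetwork, and account names are placeholders, and the nesting under placement.managedCluster.config.gceClusterConfig is an assumption; the privateIpv6GoogleAccess string is one of the enum values listed in the next section.

    // Sketch: network and identity settings for the managed cluster's VMs.
    const gceClusterConfig = {
        zoneUri: "us-central1-b",  // short name form; full and partial URIs also work
        subnetworkUri: "projects/my-project/regions/us-central1/subnetworks/dataproc-subnet",
        internalIpOnly: true,      // no ephemeral external IPs on cluster VMs
        serviceAccount: "dataproc-vm@my-project.iam.gserviceaccount.com",
        serviceAccountScopes: ["https://www.googleapis.com/auth/cloud-platform"],
        tags: ["dataproc", "workflow"],
        metadata: { "enable-oslogin": "true" },
        privateIpv6GoogleAccess: "OUTBOUND",
    };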

    GceClusterConfigPrivateIpv6GoogleAccess, GceClusterConfigPrivateIpv6GoogleAccessArgs

    PrivateIpv6GoogleAccessUnspecified
    PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
    InheritFromSubnetwork
    INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
    Outbound
    OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
    Bidirectional
    BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
    GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified
    PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
    GceClusterConfigPrivateIpv6GoogleAccessInheritFromSubnetwork
    INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
    GceClusterConfigPrivateIpv6GoogleAccessOutbound
    OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
    GceClusterConfigPrivateIpv6GoogleAccessBidirectional
    BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
    PrivateIpv6GoogleAccessUnspecified
    PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
    InheritFromSubnetwork
    INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
    Outbound
    OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
    Bidirectional
    BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
    PrivateIpv6GoogleAccessUnspecified
    PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
    InheritFromSubnetwork
    INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
    Outbound
    OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
    Bidirectional
    BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
    PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
    PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
    INHERIT_FROM_SUBNETWORK
    INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
    OUTBOUND
    OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
    BIDIRECTIONAL
    BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
    "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED"
    PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
    "INHERIT_FROM_SUBNETWORK"
    INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
    "OUTBOUND"
    OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
    "BIDIRECTIONAL"
    BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
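
    Depending on the SDK, the value can be supplied either through the generated enum member or as the raw string shown above. A TypeScript sketch, assuming the enum is re-exported from the dataproc.v1 module (the exact export path may differ):

    import * as google_native from "@pulumi/google-native";

    // Either form should be accepted where GceClusterConfigPrivateIpv6GoogleAccess is expected.
    const viaEnum = google_native.dataproc.v1.GceClusterConfigPrivateIpv6GoogleAccess.Outbound;
    const viaString = "OUTBOUND";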

    GceClusterConfigResponse, GceClusterConfigResponseArgs

    ConfidentialInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ConfidentialInstanceConfigResponse
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    InternalIpOnly bool
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    Metadata Dictionary<string, string>
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    NetworkUri string
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.NodeGroupAffinityResponse
    Optional. Node Group Affinity for sole-tenant clusters.
    PrivateIpv6GoogleAccess string
    Optional. The type of IPv6 access for a cluster.
    ReservationAffinity Pulumi.GoogleNative.Dataproc.V1.Inputs.ReservationAffinityResponse
    Optional. Reservation Affinity for consuming Zonal reservation.
    ServiceAccount string
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services. If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    ServiceAccountScopes List<string>
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ShieldedInstanceConfigResponse
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    SubnetworkUri string
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    Tags List<string>
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    ZoneUri string
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
    ConfidentialInstanceConfig ConfidentialInstanceConfigResponse
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    InternalIpOnly bool
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    Metadata map[string]string
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    NetworkUri string
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    NodeGroupAffinity NodeGroupAffinityResponse
    Optional. Node Group Affinity for sole-tenant clusters.
    PrivateIpv6GoogleAccess string
    Optional. The type of IPv6 access for a cluster.
    ReservationAffinity ReservationAffinityResponse
    Optional. Reservation Affinity for consuming Zonal reservation.
    ServiceAccount string
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    ServiceAccountScopes []string
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    ShieldedInstanceConfig ShieldedInstanceConfigResponse
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    SubnetworkUri string
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    Tags []string
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    ZoneUri string
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
    confidentialInstanceConfig ConfidentialInstanceConfigResponse
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    internalIpOnly Boolean
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    metadata Map<String,String>
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    networkUri String
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    nodeGroupAffinity NodeGroupAffinityResponse
    Optional. Node Group Affinity for sole-tenant clusters.
    privateIpv6GoogleAccess String
    Optional. The type of IPv6 access for a cluster.
    reservationAffinity ReservationAffinityResponse
    Optional. Reservation Affinity for consuming Zonal reservation.
    serviceAccount String
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    serviceAccountScopes List<String>
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    shieldedInstanceConfig ShieldedInstanceConfigResponse
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    subnetworkUri String
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    tags List<String>
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    zoneUri String
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
    confidentialInstanceConfig ConfidentialInstanceConfigResponse
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    internalIpOnly boolean
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    metadata {[key: string]: string}
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    networkUri string
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    nodeGroupAffinity NodeGroupAffinityResponse
    Optional. Node Group Affinity for sole-tenant clusters.
    privateIpv6GoogleAccess string
    Optional. The type of IPv6 access for a cluster.
    reservationAffinity ReservationAffinityResponse
    Optional. Reservation Affinity for consuming Zonal reservation.
    serviceAccount string
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    serviceAccountScopes string[]
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    shieldedInstanceConfig ShieldedInstanceConfigResponse
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    subnetworkUri string
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    tags string[]
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    zoneUri string
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
    confidential_instance_config ConfidentialInstanceConfigResponse
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    internal_ip_only bool
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    metadata Mapping[str, str]
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    network_uri str
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    node_group_affinity NodeGroupAffinityResponse
    Optional. Node Group Affinity for sole-tenant clusters.
    private_ipv6_google_access str
    Optional. The type of IPv6 access for a cluster.
    reservation_affinity ReservationAffinityResponse
    Optional. Reservation Affinity for consuming Zonal reservation.
    service_account str
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    service_account_scopes Sequence[str]
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    shielded_instance_config ShieldedInstanceConfigResponse
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    subnetwork_uri str
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    tags Sequence[str]
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    zone_uri str
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
    confidentialInstanceConfig Property Map
    Optional. Confidential Instance Config for clusters using Confidential VMs (https://cloud.google.com/compute/confidential-vm/docs).
    internalIpOnly Boolean
    Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork-enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
    metadata Map<String>
    Optional. The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
    networkUri String
    Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/networks/default projects/[project_id]/global/networks/default default
    nodeGroupAffinity Property Map
    Optional. Node Group Affinity for sole-tenant clusters.
    privateIpv6GoogleAccess String
    Optional. The type of IPv6 access for a cluster.
    reservationAffinity Property Map
    Optional. Reservation Affinity for consuming Zonal reservation.
    serviceAccount String
    Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
    serviceAccountScopes List<String>
    Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
    shieldedInstanceConfig Property Map
    Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
    subnetworkUri String
    Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/[region]/subnetworks/sub0 projects/[project_id]/regions/[region]/subnetworks/sub0 sub0
    tags List<String>
    The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
    zoneUri String
    Optional. The Compute Engine zone where the Dataproc cluster will be located. If omitted, the service will pick a zone in the cluster's Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] [zone]
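
    A minimal TypeScript sketch of how these Compute Engine settings might be supplied through a workflow template's managed-cluster placement, following the constructor shown above. The project, subnetwork, service account, and job values are placeholders for illustration, not recommendations.

    import * as google_native from "@pulumi/google-native";

    // Placeholder project, subnetwork, and service account values.
    const template = new google_native.dataproc.v1.WorkflowTemplate("example", {
        id: "example-template",
        location: "us-central1",
        jobs: [{
            stepId: "teragen",
            hadoopJob: {
                mainJarFileUri: "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
                args: ["teragen", "1000", "hdfs:///gen/"],
            },
        }],
        placement: {
            managedCluster: {
                clusterName: "ephemeral-cluster",
                config: {
                    // The properties documented above land here.
                    gceClusterConfig: {
                        zoneUri: "us-central1-a",
                        internalIpOnly: true,
                        subnetworkUri: "projects/my-project/regions/us-central1/subnetworks/sub0",
                        serviceAccount: "dataproc-vm@my-project.iam.gserviceaccount.com",
                        tags: ["dataproc"],
                    },
                },
            },
        },
    });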

    GkeClusterConfig, GkeClusterConfigArgs

    GkeClusterTarget string
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTarget
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTarget>
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
    GkeClusterTarget string
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    NodePoolTarget []GkeNodePoolTarget
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
    gkeClusterTarget String
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    nodePoolTarget List<GkeNodePoolTarget>
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
    gkeClusterTarget string
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    nodePoolTarget GkeNodePoolTarget[]
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
    gke_cluster_target str
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    namespaced_gke_deployment_target NamespacedGkeDeploymentTarget
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    node_pool_target Sequence[GkeNodePoolTarget]
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
    gkeClusterTarget String
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    namespacedGkeDeploymentTarget Property Map
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    nodePoolTarget List<Property Map>
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
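
    A hedged TypeScript sketch of a GkeClusterConfig input, assuming the SDK's types.input.dataproc.v1 namespace; the GKE cluster and node pool names are placeholders. Note that at least one node pool target carries the DEFAULT role, as required above.

    import * as google_native from "@pulumi/google-native";

    // Placeholder GKE cluster and node pool names.
    const gkeClusterConfig: google_native.types.input.dataproc.v1.GkeClusterConfigArgs = {
        gkeClusterTarget: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
        nodePoolTarget: [{
            // At least one node pool target must be assigned the DEFAULT role.
            nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/default-pool",
            roles: ["DEFAULT"],
        }],
    };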

    GkeClusterConfigResponse, GkeClusterConfigResponseArgs

    GkeClusterTarget string
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1.Inputs.NamespacedGkeDeploymentTargetResponse
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    NodePoolTarget List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolTargetResponse>
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
    GkeClusterTarget string
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    NodePoolTarget []GkeNodePoolTargetResponse
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
    gkeClusterTarget String
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    nodePoolTarget List<GkeNodePoolTargetResponse>
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
    gkeClusterTarget string
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    nodePoolTarget GkeNodePoolTargetResponse[]
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
    gke_cluster_target str
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    namespaced_gke_deployment_target NamespacedGkeDeploymentTargetResponse
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    node_pool_target Sequence[GkeNodePoolTargetResponse]
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.
    gkeClusterTarget String
    Optional. A target GKE cluster to deploy to. It must be in the same project and region as the Dataproc cluster (the GKE cluster can be zonal or regional). Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    namespacedGkeDeploymentTarget Property Map
    Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    Deprecated: Optional. Deprecated. Use gkeClusterTarget. Used only for the deprecated beta. A target for the deployment.

    nodePoolTarget List<Property Map>
    Optional. GKE node pools where workloads will be scheduled. At least one node pool must be assigned the DEFAULT GkeNodePoolTarget.Role. If a GkeNodePoolTarget is not specified, Dataproc constructs a DEFAULT GkeNodePoolTarget. Each role can be given to only one GkeNodePoolTarget. All node pools must have the same location settings.

    GkeNodeConfig, GkeNodeConfigArgs

    Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfig>
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    BootDiskKmsKey string
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    LocalSsdCount int
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    MachineType string
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    MinCpuPlatform string
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    Preemptible bool
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    Spot bool
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    Accelerators []GkeNodePoolAcceleratorConfig
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    BootDiskKmsKey string
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    LocalSsdCount int
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    MachineType string
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    MinCpuPlatform string
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    Preemptible bool
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    Spot bool
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    accelerators List<GkeNodePoolAcceleratorConfig>
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    bootDiskKmsKey String
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    localSsdCount Integer
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    machineType String
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    minCpuPlatform String
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    preemptible Boolean
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    spot Boolean
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    accelerators GkeNodePoolAcceleratorConfig[]
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    bootDiskKmsKey string
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    localSsdCount number
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    machineType string
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    minCpuPlatform string
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    preemptible boolean
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    spot boolean
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    accelerators Sequence[GkeNodePoolAcceleratorConfig]
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    boot_disk_kms_key str
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    local_ssd_count int
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    machine_type str
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    min_cpu_platform str
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    preemptible bool
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    spot bool
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    accelerators List<Property Map>
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    bootDiskKmsKey String
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    localSsdCount Number
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    machineType String
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    minCpuPlatform String
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    preemptible Boolean
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    spot Boolean
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
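
    A hedged TypeScript sketch of a GkeNodeConfig input, again assuming the types.input.dataproc.v1 namespace; the machine type, CPU platform, and accelerator values are illustrative only.

    import * as google_native from "@pulumi/google-native";

    // Illustrative values; accelerator counts are strings in this API.
    const nodeConfig: google_native.types.input.dataproc.v1.GkeNodeConfigArgs = {
        machineType: "n1-standard-8",
        minCpuPlatform: "Intel Haswell",
        localSsdCount: 1,
        // Spot nodes cannot be used for the CONTROLLER role (see the description above).
        spot: true,
        accelerators: [{
            acceleratorCount: "1",
            acceleratorType: "nvidia-tesla-t4",
        }],
    };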

    GkeNodeConfigResponse, GkeNodeConfigResponseArgs

    Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAcceleratorConfigResponse>
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    BootDiskKmsKey string
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    LocalSsdCount int
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    MachineType string
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    MinCpuPlatform string
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    Preemptible bool
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    Spot bool
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    Accelerators []GkeNodePoolAcceleratorConfigResponse
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    BootDiskKmsKey string
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    LocalSsdCount int
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    MachineType string
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    MinCpuPlatform string
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    Preemptible bool
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    Spot bool
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    accelerators List<GkeNodePoolAcceleratorConfigResponse>
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    bootDiskKmsKey String
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    localSsdCount Integer
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    machineType String
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    minCpuPlatform String
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    preemptible Boolean
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    spot Boolean
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    accelerators GkeNodePoolAcceleratorConfigResponse[]
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    bootDiskKmsKey string
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    localSsdCount number
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    machineType string
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    minCpuPlatform string
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    preemptible boolean
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    spot boolean
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    accelerators Sequence[GkeNodePoolAcceleratorConfigResponse]
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    boot_disk_kms_key str
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    local_ssd_count int
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    machine_type str
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    min_cpu_platform str
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    preemptible bool
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    spot bool
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    accelerators List<Property Map>
    Optional. A list of hardware accelerators (https://cloud.google.com/compute/docs/gpus) to attach to each node.
    bootDiskKmsKey String
    Optional. The Customer Managed Encryption Key (CMEK) (https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek) used to encrypt the boot disk attached to each node in the node pool. Specify the key using the following format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}
    localSsdCount Number
    Optional. The number of local SSD disks to attach to the node, which is limited by the maximum number of disks allowable per zone (see Adding Local SSDs (https://cloud.google.com/compute/docs/disks/local-ssd)).
    machineType String
    Optional. The name of a Compute Engine machine type (https://cloud.google.com/compute/docs/machine-types).
    minCpuPlatform String
    Optional. Minimum CPU platform (https://cloud.google.com/compute/docs/instances/specify-min-cpu-platform) to be used by this instance. The instance may be scheduled on the specified or a newer CPU platform. Specify the friendly names of CPU platforms, such as "Intel Haswell" or "Intel Sandy Bridge".
    preemptible Boolean
    Optional. Whether the nodes are created as legacy preemptible VM instances (https://cloud.google.com/compute/docs/instances/preemptible). Also see Spot VMs, preemptible VM instances without a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).
    spot Boolean
    Optional. Whether the nodes are created as Spot VM instances (https://cloud.google.com/compute/docs/instances/spot). Spot VMs are the latest update to legacy preemptible VMs. Spot VMs do not have a maximum lifetime. Legacy and Spot preemptible nodes cannot be used in a node pool with the CONTROLLER role or in the DEFAULT node pool if the CONTROLLER role is not assigned (the DEFAULT node pool will assume the CONTROLLER role).

    GkeNodePoolAcceleratorConfig, GkeNodePoolAcceleratorConfigArgs

    AcceleratorCount string
    The number of accelerator cards exposed to an instance.
    AcceleratorType string
    The accelerator type resource name (see GPUs on Compute Engine).
    GpuPartitionSize string
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
    AcceleratorCount string
    The number of accelerator cards exposed to an instance.
    AcceleratorType string
    The accelerator type resource name (see GPUs on Compute Engine).
    GpuPartitionSize string
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
    acceleratorCount String
    The number of accelerator cards exposed to an instance.
    acceleratorType String
    The accelerator type resource name (see GPUs on Compute Engine).
    gpuPartitionSize String
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
    acceleratorCount string
    The number of accelerator cards exposed to an instance.
    acceleratorType string
    The accelerator type resource name (see GPUs on Compute Engine).
    gpuPartitionSize string
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
    accelerator_count str
    The number of accelerator cards exposed to an instance.
    accelerator_type str
    The accelerator type resource name (see GPUs on Compute Engine).
    gpu_partition_size str
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
    acceleratorCount String
    The number of accelerator cards exposed to an instance.
    acceleratorType String
    The accelerator type resource name (see GPUs on Compute Engine).
    gpuPartitionSize String
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
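
    As a hedged illustration, a single accelerator entry might look like the following TypeScript object; note that acceleratorCount is string-typed in this API, and the accelerator type and MIG partition size shown are assumptions.

    // One GPU card per node, with an optional MIG partition size.
    const acceleratorConfig = {
        acceleratorCount: "1",               // string-typed count
        acceleratorType: "nvidia-tesla-t4",  // assumed accelerator type
        gpuPartitionSize: "1g.5gb",          // optional; valid values per the NVIDIA MIG user guide
    };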

    GkeNodePoolAcceleratorConfigResponse, GkeNodePoolAcceleratorConfigResponseArgs

    AcceleratorCount string
    The number of accelerator cards exposed to an instance.
    AcceleratorType string
    The accelerator type resource name (see GPUs on Compute Engine).
    GpuPartitionSize string
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
    AcceleratorCount string
    The number of accelerator cards exposed to an instance.
    AcceleratorType string
    The accelerator type resource name (see GPUs on Compute Engine).
    GpuPartitionSize string
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
    acceleratorCount String
    The number of accelerator cards exposed to an instance.
    acceleratorType String
    The accelerator type resource name (see GPUs on Compute Engine).
    gpuPartitionSize String
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
    acceleratorCount string
    The number of accelerator cards exposed to an instance.
    acceleratorType string
    The accelerator type resource name (see GPUs on Compute Engine).
    gpuPartitionSize string
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
    accelerator_count str
    The number of accelerator cards exposed to an instance.
    accelerator_type str
    The accelerator type resource name (see GPUs on Compute Engine).
    gpu_partition_size str
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).
    acceleratorCount String
    The number of accelerator cards exposed to an instance.
    acceleratorType String
    The accelerator type resource name (see GPUs on Compute Engine).
    gpuPartitionSize String
    Size of partitions to create on the GPU. Valid values are described in the NVIDIA mig user guide (https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#partitioning).

    GkeNodePoolAutoscalingConfig, GkeNodePoolAutoscalingConfigArgs

    MaxNodeCount int
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    MinNodeCount int
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
    MaxNodeCount int
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    MinNodeCount int
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
    maxNodeCount Integer
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    minNodeCount Integer
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
    maxNodeCount number
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    minNodeCount number
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
    max_node_count int
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    min_node_count int
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
    maxNodeCount Number
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    minNodeCount Number
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
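
    A small sketch of an autoscaling block that satisfies the documented constraints (max_node_count >= min_node_count, max_node_count > 0, min_node_count >= 0); the numbers are illustrative.

    const autoscaling = {
        minNodeCount: 0,    // >= 0 and <= maxNodeCount
        maxNodeCount: 10,   // > 0; quota must be sufficient for this many nodes
    };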

    GkeNodePoolAutoscalingConfigResponse, GkeNodePoolAutoscalingConfigResponseArgs

    MaxNodeCount int
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    MinNodeCount int
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
    MaxNodeCount int
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    MinNodeCount int
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
    maxNodeCount Integer
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    minNodeCount Integer
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
    maxNodeCount number
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    minNodeCount number
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
    max_node_count int
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    min_node_count int
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.
    maxNodeCount Number
    The maximum number of nodes in the node pool. Must be >= min_node_count, and must be > 0. Note: Quota must be sufficient to scale up the cluster.
    minNodeCount Number
    The minimum number of nodes in the node pool. Must be >= 0 and <= max_node_count.

    GkeNodePoolConfig, GkeNodePoolConfigArgs

    Autoscaling Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfig
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    Config Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfig
    Optional. The node pool configuration.
    Locations List<string>
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
    Autoscaling GkeNodePoolAutoscalingConfig
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    Config GkeNodeConfig
    Optional. The node pool configuration.
    Locations []string
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
    autoscaling GkeNodePoolAutoscalingConfig
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    config GkeNodeConfig
    Optional. The node pool configuration.
    locations List<String>
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
    autoscaling GkeNodePoolAutoscalingConfig
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    config GkeNodeConfig
    Optional. The node pool configuration.
    locations string[]
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
    autoscaling GkeNodePoolAutoscalingConfig
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    config GkeNodeConfig
    Optional. The node pool configuration.
    locations Sequence[str]
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
    autoscaling Property Map
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    config Property Map
    Optional. The node pool configuration.
    locations List<String>
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
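
    Putting the pieces together, a GkeNodePoolConfig-shaped object might combine a node config, autoscaling bounds, and a single zone as in the sketch below; the zone and machine type are assumptions.

    const nodePoolConfig = {
        config: { machineType: "n1-standard-8" },             // GkeNodeConfig, as documented above
        autoscaling: { minNodeCount: 0, maxNodeCount: 10 },   // GkeNodePoolAutoscalingConfig
        locations: ["us-central1-a"],                         // assumed zone; all pools of a virtual cluster share one zone
    };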

    GkeNodePoolConfigResponse, GkeNodePoolConfigResponseArgs

    Autoscaling Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolAutoscalingConfigResponse
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    Config Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodeConfigResponse
    Optional. The node pool configuration.
    Locations List<string>
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
    Autoscaling GkeNodePoolAutoscalingConfigResponse
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    Config GkeNodeConfigResponse
    Optional. The node pool configuration.
    Locations []string
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
    autoscaling GkeNodePoolAutoscalingConfigResponse
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    config GkeNodeConfigResponse
    Optional. The node pool configuration.
    locations List<String>
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
    autoscaling GkeNodePoolAutoscalingConfigResponse
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    config GkeNodeConfigResponse
    Optional. The node pool configuration.
    locations string[]
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
    autoscaling GkeNodePoolAutoscalingConfigResponse
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    config GkeNodeConfigResponse
    Optional. The node pool configuration.
    locations Sequence[str]
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.
    autoscaling Property Map
    Optional. The autoscaler configuration for this node pool. The autoscaler is enabled only when a valid configuration is present.
    config Property Map
    Optional. The node pool configuration.
    locations List<String>
    Optional. The list of Compute Engine zones (https://cloud.google.com/compute/docs/zones#available) where node pool nodes associated with a Dataproc on GKE virtual cluster will be located. Note: All node pools associated with a virtual cluster must be located in the same region as the virtual cluster, and they must be located in the same zone within that region. If a location is not specified during node pool creation, Dataproc on GKE will choose the zone.

    GkeNodePoolTarget, GkeNodePoolTargetArgs

    NodePool string
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    Roles List<Pulumi.GoogleNative.Dataproc.V1.GkeNodePoolTargetRolesItem>
    The roles associated with the GKE node pool.
    NodePoolConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfig
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
    NodePool string
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    Roles []GkeNodePoolTargetRolesItem
    The roles associated with the GKE node pool.
    NodePoolConfig GkeNodePoolConfig
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
    nodePool String
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    roles List<GkeNodePoolTargetRolesItem>
    The roles associated with the GKE node pool.
    nodePoolConfig GkeNodePoolConfig
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
    nodePool string
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    roles GkeNodePoolTargetRolesItem[]
    The roles associated with the GKE node pool.
    nodePoolConfig GkeNodePoolConfig
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
    node_pool str
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    roles Sequence[GkeNodePoolTargetRolesItem]
    The roles associated with the GKE node pool.
    node_pool_config GkeNodePoolConfig
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
    nodePool String
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    roles List<"ROLE_UNSPECIFIED" | "DEFAULT" | "CONTROLLER" | "SPARK_DRIVER" | "SPARK_EXECUTOR">
    The roles associated with the GKE node pool.
    nodePoolConfig Property Map
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
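
    A hedged sketch of a node pool target follows; the project, location, cluster, and node pool names are placeholders, and the roles are given as their string values.

    const sparkPoolTarget = {
        nodePool: "projects/my-project/locations/us-central1/clusters/my-gke-cluster/nodePools/spark-pool", // placeholder resource name
        roles: ["DEFAULT", "SPARK_EXECUTOR"],                  // string values of GkeNodePoolTargetRolesItem
        nodePoolConfig: {                                      // input only; used if the pool has to be created
            autoscaling: { minNodeCount: 0, maxNodeCount: 10 },
            locations: ["us-central1-a"],
        },
    };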

    GkeNodePoolTargetResponse, GkeNodePoolTargetResponseArgs

    NodePool string
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    NodePoolConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.GkeNodePoolConfigResponse
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
    Roles List<string>
    The roles associated with the GKE node pool.
    NodePool string
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    NodePoolConfig GkeNodePoolConfigResponse
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
    Roles []string
    The roles associated with the GKE node pool.
    nodePool String
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    nodePoolConfig GkeNodePoolConfigResponse
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
    roles List<String>
    The roles associated with the GKE node pool.
    nodePool string
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    nodePoolConfig GkeNodePoolConfigResponse
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
    roles string[]
    The roles associated with the GKE node pool.
    node_pool str
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    node_pool_config GkeNodePoolConfigResponse
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
    roles Sequence[str]
    The roles associated with the GKE node pool.
    nodePool String
    The target GKE node pool. Format: 'projects/{project}/locations/{location}/clusters/{cluster}/nodePools/{node_pool}'
    nodePoolConfig Property Map
    Input only. The configuration for the GKE node pool. If specified, Dataproc attempts to create a node pool with the specified shape. If one with the same name already exists, it is verified against all specified fields. If a field differs, the virtual cluster creation will fail. If omitted, any node pool with the specified name is used. If a node pool with the specified name does not exist, Dataproc creates a node pool with default values. This is an input only field. It will not be returned by the API.
    roles List<String>
    The roles associated with the GKE node pool.

    GkeNodePoolTargetRolesItem, GkeNodePoolTargetRolesItemArgs

    RoleUnspecified
    ROLE_UNSPECIFIED: Role is unspecified.
    Default
    DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
    Controller
    CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
    SparkDriver
    SPARK_DRIVER: Run work associated with a Spark driver of a job.
    SparkExecutor
    SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
    GkeNodePoolTargetRolesItemRoleUnspecified
    ROLE_UNSPECIFIED: Role is unspecified.
    GkeNodePoolTargetRolesItemDefault
    DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
    GkeNodePoolTargetRolesItemController
    CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
    GkeNodePoolTargetRolesItemSparkDriver
    SPARK_DRIVER: Run work associated with a Spark driver of a job.
    GkeNodePoolTargetRolesItemSparkExecutor
    SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
    RoleUnspecified
    ROLE_UNSPECIFIED: Role is unspecified.
    Default
    DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
    Controller
    CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
    SparkDriver
    SPARK_DRIVER: Run work associated with a Spark driver of a job.
    SparkExecutor
    SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
    RoleUnspecified
    ROLE_UNSPECIFIED: Role is unspecified.
    Default
    DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
    Controller
    CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
    SparkDriver
    SPARK_DRIVER: Run work associated with a Spark driver of a job.
    SparkExecutor
    SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
    ROLE_UNSPECIFIED
    ROLE_UNSPECIFIED: Role is unspecified.
    DEFAULT
    DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
    CONTROLLER
    CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
    SPARK_DRIVER
    SPARK_DRIVER: Run work associated with a Spark driver of a job.
    SPARK_EXECUTOR
    SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
    "ROLE_UNSPECIFIED"
    ROLE_UNSPECIFIED: Role is unspecified.
    "DEFAULT"
    DEFAULT: At least one node pool must have the DEFAULT role. Work assigned to a role that is not associated with a node pool is assigned to the node pool with the DEFAULT role. For example, work assigned to the CONTROLLER role will be assigned to the node pool with the DEFAULT role if no node pool has the CONTROLLER role.
    "CONTROLLER"
    CONTROLLER: Run work associated with the Dataproc control plane (for example, controllers and webhooks). Very low resource requirements.
    "SPARK_DRIVER"
    SPARK_DRIVER: Run work associated with a Spark driver of a job.
    "SPARK_EXECUTOR"
    SPARK_EXECUTOR: Run work associated with a Spark executor of a job.
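
    For example, a virtual cluster might dedicate one pool to the Dataproc control plane and another to Spark work; a sketch of such a role assignment is shown below (node pool names are placeholders).

    const nodePoolTargets = [
        { nodePool: ".../nodePools/control-pool", roles: ["DEFAULT", "CONTROLLER"] },
        { nodePool: ".../nodePools/spark-pool",   roles: ["SPARK_DRIVER", "SPARK_EXECUTOR"] },
    ];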

    GoogleCloudDataprocV1WorkflowTemplateEncryptionConfig, GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigArgs

    KmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content.
    KmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content.
    kmsKey String
    Optional. The Cloud KMS key name to use for encrypting customer core content.
    kmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content.
    kms_key str
    Optional. The Cloud KMS key name to use for encrypting customer core content.
    kmsKey String
    Optional. The Cloud KMS key name to use for encrypting customer core content.
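
    A sketch of an encryption config; the value follows the standard Cloud KMS resource name format, and the project, key ring, and key names are placeholders.

    const encryptionConfig = {
        kmsKey: "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key", // placeholder KMS key
    };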

    GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigResponse, GoogleCloudDataprocV1WorkflowTemplateEncryptionConfigResponseArgs

    KmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content.
    KmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content.
    kmsKey String
    Optional. The Cloud KMS key name to use for encrypting customer core content.
    kmsKey string
    Optional. The Cloud KMS key name to use for encrypting customer core content.
    kms_key str
    Optional. The Cloud KMS key name to use for encrypting customer core content.
    kmsKey String
    Optional. The Cloud KMS key name to use for encrypting customer core content.

    HadoopJob, HadoopJobArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    main_jar_file_uri str
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
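
    A hedged sketch of a HadoopJob definition, reusing the example jar URI from the description above; the arguments and the Hadoop property are illustrative.

    const hadoopJob = {
        mainJarFileUri: "file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
        args: ["wordcount", "gs://my-bucket/input/", "gs://my-bucket/output/"],  // illustrative buckets
        properties: { "mapreduce.job.reduces": "2" },                            // example Hadoop property
    };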

    HadoopJobResponse, HadoopJobResponseArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri string
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    main_jar_file_uri str
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision might occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.

    HiveJob, HiveJobArgs

    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    Properties Dictionary<string, string>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains Hive queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryList
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    Properties map[string]string
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains Hive queries.
    QueryList QueryList
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Map<String,String>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains Hive queries.
    queryList QueryList
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties {[key: string]: string}
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri string
    The HCFS URI of the script that contains Hive queries.
    queryList QueryList
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Mapping[str, str]
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    query_file_uri str
    The HCFS URI of the script that contains Hive queries.
    query_list QueryList
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Map<String>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains Hive queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
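
    Similarly, a HiveJob sketch might supply inline queries and script variables; the query text and variable value are illustrative, and only one of queryFileUri or queryList is set.

    const hiveJob = {
        queryList: { queries: ["SHOW DATABASES;", "SELECT COUNT(*) FROM my_table;"] },  // illustrative queries
        scriptVariables: { env: "prod" },   // becomes: SET env="prod";
        continueOnFailure: true,            // keep running independent queries if one fails
    };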

    HiveJobResponse, HiveJobResponseArgs

    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    Properties Dictionary<string, string>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains Hive queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryListResponse
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    Properties map[string]string
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains Hive queries.
    QueryList QueryListResponse
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Map<String,String>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains Hive queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties {[key: string]: string}
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri string
    The HCFS URI of the script that contains Hive queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Mapping[str, str]
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    query_file_uri str
    The HCFS URI of the script that contains Hive queries.
    query_list QueryListResponse
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
    properties Map<String>
    Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains Hive queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).

    IdentityConfig, IdentityConfigArgs

    UserServiceAccountMapping Dictionary<string, string>
    Map of user to service account.
    UserServiceAccountMapping map[string]string
    Map of user to service account.
    userServiceAccountMapping Map<String,String>
    Map of user to service account.
    userServiceAccountMapping {[key: string]: string}
    Map of user to service account.
    user_service_account_mapping Mapping[str, str]
    Map of user to service account.
    userServiceAccountMapping Map<String>
    Map of user to service account.
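
    A hedged sketch of an identityConfig value as it would be nested under a managed cluster's securityConfig; the input type path and the user/service account names are assumptions for illustration.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical mapping of cluster users to the service accounts they act as.
    const identityConfig: google_native.types.input.dataproc.v1.IdentityConfigArgs = {
        userServiceAccountMapping: {
            "alice": "alice-sa@my-project.iam.gserviceaccount.com",
            "bob": "bob-sa@my-project.iam.gserviceaccount.com",
        },
    };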

    IdentityConfigResponse, IdentityConfigResponseArgs

    UserServiceAccountMapping Dictionary<string, string>
    Map of user to service account.
    UserServiceAccountMapping map[string]string
    Map of user to service account.
    userServiceAccountMapping Map<String,String>
    Map of user to service account.
    userServiceAccountMapping {[key: string]: string}
    Map of user to service account.
    user_service_account_mapping Mapping[str, str]
    Map of user to service account.
    userServiceAccountMapping Map<String>
    Map of user to service account.

    InstanceFlexibilityPolicy, InstanceFlexibilityPolicyArgs

    InstanceSelectionList List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelection>
    Optional. List of instance selection options that the group will use when creating new VMs.
    InstanceSelectionList []InstanceSelection
    Optional. List of instance selection options that the group will use when creating new VMs.
    instanceSelectionList List<InstanceSelection>
    Optional. List of instance selection options that the group will use when creating new VMs.
    instanceSelectionList InstanceSelection[]
    Optional. List of instance selection options that the group will use when creating new VMs.
    instance_selection_list Sequence[InstanceSelection]
    Optional. List of instance selection options that the group will use when creating new VMs.
    instanceSelectionList List<Property Map>
    Optional. List of instance selection options that the group will use when creating new VMs.
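
    A hedged sketch of an instanceFlexibilityPolicy for a worker group that prefers one machine type and falls back to another; the input type path and machine-type names are illustrative assumptions.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical policy: try n2-standard-8 first (rank 1), fall back to
    // n1-standard-8 (rank 2) when the preferred type is unavailable.
    const flexPolicy: google_native.types.input.dataproc.v1.InstanceFlexibilityPolicyArgs = {
        instanceSelectionList: [
            { machineTypes: ["n2-standard-8"], rank: 1 },
            { machineTypes: ["n1-standard-8"], rank: 2 },
        ],
    };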

    InstanceFlexibilityPolicyResponse, InstanceFlexibilityPolicyResponseArgs

    InstanceSelectionList List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelectionResponse>
    Optional. List of instance selection options that the group will use when creating new VMs.
    InstanceSelectionResults List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceSelectionResultResponse>
    A list of instance selection results in the group.
    InstanceSelectionList []InstanceSelectionResponse
    Optional. List of instance selection options that the group will use when creating new VMs.
    InstanceSelectionResults []InstanceSelectionResultResponse
    A list of instance selection results in the group.
    instanceSelectionList List<InstanceSelectionResponse>
    Optional. List of instance selection options that the group will use when creating new VMs.
    instanceSelectionResults List<InstanceSelectionResultResponse>
    A list of instance selection results in the group.
    instanceSelectionList InstanceSelectionResponse[]
    Optional. List of instance selection options that the group will use when creating new VMs.
    instanceSelectionResults InstanceSelectionResultResponse[]
    A list of instance selection results in the group.
    instance_selection_list Sequence[InstanceSelectionResponse]
    Optional. List of instance selection options that the group will use when creating new VMs.
    instance_selection_results Sequence[InstanceSelectionResultResponse]
    A list of instance selection results in the group.
    instanceSelectionList List<Property Map>
    Optional. List of instance selection options that the group will use when creating new VMs.
    instanceSelectionResults List<Property Map>
    A list of instance selection results in the group.

    InstanceGroupConfig, InstanceGroupConfigArgs

    Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AcceleratorConfig>
    Optional. The Compute Engine accelerator configuration for these instances.
    DiskConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DiskConfig
    Optional. Disk option config settings.
    ImageUri string
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    InstanceFlexibilityPolicy Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicy
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    MachineTypeUri string
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    MinCpuPlatform string
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    MinNumInstances int
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    NumInstances int
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    Preemptibility Pulumi.GoogleNative.Dataproc.V1.InstanceGroupConfigPreemptibility
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    StartupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.StartupConfig
    Optional. Configuration to handle the startup of instances during cluster create and update process.
    Accelerators []AcceleratorConfig
    Optional. The Compute Engine accelerator configuration for these instances.
    DiskConfig DiskConfig
    Optional. Disk option config settings.
    ImageUri string
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    InstanceFlexibilityPolicy InstanceFlexibilityPolicy
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    MachineTypeUri string
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    MinCpuPlatform string
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    MinNumInstances int
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    NumInstances int
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    Preemptibility InstanceGroupConfigPreemptibility
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    StartupConfig StartupConfig
    Optional. Configuration to handle the startup of instances during cluster create and update process.
    accelerators List<AcceleratorConfig>
    Optional. The Compute Engine accelerator configuration for these instances.
    diskConfig DiskConfig
    Optional. Disk option config settings.
    imageUri String
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    instanceFlexibilityPolicy InstanceFlexibilityPolicy
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    machineTypeUri String
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    minCpuPlatform String
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    minNumInstances Integer
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    numInstances Integer
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    preemptibility InstanceGroupConfigPreemptibility
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    startupConfig StartupConfig
    Optional. Configuration to handle the startup of instances during cluster create and update process.
    accelerators AcceleratorConfig[]
    Optional. The Compute Engine accelerator configuration for these instances.
    diskConfig DiskConfig
    Optional. Disk option config settings.
    imageUri string
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    instanceFlexibilityPolicy InstanceFlexibilityPolicy
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    machineTypeUri string
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    minCpuPlatform string
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    minNumInstances number
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    numInstances number
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    preemptibility InstanceGroupConfigPreemptibility
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    startupConfig StartupConfig
    Optional. Configuration to handle the startup of instances during cluster create and update process.
    accelerators Sequence[AcceleratorConfig]
    Optional. The Compute Engine accelerator configuration for these instances.
    disk_config DiskConfig
    Optional. Disk option config settings.
    image_uri str
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    instance_flexibility_policy InstanceFlexibilityPolicy
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    machine_type_uri str
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    min_cpu_platform str
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    min_num_instances int
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    num_instances int
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    preemptibility InstanceGroupConfigPreemptibility
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    startup_config StartupConfig
    Optional. Configuration to handle the startup of instances during cluster create and update process.
    accelerators List<Property Map>
    Optional. The Compute Engine accelerator configuration for these instances.
    diskConfig Property Map
    Optional. Disk option config settings.
    imageUri String
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    instanceFlexibilityPolicy Property Map
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    machineTypeUri String
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    minCpuPlatform String
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    minNumInstances Number
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    numInstances Number
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    preemptibility "PREEMPTIBILITY_UNSPECIFIED" | "NON_PREEMPTIBLE" | "PREEMPTIBLE" | "SPOT"
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    startupConfig Property Map
    Optional. Configuration to handle the startup of instances during cluster create and update process.
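
    A hedged TypeScript sketch of a primary worker group that uses several of these fields; the machine type, instance counts, and disk settings are placeholders, and the input type and enum paths are assumptions.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical worker group: 5 workers requested, creation still succeeds
    // if at least 3 come up (see the min_num_instances description above).
    const workerConfig: google_native.types.input.dataproc.v1.InstanceGroupConfigArgs = {
        machineTypeUri: "n1-standard-4",
        numInstances: 5,
        minNumInstances: 3,
        diskConfig: {
            bootDiskType: "pd-standard",
            bootDiskSizeGb: 500,
        },
        preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.NonPreemptible,
    };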

    InstanceGroupConfigPreemptibility, InstanceGroupConfigPreemptibilityArgs

    PreemptibilityUnspecified
    PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
    NonPreemptible
    NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
    Preemptible
    PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
    Spot
    SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
    InstanceGroupConfigPreemptibilityPreemptibilityUnspecified
    PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
    InstanceGroupConfigPreemptibilityNonPreemptible
    NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
    InstanceGroupConfigPreemptibilityPreemptible
    PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
    InstanceGroupConfigPreemptibilitySpot
    SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
    PreemptibilityUnspecified
    PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
    NonPreemptible
    NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
    Preemptible
    PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
    Spot
    SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
    PreemptibilityUnspecified
    PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
    NonPreemptible
    NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
    Preemptible
    PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
    Spot
    SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
    PREEMPTIBILITY_UNSPECIFIED
    PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
    NON_PREEMPTIBLE
    NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
    PREEMPTIBLE
    PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
    SPOT
    SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
    "PREEMPTIBILITY_UNSPECIFIED"
    PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified; the system will choose the appropriate setting for each instance group.
    "NON_PREEMPTIBLE"
    NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
    "PREEMPTIBLE"
    PREEMPTIBLE: Instances are preemptible (https://cloud.google.com/compute/docs/instances/preemptible). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups.
    "SPOT"
    SPOT: Instances are Spot VMs (https://cloud.google.com/compute/docs/instances/spot). This option is allowed only for secondary worker (https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms) groups. Spot VMs are the latest version of preemptible VMs (https://cloud.google.com/compute/docs/instances/preemptible), and provide additional features.
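
    For example, a hedged sketch of a secondary worker group backed by Spot VMs; SPOT is valid only for secondary workers, per the table above, and the instance count, type path, and enum path are assumptions.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical secondary worker group using the SPOT preemptibility value.
    const secondaryWorkers: google_native.types.input.dataproc.v1.InstanceGroupConfigArgs = {
        numInstances: 10,
        preemptibility: google_native.dataproc.v1.InstanceGroupConfigPreemptibility.Spot,
    };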

    InstanceGroupConfigResponse, InstanceGroupConfigResponseArgs

    Accelerators List<Pulumi.GoogleNative.Dataproc.V1.Inputs.AcceleratorConfigResponse>
    Optional. The Compute Engine accelerator configuration for these instances.
    DiskConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.DiskConfigResponse
    Optional. Disk option config settings.
    ImageUri string
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    InstanceFlexibilityPolicy Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceFlexibilityPolicyResponse
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    InstanceNames List<string>
    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
    InstanceReferences List<Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceReferenceResponse>
    List of references to Compute Engine instances.
    IsPreemptible bool
    Specifies that this instance group contains preemptible instances.
    MachineTypeUri string
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    ManagedGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ManagedGroupConfigResponse
    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
    MinCpuPlatform string
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    MinNumInstances int
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    NumInstances int
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    Preemptibility string
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    StartupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.StartupConfigResponse
    Optional. Configuration to handle the startup of instances during cluster create and update process.
    Accelerators []AcceleratorConfigResponse
    Optional. The Compute Engine accelerator configuration for these instances.
    DiskConfig DiskConfigResponse
    Optional. Disk option config settings.
    ImageUri string
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    InstanceFlexibilityPolicy InstanceFlexibilityPolicyResponse
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    InstanceNames []string
    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
    InstanceReferences []InstanceReferenceResponse
    List of references to Compute Engine instances.
    IsPreemptible bool
    Specifies that this instance group contains preemptible instances.
    MachineTypeUri string
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    ManagedGroupConfig ManagedGroupConfigResponse
    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
    MinCpuPlatform string
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    MinNumInstances int
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    NumInstances int
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    Preemptibility string
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    StartupConfig StartupConfigResponse
    Optional. Configuration to handle the startup of instances during cluster create and update process.
    accelerators List<AcceleratorConfigResponse>
    Optional. The Compute Engine accelerator configuration for these instances.
    diskConfig DiskConfigResponse
    Optional. Disk option config settings.
    imageUri String
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    instanceFlexibilityPolicy InstanceFlexibilityPolicyResponse
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    instanceNames List<String>
    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
    instanceReferences List<InstanceReferenceResponse>
    List of references to Compute Engine instances.
    isPreemptible Boolean
    Specifies that this instance group contains preemptible instances.
    machineTypeUri String
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    managedGroupConfig ManagedGroupConfigResponse
    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
    minCpuPlatform String
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    minNumInstances Integer
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    numInstances Integer
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    preemptibility String
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    startupConfig StartupConfigResponse
    Optional. Configuration to handle the startup of instances during cluster create and update process.
    accelerators AcceleratorConfigResponse[]
    Optional. The Compute Engine accelerator configuration for these instances.
    diskConfig DiskConfigResponse
    Optional. Disk option config settings.
    imageUri string
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    instanceFlexibilityPolicy InstanceFlexibilityPolicyResponse
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    instanceNames string[]
    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
    instanceReferences InstanceReferenceResponse[]
    List of references to Compute Engine instances.
    isPreemptible boolean
    Specifies that this instance group contains preemptible instances.
    machineTypeUri string
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    managedGroupConfig ManagedGroupConfigResponse
    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
    minCpuPlatform string
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    minNumInstances number
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    numInstances number
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    preemptibility string
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    startupConfig StartupConfigResponse
    Optional. Configuration to handle the startup of instances during cluster create and update process.
    accelerators Sequence[AcceleratorConfigResponse]
    Optional. The Compute Engine accelerator configuration for these instances.
    disk_config DiskConfigResponse
    Optional. Disk option config settings.
    image_uri str
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    instance_flexibility_policy InstanceFlexibilityPolicyResponse
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    instance_names Sequence[str]
    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
    instance_references Sequence[InstanceReferenceResponse]
    List of references to Compute Engine instances.
    is_preemptible bool
    Specifies that this instance group contains preemptible instances.
    machine_type_uri str
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    managed_group_config ManagedGroupConfigResponse
    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
    min_cpu_platform str
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    min_num_instances int
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    num_instances int
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    preemptibility str
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    startup_config StartupConfigResponse
    Optional. Configuration to handle the startup of instances during cluster create and update process.
    accelerators List<Property Map>
    Optional. The Compute Engine accelerator configuration for these instances.
    diskConfig Property Map
    Optional. Disk option config settings.
    imageUri String
    Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/[image-id], projects/[project_id]/global/images/[image-id], image-id. Image family examples (Dataproc will use the most recent image from the family): https://www.googleapis.com/compute/v1/projects/[project_id]/global/images/family/[custom-image-family-name], projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
    instanceFlexibilityPolicy Property Map
    Optional. Instance flexibility Policy allowing a mixture of VM shapes and provisioning models.
    instanceNames List<String>
    The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
    instanceReferences List<Property Map>
    List of references to Compute Engine instances.
    isPreemptible Boolean
    Specifies that this instance group contains preemptible instances.
    machineTypeUri String
    Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, projects/[project_id]/zones/[zone]/machineTypes/n1-standard-2, n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
    managedGroupConfig Property Map
    The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
    minCpuPlatform String
    Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
    minNumInstances Number
    Optional. The minimum number of primary worker instances to create. If min_num_instances is set, cluster creation will succeed if the number of primary workers created is at least equal to the min_num_instances number. Example: Cluster creation request with num_instances = 5 and min_num_instances = 3: If 4 VMs are created and 1 instance fails, the failed VM is deleted. The cluster is resized to 4 instances and placed in a RUNNING state. If 2 instances are created and 3 instances fail, the cluster is placed in an ERROR state. The failed VMs are not deleted.
    numInstances Number
    Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
    preemptibility String
    Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
    startupConfig Property Map
    Optional. Configuration to handle the startup of instances during cluster create and update process.

    InstanceReferenceResponse, InstanceReferenceResponseArgs

    InstanceId string
    The unique identifier of the Compute Engine instance.
    InstanceName string
    The user-friendly name of the Compute Engine instance.
    PublicEciesKey string
    The public ECIES key used for sharing data with this instance.
    PublicKey string
    The public RSA key used for sharing data with this instance.
    InstanceId string
    The unique identifier of the Compute Engine instance.
    InstanceName string
    The user-friendly name of the Compute Engine instance.
    PublicEciesKey string
    The public ECIES key used for sharing data with this instance.
    PublicKey string
    The public RSA key used for sharing data with this instance.
    instanceId String
    The unique identifier of the Compute Engine instance.
    instanceName String
    The user-friendly name of the Compute Engine instance.
    publicEciesKey String
    The public ECIES key used for sharing data with this instance.
    publicKey String
    The public RSA key used for sharing data with this instance.
    instanceId string
    The unique identifier of the Compute Engine instance.
    instanceName string
    The user-friendly name of the Compute Engine instance.
    publicEciesKey string
    The public ECIES key used for sharing data with this instance.
    publicKey string
    The public RSA key used for sharing data with this instance.
    instance_id str
    The unique identifier of the Compute Engine instance.
    instance_name str
    The user-friendly name of the Compute Engine instance.
    public_ecies_key str
    The public ECIES key used for sharing data with this instance.
    public_key str
    The public RSA key used for sharing data with this instance.
    instanceId String
    The unique identifier of the Compute Engine instance.
    instanceName String
    The user-friendly name of the Compute Engine instance.
    publicEciesKey String
    The public ECIES key used for sharing data with this instance.
    publicKey String
    The public RSA key used for sharing data with this instance.

    InstanceSelection, InstanceSelectionArgs

    MachineTypes List<string>
    Optional. Full machine-type names, e.g. "n1-standard-16".
    Rank int
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.
    MachineTypes []string
    Optional. Full machine-type names, e.g. "n1-standard-16".
    Rank int
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.
    machineTypes List<String>
    Optional. Full machine-type names, e.g. "n1-standard-16".
    rank Integer
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.
    machineTypes string[]
    Optional. Full machine-type names, e.g. "n1-standard-16".
    rank number
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.
    machine_types Sequence[str]
    Optional. Full machine-type names, e.g. "n1-standard-16".
    rank int
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.
    machineTypes List<String>
    Optional. Full machine-type names, e.g. "n1-standard-16".
    rank Number
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.
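
    InstanceSelection entries are ranked alternatives; in the Dataproc API they live in an instance group's instanceFlexibilityPolicy.instanceSelectionList, typically on the secondary worker group. A small illustrative fragment (machine types and counts are placeholders) showing how rank orders the preference:

    // Lower rank wins; Dataproc falls back to rank 2 only if no rank-1 machine type is available.
    const secondaryWorkerConfig = {
        numInstances: 6,
        instanceFlexibilityPolicy: {
            instanceSelectionList: [
                { machineTypes: ["n2-standard-16", "n1-standard-16"], rank: 1 },
                { machineTypes: ["n2d-standard-16"], rank: 2 },
            ],
        },
    };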

    InstanceSelectionResponse, InstanceSelectionResponseArgs

    MachineTypes List<string>
    Optional. Full machine-type names, e.g. "n1-standard-16".
    Rank int
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.
    MachineTypes []string
    Optional. Full machine-type names, e.g. "n1-standard-16".
    Rank int
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.
    machineTypes List<String>
    Optional. Full machine-type names, e.g. "n1-standard-16".
    rank Integer
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.
    machineTypes string[]
    Optional. Full machine-type names, e.g. "n1-standard-16".
    rank number
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.
    machine_types Sequence[str]
    Optional. Full machine-type names, e.g. "n1-standard-16".
    rank int
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.
    machineTypes List<String>
    Optional. Full machine-type names, e.g. "n1-standard-16".
    rank Number
    Optional. Preference of this instance selection. Lower number means higher preference. Dataproc will first try to create a VM based on the machine-type with priority rank and fallback to next rank based on availability. Machine types and instance selections with the same priority have the same preference.

    InstanceSelectionResultResponse, InstanceSelectionResultResponseArgs

    MachineType string
    Full machine-type name, e.g. "n1-standard-16".
    VmCount int
    Number of VMs provisioned with the machine_type.
    MachineType string
    Full machine-type name, e.g. "n1-standard-16".
    VmCount int
    Number of VMs provisioned with the machine_type.
    machineType String
    Full machine-type name, e.g. "n1-standard-16".
    vmCount Integer
    Number of VMs provisioned with the machine_type.
    machineType string
    Full machine-type name, e.g. "n1-standard-16".
    vmCount number
    Number of VMs provisioned with the machine_type.
    machine_type str
    Full machine-type name, e.g. "n1-standard-16".
    vm_count int
    Number of VMs provisioned with the machine_type.
    machineType String
    Full machine-type name, e.g. "n1-standard-16".
    vmCount Number
    Number of VMs provisioned with the machine_type.

    JobScheduling, JobSchedulingArgs

    MaxFailuresPerHour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresTotal int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresPerHour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresTotal int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour Integer
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal Integer
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour number
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal number
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    max_failures_per_hour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    max_failures_total int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour Number
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal Number
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
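
    For reference, the object itself is just the two counters. Per the descriptions above, these restartable-job options are not honored for jobs embedded in a workflow template, so a shape like this belongs on a standalone Dataproc job rather than on an OrderedJob here; values are illustrative:

    // Shape sketch only; both values are below the documented caps.
    const scheduling = {
        maxFailuresPerHour: 5,   // capped at 10
        maxFailuresTotal: 20,    // capped at 240
    };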

    JobSchedulingResponse, JobSchedulingResponseArgs

    MaxFailuresPerHour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresTotal int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresPerHour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    MaxFailuresTotal int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour Integer
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal Integer
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour number
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal number
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    max_failures_per_hour int
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    max_failures_total int
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresPerHour Number
    Optional. Maximum number of times per hour a driver can be restarted as a result of the driver exiting with a non-zero code before the job is reported failed. A job might be reported as thrashing if the driver exits with a non-zero code four times within a 10-minute window. Maximum value is 10. Note: This restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).
    maxFailuresTotal Number
    Optional. Maximum total number of times a driver can be restarted as a result of the driver exiting with a non-zero code. After the maximum number is reached, the job will be reported as failed. Maximum value is 240. Note: Currently, this restartable job option is not supported in Dataproc workflow templates (https://cloud.google.com/dataproc/docs/concepts/workflows/using-workflows#adding_jobs_to_a_template).

    KerberosConfig, KerberosConfigArgs

    CrossRealmTrustAdminServer string
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    CrossRealmTrustKdc string
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    CrossRealmTrustRealm string
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    CrossRealmTrustSharedPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    EnableKerberos bool
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    KdcDbKeyUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    KeyPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    KeystorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    KeystoreUri string
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    KmsKeyUri string
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    Realm string
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    RootPrincipalPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    TgtLifetimeHours int
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    TruststorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    TruststoreUri string
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    CrossRealmTrustAdminServer string
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    CrossRealmTrustKdc string
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    CrossRealmTrustRealm string
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    CrossRealmTrustSharedPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    EnableKerberos bool
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    KdcDbKeyUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    KeyPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    KeystorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    KeystoreUri string
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    KmsKeyUri string
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    Realm string
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    RootPrincipalPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    TgtLifetimeHours int
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    TruststorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    TruststoreUri string
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    crossRealmTrustAdminServer String
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustKdc String
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustRealm String
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    crossRealmTrustSharedPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    enableKerberos Boolean
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    kdcDbKeyUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    keyPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    keystorePasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    keystoreUri String
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    kmsKeyUri String
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    realm String
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    rootPrincipalPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    tgtLifetimeHours Integer
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    truststorePasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    truststoreUri String
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    crossRealmTrustAdminServer string
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustKdc string
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustRealm string
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    crossRealmTrustSharedPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    enableKerberos boolean
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    kdcDbKeyUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    keyPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    keystorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    keystoreUri string
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    kmsKeyUri string
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    realm string
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    rootPrincipalPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    tgtLifetimeHours number
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    truststorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    truststoreUri string
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    cross_realm_trust_admin_server str
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    cross_realm_trust_kdc str
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    cross_realm_trust_realm str
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    cross_realm_trust_shared_password_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    enable_kerberos bool
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    kdc_db_key_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    key_password_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    keystore_password_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    keystore_uri str
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    kms_key_uri str
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    realm str
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    root_principal_password_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    tgt_lifetime_hours int
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    truststore_password_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    truststore_uri str
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    crossRealmTrustAdminServer String
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustKdc String
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustRealm String
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    crossRealmTrustSharedPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    enableKerberos Boolean
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    kdcDbKeyUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    keyPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    keystorePasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    keystoreUri String
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    kmsKeyUri String
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    realm String
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    rootPrincipalPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    tgtLifetimeHours Number
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    truststorePasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    truststoreUri String
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
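
    A minimal sketch of enabling Kerberos on a template's managed cluster, assuming the referenced KMS key and the KMS-encrypted password file already exist (the project, key ring, key, and bucket names are placeholders); the cluster config takes this object under securityConfig, and omitting the keystore/truststore URIs lets Dataproc fall back to a self-signed certificate:

    const securityConfig = {
        kerberosConfig: {
            enableKerberos: true,
            kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
            rootPrincipalPasswordUri: "gs://my-secrets/kerberos-root-principal-password.encrypted",
            tgtLifetimeHours: 10,   // the default used when unset or 0
        },
    };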

    KerberosConfigResponse, KerberosConfigResponseArgs

    CrossRealmTrustAdminServer string
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    CrossRealmTrustKdc string
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    CrossRealmTrustRealm string
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    CrossRealmTrustSharedPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    EnableKerberos bool
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    KdcDbKeyUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    KeyPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    KeystorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    KeystoreUri string
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    KmsKeyUri string
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    Realm string
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    RootPrincipalPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    TgtLifetimeHours int
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    TruststorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    TruststoreUri string
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    CrossRealmTrustAdminServer string
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    CrossRealmTrustKdc string
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    CrossRealmTrustRealm string
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    CrossRealmTrustSharedPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    EnableKerberos bool
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    KdcDbKeyUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    KeyPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    KeystorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    KeystoreUri string
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    KmsKeyUri string
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    Realm string
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    RootPrincipalPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    TgtLifetimeHours int
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    TruststorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    TruststoreUri string
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    crossRealmTrustAdminServer String
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustKdc String
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustRealm String
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    crossRealmTrustSharedPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    enableKerberos Boolean
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    kdcDbKeyUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    keyPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    keystorePasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    keystoreUri String
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    kmsKeyUri String
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    realm String
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    rootPrincipalPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    tgtLifetimeHours Integer
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    truststorePasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    truststoreUri String
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    crossRealmTrustAdminServer string
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustKdc string
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustRealm string
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    crossRealmTrustSharedPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    enableKerberos boolean
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    kdcDbKeyUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    keyPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    keystorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    keystoreUri string
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    kmsKeyUri string
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    realm string
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    rootPrincipalPasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    tgtLifetimeHours number
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    truststorePasswordUri string
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    truststoreUri string
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    cross_realm_trust_admin_server str
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    cross_realm_trust_kdc str
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    cross_realm_trust_realm str
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    cross_realm_trust_shared_password_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    enable_kerberos bool
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    kdc_db_key_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    key_password_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    keystore_password_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    keystore_uri str
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    kms_key_uri str
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    realm str
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    root_principal_password_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    tgt_lifetime_hours int
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    truststore_password_uri str
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    truststore_uri str
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    crossRealmTrustAdminServer String
    Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustKdc String
    Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
    crossRealmTrustRealm String
    Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
    crossRealmTrustSharedPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
    enableKerberos Boolean
    Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
    kdcDbKeyUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
    keyPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
    keystorePasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
    keystoreUri String
    Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
    kmsKeyUri String
    Optional. The URI of the KMS key used to encrypt various sensitive files.
    realm String
    Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
    rootPrincipalPasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
    tgtLifetimeHours Number
    Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
    truststorePasswordUri String
    Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
    truststoreUri String
    Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

    LifecycleConfig, LifecycleConfigArgs

    AutoDeleteTime string
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    AutoDeleteTtl string
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    IdleDeleteTtl string
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    AutoDeleteTime string
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    AutoDeleteTtl string
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    IdleDeleteTtl string
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTime String
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTtl String
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idleDeleteTtl String
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTime string
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTtl string
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idleDeleteTtl string
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    auto_delete_time str
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    auto_delete_ttl str
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idle_delete_ttl str
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTime String
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTtl String
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idleDeleteTtl String
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
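
    Both TTL fields take proto3 JSON Duration strings (seconds with an "s" suffix) and autoDeleteTime takes an RFC 3339 timestamp. A small sketch within the documented 5-minute/10-minute minimums and 14-day maximums (values are illustrative):

    const lifecycleConfig = {
        idleDeleteTtl: "1800s",    // delete the cluster after 30 idle minutes (min 5m, max 14d)
        autoDeleteTtl: "86400s",   // cap total cluster lifetime at 24 hours (min 10m, max 14d)
        // autoDeleteTime: "2025-01-01T00:00:00Z",  // alternative: an absolute deletion time
    };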

    LifecycleConfigResponse, LifecycleConfigResponseArgs

    AutoDeleteTime string
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    AutoDeleteTtl string
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    IdleDeleteTtl string
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    IdleStartTime string
    The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    AutoDeleteTime string
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    AutoDeleteTtl string
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    IdleDeleteTtl string
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    IdleStartTime string
    The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTime String
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTtl String
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idleDeleteTtl String
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idleStartTime String
    The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTime string
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTtl string
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idleDeleteTtl string
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idleStartTime string
    The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    auto_delete_time str
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    auto_delete_ttl str
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idle_delete_ttl str
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idle_start_time str
    The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTime String
    Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    autoDeleteTtl String
    Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idleDeleteTtl String
    Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
    idleStartTime String
    The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).

    LoggingConfig, LoggingConfigArgs

    DriverLogLevels Dictionary<string, string>
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    DriverLogLevels map[string]string
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driverLogLevels Map<String,String>
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driverLogLevels {[key: string]: string}
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driver_log_levels Mapping[str, str]
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driverLogLevels Map<String>
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
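
    For example, the driver log levels might be populated like this in Python, reusing the example values from the description above; dataproc.LoggingConfigArgs is the Args class named in this section's heading. The resulting config is then referenced from a job type (for example a Spark or PySpark job) in the template's ordered jobs.

    import pulumi_google_native.dataproc.v1 as dataproc

    # Per-package log levels for the job driver; "root" configures the root logger.
    logging_config = dataproc.LoggingConfigArgs(
        driver_log_levels={
            "root": "INFO",
            "org.apache": "DEBUG",
            "com.google": "FATAL",
        }
    )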

    LoggingConfigResponse, LoggingConfigResponseArgs

    DriverLogLevels Dictionary<string, string>
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    DriverLogLevels map[string]string
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driverLogLevels Map<String,String>
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driverLogLevels {[key: string]: string}
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driver_log_levels Mapping[str, str]
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'
    driverLogLevels Map<String>
    The per-package log levels for the driver. This can include "root" package name to configure rootLogger. Examples: - 'com.google = FATAL' - 'root = INFO' - 'org.apache = DEBUG'

    ManagedCluster, ManagedClusterArgs

    ClusterName string
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    Config Pulumi.GoogleNative.Dataproc.V1.Inputs.ClusterConfig
    The cluster configuration.
    Labels Dictionary<string, string>
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
    ClusterName string
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    Config ClusterConfig
    The cluster configuration.
    Labels map[string]string
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
    clusterName String
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    config ClusterConfig
    The cluster configuration.
    labels Map<String,String>
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
    clusterName string
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    config ClusterConfig
    The cluster configuration.
    labels {[key: string]: string}
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
    cluster_name str
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    config ClusterConfig
    The cluster configuration.
    labels Mapping[str, str]
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
    clusterName String
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    config Property Map
    The cluster configuration.
    labels Map<String>
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
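
    A minimal Python sketch of a managed (ephemeral) cluster placement; dataproc.ManagedClusterArgs, dataproc.WorkflowTemplatePlacementArgs, and dataproc.ClusterConfigArgs are assumed from the type listings on this page, and the name and label values are placeholders.

    import pulumi_google_native.dataproc.v1 as dataproc

    # cluster_name is only a prefix: Dataproc appends a random suffix at instantiation time.
    placement = dataproc.WorkflowTemplatePlacementArgs(
        managed_cluster=dataproc.ManagedClusterArgs(
            cluster_name="wf-ephemeral",          # 2-35 chars: lowercase letters, digits, hyphens
            config=dataproc.ClusterConfigArgs(),  # cluster configuration (see ClusterConfig)
            labels={"team": "analytics"},         # at most 32 labels
        )
    )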

    ManagedClusterResponse, ManagedClusterResponseArgs

    ClusterName string
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    Config Pulumi.GoogleNative.Dataproc.V1.Inputs.ClusterConfigResponse
    The cluster configuration.
    Labels Dictionary<string, string>
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
    ClusterName string
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    Config ClusterConfigResponse
    The cluster configuration.
    Labels map[string]string
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
    clusterName String
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    config ClusterConfigResponse
    The cluster configuration.
    labels Map<String,String>
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
    clusterName string
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    config ClusterConfigResponse
    The cluster configuration.
    labels {[key: string]: string}
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
    cluster_name str
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    config ClusterConfigResponse
    The cluster configuration.
    labels Mapping[str, str]
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
    clusterName String
    The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
    config Property Map
    The cluster configuration.
    labels Map<String>
    Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.

    ManagedGroupConfigResponse, ManagedGroupConfigResponseArgs

    InstanceGroupManagerName string
    The name of the Instance Group Manager for this group.
    InstanceGroupManagerUri string
    The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
    InstanceTemplateName string
    The name of the Instance Template used for the Managed Instance Group.
    InstanceGroupManagerName string
    The name of the Instance Group Manager for this group.
    InstanceGroupManagerUri string
    The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
    InstanceTemplateName string
    The name of the Instance Template used for the Managed Instance Group.
    instanceGroupManagerName String
    The name of the Instance Group Manager for this group.
    instanceGroupManagerUri String
    The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
    instanceTemplateName String
    The name of the Instance Template used for the Managed Instance Group.
    instanceGroupManagerName string
    The name of the Instance Group Manager for this group.
    instanceGroupManagerUri string
    The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
    instanceTemplateName string
    The name of the Instance Template used for the Managed Instance Group.
    instance_group_manager_name str
    The name of the Instance Group Manager for this group.
    instance_group_manager_uri str
    The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
    instance_template_name str
    The name of the Instance Template used for the Managed Instance Group.
    instanceGroupManagerName String
    The name of the Instance Group Manager for this group.
    instanceGroupManagerUri String
    The partial URI to the instance group manager for this group. E.g. projects/my-project/regions/us-central1/instanceGroupManagers/my-igm.
    instanceTemplateName String
    The name of the Instance Template used for the Managed Instance Group.

    MetastoreConfig, MetastoreConfigArgs

    DataprocMetastoreService string
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
    DataprocMetastoreService string
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
    dataprocMetastoreService String
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
    dataprocMetastoreService string
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
    dataproc_metastore_service str
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
    dataprocMetastoreService String
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
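
    As a sketch, pointing a cluster at an existing Dataproc Metastore service might look like this in Python; the project, region, and service names are placeholders, and the metastore_config field on ClusterConfig is assumed from its listing elsewhere on this page.

    import pulumi_google_native.dataproc.v1 as dataproc

    # Reference an existing Dataproc Metastore service by its full resource name.
    metastore = dataproc.MetastoreConfigArgs(
        dataproc_metastore_service="projects/my-project/locations/us-central1/services/my-metastore",
    )
    cluster_config = dataproc.ClusterConfigArgs(metastore_config=metastore)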

    MetastoreConfigResponse, MetastoreConfigResponseArgs

    DataprocMetastoreService string
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
    DataprocMetastoreService string
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
    dataprocMetastoreService String
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
    dataprocMetastoreService string
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
    dataproc_metastore_service str
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
    dataprocMetastoreService String
    Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

    Metric, MetricArgs

    MetricSource Pulumi.GoogleNative.Dataproc.V1.MetricMetricSource
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
    MetricOverrides List<string>
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
    MetricSource MetricMetricSource
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
    MetricOverrides []string
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
    metricSource MetricMetricSource
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
    metricOverrides List<String>
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
    metricSource MetricMetricSource
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
    metricOverrides string[]
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
    metric_source MetricMetricSource
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
    metric_overrides Sequence[str]
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
    metricSource "METRIC_SOURCE_UNSPECIFIED" | "MONITORING_AGENT_DEFAULTS" | "HDFS" | "SPARK" | "YARN" | "SPARK_HISTORY_SERVER" | "HIVESERVER2" | "HIVEMETASTORE" | "FLINK"
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
    metricOverrides List<String>
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
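
    For instance, collecting only one Spark metric (taken from the examples above) instead of the standard SPARK set might be expressed in Python as follows; dataproc.MetricArgs and the MetricMetricSource enum are named in this and the following section, while wiring the metric into the cluster's metric configuration is not shown here.

    import pulumi_google_native.dataproc.v1 as dataproc

    # Only this override is collected for the SPARK source; metrics from other
    # enabled sources (for example YARN) are unaffected.
    spark_metrics = dataproc.MetricArgs(
        metric_source=dataproc.MetricMetricSource.SPARK,
        metric_overrides=["spark:driver:DAGScheduler:job.allJobs"],
    )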

    MetricMetricSource, MetricMetricSourceArgs

    MetricSourceUnspecified
    METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
    MonitoringAgentDefaults
    MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
    Hdfs
    HDFS: HDFS metric source.
    Spark
    SPARK: Spark metric source.
    Yarn
    YARN: YARN metric source.
    SparkHistoryServer
    SPARK_HISTORY_SERVER: Spark History Server metric source.
    Hiveserver2
    HIVESERVER2: Hiveserver2 metric source.
    Hivemetastore
    HIVEMETASTORE: hivemetastore metric source.
    Flink
    FLINK: flink metric source.
    MetricMetricSourceMetricSourceUnspecified
    METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
    MetricMetricSourceMonitoringAgentDefaults
    MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
    MetricMetricSourceHdfs
    HDFS: HDFS metric source.
    MetricMetricSourceSpark
    SPARK: Spark metric source.
    MetricMetricSourceYarn
    YARN: YARN metric source.
    MetricMetricSourceSparkHistoryServer
    SPARK_HISTORY_SERVER: Spark History Server metric source.
    MetricMetricSourceHiveserver2
    HIVESERVER2: Hiveserver2 metric source.
    MetricMetricSourceHivemetastore
    HIVEMETASTORE: hivemetastore metric source.
    MetricMetricSourceFlink
    FLINK: flink metric source.
    MetricSourceUnspecified
    METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
    MonitoringAgentDefaults
    MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
    Hdfs
    HDFS: HDFS metric source.
    Spark
    SPARK: Spark metric source.
    Yarn
    YARN: YARN metric source.
    SparkHistoryServer
    SPARK_HISTORY_SERVER: Spark History Server metric source.
    Hiveserver2
    HIVESERVER2: Hiveserver2 metric source.
    Hivemetastore
    HIVEMETASTORE: hivemetastore metric source.
    Flink
    FLINK: flink metric source.
    MetricSourceUnspecified
    METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
    MonitoringAgentDefaults
    MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
    Hdfs
    HDFS: HDFS metric source.
    Spark
    SPARK: Spark metric source.
    Yarn
    YARN: YARN metric source.
    SparkHistoryServer
    SPARK_HISTORY_SERVER: Spark History Server metric source.
    Hiveserver2
    HIVESERVER2: Hiveserver2 metric source.
    Hivemetastore
    HIVEMETASTORE: hivemetastore metric source.
    Flink
    FLINK: flink metric source.
    METRIC_SOURCE_UNSPECIFIED
    METRIC_SOURCE_UNSPECIFIED: Required unspecified metric source.
    MONITORING_AGENT_DEFAULTS
    MONITORING_AGENT_DEFAULTS: Monitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
    HDFS
    HDFS: HDFS metric source.
    SPARK
    SPARK: Spark metric source.
    YARN
    YARN: YARN metric source.
    SPARK_HISTORY_SERVER
    SPARK_HISTORY_SERVER: Spark History Server metric source.
    HIVESERVER2
    HIVESERVER2: Hiveserver2 metric source.
    HIVEMETASTORE
    HIVEMETASTORE: hivemetastore metric source.
    FLINK
    FLINK: flink metric source.
    "METRIC_SOURCE_UNSPECIFIED"
    METRIC_SOURCE_UNSPECIFIEDRequired unspecified metric source.
    "MONITORING_AGENT_DEFAULTS"
    MONITORING_AGENT_DEFAULTSMonitoring agent metrics. If this source is enabled, Dataproc enables the monitoring agent in Compute Engine, and collects monitoring agent metrics, which are published with an agent.googleapis.com prefix.
    "HDFS"
    HDFSHDFS metric source.
    "SPARK"
    SPARKSpark metric source.
    "YARN"
    YARNYARN metric source.
    "SPARK_HISTORY_SERVER"
    SPARK_HISTORY_SERVERSpark History Server metric source.
    "HIVESERVER2"
    HIVESERVER2Hiveserver2 metric source.
    "HIVEMETASTORE"
    HIVEMETASTOREhivemetastore metric source
    "FLINK"
    FLINKflink metric source

    MetricResponse, MetricResponseArgs

    MetricOverrides List<string>
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
    MetricSource string
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
    MetricOverrides []string
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
    MetricSource string
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
    metricOverrides List<String>
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
    metricSource String
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
    metricOverrides string[]
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
    metricSource string
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
    metric_overrides Sequence[str]
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
    metric_source str
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).
    metricOverrides List<String>
    Optional. Specify one or more Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) to collect for the metric source (for the SPARK metric source, any Spark metric (https://spark.apache.org/docs/latest/monitoring.html#metrics) can be specified). Provide metrics in the following format: METRIC_SOURCE:INSTANCE:GROUP:METRIC. Use camelcase as appropriate. Examples: yarn:ResourceManager:QueueMetrics:AppsCompleted, spark:driver:DAGScheduler:job.allJobs, sparkHistoryServer:JVM:Memory:NonHeapMemoryUsage.committed, hiveserver2:JVM:Memory:NonHeapMemoryUsage.used. Notes: Only the specified overridden metrics are collected for the metric source. For example, if one or more spark:executor metrics are listed as metric overrides, other SPARK metrics are not collected. The collection of metrics for other enabled custom metric sources is unaffected. For example, if both SPARK and YARN metric sources are enabled, and overrides are provided for Spark metrics only, all YARN metrics are collected.
    metricSource String
    A standard set of metrics is collected unless metricOverrides are specified for the metric source (see Custom metrics (https://cloud.google.com/dataproc/docs/guides/dataproc-metrics#custom_metrics) for more information).

    NamespacedGkeDeploymentTarget, NamespacedGkeDeploymentTargetArgs

    ClusterNamespace string
    Optional. A namespace within the GKE cluster to deploy into.
    TargetGkeCluster string
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    ClusterNamespace string
    Optional. A namespace within the GKE cluster to deploy into.
    TargetGkeCluster string
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    clusterNamespace String
    Optional. A namespace within the GKE cluster to deploy into.
    targetGkeCluster String
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    clusterNamespace string
    Optional. A namespace within the GKE cluster to deploy into.
    targetGkeCluster string
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    cluster_namespace str
    Optional. A namespace within the GKE cluster to deploy into.
    target_gke_cluster str
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    clusterNamespace String
    Optional. A namespace within the GKE cluster to deploy into.
    targetGkeCluster String
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
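
    A minimal Python sketch of the two fields above; the project, location, cluster, and namespace values are placeholders.

    import pulumi_google_native.dataproc.v1 as dataproc

    # Deploy into a specific namespace of an existing GKE cluster.
    gke_target = dataproc.NamespacedGkeDeploymentTargetArgs(
        target_gke_cluster="projects/my-project/locations/us-central1/clusters/my-gke-cluster",
        cluster_namespace="dataproc-workloads",
    )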

    NamespacedGkeDeploymentTargetResponse, NamespacedGkeDeploymentTargetResponseArgs

    ClusterNamespace string
    Optional. A namespace within the GKE cluster to deploy into.
    TargetGkeCluster string
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    ClusterNamespace string
    Optional. A namespace within the GKE cluster to deploy into.
    TargetGkeCluster string
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    clusterNamespace String
    Optional. A namespace within the GKE cluster to deploy into.
    targetGkeCluster String
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    clusterNamespace string
    Optional. A namespace within the GKE cluster to deploy into.
    targetGkeCluster string
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    cluster_namespace str
    Optional. A namespace within the GKE cluster to deploy into.
    target_gke_cluster str
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
    clusterNamespace String
    Optional. A namespace within the GKE cluster to deploy into.
    targetGkeCluster String
    Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

    NodeGroup, NodeGroupArgs

    Roles List<Pulumi.GoogleNative.Dataproc.V1.NodeGroupRolesItem>
    Node group roles.
    Labels Dictionary<string, string>
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    Name string
    The Node group resource name (https://aip.dev/122).
    NodeGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfig
    Optional. The node group instance group configuration.
    Roles []NodeGroupRolesItem
    Node group roles.
    Labels map[string]string
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    Name string
    The Node group resource name (https://aip.dev/122).
    NodeGroupConfig InstanceGroupConfig
    Optional. The node group instance group configuration.
    roles List<NodeGroupRolesItem>
    Node group roles.
    labels Map<String,String>
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    name String
    The Node group resource name (https://aip.dev/122).
    nodeGroupConfig InstanceGroupConfig
    Optional. The node group instance group configuration.
    roles NodeGroupRolesItem[]
    Node group roles.
    labels {[key: string]: string}
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    name string
    The Node group resource name (https://aip.dev/122).
    nodeGroupConfig InstanceGroupConfig
    Optional. The node group instance group configuration.
    roles Sequence[NodeGroupRolesItem]
    Node group roles.
    labels Mapping[str, str]
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    name str
    The Node group resource name (https://aip.dev/122).
    node_group_config InstanceGroupConfig
    Optional. The node group instance group configuration.
    roles List<"ROLE_UNSPECIFIED" | "DRIVER">
    Node group roles.
    labels Map<String>
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    name String
    The Node group resource name (https://aip.dev/122).
    nodeGroupConfig Property Map
    Optional. The node group instance group configuration.
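
    As an illustration, a driver-only node group might be declared like this in Python; dataproc.NodeGroupArgs and the NodeGroupRolesItem enum are named on this page, while dataproc.InstanceGroupConfigArgs and its num_instances and machine_type_uri fields are assumptions based on the referenced InstanceGroupConfig type.

    import pulumi_google_native.dataproc.v1 as dataproc

    # DRIVER is currently the only non-unspecified role: job drivers run on this node pool.
    driver_group = dataproc.NodeGroupArgs(
        roles=[dataproc.NodeGroupRolesItem.DRIVER],
        labels={"purpose": "job-drivers"},
        node_group_config=dataproc.InstanceGroupConfigArgs(
            num_instances=2,
            machine_type_uri="n1-standard-4",
        ),
    )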

    NodeGroupAffinity, NodeGroupAffinityArgs

    NodeGroupUri string
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
    NodeGroupUri string
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
    nodeGroupUri String
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
    nodeGroupUri string
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
    node_group_uri str
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
    nodeGroupUri String
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
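
    A sketch of pinning a cluster's VMs to a sole-tenant node group in Python; dataproc.NodeGroupAffinityArgs is the Args class above, while dataproc.GceClusterConfigArgs and its node_group_affinity field are assumed from the GceClusterConfig listing elsewhere on this page. The URI uses one of the documented example forms, with bracketed placeholders.

    import pulumi_google_native.dataproc.v1 as dataproc

    # A full URL, partial URI, or bare node group name are all accepted.
    gce_config = dataproc.GceClusterConfigArgs(
        node_group_affinity=dataproc.NodeGroupAffinityArgs(
            node_group_uri="projects/[project_id]/zones/[zone]/nodeGroups/node-group-1",
        ),
    )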

    NodeGroupAffinityResponse, NodeGroupAffinityResponseArgs

    NodeGroupUri string
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
    NodeGroupUri string
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
    nodeGroupUri String
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
    nodeGroupUri string
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
    node_group_uri str
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1
    nodeGroupUri String
    The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 projects/[project_id]/zones/[zone]/nodeGroups/node-group-1 node-group-1

    NodeGroupResponse, NodeGroupResponseArgs

    Labels Dictionary<string, string>
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    Name string
    The Node group resource name (https://aip.dev/122).
    NodeGroupConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.InstanceGroupConfigResponse
    Optional. The node group instance group configuration.
    Roles List<string>
    Node group roles.
    Labels map[string]string
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    Name string
    The Node group resource name (https://aip.dev/122).
    NodeGroupConfig InstanceGroupConfigResponse
    Optional. The node group instance group configuration.
    Roles []string
    Node group roles.
    labels Map<String,String>
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    name String
    The Node group resource name (https://aip.dev/122).
    nodeGroupConfig InstanceGroupConfigResponse
    Optional. The node group instance group configuration.
    roles List<String>
    Node group roles.
    labels {[key: string]: string}
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    name string
    The Node group resource name (https://aip.dev/122).
    nodeGroupConfig InstanceGroupConfigResponse
    Optional. The node group instance group configuration.
    roles string[]
    Node group roles.
    labels Mapping[str, str]
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    name str
    The Node group resource name (https://aip.dev/122).
    node_group_config InstanceGroupConfigResponse
    Optional. The node group instance group configuration.
    roles Sequence[str]
    Node group roles.
    labels Map<String>
    Optional. Node group labels. Label keys must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values can be empty. If specified, they must consist of 1 to 63 characters and conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). The node group must have no more than 32 labels.
    name String
    The Node group resource name (https://aip.dev/122).
    nodeGroupConfig Property Map
    Optional. The node group instance group configuration.
    roles List<String>
    Node group roles.

    NodeGroupRolesItem, NodeGroupRolesItemArgs

    RoleUnspecified
    ROLE_UNSPECIFIED: Required unspecified role.
    Driver
    DRIVER: Job drivers run on the node pool.
    NodeGroupRolesItemRoleUnspecified
    ROLE_UNSPECIFIED: Required unspecified role.
    NodeGroupRolesItemDriver
    DRIVER: Job drivers run on the node pool.
    RoleUnspecified
    ROLE_UNSPECIFIED: Required unspecified role.
    Driver
    DRIVER: Job drivers run on the node pool.
    RoleUnspecified
    ROLE_UNSPECIFIED: Required unspecified role.
    Driver
    DRIVER: Job drivers run on the node pool.
    ROLE_UNSPECIFIED
    ROLE_UNSPECIFIED: Required unspecified role.
    DRIVER
    DRIVER: Job drivers run on the node pool.
    "ROLE_UNSPECIFIED"
    ROLE_UNSPECIFIED: Required unspecified role.
    "DRIVER"
    DRIVER: Job drivers run on the node pool.

    NodeInitializationAction, NodeInitializationActionArgs

    ExecutableFile string
    Cloud Storage URI of executable file.
    ExecutionTimeout string
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
    ExecutableFile string
    Cloud Storage URI of executable file.
    ExecutionTimeout string
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
    executableFile String
    Cloud Storage URI of executable file.
    executionTimeout String
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
    executableFile string
    Cloud Storage URI of executable file.
    executionTimeout string
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
    executable_file str
    Cloud Storage URI of executable file.
    execution_timeout str
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
    executableFile String
    Cloud Storage URI of executable file.
    executionTimeout String
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
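    For example, an initialization action that runs a setup script from Cloud Storage with a 15-minute timeout could look like the following minimal Python sketch (it assumes the pulumi_google_native SDK; the gs:// URI and timeout value are illustrative, while the field names follow the Python rows above).

    import pulumi_google_native.dataproc.v1 as dataproc

    # Runs an executable from Cloud Storage on each node during cluster creation.
    # "900s" uses the JSON Duration format; the default timeout is 10 minutes.
    init_action = dataproc.NodeInitializationActionArgs(
        executable_file="gs://my-bucket/scripts/install-deps.sh",  # illustrative URI
        execution_timeout="900s",
    )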

    NodeInitializationActionResponse, NodeInitializationActionResponseArgs

    ExecutableFile string
    Cloud Storage URI of executable file.
    ExecutionTimeout string
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
    ExecutableFile string
    Cloud Storage URI of executable file.
    ExecutionTimeout string
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
    executableFile String
    Cloud Storage URI of executable file.
    executionTimeout String
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
    executableFile string
    Cloud Storage URI of executable file.
    executionTimeout string
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
    executable_file str
    Cloud Storage URI of executable file.
    execution_timeout str
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
    executableFile String
    Cloud Storage URI of executable file.
    executionTimeout String
    Optional. Amount of time the executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

    OrderedJob, OrderedJobArgs

    StepId string
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    FlinkJob Pulumi.GoogleNative.Dataproc.V1.Inputs.FlinkJob
    Optional. Job is a Flink job.
    HadoopJob Pulumi.GoogleNative.Dataproc.V1.Inputs.HadoopJob
    Optional. Job is a Hadoop job.
    HiveJob Pulumi.GoogleNative.Dataproc.V1.Inputs.HiveJob
    Optional. Job is a Hive job.
    Labels Dictionary<string, string>
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    PigJob Pulumi.GoogleNative.Dataproc.V1.Inputs.PigJob
    Optional. Job is a Pig job.
    PrerequisiteStepIds List<string>
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    PrestoJob Pulumi.GoogleNative.Dataproc.V1.Inputs.PrestoJob
    Optional. Job is a Presto job.
    PysparkJob Pulumi.GoogleNative.Dataproc.V1.Inputs.PySparkJob
    Optional. Job is a PySpark job.
    Scheduling Pulumi.GoogleNative.Dataproc.V1.Inputs.JobScheduling
    Optional. Job scheduling configuration.
    SparkJob Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkJob
    Optional. Job is a Spark job.
    SparkRJob Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkRJob
    Optional. Job is a SparkR job.
    SparkSqlJob Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkSqlJob
    Optional. Job is a SparkSql job.
    TrinoJob Pulumi.GoogleNative.Dataproc.V1.Inputs.TrinoJob
    Optional. Job is a Trino job.
    StepId string
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    FlinkJob FlinkJob
    Optional. Job is a Flink job.
    HadoopJob HadoopJob
    Optional. Job is a Hadoop job.
    HiveJob HiveJob
    Optional. Job is a Hive job.
    Labels map[string]string
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    PigJob PigJob
    Optional. Job is a Pig job.
    PrerequisiteStepIds []string
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    PrestoJob PrestoJob
    Optional. Job is a Presto job.
    PysparkJob PySparkJob
    Optional. Job is a PySpark job.
    Scheduling JobScheduling
    Optional. Job scheduling configuration.
    SparkJob SparkJob
    Optional. Job is a Spark job.
    SparkRJob SparkRJob
    Optional. Job is a SparkR job.
    SparkSqlJob SparkSqlJob
    Optional. Job is a SparkSql job.
    TrinoJob TrinoJob
    Optional. Job is a Trino job.
    stepId String
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    flinkJob FlinkJob
    Optional. Job is a Flink job.
    hadoopJob HadoopJob
    Optional. Job is a Hadoop job.
    hiveJob HiveJob
    Optional. Job is a Hive job.
    labels Map<String,String>
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    pigJob PigJob
    Optional. Job is a Pig job.
    prerequisiteStepIds List<String>
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    prestoJob PrestoJob
    Optional. Job is a Presto job.
    pysparkJob PySparkJob
    Optional. Job is a PySpark job.
    scheduling JobScheduling
    Optional. Job scheduling configuration.
    sparkJob SparkJob
    Optional. Job is a Spark job.
    sparkRJob SparkRJob
    Optional. Job is a SparkR job.
    sparkSqlJob SparkSqlJob
    Optional. Job is a SparkSql job.
    trinoJob TrinoJob
    Optional. Job is a Trino job.
    stepId string
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    flinkJob FlinkJob
    Optional. Job is a Flink job.
    hadoopJob HadoopJob
    Optional. Job is a Hadoop job.
    hiveJob HiveJob
    Optional. Job is a Hive job.
    labels {[key: string]: string}
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    pigJob PigJob
    Optional. Job is a Pig job.
    prerequisiteStepIds string[]
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    prestoJob PrestoJob
    Optional. Job is a Presto job.
    pysparkJob PySparkJob
    Optional. Job is a PySpark job.
    scheduling JobScheduling
    Optional. Job scheduling configuration.
    sparkJob SparkJob
    Optional. Job is a Spark job.
    sparkRJob SparkRJob
    Optional. Job is a SparkR job.
    sparkSqlJob SparkSqlJob
    Optional. Job is a SparkSql job.
    trinoJob TrinoJob
    Optional. Job is a Trino job.
    step_id str
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    flink_job FlinkJob
    Optional. Job is a Flink job.
    hadoop_job HadoopJob
    Optional. Job is a Hadoop job.
    hive_job HiveJob
    Optional. Job is a Hive job.
    labels Mapping[str, str]
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    pig_job PigJob
    Optional. Job is a Pig job.
    prerequisite_step_ids Sequence[str]
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    presto_job PrestoJob
    Optional. Job is a Presto job.
    pyspark_job PySparkJob
    Optional. Job is a PySpark job.
    scheduling JobScheduling
    Optional. Job scheduling configuration.
    spark_job SparkJob
    Optional. Job is a Spark job.
    spark_r_job SparkRJob
    Optional. Job is a SparkR job.
    spark_sql_job SparkSqlJob
    Optional. Job is a SparkSql job.
    trino_job TrinoJob
    Optional. Job is a Trino job.
    stepId String
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    flinkJob Property Map
    Optional. Job is a Flink job.
    hadoopJob Property Map
    Optional. Job is a Hadoop job.
    hiveJob Property Map
    Optional. Job is a Hive job.
    labels Map<String>
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    pigJob Property Map
    Optional. Job is a Pig job.
    prerequisiteStepIds List<String>
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    prestoJob Property Map
    Optional. Job is a Presto job.
    pysparkJob Property Map
    Optional. Job is a PySpark job.
    scheduling Property Map
    Optional. Job scheduling configuration.
    sparkJob Property Map
    Optional. Job is a Spark job.
    sparkRJob Property Map
    Optional. Job is a SparkR job.
    sparkSqlJob Property Map
    Optional. Job is a SparkSql job.
    trinoJob Property Map
    Optional. Job is a Trino job.
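    To make the step ordering concrete, here is a minimal Python sketch of two ordered jobs, where the second step waits for the first via prerequisite_step_ids. It assumes the pulumi_google_native SDK; the step ids, bucket paths, and labels are illustrative, and the field names follow the Python rows above.

    import pulumi_google_native.dataproc.v1 as dataproc

    # Step 1: a PySpark job that prepares the data.
    prepare_step = dataproc.OrderedJobArgs(
        step_id="prepare-data",
        pyspark_job=dataproc.PySparkJobArgs(
            main_python_file_uri="gs://my-bucket/jobs/prepare.py",
        ),
    )

    # Step 2: a Pig job that only starts after "prepare-data" succeeds.
    report_step = dataproc.OrderedJobArgs(
        step_id="build-report",
        prerequisite_step_ids=["prepare-data"],
        pig_job=dataproc.PigJobArgs(
            query_file_uri="gs://my-bucket/jobs/report.pig",
        ),
        labels={"pipeline": "daily"},
    )

    Both steps would then be passed together in the template's jobs list.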

    OrderedJobResponse, OrderedJobResponseArgs

    FlinkJob Pulumi.GoogleNative.Dataproc.V1.Inputs.FlinkJobResponse
    Optional. Job is a Flink job.
    HadoopJob Pulumi.GoogleNative.Dataproc.V1.Inputs.HadoopJobResponse
    Optional. Job is a Hadoop job.
    HiveJob Pulumi.GoogleNative.Dataproc.V1.Inputs.HiveJobResponse
    Optional. Job is a Hive job.
    Labels Dictionary<string, string>
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    PigJob Pulumi.GoogleNative.Dataproc.V1.Inputs.PigJobResponse
    Optional. Job is a Pig job.
    PrerequisiteStepIds List<string>
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    PrestoJob Pulumi.GoogleNative.Dataproc.V1.Inputs.PrestoJobResponse
    Optional. Job is a Presto job.
    PysparkJob Pulumi.GoogleNative.Dataproc.V1.Inputs.PySparkJobResponse
    Optional. Job is a PySpark job.
    Scheduling Pulumi.GoogleNative.Dataproc.V1.Inputs.JobSchedulingResponse
    Optional. Job scheduling configuration.
    SparkJob Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkJobResponse
    Optional. Job is a Spark job.
    SparkRJob Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkRJobResponse
    Optional. Job is a SparkR job.
    SparkSqlJob Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkSqlJobResponse
    Optional. Job is a SparkSql job.
    StepId string
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    TrinoJob Pulumi.GoogleNative.Dataproc.V1.Inputs.TrinoJobResponse
    Optional. Job is a Trino job.
    FlinkJob FlinkJobResponse
    Optional. Job is a Flink job.
    HadoopJob HadoopJobResponse
    Optional. Job is a Hadoop job.
    HiveJob HiveJobResponse
    Optional. Job is a Hive job.
    Labels map[string]string
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    PigJob PigJobResponse
    Optional. Job is a Pig job.
    PrerequisiteStepIds []string
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    PrestoJob PrestoJobResponse
    Optional. Job is a Presto job.
    PysparkJob PySparkJobResponse
    Optional. Job is a PySpark job.
    Scheduling JobSchedulingResponse
    Optional. Job scheduling configuration.
    SparkJob SparkJobResponse
    Optional. Job is a Spark job.
    SparkRJob SparkRJobResponse
    Optional. Job is a SparkR job.
    SparkSqlJob SparkSqlJobResponse
    Optional. Job is a SparkSql job.
    StepId string
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    TrinoJob TrinoJobResponse
    Optional. Job is a Trino job.
    flinkJob FlinkJobResponse
    Optional. Job is a Flink job.
    hadoopJob HadoopJobResponse
    Optional. Job is a Hadoop job.
    hiveJob HiveJobResponse
    Optional. Job is a Hive job.
    labels Map<String,String>
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    pigJob PigJobResponse
    Optional. Job is a Pig job.
    prerequisiteStepIds List<String>
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    prestoJob PrestoJobResponse
    Optional. Job is a Presto job.
    pysparkJob PySparkJobResponse
    Optional. Job is a PySpark job.
    scheduling JobSchedulingResponse
    Optional. Job scheduling configuration.
    sparkJob SparkJobResponse
    Optional. Job is a Spark job.
    sparkRJob SparkRJobResponse
    Optional. Job is a SparkR job.
    sparkSqlJob SparkSqlJobResponse
    Optional. Job is a SparkSql job.
    stepId String
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    trinoJob TrinoJobResponse
    Optional. Job is a Trino job.
    flinkJob FlinkJobResponse
    Optional. Job is a Flink job.
    hadoopJob HadoopJobResponse
    Optional. Job is a Hadoop job.
    hiveJob HiveJobResponse
    Optional. Job is a Hive job.
    labels {[key: string]: string}
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    pigJob PigJobResponse
    Optional. Job is a Pig job.
    prerequisiteStepIds string[]
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    prestoJob PrestoJobResponse
    Optional. Job is a Presto job.
    pysparkJob PySparkJobResponse
    Optional. Job is a PySpark job.
    scheduling JobSchedulingResponse
    Optional. Job scheduling configuration.
    sparkJob SparkJobResponse
    Optional. Job is a Spark job.
    sparkRJob SparkRJobResponse
    Optional. Job is a SparkR job.
    sparkSqlJob SparkSqlJobResponse
    Optional. Job is a SparkSql job.
    stepId string
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    trinoJob TrinoJobResponse
    Optional. Job is a Trino job.
    flink_job FlinkJobResponse
    Optional. Job is a Flink job.
    hadoop_job HadoopJobResponse
    Optional. Job is a Hadoop job.
    hive_job HiveJobResponse
    Optional. Job is a Hive job.
    labels Mapping[str, str]
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    pig_job PigJobResponse
    Optional. Job is a Pig job.
    prerequisite_step_ids Sequence[str]
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    presto_job PrestoJobResponse
    Optional. Job is a Presto job.
    pyspark_job PySparkJobResponse
    Optional. Job is a PySpark job.
    scheduling JobSchedulingResponse
    Optional. Job scheduling configuration.
    spark_job SparkJobResponse
    Optional. Job is a Spark job.
    spark_r_job SparkRJobResponse
    Optional. Job is a SparkR job.
    spark_sql_job SparkSqlJobResponse
    Optional. Job is a SparkSql job.
    step_id str
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    trino_job TrinoJobResponse
    Optional. Job is a Trino job.
    flinkJob Property Map
    Optional. Job is a Flink job.
    hadoopJob Property Map
    Optional. Job is a Hadoop job.
    hiveJob Property Map
    Optional. Job is a Hive job.
    labels Map<String>
    Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long and must conform to the regular expression \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
    pigJob Property Map
    Optional. Job is a Pig job.
    prerequisiteStepIds List<String>
    Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
    prestoJob Property Map
    Optional. Job is a Presto job.
    pysparkJob Property Map
    Optional. Job is a PySpark job.
    scheduling Property Map
    Optional. Job scheduling configuration.
    sparkJob Property Map
    Optional. Job is a Spark job.
    sparkRJob Property Map
    Optional. Job is a SparkR job.
    sparkSqlJob Property Map
    Optional. Job is a SparkSql job.
    stepId String
    The step id. The id must be unique among all jobs within the template. The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
    trinoJob Property Map
    Optional. Job is a Trino job.

    ParameterValidation, ParameterValidationArgs

    Regex Pulumi.GoogleNative.Dataproc.V1.Inputs.RegexValidation
    Validation based on regular expressions.
    Values Pulumi.GoogleNative.Dataproc.V1.Inputs.ValueValidation
    Validation based on a list of allowed values.
    Regex RegexValidation
    Validation based on regular expressions.
    Values ValueValidation
    Validation based on a list of allowed values.
    regex RegexValidation
    Validation based on regular expressions.
    values ValueValidation
    Validation based on a list of allowed values.
    regex RegexValidation
    Validation based on regular expressions.
    values ValueValidation
    Validation based on a list of allowed values.
    regex RegexValidation
    Validation based on regular expressions.
    values ValueValidation
    Validation based on a list of allowed values.
    regex Property Map
    Validation based on regular expressions.
    values Property Map
    Validation based on a list of allowed values.
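    A template parameter can be constrained either by a regular expression or by an explicit allow-list, but not both. A minimal Python sketch follows; it assumes the pulumi_google_native SDK and that RegexValidationArgs takes a regexes list and ValueValidationArgs takes a values list (as in the underlying Dataproc API), and the patterns and values shown are illustrative.

    import pulumi_google_native.dataproc.v1 as dataproc

    # Accept only zone-like values such as "us-central1-b" (regex is illustrative).
    zone_validation = dataproc.ParameterValidationArgs(
        regex=dataproc.RegexValidationArgs(regexes=[r"us-central1-[a-f]"]),
    )

    # Alternatively, restrict the parameter to a fixed set of allowed values.
    size_validation = dataproc.ParameterValidationArgs(
        values=dataproc.ValueValidationArgs(values=["small", "medium", "large"]),
    )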

    ParameterValidationResponse, ParameterValidationResponseArgs

    Regex RegexValidationResponse
    Validation based on regular expressions.
    Values ValueValidationResponse
    Validation based on a list of allowed values.
    regex RegexValidationResponse
    Validation based on regular expressions.
    values ValueValidationResponse
    Validation based on a list of allowed values.
    regex RegexValidationResponse
    Validation based on regular expressions.
    values ValueValidationResponse
    Validation based on a list of allowed values.
    regex RegexValidationResponse
    Validation based on regular expressions.
    values ValueValidationResponse
    Validation based on a list of allowed values.
    regex Property Map
    Validation based on regular expressions.
    values Property Map
    Validation based on a list of allowed values.

    PigJob, PigJobArgs

    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryList
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    QueryList QueryList
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains the Pig queries.
    queryList QueryList
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    queryList QueryList
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    query_file_uri str
    The HCFS URI of the script that contains the Pig queries.
    query_list QueryList
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains the Pig queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
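    As an example, a Pig step can inline its queries and substitute a script variable into them. A minimal Python sketch, assuming the pulumi_google_native SDK and that QueryListArgs takes a queries list of strings; the query text, variable name, and bucket path are illustrative.

    import pulumi_google_native.dataproc.v1 as dataproc

    # Inline Pig query with an $INPUT variable supplied via script_variables.
    pig_job = dataproc.PigJobArgs(
        query_list=dataproc.QueryListArgs(
            queries=["sales = LOAD '$INPUT' USING PigStorage(',');"],
        ),
        script_variables={"INPUT": "gs://my-bucket/data/sales.csv"},
        continue_on_failure=False,
    )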

    PigJobResponse, PigJobResponseArgs

    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryListResponse
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    QueryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    QueryList QueryListResponse
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains the Pig queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri string
    The HCFS URI of the script that contains the Pig queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    query_file_uri str
    The HCFS URI of the script that contains the Pig queries.
    query_list QueryListResponse
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
    queryFileUri String
    The HCFS URI of the script that contains the Pig queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).

    PrestoJob, PrestoJobArgs

    ClientTags List<string>
    Optional. Presto client tags to attach to this query
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryList
    A list of queries.
    ClientTags []string
    Optional. Presto client tags to attach to this query
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    Properties map[string]string
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryList
    A list of queries.
    clientTags List<String>
    Optional. Presto client tags to attach to this query
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    properties Map<String,String>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    clientTags string[]
    Optional. Presto client tags to attach to this query
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    outputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    properties {[key: string]: string}
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    client_tags Sequence[str]
    Optional. Presto client tags to attach to this query
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    output_format str
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    properties Mapping[str, str]
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryList
    A list of queries.
    clientTags List<String>
    Optional. Presto client tags to attach to this query
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    properties Map<String>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.
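    For instance, a Presto step that reads its SQL from Cloud Storage, tags the query, and sets a session property might look like the following Python sketch (assuming the pulumi_google_native SDK; the URI, tags, output format, and property values are illustrative).

    import pulumi_google_native.dataproc.v1 as dataproc

    # Presto step: the properties map corresponds to SET SESSION / the --session flag.
    presto_job = dataproc.PrestoJobArgs(
        query_file_uri="gs://my-bucket/queries/top_customers.sql",
        client_tags=["workflow", "reporting"],
        output_format="CSV",
        properties={"query_max_run_time": "30m"},
    )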

    PrestoJobResponse, PrestoJobResponseArgs

    ClientTags List<string>
    Optional. Presto client tags to attach to this query
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryListResponse
    A list of queries.
    ClientTags []string
    Optional. Presto client tags to attach to this query
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    Properties map[string]string
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryListResponse
    A list of queries.
    clientTags List<String>
    Optional. Presto client tags to attach to this query
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    properties Map<String,String>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    clientTags string[]
    Optional. Presto client tags to attach to this query
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    outputFormat string
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    properties {[key: string]: string}
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    client_tags Sequence[str]
    Optional. Presto client tags to attach to this query
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    output_format str
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    properties Mapping[str, str]
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryListResponse
    A list of queries.
    clientTags List<String>
    Optional. Presto client tags to attach to this query
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
    properties Map<String>
    Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.

    PySparkJob, PySparkJobArgs

    MainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    PythonFileUris List<string>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    MainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    PythonFileUris []string
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    mainPythonFileUri String
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris List<String>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    mainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris string[]
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    main_python_file_uri str
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    python_file_uris Sequence[str]
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    mainPythonFileUri String
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris List<String>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
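
    To make the shape concrete, here is a minimal TypeScript sketch of a pySparkJob block as it might appear in one of a workflow template's ordered jobs. The bucket paths, argument values, and property values are hypothetical; only mainPythonFileUri is required.

    const pySparkJob = {
        // Required: the driver script (must be a .py file).
        mainPythonFileUri: "gs://my-bucket/jobs/etl.py",
        // Plain driver arguments; avoid flags such as --conf that belong in properties.
        args: ["--date", "2023-11-01"],
        // Extra Python code passed to the PySpark framework (.py, .egg, or .zip).
        pythonFileUris: ["gs://my-bucket/jobs/helpers.zip"],
        // Jars added to the CLASSPATHs of the Python driver and tasks.
        jarFileUris: ["gs://my-bucket/libs/spark-bigquery.jar"],
        // Files copied into each executor's working directory.
        fileUris: ["gs://my-bucket/conf/lookup.csv"],
        // Archives extracted into each executor's working directory.
        archiveUris: ["gs://my-bucket/envs/venv.tar.gz"],
        // Spark properties; values that conflict with Dataproc-set properties may be overwritten.
        properties: { "spark.executor.memory": "4g" },
    };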

    PySparkJobResponse, PySparkJobResponseArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    PythonFileUris List<string>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    PythonFileUris []string
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainPythonFileUri String
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris List<String>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainPythonFileUri string
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris string[]
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    main_python_file_uri str
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    python_file_uris Sequence[str]
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainPythonFileUri String
    The HCFS URI of the main Python file to use as the driver. Must be a .py file.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    pythonFileUris List<String>
    Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.

    QueryList, QueryListArgs

    Queries List<string>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
    Queries []string
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
    queries List<String>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
    queries string[]
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
    queries Sequence[str]
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
    queries List<String>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
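
    As a sketch, a queryList for a Hive job might look like the following in TypeScript; the table and query names are hypothetical, and note how a single entry can bundle several semicolon-separated statements.

    const queryList = {
        queries: [
            "CREATE TABLE IF NOT EXISTS events (id BIGINT, ts STRING)",
            // One entry may carry multiple statements separated by semicolons.
            "INSERT INTO events SELECT * FROM staging_events; DROP TABLE staging_events",
        ],
    };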

    QueryListResponse, QueryListResponseArgs

    Queries List<string>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
    Queries []string
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
    queries List<String>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
    queries string[]
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
    queries Sequence[str]
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
    queries List<String>
    The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }

    RegexValidation, RegexValidationArgs

    Regexes List<string>
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
    Regexes []string
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
    regexes List<String>
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
    regexes string[]
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
    regexes Sequence[str]
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
    regexes List<String>
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
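
    For illustration, a regexValidation that constrains a template parameter to a zone-like value such as "us-central1-b" could be written as follows in TypeScript; the RE2 pattern shown is a hypothetical example and must match the parameter value in its entirety.

    const zoneValidation = {
        // RE2 patterns used to validate the parameter value; substring matches are not sufficient.
        regexes: ["[a-z]+-[a-z]+[0-9]-[a-z]"],
    };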

    RegexValidationResponse, RegexValidationResponseArgs

    Regexes List<string>
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
    Regexes []string
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
    regexes List<String>
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
    regexes string[]
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
    regexes Sequence[str]
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
    regexes List<String>
    RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).

    ReservationAffinity, ReservationAffinityArgs

    ConsumeReservationType Pulumi.GoogleNative.Dataproc.V1.ReservationAffinityConsumeReservationType
    Optional. Type of reservation to consume.
    Key string
    Optional. Corresponds to the label key of the reservation resource.
    Values List<string>
    Optional. Corresponds to the label values of the reservation resource.
    ConsumeReservationType ReservationAffinityConsumeReservationType
    Optional. Type of reservation to consume.
    Key string
    Optional. Corresponds to the label key of the reservation resource.
    Values []string
    Optional. Corresponds to the label values of the reservation resource.
    consumeReservationType ReservationAffinityConsumeReservationType
    Optional. Type of reservation to consume.
    key String
    Optional. Corresponds to the label key of the reservation resource.
    values List<String>
    Optional. Corresponds to the label values of the reservation resource.
    consumeReservationType ReservationAffinityConsumeReservationType
    Optional. Type of reservation to consume.
    key string
    Optional. Corresponds to the label key of the reservation resource.
    values string[]
    Optional. Corresponds to the label values of the reservation resource.
    consume_reservation_type ReservationAffinityConsumeReservationType
    Optional. Type of reservation to consume.
    key str
    Optional. Corresponds to the label key of the reservation resource.
    values Sequence[str]
    Optional. Corresponds to the label values of the reservation resource.
    consumeReservationType "TYPE_UNSPECIFIED" | "NO_RESERVATION" | "ANY_RESERVATION" | "SPECIFIC_RESERVATION"
    Optional. Type of reservation to consume.
    key String
    Optional. Corresponds to the label key of the reservation resource.
    values List<String>
    Optional. Corresponds to the label values of the reservation resource.
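
    A minimal TypeScript sketch of a reservationAffinity block that pins managed-cluster VMs to a named Compute Engine reservation; the reservation name is hypothetical, and the label key shown is an assumption based on the key Compute Engine conventionally uses for specific reservations.

    const reservationAffinity = {
        consumeReservationType: "SPECIFIC_RESERVATION",
        // Label key and values identifying the reservation to consume (assumed key shown).
        key: "compute.googleapis.com/reservation-name",
        values: ["my-reservation"],
    };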

    ReservationAffinityConsumeReservationType, ReservationAffinityConsumeReservationTypeArgs

    TypeUnspecified
    TYPE_UNSPECIFIED
    NoReservation
    NO_RESERVATION: Do not consume from any allocated capacity.
    AnyReservation
    ANY_RESERVATION: Consume any reservation available.
    SpecificReservation
    SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
    ReservationAffinityConsumeReservationTypeTypeUnspecified
    TYPE_UNSPECIFIED
    ReservationAffinityConsumeReservationTypeNoReservation
    NO_RESERVATION: Do not consume from any allocated capacity.
    ReservationAffinityConsumeReservationTypeAnyReservation
    ANY_RESERVATION: Consume any reservation available.
    ReservationAffinityConsumeReservationTypeSpecificReservation
    SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
    TypeUnspecified
    TYPE_UNSPECIFIED
    NoReservation
    NO_RESERVATION: Do not consume from any allocated capacity.
    AnyReservation
    ANY_RESERVATION: Consume any reservation available.
    SpecificReservation
    SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
    TypeUnspecified
    TYPE_UNSPECIFIED
    NoReservation
    NO_RESERVATION: Do not consume from any allocated capacity.
    AnyReservation
    ANY_RESERVATION: Consume any reservation available.
    SpecificReservation
    SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
    TYPE_UNSPECIFIED
    TYPE_UNSPECIFIED
    NO_RESERVATION
    NO_RESERVATION: Do not consume from any allocated capacity.
    ANY_RESERVATION
    ANY_RESERVATION: Consume any reservation available.
    SPECIFIC_RESERVATION
    SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
    "TYPE_UNSPECIFIED"
    TYPE_UNSPECIFIED
    "NO_RESERVATION"
    NO_RESERVATION: Do not consume from any allocated capacity.
    "ANY_RESERVATION"
    ANY_RESERVATION: Consume any reservation available.
    "SPECIFIC_RESERVATION"
    SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.

    ReservationAffinityResponse, ReservationAffinityResponseArgs

    ConsumeReservationType string
    Optional. Type of reservation to consume.
    Key string
    Optional. Corresponds to the label key of the reservation resource.
    Values List<string>
    Optional. Corresponds to the label values of the reservation resource.
    ConsumeReservationType string
    Optional. Type of reservation to consume.
    Key string
    Optional. Corresponds to the label key of the reservation resource.
    Values []string
    Optional. Corresponds to the label values of the reservation resource.
    consumeReservationType String
    Optional. Type of reservation to consume.
    key String
    Optional. Corresponds to the label key of the reservation resource.
    values List<String>
    Optional. Corresponds to the label values of the reservation resource.
    consumeReservationType string
    Optional. Type of reservation to consume.
    key string
    Optional. Corresponds to the label key of the reservation resource.
    values string[]
    Optional. Corresponds to the label values of the reservation resource.
    consume_reservation_type str
    Optional. Type of reservation to consume.
    key str
    Optional. Corresponds to the label key of the reservation resource.
    values Sequence[str]
    Optional. Corresponds to the label values of the reservation resource.
    consumeReservationType String
    Optional. Type of reservation to consume.
    key String
    Optional. Corresponds to the label key of the reservation resource.
    values List<String>
    Optional. Corresponds to the label values of the reservation resource.

    SecurityConfig, SecurityConfigArgs

    IdentityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.IdentityConfig
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    KerberosConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KerberosConfig
    Optional. Kerberos related configuration.
    IdentityConfig IdentityConfig
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    KerberosConfig KerberosConfig
    Optional. Kerberos related configuration.
    identityConfig IdentityConfig
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    kerberosConfig KerberosConfig
    Optional. Kerberos related configuration.
    identityConfig IdentityConfig
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    kerberosConfig KerberosConfig
    Optional. Kerberos related configuration.
    identity_config IdentityConfig
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    kerberos_config KerberosConfig
    Optional. Kerberos related configuration.
    identityConfig Property Map
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    kerberosConfig Property Map
    Optional. Kerberos related configuration.
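
    As a hedged TypeScript sketch, a securityConfig might combine a Kerberos block with a secure multi-tenancy identity mapping. The KMS key, bucket path, user, and service account names are all hypothetical, and the nested kerberosConfig and identityConfig fields follow their own documented shapes.

    const securityConfig = {
        kerberosConfig: {
            enableKerberos: true,
            // KMS key used to decrypt the Kerberos secrets below (hypothetical resource name).
            kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
            // Cloud Storage URI of the KMS-encrypted root principal password (hypothetical path).
            rootPrincipalPasswordUri: "gs://my-bucket/kerberos/root-password.encrypted",
        },
        identityConfig: {
            // Maps end users to the service accounts they operate as on the cluster.
            userServiceAccountMapping: {
                "alice@example.com": "alice-sa@my-project.iam.gserviceaccount.com",
            },
        },
    };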

    SecurityConfigResponse, SecurityConfigResponseArgs

    IdentityConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.IdentityConfigResponse
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    KerberosConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.KerberosConfigResponse
    Optional. Kerberos related configuration.
    IdentityConfig IdentityConfigResponse
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    KerberosConfig KerberosConfigResponse
    Optional. Kerberos related configuration.
    identityConfig IdentityConfigResponse
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    kerberosConfig KerberosConfigResponse
    Optional. Kerberos related configuration.
    identityConfig IdentityConfigResponse
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    kerberosConfig KerberosConfigResponse
    Optional. Kerberos related configuration.
    identity_config IdentityConfigResponse
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    kerberos_config KerberosConfigResponse
    Optional. Kerberos related configuration.
    identityConfig Property Map
    Optional. Identity related configuration, including service account based secure multi-tenancy user mappings.
    kerberosConfig Property Map
    Optional. Kerberos related configuration.

    ShieldedInstanceConfig, ShieldedInstanceConfigArgs

    EnableIntegrityMonitoring bool
    Optional. Defines whether instances have integrity monitoring enabled.
    EnableSecureBoot bool
    Optional. Defines whether instances have Secure Boot enabled.
    EnableVtpm bool
    Optional. Defines whether instances have the vTPM enabled.
    EnableIntegrityMonitoring bool
    Optional. Defines whether instances have integrity monitoring enabled.
    EnableSecureBoot bool
    Optional. Defines whether instances have Secure Boot enabled.
    EnableVtpm bool
    Optional. Defines whether instances have the vTPM enabled.
    enableIntegrityMonitoring Boolean
    Optional. Defines whether instances have integrity monitoring enabled.
    enableSecureBoot Boolean
    Optional. Defines whether instances have Secure Boot enabled.
    enableVtpm Boolean
    Optional. Defines whether instances have the vTPM enabled.
    enableIntegrityMonitoring boolean
    Optional. Defines whether instances have integrity monitoring enabled.
    enableSecureBoot boolean
    Optional. Defines whether instances have Secure Boot enabled.
    enableVtpm boolean
    Optional. Defines whether instances have the vTPM enabled.
    enable_integrity_monitoring bool
    Optional. Defines whether instances have integrity monitoring enabled.
    enable_secure_boot bool
    Optional. Defines whether instances have Secure Boot enabled.
    enable_vtpm bool
    Optional. Defines whether instances have the vTPM enabled.
    enableIntegrityMonitoring Boolean
    Optional. Defines whether instances have integrity monitoring enabled.
    enableSecureBoot Boolean
    Optional. Defines whether instances have Secure Boot enabled.
    enableVtpm Boolean
    Optional. Defines whether instances have the vTPM enabled.
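
    For example, a shieldedInstanceConfig enabling all three Shielded VM features could be written like this in TypeScript; each toggle is an independent boolean.

    const shieldedInstanceConfig = {
        enableSecureBoot: true,            // verify boot integrity of the instance
        enableVtpm: true,                  // expose a virtual Trusted Platform Module
        enableIntegrityMonitoring: true,   // compare boot measurements against a baseline
    };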

    ShieldedInstanceConfigResponse, ShieldedInstanceConfigResponseArgs

    EnableIntegrityMonitoring bool
    Optional. Defines whether instances have integrity monitoring enabled.
    EnableSecureBoot bool
    Optional. Defines whether instances have Secure Boot enabled.
    EnableVtpm bool
    Optional. Defines whether instances have the vTPM enabled.
    EnableIntegrityMonitoring bool
    Optional. Defines whether instances have integrity monitoring enabled.
    EnableSecureBoot bool
    Optional. Defines whether instances have Secure Boot enabled.
    EnableVtpm bool
    Optional. Defines whether instances have the vTPM enabled.
    enableIntegrityMonitoring Boolean
    Optional. Defines whether instances have integrity monitoring enabled.
    enableSecureBoot Boolean
    Optional. Defines whether instances have Secure Boot enabled.
    enableVtpm Boolean
    Optional. Defines whether instances have the vTPM enabled.
    enableIntegrityMonitoring boolean
    Optional. Defines whether instances have integrity monitoring enabled.
    enableSecureBoot boolean
    Optional. Defines whether instances have Secure Boot enabled.
    enableVtpm boolean
    Optional. Defines whether instances have the vTPM enabled.
    enable_integrity_monitoring bool
    Optional. Defines whether instances have integrity monitoring enabled.
    enable_secure_boot bool
    Optional. Defines whether instances have Secure Boot enabled.
    enable_vtpm bool
    Optional. Defines whether instances have the vTPM enabled.
    enableIntegrityMonitoring Boolean
    Optional. Defines whether instances have integrity monitoring enabled.
    enableSecureBoot Boolean
    Optional. Defines whether instances have Secure Boot enabled.
    enableVtpm Boolean
    Optional. Defines whether instances have the vTPM enabled.

    SoftwareConfig, SoftwareConfigArgs

    ImageVersion string
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    OptionalComponents List<Pulumi.GoogleNative.Dataproc.V1.SoftwareConfigOptionalComponentsItem>
    Optional. The set of components to activate on the cluster.
    Properties Dictionary<string, string>
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
    ImageVersion string
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    OptionalComponents []SoftwareConfigOptionalComponentsItem
    Optional. The set of components to activate on the cluster.
    Properties map[string]string
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
    imageVersion String
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    optionalComponents List<SoftwareConfigOptionalComponentsItem>
    Optional. The set of components to activate on the cluster.
    properties Map<String,String>
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
    imageVersion string
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    optionalComponents SoftwareConfigOptionalComponentsItem[]
    Optional. The set of components to activate on the cluster.
    properties {[key: string]: string}
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
    image_version str
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    optional_components Sequence[SoftwareConfigOptionalComponentsItem]
    Optional. The set of components to activate on the cluster.
    properties Mapping[str, str]
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
    imageVersion String
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    optionalComponents List<"COMPONENT_UNSPECIFIED" | "ANACONDA" | "DOCKER" | "DRUID" | "FLINK" | "HBASE" | "HIVE_WEBHCAT" | "HUDI" | "JUPYTER" | "PRESTO" | "TRINO" | "RANGER" | "SOLR" | "ZEPPELIN" | "ZOOKEEPER">
    Optional. The set of components to activate on the cluster.
    properties Map<String>
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
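
    A short TypeScript sketch of a softwareConfig: the image version, component list, and property overrides are hypothetical, and property keys use the prefix:property format described above.

    const softwareConfig = {
        imageVersion: "2.1-debian11",
        optionalComponents: ["JUPYTER", "ZOOKEEPER"],
        properties: {
            // prefix:property keys map onto the daemon config files listed above.
            "spark:spark.sql.shuffle.partitions": "200",
            "yarn:yarn.scheduler.maximum-allocation-mb": "16384",
        },
    };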

    SoftwareConfigOptionalComponentsItem, SoftwareConfigOptionalComponentsItemArgs

    ComponentUnspecified
    COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
    Anaconda
    ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
    Docker
    DOCKER: Docker.
    Druid
    DRUID: The Druid query engine. (alpha)
    Flink
    FLINK: Flink.
    Hbase
    HBASE: HBase. (beta)
    HiveWebhcat
    HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
    Hudi
    HUDI: Hudi.
    Jupyter
    JUPYTER: The Jupyter Notebook.
    Presto
    PRESTO: The Presto query engine.
    Trino
    TRINO: The Trino query engine.
    Ranger
    RANGER: The Ranger service.
    Solr
    SOLR: The Solr service.
    Zeppelin
    ZEPPELIN: The Zeppelin notebook.
    Zookeeper
    ZOOKEEPER: The Zookeeper service.
    SoftwareConfigOptionalComponentsItemComponentUnspecified
    COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
    SoftwareConfigOptionalComponentsItemAnaconda
    ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
    SoftwareConfigOptionalComponentsItemDocker
    DOCKER: Docker.
    SoftwareConfigOptionalComponentsItemDruid
    DRUID: The Druid query engine. (alpha)
    SoftwareConfigOptionalComponentsItemFlink
    FLINK: Flink.
    SoftwareConfigOptionalComponentsItemHbase
    HBASE: HBase. (beta)
    SoftwareConfigOptionalComponentsItemHiveWebhcat
    HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
    SoftwareConfigOptionalComponentsItemHudi
    HUDI: Hudi.
    SoftwareConfigOptionalComponentsItemJupyter
    JUPYTER: The Jupyter Notebook.
    SoftwareConfigOptionalComponentsItemPresto
    PRESTO: The Presto query engine.
    SoftwareConfigOptionalComponentsItemTrino
    TRINO: The Trino query engine.
    SoftwareConfigOptionalComponentsItemRanger
    RANGER: The Ranger service.
    SoftwareConfigOptionalComponentsItemSolr
    SOLR: The Solr service.
    SoftwareConfigOptionalComponentsItemZeppelin
    ZEPPELIN: The Zeppelin notebook.
    SoftwareConfigOptionalComponentsItemZookeeper
    ZOOKEEPER: The Zookeeper service.
    ComponentUnspecified
    COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
    Anaconda
    ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
    Docker
    DOCKER: Docker.
    Druid
    DRUID: The Druid query engine. (alpha)
    Flink
    FLINK: Flink.
    Hbase
    HBASE: HBase. (beta)
    HiveWebhcat
    HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
    Hudi
    HUDI: Hudi.
    Jupyter
    JUPYTER: The Jupyter Notebook.
    Presto
    PRESTO: The Presto query engine.
    Trino
    TRINO: The Trino query engine.
    Ranger
    RANGER: The Ranger service.
    Solr
    SOLR: The Solr service.
    Zeppelin
    ZEPPELIN: The Zeppelin notebook.
    Zookeeper
    ZOOKEEPER: The Zookeeper service.
    ComponentUnspecified
    COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
    Anaconda
    ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
    Docker
    DOCKER: Docker.
    Druid
    DRUID: The Druid query engine. (alpha)
    Flink
    FLINK: Flink.
    Hbase
    HBASE: HBase. (beta)
    HiveWebhcat
    HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
    Hudi
    HUDI: Hudi.
    Jupyter
    JUPYTER: The Jupyter Notebook.
    Presto
    PRESTO: The Presto query engine.
    Trino
    TRINO: The Trino query engine.
    Ranger
    RANGER: The Ranger service.
    Solr
    SOLR: The Solr service.
    Zeppelin
    ZEPPELIN: The Zeppelin notebook.
    Zookeeper
    ZOOKEEPER: The Zookeeper service.
    COMPONENT_UNSPECIFIED
    COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
    ANACONDA
    ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
    DOCKER
    DOCKER: Docker.
    DRUID
    DRUID: The Druid query engine. (alpha)
    FLINK
    FLINK: Flink.
    HBASE
    HBASE: HBase. (beta)
    HIVE_WEBHCAT
    HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
    HUDI
    HUDI: Hudi.
    JUPYTER
    JUPYTER: The Jupyter Notebook.
    PRESTO
    PRESTO: The Presto query engine.
    TRINO
    TRINO: The Trino query engine.
    RANGER
    RANGER: The Ranger service.
    SOLR
    SOLR: The Solr service.
    ZEPPELIN
    ZEPPELIN: The Zeppelin notebook.
    ZOOKEEPER
    ZOOKEEPER: The Zookeeper service.
    "COMPONENT_UNSPECIFIED"
    COMPONENT_UNSPECIFIEDUnspecified component. Specifying this will cause Cluster creation to fail.
    "ANACONDA"
    ANACONDAThe Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
    "DOCKER"
    DOCKERDocker
    "DRUID"
    DRUIDThe Druid query engine. (alpha)
    "FLINK"
    FLINKFlink
    "HBASE"
    HBASEHBase. (beta)
    "HIVE_WEBHCAT"
    HIVE_WEBHCATThe Hive Web HCatalog (the REST service for accessing HCatalog).
    "HUDI"
    HUDIHudi.
    "JUPYTER"
    JUPYTERThe Jupyter Notebook.
    "PRESTO"
    PRESTOThe Presto query engine.
    "TRINO"
    TRINOThe Trino query engine.
    "RANGER"
    RANGERThe Ranger service.
    "SOLR"
    SOLRThe Solr service.
    "ZEPPELIN"
    ZEPPELINThe Zeppelin notebook.
    "ZOOKEEPER"
    ZOOKEEPERThe Zookeeper service.

    SoftwareConfigResponse, SoftwareConfigResponseArgs

    ImageVersion string
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    OptionalComponents List<string>
    Optional. The set of components to activate on the cluster.
    Properties Dictionary<string, string>
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
    ImageVersion string
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    OptionalComponents []string
    Optional. The set of components to activate on the cluster.
    Properties map[string]string
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
    imageVersion String
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    optionalComponents List<String>
    Optional. The set of components to activate on the cluster.
    properties Map<String,String>
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
    imageVersion string
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    optionalComponents string[]
    Optional. The set of components to activate on the cluster.
    properties {[key: string]: string}
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
    image_version str
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    optional_components Sequence[str]
    Optional. The set of components to activate on the cluster.
    properties Mapping[str, str]
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
    imageVersion String
    Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
    optionalComponents List<String>
    Optional. The set of components to activate on the cluster.
    properties Map<String>
    Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).

    SparkJob, SparkJobArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    main_jar_file_uri str
    The HCFS URI of the jar file that contains the main class.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
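
    With the SparkJob fields above in mind, a minimal TypeScript sketch of a workflow template whose single step runs a Spark driver might look like the following. The template id, bucket paths, class name, and cluster labels are illustrative placeholders, and a clusterSelector placement is used only to keep the example short; a managedCluster placement works the same way.

    import * as google_native from "@pulumi/google-native";

    // Sketch: one Spark step inside a workflow template. All URIs, labels,
    // and the template id below are placeholders.
    const sparkTemplate = new google_native.dataproc.v1.WorkflowTemplate("spark-template", {
        id: "spark-wordcount",
        location: "us-central1",
        placement: {
            // Run the workflow on an existing cluster selected by label.
            clusterSelector: { clusterLabels: { env: "staging" } },
        },
        jobs: [{
            stepId: "wordcount",
            sparkJob: {
                mainClass: "org.example.WordCount",                  // must be on the CLASSPATH or in jarFileUris
                jarFileUris: ["gs://my-bucket/jars/wordcount.jar"],
                args: ["gs://my-bucket/input/", "gs://my-bucket/output/"],
                properties: { "spark.executor.memory": "4g" },
            },
        }],
    });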

    SparkJobResponse, SparkJobResponseArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris List<string>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    JarFileUris []string
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    MainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris string[]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainClass string
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri string
    The HCFS URI of the jar file that contains the main class.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    main_class str
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    main_jar_file_uri str
    The HCFS URI of the jar file that contains the main class.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainClass String
    The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.
    mainJarFileUri String
    The HCFS URI of the jar file that contains the main class.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
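
    The SparkJobResponse fields are read back from the service rather than set by you; on the created resource they surface through the jobs output property. A small sketch of inspecting one such field, assuming the sparkTemplate resource from the SparkJob example above:

    // Sketch: read a resolved SparkJobResponse field off the template's jobs output.
    export const firstSparkMainClass = sparkTemplate.jobs.apply(
        jobs => jobs?.[0]?.sparkJob?.mainClass,
    );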

    SparkRJob, SparkRJobArgs

    MainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    MainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    mainRFileUri String
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    mainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    main_r_file_uri str
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    mainRFileUri String
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
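
    A SparkRJob slots into a workflow step the same way, with mainRFileUri pointing at the R driver. A minimal TypeScript sketch; the bucket, file names, and template id are placeholders:

    import * as google_native from "@pulumi/google-native";

    // Sketch: a workflow step that runs a SparkR driver script.
    const sparkRTemplate = new google_native.dataproc.v1.WorkflowTemplate("sparkr-template", {
        id: "sparkr-analysis",
        location: "us-central1",
        placement: { clusterSelector: { clusterLabels: { env: "staging" } } },
        jobs: [{
            stepId: "r-analysis",
            sparkRJob: {
                mainRFileUri: "gs://my-bucket/scripts/analysis.R",   // must be a .R file
                args: ["--input=gs://my-bucket/data.csv"],
                fileUris: ["gs://my-bucket/lookup.csv"],             // placed in each executor's working directory
                properties: { "spark.executor.memory": "2g" },
            },
        }],
    });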

    SparkRJobResponse, SparkRJobResponseArgs

    ArchiveUris List<string>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args List<string>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris List<string>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    ArchiveUris []string
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    Args []string
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    FileUris []string
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    MainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainRFileUri String
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris string[]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args string[]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris string[]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    mainRFileUri string
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archive_uris Sequence[str]
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args Sequence[str]
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    file_uris Sequence[str]
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    main_r_file_uri str
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
    archiveUris List<String>
    Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
    args List<String>
    Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
    fileUris List<String>
    Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    mainRFileUri String
    The HCFS URI of the main R file to use as the driver. Must be a .R file.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.

    SparkSqlJob, SparkSqlJobArgs

    JarFileUris List<string>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryList
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    JarFileUris []string
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryList
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris string[]
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryList
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
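
    For a SparkSqlJob step, either queryFileUri or an inline queryList supplies the SQL, and scriptVariables behaves like SET name="value" statements. A hedged TypeScript sketch with placeholder query text and values:

    import * as google_native from "@pulumi/google-native";

    // Sketch: a workflow step running inline Spark SQL with a substituted variable.
    const sqlTemplate = new google_native.dataproc.v1.WorkflowTemplate("sparksql-template", {
        id: "daily-report",
        location: "us-central1",
        placement: { clusterSelector: { clusterLabels: { env: "staging" } } },
        jobs: [{
            stepId: "report",
            sparkSqlJob: {
                // queryList and queryFileUri are alternatives; set only one of them.
                queryList: { queries: ["SELECT ${column} FROM events LIMIT 100"] },
                scriptVariables: { column: "user_id" },              // equivalent to: SET column="user_id";
                properties: { "spark.sql.shuffle.partitions": "64" },
            },
        }],
    });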

    SparkSqlJobResponse, SparkSqlJobResponseArgs

    JarFileUris List<string>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryListResponse
    A list of queries.
    ScriptVariables Dictionary<string, string>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    JarFileUris []string
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    Properties map[string]string
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryListResponse
    A list of queries.
    ScriptVariables map[string]string
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties Map<String,String>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables Map<String,String>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris string[]
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties {[key: string]: string}
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    scriptVariables {[key: string]: string}
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jar_file_uris Sequence[str]
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    properties Mapping[str, str]
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryListResponse
    A list of queries.
    script_variables Mapping[str, str]
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
    jarFileUris List<String>
    Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    properties Map<String>
    Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API might be overwritten.
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.
    scriptVariables Map<String>
    Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

    StartupConfig, StartupConfigArgs

    RequiredRegistrationFraction double
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
    RequiredRegistrationFraction float64
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
    requiredRegistrationFraction Double
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
    requiredRegistrationFraction number
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
    required_registration_fraction float
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
    requiredRegistrationFraction Number
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
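
    Because the description above notes that this setting currently applies only to secondary workers, the sketch below attaches a StartupConfig to the managed cluster's secondaryWorkerConfig; that nesting path, along with the machine counts and template id, is an assumption made for illustration, and the rest of the cluster config is omitted for brevity.

    import * as google_native from "@pulumi/google-native";

    // Sketch: require 75% of secondary workers to register before the workflow's
    // cluster creation is considered successful. Counts and ids are placeholders.
    const gatedTemplate = new google_native.dataproc.v1.WorkflowTemplate("gated-template", {
        id: "gated-etl",
        location: "us-central1",
        placement: {
            managedCluster: {
                clusterName: "gated-etl-cluster",
                config: {
                    secondaryWorkerConfig: {
                        numInstances: 8,
                        startupConfig: { requiredRegistrationFraction: 0.75 },
                    },
                },
            },
        },
        jobs: [{
            stepId: "noop",
            sparkSqlJob: { queryList: { queries: ["SELECT 1"] } },
        }],
    });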

    StartupConfigResponse, StartupConfigResponseArgs

    RequiredRegistrationFraction double
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
    RequiredRegistrationFraction float64
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
    requiredRegistrationFraction Double
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
    requiredRegistrationFraction number
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
    required_registration_fraction float
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).
    requiredRegistrationFraction Number
    Optional. The config setting that allows cluster creation/update to succeed only after required_registration_fraction of instances are up and running. This configuration currently applies only to secondary workers. The cluster will fail if required_registration_fraction of instances are not available. This includes instance creation, agent registration, and service registration (if enabled).

    TemplateParameter, TemplateParameterArgs

    Fields List<string>
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    Name string
    Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    Description string
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    Validation Pulumi.GoogleNative.Dataproc.V1.Inputs.ParameterValidation
    Optional. Validation rules to be applied to this parameter's value.
    Fields []string
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    Name string
    Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    Description string
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    Validation ParameterValidation
    Optional. Validation rules to be applied to this parameter's value.
    fields List<String>
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    name String
    Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    description String
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    validation ParameterValidation
    Optional. Validation rules to be applied to this parameter's value.
    fields string[]
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    name string
    Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    description string
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    validation ParameterValidation
    Optional. Validation rules to be applied to this parameter's value.
    fields Sequence[str]
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    name str
    Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    description str
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    validation ParameterValidation
    Optional. Validation rules to be applied to this parameter's value.
    fields List<String>
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    name String
    Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    description String
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    validation Property Map
    Optional. Validation rules to be applied to this parameter's value.
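
    Putting TemplateParameter together with the field-path syntax described above, the sketch below parameterizes the main jar of a single Spark step so the value can be supplied at instantiation time. The parameter name, field path, and regex are illustrative, and the validation block uses the regex-based rule documented for ParameterValidation:

    import * as google_native from "@pulumi/google-native";

    // Sketch: expose the ingest step's main jar URI as a template parameter.
    const paramTemplate = new google_native.dataproc.v1.WorkflowTemplate("param-template", {
        id: "parameterized-ingest",
        location: "us-central1",
        placement: { clusterSelector: { clusterLabels: { env: "staging" } } },
        jobs: [{
            stepId: "ingest",
            sparkJob: { mainJarFileUri: "gs://my-bucket/jars/ingest.jar" },
        }],
        parameters: [{
            name: "MAIN_JAR",                                        // capital letters, digits, and underscores only
            description: "HCFS URI of the ingest driver jar",
            fields: ["jobs['ingest'].sparkJob.mainJarFileUri"],
            validation: {
                regex: { regexes: ["gs://.+\\.jar"] },               // value must look like a GCS jar URI
            },
        }],
    });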

    TemplateParameterResponse, TemplateParameterResponseArgs

    Description string
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    Fields List<string>
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    Name string
    Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    Validation Pulumi.GoogleNative.Dataproc.V1.Inputs.ParameterValidationResponse
    Optional. Validation rules to be applied to this parameter's value.
    Description string
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    Fields []string
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    Name string
    Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    Validation ParameterValidationResponse
    Optional. Validation rules to be applied to this parameter's value.
    description String
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    fields List<String>
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    name String
    Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    validation ParameterValidationResponse
    Optional. Validation rules to be applied to this parameter's value.
    description string
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    fields string[]
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] placement.clusterSelector.clusterLabels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    name string
    Parameter name. The parameter name is used as the key and, together with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    validation ParameterValidationResponse
    Optional. Validation rules to be applied to this parameter's value.
    description str
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    fields Sequence[str]
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    name str
    Parameter name. The parameter name is used as the key and, together with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    validation ParameterValidationResponse
    Optional. Validation rules to be applied to this parameter's value.
    description String
    Optional. Brief description of the parameter. Must not exceed 1024 characters.
    fields List<String>
    Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Also, field paths can reference fields using the following syntax: Values in maps can be referenced by key: labels['key'] placement.clusterSelector.clusterLabels['key'] placement.managedCluster.labels['key'] jobs['step-id'].labels['key'] Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri jobs['step-id'].hiveJob.queryFileUri jobs['step-id'].pySparkJob.mainPythonFileUri jobs['step-id'].hadoopJob.jarFileUris[0] jobs['step-id'].hadoopJob.archiveUris[0] jobs['step-id'].hadoopJob.fileUris[0] jobs['step-id'].pySparkJob.pythonFileUris[0] Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0] Other examples: jobs['step-id'].hadoopJob.properties['key'] jobs['step-id'].hadoopJob.args[0] jobs['step-id'].hiveJob.scriptVariables['key'] jobs['step-id'].hadoopJob.mainJarFileUri placement.clusterSelector.zone It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels jobs['step-id'].sparkJob.args
    name String
    Parameter name. The parameter name is used as the key and, together with the parameter value, is passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
    validation Property Map
    Optional. Validation rules to be applied to this parameter's value.
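
    The field path syntax above can be exercised by declaring parameters next to the jobs and placement they substitute into. The following is a minimal Python sketch assuming the pulumi_google_native SDK documented on this page; the resource name, step id, labels, jar URI, and zone are illustrative placeholders, and nested inputs are passed as plain dicts rather than typed args classes.

    import pulumi_google_native.dataproc.v1 as dataproc

    # Hypothetical template: the cluster selector's zone and one Hadoop argument
    # are substituted at instantiation time through the field paths below.
    template = dataproc.WorkflowTemplate(
        "example-template",
        placement={
            "cluster_selector": {
                "zone": "us-central1-a",  # placeholder; replaced via the ZONE parameter
                "cluster_labels": {"env": "test"},
            },
        },
        jobs=[{
            "step_id": "teragen",  # addressed as jobs['teragen'] in field paths
            "hadoop_job": {
                "main_jar_file_uri": "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
                "args": ["teragen", "1000", "hdfs:///gen/"],
            },
        }],
        parameters=[
            {
                "name": "ZONE",
                "description": "Zone the workflow runs in.",
                "fields": ["placement.clusterSelector.zone"],
            },
            {
                "name": "ROW_COUNT",
                # Repeated fields are addressed by zero-based index.
                "fields": ["jobs['teragen'].hadoopJob.args[1]"],
            },
        ],
    )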

    TrinoJob, TrinoJobArgs

    ClientTags List<string>
    Optional. Trino client tags to attach to this query
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfig
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryList
    A list of queries.
    ClientTags []string
    Optional. Trino client tags to attach to this query
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    Properties map[string]string
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryList
    A list of queries.
    clientTags List<String>
    Optional. Trino client tags to attach to this query
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    properties Map<String,String>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    clientTags string[]
    Optional. Trino client tags to attach to this query
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfig
    Optional. The runtime log config for job execution.
    outputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    properties {[key: string]: string}
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryList
    A list of queries.
    client_tags Sequence[str]
    Optional. Trino client tags to attach to this query
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    logging_config LoggingConfig
    Optional. The runtime log config for job execution.
    output_format str
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    properties Mapping[str, str]
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryList
    A list of queries.
    clientTags List<String>
    Optional. Trino client tags to attach to this query
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    properties Map<String>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.
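
    As a rough sketch of how a TrinoJob fits into a template's jobs list (Python, nested inputs as plain dicts; the step id, labels, query text, and session property are made-up placeholders, not values taken from this page):

    import pulumi_google_native.dataproc.v1 as dataproc

    trino_template = dataproc.WorkflowTemplate(
        "trino-template",
        placement={"cluster_selector": {"cluster_labels": {"workload": "trino"}}},
        jobs=[{
            "step_id": "trino-report",
            "trino_job": {
                # The queries come from either query_file_uri or an inline query_list.
                "query_list": {"queries": ["SELECT count(*) FROM tpch.tiny.orders"]},
                "continue_on_failure": False,
                "output_format": "CSV",
                "client_tags": ["nightly", "reporting"],
                # Session properties, i.e. what --session sets in the Trino CLI.
                "properties": {"query_max_run_time": "30m"},
                "logging_config": {"driver_log_levels": {"root": "INFO"}},
            },
        }],
    )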

    TrinoJobResponse, TrinoJobResponseArgs

    ClientTags List<string>
    Optional. Trino client tags to attach to this query
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.LoggingConfigResponse
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    Properties Dictionary<string, string>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList Pulumi.GoogleNative.Dataproc.V1.Inputs.QueryListResponse
    A list of queries.
    ClientTags []string
    Optional. Trino client tags to attach to this query
    ContinueOnFailure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    LoggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    OutputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    Properties map[string]string
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    QueryFileUri string
    The HCFS URI of the script that contains SQL queries.
    QueryList QueryListResponse
    A list of queries.
    clientTags List<String>
    Optional. Trino client tags to attach to this query
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    properties Map<String,String>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    clientTags string[]
    Optional. Trino client tags to attach to this query
    continueOnFailure boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig LoggingConfigResponse
    Optional. The runtime log config for job execution.
    outputFormat string
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    properties {[key: string]: string}
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    queryFileUri string
    The HCFS URI of the script that contains SQL queries.
    queryList QueryListResponse
    A list of queries.
    client_tags Sequence[str]
    Optional. Trino client tags to attach to this query
    continue_on_failure bool
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    logging_config LoggingConfigResponse
    Optional. The runtime log config for job execution.
    output_format str
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    properties Mapping[str, str]
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    query_file_uri str
    The HCFS URI of the script that contains SQL queries.
    query_list QueryListResponse
    A list of queries.
    clientTags List<String>
    Optional. Trino client tags to attach to this query
    continueOnFailure Boolean
    Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
    loggingConfig Property Map
    Optional. The runtime log config for job execution.
    outputFormat String
    Optional. The format in which query output will be displayed. See the Trino documentation for supported output formats
    properties Map<String>
    Optional. A mapping of property names to values. Used to set Trino session properties (https://trino.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Trino CLI
    queryFileUri String
    The HCFS URI of the script that contains SQL queries.
    queryList Property Map
    A list of queries.

    ValueValidation, ValueValidationArgs

    Values List<string>
    List of allowed values for the parameter.
    Values []string
    List of allowed values for the parameter.
    values List<String>
    List of allowed values for the parameter.
    values string[]
    List of allowed values for the parameter.
    values Sequence[str]
    List of allowed values for the parameter.
    values List<String>
    List of allowed values for the parameter.
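
    For illustration, a parameter constrained to an explicit allow-list might look like the Python fragment below; the parameter name, step id, and values are placeholders, and the dict would be passed in the template's parameters list.

    # ValueValidation: the parameter may only take one of the listed values.
    output_format_param = {
        "name": "OUTPUT_FORMAT",
        "fields": ["jobs['trino-report'].trinoJob.outputFormat"],
        "validation": {
            "values": {"values": ["CSV", "TSV", "JSON"]},
        },
    }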

    ValueValidationResponse, ValueValidationResponseArgs

    Values List<string>
    List of allowed values for the parameter.
    Values []string
    List of allowed values for the parameter.
    values List<String>
    List of allowed values for the parameter.
    values string[]
    List of allowed values for the parameter.
    values Sequence[str]
    List of allowed values for the parameter.
    values List<String>
    List of allowed values for the parameter.

    WorkflowTemplatePlacement, WorkflowTemplatePlacementArgs

    ClusterSelector Pulumi.GoogleNative.Dataproc.V1.Inputs.ClusterSelector
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    ManagedCluster Pulumi.GoogleNative.Dataproc.V1.Inputs.ManagedCluster
    A cluster that is managed by the workflow.
    ClusterSelector ClusterSelector
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    ManagedCluster ManagedCluster
    A cluster that is managed by the workflow.
    clusterSelector ClusterSelector
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    managedCluster ManagedCluster
    A cluster that is managed by the workflow.
    clusterSelector ClusterSelector
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    managedCluster ManagedCluster
    A cluster that is managed by the workflow.
    cluster_selector ClusterSelector
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    managed_cluster ManagedCluster
    A cluster that is managed by the workflow.
    clusterSelector Property Map
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    managedCluster Property Map
    A cluster that is managed by the workflow.
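
    The two placement shapes serve different workflows: clusterSelector routes jobs to an existing cluster chosen by label, while managedCluster describes a cluster that is created for the workflow run and torn down afterwards. A rough sketch of both as Python dicts (labels, cluster name, zone, and machine types are placeholders):

    # Option 1: select an existing cluster by label.
    selector_placement = {
        "cluster_selector": {"cluster_labels": {"env": "staging"}},
    }

    # Option 2: let the workflow create and later delete its own cluster.
    managed_placement = {
        "managed_cluster": {
            "cluster_name": "ephemeral-workflow-cluster",
            "config": {
                "gce_cluster_config": {"zone_uri": "us-central1-a"},
                "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
                "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
            },
        },
    }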

    WorkflowTemplatePlacementResponse, WorkflowTemplatePlacementResponseArgs

    ClusterSelector Pulumi.GoogleNative.Dataproc.V1.Inputs.ClusterSelectorResponse
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    ManagedCluster Pulumi.GoogleNative.Dataproc.V1.Inputs.ManagedClusterResponse
    A cluster that is managed by the workflow.
    ClusterSelector ClusterSelectorResponse
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    ManagedCluster ManagedClusterResponse
    A cluster that is managed by the workflow.
    clusterSelector ClusterSelectorResponse
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    managedCluster ManagedClusterResponse
    A cluster that is managed by the workflow.
    clusterSelector ClusterSelectorResponse
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    managedCluster ManagedClusterResponse
    A cluster that is managed by the workflow.
    cluster_selector ClusterSelectorResponse
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    managed_cluster ManagedClusterResponse
    A cluster that is managed by the workflow.
    clusterSelector Property Map
    Optional. A selector that chooses the target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
    managedCluster Property Map
    A cluster that is managed by the workflow.

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0