
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.dataflow/v1b3.Job


    Creates a Cloud Dataflow job. To create a job, we recommend using projects.locations.jobs.create with a [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints). Using projects.jobs.create is not recommended, because your job will always start in us-central1. Do not enter confidential information when you supply string values using the API. Note: this resource's API doesn't support deletion. When deleted, the resource will persist on Google Cloud even though it will be removed from Pulumi state.

    Create Job Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new Job(name: string, args?: JobArgs, opts?: CustomResourceOptions);
    @overload
    def Job(resource_name: str,
            args: Optional[JobArgs] = None,
            opts: Optional[ResourceOptions] = None)
    
    @overload
    def Job(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            client_request_id: Optional[str] = None,
            create_time: Optional[str] = None,
            created_from_snapshot_id: Optional[str] = None,
            current_state: Optional[JobCurrentState] = None,
            current_state_time: Optional[str] = None,
            environment: Optional[EnvironmentArgs] = None,
            execution_info: Optional[JobExecutionInfoArgs] = None,
            id: Optional[str] = None,
            job_metadata: Optional[JobMetadataArgs] = None,
            labels: Optional[Mapping[str, str]] = None,
            location: Optional[str] = None,
            name: Optional[str] = None,
            pipeline_description: Optional[PipelineDescriptionArgs] = None,
            project: Optional[str] = None,
            replace_job_id: Optional[str] = None,
            replaced_by_job_id: Optional[str] = None,
            requested_state: Optional[JobRequestedState] = None,
            runtime_updatable_params: Optional[RuntimeUpdatableParamsArgs] = None,
            satisfies_pzs: Optional[bool] = None,
            stage_states: Optional[Sequence[ExecutionStageStateArgs]] = None,
            start_time: Optional[str] = None,
            steps: Optional[Sequence[StepArgs]] = None,
            steps_location: Optional[str] = None,
            temp_files: Optional[Sequence[str]] = None,
            transform_name_mapping: Optional[Mapping[str, str]] = None,
            type: Optional[JobType] = None,
            view: Optional[str] = None)
    func NewJob(ctx *Context, name string, args *JobArgs, opts ...ResourceOption) (*Job, error)
    public Job(string name, JobArgs? args = null, CustomResourceOptions? opts = null)
    public Job(String name, JobArgs args)
    public Job(String name, JobArgs args, CustomResourceOptions options)
    
    type: google-native:dataflow/v1b3:Job
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    
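    The YAML skeleton above can be filled in with a minimal configuration. The sketch below is illustrative only: the resource name, project ID, and region are assumptions, and most properties are optional. Setting location to a regional endpoint avoids the job defaulting to us-central1, as noted in the description above.

    ```yaml
    # Illustrative sketch: resource name, project, and location are assumed values.
    resources:
      myDataflowJob:
        type: google-native:dataflow/v1b3:Job
        properties:
          project: my-gcp-project   # assumed project ID
          location: us-east1        # regional endpoint; avoids defaulting to us-central1
          name: my-batch-job
          type: JOB_TYPE_BATCH
    ```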

    Parameters

    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args JobArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Constructor example

    The following reference example uses placeholder values for all input properties.

    var examplejobResourceResourceFromDataflowv1b3 = new GoogleNative.Dataflow.V1b3.Job("examplejobResourceResourceFromDataflowv1b3", new()
    {
        ClientRequestId = "string",
        CreateTime = "string",
        CreatedFromSnapshotId = "string",
        CurrentState = GoogleNative.Dataflow.V1b3.JobCurrentState.JobStateUnknown,
        CurrentStateTime = "string",
        Environment = new GoogleNative.Dataflow.V1b3.Inputs.EnvironmentArgs
        {
            ClusterManagerApiService = "string",
            Dataset = "string",
            DebugOptions = new GoogleNative.Dataflow.V1b3.Inputs.DebugOptionsArgs
            {
                DataSampling = new GoogleNative.Dataflow.V1b3.Inputs.DataSamplingConfigArgs
                {
                    Behaviors = new[]
                    {
                        GoogleNative.Dataflow.V1b3.DataSamplingConfigBehaviorsItem.DataSamplingBehaviorUnspecified,
                    },
                },
                EnableHotKeyLogging = false,
            },
            Experiments = new[]
            {
                "string",
            },
            FlexResourceSchedulingGoal = GoogleNative.Dataflow.V1b3.EnvironmentFlexResourceSchedulingGoal.FlexrsUnspecified,
            InternalExperiments = 
            {
                { "string", "string" },
            },
            SdkPipelineOptions = 
            {
                { "string", "string" },
            },
            ServiceAccountEmail = "string",
            ServiceKmsKeyName = "string",
            ServiceOptions = new[]
            {
                "string",
            },
            TempStoragePrefix = "string",
            UserAgent = 
            {
                { "string", "string" },
            },
            Version = 
            {
                { "string", "string" },
            },
            WorkerPools = new[]
            {
                new GoogleNative.Dataflow.V1b3.Inputs.WorkerPoolArgs
                {
                    Network = "string",
                    DiskType = "string",
                    NumThreadsPerWorker = 0,
                    OnHostMaintenance = "string",
                    NumWorkers = 0,
                    IpConfiguration = GoogleNative.Dataflow.V1b3.WorkerPoolIpConfiguration.WorkerIpUnspecified,
                    Kind = "string",
                    MachineType = "string",
                    Metadata = 
                    {
                        { "string", "string" },
                    },
                    AutoscalingSettings = new GoogleNative.Dataflow.V1b3.Inputs.AutoscalingSettingsArgs
                    {
                        Algorithm = GoogleNative.Dataflow.V1b3.AutoscalingSettingsAlgorithm.AutoscalingAlgorithmUnknown,
                        MaxNumWorkers = 0,
                    },
                    DiskSizeGb = 0,
                    DefaultPackageSet = GoogleNative.Dataflow.V1b3.WorkerPoolDefaultPackageSet.DefaultPackageSetUnknown,
                    DiskSourceImage = "string",
                    Packages = new[]
                    {
                        new GoogleNative.Dataflow.V1b3.Inputs.PackageArgs
                        {
                            Location = "string",
                            Name = "string",
                        },
                    },
                    PoolArgs = 
                    {
                        { "string", "string" },
                    },
                    SdkHarnessContainerImages = new[]
                    {
                        new GoogleNative.Dataflow.V1b3.Inputs.SdkHarnessContainerImageArgs
                        {
                            Capabilities = new[]
                            {
                                "string",
                            },
                            ContainerImage = "string",
                            EnvironmentId = "string",
                            UseSingleCorePerContainer = false,
                        },
                    },
                    Subnetwork = "string",
                    TaskrunnerSettings = new GoogleNative.Dataflow.V1b3.Inputs.TaskRunnerSettingsArgs
                    {
                        Alsologtostderr = false,
                        BaseTaskDir = "string",
                        BaseUrl = "string",
                        CommandlinesFileName = "string",
                        ContinueOnException = false,
                        DataflowApiVersion = "string",
                        HarnessCommand = "string",
                        LanguageHint = "string",
                        LogDir = "string",
                        LogToSerialconsole = false,
                        LogUploadLocation = "string",
                        OauthScopes = new[]
                        {
                            "string",
                        },
                        ParallelWorkerSettings = new GoogleNative.Dataflow.V1b3.Inputs.WorkerSettingsArgs
                        {
                            BaseUrl = "string",
                            ReportingEnabled = false,
                            ServicePath = "string",
                            ShuffleServicePath = "string",
                            TempStoragePrefix = "string",
                            WorkerId = "string",
                        },
                        StreamingWorkerMainClass = "string",
                        TaskGroup = "string",
                        TaskUser = "string",
                        TempStoragePrefix = "string",
                        VmId = "string",
                        WorkflowFileName = "string",
                    },
                    TeardownPolicy = GoogleNative.Dataflow.V1b3.WorkerPoolTeardownPolicy.TeardownPolicyUnknown,
                    DataDisks = new[]
                    {
                        new GoogleNative.Dataflow.V1b3.Inputs.DiskArgs
                        {
                            DiskType = "string",
                            MountPoint = "string",
                            SizeGb = 0,
                        },
                    },
                    Zone = "string",
                },
            },
            WorkerRegion = "string",
            WorkerZone = "string",
        },
        Id = "string",
        JobMetadata = new GoogleNative.Dataflow.V1b3.Inputs.JobMetadataArgs
        {
            BigTableDetails = new[]
            {
                new GoogleNative.Dataflow.V1b3.Inputs.BigTableIODetailsArgs
                {
                    InstanceId = "string",
                    Project = "string",
                    TableId = "string",
                },
            },
            BigqueryDetails = new[]
            {
                new GoogleNative.Dataflow.V1b3.Inputs.BigQueryIODetailsArgs
                {
                    Dataset = "string",
                    Project = "string",
                    Query = "string",
                    Table = "string",
                },
            },
            DatastoreDetails = new[]
            {
                new GoogleNative.Dataflow.V1b3.Inputs.DatastoreIODetailsArgs
                {
                    Namespace = "string",
                    Project = "string",
                },
            },
            FileDetails = new[]
            {
                new GoogleNative.Dataflow.V1b3.Inputs.FileIODetailsArgs
                {
                    FilePattern = "string",
                },
            },
            PubsubDetails = new[]
            {
                new GoogleNative.Dataflow.V1b3.Inputs.PubSubIODetailsArgs
                {
                    Subscription = "string",
                    Topic = "string",
                },
            },
            SdkVersion = new GoogleNative.Dataflow.V1b3.Inputs.SdkVersionArgs
            {
                SdkSupportStatus = GoogleNative.Dataflow.V1b3.SdkVersionSdkSupportStatus.Unknown,
                Version = "string",
                VersionDisplayName = "string",
            },
            SpannerDetails = new[]
            {
                new GoogleNative.Dataflow.V1b3.Inputs.SpannerIODetailsArgs
                {
                    DatabaseId = "string",
                    InstanceId = "string",
                    Project = "string",
                },
            },
            UserDisplayProperties = 
            {
                { "string", "string" },
            },
        },
        Labels = 
        {
            { "string", "string" },
        },
        Location = "string",
        Name = "string",
        PipelineDescription = new GoogleNative.Dataflow.V1b3.Inputs.PipelineDescriptionArgs
        {
            DisplayData = new[]
            {
                new GoogleNative.Dataflow.V1b3.Inputs.DisplayDataArgs
                {
                    BoolValue = false,
                    DurationValue = "string",
                    FloatValue = 0,
                    Int64Value = "string",
                    JavaClassValue = "string",
                    Key = "string",
                    Label = "string",
                    Namespace = "string",
                    ShortStrValue = "string",
                    StrValue = "string",
                    TimestampValue = "string",
                    Url = "string",
                },
            },
            ExecutionPipelineStage = new[]
            {
                new GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageSummaryArgs
                {
                    ComponentSource = new[]
                    {
                        new GoogleNative.Dataflow.V1b3.Inputs.ComponentSourceArgs
                        {
                            Name = "string",
                            OriginalTransformOrCollection = "string",
                            UserName = "string",
                        },
                    },
                    ComponentTransform = new[]
                    {
                        new GoogleNative.Dataflow.V1b3.Inputs.ComponentTransformArgs
                        {
                            Name = "string",
                            OriginalTransform = "string",
                            UserName = "string",
                        },
                    },
                    Id = "string",
                    InputSource = new[]
                    {
                        new GoogleNative.Dataflow.V1b3.Inputs.StageSourceArgs
                        {
                            Name = "string",
                            OriginalTransformOrCollection = "string",
                            SizeBytes = "string",
                            UserName = "string",
                        },
                    },
                    Kind = GoogleNative.Dataflow.V1b3.ExecutionStageSummaryKind.UnknownKind,
                    Name = "string",
                    OutputSource = new[]
                    {
                        new GoogleNative.Dataflow.V1b3.Inputs.StageSourceArgs
                        {
                            Name = "string",
                            OriginalTransformOrCollection = "string",
                            SizeBytes = "string",
                            UserName = "string",
                        },
                    },
                    PrerequisiteStage = new[]
                    {
                        "string",
                    },
                },
            },
            OriginalPipelineTransform = new[]
            {
                new GoogleNative.Dataflow.V1b3.Inputs.TransformSummaryArgs
                {
                    DisplayData = new[]
                    {
                        new GoogleNative.Dataflow.V1b3.Inputs.DisplayDataArgs
                        {
                            BoolValue = false,
                            DurationValue = "string",
                            FloatValue = 0,
                            Int64Value = "string",
                            JavaClassValue = "string",
                            Key = "string",
                            Label = "string",
                            Namespace = "string",
                            ShortStrValue = "string",
                            StrValue = "string",
                            TimestampValue = "string",
                            Url = "string",
                        },
                    },
                    Id = "string",
                    InputCollectionName = new[]
                    {
                        "string",
                    },
                    Kind = GoogleNative.Dataflow.V1b3.TransformSummaryKind.UnknownKind,
                    Name = "string",
                    OutputCollectionName = new[]
                    {
                        "string",
                    },
                },
            },
            StepNamesHash = "string",
        },
        Project = "string",
        ReplaceJobId = "string",
        ReplacedByJobId = "string",
        RequestedState = GoogleNative.Dataflow.V1b3.JobRequestedState.JobStateUnknown,
        RuntimeUpdatableParams = new GoogleNative.Dataflow.V1b3.Inputs.RuntimeUpdatableParamsArgs
        {
            MaxNumWorkers = 0,
            MinNumWorkers = 0,
        },
        SatisfiesPzs = false,
        StageStates = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageStateArgs
            {
                CurrentStateTime = "string",
                ExecutionStageName = "string",
                ExecutionStageState = GoogleNative.Dataflow.V1b3.ExecutionStageStateExecutionStageState.JobStateUnknown,
            },
        },
        StartTime = "string",
        Steps = new[]
        {
            new GoogleNative.Dataflow.V1b3.Inputs.StepArgs
            {
                Kind = "string",
                Name = "string",
                Properties = 
                {
                    { "string", "string" },
                },
            },
        },
        StepsLocation = "string",
        TempFiles = new[]
        {
            "string",
        },
        TransformNameMapping = 
        {
            { "string", "string" },
        },
        Type = GoogleNative.Dataflow.V1b3.JobType.JobTypeUnknown,
        View = "string",
    });
    
    example, err := dataflow.NewJob(ctx, "examplejobResourceResourceFromDataflowv1b3", &dataflow.JobArgs{
    	ClientRequestId:       pulumi.String("string"),
    	CreateTime:            pulumi.String("string"),
    	CreatedFromSnapshotId: pulumi.String("string"),
    	CurrentState:          dataflow.JobCurrentStateJobStateUnknown,
    	CurrentStateTime:      pulumi.String("string"),
    	Environment: &dataflow.EnvironmentArgs{
    		ClusterManagerApiService: pulumi.String("string"),
    		Dataset:                  pulumi.String("string"),
    		DebugOptions: &dataflow.DebugOptionsArgs{
    			DataSampling: &dataflow.DataSamplingConfigArgs{
    				Behaviors: dataflow.DataSamplingConfigBehaviorsItemArray{
    					dataflow.DataSamplingConfigBehaviorsItemDataSamplingBehaviorUnspecified,
    				},
    			},
    			EnableHotKeyLogging: pulumi.Bool(false),
    		},
    		Experiments: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    		FlexResourceSchedulingGoal: dataflow.EnvironmentFlexResourceSchedulingGoalFlexrsUnspecified,
    		InternalExperiments: pulumi.StringMap{
    			"string": pulumi.String("string"),
    		},
    		SdkPipelineOptions: pulumi.StringMap{
    			"string": pulumi.String("string"),
    		},
    		ServiceAccountEmail: pulumi.String("string"),
    		ServiceKmsKeyName:   pulumi.String("string"),
    		ServiceOptions: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    		TempStoragePrefix: pulumi.String("string"),
    		UserAgent: pulumi.StringMap{
    			"string": pulumi.String("string"),
    		},
    		Version: pulumi.StringMap{
    			"string": pulumi.String("string"),
    		},
    		WorkerPools: dataflow.WorkerPoolArray{
    			&dataflow.WorkerPoolArgs{
    				Network:             pulumi.String("string"),
    				DiskType:            pulumi.String("string"),
    				NumThreadsPerWorker: pulumi.Int(0),
    				OnHostMaintenance:   pulumi.String("string"),
    				NumWorkers:          pulumi.Int(0),
    				IpConfiguration:     dataflow.WorkerPoolIpConfigurationWorkerIpUnspecified,
    				Kind:                pulumi.String("string"),
    				MachineType:         pulumi.String("string"),
    				Metadata: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    				AutoscalingSettings: &dataflow.AutoscalingSettingsArgs{
    					Algorithm:     dataflow.AutoscalingSettingsAlgorithmAutoscalingAlgorithmUnknown,
    					MaxNumWorkers: pulumi.Int(0),
    				},
    				DiskSizeGb:        pulumi.Int(0),
    				DefaultPackageSet: dataflow.WorkerPoolDefaultPackageSetDefaultPackageSetUnknown,
    				DiskSourceImage:   pulumi.String("string"),
    				Packages: dataflow.PackageArray{
    					&dataflow.PackageArgs{
    						Location: pulumi.String("string"),
    						Name:     pulumi.String("string"),
    					},
    				},
    				PoolArgs: pulumi.StringMap{
    					"string": pulumi.String("string"),
    				},
    				SdkHarnessContainerImages: dataflow.SdkHarnessContainerImageArray{
    					&dataflow.SdkHarnessContainerImageArgs{
    						Capabilities: pulumi.StringArray{
    							pulumi.String("string"),
    						},
    						ContainerImage:            pulumi.String("string"),
    						EnvironmentId:             pulumi.String("string"),
    						UseSingleCorePerContainer: pulumi.Bool(false),
    					},
    				},
    				Subnetwork: pulumi.String("string"),
    				TaskrunnerSettings: &dataflow.TaskRunnerSettingsArgs{
    					Alsologtostderr:      pulumi.Bool(false),
    					BaseTaskDir:          pulumi.String("string"),
    					BaseUrl:              pulumi.String("string"),
    					CommandlinesFileName: pulumi.String("string"),
    					ContinueOnException:  pulumi.Bool(false),
    					DataflowApiVersion:   pulumi.String("string"),
    					HarnessCommand:       pulumi.String("string"),
    					LanguageHint:         pulumi.String("string"),
    					LogDir:               pulumi.String("string"),
    					LogToSerialconsole:   pulumi.Bool(false),
    					LogUploadLocation:    pulumi.String("string"),
    					OauthScopes: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    					ParallelWorkerSettings: &dataflow.WorkerSettingsArgs{
    						BaseUrl:            pulumi.String("string"),
    						ReportingEnabled:   pulumi.Bool(false),
    						ServicePath:        pulumi.String("string"),
    						ShuffleServicePath: pulumi.String("string"),
    						TempStoragePrefix:  pulumi.String("string"),
    						WorkerId:           pulumi.String("string"),
    					},
    					StreamingWorkerMainClass: pulumi.String("string"),
    					TaskGroup:                pulumi.String("string"),
    					TaskUser:                 pulumi.String("string"),
    					TempStoragePrefix:        pulumi.String("string"),
    					VmId:                     pulumi.String("string"),
    					WorkflowFileName:         pulumi.String("string"),
    				},
    				TeardownPolicy: dataflow.WorkerPoolTeardownPolicyTeardownPolicyUnknown,
    				DataDisks: dataflow.DiskArray{
    					&dataflow.DiskArgs{
    						DiskType:   pulumi.String("string"),
    						MountPoint: pulumi.String("string"),
    						SizeGb:     pulumi.Int(0),
    					},
    				},
    				Zone: pulumi.String("string"),
    			},
    		},
    		WorkerRegion: pulumi.String("string"),
    		WorkerZone:   pulumi.String("string"),
    	},
    	Id: pulumi.String("string"),
    	JobMetadata: &dataflow.JobMetadataArgs{
    		BigTableDetails: dataflow.BigTableIODetailsArray{
    			&dataflow.BigTableIODetailsArgs{
    				InstanceId: pulumi.String("string"),
    				Project:    pulumi.String("string"),
    				TableId:    pulumi.String("string"),
    			},
    		},
    		BigqueryDetails: dataflow.BigQueryIODetailsArray{
    			&dataflow.BigQueryIODetailsArgs{
    				Dataset: pulumi.String("string"),
    				Project: pulumi.String("string"),
    				Query:   pulumi.String("string"),
    				Table:   pulumi.String("string"),
    			},
    		},
    		DatastoreDetails: dataflow.DatastoreIODetailsArray{
    			&dataflow.DatastoreIODetailsArgs{
    				Namespace: pulumi.String("string"),
    				Project:   pulumi.String("string"),
    			},
    		},
    		FileDetails: dataflow.FileIODetailsArray{
    			&dataflow.FileIODetailsArgs{
    				FilePattern: pulumi.String("string"),
    			},
    		},
    		PubsubDetails: dataflow.PubSubIODetailsArray{
    			&dataflow.PubSubIODetailsArgs{
    				Subscription: pulumi.String("string"),
    				Topic:        pulumi.String("string"),
    			},
    		},
    		SdkVersion: &dataflow.SdkVersionArgs{
    			SdkSupportStatus:   dataflow.SdkVersionSdkSupportStatusUnknown,
    			Version:            pulumi.String("string"),
    			VersionDisplayName: pulumi.String("string"),
    		},
    		SpannerDetails: dataflow.SpannerIODetailsArray{
    			&dataflow.SpannerIODetailsArgs{
    				DatabaseId: pulumi.String("string"),
    				InstanceId: pulumi.String("string"),
    				Project:    pulumi.String("string"),
    			},
    		},
    		UserDisplayProperties: pulumi.StringMap{
    			"string": pulumi.String("string"),
    		},
    	},
    	Labels: pulumi.StringMap{
    		"string": pulumi.String("string"),
    	},
    	Location: pulumi.String("string"),
    	Name:     pulumi.String("string"),
    	PipelineDescription: &dataflow.PipelineDescriptionArgs{
    		DisplayData: dataflow.DisplayDataArray{
    			&dataflow.DisplayDataArgs{
    				BoolValue:      pulumi.Bool(false),
    				DurationValue:  pulumi.String("string"),
    				FloatValue:     pulumi.Float64(0),
    				Int64Value:     pulumi.String("string"),
    				JavaClassValue: pulumi.String("string"),
    				Key:            pulumi.String("string"),
    				Label:          pulumi.String("string"),
    				Namespace:      pulumi.String("string"),
    				ShortStrValue:  pulumi.String("string"),
    				StrValue:       pulumi.String("string"),
    				TimestampValue: pulumi.String("string"),
    				Url:            pulumi.String("string"),
    			},
    		},
    		ExecutionPipelineStage: dataflow.ExecutionStageSummaryArray{
    			&dataflow.ExecutionStageSummaryArgs{
    				ComponentSource: dataflow.ComponentSourceArray{
    					&dataflow.ComponentSourceArgs{
    						Name:                          pulumi.String("string"),
    						OriginalTransformOrCollection: pulumi.String("string"),
    						UserName:                      pulumi.String("string"),
    					},
    				},
    				ComponentTransform: dataflow.ComponentTransformArray{
    					&dataflow.ComponentTransformArgs{
    						Name:              pulumi.String("string"),
    						OriginalTransform: pulumi.String("string"),
    						UserName:          pulumi.String("string"),
    					},
    				},
    				Id: pulumi.String("string"),
    				InputSource: dataflow.StageSourceArray{
    					&dataflow.StageSourceArgs{
    						Name:                          pulumi.String("string"),
    						OriginalTransformOrCollection: pulumi.String("string"),
    						SizeBytes:                     pulumi.String("string"),
    						UserName:                      pulumi.String("string"),
    					},
    				},
    				Kind: dataflow.ExecutionStageSummaryKindUnknownKind,
    				Name: pulumi.String("string"),
    				OutputSource: dataflow.StageSourceArray{
    					&dataflow.StageSourceArgs{
    						Name:                          pulumi.String("string"),
    						OriginalTransformOrCollection: pulumi.String("string"),
    						SizeBytes:                     pulumi.String("string"),
    						UserName:                      pulumi.String("string"),
    					},
    				},
    				PrerequisiteStage: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    		},
    		OriginalPipelineTransform: dataflow.TransformSummaryArray{
    			&dataflow.TransformSummaryArgs{
    				DisplayData: dataflow.DisplayDataArray{
    					&dataflow.DisplayDataArgs{
    						BoolValue:      pulumi.Bool(false),
    						DurationValue:  pulumi.String("string"),
    						FloatValue:     pulumi.Float64(0),
    						Int64Value:     pulumi.String("string"),
    						JavaClassValue: pulumi.String("string"),
    						Key:            pulumi.String("string"),
    						Label:          pulumi.String("string"),
    						Namespace:      pulumi.String("string"),
    						ShortStrValue:  pulumi.String("string"),
    						StrValue:       pulumi.String("string"),
    						TimestampValue: pulumi.String("string"),
    						Url:            pulumi.String("string"),
    					},
    				},
    				Id: pulumi.String("string"),
    				InputCollectionName: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    				Kind: dataflow.TransformSummaryKindUnknownKind,
    				Name: pulumi.String("string"),
    				OutputCollectionName: pulumi.StringArray{
    					pulumi.String("string"),
    				},
    			},
    		},
    		StepNamesHash: pulumi.String("string"),
    	},
    	Project:         pulumi.String("string"),
    	ReplaceJobId:    pulumi.String("string"),
    	ReplacedByJobId: pulumi.String("string"),
    	RequestedState:  dataflow.JobRequestedStateJobStateUnknown,
    	RuntimeUpdatableParams: &dataflow.RuntimeUpdatableParamsArgs{
    		MaxNumWorkers: pulumi.Int(0),
    		MinNumWorkers: pulumi.Int(0),
    	},
    	SatisfiesPzs: pulumi.Bool(false),
    	StageStates: dataflow.ExecutionStageStateArray{
    		&dataflow.ExecutionStageStateArgs{
    			CurrentStateTime:    pulumi.String("string"),
    			ExecutionStageName:  pulumi.String("string"),
    			ExecutionStageState: dataflow.ExecutionStageStateExecutionStageStateJobStateUnknown,
    		},
    	},
    	StartTime: pulumi.String("string"),
    	Steps: dataflow.StepArray{
    		&dataflow.StepArgs{
    			Kind: pulumi.String("string"),
    			Name: pulumi.String("string"),
    			Properties: pulumi.StringMap{
    				"string": pulumi.String("string"),
    			},
    		},
    	},
    	StepsLocation: pulumi.String("string"),
    	TempFiles: pulumi.StringArray{
    		pulumi.String("string"),
    	},
    	TransformNameMapping: pulumi.StringMap{
    		"string": pulumi.String("string"),
    	},
    	Type: dataflow.JobTypeJobTypeUnknown,
    	View: pulumi.String("string"),
    })
    
    var examplejobResourceResourceFromDataflowv1b3 = new Job("examplejobResourceResourceFromDataflowv1b3", JobArgs.builder()
        .clientRequestId("string")
        .createTime("string")
        .createdFromSnapshotId("string")
        .currentState("JOB_STATE_UNKNOWN")
        .currentStateTime("string")
        .environment(EnvironmentArgs.builder()
            .clusterManagerApiService("string")
            .dataset("string")
            .debugOptions(DebugOptionsArgs.builder()
                .dataSampling(DataSamplingConfigArgs.builder()
                    .behaviors("DATA_SAMPLING_BEHAVIOR_UNSPECIFIED")
                    .build())
                .enableHotKeyLogging(false)
                .build())
            .experiments("string")
            .flexResourceSchedulingGoal("FLEXRS_UNSPECIFIED")
            .internalExperiments(Map.of("string", "string"))
            .sdkPipelineOptions(Map.of("string", "string"))
            .serviceAccountEmail("string")
            .serviceKmsKeyName("string")
            .serviceOptions("string")
            .tempStoragePrefix("string")
            .userAgent(Map.of("string", "string"))
            .version(Map.of("string", "string"))
            .workerPools(WorkerPoolArgs.builder()
                .network("string")
                .diskType("string")
                .numThreadsPerWorker(0)
                .onHostMaintenance("string")
                .numWorkers(0)
                .ipConfiguration("WORKER_IP_UNSPECIFIED")
                .kind("string")
                .machineType("string")
                .metadata(Map.of("string", "string"))
                .autoscalingSettings(AutoscalingSettingsArgs.builder()
                    .algorithm("AUTOSCALING_ALGORITHM_UNKNOWN")
                    .maxNumWorkers(0)
                    .build())
                .diskSizeGb(0)
                .defaultPackageSet("DEFAULT_PACKAGE_SET_UNKNOWN")
                .diskSourceImage("string")
                .packages(PackageArgs.builder()
                    .location("string")
                    .name("string")
                    .build())
                .poolArgs(Map.of("string", "string"))
                .sdkHarnessContainerImages(SdkHarnessContainerImageArgs.builder()
                    .capabilities("string")
                    .containerImage("string")
                    .environmentId("string")
                    .useSingleCorePerContainer(false)
                    .build())
                .subnetwork("string")
                .taskrunnerSettings(TaskRunnerSettingsArgs.builder()
                    .alsologtostderr(false)
                    .baseTaskDir("string")
                    .baseUrl("string")
                    .commandlinesFileName("string")
                    .continueOnException(false)
                    .dataflowApiVersion("string")
                    .harnessCommand("string")
                    .languageHint("string")
                    .logDir("string")
                    .logToSerialconsole(false)
                    .logUploadLocation("string")
                    .oauthScopes("string")
                    .parallelWorkerSettings(WorkerSettingsArgs.builder()
                        .baseUrl("string")
                        .reportingEnabled(false)
                        .servicePath("string")
                        .shuffleServicePath("string")
                        .tempStoragePrefix("string")
                        .workerId("string")
                        .build())
                    .streamingWorkerMainClass("string")
                    .taskGroup("string")
                    .taskUser("string")
                    .tempStoragePrefix("string")
                    .vmId("string")
                    .workflowFileName("string")
                    .build())
                .teardownPolicy("TEARDOWN_POLICY_UNKNOWN")
                .dataDisks(DiskArgs.builder()
                    .diskType("string")
                    .mountPoint("string")
                    .sizeGb(0)
                    .build())
                .zone("string")
                .build())
            .workerRegion("string")
            .workerZone("string")
            .build())
        .id("string")
        .jobMetadata(JobMetadataArgs.builder()
            .bigTableDetails(BigTableIODetailsArgs.builder()
                .instanceId("string")
                .project("string")
                .tableId("string")
                .build())
            .bigqueryDetails(BigQueryIODetailsArgs.builder()
                .dataset("string")
                .project("string")
                .query("string")
                .table("string")
                .build())
            .datastoreDetails(DatastoreIODetailsArgs.builder()
                .namespace("string")
                .project("string")
                .build())
            .fileDetails(FileIODetailsArgs.builder()
                .filePattern("string")
                .build())
            .pubsubDetails(PubSubIODetailsArgs.builder()
                .subscription("string")
                .topic("string")
                .build())
            .sdkVersion(SdkVersionArgs.builder()
                .sdkSupportStatus("UNKNOWN")
                .version("string")
                .versionDisplayName("string")
                .build())
            .spannerDetails(SpannerIODetailsArgs.builder()
                .databaseId("string")
                .instanceId("string")
                .project("string")
                .build())
            .userDisplayProperties(Map.of("string", "string"))
            .build())
        .labels(Map.of("string", "string"))
        .location("string")
        .name("string")
        .pipelineDescription(PipelineDescriptionArgs.builder()
            .displayData(DisplayDataArgs.builder()
                .boolValue(false)
                .durationValue("string")
                .floatValue(0)
                .int64Value("string")
                .javaClassValue("string")
                .key("string")
                .label("string")
                .namespace("string")
                .shortStrValue("string")
                .strValue("string")
                .timestampValue("string")
                .url("string")
                .build())
            .executionPipelineStage(ExecutionStageSummaryArgs.builder()
                .componentSource(ComponentSourceArgs.builder()
                    .name("string")
                    .originalTransformOrCollection("string")
                    .userName("string")
                    .build())
                .componentTransform(ComponentTransformArgs.builder()
                    .name("string")
                    .originalTransform("string")
                    .userName("string")
                    .build())
                .id("string")
                .inputSource(StageSourceArgs.builder()
                    .name("string")
                    .originalTransformOrCollection("string")
                    .sizeBytes("string")
                    .userName("string")
                    .build())
                .kind("UNKNOWN_KIND")
                .name("string")
                .outputSource(StageSourceArgs.builder()
                    .name("string")
                    .originalTransformOrCollection("string")
                    .sizeBytes("string")
                    .userName("string")
                    .build())
                .prerequisiteStage("string")
                .build())
            .originalPipelineTransform(TransformSummaryArgs.builder()
                .displayData(DisplayDataArgs.builder()
                    .boolValue(false)
                    .durationValue("string")
                    .floatValue(0)
                    .int64Value("string")
                    .javaClassValue("string")
                    .key("string")
                    .label("string")
                    .namespace("string")
                    .shortStrValue("string")
                    .strValue("string")
                    .timestampValue("string")
                    .url("string")
                    .build())
                .id("string")
                .inputCollectionName("string")
                .kind("UNKNOWN_KIND")
                .name("string")
                .outputCollectionName("string")
                .build())
            .stepNamesHash("string")
            .build())
        .project("string")
        .replaceJobId("string")
        .replacedByJobId("string")
        .requestedState("JOB_STATE_UNKNOWN")
        .runtimeUpdatableParams(RuntimeUpdatableParamsArgs.builder()
            .maxNumWorkers(0)
            .minNumWorkers(0)
            .build())
        .satisfiesPzs(false)
        .stageStates(ExecutionStageStateArgs.builder()
            .currentStateTime("string")
            .executionStageName("string")
            .executionStageState("JOB_STATE_UNKNOWN")
            .build())
        .startTime("string")
        .steps(StepArgs.builder()
            .kind("string")
            .name("string")
            .properties(Map.of("string", "string"))
            .build())
        .stepsLocation("string")
        .tempFiles("string")
        .transformNameMapping(Map.of("string", "string"))
        .type("JOB_TYPE_UNKNOWN")
        .view("string")
        .build());
    
    # Python
    examplejob_resource_resource_from_dataflowv1b3 = google_native.dataflow.v1b3.Job("examplejobResourceResourceFromDataflowv1b3",
        client_request_id="string",
        create_time="string",
        created_from_snapshot_id="string",
        current_state=google_native.dataflow.v1b3.JobCurrentState.JOB_STATE_UNKNOWN,
        current_state_time="string",
        environment={
            "cluster_manager_api_service": "string",
            "dataset": "string",
            "debug_options": {
                "data_sampling": {
                    "behaviors": [google_native.dataflow.v1b3.DataSamplingConfigBehaviorsItem.DATA_SAMPLING_BEHAVIOR_UNSPECIFIED],
                },
                "enable_hot_key_logging": False,
            },
            "experiments": ["string"],
            "flex_resource_scheduling_goal": google_native.dataflow.v1b3.EnvironmentFlexResourceSchedulingGoal.FLEXRS_UNSPECIFIED,
            "internal_experiments": {
                "string": "string",
            },
            "sdk_pipeline_options": {
                "string": "string",
            },
            "service_account_email": "string",
            "service_kms_key_name": "string",
            "service_options": ["string"],
            "temp_storage_prefix": "string",
            "user_agent": {
                "string": "string",
            },
            "version": {
                "string": "string",
            },
            "worker_pools": [{
                "network": "string",
                "disk_type": "string",
                "num_threads_per_worker": 0,
                "on_host_maintenance": "string",
                "num_workers": 0,
                "ip_configuration": google_native.dataflow.v1b3.WorkerPoolIpConfiguration.WORKER_IP_UNSPECIFIED,
                "kind": "string",
                "machine_type": "string",
                "metadata": {
                    "string": "string",
                },
                "autoscaling_settings": {
                    "algorithm": google_native.dataflow.v1b3.AutoscalingSettingsAlgorithm.AUTOSCALING_ALGORITHM_UNKNOWN,
                    "max_num_workers": 0,
                },
                "disk_size_gb": 0,
                "default_package_set": google_native.dataflow.v1b3.WorkerPoolDefaultPackageSet.DEFAULT_PACKAGE_SET_UNKNOWN,
                "disk_source_image": "string",
                "packages": [{
                    "location": "string",
                    "name": "string",
                }],
                "pool_args": {
                    "string": "string",
                },
                "sdk_harness_container_images": [{
                    "capabilities": ["string"],
                    "container_image": "string",
                    "environment_id": "string",
                    "use_single_core_per_container": False,
                }],
                "subnetwork": "string",
                "taskrunner_settings": {
                    "alsologtostderr": False,
                    "base_task_dir": "string",
                    "base_url": "string",
                    "commandlines_file_name": "string",
                    "continue_on_exception": False,
                    "dataflow_api_version": "string",
                    "harness_command": "string",
                    "language_hint": "string",
                    "log_dir": "string",
                    "log_to_serialconsole": False,
                    "log_upload_location": "string",
                    "oauth_scopes": ["string"],
                    "parallel_worker_settings": {
                        "base_url": "string",
                        "reporting_enabled": False,
                        "service_path": "string",
                        "shuffle_service_path": "string",
                        "temp_storage_prefix": "string",
                        "worker_id": "string",
                    },
                    "streaming_worker_main_class": "string",
                    "task_group": "string",
                    "task_user": "string",
                    "temp_storage_prefix": "string",
                    "vm_id": "string",
                    "workflow_file_name": "string",
                },
                "teardown_policy": google_native.dataflow.v1b3.WorkerPoolTeardownPolicy.TEARDOWN_POLICY_UNKNOWN,
                "data_disks": [{
                    "disk_type": "string",
                    "mount_point": "string",
                    "size_gb": 0,
                }],
                "zone": "string",
            }],
            "worker_region": "string",
            "worker_zone": "string",
        },
        id="string",
        job_metadata={
            "big_table_details": [{
                "instance_id": "string",
                "project": "string",
                "table_id": "string",
            }],
            "bigquery_details": [{
                "dataset": "string",
                "project": "string",
                "query": "string",
                "table": "string",
            }],
            "datastore_details": [{
                "namespace": "string",
                "project": "string",
            }],
            "file_details": [{
                "file_pattern": "string",
            }],
            "pubsub_details": [{
                "subscription": "string",
                "topic": "string",
            }],
            "sdk_version": {
                "sdk_support_status": google_native.dataflow.v1b3.SdkVersionSdkSupportStatus.UNKNOWN,
                "version": "string",
                "version_display_name": "string",
            },
            "spanner_details": [{
                "database_id": "string",
                "instance_id": "string",
                "project": "string",
            }],
            "user_display_properties": {
                "string": "string",
            },
        },
        labels={
            "string": "string",
        },
        location="string",
        name="string",
        pipeline_description={
            "display_data": [{
                "bool_value": False,
                "duration_value": "string",
                "float_value": 0,
                "int64_value": "string",
                "java_class_value": "string",
                "key": "string",
                "label": "string",
                "namespace": "string",
                "short_str_value": "string",
                "str_value": "string",
                "timestamp_value": "string",
                "url": "string",
            }],
            "execution_pipeline_stage": [{
                "component_source": [{
                    "name": "string",
                    "original_transform_or_collection": "string",
                    "user_name": "string",
                }],
                "component_transform": [{
                    "name": "string",
                    "original_transform": "string",
                    "user_name": "string",
                }],
                "id": "string",
                "input_source": [{
                    "name": "string",
                    "original_transform_or_collection": "string",
                    "size_bytes": "string",
                    "user_name": "string",
                }],
                "kind": google_native.dataflow.v1b3.ExecutionStageSummaryKind.UNKNOWN_KIND,
                "name": "string",
                "output_source": [{
                    "name": "string",
                    "original_transform_or_collection": "string",
                    "size_bytes": "string",
                    "user_name": "string",
                }],
                "prerequisite_stage": ["string"],
            }],
            "original_pipeline_transform": [{
                "display_data": [{
                    "bool_value": False,
                    "duration_value": "string",
                    "float_value": 0,
                    "int64_value": "string",
                    "java_class_value": "string",
                    "key": "string",
                    "label": "string",
                    "namespace": "string",
                    "short_str_value": "string",
                    "str_value": "string",
                    "timestamp_value": "string",
                    "url": "string",
                }],
                "id": "string",
                "input_collection_name": ["string"],
                "kind": google_native.dataflow.v1b3.TransformSummaryKind.UNKNOWN_KIND,
                "name": "string",
                "output_collection_name": ["string"],
            }],
            "step_names_hash": "string",
        },
        project="string",
        replace_job_id="string",
        replaced_by_job_id="string",
        requested_state=google_native.dataflow.v1b3.JobRequestedState.JOB_STATE_UNKNOWN,
        runtime_updatable_params={
            "max_num_workers": 0,
            "min_num_workers": 0,
        },
        satisfies_pzs=False,
        stage_states=[{
            "current_state_time": "string",
            "execution_stage_name": "string",
            "execution_stage_state": google_native.dataflow.v1b3.ExecutionStageStateExecutionStageState.JOB_STATE_UNKNOWN,
        }],
        start_time="string",
        steps=[{
            "kind": "string",
            "name": "string",
            "properties": {
                "string": "string",
            },
        }],
        steps_location="string",
        temp_files=["string"],
        transform_name_mapping={
            "string": "string",
        },
        type=google_native.dataflow.v1b3.JobType.JOB_TYPE_UNKNOWN,
        view="string")
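
    The skeletons above enumerate every constructor argument with placeholder values; a real job typically sets only a few of them. A minimal sketch in Python (argument names are taken from the skeleton above; the project ID, bucket, and job name are hypothetical, and the arguments are shown as a plain dict rather than submitted, since actually creating the resource requires a Pulumi stack and GCP credentials):

    ```python
    # Minimal subset of the Job constructor arguments shown in the full skeleton.
    # All concrete values (project, bucket, job name, email) are illustrative placeholders.
    minimal_job_args = {
        "project": "my-gcp-project",   # hypothetical project ID
        "location": "us-east1",        # regional endpoint, per the note at the top of this page
        "name": "wordcount-nightly",
        "environment": {
            "temp_storage_prefix": "gs://my-bucket/tmp",  # hypothetical bucket
            "service_account_email": "dataflow-runner@my-gcp-project.iam.gserviceaccount.com",
        },
    }

    # Sanity check: every key used is one of the keyword arguments from the skeleton.
    skeleton_keys = {
        "client_request_id", "create_time", "created_from_snapshot_id",
        "current_state", "current_state_time", "environment", "execution_info",
        "id", "job_metadata", "labels", "location", "name",
        "pipeline_description", "project", "replace_job_id", "replaced_by_job_id",
        "requested_state", "runtime_updatable_params", "satisfies_pzs",
        "stage_states", "start_time", "steps", "steps_location", "temp_files",
        "transform_name_mapping", "type", "view",
    }
    assert set(minimal_job_args) <= skeleton_keys
    ```

    Inside a configured Pulumi program, these would be passed through as `google_native.dataflow.v1b3.Job("wordcount-nightly", **minimal_job_args)`; everything omitted falls back to the API defaults.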
    
    // TypeScript
    const examplejobResourceResourceFromDataflowv1b3 = new google_native.dataflow.v1b3.Job("examplejobResourceResourceFromDataflowv1b3", {
        clientRequestId: "string",
        createTime: "string",
        createdFromSnapshotId: "string",
        currentState: google_native.dataflow.v1b3.JobCurrentState.JobStateUnknown,
        currentStateTime: "string",
        environment: {
            clusterManagerApiService: "string",
            dataset: "string",
            debugOptions: {
                dataSampling: {
                    behaviors: [google_native.dataflow.v1b3.DataSamplingConfigBehaviorsItem.DataSamplingBehaviorUnspecified],
                },
                enableHotKeyLogging: false,
            },
            experiments: ["string"],
            flexResourceSchedulingGoal: google_native.dataflow.v1b3.EnvironmentFlexResourceSchedulingGoal.FlexrsUnspecified,
            internalExperiments: {
                string: "string",
            },
            sdkPipelineOptions: {
                string: "string",
            },
            serviceAccountEmail: "string",
            serviceKmsKeyName: "string",
            serviceOptions: ["string"],
            tempStoragePrefix: "string",
            userAgent: {
                string: "string",
            },
            version: {
                string: "string",
            },
            workerPools: [{
                network: "string",
                diskType: "string",
                numThreadsPerWorker: 0,
                onHostMaintenance: "string",
                numWorkers: 0,
                ipConfiguration: google_native.dataflow.v1b3.WorkerPoolIpConfiguration.WorkerIpUnspecified,
                kind: "string",
                machineType: "string",
                metadata: {
                    string: "string",
                },
                autoscalingSettings: {
                    algorithm: google_native.dataflow.v1b3.AutoscalingSettingsAlgorithm.AutoscalingAlgorithmUnknown,
                    maxNumWorkers: 0,
                },
                diskSizeGb: 0,
                defaultPackageSet: google_native.dataflow.v1b3.WorkerPoolDefaultPackageSet.DefaultPackageSetUnknown,
                diskSourceImage: "string",
                packages: [{
                    location: "string",
                    name: "string",
                }],
                poolArgs: {
                    string: "string",
                },
                sdkHarnessContainerImages: [{
                    capabilities: ["string"],
                    containerImage: "string",
                    environmentId: "string",
                    useSingleCorePerContainer: false,
                }],
                subnetwork: "string",
                taskrunnerSettings: {
                    alsologtostderr: false,
                    baseTaskDir: "string",
                    baseUrl: "string",
                    commandlinesFileName: "string",
                    continueOnException: false,
                    dataflowApiVersion: "string",
                    harnessCommand: "string",
                    languageHint: "string",
                    logDir: "string",
                    logToSerialconsole: false,
                    logUploadLocation: "string",
                    oauthScopes: ["string"],
                    parallelWorkerSettings: {
                        baseUrl: "string",
                        reportingEnabled: false,
                        servicePath: "string",
                        shuffleServicePath: "string",
                        tempStoragePrefix: "string",
                        workerId: "string",
                    },
                    streamingWorkerMainClass: "string",
                    taskGroup: "string",
                    taskUser: "string",
                    tempStoragePrefix: "string",
                    vmId: "string",
                    workflowFileName: "string",
                },
                teardownPolicy: google_native.dataflow.v1b3.WorkerPoolTeardownPolicy.TeardownPolicyUnknown,
                dataDisks: [{
                    diskType: "string",
                    mountPoint: "string",
                    sizeGb: 0,
                }],
                zone: "string",
            }],
            workerRegion: "string",
            workerZone: "string",
        },
        id: "string",
        jobMetadata: {
            bigTableDetails: [{
                instanceId: "string",
                project: "string",
                tableId: "string",
            }],
            bigqueryDetails: [{
                dataset: "string",
                project: "string",
                query: "string",
                table: "string",
            }],
            datastoreDetails: [{
                namespace: "string",
                project: "string",
            }],
            fileDetails: [{
                filePattern: "string",
            }],
            pubsubDetails: [{
                subscription: "string",
                topic: "string",
            }],
            sdkVersion: {
                sdkSupportStatus: google_native.dataflow.v1b3.SdkVersionSdkSupportStatus.Unknown,
                version: "string",
                versionDisplayName: "string",
            },
            spannerDetails: [{
                databaseId: "string",
                instanceId: "string",
                project: "string",
            }],
            userDisplayProperties: {
                string: "string",
            },
        },
        labels: {
            string: "string",
        },
        location: "string",
        name: "string",
        pipelineDescription: {
            displayData: [{
                boolValue: false,
                durationValue: "string",
                floatValue: 0,
                int64Value: "string",
                javaClassValue: "string",
                key: "string",
                label: "string",
                namespace: "string",
                shortStrValue: "string",
                strValue: "string",
                timestampValue: "string",
                url: "string",
            }],
            executionPipelineStage: [{
                componentSource: [{
                    name: "string",
                    originalTransformOrCollection: "string",
                    userName: "string",
                }],
                componentTransform: [{
                    name: "string",
                    originalTransform: "string",
                    userName: "string",
                }],
                id: "string",
                inputSource: [{
                    name: "string",
                    originalTransformOrCollection: "string",
                    sizeBytes: "string",
                    userName: "string",
                }],
                kind: google_native.dataflow.v1b3.ExecutionStageSummaryKind.UnknownKind,
                name: "string",
                outputSource: [{
                    name: "string",
                    originalTransformOrCollection: "string",
                    sizeBytes: "string",
                    userName: "string",
                }],
                prerequisiteStage: ["string"],
            }],
            originalPipelineTransform: [{
                displayData: [{
                    boolValue: false,
                    durationValue: "string",
                    floatValue: 0,
                    int64Value: "string",
                    javaClassValue: "string",
                    key: "string",
                    label: "string",
                    namespace: "string",
                    shortStrValue: "string",
                    strValue: "string",
                    timestampValue: "string",
                    url: "string",
                }],
                id: "string",
                inputCollectionName: ["string"],
                kind: google_native.dataflow.v1b3.TransformSummaryKind.UnknownKind,
                name: "string",
                outputCollectionName: ["string"],
            }],
            stepNamesHash: "string",
        },
        project: "string",
        replaceJobId: "string",
        replacedByJobId: "string",
        requestedState: google_native.dataflow.v1b3.JobRequestedState.JobStateUnknown,
        runtimeUpdatableParams: {
            maxNumWorkers: 0,
            minNumWorkers: 0,
        },
        satisfiesPzs: false,
        stageStates: [{
            currentStateTime: "string",
            executionStageName: "string",
            executionStageState: google_native.dataflow.v1b3.ExecutionStageStateExecutionStageState.JobStateUnknown,
        }],
        startTime: "string",
        steps: [{
            kind: "string",
            name: "string",
            properties: {
                string: "string",
            },
        }],
        stepsLocation: "string",
        tempFiles: ["string"],
        transformNameMapping: {
            string: "string",
        },
        type: google_native.dataflow.v1b3.JobType.JobTypeUnknown,
        view: "string",
    });
    
    # YAML
    type: google-native:dataflow/v1b3:Job
    properties:
        clientRequestId: string
        createTime: string
        createdFromSnapshotId: string
        currentState: JOB_STATE_UNKNOWN
        currentStateTime: string
        environment:
            clusterManagerApiService: string
            dataset: string
            debugOptions:
                dataSampling:
                    behaviors:
                        - DATA_SAMPLING_BEHAVIOR_UNSPECIFIED
                enableHotKeyLogging: false
            experiments:
                - string
            flexResourceSchedulingGoal: FLEXRS_UNSPECIFIED
            internalExperiments:
                string: string
            sdkPipelineOptions:
                string: string
            serviceAccountEmail: string
            serviceKmsKeyName: string
            serviceOptions:
                - string
            tempStoragePrefix: string
            userAgent:
                string: string
            version:
                string: string
            workerPools:
                - autoscalingSettings:
                    algorithm: AUTOSCALING_ALGORITHM_UNKNOWN
                    maxNumWorkers: 0
                  dataDisks:
                    - diskType: string
                      mountPoint: string
                      sizeGb: 0
                  defaultPackageSet: DEFAULT_PACKAGE_SET_UNKNOWN
                  diskSizeGb: 0
                  diskSourceImage: string
                  diskType: string
                  ipConfiguration: WORKER_IP_UNSPECIFIED
                  kind: string
                  machineType: string
                  metadata:
                    string: string
                  network: string
                  numThreadsPerWorker: 0
                  numWorkers: 0
                  onHostMaintenance: string
                  packages:
                    - location: string
                      name: string
                  poolArgs:
                    string: string
                  sdkHarnessContainerImages:
                    - capabilities:
                        - string
                      containerImage: string
                      environmentId: string
                      useSingleCorePerContainer: false
                  subnetwork: string
                  taskrunnerSettings:
                    alsologtostderr: false
                    baseTaskDir: string
                    baseUrl: string
                    commandlinesFileName: string
                    continueOnException: false
                    dataflowApiVersion: string
                    harnessCommand: string
                    languageHint: string
                    logDir: string
                    logToSerialconsole: false
                    logUploadLocation: string
                    oauthScopes:
                        - string
                    parallelWorkerSettings:
                        baseUrl: string
                        reportingEnabled: false
                        servicePath: string
                        shuffleServicePath: string
                        tempStoragePrefix: string
                        workerId: string
                    streamingWorkerMainClass: string
                    taskGroup: string
                    taskUser: string
                    tempStoragePrefix: string
                    vmId: string
                    workflowFileName: string
                  teardownPolicy: TEARDOWN_POLICY_UNKNOWN
                  zone: string
            workerRegion: string
            workerZone: string
        id: string
        jobMetadata:
            bigTableDetails:
                - instanceId: string
                  project: string
                  tableId: string
            bigqueryDetails:
                - dataset: string
                  project: string
                  query: string
                  table: string
            datastoreDetails:
                - namespace: string
                  project: string
            fileDetails:
                - filePattern: string
            pubsubDetails:
                - subscription: string
                  topic: string
            sdkVersion:
                sdkSupportStatus: UNKNOWN
                version: string
                versionDisplayName: string
            spannerDetails:
                - databaseId: string
                  instanceId: string
                  project: string
            userDisplayProperties:
                string: string
        labels:
            string: string
        location: string
        name: string
        pipelineDescription:
            displayData:
                - boolValue: false
                  durationValue: string
                  floatValue: 0
                  int64Value: string
                  javaClassValue: string
                  key: string
                  label: string
                  namespace: string
                  shortStrValue: string
                  strValue: string
                  timestampValue: string
                  url: string
            executionPipelineStage:
                - componentSource:
                    - name: string
                      originalTransformOrCollection: string
                      userName: string
                  componentTransform:
                    - name: string
                      originalTransform: string
                      userName: string
                  id: string
                  inputSource:
                    - name: string
                      originalTransformOrCollection: string
                      sizeBytes: string
                      userName: string
                  kind: UNKNOWN_KIND
                  name: string
                  outputSource:
                    - name: string
                      originalTransformOrCollection: string
                      sizeBytes: string
                      userName: string
                  prerequisiteStage:
                    - string
            originalPipelineTransform:
                - displayData:
                    - boolValue: false
                      durationValue: string
                      floatValue: 0
                      int64Value: string
                      javaClassValue: string
                      key: string
                      label: string
                      namespace: string
                      shortStrValue: string
                      strValue: string
                      timestampValue: string
                      url: string
                  id: string
                  inputCollectionName:
                    - string
                  kind: UNKNOWN_KIND
                  name: string
                  outputCollectionName:
                    - string
            stepNamesHash: string
        project: string
        replaceJobId: string
        replacedByJobId: string
        requestedState: JOB_STATE_UNKNOWN
        runtimeUpdatableParams:
            maxNumWorkers: 0
            minNumWorkers: 0
        satisfiesPzs: false
        stageStates:
            - currentStateTime: string
              executionStageName: string
              executionStageState: JOB_STATE_UNKNOWN
        startTime: string
        steps:
            - kind: string
              name: string
              properties:
                string: string
        stepsLocation: string
        tempFiles:
            - string
        transformNameMapping:
            string: string
        type: JOB_TYPE_UNKNOWN
        view: string
    

    Job Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.

    The Job resource accepts the following input properties:

    ClientRequestId string
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    CreateTime string
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    CreatedFromSnapshotId string
    If this is specified, the job's initial state is populated from the given snapshot.
    CurrentState Pulumi.GoogleNative.Dataflow.V1b3.JobCurrentState
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    CurrentStateTime string
    The timestamp associated with the current state.
    Environment Pulumi.GoogleNative.Dataflow.V1b3.Inputs.Environment
    The environment for the job.
    ExecutionInfo Pulumi.GoogleNative.Dataflow.V1b3.Inputs.JobExecutionInfo
    Deprecated.

    Id string
    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
    JobMetadata Pulumi.GoogleNative.Dataflow.V1b3.Inputs.JobMetadata
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    Labels Dictionary<string, string>
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF-8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    Location string
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    Name string
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    PipelineDescription Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PipelineDescription
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    Project string
    The ID of the Cloud Platform project that the job belongs to.
    ReplaceJobId string
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    ReplacedByJobId string
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    RequestedState Pulumi.GoogleNative.Dataflow.V1b3.JobRequestedState
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
    RuntimeUpdatableParams Pulumi.GoogleNative.Dataflow.V1b3.Inputs.RuntimeUpdatableParams
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    SatisfiesPzs bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    StageStates List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageState>
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    StartTime string
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    Steps List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.Step>
    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
    StepsLocation string
    The Cloud Storage location where the steps are stored.
    TempFiles List<string>
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    TransformNameMapping Dictionary<string, string>
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
    Type Pulumi.GoogleNative.Dataflow.V1b3.JobType
    The type of Cloud Dataflow job.
    View string
    The level of information requested in response.
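The Name field above must match the pattern [a-z]([-a-z0-9]{0,1022}[a-z0-9])?. A quick local check of candidate job names, using that pattern verbatim (the helper name is illustrative, not part of the API), could look like:

```python
import re

# Job-name pattern copied verbatim from the Name field description:
# lowercase letter, optional middle of hyphens/lowercase/digits, alphanumeric end.
JOB_NAME_RE = re.compile(r"[a-z]([-a-z0-9]{0,1022}[a-z0-9])?")


def is_valid_job_name(name: str) -> bool:
    """Return True if `name` satisfies the documented Dataflow job-name pattern."""
    return JOB_NAME_RE.fullmatch(name) is not None
```

For example, `wordcount-job-1` passes, while names starting with an uppercase letter or ending in a hyphen do not.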
    ClientRequestId string
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    CreateTime string
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    CreatedFromSnapshotId string
    If this is specified, the job's initial state is populated from the given snapshot.
    CurrentState JobCurrentState
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    CurrentStateTime string
    The timestamp associated with the current state.
    Environment EnvironmentArgs
    The environment for the job.
    ExecutionInfo JobExecutionInfoArgs
    Deprecated.

    Id string
    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
    JobMetadata JobMetadataArgs
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    Labels map[string]string
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF-8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    Location string
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    Name string
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    PipelineDescription PipelineDescriptionArgs
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    Project string
    The ID of the Cloud Platform project that the job belongs to.
    ReplaceJobId string
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    ReplacedByJobId string
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    RequestedState JobRequestedState
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
    RuntimeUpdatableParams RuntimeUpdatableParamsArgs
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    SatisfiesPzs bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    StageStates []ExecutionStageStateArgs
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    StartTime string
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    Steps []StepArgs
    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
    StepsLocation string
    The Cloud Storage location where the steps are stored.
    TempFiles []string
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    TransformNameMapping map[string]string
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
    Type JobType
    The type of Cloud Dataflow job.
    View string
    The level of information requested in response.
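The Labels field carries several documented limits. The unambiguous ones (entry count and byte sizes) can be checked locally with a sketch like the following; the \p{Ll}/\p{Lo} character-class rules need the third-party `regex` module, so this helper (whose name is illustrative) deliberately skips them:

```python
def validate_labels(labels: dict) -> list:
    r"""Collect violations of the documented label constraints.

    Checks only the unambiguous limits from the Labels description:
    at most 64 entries, and keys/values at most 128 bytes of UTF-8.
    The \p{Ll}\p{Lo} regexp rules are not enforced here because the
    standard `re` module lacks \p character classes.
    """
    problems = []
    if len(labels) > 64:
        problems.append("more than 64 entries")
    for key, value in labels.items():
        if len(key.encode("utf-8")) > 128:
            problems.append("key too long: %r" % key)
        if len(value.encode("utf-8")) > 128:
            problems.append("value too long for key %r" % key)
    return problems
```

An empty result means the size limits are satisfied; each string in the result names one violation.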
    clientRequestId String
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    createTime String
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    createdFromSnapshotId String
    If this is specified, the job's initial state is populated from the given snapshot.
    currentState JobCurrentState
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    currentStateTime String
    The timestamp associated with the current state.
    environment Environment
    The environment for the job.
    executionInfo JobExecutionInfo
    Deprecated.

    id String
    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
    jobMetadata JobMetadata
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    labels Map<String,String>
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF-8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    location String
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    name String
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    pipelineDescription PipelineDescription
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    project String
    The ID of the Cloud Platform project that the job belongs to.
    replaceJobId String
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    replacedByJobId String
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    requestedState JobRequestedState
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
    runtimeUpdatableParams RuntimeUpdatableParams
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    satisfiesPzs Boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    stageStates List<ExecutionStageState>
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    startTime String
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    steps List<Step>
    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
    stepsLocation String
    The Cloud Storage location where the steps are stored.
    tempFiles List<String>
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    transformNameMapping Map<String,String>
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
    type JobType
    The type of Cloud Dataflow job.
    view String
    The level of information requested in response.
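The requestedState description names a small set of legal target states. Distilled into a lookup table (this is an assumption drawn only from that prose, not an exhaustive statement of the API contract):

```python
# Target states a caller may request via UpdateJob, per the requestedState
# description: STOPPED <-> RUNNING, and RUNNING -> CANCELLED / DONE / DRAINED.
ALLOWED_REQUESTS = {
    "JOB_STATE_STOPPED": {"JOB_STATE_RUNNING"},
    "JOB_STATE_RUNNING": {
        "JOB_STATE_STOPPED",
        "JOB_STATE_CANCELLED",
        "JOB_STATE_DONE",
        "JOB_STATE_DRAINED",
    },
}


def may_request(current: str, requested: str) -> bool:
    """Whether `requested` is a documented requested_state from `current` (sketch)."""
    return requested in ALLOWED_REQUESTS.get(current, set())
```

So, for example, draining is only meaningful from JOB_STATE_RUNNING, and no request is legal once the job has reached a terminal state.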
    clientRequestId string
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    createTime string
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    createdFromSnapshotId string
    If this is specified, the job's initial state is populated from the given snapshot.
    currentState JobCurrentState
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    currentStateTime string
    The timestamp associated with the current state.
    environment Environment
    The environment for the job.
    executionInfo JobExecutionInfo
    Deprecated.

    id string
    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
    jobMetadata JobMetadata
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    labels {[key: string]: string}
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF-8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    location string
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    name string
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    pipelineDescription PipelineDescription
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    project string
    The ID of the Cloud Platform project that the job belongs to.
    replaceJobId string
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    replacedByJobId string
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    requestedState JobRequestedState
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
    runtimeUpdatableParams RuntimeUpdatableParams
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    satisfiesPzs boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    stageStates ExecutionStageState[]
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    startTime string
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    steps Step[]
    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
    stepsLocation string
    The Cloud Storage location where the steps are stored.
    tempFiles string[]
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    transformNameMapping {[key: string]: string}
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
    type JobType
    The type of Cloud Dataflow job.
    view string
    The level of information requested in response.
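The tempFiles field accepts exactly two Cloud Storage forms, storage.googleapis.com/{bucket}/{object} and bucket.storage.googleapis.com/{object}. A small parser for those two forms (the function name is illustrative) might look like:

```python
def parse_temp_file(path: str):
    """Split a documented temp-file location into (bucket, object).

    Accepts the two forms from the tempFiles description:
      storage.googleapis.com/{bucket}/{object}
      {bucket}.storage.googleapis.com/{object}
    Returns None for anything else (no wildcards or other schemes).
    """
    host_prefix = "storage.googleapis.com/"
    if path.startswith(host_prefix):
        # Path-style: first segment after the host is the bucket.
        bucket, _, obj = path[len(host_prefix):].partition("/")
        return (bucket, obj) if bucket and obj else None
    # Virtual-hosted style: bucket is a subdomain of storage.googleapis.com.
    host, _, obj = path.partition("/")
    suffix = ".storage.googleapis.com"
    if host.endswith(suffix) and obj:
        return (host[: -len(suffix)], obj)
    return None
```

Both forms resolve to the same (bucket, object) pair, which is convenient when checking the no-duplicates rule across a mixed list.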
    client_request_id str
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    create_time str
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    created_from_snapshot_id str
    If this is specified, the job's initial state is populated from the given snapshot.
    current_state JobCurrentState
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    current_state_time str
    The timestamp associated with the current state.
    environment EnvironmentArgs
    The environment for the job.
    execution_info JobExecutionInfoArgs
    Deprecated.

    id str
    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
    job_metadata JobMetadataArgs
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    labels Mapping[str, str]
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF-8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    location str
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    name str
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    pipeline_description PipelineDescriptionArgs
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    project str
    The ID of the Cloud Platform project that the job belongs to.
    replace_job_id str
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    replaced_by_job_id str
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    requested_state JobRequestedState
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
    runtime_updatable_params RuntimeUpdatableParamsArgs
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    satisfies_pzs bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    stage_states Sequence[ExecutionStageStateArgs]
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    start_time str
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    steps Sequence[StepArgs]
    Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
    steps_location str
    The Cloud Storage location where the steps are stored.
    temp_files Sequence[str]
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
    transform_name_mapping Mapping[str, str]
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
    type JobType
    The type of Cloud Dataflow job.
    view str
    The level of information requested in response.
    clientRequestId String
    The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
    createTime String
    The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
    createdFromSnapshotId String
    If this is specified, the job's initial state is populated from the given snapshot.
    currentState "JOB_STATE_UNKNOWN" | "JOB_STATE_STOPPED" | "JOB_STATE_RUNNING" | "JOB_STATE_DONE" | "JOB_STATE_FAILED" | "JOB_STATE_CANCELLED" | "JOB_STATE_UPDATED" | "JOB_STATE_DRAINING" | "JOB_STATE_DRAINED" | "JOB_STATE_PENDING" | "JOB_STATE_CANCELLING" | "JOB_STATE_QUEUED" | "JOB_STATE_RESOURCE_CLEANING_UP"
    The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    currentStateTime String
    The timestamp associated with the current state.
    environment Property Map
    The environment for the job.
    executionInfo Property Map
    Deprecated.

    Deprecated: Deprecated.

    id String
    The unique ID of this job. This field is set by the Cloud Dataflow service when the Job is created, and is immutable for the life of the job.
    jobMetadata Property Map
    This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
    labels Map<String>
    User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
    location String
    The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
    name String
    The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
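    The job-name pattern above can be checked before submitting a request. A minimal sketch using Python's standard `re` module; the helper name is hypothetical and not part of the provider:

    ```python
    import re

    # Documented Dataflow job-name pattern: [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
    _JOB_NAME_RE = re.compile(r"[a-z]([-a-z0-9]{0,1022}[a-z0-9])?")

    def is_valid_job_name(name: str) -> bool:
        """Return True if `name` matches the documented job-name pattern."""
        return _JOB_NAME_RE.fullmatch(name) is not None
    ```

    Note the pattern requires a lowercase letter first and forbids a trailing hyphen.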
    pipelineDescription Property Map
    Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
    project String
    The ID of the Cloud Platform project that the job belongs to.
    replaceJobId String
    If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
    replacedByJobId String
    If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
    requestedState "JOB_STATE_UNKNOWN" | "JOB_STATE_STOPPED" | "JOB_STATE_RUNNING" | "JOB_STATE_DONE" | "JOB_STATE_FAILED" | "JOB_STATE_CANCELLED" | "JOB_STATE_UPDATED" | "JOB_STATE_DRAINING" | "JOB_STATE_DRAINED" | "JOB_STATE_PENDING" | "JOB_STATE_CANCELLING" | "JOB_STATE_QUEUED" | "JOB_STATE_RESOURCE_CLEANING_UP"
    The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
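    The transitions described above can be encoded as a small lookup table. This is an illustrative reading of the prose, not an exhaustive state machine; the Dataflow service itself is authoritative:

    ```python
    # Requested-state transitions as described in the docs (illustrative only).
    _ALLOWED_REQUESTED_STATES = {
        "JOB_STATE_STOPPED": {"JOB_STATE_RUNNING"},
        "JOB_STATE_RUNNING": {
            "JOB_STATE_STOPPED",    # switch back to stopped
            "JOB_STATE_CANCELLED",  # terminal
            "JOB_STATE_DONE",       # terminal
            "JOB_STATE_DRAINED",    # terminal
        },
    }

    def can_request(current: str, requested: str) -> bool:
        """True if UpdateJob may request `requested` from `current` per the docs."""
        return requested in _ALLOWED_REQUESTED_STATES.get(current, set())
    ```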
    runtimeUpdatableParams Property Map
    This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
    satisfiesPzs Boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    stageStates List<Property Map>
    This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
    startTime String
    The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
    steps List<Property Map>
    Exactly one of steps or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
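    The mutual exclusivity between the inline steps and their Cloud Storage location can be asserted client-side. A hypothetical pre-flight check, not part of the provider:

    ```python
    def validate_steps_args(steps=None, steps_location=None) -> None:
        """Raise unless exactly one of `steps`/`steps_location` is provided,
        mirroring the constraint documented for the Job resource."""
        if (steps is None) == (steps_location is None):
            raise ValueError("provide exactly one of steps or steps_location")
    ```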
    stepsLocation String
    The Cloud Storage location where the steps are stored.
    tempFiles List<String>
    A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are Google Cloud Storage paths in either of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
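    The two supported Cloud Storage path forms can be recognized with a small helper. A sketch only; the function name is hypothetical and the check is purely syntactic:

    ```python
    _GCS_HOST = "storage.googleapis.com"

    def is_supported_temp_file(path: str) -> bool:
        """Syntactic check against the two documented temp-file forms:
        storage.googleapis.com/{bucket}/{object} and
        {bucket}.storage.googleapis.com/{object}."""
        # Path-style: storage.googleapis.com/{bucket}/{object}
        if path.startswith(_GCS_HOST + "/"):
            bucket, _, obj = path[len(_GCS_HOST) + 1:].partition("/")
            return bool(bucket) and bool(obj)
        # Virtual-hosted style: {bucket}.storage.googleapis.com/{object}
        host, _, obj = path.partition("/")
        return (
            host.endswith("." + _GCS_HOST)
            and len(host) > len(_GCS_HOST) + 1  # non-empty bucket
            and bool(obj)
        )
    ```

    Note that gs:// URLs are not among the documented forms for this field.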
    transformNameMapping Map<String>
    The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
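    The prefix-replacement semantics can be sketched as a plain function. Illustrative only, and the longest-prefix-wins tie-break is an assumption of this sketch, not something the docs specify:

    ```python
    def remap_transform_name(name: str, mapping: dict) -> str:
        """Rewrite a transform name whose prefix appears in `mapping`,
        mirroring the transformNameMapping description above.
        Assumption: when several prefixes match, prefer the longest."""
        for old in sorted(mapping, key=len, reverse=True):
            if name.startswith(old):
                return mapping[old] + name[len(old):]
        return name
    ```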
    type "JOB_TYPE_UNKNOWN" | "JOB_TYPE_BATCH" | "JOB_TYPE_STREAMING"
    The type of Cloud Dataflow job.
    view String
    The level of information requested in response.

    Outputs

    All input properties are implicitly available as output properties. Additionally, the Job resource produces the following output properties:

    Id string
    The provider-assigned unique ID for this managed resource.
    SatisfiesPzi bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    Id string
    The provider-assigned unique ID for this managed resource.
    SatisfiesPzi bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    id String
    The provider-assigned unique ID for this managed resource.
    satisfiesPzi Boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    id string
    The provider-assigned unique ID for this managed resource.
    satisfiesPzi boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    id str
    The provider-assigned unique ID for this managed resource.
    satisfies_pzi bool
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
    id String
    The provider-assigned unique ID for this managed resource.
    satisfiesPzi Boolean
    Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.

    Supporting Types

    AutoscalingSettings, AutoscalingSettingsArgs

    Algorithm Pulumi.GoogleNative.Dataflow.V1b3.AutoscalingSettingsAlgorithm
    The algorithm to use for autoscaling.
    MaxNumWorkers int
    The maximum number of workers to cap scaling at.
    Algorithm AutoscalingSettingsAlgorithm
    The algorithm to use for autoscaling.
    MaxNumWorkers int
    The maximum number of workers to cap scaling at.
    algorithm AutoscalingSettingsAlgorithm
    The algorithm to use for autoscaling.
    maxNumWorkers Integer
    The maximum number of workers to cap scaling at.
    algorithm AutoscalingSettingsAlgorithm
    The algorithm to use for autoscaling.
    maxNumWorkers number
    The maximum number of workers to cap scaling at.
    algorithm AutoscalingSettingsAlgorithm
    The algorithm to use for autoscaling.
    max_num_workers int
    The maximum number of workers to cap scaling at.
    algorithm "AUTOSCALING_ALGORITHM_UNKNOWN" | "AUTOSCALING_ALGORITHM_NONE" | "AUTOSCALING_ALGORITHM_BASIC"
    The algorithm to use for autoscaling.
    maxNumWorkers Number
    The maximum number of workers to cap scaling at.

    AutoscalingSettingsAlgorithm, AutoscalingSettingsAlgorithmArgs

    AutoscalingAlgorithmUnknown
    AUTOSCALING_ALGORITHM_UNKNOWN: The algorithm is unknown, or unspecified.
    AutoscalingAlgorithmNone
    AUTOSCALING_ALGORITHM_NONE: Disable autoscaling.
    AutoscalingAlgorithmBasic
    AUTOSCALING_ALGORITHM_BASIC: Increase worker count over time to reduce job execution time.
    AutoscalingSettingsAlgorithmAutoscalingAlgorithmUnknown
    AUTOSCALING_ALGORITHM_UNKNOWN: The algorithm is unknown, or unspecified.
    AutoscalingSettingsAlgorithmAutoscalingAlgorithmNone
    AUTOSCALING_ALGORITHM_NONE: Disable autoscaling.
    AutoscalingSettingsAlgorithmAutoscalingAlgorithmBasic
    AUTOSCALING_ALGORITHM_BASIC: Increase worker count over time to reduce job execution time.
    AutoscalingAlgorithmUnknown
    AUTOSCALING_ALGORITHM_UNKNOWN: The algorithm is unknown, or unspecified.
    AutoscalingAlgorithmNone
    AUTOSCALING_ALGORITHM_NONE: Disable autoscaling.
    AutoscalingAlgorithmBasic
    AUTOSCALING_ALGORITHM_BASIC: Increase worker count over time to reduce job execution time.
    AutoscalingAlgorithmUnknown
    AUTOSCALING_ALGORITHM_UNKNOWN: The algorithm is unknown, or unspecified.
    AutoscalingAlgorithmNone
    AUTOSCALING_ALGORITHM_NONE: Disable autoscaling.
    AutoscalingAlgorithmBasic
    AUTOSCALING_ALGORITHM_BASIC: Increase worker count over time to reduce job execution time.
    AUTOSCALING_ALGORITHM_UNKNOWN
    AUTOSCALING_ALGORITHM_UNKNOWN: The algorithm is unknown, or unspecified.
    AUTOSCALING_ALGORITHM_NONE
    AUTOSCALING_ALGORITHM_NONE: Disable autoscaling.
    AUTOSCALING_ALGORITHM_BASIC
    AUTOSCALING_ALGORITHM_BASIC: Increase worker count over time to reduce job execution time.
    "AUTOSCALING_ALGORITHM_UNKNOWN"
    AUTOSCALING_ALGORITHM_UNKNOWN: The algorithm is unknown, or unspecified.
    "AUTOSCALING_ALGORITHM_NONE"
    AUTOSCALING_ALGORITHM_NONE: Disable autoscaling.
    "AUTOSCALING_ALGORITHM_BASIC"
    AUTOSCALING_ALGORITHM_BASIC: Increase worker count over time to reduce job execution time.

    AutoscalingSettingsResponse, AutoscalingSettingsResponseArgs

    Algorithm string
    The algorithm to use for autoscaling.
    MaxNumWorkers int
    The maximum number of workers to cap scaling at.
    Algorithm string
    The algorithm to use for autoscaling.
    MaxNumWorkers int
    The maximum number of workers to cap scaling at.
    algorithm String
    The algorithm to use for autoscaling.
    maxNumWorkers Integer
    The maximum number of workers to cap scaling at.
    algorithm string
    The algorithm to use for autoscaling.
    maxNumWorkers number
    The maximum number of workers to cap scaling at.
    algorithm str
    The algorithm to use for autoscaling.
    max_num_workers int
    The maximum number of workers to cap scaling at.
    algorithm String
    The algorithm to use for autoscaling.
    maxNumWorkers Number
    The maximum number of workers to cap scaling at.

    BigQueryIODetails, BigQueryIODetailsArgs

    Dataset string
    Dataset accessed in the connection.
    Project string
    Project accessed in the connection.
    Query string
    Query used to access data in the connection.
    Table string
    Table accessed in the connection.
    Dataset string
    Dataset accessed in the connection.
    Project string
    Project accessed in the connection.
    Query string
    Query used to access data in the connection.
    Table string
    Table accessed in the connection.
    dataset String
    Dataset accessed in the connection.
    project String
    Project accessed in the connection.
    query String
    Query used to access data in the connection.
    table String
    Table accessed in the connection.
    dataset string
    Dataset accessed in the connection.
    project string
    Project accessed in the connection.
    query string
    Query used to access data in the connection.
    table string
    Table accessed in the connection.
    dataset str
    Dataset accessed in the connection.
    project str
    Project accessed in the connection.
    query str
    Query used to access data in the connection.
    table str
    Table accessed in the connection.
    dataset String
    Dataset accessed in the connection.
    project String
    Project accessed in the connection.
    query String
    Query used to access data in the connection.
    table String
    Table accessed in the connection.

    BigQueryIODetailsResponse, BigQueryIODetailsResponseArgs

    Dataset string
    Dataset accessed in the connection.
    Project string
    Project accessed in the connection.
    Query string
    Query used to access data in the connection.
    Table string
    Table accessed in the connection.
    Dataset string
    Dataset accessed in the connection.
    Project string
    Project accessed in the connection.
    Query string
    Query used to access data in the connection.
    Table string
    Table accessed in the connection.
    dataset String
    Dataset accessed in the connection.
    project String
    Project accessed in the connection.
    query String
    Query used to access data in the connection.
    table String
    Table accessed in the connection.
    dataset string
    Dataset accessed in the connection.
    project string
    Project accessed in the connection.
    query string
    Query used to access data in the connection.
    table string
    Table accessed in the connection.
    dataset str
    Dataset accessed in the connection.
    project str
    Project accessed in the connection.
    query str
    Query used to access data in the connection.
    table str
    Table accessed in the connection.
    dataset String
    Dataset accessed in the connection.
    project String
    Project accessed in the connection.
    query String
    Query used to access data in the connection.
    table String
    Table accessed in the connection.

    BigTableIODetails, BigTableIODetailsArgs

    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    TableId string
    TableId accessed in the connection.
    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    TableId string
    TableId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.
    tableId String
    TableId accessed in the connection.
    instanceId string
    InstanceId accessed in the connection.
    project string
    ProjectId accessed in the connection.
    tableId string
    TableId accessed in the connection.
    instance_id str
    InstanceId accessed in the connection.
    project str
    ProjectId accessed in the connection.
    table_id str
    TableId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.
    tableId String
    TableId accessed in the connection.

    BigTableIODetailsResponse, BigTableIODetailsResponseArgs

    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    TableId string
    TableId accessed in the connection.
    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    TableId string
    TableId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.
    tableId String
    TableId accessed in the connection.
    instanceId string
    InstanceId accessed in the connection.
    project string
    ProjectId accessed in the connection.
    tableId string
    TableId accessed in the connection.
    instance_id str
    InstanceId accessed in the connection.
    project str
    ProjectId accessed in the connection.
    table_id str
    TableId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.
    tableId String
    TableId accessed in the connection.

    ComponentSource, ComponentSourceArgs

    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.
    name string
    Dataflow service generated name for this source.
    originalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    userName string
    Human-readable name for this transform; may be user or system generated.
    name str
    Dataflow service generated name for this source.
    original_transform_or_collection str
    User name for the original user transform or collection with which this source is most closely associated.
    user_name str
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.

    ComponentSourceResponse, ComponentSourceResponseArgs

    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.
    name string
    Dataflow service generated name for this source.
    originalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    userName string
    Human-readable name for this transform; may be user or system generated.
    name str
    Dataflow service generated name for this source.
    original_transform_or_collection str
    User name for the original user transform or collection with which this source is most closely associated.
    user_name str
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.

    ComponentTransform, ComponentTransformArgs

    Name string
    Dataflow service generated name for this source.
    OriginalTransform string
    User name for the original user transform with which this transform is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    Name string
    Dataflow service generated name for this source.
    OriginalTransform string
    User name for the original user transform with which this transform is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransform String
    User name for the original user transform with which this transform is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.
    name string
    Dataflow service generated name for this source.
    originalTransform string
    User name for the original user transform with which this transform is most closely associated.
    userName string
    Human-readable name for this transform; may be user or system generated.
    name str
    Dataflow service generated name for this source.
    original_transform str
    User name for the original user transform with which this transform is most closely associated.
    user_name str
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransform String
    User name for the original user transform with which this transform is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.

    ComponentTransformResponse, ComponentTransformResponseArgs

    Name string
    Dataflow service generated name for this source.
    OriginalTransform string
    User name for the original user transform with which this transform is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    Name string
    Dataflow service generated name for this source.
    OriginalTransform string
    User name for the original user transform with which this transform is most closely associated.
    UserName string
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransform String
    User name for the original user transform with which this transform is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.
    name string
    Dataflow service generated name for this source.
    originalTransform string
    User name for the original user transform with which this transform is most closely associated.
    userName string
    Human-readable name for this transform; may be user or system generated.
    name str
    Dataflow service generated name for this source.
    original_transform str
    User name for the original user transform with which this transform is most closely associated.
    user_name str
    Human-readable name for this transform; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransform String
    User name for the original user transform with which this transform is most closely associated.
    userName String
    Human-readable name for this transform; may be user or system generated.

    DataSamplingConfig, DataSamplingConfigArgs

    Behaviors List<Pulumi.GoogleNative.Dataflow.V1b3.DataSamplingConfigBehaviorsItem>
    List of sampling behaviors to enable. For example, behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions, while behaviors = [ALWAYS_ON, EXCEPTIONS] enables both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    Behaviors []DataSamplingConfigBehaviorsItem
    List of sampling behaviors to enable. For example, behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions, while behaviors = [ALWAYS_ON, EXCEPTIONS] enables both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors List<DataSamplingConfigBehaviorsItem>
    List of sampling behaviors to enable. For example, behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions, while behaviors = [ALWAYS_ON, EXCEPTIONS] enables both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors DataSamplingConfigBehaviorsItem[]
    List of sampling behaviors to enable. For example, behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions, while behaviors = [ALWAYS_ON, EXCEPTIONS] enables both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors Sequence[DataSamplingConfigBehaviorsItem]
    List of sampling behaviors to enable. For example, behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions, while behaviors = [ALWAYS_ON, EXCEPTIONS] enables both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors List<"DATA_SAMPLING_BEHAVIOR_UNSPECIFIED" | "DISABLED" | "ALWAYS_ON" | "EXCEPTIONS">
    List of sampling behaviors to enable. For example, behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions, while behaviors = [ALWAYS_ON, EXCEPTIONS] enables both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
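    The precedence rules in the description (DISABLED wins over everything, the unspecified value is a no-op sentinel, ordering is irrelevant) reduce to a small function. A sketch under those stated rules only:

    ```python
    def effective_behaviors(behaviors):
        """Reduce a behaviors list to the set that actually takes effect,
        following the documented DataSamplingConfig semantics."""
        requested = set(behaviors)           # ordering does not matter
        if "DISABLED" in requested:
            return set()                     # DISABLED overrides all others
        requested.discard("DATA_SAMPLING_BEHAVIOR_UNSPECIFIED")  # no-op sentinel
        return requested
    ```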

    DataSamplingConfigBehaviorsItem, DataSamplingConfigBehaviorsItemArgs

    DataSamplingBehaviorUnspecified
    DATA_SAMPLING_BEHAVIOR_UNSPECIFIED: If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
    Disabled
    DISABLED: When given, disables element sampling. Has same behavior as not setting the behavior.
    AlwaysOn
    ALWAYS_ON: When given, enables sampling in-flight from all PCollections.
    Exceptions
    EXCEPTIONS: When given, enables sampling input elements when a user-defined DoFn causes an exception.
    DataSamplingConfigBehaviorsItemDataSamplingBehaviorUnspecified
    DATA_SAMPLING_BEHAVIOR_UNSPECIFIED: If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
    DataSamplingConfigBehaviorsItemDisabled
    DISABLED: When given, disables element sampling. Has same behavior as not setting the behavior.
    DataSamplingConfigBehaviorsItemAlwaysOn
    ALWAYS_ON: When given, enables sampling in-flight from all PCollections.
    DataSamplingConfigBehaviorsItemExceptions
    EXCEPTIONS: When given, enables sampling input elements when a user-defined DoFn causes an exception.
    DataSamplingBehaviorUnspecified
    DATA_SAMPLING_BEHAVIOR_UNSPECIFIED: If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
    Disabled
    DISABLED: When given, disables element sampling. Has same behavior as not setting the behavior.
    AlwaysOn
    ALWAYS_ON: When given, enables sampling in-flight from all PCollections.
    Exceptions
    EXCEPTIONS: When given, enables sampling input elements when a user-defined DoFn causes an exception.
    DataSamplingBehaviorUnspecified
    DATA_SAMPLING_BEHAVIOR_UNSPECIFIED: If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
    Disabled
    DISABLED: When given, disables element sampling. Has same behavior as not setting the behavior.
    AlwaysOn
    ALWAYS_ON: When given, enables sampling in-flight from all PCollections.
    Exceptions
    EXCEPTIONS: When given, enables sampling input elements when a user-defined DoFn causes an exception.
    DATA_SAMPLING_BEHAVIOR_UNSPECIFIED
    DATA_SAMPLING_BEHAVIOR_UNSPECIFIED: If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
    DISABLED
    DISABLED: When given, disables element sampling. Has same behavior as not setting the behavior.
    ALWAYS_ON
    ALWAYS_ON: When given, enables sampling in-flight from all PCollections.
    EXCEPTIONS
    EXCEPTIONS: When given, enables sampling input elements when a user-defined DoFn causes an exception.
    "DATA_SAMPLING_BEHAVIOR_UNSPECIFIED"
    DATA_SAMPLING_BEHAVIOR_UNSPECIFIED: If given, has no effect on sampling behavior. Used as an unknown or unset sentinel value.
    "DISABLED"
    DISABLED: When given, disables element sampling. Has same behavior as not setting the behavior.
    "ALWAYS_ON"
    ALWAYS_ON: When given, enables sampling in-flight from all PCollections.
    "EXCEPTIONS"
    EXCEPTIONS: When given, enables sampling input elements when a user-defined DoFn causes an exception.

    DataSamplingConfigResponse, DataSamplingConfigResponseArgs

    Behaviors List<string>
    List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    Behaviors []string
    List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors List<String>
    List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors string[]
    List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors Sequence[str]
    List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
    behaviors List<String>
    List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to enable both periodic sampling and exception sampling. If DISABLED is in the list, sampling is disabled and the other given behaviors are ignored. Ordering does not matter.
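The DISABLED-overrides and ordering rules above can be sketched in plain Python. The dict mirrors the Property Map form of DataSamplingConfig; the helper function is an illustration of the documented semantics, not the service's actual implementation:

```python
# DataSamplingConfig in its Property Map shape (keys follow the table above).
config = {"behaviors": ["ALWAYS_ON", "EXCEPTIONS"]}

def effective_behaviors(behaviors):
    """Illustrates the documented resolution rules: DISABLED wins over
    everything else, and ordering/duplicates do not matter (hence the set)."""
    if "DISABLED" in behaviors:
        return set()
    # The unspecified sentinel has no effect on sampling behavior.
    return set(behaviors) - {"DATA_SAMPLING_BEHAVIOR_UNSPECIFIED"}

effective_behaviors(["ALWAYS_ON", "DISABLED"])  # → set()
effective_behaviors(config["behaviors"])        # → {"ALWAYS_ON", "EXCEPTIONS"}
```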

    DatastoreIODetails, DatastoreIODetailsArgs

    Namespace string
    Namespace used in the connection.
    Project string
    ProjectId accessed in the connection.
    Namespace string
    Namespace used in the connection.
    Project string
    ProjectId accessed in the connection.
    namespace String
    Namespace used in the connection.
    project String
    ProjectId accessed in the connection.
    namespace string
    Namespace used in the connection.
    project string
    ProjectId accessed in the connection.
    namespace str
    Namespace used in the connection.
    project str
    ProjectId accessed in the connection.
    namespace String
    Namespace used in the connection.
    project String
    ProjectId accessed in the connection.

    DatastoreIODetailsResponse, DatastoreIODetailsResponseArgs

    Namespace string
    Namespace used in the connection.
    Project string
    ProjectId accessed in the connection.
    Namespace string
    Namespace used in the connection.
    Project string
    ProjectId accessed in the connection.
    namespace String
    Namespace used in the connection.
    project String
    ProjectId accessed in the connection.
    namespace string
    Namespace used in the connection.
    project string
    ProjectId accessed in the connection.
    namespace str
    Namespace used in the connection.
    project str
    ProjectId accessed in the connection.
    namespace String
    Namespace used in the connection.
    project String
    ProjectId accessed in the connection.

    DebugOptions, DebugOptionsArgs

    DataSampling Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DataSamplingConfig
    Configuration options for sampling elements from a running pipeline.
    EnableHotKeyLogging bool
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    DataSampling DataSamplingConfig
    Configuration options for sampling elements from a running pipeline.
    EnableHotKeyLogging bool
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    dataSampling DataSamplingConfig
    Configuration options for sampling elements from a running pipeline.
    enableHotKeyLogging Boolean
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    dataSampling DataSamplingConfig
    Configuration options for sampling elements from a running pipeline.
    enableHotKeyLogging boolean
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    data_sampling DataSamplingConfig
    Configuration options for sampling elements from a running pipeline.
    enable_hot_key_logging bool
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    dataSampling Property Map
    Configuration options for sampling elements from a running pipeline.
    enableHotKeyLogging Boolean
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
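Taken together, the two DebugOptions fields can be expressed as a small Property Map sketch (a plain dict; keys follow the table above, the values are illustrative):

```python
# DebugOptions in Property Map form: log literal hot keys to Cloud Logging
# and sample elements only when a user-defined DoFn throws an exception.
debug_options = {
    "enableHotKeyLogging": True,
    "dataSampling": {"behaviors": ["EXCEPTIONS"]},
}
```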

    DebugOptionsResponse, DebugOptionsResponseArgs

    DataSampling Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DataSamplingConfigResponse
    Configuration options for sampling elements from a running pipeline.
    EnableHotKeyLogging bool
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    DataSampling DataSamplingConfigResponse
    Configuration options for sampling elements from a running pipeline.
    EnableHotKeyLogging bool
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    dataSampling DataSamplingConfigResponse
    Configuration options for sampling elements from a running pipeline.
    enableHotKeyLogging Boolean
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    dataSampling DataSamplingConfigResponse
    Configuration options for sampling elements from a running pipeline.
    enableHotKeyLogging boolean
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    data_sampling DataSamplingConfigResponse
    Configuration options for sampling elements from a running pipeline.
    enable_hot_key_logging bool
    When true, enables the logging of the literal hot key to the user's Cloud Logging.
    dataSampling Property Map
    Configuration options for sampling elements from a running pipeline.
    enableHotKeyLogging Boolean
    When true, enables the logging of the literal hot key to the user's Cloud Logging.

    Disk, DiskArgs

    DiskType string
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    MountPoint string
    Directory in a VM where disk is mounted.
    SizeGb int
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    DiskType string
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    MountPoint string
    Directory in a VM where disk is mounted.
    SizeGb int
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskType String
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mountPoint String
    Directory in a VM where disk is mounted.
    sizeGb Integer
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskType string
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mountPoint string
    Directory in a VM where disk is mounted.
    sizeGb number
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    disk_type str
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mount_point str
    Directory in a VM where disk is mounted.
    size_gb int
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskType String
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mountPoint String
    Directory in a VM where disk is mounted.
    sizeGb Number
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
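A Disk entry in Property Map form might look like the sketch below. The project ("my-project"), zone, and mount point are hypothetical placeholders; the diskType value follows the fully-qualified resource-name pattern described above:

```python
# A Disk entry as a plain dict (keys follow the table above).
disk = {
    # Fully-qualified Compute Engine disk type; project and zone are
    # hypothetical placeholders.
    "diskType": (
        "compute.googleapis.com/projects/my-project"
        "/zones/us-central1-b/diskTypes/pd-ssd"
    ),
    "mountPoint": "/mnt/dataflow",  # hypothetical mount directory in the VM
    "sizeGb": 50,  # zero or unspecified lets the service choose a default
}
```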

    DiskResponse, DiskResponseArgs

    DiskType string
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    MountPoint string
    Directory in a VM where disk is mounted.
    SizeGb int
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    DiskType string
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    MountPoint string
    Directory in a VM where disk is mounted.
    SizeGb int
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskType String
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mountPoint String
    Directory in a VM where disk is mounted.
    sizeGb Integer
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskType string
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mountPoint string
    Directory in a VM where disk is mounted.
    sizeGb number
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    disk_type str
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mount_point str
    Directory in a VM where disk is mounted.
    size_gb int
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskType String
    Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine Disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
    mountPoint String
    Directory in a VM where disk is mounted.
    sizeGb Number
    Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.

    DisplayData, DisplayDataArgs

    BoolValue bool
    Contains value if the data is of a boolean type.
    DurationValue string
    Contains value if the data is of duration type.
    FloatValue double
    Contains value if the data is of float type.
    Int64Value string
    Contains value if the data is of int64 type.
    JavaClassValue string
    Contains value if the data is of java class type.
    Key string
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    Label string
    An optional label to display in a dax UI for the element.
    Namespace string
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    ShortStrValue string
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    StrValue string
    Contains value if the data is of string type.
    TimestampValue string
    Contains value if the data is of timestamp type.
    Url string
    An optional full URL.
    BoolValue bool
    Contains value if the data is of a boolean type.
    DurationValue string
    Contains value if the data is of duration type.
    FloatValue float64
    Contains value if the data is of float type.
    Int64Value string
    Contains value if the data is of int64 type.
    JavaClassValue string
    Contains value if the data is of java class type.
    Key string
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    Label string
    An optional label to display in a dax UI for the element.
    Namespace string
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    ShortStrValue string
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    StrValue string
    Contains value if the data is of string type.
    TimestampValue string
    Contains value if the data is of timestamp type.
    Url string
    An optional full URL.
    boolValue Boolean
    Contains value if the data is of a boolean type.
    durationValue String
    Contains value if the data is of duration type.
    floatValue Double
    Contains value if the data is of float type.
    int64Value String
    Contains value if the data is of int64 type.
    javaClassValue String
    Contains value if the data is of java class type.
    key String
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label String
    An optional label to display in a dax UI for the element.
    namespace String
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    shortStrValue String
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    strValue String
    Contains value if the data is of string type.
    timestampValue String
    Contains value if the data is of timestamp type.
    url String
    An optional full URL.
    boolValue boolean
    Contains value if the data is of a boolean type.
    durationValue string
    Contains value if the data is of duration type.
    floatValue number
    Contains value if the data is of float type.
    int64Value string
    Contains value if the data is of int64 type.
    javaClassValue string
    Contains value if the data is of java class type.
    key string
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label string
    An optional label to display in a dax UI for the element.
    namespace string
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    shortStrValue string
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    strValue string
    Contains value if the data is of string type.
    timestampValue string
    Contains value if the data is of timestamp type.
    url string
    An optional full URL.
    bool_value bool
    Contains value if the data is of a boolean type.
    duration_value str
    Contains value if the data is of duration type.
    float_value float
    Contains value if the data is of float type.
    int64_value str
    Contains value if the data is of int64 type.
    java_class_value str
    Contains value if the data is of java class type.
    key str
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label str
    An optional label to display in a dax UI for the element.
    namespace str
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    short_str_value str
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    str_value str
    Contains value if the data is of string type.
    timestamp_value str
    Contains value if the data is of timestamp type.
    url str
    An optional full URL.
    boolValue Boolean
    Contains value if the data is of a boolean type.
    durationValue String
    Contains value if the data is of duration type.
    floatValue Number
    Contains value if the data is of float type.
    int64Value String
    Contains value if the data is of int64 type.
    javaClassValue String
    Contains value if the data is of java class type.
    key String
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label String
    An optional label to display in a dax UI for the element.
    namespace String
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    shortStrValue String
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    strValue String
    Contains value if the data is of string type.
    timestampValue String
    Contains value if the data is of timestamp type.
    url String
    An optional full URL.
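The short_str_value convention described above (a fully-qualified Java class paired with a shorter display string) can be sketched in a few lines of Python; the derivation shown is only an illustration of the documented example, not how the service computes it:

```python
# A DisplayData entry for a java-class value, mirroring the example in
# the short_str_value description above.
java_class_value = "com.mypackage.MyDoFn"
short_str_value = java_class_value.rsplit(".", 1)[-1]  # → "MyDoFn"

display_data = {
    "key": "fn",                       # label for the display data
    "namespace": "com.mypackage",      # namespace defining the data
    "javaClassValue": java_class_value,
    "shortStrValue": short_str_value,  # shown inline; full name as tooltip
}
```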

    DisplayDataResponse, DisplayDataResponseArgs

    BoolValue bool
    Contains value if the data is of a boolean type.
    DurationValue string
    Contains value if the data is of duration type.
    FloatValue double
    Contains value if the data is of float type.
    Int64Value string
    Contains value if the data is of int64 type.
    JavaClassValue string
    Contains value if the data is of java class type.
    Key string
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    Label string
    An optional label to display in a dax UI for the element.
    Namespace string
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    ShortStrValue string
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    StrValue string
    Contains value if the data is of string type.
    TimestampValue string
    Contains value if the data is of timestamp type.
    Url string
    An optional full URL.
    BoolValue bool
    Contains value if the data is of a boolean type.
    DurationValue string
    Contains value if the data is of duration type.
    FloatValue float64
    Contains value if the data is of float type.
    Int64Value string
    Contains value if the data is of int64 type.
    JavaClassValue string
    Contains value if the data is of java class type.
    Key string
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    Label string
    An optional label to display in a dax UI for the element.
    Namespace string
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    ShortStrValue string
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    StrValue string
    Contains value if the data is of string type.
    TimestampValue string
    Contains value if the data is of timestamp type.
    Url string
    An optional full URL.
    boolValue Boolean
    Contains value if the data is of a boolean type.
    durationValue String
    Contains value if the data is of duration type.
    floatValue Double
    Contains value if the data is of float type.
    int64Value String
    Contains value if the data is of int64 type.
    javaClassValue String
    Contains value if the data is of java class type.
    key String
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label String
    An optional label to display in a dax UI for the element.
    namespace String
    The namespace for the key. This is usually a class name or programming language namespace (e.g. a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    shortStrValue String
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    strValue String
    Contains value if the data is of string type.
    timestampValue String
    Contains value if the data is of timestamp type.
    url String
    An optional full URL.
    boolValue boolean
    Contains value if the data is of a boolean type.
    durationValue string
    Contains value if the data is of duration type.
    floatValue number
    Contains value if the data is of float type.
    int64Value string
    Contains value if the data is of int64 type.
    javaClassValue string
    Contains value if the data is of java class type.
    key string
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label string
    An optional label to display in a dax UI for the element.
    namespace string
    The namespace for the key. This is usually a class name or programming language namespace (e.g., a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    shortStrValue string
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    strValue string
    Contains value if the data is of string type.
    timestampValue string
    Contains value if the data is of timestamp type.
    url string
    An optional full URL.
    bool_value bool
    Contains value if the data is of a boolean type.
    duration_value str
    Contains value if the data is of duration type.
    float_value float
    Contains value if the data is of float type.
    int64_value str
    Contains value if the data is of int64 type.
    java_class_value str
    Contains value if the data is of java class type.
    key str
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label str
    An optional label to display in a dax UI for the element.
    namespace str
    The namespace for the key. This is usually a class name or programming language namespace (e.g., a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    short_str_value str
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    str_value str
    Contains value if the data is of string type.
    timestamp_value str
    Contains value if the data is of timestamp type.
    url str
    An optional full URL.
    boolValue Boolean
    Contains value if the data is of a boolean type.
    durationValue String
    Contains value if the data is of duration type.
    floatValue Number
    Contains value if the data is of float type.
    int64Value String
    Contains value if the data is of int64 type.
    javaClassValue String
    Contains value if the data is of java class type.
    key String
    The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
    label String
    An optional label to display in a dax UI for the element.
    namespace String
    The namespace for the key. This is usually a class name or programming language namespace (e.g., a Python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
    shortStrValue String
    A possible additional shorter value to display. For example, a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name_value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
    strValue String
    Contains value if the data is of string type.
    timestampValue String
    Contains value if the data is of timestamp type.
    url String
    An optional full URL.
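
    The shortStrValue behavior documented above can be sketched with a hypothetical display-data entry. The keys mirror the camelCase properties listed above; the namespace and class name are made-up example values, not output of the provider:

    ```python
    # Hypothetical DisplayData entry. A dax monitoring UI would show
    # shortStrValue as the label and use the full javaClassValue as a tooltip.
    display_datum = {
        "namespace": "com.mypackage",              # defines the display data
        "key": "myDoFn",                           # label in the monitoring UI
        "javaClassValue": "com.mypackage.MyDoFn",  # full value, shown as tooltip
        "shortStrValue": "MyDoFn",                 # shorter value, shown inline
    }

    # Per the docs, the short value is the last segment of the qualified name.
    assert display_datum["javaClassValue"].split(".")[-1] == display_datum["shortStrValue"]
    ```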

    Environment, EnvironmentArgs

    ClusterManagerApiService string
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    Dataset string
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    DebugOptions Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DebugOptions
    Any debugging options to be supplied to the job.
    Experiments List<string>
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    FlexResourceSchedulingGoal Pulumi.GoogleNative.Dataflow.V1b3.EnvironmentFlexResourceSchedulingGoal
    Which Flexible Resource Scheduling mode to run in.
    InternalExperiments Dictionary<string, string>
    Experimental settings.
    SdkPipelineOptions Dictionary<string, string>
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    ServiceAccountEmail string
    Identity to run virtual machines as. Defaults to the default account.
    ServiceKmsKeyName string
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    ServiceOptions List<string>
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    UserAgent Dictionary<string, string>
    A description of the process that generated the request.
    Version Dictionary<string, string>
    A structure describing which components and their versions of the service are required in order to run the job.
    WorkerPools List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerPool>
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    WorkerRegion string
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    WorkerZone string
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    ClusterManagerApiService string
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    Dataset string
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    DebugOptions DebugOptions
    Any debugging options to be supplied to the job.
    Experiments []string
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    FlexResourceSchedulingGoal EnvironmentFlexResourceSchedulingGoal
    Which Flexible Resource Scheduling mode to run in.
    InternalExperiments map[string]string
    Experimental settings.
    SdkPipelineOptions map[string]string
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    ServiceAccountEmail string
    Identity to run virtual machines as. Defaults to the default account.
    ServiceKmsKeyName string
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    ServiceOptions []string
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    UserAgent map[string]string
    A description of the process that generated the request.
    Version map[string]string
    A structure describing which components and their versions of the service are required in order to run the job.
    WorkerPools []WorkerPool
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    WorkerRegion string
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    WorkerZone string
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    clusterManagerApiService String
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset String
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debugOptions DebugOptions
    Any debugging options to be supplied to the job.
    experiments List<String>
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flexResourceSchedulingGoal EnvironmentFlexResourceSchedulingGoal
    Which Flexible Resource Scheduling mode to run in.
    internalExperiments Map<String,String>
    Experimental settings.
    sdkPipelineOptions Map<String,String>
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    serviceAccountEmail String
    Identity to run virtual machines as. Defaults to the default account.
    serviceKmsKeyName String
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    serviceOptions List<String>
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    userAgent Map<String,String>
    A description of the process that generated the request.
    version Map<String,String>
    A structure describing which components and their versions of the service are required in order to run the job.
    workerPools List<WorkerPool>
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    workerRegion String
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    workerZone String
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    clusterManagerApiService string
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset string
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debugOptions DebugOptions
    Any debugging options to be supplied to the job.
    experiments string[]
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flexResourceSchedulingGoal EnvironmentFlexResourceSchedulingGoal
    Which Flexible Resource Scheduling mode to run in.
    internalExperiments {[key: string]: string}
    Experimental settings.
    sdkPipelineOptions {[key: string]: string}
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    serviceAccountEmail string
    Identity to run virtual machines as. Defaults to the default account.
    serviceKmsKeyName string
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    serviceOptions string[]
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    tempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    userAgent {[key: string]: string}
    A description of the process that generated the request.
    version {[key: string]: string}
    A structure describing which components and their versions of the service are required in order to run the job.
    workerPools WorkerPool[]
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    workerRegion string
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    workerZone string
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    cluster_manager_api_service str
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset str
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debug_options DebugOptions
    Any debugging options to be supplied to the job.
    experiments Sequence[str]
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flex_resource_scheduling_goal EnvironmentFlexResourceSchedulingGoal
    Which Flexible Resource Scheduling mode to run in.
    internal_experiments Mapping[str, str]
    Experimental settings.
    sdk_pipeline_options Mapping[str, str]
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    service_account_email str
    Identity to run virtual machines as. Defaults to the default account.
    service_kms_key_name str
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    service_options Sequence[str]
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    temp_storage_prefix str
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    user_agent Mapping[str, str]
    A description of the process that generated the request.
    version Mapping[str, str]
    A structure describing which components and their versions of the service are required in order to run the job.
    worker_pools Sequence[WorkerPool]
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    worker_region str
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    worker_zone str
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    clusterManagerApiService String
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset String
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debugOptions Property Map
    Any debugging options to be supplied to the job.
    experiments List<String>
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flexResourceSchedulingGoal "FLEXRS_UNSPECIFIED" | "FLEXRS_SPEED_OPTIMIZED" | "FLEXRS_COST_OPTIMIZED"
    Which Flexible Resource Scheduling mode to run in.
    internalExperiments Map<String>
    Experimental settings.
    sdkPipelineOptions Map<String>
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    serviceAccountEmail String
    Identity to run virtual machines as. Defaults to the default account.
    serviceKmsKeyName String
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    serviceOptions List<String>
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    userAgent Map<String>
    A description of the process that generated the request.
    version Map<String>
    A structure describing which components and their versions of the service are required in order to run the job.
    workerPools List<Property Map>
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    workerRegion String
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    workerZone String
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
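
    The workerRegion / workerZone mutual exclusion described above can be enforced before a request is built. A minimal sketch in Python, assembling the environment as a plain dict in the API's camelCase JSON shape; make_environment is a hypothetical helper for illustration, not part of the provider:

    ```python
    def make_environment(service_account_email=None,
                         temp_storage_prefix=None,
                         worker_region=None,
                         worker_zone=None):
        """Build a minimal Dataflow `environment` object (camelCase JSON shape)."""
        # Per the field docs, worker_region and worker_zone are mutually exclusive.
        if worker_region and worker_zone:
            raise ValueError("worker_region and worker_zone are mutually exclusive")
        env = {}
        if service_account_email:
            env["serviceAccountEmail"] = service_account_email
        if temp_storage_prefix:
            env["tempStoragePrefix"] = temp_storage_prefix
        if worker_region:
            env["workerRegion"] = worker_region
        if worker_zone:
            env["workerZone"] = worker_zone
        return env

    # Example values below are placeholders.
    env = make_environment(
        service_account_email="dataflow-sa@my-project.iam.gserviceaccount.com",
        temp_storage_prefix="storage.googleapis.com/my-bucket/tmp",
        worker_region="us-west1",
    )
    ```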

    EnvironmentFlexResourceSchedulingGoal, EnvironmentFlexResourceSchedulingGoalArgs

    FlexrsUnspecified
    FLEXRS_UNSPECIFIED. Run in the default mode.
    FlexrsSpeedOptimized
    FLEXRS_SPEED_OPTIMIZED. Optimize for lower execution time.
    FlexrsCostOptimized
    FLEXRS_COST_OPTIMIZED. Optimize for lower cost.
    EnvironmentFlexResourceSchedulingGoalFlexrsUnspecified
    FLEXRS_UNSPECIFIED. Run in the default mode.
    EnvironmentFlexResourceSchedulingGoalFlexrsSpeedOptimized
    FLEXRS_SPEED_OPTIMIZED. Optimize for lower execution time.
    EnvironmentFlexResourceSchedulingGoalFlexrsCostOptimized
    FLEXRS_COST_OPTIMIZED. Optimize for lower cost.
    FlexrsUnspecified
    FLEXRS_UNSPECIFIED. Run in the default mode.
    FlexrsSpeedOptimized
    FLEXRS_SPEED_OPTIMIZED. Optimize for lower execution time.
    FlexrsCostOptimized
    FLEXRS_COST_OPTIMIZED. Optimize for lower cost.
    FlexrsUnspecified
    FLEXRS_UNSPECIFIED. Run in the default mode.
    FlexrsSpeedOptimized
    FLEXRS_SPEED_OPTIMIZED. Optimize for lower execution time.
    FlexrsCostOptimized
    FLEXRS_COST_OPTIMIZED. Optimize for lower cost.
    FLEXRS_UNSPECIFIED
    FLEXRS_UNSPECIFIED. Run in the default mode.
    FLEXRS_SPEED_OPTIMIZED
    FLEXRS_SPEED_OPTIMIZED. Optimize for lower execution time.
    FLEXRS_COST_OPTIMIZED
    FLEXRS_COST_OPTIMIZED. Optimize for lower cost.
    "FLEXRS_UNSPECIFIED"
    FLEXRS_UNSPECIFIED. Run in the default mode.
    "FLEXRS_SPEED_OPTIMIZED"
    FLEXRS_SPEED_OPTIMIZED. Optimize for lower execution time.
    "FLEXRS_COST_OPTIMIZED"
    FLEXRS_COST_OPTIMIZED. Optimize for lower cost.

    EnvironmentResponse, EnvironmentResponseArgs

    ClusterManagerApiService string
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    Dataset string
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    DebugOptions Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DebugOptionsResponse
    Any debugging options to be supplied to the job.
    Experiments List<string>
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    FlexResourceSchedulingGoal string
    Which Flexible Resource Scheduling mode to run in.
    InternalExperiments Dictionary<string, string>
    Experimental settings.
    SdkPipelineOptions Dictionary<string, string>
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    ServiceAccountEmail string
    Identity to run virtual machines as. Defaults to the default account.
    ServiceKmsKeyName string
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    ServiceOptions List<string>
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    ShuffleMode string
    The shuffle mode used for the job.
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    UseStreamingEngineResourceBasedBilling bool
    Whether the job uses the new streaming engine billing model based on resource usage.
    UserAgent Dictionary<string, string>
    A description of the process that generated the request.
    Version Dictionary<string, string>
    A structure describing which components and their versions of the service are required in order to run the job.
    WorkerPools List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerPoolResponse>
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    WorkerRegion string
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
    WorkerZone string
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    ClusterManagerApiService string
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    Dataset string
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    DebugOptions DebugOptionsResponse
    Any debugging options to be supplied to the job.
    Experiments []string
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    FlexResourceSchedulingGoal string
    Which Flexible Resource Scheduling mode to run in.
    InternalExperiments map[string]string
    Experimental settings.
    SdkPipelineOptions map[string]string
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    ServiceAccountEmail string
    Identity to run virtual machines as. Defaults to the default account.
    ServiceKmsKeyName string
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    ServiceOptions []string
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    ShuffleMode string
    The shuffle mode used for the job.
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    UseStreamingEngineResourceBasedBilling bool
    Whether the job uses the new streaming engine billing model based on resource usage.
    UserAgent map[string]string
    A description of the process that generated the request.
    Version map[string]string
    A structure describing which components and their versions of the service are required in order to run the job.
    WorkerPools []WorkerPoolResponse
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    WorkerRegion string
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the service defaults to the control plane's region.
    WorkerZone string
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    clusterManagerApiService String
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset String
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debugOptions DebugOptionsResponse
    Any debugging options to be supplied to the job.
    experiments List<String>
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flexResourceSchedulingGoal String
    Which Flexible Resource Scheduling mode to run in.
    internalExperiments Map<String,String>
    Experimental settings.
    sdkPipelineOptions Map<String,String>
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    serviceAccountEmail String
    Identity to run virtual machines as. Defaults to the default account.
    serviceKmsKeyName String
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    serviceOptions List<String>
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    shuffleMode String
    The shuffle mode used for the job.
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    useStreamingEngineResourceBasedBilling Boolean
    Whether the job uses the new streaming engine billing model based on resource usage.
    userAgent Map<String,String>
    A description of the process that generated the request.
    version Map<String,String>
    A structure describing which components and their versions of the service are required in order to run the job.
    workerPools List<WorkerPoolResponse>
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    workerRegion String
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the service defaults to the control plane's region.
    workerZone String
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    clusterManagerApiService string
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset string
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debugOptions DebugOptionsResponse
    Any debugging options to be supplied to the job.
    experiments string[]
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flexResourceSchedulingGoal string
    Which Flexible Resource Scheduling mode to run in.
    internalExperiments {[key: string]: string}
    Experimental settings.
    sdkPipelineOptions {[key: string]: string}
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    serviceAccountEmail string
    Identity to run virtual machines as. Defaults to the default account.
    serviceKmsKeyName string
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    serviceOptions string[]
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    shuffleMode string
    The shuffle mode used for the job.
    tempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    useStreamingEngineResourceBasedBilling boolean
    Whether the job uses the new streaming engine billing model based on resource usage.
    userAgent {[key: string]: string}
    A description of the process that generated the request.
    version {[key: string]: string}
    A structure describing which components and their versions of the service are required in order to run the job.
    workerPools WorkerPoolResponse[]
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    workerRegion string
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the service defaults to the control plane's region.
    workerZone string
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    cluster_manager_api_service str
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset str
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debug_options DebugOptionsResponse
    Any debugging options to be supplied to the job.
    experiments Sequence[str]
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flex_resource_scheduling_goal str
    Which Flexible Resource Scheduling mode to run in.
    internal_experiments Mapping[str, str]
    Experimental settings.
    sdk_pipeline_options Mapping[str, str]
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    service_account_email str
    Identity to run virtual machines as. Defaults to the default account.
    service_kms_key_name str
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    service_options Sequence[str]
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    shuffle_mode str
    The shuffle mode used for the job.
    temp_storage_prefix str
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    use_streaming_engine_resource_based_billing bool
    Whether the job uses the new streaming engine billing model based on resource usage.
    user_agent Mapping[str, str]
    A description of the process that generated the request.
    version Mapping[str, str]
    A structure describing which components and their versions of the service are required in order to run the job.
    worker_pools Sequence[WorkerPoolResponse]
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    worker_region str
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the service defaults to the control plane's region.
    worker_zone str
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
    clusterManagerApiService String
    The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
    dataset String
    The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
    debugOptions Property Map
    Any debugging options to be supplied to the job.
    experiments List<String>
    The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
    flexResourceSchedulingGoal String
    Which Flexible Resource Scheduling mode to run in.
    internalExperiments Map<String>
    Experimental settings.
    sdkPipelineOptions Map<String>
    The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
    serviceAccountEmail String
    Identity to run virtual machines as. Defaults to the default account.
    serviceKmsKeyName String
    If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
    serviceOptions List<String>
    The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
    shuffleMode String
    The shuffle mode used for the job.
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    useStreamingEngineResourceBasedBilling Boolean
    Whether the job uses the new streaming engine billing model based on resource usage.
    userAgent Map<String>
    A description of the process that generated the request.
    version Map<String>
    A structure describing which components and their versions of the service are required in order to run the job.
    workerPools List<Property Map>
    The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
    workerRegion String
    The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, the service defaults to the control plane's region.
    workerZone String
    The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
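Two of the constraints above are easy to check before a job is submitted: worker_region and worker_zone are mutually exclusive, and temp_storage_prefix must use one of the two documented Cloud Storage forms. A minimal sketch, assuming plain dict-shaped environment arguments; the helper names are illustrative and not part of the Pulumi SDK:

```python
def validate_environment(env: dict) -> list:
    """Check a Dataflow environment dict against the constraints documented above."""
    errors = []
    # worker_region and worker_zone are mutually exclusive.
    if env.get("worker_region") and env.get("worker_zone"):
        errors.append("worker_region and worker_zone are mutually exclusive")
    # temp_storage_prefix must reference Cloud Storage in one of the two
    # documented forms: storage.googleapis.com/{bucket}/{object} or
    # bucket.storage.googleapis.com/{object}.
    prefix = env.get("temp_storage_prefix", "")
    if prefix and "storage.googleapis.com/" not in prefix:
        errors.append("temp_storage_prefix must be a Cloud Storage resource")
    return errors


def effective_temp_prefix(temp_storage_prefix: str, job_name: str) -> str:
    """The service appends the suffix "/temp-{JOBNAME}" to the configured prefix."""
    return f"{temp_storage_prefix}/temp-{job_name}"
```

Running such a check locally surfaces configuration mistakes before the API rejects the job (or silently applies a default you did not intend).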

    ExecutionStageState, ExecutionStageStateArgs

    CurrentStateTime string
    The time at which the stage transitioned to this state.
    ExecutionStageName string
    The name of the execution stage.
    ExecutionStageState Pulumi.GoogleNative.Dataflow.V1b3.ExecutionStageStateExecutionStageState
    Execution stage states allow the same set of values as JobState.
    CurrentStateTime string
    The time at which the stage transitioned to this state.
    ExecutionStageName string
    The name of the execution stage.
    ExecutionStageState ExecutionStageStateExecutionStageState
    Execution stage states allow the same set of values as JobState.
    currentStateTime String
    The time at which the stage transitioned to this state.
    executionStageName String
    The name of the execution stage.
    executionStageState ExecutionStageStateExecutionStageState
    Execution stage states allow the same set of values as JobState.
    currentStateTime string
    The time at which the stage transitioned to this state.
    executionStageName string
    The name of the execution stage.
    executionStageState ExecutionStageStateExecutionStageState
    Execution stage states allow the same set of values as JobState.
    current_state_time str
    The time at which the stage transitioned to this state.
    execution_stage_name str
    The name of the execution stage.
    execution_stage_state ExecutionStageStateExecutionStageState
    Execution stage states allow the same set of values as JobState.
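The three response fields can be mirrored as a small value type when post-processing job status; a minimal sketch using the snake_case names from the Python tab above (this is an illustration, not the generated Pulumi class):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExecutionStageState:
    # Timestamp at which the stage transitioned to this state.
    current_state_time: str
    # Name of the execution stage.
    execution_stage_name: str
    # Drawn from the same value set as JobState, e.g. "JOB_STATE_RUNNING".
    execution_stage_state: str
```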

    ExecutionStageStateExecutionStageState, ExecutionStageStateExecutionStageStateArgs

    JobStateUnknown
    JOB_STATE_UNKNOWN: the job's run state isn't specified.
    JobStateStopped
    JOB_STATE_STOPPED indicates that the job has not yet started to run.
    JobStateRunning
    JOB_STATE_RUNNING indicates that the job is currently running.
    JobStateDone
    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. It may be set by the Cloud Dataflow service as a transition from JOB_STATE_RUNNING, or via a Cloud Dataflow UpdateJob call if the job has not yet reached a terminal state.
    JobStateFailed
    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. It may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateCancelled
    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. It may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JobStateUpdated
    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. It may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateDraining
    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JobStateDrained
    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JobStatePending
    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    JobStateCancelling
    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JobStateQueued
    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. This is currently an opt-in feature; please reach out to the Cloud support team if you are interested.
    ExecutionStageStateExecutionStageStateJobStateUnknown
    JOB_STATE_UNKNOWN: the job's run state isn't specified.
    ExecutionStageStateExecutionStageStateJobStateStopped
    JOB_STATE_STOPPED indicates that the job has not yet started to run.
    ExecutionStageStateExecutionStageStateJobStateRunning
    JOB_STATE_RUNNING indicates that the job is currently running.
    ExecutionStageStateExecutionStageStateJobStateDone
    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. It may be set by the Cloud Dataflow service as a transition from JOB_STATE_RUNNING, or via a Cloud Dataflow UpdateJob call if the job has not yet reached a terminal state.
    ExecutionStageStateExecutionStageStateJobStateFailed
    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. It may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    ExecutionStageStateExecutionStageStateJobStateCancelled
    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. It may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    ExecutionStageStateExecutionStageStateJobStateUpdated
    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. It may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    ExecutionStageStateExecutionStageStateJobStateDraining
    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    ExecutionStageStateExecutionStageStateJobStateDrained
    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    ExecutionStageStateExecutionStageStateJobStatePending
    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    ExecutionStageStateExecutionStageStateJobStateCancelling
    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    ExecutionStageStateExecutionStageStateJobStateQueued
    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    ExecutionStageStateExecutionStageStateJobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. This is currently an opt-in feature; please reach out to the Cloud support team if you are interested.
    JobStateUnknown
    JOB_STATE_UNKNOWN: the job's run state isn't specified.
    JobStateStopped
    JOB_STATE_STOPPED indicates that the job has not yet started to run.
    JobStateRunning
    JOB_STATE_RUNNING indicates that the job is currently running.
    JobStateDone
    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. It may be set by the Cloud Dataflow service as a transition from JOB_STATE_RUNNING, or via a Cloud Dataflow UpdateJob call if the job has not yet reached a terminal state.
    JobStateFailed
    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. It may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateCancelled
    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. It may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JobStateUpdated
    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. It may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateDraining
    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JobStateDrained
    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JobStatePending
    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    JobStateCancelling
    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JobStateQueued
    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. This is currently an opt-in feature; please reach out to the Cloud support team if you are interested.
    JobStateUnknown
    JOB_STATE_UNKNOWN: the job's run state isn't specified.
    JobStateStopped
    JOB_STATE_STOPPED indicates that the job has not yet started to run.
    JobStateRunning
    JOB_STATE_RUNNING indicates that the job is currently running.
    JobStateDone
    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. It may be set by the Cloud Dataflow service as a transition from JOB_STATE_RUNNING, or via a Cloud Dataflow UpdateJob call if the job has not yet reached a terminal state.
    JobStateFailed
    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. It may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateCancelled
    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. It may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JobStateUpdated
    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. It may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateDraining
    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JobStateDrained
    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JobStatePending
    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    JobStateCancelling
    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JobStateQueued
    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. This is currently an opt-in feature; please reach out to the Cloud support team if you are interested.
    JOB_STATE_UNKNOWN
    The job's run state isn't specified.
    JOB_STATE_STOPPED
    JOB_STATE_STOPPED indicates that the job has not yet started to run.
    JOB_STATE_RUNNING
    JOB_STATE_RUNNING indicates that the job is currently running.
    JOB_STATE_DONE
    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. It may be set by the Cloud Dataflow service as a transition from JOB_STATE_RUNNING, or via a Cloud Dataflow UpdateJob call if the job has not yet reached a terminal state.
    JOB_STATE_FAILED
    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. It may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JOB_STATE_CANCELLED
    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. It may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JOB_STATE_UPDATED
    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. It may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JOB_STATE_DRAINING
    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JOB_STATE_DRAINED
    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by ceasing to pull from its input sources and processing any data that remained in-flight when draining was requested. This is a terminal state; it may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JOB_STATE_PENDING
    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    JOB_STATE_CANCELLING
    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JOB_STATE_QUEUED
    JOB_STATE_QUEUEDJOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JOB_STATE_RESOURCE_CLEANING_UP
    JOB_STATE_RESOURCE_CLEANING_UPJOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
    "JOB_STATE_UNKNOWN"
    JOB_STATE_UNKNOWNThe job's run state isn't specified.
    "JOB_STATE_STOPPED"
    JOB_STATE_STOPPEDJOB_STATE_STOPPED indicates that the job has not yet started to run.
    "JOB_STATE_RUNNING"
    JOB_STATE_RUNNINGJOB_STATE_RUNNING indicates that the job is currently running.
    "JOB_STATE_DONE"
    JOB_STATE_DONEJOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    "JOB_STATE_FAILED"
    JOB_STATE_FAILEDJOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    "JOB_STATE_CANCELLED"
    JOB_STATE_CANCELLEDJOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    "JOB_STATE_UPDATED"
    JOB_STATE_UPDATEDJOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    "JOB_STATE_DRAINING"
    JOB_STATE_DRAININGJOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    "JOB_STATE_DRAINED"
    JOB_STATE_DRAINEDJOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    "JOB_STATE_PENDING"
    JOB_STATE_PENDINGJOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
    "JOB_STATE_CANCELLING"
    JOB_STATE_CANCELLINGJOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    "JOB_STATE_QUEUED"
    JOB_STATE_QUEUEDJOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    "JOB_STATE_RESOURCE_CLEANING_UP"
    JOB_STATE_RESOURCE_CLEANING_UPJOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
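    The transition rules described above can be collected into a small lookup table, for example to sanity-check a requested state client-side before issuing an UpdateJob call. The following is an illustrative sketch derived only from the descriptions in this section; it is not part of the Pulumi SDK or the Dataflow API, which remain authoritative.

    ```python
    # Allowed job state transitions, per the enum descriptions above.
    # Terminal states map to an empty set. Illustrative only.
    ALLOWED_TRANSITIONS = {
        "JOB_STATE_QUEUED": {"JOB_STATE_PENDING", "JOB_STATE_CANCELLED"},
        "JOB_STATE_PENDING": {"JOB_STATE_RUNNING", "JOB_STATE_FAILED"},
        "JOB_STATE_RUNNING": {
            "JOB_STATE_DONE", "JOB_STATE_FAILED", "JOB_STATE_CANCELLING",
            "JOB_STATE_UPDATED", "JOB_STATE_DRAINING",
        },
        "JOB_STATE_DRAINING": {
            "JOB_STATE_DRAINED", "JOB_STATE_CANCELLED", "JOB_STATE_FAILED",
        },
        "JOB_STATE_CANCELLING": {"JOB_STATE_CANCELLED", "JOB_STATE_FAILED"},
        # Terminal states: no further transitions are allowed.
        "JOB_STATE_DONE": set(),
        "JOB_STATE_FAILED": set(),
        "JOB_STATE_CANCELLED": set(),
        "JOB_STATE_UPDATED": set(),
        "JOB_STATE_DRAINED": set(),
    }

    def is_terminal(state: str) -> bool:
        """A state is terminal when no further transitions are allowed from it."""
        return state in ALLOWED_TRANSITIONS and not ALLOWED_TRANSITIONS[state]

    def can_transition(current: str, target: str) -> bool:
        """Check whether the docs above permit moving from `current` to `target`."""
        return target in ALLOWED_TRANSITIONS.get(current, set())
    ```

    For instance, `can_transition("JOB_STATE_DONE", "JOB_STATE_RUNNING")` is false, matching the note that JOB_STATE_DONE is terminal.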

    ExecutionStageStateResponse, ExecutionStageStateResponseArgs

    CurrentStateTime string
    The time at which the stage transitioned to this state.
    ExecutionStageName string
    The name of the execution stage.
    ExecutionStageState string
    Execution stage states allow the same set of values as JobState.
    CurrentStateTime string
    The time at which the stage transitioned to this state.
    ExecutionStageName string
    The name of the execution stage.
    ExecutionStageState string
    Execution stage states allow the same set of values as JobState.
    currentStateTime String
    The time at which the stage transitioned to this state.
    executionStageName String
    The name of the execution stage.
    executionStageState String
    Execution stage states allow the same set of values as JobState.
    currentStateTime string
    The time at which the stage transitioned to this state.
    executionStageName string
    The name of the execution stage.
    executionStageState string
    Execution stage states allow the same set of values as JobState.
    current_state_time str
    The time at which the stage transitioned to this state.
    execution_stage_name str
    The name of the execution stage.
    execution_stage_state str
    Execution stage states allow the same set of values as JobState.
    currentStateTime String
    The time at which the stage transitioned to this state.
    executionStageName String
    The name of the execution stage.
    executionStageState String
    Execution stage states allow the same set of values as JobState.

    ExecutionStageSummary, ExecutionStageSummaryArgs

    ComponentSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentSource>
    Collections produced and consumed by component transforms of this stage.
    ComponentTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentTransform>
    Transforms that comprise this execution stage.
    Id string
    Dataflow service generated id for this stage.
    InputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSource>
    Input sources for this stage.
    Kind Pulumi.GoogleNative.Dataflow.V1b3.ExecutionStageSummaryKind
    Type of transform this stage is executing.
    Name string
    Dataflow service generated name for this stage.
    OutputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSource>
    Output sources for this stage.
    PrerequisiteStage List<string>
    Other stages that must complete before this stage can run.
    ComponentSource []ComponentSource
    Collections produced and consumed by component transforms of this stage.
    ComponentTransform []ComponentTransform
    Transforms that comprise this execution stage.
    Id string
    Dataflow service generated id for this stage.
    InputSource []StageSource
    Input sources for this stage.
    Kind ExecutionStageSummaryKind
    Type of transform this stage is executing.
    Name string
    Dataflow service generated name for this stage.
    OutputSource []StageSource
    Output sources for this stage.
    PrerequisiteStage []string
    Other stages that must complete before this stage can run.
    componentSource List<ComponentSource>
    Collections produced and consumed by component transforms of this stage.
    componentTransform List<ComponentTransform>
    Transforms that comprise this execution stage.
    id String
    Dataflow service generated id for this stage.
    inputSource List<StageSource>
    Input sources for this stage.
    kind ExecutionStageSummaryKind
    Type of transform this stage is executing.
    name String
    Dataflow service generated name for this stage.
    outputSource List<StageSource>
    Output sources for this stage.
    prerequisiteStage List<String>
    Other stages that must complete before this stage can run.
    componentSource ComponentSource[]
    Collections produced and consumed by component transforms of this stage.
    componentTransform ComponentTransform[]
    Transforms that comprise this execution stage.
    id string
    Dataflow service generated id for this stage.
    inputSource StageSource[]
    Input sources for this stage.
    kind ExecutionStageSummaryKind
    Type of transform this stage is executing.
    name string
    Dataflow service generated name for this stage.
    outputSource StageSource[]
    Output sources for this stage.
    prerequisiteStage string[]
    Other stages that must complete before this stage can run.
    component_source Sequence[ComponentSource]
    Collections produced and consumed by component transforms of this stage.
    component_transform Sequence[ComponentTransform]
    Transforms that comprise this execution stage.
    id str
    Dataflow service generated id for this stage.
    input_source Sequence[StageSource]
    Input sources for this stage.
    kind ExecutionStageSummaryKind
    Type of transform this stage is executing.
    name str
    Dataflow service generated name for this stage.
    output_source Sequence[StageSource]
    Output sources for this stage.
    prerequisite_stage Sequence[str]
    Other stages that must complete before this stage can run.
    componentSource List<Property Map>
    Collections produced and consumed by component transforms of this stage.
    componentTransform List<Property Map>
    Transforms that comprise this execution stage.
    id String
    Dataflow service generated id for this stage.
    inputSource List<Property Map>
    Input sources for this stage.
    kind "UNKNOWN_KIND" | "PAR_DO_KIND" | "GROUP_BY_KEY_KIND" | "FLATTEN_KIND" | "READ_KIND" | "WRITE_KIND" | "CONSTANT_KIND" | "SINGLETON_KIND" | "SHUFFLE_KIND"
    Type of transform this stage is executing.
    name String
    Dataflow service generated name for this stage.
    outputSource List<Property Map>
    Output sources for this stage.
    prerequisiteStage List<String>
    Other stages that must complete before this stage can run.
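    Because each stage lists the stages that must finish first (prerequisiteStage), the execution graph can be walked in dependency order. The sketch below models stages as a plain mapping from hypothetical stage ids to their prerequisite ids, rather than using the Pulumi types; it is an illustration of how the field composes, not SDK behavior.

    ```python
    from collections import deque

    def stage_order(stages: dict[str, list[str]]) -> list[str]:
        """Return stage ids ordered so every stage appears after its
        prerequisites (Kahn's algorithm). `stages` maps a stage id to the
        list of stage ids in its prerequisiteStage field."""
        indegree = {sid: len(prereqs) for sid, prereqs in stages.items()}
        dependents: dict[str, list[str]] = {sid: [] for sid in stages}
        for sid, prereqs in stages.items():
            for p in prereqs:
                dependents[p].append(sid)
        # Stages with no prerequisites are runnable immediately.
        ready = deque(sorted(s for s, d in indegree.items() if d == 0))
        order: list[str] = []
        while ready:
            sid = ready.popleft()
            order.append(sid)
            for nxt in dependents[sid]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    ready.append(nxt)
        if len(order) != len(stages):
            raise ValueError("cycle detected in prerequisite stages")
        return order
    ```

    With hypothetical stages where `s2` depends on `s1` and `s3` on both, `stage_order` yields `s1` before `s2` before `s3`.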

    ExecutionStageSummaryKind, ExecutionStageSummaryKindArgs

    UnknownKind
    UNKNOWN_KIND: Unrecognized transform type.
    ParDoKind
    PAR_DO_KIND: ParDo transform.
    GroupByKeyKind
    GROUP_BY_KEY_KIND: Group By Key transform.
    FlattenKind
    FLATTEN_KIND: Flatten transform.
    ReadKind
    READ_KIND: Read transform.
    WriteKind
    WRITE_KIND: Write transform.
    ConstantKind
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    SingletonKind
    SINGLETON_KIND: Creates a Singleton view of a collection.
    ShuffleKind
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
    ExecutionStageSummaryKindUnknownKind
    UNKNOWN_KIND: Unrecognized transform type.
    ExecutionStageSummaryKindParDoKind
    PAR_DO_KIND: ParDo transform.
    ExecutionStageSummaryKindGroupByKeyKind
    GROUP_BY_KEY_KIND: Group By Key transform.
    ExecutionStageSummaryKindFlattenKind
    FLATTEN_KIND: Flatten transform.
    ExecutionStageSummaryKindReadKind
    READ_KIND: Read transform.
    ExecutionStageSummaryKindWriteKind
    WRITE_KIND: Write transform.
    ExecutionStageSummaryKindConstantKind
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    ExecutionStageSummaryKindSingletonKind
    SINGLETON_KIND: Creates a Singleton view of a collection.
    ExecutionStageSummaryKindShuffleKind
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
    UnknownKind
    UNKNOWN_KIND: Unrecognized transform type.
    ParDoKind
    PAR_DO_KIND: ParDo transform.
    GroupByKeyKind
    GROUP_BY_KEY_KIND: Group By Key transform.
    FlattenKind
    FLATTEN_KIND: Flatten transform.
    ReadKind
    READ_KIND: Read transform.
    WriteKind
    WRITE_KIND: Write transform.
    ConstantKind
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    SingletonKind
    SINGLETON_KIND: Creates a Singleton view of a collection.
    ShuffleKind
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
    UnknownKind
    UNKNOWN_KIND: Unrecognized transform type.
    ParDoKind
    PAR_DO_KIND: ParDo transform.
    GroupByKeyKind
    GROUP_BY_KEY_KIND: Group By Key transform.
    FlattenKind
    FLATTEN_KIND: Flatten transform.
    ReadKind
    READ_KIND: Read transform.
    WriteKind
    WRITE_KIND: Write transform.
    ConstantKind
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    SingletonKind
    SINGLETON_KIND: Creates a Singleton view of a collection.
    ShuffleKind
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
    UNKNOWN_KIND
    UNKNOWN_KIND: Unrecognized transform type.
    PAR_DO_KIND
    PAR_DO_KIND: ParDo transform.
    GROUP_BY_KEY_KIND
    GROUP_BY_KEY_KIND: Group By Key transform.
    FLATTEN_KIND
    FLATTEN_KIND: Flatten transform.
    READ_KIND
    READ_KIND: Read transform.
    WRITE_KIND
    WRITE_KIND: Write transform.
    CONSTANT_KIND
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    SINGLETON_KIND
    SINGLETON_KIND: Creates a Singleton view of a collection.
    SHUFFLE_KIND
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
    "UNKNOWN_KIND"
    UNKNOWN_KIND: Unrecognized transform type.
    "PAR_DO_KIND"
    PAR_DO_KIND: ParDo transform.
    "GROUP_BY_KEY_KIND"
    GROUP_BY_KEY_KIND: Group By Key transform.
    "FLATTEN_KIND"
    FLATTEN_KIND: Flatten transform.
    "READ_KIND"
    READ_KIND: Read transform.
    "WRITE_KIND"
    WRITE_KIND: Write transform.
    "CONSTANT_KIND"
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    "SINGLETON_KIND"
    SINGLETON_KIND: Creates a Singleton view of a collection.
    "SHUFFLE_KIND"
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
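    The per-language tables above show the same underscore-delimited wire values (e.g. GROUP_BY_KEY_KIND) surfaced under PascalCase SDK member names (e.g. GroupByKeyKind). That mapping is mechanical, and can be sketched as:

    ```python
    def to_sdk_member_name(wire_value: str) -> str:
        """Convert an underscore-delimited wire value such as
        "GROUP_BY_KEY_KIND" to the PascalCase member name shown in the
        SDK tables above, e.g. "GroupByKeyKind". Illustrative helper,
        not part of the generated SDKs."""
        return "".join(part.capitalize() for part in wire_value.split("_"))
    ```

    The same convention holds for the job state enums, so `to_sdk_member_name("JOB_STATE_RUNNING")` gives `JobStateRunning`.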

    ExecutionStageSummaryResponse, ExecutionStageSummaryResponseArgs

    ComponentSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentSourceResponse>
    Collections produced and consumed by component transforms of this stage.
    ComponentTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentTransformResponse>
    Transforms that comprise this execution stage.
    InputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSourceResponse>
    Input sources for this stage.
    Kind string
    Type of transform this stage is executing.
    Name string
    Dataflow service generated name for this stage.
    OutputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSourceResponse>
    Output sources for this stage.
    PrerequisiteStage List<string>
    Other stages that must complete before this stage can run.
    ComponentSource []ComponentSourceResponse
    Collections produced and consumed by component transforms of this stage.
    ComponentTransform []ComponentTransformResponse
    Transforms that comprise this execution stage.
    InputSource []StageSourceResponse
    Input sources for this stage.
    Kind string
    Type of transform this stage is executing.
    Name string
    Dataflow service generated name for this stage.
    OutputSource []StageSourceResponse
    Output sources for this stage.
    PrerequisiteStage []string
    Other stages that must complete before this stage can run.
    componentSource List<ComponentSourceResponse>
    Collections produced and consumed by component transforms of this stage.
    componentTransform List<ComponentTransformResponse>
    Transforms that comprise this execution stage.
    inputSource List<StageSourceResponse>
    Input sources for this stage.
    kind String
    Type of transform this stage is executing.
    name String
    Dataflow service generated name for this stage.
    outputSource List<StageSourceResponse>
    Output sources for this stage.
    prerequisiteStage List<String>
    Other stages that must complete before this stage can run.
    componentSource ComponentSourceResponse[]
    Collections produced and consumed by component transforms of this stage.
    componentTransform ComponentTransformResponse[]
    Transforms that comprise this execution stage.
    inputSource StageSourceResponse[]
    Input sources for this stage.
    kind string
    Type of transform this stage is executing.
    name string
    Dataflow service generated name for this stage.
    outputSource StageSourceResponse[]
    Output sources for this stage.
    prerequisiteStage string[]
    Other stages that must complete before this stage can run.
    component_source Sequence[ComponentSourceResponse]
    Collections produced and consumed by component transforms of this stage.
    component_transform Sequence[ComponentTransformResponse]
    Transforms that comprise this execution stage.
    input_source Sequence[StageSourceResponse]
    Input sources for this stage.
    kind str
    Type of transform this stage is executing.
    name str
    Dataflow service generated name for this stage.
    output_source Sequence[StageSourceResponse]
    Output sources for this stage.
    prerequisite_stage Sequence[str]
    Other stages that must complete before this stage can run.
    componentSource List<Property Map>
    Collections produced and consumed by component transforms of this stage.
    componentTransform List<Property Map>
    Transforms that comprise this execution stage.
    inputSource List<Property Map>
    Input sources for this stage.
    kind String
    Type of transform this stage is executing.
    name String
    Dataflow service generated name for this stage.
    outputSource List<Property Map>
    Output sources for this stage.
    prerequisiteStage List<String>
    Other stages that must complete before this stage can run.

    FileIODetails, FileIODetailsArgs

    FilePattern string
    File pattern used by the connector to access files.
    FilePattern string
    File pattern used by the connector to access files.
    filePattern String
    File pattern used by the connector to access files.
    filePattern string
    File pattern used by the connector to access files.
    file_pattern str
    File pattern used by the connector to access files.
    filePattern String
    File pattern used by the connector to access files.

    FileIODetailsResponse, FileIODetailsResponseArgs

    FilePattern string
    File pattern used by the connector to access files.
    FilePattern string
    File pattern used by the connector to access files.
    filePattern String
    File pattern used by the connector to access files.
    filePattern string
    File pattern used by the connector to access files.
    file_pattern str
    File pattern used by the connector to access files.
    filePattern String
    File pattern used by the connector to access files.
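    A filePattern is typically a glob-style path such as gs://bucket/input/*.csv. As a rough illustration of which object names such a pattern selects (not the connector's actual matching logic, whose wildcard semantics may differ), Python's fnmatch can approximate the match. The bucket and object names below are hypothetical.

    ```python
    from fnmatch import fnmatch

    # Hypothetical object names, for illustration only.
    objects = [
        "gs://my-bucket/input/2023-11-01.csv",
        "gs://my-bucket/input/2023-11-02.csv",
        "gs://my-bucket/archive/old.csv",
    ]

    def matching_objects(pattern: str, names: list[str]) -> list[str]:
        """Approximate which names a glob-style file pattern selects."""
        return [n for n in names if fnmatch(n, pattern)]
    ```

    Here `matching_objects("gs://my-bucket/input/*.csv", objects)` selects only the two objects under input/.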

    JobCurrentState, JobCurrentStateArgs

    JobStateUnknown
    JOB_STATE_UNKNOWN: The job's run state isn't specified.
    JobStateStopped
    JOB_STATE_STOPPED: Indicates that the job has not yet started to run.
    JobStateRunning
    JOB_STATE_RUNNING: Indicates that the job is currently running.
    JobStateDone
    JOB_STATE_DONE: Indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    JobStateFailed
    JOB_STATE_FAILED: Indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateCancelled
    JOB_STATE_CANCELLED: Indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JobStateUpdated
    JOB_STATE_UPDATED: Indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateDraining
    JOB_STATE_DRAINING: Indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JobStateDrained
    JOB_STATE_DRAINED: Indicates that the job has been drained. A drained job stopped pulling from its input sources and finished processing any data that remained in-flight when draining was requested. This state is terminal, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JobStatePending
    JOB_STATE_PENDING: Indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    JobStateCancelling
    JOB_STATE_CANCELLING: Indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JobStateQueued
    JOB_STATE_QUEUED: Indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP: Indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.
    JobCurrentStateJobStateUnknown
    JOB_STATE_UNKNOWN: The job's run state isn't specified.
    JobCurrentStateJobStateStopped
    JOB_STATE_STOPPED: Indicates that the job has not yet started to run.
    JobCurrentStateJobStateRunning
    JOB_STATE_RUNNING: Indicates that the job is currently running.
    JobCurrentStateJobStateDone
    JOB_STATE_DONE: Indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    JobCurrentStateJobStateFailed
    JOB_STATE_FAILED: Indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobCurrentStateJobStateCancelled
    JOB_STATE_CANCELLED: Indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JobCurrentStateJobStateUpdated
    JOB_STATE_UPDATED: Indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobCurrentStateJobStateDraining
    JOB_STATE_DRAINING: Indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JobCurrentStateJobStateDrained
    JOB_STATE_DRAINED: Indicates that the job has been drained. A drained job stopped pulling from its input sources and finished processing any data that remained in-flight when draining was requested. This state is terminal, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JobCurrentStateJobStatePending
    JOB_STATE_PENDING: Indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    JobCurrentStateJobStateCancelling
    JOB_STATE_CANCELLING: Indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JobCurrentStateJobStateQueued
    JOB_STATE_QUEUED: Indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JobCurrentStateJobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP: Indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.
    JobStateUnknown
    JOB_STATE_UNKNOWN: The job's run state isn't specified.
    JobStateStopped
    JOB_STATE_STOPPED: Indicates that the job has not yet started to run.
    JobStateRunning
    JOB_STATE_RUNNING: Indicates that the job is currently running.
    JobStateDone
    JOB_STATE_DONE: Indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    JobStateFailed
    JOB_STATE_FAILED: Indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateCancelled
    JOB_STATE_CANCELLED: Indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JobStateUpdated
    JOB_STATE_UPDATED: Indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateDraining
    JOB_STATE_DRAINING: Indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JobStateDrained
    JOB_STATE_DRAINED: Indicates that the job has been drained. A drained job stopped pulling from its input sources and finished processing any data that remained in-flight when draining was requested. This state is terminal, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JobStatePending
    JOB_STATE_PENDING: Indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    JobStateCancelling
    JOB_STATE_CANCELLING: Indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JobStateQueued
    JOB_STATE_QUEUED: Indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP: Indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.
    JobStateUnknown
    JOB_STATE_UNKNOWN: The job's run state isn't specified.
    JobStateStopped
    JOB_STATE_STOPPED: Indicates that the job has not yet started to run.
    JobStateRunning
    JOB_STATE_RUNNING: Indicates that the job is currently running.
    JobStateDone
    JOB_STATE_DONE: Indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    JobStateFailed
    JOB_STATE_FAILED: Indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateCancelled
    JOB_STATE_CANCELLED: Indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JobStateUpdated
    JOB_STATE_UPDATED: Indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateDraining
    JOB_STATE_DRAINING: Indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JobStateDrained
    JOB_STATE_DRAINED: Indicates that the job has been drained. A drained job stopped pulling from its input sources and finished processing any data that remained in-flight when draining was requested. This state is terminal, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JobStatePending
    JOB_STATE_PENDING: Indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    JobStateCancelling
    JOB_STATE_CANCELLING: Indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JobStateQueued
    JOB_STATE_QUEUED: Indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP: Indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature; please reach out to the Cloud support team if you are interested.
    JOB_STATE_UNKNOWN
    JOB_STATE_UNKNOWN: The job's run state isn't specified.
    JOB_STATE_STOPPED
    JOB_STATE_STOPPED: Indicates that the job has not yet started to run.
    JOB_STATE_RUNNING
    JOB_STATE_RUNNINGJOB_STATE_RUNNING indicates that the job is currently running.
    JOB_STATE_DONE
    JOB_STATE_DONEJOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    JOB_STATE_FAILED
    JOB_STATE_FAILEDJOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JOB_STATE_CANCELLED
    JOB_STATE_CANCELLEDJOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JOB_STATE_UPDATED
    JOB_STATE_UPDATEDJOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JOB_STATE_DRAINING
    JOB_STATE_DRAININGJOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JOB_STATE_DRAINED
    JOB_STATE_DRAINEDJOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JOB_STATE_PENDING
    JOB_STATE_PENDINGJOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
    JOB_STATE_CANCELLING
    JOB_STATE_CANCELLINGJOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JOB_STATE_QUEUED
    JOB_STATE_QUEUEDJOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JOB_STATE_RESOURCE_CLEANING_UP
    JOB_STATE_RESOURCE_CLEANING_UPJOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
    "JOB_STATE_UNKNOWN"
    JOB_STATE_UNKNOWNThe job's run state isn't specified.
    "JOB_STATE_STOPPED"
    JOB_STATE_STOPPEDJOB_STATE_STOPPED indicates that the job has not yet started to run.
    "JOB_STATE_RUNNING"
    JOB_STATE_RUNNINGJOB_STATE_RUNNING indicates that the job is currently running.
    "JOB_STATE_DONE"
    JOB_STATE_DONEJOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    "JOB_STATE_FAILED"
    JOB_STATE_FAILEDJOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    "JOB_STATE_CANCELLED"
    JOB_STATE_CANCELLEDJOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    "JOB_STATE_UPDATED"
    JOB_STATE_UPDATEDJOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    "JOB_STATE_DRAINING"
    JOB_STATE_DRAININGJOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    "JOB_STATE_DRAINED"
    JOB_STATE_DRAINEDJOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    "JOB_STATE_PENDING"
    JOB_STATE_PENDINGJOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
    "JOB_STATE_CANCELLING"
    JOB_STATE_CANCELLINGJOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    "JOB_STATE_QUEUED"
    JOB_STATE_QUEUEDJOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    "JOB_STATE_RESOURCE_CLEANING_UP"
    JOB_STATE_RESOURCE_CLEANING_UPJOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
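    The transition rules scattered through the value descriptions above can be collected into one table. The sketch below is illustrative only (it is not part of the Pulumi SDK or the Dataflow API); the state names mirror the JOB_STATE_* constants, and states without an explicit "may only transition to" rule are treated as unconstrained:

    ```python
    # Illustrative sketch of the job state machine described above.

    # States the descriptions above call "terminal".
    TERMINAL_STATES = {
        "JOB_STATE_DONE",
        "JOB_STATE_FAILED",
        "JOB_STATE_CANCELLED",
        "JOB_STATE_UPDATED",
        "JOB_STATE_DRAINED",
    }

    # The explicit "may only transition to" rules quoted above.
    ALLOWED_TRANSITIONS = {
        "JOB_STATE_QUEUED": {"JOB_STATE_PENDING", "JOB_STATE_CANCELLED"},
        "JOB_STATE_PENDING": {"JOB_STATE_RUNNING", "JOB_STATE_FAILED"},
        "JOB_STATE_CANCELLING": {"JOB_STATE_CANCELLED", "JOB_STATE_FAILED"},
        "JOB_STATE_DRAINING": {
            "JOB_STATE_DRAINED", "JOB_STATE_CANCELLED", "JOB_STATE_FAILED",
        },
    }

    def is_terminal(state: str) -> bool:
        """True if the documentation marks the state as terminal."""
        return state in TERMINAL_STATES

    def can_transition(current: str, target: str) -> bool:
        """Check a proposed transition against the documented rules."""
        if is_terminal(current):
            return False  # terminal states never transition again
        allowed = ALLOWED_TRANSITIONS.get(current)
        if allowed is None:
            # No explicit rule quoted for this state (e.g. JOB_STATE_RUNNING).
            return True
        return target in allowed
    ```

    For example, `can_transition("JOB_STATE_QUEUED", "JOB_STATE_RUNNING")` is false, because a queued job must pass through JOB_STATE_PENDING first.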

    JobExecutionInfo, JobExecutionInfoArgs

    Stages Dictionary<string, string>
    A mapping from each stage to the information about that stage.
    Stages map[string]string
    A mapping from each stage to the information about that stage.
    stages Map<String,String>
    A mapping from each stage to the information about that stage.
    stages {[key: string]: string}
    A mapping from each stage to the information about that stage.
    stages Mapping[str, str]
    A mapping from each stage to the information about that stage.
    stages Map<String>
    A mapping from each stage to the information about that stage.
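    Across every SDK, JobExecutionInfo reduces to a single string-to-string map keyed by stage name. The stage names and values below are hypothetical, shown only to illustrate the shape:

    ```python
    # Hypothetical JobExecutionInfo payload: "stages" maps each stage name
    # to a string describing that stage. Names and values are made up.
    execution_info = {
        "stages": {
            "s01": "read from source",
            "s02": "transform",
            "s03": "write to sink",
        }
    }

    # Enumerate the stage names in a stable order.
    stage_names = sorted(execution_info["stages"])
    ```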

    JobExecutionInfoResponse, JobExecutionInfoResponseArgs

    Stages Dictionary<string, string>
    A mapping from each stage to the information about that stage.
    Stages map[string]string
    A mapping from each stage to the information about that stage.
    stages Map<String,String>
    A mapping from each stage to the information about that stage.
    stages {[key: string]: string}
    A mapping from each stage to the information about that stage.
    stages Mapping[str, str]
    A mapping from each stage to the information about that stage.
    stages Map<String>
    A mapping from each stage to the information about that stage.

    JobMetadata, JobMetadataArgs

    BigTableDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigTableIODetails>
    Identification of a Cloud Bigtable source used in the Dataflow job.
    BigqueryDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigQueryIODetails>
    Identification of a BigQuery source used in the Dataflow job.
    DatastoreDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DatastoreIODetails>
    Identification of a Datastore source used in the Dataflow job.
    FileDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.FileIODetails>
    Identification of a File source used in the Dataflow job.
    PubsubDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PubSubIODetails>
    Identification of a Pub/Sub source used in the Dataflow job.
    SdkVersion Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkVersion
    The SDK version used to run the job.
    SpannerDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SpannerIODetails>
    Identification of a Spanner source used in the Dataflow job.
    UserDisplayProperties Dictionary<string, string>
    List of display properties to help UI filter jobs.
    BigTableDetails []BigTableIODetails
    Identification of a Cloud Bigtable source used in the Dataflow job.
    BigqueryDetails []BigQueryIODetails
    Identification of a BigQuery source used in the Dataflow job.
    DatastoreDetails []DatastoreIODetails
    Identification of a Datastore source used in the Dataflow job.
    FileDetails []FileIODetails
    Identification of a File source used in the Dataflow job.
    PubsubDetails []PubSubIODetails
    Identification of a Pub/Sub source used in the Dataflow job.
    SdkVersion SdkVersion
    The SDK version used to run the job.
    SpannerDetails []SpannerIODetails
    Identification of a Spanner source used in the Dataflow job.
    UserDisplayProperties map[string]string
    List of display properties to help UI filter jobs.
    bigTableDetails List<BigTableIODetails>
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigqueryDetails List<BigQueryIODetails>
    Identification of a BigQuery source used in the Dataflow job.
    datastoreDetails List<DatastoreIODetails>
    Identification of a Datastore source used in the Dataflow job.
    fileDetails List<FileIODetails>
    Identification of a File source used in the Dataflow job.
    pubsubDetails List<PubSubIODetails>
    Identification of a Pub/Sub source used in the Dataflow job.
    sdkVersion SdkVersion
    The SDK version used to run the job.
    spannerDetails List<SpannerIODetails>
    Identification of a Spanner source used in the Dataflow job.
    userDisplayProperties Map<String,String>
    List of display properties to help UI filter jobs.
    bigTableDetails BigTableIODetails[]
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigqueryDetails BigQueryIODetails[]
    Identification of a BigQuery source used in the Dataflow job.
    datastoreDetails DatastoreIODetails[]
    Identification of a Datastore source used in the Dataflow job.
    fileDetails FileIODetails[]
    Identification of a File source used in the Dataflow job.
    pubsubDetails PubSubIODetails[]
    Identification of a Pub/Sub source used in the Dataflow job.
    sdkVersion SdkVersion
    The SDK version used to run the job.
    spannerDetails SpannerIODetails[]
    Identification of a Spanner source used in the Dataflow job.
    userDisplayProperties {[key: string]: string}
    List of display properties to help UI filter jobs.
    big_table_details Sequence[BigTableIODetails]
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigquery_details Sequence[BigQueryIODetails]
    Identification of a BigQuery source used in the Dataflow job.
    datastore_details Sequence[DatastoreIODetails]
    Identification of a Datastore source used in the Dataflow job.
    file_details Sequence[FileIODetails]
    Identification of a File source used in the Dataflow job.
    pubsub_details Sequence[PubSubIODetails]
    Identification of a Pub/Sub source used in the Dataflow job.
    sdk_version SdkVersion
    The SDK version used to run the job.
    spanner_details Sequence[SpannerIODetails]
    Identification of a Spanner source used in the Dataflow job.
    user_display_properties Mapping[str, str]
    List of display properties to help UI filter jobs.
    bigTableDetails List<Property Map>
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigqueryDetails List<Property Map>
    Identification of a BigQuery source used in the Dataflow job.
    datastoreDetails List<Property Map>
    Identification of a Datastore source used in the Dataflow job.
    fileDetails List<Property Map>
    Identification of a File source used in the Dataflow job.
    pubsubDetails List<Property Map>
    Identification of a Pub/Sub source used in the Dataflow job.
    sdkVersion Property Map
    The SDK version used to run the job.
    spannerDetails List<Property Map>
    Identification of a Spanner source used in the Dataflow job.
    userDisplayProperties Map<String>
    List of display properties to help UI filter jobs.
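    Because user_display_properties is a free-form string map intended to help a UI filter jobs, a minimal client-side filter over it might look like the sketch below. The job records, property names, and values are hypothetical; only the string-to-string shape of the field comes from the schema above:

    ```python
    # Hypothetical job records carrying JobMetadata.user_display_properties.
    jobs = [
        {"name": "nightly-etl",
         "user_display_properties": {"team": "data", "env": "prod"}},
        {"name": "ad-hoc-backfill",
         "user_display_properties": {"team": "data", "env": "dev"}},
        {"name": "billing-export",
         "user_display_properties": {"team": "finance", "env": "prod"}},
    ]

    def filter_jobs(jobs, **wanted):
        """Keep jobs whose display properties match every requested key/value."""
        return [
            job["name"]
            for job in jobs
            if all(job["user_display_properties"].get(k) == v
                   for k, v in wanted.items())
        ]
    ```

    For example, `filter_jobs(jobs, env="prod")` keeps only the two jobs tagged with that property.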

    JobMetadataResponse, JobMetadataResponseArgs

    BigTableDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigTableIODetailsResponse>
    Identification of a Cloud Bigtable source used in the Dataflow job.
    BigqueryDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigQueryIODetailsResponse>
    Identification of a BigQuery source used in the Dataflow job.
    DatastoreDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DatastoreIODetailsResponse>
    Identification of a Datastore source used in the Dataflow job.
    FileDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.FileIODetailsResponse>
    Identification of a File source used in the Dataflow job.
    PubsubDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PubSubIODetailsResponse>
    Identification of a Pub/Sub source used in the Dataflow job.
    SdkVersion Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkVersionResponse
    The SDK version used to run the job.
    SpannerDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SpannerIODetailsResponse>
    Identification of a Spanner source used in the Dataflow job.
    UserDisplayProperties Dictionary<string, string>
    List of display properties to help UI filter jobs.
    BigTableDetails []BigTableIODetailsResponse
    Identification of a Cloud Bigtable source used in the Dataflow job.
    BigqueryDetails []BigQueryIODetailsResponse
    Identification of a BigQuery source used in the Dataflow job.
    DatastoreDetails []DatastoreIODetailsResponse
    Identification of a Datastore source used in the Dataflow job.
    FileDetails []FileIODetailsResponse
    Identification of a File source used in the Dataflow job.
    PubsubDetails []PubSubIODetailsResponse
    Identification of a Pub/Sub source used in the Dataflow job.
    SdkVersion SdkVersionResponse
    The SDK version used to run the job.
    SpannerDetails []SpannerIODetailsResponse
    Identification of a Spanner source used in the Dataflow job.
    UserDisplayProperties map[string]string
    List of display properties to help UI filter jobs.
    bigTableDetails List<BigTableIODetailsResponse>
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigqueryDetails List<BigQueryIODetailsResponse>
    Identification of a BigQuery source used in the Dataflow job.
    datastoreDetails List<DatastoreIODetailsResponse>
    Identification of a Datastore source used in the Dataflow job.
    fileDetails List<FileIODetailsResponse>
    Identification of a File source used in the Dataflow job.
    pubsubDetails List<PubSubIODetailsResponse>
    Identification of a Pub/Sub source used in the Dataflow job.
    sdkVersion SdkVersionResponse
    The SDK version used to run the job.
    spannerDetails List<SpannerIODetailsResponse>
    Identification of a Spanner source used in the Dataflow job.
    userDisplayProperties Map<String,String>
    List of display properties to help UI filter jobs.
    bigTableDetails BigTableIODetailsResponse[]
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigqueryDetails BigQueryIODetailsResponse[]
    Identification of a BigQuery source used in the Dataflow job.
    datastoreDetails DatastoreIODetailsResponse[]
    Identification of a Datastore source used in the Dataflow job.
    fileDetails FileIODetailsResponse[]
    Identification of a File source used in the Dataflow job.
    pubsubDetails PubSubIODetailsResponse[]
    Identification of a Pub/Sub source used in the Dataflow job.
    sdkVersion SdkVersionResponse
    The SDK version used to run the job.
    spannerDetails SpannerIODetailsResponse[]
    Identification of a Spanner source used in the Dataflow job.
    userDisplayProperties {[key: string]: string}
    List of display properties to help UI filter jobs.
    big_table_details Sequence[BigTableIODetailsResponse]
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigquery_details Sequence[BigQueryIODetailsResponse]
    Identification of a BigQuery source used in the Dataflow job.
    datastore_details Sequence[DatastoreIODetailsResponse]
    Identification of a Datastore source used in the Dataflow job.
    file_details Sequence[FileIODetailsResponse]
    Identification of a File source used in the Dataflow job.
    pubsub_details Sequence[PubSubIODetailsResponse]
    Identification of a Pub/Sub source used in the Dataflow job.
    sdk_version SdkVersionResponse
    The SDK version used to run the job.
    spanner_details Sequence[SpannerIODetailsResponse]
    Identification of a Spanner source used in the Dataflow job.
    user_display_properties Mapping[str, str]
    List of display properties to help UI filter jobs.
    bigTableDetails List<Property Map>
    Identification of a Cloud Bigtable source used in the Dataflow job.
    bigqueryDetails List<Property Map>
    Identification of a BigQuery source used in the Dataflow job.
    datastoreDetails List<Property Map>
    Identification of a Datastore source used in the Dataflow job.
    fileDetails List<Property Map>
    Identification of a File source used in the Dataflow job.
    pubsubDetails List<Property Map>
    Identification of a Pub/Sub source used in the Dataflow job.
    sdkVersion Property Map
    The SDK version used to run the job.
    spannerDetails List<Property Map>
    Identification of a Spanner source used in the Dataflow job.
    userDisplayProperties Map<String>
    List of display properties to help UI filter jobs.

    JobRequestedState, JobRequestedStateArgs

    JobStateUnknown
    JOB_STATE_UNKNOWNThe job's run state isn't specified.
    JobStateStopped
    JOB_STATE_STOPPEDJOB_STATE_STOPPED indicates that the job has not yet started to run.
    JobStateRunning
    JOB_STATE_RUNNINGJOB_STATE_RUNNING indicates that the job is currently running.
    JobStateDone
    JOB_STATE_DONEJOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    JobStateFailed
    JOB_STATE_FAILEDJOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateCancelled
    JOB_STATE_CANCELLEDJOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JobStateUpdated
    JOB_STATE_UPDATEDJOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateDraining
    JOB_STATE_DRAININGJOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JobStateDrained
    JOB_STATE_DRAINEDJOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JobStatePending
    JOB_STATE_PENDINGJOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
    JobStateCancelling
    JOB_STATE_CANCELLINGJOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JobStateQueued
    JOB_STATE_QUEUEDJOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UPJOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
    JobRequestedStateJobStateUnknown
    JOB_STATE_UNKNOWNThe job's run state isn't specified.
    JobRequestedStateJobStateStopped
    JOB_STATE_STOPPEDJOB_STATE_STOPPED indicates that the job has not yet started to run.
    JobRequestedStateJobStateRunning
    JOB_STATE_RUNNINGJOB_STATE_RUNNING indicates that the job is currently running.
    JobRequestedStateJobStateDone
    JOB_STATE_DONEJOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    JobRequestedStateJobStateFailed
    JOB_STATE_FAILEDJOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobRequestedStateJobStateCancelled
    JOB_STATE_CANCELLEDJOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JobRequestedStateJobStateUpdated
    JOB_STATE_UPDATEDJOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobRequestedStateJobStateDraining
    JOB_STATE_DRAININGJOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JobRequestedStateJobStateDrained
    JOB_STATE_DRAINEDJOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JobRequestedStateJobStatePending
    JOB_STATE_PENDINGJOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
    JobRequestedStateJobStateCancelling
    JOB_STATE_CANCELLINGJOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JobRequestedStateJobStateQueued
    JOB_STATE_QUEUEDJOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JobRequestedStateJobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UPJOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
    JobStateUnknown
    JOB_STATE_UNKNOWNThe job's run state isn't specified.
    JobStateStopped
    JOB_STATE_STOPPEDJOB_STATE_STOPPED indicates that the job has not yet started to run.
    JobStateRunning
    JOB_STATE_RUNNINGJOB_STATE_RUNNING indicates that the job is currently running.
    JobStateDone
    JOB_STATE_DONEJOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    JobStateFailed
    JOB_STATE_FAILEDJOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateCancelled
    JOB_STATE_CANCELLEDJOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JobStateUpdated
    JOB_STATE_UPDATEDJOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateDraining
    JOB_STATE_DRAININGJOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JobStateDrained
    JOB_STATE_DRAINEDJOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JobStatePending
    JOB_STATE_PENDINGJOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING, or JOB_STATE_FAILED.
    JobStateCancelling
    JOB_STATE_CANCELLINGJOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JobStateQueued
    JOB_STATE_QUEUEDJOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UPJOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. Currently, this is an opt-in feature, please reach out to Cloud support team if you are interested.
    JobStateUnknown
    JOB_STATE_UNKNOWN: The job's run state isn't specified.
    JobStateStopped
    JOB_STATE_STOPPED indicates that the job has not yet started to run.
    JobStateRunning
    JOB_STATE_RUNNING indicates that the job is currently running.
    JobStateDone
    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    JobStateFailed
    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateCancelled
    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JobStateUpdated
    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JobStateDraining
    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JobStateDrained
    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JobStatePending
    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    JobStateCancelling
    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JobStateQueued
    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JobStateResourceCleaningUp
    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. This is currently an opt-in feature; please reach out to the Cloud support team if you are interested.
    JOB_STATE_UNKNOWN
    JOB_STATE_UNKNOWN: The job's run state isn't specified.
    JOB_STATE_STOPPED
    JOB_STATE_STOPPED indicates that the job has not yet started to run.
    JOB_STATE_RUNNING
    JOB_STATE_RUNNING indicates that the job is currently running.
    JOB_STATE_DONE
    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    JOB_STATE_FAILED
    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JOB_STATE_CANCELLED
    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    JOB_STATE_UPDATED
    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    JOB_STATE_DRAINING
    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    JOB_STATE_DRAINED
    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    JOB_STATE_PENDING
    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    JOB_STATE_CANCELLING
    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    JOB_STATE_QUEUED
    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    JOB_STATE_RESOURCE_CLEANING_UP
    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. This is currently an opt-in feature; please reach out to the Cloud support team if you are interested.
    "JOB_STATE_UNKNOWN"
    JOB_STATE_UNKNOWN: The job's run state isn't specified.
    "JOB_STATE_STOPPED"
    JOB_STATE_STOPPED indicates that the job has not yet started to run.
    "JOB_STATE_RUNNING"
    JOB_STATE_RUNNING indicates that the job is currently running.
    "JOB_STATE_DONE"
    JOB_STATE_DONE indicates that the job has successfully completed. This is a terminal job state. This state may be set by the Cloud Dataflow service, as a transition from JOB_STATE_RUNNING. It may also be set via a Cloud Dataflow UpdateJob call, if the job has not yet reached a terminal state.
    "JOB_STATE_FAILED"
    JOB_STATE_FAILED indicates that the job has failed. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    "JOB_STATE_CANCELLED"
    JOB_STATE_CANCELLED indicates that the job has been explicitly cancelled. This is a terminal job state. This state may only be set via a Cloud Dataflow UpdateJob call, and only if the job has not yet reached another terminal state.
    "JOB_STATE_UPDATED"
    JOB_STATE_UPDATED indicates that the job was successfully updated, meaning that this job was stopped and another job was started, inheriting state from this one. This is a terminal job state. This state may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_RUNNING.
    "JOB_STATE_DRAINING"
    JOB_STATE_DRAINING indicates that the job is in the process of draining. A draining job has stopped pulling from its input sources and is processing any data that remains in-flight. This state may be set via a Cloud Dataflow UpdateJob call, but only as a transition from JOB_STATE_RUNNING. Jobs that are draining may only transition to JOB_STATE_DRAINED, JOB_STATE_CANCELLED, or JOB_STATE_FAILED.
    "JOB_STATE_DRAINED"
    JOB_STATE_DRAINED indicates that the job has been drained. A drained job terminated by stopping pulling from its input sources and processing any data that remained in-flight when draining was requested. This state is a terminal state, may only be set by the Cloud Dataflow service, and only as a transition from JOB_STATE_DRAINING.
    "JOB_STATE_PENDING"
    JOB_STATE_PENDING indicates that the job has been created but is not yet running. Jobs that are pending may only transition to JOB_STATE_RUNNING or JOB_STATE_FAILED.
    "JOB_STATE_CANCELLING"
    JOB_STATE_CANCELLING indicates that the job has been explicitly cancelled and is in the process of stopping. Jobs that are cancelling may only transition to JOB_STATE_CANCELLED or JOB_STATE_FAILED.
    "JOB_STATE_QUEUED"
    JOB_STATE_QUEUED indicates that the job has been created but is being delayed until launch. Jobs that are queued may only transition to JOB_STATE_PENDING or JOB_STATE_CANCELLED.
    "JOB_STATE_RESOURCE_CLEANING_UP"
    JOB_STATE_RESOURCE_CLEANING_UP indicates that the batch job's associated resources are currently being cleaned up after a successful run. This is currently an opt-in feature; please reach out to the Cloud support team if you are interested.
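The state descriptions above define a small transition graph: five terminal states, plus a handful of states whose outgoing transitions are explicitly restricted. As a rough illustration (a sketch inferred from the documented "may only transition to" rules, not an official API surface), the rules could be encoded as plain data:

```python
# Sketch of the documented Dataflow job-state rules. Derived from the
# enum descriptions above; names are the raw JOB_STATE_* string values.

TERMINAL_STATES = {
    "JOB_STATE_DONE",
    "JOB_STATE_FAILED",
    "JOB_STATE_CANCELLED",
    "JOB_STATE_UPDATED",
    "JOB_STATE_DRAINED",
}

# Only states with an explicitly documented restriction are listed here.
ALLOWED_TRANSITIONS = {
    "JOB_STATE_QUEUED": {"JOB_STATE_PENDING", "JOB_STATE_CANCELLED"},
    "JOB_STATE_PENDING": {"JOB_STATE_RUNNING", "JOB_STATE_FAILED"},
    "JOB_STATE_DRAINING": {
        "JOB_STATE_DRAINED", "JOB_STATE_CANCELLED", "JOB_STATE_FAILED",
    },
    "JOB_STATE_CANCELLING": {"JOB_STATE_CANCELLED", "JOB_STATE_FAILED"},
}

def is_terminal(state: str) -> bool:
    """Return True if the job can no longer change state."""
    return state in TERMINAL_STATES

def transition_allowed(current: str, new: str) -> bool:
    """Check a proposed transition against the documented restrictions."""
    if is_terminal(current):
        return False  # terminal states have no outgoing transitions
    if current in ALLOWED_TRANSITIONS:
        return new in ALLOWED_TRANSITIONS[current]
    return True  # no explicit restriction documented for this state
```

For example, `transition_allowed("JOB_STATE_DRAINING", "JOB_STATE_DONE")` is `False`, because a draining job may only end up drained, cancelled, or failed.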

    JobType, JobTypeArgs

    JobTypeUnknown
    JOB_TYPE_UNKNOWN: The type of the job is unspecified, or unknown.
    JobTypeBatch
    JOB_TYPE_BATCH: A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
    JobTypeStreaming
    JOB_TYPE_STREAMING: A continuously streaming job with no end: data is read, processed, and written continuously.
    JobTypeJobTypeUnknown
    JOB_TYPE_UNKNOWN: The type of the job is unspecified, or unknown.
    JobTypeJobTypeBatch
    JOB_TYPE_BATCH: A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
    JobTypeJobTypeStreaming
    JOB_TYPE_STREAMING: A continuously streaming job with no end: data is read, processed, and written continuously.
    JobTypeUnknown
    JOB_TYPE_UNKNOWN: The type of the job is unspecified, or unknown.
    JobTypeBatch
    JOB_TYPE_BATCH: A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
    JobTypeStreaming
    JOB_TYPE_STREAMING: A continuously streaming job with no end: data is read, processed, and written continuously.
    JobTypeUnknown
    JOB_TYPE_UNKNOWN: The type of the job is unspecified, or unknown.
    JobTypeBatch
    JOB_TYPE_BATCH: A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
    JobTypeStreaming
    JOB_TYPE_STREAMING: A continuously streaming job with no end: data is read, processed, and written continuously.
    JOB_TYPE_UNKNOWN
    JOB_TYPE_UNKNOWN: The type of the job is unspecified, or unknown.
    JOB_TYPE_BATCH
    JOB_TYPE_BATCH: A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
    JOB_TYPE_STREAMING
    JOB_TYPE_STREAMING: A continuously streaming job with no end: data is read, processed, and written continuously.
    "JOB_TYPE_UNKNOWN"
    JOB_TYPE_UNKNOWN: The type of the job is unspecified, or unknown.
    "JOB_TYPE_BATCH"
    JOB_TYPE_BATCH: A batch job with a well-defined end point: data is read, data is processed, data is written, and the job is done.
    "JOB_TYPE_STREAMING"
    JOB_TYPE_STREAMING: A continuously streaming job with no end: data is read, processed, and written continuously.
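The practical consequence of the batch/streaming split is what a caller waits for: a batch job eventually reaches a terminal state, while a streaming job runs indefinitely until drained or cancelled. A minimal sketch of that distinction (illustrative only; the function name and the "steady state" framing are this example's own):

```python
# Sketch: pick the job state a caller should wait for, by job type.
# JOB_TYPE_BATCH jobs finish; JOB_TYPE_STREAMING jobs only need to
# reach a running steady state.

def desired_end_state(job_type: str) -> str:
    if job_type == "JOB_TYPE_BATCH":
        return "JOB_STATE_DONE"      # batch jobs reach a terminal state
    if job_type == "JOB_TYPE_STREAMING":
        return "JOB_STATE_RUNNING"   # streaming jobs run continuously
    raise ValueError(f"unknown job type: {job_type}")
```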

    Package, PackageArgs

    Location string
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    Name string
    The name of the package.
    Location string
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    Name string
    The name of the package.
    location String
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name String
    The name of the package.
    location string
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name string
    The name of the package.
    location str
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name str
    The name of the package.
    location String
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name String
    The name of the package.
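In the flattened Property Map form shown above, a package is just a name plus a Cloud Storage location. A minimal helper might assemble one; note the object-path layout after the bucket is an illustrative assumption, since the reference above only shows the `storage.googleapis.com/{bucket}` prefix:

```python
# Hypothetical helper: build a Package property map for a staged file
# in Google Cloud Storage. The path layout after the bucket is an
# assumption for illustration.

def gcs_package(bucket: str, object_path: str, name: str) -> dict:
    return {
        "location": f"storage.googleapis.com/{bucket}/{object_path}",
        "name": name,
    }
```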

    PackageResponse, PackageResponseArgs

    Location string
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    Name string
    The name of the package.
    Location string
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    Name string
    The name of the package.
    location String
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name String
    The name of the package.
    location string
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name string
    The name of the package.
    location str
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name str
    The name of the package.
    location String
    The resource to read the package from. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket} bucket.storage.googleapis.com/
    name String
    The name of the package.

    PipelineDescription, PipelineDescriptionArgs

    DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayData>
    Pipeline level display data.
    ExecutionPipelineStage List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageSummary>
    Description of each stage of execution of the pipeline.
    OriginalPipelineTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TransformSummary>
    Description of each transform in the pipeline and collections between them.
    StepNamesHash string
    A hash value of the submitted pipeline portable graph step names, if it exists.
    DisplayData []DisplayData
    Pipeline level display data.
    ExecutionPipelineStage []ExecutionStageSummary
    Description of each stage of execution of the pipeline.
    OriginalPipelineTransform []TransformSummary
    Description of each transform in the pipeline and collections between them.
    StepNamesHash string
    A hash value of the submitted pipeline portable graph step names, if it exists.
    displayData List<DisplayData>
    Pipeline level display data.
    executionPipelineStage List<ExecutionStageSummary>
    Description of each stage of execution of the pipeline.
    originalPipelineTransform List<TransformSummary>
    Description of each transform in the pipeline and collections between them.
    stepNamesHash String
    A hash value of the submitted pipeline portable graph step names, if it exists.
    displayData DisplayData[]
    Pipeline level display data.
    executionPipelineStage ExecutionStageSummary[]
    Description of each stage of execution of the pipeline.
    originalPipelineTransform TransformSummary[]
    Description of each transform in the pipeline and collections between them.
    stepNamesHash string
    A hash value of the submitted pipeline portable graph step names, if it exists.
    display_data Sequence[DisplayData]
    Pipeline level display data.
    execution_pipeline_stage Sequence[ExecutionStageSummary]
    Description of each stage of execution of the pipeline.
    original_pipeline_transform Sequence[TransformSummary]
    Description of each transform in the pipeline and collections between them.
    step_names_hash str
    A hash value of the submitted pipeline portable graph step names, if it exists.
    displayData List<Property Map>
    Pipeline level display data.
    executionPipelineStage List<Property Map>
    Description of each stage of execution of the pipeline.
    originalPipelineTransform List<Property Map>
    Description of each transform in the pipeline and collections between them.
    stepNamesHash String
    A hash value of the submitted pipeline portable graph step names, if it exists.
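A PipelineDescription is thus three parallel lists plus an optional hash. A small sketch that tallies the pieces of such a property map (the function and sample keys mirror the camelCase Property Map tab above; this is not part of the SDK):

```python
# Sketch: summarize a PipelineDescription property map by counting its
# display data entries, execution stages, and original transforms.

def summarize_pipeline(description: dict) -> dict:
    return {
        "display_items": len(description.get("displayData", [])),
        "stages": len(description.get("executionPipelineStage", [])),
        "transforms": len(description.get("originalPipelineTransform", [])),
    }
```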

    PipelineDescriptionResponse, PipelineDescriptionResponseArgs

    DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayDataResponse>
    Pipeline level display data.
    ExecutionPipelineStage List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageSummaryResponse>
    Description of each stage of execution of the pipeline.
    OriginalPipelineTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TransformSummaryResponse>
    Description of each transform in the pipeline and collections between them.
    StepNamesHash string
    A hash value of the submitted pipeline portable graph step names, if it exists.
    DisplayData []DisplayDataResponse
    Pipeline level display data.
    ExecutionPipelineStage []ExecutionStageSummaryResponse
    Description of each stage of execution of the pipeline.
    OriginalPipelineTransform []TransformSummaryResponse
    Description of each transform in the pipeline and collections between them.
    StepNamesHash string
    A hash value of the submitted pipeline portable graph step names, if it exists.
    displayData List<DisplayDataResponse>
    Pipeline level display data.
    executionPipelineStage List<ExecutionStageSummaryResponse>
    Description of each stage of execution of the pipeline.
    originalPipelineTransform List<TransformSummaryResponse>
    Description of each transform in the pipeline and collections between them.
    stepNamesHash String
    A hash value of the submitted pipeline portable graph step names, if it exists.
    displayData DisplayDataResponse[]
    Pipeline level display data.
    executionPipelineStage ExecutionStageSummaryResponse[]
    Description of each stage of execution of the pipeline.
    originalPipelineTransform TransformSummaryResponse[]
    Description of each transform in the pipeline and collections between them.
    stepNamesHash string
    A hash value of the submitted pipeline portable graph step names, if it exists.
    display_data Sequence[DisplayDataResponse]
    Pipeline level display data.
    execution_pipeline_stage Sequence[ExecutionStageSummaryResponse]
    Description of each stage of execution of the pipeline.
    original_pipeline_transform Sequence[TransformSummaryResponse]
    Description of each transform in the pipeline and collections between them.
    step_names_hash str
    A hash value of the submitted pipeline portable graph step names, if it exists.
    displayData List<Property Map>
    Pipeline level display data.
    executionPipelineStage List<Property Map>
    Description of each stage of execution of the pipeline.
    originalPipelineTransform List<Property Map>
    Description of each transform in the pipeline and collections between them.
    stepNamesHash String
    A hash value of the submitted pipeline portable graph step names, if it exists.

    PubSubIODetails, PubSubIODetailsArgs

    Subscription string
    Subscription used in the connection.
    Topic string
    Topic accessed in the connection.
    Subscription string
    Subscription used in the connection.
    Topic string
    Topic accessed in the connection.
    subscription String
    Subscription used in the connection.
    topic String
    Topic accessed in the connection.
    subscription string
    Subscription used in the connection.
    topic string
    Topic accessed in the connection.
    subscription str
    Subscription used in the connection.
    topic str
    Topic accessed in the connection.
    subscription String
    Subscription used in the connection.
    topic String
    Topic accessed in the connection.

    PubSubIODetailsResponse, PubSubIODetailsResponseArgs

    Subscription string
    Subscription used in the connection.
    Topic string
    Topic accessed in the connection.
    Subscription string
    Subscription used in the connection.
    Topic string
    Topic accessed in the connection.
    subscription String
    Subscription used in the connection.
    topic String
    Topic accessed in the connection.
    subscription string
    Subscription used in the connection.
    topic string
    Topic accessed in the connection.
    subscription str
    Subscription used in the connection.
    topic str
    Topic accessed in the connection.
    subscription String
    Subscription used in the connection.
    topic String
    Topic accessed in the connection.
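PubSubIODetails carries the subscription and topic a connector touches. A sketch of building one, using fully qualified Pub/Sub resource names (`projects/{project}/topics/{topic}`, `projects/{project}/subscriptions/{subscription}`); whether Dataflow reports the fully qualified form in these fields is an assumption here:

```python
# Hypothetical helper: build a PubSubIODetails property map. Uses full
# Pub/Sub resource names; only the fields that are set are included.

def pubsub_io_details(project: str, topic: str = None,
                      subscription: str = None) -> dict:
    details = {}
    if topic is not None:
        details["topic"] = f"projects/{project}/topics/{topic}"
    if subscription is not None:
        details["subscription"] = (
            f"projects/{project}/subscriptions/{subscription}"
        )
    return details
```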

    RuntimeUpdatableParams, RuntimeUpdatableParamsArgs

    MaxNumWorkers int
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    MinNumWorkers int
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    MaxNumWorkers int
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    MinNumWorkers int
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    maxNumWorkers Integer
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    minNumWorkers Integer
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    maxNumWorkers number
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    minNumWorkers number
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    max_num_workers int
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    min_num_workers int
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    maxNumWorkers Number
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    minNumWorkers Number
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
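Since `minNumWorkers` is the floor autoscaling can scale down to and `maxNumWorkers` the cap it can scale up to, the two only make sense together when min does not exceed max. A sketch that builds the property map with that sanity check (the check itself is this example's addition, not a rule stated by the reference above):

```python
# Sketch: build a RuntimeUpdatableParams property map for a Streaming
# Engine job, validating the obvious min <= max worker invariant.

def runtime_updatable_params(min_num_workers: int,
                             max_num_workers: int) -> dict:
    if min_num_workers < 0 or max_num_workers < min_num_workers:
        raise ValueError("need 0 <= minNumWorkers <= maxNumWorkers")
    return {
        "minNumWorkers": min_num_workers,
        "maxNumWorkers": max_num_workers,
    }
```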

    RuntimeUpdatableParamsResponse, RuntimeUpdatableParamsResponseArgs

    MaxNumWorkers int
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    MinNumWorkers int
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    MaxNumWorkers int
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    MinNumWorkers int
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    maxNumWorkers Integer
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    minNumWorkers Integer
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    maxNumWorkers number
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    minNumWorkers number
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    max_num_workers int
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    min_num_workers int
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
    maxNumWorkers Number
    The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
    minNumWorkers Number
    The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.

    SdkBugResponse, SdkBugResponseArgs

    Severity string
    How severe the SDK bug is.
    Type string
    Describes the impact of this SDK bug.
    Uri string
    Link to more information on the bug.
    Severity string
    How severe the SDK bug is.
    Type string
    Describes the impact of this SDK bug.
    Uri string
    Link to more information on the bug.
    severity String
    How severe the SDK bug is.
    type String
    Describes the impact of this SDK bug.
    uri String
    Link to more information on the bug.
    severity string
    How severe the SDK bug is.
    type string
    Describes the impact of this SDK bug.
    uri string
    Link to more information on the bug.
    severity str
    How severe the SDK bug is.
    type str
    Describes the impact of this SDK bug.
    uri str
    Link to more information on the bug.
    severity String
    How severe the SDK bug is.
    type String
    Describes the impact of this SDK bug.
    uri String
    Link to more information on the bug.

    SdkHarnessContainerImage, SdkHarnessContainerImageArgs

    Capabilities List<string>
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    ContainerImage string
    A docker container image that resides in Google Container Registry.
    EnvironmentId string
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    UseSingleCorePerContainer bool
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    Capabilities []string
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    ContainerImage string
    A docker container image that resides in Google Container Registry.
    EnvironmentId string
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    UseSingleCorePerContainer bool
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities List<String>
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    containerImage String
    A docker container image that resides in Google Container Registry.
    environmentId String
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    useSingleCorePerContainer Boolean
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities string[]
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    containerImage string
    A docker container image that resides in Google Container Registry.
    environmentId string
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    useSingleCorePerContainer boolean
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities Sequence[str]
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    container_image str
    A docker container image that resides in Google Container Registry.
    environment_id str
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    use_single_core_per_container bool
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities List<String>
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    containerImage String
    A docker container image that resides in Google Container Registry.
    environmentId String
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    useSingleCorePerContainer Boolean
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
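Putting the four fields together, an SdkHarnessContainerImage entry is a container image reference plus optional capabilities, environment ID, and the single-core hint. A sketch of assembling one as a property map (field names follow the camelCase tab above; the image name in the usage note is purely illustrative):

```python
# Hypothetical helper: assemble an SdkHarnessContainerImage property
# map. Optional fields are included only when provided.

def sdk_harness_image(container_image: str,
                      environment_id: str = None,
                      capabilities=None,
                      single_core: bool = False) -> dict:
    image = {
        "containerImage": container_image,
        "useSingleCorePerContainer": single_core,
    }
    if environment_id is not None:
        image["environmentId"] = environment_id
    if capabilities:
        image["capabilities"] = list(capabilities)
    return image
```

For instance, `sdk_harness_image("gcr.io/example/beam-sdk:2.52.0", capabilities=["beam:coder:bytes:v1"])` yields a map with the image, the default multi-core hint, and one capability; `gcr.io/example/beam-sdk` is an invented image name.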

    SdkHarnessContainerImageResponse, SdkHarnessContainerImageResponseArgs

    Capabilities List<string>
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    ContainerImage string
    A docker container image that resides in Google Container Registry.
    EnvironmentId string
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    UseSingleCorePerContainer bool
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    Capabilities []string
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    ContainerImage string
    A docker container image that resides in Google Container Registry.
    EnvironmentId string
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    UseSingleCorePerContainer bool
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities List<String>
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    containerImage String
    A docker container image that resides in Google Container Registry.
    environmentId String
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    useSingleCorePerContainer Boolean
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities string[]
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    containerImage string
    A docker container image that resides in Google Container Registry.
    environmentId string
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    useSingleCorePerContainer boolean
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities Sequence[str]
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    container_image str
    A docker container image that resides in Google Container Registry.
    environment_id str
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    use_single_core_per_container bool
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
    capabilities List<String>
    The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
    containerImage String
    A docker container image that resides in Google Container Registry.
    environmentId String
    Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
    useSingleCorePerContainer Boolean
    If true, recommends that the Dataflow service use only one core per SDK container instance with this image. If false (or unset), recommends using more than one core per SDK container instance with this image for efficiency. Note that the Dataflow service may choose to override this property if needed.
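
Taken together, these fields describe one SDK harness container entry. A minimal Python sketch of such an entry as a plain mapping follows; the image path, environment ID, and capability URN are hypothetical placeholders, not values from this page:

```python
# Hedged sketch: one SDK harness container image entry built from the
# documented fields. The image path, environment ID, and capability URN
# below are hypothetical placeholders.
sdk_harness_container_image = {
    "containerImage": "gcr.io/my-project/my-beam-sdk:latest",
    "environmentId": "env-1",
    "useSingleCorePerContainer": False,  # let the service use multiple cores
    "capabilities": ["beam:protocol:progress_reporting:v1"],
}
```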

    SdkVersion, SdkVersionArgs

    SdkSupportStatus Pulumi.GoogleNative.Dataflow.V1b3.SdkVersionSdkSupportStatus
    The support status for this SDK version.
    Version string
    The version of the SDK used to run the job.
    VersionDisplayName string
    A readable string describing the version of the SDK.
    SdkSupportStatus SdkVersionSdkSupportStatus
    The support status for this SDK version.
    Version string
    The version of the SDK used to run the job.
    VersionDisplayName string
    A readable string describing the version of the SDK.
    sdkSupportStatus SdkVersionSdkSupportStatus
    The support status for this SDK version.
    version String
    The version of the SDK used to run the job.
    versionDisplayName String
    A readable string describing the version of the SDK.
    sdkSupportStatus SdkVersionSdkSupportStatus
    The support status for this SDK version.
    version string
    The version of the SDK used to run the job.
    versionDisplayName string
    A readable string describing the version of the SDK.
    sdk_support_status SdkVersionSdkSupportStatus
    The support status for this SDK version.
    version str
    The version of the SDK used to run the job.
    version_display_name str
    A readable string describing the version of the SDK.
    sdkSupportStatus "UNKNOWN" | "SUPPORTED" | "STALE" | "DEPRECATED" | "UNSUPPORTED"
    The support status for this SDK version.
    version String
    The version of the SDK used to run the job.
    versionDisplayName String
    A readable string describing the version of the SDK.

    SdkVersionResponse, SdkVersionResponseArgs

    Bugs List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkBugResponse>
    Known bugs found in this SDK version.
    SdkSupportStatus string
    The support status for this SDK version.
    Version string
    The version of the SDK used to run the job.
    VersionDisplayName string
    A readable string describing the version of the SDK.
    Bugs []SdkBugResponse
    Known bugs found in this SDK version.
    SdkSupportStatus string
    The support status for this SDK version.
    Version string
    The version of the SDK used to run the job.
    VersionDisplayName string
    A readable string describing the version of the SDK.
    bugs List<SdkBugResponse>
    Known bugs found in this SDK version.
    sdkSupportStatus String
    The support status for this SDK version.
    version String
    The version of the SDK used to run the job.
    versionDisplayName String
    A readable string describing the version of the SDK.
    bugs SdkBugResponse[]
    Known bugs found in this SDK version.
    sdkSupportStatus string
    The support status for this SDK version.
    version string
    The version of the SDK used to run the job.
    versionDisplayName string
    A readable string describing the version of the SDK.
    bugs Sequence[SdkBugResponse]
    Known bugs found in this SDK version.
    sdk_support_status str
    The support status for this SDK version.
    version str
    The version of the SDK used to run the job.
    version_display_name str
    A readable string describing the version of the SDK.
    bugs List<Property Map>
    Known bugs found in this SDK version.
    sdkSupportStatus String
    The support status for this SDK version.
    version String
    The version of the SDK used to run the job.
    versionDisplayName String
    A readable string describing the version of the SDK.

    SdkVersionSdkSupportStatus, SdkVersionSdkSupportStatusArgs

    Unknown
    UNKNOWN: Cloud Dataflow is unaware of this version.
    Supported
    SUPPORTED: This is a known version of an SDK, and is supported.
    Stale
    STALE: A newer version of the SDK family exists, and an update is recommended.
    Deprecated
    DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
    Unsupported
    UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
    SdkVersionSdkSupportStatusUnknown
    UNKNOWN: Cloud Dataflow is unaware of this version.
    SdkVersionSdkSupportStatusSupported
    SUPPORTED: This is a known version of an SDK, and is supported.
    SdkVersionSdkSupportStatusStale
    STALE: A newer version of the SDK family exists, and an update is recommended.
    SdkVersionSdkSupportStatusDeprecated
    DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
    SdkVersionSdkSupportStatusUnsupported
    UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
    Unknown
    UNKNOWN: Cloud Dataflow is unaware of this version.
    Supported
    SUPPORTED: This is a known version of an SDK, and is supported.
    Stale
    STALE: A newer version of the SDK family exists, and an update is recommended.
    Deprecated
    DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
    Unsupported
    UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
    Unknown
    UNKNOWN: Cloud Dataflow is unaware of this version.
    Supported
    SUPPORTED: This is a known version of an SDK, and is supported.
    Stale
    STALE: A newer version of the SDK family exists, and an update is recommended.
    Deprecated
    DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
    Unsupported
    UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
    UNKNOWN
    UNKNOWN: Cloud Dataflow is unaware of this version.
    SUPPORTED
    SUPPORTED: This is a known version of an SDK, and is supported.
    STALE
    STALE: A newer version of the SDK family exists, and an update is recommended.
    DEPRECATED
    DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
    UNSUPPORTED
    UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
    "UNKNOWN"
    UNKNOWN: Cloud Dataflow is unaware of this version.
    "SUPPORTED"
    SUPPORTED: This is a known version of an SDK, and is supported.
    "STALE"
    STALE: A newer version of the SDK family exists, and an update is recommended.
    "DEPRECATED"
    DEPRECATED: This version of the SDK is deprecated and will eventually be unsupported.
    "UNSUPPORTED"
    UNSUPPORTED: Support for this SDK version has ended and it should no longer be used.
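
The enum above lends itself to a small validity check when inspecting a job's reported SDK version. A hedged Python sketch; the helper is ours for illustration and is not part of the Pulumi SDK:

```python
# Sketch: classify an SdkVersionSdkSupportStatus value using the documented
# members. The helper function is illustrative, not part of the Pulumi SDK.
SDK_SUPPORT_STATUSES = {"UNKNOWN", "SUPPORTED", "STALE", "DEPRECATED", "UNSUPPORTED"}

def upgrade_recommended(status: str) -> bool:
    """True when the documented meaning of the status implies an SDK update."""
    if status not in SDK_SUPPORT_STATUSES:
        raise ValueError(f"not a documented support status: {status!r}")
    # STALE: an update is recommended; DEPRECATED/UNSUPPORTED: an update is needed.
    return status in {"STALE", "DEPRECATED", "UNSUPPORTED"}
```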

    SpannerIODetails, SpannerIODetailsArgs

    DatabaseId string
    DatabaseId accessed in the connection.
    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    DatabaseId string
    DatabaseId accessed in the connection.
    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    databaseId String
    DatabaseId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.
    databaseId string
    DatabaseId accessed in the connection.
    instanceId string
    InstanceId accessed in the connection.
    project string
    ProjectId accessed in the connection.
    database_id str
    DatabaseId accessed in the connection.
    instance_id str
    InstanceId accessed in the connection.
    project str
    ProjectId accessed in the connection.
    databaseId String
    DatabaseId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.
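
As with the other IO-details types on this page, these fields describe a connection reported in the job's metadata. A sketch of the shape in Python; the project, instance, and database identifiers are hypothetical placeholders:

```python
# Hedged sketch: SpannerIODetails as a plain mapping with the documented
# fields. All three identifiers below are hypothetical placeholders.
spanner_io_details = {
    "project": "my-project",
    "instanceId": "my-spanner-instance",
    "databaseId": "orders-db",
}
```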

    SpannerIODetailsResponse, SpannerIODetailsResponseArgs

    DatabaseId string
    DatabaseId accessed in the connection.
    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    DatabaseId string
    DatabaseId accessed in the connection.
    InstanceId string
    InstanceId accessed in the connection.
    Project string
    ProjectId accessed in the connection.
    databaseId String
    DatabaseId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.
    databaseId string
    DatabaseId accessed in the connection.
    instanceId string
    InstanceId accessed in the connection.
    project string
    ProjectId accessed in the connection.
    database_id str
    DatabaseId accessed in the connection.
    instance_id str
    InstanceId accessed in the connection.
    project str
    ProjectId accessed in the connection.
    databaseId String
    DatabaseId accessed in the connection.
    instanceId String
    InstanceId accessed in the connection.
    project String
    ProjectId accessed in the connection.

    StageSource, StageSourceArgs

    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    SizeBytes string
    Size of the source, if measurable.
    UserName string
    Human-readable name for this source; may be user or system generated.
    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    SizeBytes string
    Size of the source, if measurable.
    UserName string
    Human-readable name for this source; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    sizeBytes String
    Size of the source, if measurable.
    userName String
    Human-readable name for this source; may be user or system generated.
    name string
    Dataflow service generated name for this source.
    originalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    sizeBytes string
    Size of the source, if measurable.
    userName string
    Human-readable name for this source; may be user or system generated.
    name str
    Dataflow service generated name for this source.
    original_transform_or_collection str
    User name for the original user transform or collection with which this source is most closely associated.
    size_bytes str
    Size of the source, if measurable.
    user_name str
    Human-readable name for this source; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    sizeBytes String
    Size of the source, if measurable.
    userName String
    Human-readable name for this source; may be user or system generated.

    StageSourceResponse, StageSourceResponseArgs

    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    SizeBytes string
    Size of the source, if measurable.
    UserName string
    Human-readable name for this source; may be user or system generated.
    Name string
    Dataflow service generated name for this source.
    OriginalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    SizeBytes string
    Size of the source, if measurable.
    UserName string
    Human-readable name for this source; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    sizeBytes String
    Size of the source, if measurable.
    userName String
    Human-readable name for this source; may be user or system generated.
    name string
    Dataflow service generated name for this source.
    originalTransformOrCollection string
    User name for the original user transform or collection with which this source is most closely associated.
    sizeBytes string
    Size of the source, if measurable.
    userName string
    Human-readable name for this source; may be user or system generated.
    name str
    Dataflow service generated name for this source.
    original_transform_or_collection str
    User name for the original user transform or collection with which this source is most closely associated.
    size_bytes str
    Size of the source, if measurable.
    user_name str
    Human-readable name for this source; may be user or system generated.
    name String
    Dataflow service generated name for this source.
    originalTransformOrCollection String
    User name for the original user transform or collection with which this source is most closely associated.
    sizeBytes String
    Size of the source, if measurable.
    userName String
    Human-readable name for this source; may be user or system generated.

    Step, StepArgs

    Kind string
    The kind of step in the Cloud Dataflow job.
    Name string
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    Properties Dictionary<string, string>
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    Kind string
    The kind of step in the Cloud Dataflow job.
    Name string
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    Properties map[string]string
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind String
    The kind of step in the Cloud Dataflow job.
    name String
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties Map<String,String>
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind string
    The kind of step in the Cloud Dataflow job.
    name string
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties {[key: string]: string}
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind str
    The kind of step in the Cloud Dataflow job.
    name str
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties Mapping[str, str]
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind String
    The kind of step in the Cloud Dataflow job.
    name String
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties Map<String>
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
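
Since step names must be unique within a job, a step list can be sanity-checked before creation. A hedged Python sketch; the step kinds and property keys are hypothetical placeholders that only illustrate the documented shape (kind, name, properties):

```python
# Sketch: Step entries with the documented fields (kind, name, properties),
# plus a check for the documented uniqueness requirement on step names.
# Step kinds and property keys here are hypothetical placeholders.
steps = [
    {"kind": "ParallelRead", "name": "s1", "properties": {"format": "text"}},
    {"kind": "ParallelDo", "name": "s2", "properties": {"user_name": "MyFn"}},
]

def assert_unique_step_names(steps: list) -> None:
    """Raise if two steps share a name, which the API forbids within a job."""
    names = [s["name"] for s in steps]
    if len(names) != len(set(names)):
        raise ValueError("step names must be unique within a Cloud Dataflow job")

assert_unique_step_names(steps)  # the sketch above passes the check
```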

    StepResponse, StepResponseArgs

    Kind string
    The kind of step in the Cloud Dataflow job.
    Name string
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    Properties Dictionary<string, string>
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    Kind string
    The kind of step in the Cloud Dataflow job.
    Name string
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    Properties map[string]string
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind String
    The kind of step in the Cloud Dataflow job.
    name String
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties Map<String,String>
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind string
    The kind of step in the Cloud Dataflow job.
    name string
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties {[key: string]: string}
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind str
    The kind of step in the Cloud Dataflow job.
    name str
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties Mapping[str, str]
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
    kind String
    The kind of step in the Cloud Dataflow job.
    name String
    The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
    properties Map<String>
    Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.

    TaskRunnerSettings, TaskRunnerSettingsArgs

    Alsologtostderr bool
    Whether to also send taskrunner log info to stderr.
    BaseTaskDir string
    The location on the worker for task-specific subdirectories.
    BaseUrl string
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    CommandlinesFileName string
    The file to store preprocessing commands in.
    ContinueOnException bool
    Whether to continue taskrunner if an exception is hit.
    DataflowApiVersion string
    The API version of the endpoint, e.g. "v1b3".
    HarnessCommand string
    The command to launch the worker harness.
    LanguageHint string
    The suggested backend language.
    LogDir string
    The directory on the VM to store logs.
    LogToSerialconsole bool
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    LogUploadLocation string
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    OauthScopes List<string>
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    ParallelWorkerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerSettings
    The settings to pass to the parallel worker harness.
    StreamingWorkerMainClass string
    The streaming worker main class name.
    TaskGroup string
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    TaskUser string
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    TempStoragePrefix string
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    VmId string
    The ID string of the VM.
    WorkflowFileName string
    The file to store the workflow in.
    Alsologtostderr bool
    Whether to also send taskrunner log info to stderr.
    BaseTaskDir string
    The location on the worker for task-specific subdirectories.
    BaseUrl string
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    CommandlinesFileName string
    The file to store preprocessing commands in.
    ContinueOnException bool
    Whether to continue taskrunner if an exception is hit.
    DataflowApiVersion string
    The API version of the endpoint, e.g. "v1b3".
    HarnessCommand string
    The command to launch the worker harness.
    LanguageHint string
    The suggested backend language.
    LogDir string
    The directory on the VM to store logs.
    LogToSerialconsole bool
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    LogUploadLocation string
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    OauthScopes []string
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    ParallelWorkerSettings WorkerSettings
    The settings to pass to the parallel worker harness.
    StreamingWorkerMainClass string
    The streaming worker main class name.
    TaskGroup string
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    TaskUser string
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    TempStoragePrefix string
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    VmId string
    The ID string of the VM.
    WorkflowFileName string
    The file to store the workflow in.
    alsologtostderr Boolean
    Whether to also send taskrunner log info to stderr.
    baseTaskDir String
    The location on the worker for task-specific subdirectories.
    baseUrl String
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    commandlinesFileName String
    The file to store preprocessing commands in.
    continueOnException Boolean
    Whether to continue taskrunner if an exception is hit.
    dataflowApiVersion String
    The API version of the endpoint, e.g. "v1b3".
    harnessCommand String
    The command to launch the worker harness.
    languageHint String
    The suggested backend language.
    logDir String
    The directory on the VM to store logs.
    logToSerialconsole Boolean
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    logUploadLocation String
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    oauthScopes List<String>
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallelWorkerSettings WorkerSettings
    The settings to pass to the parallel worker harness.
    streamingWorkerMainClass String
    The streaming worker main class name.
    taskGroup String
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    taskUser String
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    tempStoragePrefix String
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    vmId String
    The ID string of the VM.
    workflowFileName String
    The file to store the workflow in.
    alsologtostderr boolean
    Whether to also send taskrunner log info to stderr.
    baseTaskDir string
    The location on the worker for task-specific subdirectories.
    baseUrl string
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    commandlinesFileName string
    The file to store preprocessing commands in.
    continueOnException boolean
    Whether to continue taskrunner if an exception is hit.
    dataflowApiVersion string
    The API version of the endpoint, e.g. "v1b3".
    harnessCommand string
    The command to launch the worker harness.
    languageHint string
    The suggested backend language.
    logDir string
    The directory on the VM to store logs.
    logToSerialconsole boolean
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    logUploadLocation string
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    oauthScopes string[]
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallelWorkerSettings WorkerSettings
    The settings to pass to the parallel worker harness.
    streamingWorkerMainClass string
    The streaming worker main class name.
    taskGroup string
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    taskUser string
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    tempStoragePrefix string
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    vmId string
    The ID string of the VM.
    workflowFileName string
    The file to store the workflow in.
    alsologtostderr bool
    Whether to also send taskrunner log info to stderr.
    base_task_dir str
    The location on the worker for task-specific subdirectories.
    base_url str
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    commandlines_file_name str
    The file to store preprocessing commands in.
    continue_on_exception bool
    Whether to continue taskrunner if an exception is hit.
    dataflow_api_version str
    The API version of the endpoint, e.g. "v1b3".
    harness_command str
    The command to launch the worker harness.
    language_hint str
    The suggested backend language.
    log_dir str
    The directory on the VM to store logs.
    log_to_serialconsole bool
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    log_upload_location str
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    oauth_scopes Sequence[str]
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallel_worker_settings WorkerSettings
    The settings to pass to the parallel worker harness.
    streaming_worker_main_class str
    The streaming worker main class name.
    task_group str
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    task_user str
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    temp_storage_prefix str
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}.
    vm_id str
    The ID string of the VM.
    workflow_file_name str
    The file to store the workflow in.
    alsologtostderr Boolean
    Whether to also send taskrunner log info to stderr.
    baseTaskDir String
    The location on the worker for task-specific subdirectories.
    baseUrl String
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    commandlinesFileName String
    The file to store preprocessing commands in.
    continueOnException Boolean
    Whether to continue taskrunner if an exception is hit.
    dataflowApiVersion String
    The API version of the endpoint, e.g. "v1b3".
    harnessCommand String
    The command to launch the worker harness.
    languageHint String
    The suggested backend language.
    logDir String
    The directory on the VM to store logs.
    logToSerialconsole Boolean
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    logUploadLocation String
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    oauthScopes List<String>
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallelWorkerSettings Property Map
    The settings to pass to the parallel worker harness.
    streamingWorkerMainClass String
    The streaming worker main class name.
    taskGroup String
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    taskUser String
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    tempStoragePrefix String
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    vmId String
    The ID string of the VM.
    workflowFileName String
    The file to store the workflow in.

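    The taskrunner fields above are typically embedded in a worker pool's taskrunner settings. As a minimal sketch of the field shape (a plain Python dict following the snake_case property list above; the bucket name and all values are hypothetical examples, not service defaults):

```python
# Illustrative taskrunner settings in the v1b3 field shape documented above.
# "my-bucket" is a hypothetical bucket; values are examples, not defaults.
task_runner_settings = {
    "task_user": "root",    # UNIX user for tasks launched by the taskrunner
    "task_group": "wheel",  # UNIX group for tasks launched by the taskrunner
    "oauth_scopes": ["https://www.googleapis.com/auth/cloud-platform"],
    "log_upload_location": "storage.googleapis.com/my-bucket/logs",
    "temp_storage_prefix": "storage.googleapis.com/my-bucket/tmp",
    "continue_on_exception": False,
    "alsologtostderr": True,
}

# A worker pool would carry these settings under its taskrunner_settings key.
worker_pool = {"kind": "harness", "taskrunner_settings": task_runner_settings}
```

    Note that Google documents this field as settings users should normally leave to the service; the sketch only illustrates how the properties above nest.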
    TaskRunnerSettingsResponse, TaskRunnerSettingsResponseArgs

    Alsologtostderr bool
    Whether to also send taskrunner log info to stderr.
    BaseTaskDir string
    The location on the worker for task-specific subdirectories.
    BaseUrl string
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    CommandlinesFileName string
    The file to store preprocessing commands in.
    ContinueOnException bool
    Whether to continue running the taskrunner if an exception is hit.
    DataflowApiVersion string
    The API version of the endpoint, e.g. "v1b3".
    HarnessCommand string
    The command to launch the worker harness.
    LanguageHint string
    The suggested backend language.
    LogDir string
    The directory on the VM to store logs.
    LogToSerialconsole bool
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    LogUploadLocation string
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    OauthScopes List<string>
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    ParallelWorkerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerSettingsResponse
    The settings to pass to the parallel worker harness.
    StreamingWorkerMainClass string
    The streaming worker main class name.
    TaskGroup string
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    TaskUser string
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    TempStoragePrefix string
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    VmId string
    The ID string of the VM.
    WorkflowFileName string
    The file to store the workflow in.
    Alsologtostderr bool
    Whether to also send taskrunner log info to stderr.
    BaseTaskDir string
    The location on the worker for task-specific subdirectories.
    BaseUrl string
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    CommandlinesFileName string
    The file to store preprocessing commands in.
    ContinueOnException bool
    Whether to continue running the taskrunner if an exception is hit.
    DataflowApiVersion string
    The API version of the endpoint, e.g. "v1b3".
    HarnessCommand string
    The command to launch the worker harness.
    LanguageHint string
    The suggested backend language.
    LogDir string
    The directory on the VM to store logs.
    LogToSerialconsole bool
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    LogUploadLocation string
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    OauthScopes []string
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    ParallelWorkerSettings WorkerSettingsResponse
    The settings to pass to the parallel worker harness.
    StreamingWorkerMainClass string
    The streaming worker main class name.
    TaskGroup string
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    TaskUser string
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    TempStoragePrefix string
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    VmId string
    The ID string of the VM.
    WorkflowFileName string
    The file to store the workflow in.
    alsologtostderr Boolean
    Whether to also send taskrunner log info to stderr.
    baseTaskDir String
    The location on the worker for task-specific subdirectories.
    baseUrl String
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    commandlinesFileName String
    The file to store preprocessing commands in.
    continueOnException Boolean
    Whether to continue running the taskrunner if an exception is hit.
    dataflowApiVersion String
    The API version of the endpoint, e.g. "v1b3".
    harnessCommand String
    The command to launch the worker harness.
    languageHint String
    The suggested backend language.
    logDir String
    The directory on the VM to store logs.
    logToSerialconsole Boolean
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    logUploadLocation String
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    oauthScopes List<String>
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallelWorkerSettings WorkerSettingsResponse
    The settings to pass to the parallel worker harness.
    streamingWorkerMainClass String
    The streaming worker main class name.
    taskGroup String
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    taskUser String
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    tempStoragePrefix String
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    vmId String
    The ID string of the VM.
    workflowFileName String
    The file to store the workflow in.
    alsologtostderr boolean
    Whether to also send taskrunner log info to stderr.
    baseTaskDir string
    The location on the worker for task-specific subdirectories.
    baseUrl string
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    commandlinesFileName string
    The file to store preprocessing commands in.
    continueOnException boolean
    Whether to continue running the taskrunner if an exception is hit.
    dataflowApiVersion string
    The API version of the endpoint, e.g. "v1b3".
    harnessCommand string
    The command to launch the worker harness.
    languageHint string
    The suggested backend language.
    logDir string
    The directory on the VM to store logs.
    logToSerialconsole boolean
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    logUploadLocation string
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    oauthScopes string[]
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallelWorkerSettings WorkerSettingsResponse
    The settings to pass to the parallel worker harness.
    streamingWorkerMainClass string
    The streaming worker main class name.
    taskGroup string
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    taskUser string
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    tempStoragePrefix string
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    vmId string
    The ID string of the VM.
    workflowFileName string
    The file to store the workflow in.
    alsologtostderr bool
    Whether to also send taskrunner log info to stderr.
    base_task_dir str
    The location on the worker for task-specific subdirectories.
    base_url str
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    commandlines_file_name str
    The file to store preprocessing commands in.
    continue_on_exception bool
    Whether to continue running the taskrunner if an exception is hit.
    dataflow_api_version str
    The API version of the endpoint, e.g. "v1b3".
    harness_command str
    The command to launch the worker harness.
    language_hint str
    The suggested backend language.
    log_dir str
    The directory on the VM to store logs.
    log_to_serialconsole bool
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    log_upload_location str
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    oauth_scopes Sequence[str]
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallel_worker_settings WorkerSettingsResponse
    The settings to pass to the parallel worker harness.
    streaming_worker_main_class str
    The streaming worker main class name.
    task_group str
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    task_user str
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    temp_storage_prefix str
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    vm_id str
    The ID string of the VM.
    workflow_file_name str
    The file to store the workflow in.
    alsologtostderr Boolean
    Whether to also send taskrunner log info to stderr.
    baseTaskDir String
    The location on the worker for task-specific subdirectories.
    baseUrl String
    The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    commandlinesFileName String
    The file to store preprocessing commands in.
    continueOnException Boolean
    Whether to continue running the taskrunner if an exception is hit.
    dataflowApiVersion String
    The API version of the endpoint, e.g. "v1b3".
    harnessCommand String
    The command to launch the worker harness.
    languageHint String
    The suggested backend language.
    logDir String
    The directory on the VM to store logs.
    logToSerialconsole Boolean
    Whether to send taskrunner log info to Google Compute Engine VM serial console.
    logUploadLocation String
    Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    oauthScopes List<String>
    The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
    parallelWorkerSettings Property Map
    The settings to pass to the parallel worker harness.
    streamingWorkerMainClass String
    The streaming worker main class name.
    taskGroup String
    The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
    taskUser String
    The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
    tempStoragePrefix String
    The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is Google Cloud Storage: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    vmId String
    The ID string of the VM.
    workflowFileName String
    The file to store the workflow in.

    TransformSummary, TransformSummaryArgs

    DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayData>
    Transform-specific display data.
    Id string
    SDK generated id of this transform instance.
    InputCollectionName List<string>
    User names for all collection inputs to this transform.
    Kind Pulumi.GoogleNative.Dataflow.V1b3.TransformSummaryKind
    Type of transform.
    Name string
    User provided name for this transform instance.
    OutputCollectionName List<string>
    User names for all collection outputs to this transform.
    DisplayData []DisplayData
    Transform-specific display data.
    Id string
    SDK generated id of this transform instance.
    InputCollectionName []string
    User names for all collection inputs to this transform.
    Kind TransformSummaryKind
    Type of transform.
    Name string
    User provided name for this transform instance.
    OutputCollectionName []string
    User names for all collection outputs to this transform.
    displayData List<DisplayData>
    Transform-specific display data.
    id String
    SDK generated id of this transform instance.
    inputCollectionName List<String>
    User names for all collection inputs to this transform.
    kind TransformSummaryKind
    Type of transform.
    name String
    User provided name for this transform instance.
    outputCollectionName List<String>
    User names for all collection outputs to this transform.
    displayData DisplayData[]
    Transform-specific display data.
    id string
    SDK generated id of this transform instance.
    inputCollectionName string[]
    User names for all collection inputs to this transform.
    kind TransformSummaryKind
    Type of transform.
    name string
    User provided name for this transform instance.
    outputCollectionName string[]
    User names for all collection outputs to this transform.
    display_data Sequence[DisplayData]
    Transform-specific display data.
    id str
    SDK generated id of this transform instance.
    input_collection_name Sequence[str]
    User names for all collection inputs to this transform.
    kind TransformSummaryKind
    Type of transform.
    name str
    User provided name for this transform instance.
    output_collection_name Sequence[str]
    User names for all collection outputs to this transform.
    displayData List<Property Map>
    Transform-specific display data.
    id String
    SDK generated id of this transform instance.
    inputCollectionName List<String>
    User names for all collection inputs to this transform.
    kind "UNKNOWN_KIND" | "PAR_DO_KIND" | "GROUP_BY_KEY_KIND" | "FLATTEN_KIND" | "READ_KIND" | "WRITE_KIND" | "CONSTANT_KIND" | "SINGLETON_KIND" | "SHUFFLE_KIND"
    Type of transform.
    name String
    User provided name for this transform instance.
    outputCollectionName List<String>
    User names for all collection outputs to this transform.

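    Put together, a TransformSummary record carries the fields listed above. A hedged, plain-data sketch (the id and names are hypothetical; real ids are SDK-generated):

```python
# A TransformSummary-shaped record using the fields documented above.
# "s2" and "ParseEvents" are hypothetical; real ids come from the SDK.
transform_summary = {
    "id": "s2",                          # SDK-generated transform id
    "kind": "PAR_DO_KIND",               # one of the TransformSummaryKind values
    "name": "ParseEvents",               # user-provided transform name
    "input_collection_name": ["ReadEvents.out"],
    "output_collection_name": ["ParseEvents.out"],
    "display_data": [],                  # transform-specific display data
}
```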
    TransformSummaryKind, TransformSummaryKindArgs

    UnknownKind
    UNKNOWN_KIND: Unrecognized transform type.
    ParDoKind
    PAR_DO_KIND: ParDo transform.
    GroupByKeyKind
    GROUP_BY_KEY_KIND: Group By Key transform.
    FlattenKind
    FLATTEN_KIND: Flatten transform.
    ReadKind
    READ_KIND: Read transform.
    WriteKind
    WRITE_KIND: Write transform.
    ConstantKind
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    SingletonKind
    SINGLETON_KIND: Creates a Singleton view of a collection.
    ShuffleKind
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
    TransformSummaryKindUnknownKind
    UNKNOWN_KIND: Unrecognized transform type.
    TransformSummaryKindParDoKind
    PAR_DO_KIND: ParDo transform.
    TransformSummaryKindGroupByKeyKind
    GROUP_BY_KEY_KIND: Group By Key transform.
    TransformSummaryKindFlattenKind
    FLATTEN_KIND: Flatten transform.
    TransformSummaryKindReadKind
    READ_KIND: Read transform.
    TransformSummaryKindWriteKind
    WRITE_KIND: Write transform.
    TransformSummaryKindConstantKind
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    TransformSummaryKindSingletonKind
    SINGLETON_KIND: Creates a Singleton view of a collection.
    TransformSummaryKindShuffleKind
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
    UnknownKind
    UNKNOWN_KIND: Unrecognized transform type.
    ParDoKind
    PAR_DO_KIND: ParDo transform.
    GroupByKeyKind
    GROUP_BY_KEY_KIND: Group By Key transform.
    FlattenKind
    FLATTEN_KIND: Flatten transform.
    ReadKind
    READ_KIND: Read transform.
    WriteKind
    WRITE_KIND: Write transform.
    ConstantKind
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    SingletonKind
    SINGLETON_KIND: Creates a Singleton view of a collection.
    ShuffleKind
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
    UnknownKind
    UNKNOWN_KIND: Unrecognized transform type.
    ParDoKind
    PAR_DO_KIND: ParDo transform.
    GroupByKeyKind
    GROUP_BY_KEY_KIND: Group By Key transform.
    FlattenKind
    FLATTEN_KIND: Flatten transform.
    ReadKind
    READ_KIND: Read transform.
    WriteKind
    WRITE_KIND: Write transform.
    ConstantKind
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    SingletonKind
    SINGLETON_KIND: Creates a Singleton view of a collection.
    ShuffleKind
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
    UNKNOWN_KIND
    UNKNOWN_KIND: Unrecognized transform type.
    PAR_DO_KIND
    PAR_DO_KIND: ParDo transform.
    GROUP_BY_KEY_KIND
    GROUP_BY_KEY_KIND: Group By Key transform.
    FLATTEN_KIND
    FLATTEN_KIND: Flatten transform.
    READ_KIND
    READ_KIND: Read transform.
    WRITE_KIND
    WRITE_KIND: Write transform.
    CONSTANT_KIND
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    SINGLETON_KIND
    SINGLETON_KIND: Creates a Singleton view of a collection.
    SHUFFLE_KIND
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
    "UNKNOWN_KIND"
    UNKNOWN_KIND: Unrecognized transform type.
    "PAR_DO_KIND"
    PAR_DO_KIND: ParDo transform.
    "GROUP_BY_KEY_KIND"
    GROUP_BY_KEY_KIND: Group By Key transform.
    "FLATTEN_KIND"
    FLATTEN_KIND: Flatten transform.
    "READ_KIND"
    READ_KIND: Read transform.
    "WRITE_KIND"
    WRITE_KIND: Write transform.
    "CONSTANT_KIND"
    CONSTANT_KIND: Constructs from a constant value, such as with Create.of.
    "SINGLETON_KIND"
    SINGLETON_KIND: Creates a Singleton view of a collection.
    "SHUFFLE_KIND"
    SHUFFLE_KIND: Opening or closing a shuffle session, often as part of a GroupByKey.
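    Each SDK spelling above resolves to the same underlying string value. A minimal stand-in enum (not the Pulumi SDK's own class) pairing the member names with those wire strings:

```python
from enum import Enum

# Minimal stand-in for TransformSummaryKind (illustrative, not the SDK class);
# each member's value is the wire string listed in the enum table above.
class TransformSummaryKind(str, Enum):
    UNKNOWN_KIND = "UNKNOWN_KIND"
    PAR_DO_KIND = "PAR_DO_KIND"
    GROUP_BY_KEY_KIND = "GROUP_BY_KEY_KIND"
    FLATTEN_KIND = "FLATTEN_KIND"
    READ_KIND = "READ_KIND"
    WRITE_KIND = "WRITE_KIND"
    CONSTANT_KIND = "CONSTANT_KIND"
    SINGLETON_KIND = "SINGLETON_KIND"
    SHUFFLE_KIND = "SHUFFLE_KIND"
```

    Because the members subclass str, they compare equal to the raw strings returned in responses, which is convenient when inspecting a job's pipeline description.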

    TransformSummaryResponse, TransformSummaryResponseArgs

    DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayDataResponse>
    Transform-specific display data.
    InputCollectionName List<string>
    User names for all collection inputs to this transform.
    Kind string
    Type of transform.
    Name string
    User provided name for this transform instance.
    OutputCollectionName List<string>
    User names for all collection outputs to this transform.
    DisplayData []DisplayDataResponse
    Transform-specific display data.
    InputCollectionName []string
    User names for all collection inputs to this transform.
    Kind string
    Type of transform.
    Name string
    User provided name for this transform instance.
    OutputCollectionName []string
    User names for all collection outputs to this transform.
    displayData List<DisplayDataResponse>
    Transform-specific display data.
    inputCollectionName List<String>
    User names for all collection inputs to this transform.
    kind String
    Type of transform.
    name String
    User provided name for this transform instance.
    outputCollectionName List<String>
    User names for all collection outputs to this transform.
    displayData DisplayDataResponse[]
    Transform-specific display data.
    inputCollectionName string[]
    User names for all collection inputs to this transform.
    kind string
    Type of transform.
    name string
    User provided name for this transform instance.
    outputCollectionName string[]
    User names for all collection outputs to this transform.
    display_data Sequence[DisplayDataResponse]
    Transform-specific display data.
    input_collection_name Sequence[str]
    User names for all collection inputs to this transform.
    kind str
    Type of transform.
    name str
    User provided name for this transform instance.
    output_collection_name Sequence[str]
    User names for all collection outputs to this transform.
    displayData List<Property Map>
    Transform-specific display data.
    inputCollectionName List<String>
    User names for all collection inputs to this transform.
    kind String
    Type of transform.
    name String
    User provided name for this transform instance.
    outputCollectionName List<String>
    User names for all collection outputs to this transform.

    WorkerPool, WorkerPoolArgs

    WorkerHarnessContainerImage string
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    AutoscalingSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.AutoscalingSettings
    Settings for autoscaling of this WorkerPool.
    DataDisks List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.Disk>
    Data disks that are used by a VM in this workflow.
    DefaultPackageSet Pulumi.GoogleNative.Dataflow.V1b3.WorkerPoolDefaultPackageSet
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    DiskSizeGb int
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    DiskSourceImage string
    Fully qualified source image for disks.
    DiskType string
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    IpConfiguration Pulumi.GoogleNative.Dataflow.V1b3.WorkerPoolIpConfiguration
    Configuration for VM IPs.
    Kind string
    The kind of the worker pool; currently only harness and shuffle are supported.
    MachineType string
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    Metadata Dictionary<string, string>
    Metadata to set on the Google Compute Engine VMs.
    Network string
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    NumThreadsPerWorker int
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    NumWorkers int
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    OnHostMaintenance string
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    Packages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.Package>
    Packages to be installed on workers.
    PoolArgs Dictionary<string, string>
    Extra arguments for this worker pool.
    SdkHarnessContainerImages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkHarnessContainerImage>
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    Subnetwork string
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    TaskrunnerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TaskRunnerSettings
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    TeardownPolicy Pulumi.GoogleNative.Dataflow.V1b3.WorkerPoolTeardownPolicy
    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    Zone string
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    WorkerHarnessContainerImage string
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    AutoscalingSettings AutoscalingSettings
    Settings for autoscaling of this WorkerPool.
    DataDisks []Disk
    Data disks that are used by a VM in this workflow.
    DefaultPackageSet WorkerPoolDefaultPackageSet
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    DiskSizeGb int
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    DiskSourceImage string
    Fully qualified source image for disks.
    DiskType string
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    IpConfiguration WorkerPoolIpConfiguration
    Configuration for VM IPs.
    Kind string
    The kind of the worker pool; currently only harness and shuffle are supported.
    MachineType string
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    Metadata map[string]string
    Metadata to set on the Google Compute Engine VMs.
    Network string
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    NumThreadsPerWorker int
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    NumWorkers int
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    OnHostMaintenance string
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    Packages []Package
    Packages to be installed on workers.
    PoolArgs map[string]string
    Extra arguments for this worker pool.
    SdkHarnessContainerImages []SdkHarnessContainerImage
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    Subnetwork string
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    TaskrunnerSettings TaskRunnerSettings
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    TeardownPolicy WorkerPoolTeardownPolicy
    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    Zone string
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    workerHarnessContainerImage String
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    autoscalingSettings AutoscalingSettings
    Settings for autoscaling of this WorkerPool.
    dataDisks List<Disk>
    Data disks that are used by a VM in this workflow.
    defaultPackageSet WorkerPoolDefaultPackageSet
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    diskSizeGb Integer
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskSourceImage String
    Fully qualified source image for disks.
    diskType String
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ipConfiguration WorkerPoolIpConfiguration
    Configuration for VM IPs.
    kind String
    The kind of the worker pool; currently only harness and shuffle are supported.
    machineType String
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata Map<String,String>
    Metadata to set on the Google Compute Engine VMs.
    network String
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    numThreadsPerWorker Integer
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    numWorkers Integer
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    onHostMaintenance String
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages List<Package>
    Packages to be installed on workers.
    poolArgs Map<String,String>
    Extra arguments for this worker pool.
    sdkHarnessContainerImages List<SdkHarnessContainerImage>
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork String
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunnerSettings TaskRunnerSettings
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardownPolicy WorkerPoolTeardownPolicy
    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    zone String
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    workerHarnessContainerImage string
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    autoscalingSettings AutoscalingSettings
    Settings for autoscaling of this WorkerPool.
    dataDisks Disk[]
    Data disks that are used by a VM in this workflow.
    defaultPackageSet WorkerPoolDefaultPackageSet
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    diskSizeGb number
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskSourceImage string
    Fully qualified source image for disks.
    diskType string
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ipConfiguration WorkerPoolIpConfiguration
    Configuration for VM IPs.
    kind string
    The kind of the worker pool; currently only harness and shuffle are supported.
    machineType string
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata {[key: string]: string}
    Metadata to set on the Google Compute Engine VMs.
    network string
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    numThreadsPerWorker number
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    numWorkers number
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    onHostMaintenance string
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages Package[]
    Packages to be installed on workers.
    poolArgs {[key: string]: string}
    Extra arguments for this worker pool.
    sdkHarnessContainerImages SdkHarnessContainerImage[]
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork string
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunnerSettings TaskRunnerSettings
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardownPolicy WorkerPoolTeardownPolicy
    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    zone string
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    worker_harness_container_image str
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    autoscaling_settings AutoscalingSettings
    Settings for autoscaling of this WorkerPool.
    data_disks Sequence[Disk]
    Data disks that are used by a VM in this workflow.
    default_package_set WorkerPoolDefaultPackageSet
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    disk_size_gb int
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    disk_source_image str
    Fully qualified source image for disks.
    disk_type str
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ip_configuration WorkerPoolIpConfiguration
    Configuration for VM IPs.
    kind str
    The kind of the worker pool; currently only harness and shuffle are supported.
    machine_type str
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata Mapping[str, str]
    Metadata to set on the Google Compute Engine VMs.
    network str
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    num_threads_per_worker int
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    num_workers int
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    on_host_maintenance str
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages Sequence[Package]
    Packages to be installed on workers.
    pool_args Mapping[str, str]
    Extra arguments for this worker pool.
    sdk_harness_container_images Sequence[SdkHarnessContainerImage]
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork str
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunner_settings TaskRunnerSettings
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardown_policy WorkerPoolTeardownPolicy
    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    zone str
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    workerHarnessContainerImage String
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    autoscalingSettings Property Map
    Settings for autoscaling of this WorkerPool.
    dataDisks List<Property Map>
    Data disks that are used by a VM in this workflow.
    defaultPackageSet "DEFAULT_PACKAGE_SET_UNKNOWN" | "DEFAULT_PACKAGE_SET_NONE" | "DEFAULT_PACKAGE_SET_JAVA" | "DEFAULT_PACKAGE_SET_PYTHON"
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    diskSizeGb Number
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskSourceImage String
    Fully qualified source image for disks.
    diskType String
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ipConfiguration "WORKER_IP_UNSPECIFIED" | "WORKER_IP_PUBLIC" | "WORKER_IP_PRIVATE"
    Configuration for VM IPs.
    kind String
    The kind of the worker pool; currently only harness and shuffle are supported.
    machineType String
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata Map<String>
    Metadata to set on the Google Compute Engine VMs.
    network String
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    numThreadsPerWorker Number
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    numWorkers Number
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    onHostMaintenance String
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages List<Property Map>
    Packages to be installed on workers.
    poolArgs Map<String>
    Extra arguments for this worker pool.
    sdkHarnessContainerImages List<Property Map>
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork String
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunnerSettings Property Map
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardownPolicy "TEARDOWN_POLICY_UNKNOWN" | "TEARDOWN_ALWAYS" | "TEARDOWN_ON_SUCCESS" | "TEARDOWN_NEVER"
    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    zone String
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
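    The worker pool fields documented above can be sketched as a plain spec. The following Python sketch is illustrative only and is not part of the Pulumi SDK: it assembles a hypothetical worker pool dict using the documented field names, validates teardownPolicy against the allowed values, and omits zero/empty fields so the service can choose its defaults.

```python
# Hypothetical sketch: assemble a worker pool spec from the fields
# documented above. Illustrative only; not a Pulumi SDK helper.

ALLOWED_TEARDOWN_POLICIES = {
    "TEARDOWN_POLICY_UNKNOWN",
    "TEARDOWN_ALWAYS",
    "TEARDOWN_ON_SUCCESS",
    "TEARDOWN_NEVER",
}

def make_worker_pool(machine_type="n1-standard-1",
                     num_workers=0,
                     teardown_policy="TEARDOWN_ALWAYS",
                     zone=""):
    """Return a worker pool spec dict. Zero/empty values are omitted,
    which lets the service choose a reasonable default."""
    if teardown_policy not in ALLOWED_TEARDOWN_POLICIES:
        raise ValueError(f"invalid teardownPolicy: {teardown_policy}")
    pool = {
        "kind": "harness",  # only "harness" and "shuffle" are supported
        "machineType": machine_type,
        "teardownPolicy": teardown_policy,
    }
    if num_workers:
        pool["numWorkers"] = num_workers
    if zone:
        pool["zone"] = zone
    return pool

pool = make_worker_pool(num_workers=3)
```

    Note how the sketch mirrors the documented default behavior: leaving numWorkers at zero or zone empty keeps those keys out of the spec entirely, rather than sending an explicit empty value.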

    WorkerPoolDefaultPackageSet, WorkerPoolDefaultPackageSetArgs

    DefaultPackageSetUnknown
    DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
    DefaultPackageSetNone
    DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
    DefaultPackageSetJava
    DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
    DefaultPackageSetPython
    DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
    WorkerPoolDefaultPackageSetDefaultPackageSetUnknown
    DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
    WorkerPoolDefaultPackageSetDefaultPackageSetNone
    DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
    WorkerPoolDefaultPackageSetDefaultPackageSetJava
    DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
    WorkerPoolDefaultPackageSetDefaultPackageSetPython
    DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
    DefaultPackageSetUnknown
    DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
    DefaultPackageSetNone
    DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
    DefaultPackageSetJava
    DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
    DefaultPackageSetPython
    DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
    DefaultPackageSetUnknown
    DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
    DefaultPackageSetNone
    DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
    DefaultPackageSetJava
    DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
    DefaultPackageSetPython
    DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
    DEFAULT_PACKAGE_SET_UNKNOWN
    DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
    DEFAULT_PACKAGE_SET_NONE
    DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
    DEFAULT_PACKAGE_SET_JAVA
    DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
    DEFAULT_PACKAGE_SET_PYTHON
    DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
    "DEFAULT_PACKAGE_SET_UNKNOWN"
    DEFAULT_PACKAGE_SET_UNKNOWN: The default set of packages to stage is unknown, or unspecified.
    "DEFAULT_PACKAGE_SET_NONE"
    DEFAULT_PACKAGE_SET_NONE: Indicates that no packages should be staged at the worker unless explicitly specified by the job.
    "DEFAULT_PACKAGE_SET_JAVA"
    DEFAULT_PACKAGE_SET_JAVA: Stage packages typically useful to workers written in Java.
    "DEFAULT_PACKAGE_SET_PYTHON"
    DEFAULT_PACKAGE_SET_PYTHON: Stage packages typically useful to workers written in Python.
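    A hypothetical Python sketch (not part of the Pulumi SDK) of how a caller might pick a defaultPackageSet value from the enum documented above: known SDK languages map to their package set, and anything else falls back to DEFAULT_PACKAGE_SET_UNKNOWN so the service decides what, if anything, to stage.

```python
# Illustrative only: map a pipeline's SDK language to the
# defaultPackageSet enum value documented above.

PACKAGE_SETS = {
    "java": "DEFAULT_PACKAGE_SET_JAVA",
    "python": "DEFAULT_PACKAGE_SET_PYTHON",
    "none": "DEFAULT_PACKAGE_SET_NONE",
}

def default_package_set(sdk_language: str) -> str:
    # Unrecognized languages fall back to UNKNOWN, letting the
    # service choose what to stage.
    return PACKAGE_SETS.get(sdk_language.lower(), "DEFAULT_PACKAGE_SET_UNKNOWN")
```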

    WorkerPoolIpConfiguration, WorkerPoolIpConfigurationArgs

    WorkerIpUnspecified
    WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
    WorkerIpPublic
    WORKER_IP_PUBLIC: Workers should have public IP addresses.
    WorkerIpPrivate
    WORKER_IP_PRIVATE: Workers should have private IP addresses.
    WorkerPoolIpConfigurationWorkerIpUnspecified
    WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
    WorkerPoolIpConfigurationWorkerIpPublic
    WORKER_IP_PUBLIC: Workers should have public IP addresses.
    WorkerPoolIpConfigurationWorkerIpPrivate
    WORKER_IP_PRIVATE: Workers should have private IP addresses.
    WorkerIpUnspecified
    WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
    WorkerIpPublic
    WORKER_IP_PUBLIC: Workers should have public IP addresses.
    WorkerIpPrivate
    WORKER_IP_PRIVATE: Workers should have private IP addresses.
    WorkerIpUnspecified
    WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
    WorkerIpPublic
    WORKER_IP_PUBLIC: Workers should have public IP addresses.
    WorkerIpPrivate
    WORKER_IP_PRIVATE: Workers should have private IP addresses.
    WORKER_IP_UNSPECIFIED
    WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
    WORKER_IP_PUBLIC
    WORKER_IP_PUBLIC: Workers should have public IP addresses.
    WORKER_IP_PRIVATE
    WORKER_IP_PRIVATE: Workers should have private IP addresses.
    "WORKER_IP_UNSPECIFIED"
    WORKER_IP_UNSPECIFIED: The configuration is unknown, or unspecified.
    "WORKER_IP_PUBLIC"
    WORKER_IP_PUBLIC: Workers should have public IP addresses.
    "WORKER_IP_PRIVATE"
    WORKER_IP_PRIVATE: Workers should have private IP addresses.
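    The three ipConfiguration values above form a tri-state choice. A hypothetical helper (illustrative only, not part of the Pulumi SDK) might express it with an optional boolean: None defers to the service, True requests public IPs, False requests private IPs.

```python
# Illustrative only: pick the ipConfiguration enum value
# documented above from a tri-state "public IPs?" flag.
from typing import Optional

def ip_configuration(use_public_ips: Optional[bool] = None) -> str:
    if use_public_ips is None:
        return "WORKER_IP_UNSPECIFIED"  # let the service decide
    return "WORKER_IP_PUBLIC" if use_public_ips else "WORKER_IP_PRIVATE"
```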

    WorkerPoolResponse, WorkerPoolResponseArgs

    AutoscalingSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.AutoscalingSettingsResponse
    Settings for autoscaling of this WorkerPool.
    DataDisks List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DiskResponse>
    Data disks that are used by a VM in this workflow.
    DefaultPackageSet string
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    DiskSizeGb int
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    DiskSourceImage string
    Fully qualified source image for disks.
    DiskType string
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    IpConfiguration string
    Configuration for VM IPs.
    Kind string
    The kind of the worker pool; currently only harness and shuffle are supported.
    MachineType string
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    Metadata Dictionary<string, string>
    Metadata to set on the Google Compute Engine VMs.
    Network string
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    NumThreadsPerWorker int
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    NumWorkers int
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    OnHostMaintenance string
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    Packages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PackageResponse>
    Packages to be installed on workers.
    PoolArgs Dictionary<string, string>
    Extra arguments for this worker pool.
    SdkHarnessContainerImages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkHarnessContainerImageResponse>
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    Subnetwork string
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    TaskrunnerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TaskRunnerSettingsResponse
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    TeardownPolicy string
    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    WorkerHarnessContainerImage string
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Zone string
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    AutoscalingSettings AutoscalingSettingsResponse
    Settings for autoscaling of this WorkerPool.
    DataDisks []DiskResponse
    Data disks that are used by a VM in this workflow.
    DefaultPackageSet string
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    DiskSizeGb int
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    DiskSourceImage string
    Fully qualified source image for disks.
    DiskType string
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    IpConfiguration string
    Configuration for VM IPs.
    Kind string
    The kind of the worker pool; currently only harness and shuffle are supported.
    MachineType string
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    Metadata map[string]string
    Metadata to set on the Google Compute Engine VMs.
    Network string
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    NumThreadsPerWorker int
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    NumWorkers int
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    OnHostMaintenance string
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    Packages []PackageResponse
    Packages to be installed on workers.
    PoolArgs map[string]string
    Extra arguments for this worker pool.
    SdkHarnessContainerImages []SdkHarnessContainerImageResponse
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    Subnetwork string
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    TaskrunnerSettings TaskRunnerSettingsResponse
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    TeardownPolicy string
    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    WorkerHarnessContainerImage string
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Zone string
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    autoscalingSettings AutoscalingSettingsResponse
    Settings for autoscaling of this WorkerPool.
    dataDisks List<DiskResponse>
    Data disks that are used by a VM in this workflow.
    defaultPackageSet String
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    diskSizeGb Integer
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskSourceImage String
    Fully qualified source image for disks.
    diskType String
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ipConfiguration String
    Configuration for VM IPs.
    kind String
    The kind of the worker pool; currently only harness and shuffle are supported.
    machineType String
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata Map<String,String>
    Metadata to set on the Google Compute Engine VMs.
    network String
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    numThreadsPerWorker Integer
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    numWorkers Integer
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    onHostMaintenance String
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages List<PackageResponse>
    Packages to be installed on workers.
    poolArgs Map<String,String>
    Extra arguments for this worker pool.
    sdkHarnessContainerImages List<SdkHarnessContainerImageResponse>
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork String
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunnerSettings TaskRunnerSettingsResponse
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardownPolicy String
    Sets the policy for determining when to tear down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    workerHarnessContainerImage String
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone String
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    autoscalingSettings AutoscalingSettingsResponse
    Settings for autoscaling of this WorkerPool.
    dataDisks DiskResponse[]
    Data disks that are used by a VM in this workflow.
    defaultPackageSet string
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    diskSizeGb number
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskSourceImage string
    Fully qualified source image for disks.
    diskType string
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ipConfiguration string
    Configuration for VM IPs.
    kind string
    The kind of the worker pool; currently only harness and shuffle are supported.
    machineType string
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata {[key: string]: string}
    Metadata to set on the Google Compute Engine VMs.
    network string
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    numThreadsPerWorker number
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    numWorkers number
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    onHostMaintenance string
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages PackageResponse[]
    Packages to be installed on workers.
    poolArgs {[key: string]: string}
    Extra arguments for this worker pool.
    sdkHarnessContainerImages SdkHarnessContainerImageResponse[]
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork string
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunnerSettings TaskRunnerSettingsResponse
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardownPolicy string
    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    workerHarnessContainerImage string
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone string
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    autoscaling_settings AutoscalingSettingsResponse
    Settings for autoscaling of this WorkerPool.
    data_disks Sequence[DiskResponse]
    Data disks that are used by a VM in this workflow.
    default_package_set str
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    disk_size_gb int
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    disk_source_image str
    Fully qualified source image for disks.
    disk_type str
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ip_configuration str
    Configuration for VM IPs.
    kind str
    The kind of the worker pool; currently only harness and shuffle are supported.
    machine_type str
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata Mapping[str, str]
    Metadata to set on the Google Compute Engine VMs.
    network str
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    num_threads_per_worker int
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    num_workers int
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    on_host_maintenance str
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages Sequence[PackageResponse]
    Packages to be installed on workers.
    pool_args Mapping[str, str]
    Extra arguments for this worker pool.
    sdk_harness_container_images Sequence[SdkHarnessContainerImageResponse]
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork str
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunner_settings TaskRunnerSettingsResponse
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardown_policy str
    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    worker_harness_container_image str
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone str
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
    autoscalingSettings Property Map
    Settings for autoscaling of this WorkerPool.
    dataDisks List<Property Map>
    Data disks that are used by a VM in this workflow.
    defaultPackageSet String
    The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
    diskSizeGb Number
    Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
    diskSourceImage String
    Fully qualified source image for disks.
    diskType String
    Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
    ipConfiguration String
    Configuration for VM IPs.
    kind String
    The kind of the worker pool; currently only harness and shuffle are supported.
    machineType String
    Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
    metadata Map<String>
    Metadata to set on the Google Compute Engine VMs.
    network String
    Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
    numThreadsPerWorker Number
    The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
    numWorkers Number
    Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
    onHostMaintenance String
    The action to take on host maintenance, as defined by the Google Compute Engine API.
    packages List<Property Map>
    Packages to be installed on workers.
    poolArgs Map<String>
    Extra arguments for this worker pool.
    sdkHarnessContainerImages List<Property Map>
    Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
    subnetwork String
    Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
    taskrunnerSettings Property Map
    Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
    teardownPolicy String
    Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
    workerHarnessContainerImage String
    Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    Deprecated: Required. Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.

    zone String
    Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
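    The worker pool fields above map directly onto the `workerPools` entries of the job's environment. As a minimal sketch (not part of any SDK; the helper name and defaults are illustrative), here is how such an entry could be assembled as a plain dict for the v1b3 REST body, omitting zero/empty fields so the service can choose its documented defaults:

    ```python
    # Hypothetical helper for building one workerPools entry of a Dataflow
    # v1b3 Job environment. Field names follow the reference above; the
    # function itself is illustrative, not from any SDK.
    def make_worker_pool(machine_type="n1-standard-1",
                         num_workers=0,
                         teardown_policy="TEARDOWN_ALWAYS",
                         subnetwork=None):
        """Build a worker pool dict; zero/None fields are omitted so the
        service can pick reasonable defaults, as the docs describe."""
        pool = {
            "kind": "harness",  # only "harness" and "shuffle" are supported
            "machineType": machine_type,
            "teardownPolicy": teardown_policy,
        }
        if num_workers:  # zero means "let the service choose a default"
            pool["numWorkers"] = num_workers
        if subnetwork:  # expected form: regions/REGION/subnetworks/SUBNETWORK
            pool["subnetwork"] = subnetwork
        return pool

    pool = make_worker_pool(num_workers=3,
                            subnetwork="regions/us-east1/subnetworks/default")
    ```

    With Pulumi, the same fields would instead be passed through the `environment.workerPools` input of the Job resource.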

    WorkerPoolTeardownPolicy, WorkerPoolTeardownPolicyArgs

    TeardownPolicyUnknown
    TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
    TeardownAlways
    TEARDOWN_ALWAYS: Always tear down the resource.
    TeardownOnSuccess
    TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
    TeardownNever
    TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
    WorkerPoolTeardownPolicyTeardownPolicyUnknown
    TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
    WorkerPoolTeardownPolicyTeardownAlways
    TEARDOWN_ALWAYS: Always tear down the resource.
    WorkerPoolTeardownPolicyTeardownOnSuccess
    TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
    WorkerPoolTeardownPolicyTeardownNever
    TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
    TeardownPolicyUnknown
    TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
    TeardownAlways
    TEARDOWN_ALWAYS: Always tear down the resource.
    TeardownOnSuccess
    TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
    TeardownNever
    TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
    TeardownPolicyUnknown
    TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
    TeardownAlways
    TEARDOWN_ALWAYS: Always tear down the resource.
    TeardownOnSuccess
    TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
    TeardownNever
    TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
    TEARDOWN_POLICY_UNKNOWN
    TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
    TEARDOWN_ALWAYS
    TEARDOWN_ALWAYS: Always tear down the resource.
    TEARDOWN_ON_SUCCESS
    TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
    TEARDOWN_NEVER
    TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
    "TEARDOWN_POLICY_UNKNOWN"
    TEARDOWN_POLICY_UNKNOWN: The teardown policy isn't specified, or is unknown.
    "TEARDOWN_ALWAYS"
    TEARDOWN_ALWAYS: Always tear down the resource.
    "TEARDOWN_ON_SUCCESS"
    TEARDOWN_ON_SUCCESS: Tear down the resource on success. This is useful for debugging failures.
    "TEARDOWN_NEVER"
    TEARDOWN_NEVER: Never tear down the resource. This is useful for debugging and development.
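    Since an invalid teardown policy is only rejected by the service at submission time, it can be convenient to check the string locally first. A minimal, hypothetical sketch (the helper is illustrative and not part of any SDK):

    ```python
    # Hypothetical client-side check of a teardownPolicy string against
    # the enum values listed above. The service performs its own
    # validation; this just fails earlier with a clearer message.
    TEARDOWN_POLICIES = {
        "TEARDOWN_POLICY_UNKNOWN",  # unspecified; service picks a default
        "TEARDOWN_ALWAYS",          # recommended outside supervised test jobs
        "TEARDOWN_ON_SUCCESS",      # workers stay up after failures, for debugging
        "TEARDOWN_NEVER",           # workers run (and bill) until manually terminated
    }

    def check_teardown_policy(value: str) -> str:
        """Return value unchanged if it is a known policy, else raise."""
        if value not in TEARDOWN_POLICIES:
            raise ValueError(f"unknown teardown policy: {value!r}")
        return value
    ```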

    WorkerSettings, WorkerSettingsArgs

    BaseUrl string
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    ReportingEnabled bool
    Whether to send work progress updates to the service.
    ServicePath string
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    ShuffleServicePath string
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    WorkerId string
    The ID of the worker running this pipeline.
    BaseUrl string
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    ReportingEnabled bool
    Whether to send work progress updates to the service.
    ServicePath string
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    ShuffleServicePath string
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    WorkerId string
    The ID of the worker running this pipeline.
    baseUrl String
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    reportingEnabled Boolean
    Whether to send work progress updates to the service.
    servicePath String
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffleServicePath String
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    workerId String
    The ID of the worker running this pipeline.
    baseUrl string
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    reportingEnabled boolean
    Whether to send work progress updates to the service.
    servicePath string
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffleServicePath string
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    tempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    workerId string
    The ID of the worker running this pipeline.
    base_url str
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    reporting_enabled bool
    Whether to send work progress updates to the service.
    service_path str
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffle_service_path str
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    temp_storage_prefix str
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    worker_id str
    The ID of the worker running this pipeline.
    baseUrl String
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    reportingEnabled Boolean
    Whether to send work progress updates to the service.
    servicePath String
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffleServicePath String
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    workerId String
    The ID of the worker running this pipeline.
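    The `tempStoragePrefix` field accepts two equivalent Cloud Storage addressing forms. As a hypothetical sketch of the difference (the parser below is illustrative, not from any SDK), the bucket can be extracted from either form:

    ```python
    # Hypothetical parser for the two tempStoragePrefix forms documented
    # above: path-style (storage.googleapis.com/{bucket}/{object}) and
    # subdomain-style (bucket.storage.googleapis.com/{object}).
    def bucket_from_prefix(prefix: str) -> str:
        """Return the bucket name from either supported prefix form."""
        host, _, rest = prefix.partition("/")
        if host == "storage.googleapis.com":
            return rest.split("/", 1)[0]  # bucket is the first path segment
        if host.endswith(".storage.googleapis.com"):
            # bucket is the subdomain before .storage.googleapis.com
            return host[: -len(".storage.googleapis.com")]
        raise ValueError(f"unsupported temp storage prefix: {prefix!r}")
    ```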

    WorkerSettingsResponse, WorkerSettingsResponseArgs

    BaseUrl string
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    ReportingEnabled bool
    Whether to send work progress updates to the service.
    ServicePath string
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    ShuffleServicePath string
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    WorkerId string
    The ID of the worker running this pipeline.
    BaseUrl string
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    ReportingEnabled bool
    Whether to send work progress updates to the service.
    ServicePath string
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    ShuffleServicePath string
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    TempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    WorkerId string
    The ID of the worker running this pipeline.
    baseUrl String
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    reportingEnabled Boolean
    Whether to send work progress updates to the service.
    servicePath String
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffleServicePath String
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    workerId String
    The ID of the worker running this pipeline.
    baseUrl string
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    reportingEnabled boolean
    Whether to send work progress updates to the service.
    servicePath string
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffleServicePath string
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    tempStoragePrefix string
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    workerId string
    The ID of the worker running this pipeline.
    base_url str
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    reporting_enabled bool
    Whether to send work progress updates to the service.
    service_path str
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffle_service_path str
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    temp_storage_prefix str
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    worker_id str
    The ID of the worker running this pipeline.
    baseUrl String
    The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/"
    reportingEnabled Boolean
    Whether to send work progress updates to the service.
    servicePath String
    The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
    shuffleServicePath String
    The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
    tempStoragePrefix String
    The prefix of the resources the system should use for temporary storage. The supported resource type is Google Cloud Storage, in one of two forms: storage.googleapis.com/{bucket}/{object} or bucket.storage.googleapis.com/{object}
    workerId String
    The ID of the worker running this pipeline.

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0