
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.aiplatform/v1beta1.PipelineJob


    Creates a PipelineJob. A PipelineJob will run immediately when created. Auto-naming is currently not supported for this resource.

    Create PipelineJob Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new PipelineJob(name: string, args?: PipelineJobArgs, opts?: CustomResourceOptions);
    @overload
    def PipelineJob(resource_name: str,
                    args: Optional[PipelineJobArgs] = None,
                    opts: Optional[ResourceOptions] = None)
    
    @overload
    def PipelineJob(resource_name: str,
                    opts: Optional[ResourceOptions] = None,
                    display_name: Optional[str] = None,
                    encryption_spec: Optional[GoogleCloudAiplatformV1beta1EncryptionSpecArgs] = None,
                    labels: Optional[Mapping[str, str]] = None,
                    location: Optional[str] = None,
                    network: Optional[str] = None,
                    pipeline_job_id: Optional[str] = None,
                    pipeline_spec: Optional[Mapping[str, str]] = None,
                    project: Optional[str] = None,
                    reserved_ip_ranges: Optional[Sequence[str]] = None,
                    runtime_config: Optional[GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigArgs] = None,
                    service_account: Optional[str] = None,
                    template_uri: Optional[str] = None)
    func NewPipelineJob(ctx *Context, name string, args *PipelineJobArgs, opts ...ResourceOption) (*PipelineJob, error)
    public PipelineJob(string name, PipelineJobArgs? args = null, CustomResourceOptions? opts = null)
    public PipelineJob(String name, PipelineJobArgs args)
    public PipelineJob(String name, PipelineJobArgs args, CustomResourceOptions options)
    
    type: google-native:aiplatform/v1beta1:PipelineJob
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args PipelineJobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args PipelineJobArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args PipelineJobArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args PipelineJobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args PipelineJobArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Constructor example

    The following reference example uses placeholder values for all input properties.

    var google_nativePipelineJobResource = new GoogleNative.Aiplatform.V1Beta1.PipelineJob("google-nativePipelineJobResource", new()
    {
        DisplayName = "string",
        EncryptionSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpecArgs
        {
            KmsKeyName = "string",
        },
        Labels = 
        {
            { "string", "string" },
        },
        Location = "string",
        Network = "string",
        PipelineJobId = "string",
        PipelineSpec = 
        {
            { "string", "string" },
        },
        Project = "string",
        ReservedIpRanges = new[]
        {
            "string",
        },
        RuntimeConfig = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigArgs
        {
            GcsOutputDirectory = "string",
            FailurePolicy = GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicy.PipelineFailurePolicyUnspecified,
            InputArtifacts = 
            {
                { "string", "string" },
            },
            ParameterValues = 
            {
                { "string", "string" },
            },
        },
        ServiceAccount = "string",
        TemplateUri = "string",
    });
    
    example, err := aiplatformv1beta1.NewPipelineJob(ctx, "google-nativePipelineJobResource", &aiplatformv1beta1.PipelineJobArgs{
    	DisplayName: pulumi.String("string"),
    	EncryptionSpec: &aiplatform.GoogleCloudAiplatformV1beta1EncryptionSpecArgs{
    		KmsKeyName: pulumi.String("string"),
    	},
    	Labels: pulumi.StringMap{
    		"string": pulumi.String("string"),
    	},
    	Location:      pulumi.String("string"),
    	Network:       pulumi.String("string"),
    	PipelineJobId: pulumi.String("string"),
    	PipelineSpec: pulumi.StringMap{
    		"string": pulumi.String("string"),
    	},
    	Project: pulumi.String("string"),
    	ReservedIpRanges: pulumi.StringArray{
    		pulumi.String("string"),
    	},
    	RuntimeConfig: &aiplatform.GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigArgs{
    		GcsOutputDirectory: pulumi.String("string"),
    		FailurePolicy:      aiplatformv1beta1.GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicyPipelineFailurePolicyUnspecified,
    		InputArtifacts: pulumi.StringMap{
    			"string": pulumi.String("string"),
    		},
    		ParameterValues: pulumi.StringMap{
    			"string": pulumi.String("string"),
    		},
    	},
    	ServiceAccount: pulumi.String("string"),
    	TemplateUri:    pulumi.String("string"),
    })
    
    var google_nativePipelineJobResource = new PipelineJob("google-nativePipelineJobResource", PipelineJobArgs.builder()
        .displayName("string")
        .encryptionSpec(GoogleCloudAiplatformV1beta1EncryptionSpecArgs.builder()
            .kmsKeyName("string")
            .build())
        .labels(Map.of("string", "string"))
        .location("string")
        .network("string")
        .pipelineJobId("string")
        .pipelineSpec(Map.of("string", "string"))
        .project("string")
        .reservedIpRanges("string")
        .runtimeConfig(GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigArgs.builder()
            .gcsOutputDirectory("string")
            .failurePolicy("PIPELINE_FAILURE_POLICY_UNSPECIFIED")
            .inputArtifacts(Map.of("string", "string"))
            .parameterValues(Map.of("string", "string"))
            .build())
        .serviceAccount("string")
        .templateUri("string")
        .build());
    
    google_native_pipeline_job_resource = google_native.aiplatform.v1beta1.PipelineJob("google-nativePipelineJobResource",
        display_name="string",
        encryption_spec={
            "kms_key_name": "string",
        },
        labels={
            "string": "string",
        },
        location="string",
        network="string",
        pipeline_job_id="string",
        pipeline_spec={
            "string": "string",
        },
        project="string",
        reserved_ip_ranges=["string"],
        runtime_config={
            "gcs_output_directory": "string",
            "failure_policy": google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicy.PIPELINE_FAILURE_POLICY_UNSPECIFIED,
            "input_artifacts": {
                "string": "string",
            },
            "parameter_values": {
                "string": "string",
            },
        },
        service_account="string",
        template_uri="string")
    
    const google_nativePipelineJobResource = new google_native.aiplatform.v1beta1.PipelineJob("google-nativePipelineJobResource", {
        displayName: "string",
        encryptionSpec: {
            kmsKeyName: "string",
        },
        labels: {
            string: "string",
        },
        location: "string",
        network: "string",
        pipelineJobId: "string",
        pipelineSpec: {
            string: "string",
        },
        project: "string",
        reservedIpRanges: ["string"],
        runtimeConfig: {
            gcsOutputDirectory: "string",
            failurePolicy: google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicy.PipelineFailurePolicyUnspecified,
            inputArtifacts: {
                string: "string",
            },
            parameterValues: {
                string: "string",
            },
        },
        serviceAccount: "string",
        templateUri: "string",
    });
    
    type: google-native:aiplatform/v1beta1:PipelineJob
    properties:
        displayName: string
        encryptionSpec:
            kmsKeyName: string
        labels:
            string: string
        location: string
        network: string
        pipelineJobId: string
        pipelineSpec:
            string: string
        project: string
        reservedIpRanges:
            - string
        runtimeConfig:
            failurePolicy: PIPELINE_FAILURE_POLICY_UNSPECIFIED
            gcsOutputDirectory: string
            inputArtifacts:
                string: string
            parameterValues:
                string: string
        serviceAccount: string
        templateUri: string
    

    PipelineJob Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.

    The PipelineJob resource accepts the following input properties:

    DisplayName string
    The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpec
    Customer-managed encryption key spec for a PipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key.
    Labels Dictionary<string, string>
    The labels with user-defined metadata to organize PipelineJobs. Label keys and values can be no longer than 64 characters (Unicode code points) and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note that some label keys are reserved for Vertex AI Pipelines: any user-set value for vertex-ai-pipelines-run-billing-id will be overridden.
    Location string
    Network string
    The full name of the Compute Engine network to which the Pipeline Job's workload should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. The Pipeline Job applies this network configuration to the Google Cloud resources it launches, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network.
    PipelineJobId string
    The ID to use for the PipelineJob, which will become the final component of the PipelineJob name. If not provided, an ID will be generated automatically. This value should be less than 128 characters, and valid characters are /a-z-/.
    PipelineSpec Dictionary<string, string>
    The spec of the pipeline.
    Project string
    ReservedIpRanges List<string>
    A list of names of reserved IP ranges under the VPC network that can be used for this Pipeline Job's workload. If set, the Pipeline Job's workload is deployed within the provided IP ranges; otherwise, the job is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    RuntimeConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfig
    Runtime config of the pipeline.
    ServiceAccount string
    The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project is used; see https://cloud.google.com/compute/docs/access/service-accounts#default_service_account. Users starting the pipeline must have the iam.serviceAccounts.actAs permission on this service account.
    TemplateUri string
    A template URI from which PipelineJob.pipeline_spec, if empty, will be downloaded. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template.
    DisplayName string
    The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    EncryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpecArgs
    Customer-managed encryption key spec for a PipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key.
    Labels map[string]string
    The labels with user-defined metadata to organize PipelineJobs. Label keys and values can be no longer than 64 characters (Unicode code points) and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note that some label keys are reserved for Vertex AI Pipelines: any user-set value for vertex-ai-pipelines-run-billing-id will be overridden.
    Location string
    Network string
    The full name of the Compute Engine network to which the Pipeline Job's workload should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. The Pipeline Job applies this network configuration to the Google Cloud resources it launches, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network.
    PipelineJobId string
    The ID to use for the PipelineJob, which will become the final component of the PipelineJob name. If not provided, an ID will be generated automatically. This value should be less than 128 characters, and valid characters are /a-z-/.
    PipelineSpec map[string]string
    The spec of the pipeline.
    Project string
    ReservedIpRanges []string
    A list of names of reserved IP ranges under the VPC network that can be used for this Pipeline Job's workload. If set, the Pipeline Job's workload is deployed within the provided IP ranges; otherwise, the job is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    RuntimeConfig GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigArgs
    Runtime config of the pipeline.
    ServiceAccount string
    The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project is used; see https://cloud.google.com/compute/docs/access/service-accounts#default_service_account. Users starting the pipeline must have the iam.serviceAccounts.actAs permission on this service account.
    TemplateUri string
    A template URI from which PipelineJob.pipeline_spec, if empty, will be downloaded. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template.
    displayName String
    The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec
    Customer-managed encryption key spec for a PipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key.
    labels Map<String,String>
    The labels with user-defined metadata to organize PipelineJobs. Label keys and values can be no longer than 64 characters (Unicode code points) and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note that some label keys are reserved for Vertex AI Pipelines: any user-set value for vertex-ai-pipelines-run-billing-id will be overridden.
    location String
    network String
    The full name of the Compute Engine network to which the Pipeline Job's workload should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. The Pipeline Job applies this network configuration to the Google Cloud resources it launches, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network.
    pipelineJobId String
    The ID to use for the PipelineJob, which will become the final component of the PipelineJob name. If not provided, an ID will be generated automatically. This value should be less than 128 characters, and valid characters are /a-z-/.
    pipelineSpec Map<String,String>
    The spec of the pipeline.
    project String
    reservedIpRanges List<String>
    A list of names of reserved IP ranges under the VPC network that can be used for this Pipeline Job's workload. If set, the Pipeline Job's workload is deployed within the provided IP ranges; otherwise, the job is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    runtimeConfig GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfig
    Runtime config of the pipeline.
    serviceAccount String
    The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project is used; see https://cloud.google.com/compute/docs/access/service-accounts#default_service_account. Users starting the pipeline must have the iam.serviceAccounts.actAs permission on this service account.
    templateUri String
    A template URI from which PipelineJob.pipeline_spec, if empty, will be downloaded. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template.
    displayName string
    The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec
    Customer-managed encryption key spec for a PipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key.
    labels {[key: string]: string}
    The labels with user-defined metadata to organize PipelineJobs. Label keys and values can be no longer than 64 characters (Unicode code points) and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note that some label keys are reserved for Vertex AI Pipelines: any user-set value for vertex-ai-pipelines-run-billing-id will be overridden.
    location string
    network string
    The full name of the Compute Engine network to which the Pipeline Job's workload should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. The Pipeline Job applies this network configuration to the Google Cloud resources it launches, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network.
    pipelineJobId string
    The ID to use for the PipelineJob, which will become the final component of the PipelineJob name. If not provided, an ID will be generated automatically. This value should be less than 128 characters, and valid characters are /a-z-/.
    pipelineSpec {[key: string]: string}
    The spec of the pipeline.
    project string
    reservedIpRanges string[]
    A list of names of reserved IP ranges under the VPC network that can be used for this Pipeline Job's workload. If set, the Pipeline Job's workload is deployed within the provided IP ranges; otherwise, the job is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    runtimeConfig GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfig
    Runtime config of the pipeline.
    serviceAccount string
    The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project is used; see https://cloud.google.com/compute/docs/access/service-accounts#default_service_account. Users starting the pipeline must have the iam.serviceAccounts.actAs permission on this service account.
    templateUri string
    A template URI from which PipelineJob.pipeline_spec, if empty, will be downloaded. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template.
    display_name str
    The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    encryption_spec GoogleCloudAiplatformV1beta1EncryptionSpecArgs
    Customer-managed encryption key spec for a PipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key.
    labels Mapping[str, str]
    The labels with user-defined metadata to organize PipelineJobs. Label keys and values can be no longer than 64 characters (Unicode code points) and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note that some label keys are reserved for Vertex AI Pipelines: any user-set value for vertex-ai-pipelines-run-billing-id will be overridden.
    location str
    network str
    The full name of the Compute Engine network to which the Pipeline Job's workload should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. The Pipeline Job applies this network configuration to the Google Cloud resources it launches, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network.
    pipeline_job_id str
    The ID to use for the PipelineJob, which will become the final component of the PipelineJob name. If not provided, an ID will be generated automatically. This value should be less than 128 characters, and valid characters are /a-z-/.
    pipeline_spec Mapping[str, str]
    The spec of the pipeline.
    project str
    reserved_ip_ranges Sequence[str]
    A list of names of reserved IP ranges under the VPC network that can be used for this Pipeline Job's workload. If set, the Pipeline Job's workload is deployed within the provided IP ranges; otherwise, the job is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    runtime_config GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigArgs
    Runtime config of the pipeline.
    service_account str
    The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project is used; see https://cloud.google.com/compute/docs/access/service-accounts#default_service_account. Users starting the pipeline must have the iam.serviceAccounts.actAs permission on this service account.
    template_uri str
    A template URI from which PipelineJob.pipeline_spec, if empty, will be downloaded. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template.
    displayName String
    The display name of the Pipeline. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    encryptionSpec Property Map
    Customer-managed encryption key spec for a PipelineJob. If set, this PipelineJob and all of its sub-resources will be secured by this key.
    labels Map<String>
    The labels with user-defined metadata to organize PipelineJobs. Label keys and values can be no longer than 64 characters (Unicode code points) and can contain only lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels. Note that some label keys are reserved for Vertex AI Pipelines: any user-set value for vertex-ai-pipelines-run-billing-id will be overridden.
    location String
    network String
    The full name of the Compute Engine network to which the Pipeline Job's workload should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. The Pipeline Job applies this network configuration to the Google Cloud resources it launches, such as Vertex AI Training or Dataflow jobs. If left unspecified, the workload is not peered with any network.
    pipelineJobId String
    The ID to use for the PipelineJob, which will become the final component of the PipelineJob name. If not provided, an ID will be generated automatically. This value should be less than 128 characters, and valid characters are /a-z-/.
    pipelineSpec Map<String>
    The spec of the pipeline.
    project String
    reservedIpRanges List<String>
    A list of names of reserved IP ranges under the VPC network that can be used for this Pipeline Job's workload. If set, the Pipeline Job's workload is deployed within the provided IP ranges; otherwise, the job is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    runtimeConfig Property Map
    Runtime config of the pipeline.
    serviceAccount String
    The service account that the pipeline workload runs as. If not specified, the Compute Engine default service account in the project is used; see https://cloud.google.com/compute/docs/access/service-accounts#default_service_account. Users starting the pipeline must have the iam.serviceAccounts.actAs permission on this service account.
    templateUri String
    A template URI from which PipelineJob.pipeline_spec, if empty, will be downloaded. Currently, only URIs from the Vertex Template Registry & Gallery are supported. See https://cloud.google.com/vertex-ai/docs/pipelines/create-pipeline-template.
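    Several of the inputs above carry documented constraints (the pipelineJobId length and character set, the 64-character limit on label keys and values) that can be checked client-side before creating the resource. A minimal sketch under those documented rules; the helper names are illustrative and not part of the provider API:

```python
import re

# Client-side checks for the constraints documented above. These helpers
# are illustrative only; they are not part of the Pulumi provider.

def validate_pipeline_job_id(job_id: str) -> bool:
    # Docs: less than 128 characters, and valid characters are /a-z-/.
    return len(job_id) < 128 and re.fullmatch(r"[a-z-]+", job_id) is not None

def validate_labels(labels: dict) -> bool:
    # Docs: label keys and values can be no longer than 64 characters
    # (Unicode code points).
    return all(len(k) <= 64 and len(v) <= 64 for k, v in labels.items())

print(validate_pipeline_job_id("my-training-pipeline"))  # True
print(validate_pipeline_job_id("My_Pipeline"))           # False (uppercase, underscore)
```

    Running such checks before `pulumi up` surfaces an invalid ID or label locally instead of as an API error during deployment.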

    Outputs

    All input properties are implicitly available as output properties. Additionally, the PipelineJob resource produces the following output properties:

    CreateTime string
    Pipeline creation time.
    EndTime string
    Pipeline end time.
    Error Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleRpcStatusResponse
    The error that occurred during pipeline execution. Only populated when the pipeline's state is FAILED or CANCELLED.
    Id string
    The provider-assigned unique ID for this managed resource.
    JobDetail Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleCloudAiplatformV1beta1PipelineJobDetailResponse
    The details of the pipeline run. Not available in the list view.
    Name string
    The resource name of the PipelineJob.
    ScheduleName string
    The schedule resource name. Only returned if the Pipeline is created by the Schedule API.
    StartTime string
    Pipeline start time.
    State string
    The detailed state of the job.
    TemplateMetadata Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleCloudAiplatformV1beta1PipelineTemplateMetadataResponse
    Pipeline template metadata. Populated when PipelineJob.template_uri refers to a supported template registry.
    UpdateTime string
    Timestamp when this PipelineJob was most recently updated.
    CreateTime string
    Pipeline creation time.
    EndTime string
    Pipeline end time.
    Error GoogleRpcStatusResponse
    The error that occurred during pipeline execution. Only populated when the pipeline's state is FAILED or CANCELLED.
    Id string
    The provider-assigned unique ID for this managed resource.
    JobDetail GoogleCloudAiplatformV1beta1PipelineJobDetailResponse
    The details of the pipeline run. Not available in the list view.
    Name string
    The resource name of the PipelineJob.
    ScheduleName string
    The schedule resource name. Only returned if the Pipeline is created by the Schedule API.
    StartTime string
    Pipeline start time.
    State string
    The detailed state of the job.
    TemplateMetadata GoogleCloudAiplatformV1beta1PipelineTemplateMetadataResponse
    Pipeline template metadata. Populated when PipelineJob.template_uri refers to a supported template registry.
    UpdateTime string
    Timestamp when this PipelineJob was most recently updated.
    createTime String
    Pipeline creation time.
    endTime String
    Pipeline end time.
    error GoogleRpcStatusResponse
    The error that occurred during pipeline execution. Only populated when the pipeline's state is FAILED or CANCELLED.
    id String
    The provider-assigned unique ID for this managed resource.
    jobDetail GoogleCloudAiplatformV1beta1PipelineJobDetailResponse
    The details of the pipeline run. Not available in the list view.
    name String
    The resource name of the PipelineJob.
    scheduleName String
    The schedule resource name. Only returned if the Pipeline is created by the Schedule API.
    startTime String
    Pipeline start time.
    state String
    The detailed state of the job.
    templateMetadata GoogleCloudAiplatformV1beta1PipelineTemplateMetadataResponse
    Pipeline template metadata. Populated when PipelineJob.template_uri refers to a supported template registry.
    updateTime String
    Timestamp when this PipelineJob was most recently updated.
    createTime string
    Pipeline creation time.
    endTime string
    Pipeline end time.
    error GoogleRpcStatusResponse
    The error that occurred during pipeline execution. Only populated when the pipeline's state is FAILED or CANCELLED.
    id string
    The provider-assigned unique ID for this managed resource.
    jobDetail GoogleCloudAiplatformV1beta1PipelineJobDetailResponse
    The details of the pipeline run. Not available in the list view.
    name string
    The resource name of the PipelineJob.
    scheduleName string
    The schedule resource name. Only returned if the Pipeline is created by the Schedule API.
    startTime string
    Pipeline start time.
    state string
    The detailed state of the job.
    templateMetadata GoogleCloudAiplatformV1beta1PipelineTemplateMetadataResponse
    Pipeline template metadata. Fields are populated if PipelineJob.template_uri references a supported template registry.
    updateTime string
    Timestamp when this PipelineJob was most recently updated.
    create_time str
    Pipeline creation time.
    end_time str
    Pipeline end time.
    error GoogleRpcStatusResponse
    The error that occurred during pipeline execution. Only populated when the pipeline's state is FAILED or CANCELLED.
    id str
    The provider-assigned unique ID for this managed resource.
    job_detail GoogleCloudAiplatformV1beta1PipelineJobDetailResponse
    The details of the pipeline run. Not available in the list view.
    name str
    The resource name of the PipelineJob.
    schedule_name str
    The schedule resource name. Only returned if the Pipeline is created by the Schedule API.
    start_time str
    Pipeline start time.
    state str
    The detailed state of the job.
    template_metadata GoogleCloudAiplatformV1beta1PipelineTemplateMetadataResponse
    Pipeline template metadata. Fields are populated if PipelineJob.template_uri references a supported template registry.
    update_time str
    Timestamp when this PipelineJob was most recently updated.
    createTime String
    Pipeline creation time.
    endTime String
    Pipeline end time.
    error Property Map
    The error that occurred during pipeline execution. Only populated when the pipeline's state is FAILED or CANCELLED.
    id String
    The provider-assigned unique ID for this managed resource.
    jobDetail Property Map
    The details of the pipeline run. Not available in the list view.
    name String
    The resource name of the PipelineJob.
    scheduleName String
    The schedule resource name. Only returned if the Pipeline is created by the Schedule API.
    startTime String
    Pipeline start time.
    state String
    The detailed state of the job.
    templateMetadata Property Map
    Pipeline template metadata. Fields are populated if PipelineJob.template_uri references a supported template registry.
    updateTime String
    Timestamp when this PipelineJob was most recently updated.

    Supporting Types

    GoogleCloudAiplatformV1beta1ContextResponse, GoogleCloudAiplatformV1beta1ContextResponseArgs

    CreateTime string
    Timestamp when this Context was created.
    Description string
    Description of the Context.
    DisplayName string
    User-provided display name of the Context. May be up to 128 Unicode characters.
    Etag string
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    Labels Dictionary<string, string>
    The labels with user-defined metadata to organize your Contexts. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Context (System labels are excluded).
    Metadata Dictionary<string, string>
    Properties of the Context. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    Name string
    Immutable. The resource name of the Context.
    ParentContexts List<string>
    A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts.
    SchemaTitle string
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    SchemaVersion string
    The version of the schema in schema_name to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    UpdateTime string
    Timestamp when this Context was last updated.
    CreateTime string
    Timestamp when this Context was created.
    Description string
    Description of the Context.
    DisplayName string
    User-provided display name of the Context. May be up to 128 Unicode characters.
    Etag string
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    Labels map[string]string
    The labels with user-defined metadata to organize your Contexts. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Context (System labels are excluded).
    Metadata map[string]string
    Properties of the Context. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    Name string
    Immutable. The resource name of the Context.
    ParentContexts []string
    A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts.
    SchemaTitle string
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    SchemaVersion string
    The version of the schema in schema_name to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    UpdateTime string
    Timestamp when this Context was last updated.
    createTime String
    Timestamp when this Context was created.
    description String
    Description of the Context.
    displayName String
    User-provided display name of the Context. May be up to 128 Unicode characters.
    etag String
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Map<String,String>
    The labels with user-defined metadata to organize your Contexts. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Context (System labels are excluded).
    metadata Map<String,String>
    Properties of the Context. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    name String
    Immutable. The resource name of the Context.
    parentContexts List<String>
    A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts.
    schemaTitle String
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    schemaVersion String
    The version of the schema in schema_name to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    updateTime String
    Timestamp when this Context was last updated.
    createTime string
    Timestamp when this Context was created.
    description string
    Description of the Context.
    displayName string
    User-provided display name of the Context. May be up to 128 Unicode characters.
    etag string
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels {[key: string]: string}
    The labels with user-defined metadata to organize your Contexts. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Context (System labels are excluded).
    metadata {[key: string]: string}
    Properties of the Context. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    name string
    Immutable. The resource name of the Context.
    parentContexts string[]
    A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts.
    schemaTitle string
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    schemaVersion string
    The version of the schema in schema_name to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    updateTime string
    Timestamp when this Context was last updated.
    create_time str
    Timestamp when this Context was created.
    description str
    Description of the Context.
    display_name str
    User-provided display name of the Context. May be up to 128 Unicode characters.
    etag str
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Mapping[str, str]
    The labels with user-defined metadata to organize your Contexts. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Context (System labels are excluded).
    metadata Mapping[str, str]
    Properties of the Context. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    name str
    Immutable. The resource name of the Context.
    parent_contexts Sequence[str]
    A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts.
    schema_title str
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    schema_version str
    The version of the schema in schema_name to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    update_time str
    Timestamp when this Context was last updated.
    createTime String
    Timestamp when this Context was created.
    description String
    Description of the Context.
    displayName String
    User-provided display name of the Context. May be up to 128 Unicode characters.
    etag String
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Map<String>
    The labels with user-defined metadata to organize your Contexts. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Context (System labels are excluded).
    metadata Map<String>
    Properties of the Context. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    name String
    Immutable. The resource name of the Context.
    parentContexts List<String>
    A list of resource names of Contexts that are parents of this Context. A Context may have at most 10 parent_contexts.
    schemaTitle String
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    schemaVersion String
    The version of the schema in schema_name to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    updateTime String
    Timestamp when this Context was last updated.
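    The label constraints described above (at most 64 user labels; keys and values each at most 64 Unicode codepoints) can be sanity-checked before deploying. This is a hypothetical helper, not part of the Pulumi SDK; it checks only the count and length limits, since the character rules explicitly allow international characters:

```python
def validate_labels(labels: dict) -> list:
    """Return human-readable violations of the documented label limits."""
    problems = []
    if len(labels) > 64:
        problems.append("more than 64 user labels")
    for key, value in labels.items():
        # Python's len() counts Unicode codepoints, matching the documented limit.
        if len(key) > 64:
            problems.append(f"label key {key!r} exceeds 64 codepoints")
        if len(value) > 64:
            problems.append(f"label value {value!r} exceeds 64 codepoints")
    return problems
```

    An empty result means the labels dictionary satisfies the documented size limits.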

    GoogleCloudAiplatformV1beta1EncryptionSpec, GoogleCloudAiplatformV1beta1EncryptionSpecArgs

    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kms_key_name str
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
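    The kmsKeyName form shown above can be checked against the documented shape projects/.../locations/.../keyRings/.../cryptoKeys/.... A hypothetical validation helper (not part of the SDK; note it checks only the resource-name shape, not whether the key exists or is in the right region):

```python
import re

# Documented shape of a customer-managed encryption key resource identifier.
_KMS_KEY_RE = re.compile(
    r"^projects/[^/]+/locations/[^/]+/keyRings/[^/]+/cryptoKeys/[^/]+$"
)

def is_valid_kms_key_name(name: str) -> bool:
    """True if `name` matches the documented CMEK resource-name form."""
    return bool(_KMS_KEY_RE.match(name))
```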

    GoogleCloudAiplatformV1beta1EncryptionSpecResponse, GoogleCloudAiplatformV1beta1EncryptionSpecResponseArgs

    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kms_key_name str
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.

    GoogleCloudAiplatformV1beta1ExecutionResponse, GoogleCloudAiplatformV1beta1ExecutionResponseArgs

    CreateTime string
    Timestamp when this Execution was created.
    Description string
    Description of the Execution.
    DisplayName string
    User-provided display name of the Execution. May be up to 128 Unicode characters.
    Etag string
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    Labels Dictionary<string, string>
    The labels with user-defined metadata to organize your Executions. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Execution (System labels are excluded).
    Metadata Dictionary<string, string>
    Properties of the Execution. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    Name string
    The resource name of the Execution.
    SchemaTitle string
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    SchemaVersion string
    The version of the schema in schema_title to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    State string
    The state of this Execution. This is a property of the Execution, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines) and the system does not prescribe or check the validity of state transitions.
    UpdateTime string
    Timestamp when this Execution was last updated.
    CreateTime string
    Timestamp when this Execution was created.
    Description string
    Description of the Execution.
    DisplayName string
    User-provided display name of the Execution. May be up to 128 Unicode characters.
    Etag string
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    Labels map[string]string
    The labels with user-defined metadata to organize your Executions. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Execution (System labels are excluded).
    Metadata map[string]string
    Properties of the Execution. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    Name string
    The resource name of the Execution.
    SchemaTitle string
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    SchemaVersion string
    The version of the schema in schema_title to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    State string
    The state of this Execution. This is a property of the Execution, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines) and the system does not prescribe or check the validity of state transitions.
    UpdateTime string
    Timestamp when this Execution was last updated.
    createTime String
    Timestamp when this Execution was created.
    description String
    Description of the Execution.
    displayName String
    User-provided display name of the Execution. May be up to 128 Unicode characters.
    etag String
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Map<String,String>
    The labels with user-defined metadata to organize your Executions. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Execution (System labels are excluded).
    metadata Map<String,String>
    Properties of the Execution. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    name String
    The resource name of the Execution.
    schemaTitle String
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    schemaVersion String
    The version of the schema in schema_title to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    state String
    The state of this Execution. This is a property of the Execution, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines) and the system does not prescribe or check the validity of state transitions.
    updateTime String
    Timestamp when this Execution was last updated.
    createTime string
    Timestamp when this Execution was created.
    description string
    Description of the Execution.
    displayName string
    User-provided display name of the Execution. May be up to 128 Unicode characters.
    etag string
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels {[key: string]: string}
    The labels with user-defined metadata to organize your Executions. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Execution (System labels are excluded).
    metadata {[key: string]: string}
    Properties of the Execution. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    name string
    The resource name of the Execution.
    schemaTitle string
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    schemaVersion string
    The version of the schema in schema_title to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    state string
    The state of this Execution. This is a property of the Execution, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines) and the system does not prescribe or check the validity of state transitions.
    updateTime string
    Timestamp when this Execution was last updated.
    create_time str
    Timestamp when this Execution was created.
    description str
    Description of the Execution.
    display_name str
    User-provided display name of the Execution. May be up to 128 Unicode characters.
    etag str
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Mapping[str, str]
    The labels with user-defined metadata to organize your Executions. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Execution (System labels are excluded).
    metadata Mapping[str, str]
    Properties of the Execution. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    name str
    The resource name of the Execution.
    schema_title str
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    schema_version str
    The version of the schema in schema_title to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    state str
    The state of this Execution. This is a property of the Execution, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines) and the system does not prescribe or check the validity of state transitions.
    update_time str
    Timestamp when this Execution was last updated.
    createTime String
    Timestamp when this Execution was created.
    description String
    Description of the Execution.
    displayName String
    User-provided display name of the Execution. May be up to 128 Unicode characters.
    etag String
    An eTag used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Map<String>
    The labels with user-defined metadata to organize your Executions. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. No more than 64 user labels can be associated with one Execution (System labels are excluded).
    metadata Map<String>
    Properties of the Execution. Top-level metadata keys' leading and trailing spaces will be trimmed. The size of this field should not exceed 200KB.
    name String
    The resource name of the Execution.
    schemaTitle String
    The title of the schema describing the metadata. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    schemaVersion String
    The version of the schema in schema_title to use. Schema title and version are expected to be registered in earlier Create Schema calls, and both are used together as unique identifiers to identify schemas within the local metadata store.
    state String
    The state of this Execution. This is a property of the Execution, and does not imply or capture any ongoing process. This property is managed by clients (such as Vertex AI Pipelines) and the system does not prescribe or check the validity of state transitions.
    updateTime String
    Timestamp when this Execution was last updated.

    GoogleCloudAiplatformV1beta1PipelineJobDetailResponse, GoogleCloudAiplatformV1beta1PipelineJobDetailResponseArgs

    pipelineContext Property Map
    The context of the pipeline.
    pipelineRunContext Property Map
    The context of the current pipeline run.
    taskDetails List<Property Map>
    The runtime details of the tasks under the pipeline.

    GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfig, GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigArgs

    GcsOutputDirectory string
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    FailurePolicy Pulumi.GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicy
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If a pipeline is set to PIPELINE_FAILURE_POLICY_FAIL_FAST, it stops scheduling new tasks after a task has failed; any already-scheduled tasks run to completion.
    InputArtifacts Dictionary<string, string>
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is the corresponding InputArtifact.
    ParameterValues Dictionary<string, string>
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    Parameters Dictionary<string, string>
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.
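    The artifact layout described for GcsOutputDirectory — the sub-path pattern {job_id}/{task_id}/{output_key} under the root output directory — can be sketched as follows (a hypothetical helper for illustration, not part of the SDK):

```python
def artifact_output_path(gcs_output_directory: str, job_id: str,
                         task_id: str, output_key: str) -> str:
    """Compose an output artifact path per the documented sub-path pattern."""
    root = gcs_output_directory.rstrip("/")
    return f"{root}/{job_id}/{task_id}/{output_key}"
```

    The service account running the pipeline needs storage.objects.get and storage.objects.create permissions on the bucket holding this directory.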

    GcsOutputDirectory string
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    FailurePolicy GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicy
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If a pipeline is set to PIPELINE_FAILURE_POLICY_FAIL_FAST, it stops scheduling new tasks after a task has failed; any already-scheduled tasks run to completion.
    InputArtifacts map[string]string
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is the corresponding InputArtifact.
    ParameterValues map[string]string
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    Parameters map[string]string
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    gcsOutputDirectory String
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    failurePolicy GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicy
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If it is set to PIPELINE_FAILURE_POLICY_FAIL_FAST instead, the pipeline stops scheduling new tasks once a task has failed; tasks that are already scheduled run to completion.
    inputArtifacts Map<String,String>
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is one of the InputArtifact.
    parameterValues Map<String,String>
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    parameters Map<String,String>
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Use RuntimeConfig.parameter_values instead.

    gcsOutputDirectory string
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    failurePolicy GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicy
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If it is set to PIPELINE_FAILURE_POLICY_FAIL_FAST instead, the pipeline stops scheduling new tasks once a task has failed; tasks that are already scheduled run to completion.
    inputArtifacts {[key: string]: string}
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is one of the InputArtifact.
    parameterValues {[key: string]: string}
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    parameters {[key: string]: string}
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Use RuntimeConfig.parameter_values instead.

    gcs_output_directory str
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    failure_policy GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicy
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If it is set to PIPELINE_FAILURE_POLICY_FAIL_FAST instead, the pipeline stops scheduling new tasks once a task has failed; tasks that are already scheduled run to completion.
    input_artifacts Mapping[str, str]
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is one of the InputArtifact.
    parameter_values Mapping[str, str]
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    parameters Mapping[str, str]
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Use RuntimeConfig.parameter_values instead.

    gcsOutputDirectory String
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    failurePolicy "PIPELINE_FAILURE_POLICY_UNSPECIFIED" | "PIPELINE_FAILURE_POLICY_FAIL_SLOW" | "PIPELINE_FAILURE_POLICY_FAIL_FAST"
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If it is set to PIPELINE_FAILURE_POLICY_FAIL_FAST instead, the pipeline stops scheduling new tasks once a task has failed; tasks that are already scheduled run to completion.
    inputArtifacts Map<String>
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is one of the InputArtifact.
    parameterValues Map<String>
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    parameters Map<String>
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Use RuntimeConfig.parameter_values instead.
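
    As described above, parameterValues supplies values for the placeholders declared in PipelineJob.pipeline_spec. The sketch below models that substitution in plain Python to show the semantics; the actual substitution is performed server-side by Vertex AI, and the pipeline_spec structure shown is a simplified version of the KFP v2 IR input definitions (resolve_parameters is a hypothetical helper, not part of any SDK).

    ```python
    # Minimal sketch: resolve declared pipeline parameters against parameter_values.
    # Assumption: pipeline_spec follows the simplified KFP v2 IR shape used here.

    def resolve_parameters(pipeline_spec: dict, parameter_values: dict) -> dict:
        """Return declared runtime parameters, with defaults overridden by parameter_values."""
        declared = pipeline_spec["root"]["inputDefinitions"]["parameters"]
        resolved = {}
        for name, definition in declared.items():
            if name in parameter_values:
                resolved[name] = parameter_values[name]       # caller-supplied value wins
            elif "defaultValue" in definition:
                resolved[name] = definition["defaultValue"]   # fall back to the spec's default
            else:
                raise ValueError(f"missing required parameter: {name}")
        return resolved

    spec = {
        "root": {
            "inputDefinitions": {
                "parameters": {
                    "learning_rate": {"parameterType": "NUMBER_DOUBLE", "defaultValue": 0.01},
                    "epochs": {"parameterType": "NUMBER_INTEGER"},
                }
            }
        }
    }

    print(resolve_parameters(spec, {"epochs": 5}))
    ```

    A missing required parameter (here, omitting epochs) raises an error, which mirrors how a PipelineJob fails validation when a placeholder has no value.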

    GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicy, GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicyArgs

    PipelineFailurePolicyUnspecified
    PIPELINE_FAILURE_POLICY_UNSPECIFIED. Default value; follows fail-slow behavior.
    PipelineFailurePolicyFailSlow
    PIPELINE_FAILURE_POLICY_FAIL_SLOW. Indicates that the pipeline should continue to run until all possible tasks have been scheduled and completed.
    PipelineFailurePolicyFailFast
    PIPELINE_FAILURE_POLICY_FAIL_FAST. Indicates that the pipeline should stop scheduling new tasks after a task has failed.
    GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicyPipelineFailurePolicyUnspecified
    PIPELINE_FAILURE_POLICY_UNSPECIFIED. Default value; follows fail-slow behavior.
    GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicyPipelineFailurePolicyFailSlow
    PIPELINE_FAILURE_POLICY_FAIL_SLOW. Indicates that the pipeline should continue to run until all possible tasks have been scheduled and completed.
    GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigFailurePolicyPipelineFailurePolicyFailFast
    PIPELINE_FAILURE_POLICY_FAIL_FAST. Indicates that the pipeline should stop scheduling new tasks after a task has failed.
    PipelineFailurePolicyUnspecified
    PIPELINE_FAILURE_POLICY_UNSPECIFIED. Default value; follows fail-slow behavior.
    PipelineFailurePolicyFailSlow
    PIPELINE_FAILURE_POLICY_FAIL_SLOW. Indicates that the pipeline should continue to run until all possible tasks have been scheduled and completed.
    PipelineFailurePolicyFailFast
    PIPELINE_FAILURE_POLICY_FAIL_FAST. Indicates that the pipeline should stop scheduling new tasks after a task has failed.
    PipelineFailurePolicyUnspecified
    PIPELINE_FAILURE_POLICY_UNSPECIFIED. Default value; follows fail-slow behavior.
    PipelineFailurePolicyFailSlow
    PIPELINE_FAILURE_POLICY_FAIL_SLOW. Indicates that the pipeline should continue to run until all possible tasks have been scheduled and completed.
    PipelineFailurePolicyFailFast
    PIPELINE_FAILURE_POLICY_FAIL_FAST. Indicates that the pipeline should stop scheduling new tasks after a task has failed.
    PIPELINE_FAILURE_POLICY_UNSPECIFIED
    PIPELINE_FAILURE_POLICY_UNSPECIFIED. Default value; follows fail-slow behavior.
    PIPELINE_FAILURE_POLICY_FAIL_SLOW
    PIPELINE_FAILURE_POLICY_FAIL_SLOW. Indicates that the pipeline should continue to run until all possible tasks have been scheduled and completed.
    PIPELINE_FAILURE_POLICY_FAIL_FAST
    PIPELINE_FAILURE_POLICY_FAIL_FAST. Indicates that the pipeline should stop scheduling new tasks after a task has failed.
    "PIPELINE_FAILURE_POLICY_UNSPECIFIED"
    PIPELINE_FAILURE_POLICY_UNSPECIFIED. Default value; follows fail-slow behavior.
    "PIPELINE_FAILURE_POLICY_FAIL_SLOW"
    PIPELINE_FAILURE_POLICY_FAIL_SLOW. Indicates that the pipeline should continue to run until all possible tasks have been scheduled and completed.
    "PIPELINE_FAILURE_POLICY_FAIL_FAST"
    PIPELINE_FAILURE_POLICY_FAIL_FAST. Indicates that the pipeline should stop scheduling new tasks after a task has failed.
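
    The difference between the two policies can be sketched with a toy scheduler. This is an illustrative model only (run_pipeline is a hypothetical name, and real pipelines are DAGs rather than sequential lists): under fail-slow, every runnable task is still scheduled after a failure; under fail-fast, no new tasks are scheduled once a task has failed, though already-scheduled tasks run to completion.

    ```python
    # Toy model of the two failure policies; not a real scheduler.
    # Assumption: tasks are independent and scheduled in list order.

    def run_pipeline(tasks, policy="PIPELINE_FAILURE_POLICY_FAIL_SLOW"):
        """tasks: list of (name, succeeds). Returns the names that got scheduled."""
        scheduled = []
        failed = False
        for name, succeeds in tasks:
            if failed and policy == "PIPELINE_FAILURE_POLICY_FAIL_FAST":
                break  # fail-fast: stop scheduling new tasks after any failure
            scheduled.append(name)
            if not succeeds:
                failed = True  # record the failure; fail-slow keeps scheduling anyway
        return scheduled

    tasks = [("preprocess", True), ("train", False), ("evaluate", True)]
    print(run_pipeline(tasks))                                          # fail-slow
    print(run_pipeline(tasks, "PIPELINE_FAILURE_POLICY_FAIL_FAST"))     # fail-fast
    ```

    In both cases the pipeline ultimately ends in a failed state; the policy only controls how much additional work is scheduled after the first failure.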

    GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigResponse, GoogleCloudAiplatformV1beta1PipelineJobRuntimeConfigResponseArgs

    FailurePolicy string
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If it is set to PIPELINE_FAILURE_POLICY_FAIL_FAST instead, the pipeline stops scheduling new tasks once a task has failed; tasks that are already scheduled run to completion.
    GcsOutputDirectory string
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    InputArtifacts Dictionary<string, string>
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is one of the InputArtifact.
    ParameterValues Dictionary<string, string>
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    Parameters Dictionary<string, string>
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Use RuntimeConfig.parameter_values instead.

    FailurePolicy string
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If it is set to PIPELINE_FAILURE_POLICY_FAIL_FAST instead, the pipeline stops scheduling new tasks once a task has failed; tasks that are already scheduled run to completion.
    GcsOutputDirectory string
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    InputArtifacts map[string]string
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is one of the InputArtifact.
    ParameterValues map[string]string
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    Parameters map[string]string
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Use RuntimeConfig.parameter_values instead.

    failurePolicy String
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If it is set to PIPELINE_FAILURE_POLICY_FAIL_FAST instead, the pipeline stops scheduling new tasks once a task has failed; tasks that are already scheduled run to completion.
    gcsOutputDirectory String
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    inputArtifacts Map<String,String>
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is one of the InputArtifact.
    parameterValues Map<String,String>
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    parameters Map<String,String>
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Use RuntimeConfig.parameter_values instead.

    failurePolicy string
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If it is set to PIPELINE_FAILURE_POLICY_FAIL_FAST instead, the pipeline stops scheduling new tasks once a task has failed; tasks that are already scheduled run to completion.
    gcsOutputDirectory string
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    inputArtifacts {[key: string]: string}
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is one of the InputArtifact.
    parameterValues {[key: string]: string}
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    parameters {[key: string]: string}
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Use RuntimeConfig.parameter_values instead.

    failure_policy str
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If it is set to PIPELINE_FAILURE_POLICY_FAIL_FAST instead, the pipeline stops scheduling new tasks once a task has failed; tasks that are already scheduled run to completion.
    gcs_output_directory str
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    input_artifacts Mapping[str, str]
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is one of the InputArtifact.
    parameter_values Mapping[str, str]
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    parameters Mapping[str, str]
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Use RuntimeConfig.parameter_values instead.

    failurePolicy String
    Represents the failure policy of a pipeline. By default, a pipeline continues to run until no more tasks can be executed (PIPELINE_FAILURE_POLICY_FAIL_SLOW). If it is set to PIPELINE_FAILURE_POLICY_FAIL_FAST instead, the pipeline stops scheduling new tasks once a task has failed; tasks that are already scheduled run to completion.
    gcsOutputDirectory String
    A path in a Cloud Storage bucket, which will be treated as the root output directory of the pipeline. It is used by the system to generate the paths of output artifacts. The artifact paths are generated with a sub-path pattern {job_id}/{task_id}/{output_key} under the specified output directory. The service account specified in this pipeline must have the storage.objects.get and storage.objects.create permissions for this bucket.
    inputArtifacts Map<String>
    The runtime artifacts of the PipelineJob. The key is the input artifact name and the value is one of the InputArtifact.
    parameterValues Map<String>
    The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.1.0, such as pipelines built using Kubeflow Pipelines SDK 1.9 or higher and the v2 DSL.
    parameters Map<String>
    Deprecated. Use RuntimeConfig.parameter_values instead. The runtime parameters of the PipelineJob. The parameters will be passed into PipelineJob.pipeline_spec to replace the placeholders at runtime. This field is used by pipelines built using PipelineJob.pipeline_spec.schema_version 2.0.0 or lower, such as pipelines built using Kubeflow Pipelines SDK 1.8 or lower.

    Deprecated: Use RuntimeConfig.parameter_values instead.
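
    The gcsOutputDirectory description above states that artifact paths are generated with the sub-path pattern {job_id}/{task_id}/{output_key} under the root output directory. A minimal sketch of that pattern (artifact_path is a hypothetical helper; the service generates these paths itself):

    ```python
    # Sketch of the documented sub-path pattern {job_id}/{task_id}/{output_key}.
    # The example bucket and IDs below are made up for illustration.

    def artifact_path(gcs_output_directory: str, job_id: str, task_id: str, output_key: str) -> str:
        """Build the artifact path the docs describe, under the root output directory."""
        root = gcs_output_directory.rstrip("/")  # tolerate a trailing slash on the root
        return f"{root}/{job_id}/{task_id}/{output_key}"

    print(artifact_path("gs://my-bucket/pipeline-root", "job-1", "task-7", "model"))
    ```

    The service account running the pipeline needs storage.objects.get and storage.objects.create on the bucket so it can read and write objects under this prefix.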

    GoogleCloudAiplatformV1beta1PipelineTaskDetailPipelineTaskStatusResponse, GoogleCloudAiplatformV1beta1PipelineTaskDetailPipelineTaskStatusResponseArgs

    Error Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleRpcStatusResponse
    The error that occurred during this state. May be set while the state is any non-final state (PENDING/RUNNING/CANCELLING) or the FAILED state. If the state is FAILED, the error is final and will not be retried. If the state is non-final, the error indicates a system error that is being retried.
    State string
    The state of the task.
    UpdateTime string
    Update time of this status.
    Error GoogleRpcStatusResponse
    The error that occurred during this state. May be set while the state is any non-final state (PENDING/RUNNING/CANCELLING) or the FAILED state. If the state is FAILED, the error is final and will not be retried. If the state is non-final, the error indicates a system error that is being retried.
    State string
    The state of the task.
    UpdateTime string
    Update time of this status.
    error GoogleRpcStatusResponse
    The error that occurred during this state. May be set while the state is any non-final state (PENDING/RUNNING/CANCELLING) or the FAILED state. If the state is FAILED, the error is final and will not be retried. If the state is non-final, the error indicates a system error that is being retried.
    state String
    The state of the task.
    updateTime String
    Update time of this status.
    error GoogleRpcStatusResponse
    The error that occurred during this state. May be set while the state is any non-final state (PENDING/RUNNING/CANCELLING) or the FAILED state. If the state is FAILED, the error is final and will not be retried. If the state is non-final, the error indicates a system error that is being retried.
    state string
    The state of the task.
    updateTime string
    Update time of this status.
    error GoogleRpcStatusResponse
    The error that occurred during this state. May be set while the state is any non-final state (PENDING/RUNNING/CANCELLING) or the FAILED state. If the state is FAILED, the error is final and will not be retried. If the state is non-final, the error indicates a system error that is being retried.
    state str
    The state of the task.
    update_time str
    Update time of this status.
    error Property Map
    The error that occurred during this state. May be set while the state is any non-final state (PENDING/RUNNING/CANCELLING) or the FAILED state. If the state is FAILED, the error is final and will not be retried. If the state is non-final, the error indicates a system error that is being retried.
    state String
    The state of the task.
    updateTime String
    Update time of this status.
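
    Because pipelineTaskStatus (in the task detail below) keeps the full history of these entries, a caller typically wants only the most recent one. A small sketch, assuming the response fields shown here (state as a string, update_time as an RFC 3339 timestamp); latest_status is a hypothetical helper:

    ```python
    from datetime import datetime

    # Sketch: pick the most recent status entry from a task's status history.
    # Assumption: entries carry "state" and an RFC 3339 "update_time", as documented.

    def latest_status(task_statuses: list) -> dict:
        """Return the status entry with the greatest update_time."""
        return max(
            task_statuses,
            key=lambda s: datetime.fromisoformat(s["update_time"].replace("Z", "+00:00")),
        )

    history = [
        {"state": "PENDING", "update_time": "2023-11-29T10:00:00Z"},
        {"state": "RUNNING", "update_time": "2023-11-29T10:05:00Z"},
        {"state": "SUCCEEDED", "update_time": "2023-11-29T10:20:00Z"},
    ]
    print(latest_status(history)["state"])
    ```

    Entries are compared by timestamp rather than list position, so this also works if the history is not returned in chronological order.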

    GoogleCloudAiplatformV1beta1PipelineTaskDetailResponse, GoogleCloudAiplatformV1beta1PipelineTaskDetailResponseArgs

    CreateTime string
    Task create time.
    EndTime string
    Task end time.
    Error Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleRpcStatusResponse
    The error that occurred during task execution. Only populated when the task's state is FAILED or CANCELLED.
    Execution Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ExecutionResponse
    The execution metadata of the task.
    ExecutorDetail Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PipelineTaskExecutorDetailResponse
    The detailed execution info.
    Inputs Dictionary<string, string>
    The runtime input artifacts of the task.
    Outputs Dictionary<string, string>
    The runtime output artifacts of the task.
    ParentTaskId string
    The ID of the parent task if the task is within a component scope. Empty if the task is at the root level.
    PipelineTaskStatus List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PipelineTaskDetailPipelineTaskStatusResponse>
    A list of task statuses. This field keeps a record of how the task's status evolved over time.
    StartTime string
    Task start time.
    State string
    State of the task.
    TaskId string
    The system-generated ID of the task.
    TaskName string
    The user-specified name of the task, as defined in pipeline_spec.
    CreateTime string
    Task create time.
    EndTime string
    Task end time.
    Error GoogleRpcStatusResponse
    The error that occurred during task execution. Only populated when the task's state is FAILED or CANCELLED.
    Execution GoogleCloudAiplatformV1beta1ExecutionResponse
    The execution metadata of the task.
    ExecutorDetail GoogleCloudAiplatformV1beta1PipelineTaskExecutorDetailResponse
    The detailed execution info.
    Inputs map[string]string
    The runtime input artifacts of the task.
    Outputs map[string]string
    The runtime output artifacts of the task.
    ParentTaskId string
    The ID of the parent task if the task is within a component scope. Empty if the task is at the root level.
    PipelineTaskStatus []GoogleCloudAiplatformV1beta1PipelineTaskDetailPipelineTaskStatusResponse
    A list of task statuses. This field keeps a record of how the task's status evolved over time.
    StartTime string
    Task start time.
    State string
    State of the task.
    TaskId string
    The system-generated ID of the task.
    TaskName string
    The user-specified name of the task, as defined in pipeline_spec.
    createTime String
    Task create time.
    endTime String
    Task end time.
    error GoogleRpcStatusResponse
    The error that occurred during task execution. Only populated when the task's state is FAILED or CANCELLED.
    execution GoogleCloudAiplatformV1beta1ExecutionResponse
    The execution metadata of the task.
    executorDetail GoogleCloudAiplatformV1beta1PipelineTaskExecutorDetailResponse
    The detailed execution info.
    inputs Map<String,String>
    The runtime input artifacts of the task.
    outputs Map<String,String>
    The runtime output artifacts of the task.
    parentTaskId String
    The ID of the parent task if the task is within a component scope. Empty if the task is at the root level.
    pipelineTaskStatus List<GoogleCloudAiplatformV1beta1PipelineTaskDetailPipelineTaskStatusResponse>
    A list of task statuses. This field keeps a record of the task status as it evolves over time.
    startTime String
    Task start time.
    state String
    State of the task.
    taskId String
    The system-generated ID of the task.
    taskName String
    The user-specified name of the task that is defined in pipeline_spec.
    createTime string
    Task create time.
    endTime string
    Task end time.
    error GoogleRpcStatusResponse
    The error that occurred during task execution. Only populated when the task's state is FAILED or CANCELLED.
    execution GoogleCloudAiplatformV1beta1ExecutionResponse
    The execution metadata of the task.
    executorDetail GoogleCloudAiplatformV1beta1PipelineTaskExecutorDetailResponse
    The detailed execution info.
    inputs {[key: string]: string}
    The runtime input artifacts of the task.
    outputs {[key: string]: string}
    The runtime output artifacts of the task.
    parentTaskId string
    The ID of the parent task if the task is within a component scope. Empty if the task is at the root level.
    pipelineTaskStatus GoogleCloudAiplatformV1beta1PipelineTaskDetailPipelineTaskStatusResponse[]
    A list of task statuses. This field keeps a record of the task status as it evolves over time.
    startTime string
    Task start time.
    state string
    State of the task.
    taskId string
    The system-generated ID of the task.
    taskName string
    The user-specified name of the task that is defined in pipeline_spec.
    create_time str
    Task create time.
    end_time str
    Task end time.
    error GoogleRpcStatusResponse
    The error that occurred during task execution. Only populated when the task's state is FAILED or CANCELLED.
    execution GoogleCloudAiplatformV1beta1ExecutionResponse
    The execution metadata of the task.
    executor_detail GoogleCloudAiplatformV1beta1PipelineTaskExecutorDetailResponse
    The detailed execution info.
    inputs Mapping[str, str]
    The runtime input artifacts of the task.
    outputs Mapping[str, str]
    The runtime output artifacts of the task.
    parent_task_id str
    The ID of the parent task if the task is within a component scope. Empty if the task is at the root level.
    pipeline_task_status Sequence[GoogleCloudAiplatformV1beta1PipelineTaskDetailPipelineTaskStatusResponse]
    A list of task statuses. This field keeps a record of the task status as it evolves over time.
    start_time str
    Task start time.
    state str
    State of the task.
    task_id str
    The system-generated ID of the task.
    task_name str
    The user-specified name of the task that is defined in pipeline_spec.
    createTime String
    Task create time.
    endTime String
    Task end time.
    error Property Map
    The error that occurred during task execution. Only populated when the task's state is FAILED or CANCELLED.
    execution Property Map
    The execution metadata of the task.
    executorDetail Property Map
    The detailed execution info.
    inputs Map<String>
    The runtime input artifacts of the task.
    outputs Map<String>
    The runtime output artifacts of the task.
    parentTaskId String
    The ID of the parent task if the task is within a component scope. Empty if the task is at the root level.
    pipelineTaskStatus List<Property Map>
    A list of task statuses. This field keeps a record of the task status as it evolves over time.
    startTime String
    Task start time.
    state String
    State of the task.
    taskId String
    The system-generated ID of the task.
    taskName String
    The user-specified name of the task that is defined in pipeline_spec.
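    When a pipeline run fails, these task details are typically scanned for the failing tasks and their errors. A minimal, hypothetical sketch, assuming the task details have been materialized as plain dicts keyed by the camelCase field names listed above (`taskName`, `state`, `error`); the helper `failed_task_errors` is our own illustration, not part of the SDK:

    ```python
    from typing import Any

    def failed_task_errors(task_details: list[dict[str, Any]]) -> dict[str, str]:
        """Map each FAILED task's name to its error message. Per the docs,
        'error' is only populated when the task's state is FAILED or CANCELLED."""
        summary = {}
        for task in task_details:
            if task.get("state") == "FAILED":
                error = task.get("error") or {}
                name = task.get("taskName") or task.get("taskId", "?")
                summary[name] = error.get("message", "")
        return summary

    tasks = [
        {"taskName": "train", "state": "SUCCEEDED"},
        {"taskName": "evaluate", "state": "FAILED",
         "error": {"code": 13, "message": "container exited with code 1"}},
    ]
    print(failed_task_errors(tasks))  # {'evaluate': 'container exited with code 1'}
    ```

    The same walk can be extended to CANCELLED tasks, since those are the only other state in which `error` is populated.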

    GoogleCloudAiplatformV1beta1PipelineTaskExecutorDetailContainerDetailResponse, GoogleCloudAiplatformV1beta1PipelineTaskExecutorDetailContainerDetailResponseArgs

    FailedMainJobs List<string>
    The names of the previously failed CustomJob for the main container executions. The list includes all attempts in chronological order.
    FailedPreCachingCheckJobs List<string>
    The names of the previously failed CustomJob for the pre-caching-check container executions. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events. The list includes all attempts in chronological order.
    MainJob string
    The name of the CustomJob for the main container execution.
    PreCachingCheckJob string
    The name of the CustomJob for the pre-caching-check container execution. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events.
    FailedMainJobs []string
    The names of the previously failed CustomJob for the main container executions. The list includes all attempts in chronological order.
    FailedPreCachingCheckJobs []string
    The names of the previously failed CustomJob for the pre-caching-check container executions. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events. The list includes all attempts in chronological order.
    MainJob string
    The name of the CustomJob for the main container execution.
    PreCachingCheckJob string
    The name of the CustomJob for the pre-caching-check container execution. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events.
    failedMainJobs List<String>
    The names of the previously failed CustomJob for the main container executions. The list includes all attempts in chronological order.
    failedPreCachingCheckJobs List<String>
    The names of the previously failed CustomJob for the pre-caching-check container executions. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events. The list includes all attempts in chronological order.
    mainJob String
    The name of the CustomJob for the main container execution.
    preCachingCheckJob String
    The name of the CustomJob for the pre-caching-check container execution. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events.
    failedMainJobs string[]
    The names of the previously failed CustomJob for the main container executions. The list includes all attempts in chronological order.
    failedPreCachingCheckJobs string[]
    The names of the previously failed CustomJob for the pre-caching-check container executions. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events. The list includes all attempts in chronological order.
    mainJob string
    The name of the CustomJob for the main container execution.
    preCachingCheckJob string
    The name of the CustomJob for the pre-caching-check container execution. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events.
    failed_main_jobs Sequence[str]
    The names of the previously failed CustomJob for the main container executions. The list includes all attempts in chronological order.
    failed_pre_caching_check_jobs Sequence[str]
    The names of the previously failed CustomJob for the pre-caching-check container executions. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events. The list includes all attempts in chronological order.
    main_job str
    The name of the CustomJob for the main container execution.
    pre_caching_check_job str
    The name of the CustomJob for the pre-caching-check container execution. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events.
    failedMainJobs List<String>
    The names of the previously failed CustomJob for the main container executions. The list includes all attempts in chronological order.
    failedPreCachingCheckJobs List<String>
    The names of the previously failed CustomJob for the pre-caching-check container executions. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events. The list includes all attempts in chronological order.
    mainJob String
    The name of the CustomJob for the main container execution.
    preCachingCheckJob String
    The name of the CustomJob for the pre-caching-check container execution. This job will be available if the PipelineJob.pipeline_spec specifies the pre_caching_check hook in the lifecycle events.
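    Because the failed-job lists are documented as chronological, the most recent attempt is simply the last element. An illustrative sketch (the dict shape and the helper name `latest_failed_main_job` are assumptions for this example, not SDK API):

    ```python
    from typing import Any, Optional

    def latest_failed_main_job(container_detail: dict[str, Any]) -> Optional[str]:
        """Return the most recent failed main-container CustomJob name, if any.
        failedMainJobs is chronological, so the last entry is the latest attempt."""
        failed = container_detail.get("failedMainJobs") or []
        return failed[-1] if failed else None

    detail = {
        "failedMainJobs": ["projects/p/customJobs/1", "projects/p/customJobs/2"],
        "mainJob": "projects/p/customJobs/3",
    }
    print(latest_failed_main_job(detail))  # projects/p/customJobs/2
    ```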

    GoogleCloudAiplatformV1beta1PipelineTaskExecutorDetailCustomJobDetailResponse, GoogleCloudAiplatformV1beta1PipelineTaskExecutorDetailCustomJobDetailResponseArgs

    FailedJobs List<string>
    The names of the previously failed CustomJob. The list includes all attempts in chronological order.
    Job string
    The name of the CustomJob.
    FailedJobs []string
    The names of the previously failed CustomJob. The list includes all attempts in chronological order.
    Job string
    The name of the CustomJob.
    failedJobs List<String>
    The names of the previously failed CustomJob. The list includes all attempts in chronological order.
    job String
    The name of the CustomJob.
    failedJobs string[]
    The names of the previously failed CustomJob. The list includes all attempts in chronological order.
    job string
    The name of the CustomJob.
    failed_jobs Sequence[str]
    The names of the previously failed CustomJob. The list includes all attempts in chronological order.
    job str
    The name of the CustomJob.
    failedJobs List<String>
    The names of the previously failed CustomJob. The list includes all attempts in chronological order.
    job String
    The name of the CustomJob.

    GoogleCloudAiplatformV1beta1PipelineTaskExecutorDetailResponse, GoogleCloudAiplatformV1beta1PipelineTaskExecutorDetailResponseArgs

    containerDetail Property Map
    The detailed info for a container executor.
    customJobDetail Property Map
    The detailed info for a custom job executor.

    GoogleCloudAiplatformV1beta1PipelineTemplateMetadataResponse, GoogleCloudAiplatformV1beta1PipelineTemplateMetadataResponseArgs

    Version string
    The version_name in Artifact Registry. Will always be present in the output if the PipelineJob.template_uri is from a supported template registry. Format is "sha256:abcdef123456...".
    Version string
    The version_name in Artifact Registry. Will always be present in the output if the PipelineJob.template_uri is from a supported template registry. Format is "sha256:abcdef123456...".
    version String
    The version_name in Artifact Registry. Will always be present in the output if the PipelineJob.template_uri is from a supported template registry. Format is "sha256:abcdef123456...".
    version string
    The version_name in Artifact Registry. Will always be present in the output if the PipelineJob.template_uri is from a supported template registry. Format is "sha256:abcdef123456...".
    version str
    The version_name in Artifact Registry. Will always be present in the output if the PipelineJob.template_uri is from a supported template registry. Format is "sha256:abcdef123456...".
    version String
    The version_name in Artifact Registry. Will always be present in the output if the PipelineJob.template_uri is from a supported template registry. Format is "sha256:abcdef123456...".
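    The documented `"sha256:<digest>"` format can be validated before the digest is used, e.g. for pinning or comparison. A minimal sketch under that assumption (the helper `parse_template_version` is our own, not SDK API):

    ```python
    import re

    def parse_template_version(version: str) -> str:
        """Extract the hex digest from a 'sha256:<digest>' template version
        string, the format documented for PipelineTemplateMetadata.version."""
        match = re.fullmatch(r"sha256:([0-9a-fA-F]+)", version)
        if match is None:
            raise ValueError(f"unexpected template version format: {version!r}")
        return match.group(1)

    print(parse_template_version("sha256:abcdef123456"))  # abcdef123456
    ```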

    GoogleRpcStatusResponse, GoogleRpcStatusResponseArgs

    Code int
    The status code, which should be an enum value of google.rpc.Code.
    Details List<ImmutableDictionary<string, string>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    Message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    Code int
    The status code, which should be an enum value of google.rpc.Code.
    Details []map[string]string
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    Message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code Integer
    The status code, which should be an enum value of google.rpc.Code.
    details List<Map<String,String>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message String
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code number
    The status code, which should be an enum value of google.rpc.Code.
    details {[key: string]: string}[]
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code int
    The status code, which should be an enum value of google.rpc.Code.
    details Sequence[Mapping[str, str]]
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message str
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code Number
    The status code, which should be an enum value of google.rpc.Code.
    details List<Map<String>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message String
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
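    Since `code` is a numeric google.rpc.Code value, rendering it alongside the message makes logs far easier to read. A minimal sketch of formatting a GoogleRpcStatusResponse-shaped dict; the lookup table is a small excerpt of the standard google.rpc.Code values, and `format_status` is an illustrative helper, not part of the SDK:

    ```python
    # Excerpt of google.rpc.Code numeric values (the standard gRPC status codes).
    GOOGLE_RPC_CODE_NAMES = {
        0: "OK", 1: "CANCELLED", 2: "UNKNOWN", 3: "INVALID_ARGUMENT",
        5: "NOT_FOUND", 7: "PERMISSION_DENIED", 13: "INTERNAL", 14: "UNAVAILABLE",
    }

    def format_status(status: dict) -> str:
        """Render a status dict's code/message as 'NAME (code): message'."""
        code = status.get("code", 2)
        name = GOOGLE_RPC_CODE_NAMES.get(code, "UNKNOWN")
        return f"{name} ({code}): {status.get('message', '')}"

    print(format_status({"code": 5, "message": "pipeline template not found"}))
    # NOT_FOUND (5): pipeline template not found
    ```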

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0