
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.aiplatform/v1beta1.CustomJob

    Creates a CustomJob. A newly created CustomJob will immediately attempt to run. Auto-naming is currently not supported for this resource.

    Create CustomJob Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new CustomJob(name: string, args: CustomJobArgs, opts?: CustomResourceOptions);
    @overload
    def CustomJob(resource_name: str,
                  args: CustomJobArgs,
                  opts: Optional[ResourceOptions] = None)
    
    @overload
    def CustomJob(resource_name: str,
                  opts: Optional[ResourceOptions] = None,
                  display_name: Optional[str] = None,
                  job_spec: Optional[GoogleCloudAiplatformV1beta1CustomJobSpecArgs] = None,
                  encryption_spec: Optional[GoogleCloudAiplatformV1beta1EncryptionSpecArgs] = None,
                  labels: Optional[Mapping[str, str]] = None,
                  location: Optional[str] = None,
                  project: Optional[str] = None)
    func NewCustomJob(ctx *Context, name string, args CustomJobArgs, opts ...ResourceOption) (*CustomJob, error)
    public CustomJob(string name, CustomJobArgs args, CustomResourceOptions? opts = null)
    public CustomJob(String name, CustomJobArgs args)
    public CustomJob(String name, CustomJobArgs args, CustomResourceOptions options)
    
    type: google-native:aiplatform/v1beta1:CustomJob
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args CustomJobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args CustomJobArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args CustomJobArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args CustomJobArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args CustomJobArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Constructor example

    The following reference example uses placeholder values for all input properties.

    var google_nativeCustomJobResource = new GoogleNative.Aiplatform.V1Beta1.CustomJob("google-nativeCustomJobResource", new()
    {
        DisplayName = "string",
        JobSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1CustomJobSpecArgs
        {
            WorkerPoolSpecs = new[]
            {
                new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs
                {
                    ContainerSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ContainerSpecArgs
                    {
                        ImageUri = "string",
                        Args = new[]
                        {
                            "string",
                        },
                        Command = new[]
                        {
                            "string",
                        },
                        Env = new[]
                        {
                            new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarArgs
                            {
                                Name = "string",
                                Value = "string",
                            },
                        },
                    },
                    DiskSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpecArgs
                    {
                        BootDiskSizeGb = 0,
                        BootDiskType = "string",
                    },
                    MachineSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecArgs
                    {
                        AcceleratorCount = 0,
                        AcceleratorType = GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
                        MachineType = "string",
                        TpuTopology = "string",
                    },
                    NfsMounts = new[]
                    {
                        new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NfsMountArgs
                        {
                            MountPoint = "string",
                            Path = "string",
                            Server = "string",
                        },
                    },
                    PythonPackageSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PythonPackageSpecArgs
                    {
                        ExecutorImageUri = "string",
                        PackageUris = new[]
                        {
                            "string",
                        },
                        PythonModule = "string",
                        Args = new[]
                        {
                            "string",
                        },
                        Env = new[]
                        {
                            new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarArgs
                            {
                                Name = "string",
                                Value = "string",
                            },
                        },
                    },
                    ReplicaCount = "string",
                },
            },
            PersistentResourceId = "string",
            EnableWebAccess = false,
            Experiment = "string",
            ExperimentRun = "string",
            Network = "string",
            BaseOutputDirectory = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestinationArgs
            {
                OutputUriPrefix = "string",
            },
            ProtectedArtifactLocationId = "string",
            ReservedIpRanges = new[]
            {
                "string",
            },
            Scheduling = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SchedulingArgs
            {
                DisableRetries = false,
                RestartJobOnWorkerRestart = false,
                Timeout = "string",
            },
            ServiceAccount = "string",
            Tensorboard = "string",
            EnableDashboardAccess = false,
        },
        EncryptionSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpecArgs
        {
            KmsKeyName = "string",
        },
        Labels = 
        {
            { "string", "string" },
        },
        Location = "string",
        Project = "string",
    });
    
    example, err := aiplatformv1beta1.NewCustomJob(ctx, "google-nativeCustomJobResource", &aiplatformv1beta1.CustomJobArgs{
    	DisplayName: pulumi.String("string"),
    	JobSpec: &aiplatform.GoogleCloudAiplatformV1beta1CustomJobSpecArgs{
    		WorkerPoolSpecs: aiplatform.GoogleCloudAiplatformV1beta1WorkerPoolSpecArray{
    			&aiplatform.GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs{
    				ContainerSpec: &aiplatform.GoogleCloudAiplatformV1beta1ContainerSpecArgs{
    					ImageUri: pulumi.String("string"),
    					Args: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    					Command: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    					Env: aiplatform.GoogleCloudAiplatformV1beta1EnvVarArray{
    						&aiplatform.GoogleCloudAiplatformV1beta1EnvVarArgs{
    							Name:  pulumi.String("string"),
    							Value: pulumi.String("string"),
    						},
    					},
    				},
    				DiskSpec: &aiplatform.GoogleCloudAiplatformV1beta1DiskSpecArgs{
    					BootDiskSizeGb: pulumi.Int(0),
    					BootDiskType:   pulumi.String("string"),
    				},
    				MachineSpec: &aiplatform.GoogleCloudAiplatformV1beta1MachineSpecArgs{
    					AcceleratorCount: pulumi.Int(0),
    					AcceleratorType:  aiplatformv1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeAcceleratorTypeUnspecified,
    					MachineType:      pulumi.String("string"),
    					TpuTopology:      pulumi.String("string"),
    				},
    				NfsMounts: aiplatform.GoogleCloudAiplatformV1beta1NfsMountArray{
    					&aiplatform.GoogleCloudAiplatformV1beta1NfsMountArgs{
    						MountPoint: pulumi.String("string"),
    						Path:       pulumi.String("string"),
    						Server:     pulumi.String("string"),
    					},
    				},
    				PythonPackageSpec: &aiplatform.GoogleCloudAiplatformV1beta1PythonPackageSpecArgs{
    					ExecutorImageUri: pulumi.String("string"),
    					PackageUris: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    					PythonModule: pulumi.String("string"),
    					Args: pulumi.StringArray{
    						pulumi.String("string"),
    					},
    					Env: aiplatform.GoogleCloudAiplatformV1beta1EnvVarArray{
    						&aiplatform.GoogleCloudAiplatformV1beta1EnvVarArgs{
    							Name:  pulumi.String("string"),
    							Value: pulumi.String("string"),
    						},
    					},
    				},
    				ReplicaCount: pulumi.String("string"),
    			},
    		},
    		PersistentResourceId: pulumi.String("string"),
    		EnableWebAccess:      pulumi.Bool(false),
    		Experiment:           pulumi.String("string"),
    		ExperimentRun:        pulumi.String("string"),
    		Network:              pulumi.String("string"),
    		BaseOutputDirectory: &aiplatform.GoogleCloudAiplatformV1beta1GcsDestinationArgs{
    			OutputUriPrefix: pulumi.String("string"),
    		},
    		ProtectedArtifactLocationId: pulumi.String("string"),
    		ReservedIpRanges: pulumi.StringArray{
    			pulumi.String("string"),
    		},
    		Scheduling: &aiplatform.GoogleCloudAiplatformV1beta1SchedulingArgs{
    			DisableRetries:            pulumi.Bool(false),
    			RestartJobOnWorkerRestart: pulumi.Bool(false),
    			Timeout:                   pulumi.String("string"),
    		},
    		ServiceAccount:        pulumi.String("string"),
    		Tensorboard:           pulumi.String("string"),
    		EnableDashboardAccess: pulumi.Bool(false),
    	},
    	EncryptionSpec: &aiplatform.GoogleCloudAiplatformV1beta1EncryptionSpecArgs{
    		KmsKeyName: pulumi.String("string"),
    	},
    	Labels: pulumi.StringMap{
    		"string": pulumi.String("string"),
    	},
    	Location: pulumi.String("string"),
    	Project:  pulumi.String("string"),
    })
    
    var google_nativeCustomJobResource = new CustomJob("google-nativeCustomJobResource", CustomJobArgs.builder()
        .displayName("string")
        .jobSpec(GoogleCloudAiplatformV1beta1CustomJobSpecArgs.builder()
            .workerPoolSpecs(GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs.builder()
                .containerSpec(GoogleCloudAiplatformV1beta1ContainerSpecArgs.builder()
                    .imageUri("string")
                    .args("string")
                    .command("string")
                    .env(GoogleCloudAiplatformV1beta1EnvVarArgs.builder()
                        .name("string")
                        .value("string")
                        .build())
                    .build())
                .diskSpec(GoogleCloudAiplatformV1beta1DiskSpecArgs.builder()
                    .bootDiskSizeGb(0)
                    .bootDiskType("string")
                    .build())
                .machineSpec(GoogleCloudAiplatformV1beta1MachineSpecArgs.builder()
                    .acceleratorCount(0)
                    .acceleratorType("ACCELERATOR_TYPE_UNSPECIFIED")
                    .machineType("string")
                    .tpuTopology("string")
                    .build())
                .nfsMounts(GoogleCloudAiplatformV1beta1NfsMountArgs.builder()
                    .mountPoint("string")
                    .path("string")
                    .server("string")
                    .build())
                .pythonPackageSpec(GoogleCloudAiplatformV1beta1PythonPackageSpecArgs.builder()
                    .executorImageUri("string")
                    .packageUris("string")
                    .pythonModule("string")
                    .args("string")
                    .env(GoogleCloudAiplatformV1beta1EnvVarArgs.builder()
                        .name("string")
                        .value("string")
                        .build())
                    .build())
                .replicaCount("string")
                .build())
            .persistentResourceId("string")
            .enableWebAccess(false)
            .experiment("string")
            .experimentRun("string")
            .network("string")
            .baseOutputDirectory(GoogleCloudAiplatformV1beta1GcsDestinationArgs.builder()
                .outputUriPrefix("string")
                .build())
            .protectedArtifactLocationId("string")
            .reservedIpRanges("string")
            .scheduling(GoogleCloudAiplatformV1beta1SchedulingArgs.builder()
                .disableRetries(false)
                .restartJobOnWorkerRestart(false)
                .timeout("string")
                .build())
            .serviceAccount("string")
            .tensorboard("string")
            .enableDashboardAccess(false)
            .build())
        .encryptionSpec(GoogleCloudAiplatformV1beta1EncryptionSpecArgs.builder()
            .kmsKeyName("string")
            .build())
        .labels(Map.of("string", "string"))
        .location("string")
        .project("string")
        .build());
    
    google_native_custom_job_resource = google_native.aiplatform.v1beta1.CustomJob("google-nativeCustomJobResource",
        display_name="string",
        job_spec={
            "worker_pool_specs": [{
                "container_spec": {
                    "image_uri": "string",
                    "args": ["string"],
                    "command": ["string"],
                    "env": [{
                        "name": "string",
                        "value": "string",
                    }],
                },
                "disk_spec": {
                    "boot_disk_size_gb": 0,
                    "boot_disk_type": "string",
                },
                "machine_spec": {
                    "accelerator_count": 0,
                    "accelerator_type": google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.ACCELERATOR_TYPE_UNSPECIFIED,
                    "machine_type": "string",
                    "tpu_topology": "string",
                },
                "nfs_mounts": [{
                    "mount_point": "string",
                    "path": "string",
                    "server": "string",
                }],
                "python_package_spec": {
                    "executor_image_uri": "string",
                    "package_uris": ["string"],
                    "python_module": "string",
                    "args": ["string"],
                    "env": [{
                        "name": "string",
                        "value": "string",
                    }],
                },
                "replica_count": "string",
            }],
            "persistent_resource_id": "string",
            "enable_web_access": False,
            "experiment": "string",
            "experiment_run": "string",
            "network": "string",
            "base_output_directory": {
                "output_uri_prefix": "string",
            },
            "protected_artifact_location_id": "string",
            "reserved_ip_ranges": ["string"],
            "scheduling": {
                "disable_retries": False,
                "restart_job_on_worker_restart": False,
                "timeout": "string",
            },
            "service_account": "string",
            "tensorboard": "string",
            "enable_dashboard_access": False,
        },
        encryption_spec={
            "kms_key_name": "string",
        },
        labels={
            "string": "string",
        },
        location="string",
        project="string")
    
    const google_nativeCustomJobResource = new google_native.aiplatform.v1beta1.CustomJob("google-nativeCustomJobResource", {
        displayName: "string",
        jobSpec: {
            workerPoolSpecs: [{
                containerSpec: {
                    imageUri: "string",
                    args: ["string"],
                    command: ["string"],
                    env: [{
                        name: "string",
                        value: "string",
                    }],
                },
                diskSpec: {
                    bootDiskSizeGb: 0,
                    bootDiskType: "string",
                },
                machineSpec: {
                    acceleratorCount: 0,
                    acceleratorType: google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
                    machineType: "string",
                    tpuTopology: "string",
                },
                nfsMounts: [{
                    mountPoint: "string",
                    path: "string",
                    server: "string",
                }],
                pythonPackageSpec: {
                    executorImageUri: "string",
                    packageUris: ["string"],
                    pythonModule: "string",
                    args: ["string"],
                    env: [{
                        name: "string",
                        value: "string",
                    }],
                },
                replicaCount: "string",
            }],
            persistentResourceId: "string",
            enableWebAccess: false,
            experiment: "string",
            experimentRun: "string",
            network: "string",
            baseOutputDirectory: {
                outputUriPrefix: "string",
            },
            protectedArtifactLocationId: "string",
            reservedIpRanges: ["string"],
            scheduling: {
                disableRetries: false,
                restartJobOnWorkerRestart: false,
                timeout: "string",
            },
            serviceAccount: "string",
            tensorboard: "string",
            enableDashboardAccess: false,
        },
        encryptionSpec: {
            kmsKeyName: "string",
        },
        labels: {
            string: "string",
        },
        location: "string",
        project: "string",
    });
    
    type: google-native:aiplatform/v1beta1:CustomJob
    properties:
        displayName: string
        encryptionSpec:
            kmsKeyName: string
        jobSpec:
            baseOutputDirectory:
                outputUriPrefix: string
            enableDashboardAccess: false
            enableWebAccess: false
            experiment: string
            experimentRun: string
            network: string
            persistentResourceId: string
            protectedArtifactLocationId: string
            reservedIpRanges:
                - string
            scheduling:
                disableRetries: false
                restartJobOnWorkerRestart: false
                timeout: string
            serviceAccount: string
            tensorboard: string
            workerPoolSpecs:
                - containerSpec:
                    args:
                        - string
                    command:
                        - string
                    env:
                        - name: string
                          value: string
                    imageUri: string
                  diskSpec:
                    bootDiskSizeGb: 0
                    bootDiskType: string
                  machineSpec:
                    acceleratorCount: 0
                    acceleratorType: ACCELERATOR_TYPE_UNSPECIFIED
                    machineType: string
                    tpuTopology: string
                  nfsMounts:
                    - mountPoint: string
                      path: string
                      server: string
                  pythonPackageSpec:
                    args:
                        - string
                    env:
                        - name: string
                          value: string
                    executorImageUri: string
                    packageUris:
                        - string
                    pythonModule: string
                  replicaCount: string
        labels:
            string: string
        location: string
        project: string
    
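    The reference examples above use "string" placeholders throughout, which can make the shape of a real job hard to see. As a rough sketch only, the Python helper below assembles the dictionary-literal form (which Python accepts in place of the argument classes) of a minimal single-node job spec; the image URI, bucket name, and machine type are hypothetical placeholders, not values taken from this reference.

    ```python
    # Illustrative sketch: build the dictionary-literal form of a minimal
    # single-node CustomJob spec. All concrete values here (image, machine
    # type, bucket) are hypothetical placeholders.
    def minimal_job_spec(image_uri: str, output_bucket: str,
                         machine_type: str = "n1-standard-4") -> dict:
        return {
            "worker_pool_specs": [{
                "machine_spec": {"machine_type": machine_type},
                "replica_count": "1",  # int64 fields are passed as strings
                "container_spec": {"image_uri": image_uri},
            }],
            "base_output_directory": {
                "output_uri_prefix": f"gs://{output_bucket}/custom-job-output",
            },
        }

    spec = minimal_job_spec("gcr.io/my-project/trainer:latest", "my-bucket")
    ```

    In a Pulumi program, a dictionary like this could be passed as the job_spec argument of the CustomJob constructor.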

    CustomJob Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.

    The CustomJob resource accepts the following input properties:

    DisplayName string
    The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    JobSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1CustomJobSpec
    Job spec.
    EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpec
    Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
    Labels Dictionary<string, string>
    The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    Location string
    Project string
    DisplayName string
    The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    JobSpec GoogleCloudAiplatformV1beta1CustomJobSpecArgs
    Job spec.
    EncryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpecArgs
    Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
    Labels map[string]string
    The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    Location string
    Project string
    displayName String
    The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    jobSpec GoogleCloudAiplatformV1beta1CustomJobSpec
    Job spec.
    encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec
    Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
    labels Map<String,String>
    The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location String
    project String
    displayName string
    The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    jobSpec GoogleCloudAiplatformV1beta1CustomJobSpec
    Job spec.
    encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec
    Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
    labels {[key: string]: string}
    The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location string
    project string
    display_name str
    The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    job_spec GoogleCloudAiplatformV1beta1CustomJobSpecArgs
    Job spec.
    encryption_spec GoogleCloudAiplatformV1beta1EncryptionSpecArgs
    Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
    labels Mapping[str, str]
    The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location str
    project str
    displayName String
    The display name of the CustomJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    jobSpec Property Map
    Job spec.
    encryptionSpec Property Map
    Customer-managed encryption key options for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.
    labels Map<String>
    The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    location String
    project String
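    The label constraints described above (at most 64 Unicode codepoints per key and value; lowercase letters, numeric characters, underscores, and dashes) can be checked locally before a deployment. The helper below is an illustrative sketch of that check, not part of the provider API.

    ```python
    import re

    # Illustrative sanity check of the label constraints described above;
    # this helper is not part of the provider API. \w matches Unicode
    # letters in Python 3, so international characters are allowed.
    _LABEL_RE = re.compile(r"^[\w-]*$")  # letters, digits, underscore, dash

    def is_valid_label(key: str, value: str) -> bool:
        for text in (key, value):
            if len(text) > 64:
                return False
            if not _LABEL_RE.fullmatch(text):
                return False
            if any(ch.isupper() for ch in text):  # only lowercase letters
                return False
        return True
    ```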

    Outputs

    All input properties are implicitly available as output properties. Additionally, the CustomJob resource produces the following output properties:

    CreateTime string
    Time when the CustomJob was created.
    EndTime string
    Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
    Error Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleRpcStatusResponse
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    Id string
    The provider-assigned unique ID for this managed resource.
    Name string
    Resource name of a CustomJob.
    StartTime string
    Time when the CustomJob first entered the JOB_STATE_RUNNING state.
    State string
    The detailed state of the job.
    UpdateTime string
    Time when the CustomJob was most recently updated.
    WebAccessUris Dictionary<string, string>
    URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
    CreateTime string
    Time when the CustomJob was created.
    EndTime string
    Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
    Error GoogleRpcStatusResponse
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    Id string
    The provider-assigned unique ID for this managed resource.
    Name string
    Resource name of a CustomJob.
    StartTime string
    Time when the CustomJob first entered the JOB_STATE_RUNNING state.
    State string
    The detailed state of the job.
    UpdateTime string
    Time when the CustomJob was most recently updated.
    WebAccessUris map[string]string
    URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
    createTime String
    Time when the CustomJob was created.
    endTime String
    Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
    error GoogleRpcStatusResponse
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    id String
    The provider-assigned unique ID for this managed resource.
    name String
    Resource name of a CustomJob.
    startTime String
    Time when the CustomJob first entered the JOB_STATE_RUNNING state.
    state String
    The detailed state of the job.
    updateTime String
    Time when the CustomJob was most recently updated.
    webAccessUris Map<String,String>
    URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
    createTime string
    Time when the CustomJob was created.
    endTime string
    Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
    error GoogleRpcStatusResponse
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    id string
    The provider-assigned unique ID for this managed resource.
    name string
    Resource name of a CustomJob.
    startTime string
    Time when the CustomJob first entered the JOB_STATE_RUNNING state.
    state string
    The detailed state of the job.
    updateTime string
    Time when the CustomJob was most recently updated.
    webAccessUris {[key: string]: string}
    URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
    create_time str
    Time when the CustomJob was created.
    end_time str
    Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
    error GoogleRpcStatusResponse
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    id str
    The provider-assigned unique ID for this managed resource.
    name str
    Resource name of a CustomJob.
    start_time str
    Time when the CustomJob first entered the JOB_STATE_RUNNING state.
    state str
    The detailed state of the job.
    update_time str
    Time when the CustomJob was most recently updated.
    web_access_uris Mapping[str, str]
    URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
    createTime String
    Time when the CustomJob was created.
    endTime String
    Time when the CustomJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
    error Property Map
    Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
    id String
    The provider-assigned unique ID for this managed resource.
    name String
    Resource name of a CustomJob.
    startTime String
    Time when the CustomJob first entered the JOB_STATE_RUNNING state.
    state String
    The detailed state of the job.
    updateTime String
    Time when the CustomJob was most recently updated.
    webAccessUris Map<String>
    URIs for accessing interactive shells (one URI for each training node). Only available if job_spec.enable_web_access is true. The keys are names of each node in the training job; for example, workerpool0-0 for the primary node, workerpool1-0 for the first node in the second worker pool, and workerpool1-1 for the second node in the second worker pool. The values are the URIs for each node's interactive shell.
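As the webAccessUris description above notes, map keys follow the workerpool{pool}-{replica} pattern. A minimal sketch (plain Python, no Pulumi dependency; the URIs below are placeholders, not real endpoints) that splits such a key back into its pool and replica indices:

```python
import re

def parse_web_access_key(key):
    """Split a web_access_uris key such as 'workerpool1-0' into
    (worker pool index, replica index within that pool)."""
    m = re.fullmatch(r"workerpool(\d+)-(\d+)", key)
    if m is None:
        raise ValueError(f"unexpected key format: {key!r}")
    return int(m.group(1)), int(m.group(2))

# A map shaped like the webAccessUris output described above.
web_access_uris = {
    "workerpool0-0": "https://example.invalid/shell/primary",
    "workerpool1-1": "https://example.invalid/shell/worker",
}

for key, uri in web_access_uris.items():
    pool, replica = parse_web_access_key(key)
    print(f"{key}: pool {pool}, replica {replica} -> {uri}")
```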

    Supporting Types

    GoogleCloudAiplatformV1beta1ContainerSpec, GoogleCloudAiplatformV1beta1ContainerSpecArgs

    ImageUri string
    The URI of a container image in the Container Registry that is to be run on each worker replica.
    Args List<string>
    The arguments to be passed when starting the container.
    Command List<string>
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVar>
    Environment variables to be passed to the container. Maximum limit is 100.
    ImageUri string
    The URI of a container image in the Container Registry that is to be run on each worker replica.
    Args []string
    The arguments to be passed when starting the container.
    Command []string
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    Env []GoogleCloudAiplatformV1beta1EnvVar
    Environment variables to be passed to the container. Maximum limit is 100.
    imageUri String
    The URI of a container image in the Container Registry that is to be run on each worker replica.
    args List<String>
    The arguments to be passed when starting the container.
    command List<String>
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    env List<GoogleCloudAiplatformV1beta1EnvVar>
    Environment variables to be passed to the container. Maximum limit is 100.
    imageUri string
    The URI of a container image in the Container Registry that is to be run on each worker replica.
    args string[]
    The arguments to be passed when starting the container.
    command string[]
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    env GoogleCloudAiplatformV1beta1EnvVar[]
    Environment variables to be passed to the container. Maximum limit is 100.
    image_uri str
    The URI of a container image in the Container Registry that is to be run on each worker replica.
    args Sequence[str]
    The arguments to be passed when starting the container.
    command Sequence[str]
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    env Sequence[GoogleCloudAiplatformV1beta1EnvVar]
    Environment variables to be passed to the container. Maximum limit is 100.
    imageUri String
    The URI of a container image in the Container Registry that is to be run on each worker replica.
    args List<String>
    The arguments to be passed when starting the container.
    command List<String>
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    env List<Property Map>
    Environment variables to be passed to the container. Maximum limit is 100.

    GoogleCloudAiplatformV1beta1ContainerSpecResponse, GoogleCloudAiplatformV1beta1ContainerSpecResponseArgs

    Args List<string>
    The arguments to be passed when starting the container.
    Command List<string>
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarResponse>
    Environment variables to be passed to the container. Maximum limit is 100.
    ImageUri string
    The URI of a container image in the Container Registry that is to be run on each worker replica.
    Args []string
    The arguments to be passed when starting the container.
    Command []string
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    Env []GoogleCloudAiplatformV1beta1EnvVarResponse
    Environment variables to be passed to the container. Maximum limit is 100.
    ImageUri string
    The URI of a container image in the Container Registry that is to be run on each worker replica.
    args List<String>
    The arguments to be passed when starting the container.
    command List<String>
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    env List<GoogleCloudAiplatformV1beta1EnvVarResponse>
    Environment variables to be passed to the container. Maximum limit is 100.
    imageUri String
    The URI of a container image in the Container Registry that is to be run on each worker replica.
    args string[]
    The arguments to be passed when starting the container.
    command string[]
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    env GoogleCloudAiplatformV1beta1EnvVarResponse[]
    Environment variables to be passed to the container. Maximum limit is 100.
    imageUri string
    The URI of a container image in the Container Registry that is to be run on each worker replica.
    args Sequence[str]
    The arguments to be passed when starting the container.
    command Sequence[str]
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    env Sequence[GoogleCloudAiplatformV1beta1EnvVarResponse]
    Environment variables to be passed to the container. Maximum limit is 100.
    image_uri str
    The URI of a container image in the Container Registry that is to be run on each worker replica.
    args List<String>
    The arguments to be passed when starting the container.
    command List<String>
    The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
    env List<Property Map>
    Environment variables to be passed to the container. Maximum limit is 100.
    imageUri String
    The URI of a container image in the Container Registry that is to be run on each worker replica.
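The container spec fields above can be sketched as a plain dict in the Python (snake_case) shape, with the documented 100-entry cap on env enforced. The image URI and trainer module names below are hypothetical, for illustration only:

```python
def make_container_spec(image_uri, command=None, args=None, env=None):
    """Build a dict shaped like GoogleCloudAiplatformV1beta1ContainerSpec,
    using the snake_case field names from the Python listing above."""
    env = env or []
    # The docs above state env supports at most 100 variables.
    if len(env) > 100:
        raise ValueError("env supports at most 100 variables")
    spec = {"image_uri": image_uri}
    if command:
        spec["command"] = command  # overrides the image's entrypoint
    if args:
        spec["args"] = args  # passed when starting the container
    if env:
        spec["env"] = env
    return spec

spec = make_container_spec(
    "gcr.io/my-project/trainer:latest",  # hypothetical image URI
    command=["python", "-m", "trainer.task"],
    args=["--epochs", "10"],
    env=[{"name": "MODE", "value": "train"}],
)
```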

    GoogleCloudAiplatformV1beta1CustomJobSpec, GoogleCloudAiplatformV1beta1CustomJobSpecArgs

    WorkerPoolSpecs List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1WorkerPoolSpec>
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
    BaseOutputDirectory Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestination
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial's id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables are passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    EnableDashboardAccess bool
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    EnableWebAccess bool
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    Experiment string
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    ExperimentRun string
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    Network string
    Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    PersistentResourceId string
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job runs on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    ProtectedArtifactLocationId string
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    ReservedIpRanges List<string>
    Optional. A list of names of reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    Scheduling Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1Scheduling
    Scheduling options for a CustomJob.
    ServiceAccount string
    Specifies the service account to be used as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    Tensorboard string
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    WorkerPoolSpecs []GoogleCloudAiplatformV1beta1WorkerPoolSpec
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
    BaseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestination
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial's id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables are passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    EnableDashboardAccess bool
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    EnableWebAccess bool
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    Experiment string
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    ExperimentRun string
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    Network string
    Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    PersistentResourceId string
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job runs on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    ProtectedArtifactLocationId string
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    ReservedIpRanges []string
    Optional. A list of names of reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    Scheduling GoogleCloudAiplatformV1beta1Scheduling
    Scheduling options for a CustomJob.
    ServiceAccount string
    Specifies the service account to be used as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    Tensorboard string
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    workerPoolSpecs List<GoogleCloudAiplatformV1beta1WorkerPoolSpec>
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
    baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestination
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial's id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables are passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    enableDashboardAccess Boolean
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    enableWebAccess Boolean
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    experiment String
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    experimentRun String
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    network String
    Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    persistentResourceId String
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job runs on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    protectedArtifactLocationId String
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    reservedIpRanges List<String>
    Optional. A list of names of reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    scheduling GoogleCloudAiplatformV1beta1Scheduling
    Scheduling options for a CustomJob.
    serviceAccount String
    Specifies the service account to be used as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    tensorboard String
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    workerPoolSpecs GoogleCloudAiplatformV1beta1WorkerPoolSpec[]
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
    baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestination
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial's id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables are passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    enableDashboardAccess boolean
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    enableWebAccess boolean
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    experiment string
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    experimentRun string
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    network string
    Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    persistentResourceId string
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job runs on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    protectedArtifactLocationId string
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    reservedIpRanges string[]
    Optional. A list of names of reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    scheduling GoogleCloudAiplatformV1beta1Scheduling
    Scheduling options for a CustomJob.
    serviceAccount string
    Specifies the service account to be used as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    tensorboard string
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    worker_pool_specs Sequence[GoogleCloudAiplatformV1beta1WorkerPoolSpec]
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
    base_output_directory GoogleCloudAiplatformV1beta1GcsDestination
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial's id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables are passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    enable_dashboard_access bool
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    enable_web_access bool
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    experiment str
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    experiment_run str
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    network str
    Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    persistent_resource_id str
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job runs on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    protected_artifact_location_id str
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    reserved_ip_ranges Sequence[str]
    Optional. A list of names of reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    scheduling GoogleCloudAiplatformV1beta1Scheduling
    Scheduling options for a CustomJob.
    service_account str
    Specifies the service account to be used as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    tensorboard str
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    workerPoolSpecs List<Property Map>
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
    baseOutputDirectory Property Map
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial's id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables are passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    enableDashboardAccess Boolean
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    enableWebAccess Boolean
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    experiment String
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    experimentRun String
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    network String
    Optional. The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}. Where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    persistentResourceId String
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job runs on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    protectedArtifactLocationId String
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    reservedIpRanges List<String>
    Optional. A list of names of reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP range under the provided VPC network. Example: ['vertex-ai-ip-range'].
    scheduling Property Map
    Scheduling options for a CustomJob.
    serviceAccount String
    Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    tensorboard String
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
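    The experiment and experimentRun fields above expect fully qualified MetadataStore context names. The following helpers are illustrative (they are not part of the Pulumi SDK) and simply assemble those resource names from their components:

    ```python
    # Hypothetical helpers that build the resource names expected by the
    # `experiment` and `experimentRun` fields; not part of the Pulumi SDK.

    def experiment_context(project: str, location: str,
                           metadata_store: str, experiment: str) -> str:
        # Format: projects/{project}/locations/{location}
        #         /metadataStores/{metadataStore}/contexts/{experiment-name}
        return (f"projects/{project}/locations/{location}"
                f"/metadataStores/{metadata_store}/contexts/{experiment}")

    def experiment_run_context(project: str, location: str, metadata_store: str,
                               experiment: str, run: str) -> str:
        # The run context is the experiment context with the run name suffixed:
        # .../contexts/{experiment-name}-{experiment-run-name}
        return experiment_context(project, location, metadata_store,
                                  f"{experiment}-{run}")
    ```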

    GoogleCloudAiplatformV1beta1CustomJobSpecResponse, GoogleCloudAiplatformV1beta1CustomJobSpecResponseArgs

    BaseOutputDirectory Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestinationResponse
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For a HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    EnableDashboardAccess bool
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    EnableWebAccess bool
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    Experiment string
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    ExperimentRun string
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    Network string
    Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    PersistentResourceId string
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will run on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    ProtectedArtifactLocationId string
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    ReservedIpRanges List<string>
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    Scheduling Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SchedulingResponse
    Scheduling options for a CustomJob.
    ServiceAccount string
    Specifies the service account to use as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    Tensorboard string
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    WorkerPoolSpecs List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse>
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
    BaseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestinationResponse
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For a HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    EnableDashboardAccess bool
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    EnableWebAccess bool
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    Experiment string
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    ExperimentRun string
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    Network string
    Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    PersistentResourceId string
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will run on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    ProtectedArtifactLocationId string
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    ReservedIpRanges []string
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    Scheduling GoogleCloudAiplatformV1beta1SchedulingResponse
    Scheduling options for a CustomJob.
    ServiceAccount string
    Specifies the service account to use as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    Tensorboard string
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    WorkerPoolSpecs []GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
    baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestinationResponse
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For a HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    enableDashboardAccess Boolean
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    enableWebAccess Boolean
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    experiment String
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    experimentRun String
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    network String
    Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    persistentResourceId String
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will run on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    protectedArtifactLocationId String
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    reservedIpRanges List<String>
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    scheduling GoogleCloudAiplatformV1beta1SchedulingResponse
    Scheduling options for a CustomJob.
    serviceAccount String
    Specifies the service account to use as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    tensorboard String
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    workerPoolSpecs List<GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse>
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
    baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestinationResponse
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For a HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    enableDashboardAccess boolean
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    enableWebAccess boolean
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    experiment string
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    experimentRun string
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    network string
    Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    persistentResourceId string
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will run on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    protectedArtifactLocationId string
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    reservedIpRanges string[]
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    scheduling GoogleCloudAiplatformV1beta1SchedulingResponse
    Scheduling options for a CustomJob.
    serviceAccount string
    Specifies the service account to use as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    tensorboard string
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    workerPoolSpecs GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse[]
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
    base_output_directory GoogleCloudAiplatformV1beta1GcsDestinationResponse
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For a HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    enable_dashboard_access bool
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    enable_web_access bool
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    experiment str
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    experiment_run str
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    network str
    Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    persistent_resource_id str
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will run on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    protected_artifact_location_id str
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    reserved_ip_ranges Sequence[str]
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    scheduling GoogleCloudAiplatformV1beta1SchedulingResponse
    Scheduling options for a CustomJob.
    service_account str
    Specifies the service account to use as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    tensorboard str
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    worker_pool_specs Sequence[GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse]
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
    baseOutputDirectory Property Map
    The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For a HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory named after the Trial id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or Python modules when this field is set: For CustomJob: * AIP_MODEL_DIR = <base_output_directory>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = <base_output_directory>/<trial_id>/model/ * AIP_CHECKPOINT_DIR = <base_output_directory>/<trial_id>/checkpoints/ * AIP_TENSORBOARD_LOG_DIR = <base_output_directory>/<trial_id>/logs/
    enableDashboardAccess Boolean
    Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    enableWebAccess Boolean
    Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
    experiment String
    Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
    experimentRun String
    Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
    network String
    Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
    persistentResourceId String
    Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will run on existing machines held by the PersistentResource instead of on-demand, short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource; otherwise, the job will be rejected.
    protectedArtifactLocationId String
    The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different from the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
    reservedIpRanges List<String>
    Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job is deployed within the provided IP ranges; otherwise, it is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
    scheduling Property Map
    Scheduling options for a CustomJob.
    serviceAccount String
    Specifies the service account to use as the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
    tensorboard String
    Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
    workerPoolSpecs List<Property Map>
    The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
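    The baseOutputDirectory description above implies a simple mapping from the configured Cloud Storage directory to the AIP_* environment variables Vertex AI injects. The sketch below illustrates that mapping; it is an assumption drawn from the field description, not an official SDK helper, and the trial_id parameter applies only to a CustomJob backing a HyperparameterTuningJob Trial:

    ```python
    from typing import Optional

    # Sketch of the AIP_* variables derived from baseOutputDirectory,
    # per the field description above (illustrative, not part of the SDK).
    def aip_env(base_output_directory: str,
                trial_id: Optional[str] = None) -> dict:
        base = base_output_directory.rstrip("/")
        if trial_id is not None:
            # CustomJob backing a Trial: outputs go under a per-trial subdirectory.
            base = f"{base}/{trial_id}"
        return {
            "AIP_MODEL_DIR": f"{base}/model/",
            "AIP_CHECKPOINT_DIR": f"{base}/checkpoints/",
            "AIP_TENSORBOARD_LOG_DIR": f"{base}/logs/",
        }
    ```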

    GoogleCloudAiplatformV1beta1DiskSpec, GoogleCloudAiplatformV1beta1DiskSpecArgs

    BootDiskSizeGb int
    Size in GB of the boot disk (default is 100GB).
    BootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    BootDiskSizeGb int
    Size in GB of the boot disk (default is 100GB).
    BootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb Integer
    Size in GB of the boot disk (default is 100GB).
    bootDiskType String
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb number
    Size in GB of the boot disk (default is 100GB).
    bootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    boot_disk_size_gb int
    Size in GB of the boot disk (default is 100GB).
    boot_disk_type str
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb Number
    Size in GB of the boot disk (default is 100GB).
    bootDiskType String
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
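    The DiskSpec defaults and valid values above can be captured in a small helper. This is a hypothetical validation function for illustration only, not part of the Pulumi SDK:

    ```python
    # Valid boot disk types per the DiskSpec description above.
    VALID_BOOT_DISK_TYPES = {"pd-ssd", "pd-standard"}

    def disk_spec(boot_disk_size_gb: int = 100,
                  boot_disk_type: str = "pd-ssd") -> dict:
        """Build a DiskSpec-shaped dict, applying the documented defaults."""
        if boot_disk_type not in VALID_BOOT_DISK_TYPES:
            raise ValueError(f"invalid boot_disk_type: {boot_disk_type!r}")
        if boot_disk_size_gb <= 0:
            raise ValueError("boot_disk_size_gb must be positive")
        return {"bootDiskSizeGb": boot_disk_size_gb,
                "bootDiskType": boot_disk_type}
    ```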

    GoogleCloudAiplatformV1beta1DiskSpecResponse, GoogleCloudAiplatformV1beta1DiskSpecResponseArgs

    BootDiskSizeGb int
    Size in GB of the boot disk (default is 100GB).
    BootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    BootDiskSizeGb int
    Size in GB of the boot disk (default is 100GB).
    BootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb Integer
    Size in GB of the boot disk (default is 100GB).
    bootDiskType String
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb number
    Size in GB of the boot disk (default is 100GB).
    bootDiskType string
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    boot_disk_size_gb int
    Size in GB of the boot disk (default is 100GB).
    boot_disk_type str
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
    bootDiskSizeGb Number
    Size in GB of the boot disk (default is 100GB).
    bootDiskType String
    Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).

    GoogleCloudAiplatformV1beta1EncryptionSpec, GoogleCloudAiplatformV1beta1EncryptionSpecArgs

    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kms_key_name str
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
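    The kmsKeyName field expects a full Cloud KMS resource name. The check below is an illustrative client-side sanity test of the documented shape (an assumption, not a format enforced by this API):

    ```python
    import re

    # Shape described above:
    # projects/{project}/locations/{region}/keyRings/{ring}/cryptoKeys/{key}
    _KMS_KEY_RE = re.compile(
        r"^projects/[^/]+/locations/[^/]+/keyRings/[^/]+/cryptoKeys/[^/]+$"
    )

    def is_valid_kms_key_name(name: str) -> bool:
        """Return True if name matches the documented KMS key resource shape."""
        return _KMS_KEY_RE.fullmatch(name) is not None
    ```

    Remember that, as noted above, the key must live in the same region as the compute resource it protects; the regex checks only the name's shape, not its region.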

    GoogleCloudAiplatformV1beta1EncryptionSpecResponse, GoogleCloudAiplatformV1beta1EncryptionSpecResponseArgs

    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kms_key_name str
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.

    GoogleCloudAiplatformV1beta1EnvVar, GoogleCloudAiplatformV1beta1EnvVarArgs

    Name string
    Name of the environment variable. Must be a valid C identifier.
    Value string
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.
    Name string
    Name of the environment variable. Must be a valid C identifier.
    Value string
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.
    name String
    Name of the environment variable. Must be a valid C identifier.
    value String
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.
    name string
    Name of the environment variable. Must be a valid C identifier.
    value string
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.
    name str
    Name of the environment variable. Must be a valid C identifier.
    value str
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.
    name String
    Name of the environment variable. Must be a valid C identifier.
    value String
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.
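    The $(VAR_NAME) expansion rules described above (Kubernetes-style semantics) can be sketched in plain Python. This is an illustration of the documented behavior, not code from the SDK or the service:

    ```python
    import re

    def expand_env_value(value: str, defined: dict) -> str:
        """Sketch of the documented $(VAR_NAME) expansion rules:
        - $(NAME) is replaced with a previously defined variable's value;
        - unresolved references are left unchanged;
        - $$(NAME) is an escaped reference and is never expanded
          (one '$' is consumed, leaving a literal $(NAME))."""
        def repl(m: re.Match) -> str:
            if m.group(0).startswith("$$"):
                return m.group(0)[1:]                 # escaped: drop one '$', no expansion
            return defined.get(m.group("name"), m.group(0))  # unresolved -> unchanged
        return re.sub(r"\$?\$\((?P<name>[A-Za-z_][A-Za-z0-9_]*)\)", repl, value)
    ```

    For example, with HOME already defined, "$(HOME)/bin" expands to its value plus "/bin", while "$$(HOME)" stays a literal "$(HOME)".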

    GoogleCloudAiplatformV1beta1EnvVarResponse, GoogleCloudAiplatformV1beta1EnvVarResponseArgs

    Name string
    Name of the environment variable. Must be a valid C identifier.
    Value string
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.
    Name string
    Name of the environment variable. Must be a valid C identifier.
    Value string
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.
    name String
    Name of the environment variable. Must be a valid C identifier.
    value String
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.
    name string
    Name of the environment variable. Must be a valid C identifier.
    value string
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.
    name str
    Name of the environment variable. Must be a valid C identifier.
    value str
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.
    name String
    Name of the environment variable. Must be a valid C identifier.
    value String
    Variables that reference a $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be left unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, i.e. $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists.

    GoogleCloudAiplatformV1beta1GcsDestination, GoogleCloudAiplatformV1beta1GcsDestinationArgs

    OutputUriPrefix string
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    OutputUriPrefix string
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix String
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix string
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    output_uri_prefix str
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix String
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
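    The trailing-slash rule above is applied server-side; a one-line helper makes the normalization explicit if you want predictable URIs in your own code (illustrative only):

    ```python
    def normalize_output_uri_prefix(uri: str) -> str:
        """Mirror the documented behavior: append '/' if the URI doesn't end with one.
        The service does this automatically; this helper just illustrates the rule."""
        return uri if uri.endswith("/") else uri + "/"
    ```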

    GoogleCloudAiplatformV1beta1GcsDestinationResponse, GoogleCloudAiplatformV1beta1GcsDestinationResponseArgs

    OutputUriPrefix string
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    OutputUriPrefix string
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix String
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix string
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    output_uri_prefix str
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
    outputUriPrefix String
    Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.

    GoogleCloudAiplatformV1beta1MachineSpec, GoogleCloudAiplatformV1beta1MachineSpecArgs

    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType Pulumi.GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Integer
    The number of accelerators to attach to the machine.
    acceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount number
    The number of accelerators to attach to the machine.
    acceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    accelerator_count int
    The number of accelerators to attach to the machine.
    accelerator_type GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machine_type str
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpu_topology str
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Number
    The number of accelerators to attach to the machine.
    acceleratorType "ACCELERATOR_TYPE_UNSPECIFIED" | "NVIDIA_TESLA_K80" | "NVIDIA_TESLA_P100" | "NVIDIA_TESLA_V100" | "NVIDIA_TESLA_P4" | "NVIDIA_TESLA_T4" | "NVIDIA_TESLA_A100" | "NVIDIA_A100_80GB" | "NVIDIA_L4" | "NVIDIA_H100_80GB" | "TPU_V2" | "TPU_V3" | "TPU_V4_POD" | "TPU_V5_LITEPOD"
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
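    Assembling these fields into the API's JSON shape can be sketched as follows. The helper function is hypothetical (not part of the SDK); only the field names come from the property listing above:

    ```python
    from typing import Optional

    def machine_spec(machine_type: str,
                     accelerator_type: Optional[str] = None,
                     accelerator_count: int = 0,
                     tpu_topology: Optional[str] = None) -> dict:
        """Assemble a MachineSpec dict using the API's JSON field names.
        Optional fields are omitted when unset."""
        spec = {"machineType": machine_type}
        if accelerator_type:
            spec["acceleratorType"] = accelerator_type
            spec["acceleratorCount"] = accelerator_count
        if tpu_topology:
            spec["tpuTopology"] = tpu_topology
        return spec
    ```

    For example, a T4-equipped worker would be machine_spec("n1-standard-4", accelerator_type="NVIDIA_TESLA_T4", accelerator_count=1).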

    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType, GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeArgs

    AcceleratorTypeUnspecified
    ACCELERATOR_TYPE_UNSPECIFIED - Unspecified accelerator type, which means no accelerator.
    NvidiaTeslaK80
    NVIDIA_TESLA_K80 - Nvidia Tesla K80 GPU.
    NvidiaTeslaP100
    NVIDIA_TESLA_P100 - Nvidia Tesla P100 GPU.
    NvidiaTeslaV100
    NVIDIA_TESLA_V100 - Nvidia Tesla V100 GPU.
    NvidiaTeslaP4
    NVIDIA_TESLA_P4 - Nvidia Tesla P4 GPU.
    NvidiaTeslaT4
    NVIDIA_TESLA_T4 - Nvidia Tesla T4 GPU.
    NvidiaTeslaA100
    NVIDIA_TESLA_A100 - Nvidia Tesla A100 GPU.
    NvidiaA10080gb
    NVIDIA_A100_80GB - Nvidia A100 80GB GPU.
    NvidiaL4
    NVIDIA_L4 - Nvidia L4 GPU.
    NvidiaH10080gb
    NVIDIA_H100_80GB - Nvidia H100 80GB GPU.
    TpuV2
    TPU_V2 - TPU v2.
    TpuV3
    TPU_V3 - TPU v3.
    TpuV4Pod
    TPU_V4_POD - TPU v4.
    TpuV5Litepod
    TPU_V5_LITEPOD - TPU v5.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeAcceleratorTypeUnspecified
    ACCELERATOR_TYPE_UNSPECIFIED - Unspecified accelerator type, which means no accelerator.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaK80
    NVIDIA_TESLA_K80 - Nvidia Tesla K80 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaP100
    NVIDIA_TESLA_P100 - Nvidia Tesla P100 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaV100
    NVIDIA_TESLA_V100 - Nvidia Tesla V100 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaP4
    NVIDIA_TESLA_P4 - Nvidia Tesla P4 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaT4
    NVIDIA_TESLA_T4 - Nvidia Tesla T4 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaTeslaA100
    NVIDIA_TESLA_A100 - Nvidia Tesla A100 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaA10080gb
    NVIDIA_A100_80GB - Nvidia A100 80GB GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaL4
    NVIDIA_L4 - Nvidia L4 GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeNvidiaH10080gb
    NVIDIA_H100_80GB - Nvidia H100 80GB GPU.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV2
    TPU_V2 - TPU v2.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV3
    TPU_V3 - TPU v3.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV4Pod
    TPU_V4_POD - TPU v4.
    GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeTpuV5Litepod
    TPU_V5_LITEPOD - TPU v5.
    AcceleratorTypeUnspecified
    ACCELERATOR_TYPE_UNSPECIFIED - Unspecified accelerator type, which means no accelerator.
    NvidiaTeslaK80
    NVIDIA_TESLA_K80 - Nvidia Tesla K80 GPU.
    NvidiaTeslaP100
    NVIDIA_TESLA_P100 - Nvidia Tesla P100 GPU.
    NvidiaTeslaV100
    NVIDIA_TESLA_V100 - Nvidia Tesla V100 GPU.
    NvidiaTeslaP4
    NVIDIA_TESLA_P4 - Nvidia Tesla P4 GPU.
    NvidiaTeslaT4
    NVIDIA_TESLA_T4 - Nvidia Tesla T4 GPU.
    NvidiaTeslaA100
    NVIDIA_TESLA_A100 - Nvidia Tesla A100 GPU.
    NvidiaA10080gb
    NVIDIA_A100_80GB - Nvidia A100 80GB GPU.
    NvidiaL4
    NVIDIA_L4 - Nvidia L4 GPU.
    NvidiaH10080gb
    NVIDIA_H100_80GB - Nvidia H100 80GB GPU.
    TpuV2
    TPU_V2 - TPU v2.
    TpuV3
    TPU_V3 - TPU v3.
    TpuV4Pod
    TPU_V4_POD - TPU v4.
    TpuV5Litepod
    TPU_V5_LITEPOD - TPU v5.
    AcceleratorTypeUnspecified
    ACCELERATOR_TYPE_UNSPECIFIED - Unspecified accelerator type, which means no accelerator.
    NvidiaTeslaK80
    NVIDIA_TESLA_K80 - Nvidia Tesla K80 GPU.
    NvidiaTeslaP100
    NVIDIA_TESLA_P100 - Nvidia Tesla P100 GPU.
    NvidiaTeslaV100
    NVIDIA_TESLA_V100 - Nvidia Tesla V100 GPU.
    NvidiaTeslaP4
    NVIDIA_TESLA_P4 - Nvidia Tesla P4 GPU.
    NvidiaTeslaT4
    NVIDIA_TESLA_T4 - Nvidia Tesla T4 GPU.
    NvidiaTeslaA100
    NVIDIA_TESLA_A100 - Nvidia Tesla A100 GPU.
    NvidiaA10080gb
    NVIDIA_A100_80GB - Nvidia A100 80GB GPU.
    NvidiaL4
    NVIDIA_L4 - Nvidia L4 GPU.
    NvidiaH10080gb
    NVIDIA_H100_80GB - Nvidia H100 80GB GPU.
    TpuV2
    TPU_V2 - TPU v2.
    TpuV3
    TPU_V3 - TPU v3.
    TpuV4Pod
    TPU_V4_POD - TPU v4.
    TpuV5Litepod
    TPU_V5_LITEPOD - TPU v5.
    ACCELERATOR_TYPE_UNSPECIFIED
    ACCELERATOR_TYPE_UNSPECIFIED - Unspecified accelerator type, which means no accelerator.
    NVIDIA_TESLA_K80
    NVIDIA_TESLA_K80 - Nvidia Tesla K80 GPU.
    NVIDIA_TESLA_P100
    NVIDIA_TESLA_P100 - Nvidia Tesla P100 GPU.
    NVIDIA_TESLA_V100
    NVIDIA_TESLA_V100 - Nvidia Tesla V100 GPU.
    NVIDIA_TESLA_P4
    NVIDIA_TESLA_P4 - Nvidia Tesla P4 GPU.
    NVIDIA_TESLA_T4
    NVIDIA_TESLA_T4 - Nvidia Tesla T4 GPU.
    NVIDIA_TESLA_A100
    NVIDIA_TESLA_A100 - Nvidia Tesla A100 GPU.
    NVIDIA_A10080GB
    NVIDIA_A100_80GB - Nvidia A100 80GB GPU.
    NVIDIA_L4
    NVIDIA_L4 - Nvidia L4 GPU.
    NVIDIA_H10080GB
    NVIDIA_H100_80GB - Nvidia H100 80GB GPU.
    TPU_V2
    TPU_V2 - TPU v2.
    TPU_V3
    TPU_V3 - TPU v3.
    TPU_V4_POD
    TPU_V4_POD - TPU v4.
    TPU_V5_LITEPOD
    TPU_V5_LITEPOD - TPU v5.
    "ACCELERATOR_TYPE_UNSPECIFIED"
    ACCELERATOR_TYPE_UNSPECIFIED - Unspecified accelerator type, which means no accelerator.
    "NVIDIA_TESLA_K80"
    NVIDIA_TESLA_K80 - Nvidia Tesla K80 GPU.
    "NVIDIA_TESLA_P100"
    NVIDIA_TESLA_P100 - Nvidia Tesla P100 GPU.
    "NVIDIA_TESLA_V100"
    NVIDIA_TESLA_V100 - Nvidia Tesla V100 GPU.
    "NVIDIA_TESLA_P4"
    NVIDIA_TESLA_P4 - Nvidia Tesla P4 GPU.
    "NVIDIA_TESLA_T4"
    NVIDIA_TESLA_T4 - Nvidia Tesla T4 GPU.
    "NVIDIA_TESLA_A100"
    NVIDIA_TESLA_A100 - Nvidia Tesla A100 GPU.
    "NVIDIA_A100_80GB"
    NVIDIA_A100_80GB - Nvidia A100 80GB GPU.
    "NVIDIA_L4"
    NVIDIA_L4 - Nvidia L4 GPU.
    "NVIDIA_H100_80GB"
    NVIDIA_H100_80GB - Nvidia H100 80GB GPU.
    "TPU_V2"
    TPU_V2 - TPU v2.
    "TPU_V3"
    TPU_V3 - TPU v3.
    "TPU_V4_POD"
    TPU_V4_POD - TPU v4.
    "TPU_V5_LITEPOD"
    TPU_V5_LITEPOD - TPU v5.
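    When building configurations from untyped input (e.g. YAML), it can help to validate accelerator types client-side against the wire values enumerated above. A minimal sketch (the helper itself is not part of the SDK):

    ```python
    # Wire values of the accelerator-type enum, exactly as enumerated above.
    ACCELERATOR_TYPES = frozenset({
        "ACCELERATOR_TYPE_UNSPECIFIED",
        "NVIDIA_TESLA_K80", "NVIDIA_TESLA_P100", "NVIDIA_TESLA_V100",
        "NVIDIA_TESLA_P4", "NVIDIA_TESLA_T4", "NVIDIA_TESLA_A100",
        "NVIDIA_A100_80GB", "NVIDIA_L4", "NVIDIA_H100_80GB",
        "TPU_V2", "TPU_V3", "TPU_V4_POD", "TPU_V5_LITEPOD",
    })

    def is_valid_accelerator_type(value: str) -> bool:
        """Check a raw string against the known wire values."""
        return value in ACCELERATOR_TYPES
    ```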

    GoogleCloudAiplatformV1beta1MachineSpecResponse, GoogleCloudAiplatformV1beta1MachineSpecResponseArgs

    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Integer
    The number of accelerators to attach to the machine.
    acceleratorType String
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount number
    The number of accelerators to attach to the machine.
    acceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    accelerator_count int
    The number of accelerators to attach to the machine.
    accelerator_type str
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machine_type str
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpu_topology str
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Number
    The number of accelerators to attach to the machine.
    acceleratorType String
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").

    GoogleCloudAiplatformV1beta1NfsMount, GoogleCloudAiplatformV1beta1NfsMountArgs

    MountPoint string
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    Path string
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    Server string
    IP address of the NFS server.
    MountPoint string
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    Path string
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    Server string
    IP address of the NFS server.
    mountPoint String
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    path String
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    server String
    IP address of the NFS server.
    mountPoint string
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    path string
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    server string
    IP address of the NFS server.
    mount_point str
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    path str
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    server str
    IP address of the NFS server.
    mountPoint String
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    path String
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    server String
    IP address of the NFS server.
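    The three fields above combine into a source mount (server:path) and a destination under /mnt/nfs/, as the descriptions state. A hypothetical helper illustrating that relationship (not part of the SDK):

    ```python
    def nfs_mount_paths(server: str, path: str, mount_point: str) -> tuple:
        """Derive the source (server:path) and destination (/mnt/nfs/<mount_point>)
        described in the NfsMount field documentation. Client-side illustration only."""
        if not path.startswith("/"):
            raise ValueError("path has to start with '/'")
        return f"{server}:{path}", f"/mnt/nfs/{mount_point}"
    ```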

    GoogleCloudAiplatformV1beta1NfsMountResponse, GoogleCloudAiplatformV1beta1NfsMountResponseArgs

    MountPoint string
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    Path string
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    Server string
    IP address of the NFS server.
    MountPoint string
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    Path string
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    Server string
    IP address of the NFS server.
    mountPoint String
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    path String
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    server String
    IP address of the NFS server.
    mountPoint string
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    path string
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    server string
    IP address of the NFS server.
    mount_point str
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    path str
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    server str
    IP address of the NFS server.
    mountPoint String
    Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
    path String
    Source path exported from the NFS server. It has to start with '/', and combined with the IP address it indicates the source mount path in the form of server:path.
    server String
    IP address of the NFS server.

    GoogleCloudAiplatformV1beta1PythonPackageSpec, GoogleCloudAiplatformV1beta1PythonPackageSpecArgs

    ExecutorImageUri string
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    PackageUris List<string>
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    PythonModule string
    The Python module name to run after installing the packages.
    Args List<string>
    Command line arguments to be passed to the Python task.
    Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVar>
    Environment variables to be passed to the Python module. Maximum limit is 100.
    ExecutorImageUri string
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    PackageUris []string
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    PythonModule string
    The Python module name to run after installing the packages.
    Args []string
    Command line arguments to be passed to the Python task.
    Env []GoogleCloudAiplatformV1beta1EnvVar
    Environment variables to be passed to the Python module. Maximum limit is 100.
    executorImageUri String
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    packageUris List<String>
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    pythonModule String
    The Python module name to run after installing the packages.
    args List<String>
    Command line arguments to be passed to the Python task.
    env List<GoogleCloudAiplatformV1beta1EnvVar>
    Environment variables to be passed to the Python module. Maximum limit is 100.
    executorImageUri string
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    packageUris string[]
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    pythonModule string
    The Python module name to run after installing the packages.
    args string[]
    Command line arguments to be passed to the Python task.
    env GoogleCloudAiplatformV1beta1EnvVar[]
    Environment variables to be passed to the Python module. Maximum limit is 100.
    executor_image_uri str
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    package_uris Sequence[str]
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    python_module str
    The Python module name to run after installing the packages.
    args Sequence[str]
    Command line arguments to be passed to the Python task.
    env Sequence[GoogleCloudAiplatformV1beta1EnvVar]
    Environment variables to be passed to the Python module. Maximum limit is 100.
    executorImageUri String
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    packageUris List<String>
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    pythonModule String
    The Python module name to run after installing the packages.
    args List<String>
    Command line arguments to be passed to the Python task.
    env List<Property Map>
    Environment variables to be passed to the Python module. The maximum number of environment variables is 100.

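    As a sketch, the fields above can be assembled into a pythonPackageSpec payload like the following plain mapping. The image URI, bucket path, and module name are made-up examples, not values from this page; the field names follow the camelCase form used in the tables.

    ```python
    # Hypothetical pythonPackageSpec payload (illustrative values only).
    python_package_spec = {
        # Must be one of the pre-built training container images.
        "executorImageUri": "us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-12.py310:latest",
        # Google Cloud Storage locations of the training package; at most 100 URIs.
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
        # Module to run after the packages are installed.
        "pythonModule": "trainer.task",
        # Command-line arguments forwarded to the module.
        "args": ["--epochs", "10"],
        # Environment variables for the module; at most 100 entries.
        "env": [{"name": "MODE", "value": "train"}],
    }

    # The documented limits can be checked before submitting the job:
    assert len(python_package_spec["packageUris"]) <= 100
    assert len(python_package_spec["env"]) <= 100
    ```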
    GoogleCloudAiplatformV1beta1PythonPackageSpecResponse, GoogleCloudAiplatformV1beta1PythonPackageSpecResponseArgs

    Args List<string>
    Command line arguments to be passed to the Python task.
    Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarResponse>
    Environment variables to be passed to the Python module. The maximum number of environment variables is 100.
    ExecutorImageUri string
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    PackageUris List<string>
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    PythonModule string
    The Python module name to run after installing the packages.
    Args []string
    Command line arguments to be passed to the Python task.
    Env []GoogleCloudAiplatformV1beta1EnvVarResponse
    Environment variables to be passed to the Python module. The maximum number of environment variables is 100.
    ExecutorImageUri string
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    PackageUris []string
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    PythonModule string
    The Python module name to run after installing the packages.
    args List<String>
    Command line arguments to be passed to the Python task.
    env List<GoogleCloudAiplatformV1beta1EnvVarResponse>
    Environment variables to be passed to the Python module. The maximum number of environment variables is 100.
    executorImageUri String
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    packageUris List<String>
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    pythonModule String
    The Python module name to run after installing the packages.
    args string[]
    Command line arguments to be passed to the Python task.
    env GoogleCloudAiplatformV1beta1EnvVarResponse[]
    Environment variables to be passed to the Python module. The maximum number of environment variables is 100.
    executorImageUri string
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    packageUris string[]
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    pythonModule string
    The Python module name to run after installing the packages.
    args Sequence[str]
    Command line arguments to be passed to the Python task.
    env Sequence[GoogleCloudAiplatformV1beta1EnvVarResponse]
    Environment variables to be passed to the Python module. The maximum number of environment variables is 100.
    executor_image_uri str
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    package_uris Sequence[str]
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    python_module str
    The Python module name to run after installing the packages.
    args List<String>
    Command line arguments to be passed to the Python task.
    env List<Property Map>
    Environment variables to be passed to the Python module. The maximum number of environment variables is 100.
    executorImageUri String
    The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
    packageUris List<String>
    The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
    pythonModule String
    The Python module name to run after installing the packages.

    GoogleCloudAiplatformV1beta1Scheduling, GoogleCloudAiplatformV1beta1SchedulingArgs

    DisableRetries bool
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    RestartJobOnWorkerRestart bool
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    Timeout string
    The maximum job running time. The default is 7 days.
    DisableRetries bool
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    RestartJobOnWorkerRestart bool
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    Timeout string
    The maximum job running time. The default is 7 days.
    disableRetries Boolean
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    restartJobOnWorkerRestart Boolean
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    timeout String
    The maximum job running time. The default is 7 days.
    disableRetries boolean
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    restartJobOnWorkerRestart boolean
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    timeout string
    The maximum job running time. The default is 7 days.
    disable_retries bool
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    restart_job_on_worker_restart bool
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    timeout str
    The maximum job running time. The default is 7 days.
    disableRetries Boolean
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    restartJobOnWorkerRestart Boolean
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    timeout String
    The maximum job running time. The default is 7 days.

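    As a sketch, the interaction described above between DisableRetries and RestartJobOnWorkerRestart can be expressed directly. The timeout is a duration string, so a 24-hour cap is written as "86400s"; the values here are illustrative.

    ```python
    # Hypothetical Scheduling payload. `timeout` is a duration string;
    # the default maximum job running time is 7 days ("604800s").
    scheduling = {
        "timeout": "86400s",               # cap the job at 24 hours
        "restartJobOnWorkerRestart": True,  # restart the whole job on worker restart
        "disableRetries": False,
    }

    # Per the field descriptions, disableRetries=True overrides
    # restartJobOnWorkerRestart to False:
    if scheduling["disableRetries"]:
        scheduling["restartJobOnWorkerRestart"] = False
    ```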
    GoogleCloudAiplatformV1beta1SchedulingResponse, GoogleCloudAiplatformV1beta1SchedulingResponseArgs

    DisableRetries bool
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    RestartJobOnWorkerRestart bool
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    Timeout string
    The maximum job running time. The default is 7 days.
    DisableRetries bool
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    RestartJobOnWorkerRestart bool
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    Timeout string
    The maximum job running time. The default is 7 days.
    disableRetries Boolean
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    restartJobOnWorkerRestart Boolean
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    timeout String
    The maximum job running time. The default is 7 days.
    disableRetries boolean
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    restartJobOnWorkerRestart boolean
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    timeout string
    The maximum job running time. The default is 7 days.
    disable_retries bool
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    restart_job_on_worker_restart bool
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    timeout str
    The maximum job running time. The default is 7 days.
    disableRetries Boolean
    Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
    restartJobOnWorkerRestart Boolean
    Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
    timeout String
    The maximum job running time. The default is 7 days.

    GoogleCloudAiplatformV1beta1WorkerPoolSpec, GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs

    ContainerSpec GoogleCloudAiplatformV1beta1ContainerSpec
    The custom container task.
    DiskSpec GoogleCloudAiplatformV1beta1DiskSpec
    Disk spec.
    MachineSpec GoogleCloudAiplatformV1beta1MachineSpec
    Optional. Immutable. The specification of a single machine.
    NfsMounts []GoogleCloudAiplatformV1beta1NfsMount
    Optional. List of NFS mount specs.
    PythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpec
    The Python packaged task.
    ReplicaCount string
    Optional. The number of worker replicas to use for this worker pool.
    containerSpec GoogleCloudAiplatformV1beta1ContainerSpec
    The custom container task.
    diskSpec GoogleCloudAiplatformV1beta1DiskSpec
    Disk spec.
    machineSpec GoogleCloudAiplatformV1beta1MachineSpec
    Optional. Immutable. The specification of a single machine.
    nfsMounts List<GoogleCloudAiplatformV1beta1NfsMount>
    Optional. List of NFS mount specs.
    pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpec
    The Python packaged task.
    replicaCount String
    Optional. The number of worker replicas to use for this worker pool.
    containerSpec GoogleCloudAiplatformV1beta1ContainerSpec
    The custom container task.
    diskSpec GoogleCloudAiplatformV1beta1DiskSpec
    Disk spec.
    machineSpec GoogleCloudAiplatformV1beta1MachineSpec
    Optional. Immutable. The specification of a single machine.
    nfsMounts GoogleCloudAiplatformV1beta1NfsMount[]
    Optional. List of NFS mount specs.
    pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpec
    The Python packaged task.
    replicaCount string
    Optional. The number of worker replicas to use for this worker pool.
    container_spec GoogleCloudAiplatformV1beta1ContainerSpec
    The custom container task.
    disk_spec GoogleCloudAiplatformV1beta1DiskSpec
    Disk spec.
    machine_spec GoogleCloudAiplatformV1beta1MachineSpec
    Optional. Immutable. The specification of a single machine.
    nfs_mounts Sequence[GoogleCloudAiplatformV1beta1NfsMount]
    Optional. List of NFS mount specs.
    python_package_spec GoogleCloudAiplatformV1beta1PythonPackageSpec
    The Python packaged task.
    replica_count str
    Optional. The number of worker replicas to use for this worker pool.
    containerSpec Property Map
    The custom container task.
    diskSpec Property Map
    Disk spec.
    machineSpec Property Map
    Optional. Immutable. The specification of a single machine.
    nfsMounts List<Property Map>
    Optional. List of NFS mount specs.
    pythonPackageSpec Property Map
    The Python packaged task.
    replicaCount String
    Optional. The number of worker replicas to use for this worker pool.

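    A worker pool combines a machine spec, a replica count, and a task definition (either a containerSpec or a pythonPackageSpec, not both). A minimal single-replica pool might look like the following sketch; the machine type, disk values, and package URIs are illustrative, and note that replicaCount is a string in this API.

    ```python
    # Hypothetical WorkerPoolSpec for a single-replica training pool.
    worker_pool_spec = {
        "machineSpec": {"machineType": "n1-standard-4"},
        "replicaCount": "1",  # a string, per the schema above
        "diskSpec": {"bootDiskType": "pd-ssd", "bootDiskSizeGb": 100},
        # The task: here a Python packaged task. A containerSpec could be
        # used instead; only one of the two should be set.
        "pythonPackageSpec": {
            "executorImageUri": "us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-12.py310:latest",
            "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
            "pythonModule": "trainer.task",
        },
    }

    # Exactly one task definition should be present:
    assert ("containerSpec" in worker_pool_spec) != ("pythonPackageSpec" in worker_pool_spec)
    ```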
    GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse, GoogleCloudAiplatformV1beta1WorkerPoolSpecResponseArgs

    ContainerSpec GoogleCloudAiplatformV1beta1ContainerSpecResponse
    The custom container task.
    DiskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
    Disk spec.
    MachineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
    Optional. Immutable. The specification of a single machine.
    NfsMounts []GoogleCloudAiplatformV1beta1NfsMountResponse
    Optional. List of NFS mount specs.
    PythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
    The Python packaged task.
    ReplicaCount string
    Optional. The number of worker replicas to use for this worker pool.
    containerSpec GoogleCloudAiplatformV1beta1ContainerSpecResponse
    The custom container task.
    diskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
    Disk spec.
    machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
    Optional. Immutable. The specification of a single machine.
    nfsMounts List<GoogleCloudAiplatformV1beta1NfsMountResponse>
    Optional. List of NFS mount specs.
    pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
    The Python packaged task.
    replicaCount String
    Optional. The number of worker replicas to use for this worker pool.
    containerSpec GoogleCloudAiplatformV1beta1ContainerSpecResponse
    The custom container task.
    diskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
    Disk spec.
    machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
    Optional. Immutable. The specification of a single machine.
    nfsMounts GoogleCloudAiplatformV1beta1NfsMountResponse[]
    Optional. List of NFS mount specs.
    pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
    The Python packaged task.
    replicaCount string
    Optional. The number of worker replicas to use for this worker pool.
    container_spec GoogleCloudAiplatformV1beta1ContainerSpecResponse
    The custom container task.
    disk_spec GoogleCloudAiplatformV1beta1DiskSpecResponse
    Disk spec.
    machine_spec GoogleCloudAiplatformV1beta1MachineSpecResponse
    Optional. Immutable. The specification of a single machine.
    nfs_mounts Sequence[GoogleCloudAiplatformV1beta1NfsMountResponse]
    Optional. List of NFS mount specs.
    python_package_spec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
    The Python packaged task.
    replica_count str
    Optional. The number of worker replicas to use for this worker pool.
    containerSpec Property Map
    The custom container task.
    diskSpec Property Map
    Disk spec.
    machineSpec Property Map
    Optional. Immutable. The specification of a single machine.
    nfsMounts List<Property Map>
    Optional. List of NFS mount specs.
    pythonPackageSpec Property Map
    The Python packaged task.
    replicaCount String
    Optional. The number of worker replicas to use for this worker pool.

    GoogleRpcStatusResponse, GoogleRpcStatusResponseArgs

    Code int
    The status code, which should be an enum value of google.rpc.Code.
    Details List<ImmutableDictionary<string, string>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    Message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    Code int
    The status code, which should be an enum value of google.rpc.Code.
    Details []map[string]string
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    Message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code Integer
    The status code, which should be an enum value of google.rpc.Code.
    details List<Map<String,String>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message String
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code number
    The status code, which should be an enum value of google.rpc.Code.
    details {[key: string]: string}[]
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message string
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code int
    The status code, which should be an enum value of google.rpc.Code.
    details Sequence[Mapping[str, str]]
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message str
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
    code Number
    The status code, which should be an enum value of google.rpc.Code.
    details List<Map<String>>
    A list of messages that carry the error details. There is a common set of message types for APIs to use.
    message String
    A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

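    A failed CustomJob reports its failure through a google.rpc.Status shaped like the table above. A sketch of inspecting such a status, with hypothetical values:

    ```python
    # Hypothetical google.rpc.Status as it might appear on a failed job.
    status = {
        "code": 3,  # numeric value of google.rpc.Code.INVALID_ARGUMENT
        "message": "Machine type is not supported in this region.",
        "details": [],  # optional structured error details
    }

    # A code of 0 (google.rpc.Code.OK) means no error occurred.
    job_failed = status["code"] != 0
    ```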
    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0