Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.aiplatform/v1beta1.NasJob
Creates a NasJob. Auto-naming is currently not supported for this resource.
Create NasJob Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new NasJob(name: string, args: NasJobArgs, opts?: CustomResourceOptions);
@overload
def NasJob(resource_name: str,
args: NasJobArgs,
opts: Optional[ResourceOptions] = None)
@overload
def NasJob(resource_name: str,
opts: Optional[ResourceOptions] = None,
display_name: Optional[str] = None,
nas_job_spec: Optional[GoogleCloudAiplatformV1beta1NasJobSpecArgs] = None,
enable_restricted_image_training: Optional[bool] = None,
encryption_spec: Optional[GoogleCloudAiplatformV1beta1EncryptionSpecArgs] = None,
labels: Optional[Mapping[str, str]] = None,
location: Optional[str] = None,
project: Optional[str] = None)
func NewNasJob(ctx *Context, name string, args NasJobArgs, opts ...ResourceOption) (*NasJob, error)
public NasJob(string name, NasJobArgs args, CustomResourceOptions? opts = null)
public NasJob(String name, NasJobArgs args)
public NasJob(String name, NasJobArgs args, CustomResourceOptions options)
type: google-native:aiplatform/v1beta1:NasJob
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args NasJobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args NasJobArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args NasJobArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args NasJobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args NasJobArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var google_nativeNasJobResource = new GoogleNative.Aiplatform.V1Beta1.NasJob("google-nativeNasJobResource", new()
{
DisplayName = "string",
NasJobSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NasJobSpecArgs
{
MultiTrialAlgorithmSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecArgs
{
SearchTrialSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpecArgs
{
MaxParallelTrialCount = 0,
MaxTrialCount = 0,
SearchTrialJobSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1CustomJobSpecArgs
{
WorkerPoolSpecs = new[]
{
new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs
{
ContainerSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ContainerSpecArgs
{
ImageUri = "string",
Args = new[]
{
"string",
},
Command = new[]
{
"string",
},
Env = new[]
{
new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarArgs
{
Name = "string",
Value = "string",
},
},
},
DiskSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpecArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
},
MachineSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecArgs
{
AcceleratorCount = 0,
AcceleratorType = GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
MachineType = "string",
TpuTopology = "string",
},
NfsMounts = new[]
{
new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NfsMountArgs
{
MountPoint = "string",
Path = "string",
Server = "string",
},
},
PythonPackageSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PythonPackageSpecArgs
{
ExecutorImageUri = "string",
PackageUris = new[]
{
"string",
},
PythonModule = "string",
Args = new[]
{
"string",
},
Env = new[]
{
new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarArgs
{
Name = "string",
Value = "string",
},
},
},
ReplicaCount = "string",
},
},
PersistentResourceId = "string",
EnableWebAccess = false,
Experiment = "string",
ExperimentRun = "string",
Network = "string",
BaseOutputDirectory = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestinationArgs
{
OutputUriPrefix = "string",
},
ProtectedArtifactLocationId = "string",
ReservedIpRanges = new[]
{
"string",
},
Scheduling = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SchedulingArgs
{
DisableRetries = false,
RestartJobOnWorkerRestart = false,
Timeout = "string",
},
ServiceAccount = "string",
Tensorboard = "string",
EnableDashboardAccess = false,
},
MaxFailedTrialCount = 0,
},
Metric = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecArgs
{
Goal = GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecGoal.GoalTypeUnspecified,
MetricId = "string",
},
MultiTrialAlgorithm = GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMultiTrialAlgorithm.MultiTrialAlgorithmUnspecified,
TrainTrialSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpecArgs
{
Frequency = 0,
MaxParallelTrialCount = 0,
TrainTrialJobSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1CustomJobSpecArgs
{
WorkerPoolSpecs = new[]
{
new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs
{
ContainerSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ContainerSpecArgs
{
ImageUri = "string",
Args = new[]
{
"string",
},
Command = new[]
{
"string",
},
Env = new[]
{
new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarArgs
{
Name = "string",
Value = "string",
},
},
},
DiskSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpecArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
},
MachineSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecArgs
{
AcceleratorCount = 0,
AcceleratorType = GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
MachineType = "string",
TpuTopology = "string",
},
NfsMounts = new[]
{
new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NfsMountArgs
{
MountPoint = "string",
Path = "string",
Server = "string",
},
},
PythonPackageSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PythonPackageSpecArgs
{
ExecutorImageUri = "string",
PackageUris = new[]
{
"string",
},
PythonModule = "string",
Args = new[]
{
"string",
},
Env = new[]
{
new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarArgs
{
Name = "string",
Value = "string",
},
},
},
ReplicaCount = "string",
},
},
PersistentResourceId = "string",
EnableWebAccess = false,
Experiment = "string",
ExperimentRun = "string",
Network = "string",
BaseOutputDirectory = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestinationArgs
{
OutputUriPrefix = "string",
},
ProtectedArtifactLocationId = "string",
ReservedIpRanges = new[]
{
"string",
},
Scheduling = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SchedulingArgs
{
DisableRetries = false,
RestartJobOnWorkerRestart = false,
Timeout = "string",
},
ServiceAccount = "string",
Tensorboard = "string",
EnableDashboardAccess = false,
},
},
},
ResumeNasJobId = "string",
SearchSpaceSpec = "string",
},
EnableRestrictedImageTraining = false,
EncryptionSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpecArgs
{
KmsKeyName = "string",
},
Labels =
{
{ "string", "string" },
},
Location = "string",
Project = "string",
});
example, err := aiplatformv1beta1.NewNasJob(ctx, "google-nativeNasJobResource", &aiplatformv1beta1.NasJobArgs{
DisplayName: pulumi.String("string"),
NasJobSpec: &aiplatform.GoogleCloudAiplatformV1beta1NasJobSpecArgs{
MultiTrialAlgorithmSpec: &aiplatform.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecArgs{
SearchTrialSpec: &aiplatform.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpecArgs{
MaxParallelTrialCount: pulumi.Int(0),
MaxTrialCount: pulumi.Int(0),
SearchTrialJobSpec: &aiplatform.GoogleCloudAiplatformV1beta1CustomJobSpecArgs{
WorkerPoolSpecs: aiplatform.GoogleCloudAiplatformV1beta1WorkerPoolSpecArray{
&aiplatform.GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs{
ContainerSpec: &aiplatform.GoogleCloudAiplatformV1beta1ContainerSpecArgs{
ImageUri: pulumi.String("string"),
Args: pulumi.StringArray{
pulumi.String("string"),
},
Command: pulumi.StringArray{
pulumi.String("string"),
},
Env: aiplatform.GoogleCloudAiplatformV1beta1EnvVarArray{
&aiplatform.GoogleCloudAiplatformV1beta1EnvVarArgs{
Name: pulumi.String("string"),
Value: pulumi.String("string"),
},
},
},
DiskSpec: &aiplatform.GoogleCloudAiplatformV1beta1DiskSpecArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
},
MachineSpec: &aiplatform.GoogleCloudAiplatformV1beta1MachineSpecArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorType: aiplatformv1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeAcceleratorTypeUnspecified,
MachineType: pulumi.String("string"),
TpuTopology: pulumi.String("string"),
},
NfsMounts: aiplatform.GoogleCloudAiplatformV1beta1NfsMountArray{
&aiplatform.GoogleCloudAiplatformV1beta1NfsMountArgs{
MountPoint: pulumi.String("string"),
Path: pulumi.String("string"),
Server: pulumi.String("string"),
},
},
PythonPackageSpec: &aiplatform.GoogleCloudAiplatformV1beta1PythonPackageSpecArgs{
ExecutorImageUri: pulumi.String("string"),
PackageUris: pulumi.StringArray{
pulumi.String("string"),
},
PythonModule: pulumi.String("string"),
Args: pulumi.StringArray{
pulumi.String("string"),
},
Env: aiplatform.GoogleCloudAiplatformV1beta1EnvVarArray{
&aiplatform.GoogleCloudAiplatformV1beta1EnvVarArgs{
Name: pulumi.String("string"),
Value: pulumi.String("string"),
},
},
},
ReplicaCount: pulumi.String("string"),
},
},
PersistentResourceId: pulumi.String("string"),
EnableWebAccess: pulumi.Bool(false),
Experiment: pulumi.String("string"),
ExperimentRun: pulumi.String("string"),
Network: pulumi.String("string"),
BaseOutputDirectory: &aiplatform.GoogleCloudAiplatformV1beta1GcsDestinationArgs{
OutputUriPrefix: pulumi.String("string"),
},
ProtectedArtifactLocationId: pulumi.String("string"),
ReservedIpRanges: pulumi.StringArray{
pulumi.String("string"),
},
Scheduling: &aiplatform.GoogleCloudAiplatformV1beta1SchedulingArgs{
DisableRetries: pulumi.Bool(false),
RestartJobOnWorkerRestart: pulumi.Bool(false),
Timeout: pulumi.String("string"),
},
ServiceAccount: pulumi.String("string"),
Tensorboard: pulumi.String("string"),
EnableDashboardAccess: pulumi.Bool(false),
},
MaxFailedTrialCount: pulumi.Int(0),
},
Metric: &aiplatform.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecArgs{
Goal: aiplatformv1beta1.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecGoalGoalTypeUnspecified,
MetricId: pulumi.String("string"),
},
MultiTrialAlgorithm: aiplatformv1beta1.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMultiTrialAlgorithmMultiTrialAlgorithmUnspecified,
TrainTrialSpec: &aiplatform.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpecArgs{
Frequency: pulumi.Int(0),
MaxParallelTrialCount: pulumi.Int(0),
TrainTrialJobSpec: &aiplatform.GoogleCloudAiplatformV1beta1CustomJobSpecArgs{
WorkerPoolSpecs: aiplatform.GoogleCloudAiplatformV1beta1WorkerPoolSpecArray{
&aiplatform.GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs{
ContainerSpec: &aiplatform.GoogleCloudAiplatformV1beta1ContainerSpecArgs{
ImageUri: pulumi.String("string"),
Args: pulumi.StringArray{
pulumi.String("string"),
},
Command: pulumi.StringArray{
pulumi.String("string"),
},
Env: aiplatform.GoogleCloudAiplatformV1beta1EnvVarArray{
&aiplatform.GoogleCloudAiplatformV1beta1EnvVarArgs{
Name: pulumi.String("string"),
Value: pulumi.String("string"),
},
},
},
DiskSpec: &aiplatform.GoogleCloudAiplatformV1beta1DiskSpecArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
},
MachineSpec: &aiplatform.GoogleCloudAiplatformV1beta1MachineSpecArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorType: aiplatformv1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeAcceleratorTypeUnspecified,
MachineType: pulumi.String("string"),
TpuTopology: pulumi.String("string"),
},
NfsMounts: aiplatform.GoogleCloudAiplatformV1beta1NfsMountArray{
&aiplatform.GoogleCloudAiplatformV1beta1NfsMountArgs{
MountPoint: pulumi.String("string"),
Path: pulumi.String("string"),
Server: pulumi.String("string"),
},
},
PythonPackageSpec: &aiplatform.GoogleCloudAiplatformV1beta1PythonPackageSpecArgs{
ExecutorImageUri: pulumi.String("string"),
PackageUris: pulumi.StringArray{
pulumi.String("string"),
},
PythonModule: pulumi.String("string"),
Args: pulumi.StringArray{
pulumi.String("string"),
},
Env: aiplatform.GoogleCloudAiplatformV1beta1EnvVarArray{
&aiplatform.GoogleCloudAiplatformV1beta1EnvVarArgs{
Name: pulumi.String("string"),
Value: pulumi.String("string"),
},
},
},
ReplicaCount: pulumi.String("string"),
},
},
PersistentResourceId: pulumi.String("string"),
EnableWebAccess: pulumi.Bool(false),
Experiment: pulumi.String("string"),
ExperimentRun: pulumi.String("string"),
Network: pulumi.String("string"),
BaseOutputDirectory: &aiplatform.GoogleCloudAiplatformV1beta1GcsDestinationArgs{
OutputUriPrefix: pulumi.String("string"),
},
ProtectedArtifactLocationId: pulumi.String("string"),
ReservedIpRanges: pulumi.StringArray{
pulumi.String("string"),
},
Scheduling: &aiplatform.GoogleCloudAiplatformV1beta1SchedulingArgs{
DisableRetries: pulumi.Bool(false),
RestartJobOnWorkerRestart: pulumi.Bool(false),
Timeout: pulumi.String("string"),
},
ServiceAccount: pulumi.String("string"),
Tensorboard: pulumi.String("string"),
EnableDashboardAccess: pulumi.Bool(false),
},
},
},
ResumeNasJobId: pulumi.String("string"),
SearchSpaceSpec: pulumi.String("string"),
},
EnableRestrictedImageTraining: pulumi.Bool(false),
EncryptionSpec: &aiplatform.GoogleCloudAiplatformV1beta1EncryptionSpecArgs{
KmsKeyName: pulumi.String("string"),
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Location: pulumi.String("string"),
Project: pulumi.String("string"),
})
var google_nativeNasJobResource = new NasJob("google-nativeNasJobResource", NasJobArgs.builder()
.displayName("string")
.nasJobSpec(GoogleCloudAiplatformV1beta1NasJobSpecArgs.builder()
.multiTrialAlgorithmSpec(GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecArgs.builder()
.searchTrialSpec(GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpecArgs.builder()
.maxParallelTrialCount(0)
.maxTrialCount(0)
.searchTrialJobSpec(GoogleCloudAiplatformV1beta1CustomJobSpecArgs.builder()
.workerPoolSpecs(GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs.builder()
.containerSpec(GoogleCloudAiplatformV1beta1ContainerSpecArgs.builder()
.imageUri("string")
.args("string")
.command("string")
.env(GoogleCloudAiplatformV1beta1EnvVarArgs.builder()
.name("string")
.value("string")
.build())
.build())
.diskSpec(GoogleCloudAiplatformV1beta1DiskSpecArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.build())
.machineSpec(GoogleCloudAiplatformV1beta1MachineSpecArgs.builder()
.acceleratorCount(0)
.acceleratorType("ACCELERATOR_TYPE_UNSPECIFIED")
.machineType("string")
.tpuTopology("string")
.build())
.nfsMounts(GoogleCloudAiplatformV1beta1NfsMountArgs.builder()
.mountPoint("string")
.path("string")
.server("string")
.build())
.pythonPackageSpec(GoogleCloudAiplatformV1beta1PythonPackageSpecArgs.builder()
.executorImageUri("string")
.packageUris("string")
.pythonModule("string")
.args("string")
.env(GoogleCloudAiplatformV1beta1EnvVarArgs.builder()
.name("string")
.value("string")
.build())
.build())
.replicaCount("string")
.build())
.persistentResourceId("string")
.enableWebAccess(false)
.experiment("string")
.experimentRun("string")
.network("string")
.baseOutputDirectory(GoogleCloudAiplatformV1beta1GcsDestinationArgs.builder()
.outputUriPrefix("string")
.build())
.protectedArtifactLocationId("string")
.reservedIpRanges("string")
.scheduling(GoogleCloudAiplatformV1beta1SchedulingArgs.builder()
.disableRetries(false)
.restartJobOnWorkerRestart(false)
.timeout("string")
.build())
.serviceAccount("string")
.tensorboard("string")
.enableDashboardAccess(false)
.build())
.maxFailedTrialCount(0)
.build())
.metric(GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecArgs.builder()
.goal("GOAL_TYPE_UNSPECIFIED")
.metricId("string")
.build())
.multiTrialAlgorithm("MULTI_TRIAL_ALGORITHM_UNSPECIFIED")
.trainTrialSpec(GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpecArgs.builder()
.frequency(0)
.maxParallelTrialCount(0)
.trainTrialJobSpec(GoogleCloudAiplatformV1beta1CustomJobSpecArgs.builder()
.workerPoolSpecs(GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs.builder()
.containerSpec(GoogleCloudAiplatformV1beta1ContainerSpecArgs.builder()
.imageUri("string")
.args("string")
.command("string")
.env(GoogleCloudAiplatformV1beta1EnvVarArgs.builder()
.name("string")
.value("string")
.build())
.build())
.diskSpec(GoogleCloudAiplatformV1beta1DiskSpecArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.build())
.machineSpec(GoogleCloudAiplatformV1beta1MachineSpecArgs.builder()
.acceleratorCount(0)
.acceleratorType("ACCELERATOR_TYPE_UNSPECIFIED")
.machineType("string")
.tpuTopology("string")
.build())
.nfsMounts(GoogleCloudAiplatformV1beta1NfsMountArgs.builder()
.mountPoint("string")
.path("string")
.server("string")
.build())
.pythonPackageSpec(GoogleCloudAiplatformV1beta1PythonPackageSpecArgs.builder()
.executorImageUri("string")
.packageUris("string")
.pythonModule("string")
.args("string")
.env(GoogleCloudAiplatformV1beta1EnvVarArgs.builder()
.name("string")
.value("string")
.build())
.build())
.replicaCount("string")
.build())
.persistentResourceId("string")
.enableWebAccess(false)
.experiment("string")
.experimentRun("string")
.network("string")
.baseOutputDirectory(GoogleCloudAiplatformV1beta1GcsDestinationArgs.builder()
.outputUriPrefix("string")
.build())
.protectedArtifactLocationId("string")
.reservedIpRanges("string")
.scheduling(GoogleCloudAiplatformV1beta1SchedulingArgs.builder()
.disableRetries(false)
.restartJobOnWorkerRestart(false)
.timeout("string")
.build())
.serviceAccount("string")
.tensorboard("string")
.enableDashboardAccess(false)
.build())
.build())
.build())
.resumeNasJobId("string")
.searchSpaceSpec("string")
.build())
.enableRestrictedImageTraining(false)
.encryptionSpec(GoogleCloudAiplatformV1beta1EncryptionSpecArgs.builder()
.kmsKeyName("string")
.build())
.labels(Map.of("string", "string"))
.location("string")
.project("string")
.build());
google_native_nas_job_resource = google_native.aiplatform.v1beta1.NasJob("google-nativeNasJobResource",
display_name="string",
nas_job_spec={
"multi_trial_algorithm_spec": {
"search_trial_spec": {
"max_parallel_trial_count": 0,
"max_trial_count": 0,
"search_trial_job_spec": {
"worker_pool_specs": [{
"container_spec": {
"image_uri": "string",
"args": ["string"],
"command": ["string"],
"env": [{
"name": "string",
"value": "string",
}],
},
"disk_spec": {
"boot_disk_size_gb": 0,
"boot_disk_type": "string",
},
"machine_spec": {
"accelerator_count": 0,
"accelerator_type": google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.ACCELERATOR_TYPE_UNSPECIFIED,
"machine_type": "string",
"tpu_topology": "string",
},
"nfs_mounts": [{
"mount_point": "string",
"path": "string",
"server": "string",
}],
"python_package_spec": {
"executor_image_uri": "string",
"package_uris": ["string"],
"python_module": "string",
"args": ["string"],
"env": [{
"name": "string",
"value": "string",
}],
},
"replica_count": "string",
}],
"persistent_resource_id": "string",
"enable_web_access": False,
"experiment": "string",
"experiment_run": "string",
"network": "string",
"base_output_directory": {
"output_uri_prefix": "string",
},
"protected_artifact_location_id": "string",
"reserved_ip_ranges": ["string"],
"scheduling": {
"disable_retries": False,
"restart_job_on_worker_restart": False,
"timeout": "string",
},
"service_account": "string",
"tensorboard": "string",
"enable_dashboard_access": False,
},
"max_failed_trial_count": 0,
},
"metric": {
"goal": google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecGoal.GOAL_TYPE_UNSPECIFIED,
"metric_id": "string",
},
"multi_trial_algorithm": google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMultiTrialAlgorithm.MULTI_TRIAL_ALGORITHM_UNSPECIFIED,
"train_trial_spec": {
"frequency": 0,
"max_parallel_trial_count": 0,
"train_trial_job_spec": {
"worker_pool_specs": [{
"container_spec": {
"image_uri": "string",
"args": ["string"],
"command": ["string"],
"env": [{
"name": "string",
"value": "string",
}],
},
"disk_spec": {
"boot_disk_size_gb": 0,
"boot_disk_type": "string",
},
"machine_spec": {
"accelerator_count": 0,
"accelerator_type": google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.ACCELERATOR_TYPE_UNSPECIFIED,
"machine_type": "string",
"tpu_topology": "string",
},
"nfs_mounts": [{
"mount_point": "string",
"path": "string",
"server": "string",
}],
"python_package_spec": {
"executor_image_uri": "string",
"package_uris": ["string"],
"python_module": "string",
"args": ["string"],
"env": [{
"name": "string",
"value": "string",
}],
},
"replica_count": "string",
}],
"persistent_resource_id": "string",
"enable_web_access": False,
"experiment": "string",
"experiment_run": "string",
"network": "string",
"base_output_directory": {
"output_uri_prefix": "string",
},
"protected_artifact_location_id": "string",
"reserved_ip_ranges": ["string"],
"scheduling": {
"disable_retries": False,
"restart_job_on_worker_restart": False,
"timeout": "string",
},
"service_account": "string",
"tensorboard": "string",
"enable_dashboard_access": False,
},
},
},
"resume_nas_job_id": "string",
"search_space_spec": "string",
},
enable_restricted_image_training=False,
encryption_spec={
"kms_key_name": "string",
},
labels={
"string": "string",
},
location="string",
project="string")
const google_nativeNasJobResource = new google_native.aiplatform.v1beta1.NasJob("google-nativeNasJobResource", {
displayName: "string",
nasJobSpec: {
multiTrialAlgorithmSpec: {
searchTrialSpec: {
maxParallelTrialCount: 0,
maxTrialCount: 0,
searchTrialJobSpec: {
workerPoolSpecs: [{
containerSpec: {
imageUri: "string",
args: ["string"],
command: ["string"],
env: [{
name: "string",
value: "string",
}],
},
diskSpec: {
bootDiskSizeGb: 0,
bootDiskType: "string",
},
machineSpec: {
acceleratorCount: 0,
acceleratorType: google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
machineType: "string",
tpuTopology: "string",
},
nfsMounts: [{
mountPoint: "string",
path: "string",
server: "string",
}],
pythonPackageSpec: {
executorImageUri: "string",
packageUris: ["string"],
pythonModule: "string",
args: ["string"],
env: [{
name: "string",
value: "string",
}],
},
replicaCount: "string",
}],
persistentResourceId: "string",
enableWebAccess: false,
experiment: "string",
experimentRun: "string",
network: "string",
baseOutputDirectory: {
outputUriPrefix: "string",
},
protectedArtifactLocationId: "string",
reservedIpRanges: ["string"],
scheduling: {
disableRetries: false,
restartJobOnWorkerRestart: false,
timeout: "string",
},
serviceAccount: "string",
tensorboard: "string",
enableDashboardAccess: false,
},
maxFailedTrialCount: 0,
},
metric: {
goal: google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecGoal.GoalTypeUnspecified,
metricId: "string",
},
multiTrialAlgorithm: google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMultiTrialAlgorithm.MultiTrialAlgorithmUnspecified,
trainTrialSpec: {
frequency: 0,
maxParallelTrialCount: 0,
trainTrialJobSpec: {
workerPoolSpecs: [{
containerSpec: {
imageUri: "string",
args: ["string"],
command: ["string"],
env: [{
name: "string",
value: "string",
}],
},
diskSpec: {
bootDiskSizeGb: 0,
bootDiskType: "string",
},
machineSpec: {
acceleratorCount: 0,
acceleratorType: google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
machineType: "string",
tpuTopology: "string",
},
nfsMounts: [{
mountPoint: "string",
path: "string",
server: "string",
}],
pythonPackageSpec: {
executorImageUri: "string",
packageUris: ["string"],
pythonModule: "string",
args: ["string"],
env: [{
name: "string",
value: "string",
}],
},
replicaCount: "string",
}],
persistentResourceId: "string",
enableWebAccess: false,
experiment: "string",
experimentRun: "string",
network: "string",
baseOutputDirectory: {
outputUriPrefix: "string",
},
protectedArtifactLocationId: "string",
reservedIpRanges: ["string"],
scheduling: {
disableRetries: false,
restartJobOnWorkerRestart: false,
timeout: "string",
},
serviceAccount: "string",
tensorboard: "string",
enableDashboardAccess: false,
},
},
},
resumeNasJobId: "string",
searchSpaceSpec: "string",
},
enableRestrictedImageTraining: false,
encryptionSpec: {
kmsKeyName: "string",
},
labels: {
string: "string",
},
location: "string",
project: "string",
});
type: google-native:aiplatform/v1beta1:NasJob
properties:
displayName: string
enableRestrictedImageTraining: false
encryptionSpec:
kmsKeyName: string
labels:
string: string
location: string
nasJobSpec:
multiTrialAlgorithmSpec:
metric:
goal: GOAL_TYPE_UNSPECIFIED
metricId: string
multiTrialAlgorithm: MULTI_TRIAL_ALGORITHM_UNSPECIFIED
searchTrialSpec:
maxFailedTrialCount: 0
maxParallelTrialCount: 0
maxTrialCount: 0
searchTrialJobSpec:
baseOutputDirectory:
outputUriPrefix: string
enableDashboardAccess: false
enableWebAccess: false
experiment: string
experimentRun: string
network: string
persistentResourceId: string
protectedArtifactLocationId: string
reservedIpRanges:
- string
scheduling:
disableRetries: false
restartJobOnWorkerRestart: false
timeout: string
serviceAccount: string
tensorboard: string
workerPoolSpecs:
- containerSpec:
args:
- string
command:
- string
env:
- name: string
value: string
imageUri: string
diskSpec:
bootDiskSizeGb: 0
bootDiskType: string
machineSpec:
acceleratorCount: 0
acceleratorType: ACCELERATOR_TYPE_UNSPECIFIED
machineType: string
tpuTopology: string
nfsMounts:
- mountPoint: string
path: string
server: string
pythonPackageSpec:
args:
- string
env:
- name: string
value: string
executorImageUri: string
packageUris:
- string
pythonModule: string
replicaCount: string
trainTrialSpec:
frequency: 0
maxParallelTrialCount: 0
trainTrialJobSpec:
baseOutputDirectory:
outputUriPrefix: string
enableDashboardAccess: false
enableWebAccess: false
experiment: string
experimentRun: string
network: string
persistentResourceId: string
protectedArtifactLocationId: string
reservedIpRanges:
- string
scheduling:
disableRetries: false
restartJobOnWorkerRestart: false
timeout: string
serviceAccount: string
tensorboard: string
workerPoolSpecs:
- containerSpec:
args:
- string
command:
- string
env:
- name: string
value: string
imageUri: string
diskSpec:
bootDiskSizeGb: 0
bootDiskType: string
machineSpec:
acceleratorCount: 0
acceleratorType: ACCELERATOR_TYPE_UNSPECIFIED
machineType: string
tpuTopology: string
nfsMounts:
- mountPoint: string
path: string
server: string
pythonPackageSpec:
args:
- string
env:
- name: string
value: string
executorImageUri: string
packageUris:
- string
pythonModule: string
replicaCount: string
resumeNasJobId: string
searchSpaceSpec: string
project: string
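The reference examples above use placeholder values for every input. As a rough, hedged sketch of what a small multi-trial job could look like in Python, the values below (project, region, trial counts, container image, and search space contents) are illustrative assumptions, not values taken from this reference:

import pulumi
import pulumi_google_native as google_native

# Minimal multi-trial NAS job sketch; every concrete value below is hypothetical.
nas_job = google_native.aiplatform.v1beta1.NasJob("example-nas-job",
    display_name="example-nas-job",
    location="us-central1",   # assumed region
    project="my-project",     # assumed project ID
    nas_job_spec={
        # Serialized search space definition; real contents depend on your NAS setup.
        "search_space_spec": "{ ... }",
        "multi_trial_algorithm_spec": {
            # Enum members mirror the API values (e.g. REINFORCEMENT_LEARNING, MAXIMIZE).
            "multi_trial_algorithm": google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMultiTrialAlgorithm.REINFORCEMENT_LEARNING,
            "metric": {
                "metric_id": "accuracy",
                "goal": google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecGoal.MAXIMIZE,
            },
            "search_trial_spec": {
                "max_trial_count": 10,
                "max_parallel_trial_count": 2,
                "search_trial_job_spec": {
                    "worker_pool_specs": [{
                        "replica_count": "1",
                        "machine_spec": {"machine_type": "n1-standard-8"},
                        "container_spec": {
                            # Hypothetical training image that runs one search trial.
                            "image_uri": "gcr.io/my-project/nas-trainer:latest",
                        },
                    }],
                },
            },
        },
    })

pulumi.export("nas_job_resource_name", nas_job.name)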
NasJob Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
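As a brief illustration of the two Python forms (the KMS key name below is a placeholder), the encryption spec input can be given either way:

import pulumi_google_native as google_native

# Passed as an argument class...
encryption_spec = google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1EncryptionSpecArgs(
    kms_key_name="projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
)

# ...or as an equivalent dictionary literal with snake_case keys.
encryption_spec = {
    "kms_key_name": "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
}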
The NasJob resource accepts the following input properties:
- DisplayName string - The display name of the NasJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- NasJobSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NasJobSpec - The specification of a NasJob.
- EnableRestrictedImageTraining bool - Optional. Enable a separation of Custom model training and restricted image training for tenant project.
- EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpec - Customer-managed encryption key options for a NasJob. If this is set, then all resources created by the NasJob will be encrypted with the provided encryption key.
- Labels Dictionary<string, string> - The labels with user-defined metadata to organize NasJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- Location string
- Project string
- DisplayName string - The display name of the NasJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- NasJobSpec GoogleCloudAiplatformV1beta1NasJobSpecArgs - The specification of a NasJob.
- EnableRestrictedImageTraining bool - Optional. Enable a separation of Custom model training and restricted image training for tenant project.
- EncryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpecArgs - Customer-managed encryption key options for a NasJob. If this is set, then all resources created by the NasJob will be encrypted with the provided encryption key.
- Labels map[string]string - The labels with user-defined metadata to organize NasJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- Location string
- Project string
- displayName String - The display name of the NasJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- nasJobSpec GoogleCloudAiplatformV1beta1NasJobSpec - The specification of a NasJob.
- enableRestrictedImageTraining Boolean - Optional. Enable a separation of Custom model training and restricted image training for tenant project.
- encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec - Customer-managed encryption key options for a NasJob. If this is set, then all resources created by the NasJob will be encrypted with the provided encryption key.
- labels Map<String,String> - The labels with user-defined metadata to organize NasJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location String
- project String
- displayName string - The display name of the NasJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- nasJobSpec GoogleCloudAiplatformV1beta1NasJobSpec - The specification of a NasJob.
- enableRestrictedImageTraining boolean - Optional. Enable a separation of Custom model training and restricted image training for tenant project.
- encryptionSpec GoogleCloudAiplatformV1beta1EncryptionSpec - Customer-managed encryption key options for a NasJob. If this is set, then all resources created by the NasJob will be encrypted with the provided encryption key.
- labels {[key: string]: string} - The labels with user-defined metadata to organize NasJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location string
- project string
- display_name str - The display name of the NasJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- nas_job_spec GoogleCloudAiplatformV1beta1NasJobSpecArgs - The specification of a NasJob.
- enable_restricted_image_training bool - Optional. Enable a separation of Custom model training and restricted image training for tenant project.
- encryption_spec GoogleCloudAiplatformV1beta1EncryptionSpecArgs - Customer-managed encryption key options for a NasJob. If this is set, then all resources created by the NasJob will be encrypted with the provided encryption key.
- labels Mapping[str, str] - The labels with user-defined metadata to organize NasJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location str
- project str
- displayName String - The display name of the NasJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- nasJobSpec Property Map - The specification of a NasJob.
- enableRestrictedImageTraining Boolean - Optional. Enable a separation of Custom model training and restricted image training for tenant project.
- encryptionSpec Property Map - Customer-managed encryption key options for a NasJob. If this is set, then all resources created by the NasJob will be encrypted with the provided encryption key.
- labels Map<String> - The labels with user-defined metadata to organize NasJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- location String
- project String
Outputs
All input properties are implicitly available as output properties. Additionally, the NasJob resource produces the following output properties:
- CreateTime string - Time when the NasJob was created.
- EndTime string - Time when the NasJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- Error Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleRpcStatusResponse - Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- Id string - The provider-assigned unique ID for this managed resource.
- Name string - Resource name of the NasJob.
- NasJobOutput Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleCloudAiplatformV1beta1NasJobOutputResponse - Output of the NasJob.
- StartTime string - Time when the NasJob for the first time entered the JOB_STATE_RUNNING state.
- State string - The detailed state of the job.
- UpdateTime string - Time when the NasJob was most recently updated.
- CreateTime string - Time when the NasJob was created.
- EndTime string - Time when the NasJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- Error GoogleRpcStatusResponse - Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- Id string - The provider-assigned unique ID for this managed resource.
- Name string - Resource name of the NasJob.
- NasJobOutput GoogleCloudAiplatformV1beta1NasJobOutputResponse - Output of the NasJob.
- StartTime string - Time when the NasJob for the first time entered the JOB_STATE_RUNNING state.
- State string - The detailed state of the job.
- UpdateTime string - Time when the NasJob was most recently updated.
- createTime String - Time when the NasJob was created.
- endTime String - Time when the NasJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- error GoogleRpcStatusResponse - Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id String - The provider-assigned unique ID for this managed resource.
- name String - Resource name of the NasJob.
- nasJobOutput GoogleCloudAiplatformV1beta1NasJobOutputResponse - Output of the NasJob.
- startTime String - Time when the NasJob for the first time entered the JOB_STATE_RUNNING state.
- state String - The detailed state of the job.
- updateTime String - Time when the NasJob was most recently updated.
- createTime string - Time when the NasJob was created.
- endTime string - Time when the NasJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- error GoogleRpcStatusResponse - Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id string - The provider-assigned unique ID for this managed resource.
- name string - Resource name of the NasJob.
- nasJobOutput GoogleCloudAiplatformV1beta1NasJobOutputResponse - Output of the NasJob.
- startTime string - Time when the NasJob for the first time entered the JOB_STATE_RUNNING state.
- state string - The detailed state of the job.
- updateTime string - Time when the NasJob was most recently updated.
- create_time str - Time when the NasJob was created.
- end_time str - Time when the NasJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- error GoogleRpcStatusResponse - Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id str - The provider-assigned unique ID for this managed resource.
- name str - Resource name of the NasJob.
- nas_job_output GoogleCloudAiplatformV1beta1NasJobOutputResponse - Output of the NasJob.
- start_time str - Time when the NasJob for the first time entered the JOB_STATE_RUNNING state.
- state str - The detailed state of the job.
- update_time str - Time when the NasJob was most recently updated.
- createTime String - Time when the NasJob was created.
- endTime String - Time when the NasJob entered any of the following states: JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED.
- error Property Map - Only populated when job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- id String - The provider-assigned unique ID for this managed resource.
- name String - Resource name of the NasJob.
- nasJobOutput Property Map - Output of the NasJob.
- startTime String - Time when the NasJob for the first time entered the JOB_STATE_RUNNING state.
- state String - The detailed state of the job.
- updateTime String - Time when the NasJob was most recently updated.
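As a short sketch (assuming a NasJob has been declared as nas_job in a Python program, as in the hypothetical example further above), these outputs can be exported like any other Pulumi resource outputs:

import pulumi

# Export a few of the NasJob output properties listed above.
pulumi.export("nas_job_state", nas_job.state)
pulumi.export("nas_job_create_time", nas_job.create_time)
pulumi.export("nas_job_resource_name", nas_job.name)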
Supporting Types
GoogleCloudAiplatformV1beta1ContainerSpec, GoogleCloudAiplatformV1beta1ContainerSpecArgs
- ImageUri string - The URI of a container image in the Container Registry that is to be run on each worker replica.
- Args List<string> - The arguments to be passed when starting the container.
- Command List<string> - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVar> - Environment variables to be passed to the container. Maximum limit is 100.
- ImageUri string - The URI of a container image in the Container Registry that is to be run on each worker replica.
- Args []string - The arguments to be passed when starting the container.
- Command []string - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- Env []GoogleCloudAiplatformV1beta1EnvVar - Environment variables to be passed to the container. Maximum limit is 100.
- imageUri String - The URI of a container image in the Container Registry that is to be run on each worker replica.
- args List<String> - The arguments to be passed when starting the container.
- command List<String> - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env List<GoogleCloudAiplatformV1beta1EnvVar> - Environment variables to be passed to the container. Maximum limit is 100.
- imageUri string - The URI of a container image in the Container Registry that is to be run on each worker replica.
- args string[] - The arguments to be passed when starting the container.
- command string[] - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env GoogleCloudAiplatformV1beta1EnvVar[] - Environment variables to be passed to the container. Maximum limit is 100.
- image_uri str - The URI of a container image in the Container Registry that is to be run on each worker replica.
- args Sequence[str] - The arguments to be passed when starting the container.
- command Sequence[str] - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env Sequence[GoogleCloudAiplatformV1beta1EnvVar] - Environment variables to be passed to the container. Maximum limit is 100.
- imageUri String - The URI of a container image in the Container Registry that is to be run on each worker replica.
- args List<String> - The arguments to be passed when starting the container.
- command List<String> - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env List<Property Map> - Environment variables to be passed to the container. Maximum limit is 100.
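As a hedged illustration of this type in Python (the image URI, command, arguments, and environment values below are hypothetical), a container spec for a worker pool might be built like this:

import pulumi_google_native as google_native

# Container spec for a single worker replica; all concrete values are hypothetical.
container_spec = google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ContainerSpecArgs(
    image_uri="gcr.io/my-project/nas-trainer:latest",
    command=["python3", "-m", "trainer.search"],
    args=["--num_epochs=5"],
    env=[google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1EnvVarArgs(
        name="EXPERIMENT_NAME",
        value="nas-demo",
    )],
)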
GoogleCloudAiplatformV1beta1ContainerSpecResponse, GoogleCloudAiplatformV1beta1ContainerSpecResponseArgs
- Args List<string> - The arguments to be passed when starting the container.
- Command List<string> - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarResponse> - Environment variables to be passed to the container. Maximum limit is 100.
- ImageUri string - The URI of a container image in the Container Registry that is to be run on each worker replica.
- Args []string - The arguments to be passed when starting the container.
- Command []string - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- Env []GoogleCloudAiplatformV1beta1EnvVarResponse - Environment variables to be passed to the container. Maximum limit is 100.
- ImageUri string - The URI of a container image in the Container Registry that is to be run on each worker replica.
- args List<String> - The arguments to be passed when starting the container.
- command List<String> - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env List<GoogleCloudAiplatformV1beta1EnvVarResponse> - Environment variables to be passed to the container. Maximum limit is 100.
- imageUri String - The URI of a container image in the Container Registry that is to be run on each worker replica.
- args string[] - The arguments to be passed when starting the container.
- command string[] - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env GoogleCloudAiplatformV1beta1EnvVarResponse[] - Environment variables to be passed to the container. Maximum limit is 100.
- imageUri string - The URI of a container image in the Container Registry that is to be run on each worker replica.
- args Sequence[str] - The arguments to be passed when starting the container.
- command Sequence[str] - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env Sequence[GoogleCloudAiplatformV1beta1EnvVarResponse] - Environment variables to be passed to the container. Maximum limit is 100.
- image_uri str - The URI of a container image in the Container Registry that is to be run on each worker replica.
- args List<String> - The arguments to be passed when starting the container.
- command List<String> - The command to be invoked when the container is started. It overrides the entrypoint instruction in Dockerfile when provided.
- env List<Property Map> - Environment variables to be passed to the container. Maximum limit is 100.
- imageUri String - The URI of a container image in the Container Registry that is to be run on each worker replica.
GoogleCloudAiplatformV1beta1CustomJobSpec, GoogleCloudAiplatformV1beta1CustomJobSpecArgs
- Worker
Pool List<Pulumi.Specs Google Native. Aiplatform. V1Beta1. Inputs. Google Cloud Aiplatform V1beta1Worker Pool Spec> - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- Base
Output Pulumi.Directory Google Native. Aiplatform. V1Beta1. Inputs. Google Cloud Aiplatform V1beta1Gcs Destination - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set: For CustomJob: * AIP_MODEL_DIR =
/model/
* AIP_CHECKPOINT_DIR =/checkpoints/
* AIP_TENSORBOARD_LOG_DIR =/logs/
For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR =//model/
* AIP_CHECKPOINT_DIR =//checkpoints/
* AIP_TENSORBOARD_LOG_DIR =//logs/
- Enable
Dashboard boolAccess - Optional. Whether you want Vertex AI to enable access to the customized dashboard in training chief container. If set to
true
, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials). - Enable
Web boolAccess - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to
true
, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials). - Experiment string
- Optional. The Experiment associated with this job. Format:
projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- Experiment
Run string - Optional. The Experiment Run associated with this job. Format:
projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- Network string
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example,
projects/12345/global/networks/myVPC
. Format is of the formprojects/{project}/global/networks/{network}
. Where {project} is a project number, as in12345
, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network. - Persistent
Resource stringId - Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected.
- Protected
Artifact stringLocation Id - The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- Reserved
Ip List<string>Ranges - Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- Scheduling
Pulumi.
Google Native. Aiplatform. V1Beta1. Inputs. Google Cloud Aiplatform V1beta1Scheduling - Scheduling options for a CustomJob.
- Service
Account string - Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- Tensorboard string
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format:
projects/{project}/locations/{location}/tensorboards/{tensorboard}
- Worker
Pool []GoogleSpecs Cloud Aiplatform V1beta1Worker Pool Spec - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- Base
Output GoogleDirectory Cloud Aiplatform V1beta1Gcs Destination - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set: For CustomJob: * AIP_MODEL_DIR =
/model/
* AIP_CHECKPOINT_DIR =/checkpoints/
* AIP_TENSORBOARD_LOG_DIR =/logs/
For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR =//model/
* AIP_CHECKPOINT_DIR =//checkpoints/
* AIP_TENSORBOARD_LOG_DIR =//logs/
- Enable
Dashboard boolAccess - Optional. Whether you want Vertex AI to enable access to the customized dashboard in training chief container. If set to
true
, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials). - Enable
Web boolAccess - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to
true
, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials). - Experiment string
- Optional. The Experiment associated with this job. Format:
projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- Experiment
Run string - Optional. The Experiment Run associated with this job. Format:
projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- Network string
- Optional. The full name of the Compute Engine network to which the Job should be peered. For example,
projects/12345/global/networks/myVPC
. Format is of the formprojects/{project}/global/networks/{network}
. Where {project} is a project number, as in12345
, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network. - Persistent
Resource stringId - Optional. The ID of the PersistentResource in the same Project and Location which to run If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-live machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise, the job will be rejected.
- Protected
Artifact stringLocation Id - The ID of the location to store protected artifacts. e.g. us-central1. Populate only when the location is different than CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- Reserved
Ip []stringRanges - Optional. A list of names for the reserved ip ranges under the VPC network that can be used for this job. If set, we will deploy the job within the provided ip ranges. Otherwise, the job will be deployed to any ip ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- Scheduling
Google
Cloud Aiplatform V1beta1Scheduling - Scheduling options for a CustomJob.
- Service
Account string - Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- Tensorboard string
- Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format:
projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs List<GoogleCloudAiplatformV1beta1WorkerPoolSpec> - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestination - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set. For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- enableDashboardAccess Boolean - Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess Boolean - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment String - Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun String - Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network String - Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId String - Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protectedArtifactLocationId String - The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different than the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges List<String> - Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job will be deployed within the provided IP ranges; otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling GoogleCloudAiplatformV1beta1Scheduling - Scheduling options for a CustomJob.
- serviceAccount String - Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard String - Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs GoogleCloudAiplatformV1beta1WorkerPoolSpec[] - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestination - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set. For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- enableDashboardAccess boolean - Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess boolean - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment string - Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun string - Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network string - Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId string - Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protectedArtifactLocationId string - The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different than the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges string[] - Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job will be deployed within the provided IP ranges; otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling GoogleCloudAiplatformV1beta1Scheduling - Scheduling options for a CustomJob.
- serviceAccount string - Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard string - Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- worker_pool_specs Sequence[GoogleCloudAiplatformV1beta1WorkerPoolSpec] - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- base_output_directory GoogleCloudAiplatformV1beta1GcsDestination - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set. For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- enable_dashboard_access bool - Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enable_web_access bool - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment str - Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experiment_run str - Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network str - Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistent_resource_id str - Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protected_artifact_location_id str - The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different than the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reserved_ip_ranges Sequence[str] - Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job will be deployed within the provided IP ranges; otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling GoogleCloudAiplatformV1beta1Scheduling - Scheduling options for a CustomJob.
- service_account str - Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard str - Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs List<Property Map> - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory Property Map - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set. For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- enableDashboardAccess Boolean - Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess Boolean - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment String - Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun String - Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network String - Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId String - Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protectedArtifactLocationId String - The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different than the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges List<String> - Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job will be deployed within the provided IP ranges; otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling Property Map - Scheduling options for a CustomJob.
- serviceAccount String - Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard String - Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
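For orientation, the minimal TypeScript sketch below shows how these CustomJobSpec fields typically sit inside a NasJob's search trial spec. The project, bucket, image URI, and service account are placeholders, and other parts of the multi-trial spec (such as the metric to optimize) are omitted for brevity; this is not a complete, deployable job definition.
import * as google_native from "@pulumi/google-native";

// Sketch only: one search-trial worker pool, a Cloud Storage output directory,
// and a run-as service account. All names below are placeholders.
const nasJob = new google_native.aiplatform.v1beta1.NasJob("example-nas-job", {
    displayName: "example-nas-job",
    location: "us-central1",
    nasJobSpec: {
        multiTrialAlgorithmSpec: {
            searchTrialSpec: {
                maxTrialCount: 10,
                maxParallelTrialCount: 2,
                searchTrialJobSpec: {
                    // The first (and here only) worker pool runs each trial.
                    workerPoolSpecs: [{
                        machineSpec: { machineType: "n1-standard-4" },
                        containerSpec: { imageUri: "gcr.io/my-project/nas-trainer:latest" },
                    }],
                    // AIP_MODEL_DIR, AIP_CHECKPOINT_DIR and AIP_TENSORBOARD_LOG_DIR
                    // resolve under this prefix inside the trial containers.
                    baseOutputDirectory: { outputUriPrefix: "gs://my-bucket/nas-output/" },
                    serviceAccount: "trainer@my-project.iam.gserviceaccount.com",
                },
            },
        },
    },
});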
GoogleCloudAiplatformV1beta1CustomJobSpecResponse, GoogleCloudAiplatformV1beta1CustomJobSpecResponseArgs
- BaseOutputDirectory Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestinationResponse - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set. For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- EnableDashboardAccess bool - Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- EnableWebAccess bool - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- Experiment string - Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- ExperimentRun string - Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- Network string - Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- PersistentResourceId string - Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- ProtectedArtifactLocationId string - The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different than the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- ReservedIpRanges List<string> - Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job will be deployed within the provided IP ranges; otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- Scheduling Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SchedulingResponse - Scheduling options for a CustomJob.
- ServiceAccount string - Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- Tensorboard string - Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- WorkerPoolSpecs List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse> - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- BaseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestinationResponse - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set. For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- EnableDashboardAccess bool - Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- EnableWebAccess bool - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- Experiment string - Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- ExperimentRun string - Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- Network string - Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- PersistentResourceId string - Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- ProtectedArtifactLocationId string - The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different than the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- ReservedIpRanges []string - Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job will be deployed within the provided IP ranges; otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- Scheduling GoogleCloudAiplatformV1beta1SchedulingResponse - Scheduling options for a CustomJob.
- ServiceAccount string - Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- Tensorboard string - Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- WorkerPoolSpecs []GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestinationResponse - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set. For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- enableDashboardAccess Boolean - Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess Boolean - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment String - Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun String - Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network String - Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId String - Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protectedArtifactLocationId String - The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different than the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges List<String> - Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job will be deployed within the provided IP ranges; otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling GoogleCloudAiplatformV1beta1SchedulingResponse - Scheduling options for a CustomJob.
- serviceAccount String - Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard String - Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs List<GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse> - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory GoogleCloudAiplatformV1beta1GcsDestinationResponse - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set. For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- enableDashboardAccess boolean - Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess boolean - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment string - Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun string - Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network string - Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId string - Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protectedArtifactLocationId string - The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different than the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges string[] - Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job will be deployed within the provided IP ranges; otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling GoogleCloudAiplatformV1beta1SchedulingResponse - Scheduling options for a CustomJob.
- serviceAccount string - Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard string - Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse[] - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- base_output_directory GoogleCloudAiplatformV1beta1GcsDestinationResponse - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set. For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- enable_dashboard_access bool - Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enable_web_access bool - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment str - Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experiment_run str - Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network str - Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistent_resource_id str - Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protected_artifact_location_id str - The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different than the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reserved_ip_ranges Sequence[str] - Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job will be deployed within the provided IP ranges; otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling GoogleCloudAiplatformV1beta1SchedulingResponse - Scheduling options for a CustomJob.
- service_account str - Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard str - Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- worker_pool_specs Sequence[GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse] - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
- baseOutputDirectory Property Map - The Cloud Storage location to store the output of this CustomJob or HyperparameterTuningJob. For HyperparameterTuningJob, the baseOutputDirectory of each child CustomJob backing a Trial is set to a subdirectory of name id under its parent HyperparameterTuningJob's baseOutputDirectory. The following Vertex AI environment variables will be passed to containers or python modules when this field is set. For CustomJob: * AIP_MODEL_DIR = /model/ * AIP_CHECKPOINT_DIR = /checkpoints/ * AIP_TENSORBOARD_LOG_DIR = /logs/ For CustomJob backing a Trial of HyperparameterTuningJob: * AIP_MODEL_DIR = //model/ * AIP_CHECKPOINT_DIR = //checkpoints/ * AIP_TENSORBOARD_LOG_DIR = //logs/
- enableDashboardAccess Boolean - Optional. Whether you want Vertex AI to enable access to the customized dashboard in the training chief container. If set to true, you can access the dashboard at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- enableWebAccess Boolean - Optional. Whether you want Vertex AI to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by CustomJob.web_access_uris or Trial.web_access_uris (within HyperparameterTuningJob.trials).
- experiment String - Optional. The Experiment associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}
- experimentRun String - Optional. The Experiment Run associated with this job. Format: projects/{project}/locations/{location}/metadataStores/{metadataStores}/contexts/{experiment-name}-{experiment-run-name}
- network String - Optional. The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the job is not peered with any network.
- persistentResourceId String - Optional. The ID of the PersistentResource in the same Project and Location in which to run the job. If this is specified, the job will be run on existing machines held by the PersistentResource instead of on-demand short-lived machines. The network and CMEK configs on the job should be consistent with those on the PersistentResource, otherwise the job will be rejected.
- protectedArtifactLocationId String - The ID of the location to store protected artifacts, e.g. us-central1. Populate only when the location is different than the CustomJob location. List of supported locations: https://cloud.google.com/vertex-ai/docs/general/locations
- reservedIpRanges List<String> - Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this job. If set, the job will be deployed within the provided IP ranges; otherwise, the job will be deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- scheduling Property Map - Scheduling options for a CustomJob.
- serviceAccount String - Specifies the service account for the workload run-as account. Users submitting jobs must have act-as permission on this run-as account. If unspecified, the Vertex AI Custom Code Service Agent for the CustomJob's project is used.
- tensorboard String - Optional. The name of a Vertex AI Tensorboard resource to which this CustomJob will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- workerPoolSpecs List<Property Map> - The spec of the worker pools including machine type and Docker image. All worker pools except the first one are optional and can be skipped by providing an empty value.
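The Response-typed fields above describe what the provider reads back after the job is created; on the resource itself they surface as ordinary Pulumi outputs. A small TypeScript sketch, reusing the hypothetical nasJob resource from the earlier example and assuming the job's server-assigned name and state are exposed as outputs, as they are in the underlying API:
// Export read-back values from the created NasJob (sketch only).
export const nasJobResourceName = nasJob.name;
export const nasJobState = nasJob.state;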
GoogleCloudAiplatformV1beta1DiskSpec, GoogleCloudAiplatformV1beta1DiskSpecArgs
- BootDiskSizeGb int - Size in GB of the boot disk (default is 100GB).
- BootDiskType string - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- BootDiskSizeGb int - Size in GB of the boot disk (default is 100GB).
- BootDiskType string - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb Integer - Size in GB of the boot disk (default is 100GB).
- bootDiskType String - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb number - Size in GB of the boot disk (default is 100GB).
- bootDiskType string - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- boot_disk_size_gb int - Size in GB of the boot disk (default is 100GB).
- boot_disk_type str - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb Number - Size in GB of the boot disk (default is 100GB).
- bootDiskType String - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
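As a rough TypeScript sketch, a worker pool can carry an explicit boot disk; omitting diskSpec keeps the documented defaults (100 GB, "pd-ssd"). The image URI below is a placeholder.
// Worker pool fragment with a custom boot disk (sketch only).
const workerPoolWithDisk = {
    machineSpec: { machineType: "n1-standard-8" },
    diskSpec: {
        bootDiskType: "pd-standard", // or "pd-ssd", the default
        bootDiskSizeGb: 200,
    },
    containerSpec: { imageUri: "gcr.io/my-project/nas-trainer:latest" },
};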
GoogleCloudAiplatformV1beta1DiskSpecResponse, GoogleCloudAiplatformV1beta1DiskSpecResponseArgs
- BootDiskSizeGb int - Size in GB of the boot disk (default is 100GB).
- BootDiskType string - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- BootDiskSizeGb int - Size in GB of the boot disk (default is 100GB).
- BootDiskType string - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb Integer - Size in GB of the boot disk (default is 100GB).
- bootDiskType String - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb number - Size in GB of the boot disk (default is 100GB).
- bootDiskType string - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- boot_disk_size_gb int - Size in GB of the boot disk (default is 100GB).
- boot_disk_type str - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
- bootDiskSizeGb Number - Size in GB of the boot disk (default is 100GB).
- bootDiskType String - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
GoogleCloudAiplatformV1beta1EncryptionSpec, GoogleCloudAiplatformV1beta1EncryptionSpecArgs
- KmsKeyName string - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- KmsKeyName string - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName string - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kms_key_name str - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
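A minimal TypeScript sketch of the customer-managed key argument; the project, key ring, and key names are placeholders, and the key must live in the same region as the job. The object is passed as encryptionSpec on the NasJob alongside displayName and nasJobSpec.
// CMEK argument for the NasJob (sketch only; identifier is a placeholder).
const encryptionSpec = {
    kmsKeyName: "projects/my-project/locations/us-central1/keyRings/my-kr/cryptoKeys/my-key",
};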
GoogleCloudAiplatformV1beta1EncryptionSpecResponse, GoogleCloudAiplatformV1beta1EncryptionSpecResponseArgs
- KmsKeyName string - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- KmsKeyName string - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName string - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kms_key_name str - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
GoogleCloudAiplatformV1beta1EnvVar, GoogleCloudAiplatformV1beta1EnvVarArgs
- Name string
- Name of the environment variable. Must be a valid C identifier.
- Value string
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- Name string
- Name of the environment variable. Must be a valid C identifier.
- Value string
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name String
- Name of the environment variable. Must be a valid C identifier.
- value String
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name string
- Name of the environment variable. Must be a valid C identifier.
- value string
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name str
- Name of the environment variable. Must be a valid C identifier.
- value str
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name String
- Name of the environment variable. Must be a valid C identifier.
- value String
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
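The expansion rules above are easier to see in a short TypeScript sketch: TRAIN_DIR is resolved against the previously defined DATA_BUCKET, while the double-dollar form is kept literally and never expanded. The bucket path and variable names are placeholders.
// Container environment variables demonstrating $(VAR_NAME) expansion (sketch only).
const trainerEnv = [
    { name: "DATA_BUCKET", value: "gs://my-bucket/data" },
    { name: "TRAIN_DIR", value: "$(DATA_BUCKET)/train" },      // expands to gs://my-bucket/data/train
    { name: "RAW_PATTERN", value: "$$(DATA_BUCKET)/raw" },     // stays literal: $(DATA_BUCKET)/raw
];
// Attached via a worker pool's containerSpec, e.g. { imageUri: "...", env: trainerEnv }.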
GoogleCloudAiplatformV1beta1EnvVarResponse, GoogleCloudAiplatformV1beta1EnvVarResponseArgs
- Name string
- Name of the environment variable. Must be a valid C identifier.
- Value string
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- Name string
- Name of the environment variable. Must be a valid C identifier.
- Value string
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name String
- Name of the environment variable. Must be a valid C identifier.
- value String
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name string
- Name of the environment variable. Must be a valid C identifier.
- value string
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name str
- Name of the environment variable. Must be a valid C identifier.
- value str
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
- name String
- Name of the environment variable. Must be a valid C identifier.
- value String
- Variables that reference a $(VAR_NAME) are expanded using the previous defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not.
GoogleCloudAiplatformV1beta1GcsDestination, GoogleCloudAiplatformV1beta1GcsDestinationArgs
- OutputUriPrefix string - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- OutputUriPrefix string - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix string - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- output_uri_prefix str - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
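In TypeScript this is simply a one-field object; the bucket path below is a placeholder. A missing trailing "/" is appended by the service and the directory is created if it does not exist.
// Cloud Storage destination used as baseOutputDirectory (sketch only).
const baseOutputDirectory = { outputUriPrefix: "gs://my-bucket/nas-runs/exp-01" };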
GoogleCloudAiplatformV1beta1GcsDestinationResponse, GoogleCloudAiplatformV1beta1GcsDestinationResponseArgs
- OutputUriPrefix string - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- OutputUriPrefix string - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix string - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- output_uri_prefix str - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String - Google Cloud Storage URI to output directory. If the uri doesn't end with '/', a '/' will be automatically appended. The directory is created if it doesn't exist.
GoogleCloudAiplatformV1beta1MachineSpec, GoogleCloudAiplatformV1beta1MachineSpecArgs
- AcceleratorCount int - The number of accelerators to attach to the machine.
- AcceleratorType Pulumi.GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType - Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- MachineType string - Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- TpuTopology string - Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- AcceleratorCount int - The number of accelerators to attach to the machine.
- AcceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType - Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- MachineType string - Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- TpuTopology string - Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount Integer - The number of accelerators to attach to the machine.
- acceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType - Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType String - Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology String - Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount number - The number of accelerators to attach to the machine.
- acceleratorType GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType - Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType string - Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology string - Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- accelerator_count int - The number of accelerators to attach to the machine.
- accelerator_type GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType - Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machine_type str - Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpu_topology str - Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
- acceleratorCount Number - The number of accelerators to attach to the machine.
- acceleratorType "ACCELERATOR_TYPE_UNSPECIFIED" | "NVIDIA_TESLA_K80" | "NVIDIA_TESLA_P100" | "NVIDIA_TESLA_V100" | "NVIDIA_TESLA_P4" | "NVIDIA_TESLA_T4" | "NVIDIA_TESLA_A100" | "NVIDIA_A100_80GB" | "NVIDIA_L4" | "NVIDIA_H100_80GB" | "TPU_V2" | "TPU_V3" | "TPU_V4_POD" | "TPU_V5_LITEPOD" - Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- machineType String - Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- tpuTopology String - Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType, GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeArgs
- AcceleratorTypeUnspecified - ACCELERATOR_TYPE_UNSPECIFIED. Unspecified accelerator type, which means no accelerator.
- NvidiaTeslaK80 - NVIDIA_TESLA_K80. Nvidia Tesla K80 GPU.
- NvidiaTeslaP100 - NVIDIA_TESLA_P100. Nvidia Tesla P100 GPU.
- NvidiaTeslaV100 - NVIDIA_TESLA_V100. Nvidia Tesla V100 GPU.
- NvidiaTeslaP4 - NVIDIA_TESLA_P4. Nvidia Tesla P4 GPU.
- NvidiaTeslaT4 - NVIDIA_TESLA_T4. Nvidia Tesla T4 GPU.
- NvidiaTeslaA100 - NVIDIA_TESLA_A100. Nvidia Tesla A100 GPU.
- NvidiaA10080gb - NVIDIA_A100_80GB. Nvidia A100 80GB GPU.
- NvidiaL4 - NVIDIA_L4. Nvidia L4 GPU.
- NvidiaH10080gb - NVIDIA_H100_80GB. Nvidia H100 80GB GPU.
- TpuV2 - TPU_V2. TPU v2.
- TpuV3 - TPU_V3. TPU v3.
- TpuV4Pod - TPU_V4_POD. TPU v4.
- TpuV5Litepod - TPU_V5_LITEPOD. TPU v5.
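The accelerator values above are accepted either through the enum types or as plain string literals. As a rough, hedged sketch (not taken from this page; the machine type, accelerator choice, and variable name are illustrative placeholders), a machine spec for a GPU-backed trial worker could be written in TypeScript like this:

// Sketch of a MachineSpec input for a trial worker pool.
// The accelerator string literal must be one of the enum values listed above.
const gpuMachineSpec = {
    machineType: "n1-standard-8",       // Compute Engine machine type for each worker
    acceleratorType: "NVIDIA_TESLA_T4", // accelerator enum value as a string literal
    acceleratorCount: 2,                // attached per machine, as per accelerator_count
};

The same object shape is what the workerPoolSpecs entries in the search and train trial job specs expect for their machineSpec field.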
GoogleCloudAiplatformV1beta1MachineSpecResponse, GoogleCloudAiplatformV1beta1MachineSpecResponseArgs
- AcceleratorCount int - The number of accelerators to attach to the machine.
- AcceleratorType string - Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- MachineType string - Immutable. The type of the machine. See the list of machine types supported for prediction. See the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- TpuTopology string - Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
GoogleCloudAiplatformV1beta1MeasurementMetricResponse, GoogleCloudAiplatformV1beta1MeasurementMetricResponseArgs
GoogleCloudAiplatformV1beta1MeasurementResponse, GoogleCloudAiplatformV1beta1MeasurementResponseArgs
- ElapsedDuration string - Time that the Trial has been running at the point of this Measurement.
- Metrics List<GoogleCloudAiplatformV1beta1MeasurementMetricResponse> - A list of metrics obtained by evaluating the objective functions using suggested Parameter values.
- StepCount string - The number of steps the machine learning model has been trained for. Must be non-negative.
GoogleCloudAiplatformV1beta1NasJobOutputMultiTrialJobOutputResponse, GoogleCloudAiplatformV1beta1NasJobOutputMultiTrialJobOutputResponseArgs
- SearchTrials List<GoogleCloudAiplatformV1beta1NasTrialResponse> - List of NasTrials that were started as part of search stage.
- TrainTrials List<GoogleCloudAiplatformV1beta1NasTrialResponse> - List of NasTrials that were started as part of train stage.
GoogleCloudAiplatformV1beta1NasJobOutputResponse, GoogleCloudAiplatformV1beta1NasJobOutputResponseArgs
- MultiTrialJobOutput GoogleCloudAiplatformV1beta1NasJobOutputMultiTrialJobOutputResponse - The output of this multi-trial Neural Architecture Search (NAS) job.
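Once trials have run, this output type is populated on the created resource. The following TypeScript line is only a sketch: it assumes the resource exposes the underlying nas_job_output API field as an output property named nasJobOutput (not documented on this page), and that nasJob refers to a NasJob resource created as in the constructor example.

// Hedged sketch: count the search-stage trials reported back on the job's output.
export const searchTrialCount = nasJob.nasJobOutput.apply(
    output => output.multiTrialJobOutput?.searchTrials?.length ?? 0);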
GoogleCloudAiplatformV1beta1NasJobSpec, GoogleCloudAiplatformV1beta1NasJobSpecArgs
- MultiTrialAlgorithmSpec GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpec - The spec of multi-trial algorithms.
- ResumeNasJobId string - The ID of the existing NasJob in the same Project and Location which will be used to resume search. search_space_spec and nas_algorithm_spec are obtained from the previous NasJob, so they should not be provided again for this NasJob.
- SearchSpaceSpec string - It defines the search space for Neural Architecture Search (NAS).
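A NasJobSpec is filled in one of two ways: a fresh search provides searchSpaceSpec together with multiTrialAlgorithmSpec, while a resumed search provides only resumeNasJobId. The TypeScript fragment below is a hedged sketch; the search-space string and job ID are placeholders, and the multi-trial spec is omitted here (see the sketch after the MultiTrialAlgorithmSpec listing below).

// Sketch: starting a fresh search (multiTrialAlgorithmSpec omitted for brevity).
const freshNasJobSpec = {
    searchSpaceSpec: "<serialized search space definition>", // placeholder
};

// Sketch: resuming an earlier search. The search space and algorithm are taken
// from the previous NasJob, so they must not be supplied again.
const resumedNasJobSpec = {
    resumeNasJobId: "<existing NasJob ID>", // placeholder
};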
GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpec, GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecArgs
- SearchTrialSpec GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpec - Spec for search trials.
- Metric GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpec - Metric specs for the NAS job. Validation for this field is done at multi_trial_algorithm_spec field.
- MultiTrialAlgorithm GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMultiTrialAlgorithm - The multi-trial Neural Architecture Search (NAS) algorithm type. Defaults to REINFORCEMENT_LEARNING.
- TrainTrialSpec GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpec - Spec for train trials. Top N [TrainTrialSpec.max_parallel_trial_count] search trials will be trained for every M [TrainTrialSpec.frequency] trials searched.
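Putting the pieces above together, a multi-trial algorithm spec pairs a metric with a search trial spec (and, optionally, an explicit algorithm and a train trial spec). The following TypeScript object is a hedged sketch; the metric ID, trial counts, and container image are placeholders rather than values from this page.

// Sketch of a GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpec input.
const multiTrialAlgorithmSpec = {
    multiTrialAlgorithm: "REINFORCEMENT_LEARNING", // also the default when omitted
    metric: {
        metricId: "top_1_accuracy", // placeholder; must match what the trials report
        goal: "MAXIMIZE",
    },
    searchTrialSpec: {
        maxTrialCount: 100,
        maxParallelTrialCount: 5,
        searchTrialJobSpec: {
            workerPoolSpecs: [{
                machineSpec: { machineType: "n1-standard-8" },
                containerSpec: { imageUri: "gcr.io/<project>/nas-search:latest" }, // placeholder image
            }],
        },
    },
};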
GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpec, GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecArgs
- Goal GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecGoal - The optimization goal of the metric.
- MetricId string - The ID of the metric. Must not contain whitespace.
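As a small hedged sketch (the metric ID is a placeholder, not a value from this page), the metric spec is just an ID plus an optimization direction:

// Sketch: tell the NAS controller to maximize the metric the trials report.
const nasMetric = { metricId: "top_1_accuracy", goal: "MAXIMIZE" };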
GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecGoal, GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecGoalArgs
- GoalTypeUnspecified - GOAL_TYPE_UNSPECIFIED. Goal Type will default to maximize.
- Maximize - MAXIMIZE. Maximize the goal metric.
- Minimize - MINIMIZE. Minimize the goal metric.
GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecResponse, GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecResponseArgs
GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMultiTrialAlgorithm, GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMultiTrialAlgorithmArgs
- MultiTrialAlgorithmUnspecified - MULTI_TRIAL_ALGORITHM_UNSPECIFIED. Defaults to REINFORCEMENT_LEARNING.
- ReinforcementLearning - REINFORCEMENT_LEARNING. The Reinforcement Learning Algorithm for Multi-trial Neural Architecture Search (NAS).
- GridSearch - GRID_SEARCH. The Grid Search Algorithm for Multi-trial Neural Architecture Search (NAS).
GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecResponse, GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecResponseArgs
- Metric GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecMetricSpecResponse - Metric specs for the NAS job. Validation for this field is done at multi_trial_algorithm_spec field.
- MultiTrialAlgorithm string - The multi-trial Neural Architecture Search (NAS) algorithm type. Defaults to REINFORCEMENT_LEARNING.
- SearchTrialSpec GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpecResponse - Spec for search trials.
- TrainTrialSpec GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpecResponse - Spec for train trials. Top N [TrainTrialSpec.max_parallel_trial_count] search trials will be trained for every M [TrainTrialSpec.frequency] trials searched.
GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpec, GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpecArgs
- MaxParallelTrialCount int - The maximum number of trials to run in parallel.
- MaxTrialCount int - The maximum number of Neural Architecture Search (NAS) trials to run.
- SearchTrialJobSpec GoogleCloudAiplatformV1beta1CustomJobSpec - The spec of a search trial job. The same spec applies to all search trials.
- MaxFailedTrialCount int - The number of failed trials that need to be seen before failing the NasJob. If set to 0, Vertex AI decides how many trials must fail before the whole job fails.
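As a hedged sketch (trial counts and the container image are placeholders), a search trial spec wires these limits to the custom job that each search trial runs:

// Sketch of a search trial spec; every search trial runs the same job spec.
const searchTrialSpec = {
    maxTrialCount: 2000,          // total NAS trials to run
    maxParallelTrialCount: 10,    // trials running at the same time
    maxFailedTrialCount: 50,      // 0 would let Vertex AI pick the failure threshold
    searchTrialJobSpec: {
        workerPoolSpecs: [{
            machineSpec: {
                machineType: "n1-standard-16",
                acceleratorType: "NVIDIA_TESLA_T4",
                acceleratorCount: 1,
            },
            containerSpec: { imageUri: "gcr.io/<project>/nas-search:latest" }, // placeholder
        }],
    },
};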
GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpecResponse, GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecSearchTrialSpecResponseArgs
- MaxFailedTrialCount int - The number of failed trials that need to be seen before failing the NasJob. If set to 0, Vertex AI decides how many trials must fail before the whole job fails.
- MaxParallelTrialCount int - The maximum number of trials to run in parallel.
- MaxTrialCount int - The maximum number of Neural Architecture Search (NAS) trials to run.
- SearchTrialJobSpec GoogleCloudAiplatformV1beta1CustomJobSpecResponse - The spec of a search trial job. The same spec applies to all search trials.
GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpec, GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpecArgs
- Frequency int - Frequency of search trials to start train stage. Top N [TrainTrialSpec.max_parallel_trial_count] search trials will be trained for every M [TrainTrialSpec.frequency] trials searched.
- MaxParallelTrialCount int - The maximum number of trials to run in parallel.
- TrainTrialJobSpec GoogleCloudAiplatformV1beta1CustomJobSpec - The spec of a train trial job. The same spec applies to all train trials.
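As a hedged sketch (the image and counts are placeholders, not values from this page), a train trial spec that retrains the best candidates periodically might look like this; with the values below, the top 5 search trials are trained for every 10 trials searched:

// Sketch of a train trial spec; every train trial runs the same job spec.
const trainTrialSpec = {
    frequency: 10,               // start a train stage every 10 searched trials
    maxParallelTrialCount: 5,    // top N search trials trained in that stage
    trainTrialJobSpec: {
        workerPoolSpecs: [{
            machineSpec: {
                machineType: "n1-standard-16",
                acceleratorType: "NVIDIA_TESLA_V100",
                acceleratorCount: 4,
            },
            containerSpec: { imageUri: "gcr.io/<project>/nas-train:latest" }, // placeholder
        }],
    },
};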
GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpecResponse, GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecTrainTrialSpecResponseArgs
- Frequency int - Frequency of search trials to start train stage. Top N [TrainTrialSpec.max_parallel_trial_count] search trials will be trained for every M [TrainTrialSpec.frequency] trials searched.
- MaxParallelTrialCount int - The maximum number of trials to run in parallel.
- TrainTrialJobSpec GoogleCloudAiplatformV1beta1CustomJobSpecResponse - The spec of a train trial job. The same spec applies to all train trials.
GoogleCloudAiplatformV1beta1NasJobSpecResponse, GoogleCloudAiplatformV1beta1NasJobSpecResponseArgs
- MultiTrialAlgorithmSpec GoogleCloudAiplatformV1beta1NasJobSpecMultiTrialAlgorithmSpecResponse - The spec of multi-trial algorithms.
- ResumeNasJobId string - The ID of the existing NasJob in the same Project and Location which will be used to resume search. search_space_spec and nas_algorithm_spec are obtained from the previous NasJob, so they should not be provided again for this NasJob.
- SearchSpaceSpec string - It defines the search space for Neural Architecture Search (NAS).
GoogleCloudAiplatformV1beta1NasTrialResponse, GoogleCloudAiplatformV1beta1NasTrialResponseArgs
- EndTime string - Time when the NasTrial's status changed to SUCCEEDED or INFEASIBLE.
- FinalMeasurement GoogleCloudAiplatformV1beta1MeasurementResponse - The final measurement containing the objective value.
- StartTime string - Time when the NasTrial was started.
- State string - The detailed state of the NasTrial.
GoogleCloudAiplatformV1beta1NfsMount, GoogleCloudAiplatformV1beta1NfsMountArgs
- MountPoint string - Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- Path string - Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- Server string - IP address of the NFS server.
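As a hedged sketch (the server address and export path are placeholders), one entry of a custom job's nfsMounts list combines these three fields like this; the share then appears to trial containers under /mnt/nfs/ followed by the mount point:

// Sketch of a single NfsMount entry for a trial job spec's nfsMounts list.
const datasetsMount = {
    server: "10.0.0.2",          // IP address of the NFS server (placeholder)
    path: "/exports/datasets",   // exported path; must start with '/'
    mountPoint: "datasets",      // mounted at /mnt/nfs/datasets inside the container
};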
GoogleCloudAiplatformV1beta1NfsMountResponse, GoogleCloudAiplatformV1beta1NfsMountResponseArgs
- MountPoint string - Destination mount path. The NFS will be mounted for the user under /mnt/nfs/
- Path string - Source path exported from NFS server. Has to start with '/', and combined with the ip address, it indicates the source mount path in the form of server:path
- Server string - IP address of the NFS server.
GoogleCloudAiplatformV1beta1PythonPackageSpec, GoogleCloudAiplatformV1beta1PythonPackageSpecArgs
- ExecutorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- PackageUris List<string>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- PythonModule string
- The Python module name to run after installing the packages.
- Args List<string>
- Command line arguments to be passed to the Python task.
- Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVar>
- Environment variables to be passed to the python module. Maximum limit is 100.
- ExecutorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- PackageUris []string
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- PythonModule string
- The Python module name to run after installing the packages.
- Args []string
- Command line arguments to be passed to the Python task.
- Env []GoogleCloudAiplatformV1beta1EnvVar
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri String
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris List<String>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule String
- The Python module name to run after installing the packages.
- args List<String>
- Command line arguments to be passed to the Python task.
- env List<GoogleCloudAiplatformV1beta1EnvVar>
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris string[]
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule string
- The Python module name to run after installing the packages.
- args string[]
- Command line arguments to be passed to the Python task.
- env GoogleCloudAiplatformV1beta1EnvVar[]
- Environment variables to be passed to the python module. Maximum limit is 100.
- executor_image_uri str
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- package_uris Sequence[str]
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- python_module str
- The Python module name to run after installing the packages.
- args Sequence[str]
- Command line arguments to be passed to the Python task.
- env Sequence[GoogleCloudAiplatformV1beta1EnvVar]
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri String
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris List<String>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule String
- The Python module name to run after installing the packages.
- args List<String>
- Command line arguments to be passed to the Python task.
- env List<Property Map>
- Environment variables to be passed to the python module. Maximum limit is 100.
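As a rough illustration, a TypeScript-shaped PythonPackageSpec value; the bucket, package file, module name, and executor image below are placeholders, and the actual image must be chosen from the pre-built training container list.
// Sketch only: all URIs and names below are placeholders.
const pythonPackageSpec = {
    executorImageUri: "us-docker.pkg.dev/vertex-ai/training/<prebuilt-image>", // pick a real image from the pre-built list
    packageUris: ["gs://my-bucket/trainer-0.1.tar.gz"],   // up to 100 Cloud Storage package URIs
    pythonModule: "trainer.task",                          // module to run after the packages are installed
    args: ["--epochs", "10"],                              // command line arguments for the Python task
    env: [{ name: "EXPERIMENT", value: "nas-demo" }],      // at most 100 environment variables
};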
GoogleCloudAiplatformV1beta1PythonPackageSpecResponse, GoogleCloudAiplatformV1beta1PythonPackageSpecResponseArgs
- Args List<string>
- Command line arguments to be passed to the Python task.
- Env List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EnvVarResponse>
- Environment variables to be passed to the python module. Maximum limit is 100.
- ExecutorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- PackageUris List<string>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- PythonModule string
- The Python module name to run after installing the packages.
- Args []string
- Command line arguments to be passed to the Python task.
- Env []GoogleCloudAiplatformV1beta1EnvVarResponse
- Environment variables to be passed to the python module. Maximum limit is 100.
- ExecutorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- PackageUris []string
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- PythonModule string
- The Python module name to run after installing the packages.
- args List<String>
- Command line arguments to be passed to the Python task.
- env List<GoogleCloudAiplatformV1beta1EnvVarResponse>
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri String
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris List<String>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule String
- The Python module name to run after installing the packages.
- args string[]
- Command line arguments to be passed to the Python task.
- env GoogleCloudAiplatformV1beta1EnvVarResponse[]
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri string
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris string[]
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule string
- The Python module name to run after installing the packages.
- args Sequence[str]
- Command line arguments to be passed to the Python task.
- env Sequence[GoogleCloudAiplatformV1beta1EnvVarResponse]
- Environment variables to be passed to the python module. Maximum limit is 100.
- executor_image_uri str
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- package_uris Sequence[str]
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- python_module str
- The Python module name to run after installing the packages.
- args List<String>
- Command line arguments to be passed to the Python task.
- env List<Property Map>
- Environment variables to be passed to the python module. Maximum limit is 100.
- executorImageUri String
- The URI of a container image in Artifact Registry that will run the provided Python package. Vertex AI provides a wide range of executor images with pre-installed packages to meet users' various use cases. See the list of pre-built containers for training. You must use an image from this list.
- packageUris List<String>
- The Google Cloud Storage location of the Python package files which are the training program and its dependent packages. The maximum number of package URIs is 100.
- pythonModule String
- The Python module name to run after installing the packages.
GoogleCloudAiplatformV1beta1Scheduling, GoogleCloudAiplatformV1beta1SchedulingArgs
- DisableRetries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- RestartJobOnWorkerRestart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- Timeout string
- The maximum job running time. The default is 7 days.
- DisableRetries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- RestartJobOnWorkerRestart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- Timeout string
- The maximum job running time. The default is 7 days.
- disableRetries Boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart Boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout String
- The maximum job running time. The default is 7 days.
- disableRetries boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout string
- The maximum job running time. The default is 7 days.
- disable_retries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restart_job_on_worker_restart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout str
- The maximum job running time. The default is 7 days.
- disableRetries Boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart Boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout String
- The maximum job running time. The default is 7 days.
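A short TypeScript-shaped sketch of a Scheduling value; the values are illustrative only.
// Sketch only: values are illustrative.
const scheduling = {
    timeout: "86400s",                 // maximum running time (1 day here; the default is 7 days)
    restartJobOnWorkerRestart: false,  // do not restart the whole CustomJob when a worker restarts
    disableRetries: true,              // if true, also overrides restartJobOnWorkerRestart to false
};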
GoogleCloudAiplatformV1beta1SchedulingResponse, GoogleCloudAiplatformV1beta1SchedulingResponseArgs
- DisableRetries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- RestartJobOnWorkerRestart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- Timeout string
- The maximum job running time. The default is 7 days.
- DisableRetries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- RestartJobOnWorkerRestart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- Timeout string
- The maximum job running time. The default is 7 days.
- disableRetries Boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart Boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout String
- The maximum job running time. The default is 7 days.
- disableRetries boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout string
- The maximum job running time. The default is 7 days.
- disable_retries bool
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restart_job_on_worker_restart bool
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout str
- The maximum job running time. The default is 7 days.
- disableRetries Boolean
- Optional. Indicates if the job should retry for internal errors after the job starts running. If true, overrides Scheduling.restart_job_on_worker_restart to false.
- restartJobOnWorkerRestart Boolean
- Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.
- timeout String
- The maximum job running time. The default is 7 days.
GoogleCloudAiplatformV1beta1WorkerPoolSpec, GoogleCloudAiplatformV1beta1WorkerPoolSpecArgs
- ContainerSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ContainerSpec
- The custom container task.
- DiskSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpec
- Disk spec.
- MachineSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpec
- Optional. Immutable. The specification of a single machine.
- NfsMounts List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NfsMount>
- Optional. List of NFS mount spec.
- PythonPackageSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PythonPackageSpec
- The Python packaged task.
- ReplicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- ContainerSpec GoogleCloudAiplatformV1beta1ContainerSpec
- The custom container task.
- DiskSpec GoogleCloudAiplatformV1beta1DiskSpec
- Disk spec.
- MachineSpec GoogleCloudAiplatformV1beta1MachineSpec
- Optional. Immutable. The specification of a single machine.
- NfsMounts []GoogleCloudAiplatformV1beta1NfsMount
- Optional. List of NFS mount spec.
- PythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpec
- The Python packaged task.
- ReplicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec GoogleCloudAiplatformV1beta1ContainerSpec
- The custom container task.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpec
- Disk spec.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpec
- Optional. Immutable. The specification of a single machine.
- nfsMounts List<GoogleCloudAiplatformV1beta1NfsMount>
- Optional. List of NFS mount spec.
- pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpec
- The Python packaged task.
- replicaCount String
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec GoogleCloudAiplatformV1beta1ContainerSpec
- The custom container task.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpec
- Disk spec.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpec
- Optional. Immutable. The specification of a single machine.
- nfsMounts GoogleCloudAiplatformV1beta1NfsMount[]
- Optional. List of NFS mount spec.
- pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpec
- The Python packaged task.
- replicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- container_spec GoogleCloudAiplatformV1beta1ContainerSpec
- The custom container task.
- disk_spec GoogleCloudAiplatformV1beta1DiskSpec
- Disk spec.
- machine_spec GoogleCloudAiplatformV1beta1MachineSpec
- Optional. Immutable. The specification of a single machine.
- nfs_mounts Sequence[GoogleCloudAiplatformV1beta1NfsMount]
- Optional. List of NFS mount spec.
- python_package_spec GoogleCloudAiplatformV1beta1PythonPackageSpec
- The Python packaged task.
- replica_count str
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec Property Map
- The custom container task.
- diskSpec Property Map
- Disk spec.
- machineSpec Property Map
- Optional. Immutable. The specification of a single machine.
- nfsMounts List<Property Map>
- Optional. List of NFS mount spec.
- pythonPackageSpec Property Map
- The Python packaged task.
- replicaCount String
- Optional. The number of worker replicas to use for this worker pool.
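Tying the pieces together, a hedged TypeScript-shaped WorkerPoolSpec sketch; the machine type, disk settings, and container image are assumptions, and the nested ContainerSpec, DiskSpec, and MachineSpec fields follow the types documented elsewhere on this page.
// Sketch only: machine type, disk settings, and image are placeholders.
const workerPoolSpec = {
    machineSpec: { machineType: "n1-standard-8" },              // single-machine specification (assumed machine type)
    diskSpec: { bootDiskType: "pd-ssd", bootDiskSizeGb: 100 },  // boot disk settings (assumed values)
    replicaCount: "1",                                          // number of replicas, given as a string
    containerSpec: {                                            // custom container task...
        imageUri: "gcr.io/my-project/nas-trainer:latest",
        args: ["--search-space", "example"],
    },
    // ...or a Python packaged task instead, via pythonPackageSpec (see the earlier sketch); use one of the two.
    nfsMounts: [nfsMount],                                      // optional NFS mounts, e.g. the NfsMount sketch above
};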
GoogleCloudAiplatformV1beta1WorkerPoolSpecResponse, GoogleCloudAiplatformV1beta1WorkerPoolSpecResponseArgs
- ContainerSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ContainerSpecResponse
- The custom container task.
- DiskSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpecResponse
- Disk spec.
- MachineSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecResponse
- Optional. Immutable. The specification of a single machine.
- NfsMounts List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1NfsMountResponse>
- Optional. List of NFS mount spec.
- PythonPackageSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
- The Python packaged task.
- ReplicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- ContainerSpec GoogleCloudAiplatformV1beta1ContainerSpecResponse
- The custom container task.
- DiskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Disk spec.
- MachineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Optional. Immutable. The specification of a single machine.
- NfsMounts []GoogleCloudAiplatformV1beta1NfsMountResponse
- Optional. List of NFS mount spec.
- PythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
- The Python packaged task.
- ReplicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec GoogleCloudAiplatformV1beta1ContainerSpecResponse
- The custom container task.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Disk spec.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Optional. Immutable. The specification of a single machine.
- nfsMounts List<GoogleCloudAiplatformV1beta1NfsMountResponse>
- Optional. List of NFS mount spec.
- pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
- The Python packaged task.
- replicaCount String
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec GoogleCloudAiplatformV1beta1ContainerSpecResponse
- The custom container task.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Disk spec.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Optional. Immutable. The specification of a single machine.
- nfsMounts GoogleCloudAiplatformV1beta1NfsMountResponse[]
- Optional. List of NFS mount spec.
- pythonPackageSpec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
- The Python packaged task.
- replicaCount string
- Optional. The number of worker replicas to use for this worker pool.
- container_spec GoogleCloudAiplatformV1beta1ContainerSpecResponse
- The custom container task.
- disk_spec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Disk spec.
- machine_spec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Optional. Immutable. The specification of a single machine.
- nfs_mounts Sequence[GoogleCloudAiplatformV1beta1NfsMountResponse]
- Optional. List of NFS mount spec.
- python_package_spec GoogleCloudAiplatformV1beta1PythonPackageSpecResponse
- The Python packaged task.
- replica_count str
- Optional. The number of worker replicas to use for this worker pool.
- containerSpec Property Map
- The custom container task.
- diskSpec Property Map
- Disk spec.
- machineSpec Property Map
- Optional. Immutable. The specification of a single machine.
- nfsMounts List<Property Map>
- Optional. List of NFS mount spec.
- pythonPackageSpec Property Map
- The Python packaged task.
- replicaCount String
- Optional. The number of worker replicas to use for this worker pool.
GoogleRpcStatusResponse, GoogleRpcStatusResponseArgs
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details List<ImmutableDictionary<string, string>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details []map[string]string
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Integer
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String,String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code number
- The status code, which should be an enum value of google.rpc.Code.
- details {[key: string]: string}[]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code int
- The status code, which should be an enum value of google.rpc.Code.
- details Sequence[Mapping[str, str]]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message str
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Number
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
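GoogleRpcStatusResponse describes an error as a code, a message, and a list of detail messages. A small TypeScript helper that formats such a value for logging; the field names come from the list above, and the helper itself is purely illustrative.
// Sketch only: formats a status object using the code/message/details fields listed above.
function formatRpcStatus(status: { code: number; message: string; details: Record<string, string>[] }): string {
    return status.code === 0
        ? "OK"
        : `error ${status.code}: ${status.message} (${status.details.length} detail message(s))`;
}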
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0