Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.notebooks/v1.Execution
Creates a new Execution in a given project and location. Auto-naming is currently not supported for this resource.
Create Execution Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Execution(name: string, args: ExecutionArgs, opts?: CustomResourceOptions);
@overload
def Execution(resource_name: str,
              args: ExecutionArgs,
              opts: Optional[ResourceOptions] = None)
@overload
def Execution(resource_name: str,
              opts: Optional[ResourceOptions] = None,
              execution_id: Optional[str] = None,
              description: Optional[str] = None,
              execution_template: Optional[ExecutionTemplateArgs] = None,
              location: Optional[str] = None,
              output_notebook_file: Optional[str] = None,
              project: Optional[str] = None)
func NewExecution(ctx *Context, name string, args ExecutionArgs, opts ...ResourceOption) (*Execution, error)
public Execution(string name, ExecutionArgs args, CustomResourceOptions? opts = null)
public Execution(String name, ExecutionArgs args)
public Execution(String name, ExecutionArgs args, CustomResourceOptions options)
type: google-native:notebooks/v1:Execution
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args ExecutionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args ExecutionArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args ExecutionArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args ExecutionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args ExecutionArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var exampleexecutionResourceResourceFromNotebooksv1 = new GoogleNative.Notebooks.V1.Execution("exampleexecutionResourceResourceFromNotebooksv1", new()
{
ExecutionId = "string",
Description = "string",
ExecutionTemplate = new GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateArgs
{
Labels =
{
{ "string", "string" },
},
OutputNotebookFolder = "string",
InputNotebookFile = "string",
JobType = GoogleNative.Notebooks.V1.ExecutionTemplateJobType.JobTypeUnspecified,
KernelSpec = "string",
AcceleratorConfig = new GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigArgs
{
CoreCount = "string",
Type = GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType.SchedulerAcceleratorTypeUnspecified,
},
MasterType = "string",
DataprocParameters = new GoogleNative.Notebooks.V1.Inputs.DataprocParametersArgs
{
Cluster = "string",
},
Parameters = "string",
ParamsYamlFile = "string",
ContainerImageUri = "string",
ServiceAccount = "string",
Tensorboard = "string",
VertexAiParameters = new GoogleNative.Notebooks.V1.Inputs.VertexAIParametersArgs
{
Env =
{
{ "string", "string" },
},
Network = "string",
},
},
Location = "string",
OutputNotebookFile = "string",
Project = "string",
});
example, err := notebooks.NewExecution(ctx, "exampleexecutionResourceResourceFromNotebooksv1", &notebooks.ExecutionArgs{
ExecutionId: pulumi.String("string"),
Description: pulumi.String("string"),
ExecutionTemplate: &notebooks.ExecutionTemplateArgs{
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
OutputNotebookFolder: pulumi.String("string"),
InputNotebookFile: pulumi.String("string"),
JobType: notebooks.ExecutionTemplateJobTypeJobTypeUnspecified,
KernelSpec: pulumi.String("string"),
AcceleratorConfig: &notebooks.SchedulerAcceleratorConfigArgs{
CoreCount: pulumi.String("string"),
Type: notebooks.SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified,
},
MasterType: pulumi.String("string"),
DataprocParameters: &notebooks.DataprocParametersArgs{
Cluster: pulumi.String("string"),
},
Parameters: pulumi.String("string"),
ParamsYamlFile: pulumi.String("string"),
ContainerImageUri: pulumi.String("string"),
ServiceAccount: pulumi.String("string"),
Tensorboard: pulumi.String("string"),
VertexAiParameters: &notebooks.VertexAIParametersArgs{
Env: pulumi.StringMap{
"string": pulumi.String("string"),
},
Network: pulumi.String("string"),
},
},
Location: pulumi.String("string"),
OutputNotebookFile: pulumi.String("string"),
Project: pulumi.String("string"),
})
var exampleexecutionResourceResourceFromNotebooksv1 = new Execution("exampleexecutionResourceResourceFromNotebooksv1", ExecutionArgs.builder()
.executionId("string")
.description("string")
.executionTemplate(ExecutionTemplateArgs.builder()
.labels(Map.of("string", "string"))
.outputNotebookFolder("string")
.inputNotebookFile("string")
.jobType("JOB_TYPE_UNSPECIFIED")
.kernelSpec("string")
.acceleratorConfig(SchedulerAcceleratorConfigArgs.builder()
.coreCount("string")
.type("SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED")
.build())
.masterType("string")
.dataprocParameters(DataprocParametersArgs.builder()
.cluster("string")
.build())
.parameters("string")
.paramsYamlFile("string")
.containerImageUri("string")
.serviceAccount("string")
.tensorboard("string")
.vertexAiParameters(VertexAIParametersArgs.builder()
.env(Map.of("string", "string"))
.network("string")
.build())
.build())
.location("string")
.outputNotebookFile("string")
.project("string")
.build());
exampleexecution_resource_resource_from_notebooksv1 = google_native.notebooks.v1.Execution("exampleexecutionResourceResourceFromNotebooksv1",
execution_id="string",
description="string",
execution_template={
"labels": {
"string": "string",
},
"output_notebook_folder": "string",
"input_notebook_file": "string",
"job_type": google_native.notebooks.v1.ExecutionTemplateJobType.JOB_TYPE_UNSPECIFIED,
"kernel_spec": "string",
"accelerator_config": {
"core_count": "string",
"type": google_native.notebooks.v1.SchedulerAcceleratorConfigType.SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED,
},
"master_type": "string",
"dataproc_parameters": {
"cluster": "string",
},
"parameters": "string",
"params_yaml_file": "string",
"container_image_uri": "string",
"service_account": "string",
"tensorboard": "string",
"vertex_ai_parameters": {
"env": {
"string": "string",
},
"network": "string",
},
},
location="string",
output_notebook_file="string",
project="string")
const exampleexecutionResourceResourceFromNotebooksv1 = new google_native.notebooks.v1.Execution("exampleexecutionResourceResourceFromNotebooksv1", {
executionId: "string",
description: "string",
executionTemplate: {
labels: {
string: "string",
},
outputNotebookFolder: "string",
inputNotebookFile: "string",
jobType: google_native.notebooks.v1.ExecutionTemplateJobType.JobTypeUnspecified,
kernelSpec: "string",
acceleratorConfig: {
coreCount: "string",
type: google_native.notebooks.v1.SchedulerAcceleratorConfigType.SchedulerAcceleratorTypeUnspecified,
},
masterType: "string",
dataprocParameters: {
cluster: "string",
},
parameters: "string",
paramsYamlFile: "string",
containerImageUri: "string",
serviceAccount: "string",
tensorboard: "string",
vertexAiParameters: {
env: {
string: "string",
},
network: "string",
},
},
location: "string",
outputNotebookFile: "string",
project: "string",
});
type: google-native:notebooks/v1:Execution
properties:
description: string
executionId: string
executionTemplate:
acceleratorConfig:
coreCount: string
type: SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED
containerImageUri: string
dataprocParameters:
cluster: string
inputNotebookFile: string
jobType: JOB_TYPE_UNSPECIFIED
kernelSpec: string
labels:
string: string
masterType: string
outputNotebookFolder: string
parameters: string
paramsYamlFile: string
serviceAccount: string
tensorboard: string
vertexAiParameters:
env:
string: string
network: string
location: string
outputNotebookFile: string
project: string
Execution Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
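For illustration, the equivalence of the two forms can be sketched with a plain dataclass standing in for the provider's argument class (the stand-in below is ours; the real class is `pulumi_google_native.notebooks.v1.ExecutionTemplateArgs` and accepts the same snake_case keyword arguments):

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Stand-in for ExecutionTemplateArgs, for illustration only; no Pulumi
# engine is assumed here.
@dataclass
class ExecutionTemplateArgs:
    input_notebook_file: str
    output_notebook_folder: str
    kernel_spec: Optional[str] = None

# Argument-class form.
as_args = ExecutionTemplateArgs(
    input_notebook_file="gs://my-bucket/notebooks/train.ipynb",
    output_notebook_folder="gs://my-bucket/results",
)

# Dictionary-literal form: the same keys, accepted anywhere the class is.
as_dict = {
    "input_notebook_file": "gs://my-bucket/notebooks/train.ipynb",
    "output_notebook_folder": "gs://my-bucket/results",
    "kernel_spec": None,
}

print(asdict(as_args) == as_dict)  # True
```

Either form can be passed as the `execution_template` input; the dictionary form trades static type checking for brevity.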
The Execution resource accepts the following input properties:
- ExecutionId string - Required. User-defined unique ID of this execution.
- Description string - A brief description of this execution.
- ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplate - execute metadata including name, hardware spec, region, labels, etc.
- Location string
- OutputNotebookFile string - Output notebook file generated by this execution
- Project string
- ExecutionId string - Required. User-defined unique ID of this execution.
- Description string - A brief description of this execution.
- ExecutionTemplate ExecutionTemplateArgs - execute metadata including name, hardware spec, region, labels, etc.
- Location string
- OutputNotebookFile string - Output notebook file generated by this execution
- Project string
- executionId String - Required. User-defined unique ID of this execution.
- description String - A brief description of this execution.
- executionTemplate ExecutionTemplate - execute metadata including name, hardware spec, region, labels, etc.
- location String
- outputNotebookFile String - Output notebook file generated by this execution
- project String
- executionId string - Required. User-defined unique ID of this execution.
- description string - A brief description of this execution.
- executionTemplate ExecutionTemplate - execute metadata including name, hardware spec, region, labels, etc.
- location string
- outputNotebookFile string - Output notebook file generated by this execution
- project string
- execution_id str - Required. User-defined unique ID of this execution.
- description str - A brief description of this execution.
- execution_template ExecutionTemplateArgs - execute metadata including name, hardware spec, region, labels, etc.
- location str
- output_notebook_file str - Output notebook file generated by this execution
- project str
- executionId String - Required. User-defined unique ID of this execution.
- description String - A brief description of this execution.
- executionTemplate Property Map - execute metadata including name, hardware spec, region, labels, etc.
- location String
- outputNotebookFile String - Output notebook file generated by this execution
- project String
Outputs
All input properties are implicitly available as output properties. Additionally, the Execution resource produces the following output properties:
- CreateTime string - Time the Execution was instantiated.
- DisplayName string - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- Id string - The provider-assigned unique ID for this managed resource.
- JobUri string - The URI of the external job used to execute the notebook.
- Name string - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- State string - State of the underlying AI Platform job.
- UpdateTime string - Time the Execution was last updated.
- CreateTime string - Time the Execution was instantiated.
- DisplayName string - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- Id string - The provider-assigned unique ID for this managed resource.
- JobUri string - The URI of the external job used to execute the notebook.
- Name string - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- State string - State of the underlying AI Platform job.
- UpdateTime string - Time the Execution was last updated.
- createTime String - Time the Execution was instantiated.
- displayName String - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- id String - The provider-assigned unique ID for this managed resource.
- jobUri String - The URI of the external job used to execute the notebook.
- name String - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- state String - State of the underlying AI Platform job.
- updateTime String - Time the Execution was last updated.
- createTime string - Time the Execution was instantiated.
- displayName string - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- id string - The provider-assigned unique ID for this managed resource.
- jobUri string - The URI of the external job used to execute the notebook.
- name string - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- state string - State of the underlying AI Platform job.
- updateTime string - Time the Execution was last updated.
- create_time str - Time the Execution was instantiated.
- display_name str - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- id str - The provider-assigned unique ID for this managed resource.
- job_uri str - The URI of the external job used to execute the notebook.
- name str - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- state str - State of the underlying AI Platform job.
- update_time str - Time the Execution was last updated.
- createTime String - Time the Execution was instantiated.
- displayName String - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- id String - The provider-assigned unique ID for this managed resource.
- jobUri String - The URI of the external job used to execute the notebook.
- name String - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- state String - State of the underlying AI Platform job.
- updateTime String - Time the Execution was last updated.
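The `name` output follows a fixed format; a minimal sketch of assembling it (the helper name below is ours, not part of any SDK):

```python
def execution_resource_name(project_id: str, location: str, execution_id: str) -> str:
    # Mirrors the documented format:
    # projects/{project_id}/locations/{location}/executions/{execution_id}
    return f"projects/{project_id}/locations/{location}/executions/{execution_id}"

print(execution_resource_name("my-project", "us-central1", "nightly-run-01"))
# → projects/my-project/locations/us-central1/executions/nightly-run-01
```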
Supporting Types
DataprocParameters, DataprocParametersArgs
- Cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- Cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster str
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
DataprocParametersResponse, DataprocParametersResponseArgs
- Cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- Cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster str
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
ExecutionTemplate, ExecutionTemplateArgs
- ScaleTier Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateScaleTier - Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; as of right now only CUSTOM is supported.
- AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfig - Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParameters - Parameters used in Dataproc JobType executions.
- InputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateJobType - The type of Job to be used on this execution.
- KernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels Dictionary<string, string> - Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string - Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ServiceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParameters - Parameters used in Vertex AI JobType executions.
- ScaleTier ExecutionTemplateScaleTier - Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; as of right now only CUSTOM is supported.
- AcceleratorConfig SchedulerAcceleratorConfig - Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters DataprocParameters - Parameters used in Dataproc JobType executions.
- InputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType ExecutionTemplateJobType - The type of Job to be used on this execution.
- KernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels map[string]string - Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string - Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ServiceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters VertexAIParameters - Parameters used in Vertex AI JobType executions.
- scaleTier ExecutionTemplateScaleTier - Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; as of right now only CUSTOM is supported.
- acceleratorConfig SchedulerAcceleratorConfig - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParameters - Parameters used in Dataproc JobType executions.
- inputNotebookFile String - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType ExecutionTemplateJobType - The type of Job to be used on this execution.
- kernelSpec String - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String,String> - Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount String - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParameters - Parameters used in Vertex AI JobType executions.
- scaleTier ExecutionTemplateScaleTier - Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; as of right now only CUSTOM is supported.
- acceleratorConfig SchedulerAcceleratorConfig - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParameters - Parameters used in Dataproc JobType executions.
- inputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType ExecutionTemplateJobType - The type of Job to be used on this execution.
- kernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string} - Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder string - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters string - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html on how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParameters - Parameters used in Vertex AI JobType executions.
- scale_tier ExecutionTemplateScaleTier - Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- accelerator_config SchedulerAcceleratorConfig - Configuration (count and accelerator type) for hardware running notebook execution.
- container_image_uri str - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_parameters DataprocParameters - Parameters used in Dataproc JobType executions.
- input_notebook_file str - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_type ExecutionTemplateJobType - The type of Job to be used on this execution.
- kernel_spec str - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Mapping[str, str] - Labels for execution. If the execution is scheduled, it includes the label 'nbs-scheduled'; otherwise it is an immediate execution and includes the label 'nbs-immediate'. Use these labels to efficiently index between types of executions.
- master_type str - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. The following Compute Engine machine types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- output_notebook_folder str - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters str - Parameters used within the 'input_notebook_file' notebook.
- params_yaml_file str - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- service_account str - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard str - The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_parameters VertexAIParameters - Parameters used in Vertex AI JobType executions.
- scaleTier "SCALE_TIER_UNSPECIFIED" | "BASIC" | "STANDARD_1" | "PREMIUM_1" | "BASIC_GPU" | "BASIC_TPU" | "CUSTOM" - Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- acceleratorConfig Property Map - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters Property Map - Parameters used in Dataproc JobType executions.
- inputNotebookFile String - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType "JOB_TYPE_UNSPECIFIED" | "VERTEX_AI" | "DATAPROC" - The type of Job to be used on this execution.
- kernelSpec String - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String> - Labels for execution. If the execution is scheduled, it includes the label 'nbs-scheduled'; otherwise it is an immediate execution and includes the label 'nbs-immediate'. Use these labels to efficiently index between types of executions.
- masterType String - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. The following Compute Engine machine types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount String - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String - The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters Property Map - Parameters used in Vertex AI JobType executions.
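Putting the template fields above together, a minimal Execution in Pulumi YAML syntax might look like the following sketch. The project, bucket paths, service account, and execution ID are placeholder values, not part of the reference above:

```yaml
resources:
  sentimentRun:
    type: google-native:notebooks/v1:Execution
    properties:
      executionId: sentiment-run              # hypothetical execution ID
      location: us-central1
      project: my-project                      # placeholder project
      description: Nightly sentiment notebook run
      outputNotebookFile: gs://my-bucket/output/sentiment-out.ipynb
      executionTemplate:
        scaleTier: CUSTOM                      # only CUSTOM is currently supported
        masterType: n1-standard-4              # required when scaleTier is CUSTOM
        jobType: VERTEX_AI
        inputNotebookFile: gs://my-bucket/notebooks/sentiment.ipynb
        outputNotebookFolder: gs://my-bucket/output
        paramsYamlFile: gs://my-bucket/notebooks/params.yaml
        serviceAccount: runner@my-project.iam.gserviceaccount.com
```

The deploying identity needs iam.serviceAccounts.actAs on the service account named in the template, as noted above.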
ExecutionTemplateJobType, ExecutionTemplateJobTypeArgs
- JobTypeUnspecified - JOB_TYPE_UNSPECIFIED - No type specified.
- VertexAi - VERTEX_AI - Custom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc - DATAPROC - Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- ExecutionTemplateJobTypeJobTypeUnspecified - JOB_TYPE_UNSPECIFIED - No type specified.
- ExecutionTemplateJobTypeVertexAi - VERTEX_AI - Custom Job in aiplatform.googleapis.com. Default value for an execution.
- ExecutionTemplateJobTypeDataproc - DATAPROC - Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JobTypeUnspecified - JOB_TYPE_UNSPECIFIED - No type specified.
- VertexAi - VERTEX_AI - Custom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc - DATAPROC - Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JobTypeUnspecified - JOB_TYPE_UNSPECIFIED - No type specified.
- VertexAi - VERTEX_AI - Custom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc - DATAPROC - Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JOB_TYPE_UNSPECIFIED - No type specified.
- VERTEX_AI - Custom Job in aiplatform.googleapis.com. Default value for an execution.
- DATAPROC - Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- "JOB_TYPE_UNSPECIFIED" - No type specified.
- "VERTEX_AI" - Custom Job in aiplatform.googleapis.com. Default value for an execution.
- "DATAPROC" - Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
ExecutionTemplateResponse, ExecutionTemplateResponseArgs
- AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- InputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string - The type of Job to be used on this execution.
- KernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels Dictionary<string, string> - Labels for execution. If the execution is scheduled, it includes the label 'nbs-scheduled'; otherwise it is an immediate execution and includes the label 'nbs-immediate'. Use these labels to efficiently index between types of executions.
- MasterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. The following Compute Engine machine types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string - Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string - Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- ServiceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string - The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- AcceleratorConfig SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- InputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string - The type of Job to be used on this execution.
- KernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels map[string]string - Labels for execution. If the execution is scheduled, it includes the label 'nbs-scheduled'; otherwise it is an immediate execution and includes the label 'nbs-immediate'. Use these labels to efficiently index between types of executions.
- MasterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. The following Compute Engine machine types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string - Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string - Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- ServiceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string - The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- inputNotebookFile String - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String - The type of Job to be used on this execution.
- kernelSpec String - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String,String> - Labels for execution. If the execution is scheduled, it includes the label 'nbs-scheduled'; otherwise it is an immediate execution and includes the label 'nbs-immediate'. Use these labels to efficiently index between types of executions.
- masterType String - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. The following Compute Engine machine types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String - Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- serviceAccount String - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String - The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- inputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType string - The type of Job to be used on this execution.
- kernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string} - Labels for execution. If the execution is scheduled, it includes the label 'nbs-scheduled'; otherwise it is an immediate execution and includes the label 'nbs-immediate'. Use these labels to efficiently index between types of executions.
- masterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. The following Compute Engine machine types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder string - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters string - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier string - Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- serviceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard string - The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- accelerator_config SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- container_image_uri str - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_parameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- input_notebook_file str - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_type str - The type of Job to be used on this execution.
- kernel_spec str - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Mapping[str, str] - Labels for execution. If the execution is scheduled, it includes the label 'nbs-scheduled'; otherwise it is an immediate execution and includes the label 'nbs-immediate'. Use these labels to efficiently index between types of executions.
- master_type str - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. The following Compute Engine machine types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- output_notebook_folder str - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters str - Parameters used within the 'input_notebook_file' notebook.
- params_yaml_file str - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scale_tier str - Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- service_account str - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard str - The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_parameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- acceleratorConfig Property Map - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters Property Map - Parameters used in Dataproc JobType executions.
- inputNotebookFile String - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String - The type of Job to be used on this execution.
- kernelSpec String - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String> - Labels for execution. If the execution is scheduled, it includes the label 'nbs-scheduled'; otherwise it is an immediate execution and includes the label 'nbs-immediate'. Use these labels to efficiently index between types of executions.
- masterType String - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. The following Compute Engine machine types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String - Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- serviceAccount String - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String - The name of a Vertex AI Tensorboard resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters Property Map - Parameters used in Vertex AI JobType executions.
ExecutionTemplateScaleTier, ExecutionTemplateScaleTierArgs
- Scale
Tier Unspecified - SCALE_TIER_UNSPECIFIEDUnspecified Scale Tier.
- Basic
- BASICA single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1
- STANDARD_1Many workers and a few parameter servers.
- Premium1
- PREMIUM_1A large number of workers with many parameter servers.
- Basic
Gpu - BASIC_GPUA single worker instance with a K80 GPU.
- Basic
Tpu - BASIC_TPUA single worker instance with a Cloud TPU.
- Custom
- CUSTOMThe CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set
ExecutionTemplate.masterType
to specify the type of machine to use for your master node. This is the only required setting.
- Execution
Template Scale Tier Scale Tier Unspecified - SCALE_TIER_UNSPECIFIEDUnspecified Scale Tier.
- Execution
Template Scale Tier Basic - BASICA single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Execution
Template Scale Tier Standard1 - STANDARD_1Many workers and a few parameter servers.
- Execution
Template Scale Tier Premium1 - PREMIUM_1A large number of workers with many parameter servers.
- Execution
Template Scale Tier Basic Gpu - BASIC_GPUA single worker instance with a K80 GPU.
- Execution
Template Scale Tier Basic Tpu - BASIC_TPUA single worker instance with a Cloud TPU.
- Execution
Template Scale Tier Custom - CUSTOMThe CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set
ExecutionTemplate.masterType
to specify the type of machine to use for your master node. This is the only required setting.
- Scale
Tier Unspecified - SCALE_TIER_UNSPECIFIEDUnspecified Scale Tier.
- Basic
- BASICA single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1
- STANDARD_1Many workers and a few parameter servers.
- Premium1
- PREMIUM_1A large number of workers with many parameter servers.
- Basic
Gpu - BASIC_GPUA single worker instance with a K80 GPU.
- Basic
Tpu - BASIC_TPUA single worker instance with a Cloud TPU.
- Custom
- CUSTOMThe CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set
ExecutionTemplate.masterType
to specify the type of machine to use for your master node. This is the only required setting.
- ScaleTierUnspecified
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- Basic
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1
- STANDARD_1: Many workers and a few parameter servers.
- Premium1
- PREMIUM_1: A large number of workers with many parameter servers.
- BasicGpu
- BASIC_GPU: A single worker instance with a K80 GPU.
- BasicTpu
- BASIC_TPU: A single worker instance with a Cloud TPU.
- Custom
- CUSTOM: The CUSTOM tier is not a set tier; rather, it enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: you must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- SCALE_TIER_UNSPECIFIED
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- BASIC
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- STANDARD1
- STANDARD_1: Many workers and a few parameter servers.
- PREMIUM1
- PREMIUM_1: A large number of workers with many parameter servers.
- BASIC_GPU
- BASIC_GPU: A single worker instance with a K80 GPU.
- BASIC_TPU
- BASIC_TPU: A single worker instance with a Cloud TPU.
- CUSTOM
- CUSTOM: The CUSTOM tier is not a set tier; rather, it enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: you must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- "SCALE_TIER_UNSPECIFIED"
- SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- "BASIC"
- BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- "STANDARD_1"
- STANDARD_1: Many workers and a few parameter servers.
- "PREMIUM_1"
- PREMIUM_1: A large number of workers with many parameter servers.
- "BASIC_GPU"
- BASIC_GPU: A single worker instance with a K80 GPU.
- "BASIC_TPU"
- BASIC_TPU: A single worker instance with a Cloud TPU.
- "CUSTOM"
- CUSTOM: The CUSTOM tier is not a set tier; rather, it enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: you must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
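The CUSTOM-tier rule above (masterType is required, and is the only required setting) can be expressed as a small client-side pre-flight check. This is an illustrative sketch, not part of the pulumi-google-native SDK; the dict-shaped template and the `scaleTier`/`masterType` key names are assumptions mirroring the field names in this reference.

```python
# Illustrative pre-flight check (NOT an SDK API): enforce the documented rule
# that the CUSTOM scale tier requires ExecutionTemplate.masterType.
PRESET_TIERS = {
    "SCALE_TIER_UNSPECIFIED", "BASIC", "STANDARD_1",
    "PREMIUM_1", "BASIC_GPU", "BASIC_TPU",
}

def validate_scale_tier(template: dict) -> None:
    """Raise ValueError if the execution template's scale tier settings are inconsistent."""
    tier = template.get("scaleTier", "SCALE_TIER_UNSPECIFIED")
    if tier == "CUSTOM":
        # CUSTOM is not a preset tier: the master machine type must be supplied.
        if not template.get("masterType"):
            raise ValueError("CUSTOM scale tier requires masterType to be set")
    elif tier not in PRESET_TIERS:
        raise ValueError(f"unknown scale tier: {tier}")
```

Running the check early surfaces the error before the Execution resource is sent to the service.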
SchedulerAcceleratorConfig, SchedulerAcceleratorConfigArgs
- CoreCount string
- Count of cores of this accelerator.
- Type Pulumi.GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType
- Type of this accelerator.
- CoreCount string
- Count of cores of this accelerator.
- Type SchedulerAcceleratorConfigType
- Type of this accelerator.
- coreCount String
- Count of cores of this accelerator.
- type SchedulerAcceleratorConfigType
- Type of this accelerator.
- coreCount string
- Count of cores of this accelerator.
- type SchedulerAcceleratorConfigType
- Type of this accelerator.
- core_count str
- Count of cores of this accelerator.
- type SchedulerAcceleratorConfigType
- Type of this accelerator.
- coreCount String
- Count of cores of this accelerator.
- type "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED" | "NVIDIA_TESLA_K80" | "NVIDIA_TESLA_P100" | "NVIDIA_TESLA_V100" | "NVIDIA_TESLA_P4" | "NVIDIA_TESLA_T4" | "NVIDIA_TESLA_A100" | "TPU_V2" | "TPU_V3"
- Type of this accelerator.
SchedulerAcceleratorConfigResponse, SchedulerAcceleratorConfigResponseArgs
- core_count str
- Count of cores of this accelerator.
- type str
- Type of this accelerator.
SchedulerAcceleratorConfigType, SchedulerAcceleratorConfigTypeArgs
- SchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- SchedulerAcceleratorConfigTypeTpuV2
- TPU_V2: TPU v2.
- SchedulerAcceleratorConfigTypeTpuV3
- TPU_V3: TPU v3.
- SchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- SchedulerAcceleratorTypeUnspecified
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NVIDIA_TESLA_K80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NVIDIA_TESLA_P100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NVIDIA_TESLA_V100
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NVIDIA_TESLA_P4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NVIDIA_TESLA_T4
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NVIDIA_TESLA_A100
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TPU_V2
- TPU_V2: TPU v2.
- TPU_V3
- TPU_V3: TPU v3.
- "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED"
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- "NVIDIA_TESLA_K80"
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- "NVIDIA_TESLA_P100"
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- "NVIDIA_TESLA_V100"
- NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- "NVIDIA_TESLA_P4"
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- "NVIDIA_TESLA_T4"
- NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- "NVIDIA_TESLA_A100"
- NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- "TPU_V2"
- TPU_V2: TPU v2.
- "TPU_V3"
- TPU_V3: TPU v3.
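As a sanity check before submitting a SchedulerAcceleratorConfig, the type can be compared against the accelerator list above and the core count (a string in this API) parsed numerically. This is an illustrative helper with assumed dict-shaped input and key names; it is not part of the pulumi-google-native SDK.

```python
# Hypothetical client-side validator (NOT an SDK API) for a
# SchedulerAcceleratorConfig-shaped dict: {"type": ..., "coreCount": ...}.
ACCELERATOR_TYPES = {
    "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED",
    "NVIDIA_TESLA_K80", "NVIDIA_TESLA_P100", "NVIDIA_TESLA_V100",
    "NVIDIA_TESLA_P4", "NVIDIA_TESLA_T4", "NVIDIA_TESLA_A100",
    "TPU_V2", "TPU_V3",
}

def validate_accelerator(config: dict) -> None:
    """Raise ValueError for an unknown type or a non-numeric/non-positive coreCount."""
    accel_type = config.get("type", "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED")
    if accel_type not in ACCELERATOR_TYPES:
        raise ValueError(f"unknown accelerator type: {accel_type}")
    count = config.get("coreCount")
    # coreCount is a string field; require a positive integer when present.
    if count is not None and (not str(count).isdigit() or int(count) < 1):
        raise ValueError(f"coreCount must be a positive integer string, got {count!r}")
```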
VertexAIParameters, VertexAIParametersArgs
- Env Dictionary<string, string>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env map[string]string
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String,String>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env {[key: string]: string}
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Mapping[str, str]
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network str
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
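The two VertexAIParameters constraints (at most 100 unique environment variables, and a peered network of the form projects/{project}/global/networks/{network} with a numeric project) lend themselves to a simple pre-flight check. This is a sketch under those assumptions, not part of the pulumi-google-native SDK; the function name is hypothetical.

```python
import re
from typing import Optional

# Illustrative validator (NOT an SDK API) for VertexAIParameters-style inputs.
# The network must use a project number (e.g. 12345), not a project ID.
_NETWORK_RE = re.compile(r"^projects/\d+/global/networks/[^/]+$")

def validate_vertex_ai_parameters(env: dict, network: Optional[str] = None) -> None:
    """Raise ValueError if env exceeds 100 entries or network is malformed."""
    # A dict already guarantees unique keys, so only the count needs checking.
    if len(env) > 100:
        raise ValueError("at most 100 environment variables can be specified")
    if network is not None and not _NETWORK_RE.match(network):
        raise ValueError(
            f"network must match projects/{{project}}/global/networks/{{network}}: {network!r}"
        )
```

For example, the documented values pass: env of {"GCP_BUCKET": "gs://my-bucket/samples/"} with network "projects/12345/global/networks/myVPC".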
VertexAIParametersResponse, VertexAIParametersResponseArgs
- Env Dictionary<string, string>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env map[string]string
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String,String>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env {[key: string]: string}
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Mapping[str, str]
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network str
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String>
- Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered, for example projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0