Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.notebooks/v1.Schedule
Creates a new Scheduled Notebook in a given project and location. Auto-naming is currently not supported for this resource.
Create Schedule Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Schedule(name: string, args: ScheduleArgs, opts?: CustomResourceOptions);
@overload
def Schedule(resource_name: str,
args: ScheduleArgs,
opts: Optional[ResourceOptions] = None)
@overload
def Schedule(resource_name: str,
opts: Optional[ResourceOptions] = None,
schedule_id: Optional[str] = None,
cron_schedule: Optional[str] = None,
description: Optional[str] = None,
execution_template: Optional[ExecutionTemplateArgs] = None,
location: Optional[str] = None,
project: Optional[str] = None,
state: Optional[ScheduleState] = None,
time_zone: Optional[str] = None)
func NewSchedule(ctx *Context, name string, args ScheduleArgs, opts ...ResourceOption) (*Schedule, error)
public Schedule(string name, ScheduleArgs args, CustomResourceOptions? opts = null)
public Schedule(String name, ScheduleArgs args)
public Schedule(String name, ScheduleArgs args, CustomResourceOptions options)
type: google-native:notebooks/v1:Schedule
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args ScheduleArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args ScheduleArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args ScheduleArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args ScheduleArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args ScheduleArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var examplescheduleResourceResourceFromNotebooksv1 = new GoogleNative.Notebooks.V1.Schedule("examplescheduleResourceResourceFromNotebooksv1", new()
{
ScheduleId = "string",
CronSchedule = "string",
Description = "string",
ExecutionTemplate = new GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateArgs
{
Labels =
{
{ "string", "string" },
},
OutputNotebookFolder = "string",
InputNotebookFile = "string",
JobType = GoogleNative.Notebooks.V1.ExecutionTemplateJobType.JobTypeUnspecified,
KernelSpec = "string",
AcceleratorConfig = new GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigArgs
{
CoreCount = "string",
Type = GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType.SchedulerAcceleratorTypeUnspecified,
},
MasterType = "string",
DataprocParameters = new GoogleNative.Notebooks.V1.Inputs.DataprocParametersArgs
{
Cluster = "string",
},
Parameters = "string",
ParamsYamlFile = "string",
ContainerImageUri = "string",
ServiceAccount = "string",
Tensorboard = "string",
VertexAiParameters = new GoogleNative.Notebooks.V1.Inputs.VertexAIParametersArgs
{
Env =
{
{ "string", "string" },
},
Network = "string",
},
},
Location = "string",
Project = "string",
State = GoogleNative.Notebooks.V1.ScheduleState.StateUnspecified,
TimeZone = "string",
});
example, err := notebooks.NewSchedule(ctx, "examplescheduleResourceResourceFromNotebooksv1", &notebooks.ScheduleArgs{
ScheduleId: pulumi.String("string"),
CronSchedule: pulumi.String("string"),
Description: pulumi.String("string"),
ExecutionTemplate: &notebooks.ExecutionTemplateArgs{
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
OutputNotebookFolder: pulumi.String("string"),
InputNotebookFile: pulumi.String("string"),
JobType: notebooks.ExecutionTemplateJobTypeJobTypeUnspecified,
KernelSpec: pulumi.String("string"),
AcceleratorConfig: &notebooks.SchedulerAcceleratorConfigArgs{
CoreCount: pulumi.String("string"),
Type: notebooks.SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified,
},
MasterType: pulumi.String("string"),
DataprocParameters: &notebooks.DataprocParametersArgs{
Cluster: pulumi.String("string"),
},
Parameters: pulumi.String("string"),
ParamsYamlFile: pulumi.String("string"),
ContainerImageUri: pulumi.String("string"),
ServiceAccount: pulumi.String("string"),
Tensorboard: pulumi.String("string"),
VertexAiParameters: &notebooks.VertexAIParametersArgs{
Env: pulumi.StringMap{
"string": pulumi.String("string"),
},
Network: pulumi.String("string"),
},
},
Location: pulumi.String("string"),
Project: pulumi.String("string"),
State: notebooks.ScheduleStateStateUnspecified,
TimeZone: pulumi.String("string"),
})
var examplescheduleResourceResourceFromNotebooksv1 = new Schedule("examplescheduleResourceResourceFromNotebooksv1", ScheduleArgs.builder()
.scheduleId("string")
.cronSchedule("string")
.description("string")
.executionTemplate(ExecutionTemplateArgs.builder()
.labels(Map.of("string", "string"))
.outputNotebookFolder("string")
.inputNotebookFile("string")
.jobType("JOB_TYPE_UNSPECIFIED")
.kernelSpec("string")
.acceleratorConfig(SchedulerAcceleratorConfigArgs.builder()
.coreCount("string")
.type("SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED")
.build())
.masterType("string")
.dataprocParameters(DataprocParametersArgs.builder()
.cluster("string")
.build())
.parameters("string")
.paramsYamlFile("string")
.containerImageUri("string")
.serviceAccount("string")
.tensorboard("string")
.vertexAiParameters(VertexAIParametersArgs.builder()
.env(Map.of("string", "string"))
.network("string")
.build())
.build())
.location("string")
.project("string")
.state("STATE_UNSPECIFIED")
.timeZone("string")
.build());
exampleschedule_resource_resource_from_notebooksv1 = google_native.notebooks.v1.Schedule("examplescheduleResourceResourceFromNotebooksv1",
schedule_id="string",
cron_schedule="string",
description="string",
execution_template={
"labels": {
"string": "string",
},
"output_notebook_folder": "string",
"input_notebook_file": "string",
"job_type": google_native.notebooks.v1.ExecutionTemplateJobType.JOB_TYPE_UNSPECIFIED,
"kernel_spec": "string",
"accelerator_config": {
"core_count": "string",
"type": google_native.notebooks.v1.SchedulerAcceleratorConfigType.SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED,
},
"master_type": "string",
"dataproc_parameters": {
"cluster": "string",
},
"parameters": "string",
"params_yaml_file": "string",
"container_image_uri": "string",
"service_account": "string",
"tensorboard": "string",
"vertex_ai_parameters": {
"env": {
"string": "string",
},
"network": "string",
},
},
location="string",
project="string",
state=google_native.notebooks.v1.ScheduleState.STATE_UNSPECIFIED,
time_zone="string")
const examplescheduleResourceResourceFromNotebooksv1 = new google_native.notebooks.v1.Schedule("examplescheduleResourceResourceFromNotebooksv1", {
scheduleId: "string",
cronSchedule: "string",
description: "string",
executionTemplate: {
labels: {
string: "string",
},
outputNotebookFolder: "string",
inputNotebookFile: "string",
jobType: google_native.notebooks.v1.ExecutionTemplateJobType.JobTypeUnspecified,
kernelSpec: "string",
acceleratorConfig: {
coreCount: "string",
type: google_native.notebooks.v1.SchedulerAcceleratorConfigType.SchedulerAcceleratorTypeUnspecified,
},
masterType: "string",
dataprocParameters: {
cluster: "string",
},
parameters: "string",
paramsYamlFile: "string",
containerImageUri: "string",
serviceAccount: "string",
tensorboard: "string",
vertexAiParameters: {
env: {
string: "string",
},
network: "string",
},
},
location: "string",
project: "string",
state: google_native.notebooks.v1.ScheduleState.StateUnspecified,
timeZone: "string",
});
type: google-native:notebooks/v1:Schedule
properties:
cronSchedule: string
description: string
executionTemplate:
acceleratorConfig:
coreCount: string
type: SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED
containerImageUri: string
dataprocParameters:
cluster: string
inputNotebookFile: string
jobType: JOB_TYPE_UNSPECIFIED
kernelSpec: string
labels:
string: string
masterType: string
outputNotebookFolder: string
parameters: string
paramsYamlFile: string
serviceAccount: string
tensorboard: string
vertexAiParameters:
env:
string: string
network: string
location: string
project: string
scheduleId: string
state: STATE_UNSPECIFIED
timeZone: string
Schedule Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
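As a rough illustration of the two equivalent shapes (using a plain dataclass as a stand-in for an SDK args class such as ExecutionTemplateArgs; the field names and gs:// values are hypothetical):

```python
from dataclasses import dataclass

# Stand-in for a Pulumi args class; the real SDK accepts either an
# args-class instance or a dict literal with the same keys.
@dataclass
class ExecutionTemplateStandIn:
    input_notebook_file: str
    output_notebook_folder: str

as_class = ExecutionTemplateStandIn(
    input_notebook_file="gs://my-bucket/nb.ipynb",
    output_notebook_folder="gs://my-bucket/out",
)
as_dict = {
    "input_notebook_file": "gs://my-bucket/nb.ipynb",
    "output_notebook_folder": "gs://my-bucket/out",
}
# The dict form mirrors the args class's fields one-to-one.
assert vars(as_class) == as_dict
```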
The Schedule resource accepts the following input properties:
- ScheduleId string
- Required. User-defined unique ID of this schedule.
- CronSchedule string
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- Description string
- A brief description of this environment.
- ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplate
- Notebook Execution Template corresponding to this schedule.
- Location string
- Project string
- State Pulumi.GoogleNative.Notebooks.V1.ScheduleState
- TimeZone string
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen tz. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
- ScheduleId string
- Required. User-defined unique ID of this schedule.
- CronSchedule string
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- Description string
- A brief description of this environment.
- ExecutionTemplate ExecutionTemplateArgs
- Notebook Execution Template corresponding to this schedule.
- Location string
- Project string
- State ScheduleStateEnum
- TimeZone string
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen tz. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
- scheduleId String
- Required. User-defined unique ID of this schedule.
- cronSchedule String
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description String
- A brief description of this environment.
- executionTemplate ExecutionTemplate
- Notebook Execution Template corresponding to this schedule.
- location String
- project String
- state ScheduleState
- timeZone String
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen tz. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
- scheduleId string
- Required. User-defined unique ID of this schedule.
- cronSchedule string
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description string
- A brief description of this environment.
- executionTemplate ExecutionTemplate
- Notebook Execution Template corresponding to this schedule.
- location string
- project string
- state ScheduleState
- timeZone string
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen tz. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
- schedule_id str
- Required. User-defined unique ID of this schedule.
- cron_schedule str
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description str
- A brief description of this environment.
- execution_template ExecutionTemplateArgs
- Notebook Execution Template corresponding to this schedule.
- location str
- project str
- state ScheduleState
- time_zone str
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen tz. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
- scheduleId String
- Required. User-defined unique ID of this schedule.
- cronSchedule String
- Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description String
- A brief description of this environment.
- executionTemplate Property Map
- Notebook Execution Template corresponding to this schedule.
- location String
- project String
- state "STATE_UNSPECIFIED" | "ENABLED" | "PAUSED" | "DISABLED" | "UPDATE_FAILED" | "INITIALIZING" | "DELETING"
- timeZone String
- Time zone in which the cron_schedule is interpreted. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time; the daylight saving rules are determined by the chosen tz. For UTC, use the string "utc". If a time zone is not specified, the default is UTC (also known as GMT).
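The five-field cron format used by cron_schedule above can be sketched in plain Python (illustration only; this helper is not part of the Pulumi SDK, and real validation is done server-side):

```python
# Illustration of the cron_schedule field format:
# minute, hour, day of month, month, day of week.
FIELDS = ["minute", "hour", "day of month", "month", "day of week"]

def describe_cron(expr: str) -> dict:
    """Split a cron-tab expression into its five named fields."""
    parts = expr.split()
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

# "0 0 * * WED" runs every Wednesday at 00:00 in the schedule's time zone.
print(describe_cron("0 0 * * WED"))
```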
Outputs
All input properties are implicitly available as output properties. Additionally, the Schedule resource produces the following output properties:
- CreateTime string
- Time the schedule was created.
- DisplayName string
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- RecentExecutions List<Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionResponse>
- The most recent execution names triggered from this schedule and their corresponding states.
- UpdateTime string
- Time the schedule was last updated.
- CreateTime string
- Time the schedule was created.
- DisplayName string
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- RecentExecutions []ExecutionResponse
- The most recent execution names triggered from this schedule and their corresponding states.
- UpdateTime string
- Time the schedule was last updated.
- createTime String
- Time the schedule was created.
- displayName String
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions List<ExecutionResponse>
- The most recent execution names triggered from this schedule and their corresponding states.
- updateTime String
- Time the schedule was last updated.
- createTime string
- Time the schedule was created.
- displayName string
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- id string
- The provider-assigned unique ID for this managed resource.
- name string
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions ExecutionResponse[]
- The most recent execution names triggered from this schedule and their corresponding states.
- updateTime string
- Time the schedule was last updated.
- create_time str
- Time the schedule was created.
- display_name str
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- id str
- The provider-assigned unique ID for this managed resource.
- name str
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recent_executions Sequence[ExecutionResponse]
- The most recent execution names triggered from this schedule and their corresponding states.
- update_time str
- Time the schedule was last updated.
- createTime String
- Time the schedule was created.
- displayName String
- Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions List<Property Map>
- The most recent execution names triggered from this schedule and their corresponding states.
- updateTime String
- Time the schedule was last updated.
Supporting Types
DataprocParameters, DataprocParametersArgs
- Cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- Cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster str
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
DataprocParametersResponse, DataprocParametersResponseArgs
- Cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- Cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster str
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
ExecutionResponse, ExecutionResponseArgs
- CreateTime string
- Time the Execution was instantiated.
- Description string
- A brief description of this execution.
- DisplayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateResponse
- Execute metadata including name, hardware spec, region, labels, etc.
- JobUri string
- The URI of the external job used to execute the notebook.
- Name string
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- OutputNotebookFile string
- Output notebook file generated by this execution.
- State string
- State of the underlying AI Platform job.
- UpdateTime string
- Time the Execution was last updated.
- CreateTime string
- Time the Execution was instantiated.
- Description string
- A brief description of this execution.
- DisplayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- ExecutionTemplate ExecutionTemplateResponse
- Execute metadata including name, hardware spec, region, labels, etc.
- JobUri string
- The URI of the external job used to execute the notebook.
- Name string
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- OutputNotebookFile string
- Output notebook file generated by this execution.
- State string
- State of the underlying AI Platform job.
- UpdateTime string
- Time the Execution was last updated.
- createTime String
- Time the Execution was instantiated.
- description String
- A brief description of this execution.
- displayName String
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate ExecutionTemplateResponse
- Execute metadata including name, hardware spec, region, labels, etc.
- jobUri String
- The URI of the external job used to execute the notebook.
- name String
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile String
- Output notebook file generated by this execution.
- state String
- State of the underlying AI Platform job.
- updateTime String
- Time the Execution was last updated.
- createTime string
- Time the Execution was instantiated.
- description string
- A brief description of this execution.
- displayName string
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate ExecutionTemplateResponse
- Execute metadata including name, hardware spec, region, labels, etc.
- jobUri string
- The URI of the external job used to execute the notebook.
- name string
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile string
- Output notebook file generated by this execution.
- state string
- State of the underlying AI Platform job.
- updateTime string
- Time the Execution was last updated.
- create_time str
- Time the Execution was instantiated.
- description str
- A brief description of this execution.
- display_name str
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- execution_template ExecutionTemplateResponse
- Execute metadata including name, hardware spec, region, labels, etc.
- job_uri str
- The URI of the external job used to execute the notebook.
- name str
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- output_notebook_file str
- Output notebook file generated by this execution.
- state str
- State of the underlying AI Platform job.
- update_time str
- Time the Execution was last updated.
- createTime String
- Time the Execution was instantiated.
- description String
- A brief description of this execution.
- displayName String
- Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate Property Map
- Execute metadata including name, hardware spec, region, labels, etc.
- jobUri String
- The URI of the external job used to execute the notebook.
- name String
- The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile String
- Output notebook file generated by this execution.
- state String
- State of the underlying AI Platform job.
- updateTime String
- Time the Execution was last updated.
ExecutionTemplate, ExecutionTemplateArgs
- ScaleTier Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateScaleTier
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.
- AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfig
- Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParameters
- Parameters used in Dataproc JobType executions.
- InputNotebookFile string
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType Pulumi.GoogleNative.Notebooks.V1.ExecutionTemplateJobType
- The type of Job to be used on this execution.
- KernelSpec string
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels Dictionary<string, string>
- Labels for execution. If the execution is scheduled, the labels will include 'nbs-scheduled'; otherwise, it is an immediate execution and the labels will include 'nbs-immediate'. Use these fields to efficiently index between various types of executions.
- MasterType string
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string
- Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ServiceAccount string
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParameters
- Parameters used in Vertex AI JobType executions.
- Scale
Tier ExecutionTemplate Scale Tier - Scale tier of the hardware used for notebook execution. DEPRECATED Will be discontinued. As right now only CUSTOM is supported.
- Accelerator
Config SchedulerAccelerator Config - Configuration (count and accelerator type) for hardware running notebook execution.
- Container
Image stringUri - Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- Dataproc
Parameters DataprocParameters - Parameters used in Dataproc JobType executions.
- Input
Notebook stringFile - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format:
gs://{bucket_name}/{folder}/{notebook_file_name}
Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- Job
Type ExecutionTemplate Job Type - The type of Job to be used on this execution.
- Kernel
Spec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels map[string]string - Labels for execution. If the execution is scheduled, its labels include 'nbs-scheduled'; otherwise it is an immediate execution and its labels include 'nbs-immediate'. Use these labels to efficiently index between the various types of executions.
- MasterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
- OutputNotebookFolder string - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- Parameters string - Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook; pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ServiceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters VertexAIParameters - Parameters used in Vertex AI JobType executions.
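The masterType rule above (required when scaleTier is CUSTOM, and restricted to specific machine types) can be made concrete with a small client-side pre-flight check. The helper below is a hypothetical illustration, not part of the Pulumi or Google Cloud SDKs; the accepted values are copied from this page.

```python
from typing import Optional

# Machine types accepted in masterType, copied from the list above.
N1_TYPES = (
    {f"n1-standard-{n}" for n in (4, 8, 16, 32, 64, 96)}
    | {f"n1-highmem-{n}" for n in (2, 4, 8, 16, 32, 64, 96)}
    | {f"n1-highcpu-{n}" for n in (16, 32, 64, 96)}
)

LEGACY_TYPES = {
    "standard", "large_model", "complex_model_s", "complex_model_m",
    "complex_model_l", "standard_gpu", "complex_model_m_gpu",
    "complex_model_l_gpu", "standard_p100", "complex_model_m_p100",
    "standard_v100", "large_model_v100", "complex_model_m_v100",
    "complex_model_l_v100",
}

def validate_master_type(scale_tier: str, master_type: Optional[str]) -> None:
    """Raise ValueError if master_type is inconsistent with scale_tier."""
    if scale_tier != "CUSTOM":
        return  # masterType is only constrained under the CUSTOM tier
    if master_type is None:
        raise ValueError("masterType must be set when scaleTier is CUSTOM")
    if master_type not in N1_TYPES | LEGACY_TYPES | {"cloud_tpu"}:
        raise ValueError(f"unsupported masterType: {master_type!r}")
```

Running such a check before submitting the resource surfaces an invalid machine type locally instead of at deploy time.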
- scaleTier ExecutionTemplateScaleTier - Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.
- acceleratorConfig SchedulerAcceleratorConfig - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParameters - Parameters used in Dataproc JobType executions.
- inputNotebookFile String - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType ExecutionTemplateJobType - The type of Job to be used on this execution.
- kernelSpec String - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String,String> - Labels for execution. If the execution is scheduled, its labels include 'nbs-scheduled'; otherwise it is an immediate execution and its labels include 'nbs-immediate'. Use these labels to efficiently index between the various types of executions.
- masterType String - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
- outputNotebookFolder String - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters String - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook; pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount String - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParameters - Parameters used in Vertex AI JobType executions.
- scaleTier ExecutionTemplateScaleTier - Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.
- acceleratorConfig SchedulerAcceleratorConfig - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParameters - Parameters used in Dataproc JobType executions.
- inputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType ExecutionTemplateJobType - The type of Job to be used on this execution.
- kernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string} - Labels for execution. If the execution is scheduled, its labels include 'nbs-scheduled'; otherwise it is an immediate execution and its labels include 'nbs-immediate'. Use these labels to efficiently index between the various types of executions.
- masterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
- outputNotebookFolder string - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters string - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook; pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParameters - Parameters used in Vertex AI JobType executions.
- scale_tier ExecutionTemplateScaleTier - Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.
- accelerator_config SchedulerAcceleratorConfig - Configuration (count and accelerator type) for hardware running notebook execution.
- container_image_uri str - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_parameters DataprocParameters - Parameters used in Dataproc JobType executions.
- input_notebook_file str - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_type ExecutionTemplateJobType - The type of Job to be used on this execution.
- kernel_spec str - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Mapping[str, str] - Labels for execution. If the execution is scheduled, its labels include 'nbs-scheduled'; otherwise it is an immediate execution and its labels include 'nbs-immediate'. Use these labels to efficiently index between the various types of executions.
- master_type str - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
- output_notebook_folder str - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters str - Parameters used within the 'input_notebook_file' notebook.
- params_yaml_file str - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook; pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- service_account str - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard str - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_parameters VertexAIParameters - Parameters used in Vertex AI JobType executions.
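The gs:// formats documented for input_notebook_file and output_notebook_folder can be sketched as simple pattern checks. These helpers are hypothetical (the bucket-name pattern is a deliberately simplified assumption), shown only to make the documented formats concrete:

```python
import re

# Simplified bucket-name pattern -- an assumption, not the full GCS naming rule.
_BUCKET = r"[a-z0-9][a-z0-9._-]*"

def is_input_notebook_file(path: str) -> bool:
    # Documented format: gs://{bucket_name}/{folder}/{notebook_file_name}
    return re.fullmatch(rf"gs://{_BUCKET}(/[^/]+)+/[^/]+\.ipynb", path) is not None

def is_output_notebook_folder(path: str) -> bool:
    # Documented format: gs://{bucket_name}/{folder}
    return re.fullmatch(rf"gs://{_BUCKET}(/[^/]+)+", path) is not None
```

For example, the page's own sample paths pass these checks, while a bare bucket with no folder does not.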
- scaleTier "SCALE_TIER_UNSPECIFIED" | "BASIC" | "STANDARD_1" | "PREMIUM_1" | "BASIC_GPU" | "BASIC_TPU" | "CUSTOM" - Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.
- acceleratorConfig Property Map - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters Property Map - Parameters used in Dataproc JobType executions.
- inputNotebookFile String - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType "JOB_TYPE_UNSPECIFIED" | "VERTEX_AI" | "DATAPROC" - The type of Job to be used on this execution.
- kernelSpec String - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String> - Labels for execution. If the execution is scheduled, its labels include 'nbs-scheduled'; otherwise it is an immediate execution and its labels include 'nbs-immediate'. Use these labels to efficiently index between the various types of executions.
- masterType String - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
- outputNotebookFolder String - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters String - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook; pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- serviceAccount String - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters Property Map - Parameters used in Vertex AI JobType executions.
ExecutionTemplateJobType, ExecutionTemplateJobTypeArgs
- JobTypeUnspecified - JOB_TYPE_UNSPECIFIED. No type specified.
- VertexAi - VERTEX_AI. Custom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc - DATAPROC. Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- ExecutionTemplateJobTypeJobTypeUnspecified - JOB_TYPE_UNSPECIFIED. No type specified.
- ExecutionTemplateJobTypeVertexAi - VERTEX_AI. Custom Job in aiplatform.googleapis.com. Default value for an execution.
- ExecutionTemplateJobTypeDataproc - DATAPROC. Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JobTypeUnspecified - JOB_TYPE_UNSPECIFIED. No type specified.
- VertexAi - VERTEX_AI. Custom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc - DATAPROC. Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JobTypeUnspecified - JOB_TYPE_UNSPECIFIED. No type specified.
- VertexAi - VERTEX_AI. Custom Job in aiplatform.googleapis.com. Default value for an execution.
- Dataproc - DATAPROC. Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- JOB_TYPE_UNSPECIFIED - JOB_TYPE_UNSPECIFIED. No type specified.
- VERTEX_AI - VERTEX_AI. Custom Job in aiplatform.googleapis.com. Default value for an execution.
- DATAPROC - DATAPROC. Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
- "JOB_TYPE_UNSPECIFIED" - JOB_TYPE_UNSPECIFIED. No type specified.
- "VERTEX_AI" - VERTEX_AI. Custom Job in aiplatform.googleapis.com. Default value for an execution.
- "DATAPROC" - DATAPROC. Run execution on a cluster with Dataproc as a job. https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs
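The three job-type values above can be mirrored client-side for readability. This Enum is a hypothetical convenience (it is not exported by the Pulumi SDK); the values and the VERTEX_AI default are copied from the table above:

```python
from enum import Enum

# Hypothetical client-side mirror of ExecutionTemplateJobType.
class ExecutionTemplateJobType(Enum):
    JOB_TYPE_UNSPECIFIED = "JOB_TYPE_UNSPECIFIED"  # no type specified
    VERTEX_AI = "VERTEX_AI"  # Custom Job in aiplatform.googleapis.com
    DATAPROC = "DATAPROC"    # run on a Dataproc cluster as a job

# Per the table above, VERTEX_AI is the default value for an execution.
DEFAULT_JOB_TYPE = ExecutionTemplateJobType.VERTEX_AI
```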
ExecutionTemplateResponse, ExecutionTemplateResponseArgs
- AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- InputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string - The type of Job to be used on this execution.
- KernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels Dictionary<string, string> - Labels for execution. If the execution is scheduled, its labels include 'nbs-scheduled'; otherwise it is an immediate execution and its labels include 'nbs-immediate'. Use these labels to efficiently index between the various types of executions.
- MasterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
- OutputNotebookFolder string - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- Parameters string - Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook; pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string - Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.
- ServiceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- AcceleratorConfig SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- InputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string - The type of Job to be used on this execution.
- KernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels map[string]string - Labels for execution. If the execution is scheduled, its labels include 'nbs-scheduled'; otherwise it is an immediate execution and its labels include 'nbs-immediate'. Use these labels to efficiently index between the various types of executions.
- MasterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
- OutputNotebookFolder string - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- Parameters string - Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook; pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string - Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.
- ServiceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- inputNotebookFile String - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String - The type of Job to be used on this execution.
- kernelSpec String - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String,String> - Labels for execution. If the execution is scheduled, its labels include 'nbs-scheduled'; otherwise it is an immediate execution and its labels include 'nbs-immediate'. Use these labels to efficiently index between the various types of executions.
- masterType String - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
- outputNotebookFolder String - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters String - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook; pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String - Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.
- serviceAccount String - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- inputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType string - The type of Job to be used on this execution.
- kernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string} - Labels for execution. If the execution is scheduled, its labels include 'nbs-scheduled'; otherwise it is an immediate execution and its labels include 'nbs-immediate'. Use these labels to efficiently index between the various types of executions.
- masterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
- outputNotebookFolder string - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters string - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook; pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier string - Scale tier of the hardware used for notebook execution. DEPRECATED: this field will be discontinued; currently only CUSTOM is supported.
- serviceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- accelerator_config SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- container_image_uri str - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_parameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- input_notebook_file str - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_type str - The type of Job to be used on this execution.
- kernel_spec str - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Mapping[str, str] - Labels for execution. If the execution is scheduled, the 'nbs-scheduled' label is included. Otherwise, it is an immediate execution and the 'nbs-immediate' label is included. Use these labels to efficiently index the different types of executions.
- master_type str - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- output_notebook_folder str - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters str - Parameters used within the 'input_notebook_file' notebook.
- params_yaml_file str - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scale_tier str - Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- service_account str - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard str - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_parameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- acceleratorConfig Property Map - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters Property Map - Parameters used in Dataproc JobType executions.
- inputNotebookFile String - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String - The type of Job to be used on this execution.
- kernelSpec String - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String> - Labels for execution. If the execution is scheduled, the 'nbs-scheduled' label is included. Otherwise, it is an immediate execution and the 'nbs-immediate' label is included. Use these labels to efficiently index the different types of executions.
- masterType String - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String - Scale tier of the hardware used for notebook execution. Deprecated: this field will be discontinued; currently only CUSTOM is supported.
- serviceAccount String - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters Property Map - Parameters used in Vertex AI JobType executions.
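Putting the fields above together, a minimal Pulumi YAML sketch of a Schedule with an embedded execution template might look like the following. The schedule ID, cron expression, bucket paths, and service account are hypothetical placeholders, not values from this reference:

```yaml
resources:
  nightlyNotebookRun:
    type: google-native:notebooks/v1:Schedule
    properties:
      scheduleId: nightly-sentiment        # hypothetical name
      location: us-central1
      cronSchedule: "0 2 * * *"            # every day at 02:00
      timeZone: America/New_York
      executionTemplate:
        scaleTier: CUSTOM                  # only CUSTOM is currently supported
        masterType: n1-standard-4          # one of the Compute Engine types listed above
        inputNotebookFile: gs://my-bucket/scheduled_notebooks/sentiment.ipynb
        outputNotebookFolder: gs://my-bucket/notebook_output
        serviceAccount: runner@my-project.iam.gserviceaccount.com
```

Because scaleTier is CUSTOM, masterType is required, matching the constraint described in the masterType field above.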
ExecutionTemplateScaleTier, ExecutionTemplateScaleTierArgs
- ScaleTierUnspecified - SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- Basic - BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1 - STANDARD_1: Many workers and a few parameter servers.
- Premium1 - PREMIUM_1: A large number of workers with many parameter servers.
- BasicGpu - BASIC_GPU: A single worker instance with a K80 GPU.
- BasicTpu - BASIC_TPU: A single worker instance with a Cloud TPU.
- Custom - CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- ExecutionTemplateScaleTierScaleTierUnspecified - SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- ExecutionTemplateScaleTierBasic - BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- ExecutionTemplateScaleTierStandard1 - STANDARD_1: Many workers and a few parameter servers.
- ExecutionTemplateScaleTierPremium1 - PREMIUM_1: A large number of workers with many parameter servers.
- ExecutionTemplateScaleTierBasicGpu - BASIC_GPU: A single worker instance with a K80 GPU.
- ExecutionTemplateScaleTierBasicTpu - BASIC_TPU: A single worker instance with a Cloud TPU.
- ExecutionTemplateScaleTierCustom - CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- ScaleTierUnspecified - SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- Basic - BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1 - STANDARD_1: Many workers and a few parameter servers.
- Premium1 - PREMIUM_1: A large number of workers with many parameter servers.
- BasicGpu - BASIC_GPU: A single worker instance with a K80 GPU.
- BasicTpu - BASIC_TPU: A single worker instance with a Cloud TPU.
- Custom - CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- ScaleTierUnspecified - SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- Basic - BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- Standard1 - STANDARD_1: Many workers and a few parameter servers.
- Premium1 - PREMIUM_1: A large number of workers with many parameter servers.
- BasicGpu - BASIC_GPU: A single worker instance with a K80 GPU.
- BasicTpu - BASIC_TPU: A single worker instance with a Cloud TPU.
- Custom - CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- SCALE_TIER_UNSPECIFIED - SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- BASIC - BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- STANDARD1 - STANDARD_1: Many workers and a few parameter servers.
- PREMIUM1 - PREMIUM_1: A large number of workers with many parameter servers.
- BASIC_GPU - BASIC_GPU: A single worker instance with a K80 GPU.
- BASIC_TPU - BASIC_TPU: A single worker instance with a Cloud TPU.
- CUSTOM - CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
- "SCALE_TIER_UNSPECIFIED" - SCALE_TIER_UNSPECIFIED: Unspecified Scale Tier.
- "BASIC" - BASIC: A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
- "STANDARD_1" - STANDARD_1: Many workers and a few parameter servers.
- "PREMIUM_1" - PREMIUM_1: A large number of workers with many parameter servers.
- "BASIC_GPU" - BASIC_GPU: A single worker instance with a K80 GPU.
- "BASIC_TPU" - BASIC_TPU: A single worker instance with a Cloud TPU.
- "CUSTOM" - CUSTOM: The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: You must set ExecutionTemplate.masterType to specify the type of machine to use for your master node. This is the only required setting.
ScheduleState, ScheduleStateArgs
- StateUnspecified - STATE_UNSPECIFIED: Unspecified state.
- Enabled - ENABLED: The job is executing normally.
- Paused - PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- Disabled - DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- UpdateFailed - UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- Initializing - INITIALIZING: The schedule resource is being created.
- Deleting - DELETING: The schedule resource is being deleted.
- ScheduleStateStateUnspecified - STATE_UNSPECIFIED: Unspecified state.
- ScheduleStateEnabled - ENABLED: The job is executing normally.
- ScheduleStatePaused - PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- ScheduleStateDisabled - DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- ScheduleStateUpdateFailed - UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- ScheduleStateInitializing - INITIALIZING: The schedule resource is being created.
- ScheduleStateDeleting - DELETING: The schedule resource is being deleted.
- StateUnspecified - STATE_UNSPECIFIED: Unspecified state.
- Enabled - ENABLED: The job is executing normally.
- Paused - PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- Disabled - DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- UpdateFailed - UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- Initializing - INITIALIZING: The schedule resource is being created.
- Deleting - DELETING: The schedule resource is being deleted.
- StateUnspecified - STATE_UNSPECIFIED: Unspecified state.
- Enabled - ENABLED: The job is executing normally.
- Paused - PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- Disabled - DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- UpdateFailed - UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- Initializing - INITIALIZING: The schedule resource is being created.
- Deleting - DELETING: The schedule resource is being deleted.
- STATE_UNSPECIFIED - STATE_UNSPECIFIED: Unspecified state.
- ENABLED - ENABLED: The job is executing normally.
- PAUSED - PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- DISABLED - DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- UPDATE_FAILED - UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- INITIALIZING - INITIALIZING: The schedule resource is being created.
- DELETING - DELETING: The schedule resource is being deleted.
- "STATE_UNSPECIFIED" - STATE_UNSPECIFIED: Unspecified state.
- "ENABLED" - ENABLED: The job is executing normally.
- "PAUSED" - PAUSED: The job is paused by the user. It will not execute. A user can intentionally pause the job using PauseJobRequest.
- "DISABLED" - DISABLED: The job is disabled by the system due to error. The user cannot directly set a job to be disabled.
- "UPDATE_FAILED" - UPDATE_FAILED: The job state resulting from a failed CloudScheduler.UpdateJob operation. To recover a job from this state, retry CloudScheduler.UpdateJob until a successful response is received.
- "INITIALIZING" - INITIALIZING: The schedule resource is being created.
- "DELETING" - DELETING: The schedule resource is being deleted.
SchedulerAcceleratorConfig, SchedulerAcceleratorConfigArgs
- CoreCount string - Count of cores of this accelerator.
- Type Pulumi.GoogleNative.Notebooks.V1.SchedulerAcceleratorConfigType - Type of this accelerator.
- CoreCount string - Count of cores of this accelerator.
- Type SchedulerAcceleratorConfigType - Type of this accelerator.
- coreCount String - Count of cores of this accelerator.
- type SchedulerAcceleratorConfigType - Type of this accelerator.
- coreCount string - Count of cores of this accelerator.
- type SchedulerAcceleratorConfigType - Type of this accelerator.
- core_count str - Count of cores of this accelerator.
- type SchedulerAcceleratorConfigType - Type of this accelerator.
- coreCount String - Count of cores of this accelerator.
- type "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED" | "NVIDIA_TESLA_K80" | "NVIDIA_TESLA_P100" | "NVIDIA_TESLA_V100" | "NVIDIA_TESLA_P4" | "NVIDIA_TESLA_T4" | "NVIDIA_TESLA_A100" | "TPU_V2" | "TPU_V3" - Type of this accelerator.
SchedulerAcceleratorConfigResponse, SchedulerAcceleratorConfigResponseArgs
- core_count str - Count of cores of this accelerator.
- type str - Type of this accelerator.
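As a sketch of how an accelerator config plugs into the execution template above (a hypothetical fragment, not taken from this reference; note that coreCount is a string, per the field types above):

```yaml
executionTemplate:
  scaleTier: CUSTOM
  masterType: n1-standard-8
  acceleratorConfig:
    type: NVIDIA_TESLA_T4   # one of the accelerator types enumerated below
    coreCount: "1"          # string-typed count, not a number
```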
SchedulerAcceleratorConfigType, SchedulerAcceleratorConfigTypeArgs
- SchedulerAcceleratorTypeUnspecified - SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NvidiaTeslaK80 - NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100 - NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100 - NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4 - NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4 - NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100 - NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2 - TPU_V2: TPU v2.
- TpuV3 - TPU_V3: TPU v3.
- SchedulerAcceleratorConfigTypeSchedulerAcceleratorTypeUnspecified - SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaK80 - NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaP100 - NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaV100 - NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaP4 - NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaT4 - NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- SchedulerAcceleratorConfigTypeNvidiaTeslaA100 - NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- SchedulerAcceleratorConfigTypeTpuV2 - TPU_V2: TPU v2.
- SchedulerAcceleratorConfigTypeTpuV3 - TPU_V3: TPU v3.
- SchedulerAcceleratorTypeUnspecified - SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NvidiaTeslaK80 - NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100 - NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100 - NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4 - NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4 - NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100 - NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2 - TPU_V2: TPU v2.
- TpuV3 - TPU_V3: TPU v3.
- SchedulerAcceleratorTypeUnspecified - SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NvidiaTeslaK80 - NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100 - NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100 - NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NvidiaTeslaP4 - NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4 - NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NvidiaTeslaA100 - NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TpuV2 - TPU_V2: TPU v2.
- TpuV3 - TPU_V3: TPU v3.
- SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED - SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- NVIDIA_TESLA_K80 - NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NVIDIA_TESLA_P100 - NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NVIDIA_TESLA_V100 - NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- NVIDIA_TESLA_P4 - NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NVIDIA_TESLA_T4 - NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- NVIDIA_TESLA_A100 - NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- TPU_V2 - TPU_V2: TPU v2.
- TPU_V3 - TPU_V3: TPU v3.
- "SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED" - SCHEDULER_ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Defaults to no GPU.
- "NVIDIA_TESLA_K80" - NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- "NVIDIA_TESLA_P100" - NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- "NVIDIA_TESLA_V100" - NVIDIA_TESLA_V100: Nvidia Tesla V100 GPU.
- "NVIDIA_TESLA_P4" - NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- "NVIDIA_TESLA_T4" - NVIDIA_TESLA_T4: Nvidia Tesla T4 GPU.
- "NVIDIA_TESLA_A100" - NVIDIA_TESLA_A100: Nvidia Tesla A100 GPU.
- "TPU_V2" - TPU_V2: TPU v2.
- "TPU_V3" - TPU_V3: TPU v3.
VertexAIParameters, VertexAIParametersArgs
- Env Dictionary<string, string> - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env map[string]string - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String,String> - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env {[key: string]: string} - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Mapping[str, str] - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network str - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String> - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
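A Vertex AI parameters block might be sketched as follows within the execution template (a hypothetical fragment; the network value uses the project-number form described above, and GCP_BUCKET is the example variable from these docs):

```yaml
executionTemplate:
  vertexAiParameters:
    env:
      GCP_BUCKET: gs://my-bucket/samples/            # at most 100 unique variables
    network: projects/12345/global/networks/myVPC    # project number, not project ID
```

If network is omitted, the job simply runs unpeered, per the field description above.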
VertexAIParametersResponse, VertexAIParametersResponseArgs
- Env Dictionary<string, string> - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env map[string]string - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String,String> - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env {[key: string]: string} - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Mapping[str, str] - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network str - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String> - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0