Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.ml/v1.Version
Creates a new version of a model from a trained TensorFlow model. If the version created in the cloud by this call is the first deployed version of the specified model, it will be made the default version of the model. When you add a version to a model that already has one or more versions, the default version does not automatically change. If you want a new version to be the default, you must call projects.models.versions.setDefault.
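For example, a minimal sketch in Python (the model name, bucket path, and version values below are placeholders, and the model is assumed to already exist):
import pulumi_google_native as google_native

# Deploy trained TensorFlow artifacts as version "v1" of an existing model.
# If "my-model" has no versions yet, this version becomes its default;
# otherwise the default is unchanged unless you call
# projects.models.versions.setDefault.
version = google_native.ml.v1.Version("v1",
    model_id="my-model",                     # assumed existing model
    name="v1",
    deployment_uri="gs://my-bucket/model/",  # placeholder artifact path
    framework=google_native.ml.v1.VersionFramework.TENSORFLOW,
    runtime_version="1.15",
    python_version="3.7")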
Create Version Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Version(name: string, args: VersionArgs, opts?: CustomResourceOptions);
@overload
def Version(resource_name: str,
args: VersionArgs,
opts: Optional[ResourceOptions] = None)
@overload
def Version(resource_name: str,
opts: Optional[ResourceOptions] = None,
model_id: Optional[str] = None,
runtime_version: Optional[str] = None,
python_version: Optional[str] = None,
manual_scaling: Optional[GoogleCloudMlV1__ManualScalingArgs] = None,
name: Optional[str] = None,
etag: Optional[str] = None,
explanation_config: Optional[GoogleCloudMlV1__ExplanationConfigArgs] = None,
framework: Optional[VersionFramework] = None,
labels: Optional[Mapping[str, str]] = None,
machine_type: Optional[str] = None,
accelerator_config: Optional[GoogleCloudMlV1__AcceleratorConfigArgs] = None,
deployment_uri: Optional[str] = None,
description: Optional[str] = None,
package_uris: Optional[Sequence[str]] = None,
prediction_class: Optional[str] = None,
project: Optional[str] = None,
container: Optional[GoogleCloudMlV1__ContainerSpecArgs] = None,
request_logging_config: Optional[GoogleCloudMlV1__RequestLoggingConfigArgs] = None,
routes: Optional[GoogleCloudMlV1__RouteMapArgs] = None,
auto_scaling: Optional[GoogleCloudMlV1__AutoScalingArgs] = None,
service_account: Optional[str] = None)
func NewVersion(ctx *Context, name string, args VersionArgs, opts ...ResourceOption) (*Version, error)
public Version(string name, VersionArgs args, CustomResourceOptions? opts = null)
public Version(String name, VersionArgs args)
public Version(String name, VersionArgs args, CustomResourceOptions options)
type: google-native:ml/v1:Version
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args VersionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args VersionArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args VersionArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args VersionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args VersionArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
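Beyond the required name and args, the options bag accepts the standard Pulumi resource options. A small sketch in Python (the protect flag and placeholder values are illustrative, not specific to this resource):
import pulumi
import pulumi_google_native as google_native

# Guard the version against accidental deletion with the standard
# "protect" resource option.
version = google_native.ml.v1.Version("guarded",
    model_id="my-model",                     # placeholder
    deployment_uri="gs://my-bucket/model/",  # placeholder
    opts=pulumi.ResourceOptions(protect=True))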
Constructor example
The following reference example uses placeholder values for all input properties.
var exampleversionResourceResourceFromMlv1 = new GoogleNative.Ml.V1.Version("exampleversionResourceResourceFromMlv1", new()
{
ModelId = "string",
RuntimeVersion = "string",
PythonVersion = "string",
ManualScaling = new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ManualScalingArgs
{
Nodes = 0,
},
Name = "string",
Etag = "string",
ExplanationConfig = new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ExplanationConfigArgs
{
IntegratedGradientsAttribution = new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__IntegratedGradientsAttributionArgs
{
NumIntegralSteps = 0,
},
SampledShapleyAttribution = new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__SampledShapleyAttributionArgs
{
NumPaths = 0,
},
XraiAttribution = new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__XraiAttributionArgs
{
NumIntegralSteps = 0,
},
},
Framework = GoogleNative.Ml.V1.VersionFramework.FrameworkUnspecified,
Labels =
{
{ "string", "string" },
},
MachineType = "string",
AcceleratorConfig = new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__AcceleratorConfigArgs
{
Count = "string",
Type = GoogleNative.Ml.V1.GoogleCloudMlV1__AcceleratorConfigType.AcceleratorTypeUnspecified,
},
DeploymentUri = "string",
Description = "string",
PackageUris = new[]
{
"string",
},
PredictionClass = "string",
Project = "string",
Container = new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ContainerSpecArgs
{
Args = new[]
{
"string",
},
Command = new[]
{
"string",
},
Env = new[]
{
new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__EnvVarArgs
{
Name = "string",
Value = "string",
},
},
Image = "string",
Ports = new[]
{
new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ContainerPortArgs
{
ContainerPort = 0,
},
},
},
RequestLoggingConfig = new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__RequestLoggingConfigArgs
{
BigqueryTableName = "string",
SamplingPercentage = 0,
},
Routes = new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__RouteMapArgs
{
Health = "string",
Predict = "string",
},
AutoScaling = new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__AutoScalingArgs
{
MaxNodes = 0,
Metrics = new[]
{
new GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__MetricSpecArgs
{
Name = GoogleNative.Ml.V1.GoogleCloudMlV1__MetricSpecName.MetricNameUnspecified,
Target = 0,
},
},
MinNodes = 0,
},
ServiceAccount = "string",
});
example, err := ml.NewVersion(ctx, "exampleversionResourceResourceFromMlv1", &ml.VersionArgs{
ModelId: pulumi.String("string"),
RuntimeVersion: pulumi.String("string"),
PythonVersion: pulumi.String("string"),
ManualScaling: &ml.GoogleCloudMlV1__ManualScalingArgs{
Nodes: pulumi.Int(0),
},
Name: pulumi.String("string"),
Etag: pulumi.String("string"),
ExplanationConfig: &ml.GoogleCloudMlV1__ExplanationConfigArgs{
IntegratedGradientsAttribution: &ml.GoogleCloudMlV1__IntegratedGradientsAttributionArgs{
NumIntegralSteps: pulumi.Int(0),
},
SampledShapleyAttribution: &ml.GoogleCloudMlV1__SampledShapleyAttributionArgs{
NumPaths: pulumi.Int(0),
},
XraiAttribution: &ml.GoogleCloudMlV1__XraiAttributionArgs{
NumIntegralSteps: pulumi.Int(0),
},
},
Framework: ml.VersionFrameworkFrameworkUnspecified,
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
MachineType: pulumi.String("string"),
AcceleratorConfig: &ml.GoogleCloudMlV1__AcceleratorConfigArgs{
Count: pulumi.String("string"),
Type: ml.GoogleCloudMlV1__AcceleratorConfigTypeAcceleratorTypeUnspecified,
},
DeploymentUri: pulumi.String("string"),
Description: pulumi.String("string"),
PackageUris: pulumi.StringArray{
pulumi.String("string"),
},
PredictionClass: pulumi.String("string"),
Project: pulumi.String("string"),
Container: &ml.GoogleCloudMlV1__ContainerSpecArgs{
Args: pulumi.StringArray{
pulumi.String("string"),
},
Command: pulumi.StringArray{
pulumi.String("string"),
},
Env: ml.GoogleCloudMlV1__EnvVarArray{
&ml.GoogleCloudMlV1__EnvVarArgs{
Name: pulumi.String("string"),
Value: pulumi.String("string"),
},
},
Image: pulumi.String("string"),
Ports: ml.GoogleCloudMlV1__ContainerPortArray{
&ml.GoogleCloudMlV1__ContainerPortArgs{
ContainerPort: pulumi.Int(0),
},
},
},
RequestLoggingConfig: &ml.GoogleCloudMlV1__RequestLoggingConfigArgs{
BigqueryTableName: pulumi.String("string"),
SamplingPercentage: pulumi.Float64(0),
},
Routes: &ml.GoogleCloudMlV1__RouteMapArgs{
Health: pulumi.String("string"),
Predict: pulumi.String("string"),
},
AutoScaling: &ml.GoogleCloudMlV1__AutoScalingArgs{
MaxNodes: pulumi.Int(0),
Metrics: ml.GoogleCloudMlV1__MetricSpecArray{
&ml.GoogleCloudMlV1__MetricSpecArgs{
Name: ml.GoogleCloudMlV1__MetricSpecNameMetricNameUnspecified,
Target: pulumi.Int(0),
},
},
MinNodes: pulumi.Int(0),
},
ServiceAccount: pulumi.String("string"),
})
var exampleversionResourceResourceFromMlv1 = new Version("exampleversionResourceResourceFromMlv1", VersionArgs.builder()
.modelId("string")
.runtimeVersion("string")
.pythonVersion("string")
.manualScaling(GoogleCloudMlV1__ManualScalingArgs.builder()
.nodes(0)
.build())
.name("string")
.etag("string")
.explanationConfig(GoogleCloudMlV1__ExplanationConfigArgs.builder()
.integratedGradientsAttribution(GoogleCloudMlV1__IntegratedGradientsAttributionArgs.builder()
.numIntegralSteps(0)
.build())
.sampledShapleyAttribution(GoogleCloudMlV1__SampledShapleyAttributionArgs.builder()
.numPaths(0)
.build())
.xraiAttribution(GoogleCloudMlV1__XraiAttributionArgs.builder()
.numIntegralSteps(0)
.build())
.build())
.framework("FRAMEWORK_UNSPECIFIED")
.labels(Map.of("string", "string"))
.machineType("string")
.acceleratorConfig(GoogleCloudMlV1__AcceleratorConfigArgs.builder()
.count("string")
.type("ACCELERATOR_TYPE_UNSPECIFIED")
.build())
.deploymentUri("string")
.description("string")
.packageUris("string")
.predictionClass("string")
.project("string")
.container(GoogleCloudMlV1__ContainerSpecArgs.builder()
.args("string")
.command("string")
.env(GoogleCloudMlV1__EnvVarArgs.builder()
.name("string")
.value("string")
.build())
.image("string")
.ports(GoogleCloudMlV1__ContainerPortArgs.builder()
.containerPort(0)
.build())
.build())
.requestLoggingConfig(GoogleCloudMlV1__RequestLoggingConfigArgs.builder()
.bigqueryTableName("string")
.samplingPercentage(0)
.build())
.routes(GoogleCloudMlV1__RouteMapArgs.builder()
.health("string")
.predict("string")
.build())
.autoScaling(GoogleCloudMlV1__AutoScalingArgs.builder()
.maxNodes(0)
.metrics(GoogleCloudMlV1__MetricSpecArgs.builder()
.name("METRIC_NAME_UNSPECIFIED")
.target(0)
.build())
.minNodes(0)
.build())
.serviceAccount("string")
.build());
exampleversion_resource_resource_from_mlv1 = google_native.ml.v1.Version("exampleversionResourceResourceFromMlv1",
model_id="string",
runtime_version="string",
python_version="string",
manual_scaling={
"nodes": 0,
},
name="string",
etag="string",
explanation_config={
"integrated_gradients_attribution": {
"num_integral_steps": 0,
},
"sampled_shapley_attribution": {
"num_paths": 0,
},
"xrai_attribution": {
"num_integral_steps": 0,
},
},
framework=google_native.ml.v1.VersionFramework.FRAMEWORK_UNSPECIFIED,
labels={
"string": "string",
},
machine_type="string",
accelerator_config={
"count": "string",
"type": google_native.ml.v1.GoogleCloudMlV1__AcceleratorConfigType.ACCELERATOR_TYPE_UNSPECIFIED,
},
deployment_uri="string",
description="string",
package_uris=["string"],
prediction_class="string",
project="string",
container={
"args": ["string"],
"command": ["string"],
"env": [{
"name": "string",
"value": "string",
}],
"image": "string",
"ports": [{
"container_port": 0,
}],
},
request_logging_config={
"bigquery_table_name": "string",
"sampling_percentage": 0,
},
routes={
"health": "string",
"predict": "string",
},
auto_scaling={
"max_nodes": 0,
"metrics": [{
"name": google_native.ml.v1.GoogleCloudMlV1__MetricSpecName.METRIC_NAME_UNSPECIFIED,
"target": 0,
}],
"min_nodes": 0,
},
service_account="string")
const exampleversionResourceResourceFromMlv1 = new google_native.ml.v1.Version("exampleversionResourceResourceFromMlv1", {
modelId: "string",
runtimeVersion: "string",
pythonVersion: "string",
manualScaling: {
nodes: 0,
},
name: "string",
etag: "string",
explanationConfig: {
integratedGradientsAttribution: {
numIntegralSteps: 0,
},
sampledShapleyAttribution: {
numPaths: 0,
},
xraiAttribution: {
numIntegralSteps: 0,
},
},
framework: google_native.ml.v1.VersionFramework.FrameworkUnspecified,
labels: {
string: "string",
},
machineType: "string",
acceleratorConfig: {
count: "string",
type: google_native.ml.v1.GoogleCloudMlV1__AcceleratorConfigType.AcceleratorTypeUnspecified,
},
deploymentUri: "string",
description: "string",
packageUris: ["string"],
predictionClass: "string",
project: "string",
container: {
args: ["string"],
command: ["string"],
env: [{
name: "string",
value: "string",
}],
image: "string",
ports: [{
containerPort: 0,
}],
},
requestLoggingConfig: {
bigqueryTableName: "string",
samplingPercentage: 0,
},
routes: {
health: "string",
predict: "string",
},
autoScaling: {
maxNodes: 0,
metrics: [{
name: google_native.ml.v1.GoogleCloudMlV1__MetricSpecName.MetricNameUnspecified,
target: 0,
}],
minNodes: 0,
},
serviceAccount: "string",
});
type: google-native:ml/v1:Version
properties:
acceleratorConfig:
count: string
type: ACCELERATOR_TYPE_UNSPECIFIED
autoScaling:
maxNodes: 0
metrics:
- name: METRIC_NAME_UNSPECIFIED
target: 0
minNodes: 0
container:
args:
- string
command:
- string
env:
- name: string
value: string
image: string
ports:
- containerPort: 0
deploymentUri: string
description: string
etag: string
explanationConfig:
integratedGradientsAttribution:
numIntegralSteps: 0
sampledShapleyAttribution:
numPaths: 0
xraiAttribution:
numIntegralSteps: 0
framework: FRAMEWORK_UNSPECIFIED
labels:
string: string
machineType: string
manualScaling:
nodes: 0
modelId: string
name: string
packageUris:
- string
predictionClass: string
project: string
pythonVersion: string
requestLoggingConfig:
bigqueryTableName: string
samplingPercentage: 0
routes:
health: string
predict: string
runtimeVersion: string
serviceAccount: string
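As a concrete counterpart to the placeholder values above, a hedged Python sketch with realistic inputs for a scikit-learn deployment (the bucket path and model name are assumptions):
import pulumi_google_native as google_native

sklearn_version = google_native.ml.v1.Version("v2",
    model_id="my-model",                             # assumed existing model
    name="v2",
    deployment_uri="gs://my-bucket/sklearn-model/",  # placeholder
    framework=google_native.ml.v1.VersionFramework.SCIKIT_LEARN,
    runtime_version="1.15",  # SCIKIT_LEARN requires runtime 1.4 or greater
    python_version="3.7",
    machine_type="mls1-c1-m2",
    auto_scaling={"min_nodes": 1})  # keep at least one serving node warm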
Version Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
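For example, both forms below configure manual scaling identically (a sketch; the argument-class name follows the constructor signature shown earlier):
import pulumi_google_native as google_native

# As a typed argument class...
scaling = google_native.ml.v1.GoogleCloudMlV1__ManualScalingArgs(nodes=2)

# ...or as the equivalent dictionary literal.
scaling_dict = {"nodes": 2}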
The Version resource accepts the following input properties:
- ModelId string
- PythonVersion string - The version of Python used in prediction. The following Python versions are available: Python '3.7' is available when runtime_version is set to '1.15' or later; Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'; Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
- RuntimeVersion string - The AI Platform runtime version to use for this deployment. For more information, see the runtime version list and how to manage runtime versions.
- AcceleratorConfig Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__AcceleratorConfig - Optional. Accelerator config for using GPUs for online prediction (beta). Only specify this field if you have specified a Compute Engine (N1) machine type in the machineType field. Learn more about using GPUs for online prediction.
- AutoScaling Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__AutoScaling - Automatically scale the number of nodes used to serve the model in response to increases and decreases in traffic. Take care to ramp up traffic according to the model's ability to scale, or you will start seeing increases in latency and 429 response codes.
- Container Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ContainerSpec - Optional. Specifies a custom container to use for serving predictions. If you specify this field, then machineType is required, deploymentUri is optional, and you must not specify runtimeVersion, packageUris, framework, pythonVersion, or predictionClass.
- DeploymentUri string - The Cloud Storage URI of a directory containing trained model artifacts to be used to create the model version. See the guide to deploying models for more information. The total number of files under this directory must not exceed 1000. During projects.models.versions.create, AI Platform Prediction copies all files from the specified directory to a location managed by the service. From then on, AI Platform Prediction uses these copies of the model artifacts to serve predictions, not the original files in Cloud Storage, so this location is useful only as a historical record. If you specify container, this field is optional; otherwise, it is required. Learn how to use this field with a custom container.
- Description string - Optional. The description specified for the version when it was created.
- Etag string - etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates and so avoid race conditions: an etag is returned in the response to GetVersion, and systems are expected to put that etag in the request to UpdateVersion to ensure that their change is applied to the model as intended.
- ExplanationConfig Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ExplanationConfig - Optional. Configures explainability features on the model's version. Some explanation features require additional metadata to be loaded as part of the model payload.
- Framework Pulumi.GoogleNative.Ml.V1.VersionFramework - Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, and XGBOOST. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose SCIKIT_LEARN or XGBOOST, you must also set the runtime version of the model to 1.4 or greater. Do not specify a framework if you're deploying a custom prediction routine or using a custom container.
- Labels Dictionary<string, string> - Optional. One or more labels that you can add to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Note that this field is not updatable for mls1* models.
- MachineType string - Optional. The type of machine on which to serve the model. Currently applies only to the online prediction service. To learn about valid values for this field, read Choosing a machine type for online prediction. If this field is not specified and you are using a regional endpoint, the machine type defaults to n1-standard-2. If this field is not specified and you are using the global endpoint (ml.googleapis.com), the machine type defaults to mls1-c1-m2.
- ManualScaling Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ManualScaling - Manually select the number of nodes to use for serving the model. You should generally use auto_scaling with an appropriate min_nodes instead, but this option is available if you want more predictable billing. Beware that latency and error rates will increase if traffic exceeds the capacity of the selected number of nodes to serve it.
- Name string - The name specified for the version when it was created. The version name must be unique within the model it is created in.
- PackageUris List<string> - Optional. Cloud Storage paths (gs://…) of packages for custom prediction routines or scikit-learn pipelines with custom code. For a custom prediction routine, one of these packages must contain your Predictor class (see predictionClass). Additionally, include any dependencies that your Predictor or scikit-learn pipeline uses and that are not already included in your selected runtime version. If you specify this field, you must also set runtimeVersion to 1.4 or greater.
- PredictionClass string - Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field. Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set runtimeVersion to 1.4 or greater and you must set machineType to a legacy (MLS1) machine type. The Predictor interface is reproduced as a formatted code sample after this list. Learn more about the Predictor interface and custom prediction routines.
- Project string
- RequestLoggingConfig Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__RequestLoggingConfig - Optional. Only specify this field in a projects.models.versions.patch request. Specifying it in a projects.models.versions.create request has no effect. Configures the request-response pair logging on predictions from this Version.
- Routes Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__RouteMap - Optional. Specifies paths on a custom container's HTTP server where AI Platform Prediction sends certain requests. If you specify this field, then you must also specify the container field. If you specify the container field and do not specify this field, it defaults to: { "predict": "/v1/models/MODEL/versions/VERSION:predict", "health": "/v1/models/MODEL/versions/VERSION" }. See RouteMap for more details about these default values.
- ServiceAccount string - Optional. Specifies the service account for resource access control. If you specify this field, then you must also specify either the containerSpec or the predictionClass field. Learn more about using a custom service account.
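The predictionClass description above embeds the Predictor interface inline; it is reproduced here as a formatted Python block (verbatim from the description, not additional API surface):
class Predictor(object):
    """Interface for constructing custom predictors."""

    def predict(self, instances, **kwargs):
        """Performs custom prediction.

        Instances are the decoded values from the request. They have already
        been deserialized from JSON.

        Args:
            instances: A list of prediction input instances.
            **kwargs: A dictionary of keyword args provided as additional
                fields on the predict request body.

        Returns:
            A list of outputs containing the prediction results. This list
            must be JSON serializable.
        """
        raise NotImplementedError()

    @classmethod
    def from_path(cls, model_dir):
        """Creates an instance of Predictor using the given path.

        Loading of the predictor should be done in this method.

        Args:
            model_dir: The local directory that contains the exported model
                file along with any additional files uploaded when creating
                the version resource.

        Returns:
            An instance implementing this Predictor class.
        """
        raise NotImplementedError()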
- ModelId string
- PythonVersion string - The version of Python used in prediction. The following Python versions are available: Python '3.7' is available when runtime_version is set to '1.15' or later; Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'; Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
- RuntimeVersion string - The AI Platform runtime version to use for this deployment. For more information, see the runtime version list and how to manage runtime versions.
- AcceleratorConfig GoogleCloudMlV1__AcceleratorConfigArgs - Optional. Accelerator config for using GPUs for online prediction (beta). Only specify this field if you have specified a Compute Engine (N1) machine type in the machineType field. Learn more about using GPUs for online prediction.
- AutoScaling GoogleCloudMlV1__AutoScalingArgs - Automatically scale the number of nodes used to serve the model in response to increases and decreases in traffic. Take care to ramp up traffic according to the model's ability to scale, or you will start seeing increases in latency and 429 response codes.
- Container GoogleCloudMlV1__ContainerSpecArgs - Optional. Specifies a custom container to use for serving predictions. If you specify this field, then machineType is required, deploymentUri is optional, and you must not specify runtimeVersion, packageUris, framework, pythonVersion, or predictionClass.
- DeploymentUri string - The Cloud Storage URI of a directory containing trained model artifacts to be used to create the model version. See the guide to deploying models for more information. The total number of files under this directory must not exceed 1000. During projects.models.versions.create, AI Platform Prediction copies all files from the specified directory to a location managed by the service. From then on, AI Platform Prediction uses these copies of the model artifacts to serve predictions, not the original files in Cloud Storage, so this location is useful only as a historical record. If you specify container, this field is optional; otherwise, it is required. Learn how to use this field with a custom container.
- Description string - Optional. The description specified for the version when it was created.
- Etag string - etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates and so avoid race conditions: an etag is returned in the response to GetVersion, and systems are expected to put that etag in the request to UpdateVersion to ensure that their change is applied to the model as intended.
- ExplanationConfig GoogleCloudMlV1__ExplanationConfigArgs - Optional. Configures explainability features on the model's version. Some explanation features require additional metadata to be loaded as part of the model payload.
- Framework VersionFramework - Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, and XGBOOST. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose SCIKIT_LEARN or XGBOOST, you must also set the runtime version of the model to 1.4 or greater. Do not specify a framework if you're deploying a custom prediction routine or using a custom container.
- Labels map[string]string - Optional. One or more labels that you can add to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Note that this field is not updatable for mls1* models.
- MachineType string - Optional. The type of machine on which to serve the model. Currently applies only to the online prediction service. To learn about valid values for this field, read Choosing a machine type for online prediction. If this field is not specified and you are using a regional endpoint, the machine type defaults to n1-standard-2. If this field is not specified and you are using the global endpoint (ml.googleapis.com), the machine type defaults to mls1-c1-m2.
- ManualScaling GoogleCloudMlV1__ManualScalingArgs - Manually select the number of nodes to use for serving the model. You should generally use auto_scaling with an appropriate min_nodes instead, but this option is available if you want more predictable billing. Beware that latency and error rates will increase if traffic exceeds the capacity of the selected number of nodes to serve it.
- Name string - The name specified for the version when it was created. The version name must be unique within the model it is created in.
- PackageUris []string - Optional. Cloud Storage paths (gs://…) of packages for custom prediction routines or scikit-learn pipelines with custom code. For a custom prediction routine, one of these packages must contain your Predictor class (see predictionClass). Additionally, include any dependencies that your Predictor or scikit-learn pipeline uses and that are not already included in your selected runtime version. If you specify this field, you must also set runtimeVersion to 1.4 or greater.
- PredictionClass string - Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field. Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set runtimeVersion to 1.4 or greater and you must set machineType to a legacy (MLS1) machine type. For the Predictor interface definition, see the formatted code sample after the first property list above. Learn more about the Predictor interface and custom prediction routines.
- Project string
- RequestLoggingConfig GoogleCloudMlV1__RequestLoggingConfigArgs - Optional. Only specify this field in a projects.models.versions.patch request. Specifying it in a projects.models.versions.create request has no effect. Configures the request-response pair logging on predictions from this Version.
- Routes GoogleCloudMlV1__RouteMapArgs - Optional. Specifies paths on a custom container's HTTP server where AI Platform Prediction sends certain requests. If you specify this field, then you must also specify the container field. If you specify the container field and do not specify this field, it defaults to: { "predict": "/v1/models/MODEL/versions/VERSION:predict", "health": "/v1/models/MODEL/versions/VERSION" }. See RouteMap for more details about these default values.
- ServiceAccount string - Optional. Specifies the service account for resource access control. If you specify this field, then you must also specify either the containerSpec or the predictionClass field. Learn more about using a custom service account.
- modelId String
- pythonVersion String - The version of Python used in prediction. The following Python versions are available: Python '3.7' is available when runtime_version is set to '1.15' or later; Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'; Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
- runtimeVersion String - The AI Platform runtime version to use for this deployment. For more information, see the runtime version list and how to manage runtime versions.
- acceleratorConfig GoogleCloudMlV1__AcceleratorConfig - Optional. Accelerator config for using GPUs for online prediction (beta). Only specify this field if you have specified a Compute Engine (N1) machine type in the machineType field. Learn more about using GPUs for online prediction.
- autoScaling GoogleCloudMlV1__AutoScaling - Automatically scale the number of nodes used to serve the model in response to increases and decreases in traffic. Take care to ramp up traffic according to the model's ability to scale, or you will start seeing increases in latency and 429 response codes.
- container GoogleCloudMlV1__ContainerSpec - Optional. Specifies a custom container to use for serving predictions. If you specify this field, then machineType is required, deploymentUri is optional, and you must not specify runtimeVersion, packageUris, framework, pythonVersion, or predictionClass.
- deploymentUri String - The Cloud Storage URI of a directory containing trained model artifacts to be used to create the model version. See the guide to deploying models for more information. The total number of files under this directory must not exceed 1000. During projects.models.versions.create, AI Platform Prediction copies all files from the specified directory to a location managed by the service. From then on, AI Platform Prediction uses these copies of the model artifacts to serve predictions, not the original files in Cloud Storage, so this location is useful only as a historical record. If you specify container, this field is optional; otherwise, it is required. Learn how to use this field with a custom container.
- description String - Optional. The description specified for the version when it was created.
- etag String - etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates and so avoid race conditions: an etag is returned in the response to GetVersion, and systems are expected to put that etag in the request to UpdateVersion to ensure that their change is applied to the model as intended.
- explanationConfig GoogleCloudMlV1__ExplanationConfig - Optional. Configures explainability features on the model's version. Some explanation features require additional metadata to be loaded as part of the model payload.
- framework VersionFramework - Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, and XGBOOST. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose SCIKIT_LEARN or XGBOOST, you must also set the runtime version of the model to 1.4 or greater. Do not specify a framework if you're deploying a custom prediction routine or using a custom container.
- labels Map<String,String> - Optional. One or more labels that you can add to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Note that this field is not updatable for mls1* models.
- machineType String - Optional. The type of machine on which to serve the model. Currently applies only to the online prediction service. To learn about valid values for this field, read Choosing a machine type for online prediction. If this field is not specified and you are using a regional endpoint, the machine type defaults to n1-standard-2. If this field is not specified and you are using the global endpoint (ml.googleapis.com), the machine type defaults to mls1-c1-m2.
- manualScaling GoogleCloudMlV1__ManualScaling - Manually select the number of nodes to use for serving the model. You should generally use auto_scaling with an appropriate min_nodes instead, but this option is available if you want more predictable billing. Beware that latency and error rates will increase if traffic exceeds the capacity of the selected number of nodes to serve it.
- name String - The name specified for the version when it was created. The version name must be unique within the model it is created in.
- packageUris List<String> - Optional. Cloud Storage paths (gs://…) of packages for custom prediction routines or scikit-learn pipelines with custom code. For a custom prediction routine, one of these packages must contain your Predictor class (see predictionClass). Additionally, include any dependencies that your Predictor or scikit-learn pipeline uses and that are not already included in your selected runtime version. If you specify this field, you must also set runtimeVersion to 1.4 or greater.
- predictionClass String - Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field. Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set runtimeVersion to 1.4 or greater and you must set machineType to a legacy (MLS1) machine type. For the Predictor interface definition, see the formatted code sample after the first property list above. Learn more about the Predictor interface and custom prediction routines.
- project String
- requestLoggingConfig GoogleCloudMlV1__RequestLoggingConfig - Optional. Only specify this field in a projects.models.versions.patch request. Specifying it in a projects.models.versions.create request has no effect. Configures the request-response pair logging on predictions from this Version.
- routes GoogleCloudMlV1__RouteMap - Optional. Specifies paths on a custom container's HTTP server where AI Platform Prediction sends certain requests. If you specify this field, then you must also specify the container field. If you specify the container field and do not specify this field, it defaults to: { "predict": "/v1/models/MODEL/versions/VERSION:predict", "health": "/v1/models/MODEL/versions/VERSION" }. See RouteMap for more details about these default values.
- serviceAccount String - Optional. Specifies the service account for resource access control. If you specify this field, then you must also specify either the containerSpec or the predictionClass field. Learn more about using a custom service account.
- modelId string
- pythonVersion string - The version of Python used in prediction. The following Python versions are available: Python '3.7' is available when runtime_version is set to '1.15' or later; Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'; Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
- runtimeVersion string - The AI Platform runtime version to use for this deployment. For more information, see the runtime version list and how to manage runtime versions.
- acceleratorConfig GoogleCloudMlV1__AcceleratorConfig - Optional. Accelerator config for using GPUs for online prediction (beta). Only specify this field if you have specified a Compute Engine (N1) machine type in the machineType field. Learn more about using GPUs for online prediction.
- autoScaling GoogleCloudMlV1__AutoScaling - Automatically scale the number of nodes used to serve the model in response to increases and decreases in traffic. Take care to ramp up traffic according to the model's ability to scale, or you will start seeing increases in latency and 429 response codes.
- container GoogleCloudMlV1__ContainerSpec - Optional. Specifies a custom container to use for serving predictions. If you specify this field, then machineType is required, deploymentUri is optional, and you must not specify runtimeVersion, packageUris, framework, pythonVersion, or predictionClass.
- deploymentUri string - The Cloud Storage URI of a directory containing trained model artifacts to be used to create the model version. See the guide to deploying models for more information. The total number of files under this directory must not exceed 1000. During projects.models.versions.create, AI Platform Prediction copies all files from the specified directory to a location managed by the service. From then on, AI Platform Prediction uses these copies of the model artifacts to serve predictions, not the original files in Cloud Storage, so this location is useful only as a historical record. If you specify container, this field is optional; otherwise, it is required. Learn how to use this field with a custom container.
- description string - Optional. The description specified for the version when it was created.
- etag string - etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates and so avoid race conditions: an etag is returned in the response to GetVersion, and systems are expected to put that etag in the request to UpdateVersion to ensure that their change is applied to the model as intended.
- explanationConfig GoogleCloudMlV1__ExplanationConfig - Optional. Configures explainability features on the model's version. Some explanation features require additional metadata to be loaded as part of the model payload.
- framework VersionFramework - Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, and XGBOOST. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose SCIKIT_LEARN or XGBOOST, you must also set the runtime version of the model to 1.4 or greater. Do not specify a framework if you're deploying a custom prediction routine or using a custom container.
- labels {[key: string]: string} - Optional. One or more labels that you can add to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Note that this field is not updatable for mls1* models.
- machineType string - Optional. The type of machine on which to serve the model. Currently applies only to the online prediction service. To learn about valid values for this field, read Choosing a machine type for online prediction. If this field is not specified and you are using a regional endpoint, the machine type defaults to n1-standard-2. If this field is not specified and you are using the global endpoint (ml.googleapis.com), the machine type defaults to mls1-c1-m2.
- manualScaling GoogleCloudMlV1__ManualScaling - Manually select the number of nodes to use for serving the model. You should generally use auto_scaling with an appropriate min_nodes instead, but this option is available if you want more predictable billing. Beware that latency and error rates will increase if traffic exceeds the capacity of the selected number of nodes to serve it.
- name string - The name specified for the version when it was created. The version name must be unique within the model it is created in.
- packageUris string[] - Optional. Cloud Storage paths (gs://…) of packages for custom prediction routines or scikit-learn pipelines with custom code. For a custom prediction routine, one of these packages must contain your Predictor class (see predictionClass). Additionally, include any dependencies that your Predictor or scikit-learn pipeline uses and that are not already included in your selected runtime version. If you specify this field, you must also set runtimeVersion to 1.4 or greater.
- predictionClass string - Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field. Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set runtimeVersion to 1.4 or greater and you must set machineType to a legacy (MLS1) machine type. For the Predictor interface definition, see the formatted code sample after the first property list above. Learn more about the Predictor interface and custom prediction routines.
- project string
- requestLoggingConfig GoogleCloudMlV1__RequestLoggingConfig - Optional. Only specify this field in a projects.models.versions.patch request. Specifying it in a projects.models.versions.create request has no effect. Configures the request-response pair logging on predictions from this Version.
- routes GoogleCloudMlV1__RouteMap - Optional. Specifies paths on a custom container's HTTP server where AI Platform Prediction sends certain requests. If you specify this field, then you must also specify the container field. If you specify the container field and do not specify this field, it defaults to: { "predict": "/v1/models/MODEL/versions/VERSION:predict", "health": "/v1/models/MODEL/versions/VERSION" }. See RouteMap for more details about these default values.
- serviceAccount string - Optional. Specifies the service account for resource access control. If you specify this field, then you must also specify either the containerSpec or the predictionClass field. Learn more about using a custom service account.
- model_id str
- python_version str - The version of Python used in prediction. The following Python versions are available: Python '3.7' is available when runtime_version is set to '1.15' or later; Python '3.5' is available when runtime_version is set to a version from '1.4' to '1.14'; Python '2.7' is available when runtime_version is set to '1.15' or earlier. Read more about the Python versions available for each runtime version.
- runtime_version str - The AI Platform runtime version to use for this deployment. For more information, see the runtime version list and how to manage runtime versions.
- accelerator_config GoogleCloudMlV1__AcceleratorConfigArgs - Optional. Accelerator config for using GPUs for online prediction (beta). Only specify this field if you have specified a Compute Engine (N1) machine type in the machineType field. Learn more about using GPUs for online prediction.
- auto_scaling GoogleCloudMlV1__AutoScalingArgs - Automatically scale the number of nodes used to serve the model in response to increases and decreases in traffic. Take care to ramp up traffic according to the model's ability to scale, or you will start seeing increases in latency and 429 response codes.
- container GoogleCloudMlV1__ContainerSpecArgs - Optional. Specifies a custom container to use for serving predictions. If you specify this field, then machineType is required, deploymentUri is optional, and you must not specify runtimeVersion, packageUris, framework, pythonVersion, or predictionClass.
- deployment_uri str - The Cloud Storage URI of a directory containing trained model artifacts to be used to create the model version. See the guide to deploying models for more information. The total number of files under this directory must not exceed 1000. During projects.models.versions.create, AI Platform Prediction copies all files from the specified directory to a location managed by the service. From then on, AI Platform Prediction uses these copies of the model artifacts to serve predictions, not the original files in Cloud Storage, so this location is useful only as a historical record. If you specify container, this field is optional; otherwise, it is required. Learn how to use this field with a custom container.
- description str - Optional. The description specified for the version when it was created.
- etag str - etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates and so avoid race conditions: an etag is returned in the response to GetVersion, and systems are expected to put that etag in the request to UpdateVersion to ensure that their change is applied to the model as intended.
- explanation_config GoogleCloudMlV1__ExplanationConfigArgs - Optional. Configures explainability features on the model's version. Some explanation features require additional metadata to be loaded as part of the model payload.
- framework VersionFramework - Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, and XGBOOST. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose SCIKIT_LEARN or XGBOOST, you must also set the runtime version of the model to 1.4 or greater. Do not specify a framework if you're deploying a custom prediction routine or using a custom container.
- labels Mapping[str, str] - Optional. One or more labels that you can add to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Note that this field is not updatable for mls1* models.
- machine_type str - Optional. The type of machine on which to serve the model. Currently applies only to the online prediction service. To learn about valid values for this field, read Choosing a machine type for online prediction. If this field is not specified and you are using a regional endpoint, the machine type defaults to n1-standard-2. If this field is not specified and you are using the global endpoint (ml.googleapis.com), the machine type defaults to mls1-c1-m2.
- manual_scaling GoogleCloudMlV1__ManualScalingArgs - Manually select the number of nodes to use for serving the model. You should generally use auto_scaling with an appropriate min_nodes instead, but this option is available if you want more predictable billing. Beware that latency and error rates will increase if traffic exceeds the capacity of the selected number of nodes to serve it.
- name str - The name specified for the version when it was created. The version name must be unique within the model it is created in.
- package_uris Sequence[str] - Optional. Cloud Storage paths (gs://…) of packages for custom prediction routines or scikit-learn pipelines with custom code. For a custom prediction routine, one of these packages must contain your Predictor class (see predictionClass). Additionally, include any dependencies that your Predictor or scikit-learn pipeline uses and that are not already included in your selected runtime version. If you specify this field, you must also set runtimeVersion to 1.4 or greater.
- prediction_class str - Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field. Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set runtimeVersion to 1.4 or greater and you must set machineType to a legacy (MLS1) machine type. For the Predictor interface definition, see the formatted code sample after the first property list above. Learn more about the Predictor interface and custom prediction routines.
- project str
- request_logging_config GoogleCloudMlV1__RequestLoggingConfigArgs - Optional. Only specify this field in a projects.models.versions.patch request. Specifying it in a projects.models.versions.create request has no effect. Configures the request-response pair logging on predictions from this Version.
- routes GoogleCloudMlV1__RouteMapArgs - Optional. Specifies paths on a custom container's HTTP server where AI Platform Prediction sends certain requests. If you specify this field, then you must also specify the container field. If you specify the container field and do not specify this field, it defaults to: { "predict": "/v1/models/MODEL/versions/VERSION:predict", "health": "/v1/models/MODEL/versions/VERSION" }. See RouteMap for more details about these default values.
- service_account str - Optional. Specifies the service account for resource access control. If you specify this field, then you must also specify either the containerSpec or the predictionClass field. Learn more about using a custom service account.
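The etag property above describes a read-modify-write cycle against the underlying projects.models.versions API. A rough sketch of that cycle with the google-api-python-client (an assumed client library; the resource name and field values are placeholders, and the patch call returns a long-running operation):
from googleapiclient import discovery

ml = discovery.build("ml", "v1")
name = "projects/PROJECT/models/MODEL/versions/VERSION"  # placeholder

# Read: the response carries the current etag.
version = ml.projects().models().versions().get(name=name).execute()

# Write: echo the etag back so a concurrent update is detected rather
# than silently overwritten.
body = {"description": "updated description", "etag": version["etag"]}
op = ml.projects().models().versions().patch(
    name=name, updateMask="description", body=body).execute()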
- model
Id String - python
Version String - The version of Python used in prediction. The following Python versions are available: * Python '3.7' is available when
runtime_version
is set to '1.15' or later. * Python '3.5' is available whenruntime_version
is set to a version from '1.4' to '1.14'. * Python '2.7' is available whenruntime_version
is set to '1.15' or earlier. Read more about the Python versions available for each runtime version. - runtime
Version String - The AI Platform runtime version to use for this deployment. For more information, see the runtime version list and how to manage runtime versions.
- accelerator
Config Property Map - Optional. Accelerator config for using GPUs for online prediction (beta). Only specify this field if you have specified a Compute Engine (N1) machine type in the
machineType
field. Learn more about using GPUs for online prediction. - auto
Scaling Property Map - Automatically scale the number of nodes used to serve the model in response to increases and decreases in traffic. Care should be taken to ramp up traffic according to the model's ability to scale or you will start seeing increases in latency and 429 response codes.
- container Property Map
- Optional. Specifies a custom container to use for serving predictions. If you specify this field, then `machineType` is required. If you specify this field, then `deploymentUri` is optional. If you specify this field, then you must not specify `runtimeVersion`, `packageUris`, `framework`, `pythonVersion`, or `predictionClass`.
- deploymentUri String
- The Cloud Storage URI of a directory containing trained model artifacts to be used to create the model version. See the guide to deploying models for more information. The total number of files under this directory must not exceed 1000. During projects.models.versions.create, AI Platform Prediction copies all files from the specified directory to a location managed by the service. From then on, AI Platform Prediction uses these copies of the model artifacts to serve predictions, not the original files in Cloud Storage, so this location is useful only as a historical record. If you specify container, then this field is optional. Otherwise, it is required. Learn how to use this field with a custom container.
- description String
- Optional. The description specified for the version when it was created.
- etag String
- `etag` is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the `etag` in the read-modify-write cycle to perform model updates in order to avoid race conditions: an `etag` is returned in the response to `GetVersion`, and systems are expected to put that etag in the request to `UpdateVersion` to ensure that their change will be applied to the model as intended.
- explanationConfig Property Map
- Optional. Configures explainability features on the model's version. Some explanation features require additional metadata to be loaded as part of the model payload.
- framework "FRAMEWORK_UNSPECIFIED" | "TENSORFLOW" | "SCIKIT_LEARN" | "XGBOOST"
- Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`, and `XGBOOST`. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version of the model to 1.4 or greater. Do not specify a framework if you're deploying a custom prediction routine or if you're using a custom container.
- labels Map<String>
- Optional. One or more labels that you can add to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Note that this field is not updatable for mls1* models.
- machineType String
- Optional. The type of machine on which to serve the model. Currently only applies to the online prediction service. To learn about valid values for this field, read Choosing a machine type for online prediction. If this field is not specified and you are using a regional endpoint, then the machine type defaults to `n1-standard-2`. If this field is not specified and you are using the global endpoint (`ml.googleapis.com`), then the machine type defaults to `mls1-c1-m2`.
- manualScaling Property Map
- Manually select the number of nodes to use for serving the model. You should generally use `auto_scaling` with an appropriate `min_nodes` instead, but this option is available if you want more predictable billing. Beware that latency and error rates will increase if the traffic exceeds the capability of the system to serve it based on the selected number of nodes.
- name String
- The name specified for the version when it was created. The version name must be unique within the model it is created in.
- packageUris List<String>
- Optional. Cloud Storage paths (`gs://…`) of packages for custom prediction routines or scikit-learn pipelines with custom code. For a custom prediction routine, one of these packages must contain your Predictor class (see `predictionClass`). Additionally, include any dependencies your Predictor or scikit-learn pipeline uses that are not already included in your selected runtime version. If you specify this field, you must also set `runtimeVersion` to 1.4 or greater.
- predictionClass String
- Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the `packageUris` field. Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set `runtimeVersion` to 1.4 or greater and you must set `machineType` to a legacy (MLS1) machine type. The following code sample provides the Predictor interface:

```python
class Predictor(object):
    """Interface for constructing custom predictors."""

    def predict(self, instances, **kwargs):
        """Performs custom prediction.

        Instances are the decoded values from the request. They have already
        been deserialized from JSON.

        Args:
            instances: A list of prediction input instances.
            **kwargs: A dictionary of keyword args provided as additional
                fields on the predict request body.

        Returns:
            A list of outputs containing the prediction results. This list
            must be JSON serializable.
        """
        raise NotImplementedError()

    @classmethod
    def from_path(cls, model_dir):
        """Creates an instance of Predictor using the given path.

        Loading of the predictor should be done in this method.

        Args:
            model_dir: The local directory that contains the exported model
                file along with any additional files uploaded when creating
                the version resource.

        Returns:
            An instance implementing this Predictor class.
        """
        raise NotImplementedError()
```

Learn more about the Predictor interface and custom prediction routines. A deployment sketch follows this property list.
- project String
- requestLoggingConfig Property Map
- Optional. Only specify this field in a projects.models.versions.patch request. Specifying it in a projects.models.versions.create request has no effect. Configures the request-response pair logging on predictions from this Version.
- routes Property Map
- Optional. Specifies paths on a custom container's HTTP server where AI Platform Prediction sends certain requests. If you specify this field, then you must also specify the `container` field. If you specify the `container` field and do not specify this field, it defaults to the following:
```json
{
  "predict": "/v1/models/MODEL/versions/VERSION:predict",
  "health": "/v1/models/MODEL/versions/VERSION"
}
```
See RouteMap for more details about these default values.
- serviceAccount String
- Optional. Specifies the service account for resource access control. If you specify this field, then you must also specify either the `containerSpec` or the `predictionClass` field. Learn more about using a custom service account.
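Tying together `package_uris`, `prediction_class`, `runtime_version`, and `machine_type` for a custom prediction routine, here is a minimal sketch in the Python SDK. The model ID, bucket paths, and the `my_predictor.MyPredictor` module and class are hypothetical:

```python
import pulumi_google_native as google_native

# Sketch of a custom prediction routine deployment. `prediction_class`
# requires runtime_version >= 1.4 and a legacy (MLS1) machine type, per
# the property descriptions above.
cpr_version = google_native.ml.v1.Version(
    "custom-predictor",
    model_id="my-model",  # placeholder parent model
    name="v-cpr",
    deployment_uri="gs://my-bucket/model/",  # placeholder artifact directory
    package_uris=["gs://my-bucket/packages/my_predictor-0.1.tar.gz"],
    prediction_class="my_predictor.MyPredictor",  # module_name.class_name
    runtime_version="1.15",
    python_version="3.7",
    machine_type="mls1-c1-m2",  # legacy (MLS1) machine type required
)
```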
Outputs
All input properties are implicitly available as output properties. Additionally, the Version resource produces the following output properties:
- CreateTime string
- The time the version was created.
- ErrorMessage string
- The details of a failure or a cancellation.
- Id string
- The provider-assigned unique ID for this managed resource.
- IsDefault bool
- If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault.
- LastMigrationModelId string
- The AI Platform (Unified) `Model` ID for the last model migration.
- LastMigrationTime string
- The last time this version was successfully migrated to AI Platform (Unified).
- LastUseTime string
- The time the version was last used for prediction.
- State string
- The state of a version.
- CreateTime string
- The time the version was created.
- ErrorMessage string
- The details of a failure or a cancellation.
- Id string
- The provider-assigned unique ID for this managed resource.
- IsDefault bool
- If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault.
- LastMigrationModelId string
- The AI Platform (Unified) `Model` ID for the last model migration.
- LastMigrationTime string
- The last time this version was successfully migrated to AI Platform (Unified).
- LastUseTime string
- The time the version was last used for prediction.
- State string
- The state of a version.
- createTime String
- The time the version was created.
- errorMessage String
- The details of a failure or a cancellation.
- id String
- The provider-assigned unique ID for this managed resource.
- isDefault Boolean
- If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault.
- lastMigrationModelId String
- The AI Platform (Unified) `Model` ID for the last model migration.
- lastMigrationTime String
- The last time this version was successfully migrated to AI Platform (Unified).
- lastUseTime String
- The time the version was last used for prediction.
- state String
- The state of a version.
- createTime string
- The time the version was created.
- errorMessage string
- The details of a failure or a cancellation.
- id string
- The provider-assigned unique ID for this managed resource.
- isDefault boolean
- If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault.
- lastMigrationModelId string
- The AI Platform (Unified) `Model` ID for the last model migration.
- lastMigrationTime string
- The last time this version was successfully migrated to AI Platform (Unified).
- lastUseTime string
- The time the version was last used for prediction.
- state string
- The state of a version.
- create_time str
- The time the version was created.
- error_message str
- The details of a failure or a cancellation.
- id str
- The provider-assigned unique ID for this managed resource.
- is_default bool
- If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault.
- last_migration_model_id str
- The AI Platform (Unified) `Model` ID for the last model migration.
- last_migration_time str
- The last time this version was successfully migrated to AI Platform (Unified).
- last_use_time str
- The time the version was last used for prediction.
- state str
- The state of a version.
- createTime String
- The time the version was created.
- errorMessage String
- The details of a failure or a cancellation.
- id String
- The provider-assigned unique ID for this managed resource.
- isDefault Boolean
- If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault.
- lastMigrationModelId String
- The AI Platform (Unified) `Model` ID for the last model migration.
- lastMigrationTime String
- The last time this version was successfully migrated to AI Platform (Unified).
- lastUseTime String
- The time the version was last used for prediction.
- state String
- The state of a version.
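Output properties can be exported like any other Pulumi outputs. A short sketch using the Python property names listed above (the model ID and bucket are placeholders):

```python
import pulumi
import pulumi_google_native as google_native

# A plain deployed-model version; arguments are illustrative placeholders.
version = google_native.ml.v1.Version(
    "example-version",
    model_id="my-model",
    name="v1",
    deployment_uri="gs://my-bucket/model/",
    runtime_version="1.15",
    python_version="3.7",
)

# Surface a few of the documented output properties as stack outputs.
pulumi.export("state", version.state)
pulumi.export("is_default", version.is_default)
pulumi.export("create_time", version.create_time)
```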
Supporting Types
GoogleCloudMlV1__AcceleratorConfig, GoogleCloudMlV1__AcceleratorConfigArgs
- Count string
- The number of accelerators to attach to each machine running the job.
- Type Pulumi.GoogleNative.Ml.V1.GoogleCloudMlV1__AcceleratorConfigType
- The type of accelerator to use.
- Count string
- The number of accelerators to attach to each machine running the job.
- Type GoogleCloudMlV1__AcceleratorConfigType
- The type of accelerator to use.
- count String
- The number of accelerators to attach to each machine running the job.
- type GoogleCloudMlV1__AcceleratorConfigType
- The type of accelerator to use.
- count string
- The number of accelerators to attach to each machine running the job.
- type GoogleCloudMlV1__AcceleratorConfigType
- The type of accelerator to use.
- count str
- The number of accelerators to attach to each machine running the job.
- type GoogleCloudMlV1AcceleratorConfigType
- The type of accelerator to use.
- count String
- The number of accelerators to attach to each machine running the job.
- type "ACCELERATOR_TYPE_UNSPECIFIED" | "NVIDIA_TESLA_K80" | "NVIDIA_TESLA_P100" | "NVIDIA_TESLA_V100" | "NVIDIA_TESLA_P4" | "NVIDIA_TESLA_T4" | "NVIDIA_TESLA_A100" | "TPU_V2" | "TPU_V3" | "TPU_V2_POD" | "TPU_V3_POD" | "TPU_V4_POD"
- The type of accelerator to use.
GoogleCloudMlV1__AcceleratorConfigResponse, GoogleCloudMlV1__AcceleratorConfigResponseArgs
GoogleCloudMlV1__AcceleratorConfigType, GoogleCloudMlV1__AcceleratorConfigTypeArgs
- AcceleratorTypeUnspecified
- ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- TpuV2Pod
- TPU_V2_POD: TPU v2 POD.
- TpuV3Pod
- TPU_V3_POD: TPU v3 POD.
- TpuV4Pod
- TPU_V4_POD: TPU v4 POD.
- GoogleCloudMlV1__AcceleratorConfigTypeAcceleratorTypeUnspecified
- ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- GoogleCloudMlV1__AcceleratorConfigTypeNvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- GoogleCloudMlV1__AcceleratorConfigTypeNvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- GoogleCloudMlV1__AcceleratorConfigTypeNvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia V100 GPU.
- GoogleCloudMlV1__AcceleratorConfigTypeNvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- GoogleCloudMlV1__AcceleratorConfigTypeNvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia T4 GPU.
- GoogleCloudMlV1__AcceleratorConfigTypeNvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia A100 GPU.
- GoogleCloudMlV1__AcceleratorConfigTypeTpuV2
- TPU_V2: TPU v2.
- GoogleCloudMlV1__AcceleratorConfigTypeTpuV3
- TPU_V3: TPU v3.
- GoogleCloudMlV1__AcceleratorConfigTypeTpuV2Pod
- TPU_V2_POD: TPU v2 POD.
- GoogleCloudMlV1__AcceleratorConfigTypeTpuV3Pod
- TPU_V3_POD: TPU v3 POD.
- GoogleCloudMlV1__AcceleratorConfigTypeTpuV4Pod
- TPU_V4_POD: TPU v4 POD.
- AcceleratorTypeUnspecified
- ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- TpuV2Pod
- TPU_V2_POD: TPU v2 POD.
- TpuV3Pod
- TPU_V3_POD: TPU v3 POD.
- TpuV4Pod
- TPU_V4_POD: TPU v4 POD.
- AcceleratorTypeUnspecified
- ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- NvidiaTeslaK80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NvidiaTeslaP100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NvidiaTeslaV100
- NVIDIA_TESLA_V100: Nvidia V100 GPU.
- NvidiaTeslaP4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NvidiaTeslaT4
- NVIDIA_TESLA_T4: Nvidia T4 GPU.
- NvidiaTeslaA100
- NVIDIA_TESLA_A100: Nvidia A100 GPU.
- TpuV2
- TPU_V2: TPU v2.
- TpuV3
- TPU_V3: TPU v3.
- TpuV2Pod
- TPU_V2_POD: TPU v2 POD.
- TpuV3Pod
- TPU_V3_POD: TPU v3 POD.
- TpuV4Pod
- TPU_V4_POD: TPU v4 POD.
- ACCELERATOR_TYPE_UNSPECIFIED
- ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- NVIDIA_TESLA_K80
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- NVIDIA_TESLA_P100
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- NVIDIA_TESLA_V100
- NVIDIA_TESLA_V100: Nvidia V100 GPU.
- NVIDIA_TESLA_P4
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- NVIDIA_TESLA_T4
- NVIDIA_TESLA_T4: Nvidia T4 GPU.
- NVIDIA_TESLA_A100
- NVIDIA_TESLA_A100: Nvidia A100 GPU.
- TPU_V2
- TPU_V2: TPU v2.
- TPU_V3
- TPU_V3: TPU v3.
- TPU_V2_POD
- TPU_V2_POD: TPU v2 POD.
- TPU_V3_POD
- TPU_V3_POD: TPU v3 POD.
- TPU_V4_POD
- TPU_V4_POD: TPU v4 POD.
- "ACCELERATOR_TYPE_UNSPECIFIED"
- ACCELERATOR_TYPE_UNSPECIFIED: Unspecified accelerator type. Default to no GPU.
- "NVIDIA_TESLA_K80"
- NVIDIA_TESLA_K80: Nvidia Tesla K80 GPU.
- "NVIDIA_TESLA_P100"
- NVIDIA_TESLA_P100: Nvidia Tesla P100 GPU.
- "NVIDIA_TESLA_V100"
- NVIDIA_TESLA_V100: Nvidia V100 GPU.
- "NVIDIA_TESLA_P4"
- NVIDIA_TESLA_P4: Nvidia Tesla P4 GPU.
- "NVIDIA_TESLA_T4"
- NVIDIA_TESLA_T4: Nvidia T4 GPU.
- "NVIDIA_TESLA_A100"
- NVIDIA_TESLA_A100: Nvidia A100 GPU.
- "TPU_V2"
- TPU_V2: TPU v2.
- "TPU_V3"
- TPU_V3: TPU v3.
- "TPU_V2_POD"
- TPU_V2_POD: TPU v2 POD.
- "TPU_V3_POD"
- TPU_V3_POD: TPU v3 POD.
- "TPU_V4_POD"
- TPU_V4_POD: TPU v4 POD.
GoogleCloudMlV1__AutoScaling, GoogleCloudMlV1__AutoScalingArgs
- MaxNodes int
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- Metrics List<Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__MetricSpec>
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- MinNodes int
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
- MaxNodes int
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- Metrics []GoogleCloudMlV1__MetricSpec
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- MinNodes int
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
- maxNodes Integer
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- metrics List<GoogleCloudMlV1__MetricSpec>
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- minNodes Integer
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
- maxNodes number
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- metrics GoogleCloudMlV1__MetricSpec[]
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- minNodes number
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
- max_nodes int
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- metrics Sequence[GoogleCloudMlV1MetricSpec]
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- min_nodes int
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
- maxNodes Number
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- metrics List<Property Map>
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- minNodes Number
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
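In a Pulumi program, the equivalent of the PATCH example above is simply editing the resource arguments and running `pulumi up`. A minimal sketch, with placeholder model ID, bucket, and node counts:

```python
import pulumi_google_native as google_native

autoscaled = google_native.ml.v1.Version(
    "autoscaled-version",
    model_id="my-model",  # placeholder parent model
    name="v-auto",
    deployment_uri="gs://my-bucket/model/",  # placeholder artifacts
    runtime_version="1.15",
    python_version="3.7",
    auto_scaling=google_native.ml.v1.GoogleCloudMlV1__AutoScalingArgs(
        min_nodes=1,  # kept warm and billed even when idle
        max_nodes=5,  # upper bound, subject to quota and availability
    ),
)
```

Changing `min_nodes` here and re-running the deployment updates the existing version, much as the documented `update_mask=autoScaling.minNodes` request does; `manual_scaling` is the fixed-node alternative when predictable billing matters more than elasticity.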
GoogleCloudMlV1__AutoScalingResponse, GoogleCloudMlV1__AutoScalingResponseArgs
- MaxNodes int
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- Metrics List<Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__MetricSpecResponse>
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- MinNodes int
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
- MaxNodes int
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- Metrics []GoogleCloudMlV1__MetricSpecResponse
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- MinNodes int
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
- maxNodes Integer
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- metrics List<GoogleCloudMlV1__MetricSpecResponse>
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- minNodes Integer
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
- maxNodes number
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- metrics GoogleCloudMlV1__MetricSpecResponse[]
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- minNodes number
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
- max_nodes int
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- metrics Sequence[GoogleCloudMlV1MetricSpecResponse]
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- min_nodes int
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
- maxNodes Number
- The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability.
- metrics List<Property Map>
- MetricSpec contains the specifications to use to calculate the desired nodes count.
- minNodes Number
- Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least `rate` * `min_nodes` * number of hours since last billing cycle, where `rate` is the cost per node-hour as documented in the pricing guide, even if no predictions are performed. There is additional cost for each prediction performed. Unlike manual scaling, if the load gets too heavy for the nodes that are up, the service will automatically add nodes to handle the increased load as well as scale back as traffic drops, always maintaining at least `min_nodes`. You will be charged for the time in which additional nodes are used. If `min_nodes` is not specified and AutoScaling is used with a legacy (MLS1) machine type, `min_nodes` defaults to 0, in which case, when traffic to a model stops (and after a cool-down period), nodes will be shut down and no charges will be incurred until traffic to the model resumes. If `min_nodes` is not specified and AutoScaling is used with a Compute Engine (N1) machine type, `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a Compute Engine machine type. You can set `min_nodes` when creating the model version, and you can also update `min_nodes` for an existing version:
```
update_body.json:
{ "autoScaling": { "minNodes": 5 } }

HTTP request:
PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes -d @./update_body.json
```
GoogleCloudMlV1__ContainerPort, GoogleCloudMlV1__ContainerPortArgs
- ContainerPort int
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
- ContainerPort int
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
- containerPort Integer
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
- containerPort number
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
- container_port int
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
- containerPort Number
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
GoogleCloudMlV1__ContainerPortResponse, GoogleCloudMlV1__ContainerPortResponseArgs
- ContainerPort int
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
- ContainerPort int
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
- containerPort Integer
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
- containerPort number
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
- container_port int
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
- containerPort Number
- Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536.
GoogleCloudMlV1__ContainerSpec, GoogleCloudMlV1__ContainerSpecArgs
- Args List<string>
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's `CMD`. Specify this field as an array of executable and arguments, similar to a Docker `CMD`'s "default parameters" form. If you don't specify this field but do specify the command field, then the command from the `command` field runs without any additional arguments. See the Kubernetes documentation about how the `command` and `args` fields interact with a container's `ENTRYPOINT` and `CMD`. If you don't specify this field and don't specify the `command` field, then the container's `ENTRYPOINT` and `CMD` determine what runs based on their default behavior. See the Docker documentation about how `CMD` and `ENTRYPOINT` interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with `$$`; for example: $$(VARIABLE_NAME) This field corresponds to the `args` field of the Kubernetes Containers v1 core API.
- Command List<string>
- Immutable. Specifies the command that runs when the container starts. This overrides the container's `ENTRYPOINT`. Specify this field as an array of executable and arguments, similar to a Docker `ENTRYPOINT`'s "exec" form, not its "shell" form. If you do not specify this field, then the container's `ENTRYPOINT` runs, in conjunction with the args field or the container's `CMD`, if either exists. If this field is not specified and the container does not have an `ENTRYPOINT`, then refer to the Docker documentation about how `CMD` and `ENTRYPOINT` interact. If you specify this field, then you can also specify the `args` field to provide additional arguments for this command. However, if you specify this field, then the container's `CMD` is ignored. See the Kubernetes documentation about how the `command` and `args` fields interact with a container's `ENTRYPOINT` and `CMD`. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with `$$`; for example: $$(VARIABLE_NAME) This field corresponds to the `command` field of the Kubernetes Containers v1 core API.
- Env List<Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__EnvVar>
- Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable `VAR_2` to have the value `foo bar`:
```json
[
  { "name": "VAR_1", "value": "foo" },
  { "name": "VAR_2", "value": "$(VAR_1) bar" }
]
```
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the `env` field of the Kubernetes Containers v1 core API.
- Image string
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname `{REGION}-docker.pkg.dev`, where `{REGION}` is replaced by the region that matches the AI Platform Prediction regional endpoint that you are using. For example, if you are using the `us-central1-ml.googleapis.com` endpoint, then this URI must begin with `us-central1-docker.pkg.dev`. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format: `service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com`, where {PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has the necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your project's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements.
- Ports List<Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ContainerPort>
- Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value:
```json
[
  { "containerPort": 8080 }
]
```
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to the `ports` field of the Kubernetes Containers v1 core API.
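Read as Pulumi inputs, the fields above combine as in this minimal sketch. The image and env values are placeholders, and `GoogleCloudMlV1__EnvVarArgs` is the assumed args class for the EnvVar type referenced above:

```python
import pulumi_google_native as google_native

# Sketch of a ContainerSpec; the env example mirrors the documented
# VAR_1/VAR_2 expansion behavior.
container_spec = google_native.ml.v1.GoogleCloudMlV1__ContainerSpecArgs(
    image="us-central1-docker.pkg.dev/my-project/my-repo/my-server:latest",
    command=["python3", "server.py"],  # overrides the image ENTRYPOINT
    args=["--mode", "$(VAR_2)"],       # $(VAR) references are expanded by the service
    env=[
        google_native.ml.v1.GoogleCloudMlV1__EnvVarArgs(name="VAR_1", value="foo"),
        # Later entries may reference earlier ones, so VAR_2 becomes "foo bar".
        google_native.ml.v1.GoogleCloudMlV1__EnvVarArgs(name="VAR_2", value="$(VAR_1) bar"),
    ],
    ports=[google_native.ml.v1.GoogleCloudMlV1__ContainerPortArgs(container_port=8080)],
)
```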
- Args []string
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's
CMD
. Specify this field as an array of executable and arguments, similar to a DockerCMD
's "default parameters" form. If you don't specify this field but do specify the command field, then the command from thecommand
field runs without any additional arguments. See the Kubernetes documentation about how thecommand
andargs
fields interact with a container'sENTRYPOINT
andCMD
. If you don't specify this field and don't specify thecommmand
field, then the container'sENTRYPOINT
andCMD
determine what runs based on their default behavior. See the Docker documentation about howCMD
andENTRYPOINT
interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with$$
; for example: $$(VARIABLE_NAME) This field corresponds to theargs
field of the Kubernetes Containers v1 core API. - Command []string
- Immutable. Specifies the command that runs when the container starts. This overrides the container's
ENTRYPOINT
. Specify this field as an array of executable and arguments, similar to a DockerENTRYPOINT
's "exec" form, not its "shell" form. If you do not specify this field, then the container'sENTRYPOINT
runs, in conjunction with the args field or the container'sCMD
, if either exists. If this field is not specified and the container does not have anENTRYPOINT
, then refer to the Docker documentation about howCMD
andENTRYPOINT
interact. If you specify this field, then you can also specify theargs
field to provide additional arguments for this command. However, if you specify this field, then the container'sCMD
is ignored. See the Kubernetes documentation about how thecommand
andargs
fields interact with a container'sENTRYPOINT
andCMD
. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with$$
; for example: $$(VARIABLE_NAME) This field corresponds to thecommand
field of the Kubernetes Containers v1 core API. - Env
[]Google
Cloud Ml V1__Env Var - Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable
VAR_2
to have the valuefoo bar
:json [ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to theenv
field of the Kubernetes Containers v1 core API. - Image string
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname
{REGION}-docker.pkg.dev
, where{REGION}
is replaced by the region that matches AI Platform Prediction regional endpoint that you are using. For example, if you are using theus-central1-ml.googleapis.com
endpoint, then this URI must begin withus-central1-docker.pkg.dev
. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format:service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com
{PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your projet's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements. - Ports
[]Google
Cloud Ml V1__Container Port - Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to following value:
json [ { "containerPort": 8080 } ]
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to theports
field of the Kubernetes Containers v1 core API.
- args List<String>
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's
CMD
. Specify this field as an array of executable and arguments, similar to a DockerCMD
's "default parameters" form. If you don't specify this field but do specify the command field, then the command from thecommand
field runs without any additional arguments. See the Kubernetes documentation about how thecommand
andargs
fields interact with a container'sENTRYPOINT
andCMD
. If you don't specify this field and don't specify thecommmand
field, then the container'sENTRYPOINT
andCMD
determine what runs based on their default behavior. See the Docker documentation about howCMD
andENTRYPOINT
interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with$$
; for example: $$(VARIABLE_NAME) This field corresponds to theargs
field of the Kubernetes Containers v1 core API. - command List<String>
- Immutable. Specifies the command that runs when the container starts. This overrides the container's
ENTRYPOINT
. Specify this field as an array of executable and arguments, similar to a DockerENTRYPOINT
's "exec" form, not its "shell" form. If you do not specify this field, then the container'sENTRYPOINT
runs, in conjunction with the args field or the container'sCMD
, if either exists. If this field is not specified and the container does not have anENTRYPOINT
, then refer to the Docker documentation about howCMD
andENTRYPOINT
interact. If you specify this field, then you can also specify theargs
field to provide additional arguments for this command. However, if you specify this field, then the container'sCMD
is ignored. See the Kubernetes documentation about how thecommand
andargs
fields interact with a container'sENTRYPOINT
andCMD
. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $( VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with$$
; for example: $$(VARIABLE_NAME) This field corresponds to thecommand
field of the Kubernetes Containers v1 core API. - env
List<Google
Cloud Ml V1__Env Var> - Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable
VAR_2
to have the valuefoo bar
:json [ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to theenv
field of the Kubernetes Containers v1 core API. - image String
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname
{REGION}-docker.pkg.dev
, where{REGION}
is replaced by the region that matches AI Platform Prediction regional endpoint that you are using. For example, if you are using theus-central1-ml.googleapis.com
endpoint, then this URI must begin withus-central1-docker.pkg.dev
. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format:service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com
{PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your projet's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements. - ports
List<Google
Cloud Ml V1__Container Port> - Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to following value:
json [ { "containerPort": 8080 } ]
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to theports
field of the Kubernetes Containers v1 core API.
- args string[]
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form. If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the args field of the Kubernetes Containers v1 core API.
- command string[]
- Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form. If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact. If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the command field of the Kubernetes Containers v1 core API.
- env GoogleCloudMlV1__EnvVar[]
- Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable VAR_2 to the value foo bar:
[ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the env field of the Kubernetes Containers v1 core API.
- image string
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname {REGION}-docker.pkg.dev, where {REGION} is replaced by the region that matches the AI Platform Prediction regional endpoint that you are using. For example, if you are using the us-central1-ml.googleapis.com endpoint, then this URI must begin with us-central1-docker.pkg.dev. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format: service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com, where {PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has the necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your project's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements.
- ports GoogleCloudMlV1__ContainerPort[]
- Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value:
[ { "containerPort": 8080 } ]
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
- args Sequence[str]
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form. If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the args field of the Kubernetes Containers v1 core API.
- command Sequence[str]
- Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form. If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact. If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the command field of the Kubernetes Containers v1 core API.
- env Sequence[GoogleCloudMlV1EnvVar]
- Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable VAR_2 to the value foo bar:
[ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the env field of the Kubernetes Containers v1 core API.
- image str
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname {REGION}-docker.pkg.dev, where {REGION} is replaced by the region that matches the AI Platform Prediction regional endpoint that you are using. For example, if you are using the us-central1-ml.googleapis.com endpoint, then this URI must begin with us-central1-docker.pkg.dev. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format: service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com, where {PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has the necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your project's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements.
- ports Sequence[GoogleCloudMlV1ContainerPort]
- Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value:
[ { "containerPort": 8080 } ]
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
- args List<String>
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form. If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the args field of the Kubernetes Containers v1 core API.
- command List<String>
- Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form. If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact. If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the command field of the Kubernetes Containers v1 core API.
- env List<Property Map>
- Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable VAR_2 to the value foo bar:
[ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the env field of the Kubernetes Containers v1 core API.
- image String
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname {REGION}-docker.pkg.dev, where {REGION} is replaced by the region that matches the AI Platform Prediction regional endpoint that you are using. For example, if you are using the us-central1-ml.googleapis.com endpoint, then this URI must begin with us-central1-docker.pkg.dev. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format: service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com, where {PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has the necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your project's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements.
- ports List<Property Map>
- Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value:
[ { "containerPort": 8080 } ]
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
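To see how these container fields fit together, here is a minimal TypeScript sketch of a container spec such as you might pass to this resource's container input. The image URI, executable, and variable names are illustrative placeholders, not values prescribed by this reference.
// A hypothetical custom-container spec for serving predictions.
const containerSpec = {
    // Must point at an Artifact Registry image in the region of your endpoint.
    image: "us-central1-docker.pkg.dev/my-project/my-repo/my-server:latest",
    // Overrides the image's ENTRYPOINT; the image's CMD is then ignored.
    command: ["python3", "server.py"],
    // Extra arguments for the command. $(MODEL_NAME) is expanded because
    // MODEL_NAME is set in env below; $$(HOME) reaches the container as the
    // literal string $(HOME).
    args: ["--model", "$(MODEL_NAME)", "--home", "$$(HOME)"],
    env: [
        { name: "MODEL_NAME", value: "demo" },
        // Later entries may reference earlier ones.
        { name: "MODEL_PATH", value: "/models/$(MODEL_NAME)" },
    ],
    // Only the first port receives prediction traffic and health checks;
    // [{ containerPort: 8080 }] is also the default when ports is omitted.
    ports: [{ containerPort: 8080 }],
};
An object of this shape would be supplied as the container argument when constructing the Version resource.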
GoogleCloudMlV1__ContainerSpecResponse, GoogleCloudMlV1__ContainerSpecResponseArgs
- Args List<string>
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form. If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the args field of the Kubernetes Containers v1 core API.
- Command List<string>
- Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form. If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact. If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the command field of the Kubernetes Containers v1 core API.
- Env List<Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__EnvVarResponse>
- Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable VAR_2 to the value foo bar:
[ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the env field of the Kubernetes Containers v1 core API.
- Image string
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname {REGION}-docker.pkg.dev, where {REGION} is replaced by the region that matches the AI Platform Prediction regional endpoint that you are using. For example, if you are using the us-central1-ml.googleapis.com endpoint, then this URI must begin with us-central1-docker.pkg.dev. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format: service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com, where {PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has the necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your project's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements.
- Ports List<Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__ContainerPortResponse>
- Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value:
[ { "containerPort": 8080 } ]
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
- Args []string
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form. If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the args field of the Kubernetes Containers v1 core API.
- Command []string
- Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form. If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact. If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the command field of the Kubernetes Containers v1 core API.
- Env []GoogleCloudMlV1__EnvVarResponse
- Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable VAR_2 to the value foo bar:
[ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the env field of the Kubernetes Containers v1 core API.
- Image string
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname {REGION}-docker.pkg.dev, where {REGION} is replaced by the region that matches the AI Platform Prediction regional endpoint that you are using. For example, if you are using the us-central1-ml.googleapis.com endpoint, then this URI must begin with us-central1-docker.pkg.dev. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format: service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com, where {PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has the necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your project's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements.
- Ports []GoogleCloudMlV1__ContainerPortResponse
- Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value:
[ { "containerPort": 8080 } ]
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
- args List<String>
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form. If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the args field of the Kubernetes Containers v1 core API.
- command List<String>
- Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form. If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact. If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the command field of the Kubernetes Containers v1 core API.
- env List<GoogleCloudMlV1__EnvVarResponse>
- Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable VAR_2 to the value foo bar:
[ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the env field of the Kubernetes Containers v1 core API.
- image String
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname {REGION}-docker.pkg.dev, where {REGION} is replaced by the region that matches the AI Platform Prediction regional endpoint that you are using. For example, if you are using the us-central1-ml.googleapis.com endpoint, then this URI must begin with us-central1-docker.pkg.dev. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format: service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com, where {PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has the necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your project's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements.
- ports List<GoogleCloudMlV1__ContainerPortResponse>
- Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value:
[ { "containerPort": 8080 } ]
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
- args string[]
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form. If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the args field of the Kubernetes Containers v1 core API.
- command string[]
- Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form. If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact. If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the command field of the Kubernetes Containers v1 core API.
- env GoogleCloudMlV1__EnvVarResponse[]
- Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable VAR_2 to the value foo bar:
[ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the env field of the Kubernetes Containers v1 core API.
- image string
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname {REGION}-docker.pkg.dev, where {REGION} is replaced by the region that matches the AI Platform Prediction regional endpoint that you are using. For example, if you are using the us-central1-ml.googleapis.com endpoint, then this URI must begin with us-central1-docker.pkg.dev. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format: service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com, where {PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has the necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your project's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements.
- ports GoogleCloudMlV1__ContainerPortResponse[]
- Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value:
[ { "containerPort": 8080 } ]
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
- args Sequence[str]
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form. If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the args field of the Kubernetes Containers v1 core API.
- command Sequence[str]
- Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form. If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact. If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the command field of the Kubernetes Containers v1 core API.
- env Sequence[GoogleCloudMlV1EnvVarResponse]
- Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable VAR_2 to the value foo bar:
[ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the env field of the Kubernetes Containers v1 core API.
- image str
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname {REGION}-docker.pkg.dev, where {REGION} is replaced by the region that matches the AI Platform Prediction regional endpoint that you are using. For example, if you are using the us-central1-ml.googleapis.com endpoint, then this URI must begin with us-central1-docker.pkg.dev. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format: service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com, where {PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has the necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your project's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements.
- ports Sequence[GoogleCloudMlV1ContainerPortResponse]
- Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value:
[ { "containerPort": 8080 } ]
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
- args List<String>
- Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. Specify this field as an array of executable and arguments, similar to a Docker CMD's "default parameters" form. If you don't specify this field but do specify the command field, then the command from the command field runs without any additional arguments. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. If you don't specify this field and don't specify the command field, then the container's ENTRYPOINT and CMD determine what runs based on their default behavior. See the Docker documentation about how CMD and ENTRYPOINT interact. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the args field of the Kubernetes Containers v1 core API.
- command List<String>
- Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. Specify this field as an array of executable and arguments, similar to a Docker ENTRYPOINT's "exec" form, not its "shell" form. If you do not specify this field, then the container's ENTRYPOINT runs, in conjunction with the args field or the container's CMD, if either exists. If this field is not specified and the container does not have an ENTRYPOINT, then refer to the Docker documentation about how CMD and ENTRYPOINT interact. If you specify this field, then you can also specify the args field to provide additional arguments for this command. However, if you specify this field, then the container's CMD is ignored. See the Kubernetes documentation about how the command and args fields interact with a container's ENTRYPOINT and CMD. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set in the env field. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME). This field corresponds to the command field of the Kubernetes Containers v1 core API.
- env List<Property Map>
- Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following sets the variable VAR_2 to the value foo bar:
[ { "name": "VAR_1", "value": "foo" }, { "name": "VAR_2", "value": "$(VAR_1) bar" } ]
If you switch the order of the variables in the example, then the expansion does not occur. This field corresponds to the env field of the Kubernetes Containers v1 core API.
- image String
- URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname {REGION}-docker.pkg.dev, where {REGION} is replaced by the region that matches the AI Platform Prediction regional endpoint that you are using. For example, if you are using the us-central1-ml.googleapis.com endpoint, then this URI must begin with us-central1-docker.pkg.dev. To use a custom container, the AI Platform Google-managed service account must have permission to pull (read) the Docker image at this URI. The AI Platform Google-managed service account has the following format: service-{PROJECT_NUMBER}@cloud-ml.google.com.iam.gserviceaccount.com, where {PROJECT_NUMBER} is replaced by your Google Cloud project number. By default, this service account has the necessary permissions to pull an Artifact Registry image in the same Google Cloud project where you are using AI Platform Prediction. In this case, no configuration is necessary. If you want to use an image from a different Google Cloud project, learn how to grant the Artifact Registry Reader (roles/artifactregistry.reader) role for a repository to your project's AI Platform Google-managed service account. To learn about the requirements for the Docker image itself, read Custom container requirements.
- ports List<Property Map>
- Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value:
[ { "containerPort": 8080 } ]
AI Platform Prediction does not use ports other than the first one listed. This field corresponds to the ports field of the Kubernetes Containers v1 core API.
GoogleCloudMlV1__EnvVar, GoogleCloudMlV1__EnvVarArgs
- Name string
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- Value string
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
- Name string
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- Value string
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
- name String
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- value String
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
- name string
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- value string
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
- name str
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- value str
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
- name String
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- value String
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. For environment variables to be expanded, reference them using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
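As a concrete illustration of these expansion and escaping rules, here is a small hypothetical env list in TypeScript; every name and path below is a placeholder, not a value from this reference.
const env = [
    { name: "DATA_DIR", value: "/srv/data" },
    // Expanded: DATA_DIR is defined earlier in the list, so the container
    // sees /srv/data/train.csv.
    { name: "TRAIN_FILE", value: "$(DATA_DIR)/train.csv" },
    // Unresolved: UNKNOWN_VAR is not defined, so the reference is left
    // unchanged as $(UNKNOWN_VAR)/data.
    { name: "FALLBACK", value: "$(UNKNOWN_VAR)/data" },
    // Escaped: $$ suppresses expansion, so the container sees the literal
    // string $(DATA_DIR).
    { name: "LITERAL", value: "$$(DATA_DIR)" },
];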
GoogleCloudMlV1__EnvVarResponse, GoogleCloudMlV1__EnvVarResponseArgs
- Name string
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- Value string
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
- Name string
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- Value string
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
- name String
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- value String
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
- name string
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- value string
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
- name str
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- value str
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
- name String
- Name of the environment variable. Must be a valid C identifier and must not begin with the prefix AIP_.
- value String
- Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME). Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with $$; for example: $$(VARIABLE_NAME).
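To make the expansion rules above concrete, here is a minimal TypeScript sketch of a version backed by a custom container whose env entries use $(VARIABLE_NAME) references and the $$ escape. The model ID, image, and variable names are placeholders, not values defined on this page.
import * as google_native from "@pulumi/google-native";

// Minimal sketch: MODEL_PATH references MODEL_DIR via $(MODEL_DIR) and is
// expanded by AI Platform Prediction; LITERAL keeps the text "$(NOT_EXPANDED)"
// because the $$ escape suppresses expansion. Names must not start with AIP_.
const envExample = new google_native.ml.v1.Version("env-example", {
    modelId: "my-model",                             // hypothetical model ID
    name: "v1",
    container: {
        image: "gcr.io/my-project/my-server:latest", // hypothetical image
        env: [
            { name: "MODEL_DIR", value: "/models/demo" },
            { name: "MODEL_PATH", value: "$(MODEL_DIR)/saved_model" },
            { name: "LITERAL", value: "$$(NOT_EXPANDED)" },
        ],
    },
});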
GoogleCloudMlV1__ExplanationConfig, GoogleCloudMlV1__ExplanationConfigArgs
- IntegratedGradientsAttribution Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__IntegratedGradientsAttribution
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- SampledShapleyAttribution Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__SampledShapleyAttribution
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- XraiAttribution Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__XraiAttribution
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
- IntegratedGradientsAttribution GoogleCloudMlV1__IntegratedGradientsAttribution
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- SampledShapleyAttribution GoogleCloudMlV1__SampledShapleyAttribution
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- XraiAttribution GoogleCloudMlV1__XraiAttribution
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
- integratedGradientsAttribution GoogleCloudMlV1__IntegratedGradientsAttribution
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- sampledShapleyAttribution GoogleCloudMlV1__SampledShapleyAttribution
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- xraiAttribution GoogleCloudMlV1__XraiAttribution
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
- integratedGradientsAttribution GoogleCloudMlV1__IntegratedGradientsAttribution
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- sampledShapleyAttribution GoogleCloudMlV1__SampledShapleyAttribution
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- xraiAttribution GoogleCloudMlV1__XraiAttribution
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
- integrated_gradients_attribution GoogleCloudMlV1IntegratedGradientsAttribution
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- sampled_shapley_attribution GoogleCloudMlV1SampledShapleyAttribution
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- xrai_attribution GoogleCloudMlV1XraiAttribution
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
- integratedGradientsAttribution Property Map
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- sampledShapleyAttribution Property Map
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- xraiAttribution Property Map
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
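As a rough illustration of these attribution options, the following TypeScript sketch attaches a sampled-Shapley explanation config to a version; typically only one attribution method is set. The bucket, model ID, and runtime values are placeholders.
import * as google_native from "@pulumi/google-native";

// Sketch only: a SavedModel version with sampled Shapley attribution.
const explained = new google_native.ml.v1.Version("explained-version", {
    modelId: "my-model",                          // hypothetical model ID
    name: "v1-explained",
    deploymentUri: "gs://my-bucket/saved-model/", // hypothetical bucket
    runtimeVersion: "2.11",
    pythonVersion: "3.7",
    explanationConfig: {
        // Approximate Shapley values using 10 feature permutations.
        sampledShapleyAttribution: { numPaths: 10 },
    },
});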
GoogleCloudMlV1__ExplanationConfigResponse, GoogleCloudMlV1__ExplanationConfigResponseArgs
- IntegratedGradientsAttribution Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__IntegratedGradientsAttributionResponse
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- SampledShapleyAttribution Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__SampledShapleyAttributionResponse
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- XraiAttribution Pulumi.GoogleNative.Ml.V1.Inputs.GoogleCloudMlV1__XraiAttributionResponse
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
- IntegratedGradientsAttribution GoogleCloudMlV1__IntegratedGradientsAttributionResponse
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- SampledShapleyAttribution GoogleCloudMlV1__SampledShapleyAttributionResponse
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- XraiAttribution GoogleCloudMlV1__XraiAttributionResponse
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
- integratedGradientsAttribution GoogleCloudMlV1__IntegratedGradientsAttributionResponse
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- sampledShapleyAttribution GoogleCloudMlV1__SampledShapleyAttributionResponse
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- xraiAttribution GoogleCloudMlV1__XraiAttributionResponse
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
- integratedGradientsAttribution GoogleCloudMlV1__IntegratedGradientsAttributionResponse
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- sampledShapleyAttribution GoogleCloudMlV1__SampledShapleyAttributionResponse
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- xraiAttribution GoogleCloudMlV1__XraiAttributionResponse
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
- integrated_gradients_attribution GoogleCloudMlV1IntegratedGradientsAttributionResponse
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- sampled_shapley_attribution GoogleCloudMlV1SampledShapleyAttributionResponse
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- xrai_attribution GoogleCloudMlV1XraiAttributionResponse
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
- integratedGradientsAttribution Property Map
- Attributes credit by computing the Aumann-Shapley value, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365
- sampledShapleyAttribution Property Map
- An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.
- xraiAttribution Property Map
- Attributes credit by computing XRAI, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825. Currently only implemented for models with natural image inputs.
GoogleCloudMlV1__IntegratedGradientsAttribution, GoogleCloudMlV1__IntegratedGradientsAttributionArgs
- NumIntegralSteps int
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- NumIntegralSteps int
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps Integer
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps number
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- num_integral_steps int
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps Number
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
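Following the tuning advice above, a hedged TypeScript fragment of the explanationConfig argument: start at the suggested 50 steps and raise the count only if the attribution error stays outside the desired range.
// Fragment of VersionArgs.explanationConfig; 50 is the suggested
// starting point for the step count, not a required value.
const explanationConfig = {
    integratedGradientsAttribution: { numIntegralSteps: 50 },
};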
GoogleCloudMlV1__IntegratedGradientsAttributionResponse, GoogleCloudMlV1__IntegratedGradientsAttributionResponseArgs
- NumIntegralSteps int
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- NumIntegralSteps int
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps Integer
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps number
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- num_integral_steps int
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps Number
- Number of steps for approximating the path integral. A good value to start with is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
GoogleCloudMlV1__ManualScaling, GoogleCloudMlV1__ManualScalingArgs
- Nodes int
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
- Nodes int
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
- nodes Integer
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
- nodes number
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
- nodes int
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
- nodes Number
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
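For example, under the cost model described above, a manually scaled version with two nodes accrues node-hours at twice the single-node rate regardless of traffic. A minimal TypeScript fragment of the corresponding argument:
// Fragment of VersionArgs: two always-on nodes, so operating cost is
// proportional to 2 * hours since the last billing cycle, plus
// per-prediction charges.
const manualScaling = { nodes: 2 };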
GoogleCloudMlV1__ManualScalingResponse, GoogleCloudMlV1__ManualScalingResponseArgs
- Nodes int
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
- Nodes int
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
- nodes Integer
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
- nodes number
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
- nodes int
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
- nodes Number
- The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * the number of hours since the last billing cycle, plus the cost for each prediction performed.
GoogleCloudMlV1__MetricSpec, GoogleCloudMlV1__MetricSpecArgs
- Name Pulumi.GoogleNative.Ml.V1.GoogleCloudMlV1__MetricSpecName
- Metric name.
- Target int
- Target specifies the target value for the given metric; once the real metric deviates from the threshold by a certain percentage, the node count changes.
- Name GoogleCloudMlV1__MetricSpecName
- Metric name.
- Target int
- Target specifies the target value for the given metric; once the real metric deviates from the threshold by a certain percentage, the node count changes.
- name GoogleCloudMlV1__MetricSpecName
- Metric name.
- target Integer
- Target specifies the target value for the given metric; once the real metric deviates from the threshold by a certain percentage, the node count changes.
- name GoogleCloudMlV1__MetricSpecName
- Metric name.
- target number
- Target specifies the target value for the given metric; once the real metric deviates from the threshold by a certain percentage, the node count changes.
- name GoogleCloudMlV1MetricSpecName
- Metric name.
- target int
- Target specifies the target value for the given metric; once the real metric deviates from the threshold by a certain percentage, the node count changes.
- name "METRIC_NAME_UNSPECIFIED" | "CPU_USAGE" | "GPU_DUTY_CYCLE"
- Metric name.
- target Number
- Target specifies the target value for the given metric; once the real metric deviates from the threshold by a certain percentage, the node count changes.
GoogleCloudMlV1__MetricSpecName, GoogleCloudMlV1__MetricSpecNameArgs
- MetricNameUnspecified
- METRIC_NAME_UNSPECIFIED: Unspecified MetricName.
- CpuUsage
- CPU_USAGE: CPU usage.
- GpuDutyCycle
- GPU_DUTY_CYCLE: GPU duty cycle.
- GoogleCloudMlV1__MetricSpecNameMetricNameUnspecified
- METRIC_NAME_UNSPECIFIED: Unspecified MetricName.
- GoogleCloudMlV1__MetricSpecNameCpuUsage
- CPU_USAGE: CPU usage.
- GoogleCloudMlV1__MetricSpecNameGpuDutyCycle
- GPU_DUTY_CYCLE: GPU duty cycle.
- MetricNameUnspecified
- METRIC_NAME_UNSPECIFIED: Unspecified MetricName.
- CpuUsage
- CPU_USAGE: CPU usage.
- GpuDutyCycle
- GPU_DUTY_CYCLE: GPU duty cycle.
- MetricNameUnspecified
- METRIC_NAME_UNSPECIFIED: Unspecified MetricName.
- CpuUsage
- CPU_USAGE: CPU usage.
- GpuDutyCycle
- GPU_DUTY_CYCLE: GPU duty cycle.
- METRIC_NAME_UNSPECIFIED
- METRIC_NAME_UNSPECIFIED: Unspecified MetricName.
- CPU_USAGE
- CPU_USAGE: CPU usage.
- GPU_DUTY_CYCLE
- GPU_DUTY_CYCLE: GPU duty cycle.
- "METRIC_NAME_UNSPECIFIED"
- METRIC_NAME_UNSPECIFIED: Unspecified MetricName.
- "CPU_USAGE"
- CPU_USAGE: CPU usage.
- "GPU_DUTY_CYCLE"
- GPU_DUTY_CYCLE: GPU duty cycle.
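Putting MetricSpec and its name enum together, here is a hedged TypeScript sketch of an autoscaled version that targets 60% CPU usage. The node bounds, model ID, and bucket are placeholders, and the metrics field of autoScaling is assumed to accept the MetricSpec shape documented above.
import * as google_native from "@pulumi/google-native";

// Sketch: scale between 1 and 5 nodes, adjusting the node count when real
// CPU usage deviates from the 60 target by the service's threshold percentage.
const autoscaled = new google_native.ml.v1.Version("autoscaled-version", {
    modelId: "my-model",                          // hypothetical model ID
    name: "v1-autoscaled",
    deploymentUri: "gs://my-bucket/saved-model/", // hypothetical bucket
    runtimeVersion: "2.11",
    pythonVersion: "3.7",
    autoScaling: {
        minNodes: 1,
        maxNodes: 5,
        metrics: [{ name: "CPU_USAGE", target: 60 }],
    },
});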
GoogleCloudMlV1__MetricSpecResponse, GoogleCloudMlV1__MetricSpecResponseArgs
GoogleCloudMlV1__RequestLoggingConfig, GoogleCloudMlV1__RequestLoggingConfigArgs
- BigqueryTableName string
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- SamplingPercentage double
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
- BigqueryTableName string
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- SamplingPercentage float64
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
- bigqueryTableName String
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- samplingPercentage Double
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
- bigqueryTableName string
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- samplingPercentage number
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
- bigquery_table_name str
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- sampling_percentage float
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
- bigqueryTableName String
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- samplingPercentage Number
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
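As a quick illustration of the two fields above, a TypeScript fragment that logs roughly 10% of requests to a pre-existing BigQuery table with the required schema; the table name is a placeholder.
// Fragment of VersionArgs.requestLoggingConfig.
const requestLoggingConfig = {
    bigqueryTableName: "my_project.my_dataset.prediction_logs", // must already exist
    samplingPercentage: 0.1, // log ~10% of requests over the version's lifetime
};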
GoogleCloudMlV1__RequestLoggingConfigResponse, GoogleCloudMlV1__RequestLoggingConfigResponseArgs
- BigqueryTableName string
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- SamplingPercentage double
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
- BigqueryTableName string
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- SamplingPercentage float64
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
- bigqueryTableName String
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- samplingPercentage Double
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
- bigqueryTableName string
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- samplingPercentage number
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
- bigquery_table_name str
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- sampling_percentage float
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
- bigqueryTableName String
- Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE).
- samplingPercentage Number
- Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. The sampling window is the lifetime of the model version. Defaults to 0.
GoogleCloudMlV1__RouteMap, GoogleCloudMlV1__RouteMapArgs
- Health string
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- Predict string
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- Health string
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- Predict string
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- health String
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- predict String
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- health string
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- predict string
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- health str
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- predict str
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- health String
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- predict String
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
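To tie the route defaults above to the container they apply to, here is a hedged TypeScript fragment overriding both paths; the image and port are placeholders, and the first ports entry is the one the routes are served on, per the description above.
// Fragment of VersionArgs: custom predict and health paths on a custom container.
const containerAndRoutes = {
    container: {
        image: "gcr.io/my-project/my-server:latest", // hypothetical image
        ports: [{ containerPort: 8080 }],
    },
    routes: { predict: "/infer", health: "/healthz" },
};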
GoogleCloudMlV1__RouteMapResponse, GoogleCloudMlV1__RouteMapResponseArgs
- Health string
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- Predict string
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- Health string
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- Predict string
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- health String
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- predict String
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- health string
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- predict string
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- health str
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- predict str
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- health String
- HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to /bar, then AI Platform Prediction intermittently sends a GET request to the /bar path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
- predict String
- HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to /foo, then when AI Platform Prediction receives a prediction request, it forwards the request body in a POST request to the /foo path on the port of your container specified by the first value of Version.container.ports. If you don't specify this field, it defaults to the following value: /v1/models/MODEL/versions/VERSION:predict. The placeholders in this value are replaced as follows: * MODEL: The name of the parent Model. This does not include the "projects/PROJECT_ID/models/" prefix that the API returns in output; it is the bare model name, as provided to projects.models.create. * VERSION: The name of the model version. This does not include the "projects/PROJECT_ID/models/MODEL/versions/" prefix that the API returns in output; it is the bare version name, as provided to projects.models.versions.create.
GoogleCloudMlV1__SampledShapleyAttribution, GoogleCloudMlV1__SampledShapleyAttributionArgs
- NumPaths int
- The number of feature permutations to consider when approximating the Shapley values.
- NumPaths int
- The number of feature permutations to consider when approximating the Shapley values.
- numPaths Integer
- The number of feature permutations to consider when approximating the Shapley values.
- numPaths number
- The number of feature permutations to consider when approximating the Shapley values.
- num_paths int
- The number of feature permutations to consider when approximating the Shapley values.
- numPaths Number
- The number of feature permutations to consider when approximating the Shapley values.
GoogleCloudMlV1__SampledShapleyAttributionResponse, GoogleCloudMlV1__SampledShapleyAttributionResponseArgs
- NumPaths int
- The number of feature permutations to consider when approximating the Shapley values.
- NumPaths int
- The number of feature permutations to consider when approximating the Shapley values.
- numPaths Integer
- The number of feature permutations to consider when approximating the Shapley values.
- numPaths number
- The number of feature permutations to consider when approximating the Shapley values.
- num_paths int
- The number of feature permutations to consider when approximating the Shapley values.
- numPaths Number
- The number of feature permutations to consider when approximating the Shapley values.
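As a sketch, Sampled Shapley explanations might be enabled like this when creating a version; num_paths maps to the field above. The model name, bucket path, and runtime version are placeholders, and the sampled_shapley_attribution field name on GoogleCloudMlV1__ExplanationConfigArgs is assumed from the API's sampledShapleyAttribution field.
import pulumi_google_native.ml.v1 as ml

version = ml.Version(
    "shapley-version",
    model_id="my_model",                     # hypothetical model name
    deployment_uri="gs://my-bucket/model/",  # hypothetical GCS path to the saved model
    framework=ml.VersionFramework.TENSORFLOW,
    runtime_version="2.11",                  # hypothetical; choose one your model supports
    explanation_config=ml.GoogleCloudMlV1__ExplanationConfigArgs(
        # Field name assumed from the API's sampledShapleyAttribution.
        sampled_shapley_attribution=ml.GoogleCloudMlV1__SampledShapleyAttributionArgs(
            num_paths=10,  # more permutations give a better approximation but slower explanations
        ),
    ),
)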
GoogleCloudMlV1__XraiAttribution, GoogleCloudMlV1__XraiAttributionArgs
- NumIntegralSteps int
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- NumIntegralSteps int
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps Integer
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps number
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- num_integral_steps int
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps Number
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
GoogleCloudMlV1__XraiAttributionResponse, GoogleCloudMlV1__XraiAttributionResponseArgs
- NumIntegralSteps int
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- NumIntegralSteps int
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps Integer
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps number
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- num_integral_steps int
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
- numIntegralSteps Number
- Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range.
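Similarly, a hedged sketch of wiring XRAI attribution into a version's explanation config; num_integral_steps corresponds to the field above, the xrai_attribution field name is assumed from the API's xraiAttribution field, and the model and bucket names are placeholders.
import pulumi_google_native.ml.v1 as ml

version = ml.Version(
    "xrai-version",
    model_id="my_image_model",                     # hypothetical; XRAI is aimed at image models
    deployment_uri="gs://my-bucket/image-model/",  # hypothetical GCS path
    framework=ml.VersionFramework.TENSORFLOW,
    explanation_config=ml.GoogleCloudMlV1__ExplanationConfigArgs(
        # Field name assumed from the API's xraiAttribution.
        xrai_attribution=ml.GoogleCloudMlV1__XraiAttributionArgs(
            num_integral_steps=50,  # start at 50; increase until the sum-to-diff check passes
        ),
    ),
)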
VersionFramework, VersionFrameworkArgs
- FrameworkUnspecified
- FRAMEWORK_UNSPECIFIED - Unspecified framework. Assigns a value based on the file suffix.
- Tensorflow
- TENSORFLOW - TensorFlow framework.
- ScikitLearn
- SCIKIT_LEARN - Scikit-learn framework.
- Xgboost
- XGBOOST - XGBoost framework.
- VersionFrameworkFrameworkUnspecified
- FRAMEWORK_UNSPECIFIED - Unspecified framework. Assigns a value based on the file suffix.
- VersionFrameworkTensorflow
- TENSORFLOW - TensorFlow framework.
- VersionFrameworkScikitLearn
- SCIKIT_LEARN - Scikit-learn framework.
- VersionFrameworkXgboost
- XGBOOST - XGBoost framework.
- FrameworkUnspecified
- FRAMEWORK_UNSPECIFIED - Unspecified framework. Assigns a value based on the file suffix.
- Tensorflow
- TENSORFLOW - TensorFlow framework.
- ScikitLearn
- SCIKIT_LEARN - Scikit-learn framework.
- Xgboost
- XGBOOST - XGBoost framework.
- FrameworkUnspecified
- FRAMEWORK_UNSPECIFIED - Unspecified framework. Assigns a value based on the file suffix.
- Tensorflow
- TENSORFLOW - TensorFlow framework.
- ScikitLearn
- SCIKIT_LEARN - Scikit-learn framework.
- Xgboost
- XGBOOST - XGBoost framework.
- FRAMEWORK_UNSPECIFIED
- FRAMEWORK_UNSPECIFIED - Unspecified framework. Assigns a value based on the file suffix.
- TENSORFLOW
- TENSORFLOW - TensorFlow framework.
- SCIKIT_LEARN
- SCIKIT_LEARN - Scikit-learn framework.
- XGBOOST
- XGBOOST - XGBoost framework.
- "FRAMEWORK_UNSPECIFIED"
- FRAMEWORK_UNSPECIFIED - Unspecified framework. Assigns a value based on the file suffix.
- "TENSORFLOW"
- TENSORFLOW - TensorFlow framework.
- "SCIKIT_LEARN"
- SCIKIT_LEARN - Scikit-learn framework.
- "XGBOOST"
- XGBOOST - XGBoost framework.
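To show where the enum lands in practice, a minimal sketch deploying a scikit-learn model (placeholder names throughout); in the Python SDK the enum member can typically also be passed as its plain string value, e.g. "SCIKIT_LEARN".
import pulumi_google_native.ml.v1 as ml

version = ml.Version(
    "sklearn-version",
    model_id="my_model",                       # hypothetical model name
    deployment_uri="gs://my-bucket/sklearn/",  # hypothetical path to the exported model artifact
    framework=ml.VersionFramework.SCIKIT_LEARN,
    runtime_version="2.11",                    # hypothetical; must support scikit-learn
    python_version="3.7",                      # hypothetical; must match the runtime version
)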
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0