sagemaker.core.helper.session_helper#
Functions
- botocore_resolver: Get the DNS suffix for the given region.
- container_def: Create a definition for executing a container as part of a SageMaker model.
- expand_role: Expand an IAM role name into an ARN.
- get_add_model_package_inference_args: Get request dictionary for UpdateModelPackage API for additional inference.
- get_execution_role: Return the role ARN whose credentials are used to call the API.
- get_log_events_for_inference_recommender: Retrieve log events from the specified CloudWatch log group and log stream.
- get_update_model_package_inference_args: Get request dictionary for UpdateModelPackage API for inference specification.
- production_variant: Create a production variant description suitable for use in a ProductionVariant list.
- s3_path_join: Return the arguments joined by a slash ("/"), similar to os.path.join() (on Unix).
- sts_regional_endpoint: Get the AWS STS endpoint specific for the given region.
- update_args: Update the request arguments dict with the value if populated.
Classes
- LogState: Placeholder docstring
- Session: Manage interactions with the Amazon SageMaker APIs and any other AWS services needed.
- class sagemaker.core.helper.session_helper.LogState[source]#
Bases: object
Placeholder docstring
- COMPLETE = 5#
- JOB_COMPLETE = 4#
- STARTING = 1#
- TAILING = 3#
- WAIT_IN_PROGRESS = 2#
- class sagemaker.core.helper.session_helper.Session(boto_session=None, sagemaker_client=None, sagemaker_runtime_client=None, sagemaker_featurestore_runtime_client=None, default_bucket=None, sagemaker_config: dict | None = None, settings=None, sagemaker_metrics_client=None, default_bucket_prefix: str | None = None)[source]#
Bases: object
Manage interactions with the Amazon SageMaker APIs and any other AWS services needed.
This class provides convenient methods for manipulating entities and resources that Amazon SageMaker uses, such as training jobs, endpoints, and input datasets in S3. AWS service calls are delegated to an underlying Boto3 session, which by default is initialized using the AWS configuration chain. When you make an Amazon SageMaker API call that accesses an S3 bucket location and one is not specified, the Session creates a default bucket based on a naming convention that includes the current AWS account ID.
- property boto_region_name#
Placeholder docstring
- property config: Dict | None#
The config for the local mode, unused in a normal session
- create_bucket_for_not_exist_error(bucket_name, region, s3)[source]#
Creates the S3 bucket in the given region
- Parameters:
bucket_name (str) – Name of the S3 bucket
s3 (str) – S3 object from boto session
region (str) – The region in which to create the bucket.
- create_endpoint(endpoint_name, config_name, tags=None, wait=True, live_logging=False)[source]#
Create an Amazon SageMaker Endpoint according to the configuration in the request.
Once the Endpoint is created, client applications can send requests to obtain inferences. The endpoint configuration is created using the CreateEndpointConfig API.
- Parameters:
endpoint_name (str) – Name of the Amazon SageMaker Endpoint being created.
config_name (str) – Name of the Amazon SageMaker endpoint configuration to deploy.
wait (bool) – Whether to wait for the endpoint deployment to complete before returning (default: True).
tags (Optional[Tags]) – A list of key-value pairs for tagging the endpoint (default: None).
- Returns:
Name of the Amazon SageMaker Endpoint created.
- Return type:
str
- Raises:
botocore.exceptions.ClientError – If SageMaker throws an exception while creating the endpoint.
- create_inference_component(inference_component_name: str, endpoint_name: str, variant_name: str, specification: Dict[str, Any], runtime_config: Dict[str, Any] | None = None, tags: List[Dict[str, str | PipelineVariable]] | Dict[str, str | PipelineVariable] | None = None, wait: bool = True)[source]#
Create an Amazon SageMaker Inference Component.
- Parameters:
inference_component_name (str) – Name of the Amazon SageMaker inference component to create.
endpoint_name (str) – Name of the Amazon SageMaker endpoint that the inference component will deploy to.
variant_name (str) – Name of the Amazon SageMaker variant that the inference component will deploy to.
specification (Dict[str, Any]) – The inference component specification.
runtime_config (Optional[Dict[str, Any]]) – Optional. The inference component runtime configuration. (Default: None).
tags (Optional[Tags]) – Optional. Either a dictionary or a list of dictionaries containing key-value pairs. (Default: None).
wait (bool) – Optional. Wait for the inference component to finish being created before returning a value. (Default: True).
- Returns:
Name of the Amazon SageMaker InferenceComponent, if created.
- Return type:
str
- create_inference_recommendations_job(role: str, sample_payload_url: str, supported_content_types: List[str], job_name: str | None = None, job_type: str = 'Default', model_name: str | None = None, model_package_version_arn: str | None = None, job_duration_in_seconds: int | None = None, nearest_model_name: str | None = None, supported_instance_types: List[str] | None = None, framework: str | None = None, framework_version: str | None = None, endpoint_configurations: List[Dict[str, any]] | None = None, traffic_pattern: Dict[str, any] | None = None, stopping_conditions: Dict[str, any] | None = None, resource_limit: Dict[str, any] | None = None)[source]#
Creates an Inference Recommendations Job
- Parameters:
role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. You must grant sufficient permissions to this role.
sample_payload_url (str) – The S3 path where the sample payload is stored.
supported_content_types (List[str]) – The supported MIME types for the input data.
model_name (str) – Name of the Amazon SageMaker Model to be used.
model_package_version_arn (str) – The Amazon Resource Name (ARN) of a versioned model package.
job_name (str) – The name of the job being run.
job_type (str) – The type of job being run. Must either be Default or Advanced.
job_duration_in_seconds (int) – The maximum job duration that a job can run for. Will be used for Advanced jobs.
nearest_model_name (str) – The name of a pre-trained machine learning model benchmarked by Amazon SageMaker Inference Recommender that matches your model.
supported_instance_types (List[str]) – A list of the instance types that are used to generate inferences in real-time.
framework (str) – The machine learning framework of the Image URI.
framework_version (str) – The framework version of the Image URI.
endpoint_configurations (List[Dict[str, any]]) – Specifies the endpoint configurations to use for a job. Will be used for Advanced jobs.
traffic_pattern (Dict[str, any]) – Specifies the traffic pattern for the job. Will be used for Advanced jobs.
stopping_conditions (Dict[str, any]) – A set of conditions for stopping a recommendation job. If any of the conditions are met, the job is automatically stopped. Will be used for Advanced jobs.
resource_limit (Dict[str, any]) – Defines the resource limit for the job. Will be used for Advanced jobs.
- Returns:
The name of the job created. In the form of SMPYTHONSDK-<timestamp>
- Return type:
str
- create_model(name, role=None, container_defs=None, vpc_config=None, enable_network_isolation=None, primary_container=None, tags=None)[source]#
Create an Amazon SageMaker Model.
Specify the S3 location of the model artifacts and the Docker image containing the inference code. Amazon SageMaker uses this information to deploy the model in Amazon SageMaker. This method can also be used to create a Model for an Inference Pipeline if you pass the list of container definitions through the container_defs parameter.
- Parameters:
name (str) – Name of the Amazon SageMaker Model to create.
role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. You must grant sufficient permissions to this role.
container_defs (list[dict[str, str]] or dict[str, str]) – A single container definition or a list of container definitions that are invoked sequentially while performing the prediction. If the list contains only one container, it is passed to SageMaker Hosting as the PrimaryContainer; otherwise, it is passed as Containers. You can also specify the return value of sagemaker.get_container_def() or sagemaker.pipeline_container_def(), which can be used to create more advanced container configurations, including model containers which need artifacts from S3.
vpc_config (dict[str, list[str]]) – The VpcConfig set on the model (default: None). * 'Subnets' (list[str]): List of subnet ids. * 'SecurityGroupIds' (list[str]): List of security group ids.
enable_network_isolation (bool) – Whether the model requires network isolation or not.
primary_container (str or dict[str, str]) – Docker image which defines the inference code. You can also specify the return value of sagemaker.container_def(), which is used to create more advanced container configurations, including model containers which need artifacts from S3. This field is deprecated; please use container_defs instead.
tags (Optional[Tags]) – Optional. The list of tags to add to the model.
Example
>>> tags = [{'Key': 'tagname', 'Value': 'tagvalue'}]
For more information about tags, see https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.add_tags
- Returns:
Name of the Amazon SageMaker Model created.
- Return type:
str
- create_model_package_from_algorithm(name, description, algorithm_arn, model_data)[source]#
Create a SageMaker Model Package from the results of training with an Algorithm Package.
- Parameters:
name (str) – ModelPackage name
description (str) – Model Package description
algorithm_arn (str) – ARN or name of the algorithm used for training.
model_data (str or dict[str, Any]) – S3 URI or a dictionary representing a ModelDataSource for the model artifacts produced by training.
- default_bucket()[source]#
Return the name of the default bucket to use in relevant Amazon SageMaker interactions.
This function creates the S3 bucket if it does not exist.
- Returns:
- The name of the default bucket. If the name was not explicitly specified through
the Session or sagemaker_config, the bucket will take the form:
sagemaker-{region}-{AWS account ID}.
- Return type:
str
- delete_endpoint(endpoint_name)[source]#
Delete an Amazon SageMaker Endpoint.
- Parameters:
endpoint_name (str) – Name of the Amazon SageMaker Endpoint to delete.
- delete_endpoint_config(endpoint_config_name)[source]#
Delete an Amazon SageMaker endpoint configuration.
- Parameters:
endpoint_config_name (str) – Name of the Amazon SageMaker endpoint configuration to delete.
- delete_model(model_name)[source]#
Delete an Amazon SageMaker Model.
- Parameters:
model_name (str) – Name of the Amazon SageMaker model to delete.
- describe_inference_component(inference_component_name)[source]#
Describe an Amazon SageMaker InferenceComponent.
- Parameters:
inference_component_name (str) – Name of the Amazon SageMaker InferenceComponent.
- Returns:
Inference component details.
- Return type:
dict[str,str]
- determine_bucket_and_prefix(bucket: str | None = None, key_prefix: str | None = None, sagemaker_session=None)[source]#
Helper function that returns the correct S3 bucket and prefix to use depending on the inputs.
- Parameters:
bucket (Optional[str]) – S3 Bucket to use (if it exists)
key_prefix (Optional[str]) – S3 Object Key Prefix to use or append to (if it exists)
sagemaker_session (sagemaker.core.session.Session) – Session used to fetch a default bucket and prefix when bucket is not provided. Expected to exist.
Returns: The correct S3 Bucket and S3 Object Key Prefix that should be used
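The selection logic described above can be sketched as follows. This is an illustrative sketch, not the library implementation; the default bucket name shown is a hypothetical example following the sagemaker-{region}-{AWS account ID} convention documented under default_bucket(), and the prefix-combining rule is an assumption.

```python
from typing import Optional, Tuple


def determine_bucket_and_prefix_sketch(
    bucket: Optional[str] = None,
    key_prefix: Optional[str] = None,
    default_bucket: str = "sagemaker-us-east-1-111122223333",  # hypothetical default
    default_bucket_prefix: Optional[str] = None,
) -> Tuple[str, Optional[str]]:
    """Sketch: prefer the caller-supplied bucket; otherwise fall back to the
    session default bucket, prepending the session's default prefix if set."""
    if bucket:
        # An explicit bucket wins; the caller's key_prefix is used unchanged.
        return bucket, key_prefix
    # No bucket given: use the default bucket and combine prefixes.
    if default_bucket_prefix and key_prefix:
        return default_bucket, f"{default_bucket_prefix}/{key_prefix}"
    return default_bucket, default_bucket_prefix or key_prefix
```

For example, an explicit bucket passes through untouched, while a missing bucket picks up both the default bucket and the session's default prefix.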
- download_data(path, bucket, key_prefix='', extra_args=None)[source]#
Download a file or directory from S3.
- Parameters:
path (str) – Local path where the file or directory should be downloaded to.
bucket (str) – Name of the S3 Bucket to download from.
key_prefix (str) – Optional S3 object key name prefix.
extra_args (dict) – Optional extra arguments that may be passed to the download operation. Please refer to the ExtraArgs parameter in the boto3 documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-example-download-file.html
- Returns:
List of local paths of downloaded files
- Return type:
list[str]
- endpoint_from_production_variants(name, production_variants, tags=None, kms_key=None, wait=True, data_capture_config_dict=None, async_inference_config_dict=None, explainer_config_dict=None, live_logging=False, vpc_config=None, enable_network_isolation=None, role=None)[source]#
Create a SageMaker Endpoint from a list of production variants.
- Parameters:
name (str) – The name of the Endpoint to create.
production_variants (list[dict[str, str]]) – The list of production variants to deploy.
tags (Optional[Tags]) – A list of key-value pairs for tagging the endpoint (default: None).
kms_key (str) – The KMS key that is used to encrypt the data on the storage volume attached to the instance hosting the endpoint.
wait (bool) – Whether to wait for the endpoint deployment to complete before returning (default: True).
data_capture_config_dict (dict) – Specifies configuration related to Endpoint data capture for use with Amazon SageMaker Model Monitoring. Default: None.
async_inference_config_dict (dict) – specifies configuration related to async endpoint. Use this configuration when trying to create async endpoint and make async inference (default: None)
explainer_config_dict (dict) – Specifies configuration related to explainer. Use this configuration when trying to use online explainability. (default: None).
vpc_config (dict[str, list[str]]) – The VpcConfig set on the model (default: None). * 'Subnets' (list[str]): List of subnet ids. * 'SecurityGroupIds' (list[str]): List of security group ids.
enable_network_isolation (bool) – Default: False. If True, enables network isolation in the endpoint, isolating the model container. No inbound or outbound network calls can be made to or from the model container.
role (str) – An AWS IAM role (either name or full ARN). The Amazon SageMaker training jobs and APIs that create Amazon SageMaker endpoints use this role to access training data and model artifacts. After the endpoint is created, the inference code might use the IAM role if it needs to access some AWS resources. (default: None).
- Returns:
The name of the created Endpoint.
- Return type:
str
- endpoint_in_service_or_not(endpoint_name: str)[source]#
Check whether an Amazon SageMaker Endpoint is in IN_SERVICE status.
Raise any exception that is not recognized as "not found".
- Parameters:
endpoint_name (str) – Name of the Amazon SageMaker Endpoint to check.
- Returns:
True if the Endpoint is IN_SERVICE; False if the Endpoint does not exist or is in another status.
- Return type:
bool
- expand_role(role)[source]#
Expand an IAM role name into an ARN.
If the role is already in the form of an ARN, then the role is simply returned. Otherwise we retrieve the full ARN and return it.
- Parameters:
role (str) – An AWS IAM role (either name or full ARN).
- Returns:
The corresponding AWS IAM role ARN.
- Return type:
str
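The pass-through behavior described above can be sketched without any AWS calls. Note this is only an illustration: the real method resolves the account via the session's credentials, whereas this sketch takes an account_id argument directly, and the arn:aws:iam::{account}:role/{name} format is the standard IAM role ARN shape.

```python
def expand_role_sketch(role: str, account_id: str) -> str:
    """Sketch: return the role unchanged if it is already an ARN; otherwise
    build an IAM role ARN from the supplied account id and role name."""
    if role.startswith("arn:"):
        return role  # already a full ARN, pass it through
    return f"arn:aws:iam::{account_id}:role/{role}"
```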
- expected_bucket_owner_id_bucket_check(bucket_name, s3, expected_bucket_owner_id)[source]#
Checks whether the bucket belongs to the expected owner and raises a ClientError if it does not.
- Parameters:
bucket_name (str) – Name of the S3 bucket
s3 (str) – S3 object from boto session
expected_bucket_owner_id (str) – Owner ID string
- general_bucket_check_if_user_has_permission(bucket_name, s3, bucket, region, bucket_creation_date_none)[source]#
Checks whether the caller has permission to access the bucket.
Any error other than a missing bucket raised by the HeadBucket call is re-raised here. If the bucket does not exist, it is created.
- Parameters:
bucket_name (str) – Name of the S3 bucket
s3 (str) – S3 object from boto session
region (str) – The region in which to create the bucket.
bucket_creation_date_none (bool) – Indicating whether S3 bucket already exists or not
- generate_default_sagemaker_bucket_name(boto_session)[source]#
Generates a name for the default SageMaker S3 bucket.
- Parameters:
boto_session (boto3.session.Session) – The underlying Boto3 session to which AWS service calls are delegated.
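Given the sagemaker-{region}-{AWS account ID} convention stated under default_bucket(), the name-generation step can be sketched as below. The real method derives the region and account from the Boto3 session; here they are passed in directly as an assumption for illustration.

```python
def default_bucket_name_sketch(region: str, account_id: str) -> str:
    """Sketch: compose the default bucket name from the documented
    convention sagemaker-{region}-{AWS account ID}."""
    return f"sagemaker-{region}-{account_id}"
```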
- get_caller_identity_arn()[source]#
Returns the ARN of the user or role whose credentials are used to call the API.
- Returns:
The user or role ARN.
- Return type:
str
- list_s3_files(bucket, key_prefix)[source]#
Lists the S3 files given an S3 bucket and key prefix.
- Parameters:
bucket (str) – Name of the S3 Bucket to list from.
key_prefix (str) – S3 object key name prefix.
- Returns:
The list of files at the S3 path.
- Return type:
[str]
- read_s3_file(bucket, key_prefix)[source]#
Read a single file from S3.
- Parameters:
bucket (str) – Name of the S3 Bucket to download from.
key_prefix (str) – S3 object key name prefix.
- Returns:
The body of the s3 file as a string.
- Return type:
str
- update_endpoint(endpoint_name, endpoint_config_name, wait=True)[source]#
Update an Amazon SageMaker Endpoint. Raises an error if endpoint_name does not exist.
- Parameters:
endpoint_name (str) – Name of the Amazon SageMaker Endpoint to update.
endpoint_config_name (str) – Name of the Amazon SageMaker endpoint configuration to deploy.
wait (bool) – Whether to wait for the endpoint deployment to complete before returning (default: True).
- Returns:
Name of the Amazon SageMaker Endpoint being updated.
- Return type:
str
- Raises:
- ValueError – if the endpoint does not already exist
- botocore.exceptions.ClientError – If SageMaker throws an error while creating the endpoint config, describing the endpoint, or updating the endpoint.
- update_inference_component(inference_component_name, specification=None, runtime_config=None, wait=True)[source]#
Update an Amazon SageMaker InferenceComponent.
- Parameters:
inference_component_name (str) – Name of the Amazon SageMaker InferenceComponent.
specification (dict[str, int]) – Resource configuration. Optional. Example: { "MinMemoryRequiredInMb": 1024, "NumberOfCpuCoresRequired": 1, "NumberOfAcceleratorDevicesRequired": 1, "MaxMemoryRequiredInMb": 4096 }
runtime_config (dict[str, int]) – Number of copies. Optional. Default: { "copyCount": 1 }
wait (bool) – Whether to wait for the inference component update to complete before returning. Optional. Default: True.
- Returns:
inference component name
- Return type:
str
- Raises:
ValueError – If the inference_component_name does not exist.
- upload_data(path, bucket=None, key_prefix='data', callback=None, extra_args=None)[source]#
Upload local file or directory to S3.
If a single file is specified for upload, the resulting S3 object key is {key_prefix}/{filename} (filename does not include the local path, if any was specified). If a directory is specified for upload, the API uploads all content, recursively, preserving the relative structure of subdirectories. The resulting object key names are: {key_prefix}/{relative_subdirectory_path}/filename.
- Parameters:
path (str) – Path (absolute or relative) of local file or directory to upload.
bucket (str) – Name of the S3 Bucket to upload to (default: None). If not specified, the default bucket of the Session is used (if the default bucket does not exist, the Session creates it).
key_prefix (str) – Optional S3 object key name prefix (default: 'data'). S3 uses the prefix to create a directory structure for the bucket content that it displays in the S3 console.
extra_args (dict) – Optional extra arguments that may be passed to the upload operation. Similar to ExtraArgs parameter in S3 upload_file function. Please refer to the ExtraArgs parameter documentation here: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html#the-extraargs-parameter
- Returns:
The S3 URI of the uploaded file(s). If a file is specified in the path argument, the URI format is s3://{bucket name}/{key_prefix}/{original_file_name}. If a directory is specified in the path argument, the URI format is s3://{bucket name}/{key_prefix}.
- Return type:
str
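The object-key layout described above (single file yields {key_prefix}/{filename}; a directory preserves relative subdirectory structure) can be sketched without performing any upload. This is a local illustration of the key computation only, not the library's upload code.

```python
import os
from typing import List


def compute_upload_keys_sketch(path: str, key_prefix: str = "data") -> List[str]:
    """Sketch: compute the S3 object keys that an upload of `path` would
    produce, following the documented {key_prefix}/... layout."""
    if os.path.isfile(path):
        # Single file: only the basename is kept, not the local path.
        return [f"{key_prefix}/{os.path.basename(path)}"]
    keys = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            rel = os.path.relpath(os.path.join(root, name), start=path)
            # Preserve the relative subdirectory structure under key_prefix.
            keys.append(f"{key_prefix}/{rel.replace(os.sep, '/')}")
    return sorted(keys)
```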
- upload_string_as_file_body(body, bucket, key, kms_key=None)[source]#
Upload a string as a file body.
- Parameters:
body (str) – String representing the body of the file.
bucket (str) – Name of the S3 Bucket to upload to (default: None). If not specified, the default bucket of the Session is used (if the default bucket does not exist, the Session creates it).
key (str) – S3 object key. This is the s3 path to the file.
kms_key (str) – The KMS key to use for encrypting the file.
- Returns:
The S3 URI of the uploaded file. The URI format is s3://{bucket name}/{key}.
- Return type:
str
- wait_for_endpoint(endpoint, poll=30, live_logging=False)[source]#
Wait for an Amazon SageMaker endpoint deployment to complete.
- Parameters:
endpoint (str) – Name of the Endpoint to wait for.
poll (int) – Polling interval in seconds (default: 30).
- Raises:
exceptions.CapacityError – If the endpoint creation job fails with CapacityError.
exceptions.UnexpectedStatusException – If the endpoint creation job fails.
- Returns:
Return value from the DescribeEndpoint API.
- Return type:
dict
- wait_for_inference_component(inference_component_name, poll=20)[source]#
Wait for an Amazon SageMaker InferenceComponent deployment to complete.
- Parameters:
inference_component_name (str) – Name of the InferenceComponent to wait for.
poll (int) – Polling interval in seconds (default: 20).
- Raises:
exceptions.CapacityError – If the inference component creation fails with CapacityError.
exceptions.UnexpectedStatusException – If the inference component creation fails.
- Returns:
Return value from the DescribeInferenceComponent API.
- Return type:
dict
- wait_for_inference_recommendations_job(job_name: str, poll: int = 120, log_level: str = 'Verbose') Dict[str, Any][source]#
Wait for an Amazon SageMaker Inference Recommender job to complete.
- Parameters:
job_name (str) – Name of the Inference Recommender job to wait for.
poll (int) – Polling interval in seconds (default: 120).
log_level (str) – The level of verbosity for the logs. Can be "Quiet" or "Verbose" (default: "Verbose").
- Returns:
Return value from the DescribeInferenceRecommendationsJob API.
- Return type:
(dict)
- Raises:
exceptions.CapacityError – If the Inference Recommender job fails with CapacityError.
exceptions.UnexpectedStatusException – If the Inference Recommender job fails.
- wait_for_optimization_job(job, poll=5)[source]#
Wait for an Amazon SageMaker Optimization job to complete.
- Parameters:
job (str) – Name of optimization job to wait for.
poll (int) – Polling interval in seconds (default: 5).
- Returns:
Return value from the DescribeOptimizationJob API.
- Return type:
(dict)
- Raises:
exceptions.ResourceNotFound – If optimization job fails with CapacityError.
exceptions.UnexpectedStatusException – If optimization job fails.
- sagemaker.core.helper.session_helper.botocore_resolver()[source]#
Get the DNS suffix for the given region.
- Parameters:
region (str) – AWS region name
- Returns:
the DNS suffix
- Return type:
str
- sagemaker.core.helper.session_helper.container_def(image_uri, model_data_url=None, env=None, container_mode=None, image_config=None, accept_eula=None, additional_model_data_sources=None, model_reference_arn=None)[source]#
Create a definition for executing a container as part of a SageMaker model.
- Parameters:
image_uri (str) – Docker image URI to run for this container.
model_data_url (str or dict[str, Any]) – S3 location of model data required by this container, e.g. SageMaker training job model artifacts. It can either be a string representing the S3 URI of the model data, or a dictionary representing a ModelDataSource object (default: None).
env (dict[str, str]) – Environment variables to set inside the container (default: None).
container_mode (str) – The model container mode. Valid modes: * MultiModel: Indicates that the model container can host multiple models. * SingleModel: Indicates that the model container can host a single model; this is the default mode when container_mode is None.
image_config (dict[str, str]) – Specifies whether the image of model container is pulled from ECR, or private registry in your VPC. By default it is set to pull model container image from ECR. (default: None).
accept_eula (bool) – For models that require a Model Access Config, specify True or False to indicate whether model terms of use have been accepted. The accept_eula value must be explicitly defined as True in order to accept the end-user license agreement (EULA) that some models require. (Default: None).
additional_model_data_sources (PipelineVariable or dict) – Additional location of SageMaker model data (default: None).
- Returns:
A complete container definition object usable with the CreateModel API when passed via the PrimaryContainer field.
- Return type:
dict[str, str]
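The assembly of the container definition can be sketched as below. This is a simplified illustration, not the library implementation: it covers only a subset of the parameters, and the field names (Image, ModelDataUrl, Environment, Mode) follow the SageMaker CreateModel API's ContainerDefinition shape.

```python
from typing import Any, Dict, Optional


def container_def_sketch(
    image_uri: str,
    model_data_url: Optional[str] = None,
    env: Optional[Dict[str, str]] = None,
    container_mode: Optional[str] = None,
) -> Dict[str, Any]:
    """Sketch: assemble a ContainerDefinition-style dict, including only
    the fields that were actually provided."""
    definition: Dict[str, Any] = {"Image": image_uri, "Environment": env or {}}
    if model_data_url:
        definition["ModelDataUrl"] = model_data_url
    if container_mode:
        definition["Mode"] = container_mode
    return definition
```

Omitting unset fields matters because the service APIs reject explicit None values, as noted under update_args().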
- sagemaker.core.helper.session_helper.expand_role(self, role)[source]#
Expand an IAM role name into an ARN.
If the role is already in the form of an ARN, then the role is simply returned. Otherwise we retrieve the full ARN and return it.
- Parameters:
role (str) – An AWS IAM role (either name or full ARN).
- Returns:
The corresponding AWS IAM role ARN.
- Return type:
str
- sagemaker.core.helper.session_helper.get_add_model_package_inference_args(model_package_arn, name, containers=None, content_types=None, response_types=None, inference_instances=None, transform_instances=None, description=None)[source]#
Get request dictionary for UpdateModelPackage API for additional inference.
- Parameters:
model_package_arn (str) – Arn for the model package.
name (str) – Name to identify the additional inference specification
containers (dict) – The Amazon ECR registry path of the Docker image that contains the inference code.
image_uris (List[str]) – The ECR path where inference code is stored.
description (str) – Description for the additional inference specification
content_types (list[str]) – The supported MIME types for the input data.
response_types (list[str]) – The supported MIME types for the output data.
inference_instances (list[str]) – A list of the instance types that are used to generate inferences in real-time (default: None).
transform_instances (list[str]) – A list of the instance types on which a transformation job can be run or on which an endpoint can be deployed (default: None).
- sagemaker.core.helper.session_helper.get_execution_role(sagemaker_session=None, use_default=False)[source]#
Return the role ARN whose credentials are used to call the API.
Throws an exception if the role doesn't exist.
- Parameters:
sagemaker_session (Session) – Current sagemaker session.
use_default (bool) – Use a default role if get_caller_identity_arn does not return a correct role. This default role will be created if needed. Defaults to False.
- Returns:
The role ARN
- Return type:
(str)
- sagemaker.core.helper.session_helper.get_log_events_for_inference_recommender(cw_client, log_group_name, log_stream_name)[source]#
Retrieves log events from the specified CloudWatch log group and log stream.
- Parameters:
cw_client (boto3.client) – A boto3 CloudWatch client.
log_group_name (str) – The name of the CloudWatch log group.
log_stream_name (str) – The name of the CloudWatch log stream.
- Returns:
A dictionary containing log events from CloudWatch log group and log stream.
- Return type:
(dict)
- sagemaker.core.helper.session_helper.get_update_model_package_inference_args(model_package_arn, containers=None, content_types=None, response_types=None, inference_instances=None, transform_instances=None)[source]#
Get request dictionary for UpdateModelPackage API for inference specification.
- Parameters:
model_package_arn (str) – Arn for the model package.
containers (dict) – The Amazon ECR registry path of the Docker image that contains the inference code.
content_types (list[str]) – The supported MIME types for the input data.
response_types (list[str]) – The supported MIME types for the output data.
inference_instances (list[str]) – A list of the instance types that are used to generate inferences in real-time (default: None).
transform_instances (list[str]) – A list of the instance types on which a transformation job can be run or on which an endpoint can be deployed (default: None).
- sagemaker.core.helper.session_helper.production_variant(model_name=None, instance_type=None, initial_instance_count=None, variant_name='AllTraffic', initial_weight=1, accelerator_type=None, serverless_inference_config=None, volume_size=None, model_data_download_timeout=None, container_startup_health_check_timeout=None, managed_instance_scaling=None, routing_config=None, inference_ami_version=None)[source]#
Create a production variant description suitable for use in a ProductionVariant list.
This is also part of a CreateEndpointConfig request.
- Parameters:
model_name (str) – The name of the SageMaker model this production variant references.
instance_type (str) – The EC2 instance type for this production variant. For example, ‘ml.c4.8xlarge’.
initial_instance_count (int) – The initial instance count for this production variant (default: 1).
variant_name (str) – The VariantName of this production variant (default: 'AllTraffic').
initial_weight (int) – The relative InitialVariantWeight of this production variant (default: 1).
accelerator_type (str) – Type of Elastic Inference accelerator for this production variant. For example, 'ml.eia1.medium'. For more information: https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html
serverless_inference_config (dict) – Specifies configuration dict related to serverless endpoint. The dict is converted from sagemaker.model_monitor.ServerlessInferenceConfig object (default: None)
volume_size (int) – The size, in GB, of the ML storage volume attached to the individual inference instance associated with the production variant. Currently only Amazon EBS gp2 storage volumes are supported.
model_data_download_timeout (int) – The timeout value, in seconds, to download and extract model data from Amazon S3 to the individual inference instance associated with this production variant.
container_startup_health_check_timeout (int) – The timeout value, in seconds, for your inference container to pass health check by SageMaker Hosting. For more information about health checks, see: https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html#your-algorithms-inference-algo-ping-requests
- Returns:
A SageMaker ProductionVariant description.
- Return type:
dict[str, str]
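The construction of the variant description can be sketched as below. This is a simplified illustration covering only a subset of the parameters; the field names (ModelName, VariantName, InitialVariantWeight, InstanceType, InitialInstanceCount) follow the CreateEndpointConfig API's ProductionVariant shape, and the defaults mirror the ones documented above.

```python
from typing import Any, Dict, Optional


def production_variant_sketch(
    model_name: str,
    instance_type: Optional[str] = None,
    initial_instance_count: Optional[int] = None,
    variant_name: str = "AllTraffic",
    initial_weight: int = 1,
) -> Dict[str, Any]:
    """Sketch: build a ProductionVariant-style dict for CreateEndpointConfig."""
    variant: Dict[str, Any] = {
        "ModelName": model_name,
        "VariantName": variant_name,
        "InitialVariantWeight": initial_weight,
    }
    if instance_type:
        variant["InstanceType"] = instance_type
        # Instance count applies only to instance-backed (non-serverless) variants.
        variant["InitialInstanceCount"] = initial_instance_count or 1
    return variant
```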
- sagemaker.core.helper.session_helper.s3_path_join(*args, with_end_slash: bool = False)[source]#
Returns the arguments joined by a slash ("/"), similar to os.path.join() (on Unix).
Behavior of this function:
- If the first argument is "s3://", then that is preserved.
- The output by default will have no slashes at the beginning or end. There is one exception (see with_end_slash). For example, s3_path_join("/foo", "bar/") will yield "foo/bar" and s3_path_join("foo", "bar", with_end_slash=True) will yield "foo/bar/".
- Any repeat slashes will be removed in the output (except for "s3://" if provided at the beginning). For example, s3_path_join("s3://", "//foo/", "/bar///baz") will yield "s3://foo/bar/baz".
- Empty or None arguments will be skipped. For example, s3_path_join("foo", "", None, "bar") will yield "foo/bar".
Alternatives to this function that are NOT recommended for S3 paths:
- os.path.join(...) will have different behavior on Unix machines vs non-Unix machines.
- pathlib.PurePosixPath(...) will apply potentially unintended simplification of single dots (".") and root directories (for example, pathlib.PurePosixPath("foo", "/bar/./", "baz") would yield "/bar/baz").
- "{}/{}/{}".format(...) and similar may result in unintended repeat slashes.
- Parameters:
*args – The strings to join with a slash.
with_end_slash (bool) – (default: False) If true and if the path is not empty, appends a “/” to the end of the path
- Returns:
The joined string, without a slash at the end unless with_end_slash is True.
- Return type:
str
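The documented joining rules can be sketched as below. This is an independent illustration of the described behavior, not the library's code, and it assumes "s3://" appears only as a standalone first argument (as in the documented examples).

```python
from typing import Optional


def s3_path_join_sketch(*args: Optional[str], with_end_slash: bool = False) -> str:
    """Sketch: join path parts with "/", preserving a leading "s3://",
    collapsing repeat slashes, and skipping empty/None arguments."""
    parts = [a for a in args if a]  # skip None and empty strings
    scheme = ""
    if parts and parts[0] == "s3://":
        scheme = parts.pop(0)  # keep the scheme's double slash intact
    # Splitting on "/" and dropping empty segments removes leading,
    # trailing, and repeated slashes in one pass.
    segments = [seg for part in parts for seg in part.split("/") if seg]
    joined = scheme + "/".join(segments)
    if with_end_slash and joined:
        joined += "/"
    return joined
```

The assertions below reproduce the four examples given in the behavior list above.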
- sagemaker.core.helper.session_helper.sts_regional_endpoint(region)[source]#
Get the AWS STS endpoint specific for the given region.
We need this function because the AWS SDK does not yet honor the region_name parameter when creating an AWS STS client.
For the list of regional endpoints, see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html#id_credentials_region-endpoints.
- Parameters:
region (str) – AWS region name
- Returns:
AWS STS regional endpoint
- Return type:
str
- sagemaker.core.helper.session_helper.update_args(args: Dict[str, Any], **kwargs)[source]#
Updates the request arguments dict with the value if populated.
This is to handle the case that the service API doesn’t like NoneTypes for argument values.
- Parameters:
args (Dict[str, Any]) – The request arguments dict.
kwargs – Key-value pairs used to update the args dict.
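The behavior described above, updating the request dict only with populated values because the service APIs reject NoneType arguments, can be sketched as:

```python
from typing import Any, Dict


def update_args_sketch(args: Dict[str, Any], **kwargs: Any) -> Dict[str, Any]:
    """Sketch: copy only the populated (non-None) keyword values into the
    request arguments dict; None values are silently dropped."""
    args.update({key: value for key, value in kwargs.items() if value is not None})
    return args
```

The dict is mutated in place; the return value is a convenience for chaining in this sketch.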