The tutorial below is a transcript of a Python notebook provided in our GitHub repository. It takes you step by step through setting up a subscription to H-optimus-1 on AWS SageMaker and performing inference using two methods: real-time inference and batch inference.
NOTE: This is a tutorial to get you started. For more help deploying H-optimus-1 for production workloads, please contact us and check back for more tutorials.
For the latest version of the notebook below, please check our GitHub repository here:
https://github.com/bioptimus/h1-jumpstart
H-optimus-1 is a foundation model for histology, developed by Bioptimus.
The model is a 1.1B-parameter vision transformer trained on a proprietary collection of more than 1 million H&E-stained whole-slide histology images. For more information, please refer to this page.
H-optimus-1 can extract powerful features from histology images for various downstream applications, such as mutation prediction, survival analysis, or tissue classification.
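As a simple illustration of such a downstream application, the extracted features can serve as inputs to a lightweight classifier. The sketch below is purely illustrative and not part of the deployment workflow; the feature and label arrays are random placeholders standing in for real H-optimus-1 embeddings and tissue labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data: one 1536-dimensional feature vector per tile,
# with a binary tissue label for each tile (replace with real data).
features = np.random.rand(200, 1536)
labels = np.random.randint(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")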
This sample notebook shows you how to deploy H-optimus-1 using Amazon SageMaker.
Note: This is a reference notebook and it cannot run unless you make the changes suggested in the notebook.
You can run this notebook one cell at a time (using Shift+Enter to run a cell).
To use the model package, first subscribe to it on AWS Marketplace, then specify its ARN:
model_package_arn = "arn:aws:sagemaker:eu-north-1:136758871317:model-package/h-optimus-1-7f16e68f69cf3b7bb608d126ac6b9a99"
# The code was executed with Python 3.13.7
%pip install sagemaker==2.254.1
%pip install pillow==11.1.0
%pip install boto3==1.42.2
import json
import time
from datetime import datetime
from sagemaker import ModelPackage
import sagemaker as sage
from sagemaker import get_execution_role
import boto3
from PIL import Image as ImageEdit
from io import BytesIO
role = get_execution_role()
sagemaker_session = sage.Session()
bucket = sagemaker_session.default_bucket()
runtime = boto3.client("runtime.sagemaker")
sm_client = boto3.client("sagemaker")
bucket
'sagemaker-eu-north-1-840737971346'
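The model package ARN above is tied to a specific AWS Region, so your SageMaker session must run in the same Region. A quick sanity check, assuming the eu-north-1 ARN shown above:
# Model packages are Region-specific; the session Region must match the ARN Region.
assert sagemaker_session.boto_region_name == "eu-north-1", (
    f"Session region {sagemaker_session.boto_region_name} does not match the model package region"
)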
If you want to understand how real-time inference with Amazon SageMaker works, see the Amazon SageMaker documentation.
model_name = "h-optimus-1"
content_type = "image/*"
real_time_inference_instance_type = "ml.g5.xlarge"
batch_transform_inference_instance_type = "ml.g5.xlarge"
# Create a deployable model from the model package.
model = ModelPackage(
    role=role, model_package_arn=model_package_arn, sagemaker_session=sagemaker_session
)
# Deploy the model.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type=real_time_inference_instance_type,
    endpoint_name=model_name,
    inference_ami_version="al2-ami-sagemaker-inference-gpu-3-1",
)
---------------!
Once the endpoint has been created, you will be able to perform real-time inference.
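The deploy() call blocks until the endpoint is in service (the dashes above are its progress output). If you want to confirm the endpoint status yourself before sending requests, one way is to describe it with the SageMaker client:
# Confirm that the endpoint is ready to serve requests.
endpoint_desc = sm_client.describe_endpoint(EndpointName=model_name)
print(endpoint_desc["EndpointStatus"])  # expected: "InService"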
image_file = "data/input/real-time/example_input.png"
img = ImageEdit.open(image_file)
# Save the image to a byte stream in PNG format
buffer = BytesIO()
img.save(buffer, format="PNG")
buffer.seek(0) # Reset the buffer's current position
# Get the bytes
img_bytes = buffer.getvalue()
response = runtime.invoke_endpoint(
    EndpointName=model_name,
    ContentType="image/*",
    Accept="application/json",
    Body=img_bytes,
)
features = json.load(response["Body"])[0]
assert len(features) == 1536, f"Unexpected feature dimension: {len(features)}."
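To embed several tiles with the same endpoint, you can invoke it once per image and stack the results. A minimal sketch, assuming a list of PNG tiles (the paths below are hypothetical):
import numpy as np

# Hypothetical tile paths; replace with your own files.
tile_paths = ["data/input/real-time/tile_0.png", "data/input/real-time/tile_1.png"]
all_features = []
for path in tile_paths:
    with open(path, "rb") as f:
        resp = runtime.invoke_endpoint(
            EndpointName=model_name,
            ContentType="image/*",
            Accept="application/json",
            Body=f.read(),
        )
    all_features.append(json.load(resp["Body"])[0])
embeddings = np.asarray(all_features)  # shape: (num_tiles, 1536)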
Now that you have successfully performed real-time inference, you no longer need the endpoint. Delete it to avoid being charged.
model.sagemaker_session.delete_endpoint(model_name)
model.sagemaker_session.delete_endpoint_config(model_name)
model.delete_model()
In this section, you will perform batch inference on multiple input payloads together. If you are not familiar with batch transform and want to learn more, see the Amazon SageMaker batch transform documentation.
Create the model parameters
model_name = "h-optimus-1"
content_type = "application/x-image"
batch_transform_inference_instance_type = "ml.g5.xlarge"
Upload your batch data to S3. Note that you can change the directory structure depending on where you want the files uploaded.
# upload the batch-transform job input files to S3
transform_input_folder = "data/input/batch"
transform_input = sagemaker_session.upload_data(
    transform_input_folder, key_prefix=model_name
)
print("Transform input uploaded to " + transform_input)
Transform input uploaded to s3://sagemaker-eu-north-1-840737971346/h-optimus-1
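To verify which files were uploaded under the prefix, you can list them with the S3 client:
# List the objects that were just uploaded under the model-name prefix.
s3 = boto3.client("s3")
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=model_name).get("Contents", []):
    print(obj["Key"])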
Define an S3 path to store the output of the batch transform job
transform_output = f"{transform_input}-output-{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}"
print(transform_output)
s3://sagemaker-eu-north-1-840737971346/h-optimus-1-output-2025-12-15-10-03-13
Create the model based on the parameters above
# Create the model
print(f"Creating Model:{model_name}...")
create_model_response = sm_client.create_model(
ModelName=model_name,
ExecutionRoleArn=role, # Replace with your IAM Role ARN
PrimaryContainer={
# This tells SageMaker to use the Model Package definition
"ModelPackageName": model_package_arn
},
EnableNetworkIsolation=True
)
Creating Model: h-optimus-1...
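You can confirm that the model was registered before starting the transform job:
# Confirm that the model exists.
model_desc = sm_client.describe_model(ModelName=model_name)
print(model_desc["ModelArn"])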
Now perform the batch transform job
# Now create the transform job
transform_job_name = f"transform-job-{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}"
print(f"Starting Transform Job: {transform_job_name}...")
response = sm_client.create_transform_job(
    TransformJobName=transform_job_name,
    ModelName=model_name,  # Reference the model created above
    MaxConcurrentTransforms=1,
    MaxPayloadInMB=6,
    BatchStrategy="MultiRecord",
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",  # Process all files under the prefix
                "S3Uri": transform_input,
            }
        },
        "ContentType": content_type,  # "application/x-image", as set above
        "SplitType": "None",  # Use "None" when passing whole files (e.g. images)
        "CompressionType": "None",
    },
    TransformOutput={
        "S3OutputPath": transform_output,
        "AssembleWith": "Line",
        "Accept": "application/json",
    },
    TransformResources={
        "TransformAmiVersion": "al2-ami-sagemaker-batch-gpu-535",
        "InstanceType": batch_transform_inference_instance_type,
        "InstanceCount": 1,
    },
)
print(f"Transform Job ARN: {response['TransformJobArn']}")
Starting Transform Job: transform-job-2025-12-15-10-03-18...
Transform Job ARN: arn:aws:sagemaker:eu-north-1:840737971346:transform-job/transform-job-2025-12-15-10-03-18
# Wait for completion
print("Waiting for job to complete...")
start_time = time.time()
waiter = sm_client.get_waiter('transform_job_completed_or_stopped')
waiter.wait(TransformJobName=transform_job_name)
end_time = time.time()
# Calculate duration
duration_seconds = end_time - start_time
minutes = int(duration_seconds // 60)
seconds = int(duration_seconds % 60)
# Check final status
status = sm_client.describe_transform_job(TransformJobName=transform_job_name)
print(f" Job finished with status:{status['TransformJobStatus']}")
print(f" Total Wait Time:{minutes}m{seconds}s")
Waiting for job to complete...
Job finished with status: Completed
Total Wait Time: 15m 3s
# The output is available at the following path
print(transform_output)
s3://sagemaker-eu-north-1-840737971346/h-optimus-1-output-2025-12-15-10-03-13
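Batch transform writes one output object per input file, named after the input with an .out suffix. A sketch for downloading and checking the results, assuming each output is JSON shaped like the real-time response:
# Download and inspect the batch transform outputs.
s3 = boto3.client("s3")
out_bucket, _, out_prefix = transform_output.replace("s3://", "").partition("/")
for obj in s3.list_objects_v2(Bucket=out_bucket, Prefix=out_prefix).get("Contents", []):
    body = s3.get_object(Bucket=out_bucket, Key=obj["Key"])["Body"].read()
    features = json.loads(body)[0]
    print(obj["Key"], len(features))  # expected feature dimension: 1536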
# Clean up model
try:
    sm_client.delete_model(ModelName=model_name)
    print(f"Successfully deleted model: {model_name}")
except Exception as cleanup_error:
    print("Warning: Could not delete model. It may have already been deleted or never created.")
    print(f"Error details: {cleanup_error}")
Successfully deleted model: h-optimus-1
If you would like to unsubscribe from the model package, follow these steps. Before you cancel the subscription, ensure that you do not have any deployable model created from the model package or using the algorithm. Note: you can find this information by looking at the container name associated with the model.
To unsubscribe from the product on AWS Marketplace, open the Machine Learning tab on the Your Software subscriptions page, locate this listing, and choose Cancel Subscription.
Latest version: December 16, 2025
Support: [email protected]