Introduction

The tutorial below is a transcript of a Python notebook provided in our GitHub repository. It walks you step by step through setting up a subscription to H-optimus-1 on AWS SageMaker and performing inference using two methods: real-time inference and batch inference.

NOTE: This is a tutorial to get you started. For help deploying H-optimus for production workloads, please contact us and check back for more tutorials.

Latest Changes

For the latest version of the notebook below, please check our GitHub repository here:

https://github.com/bioptimus/h1-jumpstart

Deploy H-optimus-1 Model Package from AWS Marketplace

H-optimus-1 is a foundation model for histology, developed by Bioptimus.

The model is a 1.1B-parameter vision transformer trained on a proprietary collection of more than 1 million H&E-stained whole-slide histology images. For more information, please refer to this page.

H-optimus-1 can extract powerful features from histology images for various downstream applications, such as mutation prediction, survival analysis, or tissue classification.

This sample notebook shows you how to deploy H-optimus-1 using Amazon SageMaker.

Note: This is a reference notebook; it cannot run unless you make the changes suggested in the notebook.

Pre-requisites:

  1. Note: This notebook contains elements that render correctly in the Jupyter interface. The content below is a direct run of the notebook, including its output; to run it yourself, please download the latest version (see above).
  2. Ensure that the IAM role used has the AmazonSageMakerFullAccess policy attached.
  3. To deploy this ML model successfully, ensure that:
    1. Either your IAM role has these three permissions and you have authority to make AWS Marketplace subscriptions in the AWS account used (see the permission-check sketch after this list):
      1. aws-marketplace:ViewSubscriptions
      2. aws-marketplace:Unsubscribe
      3. aws-marketplace:Subscribe
    2. or your AWS account has a subscription to H-optimus-1. If so, skip step: Subscribe to the model package.
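
If you are unsure whether your role has the Marketplace permissions listed above, the IAM policy simulator can check them. The snippet below is a minimal sketch, not part of the original notebook; the role ARN is a hypothetical placeholder that you must replace with your own.

import boto3

iam = boto3.client("iam")

# Hypothetical placeholder: replace with the ARN of the IAM role you plan to use.
role_arn = "arn:aws:iam::<account-id>:role/<your-sagemaker-role>"

response = iam.simulate_principal_policy(
    PolicySourceArn=role_arn,
    ActionNames=[
        "aws-marketplace:ViewSubscriptions",
        "aws-marketplace:Unsubscribe",
        "aws-marketplace:Subscribe",
    ],
)
for result in response["EvaluationResults"]:
    print(result["EvalActionName"], "->", result["EvalDecision"])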

Contents:

  1. Subscribe to the model package
  2. Create an endpoint and perform real-time inference
    1. Create an endpoint
    2. Create input payload
    3. Perform real-time inference
    4. Delete the endpoint and model
  3. Perform batch inference
  4. Clean-up
    1. Delete the model
    2. Unsubscribe from the listing (optional)

Usage instructions

You can run this notebook one cell at a time (press Shift+Enter to run a cell).

1. Subscribe to the model package

To subscribe to the model package:

  1. Open the model package listing page H-optimus-1.
  2. On the AWS Marketplace listing, click on the Continue to subscribe button.
  3. On the Subscribe to this software page, review the offer and click on “Accept Offer” if you and your organization agree with the EULA, pricing, and support terms.
  4. Once you click on the Continue to configuration button and then choose a region, you will see a Product ARN displayed. This is the model package ARN that you need to specify when creating a deployable model using Boto3. Copy the ARN corresponding to your region and specify it in the following cell.
model_package_arn = "arn:aws:sagemaker:eu-north-1:136758871317:model-package/h-optimus-1-7f16e68f69cf3b7bb608d126ac6b9a99"
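
As a quick sanity check (not part of the original notebook), you can verify that the region embedded in the ARN matches the region of your current session before going further:

import boto3

# The model package ARN embeds the region it was copied from; invoking it
# from a different region will fail, so verify the two match.
session_region = boto3.session.Session().region_name
arn_region = model_package_arn.split(":")[3]
assert arn_region == session_region, (
    f"ARN region {arn_region} does not match session region {session_region}; "
    "copy the ARN for your region from the Marketplace listing."
)
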
# The code was executed with Python 3.13.7
%pip install "sagemaker==2.254.1"
%pip install "pillow==11.1.0"
%pip install "boto3==1.42.2"
import json
import time
from datetime import datetime
from sagemaker import ModelPackage
import sagemaker as sage
from sagemaker import get_execution_role
import boto3
from PIL import Image as ImageEdit

from io import BytesIO
role = get_execution_role()

sagemaker_session = sage.Session()
bucket = sagemaker_session.default_bucket()
runtime = boto3.client("runtime.sagemaker")
sm_client = boto3.client("sagemaker")

bucket
'sagemaker-eu-north-1-840737971346'

2. Create an endpoint and perform real-time inference

If you want to understand how real-time inference with Amazon SageMaker works, see Documentation.

model_name = "h-optimus-1"
content_type = "image/*"
real_time_inference_instance_type = "ml.g5.xlarge"
batch_transform_inference_instance_type = "ml.g5.xlarge"
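
Optionally, before deploying you can confirm that your account has quota for the chosen instance type. The sketch below queries the Service Quotas API; the quota-name substring used in the filter is an assumption about how SageMaker quotas are named, so adjust it if nothing matches.

quotas_client = boto3.client("service-quotas")

# Look for SageMaker quotas mentioning the chosen instance type, e.g.
# "ml.g5.xlarge for endpoint usage" (the name format is an assumption).
for page in quotas_client.get_paginator("list_service_quotas").paginate(ServiceCode="sagemaker"):
    for quota in page["Quotas"]:
        if "ml.g5.xlarge" in quota["QuotaName"]:
            print(quota["QuotaName"], "=>", quota["Value"])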

A. Create an endpoint

# Create a deployable model from the model package.
model = ModelPackage(
    role=role, model_package_arn=model_package_arn, sagemaker_session=sagemaker_session
)
# Deploy the model.
predictor = model.deploy(
    1,
    real_time_inference_instance_type,
    endpoint_name=model_name,
    inference_ami_version="al2-ami-sagemaker-inference-gpu-3-1"
)
---------------!

Once the endpoint has been created, you can perform real-time inference.
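
If you want to double-check that the endpoint is ready before invoking it, you can query its status with the SageMaker client created earlier (a minimal sketch):

status = sm_client.describe_endpoint(EndpointName=model_name)["EndpointStatus"]
print(f"Endpoint status: {status}")  # Should print "InService" before you invoke the endpoint.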

B. Create input payload

image_file = "data/input/real-time/example_input.png"
img = ImageEdit.open(image_file)
# Save the image to a byte stream in PNG format
buffer = BytesIO()
img.save(buffer, format="PNG")
buffer.seek(0)  # Reset the buffer's current position
# Get the bytes
img_bytes = buffer.getvalue()
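
Since the example input is already a PNG file on disk, an equivalent and slightly simpler approach is to read the raw bytes directly; the Pillow round-trip above is only needed when your source image is in another format:

# Equivalent shortcut for inputs that are already PNG-encoded.
with open(image_file, "rb") as f:
    img_bytes = f.read()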

C. Perform real-time inference

response = runtime.invoke_endpoint(
    EndpointName=model_name,
    ContentType=content_type,
    Accept="application/json",
    Body=img_bytes,
)

features = json.load(response["Body"])[0]
assert len(features) == 1536, f"Unexpected feature dimension: {len(features)}."
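
The endpoint returns one 1536-dimensional feature vector per image, so featurizing several tiles is just a loop over files. The sketch below is illustrative: the tiles list is a hypothetical set of paths, and NumPy is an extra dependency not installed by the original notebook.

import numpy as np

# Hypothetical list of tile paths; replace with your own files.
tiles = ["tile_0.png", "tile_1.png", "tile_2.png"]

vectors = []
for path in tiles:
    with open(path, "rb") as f:
        resp = runtime.invoke_endpoint(
            EndpointName=model_name,
            ContentType="image/*",
            Accept="application/json",
            Body=f.read(),
        )
    vectors.append(json.load(resp["Body"])[0])

features_matrix = np.asarray(vectors)  # Shape: (num_tiles, 1536)
print(features_matrix.shape)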

D. Delete the endpoint and model

Now that you have successfully performed real-time inference, you no longer need the endpoint. You can delete it to avoid being charged.

model.sagemaker_session.delete_endpoint(model_name)
model.sagemaker_session.delete_endpoint_config(model_name)
model.delete_model()

3. Perform batch inference

In this section, you will perform batch inference using multiple input payloads together. If you are not familiar with batch transform, and want to learn more, see these links:

  1. How it works
  2. How to run a batch transform job

Create the model parameters

model_name = "h-optimus-1"
content_type = "application/x-image"
batch_transform_inference_instance_type = "ml.g5.xlarge"

Upload your batch data to S3. Note that you can change the directory structure depending on where you want to upload the files.

# upload the batch-transform job input files to S3
transform_input_folder = "data/input/batch"
transform_input = sagemaker_session.upload_data(
    transform_input_folder, key_prefix=model_name
)
print("Transform input uploaded to " + transform_input)
Transform input uploaded to s3://sagemaker-eu-north-1-840737971346/h-optimus-1

Create a directory to store the output of the batch transform job

transform_output = f"{transform_input}-output-{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}"
print(transform_output)
s3://sagemaker-eu-north-1-840737971346/h-optimus-1-output-2025-12-15-10-03-13

Create the model based on the parameters above

# Create the model

print(f"Creating Model:{model_name}...")
create_model_response = sm_client.create_model(
    ModelName=model_name,
    ExecutionRoleArn=role,  # Replace with your IAM Role ARN
    PrimaryContainer={
        # This tells SageMaker to use the Model Package definition
        "ModelPackageName": model_package_arn
    },
    EnableNetworkIsolation=True
)
Creating Model: h-optimus-1...

Now perform the batch transform job

# Now create the transform job

transform_job_name = f"transform-job-{datetime.now().strftime('%Y-%m-%d-%H-%M-%S')}"

print(f"Starting Transform Job:{transform_job_name}...")
response = sm_client.create_transform_job(
    TransformJobName=transform_job_name,
    ModelName=model_name,  # Reference the model created in Step 1
    MaxConcurrentTransforms=1,
    MaxPayloadInMB=6,
    BatchStrategy="MultiRecord",
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",  # Processing all files under the prefix
                "S3Uri": transform_input
            }
        },
        "ContentType": content_type,  # Change to "application/json" or "application/x-image" if needed
        "SplitType": "None",        # Use "None" if passing whole files (e.g. images)
        "CompressionType": "None"
    },
    TransformOutput={
        "S3OutputPath": transform_output,
        "AssembleWith": "Line",
        "Accept": "application/json"
    },
    TransformResources={
        "TransformAmiVersion": "al2-ami-sagemaker-batch-gpu-535",
        "InstanceType": batch_transform_inference_instance_type,
        "InstanceCount": 1
    }
)

print(f"Transform Job ARN:{response['TransformJobArn']}")
Starting Transform Job: transform-job-2025-12-15-10-03-18...
Transform Job ARN: arn:aws:sagemaker:eu-north-1:840737971346:transform-job/transform-job-2025-12-15-10-03-18
# Wait for completion

print("Waiting for job to complete...")
start_time = time.time()
waiter = sm_client.get_waiter('transform_job_completed_or_stopped')
waiter.wait(TransformJobName=transform_job_name)
end_time = time.time()

# Calculate duration
duration_seconds = end_time - start_time
minutes = int(duration_seconds // 60)
seconds = int(duration_seconds % 60)

# Check final status
status = sm_client.describe_transform_job(TransformJobName=transform_job_name)

print(f"   Job finished with status:{status['TransformJobStatus']}")
print(f"   Total Wait Time:{minutes}m{seconds}s")
Waiting for job to complete...
   Job finished with status: Completed
   Total Wait Time: 15m 3s
# output is available on following path
print(transform_output)
s3://sagemaker-eu-north-1-840737971346/h-optimus-1-output-2025-12-15-10-03-13
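
To inspect the results, download the output objects from S3 and parse them. The sketch below assumes SageMaker's default output naming, where each input file produces a corresponding <input-name>.out object under the output prefix:

from urllib.parse import urlparse

s3 = boto3.client("s3")
parsed = urlparse(transform_output)
output_bucket, output_prefix = parsed.netloc, parsed.path.lstrip("/")

for page in s3.get_paginator("list_objects_v2").paginate(Bucket=output_bucket, Prefix=output_prefix):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith(".out"):
            body = s3.get_object(Bucket=output_bucket, Key=obj["Key"])["Body"]
            features = json.load(body)[0]
            print(obj["Key"], "->", len(features), "features")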

4. Clean-up

A. Delete the model

# Clean up model
try:
    sm_client.delete_model(ModelName=model_name)
    print(f"   Successfully deleted model: {model_name}")
except Exception as cleanup_error:
    print("   Warning: Could not delete model. It may have already been deleted or never created.")
    print(f"   Error details: {cleanup_error}")
   Successfully deleted model: h-optimus-1

B. Unsubscribe from the listing (optional)

If you would like to unsubscribe from the model package, follow these steps. Before you cancel the subscription, ensure that you do not have any deployable model created from the model package or using the algorithm. Note: you can find this information by looking at the container name associated with the model.
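
As a convenience, you can check programmatically whether any model in the account still references the model package before unsubscribing (a minimal sketch, not part of the original notebook):

# Flag any model whose primary container still references the model package.
for page in sm_client.get_paginator("list_models").paginate():
    for summary in page["Models"]:
        detail = sm_client.describe_model(ModelName=summary["ModelName"])
        container = detail.get("PrimaryContainer", {})
        if container.get("ModelPackageName") == model_package_arn:
            print("Still referencing the package:", summary["ModelName"])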

Steps to unsubscribe from a product on AWS Marketplace:

  1. Navigate to the Machine Learning tab on the Your Software subscriptions page.
  2. Locate the listing that you want to cancel the subscription for, and then choose Cancel Subscription.



Latest version: December 16, 2025

Support: [email protected]