Deployment

Learn how to deploy AI inference APIs and web applications to cloud providers with OnglX Deploy.

Deploy to the cloud

OnglX Deploy supports deployment to major cloud providers with built-in best practices and security configurations.

Deployment workflow

Deploy your AI APIs in three simple steps:

1. Initialize Project

   Set up your project configuration and cloud provider settings.

   onglx-deploy init

2. Add Components

   Configure AI inference endpoints and other application components.

   onglx-deploy add inference

3. Deploy

   Deploy your infrastructure and get your API endpoints.

   onglx-deploy deploy
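Taken together, the three steps above form a single short shell session. This is a sketch of the happy path only; the exact prompts and output depend on your chosen provider and configuration, and any interactive questions asked by `init` are not shown here.

```shell
# Run from the root of your project directory.

# 1. Create the project configuration and select a cloud provider.
onglx-deploy init

# 2. Add an AI inference component to the project.
onglx-deploy add inference

# 3. Provision the infrastructure and print the resulting API endpoints.
onglx-deploy deploy
```

Each command builds on the state written by the previous one, so they must be run in this order the first time; subsequent deployments only need `onglx-deploy deploy`.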

Cloud provider support

AWS — Amazon Web Services (Production Ready)

Full support for AWS Bedrock, Lambda, API Gateway, and other AWS services.

GCP — Google Cloud Platform (Coming Soon)

Support for GCP Vertex AI and Cloud Functions is in development.