Deployment
Learn how to deploy AI inference APIs and web applications to cloud providers with OnglX Deploy.
Deploy to the cloud
OnglX Deploy supports deployment to major cloud providers with built-in best practices and security configurations.
☁️ AWS Setup Guide
Configure your AWS account for deploying AI APIs with Bedrock foundation models:
- IAM roles and policies
- Bedrock model access
- Region configuration
🤖 AI Inference
Deploy OpenAI-compatible chat completion APIs powered by foundation models:
- Chat completions API
- Multiple model support
- OpenAI compatibility
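Because the deployed endpoint follows the OpenAI chat completions format, any OpenAI-style client can call it. Below is a minimal sketch using only the Python standard library; the endpoint URL, the `Authorization` header, and the Bedrock model ID are illustrative placeholders, not values OnglX Deploy is documented to use — substitute what your own `onglx-deploy deploy` run reports.

```python
import json

# Placeholder endpoint: replace with the URL printed by `onglx-deploy deploy`.
ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/v1/chat/completions"

# OpenAI-compatible chat completions request body. The model ID shown here
# is one example of an AWS Bedrock foundation-model identifier.
payload = {
    "model": "anthropic.claude-3-haiku-20240307-v1:0",
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}

body = json.dumps(payload).encode("utf-8")

# To actually send the request (network call, so commented out here):
# import urllib.request
# req = urllib.request.Request(
#     ENDPOINT,
#     data=body,
#     headers={"Content-Type": "application/json",
#              "Authorization": "Bearer <your-api-key>"},  # placeholder auth
# )
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

The same request body works against the official OpenAI API, which is what "OpenAI compatibility" buys you: existing SDKs and tooling can be pointed at your deployed endpoint by changing only the base URL and credentials.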
Deployment workflow
Deploy your AI APIs in three steps:

1. Initialize Project. Set up your project configuration and cloud provider settings:
   onglx-deploy init
2. Add Components. Configure AI inference endpoints and other application components:
   onglx-deploy add inference
3. Deploy. Deploy your infrastructure and get your API endpoints:
   onglx-deploy deploy
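Taken together, a first deployment session might look like the transcript below. The prompts and ordering are a sketch of the three documented commands; any output your version prints (including the final endpoint URL) comes from your own run, not from this example.

```shell
# 1. Initialize the project and choose a cloud provider
$ onglx-deploy init

# 2. Add an AI inference component
$ onglx-deploy add inference

# 3. Deploy the stack and note the API endpoint it reports
$ onglx-deploy deploy
```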
Cloud provider support
AWS (Amazon Web Services): Production Ready
Full support for AWS Bedrock, Lambda, API Gateway, and other AWS services.

GCP (Google Cloud Platform): Coming Soon
Support for GCP Vertex AI and Cloud Functions is in development.