Environments
Understanding environments in Ryvn
An environment is an isolated Kubernetes cluster with dedicated networking, security boundaries, and resource controls. When you deploy a service, it becomes an installation running within that environment’s infrastructure boundaries.
Each environment operates as a completely isolated infrastructure space. Network traffic cannot flow between environments unless explicitly configured through cross-VPC connections. This isolation extends to DNS resolution, service discovery, and resource allocation—installations in one environment cannot access or interfere with installations in another.
Environment Architecture
Environments run on managed Kubernetes clusters provisioned within dedicated virtual networks. The cluster architecture separates public-facing components from application workloads through subnet isolation. Public subnets contain only load balancers and NAT gateways, while all application workloads run in private subnets without direct internet access.
Resource boundaries are enforced at the environment level through Kubernetes resource quotas and limit ranges. Each environment allocates CPU, memory, and storage independently, preventing resource contention between different environments. Within an environment, individual installations can specify their own resource requirements and scaling policies.
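As a concrete illustration, an environment-wide cap of this kind is typically expressed as a Kubernetes ResourceQuota. The namespace and values below are hypothetical, not Ryvn's generated manifest:

```yaml
# Hypothetical environment-level quota; names and values are illustrative.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: environment-quota
  namespace: acme-production   # hypothetical environment namespace
spec:
  hard:
    requests.cpu: "16"         # total guaranteed CPU across installations
    requests.memory: 64Gi
    limits.cpu: "32"           # total CPU burst ceiling
    limits.memory: 128Gi
    requests.storage: 500Gi    # total persistent storage requests
```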
Network isolation between installations is enforced through VPC subnet boundaries and firewall rules. TLS encryption is handled at the ingress layer through load balancers with automatic certificate provisioning, while inter-service communication within the cluster uses standard Kubernetes networking.
The Ryvn agent runs within each environment to manage deployments, monitor infrastructure health, and coordinate with the Ryvn control plane.
Networking and Access Control
Access Modes
Installations within an environment can operate in three networking modes that determine their accessibility. Public mode exposes installations to the internet through TLS-terminated load balancers with automatic certificate provisioning. Internal mode makes installations accessible within connected networks but not from the public internet. Private mode restricts access to within the environment boundary only.
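In standard Kubernetes terms, these modes roughly map to service exposure patterns like the following sketch. The names are hypothetical, and the internal-load-balancer annotation shown is the AWS one (it varies by provider):

```yaml
# Private mode: ClusterIP service, reachable only inside the environment.
apiVersion: v1
kind: Service
metadata:
  name: api-private            # hypothetical
spec:
  type: ClusterIP
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
---
# Internal mode: load balancer reachable only from connected networks.
apiVersion: v1
kind: Service
metadata:
  name: api-internal           # hypothetical
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
spec:
  type: LoadBalancer
  selector:
    app: api
  ports:
    - port: 443
      targetPort: 8080
```

Public mode would instead put a TLS-terminated, internet-facing load balancer or ingress in front of the service.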
Cloud Provider Implementation
The networking implementation varies by cloud provider but maintains consistent behavior:
| Component | AWS | Google Cloud | Azure |
| --- | --- | --- | --- |
| Network | VPC with public/private subnets | VPC network | Virtual Network |
| Kubernetes | EKS cluster | GKE cluster | AKS cluster |
| Load Balancing | Application Load Balancer | Cloud Load Balancing | Azure Load Balancer |
| Outbound Access | NAT Gateway | Cloud NAT | NAT Gateway |
| Security | Security Groups | VPC firewall rules | Network Security Groups |
DNS and Service Discovery
DNS resolution within environments uses cluster-internal DNS for private communication and cloud provider DNS services for external resolution. Installations receive predictable DNS names based on their configuration, enabling reliable service discovery without hardcoded endpoints. For public access, you can configure custom domains that route to your installations.
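For example, with standard Kubernetes cluster DNS, a workload can reach another installation's service by its stable in-cluster name. The service, namespace, and image below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: acme-production        # hypothetical environment namespace
spec:
  containers:
    - name: web
      image: example/web:1.0        # hypothetical image
      env:
        # Standard cluster DNS pattern: <service>.<namespace>.svc.cluster.local
        - name: BILLING_URL
          value: "http://billing.acme-production.svc.cluster.local:8080"
```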
Deployment Models
Environments can be deployed in your cloud account or directly in customer cloud accounts. The choice affects data residency, network connectivity, and operational control but does not change the underlying environment architecture or capabilities. Learn more about the benefits of customer cloud deployments in our BYOC guide.
Your Cloud Deployments
Running environments in your own cloud account provides centralized operational control and simplified compliance management. Multiple customer workloads can run in a single environment with installation-level isolation, or separate environments can provide complete customer isolation. This model works well for SaaS operations where customers access your service through standard internet connectivity.
Billing and resource management remain under your control, allowing predictable operational costs and capacity planning. Monitoring and logging aggregate across all customer installations, providing unified operational visibility.
Customer Cloud Deployments
Customer cloud environments deploy the same Kubernetes infrastructure directly within customer cloud accounts. The customer retains full control over the underlying infrastructure, networking configuration, and data residency while you maintain the ability to deploy and manage application installations.
This deployment model addresses enterprise requirements for data sovereignty and compliance frameworks that mandate specific geographic or jurisdictional controls. Customer networks can directly connect to the environment, enabling integration with internal services without internet transit.
Authentication and authorization integrate with customer identity systems through standard protocols. The customer configures their own backup policies, disaster recovery procedures, and security monitoring according to their organizational requirements.
Security Boundaries
Network Isolation
Environment isolation is enforced through multiple layers of network and infrastructure controls. Each environment operates within its own virtual network with no default connectivity to other environments. Cross-environment communication requires explicit network peering or VPN connections configured at the infrastructure level.
Access Control
Within environments, Kubernetes RBAC policies restrict access to resources based on installation boundaries. The Ryvn agent operates with limited permissions scoped to specific namespaces, preventing unauthorized access to customer workloads or sensitive cluster resources.
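A namespace-scoped role of the kind described might look like the following sketch; the role name, namespaces, service account, and rule set are illustrative, not Ryvn's published agent permissions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: agent-deployer              # hypothetical
  namespace: acme-production        # hypothetical environment namespace
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: agent-deployer
  namespace: acme-production
subjects:
  - kind: ServiceAccount
    name: ryvn-agent                # assumed agent service account name
    namespace: ryvn-system          # hypothetical
roleRef:
  kind: Role
  name: agent-deployer
  apiGroup: rbac.authorization.k8s.io
```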
Data Encryption
Data encryption occurs at multiple levels: TLS for all network communication, encryption at rest for persistent storage, and envelope encryption for sensitive configuration data. Encryption keys are managed by cloud provider key management services with automatic rotation policies.
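On AWS, for instance, encryption at rest for persistent volumes can be enforced through a storage class like this sketch; the class name is hypothetical, while `encrypted` and `kmsKeyId` are standard EBS CSI driver parameters:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-encrypted               # hypothetical name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"                 # volumes encrypted at rest via KMS
  # kmsKeyId: optional customer-managed key
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```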
Monitoring and Auditing
Network traffic analysis and monitoring operate at the environment level, providing visibility into communication patterns while maintaining installation-level privacy. Audit logs capture all administrative actions and resource modifications for compliance and security analysis.
Resource Management and Scaling
Resource Allocation
Each environment manages compute, memory, and storage resources independently through Kubernetes resource quotas. These quotas prevent any single installation from consuming excessive resources that would impact other installations within the environment.
Installations specify resource requests and limits that determine their guaranteed resources and maximum consumption. The Kubernetes scheduler places workloads based on available capacity and anti-affinity rules that distribute installations across cluster nodes for resilience.
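In a container spec, those requests and limits take this standard form (a pod-spec fragment; the image and values are illustrative):

```yaml
spec:
  containers:
    - name: api
      image: example/api:1.2        # hypothetical image
      resources:
        requests:                   # capacity the scheduler guarantees
          cpu: 500m
          memory: 512Mi
        limits:                     # hard ceiling enforced at runtime
          cpu: "1"
          memory: 1Gi
```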
Autoscaling Behavior
Horizontal pod autoscaling responds to CPU and memory utilization metrics, automatically scaling installation replicas to meet demand. Cluster autoscaling adjusts the underlying node capacity when resource demands exceed current cluster capacity.
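A standard `autoscaling/v2` HorizontalPodAutoscaler targeting CPU utilization captures this behavior; the target deployment, namespace, and thresholds below are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api                         # hypothetical
  namespace: acme-production        # hypothetical environment namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70    # scale out above 70% average CPU
```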
Storage Management
Storage provisioning uses cloud provider persistent volume systems with configurable performance characteristics and backup policies. Each installation receives isolated storage that persists across pod restarts and node failures.
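A persistent volume claim is the usual way an installation requests such storage; the claim name and storage class here are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: api-data                    # hypothetical
  namespace: acme-production        # hypothetical environment namespace
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard-ssd    # hypothetical provider-backed class
  resources:
    requests:
      storage: 20Gi
```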
For detailed setup procedures, see the provisioning guide. Cloud-specific provisioning instructions are available for AWS, Google Cloud, and Azure.