A service defines the runtime configuration and operational requirements for an application component. It specifies container images, resource requirements, networking configuration, and deployment behavior. When you deploy a service to an environment, it creates an installation: the actual running instance with environment-specific settings applied.

Services function as deployment templates that can be instantiated across multiple environments. Each installation inherits the service’s base configuration while applying environment-specific overrides for variables, scaling parameters, and networking rules. This separation enables consistent application behavior across development, staging, and production environments while allowing necessary customization.
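As a rough sketch of that relationship (the field names and merge rules below are illustrative, not Ryvn’s actual schema), an installation can be thought of as the base service configuration with an environment’s overrides layered on top:

```python
# Illustrative sketch only: field names and merge rules are assumptions,
# not Ryvn's actual service schema.
base_service = {
    "image": "ghcr.io/acme/api:1.4.2",
    "replicas": 2,
    "env": {"LOG_LEVEL": "info"},
}

environment_overrides = {
    "production": {"replicas": 6, "env": {"LOG_LEVEL": "warn"}},
    "staging": {"replicas": 2, "env": {"LOG_LEVEL": "debug"}},
}

def build_installation(service: dict, overrides: dict) -> dict:
    """Apply environment-specific overrides on top of the base service config."""
    installation = {**service, **overrides}
    # Nested maps such as environment variables merge rather than replace wholesale.
    installation["env"] = {**service["env"], **overrides.get("env", {})}
    return installation

print(build_installation(base_service, environment_overrides["production"]))
```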
Ryvn supports five service types that correspond to different application runtime patterns. Each type implements specific scheduling, networking, and scaling characteristics suited to its operational model.
Server Services
Server services run continuously and handle HTTP/HTTPS traffic through load balancers. They implement rolling deployments with configurable health checks to ensure zero-downtime updates. The scheduler maintains the specified number of replicas across cluster nodes, automatically replacing failed instances.

Server services expose ports through Kubernetes services and can operate in public, internal, or private networking modes. Ingress controllers handle TLS termination and routing based on hostname and path rules. Built-in readiness and liveness probes monitor application health and trigger automatic recovery actions.
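As a sketch of the application side of these probes, a server service might expose separate liveness and readiness endpoints; the paths and port below are illustrative, and the probes themselves are configured on the service:

```python
# Minimal sketch of liveness/readiness endpoints a server service might expose.
# Paths and port are illustrative; the probes themselves are configured in Kubernetes.
from http.server import BaseHTTPRequestHandler, HTTPServer

READY = True  # flip to False while the app is warming up or draining connections

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/livez":
            # Liveness: the process is up and able to respond at all.
            self.send_response(200)
        elif self.path == "/readyz":
            # Readiness: the app is prepared to receive load-balanced traffic.
            self.send_response(200 if READY else 503)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```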
Worker Services
Worker services process asynchronous tasks from message queues or event streams. They run continuously but scale based on queue depth, CPU utilization, or custom metrics rather than handling direct network traffic. Workers typically connect to external systems like databases, APIs, or file storage.

The horizontal pod autoscaler adjusts worker replica counts based on resource consumption metrics. Workers can process tasks in parallel across multiple instances, with each replica handling independent workloads. Failed tasks can be retried according to configured policies, and dead letter queues capture permanently failed messages.
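The worker pattern described above can be sketched as a loop that pulls tasks, retries failures up to a limit, and parks permanently failed messages on a dead letter queue. This sketch uses an in-process queue as a stand-in for a real broker, and the retry policy values are illustrative:

```python
# Sketch of a queue worker with bounded retries and a dead letter queue.
# An in-process queue stands in for a real broker (SQS, RabbitMQ, etc.).
import queue

MAX_ATTEMPTS = 3
tasks: queue.Queue = queue.Queue()
dead_letters: list = []

def handle(task: dict) -> None:
    """Application-specific task processing; raises on failure."""
    if task.get("poison"):
        raise RuntimeError("unprocessable task")

def run_worker() -> None:
    while not tasks.empty():
        task = tasks.get()
        try:
            handle(task)
        except Exception:
            task["attempts"] = task.get("attempts", 0) + 1
            if task["attempts"] < MAX_ATTEMPTS:
                tasks.put(task)            # retry according to policy
            else:
                dead_letters.append(task)  # permanently failed message
        finally:
            tasks.task_done()

tasks.put({"id": 1})
tasks.put({"id": 2, "poison": True})
run_worker()
print("dead letters:", dead_letters)
```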
Job Services
Job services execute batch workloads on scheduled intervals or manual triggers. They run to completion rather than continuously, with the scheduler ensuring successful execution before marking the job as finished. Jobs can run once, on cron schedules, or be triggered manually through the API.

Kubernetes CronJobs handle scheduled execution, creating new pods for each run while cleaning up completed instances according to retention policies. Job parallelism settings control how many pods execute simultaneously for the same task. Failed jobs can retry automatically with exponential backoff or fail immediately based on configuration.
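A run-to-completion job with retries and exponential backoff might look like the following sketch, where the retry count, base delay, and the batch workload itself are placeholders:

```python
# Sketch of a run-to-completion job with exponential backoff between retries.
# The attempt count and base delay are illustrative configuration values.
import sys
import time

MAX_RETRIES = 4
BASE_DELAY_SECONDS = 2

def run_batch() -> None:
    """The actual batch workload; raises on failure."""
    ...

def main() -> int:
    for attempt in range(MAX_RETRIES + 1):
        try:
            run_batch()
            return 0                      # success: the run is marked finished
        except Exception as err:
            if attempt == MAX_RETRIES:
                print(f"job failed permanently: {err}", file=sys.stderr)
                return 1                  # non-zero exit marks the run as failed
            time.sleep(BASE_DELAY_SECONDS * 2 ** attempt)  # 2s, 4s, 8s, ...

if __name__ == "__main__":
    sys.exit(main())
```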
Chart Services
Chart services deploy Helm charts with full access to Kubernetes resource types. They enable complex multi-container applications, custom resource definitions, and advanced networking configurations that exceed the capabilities of simpler service types.

Chart services manage the complete Helm release lifecycle including installation, upgrades, and rollbacks. Values files can be customized per environment while maintaining consistent chart definitions. Dependencies between charts are resolved automatically, and hooks enable pre/post-deployment actions.
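Conceptually, deploying or rolling back a chart service resembles the Helm operations below; the release name, chart path, values files, and namespaces are placeholders, and Ryvn drives the equivalent lifecycle on your behalf:

```python
# Conceptual sketch of the Helm operations behind a chart service deployment.
# Release name, chart path, values files, and namespaces are placeholders.
import subprocess

def deploy_chart(environment: str) -> None:
    subprocess.run(
        [
            "helm", "upgrade", "--install", "my-app", "./charts/my-app",
            "-f", "values.yaml",                 # shared chart values
            "-f", f"values.{environment}.yaml",  # environment-specific overrides
            "--namespace", environment,
        ],
        check=True,
    )

def rollback_chart(environment: str, revision: int) -> None:
    subprocess.run(
        ["helm", "rollback", "my-app", str(revision), "--namespace", environment],
        check=True,
    )

deploy_chart("staging")
```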
Terraform Services
Terraform services manage infrastructure resources as code. They provision cloud resources, configure networking, and manage dependencies between infrastructure components. Each service maintains its own Terraform state and can coordinate with other infrastructure services.

Terraform execution occurs within secure environments with appropriate cloud credentials and network access. State files are stored securely and shared across team members through remote backends. Infrastructure changes follow the same release and deployment patterns as application services.
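The underlying workflow is the standard Terraform sequence sketched below; the working directory is a placeholder, and the remote backend is assumed to be configured in the Terraform code itself:

```python
# Sketch of the Terraform workflow a Terraform service runs for each release.
# The working directory is a placeholder; backend config lives in the .tf files.
import subprocess

def terraform(*args: str, workdir: str = "./infrastructure") -> None:
    subprocess.run(["terraform", *args], cwd=workdir, check=True)

# Initialize providers and the remote state backend shared by the team.
terraform("init", "-input=false")
# Produce a plan artifact so the applied changes match what was reviewed.
terraform("plan", "-input=false", "-out=tfplan")
# Apply exactly the reviewed plan.
terraform("apply", "-input=false", "tfplan")
```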
Services source their runtime artifacts through two primary mechanisms: GitHub repositories and container registries. The choice affects the build process, versioning strategy, and deployment workflow.
GitHub-sourced services connect directly to repository branches or tags. Ryvn monitors repository changes and automatically builds new releases when code changes are detected. The build process varies by service type:
Server and Worker services: Build container images from Dockerfiles or use buildpacks for source code
Chart services: Package Helm charts and validate templates
Terraform services: Validate configurations and plan infrastructure changes
Build artifacts are stored in the Ryvn Registry with semantic versioning based on Git tags. Build logs and metadata are preserved for debugging and audit purposes.
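The commands involved differ by service type roughly as sketched below; the image tags, chart path, and dispatch logic are illustrative rather than Ryvn’s actual build pipeline:

```python
# Illustrative dispatch of build steps by service type; the commands mirror the
# list above, but the orchestration itself is an assumption, not Ryvn's code.
import subprocess

BUILD_STEPS = {
    "server": [["docker", "build", "-t", "registry.example.com/app:1.0.0", "."]],
    "worker": [["docker", "build", "-t", "registry.example.com/worker:1.0.0", "."]],
    "chart": [["helm", "lint", "./chart"], ["helm", "package", "./chart"]],
    "terraform": [["terraform", "validate"], ["terraform", "plan", "-input=false"]],
}

def build(service_type: str) -> None:
    for command in BUILD_STEPS[service_type]:
        subprocess.run(command, check=True)

build("chart")
```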
Registry-sourced services deploy pre-built container images from external registries. This approach suits teams with existing CI/CD pipelines or complex build requirements that exceed Ryvn’s build capabilities.

Image tags determine release versions, with Ryvn automatically detecting new images and creating corresponding releases. The registry integration supports authentication through service accounts and can pull from private registries with appropriate credentials.
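The detection step reduces to comparing the tags visible in the registry against the releases already recorded, as in this sketch (the tag list is hard-coded here but would come from the registry API):

```python
# Sketch of tag detection: given the tags currently visible in an external
# registry and the releases already recorded, find what is new.
def detect_new_releases(registry_tags: list[str], known_releases: set[str]) -> list[str]:
    return sorted(tag for tag in registry_tags if tag not in known_releases)

registry_tags = ["1.2.0", "1.2.1", "1.3.0"]
known_releases = {"1.2.0", "1.2.1"}
print(detect_new_releases(registry_tags, known_releases))  # ['1.3.0']
```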
Releases represent immutable snapshots of service configuration and artifacts ready for deployment. Each release receives a semantic version number and contains all necessary components: container images, configuration templates, and deployment specifications.

Release creation varies by source type. GitHub services create releases when tags are pushed to the repository, using the tag name as the version number. Registry services create releases automatically when new image tags are detected.

Releases progress through release channels that control deployment timing and environment targeting. A typical flow promotes releases from development to staging to production, with approval gates and automated testing at each stage.

Release metadata includes build information, source commit details, and deployment history. This audit trail enables precise rollback capabilities and change tracking across environments. Failed releases can be quickly replaced with known-good versions without affecting running installations.
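Channel promotion can be pictured as advancing a release one step at a time once its gate passes; the channel names and approval flag below are illustrative, not Ryvn’s API:

```python
# Sketch of release promotion through channels; channel names and the
# approval gate are illustrative.
CHANNELS = ["development", "staging", "production"]

def promote(release: str, current_channel: str, approved: bool) -> str:
    """Move a release to the next channel once its gate has passed."""
    index = CHANNELS.index(current_channel)
    if index == len(CHANNELS) - 1:
        return current_channel        # already in production
    if not approved:
        return current_channel        # approval gate not yet passed
    return CHANNELS[index + 1]

print(promote("2.1.0", "staging", approved=True))  # 'production'
```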
Service health encompasses multiple layers: individual pod health, application-level metrics, and cross-environment consistency. Health monitoring operates at both the service level (across all installations) and installation level (within specific environments).
Health Checks and Probes
Kubernetes health probes monitor container startup, readiness, and ongoing health. Readiness probes prevent traffic routing to containers that aren’t ready to handle requests. Liveness probes restart containers that become unresponsive. Startup probes provide additional time for slow-starting applications.

Application-level health checks can integrate with external dependencies like databases or APIs. Custom health endpoints enable complex validation logic beyond simple port connectivity checks.
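A custom health endpoint of this kind typically aggregates individual dependency checks into one status, as in this sketch; the check functions are placeholders for real database and API connectivity tests:

```python
# Sketch of a custom health endpoint's logic: aggregate dependency checks
# into a single status code and report. Check functions are placeholders.
from typing import Callable

def check_database() -> bool:
    return True   # placeholder: e.g. run `SELECT 1` against the database

def check_payments_api() -> bool:
    return True   # placeholder: e.g. call the dependency's own health endpoint

CHECKS: dict[str, Callable[[], bool]] = {
    "database": check_database,
    "payments_api": check_payments_api,
}

def health_report() -> tuple[int, dict]:
    results = {}
    for name, check in CHECKS.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False
    status = 200 if all(results.values()) else 503
    return status, results

print(health_report())  # (200, {'database': True, 'payments_api': True})
```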
Resource Monitoring
Resource consumption monitoring tracks CPU, memory, and storage utilization across all service installations. Metrics collection provides historical trends and enables capacity planning. Alerts trigger when resource consumption exceeds defined thresholds.

Scaling policies use these metrics to automatically adjust replica counts based on demand. Resource requests and limits ensure predictable performance while preventing resource starvation of other services.
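For CPU-based scaling, the Kubernetes horizontal pod autoscaler computes the desired replica count as ceil(currentReplicas × currentMetric / targetMetric); the sketch below applies that formula with illustrative bounds:

```python
# The Kubernetes HPA computes desired replicas as
# ceil(currentReplicas * currentMetric / targetMetric); this sketch applies
# that formula and clamps the result to illustrative min/max bounds.
import math

def desired_replicas(current_replicas: int, current_cpu: float, target_cpu: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    desired = math.ceil(current_replicas * current_cpu / target_cpu)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(current_replicas=3, current_cpu=0.9, target_cpu=0.6))  # 5
```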
Multi-Environment Visibility
Service dashboards aggregate health and performance data across all environments where the service is installed. This unified view enables quick identification of environment-specific issues and comparison of service behavior across different configurations.

Log aggregation correlates log entries across installations, enabling debugging of distributed issues and performance analysis. Centralized logging maintains consistent log formats while preserving environment-specific context.