Bring your own Kubernetes (BYOK) system requirements
Chef 360 Platform Server has the following requirements for a bring your own Kubernetes (BYOK) deployment.
Hardware
Chef 360 Platform supports single-node and multi-node deployments. Select a topology based on your availability and scalability requirements.
For production environments, run a benchmark test to determine your system’s requirements. The benchmark test should include the number of nodes you plan to enroll, job frequency, output size, job duration, and check-in frequency.
Note
If the root directory has space restrictions, mount the following directories before installing Chef 360 Platform:
- `/var/lib/k0s/`
- `/run/k0s/`
- `/var/lib/embedded-cluster/`
- `/etc/k0s/`
Single-node requirements
A single-node Chef 360 Platform deployment (hyperconverged non-HA) has the following minimum requirements. Adjust these values based on your specific usage patterns and workload. For sizing recommendations tailored to your environment, contact your Customer Architect or Customer Success Manager.
| vCPU | Memory | Storage |
|---|---|---|
| 16 | 32 GB | 200 GB |
Multi-node requirements
All nodes must meet or exceed the requirements for their assigned role. Minimum node counts and node sizing requirements must both be satisfied. Using fewer nodes with larger specifications doesn’t replace the required node count. Node roles must be deployed exactly as defined for each topology.
These requirements support reliable operation and high availability of the platform.
Multi-node systems have the following minimum requirements:
| Topology | Roles | Nodes | vCPU | Memory (GB) | Disk (GB) |
|---|---|---|---|---|---|
| Hyperconverged-HA | Controller + Frontend + Backend | 3 | 16 | 32 | 200 |
| Tiered-HA | Controller + Backend | 3 | 16 | 32 | 200 |
| | Frontend | 3 or more | 8 | 16 | 50 |
| Hyperscale-HA | Controller | 3 | 4 | 16 | 50 |
| | Frontend | 3 or more | 8 | 16 | 50 |
| | Backend | 3 | 16 | 32 | 200 |
Node sizing requirements can vary based on workload characteristics, scale expectations, performance objectives, availability requirements, and integration patterns. The requirements documented here represent a baseline configuration. Work with a Chef 360 Platform Architect to validate and refine node sizing and ensure your deployment meets the specific needs of your environment.
For more information about cluster topologies and adding nodes, see Cluster management.
File system requirements
Chef 360 Platform has the following file system requirements:
- A mounted XFS filesystem with the `ftype=1` option. This is the default in recent RHEL versions.
- The `/var` directory isn't mounted with the `noexec` option.
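Both requirements can be verified from the mount table before installing. The following is a minimal sketch that reports the filesystem type and `noexec` status for `/var`, falling back to the mount it inherits from when `/var` isn't a separate mount point:

```shell
#!/bin/sh
# Sketch: report the filesystem type and noexec status for /var.
# If /var isn't its own mount point, fall back to the mount it inherits from.
line=$(awk '$2 == "/var"' /proc/mounts | head -n 1)
if [ -z "$line" ]; then
  echo "/var is not a separate mount; checking / instead"
  line=$(awk '$2 == "/"' /proc/mounts | head -n 1)
fi
set -- $line
echo "filesystem type: $3"
case ",$4," in
  *,noexec,*) echo "WARNING: /var is mounted with noexec" ;;
  *)          echo "OK: noexec not set" ;;
esac
# On XFS volumes, confirm ftype=1 with: xfs_info /var | grep ftype
```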
Ports
Chef 360 Platform requires the following ports for all deployments. If you're using the default port configuration, open the following ports.
Ports for inbound connections:
| Port | Description |
|---|---|
| 22 | SSH |
| 5985-5986 | WinRM |
| 30000 | Chef 360 Platform Console |
| 31000 | API Gateway |
| 31050 | RabbitMQ |
| 31101 | Mailpit (optional) |
Ports for outbound connections:
| Port | Description |
|---|---|
| 443 | HTTPS; required for non-air-gapped installations |
Ports for multi-node deployment
Multi-node deployments require additional ports for node-to-node communication. Create firewall rules to allow bidirectional traffic between nodes on these ports.
| Port | Description |
|---|---|
| 2380 | etcd server client API (TCP) |
| 4789 | Flannel VXLAN overlay network (UDP) |
| 6443 | Kubernetes API server (TCP) |
| 9091 | Prometheus metrics (TCP) |
| 9443 | Webhook server (TCP) |
| 10249 | kube-proxy (TCP) |
| 10250 | Kubelet API (TCP) |
| 10256 | kube-proxy health check and metrics (TCP) |
| 30000 | Admin Console (TCP), required for nodes joining the cluster |
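On hosts managed with firewalld, one way to open these ports is with `firewall-cmd`. The sketch below only prints the commands for the ports in the table above so they can be reviewed before running (firewalld is an assumption; adapt for ufw, iptables, or your cloud security groups):

```shell
#!/bin/sh
# Sketch: emit firewalld commands for the Chef 360 multi-node ports above.
# The commands are printed rather than executed so they can be reviewed first.
TCP_PORTS="2380 6443 9091 9443 10249 10250 10256 30000"
UDP_PORTS="4789"
for port in $TCP_PORTS; do
  echo "firewall-cmd --permanent --add-port=${port}/tcp"
done
for port in $UDP_PORTS; do
  echo "firewall-cmd --permanent --add-port=${port}/udp"
done
echo "firewall-cmd --reload"
```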
Fully qualified domain name
Chef 360 Platform Server requires a fully qualified domain name (FQDN) that’s RFC 1123 compliant and registered with the Domain Name System (DNS).

Disaster recovery and data backup
Chef 360 Platform’s built-in disaster recovery features aren’t supported in BYOK deployments. Your organization is responsible for backup and recovery strategies for both the Kubernetes cluster and Chef 360 Platform’s data.
Kubernetes cluster
Chef 360 Platform supports the following Kubernetes platforms:
- Amazon EKS: Elastic Kubernetes Service on AWS
- Azure AKS: Azure Kubernetes Service
- Red Hat OpenShift: Version 4.10 or later
- Generic Kubernetes: Version 1.29 or later (other managed or self-hosted clusters)
You must have the following administrative access on the cluster:
- kubectl version 1.30 or later with context set to target the cluster from your workstation
- Administrative access to create namespaces, CustomResourceDefinitions, and cluster-scoped roles or role bindings
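The access requirements above can be checked from the workstation before installing. A minimal preflight sketch (assumes `kubectl` is on the PATH with the intended context selected):

```shell
#!/bin/sh
# Sketch: preflight check of workstation access to the target cluster.
# Assumes kubectl is on the PATH with the intended context selected.
if command -v kubectl >/dev/null 2>&1; then
  kubectl version --client
  kubectl config current-context || true
  # Each check prints yes/no; a "no" means the context lacks that permission.
  for resource in namespaces customresourcedefinitions clusterroles clusterrolebindings; do
    printf 'create %s: ' "$resource"
    kubectl auth can-i create "$resource" || true
  done
else
  echo "kubectl not found in PATH; install kubectl 1.30 or later first"
fi
```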
Air-gapped cluster
For air-gapped deployments, the following network and infrastructure requirements apply.
Network isolation:
- The Kubernetes cluster has no direct internet connectivity.
- Cluster nodes can communicate with each other over the internal network.
- If the workstation you use to manage the cluster is also air-gapped, you must have a secure file-transfer mechanism for moving artifacts to that workstation.
Infrastructure requirements:
A private container registry accessible from all Kubernetes nodes. Supported options include Docker Registry, Harbor, or a similar registry. The registry must have sufficient storage capacity for Chef 360 Platform images.
A local file server or object storage to host installation artifacts. Supported options include:
- An HTTP/HTTPS file server
- An S3-compatible object storage service
- A network file share accessible from the installation host
Internet-connected cluster
If you’re deploying Chef 360 Platform on an internet-connected cluster, the cluster must have outbound connectivity to the following Chef 360 Platform services:
- Container registry: `registry.chef360.chef.io`
- Downloads portal: `download.chef360.chef.io`
- Proxy registry services: `proxy.chef360.chef.io`
- Application services: `appservice.chef360.chef.io`
- Docker Hub domains: `index.docker.io`, `cdn.auth0.com`, `*.docker.io`, `*.docker.com`
If your environment uses a corporate proxy, ensure it’s configured to allow access to these endpoints. Configure proxy settings in:
- Docker daemon on cluster nodes
- Kubernetes cluster configuration
- Container runtime (for example, containerd or CRI-O)
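On nodes where the container runtime is managed by systemd, proxy settings are commonly supplied through a systemd drop-in unit. The following is a sketch for containerd; the proxy host (`proxy.example.com:3128`) is a placeholder, and the `NO_PROXY` list should be adjusted for your internal networks and cluster domains:

```ini
# /etc/systemd/system/containerd.service.d/http-proxy.conf
# proxy.example.com:3128 is a placeholder; replace with your corporate proxy.
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,.svc,.cluster.local"
```

After adding the drop-in, reload systemd and restart the runtime with `systemctl daemon-reload && systemctl restart containerd`.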