1 - Introduction
Sidero (“Iron” in Greek) is a project created by the Sidero Labs team. Sidero Metal provides lightweight, composable tools that can be used to create bare-metal Talos Linux + Kubernetes clusters. These tools are built around the Cluster API project.
Because of the design of Cluster API, there is inherently a “chicken and egg” problem: you need an existing Kubernetes cluster in order to provision the management plane, which can then provision more clusters. The initial management plane cluster that runs the Sidero Metal provider does not need to be based on Talos Linux, although that is recommended for security and stability reasons. The Getting Started guide will walk you through installing Sidero Metal either on an existing cluster, or by quickly creating a Docker-based cluster used to bootstrap the process.
Sidero Metal is currently made up of two components:
- Metal Controller Manager: Provides custom resources and controllers for managing the lifecycle of metal machines, iPXE server, metadata service, and gRPC API service
- Cluster API Provider Sidero (CAPS): A Cluster API infrastructure provider that makes use of the pieces above to spin up Kubernetes clusters
Sidero Metal also needs these co-requisites in order to be useful:
- Cluster API (CAPI)
- Cluster API Bootstrap Provider Talos (CABPT)
- Cluster API Control Plane Provider Talos (CACPPT)
All components mentioned above can be installed using Cluster API's `clusterctl` tool.
See the Getting Started guide for more details.
2 - Installation
As of Cluster API version 0.3.9, Sidero is included as a default infrastructure provider in `clusterctl`.
To install Sidero and the other Talos providers, simply issue:
clusterctl init -b talos -c talos -i sidero
Sidero supports several variables to configure the installation. These variables can be set either as environment variables or as variables in the `clusterctl` configuration file:
- `SIDERO_CONTROLLER_MANAGER_HOST_NETWORK` (`false`): run `sidero-controller-manager` on host network
- `SIDERO_CONTROLLER_MANAGER_API_ENDPOINT` (empty): specifies the IP address the controller manager can be reached on; defaults to the node IP
- `SIDERO_CONTROLLER_MANAGER_API_PORT` (`8081`): specifies the port the controller manager can be reached on
- `SIDERO_CONTROLLER_MANAGER_CONTAINER_API_PORT` (`8081`): specifies the controller manager internal container port
- `SIDERO_CONTROLLER_MANAGER_EXTRA_AGENT_KERNEL_ARGS` (empty): specifies additional Linux kernel arguments for the Sidero agent (for example, different console settings)
- `SIDERO_CONTROLLER_MANAGER_AUTO_ACCEPT_SERVERS` (`false`): automatically accept discovered servers; by default, `.spec.accepted` should be changed to `true` to accept the server
- `SIDERO_CONTROLLER_MANAGER_AUTO_BMC_SETUP` (`true`): automatically attempt to configure the BMC with a `sidero` user that will be used for all IPMI tasks
- `SIDERO_CONTROLLER_MANAGER_INSECURE_WIPE` (`true`): wipe only the first megabyte of each disk on the server; otherwise wipe the full disk
- `SIDERO_CONTROLLER_MANAGER_SERVER_REBOOT_TIMEOUT` (`20m`): timeout for the server reboot (how long it might take for the server to be rebooted before Sidero retries an IPMI reboot operation)
- `SIDERO_CONTROLLER_MANAGER_IPMI_PXE_METHOD` (`uefi`): IPMI boot from PXE method: `uefi` for UEFI boot or `bios` for BIOS boot
- `SIDERO_CONTROLLER_MANAGER_BOOT_FROM_DISK_METHOD` (`ipxe-exit`): configures the way Sidero forces a server to boot from disk when the server hits the iPXE server after initial install: `ipxe-exit` returns an iPXE script with the `exit` command, `http-404` returns an HTTP 404 Not Found error, and `ipxe-sanboot` uses the iPXE `sanboot` command to boot from the first hard disk
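For example, these variables can be placed in the default `clusterctl` configuration file, which `clusterctl init` reads automatically. The values below are a sketch, for illustration only:

```yaml
# ~/.cluster-api/clusterctl.yaml -- example values only; adjust for your environment
SIDERO_CONTROLLER_MANAGER_HOST_NETWORK: "true"
SIDERO_CONTROLLER_MANAGER_API_ENDPOINT: "192.168.1.150"   # an IP the bare-metal servers can reach
SIDERO_CONTROLLER_MANAGER_AUTO_ACCEPT_SERVERS: "false"
SIDERO_CONTROLLER_MANAGER_EXTRA_AGENT_KERNEL_ARGS: "console=ttyS0"
```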
Sidero provides two endpoints which should be made available to the infrastructure:
- TCP port 8081, which provides the combined iPXE, metadata, and gRPC service (the external endpoint should be passed to Sidero as `SIDERO_CONTROLLER_MANAGER_API_ENDPOINT`)
- UDP port 69 for the TFTP service (the DHCP server should point the nodes to PXE boot from that IP)
These endpoints can be exposed to the infrastructure using different strategies:
- running `sidero-controller-manager` on the host network
- using Kubernetes load balancers (e.g. MetalLB), ingress controllers, etc.
Note: If you want to run `sidero-controller-manager` on the host network using a port other than `8081`, you should set both `SIDERO_CONTROLLER_MANAGER_API_PORT` and `SIDERO_CONTROLLER_MANAGER_CONTAINER_API_PORT` to the same value.
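As a sketch, assuming a hypothetical port 9091, the `clusterctl` variables would look like this:

```yaml
# Example only: host networking with a non-default port (9091 is arbitrary)
SIDERO_CONTROLLER_MANAGER_HOST_NETWORK: "true"
SIDERO_CONTROLLER_MANAGER_API_PORT: "9091"
SIDERO_CONTROLLER_MANAGER_CONTAINER_API_PORT: "9091"
```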
3 - Architecture
The overarching architecture of Sidero centers around a “management plane”. This plane is expected to serve as a single interface upon which administrators can create, scale, upgrade, and delete Kubernetes clusters. At a high level, the management plane and the clusters it creates look something like this:
(architecture diagram: a management plane cluster running Sidero, provisioning and managing multiple workload clusters)
4 - Resources
Sidero, the Talos bootstrap/controlplane providers, and Cluster API each provide several custom resources (CRDs) to Kubernetes. These CRDs are crucial to understanding the connections between each provider and in troubleshooting problems. It may also help to look at the cluster template to get an idea of the relationships between these.
Cluster API (CAPI)
It’s worth defining the most basic resources that CAPI provides first, as they are related to several subsequent resources below.
The `Cluster` is the highest-level CAPI resource.
It allows users to specify things like the network layout of the cluster, and contains references to the infrastructure and control plane resources that will be used to create the cluster.
A `Machine` represents an infrastructure component hosting a Kubernetes node.
It allows for the specification of things like the Kubernetes version, and contains a reference to the infrastructure resource that relates to this machine.
`MachineDeployments` are similar to `Deployments` in Kubernetes primitives: a `MachineDeployment` manages `Machines` in much the same way a `Deployment` manages `Pods`.
A `MachineDeployment` allows for the specification of a number of `Machine` replicas with a given specification.
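To make these relationships concrete, here is a minimal sketch of a `Cluster` tying the control plane and infrastructure references together (names, CIDRs, and API versions are illustrative and depend on your CAPI release):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: example-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["10.244.0.0/16"]   # example pod CIDR
    services:
      cidrBlocks: ["10.96.0.0/12"]    # example service CIDR
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: TalosControlPlane
    name: example-cluster-cp
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: MetalCluster
    name: example-cluster
```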
Cluster API Bootstrap Provider Talos (CABPT)
The `TalosConfig` resource allows a user to specify the type (`init`, `controlplane`, `join`) for a given machine.
The bootstrap provider will then generate a Talos machine configuration for that machine.
This resource also provides the ability to pass a full, pre-generated machine configuration.
Finally, users have the ability to pass `configPatches`, which are applied to edit a generated machine configuration with user-defined settings.
The `TalosConfig` corresponds to the bootstrap sections of `Machines` and `MachineDeployments`, and to the `controlPlaneConfig` section of `TalosControlPlanes`.
`TalosConfigTemplates` are similar to the `TalosConfig` above, but are used when specifying a bootstrap reference in a `MachineDeployment`.
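A minimal `TalosConfig` sketch (the machine type, Talos version, and patch shown are illustrative; `configPatches` follow JSON patch semantics):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: TalosConfig
metadata:
  name: example-workers
spec:
  generateType: join        # one of: init, controlplane, join
  talosVersion: v0.14       # example Talos version
  configPatches:
    - op: replace
      path: /machine/install/disk
      value: /dev/sda       # example install disk
```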
Cluster API Control Plane Provider Talos (CACPPT)
The control plane provider presents a single CRD, the `TalosControlPlane`.
This resource is similar to `MachineDeployments`, but is targeted exclusively at the Kubernetes control plane nodes.
The `TalosControlPlane` allows for the specification of the number of replicas, the version of Kubernetes for the control plane nodes, references to the infrastructure resource to use (the `infrastructureTemplate` section), as well as the configuration of the bootstrap data via the `controlPlaneConfig` section.
This resource is referred to by the CAPI `Cluster` resource via its `controlPlaneRef` section.
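A hedged `TalosControlPlane` sketch (replica count, versions, and API versions are illustrative; field details vary by provider release):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: TalosControlPlane
metadata:
  name: example-cluster-cp
spec:
  replicas: 3                # example: three control plane nodes
  version: v1.22.2           # example Kubernetes version
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: MetalMachineTemplate
    name: example-cluster-cp
  controlPlaneConfig:
    init:
      generateType: init
    controlplane:
      generateType: controlplane
```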
Cluster API Provider Sidero (CAPS)
MetalCluster is Sidero’s view of the cluster resource.
This resource allows users to define the control plane endpoint that corresponds to the Kubernetes API server.
This resource corresponds to the `infrastructureRef` section of Cluster API's `Cluster` resource.
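For illustration, a `MetalCluster` might look like this (the endpoint is a placeholder for your Kubernetes API server address):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: MetalCluster
metadata:
  name: example-cluster
spec:
  controlPlaneEndpoint:
    host: 192.168.1.10   # example: VIP or address of the Kubernetes API server
    port: 6443
```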
MetalMachine is Sidero’s view of a machine.
It allows for a reference to either a single server or a server class, from which a physical server will be picked to bootstrap.
A `MetalMachineTemplate` is similar to the `MetalMachine` above, but serves as a template that is reused by resources like `TalosControlPlanes` that allocate multiple `Machines` at once.
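A sketch of a `MetalMachineTemplate` that allocates from a `ServerClass` (the class name here is illustrative):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: MetalMachineTemplate
metadata:
  name: example-cluster-cp
spec:
  template:
    spec:
      serverClassRef:
        apiVersion: metal.sidero.dev/v1alpha1
        kind: ServerClass
        name: any   # example server class to draw servers from
```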
`ServerBindings` represent a one-to-one mapping between a `Server` resource and a `MetalMachine` resource.
A `ServerBinding` is used internally to keep track of servers that are allocated to a Kubernetes cluster, and to make decisions on cleaning and returning servers to a `ServerClass` upon deallocation.
Metal Controller Manager
Environments
These define a desired deployment environment for Talos, including things like which kernel to use, which kernel args to pass, and which initrd to use.
Sidero allows you to define a default environment, as well as other environments that may be specific to a subset of nodes.
Users can override the environment at the `Server` level if they have requirements for different kernels or kernel parameters.
See the Environments section of our Configuration docs for examples and more detail.
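As an illustrative sketch, an `Environment` might look roughly like this (the URLs and kernel args are examples; use the kernel and initrd matching your Talos release):

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: Environment
metadata:
  name: default
spec:
  kernel:
    url: https://github.com/siderolabs/talos/releases/download/v0.14.0/vmlinuz-amd64
    args:
      - console=tty0            # example console setting
      - talos.platform=metal
  initrd:
    url: https://github.com/siderolabs/talos/releases/download/v0.14.0/initramfs-amd64.xz
```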
Servers
These represent physical machines as resources in the management plane.
`Servers` are created when a physical machine PXE boots and completes a “discovery” process, in which it registers with the management plane and provides SMBIOS information such as the CPU manufacturer and version, and memory information.
See the Servers section of our Configuration docs for examples and more detail.
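For illustration only, a discovered `Server` might look roughly like this (the UUID name and hardware values are invented, and exact fields vary by Sidero version):

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: Server
metadata:
  name: 00000000-0000-0000-0000-d05099d33360   # example UUID from discovery
spec:
  accepted: true   # must be set to true before the server can be allocated
  cpu:
    manufacturer: Intel(R) Corporation
    version: Intel(R) Xeon(R) CPU E3-1260L v5 @ 2.90GHz
```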
ServerClasses
`ServerClasses` are a grouping of the `Servers` mentioned above, grouped to create classes of servers based on memory, CPU, or other attributes.
These can be used to compose a bank of `Servers` that are eligible for provisioning.
See the ServerClasses section of our Configuration docs for examples and more detail.
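An illustrative `ServerClass` that selects servers by CPU manufacturer (the name and qualifier values are examples):

```yaml
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: example-intel
spec:
  qualifiers:
    cpu:
      - manufacturer: Intel(R) Corporation   # matches discovered SMBIOS data
```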
Sidero Controller Manager
Metadata
While the controller does not present unique CRDs within Kubernetes, it is important to understand the metadata resources that are returned to physical servers during the boot process.
This metadata server may be familiar to you if you have used cloud environments previously.
Using Talos machine configurations created by the Talos Cluster API bootstrap provider, along with patches specified by editing `ServerClass` resources or `TalosControlPlane` resources, metadata is returned to servers that query the controller manager at boot time.
See the Metadata section of our Configuration docs for examples and more detail.
5 - System Requirements
Most of the time, Sidero does very little, so it needs very few resources. However, since it is in charge of any number of workload clusters, it should be built with redundancy. It is also common, if the cluster is single-purpose, to combine the controlplane and worker node roles. Virtual machines are also perfectly well-suited for this role.
Minimum suggested dimensions:
- Node count: 3
- Node RAM: 4GB
- Node CPU: ARM64 or x86-64 class
- Node storage: 32GB on the system disk