The following setup covers creating 3 control-plane (master) nodes and 4 worker machines, provisioned dynamically on bare-metal servers. We simulate a scenario in which a data center has handed us the metal machines: we boot Talos over the network with PXE and the Sidero Cluster API provider, and join the machines into a cluster.
We are going to deploy a DHCP server, but there is no need to configure next-server boot or TFTP, because both are already implemented in the Sidero controller-manager, which also includes a DHCP proxy.
Sidero v0.6 ships a DHCP proxy that automatically augments the DHCP service provided by the network environment with PXE boot instructions. No configuration is required beyond having the environment’s DHCP server assign IPs to the machines.
The dnsmasq configuration (run as a Docker Compose service) would look like this:
services:
  dnsmasq:
    image: quay.io/poseidon/dnsmasq:v0.5.0-32-g4327d60-amd64
    container_name: dnsmasq
    cap_add:
      - NET_ADMIN
    network_mode: host
    command: >
      -d -q -p0
      --dhcp-range=10.1.1.3,10.1.1.30
      --dhcp-option=option:router,10.1.1.1
      --log-queries
      --log-dhcp
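Assuming the snippet above is saved as docker-compose.yml, the DHCP server can then be started with:
# Start the dnsmasq DHCP container in the background
docker compose up -d dnsmasq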
Management Plane cluster
In order to run Sidero, you first need a Kubernetes “Management cluster”.
- Kubernetes v1.26 or later
- Ability to expose TCP and UDP Services to the workload cluster machines
- Access to the cluster: we deploy a single-node Talos cluster, so access can be obtained via
talosctl kubeconfig
We create a one-node cluster with allowSchedulingOnControlPlanes: true, which allows running workloads on control-plane nodes.
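One way to bake this setting in, assuming you generate the management-cluster config with talosctl gen config (the cluster name and patch filename below are illustrative; the endpoint matches the node IP used in this setup):
# Sketch: generate a single-node management cluster config with the scheduling patch
cat > scheduling-patch.yaml <<'EOF'
cluster:
  allowSchedulingOnControlPlanes: true
EOF
talosctl gen config management-cluster https://10.1.1.18:6443 --config-patch @scheduling-patch.yaml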
k get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
talos-74m-2r5 Ready control-plane 11m v1.31.2 10.1.1.18 Talos (v1.8.3) 6.6.60-talos containerd://2.0.0
Sidero Cluster API
Sidero is included as a default infrastructure provider in clusterctl, so the installation of both Sidero and the Cluster API (CAPI) components is as simple as using the clusterctl tool.
First, we tell Sidero to use hostNetwork: true so that it binds its ports directly to the host, rather than being reachable only from inside the cluster. There are many ways of exposing the services, but this is the simplest path for a single-node management cluster. When you scale the management cluster, you will need an alternative method, such as an external load balancer or something like MetalLB.
export SIDERO_CONTROLLER_MANAGER_HOST_NETWORK=true
export SIDERO_CONTROLLER_MANAGER_DEPLOYMENT_STRATEGY=Recreate
export SIDERO_CONTROLLER_MANAGER_API_ENDPOINT=10.1.1.18
export SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT=10.1.1.18
clusterctl init -b talos -c talos -i sidero
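It can take a minute or two for all components to roll out; one way to wait for the main controllers, as a sketch:
# Wait for the Sidero and CAPI controllers to report Available
kubectl wait --for=condition=Available --timeout=5m \
  -n sidero-system deployment/sidero-controller-manager
kubectl wait --for=condition=Available --timeout=5m \
  -n capi-system deployment/capi-controller-manager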
k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
cabpt-system cabpt-controller-manager-6b8b989d68-lwxbw 1/1 Running 0 39h
cacppt-system cacppt-controller-manager-858fccc654-xzfds 1/1 Running 0 39h
capi-system capi-controller-manager-564745d4b-hbh7x 1/1 Running 0 39h
cert-manager cert-manager-5c887c889d-dflnl 1/1 Running 0 39h
cert-manager cert-manager-cainjector-58f6855565-5wf5z 1/1 Running 0 39h
cert-manager cert-manager-webhook-6647d6545d-k7qhf 1/1 Running 0 39h
sidero-system caps-controller-manager-67f75b9cb-9z2fq 1/1 Running 0 39h
sidero-system sidero-controller-manager-97cb45f57-v7cv2 4/4 Running 0 39h
curl -I http://10.1.1.18:8081/tftp/snp.efi
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 1020416
Content-Type: application/octet-stream
Environment
Environments are a custom resource provided by the Metal Controller Manager. An environment is a codified description of what should be returned by the PXE server when a physical server attempts to PXE boot.
Environments can be supplied to a given server either at the Server or the ServerClass level. The hierarchy from most to least respected is:
- .spec.environmentRef provided at the Server level
- .spec.environmentRef provided at the ServerClass level
- the "default" Environment created automatically and modified by an administrator
kubectl edit environment default
apiVersion: metal.sidero.dev/v1alpha2
kind: Environment
metadata:
  creationTimestamp: "2024-11-23T13:16:12Z"
  generation: 1
  name: default
  resourceVersion: "6527"
  uid: 9e069ed5-886c-4b3c-9875-fe8e7f453dda
spec:
  initrd:
    url: https://github.com/siderolabs/talos/releases/download/v1.8.3/initramfs-amd64.xz
  kernel:
    args:
      - console=tty0
      - console=ttyS0
      - consoleblank=0
      - earlyprintk=ttyS0
      - ima_appraise=fix
      - ima_hash=sha512
      - ima_template=ima-ng
      - init_on_alloc=1
      - initrd=initramfs.xz
      - nvme_core.io_timeout=4294967295
      - printk.devkmsg=on
      - pti=on
      - slab_nomerge=
      - talos.platform=metal
    url: https://github.com/siderolabs/talos/releases/download/v1.8.3/vmlinuz-amd64
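If a particular group of machines needs a different kernel/initrd, an Environment can be referenced explicitly instead of relying on the default. A hypothetical sketch (the Environment name talos-1-8 is made up, and the masters ServerClass is only defined in the next section):
# Hypothetical: point a ServerClass at a non-default Environment
kubectl patch serverclass masters --type merge \
  -p '{"spec":{"environmentRef":{"name":"talos-1-8"}}}'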
Servers and ServerClasses
Servers are the basic resource of bare metal in the Metal Controller Manager. These are created by PXE booting the servers and allowing them to send a registration request to the management plane.
Server classes are a way to group distinct server resources. The qualifiers and selector keys allow the administrator to specify criteria upon which to group these servers.
So here we create two ServerClasses, one for the control plane (masters):
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: masters
spec:
  qualifiers:
    hardware:
      - system:
          manufacturer: QEMU
        memory:
          totalSize: "12 GB"
  configPatches:
    - op: add
      path: /machine/network/interfaces
      value:
        - deviceSelector:
            busPath: "0*"
          dhcp: true
          vip:
            ip: "10.1.1.50"
    - op: add
      path: /machine/network/nameservers
      value:
        - 1.1.1.1
        - 1.0.0.1
    - op: replace
      path: /machine/install
      value:
        disk: none
        diskSelector:
          size: '< 100GB'
    - op: replace
      path: /cluster/network/cni
      value:
        name: none
        # name: "custom"
        # urls:
        #   - "https://raw.githubusercontent.com/kubebn/talos-proxmox-kaas/main/manifests/talos/cilium.yaml"
    - op: replace
      path: /cluster/proxy
      value:
        disabled: true
    - op: replace
      path: /machine/kubelet/extraArgs
      value:
        rotate-server-certificates: true
    - op: replace
      path: /cluster/inlineManifests
      value:
        - name: cilium
          contents: |-
            apiVersion: v1
            kind: Namespace
            metadata:
              name: cilium
              labels:
                pod-security.kubernetes.io/enforce: "privileged"
    - op: replace
      path: /cluster/extraManifests
      value:
        - https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
        - https://raw.githubusercontent.com/alex1989hu/kubelet-serving-cert-approver/main/deploy/standalone-install.yaml
and one for the workers:
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: workers
spec:
  qualifiers:
    hardware:
      - system:
          manufacturer: QEMU
        memory:
          totalSize: "19 GB"
  configPatches:
    - op: add
      path: /machine/network/interfaces
      value:
        - deviceSelector:
            busPath: "0*"
          dhcp: true
    - op: add
      path: /machine/network/nameservers
      value:
        - 1.1.1.1
        - 1.0.0.1
    - op: replace
      path: /machine/install
      value:
        disk: none
        diskSelector:
          size: '< 100GB'
    - op: replace
      path: /cluster/proxy
      value:
        disabled: true
    - op: replace
      path: /machine/kubelet/extraArgs
      value:
        rotate-server-certificates: true
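Apply both ServerClass manifests to the management cluster (the filenames below are simply whatever you saved them as):
# Create the masters and workers ServerClasses
kubectl apply -f serverclass-masters.yaml -f serverclass-workers.yaml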
Let’s spin up the machines (Proxmox VMs acting as our bare-metal servers):
# 3 nodes with 12GB memory
for id in {105..107}; do
qm create $id --name vm$id --memory 12288 --cores 3 --net0 virtio,bridge=vmbr0 --ostype l26 --scsihw virtio-scsi-pci --sata0 lvm1:32 --cpu host && qm start $id
done
# 4 nodes with 19GB memory
for id in {108..111}; do
qm create $id --name vm$id --memory 20288 --cores 3 --net0 virtio,bridge=vmbr0 --ostype l26 --scsihw virtio-scsi-pci --sata0 lvm1:32 --cpu host && qm start $id
done
So in the ServerClasses we use the difference in memory allocation to tell the two groups of nodes apart:
hardware:
  - system:
      manufacturer: QEMU
    memory:
      totalSize: "12 GB" # masters
---
      totalSize: "19 GB" # workers
kubectl get serverclasses
NAME AVAILABLE IN USE AGE
any [] [] 18m
masters [] [] 9s
workers [] [] 9s
kubectl get servers
NAME HOSTNAME ACCEPTED CORDONED ALLOCATED CLEAN POWER AGE
13f56641-ff59-467c-94df-55a2861146d9 (none) true true on 96s
26f39da5-c622-42e0-b160-ff0eb58eb56b (none) true true on 75s
4e56c769-8a35-4a68-b90b-0e1dca530fb0 (none) true true on 107s
5211912d-8f32-4ea4-8738-aaff57386391 (none) true true on 96s
a0b83613-faa9-468a-9289-1aa270117d54 (none) true true on 104s
a6c8afca-15b9-4254-82b4-91bbcd76dba0 (none) true true on 96s
ec39bf0e-632d-4dca-9ae0-0b3509368de6 (none) true true on 108s
f426397a-76ff-4ea6-815a-2be97265f5e6 (none) true true on 107s
We can fetch the full Server resource to see its other details:
kubectl get server 10ea52da-e1fc-4b83-81ef-b9cd40d1d25e -o yaml
apiVersion: metal.sidero.dev/v1alpha2
kind: Server
metadata:
  creationTimestamp: "2024-11-25T04:26:35Z"
  finalizers:
    - storage.finalizers.server.k8s.io
  generation: 1
  name: 10ea52da-e1fc-4b83-81ef-b9cd40d1d25e
  resourceVersion: "562154"
  uid: 4c08c327-8aba-4af0-8204-5985b2a76e95
spec:
  accepted: false
  hardware:
    compute:
      processorCount: 1
      processors:
        - coreCount: 3
          manufacturer: QEMU
          productName: pc-i440fx-9.0
          speed: 2000
          threadCount: 3
      totalCoreCount: 3
      totalThreadCount: 3
    memory:
      moduleCount: 1
      modules:
        - manufacturer: QEMU
          size: 12288
          type: ROM
      totalSize: 12 GB
    network:
      interfaceCount: 3
      interfaces:
        - flags: broadcast|multicast
          index: 2
          mac: 36:d0:4f:23:f7:03
          mtu: 1500
          name: bond0
        - flags: broadcast
          index: 3
          mac: d6:9b:8b:99:6f:d0
          mtu: 1500
          name: dummy0
        - addresses:
            - 10.1.1.6/24
          flags: up|broadcast|multicast
          index: 8
          mac: bc:24:11:a1:77:25
          mtu: 1500
          name: eth0
    storage:
      deviceCount: 1
      devices:
        - deviceName: /dev/sda
          productName: QEMU HARDDISK
          size: 34359738368
          type: HDD
          wwid: t10.ATA QEMU HARDDISK QM00005
      totalSize: 32 GB
    system:
      manufacturer: QEMU
      productName: Standard PC (i440FX + PIIX, 1996)
      uuid: 10ea52da-e1fc-4b83-81ef-b9cd40d1d25e
  hostname: (none)
status:
  addresses:
    - address: 10.1.1.6
      type: InternalIP
  power: "on"
Note in the output above that the newly registered servers are not accepted. In order for a server to be eligible for consideration, it must be marked as accepted. Before a Server is accepted, no write action will be performed against it. This default is for safety (don’t accidentally delete something just because it was plugged in) and security (make sure you know the machine before it is given credentials to communicate).
There are two ways to accept the server:
kubectl patch server 00000000-0000-0000-0000-d05099d33360 --type='json' -p='[{"op": "replace", "path": "/spec/accepted", "value": true}]'
Or you can enable auto-acceptance by passing the --auto-accept-servers=true flag to the sidero-controller-manager:
kubectl edit deploy sidero-controller-manager -n sidero-system
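If you would rather keep manual control but accept everything that has registered so far, a small loop (a sketch) reuses the same patch:
# Accept every currently registered Server resource
for uuid in $(kubectl get servers --no-headers -o custom-columns=NAME:.metadata.name); do
  kubectl patch server "$uuid" --type='json' \
    -p='[{"op": "replace", "path": "/spec/accepted", "value": true}]'
done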
After the servers are accepted, they show up as AVAILABLE in their matching ServerClasses, but they are still not “IN USE”:
kubectl get serverclass -A
NAME AVAILABLE IN USE AGE
any ["13f56641-ff59-467c-94df-55a2861146d9","26f39da5-c622-42e0-b160-ff0eb58eb56b","4e56c769-8a35-4a68-b90b-0e1dca530fb0","5211912d-8f32-4ea4-8738-aaff57386391","a0b83613-faa9-468a-9289-1aa270117d54","a6c8afca-15b9-4254-82b4-91bbcd76dba0","ec39bf0e-632d-4dca-9ae0-0b3509368de6","f426397a-76ff-4ea6-815a-2be97265f5e6"] [] 21m
masters ["4e56c769-8a35-4a68-b90b-0e1dca530fb0","ec39bf0e-632d-4dca-9ae0-0b3509368de6","f426397a-76ff-4ea6-815a-2be97265f5e6"] [] 3m27s
workers ["13f56641-ff59-467c-94df-55a2861146d9","26f39da5-c622-42e0-b160-ff0eb58eb56b","5211912d-8f32-4ea4-8738-aaff57386391","a0b83613-faa9-468a-9289-1aa270117d54","a6c8afca-15b9-4254-82b4-91bbcd76dba0"] [] 3m27s
While developing config patches, it is usually convenient to test the generated config (with patches applied) before an actual server is provisioned with it.
This can be achieved by querying the metadata server endpoint directly:
curl http://$PUBLIC_IP:8081/configdata?uuid=$SERVER_UUID # example "http://10.1.1.18:8081/configdata?uuid=4e56c769-8a35-4a68-b90b-0e1dca530fb0"
version: v1alpha1
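The response is a full Talos machine config with all patches applied (truncated above). One way to sanity-check it before provisioning, as a sketch:
# Fetch the rendered config and validate it for the metal platform
curl -s "http://10.1.1.18:8081/configdata?uuid=4e56c769-8a35-4a68-b90b-0e1dca530fb0" -o /tmp/configdata.yaml
talosctl validate --config /tmp/configdata.yaml --mode metal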
Create a Workload Cluster
We are now ready to generate the configuration manifest templates for our first workload cluster.
There are several configuration parameters that should be set in order for the templating to work properly:
export CONTROL_PLANE_SERVERCLASS=masters
export WORKER_SERVERCLASS=workers
export TALOS_VERSION=v1.8.3
export KUBERNETES_VERSION=v1.31.2
export CONTROL_PLANE_PORT=6443
export CONTROL_PLANE_ENDPOINT=10.1.1.50
clusterctl generate cluster cluster-0 -i sidero > cluster-0.yaml
One of the pain points when building a high-availability controlplane is giving clients a single IP or URL at which they can reach any of the controlplane nodes. The most common approaches – reverse proxy, load balancer, BGP, and DNS – all require external resources, and add complexity in setting up Kubernetes.
To simplify cluster creation, Talos Linux supports a “Virtual” IP (VIP) address to access the Kubernetes API server, providing high availability with no other resources required.
For the cluster endpoint we use the 10.1.1.50 IP address, which will be our shared Virtual IP. We can set it directly in the ServerClass:
configPatches:
  - op: add
    path: /machine/network/interfaces
    value:
      - deviceSelector:
          busPath: "0*" # any network device
        dhcp: true
        vip:
          ip: "10.1.1.50"
The generated cluster-0.yaml manifest:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: cluster-0
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 10.244.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: TalosControlPlane
    name: cluster-0-cp
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: MetalCluster
    name: cluster-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: MetalCluster
metadata:
  name: cluster-0
  namespace: default
spec:
  controlPlaneEndpoint:
    host: 10.1.1.50
    port: 6443
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: MetalMachineTemplate
metadata:
  name: cluster-0-cp
  namespace: default
spec:
  template:
    spec:
      serverClassRef:
        apiVersion: metal.sidero.dev/v1alpha2
        kind: ServerClass
        name: masters
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: TalosControlPlane
metadata:
  name: cluster-0-cp
  namespace: default
spec:
  controlPlaneConfig:
    controlplane:
      generateType: controlplane
      talosVersion: v1.8.3
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: MetalMachineTemplate
    name: cluster-0-cp
  replicas: 1
  version: v1.31.2
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: TalosConfigTemplate
metadata:
  name: cluster-0-workers
  namespace: default
spec:
  template:
    spec:
      generateType: join
      talosVersion: v1.8.3
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: cluster-0-workers
  namespace: default
spec:
  clusterName: cluster-0
  replicas: 0
  selector:
    matchLabels: null
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: TalosConfigTemplate
          name: cluster-0-workers
      clusterName: cluster-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: MetalMachineTemplate
        name: cluster-0-workers
      version: v1.31.2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: MetalMachineTemplate
metadata:
  name: cluster-0-workers
  namespace: default
spec:
  template:
    spec:
      serverClassRef:
        apiVersion: metal.sidero.dev/v1alpha2
        kind: ServerClass
        name: workers
When you are satisfied with your configuration, go ahead and apply it to Sidero:
kubectl apply -f cluster-0.yaml
cluster.cluster.x-k8s.io/cluster-0 created
metalcluster.infrastructure.cluster.x-k8s.io/cluster-0 created
metalmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-0-cp created
taloscontrolplane.controlplane.cluster.x-k8s.io/cluster-0-cp created
talosconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-0-workers created
machinedeployment.cluster.x-k8s.io/cluster-0-workers created
metalmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-0-workers created
At this point, Sidero will allocate Servers according to the requests in the cluster manifest. Once allocated, each of those machines will be installed with Talos, given their configuration, and form a cluster.
You can watch the progress of the Servers being selected:
kubectl get servers,machines,clusters
NAME HOSTNAME ACCEPTED CORDONED ALLOCATED CLEAN POWER AGE
server.metal.sidero.dev/0859ab1b-32d1-4acc-bb91-74c4eafe8017 (none) true true false on 29m
server.metal.sidero.dev/25ba21df-363c-4345-bbb6-b2f21487d103 (none) true true on 29m
server.metal.sidero.dev/6c9aa61c-b9c6-4b69-8b77-8a46b5f217c6 (none) true true on 29m
server.metal.sidero.dev/992f741b-fc3a-48e4-814b-c2f351b320eb (none) true true on 29m
server.metal.sidero.dev/995d057f-4f61-4359-8e78-9a043904fe3a (none) true true on 29m
server.metal.sidero.dev/9f909243-9333-461c-bcd3-dce874d5c36a (none) true true on 29m
server.metal.sidero.dev/d6141070-cb35-47d7-8f12-447ee936382a (none) true true on 29m
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
machine.cluster.x-k8s.io/cluster-0-cp-89hsd cluster-0 talos-rlj-gk7 sidero://0859ab1b-32d1-4acc-bb91-74c4eafe8017 Running 21m v1.31.2
NAME CLUSTERCLASS PHASE AGE VERSION
cluster.cluster.x-k8s.io/cluster-0 Provisioned 21m
During the Provisioning phase, a Server will become allocated, the hardware will be powered up, Talos will be installed onto it, and it will be rebooted into Talos. Depending on the hardware involved, this may take several minutes. Currently, we can see that only 1 server is allocated, because in the cluster-0.yaml manifest we specified only 1 replica for the control plane and 0 replicas for the workers.
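To follow provisioning in more detail, clusterctl can render the condition tree for the cluster; a quick sketch:
# Show the cluster topology and all conditions
clusterctl describe cluster cluster-0 --show-conditions all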
Retrieve the talosconfig & kubeconfig
In order to interact with the new machines (outside of Kubernetes), you will need to obtain the talosctl client configuration, or talosconfig. You can do this by retrieving the secret from the Sidero management cluster:
kubectl get talosconfig $(kubectl get talosconfig --no-headers | awk 'NR==1{print $1}') -o jsonpath='{.status.talosConfig}' > talosconfig
kubectl describe server 0859ab1b-32d1-4acc-bb91-74c4eafe8017 | grep Address
Addresses:
Addresses:
Address: 10.1.1.5
talosctl --talosconfig talosconfig -n 10.1.1.5 -e 10.1.1.5 kubeconfig -f
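Before switching to kubectl, you can confirm the Talos API is reachable with the retrieved talosconfig, for example:
# Query the Talos version of the new control-plane node
talosctl --talosconfig talosconfig -n 10.1.1.5 -e 10.1.1.5 version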
Check access and scale the cluster
k get node
NAME STATUS ROLES AGE VERSION
talos-rlj-gk7 NotReady control-plane 20m v1.31.2
k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-68d75fd545-6rxp7 0/1 Pending 0 20m
kube-system coredns-68d75fd545-h9xrx 0/1 Pending 0 20m
kube-system kube-apiserver-talos-rlj-gk7 1/1 Running 0 19m
kube-system kube-controller-manager-talos-rlj-gk7 1/1 Running 2 (20m ago) 18m
kube-system kube-scheduler-talos-rlj-gk7 1/1 Running 2 (20m ago) 18m
kube-system metrics-server-54bf7cdd6-tlhg5 0/1 Pending 0 20m
kube-system talos-cloud-controller-manager-df65c8444-47w49 0/1 Pending 0 20m
The node is in NotReady status because no CNI is installed yet. Let’s install Cilium, using the following Helm values (cilium.yaml):
ipam:
  mode: kubernetes
k8sServiceHost: localhost
k8sServicePort: 7445
kubeProxyReplacement: true
installNoConntrackIptablesRules: true
enableK8sEndpointSlice: true
localRedirectPolicy: true
healthChecking: true
routingMode: native
autoDirectNodeRoutes: true
ipv4NativeRoutingCIDR: 10.244.0.0/16
loadBalancer:
  mode: hybrid
  algorithm: maglev
  acceleration: best-effort
  serviceTopology: true
bpf:
  masquerade: true
ipv4:
  enabled: true
hostServices:
  enabled: true
hostPort:
  enabled: true
nodePort:
  enabled: true
externalIPs:
  enabled: true
hostFirewall:
  enabled: true
ingressController:
  enabled: false
envoy:
  enabled: false
prometheus:
  enabled: false
hubble:
  enabled: false
operator:
  enabled: true
  rollOutPods: true
  replicas: 1
  prometheus:
    enabled: false
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  tolerations:
    - operator: Exists
      effect: NoSchedule
cgroup:
  autoMount:
    enabled: false
  hostRoot: /sys/fs/cgroup
resources:
  limits:
    cpu: 2
    memory: 4Gi
  requests:
    cpu: 100m
    memory: 128Mi
securityContext:
  capabilities:
    ciliumAgent:
      - CHOWN
      - KILL
      - NET_ADMIN
      - NET_RAW
      - IPC_LOCK
      - SYS_ADMIN
      - SYS_RESOURCE
      - DAC_OVERRIDE
      - FOWNER
      - SETGID
      - SETUID
    cleanCiliumState:
      - NET_ADMIN
      - SYS_ADMIN
      - SYS_RESOURCE
# Install Cilium with the values above
cilium install --version 1.16.4 -f cilium.yaml -n cilium
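Once the install finishes, the agent status can be checked with the cilium CLI (a sketch; the release was installed into the cilium namespace above):
# Wait for Cilium to report a healthy status
cilium status -n cilium --wait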
Since we have more machines available, we can scale both the control plane (TalosControlPlane) and the workers (MachineDeployment) of any cluster after it has been deployed. This works just like scaling normal Kubernetes Deployments.
kubectl get TalosControlPlane
NAME READY INITIALIZED REPLICAS READY REPLICAS UNAVAILABLE REPLICAS
cluster-0-cp true true 1 1
kubectl get MachineDeployment
NAME CLUSTER REPLICAS READY UPDATED UNAVAILABLE PHASE AGE VERSION
cluster-0-workers cluster-0 Running 80m v1.31.2
kubectl scale taloscontrolplane cluster-0-cp --replicas=3
taloscontrolplane.controlplane.cluster.x-k8s.io/cluster-0-cp scaled
kubectl scale MachineDeployment cluster-0-workers --replicas=4
machinedeployment.cluster.x-k8s.io/cluster-0-workers scaled
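Scaling takes a few minutes while additional Servers are allocated, installed with Talos, and joined; you can follow along with something like:
# Watch allocation and machine phases as the cluster scales out
watch kubectl get servers,machines,clusters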
Now we can see that all of our servers are “IN USE” in the ServerClasses:
kubectl get serverclass
NAME AVAILABLE IN USE AGE
any [] ["0859ab1b-32d1-4acc-bb91-74c4eafe8017","25ba21df-363c-4345-bbb6-b2f21487d103","6c9aa61c-b9c6-4b69-8b77-8a46b5f217c6","992f741b-fc3a-48e4-814b-c2f351b320eb","995d057f-4f61-4359-8e78-9a043904fe3a","9f909243-9333-461c-bcd3-dce874d5c36a","d6141070-cb35-47d7-8f12-447ee936382a"] 101m
masters [] ["0859ab1b-32d1-4acc-bb91-74c4eafe8017","6c9aa61c-b9c6-4b69-8b77-8a46b5f217c6","995d057f-4f61-4359-8e78-9a043904fe3a"] 98m
workers [] ["25ba21df-363c-4345-bbb6-b2f21487d103","992f741b-fc3a-48e4-814b-c2f351b320eb","9f909243-9333-461c-bcd3-dce874d5c36a","d6141070-cb35-47d7-8f12-447ee936382a"] 98m
k get servers,machines,clusters
NAME HOSTNAME ACCEPTED CORDONED ALLOCATED CLEAN POWER AGE
server.metal.sidero.dev/0859ab1b-32d1-4acc-bb91-74c4eafe8017 (none) true true false on 99m
server.metal.sidero.dev/25ba21df-363c-4345-bbb6-b2f21487d103 (none) true true false on 99m
server.metal.sidero.dev/6c9aa61c-b9c6-4b69-8b77-8a46b5f217c6 (none) true true false on 99m
server.metal.sidero.dev/992f741b-fc3a-48e4-814b-c2f351b320eb (none) true true false on 99m
server.metal.sidero.dev/995d057f-4f61-4359-8e78-9a043904fe3a (none) true true false on 99m
server.metal.sidero.dev/9f909243-9333-461c-bcd3-dce874d5c36a (none) true true false on 99m
server.metal.sidero.dev/d6141070-cb35-47d7-8f12-447ee936382a (none) true true false on 99m
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
machine.cluster.x-k8s.io/cluster-0-cp-22d6n cluster-0 talos-m0p-gke sidero://995d057f-4f61-4359-8e78-9a043904fe3a Running 10m v1.31.2
machine.cluster.x-k8s.io/cluster-0-cp-544tn cluster-0 talos-5le-4ou sidero://6c9aa61c-b9c6-4b69-8b77-8a46b5f217c6 Running 10m v1.31.2
machine.cluster.x-k8s.io/cluster-0-cp-89hsd cluster-0 talos-rlj-gk7 sidero://0859ab1b-32d1-4acc-bb91-74c4eafe8017 Running 91m v1.31.2
machine.cluster.x-k8s.io/cluster-0-workers-lgphx-8brp6 cluster-0 talos-rvd-w8m sidero://9f909243-9333-461c-bcd3-dce874d5c36a Running 9m52s v1.31.2
machine.cluster.x-k8s.io/cluster-0-workers-lgphx-hklgh cluster-0 talos-aps-1nj sidero://d6141070-cb35-47d7-8f12-447ee936382a Running 9m52s v1.31.2
machine.cluster.x-k8s.io/cluster-0-workers-lgphx-krwxw cluster-0 talos-72m-mif sidero://25ba21df-363c-4345-bbb6-b2f21487d103 Running 9m53s v1.31.2
machine.cluster.x-k8s.io/cluster-0-workers-lgphx-xs6fm cluster-0 talos-9ww-ia0 sidero://992f741b-fc3a-48e4-814b-c2f351b320eb Running 9m52s v1.31.2
NAME CLUSTERCLASS PHASE AGE VERSION
cluster.cluster.x-k8s.io/cluster-0 Provisioned 91m
Check the workload cluster:
k get node
NAME STATUS ROLES AGE VERSION
talos-5le-4ou Ready control-plane 9m31s v1.31.2
talos-72m-mif Ready 9m11s v1.31.2
talos-9ww-ia0 Ready 9m37s v1.31.2
talos-aps-1nj Ready 9m6s v1.31.2
talos-m0p-gke Ready control-plane 9m29s v1.31.2
talos-rlj-gk7 Ready control-plane 89m v1.31.2
talos-rvd-w8m Ready 9m42s v1.31.2
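As a final check, the kubeconfig retrieved earlier should point at the shared VIP, so the API stays reachable even if a single control-plane node goes down (assuming the generated kubeconfig uses the cluster endpoint configured above):
# Print the API server endpoint from the current kubeconfig context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# expected: https://10.1.1.50:6443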