
kind create cluster failed on CentOS Stream 9 #3491

Closed
ZimmerHao opened this issue Jan 25, 2024 · 4 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@ZimmerHao

What happened:

kind create cluster failed

What you expected to happen:

kind cluster running as expected

How to reproduce it (as minimally and precisely as possible):

[root@iZm5e94vais6hr50l278wpZ kubevirtapp]# kind create cluster --config=kube-config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.27.3) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✗ Starting control-plane 🕹️
Deleted nodes: ["kind-worker" "kind-control-plane"]
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0125 03:07:56.395902     139 initconfiguration.go:255] loading configuration from "/kind/kubeadm.conf"
W0125 03:07:56.396642     139 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
W0125 03:07:56.398952     139 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: ::
[init] Using Kubernetes version: v1.27.3
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0125 03:07:56.403534     139 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0125 03:07:56.482218     139 certs.go:519] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [fd00:10:96::1 fc00:f853:ccd:e793::3 ::1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0125 03:07:56.620190     139 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0125 03:07:56.690008     139 certs.go:519] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0125 03:07:56.812243     139 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0125 03:07:56.970599     139 certs.go:519] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [fc00:f853:ccd:e793::3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [fc00:f853:ccd:e793::3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0125 03:07:57.804602     139 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0125 03:07:57.979317     139 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0125 03:07:58.107327     139 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0125 03:07:58.202025     139 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0125 03:07:58.500057     139 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0125 03:07:58.578719     139 kubelet.go:67] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0125 03:07:58.663616     139 manifests.go:99] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0125 03:07:58.663914     139 certs.go:519] validating certificate period for CA certificate
I0125 03:07:58.664015     139 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0125 03:07:58.664027     139 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0125 03:07:58.664035     139 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0125 03:07:58.664041     139 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0125 03:07:58.664049     139 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0125 03:07:58.667006     139 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0125 03:07:58.667025     139 manifests.go:99] [control-plane] getting StaticPodSpecs
I0125 03:07:58.667286     139 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0125 03:07:58.667299     139 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0125 03:07:58.667305     139 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0125 03:07:58.667312     139 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0125 03:07:58.667326     139 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0125 03:07:58.667335     139 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0125 03:07:58.667344     139 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0125 03:07:58.668478     139 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0125 03:07:58.668491     139 manifests.go:99] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0125 03:07:58.668710     139 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0125 03:07:58.669915     139 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0125 03:07:58.671198     139 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0125 03:07:58.671209     139 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I0125 03:07:58.671768     139 loader.go:373] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0125 03:07:58.673190     139 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0125 03:11:58.675033     139 round_trippers.go:553] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
......
Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1598
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:111
github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:916
github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:1040
github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:968
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1598

Before the container crashed, I entered the container using docker exec:

[root@iZm5e94vais6hr50l278wpZ ~]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS         PORTS                 NAMES
3d1f6712ffef   kindest/node:v1.27.3   "/usr/local/bin/entr…"   11 seconds ago   Up 2 seconds                         kind-worker
3ff5dd73ef80   kindest/node:v1.27.3   "/usr/local/bin/entr…"   11 seconds ago   Up 2 seconds   ::1:44779->6443/tcp   kind-control-plane
[root@iZm5e94vais6hr50l278wpZ ~]# docker exec -it 3ff5dd73ef80 /bin/bash
root@kind-control-plane:/#
root@kind-control-plane:/#
root@kind-control-plane:/#
root@kind-control-plane:/#
root@kind-control-plane:/# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf, 11-kind.conf
     Active: active (running) since Thu 2024-01-25 03:07:58 UTC; 36s ago
       Docs: http://kubernetes.io/docs/
    Process: 169 ExecStartPre=/bin/sh -euc if [ -f /sys/fs/cgroup/cgroup.controllers ]; then /kind/bin/create-kubelet-cgroup-v2.sh; fi (code=exited, status=0/SUCCESS)
    Process: 177 ExecStartPre=/bin/sh -euc if [ ! -f /sys/fs/cgroup/cgroup.controllers ] && [ ! -d /sys/fs/cgroup/systemd/kubelet ]; then mkdir -p /sys/fs/cgroup/systemd/kubelet; fi (code=exited, status=0/SUCCESS)
   Main PID: 178 (kubelet)
      Tasks: 11 (limit: 7082)
     Memory: 29.9M
        CPU: 766ms
     CGroup: /kubelet.slice/kubelet.service
             └─178 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=fc00:f853:ccd:e793::3 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.9 --provider-id=kind://docker/kind/kind-control-plane --runtime-cgroups=/system.slice/containerd.service

Jan 25 03:08:28 kind-control-plane kubelet[178]: E0125 03:08:28.993618     178 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kind-control-plane\" not found"
Jan 25 03:08:30 kind-control-plane kubelet[178]: E0125 03:08:30.330396     178 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://kind-control-plane:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.18.0.3:6443: connect: connection refused
Jan 25 03:08:30 kind-control-plane kubelet[178]: E0125 03:08:30.330420     178 certificate_manager.go:440] kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition
Jan 25 03:08:30 kind-control-plane kubelet[178]: E0125 03:08:30.700482     178 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-control-plane.17ad7846b68af5f6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-control-plane", UID:"kind-control-plane", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"kind-control-plane"}, FirstTimestamp:time.Date(2024, time.January, 25, 3, 7, 58, 948890102, time.Local), LastTimestamp:time.Date(2024, time.January, 25, 3, 7, 58, 948890102, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://kind-control-plane:6443/api/v1/namespaces/default/events": dial tcp 172.18.0.3:6443: connect: connection refused'(may retry after sleeping)
Jan 25 03:08:31 kind-control-plane kubelet[178]: W0125 03:08:31.142052     178 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-pod2407c8c8998179530aad0cb9d0772c0e.slice/cri-containerd-c4200b5729a1d1b73e388e297cb6a4beba47211254bbaf2c924467e965a5ea1e.scope WatchSource:0}: container "c4200b5729a1d1b73e388e297cb6a4beba47211254bbaf2c924467e965a5ea1e" in namespace "k8s.io": not found
Jan 25 03:08:32 kind-control-plane kubelet[178]: E0125 03:08:32.564360     178 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://kind-control-plane:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kind-control-plane?timeout=10s\": dial tcp 172.18.0.3:6443: connect: connection refused" interval="7s"
Jan 25 03:08:32 kind-control-plane kubelet[178]: I0125 03:08:32.669857     178 kubelet_node_status.go:70] "Attempting to register node" node="kind-control-plane"
Jan 25 03:08:32 kind-control-plane kubelet[178]: E0125 03:08:32.670318     178 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://kind-control-plane:6443/api/v1/nodes\": dial tcp 172.18.0.3:6443: connect: connection refused" node="kind-control-plane"
Jan 25 03:08:33 kind-control-plane kubelet[178]: W0125 03:08:33.844355     178 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://kind-control-plane:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
Jan 25 03:08:33 kind-control-plane kubelet[178]: E0125 03:08:33.844414     178 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://kind-control-plane:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.18.0.3:6443: connect: connection refused
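For cases like this, kind can also keep the failed nodes around and export their logs for offline inspection; a minimal sketch of that flow (the output directory name is arbitrary):

# --retain stops kind from deleting the nodes when bootstrap fails
kind create cluster --config=kube-config.yaml --retain
# dump kubelet, containerd, and pod logs from every node
kind export logs ./kind-logs
# remove the retained nodes afterwards
kind delete cluster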

Anything else we need to know?:

  • the config file:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: ipv6
nodes:
  - role: control-plane
  - role: worker
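
Note: ipFamily: ipv6 requires IPv6 to be enabled on the Docker host. A minimal sanity check, using standard tools (expected results noted in comments):

# should report 0, i.e. IPv6 is not disabled
sysctl net.ipv6.conf.all.disable_ipv6
# the kind Docker network should have IPv6 enabled, once kind has created it
docker network inspect kind --format '{{.EnableIPv6}}'
# at least one global IPv6 address is needed for traffic beyond the host
ip -6 addr show scope global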

Environment:

  • kind version: (use kind version): kind v0.20.0 go1.20.4 linux/amd64
  • Runtime info: (use docker info or podman info):
Client: Docker Engine - Community
 Version:    25.0.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.12.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.24.2
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 25.0.1
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: a1496014c916f9e62104b33d1bb5bd03b0858e59
 runc version: v1.1.11-0-g4bccb38
 init version: de40ad0
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 5.14.0-391.el9.x86_64
 Operating System: CentOS Stream 9
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.242GiB
 Name: iZm5e94vais6hr50l278wpZ
 ID: 25433af0-4836-4aca-8bbe-c657f3138792
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
  • OS (e.g. from /etc/os-release):
NAME="CentOS Stream"
VERSION="9"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="9"
PLATFORM_ID="platform:el9"
PRETTY_NAME="CentOS Stream 9"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:centos:centos:9"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
[root@iZm5e94vais6hr50l278wpZ]# uname -a
Linux iZm5e94vais6hr50l278wpZ 5.14.0-391.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Nov 28 20:35:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
  • Kubernetes version: (use kubectl version):
Client Version: v1.29.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  • Any proxies or other special environment settings?:
ZimmerHao added the kind/bug label on Jan 25, 2024
@stmcginnis
Contributor

Server:
 ...
 Server Version: 25.0.1

There are various issues with Docker 25 at the moment. I would recommend downgrading for now until they get sorted out. There is a pinned issue for kind load docker-image (#3488), but I believe there are other outstanding issues with this release as well.
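
A rough sketch of the downgrade on CentOS Stream 9, assuming docker-ce was installed from Docker's official dnf repository (take the exact version strings from the list output; <version> below is a placeholder):

# show which older builds the repo still carries
dnf list docker-ce --showduplicates | sort -r
# substitute <version> with a 24.x build from the list above
dnf downgrade -y docker-ce-<version> docker-ce-cli-<version>
systemctl restart docker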

@ZimmerHao
Author

After updating the kernel to 5.14.0-410.el9.x86_64, it worked. I don't actually know what happened.
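
For anyone hitting the same thing, the kernel update path on CentOS Stream 9 is roughly (a sketch; a reboot is required to load the new kernel):

dnf update -y kernel
reboot
# after the reboot, confirm the running kernel
uname -r   # should now report 5.14.0-410.el9.x86_64 or newer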

@aojea
Contributor

aojea commented Jan 26, 2024

After updating the kernel to 5.14.0-410.el9.x86_64, it worked. I don't actually know what happened.

Some kernel bug or incompatibility?

/close

@k8s-ci-robot
Contributor

@aojea: Closing this issue.

In response to this:

After updating the kernel to 5.14.0-410.el9.x86_64, it worked. I don't actually know what happened.

Some kernel bug or incompatibility?

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
