kubectl using 1200% CPU on macOS 14.4.1 #1668
This issue is currently awaiting triage. SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/kind support
Thanks @ardaguclu. I've added the flag and will be monitoring CPU usage. If anything happens I'll let you know :)
The sha512 hash (for the gz) is published in the changelog. Something like this should work:
Example (you will want to use darwin instead of linux-amd64):
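A minimal sketch of that check, assuming v1.30.4 (the URL pattern follows the CHANGELOG download links; substitute your version and platform):

```bash
# Download the client tarball for the release and compute its sha512.
curl -LO "https://dl.k8s.io/v1.30.4/kubernetes-client-linux-amd64.tar.gz"
shasum -a 512 kubernetes-client-linux-amd64.tar.gz
# Compare the output against the sha512 published in the CHANGELOG.
```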
Then get your local kubectl's hash and compare it...
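For instance (a sketch; assumes the standard tarball layout and that the `kubectl` on your PATH is the one in question):

```bash
# Hash the kubectl bundled in the tarball and your local binary; they should match.
tar -xzf kubernetes-client-linux-amd64.tar.gz
shasum -a 256 kubernetes/client/bin/kubectl
shasum -a 256 "$(which kubectl)"
```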
The interesting thing about this is that kubectl is not running for 5 days; it is being invoked by watch every 2 seconds for 5 days. In addition to using that flag, if it happens again, try doing the following in another terminal while the problem is occurring, to collect information that might help diagnose the problem:
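For example, a sketch of typical macOS diagnostics for a spinning process (the output file name is arbitrary, and `pgrep -n` simply grabs the newest matching PID):

```bash
# Locate the runaway kubectl process.
pgrep -l kubectl

# Sample its call stacks for 10 seconds and write a report you can attach here.
sample "$(pgrep -n kubectl)" 10 -file kubectl-sample.txt

# Optionally capture a system-wide spindump covering the same window.
sudo spindump "$(pgrep -n kubectl)" 10
```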
Fantastic @brianpursley, thanks for the additional tips! I will be checking the checksum tomorrow :)
I was thinking the same thing - after 5 days, maybe there's some kind of low-level error that leads to more CPU consumption, or some data that accumulates. But one execution every 2 seconds shouldn't be an issue. I'll follow up shortly!
Hi @brianpursley!
Here are the steps taken:
1. Get the local kubectl binary's hash, as you suggested.
2. Navigate to https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.30.md#downloads-for-v1304 and compare against the published hash.
As you can see, the checksums mismatch even though the client version matches. Thank you for your help!
Note that I also opened a support case with Google Cloud internally to assist in confirming the integrity of the kubectl binary packaged as part of google-cloud-sdk. Any information on your end will still be helpful, and I'll pass along whatever I get from Google Cloud's support team. Last note: no spike in CPU usage has been noticed since the one I reported.
@philippefutureboy Do the Google Cloud SDK maintainers build their own kubectl binary with gcloud-specific changes? If so, and if their binary differs from the upstream release, that would explain the checksum mismatch.
@brianpursley that's also what I'm trying to figure out with my support rep. I'll keep you in the loop with any new info.
@brianpursley Following up on the Google Cloud SDK kubectl binary - here's the support team's response:
I'm not sure how to approach this, as I can't realistically reproduce the environment in which they compiled their version of the kubectl binary. I'll inquire if there's anything I can do to counter-verify the signature of their kubectl binary.
Here's the follow-up answer from the support team:
So from what I understand, it is not possible to verify the integrity of the kubectl binary when it is packaged by the Google Cloud team as part of the gcloud utilities. I've asked a follow-up question to see if it is possible to check the gcloud SDK's bundled kubectl binary against a publicly published list of checksums.
What happened:
I always keep a terminal open with `watch "kubectl get pods"` while I work, so that I can see the status of my remote cluster at a glance. I noticed today while working that my computer was sluggish. Looking in Activity Monitor, kubectl was running at 1200% CPU usage (12 full CPU cores) with low memory usage. At that time, `watch "kubectl get pods"` had been running for 5d 14h, polling state every 2s while my laptop was not in sleep mode. I killed the `watch "kubectl get pods"` command and the process exited successfully, releasing the CPU load.
What you expected to happen:
kubectl should not eat 12 full CPU cores when it's polling only once every 2 seconds.
How to reproduce it (as minimally and precisely as possible):
No idea really! Anything I can do to help diagnose this?
The only reason I'm posting here is that high CPU usage like this can be indicative of an exploited security vulnerability, which is why I'm proactively opening this issue.
I think my kubectl is packaged directly with gcloud. I'm not sure; how do I check?
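One way to check (a sketch; assumes the SDK's standard install layout):

```bash
# If the kubectl on your PATH resolves into the google-cloud-sdk tree,
# it was installed as a gcloud component.
ls -l "$(which kubectl)"

# List installed gcloud components and look for kubectl.
gcloud components list | grep -i kubectl
```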
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):
  Client Version: v1.30.4
  Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  Server Version: v1.30.4-gke.1348000
- OS (e.g. `cat /etc/os-release`): macOS 14.4.1 (Sonoma)