Attached is a GIF of system calls showing a poll where an event loop is intended. This polling causes the operator to spike, chewing up CPU as it spins while waiting for an event.

operators.Watch appears to be the source of the CPU spin:

https://github.com/awslabs/aws-service-operator/blob/master/pkg/server/server.go#L61

go operators.Watch(ctx, k8sNamespaceToWatch)

Nested in this code path is a channel interaction that causes channels to be replaced, producing a loop, which I believe is the root cause of the spinning while waiting for events.
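For illustration only, here is a minimal Go sketch (not the operator's actual code) of how a select loop over a channel that has been closed, and then replaced or left closed rather than properly re-established, degenerates into exactly this kind of busy spin:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// watch is a hypothetical sketch of a watch loop. Receives on a closed
// channel return immediately, so once the event source goes away the
// for/select loop never blocks and burns a full core while "waiting".
func watch(ctx context.Context) {
	events := make(chan string)
	close(events) // simulate the upstream source going away / being replaced

	for {
		select {
		case <-ctx.Done():
			return
		case ev, ok := <-events:
			if !ok {
				// Bug: instead of blocking until a new source is ready,
				// the loop continues, so this case fires again
				// immediately -- the spin.
				continue
			}
			fmt.Println("event:", ev)
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	watch(ctx) // spins at ~100% CPU until the context expires
}
```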
To give you an idea of the severity of the CPU spin: aws-service-operator chews up 300% (3 cores) on my laptop and is the biggest consumer of system resources on our kube cluster, even though most of the time it is only waiting for events, which is what exposes the spin.