Describe the bug
I deployed authentik on K8s using ArgoCD, CloudNativePG 17 with a read replica, and MinIO. The logs show "failed to proxy to backend", and the pod on the amd64 node is stuck in Progressing with the message: Readiness probe failed: Get "http://10.15.0.123:9000/-/health/ready/": context deadline exceeded (Client.Timeout exceeded while awaiting headers).
To Reproduce
Steps to reproduce the behavior:
1. Deploy ArgoCD, CloudNativePG with a read replica, and MinIO
2. Deploy authentik via Kustomize and the Helm chart, with 3 server replicas
I think this problem is related to #5059, but in my case it was caused by slow storage on the Redis instance, which was using the Longhorn storage class. After switching it to local-path, everything seems to be fine.
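The storage-class switch can be expressed as a Helm values override. This is a sketch assuming the authentik chart pulls in the Bitnami Redis subchart under the `redis` key; the exact value paths may differ between chart versions, so verify them against the chart you deploy:

```yaml
# values.yaml override for the authentik Helm chart
# (assumed value paths; check your chart version's values schema)
redis:
  enabled: true
  master:
    persistence:
      # move the Redis PVC off Longhorn onto the faster local-path provisioner
      storageClass: local-path
```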
However, even after changing the Redis storage class, increasing the server replicas to more than 1 (3 in my case) still triggered the same error.
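As a diagnostic workaround while the root cause is investigated, the readiness probe deadline can be relaxed on the server Deployment so slow startups are not marked unready immediately. This sketch reuses the path and port from the error message above; the timeout values are illustrative assumptions, not chart defaults:

```yaml
# readinessProbe snippet for the authentik server container spec
readinessProbe:
  httpGet:
    path: /-/health/ready/   # endpoint from the failing probe
    port: 9000
  timeoutSeconds: 5          # default is 1s; allow more headroom for slow backends
  periodSeconds: 10
  failureThreshold: 6        # tolerate a longer warm-up before failing the probe
```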
Expected behavior
The deployment runs smoothly.
Logs
https://hastebin.com/share/igupoxoful.json
https://hastebin.com/share/ilecucoquz.python
Version and Deployment (please complete the following information):