What happened:
When I use kconnect to connect to two different EKS clusters in different regions with the same user and provider id, only the most recently used cluster works. Commands to the other cluster fail with `Unauthorized`.
What did you expect to happen:
Commands to both clusters should work as long as `kconnect ls` still shows TIME LEFT for each of them.
How to reproduce it:
1. Have access to two EKS clusters in the same account, with the same user and role, but in different regions.
2. Run `kconnect use eks` and log in to cluster1.
3. Run `kconnect use eks` and log in to cluster2.
4. Run `kubectl config get-contexts` and note the context names for cluster1 and cluster2.
5. Run `kubectl --context <context-of-cluster1> version -o json` and observe the `Unauthorized` failure.
6. Run `kubectl --context <context-of-cluster2> version -o json` and observe success.
7. Run `kconnect ls` and confirm that there is still TIME LEFT on both cluster1 and cluster2.
Anything else you would like to add:
The kubeconfig user name (`kubectl config view --minify | yq '.users[].name'`) is the same for both clusters, even though the exec args of that user entry contain values specific to only one of the clusters.
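A minimal sketch of the overwrite this observation suggests, assuming kconnect derives the kubeconfig user name from the user/provider id alone rather than per cluster and region (the user name, regions, and args below are hypothetical, not kconnect's actual values):

```python
# Simulate a kubeconfig merge: a user entry written under an existing
# name replaces the previous entry rather than coexisting with it.
def write_user(kubeconfig, name, exec_args):
    """Write (or overwrite) a kubeconfig user entry keyed by name."""
    kubeconfig["users"][name] = {"exec": {"args": exec_args}}

kubeconfig = {"users": {}}

# Logging in to cluster1 writes a user entry with region-specific args...
write_user(kubeconfig, "my-sso-user",
           ["--region", "us-east-1", "--cluster-id", "cluster1"])

# ...and logging in to cluster2 reuses the same user name, clobbering
# cluster1's args. Both contexts now point at a single user whose
# credentials are minted for cluster2 only.
write_user(kubeconfig, "my-sso-user",
           ["--region", "eu-west-1", "--cluster-id", "cluster2"])

print(len(kubeconfig["users"]))                          # → 1
print(kubeconfig["users"]["my-sso-user"]["exec"]["args"])
# → ['--region', 'eu-west-1', '--cluster-id', 'cluster2']
```

If this is what happens, requests through cluster1's context would present cluster2's token and come back `Unauthorized`, which matches the behavior above; naming user entries per cluster/region would avoid the collision.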
Environment:
kconnect version (use `kconnect version`): 0.5.11
Kubernetes version (use `kubectl version`):
OS (e.g. from /etc/os-release): macOS Ventura 13.04
Target environment (e.g. EKS, AKS, Rancher): EKS
Authentication Used (e.g. SAML, IAM, Azure AD): SAML