
No token file in the metrics adapter pod #94

Open · 1 task done
AlexeyRaga opened this issue May 10, 2020 · 0 comments

I have a K8S cluster in Azure that was created with Terraform.
I am trying to deploy the metrics adapter according to the instructions. It gets deployed, but its pod in the custom-metrics namespace fails with the following log line:

unable to construct client config: unable to construct lister client config to initialize provider: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
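
To double-check whether the token volume is mounted into the pod at all, one can list the mount directory directly. This is only a diagnostic sketch: it assumes the adapter pod carries the same app=azure-k8s-metrics-adapter label as the service account shown below, and that the container stays up long enough to exec into:

$ POD=$(kubectl get pods -n custom-metrics -l app=azure-k8s-metrics-adapter \
    -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -n custom-metrics "$POD" -- ls /var/run/secrets/kubernetes.io/serviceaccount/

On a healthy pod this directory contains ca.crt, namespace, and token, so the open() failure above suggests the volume is simply not there.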

My service accounts look like this:

$ kubectl describe serviceaccounts
Name:                azure-k8s-metrics-adapter
Namespace:           custom-metrics
Labels:              app=azure-k8s-metrics-adapter
                     chart=azure-k8s-metrics-adapter-0.1.0
                     heritage=Tiller
                     release=azure-k8s-metrics-adapter
Annotations:         kubectl.kubernetes.io/last-applied-configuration:
                       {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"labels":{"app":"azure-k8s-metrics-adapter","chart":"azure-k8s-met...
Image pull secrets:  <none>
Mountable secrets:   azure-k8s-metrics-adapter-token-hcdq4
Tokens:              azure-k8s-metrics-adapter-token-hcdq4
Events:              <none>


Name:                default
Namespace:           custom-metrics
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   default-token-2pwlk
Tokens:              default-token-2pwlk
Events:              <none>

When I look at the Secrets in the namespace, I see that azure-k8s-metrics-adapter (Opaque), azure-k8s-metrics-adapter-token-hcdq4 (kubernetes.io/service-account-token), and default-token-2pwlk are all present.
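
Since the token secret clearly exists, the file can only be missing if it is never mounted into the pod. One thing worth checking (a diagnostic sketch; I'm assuming the deployment is named azure-k8s-metrics-adapter, matching the release) is whether token automounting is disabled on the service account or on the pod spec:

$ kubectl get serviceaccount azure-k8s-metrics-adapter -n custom-metrics \
    -o jsonpath='{.automountServiceAccountToken}'
$ kubectl get deployment azure-k8s-metrics-adapter -n custom-metrics \
    -o jsonpath='{.spec.template.spec.automountServiceAccountToken}'

Empty output means the Kubernetes default (the token is mounted); an explicit false on either object suppresses the token volume, which would explain the error.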

The cluster was created with the following Terraform resource:

resource "azurerm_kubernetes_cluster" "main" {
  name                = "${var.prefix}-cluster"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  dns_prefix          = var.prefix

  default_node_pool {
    name            = "default"
    node_count      = 1
    vm_size         = "Standard_DS2_v2"
    os_disk_size_gb = 30
    vnet_subnet_id  = azurerm_subnet.aks.id
  }

  network_profile {
    network_plugin = "azure"
  }

  addon_profile {
    aci_connector_linux {
      enabled     = true
      subnet_name = azurerm_subnet.aci.name
    }
  }

  role_based_access_control {
    enabled = true
  }

  service_principal {
    client_id     = azuread_application.main.application_id
    client_secret = azuread_service_principal_password.main.value
  }
}
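
As far as I can tell, nothing in this configuration disables service account tokens. For completeness, the mounts the container actually gets can be listed like this (reusing the $POD variable from the sketch above):

$ kubectl get pod -n custom-metrics "$POD" \
    -o jsonpath='{.spec.containers[0].volumeMounts[*].mountPath}'

If /var/run/secrets/kubernetes.io/serviceaccount is not in the output, the token volume was never attached to the container.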

Did I miss something in the configuration? How do I make this work?

Kubernetes version: 1.15.10

  • Running on AKS