
enable_private_nodes not creating node pools with nodes without external IPs #2258

Open
yemaney opened this issue Jan 28, 2025 · 4 comments

@yemaney
yemaney commented Jan 28, 2025

I'm trying to use the Google Terraform modules to create a Kubernetes cluster.
There should be two node pools, one of which should have nodes with no external IPs.
I've tried to do this by setting enable_private_nodes = true,
but all nodes still get external IPs. What is the issue?

module "vpc" {
  source  = "terraform-google-modules/network/google"
  version = "~> 10.0"

  project_id   = "wired-height-365016"
  network_name = "example-vpc4"
  routing_mode = "GLOBAL"

  subnets = [
    {
      subnet_name           = "subnet-01"
      subnet_ip             = "10.10.0.0/16" 
      subnet_region         = "us-west1"
    }
  ]

  secondary_ranges = {
    subnet-01 = [
      {
        range_name    = "subnet-01-secondary-01"
        ip_cidr_range = "192.168.0.0/16" # Large range for pods
      },
      {
        range_name    = "subnet-01-secondary-02"
        ip_cidr_range = "192.169.0.0/16" # Large range for services
      }
    ]
  }

  routes = [
    {
      name                   = "egress-internet1"
      description            = "route through IGW to access internet"
      destination_range      = "0.0.0.0/0"
      tags                   = "egress-inet"
      next_hop_internet      = true
    }
  ]
}

module "gke" {
  source                     = "terraform-google-modules/kubernetes-engine/google"
  project_id                 = "wired-height-365016"
  name                       = "gke-test-4"
  region                     = "us-west1"
  zones                      = ["us-west1-a"]

  network                    = module.vpc.network_name
  subnetwork                 = "subnet-01"
  ip_range_pods              = "subnet-01-secondary-01"
  ip_range_services          = "subnet-01-secondary-02"



  http_load_balancing        = false
  network_policy             = false
  horizontal_pod_autoscaling = true
  filestore_csi_driver       = false
  dns_cache                  = false
  deletion_protection        = false

  node_pools = [
    {
      name                        = "public-node-pool"
      machine_type                = "e2-medium"
      autoscaling                 = true
      node_locations              = "us-west1-a"
      min_count                   = 1
      max_count                   = 10
      local_ssd_count             = 0
      spot                        = false
      disk_size_gb                = 100
      disk_type                   = "pd-standard"
      image_type                  = "COS_CONTAINERD"
      auto_repair                 = true
      auto_upgrade                = true
      enable_private_nodes        = false
    },
    {
      name                        = "private-node-pool"
      machine_type                = "e2-medium"
      autoscaling                 = true
      node_locations              = "us-west1-a"
      min_count                   = 1
      max_count                   = 10
      local_ssd_count             = 0
      spot                        = false
      disk_size_gb                = 100
      disk_type                   = "pd-standard"
      image_type                  = "COS_CONTAINERD"
      auto_repair                 = true
      auto_upgrade                = true
      enable_private_nodes        = true
    },
  ]

  node_pools_oauth_scopes = {
    all = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }

  node_pools_labels = {
    all = {}

    default-node-pool = {
      default-node-pool = true
    }
  }

  node_pools_metadata = {
    all = {}

    default-node-pool = {
      node-pool-metadata-custom-value = "my-node-pool"
    }
  }

  node_pools_taints = {
    all = []

    default-node-pool = [
      {
        key    = "default-node-pool"
        value  = true
        effect = "PREFER_NO_SCHEDULE"
      },
    ]
  }

  node_pools_tags = {
    all = []

    default-node-pool = [
      "default-node-pool",
    ]
  }
}
$ kubectl get no -o wide
NAME                                             STATUS   ROLES    AGE     VERSION               INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-gke-test-4-private-node-pool-ff6cd4d4-23vg   Ready    <none>   5m24s   v1.31.4-gke.1183000   10.10.0.5     35.247.5.249    Container-Optimized OS from Google   6.6.56+          containerd://1.7.24
gke-gke-test-4-private-node-pool-ff6cd4d4-qnvk   Ready    <none>   50s     v1.31.4-gke.1183000   10.10.0.7     34.82.87.200    Container-Optimized OS from Google   6.6.56+          containerd://1.7.24
gke-gke-test-4-public-node-pool-3ddbc82e-41gc    Ready    <none>   64s     v1.31.4-gke.1183000   10.10.0.6     34.145.118.38   Container-Optimized OS from Google   6.6.56+          containerd://1.7.24
@apeabody
Collaborator

Hi @yemaney - Can you please provide the full Terraform plan for your initial apply?

@vnandha

vnandha commented Feb 5, 2025

Just curious: if the cluster was created with a module version before v33.0, the default for enable_private_nodes is false. It would only create public node pools, as I don't see any node-pool-specific enable_private_nodes in this module. Please correct me if I am wrong. If this is a bug or a feature request, I could create one for a node-pool-specific enable_private_nodes variable.

ref: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/blob/main/modules/private-cluster/cluster.tf#L631
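If the goal is simply that all nodes have no external IPs, the private-cluster submodule referenced above exposes enable_private_nodes at the cluster level. A minimal sketch (untested, values illustrative and reusing the names from this issue; master_ipv4_cidr_block may or may not be required depending on cluster setup):

module "gke_private" {
  source  = "terraform-google-modules/kubernetes-engine/google//modules/private-cluster"
  version = "~> 36.0"

  project_id        = "wired-height-365016"
  name              = "gke-test-4"
  region            = "us-west1"
  network           = module.vpc.network_name
  subnetwork        = "subnet-01"
  ip_range_pods     = "subnet-01-secondary-01"
  ip_range_services = "subnet-01-secondary-02"

  # Applies cluster-wide; a per-node-pool override is the open question in this issue.
  enable_private_nodes = true

  # Possibly required for the control plane of a private cluster (illustrative range).
  master_ipv4_cidr_block = "172.16.0.0/28"
}

This does not give a mixed public/private setup within one cluster, which is what the original report is asking for.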

@apeabody
Collaborator

apeabody commented Feb 5, 2025

Yes, best practice would be to always include a module version constraint. E.g.

module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google"
  version = "~> 36.0"
  # ...
}

@kusnitsyn

I am having the same problem, but only with the private cluster module, version = "~> 36.0".

I need both public and private node pools, but enable_private_nodes in the node_pools block doesn't do anything.
Maybe I'm missing something, but this should be possible.
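For reference, the per-pool setting does exist at the provider level: the google_container_node_pool resource accepts enable_private_nodes inside its network_config block, so mixed public/private pools are expressible outside the module. A minimal sketch, assuming an existing cluster named gke-test-4 and a recent google provider; names and values are illustrative:

resource "google_container_node_pool" "private_pool" {
  name     = "private-node-pool"
  cluster  = "gke-test-4"        # assumed existing cluster
  location = "us-west1"
  project  = "wired-height-365016"

  node_count = 1

  network_config {
    enable_private_nodes = true  # nodes in this pool get no external IPs
  }

  node_config {
    machine_type = "e2-medium"
    image_type   = "COS_CONTAINERD"
  }
}

Whether and how the kubernetes-engine module forwards this per-pool field is what would need to be confirmed or added.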
