Feature Request: Support EndpointSlices Without In-cluster Pod Targets in Ingress #4017
Comments
Could you expand further on this point:
> Later versions of Kubernetes and the controller have made using NodePorts for traffic a lot more reliable. For example, when using cluster autoscaler: #1688
@zac-nixon
Additionally, by supporting direct IP-based communication as described in the Kubernetes documentation—rather than routing traffic exclusively through Nodes—we can further improve interoperability with existing controllers, foster additional integrations, and enable even more significant innovation in the future.
I've created a separate issue regarding the problem we discussed about AWS Load Balancer Controller not handling Karpenter taints:
Sorry for the delayed response. What automation are you using to populate the custom endpoint slice? I wonder if you can use a Multicluster Target Group Binding (https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/targetgroupbinding/targetgroupbinding/#multicluster-target-group) and then point your automation to just register the targets directly into the Target Group?
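For context, a multicluster TargetGroupBinding along the lines of the linked guide might look roughly like the sketch below. The resource names, namespace, and target group ARN are placeholders, and the `multiClusterTargetGroup` field follows the linked documentation (verify the exact spelling against your controller version):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-service-tgb        # placeholder name
  namespace: default          # placeholder namespace
spec:
  serviceRef:
    name: my-service          # placeholder Service backing the target group
    port: 80
  targetType: ip
  targetGroupARN: arn:aws:elasticloadbalancing:...:targetgroup/my-tg/...   # placeholder ARN
  multiClusterTargetGroup: true   # allow targets registered by other clusters/automation to coexist in the same target group
```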
I am currently trying to implement an MCS controller using Sveltos (Related Issue: projectsveltos/sveltos#435 (comment)).
On the other hand, if the AWS Load Balancer Controller directly supported custom EndpointSlices, which are a standard Kubernetes resource, the complicated setup mentioned above would become unnecessary. I believe this approach is preferable in terms of achieving the configuration that users ultimately need in a simpler way.
Hi @zac-nixon, I hope you’re doing well. I’d like to follow up on the feature request discussed earlier in this thread and get your input on a couple of points: whether the overall feature request is acceptable, and whether the proposed implementation plan looks reasonable.
Once we have consensus on both the overall feature request and the implementation plan, my goal would be to update the feature request status to “implementation pending” so we can move forward with development. Given your extensive contributions and deep understanding of aws-load-balancer-controller, your feedback is extremely valuable. Looking forward to hearing your thoughts. Best regards,
Hi @kahirokunn, I apologize for the delayed response. While I think we have existing solutions in place, like instance-based targets or usage of a multicluster target group, I think your proposed solution makes sense. This new pod discovery would have to be completely feature flagged, as your proposal suggests. One caveat is that we can't support the usage of public IPs as registered targets; doing so would block target registration. Are you ok with this caveat? Thank you for putting together this feature idea. We can work together to implement it to be sure that it fits your use case.
Dear @zac-nixon, thank you for your response. I am delighted to receive your feedback.
Yes, I agree with the restriction on public IPs. Upon investigating why public IPs cannot be allowed, I found the relevant AWS documentation. According to this documentation, the allowed CIDRs are:
- 10.0.0.0/8 (RFC 1918)
- 100.64.0.0/10 (RFC 6598)
- 172.16.0.0/12 (RFC 1918)
- 192.168.0.0/16 (RFC 1918)
Following these specifications, it naturally follows that public IPs cannot be allowed. This limitation does not pose any functional issues, as we can still achieve multi-cluster functionality without problems. As a user-friendly enhancement, we could reflect error conditions in the Ingress status when IPs outside these ranges are specified. Here's an example of how the status condition could be formatted in YAML:

```yaml
status:
  conditions:
    - type: ValidIPRange
      status: "False"
      reason: "IPOutOfAllowedRange"
      message: "One or more IP addresses are outside the allowed private IP ranges. Allowed ranges are: 10.0.0.0/8 (RFC1918), 100.64.0.0/10 (RFC6598), 172.16.0.0/12 (RFC1918), and 192.168.0.0/16 (RFC1918)."
      lastTransitionTime: "2025-02-14T12:00:00Z"
```

In this example, the condition surfaces the out-of-range addresses directly in the Ingress status. I believe implementing according to these specifications will enable integration with on-premises and multi-cloud environments through private IP registration.
I deeply appreciate your support. Best regards,
Related Problem
When deploying a multi-cluster EKS environment that shares services via the Multi-Cluster Services (MCS) API, multiple EndpointSlices may be created for a single Service. Currently, in “target-type: ip” mode, the AWS Load Balancer Controller only registers Pod IPs of locally running Pods. It does not register:
- IP addresses from EndpointSlices created by an MCS implementation, which point to Pods running in other clusters
- IP addresses from custom EndpointSlices whose endpoints do not reference local Pods (i.e., external endpoints)
This behavior forces users to employ workarounds—such as using “target-type: instance” and routing traffic through NodePorts—which can introduce suboptimal routing and increase the risk of disruptions if a Node is scaled in or replaced.
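For reference, the “target-type: ip” mode discussed above is typically selected through an annotation on the Ingress, roughly as in this sketch (resource and Service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                   # illustrative name
  annotations:
    alb.ingress.kubernetes.io/target-type: ip    # register Pod IPs directly instead of routing through NodePorts
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service                 # illustrative Service name
                port:
                  number: 80
```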
Proposed Unified Solution
Enhance the AWS Load Balancer Controller to directly register IP addresses from EndpointSlices in “target-type: ip” mode, even if those addresses are intended for multi-cluster usage (MCS) or represent external endpoints. This can be done by:
- Recognizing EndpointSlices that are not managed by the default EndpointSlice controller as a source of registerable IP addresses
- Registering endpoint addresses whose targetRef does not reference a local Pod directly as IP targets in the Target Group
- Guarding the new behavior behind a feature flag so existing deployments are unaffected
A relevant part of the AWS Load Balancer Controller’s current design is located here:
aws-load-balancer-controller/pkg/backend/endpoint_resolver.go, lines 155 to 157 (commit c701a42)
Here, the logic could be extended to handle these alternative address types. For example, if the `endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io` label is missing, the Controller might treat the EndpointSlice’s IP addresses as external IPs; or if `EndpointSlice.Endpoints[].TargetRef.Kind != "Pod"`, the Controller might interpret them as external endpoints. In both cases, the goal remains the same: provide direct integration with new or external IP addresses listed in EndpointSlices, reducing complexity and offering more efficient traffic routing.
Alternatives Considered
Using “target-type: instance” and routing traffic through NodePorts. This works today, but it reintroduces the suboptimal routing and the Node scale-in/replacement disruption risks described above.
Example: MCS with Additional Cluster IPs
Below is a sample configuration demonstrating how MCS might export a Service, creating an EndpointSlice in one cluster with Pod IPs from another cluster:
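A minimal sketch of such an exported EndpointSlice follows. The names, namespace, port, and managed-by value are illustrative placeholders; only the address 10.11.12.13 is taken from the example referenced here:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-remote-abc123          # illustrative name
  namespace: default                      # illustrative namespace
  labels:
    kubernetes.io/service-name: my-service                              # ties the slice to the Service backing the Ingress
    endpointslice.kubernetes.io/managed-by: mcs-controller.example.org  # not the default endpointslice-controller.k8s.io
addressType: IPv4
ports:
  - name: http
    port: 8080          # illustrative container port
    protocol: TCP
endpoints:
  - addresses:
      - "10.11.12.13"   # Pod IP from another cluster, within the allowed private ranges
    conditions:
      ready: true
```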
With the proposed feature enabled, the IP “10.11.12.13” would be recognized by the AWS Load Balancer Controller and automatically registered in the Target Group.