Gateway API vs Ingress: No Ingress Controller Needed
ingress-nginx is being retired. The Kubernetes community ends support in March 2026: no more releases, no security patches, no CVE fixes. The cluster-takeover vulnerability (CVE-2025-1974) accelerated the decision, but the Ingress API had structural problems long before that.
This homelab never ran an ingress controller. The cluster was built on Cilium from day one (Part 3), and Cilium includes a conformant Gateway API implementation. One component handles L3 networking, L4 load balancing, and L7 routing. No NGINX, no Traefik, no extra pods.
If you're migrating off ingress-nginx, this post shows what the destination looks like. If you're building from scratch, it shows why you can skip the ingress controller entirely. Either way, the Gateway routes 16 services across 13 namespaces through a single resource.
This is Part 4 of the homelab series. Part 2 set up the HA control plane. Part 3 deployed Cilium with eBPF kube-proxy replacement. Now we're routing traffic.
Note: The retirement applies to the community `kubernetes/ingress-nginx` project. F5's commercial NGINX Ingress Controller (`nginxinc/kubernetes-ingress`) is a separate project and remains actively maintained.
Ingress API Limitations
The Ingress API shipped with Kubernetes 1.1 in 2015 and reached GA in 1.19. It works, and it isn't going away: the Ingress resource remains stable in Kubernetes core with no deprecation timeline. But the API has structural problems that annotations can't fix.
Every controller invents its own configuration language. NGINX uses nginx.ingress.kubernetes.io/rewrite-target. Traefik uses traefik.ingress.kubernetes.io/router.entrypoints. HAProxy uses its own set. Your Ingress manifests are vendor-locked from the first annotation you write.
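A hypothetical example of the lock-in: the regex rewrite below only means something to ingress-nginx, so moving this manifest to Traefik or HAProxy silently drops the behavior (the hostname and service name are placeholders):

```yaml
# Hypothetical Ingress: the rewrite lives in a vendor-specific
# annotation, so this manifest only works as intended on ingress-nginx.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api(/|$)(.*)      # capture group consumed by the annotation
            pathType: ImplementationSpecific
            backend:
              service:
                name: example-app
                port:
                  number: 80
```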
The API also collapses infrastructure and application concerns into one resource. The same manifest that defines a hostname configures TLS certificates, rate limits, and redirect rules. App developers need cluster-wide knowledge to deploy a route. Infrastructure teams can't enforce standards without reviewing every manifest.
TLS is per-resource. Each Ingress references its own Secret. In a 16-service cluster, that means 16 Ingress resources each duplicating the same wildcard cert reference. Change the cert name, update 16 files.
And the spec only supports HTTP and HTTPS. Need gRPC routing? TCP passthrough? You're back to controller-specific CRDs.
How Gateway API Splits the Work
Gateway API replaces Ingress with a role-oriented model. The current stable spec is v1.4.0 (Standard Channel), with CRD bundle v1.4.1. Three resources replace the single Ingress object:
GatewayClass declares which controller handles traffic. Cilium auto-registers gatewayClassName: cilium when gatewayAPI.enabled: true is set in the Helm values. The cluster operator owns this resource.
A Gateway defines listeners: ports, protocols, TLS certificates, and hostname patterns. The infrastructure team manages it. One Gateway can serve an entire cluster.
Application teams create HTTPRoutes. Each route lives in the application's own namespace, specifying a hostname, a path match, and a backend. Developers never touch the Gateway or its TLS config.
When a new service needs external access, the developer creates an HTTPRoute. The Gateway's allowedRoutes policy permits attachment automatically. No RBAC escalation, no infrastructure tickets, no TLS configuration. Features like header-based routing, URL rewrites, and traffic splitting are spec fields, not annotations. Switch Gateway implementations without rewriting a single route manifest.
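On the Cilium side, enabling all of this is a single Helm value; a minimal sketch of the values fragment:

```yaml
# Cilium Helm values fragment (illustrative; all other chart values omitted)
gatewayAPI:
  enabled: true
```

After the Cilium agents restart with this setting, `kubectl get gatewayclass` should list `cilium` with `ACCEPTED=True`.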
The Homelab Gateway
One YAML file. Four listeners. All traffic.
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: homelab-gateway
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  gatewayClassName: cilium
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces: { from: All }
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.k8s.rommelporras.com"
      tls:
        certificateRefs:
          - name: wildcard-k8s-tls
      allowedRoutes:
        namespaces: { from: All }
    - name: https-dev
      protocol: HTTPS
      port: 443
      hostname: "*.dev.k8s.rommelporras.com"
      tls:
        certificateRefs:
          - name: wildcard-dev-k8s-tls
      allowedRoutes:
        namespaces: { from: All }
    - name: https-stg
      protocol: HTTPS
      port: 443
      hostname: "*.stg.k8s.rommelporras.com"
      tls:
        certificateRefs:
          - name: wildcard-stg-k8s-tls
      allowedRoutes:
        namespaces: { from: All }
```
Three HTTPS listeners, each bound to a wildcard hostname and its own TLS certificate. The cert-manager.io/cluster-issuer annotation tells cert-manager to issue certificates for any certificateRefs Secret that doesn't exist yet. Apply the Gateway, and three Let's Encrypt wildcard certs appear within minutes.
The running Gateway on the live cluster:
```
$ kubectl get gateways -A
NAMESPACE   NAME              CLASS    ADDRESS       PROGRAMMED   AGE
default     homelab-gateway   cilium   10.10.30.20   True         43d
```
All four listeners report Programmed=True and Accepted=True. The 10.10.30.20 IP comes from a CiliumLoadBalancerIPPool (range .20-.99, 80 IPs), announced via ARP on the local network. No MetalLB, no BGP, no router configuration. Part 3 covered how Cilium's L2 announcements replace MetalLB.
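The pool and the L2 announcement policy are two small custom resources. A sketch, assuming resource names and default selectors (the `.20`-`.99` range is from the live cluster; everything else is illustrative):

```yaml
# Illustrative sketch: names are assumptions, the IP range is from the text.
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: homelab-pool
spec:
  blocks:
    - start: 10.10.30.20
      stop: 10.10.30.99
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: homelab-l2
spec:
  loadBalancerIPs: true   # answer ARP for LoadBalancer service IPs
```

A real policy usually also scopes which nodes and interfaces announce via `nodeSelector` and `interfaces`; the sketch leaves those at their defaults.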
Cilium Envoy handles the actual L7 processing: three pods running as a DaemonSet, one per node, zero restarts since deployment. When a request hits the Gateway VIP, Cilium's eBPF program intercepts it at the kernel level. Because the traffic needs L7 processing (hostname matching, TLS termination), it gets transparently redirected to the local cilium-envoy pod via TPROXY. Envoy terminates TLS, evaluates the HTTPRoute rules, and forwards to the backend.
The http listener on port 80 exists but has zero attached routes. This is a gap in the homelab's configuration that hasn't been fixed yet. Cilium doesn't auto-redirect HTTP to HTTPS because Gateway API favors explicit behavior over implicit. The planned fix is a single HTTPRoute:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-to-https-redirect
  namespace: default
spec:
  parentRefs:
    - name: homelab-gateway
      sectionName: http
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
            statusCode: 301
```
Compare this to Ingress, where the same behavior is an annotation buried in controller docs: nginx.ingress.kubernetes.io/ssl-redirect: "true". The Gateway API version is a resource you can kubectl get, audit, and version control.
Wildcard TLS via DNS-01
The three wildcard certificates are the foundation of the multi-environment architecture. Wildcard domains (*.k8s.rommelporras.com) can't use HTTP-01 because the CA needs to validate the base domain, not a specific hostname. DNS-01 proves domain ownership through a TXT record instead.
The homelab uses Cloudflare for DNS. cert-manager's ClusterIssuer holds a Cloudflare API token with Zone:DNS:Write permission:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: [email protected]
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
```
When the Gateway references a certificateRefs Secret that doesn't exist, cert-manager steps in. It creates a Certificate resource, calls the Cloudflare API to add a `_acme-challenge` TXT record, and waits for Let's Encrypt to validate it. The signed certificate lands as a Kubernetes Secret.
```
$ kubectl get certificates -A
NAMESPACE   NAME                   READY   SECRET                 AGE
default     wildcard-dev-k8s-tls   True    wildcard-dev-k8s-tls   43d
default     wildcard-k8s-tls       True    wildcard-k8s-tls       43d
default     wildcard-stg-k8s-tls   True    wildcard-stg-k8s-tls   43d
```
Three certs, all Ready=True, automatically renewed by cert-manager before expiry.
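While an issuance is in flight, the intermediate ACME objects are visible too; a few illustrative commands (the certificate name is taken from the output above):

```
$ kubectl get challenges -A            # pending DNS-01 challenges, one per wildcard
$ kubectl get certificaterequests -A   # requests cert-manager submitted to Let's Encrypt
$ kubectl describe certificate wildcard-k8s-tls -n default
```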
DNS-01 has a practical advantage for homelabs: it works for domains that never touch the public internet. The .k8s.rommelporras.com subdomains resolve to 10.10.30.20 on the local network only. HTTP-01 would require inbound internet access, which most homelab setups don't expose.
cert-manager's Gateway API integration requires enableGatewayAPI: true in the Helm config. This flag has been required since the feature was introduced in cert-manager v1.15, and it's still necessary as of v1.19.2 to activate the Gateway controller.
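In recent chart versions that flag lives under the `config` block of the Helm values; a sketch (the exact structure may differ between chart versions):

```yaml
# cert-manager Helm values fragment (illustrative)
config:
  apiVersion: controller.config.cert-manager.io/v1alpha1
  kind: ControllerConfiguration
  enableGatewayAPI: true
```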
16 Routes, 13 Namespaces
Every service gets an HTTPRoute in its own namespace. The route targets the Gateway by name and selects a listener via sectionName:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: grafana
  namespace: monitoring
spec:
  parentRefs:
    - name: homelab-gateway
      namespace: default
      sectionName: https
  hostnames:
    - grafana.k8s.rommelporras.com
  rules:
    - matches:
        - path: { type: PathPrefix, value: / }
      backendRefs:
        - name: prometheus-grafana
          port: 80
```
Grafana lives in monitoring. The HTTPRoute references the Gateway in default. Gateway API handles this cross-namespace attachment natively through allowedRoutes. No ReferenceGrant needed because the Gateway permits routes from all namespaces.
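For contrast, a ReferenceGrant would be required in the other direction: an HTTPRoute whose backendRef crosses namespaces. A hypothetical example (the `shared-backends` namespace doesn't exist in this cluster):

```yaml
# Hypothetical: only needed if a route in "monitoring" pointed its
# backendRef at a Service in another namespace, here "shared-backends".
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-monitoring-routes
  namespace: shared-backends   # created in the namespace being referenced
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: monitoring
  to:
    - group: ""                # core API group (Services)
      kind: Service
```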
The full route inventory from the live cluster:
```
$ kubectl get httproutes -A
NAMESPACE           NAME                HOSTNAMES                                    AGE
browser             firefox             ["browser.k8s.rommelporras.com"]             28d
ghost-dev           ghost-dev           ["blog.dev.k8s.rommelporras.com"]            43d
ghost-prod          ghost-prod          ["blog.k8s.rommelporras.com"]                43d
gitlab              gitlab              ["gitlab.k8s.rommelporras.com"]              43d
gitlab              gitlab-registry     ["registry.k8s.rommelporras.com"]            43d
home                adguard             ["adguard.k8s.rommelporras.com"]             43d
home                homepage            ["portal.k8s.rommelporras.com"]              43d
home                myspeed             ["myspeed.k8s.rommelporras.com"]             43d
invoicetron-dev     invoicetron-dev     ["invoicetron.dev.k8s.rommelporras.com"]     35d
invoicetron-prod    invoicetron-prod    ["invoicetron.k8s.rommelporras.com"]         35d
longhorn-system     longhorn            ["longhorn.k8s.rommelporras.com"]            43d
monitoring          grafana             ["grafana.k8s.rommelporras.com"]             43d
portfolio-dev       portfolio-dev       ["portfolio.dev.k8s.rommelporras.com"]       43d
portfolio-prod      portfolio-prod      ["portfolio.k8s.rommelporras.com"]           43d
portfolio-staging   portfolio-staging   ["portfolio.stg.k8s.rommelporras.com"]       43d
uptime-kuma         uptime-kuma         ["uptime.k8s.rommelporras.com"]              43d
```
Sixteen routes, all Accepted=True, spread across 13 namespaces. The distribution by listener:
| Listener | Routes | Services |
|---|---|---|
| `https` (prod) | 12 | Grafana, Longhorn, AdGuard, Homepage, GitLab, Registry, Ghost, Portfolio, Invoicetron, Uptime Kuma, MySpeed, Firefox |
| `https-dev` | 3 | Ghost dev, Portfolio dev, Invoicetron dev |
| `https-stg` | 1 | Portfolio staging |
| `http` | 0 | (redirect not yet configured) |
Multi-environment routing for the same application is three HTTPRoutes in three namespaces, each pointing to a different listener:
```yaml
# portfolio-prod namespace → production listener
sectionName: https
hostnames: ["portfolio.k8s.rommelporras.com"]

# portfolio-dev namespace → development listener
sectionName: https-dev
hostnames: ["portfolio.dev.k8s.rommelporras.com"]

# portfolio-staging namespace → staging listener
sectionName: https-stg
hostnames: ["portfolio.stg.k8s.rommelporras.com"]
```
Each environment has its own namespace, wildcard cert, and listener. The fixed infrastructure is three wildcard DNS records in AdGuard (`*.k8s`, `*.dev.k8s`, `*.stg.k8s`, all pointing to 10.10.30.20), three wildcard certs, and four Gateway listeners. Against that baseline, deploying a new service in any environment means creating one HTTPRoute; DNS, TLS, and the Gateway stay untouched.
Every one of these 16 routes uses identical structure: one hostname, PathPrefix: /, one backend. The value of Gateway API in this cluster isn't per-route complexity. It's centralized TLS, cross-namespace routing, and role separation between the platform team (who manages the Gateway) and application teams (who create HTTPRoutes in their own namespaces).
Two-Path Architecture
Not every service goes through the Gateway. Public-facing services use Cloudflare Tunnel, which bypasses it entirely.
| Path | Flow | TLS |
|---|---|---|
| Internal | AdGuard DNS → Gateway VIP (10.10.30.20) → Cilium Envoy → Service | Let's Encrypt wildcard |
| Public | Cloudflare DNS → Tunnel → cloudflared pod → Service | Cloudflare edge cert |
The cloudflared pods connect directly to backend services via Kubernetes service DNS, skipping the Gateway:
| Public URL | Tunnel Routes To |
|---|---|
| `blog.rommelporras.com` | `ghost.ghost-prod:2368` |
| `rommelporras.com` | `portfolio.portfolio-prod:80` |
| `invoicetron.rommelporras.com` | `invoicetron.invoicetron-prod:3000` |
| `status.rommelporras.com` | `uptime-kuma.uptime-kuma:3001` |
A CiliumNetworkPolicy on the cloudflare-tunnel namespace restricts which backends the tunnel pods can reach. Only production namespaces (ghost-prod, portfolio-prod, invoicetron-prod, uptime-kuma) are permitted as egress targets. Dev environments, GitLab, Grafana, and Longhorn are unreachable from the tunnel. Internal tools stay internal.
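A sketch of what that policy could look like; the policy name and empty endpoint selector are assumptions, and a real version also needs egress rules for DNS and the Cloudflare edge:

```yaml
# Illustrative sketch of the tunnel egress lockdown. The allowed
# namespaces come from the text; names and selectors are assumptions.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: cloudflared-egress
  namespace: cloudflare-tunnel
spec:
  endpointSelector: {}   # applies to every pod in cloudflare-tunnel
  egress:
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: ghost-prod
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: portfolio-prod
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: invoicetron-prod
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: uptime-kuma
```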
The blog you're reading right now reaches blog.rommelporras.com through Cloudflare Tunnel. I access the Ghost admin panel at blog.k8s.rommelporras.com through the Gateway. Same Ghost instance, different security boundaries.
This split means the Gateway only serves internal users on the local network. DNS-01 wildcard certs work because the .k8s.rommelporras.com subdomains never touch the public internet. The tunnel handles public TLS at the Cloudflare edge. Two paths, no overlap, each with its own access policy.
Honest Trade-offs
Cilium as a unified Gateway works well for homelabs and mid-size clusters, but the architecture has real constraints.
Coupled upgrade cycles. A Gateway API bug fix requires upgrading the same binary that handles pod networking. Dedicated implementations like Envoy Gateway or Istio allow independent lifecycle management. In this homelab, Cilium upgrades have been smooth, but the coupling matters more as cluster size grows.
Independent benchmarks show Cilium's control plane can hit severe CPU spikes beyond ~5,000 HTTPRoutes during route churn. Sixteen routes won't reach that ceiling, but enterprise clusters managing hundreds of teams should evaluate dedicated proxy architectures; in those same benchmarks, Istio's control plane handled high route counts with lower CPU overhead.
All 16 routes use basic path matching: PathPrefix: / with a single backend. Cilium supports path rewrites, header matching, request mirroring, gRPC routing, WebSocket upgrades, and request timeouts. None of that is tested here. The architectural model is the value, not per-route complexity.
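For reference, here's what some of those untested features look like as spec fields; a hypothetical canary route combining a header match with a prefix rewrite (the hostname and backend are placeholders):

```yaml
# Hypothetical HTTPRoute: header-gated routing plus a URL rewrite,
# expressed as Gateway API spec fields rather than annotations.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-example
  namespace: default
spec:
  parentRefs:
    - name: homelab-gateway
      sectionName: https
  hostnames:
    - app.k8s.rommelporras.com
  rules:
    - matches:
        - path: { type: PathPrefix, value: /api }
          headers:
            - name: x-canary      # only requests carrying this header match
              value: "true"
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /   # strip the /api prefix before forwarding
      backendRefs:
        - name: app-canary
          port: 80
```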
CiliumLoadBalancerIPPool and CiliumL2AnnouncementPolicy are v2alpha1 resources. They work reliably, but the API surface could change between Cilium releases. Plan for manifest updates when upgrading.
If You're Migrating
The homelab was built with Gateway API from the start. If you're moving off ingress-nginx before the March 2026 archival, the migration path adds extra steps.
- Install Gateway API CRDs (v1.4.1) before upgrading Cilium or deploying any Gateway resource
- Enable `gatewayAPI.enabled: true` in Cilium Helm values (requires Cilium 1.16+)
- Run both stacks in parallel during migration. Gateway API and Ingress coexist. Use blue-green DNS with reduced TTLs to shift traffic gradually
- Rewrite annotations as HTTPRoute filters. `ssl-redirect` becomes `RequestRedirect`, `rewrite-target` becomes `URLRewrite`. Don't try to port them 1:1
- Set `enableGatewayAPI: true` in cert-manager's Helm config for Gateway-based certificate issuance
- Use DNS-01 for wildcard certs (HTTP-01 can't validate wildcard domains)
- The ingress2gateway tool can automate basic conversions, but complex annotation setups need manual review
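A typical ingress2gateway invocation reads the live Ingress objects and prints Gateway API equivalents (flags may vary by version; review the output before applying anything):

```
$ ingress2gateway print --providers ingress-nginx > converted-routes.yaml
```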
The homelab's actual installation order (from the rebuild guide):

1. Gateway API CRDs v1.4.1
2. Cilium 1.18.6 with `gatewayAPI.enabled: true`
3. CiliumLoadBalancerIPPool + CiliumL2AnnouncementPolicy
4. cert-manager 1.19.2 with `enableGatewayAPI: true`
5. Cloudflare API token Secret
6. Gateway resource (triggers automatic cert issuance)
Total: about 15 minutes from CRD install to first HTTPS route, most of it waiting for DNS-01 validation.
Zapier documented their migration from ingress-nginx to Envoy Gateway after the EOL announcement. Their biggest win was eliminating configuration drift that had accumulated across years of annotation-based routing.
What's Next
This post is the fourth in the "Building a Production-Grade Homelab" series:
- Why kubeadm Over k3s, RKE2, and Talos
- HA Control Plane with kube-vip
- Cilium Deep Dive: eBPF Networking That Replaces kube-proxy
- Gateway API vs Ingress (you are here)
- Distributed Storage with Longhorn: 2 Replicas Are Enough
- The Modern Logging Stack: Loki + Alloy
- Alerting That Actually Works: Discord, Email, and Dead Man's Switches
- Self-Hosted GitLab: CI/CD Without Cloud Vendor Lock-in
Part 5 covers distributed storage with Longhorn, turning three NVMe SSDs into a replicated storage pool that survives node failures.
The complete Gateway configuration lives in the homelab repo under manifests/gateway/.