TKG Procedure: Migrate from NGINX Ingress to Project Contour
This post documents a practical migration flow for moving a TKG workload cluster from the NGINX Ingress Controller to Project Contour. The sequence below mirrors an executed shell history and preserves the original operational order: remove NGINX completely first, then install Contour.
1) Download the upstream Contour manifest
cd /tmp
curl -fsSLo contour.yaml https://raw.githubusercontent.com/projectcontour/contour/release-1.31/examples/render/contour.yaml
test -s contour.yaml
grep -n 'image:' contour.yaml
For release-1.31, the manifest is expected to reference Contour v1.31.3 and Envoy v1.34.12; confirm the exact tags against the grep output above before mirroring.
2) Mirror Contour and Envoy images to Harbor
HARBOR_REGISTRY=<harbor-fqdn>
HARBOR_PROJECT=<harbor-project>
CONTOUR_TAG=v1.31.3
ENVOY_TAG=v1.34.12
SRC_CONTOUR=ghcr.io/projectcontour/contour:${CONTOUR_TAG}
SRC_ENVOY=docker.io/envoyproxy/envoy:${ENVOY_TAG}
DST_CONTOUR=${HARBOR_REGISTRY}/${HARBOR_PROJECT}/contour:${CONTOUR_TAG}
DST_ENVOY=${HARBOR_REGISTRY}/${HARBOR_PROJECT}/envoy:${ENVOY_TAG}
docker pull "${SRC_CONTOUR}"
docker pull "${SRC_ENVOY}"
docker tag "${SRC_CONTOUR}" "${DST_CONTOUR}"
docker tag "${SRC_ENVOY}" "${DST_ENVOY}"
docker push "${DST_CONTOUR}"
docker push "${DST_ENVOY}"
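The per-image pull/tag/push sequence above can also be driven by a small helper that derives the Harbor destination from an upstream reference. The function name `to_harbor_ref` is hypothetical, not part of any tooling used here; it is a minimal sketch assuming the destination keeps only the final repository component and tag.

```shell
# Hypothetical helper: derive the Harbor destination reference from an
# upstream image reference by keeping only the final repository component
# and tag (e.g. ghcr.io/projectcontour/contour:v1.31.3 -> contour:v1.31.3).
to_harbor_ref() {
  src=$1 registry=$2 project=$3
  name_tag=${src##*/}      # strip the registry and org path
  printf '%s/%s/%s\n' "$registry" "$project" "$name_tag"
}

# Example usage with the step-2 placeholders:
# docker tag "${SRC_CONTOUR}" "$(to_harbor_ref "${SRC_CONTOUR}" "${HARBOR_REGISTRY}" "${HARBOR_PROJECT}")"
```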
3) Rewrite contour.yaml to Harbor image paths
sed -i "s|ghcr.io/projectcontour/contour:${CONTOUR_TAG}|${DST_CONTOUR}|g" contour.yaml
sed -i "s|docker.io/envoyproxy/envoy:${ENVOY_TAG}|${DST_ENVOY}|g" contour.yaml
grep -n 'image:' contour.yaml
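After the sed rewrite, it is worth asserting that no image line still points at an upstream registry rather than eyeballing the grep output. The check below is a hypothetical helper, assuming the only upstream registries in play are ghcr.io and docker.io as in the original manifest.

```shell
# Hypothetical check: fail if any image: line in the manifest still points
# at an upstream registry instead of Harbor.
check_no_upstream_images() {
  if grep -nE 'image:.*(ghcr\.io|docker\.io)/' "$1"; then
    echo "ERROR: upstream image references remain in $1" >&2
    return 1
  fi
  echo "OK: all image references in $1 rewritten"
}

# Run against the rewritten manifest when present.
if [ -f contour.yaml ]; then
  check_no_upstream_images contour.yaml
fi
```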
4) Run pre-checks and backup ingress objects
kubectl config use-context <target-tkg-context>
kubectl config current-context
kubectl get ingress -A
kubectl get ingressclass
kubectl get ingress -A -o yaml > ingress-backup-$(date +%Y%m%d-%H%M%S).yaml
kubectl get crd | grep -i nginx || true
kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io | grep -i nginx || true
kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io | grep -i nginx || true
kubectl get svc -A | grep -i nginx || true
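Before deleting anything in step 5, it is prudent to confirm the backup file actually captured objects. The helper below is a hypothetical sanity check; it assumes the `kubectl get ingress -A -o yaml` output is a List whose items each carry a `kind: Ingress` line.

```shell
# Hypothetical sanity check: require at least one Ingress object in the
# backup file produced above before proceeding with removal.
backup_has_ingresses() {
  n=$(grep -c 'kind: Ingress$' "$1" || true)
  echo "backup $1 contains $n Ingress object(s)"
  [ "$n" -gt 0 ]
}

# backup_has_ingresses ingress-backup-<timestamp>.yaml
```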
5) Remove NGINX completely (before Contour install)
kubectl delete ingressclass nginx || true
kubectl delete crd \
dnsendpoints.externaldns.nginx.org \
globalconfigurations.k8s.nginx.org \
policies.k8s.nginx.org \
transportservers.k8s.nginx.org \
virtualserverroutes.k8s.nginx.org \
virtualservers.k8s.nginx.org || true
kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io ingress-nginx-admission || true
kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io ingress-nginx-admission || true
kubectl delete ns nginx-ingress || true
Post-removal validation:
kubectl api-resources | grep -i nginx || true
kubectl get crd | grep -i nginx || true
kubectl get svc -A | grep -i nginx || true
kubectl get ingressclass
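The grep-based validation above can be turned into a hard pass/fail gate. `assert_no_nginx` is a hypothetical helper that reads command output on stdin and fails if anything nginx-related remains.

```shell
# Hypothetical helper: read command output on stdin and fail if any line
# still mentions nginx (case-insensitive).
assert_no_nginx() {
  leftovers=$(grep -i nginx || true)
  if [ -n "$leftovers" ]; then
    echo "ERROR: nginx resources still present:" >&2
    echo "$leftovers" >&2
    return 1
  fi
  echo "OK: no nginx leftovers"
}

# Usage against the live cluster:
# kubectl get crd | assert_no_nginx
# kubectl api-resources | assert_no_nginx
# kubectl get svc -A | assert_no_nginx
```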
6) Install Project Contour
kubectl apply -f contour.yaml
kubectl get pods -n projectcontour
kubectl get crd | grep -i projectcontour
kubectl get ingressclass
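Rather than eyeballing `kubectl get pods`, the install can be gated on pod state. The function below is a hypothetical readiness check that parses `kubectl get pods --no-headers` output, where the third column is STATUS; the rollout commands in the comments assume the resource names (`deployment/contour`, `daemonset/envoy`) used by the upstream example manifest.

```shell
# Hypothetical readiness gate: read `kubectl get pods --no-headers` output
# on stdin and fail if any pod is in a state other than Running or Completed.
all_pods_ready() {
  bad=$(awk '$3 != "Running" && $3 != "Completed"')
  if [ -n "$bad" ]; then
    echo "ERROR: pods not ready:" >&2
    echo "$bad" >&2
    return 1
  fi
  echo "OK: all pods Running/Completed"
}

# Usage against the live cluster:
# kubectl get pods -n projectcontour --no-headers | all_pods_ready
# kubectl -n projectcontour rollout status deployment/contour
# kubectl -n projectcontour rollout status daemonset/envoy
```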
7) Migrate routes to HTTPProxy
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: <proxy-app-1>
  namespace: <app-namespace>
spec:
  virtualhost:
    fqdn: <app1.example.internal>
    tls:
      secretName: <tls-secret-app-1>
  routes:
    - conditions:
        - prefix: /
      services:
        - name: <service-app-1>
          port: 8080

kubectl apply -f <httpproxy-file>.yaml
kubectl get httpproxy -n <app-namespace>
kubectl describe httpproxy -n <app-namespace> <proxy-app-1>
8) Validate cutover
kubectl get pods -n projectcontour
kubectl get httpproxy -A
curl -Ik https://<app1.example.internal>
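A single `curl -Ik` can race the Envoy rollout, so a retry loop is useful for cutover validation. `probe_vhost` is a hypothetical helper that polls the endpoint until it returns a 2xx/3xx status or the retry budget is exhausted; the FQDN is the step-7 placeholder.

```shell
# Hypothetical cutover probe: poll an HTTPS endpoint until it returns a
# 2xx/3xx status, retrying $2 times with $3 seconds between attempts.
probe_vhost() {
  url=$1 tries=${2:-10} delay=${3:-3}
  i=1
  while [ "$i" -le "$tries" ]; do
    code=$(curl -sk -o /dev/null -w '%{http_code}' "$url" || echo 000)
    case $code in
      2??|3??) echo "OK: $url -> HTTP $code"; return 0 ;;
    esac
    if [ "$i" -lt "$tries" ]; then sleep "$delay"; fi
    i=$((i+1))
  done
  echo "ERROR: $url still failing (last code: $code)" >&2
  return 1
}

# probe_vhost "https://<app1.example.internal>"
```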
9) Rollback path
kubectl apply -f <nginx-controller-manifests>.yaml
kubectl apply -f ingress-backup-<timestamp>.yaml
kubectl delete -f <httpproxy-file>.yaml
This runbook intentionally avoids real cluster names or application identifiers and can be adapted per environment.
