From Zero to HTTPS: Deploying the Spices App on Google Kubernetes Engine
This post is a practical end-to-end walk-through of deploying a real application, Spices (an Angular frontend with a REST backend), on Google Kubernetes Engine (GKE), served securely at https://spices.polakams.com. Along the way we'll cover DNS (with the domain registered at GoDaddy), SSL via Google-managed certificates, a single Ingress that routes both frontend and backend, and every troubleshooting rabbit hole I fell into so you can skip them.
1. The Starting Point
The goal was simple to state and deceptively involved in practice:
- Two Kubernetes Deployments — an Angular frontend (spicesui-app) and a REST API backend.
- Domain polakams.com registered with GoDaddy.
- Expose everything at https://spices.polakams.com with a trusted SSL certificate.
- One Ingress fronting both services, with the backend reachable under /api.
2. Domain & DNS — Do I Move Everything to GCP?
A common question: should the domain itself move to GCP? The short answer is no — keep the registration at GoDaddy and only point DNS to the GCP load balancer. Google Cloud Domains stopped accepting new registrations, so the registrar question is largely settled. You have two options:
- Keep DNS at GoDaddy and add A records pointing to the GCP static IP. Fastest path, works fine.
- Move DNS to Cloud DNS while keeping registration at GoDaddy. Better if you want tighter GCP integration, but requires recreating all records.
For the Spices app, we kept DNS at GoDaddy and just added an A record for spices.polakams.com.
3. Reserve a Static External IP
A stable IP is the foundation for both DNS and SSL. Reserve a global static IP:
gcloud compute addresses create spices-ip --global
gcloud compute addresses describe spices-ip --global --format="value(address)"
Copy the returned IP — we'll hand it to GoDaddy next.
Tip: if the reservation fails with a quota error, run gcloud compute addresses list and gcloud compute forwarding-rules list and delete anything orphaned. Nine times out of ten you don't need a quota increase — you just need to clean up.
4. Point the Domain at the IP (GoDaddy)
In GoDaddy: My Products → DNS → Manage DNS for polakams.com. Add:
- Type: A
- Host: spices
- Points to: <your static IP>
- TTL: 600 seconds
Verify propagation:
dig +short spices.polakams.com
The IP you reserved should come back. If GoDaddy has a "Domain Forwarding" rule on the subdomain, disable it — forwarding overrides A records.
5. The SSL Decision — Google-Managed Certificates
For a public GKE app fronted by a Google HTTP(S) Load Balancer, the best choice is a Google-managed SSL certificate. It's free, auto-renewing, and requires zero cert files in your cluster. Two options exist:
- Classic ManagedCertificate CRD — what we used. Simple, works well for a handful of domains.
- Certificate Manager — newer, supports wildcards, pre-provisioning before DNS cutover, and scales much higher.
Use Certificate Manager for anything non-trivial. For Spices' single subdomain, the classic CRD is fine.
6. Deploy the Backend and Frontend
Two Deployments, two Services of type NodePort.
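The Deployment manifests aren't reproduced in this post; a minimal sketch of the frontend one, with an assumed image path and the label the Service selector below expects, might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spicesui-app
spec:
  replicas: 1
  selector:
    matchLabels: { app: spicesui-app }   # must match the Service's selector
  template:
    metadata:
      labels: { app: spicesui-app }
    spec:
      containers:
        - name: spicesui-app
          image: gcr.io/YOUR_PROJECT/spicesui-app:latest  # placeholder image path
          ports:
            - containerPort: 80           # nginx serves the Angular build here
```

The backend Deployment is the same shape with app: spices-backend labels and containerPort: 8080.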
apiVersion: v1
kind: Service
metadata:
  name: spicesui-app-svc
  annotations:
    cloud.google.com/backend-config: '{"default": "frontend-backendconfig"}'
spec:
  type: NodePort
  selector: { app: spicesui-app }
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: spices-backend-svc
  annotations:
    cloud.google.com/backend-config: '{"default": "backend-backendconfig"}'
spec:
  type: NodePort
  selector: { app: spices-backend }
  ports:
    - port: 80
      targetPort: 8080
7. BackendConfig — The Health Check Fix
Here's where our first real-world roadblock hit. The GKE Ingress creates an HTTP health check that probes GET / on your backend by default. If your API doesn't return 200 on /, the backend stays UNHEALTHY forever:
All backend services are in UNHEALTHY state
Fix it by giving each service a BackendConfig pointing at a real health endpoint:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: backend-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /actuator/health  # or whatever returns 200 on your API
    port: 8080
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: frontend-backendconfig
spec:
  healthCheck:
    type: HTTP
    requestPath: /  # Angular's nginx serves index.html here
    port: 80
Alternatively, define a readinessProbe on your Deployment and GKE will auto-derive the LB health check from it. One less resource to maintain.
8. The ManagedCertificate
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: spices-cert
spec:
  domains:
    - spices.polakams.com
9. The Ingress — One LB for Frontend + Backend
Path-based routing keeps everything on a single hostname, one cert, and zero CORS headaches.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spices-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: spices-ip
    networking.gke.io/managed-certificates: spices-cert
    kubernetes.io/ingress.class: gce
    networking.gke.io/v1beta1.FrontendConfig: spices-frontend
spec:
  rules:
    - host: spices.polakams.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: spices-backend-svc
                port: { number: 80 }
          - path: /
            pathType: Prefix
            backend:
              service:
                name: spicesui-app-svc
                port: { number: 80 }
And an HTTP-to-HTTPS redirect via FrontendConfig:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: spices-frontend
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
10. "Certificate Status: Provisioning... Forever"
The managed cert sat in Provisioning for what felt like eternity. The usual suspects:
- DNS not yet pointing to the LB — Google validates over HTTP, so the domain must resolve to the LB's IP before validation can succeed.
- Backends UNHEALTHY — if port 80 isn't answering healthily, validation fails silently.
- CAA records restricting the CA — check dig CAA polakams.com; if any records exist, make sure pki.goog is allowed.
Diagnostic command that tells you exactly which domain is stuck and why:
kubectl describe managedcertificate spices-cert
11. "The server encountered a temporary error"
This is the default GCP Load Balancer error page. It means the LB is up but can't reach a healthy backend — which is also why the ManagedCertificate stays stuck. Fix the health check, and both problems clear up. Check backend health via the Ingress status annotations:
kubectl describe ingress spices-ingress | grep -A 20 Backends
12. Scheduling Failure — "Preemption Is Not Helpful"
Adding the second deployment pushed the tiny node pool over capacity:
Cannot schedule pods: Preemption is not helpful for scheduling.
Translation: the cluster is full and evicting pods won't help. Three fixes:
- Lower the pod's CPU/memory requests in the Deployment spec.
- Scale the node pool: gcloud container clusters resize ... --num-nodes=N.
- Enable cluster autoscaling so it grows on demand.
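For the first fix, the knob lives in the container spec of each Deployment. A sketch with illustrative values — tune them to what your app actually uses:

```yaml
resources:
  requests:
    cpu: 100m      # what the scheduler reserves per pod
    memory: 128Mi
  limits:
    cpu: 250m
    memory: 256Mi
```

It's the requests (not limits) that drive scheduling, so over-generous requests are what fill a small node pool.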
13. CrashLoopBackOff — The nginx Upstream Trap
With the cluster sized correctly, the frontend pod started crashing:
Back-off restarting failed container spicesui-app
nginx: [emerg] host not found in upstream "backend" in /etc/nginx/conf.d/default.conf:30
The Angular Docker image's nginx config had a proxy_pass http://backend:8080 directive left over from the local docker-compose setup. Inside the GKE cluster there was no Service called backend, so nginx failed to resolve the upstream at startup and bailed out.
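For reference, the offending block looked roughly like this (reconstructed from the error message, so the exact contents are an assumption):

```nginx
# Works under docker-compose, where "backend" is a service name on the
# compose network — fails in GKE, where no such DNS name exists.
location /api/ {
    proxy_pass http://backend:8080;
}
```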
Since the Ingress already routes /api to the backend Service, remove the location /api/ proxy block from nginx entirely. The frontend nginx becomes a pure static-file server. No more DNS race, no more upstream lookups.
Final, clean default.conf:
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
The try_files ... /index.html line is essential for Angular — without it, refreshing on /dashboard or any nested Angular route returns 404 from nginx.
14. NEG Sync Error — "Endpoint Count Cannot Be Zero"
Failed to sync NEG "k8s1-...-spicesui-app-svc-80-...": endpoint count cannot be zero
A Network Endpoint Group with zero endpoints means no pod is Ready to receive traffic. It's a symptom of the crashloop above — fix the root cause and this clears on its own. Verify with:
kubectl get pods -l app=spicesui-app
kubectl get endpoints spicesui-app-svc
15. "Failed to fetch" in the Browser
After the app came up, the Angular UI still couldn't call the API. The console error:
Failed to fetch. Possible Reasons:
* CORS
* Network Failure
* URL scheme must be "http" or "https" for CORS request.
That message is misleading — it's the browser's generic error. The real answer was in the Network tab:
Request URL: https://spices.polakams.com/api/v1/products?page=0&size=10
Status Code: 403 Forbidden
16. 403 Forbidden — Spring Security Strikes
The Spices backend is a Spring Boot app. As soon as spring-boot-starter-security lands on the classpath, every endpoint is protected by default. The Angular UI was hitting /api/v1/products unauthenticated and getting rejected before the controller even ran.
For a public product-listing endpoint, permit it explicitly:
@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .csrf(csrf -> csrf.disable())
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/api/v1/products/**").permitAll()
                .requestMatchers("/actuator/health", "/healthz").permitAll()
                .anyRequest().authenticated()
            );
        return http.build();
    }
}
For protected endpoints, the Angular side needs an HTTP interceptor that attaches the Authorization: Bearer <token> header on every request.
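Stripped of Angular's HttpInterceptor types, the core of that interceptor is just "clone the request with one extra header". A framework-free sketch — SketchRequest and getToken are placeholders standing in for Angular's HttpRequest and wherever the app stores its JWT:

```typescript
// Minimal model of what an Angular auth interceptor does to each request.
type SketchRequest = { url: string; headers: Record<string, string> };

function withAuthHeader(
  req: SketchRequest,
  getToken: () => string | null
): SketchRequest {
  const token = getToken();
  if (!token) return req; // no token yet — send the request untouched
  // Clone rather than mutate, mirroring HttpRequest.clone() semantics.
  return {
    ...req,
    headers: { ...req.headers, Authorization: `Bearer ${token}` },
  };
}
```

In the real app this logic lives in a class implementing HttpInterceptor (or a functional interceptor), registered in the app's providers so every HttpClient call passes through it.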
17. The Final Architecture
After all that, here's the traffic flow that serves https://spices.polakams.com:
Browser
│
▼ HTTPS (TLS terminated by Google-managed cert)
Google Cloud HTTP(S) Load Balancer ── static IP: spices-ip
│
├── /api/* ──► spices-backend-svc ──► Spring Boot pods
│
└── /* ──► spicesui-app-svc ──► nginx + Angular static files
18. Lessons Learned
- Keep your domain at GoDaddy; move only the A record. No registrar transfer needed to deploy on GCP.
- Google-managed certs are magical but literal. DNS must already resolve to the LB, and backends must be healthy, before the cert goes Active. If it sits in Provisioning, something upstream is broken.
- One Ingress, path-based routing. Simpler than subdomain splitting and it eliminates CORS entirely. Just use relative API URLs in Angular (apiUrl: '/api').
- Don't let the frontend nginx proxy to the backend in Kubernetes. It's a docker-compose habit. Let the Ingress do the routing.
- "Failed to fetch" in the browser is never the actual error. Always look at the Network tab's status code.
- If you pulled in Spring Security, every endpoint is locked. Explicitly permit what should be public.
- BackendConfig (or a readinessProbe) is mandatory unless GET / happens to return 200 on your app.
- Cluster too small is the first bottleneck. Start with at least e2-standard-2 nodes or enable autoscaling before adding more workloads.
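The relative-URL lesson above lands in Angular's environment file; a sketch following the framework's usual convention (file name and flags per standard Angular scaffolding):

```typescript
// environment.prod.ts — a relative apiUrl means every API call is same-origin,
// so the Ingress routes /api/* to the backend and CORS never enters the picture.
export const environment = {
  production: true,
  apiUrl: '/api', // resolved against https://spices.polakams.com by the browser
};
```

Services then build request URLs as `${environment.apiUrl}/v1/products`, which works identically in every cluster the app is deployed to.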
Spices is now live and serving traffic with proper HTTPS. The whole stack — from the static IP down to the Spring Security config — took more trial-and-error than the official docs suggest, which is exactly why I wrote this post. Hopefully it saves you a few hours.
— Deployed at spices.polakams.com