
Multi-Namespace Restores

Some applications are split across multiple namespaces: a frontend in app-frontend, a backend API in app-backend, and a database in app-db. Kymaros supports testing these workloads by restoring all relevant namespaces into isolated sandboxes and running health checks against the restored group.


Namespace mapping

The backupSource.namespaces field specifies which namespaces to include in the restore and, optionally, which sandbox namespace each should be restored into.

```yaml
spec:
  backupSource:
    name: myapp-backup
    namespaces:
      - name: app-frontend
        sandboxName: sandbox-frontend
      - name: app-backend
        sandboxName: sandbox-backend
      - name: app-db
        sandboxName: sandbox-db
```

Fields:

| Field | Required | Description |
|---|---|---|
| `name` | Yes | Source namespace name from the backup |
| `sandboxName` | No | Target namespace for the sandbox restore. If omitted, Kymaros generates a name. |

Specifying sandboxName explicitly is useful when your health check configuration references specific namespace names, or when you want predictable names in reports.

Single-namespace restore

For applications contained in one namespace, namespaces is a list with one entry:

```yaml
spec:
  backupSource:
    name: webapp-backup
    namespaces:
      - name: webapp-prod
        sandboxName: sandbox-webapp
```

Network isolation modes

By default, each sandbox namespace is network-isolated from all other namespaces. This prevents cross-contamination between concurrent test runs.

For multi-namespace applications where components need to reach each other, use the networkIsolation field with mode group.

```yaml
spec:
  networkIsolation:
    mode: group
```

In group mode, all sandbox namespaces created for the same RestoreTest run can communicate with each other. Traffic from outside the group (other test runs, production namespaces) remains blocked.

Available modes

| Mode | Behavior |
|---|---|
| `group` | Sandbox namespaces within the same RestoreTest run can communicate |
| `strict` (default) | Each sandbox is fully isolated; no cross-namespace traffic |

group mode is implemented via Kubernetes NetworkPolicy. Kymaros creates a policy that selects pods across all sandbox namespaces in the run using a shared label (kymaros.io/run-id).
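For intuition, a group-mode policy could look something like the sketch below. This is illustrative only: Kymaros generates the real policy itself, and the policy name and run-id value here are assumptions, not documented output.

```yaml
# Illustrative sketch only — Kymaros generates the actual NetworkPolicy.
# Assumes each sandbox namespace carries the label kymaros.io/run-id: <run-id>.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kymaros-group-isolation        # hypothetical name
  namespace: sandbox-backend
spec:
  podSelector: {}                      # applies to every pod in this sandbox namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        # admit traffic only from namespaces belonging to the same run
        - namespaceSelector:
            matchLabels:
              kymaros.io/run-id: "run-20240801-abc123"   # example run id
```

Because the `namespaceSelector` matches on the shared run-id label, pods in `sandbox-db`, `sandbox-backend`, and `sandbox-frontend` can reach each other, while traffic from any other namespace is denied.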


DNS rewriting

When a workload in app-backend connects to a database using the in-cluster DNS name postgres.app-db.svc.cluster.local, that name resolves to the production namespace — not the sandbox.

DNS rewriting redirects these in-cluster service references to the corresponding sandbox namespace at runtime, so postgres.app-db.svc.cluster.local resolves as postgres.sandbox-db.svc.cluster.local without requiring application configuration changes.

Status: DNS rewriting is a planned feature targeted at the Pro tier. It is not available in the current release. Without DNS rewriting, cross-namespace service references in restored workloads will resolve to production services. To avoid this, either:

  1. Use group mode and override cross-namespace references in the restored workloads with environment variables.
  2. Deploy the application with configurable service endpoints (12-factor style) so that sandboxed instances point at sandboxed dependencies.
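As a sketch of option 2, the backend could read its database host from an environment variable rather than hard-coding the production DNS name, so the restored copy can be pointed at the sandbox service. The variable name, image, and manifest below are illustrative assumptions, not part of the Kymaros API.

```yaml
# Illustrative: the app reads DB_HOST instead of hard-coding
# postgres.app-db.svc.cluster.local, so a sandboxed instance can
# be pointed at the sandboxed database.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      containers:
        - name: api-server
          image: example.com/api-server:1.0          # placeholder image
          env:
            - name: DB_HOST                          # hypothetical variable the app honors
              value: postgres.sandbox-db.svc.cluster.local
```

In production, the same variable would be set to `postgres.app-db.svc.cluster.local`; only the environment differs between the two deployments.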

Complete multi-namespace example

The following RestoreTest restores a three-namespace application stack, enables group networking so the components can reach each other, and runs checks against each tier.

```yaml
apiVersion: restore.kymaros.io/v1alpha1
kind: RestoreTest
metadata:
  name: myapp-full-stack
  namespace: kymaros-system
spec:
  schedule: "0 2 * * *"
  backupSource:
    name: myapp-backup
    namespaces:
      - name: app-db
        sandboxName: sandbox-db
      - name: app-backend
        sandboxName: sandbox-backend
      - name: app-frontend
        sandboxName: sandbox-frontend
  networkIsolation:
    mode: group
  checks:
    - name: db-resources
      type: resourceExists
      resourceExists:
        resources:
          - kind: PVC
            name: postgres-data
          - kind: Secret
            name: db-credentials

    - name: db-pod-ready
      type: podStatus
      podStatus:
        labelSelector:
          app: postgres
        minReady: 1
        timeout: 10m

    - name: db-accepting-queries
      type: exec
      exec:
        podSelector:
          app: postgres
        container: postgres
        command:
          - pg_isready
          - -U
          - postgres
        successExitCode: 0
        timeout: 30s

    - name: backend-pods-ready
      type: podStatus
      podStatus:
        labelSelector:
          app: api-server
        minReady: 2
        timeout: 5m

    - name: backend-health
      type: httpGet
      httpGet:
        service: api-server-svc
        port: 8080
        path: /healthz
        expectedStatus: 200
        timeout: 15s
        retries: 3

    - name: frontend-pod-ready
      type: podStatus
      podStatus:
        labelSelector:
          app: frontend
        minReady: 1
        timeout: 3m

    - name: frontend-port
      type: tcpSocket
      tcpSocket:
        service: frontend-svc
        port: 3000
        timeout: 10s
```

Limitations

Cross-namespace exec: The exec check's podSelector matches pods in the sandbox namespace for the first namespace in the namespaces list. Targeting a pod in a specific sandbox namespace is not currently supported by selector alone — ensure the pod label set is unique across all sandbox namespaces in a run.
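One way to keep labels unambiguous is to include a tier-specific label in the selector, as in the sketch below. The `tier: database` label is an assumption about how the restored pods are labeled; the rest of the check fields mirror the example above.

```yaml
# Illustrative: a tier label that appears only on the database pods
# keeps the exec podSelector unambiguous across sandbox namespaces.
checks:
  - name: db-accepting-queries
    type: exec
    exec:
      podSelector:
        app: postgres
        tier: database        # assumed label unique to the db pods
      container: postgres
      command: ["pg_isready", "-U", "postgres"]
      successExitCode: 0
      timeout: 30s
```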

Namespace ordering: Namespaces are restored in the order they appear in backupSource.namespaces. Place stateful dependencies (databases) before stateless tiers (applications) so that upstream stores are ready before application pods attempt connections.

Sandbox cleanup: After a test run completes, all sandbox namespaces are deleted. Resources created by health checks (for example, exec sessions) are cleaned up with the namespace. Sandbox lifetime can be extended for debugging using .spec.sandbox.keepOnFailure: true if your operator build supports that field.
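If your operator build does support that field, enabling it is a small addition to the spec. Verify against your version before relying on it:

```yaml
# Only meaningful if your operator build supports this field.
spec:
  sandbox:
    keepOnFailure: true     # retain sandbox namespaces after a failed run
```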

Concurrent runs: Two RestoreTest resources restoring the same source namespace simultaneously will produce separate sandbox namespaces and do not interfere with each other. The run-id label ensures NetworkPolicies are scoped per run.