Upgrading to swap

Upgrades to v26 and later have swap enabled by default. To provide an upgrade path that does not disrupt existing installations, we have introduced additional labels into the node selectors for clusterd pods. Because of these new selector labels, clusterd pods will intentionally no longer schedule onto your existing nodes. You will need to take additional actions in preparation for upgrading to v26.

If you wish to opt out of swap and retain the old behavior, set operator.clusters.swap_enabled: false in your Helm values. Otherwise, continue below.
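
For example, the opt-out expressed as Helm values:

  operator:
    clusters:
      swap_enabled: false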

Upgrade preparation steps

  1. Label existing scratchfs/lgalloc node groups

    If using lgalloc on scratchfs volumes, you must add the "materialize.cloud/scratch-fs": "true" label to your existing node groups and to the nodes running Materialize workloads.

    Adding this label to the node group (or nodepool) configuration applies it to newly spawned nodes but, depending on your cloud provider, may not apply it to existing nodes.

    If the label is not applied automatically, you may need to apply it to existing nodes yourself, for example: kubectl label node <node-name> materialize.cloud/scratch-fs=true.
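
    As an illustration, if using Karpenter on AWS, labels set under a NodePool's spec.template.metadata.labels are applied to the nodes it provisions. A trimmed fragment (the NodePool name is hypothetical):

      apiVersion: karpenter.sh/v1
      kind: NodePool
      metadata:
        name: materialize-scratchfs
      spec:
        template:
          metadata:
            labels:
              materialize.cloud/scratch-fs: "true"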

  2. Modify existing scratchfs/lgalloc disk setup daemonset selector labels

    If using our ephemeral-storage-setup image as a daemonset to configure scratchfs LVM volumes for lgalloc, you must add the "materialize.cloud/scratch-fs": "true" label in several places:

    • spec.selector.matchLabels
    • spec.template.metadata.labels
    • (if using nodeAffinity) spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms
    • (if using nodeSelector) spec.template.spec.nodeSelector

    You must use at least one of nodeAffinity or nodeSelector. A trimmed manifest showing these placements follows at the end of this step.

    It is recommended to rename this daemonset to make clear that it applies only to the legacy scratchfs/lgalloc nodes (for example, rename disk-setup to disk-setup-scratchfs).
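
    As a sketch, using nodeSelector and with all unrelated fields elided, the label placements look like this:

      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: disk-setup-scratchfs
      spec:
        selector:
          matchLabels:
            materialize.cloud/scratch-fs: "true"
        template:
          metadata:
            labels:
              materialize.cloud/scratch-fs: "true"
          spec:
            nodeSelector:
              materialize.cloud/scratch-fs: "true"

    Note that spec.selector is immutable on an existing daemonset, so adding the label there in practice means recreating the daemonset, which pairs naturally with the recommended rename.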

  3. Create a new node group for swap

    1. Create a new node group (or EC2NodeClass and NodePool, if using Karpenter in AWS) using an instance type with local NVMe disks. If in GCP, the disks must be in raw mode (attached as raw block devices rather than provisioned as ephemeral storage).

    2. Label the node group with "materialize.cloud/swap": "true".

    3. If using AWS Bottlerocket AMIs (highly recommended if running in AWS), set the following in the user data to configure the disks for swap and to enable swap in the kubelet:

      # Raise the default open-file limits for containers.
      [settings.oci-defaults.resource-limits.max-open-files]
      soft-limit = 1048576
      hard-limit = 1048576

      # Bootstrap container that prepares the local NVMe disks for swap.
      [settings.bootstrap-containers.diskstrap]
      source = "docker.io/materialize/ephemeral-storage-setup-image:v0.4.0"
      mode = "once"
      essential = true
      # ["swap", "--cloud-provider", "aws", "--bottlerocket-enable-swap"]
      user-data = "WyJzd2FwIiwgIi0tY2xvdWQtcHJvdmlkZXIiLCAiYXdzIiwgIi0tYm90dGxlcm9ja2V0LWVuYWJsZS1zd2FwIl0="

      # Kernel tuning for swap-backed workloads.
      [settings.kernel.sysctl]
      "vm.swappiness" = "100"
      "vm.min_free_kbytes" = "1048576"
      "vm.watermark_scale_factor" = "100"
      
    4. If not using AWS, or not using Bottlerocket AMIs, and your node group supports it (Azure does not as of 2025-11-05), add a startup taint. This taint will be removed after the disk is configured for swap (see the disk-setup-swap daemonset in step 4); a combined Karpenter fragment follows this list.

      taints:
        - key: startup-taint.cluster-autoscaler.kubernetes.io/disk-unconfigured
          value: "true"
          effect: NoSchedule
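
    For Karpenter on AWS, the swap label and the startup taint both live on the NodePool (expressed there as startupTaints). A trimmed fragment; the names are hypothetical, and the startup taint should be omitted on Bottlerocket nodes configured via user data:

      apiVersion: karpenter.sh/v1
      kind: NodePool
      metadata:
        name: materialize-swap
      spec:
        template:
          metadata:
            labels:
              materialize.cloud/swap: "true"
          spec:
            startupTaints:
              - key: startup-taint.cluster-autoscaler.kubernetes.io/disk-unconfigured
                value: "true"
                effect: NoSchedule
            nodeClassRef:
              group: karpenter.k8s.aws
              kind: EC2NodeClass
              name: materialize-swap
            # Add requirements restricting to instance types with local NVMe disks.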
      
  4. Create a new disk-setup-swap daemonset

    If using Bottlerocket AMIs in AWS, you may skip this step: you already configured swap via user data in step 3.

    Create a new daemonset using our ephemeral-storage-setup image to configure the disks for swap and to enable swap in the kubelet.

    The arguments to the init container in this daemonset need to be configured for swap. See the examples in the linked git repository for more details.

    This daemonset should run only on the new swap nodes, so ensure it has the "materialize.cloud/swap": "true" label in several places (a sketch follows at the end of this step):

    • spec.selector.matchLabels
    • spec.template.metadata.labels
    • (if using nodeAffinity) spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms
    • (if using nodeSelector) spec.template.spec.nodeSelector

    You must use at least one of nodeAffinity or nodeSelector.

    It is recommended to name this daemonset so that it clearly indicates it is for configuring swap (for example, disk-setup-swap), as opposed to other disk configurations.
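
    A minimal sketch, assuming AWS. The "swap" and "--cloud-provider" arguments are inferred from the decoded Bottlerocket user-data above; the exact argument list, the privileged security context, the pause image, and any host volume mounts are assumptions to be checked against the repository examples:

      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: disk-setup-swap
      spec:
        selector:
          matchLabels:
            materialize.cloud/swap: "true"
        template:
          metadata:
            labels:
              materialize.cloud/swap: "true"
          spec:
            nodeSelector:
              materialize.cloud/swap: "true"
            # Tolerate the startup taint so the pod can run before the disks are configured.
            tolerations:
              - key: startup-taint.cluster-autoscaler.kubernetes.io/disk-unconfigured
                operator: Exists
                effect: NoSchedule
            initContainers:
              - name: disk-setup
                image: docker.io/materialize/ephemeral-storage-setup-image:v0.4.0
                # Assumed arguments; consult the repository examples for the full list.
                args: ["swap", "--cloud-provider", "aws"]
                securityContext:
                  privileged: true
            containers:
              # No-op container so the pod stays Running after setup completes.
              - name: pause
                image: registry.k8s.io/pause:3.9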

  5. (Optional) Configure environmentd to also use swap

    Swap is enabled by default for clusterd, but not for environmentd. If you’d like to enable swap for environmentd, add "materialize.cloud/swap": "true" to the environmentd.node_selector helm value.
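
    For example, as Helm values:

      environmentd:
        node_selector:
          materialize.cloud/swap: "true"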

  6. Upgrade the Materialize operator helm chart to v26

    The cluster size definitions for existing Materialize instances will not change at this point, but any newly created or upgraded Materialize instances will pick up the new sizes.

    Do not create new Materialize instances at versions earlier than v26, and do not perform rollouts of existing Materialize instances to versions earlier than v26.

  7. Upgrade existing Materialize instances to v26

    The new v26 pods should be scheduled onto the new swap nodes.

    You can verify that swap is enabled and working by exec'ing into a clusterd pod and running cat /sys/fs/cgroup/memory.swap.max. If the result is a number greater than 0, swap is enabled and the pod is allowed to use it.
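
    For example (pod name and namespace are placeholders):

      kubectl exec -n <namespace> <clusterd-pod-name> -- cat /sys/fs/cgroup/memory.swap.max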

  8. (Optional) Delete old scratchfs/lgalloc node groups and disk-setup-scratchfs daemonset

    If you no longer have anything running on the old scratchfs/lgalloc nodes, you may delete their node group and the disk-setup-scratchfs daemonset.
