Install on Azure

Materialize provides a set of modular Terraform modules that deploy all services required to run Materialize on Azure. The modules are intended as simple, working examples of how to deploy Materialize: you can use them as-is, or take individual modules from the example and integrate them with your existing DevOps tooling.

Self-managed Materialize requires: a Kubernetes (v1.31+) cluster; PostgreSQL as a metadata database; blob storage; and a license key. The example on this page deploys a complete Materialize environment on Azure using the modular Terraform setup from the MaterializeInc/materialize-terraform-self-managed repository.

WARNING!

The Terraform modules used in this tutorial are intended for evaluation/demonstration purposes and as a template for building your own production deployment. Do not rely on the modules directly for production deployments: future releases of the modules will contain breaking changes. Instead, to use them as a starting point for your own production deployment, either:

  • Fork the repo and pin to a specific version; or

  • Use the code as a reference when developing your own deployment.

What Gets Created

This example provisions the following infrastructure:

Resource Group

  • Resource Group: A new resource group that contains all resources.

Networking

  • Virtual Network: 20.0.0.0/16 address space.
  • AKS Subnet: 20.0.0.0/20, with a NAT Gateway association and service endpoints for Storage and SQL.
  • PostgreSQL Subnet: 20.0.16.0/24, delegated to PostgreSQL Flexible Server.
  • NAT Gateway: Standard SKU with a static public IP for outbound connectivity.
  • Private DNS Zone: Resolves the PostgreSQL private endpoint, with a VNet link.

Compute

  • AKS Cluster: Version 1.32 with Cilium networking (network plugin: azure; data plane: cilium; network policy: cilium).
  • Default Node Pool: Standard_D4pds_v6 VMs, autoscaling from 2 to 5 nodes, labeled for generic workloads.
  • Materialize Node Pool: Standard_E4pds_v6 VMs with 100 GB disks, autoscaling from 2 to 5 nodes, swap enabled, and dedicated taints for Materialize workloads.
  • Managed Identities: An AKS cluster identity (used by the AKS control plane to provision Azure resources such as load balancers and network interfaces) and a workload identity (used by Materialize pods for secure, passwordless authentication to Azure Storage).
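
Once the cluster is up (Step 3 below), you can inspect the node pools, labels, and taints described above with kubectl; agentpool here is the standard AKS node-pool label:

kubectl get nodes --label-columns=agentpool
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints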

Database

  • Azure PostgreSQL Flexible Server: Version 15.
  • SKU: GP_Standard_D2s_v3 (2 vCores, 8 GB memory).
  • Storage: 32 GB with 7-day backup retention.
  • Network Access: Private access only; public network access is disabled (no public endpoint).
  • Database: A materialize database is pre-created.

Storage

  • Storage Account: Premium BlockBlobStorage with LRS replication, used for Materialize persistence.
  • Container: A materialize blob container.
  • Access Control: Workload identity federation for the Kubernetes service account (passwordless authentication via OIDC).
  • Network Access: Currently allows all traffic (production deployments should restrict access to the AKS subnet only; one way to do this is shown below).
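
For example, one way to restrict the storage account to the AKS subnet with the Azure CLI, using placeholder resource names (take the real names from your Terraform output):

# Deny traffic by default, then allow only the AKS subnet (placeholder names).
az storage account update --resource-group <resource-group> --name <storage-account> --default-action Deny
az storage account network-rule add --resource-group <resource-group> --account-name <storage-account> --subnet <aks-subnet-resource-id>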

Kubernetes Add-ons

  • cert-manager: Certificate management controller for Kubernetes that automates TLS certificate provisioning and renewal.
  • Self-signed ClusterIssuer: Issues the self-signed TLS certificates that the Materialize instance uses for secure inter-component communication (balancerd, Console).

Materialize

  • Operator: The Materialize Kubernetes operator, deployed in the materialize namespace.
  • Instance: A single Materialize instance, deployed in the materialize-environment namespace.
  • Load Balancers: Internal Azure Load Balancers for Materialize access, exposing:
      • Port 6875: SQL connections to the database.
      • Port 6876: HTTP(S) connections to the database.
      • Port 8080: HTTP(S) connections to the Materialize Console.

Prerequisites

Azure Account Requirements

An active Azure subscription with appropriate permissions to create:

  • AKS clusters
  • Azure PostgreSQL Flexible Server instances
  • Storage accounts
  • Virtual networks and networking resources
  • Managed identities and role assignments
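
One way to review the roles assigned to your signed-in user before you start:

az role assignment list --assignee $(az ad signed-in-user show --query id --output tsv) --all --output table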

Required Tools

To follow this tutorial, you need the following tools installed locally:

  • Terraform
  • Azure CLI (az)
  • kubectl
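
To verify the tools are available:

terraform version
az version
kubectl version --client
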
License Key

  • Community, new deployments: To get a license key, sign up with Materialize.
  • Community, existing deployments: Contact Materialize support.
  • Enterprise, new deployments: Visit https://materialize.com/self-managed/enterprise-license/ to purchase an Enterprise license.
  • Enterprise, existing deployments: Contact Materialize support.

Getting started: Simple example

Step 1: Set Up the Environment

  1. Open a terminal window.

  2. Clone the Materialize Terraform repository and go to the azure/examples/simple directory.

    git clone https://github.com/MaterializeInc/materialize-terraform-self-managed.git
    cd materialize-terraform-self-managed/azure/examples/simple
    
  3. Authenticate with Azure.

    az login
    

    The command opens a browser window to sign in to Azure. Sign in.

  4. Select the subscription and tenant to use. After you sign in, the terminal displays your tenant and subscription information:

    Retrieving tenants and subscriptions for the selection...
    
    [Tenant and subscription selection]
    
    No     Subscription name    Subscription ID                       Tenant
    -----  -------------------  ------------------------------------  ----------------
    [1]*   ...                  ...                                   ...
    
    The default is marked with an *; the default tenant is '<Tenant>' and
    subscription is '<Subscription Name>' (<Subscription ID>).
    

     Select the subscription and tenant. If you later need to switch subscriptions, you can do so from the command line, as shown below.
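
     To set a different subscription without signing in again:

    az account set --subscription "<subscription-id>"
    az account show --output table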

Step 2: Configure Terraform Variables

  1. Create a terraform.tfvars file with the following variables:

    • subscription_id: Azure subscription ID
    • resource_group_name: Name for the resource group to create (e.g. mz-demo-rg)
    • name_prefix: Prefix for all resource names (e.g., simple-demo)
    • location: Azure region for deployment (e.g., westus2)
    • license_key: Materialize license key
    • tags: Map of tags to apply to resources

     For example:

    subscription_id     = "your-subscription-id"
    resource_group_name = "mz-demo-rg"
    name_prefix         = "simple-demo"
    location            = "westus2"
    license_key         = "your-materialize-license-key"
    tags = {
      environment = "demo"
    }
    
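
     If you prefer not to store the license key on disk, Terraform also reads any variable from a TF_VAR_-prefixed environment variable:

    export TF_VAR_license_key="your-materialize-license-key"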

Step 3: Apply the Terraform

  1. Initialize the Terraform directory to download the required providers and modules:

    terraform init
    
  2. Apply the Terraform configuration to create the infrastructure.

    • To deploy with the default internal load balancer for Materialize access:
    terraform apply
    
    • To deploy with a public load balancer for Materialize access:
    terraform apply -var="internal=false"
    

    If you are satisfied with the planned changes, type yes when prompted to proceed.
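
     Alternatively, to review the plan before applying it, you can save it to a file first:

    terraform plan -out=tfplan
    terraform apply tfplan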

  3. From the Terraform output, you will need the following fields to connect:

    • console_load_balancer_ip: IP address for the Materialize Console.
    • balancerd_load_balancer_ip: IP address for PostgreSQL-compatible clients/drivers.

     To print a single output value:

    terraform output -raw <field_name>
    
    💡 Tip: Your shell may show an ending marker (such as %) because the output did not end with a newline. Do not include the marker when using the value.
  4. Configure kubectl to connect to your cluster, replacing:

    • <your-resource-group-name> with your resource group name; i.e., the resource_group_name in the Terraform output or in the terraform.tfvars file.

    • <your-aks-cluster-name> with your cluster name; i.e., the aks_cluster_name in the Terraform output. For this example, the cluster name has the form {name_prefix}-aks; e.g., simple-demo-aks.

    # az aks get-credentials --resource-group <your-resource-group-name> --name <your-aks-cluster-name>
    az aks get-credentials --resource-group $(terraform output -raw resource_group_name) --name $(terraform output -raw aks_cluster_name)
    
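
     To confirm that kubectl now points at the new cluster:

    kubectl cluster-info
    kubectl get nodes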

Step 4: Verify the Deployment (Optional)

  1. Check the status of your deployment:

    To check the status of the Materialize operator, which runs in the materialize namespace:

    kubectl -n materialize get all
    

    To check the status of the Materialize instance, which runs in the materialize-environment namespace:

    kubectl -n materialize-environment get all
    
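
     It can take a few minutes for all components to become ready. To block until the Materialize instance pods are ready (five-minute timeout):

    kubectl -n materialize-environment wait --for=condition=Ready pods --all --timeout=300s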

     If you run into an error during deployment, refer to the Troubleshooting guide.

Step 5: Connect to Materialize

NOTE: If you are using an internal Azure Load Balancer for Materialize access, you can connect only from inside the same VNet or from networks that are privately connected to it (for example, via VNet peering or VPN).

Connect using the Materialize Console

Using the console_load_balancer_ip from the Terraform output, you can connect to Materialize via the Materialize Console.

To connect to the Materialize Console, open a browser to https://<console_load_balancer_ip>:8080, substituting your <console_load_balancer_ip>.

From the terminal, you can type:

open "https://$(terraform output -raw console_load_balancer_ip):8080/materialize"
💡 Tip: The example uses a self-signed ClusterIssuer. As such, you may encounter a warning with regards to the certificate. In production, run with certificates from an official Certificate Authority (CA) rather than self-signed certificates.
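
If you just want to confirm that the endpoint is reachable from the terminal, you can tell curl to skip verification of the self-signed certificate:

curl -k -I "https://$(terraform output -raw console_load_balancer_ip):8080"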

Connect using psql

Using the balancerd_load_balancer_ip value from the Terraform output, you can connect to Materialize via PostgreSQL-compatible clients/drivers, such as psql:

psql "postgres://$(terraform output -raw balancerd_load_balancer_ip):6875/materialize"
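
Once connected, you can run a quick sanity check; for example, Materialize's built-in mz_version() function reports the running version:

psql "postgres://$(terraform output -raw balancerd_load_balancer_ip):6875/materialize" -c "SELECT mz_version();"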

Customizing Your Deployment

💡 Tip: To reduce cost in your demo environment, you can tweak VM sizes and database tiers in main.tf.

You can customize each Terraform module independently.

NOTE: Autoscaling uses Azure's native cluster autoscaler, which integrates directly with Azure Virtual Machine Scale Sets for automated node scaling. In the future, we plan to enhance this by adopting karpenter-provider-azure.

Cleanup

To delete the entire sample infrastructure and deployment (including the Materialize operator, Materialize instances, and their data), run the following from the Terraform directory:

terraform destroy

When prompted to proceed, type yes to confirm the deletion.
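
To confirm that the resource group was removed (replace mz-demo-rg with your resource_group_name value):

az group exists --name mz-demo-rg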
