Install on Azure
Materialize provides a set of modular Terraform modules for deploying all of the services Materialize requires to run on Azure. The modules are intended as a simple set of examples of how to deploy Materialize: use them as is, or take individual modules from the example and integrate them with your existing DevOps tooling.
Self-managed Materialize requires: a Kubernetes (v1.31+) cluster; PostgreSQL as a metadata database; blob storage; and a license key. The example on this page deploys a complete Materialize environment on Azure using the modular Terraform setup from the materialize-terraform-self-managed repository.
The Terraform modules used in this tutorial are intended for evaluation/demonstration purposes and for serving as a template when building your own production deployment. The modules should not be directly relied upon for production deployments: future releases of the modules will contain breaking changes. Instead, to use as a starting point for your own production deployment, either:
- Fork the repo and pin to a specific version; or
- Use the code as a reference when developing your own deployment.
What Gets Created
This example provisions the following infrastructure:
Resource Group
| Resource | Description |
|---|---|
| Resource Group | New resource group to contain all resources |
Networking
| Resource | Description |
|---|---|
| Virtual Network | 20.0.0.0/16 address space |
| AKS Subnet | 20.0.0.0/20 with NAT Gateway association and service endpoints for Storage and SQL |
| PostgreSQL Subnet | 20.0.16.0/24 delegated to PostgreSQL Flexible Server |
| NAT Gateway | Standard SKU with static public IP for outbound connectivity |
| Private DNS Zone | For PostgreSQL private endpoint resolution with VNet link |
Compute
| Resource | Description |
|---|---|
| AKS Cluster | Version 1.32 with Cilium networking (network plugin: azure, data plane: cilium, policy: cilium) |
| Default Node Pool | Standard_D4pds_v6 VMs, autoscaling 2-5 nodes, labeled for generic workloads |
| Materialize Node Pool | Standard_E4pds_v6 VMs with 100GB disk, autoscaling 2-5 nodes, swap enabled, dedicated taints for Materialize workloads |
| Managed Identities | AKS cluster identity (used by AKS control plane to provision Azure resources like load balancers and network interfaces) and Workload identity (used by Materialize pods for secure, passwordless authentication to Azure Storage) |
Database
| Resource | Description |
|---|---|
| Azure PostgreSQL Flexible Server | Version 15 |
| SKU | GP_Standard_D2s_v3 (2 vCores, 4GB memory) |
| Storage | 32GB with 7-day backup retention |
| Network Access | Private access only (public network access disabled; no public endpoint) |
| Database | materialize database pre-created |
Storage
| Resource | Description |
|---|---|
| Storage Account | Premium BlockBlobStorage with LRS replication for Materialize persistence |
| Container | materialize blob container |
| Access Control | Workload Identity federation for Kubernetes service account (passwordless authentication via OIDC) |
| Network Access | Currently allows |
Kubernetes Add-ons
| Resource | Description |
|---|---|
| cert-manager | Certificate management controller for Kubernetes that automates TLS certificate provisioning and renewal |
| Self-signed ClusterIssuer | Provides self-signed TLS certificates for Materialize instance internal communication (balancerd, console). Used by the Materialize instance for secure inter-component communication. |
Materialize
| Resource | Description |
|---|---|
| Operator | Materialize Kubernetes operator in the materialize namespace |
| Instance | Single Materialize instance in the materialize-environment namespace |
| Load Balancers | Internal Azure Load Balancers for Materialize access |
Prerequisites
Azure Account Requirements
An active Azure subscription with appropriate permissions to create:
- AKS clusters
- Azure PostgreSQL Flexible Server instances
- Storage accounts
- Virtual networks and networking resources
- Managed identities and role assignments
Required Tools
The tutorial uses the following command-line tools:

- Terraform CLI (to provision the infrastructure)
- Azure CLI (`az`) (to authenticate and fetch AKS credentials)
- `kubectl` (to inspect the AKS cluster)
- Optionally, `psql` (to connect to Materialize)
License Key
| License key type | Deployment type | Action |
|---|---|---|
| Community | New deployments | To get a license key: |
| Community | Existing deployments | Contact Materialize support. |
| Enterprise | New deployments | Visit https://materialize.com/self-managed/enterprise-license/ to purchase an Enterprise license. |
| Enterprise | Existing deployments | Contact Materialize support. |
Getting started: Simple example
The Terraform modules used in this tutorial are intended for evaluation/demonstration purposes and for serving as a template when building your own production deployment. The modules should not be directly relied upon for production deployments: future releases of the modules will contain breaking changes. Instead, to use as a starting point for your own production deployment, either:
- Fork the repo and pin to a specific version; or
- Use the code as a reference when developing your own deployment.
Step 1: Set Up the Environment
- Open a terminal window.

- Clone the Materialize Terraform repository and go to the `azure/examples/simple` directory:

  ```shell
  git clone https://github.com/MaterializeInc/materialize-terraform-self-managed.git
  cd materialize-terraform-self-managed/azure/examples/simple
  ```

- Authenticate with Azure:

  ```shell
  az login
  ```

  The command opens a browser window to sign in to Azure. Sign in.

- Select the subscription and tenant to use. After you have signed in, your tenant and subscription information is displayed back in the terminal:

  ```
  Retrieving tenants and subscriptions for the selection...

  [Tenant and subscription selection]

  No     Subscription name    Subscription ID                       Tenant
  -----  -------------------  ------------------------------------  ----------------
  [1] *  ...                  ...                                   ...

  The default is marked with an *; the default tenant is '<Tenant>' and
  subscription is '<Subscription Name>' (<Subscription ID>).
  ```

  Select the subscription and tenant to use.
Step 2: Configure Terraform Variables
- Create a `terraform.tfvars` file with the following variables:

  - `subscription_id`: Azure subscription ID
  - `resource_group_name`: Name for the resource group to create (e.g., `mz-demo-rg`)
  - `name_prefix`: Prefix for all resource names (e.g., `simple-demo`)
  - `location`: Azure region for deployment (e.g., `westus2`)
  - `license_key`: Materialize license key
  - `tags`: Map of tags to apply to resources

  ```hcl
  subscription_id     = "your-subscription-id"
  resource_group_name = "mz-demo-rg"
  name_prefix         = "simple-demo"
  location            = "westus2"
  license_key         = "your-materialize-license-key"

  tags = {
    environment = "demo"
  }
  ```
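As an alternative to editing the file by hand, the `terraform.tfvars` file can be generated by a small script. This is a minimal sketch; the `SUBSCRIPTION_ID` and `LICENSE_KEY` values below are placeholders that you must replace with your own:

```shell
# Sketch: generate terraform.tfvars from shell variables.
# The subscription ID and license key below are placeholder values.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
LICENSE_KEY="your-materialize-license-key"

cat > terraform.tfvars <<EOF
subscription_id     = "${SUBSCRIPTION_ID}"
resource_group_name = "mz-demo-rg"
name_prefix         = "simple-demo"
location            = "westus2"
license_key         = "${LICENSE_KEY}"

tags = {
  environment = "demo"
}
EOF

echo "Wrote terraform.tfvars"
```

This keeps sensitive values (the license key in particular) out of version-controlled files if you source them from your environment instead of hard-coding them.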
Step 3: Apply the Terraform
- Initialize the Terraform directory to download the required providers and modules:

  ```shell
  terraform init
  ```

- Apply the Terraform configuration to create the infrastructure.

  - To deploy with the default internal load balancers for Materialize access:

    ```shell
    terraform apply
    ```

  - To deploy with public load balancers for Materialize access:

    ```shell
    terraform apply -var="internal=false"
    ```

  If you are satisfied with the planned changes, type `yes` when prompted to proceed.

- From the output, you will need the following field(s) to connect:

  - `console_load_balancer_ip` for the Materialize Console
  - `balancerd_load_balancer_ip` to connect PostgreSQL-compatible clients/drivers

  ```shell
  terraform output -raw <field_name>
  ```

  💡 Tip: Your shell may show a trailing marker (such as `%`) because the output does not end with a newline. Do not include the marker when using the value.

- Configure `kubectl` to connect to your cluster, replacing:

  - `<your-resource-group-name>` with your resource group name; i.e., the `resource_group_name` in the Terraform output or in the `terraform.tfvars` file.
  - `<your-aks-cluster-name>` with your cluster name; i.e., the `aks_cluster_name` in the Terraform output. For this example, the cluster name has the form `{name_prefix}-aks`; e.g., `simple-demo-aks`.

  ```shell
  # az aks get-credentials --resource-group <your-resource-group-name> --name <your-aks-cluster-name>
  az aks get-credentials \
    --resource-group $(terraform output -raw resource_group_name) \
    --name $(terraform output -raw aks_cluster_name)
  ```
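To avoid copying values by hand in the steps that follow, the two connection endpoints can be captured into shell variables. The IP addresses below are placeholders for illustration; in practice, run this from the Terraform directory and use the `terraform output -raw` commands shown in the comments:

```shell
# Sketch: capture the connection endpoints into shell variables.
# The IPs below are placeholder values; in practice use:
#   CONSOLE_IP=$(terraform output -raw console_load_balancer_ip)
#   BALANCERD_IP=$(terraform output -raw balancerd_load_balancer_ip)
CONSOLE_IP="203.0.113.10"
BALANCERD_IP="203.0.113.11"

CONSOLE_URL="https://${CONSOLE_IP}:8080"
echo "Console: ${CONSOLE_URL}"
echo "SQL:     ${BALANCERD_IP}:6875"
```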
Step 4: Verify the Deployment (Optional)
- Check the status of your deployment:

  To check the status of the Materialize operator, which runs in the `materialize` namespace:

  ```shell
  kubectl -n materialize get all
  ```

  To check the status of the Materialize instance, which runs in the `materialize-environment` namespace:

  ```shell
  kubectl -n materialize-environment get all
  ```

  If you run into an error during deployment, refer to the Troubleshooting guide.
Step 5: Connect to Materialize
Connect using the Materialize Console
Using the console_load_balancer_ip from the Terraform output, you can connect
to Materialize via the Materialize Console.
To connect to the Materialize Console, open a browser to
https://<console_load_balancer_ip>:8080, substituting your
<console_load_balancer_ip>.
From the terminal, you can run:

```shell
open "https://$(terraform output -raw console_load_balancer_ip):8080/materialize"
```
Connect using psql
Using the balancerd_load_balancer_ip value from the Terraform output, you can
connect to Materialize via PostgreSQL-compatible clients/drivers, such as
psql:
```shell
psql "postgres://$(terraform output -raw balancerd_load_balancer_ip):6875/materialize"
```
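The same connection URL works for any PostgreSQL-compatible driver: Materialize listens for SQL connections on port 6875, and the pre-created database is named `materialize`. A minimal sketch of assembling the URL in a script, using a placeholder IP in place of the real `balancerd_load_balancer_ip` output:

```shell
# Sketch: build the SQL connection URL from the balancerd IP.
# The IP below is a placeholder; in practice use:
#   BALANCERD_IP=$(terraform output -raw balancerd_load_balancer_ip)
BALANCERD_IP="203.0.113.11"
MZ_URL="postgres://${BALANCERD_IP}:6875/materialize"

# Only suggest invoking psql if it is installed locally:
if command -v psql >/dev/null 2>&1; then
  echo "Run: psql \"${MZ_URL}\""
fi
echo "${MZ_URL}"
```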
Customizing Your Deployment
The example's configuration lives in `main.tf`. You can customize each Terraform module independently.
- For details on the Terraform modules, see both the top-level and the Azure-specific modules.
- For details on recommended instance sizing and configuration, see the Azure deployment guide.
Cleanup
To delete the whole sample infrastructure and deployment (including the Materialize operator, Materialize instances, and their data), run from the Terraform directory:

```shell
terraform destroy
```

When prompted to proceed, type `yes` to confirm the deletion.