Install on AWS

Materialize provides a set of modular Terraform modules that deploy all the services Materialize requires to run on AWS. The modules are intended as a simple set of examples of how to deploy Materialize: use them as is, or take individual modules from the examples and integrate them with your existing DevOps tooling.

Self-managed Materialize requires a Kubernetes (v1.31+) cluster, PostgreSQL as a metadata database, blob storage, and a license key. The example on this page deploys a complete Materialize environment on AWS using the modular Terraform setup from the materialize-terraform-self-managed repository.

WARNING!

The Terraform modules used in this tutorial are intended for evaluation/demonstration purposes and for serving as a template when building your own production deployment. The modules should not be directly relied upon for production deployments: future releases of the modules will contain breaking changes. Instead, to use as a starting point for your own production deployment, either:

  • Fork the repo and pin to a specific version; or

  • Use the code as a reference when developing your own deployment.

What Gets Created

This example provisions the following infrastructure:

Networking

  • VPC: 10.0.0.0/16 with DNS hostnames and DNS support enabled
  • Subnets: 3 private subnets (10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24) and 3 public subnets (10.0.101.0/24, 10.0.102.0/24, 10.0.103.0/24) across availability zones us-east-1a, us-east-1b, and us-east-1c
  • NAT Gateway: a single NAT Gateway shared by all private subnets
  • Internet Gateway: provides public subnet connectivity

Compute

  • EKS Cluster: version 1.32 with CloudWatch control plane logging (API, audit)
  • Base Node Group: 2 nodes (t4g.medium) for Karpenter and CoreDNS
  • Karpenter: auto-scaling controller with two node classes: a generic node pool (t4g.xlarge instances for general workloads) and a Materialize node pool (r7gd.2xlarge instances with swap enabled and dedicated taints for Materialize instance workloads)

Database

  • RDS PostgreSQL: version 15, db.t3.large instance
  • Storage: 50 GB allocated, autoscaling up to 100 GB
  • Deployment: single-AZ (non-production configuration)
  • Backups: 7-day retention
  • Security: dedicated security group allowing access from the EKS cluster and nodes

Storage

  • S3 Bucket: dedicated bucket for Materialize persistence
  • Encryption: disabled (for testing; enable in production)
  • Versioning: disabled (for testing; enable in production)
  • IAM Role: IRSA role for Kubernetes service account access

Kubernetes Add-ons

  • AWS Load Balancer Controller: manages Network Load Balancers
  • cert-manager: certificate management controller for Kubernetes that automates TLS certificate provisioning and renewal
  • Self-signed ClusterIssuer: provides self-signed TLS certificates for secure internal communication between Materialize instance components (balancerd, Console)

Materialize

  • Operator: Materialize Kubernetes operator in the materialize namespace
  • Instance: single Materialize instance in the materialize-environment namespace
  • Network Load Balancer: dedicated internal NLB for Materialize access, with listeners on port 6875 (SQL connections to the database), port 6876 (HTTP(S) connections to the database), and port 8080 (HTTP(S) connections to the Materialize Console)

Prerequisites

AWS Account Requirements

An active AWS account with appropriate permissions to create:

  • EKS clusters
  • RDS instances
  • S3 buckets
  • VPCs and networking resources
  • IAM roles and policies

Required Tools

The walkthrough below uses the following command-line tools:

  • Terraform
  • AWS CLI
  • kubectl

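As a quick sanity check, the loop below confirms that Terraform, the AWS CLI, and kubectl are on your PATH. This is a minimal sketch, not part of the official setup:

```shell
# Minimal sketch: verify the CLI tools used in this walkthrough are installed.
missing=0
for tool in terraform aws kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
    missing=1
  fi
done
# $missing is 1 if any tool was not found.
```
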
License Key

A license key is required to run Self-Managed Materialize. To get one:

  • Community license, new deployments: Get a license key from the Materialize website.
  • Community license, existing deployments: Contact Materialize support.
  • Enterprise license, new deployments: Visit https://materialize.com/self-managed/enterprise-license/ to purchase an Enterprise license.
  • Enterprise license, existing deployments: Contact Materialize support.

Getting started: Simple example

Step 1: Set Up the Environment

  1. Open a terminal window.

  2. Clone the Materialize Terraform repository and go to the aws/examples/simple directory.

    git clone https://github.com/MaterializeInc/materialize-terraform-self-managed.git
    cd materialize-terraform-self-managed/aws/examples/simple
    
  3. Ensure your AWS CLI is configured with the appropriate profile. Substitute <your-aws-profile> with the profile to use:

    # Set your AWS profile for the session
    export AWS_PROFILE=<your-aws-profile>
    

Step 2: Configure Terraform Variables

  1. Create a terraform.tfvars file with the following variables:

    • name_prefix: Prefix for all resource names (e.g., simple-demo)
    • aws_region: AWS region for deployment (e.g., us-east-1)
    • aws_profile: AWS CLI profile to use
    • license_key: Materialize license key
    • tags: Map of tags to apply to resources

    For example:

    name_prefix = "simple-demo"
    aws_region  = "us-east-1"
    aws_profile = "your-aws-profile"
    license_key = "your-materialize-license-key"
    tags = {
      environment = "demo"
    }
    
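If you prefer to script this step, the same file can be generated with a heredoc. The values below are the same placeholders as above; substitute your own:

```shell
# Write terraform.tfvars with placeholder values; replace them with your own.
cat > terraform.tfvars <<'EOF'
name_prefix = "simple-demo"
aws_region  = "us-east-1"
aws_profile = "your-aws-profile"
license_key = "your-materialize-license-key"

tags = {
  environment = "demo"
}
EOF

echo "terraform.tfvars written"
```
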

Step 3: Apply the Terraform

  1. Initialize the Terraform directory to download the required providers and modules:

    terraform init
    
  2. Apply the Terraform configuration to create the infrastructure.

    • To deploy with the default internal NLB for Materialize access:

      terraform apply
      
    • To deploy with public NLB for Materialize access:

      terraform apply -var="internal=false"
      

    If you are satisfied with the planned changes, type yes when prompted to proceed.

  3. From the output, you will need the following fields to connect using the Materialize Console and PostgreSQL-compatible clients/drivers:

    • nlb_dns_name
    • external_login_password_mz_system

    To print a field value, run:

    terraform output -raw <field_name>

    💡 Tip: Your shell may show an ending marker (such as %) because the output did not end with a newline. Do not include the marker when using the value.
  4. Configure kubectl to connect to your cluster, replacing:

    • <your-eks-cluster-name> with your cluster name; i.e., the eks_cluster_name in the Terraform output. For this example, the cluster name has the form {name_prefix}-eks; e.g., simple-demo-eks.

    • <your-region> with the region of your cluster. Your region can be found in your terraform.tfvars file; e.g., us-east-1.

    # aws eks update-kubeconfig --name <your-eks-cluster-name> --region <your-region>
    aws eks update-kubeconfig --name $(terraform output -raw eks_cluster_name) --region <your-region>
    
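Before moving on, it can be convenient to capture the outputs from this step in shell variables and derive the endpoints used later. A sketch, with placeholder values standing in for the real terraform output -raw results:

```shell
# Placeholders standing in for:
#   $(terraform output -raw nlb_dns_name)
#   $(terraform output -raw external_login_password_mz_system)
NLB_DNS_NAME="simple-demo-nlb.example.elb.us-east-1.amazonaws.com"
MZ_SYSTEM_PASSWORD="example-password"

# Endpoints derived from the NLB DNS name:
CONSOLE_URL="https://${NLB_DNS_NAME}:8080"
SQL_URL="postgres://mz_system@${NLB_DNS_NAME}:6875/materialize"
echo "$CONSOLE_URL"
echo "$SQL_URL"
```
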

Step 4: Verify the Deployment (Optional)

  1. Check the status of your deployment:

    To check the status of the Materialize operator, which runs in the materialize namespace:

    kubectl -n materialize get all
    

    To check the status of the Materialize instance, which runs in the materialize-environment namespace:

    kubectl -n materialize-environment get all
    

    If you run into an error during deployment, refer to the Troubleshooting guide.
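Pods can take a few minutes to become Ready after terraform apply, so the status checks above may need to be repeated. A generic retry helper like the following (a sketch; the kubectl example in the comment assumes the kubeconfig set up in Step 3) can poll until a check passes:

```shell
# Generic retry helper: run a command until it succeeds or attempts run out.
retry() {
  attempts=$1
  shift
  i=1
  until "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    i=$((i + 1))
    sleep 1
  done
}

# Example (assumes kubectl is configured for your cluster):
#   retry 30 kubectl -n materialize-environment wait pod --all --for=condition=Ready --timeout=10s
retry 3 true && echo "command succeeded"
```
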

Step 5: Connect to Materialize

Using the nlb_dns_name and external_login_password_mz_system values from the Terraform output, you can connect to Materialize via the Materialize Console or PostgreSQL-compatible tools/drivers using the following ports:

  • 6875: SQL connections to the database
  • 6876: HTTP(S) connections to the database
  • 8080: HTTP(S) connections to the Materialize Console

NOTE: If using an internal Network Load Balancer (NLB) for your Materialize access, you can connect only from inside the same VPC or from networks that are privately connected to it.
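As an illustration of port 6876, Materialize serves a SQL-over-HTTP endpoint at /api/sql. The sketch below uses a placeholder host and password and assumes HTTP basic authentication; it only prints the curl invocation rather than running it (-k accepts the example's self-signed certificate):

```shell
# Placeholders; substitute your Terraform output values.
NLB_DNS_NAME="simple-demo-nlb.example.elb.us-east-1.amazonaws.com"
MZ_SYSTEM_PASSWORD="example-password"

# Print the command rather than executing it; drop `echo` to run it for real.
echo curl -k -s -u "mz_system:${MZ_SYSTEM_PASSWORD}" \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT 1"}' \
  "https://${NLB_DNS_NAME}:6876/api/sql"
```
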

Connect to the Materialize Console

  1. To connect to the Materialize Console, open a browser to https://<nlb_dns_name>:8080, substituting your <nlb_dns_name>.

    From the terminal, you can type:

    open "https://$(terraform output -raw nlb_dns_name):8080/materialize"
    
    💡 Tip: The example uses a self-signed ClusterIssuer, so your browser may warn about the certificate. In production, use certificates from an official Certificate Authority (CA) rather than self-signed certificates.
  2. Log in as mz_system, using external_login_password_mz_system as the password.

  3. Create new users and log out.

    In general, other than the initial login to create new users for a new deployment, avoid using mz_system, since mz_system is also used by the Materialize Operator for upgrades and maintenance tasks.

    For more information, see the Self-Managed Materialize documentation on authentication and authorization.

  4. Log in as one of the newly created users.

Connect using psql

  1. To connect using psql, in the connection string, specify:

    • mz_system as the user
    • Your <nlb_dns_name> as the host
    • 6875 as the port

    psql "postgres://mz_system@$(terraform output -raw nlb_dns_name):6875/materialize"
    

    When prompted for the password, enter the external_login_password_mz_system value.

  2. Create new users and log out.

    In general, other than the initial login to create new users for a new deployment, avoid using mz_system, since mz_system is also used by the Materialize Operator for upgrades and maintenance tasks.

    For more information, see the Self-Managed Materialize documentation on authentication and authorization.

  3. Log in as one of the newly created users.
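The psql login above can also be scripted: psql, like other libpq-based tools, reads the password from the PGPASSWORD environment variable, avoiding the interactive prompt. A sketch with placeholder values (mz_version() is Materialize's version function):

```shell
# Placeholders; substitute your Terraform output values.
NLB_DNS_NAME="simple-demo-nlb.example.elb.us-east-1.amazonaws.com"
export PGPASSWORD="example-password"   # external_login_password_mz_system

# Print the command rather than executing it; drop `echo` to run it for real.
echo psql "postgres://mz_system@${NLB_DNS_NAME}:6875/materialize" -c "SELECT mz_version();"
```
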

Customizing Your Deployment

💡 Tip: To reduce cost in your demo environment, you can tweak subnet CIDRs and instance types in main.tf.

You can customize each Terraform module independently.

Cleanup

To delete the whole sample infrastructure and deployment (including the Materialize operator, Materialize instances, and their data), run the following from the Terraform directory:

terraform destroy

When prompted to proceed, type yes to confirm the deletion.
