Recipe: Provisioning Three Nodes and Deploying Kubernetes with RKE Using the Faxter Terraform Provider

Note: This example is still being tested!

If you encounter any issues running it, please contact support@faxter.com.

Overview

This recipe guides you through the process of:

  1. Provisioning Infrastructure: Creating three virtual machine (VM) nodes on the Faxter platform using Terraform.
  2. Configuring Networking: Setting up networks, routers, and security groups to enable communication between nodes and external access.
  3. Deploying Kubernetes: Installing and configuring Kubernetes on the provisioned nodes using Rancher Kubernetes Engine (RKE).

Prerequisites

Before you begin, ensure you have the following:

  • Faxter Account: Access credentials for the Faxter platform.
  • Terraform Installed: Version 1.0.0 or higher. Download from Terraform Downloads.
  • SSH Key Pair: A public/private key pair for SSH access to the VMs (see the example after this list).
  • RKE Binary: Installed on your local machine (Download RKE).
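If you don't have a key pair yet, you can generate one with ssh-keygen, e.g.:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa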

Step 1: Configure Terraform Provider

Create a providers.tf file to configure the Faxter Terraform provider.

# providers.tf

terraform {
  required_providers {
    faxter = {
      source  = "local/faxter/faxter"
      version = "0.1.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.1"
    }
  }
  required_version = ">= 1.0.0"
}

provider "faxter" {
  token = var.faxter_token
}

provider "local" {}

Step 2: Define Variables

Create a variables.tf file to manage configurable parameters.

# variables.tf

variable "faxter_token" {
  description = "API token for Faxter provider"
  type        = string
}

variable "project" {
  description = "Name of the Faxter project"
  type        = string
  default     = "default"
}

variable "ssh_key_name" {
  description = "Name of the SSH key"
  type        = string
}

variable "ssh_public_key" {
  description = "Path to the SSH public key"
  type        = string
}

variable "private_key_path" {
  description = "Path to the SSH private key"
  type        = string
}

variable "kubernetes_version" {
  description = "Version of Kubernetes to deploy"
  type        = string
  default     = "v1.24.0"
}

Step 3: Create Terraform Configuration

Create a main.tf file that includes resources for SSH keys, networks, security groups, routers, and servers.

# main.tf

# SSH Key
resource "faxter_ssh_key" "default" {
  project    = var.project
  name       = var.ssh_key_name
  public_key = file(pathexpand(var.ssh_public_key)) # pathexpand resolves a leading "~"
}

# Network
resource "faxter_network" "private" {
  project = var.project
  name    = "private-network"

  subnets {
    name = "private-subnet"
    cidr = "192.168.100.0/24"
  }
}

# Router
resource "faxter_router" "k8s_router" {
  project          = var.project
  name             = "k8s-router"
  subnets          = [faxter_network.private.subnets[0].name]
  connect_external = true
}

# Security Group
resource "faxter_security_group" "k8s_sg" {
  project = var.project
  name    = "k8s-security-group"

  # NOTE: these rules are open to the world (0.0.0.0/0); tighten
  # remote_ip_prefix for anything beyond testing.

  rules {
    # SSH access
    protocol         = "tcp"
    port_range_min   = 22
    port_range_max   = 22
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    # HTTP traffic to workloads
    protocol         = "tcp"
    port_range_min   = 80
    port_range_max   = 80
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    # DNS (cluster DNS may additionally require UDP 53)
    protocol         = "tcp"
    port_range_min   = 53
    port_range_max   = 53
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    # BGP, used by the canal/calico network plugin
    protocol         = "tcp"
    port_range_min   = 179
    port_range_max   = 179
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    # Kubernetes API server
    protocol         = "tcp"
    port_range_min   = 6443
    port_range_max   = 6443
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    # etcd client traffic
    protocol         = "tcp"
    port_range_min   = 2379
    port_range_max   = 2379
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    # etcd peer traffic
    protocol         = "tcp"
    port_range_min   = 2380
    port_range_max   = 2380
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    protocol         = "tcp"
    port_range_min   = 7080
    port_range_max   = 7080
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    # VXLAN overlay network used by canal (VXLAN runs over UDP)
    protocol         = "udp"
    port_range_min   = 8472
    port_range_max   = 8472
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    protocol         = "tcp"
    port_range_min   = 8080
    port_range_max   = 8080
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    # Node metrics (commonly node-exporter)
    protocol         = "tcp"
    port_range_min   = 9100
    port_range_max   = 9100
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    # Kubelet API
    protocol         = "tcp"
    port_range_min   = 10250
    port_range_max   = 10250
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }

  rules {
    # NodePort service range
    protocol         = "tcp"
    port_range_min   = 30000
    port_range_max   = 32767
    direction        = "ingress"
    remote_ip_prefix = "0.0.0.0/0"
  }
}

# Servers
resource "faxter_server" "k8s_node" {
  count            = 3
  project          = var.project
  name             = "k8s-node-${count.index + 1}"
  flavor           = "silver"
  image            = "Ubuntu2204"
  key_name         = faxter_ssh_key.default.name
  networks         = [faxter_network.private.name]
  security_groups  = [faxter_security_group.k8s_sg.name]

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update -y",
      "sudo apt-get install -y docker.io",
      "sudo systemctl enable docker",
      "sudo systemctl start docker",
      "sudo usermod -aG docker ubuntu",
      "sudo curl -LO https://github.com/rancher/rke/releases/download/${var.kubernetes_version}/rke_linux-amd64",
      "sudo mv rke_linux-amd64 /usr/local/bin/rke",
      "sudo chmod +x /usr/local/bin/rke"
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file(pathexpand(var.private_key_path))
      host        = self.ip_addresses[0] # Adjust based on actual IP
    }
  }
}

Explanation:

  • SSH Key: Manages the SSH key for accessing the VMs.
  • Network: Sets up a private network with a subnet.
  • Router: Connects the private network to external networks.
  • Security Group: Defines firewall rules for SSH, etcd, the Kubernetes API, the overlay network, and NodePort services.
  • Servers: Provisions three VM nodes and installs Docker on each via the remote-exec provisioner.
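Optionally, you can have Terraform print the node addresses after apply by adding an output (a minimal sketch, relying on the same ip_addresses attribute used in the provisioner above):

# outputs.tf

output "node_ips" {
  description = "Address of each Kubernetes node"
  value       = [for server in faxter_server.k8s_node : server.ip_addresses[0]]
}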

Step 4: Create RKE Configuration File

To generate the cluster.yml needed by RKE, we’ll use Terraform’s templatefile function and the local_file resource.

  1. Create cluster_template.yml.tpl

Create a file named cluster_template.yml.tpl in your Terraform configuration directory with the following content:

# cluster_template.yml.tpl

nodes:
%{ for node in nodes ~}
  - address: ${node}
    user: ubuntu
    role:
      - controlplane
      - etcd
      - worker
%{ endfor ~}
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
  kube-controller:
    cluster_cidr: 10.42.0.0/16
  scheduler: {}
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
network:
  plugin: canal
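For reference, if a node received the hypothetical address 192.168.100.11, its rendered entry in cluster.yml would look like this (the %{ for } directive repeats the block once per node):

nodes:
  - address: 192.168.100.11
    user: ubuntu
    role:
      - controlplane
      - etcd
      - worker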
  2. Generate cluster.yml Using local_file and templatefile

Add the following to your main.tf to define the local_file resource that generates cluster.yml:

# main.tf

# Data source for RKE cluster template using templatefile function
locals {
  rke_cluster_config = templatefile("${path.module}/cluster_template.yml.tpl", {
    nodes = [for server in faxter_server.k8s_node : server.ip_addresses[0]]
  })
}

# Generate cluster.yml
resource "local_file" "rke_cluster" {
  content  = local.rke_cluster_config
  filename = "${path.module}/cluster.yml"
}

Step 5: Initialize and Apply Terraform Configuration

  1. Initialize Terraform:
terraform init
  2. Set Variables: Create a terraform.tfvars file or pass variables via the command line (an example follows the checklist below).
# terraform.tfvars

faxter_token       = "YOUR_FAXTER_API_TOKEN"
ssh_key_name       = "my-ssh-key"
ssh_public_key     = "~/.ssh/id_rsa.pub"
private_key_path   = "~/.ssh/id_rsa"
kubernetes_version = "v1.24.0" # Adjust as needed
project            = "my-project"

Ensure:

  • You replace "YOUR_FAXTER_API_TOKEN" with your actual Faxter API token.
  • The paths to your SSH keys (ssh_public_key and private_key_path) are correct and accessible.
  • The project name aligns with your Faxter setup.
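Alternatively, variables can be passed directly on the command line instead of through terraform.tfvars, for example:

terraform apply -var="faxter_token=YOUR_FAXTER_API_TOKEN" -var="project=my-project"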

  3. Apply Configuration:
terraform apply
  • Review the planned actions.
  • Confirm the apply by typing yes when prompted.

Note: This process provisions the infrastructure and generates the cluster.yml file. The Kubernetes deployment via RKE is handled in the next step.

Step 6: Deploy Kubernetes Cluster with RKE

After the Terraform apply completes, proceed to deploy Kubernetes using RKE.

  1. Ensure RKE is Installed: Verify that the RKE binary is installed and executable:

rke --version

If not installed, download and install it from RKE Releases.

  2. Navigate to the Terraform Directory: Ensure you are in the Terraform configuration directory where cluster.yml was generated.
  3. Run the RKE Deployment: Execute the RKE command to set up the Kubernetes cluster:
rke up --config cluster.yml

Notes:

  • RKE connects to the provisioned nodes via SSH using the provided private_key_path.
  • Upon successful deployment, the command generates a kube_config_cluster.yml file.

  4. Install kubectl: If not already installed, install kubectl.
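One common approach on Linux amd64 (taken from the upstream Kubernetes install instructions; adjust for your platform) is:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl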

  5. Configure kubectl:

export KUBECONFIG=$(pwd)/kube_config_cluster.yml
  6. Test Cluster:
kubectl get nodes

Optional step:

Instead of Step 6, you can add the following after Step 4 so that RKE is executed automatically by Terraform once the nodes are ready.

Create a deploy_k8s.tf file to execute the RKE deployment using a null_resource with a local-exec provisioner. After adding it, run terraform init again so Terraform installs the hashicorp/null provider.

# deploy_k8s.tf

resource "null_resource" "deploy_k8s" {
  depends_on = [
    faxter_server.k8s_node,
    local_file.rke_cluster
  ]

  provisioner "local-exec" {
    command = "rke up --config cluster.yml"
  }
}

Explanation:

  • null_resource: Acts as a placeholder to execute provisioners after its dependencies are met.
  • local-exec provisioner: Runs the RKE command to set up Kubernetes using the generated cluster.yml.
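Note that a null_resource runs its provisioner only when the resource is created. If you want rke up to re-run whenever the rendered cluster configuration changes, a common pattern is to add a triggers map, sketched below:

# deploy_k8s.tf (variant with re-run trigger)

resource "null_resource" "deploy_k8s" {
  depends_on = [
    faxter_server.k8s_node,
    local_file.rke_cluster
  ]

  # Replaces this resource (and thus re-runs "rke up") whenever
  # the rendered cluster.yml content changes
  triggers = {
    cluster_config = local_file.rke_cluster.content
  }

  provisioner "local-exec" {
    command = "rke up --config cluster.yml"
  }
}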

Important: Ensure that the RKE binary is installed and accessible in your system’s PATH.
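For example:

command -v rke || echo "rke not found in PATH"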