
How To: Terraform on GCP for Immutable Infrastructure

  • Writer: RE
  • Dec 30, 2019
  • 4 min read

Let's download Terraform. It's pretty simple: click the download link and follow the installation instructions. Next, we need GCP credentials, which we will create from the GCP console.

Create credentials and service account: the next thing we shall do is get the necessary credentials from GCP. Go to the GCP console menu and select APIs & Services, then Credentials. Now click the Create Credentials button and choose Service Account Key. On the next screen, choose the Compute Engine default service account and JSON, then click Create.

This will download a .json file to your computer. Move this file to the folder where we will write our Terraform code.
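If you prefer the command line, the same key can be created with the gcloud CLI; the service account email below is a placeholder you need to fill in with the Compute Engine default service account's email:

gcloud iam service-accounts keys create authfile.json \
  --iam-account <service-account-email>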


provider "google" {

credentials = "${file("authfile.json")}"

project = "<project Id>"

region = "us-east-1"

}


In a Terraform project, the simplest recommended structure is to have a main.tf and a variables.tf.

main.tf should be the primary entry point. For a simple Terraform project, this may be where all the resources are created.


The remote state backend is described in Terraform as follows:


terraform {
  backend "gcs" {
    project = "project-id"
    bucket  = "project-tfstate"
    prefix  = "terraform/state"
  }
}
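Once the state bucket exists (see below), terraform init (re)initializes the working directory so Terraform starts writing its state to the GCS backend:

terraform init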

Here we use Google Cloud Storage to store the Terraform state. That's why we need a bucket called 'project-tfstate' in GC Storage; we have to create it manually.
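The bucket can be created in the Cloud Console or, as a sketch, with gsutil (the location below is an assumption; pick whatever fits your project):

gsutil mb -p <project-id> -l US gs://project-tfstate/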

In main.tf,


provider "google" {

project = "${var.gcp_project}"

region = "${var.region}"

zone = "${var.zone}"

}

provider "google-beta" {

project = "${var.gcp_project}"

region = "${var.region}"

zone = "${var.zone}"

}

Here, we reference values with "${var.variable_name}" blocks so the configuration is reusable and the same Terraform project can be used in different environments.
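These inputs are declared in variables.tf. A minimal sketch covering the variables used so far could look like this (the defaults are illustrative, not prescriptive):

variable "gcp_project" {
  description = "GCP project ID"
}

variable "region" {
  description = "GCP region"
  default     = "us-east1"
}

variable "zone" {
  description = "GCP zone"
  default     = "us-east1-b"
}

variable "cluster_name" {
  description = "Name of the GKE cluster, also used for the subnet"
}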


We need to create a service account so that Terraform can access GCP services such as the network components, Kubernetes Engine, and so on.


resource "google_service_account" "sa" {

account_id = "${var.cluster_name}-gke-sa"

display_name = "${var.cluster_name}-gke-sa"

}

We must define the role or roles required for this service account.


resource "google_project_iam_member" "k8s-member" {

count = "${length(var.iam_roles)}"

project = "${var.gcp_project}"

role = "${element(values(var.iam_roles), count.index)}"

member = "serviceAccount:${google_service_account.sa.email}"

}
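Here var.iam_roles is assumed to be a map variable from a short label to a role name; the exact roles depend on what the cluster needs. For example, in a .tfvars file:

iam_roles = {
  "logging"    = "roles/logging.logWriter"
  "monitoring" = "roles/monitoring.metricWriter"
  "registry"   = "roles/storage.objectViewer"
}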



network.tf


To build the project from scratch, we must first meet its networking needs. To do this, we start by creating a new VPC network.


resource "google_compute_network" "project-network" {
  name                    = "${var.vpc_name}-network"
  auto_create_subnetworks = "false"
  routing_mode            = "REGIONAL"
}


Production and staging environments must be on the same VPC network, but two different subnets are required to isolate these environments (we ran into this issue in one of our projects). If we had opened separate networks for production and staging, we would have had to create separate network elements whenever we needed to define a firewall rule or a tunnel between all GCP projects.


resource "google_compute_subnetwork" "project-subnet" {

name = "${var.cluster_name}"

ip_cidr_range = "${var.subnet_cidr}"

private_ip_google_access = true

network = "${google_compute_network.project-network}"

}


Here the cluster_name variable will differ between the production and staging environments.

Next, firewall rules should be defined as follows.


resource "google_compute_firewall" "project-firewall-allow-ssh" {

name = "${var.vpc_name}-allow-something"

network = "${google_compute_network.project-network.self_link}"

allow {

protocol = "some-protocol #tcp, udp, icmp...

ports = ["some-port"] #22, 80...

}

source_ranges = ["IP/range"] #according to cidr notation

# source_ranges = ["${var.subnet_cidr}", "${var.pod_range}", "${var.service_range}"

}
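Filled in for the SSH case that the resource name suggests, the same rule might look like this (the values are examples, not the article's exact configuration):

resource "google_compute_firewall" "project-firewall-allow-ssh" {
  name    = "${var.vpc_name}-allow-ssh"
  network = "${google_compute_network.project-network.self_link}"

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  # Only allow SSH from within the subnet itself.
  source_ranges = ["${var.subnet_cidr}"]
}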



We create a Cloud Router to route the VPC network with internal IP addresses, called project-network, to the Cloud NAT gateway.


resource "google_compute_router" "project-router" {
  name    = "${var.vpc_name}-nat-router"
  network = "${google_compute_network.project-network.self_link}"
}


We connect the Cloud Router and the public IP addresses (google_compute_address) we created to the Cloud NAT gateway. In other words, we take the relevant network from the internal network out to the outside world.
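The google_compute_address resource that the NAT references is not shown in the snippets above; a minimal sketch of it, with the name and count as assumptions, would be:

resource "google_compute_address" "project-nat-ips" {
  # Static external IP(s) reserved for the NAT gateway.
  count = 1
  name  = "${var.vpc_name}-nat-ip-${count.index}"
}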


resource "google_compute_router_nat" "project-nat" {
  name                               = "${var.vpc_name}-nat-gw"
  router                             = "${google_compute_router.project-router.name}"
  nat_ip_allocate_option             = "MANUAL_ONLY"
  nat_ips                            = ["${google_compute_address.project-nat-ips.*.self_link}"]
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
  depends_on                         = ["google_compute_address.project-nat-ips"]
}

We will create a VPC peering element so that the relevant VPC network can communicate with VPC networks in other projects (we had to peer with another network in Azure).


resource "google_compute_network_peering" "vpc_peerings" {

count = "${length(var.vpc_peerings)}"

name = "${element(keys(var.vpc_peerings), count.index)}"

network = "${google_compute_network.project-network.self_link}"

peer_network = "${element(values(var.vpc_peerings), count.index)}"

}
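Here var.vpc_peerings is assumed to be a map from a peering name to the self link of the peer network. An illustrative .tfvars entry (project and network names are placeholders):

vpc_peerings = {
  "to-other-project" = "https://www.googleapis.com/compute/v1/projects/other-project/global/networks/other-network"
}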



With main.tf and network.tf, we have done the groundwork needed to create the base resources. We will now look at the GKE blocks where we create the cluster resources that will host the project.

In gke.tf we will first add a cluster and then a node pool that will autoscale within the cluster.


resource "google_container_cluster" "primary" {

provider = "google-beta"

name = "${var.cluster_name}"

zone = "${var.zone}"

min_master_version = "${var.gke_version}"

remove_default_node_pool = true

initial_node_count = 1

master_authorized_networks_config {

cidr_blocks = [

{

cidr_block = "IP/range" #according to cidr notation

display_name = "all"

},

]

}

ip_allocation_policy {

cluster_ipv4_cidr_block = "${var.pod_range}"

services_ipv4_cidr_block = "${var.service_range}"

}

network = "${var.network_name}"

subnetwork = "${var.cluster_name}"

node_version = "${var.gke_version}"

private_cluster_config {

enable_private_nodes = true

master_ipv4_cidr_block = "${var.master_range}"

}

master_auth {

username = ""

password = ""

client_certificate_config {

issue_client_certificate = false

}

}

}


If the master_auth block is not provided, GKE will generate a password for you with the username ‘admin’ for HTTP basic authentication when accessing the Kubernetes master endpoint.

The master_authorized_networks_config block is required to reach the master from IP addresses other than the nodes and Pods: it exposes a publicly accessible endpoint, but only to the address ranges that you have authorized.
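Once the cluster is up, and your IP is inside one of the authorized ranges, kubeconfig credentials can be fetched with gcloud (the cluster name, zone, and project correspond to the variables above):

gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>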


resource "google_container_node_pool" "default" {

provider = "google-beta"

name = "${var.default_pool_name}"

zone = "${var.zone}"

cluster = "${google_container_cluster.primary.name}"

node_count = "${var.default_pool_node_number}"

version = "${var.gke_version}"

autoscaling {

min_node_count = "${var.default_pool_min_node}"

max_node_count = "${var.default_pool_max_node}"

}

node_config {

machine_type = "${var.machine_type}"

oauth_scopes = [

"https://www.googleapis.com/auth/logging.write",

"https://www.googleapis.com/auth/monitoring",

]

service_account = "${var.service_account}"

metadata = {

disable-legacy-endpoints = "true"

}

}

}

Workspace

We discussed Terraform's state, and we configured GC Storage to hold the state of our permanent infrastructure. These states belong to workspaces. If you have only one terraform.tfvars file, you can use the default workspace. To manage the same configuration in multiple environments, you must create separate workspaces.



In our case, we need separate workspaces for staging and production. For example, I’ll do the necessary operations for staging.

$ terraform workspace new staging

Created and switched to workspace "staging"!

You're now on a new, empty workspace. Workspaces isolate their state, so if you run "terraform plan" Terraform will not see any existing state for this configuration.

When we create a workspace for production, we use the following command to switch between them.

terraform workspace select (staging/production)

Whichever environment we are going to change, we have to be working in that environment's workspace.



After making sure that we are in the workspace we created for staging, we create a staging.tfvars file and set values appropriate for staging in our variables.

subnet_cidr  = "10.10.0.0/16"
subnet_name  = "example-staging"
cluster_name = "example-staging"
...
network_name = "example-network"
pod_range    = "10.20.0.0/16"
...
default_pool_node_number = 1
default_pool_min_node    = 1
default_pool_max_node    = 1
...
default_pool_name = "example"
machine_type      = "n1-standard-2"

We run the terraform plan command to see a preview of all the resources that will be created with the current configuration and the assigned variable values. One important point is the -out flag: it saves the plan that terraform plan computed during that run to a file, so that exact plan can be applied later.

terraform plan -var-file="staging.tfvars" -out=staging.out

Carefully examine the output of the command; the resulting resources and variable values are displayed in full.

Plan: n to add, 0 to change, 0 to destroy.

If everything goes well, we can create all the resources on GCP with the terraform apply command.

terraform apply "staging.out"
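The production rollout follows the same pattern, assuming a production.tfvars with production values exists:

terraform workspace select production

terraform plan -var-file="production.tfvars" -out=production.out

terraform apply "production.out"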



 
 
 
