
Managing AWS IAM With Terraform: Part 2

Tiexin Guo
Published: October 11, 2022

Part 1 of this series focuses on users, groups, and policy automation with Terraform. If you haven’t read it yet, it’s called Managing AWS IAM With Terraform: Part 1.

Following the first part, in this tutorial, we will cover:

  • How to centralize IAM to reduce operational overhead for multiple AWS accounts
  • How to create an EC2 instance profile
  • How to use the HashiCorp Vault AWS engine for Just-In-Time access

Side note: This article assumes you already understand what Terraform is and know the basics of it. If not, start with a Terraform official tutorial.

1. Centralized IAM: Cross-Account Access

1.1 Why Do I Need This?

In a medium to large-sized team or company, using a multi-account strategy is common to ensure maximum resource separation and precise permission management.

For example, it’s common to see the following three accounts in an organization: one for the dev environment, one for test, and one for production.

Creating a user in each account for each team member can be tedious. Yes, we can use Terraform to automate the process, but there are still two major drawbacks:

  1. You’d have to create three different sets of Terraform scripts, one for each environment
  2. Each team member would have to use three different sets of usernames/passwords to log in to each environment

Solution: Create a “central” account, and only create users there; for other accounts like dev/test/prod, we grant access to the users inside the “central” account.

The idea is to use a role to delegate access to resources in different AWS accounts. By setting up centralized cross-account access in this way, you don’t have to create individual IAM users in each account. In addition, users don’t have to sign out of one account and sign in to another to access resources in different AWS accounts.

1.2 Prerequisites

We will use two AWS accounts in this tutorial:

  1. Central Account: Where we create a user. This user will try to access resources in the dev account. Remember to create a test user here before proceeding to the next section
  2. Dev Account: Where we run the Terraform scripts to create roles that allow central account users to access the dev account resources

Note: The Terraform code in the following section (1.3) is executed in the dev account

1.3 Set Up Cross-Account Access With Terraform

Prepare the following files:

  • main.tf
  • output.tf
  • variables.tf

You can get all the code in this tutorial from this repo.

Content of main.tf:

data "aws_iam_policy_document" "assume_role" {
  statement {
    actions = [
      "sts:AssumeRole",
    ]
    principals {
      type        = "AWS"
      identifiers = [var.account_id]
    }
    effect = "Allow"
  }
}
resource "aws_iam_role" "test_role" {
  name               = "test_role"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
resource "aws_iam_role_policy" "test_policy" {
  name = "test_policy"
  role = aws_iam_role.test_role.id
  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Action" : "s3:ListAllMyBuckets",
        "Resource" : "*"
      }
    ]
  })
}

The code snippet above means:

  • We created a test role with permission to list S3 buckets in the AWS dev account
  • We used the Terraform aws_iam_policy_document data source to allow users from the AWS central account to assume the test role, so users from the central account have access to the dev account
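The trust policy above delegates access to the whole central account. If you want a narrower trust relationship, the principal can be a specific user ARN instead; here is a sketch of that variant (the user name alice is hypothetical):

```hcl
# Variant of the assume-role policy document that trusts one specific
# user in the central account instead of the whole account.
# "alice" is a hypothetical user name.
data "aws_iam_policy_document" "assume_role_single_user" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${var.account_id}:user/alice"]
    }
    effect = "Allow"
  }
}
```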

Content of variables.tf:

variable "account_id" {
  type        = string
  description = "allow this account to assume roles"
}

Content of output.tf:

output "role_arn" {
  value = aws_iam_role.test_role.arn
}

Then create a terraform.tfvars:

account_id = "YOUR_CENTRAL_ACCOUNT_ID_HERE"

Put your central account ID in the terraform.tfvars file before continuing.

Then, with your dev account AWS access keys configured, execute:

terraform init
terraform apply

In the output, we will get an AWS role ARN like the following:

arn:aws:iam::858104165444:role/test_role

1.4 Testing the Access

Let’s test the cross-account access with AWS CLI. 

Note: The assumed role can also work in the AWS Console

First, with the central account AWS access key set up, we can run:

aws sts assume-role --role-arn "arn:aws:iam::858104165444:role/test_role" --role-session-name "tiexin-test-access"

The role ARN is the Terraform output from the previous section.

The output should look like the following:

{
    "Credentials": {
        "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
        "SessionToken": "AQoDYXdzEGcaEXAMPLE2gsYULo+Im5ZEXAMPLEeYjs1M2FUIgIJx9tQqNMBEXAMPLE
CvSRyh0FW7jEXAMPLEW+vE/7s1HRpXviG7b+qYf4nD00EXAMPLEmj4wxS04L/uZEXAMPLECihzFB5lTYLto9dyBgSDy
EXAMPLE9/g7QRUhZp4bqbEXAMPLENwGPyOj59pFA4lNKCIkVgkREXAMPLEjlzxQ7y52gekeVEXAMPLEDiB9ST3Uuysg
sKdEXAMPLE1TVastU1A0SKFEXAMPLEiywCC/Cs8EXAMPLEpZgOs+6hz4AP4KEXAMPLERbASP+4eZScEXAMPLEsnf87e
NhyDHq6ikBQ==",
        "Expiration": "2014-12-11T23:08:07Z",
        "AccessKeyId": "AKIAIOSFODNN7EXAMPLE"
    }
}

The AccessKeyId, SecretAccessKey, and SessionToken are what we need to proceed with. Run:

export AWS_ACCESS_KEY_ID="PASTE_THE_VALUE_FROM_THE_ABOVE_OUTPUT_HERE"
export AWS_SECRET_ACCESS_KEY="PASTE_THE_VALUE_FROM_THE_ABOVE_OUTPUT_HERE"
export AWS_SESSION_TOKEN="PASTE_THE_VALUE_FROM_THE_ABOVE_OUTPUT_HERE"

Note: Remember to clean your bash history after the session
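As an alternative to copy-pasting, a small helper (our own sketch, not part of the original tutorial) can turn the assume-role JSON into the three export lines:

```python
import json

def sts_export_lines(sts_json: str) -> str:
    """Turn the JSON printed by `aws sts assume-role` into shell export lines."""
    creds = json.loads(sts_json)["Credentials"]
    pairs = [
        ("AWS_ACCESS_KEY_ID", "AccessKeyId"),
        ("AWS_SECRET_ACCESS_KEY", "SecretAccessKey"),
        ("AWS_SESSION_TOKEN", "SessionToken"),
    ]
    # Emit one export line per environment variable the AWS CLI expects
    return "\n".join(f'export {var}="{creds[key]}"' for var, key in pairs)
```

Pipe the aws sts assume-role output into a script built around this function and eval the result, so the raw secrets never land in your shell history.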

At this point, any subsequent commands run with the permissions of the assumed role. We can run aws sts get-caller-identity to check that we are now assuming a role in the dev account and are no longer our original user in the central account.

If we execute:

aws s3 ls

We can list all the buckets in the dev account.

1.5 AWSume

The previous testing process is a bit tedious because we have to copy and export a lot of sensitive values by hand, which is not recommended.

Here, I introduce an interesting CLI tool: AWSume. AWSume is a convenient way to manage session tokens and assume role credentials.

After we get it installed and configured, we can simply run one command:

awsume --role-arn arn:aws:iam::858104165444:role/test_role

This does exactly what all those commands in the previous section did.

2. AWS Instance Profile

If you are using AWS EC2 instances, chances are you have deployed some apps on those instances, and those apps might need access to other AWS services.

For example, you wrote a small application that writes some data to an AWS S3 bucket. You deploy your app in an EC2 instance, and you must configure it correctly so your app has the right permission to write to AWS S3.

2.1 Why Not Use Access Keys on EC2 Instances?

It is not recommended to use an IAM user for machine-to-machine connection, for two reasons:

  1. User access keys are long-lived static credentials, meaning they don’t expire. If a key is compromised, it can be used until it is revoked. This has big security implications since a malicious actor could act undetected for a very long time
  2. Since they don’t expire, access keys are a permanent security liability. Anyone who accesses the instance might get the key and use it from anywhere without notice. In other words, the risk of a leak grows over time

2.2 What Are Instance Profiles?

Amazon EC2 uses an instance profile as a container for an IAM role. When you create an IAM role using the IAM console, the console creates an instance profile automatically and gives it the same name as the role to which it corresponds. If you use the Amazon EC2 console to launch an instance with an IAM role or to attach an IAM role to an instance, you choose the role based on a list of instance profile names. In this way, apps that run on the EC2 instance can use the role’s credentials when they access AWS resources.

TL;DR: It’s safer to use IAM roles to manage permissions and attach them to instances. Instance profiles and roles, in general, provide temporary (or dynamic) credentials on request. If those credentials leak, the damage is contained to their lifespan.

2.3 Launching an EC2 Instance With Instance Profile

Create a Terraform file with the following content:

resource "aws_iam_role" "role" {
  name = "test_role"
  path = "/"
  assume_role_policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Principal": {
               "Service": "ec2.amazonaws.com"
            },
            "Effect": "Allow",
            "Sid": ""
        }
    ]
}
EOF
}
resource "aws_iam_instance_profile" "test_profile" {
  name = "test_profile"
  role = aws_iam_role.role.name
}
data "aws_ami" "ubuntu" {
  most_recent = true
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
  owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
  ami                  = data.aws_ami.ubuntu.id
  instance_type        = "t3.micro"
  iam_instance_profile = aws_iam_instance_profile.test_profile.id
  tags = {
    Name = "HelloWorld"
  }
}

And run:

terraform init
terraform apply

The code snippet above does the following:

  1. Create an IAM role (you can also attach policies to the role so it has the correct permissions you want it to have) and delegate permissions to the EC2 service
  2. Create an instance profile, which serves as a container for the role
  3. Launch an EC2 instance with the instance profile so the EC2 instance has the permissions attached to the IAM role

In this case, if you attach an S3 access policy to the role, the EC2 instance with this instance profile gets AWS S3 access without you having to export access keys and secrets.
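A minimal sketch of that attachment, reusing the resource names from the snippet above (the AWS-managed AmazonS3FullAccess policy is used here for illustration):

```hcl
# Attach the AWS-managed S3 policy to the role; apps running on the
# instance then get S3 access via the instance profile's temporary
# credentials, with no access keys involved.
resource "aws_iam_role_policy_attachment" "s3_access" {
  role       = aws_iam_role.role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}
```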

3. Just-In-Time Access With HashiCorp Vault

Even with Terraform in place, we sometimes still need to create temporary users for just-in-time access.

For example, a data engineer might need temporary access to an S3 bucket to verify some data. If he only needs this permission for a one-time job and won’t use AWS again for a long time, it doesn’t make sense to create a permanent account that grants him indefinite access.

As another example, maybe a developer needs access to a production EC2 instance for debugging, which might also be a one-time job. If you are using Terraform to manage permissions, you probably already use groups for permissions, but it doesn’t make sense to grant the whole group temporary prod access.

You get the idea. While you can give someone permanent access, most of the time, you don’t have to, and you shouldn’t, according to our zero-trust and least-privilege access principles.

Even if you used a specific set of Terraform scripts to create temporary roles for these temporary accesses, you’d still have to apply/destroy them manually, which is not only operational overhead but also a potential risk: what if you forget to destroy the permissions and the access is maintained indefinitely?

In this section, we will use Vault to achieve true just-in-time access management for AWS IAM.

3.1 Vault Introduction

HashiCorp Vault can secure, store, and tightly control access to tokens, passwords, certificates, and encryption keys for protecting secrets and other sensitive data using a UI, CLI, or HTTP API.

It has different types of engines to manage different types of secrets. For AWS, it has the AWS secrets engine, which generates AWS access credentials dynamically based on IAM policies.

3.2 Installation

In this part, we will install Vault in a local macOS environment.

For other OS, check out the official doc here. For K8s installation, check out the official doc here. For production-ready installation, check out the doc here.

Run:

brew tap hashicorp/tap
brew install hashicorp/tap/vault

Then start a dev-mode server by running:

vault server -dev

The command output includes a root token that we will use to sign in.

3.3 AWS Engine Configuration

If we visit http://127.0.0.1:8200 and use the token to sign in, we are already in Vault.

Under the “Secrets Engines” tab, click the “enable new engine” button and choose AWS. Click next, then enable the engine:

Enable a Secrets Engine

Then we go to the configuration tab of the AWS engine, and click “configure:”

AWS

Next, we put an AWS access key and secret (belonging to an identity with IAM permissions) into the “dynamic IAM root credentials” tab, and in the “leases” tab, we choose a reasonable time, say, 30 minutes:

Dynamic IAM Root Credentials
Configure AWS

Then we can go to the “secrets” tab, choose “AWS,” and click “Create role:”

Secrets Engine

Secrets AWS

On the create role page, we put the following info:

Create an AWS Role

For testing purposes, we use the policy ARN arn:aws:iam::aws:policy/AmazonS3FullAccess, which gives this role full S3 access. But don’t worry: the corresponding IAM user won’t be created until credentials are requested, and it’s managed by Vault.
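For reference, the same engine setup can also be done with Vault’s CLI instead of the UI; the snippet below is a sketch with placeholder credentials and a hypothetical role name my-role (paths assume the engine is mounted at the default aws/):

```shell
# Enable the AWS secrets engine (mounted at aws/ by default)
vault secrets enable aws

# Root credentials Vault uses to manage IAM users (placeholders)
vault write aws/config/root \
    access_key=YOUR_ACCESS_KEY \
    secret_key=YOUR_SECRET_KEY \
    region=us-east-1

# Default/maximum lease of 30 minutes for generated credentials
vault write aws/config/lease lease=30m lease_max=30m

# A role mapped to the AmazonS3FullAccess managed policy
vault write aws/roles/my-role \
    credential_type=iam_user \
    policy_arns=arn:aws:iam::aws:policy/AmazonS3FullAccess

# Generate temporary credentials for that role
vault read aws/creds/my-role
```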

3.4 Testing the Temporary Role

If we go to the secrets tab in the AWS engine for the test role we created, we can create a new credential:

AWS Configuration

Generate AWS Credentials

Generate AWS Credentials Part II

Now if we use this user’s key and secret, we have full S3 access, and it’s only valid for 30 minutes, after which the user is deleted.

If we log in to the AWS console and go to IAM, we can see this user:

Vault

And it has the right permission:

Summary Permissions

Summary

In summary, Vault can automatically create and revoke IAM credentials according to predefined roles and policies, and ensure that the access never outlives the configured lease time.

In this second (and last) part of the IAM tutorial, we have learned:

  • How to create cross-account accesses and assume roles with Terraform
  • How to manage EC2 access with an instance profile
  • How to achieve true just-in-time access automagically with HashiCorp Vault

For the last one, there is more to learn if you want to make it production-ready, for example, you could explore DynamoDB Storage Backend and Auto-unseal with AWS KMS.

IAM has grown more complicated over the years. I hope this mini-series helps automate your daily routine and reduce your operational overhead.

Source: dzone.com