This tutorial demonstrates how to set up Kubernetes deployments that use Kubernetes Persistent Volumes, with the Amazon EBS CSI driver handling volume lifecycle management.
The instructions that follow assume you have `kubectl` access to a Kubernetes cluster managed by MKE.
Prerequisites
- `wget` utility installed in your bash environment
- a Kubernetes 1.13 cluster
- access to `kubectl` connected to your Kubernetes cluster(s) being managed by MKE
- ability to provision volumes on AWS in the same AZ as the target Kubernetes cluster(s)
Setting Up
- Download the demo repository to your working directory:
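The repository location below is a placeholder; substitute the actual URL of the demo repository:

```bash
# Placeholder URL: substitute the actual location of the demo repository
wget https://example.com/ebs-csi-demo.tar.gz
tar -xzf ebs-csi-demo.tar.gz
cd ebs-csi-demo
```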
- Grant AWS API IAM permissions:
The CSI driver must be able to call the AWS API. The following sample IAM policy can be used to grant the driver the necessary permissions:
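The policy below is a sketch modeled on the permissions the EBS CSI driver typically requires; trim or extend the action list to match your environment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:CreateSnapshot",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:DeleteSnapshot",
        "ec2:DeleteTags",
        "ec2:DeleteVolume",
        "ec2:DescribeInstances",
        "ec2:DescribeSnapshots",
        "ec2:DescribeTags",
        "ec2:DescribeVolumes",
        "ec2:DetachVolume"
      ],
      "Resource": "*"
    }
  ]
}
```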
The recommended approach is to add the above policy to the EC2 instance roles. If this is not possible in your case, you will need to modify `secrets.yaml` with `key_id`, `access_key`, and optionally `session_token` credentials for an IAM user that has this policy.
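As a sketch, the Secret might look like the following; the resource name and namespace are assumptions, while the key names come from `secrets.yaml` itself:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret        # assumed name; match the one used in secrets.yaml
  namespace: kube-system  # assumed namespace
stringData:
  key_id: "<AWS_ACCESS_KEY_ID>"
  access_key: "<AWS_SECRET_ACCESS_KEY>"
  # session_token is only needed when using temporary credentials
  # session_token: "<AWS_SESSION_TOKEN>"
```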
- Install the AWS EBS CSI driver:
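A minimal sketch, assuming the driver manifests ship with the demo repository (the directory name is an assumption):

```bash
# Apply the CSI driver manifests bundled with the demo repository
kubectl apply -f aws-ebs-csi-driver/

# Verify the driver pods come up
kubectl get pods -n kube-system
```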
Your Kubernetes cluster is ready to provision volumes via AWS EBS.
Dynamically Provisioned Volume
Launch a deployment with a dynamically provisioned EBS volume
- To begin the demonstration, launch the following application deployment:
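A sketch of the command, assuming the manifests live in a `dynamic/` directory of the demo repository:

```bash
kubectl apply -f dynamic/
```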
The dummy app `example-dynamic` utilizes dynamically provisioned EBS volumes created by the CSI driver.
- Wait for an EBS volume to be created and Bound to the claim:
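You can watch the claim until it binds:

```bash
# Watch the PVC until its STATUS column shows Bound
kubectl get pvc --watch
```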
- Get the EBS `volumeID` that was provisioned by the CSI driver:
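A sketch, assuming the claim is named `example-dynamic`:

```bash
# Look up the PV bound to the claim, then read its VolumeHandle
PV=$(kubectl get pvc example-dynamic -o jsonpath='{.spec.volumeName}')
kubectl describe pv "$PV"
```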
- Note the returned value of `VolumeHandle` from the CLI output and confirm that this value matches in the AWS console.
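If you prefer the CLI to the console, the AWS CLI can look up the volume directly (assumes configured AWS credentials):

```bash
# Substitute the VolumeHandle value from the previous step
aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0
```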
Delete the attached pod
- Get the name of the pod:
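For example, assuming the deployment labels its pods with `app=example-dynamic`:

```bash
kubectl get pods -l app=example-dynamic
```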
- Take note of when the pod started writing data:
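A sketch, assuming the dummy app logs a timestamped line each time it writes:

```bash
# Substitute the pod name from the previous step
kubectl logs <pod-name> | head -n 1
```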
- Delete the pod:
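For example:

```bash
# Substitute the pod name from the previous step
kubectl delete pod <pod-name>
```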
Deleting the pod takes a few seconds because the driver must unmount the volume and detach it from the instance.
- Get a list of your pods again:
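For example:

```bash
# A replacement pod should be listed, created by the deployment
kubectl get pods
```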
The original pod is gone; however, because the deployment is still active, Kubernetes immediately creates a replacement pod and reattaches the volume to it.
- Now, take note of when this new pod started writing data:
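Again assuming the app logs a timestamped line per write:

```bash
# Substitute the new pod's name; the first line should carry the same
# timestamp as before
kubectl logs <new-pod-name> | head -n 1
```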
You can see that the data persisted across the pod restart, as the log begins at the same time.
Delete the deployment and associated dynamically provisioned volume
- Delete the dynamic application deployment:
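A sketch, assuming the deployment is named `example-dynamic`:

```bash
kubectl delete deployment example-dynamic
```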
- Check the AWS console and see that the volume is still “available”.
- Delete the dynamic deployment’s PVC:
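Assuming the claim is named `example-dynamic`:

```bash
kubectl delete pvc example-dynamic
```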
- Check the AWS console again; this time the volume has been deleted and no longer appears.
Pre-provisioned Volume
Imagine you have an existing application that already uses an EBS volume to persist its data and is now being migrated to run in Kubernetes.
Using a pre-provisioned volume as described below allows you to safely migrate that application without losing any of its data.
Creating a `PersistentVolume` resource directly and specifying the backing EBS `volumeID`, instead of relying on the CSI driver to provision the EBS volume, allows you to reuse your existing EBS volume(s) while still leveraging the CSI driver to properly attach and detach the volume(s) from EC2 instances as application pods are scheduled.
Launch a deployment with a pre-provisioned EBS volume
- Create a new EBS volume in the same AZ as the cluster in the AWS console, and note the `volumeID` of the new volume:
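If you prefer the CLI to the console, something like the following creates the volume; the AZ, size, and type are example values:

```bash
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --size 4 \
  --volume-type gp2 \
  --query 'VolumeId'
```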
- Next, edit `pre-provisioned/pv.yaml`, inserting the value of `volumeID` from the previous step as the value of `volumeHandle` in `spec.csi.volumeHandle`, replacing `__REPLACE_ME__`:
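A sketch of what `pre-provisioned/pv.yaml` might contain; the resource name, capacity, and access mode are assumptions, while `ebs.csi.aws.com` is the EBS CSI driver's registered name:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pre-provisioned         # assumed name
spec:
  capacity:
    storage: 4Gi                        # match the size of your EBS volume
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain # keep the EBS volume when the PV is deleted
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: __REPLACE_ME__        # insert the volumeID from the previous step
```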
- Launch the application deployment with a pre-provisioned EBS volume:
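A sketch, assuming the manifests live in a `pre-provisioned/` directory:

```bash
kubectl apply -f pre-provisioned/
```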
In the AWS console, the EBS volume will now show as “in-use”.
Delete the pre-provisioned deployment
- Delete the application deployment:
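Assuming the deployment is named `example-pre-provisioned`:

```bash
kubectl delete deployment example-pre-provisioned
```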
- Delete the PV and PVC:
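Assuming the resources share the `example-pre-provisioned` name:

```bash
kubectl delete pvc example-pre-provisioned
kubectl delete pv example-pre-provisioned
```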
- Check the AWS console again: the volume will still be “available” even though the PVC and PV have been deleted. This is because the PV configuration sets an appropriate reclaim policy.
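The relevant field in `pre-provisioned/pv.yaml` (the `Retain` value is an assumption consistent with the behavior described here):

```yaml
persistentVolumeReclaimPolicy: Retain
```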
With `Retain`, the same EBS volume can be reused by other pods later on if desired.