CI/CD Pipeline With Tekton: Automated Deployment for the Tax Calculator
Hey guys! As DevOps engineers, we know that setting up a robust CI/CD pipeline is crucial for automating our build, test, and deployment processes. In this article, we'll dive deep into creating a CI/CD pipeline using Tekton for the Tax Calculator application. Tekton is a powerful open-source framework that allows us to create cloud-native CI/CD systems. Let's get started and make our deployments smoother and more reliable!
Introduction to CI/CD Pipelines
Before we jump into the specifics, let's quickly recap what a CI/CD pipeline is and why it's so important. CI/CD, which stands for Continuous Integration and Continuous Deployment, is a practice that automates the software release process. It’s all about making sure that our code changes are reliably built, tested, and deployed.
Why CI/CD Matters
CI/CD pipelines are essential in modern software development because they significantly reduce the manual effort and potential errors associated with releasing software. Imagine having to manually build, test, and deploy every single change you make. Sounds like a nightmare, right? With CI/CD, we automate these steps, which means faster release cycles, fewer bugs in production, and happier developers. By automating the pipeline, the development teams can focus more on writing code and less on operations. This accelerates the pace of innovation and allows for quicker feedback loops. Moreover, automated testing ensures that each code change is thoroughly vetted before it makes its way to production, minimizing the risk of introducing critical issues. In essence, CI/CD acts as a safety net, catching potential problems early in the development lifecycle and preventing them from escalating into major incidents.
Key Components of a CI/CD Pipeline
A typical CI/CD pipeline consists of several stages, each designed to ensure the quality and reliability of the software. These stages include building the application, running tests, and deploying the application to various environments. Understanding these components is key to setting up an effective pipeline. The first stage, building, involves compiling the source code and packaging it into an executable form, often a Docker image in modern cloud-native applications. This ensures that the application can be deployed consistently across different environments. Next comes the testing phase, where unit tests, integration tests, and other forms of automated testing are executed to verify the functionality and stability of the code. This phase is crucial for catching bugs and preventing regressions. Finally, the deployment stage involves deploying the built application to the target environment, which could be a staging environment for further testing or a production environment for end-users. Automating this entire process ensures that releases are consistent and repeatable, reducing the risk of human error and streamlining the software delivery lifecycle.
Tekton: Your CI/CD Superhero
So, why Tekton? Tekton is a powerful and flexible open-source framework for creating CI/CD pipelines. It's designed to run on Kubernetes, making it a perfect fit for cloud-native applications. With Tekton, we define our pipeline as a series of tasks, and these tasks can be anything from building a Docker image to deploying our application. The cool part? Tekton is highly customizable and scalable, allowing us to tailor our pipelines to fit our specific needs.
Why Choose Tekton?
There are several reasons why Tekton stands out as a great choice for CI/CD. First off, it's cloud-native, meaning it's designed to work seamlessly with Kubernetes. This makes it super easy to integrate with our existing infrastructure. Secondly, Tekton is declarative. We define our pipelines using YAML, which makes them easy to read, understand, and version control. The declarative nature of Tekton also means that the system maintains the desired state, automatically correcting any deviations. This ensures consistency and reliability in our deployments. Additionally, Tekton is extensible, allowing us to create custom tasks and integrate with other tools and services. This flexibility is invaluable for adapting the pipeline to the specific needs of our project. Finally, Tekton promotes reusability. We can define tasks and pipelines once and reuse them across multiple projects, saving time and effort. This modular approach simplifies pipeline management and ensures consistency across different applications.
Tekton Components: A Quick Overview
To get started with Tekton, it’s helpful to understand its core components. The main building blocks are Tasks and Pipelines. A Task is a set of steps that perform a specific action, like building a Docker image or running tests. Think of it as a single command or a script that needs to be executed. A Pipeline, on the other hand, is a collection of Tasks that run in a specific order. It defines the entire workflow of our CI/CD process. In addition to Tasks and Pipelines, Tekton also introduces the concept of TaskRuns and PipelineRuns. A TaskRun is an instance of a Task, representing a single execution of that Task. Similarly, a PipelineRun is an instance of a Pipeline, representing a single execution of the entire pipeline. These runs are where the actual work happens, and Tekton provides detailed logs and status updates for each run. Understanding these fundamental components allows us to effectively design and manage our CI/CD pipelines with Tekton.
Setting Up Tekton on Kubernetes
Alright, let's get our hands dirty! To start using Tekton, we need to install it on our Kubernetes cluster. If you don't have a Kubernetes cluster yet, now's the time to set one up. You can use Minikube for local development or a cloud-based Kubernetes service like IBM Cloud Kubernetes Service. Installing Tekton is pretty straightforward: we'll use kubectl to apply the Tekton manifests.
Installing Tekton Pipelines
First, we'll install Tekton Pipelines, which is the core component. This involves applying a YAML file that defines the necessary Tekton resources. Open your terminal and run the following command:
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
This command downloads the latest Tekton Pipelines release and applies it to your cluster. Kubernetes will create the necessary namespaces, service accounts, and other resources required for Tekton to function correctly. After running this command, you can check the status of the Tekton installation by listing the pods in the tekton-pipelines namespace:
kubectl get pods --namespace tekton-pipelines
You should see several pods running, including the Tekton Pipelines controller and webhook. If all pods are in the Running state, you're good to go!
Installing Tekton CLI (Optional but Recommended)
While you can interact with Tekton using kubectl, the Tekton CLI (tkn) makes things much easier. It provides a more user-friendly way to manage Tekton resources and monitor pipeline executions. To install the Tekton CLI, you can use various package managers or download the binary directly from the Tekton releases page. For example, on macOS, you can use Homebrew:
brew install tektoncd-cli
Once installed, you can verify the installation by running:
tkn version
This command should display the version of the Tekton CLI you have installed. With the Tekton CLI, you can easily create, list, and manage Tekton resources, making the CI/CD pipeline setup and management much more efficient.
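For example, once the resources later in this article are in place, a few everyday tkn commands let you list them and follow their logs (the run name below is a placeholder):

tkn task list
tkn pipeline list
tkn pipelinerun list
tkn pipelinerun logs <pipelinerun-name> --follow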
Verifying the Installation
To ensure everything is set up correctly, let's run a quick check. We can create a simple Tekton Task and TaskRun to verify that Tekton is working as expected. First, create a YAML file named verify-task.yaml with the following content:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: verify-tekton
spec:
  steps:
    - name: echo-hello
      image: ubuntu
      script: |
        #!/usr/bin/env bash
        echo "Hello from Tekton!"
This Task simply prints "Hello from Tekton!" to the console. Now, apply this Task to your cluster:
kubectl apply --filename verify-task.yaml
Next, create a TaskRun to execute this Task. Create a YAML file named verify-taskrun.yaml with the following content:
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: verify-tekton-run
spec:
  taskRef:
    name: verify-tekton
Apply this TaskRun to your cluster:
kubectl apply --filename verify-taskrun.yaml
To check the logs of the TaskRun, you can use the Tekton CLI:
tkn tr logs verify-tekton-run --follow
If you see "Hello from Tekton!" in the logs, congratulations! Tekton is installed and working correctly. This verification step ensures that your environment is properly configured before you start building more complex pipelines.
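Once you're happy with the result, you can optionally clean up the verification resources so they don't clutter your cluster:

kubectl delete --filename verify-taskrun.yaml
kubectl delete --filename verify-task.yaml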
Designing the CI/CD Pipeline for the Tax Calculator Application
Now that we have Tekton up and running, let's design our CI/CD pipeline for the Tax Calculator application. Remember our goals? We need to automatically build the Docker image, run unit tests, and deploy to IBM Cloud Code Engine if the tests pass. To achieve this, we'll break our pipeline into several key tasks.
Pipeline Stages
Our pipeline will consist of the following stages:
- Checkout: This task will fetch the source code from the Git repository. This is the first step in any CI/CD pipeline, ensuring that we have the latest code to work with. It typically involves using Git commands to clone the repository or fetch updates from the remote branch. This stage is crucial for ensuring that the subsequent steps in the pipeline are working with the most recent version of the code.
- Build: This task will build the Docker image for the application. Building the Docker image involves using Docker commands to create a container image from the source code and any necessary dependencies. This step is essential for packaging the application in a consistent and reproducible manner. The resulting Docker image can then be easily deployed to various environments.
- Test: This task will run the unit tests. Running unit tests is a critical step in the CI/CD pipeline, as it verifies the functionality of the code and helps to catch bugs early in the development process. This stage typically involves executing automated tests that cover different parts of the application. Passing these tests ensures that the code changes have not introduced any regressions.
- Deploy: This task will deploy the application to IBM Cloud Code Engine. Deploying the application to IBM Cloud Code Engine involves pushing the Docker image to a container registry and then configuring Code Engine to run the application. This stage automates the deployment process, ensuring that the application is consistently and reliably deployed to the target environment. It also allows for quick and easy rollbacks if necessary.
Defining Tekton Tasks
Each of these stages will be implemented as a Tekton Task. Let's start by defining the tasks.
Checkout Task
This task fetches the source code from the Git repository. The Tekton Catalog provides a collection of pre-built tasks that can be easily reused in pipelines, and its git-clone task is a convenient, drop-in way to do this: it accepts parameters such as the repository URL, the revision to check out, and the target directory. To keep things transparent, we'll define a small custom clone task below that does the same job with the alpine/git image, but the catalog task works just as well.
Build Task
This task builds the Docker image with kaniko. Kaniko is a tool for building container images from a Dockerfile without requiring access to a Docker daemon, which makes it ideal for Kubernetes environments. The Tekton Catalog offers a ready-made kaniko task, but, as with the clone step, we'll define our own lightweight version below. The task takes parameters such as the Dockerfile path, the context directory, and the image name; it builds the image and pushes it to a container registry, making it available for deployment.
Test Task
This task will execute the unit tests using a custom script. The specifics of this task depend on the testing framework used by the application. Typically, it involves running a command that executes the unit tests and reports the results. For example, if the application uses JUnit via Maven for testing, the task might run mvn test to execute the tests. The task should also fail the pipeline if any tests fail.
Deploy Task
This task will deploy the application to IBM Cloud Code Engine using the ibmcloud CLI. This involves authenticating with IBM Cloud, selecting the target Code Engine project, and then updating the application with the new Docker image. The task will use the ibmcloud ce application update command to deploy the new version of the application. It will also handle any necessary configuration updates, such as setting environment variables or scaling the application.
Defining the Tekton Pipeline
With our tasks defined, we can now create the Tekton Pipeline. The pipeline will orchestrate the execution of these tasks in the correct order. We'll define the pipeline using a YAML file, specifying the tasks to run and the dependencies between them. The pipeline will start with the Checkout task, followed by the Build task. If the build is successful, the Test task will run. Finally, if the tests pass, the Deploy task will be executed. This sequential execution ensures that each stage of the pipeline completes successfully before moving on to the next, maintaining the integrity of the deployment process.
Implementing the Tekton Tasks and Pipeline
Okay, time to translate our design into actual Tekton resources. We'll create YAML files for each Task and the Pipeline. This will involve defining the steps within each task, specifying the input and output resources, and configuring the pipeline to run these tasks in the correct sequence. By implementing these resources, we're essentially building the blueprint for our automated CI/CD process.
Creating the Checkout Task
Let's start with the Checkout Task. We'll create a YAML file named checkout-task.yaml:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: checkout-source
spec:
  params:
    - name: repo-url
      type: string
      description: The URL of the Git repository
    - name: revision
      type: string
      description: The Git revision to checkout
      default: main
  workspaces:
    - name: output
      description: The workspace where the repository will be cloned
  steps:
    - name: clone
      image: alpine/git
      workingDir: $(workspaces.output.path)
      script: |
        #!/usr/bin/env sh
        git clone $(params.repo-url) .
        git checkout $(params.revision)
This Task defines two parameters: repo-url for the Git repository URL and revision for the Git revision to check out. It also defines a workspace named output, which is where the repository will be cloned. The Task uses the alpine/git image and runs a script to clone the repository and check out the specified revision. This task is crucial for fetching the latest code changes and ensuring that the subsequent tasks work with the correct version of the application.
Creating the Build Task
Next up is the Build Task. We'll create a YAML file named build-task.yaml:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-image
spec:
  params:
    - name: image-name
      type: string
      description: The name of the Docker image to build
    - name: dockerfile
      type: string
      description: The path to the Dockerfile
      default: Dockerfile
    - name: context
      type: string
      description: The build context
      default: .
  workspaces:
    - name: source
      description: The workspace containing the source code
  steps:
    - name: build
      image: gcr.io/kaniko-project/executor:latest
      workingDir: $(workspaces.source.path)
      args:
        - "--dockerfile=$(params.dockerfile)"
        - "--context=$(params.context)"
        - "--destination=$(params.image-name)"
This Task uses the kaniko executor to build the Docker image. It takes three parameters: image-name for the name of the Docker image, dockerfile for the path to the Dockerfile, and context for the build context. The Task uses a workspace named source to access the source code. The kaniko executor builds the image and pushes it to the specified destination; note that pushing to a registry requires kaniko to have registry credentials available, typically a Docker config mounted from a Kubernetes Secret. This task ensures that the application is packaged into a Docker image, making it ready for deployment.
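For kaniko to succeed, the repository also needs a Dockerfile at the path given by the dockerfile parameter. Its exact contents depend on how the Tax Calculator is packaged; as a rough sketch, assuming a Maven-based Java application (consistent with the Maven test task in the next section), it might look something like this:

# Build stage: compile and package the application with Maven
FROM maven:3.8.1-openjdk-11 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn package -DskipTests

# Runtime stage: run the packaged jar on a slim JRE image
FROM openjdk:11-jre-slim
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]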
Creating the Test Task
Now, let's create the Test Task. We'll create a YAML file named test-task.yaml:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-tests
spec:
  workspaces:
    - name: source
      description: The workspace containing the source code
  steps:
    - name: test
      image: maven:3.8.1-openjdk-11
      workingDir: $(workspaces.source.path)
      script: |
        #!/usr/bin/env bash
        mvn test
This Task uses a Maven image to run the unit tests. It assumes that the application uses Maven for building and testing. The Task uses a workspace named source to access the source code. The script runs the mvn test command, which executes the unit tests defined in the application; if any test fails, the step exits with a non-zero status and the pipeline stops. This task is crucial for verifying the functionality of the code and ensuring that no regressions have been introduced.
Creating the Deploy Task
Finally, let's create the Deploy Task. We'll create a YAML file named deploy-task.yaml:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-to-code-engine
spec:
  params:
    - name: image-name
      type: string
      description: The name of the Docker image to deploy
    - name: project-id
      type: string
      description: The IBM Cloud Code Engine project ID
    - name: region
      type: string
      description: The IBM Cloud region
      default: us-south
    - name: app-name
      type: string
      description: The name of the Code Engine application
  steps:
    - name: deploy
      image: ibmcloud/ibm-cloud-cli:latest
      env:
        # Assumes an existing Kubernetes Secret named ibmcloud-api-key with an 'apikey' entry
        - name: IBMCLOUD_API_KEY
          valueFrom:
            secretKeyRef:
              name: ibmcloud-api-key
              key: apikey
      script: |
        #!/usr/bin/env bash
        ibmcloud login --apikey "$IBMCLOUD_API_KEY" -r $(params.region)
        ibmcloud ce project select --name $(params.project-id)
        ibmcloud ce application update --name $(params.app-name) --image $(params.image-name) --force
This Task uses the ibmcloud CLI to deploy the application to IBM Cloud Code Engine. It takes several parameters: image-name for the name of the Docker image, project-id for the IBM Cloud Code Engine project ID, region for the IBM Cloud region, and app-name for the name of the Code Engine application. The Task authenticates with the IBMCLOUD_API_KEY environment variable, which here is populated from a Kubernetes Secret. The script logs in to IBM Cloud in the specified region, selects the specified project, and updates the Code Engine application with the new Docker image. This task automates the deployment process, ensuring that the application is consistently and reliably deployed to IBM Cloud Code Engine.
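Because the step above reads the API key from a Secret, that Secret has to exist before the pipeline runs. Assuming the Secret name and key used in the sketch above (ibmcloud-api-key and apikey), it can be created with:

kubectl create secret generic ibmcloud-api-key --from-literal=apikey=<your-ibm-cloud-api-key>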
Creating the Pipeline
With all the tasks defined, let's create the Pipeline. We'll create a YAML file named tax-calculator-pipeline.yaml:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: tax-calculator-pipeline
spec:
  params:
    - name: repo-url
      type: string
      description: The URL of the Git repository
    - name: image-name
      type: string
      description: The name of the Docker image to build
    - name: project-id
      type: string
      description: The IBM Cloud Code Engine project ID
    - name: app-name
      type: string
      description: The name of the Code Engine application
  workspaces:
    - name: shared-data
      description: A workspace for sharing data between tasks
  tasks:
    - name: checkout
      taskRef:
        name: checkout-source
      params:
        - name: repo-url
          value: $(params.repo-url)
        - name: revision
          value: main
      workspaces:
        - name: output
          workspace: shared-data
    - name: build
      taskRef:
        name: build-image
      params:
        - name: image-name
          value: $(params.image-name)
      workspaces:
        - name: source
          workspace: shared-data
      runAfter:
        - checkout
    - name: test
      taskRef:
        name: run-tests
      workspaces:
        - name: source
          workspace: shared-data
      runAfter:
        - build
    - name: deploy
      taskRef:
        name: deploy-to-code-engine
      params:
        - name: image-name
          value: $(params.image-name)
        - name: project-id
          value: $(params.project-id)
        - name: app-name
          value: $(params.app-name)
      runAfter:
        - test
This Pipeline defines four parameters: repo-url for the Git repository URL, image-name for the name of the Docker image, project-id for the IBM Cloud Code Engine project ID, and app-name for the name of the Code Engine application. It also defines a workspace named shared-data for sharing data between tasks. The Pipeline consists of four tasks: checkout, build, test, and deploy. The runAfter field specifies the order in which the tasks should be executed. This pipeline orchestrates the entire CI/CD process, from fetching the source code to deploying the application.
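Before wiring up triggers, it's worth applying everything and running the pipeline once by hand to confirm the tasks work end to end. Apply the Task and Pipeline definitions:

kubectl apply --filename checkout-task.yaml --filename build-task.yaml --filename test-task.yaml --filename deploy-task.yaml --filename tax-calculator-pipeline.yaml

Then kick off a run with a PipelineRun like the sketch below, where the repository URL, image name, project, and application name are placeholders to replace with your own values:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: tax-calculator-pipeline-run-
spec:
  pipelineRef:
    name: tax-calculator-pipeline
  params:
    - name: repo-url
      value: https://github.com/<your-org>/tax-calculator.git
    - name: image-name
      value: <registry>/<namespace>/tax-calculator:latest
    - name: project-id
      value: <your-code-engine-project>
    - name: app-name
      value: <your-app-name>
  workspaces:
    - name: shared-data
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi

Because this manifest uses generateName, create it with kubectl create --filename (rather than kubectl apply) and follow the run with tkn pipelinerun logs --last --follow.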
Triggering the Pipeline with Tekton Triggers
To make our CI/CD pipeline fully automated, we need to set up triggers. Tekton Triggers allow us to automatically start our pipeline in response to events, such as commits to the main branch. This is where the magic happens! We'll use Tekton Triggers to listen for these events and kick off our pipeline. Tekton Triggers provide a flexible and powerful way to automate the execution of our pipelines, making our CI/CD process truly seamless.
Setting Up Tekton Triggers
First, we need to install Tekton Triggers. Similar to Tekton Pipelines, we can install Triggers using kubectl:
kubectl apply --filename https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml
This command installs the necessary components for Tekton Triggers on our Kubernetes cluster. After running this command, we can check the status of the Triggers installation by listing the pods in the tekton-pipelines namespace:
kubectl get pods --namespace tekton-pipelines
We should see additional pods related to Tekton Triggers running alongside the Tekton Pipelines pods. If all pods are in the Running state, we're ready to configure our triggers.
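One note: recent Tekton Triggers releases ship the core interceptors, including the GitHub interceptor we'll reference below, in a separate manifest. If your release splits them out this way, install them as well:

kubectl apply --filename https://storage.googleapis.com/tekton-releases/triggers/latest/interceptors.yaml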
Creating a TriggerTemplate
A TriggerTemplate defines the Tekton resources that will be created when a trigger is activated. In our case, we want to create a PipelineRun. Let's create a YAML file named trigger-template.yaml:
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: tax-calculator-trigger-template
spec:
  params:
    - name: repo-url
      description: The URL of the Git repository
    - name: image-name
      description: The name of the Docker image to build
    - name: project-id
      description: The IBM Cloud Code Engine project ID
    - name: app-name
      description: The name of the Code Engine application
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: tax-calculator-pipeline-run-
        labels:
          tekton.dev/pipeline: tax-calculator-pipeline
      spec:
        pipelineRef:
          name: tax-calculator-pipeline
        params:
          - name: repo-url
            value: $(tt.params.repo-url)
          - name: image-name
            value: $(tt.params.image-name)
          - name: project-id
            value: $(tt.params.project-id)
          - name: app-name
            value: $(tt.params.app-name)
        workspaces:
          - name: shared-data
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 1Gi
This TriggerTemplate defines four parameters that will be passed to the PipelineRun: repo-url, image-name, project-id, and app-name. It also defines a resource template that creates a PipelineRun using the tax-calculator-pipeline we defined earlier. The PipelineRun uses a volumeClaimTemplate to provision a persistent volume for the shared-data workspace, allowing data to be shared between tasks in the pipeline. This TriggerTemplate serves as the blueprint for creating PipelineRuns in response to trigger events.
Creating a TriggerBinding
A TriggerBinding extracts information from the incoming event and makes it available to the TriggerTemplate. For a GitHub push event, we can extract the repository URL and the commit SHA. Let's create a YAML file named trigger-binding.yaml:
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    - name: repo-url
      value: $(body.repository.url)
This TriggerBinding extracts repository.url from the webhook payload and makes it available as the repo-url parameter. We can add more parameters to extract additional information from the event, such as the commit SHA or branch name. This binding ensures that the necessary information from the trigger event is passed to the TriggerTemplate for creating the PipelineRun.
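For instance, a sketch of an extended binding that also pulls the commit SHA and branch reference from a GitHub push payload might look like this; any extra parameters would also need matching params in the TriggerTemplate, and the exact field paths depend on the webhook payload:

apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    - name: repo-url
      value: $(body.repository.url)
    - name: git-revision
      value: $(body.head_commit.id)
    - name: git-branch
      value: $(body.ref)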
Creating an EventListener
An EventListener listens for incoming events and processes them by applying TriggerBindings and TriggerTemplates. Let's create a YAML file named event-listener.yaml:
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-push-listener
spec:
  serviceAccountName: tekton-triggers-sa
  triggers:
    - name: github-push-trigger
      interceptors:
        - ref:
            name: "github"
          params:
            - name: "secretRef"
              value:
                secretName: github-webhook-secret
                secretKey: secret
            - name: "eventTypes"
              value: ["push"]
      bindings:
        - ref: github-push-binding
      template:
        ref: tax-calculator-trigger-template
This EventListener defines a trigger that listens for GitHub push events. It uses the github-push-binding to extract parameters from the event and the tax-calculator-trigger-template to create a PipelineRun. The interceptors section references the GitHub interceptor, restricting the trigger to push events and verifying the webhook signature against a Kubernetes Secret named github-webhook-secret. The serviceAccountName specifies the service account that the EventListener will use to access Kubernetes resources. This EventListener acts as the central point for receiving and processing trigger events, ensuring that the PipelineRun is created in response to the correct events.
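With these three files in place (and after creating the webhook secret described next), apply them to the cluster:

kubectl apply --filename trigger-template.yaml --filename trigger-binding.yaml --filename event-listener.yaml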
Creating a GitHub Webhook Secret
To secure our webhook, we need to create a GitHub webhook secret. This secret is used to verify that incoming events are indeed from GitHub. First, generate a random secret string; you can use a command like openssl rand -hex 20 to generate a random 20-byte hexadecimal string. Then, create a Kubernetes Secret to store the webhook secret:
kubectl create secret generic github-webhook-secret --from-literal=secret=<your-secret-string>
Replace <your-secret-string> with the actual secret you generated. This command creates a Kubernetes Secret named github-webhook-secret with a key named secret containing the webhook secret. This secret will be used by the EventListener to verify the authenticity of incoming webhook events.
Configuring the GitHub Webhook
Finally, we need to configure a GitHub webhook to send events to our EventListener. First, we need to expose our EventListener so that GitHub can reach it. We can do this by creating a Kubernetes Service of type LoadBalancer or NodePort. For simplicity, let's use NodePort. Create a Service YAML file named listener-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: github-push-listener-service
spec:
  selector:
    triggers.tekton.dev/eventlistener: github-push-listener
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 31000
  type: NodePort
Apply this Service to your cluster:
kubectl apply --filename listener-service.yaml
This Service exposes the EventListener on NodePort 31000. Now, find the external IP address of your Kubernetes nodes. If you're using Minikube, you can use the minikube ip command. If you're using a cloud-based Kubernetes service, you can find the external IP addresses in your cloud provider's console. Once you have the external IP address and the NodePort, you can configure the GitHub webhook.
Go to your GitHub repository settings, click on