Setting up a project with a security-hardened CI/CD pipeline
A Beginner’s Guide
Introduction
“Every journey begins with a single step.” - Lao Tzu
In our daily work, we often come across projects that were started by developers looking to quickly code features and ship software. The early focus is on shipping something, and as a result many security and automation considerations that should be taken into account are set aside in favor of speed. These early decisions to prioritize speed make the system more difficult and expensive to change later (this is known as technical debt).
Many modern code repository systems have built-in build tooling that feels like a gift from the DevOps gods:
- CI/CD pipelines, directly included in their repository provider (GitHub Actions, BitBucket Pipelines, GitLab CI/CD pipelines).
- Containers and container orchestration systems.
Source-integrated CI/CD pipelines and container orchestration give development teams the building blocks to map out, from day one, a scalable production system that meets many future compliance requirements. The early investment in automating the production build pipeline improves both the speed of deployments (the legendary 10 deployments per day) and the confidence in the correctness of what is deployed. These tools help the team ship faster from day one, and the pipeline automation supports the team as it grows.
In this article, we show an example of how to set up a software project with an automated CI/CD pipeline that builds and deploys to production the “right way” from the start. Setting up such a build process should take less than a day. Here is how to get started and automate your build pipeline.
Considerations and Stack Decisions
To demonstrate the ease with which this can be accomplished, we are going to start a simple React application project.
We are going to add recommended components and set up a baseline CI/CD pipeline, which continuously checks both the application code and the code defining the infrastructure.
We will focus on the setup of the CI/CD pipeline itself. Fixing the issues flagged by the code and configuration scans will be the subject of a subsequent article.
Here are our tech stack choices:
- GitHub, using Actions/Workflows, as our choice of CI/CD and repository.
- As a code coverage tool, we are going to use `react-scripts test --coverage`, which is included in the create-react-app template.
- As a linting tool, we are going to use Prettier.
- We will perform basic code scanning using Semgrep
- Deployment will be done using Docker containers. To simulate an environment, we will use Docker Compose.
- GitHub Secrets will be used to store our secrets
- CoGuard for infrastructure, configuration, and Docker image scanning.
- We will create a Redis instance for shared state
- The React app will sit behind NGINX, which will also act as a load balancer.
- The monitoring will be performed with the ELK stack.
- For authentication, we are assuming that an OIDC provider is given, which we can configure. The React application will use passport.js for the authentication logic and framework. The details will not be covered in this article.
- Since we are just looking at a plain front-end for now, we are going to omit any database related work.
Steps 1+2: Set up the code repository, and start with a code coverage workflow
Create a folder on your machine, and execute the following commands inside of it:
> git init
> git remote add origin https://github.com/coguardio/react-app-setup-for-blog.git
The URL in the second command points to the supplementary Git repository we set up for this blog article.
After installing `create-react-app`, simply run it inside the repository folder to scaffold the basics of a React application.
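A typical invocation looks like this (the TypeScript template is an assumption on our part; pick whichever template fits your project):
> npx create-react-app . --template typescript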
After this is done, we should set ourselves up with a good coverage threshold that fails the build when it is not met.
In the file `package.json`, add the following key-value pair.
"jest": {
"collectCoverageFrom": [
"src/**/*.tsx",
"src/**/*.jsx",
"src/**/*.js",
"src/**/*.ts",
"src/components/*.tsx",
"!**/node_modules/**",
"!**/src/test-utils/**",
"!**/src/index.tsx"
],
"coverageThreshold": {
"global": {
"lines": 80
}
}
}
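You can verify the threshold locally before pushing by running the same command that the workflow below will use:
> npm test -- --coverage --watchAll=false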
Now, in order to make test coverage part of the build from the start, and to fail the build when the threshold is not met, we are going to create a GitHub Action that checks it on each push and pull request.
For that, create the folder .github/workflows, and inside it a file called coverage.yml.
In order to get the basic coverage report for your project, coverage.yml needs the following lines.
name: Coverage for the React application
run-name: Pull request ${{github.event.number}} is being tested for coverage
on:
  - push
  - pull_request
jobs:
  run_coverage_test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm install
      - run: npm test -- --coverage --watchAll=false
The test initially fails, since the coverage is below the threshold. As mentioned before, we will deal with fixing this in a future article.
Step 3: Linting
In order to add Prettier to our project, we simply run
> npm install --save-dev prettier
and create a new file .github/workflows/prettier.yaml with the following content:
name: Prettier check for the React application
run-name: Pull request ${{github.event.number}} is being tested for good style
on:
  - push
  - pull_request
jobs:
  run_lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm install
      - run: npx prettier --check ./src/
      - run: npx eslint src/**
This job also fails from the start. As you can see, we added ESLint to the same workflow, since it contains additional checks for potential bugs.
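Prettier works out of the box with its defaults. If you want to pin the style explicitly, a minimal `.prettierrc` at the repository root could look like the following (the specific options here are only an illustration, not part of the companion repository):
{
  "singleQuote": true,
  "trailingComma": "es5",
  "printWidth": 100
}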
Step 4: Add Semgrep as a code scanning tool
In order to use Semgrep on our code base, we run the official Semgrep container image inside a workflow job. Hence, we add a file called .github/workflows/semgrep.yml with the following content.
name: Running SemGrep for some static analysis
run-name: Pull request ${{github.event.number}} is being statically analyzed.
on:
  - push
  - pull_request
jobs:
  run_semgrep:
    name: Scan
    runs-on: ubuntu-20.04
    container:
      image: returntocorp/semgrep
    steps:
      - uses: actions/checkout@v3
      - run: semgrep ci
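Note that `semgrep ci` is designed to pair with a Semgrep account: if you have one, you would typically store its token as an additional GitHub Secret and expose it to the job by adding an env block to run_semgrep, roughly like this (the secret name is our assumption):
    env:
      SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}
Without an account, running `semgrep scan --config auto` instead is an alternative that pulls rules from the public registry.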
Step 5: Create a basic Docker container for the React app, and add it to a docker-compose file
The basic Docker container should already follow some good principles. We are putting the Dockerfile at the root of the repository, and it contains the following lines.
FROM alpine:3.16.2 AS builder
# Installing the necessary packages
RUN apk add --update npm
# Copying the source files and building
RUN mkdir -p /opt/build_dir
COPY . /opt/build_dir
WORKDIR /opt/build_dir
RUN npm install
RUN npm run-script build
FROM alpine:3.16.2
RUN apk add --update npm
RUN apk add curl
RUN npm install -g serve
# Setting of environment variables
ENV PORT=3000
ENV HOME_FOLDER=/home/reactserving
# Creating a separate user
RUN addgroup -S reactserving
RUN adduser -S reactserving -G reactserving
# Copying the build folder into the home folder
COPY --from=builder /opt/build_dir/build $HOME_FOLDER/build
# Setting the necessary permissions
RUN chown -R reactserving:reactserving $HOME_FOLDER/build
# Adding a healthcheck, which checks if the app is listening on the specified port
HEALTHCHECK CMD \
curl -f localhost:$PORT
# Changing the user
USER reactserving
EXPOSE $PORT
CMD serve -s -l $PORT $HOME_FOLDER/build
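To sanity-check the image locally before wiring it into Compose, you can build and run it directly (the image tag below is arbitrary):
> docker build -t react-app-blog .
> docker run --rm -p 3000:3000 react-app-blog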
And the simple docker-compose.yml will look like the following:
version: '3.8'
services:
  react-client:
    build: .
    restart: always
    hostname: react-client.docker-example.net
    logging:
      options:
        max-size: 100m
        max-file: '3'
    networks:
      custom_react_network:
        ipv4_address: 172.16.238.2
        ipv6_address: 2001:3984:3989::2
networks:
  custom_react_network:
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
        - subnet: 2001:3984:3989::/64
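With this in place, bringing the stack up locally is a couple of commands (note that, depending on your Docker version, assigning static IPv6 addresses may additionally require `enable_ipv6: true` on the network definition):
> docker compose build
> docker compose up -d
> docker compose logs -f react-client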
Steps 6+7: Infrastructure Scanning initialization and Secret Storage
We are using CoGuard to scan our Dockerfiles and images. For that, we are going to set up two secrets, namely the CoGuard username and password. Then we are going to add a workflow that, as a starting point, uploads the Dockerfile to CoGuard.
After creating an account, we store the username and password inside GitHub Secrets with the identifiers
- COGUARD_USER_NAME
- COGUARD_PASSWORD
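These secrets can be added through the repository's Settings → Secrets and variables → Actions page or, if you prefer the GitHub CLI, from the terminal (you will be prompted for each value):
> gh secret set COGUARD_USER_NAME
> gh secret set COGUARD_PASSWORD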
Now, we are creating an action to upload the data to CoGuard. This is done with a new workflow file, .github/workflows/coguard.yml, with the following content:
name: CoGuard Scanning for infrastructure related files
run-name: Pull request ${{github.event.number}} is being scanned for misconfigurations.
on:
  - push
  - pull_request
jobs:
  run_coguard_check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Obtain an API token using the CoGuard credentials stored in GitHub Secrets
      - run: |
          BEARER=$(curl -f -d 'client_id=client-react-frontend' \
            -d "username=${{ secrets.COGUARD_USER_NAME }}" \
            -d "password=${{ secrets.COGUARD_PASSWORD }}" \
            -d 'grant_type=password' \
            "https://portal.coguard.io/auth/realms/coguard/protocol/openid-connect/token" | jq -r '.access_token');
          echo "BEARER=$BEARER" >> $GITHUB_ENV
      # Create the cluster entry representing this project
      - run: |
          curl -X POST \
            -H "Authorization: Bearer $BEARER" \
            -d '{"clusterType": "", "location": "", "derivedFrom": ""}' \
            "https://portal.coguard.io/server/cluster/add-cluster/react-test-app?organizationName=foobar"
      # Register the machine on which the container will run
      - run: |
          curl -X POST \
            -H "Content-Type: application/json" \
            -H "Authorization: Bearer $BEARER" \
            -d '{"id": "react-app-container", "hostName": "", "externalIp": "", "internalIp": "172.16.238.2"}' \
            "https://portal.coguard.io/server/cluster/add-new-machine/react-test-app?organizationName=foobar"
      # Register the Dockerfile as a service on that machine
      - run: |
          curl -X POST \
            -H "Content-Type: application/json" \
            -H "Authorization: Bearer $BEARER" \
            -d '{"serviceName": "dockerfile", "serviceKey": "dockerfile_react", "version": ""}' \
            "https://portal.coguard.io/server/cluster/add-new-service/react-test-app/react-app-container?organizationName=foobar"
      # Upload the Dockerfile itself
      - run: |
          curl -f \
            -X POST \
            --header "Content-Type: application/octet-stream" \
            -H "Authorization: Bearer $BEARER" \
            --data-binary @./Dockerfile \
            "https://portal.coguard.io/server/cluster/upsert-config-file/react-test-app/react-app-container/dockerfile_react?organizationName=foobar&fileName=Dockerfile&defaultFileName=Dockerfile&subPath=."
      # Trigger a new report run
      - run: |
          curl -f \
            -X PUT \
            -H "Authorization: Bearer $BEARER" \
            "https://portal.coguard.io/server/cluster/run-report/react-test-app?organizationName=foobar"
      # Determine the timestamp of the latest report
      - run: |
          LATEST_REPORT_TIMESTAMP=$(curl -k -f -H \
            "Authorization: Bearer $BEARER" \
            "https://portal.coguard.io/server/cluster/reports/list?organizationName=foobar&clusterName=react-test-app" \
            | jq -r 'last(.[])');
          echo "LATEST_REPORT_TIMESTAMP=$LATEST_REPORT_TIMESTAMP" >> $GITHUB_ENV;
      # Download the report and fail this step if the number of failed checks is not zero
      - run: |
          curl -f \
            -X GET \
            -H "Authorization: Bearer $BEARER" \
            "https://portal.coguard.io/server/cluster/report?organizationName=foobar&clusterName=react-test-app&reportName=$LATEST_REPORT_TIMESTAMP" | \
            jq -r '.failed | length' | \
            xargs -n 1 test 0 -eq
Steps 8+9: Adding Redis and NGINX
We will add Redis and NGINX as containers. However, we will also ensure that their respective configuration files are tracked in our repository.
We will store them inside subfolders in docker_images/nginx and docker_images/redis.
Let us start with NGINX. We will assume right away that TLS traffic will be enabled, and hence we will need our first certificate pair.
We will generate it inside docker_images/nginx/conf/conf.d (matching the paths the Dockerfile below copies from). Note that this produces a self-signed certificate pair, which is fine for local development; a production setup should use certificates issued by a real CA.
The command to generate one is the following:
openssl req -x509 -newkey rsa:4096 -keyout server.key -out server.pem -days 10000 -nodes
The Dockerfile for nginx looks like the following:
FROM nginx:1.18.0-alpine
COPY conf/nginx.conf /etc/nginx/nginx.conf
COPY conf/mime.types /etc/nginx/mime.types
COPY conf/conf.d/server.key /etc/nginx/conf.d/server.key
COPY conf/conf.d/server.pem /etc/nginx/conf.d/server.pem
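The mime.types file referenced above does not need to be written by hand; one way to obtain it (an assumption on our part, since there are several options) is to copy the stock file shipped with the nginx image:
> docker run --rm nginx:1.18.0-alpine cat /etc/nginx/mime.types > docker_images/nginx/conf/mime.types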
The nginx.conf should be configured in the following way:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    limit_req_zone $binary_remote_addr zone=allzones:10m rate=10r/s;
    keepalive_timeout 65;
    upstream react-client {
        server 172.16.238.2:3000;
    }
    # Redirecting port 80 to ssl
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443 ssl;
        server_name localhost;
        ssl_certificate /etc/nginx/conf.d/server.pem;
        ssl_certificate_key /etc/nginx/conf.d/server.key;
        ssl_protocols TLSv1.2;
        location / {
            limit_req zone=allzones burst=30 delay=15;
            limit_req_status 429;
            proxy_pass http://react-client/;
        }
    }
}
The docker-compose file entry for nginx will then look like the following:
  nginx:
    build: ./docker_images/nginx
    restart: always
    hostname: nginx.docker-example.net
    ports:
      - 80:80
      - 443:443
    logging:
      options:
        max-size: 100m
        max-file: '3'
    networks:
      custom_react_network:
        ipv4_address: 172.16.238.3
        ipv6_address: 2001:3984:3989::3
We will do the same with Redis.
The Dockerfile for Redis looks like the following:
FROM redis:7.0.5-alpine
COPY conf/redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
The configuration file redis.conf is stored in the conf subdirectory and is copied (for now) from the Redis example configuration found here: https://redis.io/docs/management/config-file/
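A few directives in that file are worth flagging already; the values below are purely illustrative, and the actual hardening is deferred to the follow-up article:
# Only listen on the container's address inside the custom network
bind 172.16.238.4
# Keep protected mode enabled so unauthenticated remote connections are refused
protected-mode yes
# Require clients to authenticate (inject the real value from a secret; do not commit it)
requirepass change-me-please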
The docker-compose entry for Redis looks like the following:
  redis:
    build: ./docker_images/redis
    restart: always
    hostname: redis.docker-example.net
    logging:
      options:
        max-size: 100m
        max-file: '3'
    networks:
      custom_react_network:
        ipv4_address: 172.16.238.4
        ipv6_address: 2001:3984:3989::4
Our goal will also be to have the configurations scanned by CoGuard. The additional lines in .github/workflows/coguard.yml are the following:
      - run: |
          curl -X POST \
            -H "Content-Type: application/json" \
            -H "Authorization: Bearer $BEARER" \
            -d '{"serviceName": "dockerfile", "serviceKey": "dockerfile_nginx", "version": ""}' \
            "https://portal.coguard.io/server/cluster/add-new-service/react-test-app/react-app-container?organizationName=foobar"
      - run: |
          curl -f \
            -X POST \
            --header "Content-Type: application/octet-stream" \
            -H "Authorization: Bearer $BEARER" \
            --data-binary @./docker_images/nginx/Dockerfile \
            "https://portal.coguard.io/server/cluster/upsert-config-file/react-test-app/react-app-container/dockerfile_nginx?organizationName=foobar&fileName=Dockerfile&defaultFileName=Dockerfile&subPath=."
      - run: |
          curl -X POST \
            -H "Content-Type: application/json" \
            -H "Authorization: Bearer $BEARER" \
            -d '{"serviceName": "dockerfile", "serviceKey": "dockerfile_redis", "version": ""}' \
            "https://portal.coguard.io/server/cluster/add-new-service/react-test-app/react-app-container?organizationName=foobar"
      - run: |
          curl -f \
            -X POST \
            --header "Content-Type: application/octet-stream" \
            -H "Authorization: Bearer $BEARER" \
            --data-binary @./docker_images/redis/Dockerfile \
            "https://portal.coguard.io/server/cluster/upsert-config-file/react-test-app/react-app-container/dockerfile_redis?organizationName=foobar&fileName=Dockerfile&defaultFileName=Dockerfile&subPath=."
      - run: |
          curl -X POST \
            -H "Content-Type: application/json" \
            -H "Authorization: Bearer $BEARER" \
            -d '{"serviceName": "nginx", "serviceKey": "nginx_ingress_react", "version": ""}' \
            "https://portal.coguard.io/server/cluster/add-new-service/react-test-app/react-app-container?organizationName=foobar"
      - run: |
          curl -f \
            -X POST \
            --header "Content-Type: application/octet-stream" \
            -H "Authorization: Bearer $BEARER" \
            --data-binary @./docker_images/nginx/conf/nginx.conf \
            "https://portal.coguard.io/server/cluster/upsert-config-file/react-test-app/react-app-container/nginx_ingress_react?organizationName=foobar&fileName=nginx.conf&defaultFileName=nginx.conf&subPath=."
      - run: |
          curl -f \
            -X POST \
            --header "Content-Type: application/json" \
            -H "Authorization: Bearer $BEARER" \
            -d '{"fileName": "mime.types", "subPath": ".", "aliasList": ["/etc/nginx/mime.types;"]}' \
            "https://portal.coguard.io/server/cluster/upsert-complimentary-file-entry/react-test-app/react-app-container/nginx_ingress_react?organizationName=foobar"
      - run: |
          curl -f \
            -X POST \
            --header "Content-Type: application/octet-stream" \
            -H "Authorization: Bearer $BEARER" \
            --data-binary @./docker_images/nginx/conf/mime.types \
            "https://portal.coguard.io/server/cluster/upsert-complimentary-file/react-test-app/react-app-container/nginx_ingress_react?organizationName=foobar&fileName=mime.types&subPath=."
Step 10: Monitoring with the ELK Stack
We are not going to give a full description here of how to include the ELK stack in the docker-compose file. We will, for now, follow the instructions as given here and talk about the alterations needed.
The alteration needed to get this working in the docker-compose file is to add the `networks` key to every ELK service and provide proper IP addresses, as sketched right below. As with the services before, we will discuss in a subsequent article how to secure the configuration of each element.
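As an illustration, the Elasticsearch service entry could follow the same pattern as the other services (the IP addresses below are our assumption; Kibana would get an analogous entry):
  elasticsearch:
    build: ./docker_images/elasticsearch
    restart: always
    hostname: elasticsearch.docker-example.net
    logging:
      options:
        max-size: 100m
        max-file: '3'
    networks:
      custom_react_network:
        ipv4_address: 172.16.238.5
        ipv6_address: 2001:3984:3989::5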
As always, we want to be in control of the individual configurations, and hence we create our own Dockerfiles with versioned configurations.
ElasticSearch:
FROM elasticsearch:8.5.0
COPY conf/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
Kibana:
FROM kibana:8.5.0
COPY conf/kibana.yml /usr/share/kibana/config/kibana.yml
The elasticsearch.yml and kibana.yml are, for now, the default ones shipped with the containers. In order to include them in the scanning, we add the following lines to coguard.yml:
      - run: |
          curl -X POST \
            -H "Content-Type: application/json" \
            -H "Authorization: Bearer $BEARER" \
            -d '{"serviceName": "dockerfile", "serviceKey": "dockerfile_elasticsearch", "version": ""}' \
            "https://portal.coguard.io/server/cluster/add-new-service/react-test-app/react-app-container?organizationName=foobar"
      - run: |
          curl -f \
            -X POST \
            --header "Content-Type: application/octet-stream" \
            -H "Authorization: Bearer $BEARER" \
            --data-binary @./docker_images/elasticsearch/Dockerfile \
            "https://portal.coguard.io/server/cluster/upsert-config-file/react-test-app/react-app-container/dockerfile_elasticsearch?organizationName=foobar&fileName=Dockerfile&defaultFileName=Dockerfile&subPath=."
      - run: |
          curl -X POST \
            -H "Content-Type: application/json" \
            -H "Authorization: Bearer $BEARER" \
            -d '{"serviceName": "elasticsearch", "serviceKey": "elasticsearch_base", "version": ""}' \
            "https://portal.coguard.io/server/cluster/add-new-service/react-test-app/react-app-container?organizationName=foobar"
      - run: |
          curl -f \
            -X POST \
            --header "Content-Type: application/octet-stream" \
            -H "Authorization: Bearer $BEARER" \
            --data-binary @./docker_images/elasticsearch/conf/elasticsearch.yml \
            "https://portal.coguard.io/server/cluster/upsert-config-file/react-test-app/react-app-container/elasticsearch_base?organizationName=foobar&fileName=elasticsearch.yml&defaultFileName=elasticsearch.yml&subPath=."
Summary
This article demonstrates how to start building a React application, commit it to GitHub, and set up linting and code scanning. We then used GitHub Actions, Docker, Docker Compose, GitHub Secrets and CoGuard to build and secure the initial CI/CD pipeline. This initial investment lays the groundwork for a faster deployment pipeline and adds key tools that help the team build faster and reduce the risk of accidentally introducing bugs.
All of the configuration files, from the Infrastructure as Code (IaC) layer down to the individual configurations of infrastructure and containers, have been versioned and added to the source repository. This is the foundation for the development team to grow and ship code securely.
The full repository for this blog article can be found here. It reflects the state of the project as it stands at the end of this article.
Next Steps
In subsequent articles, we are going to tackle resolving the issues flagged by the different scanners and coverage checkers. This will include configuring the build pipeline so that blocking issues terminate the workflow, and having the pipeline create tickets in the issue management system so that a developer can pick them up and build on top of them. We will also publish an article outlining the reasoning behind each tool choice.