These articles are meant to be applied at the beginning of a project, when there is minimal risk in implementing these changes. As projects evolve, changes to the linting and code scanning setup require additional investment of both time and money.
In our previous article, we set out to start a project building a React application on GitHub with linting, code scanning and monitoring set up. Our setup uses GitHub Actions, Docker, Docker Compose, GitHub Secrets and CoGuard to build and secure the CI/CD pipeline. In the previous post, we put in place the necessary tools to ensure that any code written and the configurations set can be scanned as part of the build process.
In this article, we go through the steps to fix each flagged item, so that a team can move forward and create features on a strong foundation.
We have some checks that have failed.
Running SemGrep for some static analysis
The static code analyzer passed; it did not flag anything. This is not surprising, since there is very little code to analyze at the beginning.
CoGuard scanning for infrastructure-related files
The various default configurations of the extra services in the infrastructure were flagged. This includes:
ElasticSearch
NGINX
Dockerfiles and Docker Compose files
Coverage for the React application
The code coverage check failed. We required at least 80% coverage, and a bare project does not start with enough tests to achieve this.
Prettier for the React application
There are additional items from the Prettier/linting step that are relatively minor but need to be addressed.
Dependabot alerts
GitHub’s own Dependabot has scanned the project dependencies and discovered some outdated libraries in the React project.
Getting Started: Fixing Issues
We will start with the easiest items and move our way up to the most difficult ones.
Dependabot alerts
Dependabot maps the different libraries and versions inside the repository to common known vulnerabilities and exposures.
The code is initialized as a NodeJS project using React. We can use NPM’s functionality to fix some of the flagged issues.
We execute:
npm audit
At the initial state of the project, it reports 7 high-severity vulnerabilities, which match the raised Dependabot alerts.
We start by trying the built-in npm tooling for a simple fix. In my experience this only works about 10% of the time, but it’s worth a try.
npm audit fix
This fixes one of the high-priority issues, but it warns that breaking changes are necessary for the others. When reviewing the suggested changes, I noticed that npm suggests downgrading react-scripts from version 5.0.1 to 2.1.3. This is strange: it would introduce more vulnerabilities due to the age of the suggested packages. I attempt to force the necessary fixes anyway.
npm audit fix --force
Unsurprisingly, the result introduces more errors: it has turned our 6 high vulnerabilities into 72 high vulnerabilities.
In order to remove the remaining flagged items, the package-lock.json needs to be altered to force the newer version.
We search package-lock.json for all blocks of the form:
"dependencies": {
  "nth-check": [VERSION]
}
and replace [VERSION] with "^2.0.1" wherever the existing version is lower than that. Afterwards, run
npm install
to apply the changes. The output will also tell you that you have 0 audit issues now.
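As an alternative to hand-editing package-lock.json, newer npm versions (8.3 and up) support an overrides field in package.json, which forces a transitive dependency onto a given version range. A minimal sketch for the nth-check case:

```json
{
  "overrides": {
    "nth-check": "^2.0.1"
  }
}
```

After adding the field, run npm install again; you can verify the resolved version with npm ls nth-check.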
Author’s Remark: It is important to always check the application when doing manual version changes, such as for nth-check. Hence, run the application and ensure nothing breaks. Luckily, in this case, the APIs of the affected packages have not changed much, and the application was still running. In cases where changing the version breaks your program, you instead need to ensure that the functions mentioned in the CVE are not called, or are only called in a controlled way. Keep track of those changes until a new version of the respective parent library is available which loads the newer version.
Prettier for the React application
The next item is to fix the linting issues flagged in the project. We have selected Prettier as our linter of choice.
Run:
npx prettier --check .
This needs to be executed inside the root directory. Some generated files (such as the coverage output) are flagged. This can be prevented by creating a .prettierignore file and copying the contents of .gitignore into it.
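If you prefer not to mirror the whole .gitignore, a minimal .prettierignore for a create-react-app project could look like the following (the entries are assumptions based on the default project layout):

```text
# dependencies and generated output that Prettier should not check
node_modules
build
coverage
```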
After ensuring that the files flagged in the check are ones you are interested in fixing, run
npx prettier --write .
to let Prettier fix them automatically.
Coverage for the React application
We set a code coverage threshold of 80% for this project, and the initial code coverage was below that threshold.
There is not a lot of code yet; there are only 3 files, so it is easy to write tests that cover them. The files are:
App.js
index.js
reportWebVitals.js
Luckily, for App.js, a test is already present. It remains to create test files for index.js and reportWebVitals.js. The tests we propose are the following.
reportWebVitals.test.js:
import { getCLS, getFID, getFCP, getLCP, getTTFB } from "web-vitals";
import reportWebVitals from "./reportWebVitals";

jest.mock("web-vitals");

describe("reportWebVitals test", () => {
  it("Should get all the different functions to be called once", async () => {
    const calledWith = "foo";
    await reportWebVitals(() => calledWith);
    await new Promise(process.nextTick); // needed since the return value of reportWebVitals is not a promise
    expect(getFID).toHaveBeenCalledTimes(1);
    expect(getFCP).toHaveBeenCalledTimes(1);
    expect(getLCP).toHaveBeenCalledTimes(1);
    expect(getTTFB).toHaveBeenCalledTimes(1);
    expect(getCLS).toHaveBeenCalledTimes(1);
  });
});
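For index.js, a similar mocking approach can be used. The following is a sketch, assuming the default create-react-app index.js (which calls ReactDOM.createRoot on the element with id "root" and then renders the App) and that tests are run through react-scripts, so the stylesheet import inside index.js is handled:

```javascript
// index.test.js — a sketch under the assumptions stated above
import { createRoot } from "react-dom/client";

// Replace the real createRoot with a mock that records calls
jest.mock("react-dom/client", () => ({
  createRoot: jest.fn(() => ({ render: jest.fn() })),
}));

describe("index test", () => {
  it("creates a React root on the #root element and renders into it", () => {
    // index.js looks up an element with id "root", so provide one
    const container = document.createElement("div");
    container.id = "root";
    document.body.appendChild(container);

    require("./index"); // executing the module triggers the render

    expect(createRoot).toHaveBeenCalledWith(container);
    expect(createRoot.mock.results[0].value.render).toHaveBeenCalledTimes(1);
  });
});
```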
The next flagged item is a check which ensures that the container image is not run as the root user. While the execution may be sandboxed, one always wants to limit access as much as possible.
For all containers but one, this was an easy fix. ElasticSearch, Kibana and Redis already run as non-root users, so we just had to add the respective `USER` directive in our Dockerfiles to let the scanner know that it stays that way. For NGINX, the user was created, but the main process was run by root. In order to make it work with an unprivileged user, we added the following lines:
RUN chown -R nginx:nginx /etc/nginx
RUN chown -R nginx:nginx /var/cache/nginx
RUN touch /var/run/nginx.pid
# The next line violates CIS benchmark 2.3.3, but is
# unavoidable for now if we do not want to run NGINX
# as root, which poses a higher threat than the ownership
# of this one PID file.
RUN chown -R nginx:root /var/run/nginx.pid
RUN chmod 0644 /var/run/nginx.pid
This change violates CIS benchmark 2.3.3. There are special considerations in making it, but we believe that violating the benchmark is unavoidable, and that changing the ownership of this one PID file is a lower threat than running NGINX as root. This is a situation where we are left choosing the lesser of two evils. We have filed a bug report to find other solutions.
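With the ownership adjusted, the image can switch its main process over to the unprivileged user. A sketch of the remaining Dockerfile directive follows; note that a non-root process cannot bind ports below 1024 by default, so this assumes the NGINX configuration listens on unprivileged ports (e.g. 8080/8443):

```dockerfile
# Run the main process as the nginx user provided by the base image
USER nginx
```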
Pinning versions for installation
There is an expectation that building an unchanged Docker image produces the same result. There are situations where this is not true. For example, when packages are installed by a package manager as part of the build scripts, they can silently move to a newer version whenever one becomes available. To prevent this, CoGuard requires versions to be pinned for the installation.
In order to satisfy this check, the installation lines are altered to the following:
RUN apk add npm=8.10.0-r0
RUN apk add curl=7.83.1-r5
RUN npm install -g serve@14.1.2
Putting system logs into a Docker volume
For post-mortem analysis, system logs are an essential source of information.
This is why the general recommendation is to put `/var/log` into a volume, which can be analyzed in case a container crashes. At a minimum, this is achieved by adding `VOLUME /var/log` inside the Dockerfiles. The volumes then appear when you run `docker volume ls`.
Healthcheck parameter set
How do we know that the application running inside the container is running and healthy?
For that, one can define a `HEALTHCHECK` directive inside the Dockerfile. This probes if everything is up and running as expected.
For Kibana and ElasticSearch, the health checks are defined in the parent image. Hence, unless we want to add further checks custom to our application, we can ignore the flag for now. In order to disable it, we can add
#coguard-config-checker: disable dockerfile_container_healthcheck_parameter Check already in parent image
For NGINX, we can add the following line for a simple health check
HEALTHCHECK CMD \
curl -k -f https://localhost
It is important to add the -f flag to curl so that a response outside the 2xx/3xx range makes the process fail and triggers Docker to mark the container as unhealthy. We added -k so that self-signed certificates do not cause issues here.
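The HEALTHCHECK directive also accepts tuning flags for probe frequency and failure tolerance. A sketch with explicit values (the numbers here are illustrative, not a recommendation):

```dockerfile
# Probe every 30s, allow 5s per probe, mark unhealthy after 3 failures
HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD \
    curl -k -f https://localhost || exit 1
```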
The fixes for ElasticSearch are mostly straightforward; they require changing settings inside the configuration file. There were, however, a couple of configuration parameters that needed closer understanding.
FIPS mode could not be enabled
FIPS mode could not be enabled, since it is a feature of the enterprise version of ElasticSearch. To prevent CoGuard from marking it as a failure, we added a disable-checker comment, analogous to the healthcheck one shown above.
We have whitelisted IPs and connections from the internal network. Depending on your project, you may want to shorten that list and limit access even further.
Index Configuration Refresh Interval
In the latest versions of ElasticSearch, index configurations are done dynamically, so we needed to disable the check for the index refresh interval setting.
Encrypt Sensitive Data
Finally, the setting to let the watcher encrypt sensitive data required an additional generation of a key shared by all nodes. This was the most complicated piece of this change. We needed to alter the setup script to create an encryption key, and then alter the entrypoint to pull it into each node.
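For reference, the key generation and distribution steps can be outlined as follows. The exact binary paths and the keystore invocation depend on the ElasticSearch version, so treat this as a sketch rather than the literal script change:

```shell
# Generate a system key once (creates a system_key file under the config directory)
bin/elasticsearch-syskeygen

# On every node, load the same key file into the ElasticSearch keystore
bin/elasticsearch-keystore add-file xpack.watcher.encryption_key config/system_key
```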
Whenever you alter an internal script from an upstream project, you need to keep track of upstream changes. In our case, we captured the current hash of the entrypoint script and made the build fail if it ever changes, so that the difference can be reviewed. This is done with the line
RUN test $(md5sum /usr/local/bin/docker-entrypoint.sh | awk '{print $1}') = "8fb8c5e0e9eeb1eb2bae786401c46fa1"
The main changes for NGINX were to include the OWASP-recommended security headers, and to ensure that some built-in counter-measures against DDoS attacks are implemented.
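As an illustration of what such changes can look like, here is a sketch of common security headers and a basic per-IP request limit. The concrete values must be tuned per application, and the zone definition belongs in the http context of nginx.conf:

```nginx
# Common security headers recommended by the OWASP secure headers project
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Content-Security-Policy "default-src 'self'" always;

# Basic rate limiting as a simple DDoS counter-measure (http context)
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

# Inside a server or location block, apply the limit:
# limit_req zone=per_ip burst=20;
```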
This concludes all the fixes necessary to make the build jobs we created in the previous article pass.
We have a project with a React application on GitHub, using GitHub Actions for the CI/CD pipeline with a code analyzer (SemGrep), a linter (Prettier) and a configuration analyzer (CoGuard) to evaluate the code and infrastructure. We can now start building our application and functionality from a stable foundation.
You can use this article as a guideline for your own projects, and it is not restricted to React/NodeJS specifically. We have shown how to initialize a project, and the components that need to be present. With this article, you get a sense on what it takes to fix the initial items that are flagged. Maintaining a clean report for each tool as you build the project will set you up for scale, and give you confidence that your changes will not break things that easily.
For more information on how CoGuard can help you secure the infrastructure around your project, please contact us today.
Check out and explore a test environment to run infra audits on sample repositories of web applications and view select reports on CoGuard's interactive dashboard today.