Openshift Deployment Prerequisites
We assume you are logged into OpenShift and are in the repo/openshift local directory. We will run the scripts from there.
Add Default Kubernetes Network Policies
Before deploying, ensure that the deny-by-default and allow-from-openshift-ingress Network Policies are in place by running the following:
export NAMESPACE=<yournamespace>
oc process -n $NAMESPACE -f https://raw.githubusercontent.com/wiki/bcgov/nr-get-token/assets/templates/default.np.yaml | oc apply -n $NAMESPACE -f -
Environment Setup - ConfigMaps and Secrets
Some requirements in the target OpenShift namespace/project sit outside of the CI/CD pipeline process. This application requires that a few Secrets as well as ConfigMaps are already present in the environment before it is able to function as intended. Otherwise, the Jenkins pipeline will fail the deployment by design.
In order to prepare an environment, you will need to ensure that all of the following config maps and secrets are populated. This is achieved by executing the following commands as a project administrator of the targeted environment. Note that this must be repeated for each of the target deployment namespaces/projects (i.e. dev, test and prod) as they are independent of each other; deployments will fail otherwise. Refer to custom-environment-variables for the direct mapping of environment variables to the app.
Config Maps
Note: Replace anything in angle brackets with the appropriate value!
Note 2: The Keycloak Public Key can be found in the Keycloak Admin Panel under Realm Settings > Keys. Look for the Public key button (normally under the RS256 row), and click to see the key. The key should begin with a pattern of MIIBIjANB...
export NAMESPACE=<yournamespace>
export APP_NAME=<yourappshortname>
export PUBLIC_KEY=<yourkeycloakpublickey>
export REPO_NAME=common-hosted-form-service
# parameters for Fluent-bit container
export FLUENTD=<yourfluentdendpoint>
export AWS_DEFAULT_REGION=<AWS region>
export AWS_KINESIS_STREAM=<AWS Kinesis stream name>
export AWS_ROLE_ARN=<AWS credential>
oc create -n $NAMESPACE configmap $APP_NAME-frontend-config \
--from-literal=FRONTEND_APIPATH=api/v1 \
--from-literal=FRONTEND_BASEPATH=/app \
--from-literal=FRONTEND_ENV=dev \
--from-literal=FRONTEND_KC_REALM=cp1qly2d \
--from-literal=FRONTEND_KC_SERVERURL=https://dev.oidc.gov.bc.ca/auth
oc create -n $NAMESPACE configmap $APP_NAME-sc-config \
--from-literal=SC_CS_CHES_ENDPOINT=https://ches-dev.apps.silver.devops.gov.bc.ca/api \
--from-literal=SC_CS_CDOGS_ENDPOINT=https://cdogs-dev.apps.silver.devops.gov.bc.ca/api \
--from-literal=SC_CS_TOKEN_ENDPOINT=https://dev.oidc.gov.bc.ca/auth/realms/jbd6rnxw/protocol/openid-connect/token
oc create -n $NAMESPACE configmap $APP_NAME-server-config \
--from-literal=SERVER_APIPATH=/api/v1 \
--from-literal=SERVER_BASEPATH=/app \
--from-literal=SERVER_BODYLIMIT=30mb \
--from-literal=SERVER_KC_PUBLICKEY=$PUBLIC_KEY \
--from-literal=SERVER_KC_REALM=cp1qly2d \
--from-literal=SERVER_KC_SERVERURL=https://dev.oidc.gov.bc.ca/auth \
--from-literal=SERVER_LOGLEVEL=http \
--from-literal=SERVER_PORT=8080
Note: We use the NRS Object Storage for CHEFS.
oc create -n $NAMESPACE configmap $APP_NAME-files-config \
--from-literal=FILES_UPLOADS_DIR= \
--from-literal=FILES_UPLOADS_ENABLED=true \
--from-literal=FILES_UPLOADS_FILECOUNT=1 \
--from-literal=FILES_UPLOADS_FILEKEY=files \
--from-literal=FILES_UPLOADS_FILEMAXSIZE=25MB \
--from-literal=FILES_UPLOADS_FILEMINSIZE=0KB \
--from-literal=FILES_UPLOADS_PATH=files \
--from-literal=FILES_PERMANENT=objectStorage \
--from-literal=FILES_LOCALSTORAGE_PATH= \
--from-literal=FILES_OBJECTSTORAGE_BUCKET=egejyy \
--from-literal=FILES_OBJECTSTORAGE_ENDPOINT=https://nrs.objectstore.gov.bc.ca \
--from-literal=FILES_OBJECTSTORAGE_KEY=chefs/dev/
The following command creates an OpenShift config map that contains configuration files for our Fluent-bit log forwarder.
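This is a sketch only; the exact parameter names are defined by the fluent-bit.cm.yaml template, so verify them against the template before running (the parameter names below are assumptions based on the Fluent-bit values exported above).
# Sketch: process the Fluent-bit config map template and apply it to the target namespace
oc process -n $NAMESPACE -f fluent-bit.cm.yaml \
  -p NAMESPACE=$NAMESPACE \
  -p APP_NAME=$APP_NAME \
  -p FLUENTD=$FLUENTD \
  -p AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
  -p AWS_KINESIS_STREAM=$AWS_KINESIS_STREAM \
  -p AWS_ROLE_ARN=$AWS_ROLE_ARN \
  | oc apply -n $NAMESPACE -f -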
Secrets
Replace anything in angle brackets with the appropriate value!
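The exact secret names and keys are dictated by the deployment templates, so treat the following as a sketch only (the secret names and key names shown here are assumptions; verify them against the *.dc.yaml templates before running):
export NAMESPACE=<yournamespace>
export APP_NAME=<yourappshortname>
# Sketch: service client credentials used by the backend to call CHES/CDOGS (names assumed)
oc create -n $NAMESPACE secret generic $APP_NAME-sc-cs-secret \
  --from-literal=username=<yourserviceclientid> \
  --from-literal=password=<yourserviceclientsecret>
# Sketch: object storage credentials for file uploads (names assumed)
oc create -n $NAMESPACE secret generic $APP_NAME-objectstorage-secret \
  --from-literal=username=<yourobjectstorageaccesskey> \
  --from-literal=password=<yourobjectstoragesecretkey>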
Build Config & Deployment
This application is currently designed as a single application pod deployment. It hosts a static frontend containing all of the Vue.js resources and assets, and a Node.js backend which serves the API that the frontend requires. We currently leverage OpenShift Routes with path-based filtering in order to forward incoming traffic to the right deployment service.
Frontend
The frontend temporarily installs dependencies needed to generate the static assets that will appear in the /app/frontend/dist folder. These contents will be picked up by the application and hosted appropriately.
Application
The backend is a standard Node/Express server. It handles JWT-based authentication via the OIDC authentication flow, and exposes the API to authorized users. This deployment container is built on top of an Alpine Node image. The resulting container after the build is what is deployed.
Templates
The Jenkins pipeline heavily leverages OpenShift Templates in order to ensure that all of the environment variables, settings, and contexts are pushed to OpenShift correctly. Files ending with .bc.yaml specify the build configurations, while files ending with .dc.yaml specify the components required for deployment.
Build Configurations
Build configurations will emit and handle the chained builds or standard builds as necessary. They take in the following parameters:
Name | Required | Description |
---|---|---|
REPO_NAME | yes | Application repository name |
JOB_NAME | yes | Job identifier (i.e. 'pr-5' OR 'master') |
SOURCE_REPO_REF | yes | Git Pull Request Reference (i.e. 'pull/CHANGE_ID/head') |
SOURCE_REPO_URL | yes | Git Repository URL |
The template can be manually invoked and deployed via Openshift CLI. For example:
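This is a sketch only; the template file name app.bc.yaml and the parameter values below are assumptions, so substitute your actual template and values.
# Sketch: process the build config template and apply it to the namespace
oc process -n <namespace> -f openshift/app.bc.yaml \
  -p REPO_NAME=common-hosted-form-service \
  -p JOB_NAME=master \
  -p SOURCE_REPO_URL=https://github.com/bcgov/common-hosted-form-service.git \
  -p SOURCE_REPO_REF=master \
  | oc apply -n <namespace> -f -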
Note that these build configurations do not have any triggers defined. They will be invoked by the Jenkins pipeline, started manually in the console, or by an equivalent oc command for example:
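This is a sketch; the build config name is an assumption based on the APP_NAME and JOB_NAME naming convention.
# Sketch: start the build manually and stream its logs
oc start-build -n <namespace> <yourbuildconfigname> --follow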
Finally, we generally tag the resultant image so that the deployment config will know which exact image to use. This is also handled by the Jenkins pipeline. The equivalent oc command for example is:
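This is a sketch; the image stream name and tags are assumptions, following the build output image and the JOB_NAME convention.
# Sketch: tag the freshly built image so the deployment config picks it up
oc tag -n <namespace> <yourimagestream>:latest <yourimagestream>:<jobname>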
Note: Remember to swap out the bracketed values with the appropriate values!
Deployment Configurations
Deployment configurations will emit and handle the deployment lifecycles of running containers based off of the previously built images. They generally contain a deploymentconfig, a service, and a route. Before our application is deployed, Patroni (a Highly Available Postgres Cluster implementation) needs to be deployed. Refer to any patroni* templates and their official documentation for more details.
Our application template takes in the following parameters:
Name | Required | Description |
---|---|---|
REPO_NAME | yes | Application repository name |
JOB_NAME | yes | Job identifier (i.e. 'pr-5' OR 'master') |
NAMESPACE | yes | Which namespace/"environment" are we deploying to? (dev, test, prod) |
APP_NAME | yes | Short name for the application |
ROUTE_HOST | yes | Base domain for the publicly accessible URL |
ROUTE_PATH | yes | Base path for the publicly accessible URL |
The Jenkins pipeline will handle deployment invocation automatically. However, should you need to run it manually, you can do so with the following, for example:
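This is a sketch only; the template file name app.dc.yaml is an assumption, and the parameters mirror the table above, so substitute your actual template and values.
# Sketch: process the deployment config template and apply it to the namespace
oc process -n <namespace> -f openshift/app.dc.yaml \
  -p REPO_NAME=common-hosted-form-service \
  -p JOB_NAME=master \
  -p NAMESPACE=<namespace> \
  -p APP_NAME=<yourappshortname> \
  -p ROUTE_HOST=<yourroutehost> \
  -p ROUTE_PATH=<yourroutepath> \
  | oc apply -n <namespace> -f -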
Due to the triggers that are set in the deploymentconfig, the deployment will begin automatically. However, you can deploy manually by using the following command, for example:
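This is a sketch; the deploymentconfig name is an assumption based on the APP_NAME and JOB_NAME naming convention.
# Sketch: trigger a new rollout of the latest tagged image
oc rollout latest -n <namespace> dc/<yourdeploymentconfigname>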
Note: Remember to swap out the bracketed values with the appropriate values!
Sidecar Logging
Our deployment on OpenShift uses a Fluent-bit sidecar to collect logs from the CHEFS application. The sidecar deployment is included in the main app.dc.yaml file. Our Node.js apps output logs to a configurable file path (for example app/app.log). This is done using a logger script; for example, see our CHEFS app logger.
The Fluent-bit configuration is kept in the OpenShift config map fluent-bit.cm.yaml. Additional details for configuring the sidecar can be found on the wiki.
Logs sent to AWS OpenSearch
We currently forward our application logs from Fluent-bit to an AWS OpenSearch service. The AWS connection credentials are provided as environment variables in the Fluent-bit container (the AWS credentials are stored in the 'chefs-aws-kinesis-secret' secret).
To create this secret on OpenShift:
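The following is a sketch; the key names inside the secret are assumptions, so verify them against the environment variable mappings in the Fluent-bit container spec.
# Sketch: create the AWS credentials secret used by the Fluent-bit sidecar
oc create -n $NAMESPACE secret generic chefs-aws-kinesis-secret \
  --from-literal=AWS_ACCESS_KEY_ID=<yourawsaccesskeyid> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<yourawssecretaccesskey>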
The Fluent-bit configuration includes the 'kinesis_streams' output plugin, where we define our AWS region, role ARN, and stream name. A further parser for our logs was added to a Node app running on an AWS Lambda service.
Error Notifications
We currently also output logs to a Fluentd service where we can trigger error notifications to our Discord channel. See our Wiki for more details.
Pull Request Cleanup
At this time, we do not automatically clean up resources generated by a Pull Request once it has been accepted and merged in. This is still a manual process. Our PR deployments are all named in the format "pr-###", where ### is the number of the specific PR. In order to clear all resources for a specific PR, run the following two commands to delete all relevant resources from the OpenShift project (replacing PRNUMBER with the appropriate number):
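The following is a sketch; the label selectors are assumptions based on common labelling conventions in our templates, so confirm them (for example with oc get all --show-labels) before deleting.
# Sketch: remove application resources for the PR, then the Patroni cluster resources
oc delete -n <namespace> all,configmap,pvc,secret --selector app=<yourappshortname>-pr-PRNUMBER
oc delete -n <namespace> all,configmap,pvc,secret --selector cluster-name=pr-PRNUMBER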
The first command will clear out all related executable resources for the application, while the second command will clear out the remaining Patroni cluster resources associated with that PR.
Appendix - Supporting Deployments
There will be instances where this application will need supporting modifications or deployments, such as databases and business analytics tools. Below is a list of initial reference points for other OpenShift templates that could be leveraged and bolted onto the existing Jenkins pipeline if applicable.
Metabase
MongoDB
Refer to the mongodb.dc.yaml and mongodb.secret.yaml files found below for a simple persistent MongoDB deployment:
Patroni (HA Postgres)
Refer to the patroni.dc.yaml and patroni.secret.yaml files found below for a Highly Available Patroni cluster statefulset:
Database Backup
Redis
Refer to the redis.dc.yaml and redis.secret.yaml files found below for a simple persistent Redis deployment: