Introduction
Deploying containerized applications from a GitLab CI/CD pipeline to an OpenShift cluster requires configuring both platforms to work together securely. In a GitLab self-managed instance, you can set up CI pipelines that build Docker images and deploy them to OpenShift using the OpenShift CLI (oc) and service account credentials. The GitLab Runner will execute pipeline jobs for building the container image and then trigger a deployment to OpenShift. This integration enables seamless continuous delivery: code changes in GitLab can automatically result in updated applications running on OpenShift. For example, the pipeline can build a new image on each commit, push it to a registry, and then instruct OpenShift to deploy the updated image. The following diagram illustrates a high-level overview of this CI/CD workflow from GitLab to OpenShift, showing how source code flows through the pipeline to become a running application on the cluster.

(Diagram: A high-level illustration of the GitLab CI/CD pipeline deploying to an OpenShift cluster, including stages for building a container image and deploying it to the OpenShift environment.)
Prerequisites and Initial Setup
Before starting, ensure you have the necessary prerequisites on both GitLab and OpenShift. On the OpenShift side, you need access to an OpenShift cluster (with a project/namespace for your application) and appropriate permissions to create and modify resources there. You should also have the OpenShift CLI (oc) available to the GitLab Runner (this can be achieved by installing oc on the runner’s host, or by using a Docker image that contains the CLI in your CI jobs). On the GitLab side, you need a self-managed GitLab instance with a project repository set up for your application, and a GitLab Runner configured to run CI jobs (this runner can be a shell runner on a VM, or a Kubernetes/OpenShift runner). If the runner is hosted within OpenShift or Kubernetes, note that building container images might require special configuration due to security constraints – for instance, OpenShift runs containers as non-root by default, so using a rootless build tool like Buildah can be easier than Docker’s DinD in that environment. In summary, verify the following before proceeding:
- OpenShift Cluster: An OpenShift 4.x cluster where you have a project (namespace) to deploy into. Make sure you have cluster credentials (URL and token) or can create a service account for CI access.
- GitLab Self-Managed: A GitLab project with GitLab CI/CD enabled and a runner available to execute jobs. The runner should have Docker capabilities for building images (or use an alternative like Kaniko/Buildah if running without privileged containers).
- CLI Tools: The OpenShift CLI (oc) should be available in the CI environment. You can install it on the runner or use an official OpenShift CLI container image in your CI jobs for convenience.
- Container Registry: A container registry accessible to both the CI pipeline and the OpenShift cluster. This could be the GitLab Container Registry (if enabled on your self-hosted GitLab) or an external registry. Ensure the OpenShift cluster can pull images from this registry (network access and credentials) – you may need to configure an image pull secret in OpenShift if the registry is private (a sketch follows this list).
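If your registry is private, you can create an image pull secret in the target project and link it to the service account that runs the pods. A minimal sketch, assuming a hypothetical registry host and a read-only credential – substitute your own values:

# Create a pull secret for the private registry (values are placeholders)
oc create secret docker-registry gitlab-registry-pull \
  --docker-server=registry.gitlab.example.com \
  --docker-username=ci-pull-user \
  --docker-password="$REGISTRY_PULL_TOKEN" \
  -n my-app-project
# Let the default service account use it when pulling images for pods
oc secrets link default gitlab-registry-pull --for=pull -n my-app-project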
With these prerequisites in place, you can proceed to configure the integration between GitLab CI and OpenShift for deployments.
Creating OpenShift Service Accounts and Roles
To allow GitLab CI to deploy to OpenShift, it’s best practice to use a dedicated OpenShift Service Account for authentication rather than personal user credentials. On the OpenShift cluster, create a service account that the pipeline will use, and grant it sufficient permissions in the target project/namespace. For example, you can create a service account named gitlab-deployer in your project and give it the edit role, which allows creating and modifying typical application resources (like pods, services, deployments):
# Use your OpenShift project name here
oc project my-app-project
# Create a service account for GitLab CI/CD
oc create sa gitlab-deployer
# Grant the service account edit permissions in this project
oc adm policy add-role-to-user edit -z gitlab-deployer -n my-app-project
The above commands will create a service account and then bind the built-in edit role to it, allowing it to manage resources in the my-app-project namespace. Next, you need to retrieve the service account’s API token, which the GitLab pipeline will use to log in to OpenShift. You can get the token by examining the secret associated with the service account or by using the OpenShift CLI directly. An easy way is to run the following command (as a user with access to read secrets in that project):
# Get the token for the gitlab-deployer service account
oc sa get-token gitlab-deployer -n my-app-project
This will output a long token string. Copy this value – it will be used in GitLab CI for authentication. (The oc sa get-token command is a convenient shortcut to fetch the service account token.) Note that on newer OpenShift releases (4.11 and later) long-lived token secrets are no longer created automatically for service accounts and oc sa get-token may not be available, so you may need to request a token explicitly (see below). Ensure you keep this token secure, as it grants whatever permissions you gave the service account.
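On those newer clusters, a token can be requested with oc create token. A minimal sketch, assuming a time-limited token is acceptable for the pipeline (the duration is an example and may be capped by cluster policy):

# Request an API token for the service account (OpenShift 4.11+)
oc create token gitlab-deployer --duration=8760h -n my-app-project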
Note: The service account approach is recommended for automation. It scopes the CI/CD access to only the project in question and can be tightly permissioned. Avoid using a cluster-admin or highly privileged account token in your CI pipeline. The edit role in OpenShift is usually sufficient for deployment tasks (creating deployments, services, routes, etc.), but you may adjust roles if you need more restricted or broader access. For instance, if the pipeline needs to create new projects/namespaces (for dynamic review environments), you might need higher-level permissions temporarily or have a pre-created project for each environment.
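Before wiring the token into GitLab, it can be useful to confirm that the service account really has the permissions the pipeline will need. A quick check using impersonation (the resources here are just examples):

# Verify the service account can manage the resources the pipeline touches
oc auth can-i create deployments -n my-app-project \
  --as=system:serviceaccount:my-app-project:gitlab-deployer
oc auth can-i patch routes -n my-app-project \
  --as=system:serviceaccount:my-app-project:gitlab-deployer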
Configuring GitLab CI/CD Variables for OpenShift
With an OpenShift service account token (or another form of credentials) in hand, the next step is to store these credentials and connection details in your GitLab project as CI/CD variables. In GitLab, navigate to your project Settings > CI/CD > Variables and add the necessary variables. At minimum, you will need:
- OPENSHIFT_SERVER: The API server URL of your OpenShift cluster (e.g., https://api.openshift.example.com:6443). If you obtained a login command from OpenShift’s web console (“Copy Login Command”), the --server URL in that command is what you need here.
- OPENSHIFT_TOKEN: The service account token (or user token) that you obtained for authentication.
- OPENSHIFT_PROJECT: The OpenShift project/namespace name where you will deploy the app.
You might also add variables for registry credentials if needed (for example, if using an external container registry). For instance, if using an OpenShift internal registry or another private registry, ensure you have a REGISTRY_USER and REGISTRY_PASSWORD (or a token) stored as variables to allow the docker login in the pipeline. If you use the GitLab Container Registry for your project, GitLab provides built-in environment variables like $CI_REGISTRY, $CI_REGISTRY_IMAGE, $CI_REGISTRY_USER, and $CI_REGISTRY_PASSWORD, so you might not need to add those manually – just ensure the GitLab registry is enabled and the runner has access.
(Screenshot: GitLab CI/CD Settings – adding protected variables for the OpenShift API URL (OPENSHIFT_SERVER), project name (OPENSHIFT_PROJECT), and service account token (OPENSHIFT_TOKEN). Storing credentials as CI/CD variables allows the pipeline to authenticate to OpenShift without exposing secrets in the repository.)
When adding these variables, mark them as “Protected” and “Masked” for security. Protected variables will only be available in pipelines running on protected branches or tags (e.g., your main or release branches), preventing exposure from untrusted forks or feature branches. Masking the variable ensures the actual value is hidden in job logs (the runner will not print the token in plaintext). These settings are crucial to avoid leaking the OpenShift token or other sensitive credentials during the CI process. In summary, your GitLab CI/CD variables might look like:
- OPENSHIFT_SERVER – (protected, masked) e.g. https://api.openshift.example.com:6443
- OPENSHIFT_PROJECT – (protected) e.g. my-app-project
- OPENSHIFT_TOKEN – (protected, masked) e.g. sha256~xxxxxxxx... (the long token string)
- [Optional] REGISTRY_USER / REGISTRY_PASSWORD – (protected, masked) if pushing to a registry that needs credentials (not needed for the GitLab Registry when using the project’s built-in variables, since GitLab provides a CI token user for the registry).
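If you prefer to script this rather than add each variable through the UI, the same can be done with GitLab’s project variables REST API. A rough sketch (the GitLab host, project ID, and access token are placeholders):

# Create a protected, masked CI/CD variable via the GitLab API
curl --request POST \
  --header "PRIVATE-TOKEN: <your-access-token>" \
  --form "key=OPENSHIFT_TOKEN" \
  --form "value=sha256~xxxxxxxx" \
  --form "protected=true" \
  --form "masked=true" \
  "https://gitlab.example.com/api/v4/projects/<project-id>/variables"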
By configuring these variables, your pipeline jobs can later reference them to log in to OpenShift and push/pull images securely, without hardcoding any secrets in the .gitlab-ci.yml.
Configuring the GitLab CI/CD Pipeline (.gitlab-ci.yml)
With credentials set up, you can now create the GitLab CI/CD pipeline definition that will build and deploy your application. In the root of your repository, create a file named .gitlab-ci.yml. This YAML file will define the stages and jobs for your CI pipeline. A typical pipeline for deploying to OpenShift will have at least two stages: one for building the container image, and one for deploying to OpenShift (you might also have a testing stage before deployment, which is highly recommended). For example, we can define stages as build and deploy:
stages:
  - build
  - deploy
Build Stage – Building the Container Image
In the build stage, you will instruct GitLab Runner to build the Docker image for your application (and push it to the registry). If your GitLab runner is a Docker-based runner (Docker executor), you can use Docker-in-Docker (DinD) service to build images. If the runner is a shell executor on a VM with Docker installed, you can just run docker commands directly. Here’s a sample job for building a Docker image using DinD:
build-image:
  stage: build
  image: docker:20.10.16        # Use Docker client image (version as needed)
  services:
    - docker:dind               # Docker daemon for DinD
  variables:
    DOCKER_DRIVER: overlay2     # (Optional) use overlay2 driver for DinD
  script:
    - docker info               # Check Docker is working
    - echo "Building Docker image..."
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - echo "Logging in to registry..."
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - echo "Pushing image to registry..."
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
Let’s break down what this does. The job uses the official Docker image as its environment and spins up a Docker daemon service (docker:dind) so that it can run Docker commands. It then builds the image from the Dockerfile in the repository (assuming the Dockerfile is in the project root; adjust the path if not). We tag the image with $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA, which is a convenient GitLab predefined variable – it expands to the URL of your project’s container registry and uses the short commit SHA as the tag. This ensures each build push is uniquely tagged with the commit, rather than using latest every time (which is good for traceability). After building, the script logs in to the registry using credentials (CI_REGISTRY_USER and CI_REGISTRY_PASSWORD are provided by GitLab for the project’s registry, or you could use a custom variable if pushing to an external registry). Finally, it pushes the image to the registry.
In practice, you might choose a different tagging strategy (e.g., using a version number or branch name) and may also push an immutable tag like the commit SHA and update a latest tag. But using the commit SHA is a simple approach to avoid tag collisions. The above example is aligned with common GitLab CI usage for container builds. If you run this on a Kubernetes-based GitLab Runner (like one deployed on OpenShift itself), remember that Docker DinD requires the runner to run in privileged mode, which might not be allowed by OpenShift’s default security constraints. In such cases, consider alternatives like Kaniko or Buildah to build images in rootless mode (a Kaniko-based variant is sketched below). For instance, you could have a build job that uses a Buildah image (which can build images without a Docker daemon). Regardless of the tool, the end result of the build stage is a container image pushed to a registry accessible by the OpenShift cluster.
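For reference, here is a rough sketch of an equivalent rootless build job using Kaniko instead of DinD, following the commonly documented GitLab pattern (the job name and layout are illustrative, not part of the pipeline above):

build-image-kaniko:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Give Kaniko credentials for the target registry
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    # Build and push without a Docker daemon or privileged mode
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"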
Before moving to the deploy stage, double-check that the image is indeed in the registry and tagged as expected. GitLab’s job log will show the push success, and you can browse the registry in GitLab’s UI to see the image. The OpenShift deployment in the next stage will reference this image.
Deploy Stage – Deploying to OpenShift
After a successful build (and possibly after running tests), the deploy stage will take the new container image and deploy it to the OpenShift cluster. This job will use the OpenShift CLI (oc) to perform the deployment steps. We have already stored the OpenShift credentials and cluster info in variables, so now we’ll use them.
It’s convenient to run this job in an environment where the oc command is available. You have a couple of options: you could use a pre-built image that contains oc, or you could install oc on the fly (a small sketch of the latter follows). One convenient choice is the official OpenShift CLI image (for example, quay.io/openshift/origin-cli:4.12 corresponds to an OpenShift CLI client for version 4.12, which works for interacting with an OCP 4.x cluster). For this example, we’ll specify an image that has the CLI. Alternatively, if using a shell runner, ensure oc is installed on that machine as we did in the prerequisites.
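If you do stay on a generic image or a shell runner without the CLI, the client can be downloaded from the public OpenShift mirror at the start of the job. A minimal sketch (pin an exact client version in practice rather than “stable”):

# Download and install the oc client inside the job (Linux x86_64)
curl -sL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz -o /tmp/oc.tar.gz
tar -xzf /tmp/oc.tar.gz -C /usr/local/bin oc
oc version --client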
Here’s a sample deploy job in the pipeline:
deploy-to-openshift:
  stage: deploy
  image: quay.io/openshift/origin-cli:4.12   # Image with oc CLI
  script:
    - echo "Logging in to OpenShift..."
    - oc login "$OPENSHIFT_SERVER" --token="$OPENSHIFT_TOKEN" --insecure-skip-tls-verify
    - oc project "$OPENSHIFT_PROJECT" || oc new-project "$OPENSHIFT_PROJECT"
    - echo "Deploying application manifests..."
    - oc apply -f openshift-manifests/ -R -n "$OPENSHIFT_PROJECT"
    - echo "Restarting deployment to pick up new image..."
    - oc rollout restart deployment/my-app -n "$OPENSHIFT_PROJECT"
Let’s explain the steps. First, the job logs in to the OpenShift API server using the service account token we stored, disabling TLS verification if using a self-signed cluster certificate (in a production setup, you’d ideally have the cluster CA certificate and avoid skipping TLS verify, but often in demos or internal setups this flag is used). Once authenticated, we select the project. We use oc project to switch to the target namespace, and we include an || oc new-project fallback – this means if the project doesn’t exist (e.g., deploying to a fresh environment or dynamically named namespace), the pipeline will create it on the fly. In many cases, your project will already exist and oc project will succeed; the oc new-project fallback is useful if you use dynamic namespaces for per-branch environments (review apps).
Next, the job uses oc apply to apply Kubernetes/OpenShift manifest files from a directory (in this case, assume we have an openshift-manifests/ directory in our repo containing YAML files for the Deployment, Service, Route, etc. for our app). The -R (--recursive) flag tells oc to process the directory recursively, so every YAML file under that folder is applied. Using oc apply is a declarative approach – it will create or update resources to match the definitions in your files. This is good for ensuring the cluster’s state matches your git manifests (a “GitOps-lite” approach). For example, you might have deployment.yaml, service.yaml, and route.yaml files defining your app’s Kubernetes Deployment, a Service for it, and an OpenShift Route to expose it externally (a minimal Deployment manifest is sketched below). By applying them on each deployment, you guard against config drift.
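For orientation, such a Deployment manifest in openshift-manifests/ might look roughly like this – the names, port, and image reference are placeholders to adapt to your application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # Image reference updated (or refreshed via rollout restart) by the pipeline
          image: registry.gitlab.example.com/mygroup/my-app:latest
          ports:
            - containerPort: 8080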
Finally, the script performs an oc rollout restart on the Deployment. This step is important when re-using the same image tag for updates. If you always push a new image tag (like one containing the commit SHA), and your Deployment YAML is updated to reference that new tag, an oc apply would automatically trigger a new deployment rollout (because the pod template changed). However, if you are using a static tag (say “latest” or a constant image name), Kubernetes might not pull the new image unless the deployment spec changes. In our example, we used commit SHA tags, but our manifest might just refer to a floating tag. The rollout restart forces the Deployment to roll out the new image by restarting the pods, ensuring the latest pushed image is used. In a production scenario, you might instead update the image tag in the Deployment spec (which could be done by templating the manifest or using oc set image, as sketched below). But a simple restart is a quick solution if using a constant tag. The result is that OpenShift will create a new ReplicaSet (or replication controller, if you use DeploymentConfigs) and gradually replace the old pods with new ones running the updated image (assuming your Deployment uses the default RollingUpdate strategy).
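As an alternative to the restart, the deploy script could point the Deployment at the freshly pushed tag explicitly. A one-line sketch, assuming the container in the Deployment is named my-app:

# Update the Deployment to the image built in this pipeline (triggers a rollout)
oc set image deployment/my-app my-app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" -n "$OPENSHIFT_PROJECT"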
After these steps, the application should be updated on OpenShift. You can include additional verification in the pipeline if desired – for instance, you could run oc rollout status deployment/my-app to wait until the rollout is complete and ensure it succeeded. You might also run some smoke tests or health checks against the running app as part of the pipeline to confirm everything is working.
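Such checks could be appended to the deploy job’s script, for example (the route hostname and health endpoint are placeholders, and the smoke test assumes curl is available in the job image):

    # Fail the job if the rollout does not complete in time
    - oc rollout status deployment/my-app -n "$OPENSHIFT_PROJECT" --timeout=180s
    # Optional smoke test against the exposed route
    - curl -fsS "https://my-app-my-app-project.apps.openshift.example.com/healthz" > /dev/null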
Important: The above pipeline jobs are a basic example. You will likely tailor them to your needs. For example, if deploying to multiple environments (dev/staging/prod), you might have separate jobs or pipelines per environment, each with its own credentials and project. GitLab CI allows environment-specific variables, or you can parameterize the project name and cluster based on branch or environment name. Additionally, if your OpenShift cluster requires logging in with a username/password or using an OAuth token, you could use those as well (OpenShift supports oc login with username/password, or using an OAuth bearer token which is what we did with the service account token). The service account token approach is convenient for CI automation and is the method shown in our examples.
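To illustrate that kind of split, the deploy job could be duplicated per environment and gated with rules – a rough sketch in which the *_STAGING / *_PROD variables and the branch/tag conventions are assumptions, not part of the pipeline above:

deploy-staging:
  stage: deploy
  image: quay.io/openshift/origin-cli:4.12
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
  script:
    - oc login "$OPENSHIFT_SERVER" --token="$OPENSHIFT_TOKEN_STAGING"
    - oc project "$OPENSHIFT_PROJECT_STAGING"
    - oc apply -f openshift-manifests/ -R

deploy-production:
  stage: deploy
  image: quay.io/openshift/origin-cli:4.12
  rules:
    - if: '$CI_COMMIT_TAG'   # deploy production only from tags
      when: manual           # and only after a manual gate
  script:
    - oc login "$OPENSHIFT_SERVER" --token="$OPENSHIFT_TOKEN_PROD"
    - oc project "$OPENSHIFT_PROJECT_PROD"
    - oc apply -f openshift-manifests/ -R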
Example: Deploying a Basic Node.js Application to OpenShift
To make the scenario more concrete, let’s consider a basic example – deploying a Node.js web application to OpenShift using GitLab CI/CD. Suppose our repository contains a simple Node.js app with a Dockerfile and Kubernetes manifests. Here’s how the pieces come together:
- Dockerfile: Defines how to containerize the Node.js app. For example, it might use FROM node:16-alpine, copy the application files, install dependencies, and specify CMD ["node", "app.js"] to start the app (a minimal sketch follows this list).
- Kubernetes Manifests: We have YAML files describing a Deployment, Service, and Route for the app. The Deployment will use an image that we build in the CI pipeline. For instance, in deployment.yaml, the container image might be set to my-registry.example.com/myproject/myapp:latest (this will be updated when we deploy, to whichever tag we pushed).
- GitLab CI Pipeline: As discussed, the pipeline will build the image and then deploy it.
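A Dockerfile along the lines described could look roughly like this – the file layout, port, and start command are assumptions about the app, not taken from a specific repository:

FROM node:16-alpine
WORKDIR /usr/src/app
# Install dependencies first to benefit from layer caching
COPY package*.json ./
RUN npm ci --only=production
# Copy the application source
COPY . .
# Listen on an unprivileged port and run as non-root (friendly to OpenShift SCCs)
EXPOSE 8080
USER 1001
CMD ["node", "app.js"]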
Build and Push Example: The pipeline builds the Node.js app’s image in the build stage. This could involve installing any needed dependencies and running tests before building. For example, you might expand the build job to:
before_script:
  - npm install     # install Node.js dependencies (assumes the job image provides Node.js/npm)
  - npm run test    # run tests (assuming you have a test script)
script:
  - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
  ...
This ensures that the image is only built if tests pass (GitLab will stop the job if a command in script fails). After building, the image is pushed to the GitLab Container Registry (or your chosen registry).
Deployment Example: In the deploy stage, once oc apply updates the OpenShift Deployment, the new pods will start using the Node.js image we just pushed. If the Deployment’s image reference was something like myapp:latest, and we pushed a new latest, we would use oc rollout restart to refresh the pods. If instead our CI pipeline updates the image tag in the deployment.yaml (this could be done with a simple sed or by using Kustomize/Helm in the pipeline – a sed-based sketch follows), then applying that manifest will trigger the rollout automatically because the Kubernetes Deployment sees a new image name (like myapp:<commit-sha>).
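One low-tech way to do that tag update in the pipeline is a sed substitution just before oc apply. A sketch, assuming the image: line in deployment.yaml matches this pattern:

# Rewrite the image reference in the manifest to the tag built in this pipeline
sed -i "s|image: .*myapp.*|image: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA|" openshift-manifests/deployment.yaml
oc apply -f openshift-manifests/ -R -n "$OPENSHIFT_PROJECT"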
Expose the App: With OpenShift, creating a Route is the typical way to expose the service externally. Our example assumes we applied a route.yaml in the manifests. Alternatively, we could have the CI job run oc expose svc/my-app to create a route if one doesn’t exist. In the OpenShift web console, you would then see the application’s route and be able to access the app in a browser. The GitLab job can even be configured to output the URL (as shown in some examples using the GitLab environment URL variable) so that each pipeline provides a direct link to the deployed app. For instance, you could set in .gitlab-ci.yml:
environment:
  name: production
  url: http://$CI_PROJECT_NAME-$OPENSHIFT_PROJECT.apps.openshift.example.com
This would tell GitLab that the job deploys an “environment” and GitLab’s UI would show the URL where it’s accessible. (The exact URL structure depends on your OpenShift Routes; the example uses a common pattern for OpenShift Online using a default router domain and combining project and app name.)
Once deployed, you should see the Node.js app running on OpenShift. The OpenShift web console (Developer perspective, Topology view) will show the application’s components (the Deployment, pods, service, route) and their status. You can verify that the new version corresponds to your recent commit. Each new pipeline run (triggered by a code push) would repeat this process: test, build a new image, push, and roll out the update, resulting in continuous deployment.
Securing Credentials and Pipeline Security
Security is paramount when configuring CI/CD pipelines that interact with your cluster. We already covered storing the OpenShift token and other secrets in protected, masked CI variables – this ensures they aren’t exposed in logs or to unauthorized forks/users. Here are additional best practices to secure the pipeline and deployment process:
- Limit Permissions: As mentioned, use a dedicated service account with the minimal roles needed (e.g., edit on one project). Avoid using cluster-admin or the default system:admin account in automation. This principle of least privilege reduces the impact if credentials are compromised.
- Protect the Pipeline: Only let trusted code trigger deployments. Mark your deploy job to run only on specific branches (e.g., only deploy from main or from release tags, not from every feature branch). In GitLab CI, you can use the only: or rules: keywords to control this. For instance, only: [main] on the deploy job ensures that forks or untrusted branches won’t execute deployment steps. Also consider using merge request approvals or other gating for production deployments.
- Protected Runners: Use protected runners for deployment jobs. If you have shared runners, ensure that the runner picking up the job is trusted and secure, since it will have access to the OpenShift token. Ideally, run your own runner for this project (it could even be running inside the OpenShift cluster in a locked-down namespace).
- Mask Sensitive Output: Double-check that commands in your script don’t accidentally print secrets. For example, avoid writing the token to the console. The oc login command will not print the token, and because we masked the variable, even if echoed it would show as xxxx. Just be mindful of any debugging steps you add.
- Network Security: If your GitLab instance or runners are in a different network from OpenShift, ensure secure connectivity. The oc login is done over HTTPS; if using self-signed certs, we used --insecure-skip-tls-verify for simplicity, but the better approach is to install the cluster’s CA certificate in the runner environment so that TLS can be verified. You can add the CA cert as a CI variable (masked) and write it to a file in the job before logging in, or bake it into a custom CLI image.
- Image Security: When building images, use base images from trusted sources (like official images or your company’s vetted images). You can integrate security scanning in the pipeline as well (GitLab has container scanning features, and OpenShift’s registry or advanced deployments can trigger image scans).
- Resource Quotas and Clean-up: Automated deployments can create a lot of resources over time (images, running pods, etc.). Implement a clean-up strategy for any dynamic environments. For example, if you deploy review apps for each merge request (i.e., creating temporary OpenShift projects or unique deployments per branch), configure an automatic or manual teardown. GitLab environments can have an on_stop action – you can create a job that runs to delete the OpenShift project or resources when a branch is removed or a merge request is closed. For instance, a stop job could run oc delete project my-app-pr-123 or oc delete all -l app=my-app-branch to clean up all resources with a certain label once they’re no longer needed (a sketch follows this list).
- Reliability and Rollback: For production deployments, consider deployment strategies that allow easy rollback. OpenShift’s DeploymentConfig (if you use those) keeps a history of revisions and you can trigger a rollback if a deployment fails. If using Kubernetes Deployments, you might implement manual approval steps in GitLab for production or use Argo CD/GitOps for continuous synchronization. While beyond the scope of this article, it’s good to be aware of these options. At minimum, ensure that a failed deploy (e.g., if health checks fail and the rollout is paused) will be noticed – the pipeline can catch this by checking rollout status. You might choose to have the pipeline mark a job as failed if the new version doesn’t become ready, preventing further steps.
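For the review-app teardown, GitLab’s environment/on_stop mechanism pairs naturally with an oc delete. A rough sketch – the project naming convention and the assumption that the CI service account may create and delete these projects are not from the pipeline above:

deploy-review:
  stage: deploy
  image: quay.io/openshift/origin-cli:4.12
  rules:
    - if: '$CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != "main"'
  script:
    - oc login "$OPENSHIFT_SERVER" --token="$OPENSHIFT_TOKEN"
    # Requires rights to create projects (e.g., self-provisioner) for the CI account
    - oc new-project "my-app-$CI_COMMIT_REF_SLUG" || oc project "my-app-$CI_COMMIT_REF_SLUG"
    - oc apply -f openshift-manifests/ -R
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: stop-review

stop-review:
  stage: deploy
  image: quay.io/openshift/origin-cli:4.12
  rules:
    - if: '$CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != "main"'
      when: manual
  allow_failure: true
  script:
    - oc login "$OPENSHIFT_SERVER" --token="$OPENSHIFT_TOKEN"
    - oc delete project "my-app-$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop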
By following these practices, you help ensure that your CI/CD pipeline is not only functional but also secure and robust. You’ve now set up a pipeline where developers pushing code to your GitLab repository will trigger an automated build and deployment to OpenShift. This accelerates the development cycle and reduces manual intervention, all while leveraging the strengths of OpenShift (for scaling, routing, and managing the app) and GitLab CI/CD (for automation and integration with version control).
Conclusion
Configuring OpenShift as a deployment target for GitLab CI/CD enables an efficient DevOps workflow for containerized applications. We covered how to prepare both GitLab and OpenShift: from creating a dedicated service account in OpenShift with the right permissions, to storing cluster credentials in GitLab CI/CD variables securely, and writing a pipeline that builds Docker images and deploys them using the OpenShift CLI. We also walked through an example of deploying a simple Node.js app, demonstrating how each change in code can flow through the pipeline to an updated application in the cluster.
This setup can be extended and customized – for example, adding stages for automated testing, integrating code quality scans, or deploying to multiple OpenShift environments (dev/staging/prod) using different variables. OpenShift provides a solid platform for running containers, and GitLab CI/CD offers a flexible way to automate the build and release process. Together, they form a powerful combo for continuous delivery.
By adhering to best practices in security and reliability – such as using least-privilege accounts, protecting secrets, and verifying deployments – you can confidently automate your deployments to OpenShift. With everything in place, developers can focus on writing code, and your CI/CD pipeline will handle the heavy lifting of building images and deploying applications in a repeatable, consistent manner.