Overview of OpenShift for Containerized Applications
Red Hat OpenShift is an enterprise-grade Kubernetes platform that simplifies how teams build, deploy, and manage containerized applications. It extends vanilla Kubernetes with additional features for security, multi-tenancy, developer productivity, and integrated tooling, providing a consistent hybrid-cloud platform that developers and operations teams can use across different environments. Unlike a basic Kubernetes setup, OpenShift ships with out-of-the-box components such as an internal container registry, Routes for easy external access, built-in monitoring and logging, and CI/CD pipeline integration. These extras address gaps that plain Kubernetes leaves open: for example, OpenShift adds automated image builds and pipelines to accelerate the delivery of containerized apps. In short, OpenShift’s relevance to containerized applications lies in providing a turn-key platform with the tools needed to develop, deploy, and maintain containers at scale, whether on-premises or in the cloud.
OpenShift Pipelines is one of those integrated tools, providing a cloud-native CI/CD solution based on the Tekton project. Tekton pipelines are defined as Kubernetes Custom Resources, meaning your build/deploy pipelines run on the cluster itself in isolated containers, without requiring an external CI server. OpenShift Pipelines inherits Tekton’s flexibility and uses standard Kubernetes constructs, making it portable and extensible. OpenShift Pipelines features standard Tekton pipeline definitions (Tasks, Pipelines, etc.), can build images with tools like S2I or Buildah, and is integrated into the OpenShift Developer Console for visualization. This makes it straightforward to incorporate continuous integration/continuous delivery workflows into your OpenShift deployments.
In the remainder of this article, we will walk through deploying a containerized frontend web application on an on-premises OpenShift cluster. We’ll cover both the CLI (oc) approach and the web console (GUI) approach, and then show how to automate the build and deployment using OpenShift Pipelines (Tekton). Along the way, we’ll provide example YAML manifests and highlight best practices for managing frontend apps on OpenShift.
Prerequisites for Deployment
Before you begin, ensure you have the following prerequisites in place:
- Access to an OpenShift Cluster: You need an OpenShift 4.x cluster (on-premises) up and running, with permission to create projects and deploy applications. Make sure you have login credentials. If using the CLI, log in with oc login (you’ll need the API server URL and a token or username/password). If using the web console, have its URL and an account with the proper access rights (developer access to a project or the ability to create a new project).
- OpenShift CLI (oc) Installed: Install the OpenShift CLI on your workstation. This tool is similar to kubectl but tailored for OpenShift, allowing you to manage projects, applications, and resources from your terminal. Ensure the oc version matches your cluster version.
- OpenShift Web Console Access: If using the GUI method, you need a modern web browser and network access to the OpenShift web console. The web console provides a Developer perspective for deploying applications easily.
- Container Image or Source Code: Since our goal is to deploy a containerized frontend application, you should either have:
- A container image for your frontend app, hosted on an accessible registry (Docker Hub, Quay.io, your internal registry, etc.), or
- The source code and a Dockerfile (or use of Source-to-Image) so that OpenShift can build the image. In this article, we’ll assume you already have a container image available (for example, quay.io/openshiftroadshow/parksmap:latest, which is a sample frontend app image). Adjust image names and sources as needed for your actual application.
- Container Registry Access: If your image is in a private registry or if you plan to push images to a registry as part of CI/CD, ensure you have credentials and access. OpenShift includes an internal registry; you might use that for on-premises image storage. If using OpenShift’s internal registry for pushing images in a pipeline, you may need to log in to it (e.g., oc registry login) or set up a pull secret in the project.
- OpenShift Pipelines (Tekton) Installed: To use OpenShift Pipelines, the cluster must have the OpenShift Pipelines Operator installed and enabled (ask your administrator if unsure); many clusters already have it, or it can be added via OperatorHub. You can verify by checking whether the tekton.dev custom resource types (Pipeline, PipelineRun, Task, etc.) are recognized, as shown in the command after this list. If they are not present, install the operator in the cluster (this requires cluster admin privileges). Once installed, the operator sets up a default pipeline ServiceAccount in each namespace and provides a catalog of Tekton ClusterTasks (like build tools).
- Tekton CLI (Optional): Optionally, install the Tekton CLI (tkn) for convenience when interacting with pipelines from the command line. This is not strictly required (you can use oc or the web console to start and monitor pipelines), but tkn can be handy for developers.
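A quick way to check these prerequisites from the CLI is to ask the cluster which Tekton API resources it recognizes and to compare client and server versions (this only confirms the CRDs and versions exist; it does not inspect the operator itself):
# List the Tekton CRDs the cluster recognizes (Pipeline, PipelineRun, Task, ...)
oc api-resources --api-group=tekton.dev
# Confirm your oc client version roughly matches the cluster version
oc version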
With these prerequisites satisfied, you’re ready to create a project and deploy your frontend application.
Creating a Project (Namespace) on OpenShift
OpenShift uses the concept of a Project (which maps to a Kubernetes Namespace) to isolate resources. A project is a self-contained environment for your application, including its deployments, services, routes, etc. Projects also help manage access control and quotas for teams. (In fact, projects are OpenShift extensions to Kubernetes namespaces with additional features for user self-provisioning and isolation.) We need to create a project for our frontend application deployment if one does not already exist.
You can create a project either via the CLI or the web console:
- Using the CLI: Run the following command, providing a project name of your choice (here we use “frontend-project” as an example):
oc new-project frontend-project
or equivalently:
oc create namespace frontend-project
oc project frontend-project   # switch current context to the new project
This will create a new project/namespace named frontend-project. (The oc new-project command is a convenient shortcut that creates the project and switches your context to it in one step.)
- Using the Web Console: Log in to the OpenShift web console and ensure you are in the Developer perspective (you can toggle between Administrator and Developer perspectives using the dropdown at the top of the console). In the Developer perspective, do the following:
- Click the +Add button (typically on the left navigation menu).
- From the Add options, select Project and then click Create Project.
- Enter a Name for your project (e.g., frontend-project). You can also provide an optional display name or description.
- Click Create to create the project.
Once the project is created, you can proceed to deploy your application into it. (If you are using the CLI, ensure your current project is set to this new namespace by running oc project frontend-project.)
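If you want to double-check that the project exists and is your active CLI context, two standard oc commands cover it:
oc get project frontend-project   # shows the project and its status
oc project                        # prints the project your CLI context currently points at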
Deploying the Frontend Application using the OpenShift CLI (oc)
We’ll first walk through deployment using the OpenShift CLI. This method is scriptable and useful for automation or when you’re working in a terminal environment. We assume you have a container image for your frontend application. In this example, we’ll use the ParksMap frontend demo image (quay.io/openshiftroadshow/parksmap:latest) as a stand-in for a generic frontend web app.
Step 1: Log in and select the project. Ensure you are logged in (oc login ...) and have switched to the target project (e.g., oc project frontend-project). You can verify your current project with oc whoami --show-context or oc project. The CLI will show the active project in its context.
Step 2: Deploy the application container image. The simplest way to deploy an existing image in OpenShift is to use the oc new-app command. This command handles creation of the necessary Kubernetes objects for you (Deployment or DeploymentConfig, Service, ImageStream, etc.) in one go. Run the following, substituting your image name as appropriate:
oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap
This will output something like the following:
--> Found Docker image ... for "quay.io/openshiftroadshow/parksmap:latest"
* An image stream will be created as "parksmap:latest" that will track this image
* This image will be deployed in deployment config "parksmap"
* Port 8080/tcp will be load balanced by service "parksmap"
* Other containers can access this service through the hostname "parksmap"
--> Creating resources ...
imagestream.image.openshift.io "parksmap" created
deploymentconfig.apps.openshift.io "parksmap" created
service "parksmap" created
--> Success
A few things to note from this output: OpenShift detected that the image is available and proceeded to set up resources. By default, oc new-app will create an ImageStream to track the image (an OpenShift mechanism for keeping track of image versions/tags), deploy the image using a DeploymentConfig (an OpenShift-specific deployment controller similar to a Kubernetes Deployment; newer OpenShift 4.x releases default to creating a standard Deployment instead), and expose it internally via a Service. The service is given a DNS name (here parksmap) for internal communication. The output also mentions port 8080; this is derived from the image’s metadata (the ParksMap image exposes port 8080 for its web server).
We used --name=parksmap to explicitly name our app. If you omit --name, OpenShift will derive a name from the image (in this case it would likely be “parksmap” anyway). You can always specify a custom name; note that this name is used as the base for all resources (and as the app label).
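Because oc new-app labels everything it creates with that app label, you can list all of those resources in one go (a quick sanity check, assuming you kept the name parksmap as above):
oc get all -l app=parksmap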
Step 3: Expose the application externally with a Route. By default, the new app is not yet accessible outside the cluster; we have a Service, but no external Route. To create an external URL, use the oc expose command to create an OpenShift Route that maps to the service:
oc expose service/parksmap
This creates a Route resource. OpenShift’s router will pick this up and provide a URL (hostname) at which your frontend app can be accessed. You can find the route by running:
oc get route/parksmap
The output will show a host name, for example:
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
parksmap   parksmap-frontend-project.apps.mycluster.example.com             parksmap   8080-tcp                 None
You can now access the frontend application in a browser at the URL listed under HOST/PORT. The oc expose command is a quick way to expose a service; it essentially creates a Route resource linking to the specified service.
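Note that a route created this way serves plain HTTP. If you want TLS terminated at the router, you could instead create an edge-terminated route for the same service (an alternative to the oc expose call above):
oc create route edge parksmap --service=parksmap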
Step 4: Verify the deployment. Check that the pod for the frontend application is running:
oc get pods
You should see a pod with a name derived from the app name (for a DeploymentConfig it looks like parksmap-1-<random-suffix>; for a Deployment, parksmap-<replicaset-hash>-<random-suffix>) in the Running state. If it’s not running, you can inspect logs with oc logs or describe the pod for troubleshooting. Also verify the application is responding: open the route URL in your web browser to ensure the frontend loads correctly (e.g., the ParksMap app should show an interactive map). This confirms the containerized frontend is successfully deployed on OpenShift.
Alternative – using YAML manifests: Instead of oc new-app, you can deploy by directly writing Kubernetes manifest YAML files for the Deployment, Service, and Route, then applying them with oc apply -f. This gives you full control over the configuration. For example, here are sample manifests for our frontend app (parksmap):
# deployment.yaml – Kubernetes Deployment for the frontend app
apiVersion: apps/v1
kind: Deployment
metadata:
name: parksmap
labels:
app: parksmap
spec:
replicas: 1
selector:
matchLabels:
app: parksmap
template:
metadata:
labels:
app: parksmap
spec:
containers:
- name: parksmap
image: quay.io/openshiftroadshow/parksmap:latest
ports:
- containerPort: 8080
# service.yaml – Service to expose the Deployment internally
apiVersion: v1
kind: Service
metadata:
name: parksmap
labels:
app: parksmap
spec:
selector:
app: parksmap
ports:
- port: 8080
targetPort: 8080
protocol: TCP
name: http
# route.yaml – OpenShift Route to expose the Service externally
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: parksmap
spec:
to:
kind: Service
name: parksmap
port:
targetPort: 8080
# Optionally, define tls settings here if you need HTTPS (edge termination, etc.)
You could save these files and run oc apply -f deployment.yaml -f service.yaml -f route.yaml to create the same set of resources. Using YAML is useful for maintaining these configs in Git (Infrastructure as Code) or for tweaking advanced settings. In our simple example, oc new-app did similar work under the hood (creating a DeploymentConfig instead of a Deployment in our case). Both approaches are valid on OpenShift.
Deploying the Frontend Application using the OpenShift Web Console (GUI)
OpenShift’s web console provides a user-friendly way to deploy applications, which is great for newcomers or those who prefer UI workflows. We’ll now deploy the same frontend application using the Developer perspective of the web console. The steps below assume you have the project (frontend-project) created and selected.
- Open the Developer Perspective & Add an Application: In the web console, ensure you’re in the Developer perspective (check the top-left corner toggle). Navigate to the project namespace you want to use (e.g., select “frontend-project”). Click the +Add button. This opens the Add view, which provides multiple options (Import from Git, Container Images, etc.). For an existing container image, choose the “Container Images” option (sometimes labeled Deploy Image in older versions).
- Specify the Image to Deploy: In the Deploy Image form, you’ll see a field for the image reference. Select “Image name from external registry” (if that option is shown) and enter the image’s pull spec. For example, enter quay.io/openshiftroadshow/parksmap:latest in the Image Name field. The console will attempt to pull metadata for this image. (If the image is private, you’d need to create or select a secret with credentials; our example image is public.)
- Set Application Name and Component Name: The form also has fields for Application and Name. An “Application” in console terms is a grouping mechanism to organize multiple components; you can use an existing application group or create a new name. For instance, set Application to national-parks-app (as in the example from the OpenShift docs) and Name to parksmap. The Name becomes the name of the deployment (and other resources). In many cases, the UI auto-fills these based on the image name, but you can adjust them as needed.
- Resource Type: Choose the resource type for the deployment. The form may let you select between Deployment and DeploymentConfig. Select Deployment (to use the Kubernetes Deployment controller) unless you specifically want OpenShift’s DeploymentConfig. In OpenShift 4.x, both work, but Deployments are more aligned with Kubernetes standards. (DeploymentConfigs are still supported for advanced use cases like automated rollouts triggered by image changes, but they are not needed for a basic deployment.)
- Create a Route: Ensure the option “Create route to the application” is selected (there is usually a checkbox). This tells OpenShift to automatically create a Route for external access. You may also specify an optional hostname for the route or leave it blank to let the cluster generate one (based on the app name and default domain).
- (Optional) Advanced Settings: Expand Advanced Options if you wish to configure things like environment variables, resource limits, health checks, or labels. For example, you might add labels such as app=national-parks-app, component=parksmap, and role=frontend to tag this frontend component clearly. Labels help with organizing and filtering resources later. You could also set resource requests/limits or attach config maps/secrets here if your app needs them. For a simple deployment, you might skip detailed configuration at this point.
- Deploy: Click Create. OpenShift will then deploy the application. You will be redirected to the Topology view, where you should see a new node representing the parksmap frontend deployment (likely shown with the OpenShift logo if no custom icon is set). The Topology view gives a visual representation of the app components.
- Verify the Deployment in the Console: In Topology, clicking the node (the circle representing parksmap) shows details in the right panel. You can see information such as pod status, the associated service and route, and other details under the Resources tab. The console indicates the route URL as well. You can copy the route URL (e.g., parksmap-frontend-project.apps.mycluster.example.com) and open it in a browser to view the running frontend application. The OpenShift UI also makes it easy to find the route and view logs or metrics under the “Observe” section for the deployment.
At this point, whether you used CLI or GUI, your containerized frontend web app should be up and running on OpenShift. Next, we’ll set up OpenShift Pipelines to automate the build and deployment process as part of a CI/CD workflow.
Setting Up OpenShift Pipelines for CI/CD Automation
Manually deploying the application is useful, but automating the build and deployment via pipelines will enable continuous integration and delivery. OpenShift Pipelines, based on Tekton, allows you to define CI/CD workflows as code (YAML) and run them on the cluster.
Installing and Configuring OpenShift Pipelines
If you haven’t already, ensure the OpenShift Pipelines Operator is installed on your cluster (as mentioned in the prerequisites). Once installed, verify that Tekton Pipelines is available. The operator typically adds a namespace (often openshift-pipelines) containing its components, installs default ClusterTasks (common Tekton tasks available cluster-wide), and sets up a service account named pipeline in each namespace.
For our purposes, we’ll assume the operator is installed and we’re working in the same project, frontend-project. If not, have your admin install it, or follow the documentation to install it via OperatorHub in the web console (Administrator perspective).
Important Configuration: The OpenShift Pipelines operator also ensures that the pipeline service account in your project has sufficient privileges for typical CI/CD tasks, such as pushing images to the internal registry (by default it binds the pipeline SA to an appropriate SCC and grants it a role that allows image pushes). This is needed for tasks like building container images. You should also make sure any required secrets (for Git access or external registry credentials) are created and linked to the pipeline service account. For example, if your pipeline needs to pull source code from a private Git repo or push to Quay.io, you’d store those tokens in Secrets and annotate them for Tekton to use. In our example, we’ll stick to public repositories and the default internal registry.
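As an illustration of that last point, linking a private Git credential to the pipeline service account could look roughly like this (the secret name, username, and token are placeholders; the tekton.dev/git-0 annotation tells Tekton which Git host the credential applies to):
# Create a basic-auth secret holding the Git credentials (placeholder values)
oc create secret generic git-credentials \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=<git-user> \
  --from-literal=password=<git-token>
# Tell Tekton which Git host this credential is for
oc annotate secret git-credentials "tekton.dev/git-0=https://github.com"
# Make the secret available to the pipeline service account
oc secrets link pipeline git-credentials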
Defining a CI/CD Pipeline
A Tekton pipeline is composed of Tasks. Each Task is a set of steps (running in its own container) that accomplish a specific piece of work (e.g., “build the image” or “deploy to dev environment”). A Pipeline strings together multiple tasks, passing outputs to inputs, to achieve a full CI/CD flow.
For our frontend application, the pipeline might have tasks to:
- Pull the source code from a Git repository (if you’re building the image from source).
- Build the container image and push it to a registry.
- Apply the deployment manifests (Kubernetes YAML) to the cluster (to deploy or update the app).
- (Optionally) run tests or scans, or update configurations.
We will illustrate a simple pipeline that builds and deploys the application. This example assumes you have a Git repo for your app’s source and a Dockerfile (or you could use Source-to-Image). We’ll use Tekton’s Buildah task to build and push the image, since Buildah can run rootless builds which are compatible with OpenShift’s security constraints.
Install or verify required Tasks: OpenShift Pipelines comes with a set of built-in ClusterTasks. In particular, confirm that the git-clone task (for cloning source code) and the buildah task (for building images) are available. You can list tasks with tkn clustertask ls or via the web console (Administrator -> Pipelines -> Tasks). You should see git-clone and buildah among others; these are installed cluster-wide by the operator. If they are missing, you may need to import them from the Tekton catalog. Additionally, we will need tasks to deploy our app. We can create two simple custom tasks:
- apply-manifests: a task to apply Kubernetes manifest files (YAML) from our repo (for example, to create/update the Deployment, Service, Route).
- update-deployment: a task to patch the existing Deployment with the new image (if not already handled by apply). In some designs, after building the new image, you might patch the Kubernetes Deployment to use that image tag, ensuring a rollout of the new version.
For brevity, assume we fetch these two tasks from a repository of Tekton tasks. (The OpenShift Pipelines tutorial provides ready-made YAML for these tasks, which you can install with oc create -f <task_url>.) Once installed in the project, they become available as regular Tasks.
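If you prefer to write them yourself, apply-manifests can be quite small. The sketch below is an illustration rather than the tutorial’s exact definition: it assumes your repository keeps its manifests in a k8s/ directory, and the oc client image reference is a placeholder you would replace with one available in your cluster.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: apply-manifests
spec:
  workspaces:
    - name: source        # the workspace that git-clone populated
  params:
    - name: manifest_dir
      type: string
      default: "k8s"      # assumed location of the YAML manifests in the repo
      description: Directory in the source workspace containing Kubernetes manifests
  steps:
    - name: apply
      image: registry.redhat.io/openshift4/ose-cli:latest   # placeholder oc client image
      workingDir: $(workspaces.source.path)
      script: |
        #!/usr/bin/env bash
        echo "Applying manifests in $(params.manifest_dir)"
        oc apply -f $(params.manifest_dir)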
Now, let’s define the Pipeline itself. Below is a sample Pipeline YAML (pipeline.yaml) that ties the tasks together:
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
name: build-and-deploy-frontend
namespace: frontend-project # ensure this is your project
spec:
workspaces:
- name: shared-workspace # shared volume for tasks to share data (source code)
params:
- name: deployment-name
type: string
description: Name of the Kubernetes Deployment to update
- name: git-url
type: string
description: Git repository URL for the frontend app source
- name: git-revision
type: string
description: Git revision (branch or tag) to build
default: "main"
- name: IMAGE
type: string
description: Image registry and name for the built image (e.g., image repo URL)
tasks:
- name: fetch-repository
taskRef:
name: git-clone # using Tekton ClusterTask to clone a Git repo
kind: ClusterTask
workspaces:
- name: output
workspace: shared-workspace
params:
- name: url
value: $(params.git-url)
- name: revision
value: $(params.git-revision)
- name: deleteExisting
value: "true"
- name: build-image
runAfter:
- fetch-repository
taskRef:
name: buildah # Tekton task to build & push container images
kind: ClusterTask
workspaces:
- name: source
workspace: shared-workspace
params:
- name: IMAGE
value: $(params.IMAGE)
- name: TLSVERIFY
value: "false" # skip TLS verify for internal registry, if needed
- name: apply-manifests
runAfter:
- build-image
taskRef:
name: apply-manifests # custom task to apply K8s manifests
kind: Task # (Assuming it's installed as a Task in this namespace)
workspaces:
- name: source
workspace: shared-workspace
# The apply-manifests task will apply the deployment yaml, which presumably references the image.
- name: update-deployment
runAfter:
- apply-manifests
taskRef:
name: update-deployment # custom task to update Deployment image
kind: Task
params:
- name: deployment
value: $(params.deployment-name)
- name: IMAGE
value: $(params.IMAGE)
Let’s break down what this pipeline does:
- It declares a workspace (shared-workspace) that will be shared between tasks. This is typically backed by a PersistentVolumeClaim or an emptyDir and holds the checked-out source code so subsequent tasks can access it.
- It declares several parameters:
  - deployment-name: the name of the Kubernetes Deployment we want to deploy/update (e.g., parksmap in our case).
  - git-url and git-revision: the Git repository and branch/commit of the frontend source to build.
  - IMAGE: the full image name (including registry) where the built image should be pushed (e.g., image-registry.openshift-image-registry.svc:5000/frontend-project/parksmap:latest if pushing to OpenShift’s internal registry, or a Quay/Docker Hub URL).
- Task: fetch-repository – uses the git-clone ClusterTask to clone the source code from the Git repo into the workspace. It takes the git-url and git-revision params. This task produces the source code that the other tasks will use.
- Task: build-image – waits for the fetch task (runAfter: fetch-repository). It uses the buildah task (a ClusterTask installed by OpenShift Pipelines) to build the container image from the source. The buildah task expects a Dockerfile in the source (its location can be overridden via a task parameter). It uses the IMAGE param to know where to push the built image. The example above sets TLSVERIFY=false to allow pushing to a local or insecure registry, if applicable. After this step, if successful, our new container image is built and pushed to the registry.
- Task: apply-manifests – waits for the image to be built. This is a custom task that applies Kubernetes manifests from the repo (which were fetched into the source workspace). This could involve applying the Deployment, Service, and Route YAML for the frontend app. In our scenario, since we already deployed once manually, this might be used for the initial deployment or for creating config maps or other resources. Essentially, it ensures the Kubernetes objects for the app are present and up to date (aside from the image tag).
- Task: update-deployment – finally, after the manifests are applied, this task updates the image in the Deployment to trigger a rollout of the new version. It takes the deployment name and new image reference as params. The task would typically use oc set image or kubectl patch under the hood to set the Deployment’s container image to the new $(params.IMAGE) value (which includes a new tag, e.g., the Git commit SHA or pipeline run ID). By updating the Deployment, Kubernetes performs a rolling update to replace the pods with new ones running the new image. A minimal sketch of such a task follows this list.
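For illustration only (not the tutorial’s exact YAML), update-deployment can boil down to a single oc set image step. This sketch assumes the container inside the Deployment has the same name as the Deployment itself, as in our manifests earlier, and again uses a placeholder oc client image:
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: update-deployment
spec:
  params:
    - name: deployment
      type: string
      description: Name of the Deployment to update
    - name: IMAGE
      type: string
      description: Full reference of the newly built image
  steps:
    - name: set-image
      image: registry.redhat.io/openshift4/ose-cli:latest   # placeholder oc client image
      script: |
        #!/usr/bin/env bash
        # Point the container at the freshly pushed image; this starts a rolling update.
        # Assumes the container name matches the Deployment name.
        oc set image deployment/$(params.deployment) $(params.deployment)=$(params.IMAGE)
        # Fail the task (and the pipeline) if the new pods never become ready
        oc rollout status deployment/$(params.deployment) --timeout=120s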
This pipeline codifies the CI/CD process: from code to image to deployment on the cluster. You can create the pipeline in OpenShift by saving the YAML and running oc apply -f pipeline.yaml (in the appropriate namespace). After creation, the pipeline will appear in the web console under Pipelines in the Developer perspective.
Note: We referenced the tasks apply-manifests and update-deployment – you will need to have those Task definitions available. You can create them from their YAML definitions; for instance, the OpenShift Pipelines tutorial provides task YAMLs that apply manifests from a Git workspace and patch deployments, respectively. In practice, one could also combine the apply and update steps by templating the manifest with the image or relying on Kubernetes rolling updates directly, but separating them as shown provides clarity and reusability.
Running the Pipeline
Once the pipeline and tasks are set up, you can run the pipeline to automate the deployment:
- Via Web Console: In the Developer perspective, navigate to Pipelines and locate build-and-deploy-frontend (the name we gave it). Click Start (or Start Pipeline). The console will prompt you to enter the pipeline parameters. You would enter:
  - deployment-name = parksmap (or your deployment’s name)
  - git-url = (URL of your Git repo containing the frontend source)
  - git-revision = (branch name like main)
  - IMAGE = (the registry location for the new image, e.g., for the internal registry: image-registry.openshift-image-registry.svc:5000/frontend-project/parksmap:<unique-tag>, where the tag could be a commit SHA or pipeline run ID, or use an external registry path)
  You’ll also need to select a Workspace for shared-workspace. Typically, you can use an EmptyDir (if the PipelineRun is ephemeral) or create a PVC and attach it. For simplicity, choose the Empty Directory option if the form offers it. Start the pipeline. You can then watch the pipeline’s progress in the web console’s pipeline run viewer; the console provides a visual graph and logs for each Task in the pipeline as it runs.
- Via CLI (tkn): If you prefer the CLI, you can start the pipeline with tkn or oc. For example, using tkn:
tkn pipeline start build-and-deploy-frontend \
  -p deployment-name=parksmap \
  -p git-url=https://github.com/youruser/your-frontend.git \
  -p IMAGE=image-registry.openshift-image-registry.svc:5000/frontend-project/parksmap:$(git rev-parse --short HEAD) \
  -w name=shared-workspace,volumeClaimTemplateFile=workspace.yaml \
  --showlog
  This is just an example; here we assume you use a PVC for the workspace (referenced via a template file). The --showlog flag tails the logs of the run so you can see the output of each step. Alternatively, you can create a PipelineRun YAML specifying all parameters and workspaces and submit it with oc; an example follows. A PipelineRun is a one-time execution of the pipeline with specific parameters.
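For reference, a PipelineRun for this pipeline might look like the sketch below (the Git URL, image tag, and storage size are placeholders to adjust for your environment):
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: build-and-deploy-frontend-run-
  namespace: frontend-project
spec:
  pipelineRef:
    name: build-and-deploy-frontend
  params:
    - name: deployment-name
      value: parksmap
    - name: git-url
      value: https://github.com/youruser/your-frontend.git
    - name: git-revision
      value: main
    - name: IMAGE
      value: image-registry.openshift-image-registry.svc:5000/frontend-project/parksmap:latest
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
Because it uses generateName, submit it with oc create -f pipelinerun.yaml (rather than oc apply) so each run gets a unique name.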
Pipeline Execution and Outcome: The pipeline will execute step by step. If all goes well:
- The source is cloned.
- The image is built and pushed to the registry.
- Kubernetes manifests from the repo are applied (ensuring the Deployment and Service exist).
- The Deployment is updated with the new image, causing a rollout.
After the pipeline run completes, you should have a new version of your frontend application running in the cluster (updated to the latest built image). You can verify this by checking the Deployment’s pod (it will have a new pod instance if a rollout happened) and by refreshing the application in your browser to see any changes.
It’s possible to further automate this by adding a trigger so that a Git push starts the pipeline. OpenShift Pipelines (Tekton) has a component called Tekton Triggers for handling webhook events. You can configure a GitHub or GitLab webhook to hit an EventListener (exposed via a Route) that starts the pipeline on each commit, enabling true continuous deployment. Trigger configuration is beyond the scope of this article, but keep in mind it’s an available feature.
Testing and Verification After Deployment
Whether you deploy manually or via the pipeline, it’s important to test and verify that everything is working correctly:
- Verify Pods are Running: Use oc get pods (or the web console Topology view) to ensure the frontend pod is in the Running state. If using a pipeline, watch the pipeline logs to confirm each task succeeded. A successful pipeline run will show every Task completed; if something failed (e.g., the build or the image push), the pipeline stops at that task. The OpenShift console’s PipelineRun details or tkn pipelinerun logs can be used to troubleshoot.
- Check Application Functionality: Access the frontend via its route URL. You can get the URL in a couple of ways:
  - CLI: Use oc get route/<app-name> as shown earlier to retrieve the host. For example:
oc get route/parksmap -o jsonpath='{.spec.host}'
This will print the host, such as parksmap-frontend-project.apps.mycluster.example.com.
  - Web Console: Go to the project’s Topology, click the component and find the route, or go to Networking -> Routes in the console. The route is listed there and is clickable.
  Open the URL in a browser to ensure the frontend loads. If the frontend needs to communicate with a backend API, verify that those connections are working (this might involve ensuring the backend service is deployed and the frontend is pointing to the correct backend service URL).
- Verify the Deployment Rolling Update: If you ran the pipeline for an update, check that the Deployment updated the image:
oc describe deployment parksmap
Look for the image field under the containers section to see whether it matches the new image tag (for example, a new image SHA or the tag the pipeline used). You might also see in the events that a rollout occurred. The web console’s Deployment details (under Workloads -> Deployments) will show the number of pods updated and the revision history; a CLI rollout check is shown after this list.
- Logs and Debugging: If the application isn’t working as expected, check the pod logs:
oc logs deployment/parksmap
(or use the Logs tab in the console’s pod view). This will show the output of the frontend application container, which can reveal runtime errors (for instance, if it couldn’t connect to a backend or had config issues).
- Pipeline Run Verification: In the OpenShift console, under Pipelines -> Pipeline Runs, you should see a record of the pipeline execution (PipelineRun). It will show as Succeeded or Failed. Clicking it will show which task may have failed if any. Ensure that the PipelineRun succeeded. You can also re-run it or create new PipelineRuns as needed (for example, after pushing new code).
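As mentioned above, you can also confirm (or wait for) a rollout directly from the CLI:
oc rollout status deployment/parksmap    # blocks until the rollout succeeds or fails
oc rollout history deployment/parksmap   # lists previous revisions of the Deployment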
By performing these verification steps, you ensure that the deployment was successful and the application is operational for end users.
Best Practices for Managing Frontend Applications on OpenShift
Deploying a frontend on OpenShift is not just about getting it running; it’s about running it well. Here are some best practices and tips for managing frontend (and generally stateless web) applications on OpenShift:
- External Access via Routes: OpenShift Routes provide a simple way to expose your frontend to users. Routes are custom to OpenShift and are easier to use for simple HTTP/HTTPS exposure than Kubernetes Ingress in many cases. They allow you to quickly create externally reachable URLs for your services. Use routes to expose your frontend, and take advantage of OpenShift’s built-in TLS termination (you can secure your route with edge/passthrough/reencrypt termination as needed, and OpenShift can even provision a default wildcard cert for routes).
- Configuration and Secrets: Avoid hard-coding configuration or secrets (like API endpoints, feature flags, etc.) into your frontend container image. Instead, use ConfigMaps and Secrets to manage configuration data and inject them into your application at runtime. For example, if your frontend needs to know an API base URL or has a feature toggle, store those in a ConfigMap and consume them via environment variables. This way, you can adjust configurations per environment (dev/test/prod) without rebuilding the image. OpenShift (like Kubernetes) allows mounting ConfigMaps as files or exposing them as env vars, and similarly for Secrets (for sensitive info).
- Resource Management: Define resource requests and limits for your frontend container. This ensures the OpenShift scheduler knows the app’s needs and prevents a runaway process from starving others. For instance, a simple frontend might request 100m CPU and 128Mi memory, and have a limit of 200m CPU and 256Mi memory. Setting these helps with capacity planning and stability.
- Health Probes (Liveness and Readiness): Implement readiness and liveness probes for your frontend container. Readiness probes ensure the router only sends traffic to a pod when the app is ready to serve requests (e.g., after it has loaded data or warmed up). Liveness probes help OpenShift automatically restart the container if it becomes unresponsive (for example, if the Node.js process hangs). OpenShift’s deployment strategy uses readiness checks to decide whether a new pod is ready during rolling updates, so having them is crucial to avoid downtime during deployments. A simple HTTP GET probe on the root or a health endpoint of your frontend can suffice; a sample snippet combining probes with resource settings follows this list.
- Stateless Design: Frontend web applications should be stateless (no session data stored on the container filesystem). This allows you to scale replicas easily and perform rolling updates without session loss. If you need to store user session state, use external stores (like Redis or a database) rather than in-memory or file system on the container.
- Scaling and High Availability: Take advantage of OpenShift’s scaling features. You can scale your frontend horizontally by increasing the replica count (oc scale deployment/parksmap --replicas=3). OpenShift (Kubernetes) will load-balance requests across the pods via the Service. You can also configure a Horizontal Pod Autoscaler (HPA) to automatically scale the frontend based on CPU or memory usage or custom metrics. This ensures your app can handle variable load.
- Image Management: Use image tags and promotions carefully for frontend apps. In CI/CD, a good practice is to build an image once (e.g., tagged with a commit SHA or build ID) and promote that image through environments (dev -> stage -> prod) rather than rebuilding it for each environment. OpenShift’s internal registry and ImageStreams can help track image versions and trigger deployments when a new image is pushed. For example, a DeploymentConfig can be set to automatically deploy a new image tag. If you use the pipeline as above, you might incorporate an ImageStream or update the deployment image field as we did.
- CI/CD Pipelines: We integrated OpenShift Pipelines (Tekton) for automation. Ensure your pipeline covers essential steps like running frontend tests (e.g., unit/integration tests) and maybe a stage to run a security scan on the built image (OpenShift Pipelines can integrate tasks like Trivy for vulnerability scanning as shown in some Red Hat examples). Automating tests and scans helps catch issues early. If you have multiple apps (e.g., a backend and a frontend), you could orchestrate pipelines to build and deploy both, possibly with a trigger on one to deploy the other. OpenShift Pipelines is flexible in allowing such workflows.
- GitOps for Deployment (advanced): Consider using OpenShift GitOps (Argo CD) for managing deployments, especially for production. With GitOps, you’d store the Kubernetes manifests (Deployment, Service, Route, etc.) in a Git repo and let Argo CD sync the cluster state to match. Your CI pipeline would then only build/push the image and update the Git repo with the new image tag. This can improve traceability for changes. This is an optional advanced approach, but worth mentioning for completeness in an OpenShift environment.
- Monitoring and Logging: Leverage OpenShift’s monitoring stack for your app. OpenShift has built-in monitoring (based on Prometheus) and logging (via Elasticsearch/Kibana or Loki) which can be configured to capture application metrics and logs. Expose metrics from your frontend if applicable (e.g., via a /metrics endpoint and Prometheus client libraries) and add them to the cluster monitoring. Use labels (like app=parksmap) to easily filter logs for your app in the logging interface. Continuous monitoring and observability help ensure your frontend is performing correctly and help you detect issues early in production.
- Security Best Practices: Run your frontend container as a non-root user. OpenShift disallows running as root by default unless you modify the Security Context Constraints. Ensure your Dockerfile sets a user, or rely on OpenShift’s default restricted SCC, which assigns a random UID. Also, use network policies if needed to restrict what your frontend can communicate with (for example, it may only need to talk to your backend service and nothing else). Regularly update your frontend’s base images to pick up security patches.
- Dev/Test vs. Prod: Maintain separate OpenShift projects for dev, staging, and production environments for your frontend. This is a best practice to clearly separate concerns and data. You can use the same pipeline to deploy to dev, run tests, then promote the image to staging and prod (perhaps with manual approval steps). OpenShift’s RBAC can ensure that, for instance, developers can deploy to dev but only ops can approve deployment to prod, etc.
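To make the resource-management and health-probe recommendations above concrete, here is an illustrative fragment you could merge into the container section of the parksmap deployment.yaml shown earlier (the CPU/memory values, delays, and the / health path are assumptions to tune for your app):
      containers:
        - name: parksmap
          image: quay.io/openshiftroadshow/parksmap:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
          readinessProbe:
            httpGet:
              path: /          # or a dedicated health endpoint if your app has one
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
With resource requests in place, a basic CPU-based HPA can then be created with a single command (the thresholds here are placeholders):
oc autoscale deployment/parksmap --min=2 --max=5 --cpu-percent=80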
By following these best practices, you will have a more robust, secure, and maintainable frontend application deployment on OpenShift. OpenShift, with its additional tooling (like Routes for easy exposure, integrated image registry, and pipelines), provides a powerful environment for running containerized frontends at scale.
Conclusion
In this article, we covered the end-to-end process of deploying a containerized frontend web application on an on-premises OpenShift cluster, using both the oc
CLI and the OpenShift web console. We also incorporated OpenShift Pipelines (Tekton) to automate the build and deployment, turning our manual steps into a repeatable CI/CD pipeline. We discussed how to create the necessary OpenShift resources (Projects, Deployments, Services, Routes) and provided example YAML manifests for these. We walked through setting up a Tekton pipeline to fetch source, build a container image with Buildah, and deploy the new version to the cluster, along with sample pipeline and task definitions. Finally, we highlighted some best practices for managing frontend applications on OpenShift – covering topics from configuration management to scaling, routing, and security.
With this knowledge, an intermediate user familiar with Kubernetes should be able to confidently use OpenShift’s tools and conventions to deploy and manage frontend applications. OpenShift’s added conveniences like the web console, Source-to-Image builds, and integrated CI/CD can greatly streamline your development workflows. As you implement these steps, always refer to OpenShift’s documentation and community resources for deeper dives into each topic.