
How to monitor OpenShift with Sysdig

There are a lot of reasons to love OpenShift, and as we’ll show you, monitoring OpenShift is one of those reasons. OpenShift builds on top of Kubernetes to provide enterprises with a stable, secure, and powerful approach to building a container platform-as-a-service for development teams. With OpenShift in place, developers can quickly build and move cloud-native apps from development to production.

In this post, we walk through the challenges of getting visibility into containers and look at how you can instrument the OpenShift platform to gain deeper visibility into your containers and OpenShift itself.

The why: containers, platforms, and visibility

Containers are seeing rapid adoption due to the simplicity and consistency with which developers can get their code from a laptop to production. Containers are also a key building block for microservices. Many enterprises are actively harnessing this new development model by creating an internal platform-as-a-service (PaaS), which helps get containers into production more easily, whether you run in a public cloud, a private cloud, or a hybrid of both. With OpenShift, Red Hat delivers a powerful, secure container application platform that makes the whole process easier and more efficient.

But containers present a major stumbling block: visibility. Monitoring health, performance, and risk is technologically harder for containerized applications than for legacy virtual or physical environments.

The business implications of this should not be taken lightly. Customers expect applications to have near-perfect uptime, high performance, and a bug-free experience. But how can you achieve this state without seeing how your services are performing in production?

While containers are fantastic for developer agility and portability, they increase the complexity of operations.

Containers achieve portability in part by black-boxing the code inside. This layer of abstraction is great for development, but not so great for operations. At the same time, putting a monitoring agent in each container is expensive in terms of resources. It also creates dependencies that can hamper developer efficiency, increase troubleshooting overhead, and thereby limit the value of containers.

To successfully build and operate platforms leveraging containers and OpenShift, enterprises will need top-notch monitoring of not only the hosts and containers but also the applications and processes running inside them. Equally important is the ability to clearly "see" the microservices that those containers make up. And all of this has to happen in a way that respects the operating principles of containers. Enter Sysdig.

The what and the how: Monitoring OpenShift with Sysdig

Sysdig’s claim to fame is ContainerVision, our ability to monitor the code running inside your containers without instrumenting each individual container. It’s given us a unique position in the market. We’re the only technology that gives you full visibility into the performance of your container-native applications without bloating each of your containers with a monitoring agent, requiring you to instrument your code, or relying on APIs that provide only basic details.

But seeing inside containers alone isn’t enough. You also need to understand the logical architecture of your applications and services as they are deployed into your production environment. That’s where the ability to tie monitoring into orchestration comes in.

OpenShift integrates Kubernetes as its container orchestration tool. If you’re not familiar with Kubernetes, check out our introductory blog here. In short, OpenShift, by way of Kubernetes, holds the details of how your containers are deployed, which lets the physical deployment of containers be mapped to the logical services you’re running.

Sysdig integrates and communicates with Kubernetes to map all the monitoring metrics from individual containers to the applications and services that you actually want to monitor. We use metadata about your namespaces, ReplicaSets, deployments, pods, and so on, and we even capture custom metadata (labels and tags). This gives you the context to monitor the actual applications that are deployed across your many containers and multiple nodes.
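To make that concrete, here is a minimal sketch of the kind of metadata the orchestrator exposes, written with the official Kubernetes Python client (not Sysdig's agent, which works differently under the hood). It assumes cluster access via your local kubeconfig; "example-java-app" is the demo namespace used later in this post.

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

# List every pod in the demo namespace along with the metadata that lets
# per-container metrics roll up into per-service views.
for pod in v1.list_namespaced_pod(namespace="example-java-app").items:
    print(pod.metadata.namespace, pod.metadata.name)
    print("  labels:", pod.metadata.labels)
    for c in pod.spec.containers:
        print("  container:", c.name, "image:", c.image)
```

Every metric a container emits can be joined against exactly this metadata, which is how raw per-container numbers become per-service views.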

An Example: Monitoring OpenShift Hosts, Projects, and Services

Let’s look at a quick example to make this more concrete. We’ve deployed a set of services onto a single host, and to visualize the environment we’ll use Sysdig Monitor. For the sake of this example, we’re using our demo environment, which runs a number of projects on OpenShift: sample projects like a “java app,” “go app,” and “voting app” along with OpenShift components. We also have some synthetic clients that generate a bit of activity. Yeah, you probably wouldn’t do this in production, but it’s useful for seeing the power of our OpenShift integration. Here’s a view of our projects within OpenShift:

[Screenshot: OpenShift projects]

And if you dig into a given project, say “java-app,” you can see the services that make up this application, including the number of pods running for each as deployed via OpenShift/Kubernetes:

[Screenshot: services in the “java-app” project]
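If you prefer code to the console, a rough equivalent of that project view looks like the sketch below. One assumption: it treats the demo services as standard Kubernetes Deployments (OpenShift also supports its own DeploymentConfig objects, which this snippet doesn't cover).

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# One line per deployment: ready pods vs. desired pods, much like the console shows.
for d in apps.list_namespaced_deployment(namespace="example-java-app").items:
    ready = d.status.ready_replicas or 0
    print(f"{d.metadata.name}: {ready}/{d.spec.replicas} pods ready")
```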

OK, now with our services up and running, let’s look at the job of monitoring them.

The screenshots below come from our OpenShift demo environment, but you’ll also see multiple references to Kubernetes. That’s because OpenShift packages up Kubernetes in order to orchestrate containers.

To start, we see some basic performance details for the host, like CPU, memory, and network:

[Screenshot: OpenShift host data]

Each host runs a number of containers. Drilling down on the hosts, we see the containers themselves:

[Screenshot: OpenShift container data]

Simply scanning this list of containers on a single host, I don’t see much organization to the responsibilities of these objects. I can’t really tell which service each container belongs to, so this host-based view has limited value.

What is interesting about this view is that it gives us per-container resource utilization. So, if you’re looking to see which squeaky container is getting more than its fair share of resources, this will instantly tell you. That’s useful, and hard to get in other monitoring tools, but still not as useful as analyzing our infrastructure at the service level as opposed to the container level.
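For the curious, you can approximate that "squeaky container" hunt yourself against the open Kubernetes metrics API. This sketch assumes the cluster runs metrics-server (or another metrics.k8s.io provider); it is not how Sysdig collects its data, just an illustration of the idea.

```python
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

def cpu_millicores(cpu: str) -> float:
    # The metrics API reports CPU as nanocores ("1234567n"),
    # millicores ("250m"), or plain cores ("1").
    if cpu.endswith("n"):
        return int(cpu[:-1]) / 1_000_000
    if cpu.endswith("m"):
        return float(cpu[:-1])
    return float(cpu) * 1000

# Fetch live usage for every pod in the demo namespace.
metrics = custom.list_namespaced_custom_object(
    "metrics.k8s.io", "v1beta1", "example-java-app", "pods")

usage = []
for pod in metrics["items"]:
    for c in pod["containers"]:
        usage.append((cpu_millicores(c["usage"]["cpu"]),
                      pod["metadata"]["name"], c["name"]))

# Busiest containers first: the "squeaky" ones.
for mc, pod_name, container in sorted(usage, reverse=True)[:5]:
    print(f"{mc:8.1f}m  {pod_name}/{container}")
```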

Now let’s use some of the metadata provided by Kubernetes to take an application-centric view of the system. Let’s start by creating a hierarchy of components based on labels, in this order:

Kubernetes namespace -> deployment -> pod -> container

This aggregates containers at corresponding levels based on the above labels. The app UI below shows this aggregation and hierarchy in the grey “grouping” bar above the data table. As you can see, we have an “example-java-app” namespace with deployments of the various services that make up the app like Cassandra, Mongo, Redis, etc. Each deployment can consist of multiple pods, which are in turn made up of one or more containers.

[Screenshot: OpenShift service data]

In addition to organizing containers via labels, this view also aggregates metrics across relevant containers. This means you get a singular view into the performance of each namespace, deployment, pod, etc.

In other words, with this aggregated view based on metadata, you can now start by monitoring and troubleshooting services, and drill into hosts and containers only if needed.
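As a sketch of what that grouping does mechanically, the snippet below builds the same namespace -> deployment -> pod -> container hierarchy from pod metadata. One simplifying assumption: it groups pods by the conventional "app" label, whereas a real agent would resolve owner references (pod -> ReplicaSet -> Deployment) instead.

```python
from collections import defaultdict
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# namespace -> deployment -> pod -> [container names]
tree = defaultdict(lambda: defaultdict(dict))

for pod in v1.list_pod_for_all_namespaces().items:
    # Simplifying assumption: the "app" label names the deployment.
    deployment = (pod.metadata.labels or {}).get("app", "<unlabeled>")
    tree[pod.metadata.namespace][deployment][pod.metadata.name] = [
        c.name for c in pod.spec.containers]

for ns, deployments in sorted(tree.items()):
    print(ns)
    for dep, pods in sorted(deployments.items()):
        print(f"  {dep}")
        for pod_name, containers in sorted(pods.items()):
            print(f"    {pod_name}: {', '.join(containers)}")
```

Aggregating metrics is then just a matter of summing or averaging each container's numbers up through this tree, which is what gives you one figure per deployment or namespace instead of dozens per host.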

Let’s do one more thing with this environment. Let’s use the metadata to create a visual representation of services and the topology of their communications. Here you see our containers organized by services, but also a map-like view that shows you how these services relate to each other.

[Screenshot: OpenShift topology map]

The boxes represent the deployments within our example-java-app namespace. These aggregate the pods and containers, which can be viewed by clicking on each to drill to the next level. The lines shown here represent communications between services and their latencies. This kind of view provides yet another logical, instead of physical, view of how these application components are working together. From here I can understand service performance and relationships, and apply a range of metrics to the map to understand underlying behavior (response time in this example).
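Conceptually, the topology map reduces to aggregating observed connections into service-to-service edges. The sketch below does this with a hypothetical, hand-written list of connection tuples; a real agent would derive these from actual network activity rather than a static list.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical observed connections: (client service, server service, latency in ms).
connections = [
    ("java-app", "cassandra", 4.2),
    ("java-app", "redis", 1.1),
    ("java-app", "mongo", 3.8),
    ("java-app", "cassandra", 5.0),
]

# Collapse individual calls into one edge per service pair.
edges = defaultdict(list)
for src, dst, latency_ms in connections:
    edges[(src, dst)].append(latency_ms)

# Each edge of the topology map: call count and average latency.
for (src, dst), latencies in sorted(edges.items()):
    print(f"{src} -> {dst}: {len(latencies)} calls, avg {mean(latencies):.1f} ms")
```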

OpenShift + Sysdig: Enabling the production-ready container platform

The next wave of application platforms is being built on containers and the orchestration tools that manage them. But making these platforms production-ready also requires deep application-level and service-level visibility inside containers.

We think the combination of OpenShift and Sysdig will help you achieve the agility you want while gaining a new level of visibility into your applications and microservices.

Our hope is that this post whets your appetite to get production-grade container monitoring running in your environment. We’d love to give you an in-depth demo and have you try it yourself! Just let us know.

If you’re interested in learning more about Sysdig and Red Hat OpenShift, visit our partnership page.
