Monitoring Pi-hole: Having your Raspberry Pi, and eating it with Prometheus remote write!

By Donald Patterson - AUGUST 11, 2021
Topics: Monitoring

Last year we cooked a holiday ham using Sysdig. Honestly, just revisiting that makes me hungry, but it got me thinking. What about dessert?

Today, I’d like to discuss baking a pie and eating it with Prometheus Remote Write. But not just any pie: a Raspberry Pi. Specifically, I’d like to introduce you to Pi-hole, an open-source project that has become very popular in the community. In this article, you’ll learn how easy it is to monitor Pi-hole with Prometheus Remote Write.

What is Pi-hole?

Pi-hole is an advertisement and internet-tracking blocker for your network. It’s designed to run on devices such as the Raspberry Pi, but you can easily install it on almost any Linux system.

Based on the rules you’ve defined in the application, Pi-hole intercepts DNS requests that would otherwise go out into the ether, and blocks the potentially dangerous or otherwise ad-laden requests. All devices connected to your network are protected: phones, tablets, computers, and that scary no-name IoT device you just purchased on eBay.

That’s why some businesses and schools use Pi-hole to prevent employees from navigating to dangerous sites and to protect children while they browse the web.

Let’s see how you can properly start monitoring Pi-hole with Prometheus Remote Write to ensure it’s successfully protecting your network.

Monitoring Pi-hole with Prometheus Remote Write

Running a Prometheus server on a Raspberry Pi can be problematic because Raspis use SD memory cards as storage. Intensive, continuous writing can damage the cards, and long retention configurations can exhaust the available storage on the SD card.

That’s why you should configure the local Prometheus with a short retention time, minimizing disk usage, and configure it to send the data to a Prometheus server hosted on another computer or cloud service.
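
For reference, local retention is controlled by a startup flag on the Prometheus server. A minimal sketch, keeping only two hours of data on the SD card instead of the default 15 days (we’ll show how to pass this flag to the containerized Prometheus later on):

prometheus --config.file=prometheus.yml --storage.tsdb.retention.time=2h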

Monitoring Pi-hole with Prometheus Remote Write means that, instead of relying on an agent to pull a few key metrics from your Prometheus exporter, you configure Prometheus itself to write your favorite metrics to your remote Prometheus server, completely agentless.

This solution also allows you to centralize the data if you have different Pi-Holes for different rooms, LANs, or buildings.

Let’s Bake a Pi!

Ingredients:

A Delicious Crust to Contain our Pi

You need to install Docker on your Raspi, since it will be used in later steps:

curl -sSL https://get.docker.com | sudo sh
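
If you want to confirm the installation worked before moving on, a quick test run does the trick:

sudo docker run --rm hello-world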

Prepare the Filling

Let’s deploy the Prometheus exporter to get the metrics that the Prometheus server will receive.

So what is a Prometheus Exporter?

An exporter is a “translator” or “adapter” program that fetches data from a non-Prometheus endpoint and then converts it to the Prometheus metrics format, ready to be scraped by a Prometheus server.

There are a ton of different Prometheus exporters out there to monitor all kinds of things. You’ll need the Pi-hole exporter for this project, a simple exporter written for Pi-Hole and already containerized.

The first step is getting credentials that the exporter can use to collect data from Pi-Hole. The easiest way to do this is to use the API key, which can be obtained using the following command:

awk -F= -v key="WEBPASSWORD" '$1==key {print $2}' /etc/pihole/setupVars.conf
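
For convenience, you can capture the token in a shell variable so you don’t have to copy and paste it by hand (this assumes the default setupVars.conf location used above):

PIHOLE_API_TOKEN=$(awk -F= -v key="WEBPASSWORD" '$1==key {print $2}' /etc/pihole/setupVars.conf)
echo "$PIHOLE_API_TOKEN"

You can then pass it to the exporter in the next command with -e "PIHOLE_API_TOKEN=$PIHOLE_API_TOKEN".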

Next, we need to spin up the exporter as a daemon to run in the background. In this case, we are also exposing the exporter on port 9617, but you can use any free port available:

sudo docker run \
  -d \
  --network="host" \
  -e 'PIHOLE_HOSTNAME=127.0.0.1' \
  -e "PIHOLE_API_TOKEN=<Pi-Hole Token>" \
  -e 'INTERVAL=10s' \
  -e 'PORT=9617' \
  -p 9617:9617 \
  ekofr/pihole-exporter:v0.0.11
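
Before testing the endpoint, you can check that the container came up cleanly and peek at its logs:

sudo docker ps --filter "ancestor=ekofr/pihole-exporter:v0.0.11"
sudo docker logs $(sudo docker ps -q --filter "ancestor=ekofr/pihole-exporter:v0.0.11")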

Once the container is running, you can test that metrics are being collected from Pi-Hole and published by the exporter:

curl -s http://127.0.0.1:9617/metrics | grep pihole
# HELP pihole_ads_blocked_today This represent the number of ads blocked over the current day
# TYPE pihole_ads_blocked_today gauge
pihole_ads_blocked_today{hostname="127.0.0.1"} 21319
# HELP pihole_ads_percentage_today This represent the percentage of ads blocked over the current day
# TYPE pihole_ads_percentage_today gauge
pihole_ads_percentage_today{hostname="127.0.0.1"} 28.602285
…

You could also access the /metrics endpoint via a web browser to verify the collected and available metrics.

Baking our Pi!

Now that our metrics are available locally, we need a way to ship them to the Prometheus server to start monitoring Pi-hole with Prometheus Remote Write. As discussed earlier, we’ll use the containerized version of Prometheus to accomplish this.

First, create the prometheus.yml file. The official Prometheus distribution ships with a basic example configuration, so you can use a copy of it as a template.

To get started, there are a few main sections you might want to be aware of:

The external_labels section

In this section, you can define labels that are attached to every metric written to the destination Prometheus server. Here, you can place some labels to help you easily identify and scope the metrics later.

The scrape_configs section

Here, you can define jobs to tell Prometheus where to scrape the metrics from. In this case, the only job you need is to connect to the loopback and collect the metrics from the Pi-Hole Exporter on port 9617.

By default, Prometheus will scrape from the /metrics path when an alternative is not defined, which is exactly what we need.
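
The Pi-hole exporter serves its metrics on /metrics, so nothing extra is needed here. For reference, if you ever scrape a target that publishes metrics somewhere else, a hypothetical job could override the path with metrics_path:

scrape_configs:
  - job_name: 'some-other-exporter'   # hypothetical example, not needed for Pi-hole
    metrics_path: /probe              # override the default /metrics path
    static_configs:
      - targets: ['127.0.0.1:9100']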

The remote_write section

This is the section you’ll use to ship metrics to the Prometheus server. You need to define a url pointing to the remote write API endpoint of the Prometheus server.

Here’s the resulting prometheus.yml after the aforementioned modifications:

global:
  scrape_interval:     10s # How frequently to scrape targets. The default is 1m.
  evaluation_interval: 10s # How frequently to evaluate rules. The default is 1m.
  scrape_timeout: 10s      # How long before a scrape times out. The default is 10s.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'pihole'
      origin_prometheus: 'donald-pihole'
# A scrape configuration containing exactly one endpoint to scrape:
scrape_configs:
  - job_name: 'pihole'
    static_configs:
      - targets: ['127.0.0.1:9617']
remote_write:
- url: "<PROMETHEUS_SERVER_URL>"
  tls_config:
    insecure_skip_verify: true
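
The configuration above skips TLS verification and sends data unauthenticated. If your destination server or managed service requires credentials, remote_write also supports authentication. A minimal sketch, assuming a bearer-token-based endpoint, where <YOUR_API_TOKEN> is a placeholder for whatever your provider issues:

remote_write:
- url: "<PROMETHEUS_SERVER_URL>"
  bearer_token: "<YOUR_API_TOKEN>"  # basic_auth or the newer authorization block also work, depending on your Prometheus version
  tls_config:
    insecure_skip_verify: true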

To simplify the Prometheus installation process, we recommend using the official Prometheus Docker image to launch the server on the Raspi. This will start scraping your Pi-Hole exporter and remote writing the metrics to the Prometheus server!

docker run \
    -p 9090:9090 \
    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus
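
A couple of optional tweaks, shown as a sketch: running the container detached with a restart policy, and applying the short retention discussed earlier. Note that flags passed after the image name replace the image’s default command, so --config.file and --storage.tsdb.path need to be repeated:

docker run -d \
    --restart unless-stopped \
    -p 9090:9090 \
    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus \
    --config.file=/etc/prometheus/prometheus.yml \
    --storage.tsdb.path=/prometheus \
    --storage.tsdb.retention.time=2h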

Let’s Eat!

Within a few moments of starting the Prometheus container, you should notice delicious, new pihole_* metrics available within Sysdig Monitor for your enjoyment! You can use Grafana or whatever dashboard tool you like, but I will be using Sysdig Monitor for the sake of convenience.

With PromQL, you can make queries ranging from the simple to the complex, and everything in between. This allows you to monitor your new Pi-Hole deployment the way you want.
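
For example, here are a couple of simple queries against the metrics we saw earlier (the origin_prometheus label comes from the external_labels we configured):

# Percentage of DNS requests blocked today, per Pi-hole
pihole_ads_percentage_today

# Total ads blocked today, aggregated across all your Pi-holes
sum by (origin_prometheus) (pihole_ads_blocked_today)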

In only a few minutes, I was able to create a dashboard that gives me complete visibility into the requests on my network, leaving me feeling more secure and protected.


Sysdig’s managed Prometheus service supports Prometheus remote write functionality, allowing you to easily implement a long-term managed storage solution for your metrics and monitor the metrics that matter with minimal overhead, and without any impact on your waistline!

Prometheus monitoring with Sysdig

You can try this in just a few minutes with the Sysdig Monitor free trial.
