This series is a general purpose getting started guide for those of us wanting to learn about the Cloud Native Computing Foundation (CNCF) project Fluent Bit.
Each article in this series addresses a single topic by providing insights into what the topic is, why we are interested in exploring that topic, where to get started with the topic, and how to get hands-on with learning about the topic as it relates to the Fluent Bit project.
The idea is that each article can stand on its own, but that they also lead down a path that slowly increases our abilities to implement solutions with Fluent Bit telemetry pipelines.
Let's take a look at the topic of this article, integrating Fluent Bit with Prometheus. In case you missed the previous article, check out the developer guide to monitoring health metrics with Prometheus, where you explored how to monitor the health of your telemetry data pipelines.
This article will complete our hands-on exploration of Prometheus integration, helping developers leverage Fluent Bit's powerful metrics capabilities. We'll look at the final pattern for integrating Fluent Bit with Prometheus in your observability infrastructure.
All examples in this article were done on macOS and assume the reader is able to adapt the actions shown here to their own local machine.
Integrating with Prometheus?
Before diving into the hands-on examples, let's understand why Prometheus integration matters for Fluent Bit users. Prometheus is the de facto standard for metrics collection and monitoring in cloud native environments. It's another CNCF graduated project that provides a time-series database optimized for operational monitoring. The combination of Fluent Bit's lightweight, high-throughput telemetry pipeline with Prometheus's battle-tested metrics storage creates a powerful observability solution.
Fluent Bit provides several ways to integrate with Prometheus, the first of which we covered in the first article. In the second article we explored Fluent Bit monitoring itself and exposing internal pipeline metrics, giving you visibility into the health and performance of your telemetry infrastructure. Understanding how your telemetry pipeline is performing is critical for maintaining reliable observability.
In this third and final article, we integrate with Prometheus using Fluent Bit as a metrics proxy, scraping metrics from various sources and forwarding them to Prometheus. This is particularly useful when we need to aggregate metrics from multiple sources or transform them before they reach Prometheus.
Let's dive into this final pattern, forwarding telemetry pipeline metrics to Prometheus with remote write.
Where to get started
You should have explored the previous articles in this series to install and get started with Fluent Bit on your local developer machine, either using the source code or container images. Links at the end of this article point you to a free hands-on workshop that lets you explore Fluent Bit in more detail.
You can verify that you have a functioning installation by testing Fluent Bit, using either a source installation or a container installation as shown below:
# For source installation.
$ fluent-bit -i dummy -o stdout

# For container installation.
$ podman run -ti ghcr.io/fluent/fluent-bit:4.2.2 -i dummy -o stdout
...
[0] dummy.0: [[1753105021.031338000, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1753105022.033205000, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1753105023.032600000, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1753105024.033517000, {}], {"message"=>"dummy"}]
...

Let's explore the Prometheus integration pattern for forwarding Fluent Bit metrics that will help you with your observability infrastructure.
How to integrate with Prometheus
See this article for details about the service section of the configurations used in the rest of this article. For now, we focus on the Fluent Bit pipeline and, specifically, the Prometheus integration capabilities that can help you manage metrics in your observability stack.
The figure below shows the phases of a telemetry pipeline. Metrics collected by input plugins flow through the pipeline and can be routed to Prometheus-compatible outputs.
Understanding how metrics flow through Fluent Bit's pipeline is essential for effective Prometheus integration. Input plugins collect metrics, which then pass through filters for transformation, before being routed to output plugins that deliver metrics to Prometheus.
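As a minimal sketch of that transformation step, assuming a recent Fluent Bit release that supports the processors section and the labels metrics processor, you could attach an extra label to pipeline metrics as they are collected:

pipeline:
  inputs:
    - name: fluentbit_metrics
      tag: internal_metrics
      scrape_interval: 2
      processors:
        metrics:
          # Insert a static label on every metric from this input
          - name: labels
            insert: environment staging

  outputs:
    - name: stdout
      match: internal_metrics

Running this with fluent-bit -c <file> should print the internal pipeline metrics to standard output with the extra label attached.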
Forwarding metrics using Prometheus remote write
The final integration pattern demonstrates using Fluent Bit as a metrics proxy that scrapes existing Prometheus endpoints and forwards metrics using the Prometheus remote write protocol. This pattern is useful when you need to aggregate metrics from multiple sources, transform metrics before they reach Prometheus, or push metrics to remote Prometheus backends.
The prometheus_scrape input plugin allows Fluent Bit to collect metrics from any Prometheus-compatible endpoint. The prometheus_remote_write output plugin then pushes these metrics to a Prometheus server or compatible backend using the remote write API.
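Before wiring up prometheus_scrape, it is worth confirming that the target actually serves metrics in the Prometheus exposition format. A quick check with curl (a sketch; the host, port, and path match the Vault-style endpoint used in the configuration below, and note that Vault only returns Prometheus format when asked with ?format=prometheus and may require authentication depending on its configuration):

# Expect Prometheus text format, i.e. lines starting with # HELP and # TYPE
$ curl -s "http://127.0.0.1:8200/v1/sys/metrics?format=prometheus" | head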
Let's create a configuration that scrapes metrics from a local application (we'll simulate this with HashiCorp Vault's metrics endpoint format) and forwards them using remote write:
service:
  flush: 1
  log_level: info
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on

pipeline:
  inputs:
    # Scrape metrics from an application's Prometheus endpoint
    - name: prometheus_scrape
      host: 127.0.0.1
      port: 8200
      tag: app_metrics
      metrics_path: /v1/sys/metrics
      scrape_interval: 10s

    # Also collect host metrics
    - name: node_exporter_metrics
      tag: host_metrics
      scrape_interval: 5

  outputs:
    # Forward all metrics to Prometheus using remote write
    - name: prometheus_remote_write
      match: '*'
      host: prometheus-server.example.com
      port: 9090
      uri: /api/v1/write
      add_label:
        - datacenter us-west-2
        - cluster production

This configuration demonstrates a powerful pattern where Fluent Bit acts as a metrics aggregator. It scrapes application metrics from a Prometheus endpoint, collects host metrics using the node exporter plugin, and forwards everything to a central Prometheus server using the remote write protocol. The custom labels datacenter and cluster are added to all metrics, providing organizational context for querying.
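To try this out, save the configuration to a file and start Fluent Bit with it. A minimal sketch, assuming the file is named metrics-proxy.yaml, the host in the outputs section is adjusted to point at your Prometheus instance, and that Prometheus has its remote write receiver enabled (recent releases expose this with the --web.enable-remote-write-receiver flag):

# Start Prometheus with the remote write receiver enabled
$ prometheus --config.file=prometheus.yml --web.enable-remote-write-receiver

# Run Fluent Bit with the metrics proxy configuration
$ fluent-bit -c metrics-proxy.yaml

Once metrics are flowing, you can confirm the added labels in Prometheus with a selector such as {datacenter="us-west-2", cluster="production"}.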
For environments where you need secure remote write with authentication, the configuration would look like this:
service:
  flush: 1
  log_level: info
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on

pipeline:
  inputs:
    - name: node_exporter_metrics
      tag: host_metrics
      scrape_interval: 5

    - name: fluentbit_metrics
      tag: pipeline_metrics
      scrape_interval: 2

  outputs:
    - name: prometheus_remote_write
      match: '*'
      host: prometheus.example.com
      port: 443
      uri: /api/v1/write
      tls: on
      tls.verify: on
      http_user: ${PROMETHEUS_USERNAME}
      http_passwd: ${PROMETHEUS_PASSWORD}
      add_label:
        - service my-application
        - environment production
This configuration securely pushes metrics to a remote Prometheus server using TLS and HTTP basic authentication. The credentials are read from environment variables, following security best practices. Both host metrics and Fluent Bit pipeline metrics are collected and forwarded together.
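Because the credentials are resolved from environment variables, they only need to be exported in the shell that starts Fluent Bit. A quick sketch, with placeholder values and an assumed file name of secure-remote-write.yaml:

# Provide the remote write credentials via environment variables
$ export PROMETHEUS_USERNAME=fluentbit
$ export PROMETHEUS_PASSWORD=changeme

# Start Fluent Bit with the secure remote write configuration
$ fluent-bit -c secure-remote-write.yaml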
Let's examine a more advanced scenario where we scrape metrics from multiple sources and use tagging to route them appropriately:
service:
  flush: 1
  log_level: info
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on

pipeline:
  inputs:
    # Scrape Redis metrics
    - name: prometheus_scrape
      host: redis-server
      port: 9121
      tag: redis_metrics
      metrics_path: /metrics
      scrape_interval: 15s

    # Scrape PostgreSQL metrics
    - name: prometheus_scrape
      host: postgres-server
      port: 9187
      tag: postgres_metrics
      metrics_path: /metrics
      scrape_interval: 15s

    # Collect host metrics
    - name: node_exporter_metrics
      tag: node_metrics
      scrape_interval: 10

  outputs:
    # Send infrastructure metrics to operations Prometheus
    - name: prometheus_remote_write
      match: node_metrics
      host: ops-prometheus.internal
      port: 9090
      uri: /api/v1/write
      add_label:
        - team operations
        - tier infrastructure

    # Send application metrics to development Prometheus
    - name: prometheus_remote_write
      match: '*_metrics'
      host: dev-prometheus.internal
      port: 9090
      uri: /api/v1/write
      add_label:
        - team development
        - tier application
This configuration demonstrates Fluent Bit's routing capabilities applied to metrics before they are sent to the Prometheus backends. Infrastructure metrics (from node_exporter_metrics) are routed to an operations Prometheus instance, while application metrics (Redis and PostgreSQL) are routed to a development team's Prometheus instance. Each destination receives appropriately labeled metrics for easy filtering and organization. Note that the wildcard pattern '*_metrics' also matches the node_metrics tag, so host metrics are delivered to both destinations; narrow the pattern if you want strictly exclusive routing, as shown in the sketch below.
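One way to keep the routing exclusive is to match the application tags explicitly. Here is a sketch of just the outputs section, assuming the match_regex output parameter for regular-expression tag matching:

outputs:
  # Host metrics only reach the operations Prometheus
  - name: prometheus_remote_write
    match: node_metrics
    host: ops-prometheus.internal
    port: 9090
    uri: /api/v1/write

  # Only the Redis and PostgreSQL metrics reach the development Prometheus
  - name: prometheus_remote_write
    match_regex: ^(redis|postgres)_metrics$
    host: dev-prometheus.internal
    port: 9090
    uri: /api/v1/write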
The Prometheus remote write protocol also supports compression for efficient network usage. You can enable compression by adding the compress parameter:
outputs:
  - name: prometheus_remote_write
    match: '*'
    host: prometheus-server.example.com
    port: 9090
    uri: /api/v1/write
    compress: gzip
This is particularly useful when pushing metrics over high-latency or bandwidth-constrained networks.
More in the series
In this article you explored the last of three powerful patterns for integrating Fluent Bit with Prometheus: forwarding metrics from Fluent Bit to Prometheus. This article is based on this free online workshop.
There will be more in this series as you continue to learn how to configure, run, manage, and master the use of Fluent Bit in the wild.
