This series is a general purpose getting started guide for those of us wanting to learn about the Cloud Native Computing Foundation (CNCF) project Fluent Bit.
Each article in this series addresses a single topic by providing insights into what the topic is, why we are interested in exploring that topic, where to get started with the topic, and how to get hands-on with learning about the topic as it relates to the Fluent Bit project.
The idea is that each article can stand on its own, but that they also lead down a path that slowly increases our abilities to implement solutions with Fluent Bit telemetry pipelines.
Let's take a look at the topic of this article: Fluent Bit tips for developers. In case you missed the previous article, check out the top 3 telemetry pipeline input plugins for developers, where you get tips on getting the best out of Fluent Bit in your developer experience.
This article is a hands-on tour of things that help you, as a developer, test your Fluent Bit pipelines. We'll take a look at the top three output plugins for your telemetry pipeline configuration.
All examples in this article have been done on OSX and assume the reader is able to convert the actions shown here to their own local machine.
Where to get started
You should have explored the previous articles in this series to install and get started with Fluent Bit on your local developer machine, either using the source code or container images. Links at the end of this article will point you to a free hands-on workshop that lets you explore more of Fluent Bit in detail.
You can verify that you have a functioning installation by testing your Fluent Bit, either using a source installation or a container installation as shown below:
# For source installation.
$ fluent-bit -i dummy -o stdout
# For container installation.
$ podman run -ti ghcr.io/fluent/fluent-bit:4.0.8 -i dummy -o stdout
...
[0] dummy.0: [[1753105021.031338000, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1753105022.033205000, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1753105023.032600000, {}], {"message"=>"dummy"}]
[0] dummy.0: [[1753105024.033517000, {}], {"message"=>"dummy"}]
...
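As a quick aside, plugin properties can also be set straight from the command line using the -p flag, which applies to the most recently declared plugin. This is handy for throwaway experiments before writing a configuration file. A minimal sketch, assuming the same dummy input as above with an overridden message:

# For source installation, overriding the dummy message from the CLI.
$ fluent-bit -i dummy -p dummy='{"message":"hello developer"}' -o stdout

# For container installation, the same idea applies.
$ podman run -ti ghcr.io/fluent/fluent-bit:4.0.8 -i dummy -p dummy='{"message":"hello developer"}' -o stdout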
Let's look at a few tips and tricks to help you with your local development testing of Fluent Bit output plugins.
Routing pipeline output
See this article for details about the service section of the configurations used in the rest of this article. For now, we focus on our Fluent Bit pipeline, specifically the output plugins that can be of great help in routing telemetry data during testing in our inner developer loop.
1. Standard output plugin
While it might seem redundant to point out, the standard output stream is every developer's friend. Whether we are testing and debugging Java, Python, or Go projects, we rely on standard output to quickly and visually verify the fixes we are applying.
Fluent Bit is no different: in a testing environment you will almost always want a routing option that sends all tagged telemetry data to the standard output plugin. Below we use generated dummy telemetry data routed directly to standard output.
service:
  flush: 1
  log_level: info
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on
pipeline:
  inputs:
    # This entry generates a successful message.
    - name: dummy
      tag: event.success
      dummy: '{"message":"true 200 success"}'
    # This entry generates a failure message.
    - name: dummy
      tag: event.error
      dummy: '{"message":"false 500 error"}'
  outputs:
    - name: stdout
      match: '*'
For completeness, we run this configuration to see the output as follows:
# For source installation.
$ fluent-bit --config fluent-bit.yaml
# For container installation after building new image with your
# configuration using a Buildfile as follows:
#
# FROM ghcr.io/fluent/fluent-bit:4.0.9
# COPY ./fluent-bit.yaml /fluent-bit/etc/fluent-bit.yaml
# CMD [ "fluent-bit", "-c", "/fluent-bit/etc/fluent-bit.yaml" ]
#
$ podman build -t fb -f Buildfile
$ podman run --rm fb
...
[0] event.success: [[1757944598.315568000, {}], {"message"=>"true 200 success"}]
[0] event.error: [[1757944598.316068000, {}], {"message"=>"false 500 error"}]
[0] event.success: [[1757944599.313530000, {}], {"message"=>"true 200 success"}]
[0] event.error: [[1757944599.313752000, {}], {"message"=>"false 500 error"}]
[0] event.success: [[1757944600.315704000, {}], {"message"=>"true 200 success"}]
[0] event.error: [[1757944600.315809000, {}], {"message"=>"false 500 error"}]
...
As we can see, this gives developers an easy way to access whatever the pipeline is producing at the end of processing. In this example there is no formatting applied; the output above uses the default format, known as msgpack.
There are several options we can use to adjust the format sent to standard output, such as json and json_lines. Let's explore each one, starting with json:
service:
  flush: 1
  log_level: info
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on
pipeline:
  inputs:
    # This entry generates a successful message.
    - name: dummy
      tag: event.success
      dummy: '{"message":"true 200 success"}'
    # This entry generates a failure message.
    - name: dummy
      tag: event.error
      dummy: '{"message":"false 500 error"}'
  outputs:
    - name: stdout
      match: '*'
      format: json
This results in the following output, where each telemetry event appears as a distinct JSON line encapsulated in square brackets:
# For source installation.
$ fluent-bit --config fluent-bit.yaml
# For container installation after building new image with your
# configuration using a Buildfile as follows:
#
# FROM ghcr.io/fluent/fluent-bit:4.0.9
# COPY ./fluent-bit.yaml /fluent-bit/etc/fluent-bit.yaml
# CMD [ "fluent-bit", "-c", "/fluent-bit/etc/fluent-bit.yaml" ]
#
$ podman build -t fb -f Buildfile
$ podman run --rm fb
...
[{"date":1757945117.953698,"message":"true 200 success"}]
[{"date":1757945117.954361,"message":"false 500 error"}]
[{"date":1757945118.949162,"message":"true 200 success"}]
[{"date":1757945118.949535,"message":"false 500 error"}]
[{"date":1757945119.95173,"message":"true 200 success"}]
[{"date":1757945119.951864,"message":"false 500 error"}]
...
And if we try the following configuration with json_lines:
service:
  flush: 1
  log_level: info
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on
pipeline:
  inputs:
    # This entry generates a successful message.
    - name: dummy
      tag: event.success
      dummy: '{"message":"true 200 success"}'
    # This entry generates a failure message.
    - name: dummy
      tag: event.error
      dummy: '{"message":"false 500 error"}'
  outputs:
    - name: stdout
      match: '*'
      format: json_lines
This results in the following output; note that each record is now a plain JSON object on its own line, without the enclosing square brackets:
# For source installation.
$ fluent-bit --config fluent-bit.yaml
# For container installation after building new image with your
# configuration using a Buildfile as follows:
#
# FROM ghcr.io/fluent/fluent-bit:4.0.9
# COPY ./fluent-bit.yaml /fluent-bit/etc/fluent-bit.yaml
# CMD [ "fluent-bit", "-c", "/fluent-bit/etc/fluent-bit.yaml" ]
#
$ podman build -t fb -f Buildfile
$ podman run --rm fb
...
{"date":1757945242.482185,"message":"true 200 success"}
{"date":1757945242.482595,"message":"false 500 error"}
{"date":1757945243.482893,"message":"true 200 success"}
{"date":1757945243.483157,"message":"false 500 error"}
{"date":1757945244.483228,"message":"true 200 success"}
{"date":1757945244.483269,"message":"false 500 error"}
...
Another nice feature is the ability to drop the date information. During development testing we are usually focused on our processing rules rather than timestamps, so we can remove the date key with the following configuration:
service:
  flush: 1
  log_level: info
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on
pipeline:
  inputs:
    # This entry generates a successful message.
    - name: dummy
      tag: event.success
      dummy: '{"message":"true 200 success"}'
    # This entry generates a failure message.
    - name: dummy
      tag: event.error
      dummy: '{"message":"false 500 error"}'
  outputs:
    - name: stdout
      match: '*'
      format: json_lines
      json_date_key: false
This results in the following output; note that the date field is now gone, leaving only the message content:
# For source installation.
$ fluent-bit --config fluent-bit.yaml
# For container installation after building new image with your
# configuration using a Buildfile as follows:
#
# FROM ghcr.io/fluent/fluent-bit:4.0.9
# COPY ./fluent-bit.yaml /fluent-bit/etc/fluent-bit.yaml
# CMD [ "fluent-bit", "-c", "/fluent-bit/etc/fluent-bit.yaml" ]
#
$ podman build -t fb -f Buildfile
$ podman run --rm fb
...
{"message":"true 200 success"}
{"message":"false 500 error"}
{"message":"true 200 success"}
{"message":"false 500 error"}
{"message":"true 200 success"}
{"message":"false 500 error"}
{"message":"true 200 success"}
...
And if we want to keep the timestamps but display them in a format that is a little easier to digest during testing, we can configure the following:
service:
  flush: 1
  log_level: info
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on
pipeline:
  inputs:
    # This entry generates a successful message.
    - name: dummy
      tag: event.success
      dummy: '{"message":"true 200 success"}'
    # This entry generates a failure message.
    - name: dummy
      tag: event.error
      dummy: '{"message":"false 500 error"}'
  outputs:
    - name: stdout
      match: '*'
      format: json_lines
      json_date_format: java_sql_timestamp
This results in the following output with human readable timestamps:
# For source installation.
$ fluent-bit --config fluent-bit.yaml
# For container installation after building new image with your
# configuration using a Buildfile as follows:
#
# FROM ghcr.io/fluent/fluent-bit:4.0.9
# COPY ./fluent-bit.yaml /fluent-bit/etc/fluent-bit.yaml
# CMD [ "fluent-bit", "-c", "/fluent-bit/etc/fluent-bit.yaml" ]
#
$ podman build -t fb -f Buildfile
$ podman run --rm fb
...
{"date":"2025-09-15 14:21:29.245802","message":"true 200 success"}
{"date":"2025-09-15 14:21:29.246220","message":"false 500 error"}
{"date":"2025-09-15 14:21:30.245534","message":"true 200 success"}
{"date":"2025-09-15 14:21:30.245956","message":"false 500 error"}
{"date":"2025-09-15 14:21:31.243520","message":"true 200 success"}
{"date":"2025-09-15 14:21:31.243680","message":"false 500 error"}
...
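If you prefer ISO 8601 timestamps, the stdout plugin also accepts iso8601 (and epoch) as values for json_date_format. A minimal variation of the outputs section above would look like this:

  outputs:
    - name: stdout
      match: '*'
      format: json_lines
      # Print timestamps in ISO 8601 form instead of the SQL-style timestamp.
      json_date_format: iso8601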
While it may seem trivial, this output plugin is the meat and potatoes of developer testing and fully earns its place in this top listing. Let's move on to the second output plugin.
2. File output plugin
The most common use case, next to just visualizing output with the standard output plugin, is the need to selectively store output. An example of this would be pushing all your telemetry output, after filtering, to standard output, while collecting errors in a file to be watched separately.
The following configuration shows the file output plugin configured for exactly that scenario. Generated telemetry data is collected, could be filtered, and is routed to standard output. A second route matches only the error telemetry and sends it to a file in the configured directory:
service:
  flush: 1
  log_level: info
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on
pipeline:
  inputs:
    # This entry generates a successful message.
    - name: dummy
      tag: event.success
      dummy: '{"message":"true 200 success"}'
    # This entry generates a failure message.
    - name: dummy
      tag: event.error
      dummy: '{"message":"false 500 error"}'
  outputs:
    - name: stdout
      match: '*'
      format: json_lines
      json_date_format: java_sql_timestamp
    - name: file
      match: '*.error'
      path: /tmp
This results in the following standard output with human readable timestamps:
# For source installation.
$ fluent-bit --config fluent-bit.yaml
# For container installation after building new image with your
# configuration using a Buildfile as follows:
#
# FROM ghcr.io/fluent/fluent-bit:4.0.9
# COPY ./fluent-bit.yaml /fluent-bit/etc/fluent-bit.yaml
# CMD [ "fluent-bit", "-c", "/fluent-bit/etc/fluent-bit.yaml" ]
#
$ podman build -t fb -f Buildfile
# Mounting the current directory as the container's /tmp directory.
$ podman run --rm -v ./:/tmp fb
...
{"date":"2025-09-15 14:21:29.245802","message":"true 200 success"}
{"date":"2025-09-15 14:21:29.246220","message":"false 500 error"}
{"date":"2025-09-15 14:21:30.245534","message":"true 200 success"}
{"date":"2025-09-15 14:21:30.245956","message":"false 500 error"}
{"date":"2025-09-15 14:21:31.243520","message":"true 200 success"}
{"date":"2025-09-15 14:21:31.243680","message":"false 500 error"}
...
The file output can be found in the following location once Fluent Bit has started. The file persists after Fluent Bit stops and contains all the telemetry errors collected during execution:
# Watching the errors collecting in our filesystem.
$ tail -f /tmp/event.error
...
event.error: [1757947064.060294000, {"message":"false 500 error"}]
event.error: [1757947065.061441000, {"message":"false 500 error"}]
event.error: [1757947066.059972000, {"message":"false 500 error"}]
event.error: [1757947067.060831000, {"message":"false 500 error"}]
event.error: [1757947068.059963000, {"message":"false 500 error"}]
...
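By default, the file plugin names the output file after the tag of the matched records, which is why we tail /tmp/event.error above. If you would rather collect everything that matches into a single, fixed file, the plugin also accepts a file property. A minimal sketch of the error route, using a hypothetical errors.log filename:

  outputs:
    - name: file
      match: '*.error'
      path: /tmp
      # Write all matched records into one fixed file instead of one file per tag.
      file: errors.log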
With these two output plugins, we now have the flexibility to route our telemetry data to several locations to help in our development testing. Our final plugin for developers, covered in the next section, gives us insight into how much telemetry data is flowing through our pipeline.
3. Flow count output plugin
The final output plugin to be mentioned in our top three listing is the flow count output plugin. It gives us insights into the number of records and the size of the telemetry data flowing through our pipeline. The following configuration reports basic counts per second of telemetry data, focusing in this case on only the error records:
service:
  flush: 1
  log_level: info
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  hot_reload: on
pipeline:
  inputs:
    # This entry generates a successful message.
    - name: dummy
      tag: event.success
      dummy: '{"message":"true 200 success"}'
    # This entry generates a failure message.
    - name: dummy
      tag: event.error
      rate: 5
      dummy: '{"message":"false 500 error"}'
  outputs:
    - name: stdout
      match: '*'
      format: json_lines
      json_date_format: java_sql_timestamp
    - name: flowcounter
      match: '*.error'
      unit: second
This results in the following output, where we see a single success message followed by five error messages, since we set the rate property of the error input to five events per second. Everything is routed to standard output, and the flow counter then reports the details of the message flow once per second:
# For source installation.
$ fluent-bit --config fluent-bit.yaml
# For container installation after building new image with your
# configuration using a Buildfile as follows:
#
# FROM ghcr.io/fluent/fluent-bit:4.0.9
# COPY ./fluent-bit.yaml /fluent-bit/etc/fluent-bit.yaml
# CMD [ "fluent-bit", "-c", "/fluent-bit/etc/fluent-bit.yaml" ]
#
$ podman build -t fb -f Buildfile
$ podman run --rm fb
...
{"date":"2025-09-15 17:54:33.624733","message":"true 200 success"}
{"date":"2025-09-15 17:54:33.624867","message":"false 500 error"}{"date":"2025-09-15 17:54:33.824136","message":"false 500 error"}{"date":"2025-09-15 17:54:34.024303","message":"false 500 error"}{"date":"2025-09-15 17:54:34.224448","message":"false 500 error"}{"date":"2025-09-15 17:54:34.424642","message":"false 500 error"}
[out_flowcounter] [1757958872, {"counts":5, "bytes":0, "counts/second":5, "bytes/second":0 }]
...
Note that our record size is too small to register anything other than in the counts category, but if we had larger records it would report the byte size being sent.
As a self-paced exercise, try increasing the record size from the dummy plugin and also try changing the unit to the value minute to see how many records per minute are flowing through our Fluent Bit telemetry pipeline.
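As a starting point for that exercise, here is one possible sketch; only the changed pipeline entries are shown, and the padding field is simply an illustrative way to inflate the record size:

  inputs:
    - name: dummy
      tag: event.error
      rate: 5
      # The extra padding field makes each record large enough to show up in the bytes counters.
      dummy: '{"message":"false 500 error","padding":"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"}'
  outputs:
    - name: flowcounter
      match: '*.error'
      # Report flow totals once per minute instead of once per second.
      unit: minute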
This wraps up a few handy output plugin and routing tricks for developers getting started with Fluent Bit. The ability to set up and leverage these plugins will help speed up your inner developer loop experience.
More in the series
In this article you learned a few handy tricks for using Fluent Bit output plugins and routing to improve the inner developer loop experience. This article is based on this free online workshop.
There will be more in this series as you continue to learn how to configure, run, manage, and master the use of Fluent Bit in the wild. Next up, exploring some of the more interesting Fluent Bit parsers for developers.