How to Use Filebeat


Filebeat is a lightweight, open-source log shipper developed by Elastic as part of the Elastic Stack (formerly known as the ELK Stack). Designed to efficiently collect, forward, and centralize log data from files on servers, Filebeat plays a critical role in modern observability architectures. Whether you're managing a single application server or a distributed microservices environment, Filebeat ensures that your log data is reliably delivered to destinations like Elasticsearch, Logstash, or even Kafka for indexing, analysis, and visualization.

Unlike traditional log collection tools that require heavy resource usage or complex configurations, Filebeat is optimized for minimal overhead. It uses a small memory footprint and consumes negligible CPU cycles, making it ideal for deployment across thousands of systems without impacting performance. Its modular architecture, built-in processors, and seamless integration with other Elastic components make it the go-to solution for DevOps teams, site reliability engineers (SREs), and security analysts who need real-time visibility into system and application behavior.

In this comprehensive guide, you'll learn exactly how to use Filebeat: from initial installation to advanced configuration, best practices, real-world use cases, and troubleshooting. By the end of this tutorial, you'll have the knowledge and confidence to deploy Filebeat in production environments, optimize its performance, and ensure your log data flows reliably through your observability pipeline.

Step-by-Step Guide

1. Understanding Filebeat's Role in the Data Pipeline

Before installing Filebeat, it's essential to understand where it fits in the broader log management workflow. Filebeat is not a log analyzer or visualizer; it's a collector and forwarder. It reads log files from your system, tailing them in real time, and sends the data to a configured output destination.

The typical data flow looks like this:

  • Application or system generates logs (e.g., nginx access logs, application error logs, system syslog)
  • Filebeat monitors specified log files and reads new entries
  • Filebeat optionally processes logs using built-in processors (e.g., parsing, renaming fields, dropping events)
  • Filebeat sends logs to an output: Elasticsearch, Logstash, or Kafka
  • Elasticsearch stores and indexes the data
  • Kibana visualizes the data for analysis

Filebeat can also send data directly to Elasticsearch, bypassing Logstash, which reduces complexity and resource usage in simpler deployments.
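
Put together, a minimal filebeat.yml for the direct-to-Elasticsearch case might look like the sketch below (the paths and host are placeholders to adapt):

```yaml
filebeat.inputs:
- type: filestream
  id: system-logs            # filestream inputs should carry a unique id in 8.x
  paths:
    - /var/log/*.log         # placeholder; point this at your own logs

output.elasticsearch:
  hosts: ["http://localhost:9200"]
```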

2. Prerequisites

Before installing Filebeat, ensure your system meets the following requirements:

  • A Linux, macOS, or Windows server (Filebeat supports all major platforms)
  • At least 1 GB of available disk space for logs and Filebeat's registry file
  • Network connectivity to your destination (Elasticsearch, Logstash, or Kafka)
  • Administrative (sudo/root) access to install and configure the service
  • Basic familiarity with command-line interfaces and YAML configuration files

Ensure that your destination service is running and accessible. For example, if you're sending logs to Elasticsearch, verify that port 9200 is open and responding.
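
A quick way to check reachability from the host that will run Filebeat (a sketch; ES_HOST is a placeholder for your endpoint, and curl is assumed to be installed):

```shell
# Probe the Elasticsearch endpoint; curl fails fast if it is unreachable.
ES_HOST="${ES_HOST:-http://localhost:9200}"
if curl -fsS --max-time 5 "$ES_HOST" >/dev/null 2>&1; then
  status="reachable"
else
  status="unreachable"
fi
echo "Elasticsearch is $status at $ES_HOST"
```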

3. Installing Filebeat

Filebeat can be installed via package managers, Docker, or by downloading binaries directly. Below are installation instructions for the most common environments.

On Ubuntu/Debian

First, add the Elastic GPG key. Recent Debian and Ubuntu releases have removed apt-key, so store the key in a dedicated keyring instead:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

Add the Elastic repository, referencing that keyring:

echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list

Update the package list and install Filebeat:

sudo apt-get update && sudo apt-get install filebeat

On CentOS/RHEL

Import the GPG key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create a repository file:

sudo tee /etc/yum.repos.d/elastic-8.x.repo <<EOF
[elastic-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

Install Filebeat:

sudo yum install filebeat

On Windows

Download the latest Filebeat Windows ZIP file from the official downloads page.

Extract the ZIP file to a directory like C:\Program Files\Filebeat.

Open PowerShell as Administrator and navigate to the extracted directory:

cd 'C:\Program Files\Filebeat'

Install Filebeat as a Windows service:

.\install-service-filebeat.ps1

4. Configuring Filebeat

The main configuration file for Filebeat is filebeat.yml, located at:

  • Linux: /etc/filebeat/filebeat.yml
  • Windows: C:\Program Files\Filebeat\filebeat.yml

Before editing, make a backup:

sudo cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak

Basic Configuration: Sending Logs to Elasticsearch

Open the configuration file in your preferred editor:

sudo nano /etc/filebeat/filebeat.yml

Locate the output.elasticsearch section and uncomment it. Configure the host:

output.elasticsearch:
  hosts: ["http://localhost:9200"]

If Elasticsearch requires authentication, add credentials:

output.elasticsearch:
  hosts: ["http://your-elasticsearch-ip:9200"]
  username: "elastic"
  password: "your-password"

Configuring Input Sources

Now define which log files Filebeat should monitor. Locate the filebeat.inputs section. By default, it's commented out. Uncomment and modify it:

filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/apache2/*.log

Important: the filestream input supersedes the older log input, which is deprecated in Filebeat 8.x. Ensure you use the correct type for your version.

You can also monitor specific applications:

- type: filestream
  enabled: true
  paths:
    - /opt/myapp/logs/*.log
  fields:
    app: "myapp"
  fields_under_root: true

The fields parameter adds custom metadata to each log event. Setting fields_under_root: true places these fields at the top level of the event, making them easier to query in Kibana.
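
To make the difference concrete, here is a sketch of the two event shapes (heavily abridged; real Filebeat events carry many more fields):

```json
{"message": "GET /health 200", "app": "myapp"}
```

With the default fields_under_root: false, the same value would instead nest under a fields object: {"message": "GET /health 200", "fields": {"app": "myapp"}}.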

Configuring Output to Logstash

If you're using Logstash for advanced processing (e.g., parsing complex log formats), configure Filebeat to send logs to Logstash instead of Elasticsearch:

output.logstash:
  hosts: ["your-logstash-server:5044"]

Ensure Logstash is configured to listen on port 5044 (default) and has a Beats input plugin enabled:

input {
  beats {
    port => 5044
  }
}

5. Enabling and Starting Filebeat

After saving your configuration, validate it for syntax errors:

sudo filebeat test config

If successful, test connectivity to your output:

sudo filebeat test output

Enable Filebeat to start on boot:

sudo systemctl enable filebeat

Start the service:

sudo systemctl start filebeat

Check its status:

sudo systemctl status filebeat

You should see active (running). If there's an error, check the logs:

sudo journalctl -u filebeat -f

6. Verifying Log Delivery

To confirm Filebeat is working:

  1. Generate test log entries. For example, append a line to an Apache log:
echo "$(date) - Test log entry from Filebeat" >> /var/log/apache2/access.log

  2. Check Elasticsearch for new documents:
curl -X GET "http://localhost:9200/filebeat-*/_search?pretty"

If you see a response with hits, Filebeat is successfully sending data.
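
To pull just the hit count out of that response, a small filter like the following works (the response here is a canned sample, not live output from your cluster):

```shell
# Extract the total hit count from a _search response body. In practice,
# pipe the curl output from the previous step through the same sed filter.
response='{"hits":{"total":{"value":42,"relation":"eq"}}}'
hits=$(printf '%s' "$response" | sed -n 's/.*"value":\([0-9]*\).*/\1/p')
echo "documents indexed: $hits"
```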

  3. In Kibana, create a data view (called an index pattern in older versions) matching filebeat-* and explore your logs in the Discover tab.

7. Advanced Configuration: Using Processors

Filebeat includes powerful built-in processors to clean, enrich, and transform logs before sending them. These are defined under the processors section in filebeat.yml.

Example: Remove Sensitive Data

Drop fields that carry sensitive values, such as passwords or API keys, before the events leave the host:

processors:
  - drop_fields:
      fields: ["password", "api_key"]

Example: Parse JSON Logs

If your application outputs JSON logs:

processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true

This extracts JSON fields from the message field and promotes them to top-level fields.

Example: Add Host Metadata

Automatically add system information:

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded

This adds fields like host.name, host.ip, and host.architecture to each event.

Example: Rename Fields

Standardize field names across multiple log sources:

processors:
  - rename:
      fields:
        - from: "log.file.path"
          to: "source.path"

Processors are executed in order, so place them logically. Use when conditions to apply them only under specific circumstances.
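
As a sketch of such a condition (the matched string is a placeholder), the following drops noisy events while leaving everything else untouched:

```yaml
processors:
  - drop_event:
      when:
        contains:
          message: "healthcheck"   # placeholder; drop only events containing this string
```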

8. Handling Large Volumes and High Throughput

For environments generating thousands of logs per second, optimize Filebeat performance:

  • Adjust batch size: Increase bulk_max_size in the output section (for example, to values between 100 and 200) for better throughput.
  • Enable compression: Set compression_level: 5 in the output section to reduce network bandwidth.
  • Use multiple Filebeat instances: Deploy Filebeat on each host rather than centralizing collection.
  • Use Logstash for heavy processing: Offload parsing and filtering to Logstash to reduce Filebeat CPU usage.
  • Monitor registry file: Filebeat tracks which lines have been read in /var/lib/filebeat/registry. Ensure this directory has sufficient I/O performance.
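
Combining the first two tuning knobs, a throughput-oriented output section might look like this sketch (values are illustrative starting points, not universal recommendations):

```yaml
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  worker: 2               # parallel bulk writers to Elasticsearch
  bulk_max_size: 200      # events per bulk request
  compression_level: 5    # gzip level; trades CPU for network bandwidth
```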

Best Practices

1. Use Dedicated Log Directories

Organize application logs in dedicated directories (e.g., /opt/appname/logs/) rather than mixing them with system logs. This simplifies Filebeat configuration, improves maintainability, and reduces the risk of monitoring unintended files.

2. Avoid Monitoring Temporary or Rotated Logs

Filebeat can handle log rotation, but it's best to avoid monitoring files that are frequently deleted or rewritten. Use log rotation tools like logrotate with the copytruncate option to preserve file handles while rotating.

Example logrotate configuration:

/var/log/myapp/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    copytruncate
    notifempty
    create 644 root root
}
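
The effect of copytruncate can be demonstrated by hand: the live file keeps its inode, so a tailing reader such as Filebeat keeps a valid open handle across the rotation. A small simulation in a temporary directory:

```shell
# Simulate logrotate's copytruncate: copy the contents aside, then truncate
# the original file in place. The inode of the live file does not change.
tmpdir=$(mktemp -d)
log="$tmpdir/app.log"
printf 'line1\nline2\n' > "$log"
inode_before=$(ls -i "$log" | awk '{print $1}')
cp "$log" "$log.1"   # step 1: copy current contents to the rotated file
: > "$log"           # step 2: truncate the live file, keeping its inode
inode_after=$(ls -i "$log" | awk '{print $1}')
echo "inode before: $inode_before, inode after: $inode_after"
rm -rf "$tmpdir"
```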

3. Secure Communication

Never send logs over unencrypted channels. Configure TLS between Filebeat and Elasticsearch or Logstash:

output.elasticsearch:
  hosts: ["https://your-es-server:9200"]
  ssl.enabled: true
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]

Use certificates signed by a trusted CA, and avoid self-signed certs in production unless properly managed.

4. Use Fields for Context

Always enrich logs with contextual metadata using the fields parameter:

fields:
  environment: "production"
  service: "payment-service"
  region: "us-east-1"

This makes filtering and aggregation in Kibana far more efficient and meaningful.

5. Monitor Filebeat Health

Filebeat can report metrics about its own health to Elasticsearch for display in Kibana. Enable this in filebeat.yml:

monitoring.enabled: true
monitoring.elasticsearch:
  hosts: ["http://localhost:9200"]

Then monitor Filebeat's performance in Kibana under Stack Monitoring. Track metrics like:

  • Events published per second
  • Failed events
  • Registry file size
  • Memory usage

6. Limit Log File Permissions

Ensure Filebeat runs with the minimum required privileges. On Linux, create a dedicated user:

sudo useradd -s /sbin/nologin -r -M filebeat

sudo chown -R filebeat:filebeat /var/lib/filebeat/

Then update the service file to run as this user:

sudo systemctl edit --full filebeat

Add:

User=filebeat
Group=filebeat

7. Regularly Update Filebeat

Keep Filebeat updated to benefit from performance improvements, security patches, and new features. Use your package manager to upgrade:

sudo apt-get update && sudo apt-get upgrade filebeat

8. Test Configurations Before Deployment

Always validate your filebeat.yml before restarting the service:

sudo filebeat test config

sudo filebeat test output

Use filebeat -e -c /etc/filebeat/filebeat.yml to run Filebeat in foreground mode for debugging.

Tools and Resources

Official Documentation

The most authoritative resource for Filebeat is the official Elastic documentation. It includes detailed guides on every input type, processor, and output configuration.

Filebeat Modules

Elastic provides pre-built modules for common services like:

  • Apache
  • Nginx
  • MySQL
  • PostgreSQL
  • Windows Event Logs
  • Docker
  • Kubernetes

Enable a module with:

sudo filebeat modules enable apache nginx

Then configure the paths:

sudo nano /etc/filebeat/modules.d/apache.yml

Set the log paths:

- module: apache
  access:
    enabled: true
    var.paths:
      - /var/log/apache2/access.log*
  error:
    enabled: true
    var.paths:
      - /var/log/apache2/error.log*

Modules automatically apply the correct parsers and field mappings, saving hours of configuration time.

Filebeat Docker Image

For containerized environments, use the official Filebeat Docker image:

docker run -d \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/log:/var/log:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  docker.elastic.co/beats/filebeat:8.12.0

This is ideal for Kubernetes deployments using DaemonSets.
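
A heavily trimmed DaemonSet sketch is shown below (names and mounts are illustrative; a production manifest also needs a ConfigMap carrying filebeat.yml, RBAC for Kubernetes metadata enrichment, and a mount for the container log directory):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.12.0
        securityContext:
          runAsUser: 0          # root, so Filebeat can read host log files
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```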

Third-Party Tools

  • Ansible Playbooks: Automate Filebeat deployment across hundreds of servers.
  • Terraform: Provision Filebeat on cloud instances using infrastructure-as-code.
  • Logstash Filters: Use Grok patterns to parse non-standard logs if Filebeat's processors aren't sufficient.
  • Kibana Dashboard Templates: Import pre-built dashboards for Filebeat modules from the Elastic gallery.

Community and Support

Join the Elastic Discuss forum for troubleshooting and best practices. Search for existing threads before posting; most common issues have already been addressed.

GitHub repositories for Filebeat and its modules are publicly available at github.com/elastic/beats.

Real Examples

Example 1: Monitoring Nginx Access Logs

Scenario: You run a web server with Nginx and want to monitor traffic patterns, error rates, and client IPs.

Steps:

  1. Enable the Nginx Filebeat module:
sudo filebeat modules enable nginx

  2. Update the module config to point to your log files:
sudo nano /etc/filebeat/modules.d/nginx.yml

- module: nginx
  access:
    enabled: true
    var.paths:
      - /var/log/nginx/access.log*
  error:
    enabled: true
    var.paths:
      - /var/log/nginx/error.log*

  3. Restart Filebeat:
sudo systemctl restart filebeat

  4. In Kibana, go to Dashboard → Import and select the Nginx dashboards provided by the Filebeat module.

Result: You now have real-time visualizations of HTTP status codes, top clients, response times, and geographic distribution of traffic, all without writing custom parsers.

Example 2: Centralizing Application Logs from Microservices

Scenario: You have 50+ Docker containers running microservices, each generating custom JSON logs.

Steps:

  1. Configure each container to write logs to a shared volume:
docker run -v /opt/logs:/app/logs myapp

  2. On the host, configure Filebeat to monitor the shared directory:
- type: filestream
  enabled: true
  paths:
    - /opt/logs/*.log
  parsers:
    - ndjson:
        target: ""
        add_error_key: true
        message_key: message

Note that the filestream input parses JSON through its ndjson parser; the older json.* options (json.keys_under_root and friends) apply only to the deprecated log input.

  3. Add metadata to identify services:
fields:
  service: "auth-service"
  environment: "staging"

  4. Use a processor to extract timestamps:
processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
  - timestamp:
      field: "timestamp"
      layouts:
        - "2006-01-02T15:04:05Z07:00"

Result: All application logs are indexed with consistent structure, enabling cross-service correlation and alerting.

Example 3: Security Log Collection from Linux Servers

Scenario: You need to monitor SSH login attempts and sudo commands for security auditing.

Steps:

  1. Monitor auth logs:
- type: filestream
  enabled: true
  paths:
    - /var/log/auth.log
    - /var/log/secure
  fields:
    type: "security"

  2. Use a processor to extract SSH events:
processors:
  - dissect:
      tokenizer: "%{timestamp} %{host} sshd[%{pid}]: %{message}"
      field: "message"
      target_prefix: "ssh"

  3. Create an alert in Kibana that fires when five failed SSH attempts occur within one minute.

Result: Automated detection of brute-force attacks with real-time alerts.

FAQs

Is Filebeat better than Logstash for log collection?

Filebeat is optimized for lightweight, reliable log shipping. Logstash is better for complex parsing, filtering, and enrichment. Use Filebeat to collect logs and send them to Logstash if you need advanced processing. For simple use cases, send directly to Elasticsearch.

Does Filebeat support Windows Event Logs?

Not directly. Windows Event Logs are collected by Winlogbeat, a sibling Beat built for that purpose (Filebeat itself reads files). A minimal winlogbeat.yml section looks like:

winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h
  - name: System

How does Filebeat handle log rotation?

Filebeat automatically detects when a log file is rotated and continues reading from the new file, using the registry file that tracks the last read position. Ensure log rotation uses copytruncate or renames the file (rather than deleting it) to avoid data loss.

Can Filebeat send logs to multiple outputs?

No. Filebeat supports only one output at a time. To send logs to multiple destinations, use Logstash as an intermediary or deploy multiple Filebeat instances with different configurations.

What happens if Elasticsearch is down?

Filebeat buffers unacknowledged events in an internal queue (in memory by default; a spool-to-disk queue can be configured) and retries sending when the destination becomes available. Because Filebeat only advances its registry offset after events are acknowledged, data in the source files is not lost during temporary outages, provided those files are not rotated away in the meantime.
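
If longer outages are a concern, the in-memory queue can be sized in filebeat.yml; the values below are illustrative, not recommendations:

```yaml
queue.mem:
  events: 4096            # maximum events buffered in memory
  flush.min_events: 512   # events per batch handed to the output
  flush.timeout: 5s       # flush a partial batch after this delay
```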

How much disk space does Filebeat use?

Filebeat uses minimal disk space, typically less than 100 MB for the registry and queue files, even under heavy load. The actual log files are stored on your system, not by Filebeat.

Can I use Filebeat with cloud providers like AWS or Azure?

Yes. Filebeat runs on EC2, Azure VMs, and GCP instances. Use IAM roles or service accounts to authenticate securely with Elasticsearch hosted on Elastic Cloud or self-managed clusters.

How do I upgrade Filebeat without losing configuration?

Always back up your filebeat.yml before upgrading. Most upgrades preserve configuration files. After upgrading, validate the config and restart the service.

Is Filebeat free to use?

Yes. Filebeat is free to use: the default distribution ships under the free Elastic License, and an OSS distribution licensed under Apache 2.0 is also available. All core features are free. Elastic Cloud offers paid tiers with enhanced support, but Filebeat itself requires no license fee.

Why are my logs not appearing in Kibana?

Check:

  • Filebeat service status
  • Network connectivity to Elasticsearch
  • Correct index pattern in Kibana (filebeat-*)
  • Time range filter in Kibana
  • Log file permissions and paths

Conclusion

Filebeat is a powerful, efficient, and indispensable tool for modern log management. Its lightweight design, rich feature set, and seamless integration with the Elastic Stack make it the preferred choice for organizations seeking reliable, scalable, and maintainable log collection.

By following the steps outlined in this guide, from installation and configuration to advanced processing and real-world use cases, you now have the expertise to deploy Filebeat confidently in any environment, whether it's a single server or a distributed cloud-native architecture.

Remember: the key to success with Filebeat lies in thoughtful configuration, consistent monitoring, and adherence to best practices. Enrich your logs with context, secure your data pipeline, and automate deployments where possible.

As observability becomes central to system reliability and security, mastering Filebeat is no longer optional; it's essential. Start small, validate your setup, and scale gradually. With Filebeat, you're not just collecting logs; you're building the foundation for proactive insights, faster troubleshooting, and data-driven decision-making across your entire infrastructure.