You can view your logs in your cloud provider's blob storage solution if you have the custom monitoring and logging feature enabled. If the cloud provider for your BigAnimal cluster is AWS, the logs are stored in Amazon S3.
The Elastic Stack comprises Elasticsearch, Kibana, Beats, and more. It helps users take data from any type of source, in any format, and search, analyze, and visualize that data. In this article, we'll walk through how to set up Filebeat and configure your Amazon S3 buckets so you can view BigAnimal Postgres logs in Kibana.
Prerequisites:
- The cloud provider for your BigAnimal cluster is AWS;
- The custom monitoring and logging feature is enabled on BigAnimal;
- You have read access to the Amazon S3 bucket;
- Elastic Filebeat, Elasticsearch, and Kibana are installed on a server.
Follow the steps below to implement the solution:
- Get the Object Store information of the Logs;
- Configure `filebeat.yml`;
- Configure AWS S3 input on Filebeat;
- Explore the logs on Kibana.
Step 1: Get the Object Store information of the Logs
You can get the S3 URL from the Monitoring & Logging tab on the cluster's page in the BigAnimal portal.
In this example, `s3://logs-bucket-xgv7pbgfawhhnakq-us-east-1.s3.us-east-1.amazonaws.com` is the S3 URI.
`logs-bucket-xgv7pbgfawhhnakq-us-east-1` is the S3 bucket name; you can view it in the AWS console. You need the following information for the Filebeat configuration:
- S3 bucket ARN: 'arn:aws:s3:::logs-bucket-xgv7pbgfawhhnakq-us-east-1'
- S3 bucket list prefix is fixed as: 'kubernetes-logs/pg_cluster_logs_for_customer/'
Step 2: Configure filebeat.yml
Configure `/etc/filebeat/filebeat.yml` on the server with Filebeat installed. If this is your first time using Filebeat, see the Quick Start.
1. Edit the `processors` section as below:

```yaml
# ================================= Processors =================================
processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```
Note: Each entry in the BigAnimal cluster logs is a JSON object with the `logger` key set to `postgres` and a `record` field containing the Postgres csvlog fields, as in the following example:
"log_time": "2023-02-22 09:08:58.273 UTC",
"session_start_time": "2023-02-22 09:08:58 UTC",
"message": "duration: 2.861 ms statement: SELECT slot_name, database, active, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), restart_lsn) FROM pg_catalog.pg_replication_slots",
"backend_type": "client backend",
With the `processors` settings in `filebeat.yml`, the `record` fields appear in the `message.record.*` format.
Refer to the Filebeat documentation to configure the `processors`.
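If the JSON entries are not expanded into `message.record.*` fields on your setup, Filebeat's `decode_json_fields` processor can do the expansion. A minimal sketch, assuming the raw JSON line arrives in the `message` field (the `fields`/`target` choices here are assumptions to match the layout above, not a captured working config):

```yaml
processors:
  - decode_json_fields:
      fields: ["message"]   # the raw JSON log line shipped by the S3 input
      target: "message"     # expands keys under message.*, e.g. message.record.log_time
      max_depth: 2
      overwrite_keys: false
```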
Step 3: Configure AWS S3 input on Filebeat
1. Enable and configure the AWS module on the server with Filebeat installed:

```shell
sudo filebeat modules enable aws
```
2. Edit the `s3access` section in the `/etc/filebeat/modules.d/aws.yml` file.
Set the fields below:
- `var.bucket_arn`, `var.bucket_list_prefix`: set these to the values you got in Step 1.
- `var.access_key_id`, `var.secret_access_key` (or use the AWS credentials file): set these to the values from your AWS account.
```yaml
- module: aws
  s3access:
    enabled: true

    # AWS S3 bucket arn
    var.bucket_arn: 'arn:aws:s3:::logs-bucket-xgv7pbgfawhhnakq-us-east-1'

    # AWS S3 list prefix
    var.bucket_list_prefix: 'kubernetes-logs/pg_cluster_logs_for_customer/'

    # Number of workers on S3 bucket
    var.number_of_workers: 5

    # Use access_key_id, secret_access_key and/or session_token instead of shared credential file
    var.access_key_id: '<your-access-key-id>'
    var.secret_access_key: '<your-secret-access-key>'
```
More AWS S3 input settings are described in the Filebeat documentation.
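As an alternative to the AWS module, the same bucket can be read with Filebeat's generic `aws-s3` input configured directly in `filebeat.yml`. A sketch reusing the Step 1 values (the credential values are placeholders; this is not the configuration from the steps above, just an equivalent option):

```yaml
filebeat.inputs:
  - type: aws-s3
    bucket_arn: 'arn:aws:s3:::logs-bucket-xgv7pbgfawhhnakq-us-east-1'
    bucket_list_prefix: 'kubernetes-logs/pg_cluster_logs_for_customer/'
    number_of_workers: 5
    access_key_id: '<your-access-key-id>'
    secret_access_key: '<your-secret-access-key>'
```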
3. Set up and restart Filebeat:

```shell
sudo filebeat setup
sudo service filebeat restart
```
Step 4: Explore the logs on Kibana
1. Open Kibana in your web browser and navigate to Discover.
2. Choose `filebeat-*` from the list and explore the logs.
You can add fields from the left panel.
To search the logs, Kibana uses the Kibana Query Language (KQL); see the KQL documentation for instructions.
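For example, assuming the `message.record.*` layout from Step 2, KQL queries like these narrow the results (illustrative queries, not from the original article):

```
message.logger : "postgres"
message.record.backend_type : "client backend"
message.record.message : *duration*
```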