You can view your logs in your cloud provider's blob storage solution if you have the custom monitoring and logging feature enabled. If the cloud provider for your BigAnimal cluster is AWS, the logs are stored in an Amazon S3 bucket.
The Elastic Stack consists of Elasticsearch, Kibana, Beats, and more. It lets you take data from any source, in any format, and then search, analyze, and visualize that data. In this article, we'll walk you through how to set up Filebeat, point it at your Amazon S3 bucket, and view BigAnimal Postgres logs in Kibana.
Prerequisites
- The cloud provider for your BigAnimal cluster is AWS.
- The custom monitoring and logging feature is enabled on BigAnimal.
- You have read access to the Amazon S3 bucket.
- Elastic Filebeat, Elasticsearch, and Kibana are installed on a server (a minimal installation sketch follows this list).
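If you still need to install the stack on a Debian or Ubuntu server, a minimal sketch using Elastic's 8.x APT repository looks like the following; adjust the repository and packages for your distribution and Elastic version.

```shell
# Import Elastic's signing key and add the 8.x APT repository
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | \
  sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | \
  sudo tee /etc/apt/sources.list.d/elastic-8.x.list

# Install Filebeat, Elasticsearch, and Kibana
sudo apt-get update && sudo apt-get install filebeat elasticsearch kibana
```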
Steps
Follow these steps to implement the solution:
- Get the object store information for the logs.
- Configure `filebeat.yml`.
- Configure the AWS S3 input in Filebeat.
- Explore the logs in Kibana.
Step 1: Get the object store information for the logs
You can get the S3 URL from the Monitoring & Logging tab on the cluster's page in the BigAnimal portal.
In this example, `s3://logs-bucket-xgv7pbgfawhhnakq-us-east-1.s3.us-east-1.amazonaws.com` is the S3 URL, and `logs-bucket-xgv7pbgfawhhnakq-us-east-1` is the S3 bucket name, which you can also view in the AWS console. You need the following information for the Filebeat configuration:
- S3 bucket ARN: `arn:aws:s3:::logs-bucket-xgv7pbgfawhhnakq-us-east-1`
- S3 bucket list prefix, which is fixed as `kubernetes-logs/pg_cluster_logs_for_customer/`
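Before touching Filebeat, you can sanity-check your read access with the AWS CLI, assuming it is installed and configured with credentials that can read the bucket:

```shell
# List the BigAnimal log objects under the fixed prefix
# (substitute the bucket name from your own cluster)
aws s3 ls s3://logs-bucket-xgv7pbgfawhhnakq-us-east-1/kubernetes-logs/pg_cluster_logs_for_customer/ \
  --recursive --region us-east-1
```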
Step 2: Configure `filebeat.yml`
Configure `/etc/filebeat/filebeat.yml` on the server where Filebeat is installed. If this is your first time using Filebeat, see the Filebeat quick start guide.
1. Edit the `processors` section as shown below:
```yaml
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - decode_json_fields:
      fields: ["message"]
      process_array: true
      max_depth: 1
      target: "messageo"
      overwrite_keys: false
      add_error_key: true
```
Note: Each entry in the BigAnimal cluster logs is a JSON object with the `logger` key set to `postgres` and a `record` field that contains the Postgres csvlog fields, as in the following example:
```json
{
  "level": "info",
  "ts": "2023-02-22T09:08:58Z",
  "logger": "postgres",
  "msg": "record",
  "logging_pod": "p-hj69fbkdra-1",
  "record": {
    "log_time": "2023-02-22 09:08:58.273 UTC",
    "user_name": "postgres",
    "database_name": "edb_admin",
    "process_id": "32640",
    "connection_from": "[local]",
    "session_id": "63f5dbaa.7f80",
    "session_line_num": "4",
    "command_tag": "SELECT",
    "session_start_time": "2023-02-22 09:08:58 UTC",
    "virtual_transaction_id": "5/10878",
    "transaction_id": "0",
    "error_severity": "LOG",
    "sql_state_code": "00000",
    "message": "duration: 2.861 ms statement: SELECT slot_name, database, active, pg_catalog.pg_wal_lsn_diff(pg_catalog.pg_current_wal_lsn(), restart_lsn) FROM pg_catalog.pg_replication_slots",
    "application_name": "cnp_metrics_exporter",
    "backend_type": "client backend",
    "query_id": "206663354275086577"
  }
}
```
With these `processors` settings in `filebeat.yml`, the `record` fields are decoded into fields of the form `messageo.record.*`. For details on configuring `processors`, refer to the Filebeat documentation.
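For illustration, after the `decode_json_fields` processor runs, an event carries decoded fields along these lines (an abridged sketch, not verbatim Filebeat output; standard event metadata is omitted):

```json
{
  "messageo": {
    "level": "info",
    "logger": "postgres",
    "logging_pod": "p-hj69fbkdra-1",
    "record": {
      "error_severity": "LOG",
      "database_name": "edb_admin",
      "message": "duration: 2.861 ms statement: SELECT ..."
    }
  }
}
```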
Step 3: Configure the AWS S3 input in Filebeat
1. Enable and configure the AWS module on the server where Filebeat is installed:
```shell
sudo filebeat modules enable aws
```
2. Edit the `s3access` section of the `/etc/filebeat/modules.d/aws.yml` file.
Set the following fields:
- `var.bucket_arn` and `var.bucket_list_prefix`: use the values you obtained in Step 1.
- `var.access_key_id` and `var.secret_access_key`: use the credentials from your AWS account, or use the shared AWS credentials file instead (see the sketch after the example below).
For example:
```yaml
s3access:
  enabled: true
  # AWS S3 bucket arn
  var.bucket_arn: 'arn:aws:s3:::logs-bucket-xgv7pbgfawhhnakq-us-east-1'
  # AWS S3 list prefix
  var.bucket_list_prefix: 'kubernetes-logs/pg_cluster_logs_for_customer/'
  # Number of workers on S3 bucket
  var.number_of_workers: 5
  # Use access_key_id, secret_access_key and/or session_token instead of shared credential file
  var.access_key_id: xxxxxxx
  var.secret_access_key: xxxxxxx
```
More AWS S3 input settings are described in the Filebeat documentation.
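If you would rather keep the keys out of `aws.yml`, the module can read a profile from the standard AWS shared credentials file; the sketch below assumes a hypothetical profile named `biganimal-logs` and uses the module's `var.credential_profile_name` setting:

```yaml
# ~/.aws/credentials (standard AWS shared credentials file):
#   [biganimal-logs]
#   aws_access_key_id = xxxxxxx
#   aws_secret_access_key = xxxxxxx

s3access:
  enabled: true
  var.bucket_arn: 'arn:aws:s3:::logs-bucket-xgv7pbgfawhhnakq-us-east-1'
  var.bucket_list_prefix: 'kubernetes-logs/pg_cluster_logs_for_customer/'
  # Name of the profile in the shared credentials file (hypothetical)
  var.credential_profile_name: biganimal-logs
```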
3. Set up and restart Filebeat:
```shell
sudo filebeat setup
sudo service filebeat restart
```
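Before moving on, you can confirm that the configuration parses and that Filebeat can reach Elasticsearch with Filebeat's built-in test subcommands:

```shell
# Validate the filebeat.yml configuration
sudo filebeat test config
# Verify connectivity to the configured output (Elasticsearch)
sudo filebeat test output
```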
Step 4: Explore the logs in Kibana
1. Open Kibana in your web browser and navigate to Discover.
2. Choose `filebeat-*` from the list and explore the logs.
You can add fields from the left panel.
To search the logs, Kibana uses the Kibana Query Language (KQL); see the KQL documentation for instructions.
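For example, assuming the `messageo` target configured in Step 2, KQL queries like these filter the Postgres records (adjust the field names if you chose a different target):

```
messageo.record.error_severity : "ERROR"
messageo.record.database_name : "edb_admin" and messageo.record.command_tag : "SELECT"
```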