
Logging operator

01Cloud uses the Banzai Cloud Logging Operator for logging. The Logging Operator is a Kubernetes operator designed to simplify and streamline the deployment and management of logging solutions in Kubernetes clusters. It provides an easy way to set up centralized logging, making it easier for cluster administrators to monitor and troubleshoot applications running on Kubernetes.

01Cloud integrates an external logger to forward the logs of a specific application to the preferred output provider. 01Cloud automatically configures the logging solution for the selected provider, reducing the complexity of manual setup. By providing the required credentials, logs can be automatically stored and managed in the various output providers.

External Logger Architecture

  • 01Cloud collects logs from the application and passes them to the preferred destination. The user creates an environment and enables logging for it.
  • Once logging is enabled, the logs of the specific application are passed to 01Cloud logging.
  • 01Cloud logging configures the logs and forwards them to the destination provided by the user, where the application's logs are stored and managed.

Components

The external logger is the log collector and forwarder of 01Cloud. It consists of the logging operator, which deploys and configures a log collector (currently a Fluent Bit DaemonSet) on every node to collect container and application logs from the node file system.

The log forwarder instance receives, filters, and transforms the incoming logs and transfers them to one or more destination outputs. The external logger consists of three parts that control the logging cycle.

Logging

The Logging resource defines the logging infrastructure (the log collectors and forwarders) for your cluster that collects and transports your log messages. It also contains the configuration for Fluent Bit and Fluentd.

Example

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Logging
  metadata:
    name: logging
  spec:
    fluentd: {}                 # deploy the Fluentd forwarder with default settings
    fluentbit: {}               # deploy the Fluent Bit collector DaemonSet with default settings
    controlNamespace: logging   # namespace where Fluentd/Fluent Bit run and cluster-scoped resources are evaluated
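
The fluentd and fluentbit sections accept further configuration, which is how the collector and forwarder described above are tuned. The sketch below is illustrative only; the replica count and toleration are assumptions, not 01Cloud defaults.

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Logging
  metadata:
    name: logging
  spec:
    controlNamespace: logging
    fluentd:
      scaling:
        replicas: 2             # run two Fluentd forwarder replicas (assumed value)
    fluentbit:
      tolerations:              # allow the collector DaemonSet onto tainted nodes
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule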

Flow/ClusterFlow

A Flow defines the filters (e.g. parsers, tags, label selectors) and output references. Flow is a namespaced resource, whereas ClusterFlow applies to the entire cluster. A ClusterFlow defines a Fluentd logging flow that collects logs from all namespaces by default. The operator evaluates ClusterFlows in the controlNamespace only, unless allowClusterResourcesFromAllNamespaces is set to true.

Example

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: 01cloud-flow
spec:
  globalOutputRefs:
    - 01cloud-output            # ClusterOutput that receives the matched logs
  match:
    - select:
        namespaces:
          - 01cloud             # collect logs only from the 01cloud namespace
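
Because Flow is namespaced, the per-namespace equivalent of the ClusterFlow above references namespaced Outputs through localOutputRefs. The following is an illustrative sketch; the resource names and the app label are assumptions.

apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: 01cloud-app-flow
  namespace: 01cloud            # a Flow only matches logs from its own namespace
spec:
  localOutputRefs:
    - 01cloud-app-output        # Output in the same namespace
  match:
    - select:
        labels:
          app: my-app           # hypothetical pod label selector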

Output/ClusterOutput

An Output controls where the logs are sent (e.g. Amazon CloudWatch, S3, Kinesis, Datadog, Elasticsearch, Grafana Loki, Kafka, New Relic, Splunk, Sumo Logic, Syslog). Output is a namespaced resource, whereas a ClusterOutput defines a Fluentd output that is available from all Flows and ClusterFlows. The operator evaluates ClusterOutputs in the controlNamespace only, unless allowClusterResourcesFromAllNamespaces is set to true.

Example

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: 01cloud-output
spec:
  loki:
    buffer:
      timekey: 1m               # group log chunks into 1-minute windows
      timekey_use_utc: true
      timekey_wait: 30s         # wait 30s for late events before flushing
    configure_kubernetes_labels: true   # map Kubernetes metadata to Loki labels
    url: http://loki:3100       # Loki push endpoint (in-cluster service)
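
Credentials for the selected provider are typically referenced from Kubernetes Secrets rather than written inline. The namespaced Output below is an illustrative sketch for an S3 destination; the bucket, region, Secret name, and keys are assumptions.

apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: 01cloud-app-output
  namespace: 01cloud
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-credentials  # hypothetical Secret holding the access keys
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-credentials
          key: awsSecretAccessKey
    s3_bucket: 01cloud-logs     # hypothetical bucket name
    s3_region: us-east-1
    path: logs/${tag}/%Y/%m/%d/
    buffer:
      timekey: 10m
      timekey_wait: 30s
      timekey_use_utc: true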

The external logging feature is especially useful for organizing and managing logging infrastructure across multiple clusters and teams with varying logging requirements. It simplifies log setup by handling the configuration of the collection and forwarding pipeline described above.