How to install Metricbeat component on GKE clusters


Metricbeat is a lightweight agent that collects and sends metrics from your systems and services to Elasticsearch or Logstash. It provides valuable insights into the health and performance of your infrastructure, making it an essential tool for monitoring and observability. By the end of this tutorial, you will have Metricbeat set up and running, allowing you to effectively monitor your GKE clusters.

Creating a Cloud Storage bucket for exporting files with K8s metrics collection

Create a bucket with the desired settings in Cloud Storage

Configure the bucket with “uniform-bucket-level-access” (IAM-based bucket access). Reference:

https://cloud.google.com/storage/docs/uniform-bucket-level-access#should-you-use

Example using gcloud:

gcloud storage buckets create gs://<NEW_BUCKET_NAME> --project=<YOUR_PROJECT_ID> --default-storage-class=STANDARD --location=<YOUR_REGION> --uniform-bucket-level-access

This bucket (<NEW_BUCKET_NAME>) will store the exported files with the GKE cluster metrics collected by the Metricbeat component (configured in steps 3 and 4).

Configure bucket permissions

Get the cluster service account for the bucket IAM access settings. Example using gcloud:

gcloud container clusters describe <CLUSTER_NAME> --region <CLUSTER_ZONE_REGION> --format 'json(serviceaccount, nodeConfig.serviceAccount)'
  • <CLUSTER_ZONE_REGION> : cluster region or zone, depending on the Location type – example: us-east1 or us-east1-c

Example output:

{
  "nodeConfig": {
    "serviceAccount": "<SERVICE_ACCOUNT_NAME>@<PROJECT_ID>.iam.gserviceaccount.com"
  }
}

Add the “storage.objectAdmin” role to the service account. Reference:

https://cloud.google.com/storage/docs/access-control/iam-roles#:~:text=Storage%20Object%20Admin

Example using gcloud:

gcloud storage buckets add-iam-policy-binding gs://<NEW_BUCKET_NAME> --role "roles/storage.objectAdmin" --member "serviceAccount:<SERVICE_ACCOUNT_NAME>@<YOUR_PROJECT_ID>.iam.gserviceaccount.com"
  • <NEW_BUCKET_NAME> : bucket created in step 1.1;
  • <SERVICE_ACCOUNT_NAME> : service account of the cluster, obtained in step 1.2.

GKE Metadata

Enable the GKE_METADATA workload pool on the cluster

Example using gcloud:

gcloud container clusters update <CLUSTER_NAME> --location=<CLUSTER_ZONE_REGION> --workload-pool=<YOUR_PROJECT_ID>.svc.id.goog
  • <CLUSTER_ZONE_REGION> : cluster region or zone, depending on the Location type – example: us-east1 or us-east1-c

Enable Integration Add-ons on the GKE cluster

Enable the GcsFuseCsiDriver add-on on the cluster

Example using gcloud:

gcloud container clusters update <CLUSTER_NAME> --update-addons GcsFuseCsiDriver=ENABLED --region <CLUSTER_ZONE_REGION>
  • <CLUSTER_ZONE_REGION> : cluster region or zone, depending on the Location type – example: us-east1 or us-east1-c

Configure the Metricbeat deployment with file export to the bucket in Cloud Storage

kube-state-metrics deployment

Get the kube-state-metrics template and deploy it:

https://kube-state-metrics-template.s3.amazonaws.com/kube-state-metrics-template.yml
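A minimal sketch of the download-and-apply step, assuming the filename from the URL above:

```shell
# Download the kube-state-metrics template and apply it to the cluster
curl -LO https://kube-state-metrics-template.s3.amazonaws.com/kube-state-metrics-template.yml
kubectl apply -f kube-state-metrics-template.yml
```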

Deployment of Metricbeat

Get the Metricbeat template:

https://metricbeat-deployment-template-gke-gfuse-bucket.s3.amazonaws.com/metricbeat-deployment-template-gke-bucket.yml

Manually adjust the following parameters in the template:

  • <BUCKET_NAME> : name of the bucket for integrating and exporting files, created in step 1.1.

Adjust the only-dir=gke/<YOUR_REGION>/<CLUSTER_NAME> parameter:

  • <YOUR_REGION> : region where the cluster is located;
  • <CLUSTER_NAME> : name of the cluster whose metrics Metricbeat will collect and export.

This template already creates the objects in the cluster that Metricbeat needs to work:

  • ServiceAccount – used when executing the Metricbeat service;
  • ClusterRole – read-only access to the k8s API and object configurations;
  • Roles and ClusterRoleBinding – additional configurations for reading k8s APIs in Metricbeat;
  • ConfigMaps – parameters and configurations for integrating Metricbeat with Kubernetes;
  • DaemonSet – Metricbeat service that collects metrics and exports files to the bucket.
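After the placeholders are adjusted, the template can be applied; a sketch assuming the filename from the URL above (the ServiceAccount lands in the kube-system namespace, per the binding in step 4.3):

```shell
# Apply the adjusted Metricbeat template; this creates the ServiceAccount,
# RBAC objects, ConfigMaps and DaemonSet listed above
kubectl apply -f metricbeat-deployment-template-gke-bucket.yml
```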

Metricbeat Service Account

By default, the name of the Service Account used in the deployment in step 4.2 is metricbeat.

Link the metricbeat Service Account with the IAM service account for access to the integration bucket. Example using gcloud:

gcloud iam service-accounts add-iam-policy-binding <SERVICE_ACCOUNT_NAME>@<YOUR_PROJECT_ID>.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:<YOUR_PROJECT_ID>.svc.id.goog[kube-system/metricbeat]"
  • <SERVICE_ACCOUNT_NAME> : service account of the cluster, obtained in step 1.2.

Metricbeat Service Account Integration Bucket Permissions

Configure the same IAM role/permissions for the integration bucket on the Metricbeat Service Account linked in step 4.3. Example using gcloud:

gcloud storage buckets add-iam-policy-binding gs://<BUCKET_NAME> --role "roles/storage.objectAdmin" --member "serviceAccount:<YOUR_PROJECT_ID>.svc.id.goog[kube-system/metricbeat]"
  • <BUCKET_NAME> : name of the bucket for integrating and exporting files, created in step 1.1.

Check the Metricbeat deployment and the export

Verify that the Metricbeat pods are running

Example:

kubectl get pods -n kube-system -o wide

NOTE: Metricbeat spins up one pod per node to collect metrics.

Check the pod logs to see whether metrics collection events are being generated
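A sketch of inspecting the logs with kubectl; the <METRICBEAT_POD_NAME> placeholder is hypothetical – pick any metricbeat pod from the listing:

```shell
# List the Metricbeat pods, then tail the logs of one of them;
# periodic event-publishing lines indicate metrics are being collected
kubectl get pods -n kube-system | grep metricbeat
kubectl logs -n kube-system <METRICBEAT_POD_NAME> --tail=50
```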

After the pods have been running for a few minutes, check that the files are being exported to the integration bucket

Placeholders in the exported object paths:

  • <BUCKET_NAME> : bucket name;
  • <YOUR_REGION> : region inserted in the prefix, as per step 4.2;
  • <CLUSTER_NAME> : name of the cluster configured in the prefix in step 4.2;
  • <CLUSTER_NAME_PREFIX> : cluster name, according to the prefix settings in step 4.2;
  • <NODE_NAME> : name of the node on which the pod that exported the metric is running.
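One way to list the exported objects with gcloud; the path shown is an assumption based on the only-dir prefix configured in step 4.2:

```shell
# List the exported metric files under the configured prefix
gcloud storage ls gs://<BUCKET_NAME>/gke/<YOUR_REGION>/<CLUSTER_NAME>/
```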

File export

Due to Metricbeat limitations, only 1024 log files are preserved. For the system to function correctly, at least the files from the last 7 days need to be preserved; however, we recommend keeping at least 35 days.

Since the available configuration is by size, not by time, we recommend the following:

  • Leave the default setting (10 MB per file) in place for 1 day;
  • After 24 hours, check the number of files generated:
  • If more than 145 files were generated, please let us know, as the bucket will not retain files for a full week;
  • If 29 or more were generated, your configuration is correct;
  • If fewer than 29 were generated, apply the following formula:

FILESIZE = 10240 / 29 * QUANTITY

For example, if 5 files were generated:

FILESIZE = 10240 / 29 * 5 = 1765

Then, inside the metricbeat-deployment-template-gke-bucket.yml file, set data -> metricbeat.yml -> output.file -> rotate_every_kb to 1765 instead of 10240.
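The calculation above can be sketched in shell arithmetic; quantity is the observed file count, and integer division reproduces the worked example:

```shell
# Compute the new rotate_every_kb value from the number of
# files observed after 24 hours (integer division, as in bash)
quantity=5
filesize=$(( 10240 / 29 * quantity ))
echo "$filesize"   # 1765
```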

