How to install Metricbeat component on AKS


Metricbeat is a lightweight shipper that collects and forwards metrics from your systems and services to Elasticsearch or Logstash. It provides valuable insights into the health and performance of your infrastructure, making it a key tool for monitoring and observability. By the end of this tutorial, you will have Metricbeat up and running, allowing you to monitor your clusters effectively.

Create a Storage Account and Container to export the files with the collected K8s metrics

Create a Storage Account in Azure

Example using Azure CLI: 

az storage account create -n <NEW_STORAGE_ACCOUNT_NAME> -g <RESOURCE_GROUP_NAME> -l <REGION> --sku Standard_LRS --kind StorageV2 
(Screenshot: AZ New Storage)

NOTE: The “Allow storage account key access” setting must be enabled for the integration to work.
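
If key access was disabled on the account, a minimal sketch of enabling it via Azure CLI (the --allow-shared-key-access flag corresponds to the "Allow storage account key access" setting):

az storage account update -n <NEW_STORAGE_ACCOUNT_NAME> -g <RESOURCE_GROUP_NAME> --allow-shared-key-access true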

Create a container in the Storage Account for file integration

Example using Azure CLI: 

az storage container create --name <STORAGE_ACCOUNT_CONTAINER_NAME> --account-name <NEW_STORAGE_ACCOUNT_NAME> 
(Screenshot: AZ New Container)

Obtain the Storage Account credentials

Reference: https://learn.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage?tabs=azure-cli#regenerate-access-keys 

Obtain the Storage Account Access Key to configure the integration with metricbeat 

Example using Azure CLI: 

az storage account keys list -g <RESOURCE_GROUP_NAME> -n <NEW_STORAGE_ACCOUNT_NAME> --query [1].value 

This returns one of the account's active access keys for the integration ([1] selects the second key, key2; use [0] for key1). 
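
To see both keys and which index the --query selects, you can also list them as a table (same resource group and account as above):

az storage account keys list -g <RESOURCE_GROUP_NAME> -n <NEW_STORAGE_ACCOUNT_NAME> --output table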

Create the Secret in the AKS cluster with the Storage Account integration settings

Example using kubectl: 

kubectl create secret generic <SECRET_NAME> --from-literal=azurestorageaccountname=<NEW_STORAGE_ACCOUNT_NAME> --from-literal=azurestorageaccountkey=<ACCESS_KEY> --type=Opaque 
  • <SECRET_NAME>: Name of the new secret that will be created in the AKS cluster
  • <NEW_STORAGE_ACCOUNT_NAME>: Configured in step 1.1 
  • <ACCESS_KEY>: Obtained in step 1.3
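
To confirm the Secret was created with both entries, a quick check (this lists the data keys and sizes without printing the values; add -n <NAMESPACE> if the Secret was not created in the default namespace):

kubectl describe secret <SECRET_NAME>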

Enable the integration add-on in the AKS cluster

Enable the Azure Blob Storage Container Storage Interface (CSI) driver add-on in the cluster

Example using Azure CLI: 

az aks update --enable-blob-driver -n <CLUSTER_NAME> -g <RESOURCE_GROUP_NAME> -y 
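
Optionally, verify that the driver was enabled (the csi-blob-node pods are how the managed driver usually shows up in kube-system; naming may vary by AKS version):

az aks show -g <RESOURCE_GROUP_NAME> -n <CLUSTER_NAME> --query "storageProfile.blobCsiDriver.enabled"
kubectl get pods -n kube-system | grep csi-blob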

Configure the metricbeat deployment to export files to the Storage Account Container

Deployment of kube-state-metrics

Get the kube-state-metrics template and deploy it: 

https://kube-state-metrics-template.s3.amazonaws.com/kube-state-metrics-template.yml
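
A minimal way to download and apply the template, assuming kubectl is already pointed at the AKS cluster:

curl -sLO https://kube-state-metrics-template.s3.amazonaws.com/kube-state-metrics-template.yml
kubectl apply -f kube-state-metrics-template.yml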

Deployment of metricbeat

Get the metricbeat template: 

https://metricbeat-deployment-template-aks-blob-csi.s3.amazonaws.com/metricbeat-deployment-template-aks-blob-csi.yml

Manually adjust the following parameters in the template (one way to script this is sketched after the list below). Adjust the parameter:

--subdirectory=aks/<RESOURCE_GROUP_NAME>/<YOUR_REGION>/<CLUSTER_NAME>/ 
  • <RESOURCE_GROUP_NAME>: name of the resource group that the AKS cluster is in
  • <YOUR_REGION>: region that the AKS cluster is in
  • <CLUSTER_NAME>: name of the cluster from which metricbeat will collect and send metrics in the integration
  • <STORAGE_ACCOUNT_NAME>: name of the Storage Account configured in step 1.1
  • <CONTAINER_NAME_IN_STORAGE_ACCOUNT>: container name configured in step 1.2
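
A sketch of one way to download the template and substitute the placeholders with sed (the values shown are hypothetical examples; confirm the exact placeholder tokens inside the downloaded file before running this, and note that -i below assumes GNU sed):

curl -sLO https://metricbeat-deployment-template-aks-blob-csi.s3.amazonaws.com/metricbeat-deployment-template-aks-blob-csi.yml
sed -i \
  -e 's|<RESOURCE_GROUP_NAME>|my-resource-group|g' \
  -e 's|<YOUR_REGION>|eastus|g' \
  -e 's|<CLUSTER_NAME>|my-aks-cluster|g' \
  -e 's|<STORAGE_ACCOUNT_NAME>|mystorageaccount|g' \
  -e 's|<CONTAINER_NAME_IN_STORAGE_ACCOUNT>|metricbeat-container|g' \
  metricbeat-deployment-template-aks-blob-csi.yml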

This template already creates the objects in the cluster that metricbeat needs to work: 

  • ServiceAccount – will be used when running the metricbeat service; 
  • ClusterRole – k8s API settings and objects – read only; 
  • Roles and ClusterRoleBinding – complementary configurations for reading the k8s APIs in metricbeat; 
  • ConfigMaps – parameters and configurations for metricbeat integration with kubernetes; 
  • DaemonSet – metricbeat service that collects metrics and exports files to the Storage Account Container. 

Deploy metricbeat and check the export

After applying the configurations, continue with the deployment of metricbeat in the cluster (a minimal example follows below). After deployment, it is important to check whether the component is collecting metrics and exporting them to the integration Storage Account Container. 
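
Example of applying the adjusted template, assuming it is in the current directory (the filename matches the template downloaded in the previous section):

kubectl apply -f metricbeat-deployment-template-aks-blob-csi.yml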

Check if the metricbeat pods are running

Example: 

kubectl get pods -n kube-system -o wide 
(Screenshot: AZ Processes Metricbeat)

NOTE: metricbeat starts one pod per node (via the DaemonSet) to collect metrics

Check the pod logs to see if metrics collection events are being generated

Example: 

(Screenshot: AZ Metricbeat logs)
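
One way to do this from the command line (the pod names come from the DaemonSet in the template and are assumed here to start with "metricbeat"):

kubectl get pods -n kube-system | grep metricbeat
kubectl logs -n kube-system <METRICBEAT_POD_NAME> --tail=100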

After the pods have been running for a few minutes, check whether the files are being exported to the integration Storage Account Container

Example: 

(Screenshot: AZ Metricbeat reports)
  • <NEW_CONTAINER_NAME>: name of the container used in the subdirectory, as shown in 3.2
  • <RESOURCE_GROUP_NAME>: name of the resource group used in the subdirectory, as shown in 3.2
  • <YOUR_REGION>: region used in the subdirectory, as shown in 3.2
  • <CLUSTER_NAME>: name of the cluster configured in step 3.2
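
For example, the exported files can be listed with the Azure CLI using the Storage Account credentials from step 1 (a sketch; adjust the prefix to the subdirectory configured in 3.2):

az storage blob list --account-name <NEW_STORAGE_ACCOUNT_NAME> --account-key <ACCESS_KEY> --container-name <STORAGE_ACCOUNT_CONTAINER_NAME> --prefix aks/<RESOURCE_GROUP_NAME>/<YOUR_REGION>/<CLUSTER_NAME>/ --output table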

File export

Due to metricbeat limitations, only 1024 log files are preserved. For the system to function correctly, at least files from the last 7 days must be preserved – we recommend, however, that they be kept for at least 35 days. 

As the available configuration is by size and not by time, we recommend the following: 

  • Leave the default setting (10 MB per file) in place for 1 day;
  • After exactly 24 hours, check the number of files generated: 
  • If more than 145 files were generated, please inform us, as the Container will not retain a full week of files; 
  • If 29 or more were generated, your configuration is adequate;
  • If fewer than 29 were generated, apply the following formula: 
FILE_SIZE = 10240 / 29 * QUANTITY 

For example, if 5 files were generated: 

FILE_SIZE = 10240 / 29 * 5 = 1765 

Therefore, in the metricbeat-deployment-template-aks-blob-csi.yml file, set data -> metricbeat.yml -> output.file -> rotate_every_kb to 1765 instead of 10240.
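
For reference, a sketch of what the output.file section inside metricbeat.yml might look like after this change (the path and filename below are assumptions for illustration; keep the values already defined in the template and change only rotate_every_kb):

output.file:
  path: "/usr/share/metricbeat/data/export"  # illustrative path; keep the template's value
  filename: metricbeat                       # illustrative filename; keep the template's value
  rotate_every_kb: 1765                      # adjusted from the default 10240 using the formula above
  number_of_files: 1024                      # metricbeat keeps at most 1024 files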


You may want to check these Docs too:

  • S3 Lambda Notification Processor (deploy via CLI)
  • Exporting data to Azure Storage Account
  • FinOps: Reports, Alerts and Budgets
  • FinOps: Tagged / Untagged
  • FinOps: Tag Sanitization, Compliance and MultiCloud