We need to cover some concepts which are good to know before we move forward to the demonstration.

The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller. The resource determines the behaviour of the controller, and the controller periodically adjusts the number of replicas in a replication controller or deployment.

The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (metrics.k8s.io, custom.metrics.k8s.io, and external.metrics.k8s.io). The metrics.k8s.io API is usually provided by metrics-server, which needs to be launched separately. The HorizontalPodAutoscaler can also fetch metrics directly from Heapster. In this guide, we will deploy our HPA reading from external.metrics.k8s.io, as our Kafka metrics will be exposed through that API.

Kubernetes allows us to deploy our own metrics solutions. By default, metrics-server and Heapster act as the core metrics backends, and Kubernetes has extended this support to allow custom APIs to expose other metrics providers. A few adapters have been written by third parties to implement custom APIs which expose these metrics to Kubernetes resources such as the HPA. Current implementations: /Kubernetes/IMPLEMENTATIONS.md

How do the custom API server and the HPA tie together?

The custom API server that we deploy registers an API with Kubernetes and allows the HPA controller to query custom metrics from it. The API server we are going to deploy here is the Stackdriver adapter, which collects metrics from Stackdriver and serves them to the HPA controller via REST queries. Our custom API server will register two APIs with Kubernetes: custom.metrics.k8s.io and external.metrics.k8s.io. We will also deploy an application to write metrics (in this case Kafka metrics) to Google Stackdriver. The kind of metrics we are going to write to Stackdriver will be exposed under external.metrics.k8s.io instead of custom.metrics.k8s.io.

Prerequisites

Ensure the following dependencies are already fulfilled:
1. You have a Kubernetes cluster (GKE) running on GCP.
2. You have the kubectl CLI installed and configured for your GKE cluster.

Enable cluster monitoring for Stackdriver

Monitoring scope should be enabled on the cluster nodes. It is enabled by default, so you need not do anything. If you have an older cluster version, upgrade it to the latest version and then update your node version as well. The scope enables write permission to Stackdriver, which is important for writing metrics.

```
# Use one of your Google user accounts to create a cluster role binding
$ kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user "$(gcloud config get-value account)"
```

Next, we will deploy the new resource model based APIs.
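The adapter manifest lives in the GoogleCloudPlatform/k8s-stackdriver repository. Assuming the commonly documented path for the new resource model manifest (verify it against the repository before running), the deployment looks like this:

```
# Deploy the custom metrics Stackdriver adapter (new resource model).
# The manifest URL below is the commonly documented path and may change;
# check the k8s-stackdriver repository for the current location.
$ kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter_new_resource_model.yaml
```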
If everything goes well, you should see output along these lines:

```
serviceaccount "custom-metrics-stackdriver-adapter" created
clusterrolebinding.rbac.authorization.k8s.io "custom-metrics:system:auth-delegator" created
rolebinding.rbac.authorization.k8s.io "custom-metrics-auth-reader" created
clusterrole.rbac.authorization.k8s.io "custom-metrics-resource-reader" created
deployment.extensions "custom-metrics-stackdriver-adapter" created
service "custom-metrics-stackdriver-adapter" created
apiservice.apiregistration.k8s.io "v1beta1.custom.metrics.k8s.io" created
apiservice.apiregistration.k8s.io "v1beta1.external.metrics.k8s.io" created
clusterrole.rbac.authorization.k8s.io "external-metrics-reader" created
clusterrolebinding.rbac.authorization.k8s.io "external-metrics-reader" created
```

Now we have deployed our custom metrics server and registered its APIs with the Aggregator Layer.

Next, we deploy an application that scrapes kafka-exporter and ships the metrics to Stackdriver. The manifest, prometheus-to-sd-custom-metrics-kafka-exporter.yaml, is an extensions/v1beta1 Deployment running the gcr.io/google-containers/prometheus-to-sd:v0.3.2 image, with a kafka-exporter source whitelisting four metrics: kafka_brokers, kafka_topic_partitions, kafka_consumergroup_current_offset_sum, and kafka_consumergroup_lag_sum. (A fuller sketch of this manifest appears at the end of this post.) Deploy this to ship your metrics to Stackdriver.

Once the metrics are flowing, verify that they are exposed through the external metrics API (an example of the response shape also appears at the end of this post):

```
$ kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|kafka-exporter|kafka_brokers" | jq
```

Related: Kafka Lag Exporter is a tool that makes it easy to view consumer group metrics using Kubernetes, Prometheus, and Grafana. Lightbend has spent a lot of time working with Apache Kafka on Kubernetes. Kafka Lag Exporter can run anywhere, but it provides features to run easily on Kubernetes clusters against Strimzi Kafka clusters using the Prometheus and Grafana monitoring stack.
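For reference, here is a fuller sketch of prometheus-to-sd-custom-metrics-kafka-exporter.yaml. Only the apiVersion, the prometheus-to-sd image, the kafka-exporter source name, and the four whitelisted metrics come from the fragments above; the deployment name, the kafka-exporter address (9308 is its default port), and the pod/namespace wiring are illustrative assumptions:

```yaml
# prometheus-to-sd-custom-metrics-kafka-exporter.yaml (illustrative sketch)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus-to-sd-kafka-exporter
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus-to-sd-kafka-exporter
    spec:
      containers:
        - name: prometheus-to-sd
          image: gcr.io/google-containers/prometheus-to-sd:v0.3.2
          command:
            - /monitor
            # Scrape kafka-exporter and forward only the whitelisted metrics.
            # The service address and port are assumptions; adjust to your setup.
            - --source=kafka-exporter:http://kafka-exporter.default.svc:9308?whitelisted=kafka_brokers,kafka_topic_partitions,kafka_consumergroup_current_offset_sum,kafka_consumergroup_lag_sum
            # prometheus-to-sd writes the metrics under this Stackdriver prefix.
            - --stackdriver-prefix=custom.googleapis.com
            - --pod-id=$(POD_NAME)
            - --namespace-id=$(POD_NAMESPACE)
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
```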
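The verification query returns an ExternalMetricValueList. The exact labels, timestamps, and values below are placeholders and depend entirely on your cluster, but the response has roughly this shape:

```json
{
  "kind": "ExternalMetricValueList",
  "apiVersion": "external.metrics.k8s.io/v1beta1",
  "metadata": {},
  "items": [
    {
      "metricName": "custom.googleapis.com|kafka-exporter|kafka_brokers",
      "metricLabels": {},
      "timestamp": "2019-05-01T00:00:00Z",
      "value": "3"
    }
  ]
}
```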
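Finally, once the lag metric is visible under external.metrics.k8s.io, an HPA can scale a consumer deployment on it. A minimal sketch, assuming a Deployment named kafka-consumer and an arbitrary target of 100 lagging messages per replica (both the name and the number are illustrative):

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: kafka-consumer
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-consumer   # illustrative target deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        # Stackdriver metric name, with "/" escaped as "|"
        metricName: custom.googleapis.com|kafka-exporter|kafka_consumergroup_lag_sum
        targetAverageValue: "100"
```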