Commit 60deedd4 authored by Ecklory

First release

parent 069d259e
# zabbix-agent-extension-elasticsearch
zabbix-agent-extension-elasticsearch - an extension for monitoring Elasticsearch cluster and node health/status.
### Supported features
This extension obtains stats of two types: node stats and cluster health.
Build and installation (as root):
Download Go:
```sh
cd ~ && wget https://dl.google.com/go/go1.14.1.linux-amd64.tar.gz && tar -xvzf go1.14.1.linux-amd64.tar.gz && rm go1.14.1.linux-amd64.tar.gz
```
Add the Go binaries to `PATH`:
```sh
export PATH=$PATH:/root/go/bin
```
Install git:
```sh
apt -y --no-install-recommends install git
```
Install dep:
```sh
cd /root/go/bin && wget https://raw.githubusercontent.com/golang/dep/master/install.sh && chmod +x install.sh && ./install.sh && cd ~
```
Build:
```sh
git clone https://git.ckcorp.ru/ck/zabbix/elasticsearch && cd elasticsearch/src
```
#### Node stat
https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html
- [ ] roles
- [ ] attributes
- [x] indices (partly)
- [ ] os
- [ ] processes
- [x] jvm
- [x] thread_pool
- [ ] fs
- [ ] transport
- [ ] http
- [ ] breakers
- [ ] script
- [ ] discovery
- [ ] ingest
#### Cluster health
https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html
- [x] cluster_name
- [x] status
- [x] timed_out
- [x] number_of_nodes
- [x] number_of_data_nodes
- [x] total indices docs count
- [x] total indices deleted docs count
- [x] primary indices docs count
- [x] primary indices deleted docs count
- [x] total indices store size
- [x] primary indices store size
- [x] active_primary_shards
- [x] active_shards
- [x] relocating_shards
- [x] initializing_shards
- [x] unassigned_shards
- [x] delayed_unassigned_shards
- [x] number_of_pending_tasks
- [x] number_of_in_flight_fetch
- [x] task_max_waiting_in_queue_millis
- [x] active_shards_percent_as_number
### Installation
##### Notice
Before manual installation, check the `Include` option in your `zabbix-agent` configuration: it must be uncommented, and its path must match the include path used by this installation rule: https://github.com/zarplata/zabbix-agent-extension-elasticsearch/blob/master/Makefile#L54. Otherwise, change it to your include path.
After installation, restart your `zabbix-agent` manually so that the new `UserParameter` entries from the extension configuration are loaded.
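For reference, the include line in `zabbix_agentd.conf` typically looks like the following (the path is the default assumed by this extension's Makefile and may differ on your system):

```
Include=/etc/zabbix/zabbix_agentd.conf.d/*.conf
```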
#### Manual build
```sh
# Building
git clone https://github.com/zarplata/zabbix-agent-extension-elasticsearch.git
cd zabbix-agent-extension-elasticsearch
make
# Installing
make install
# By default, binary installs into /usr/bin/ and zabbix config in /etc/zabbix/zabbix_agentd.conf.d/ but,
# you may manually copy binary to your executable path and zabbix config to specific include directory
```
#### Arch Linux package
```sh
# Building
git clone https://github.com/zarplata/zabbix-agent-extension-elasticsearch.git
cd zabbix-agent-extension-elasticsearch
git checkout pkgbuild
./build.sh
# Installing
pacman -U *.tar.xz
```
### Dependencies
zabbix-agent-extension-elasticsearch requires [zabbix-agent](http://www.zabbix.com/download) v2.4+ to run.
### Zabbix configuration
In order to start collecting metrics, it is enough to import the template and attach it to the monitored node.
`WARNING:` You must define a macro named `{$ZABBIX_SERVER_IP}`, in global or local (template) scope, containing the IP address of the Zabbix server.
On one node of the cluster, set the macro `{$GROUPNAME}` = `REAL_ZABBIX_GROUP`. This group must include all nodes of the cluster.
Only this node will trigger on cluster status (low-level discovery adds aggregate checks of cluster health).
### Customize key prefix
You may need this if a key in the template is already in use.
If you need to change the keys `elasticsearch.*` to `YOUR_PREFIX_PART.elasticsearch.*`, run the script `custom_key_template.sh` with `YOUR_PREFIX_PART` and import the updated Zabbix template `template_elasticsearch_service.xml`.
```sh
./custom_key_template.sh YOUR_PREFIX_PART
```
### Elasticsearch API authentication (X-Pack security)
This extension supports the basic authentication provided by X-Pack. For authentication in Elasticsearch you must set valid values in the template macros `{$ES_USER}` and `{$ES_PASSWORD}`.
### Customize Elasticsearch address
You can customize the Elasticsearch listen address.
Just change the `{$ES_ADDRESS}` macro in the template.
Possible values are `(http|https)://host:port`.
Note that if you choose `https` and have a self-signed certificate, you should also set the path to your CA in the `{$CA_PATH}` macro.
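A client that trusts a custom CA can be built roughly as follows. This is a sketch, not the extension's actual implementation; the `newHTTPClient` helper and the `"None"` sentinel are assumptions modeled on `obtainCAPath` in the sources below:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io/ioutil"
	"net/http"
)

// newHTTPClient returns an *http.Client that additionally trusts the CA
// bundle at caPath; "None" means no custom CA is configured.
func newHTTPClient(caPath string) (*http.Client, error) {
	if caPath == "None" {
		return &http.Client{}, nil
	}
	caCert, err := ioutil.ReadFile(caPath)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caCert) {
		return nil, fmt.Errorf("no certificates found in %s", caPath)
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}, nil
}

func main() {
	client, err := newHTTPClient("None")
	if err != nil {
		panic(err)
	}
	fmt.Println(client != nil)
}
```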
{
"Fast": true,
"Linters": {
"gas": {
"Command": "gas -fmt=csv",
"PartitionStrategy": "directories"
},
"vet": {
"Command": "go vet"
}
}
}
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
branch = "master"
digest = "1:72b78aac789a7b10282d8e71bb6618eaba311ff9da66bed25af1d76f181d3561"
name = "github.com/blacked/go-zabbix"
packages = ["."]
pruneopts = "UT"
revision = "3c6a95ec4fdc345b48c4e0e5f5c87d48d3fc40b5"
[[projects]]
digest = "1:abaaa7489a2f0f3afb2adc8ea1a282a5bd52350b87b26da220c94fc778d6d63b"
name = "github.com/docopt/docopt-go"
packages = ["."]
pruneopts = "UT"
revision = "784ddc588536785e7299f7272f39101f7faccc3f"
version = "0.6.2"
[[projects]]
branch = "master"
digest = "1:4fcc2642b79154894b404300e290f7967dcacd069b22b74867015b32da89aa42"
name = "github.com/reconquest/hierr-go"
packages = ["."]
pruneopts = "UT"
revision = "7d09c0176fd2bb7fd71a4349d1253eef9edb2c5c"
[[projects]]
branch = "master"
digest = "1:ade8553e2161fce98433fe17b6bcbfadeaa727e8d0fb0a6542d8385911487be4"
name = "github.com/reconquest/karma-go"
packages = ["."]
pruneopts = "UT"
revision = "1dd2a756e5072411904cf1b01a678baed59092e4"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
input-imports = [
"github.com/blacked/go-zabbix",
"github.com/docopt/docopt-go",
"github.com/reconquest/hierr-go",
"github.com/reconquest/karma-go",
]
solver-name = "gps-cdcl"
solver-version = 1
# Gopkg.toml example
#
# Refer to https://golang.github.io/dep/docs/Gopkg.toml.html
# for detailed Gopkg.toml documentation.
#
# required = ["github.com/user/thing/cmd/thing"]
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
#
# [[constraint]]
# name = "github.com/user/project"
# version = "1.0.0"
#
# [[constraint]]
# name = "github.com/user/project2"
# branch = "dev"
# source = "github.com/myfork/project2"
#
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"
#
# [prune]
# non-go = false
# go-tests = true
# unused-packages = true
[[constraint]]
branch = "master"
name = "github.com/blacked/go-zabbix"
[[constraint]]
name = "github.com/docopt/docopt-go"
version = "0.6.2"
[[constraint]]
branch = "master"
name = "github.com/reconquest/hierr-go"
[[constraint]]
branch = "master"
name = "github.com/reconquest/karma-go"
[prune]
go-tests = true
unused-packages = true
.PHONY: all clean-all build clean-deps deps ver make-gopath
DATE := $(shell git log -1 --format="%cd" --date=short | sed s/-//g)
COUNT := $(shell git rev-list --count HEAD)
COMMIT := $(shell git rev-parse --short HEAD)
CWD := $(shell pwd)
BINARYNAME := zabbix-agent-extension-elasticsearch
CONFIG := zabbix-agent-extension-elasticsearch.conf
VERSION := "${DATE}.${COUNT}_${COMMIT}"
LDFLAGS := "-X main.version=${VERSION}"
default: all
all: clean-all make-gopath deps build
ver:
@echo ${VERSION}
clean-all: clean-deps
	@echo Clean built binaries
rm -rf .out/
rm -rf .gopath/
@echo Done
build:
@echo Build
cd ${CWD}/.gopath/src/${BINARYNAME}; \
GOPATH=${CWD}/.gopath \
go build -v -o .out/${BINARYNAME} -ldflags ${LDFLAGS} *.go
@echo Done
clean-deps:
@echo Clean dependencies
rm -rf vendor/*
deps:
@echo Fetch dependencies
cd ${CWD}/.gopath/src/${BINARYNAME}; \
GOPATH=${CWD}/.gopath \
/root/go/bin/dep ensure -v
make-gopath:
@echo Creating GOPATH
mkdir -p .gopath/src
ln -s ${CWD} ${CWD}/.gopath/src/${BINARYNAME}
install:
@echo Install
cp .out/${BINARYNAME} /usr/bin/${BINARYNAME}
cp userparameter_elasticsearch.conf \
/etc/zabbix/zabbix_agentd.conf.d/userparameter_elasticsearch.conf
@echo Done
remove:
@echo Remove
rm /usr/bin/${BINARYNAME}
rm /etc/zabbix/zabbix_agentd.conf.d/userparameter_elasticsearch.conf
@echo Done
package main
import "os"
func obtainCAPath() string {
caPath := os.Getenv("ZBX_ES_CA_PATH")
if len(caPath) == 0 {
return "None"
}
return caPath
}
func obtainESDSN() string {
dsn := os.Getenv("ZBX_ES_DSN")
if len(dsn) == 0 {
return "http://127.0.0.1:9200"
}
return dsn
}
#!/bin/sh
prefix="$1"
if [ -z "$prefix" ]; then
	echo "Prefix is not defined."
exit 1
fi
sed -i "s/elasticsearch\./$prefix.elasticsearch./g" template_elasticsearch_service.xml
sed "s/None_pfx/$prefix/g" -i template_elasticsearch_service.xml
echo "Done."
exit 0
package main
import (
"encoding/json"
"fmt"
)
func discovery(
nodesStats *ElasticNodesStats,
aggGroup string,
) error {
discoveryData := make(map[string][]map[string]string)
var discoveredItems []map[string]string
if aggGroup != "None" {
aggregateItem := make(map[string]string)
aggregateItem["{#GROUPNAME}"] = aggGroup
discoveredItems = append(discoveredItems, aggregateItem)
}
for _, nodeStats := range nodesStats.Nodes {
for collectorsName := range nodeStats.JVM.GC.Collectors {
discoveredItem := make(map[string]string)
discoveredItem["{#JVMGCCOLLECTORS}"] = collectorsName
discoveredItems = append(discoveredItems, discoveredItem)
}
for bufferPoolsName := range nodeStats.JVM.BufferPools {
discoveredItem := make(map[string]string)
discoveredItem["{#JVMBUFFERSPOOLS}"] = bufferPoolsName
discoveredItems = append(discoveredItems, discoveredItem)
}
for poolsName := range nodeStats.JVM.Mem.Pools {
discoveredItem := make(map[string]string)
discoveredItem["{#JVMMEMPOOLS}"] = poolsName
discoveredItems = append(discoveredItems, discoveredItem)
}
for threadPoolName := range nodeStats.ThreadPools {
discoveredItem := make(map[string]string)
discoveredItem["{#THREADPOOLNAME}"] = threadPoolName
discoveredItems = append(discoveredItems, discoveredItem)
}
}
discoveryData["data"] = discoveredItems
out, err := json.Marshal(discoveryData)
if err != nil {
return err
}
fmt.Printf("%s\n", out)
return nil
}
func discoveryIndices(
indicesStats *ElasticIndicesStats,
) error {
discoveryData := make(map[string][]map[string]string)
var discoveredItems []map[string]string
	for name := range indicesStats.Indices {
discoveredItem := make(map[string]string)
discoveredItem["{#INDEX}"] = name
discoveredItems = append(discoveredItems, discoveredItem)
}
discoveryData["data"] = discoveredItems
out, err := json.Marshal(discoveryData)
if err != nil {
return err
}
fmt.Printf("%s\n", out)
return nil
}
package main
import (
"encoding/json"
"fmt"
"net/http"
"github.com/reconquest/hierr-go"
)
type ElasticClusterHealth struct {
ClusterName string `json:"cluster_name"`
Status string `json:"status"`
TimedOut bool `json:"timed_out"`
NumderOfNodes int64 `json:"number_of_nodes"`
NumberOfDataNodes int64 `json:"number_of_data_nodes"`
ActivePrimaryShards int64 `json:"active_primary_shards"`
ActiveShards int64 `json:"active_shards"`
RelocatingShards int64 `json:"relocating_shards"`
InitializingShards int64 `json:"initializing_shards"`
UnassignedShards int64 `json:"unassigned_shards"`
DelayedUnassignedShards int64 `json:"delayed_unassigned_shards"`
NumberOfPendingTasks int64 `json:"number_of_pending_tasks"`
NumberOfInFlightFetch int64 `json:"number_of_in_flight_fetch"`
TaskMaxWaitingInQueueMillis int64 `json:"task_max_waiting_in_queue_millis"`
ActiveShardsPercent float64 `json:"active_shards_percent_as_number"`
}
type ElasticNodesStats struct {
Nodes map[string]ElasticNodeStats `json:"nodes"`
}
type ElasticNodeStats struct {
JVM ElasticNodeStatsJVM `json:"jvm"`
ThreadPools map[string]NodeThreadPool `json:"thread_pool"`
Indices NodeIndices `json:"indices"`
Transport ElasticNodeStatsTransport `json:"transport"`
Http ElasticNodeStatsHttp `json:"http"`
}
type ElasticNodeStatsJVM struct {
Timestamp int64 `json:"timestamp"`
UptimeInMillis int64 `json:"uptime_in_millis"`
Mem ElasticNodeStatsJVMMem `json:"mem"`
Threads ElasticNodeStatsJVMThreadsStats `json:"threads"`
GC ElasticNodeStatsJVMGC `json:"gc"`
BufferPools map[string]ElasticNodeStatsJVMBufferPoolsStats `json:"buffer_pools"`
Classes ElasticNodeStatsJVMClassesStats `json:"classes"`
}
type ElasticNodeStatsJVMMem struct {
HeapUsedInBytes int64 `json:"heap_used_in_bytes"`
HeapUsedPercent int64 `json:"heap_used_percent"`
HeapCommittedInBytes int64 `json:"heap_committed_in_bytes"`
HeapMaxInBytes int64 `json:"heap_max_in_bytes"`
NonHeapUsedInBytes int64 `json:"non_heap_used_in_bytes"`
NonHeapCommittedInBytes int64 `json:"non_heap_committed_in_bytes"`
Pools map[string]ElasticNodeStatsJVMMemPoolsStats `json:"pools"`
}
type ElasticNodeStatsJVMMemPoolsStats struct {
UsedInBytes int64 `json:"used_in_bytes"`
MaxInBytes int64 `json:"max_in_bytes"`
PeakUsedInBytes int64 `json:"peak_used_in_bytes"`
PeakMaxInBytes int64 `json:"peak_max_in_bytes"`
}
type ElasticNodeStatsJVMThreadsStats struct {
Count int64 `json:"count"`
PeakCount int64 `json:"peak_count"`
}
type ElasticNodeStatsJVMGC struct {
Collectors map[string]ElasticNodeStatsJVMGCCollectorsStats `json:"collectors"`
}
type ElasticNodeStatsJVMGCCollectorsStats struct {
CollectionCount int64 `json:"collection_count"`
CollectionTimeInMillis int64 `json:"collection_time_in_millis"`
}
type ElasticNodeStatsJVMBufferPoolsStats struct {
Count int64 `json:"count"`
UsedInBytes int64 `json:"used_in_bytes"`
TotalCapacityInBytes int64 `json:"total_capacity_in_bytes"`
}
type ElasticNodeStatsJVMClassesStats struct {
CurrentLoadedCount int64 `json:"current_loaded_count"`
TotalLoadedCount int64 `json:"total_loaded_count"`
TotalUnloadedCount int64 `json:"total_unloaded_count"`
}
type ElasticNodeStatsTransport struct {
ServerOpen int64 `json:"server_open"`
RxCount int64 `json:"rx_count"`
RxSizeInBytes int64 `json:"rx_size_in_bytes"`
TxCount int64 `json:"tx_count"`
TxSizeInBytes int64 `json:"tx_size_in_bytes"`
}
type ElasticNodeStatsHttp struct {
CurrentOpen int64 `json:"current_open"`
TotalOpened int64 `json:"total_opened"`
}
type ElasticIndicesStats struct {
Shards ElasticIndicesStatsShards `json:"_shards"`
All ElasticIndicesStatsAll `json:"_all"`
Indices map[string]ElasticIndicesStatsIndice `json:"indices"`
}
type ElasticIndicesStatsShards struct {
Total int64 `json:"total"`
Successful int64 `json:"successful"`
Failed int64 `json:"failed"`
}
type ElasticIndicesStatsAll struct {
Primaries ElasticIndicesStatsIndex `json:"primaries"`
Total ElasticIndicesStatsIndex `json:"total"`
}
type ElasticIndicesStatsIndice struct {
Primaries ElasticIndicesStatsIndex `json:"primaries"`
Total ElasticIndicesStatsIndex `json:"total"`
}
type ElasticIndicesStatsIndex struct {
Docs struct {
Count int64 `json:"count"`
Deleted int64 `json:"deleted"`
} `json:"docs"`
Store struct {
SizeInBytes int64 `json:"size_in_bytes"`
ThrottleTimeInMillis int64 `json:"throttle_time_in_millis"`
} `json:"store"`
}
func getClusterHealth(
elasticDSN string,
elasticsearchAuthToken string,
client *http.Client,
) (*ElasticClusterHealth, error) {
var elasticClusterHealth ElasticClusterHealth
clutserHealthURL := fmt.Sprintf("%s/_cluster/health", elasticDSN)
request, err := http.NewRequest("GET", clutserHealthURL, nil)
if err != nil {
return nil, hierr.Errorf(
err,
"can`t create new HTTP request to %s",
elasticDSN,
)
}
if elasticsearchAuthToken != noneValue {
request.Header.Add("Authorization", "Basic "+elasticsearchAuthToken)
}
clusterHealthResponse, err := client.Do(request)
if err != nil {
return nil, hierr.Errorf(
			err,
"can`t get cluster health from Elasticsearch %s",
elasticDSN,
)
}
defer clusterHealthResponse.Body.Close()
if clusterHealthResponse.StatusCode != http.StatusOK {
return nil, fmt.Errorf(
"can`t get cluster health, Elasticsearch cluster returned %d HTTP code, expected %d HTTP code",
clusterHealthResponse.StatusCode,
http.StatusOK,
)
}
err = json.NewDecoder(clusterHealthResponse.Body).Decode(&elasticClusterHealth)
if err != nil {
return nil, hierr.Errorf(
			err,
"can`t decode cluster health response from Elasticsearch %s",
elasticDSN,
)
}
return &elasticClusterHealth, nil
}
func getNodeStats(
elasticDSN string,
elasticsearchAuthToken string,
client *http.Client,
) (*ElasticNodesStats, error) {
var elasticNodesStats ElasticNodesStats
nodeStatsURL := fmt.Sprintf("%s/_nodes/_local/stats", elasticDSN)
request, err := http.NewRequest("GET", nodeStatsURL, nil)
if err != nil {
return nil, hierr.Errorf(
err,
"can`t create new HTTP request to %s",
elasticDSN,
)
}
if elasticsearchAuthToken != noneValue {
request.Header.Add("Authorization", "Basic "+elasticsearchAuthToken)
}
nodeStatsResponse, err := client.Do(request)
if err != nil {
return nil, hierr.Errorf(
			err,
"can`t get node stats from Elasticsearch %s",
elasticDSN,
)
}
defer nodeStatsResponse.Body.Close()
if nodeStatsResponse.StatusCode != http.StatusOK {
return nil, fmt.Errorf(
			"can`t get node stats, Elasticsearch node returned %d HTTP code, expected %d HTTP code",
			nodeStatsResponse.StatusCode,
			http.StatusOK,
}
err = json.NewDecoder(nodeStatsResponse.Body).Decode(&elasticNodesStats)
if err != nil {
return nil, hierr.Errorf(
			err,
"can`t decode node stats response from Elasticsearch %s",
elasticDSN,
)
}
return &elasticNodesStats, nil
}
func getIndicesStats(
elasticDSN string,
elasticsearchAuthToken string,
client *http.Client,
) (*ElasticIndicesStats, error) {
var elasticIndicesStats ElasticIndicesStats
indicesStatsURL := fmt.Sprintf("%s/_stats", elasticDSN)
request, err := http.NewRequest("GET", indicesStatsURL, nil)
if err != nil {
return nil, hierr.Errorf(
err,
"can`t create new HTTP request to %s",
elasticDSN,
)
}
if elasticsearchAuthToken != noneValue {
request.Header.Add("Authorization", "Basic "+elasticsearchAuthToken)
}
indicesStatsResponse, err := client.Do(request)
if err != nil {
return nil, hierr.Errorf(
			err,
"can`t get indices stats from Elasticsearch %s",
elasticDSN,
)
}
defer indicesStatsResponse.Body.Close()
if indicesStatsResponse.StatusCode != http.StatusOK {
return nil, fmt.Errorf(
"can`t get indices stats, Elasticsearch node returned %d HTTP code, expected %d HTTP code",
indicesStatsResponse.StatusCode,
http.StatusOK,
)
}
err = json.NewDecoder(indicesStatsResponse.Body).Decode(&elasticIndicesStats)
if err != nil {
return nil, hierr.Errorf(
			err,
"can`t decode indices stats response from Elasticsearch %s",
elasticDSN,
)
}
return &elasticIndicesStats, nil
}
package main
import (
"strconv"
zsend "github.com/blacked/go-zabbix"
)
// NodeIndices - indices stats
type NodeIndices struct {
Docs struct {
Count int64 `json:"count"`
Deleted int64 `json:"deleted"`
} `json:"docs"`
Store struct {
SizeInBytes int64 `json:"size_in_bytes"`
ThrottleTimeInMillis int64 `json:"throttle_time_in_millis"`
} `json:"store"`
Indexing IndicesIndexingStats `json:"indexing"`
Get IndicesGetStats `json:"get"`
Search IndicesSearchStats `json:"search"`
Merges IndicesMergesStats `json:"merges"`
QueryCache IndicesQueryCache `json:"query_cache"`
}
// IndicesIndexingStats - indices indexing stats
type IndicesIndexingStats struct {
IndexTotal int64 `json:"index_total"`
IndexTimeInMillis int64 `json:"index_time_in_millis"`
IndexCurrent int64 `json:"index_current"`
IndexFailed int64 `json:"index_failed"`
DeleteTotal int64 `json:"delete_total"`