Elasticsearch settings for single-node cluster
Update default template:
curl -X PUT http://localhost:9200/_template/default -H 'Content-Type: application/json' -d '{"index_patterns": ["*"], "order": -1, "settings": {"number_of_shards": "1", "number_of_replicas": "0"}}'
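To confirm the template was stored (same host and template name as above), you can fetch it back:
curl http://localhost:9200/_template/default?pretty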
If yellow indices already exist, you can set their replica count to 0 (number_of_shards cannot be changed on an existing index):
curl -X PUT http://localhost:9200/_settings -H 'Content-Type: application/json' -d '{"index": {"number_of_replicas": "0"}}'
If you get an error like:
{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"}],"type":"cluster_block_exception","reason":"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"},"status":403}
remove the read-only block first:
curl -X PUT http://localhost:9200/_settings -H 'Content-Type: application/json' -d '{"index": {"blocks": {"read_only_allow_delete": "false"}}}'
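The read-only block usually appears because a node ran out of disk space and crossed the flood-stage watermark, so it is worth checking disk usage as well (same localhost endpoint assumed):
curl http://localhost:9200/_cat/allocation?v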
elasticsearch cluster.routing.allocation.disk.watermark.low
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d ' { "transient": { "cluster.routing.allocation.disk.watermark.low": "90%", "cluster.routing.allocation.disk.watermark.high": "95%", "cluster.routing.allocation.disk.watermark.flood_stage": "98%" } } '
filebeat custom index name
filebeat output to elasticsearch indices
filebeat separate index
filebeat log different index
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/some/path/*.log
    fields:
      type: "query"
  - type: log
    enabled: true
    paths:
      - /var/log/another.path/*.log
    fields:
      type: "error"

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.elasticsearch:
  hosts: ["192.168.1.100:9200"]
  index: "newindex-%{[fields.type]:other}-%{+yyyy.MM.dd}"

setup.template.name: "newindex"
setup.template.pattern: "newindex-*"
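Note that on Filebeat 7.x index lifecycle management can override a custom index name; if the data still lands in filebeat-* indices, ILM may need to be disabled in filebeat.yml (a version-dependent sketch):
setup.ilm.enabled: false
Afterwards you can check that the per-type indices are being created (host and index prefix taken from the example above):
curl 'http://192.168.1.100:9200/_cat/indices/newindex-*?v'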
increase shards elasticsearch – maximum shards open
If the application log contains an error like this:
Moving to ERROR step
org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [100]/[100] maximum shards open;
at org.elasticsearch.indices.ShardLimitValidator.validateShardLimit(ShardLimitValidator.java:80) ~[elasticsearch
check how many shards are currently open:
curl -s http://localhost:9200/_cat/shards | wc -l
100
Then increase the limit:
curl -XPUT http://localhost:9200/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "200" } }'
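To verify the change took effect, the cluster health output shows the current active_shards count, which you can compare against the new limit (localhost assumed):
curl http://localhost:9200/_cluster/health?pretty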
elastic check version
curl -XGET 'http://localhost:9200'
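The root endpoint returns the version in version.number; in a multi-node cluster the cat nodes API also shows the version of every node (localhost assumed):
curl 'http://localhost:9200/_cat/nodes?v&h=name,version'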
elasticsearch limit memory usage
vim /etc/init.d/elasticsearch
ES_HEAP_SIZE=512m
vim /etc/elasticsearch/elasticsearch.yml
bootstrap.memory_lock: true
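ES_HEAP_SIZE is only read by the older init-script packages; on Elasticsearch 5.x and later the heap is set in /etc/elasticsearch/jvm.options instead (512m here simply mirrors the example above, not a recommendation):
vim /etc/elasticsearch/jvm.options
-Xms512m
-Xmx512m
With bootstrap.memory_lock enabled, the service also needs permission to lock memory (for systemd, LimitMEMLOCK=infinity in a unit override), otherwise Elasticsearch may log "Unable to lock JVM Memory" or fail its bootstrap checks.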