kibana geo_point How to Part 2

Step: change the Kibana / Elasticsearch startup order. First Elasticsearch imports template_filebeat, then we wait for Logstash to push logs into Elasticsearch, so Elasticsearch ends up with indices such as filebeat-6.4.2-2018.11.19 and filebeat-6.4.2-2018.11.20. Then Kibana imports the index-pattern and sets it as the default.

```shell
#!/bin/bash
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/main' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/community' >> /etc/apk/repositories
echo '@edge http://dl-cdn.alpinelinux.org/alpine/edge/testing' >> /etc/apk/repositories
apk --no-cache upgrade
apk --no-cache add curl

echo "===== Elasticsearch config ====="
until echo | nc -z -v elasticsearch 9200; do
  echo "Waiting for Elasticsearch to start..."
  sleep 2
done

code="400"
until [ "$code" != "400" ]; do
  echo "===== Elasticsearch: importing mappings json ====="
  curl -v -XPUT elasticsearch:9200/_template/template_filebeat \
    -H 'Content-Type: application/json' \
    -d @/usr/share/elkconfig/config/template_filebeat.json 2>/dev/null \
    | head -n 1 | cut -d ':' -f2 | cut -d ',' -f1 > code.txt
  code=$(cat code.txt)
  sleep 2
done

# reload indices so the geo_point mapping applies
echo "===== Get index list ====="
indexlists=()
while [ ${#indexlists[@]} -eq 0 ]; do
  sleep 2
  indexlists=($(curl -s elasticsearch:9200/_aliases?pretty=true \
    | awk -F\" '!/aliases/ && $2 != "" {print $2}' | grep filebeat-))
done
sleep 10

# ======== Kibana ========
id="f1836c20-e880-11e8-8d66-7d7b4c3a5906"
echo "===== Kibana default index-pattern ====="
until echo | nc -z -v kibana 5601; do
  echo "Waiting for Kibana to start..."
  sleep 2
done

code="400"
until [ "$code" != "400" ]; do
  echo "===== Kibana: importing json ====="
  curl -v -XPOST kibana:5601/api/kibana/dashboards/import?force=true \
    -H "kbn-xsrf:true" -H "Content-type:application/json" \
    -d @/usr/share/elkconfig/config/index-pattern-export.json 2>/dev/null \
    | head -n 1 | cut -d ':' -f2 | cut -d ',' -f1 > code.txt
  code=$(cat code.txt)
  sleep 2
done

code="400"
until [ "$code" != "400" ]; do
  curl -v -XPOST kibana:5601/api/kibana/settings/defaultIndex \
    -H "kbn-xsrf:true" -H "Content-Type: application/json" \
    -d "{\"value\": \"$id\"}" 2>/dev/null \
    | head -n 1 | cut -d ':' -f2 | cut -d ',' -f1 > code.txt
  code=$(cat code.txt)
  sleep 2
done

tail -f /dev/null
```

template_filebeat template_filebeat.json ...
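The `head -n 1 | cut` pipeline in the script above pulls the numeric status code out of an error body like `{"statusCode":400,...}`. A standalone sketch of just that extraction (the sample response body is made up for illustration):

```shell
# a sample error body, as returned when an import request fails
resp='{"statusCode":400,"error":"Bad Request","message":"invalid"}'
# same pipeline as the startup script: take the first line, split on ':'
# (field 2 is then '400,"error"'), split on ',' to keep just the number
code=$(printf '%s\n' "$resp" | head -n 1 | cut -d ':' -f2 | cut -d ',' -f1)
echo "$code"
```

The retry loop keeps re-posting until this extracted value is no longer "400".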

2018-11-21 · 2 min · 399 words · Me

[Failed again!!] kibana geo_point How to

Fxxx Kibana, Fxxx ELK. Tried the whole thing again, but still can't get geo_point to work. Reindexing was no use. POST /_refresh, POST /_flush/synced, and POST /_cache/clear: only doing these made the change apply. What a waste of time. Very bad documentation, and very bad breaking changes between versions. Everything is bad with ELK/Kibana. 1. Every time the docs show a "PUT, GET or DELETE" command, where are you supposed to run it??? https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-get.html ...
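The "PUT / GET / DELETE" lines in the Elastic docs are just HTTP requests against port 9200; with curl they map to `-X` plus the path. A hypothetical sketch of that mapping (the index and document names are placeholders, not from the post):

```shell
ES="http://localhost:9200"
# docs notation: GET /myindex/_doc/1
get_cmd="curl -s -X GET $ES/myindex/_doc/1"
# docs notation: PUT /myindex/_doc/1  { ...json body... }
put_cmd="curl -s -X PUT $ES/myindex/_doc/1 -H 'Content-Type: application/json' -d '{\"field\":1}'"
# docs notation: DELETE /myindex/_doc/1
del_cmd="curl -s -X DELETE $ES/myindex/_doc/1"
# run any of them with eval "$get_cmd" once Elasticsearch is up
echo "$get_cmd"
```

(Kibana's Dev Tools console accepts the docs notation directly, which is the other place these commands are meant to be pasted.)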

2018-11-20 · 3 min · 550 words · Me

logstash kibana geth log ethereum Grok Constructor

filter { json { source => "message" } } means: try to parse the log as JSON, putting the parsed data into fields and the remaining data into the message field. So some fields get set directly, and the rest lands in message. Use these to check your pattern against a log line: https://grokconstructor.appspot.com/do/match https://blog.johnwu.cc/article/elk-logstash-grok-filter.html https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns Here are geth log lines as examples. A: INFO [11-14|09:58:17.730] Generating DAG in progress epoch=1 percentage=99 elapsed=4m8.643s INFO [11-15|01:41:33.455] Generating DAG in progress epoch=1 percentage=9 elapsed=27.614s B: INFO [11-15|01:19:44.590] Loaded most recent local fast block number=0 hash=656134…58fded td=1 age=49y7mo1h, Loaded most recent local fast block ...
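For log line A above, a grok pattern along these lines would split the level, timestamp, and body into separate fields (a sketch against the sample line, not the post's actual filter):

```
filter {
  grok {
    # matches: INFO [11-14|09:58:17.730] Generating DAG in progress epoch=1 ...
    match => { "message" => "%{LOGLEVEL:level} \[%{DATA:logtime}\] %{GREEDYDATA:logmsg}" }
  }
}
```

Paste the sample line and this pattern into the Grok Constructor linked above to verify the match before deploying.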

2018-11-15 · 2 min · 366 words · Me

kibana default index-pattern

First create an index-pattern, export its JSON, delete the index-pattern, then re-import it via the REST API. 1. List the index-patterns (create one in the web UI first): curl http://localhost:5601/api/saved_objects/_find?type=index-pattern 2. Export the saved_objects index-pattern: curl http://localhost:5601/api/saved_objects/index-pattern/c0c02200-e6e0-11e8-b183-ebb59b02f871 > export.json (c0c02200-e6e0-11e8-b183-ebb59b02f871 is the id found in step 1). The exported JSON cannot be used directly; you must wrap it: prepend the header { "objects": [ and append ]} at the end. 3. Import the saved_objects index-pattern (remember to delete kibana-* first): curl -v -XPOST localhost:5601/api/kibana/dashboards/import?force=true -H 'kbn-xsrf:true' -H 'Content-type:application/json' -d @./export.json (keeping export.json in the same directory where you run curl is enough). 4. Force the default value (Kibana -> Management -> Advanced Settings -> defaultIndex): curl -XPOST http://localhost:5601/api/kibana/settings/defaultIndex -H "kbn-xsrf: true" -H "Content-Type: application/json" -d '{"value": "id"}' where id is the id value found inside export.json. If the Kibana site is already open, refresh the page (F5). ...
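The wrapping in step 2 (prepend `{"objects":[`, append `]}`) can be scripted. A sketch, where the echoed line stands in for a real export (real exports are larger):

```shell
# stand-in for the saved object exported in step 2
echo '{"id":"c0c02200-e6e0-11e8-b183-ebb59b02f871","type":"index-pattern"}' > export.json
# wrap it in the envelope the dashboards/import API expects
{ printf '{"objects":['; cat export.json; printf ']}'; } > import.json
head -c 12 import.json
```

The resulting import.json is what step 3's `-d @./export.json` expects to see.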

2018-11-13 · 2 min · 306 words · Me

elk Elasticsearch Logstash and Kibana fortigate ubuntu

https://www.rosehosting.com/blog/install-and-configure-the-elk-stack-on-ubuntu-16-04/ https://www.elastic.co/guide/en/logstash/current/configuration.html https://dotblogs.com.tw/supershowwei/2016/05/25/185741 After installing: 1. Put your Logstash confs in /etc/logstash/conf.d/. 2. Ubuntu has a Logstash listen (permission) error, so nano /etc/logstash/startup.options and set LS_USER=root. 3. Run /usr/share/logstash/bin/system-install so the LS_USER change is re-applied from the config. Note: mutate { add_field => { "logTime" => "%{+YYYY-MM-dd} %{time}" } }
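The add_field in the note combines the event date with a separate time field; a fuller sketch of how it might sit in a filter block (the date plugin part is an assumption for illustration, not from the post):

```
filter {
  mutate {
    # build a combined timestamp from the event date plus a parsed "time" field
    add_field => { "logTime" => "%{+YYYY-MM-dd} %{time}" }
  }
  # optionally parse the combined field back into @timestamp
  date {
    match => [ "logTime", "yyyy-MM-dd HH:mm:ss" ]
  }
}
```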

2017-08-14 · 1 min · 38 words · Me