To create an index, an index mapping is needed. For collecting file-based logs, the Filebeat template is a good fit.
Make a copy of filebeat.json from the zip package at https://download.elastic.co/beats/dashboards/beats-dashboards-1.1.0.zip and change the file name and content (the upload below uses pinglog.json).
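The rename and content change can be scripted. A minimal sketch, assuming the JSON carries the index-pattern title "filebeat-*" (the stand-in file written here is hypothetical; use the real file from the zip):

```shell
# hypothetical stand-in for the index-pattern JSON from the dashboards zip
printf '{"title": "filebeat-*", "timeFieldName": "@timestamp"}\n' > filebeat.json
# copy it and rewrite the index-pattern name from filebeat-* to pinglog-*
cp filebeat.json pinglog.json
sed -i 's/filebeat-\*/pinglog-*/g' pinglog.json
cat pinglog.json
```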
Then upload it to create the index pattern in Kibana:
root@elkserver: curl -XPUT http://localhost:9200/.kibana/index-pattern/pinglog-* -d @pinglog.json
{"_index":".kibana","_type":"index-pattern","_id":"pinglog-*","_version":2,"_shards":{"total":2,"successful":1,"failed":0},"created":false}
root@elkserver:
Afterwards, set up collection on afserver. It still follows the usual ELK chain - Elasticsearch, Logstash, Kibana - with Filebeat shipping the log file:
paths:
  - /var/log/pingkaf.txt
document_type: pinglog
input_type: log
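For completeness, Filebeat also needs an output section pointing at the Logstash host. A sketch, assuming Logstash listens on elkserver port 5044 (the Beats default; adjust host and port to your setup):

```
output:
  logstash:
    # hostname and port are assumptions - match your logstash beats input
    hosts: ["elkserver:5044"]
```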
This is shipped to Logstash, where the output is configured for Elasticsearch. Notice the conditional on type "pinglog":
output {
  if [type] == "pinglog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      sniffing => true
      manage_template => false
      index => "pinglog-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      sniffing => true
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}
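The index name is date-based: the sprintf pattern %{+YYYY.MM.dd} is filled from the event's @timestamp (in UTC), so each day gets its own index. A quick way to see what today's index would be called:

```shell
# print the daily index name the pattern pinglog-%{+YYYY.MM.dd}
# produces for an event timestamped today (Logstash formats in UTC)
echo "pinglog-$(date -u +%Y.%m.%d)"
```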
This should bring the ping logs into daily indices named pinglog-YYYY.MM.dd, which the pinglog-* index pattern created above will match.