
After a short while, elkserver3 and elkserver2 got the same treatment (X-Pack excluded); each node upgrade took approximately 10 minutes.

And then I enabled shard allocation again:

curl -XPUT 'localhost:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}'

Warning

During the entire upgrade, cluster health was either red or yellow, and I was somewhat concerned about the state of everything in the cluster.

But it came back to green, and the number of unassigned shards went towards a nice 0 (zero).
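To watch that recovery happen, cluster status and the unassigned-shard count can be polled with the standard cluster health API (same localhost:9200 node as above; the loop below is a minimal sketch):

```shell
# One-off check: status plus unassigned shard count
curl -s 'localhost:9200/_cluster/health?pretty'

# Or poll every 10 seconds until the cluster reports green
while true; do
  status=$(curl -s 'localhost:9200/_cluster/health' | grep -o '"status":"[a-z]*"')
  echo "$(date '+%H:%M:%S') $status"
  [ "$status" = '"status":"green"' ] && break
  sleep 10
done
```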

Note

This actually cost me some data loss... first of all, after rebooting elkserver1, the Filebeat service did not start, and I did not realize this for 2 days.
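To avoid that silent failure next time, the service can be checked after the reboot and enabled so it starts automatically (this assumes Filebeat was installed as a systemd unit named filebeat, the usual package default):

```shell
# Check whether filebeat actually came up after the reboot
systemctl status filebeat --no-pager

# Make sure it starts on boot, and start it now
sudo systemctl enable filebeat
sudo systemctl start filebeat

# Tail its log to confirm it is shipping events again
journalctl -u filebeat -n 20 --no-pager
```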

Before that realization, searching the "syslog-*" index gave me "Courier Fetch: X of 5 shards failed", and looking at the shards for the syslog-* index from the upgrade time onwards, their size was close to 0.

I never found the reason, but ended up deleting the close-to-0 ones in Kibana with "DELETE /syslog-dd.mm.yyyy", and then everything worked again. X-Pack could be a possibility, but all other indices work and have worked fine the entire time.
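A quick way to spot those near-empty daily indices is the _cat indices API, sorted by store size; the curl DELETE is the command-line equivalent of the Kibana Dev Tools request (the dd.mm.yyyy placeholder stands in for a real date):

```shell
# List syslog indices with doc counts and sizes, smallest first
curl -s 'localhost:9200/_cat/indices/syslog-*?v&s=store.size:asc'

# Delete one suspect index (substitute the real date for the placeholder)
curl -XDELETE 'localhost:9200/syslog-dd.mm.yyyy'
```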

Also, judging by Google results, "Courier Fetch: X of 5 shards failed" seems to be a common problem.

But I should have stopped all Logstash instances before the upgrade...
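For next time, stopping the shippers is one command per node before the upgrade, and one after (assuming Logstash runs as a systemd service named logstash):

```shell
# On each node feeding the cluster, before starting the upgrade:
sudo systemctl stop logstash

# ...after the cluster is upgraded and back to green:
sudo systemctl start logstash
```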