Hi everyone!
I have two problems that are confusing me. I have already searched Google
and dug into this project's documentation, but I can't find proper
explanations for them, so I need help!
1. I built an Elasticsearch cluster on www.aliyun.com cloud compute
virtual machines. I understand that in a real server environment, a new
node with the same cluster name can automatically join an existing
cluster.
But in my environment (virtual machines with the default "zen
discovery"), I have to restart the cluster whenever I add a new node. Is
this really necessary?
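For context on what usually causes this: on cloud-provider virtual networks, multicast is often blocked, so zen discovery's default multicast ping never finds the other nodes. A common fix is to disable multicast and give every node an explicit unicast seed list. A minimal elasticsearch.yml sketch (the cluster name and the second IP are placeholders for your own values; the first IP is taken from the error message below):

```yaml
# elasticsearch.yml on every node (ES 1.x-era zen discovery settings)
cluster.name: my_cluster            # must be identical on all nodes

# Multicast rarely works across cloud virtual networks, so disable it
# and list a few existing nodes as unicast seed hosts instead.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.161.165.194", "10.161.165.195"]
```

With a unicast seed list in place, a freshly started node should join the running cluster on its own, without a full-cluster restart.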
I have also noticed that there are dedicated discovery plugins for Google
Compute Engine, EC2, and Azure. Is a cloud compute machine different from
a real machine? If it is, what is the difference between them?
2. Is there any backup or snapshot mechanism in Elasticsearch? I ask
because every night (from 2:00 to 6:00) performance drops below daytime
levels, and errors such as the one below occur more often. Should I limit
"bulk" operations during that time window?
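On the backup part of question 2: Elasticsearch does ship a snapshot/restore API (shared-filesystem repositories out of the box, plus S3/Azure/HDFS via plugins). A hedged sketch using the official elasticsearch-py client — the repository name, path, and snapshot name are placeholders, and `es` stands for an `elasticsearch.Elasticsearch` client you already have:

```python
def snapshot_request_bodies(repo_path):
    """Build the request bodies for registering a shared-filesystem
    snapshot repository and snapshotting all indices.
    (Names and paths here are illustrative, not from the original post.)"""
    repo_body = {
        "type": "fs",  # shared filesystem; path must be visible to all nodes
        "settings": {"location": repo_path, "compress": True},
    }
    snapshot_body = {
        "indices": "_all",             # snapshot every index
        "ignore_unavailable": True,    # skip missing indices instead of failing
        "include_global_state": False,
    }
    return repo_body, snapshot_body


def take_nightly_snapshot(es, repo_path="/mnt/es_backups"):
    """Register the repository (idempotent) and start a snapshot.
    Snapshots are incremental, so taking one every night is cheap."""
    repo_body, snapshot_body = snapshot_request_bodies(repo_path)
    es.snapshot.create_repository(repository="nightly", body=repo_body)
    es.snapshot.create(repository="nightly", snapshot="snapshot_1",
                       body=snapshot_body, wait_for_completion=False)
```

Note that snapshots alone would not explain the nightly slowdown; that is more likely your own bulk jobs (or segment merges they trigger) competing for I/O during that window.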
If you know the answers, please explain them to me. Thank you!
by magigo
The error message:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
No handlers could be found for logger "elasticsearch"
Exception in thread C_Threading:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
    self.run()
  File "es_indexing_3.py", line 87, in run
    res = helpers.bulk(self.es, actions, chunk_size=size)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers/__init__.py", line 145, in bulk
    for ok, item in streaming_bulk(client, actions, **kwargs):
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/helpers/__init__.py", line 104, in streaming_bulk
    resp = client.bulk(bulk_actions, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 67, in _wrapped
    return func(*args, params=params, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.py", line 646, in bulk
    params=params, body=self._bulk_body(body))
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 276, in perform_request
    status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 51, in perform_request
    raise ConnectionError('N/A', str(e), e)
ConnectionError: ConnectionError(HTTPConnectionPool(host='10.161.165.194', port=9200): Read timed out. (read timeout=10)) caused by: ReadTimeoutError(HTTPConnectionPool(host='10.161.165.194'$
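The read timeout above (the elasticsearch-py default is 10 s) means the node did not answer the bulk request in time, not that it crashed. Rather than stopping bulk indexing entirely during that window, it may be enough to give the client a longer timeout (e.g. `Elasticsearch(["10.161.165.194:9200"], timeout=60)`), use a smaller `chunk_size`, and retry failed batches with backoff. A sketch under those assumptions — `do_bulk` stands in for something like `lambda a: helpers.bulk(es, a, chunk_size=500)`:

```python
import time


def bulk_with_retries(do_bulk, actions, retries=3, backoff=1.0):
    """Call do_bulk(actions); on a connection/timeout error, wait and
    retry with exponential backoff instead of letting the thread die."""
    for attempt in range(retries):
        try:
            return do_bulk(actions)
        except ConnectionError:
            # Real code would catch elasticsearch.ConnectionError, which
            # the traceback shows wrapping urllib3's ReadTimeoutError.
            if attempt == retries - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(backoff * (2 ** attempt))
```

This keeps one slow batch from killing the whole indexing thread, as happened in the traceback above.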
--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/3c7c96a9-a8c0-4477-9082-7f7c47e7258f%40googlegroups.com.