There is no current method to redirect indexing to a preparer index for
delayed indexing while searching is still enabled.
With rivers, you can close the _river index; some rivers (not all) may
take this as an indicator to stop indexing until the _river index is
reopened. I consider this as
Sorry!!! (-_-) Please forgive me
I've got 2 running instances of elasticsearch on the server and I didn't
know this. I found it with jps, and I found my own elasticsearch on port 9201!
Why, when I start elasticsearch -d, doesn't it tell me its port,
cluster.name and node.name? That would be useful, wouldn't it?
If I set a 4GB heap on an 8GB system for a master-only node, what are the
remaining 4GB used for?
The filesystem cache (but the master doesn't do I/O)?
On Tuesday, November 11, 2014 at 22:46:50 UTC+1, Mark Walkom wrote:
You should use 50% of your system memory for heap.
A client is just a node
Set a custom template; you can find an example in
logstash/lib/logstash/outputs/elasticsearch/template-elasticsearch.json
2014-11-11 17:46 GMT+08:00 Sang Dang zkid...@gmail.com:
Hi all,
Currently my index is created automatically when a new doc is indexed, so I
couldn't use a CreateMapping request.
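One common workaround (a sketch only; the template name, index pattern, type, and field names here are hypothetical) is to register an index template, e.g. with PUT /_template/my_template, so the mapping is applied automatically whenever a matching index is auto-created:

```json
{
  "template": "myindex-*",
  "mappings": {
    "mytype": {
      "properties": {
        "title": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}
```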
Heya,
We are pleased to announce the release of the Elasticsearch Azure cloud plugin,
version 2.5.0.
The Azure Cloud plugin allows you to use the Azure API for the unicast
discovery mechanism and adds Azure storage repositories.
https://github.com/elasticsearch/elasticsearch-cloud-azure/
Release
Can you try in your pom.xml to exclude the default randomizedtesting jar
retrieved by Lucene with another version? Use the change below and give it
a go.
    <dependency>
      <groupId>org.apache.lucene</groupId>
      <artifactId>lucene-test-framework</artifactId>
      <version>${lucene.version}</version>
      <scope>test</scope>
    </dependency>
Maybe this would help you:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-lang-analyzer.html#swedish-analyzer
In the example they use only Swedish.
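For reference, a minimal mapping sketch using the built-in swedish analyzer (the type and field names below are made up for illustration):

```json
{
  "mappings": {
    "article": {
      "properties": {
        "body": { "type": "string", "analyzer": "swedish" }
      }
    }
  }
}
```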
On Tuesday, November 11, 2014 at 14:55:47 UTC+1, Linus Pettersson wrote:
Hello
I'm trying to use Swedish stemming
Hi,
In the Node Stats API, there are several metrics ending in _total and
_time_in_millis. Example:
- merges.total
- merges.total_time_in_millis
Is this calculated since the node was started, or is it over the last XX hours?
This isn't documented anywhere, as far as I can see.
Thanks,
Lasse
--
Hi,
I implemented a query parser plugin. While a query is parsed, a few costly
operations have to be performed. I decided to cache the sub-results of these
operations and reuse them when needed. This works fine.
The problem begins when the cache should be invalidated. I built a REST API
(BaseRestAction) for cache
Hi, I'm very new to ElasticSearch.
I'm trying to index a set of biological data. There are some fields, like
'gene_id' or 'gene_shortname', that should be processed as literal strings.
When I try to search for 'ZNF6092' in a field filled with 'linc-ZNF6092-6',
I can't find anything. When I
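A common fix for this kind of problem (a hedged sketch, assuming a multi-field mapping; the subfield name "raw" is arbitrary) is to keep an unanalyzed copy of the identifier so it is indexed as a single token:

```json
{
  "gene_shortname": {
    "type": "string",
    "fields": {
      "raw": { "type": "string", "index": "not_analyzed" }
    }
  }
}
```

A term query or filter against gene_shortname.raw would then match the literal value 'linc-ZNF6092-6' exactly.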
I have an object in the Elasticsearch index that has a nested object which
is a list of strings.
I would like to do an intersection against this list in both exact and
fuzzy ways.
So for example I have browser names with versions in the index like:
browsers: [{name:Chrome 38}, {name:Firefox
I am writing Java code to get the total term frequency (the number of times
a term occurred within a document).
Please help me write this.
Here is the document that I inserted into the elasticsearch index:
"resp_aaa_error": [
  {"edi_location": "Payer", "error_code": "45", "followup_action_code": "R"},
Hi ,
I would like to know what the plans are for presenting documents as a table
(like the one in Kibana 3) in Kibana 4.
One of the features I am badly looking forward to is the ability to download
documents as CSV from the table panel in Kibana 3 (not the current data
table in Kibana 4).
Thanks
On Wed, Nov 12, 2014 at 8:15 AM, Alessandro Bonfanti bnf@gmail.com
wrote:
Hi, I'm very new to ElasticSearch.
I'm trying to index a set of biological data. There are some fields, like
'gene_id' or 'gene_shortname', that should be processed as literal strings.
When I try to search for
I beg to differ: aggregations work with the root documents returned by the
query, so they do not work under a global context. :) I guess, under my
proposed vision, the issue would then be how to have aggregations on
documents returned with a nested filter but still maintain all the nested
documents. A
OK, so I figured this bit out, with a few provisos (there wasn't a
test that covered this case in the unit tests).
Build the search requests like so:
{
  search in index/type query
  {
    matchQuery(FIELD, field_value)
  } scroll (KeepAlive - duration using ES duration syntax, e.g. 1m -
This is probably a very noobish question. I just started playing with an
ELK stack I have set up on CentOS 7. All the core services seem to be
working, but I can't seem to get it to receive syslog messages. I have both
selinux and the firewall turned off (it's just a local lab right now).
https://gist.github.com/anonymous/b59ea5a6bbf308f8e562
This is the definition of the problem.
It seems that index: not_analyzed is broken when using copy_to fields.
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group
The terms are copied to the full name field and are not analyzed, as
specified. However, two terms are being copied, not one. The term query
expects a single token of "Jeremy Smith", while you have two separate
non-analyzed tokens.
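To illustrate (a sketch; the field name full_name is assumed from the thread): since the field holds the two copied tokens rather than one token "Jeremy Smith", one workaround is a bool query that requires both tokens:

```json
{
  "query": {
    "bool": {
      "must": [
        { "term": { "full_name": "Jeremy" } },
        { "term": { "full_name": "Smith" } }
      ]
    }
  }
}
```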
Cheers,
Ivan
On Nov 12, 2014 10:29 AM, Robert Alkire
On Wednesday, November 12, 2014 at 16:14 CET,
Andrew Stacey arsta...@gmail.com wrote:
This is probably a very noobish question. I just started playing
with an ELK stack I have set up on CentOS 7. All the core services
seem to be working, but I can't seem to get it to receive syslog
Crap... I'm sorry. Noobish indeed. I didn't even realize there was a
separate group. I'll post it over there. Thanks!
On Wednesday, November 12, 2014 9:14:34 AM UTC-6, Andrew Stacey wrote:
This is probably a very noobish question. I just started playing with an
ELK stack I have set up
On 12/11/2014 15:25, Nikolas Everett wrote:
On Wed, Nov 12, 2014 at 8:15 AM,
Alessandro Bonfanti bnf@gmail.com wrote:
Hi, I'm very new to ElasticSearch.
I'm trying to index a
On Wed, Nov 12, 2014 at 11:13 AM, Alessandro Bonfanti bnf@gmail.com
wrote:
On 12/11/2014 15:25, Nikolas Everett wrote:
On Wed, Nov 12, 2014 at 8:15 AM, Alessandro Bonfanti bnf@gmail.com
wrote:
Hi, I'm very new to ElasticSearch.
I'm trying to index a set of biological
Ignore the first link; use this instead:
https://gist.github.com/anonymous/7051eb01114d4cbf3160
I understand now that the composite is composed of two tokens, so my query
must be altered. If there is a better way to accomplish what I want to do,
please let me know.
On Wednesday, November 12, 2014
On 12/11/2014 17:20, Nikolas Everett wrote:
On Wed, Nov 12, 2014 at 11:13 AM,
Alessandro Bonfanti bnf@gmail.com wrote:
On 12/11/2014 15:25, Nikolas Everett wrote:
I am having an issue when trying to specify path.data with
multiple dirs, as
path.data: [/mnt/first, /mnt/second]
I am running multiple instances on the same machine and seeing that the
elasticsearch instance starts up with the data path as
[/var/lib/elasticsearch-(instance name, e.g. data),
I have an ElasticSearch index with 20+ million documents, which I'm using
to perform GeoBoundingBox filters combined with other queries.
I'd like to have, for each search, a random sample of my results.
After reading the documentation, I tried using the RandomScoreFunction,
which gives a random
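For reference, the random scoring described above can be sketched as a function_score request like this (the seed is arbitrary, and match_all stands in for the real filtered query):

```json
{
  "query": {
    "function_score": {
      "query": { "match_all": {} },
      "random_score": { "seed": 42 }
    }
  }
}
```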
Finally the issue was solved.
I forgot to mention that I had a Logstash output connected, and its protocol
(http://logstash.net/docs/1.4.2/outputs/elasticsearch#protocol) was set to
'node', meaning that Logstash was part of my cluster.
Once I set the protocol to 'transport', scrolling was
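For anyone hitting the same thing, the relevant Logstash output setting looks roughly like this (a sketch; the host value is only an example):

```ruby
output {
  elasticsearch {
    host => "localhost"
    protocol => "transport"
  }
}
```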
Thanks Jorg.
On Wednesday, November 12, 2014 12:23:06 AM UTC-8, Jörg Prante wrote:
There is no current method to redirect indexing to a preparer index for
delayed indexing while searching is still enabled.
With rivers, you can close the _river index; some rivers (not all) may
take
Hi all,
I am getting timeout exceptions from elasticsearch while deleting an index,
and when looking for logs in Kibana it takes a lot of time to load them.
Following are the log details and setup info.
API call for deleting an index:
curl -XDELETE 'http://localhost:9200/logstash-2014.08.21'
How much data is in the cluster? How big are your nodes?
You're running a very old version of Java; recommended is 1.7u55 or
greater, and Oracle rather than OpenJDK.
On 13 November 2014 06:49, shriyansh jain shriyanshaj...@gmail.com wrote:
Hi all,
I am getting timeout exceptions with elastic search
Hi Mark,
The data is around 150GB.
The master node has 16 GB RAM with ES_HEAP_SIZE = 8GB.
The other node has 16 GB RAM with ES_HEAP_SIZE = 8GB.
I will get the version updated with Oracle Java; will that solve the issue?
Thanks!
Shriyansh
It won't solve it, but it may help.
You have some pretty large GCs happening, some getting close to the 30s
default ping timeout.
You can try increasing discovery.zen.fd.ping_timeout, but you may want to
consider adding another node to generally help relieve heap pressure and
improve
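The setting mentioned above goes into elasticsearch.yml; a sketch (the 60s value is only an example, not a recommendation):

```yaml
discovery.zen.fd.ping_timeout: 60s
```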
Hi,
I'm thinking about building a custom ClusterAction. I see that I can build
custom classes for Request, NodeResponse and NodesResponse, but it is not
clear to me how I can register my custom action.
In the case of a REST action it was quite easy, because in the plugin I
simply use public void
I have stopped collecting all the data from Logstash (as I am using the ELK
stack), but I am still getting pretty large GCs (as you mentioned). Can
you please point out to me what might be the reason for that?
Thank you for the suggestion and for pointing out the error; I appreciate it
and will
I am also getting an Out of Memory exception now in the logs:
[2014-11-12 13:08:23,157][WARN ][netty.channel.DefaultChannelPipeline] An
exception was thrown by a user handler while handling an exception event
([id: 0x40f3788c, /10.18.144.231:52924 = /10.18.144.229:9300] EXCEPTION:
How can I create this query with the elastic4s Scala client?
I call it using Marvel/Sense:
GET /business/_search
{
  "query": {
    "function_score": {
      "query": {
        "match": {
          "name": "my text"
        }
      },
      "script_score": {
        "script": "_score + log(doc['reviews'].value + 1)",
You either need to add more nodes, reduce your dataset, increase the heap
or close some old indexes.
On 13 November 2014 08:11, shriyansh jain shriyanshaj...@gmail.com wrote:
I am also getting Out of Memory exception now in the logs
[2014-11-12 13:08:23,157][WARN
There is also an ActionModule:
public void onModule(ActionModule module) {
    module.registerAction(MyAction.INSTANCE, TransportMyAction.class);
}
It is always easier to follow existing plugins.
Cheers,
Ivan
On Wed, Nov 12, 2014 at 3:50 PM, Pawel pro...@gmail.com wrote:
Hi,
I'm thinking
Interesting in that they use Cassandra for discovery.
http://techblog.netflix.com/2014/11/introducing-raigad-elasticsearch-sidecar.html
Thank you Mark for pointing out the situation. As I am trying to delete the
index, I am getting timeouts for that. You suggested I increase
discovery.zen.fd.ping_timeout to get rid of that, but I am not able to see
any such variable in the elasticsearch.yml file.
Can you please point me
You can add that in yourself.
On 13 November 2014 08:26, shriyansh jain shriyanshaj...@gmail.com wrote:
Thank you Mark for pointing out the situation. As I am trying to delete the
index, I am getting timeouts for that. You suggested I increase
discovery.zen.fd.ping_timeout to
Thanks!
Will it need a restart for that setting to take effect?
Thanks!
Shriyansh
On Wednesday, November 12, 2014 1:30:57 PM UTC-8, Mark Walkom wrote:
You can add that in yourself.
On 13 November 2014 08:26, shriyansh jain shriyan...@gmail.com wrote:
Thank you mark for pointing me out
Hi All,
Thanks pulkitsinghal and Nicolas for your replies.
Actually I decided to go with pagination in elasticsearch, so on scroll I
would load another part of the results.
On every scroll request I have to make a request to the elasticsearch server
with the same query, just changing the from offset.
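The per-page request described above can be sketched as follows (the from/size values are arbitrary, and match_all stands in for the real query; each scroll event just increases from by the page size):

```json
{
  "from": 20,
  "size": 10,
  "query": { "match_all": {} }
}
```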
Hi all, is there any comment on this issue? Is this a bug in elasticsearch?
This is really a big problem for me, because we have to query on this big,
long field!
thank you!
Wang
From: Wang Yong [mailto:cnwangy...@gmail.com]
Sent: Monday, November 10, 2014 5:33 PM
To:
Hi,
Thank you very much :-)
Paweł
On Wed, Nov 12, 2014 at 10:23 PM, Ivan Brusic i...@brusic.com wrote:
There is also an ActionModule
public void onModule(ActionModule module) {
module.registerAction(MyAction.INSTANCE, TransportMyAction.class);
}
It is always easier to follow existing
Hi Adrian,
Thanks. We are already using the count search type; the filter will be an
actual filter. We want different filters on each aggregation, so it would
not be possible to do a filtered query.
Can we improve this using more replicas or more shards?
On Wednesday, 12 November 2014 04:16:54
Hello,
I am also running into this issue. I've specified more details in this
thread
https://groups.google.com/d/msg/elasticsearch/cXl9UrqgwBI/et56xPUqwGkJ. I
would greatly appreciate any response.
Thanks,
Sally
On Thursday, May 8, 2014 7:17:37 AM UTC-7, Dipesh Patel wrote:
Hi Igor
We
event/dt=20141112
event/dt=20141113
User retention is tracking whether a user produces an event (activity) today
and produces an event on another day. The SQL is like:
SELECT count(*)
FROM event-log-20141112 AS l
JOIN event-log-20141113 AS r
ON l.user_id = r.user_id
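The join above amounts to intersecting the user ids seen on the two days; a minimal sketch in Python with made-up data:

```python
# Hypothetical user ids for the two days (stand-ins for the event-log tables).
day1_users = {"u1", "u2", "u3"}
day2_users = {"u2", "u3", "u4"}

# For distinct user ids, JOIN ... ON l.user_id = r.user_id reduces to a set
# intersection; its size is the number of retained users.
retained = day1_users & day2_users
print(len(retained))  # prints 2
```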
According to the documentation