to verify the above procedure.
So in short: when replacing a node, the force-replace procedure doesn't
actually cause data to be synced to the new node. The Erlang shell
commands above do force a sync.
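For anyone following along later, this is roughly the shape of those commands; a hedged sketch run inside `riak attach`, where 'riak@new-node' is a placeholder for the replacement node's name (check the repair calls against the docs for your release):

```erlang
%% Run inside `riak attach` on any cluster member.
%% 'riak@new-node' is a placeholder for the replacement node's name.
{ok, Ring} = riak_core_ring_manager:get_my_ring().
Partitions = [P || {P, Owner} <- riak_core_ring:all_owners(Ring),
                   Owner =:= 'riak@new-node'].
%% Trigger a rebuild of each partition owned by the new node.
[riak_kv_vnode:repair(P) || P <- Partitions].
```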
Thanks for the support!
//Sean.
On Thu, Aug 9, 2018 at 11:25 PM sean mcevoy wrote:
> Hi Martin, [...]
Hi Martin,
Thanks for taking the time.
Yes, by "size of the bitcask directory" I mean I did a "du -h --max-depth=1
bitcask", so I think that would cover all the vnodes. We don't use any
other backends.
Those answers are helpful; I'll get back to this in a few days and see what
I can determine.
Hi All,
A few questions on the procedure here to recover a failed node:
http://docs.basho.com/riak/kv/2.2.3/using/repair-recovery/failed-node/
We lost a production riak server when AWS decided to delete a node, and we
plan on doing this procedure to replace it with a newly built node. A
practice [...]
[...]pt in our prod env to keep these awake and
see if our observed timeout rate reduces.
If this is actually our problem, are there any JVM config options we can
use to keep the index active all the time?
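I'm not aware of a flag that pins an index in memory, but the JVM options passed to the Solr process can be set in riak.conf via `search.solr.jvm_options`; a hedged config sketch (the values here are illustrative, not recommendations):

```
search.solr.jvm_options = -d64 -Xms2g -Xmx2g -XX:+UseCompressedOops
```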
//Sean.
On Fri, Jun 23, 2017 at 1:48 PM, sean mcevoy <sean.mce...@gmail.com> wrote:
Hi Mark,
I've observed timeouts too, but always on search operations; you might have
seen my thread "Solr search response time spikes".
I'm getting stats by polling this every minute:
http://docs.basho.com/riak/kv/2.2.3/developing/api/http/status/
The 99 & 100% response times are most interesting
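For reference, a sketch of that per-minute polling against Riak's HTTP /stats endpoint (the host, port, and jq filter are assumptions; the stat names are the standard GET FSM time percentiles, in microseconds):

```shell
# Poll /stats once a minute; pull the 99th/100th percentile
# GET FSM times. Adjust host/port for your node.
while true; do
  curl -s http://127.0.0.1:8098/stats |
    jq '{t99: .node_get_fsm_time_99, t100: .node_get_fsm_time_100}'
  sleep 60
done
```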
> [...] not a Solr expert, and I don't even
> play one on TV. But maybe the fact that you are not hitting node 5 is
> relevant for that reason?
>
> Can you do more analysis on your client, to make sure you are not favoring
> node 1?
>
> -Fred
>
> > On Jun 22, 2017, at 10:20 A
Hi List,
We have a standard riak cluster with 5 nodes and at the minute the traffic
levels are fairly low. Each of our application nodes has 25 client
connections, 5 to each riak node which get selected in a round robin.
Our application level requests involve multiple riak requests, so our [...]
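As a sketch of that round-robin selection (purely illustrative; the function and names here are hypothetical, not our actual pool code):

```erlang
%% Round-robin over a tuple of pre-opened riakc_pb_socket pids.
%% Conns is a tuple of 25 pids; N is a monotonically increasing
%% counter kept by the caller.
next_conn(Conns, N) ->
    {element((N rem tuple_size(Conns)) + 1, Conns), N + 1}.
```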
Hi David,
I vaguely remember the same problem from a previous setup I did, a while
ago now.
IIRC, the original configured IP gets written to disk on the initial start
and then the next start fails due to the mismatch.
Try deleting your data directory and restarting, so this will be like the
initial start. [...]
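If it is the persisted cluster state, the reset looks roughly like this; a hedged sketch assuming the default data path /var/lib/riak (back it up first, and note this wipes the node's ring membership):

```shell
riak stop
# The ring directory holds the node name/IP baked in at first start.
cp -r /var/lib/riak/ring /var/lib/riak/ring.bak
rm -rf /var/lib/riak/ring/*
riak start
```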
Cheers Luca, easy when you know how ;-)
PR has been made.
//Sean.
On Tue, Nov 15, 2016 at 9:31 AM, Luca Favatella <
luca.favate...@erlang-solutions.com> wrote:
> On 15 November 2016 at 09:17, sean mcevoy <sean.mce...@gmail.com> wrote:
> [...]
>
>> Hi Basho guys,
> -Fred
>
> [1] https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results
> [2] https://github.com/basho/yokozuna/commit/f64e19cef107d982082f5b95ed598da96fb419b0
>
>
> > On Sep 19, 2016, at 4:48 PM, sean mcevoy <sean.mce...@gmail.com> wrote:
Hi All,
We have an index with ~548,000 entries, ~14,000 of which match one of our
queries.
We read these in a paginated search and the first page (of 100 hits)
returns quickly in ~70ms.
This response time seems to increase exponentially as we walk through the
pages: the 4th page takes ~200ms, the [...]
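For context, the pagination we're doing uses riakc's start/rows search options, roughly as below (the pid, index, and query are assumed; deep pages get slower because Solr must collect and discard start+rows documents per shard before returning a page):

```erlang
%% Fetch 0-based page N of 100 hits for the given query.
fetch_page(Pid, Index, Query, N) ->
    riakc_pb_socket:search(Pid, Index, Query,
                           [{start, N * 100}, {rows, 100}]).
```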
> -Alexander
>
>
> [0] https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_schema.xml
> [1] http://stackoverflow.com/questions/10023133/solr-wildcard-query-with-whitespace
>
>
>
> On Wednesday, September 7, 2016, sean mcevoy <sean.mce...@gmail.com>
> wrote:
riakc_pb_socket:search(Pid, <<"test_index">>, <<"name_s:\"my test
name\" AND age_i:2">>, []).
{ok,{search_results,[{<<"test_index">>,
[{<<"score">>,<<"1.007369608719e+00">>},
On Tue, Sep 6, 2016 at 2:48 PM, Jason Voegele <jvoeg...@basho.com> wrote:
> Hi Sean,
>
> Have you tried escaping the space in your query?
>
> http://stackoverflow.com/questions/10023133/solr-wildcard-query-with-whitespace
>
>
> On Sep 5, 2016, at 6:24 PM, sean mcevoy
Hi List,
We have a solr index where we store something like:
<<"{\"key_s\":\"ID\",\"body_s\":\"some test string\"}">>}],
Then we try to do a riakc_pb_socket:search with the pattern:
<<"body_s:*test str*">>
The request will fail with an error message telling us to check the logs,
and in there we [...]
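For the record, the fix suggested in the replies is to escape the space so Solr parses a single wildcard term; a sketch, with the pid assumed (the doubled backslash is Erlang binary-literal escaping, so Solr sees body_s:*test\ str*):

```erlang
riakc_pb_socket:search(Pid, <<"test_index">>,
                       <<"body_s:*test\\ str*">>, []).
```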