Answering my own question, in case somebody has the same need some day:
it seems that a SET works like a collection of REGISTERS, and you can use
it as follows:
r = client.fulltext_search('ix_images', 'keywords_set:DLSR')
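To spell out the pattern for anyone landing here later: the query string is just the field name with a `_set` suffix. A minimal sketch; the helper function is hypothetical (not part of the Riak client), only the `field_set:value` pattern comes from the thread above:

```python
# Hypothetical helper: Riak 2.0 search indexes each element of a set
# under the field name plus a "_set" suffix, so the query string is
# just "<field>_set:<value>".
def set_query(field, value):
    """Build a Riak Search query string for a set field."""
    return "{0}_set:{1}".format(field, value)

print(set_query("keywords", "DLSR"))  # -> keywords_set:DLSR
```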
Thanks!
Alex
On Thu, Aug 21, 2014 at 8:32 PM, Alex De la rosa
Hello, we upgraded to 1.4.10 on all nodes. Please see below:
riak-admin transfers
'riak@newnode5' waiting to handoff 11 partitions
'riak@newnode4' waiting to handoff 11 partitions
'riak@newnode3' waiting to handoff 11 partitions
'riak@newnode2' waiting to handoff 11 partitions
I am looking for some clues as to why there might be duplicate keys in a Riak
Secondary Index. I am using version 1.4.0.
Thanks,
Chaim
___
riak-users mailing list
riak-users@lists.basho.com
Hi Marcel -
Could you run the following commands to see if it restarts handoff?
riak-admin transfer-limit 0
riak-admin transfer-limit 8
--
Luke Bakken
CSE
lbak...@basho.com
On Fri, Aug 22, 2014 at 12:29 AM, Marcel Koopman
marcel.koop...@gmail.com wrote:
Hello, we upgraded to 1.4.10 on all
Hi Luc,
Thanks for the URL, it is indeed really nice and covers all the use cases.
Thank you!
Regards,
Istvan
On Thu, Aug 21, 2014 at 10:43 AM, Luc Perkins lperk...@basho.com wrote:
István,
For future reference, the Riak 2.0 docs have client-library-specific code
samples. The *Querying*
Might be siblings?
Thanks,
Alex
On Thu, Aug 21, 2014 at 10:29 PM, Chaim Peck chaimp...@gmail.com wrote:
I am looking for some clues as to why there might be duplicate keys in a
Riak Secondary Index. I am using version 1.4.0.
Thanks,
Chaim
Have you changed the n_val property of the bucket in question? Lowering
the n_val can result in duplicate results.
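Until the underlying cause is sorted out, duplicates coming back from a coverage query can also be filtered on the client side. A minimal sketch (the function name is mine, not part of the client):

```python
def dedupe_index_results(results):
    """Drop duplicate keys from a 2i result stream, preserving order."""
    seen = set()
    unique = []
    for key in results:
        if key not in seen:
            seen.add(key)
            unique.append(key)
    return unique

print(dedupe_index_results(["img1", "img2", "img1", "img3"]))
# -> ['img1', 'img2', 'img3']
```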
Kelly
On 08/21/2014 02:29 PM, Chaim Peck wrote:
I am looking for some clues as to why there might be duplicate keys in a Riak
Secondary Index. I am using version 1.4.0.
Thanks,
Could you please explain how that might be?
Just to give some more information… At this point, I am trying to simply purge
the bucket and start fresh
I am using the python client, basically like this:
for keys in streaming_bucket.stream_index('$bucket', bucket_name):
    for key in keys:
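Filling in the loop body the message was heading toward, here is a runnable sketch. The delete-per-key body is my assumption about the intent; `stream_index`/`delete` are modeled on the Riak Python client, and `FakeBucket` is an in-memory stub so the sketch runs without a Riak node:

```python
def purge_bucket(bucket, bucket_name):
    """Delete every key the $bucket index returns, batch by batch."""
    deleted = 0
    for keys in bucket.stream_index('$bucket', bucket_name):
        for key in keys:
            bucket.delete(key)
            deleted += 1
    return deleted

class FakeBucket(object):
    """Tiny stand-in for a real bucket connection (test stub only)."""
    def __init__(self, keys):
        self.keys = set(keys)
    def stream_index(self, index, match):
        yield list(self.keys)  # one batch, copied so deletes are safe
    def delete(self, key):
        self.keys.discard(key)

b = FakeBucket(["k1", "k2", "k3"])
print(purge_bucket(b, "images"))  # -> 3
print(sorted(b.keys))             # -> []
```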
Hi David Raina,
I took a look over your configuration files — depending on the level of
disk use you're at, GC may in fact be working correctly. The areas you can
look to in order to tune the speed at which disk space is reclaimed are the
garbage collector's leeway_seconds, and Riak's Bitcask
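For reference, both of the knobs mentioned live in app.config. The fragment below is a sketch with illustrative values, not recommendations; check the defaults and key names for your installed versions:

```erlang
%% riak-cs app.config: how long deleted blocks stay protected
%% before GC may reclaim them (default 86400 = 24h; value illustrative)
{riak_cs, [
    {leeway_seconds, 3600}
]}.

%% riak app.config: make Bitcask merge (and reclaim disk) sooner
%% (trigger values illustrative)
{bitcask, [
    {frag_merge_trigger, 40},              %% merge at 40% fragmentation
    {dead_bytes_merge_trigger, 134217728}  %% or 128 MB of dead bytes
]}.
```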
Hi all, if I nuke /var/lib/riak/* - is there anywhere else on the system that
contains ring data?
Hi Sebastian,
Is the node still running? If so, not only is the ring cached in memory,
Riak will rewrite it to disk again if it's changed. If you attach to the
Riak node in question and run the following from the Erlang shell provided
(including the trailing period!), you should find the ring file written