Hi Istvan,

A couple of questions:

1. Do the buckets (or bucket types) the data is written to still have the 
search index (the search_index property) that is used to index the data?
2. Did the indices get re-created on the new nodes you added?  You can verify 
this by looking for /var/lib/riak/yz/<index-name> (or the equivalent under 
your platform data directory), where <index-name> is the name of the index.
3. Do you have AAE enabled?  (See the shell sketch after this list for a 
quick way to check #2 and #3.)
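
A quick way to check #2 and #3 from a shell (a sketch only; the /var/lib/riak 
and /etc/riak paths assume a stock Linux package install, so adjust for your 
platform):

shell$ ls /var/lib/riak/yz/            # should list one directory per index
shell$ grep anti_entropy /etc/riak/riak.conf
anti_entropy = active                  # AAE is on when this is set to active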

If the indices have been removed on all of the nodes, you will need to 
recreate them, following the index creation steps described in [1].
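
For example, over HTTP (a sketch only; the localhost:8098 address and the 
_yz_default schema are assumptions, so substitute your own host and the 
custom schema if the index had one):

shell$ curl -XPUT "http://localhost:8098/search/index/<index-name>" \
           -H 'Content-Type: application/json' \
           -d '{"schema":"_yz_default"}'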

If you have AAE enabled, data should get reindexed when the trees expire.  You 
can force them to expire via riak attach:

shell$ bin/riak attach
Remote Shell: Use "Ctrl-C a" to quit. q() or init:stop() will terminate the 
riak node.
Erlang R16B02_basho9 (erts-5.10.3) [source] [64-bit] [smp:8:8] 
[async-threads:10] [kernel-poll:false] [dtrace]

Eshell V5.10.3  (abort with ^G)
(dev1@127.0.0.1)1> yz_entropy_mgr:expire_trees().  
ok

It may take some time for the trees to rebuild, but eventually you should see 
entries in the log such as:

2015-12-23 21:18:04.536 [info] <0.4457.0>@yz_exchange_fsm:key_exchange:176 Will 
repair 342 keys of partition 0 for preflist 
{1278813932664540053428224228626747642198940975104,3}
2015-12-23 21:18:19.541 [info] <0.4533.0>@yz_exchange_fsm:key_exchange:176 Will 
repair 368 keys of partition 0 for preflist 
{1370157784997721485815954530671515330927436759040,3}
2015-12-23 21:18:34.560 [info] <0.4609.0>@yz_exchange_fsm:key_exchange:176 Will 
repair 347 keys of partition 0 for preflist {0,3}
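
You can also track progress from a shell with the search AAE status command 
(assuming Riak 2.x; the "Keys Repaired" figures should climb as the exchanges 
complete):

shell$ riak-admin search aae-status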

If you are not happy with that pace, you can adjust the hash tree build limits 
as described in the "Hash Trees" section of [2].  Those settings are for 
Riak/KV AAE, but YZ will inherit them automatically on restart.  If you only 
want to adjust these settings for YZ AAE, you can use the settings listed in 
[3], but you'll need to set those in your advanced.config, as I do not believe 
we have a Cuttlefish schema for these settings.

Some examples:

# riak.conf
anti_entropy.tree.build_limit.per_timespan = 1m
anti_entropy.tree.build_limit.number = 5

%% advanced.config
...
{yokozuna, [
    {anti_entropy_concurrency, 5},
    {anti_entropy_build_limit, {5, 60000}}
]}
...

Hope that helps.

-Fred

[1] http://docs.basho.com/riak/latest/dev/using/search/
[2] http://docs.basho.com/riak/latest/ops/advanced/aae/#Configuring-AAE
[3] https://github.com/basho/yokozuna/blob/2.1.1/include/yokozuna.hrl#L192



> On Dec 23, 2015, at 5:10 PM, István <lecc...@gmail.com> wrote:
> 
> Hi Jason,
> 
> Actually, I was moving nodes one by one; I guess I was missing the riak-admin 
> replace command. Is there an easy way of restoring the Solr indexes on disk 
> on a node? The data is fine after the recovery, but the index data got 
> deleted from the yz folder. Any advice on how to restore it while the 
> cluster is running and the data is there?
> 
> If the fastest way to recover is to reload the entire dataset, that is fine 
> too.
> 
> Thank you in advance,
> Istvan
> 
> On Wed, Dec 23, 2015 at 7:51 PM, Jason Voegele <jvoeg...@basho.com> wrote:
>> On Dec 23, 2015, at 12:54 PM, István <lecc...@gmail.com> wrote:
>> 
>> Hi,
>> 
>> I had to move the nodes of a Riak cluster to new ones. Everything is fine 
>> with the data, we have been following the recovery procedures here:
>> 
>> http://docs.basho.com/riak/latest/ops/running/backups/#Restoring-a-Node
>> 
>> After moving all of the nodes, I found that all of the Solr indexes 
>> are gone.
> 
> Hi Istvan,
> 
> It looks like you are restoring an entire cluster, not just a single node 
> within a cluster. If so, the relevant recovery procedures are documented on 
> this page:
> 
> http://docs.basho.com/riak/latest/ops/running/recovery/failure-recovery/#Cluster-Recovery-From-Backups
> 
> Can you try following the full cluster recovery procedure and see if that 
> solves the problem?
> 
> -- 
> Jason Voegele
> Manly's Maxim:
>       Logic is a systematic method of coming to the wrong conclusion
>       with confidence.
> 
> -- 
> the sun shines for all

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
