Hi Mark,
No, I don't use Riak Search.

Godefroy
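
For reference, whether Search is enabled is a per-node setting in the
riak_search section of app.config. A quick way to check, assuming the default
paths from the 1.2.x .deb package:

    # show the riak_search stanza (path assumes the Debian package layout)
    grep -A 2 riak_search /etc/riak/app.config

    # with Search off it should read something like:
    #   {riak_search, [
    #                  {enabled, false}
    #                 ]},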

2013/3/20 Mark Phillips <[email protected]>

> Hey Godefroy,
>
> Do you have Riak Search enabled?
>
> Mark
>
> On Wed, Mar 20, 2013 at 11:41 AM, Godefroy de Compreignac
> <[email protected]> wrote:
> > Any news? I still have the same problem, same data distribution...
> >
> >
> > 2013/3/13 Godefroy de Compreignac <[email protected]>
> >>
> >> I'm running Riak 1.2.1
> >> I installed it with riak_1.2.1-1_amd64.deb
> >>
> >> Godefroy
> >>
> >>
> >> 2013/3/13 Tom Santero <[email protected]>
> >>>
> >>> Godefroy,
> >>>
> >>> Which version of Riak are you running?
> >>>
> >>> Tom
> >>>
> >>>
> >>> On Wed, Mar 13, 2013 at 5:51 AM, Godefroy de Compreignac
> >>> <[email protected]> wrote:
> >>>>
> >>>> Hi Mark,
> >>>>
> >>>> Thanks for your email.
> >>>> The rebalancing doesn't seem to be working very well...
> >>>> I still have approximately the same distribution as last week:
> >>>>
> >>>> # riak-admin member-status
> >>>> Attempting to restart script through sudo -H -u riak
> >>>> ================================= Membership ==================================
> >>>> Status     Ring    Pending    Node
> >>>>
> >>>>
> >>>> -------------------------------------------------------------------------------
> >>>> valid      18.0%     25.0%    '[email protected]'
> >>>> valid       0.0%      0.0%    '[email protected]'
> >>>> valid      18.8%     25.0%    '[email protected]'
> >>>> valid      29.7%     25.0%    '[email protected]'
> >>>> valid      33.6%     25.0%    '[email protected]'
> >>>>
> >>>>
> >>>> -------------------------------------------------------------------------------
> >>>> Valid:5 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
> >>>>
> >>>> (the second node is waiting for this rebalancing to finish before it
> >>>> joins the cluster and starts a second one)
> >>>> My servers are in the public network of OVH. No VLAN, but good
> >>>> iptables rules. 1 Gbps network interface.
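
Since the nodes talk over the public network with iptables rather than a VLAN,
one thing worth double-checking is that the handoff port is open between every
pair of nodes; in 1.2 it is the handoff_port entry in the riak_core section of
app.config (8099 by default). A rough check, with a placeholder hostname:

    # see which port this node uses for handoff
    grep handoff_port /etc/riak/app.config

    # confirm it is reachable from the other nodes (hostname is a placeholder)
    nc -zv node2.example.com 8099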
> >>>>
> >>>> If it can be useful:
> >>>>
> >>>> # riak-admin ring-status
> >>>> Attempting to restart script through sudo -H -u riak
> >>>> ================================== Claimant ===================================
> >>>> Claimant:  '[email protected]'
> >>>> Status:     up
> >>>> Ring Ready: true
> >>>>
> >>>> ============================== Ownership Handoff ==============================
> >>>> Owner:      [email protected]
> >>>> Next Owner: [email protected]
> >>>>
> >>>> Index: 662242929415565384811044689824565743281594433536
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 799258707915337533392640142891717276374338109440
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 844930634081928249586505293914101120738586001408
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 890602560248518965780370444936484965102833893376
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>>
> >>>>
> >>>> -------------------------------------------------------------------------------
> >>>> Owner:      [email protected]
> >>>> Next Owner: [email protected]
> >>>>
> >>>> Index: 1004782375664995756265033322492444576013453623296
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1050454301831586472458898473514828420377701515264
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1096126227998177188652763624537212264741949407232
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1141798154164767904846628775559596109106197299200
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1187470080331358621040493926581979953470445191168
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1233142006497949337234359077604363797834693083136
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1278813932664540053428224228626747642198940975104
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1324485858831130769622089379649131486563188867072
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1415829711164312202009819681693899175291684651008
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>>
> >>>>
> >>>> -------------------------------------------------------------------------------
> >>>> Owner:      [email protected]
> >>>> Next Owner: [email protected]
> >>>>
> >>>> Index: 924856504873462002925769308203272848376019812352
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 970528431040052719119634459225656692740267704320
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1016200357206643435313499610248040537104515596288
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1061872283373234151507364761270424381468763488256
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1107544209539824867701229912292808225833011380224
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1153216135706415583895095063315192070197259272192
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1198888061873006300088960214337575914561507164160
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1244559988039597016282825365359959758925755056128
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1290231914206187732476690516382343603290002948096
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1335903840372778448670555667404727447654250840064
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1381575766539369164864420818427111292018498732032
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1427247692705959881058285969449495136382746624000
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>>
> >>>>
> >>>> -------------------------------------------------------------------------------
> >>>> Owner:      [email protected]
> >>>> Next Owner: [email protected]
> >>>>
> >>>> Index: 936274486415109681974235595958868809467081785344
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 981946412581700398168100746981252653831329677312
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1027618338748291114361965898003636498195577569280
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1073290264914881830555831049026020342559825461248
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1118962191081472546749696200048404186924073353216
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1210306043414653979137426502093171875652569137152
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1255977969581244695331291653115555720016817029120
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1301649895747835411525156804137939564381064921088
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1347321821914426127719021955160323408745312813056
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1392993748081016843912887106182707253109560705024
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>> Index: 1438665674247607560106752257205091097473808596992
> >>>>   Waiting on: [riak_kv_vnode]
> >>>>   Complete:   [riak_pipe_vnode]
> >>>>
> >>>>
> >>>>
> >>>> -------------------------------------------------------------------------------
> >>>>
> >>>> ============================== Unreachable Nodes ==============================
> >>>> All nodes are up and reachable
> >>>>
> >>>>
> >>>> Godefroy
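
With every pending partition above stuck at "Waiting on: [riak_kv_vnode]", the
handoff looks stalled rather than merely slow. A rough way to confirm and to
nudge it along on a 1.2.x node (a sketch only: commands assume the stock
install, and set_concurrency was the usual knob on this release line, with a
default of 2):

    # list the transfers currently in flight, per node
    riak-admin transfers

    # attach to the running node's console (detach with Ctrl-D, not Ctrl-C)
    riak attach

    # at the Erlang prompt, allow more simultaneous handoffs on this node
    # (4 is just an example value)
    riak_core_handoff_manager:set_concurrency(4).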
> >>>>
> >>>>
> >>>> 2013/3/13 Mark Phillips <[email protected]>
> >>>>>
> >>>>> Hi Godefroy,
> >>>>>
> >>>>> Good to hear you managed to get the node running again. How is the
> >>>>> rebalancing going? Also, what's the network setup for the cluster?
> >>>>>
> >>>>> Mark
> >>>>>
> >>>>> On Fri, Mar 8, 2013 at 4:24 AM, Godefroy de Compreignac
> >>>>> <[email protected]> wrote:
> >>>>> > Hi Mark,
> >>>>> >
> >>>>> > Thanks for your answer.
> >>>>> > To get the node running again, I moved a 39 GB bitcask dir to
> >>>>> > another disk and made a symlink (ln -s). Rebalancing seems to be
> >>>>> > running, but the inequalities in data distribution remain huge and
> >>>>> > the node I added yesterday still has 0% of cluster data.
> >>>>> >
> >>>>> > Godefroy
> >>>>> >
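
The move-and-symlink workaround amounts to roughly the following; the mount
point is a placeholder, the data dir assumes the default .deb layout, and the
partition index is just one taken from the ring output above:

    riak stop
    # relocate one full partition's bitcask directory to a disk with free space
    mv /var/lib/riak/bitcask/1050454301831586472458898473514828420377701515264 /mnt/spare/
    # leave a symlink behind so Riak still finds it at the old path
    ln -s /mnt/spare/1050454301831586472458898473514828420377701515264 \
          /var/lib/riak/bitcask/1050454301831586472458898473514828420377701515264
    riak start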
> >>>>> >
> >>>>> > 2013/3/7 Mark Phillips <[email protected]>
> >>>>> >>
> >>>>> >> Salut Godefroy
> >>>>> >>
> >>>>> >> On Thu, Mar 7, 2013 at 6:50 AM, Godefroy de Compreignac
> >>>>> >> <[email protected]> wrote:
> >>>>> >> > Hello,
> >>>>> >> >
> >>>>> >> > I'm running a cluster of 4 nodes (1.8 TB on each) and I have a
> >>>>> >> > balancing problem. The current data distribution is 18%, 19%,
> >>>>> >> > 30%, 34%. The node with 34% of the cluster data is completely
> >>>>> >> > full and doesn't want to start anymore. I don't know what to do.
> >>>>> >> > Do you have a solution for such a problem?
> >>>>> >> >
> >>>>> >>
> >>>>> >>
> >>>>> >> It looks like you need to increase storage capacity for the entire
> >>>>> >> cluster so you can move some data off of the full node. Do you have
> >>>>> >> the ability to add another machine (or two) to the cluster?
> >>>>> >>
> >>>>> >> The issue, of course, is that you'll need to get that Riak node
> >>>>> >> running before it can hand off a subset of its data to a new member.
> >>>>> >> I assume the disk being full is the primary reason it's failing to
> >>>>> >> start?
> >>>>> >>
> >>>>> >> Mark
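
If another machine can be added, the staged clustering workflow that shipped
with 1.2 looks roughly like this, run on the new node once Riak is installed
and started there (the join target is any existing member):

    # stage a join against an existing cluster member
    riak-admin cluster join [email protected]

    # review the proposed ownership transfers, then apply them
    riak-admin cluster plan
    riak-admin cluster commit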
> >>>>> >>
> >>>>> >> > Thank you in advance!
> >>>>> >> >
> >>>>> >> > Godefroy
> >>>>> >> >
> >>>>> >
> >>>>> >
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>
> >
>
_______________________________________________
riak-users mailing list
[email protected]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
