Search for (example commands below):
 - "exited abnormally", which should catch receive-side failures.
 - "transfer.*failed" (with egrep), which should catch send-side failures.

On Wed, Mar 20, 2013 at 3:53 PM, Godefroy de Compreignac
<[email protected]> wrote:
> I have a lot of errors like these:
>
> 2013-03-20 23:05:00.004 [info]
> <0.12359.176>@riak_core_handoff_manager:handle_info:271 An outbound handoff
> of partition riak_kv_vnode 890602560248518965780370444936484965102833893376
> was terminated for reason: {shutdown,max_concurrency}
> 2013-03-20 23:05:09.895 [info]
> <0.12359.176>@riak_core_handoff_manager:handle_info:271 An outbound handoff
> of partition riak_kv_vnode 662242929415565384811044689824565743281594433536
> was terminated for reason: {shutdown,max_concurrency}
> 2013-03-20 23:05:11.789 [info]
> <0.12359.176>@riak_core_handoff_manager:handle_info:271 An outbound handoff
> of partition riak_kv_vnode 114179815416476790484662877555959610910619729920
> was terminated for reason: {shutdown,max_concurrency}
> 2013-03-20 23:05:20.045 [info]
> <0.12359.176>@riak_core_handoff_manager:handle_info:271 An outbound handoff
> of partition riak_kv_vnode 662242929415565384811044689824565743281594433536
> was terminated for reason: {shutdown,max_concurrency}
> 2013-03-20 23:05:22.434 [info]
> <0.12359.176>@riak_core_handoff_manager:handle_info:271 An outbound handoff
> of partition riak_kv_vnode 353957427791078050502454920423474793822921162752
> was terminated for reason: {shutdown,max_concurrency}
> 2013-03-20 23:05:30.121 [info]
> <0.12359.176>@riak_core_handoff_manager:handle_info:271 An outbound handoff
> of partition riak_kv_vnode 799258707915337533392640142891717276374338109440
> was terminated for reason: {shutdown,max_concurrency}
> 2013-03-20 23:05:40.352 [info]
> <0.12359.176>@riak_core_handoff_manager:handle_info:271 An outbound handoff
> of partition riak_kv_vnode 799258707915337533392640142891717276374338109440
> was terminated for reason: {shutdown,max_concurrency}
> 2013-03-20 23:05:50.098 [info]
> <0.12359.176>@riak_core_handoff_manager:handle_info:271 An outbound handoff
> of partition riak_kv_vnode 799258707915337533392640142891717276374338109440
> was terminated for reason: {shutdown,max_concurrency}
> 2013-03-20 23:05:52.589 [info]
> <0.31129.1646>@riak_core_handoff_receiver:process_message:99 Receiving
> handoff data for partition
> riak_kv_vnode:650824947873917705762578402068969782190532460544
> 2013-03-20 23:05:55.624 [info]
> <0.31129.1646>@riak_core_handoff_receiver:handle_info:69 Handoff receiver
> for partition 650824947873917705762578402068969782190532460544 exited after
> processing 142 objects
> 2013-03-20 23:06:00.052 [info]
> <0.31802.1646>@riak_core_handoff_sender:start_fold:126 Starting
> ownership_handoff transfer of riak_kv_vnode from '[email protected]'
> 662242929415565384811044689824565743281594433536 to '[email protected]'
> 662242929415565384811044689824565743281594433536
> 2013-03-20 23:15:59.425 [info]
> <0.9893.1629>@riak_core_handoff_receiver:handle_info:69 Handoff receiver for
> partition 970528431040052719119634459225656692740267704320 exited after
> processing 138503 objects
> 2013-03-20 23:16:01.827 [info]
> <0.29612.1647>@riak_core_handoff_sender:start_fold:126 Starting
> ownership_handoff transfer of riak_kv_vnode from '[email protected]'
> 890602560248518965780370444936484965102833893376 to '[email protected]'
> 890602560248518965780370444936484965102833893376
>
>
>
>
> 2013/3/20 Evan Vigil-McClanahan <[email protected]>
>>
>> Godefroy,
>>
>> It does look like some things are in progress, but it's possible that
>> there are failures that are keeping your partitions from handing off.
>> If you grep through your console.log files for 'handoff', do you see
>> any abnormal exits or other failures?
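>>
>> Something along these lines should surface them (log path assumed to be
>> the package default):
>>
>>   grep handoff /var/log/riak/console.log | grep -i -e error -e fail -e exit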
>>
>> On Wed, Mar 20, 2013 at 3:22 PM, Godefroy de Compreignac
>> <[email protected]> wrote:
>> > Here is what you asked for:
>> >
>> >
>> > $ sudo riak-admin transfers
>> > Attempting to restart script through sudo -H -u riak
>> > '[email protected]' waiting to handoff 25 partitions
>> > '[email protected]' waiting to handoff 27 partitions
>> > '[email protected]' waiting to handoff 36 partitions
>> > '[email protected]' waiting to handoff 36 partitions
>> > '[email protected]' waiting to handoff 63 partitions
>> >
>> > Active Transfers:
>> >
>> > transfer type: hinted_handoff
>> > vnode type: riak_kv_vnode
>> > partition: 627988984790622347665645826557777860008408514560
>> > started: 2013-03-20 22:15:53 [273.71 s ago]
>> > last update: 2013-03-20 22:20:26 [1.22 s ago]
>> > objects transferred: 3545
>> >
>> >                         13 Objs/s
>> > [email protected] =======================>  [email protected]
>> >                         3.53 MB/s
>> >
>> > transfer type: hinted_handoff
>> > vnode type: riak_kv_vnode
>> > partition: 125597796958124469533129165311555572001681702912
>> > started: 2013-03-20 22:19:04 [83.31 s ago]
>> > last update: 2013-03-20 22:20:26 [1.02 s ago]
>> > objects transferred: 1050
>> >
>> >                         13 Objs/s
>> > [email protected] =======================>  [email protected]
>> >                         3.56 MB/s
>> >
>> > transfer type: ownership_handoff
>> > vnode type: riak_kv_vnode
>> > partition: 662242929415565384811044689824565743281594433536
>> > started: 2013-03-20 22:06:00 [867.49 s ago]
>> > last update: 2013-03-20 22:09:26 [661.11 s ago]
>> > objects transferred: 2491
>> >
>> >                         12 Objs/s
>> > [email protected] =======================>  [email protected]
>> >                         4.65 MB/s
>> >
>> > transfer type: ownership_handoff
>> > vnode type: riak_kv_vnode
>> > partition: 890602560248518965780370444936484965102833893376
>> > started: 2013-03-20 22:16:01 [266.00 s ago]
>> > last update: 2013-03-20 22:18:11 [135.83 s ago]
>> > objects transferred: 1480
>> >
>> >                         11 Objs/s
>> > [email protected] =======================>  [email protected]
>> >                         5.79 MB/s
>> >
>> > transfer type: ownership_handoff
>> > vnode type: riak_kv_vnode
>> > partition: 924856504873462002925769308203272848376019812352
>> > started: 2013-03-16 12:50:53 [6.33 hr ago]
>> > last update: no updates seen
>> > objects transferred: unknown
>> >
>> >                          unknown
>> >  [email protected] =======================> [email protected]
>> >                          unknown
>> >
>> > transfer type: hinted_handoff
>> > vnode type: riak_kv_vnode
>> > partition: 993364394123348077216567034736848614922391650304
>> > started: 2013-03-20 22:16:43 [223.69 s ago]
>> > last update: 2013-03-20 22:20:26 [925.68 ms ago]
>> > objects transferred: 1657
>> >
>> >                          7 Objs/s
>> >  [email protected] =======================>  [email protected]
>> >                         2.21 MB/s
>> >
>> > transfer type: hinted_handoff
>> > vnode type: riak_kv_vnode
>> > partition: 262613575457896618114724618378707105094425378816
>> > started: 2013-03-20 22:18:23 [124.21 s ago]
>> > last update: 2013-03-20 22:20:26 [1.49 s ago]
>> > objects transferred: 1544
>> >
>> >                         13 Objs/s
>> >  [email protected] =======================>  [email protected]
>> >                         3.28 MB/s
>> >
>> >
>> > $ sudo riak-admin transfer-limit
>> > Attempting to restart script through sudo -H -u riak
>> > =============================== Transfer Limit ================================
>> > Limit        Node
>> >
>> > -------------------------------------------------------------------------------
>> >     4        '[email protected]'
>> >     4        '[email protected]'
>> >     4        '[email protected]'
>> >     4        '[email protected]'
>> >     4        '[email protected]'
>> >
>> > -------------------------------------------------------------------------------
>> > Note: You can change transfer limits with 'riak-admin transfer_limit <limit>'
>> >       and 'riak-admin transfer_limit <node> <limit>'
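>> >
>> > (For instance, 'riak-admin transfer_limit 8' would raise the cluster-wide
>> > limit to 8, and 'riak-admin transfer_limit [email protected] 8'
>> > would raise it on one node only; 8 is just an illustrative value here.)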
>> >
>> >
>> > Thanks for your help!
>> >
>> >
>> >
>> > --
>> > Godefroy de Compreignac
>> >
>> > Eklaweb CEO - www.eklaweb.com
>> > EklaBlog CEO - www.eklablog.com
>> >
>> > +33(0)6 11 89 13 84
>> > http://www.linkedin.com/in/godefroy
>> > http://twitter.com/Godefroy
>> >
>> >
>> > 2013/3/20 Mark Phillips <[email protected]>
>> >>
>> >> Hmm. Ok. We'll get to the bottom of this.
>> >>
>> >> Can you share the output of "riak-admin transfers" and "riak-admin
>> >> transfer-limit"?
>> >>
>> >>
>> >>
>> >> http://docs.basho.com/riak/latest/references/Command-Line-Tools---riak-admin/#transfers
>> >>
>> >> Mark
>> >>
>> >> On Wed, Mar 20, 2013 at 12:32 PM, Godefroy de Compreignac
>> >> <[email protected]> wrote:
>> >> > Hi Mark,
>> >> > No, I don't use Riak Search.
>> >> >
>> >> > Godefroy
>> >> >
>> >> > 2013/3/20 Mark Phillips <[email protected]>
>> >> >>
>> >> >> Hey Godefroy,
>> >> >>
>> >> >> Do you have Riak Search enabled?
>> >> >>
>> >> >> Mark
>> >> >>
>> >> >> On Wed, Mar 20, 2013 at 11:41 AM, Godefroy de Compreignac
>> >> >> <[email protected]> wrote:
>> >> >> > Any news? I still have the same problem, same data distribution...
>> >> >> >
>> >> >> >
>> >> >> > 2013/3/13 Godefroy de Compreignac <[email protected]>
>> >> >> >>
>> >> >> >> I'm running Riak 1.2.1
>> >> >> >> I installed it with riak_1.2.1-1_amd64.deb
>> >> >> >>
>> >> >> >> Godefroy
>> >> >> >>
>> >> >> >>
>> >> >> >> 2013/3/13 Tom Santero <[email protected]>
>> >> >> >>>
>> >> >> >>> Godefroy,
>> >> >> >>>
>> >> >> >>> Which version of Riak are you running?
>> >> >> >>>
>> >> >> >>> Tom
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> On Wed, Mar 13, 2013 at 5:51 AM, Godefroy de Compreignac
>> >> >> >>> <[email protected]> wrote:
>> >> >> >>>>
>> >> >> >>>> Hi Mark,
>> >> >> >>>>
>> >> >> >>>> Thanks for your email.
>> >> >> >>>> The rebalancing doesn't seem to be working very well...
>> >> >> >>>> I still have approximately the same distribution as last week:
>> >> >> >>>>
>> >> >> >>>> # riak-admin member-status
>> >> >> >>>> Attempting to restart script through sudo -H -u riak
>> >> >> >>>> ================================= Membership ==================================
>> >> >> >>>> Status     Ring    Pending    Node
>> >> >> >>>> -------------------------------------------------------------------------------
>> >> >> >>>> valid      18.0%     25.0%    '[email protected]'
>> >> >> >>>> valid       0.0%      0.0%    '[email protected]'
>> >> >> >>>> valid      18.8%     25.0%    '[email protected]'
>> >> >> >>>> valid      29.7%     25.0%    '[email protected]'
>> >> >> >>>> valid      33.6%     25.0%    '[email protected]'
>> >> >> >>>>
>> >> >> >>>> -------------------------------------------------------------------------------
>> >> >> >>>> Valid:5 / Leaving:0 / Exiting:0 / Joining:0 / Down:0
>> >> >> >>>>
>> >> >> >>>> (The second node is waiting for the rebalancing to finish before
>> >> >> >>>> joining the cluster and starting a second rebalance.)
>> >> >> >>>> My servers are on OVH's public network. No VLAN, but good
>> >> >> >>>> iptables rules. 1 Gbps network interfaces.
>> >> >> >>>>
>> >> >> >>>> If it can be useful:
>> >> >> >>>>
>> >> >> >>>> # riak-admin ring-status
>> >> >> >>>> Attempting to restart script through sudo -H -u riak
>> >> >> >>>> ================================== Claimant ===================================
>> >> >> >>>> Claimant:  '[email protected]'
>> >> >> >>>> Status:     up
>> >> >> >>>> Ring Ready: true
>> >> >> >>>>
>> >> >> >>>> ============================== Ownership Handoff ==============================
>> >> >> >>>> Owner:      [email protected]
>> >> >> >>>> Next Owner: [email protected]
>> >> >> >>>>
>> >> >> >>>> Index: 662242929415565384811044689824565743281594433536
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 799258707915337533392640142891717276374338109440
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 844930634081928249586505293914101120738586001408
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 890602560248518965780370444936484965102833893376
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> -------------------------------------------------------------------------------
>> >> >> >>>> Owner:      [email protected]
>> >> >> >>>> Next Owner: [email protected]
>> >> >> >>>>
>> >> >> >>>> Index: 1004782375664995756265033322492444576013453623296
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1050454301831586472458898473514828420377701515264
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1096126227998177188652763624537212264741949407232
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1141798154164767904846628775559596109106197299200
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1187470080331358621040493926581979953470445191168
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1233142006497949337234359077604363797834693083136
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1278813932664540053428224228626747642198940975104
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1324485858831130769622089379649131486563188867072
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1415829711164312202009819681693899175291684651008
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> -------------------------------------------------------------------------------
>> >> >> >>>> Owner:      [email protected]
>> >> >> >>>> Next Owner: [email protected]
>> >> >> >>>>
>> >> >> >>>> Index: 924856504873462002925769308203272848376019812352
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 970528431040052719119634459225656692740267704320
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1016200357206643435313499610248040537104515596288
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1061872283373234151507364761270424381468763488256
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1107544209539824867701229912292808225833011380224
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1153216135706415583895095063315192070197259272192
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1198888061873006300088960214337575914561507164160
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1244559988039597016282825365359959758925755056128
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1290231914206187732476690516382343603290002948096
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1335903840372778448670555667404727447654250840064
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1381575766539369164864420818427111292018498732032
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1427247692705959881058285969449495136382746624000
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> -------------------------------------------------------------------------------
>> >> >> >>>> Owner:      [email protected]
>> >> >> >>>> Next Owner: [email protected]
>> >> >> >>>>
>> >> >> >>>> Index: 936274486415109681974235595958868809467081785344
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 981946412581700398168100746981252653831329677312
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1027618338748291114361965898003636498195577569280
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1073290264914881830555831049026020342559825461248
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1118962191081472546749696200048404186924073353216
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1210306043414653979137426502093171875652569137152
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1255977969581244695331291653115555720016817029120
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1301649895747835411525156804137939564381064921088
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1347321821914426127719021955160323408745312813056
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1392993748081016843912887106182707253109560705024
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> Index: 1438665674247607560106752257205091097473808596992
>> >> >> >>>>   Waiting on: [riak_kv_vnode]
>> >> >> >>>>   Complete:   [riak_pipe_vnode]
>> >> >> >>>>
>> >> >> >>>> -------------------------------------------------------------------------------
>> >> >> >>>>
>> >> >> >>>> ============================== Unreachable Nodes ==============================
>> >> >> >>>> All nodes are up and reachable
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>> Godefroy
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>> 2013/3/13 Mark Phillips <[email protected]>
>> >> >> >>>>>
>> >> >> >>>>> Hi Godefroy,
>> >> >> >>>>>
>> >> >> >>>>> Good to hear you managed to get the node running again. How is
>> >> >> >>>>> the rebalancing going? Also, what's the network setup for the
>> >> >> >>>>> cluster?
>> >> >> >>>>>
>> >> >> >>>>> Mark
>> >> >> >>>>>
>> >> >> >>>>> On Fri, Mar 8, 2013 at 4:24 AM, Godefroy de Compreignac
>> >> >> >>>>> <[email protected]> wrote:
>> >> >> >>>>> > Hi Mark,
>> >> >> >>>>> >
>> >> >> >>>>> > Thanks for your answer.
>> >> >> >>>>> > To get the node running again, I moved a 39GB bitcask dir to
>> >> >> >>>>> > another disk and made a symlink (ln -s). Rebalancing seems to be
>> >> >> >>>>> > running, but the data distribution is still very unequal, and
>> >> >> >>>>> > the node I added yesterday still has 0% of cluster data.
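>> >> >> >>>>> >
>> >> >> >>>>> > Roughly this, with the node stopped (paths hypothetical):
>> >> >> >>>>> >
>> >> >> >>>>> >   mv /var/lib/riak/bitcask/<partition> /mnt/disk2/<partition>
>> >> >> >>>>> >   ln -s /mnt/disk2/<partition> /var/lib/riak/bitcask/<partition>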
>> >> >> >>>>> >
>> >> >> >>>>> > Godefroy
>> >> >> >>>>> >
>> >> >> >>>>> >
>> >> >> >>>>> > 2013/3/7 Mark Phillips <[email protected]>
>> >> >> >>>>> >>
>> >> >> >>>>> >> Salut Godefroy
>> >> >> >>>>> >>
>> >> >> >>>>> >> On Thu, Mar 7, 2013 at 6:50 AM, Godefroy de Compreignac
>> >> >> >>>>> >> <[email protected]> wrote:
>> >> >> >>>>> >> > Hello,
>> >> >> >>>>> >> >
>> >> >> >>>>> >> > I'm running a cluster of 4 nodes (1.8 TB on each) and I have
>> >> >> >>>>> >> > a balancing problem. The current data distribution is 18%,
>> >> >> >>>>> >> > 19%, 30%, 34%. The node with 34% of cluster data is
>> >> >> >>>>> >> > completely full and doesn't want to start anymore. I don't
>> >> >> >>>>> >> > know what to do. Do you have a solution for such a problem?
>> >> >> >>>>> >> >
>> >> >> >>>>> >>
>> >> >> >>>>> >>
>> >> >> >>>>> >> It looks like you need to increase storage capacity for the
>> >> >> >>>>> >> entire cluster so you can move some data off of the full node.
>> >> >> >>>>> >> Do you have the ability to add another machine (or two) to the
>> >> >> >>>>> >> cluster?
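>> >> >> >>>>> >>
>> >> >> >>>>> >> Once Riak is installed on a new machine, joining it is staged
>> >> >> >>>>> >> in 1.2, roughly (node name hypothetical):
>> >> >> >>>>> >>
>> >> >> >>>>> >>   riak-admin cluster join '[email protected]'
>> >> >> >>>>> >>   riak-admin cluster plan
>> >> >> >>>>> >>   riak-admin cluster commit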
>> >> >> >>>>> >>
>> >> >> >>>>> >> The issue, of course, is that you'll need to get that Riak
>> >> >> >>>>> >> node running before it can hand off a subset of its data to a
>> >> >> >>>>> >> new member. I assume the disk being full is the primary reason
>> >> >> >>>>> >> it's failing to start?
>> >> >> >>>>> >>
>> >> >> >>>>> >> Mark
>> >> >> >>>>> >>
>> >> >> >>>>> >> > Thank you in advance!
>> >> >> >>>>> >> >
>> >> >> >>>>> >> > Godefroy
>> >> >> >>>>> >> >
>> >> >> >>>>> >
>> >> >> >>>>> >
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>>
>> >> >> >>>
>> >> >> >>
>> >> >> >
>> >> >
>> >> >
>> >
>> >
>> >
>
>

_______________________________________________
riak-users mailing list
[email protected]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
