Thanks Amar, but it looks like v3.1.1 doesn't support the command

'gluster volume rebalance dfs migrate-data start'

# gluster volume rebalance dfs migrate-data start
Usage: volume rebalance <VOLNAME> <start|stop|status>
Rebalance of Volume dfs failed
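Going by the usage string printed above, this build only accepts the plain rebalance forms, without the migrate-data keyword; a sketch of what the v3.1.1 CLI should accept (volume name 'dfs' taken from the info below):

```shell
# Forms matching "Usage: volume rebalance <VOLNAME> <start|stop|status>"
gluster volume rebalance dfs start    # kick off a rebalance
gluster volume rebalance dfs status   # check progress
gluster volume rebalance dfs stop     # abort a running rebalance
```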

On Tue, Oct 18, 2011 at 3:33 PM, Amar Tumballi <[email protected]> wrote:

> Hi Chen,
>
> Can you restart the 'glusterd' and run 'gluster volume rebalance dfs
> migrate-data start' and check if your data migration happens?
>
> Regards,
> Amar
>
> On Tue, Oct 18, 2011 at 12:54 PM, Changliang Chen <[email protected]> wrote:
>
>> Hi guys,
>>
>>     we have a rebalance running on eight bricks since July and this is
>> what the status looks like right now:
>>
>> ===Tue Oct 18 13:45:01 CST 2011 ====
>> rebalance step 1: layout fix in progress: fixed layout 223623
>>
>> There are roughly 8T of photos in the storage, so how long should this
>> rebalance take?
>>
>> What does the number (in this case 223623) represent?
>>
>> Our gluster information:
>> Repository revision: v3.1.1
>> Volume Name: dfs
>> Type: Distributed-Replicate
>> Status: Started
>> Number of Bricks: 4 x 2 = 8
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.1.1.23:/data0
>> Brick2: 10.1.1.24:/data0
>> Brick3: 10.1.1.25:/data0
>> Brick4: 10.1.1.26:/data0
>> Brick5: 10.1.1.27:/data0
>> Brick6: 10.1.1.28:/data0
>> Brick7: 10.1.1.64:/data0
>> Brick8: 10.1.1.65:/data0
>> Options Reconfigured:
>> cluster.min-free-disk: 10%
>> network.ping-timeout: 25
>> network.frame-timeout: 30
>> performance.cache-max-file-size: 512KB
>> performance.cache-size: 3GB
>>
>>
>>
>> --
>>
>> Regards,
>>
>> Cocl
>> OM manager
>> 19lou Operation & Maintenance Dept
>>
>> _______________________________________________
>> Gluster-users mailing list
>> [email protected]
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>


-- 

Regards,

Cocl
OM manager
19lou Operation & Maintenance Dept
