Hi Sander,

Sorry for not getting back to you.

I guess that when you don’t use quota, you do not need to run the scripts.

I do not have any experience changing the op-version on a running glusterfs 
cluster, but looking at some threads it should be possible. I think it only 
works when all clients are running the same version as the servers, though.
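
For what it’s worth, the change itself is a single CLI call. A sketch (the 
target number 30600 assumes GlusterFS 3.6; adjust it to your release):

```shell
# Check the current cluster op-version in glusterd's local state
grep operating-version /var/lib/glusterd/glusterd.info

# Bump the cluster-wide op-version (30600 corresponds to GlusterFS 3.6)
gluster volume set all cluster.op-version 30600
```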

And good luck this weekend.

Grtz, Jiri

> On 14 Apr 2015, at 15:15, Sander Zijlstra <[email protected]> wrote:
> 
> Jiri,
> 
> thanks, I totally missed the op-version part as it’s not mentioned in the 
> upgrade instructions at the link you sent. Actually, I read that link, and 
> because I do not use quota I didn’t run that script either.
> 
> Can I update the op-version when the volume is online and currently doing a 
> rebalance or shall I stop the rebalance, set the new op-version and then 
> start the rebalance again?
> 
> many thanks for all the input….
> 
> Met vriendelijke groet / kind regards,
> 
> Sander Zijlstra
> 
> | Linux Engineer | SURFsara | Science Park 140 | 1098XG Amsterdam | T +31 
> (0)6 43 99 12 47 | [email protected] | www.surfsara.nl |
> 
> Regular day off on Friday
> 
>> On 14 Apr 2015, at 15:01, Jiri Hoogeveen <[email protected]> wrote:
>> 
>> Hi Sander,
>> 
>> If I take a look at 
>> http://www.gluster.org/community/documentation/index.php/OperatingVersions
>> 
>> then operating-version=2 is for glusterfs version 3.4, so I guess you will 
>> still be using the old style.
>> I think this is useful: 
>> http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6
>> 
>> And don’t forget to upgrade the clients also.
>> 
>> 
>> Grtz, Jiri
>> 
>>> On 14 Apr 2015, at 14:14, Sander Zijlstra <[email protected]> wrote:
>>> 
>>> Jiri,
>>> 
>>> thanks for the information, I just commented on a question about 
>>> op-version…. 
>>> 
>>> I upgraded all systems to 3.6.2; does this mean they will all use the 
>>> correct op-version and not revert to old-style behaviour?
>>> 
>>> Met vriendelijke groet / kind regards,
>>> 
>>> Sander Zijlstra
>>> 
>>> | Linux Engineer | SURFsara | Science Park 140 | 1098XG Amsterdam | T +31 
>>> (0)6 43 99 12 47 | [email protected] | www.surfsara.nl |
>>> 
>>> Regular day off on Friday
>>> 
>>>> On 14 Apr 2015, at 14:11, Jiri Hoogeveen <[email protected]> wrote:
>>>> 
>>>> Hi Sander,
>>>> 
>>>> 
>>>>> Since version 3.6 the remove brick command migrates the data away from 
>>>>> the brick being removed, right?
>>>> It should :)
>>>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/pdf/Administration_Guide/Red_Hat_Storage-3-Administration_Guide-en-US.pdf
>>>> Page 100 is, I think, a good start; this is the most complete 
>>>> documentation I know of.
>>>> 
>>>>> When I have replicated bricks (replica 2), I also need to do "remove 
>>>>> brick <volume> replica 2 brick1 brick2 …. , right?
>>>> 
>>>> Yes, you need to remove both replicas at the same time.
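>>>> 
>>>> A sketch of what that looks like (volume and brick names here are 
>>>> hypothetical):
>>>> 
>>>> ```shell
>>>> # Start migrating data off one replica pair (replica count stays 2)
>>>> gluster volume remove-brick myvol replica 2 \
>>>>     server3:/bricks/b1 server4:/bricks/b1 start
>>>> 
>>>> # Commit only once status reports the migration as completed
>>>> gluster volume remove-brick myvol replica 2 \
>>>>     server3:/bricks/b1 server4:/bricks/b1 commit
>>>> ```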
>>>> 
>>>> 
>>>>> Last but not least, is there any way to tell how long a “remove brick” 
>>>>> will take when it’s moving the data? I have dual 10GB ethernet between 
>>>>> the cluster members and the brick storage is a RAID-6 set which can read 
>>>>> 400-600MB/sec without any problems.
>>>> 
>>>> 
>>>> Depends on the size of the disk, the number of files, and the type of 
>>>> files. Network speed is less of an issue than the IO on the disks/bricks.
>>>> To migrate data from one disk to another (much like self-healing), 
>>>> GlusterFS will scan all files on the disk, which can cause high IO on the 
>>>> disk.
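>>>> 
>>>> There is no built-in ETA, but you can poll the status output while the 
>>>> migration runs; the scanned-files and size-moved counters let you 
>>>> extrapolate a rough duration (hypothetical volume and brick names):
>>>> 
>>>> ```shell
>>>> # Poll every minute; shows files scanned, size moved, and failures
>>>> watch -n 60 'gluster volume remove-brick myvol replica 2 \
>>>>     server3:/bricks/b1 server4:/bricks/b1 status'
>>>> ```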
>>>> 
>>>> Because you also had some performance issues when you added bricks, I 
>>>> would expect the same issue with remove-brick. So do this at night if 
>>>> possible.
>>>> 
>>>> 
>>>> Grtz, Jiri
>>>> 
>>>> 
>>>>> On 14 Apr 2015, at 12:53, Sander Zijlstra <[email protected]> wrote:
>>>>> 
>>>>> LS,
>>>>> 
>>>>> I’m planning to decommission a few servers from my cluster, so to confirm:
>>>>> 
>>>>> Since version 3.6 the remove brick command migrates the data away from 
>>>>> the brick being removed, right?
>>>>> When I have replicated bricks (replica 2), I also need to do "remove 
>>>>> brick <volume> replica 2 brick1 brick2 …. , right?
>>>>> 
>>>>> Last but not least, is there any way to tell how long a “remove brick” 
>>>>> will take when it’s moving the data? I have dual 10GB ethernet between 
>>>>> the cluster members and the brick storage is a RAID-6 set which can read 
>>>>> 400-600MB/sec without any problems.
>>>>> 
>>>>> Met vriendelijke groet / kind regards,
>>>>> 
>>>>> Sander Zijlstra
>>>>> 
>>>>> | Linux Engineer | SURFsara | Science Park 140 | 1098XG Amsterdam | T +31 
>>>>> (0)6 43 99 12 47 | [email protected] | www.surfsara.nl |
>>>>> 
>>>>> Regular day off on Friday
>>>>> 
>>>>> _______________________________________________
>>>>> Gluster-users mailing list
>>>>> [email protected]
>>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>> 
>> 
> 

_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
