Hi Vijay,
it’s the latest one, I guess:

# gluster --version
glusterfs 3.7.1 built on Jun  1 2015 17:53:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General 
Public License.

Many thanks,

        Alessandro

> On 9 Jun 2015, at 09:08, Vijaikumar M <[email protected]> wrote:
> 
> Hi Alessandro,
> 
> We have recently fixed an issue related to the warning message below.
> Please provide the output of 'gluster --version' and we will check whether it is 
> the same issue.
> 
> Thanks,
> Vijay
> 
> 
> On Monday 08 June 2015 06:43 PM, Alessandro De Salvo wrote:
>> OK, many thanks Rajesh.
>> I just wanted to add that I see a lot of warnings in the logs like the 
>> following:
>> 
>> [2015-06-08 13:13:10.365633] W [marker-quota.c:3162:mq_initiate_quota_task] 0-atlas-data-01-marker: inode ctx get failed, aborting quota txn
>> 
>> I’m not sure if this is a bug (related or not to the one you mention) or if 
>> it is normal and harmless.
>> Thanks,
>> 
>> 
>>         Alessandro
>> 
>> 
>>> On 8 Jun 2015, at 10:39, Rajesh kumar Reddy Mekala <[email protected]> wrote:
>>> 
>>> We have opened bug 1227724 for a similar problem.
>>> 
>>> Thanks,
>>> Rajesh
>>> 
>>> On 06/08/2015 12:08 PM, Vijaikumar M wrote:
>>>> Hi Alessandro,
>>>> 
>>>> Could you please provide a test case, so that we can try to re-create this 
>>>> problem in-house?
>>>> 
>>>> Thanks,
>>>> Vijay
>>>> 
>>>> On Saturday 06 June 2015 05:59 AM, Alessandro De Salvo wrote:
>>>>> Hi,
>>>>> just to answer my own question: it really seems the temp files from rsync are 
>>>>> the culprit. Their size appears to be added to the real contents of the 
>>>>> directories I’m synchronizing, or in other words their size is not subtracted 
>>>>> from the used size after they are removed. I suppose this is somehow connected 
>>>>> to the removexattr errors I’m seeing. The temporary solution I’ve found is to 
>>>>> run rsync with the option that writes the temp files to /tmp (sketched below), 
>>>>> but it would be very interesting to understand why this is happening.
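>>>>> 
>>>>> For the record, the invocation I mean is roughly the following (the paths are 
>>>>> just an example; it relies on rsync’s --temp-dir option):
>>>>> 
>>>>> # rsync -av --temp-dir=/tmp /some/source/dir/ /storage/atlas/home/user1/
>>>>> 
>>>>> With --temp-dir, rsync creates its partial temp files (the ..bashrc.O4kekp-style 
>>>>> names from the logs) under /tmp instead of inside the GlusterFS mount, so the 
>>>>> quota accounting on the volume should never see them.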
>>>>> Cheers,
>>>>> 
>>>>>   Alessandro
>>>>> 
>>>>>> On 6 Jun 2015, at 01:19, Alessandro De Salvo <[email protected]> wrote:
>>>>>> 
>>>>>> Hi,
>>>>>> I currently have two bricks in a replica 2 volume on the same machine, pointing 
>>>>>> to different disks of a connected SAN.
>>>>>> The volume itself is fine:
>>>>>> 
>>>>>> # gluster volume info atlas-home-01
>>>>>> 
>>>>>> Volume Name: atlas-home-01
>>>>>> Type: Replicate
>>>>>> Volume ID: 660db960-31b8-4341-b917-e8b43070148b
>>>>>> Status: Started
>>>>>> Number of Bricks: 1 x 2 = 2
>>>>>> Transport-type: tcp
>>>>>> Bricks:
>>>>>> Brick1: host1:/bricks/atlas/home02/data
>>>>>> Brick2: host2:/bricks/atlas/home01/data
>>>>>> Options Reconfigured:
>>>>>> performance.write-behind-window-size: 4MB
>>>>>> performance.io-thread-count: 32
>>>>>> performance.readdir-ahead: on
>>>>>> server.allow-insecure: on
>>>>>> nfs.disable: true
>>>>>> features.quota: on
>>>>>> features.inode-quota: on
>>>>>> 
>>>>>> 
>>>>>> However, when I set a quota on a directory of the volume, the size shown is 
>>>>>> twice the physical size of the actual directory:
>>>>>> 
>>>>>> # gluster volume quota atlas-home-01 list /user1
>>>>>>                   Path                   Hard-limit  Soft-limit       Used  Available  Soft-limit exceeded? Hard-limit exceeded?
>>>>>> ---------------------------------------------------------------------------------------------------------------------------
>>>>>> /user1                                      4.0GB         80%        3.2GB    853.4MB                   No                   No
>>>>>> 
>>>>>> # du -sh /storage/atlas/home/user1
>>>>>> 1.6G    /storage/atlas/home/user1
>>>>>> 
>>>>>> If I remove one of the bricks the quota shows the correct value.
>>>>>> Is there any double counting in case the bricks are on the same machine?
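>>>>>> 
>>>>>> One way I can think of to check this is to look at the quota xattrs directly on 
>>>>>> each brick, something like the following (just a sketch, the exact xattr names 
>>>>>> may differ between versions):
>>>>>> 
>>>>>> # getfattr -d -m . -e hex /bricks/atlas/home02/data/user1
>>>>>> # getfattr -d -m . -e hex /bricks/atlas/home01/data/user1
>>>>>> 
>>>>>> If each replica reports the full ~1.6G in trusted.glusterfs.quota.size, my guess 
>>>>>> would be that the Used value in the quota list is simply summing the 
>>>>>> contributions of both bricks.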
>>>>>> Also, I see a lot of errors in the logs like the following:
>>>>>> 
>>>>>> [2015-06-05 21:59:27.450407] E [posix-handle.c:157:posix_make_ancestryfromgfid] 0-atlas-home-01-posix: could not read the link from the gfid handle /bricks/atlas/home01/data/.glusterfs/be/e5/bee5e2b8-c639-4539-a483-96c19cd889eb (No such file or directory)
>>>>>> 
>>>>>> and also
>>>>>> 
>>>>>> [2015-06-05 22:52:01.112070] E [marker-quota.c:2363:mq_mark_dirty] 0-atlas-home-01-marker: failed to get inode ctx for /user1/file1
>>>>>> 
>>>>>> When running rsync I also see the following errors:
>>>>>> 
>>>>>> [2015-06-05 23:06:22.203968] E [marker-quota.c:2601:mq_remove_contri] 0-atlas-home-01-marker: removexattr trusted.glusterfs.quota.fddf31ba-7f1d-4ba8-a5ad-2ebd6e4030f3.contri failed for /user1/..bashrc.O4kekp: No data available
>>>>>> 
>>>>>> Those files are rsync’s temporary files; I’m not sure why they cause errors in 
>>>>>> GlusterFS.
>>>>>> Any help?
>>>>>> Thanks,
>>>>>> 
>>>>>>  Alessandro
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>> 
> 

_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
