[Gluster-users] How to distributed with a folder

2016-11-16 Thread ????
In a distributed volume, files are distributed across different bricks. But my
data is a folder: if even one file in it is missing, the whole folder of data
becomes unusable. So I would like a specific folder to be stored on a single
brick, so that losing one brick does not make all of the data unusable.


--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] question about info and info.tmp

2016-11-16 Thread songxin


Hi Atin,
Thank you for your support.


I have a question for you.


glusterd_store_volinfo() already replaces the info and bricks/* files implicitly via rename().
Why must glusterd remove info and bricks/* in
glusterd_delete_stale_volume() before calling glusterd_store_volinfo()?


Thanks,
Xin





On 2016-11-16 20:59:05, "Atin Mukherjee" wrote:





On Tue, Nov 15, 2016 at 1:53 PM, songxin  wrote:

ok, thank you.





On 2016-11-15 16:12:34, "Atin Mukherjee" wrote:





On Tue, Nov 15, 2016 at 12:47 PM, songxin  wrote:



Hi Atin,


I think the root cause is in the function glusterd_import_friend_volume as 
below. 

int32_t
glusterd_import_friend_volume (dict_t *peer_data, size_t count)
{
        ...
        ret = glusterd_volinfo_find (new_volinfo->volname, &old_volinfo);
        if (0 == ret) {
                (void) gd_check_and_update_rebalance_info (old_volinfo,
                                                           new_volinfo);
                (void) glusterd_delete_stale_volume (old_volinfo, new_volinfo);
        }
        ...
        ret = glusterd_store_volinfo (new_volinfo,
                                      GLUSTERD_VOLINFO_VER_AC_NONE);
        if (ret) {
                gf_msg (this->name, GF_LOG_ERROR, 0,
                        GD_MSG_VOLINFO_STORE_FAIL, "Failed to store "
                        "volinfo for volume %s", new_volinfo->volname);
                goto out;
        }
        ...
}

glusterd_delete_stale_volume() will remove info and bricks/*, and
glusterd_store_volinfo() will create new ones.
But if glusterd is killed before the rename, the info file is left empty.


And glusterd will then fail to start, because the info file is empty the next
time you start glusterd.


Any idea, Atin?


Give me some time, I will check it out. But from this analysis it looks very
possible: a volume is changed while glusterd is down on node A, then when that
node comes up we update the volinfo during the peer handshake, and during that
time glusterd goes down once again. I'll confirm it by tomorrow.



I checked the code and it does look like you have got the right RCA for the
issue which you simulated through those two scripts. However, this can happen
even when you try to create a fresh volume: if glusterd goes down while writing
the content into the store, before renaming the info.tmp file, you get into the
same situation.


I'd really need to think through if this can be fixed. Suggestions are always 
appreciated.

 



BTW, excellent work Xin!




Thanks,
Xin



On 2016-11-15 12:07:05, "Atin Mukherjee" wrote:





On Tue, Nov 15, 2016 at 8:58 AM, songxin  wrote:

Hi Atin,
I have some clues about this issue.
I could reproduce this issue use the scrip that mentioned in 
https://bugzilla.redhat.com/show_bug.cgi?id=1308487 .


I really appreciate your help in trying to nail down this issue. While going
through your email and the code to figure out the possible cause,
unfortunately I don't see any script in the attachment of the bug. Could you
please cross-check?
 



After I added some debug prints in glusterd-store.c, as shown below, I found
that /var/lib/glusterd/vols/xxx/info and
/var/lib/glusterd/vols/xxx/bricks/* are removed,
but the other files in /var/lib/glusterd/vols/xxx/ are not removed.


int32_t
glusterd_store_volinfo (glusterd_volinfo_t *volinfo, glusterd_volinfo_ver_ac_t ac)
{
        int32_t     ret = -1;
        struct stat buf = {0,};

        GF_ASSERT (volinfo);

        ret = access("/var/lib/glusterd/vols/gv0/info", F_OK);
        if (ret < 0)
        {
                gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "info does not exist(%d)",
                        errno);
        }
        else
        {
                ret = stat("/var/lib/glusterd/vols/gv0/info", &buf);
                if (ret < 0)
                {
                        gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "stat info error");
                }
                else
                {
                        gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "info size is %lu, "
                                "inode num is %lu", buf.st_size, buf.st_ino);
                }
        }

        glusterd_perform_volinfo_version_action (volinfo, ac);
        ret = glusterd_store_create_volume_dir (volinfo);
        if (ret)
                goto out;

        ...
}


So it is easy to understand why the info file or 10.32.1.144.-opt-lvmdir-c2-brick
is sometimes empty.
It is because the info file does not exist, so it is created by "fd = open
(path, O_RDWR | O_CREAT | O_APPEND, 0600);" in gf_store_handle_new,
and the info file stays empty until the rename.
So the info file is left empty if glusterd shuts down before the rename.
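
To illustrate the window I am describing, here is a rough shell sketch of the
sequence (paths and the volume name gv0 are taken from my debug prints; this is
only an illustration of the ordering, not the actual glusterd code):

  info=/var/lib/glusterd/vols/gv0/info
  rm -f "$info"                              # glusterd_delete_stale_volume: stale info removed
  : > "$info"                                # gf_store_handle_new: open(O_CREAT) leaves an empty info
  echo "volume metadata ..." > "$info.tmp"   # new contents are written to info.tmp first
  # If glusterd is killed anywhere before the next line, info exists on disk but
  # is empty, and glusterd will refuse to start the next time.
  mv -f "$info.tmp" "$info"                  # rename() publishes the new contents atomically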
 



My questions are the following.
1. I did not find the point where the info file is removed. Could you tell me
where info and bricks/* are removed?
2. Why are the info file and bricks/* removed, while the other files in
/var/lib/glusterd/vols/xxx/ are not?

AFAIK, we never 

Re: [Gluster-users] question about glusterfs version migrate

2016-11-16 Thread songxin
ok, thank you for your reply.








At 2016-11-16 17:59:34, "Serkan Çoban"  wrote:
>Below link has changes in each release.
>https://github.com/gluster/glusterfs/tree/release-3.7/doc/release-notes
>
>
>On Wed, Nov 16, 2016 at 11:49 AM, songxin  wrote:
>> Hi,
>> I am planning to migrate from gluster 3.7.6 to gluster 3.7.10.
>> So I have two questions below.
>> 1.How could I know the changes in gluster 3.7.6 compared to gluster 3.7.10?
>> 2.Does my application need any NBC changes?
>>
>> Thanks,
>> Xin
>>
>>
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Unable to stop volume because geo-replication

2016-11-16 Thread Kotresh Hiremath Ravishankar
Hi Ping,

That's good to hear. Let us know if you face any further issues.
We are happy to help you.

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Chao-Ping Chien" 
> To: "Kotresh Hiremath Ravishankar" 
> Cc: gluster-users@gluster.org
> Sent: Wednesday, November 16, 2016 7:32:11 PM
> Subject: RE: [Gluster-users] Unable to stop volume because geo-replication
> 
> Hi Kotresh,
> 
> Thank you very much for taking time to help.
> 
> I followed your instructions and restarted glusterd with the log level at DEBUG.
> I think the restart somehow fixed the state. The geo-replication status now
> correctly reports the status of all volumes (unlike before, when I reported the
> problem, it only showed part of the setup).
> 
> I was able to stop the geo-replication session, delete it, and eventually
> delete the volume.
> 
> If you wish I can send you the log. I am not attaching the log this time because
> the problem seems to have been caused by the environment being in an abnormal
> state, and the restart fixed the problem.
> 
> Thanks.
> 
> Ping.
> 
> -Original Message-
> From: Kotresh Hiremath Ravishankar [mailto:khire...@redhat.com]
> Sent: Tuesday, November 15, 2016 12:51 AM
> To: Chao-Ping Chien 
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] Unable to stop volume because geo-replication
> 
> Hi,
> 
> Could you please restart glusterd in DEBUG mode and share the glusterd logs?
> 
> *Starting glusterd in DEBUG mode as follows.
> 
> #glusterd -LDEBUG
> 
> *Stop the volume
>    #gluster vol stop <volname>
> 
> Share the glusterd logs.
> 
> Thanks and Regards,
> Kotresh H R
> 
> - Original Message -
> > From: "Chao-Ping Chien" 
> > To: gluster-users@gluster.org
> > Sent: Monday, November 14, 2016 10:18:16 PM
> > Subject: [Gluster-users] Unable to stop volume because geo-replication
> > 
> > 
> > 
> > Hi,
> > 
> > 
> > 
> > Hope someone can point me how to do this.
> > 
> > 
> > 
> > I want to delete a volume but not able to do so because glusterfs is
> > keep reporting there is geo-replication setup which seems to be not
> > exist at the moment when I issue stop command.
> > 
> > 
> > 
> > On a Redhat 7.2 kernel: 3.10.0-327.36.3.el7.x86_64
> > 
> > [root@eqappsrvp01 mule1]# rpm -qa |grep gluster
> > 
> > glusterfs-3.7.14-1.el7.x86_64
> > 
> > glusterfs-fuse-3.7.14-1.el7.x86_64
> > 
> > glusterfs-server-3.7.14-1.el7.x86_64
> > 
> > glusterfs-libs-3.7.14-1.el7.x86_64
> > 
> > glusterfs-api-3.7.14-1.el7.x86_64
> > 
> > glusterfs-geo-replication-3.7.14-1.el7.x86_64
> > 
> > glusterfs-cli-3.7.14-1.el7.x86_64
> > 
> > glusterfs-client-xlators-3.7.14-1.el7.x86_64
> > 
> > 
> > 
> > 
> > 
> > [root@eqappsrvp01 mule1]# gluster volume stop mule1
> > Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
> > volume stop: mule1: failed: geo-replication sessions are active for the volume mule1.
> > Stop geo-replication sessions involved in this volume. Use 'volume
> > geo-replication status' command for more info.
> > 
> > [root@eqappsrvp01 mule1]# gluster volume geo-replication status
> > 
> > 
> > 
> > MASTER NODE    MASTER VOL     MASTER BRICK         SLAVE USER    SLAVE                             SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
> > -----------------------------------------------------------------------------------------------------------------------------------------------------
> > eqappsrvp01    gitlab_data    /data/gitlab_data    root          ssh://eqappsrvd02::gitlab_data    N/A           Stopped    N/A             N/A
> > eqappsrvp02    gitlab_data    /data/gitlab_data    root          ssh://eqappsrvd02::gitlab_data    N/A           Stopped    N/A             N/A
> > 
> > [root@eqappsrvp01 mule1]# uname -a
> > 
> > Linux eqappsrvp01 3.10.0-327.36.3.el7.x86_64 #1 SMP Thu Oct 20
> > 04:56:07 EDT
> > 2016 x86_64 x86_64 x86_64 GNU/Linux
> > 
> > [root@eqappsrvp01 mule1]# cat /etc/redhat-release
> > Red Hat Enterprise Linux Server release 7.2 (Maipo)
> > =====
> > 
> > 
> > 
> > I searched the internet and found that Red Hat Bugzilla bug 1342431 seems to
> > address this problem; according to its status it should be fixed in
> > 3.7.12, but in my version 3.7.14 it still exists.
> > 
> > 
> > 
> > Thanks
> > 
> > 
> > 
> > Ping.
> > 
> > 
> > 
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Notice: https://download.gluster.org:/pub/gluster/glusterfs/LATEST has changed

2016-11-16 Thread Kaleb S. KEITHLEY
Hi,

As some of you may have noticed, GlusterFS-3.9.0 was released. Watch
this space for the official announcement soon.

If you are using Community GlusterFS packages from download.gluster.org
you should check your package metadata to be sure that an update doesn't
inadvertently update your system to 3.9.

There is a new symlink:
https://download.gluster.org:/pub/gluster/glusterfs/LTM-3.8 which will
remain pointed at the GlusterFS-3.8 packages. Use this instead of
.../LATEST to keep getting 3.8 updates without risk of accidentally
getting 3.9. There is also a new LTM-3.7 symlink that you can use for
3.7 updates.
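
If you manage the repository by hand, one quick (illustrative) way to pin an
existing yum configuration to the LTM symlink is to rewrite the baseurl. The
repo file name and path layout below are assumptions, so check
/etc/yum.repos.d and the directory listing on download.gluster.org before
running anything like this:

  # assumed file name; edit whichever .repo file currently points at .../LATEST/
  sudo sed -i 's|/pub/gluster/glusterfs/LATEST/|/pub/gluster/glusterfs/LTM-3.8/|g' \
      /etc/yum.repos.d/glusterfs-epel.repo
  yum clean metadata && yum check-update 'glusterfs*'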

Also note that there is a new package signing key for the 3.9 packages
that are on download.gluster.org. The old key remains the same for 3.8
and earlier packages. New releases of 3.8 and 3.7 packages will continue
to use the old key.

GlusterFS-3.9 is the first "short term" release; it will be supported
for approximately six months. 3.7 and 3.8 are Long Term Maintenance
(LTM) releases. 3.9 will be followed by 3.10; 3.10 will be a LTM release
and 3.9 and 3.7 will be End-of-Life (EOL) at that time.


-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] move file in brick but in .glusterfs I have a "copy"

2016-11-16 Thread Sergio Traldi

Hi,

we have a Gluster setup with 2 servers running CentOS 6 and glusterfs-3.7.4-2,
with 2 bricks on each server (4 bricks in total). We have one brick that is
quite full and the other 3 bricks quite empty. I stopped a volume and moved a
big file from the nearly full brick to a nearly empty one, copying the file and
preserving its ACLs and xattrs. After the copy I deleted the file on the first
brick (rm -rf /brick1/data/ccaf), but I observed that the space was not freed.


Looking into it, I found a file in that brick's .glusterfs directory as big as
the file I moved. I read the article here:


https://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/

But I did not understand whether it is safe to remove the directory containing
the big file under .glusterfs on the first brick. Can someone help me? Can I
delete that directory (/brick1/data/.glusterfs/cc)? Is it better to delete the
file while glusterd is stopped, or is it enough that the volume is stopped?
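
From what I understand so far (please correct me if I am wrong), for regular
files the entry under .glusterfs/<xx>/<yy>/ is normally a hard link to the data
file, so after my rm that GFID link may be the last reference keeping the space
allocated. This is how I checked on the brick (paths as in my example above):

  # on the brick, not on the mount point
  stat -c '%h %i %n' /brick1/data/.glusterfs/cc/*/*   # %h = link count, %i = inode number
  # link count >= 2 means a data path still references the inode;
  # link count 1 means the .glusterfs entry is the only remaining reference,
  # which would explain why the rm did not free any space
  find /brick1/data -xdev -inum INODE_NUMBER          # replace INODE_NUMBER with the inode printed above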


Thanks for any answer.

Cheers

Sergio

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] question about nfs lookup result

2016-11-16 Thread jin deng
Sorry, I read your patch (http://review.gluster.org/14911); that fixes the
problem. Thanks very much.

2016-11-16 22:38 GMT+08:00 jin deng :

>
>
> 2016-11-16 22:30 GMT+08:00 Soumya Koduri :
>
>>
>>
>> On 11/16/2016 07:38 PM, jin deng wrote:
>>
>>> Thank you so much.Soumya,you make me more clear about the logic of the
>>> code.
>>>
>>> I used to wonder the code was to handle the last resolve case.However i
>>> followed
>>>
>>> the "nfs3_fh_resolve_entry_hard" and I thought it would get the ret ==
>>> -2 case and went
>>>
>>> into the "nfs3_lookup_op" branch,and finally call the
>>> "nfs3_call_resume".Seems has
>>>
>>> no chance to call "nfs3_fh_resolve_entry_lookup_cbk" because it was a
>>> LOOKUP
>>>
>>> operation.Am i wrong again? :-)
>>>
>>
>> You are right :)...for LOOKUP fop, we go to "nfs3_call_resume" which is
>> nfs3_lookup_resume and the callback is "nfs3svc_lookup_cbk" where in we are
>> not updating cs->stbuf. But we seem to be constructing lookup reply
>> (nfs3_lookup_reply) using 'buf' directly returned for the child entry
>> instead of using cs->stbuf. Maybe that's the reason it was working well
>> till now.
>> FYI - there was an issue in the lookup logic code path which we fixed as
>> part of http://review.gluster.org/14911 . I will not be surprised if
>> there are any more lurking around :)
>>
>
>  hmm...seems still has a problem.As the "cs->hardresolved" has been set to
> 1 in "nfs3_fh_resolve_inode_hard".The "nfs3_lookup_resume" callback will
> not
>  get into the "nfs3svc_lookup_cbk".Instead,the "nfs3_lookup_resume" will
> terminate at here as I thought:
>
>  if (cs->hardresolved) {
> stat = NFS3_OK;
> nfs3_fh_build_child_fh (>parent, >stbuf, );
> goto nfs3err;
> }
>
> I wonder if this works fine because the NFS client always resolve the
> parent first, not very sure.
>
>
>> Thanks,
>> Soumya
>>
>>
>>>
>>>
>>> 2016-11-16 21:45 GMT+08:00 Soumya Koduri >> >:
>>>
>>>
>>>
>>> On 11/16/2016 06:38 PM, Pranith Kumar Karampuri wrote:
>>>
>>> Added people who know nfs code.
>>>
>>> On Wed, Nov 16, 2016 at 6:21 PM, jin deng
>>> 
>>> >>
>>>
>>> wrote:
>>>
>>> Hi all,
>>> I'm reading the code of 3.6.9 version and got a
>>> question.And I'm
>>> not very familiar with the code,so I have to confirm it(I
>>> checked
>>> the master branch,it's same with 3.6.9).
>>>
>>> The question is about the 'lookup' operation of NFS.And
>>> i'm with
>>> this code flow:
>>>
>>> nfs3_lookup (nfs3.c) ==> nfs3_fh_resolve_and_resume
>>> ==> nfs3_fh_resolve_root ==> nfs3_fh_resolve_resume
>>> ==> nfs3_fh_resolve_entry ==> nfs3_fh_resolve_entry_hard.
>>>
>>> Now enter the "nfs_entry_loc_fill" and return with -1 which
>>> means
>>> the "parent" not found,so we have to do hard resolve about
>>> the
>>> parent. After doing a hard resolve,we get into the callback
>>> "nfs3_fh_resolve_inode_lookup_cbk".And the callback has the
>>> following code:
>>>
>>> >>> memcpy (&cs->stbuf, buf, sizeof(*buf));
>>> >>> memcpy (&cs->postparent, buf, sizeof(*postparent))
>>>
>>>
>>> This must had been done  (and required) in case if this was the last
>>> entry(/inode) to be looked up
>>>
>>>
>>> I think we've just done a parent resolve,how could we assign
>>> the
>>> parent result into the "stbuf" and "postparent".The later
>>> two should
>>> be the information of the child file/directory.Do i made a
>>> misunderstand?
>>>
>>>
>>> In case if it was not the last entry we fall through below code in
>>> "nfs3_fh_resolve_inode_lookup_cbk" -
>>>
>>> if (cs->resolventry)
>>> nfs3_fh_resolve_entry_hard (cs);
>>>
>>> Callback is "nfs3_fh_resolve_entry_lookup_cbk()" where in cs->stbuf
>>> and cs->postparent get overridden with the values corresponding to
>>> the child entry.
>>>
>>> Thanks,
>>> Soumya
>>>
>>>
>>> Thanks advance for your help.
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org 
>>> >> >
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>> 
>>> >> >
>>>
>>>
>>>
>>>
>>> --
>>> Pranith
>>>
>>>
>>>

Re: [Gluster-users] question about nfs lookup result

2016-11-16 Thread Soumya Koduri



On 11/16/2016 07:38 PM, jin deng wrote:

Thank you so much.Soumya,you make me more clear about the logic of the code.

I used to wonder the code was to handle the last resolve case.However i
followed

the "nfs3_fh_resolve_entry_hard" and I thought it would get the ret ==
-2 case and went

into the "nfs3_lookup_op" branch,and finally call the
"nfs3_call_resume".Seems has

no chance to call "nfs3_fh_resolve_entry_lookup_cbk" because it was a LOOKUP

operation.Am i wrong again? :-)


You are right :)... for the LOOKUP fop, we go to "nfs3_call_resume", which is
nfs3_lookup_resume, and the callback is "nfs3svc_lookup_cbk", wherein we
are not updating cs->stbuf. But we seem to be constructing the lookup reply
(nfs3_lookup_reply) using the 'buf' directly returned for the child entry
instead of using cs->stbuf. Maybe that's the reason it has been working well
till now.
FYI - there was an issue in the lookup logic code path which we fixed as 
part of http://review.gluster.org/14911 . I will not be surprised if 
there are any more lurking around :)


Thanks,
Soumya





2016-11-16 21:45 GMT+08:00 Soumya Koduri >:



On 11/16/2016 06:38 PM, Pranith Kumar Karampuri wrote:

Added people who know nfs code.

On Wed, Nov 16, 2016 at 6:21 PM, jin deng

>>
wrote:

Hi all,
I'm reading the code of 3.6.9 version and got a
question.And I'm
not very familiar with the code,so I have to confirm it(I
checked
the master branch,it's same with 3.6.9).

The question is about the 'lookup' operation of NFS.And
i'm with
this code flow:

nfs3_lookup (nfs3.c) ==> nfs3_fh_resolve_and_resume
==> nfs3_fh_resolve_root ==> nfs3_fh_resolve_resume
==> nfs3_fh_resolve_entry ==> nfs3_fh_resolve_entry_hard.

Now enter the "nfs_entry_loc_fill" and return with -1 which
means
the "parent" not found,so we have to do hard resolve about the
parent. After doing a hard resolve,we get into the callback
"nfs3_fh_resolve_inode_lookup_cbk".And the callback has the
following code:

>>> memcpy (&cs->stbuf, buf, sizeof(*buf));
>>> memcpy (&cs->postparent, buf, sizeof(*postparent))


This must had been done  (and required) in case if this was the last
entry(/inode) to be looked up


I think we've just done a parent resolve,how could we assign the
parent result into the "stbuf" and "postparent".The later
two should
be the information of the child file/directory.Do i made a
misunderstand?


In case if it was not the last entry we fall through below code in
"nfs3_fh_resolve_inode_lookup_cbk" -

if (cs->resolventry)
nfs3_fh_resolve_entry_hard (cs);

Callback is "nfs3_fh_resolve_entry_lookup_cbk()" where in cs->stbuf
and cs->postparent get overridden with the values corresponding to
the child entry.

Thanks,
Soumya


Thanks advance for your help.

___
Gluster-users mailing list
Gluster-users@gluster.org 
>
http://www.gluster.org/mailman/listinfo/gluster-users

>




--
Pranith



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] question about nfs lookup result

2016-11-16 Thread jin deng
2016-11-16 22:30 GMT+08:00 Soumya Koduri :

>
>
> On 11/16/2016 07:38 PM, jin deng wrote:
>
>> Thank you so much.Soumya,you make me more clear about the logic of the
>> code.
>>
>> I used to wonder the code was to handle the last resolve case.However i
>> followed
>>
>> the "nfs3_fh_resolve_entry_hard" and I thought it would get the ret ==
>> -2 case and went
>>
>> into the "nfs3_lookup_op" branch,and finally call the
>> "nfs3_call_resume".Seems has
>>
>> no chance to call "nfs3_fh_resolve_entry_lookup_cbk" because it was a
>> LOOKUP
>>
>> operation.Am i wrong again? :-)
>>
>
> You are right :)...for LOOKUP fop, we go to "nfs3_call_resume" which is
> nfs3_lookup_resume and the callback is "nfs3svc_lookup_cbk" where in we are
> not updating cs->stbuf. But we seem to be constructing lookup reply
> (nfs3_lookup_reply) using 'buf' directly returned for the child entry
> instead of using cs->stbuf. Maybe that's the reason it was working well
> till now.
> FYI - there was an issue in the lookup logic code path which we fixed as
> part of http://review.gluster.org/14911 . I will not be surprised if
> there are any more lurking around :)
>

 Hmm... it seems there is still a problem. Since "cs->hardresolved" has been set to
1 in "nfs3_fh_resolve_inode_hard", the "nfs3_lookup_resume" path will not
get into "nfs3svc_lookup_cbk". Instead, "nfs3_lookup_resume" will
terminate here, as I understand it:

 if (cs->hardresolved) {
stat = NFS3_OK;
nfs3_fh_build_child_fh (>parent, >stbuf, );
goto nfs3err;
}

I wonder if this only works fine because the NFS client always resolves the
parent first; I am not very sure.


> Thanks,
> Soumya
>
>
>>
>>
>> 2016-11-16 21:45 GMT+08:00 Soumya Koduri > >:
>>
>>
>>
>> On 11/16/2016 06:38 PM, Pranith Kumar Karampuri wrote:
>>
>> Added people who know nfs code.
>>
>> On Wed, Nov 16, 2016 at 6:21 PM, jin deng
>> 
>> >>
>>
>> wrote:
>>
>> Hi all,
>> I'm reading the code of 3.6.9 version and got a
>> question.And I'm
>> not very familiar with the code,so I have to confirm it(I
>> checked
>> the master branch,it's same with 3.6.9).
>>
>> The question is about the 'lookup' operation of NFS.And
>> i'm with
>> this code flow:
>>
>> nfs3_lookup (nfs3.c) ==> nfs3_fh_resolve_and_resume
>> ==> nfs3_fh_resolve_root ==> nfs3_fh_resolve_resume
>> ==> nfs3_fh_resolve_entry ==> nfs3_fh_resolve_entry_hard.
>>
>> Now enter the "nfs_entry_loc_fill" and return with -1 which
>> means
>> the "parent" not found,so we have to do hard resolve about the
>> parent. After doing a hard resolve,we get into the callback
>> "nfs3_fh_resolve_inode_lookup_cbk".And the callback has the
>> following code:
>>
>> >>> memcpy (&cs->stbuf, buf, sizeof(*buf));
>> >>> memcpy (&cs->postparent, buf, sizeof(*postparent))
>>
>>
>> This must had been done  (and required) in case if this was the last
>> entry(/inode) to be looked up
>>
>>
>> I think we've just done a parent resolve,how could we assign
>> the
>> parent result into the "stbuf" and "postparent".The later
>> two should
>> be the information of the child file/directory.Do i made a
>> misunderstand?
>>
>>
>> In case if it was not the last entry we fall through below code in
>> "nfs3_fh_resolve_inode_lookup_cbk" -
>>
>> if (cs->resolventry)
>> nfs3_fh_resolve_entry_hard (cs);
>>
>> Callback is "nfs3_fh_resolve_entry_lookup_cbk()" where in cs->stbuf
>> and cs->postparent get overridden with the values corresponding to
>> the child entry.
>>
>> Thanks,
>> Soumya
>>
>>
>> Thanks advance for your help.
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org 
>> > >
>> http://www.gluster.org/mailman/listinfo/gluster-users
>> 
>> > >
>>
>>
>>
>>
>> --
>> Pranith
>>
>>
>>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] question about nfs lookup result

2016-11-16 Thread jin deng
Thank you so much.Soumya,you make me more clear about the logic of the code.

I used to wonder the code was to handle the last resolve case.However i
followed

the "nfs3_fh_resolve_entry_hard" and I thought it would get the ret == -2
case and went

into the "nfs3_lookup_op" branch,and finally call the
"nfs3_call_resume".Seems has

no chance to call "nfs3_fh_resolve_entry_lookup_cbk" because it was a LOOKUP

operation.Am i wrong again? :-)



2016-11-16 21:45 GMT+08:00 Soumya Koduri :

>
>
> On 11/16/2016 06:38 PM, Pranith Kumar Karampuri wrote:
>
>> Added people who know nfs code.
>>
>> On Wed, Nov 16, 2016 at 6:21 PM, jin deng > > wrote:
>>
>> Hi all,
>> I'm reading the code of 3.6.9 version and got a question.And I'm
>> not very familiar with the code,so I have to confirm it(I checked
>> the master branch,it's same with 3.6.9).
>>
>> The question is about the 'lookup' operation of NFS.And i'm with
>> this code flow:
>>
>> nfs3_lookup (nfs3.c) ==> nfs3_fh_resolve_and_resume
>> ==> nfs3_fh_resolve_root ==> nfs3_fh_resolve_resume
>> ==> nfs3_fh_resolve_entry ==> nfs3_fh_resolve_entry_hard.
>>
>> Now enter the "nfs_entry_loc_fill" and return with -1 which means
>> the "parent" not found,so we have to do hard resolve about the
>> parent. After doing a hard resolve,we get into the callback
>> "nfs3_fh_resolve_inode_lookup_cbk".And the callback has the
>> following code:
>>
>> >>> memcpy (&cs->stbuf, buf, sizeof(*buf));
>> >>> memcpy (&cs->postparent, buf, sizeof(*postparent))
>>
>
> This must had been done  (and required) in case if this was the last
> entry(/inode) to be looked up
>

>> I think we've just done a parent resolve,how could we assign the
>> parent result into the "stbuf" and "postparent".The later two should
>> be the information of the child file/directory.Do i made a
>> misunderstand?
>>
>
> In case if it was not the last entry we fall through below code in
> "nfs3_fh_resolve_inode_lookup_cbk" -
>
> if (cs->resolventry)
> nfs3_fh_resolve_entry_hard (cs);
>
> Callback is "nfs3_fh_resolve_entry_lookup_cbk()" where in cs->stbuf and
> cs->postparent get overridden with the values corresponding to the child
> entry.
>
> Thanks,
> Soumya
>
>
>> Thanks advance for your help.
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org 
>> http://www.gluster.org/mailman/listinfo/gluster-users
>> 
>>
>>
>>
>>
>> --
>> Pranith
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Unable to stop volume because geo-replication

2016-11-16 Thread Chao-Ping Chien
Hi Kotresh,

Thank you very much for taking time to help. 

I followed your instructions and restarted glusterd with the log level at DEBUG. I think
the restart somehow fixed the state. The geo-replication status now correctly reports
the status of all volumes (unlike before, when I reported the problem, it only showed
part of the setup).

I was able to stop the geo-replication session, delete it, and eventually
delete the volume.
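
For reference, the sequence that finally worked was essentially the standard
one (volume and slave names below are placeholders; 'force' can be appended to
the stop command if a node is unreachable):

  gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> stop
  gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> delete
  gluster volume stop <MASTER_VOL>
  gluster volume delete <MASTER_VOL>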

If you wish I can send you the log. I am not attaching the log this time because
the problem seems to have been caused by the environment being in an abnormal
state, and the restart fixed the problem.

Thanks.

Ping.

-Original Message-
From: Kotresh Hiremath Ravishankar [mailto:khire...@redhat.com] 
Sent: Tuesday, November 15, 2016 12:51 AM
To: Chao-Ping Chien 
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Unable to stop volume because geo-replication

Hi,

Could you please restart glusterd in DEBUG mode and share the glusterd logs?

*Starting glusterd in DEBUG mode as follows.

#glusterd -LDEBUG

*Stop the volume
   #gluster vol stop <volname>

Share the glusterd logs.

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Chao-Ping Chien" 
> To: gluster-users@gluster.org
> Sent: Monday, November 14, 2016 10:18:16 PM
> Subject: [Gluster-users] Unable to stop volume because geo-replication
> 
> 
> 
> Hi,
> 
> 
> 
> Hope someone can point me how to do this.
> 
> 
> 
> I want to delete a volume but not able to do so because glusterfs is 
> keep reporting there is geo-replication setup which seems to be not 
> exist at the moment when I issue stop command.
> 
> 
> 
> On a Redhat 7.2 kernel: 3.10.0-327.36.3.el7.x86_64
> 
> [root@eqappsrvp01 mule1]# rpm -qa |grep gluster
> 
> glusterfs-3.7.14-1.el7.x86_64
> 
> glusterfs-fuse-3.7.14-1.el7.x86_64
> 
> glusterfs-server-3.7.14-1.el7.x86_64
> 
> glusterfs-libs-3.7.14-1.el7.x86_64
> 
> glusterfs-api-3.7.14-1.el7.x86_64
> 
> glusterfs-geo-replication-3.7.14-1.el7.x86_64
> 
> glusterfs-cli-3.7.14-1.el7.x86_64
> 
> glusterfs-client-xlators-3.7.14-1.el7.x86_64
> 
> 
> 
> 
> 
> [root@eqappsrvp01 mule1]# gluster volume stop mule1
> Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
> volume stop: mule1: failed: geo-replication sessions are active for the volume mule1.
> Stop geo-replication sessions involved in this volume. Use 'volume
> geo-replication status' command for more info.
> 
> [root@eqappsrvp01 mule1]# gluster volume geo-replication status
> 
> 
> 
> MASTER NODE    MASTER VOL     MASTER BRICK         SLAVE USER    SLAVE                             SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
> -----------------------------------------------------------------------------------------------------------------------------------------------------
> eqappsrvp01    gitlab_data    /data/gitlab_data    root          ssh://eqappsrvd02::gitlab_data    N/A           Stopped    N/A             N/A
> eqappsrvp02    gitlab_data    /data/gitlab_data    root          ssh://eqappsrvd02::gitlab_data    N/A           Stopped    N/A             N/A
> 
> [root@eqappsrvp01 mule1]# uname -a
> 
> Linux eqappsrvp01 3.10.0-327.36.3.el7.x86_64 #1 SMP Thu Oct 20 
> 04:56:07 EDT
> 2016 x86_64 x86_64 x86_64 GNU/Linux
> 
> [root@eqappsrvp01 mule1]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 7.2 (Maipo)
> =====
> 
> 
> 
> I searched the internet and found that Red Hat Bugzilla bug 1342431 seems to
> address this problem; according to its status it should be fixed in
> 3.7.12, but in my version 3.7.14 it still exists.
> 
> 
> 
> Thanks
> 
> 
> 
> Ping.
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] question about nfs lookup result

2016-11-16 Thread Soumya Koduri



On 11/16/2016 06:38 PM, Pranith Kumar Karampuri wrote:

Added people who know nfs code.

On Wed, Nov 16, 2016 at 6:21 PM, jin deng > wrote:

Hi all,
I'm reading the code of 3.6.9 version and got a question.And I'm
not very familiar with the code,so I have to confirm it(I checked
the master branch,it's same with 3.6.9).

The question is about the 'lookup' operation of NFS.And i'm with
this code flow:

nfs3_lookup (nfs3.c) ==> nfs3_fh_resolve_and_resume
==> nfs3_fh_resolve_root ==> nfs3_fh_resolve_resume
==> nfs3_fh_resolve_entry ==> nfs3_fh_resolve_entry_hard.

Now enter the "nfs_entry_loc_fill" and return with -1 which means
the "parent" not found,so we have to do hard resolve about the
parent. After doing a hard resolve,we get into the callback
"nfs3_fh_resolve_inode_lookup_cbk".And the callback has the
following code:

>>> memcpy (&cs->stbuf, buf, sizeof(*buf));
>>> memcpy (&cs->postparent, buf, sizeof(*postparent))


This must have been done (and is required) in case this was the last
entry (/inode) to be looked up.




I think we've just done a parent resolve,how could we assign the
parent result into the "stbuf" and "postparent".The later two should
be the information of the child file/directory.Do i made a
misunderstand?


In case it was not the last entry, we fall through to the code below in
"nfs3_fh_resolve_inode_lookup_cbk":


if (cs->resolventry)
nfs3_fh_resolve_entry_hard (cs);

Callback is "nfs3_fh_resolve_entry_lookup_cbk()" where in cs->stbuf and 
cs->postparent get overridden with the values corresponding to the child 
entry.


Thanks,
Soumya



Thanks advance for your help.

___
Gluster-users mailing list
Gluster-users@gluster.org 
http://www.gluster.org/mailman/listinfo/gluster-users





--
Pranith

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Weekly Community Meeting - 2016-11-16

2016-11-16 Thread Kaushal M
On Wed, Nov 16, 2016 at 2:22 PM, Kaushal M  wrote:
> I forgot to send this reminder out earlier. Please add your updates
> and any topics of discussion to
> https://public.pad.fsfe.org/p/gluster-community-meetings .
>
> The meeting starts in ~3 hours from now.
>
> ~kaushal

Hi All,

Not a lot of updates or topics of discussion this week. Seems like
everyone's already lost interest in just over 3 weeks.

We did discuss a lot about what a release means for us. You can read
about it more in the minutes and logs. This discussion will also come
up soon on the mailing lists as well.
We also discussed replacements for FSFE etherpad. As a result, next
week we'll be trying out hackmd[1].

The logs for today's meeting are available at [2], [3] and [4].

Nigel will be your host for the next meeting. Add your updates and topics at [1].
See you all then.

Cheers,
Kaushal

[1]: https://hackmd.io/CwBghmCsCcDGBmBaApgDlqxwBMYDMiqkqS0wxAjAEbLIDssVIQA=?both
[2]: Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-16/weekly_community_meeting_2016-11-16.2016-11-16-12.02.html
[3]: Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-16/weekly_community_meeting_2016-11-16.2016-11-16-12.02.txt
[4]: Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-11-16/weekly_community_meeting_2016-11-16.2016-11-16-12.02.log.html

## Topics of Discussion

### Next weeks meeting host

- nigelb

### Open floor

- FSFE etherpad replacement
- Try hackmd.io for next meeting
- [nigelb] What is a release?
- Tagging?
- Blog announcement?
- What constitutes a release?
- Just tag+tarball
- Packages, docs, upgrade guides also
- [nigelb] release is not 'when our work is done' but 'when users can
consume our work'
- _Discussion will be continued on mailing lists_

## Updates

> NOTE : Updates will not be discussed during meetings. Any important or 
> noteworthy update will be announced at the end of the meeting

### Releases

 GlusterFS 4.0

- Tracker bug :
https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-4.0
- Roadmap : https://www.gluster.org/community/roadmap/4.0/
- Updates:
  - GD2 - 
https://www.gluster.org/pipermail/gluster-devel/2016-November/051532.html

 GlusterFS 3.9

- Maintainers : pranithk, aravindavk, dblack
- Current release : 3.9.0rc2
- Next release : 3.9.0
  - Release date : End of Sept 2016
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.0
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2=glusterfs-3.9.0_resolved=1
- Roadmap : https://www.gluster.org/community/roadmap/3.9/
- Updates:
  - Release has been tagged. Announcement pending.

 GlusterFS 3.8

- Maintainers : ndevos, jiffin
- Current release : 3.8.5
- Next release : 3.8.6
  - Release date : 10 November 2016 - probably Friday 18
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.8.6
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2=glusterfs-3.8.6_resolved=1
- Updates:
  - 3.8.6 has been delayed a little due to maintainer travelling
  - a little more delay for the 3.9 release ([ndevos] building
packages and setting up CentOS Storage SIG repo)

 GlusterFS 3.7

- Maintainers : kshlm, samikshan
- Current release : 3.7.17
- Next release : 3.7.18
  - Release date : 30 November 2016
- Tracker bug : https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.18
- Open bugs : 
https://bugzilla.redhat.com/showdependencytree.cgi?maxdepth=2=glusterfs-3.7.18_resolved=1
- Updates:
  - None this week.
  - Still on track for 3.7.18.

### Related projects and efforts

 Community Infra

- Now that 3.9 is released, we'll begin work on improving Gerrit,
including upgrading the host VM to Centos 7.
- http://fstat.gluster.org/ is live!
- [nigelb] Progress on dbench test issues - seems to be a write-ahead bug

 Samba

- _None_

 Ganesha

- _None_

 Containers

- _None_

 Testing

- _None_

 Others

- [atinm] Updates on GlusterD-1.0
https://www.gluster.org/pipermail/gluster-devel/2016-November/051529.html


### Action Items from last week

- Saravanakmr will take up the task for finding and collecting all
gluster etherpads on FSFE etherpad.
- Mail sent. 
https://www.gluster.org/pipermail/gluster-devel/2016-November/051450.html
- kshlm will ask for feedback on the trial run of the new meeting format
- Not done.
- kshlm to send out an email about Etherpad+Wiki after it's done for
the first time.
- Not done.

## Announcements

### New announcements
_None_

### Regular announcements

- If you're attending any event/conference please add the event and
yourselves to Gluster attendance of events:
http://www.gluster.org/events (replaces
https://public.pad.fsfe.org/p/gluster-events)
- Put (even minor) interesting topics on
https://public.pad.fsfe.org/p/gluster-weekly-news
- Remember to add your updates to the next meetings agenda.
___
Gluster-users mailing 

Re: [Gluster-users] question about nfs lookup result

2016-11-16 Thread Pranith Kumar Karampuri
Added people who know nfs code.

On Wed, Nov 16, 2016 at 6:21 PM, jin deng  wrote:

> Hi all,
> I'm reading the code of 3.6.9 version and got a question.And I'm not
> very familiar with the code,so I have to confirm it(I checked the master
> branch,it's same with 3.6.9).
>
> The question is about the 'lookup' operation of NFS.And i'm with this
> code flow:
>
> nfs3_lookup (nfs3.c) ==> nfs3_fh_resolve_and_resume
> ==> nfs3_fh_resolve_root ==> nfs3_fh_resolve_resume
> ==> nfs3_fh_resolve_entry ==> nfs3_fh_resolve_entry_hard.
>
> Now enter the "nfs_entry_loc_fill" and return with -1 which means the
> "parent" not found,so we have to do hard resolve about the parent. After
> doing a hard resolve,we get into the callback 
> "nfs3_fh_resolve_inode_lookup_cbk".And
> the callback has the following code:
>
> >>> memcpy (&cs->stbuf, buf, sizeof(*buf));
> >>> memcpy (&cs->postparent, buf, sizeof(*postparent))
>
> I think we've just done a parent resolve,how could we assign the parent
> result into the "stbuf" and "postparent".The later two should be the
> information of the child file/directory.Do i made a misunderstand?
>
> Thanks advance for your help.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] question about info and info.tmp

2016-11-16 Thread Atin Mukherjee
On Tue, Nov 15, 2016 at 1:53 PM, songxin  wrote:

> ok, thank you.
>
>
>
>
> On 2016-11-15 16:12:34, "Atin Mukherjee" wrote:
>
>
>
> On Tue, Nov 15, 2016 at 12:47 PM, songxin  wrote:
>
>>
>> Hi Atin,
>>
>> I think the root cause is in the function glusterd_import_friend_volume
>> as below.
>>
>> int32_t
>> glusterd_import_friend_volume (dict_t *peer_data, size_t count)
>> {
>> ...
>> ret = glusterd_volinfo_find (new_volinfo->volname, &old_volinfo);
>> if (0 == ret) {
>> (void) gd_check_and_update_rebalance_info (old_volinfo,
>>new_volinfo);
>> (void) glusterd_delete_stale_volume (old_volinfo,
>> new_volinfo);
>> }
>> ...
>> ret = glusterd_store_volinfo (new_volinfo,
>> GLUSTERD_VOLINFO_VER_AC_NONE);
>> if (ret) {
>> gf_msg (this->name, GF_LOG_ERROR, 0,
>> GD_MSG_VOLINFO_STORE_FAIL, "Failed to store "
>> "volinfo for volume %s", new_volinfo->volname);
>> goto out;
>> }
>> ...
>> }
>>
>> glusterd_delete_stale_volume will remove the info and bricks/* and the
>> glusterd_store_volinfo will create the new one.
>> But if glusterd is killed before the rename, the info file is left empty.
>>
>> And glusterd will then fail to start because the info file is empty the next
>> time you start glusterd.
>>
>> Any idea, Atin?
>>
>
> Give me some time, will check it out, but reading at this analysis looks
> very well possible if a volume is changed when the glusterd was done on
> node a and when the same comes up during peer handshake we update the
> volinfo and during that time glusterd goes down once again. I'll confirm it
> by tomorrow.
>
>
I checked the code and it does look like you have got the right RCA for the
issue which you simulated through those two scripts. However, this can happen
even when you try to create a fresh volume: if glusterd goes down while writing
the content into the store, before renaming the info.tmp file, you get into the
same situation.

I'd really need to think through if this can be fixed. Suggestions are
always appreciated.



>
> BTW, excellent work Xin!
>
>
>> Thanks,
>> Xin
>>
>>
>> On 2016-11-15 12:07:05, "Atin Mukherjee" wrote:
>>
>>
>>
>> On Tue, Nov 15, 2016 at 8:58 AM, songxin  wrote:
>>
>>> Hi Atin,
>>> I have some clues about this issue.
>>> I could reproduce this issue use the scrip that mentioned in
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1308487 .
>>>
>>
>> I really appreciate your help in trying to nail down this issue. While I
>> am at your email and going through the code to figure out the possible
>> cause for it, unfortunately I don't see any script in the attachment of the
>> bug.  Could you please cross check?
>>
>>
>>>
>>> After I added some debug print,which like below, in glusterd-store.c and
>>> I found that the /var/lib/glusterd/vols/xxx/info and
>>> /var/lib/glusterd/vols/xxx/bricks/* are removed.
>>> But other files in /var/lib/glusterd/vols/xxx/ will not be remove.
>>>
>>> int32_t
>>> glusterd_store_volinfo (glusterd_volinfo_t *volinfo,
>>> glusterd_volinfo_ver_ac_t ac)
>>> {
>>> int32_t ret = -1;
>>>
>>> GF_ASSERT (volinfo)
>>>
>>> ret = access("/var/lib/glusterd/vols/gv0/info", F_OK);
>>> if(ret < 0)
>>> {
>>> gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "info is not
>>> exit(%d)", errno);
>>> }
>>> else
>>> {
>>> ret = stat("/var/lib/glusterd/vols/gv0/info", &buf);
>>> if(ret < 0)
>>> {
>>> gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "stat
>>> info error");
>>> }
>>> else
>>> {
>>> gf_msg (THIS->name, GF_LOG_ERROR, 0, 0, "info
>>> size is %lu, inode num is %lu", buf.st_size, buf.st_ino);
>>> }
>>> }
>>>
>>> glusterd_perform_volinfo_version_action (volinfo, ac);
>>> ret = glusterd_store_create_volume_dir (volinfo);
>>> if (ret)
>>> goto out;
>>>
>>> ...
>>> }
>>>
>>> So it is easy to understand why  the info or 10.32.1.144.-opt-lvmdir-c2-
>>> brick sometimes is empty.
>>> It is becaue the info file is not exist, and it will be create by “fd =
>>> open (path, O_RDWR | O_CREAT | O_APPEND, 0600);” in function
>>> gf_store_handle_new.
>>> And the info file is empty before rename.
>>> So the info file is empty if glusterd shutdown before rename.
>>>
>>>
>>
>>> My question is following.
>>> 1.I did not find the point the info is removed.Could you tell me the
>>> point where the info and /bricks/* are removed?
>>> 2.why the file info and bricks/* is removed?But other files in 
>>> var/lib/glusterd/vols/xxx/
>>> are not be removed?
>>>
>>
>> AFAIK, we never delete the 

[Gluster-users] question about nfs lookup result

2016-11-16 Thread jin deng
Hi all,
I'm reading the code of version 3.6.9 and have a question. I'm not very
familiar with the code, so I need to confirm it (I checked the master branch;
it's the same as 3.6.9).

The question is about the 'lookup' operation of NFS, and I'm looking at this
code flow:

nfs3_lookup (nfs3.c) ==> nfs3_fh_resolve_and_resume
==> nfs3_fh_resolve_root ==> nfs3_fh_resolve_resume
==> nfs3_fh_resolve_entry ==> nfs3_fh_resolve_entry_hard.

Now we enter "nfs_entry_loc_fill" and it returns -1, which means the
"parent" was not found, so we have to do a hard resolve of the parent. After
doing a hard resolve, we get into the callback
"nfs3_fh_resolve_inode_lookup_cbk", which has the following code:

>>> memcpy (&cs->stbuf, buf, sizeof(*buf));
>>> memcpy (&cs->postparent, buf, sizeof(*postparent))

I think we've just done a parent resolve, so how can we assign the parent's
result to "stbuf" and "postparent"? The latter two should hold the
information of the child file/directory. Am I misunderstanding something?

Thanks in advance for your help.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes from yesterday's Gluster Community Bug Triage meeting (Nov 16 2016)

2016-11-16 Thread Soumya Koduri

Hi,

Sorry for the delay. Please find the minutes of yesterday's Gluster 
Community Bug Triage meeting below.


Meeting summary

agenda: https://public.pad.fsfe.org/p/gluster-bug-triage 
(skoduri, 12:00:57)


Roll Call (skoduri, 12:01:03)
Next weeks meeting host (skoduri, 12:03:48)
Next week chair - hgowtham (skoduri, 12:05:01)

ndevos need to decide on how to provide/use debug builds (skoduri, 
12:05:28)
ACTION: ndevos need to decide on how to provide/use debug 
builds (skoduri, 12:06:45)


jiffin will try to add an error for bug ownership to check-bugs.py 
(skoduri, 12:07:17)
ACTION: jiffin will try to add an error for bug ownership to 
check-bugs.py (skoduri, 12:07:37)


Group Triage (skoduri, 12:07:52)
https://public.pad.fsfe.org/p/gluster-bugs-to-triage (skoduri, 
12:08:06)


Open Floor (skoduri, 12:13:17)
Discussions wrt to other pad service shall be discussed in 
gluster community meeting happening on wednesday (skoduri, 12:16:00)


http://download.gluster.org/pub/gluster/glusterfs/static-analysis/ 
(kkeithley, 12:18:21)
beta.etherpad.org being hosted by rackspace could be used in 
place of public.pad.fsfe (skoduri, 12:21:43)




Meeting ended at 12:23:28 UTC (full logs).

Action items

ndevos need to decide on how to provide/use debug builds
jiffin will try to add an error for bug ownership to check-bugs.py



Action items, by person

ndevos
ndevos need to decide on how to provide/use debug builds



People present (lines said)

skoduri (48)
kkeithley (16)
Saravanakmr (9)
ndevos (7)
hgowtham (5)
zodbot (3)
rafi (1)
nigelb (1)



Thanks,
Soumya
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster File Abnormalities

2016-11-16 Thread Nithya Balachandran
On 16 November 2016 at 00:06, Kevin Leigeb  wrote:

> I guess I don’t understand why in some cases we only have these link-to
> files and not the new data files. Nothing has been overwritten, unless it
> was by gluster.
>
>
>
Could the linkto files have been copied by rsync, overwriting the
original data files? Was rsync run consecutively on each brick?

How can I tell if a file shows as the initial size, but isn’t actually
> using disk space?
>
>
>
du -h <file> will show you the actual disk usage


> What should the contents of one of these link-to files look like if I try
> to open it with less or some other program?
>
>
>
It will probably show junk characters as it is not a valid file.
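
If it helps, one way to spot linkto files directly on a brick is something like
the following (run on the brick, not on the mount; the brick path and file name
are placeholders):

  # DHT linkto files have mode 1000 (shown as ---------T by ls) and carry the
  # trusted.glusterfs.dht.linkto xattr; they use no real data blocks.
  find /path/to/brick -type f -perm 1000 \
       -exec getfattr -n trusted.glusterfs.dht.linkto --absolute-names {} \; 2>/dev/null

  ls -l /path/to/brick/some/file    # apparent size (may equal the original file's size)
  du -h /path/to/brick/some/file    # actual blocks used (0 for a pure linkto file)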

It is highly recommended that you perform all such operations from the
gluster mount in future as there are several internal operations that are
performed.

Thanks,
Nithya

>
>
> *From:* Nithya Balachandran [mailto:nbala...@redhat.com]
> *Sent:* Tuesday, November 15, 2016 10:55 AM
> *To:* Kevin Leigeb 
>
> *Subject:* Re: [Gluster-users] Gluster File Abnormalities
>
>
>
>
>
>
>
> On 15 November 2016 at 22:15, Nithya Balachandran 
> wrote:
>
>
>
>
>
> On 15 November 2016 at 21:59, Kevin Leigeb  wrote:
>
> Nithya -
>
>
>
> Thanks for the reply, I will send this at the top to keep the thread from
> getting really ugly.
>
>
>
> We did indeed copy from the individual bricks in an effort to speed up the
> copy. We had one rsync running from each brick to the mount point for the
> new cluster. As stated, we skipped all files with size 0 so that stub files
> wouldn’t be copied. Some files with permissions of 1000 (equivalent to
> -T) were larger than 0 and were also copied.
>
>
>
> I’m mostly trying to ascertain why such files would exist (failed
> rebalance?) and what we can do about this problem.
>
>
>
> Rebalance creates these linkto files as targets for the file migration -
> while the file size displayed would be nonzero, the actual disk usage for
> such files would be zero.
>
>
>
> Missed one crucial point - the rebalance sets the size of these linkto
> files to that of the original file.
>
>
>
> The rebalance process first creates these files and then checks if it is
> possible to migrate the file. If not, it just leaves it there as it is not
> using up any space and will be ignored by gluster clients. It is not a
> failed rebalance - sometimes some files are not migrated because of space
> constraints and they are considered to have been skipped.
>
>
>
> This behaviour was modified as part of http://review.gluster.org/#/c/12347.
> We now reset the size to 0.
>
>
>
> I'm afraid, if those files were overwritten by their linkto files, the
> only way forward would be to restore from a backup.
>
>
>
> Regards,
>
> Nithya
>
>
>
>
>
>
>
>
>
>
>
> Thanks,
>
> Kevin
>
>
>
> *From:* Nithya Balachandran [mailto:nbala...@redhat.com]
> *Sent:* Tuesday, November 15, 2016 10:21 AM
> *To:* Kevin Leigeb 
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Gluster File Abnormalities
>
>
>
>
>
> Hi kevin,
>
>
>
> On 15 November 2016 at 20:56, Kevin Leigeb  wrote:
>
> All -
>
>
>
> We recently moved from an old cluster running 3.7.9 to a new one running
> 3.8.4. To move the data we rsync’d all files from the old gluster nodes
> that were not in the .glusterfs directory and had a size of greater-than
> zero (to avoid stub files) through the front-end of the new cluster.
>
>
>
> Did you rsync via the mount point or directly from the bricks?
>
>
>
> However, it has recently come to our attention that some of the files
> copied over were already “corrupted” on the old back-end. That is, these
> files had permissions of 1000 (like a stub file) yet were the full size of
> the actual file.
>
>
>
> Does this correspond to a file permission of ---------T when viewed using ls? If
> yes, these are dht linkto files. They were possibly created during a
> rebalance and left behind because the file was skipped. They should be
> ignored when accessing the gluster volume via the mount point.
>
>
>
> In some cases, these were the only copies of the file that existed at all
> on any of the bricks, in others, another version of the file existed that
> was also full size and had the proper permissions. In some cases, we
> believe, these correct files were rsync’d but then overwritten by the 1000
> permission version resulting in a useless file on the new cluster.
>
>
>
> This sounds like you were running rsync directly on the bricks. Can you
> please confirm if that is the case?
>
>
>
> These files are thought by the OS to be binaries when trying to open them
> using vim, but they are actually text files (or at least were originally).
> We can cat the file to see that it has a length of zero and so far that is
> our only reliable test to find which files are indeed corrupted (find .
> -type f | xargs wc -l). 

Re: [Gluster-users] question about glusterfs version migrate

2016-11-16 Thread Serkan Çoban
Below link has changes in each release.
https://github.com/gluster/glusterfs/tree/release-3.7/doc/release-notes
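
If you prefer to read them locally, something like this should work (the
per-version file names are an assumption, so check the directory listing first):

  git clone --branch release-3.7 --depth 1 https://github.com/gluster/glusterfs.git
  ls glusterfs/doc/release-notes/        # one file per 3.7.x release
  less glusterfs/doc/release-notes/3.7.10.md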


On Wed, Nov 16, 2016 at 11:49 AM, songxin  wrote:
> Hi,
> I am planning to migrate from gluster 3.7.6 to gluster 3.7.10.
> So I have two questions below.
> 1.How could I know the changes in gluster 3.7.6 compared to gluster 3.7.10?
> 2.Does my application need any NBC changes?
>
> Thanks,
> Xin
>
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Last week in GD2 - 2016-11-16

2016-11-16 Thread Kaushal M
Hi All,

There was one big change in GD2 in the past week.

Prashanth completed embedding etcd into GD2 [1]. There is still a
little work remaining to store/restore etcd configuration on GD2
restarts. Once that is done, we'll do a new release of GD2.

I've continued working on volgen at [2]. I can now build a dependency
graph for a brick. This is not the final graph though, as it needs to be
linearized to get the volume graph. I'll be working on this next.

In addition, the test program now outputs a Graphviz dotfile of the
created graph, which makes it easier to visualize the graph during
development. A sample generated graph can be viewed at [3].
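
(If you want to render such a dotfile yourself, Graphviz does it in one step;
the file name here is just an example:)

  dot -Tsvg brick-graph.dot -o brick-graph.svg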

I'm still planning the hangout on volgen, but it will be done (sometime soon).

Thanks,
Kaushal

[1]: https://github.com/gluster/glusterd2/pull/148
[2]: https://github.com/kshlm/glusterd2-volgen
[3]: 
https://github.com/kshlm/glusterd2-volgen/blob/volgen-systemd-style/brick-graph.svg
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Weekly Community Meeting - 2016-11-16

2016-11-16 Thread Kaushal M
I forgot to send this reminder out earlier. Please add your updates
and any topics of discussion to
https://public.pad.fsfe.org/p/gluster-community-meetings .

The meeting starts in ~3 hours from now.

~kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] question about glusterfs version migrate

2016-11-16 Thread songxin
Hi,
I am planning to migrate from gluster 3.7.6 to gluster 3.7.10.
So I have two questions below.
1.How could I know the changes in gluster 3.7.6 compared to gluster 3.7.10?
2.Does my application need any NBC changes?


Thanks,
Xin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users