Re: [Gluster-devel] [Gluster-users] Hole punch support

2016-11-11 Thread Ankireddypalle Reddy
Ravi,
   Thanks for the explanation.  Filed a bug.

https://bugzilla.redhat.com/show_bug.cgi?id=1394298

Thanks and Regards,
Ram
From: Ravishankar N [mailto:ravishan...@redhat.com]
Sent: Friday, November 11, 2016 10:01 AM
To: Ankireddypalle Reddy; gluster-us...@gluster.org; Gluster Devel
Subject: Re: [Gluster-users] Hole punch support

+ gluster-devel.

Can you raise an RFE bug for this and assign it to me?
The thing is, FALLOC_FL_PUNCH_HOLE must be used in tandem with
FALLOC_FL_KEEP_SIZE, and the latter is currently broken in gluster because
there are some conversions done in iatt_from_stat() for quota to work. I'm
not sure if these are needed anymore, or can be circumvented, but it is
about time we looked into it.
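For reference, both flags can be exercised from the shell with util-linux fallocate(1), whose --punch-hole option issues FALLOC_FL_PUNCH_HOLE together with FALLOC_FL_KEEP_SIZE. A minimal sketch against a plain local file (path made up), showing that st_size is preserved while allocated blocks drop:

```shell
# Create a 1 MiB file, then punch a 256 KiB hole in the middle.
# --punch-hole implies --keep-size, so the file size must stay 1048576;
# only the allocated block count is reduced (on filesystems that support it).
dd if=/dev/zero of=/tmp/punchtest bs=1M count=1 status=none
fallocate --punch-hole --offset $((256 * 1024)) --length $((256 * 1024)) /tmp/punchtest
stat -c 'size=%s blocks=%b' /tmp/punchtest
```

On a gluster FUSE mount this is exactly the combination that currently misbehaves because of the iatt_from_stat() conversions mentioned above.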

Thanks,
Ravi

On 11/11/2016 07:55 PM, Ankireddypalle Reddy wrote:
Hi,
   Any idea when hole punch support will be available in glusterfs?

Thanks and Regards,
Ram
***Legal Disclaimer***
"This communication may contain confidential and privileged material for the
sole use of the intended recipient. Any unauthorized review, use or distribution
by others is strictly prohibited. If you have received the message by mistake,
please advise the sender by reply email and delete the message. Thank you."
**



___
Gluster-users mailing list
gluster-us...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Feedback on DHT option "cluster.readdir-optimize"

2016-11-11 Thread Kyle Johnson
Sure, I'd be happy to supply some more details.  See below.

On 11/10/2016 01:17 AM, Nithya Balachandran wrote:
> 
> 
> On 8 November 2016 at 20:21, Kyle Johnson wrote:
> 
> Hey there,
> 
> We have a number of processes which daily walk our entire directory
> tree and perform operations on the found files.
> 
> Pre-gluster, this process was able to complete within 24 hours of
> starting.  After outgrowing that single server and moving to a
> gluster setup (two bricks, two servers, distribute, 10gig uplink),
> the process became unusable.
> 
> After turning this option on, we were back to normal run times, with
> the process completing within 24 hours.
> 
> Our data is heavily nested in a large number of subfolders under
> /media/ftp.
> 
> 
> Thanks for getting back to us - this is very good information. Can you
> provide a few more details?
> 
> How deep is your directory tree and roughly how many directories do you
> have at each level? 

It depends on the directory. For most of them, such as /media/ftp/dig_dis,
there is only one level of nesting, e.g.
/media/ftp/dig_dis/4058765004173/ is one of the 48,000 directories.

With other directories, such as /media/ftp/believe_digital, there are
two levels of nesting.
/media/ftp/believe_digital/20160225/3614597218815.  In this case, there
are 463 top level (date) directories, and then a huge number of subdirs
under them.

In both cases, once you get to the bottom of the directory tree, there
are generally no more than 20 files in the given directory, and they're
somewhat large (flac files).

While dig_dis is 15T of files, believe_digital is 26T of files.

Our processes operate on the individual subdirs under /media/ftp/, such
as /media/ftp/dig_dis.  They don't start at /media/ftp.


> Are all your files in the lowest level dirs or do they exist on several
> levels?

They're all in the lowest, though the number of nested directories
varies between 1 and 2, as seen above.

> Would you be willing to provide the gluster volume info output for this
> volume?
> 

Sure.


[root@colossus dig_dis]# gluster volume info ftp

Volume Name: ftp
Type: Distribute
Volume ID: f3f2b222-575c-4c8d-92f1-e640fd7edfbb
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.110.1:/tank/bricks/ftp
Brick2: 192.168.110.2:/ftp/bricks/ftp
Options Reconfigured:
cluster.readdir-optimize: on
performance.client-io-threads: on
cluster.weighted-rebalance: off
performance.readdir-ahead: off
nfs.disable: on


With some additional info...


110.1 is a FreeBSD 10.3 box with ZFS-backed bricks.
110.2 is a CentOS box on an older kernel (2.6.32) with ZFS-backed bricks.

The .110 network is directly connected between the two hosts on 10GE
NICs with CAT6.  Each host has 2 NICs and the NICs are LAGG'd (bonded)
together in mode 4.

weighted-rebalance is turned off because of
https://bugzilla.redhat.com/show_bug.cgi?id=1356076

readdir-ahead was turned off due to a tip in
https://bugzilla.redhat.com/show_bug.cgi?id=1369364

I don't specifically remember tweaking client-io-threads.

Hope this helps,
Kyle
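For anyone wanting to reproduce the comparison quoted below, a sketch of the toggle (volume name and paths taken from this thread; it needs a live cluster, so treat it as illustrative only):

```shell
# Flip the DHT option and compare directory-listing time on the FUSE mount.
gluster volume set ftp cluster.readdir-optimize off
time ls /media/ftp/dig_dis | wc -l   # every brick returns directory dentries

gluster volume set ftp cluster.readdir-optimize on
time ls /media/ftp/dig_dis | wc -l   # only one brick returns directory dentries
```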

> 
> A subset of our data:
> 
> 15T of files in 48163 directories under /media/ftp/dig_dis.
> 
> Without readdir-optimize:
> 
> [root@colossus dig_dis]# time ls|wc -l
> 48163
> 
> real    13m1.582s
> user    0m0.294s
> sys     0m0.205s
> 
> 
> With readdir-optimize:
> 
> [root@colossus dig_dis]# time ls | wc -l
> 48163
> 
> real    0m23.785s
> user    0m0.296s
> sys     0m0.108s
> 
> 
> Long story short - this option is super important to me as it
> resolved an issue that would have otherwise made me move my data off
> of gluster.
> 
> 
> Thank you for all of your work,
> 
> Kyle
> 
> 
> 
> 
> 
> On 11/07/2016 10:07 PM, Raghavendra Gowdappa wrote:
> 
> Hi all,
> 
> We have an option called "cluster.readdir-optimize" which
> alters the behavior of readdirp in DHT. This value affects how
> storage/posix treats dentries corresponding to directories (not
> for files).
> 
> When this value is on,
> * DHT asks only one subvol/brick to return dentries
> corresponding to directories.
> * Other subvols/bricks filter dentries corresponding to
> directories and send only dentries corresponding to files.
> 
> When this value is off (this is the default value),
> * All subvols return all dentries stored on them. IOW, bricks
> don't filter any dentries.
> * Since a directory has one dentry representing it on each
> subvol, dht (loaded on client) picks up dentry only from hashed
> subvol.
> 
> Note that irrespective of value of this option, _all_ subvols
> return dentries corresponding to files which are stored on them.
> 
> This option was introduced to boost performance of 

Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Gandalf Corvotempesta
2016-11-11 16:09 GMT+01:00 Sander Eikelenboom :
> I think that could also be useful
> when trying to recover from total disaster, where glusterfs bricks are
> broken down and you end up with loose bricks. At least you would be able to
> keep the filesystem data, remove the .glusterfs metadata dir. Then you can
> use low level filesystem tools and rebuild your glusterfs volume and brick
> in place, instead of having to move it out, which could be difficult
> data-size wise.

This would be awesome.


Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Sander Eikelenboom

Friday, November 11, 2016, 4:28:36 PM, you wrote:

> Feature requests go in Bugzilla anyway.
> Create your volume with the populated brick as brick one. Start it and "heal 
> full". 

gluster> volume create testvolume transport tcp 192.168.1.1:/mnt/glusterfs/testdata/brick force
volume create: private: success: please start the volume to access data
gluster> volume heal testvolume full
Launching heal operation to perform full self heal on volume testvolume has 
been unsuccessful on bricks that are down. Please check if all brick processes 
are running.
gluster> volume start testvolume
volume start: testvolume: success
gluster> volume heal testvolume full
Launching heal operation to perform full self heal on volume testvolume has 
been unsuccessful on bricks that are down. Please check if all brick processes 
are running.

So it seems healing only works on volumes with 2 or more bricks,
so that doesn't seem to work out very well.

--
Sander




> On November 11, 2016 7:12:03 AM PST, Sander Eikelenboom 
>  wrote:
>  

> Friday, November 11, 2016, 3:47:26 PM, you wrote:



>  Reposting to gluster-users as this is not development related. 


> I posted @devel, because in the most likely case of "No", it could become a 
> feature request ;-)

> --
> Sander



>  On November 11, 2016 6:32:49 AM PST, Pranith Kumar Karampuri 
>  wrote:
>  






>  On Fri, Nov 11, 2016 at 8:01 PM, Pranith Kumar Karampuri 
>  wrote:







>  On Fri, Nov 11, 2016 at 6:24 PM, Saravanakumar Arumugam 
>  wrote:




>  
>  On 11/11/2016 06:03 PM, Sander Eikelenboom wrote:
>  
>  L.S.,
>  
>  I was wondering if it would be possible to turn an existing filesystem with 
> data
>  (ext4 with files and dirs) into a GlusterFS brick ?
>  
>  It is not possible, at least I am not aware of any such solution yet.
>  
>  
>  I can't find much info about it except the following remark at [1] which 
> seems
>  to indicate it is not possible yet:
>  
>          Data import tool
>  
>          Create a tool which will allow importing already existing data in 
> the brick
>          directories into the gluster volume.
>          This is most likely going to be a special rebalance process.
>  
>  So that would mean i would always have to:
>  - first create an GlusterFS brick on an empty filesystem
>  - after that copy all the data into the mounted GlusterFS brick
>  - never ever copy something into the filesystem (or manipulate it otherwise)
>    used as a GlusterFS brick directly (without going through a GlusterFS 
> client mount)
>  
>  because there is no checking / healing between GlusterFS's view on the data 
> and the data in the
>  underlying brick filesystem ?
>  
>  Is this a correct view ?
>  
>  
>  you are right !
>  Once the data is copied into Gluster, it internally creates meta-data about 
> data(file/dir).
>  Unless you copy it via Gluster mount point, it is NOT possible to create 
> such meta-data.






>  No, it is possible. You just need to be a bit creative.




>  Could you let me know how many such bricks you have which you want to 
> convert to glusterfs. It seems like you want replication as well. So if you 
> give me all this information. With your help may be we can at least come up 
> with a document on how this can be done.






>  Once the import is complete, whatever you are saying about not touching the 
> brick directly and doing everything from the mount point holds. But we can 
> definitely convert an existing ext4 directory structure into a volume.
>   



>   
>  
>  Thanks,
>  Saravana



>  



>  Gluster-devel mailing list
>  Gluster-devel@gluster.org
>  http://www.gluster.org/mailman/listinfo/gluster-devel
>  







>  -- 
>  Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Joe Julian
Feature requests go in Bugzilla anyway.

Create your volume with the populated brick as brick one. Start it and "heal 
full". 

On November 11, 2016 7:12:03 AM PST, Sander Eikelenboom  
wrote:
>
>Friday, November 11, 2016, 3:47:26 PM, you wrote:
>
>> Reposting to gluster-users as this is not development related. 
>
>I posted @devel, because in the most likely case of "No", it could
>become a 
>feature request ;-)
>
>--
>Sander
>
>
>> On November 11, 2016 6:32:49 AM PST, Pranith Kumar Karampuri
> wrote:
>>  
>
>
>
>
>> On Fri, Nov 11, 2016 at 8:01 PM, Pranith Kumar Karampuri
> wrote:
>
>
>
>
>
>> On Fri, Nov 11, 2016 at 6:24 PM, Saravanakumar Arumugam
> wrote:
>
>
>>  
>>  On 11/11/2016 06:03 PM, Sander Eikelenboom wrote:
>>  
>>  L.S.,
>>  
>>  I was wondering if it would be possible to turn an existing
>filesystem with data
>  (ext4 with files and dirs) into a GlusterFS brick ?
>>  
>>  It is not possible, at least I am not aware about any such solution
>yet.
>>  
>>  
>>  I can't find much info about it except the following remark at [1]
>which seems
>>  to indicate it is not possible yet:
>>  
>>          Data import tool
>>  
>>          Create a tool which will allow importing already existing
>data in the brick
>>          directories into the gluster volume.
>>          This is most likely going to be a special rebalance process.
>>  
>>  So that would mean i would always have to:
>>  - first create an GlusterFS brick on an empty filesystem
>>  - after that copy all the data into the mounted GlusterFS brick
>>  - never ever copy something into the filesystem (or manipulate it
>otherwise)
>>    used as a GlusterFS brick directly (without going through a
>GlusterFS client mount)
>>  
>>  because there is no checking / healing between GlusterFS's view on
>the data and the data in the
>>  underlying brick filesystem ?
>>  
>>  Is this a correct view ?
>>  
>>  
>>  you are right !
>>  Once the data is copied into Gluster, it internally creates
>meta-data about data(file/dir).
>>  Unless you copy it via Gluster mount point, it is NOT possible to
>create such meta-data.
>
>
>
>
>> No, it is possible. You just need to be a bit creative.
>
>
>> Could you let me know how many such bricks you have which you want to
>convert to glusterfs. It seems like you want replication as well. So if
>you give me all this information. With your help may be we can at least
>come up with a document on how this can be done.
>
>
>
>
>> Once the import is complete, whatever you are saying about not
>touching the brick directly and doing everything from the mount point
>holds. But we can definitely convert an existing ext4 directory
>structure into a volume.
>>  
>
>>  
>>  
>>  Thanks,
>>  Saravana
>
>>  
>>  ___
>>  Gluster-devel mailing list
>>  Gluster-devel@gluster.org
>>  http://www.gluster.org/mailman/listinfo/gluster-devel
>>  

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Sander Eikelenboom

Friday, November 11, 2016, 3:31:05 PM, you wrote:


> On Fri, Nov 11, 2016 at 6:24 PM, Saravanakumar Arumugam  
> wrote:


>  
>  On 11/11/2016 06:03 PM, Sander Eikelenboom wrote:
>  
>  L.S.,
>  
>  I was wondering if it would be possible to turn an existing filesystem with 
> data
>  (ext4 with files and dirs) into a GlusterFS brick ?
>  
>  It is not possible, at least I am not aware of any such solution yet.
>  
>  
>  I can't find much info about it except the following remark at [1] which 
> seems
>  to indicate it is not possible yet:
>  
>          Data import tool
>  
>          Create a tool which will allow importing already existing data in 
> the brick
>          directories into the gluster volume.
>          This is most likely going to be a special rebalance process.
>  
>  So that would mean i would always have to:
>  - first create an GlusterFS brick on an empty filesystem
>  - after that copy all the data into the mounted GlusterFS brick
>  - never ever copy something into the filesystem (or manipulate it otherwise)
>    used as a GlusterFS brick directly (without going through a GlusterFS 
> client mount)
>  
>  because there is no checking / healing between GlusterFS's view on the data 
> and the data in the
>  underlying brick filesystem ?
>  
>  Is this a correct view ?
>  
>  
>  you are right !
>  Once the data is copied into Gluster, it internally creates meta-data about 
> data(file/dir).
>  Unless you copy it via Gluster mount point, it is NOT possible to create 
> such meta-data.

> No, it is possible. You just need to be a bit creative.
> Could you let me know how many such bricks you have that you want to convert
> to glusterfs. It seems like you want replication as well. If you give me
> all this information, with your help maybe we can at least come up with a
> document on how this can be done.

Hi Saravanakumar,

Thanks for your swift reply.

Well, the most achievable workflow to me seems:
1) Start with one filesystem already filled with data
2) Let glusterfs create a glusterfs volume with only that FS as brick
3) Have some tool scan that volume/brick and check / compare the filesystem data
   with the glusterfs metadata, with an option to repair / generate the missing
   (or wrong) glusterfs metadata based on the filesystem data
4) If wished for, add other (empty) bricks
5) Start the gluster volume and/or healing / replication

What seems to be missing is a tool for (3).
I think that could also be useful when trying to recover from total disaster,
where glusterfs bricks are broken down and you end up with loose bricks. At
least you would be able to keep the filesystem data, remove the .glusterfs
metadata dir. Then you can use low level filesystem tools and rebuild your
glusterfs volume and brick in place, instead of having to move it out, which
could be difficult data-size wise.
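Loudly hypothetical, that workflow might look roughly like this on the command line (brick path and volume name borrowed from the transcript earlier in the thread; the second host is invented, and step 3 is precisely the missing piece):

```shell
# Sketch only -- not a supported procedure.
BRICK=/mnt/glusterfs/testdata/brick   # step 1: filesystem already holding data
rm -rf "$BRICK/.glusterfs"            # drop any stale gluster metadata
gluster volume create testvolume transport tcp 192.168.1.1:"$BRICK" force  # step 2
# step 3 would go here: a (currently nonexistent) tool that builds or repairs
# the .glusterfs metadata from the files already on the brick
gluster volume start testvolume
gluster volume add-brick testvolume replica 2 192.168.1.2:/bricks/testvolume  # step 4 (host invented)
gluster volume heal testvolume full   # step 5
```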

--
Sander

>  
>  
>  Thanks,
>  Saravana

>  
>  ___
>  Gluster-devel mailing list
>  Gluster-devel@gluster.org
>  http://www.gluster.org/mailman/listinfo/gluster-devel
>  





Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Sander Eikelenboom

Friday, November 11, 2016, 3:47:26 PM, you wrote:

> Reposting to gluster-users as this is not development related. 

I posted @devel, because in the most likely case of "No", it could become a 
feature request ;-)

--
Sander


> On November 11, 2016 6:32:49 AM PST, Pranith Kumar Karampuri 
>  wrote:
>  




> On Fri, Nov 11, 2016 at 8:01 PM, Pranith Kumar Karampuri 
>  wrote:





> On Fri, Nov 11, 2016 at 6:24 PM, Saravanakumar Arumugam  
> wrote:


>  
>  On 11/11/2016 06:03 PM, Sander Eikelenboom wrote:
>  
>  L.S.,
>  
>  I was wondering if it would be possible to turn an existing filesystem with 
> data
>  (ext4 with files and dirs) into a GlusterFS brick ?
>  
>  It is not possible, at least I am not aware of any such solution yet.
>  
>  
>  I can't find much info about it except the following remark at [1] which 
> seems
>  to indicate it is not possible yet:
>  
>          Data import tool
>  
>          Create a tool which will allow importing already existing data in 
> the brick
>          directories into the gluster volume.
>          This is most likely going to be a special rebalance process.
>  
>  So that would mean i would always have to:
>  - first create an GlusterFS brick on an empty filesystem
>  - after that copy all the data into the mounted GlusterFS brick
>  - never ever copy something into the filesystem (or manipulate it otherwise)
>    used as a GlusterFS brick directly (without going through a GlusterFS 
> client mount)
>  
>  because there is no checking / healing between GlusterFS's view on the data 
> and the data in the
>  underlying brick filesystem ?
>  
>  Is this a correct view ?
>  
>  
>  you are right !
>  Once the data is copied into Gluster, it internally creates meta-data about 
> data(file/dir).
>  Unless you copy it via Gluster mount point, it is NOT possible to create 
> such meta-data.




> No, it is possible. You just need to be a bit creative.


> Could you let me know how many such bricks you have which you want to convert 
> to glusterfs. It seems like you want replication as well. So if you give me 
> all this information. With your help may be we can at least come up with a 
> document on how this can be done.




> Once the import is complete, whatever you are saying about not touching the 
> brick directly and doing everything from the mount point holds. But we can 
> definitely convert an existing ext4 directory structure into a volume.
>  

>  
>  
>  Thanks,
>  Saravana

>  
>  ___
>  Gluster-devel mailing list
>  Gluster-devel@gluster.org
>  http://www.gluster.org/mailman/listinfo/gluster-devel
>  





Re: [Gluster-devel] [Gluster-users] Hole punch support

2016-11-11 Thread Ravishankar N

+ gluster-devel.

Can you raise an RFE bug for this and assign it to me?
The thing is, FALLOC_FL_PUNCH_HOLE must be used in tandem with
FALLOC_FL_KEEP_SIZE, and the latter is currently broken in gluster
because there are some conversions done in iatt_from_stat() for quota
to work. I'm not sure if these are needed anymore, or can be
circumvented, but it is about time we looked into it.


Thanks,
Ravi

On 11/11/2016 07:55 PM, Ankireddypalle Reddy wrote:


Hi,

   Any idea when hole punch support will be available in glusterfs?


Thanks and Regards,

Ram


Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Joe Julian
Reposting to gluster-users as this is not development related. 

On November 11, 2016 6:32:49 AM PST, Pranith Kumar Karampuri 
 wrote:
>On Fri, Nov 11, 2016 at 8:01 PM, Pranith Kumar Karampuri <
>pkara...@redhat.com> wrote:
>
>>
>>
>> On Fri, Nov 11, 2016 at 6:24 PM, Saravanakumar Arumugam <
>> sarum...@redhat.com> wrote:
>>
>>>
>>>
>>> On 11/11/2016 06:03 PM, Sander Eikelenboom wrote:
>>>
 L.S.,

 I was wondering if it would be possible to turn an existing
>filesystem
 with data
 (ext4 with files and dirs) into a GlusterFS brick ?

>>> It is not possible, at least I am not aware about any such solution
>yet.
>>>

 I can't find much info about it except the following remark at [1]
>which
 seems
 to indicate it is not possible yet:

 Data import tool

 Create a tool which will allow importing already existing
>data
 in the brick
 directories into the gluster volume.
 This is most likely going to be a special rebalance
>process.

 So that would mean i would always have to:
 - first create an GlusterFS brick on an empty filesystem
 - after that copy all the data into the mounted GlusterFS brick
 - never ever copy something into the filesystem (or manipulate it
 otherwise)
   used as a GlusterFS brick directly (without going through a
>GlusterFS
 client mount)

 because there is no checking / healing between GlusterFS's view on
>the
 data and the data in the
 underlying brick filesystem ?

 Is this a correct view ?

 you are right !
>>> Once the data is copied into Gluster, it internally creates
>meta-data
>>> about data(file/dir).
>>> Unless you copy it via Gluster mount point, it is NOT possible to
>create
>>> such meta-data.
>>>
>>
>> No, it is possible. You just need to be a bit creative.
>>
>> Could you let me know how many such bricks you have which you want to
>> convert to glusterfs. It seems like you want replication as well. So
>if you
>> give me all this information. With your help may be we can at least
>come up
>> with a document on how this can be done.
>>
>
>Once the import is complete, whatever you are saying about not touching
>the
>brick directly and doing everything from the mount point holds. But we
>can
>definitely convert an existing ext4 directory structure into a volume.
>
>
>>
>>
>>>
>>> Thanks,
>>> Saravana
>>>
>>>
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>
>>
>>
>>
>> --
>> Pranith
>>
>
>
>
>-- 
>Pranith
>
>
>
>
>___
>Gluster-devel mailing list
>Gluster-devel@gluster.org
>http://www.gluster.org/mailman/listinfo/gluster-devel

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Pranith Kumar Karampuri
On Fri, Nov 11, 2016 at 8:01 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Fri, Nov 11, 2016 at 6:24 PM, Saravanakumar Arumugam <
> sarum...@redhat.com> wrote:
>
>>
>>
>> On 11/11/2016 06:03 PM, Sander Eikelenboom wrote:
>>
>>> L.S.,
>>>
>>> I was wondering if it would be possible to turn an existing filesystem
>>> with data
>>> (ext4 with files and dirs) into a GlusterFS brick ?
>>>
>> It is not possible, at least I am not aware of any such solution yet.
>>
>>>
>>> I can't find much info about it except the following remark at [1] which
>>> seems
>>> to indicate it is not possible yet:
>>>
>>> Data import tool
>>>
>>> Create a tool which will allow importing already existing data
>>> in the brick
>>> directories into the gluster volume.
>>> This is most likely going to be a special rebalance process.
>>>
>>> So that would mean i would always have to:
>>> - first create an GlusterFS brick on an empty filesystem
>>> - after that copy all the data into the mounted GlusterFS brick
>>> - never ever copy something into the filesystem (or manipulate it
>>> otherwise)
>>>   used as a GlusterFS brick directly (without going through a GlusterFS
>>> client mount)
>>>
>>> because there is no checking / healing between GlusterFS's view on the
>>> data and the data in the
>>> underlying brick filesystem ?
>>>
>>> Is this a correct view ?
>>>
>>> you are right !
>> Once the data is copied into Gluster, it internally creates meta-data
>> about data(file/dir).
>> Unless you copy it via Gluster mount point, it is NOT possible to create
>> such meta-data.
>>
>
> No, it is possible. You just need to be a bit creative.
>
> Could you let me know how many such bricks you have which you want to
> convert to glusterfs. It seems like you want replication as well. So if you
> give me all this information. With your help may be we can at least come up
> with a document on how this can be done.
>

Once the import is complete, whatever you are saying about not touching the
brick directly and doing everything from the mount point holds. But we can
definitely convert an existing ext4 directory structure into a volume.


>
>
>>
>> Thanks,
>> Saravana
>>
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Pranith
>



-- 
Pranith

Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Pranith Kumar Karampuri
On Fri, Nov 11, 2016 at 6:24 PM, Saravanakumar Arumugam  wrote:

>
>
> On 11/11/2016 06:03 PM, Sander Eikelenboom wrote:
>
>> L.S.,
>>
>> I was wondering if it would be possible to turn an existing filesystem
>> with data
>> (ext4 with files and dirs) into a GlusterFS brick ?
>>
> It is not possible, at least I am not aware of any such solution yet.
>
>>
>> I can't find much info about it except the following remark at [1] which
>> seems
>> to indicate it is not possible yet:
>>
>> Data import tool
>>
>> Create a tool which will allow importing already existing data in
>> the brick
>> directories into the gluster volume.
>> This is most likely going to be a special rebalance process.
>>
>> So that would mean i would always have to:
>> - first create an GlusterFS brick on an empty filesystem
>> - after that copy all the data into the mounted GlusterFS brick
>> - never ever copy something into the filesystem (or manipulate it
>> otherwise)
>>   used as a GlusterFS brick directly (without going through a GlusterFS
>> client mount)
>>
>> because there is no checking / healing between GlusterFS's view on the
>> data and the data in the
>> underlying brick filesystem ?
>>
>> Is this a correct view ?
>>
>> you are right !
> Once the data is copied into Gluster, it internally creates meta-data
> about data(file/dir).
> Unless you copy it via Gluster mount point, it is NOT possible to create
> such meta-data.
>

No, it is possible. You just need to be a bit creative.

Could you let me know how many such bricks you have that you want to
convert to glusterfs. It seems like you want replication as well. If you
give me all this information, with your help maybe we can at least come up
with a document on how this can be done.


>
> Thanks,
> Saravana
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith

Re: [Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Saravanakumar Arumugam



On 11/11/2016 06:03 PM, Sander Eikelenboom wrote:

L.S.,

I was wondering if it would be possible to turn an existing filesystem 
with data

(ext4 with files and dirs) into a GlusterFS brick ?

It is not possible, at least I am not aware of any such solution yet.


I can't find much info about it except the following remark at [1] 
which seems

to indicate it is not possible yet:

Data import tool

Create a tool which will allow importing already existing data 
in the brick

directories into the gluster volume.
This is most likely going to be a special rebalance process.

So that would mean i would always have to:
- first create an GlusterFS brick on an empty filesystem
- after that copy all the data into the mounted GlusterFS brick
- never ever copy something into the filesystem (or manipulate it 
otherwise)
  used as a GlusterFS brick directly (without going through a 
GlusterFS client mount)


because there is no checking / healing between GlusterFS's view on the 
data and the data in the

underlying brick filesystem ?

Is this a correct view ?


you are right !
Once the data is copied into Gluster, it internally creates meta-data
about the data (files/dirs).
Unless you copy it in via a Gluster mount point, it is NOT possible to create
such meta-data.


Thanks,
Saravana



[Gluster-devel] Is it possible to turn an existing filesystem (with data) into a GlusterFS brick ?

2016-11-11 Thread Sander Eikelenboom

L.S.,

I was wondering if it would be possible to turn an existing filesystem 
with data

(ext4 with files and dirs) into a GlusterFS brick ?

I can't find much info about it except the following remark at [1] which 
seems

to indicate it is not possible yet:

Data import tool

Create a tool which will allow importing already existing data 
in the brick

directories into the gluster volume.
This is most likely going to be a special rebalance process.

So that would mean I would always have to:
- first create a GlusterFS brick on an empty filesystem
- after that, copy all the data into the mounted GlusterFS brick
- never copy anything into the filesystem used as a GlusterFS brick
  directly (or manipulate it otherwise) without going through a
  GlusterFS client mount


because there is no checking / healing between GlusterFS's view of the data
and the data in the underlying brick filesystem?

Is this a correct view?

--
Sander Eikelenboom

[1] https://gluster.readthedocs.io/en/latest/Developer-guide/Projects/


[Gluster-devel] Feature: Rebalance completion time estimation

2016-11-11 Thread Susant Palai
Hello All,
   We have been receiving many requests from users for a "rebalance
completion time estimate". This email is to gather ideas and feedback from
the community. We have one proposal, but nothing is concrete; please feel
free to give your input on this problem.

A brief about the rebalance operation:
- The rebalance process redistributes data across the cluster, typically
after add-brick and remove-brick. A rebalance process is spawned on each
node. Its job is to read directories, fix each directory's layout to include
the newly added brick, then read the directory's child files (only those
residing on local bricks) and migrate those that the new layout places
elsewhere.


Here is one solution, pitched by Manoj Pillai.

Assumptions for this idea:
 - files are of similar size.
 - at most 40% of the total files will be migrated.

1- Do a statfs on the local bricks. Say the total size is St.
2- Based on the size of the first file, say Sf, estimate the number of files
on the local bricks as Nt (roughly St / Sf).
3- The time estimate is then (Nt * migration time for one file) * 40%.
4- Rebalance keeps updating this estimate as more files are crawled, to
converge on a fair estimate.

Problem with this approach: it assumes that file sizes are roughly similar.
For a cluster with variable file sizes, this estimate can go badly wrong.
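The steps above can be sketched as follows (hypothetical names, not
GlusterFS internals; assumes Nt = St / Sf and a running average of per-file
migration time):

```python
class RebalanceEstimator:
    """Rough ETA for a rebalance crawl, per the proposal in this mail.

    Assumptions: total local brick size St comes from statfs, the file
    count is guessed as Nt = St / Sf using the first file's size Sf, and
    at most 40% of the files are expected to migrate.
    """
    MIGRATE_FRACTION = 0.40

    def __init__(self, total_bytes):
        self.total_bytes = total_bytes  # St, from statfs on local bricks
        self.n_estimated = None         # Nt, fixed after the first file
        self.files_seen = 0
        self.time_spent = 0.0           # seconds spent migrating so far

    def record_file(self, size_bytes, migrate_seconds):
        # The first file's size Sf gives Nt = St / Sf.
        if self.n_estimated is None and size_bytes > 0:
            self.n_estimated = self.total_bytes / size_bytes
        self.files_seen += 1
        self.time_spent += migrate_seconds

    def eta_seconds(self):
        # (Nt * avg migration time per file) * 40%, refined as we crawl.
        if not self.files_seen or self.n_estimated is None:
            return None
        avg = self.time_spent / self.files_seen
        return self.n_estimated * avg * self.MIGRATE_FRACTION
```

As the problem paragraph notes, a single skewed Sf (or highly variable file
sizes) throws the whole estimate off, since Nt is never re-derived.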

So this is one initial idea. Please give your suggestions/ideas/feedback on 
this.


Thanks,
Susant




Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-11 Thread Manikandan Selvaganesh
Yes Sanoj, that's the issue. It does write the latest header, which has
conf version 1.2, but it doesn't always work properly for the individual
gfids. We need to run it many times, and sometimes it is still not set
properly.

--
Thanks & Regards,
Manikandan Selvaganesan.
(@Manikandan Selvaganesh on Web)

On Fri, Nov 11, 2016 at 12:47 PM, Sanoj Unnikrishnan 
wrote:

>  Pasting Testing Logs
> ==
>
> 3.6
>
> [root@dhcp-0-112 rpms]# /sbin/gluster v create v1 $tm1:/export/sdb/br1
> volume create: v1: success: please start the volume to access data
>
> [root@dhcp-0-112 rpms]# gluster v start v1
> [root@dhcp-0-112 rpms]#
> [root@dhcp-0-112 rpms]#  mount -t glusterfs $tm1:v1 /gluster_vols/vol
> [root@dhcp-0-112 rpms]# gluster v quota v1 enable
> volume quota : success
> [root@dhcp-0-112 rpms]#
> [root@dhcp-0-112 rpms]# mkdir -p /gluster_vols/vol/dir1;  gluster v quota
> v1 limit-usage /dir1 5MB 10
> volume quota : success
> [root@dhcp-0-112 rpms]# mkdir -p /gluster_vols/vol/dir2;  gluster v quota
> v1 limit-usage /dir2 16MB 10
> volume quota : success
> [root@dhcp-0-112 rpms]# gluster v quota v1 list
>   Path   Hard-limit Soft-limit   Used
> Available  Soft-limit exceeded? Hard-limit exceeded?
> 
> ---
> /dir1  5.0MB   10%  0Bytes
> 5.0MB  No   No
> /dir2 16.0MB   10%  0Bytes
> 16.0MB  No   No
> [root@dhcp-0-112 rpms]#
> [root@dhcp-0-112 rpms]#
> [root@dhcp-0-112 rpms]# rpm -qa | grep glusterfs-ser
> glusterfs-server-3.6.9-0.1.gitcaccd6c.fc24.x86_64
>
> [root@dhcp-0-112 rpms]# umount /gluster_vols/vol
> [root@dhcp-0-112 rpms]#
>
> [root@dhcp-0-112 rpms]# cat /var/lib/glusterd/vols/v1/quota.conf
> [root@dhcp-0-112 rpms]# hexdump /var/lib/glusterd/vols/v1/quota.conf
>
> [root@dhcp-0-112 rpms]# hexdump -c /var/lib/glusterd/vols/v1/quota.conf
> 000   G   l   u   s   t   e   r   F   S   Q   u   o   t   a
> 010   c   o   n   f   |   v   e   r   s   i   o   n   :
> 020   v   1   .   1  \n   U  \t 213   I 252 251   C 337 262   x  \b
> 030   i   y   r   5 021 312 335   w 366   X   5   B   H 210 260 227
> 040   ^ 251   X 237   G
> 045
> [root@dhcp-0-112 rpms]#
>
> [root@dhcp-0-112 rpms]# getfattr -d -m. -e hex /export/sdb/br1/dir1/ |
> grep gfid
> getfattr: Removing leading '/' from absolute path names
> trusted.gfid=0x55098b49aaa943dfb278086979723511
> [root@dhcp-0-112 rpms]# getfattr -d -m. -e hex /export/sdb/br1/dir2/ |
> grep gfid
> getfattr: Removing leading '/' from absolute path names
> trusted.gfid=0xcadd77f65835424888b0975ea9589f47
>
> [root@dhcp-0-112 rpms]# gluster v stop v1
> Stopping volume will make its data inaccessible. Do you want to continue?
> (y/n) y
> volume stop: v1: success
>
> [root@dhcp-0-112 rpms]# pkill glusterd
>
> +++ Replace with 3.9 build  without patch++
>
> [root@dhcp-0-112 3.9]# systemctl start glusterd
> [root@dhcp-0-112 3.9]#
> [root@dhcp-0-112 3.9]# rpm -qa | grep glusterfs-ser
> glusterfs-server-3.9.0rc2-0.13.gita3bade0.fc24.x86_64
> [
> [root@dhcp-0-112 3.9]# gluster v set all cluster.op-version 30700
> volume set: success
>
> [root@dhcp-0-112 3.9]# gluster v start v1
> volume start: v1: success
>
> [root@dhcp-0-112 3.9]#  mount -t glusterfs $tm1:v1 /gluster_vols/vol
>
> >> not sure why we see this , second attempt succeeds
> [root@dhcp-0-112 3.9]#  gluster v quota v1 limit-usage /dir1 12MB 10
> quota command failed : Failed to start aux mount
> [root@dhcp-0-112 3.9]#
> [root@dhcp-0-112 3.9]#  gluster v quota v1 limit-usage /dir2 12MB 10
> volume quota : success
>
> [root@dhcp-0-112 3.9]# hexdump -c /var/lib/glusterd/vols/v1/quota.conf
> 000   G   l   u   s   t   e   r   F   S   Q   u   o   t   a
> 010   c   o   n   f   |   v   e   r   s   i   o   n   :
> 020   v   1   .   2  \n   U  \t 213   I 252 251   C 337 262   x  \b
> 030   i   y   r   5 021 001 312 335   w 366   X   5   B   H 210 260
> 040 227   ^ 251   X 237   G 001
> 047
> [root@dhcp-0-112 3.9]# gluster v quota v1 list
>   Path   Hard-limit  Soft-limit  Used
> Available  Soft-limit exceeded? Hard-limit exceeded?
> 
> ---
> /dir1  5.0MB 10%(512.0KB)
> 0Bytes   5.0MB  No   No
> /dir2 12.0MB 10%(1.2MB)   0Bytes
> 12.0MB  No   No
> [root@dhcp-0-112 3.9]#
> [root@dhcp-0-112 3.9]#  gluster v quota v1 limit-usage /dir1 12MB 10
>
> [root@dhcp-0-112 3.9]# cksum /var/lib/glusterd/vols/v1/quota.conf
> 496616948 71 
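The two hexdumps above show the quota.conf layout changing across versions: a
37-byte ASCII header line ending in '\n', followed by fixed-size records,
16-byte gfids in v1.1 versus 17 bytes (gfid plus a one-byte type flag) in
v1.2, which is why the file grows from 69 to 71 bytes. A hedged sketch of a
parser under that assumed layout (not a GlusterFS tool):

```python
def parse_quota_conf(data: bytes):
    """Parse a glusterd quota.conf blob into (version, [gfid hex strings]).

    Assumes the layout seen in the hexdumps: an ASCII header line
    ("GlusterFS Quota conf | version: v1.x\\n"), then fixed-size records:
    16-byte gfids in v1.1, 17 bytes (gfid + type byte) in v1.2.
    """
    nl = data.index(b"\n")
    version = data[:nl].decode("ascii").rsplit("version:", 1)[1].strip()
    rec_size = 17 if version == "v1.2" else 16
    body = data[nl + 1:]
    if len(body) % rec_size:
        raise ValueError("truncated record in quota.conf")
    # Each record starts with the 16-byte gfid; drop any trailing flag byte.
    gfids = [body[i:i + 16].hex() for i in range(0, len(body), rec_size)]
    return version, gfids
```

Running this on the v1.1 dump above would yield the two gfids that getfattr
reported for dir1 and dir2 (55098b49... and cadd77f6...).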

Re: [Gluster-devel] [Gluster-Maintainers] Please pause merging patches to 3.9 waiting for just one patch

2016-11-11 Thread Manikandan Selvaganesh
Thanks Sanoj for the work and pasting the work in detail.

--
Thanks & Regards,
Manikandan Selvaganesan.
(@Manikandan Selvaganesh on Web)


Re: [Gluster-devel] [Gluster-users] getting "Transport endpoint is not connected" in glusterfs mount log file.

2016-11-11 Thread Pranith Kumar Karampuri
Abhishek,
  Both Rafi and I tried to look at the logs, but the file seems to be
corrupted. I was saying that there is a connection problem because the
following log appeared amid a lot of connection failures in the logs you
posted. Are you on IRC #gluster-dev?

[2016-10-31 04:06:03.628539] I [MSGID: 108019]
[afr-transaction.c:1224:afr_post_blocking_inodelk_cbk]
0-c_glusterfs-replicate-0: Blocking inodelks failed.

On Fri, Nov 11, 2016 at 1:05 PM, ABHISHEK PALIWAL 
wrote:

> Hi Pranith,
>
> Could you please tell tell me the logs showing that the mount is not
> available to connect to both the bricks.
>
> On Fri, Nov 11, 2016 at 12:05 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>> As per the logs, the mount is not able to connect to both the bricks. Are
>> the connections fine?
>>
>> On Fri, Nov 11, 2016 at 10:20 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> It's an urgent case.
>>>
>>> At least provide your views on this.
>>>
>>> On Wed, Nov 9, 2016 at 11:08 AM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 Hi,

 We could see that sync is getting failed to sync the GlusterFS bricks
 due to error trace "Transport endpoint is not connected "

 [2016-10-31 04:06:03.627395] E [MSGID: 114031]
 [client-rpc-fops.c:1673:client3_3_finodelk_cbk]
 0-c_glusterfs-client-9: remote operation failed [Transport endpoint is not
 connected]
 [2016-10-31 04:06:03.628381] I [socket.c:3308:socket_submit_request]
 0-c_glusterfs-client-9: not connected (priv->connected = 0)
 [2016-10-31 04:06:03.628432] W [rpc-clnt.c:1586:rpc_clnt_submit]
 0-c_glusterfs-client-9: failed to submit rpc-request (XID: 0x7f5f Program:
 GlusterFS 3.3, ProgVers: 330, Proc: 30) to rpc-transport
 (c_glusterfs-client-9)
 [2016-10-31 04:06:03.628466] E [MSGID: 114031]
 [client-rpc-fops.c:1673:client3_3_finodelk_cbk]
 0-c_glusterfs-client-9: remote operation failed [Transport endpoint is not
 connected]
 [2016-10-31 04:06:03.628475] I [MSGID: 108019]
 [afr-lk-common.c:1086:afr_lock_blocking] 0-c_glusterfs-replicate-0:
 unable to lock on even one child
 [2016-10-31 04:06:03.628539] I [MSGID: 108019]
 [afr-transaction.c:1224:afr_post_blocking_inodelk_cbk]
 0-c_glusterfs-replicate-0: Blocking inodelks failed.
 [2016-10-31 04:06:03.628630] W [fuse-bridge.c:1282:fuse_err_cbk]
 0-glusterfs-fuse: 20790: FLUSH() ERR => -1 (Transport endpoint is not
 connected)
 [2016-10-31 04:06:03.629149] E [rpc-clnt.c:362:saved_frames_unwind]
 (--> 
 /usr/lib64/libglusterfs.so.0(_gf_log_callingfn-0xb5c80)[0x3fff8ab79f58]
 (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind-0x1b7a0)[0x3fff8ab1dc90]
 (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy-0x1b638)[0x3fff8ab1de10]
 (--> 
 /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup-0x19af8)[0x3fff8ab1fb18]
 (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify-0x18e68)[0x3fff8ab20808]
 ) 0-c_glusterfs-client-9: forced unwinding frame type(GlusterFS 3.3)
 op(LOOKUP(27)) called at 2016-10-31 04:06:03.624346 (xid=0x7f5a)
 [2016-10-31 04:06:03.629183] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
 0-c_glusterfs-client-9: changing port to 49391 (from 0)
 [2016-10-31 04:06:03.629210] W [MSGID: 114031]
 [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-c_glusterfs-client-9:
 remote operation failed. Path: 
 /loadmodules_norepl/CXC1725605_P93A001/cello/emasviews
 (b0e5a94e-a432-4dce-b86f-a551555780a2) [Transport endpoint is not
 connected]


 Could you please tell us the reason why we are getting these trace and
 how to resolve this.

 Logs are attached here please share your analysis.

 Thanks in advanced

 --
 Regards
 Abhishek Paliwal

>>>
>>>
>>>
>>> --
>>>
>>>
>>>
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>> ___
>>> Gluster-users mailing list
>>> gluster-us...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>>
>> --
>> Pranith
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 
Pranith