Re: [Gluster-users] [Gluster-devel] [DHT] The myth of two hops for linkto file resolution

2017-05-04 Thread Raghavendra G
On Thu, May 4, 2017 at 4:36 PM, Xavier Hernandez 
wrote:

> Hi,
>
> On 30/04/17 06:03, Raghavendra Gowdappa wrote:
>
>> All,
>>
>> It's a common perception that the resolution of a file having a linkto file
>> on the hashed-subvol requires two hops:
>>
>> 1. client to hashed-subvol.
>> 2. client to the subvol where file actually resides.
>>
>> While it is true that a fresh lookup behaves this way, the other fact
>> that gets ignored is that fresh lookups on files are almost always
>> prevented by readdirplus. Since readdirplus picks the dentry from the
>> subvolume where the actual file (data-file) resides, the two-hop cost is
>> most likely never witnessed by the application.
>>
>
> This is true for workloads that list directory contents before accessing
> the files, but there are other use cases that directly access the file
> without navigating through the file system. In this case fresh lookups are
> needed.
>

I agree, if the contents of the parent directory are not listed at least
once, the penalty is still there.
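Whether readdirplus is in effect can also depend on how the client mounts the volume. As an untested sketch (host, volume, and mount-point names are placeholders), the FUSE mount option controlling it looks like this:

```shell
# Hedged sketch: toggle readdirplus on a FUSE mount; names are placeholders.
# With readdirplus enabled, dentries are primed during directory listing:
mount -t glusterfs -o use-readdirp=yes server1:/myvol /mnt/myvol

# Disabling it forces fresh lookups, which is where the two-hop cost for
# linkto-file resolution would become visible:
# mount -t glusterfs -o use-readdirp=no server1:/myvol /mnt/myvol
```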


> Xavi
>
>
>
>> A word of caution is that I've not done any testing to prove this
>> observation :).
>>
>> regards,
>> Raghavendra
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Raghavendra G
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] moving drives containing bricks from one server to another

2017-05-04 Thread Andy Tai
Hi, I have a gluster volume with bricks spread over several physical
drives.  I now want to upgrade my server to a new system and plan to move
the drives from the old server to the new server, which has a different
host name and IP address.  Can I shut down the gluster volume on the old
server, move the physical drives containing the bricks to the new server,
create a new gluster volume on the new server, and add the bricks to the
new volume in the same layout as on the old server, and expect everything
to work (all files preserved and accessible via glusterfs, with no data
loss)?

The gluster volume on the old server would be retired and I want to let the
new server take over the role of serving the gluster volume.
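One approach sometimes discussed for this kind of migration (untested here, so please verify on scratch data first; the volume name and brick paths are hypothetical) is to clear the old volume-id extended attribute on each brick and recreate the volume on the new server with the same brick layout:

```shell
# Untested sketch: reuse existing bricks on a new server. "myvol" and the
# brick paths are placeholders. Bricks keep a volume-id xattr from the old
# volume; gluster refuses them for a new volume until it is removed.
setfattr -x trusted.glusterfs.volume-id /data/brick1
setfattr -x trusted.glusterfs.volume-id /data/brick2

# Recreate the volume with the same brick order/layout as before ("force"
# is needed because the brick directories are not empty):
gluster volume create myvol newhost:/data/brick1 newhost:/data/brick2 force
gluster volume start myvol
```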

Thanks

-- 
Andy Tai, a...@atai.org, Skype: licheng.tai, Line: andy_tai, WeChat:
andytai1010
Year 2017 民國106年
自動的精神力是信仰與覺悟
自動的行為力是勞動與技能
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [DHT] The myth of two hops for linkto file resolution

2017-05-04 Thread Raghavendra Gowdappa


- Original Message -
> From: "Pranith Kumar Karampuri" 
> To: "Raghavendra Gowdappa" 
> Cc: "Gluster Devel" , "gluster-users" 
> 
> Sent: Thursday, May 4, 2017 4:03:18 PM
> Subject: Re: [Gluster-users] [DHT] The myth of two hops for linkto file 
> resolution
> 
> On Sun, Apr 30, 2017 at 9:33 AM, Raghavendra Gowdappa 
> wrote:
> 
> > All,
> >
> > It's a common perception that the resolution of a file having a linkto file
> > on the hashed-subvol requires two hops:
> >
> > 1. client to hashed-subvol.
> > 2. client to the subvol where file actually resides.
> >
> > While it is true that a fresh lookup behaves this way, the other fact that
> > gets ignored is that fresh lookups on files are almost always prevented by
> > readdirplus. Since readdirplus picks the dentry from the subvolume where
> > the actual file (data-file) resides, the two-hop cost is most likely never
> > witnessed by the application.
> >
> > A word of caution is that I've not done any testing to prove this
> > observation :).
> >
> 
> Maybe you should do it and send an update. That way we can use the
> knowledge to do something.

I understand the importance of performance numbers. But, I am tied up with 
other things :). We can measure them when we really need this information. If 
you (or anyone else) have anything lined up that needs this data, please let 
me know and I can devote some time to providing it. Till then, other things 
take higher priority.

regards,
Raghavendra
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] postgresql is unable to create a table in gluster volume

2017-05-04 Thread Vijay Bellur
On Thu, May 4, 2017 at 3:19 AM, Jiffin Tony Thottan 
wrote:

>
>
> On 04/05/17 02:03, Praveen George wrote:
>
> Hi Team,
>
> We’ve been intermittently seeing issues where postgresql is unable to
> create a table, or some info is missing.
>
> Postgresql logs the following error:
>
> ERROR:  unexpected data beyond EOF in block 53 of relation base/16384/12009
> HINT:  This has been seen to occur with buggy kernels; consider updating
> your system.
>
> We are using the k8s PV/PVC to bind the volumes to the containers and
> using the gluster plugin to mount the volumes on the worker nodes and take
> it into the containers.
>
> The issue occurs regardless of whether the k8s spec specifies mounting the
> pv using the pv provider or mounts the gluster volume directly.
>
> Just to check whether the issue is with the glusterfs client, we mount the
> volume using NFS (NFS on the client talking to gluster on the master), and
> the issue doesn’t occur. However, the NFS client then talks directly to
> _one_ of the gluster masters; this means that if that master fails, it will
> not fail over to the other gluster master - we thus lose gluster HA if we
> go this route.
>
>
> If you are interested, there are HA solutions available for NFS. It
> depends on which NFS solution you are using: if it is gluster nfs (the
> integrated nfs server in gluster) then use ctdb; for NFS-Ganesha we
> already have an integrated solution with pacemaker/corosync.
>
> Please also update your gluster version since it is EOLed; you won't
> receive any more updates for that version.
>
>

Do you notice any errors in the fuse client logs when postgresql complains
about the error?

It might be useful to turn off all the performance translators in gluster
and check if the problem persists.
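As a concrete illustration, the performance translators can be switched off per volume with `gluster volume set` (the volume name is a placeholder):

```shell
# Sketch: disable gluster's client-side performance translators on a
# hypothetical volume "myvol" to rule them out (re-enable with "on").
gluster volume set myvol performance.stat-prefetch off
gluster volume set myvol performance.read-ahead off
gluster volume set myvol performance.write-behind off
gluster volume set myvol performance.io-cache off
gluster volume set myvol performance.quick-read off
```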

Regards,
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] total folder size increased

2017-05-04 Thread Umarzuki Mochlis
Hi,

I'm copying a web folder with a total size of 109GB to a mounted volume
(locally). It is already at 260GB and still copying.

Is there any sizing guide so I can predict usage in the future?

That is an ext4-formatted logical volume.
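One thing worth ruling out (an assumption, not a diagnosis): replication multiplies on-disk usage by the replica count, and sparse files expand if copied naively. Apparent and on-disk sizes can be compared locally:

```shell
# Sketch: sparse files report a large apparent size but occupy few blocks;
# a naive copy expands them. Compare the two sizes for a sample file.
dd if=/dev/zero of=/tmp/sparse.img bs=1 count=0 seek=100M 2>/dev/null
du -k --apparent-size /tmp/sparse.img   # apparent size (~100 MB)
du -k /tmp/sparse.img                   # on-disk usage (near zero)
rm -f /tmp/sparse.img
# "cp --sparse=always" or "rsync --sparse" preserves holes when copying.
```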

Thanks.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Very odd performance issue

2017-05-04 Thread Ben Turner
- Original Message -
> From: "David Miller" 
> To: gluster-users@gluster.org
> Sent: Thursday, May 4, 2017 2:48:38 PM
> Subject: [Gluster-users] Very odd performance issue
> 
> Background: 4 identical gluster servers with 15 TB each in 2x2 setup.
> CentOS Linux release 7.3.1611 (Core)
> glusterfs-server-3.9.1-1.el7.x86_64
> client systems are using:
> glusterfs-client 3.5.2-2+deb8u3
> 
> The cluster has ~12 TB in use with 21 million files. Lots of jpgs. About 12
> clients are mounting gluster volumes.
> 
> Network load is light: iftop shows each server has 10-15 Mbit reads and about
> half that in writes.
> 
> What I’m seeing that concerns me is that one box, gluster4, has roughly twice
> the CPU utilization and twice or more the load average of the other three
> servers. gluster4 has a 24 hour average of about 30% CPU utilization,
> something that seems to me to be way out of line for a couple MB/sec of
> traffic.
> 
> In running volume top, the odd thing I see is that for gluster1-3 I get
> latency summaries like this:
> Brick: gluster1.publicinteractive.com :/gluster/drupal_prod
> —
> %-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
>  ---------  -----------  -----------  -----------  ------------  ---
> 
> 9.96 675.07 us 15.00 us 1067793.00 us 205060 INODELK
> 15.85 3414.20 us 16.00 us 773621.00 us 64494 READ
> 51.35 2235.96 us 12.00 us 1093609.00 us 319120 LOOKUP
> 
> … but my problem server has far more inodelk latency:
> 
> 12.01 4712.03 us 17.00 us 1773590.00 us 47214 READ
> 27.50 2390.27 us 14.00 us 1877571.00 us 213121 INODELK
> 28.70 1643.65 us 12.00 us 1837696.00 us 323407 LOOKUP
> 
> The servers are intended to be identical, and are indeed identical hardware.
> 
> Suggestions on where to look or which FM to RT very welcome indeed.

IIRC INODELK is for internal locking / synchronization:

"GlusterFS has a locks translator which provides the following internal locking 
operations, called inodelk and entrylk, which are used by afr to achieve 
synchronization of operations on files or directories that conflict with each 
other."

I found a bug where there was a leak:

https://bugzilla.redhat.com/show_bug.cgi?id=1405886

It was fixed in the 3.8 line; it may be worth looking into upgrading the 
gluster version on your clients to eliminate any issues that were fixed between 
3.5 (your client version) and 3.9 (your server version).

Also, have a look at the brick and client logs.  You could try searching them 
for "INODELK".  Are your clients accessing a lot of the same files at the same 
time?  Also, on the server where you are seeing the higher load, check the self 
heal daemon logs to see if there is any healing happening.

Sorry I don't have anything concrete; like I said, it may be worth upgrading the 
clients and having a look at your logs to see if you can glean any information 
from them.
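A sketch of those checks (the volume name and log locations are assumptions based on common defaults):

```shell
# Sketch: look for INODELK activity and pending heals on the busy server.
# "myvol" and the log paths are placeholders for your setup.
grep -c INODELK /var/log/glusterfs/bricks/*.log

# Entries pending heal would point at ongoing self-heal load:
gluster volume heal myvol info

# Self-heal daemon log on the loaded server:
less /var/log/glusterfs/glustershd.log
```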

-b

> 
> Thanks,
> 
> David
> 
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] libgfapi access to snapshot volume

2017-05-04 Thread Ankireddypalle Reddy
Vijay,
  Thanks for pointing to the relevant source code.

Sent from my iPhone

On May 4, 2017, at 6:32 PM, Vijay Bellur 
> wrote:


On Thu, May 4, 2017 at 9:12 AM, Ankireddypalle Reddy 
> wrote:
Hi,
   Can a glusterfs snapshot volume be accessed through libgfapi?


Yes, activated snapshots can be accessed through libgfapi. User serviceable 
snapshots in Gluster make use of gfapi to access activated snapshots. 
xlators/features/snapview-server contains code that accesses snapshots through 
libgfapi.

Regards,
Vijay


Thanks and Regards,
Ram
***Legal Disclaimer***
"This communication may contain confidential and privileged material for the
sole use of the intended recipient. Any unauthorized review, use or distribution
by others is strictly prohibited. If you have received the message by mistake,
please advise the sender by reply email and delete the message. Thank you."
**

___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] libgfapi access to snapshot volume

2017-05-04 Thread Vijay Bellur
On Thu, May 4, 2017 at 9:12 AM, Ankireddypalle Reddy 
wrote:

> Hi,
>
>    Can a glusterfs snapshot volume be accessed through libgfapi?
>


Yes, activated snapshots can be accessed through libgfapi. User serviceable
snapshots in Gluster make use of gfapi to access activated snapshots.
xlators/features/snapview-server contains code that accesses snapshots
through libgfapi.

Regards,
Vijay


>
>
> Thanks and Regards,
>
> Ram
> ***Legal Disclaimer***
> "This communication may contain confidential and privileged material for
> the
> sole use of the intended recipient. Any unauthorized review, use or
> distribution
> by others is strictly prohibited. If you have received the message by
> mistake,
> please advise the sender by reply email and delete the message. Thank you."
> **
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] disperse volume brick counts limits in RHES

2017-05-04 Thread Alastair Neil
Hi

we are deploying a large (24-node/45-brick) cluster and noted that the RHES
guidelines limit the number of data bricks in a disperse set to 8.  Is
there any reason for this?  I am aware that you want this to be a power of
2, but as we have a large number of nodes we were planning on going with
16+3.  Dropping to 8+2 or 8+3 would be a real waste for us.
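The capacity trade-off behind the question is simple arithmetic: an N+K disperse set keeps N bricks of usable space out of N+K bricks, so:

```shell
# Illustrative arithmetic: usable-capacity fraction of an N+K disperse set.
usable() { awk -v n="$1" -v k="$2" 'BEGIN { printf "%.1f%%\n", 100*n/(n+k) }'; }
usable 16 3   # prints 84.2%
usable 8 3    # prints 72.7%
usable 8 2    # prints 80.0%
```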

Thanks,


Alastair
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Very odd performance issue

2017-05-04 Thread David Miller
Background:  4 identical gluster servers with 15 TB each in 2x2 setup.
CentOS Linux release 7.3.1611 (Core)
glusterfs-server-3.9.1-1.el7.x86_64
client systems are using:
glusterfs-client 3.5.2-2+deb8u3

The cluster has ~12 TB in use with 21 million files.  Lots of jpgs.  About 12 
clients are mounting gluster volumes.  

Network load is light: iftop shows each server has 10-15 Mbit reads and about 
half that in writes.

What I’m seeing that concerns me is that one box, gluster4, has roughly twice 
the CPU utilization and twice or more the load average of the other three 
servers.  gluster4 has a 24 hour average of about 30% CPU utilization, 
something that seems to me to be way out of line for a couple MB/sec of traffic.

In running volume top, the odd thing I see is that for gluster1-3 I get latency 
summaries like this:
Brick: gluster1.publicinteractive.com:/gluster/drupal_prod
—
%-latency   Avg-latency   Min-Latency    Max-Latency    No. of calls   Fop
---------   -----------   -----------    -----------    ------------   ---
     9.96     675.07 us      15.00 us  1067793.00 us          205060   INODELK
    15.85    3414.20 us      16.00 us   773621.00 us           64494   READ
    51.35    2235.96 us      12.00 us  1093609.00 us          319120   LOOKUP

… but my problem server has far more INODELK latency:

    12.01    4712.03 us      17.00 us  1773590.00 us           47214   READ
    27.50    2390.27 us      14.00 us  1877571.00 us          213121   INODELK
    28.70    1643.65 us      12.00 us  1837696.00 us          323407   LOOKUP

The servers are intended to be identical, and are indeed identical hardware.

Suggestions on where to look or which FM to RT very welcome indeed.

Thanks,

David




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-04 Thread Abhijit Paul
Thanks for the reply, I will try it out, but I am also facing one more issue,
i.e. replicated volumes returning different timestamps.
So is this because of Bug 1426548 - Openshift Logging ElasticSearch FSLocks
when using GlusterFS storage backend?

*FYI I am using glusterfs 3.10.1 tar.gz*

Regards,
Abhijit



On Thu, May 4, 2017 at 10:58 PM, Amar Tumballi  wrote:

>
>
> On Thu, May 4, 2017 at 10:41 PM, Abhijit Paul 
> wrote:
>
>> Since I am new to gluster, can you please explain how to turn off/disable
>> the "perf xlator options"?
>>
>>
> $ gluster volume set <volname> performance.stat-prefetch off
> $ gluster volume set <volname> performance.read-ahead off
> $ gluster volume set <volname> performance.write-behind off
> $ gluster volume set <volname> performance.io-cache off
> $ gluster volume set <volname> performance.quick-read off
>
>
> Regards,
> Amar
>
>>
>>> On Wed, May 3, 2017 at 8:51 PM, Atin Mukherjee 
>>> wrote:
>>>
 I think there are still some pending items in some of the gluster perf
 xlators to make that work complete. CC'd the relevant folks for more
 information. Can you please turn off all the perf xlator options as a
 workaround to move forward?

 On Wed, May 3, 2017 at 8:04 PM, Abhijit Paul 
 wrote:

> Dear folks,
>
> I setup Glusterfs(3.10.1) NFS type as persistence volume for
> Elasticsearch(5.1.2) but currently facing issue with 
> *"CorruptIndexException"
> *with Elasticsearch logs and due to that index health turned RED in
> Elasticsearch.
>
> Later found that there was an issue with gluster < 3.10 (
> https://bugzilla.redhat.com/show_bug.cgi?id=1390050) but even after 
> *upgrading
> to 3.10.1 issue is still there.*
>
> *So curious to know what would be the root cause to fix this issue.*
>
> Regards,
> Abhijit
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>


>>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
> Amar Tumballi (amarts)
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after reboot

2017-05-04 Thread Mahdi Adnan
Hi,


Same here; when I reboot the node I have to manually execute "pcs cluster start 
gluster01", even though pcsd is already enabled and started.

Gluster 3.8.11

Centos 7.3 latest

Installed using CentOS Storage SIG repository


--

Respectfully
Mahdi A. Mahdi


From: gluster-users-boun...@gluster.org  on 
behalf of Adam Ru 
Sent: Wednesday, May 3, 2017 12:09:58 PM
To: Soumya Koduri
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster and NFS-Ganesha - cluster is down after 
reboot

Hi Soumya,

thank you very much for your reply.

I enabled pcsd during setup, and after reboot during troubleshooting I manually 
started it and checked resources (pcs status). They were not running. I didn’t 
find what was wrong, but I’m going to try it again.

I’ve thoroughly checked
http://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
and I can confirm that I followed all steps with one exception. I installed 
following RPMs:
glusterfs-server
glusterfs-fuse
glusterfs-cli
glusterfs-ganesha
nfs-ganesha-xfs

and the guide referenced above specifies:
glusterfs-server
glusterfs-api
glusterfs-ganesha

glusterfs-api is a dependency of one of the RPMs that I installed, so this is 
not a problem. But I cannot find any mention of installing nfs-ganesha-xfs.

I’ll try to set up the whole environment again without installing 
nfs-ganesha-xfs (I assume glusterfs-ganesha has all required binaries).

Again, thank you for you time to answer my previous message.

Kind regards,
Adam

On Tue, May 2, 2017 at 8:49 AM, Soumya Koduri 
> wrote:
Hi,

On 05/02/2017 01:34 AM, Rudolf wrote:
Hi Gluster users,

First, I'd like to thank you all for this amazing open-source! Thank you!

I'm working on a home project – three servers with Gluster and
NFS-Ganesha. My goal is to create HA NFS share with three copies of each
file on each server.

My systems are CentOS 7.3 Minimal install with the latest updates and
the most current RPMs from "centos-gluster310" repository.

I followed this tutorial:
http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/
(second half that describes multi-node HA setup)

with a few exceptions:

1. All RPMs are from "centos-gluster310" repo that is installed by "yum
-y install centos-release-gluster"
2. I have three nodes (not four) with "replica 3" volume.
3. I created an empty ganesha.conf and a non-empty ganesha-ha.conf in
"/var/run/gluster/shared_storage/nfs-ganesha/" (the referenced blog post is
outdated; this is now a requirement).
4. ganesha-ha.conf doesn't have "HA_VOL_SERVER" since this isn't needed
anymore.


Please refer to 
http://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/

It is being updated with latest changes happened wrt setup.

When I finish configuration, all is good. nfs-ganesha.service is active
and running and from client I can ping all three VIPs and I can mount
NFS. Copied files are replicated to all nodes.

But when I restart nodes (one by one, with 5 min. delay between) then I
cannot ping or mount (I assume that all VIPs are down). So my setup
definitely isn't HA.

I found that:
# pcs status
Error: cluster is not currently running on this node

This means the pcsd service is not up. Did you enable the pcsd service 
(systemctl enable pcsd) so that it comes up automatically post reboot? If not, 
please start it manually.


and nfs-ganesha.service is in inactive state. Btw. I didn't enable
"systemctl enable nfs-ganesha" since I assume that this is something
that Gluster does.

Please check /var/log/ganesha.log for any errors/warnings.

We recommend not enabling nfs-ganesha.service by default, as the shared 
storage (where the ganesha.conf file now resides) should be up and running 
before nfs-ganesha gets started.
If enabled by default, it could happen that the shared_storage mount point is 
not yet up, resulting in nfs-ganesha service failure. If you would like to 
address this, you could have a cron job which keeps checking the mount point 
health and then starts the nfs-ganesha service.
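A minimal sketch of such a cron-driven check (the mount path and unit name are assumptions based on the defaults discussed above):

```shell
#!/bin/sh
# Hedged sketch: start nfs-ganesha only once the shared-storage mount is up.
# Intended to run from cron, e.g. every minute. Paths are assumptions.
MNT=/var/run/gluster/shared_storage

if mountpoint -q "$MNT"; then
    # Mount is healthy; start the service if it is not already active.
    systemctl is-active -q nfs-ganesha || systemctl start nfs-ganesha
fi
```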

Thanks,
Soumya


I assume that my issue is that I followed instructions in the blog post from
2015/10 that are outdated. Unfortunately I cannot find anything better –
I spent the whole day googling.

Would you be so kind and check the instructions in blog post and let me
know what steps are wrong / outdated? Or please do you have more current
instructions for Gluster+Ganesha setup?

Thank you.

Kind regards,
Adam



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users




--
Adam
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-04 Thread Amar Tumballi
On Thu, May 4, 2017 at 10:41 PM, Abhijit Paul 
wrote:

> Since I am new to gluster, can you please explain how to turn off/disable
> the "perf xlator options"?
>
>
$ gluster volume set <volname> performance.stat-prefetch off
$ gluster volume set <volname> performance.read-ahead off
$ gluster volume set <volname> performance.write-behind off
$ gluster volume set <volname> performance.io-cache off
$ gluster volume set <volname> performance.quick-read off


Regards,
Amar

>
>> On Wed, May 3, 2017 at 8:51 PM, Atin Mukherjee 
>> wrote:
>>
>>> I think there are still some pending items in some of the gluster perf
>>> xlators to make that work complete. CC'd the relevant folks for more
>>> information. Can you please turn off all the perf xlator options as a work
>>> around to move forward?
>>>
>>> On Wed, May 3, 2017 at 8:04 PM, Abhijit Paul 
>>> wrote:
>>>
 Dear folks,

 I setup Glusterfs(3.10.1) NFS type as persistence volume for
 Elasticsearch(5.1.2) but currently facing issue with 
 *"CorruptIndexException"
 *with Elasticsearch logs and due to that index health turned RED in
 Elasticsearch.

 Later found that there was an issue with gluster < 3.10 (
 https://bugzilla.redhat.com/show_bug.cgi?id=1390050) but even after 
 *upgrading
 to 3.10.1 issue is still there.*

 *So curious to know what would be the root cause to fix this issue.*

 Regards,
 Abhijit

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://lists.gluster.org/mailman/listinfo/gluster-users

>>>
>>>
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Amar Tumballi (amarts)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-04 Thread Abhijit Paul
Since I am new to gluster, can you please explain how to turn off/disable the
"perf xlator options"?


> On Wed, May 3, 2017 at 8:51 PM, Atin Mukherjee 
> wrote:
>
>> I think there are still some pending items in some of the gluster perf
>> xlators to make that work complete. CC'd the relevant folks for more
>> information. Can you please turn off all the perf xlator options as a work
>> around to move forward?
>>
>> On Wed, May 3, 2017 at 8:04 PM, Abhijit Paul 
>> wrote:
>>
>>> Dear folks,
>>>
>>> I setup Glusterfs(3.10.1) NFS type as persistence volume for
>>> Elasticsearch(5.1.2) but currently facing issue with 
>>> *"CorruptIndexException"
>>> *with Elasticsearch logs and due to that index health turned RED in
>>> Elasticsearch.
>>>
>>> Later found that there was an issue with gluster < 3.10 (
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1390050) but even after 
>>> *upgrading
>>> to 3.10.1 issue is still there.*
>>>
>>> *So curious to know what would be the root cause to fix this issue.*
>>>
>>> Regards,
>>> Abhijit
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] libgfapi access to snapshot volume

2017-05-04 Thread Ankireddypalle Reddy
Rafi,
   Thanks. I will change the volume name to the notation that you 
provided and try it out.

Thanks and Regards,
Ram
From: Mohammed Rafi K C [mailto:rkavu...@redhat.com]
Sent: Thursday, May 04, 2017 11:11 AM
To: Ankireddypalle Reddy; Gluster Devel (gluster-de...@gluster.org); 
gluster-users@gluster.org
Subject: Re: [Gluster-devel] libgfapi access to snapshot volume


Hi Ram,



You can access a snapshot through libgfapi, it is just that the volname will 
become something like /snaps/<snapname>/<volname>. I can give you some example 
programs if you have any trouble in doing so.



Or you can use the uss feature to access snapshots through the main volume via 
libgfapi (it also uses the above method internally).



Regards

Rafi KC


On 05/04/2017 06:42 PM, Ankireddypalle Reddy wrote:
Hi,
   Can a glusterfs snapshot volume be accessed through libgfapi?

Thanks and Regards,
Ram
***Legal Disclaimer***
"This communication may contain confidential and privileged material for the
sole use of the intended recipient. Any unauthorized review, use or distribution
by others is strictly prohibited. If you have received the message by mistake,
please advise the sender by reply email and delete the message. Thank you."
**



___

Gluster-devel mailing list

gluster-de...@gluster.org

http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] libgfapi access to snapshot volume

2017-05-04 Thread Mohammed Rafi K C
Hi Ram,


You can access a snapshot through libgfapi; it is just that the volname
will become something like /snaps/<snapname>/<volname>. I can give you
some example programs if you have any trouble in doing so.


Or you can use the uss feature to access snapshots through the main volume
via libgfapi (it also uses the above method internally).
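To make the notation concrete: a snapshot has to be activated before a libgfapi client can open it under the /snaps volname. A sketch with placeholder snapshot and volume names:

```shell
# Sketch: prepare a snapshot for libgfapi access; names are placeholders.
gluster snapshot create snap1 myvol    # take a snapshot of volume "myvol"
gluster snapshot activate snap1        # required before clients can use it
gluster snapshot list myvol            # confirm it exists
# A libgfapi client would then open the volume "/snaps/snap1/myvol".
```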


Regards

Rafi KC


On 05/04/2017 06:42 PM, Ankireddypalle Reddy wrote:
>
> Hi,
>
>    Can a glusterfs snapshot volume be accessed through libgfapi?
>
>  
>
> Thanks and Regards,
>
> Ram 
>
> ***Legal Disclaimer***
> "This communication may contain confidential and privileged material
> for the
> sole use of the intended recipient. Any unauthorized review, use or
> distribution
> by others is strictly prohibited. If you have received the message by
> mistake,
> please advise the sender by reply email and delete the message. Thank
> you."
> **
>
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Advice needed for geo replication

2017-05-04 Thread Felipe Arturo Polanco
Hi,

I would like some advice on setting up a replicated or geo-replicated setup
in Gluster.

Right now the setup consists of 1 storage server with no replica serving
gluster volumes to clients.

We need to have some sort of replication of it by adding a second server
but the catch is this second server will have spinning disks while the
current one has SSD disks.

I have read that in Gluster the clients are the ones performing the
replication, sending the same bytes to each gluster server; so with one
gluster server on spinning disks, client write performance will be limited
to spinning-disk speed even when the other server has SSDs.
For budget reasons we can't have SSDs in second storage server.

I kept reading and found the geo-replication feature, which makes the server
do the replication of data instead of the clients. That is more likely my
case, but it looks like there is no automatic failover mechanism and the
administrator needs to intervene to make the slave server a master,
according to this document:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/ch11s05.html

Given this scenario, I really need advice from the gluster users on what
would be the best approach to a replicated setup with SSD+HDD
storage servers.
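For reference, a master/slave geo-replication session along the lines described is set up roughly like this (host and volume names are placeholders, and exact syntax may vary by gluster version):

```shell
# Sketch: one-way geo-replication from the SSD server (master) to the
# HDD server (slave). "ssdvol", "hddhost", "hddvol" are placeholders.
# On the master, generate and distribute the session ssh keys:
gluster system:: execute gsec_create

# Create and start the session against the slave volume:
gluster volume geo-replication ssdvol hddhost::hddvol create push-pem
gluster volume geo-replication ssdvol hddhost::hddvol start

# Monitor replication progress:
gluster volume geo-replication ssdvol hddhost::hddvol status
```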

Thanks,
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] libgfapi access to snapshot volume

2017-05-04 Thread Ankireddypalle Reddy
Hi,
   Can a glusterfs snapshot volume be accessed through libgfapi?

Thanks and Regards,
Ram
***Legal Disclaimer***
"This communication may contain confidential and privileged material for the
sole use of the intended recipient. Any unauthorized review, use or distribution
by others is strictly prohibited. If you have received the message by mistake,
please advise the sender by reply email and delete the message. Thank you."
**
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 3.11: Has been Branched (and pending feature notes)

2017-05-04 Thread Kaushal M
On Thu, May 4, 2017 at 4:38 PM, Niels de Vos  wrote:
> On Thu, May 04, 2017 at 03:39:58PM +0530, Pranith Kumar Karampuri wrote:
>> On Wed, May 3, 2017 at 2:36 PM, Kaushal M  wrote:
>>
>> > On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
>> >  wrote:
>> > >
>> > >
>> > > On Sun, Apr 30, 2017 at 9:01 PM, Shyam  wrote:
>> > >>
>> > >> Hi,
>> > >>
>> > >> Release 3.11 for gluster has been branched [1] and tagged [2].
>> > >>
>> > >> We have ~4weeks to release of 3.11, and a week to backport features that
>> > >> slipped the branching date (May-5th).
>> > >>
>> > >> A tracker BZ [3] has been opened for *blockers* of 3.11 release. Request
>> > >> that any bug that is determined as a blocker for the release be noted
>> > as a
>> > >> "blocks" against this bug.
>> > >>
>> > >> NOTE: Just a heads up, all bugs that are to be backported in the next 4
>> > >> weeks need not be reflected against the blocker, *only* blocker bugs
>> > >> identified that should prevent the release, need to be tracked against
>> > this
>> > >> tracker bug.
>> > >>
>> > >> We are not building beta1 packages, and will build out RC0 packages once
>> > >> we cross the backport dates. Hence, folks interested in testing this
>> > out can
>> > >> either build from the code or wait for (about) a week longer for the
>> > >> packages (and initial release notes).
>> > >>
>> > >> Features tracked as slipped and expected to be backported by 5th May
>> > are,
>> > >>
>> > >> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
>> > >>
>> > >> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
>> > >>   - Needs a +2 on https://review.gluster.org/13762
>> > >>
>> > >> 3) Enhance handleops readdirplus operation to return handles along with
>> > >> dirents #174 (@skoduri)
>> > >>
>> > >> 4) Halo - Initial version (@pranith)
>> > >
>> > >
>> > > I merged the patch on master. Will send out the port on Thursday. I have
>> > to
>> > > leave like right now to catch train and am on leave tomorrow, so will be
>> > > back on Thursday and get the port done. Will also try to get the other
>> > > patches fb guys mentioned post that preferably by 5th itself.
>> >
>> > Niels found that the HALO patch has pulled in a little bit of the IPv6
>> > patch. This shouldn't have happened.
>> > The IPv6 patch is currently stalled because it depends on an internal
>> > FB library. The IPv6 bits that made it in pull this dependency.
>> > This would have led to a -2 on the HALO patch by me, but as I wasn't
>> > aware of it, the patch was merged.
>> >
>> > The IPV6 changes are in rpcsvc.{c,h} and configure.ac, and don't seem
>> > to affect anything HALO. So they should be easily removable and should
>> > be removed.
>> >
>>
>> As per configure.ac, the macro is enabled only when we are building
>> gluster with "--with-fb-extras", which I don't think we do anywhere, so I
>> didn't think they were important at the moment. Sorry for the confusion
>> caused by this. Thanks to Kaushal for the patch. I will backport
>> that one as well when I do the 3.11 backport of HALO. So I will wait for the
>> backport until Kaushal's patch is merged.
>
> Note that there have been discussions about preventing special vendor
> (Red Hat or Facebook) flags and naming. In that sense, --with-fb-extras
> is not acceptable. Someone was interested in providing a "site.h"
> configuration file that different vendors can use to fine-tune certain
> things that are too detailed for ./configure options.
>
> We should remove the --with-fb-extras as well, especially because it is
> not useful for anyone that does not have access to the forked fbtirpc
> library.
>
> Kaushal mentioned he'll update the patch that removed the IPv6 default
> define, to also remove the --with-fb-extras and related bits.

The patch removing IPV6 and fbextras is at
https://review.gluster.org/17174 waiting for regression tests to run.

I've merged the Selinux backports, https://review.gluster.org/17159
and https://review.gluster.org/17157 into release-3.11

>
> Thanks,
> Niels
>
>>
>>
>>
>> >
>> > >
>> > >>
>> > >>
>> > >> Thanks,
>> > >> Kaushal, Shyam
>> > >>
>> > >> [1] 3.11 Branch: https://github.com/gluster/glusterfs/tree/release-3.11
>> > >>
>> > >> [2] Tag for 3.11.0beta1 :
>> > >> https://github.com/gluster/glusterfs/tree/v3.11.0beta1
>> > >>
>> > >> [3] Tracker BZ for 3.11.0 blockers:
>> > >> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.0
>> > >>
>> > >> ___
>> > >> maintainers mailing list
>> > >> maintain...@gluster.org
>> > >> http://lists.gluster.org/mailman/listinfo/maintainers
>> > >
>> > >
>> > >
>> > >
>> > > --
>> > > Pranith
>> > >
>> > > ___
>> > > Gluster-devel mailing list
>> > > gluster-de...@gluster.org
>> > > http://lists.gluster.org/mailman/listinfo/gluster-devel
>> >
>>
>>
>>
>> --
>> Pranith
>
>> 

Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 3.11: Has been Branched (and pending feature notes)

2017-05-04 Thread Niels de Vos
On Thu, May 04, 2017 at 03:39:58PM +0530, Pranith Kumar Karampuri wrote:
> On Wed, May 3, 2017 at 2:36 PM, Kaushal M  wrote:
> 
> > On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
> >  wrote:
> > >
> > >
> > > On Sun, Apr 30, 2017 at 9:01 PM, Shyam  wrote:
> > >>
> > >> Hi,
> > >>
> > >> Release 3.11 for gluster has been branched [1] and tagged [2].
> > >>
> > >> We have ~4 weeks until the release of 3.11, and a week to backport features that
> > >> slipped the branching date (May-5th).
> > >>
> > >> A tracker BZ [3] has been opened for *blockers* of 3.11 release. Request
> > >> that any bug that is determined as a blocker for the release be noted
> > as a
> > >> "blocks" against this bug.
> > >>
> > >> NOTE: Just a heads up, all bugs that are to be backported in the next 4
> > >> weeks need not be reflected against the blocker, *only* blocker bugs
> > >> identified that should prevent the release, need to be tracked against
> > this
> > >> tracker bug.
> > >>
> > >> We are not building beta1 packages, and will build out RC0 packages once
> > >> we cross the backport dates. Hence, folks interested in testing this
> > out can
> > >> either build from the code or wait for (about) a week longer for the
> > >> packages (and initial release notes).
> > >>
> > >> Features tracked as slipped and expected to be backported by 5th May
> > are,
> > >>
> > >> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
> > >>
> > >> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
> > >>   - Needs a +2 on https://review.gluster.org/13762
> > >>
> > >> 3) Enhance handleops readdirplus operation to return handles along with
> > >> dirents #174 (@skoduri)
> > >>
> > >> 4) Halo - Initial version (@pranith)
> > >
> > >
> > > I merged the patch on master. Will send out the port on Thursday. I have
> > to
> > > leave like right now to catch train and am on leave tomorrow, so will be
> > > back on Thursday and get the port done. Will also try to get the other
> > > patches fb guys mentioned post that preferably by 5th itself.
> >
> > Niels found that the HALO patch has pulled in a little bit of the IPv6
> > patch. This shouldn't have happened.
> > The IPv6 patch is currently stalled because it depends on an internal
> > FB library. The IPv6 bits that made it in pull this dependency.
> > This would have led to a -2 on the HALO patch by me, but as I wasn't
> > aware of it, the patch was merged.
> >
> > The IPV6 changes are in rpcsvc.{c,h} and configure.ac, and don't seem
> > to affect anything HALO. So they should be easily removable and should
> > be removed.
> >
> 
> As per configure.ac, the macro is enabled only when we are building
> gluster with "--with-fb-extras", which I don't think we do anywhere, so I
> didn't think they were important at the moment. Sorry for the confusion
> caused by this. Thanks to Kaushal for the patch. I will backport
> that one as well when I do the 3.11 backport of HALO. So I will wait for the
> backport until Kaushal's patch is merged.

Note that there have been discussions about preventing special vendor
(Red Hat or Facebook) flags and naming. In that sense, --with-fb-extras
is not acceptable. Someone was interested in providing a "site.h"
configuration file that different vendors can use to fine-tune certain
things that are too detailed for ./configure options.

We should remove the --with-fb-extras as well, especially because it is
not useful for anyone that does not have access to the forked fbtirpc
library.

Kaushal mentioned he'll update the patch that removed the IPv6 default
define, to also remove the --with-fb-extras and related bits.

Thanks,
Niels

> 
> 
> 
> >
> > >
> > >>
> > >>
> > >> Thanks,
> > >> Kaushal, Shyam
> > >>
> > >> [1] 3.11 Branch: https://github.com/gluster/glusterfs/tree/release-3.11
> > >>
> > >> [2] Tag for 3.11.0beta1 :
> > >> https://github.com/gluster/glusterfs/tree/v3.11.0beta1
> > >>
> > >> [3] Tracker BZ for 3.11.0 blockers:
> > >> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.0
> > >>
> > >> ___
> > >> maintainers mailing list
> > >> maintain...@gluster.org
> > >> http://lists.gluster.org/mailman/listinfo/maintainers
> > >
> > >
> > >
> > >
> > > --
> > > Pranith
> > >
> > > ___
> > > Gluster-devel mailing list
> > > gluster-de...@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-devel
> >
> 
> 
> 
> -- 
> Pranith

> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] [DHT] The myth of two hops for linkto file resolution

2017-05-04 Thread Xavier Hernandez

Hi,

On 30/04/17 06:03, Raghavendra Gowdappa wrote:

All,

It's a common perception that the resolution of a file having a linkto file on the 
hashed-subvol requires two hops:

1. client to hashed-subvol.
2. client to the subvol where file actually resides.

While it is true that a fresh lookup behaves this way, the other fact that 
gets ignored is that fresh lookups on files are almost always prevented by 
readdirplus. Since readdirplus picks the dentry from the subvolume where the 
actual file (data-file) resides, the two-hop cost is most likely never witnessed by 
the application.


This is true for workloads that list directory contents before accessing 
the files, but there are other use cases that directly access the file 
without navigating through the file system. In this case fresh lookups 
are needed.
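For readers who want to see the indirection discussed here on disk, a DHT linkto file can be inspected directly on a brick. A minimal sketch, not from the original thread; the brick path and file name are placeholder assumptions:

```shell
# On a server node, a DHT linkto file appears as a zero-byte file with the
# sticky bit set (shown as "T" in ls output); its xattr names the subvolume
# that actually holds the data. Paths below are illustrative only.
ls -l /bricks/brick1/vol/dir/somefile
getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/brick1/vol/dir/somefile
```

If the xattr is present, a fresh lookup that hashes to this brick is redirected to the named subvolume, which is the second hop being debated.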


Xavi



A word of caution is that I've not done any testing to prove this observation 
:).

regards,
Raghavendra
___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Enabling shard on EC

2017-05-04 Thread Pranith Kumar Karampuri
+Krutika

Krutika started work on this, but it is very long term - not a simple thing
to do.

On Thu, May 4, 2017 at 3:53 PM, Ankireddypalle Reddy 
wrote:

> Pranith,
>
>  Thanks. Is there any work in progress to add this support?
>
>
>
> Thanks and Regards,
>
> Ram
>
> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> *Sent:* Thursday, May 04, 2017 6:17 AM
>
> *To:* Ankireddypalle Reddy
> *Cc:* Gluster Devel (gluster-de...@gluster.org); gluster-users@gluster.org
> *Subject:* Re: [Gluster-devel] Enabling shard on EC
>
>
>
>
>
>
>
> On Thu, May 4, 2017 at 3:43 PM, Ankireddypalle Reddy 
> wrote:
>
> Pranith,
>
>  Thanks. Does it mean that a given file can be written by
> only one client at a time? If multiple clients try to access the file in
> write mode, does it lead to any kind of data inconsistencies?
>
>
>
> We only tested it for single-writer cases such as VM use cases. We need to
> bring in a transaction framework for sharding to work with multiple writers.
>
>
>
>
>
> Thanks and Regards,
>
> Ram
>
> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> *Sent:* Thursday, May 04, 2017 6:07 AM
> *To:* Ankireddypalle Reddy
> *Cc:* Gluster Devel (gluster-de...@gluster.org); gluster-users@gluster.org
> *Subject:* Re: [Gluster-devel] Enabling shard on EC
>
>
>
> It has never been tested. That said, I don't see any missing pieces that we
> know of for it to work. Please note that sharding works only for single
> writer cases at the moment. Do let us know if you find any problems and we
> will fix them.
>
>
>
> On Wed, May 3, 2017 at 2:17 PM, Ankireddypalle Reddy 
> wrote:
>
> Hi,
>
>   Are there any known negatives of enabling shard on EC? Is this a
> recommended configuration?
>
>
>
> Thanks and Regards,
>
> Ram
>
>
>
>
>
>
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>
>
> --
>
> Pranith
>
>
>
>
>
> --
>
> Pranith
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [DHT] The myth of two hops for linkto file resolution

2017-05-04 Thread Pranith Kumar Karampuri
On Sun, Apr 30, 2017 at 9:33 AM, Raghavendra Gowdappa 
wrote:

> All,
>
> It's a common perception that the resolution of a file having a linkto file
> on the hashed-subvol requires two hops:
>
> 1. client to hashed-subvol.
> 2. client to the subvol where file actually resides.
>
> While it is true that a fresh lookup behaves this way, the other fact that
> gets ignored is that fresh lookups on files are almost always prevented by
> readdirplus. Since readdirplus picks the dentry from the subvolume where
> the actual file (data-file) resides, the two-hop cost is most likely never
> witnessed by the application.
>
> A word of caution is that I've not done any testing to prove this
> observation :).
>

Maybe you should do it and send an update. That way we can use the
knowledge to do something.


>
> regards,
> Raghavendra
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Enabling shard on EC

2017-05-04 Thread Ankireddypalle Reddy
Pranith,
 Thanks. Is there any work in progress to add this support?

Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: Thursday, May 04, 2017 6:17 AM
To: Ankireddypalle Reddy
Cc: Gluster Devel (gluster-de...@gluster.org); gluster-users@gluster.org
Subject: Re: [Gluster-devel] Enabling shard on EC



On Thu, May 4, 2017 at 3:43 PM, Ankireddypalle Reddy 
> wrote:
Pranith,
 Thanks. Does it mean that a given file can be written by only 
one client at a time? If multiple clients try to access the file in write mode, 
does it lead to any kind of data inconsistencies?

We only tested it for single-writer cases such as VM use cases. We need to bring 
in a transaction framework for sharding to work with multiple writers.


Thanks and Regards,
Ram
From: Pranith Kumar Karampuri 
[mailto:pkara...@redhat.com]
Sent: Thursday, May 04, 2017 6:07 AM
To: Ankireddypalle Reddy
Cc: Gluster Devel 
(gluster-de...@gluster.org); 
gluster-users@gluster.org
Subject: Re: [Gluster-devel] Enabling shard on EC

It has never been tested. That said, I don't see any missing pieces that we know 
of for it to work. Please note that sharding works only for single writer cases 
at the moment. Do let us know if you find any problems and we will fix them.

On Wed, May 3, 2017 at 2:17 PM, Ankireddypalle Reddy 
> wrote:
Hi,
  Are there any known negatives of enabling shard on EC? Is this a 
recommended configuration?

Thanks and Regards,
Ram



___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel



--
Pranith



--
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Enabling shard on EC

2017-05-04 Thread Ankireddypalle Reddy
Pranith,
 Thanks. Does it mean that a given file can be written by only 
one client at a time? If multiple clients try to access the file in write mode, 
does it lead to any kind of data inconsistencies?

Thanks and Regards,
Ram
From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: Thursday, May 04, 2017 6:07 AM
To: Ankireddypalle Reddy
Cc: Gluster Devel (gluster-de...@gluster.org); gluster-users@gluster.org
Subject: Re: [Gluster-devel] Enabling shard on EC

It has never been tested. That said, I don't see any missing pieces that we know 
of for it to work. Please note that sharding works only for single writer cases 
at the moment. Do let us know if you find any problems and we will fix them.

On Wed, May 3, 2017 at 2:17 PM, Ankireddypalle Reddy 
> wrote:
Hi,
  Are there any known negatives of enabling shard on EC? Is this a 
recommended configuration?

Thanks and Regards,
Ram



___
Gluster-devel mailing list
gluster-de...@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel



--
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Enabling shard on EC

2017-05-04 Thread Pranith Kumar Karampuri
On Thu, May 4, 2017 at 3:43 PM, Ankireddypalle Reddy 
wrote:

> Pranith,
>
>  Thanks. Does it mean that a given file can be written by
> only one client at a time? If multiple clients try to access the file in
> write mode, does it lead to any kind of data inconsistencies?
>

We only tested it for single-writer cases such as VM use cases. We need to
bring in a transaction framework for sharding to work with multiple writers.
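For context on the thread topic, sharding is enabled per volume through volume options. A minimal sketch, not from the original thread; the volume name and block size are placeholders, option names as in stock gluster:

```shell
# Enable sharding on a volume; it applies only to files created after the
# option is turned on. "myvol" and the 64MB block size are illustrative.
gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 64MB
```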


>
>
> Thanks and Regards,
>
> Ram
>
> *From:* Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
> *Sent:* Thursday, May 04, 2017 6:07 AM
> *To:* Ankireddypalle Reddy
> *Cc:* Gluster Devel (gluster-de...@gluster.org); gluster-users@gluster.org
> *Subject:* Re: [Gluster-devel] Enabling shard on EC
>
>
>
> It has never been tested. That said, I don't see any missing pieces that we
> know of for it to work. Please note that sharding works only for single
> writer cases at the moment. Do let us know if you find any problems and we
> will fix them.
>
>
>
> On Wed, May 3, 2017 at 2:17 PM, Ankireddypalle Reddy 
> wrote:
>
> Hi,
>
>   Are there any known negatives of enabling shard on EC? Is this a
> recommended configuration?
>
>
>
> Thanks and Regards,
>
> Ram
>
>
>
>
>
>
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>
>
>
>
> --
>
> Pranith
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] Release 3.11: Has been Branched (and pending feature notes)

2017-05-04 Thread Pranith Kumar Karampuri
On Wed, May 3, 2017 at 2:36 PM, Kaushal M  wrote:

> On Tue, May 2, 2017 at 3:55 PM, Pranith Kumar Karampuri
>  wrote:
> >
> >
> > On Sun, Apr 30, 2017 at 9:01 PM, Shyam  wrote:
> >>
> >> Hi,
> >>
> >> Release 3.11 for gluster has been branched [1] and tagged [2].
> >>
> >> We have ~4 weeks until the release of 3.11, and a week to backport features that
> >> slipped the branching date (May-5th).
> >>
> >> A tracker BZ [3] has been opened for *blockers* of 3.11 release. Request
> >> that any bug that is determined as a blocker for the release be noted
> as a
> >> "blocks" against this bug.
> >>
> >> NOTE: Just a heads up, all bugs that are to be backported in the next 4
> >> weeks need not be reflected against the blocker, *only* blocker bugs
> >> identified that should prevent the release, need to be tracked against
> this
> >> tracker bug.
> >>
> >> We are not building beta1 packages, and will build out RC0 packages once
> >> we cross the backport dates. Hence, folks interested in testing this
> out can
> >> either build from the code or wait for (about) a week longer for the
> >> packages (and initial release notes).
> >>
> >> Features tracked as slipped and expected to be backported by 5th May
> are,
> >>
> >> 1) [RFE] libfuse rebase to latest? #153 (@amar, @csaba)
> >>
> >> 2) SELinux support for Gluster Volumes #55 (@ndevos, @jiffin)
> >>   - Needs a +2 on https://review.gluster.org/13762
> >>
> >> 3) Enhance handleops readdirplus operation to return handles along with
> >> dirents #174 (@skoduri)
> >>
> >> 4) Halo - Initial version (@pranith)
> >
> >
> > I merged the patch on master. Will send out the port on Thursday. I have
> to
> > leave like right now to catch train and am on leave tomorrow, so will be
> > back on Thursday and get the port done. Will also try to get the other
> > patches fb guys mentioned post that preferably by 5th itself.
>
> Niels found that the HALO patch has pulled in a little bit of the IPv6
> patch. This shouldn't have happened.
> The IPv6 patch is currently stalled because it depends on an internal
> FB library. The IPv6 bits that made it in pull this dependency.
> This would have led to a -2 on the HALO patch by me, but as I wasn't
> aware of it, the patch was merged.
>
> The IPV6 changes are in rpcsvc.{c,h} and configure.ac, and don't seem
> to affect anything HALO. So they should be easily removable and should
> be removed.
>

As per configure.ac, the macro is enabled only when we are building
gluster with "--with-fb-extras", which I don't think we do anywhere, so I
didn't think they were important at the moment. Sorry for the confusion
caused by this. Thanks to Kaushal for the patch. I will backport
that one as well when I do the 3.11 backport of HALO. So I will wait for the
backport until Kaushal's patch is merged.



>
> >
> >>
> >>
> >> Thanks,
> >> Kaushal, Shyam
> >>
> >> [1] 3.11 Branch: https://github.com/gluster/glusterfs/tree/release-3.11
> >>
> >> [2] Tag for 3.11.0beta1 :
> >> https://github.com/gluster/glusterfs/tree/v3.11.0beta1
> >>
> >> [3] Tracker BZ for 3.11.0 blockers:
> >> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.11.0
> >>
> >> ___
> >> maintainers mailing list
> >> maintain...@gluster.org
> >> http://lists.gluster.org/mailman/listinfo/maintainers
> >
> >
> >
> >
> > --
> > Pranith
> >
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-04 Thread Abhijit Paul
Since I am new to Gluster, can you please explain how to turn off/disable the "perf
xlator options"?
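For reference, the performance translators can be disabled per volume with `gluster volume set`. A minimal sketch, not an authoritative answer from the thread; the volume name "myvol" is a placeholder, and the option names are the stock gluster defaults:

```shell
# Turn the client-side performance translators off one by one as a workaround.
# "myvol" is a placeholder; substitute your own volume name.
gluster volume set myvol performance.quick-read off
gluster volume set myvol performance.io-cache off
gluster volume set myvol performance.read-ahead off
gluster volume set myvol performance.stat-prefetch off
gluster volume set myvol performance.write-behind off
gluster volume set myvol performance.open-behind off
# Verify the current values:
gluster volume get myvol all | grep '^performance'
```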

On Wed, May 3, 2017 at 8:51 PM, Atin Mukherjee  wrote:

> I think there is still some pending work in some of the gluster perf
> xlators to make that feature complete. Cced the relevant folks for more
> information. Can you please turn off all the perf xlator options as a
> workaround to move forward?
>
> On Wed, May 3, 2017 at 8:04 PM, Abhijit Paul 
> wrote:
>
>> Dear folks,
>>
>> I set up GlusterFS (3.10.1), NFS type, as a persistent volume for
>> Elasticsearch (5.1.2), but am currently facing a "CorruptIndexException"
>> in the Elasticsearch logs, and because of that the index health turned RED
>> in Elasticsearch.
>>
>> Later I found that there was an issue with gluster < 3.10 (
>> https://bugzilla.redhat.com/show_bug.cgi?id=1390050), but even after upgrading
>> to 3.10.1 the issue is still there.
>>
>> So I am curious to know what the root cause is and how to fix this issue.
>>
>> Regards,
>> Abhijit
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] postgresql is unable to create a table in gluster volume

2017-05-04 Thread Jiffin Tony Thottan



On 04/05/17 02:03, Praveen George wrote:

Hi Team,

We’ve been intermittently seeing issues where postgresql is unable to 
create a table, or some info is missing.


Postgresql logs the following error:

ERROR:  unexpected data beyond EOF in block 53 of relation 
base/16384/12009
HINT:  This has been seen to occur with buggy kernels; consider 
updating your system.


We are using the k8s PV/PVC to bind the volumes to the containers and 
using the gluster plugin to mount the volumes on the worker nodes and 
take them into the containers.


The issue occurs regardless of whether the k8s spec specifies 
mounting of the PV using the PV provider or mounts the gluster volume 
directly.


Just to check whether the issue is with the glusterfs client, we mounted the 
volume using NFS (NFS on the client talking to gluster on the master), and 
the issue doesn't occur. However, the NFS client then talks directly 
to _one_ of the gluster masters; this means that if that master fails, 
it will not fail over to the other gluster master - we thus lose 
gluster HA if we go this route.




If you are interested, there are HA solutions available with NFS. It 
depends on which NFS solution you are using: if it is Gluster NFS 
(the NFS server integrated with gluster), then use CTDB; for 
NFS-Ganesha, we already have an integrated solution with Pacemaker/Corosync.


Please update your gluster version, since it has been EOLed; you won't 
receive any more updates for that version.


--

Jiffin

Has anyone faced this issue? Is there a fix already available for it? 
Gluster version is 3.7.20 and k8s is 1.5.2.


Thanks
Praveen


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Remove-brick failed

2017-05-04 Thread Jesper Led Lauridsen TS Infra server
Hi

I'm trying to remove 2 bricks from a Distributed-Replicate volume without 
losing data, but it fails during rebalance.

Any help is appreciated... 

What I do:
# gluster volume remove-brick glu_linux_dr2_oracle replica 2 
glustoretst03.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle 
glustoretst04.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle start
volume remove-brick start: success
ID: c2549eb4-e37a-4f0d-9273-3f7c580e9e80
# gluster volume remove-brick glu_linux_dr2_oracle replica 2 
glustoretst03.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle 
glustoretst04.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle status
Node                      Rebalanced-files    size     scanned   failures   skipped   status   run time in secs
---------                 ----------------   ------    -------   --------   -------   ------   ----------------
glustoretst04.net.dr.dk                  0   0Bytes          0          0         0   failed               0.00
glustoretst03.net.dr.dk                  0   0Bytes          0          0         0   failed               0.00

 log output ***
# cat etc-glusterfs-glusterd.vol.log
[2017-05-03 12:18:59.423867] I 
[glusterd-handler.c:1296:__glusterd_handle_cli_get_volume] 0-glusterd: Received 
get vol req
[2017-05-03 12:20:21.024213] I 
[glusterd-handler.c:3836:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume glu_int_dr2_dalet
[2017-05-03 12:21:10.813956] I 
[glusterd-handler.c:1296:__glusterd_handle_cli_get_volume] 0-glusterd: Received 
get vol req
[2017-05-03 12:22:45.298742] I 
[glusterd-brick-ops.c:676:__glusterd_handle_remove_brick] 0-management: 
Received rem brick req
[2017-05-03 12:22:45.298807] I 
[glusterd-brick-ops.c:722:__glusterd_handle_remove_brick] 0-management: request 
to change replica-count to 2
[2017-05-03 12:22:45.311705] I 
[glusterd-utils.c:11549:glusterd_generate_and_set_task_id] 0-management: 
Generated task-id c2549eb4-e37a-4f0d-9273-3f7c580e9e80 for key remove-brick-id
[2017-05-03 12:22:45.312296] I 
[glusterd-op-sm.c:5105:glusterd_bricks_select_remove_brick] 0-management: force 
flag is not set
[2017-05-03 12:22:46.414038] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.419778] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.425132] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.429469] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.433623] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.439089] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.444048] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.448623] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.457386] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.538115] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.542870] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.547325] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.551742] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.555951] I 
[glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default 
transport type for tcp,rdma volume is tcp if option is not defined by the user
[2017-05-03 12:22:46.560725] I