Re: [Gluster-users] Checklist for ganesha FSAL plugin integration testing for 3.9

2016-09-06 Thread Soumya Koduri
CCing gluster-devel & users MLs. Somehow they got missed in my earlier reply. Thanks, Soumya. On 09/06/2016 12:19 PM, Soumya Koduri wrote: On 09/03/2016 12:44 AM, Pranith Kumar Karampuri wrote: hi, Did you get a chance to decide on the nfs-ganesha integration tests that need to be run

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-06 Thread Pranith Kumar Karampuri
On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko < oleksa...@natalenko.name> wrote: > Hello, > > thanks, but that is not what I want. I have no issues debugging gfapi apps, > but have an issue with GlusterFS FUSE client not being handled properly by > Massif tool. > > Valgrind+Massif does not

Re: [Gluster-users] group write permissions not being respected

2016-09-06 Thread Pat Haley
Using the gluster client rather than NFS seems to fix the problem On 09/01/2016 02:35 PM, Pat Haley wrote: Hi Pranith, In attached file capture.pcap On 09/01/2016 01:01 PM, Pranith Kumar Karampuri wrote: You need to capture the file so that we can see the tcpdump in wireshark to inspect

Re: [Gluster-users] [Gluster-devel] Anyone wants to maintain Mac-OSX port of gluster?

2016-09-06 Thread Kaleb S. KEITHLEY
On 09/06/2016 08:03 AM, Emmanuel Dreyfus wrote: On Tue, Sep 06, 2016 at 07:30:08AM -0400, Kaleb S. KEITHLEY wrote: Mac OS X doesn't build at the present time because its sed utility (used in the xdrgen/rpcgen part of the build) doesn't support the (linux compatible) '-r' command line option.

Re: [Gluster-users] [URGENT] Add-bricks to a volume corrupted the files

2016-09-06 Thread Kevin Lemonnier
Hi, Here is the info:

Volume Name: VMs
Type: Replicate
Volume ID: c5272382-d0c8-4aa4-aced-dd25a064e45c
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ips4adm.name:/mnt/storage/VMs
Brick2: ips5adm.name:/mnt/storage/VMs
Brick3: ips6adm.name:/mnt/storage/VMs

Re: [Gluster-users] [Gluster-devel] Anyone wants to maintain Mac-OSX port of gluster?

2016-09-06 Thread Kaleb S. KEITHLEY
On 09/02/2016 03:49 PM, Pranith Kumar Karampuri wrote: hi, As per MAINTAINERS file this port doesn't have maintainer. If you want to take up the responsibility of maintaining the port please let us know how you want to go about doing it and what should be the checklist of things that should

Re: [Gluster-users] [Gluster-devel] Anyone wants to maintain Mac-OSX port of gluster?

2016-09-06 Thread Emmanuel Dreyfus
On Tue, Sep 06, 2016 at 07:30:08AM -0400, Kaleb S. KEITHLEY wrote: > Mac OS X doesn't build at the present time because its sed utility (used in > the xdrgen/rpcgen part of the build) doesn't support the (linux compatible) > '-r' command line option. (NetBSD and FreeBSD do.) > > (There's an easy
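The portability point can be illustrated with a short, hypothetical pattern (the pattern below is made up for illustration; only the -r/-E flag behavior is the point). GNU sed spells extended regular expressions as -r, while the BSD sed shipped with Mac OS X accepts only -E, which recent GNU sed also understands, making -E the portable choice:

```shell
# Hypothetical pattern, for illustration only.
# Both GNU sed and Mac OS X's BSD sed accept -E for extended regexps,
# so -E is the portable spelling; GNU-only -r fails on Mac OS X.
echo "abc123" | sed -E 's/([a-z]+)([0-9]+)/\2-\1/'   # prints 123-abc
```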

[Gluster-users] Distributed Geo Replication Lag

2016-09-06 Thread Georg Schoenberger
Hi folks, I am trying to switch over to my distributed geo-redundant volume. Any tools/checks/methods to ensure the geo-redundant volume does not lag behind? Can I ensure that the geo-replicated host has all the data? THX, Georg

Re: [Gluster-users] Tiering and sharding for VM workload

2016-09-06 Thread Gandalf Corvotempesta
Anybody? On 05 Sep 2016 22:19, "Gandalf Corvotempesta" < gandalf.corvotempe...@gmail.com> wrote: > Is tiering with sharding useful for a VM workload? > Let's assume a storage with tiering and sharding enabled, used for > hosting VM images. > Each shard is subject to tiering, thus the most

Re: [Gluster-users] Tiering and sharding for VM workload

2016-09-06 Thread Dan Lambright
- Original Message - > From: "Gandalf Corvotempesta" > To: "gluster-users" > Sent: Tuesday, September 6, 2016 8:29:06 AM > Subject: Re: [Gluster-users] Tiering and sharding for VM workload > > > > Anybody? Paul Cruzner did

[Gluster-users] Un-delete a file

2016-09-06 Thread Kevin Lemonnier
Hi, During last night's problems we ended up having to delete the VM's disks and re-create them from scratch, then import the data from the backups. I don't think so, but just to be sure, is there a way to recover those deleted files? We had to remove them because we don't have much space on

Re: [Gluster-users] Tiering and sharding for VM workload

2016-09-06 Thread David Gossage
On Tue, Sep 6, 2016 at 7:29 AM, Gandalf Corvotempesta < gandalf.corvotempe...@gmail.com> wrote: > Anybody? > > While I have not tested it yet, the two email threads I have seen from users trying it suggest that performance has been worse rather than showing any benefit. Perhaps those using it

Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow

2016-09-06 Thread David Gossage
Going to top post with the solution Krutika Dhananjay came up with. Her steps were much less volatile, could be done with the volume still being actively used, and were also much less prone to accidental destruction. My use case and issue were a desire to wipe a brick and recreate it with the same directory

Re: [Gluster-users] Tiering and sharding for VM workload

2016-09-06 Thread Krutika Dhananjay
Theoretically, whatever you said is correct (at least from shard's perspective). Adding Rafi, who has worked on tiering, to know if he thinks otherwise. It must be mentioned that sharding + tiering hasn't been tested as such till now, by us at least. Did you try it? If so, what was your experience?

Re: [Gluster-users] Un-delete a file

2016-09-06 Thread Niels de Vos
On Tue, Sep 06, 2016 at 03:27:46PM +0200, Kevin Lemonnier wrote: > Hi, > > During last night's problems we ended up having to delete the VM's disks > and re-creating them from scratch, then import the datas from the backups. > I don't think so but just to be sure, is there a way to recover those

Re: [Gluster-users] Fwd: Very slow performance when enabling tierd storage with SSD

2016-09-06 Thread Milind Changire
Benjamin, There are three issues of interest: 1. Since the hot tier reached 90% of its capacity, the nature and frequency of file accesses are important. Aggressive file accesses will cause a flood of promotions, albeit sequential ones, hampering performance. Abstaining from access
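For reference, promotion/demotion aggressiveness can be tuned through the tier volume options; a sketch follows, where `myvol` is a placeholder and the numeric values are illustrative only, not recommendations from the thread:

```shell
# Placeholder volume name "myvol"; values are illustrative.
# Promote/demote less often to avoid a flood of promotions:
gluster volume set myvol cluster.tier-promote-frequency 1200
gluster volume set myvol cluster.tier-demote-frequency 1200
# Require more reads/writes before a file counts as "hot":
gluster volume set myvol cluster.read-freq-threshold 5
gluster volume set myvol cluster.write-freq-threshold 5
# Start demoting before the hot tier fills up:
gluster volume set myvol cluster.watermark-hi 75
gluster volume set myvol cluster.watermark-low 50
```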

Re: [Gluster-users] Tiering and sharding for VM workload

2016-09-06 Thread Mohammed Rafi K C
Yes, you are correct. On a sharded volume, the hot and cold tiers would be based on sharded chunks. I'm stressing the point Krutika mentioned in her mail: we haven't tested this use case in depth. Regards, Rafi KC On 09/06/2016 06:38 PM, Krutika Dhananjay wrote: > Theoretically whatever you

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-06 Thread feihu929
Hello, Oleksandr. You can compile the simple test code posted here (http://www.gluster.org/pipermail/gluster-users/2016-August/028183.html), then run the command: G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind --tool=massif ./glfsxmp; the command will produce a file like
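The quoted command, laid out as a sketch of the full Massif workflow (`glfsxmp` is the test binary from the linked post; the exact output filename depends on the process ID):

```shell
# Profile the gfapi test program with Valgrind's Massif heap profiler.
# G_SLICE/G_DEBUG disable glib's slice allocator so Massif sees the
# real malloc/free traffic instead of pooled allocations.
G_SLICE=always-malloc G_DEBUG=gc-friendly \
    valgrind --tool=massif ./glfsxmp
# Massif writes massif.out.<pid>; render it as a readable report:
ms_print massif.out.*
```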

Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow

2016-09-06 Thread Pranith Kumar Karampuri
On Tue, Sep 6, 2016 at 10:11 PM, Krutika Dhananjay wrote: > > > On Tue, Sep 6, 2016 at 7:27 PM, David Gossage > wrote: > >> Going to top post with solution Krutika Dhananjay came up with. His >> steps were much less volatile and could be done

Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow

2016-09-06 Thread David Gossage
On Tue, Sep 6, 2016 at 11:41 AM, Krutika Dhananjay wrote: > > > On Tue, Sep 6, 2016 at 7:27 PM, David Gossage > wrote: > >> Going to top post with solution Krutika Dhananjay came up with. His >> steps were much less volatile and could be done

Re: [Gluster-users] 3.8.3 Shards Healing Glacier Slow

2016-09-06 Thread Krutika Dhananjay
On Tue, Sep 6, 2016 at 7:27 PM, David Gossage wrote: > Going to top post with solution Krutika Dhananjay came up with. His steps > were much less volatile and could be done with volume still being actively > used and also much less prone to accidental destruction. >

Re: [Gluster-users] [Gluster-devel] Profiling GlusterFS FUSE client with Valgrind's Massif tool

2016-09-06 Thread Oleksandr Natalenko
Created BZ for it [1]. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1373630 On Tuesday, 6 September 2016, 23:32:51 EEST, Pranith Kumar Karampuri wrote: > I included you on a thread on users, let us see if he can help you out. > > On Mon, Aug 29, 2016 at 4:02 PM, Oleksandr Natalenko < > >

Re: [Gluster-users] yum errors

2016-09-06 Thread Dj Merrill
On 09/06/2016 01:54 AM, Kaushal M wrote: >> Following down through the docs on that link, I find the CentOS Storage >> SIG repo has 3.7.13, and the Storage testing repo has 3.7.15. >> >> What is a typical timeframe for releases to transition from the testing >> repo to the normal repo?
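For CentOS 7, the flow from testing repo to release repo looks roughly like this. A sketch: the release package name follows the CentOS Storage SIG convention, and the exact testing-repo id (`centos-gluster37-test`) is an assumption:

```shell
# Enable the CentOS Storage SIG repository for GlusterFS 3.7:
yum install -y centos-release-gluster37
# Stable packages (3.7.13 at the time of this thread):
yum install -y glusterfs-server
# Pre-release packages (3.7.15) sit in the testing repo until promoted:
yum install -y --enablerepo=centos-gluster37-test glusterfs-server
```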

Re: [Gluster-users] Change active node in Distributed Geo Replication

2016-09-06 Thread Aravinda
Unfortunately we do not have an option to choose Active workers. The following two modes are supported (both are automatic): Node ID (default mode): if a node's ID is present in the first-up subvolume's list, then the respective worker becomes Active; the rest of the workers will be
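While the Active/Passive choice itself cannot be configured, the current assignment can be inspected. A sketch, where `mastervol`, `slavehost`, and `slavevol` are placeholders for the session's actual names:

```shell
# Show which geo-replication workers are Active and which are Passive;
# the STATUS column reports the role per brick/worker.
gluster volume geo-replication mastervol slavehost::slavevol status
```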

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-06 Thread Oleksandr Natalenko
Correct. On September 7, 2016 1:51:08 AM GMT+03:00, Pranith Kumar Karampuri wrote: >On Wed, Sep 7, 2016 at 12:24 AM, Oleksandr Natalenko < >oleksa...@natalenko.name> wrote: > >> Hello, >> >> thanks, but that is not what I want. I have no issues debugging gfapi >apps, >> but

Re: [Gluster-users] Gfapi memleaks, all versions

2016-09-06 Thread Oleksandr Natalenko
Hello, thanks, but that is not what I want. I have no issues debugging gfapi apps, but have an issue with GlusterFS FUSE client not being handled properly by Massif tool. Valgrind+Massif does not handle all forked children properly, and I believe that happens because of some memory corruption
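One workaround to try for the fork problem described above (an untested sketch; the volfile server/ID values and mount point are placeholders): run the FUSE client in the foreground under Massif and ask Valgrind to follow children explicitly:

```shell
# -N keeps glusterfs in the foreground so Valgrind stays attached;
# --trace-children=yes follows fork+exec into child processes;
# %p puts each process's Massif data in its own output file.
G_SLICE=always-malloc G_DEBUG=gc-friendly \
valgrind --tool=massif --trace-children=yes \
    --massif-out-file=massif.out.%p \
    glusterfs -N --volfile-server=server1 --volfile-id=myvol /mnt/gluster
```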

Re: [Gluster-users] Distributed Geo Replication Lag

2016-09-06 Thread Georg Schoenberger
On 2016-09-07 07:45, Aravinda wrote: Using the Checkpoint feature you can confirm that Geo-rep has synced up to that point in time. Set a checkpoint: gluster volume geo-replication :: config checkpoint now. Touch the mount point (to record a setattr in every brick's changelog): mount -t glusterfs localhost:/ /mnt/ touch
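The steps above, laid out as a sketch with placeholder names (`mastervol`, `slavehost`, and `slavevol` stand in for the values elided in the preview):

```shell
# 1. Set a checkpoint at the current time:
gluster volume geo-replication mastervol slavehost::slavevol \
    config checkpoint now
# 2. Touch the mount point so a setattr lands in every brick's changelog:
mount -t glusterfs localhost:/mastervol /mnt/mastervol
touch /mnt/mastervol
# 3. Poll until the status output reports the checkpoint as completed:
gluster volume geo-replication mastervol slavehost::slavevol status detail
```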