Re: [Gluster-users] Does Gluster NFS mount support locking?

2016-04-27 Thread Chen Chen

Hi Kaleb,

As you said, mount with "local_lock=flock" solved my problem.
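For the archives, the working mount line now looks roughly like this (server,
volume and mount point are placeholders for our real ones); as far as I
understand, local_lock=flock makes the NFS client handle flock() locally
instead of sending it to the lock manager:

    mount -t nfs -o vers=3,local_lock=flock <server>:/<volume> /mnt/<mountpoint>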
Many thanks!

Chen

On 4/27/2016 7:25 PM, Kaleb S. KEITHLEY wrote:

On 04/27/2016 05:37 AM, Chen Chen wrote:

Hi everyone,

Does Gluster NFS mount support file locking?

My program blocked at "flock(3, LOCK_EX"
"rpcinfo -p" said nlockmgr is running
firewalld (of both client and server) already disabled
mounted with "local_lock=none"



NFS (and Gluster NFS, Gluster Native) _only_ supports POSIX advisory
locking, i.e. fcntl(fd, F_SETLK, ...);



--
Chen Chen
Shanghai SmartQuerier Biotechnology Co., Ltd.
Add: 3F, 1278 Keyuan Road, Shanghai 201203, P. R. China
Mob: +86 15221885893
Email: chenc...@smartquerier.com
Web: www.smartquerier.com



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] File size error when restoring backups causes restore failure.

2016-04-27 Thread Lindsay Mathieson
On 27 April 2016 at 17:55, Lindsay Mathieson 
wrote:

> I'm getting the following file size error when restoring proxmox qemu
> backups via gfapi. I don't think the issue is with proxmox as I have tested
> the same restore process with other storages with no problem(nfs, cephfs,
> ceph rbd). Also if I restore to the gluster *fuse* mount it works ok.




I did some more testing and tracing through code and it appears to be a
problem with the proxmox restore script, not gluster. My apologies for the
false alarm.


-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Fwd: dht_is_subvol_filled messages on client

2016-04-27 Thread Serkan Çoban
I already checked the bricks' file systems. They are nearly empty (about 1% usage)

On Wed, Apr 27, 2016 at 3:52 PM, Kaushal M  wrote:
> On Wed, Apr 27, 2016 at 3:13 PM, Serkan Çoban  wrote:
>> Hi, can someone give me a clue about below problem?
>> Thanks,
>> Serkan
>>
>>
>> -- Forwarded message --
>> From: Serkan Çoban 
>> Date: Tue, Apr 26, 2016 at 3:33 PM
>> Subject: dht_is_subvol_filled messages on client
>> To: Gluster Users 
>>
>>
>> Hi,
>>
>> I am getting [dht-diskusage.c:277:dht_is_subvol_filled] 0-v0-dht:
>> inodes on subvolume 'v0-disperse-56' are at (100.00 %), consider
>> adding more bricks.
>
> This means that the underlying filesystem has run out of inodes it can
> allocate. You will not be able to create any more files/directories on
> this brick anymore.
>
>>
>> message on client logs. My cluster is empty; there are only a couple of
>> GB of files for testing. Why does this message appear in syslog? Is it safe
>> to ignore it?
>
> Since you are testing, are you by chance using the root filesystem or
> another already-in-use filesystem for the brick?
> That should explain why you're getting these logs even though you
> actually have just a few files on the glusterfs brick.
>
>>
>> Serkan
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] "gluster volume heal full" locking all files after adding a brick

2016-04-27 Thread Alastair Neil
What are the quorum settings on the volumes?
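You can check and adjust them with something like this, using gv1 as an
example ("volume get" needs a reasonably recent gluster; otherwise the
reconfigured options show up in "volume info"):

    gluster volume info gv1
    gluster volume get gv1 cluster.quorum-type
    gluster volume get gv1 cluster.server-quorum-type

If client quorum (cluster.quorum-type) is enabled and no longer met, writes
are rejected, which could explain the read-only behaviour you are seeing.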


On 27 April 2016 at 08:08, Tomer Paretsky  wrote:

> Hi all
>
> i am currently running two replica 3 volumes acting as storage for VM
> images.
> due to some issues with glusterfs over ext4 filesystem (kernel panics), i
> tried removing one of the bricks from each volume from a single server, and
> then re-adding them after re-formatting the underlying partition to xfs, on
> only one of the hosts for testing purposes.
>
> the commands used were:
>
> 1) gluster volume remove-brick gv1 replica 2  :/storage/gv1/brk
> force
> 2) gluster volume remove-brick gv2 replica 2 :/storage/gv2/brk
> force
>
> 3) reformatted /storage/gv1 and /storage/gv2 to xfs (these are the
> local/physical mountpoints of the gluster bricks)
>
> 4) gluster volume add-brick gv1 replica 3 :/storage/gv1/brk
> 5) gluster volume add-brick gv2 replica 3 :/storage/gv2/brk
>
> so far - so good -- both bricks were successfully re added to the volume.
>
> 6) gluster volume heal gv1 full
> 7) gluster volume heal gv2 full
>
> the heal operation started and i can see files being replicated into the
> newly added bricks BUT - all the files on the two nodes which were not
> touched are now locked (ReadOnly), i presume, until the heal operation
> finishes and replicates all the files to the newly added bricks (which
> might take a while..)
>
> now as far as i understood the documentation of the healing process - the
> files should not have been locked at all. or am i missing something
> fundamental here?
>
> is there a way to prevent locking of the source files during a heal -full
> operation?
>
> is there a better way to perform the process i just described?
>
> your help is enormously appreciated,
> Cheers,
> Tomer Paretsky
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-infra] [ovirt-users] [Attention needed] GlusterFS repository down - affects CI / Installations

2016-04-27 Thread Michael Scherer
Le mercredi 27 avril 2016 à 14:30 +0530, Ravishankar N a écrit :
> @gluster infra  - FYI.

So that's fixed. There was an issue on Rackspace; I will post a detailed
timeline as soon as I have time to write it.

In short, the volume hosting the content went down, and a reboot fixed the
issue.

> On 04/27/2016 02:20 PM, Nadav Goldin wrote:
> > Hi,
> > The GlusterFS repository became unavailable this morning, as a result 
> > all Jenkins jobs that use the repository will fail, the common error 
> > would be:
> >
> > 
> > http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/noarch/repodata/repomd.xml:
> > [Errno 14] HTTP Error 403 - Forbidden
> >
> >
> > Also, installations of oVirt will fail.
> >
> > We are working on a solution and will update asap.
> >
> > Nadav.
> >
> >
> >
> > ___
> > Users mailing list
> > us...@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> ___
> Gluster-infra mailing list
> gluster-in...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra

-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS




___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [ovirt-users] [Gluster-infra] [Attention needed] GlusterFS repository down - affects CI / Installations

2016-04-27 Thread Sandro Bonazzola
On Wed, Apr 27, 2016 at 11:09 AM, Niels de Vos  wrote:

> On Wed, Apr 27, 2016 at 02:30:57PM +0530, Ravishankar N wrote:
> > @gluster infra  - FYI.
> >
> > On 04/27/2016 02:20 PM, Nadav Goldin wrote:
> > >Hi,
> > >The GlusterFS repository became unavailable this morning, as a result
> all
> > >Jenkins jobs that use the repository will fail, the common error would
> be:
> > >
> > >
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/noarch/repodata/repomd.xml
> :
> > >[Errno 14] HTTP Error 403 - Forbidden
> > >
> > >
> > >Also, installations of oVirt will fail.
>
> I thought oVirt moved to using the packages from the CentOS Storage SIG?
>

We did that for CentOS Virt SIG builds.
On oVirt upstream we're still on Gluster upstream.
We'll move to Storage SIG there as well.



> In any case, automated tests should probably use those instead of the
> packages on download.gluster.org. We're trying to minimize the work
> packagers need to do, and get the glusterfs and other components in the
> repositories that are provided by different distributions.
>
> For more details, see the quickstart for the Storage SIG here:
>   https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
>
> HTH,
> Niels
>
> ___
> Users mailing list
> us...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS writing file from Java using NIO

2016-04-27 Thread Kaushal M
We don't have a lot of Java developers around who can help you.

Maybe semiosis/Louis, the author of the Java library, can help you.

On Wed, Apr 27, 2016 at 1:03 PM, Patroklos Papapetrou
 wrote:
> Hi all
>
> I'm new to GlusterFS so please forgive me if I'm using the wrong mailing list
> or this question has already been answered in the past.
> I have setup GlusterFS ( server and client ) to an Ubuntu instance and now
> I'm trying to use the Java library to read and write files.
>
> The Example.java works pretty fine but when I try to write a big file (
> actually after some tests I realized that "big file" = > 8k ) I get the
> following exception
>
> Exception in thread "main" java.lang.IllegalArgumentException
> at java.nio.Buffer.position(Buffer.java:244)
> at
> com.peircean.glusterfs.GlusterFileChannel.write(GlusterFileChannel.java:175)
> at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)
> at java.nio.channels.Channels.writeFully(Channels.java:101)
> at java.nio.channels.Channels.access$000(Channels.java:61)
> at java.nio.channels.Channels$1.write(Channels.java:174)
> at java.nio.file.Files.write(Files.java:3297)
> at com.peircean.glusterfs.example.Example.main(Example.java:82)
>
> Which is caused because the bytes written are more than the buffer limit (
> 8192 ) . However the file is correctly written in Gluster.
>
> So here's my questions:
> 1. Is there any known issue in java lib?
> 2. Should I use another way of writing "big" files?
>
> BTW, trying to write the same file using the relative path of Gluster's
> mounted volume is working without any issues.
>
> Thanks for your response.
>
>
> --
> Patroklos Papapetrou | Chief Software Architect
> s: ppapapetrou
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster Weekly Community meeting - 27/Apr/2016

2016-04-27 Thread Kaushal M
The meeting logs for today's meeting are available at the following links.

- Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-27/weekly_community_meeting_27apr2016.2016-04-27-12.02.html
- Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-27/weekly_community_meeting_27apr2016.2016-04-27-12.02.txt
- Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-27/weekly_community_meeting_27apr2016.2016-04-27-12.02.log.html

The minutes have been pasted below.

Next meeting will be held at the same time, 1200 UTC on 04/May/2016,
and will be hosted by rastar.

Thank you, to everyone who attended the meeting. See you next week.

Cheers.
Kaushal

Meeting summary
---
* Roll call  (kshlm, 12:03:02)

* Last week's AIs  (kshlm, 12:06:52)
  * ACTION: kshlm & csim to set up faux/pseudo user email for gerrit,
bugzilla, github  (kshlm, 12:07:40)

* jdarcy to provide a general Gluster-4.0 status update  (kshlm,
  12:09:28)
  * ACTION: jdarcy to provide a general Gluster-4.0 status update
(kshlm, 12:10:37)

* jdarcy will get more technical and community leads to participate in
  the weekly meeting  (kshlm, 12:10:48)
  * ACTION: jdarcy will get more technical and community leads to
participate in the weekly meeting  (kshlm, 12:12:43)

* hagarth to take forward discussion on release and support strategies
  (onto mailing lists or another IRC meeting)  (kshlm, 12:13:00)
  * ACTION: hagarth to take forward discussion on release and support
strategies (onto mailing lists or another IRC meeting)  (kshlm,
12:16:34)

* amye to check on some blog posts being distorted on blog.gluster.org
  (kshlm, 12:16:51)
  * LINK:
http://www.gluster.org/pipermail/gluster-infra/2016-April/002106.html
(msvbhat, 12:20:53)
  * ACTION: amye to check on some blog posts being distorted on
blog.gluster.org, josferna's post in particular.  (kshlm, 12:26:06)

* GlusterFS-3.7  (kshlm, 12:26:27)
  * ACTION: hagarth will start a discussion on his release-management
strategy  (kshlm, 12:35:36)
  * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1318289
(post-factum, 12:36:30)

* GlusterFS-3.6  (kshlm, 12:42:43)
  * LINK:
https://www.gluster.org/pipermail/gluster-users/2016-April/026438.html
(kshlm, 12:44:06)
  * ACTION: kshlm to check with reporter of 3.6 leaks on backport need
(kshlm, 12:47:58)

* GlusterFS-3.8 & 4.0  (kshlm, 12:55:14)

* Next weeks host?  (kshlm, 13:02:25)

Meeting ended at 13:06:34 UTC.




Action Items

* kshlm & csim to set up faux/pseudo user email for gerrit, bugzilla,
  github
* jdarcy to provide a general Gluster-4.0 status update
* jdarcy will get more technical and community leads to participate in
  the weekly meeting
* hagarth to take forward discussion on release and support strategies
  (onto mailing lists or another IRC meeting)
* amye to check on some blog posts being distorted on blog.gluster.org,
  josferna's post in particular.
* hagarth will start a discussion on his release-management strategy
* kshlm to check with reporter of 3.6 leaks on backport need




Action Items, by person
---
* hagarth
  * hagarth to take forward discussion on release and support strategies
(onto mailing lists or another IRC meeting)
  * hagarth will start a discussion on his release-management strategy
* jdarcy
  * jdarcy to provide a general Gluster-4.0 status update
  * jdarcy will get more technical and community leads to participate in
the weekly meeting
* josferna
  * amye to check on some blog posts being distorted on
blog.gluster.org, josferna's post in particular.
* kshlm
  * kshlm & csim to set up faux/pseudo user email for gerrit, bugzilla,
github
  * kshlm to check with reporter of 3.6 leaks on backport need
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kshlm (109)
* post-factum (22)
* rastar (18)
* hagarth (16)
* atinm (9)
* josferna (7)
* jiffin (6)
* jdarcy (6)
* msvbhat (6)
* kkeithley (6)
* zodbot (3)
* anoopcs (1)
* skoduri (1)
* glusterbot (1)
* pkalever (1)
* karthik___ (1)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Fwd: dht_is_subvol_filled messages on client

2016-04-27 Thread Kaushal M
On Wed, Apr 27, 2016 at 3:13 PM, Serkan Çoban  wrote:
> Hi, can someone give me a clue about below problem?
> Thanks,
> Serkan
>
>
> -- Forwarded message --
> From: Serkan Çoban 
> Date: Tue, Apr 26, 2016 at 3:33 PM
> Subject: dht_is_subvol_filled messages on client
> To: Gluster Users 
>
>
> Hi,
>
> I am getting [dht-diskusage.c:277:dht_is_subvol_filled] 0-v0-dht:
> inodes on subvolume 'v0-disperse-56' are at (100.00 %), consider
> adding more bricks.

This means that the underlying filesystem has run out of inodes it can
allocate. You will not be able to create any more files/directories on
this brick anymore.
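A quick way to confirm is to run `df -i` against the brick mountpoints and
compare the IUse% column (the path below is just a placeholder for your brick
directory):

    df -i /bricks/v0/brick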

>
> message on client logs. My cluster is empty; there are only a couple of
> GB of files for testing. Why does this message appear in syslog? Is it safe
> to ignore it?

Since you are testing, are you by chance using the root filesystem or
another already-in-use filesystem for the brick?
That should explain why you're getting these logs even though you
actually have just a few files on the glusterfs brick.

>
> Serkan
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] File size error when restoring backups causes restore failure.

2016-04-27 Thread Krutika Dhananjay
Hmm, could you get me the output of `getfattr -d -m . -e hex` on this vm file
from all the three bricks?

And also the `ls -l` output of this vm file as seen from the mount point.
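Something along these lines, where the brick path is taken from your logs and
the image path is a placeholder:

    # on each of the three brick nodes:
    getfattr -d -m . -e hex /tank/vmdata/datastore4/<path-to-vm-image>

    # on the client, from the fuse mount point:
    ls -l <mount-point>/<path-to-vm-image>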

-Krutika

On Wed, Apr 27, 2016 at 1:25 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> I'm getting the following file size error when restoring proxmox qemu
> backups via gfapi. I don't think the issue is with proxmox as I have tested
> the same restore process with other storages with no problem(nfs, cephfs,
> ceph rbd). Also if I restore to the gluster *fuse* mount it works ok.
>
> Have tested with:
>
> performance.stat-prefetch: off
> performance.strict-write-ordering: on
>
> still same problem.
>
> This is with sharded storage - as soon as I have some time I'll test with
> non-sharded storage.
>
> Error is bolded at the end of the logging
>
> restore vma archive: lzop -d -c
> /mnt/nas-backups-smb/dump/vzdump-qemu-910-2016_04_23-08_22_07.vma.lzo|vma
> extract -v -r /var/tmp/vzdumptmp8999.fifo - /var/tmp/vzdumptmp8999
> CFG: size: 391 name: qemu-server.conf
> DEV: dev_id=1 size: 34359738368 devname: drive-scsi0
> CTIME: Sat Apr 23 08:22:11 2016
> [2016-04-27 07:02:21.688878] I [MSGID: 104045] [glfs-master.c:95:notify]
> 0-gfapi: New graph 766e622d-3930-3033-2d32-3031362d3034 (0) coming up
> [2016-04-27 07:02:21.688921] I [MSGID: 114020] [client.c:2106:notify]
> 0-datastore4-client-0: parent translators are ready, attempting connect on
> transport
> [2016-04-27 07:02:21.689530] I [MSGID: 114020] [client.c:2106:notify]
> 0-datastore4-client-1: parent translators are ready, attempting connect on
> transport
> [2016-04-27 07:02:21.689793] I [rpc-clnt.c:1868:rpc_clnt_reconfig]
> 0-datastore4-client-0: changing port to 49156 (from 0)
> [2016-04-27 07:02:21.689951] I [MSGID: 114020] [client.c:2106:notify]
> 0-datastore4-client-2: parent translators are ready, attempting connect on
> transport
> [2016-04-27 07:02:21.690608] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-datastore4-client-0: Using Program GlusterFS 3.3, Num (1298437), Version
> (330)
> [2016-04-27 07:02:21.690977] I [rpc-clnt.c:1868:rpc_clnt_reconfig]
> 0-datastore4-client-1: changing port to 49155 (from 0)
> [2016-04-27 07:02:21.691032] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-datastore4-client-0:
> Connected to datastore4-client-0, attached to remote volume
> '/tank/vmdata/datastore4'.
> [2016-04-27 07:02:21.691068] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-datastore4-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
> [2016-04-27 07:02:21.691148] I [MSGID: 108005]
> [afr-common.c:4007:afr_notify] 0-datastore4-replicate-0: Subvolume
> 'datastore4-client-0' came back up; going online.
> [2016-04-27 07:02:21.691235] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-datastore4-client-0:
> Server lk version = 1
> [2016-04-27 07:02:21.691430] I [rpc-clnt.c:1868:rpc_clnt_reconfig]
> 0-datastore4-client-2: changing port to 49155 (from 0)
> [2016-04-27 07:02:21.691867] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-datastore4-client-1: Using Program GlusterFS 3.3, Num (1298437), Version
> (330)
> [2016-04-27 07:02:21.692350] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-datastore4-client-1:
> Connected to datastore4-client-1, attached to remote volume
> '/tank/vmdata/datastore4'.
> [2016-04-27 07:02:21.692369] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-datastore4-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
> [2016-04-27 07:02:21.692474] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-datastore4-client-2: Using Program GlusterFS 3.3, Num (1298437), Version
> (330)
> [2016-04-27 07:02:21.692590] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-datastore4-client-1:
> Server lk version = 1
> [2016-04-27 07:02:21.692926] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-datastore4-client-2:
> Connected to datastore4-client-2, attached to remote volume
> '/tank/vmdata/datastore4'.
> [2016-04-27 07:02:21.692942] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-datastore4-client-2:
> Server and Client lk-version numbers are not same, reopening the fds
> [2016-04-27 07:02:21.708641] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-datastore4-client-2:
> Server lk version = 1
> [2016-04-27 07:02:21.709796] I [MSGID: 108031]
> [afr-common.c:1900:afr_local_discovery_cbk] 0-datastore4-replicate-0:
> selecting local read_child datastore4-client-0
> [2016-04-27 07:02:21.710401] I [MSGID: 104041]
> [glfs-resolve.c:869:__glfs_active_subvol] 0-datastore4: switched to graph
> 766e622d-3930-3033-2d32-3031362d3034 (0)
> [2016-04-27 07:02:21.828061] I [MSGID: 114021] [client.c:2115:notify]
> 0-datastore4-client-0: 

[Gluster-users] "gluster volume heal full" locking all files after adding a brick

2016-04-27 Thread Tomer Paretsky
Hi all

I am currently running two replica 3 volumes acting as storage for VM
images.
Due to some issues with glusterfs over an ext4 filesystem (kernel panics), I
tried removing one of the bricks from each volume from a single server, and
then re-adding them after re-formatting the underlying partition to xfs, on
only one of the hosts for testing purposes.

the commands used were:

1) gluster volume remove-brick gv1 replica 2  :/storage/gv1/brk
force
2) gluster volume remove-brick gv2 replica 2 :/storage/gv2/brk
force

3) reformatted /storage/gv1 and /storage/gv2 to xfs (these are the
local/physical mountpoints of the gluster bricks)

4) gluster volume add-brick gv1 replica 3 :/storage/gv1/brk
5) gluster volume add-brick gv2 replica 3 :/storage/gv2/brk

So far, so good -- both bricks were successfully re-added to the volume.

6) gluster volume heal gv1 full
7) gluster volume heal gv2 full

The heal operation started and I can see files being replicated into the
newly added bricks, BUT all the files on the two nodes which were not
touched are now locked (read-only), I presume until the heal operation
finishes and replicates all the files to the newly added bricks (which
might take a while..)
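In case it matters, the list of entries still pending heal can be checked with:

    gluster volume heal gv1 info
    gluster volume heal gv2 info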

Now, as far as I understood the documentation of the healing process, the
files should not have been locked at all. Or am I missing something
fundamental here?

Is there a way to prevent locking of the source files during a heal full
operation?

Is there a better way to perform the process I just described?

Your help is enormously appreciated,
Cheers,
Tomer Paretsky
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] MInutes of Gluster Community Bug Triage meeting at 12:00 UTC on 26th April 2016

2016-04-27 Thread M S Vishwanath Bhat
On 27 April 2016 at 16:49, Jiffin Tony Thottan  wrote:

> Hi all,
>
> Minutes:
> https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.html
> Minutes (text):
> https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.txt
> Log:
> https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.log.html
>
>
> Meeting summary
> ---
> * agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (jiffin,
>   12:11:39)
> * Roll call  (jiffin, 12:12:07)
>
> * msvbhat  will look into lalatenduM's automated Coverity setup in
>   Jenkins   which need assistance from an admin with more permissions
>   (jiffin, 12:18:13)
>   * ACTION: msvbhat  will look into lalatenduM's automated Coverity
> setup in   Jenkins   which need assistance from an admin with more
> permissions  (jiffin, 12:21:04)
>
> * ndevos need to decide on how to provide/use debug builds (jiffin,
>   12:21:18)
>   * ACTION: Manikandan to followup with kashlm to get access to
> gluster-infra  (jiffin, 12:24:18)
>   * ACTION: Manikandan and Nandaja will update on bug automation
> (jiffin, 12:24:30)
>
> * msvbhat  provide a simple step/walk-through on how to provide
>   testcases for the nightly rpm tests  (jiffin, 12:25:09)
>

I have already added how to write test cases here. This was
completed the last time I attended the meeting (which was a month ago).

Best Regards,
Vishwanath


>   * ACTION: msvbhat  provide a simple step/walk-through on how to
> provide testcases for the nightly rpm tests  (jiffin, 12:27:00)
>
> * rafi needs to followup on #bug 1323895  (jiffin, 12:27:15)
>
> * ndevos need to decide on how to provide/use debug builds (jiffin,
>   12:30:44)
>   * ACTION: ndevos need to decide on how to provide/use debug builds
> (jiffin, 12:32:09)
>   * ACTION: ndevos to propose some test-cases for minimal libgfapi test
> (jiffin, 12:32:21)
>   * ACTION: ndevos need to discuss about writing a script to update bug
> assignee from gerrit patch  (jiffin, 12:32:31)
>
> * Group triage  (jiffin, 12:33:07)
>
> * openfloor  (jiffin, 12:52:52)
>
> * gluster bug triage meeting schedule May 2016  (jiffin, 12:55:33)
>   * ACTION: hgowtham will host meeting on 03/05/2016  (jiffin, 12:56:18)
>   * ACTION: Saravanakmr will host meeting on 24/05/2016  (jiffin,
> 12:56:49)
>   * ACTION: kkeithley_ will host meeting on 10/05/2016  (jiffin,
> 13:00:13)
>   * ACTION: jiffin will host meeting on 17/05/2016  (jiffin, 13:00:28)
>
> Meeting ended at 13:01:34 UTC.
>
>
>
>
> Action Items
> 
> * msvbhat  will look into lalatenduM's automated Coverity setup in
>   Jenkins   which need assistance from an admin with more permissions
> * Manikandan to followup with kashlm to get access to gluster-infra
> * Manikandan and Nandaja will update on bug automation
> * msvbhat  provide a simple step/walk-through on how to provide
>   testcases for the nightly rpm tests
> * ndevos need to decide on how to provide/use debug builds
> * ndevos to propose some test-cases for minimal libgfapi test
> * ndevos need to discuss about writing a script to update bug assignee
>   from gerrit patch
> * hgowtham will host meeting on 03/05/2016
> * Saravanakmr will host meeting on 24/05/2016
> * kkeithley_ will host meeting on 10/05/2016
> * jiffin will host meeting on 17/05/2016
>
> People Present (lines said)
> ---
> * jiffin (87)
> * rafi1 (21)
> * ndevos (10)
> * hgowtham (8)
> * kkeithley_ (6)
> * Saravanakmr (6)
> * Manikandan (5)
> * zodbot (3)
> * post-factum (2)
> * lalatenduM (1)
> * glusterbot (1)
>
>
> Cheers,
>
> Jiffin
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Memory leak in 3.6.9

2016-04-27 Thread Alessandro Ipe
OK, great... Any plan to backport those important fixes to the 3.6 branch?
Because I am not ready to upgrade to the 3.7 branch for a production system.
My fear is that 3.7 will bring other new issues, and all I want is a stable
and reliable branch without extra new functionalities (and new bugs) that
will just work under normal use.
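In the meantime, if statedumps of the brick processes would help to pin down
the leak, I can capture them, e.g. with:

    gluster volume statedump home

(the dumps should end up under /var/run/gluster by default, as far as I
understand).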


Thanks,


A.


On Wednesday 27 April 2016 09:58:00 Tim wrote:



There have been a lot of fixes since 3.6.9. Specifically,
https://bugzilla.redhat.com/1311377 [1] was fixed in 3.7.9. See
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.9.md [2]

  
Hi,  
   
   
Apparently, version 3.6.9 is suffering from a SERIOUS memory leak, as
illustrated in the following logs:
2016-04-26T11:54:27.971564+00:00 tsunami1 kernel:[698635.210069] 
glusterfsd invoked oom-killer: gfp_mask=0x201da,order=0, 
oom_score_adj=0  
2016-04-26T11:54:27.974133+00:00 tsunami1 kernel:[698635.210076] Pid: 
28111, comm: glusterfsd Tainted: G W O3.7.10-1.1-desktop #1  
2016-04-26T11:54:27.974136+00:00 tsunami1 kernel:[698635.210077] Call 
Trace:  
2016-04-26T11:54:27.974137+00:00 tsunami1 kernel:[698635.210090] 
[] dump_trace+0x88/0x300  
2016-04-26T11:54:27.974137+00:00 tsunami1 kernel:[698635.210096] 
[] dump_stack+0x69/0x6f  
2016-04-26T11:54:27.974138+00:00 tsunami1 kernel:[698635.210101] 
[]dump_header+0x70/0x200  
2016-04-26T11:54:27.974139+00:00 tsunami1 kernel:[698635.210105] 
[]oom_kill_process+0x244/0x390  
2016-04-26T11:54:28.113125+00:00 tsunami1 kernel:[698635.210111] 
[]out_of_memory+0x451/0x490  
2016-04-26T11:54:28.113142+00:00 tsunami1 kernel:[698635.210116] 
[]__alloc_pages_nodemask+0x8ae/0x9f0  
2016-04-26T11:54:28.113143+00:00 tsunami1 kernel:[698635.210122] 
[]alloc_pages_current+0xb7/0x130  
2016-04-26T11:54:28.113144+00:00 tsunami1 kernel:[698635.210127] 
[]filemap_fault+0x283/0x440  
2016-04-26T11:54:28.113144+00:00 tsunami1 kernel:[698635.210131] 
[] __do_fault+0x6e/0x560  
2016-04-26T11:54:28.113145+00:00 tsunami1 kernel:[698635.210136] 
[]handle_pte_fault+0x97/0x490  
2016-04-26T11:54:28.113145+00:00 tsunami1 kernel:[698635.210141] 
[]__do_page_fault+0x16b/0x4c0  
2016-04-26T11:54:28.113562+00:00 tsunami1 kernel:[698635.210145] 
[] page_fault+0x28/0x30  
2016-04-26T11:54:28.113565+00:00 tsunami1 kernel:[698635.210158] 
[<7fa9d8a8292b>] 0x7fa9d8a8292a  
2016-04-26T11:54:28.120811+00:00 tsunami1 kernel:[698635.226243] Out of
memory: Kill process 17144 (glusterfsd) score 694 or sacrifice child
2016-04-26T11:54:28.120811+00:00 tsunami1 kernel:[698635.226251] Killed
process 17144 (glusterfsd) total-vm:8956384kB, anon-rss:6670900kB,
file-rss:0kB
   
It makes this version completely useless in production. Brick servers
have 8 GB of RAM (but will be upgraded to 16 GB).
   
gluster volume info  returns:  
Volume Name: home  
Type: Distributed-Replicate  
Volume ID: 501741ed-4146-4022-af0b-41f5b1297766  
Status: Started  
Number of Bricks: 14 x 2 = 28  
Transport-type: tcp  
Bricks:  
Brick1: tsunami1:/data/glusterfs/home/brick1  
Brick2: tsunami2:/data/glusterfs/home/brick1  
Brick3: tsunami1:/data/glusterfs/home/brick2  
Brick4: tsunami2:/data/glusterfs/home/brick2  
Brick5: tsunami1:/data/glusterfs/home/brick3  
Brick6: tsunami2:/data/glusterfs/home/brick3  
Brick7: tsunami1:/data/glusterfs/home/brick4  
Brick8: tsunami2:/data/glusterfs/home/brick4  
Brick9: tsunami3:/data/glusterfs/home/brick1  
Brick10: tsunami4:/data/glusterfs/home/brick1  
Brick11: tsunami3:/data/glusterfs/home/brick2  
Brick12: tsunami4:/data/glusterfs/home/brick2  
Brick13: tsunami3:/data/glusterfs/home/brick3  
Brick14: tsunami4:/data/glusterfs/home/brick3  
Brick15: tsunami3:/data/glusterfs/home/brick4  
Brick16: tsunami4:/data/glusterfs/home/brick4  
Brick17: tsunami5:/data/glusterfs/home/brick1  
Brick18: tsunami6:/data/glusterfs/home/brick1  
Brick19: tsunami5:/data/glusterfs/home/brick2  
Brick20: tsunami6:/data/glusterfs/home/brick2  
Brick21: tsunami5:/data/glusterfs/home/brick3  
Brick22: tsunami6:/data/glusterfs/home/brick3  
Brick23: tsunami5:/data/glusterfs/home/brick4  
Brick24: tsunami6:/data/glusterfs/home/brick4  
Brick25: tsunami7:/data/glusterfs/home/brick1  
Brick26: tsunami8:/data/glusterfs/home/brick1  
Brick27: tsunami7:/data/glusterfs/home/brick2  
Brick28: tsunami8:/data/glusterfs/home/brick2  
Options Reconfigured:  
nfs.export-dir: /gerb-reproc/Archive  
nfs.volume-access: read-only  

Re: [Gluster-users] [Gluster-devel] How to enable ACL support in Glusterfs volume

2016-04-27 Thread ABHISHEK PALIWAL
Hi Niels,

Thanks for reply.

I am trying to use Gluster NFS, but even in this case I am facing the same
issue.

Could you please suggest what I need to do and how I can set up Gluster NFS.
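From the documentation, my understanding is that it would be roughly the
following, but please correct me if I am wrong:

    gluster volume set c_glusterfs nfs.disable off
    gluster volume set c_glusterfs nfs.acl on
    # and then from the client:
    mount -t nfs -o vers=3,acl 10.32.0.48:/c_glusterfs /tmp/e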

Regards,
Abhishek

On Wed, Apr 27, 2016 at 1:53 PM, Niels de Vos  wrote:

> On Tue, Apr 26, 2016 at 08:23:15PM +0530, ABHISHEK PALIWAL wrote:
> > On Tue, Apr 26, 2016 at 8:06 PM, Niels de Vos  wrote:
> >
> > > On Tue, Apr 26, 2016 at 07:46:03PM +0530, ABHISHEK PALIWAL wrote:
> > > > On Tue, Apr 26, 2016 at 7:06 PM, Niels de Vos 
> wrote:
> > > >
> > > > > On Tue, Apr 26, 2016 at 06:45:59PM +0530, ABHISHEK PALIWAL wrote:
> > > > > > On Tue, Apr 26, 2016 at 6:37 PM, Niels de Vos  >
> > > wrote:
> > > > > >
> > > > > > > On Tue, Apr 26, 2016 at 12:11:06PM +0530, ABHISHEK PALIWAL
> wrote:
> > > > > > > >  Hi,
> > > > > > > >
> > > > > > > > I want to enable ACL support on gluster volume using the
> kernel
> > > NFS
> > > > > ACL
> > > > > > > > support so I have followed below steps after creation of
> gluster
> > > > > volume:
> > > > > > > >
> > > > > > > > 1. mount -t glusterfs -o acl 10.32.0.48:/c_glusterfs /tmp/a2
> > > > > > > >
> > > > > > > > 2.   update the /etc/exports file
> > > > > > > > /tmp/a2
> > > 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)
> > > > > > > >
> > > > > > > > 3.   exportfs –ra
> > > > > > > >
> > > > > > > > 4.   gluster volume set c_glusterfs nfs.acl off
> > > > > > > >
> > > > > > > > 5.   gluster volume set c_glusterfs nfs.disable on
> > > > > > > >
> > > > > > > > we have disabled above two options because we are using
> Kernel
> > > NFS
> > > > > ACL
> > > > > > > > support and that is already enabled.
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > on other board mounting it using
> > > > > > > >
> > > > > > > > mount -t nfs -o acl,vers=3 10.32.0.48:/tmp/a2 /tmp/e/
> > > > > > > >
> > > > > > > > setfacl -m u:application:rw /tmp/e/usr
> > > > > > > > setfacl: /tmp/e/usr: Operation not supported
> > > > > > >
> > > > > > > Have you tried to set/getfacl on the Gluster FUSE mountpoint
> > > (/tmp/a2)
> > > > > > > too? Depending on the filesystem that you use on the bricks,
> you
> > > may
> > > > > > > need to mount with "-o acl" there as well. Try to set/get an
> ACL
> > > on all
> > > > > > > of these different levels to see where is starts to fail.
> > > > > > >
> > > > > > Yes, you can check I have already given -o acl on /tmp/a2 as
> well as
> > > > > below
> > > > >
> > > > > Sorry, that is not what I meant. The bricks that provide the
> > > c_glusterfs
> > > > > volume need to support and have ACLs enabled as well. If you use
> XFS,
> > > it
> > > > > should be enabled by default. But some other filesystems do not do
> > > that.
> > > > >
> > > > > You have three different mountpoints:
> > > > >
> > > > >  - /tmp/e: nfs
> > > > >  - /tmp/a2: Gluster FUSE
> > > > >  - whatever you use as bricks for c_glusterfs: XFS or something
> else?
> > > > >
> > > >
> > > > I have following volume info
> > > >
> > > > Volume Name: c_glusterfs
> > > > Type: Replicate
> > > > Volume ID: 5be1524c-21ae-47d5-970a-d4920fca39cf
> > > > Status: Started
> > > > Number of Bricks: 1 x 2 = 2
> > > > Transport-type: tcp
> > > > Bricks:
> > > > Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
> > > > Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
> > > > Options Reconfigured:
> > > > nfs.acl: off
> > > > nfs.disable: on
> > > > network.ping-timeout: 4
> > > > performance.readdir-ahead: on
> > > >
> > > > now according to you /opt/lvmdir/c2/brick should support ACL option
> or
> > > > /opt/lvmdir/c2 ? if /opt/lvmdir/
> > > > c2 then we are mounting it as below
> > > >
> > > >  mount -o acl /dev/cpplvm_vg/vol2  /opt/lvmdir//c2
> > >
> > > If /opt/lvmdir/c2 is the mountpoint, then make sure that a test-file
> > > like /opt/lvmdir/c2/test-acl can have ACLs. It may require mounting
> > > /opt/lvmdir/c2 with the "-o acl" option, but that depends on the
> > > filesystem.
> > >
> > > Also try to create a test-file on /tmp/a2 and check of ACLs work on the
> > > Gluster FUSE mountpoint.
> > >
> > > If these two filesystems support ACLs, I do not see a problem why the
> > > kernel NFS server can not use them.
> > >
> > > > I have one more question : we are using logical volume here for
> glusterfs
> > > > so it should not create any issue in ACL support?
> > >
> > > No, that should not matter.
> > >
> >
> > it is working fine locally means at 10.32.0.48 but when I am exporting it
> > using /etc/exportfs file
> > like
> > */tmp/a2 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=10) *
> > and then run *exportfs -ra* command to export it on other board.
> >
> > and trying to mount it on other board
> >
> > like
> >
> > *mount -t nfs -o acl 10.32.0.48:/tmp/a2 /mnt/glust*
> >
> > and then run setfacl
> >
> > *setfacl -m u:application:r /mnt/glust/usr*
> > *setfacl: /mnt/glust/usr: Operation not supported 

Re: [Gluster-users] Does Gluster NFS mount support locking?

2016-04-27 Thread Kaleb S. KEITHLEY
On 04/27/2016 05:37 AM, Chen Chen wrote:
> Hi everyone,
> 
> Does Gluster NFS mount support file locking?
> 
> My program blocked at "flock(3, LOCK_EX"
> "rpcinfo -p" said nlockmgr is running
> firewalld (of both client and server) already disabled
> mounted with "local_lock=none"
> 

NFS (and Gluster NFS, Gluster Native) _only_ supports POSIX advisory
locking, i.e. fcntl(fd, F_SETLK, ...);

-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] MInutes of Gluster Community Bug Triage meeting at 12:00 UTC on 26th April 2016

2016-04-27 Thread Jiffin Tony Thottan

Hi all,

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-26/gluster_bug_triage_meeting.2016-04-26-12.11.log.html



Meeting summary
---
* agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (jiffin,
  12:11:39)
* Roll call  (jiffin, 12:12:07)

* msvbhat  will look into lalatenduM's automated Coverity setup in
  Jenkins   which need assistance from an admin with more permissions
  (jiffin, 12:18:13)
  * ACTION: msvbhat  will look into lalatenduM's automated Coverity
setup in   Jenkins   which need assistance from an admin with more
permissions  (jiffin, 12:21:04)

* ndevos need to decide on how to provide/use debug builds (jiffin,
  12:21:18)
  * ACTION: Manikandan to followup with kashlm to get access to
gluster-infra  (jiffin, 12:24:18)
  * ACTION: Manikandan and Nandaja will update on bug automation
(jiffin, 12:24:30)

* msvbhat  provide a simple step/walk-through on how to provide
  testcases for the nightly rpm tests  (jiffin, 12:25:09)
  * ACTION: msvbhat  provide a simple step/walk-through on how to
provide testcases for the nightly rpm tests  (jiffin, 12:27:00)

* rafi needs to followup on #bug 1323895  (jiffin, 12:27:15)

* ndevos need to decide on how to provide/use debug builds (jiffin,
  12:30:44)
  * ACTION: ndevos need to decide on how to provide/use debug builds
(jiffin, 12:32:09)
  * ACTION: ndevos to propose some test-cases for minimal libgfapi test
(jiffin, 12:32:21)
  * ACTION: ndevos need to discuss about writing a script to update bug
assignee from gerrit patch  (jiffin, 12:32:31)

* Group triage  (jiffin, 12:33:07)

* openfloor  (jiffin, 12:52:52)

* gluster bug triage meeting schedule May 2016  (jiffin, 12:55:33)
  * ACTION: hgowtham will host meeting on 03/05/2016  (jiffin, 12:56:18)
  * ACTION: Saravanakmr will host meeting on 24/05/2016  (jiffin,
12:56:49)
  * ACTION: kkeithley_ will host meeting on 10/05/2016  (jiffin,
13:00:13)
  * ACTION: jiffin will host meeting on 17/05/2016  (jiffin, 13:00:28)

Meeting ended at 13:01:34 UTC.




Action Items

* msvbhat  will look into lalatenduM's automated Coverity setup in
  Jenkins   which need assistance from an admin with more permissions
* Manikandan to followup with kashlm to get access to gluster-infra
* Manikandan and Nandaja will update on bug automation
* msvbhat  provide a simple step/walk-through on how to provide
  testcases for the nightly rpm tests
* ndevos need to decide on how to provide/use debug builds
* ndevos to propose some test-cases for minimal libgfapi test
* ndevos need to discuss about writing a script to update bug assignee
  from gerrit patch
* hgowtham will host meeting on 03/05/2016
* Saravanakmr will host meeting on 24/05/2016
* kkeithley_ will host meeting on 10/05/2016
* jiffin will host meeting on 17/05/2016

People Present (lines said)
---
* jiffin (87)
* rafi1 (21)
* ndevos (10)
* hgowtham (8)
* kkeithley_ (6)
* Saravanakmr (6)
* Manikandan (5)
* zodbot (3)
* post-factum (2)
* lalatenduM (1)
* glusterbot (1)


Cheers,

Jiffin

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Fwd: dht_is_subvol_filled messages on client

2016-04-27 Thread Serkan Çoban
Hi, can someone give me a clue about below problem?
Thanks,
Serkan


-- Forwarded message --
From: Serkan Çoban 
Date: Tue, Apr 26, 2016 at 3:33 PM
Subject: dht_is_subvol_filled messages on client
To: Gluster Users 


Hi,

I am getting [dht-diskusage.c:277:dht_is_subvol_filled] 0-v0-dht:
inodes on subvolume 'v0-disperse-56' are at (100.00 %), consider
adding more bricks.

message on client logs. My cluster is empty; there are only a couple of
GB of files for testing. Why does this message appear in syslog? Is it safe
to ignore it?

Serkan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Does Gluster NFS mount support locking?

2016-04-27 Thread Chen Chen

Hi everyone,

Does Gluster NFS mount support file locking?

My program blocked at "flock(3, LOCK_EX"
"rpcinfo -p" said nlockmgr is running
firewalld (of both client and server) already disabled
mounted with "local_lock=none"

Many thanks,
Chen

--
Chen Chen
Shanghai SmartQuerier Biotechnology Co., Ltd.
Add: 3F, 1278 Keyuan Road, Shanghai 201203, P. R. China
Mob: +86 15221885893
Email: chenc...@smartquerier.com
Web: www.smartquerier.com



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-infra] [ovirt-users] [Attention needed] GlusterFS repository down - affects CI / Installations

2016-04-27 Thread Niels de Vos
On Wed, Apr 27, 2016 at 02:30:57PM +0530, Ravishankar N wrote:
> @gluster infra  - FYI.
> 
> On 04/27/2016 02:20 PM, Nadav Goldin wrote:
> >Hi,
> >The GlusterFS repository became unavailable this morning, as a result all
> >Jenkins jobs that use the repository will fail, the common error would be:
> >
> >
> > http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/noarch/repodata/repomd.xml:
> >[Errno 14] HTTP Error 403 - Forbidden
> >
> >
> >Also, installations of oVirt will fail.

I thought oVirt moved to using the packages from the CentOS Storage SIG?
In any case, automated tests should probably use those instead of the
packages on download.gluster.org. We're trying to minimize the work
packagers need to do, and get the glusterfs and other components in the
repositories that are provided by different distributions.

For more details, see the quickstart for the Storage SIG here:
  https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
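(On CentOS 7 that should boil down to something like

    yum install centos-release-gluster
    yum install glusterfs-server

but see the quickstart for the details.)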

HTH,
Niels


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [ovirt-users] [Attention needed] GlusterFS repository down - affects CI / Installations

2016-04-27 Thread Ravishankar N

@gluster infra  - FYI.

On 04/27/2016 02:20 PM, Nadav Goldin wrote:

Hi,
The GlusterFS repository became unavailable this morning; as a result,
all Jenkins jobs that use the repository will fail. The common error
would be:



http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/noarch/repodata/repomd.xml:
[Errno 14] HTTP Error 403 - Forbidden


Also, installations of oVirt will fail.

We are working on a solution and will update asap.

Nadav.



___
Users mailing list
us...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] How to enable ACL support in Glusterfs volume

2016-04-27 Thread Niels de Vos
On Tue, Apr 26, 2016 at 08:23:15PM +0530, ABHISHEK PALIWAL wrote:
> On Tue, Apr 26, 2016 at 8:06 PM, Niels de Vos  wrote:
> 
> > On Tue, Apr 26, 2016 at 07:46:03PM +0530, ABHISHEK PALIWAL wrote:
> > > On Tue, Apr 26, 2016 at 7:06 PM, Niels de Vos  wrote:
> > >
> > > > On Tue, Apr 26, 2016 at 06:45:59PM +0530, ABHISHEK PALIWAL wrote:
> > > > > On Tue, Apr 26, 2016 at 6:37 PM, Niels de Vos 
> > wrote:
> > > > >
> > > > > > On Tue, Apr 26, 2016 at 12:11:06PM +0530, ABHISHEK PALIWAL wrote:
> > > > > > >  Hi,
> > > > > > >
> > > > > > > I want to enable ACL support on gluster volume using the kernel
> > NFS
> > > > ACL
> > > > > > > support so I have followed below steps after creation of gluster
> > > > volume:
> > > > > > >
> > > > > > > 1. mount -t glusterfs -o acl 10.32.0.48:/c_glusterfs /tmp/a2
> > > > > > >
> > > > > > > 2.   update the /etc/exports file
> > > > > > > /tmp/a2
> > 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=14)
> > > > > > >
> > > > > > > 3.   exportfs –ra
> > > > > > >
> > > > > > > 4.   gluster volume set c_glusterfs nfs.acl off
> > > > > > >
> > > > > > > 5.   gluster volume set c_glusterfs nfs.disable on
> > > > > > >
> > > > > > > we have disabled above two options because we are using Kernel
> > NFS
> > > > ACL
> > > > > > > support and that is already enabled.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > on other board mounting it using
> > > > > > >
> > > > > > > mount -t nfs -o acl,vers=3 10.32.0.48:/tmp/a2 /tmp/e/
> > > > > > >
> > > > > > > setfacl -m u:application:rw /tmp/e/usr
> > > > > > > setfacl: /tmp/e/usr: Operation not supported
> > > > > >
> > > > > > Have you tried to set/getfacl on the Gluster FUSE mountpoint
> > (/tmp/a2)
> > > > > > too? Depending on the filesystem that you use on the bricks, you
> > may
> > > > > > need to mount with "-o acl" there as well. Try to set/get an ACL
> > on all
> > > > > > of these different levels to see where is starts to fail.
> > > > > >
> > > > > Yes, you can check I have already given -o acl on /tmp/a2 as well as
> > > > below
> > > >
> > > > Sorry, that is not what I meant. The bricks that provide the
> > c_glusterfs
> > > > volume need to support and have ACLs enabled as well. If you use XFS,
> > it
> > > > should be enabled by default. But some other filesystems do not do
> > that.
> > > >
> > > > You have three different mountpoints:
> > > >
> > > >  - /tmp/e: nfs
> > > >  - /tmp/a2: Gluster FUSE
> > > >  - whatever you use as bricks for c_glusterfs: XFS or something else?
> > > >
> > >
> > > I have following volume info
> > >
> > > Volume Name: c_glusterfs
> > > Type: Replicate
> > > Volume ID: 5be1524c-21ae-47d5-970a-d4920fca39cf
> > > Status: Started
> > > Number of Bricks: 1 x 2 = 2
> > > Transport-type: tcp
> > > Bricks:
> > > Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
> > > Brick2: 10.32.1.144:/opt/lvmdir/c2/brick
> > > Options Reconfigured:
> > > nfs.acl: off
> > > nfs.disable: on
> > > network.ping-timeout: 4
> > > performance.readdir-ahead: on
> > >
> > > now according to you /opt/lvmdir/c2/brick should support ACL option or
> > > /opt/lvmdir/c2 ? if /opt/lvmdir/
> > > c2 then we are mounting it as below
> > >
> > >  mount -o acl /dev/cpplvm_vg/vol2  /opt/lvmdir//c2
> >
> > If /opt/lvmdir/c2 is the mountpoint, then make sure that a test-file
> > like /opt/lvmdir/c2/test-acl can have ACLs. It may require mounting
> > /opt/lvmdir/c2 with the "-o acl" option, but that depends on the
> > filesystem.
> >
> > Also try to create a test-file on /tmp/a2 and check of ACLs work on the
> > Gluster FUSE mountpoint.
> >
> > If these two filesystems support ACLs, I do not see a problem why the
> > kernel NFS server can not use them.
> >
> > > I have one more question : we are using logical volume here for glusterfs
> > > so it should not create any issue in ACL support?
> >
> > No, that should not matter.
> >
> 
> it is working fine locally means at 10.32.0.48 but when I am exporting it
> using /etc/exportfs file
> like
> */tmp/a2 10.32.*(rw,acl,sync,no_subtree_check,no_root_squash,fsid=10) *
> and then run *exportfs -ra* command to export it on other board.
> 
> and trying to mount it on other board
> 
> like
> 
> *mount -t nfs -o acl 10.32.0.48:/tmp/a2 /mnt/glust*
> 
> and then run setfacl
> 
> *setfacl -m u:application:r /mnt/glust/usr*
> *setfacl: /mnt/glust/usr: Operation not supported *//Reporting this error

If ACLs work on /opt/lvmdir/c2 and /tmp/a2, then at least on the Gluster
and FUSE side all seems to be fine. You would need to check with the
kernel NFS server people to figure out why the mounted Gluster volume
cannot use ACLs through knfsd.
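To be explicit, the quick check I had in mind is something like this on the
server, using your paths (the user name is just an example):

    touch /opt/lvmdir/c2/test-acl /tmp/a2/test-acl
    setfacl -m u:application:rw /opt/lvmdir/c2/test-acl
    getfacl /opt/lvmdir/c2/test-acl
    setfacl -m u:application:rw /tmp/a2/test-acl
    getfacl /tmp/a2/test-acl

If both setfacl calls succeed, the brick filesystem and the FUSE mount handle
ACLs fine.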

Note that we really recommend using Gluster/NFS or NFS-Ganesha with
Gluster. We do not test exporting a FUSE-mounted Gluster volume through
knfsd at all, and I am not aware of anyone using this combination in
their production environment.

Cheers,
Niels


> 
> >
> > Cheers,
> > Niels
> >
> 

[Gluster-users] File size error when restoring backups causes restore failure.

2016-04-27 Thread Lindsay Mathieson
I'm getting the following file size error when restoring proxmox qemu
backups via gfapi. I don't think the issue is with proxmox as I have tested
the same restore process with other storages with no problem(nfs, cephfs,
ceph rbd). Also if I restore to the gluster *fuse* mount it works ok.

Have tested with:

performance.stat-prefetch: off
performance.strict-write-ordering: on

still same problem.
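(Those options were set the usual way, i.e.:

    gluster volume set datastore4 performance.stat-prefetch off
    gluster volume set datastore4 performance.strict-write-ordering on

with datastore4 being the volume in question.)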

This is with sharded storage - as soon as I have some time I'll test with
non-sharded storage.

Error is bolded at the end of the logging

restore vma archive: lzop -d -c
/mnt/nas-backups-smb/dump/vzdump-qemu-910-2016_04_23-08_22_07.vma.lzo|vma
extract -v -r /var/tmp/vzdumptmp8999.fifo - /var/tmp/vzdumptmp8999
CFG: size: 391 name: qemu-server.conf
DEV: dev_id=1 size: 34359738368 devname: drive-scsi0
CTIME: Sat Apr 23 08:22:11 2016
[2016-04-27 07:02:21.688878] I [MSGID: 104045] [glfs-master.c:95:notify]
0-gfapi: New graph 766e622d-3930-3033-2d32-3031362d3034 (0) coming up
[2016-04-27 07:02:21.688921] I [MSGID: 114020] [client.c:2106:notify]
0-datastore4-client-0: parent translators are ready, attempting connect on
transport
[2016-04-27 07:02:21.689530] I [MSGID: 114020] [client.c:2106:notify]
0-datastore4-client-1: parent translators are ready, attempting connect on
transport
[2016-04-27 07:02:21.689793] I [rpc-clnt.c:1868:rpc_clnt_reconfig]
0-datastore4-client-0: changing port to 49156 (from 0)
[2016-04-27 07:02:21.689951] I [MSGID: 114020] [client.c:2106:notify]
0-datastore4-client-2: parent translators are ready, attempting connect on
transport
[2016-04-27 07:02:21.690608] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-datastore4-client-0: Using Program GlusterFS 3.3, Num (1298437), Version
(330)
[2016-04-27 07:02:21.690977] I [rpc-clnt.c:1868:rpc_clnt_reconfig]
0-datastore4-client-1: changing port to 49155 (from 0)
[2016-04-27 07:02:21.691032] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-datastore4-client-0:
Connected to datastore4-client-0, attached to remote volume
'/tank/vmdata/datastore4'.
[2016-04-27 07:02:21.691068] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-datastore4-client-0:
Server and Client lk-version numbers are not same, reopening the fds
[2016-04-27 07:02:21.691148] I [MSGID: 108005]
[afr-common.c:4007:afr_notify] 0-datastore4-replicate-0: Subvolume
'datastore4-client-0' came back up; going online.
[2016-04-27 07:02:21.691235] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-datastore4-client-0:
Server lk version = 1
[2016-04-27 07:02:21.691430] I [rpc-clnt.c:1868:rpc_clnt_reconfig]
0-datastore4-client-2: changing port to 49155 (from 0)
[2016-04-27 07:02:21.691867] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-datastore4-client-1: Using Program GlusterFS 3.3, Num (1298437), Version
(330)
[2016-04-27 07:02:21.692350] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-datastore4-client-1:
Connected to datastore4-client-1, attached to remote volume
'/tank/vmdata/datastore4'.
[2016-04-27 07:02:21.692369] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-datastore4-client-1:
Server and Client lk-version numbers are not same, reopening the fds
[2016-04-27 07:02:21.692474] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-datastore4-client-2: Using Program GlusterFS 3.3, Num (1298437), Version
(330)
[2016-04-27 07:02:21.692590] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-datastore4-client-1:
Server lk version = 1
[2016-04-27 07:02:21.692926] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-datastore4-client-2:
Connected to datastore4-client-2, attached to remote volume
'/tank/vmdata/datastore4'.
[2016-04-27 07:02:21.692942] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-datastore4-client-2:
Server and Client lk-version numbers are not same, reopening the fds
[2016-04-27 07:02:21.708641] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-datastore4-client-2:
Server lk version = 1
[2016-04-27 07:02:21.709796] I [MSGID: 108031]
[afr-common.c:1900:afr_local_discovery_cbk] 0-datastore4-replicate-0:
selecting local read_child datastore4-client-0
[2016-04-27 07:02:21.710401] I [MSGID: 104041]
[glfs-resolve.c:869:__glfs_active_subvol] 0-datastore4: switched to graph
766e622d-3930-3033-2d32-3031362d3034 (0)
[2016-04-27 07:02:21.828061] I [MSGID: 114021] [client.c:2115:notify]
0-datastore4-client-0: current graph is no longer active, destroying
rpc_client
[2016-04-27 07:02:21.828125] I [MSGID: 114021] [client.c:2115:notify]
0-datastore4-client-1: current graph is no longer active, destroying
rpc_client
[2016-04-27 07:02:21.828140] I [MSGID: 114018]
[client.c:2030:client_rpc_notify] 0-datastore4-client-0: disconnected from
datastore4-client-0. Client process will keep trying to connect to glusterd
until brick's port is available
[2016-04-27 

[Gluster-users] GlusterFS writing file from Java using NIO

2016-04-27 Thread Patroklos Papapetrou
Hi all

I'm new to GlusterFS, so please forgive me if I'm using the wrong mailing
list or this question has already been answered in the past.
I have set up GlusterFS (server and client) on an Ubuntu instance and now
I'm trying to use the Java library to read and write files.

The Example.java works pretty fine, but when I try to write a big file
(actually, after some tests I realized that "big file" means > 8k) I get
the following exception:

Exception in thread "main" java.lang.IllegalArgumentException
at java.nio.Buffer.position(Buffer.java:244)
at
com.peircean.glusterfs.GlusterFileChannel.write(GlusterFileChannel.java:175)
at java.nio.channels.Channels.writeFullyImpl(Channels.java:78)
at java.nio.channels.Channels.writeFully(Channels.java:101)
at java.nio.channels.Channels.access$000(Channels.java:61)
at java.nio.channels.Channels$1.write(Channels.java:174)
at java.nio.file.Files.write(Files.java:3297)
at com.peircean.glusterfs.example.Example.main(Example.java:82)

​Which is caused because the bytes written are more than the buffer limit (
8192 ) . However the file is correctly written in Gluster.

So here are my questions:
1. Is there any known issue in the Java lib?
2. Should I use another way of writing "big" files?

BTW, trying to write the same file using the relative path of Gluster's
mounted volume is working without any issues.
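(The mounted volume here is just a normal glusterfs FUSE mount, along the
lines of

    mount -t glusterfs <server>:/<volname> /mnt/glustervol

with our actual server and volume name.)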

Thanks for your response.


-- 
Patroklos Papapetrou | Chief Software Architect
s: ppapapetrou
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users