Re: [Gluster-users] [ovirt-users] Gluster + ovirt + resize2fs

2016-06-01 Thread Sahina Bose

[+gluster-users]

On 06/01/2016 11:30 PM, Matt Wells wrote:

Apologies, it's XFS, so it would be an xfs_growfs.

On Wed, Jun 1, 2016 at 10:58 AM, Matt Wells wrote:


Hi everyone, I had a quick question that I really needed to bounce
off someone; one of those measure twice, cut once moments.

My primary datastore is on a gluster volume and the short story is
I'm going to grow it.  I've thought of two options:

1 - add a brick with the new space
** I was wondering, from the gluster point of view, if anyone had a
best practice for this. I've looked around and found many people
describing their experiences, but no definitive best practice.


2 - as I'm sitting atop LVM, grow the logical volume.
** This is the one that makes me a little nervous.  I've done many
resize2fs runs and never had issues, but I've never had gluster running
atop that volume and my VMs atop that.  Has anyone had any
experiences they could share? A rough sketch of both options is below.
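
For reference, the rough command sequences I have in mind look something
like this (the volume name, VG/LV and brick paths are placeholders, not my
real layout, and the replica count is just an example):

    # Option 1 - add a new replica set and rebalance
    # (bricks must be added in multiples of the replica count)
    gluster volume add-brick myvol replica 3 \
        node4:/bricks/brick1 node5:/bricks/brick1 node6:/bricks/brick1
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status

    # Option 2 - grow the LV under the existing brick, then grow XFS online
    lvextend -L +500G /dev/vg_gluster/lv_brick1
    xfs_growfs /bricks/brick1           # xfs_growfs takes the mount point
    gluster volume status myvol detail  # confirm gluster sees the new size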

Thanks all -
Wells


___
Users mailing list
us...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Small files performance

2016-06-01 Thread Gmail

> On Jun 1, 2016, at 1:41 PM, Gandalf Corvotempesta wrote:
> 
> On 1 Jun 2016 at 22:34, "Gmail" wrote:
> >
> >
> >> On Jun 1, 2016, at 1:25 PM, Gandalf Corvotempesta wrote:
> >> With NFS, is replication made directly by the gluster servers, with no
> >> client involved?
> >
> > correct
> 
> This is good. What I really don't like in gluster is the client doing all
> the replication. Replication and cluster management should be done directly
> by the servers, not by the clients; on the client side, the resources used
> for replication and cluster management could then be used for something
> else, like virtualization and so on.
> 
> > The NFS client talks to only one NFS server (the one it mounts); the NFS
> > HA setup is only there to fail a virtual IP over to another healthy node.
> > So the NFS client will just see 3 minor timeouts and then a major timeout,
> > and by the time that happens the virtual IP failover will already be done.
> 
> Isn't the same true for the native client? Even the native client has to
> wait for a timeout before changing the storage node, right?
> 
No, there is no timeout with the native client: the client knows how to talk
to all the nodes at the same time, so if a node goes down it is not a big
deal, it can still reach the others.
> What happens to a virtual machine writing to disk during this timeout?
If two out of three storage nodes acknowledged the writes (in the case of
replica 3), it will be OK; the node failure will not affect write performance.
But when two nodes go down, the third node will turn read-only (RO), as there
is no quorum.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Small files performance

2016-06-01 Thread Gandalf Corvotempesta
On 1 Jun 2016 at 22:34, "Gmail" wrote:
>
>
>> On Jun 1, 2016, at 1:25 PM, Gandalf Corvotempesta <gandalf.corvotempe...@gmail.com> wrote:
>> With NFS, is replication made directly by the gluster servers, with no
>> client involved?
>
> correct

This is good. What I really don't like in gluster is the client doing all the
replication. Replication and cluster management should be done directly by the
servers, not by the clients; on the client side, the resources used for
replication and cluster management could then be used for something else, like
virtualization and so on.

> The NFS client talks to only one NFS server (the one it mounts); the NFS HA
> setup is only there to fail a virtual IP over to another healthy node. So the
> NFS client will just see 3 minor timeouts and then a major timeout, and by
> the time that happens the virtual IP failover will already be done.

Isn't the same true for the native client? Even the native client has to wait
for a timeout before changing the storage node, right?
What happens to a virtual machine writing to disk during this timeout?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Small files performance

2016-06-01 Thread Gmail

> On Jun 1, 2016, at 1:25 PM, Gandalf Corvotempesta wrote:
> 
> On 1 Jun 2016 at 22:06, "Gmail" wrote:
> > stat() on NFS is just a single stat() from the client to the storage node;
> > then all the storage nodes in the same replica group talk to each other
> > using libgfapi (no FUSE overhead).
> >
> > Conclusion: I'd prefer NFS over FUSE with small files.
> > Drawback: NFS HA is more complicated to set up and maintain than FUSE.
> 
> NFS HA with ganesha should be easier than with kernel NFS.
> 
> Skipping the whole FUSE stack should also be good for big files.
> 
With big files, I don't notice much difference in performance between NFS and FUSE.
> With NFS, is replication made directly by the gluster servers, with no
> client involved?
> 
correct
> In this case it would be possible to split the gluster networks, with 10Gb
> used for replication and multiple bonded 1Gb links for clients.
> 
Don't forget the complication of the Ganesha HA setup; pacemaker is a pain in
the butt.
> I can see only advantages for NFS over native gluster.
> 
> One question: with no gluster client that always knows on which node a
> single file is located, who tells NFS where to find the required file? Is NFS
> totally distributed, with no "gateway"/"proxy" or any centralized server?
> 
The NFS client talks to only one NFS server (the one it mounts); the NFS HA
setup is only there to fail a virtual IP over to another healthy node. So the
NFS client will just see 3 minor timeouts and then a major timeout, and by the
time that happens the virtual IP failover will already be done.
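
As a concrete illustration (the virtual IP, volume name and mount point below
are placeholders, and this assumes a working Ganesha HA setup), the clients
only ever see the floating address:

    # clients mount the Ganesha export through the virtual IP
    mount -t nfs -o vers=4 192.0.2.100:/myvol /mnt/myvol

    # if the node currently holding 192.0.2.100 dies, pacemaker moves the VIP
    # to a healthy Ganesha node; the client retries and then carries on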

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Small files performance

2016-06-01 Thread Gandalf Corvotempesta
On 1 Jun 2016 at 22:06, "Gmail" wrote:
> stat() on NFS is just a single stat() from the client to the storage node;
> then all the storage nodes in the same replica group talk to each other using
> libgfapi (no FUSE overhead).
>
> Conclusion: I'd prefer NFS over FUSE with small files.
> Drawback: NFS HA is more complicated to set up and maintain than FUSE.

NFS HA with ganesha should be easier than with kernel NFS.

Skipping the whole FUSE stack should also be good for big files.
With NFS, is replication made directly by the gluster servers, with no client
involved?
In this case it would be possible to split the gluster networks, with 10Gb
used for replication and multiple bonded 1Gb links for clients.
I can see only advantages for NFS over native gluster.

One question: with no gluster client that always knows on which node a single
file is located, who tells NFS where to find the required file? Is NFS totally
distributed, with no "gateway"/"proxy" or any centralized server?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Small files performance

2016-06-01 Thread Gmail
Find my answer inline.
> On Jun 1, 2016, at 12:30 PM, Gandalf Corvotempesta wrote:
> 
> On 28/05/2016 at 11:46, Gandalf Corvotempesta wrote:
>> 
>> If I remember properly, each stat() on a file needs to be sent to all hosts
>> in the replica to check whether they are in sync.
>> 
>> Is this true for both the gluster native client and NFS-Ganesha?
stat() on a FUSE mount is done from the client to all the bricks in the
replica group carrying the file. The data flow is as follows: the FUSE mount
point makes the call using libgfapi (FUSE overhead), libgfapi talks to the
client kernel, the client kernel talks to the kernels of all the storage nodes
in the same replica group, each storage node kernel talks to the Gluster
daemon, and Gluster talks to the underlying filesystem, etc.

stat() on NFS is just a single stat() from the client to the storage node;
then all the storage nodes in the same replica group talk to each other using
libgfapi (no FUSE overhead).

Conclusion: I'd prefer NFS over FUSE with small files.
Drawback: NFS HA is more complicated to set up and maintain than FUSE.
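
A quick way to feel the difference yourself (server, volume name, mount points
and the test directory are all placeholders, and this assumes the volume is
also exported via Ganesha on server1; it's a rough comparison, not a proper
benchmark):

    mount -t glusterfs server1:/myvol /mnt/fuse
    mount -t nfs -o vers=4 server1:/myvol /mnt/nfs

    # lots of small stat()s: the FUSE mount fans each lookup out to every
    # replica, the NFS mount sends a single call to the server it mounted
    time ls -lR /mnt/fuse/maildir > /dev/null
    time ls -lR /mnt/nfs/maildir  > /dev/null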
>> 
>> Which is best for a shared hosting storage with many millions of small
>> files, about 15,000,000 small files in 800 GB? Or even for Maildir hosting?
>> 
>> Ganesha can be configured for HA and load balancing, so the biggest issue
>> that was present with standard NFS is now gone.
>> 
>> Is there any advantage of native gluster over Ganesha? Removing the FUSE
>> requirement should also be a performance advantage for Ganesha over the
>> native client.
>> 
> 
> bump
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Small files performance

2016-06-01 Thread Gandalf Corvotempesta

On 28/05/2016 at 11:46, Gandalf Corvotempesta wrote:


If I remember properly, each stat() on a file needs to be sent to all
hosts in the replica to check whether they are in sync.

Is this true for both the gluster native client and NFS-Ganesha?

Which is best for a shared hosting storage with many millions of
small files, about 15,000,000 small files in 800 GB? Or even for
Maildir hosting?

Ganesha can be configured for HA and load balancing, so the biggest
issue that was present with standard NFS is now gone.

Is there any advantage of native gluster over Ganesha? Removing the FUSE
requirement should also be a performance advantage for Ganesha over
the native client.




bump
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS ganesha client not showing files after crash

2016-06-01 Thread Alan Hartless
Yes, I had a brick that I restored and so it had existing files. After the
crash, it wouldn't let me re-add it because it said the files were already
part of a gluster volume. So I followed
https://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
to reset it.
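
For reference, the reset described there boils down to roughly the following
(shown with the brick path from my logs; I'm reconstructing it from memory, so
treat it as a sketch rather than exactly what I ran):

    # clear the volume markers so the brick can be re-added
    setfattr -x trusted.glusterfs.volume-id /gluster_volume/letsencrypt
    setfattr -x trusted.gfid /gluster_volume/letsencrypt
    rm -rf /gluster_volume/letsencrypt/.glusterfs
    service glusterd restart    # or: systemctl restart glusterd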

Also correct: I can access all files through FUSE, but via ganesha NFS4 I can
only see the root directory and any directories/files that have been created
since.

Using a forced lookup on a specific file, I found that I can reach it and
even edit it. But an ls or dir will not list it or any of its parent
directories. Even after editing the file, it does not show up in ls.
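
To be concrete, by "forced lookup" I mean something like this (the paths are
just illustrative, not my exact layout):

    ls -l /mnt/letsencrypt/live/example.com/cert.pem   # works, file is readable
    ls /mnt/letsencrypt/live                           # comes back empty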

I'm using gluster 3.7 and ganesha 2.3 from Gluster's Ubuntu repositories.

I don't have a /var/log/ganesha.log, but I do have /var/log/ganesha-gfapi.log.
I tailed it while restarting ganesha and got this for the specific volume:

[2016-06-01 18:44:44.876385] I [MSGID: 114020] [client.c:2106:notify]
0-letsencrypt-client-0: parent translators are ready, attempting connect on
transport
[2016-06-01 18:44:44.876903] I [MSGID: 114020] [client.c:2106:notify]
0-letsencrypt-client-1: parent translators are ready, attempting connect on
transport
[2016-06-01 18:44:44.877193] I [rpc-clnt.c:1868:rpc_clnt_reconfig]
0-letsencrypt-client-0: changing port to 49154 (from 0)
[2016-06-01 18:44:44.877837] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-letsencrypt-client-0: Using Program GlusterFS 3.3, Num (1298437), Version
(330)
[2016-06-01 18:44:44.878234] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-letsencrypt-client-0:
Connected to letsencrypt-client-0, attached to remote volume
'/gluster_volume/letsencrypt'.
[2016-06-01 18:44:44.878253] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-letsencrypt-client-0:
Server and Client lk-version numbers are not same, reopening the fds
[2016-06-01 18:44:44.878338] I [MSGID: 108005]
[afr-common.c:4007:afr_notify] 0-letsencrypt-replicate-0: Subvolume
'letsencrypt-client-0' came back up; going online.
[2016-06-01 18:44:44.878390] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-letsencrypt-client-0:
Server lk version = 1
[2016-06-01 18:44:44.878505] I [rpc-clnt.c:1868:rpc_clnt_reconfig]
0-letsencrypt-client-1: changing port to 49154 (from 0)
[2016-06-01 18:44:44.879568] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-letsencrypt-client-1: Using Program GlusterFS 3.3, Num (1298437), Version
(330)
[2016-06-01 18:44:44.880155] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-letsencrypt-client-1:
Connected to letsencrypt-client-1, attached to remote volume
'/gluster_volume/letsencrypt'.
[2016-06-01 18:44:44.880175] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-letsencrypt-client-1:
Server and Client lk-version numbers are not same, reopening the fds
[2016-06-01 18:44:44.896801] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-letsencrypt-client-1:
Server lk version = 1
[2016-06-01 18:44:44.898290] I [MSGID: 108031]
[afr-common.c:1900:afr_local_discovery_cbk] 0-letsencrypt-replicate-0:
selecting local read_child letsencrypt-client-0
[2016-06-01 18:44:44.898798] I [MSGID: 104041]
[glfs-resolve.c:869:__glfs_active_subvol] 0-letsencrypt: switched to graph
676c7573-7465-7266-732d-6e6f64652d63 (0)
[2016-06-01 18:44:45.913545] I [MSGID: 104045] [glfs-master.c:95:notify]
0-gfapi: New graph 676c7573-7465-7266-732d-6e6f64652d63 (0) coming up

I also tailed it while accessing files through a mount point but nothing
was logged.

This is the ganesha config for the specific volume I'm testing with. I have
others but they are the same except for export ID and the paths.

EXPORT
{
    Export_Id = 3;
    Path = "/letsencrypt";
    Pseudo = "/letsencrypt";
    FSAL {
        name = GLUSTER;
        hostname = "localhost";
        volume = "letsencrypt";
    }
    Access_type = RW;
    Squash = No_root_squash;
    Disable_ACL = TRUE;
}
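
In case it's relevant, the way I've been re-testing after config changes is
roughly this (the scratch mount point is just an example):

    service nfs-ganesha restart    # or: systemctl restart nfs-ganesha
    mount -t nfs -o vers=4 localhost:/letsencrypt /mnt/test
    ls -la /mnt/test               # only the root and newly created entries appear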

Many thanks!


On Sun, May 29, 2016 at 12:46 PM Jiffin Tony Thottan wrote:

>
>
> On 28/05/16 08:07, Alan Hartless wrote:
>
> I had everything working well when I had a complete meltdown :-) Well, I got
> all that sorted and everything back up and running, or so I thought. Now NFS
> ganesha is not showing any existing files except the root level of the brick.
> It's empty for all subdirectories. New files or directories that are added do
> show up. Everything shows up when using the fuse client.
>
>
> If I understand your issue correctly:
> * You have created a volume using a brick which contains pre-existing files
> and directories.
> * When you try to access the files via ganesha, they do not show up, but
> with fuse they are visible.
>
> Can you please try to perform a force lookup on the directories/files
> (ls <path to directory/file>) from the ganesha mount?
> Also check the ganesha logs (/var/log/ganesha.log and
> /var/log/ganesha-gfapi.log) for clues.
> 

[Gluster-users] snapshot removal failed on one node how to recover (3.7.11)

2016-06-01 Thread Alastair Neil
I have a replica 3 volume that has snapshots scheduled using
snap_scheduler.py.

I recently tried to remove a snapshot and the command failed on one node:

> snapshot delete: failed: Commit failed on gluster0.vsnet.gmu.edu. Please
> check log file for details.
> Snapshot command failed

How do I recover from this failure? Clearly I need to remove the snapshot
from the offending server, but this does not seem possible, as the snapshot
no longer exists on the other two nodes.
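
What I'm tempted to try on the offending node is roughly the following
(reconstructed from docs and half-remembered list posts, so please correct me
if this is the wrong approach; <snapname> is a placeholder):

    gluster snapshot list                 # confirm the snap is gone cluster-wide
    gluster snapshot info <snapname>      # see whether the bad node still knows it

    # stale state usually lives here on the node where the commit failed,
    # along with the thin LV that backed the snapshot bricks
    ls /var/lib/glusterd/snaps/<snapname>
    lvs | grep <snapname>

    # (speculative) move the stale snapshot metadata aside, then restart glusterd
    mv /var/lib/glusterd/snaps/<snapname> /root/snap-backup-<snapname>
    systemctl restart glusterd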
Suggestions welcome.

-Alastair
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes of Gluster Community Bug Triage meeting at 12:00 UTC ~(in 1.5 hours)

2016-06-01 Thread Jiffin Tony Thottan


 Meeting summary

1. *Roll call* (jiffin, 12:01:29)
2. *msvbhat will look into lalatenduM's automated Coverity setup in
   Jenkins, which needs assistance from an admin with more permissions*
   (jiffin, 12:05:12)
   1. /ACTION/: kkeithley Saravanakmr will set up Coverity, clang, etc
      on a public facing machine and run it regularly (jiffin, 12:12:44)
   2. http://download.gluster.org/pub/gluster/glusterfs/static-analysis/
      (jiffin, 12:13:30)
   3. /ACTION/: ndevos needs to decide on how to provide/use debug
      builds (jiffin, 12:17:26)
   4. /ACTION/: ndevos to propose some test-cases for a minimal libgfapi
      test (jiffin, 12:17:37)

3. *Manikandan and gem to follow up with kshlm to get access to
   gluster-infra* (jiffin, 12:17:59)
   1. /ACTION/: Manikandan and gem to follow up with kshlm/misc to get
      access to gluster-infra (jiffin, 12:20:41)

4. *Group Triage* (jiffin, 12:22:22)
   1. https://public.pad.fsfe.org/p/gluster-bugs-to-triage (jiffin, 12:22:41)

5. *Get moderators for June 2016* (jiffin, 12:37:34)
   1. /ACTION/: Saravanakmr will host the bug triage meeting on June 14th
      2016 (jiffin, 12:40:20)
   2. /ACTION/: Manikandan will host the bug triage meeting on June 21st
      2016 (jiffin, 12:41:31)
   3. /ACTION/: ndevos will host the bug triage meeting on June 28th 2016
      (jiffin, 12:42:54)
   4. /ACTION/: Jiffin will host the bug triage meeting on June 7th 2016
      (jiffin, 12:44:09)

6. *Open Floor* (jiffin, 12:44:43)
7. *Open Floor* (jiffin, 12:45:07)
   1. /ACTION/: ? decide how component maintainers/developers use the
      BZ queries or RSS-feeds for the Triaged bugs (jiffin, 13:00:52)



Meeting ended at 13:02:21 UTC (full logs).




 Action items

1. kkeithley Saravanakmr will set up Coverity, clang, etc on a public
   facing machine and run it regularly
2. ndevos needs to decide on how to provide/use debug builds
3. ndevos to propose some test-cases for a minimal libgfapi test
4. Manikandan and gem to follow up with kshlm/misc to get access to
   gluster-infra
5. Saravanakmr will host bug triage meeting on June 14th 2016
6. Manikandan will host bug triage meeting on June 21st 2016
7. ndevos will host bug triage meeting on June 28th 2016
8. Jiffin will host bug triage meeting on June 7th 2016
9. ? decide how component maintainers/developers use the BZ queries or
   RSS-feeds for the Triaged bugs



 Action items, by person

1. kkeithley
   1. kkeithley Saravanakmr will set up Coverity, clang, etc on a public
      facing machine and run it regularly
2. Manikandan
   1. Manikandan and gem to follow up with kshlm/misc to get access to
      gluster-infra

[Gluster-users] [Gluster-devel] REMINDER: Weekly Gluster Community meeting starts in ~15mnts

2016-06-01 Thread Mohammed Rafi K C
Hi all,

The weekly Gluster community meeting is starting in 1 hour at 12:00 UTC.
The current agenda for the meeting is below. Add any further topics to
the agenda at https://public.pad.fsfe.org/p/gluster-community-meetings

Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 8:00 EDT, 12:00 UTC, 13:00 CET, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")

Current Agenda:
 * Roll Call
 * AIs from last meeting
 * GlusterFS 3.7
 * GlusterFS 3.6
 * GlusterFS 3.5
 * GlusterFS 3.8
 * GlusterFS 3.9
 * GlusterFS 4.0
 * Open Floor

See you there,
Rafi



___
Gluster-devel mailing list
gluster-de...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users