Re: [Gluster-users] 2015 Gluster Community Survey

2015-11-18 Thread Amye Scavarda
On Wed, Nov 4, 2015 at 8:41 AM, Amye Scavarda  wrote:
> Hi all,
> It's that time of year again! The 2015 Gluster Community Survey is an
> important way to be able to give your feedback to the project as a whole,
> letting us know where you'd like to see improvements, what you like, and how
> you're using Gluster.
>
> This survey will be open until the end of November to give everyone plenty
> of time to respond.
>
> https://goo.gl/bXjnNW
>
> Let me know if you have trouble accessing this!
>  - amye
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead

We've been getting great responses from the community, but we'd love to
hear from everyone.
This community survey will be open until November 30th to give
everyone time to respond.

https://goo.gl/bXjnNW


Thanks!
-- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs mounting and copying files

2015-11-18 Thread Soumya Koduri



On 11/17/2015 08:50 PM, Pierre Léonard wrote:

Hi all,

I have a cluster with 14 nodes. I have built a striped volume (stripe 7) across
the 14 nodes. The underlying filesystem is XFS.

Locally I mount the global volume with NFS:
mount -t nfs 127.0.0.1:gvExport /glusterfs/gvExport -o _netdev,nosuid,bg,exec

Then I untar big files and get a lot of errors; tar stops without
finishing its job:


tar:
cuir/xsq_data/nash2_r55a_evot_r3a_cuiR_r1a__w4_FC1/runSummary/L03_F3_PM_line_Residual_Coef_Mean.tab:
Cannot open: Input/output error
tar:
cuir/xsq_data/nash2_r55a_evot_r3a_cuiR_r1a__w4_FC1/runSummary/RunThroughputSummary_nash2_r55a_evot_r3a_cuiR_r1a__w4_FC1.csv:
Cannot open: Input/output error
tar:
cuir/xsq_data/nash2_r55a_evot_r3a_cuiR_r1a__w4_FC1/runSummary/L01_BC_PM_box_SampleAvgQV.png:
Cannot open: Input/output error
tar: Exiting with failure status due to previous errors



To start with, could you please check the logs 
'/var/log/glusterfs/nfs.log' and brick logs for any errors/warnings.
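
For example, something along these lines will surface recent error and warning
entries (a hedged sketch, assuming the default log locations; brick log file
names depend on your brick paths):

# scan the Gluster NFS server log and the brick logs for E/W entries
grep -E " [EW] \[" /var/log/glusterfs/nfs.log | tail -n 50
grep -E " [EW] \[" /var/log/glusterfs/bricks/*.log | tail -n 50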


Thanks,
Soumya


[1]+  Exit 2  tar --acl -zxf
/scratch/sauverepli/projets/cuir.tgz


Is that a problem with the NFS mount and big files?

Should I then go back to a gluster mount?

Many thanks in advance.


Pierre Léonard





___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] glusterfs backup - how to migrate image to new server?

2015-11-18 Thread Merlin Morgenstern
I am running GlusterFS on a 3-node production setup with thinly provisioned
LVM volumes. My goal is to automate a backup process based on Gluster
snapshots. The idea is basically to run a shell script via cron that takes a
snapshot, compresses it, and moves it to a remote server.

The backup works; now I want to test restoring it on a development server
with a similar GlusterFS setup. My question is how to restore this image
into Gluster. Is it simply a matter of replacing the brick, or would I run into
conflicts with the volume_id or similar things?

This is what my script looks like so far:

#!/bin/bash

NOW=$(date +%Y%m%d_%H%M%S)
DAY=$(date +%u)

GS_VOLUME="vol1"
BACKUP_DIR="/home/user/backup/"
SNAP_NAME="snap_"$GS_VOLUME"-"$NOW

# create the snapshot
gluster snapshot create "$SNAP_NAME" "$GS_VOLUME" no-timestamp

# get the snapshot volume name
SNAP_VOL_NAME=$(gluster snapshot info "$SNAP_NAME" | grep "Snap Volume Name" | sed -e 's/.*S.*:.//g')
MOUNT_OBJECT="/dev/vg0/${SNAP_VOL_NAME}_0"
MOUNT_POINT="/run/gluster/snaps/$SNAP_VOL_NAME/brick1"

# unmount the snapshot brick so the block device is quiesced before imaging
umount "$MOUNT_POINT"

# create the backup: a raw, lz4-compressed block image of the snapshot LV
sudo dd if="$MOUNT_OBJECT" | lz4c -c > "$BACKUP_DIR$SNAP_NAME.ddimg.lz4"

# remount the snapshot brick
mount "$MOUNT_OBJECT" "$MOUNT_POINT"
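
For reference, the restore on the development server would start with the
mechanical reverse of the dd step (a hedged sketch only; the target LV is a
placeholder, must be unmounted and at least as large as the original, and this
does not by itself answer the volume_id question above):

# decompress the image and write it back onto a spare thin LV
lz4c -d -c "$BACKUP_DIR$SNAP_NAME.ddimg.lz4" | sudo dd of=/dev/vg0/restore_target bs=4M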

Thank you in advance for any help on this topic.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Configuring Ganesha and gluster on separate nodes?

2015-11-18 Thread Soumya Koduri



On 11/17/2015 10:21 PM, Surya K Ghatty wrote:

Hi:

I am trying to understand whether it is technically feasible to have the
gluster nodes on one set of machines and export a volume from one of these
nodes using an nfs-ganesha server installed on a totally different machine.
I tried the setup below, but showmount -e does not show my volume as exported.
Any suggestions will be appreciated.

1. Here is my configuration:

Gluster nodes: glusterA and glusterB on individual bare metals - both in
Trusted pool, with volume gvol0 up and running.
Ganesha node: on bare metal ganeshaA.

2. my ganesha.conf looks like this with IP address of glusterA in the FSAL.

FSAL {
Name = GLUSTER;

# IP of one of the nodes in the trusted pool
hostname = "WW.ZZ.XX.YY";   # --> IP address of glusterA

# Volume name. Eg: "test_volume"
volume = "gvol0";
}

3. I disabled nfs on gvol0. As you can see, nfs.disable is set to on.

[root@glusterA ~]# gluster vol info

Volume Name: gvol0
Type: Distribute
Volume ID: 16015bcc-1d17-4ef1-bb8b-01b7fdf6efa0
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: glusterA:/data/brick0/gvol0
Options Reconfigured:
nfs.disable: on
nfs.export-volumes: off
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

4. I then ran ganesha.nfsd -f /etc/ganesha/ganesha.conf -L
/var/log/ganesha.log -N NIV_FULL_DEBUG
Ganesha server was put in grace, no errors.

17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA:
nfs-ganesha-26426[reaper] fridgethr_freeze :RW LOCK :F_DBG :Released
mutex 0x7f21a92818d0 (>mtx) at
/builddir/build/BUILD/nfs-ganesha-2.2.0/src/support/fridgethr.c:484
17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA:
nfs-ganesha-26426[reaper] nfs_in_grace :RW LOCK :F_DBG :Acquired mutex
0x7f21ad1f18e0 (_mutex) at
/builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:129
*17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA :
nfs-ganesha-26426[reaper] nfs_in_grace :STATE :DEBUG :NFS Server IN GRACE*
17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA :
nfs-ganesha-26426[reaper] nfs_in_grace :RW LOCK :F_DBG :Released mutex
0x7f21ad1f18e0 (_mutex) at
/builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:141



You will still need the gluster client bits on the machine where the
nfs-ganesha server is installed in order to export a gluster volume. Check
whether libgfapi.so is installed on that machine.
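
A quick way to check is something like this (a hedged example, assuming an
RPM-based distribution like the ones in this thread):

# confirm the gluster client/api packages and libgfapi are present on the ganesha host
rpm -qa | grep -i glusterfs
ldconfig -p | grep libgfapi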


Also, the ganesha server logs warnings if it is unable to process the
EXPORT/FSAL block. Please recheck the logs for any such messages.


Thanks,
Soumya


5. [root@ganeshaA glusterfs]# showmount -e
Export list for ganeshaA:


Any suggestions on what I am missing?

Regards,

Surya Ghatty

"This too shall pass"

Surya Ghatty | Software Engineer | IBM Cloud Infrastructure Services
Development | tel: (507) 316-0559 | gha...@us.ibm.com



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] rsync to gluster mount: self-heal and bad performance

2015-11-18 Thread Joe Julian

On 11/17/2015 08:19 AM, Tiemen Ruiten wrote:
I double-checked my config and found out that the filesystem of the
brick on the arbiter node doesn't support ACLs: the underlying fs is ext4
without the acl mount option, while the other bricks are XFS (where it's
always enabled). Do all the bricks need to support ACLs?


For the volume to support ACLs the bricks must support ACLs too.
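
If you wanted to keep ACLs, enabling them on the ext4 brick usually comes down
to the acl mount option (a hedged sketch; the device and mount point are
placeholders):

# remount the arbiter brick filesystem with ACL support
mount -o remount,acl /bricks/arbiter
# and persist it in /etc/fstab, e.g.:
# /dev/sdX1  /bricks/arbiter  ext4  defaults,acl  0 2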



To keep things simple, I suppose it makes sense to drop ACLs, as they're
not strictly needed for my setup. I ran some tests and can now confirm
that I don't see the self-heals when the volume isn't mounted
with --acl. I don't have exact numbers, but I have the impression that
syncing is faster as well.


Makes sense. When one brick of a replica had the ACL metadata, a 
lookup() to the two bricks would find a mismatch and try to heal that 
missing ACL metadata every time.
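
One way to see that mismatch is to compare the ACLs directly on the backend
bricks (a hedged example; brick paths and the file name are placeholders):

# run on each replica's brick; a file with an ACL shows extra entries here
getfacl /bricks/xfs-brick/some/file
getfacl /bricks/arbiter-brick/some/file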

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] REMINDER: Weekly gluster community meeting to start in ~90 minutes

2015-11-18 Thread Kaushal M
Hi All,

The weekly Gluster community meeting will start in ~90 minutes.

Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-community-meetings

Today's agenda includes:
- Roll Call
- Status of last week's action items
- Gluster 3.7
- Gluster 3.8
- Gluster 3.6
- Gluster 3.5
- Gluster 4.0
- Open Floor
  - Bring your own topic! Add it to the agenda.

Thanks,
Kaushal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Meeting minutes of Gluster community meeting 18-11-2015

2015-11-18 Thread Kaushal M
Thank you to everyone who attended today's meeting. We ran slightly over
time and couldn't cover all topics. We hope to cover any missed topics in
the next meeting, which will happen at the same time next week. A calendar
invite has been attached for next week's meeting.

Today's meeting logs are available at the locations mentioned below.
The meeting minutes have also been added to the end of this mail.

Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-18/gluster-meeting.2015-11-18-12.01.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-18/gluster-meeting.2015-11-18-12.01.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-11-18/gluster-meeting.2015-11-18-12.01.log.html

Cheers,
Kaushal

Meeting summary
---
* Roll call  (kshlm, 12:03:18)

* Last weeks AIs  (kshlm, 12:06:19)

* kshlm to check back with misc on the new jenkins slaves  (kshlm,
  12:06:49)

* ndevos send out a reminder to the maintainers about more actively
  enforcing backports of bugfixes  (kshlm, 12:08:01)
  * ACTION: ndevos send out a reminder to the maintainers about more
actively enforcing backports of bugfixes  (kshlm, 12:09:19)

* raghu to call for volunteers and help from maintainers for doing
  backports listed by rwareing to 3.6.7  (kshlm, 12:09:40)
  * ACTION: raghu to call for volunteers and help from maintainers for
doing backports listed by rwareing to 3.6.7  (kshlm, 12:10:51)

* hagarth to post a tracking page on gluster.org for 3.8 by next week's
  meeting  (kshlm, 12:11:14)
  * ACTION: hagarth to post a tracking page on gluster.org for 3.8 by
next week's meeting  (kshlm, 12:12:22)

* rafi to setup a doodle poll for bug triage meeting  (kshlm, 12:13:17)
  * ACTION: rafi1 to setup a doodle poll for bug triage meeting  (kshlm,
12:14:32)

* rastar and msvbhat to publish a test exit criterion for major/minor
  releases on gluster.org  (kshlm, 12:14:53)
  * ACTION: rastar and msvbhat to publish a test exit criterion for
major/minor releases on gluster.org  (kshlm, 12:16:20)

* jdarcy to send monthly update for NSR  (kshlm, 12:16:33)
  * ACTION: jdarcy to send monthly update for NSR  (kshlm, 12:18:15)

* samikshan to send status on Gluster Eventing  (kshlm, 12:18:45)
  * ACTION: samikshan to send status on Gluster Eventing  (kshlm,
12:19:14)

* atinm to put up the GlusterD 2.0 design doc by end of next week
  (kshlm, 12:19:36)
  * ACTION: atinm/kshlm to put up the GlusterD 2.0 design doc by end of
next week  (kshlm, 12:20:57)

* more eyes needed on http://review.gluster.org/#/c/12321/  (kshlm,
  12:21:25)
  * The patch is Shyam's experimental xlator change.  (kshlm, 12:22:26)

* kshlm & csim to set up faux/pseudo user email for gerrit, bugzilla,
  github  (kshlm, 12:24:16)
  * ACTION: kshlm & csim to set up faux/pseudo user email for gerrit,
bugzilla, github  (kshlm, 12:25:23)

* atinm to send a mail to Manu asking his help on 3.6 BSD smoke failures
  (kshlm, 12:25:57)
  * LINK:

http://www.gluster.org/pipermail/gluster-devel/2015-November/thread.html#47085
(atinm, 12:29:04)
  *

http://www.gluster.org/pipermail/gluster-devel/2015-November/thread.html#47085
(kshlm, 12:29:20)
  * ACTION: Need to decide if fixing BSD testing for release-3.6 is
worth it.  (kshlm, 12:33:37)

* raghu to send a note in ML to ignore BSD failures  (kshlm, 12:33:55)
  *
https://www.gluster.org/pipermail/gluster-devel/2015-November/047082.html
(kshlm, 12:35:38)

* GlusterFS 3.7  (kshlm, 12:36:14)
  * 3.7.6 released  (kshlm, 12:36:46)
  * release-announcement
https://www.gluster.org/pipermail/gluster-devel/2015-November/047152.html
(kshlm, 12:37:33)
  * LINK: https://bugzilla.redhat.com/show_bug.cgi?id=1279240   (rastar,
12:38:31)
  * ACTION: rastar to close the glusterfs-3.7.6 tracker  (kshlm,
12:40:20)

* Need release-manager for glusterfs-3.7.7  (kshlm, 12:40:55)
  * ACTION: rastar Will call for volunteers for 3.7.7 release-manager
(kshlm, 12:43:50)

* GlusterFS 3.6  (kshlm, 12:44:05)
  * 3.6.7 on track for release on 20-Nov  (kshlm, 12:45:19)

* GlusterFS 3.5  (kshlm, 12:47:29)

* GlusterFS-3.8/GlusterFS-4.0  (kshlm, 12:51:58)
  * LINK:

https://drive.google.com/file/d/0B_1LWY0UTlNiQTI0ZU9hWDJGdEk/view?usp=sharing
(atinm, 12:57:24)
  * DHT-2.0 hangout was held. More details at
http://www.gluster.org/pipermail/gluster-devel/2015-November/047098.html
(kshlm, 12:58:09)

* Open floor  (kshlm, 12:58:21)

* Should we have an LTS release? There is a community user who is
  currently running 3.6 in production. They plan to use it long past
  its expected EOL. They would like to see an LTS release, along the
  lines of, e.g., Ubuntu LTS.  (kshlm, 12:59:04)
  * LINK:
http://www.gluster.org/community/documentation/index.php/Life_Cycle
(JoeJulian, 13:04:28)
  * ACTION: amye to get on top of discussion on long-term releases.
(kshlm, 13:08:30)

* Gartner is reporting that Gluster doesn't have a very active

Re: [Gluster-users] File Corruption with shards - 100% reproducable

2015-11-18 Thread Krutika Dhananjay
Lindsay, 

I wanted to ask you one more thing: specifically in VM workloads with sharding,
do you run into consistency issues with strict-write-ordering turned off?
I remember suggesting that this option be enabled, but that was for plain dd on
the mountpoint (and not inside the VM), where it was necessary.
I want to know whether it is *really* necessary in VM workloads.
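
For reference, both options discussed in this thread are per-volume settings
and can be toggled like this on a test volume (a hedged example; the volume
name is a placeholder):

# enable strict write ordering / disable stat-prefetch on a test volume
gluster volume set testvol performance.strict-write-ordering on
gluster volume set testvol performance.stat-prefetch off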

-Krutika 

- Original Message -

> From: "Lindsay Mathieson" 
> To: "Krutika Dhananjay" 
> Cc: "gluster-users" 
> Sent: Sunday, November 15, 2015 11:39:57 AM
> Subject: Re: [Gluster-users] File Corruption with shards - 100% reproducable

> On 15 November 2015 at 13:32, Krutika Dhananjay < kdhan...@redhat.com >
> wrote:

> > So to start with, just disable performance.stat-prefetch and leave the rest
> > of the options as they were before and run the test case.
> 
> Yes, that seems to be the guilty party. When it's disabled I can freely
> migrate VMs; when enabled, things rapidly go pear-shaped.

> --
> Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-replication started but not replicating

2015-11-18 Thread Deepak Ravi
Aravinda,

I figured it out. The problem was that I was using the public IPs to create
the gluster cluster, which was causing the transport endpoint issue. I found
a workaround: use the private EC2 DNS names for peering and the public ones
for geo-replication, which worked like a charm. Sorry if this doesn't make
complete sense. Anyway, thanks for your help.
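
For anyone hitting the same issue, the workaround amounts to something like the
following (a hedged sketch; the hostnames are placeholders for your instances'
private and public EC2 DNS names):

# peer over the private (intra-VPC) addresses
gluster peer probe ip-10-0-0-11.ec2.internal

# create and start geo-replication against the slave's public address
gluster volume geo-replication mastervol ec2-XX-XX-XX-XX.compute-1.amazonaws.com::slavevol create push-pem
gluster volume geo-replication mastervol ec2-XX-XX-XX-XX.compute-1.amazonaws.com::slavevol start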

Deepak

On Wed, Nov 18, 2015 at 1:29 AM, Deepak Ravi  wrote:

> Alright, here you go. Slaves xfs1 and xfs2:
>
> *[root@xfs1 ~]*# cat
> /var/log/glusterfs/geo-replication-slaves/f77a024e-a865-493e-9ce2
> d7dbe99ee6d5\:gluster%3A%2F%2F127.0.0.1%3Axvol.gluster.log | less
>
> [2015-11-17 15:30:32.082984] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 2
> [2015-11-17 15:30:32.083124] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
> 0-xvol-client-0: changing port to 49152 (from 0)
> [2015-11-17 15:30:32.085969] E [socket.c:3021:socket_connect]
> 0-xvol-client-0: connection attempt on 127.0.0.1:24007 failed, (Invalid
> argument)
> [2015-11-17 15:30:32.086037] W [socket.c:588:__socket_rwv]
> 0-xvol-client-0: writev on 54.172.172.245:49152 failed (Broken pipe)
> [2015-11-17 15:30:32.086301] E [rpc-clnt.c:362:saved_frames_unwind] (-->
> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7fc03801ea82] (-->
> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fc037de9a3e] (-->
> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fc037de9b4e] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7fc037deb4da] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7fc037debd08] )
> 0-xvol-client-0: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at
> 2015-11-17 15:30:32.086051 (xid=0x3)
> [2015-11-17 15:30:32.086316] W [MSGID: 114032]
> [client-handshake.c:1623:client_dump_version_cbk] 0-xvol-client-0: received
> RPC status error [Transport endpoint is not connected]
> [2015-11-17 15:30:32.086331] I [MSGID: 114018]
> [client.c:2042:client_rpc_notify] 0-xvol-client-0: disconnected from
> xvol-client-0. Client process will keep trying to connect to glusterd until
> brick's port is available
> [2015-11-17 15:30:32.086670] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
> 0-xvol-client-1: changing port to 49152 (from 0)
> [2015-11-17 15:30:32.090173] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs] 0-xvol-client-1:
> Using Program GlusterFS 3.3, Num (1298437), Version (330)
> [2015-11-17 15:30:32.090911] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-xvol-client-1: Connected
> to xvol-client-1, attached to remote volume '/data/brick/xvol'.
> [2015-11-17 15:30:32.090924] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-xvol-client-1: Server and
> Client lk-version numbers are not same, reopening the fds
> [2015-11-17 15:30:32.090972] I [MSGID: 108005]
> [afr-common.c:3841:afr_notify] 0-xvol-replicate-0: Subvolume
> 'xvol-client-1' came back up; going online.
> [2015-11-17 15:30:32.094306] I [fuse-bridge.c:5137:fuse_graph_setup]
> 0-fuse: switched to graph 0
> [2015-11-17 15:30:32.094349] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-xvol-client-1: Server
> lk version = 1
> [2015-11-17 15:30:32.094463] I [fuse-bridge.c:4030:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel
> 7.22
> [2015-11-17 15:30:32.098769] W [MSGID: 108027]
> [afr-common.c:2100:afr_discover_done] 0-xvol-replicate-0: no read subvols
> for /
> [2015-11-17 15:30:36.075211] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
> 0-xvol-client-0: changing port to 49152 (from 0)
> [2015-11-17 15:30:36.077673] E [socket.c:3021:socket_connect]
> 0-xvol-client-0: connection attempt on 127.0.0.1:24007 failed, (Invalid
> argument)
> [2015-11-17 15:30:36.077716] W [socket.c:588:__socket_rwv]
> 0-xvol-client-0: writev on 54.172.172.245:49152 failed (Broken pipe)
> [2015-11-17 15:30:36.077963] E [rpc-clnt.c:362:saved_frames_unwind] (-->
> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7fc03801ea82] (-->
> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fc037de9a3e] (-->
> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fc037de9b4e] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7fc037deb4da] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7fc037debd08] )
> 0-xvol-client-0: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at
> 2015-11-17 15:30:36.077724 (xid=0x6)
> [2015-11-17 15:30:32.099560] W [MSGID: 108027]
> [afr-common.c:2100:afr_discover_done] 0-xvol-replicate-0: no read subvols
> for /
> [2015-11-17 15:30:36.077980] W [MSGID: 114032]
> [client-handshake.c:1623:client_dump_version_cbk] 0-xvol-client-0: received
> RPC status error [Transport endpoint is not connected]
> [2015-11-17 15:30:36.078003] I [MSGID: 114018]
> [client.c:2042:client_rpc_notify] 0-xvol-client-0: disconnected from
> xvol-client-0. Client process will keep trying to connect to glusterd until
> brick's port is