[Gluster-users] Block storage

2016-09-30 Thread Gandalf Corvotempesta
I was looking for block storage in Gluster but I don't see the docs anymore.
Is this an unsupported feature?


Re: [Gluster-users] Production cluster planning

2016-09-30 Thread Lindsay Mathieson

On 1/10/2016 4:15 AM, mabi wrote:
The data will not be in "any" state as you mention, or please define 
what you mean by "any". In the worst case you will just lose 5 
seconds of data, that's all, as far as I understand.




By "Any" state I mean *Any*, you have no way of predicting how much data 
would be written.



The key thing here is that gluster will think the data has been safely 
written to disk when it has not. Imagine you have a three-node volume 
undergoing heavy writes - 5 seconds is easily enough time for multiple 
shards or small files to be written.



All of a sudden one of the nodes goes down - perhaps the cleaner 
unplugged it.



Now the third node is missing up to 5 seconds of data compared to the 
other two nodes - your gluster volume is in an inconsistent state and 
*doesn't* know it.
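
For anyone wondering how to check or flip this on the brick datasets, a minimal sketch (the pool/dataset name "tank/gluster" is hypothetical):

# Show how synchronous writes are currently handled on the brick dataset:
zfs get sync tank/gluster
# sync=standard honours fsync()/O_SYNC, so gluster's "written" acknowledgement
# really means stable storage; sync=disabled is what opens the ~5 second window:
zfs set sync=standard tank/gluster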


--
Lindsay Mathieson



Re: [Gluster-users] Production cluster planning

2016-09-30 Thread mabi
Sorry the link is missing in my previous post:

https://groups.google.com/a/zfsonlinux.org/d/msg/zfs-discuss/OI5dchl7d_8/vLRMZgJGYUoJ









 Original Message 
Subject: Re: [Gluster-users] Production cluster planning
Local Time: September 30, 2016 8:15 PM
UTC Time: September 30, 2016 6:15 PM
From: m...@protonmail.ch
To: Gluster Users 
Gluster Users 

The data will not be in "any" state as you mention, or please define what you 
mean by "any". In the worst case you will just lose 5 seconds of data, that's 
all, as far as I understand.

Here is another very interesting but long post regarding this topic. Basically 
it all boils down to this specific









 Original Message 
Subject: Re: [Gluster-users] Production cluster planning
Local Time: September 30, 2016 12:41 PM
UTC Time: September 30, 2016 10:41 AM
From: lindsay.mathie...@gmail.com
To: mabi , Gluster Users 

On 29/09/2016 4:32 AM, mabi wrote:
> That's not correct. There is no risk of corruption using
> "sync=disabled". In the worst case you just end up with old data but
> no corruption. See the following comment from a master of ZFS (Aaron
> Toponce):
>
> https://pthree.org/2013/01/25/glusterfs-linked-list-topology/#comment-227906

You're missing what he said - *ZFS* will not be corrupted, but the data
written could be in any state, in this case the gluster filesystem data
and metadata. To have one node in a cluster out of sync without the
cluster knowing would be very bad.

--
Lindsay Mathieson

Re: [Gluster-users] Production cluster planning

2016-09-30 Thread mabi
The data will not be in "any" state as you mention, or please define what you 
mean by "any". In the worst case you will just lose 5 seconds of data, that's 
all, as far as I understand.

Here is another very interesting but long post regarding this topic. Basically 
it all boils down to this specific









 Original Message 
Subject: Re: [Gluster-users] Production cluster planning
Local Time: September 30, 2016 12:41 PM
UTC Time: September 30, 2016 10:41 AM
From: lindsay.mathie...@gmail.com
To: mabi , Gluster Users 

On 29/09/2016 4:32 AM, mabi wrote:
> That's not correct. There is no risk of corruption using
> "sync=disabled". In the worst case you just end up with old data but
> no corruption. See the following comment from a master of ZFS (Aaron
> Toponce):
>
> https://pthree.org/2013/01/25/glusterfs-linked-list-topology/#comment-227906

You're missing what he said - *ZFS* will not be corrupted, but the data
written could be in any state, in this case the gluster filesystem data
and metadata. To have one node in a cluster out of sync without the
cluster knowing would be very bad.

--
Lindsay Mathieson

[Gluster-users] Permission denied based on file name when renaming

2016-09-30 Thread Michael Seidel

Hi,

I recently deployed a GlusterFS system and observed the following 
behaviour which leaves me quite puzzled:


I'm running glusterfs 3.7.1 in a replicated setup (2 replicas, see below).

1. Create a file with arbitrary content (irrelevant):
$ echo 123 > test


2. Copying the file works fine:
$ cp test test_2016.00


3. Remove the copy:
$ rm test_2016.00


4. Renaming the file fails:
$ mv test test_2016.00
mv: cannot move ‘test’ to ‘test_2016.00’: Permission denied


5. Choosing another filename works as expected:
$ mv test test_2017.00



I wonder if this could be related to some sort of hash collision (is 
there a way to find out?) or if this is a known bug in this version. I 
was however not able to find any reports describing this behaviour. Did 
anyone observe a similar behaviour?
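
In case it helps narrow things down, one thing worth checking is whether a stale DHT link file for that particular name was left behind on one of the bricks - renames on distributed volumes create such pointer files, and they can interact badly with server.root-squash. A rough sketch, reusing the brick paths from the volume info below:

# Run on each server: a leftover DHT link file shows up as a zero-byte entry
# with mode ---------T and a trusted.glusterfs.dht.linkto extended attribute.
ls -l /srv/gluster/bricks/brick1/g/test_2016.00 /srv/gluster/bricks/brick4/g/test_2016.00
getfattr -d -m . -e hex /srv/gluster/bricks/brick1/g/test_2016.00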


Cheers,
- Michael





=

Volume Name: _rep
Type: Distributed-Replicate
Volume ID: fc9493b4-1f89-4ec8-9a24-c7d4faf19959
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: -gluster01.XX.XXX:/srv/gluster/bricks/brick1/g
Brick2: -gluster02.XX.XXX:/srv/gluster/bricks/brick1/g
Brick3: -gluster01.XX.XXX:/srv/gluster/bricks/brick4/g
Brick4: -gluster02.XX.XXX:/srv/gluster/bricks/brick4/g
Options Reconfigured:
cluster.self-heal-daemon: enable
server.root-squash: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe
Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Dr. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


Re: [Gluster-users] [Gluster-devel] Community Gluster Package Matrix, updated

2016-09-30 Thread Niels de Vos
On Thu, Sep 29, 2016 at 09:54:24PM -0400, Vijay Bellur wrote:
> Thank you Kaleb for putting this together. I think it would also be useful
> to list where our official container images would be present too.

I think a different page would be most useful, so that we do not
overwhelm users too much. Both of these pages should be linked on
https://www.gluster.org/download/ and probably also refer to each other.

> Should we make this content persistent somewhere on our website and have a
> link from the release notes? The complaints that we encountered after
> releasing 3.8 (mostly on CentOS) makes me wonder about that.

It is already merged for our docs:
  http://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/

We should probably correct the formatting a little and make the links
clickable. A link to the release schedule with the release status table
would be a good thing too.
  https://www.gluster.org/community/release-schedule/

Pull requests welcome!
  https://github.com/gluster/glusterdocs
  
https://github.com/gluster/glusterdocs/blob/master/Install-Guide/Community_Packages.md

Niels


> 
> Regards,
> Vijay
> 
> 
> 
> On Wed, Sep 28, 2016 at 10:29 AM, Kaleb S. KEITHLEY 
> wrote:
> 
> > Hi,
> >
> > With the imminent release of 3.9 in a week or two, here's a summary of the
> > Community packages for various Linux distributions that are tentatively
> > planned going forward.
> >
> > Note that 3.6 will reach end-of-life (EOL) when 3.9 is released, and no
> > further releases will be made on the release-3.6 branch.
> >
> > N.B. Fedora 23 and Ubuntu Wily are nearing EOL.
> >
> > (I haven't included NetBSD or FreeBSD here, only because they're not Linux
> > and we have little control over them.)
> >
> > An X means packages are planned to be in the repository.
> > A — means we have no plans to build the version for the repository.
> > d.g.o means packages will (also) be provided on
> > https://download.gluster.org
> > DNF/YUM means the packages are included in the Fedora updates or
> > updates-testing repos.
> >
> >
> >
> >                        3.9      3.8      3.7       3.6
> > CentOS Storage SIG¹
> >   el5                  —        —        d.g.o     d.g.o
> >   el6                  X        X        X, d.g.o  X, d.g.o
> >   el7                  X        X        X, d.g.o  X, d.g.o
> >
> > Fedora
> >   F23                  —        d.g.o    DNF/YUM   d.g.o
> >   F24                  d.g.o    DNF/YUM  d.g.o     d.g.o
> >   F25                  DNF/YUM  d.g.o    d.g.o     d.g.o
> >   F26                  DNF/YUM  d.g.o    d.g.o     d.g.o
> >
> > Ubuntu Launchpad²
> >   Precise (12.04 LTS)  —        —        X         X
> >   Trusty (14.04 LTS)   —        X        X         X
> >   Wily (15.10)         —        X        X         X
> >   Xenial (16.04 LTS)   X        X        X         X
> >   Yakkety (16.10)      X        X        —         —
> >
> > Debian
> >   Wheezy (7)           —        —        d.g.o     d.g.o
> >   Jessie (8)           d.g.o    d.g.o    d.g.o     d.g.o
> >   Stretch (9)          d.g.o    d.g.o    d.g.o     d.g.o
> >
> > SuSE Build System³
> >   OpenSuSE13           X        X        X         X
> >   Leap 42.X            X        X        X         —
> >   SLES11               —        —        —         X
> >   SLES12               X        X        X         X
> >
> > ¹ https://wiki.centos.org/SpecialInterestGroup/Storage
> > ² https://launchpad.net/~gluster
> > ³ https://build.opensuse.org/project/subprojects/home:kkeithleatredhat
> >
> > -- Kaleb
> >




Re: [Gluster-users] Production cluster planning

2016-09-30 Thread Gandalf Corvotempesta
2016-09-30 12:41 GMT+02:00 Lindsay Mathieson :
> You're missing what he said - *ZFS* will not be corrupted, but the data written
> could be in any state, in this case the gluster filesystem data and metadata.
> To have one node in a cluster out of sync without the cluster knowing
> would be very bad.

This is where gluster bitrot could help, by comparing files across the
whole cluster.
In a 3-node replica, if node1 is out of sync, then on the next scrub
gluster will be able to see that node2 and node3 are saying "1" while
node1 is saying "0", and thus should replicate from node2 and node3 to
node1.
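
For reference, bitrot detection is per volume and off by default; a rough sketch of switching it on (the volume name "myvol" is hypothetical, options as documented for the 3.7 series):

# Enable checksumming of files at rest and periodic scrubbing on the volume:
gluster volume bitrot myvol enable
gluster volume bitrot myvol scrub-throttle lazy
gluster volume bitrot myvol scrub-frequency weekly
# See what the scrubber has flagged so far:
gluster volume bitrot myvol scrub status

Note that, as far as I understand, the scrubber only flags bad copies against their stored checksums; the actual repair still goes through self-heal.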


Re: [Gluster-users] File Size and Brick Size

2016-09-30 Thread ML Wong
Hello Krutika, Ravishankar,
Unfortunately, I deleted my previous test instance in AWS (running on EBS
storage, on CentOS 7 with XFS).
I was using 3.7.15 for Gluster.  It's good to know they should be the same.
I have also quickly set up another set of VMs locally, using the same
version, 3.7.15. It did return the same checksum. I will see if I have
time and resources to set up a test again in AWS.

Thank you both for the prompt reply,
Melvin

On Tue, Sep 27, 2016 at 7:59 PM, Krutika Dhananjay 
wrote:

> Worked fine for me actually.
>
> # md5sum lastlog
> ab7557d582484a068c3478e342069326  lastlog
> # rsync -avH lastlog  /mnt/
> sending incremental file list
> lastlog
>
> sent 364,001,522 bytes  received 35 bytes  48,533,540.93 bytes/sec
> total size is 363,912,592  speedup is 1.00
> # cd /mnt
> # md5sum lastlog
> ab7557d582484a068c3478e342069326  lastlog
>
> -Krutika
>
>
> On Wed, Sep 28, 2016 at 8:21 AM, Krutika Dhananjay 
> wrote:
>
>> Hi,
>>
>> What version of gluster are you using?
>> Also, could you share your volume configuration (`gluster volume info`)?
>>
>> -Krutika
>>
>> On Wed, Sep 28, 2016 at 6:58 AM, Ravishankar N 
>> wrote:
>>
>>> On 09/28/2016 12:16 AM, ML Wong wrote:
>>>
>>> Hello Ravishankar,
>>> Thanks for introducing the sharding feature to me.
>>> It does seem to resolve the problem I was encountering earlier. But I
>>> have one question: do we expect the checksum of the file to be different if I
>>> copy from directory A to a shard-enabled volume?
>>>
>>>
>>> No, the checksums must match. Perhaps Krutika, who works on sharding
>>> (CC'ed), can help you figure out why that isn't the case here.
>>> -Ravi
>>>
>>>
>>> [x@ip-172-31-1-72 ~]$ sudo sha1sum /var/tmp/oVirt-Live-4.0.4.iso
>>> ea8472f6408163fa9a315d878c651a519fc3f438  /var/tmp/oVirt-Live-4.0.4.iso
>>> [x@ip-172-31-1-72 ~]$ sudo rsync -avH /var/tmp/oVirt-Live-4.0.4.iso
>>> /mnt/
>>> sending incremental file list
>>> oVirt-Live-4.0.4.iso
>>>
>>> sent 1373802342 bytes  received 31 bytes  30871963.44 bytes/sec
>>> total size is 1373634560  speedup is 1.00
>>> [x@ip-172-31-1-72 ~]$ sudo sha1sum /mnt/oVirt-Live-4.0.4.iso
>>> 14e9064857b40face90c91750d79c4d8665b9cab  /mnt/oVirt-Live-4.0.4.iso
>>>
>>> On Mon, Sep 26, 2016 at 6:42 PM, Ravishankar N 
>>> wrote:
>>>
 On 09/27/2016 05:15 AM, ML Wong wrote:

 Has anyone on the list tried copying a file which is bigger than
 the individual brick/replica size?
 Test Scenario:
 Distributed-Replicated volume, 2GB size, 2x2 = 4 bricks, 2 replicas
 Each replica has 1GB

 When I tried to copy a file to this volume, via both fuse and nfs mounts, I
 get an I/O error.
 Filesystem  Size  Used Avail Use% Mounted on
 /dev/mapper/vg0-brick1 1017M   33M  985M   4% /data/brick1
 /dev/mapper/vg0-brick2 1017M  109M  909M  11% /data/brick2
 lbre-cloud-dev1:/sharevol1  2.0G  141M  1.9G   7% /sharevol1

 [xx@cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso
 1.3G /var/tmp/ovirt-live-el7-3.6.2.iso

 [melvinw@lbre-cloud-dev1 ~]$ sudo cp /var/tmp/ovirt-live-el7-3.6.2.iso
 /sharevol1/
 cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output
 error
 cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’:
 Input/output error
 cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’:
 Input/output error


 Does the mount log give you more information? If it was a disk-full
 issue, the error you would get is ENOSPC and not EIO. This looks like
 something else.


 I know we have experts on this mailing list, and I assume this is a
 common situation that many Gluster users may have encountered.  The worry
 I have: what if you have a big VM file sitting on top of a Gluster volume ...?

 It is recommended to use sharding (http://blog.gluster.org/2015/
 12/introducing-shard-translator/) for VM workloads to alleviate these
 kinds of issues.
 -Ravi
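
For completeness, a rough sketch of how sharding is switched on - only do this on a fresh/empty volume, since enabling it does not re-shard files that already exist (the volume name is assumed from the mount in the test above):

# Enable the shard translator before the large files are created:
gluster volume set sharevol1 features.shard on
# Optional: shard size (the 3.7.x default is 4MB; VM images often use 64MB):
gluster volume set sharevol1 features.shard-block-size 64MB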

 Any insights will be much appreciated.





>>
>

Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Gandalf Corvotempesta
2016-09-29 11:58 GMT+02:00 Prashanth Pai :
> Yes, that can be done. Container ACLs allow you to do just that.

Ok, so I have to follow the linked guide.
How do I make this HA and load balanced? I didn't see any DB for storing
ACLs or similar.
If I run multiple gluster-swift instances on multiple nodes, I don't
think it will work.


Re: [Gluster-users] increase qcow2 image size

2016-09-30 Thread Kevin Lemonnier
On Thu, Sep 29, 2016 at 12:02:39AM +0200, Gandalf Corvotempesta wrote:
> I'm doing some tests with proxmox.
> I've created a test VM with 100GB qcow2 image stored on gluster with sharding
> All shards were created properly.
> 
> Then, I've increased the qcow2 image size from 100GB to 150GB.
> Proxmox did this well, but on gluster I'm still seeing the old qcow2
> image size (1600 shards, 64MB each).

We do this sort of thing a lot, with the same setup, and never had a problem.
I never checked the file size though, but I don't see any reason it wouldn't
work for you :). We are using 3.7.12 and 3.7.15 though, didn't try 3.8 yet.
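
If you want to cross-check it by hand, a rough sketch (the image path and brick path are hypothetical): qemu-img resize on a qcow2 only grows the image metadata, so new shards are not expected to appear until the guest actually writes into the added space.

# Grow the image through the fuse mount (equivalent to what Proxmox does):
qemu-img resize /mnt/pve/glustervol/images/100/vm-100-disk-1.qcow2 +50G
# On any brick, shards live under .shard and are named <gfid>.1, <gfid>.2, ...
# so counting entries for this image's gfid shows how much is really allocated:
ls /data/brick/.shard | grep -c '<gfid-of-the-image>'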

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111



Re: [Gluster-users] [Gluster-devel] GlusterFs upstream bugzilla components Fine graining

2016-09-30 Thread Prasanna Kalever
On Fri, Sep 30, 2016 at 3:16 PM, Niels de Vos  wrote:
> On Wed, Sep 28, 2016 at 10:09:34PM +0530, Prasanna Kalever wrote:
>> On Wed, Sep 28, 2016 at 11:24 AM, Muthu Vigneshwaran
>>  wrote:
>> >
>> > Hi,
>> >
>> > This an update to the previous mail about Fine graining of the
>> > GlusterFS upstream bugzilla components.
>> >
>> > Finally we have come out a new structure that would help in easy
>> > access of the bug for reporter and assignee too.
>> >
>> > In the new structure we have decided to remove components that are
>> > listed as below -
>> >
>> > - BDB
>> > - HDFS
>> > - booster
>> > - coreutils
>> > - gluster-hdoop
>> > - gluster-hadoop-install
>> > - libglusterfsclient
>> > - map
>> > - path-converter
>> > - protect
>> > - qemu-block
>>
>> Well, we are working on bringing the qemu-block xlator back to life again.
>> This is needed to achieve qcow2-based internal snapshots for/in the
>> gluster block store.
>
> We can keep this as a subcomponent for now.

What should be the main component in this case?

>
>> Take a look at  http://review.gluster.org/#/c/15588/  and dependent patches.
>
> Although we can take qemu-block back, we need a plan to address the
> copied qemu sources to handle the qcow2 format. Reducing the bundled
> sources (in contrib/) is important. Do you have a feature page in the
> glusterfs-specs repository that explains the usability of qemu-block? I
> have not seen a discussion on gluster-devel about this yet either,
> otherwise I would have replied there...

Yeah, I have refreshed some part of the code already (locally). The
current code is quite old (2013) and misses the compat 1.1 (qcow2v3)
features and many more. We are cross-checking the merits of using this
in the block store. Once we are in a state to say yes/continue with
this approach, I'm glad to take the initiative in refreshing the complete
source and flushing out the unused bundled code.

Well, I do not know about any qcow libraries other than [1], and I don't
think we have the choice of keeping this outside the repo tree?

And currently I don't have a feature page; I will update it after the summit
time frame, and will also make a note to post the complete details to the
devel mailing list.

>
> Nobody used this before, and I wonder if we should not design and
> develop a standard file-snapshot functionality that is not dependent on
> qcow2 format.

IMO, that will take another year or more to bring into block store use.


[1] https://github.com/libyal/libqcow

--
Prasanna

>
> Niels


Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Kaushal M
On Wed, Sep 28, 2016 at 10:38 PM, Ben Werthmann  wrote:
> These are interesting projects:
> https://github.com/prashanthpai/antbird
> https://github.com/kshlm/gogfapi
>
> Are there plans for an official go gfapi client library?

I hope to make the gogfapi package official someday. I've not
gotten around to it yet, and don't know when I can.

>
> On Wed, Sep 28, 2016 at 12:16 PM, John Mark Walker 
> wrote:
>>
>> No - gluster-swift adds the swift API on top of GlusterFS. It doesn't
>> require Swift itself.
>>
>> This project is 4 years old now - how do people not know this?
>>
>> -JM
>>
>>
>>
>> On Wed, Sep 28, 2016 at 11:28 AM, Gandalf Corvotempesta
>>  wrote:
>>>
>>> 2016-09-28 16:27 GMT+02:00 Prashanth Pai :
>>> > There's gluster-swift[1]. It works with both the Swift API and S3 API[2]
>>> > (using Swift).
>>> >
>>> > [1]: https://github.com/prashanthpai/docker-gluster-swift
>>> > [2]:
>>> > https://github.com/gluster/gluster-swift/blob/master/doc/markdown/s3.md
>>>
>>> I wasn't aware of S3 support on Swift.
>>> Anyway, Swift has some requirements like the whole keyring stack,
>>> proxies and so on from OpenStack; I prefer something smaller.


Re: [Gluster-users] Production cluster planning

2016-09-30 Thread Lindsay Mathieson

On 29/09/2016 4:32 AM, mabi wrote:
That's not correct. There is no risk of corruption using 
"sync=disabled". In the worst case you just end up with old data but 
no corruption. See the following comment from a master of ZFS (Aaron 
Toponce):


https://pthree.org/2013/01/25/glusterfs-linked-list-topology/#comment-227906


You're missing what he said - *ZFS* will not be corrupted, but the data 
written could be in any state, in this case the gluster filesystem data 
and metadata. To have one node in a cluster out of sync without the 
cluster knowing would be very bad.


--
Lindsay Mathieson



Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Prashanth Pai

- Original Message -
> From: "Gandalf Corvotempesta" 
> To: "Prashanth Pai" 
> Cc: "John Mark Walker" , "gluster-users" 
> 
> Sent: Thursday, 29 September, 2016 3:55:18 PM
> Subject: Re: [Gluster-users] Minio as object storage
> 
> 2016-09-29 12:22 GMT+02:00 Prashanth Pai :
> > In pure vanilla Swift, ACL information is stored in container DBs (sqlite)
> > In gluster-swift, ACLs are stored in the extended attribute of the
> > directory.
> 
> So, as long as the directory is stored on gluster, gluster makes this redundant
> 
> > This can be easily done using haproxy.
> 
> The only thing to do is spawn multiple VMs with gluster-swift
> pointing to the same gluster volume,
> nothing else, as the xattrs are stored on gluster and thus readable by all
> VMs, and HAproxy will balance the requests.
> 
> Right ? A sort of spawn :)
> 

Correct. gluster-swift itself is pretty much stateless here
and uses glusterfs for storing all user related data.

You can also choose between two auth systems - tempauth
and gswauth. Tempauth stores usernames and passwords (plaintext!)
in a conf file at /etc/swift, while gswauth uses a dedicated
glusterfs volume to store usernames and passwords (hashed).

The ACL information is stored in xattrs regardless of which of
the above two auth mechanisms you choose to use.

gluster-swift has its configuration files at /etc/swift
and also has ring files there. These ring files determine
which volumes are made available over the swift interface.

If you have 10 volumes, you can choose to make only a few
of them available over the object interface.
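
To make the load-balancing part concrete, here is a minimal haproxy sketch in front of two gluster-swift proxy nodes; the addresses, the port and the healthcheck middleware being enabled in the proxy pipeline are all assumptions:

# /etc/haproxy/haproxy.cfg (fragment) - round-robin over two gluster-swift nodes
frontend swift_api
    bind *:8080
    default_backend swift_proxies

backend swift_proxies
    balance roundrobin
    option httpchk GET /healthcheck
    server swift1 192.0.2.11:8080 check
    server swift2 192.0.2.12:8080 check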


Re: [Gluster-users] [ovirt-users] Ovirt/Gluster replica 3 distributed-replicated problem

2016-09-30 Thread Ravishankar N

On 09/29/2016 05:18 PM, Sahina Bose wrote:

Yes, this is a GlusterFS problem. Adding gluster users ML

On Thu, Sep 29, 2016 at 5:11 PM, Davide Ferrari > wrote:


Hello

maybe this is more glusterfs than ovirt related, but since oVirt
integrates Gluster management and I'm experiencing the problem in
an ovirt cluster, I'm writing here.

The problem is simple: I have a data domain mapped on a replica 3
arbiter 1 Gluster volume with 6 bricks, like this:

Status of volume: data_ssd
Gluster process TCP Port  RDMA Port  Online  Pid

--
Brick vm01.storage.billy:/gluster/ssd/data/
brick 49153 0  Y   19298
Brick vm02.storage.billy:/gluster/ssd/data/
brick 49153 0  Y   6146
Brick vm03.storage.billy:/gluster/ssd/data/
arbiter_brick 49153 0  Y   6552
Brick vm03.storage.billy:/gluster/ssd/data/
brick 49154 0  Y   6559
Brick vm04.storage.billy:/gluster/ssd/data/
brick 49152 0  Y   6077
Brick vm02.storage.billy:/gluster/ssd/data/
arbiter_brick 49154 0  Y   6153
Self-heal Daemon on localhost N/A   N/AY   30746
Self-heal Daemon on vm01.storage.billy N/A   N/A   
Y   196058
Self-heal Daemon on vm03.storage.billy N/A   N/A   
Y   23205
Self-heal Daemon on vm04.storage.billy N/A   N/A   
Y   8246



Now, I've put in maintenance the vm04 host, from ovirt, ticking
the "Stop gluster" checkbox, and Ovirt didn't complain about
anything. But when I tried to run a new VM it complained about
"storage I/O problem", while the storage data status was always UP.

Looking in the gluster logs I can see this:

[2016-09-29 11:01:01.556908] I
[glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-glusterfs: No change
in volfile, continuing
[2016-09-29 11:02:28.124151] E [MSGID: 108008]
[afr-read-txn.c:89:afr_read_txn_refresh_done]
0-data_ssd-replicate-1: Failing READ on gfid
bf5922b7-19f3-4ce3-98df-71e981ecca8d: split-brain observed.
[Input/output error]
[2016-09-29 11:02:28.126580] W [MSGID: 108008]
[afr-read-txn.c:244:afr_read_txn] 0-data_ssd-replicate-1:
Unreadable subvolume -1 found with event generation 6 for gfid
bf5922b7-19f3-4ce3-98df-71e981ecca8d. (Possible split-brain)
[2016-09-29 11:02:28.127374] E [MSGID: 108008]
[afr-read-txn.c:89:afr_read_txn_refresh_done]
0-data_ssd-replicate-1: Failing FGETXATTR on gfid
bf5922b7-19f3-4ce3-98df-71e981ecca8d: split-brain observed.
[Input/output error]
[2016-09-29 11:02:28.128130] W [MSGID: 108027]
[afr-common.c:2403:afr_discover_done] 0-data_ssd-replicate-1: no
read subvols for (null)
[2016-09-29 11:02:28.129890] W [fuse-bridge.c:2228:fuse_readv_cbk]
0-glusterfs-fuse: 8201: READ => -1
gfid=bf5922b7-19f3-4ce3-98df-71e981ecca8d fd=0x7f09b749d210
(Input/output error)
[2016-09-29 11:02:28.130824] E [MSGID: 108008]
[afr-read-txn.c:89:afr_read_txn_refresh_done]
0-data_ssd-replicate-1: Failing FSTAT on gfid
bf5922b7-19f3-4ce3-98df-71e981ecca8d: split-brain observed.
[Input/output error]



Does `gluster volume heal data_ssd info split-brain` report that the 
file is in split-brain, with vm04 still being down?
If yes, could you provide the extended attributes of this gfid from all 
3 bricks:
getfattr -d -m . -e hex 
/path/to/brick/bf/59/bf5922b7-19f3-4ce3-98df-71e981ecca8d


If no, then I'm guessing that it is not in actual split-brain (hence the 
'Possible split-brain' message). If the node you brought down contains 
the only good copy of the file (i.e. the other data brick and arbiter are 
up, and the arbiter 'blames' this other brick), all I/O is failed with 
EIO to prevent the file from getting into actual split-brain. The heals will 
happen when the good node comes up, and I/O should be allowed again in 
that case.
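
If it does turn out to be a genuine split-brain, 3.7.x can also resolve it from the CLI; a rough sketch (the file path is a placeholder, the volume and brick names are from this thread):

# List files currently in split-brain:
gluster volume heal data_ssd info split-brain
# Resolve a file by explicitly choosing which brick holds the good copy:
gluster volume heal data_ssd split-brain source-brick vm01.storage.billy:/gluster/ssd/data/brick /path/inside/volume/to/the/image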


-Ravi



[2016-09-29 11:02:28.133879] W [fuse-bridge.c:767:fuse_attr_cbk]
0-glusterfs-fuse: 8202: FSTAT()

/ba2bd397-9222-424d-aecc-eb652c0169d9/images/f02ac1ce-52cd-4b81-8b29-f8006d0469e0/ff4e49c6-3084-4234-80a1-18a67615c527
=> -1 (Input/output error)
The message "W [MSGID: 108008] [afr-read-txn.c:244:afr_read_txn]
0-data_ssd-replicate-1: Unreadable subvolume -1 found with event
generation 6 for gfid bf5922b7-19f3-4ce3-98df-71e981ecca8d.
(Possible split-brain)" repeated 11 times between [2016-09-29
11:02:28.126580] and [2016-09-29 11:02:28.517744]
[2016-09-29 11:02:28.518607] E [MSGID: 108008]
[afr-read-txn.c:89:afr_read_txn_refresh_done]
0-data_ssd-replicate-1: Failing STAT on gfid
bf5922b7-19f3-4ce3-98df-71e981ecca8d: split-brain observed.
[Input/output error]

Now, how is it possible to have a split brain if I stopped just

Re: [Gluster-users] Problem with add-brick

2016-09-30 Thread Dennis Michael
Are there any workarounds to this?  RDMA is configured on my servers.

Dennis

On Thu, Sep 29, 2016 at 7:19 AM, Atin Mukherjee  wrote:

> Dennis,
>
> Thanks for sharing the logs.
>
> It seems like a volume created with tcp,rdma transport fails to
> start (at least in my local set up). The issue here is that although the brick
> process comes up, glusterd receives a non-zero ret code from the runner
> interface which spawns the brick process(es).
>
> Raghavendra Talur/Rafi,
>
> Is this an intended behaviour if rdma device is not configured? Please
> chime in with your thoughts
>
>
> On Wed, Sep 28, 2016 at 10:22 AM, Atin Mukherjee 
> wrote:
>
>> Dennis,
>>
>> It seems that the add-brick has definitely failed and the entry is not
>> committed into the glusterd store. The volume status and volume info commands are
>> referring to the in-memory data for fs4 (which exists), but post a restart they
>> are no longer available. Could you run glusterd with debug log enabled
>> (systemctl stop glusterd; glusterd -LDEBUG) and provide us cmd_history.log,
>> the glusterd log along with the fs4 brick log files to further analyze the issue?
>> Regarding the missing RDMA ports for the fs2, fs3 bricks, can you cross-check if
>> the glusterfs-rdma package is installed on both the nodes?
>>
>> On Wed, Sep 28, 2016 at 7:14 AM, Ravishankar N 
>> wrote:
>>
>>> On 09/27/2016 10:29 PM, Dennis Michael wrote:
>>>
>>>
>>>
>>> [root@fs4 bricks]# gluster volume info
>>>
>>> Volume Name: cees-data
>>> Type: Distribute
>>> Volume ID: 27d2a59c-bdac-4f66-bcd8-e6124e53a4a2
>>> Status: Started
>>> Number of Bricks: 4
>>> Transport-type: tcp,rdma
>>> Bricks:
>>> Brick1: fs1:/data/brick
>>> Brick2: fs2:/data/brick
>>> Brick3: fs3:/data/brick
>>> Brick4: fs4:/data/brick
>>> Options Reconfigured:
>>> features.quota-deem-statfs: on
>>> features.inode-quota: on
>>> features.quota: on
>>> performance.readdir-ahead: on
>>> [root@fs4 bricks]# gluster volume status
>>> Status of volume: cees-data
>>> Gluster process TCP Port  RDMA Port  Online
>>>  Pid
>>> 
>>> --
>>> Brick fs1:/data/brick   49152 49153  Y
>>> 1878
>>> Brick fs2:/data/brick   49152 0  Y
>>> 1707
>>> Brick fs3:/data/brick   49152 0  Y
>>> 4696
>>> Brick fs4:/data/brick   N/A   N/AN
>>> N/A
>>> NFS Server on localhost 2049  0  Y
>>> 13808
>>> Quota Daemon on localhost   N/A   N/AY
>>> 13813
>>> NFS Server on fs1   2049  0  Y
>>> 6722
>>> Quota Daemon on fs1 N/A   N/AY
>>> 6730
>>> NFS Server on fs3   2049  0  Y
>>> 12553
>>> Quota Daemon on fs3 N/A   N/AY
>>> 12561
>>> NFS Server on fs2   2049  0  Y
>>> 11702
>>> Quota Daemon on fs2 N/A   N/AY
>>> 11710
>>>
>>> Task Status of Volume cees-data
>>> 
>>> --
>>> There are no active volume tasks
>>>
>>> [root@fs4 bricks]# ps auxww | grep gluster
>>> root 13791  0.0  0.0 701472 19768 ?Ssl  09:06   0:00
>>> /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
>>> root 13808  0.0  0.0 560236 41420 ?Ssl  09:07   0:00
>>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
>>> /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
>>> /var/run/gluster/01c61523374369658a62b75c582b5ac2.socket
>>> root 13813  0.0  0.0 443164 17908 ?Ssl  09:07   0:00
>>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p
>>> /var/lib/glusterd/quotad/run/quotad.pid -l
>>> /var/log/glusterfs/quotad.log -S 
>>> /var/run/gluster/3753def90f5c34f656513dba6a544f7d.socket
>>> --xlator-option *replicate*.data-self-heal=off --xlator-option
>>> *replicate*.metadata-self-heal=off --xlator-option
>>> *replicate*.entry-self-heal=off
>>> root 13874  0.0  0.0 1200472 31700 ?   Ssl  09:16   0:00
>>> /usr/sbin/glusterfsd -s fs4 --volfile-id cees-data.fs4.data-brick -p
>>> /var/lib/glusterd/vols/cees-data/run/fs4-data-brick.pid -S
>>> /var/run/gluster/5203ab38be21e1d37c04f6bdfee77d4a.socket --brick-name
>>> /data/brick -l /var/log/glusterfs/bricks/data-brick.log --xlator-option
>>> *-posix.glusterd-uuid=f04b231e-63f8-4374-91ae-17c0c623f165 --brick-port
>>> 49152 49153 --xlator-option 
>>> cees-data-server.transport.rdma.listen-port=49153
>>> --xlator-option cees-data-server.listen-port=49152
>>> --volfile-server-transport=socket,rdma
>>> root 13941  0.0  0.0 112648   976 pts/0S+   09:50   0:00 grep
>>> --color=auto gluster
>>>
>>> [root@fs4 bricks]# systemctl restart glusterfsd 

Re: [Gluster-users] Problem with add-brick

2016-09-30 Thread Mohammed Rafi K C
It seems like an actual bug; if you can file a bug in bugzilla, that
would be great.


At least I don't see a workaround for this issue; maybe until the next
update is available with a fix, you can use either an rdma-only or a tcp-only
volume.

Let me know whether this is acceptable; if so I can give you the steps
to change the transport of an existing volume.
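
For reference, my understanding of that procedure (volume name taken from this thread; the volume has to be stopped and clients unmounted first):

# Stop the volume, switch it to a single transport, then start it again:
gluster volume stop cees-data
gluster volume set cees-data config.transport tcp
gluster volume start cees-data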


Regards

Rafi KC


On 09/30/2016 10:35 AM, Mohammed Rafi K C wrote:
>
>
>
> On 09/30/2016 02:35 AM, Dennis Michael wrote:
>>
>> Are there any workarounds to this?  RDMA is configured on my servers.
>
>
> By this, I assume your rdma setup/configuration over IPoIB is working
> fine.
>
> Can you tell us what machine you are using and whether SELinux is
> configured on the machine or not.
>
> Also I couldn't see any logs attached here.
>
> Rafi KC
>
>
>>
>> Dennis
>>
>> On Thu, Sep 29, 2016 at 7:19 AM, Atin Mukherjee > > wrote:
>>
>> Dennis,
>>
>> Thanks for sharing the logs.
>>
>> It seems like a volume created with tcp,rdma transport
>> fails to start (at least in my local set up). The issue here is
>> that although the brick process comes up, glusterd receives a non-zero
>> ret code from the runner interface which spawns the brick
>> process(es).
>>
>> Raghavendra Talur/Rafi,
>>
>> Is this an intended behaviour if rdma device is not configured?
>> Please chime in with your thoughts
>>
>>
>> On Wed, Sep 28, 2016 at 10:22 AM, Atin Mukherjee
>>  wrote:
>>
>> Dennis,
>>
>> It seems that the add-brick has definitely failed and the
>> entry is not committed into the glusterd store. The volume status and
>> volume info commands are referring to the in-memory data for fs4
>> (which exists), but post a restart they are no longer
>> available. Could you run glusterd with debug log enabled
>> (systemctl stop glusterd; glusterd -LDEBUG) and provide us
>> cmd_history.log, the glusterd log along with the fs4 brick log files
>> to further analyze the issue? Regarding the missing RDMA
>> ports for the fs2, fs3 bricks, can you cross-check if the
>> glusterfs-rdma package is installed on both the nodes?
>>
>> On Wed, Sep 28, 2016 at 7:14 AM, Ravishankar N
>>  wrote:
>>
>> On 09/27/2016 10:29 PM, Dennis Michael wrote:
>>>
>>>
>>> [root@fs4 bricks]# gluster volume info
>>>  
>>> Volume Name: cees-data
>>> Type: Distribute
>>> Volume ID: 27d2a59c-bdac-4f66-bcd8-e6124e53a4a2
>>> Status: Started
>>> Number of Bricks: 4
>>> Transport-type: tcp,rdma
>>> Bricks:
>>> Brick1: fs1:/data/brick
>>> Brick2: fs2:/data/brick
>>> Brick3: fs3:/data/brick
>>> Brick4: fs4:/data/brick
>>> Options Reconfigured:
>>> features.quota-deem-statfs: on
>>> features.inode-quota: on
>>> features.quota: on
>>> performance.readdir-ahead: on
>>> [root@fs4 bricks]# gluster volume status
>>> Status of volume: cees-data
>>> Gluster process TCP Port
>>>  RDMA Port  Online  Pid
>>> 
>>> --
>>> Brick fs1:/data/brick   49152
>>> 49153  Y   1878 
>>> Brick fs2:/data/brick   49152 0
>>>  Y   1707 
>>> Brick fs3:/data/brick   49152 0
>>>  Y   4696 
>>> Brick fs4:/data/brick   N/A  
>>> N/AN   N/A  
>>> NFS Server on localhost 2049  0
>>>  Y   13808
>>> Quota Daemon on localhost   N/A  
>>> N/AY   13813
>>> NFS Server on fs1   2049  0
>>>  Y   6722 
>>> Quota Daemon on fs1 N/A  
>>> N/AY   6730 
>>> NFS Server on fs3   2049  0
>>>  Y   12553
>>> Quota Daemon on fs3 N/A  
>>> N/AY   12561
>>> NFS Server on fs2   2049  0
>>>  Y   11702
>>> Quota Daemon on fs2 N/A  
>>> N/AY   11710
>>>  
>>> Task Status of Volume cees-data
>>> 
>>> --
>>> There are no active volume 

Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Prashanth Pai

- Original Message -
> From: "Gandalf Corvotempesta" 
> To: "Prashanth Pai" 
> Cc: "John Mark Walker" , "gluster-users" 
> 
> Sent: Thursday, 29 September, 2016 3:42:06 PM
> Subject: Re: [Gluster-users] Minio as object storage
> 
> 2016-09-29 11:58 GMT+02:00 Prashanth Pai :
> > Yes, that can be done. Container ACLs allows you to just that.
> 
> Ok, so I have to follow the linked guide.
> How to make this HA and load balanced? I don't saw any DB for storing
> ACL or similiar.

In pure vanilla Swift, ACL information is stored in container DBs (sqlite)
In gluster-swift, ACLs are stored in the extended attribute of the directory. 

> If I run multiple gluster-swift instances on multiple nodes, I don't
> think it will work
> 

This can be easily done using haproxy.
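
As a quick way to see where that ACL metadata ends up, it can be read straight off a brick; a rough sketch (the brick path and container name are hypothetical, and the exact xattr key can vary between gluster-swift versions - a container maps to a top-level directory of the volume):

# Container metadata, including the X-Container-Read/Write ACLs, is kept in an
# extended attribute (typically user.swift.metadata) on the container directory:
getfattr -d -m . -e hex /data/brick/mycontainer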


Re: [Gluster-users] [Gluster-devel] GlusterFs upstream bugzilla components Fine graining

2016-09-30 Thread Niels de Vos
On Wed, Sep 28, 2016 at 10:09:34PM +0530, Prasanna Kalever wrote:
> On Wed, Sep 28, 2016 at 11:24 AM, Muthu Vigneshwaran
>  wrote:
> >
> > Hi,
> >
> > This an update to the previous mail about Fine graining of the
> > GlusterFS upstream bugzilla components.
> >
> > Finally we have come out a new structure that would help in easy
> > access of the bug for reporter and assignee too.
> >
> > In the new structure we have decided to remove components that are
> > listed as below -
> >
> > - BDB
> > - HDFS
> > - booster
> > - coreutils
> > - gluster-hdoop
> > - gluster-hadoop-install
> > - libglusterfsclient
> > - map
> > - path-converter
> > - protect
> > - qemu-block
> 
> Well, we are working on bringing the qemu-block xlator back to life again.
> This is needed to achieve qcow2-based internal snapshots for/in the
> gluster block store.

We can keep this as a subcomponent for now.

> Take a look at  http://review.gluster.org/#/c/15588/  and dependent patches.

Although we can take qemu-block back, we need a plan to address the
copied qemu sources to handle the qcow2 format. Reducing the bundled
sources (in contrib/) is important. Do you have a feature page in the
glusterfs-specs repository that explains the usability of qemu-block? I
have not seen a discussion on gluster-devel about this yet either,
otherwise I would have replied there...

Nobody used this before, and I wonder if we should not design and
develop a standard file-snapshot functionality that is not dependent on
qcow2 format.

Niels



Re: [Gluster-users] [Gluster-devel] Community Gluster Package Matrix, updated

2016-09-30 Thread Vijay Bellur
Thank you Kaleb for putting this together. I think it would also be useful
to list where our official container images would be present too.

Should we make this content persistent somewhere on our website and have a
link from the release notes? The complaints that we encountered after
releasing 3.8 (mostly on CentOS) makes me wonder about that.

Regards,
Vijay



On Wed, Sep 28, 2016 at 10:29 AM, Kaleb S. KEITHLEY 
wrote:

> Hi,
>
> With the imminent release of 3.9 in a week or two, here's a summary of the
> Community packages for various Linux distributions that are tentatively
> planned going forward.
>
> Note that 3.6 will reach end-of-life (EOL) when 3.9 is released, and no
> further releases will be made on the release-3.6 branch.
>
> N.B. Fedora 23 and Ubuntu Wily are nearing EOL.
>
> (I haven't included NetBSD or FreeBSD here, only because they're not Linux
> and we have little control over them.)
>
> An X means packages are planned to be in the repository.
> A — means we have no plans to build the version for the repository.
> d.g.o means packages will (also) be provided on
> https://download.gluster.org
> DNF/YUM means the packages are included in the Fedora updates or
> updates-testing repos.
>
>
>
>                        3.9      3.8      3.7       3.6
> CentOS Storage SIG¹
>   el5                  —        —        d.g.o     d.g.o
>   el6                  X        X        X, d.g.o  X, d.g.o
>   el7                  X        X        X, d.g.o  X, d.g.o
>
> Fedora
>   F23                  —        d.g.o    DNF/YUM   d.g.o
>   F24                  d.g.o    DNF/YUM  d.g.o     d.g.o
>   F25                  DNF/YUM  d.g.o    d.g.o     d.g.o
>   F26                  DNF/YUM  d.g.o    d.g.o     d.g.o
>
> Ubuntu Launchpad²
>   Precise (12.04 LTS)  —        —        X         X
>   Trusty (14.04 LTS)   —        X        X         X
>   Wily (15.10)         —        X        X         X
>   Xenial (16.04 LTS)   X        X        X         X
>   Yakkety (16.10)      X        X        —         —
>
> Debian
>   Wheezy (7)           —        —        d.g.o     d.g.o
>   Jessie (8)           d.g.o    d.g.o    d.g.o     d.g.o
>   Stretch (9)          d.g.o    d.g.o    d.g.o     d.g.o
>
> SuSE Build System³
>   OpenSuSE13           X        X        X         X
>   Leap 42.X            X        X        X         —
>   SLES11               —        —        —         X
>   SLES12               X        X        X         X
>
> ¹ https://wiki.centos.org/SpecialInterestGroup/Storage
> ² https://launchpad.net/~gluster
> ³ https://build.opensuse.org/project/subprojects/home:kkeithleatredhat
>
> -- Kaleb
>

Re: [Gluster-users] Problem with add-brick

2016-09-30 Thread Atin Mukherjee
Dennis,

Thanks for sharing the logs.

It seems like a volume created with tcp,rdma transport fails to
start (at least in my local set up). The issue here is that although the brick
process comes up, glusterd receives a non-zero ret code from the runner
interface which spawns the brick process(es).

Raghavendra Talur/Rafi,

Is this an intended behaviour if rdma device is not configured? Please
chime in with your thoughts


On Wed, Sep 28, 2016 at 10:22 AM, Atin Mukherjee 
wrote:

> Dennis,
>
> It seems that the add-brick has definitely failed and the entry is not
> committed into the glusterd store. The volume status and volume info commands are
> referring to the in-memory data for fs4 (which exists), but post a restart they
> are no longer available. Could you run glusterd with debug log enabled
> (systemctl stop glusterd; glusterd -LDEBUG) and provide us cmd_history.log,
> the glusterd log along with the fs4 brick log files to further analyze the issue?
> Regarding the missing RDMA ports for the fs2, fs3 bricks, can you cross-check if
> the glusterfs-rdma package is installed on both the nodes?
>
> On Wed, Sep 28, 2016 at 7:14 AM, Ravishankar N 
> wrote:
>
>> On 09/27/2016 10:29 PM, Dennis Michael wrote:
>>
>>
>>
>> [root@fs4 bricks]# gluster volume info
>>
>> Volume Name: cees-data
>> Type: Distribute
>> Volume ID: 27d2a59c-bdac-4f66-bcd8-e6124e53a4a2
>> Status: Started
>> Number of Bricks: 4
>> Transport-type: tcp,rdma
>> Bricks:
>> Brick1: fs1:/data/brick
>> Brick2: fs2:/data/brick
>> Brick3: fs3:/data/brick
>> Brick4: fs4:/data/brick
>> Options Reconfigured:
>> features.quota-deem-statfs: on
>> features.inode-quota: on
>> features.quota: on
>> performance.readdir-ahead: on
>> [root@fs4 bricks]# gluster volume status
>> Status of volume: cees-data
>> Gluster process TCP Port  RDMA Port  Online
>>  Pid
>> 
>> --
>> Brick fs1:/data/brick   49152 49153  Y
>> 1878
>> Brick fs2:/data/brick   49152 0  Y
>> 1707
>> Brick fs3:/data/brick   49152 0  Y
>> 4696
>> Brick fs4:/data/brick   N/A   N/AN
>> N/A
>> NFS Server on localhost 2049  0  Y
>> 13808
>> Quota Daemon on localhost   N/A   N/AY
>> 13813
>> NFS Server on fs1   2049  0  Y
>> 6722
>> Quota Daemon on fs1 N/A   N/AY
>> 6730
>> NFS Server on fs3   2049  0  Y
>> 12553
>> Quota Daemon on fs3 N/A   N/AY
>> 12561
>> NFS Server on fs2   2049  0  Y
>> 11702
>> Quota Daemon on fs2 N/A   N/AY
>> 11710
>>
>> Task Status of Volume cees-data
>> 
>> --
>> There are no active volume tasks
>>
>> [root@fs4 bricks]# ps auxww | grep gluster
>> root 13791  0.0  0.0 701472 19768 ?Ssl  09:06   0:00
>> /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
>> root 13808  0.0  0.0 560236 41420 ?Ssl  09:07   0:00
>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
>> /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
>> /var/run/gluster/01c61523374369658a62b75c582b5ac2.socket
>> root 13813  0.0  0.0 443164 17908 ?Ssl  09:07   0:00
>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p
>> /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log
>> -S /var/run/gluster/3753def90f5c34f656513dba6a544f7d.socket
>> --xlator-option *replicate*.data-self-heal=off --xlator-option
>> *replicate*.metadata-self-heal=off --xlator-option
>> *replicate*.entry-self-heal=off
>> root 13874  0.0  0.0 1200472 31700 ?   Ssl  09:16   0:00
>> /usr/sbin/glusterfsd -s fs4 --volfile-id cees-data.fs4.data-brick -p
>> /var/lib/glusterd/vols/cees-data/run/fs4-data-brick.pid -S
>> /var/run/gluster/5203ab38be21e1d37c04f6bdfee77d4a.socket --brick-name
>> /data/brick -l /var/log/glusterfs/bricks/data-brick.log --xlator-option
>> *-posix.glusterd-uuid=f04b231e-63f8-4374-91ae-17c0c623f165 --brick-port
>> 49152 49153 --xlator-option cees-data-server.transport.rdma.listen-port=49153
>> --xlator-option cees-data-server.listen-port=49152
>> --volfile-server-transport=socket,rdma
>> root 13941  0.0  0.0 112648   976 pts/0S+   09:50   0:00 grep
>> --color=auto gluster
>>
>> [root@fs4 bricks]# systemctl restart glusterfsd glusterd
>>
>> [root@fs4 bricks]# ps auxww | grep gluster
>> root 13808  0.0  0.0 560236 41420 ?Ssl  09:07   0:00
>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
>> /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
>> 

Re: [Gluster-users] Problem with add-brick

2016-09-30 Thread Mohammed Rafi K C


On 09/30/2016 02:35 AM, Dennis Michael wrote:
>
> Are there any workarounds to this?  RDMA is configured on my servers.


By this, I assume your rdma setup/configuration over IPoIB is working fine.

Can you tell us what machine you are using and whether SELinux is
configured on the machine or not.

Also I couldn't see any logs attached here.

Rafi KC


>
> Dennis
>
> On Thu, Sep 29, 2016 at 7:19 AM, Atin Mukherjee  > wrote:
>
> Dennis,
>
> Thanks for sharing the logs.
>
> It seems like a volume created with tcp,rdma transport
> fails to start (at least in my local set up). The issue here is
> that although the brick process comes up, glusterd receives a non-zero
> ret code from the runner interface which spawns the brick
> process(es).
>
> Raghavendra Talur/Rafi,
>
> Is this an intended behaviour if rdma device is not configured?
> Please chime in with your thoughts
>
>
> On Wed, Sep 28, 2016 at 10:22 AM, Atin Mukherjee
> > wrote:
>
> Dennis,
>
> It seems that the add-brick has definitely failed and the
> entry is not committed into the glusterd store. The volume status and
> volume info commands are referring to the in-memory data for fs4
> (which exists), but post a restart they are no longer available.
> Could you run glusterd with debug log enabled (systemctl stop
> glusterd; glusterd -LDEBUG) and provide us cmd_history.log,
> the glusterd log along with the fs4 brick log files to further analyze
> the issue? Regarding the missing RDMA ports for the fs2, fs3 bricks,
> can you cross-check if the glusterfs-rdma package is installed on
> both the nodes?
>
> On Wed, Sep 28, 2016 at 7:14 AM, Ravishankar N
> > wrote:
>
> On 09/27/2016 10:29 PM, Dennis Michael wrote:
>>
>>
>> [root@fs4 bricks]# gluster volume info
>>  
>> Volume Name: cees-data
>> Type: Distribute
>> Volume ID: 27d2a59c-bdac-4f66-bcd8-e6124e53a4a2
>> Status: Started
>> Number of Bricks: 4
>> Transport-type: tcp,rdma
>> Bricks:
>> Brick1: fs1:/data/brick
>> Brick2: fs2:/data/brick
>> Brick3: fs3:/data/brick
>> Brick4: fs4:/data/brick
>> Options Reconfigured:
>> features.quota-deem-statfs: on
>> features.inode-quota: on
>> features.quota: on
>> performance.readdir-ahead: on
>> [root@fs4 bricks]# gluster volume status
>> Status of volume: cees-data
>> Gluster process TCP Port
>>  RDMA Port  Online  Pid
>> 
>> --
>> Brick fs1:/data/brick   49152
>> 49153  Y   1878 
>> Brick fs2:/data/brick   49152 0  
>>Y   1707 
>> Brick fs3:/data/brick   49152 0  
>>Y   4696 
>> Brick fs4:/data/brick   N/A   N/A
>>N   N/A  
>> NFS Server on localhost 2049  0  
>>Y   13808
>> Quota Daemon on localhost   N/A   N/A
>>Y   13813
>> NFS Server on fs1   2049  0  
>>Y   6722 
>> Quota Daemon on fs1 N/A   N/A
>>Y   6730 
>> NFS Server on fs3   2049  0  
>>Y   12553
>> Quota Daemon on fs3 N/A   N/A
>>Y   12561
>> NFS Server on fs2   2049  0  
>>Y   11702
>> Quota Daemon on fs2 N/A   N/A
>>Y   11710
>>  
>> Task Status of Volume cees-data
>> 
>> --
>> There are no active volume tasks
>>  
>> [root@fs4 bricks]# ps auxww | grep gluster
>> root 13791  0.0  0.0 701472 19768 ?Ssl  09:06
>>   0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid
>> --log-level INFO
>> root 13808  0.0  0.0 560236 41420 ?Ssl  09:07
>>   0:00 /usr/sbin/glusterfs -s localhost --volfile-id
>> gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l
>> 

Re: [Gluster-users] [ovirt-users] Ovirt/Gluster replica 3 distributed-replicated problem

2016-09-30 Thread Sahina Bose
Yes, this is a GlusterFS problem. Adding gluster users ML

On Thu, Sep 29, 2016 at 5:11 PM, Davide Ferrari  wrote:

> Hello
>
> maybe this is more glusterfs than ovirt related, but since oVirt integrates
> Gluster management and I'm experiencing the problem in an ovirt cluster,
> I'm writing here.
>
> The problem is simple: I have a data domain mapped on a replica 3
> arbiter 1 Gluster volume with 6 bricks, like this:
>
> Status of volume: data_ssd
> Gluster process TCP Port  RDMA Port  Online
> Pid
> 
> --
> Brick vm01.storage.billy:/gluster/ssd/data/
> brick   49153 0  Y
> 19298
> Brick vm02.storage.billy:/gluster/ssd/data/
> brick   49153 0  Y
> 6146
> Brick vm03.storage.billy:/gluster/ssd/data/
> arbiter_brick   49153 0  Y
> 6552
> Brick vm03.storage.billy:/gluster/ssd/data/
> brick   49154 0  Y
> 6559
> Brick vm04.storage.billy:/gluster/ssd/data/
> brick   49152 0  Y
> 6077
> Brick vm02.storage.billy:/gluster/ssd/data/
> arbiter_brick   49154 0  Y
> 6153
> Self-heal Daemon on localhost   N/A   N/AY
> 30746
> Self-heal Daemon on vm01.storage.billy  N/A   N/AY
> 196058
> Self-heal Daemon on vm03.storage.billy  N/A   N/AY
> 23205
> Self-heal Daemon on vm04.storage.billy  N/A   N/AY
> 8246
>
>
> Now, I've put in maintenance the vm04 host, from ovirt, ticking the "Stop
> gluster" checkbox, and Ovirt didn't complain about anything. But when I
> tried to run a new VM it complained about "storage I/O problem", while the
> storage data status was always UP.
>
> Looking in the gluster logs I can see this:
>
> [2016-09-29 11:01:01.556908] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile, continuing
> [2016-09-29 11:02:28.124151] E [MSGID: 108008] 
> [afr-read-txn.c:89:afr_read_txn_refresh_done]
> 0-data_ssd-replicate-1: Failing READ on gfid 
> bf5922b7-19f3-4ce3-98df-71e981ecca8d:
> split-brain observed. [Input/output error]
> [2016-09-29 11:02:28.126580] W [MSGID: 108008]
> [afr-read-txn.c:244:afr_read_txn] 0-data_ssd-replicate-1: Unreadable
> subvolume -1 found with event generation 6 for gfid 
> bf5922b7-19f3-4ce3-98df-71e981ecca8d.
> (Possible split-brain)
> [2016-09-29 11:02:28.127374] E [MSGID: 108008] 
> [afr-read-txn.c:89:afr_read_txn_refresh_done]
> 0-data_ssd-replicate-1: Failing FGETXATTR on gfid 
> bf5922b7-19f3-4ce3-98df-71e981ecca8d:
> split-brain observed. [Input/output error]
> [2016-09-29 11:02:28.128130] W [MSGID: 108027] 
> [afr-common.c:2403:afr_discover_done]
> 0-data_ssd-replicate-1: no read subvols for (null)
> [2016-09-29 11:02:28.129890] W [fuse-bridge.c:2228:fuse_readv_cbk]
> 0-glusterfs-fuse: 8201: READ => -1 gfid=bf5922b7-19f3-4ce3-98df-71e981ecca8d
> fd=0x7f09b749d210 (Input/output error)
> [2016-09-29 11:02:28.130824] E [MSGID: 108008] 
> [afr-read-txn.c:89:afr_read_txn_refresh_done]
> 0-data_ssd-replicate-1: Failing FSTAT on gfid 
> bf5922b7-19f3-4ce3-98df-71e981ecca8d:
> split-brain observed. [Input/output error]
> [2016-09-29 11:02:28.133879] W [fuse-bridge.c:767:fuse_attr_cbk]
> 0-glusterfs-fuse: 8202: FSTAT() /ba2bd397-9222-424d-aecc-
> eb652c0169d9/images/f02ac1ce-52cd-4b81-8b29-f8006d0469e0/
> ff4e49c6-3084-4234-80a1-18a67615c527 => -1 (Input/output error)
> The message "W [MSGID: 108008] [afr-read-txn.c:244:afr_read_txn]
> 0-data_ssd-replicate-1: Unreadable subvolume -1 found with event generation
> 6 for gfid bf5922b7-19f3-4ce3-98df-71e981ecca8d. (Possible split-brain)"
> repeated 11 times between [2016-09-29 11:02:28.126580] and [2016-09-29
> 11:02:28.517744]
> [2016-09-29 11:02:28.518607] E [MSGID: 108008] 
> [afr-read-txn.c:89:afr_read_txn_refresh_done]
> 0-data_ssd-replicate-1: Failing STAT on gfid 
> bf5922b7-19f3-4ce3-98df-71e981ecca8d:
> split-brain observed. [Input/output error]
>
> Now, how is it possible to have a split brain if I stopped just ONE server
> which had just ONE of six bricks, and it was cleanly shut down with
> maintenance mode from ovirt?
>
> I created the volume originally this way:
> # gluster volume create data_ssd replica 3 arbiter 1
> vm01.storage.billy:/gluster/ssd/data/brick 
> vm02.storage.billy:/gluster/ssd/data/brick
> vm03.storage.billy:/gluster/ssd/data/arbiter_brick
> vm03.storage.billy:/gluster/ssd/data/brick 
> vm04.storage.billy:/gluster/ssd/data/brick
> vm02.storage.billy:/gluster/ssd/data/arbiter_brick
> # gluster volume set data_ssd group virt
> # gluster volume set data_ssd storage.owner-uid 36 && gluster volume set
> data_ssd storage.owner-gid 36
> # gluster volume start data_ssd
>
>
> --
> Davide Ferrari
> Senior Systems Engineer
>
> 

Re: [Gluster-users] Production cluster planning

2016-09-30 Thread Gandalf Corvotempesta
On 30 Sep 2016 at 11:35, "mabi"  wrote:
>
> That's not correct. There is no risk of corruption using "sync=disabled".
In the worst case you just end up with old data but no corruption. See the
following comment from a master of ZFS (Aaron Toponce):
>
>
https://pthree.org/2013/01/25/glusterfs-linked-list-topology/#comment-227906
>
> Btw: I have an enterprise SSD for my ZFS SLOG but in the case of GlusterFS I
don't see much improvement. The real performance improvement comes from
disabling ZFS synchronous writes. I do that for all my ZFS pools/partitions
which have GlusterFS on top.

This seems logical.
Did you measure the performance gain with sync disabled?

Which configuration do you use with gluster?  ZFS with raidz2 and a SLOG on
SSD? Any L2ARC?

I was thinking about creating one or more raidz2 vdevs to use as bricks, with 2
SSDs. One small partition on these SSDs would be used as a mirrored SLOG and
the other two would be used as standalone L2ARC cache. Will this be worth the
use of SSDs, or would it be totally useless with gluster?
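
For what it's worth, the layout described above would look roughly like this in zpool terms (a sketch only - the pool name and device names are hypothetical):

# 6-disk raidz2 data vdev, mirrored SLOG on small SSD partitions,
# and the remaining SSD partitions as independent L2ARC cache devices:
zpool create tank \
    raidz2 sda sdb sdc sdd sde sdf \
    log mirror sdg1 sdh1 \
    cache sdg2 sdh2

Keep in mind that the SLOG only sees synchronous writes (so it does nothing with sync=disabled), and L2ARC mostly pays off for read-heavy working sets larger than RAM - worth measuring under a gluster workload before committing.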

I don't know whether to use gluster hot tiering or let ZFS manage everything.

As a suggestion for gluster developers: if ZFS is considered stable it could
be used as the default (replacing XFS), and many features that ZFS already has
could be removed from gluster (like bitrot), keeping gluster smaller and
faster.

Re: [Gluster-users] [ovirt-users] Ovirt/Gluster replica 3 distributed-replicated problem

2016-09-30 Thread Ravishankar N

On 09/29/2016 08:03 PM, Davide Ferrari wrote:
It's strange, I've tried to trigger the error again by putting vm04 in 
maintenance and stopping the gluster service (from the ovirt gui) and now 
the VM starts correctly. Maybe the arbiter indeed blamed the brick 
that was still up before, but how is that possible?


A write from the client on that file (vm image) could have succeeded 
only on vm04 even before you brought it down.


The only (maybe big) difference with the previous, erroneous 
situation is that before, I did maintenance (+ reboot) of 3 of my 4 
hosts; maybe I should have left more time between one reboot and another?


If you did not do anything from the previous run other than to bring the 
node up and things worked, then the file is not in split-brain. Split-brained 
files need to be resolved before they can be accessed again, 
which apparently did not happen in your case.


-Ravi


2016-09-29 14:16 GMT+02:00 Ravishankar N >:


On 09/29/2016 05:18 PM, Sahina Bose wrote:

Yes, this is a GlusterFS problem. Adding gluster users ML

On Thu, Sep 29, 2016 at 5:11 PM, Davide Ferrari
> wrote:

Hello

maybe this is more glusterfs than ovirt related, but since
oVirt integrates Gluster management and I'm experiencing the
problem in an ovirt cluster, I'm writing here.

The problem is simple: I have a data domain mapped on a
replica 3 arbiter 1 Gluster volume with 6 bricks, like this:

Status of volume: data_ssd
Gluster process                                           TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick vm01.storage.billy:/gluster/ssd/data/brick          49153     0          Y       19298
Brick vm02.storage.billy:/gluster/ssd/data/brick          49153     0          Y       6146
Brick vm03.storage.billy:/gluster/ssd/data/arbiter_brick  49153     0          Y       6552
Brick vm03.storage.billy:/gluster/ssd/data/brick          49154     0          Y       6559
Brick vm04.storage.billy:/gluster/ssd/data/brick          49152     0          Y       6077
Brick vm02.storage.billy:/gluster/ssd/data/arbiter_brick  49154     0          Y       6153
Self-heal Daemon on localhost                             N/A       N/A        Y       30746
Self-heal Daemon on vm01.storage.billy                    N/A       N/A        Y       196058
Self-heal Daemon on vm03.storage.billy                    N/A       N/A        Y       23205
Self-heal Daemon on vm04.storage.billy                    N/A       N/A        Y       8246



Now, I've put the vm04 host into maintenance from oVirt,
ticking the "Stop gluster" checkbox, and oVirt didn't
complain about anything. But when I tried to run a new VM it
complained about a "storage I/O problem", while the data
storage domain status was always UP.

Looking in the gluster logs I can see this:

[2016-09-29 11:01:01.556908] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2016-09-29 11:02:28.124151] E [MSGID: 108008] [afr-read-txn.c:89:afr_read_txn_refresh_done] 0-data_ssd-replicate-1: Failing READ on gfid bf5922b7-19f3-4ce3-98df-71e981ecca8d: split-brain observed. [Input/output error]
[2016-09-29 11:02:28.126580] W [MSGID: 108008] [afr-read-txn.c:244:afr_read_txn] 0-data_ssd-replicate-1: Unreadable subvolume -1 found with event generation 6 for gfid bf5922b7-19f3-4ce3-98df-71e981ecca8d. (Possible split-brain)
[2016-09-29 11:02:28.127374] E [MSGID: 108008] [afr-read-txn.c:89:afr_read_txn_refresh_done] 0-data_ssd-replicate-1: Failing FGETXATTR on gfid bf5922b7-19f3-4ce3-98df-71e981ecca8d: split-brain observed. [Input/output error]
[2016-09-29 11:02:28.128130] W [MSGID: 108027] [afr-common.c:2403:afr_discover_done] 0-data_ssd-replicate-1: no read subvols for (null)
[2016-09-29 11:02:28.129890] W [fuse-bridge.c:2228:fuse_readv_cbk] 0-glusterfs-fuse: 8201: READ => -1 gfid=bf5922b7-19f3-4ce3-98df-71e981ecca8d fd=0x7f09b749d210 (Input/output error)
[2016-09-29 11:02:28.130824] E [MSGID: 108008] [afr-read-txn.c:89:afr_read_txn_refresh_done] 0-data_ssd-replicate-1: Failing FSTAT on gfid bf5922b7-19f3-4ce3-98df-71e981ecca8d: split-brain observed. [Input/output error]



Does `gluster volume heal data_ssd info split-brain` report that
the file is in split-brain, with vm04 still being down?
If yes, could you provide the extended attributes of this gfid
from all 3 bricks:
getfattr -d -m . -e hex
/path/to/brick/bf/59/bf5922b7-19f3-4ce3-98df-71e981ecca8d

If no, then I'm guessing that it is 
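For reference, a rough sketch of collecting the information asked for above,
assuming the brick paths from the volume status output earlier and the usual
.glusterfs gfid layout on the bricks (adjust the paths to your setup):

gluster volume heal data_ssd info split-brain

# run on each of the three bricks of the affected replica set, e.g.:
getfattr -d -m . -e hex \
    /gluster/ssd/data/brick/.glusterfs/bf/59/bf5922b7-19f3-4ce3-98df-71e981ecca8d
getfattr -d -m . -e hex \
    /gluster/ssd/data/arbiter_brick/.glusterfs/bf/59/bf5922b7-19f3-4ce3-98df-71e981ecca8d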

Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Gandalf Corvotempesta
2016-09-29 12:22 GMT+02:00 Prashanth Pai :
> In pure vanilla Swift, ACL information is stored in container DBs (sqlite)
> In gluster-swift, ACLs are stored in the extended attribute of the directory.

So, as long as the directory is stored on gluster, gluster makes this redundant.

> This can be easily done using haproxy.

The only thing to do is spawn multiple VMs with gluster-swift
pointing to the same gluster volume, nothing else, as the extended
attributes are stored on gluster and thus readable by all
VMs, and HAProxy will balance the requests.

Right? A sort of spawn :)
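For example, a minimal haproxy.cfg sketch balancing two gluster-swift VMs
(host names are placeholders; 8080 is the usual gluster-swift proxy port, and
the /healthcheck check assumes the Swift healthcheck middleware is enabled):

frontend swift_api
    bind *:8080
    default_backend swift_proxies

backend swift_proxies
    balance roundrobin
    option httpchk GET /healthcheck
    server swift1 swift1.example.com:8080 check
    server swift2 swift2.example.com:8080 check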
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Prashanth Pai


- Original Message -
> From: "Gandalf Corvotempesta" 
> To: "Prashanth Pai" 
> Cc: "John Mark Walker" , "gluster-users" 
> 
> Sent: Thursday, 29 September, 2016 3:23:27 PM
> Subject: Re: [Gluster-users] Minio as object storage
> 
> 2016-09-29 11:49 GMT+02:00 Prashanth Pai :
> > Swift can enforce allowing/denying access to swift users.
> > The Swift API provides Account ACLs and Container ACLs for this.
> > http://docs.openstack.org/developer/swift/overview_auth.html
> >
> > There is no mapping between a swift user and a linux user as
> > such. Hence these ACLs are enforced at object interface level
> > and not at the filesystem layer.
> 
> I don't need a map between Linux users and Swift users but only a way
> to force that user1 can't see files uploaded by user2
> 

Yes, that can be done. Container ACLs allow you to do just that.
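A rough example with the swift client, assuming tempauth-style credentials
(the account, user names and passwords here are made up):

# user1 uploads to a container and grants read/write only to himself
swift -A http://proxy:8080/auth/v1.0 -U myvolume:user1 -K pass1 upload private1 myfile
swift -A http://proxy:8080/auth/v1.0 -U myvolume:user1 -K pass1 post private1 \
    --read-acl "myvolume:user1" --write-acl "myvolume:user1"

# user2, who is not in the ACL, gets 403 Forbidden on that container
swift -A http://proxy:8080/auth/v1.0 -U myvolume:user2 -K pass2 list private1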
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Gandalf Corvotempesta
2016-09-29 11:49 GMT+02:00 Prashanth Pai :
> Swift can enforce allowing/denying access to swift users.
> The Swift API provides Account ACLs and Container ACLs for this.
> http://docs.openstack.org/developer/swift/overview_auth.html
>
> There is no mapping between a swift user and a linux user as
> such. Hence these ACLs are enforced at object interface level
> and not at the filesystem layer.

I don't need a map between Linux users and Swift users but only a way
to force that user1 can't see files uploaded by user2
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Prashanth Pai

- Original Message -
> From: "Gandalf Corvotempesta" 
> To: "Prashanth Pai" 
> Cc: "John Mark Walker" , "gluster-users" 
> 
> Sent: Thursday, 29 September, 2016 2:50:33 PM
> Subject: Re: [Gluster-users] Minio as object storage
> 
> 2016-09-29 11:03 GMT+02:00 Prashanth Pai :
> > Each account can have as many users you'd want.
> >
> > If you'd like 10 accounts, you'll need 10 volumes.
> > If you have 10 volumes, you'd have 10 accounts.
> >
> > For example (uploading an object):
> > curl -v -X PUT -T mytestfile
> > http://localhost:8080/v1/AUTH_myvolume/mycontainer/mytestfile
> >
> > Here "myvolumename" is the name of the volume as well as the account.
> 
> So, let's assume a single "volume" with multiple users.
> Would it be possible to share this volume with multiple users and deny
> access to files per user?

Swift can enforce allowing/denying access to swift users.
The Swift API provides Account ACLs and Container ACLs for this.
http://docs.openstack.org/developer/swift/overview_auth.html

There is no mapping between a swift user and a linux user as
such. Hence these ACLs are enforced at object interface level
and not at the filesystem layer.

> user1 should only see its own files and so on.
> 
> If this is not possible, it would be a mess: gluster volumes need many
> bricks (in my case, with replica 3, at least 3 bricks).
> Having to create 1 volume for each account means thousands of volumes
> and then thousands*3 bricks.
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] increase qcow2 image size

2016-09-30 Thread Gandalf Corvotempesta
2016-09-29 0:02 GMT+02:00 Gandalf Corvotempesta
:
> Shouldn't gluster increase the image size?

This morning I checked the image size and it had properly increased.
So gluster is able to grow (by adding shards to) the VM image only
when needed, right?
I started with a 100GB qcow2 image that was preallocated by gluster
(1600 shards created, 64MB each) even though the qcow2 image held only a
couple of GB (a minimal Debian install); then I increased the image
to 150GB and nothing changed on disk.

I then created a huge file (120GB) with random content inside the VM, and this
morning the stored file had grown to 133GB.
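A quick way to see this behaviour, for anyone curious (the paths and the gfid
are only examples; the shards live under the hidden .shard directory on each
brick):

# virtual size vs. real allocation of the sparse image, via the FUSE mount
qemu-img info /mnt/gluster/images/100/vm-100-disk-1.qcow2
du -sh /mnt/gluster/images/100/vm-100-disk-1.qcow2

# count how many shards have actually been allocated for it on a brick
ls /gluster/brick1/.shard | grep -c '<gfid-of-the-image>'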
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Gandalf Corvotempesta
2016-09-29 11:03 GMT+02:00 Prashanth Pai :
> Each account can have as many users you'd want.
>
> If you'd like 10 accounts, you'll need 10 volumes.
> If you have 10 volumes, you'd have 10 accounts.
>
> For example (uploading an object):
> curl -v -X PUT -T mytestfile 
> http://localhost:8080/v1/AUTH_myvolume/mycontainer/mytestfile
>
> Here "myvolumename" is the name of the volume as well as the account.

So, let's assume a single "volume" with multiple users.
Would it be possible to share this volume with multiple users and deny
access to files per user?
user1 should only see its own files and so on.

If this is not possible, it would be a mess: gluster volumes need many
bricks (in my case, with replica 3, at least 3 bricks).
Having to create 1 volume for each account means thousands of volumes
and then thousands*3 bricks.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Prashanth Pai

> 
> is this quick start guide correct ?
> https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md

Except for the part where you get the packages from,
the guide is correct.

> 
> What does it mean "NOTE: In Gluster-Swift, accounts must be GlusterFS
> volumes." ?
> I have to create one gluster volume for each swift account ?
> If I would like to have 10 users, I have to create 10 gluster volumes ?
> 

Each account can have as many users as you'd want.

If you'd like 10 accounts, you'll need 10 volumes.
If you have 10 volumes, you'd have 10 accounts.

For example (uploading an object):
curl -v -X PUT -T mytestfile 
http://localhost:8080/v1/AUTH_myvolume/mycontainer/mytestfile

Here "myvolumename" is the name of the volume as well as the account.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Gandalf Corvotempesta
2016-09-29 6:58 GMT+02:00 Prashanth Pai :
> But gluster-swift isn't so. The distribution and replication
> functionality of Swift is suppressed and delegated to gluster.
> gluster-swift is front-end which processes and converts all
> incoming object requests into filesystem operations that
> gluster can work with. That's all it does.

Is this quick start guide correct?
https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md

What does "NOTE: In Gluster-Swift, accounts must be GlusterFS
volumes." mean?
Do I have to create one gluster volume for each Swift account?
If I would like to have 10 users, do I have to create 10 gluster volumes?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] An Update on GlusterD-2.0

2016-09-30 Thread Vijay Bellur

On 09/29/2016 12:48 AM, Kaushal M wrote:

On Thu, Sep 29, 2016 at 10:10 AM, Vijay Bellur  wrote:

On 09/22/2016 07:28 AM, Kaushal M wrote:


The first preview/dev release of GlusterD-2.0 is available now. A
prebuilt binary is available for download from the release-page[1].

This is just a preview of what has been happening in GD2, to give
users a taste of how GD2 is evolving.

GD2 can now form a cluster, list peers, create/delete, (pseudo)
start/stop and list volumes. Most of these will undergo changes and
be refined as we progress.

More information on how to test this release can be found on the release
page.

We'll be providing periodic (hopefully fortnightly) updates on the
changes happening in GD2 from now on.




Thank you for posting this, Kaushal!

I was trying to add a peer using the gluster/gluster-centos docker
containers and I encountered the following error:

INFO[17533] New member added to the cluster   New member
=ETCD_172.17.0.3 member Id =b197797611650d60
INFO[17533] ETCD_NAME ETCD_NAME=ETCD_172.17.0.3
INFO[17533] ETCD_INITIAL_CLUSTER
ETCD_INITIAL_CLUSTER=default=http://172.17.0.4:2380,ETCD_172.17.0.3=http://172.17.0.3:2380
INFO[17533] ETCD_INITIAL_CLUSTER_STATE"existing"
ERRO[17540] Failed to add peer into the etcd storeerror=client: etcd
cluster is unavailable or misconfigured peer/node=172.17.0.3
ERRO[21635] Failed to add member into etcd clustererror=client: etcd
cluster is unavailable or misconfigured member=172.17.0.3


These 2 errors are from the etcd client, which means that GD2 cannot
connect to the etcd server.
As the errors indicate, it could be because the etcd daemon isn't
running successfully,
or because etcd hasn't successfully connected to its cluster. There
could be more information in the etcd log under
`GD2WORKDIR/log/etcd.log`.



From etcd.log:

2016-09-29 04:53:45.379924 W | etcdserver: cannot get the version of 
member b197797611650d60 (Get http://172.17.0.3:2380/version: dial tcp 
172.17.0.3:2380: getsockopt: connection refused)
2016-09-29 04:53:49.380374 W | etcdserver: failed to reach the 
peerURL(http://172.17.0.3:2380) of member b197797611650d60 (Get 
http://172.17.0.3:2380/version: dial tcp 172.17.0.3:2380: getsockopt: 
connection refused)


It does look like a firewall issue. I will resolve that and check.
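For reference, a quick sketch of what to check and open (2379/2380 are etcd's
default client and peer ports; the firewall-cmd lines assume firewalld is in
use on the host):

# is the peer reachable on the etcd peer port at all?
curl http://172.17.0.3:2380/version

# if a firewall is blocking it, open the etcd client and peer ports
firewall-cmd --permanent --add-port=2379/tcp --add-port=2380/tcp
firewall-cmd --reload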

Thanks,
Vijay


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] An Update on GlusterD-2.0

2016-09-30 Thread Kaushal M
On Thu, Sep 29, 2016 at 10:10 AM, Vijay Bellur  wrote:
> On 09/22/2016 07:28 AM, Kaushal M wrote:
>>
>> The first preview/dev release of GlusterD-2.0 is available now. A
>> prebuilt binary is available for download from the release-page[1].
>>
>> This is just a preview of what has been happening in GD2, to give
>> users a taste of how GD2 is evolving.
>>
>> GD2 can now form a cluster, list peers, create/delete, (pseudo)
>> start/stop and list volumes. Most of these will undergo changes and
>> be refined as we progress.
>>
>> More information on how to test this release can be found on the release
>> page.
>>
>> We'll be providing periodic (hopefully fortnightly) updates on the
>> changes happening in GD2 from now on.
>>
>
>
> Thank you for posting this, Kaushal!
>
> I was trying to add a peer using the gluster/gluster-centos docker
> containers and I encountered the following error:
>
> INFO[17533] New member added to the cluster   New member
> =ETCD_172.17.0.3 member Id =b197797611650d60
> INFO[17533] ETCD_NAME ETCD_NAME=ETCD_172.17.0.3
> INFO[17533] ETCD_INITIAL_CLUSTER
> ETCD_INITIAL_CLUSTER=default=http://172.17.0.4:2380,ETCD_172.17.0.3=http://172.17.0.3:2380
> INFO[17533] ETCD_INITIAL_CLUSTER_STATE"existing"
> ERRO[17540] Failed to add peer into the etcd storeerror=client: etcd
> cluster is unavailable or misconfigured peer/node=172.17.0.3
> ERRO[21635] Failed to add member into etcd clustererror=client: etcd
> cluster is unavailable or misconfigured member=172.17.0.3

These 2 errors are from the etcd client, which means that GD2 cannot
connect to the etcd server.
As the errors indicate, it could be because the etcd daemon isn't
running successfully,
or because etcd hasn't successfully connected to its cluster. There
could be more information in the etcd log under
`GD2WORKDIR/log/etcd.log`.

I've faced this issue intermittently, but I've never bothered checking
what caused it yet. I just nuke everything and start again.

>
> What should be done to overcome this error?
>
> Also noticed that there is a minor change in the actual response to /version
> when compared with what is documented in the API guide. We would need to
> change that.

Will do it. The whole ReST document is in need of a recheck.

>
> -Vijay
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] An Update on GlusterD-2.0

2016-09-30 Thread Vijay Bellur

On 09/22/2016 07:28 AM, Kaushal M wrote:

The first preview/dev release of GlusterD-2.0 is available now. A
prebuilt binary is available for download from the release-page[1].

This is just a preview of what has been happening in GD2, to give
users a taste of how GD2 is evolving.

GD2 can now form a cluster, list peers, create/delete, (pseudo)
start/stop and list volumes. Most of these will undergo changes and
be refined as we progress.

More information on how to test this release can be found on the release page.

We'll be providing periodic (hopefully fortnightly) updates on the
changes happening in GD2 from now on.




Thank you for posting this, Kaushal!

I was trying to add a peer using the gluster/gluster-centos docker 
containers and I encountered the following error:


INFO[17533] New member added to the cluster   New member =ETCD_172.17.0.3 member Id =b197797611650d60
INFO[17533] ETCD_NAME ETCD_NAME=ETCD_172.17.0.3
INFO[17533] ETCD_INITIAL_CLUSTER ETCD_INITIAL_CLUSTER=default=http://172.17.0.4:2380,ETCD_172.17.0.3=http://172.17.0.3:2380
INFO[17533] ETCD_INITIAL_CLUSTER_STATE"existing"
ERRO[17540] Failed to add peer into the etcd storeerror=client: etcd cluster is unavailable or misconfigured peer/node=172.17.0.3
ERRO[21635] Failed to add member into etcd clustererror=client: etcd cluster is unavailable or misconfigured member=172.17.0.3


What should be done to overcome this error?

Also noticed that there is a minor change in the actual response to 
/version when compared with what is documented in the API guide. We 
would need to change that.


-Vijay

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Vijay Bellur
On Wed, Sep 28, 2016 at 11:28 AM, Gandalf Corvotempesta
 wrote:
> 2016-09-28 16:27 GMT+02:00 Prashanth Pai :
>> There's gluster-swift[1]. It works with both the Swift API and S3 API[2] (using 
>> Swift).
>>
>> [1]: https://github.com/prashanthpai/docker-gluster-swift
>> [2]: https://github.com/gluster/gluster-swift/blob/master/doc/markdown/s3.md
>
> I wasn't aware of S3 support on Swift.
> Anyway, Swift has some requirements like the whole keyring stack
> proxies and so on from OpenStack, I prefer something smaller


Have you tried playing with docker-gluster-swift as Prashanth
mentions? All the swift dependencies are handled by the container and
it is quite easy IMO to get going.

We are attempting to improve the capabilities of our object interface, and
containerized gluster-swift is one of our current efforts to make that
happen. We would look forward to any feedback that you can provide about
gluster-swift.
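Something along these lines should be enough to try it out (the image name,
volume name and mount point are assumptions based on my reading of the
docker-gluster-swift README, so please verify there):

# mount the gluster volume where gluster-swift expects to find it
mkdir -p /mnt/gluster-object/myvolume
mount -t glusterfs node1:/myvolume /mnt/gluster-object/myvolume

# run the containerized gluster-swift services, exposing the proxy on 8080
docker run -d -p 8080:8080 \
    -v /mnt/gluster-object:/mnt/gluster-object \
    prashanthpai/gluster-swift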

Thanks!
Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] increase qcow2 image size

2016-09-30 Thread Gandalf Corvotempesta
I'm doing some tests with Proxmox.
I've created a test VM with a 100GB qcow2 image stored on gluster with sharding.
All shards were created properly.

Then I increased the qcow2 image size from 100GB to 150GB.
Proxmox did this well, but on gluster I'm still seeing the old qcow2
image size (1600 shards, 64MB each).

What happens when qemu has to write up to the full 150GB to disk by growing the
qcow2 image (it's copy-on-write, so it expands only when needed)?

Shouldn't gluster increase the image size?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Production cluster planning

2016-09-30 Thread mabi
That's not correct. There is no risk of corruption using "sync=disabled". In 
the worst case you just end up with old data but no corruption. See the 
following comment from a master of ZFS (Aaron Toponce):

https://pthree.org/2013/01/25/glusterfs-linked-list-topology/#comment-227906

Btw: I have an enterprise SSD for my ZFS SLOG but in the case of GlusterFS I 
Btw: I have enterprise SSD for my ZFS SLOG but in the case of GlusterFS I see 
not much improvement. The real performance improvement comes by disabling ZFS 
synchronous writes. I do that for all my ZFS pools/partitions which have 
GlutserFS on top.
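Concretely, that is just the following (the pool/dataset name is an example):

# disable synchronous write semantics on the dataset backing the brick
zfs set sync=disabled tank/brick1

# verify the current setting
zfs get sync tank/brick1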









 Original Message 
Subject: Re: [Gluster-users] Production cluster planning
Local Time: September 26, 2016 11:08 PM
UTC Time: September 26, 2016 9:08 PM
From: lindsay.mathie...@gmail.com
To: gluster-users@gluster.org

On 27/09/2016 4:13 AM, mabi wrote:
> I would also say do not forget to set "sync=disabled".

I wouldn't be doing that - very high risk of gluster corruption in the
event of power loss or server crash. Up to 5 seconds of writes could be
lost that way.


If writes aren't fast enough I'd add a SSD partition for slog.
Preferably a data center quality one.

--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] GlusterFs upstream bugzilla components Fine graining

2016-09-30 Thread Prasanna Kalever
On Wed, Sep 28, 2016 at 11:24 AM, Muthu Vigneshwaran
 wrote:
>
> Hi,
>
> This is an update to the previous mail about fine-graining the
> GlusterFS upstream Bugzilla components.
>
> Finally we have come up with a new structure that should allow easier
> access to bugs for both the reporter and the assignee.
>
> In the new structure we have decided to remove the components
> listed below -
>
> - BDB
> - HDFS
> - booster
> - coreutils
> - gluster-hdoop
> - gluster-hadoop-install
> - libglusterfsclient
> - map
> - path-converter
> - protect
> - qemu-block

Well, we are working on bringing the qemu-block xlator back to life.
It is needed to achieve qcow2-based internal snapshots in the
gluster block store.

Take a look at http://review.gluster.org/#/c/15588/ and the dependent patches.
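For context, qcow2 internal snapshots are the ones qemu-img keeps inside the
image file itself, e.g. (the image path is an example):

# create, list and revert to an internal snapshot inside a qcow2 image
qemu-img snapshot -c before-upgrade vm-disk.qcow2
qemu-img snapshot -l vm-disk.qcow2
qemu-img snapshot -a before-upgrade vm-disk.qcow2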

--
Prasanna

[...]
> Thanks and regards,
>
> Muthu Vigneshwaran & Niels de Vos
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Ben Werthmann
These are interesting projects:
https://github.com/prashanthpai/antbird
https://github.com/kshlm/gogfapi

Are there plans for an official go gfapi client library?

On Wed, Sep 28, 2016 at 12:16 PM, John Mark Walker 
wrote:

> No - gluster-swift adds the swift API on top of GlusterFS. It doesn't
> require Swift itself.
>
> This project is 4 years old now - how do people not know this?
>
> -JM
>
>
>
> On Wed, Sep 28, 2016 at 11:28 AM, Gandalf Corvotempesta <
> gandalf.corvotempe...@gmail.com> wrote:
>
>> 2016-09-28 16:27 GMT+02:00 Prashanth Pai :
>> > There's gluster-swift[1]. It works with both the Swift API and S3 API[2]
>> (using Swift).
>> >
>> > [1]: https://github.com/prashanthpai/docker-gluster-swift
>> > [2]: https://github.com/gluster/gluster-swift/blob/master/doc/mar
>> kdown/s3.md
>>
>> I wasn't aware of S3 support on Swift.
>> Anyway, Swift has some requirements like the whole keyring stack
>> proxies and so on from OpenStack, I prefer something smaller
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Minio as object storage

2016-09-30 Thread Gandalf Corvotempesta
2016-09-28 18:16 GMT+02:00 John Mark Walker :
> No - gluster-swift adds the swift API on top of GlusterFS. It doesn't
> require Swift itself.
>
> This project is 4 years old now - how do people not know this?

gluster-swift is obsolete.
The "proper" way to use object storage is with Swift:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Object_Store.html
"Object Store technology is built upon OpenStack Swift. OpenStack
Swift allows users to store and retrieve files and content through a
simple Web Service REST (Representational State Transfer) interface as
objects. Red Hat Gluster Storage uses glusterFS as a back-end file
system for OpenStack Swift."

and you need the whole Swift stack from OpenStack:

# rpm -qa | grep swift
openstack-swift-container-1.13.1-6.el7ost.noarch
openstack-swift-object-1.13.1-6.el7ost.noarch
swiftonfile-1.13.1-6.el7rhgs.noarch
openstack-swift-proxy-1.13.1-6.el7ost.noarch
openstack-swift-doc-1.13.1-6.el7ost.noarch
openstack-swift-1.13.1-6.el7ost.noarch
openstack-swift-account-1.13.1-6.el7ost.noarch


but as Ben wrote, there are too many moving parts to get that working.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users