Re: [Gluster-users] Minio as object storage

2016-09-28 Thread John Mark Walker
No - gluster-swift adds the swift API on top of GlusterFS. It doesn't
require Swift itself.

This project is 4 years old now - how do people not know this?

-JM



On Wed, Sep 28, 2016 at 11:28 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2016-09-28 16:27 GMT+02:00 Prashanth Pai :
> > There's gluster-swift[1]. It works with both the Swift API and the S3
> > API[2] (using Swift).
> >
> > [1]: https://github.com/prashanthpai/docker-gluster-swift
> > [2]: https://github.com/gluster/gluster-swift/blob/master/doc/
> markdown/s3.md
>
> I wasn't aware of S3 support on Swift.
> Anyway, Swift has some requirements, like the whole keyring stack,
> proxies and so on from OpenStack; I'd prefer something smaller.

Re: [Gluster-users] Minio as object storage

2016-09-28 Thread Ben Werthmann
Here's an older thread discussing gfapi + swiftonfile + swift3.
https://www.gluster.org/pipermail/gluster-users.old/2015-December/024676.html

We looked at this and decided it had too many moving parts for our use case.

On Wed, Sep 28, 2016 at 11:28 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2016-09-28 16:27 GMT+02:00 Prashanth Pai :
> > There's gluster-swift[1]. It works with both the Swift API and the S3
> > API[2] (using Swift).
> >
> > [1]: https://github.com/prashanthpai/docker-gluster-swift
> > [2]: https://github.com/gluster/gluster-swift/blob/master/doc/
> markdown/s3.md
>
> I wasn't aware of S3 support on Swift.
> Anyway, Swift has some requirements, like the whole keyring stack,
> proxies and so on from OpenStack; I'd prefer something smaller.

Re: [Gluster-users] Minio as object storage

2016-09-28 Thread Prashanth Pai

> 
> Is there anything simpler to use as an S3-compatible API on top of gluster?

There's gluster-swift[1]. It works with both the Swift API and the S3 API[2]
(using Swift).

[1]: https://github.com/prashanthpai/docker-gluster-swift
[2]: https://github.com/gluster/gluster-swift/blob/master/doc/markdown/s3.md
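
For a quick sanity check of the S3 API, something like the following should
work once gluster-swift is up (the endpoint, port, and the tempauth-style
"account:user" credentials here are assumptions -- adjust them to your
deployment; the bucket maps to a gluster volume):

    export AWS_ACCESS_KEY_ID='test:tester'     # tempauth account:user (assumed)
    export AWS_SECRET_ACCESS_KEY='testing'     # tempauth key (assumed)
    aws --endpoint-url http://localhost:8080 s3 mb s3://myvolume
    aws --endpoint-url http://localhost:8080 s3 cp hello.txt s3://myvolume/hello.txt
    aws --endpoint-url http://localhost:8080 s3 ls s3://myvolume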

> There is no need for replication or similar features like Riak provides, as
> this would be handled by gluster itself.
> 


[Gluster-users] Community Gluster Package Matrix, updated

2016-09-28 Thread Kaleb S. KEITHLEY
Hi,

With the imminent release of 3.9 in a week or two, here's a summary of
the Community packages for various Linux distributions that are
tentatively planned going forward.

Note that 3.6 will reach end-of-life (EOL) when 3.9 is released, and no
further releases will be made on the release-3.6 branch.

N.B. Fedora 23 and Ubuntu Wily are nearing EOL.

(I haven't included NetBSD or FreeBSD here, only because they're not
Linux and we have little control over them.)

An X means packages are planned to be in the repository.
A — means we have no plans to build the version for the repository.
d.g.o means packages will (also) be provided on https://download.gluster.org
DNF/YUM means the packages are included in the Fedora updates or
updates-testing repos.



                       3.9       3.8       3.7        3.6
CentOS Storage SIG¹
  el5                  —         —         d.g.o      d.g.o
  el6                  X         X         X, d.g.o   X, d.g.o
  el7                  X         X         X, d.g.o   X, d.g.o

Fedora
  F23                  —         d.g.o     DNF/YUM    d.g.o
  F24                  d.g.o     DNF/YUM   d.g.o      d.g.o
  F25                  DNF/YUM   d.g.o     d.g.o      d.g.o
  F26                  DNF/YUM   d.g.o     d.g.o      d.g.o

Ubuntu Launchpad²
  Precise (12.04 LTS)  —         —         X          X
  Trusty (14.04 LTS)   —         X         X          X
  Wily (15.10)         —         X         X          X
  Xenial (16.04 LTS)   X         X         X          X
  Yakkety (16.10)      X         X         —          —

Debian
  Wheezy (7)           —         —         d.g.o      d.g.o
  Jessie (8)           d.g.o     d.g.o     d.g.o      d.g.o
  Stretch (9)          d.g.o     d.g.o     d.g.o      d.g.o

SuSE Build System³
  OpenSuSE 13          X         X         X          X
  Leap 42.X            X         X         X          —
  SLES11               —         —         —          X
  SLES12               X         X         X          X

¹ https://wiki.centos.org/SpecialInterestGroup/Storage
² https://launchpad.net/~gluster
³ https://build.opensuse.org/project/subprojects/home:kkeithleatredhat
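
For example, on CentOS 7 the Storage SIG packages can be pulled in roughly as
follows (the release package name follows the usual SIG naming convention;
pick the one matching the release you want):

    yum install centos-release-gluster38   # enables the Storage SIG repo
    yum install glusterfs-server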

-- Kaleb

Re: [Gluster-users] Minio as object storage

2016-09-28 Thread Gandalf Corvotempesta
2016-09-28 16:27 GMT+02:00 Prashanth Pai :
> There's gluster-swift[1]. It works with both the Swift API and the S3 API[2]
> (using Swift).
>
> [1]: https://github.com/prashanthpai/docker-gluster-swift
> [2]: https://github.com/gluster/gluster-swift/blob/master/doc/markdown/s3.md

I wasn't aware of S3 support on Swift.
Anyway, Swift has some requirements, like the whole keyring stack,
proxies and so on from OpenStack; I'd prefer something smaller.


[Gluster-users] Weekly Community Meeting 28/Sep/2016 - Minutes

2016-09-28 Thread Samikshan Bairagya

Hi all,

Thank you to all those who participated in today's community meeting. The
next meeting is scheduled for next week (October 5th) in #gluster-meeting
and will be hosted by Ankit.


The minutes, logs and a summary for today's meeting can be found below.

 - Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-28/weekly_community_meeting_28-sep-2016.2016-09-28-12.00.html
 - Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-28/weekly_community_meeting_28-sep-2016.2016-09-28-12.00.txt
 - Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-28/weekly_community_meeting_28-sep-2016.2016-09-28-12.00.log.html


Meeting summary
---
* Roll Call  (samikshan, 12:01:24)

* Next week's host  (samikshan, 12:06:37)
  * ankitraj is next week's host  (samikshan, 12:09:47)

* GlusterFS-4.0  (samikshan, 12:16:33)
  * LINK:

http://www.gluster.org/pipermail/gluster-devel/2016-September/050988.html
(ndevos, 12:22:01)

* GlusterFS-3.9  (samikshan, 12:23:19)

* GlusterFS-3.8  (samikshan, 12:29:16)
  * LINK:

http://gluster.readthedocs.io/en/latest/Contributors-Guide/GlusterFS-Release-process/
(ndevos, 12:36:03)
  * ACTION: document RC tagging guidelines in release steps document
(samikshan, 12:37:08)
  * ACTION: Discuss RC tagging guidelines on mailing lists  (samikshan,
12:37:34)

* GlusterFS-3.7  (samikshan, 12:38:04)

* NFS Ganesha  (samikshan, 12:41:35)
  * LINK: https://www.youtube.com/watch?v=54y4WkijkoI   (samikshan,
12:45:16)
  * NFS Ganesha 2.4.1 expected around mid-October, near NFS Bake-a-thon,
being held in Westford, MA  (samikshan, 12:46:49)

* Samba  (samikshan, 12:49:45)

* Project Infrastructure  (samikshan, 12:54:13)
  * LINK: https://github.com/gluster/infra-docs/wiki/Gerrit   (misc,
12:56:13)
  * LINK: https://github.com/gluster/infra-docs/wiki/Gerrit
(samikshan, 12:57:43)
  * LINK:

http://www.gluster.org/pipermail/gluster-devel/2016-September/051011.html
(samikshan, 12:59:03)

* Last week's AIs  (samikshan, 13:00:56)

* rastar_afk/ndevos/jdarcy to improve cleanup to control the processes
  that test starts  (samikshan, 13:01:19)
  * ACTION: rastar_afk/ndevos/jdarcy to improve cleanup to control the
processes that test starts  (samikshan, 13:02:39)

* RC tagging to be done by this week for 3.9 by aravindavk/pranithk
  (samikshan, 13:03:43)

* Open Floor  (samikshan, 13:04:21)

Meeting ended at 13:07:45 UTC.




Action Items

* document RC tagging guidelines in release steps document
* Discuss RC tagging guidelines on mailing lists
* rastar_afk/ndevos/jdarcy to improve cleanup to control the processes
  that test starts




Action Items, by person
---
* ndevos
  * rastar_afk/ndevos/jdarcy to improve cleanup to control the processes
that test starts
* **UNASSIGNED**
  * document RC tagging guidelines in release steps document
  * Discuss RC tagging guidelines on mailing lists




People Present (lines said)
---
* samikshan (100)
* kkeithley (39)
* ndevos (29)
* misc (9)
* ankitraj (5)
* zodbot (3)
* anoopcs (2)
* post-factum (1)
* jiffin (1)
* ira (1)


See you all next week. Thanks again.

~ Samikshan


Re: [Gluster-users] Minio as object storage

2016-09-28 Thread Niels de Vos
On Wed, Sep 28, 2016 at 08:56:35AM +0200, Gandalf Corvotempesta wrote:
> On 28 Sep 2016 at 5:42 AM, "Outback Dingo"  wrote:
> > However simple minio is, it doesn't support clustering, replication or
> > multiple users. Replacing Riak with minio... FAIL! Riak and Skylable
> > are by far better suited.
...
> Is there anything simpler to use as an S3-compatible API on top of gluster?
> There is no need for replication or similar features like Riak provides, as
> this would be handled by gluster itself.

Gluster offers libgfapi for accessing files/directories from within
other applications. If there is a modular S3-compatible service that can
easily be extended to use libgfapi, we could possibly have a look at
providing such a plugin/module.
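
(For reference, qemu is one well-known application that already consumes
libgfapi directly, addressing images with gluster:// URLs instead of going
through a FUSE mount; host and volume names below are placeholders:)

    # create and inspect a disk image on a gluster volume via libgfapi
    qemu-img create -f qcow2 gluster://gluster-host/myvolume/images/vm1.qcow2 10G
    qemu-img info gluster://gluster-host/myvolume/images/vm1.qcow2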

Does anyone know what commonly used S3-compatible servers are available?

Thanks,
Niels



Re: [Gluster-users] GlusterFs upstream bugzilla components Fine graining

2016-09-28 Thread Kaleb S. KEITHLEY
On 09/28/2016 01:54 AM, Muthu Vigneshwaran wrote:
> Hi,
> as we find that the above-mentioned components are either deprecated or
> use GitHub for bug/issue filing; we also plan to add the following
> components as main components
> 
> - common-ha

common-ha is to (eventually) be replaced with storhaug, which I believe
uses github issues.

But if you want to keep common-ha for now, that's okay with me.

-- 

Kaleb


Re: [Gluster-users] Error log in brick file : mkdir of /opt/lvmdir/c2/brick/.trashcan/ failed [File exists]

2016-09-28 Thread ABHISHEK PALIWAL
I am having some problems in the mnt-c.log file as well:

[2016-09-27 13:01:00.455588] I [dict.c:473:dict_get]
(-->/usr/lib64/glusterfs/3.7.6/xlator/debug/io-stats.so(io_stats_lookup_cbk-0x1d7dc)
[0x3fff80c2a574]
-->/usr/lib64/glusterfs/3.7.6/xlator/system/posix-acl.so(posix_acl_lookup_cbk-0x15b5c)
[0x3fff80c00944] -->/usr/lib64/libglusterfs.so.0(dict_get-0xc10f4)
[0x3fff84c8dc2c] ) 0-dict: !this || key=system.posix_acl_default [Invalid
argument]
[2016-09-27 13:01:22.388314] W [MSGID: 114031]
[client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-c_glusterfs-client-0:
remote operation failed. Path: /loadmodules/CXC1733370_P91A033
(a1d7c756-a9ba-4525-af5c-a8b7ebcbbb1a) [No such file or directory]
[2016-09-27 13:01:22.388403] E [fuse-bridge.c:2117:fuse_open_resume]
0-glusterfs-fuse: 8716: OPEN a1d7c756-a9ba-4525-af5c-a8b7ebcbbb1a
resolution failed

Could you please let me know the possible reason for this?



On Wed, Sep 28, 2016 at 3:58 PM, ABHISHEK PALIWAL 
wrote:

> Hi,
>
> I am getting some unexpected errors after creating the distributed volume
> and am not able to access files from the gluster mount.
>
> Could you please let me know the reason behind these errors.
>
> Also, please let me know why gluster is calling "posix_mkdir" when the
> file already exists.
>
> Please find the attached log for more details.
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal

Re: [Gluster-users] GlusterFs upstream bugzilla components Fine graining

2016-09-28 Thread Kaleb S. KEITHLEY
On 09/28/2016 02:10 AM, Soumya Koduri wrote:
> Hi,
> 
> On 09/28/2016 11:24 AM, Muthu Vigneshwaran wrote:
> 
>> +- Component GlusterFS
>> |
>> |
>> |  +Subcomponent nfs
> 
Maybe it's time to change it to 'gluster-NFS/native NFS'. Niels/Kaleb?

IIRC there is a separate nfs-ganesha subcomponent already. Correct?

But I agree with calling it gluster-nfs, or anything that makes the
distinction between gluster-nfs and nfs-ganesha clear.

> 
>> +- Component gdeploy
>>
>> |  |
>>
>> |  +Subcomponent samba
>>
>> |  +Subcomponent hyperconvergence

I don't know what hyper-convergence is in the context of gdeploy.

>>
>> |  +Subcomponent RHSC 2.0
> 
> gdeploy has support for 'ganesha' configuration as well. Also, would it
> help if we had an additional 'glusterfs' subcomponent, maybe as the
> default one (any new support being added can fall under that
> category)? Requesting Sac to comment.

Yes, we need a ganesha or nfs-ganesha subcomponent here.

Thanks

-- 

Kaleb


[Gluster-users] Reminder: Weekly Gluster Community Meeting

2016-09-28 Thread Samikshan Bairagya

Hi all,

The weekly Gluster community meeting is about to take place in ~30 minutes.

Meeting details:
- Location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting)
- Date: Every Wednesday
- Time: 12:00 UTC
(on your terminal, run: date -d "12:00 UTC")

Please find the agenda to be discussed in the meeting here: 
https://public.pad.fsfe.org/p/gluster-community-meetings.


Currently the following topics are listed:
 - GlusterFS 4.0
 - GlusterFS 3.9
 - GlusterFS 3.8
 - GlusterFS 3.7
 - GlusterFS 3.6
 - Project infrastructure
 - NFS Ganesha
 - Samba
 - Action items from last week
 - Open floor

If you have any other topic to be discussed, please add it under the
'Open floor' section as a sub-topic.


Looking forward to your participation.

Thanks and Regards,
Samikshan Bairagya


Re: [Gluster-users] Minio as object storage

2016-09-28 Thread Outback Dingo
On Wed, Sep 28, 2016 at 5:58 AM, Gandalf Corvotempesta
 wrote:
> 2016-09-28 11:52 GMT+02:00 Outback Dingo :
>> Which is itself a waste. You can do better; look at Skylable
>> SXDrive and LibreS3.
>
> This is a fully-featured storage product.
> I would like to use Gluster as the storage layer; I just need the S3 interface.

Then your best bet is to stick with Riak ... it's the most enterprise-worthy option.


[Gluster-users] Error log in brick file : mkdir of /opt/lvmdir/c2/brick/.trashcan/ failed [File exists]

2016-09-28 Thread ABHISHEK PALIWAL
Hi,

I am getting some unexpected errors after creating the distributed volume
and am not able to access files from the gluster mount.

Could you please let me know the reason behind these errors.

Also, please let me know why gluster is calling "posix_mkdir" when the
file already exists.

Please find the attached log for more details.

-- 




Regards
Abhishek Paliwal


gluster.tar
Description: Unix tar archive
Log start: 160928-103437 - 10.220.32.69 - moshell 16.0s - /home/eandmle/tmp/file_permission/gluster_logs.log

STP69> ls /system/glusterd 

160928-10:35:49 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ ls /system/glusterd
bitd           glustershd  hooks  options  quotad  snaps
glusterd.info  groups      nfs    peers    scrub   vols
$ 

STP69> lhsh 000300 ls /system/glusterd

160928-10:36:01 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300 ls /system/glusterd
bitd
glusterd.info
glustershd
groups
hooks
nfs
options
peers
quotad
scrub
snaps
vols
$ 

STP69> lhsh 000300/d1 ls /system/glusterd

160928-10:36:06 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300/d1 ls /system/glusterd
bitd
glusterd.info
glustershd
groups
hooks
nfs
options
peers
quotad
scrub
snaps
vols
$ 

STP69> 

STP69> lhsh 000300 gluster volume info

160928-10:37:40 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300 gluster volume info
 
Volume Name: c_glusterfs
Type: Distribute
Volume ID: caed5dc4-1c56-4b92-af1f-99ae8271d99c
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
Options Reconfigured:
nfs.disable: on
network.ping-timeout: 4
performance.readdir-ahead: on
$ 

STP69> lhsh 000300/d1 gluster volume info

160928-10:37:42 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300/d1 gluster volume info
rcmd: unknown command 'gluster'
$ 

STP69> bo

160928-10:37:56 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223


00M BoardType DevsSwAllocation

01  SMXB2OE   SMXB
03  EPB2  EPB_C1  
04 1041 EPB2  PCD EPB_BLADE_A 
05 1051 EPB2  PCD EPB_BLADE_A 
27  SMXB2OE   SMXB

STP69> 

STP69> lhsh 000300 gluster volume status

160928-10:38:05 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300 gluster volume status
Status of volume: c_glusterfs
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick 10.32.0.48:/opt/lvmdir/c2/brick   49152 0  Y   1481 
 
Task Status of Volume c_glusterfs
--
There are no active volume tasks
 
$ 

STP69> lhsh 000300/d1 gluster volume status

160928-10:38:07 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300/d1 gluster volume status
rcmd: unknown command 'gluster'
$ 

STP69> 

STP69> lhsh 000300 gluster peer status

160928-10:38:12 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300 gluster peer status
Number of Peers: 0
$ 

STP69> lhsh 000300/d1 gluster peer status

160928-10:38:13 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300/d1 gluster peer status
rcmd: unknown command 'gluster'
$ 

STP69> 

STP69> lhsh 000300 tar -zcvf /d/glusterd_PIU.tar /system/glusterd

160928-10:38:19 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300 tar -zcvf /d/glusterd_PIU.tar /system/glusterd
tar: Removing leading `/' from member names
/system/glusterd/
/system/glusterd/hooks/
/system/glusterd/hooks/1/
/system/glusterd/hooks/1/remove-brick/
/system/glusterd/hooks/1/remove-brick/pre/
/system/glusterd/hooks/1/remove-brick/post/
/system/glusterd/hooks/1/create/
/system/glusterd/hooks/1/create/pre/
/system/glusterd/hooks/1/create/post/
/system/glusterd/hooks/1/start/
/system/glusterd/hooks/1/start/pre/
/system/glusterd/hooks/1/start/post/
/system/glusterd/hooks/1/add-brick/
/system/glusterd/hooks/1/add-brick/pre/
/system/glusterd/hooks/1/add-brick/post/
/system/glusterd/hooks/1/delete/
/system/glusterd/hooks/1/delete/pre/
/system/glusterd/hooks/1/delete/post/
/system/glusterd/hooks/1/set/
/system/glusterd/hooks/1/set/pre/
/system/glusterd/hooks/1/set/post/
/system/glusterd/hooks/1/stop/
/system/glusterd/hooks/1/stop/pre/
/system/glusterd/hooks/1/stop/post/
/system/glusterd/hooks/1/reset/
/system/glusterd/hooks/1/reset/pre/
/system/glusterd/hooks/1/reset/post/

Re: [Gluster-users] Minio as object storage

2016-09-28 Thread Gandalf Corvotempesta
2016-09-28 11:52 GMT+02:00 Outback Dingo :
> Which is itself a waste. You can do better; look at Skylable
> SXDrive and LibreS3.

This is a fully-featured storage product.
I would like to use Gluster as the storage layer; I just need the S3 interface.


Re: [Gluster-users] Minio as object storage

2016-09-28 Thread Outback Dingo
On Wed, Sep 28, 2016 at 5:51 AM, Gandalf Corvotempesta
 wrote:
> 2016-09-28 9:40 GMT+02:00 Outback Dingo :
>> So you're happy to have all users use a single user ID to access
>> buckets: no security at all. Pfffttt.
>
> No, I'm not happy; I'm looking for something different.
>
> But keep in mind that minio could be containerized and run with one
> instance per user.

Which is itself a waste. You can do better; look at Skylable
SXDrive and LibreS3.


Re: [Gluster-users] Minio as object storage

2016-09-28 Thread Gandalf Corvotempesta
2016-09-28 9:40 GMT+02:00 Outback Dingo :
> So you're happy to have all users use a single user ID to access
> buckets: no security at all. Pfffttt.

No, I'm not happy; I'm looking for something different.

But keep in mind that minio could be containerized and run with one
instance per user.
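
A rough sketch of what that could look like, using the upstream minio/minio
image (ports and per-user gluster subdirectories are placeholders):

    # one minio instance per user, each exporting its own gluster subdirectory
    docker run -d -p 9001:9000 -v /mnt/gluster/alice:/export minio/minio server /export
    docker run -d -p 9002:9000 -v /mnt/gluster/bob:/export minio/minio server /export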


[Gluster-users] [Review] oVirt and Gluster - Integrated solution for Disaster Recovery

2016-09-28 Thread Sahina Bose
[Forwarding to a wider audience]

Feature page outlining the proposed solution is at
http://www.ovirt.org/develop/release-management/features/gluster/gluster-dr/
Please review and provide feedback.

thanks,
sahina

-- Forwarded message --
From: Sahina Bose 
Date: Wed, Sep 14, 2016 at 5:51 PM
Subject: Integrating oVirt and Gluster geo-replication to provide a DR
solution
To: devel 


Hi all,

Though there are many solutions that integrate with oVirt to provide
disaster recovery for the guest images, these solutions either rely on
backup agents running on guests or third party software and are complicated
to setup

Since oVirt already integrates with glusterfs, we can leverage gluster's
geo-replication feature to mirror contents to a remote/secondary site
periodically for disaster recovery, without the need for additional software
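
For anyone unfamiliar with the mechanism, the gluster side of this is the
standard geo-replication setup, roughly (volume and host names are
placeholders):

    # mirror mastervol to slavevol on the secondary site
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status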

Please review the PR[1] for the feature page outlining the solution and
integration in oVirt.
Comments and feedback welcome.

[1] https://github.com/oVirt/ovirt-site/pull/453

thanks,
sahina

Re: [Gluster-users] Minio as object storage

2016-09-28 Thread Дмитрий Глушенок
Hi,

I've tried Minio and Scality S3 (both as Docker containers). Neither of them
gives me more than 60 MB/sec for a single stream.
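
A single-stream run can be reproduced with something like the following
(endpoint, bucket and credentials are placeholders for whatever the container
prints at startup):

    export AWS_ACCESS_KEY_ID='<access-key>'
    export AWS_SECRET_ACCESS_KEY='<secret-key>'
    # generate 1 GiB of data and time a single-stream upload
    dd if=/dev/urandom of=/tmp/1g.bin bs=1M count=1024
    time aws --endpoint-url http://localhost:9000 s3 cp /tmp/1g.bin s3://test/1g.bin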

--
Dmitry Glushenok
Jet Infosystems

> On 28 Sep 2016, at 1:04, Gandalf Corvotempesta
>  wrote:
> 
> Has anyone tried Minio as object storage over gluster?
> It's mostly a one-liner:
> https://docs.minio.io/docs/minio-quickstart-guide
> 
> something like:
> ./minio server /mnt/my_gluster_volume
> 
> Having an Amazon S3-compatible object store could be great in some
> environments.


Re: [Gluster-users] Minio as object storage

2016-09-28 Thread Outback Dingo
So you're happy to have all users use a single user ID to access
buckets: no security at all. Pfffttt.

On Wed, Sep 28, 2016 at 2:56 AM, Gandalf Corvotempesta
 wrote:
> On 28 Sep 2016 at 5:42 AM, "Outback Dingo"  wrote:
>> However simple minio is, it doesn't support clustering, replication or
>> multiple users. Replacing Riak with minio... FAIL! Riak and Skylable
>> are by far better suited.
>>
>
> The two products are not comparable:
> minio is very simple, while Riak is far more complicated and has several
> components to install.
>
> Is there anything simpler to use as an S3-compatible API on top of gluster?
> There is no need for replication or similar features like Riak provides, as
> this would be handled by gluster itself.


Re: [Gluster-users] Minio as object storage

2016-09-28 Thread Gandalf Corvotempesta
On 28 Sep 2016 at 5:42 AM, "Outback Dingo"  wrote:
> However simple minio is, it doesn't support clustering, replication or
> multiple users. Replacing Riak with minio... FAIL! Riak and Skylable
> are by far better suited.
>

The two products are not comparable:
minio is very simple, while Riak is far more complicated and has several
components to install.

Is there anything simpler to use as an S3-compatible API on top of gluster?
There is no need for replication or similar features like Riak provides, as
this would be handled by gluster itself.

Re: [Gluster-users] GlusterFs upstream bugzilla components Fine graining

2016-09-28 Thread Soumya Koduri

Hi,

On 09/28/2016 11:24 AM, Muthu Vigneshwaran wrote:

> +- Component GlusterFS
> |
> |
> |  +Subcomponent nfs

Maybe it's time to change it to 'gluster-NFS/native NFS'. Niels/Kaleb?


+- Component gdeploy

|  |

|  +Subcomponent samba

|  +Subcomponent hyperconvergence

|  +Subcomponent RHSC 2.0


gdeploy has support for 'ganesha' configuration as well. Also, would it
help if we had an additional 'glusterfs' subcomponent, maybe as the
default one (any new support being added can fall under that
category)? Requesting Sac to comment.


Thanks,
Soumya


Re: [Gluster-users] GlusterFs upstream bugzilla components Fine graining

2016-09-28 Thread Ravishankar N

On 09/28/2016 11:24 AM, Muthu Vigneshwaran wrote:

Hi,

This is an update to the previous mail about fine-graining the
GlusterFS upstream bugzilla components.

Finally, we have come up with a new structure that should make bugs
easier to work with for both reporters and assignees.

In the new structure we have decided to remove components that are
listed as below -

- BDB
- HDFS
- booster
- coreutils
- gluster-hadoop
- gluster-hadoop-install
- libglusterfsclient
- map
- path-converter
- protect
- qemu-block
- stripe
- unify

as we find that the above-mentioned components are either deprecated or
use GitHub for bug/issue filing; we also plan to add the following
components as main components

- common-ha
- documentation
- gdeploy
- gluster-nagios
- project-infrastructure
- puppet-gluster

The final structure would look like as below -



Structure



Product GlusterFS (Versions: 3.6, 3.7, 3.8, 3.9, mainline)

|

+- Component GlusterFS

|  |

|  +Subcomponent access-control

|  +Subcomponent afr (automatic file replication)

|  +Subcomponent arbiter

|  +Subcomponent barrier

|  +Subcomponent blockdevice

|  +Subcomponent bitrot

|  +Subcomponent build

|  +Subcomponent changelog

|  +Subcomponent changetimerecorder

|  +Subcomponent cli

|  +Subcomponent core

|  +Subcomponent dht2 (distributed hash table)

|  +Subcomponent disperse

|  +Subcomponent distribute

|  +Subcomponent encryption-xlator

|  +Subcomponent error-gen

|  +Subcomponent eventsapi

|  +Subcomponent filter

|  +Subcomponent fuse

|  +Subcomponent geo-replication

|  +Subcomponent gfid-access

|  +Subcomponent glupy

|  +Subcomponent gluster-smb

|  +Subcomponent glusterd

|  +Subcomponent glusterd2

|  +Subcomponent glusterfind

|  +Subcomponent index

|  +Subcomponent io-cache

|  +Subcomponent io-stats

|  +Subcomponent io-threads

|  +Subcomponent jbr

|  +Subcomponent libgfapi

|  +Subcomponent locks

|  +Subcomponent logging

|  +Subcomponent marker

|  +Subcomponent md-cache

|  +Subcomponent nfs

|  +Subcomponent open-behind

|  +Subcomponent packaging

|  +Subcomponent porting

|  +Subcomponent posix

|  +Subcomponent posix-acl

|  +Subcomponent protocol

|  +Subcomponent quick-read

|  +Subcomponent quiesce

|  +Subcomponent quota

|  +Subcomponent rdma

|  +Subcomponent read-ahead

|  +Subcomponent replicate
Currently this is the component being used for AFR, so you could remove 
AFR from the list. Or retain AFR and remove this one, since we also have 
jbr as a form of replication. I'd prefer the former since all current 
AFR bugs are filed under replicate.




|  +Subcomponent richacl

|  +Subcomponent rpc

|  +Subcomponent scripts

|  +Subcomponent selfheal
Is this new component being introduced for a specific reason? selfheal 
is just a process used by various components like afr and ec and IMO 
doesn't need to be an explicit component.


Regards,
Ravi


|  +Subcomponent sharding

|  +Subcomponent snapshot

|  +Subcomponent stat-prefetch

|  +Subcomponent symlink-cache

|  +Subcomponent tests

|  +Subcomponent tiering

|  +Subcomponent trace

|  +Subcomponent transport

|  +Subcomponent trash-xlator

|  +Subcomponent unclassified

|  +Subcomponent upcall

|  +Subcomponent write-behind

|

+- Component common-ha

|  |

|  +Subcomponent ganesha

|

+- documentation

|

+- Component gdeploy

|  |

|  +Subcomponent samba

|  +Subcomponent hyperconvergence

|  +Subcomponent RHSC 2.0

|

+- Component gluster-nagios

|

+- Component project-infrastructure (Version: staging, production)

|  |

|  +Subcomponent website

|  +Subcomponent jenkins

|

+- Component puppet-gluster

Here the versions are the same for every component, as versions do not
vary per component but per product.

So we would like to have your comments on the new structure before 1st
Oct, i.e. three days from now: is there anything that needs to be added,
removed or moved? :) We are also planning to ask the Bugzilla admins to
update the structure early next week.

Thanks and regards,

Muthu Vigneshwaran & Niels de Vos


