I was looking for block storage in gluster but I don't see docs anymore
Is this an unsupported feature ?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
On 1/10/2016 4:15 AM, mabi wrote:
The data will not be in "any" state as you mention, or please define
what you mean by "any". In the worst case you will just lose 5
seconds of data, that's all, as far as I understand.
By "any" state I mean *any*: you have no way of predicting how much data
Sorry the link is missing in my previous post:
https://groups.google.com/a/zfsonlinux.org/d/msg/zfs-discuss/OI5dchl7d_8/vLRMZgJGYUoJ
Original Message
Subject: Re: [Gluster-users] Production cluster planning
Local Time: September 30, 2016 8:15 PM
UTC Time: September 30,
The data will not be in "any" state as you mention, or please define what you
mean by "any". In the worst case you will just lose 5 seconds of data, that's
all, as far as I understand.
Here is another very interesting but long post regarding this topic. Basically
it all boils down to this
Hi,
I recently deployed a GlusterFS system and observed the following
behaviour which leaves me quite puzzled:
I'm running glusterfs 3.7.1 in a replicated setup (2 replicas, see below).
1. Create a file with arbitrary content (irrelevant):
$ echo 123 > test
2. Copying the file works fine:
On Thu, Sep 29, 2016 at 09:54:24PM -0400, Vijay Bellur wrote:
> Thank you Kaleb for putting this together. I think it would also be useful
> to list where our official container images would be present too.
I think a different page would be most useful, so that we do not
overwhelm users too much.
2016-09-30 12:41 GMT+02:00 Lindsay Mathieson :
> You're missing what he said - *ZFS* will not be corrupted but the data written
> could be in any state, in this case the gluster filesystem data and meta
> data. To have one node in a cluster out of sync without the
Hello Krutika, Ravishankar,
Unfortunately, i deleted my previous test instance in AWS (running on EBS
storage, on CentOS7 with XFS).
I was using 3.7.15 for Gluster. It's good to know they should be the same.
And, I have also quickly set up another set of VMs locally, using the same
version of
2016-09-29 11:58 GMT+02:00 Prashanth Pai :
> Yes, that can be done. Container ACLs allow you to do just that.
Ok, so I have to follow the linked guide.
How do I make this HA and load balanced? I don't see any DB for storing
ACLs or similar.
If I run multiple gluster-swift instances
On Thu, Sep 29, 2016 at 12:02:39AM +0200, Gandalf Corvotempesta wrote:
> I'm doing some tests with proxmox.
> I've created a test VM with a 100GB qcow2 image stored on gluster with sharding.
> All shards were created properly.
>
> Then, I've increased the qcow2 image size from 100GB to 150GB.
>
On Fri, Sep 30, 2016 at 3:16 PM, Niels de Vos wrote:
> On Wed, Sep 28, 2016 at 10:09:34PM +0530, Prasanna Kalever wrote:
>> On Wed, Sep 28, 2016 at 11:24 AM, Muthu Vigneshwaran
>> wrote:
>> >
>> > Hi,
>> >
>> > This is an update to the previous mail about
On Wed, Sep 28, 2016 at 10:38 PM, Ben Werthmann wrote:
> These are interesting projects:
> https://github.com/prashanthpai/antbird
> https://github.com/kshlm/gogfapi
>
> Are there plans for an official go gfapi client library?
I hope to make the gogfapi package official
On 29/09/2016 4:32 AM, mabi wrote:
That's not correct. There is no risk of corruption using
"sync=disabled". In the worst case you just end up with old data but
no corruption. See the following comment from a master of ZFS (Aaron
Toponce):
- Original Message -
> From: "Gandalf Corvotempesta"
> To: "Prashanth Pai"
> Cc: "John Mark Walker" , "gluster-users"
>
> Sent: Thursday, 29 September, 2016 3:55:18 PM
> Subject: Re:
On 09/29/2016 05:18 PM, Sahina Bose wrote:
Yes, this is a GlusterFS problem. Adding gluster users ML
On Thu, Sep 29, 2016 at 5:11 PM, Davide Ferrari wrote:
Hello
maybe this is more GlusterFS- than oVirt-related, but since oVirt
Are there any workarounds to this? RDMA is configured on my servers.
Dennis
On Thu, Sep 29, 2016 at 7:19 AM, Atin Mukherjee wrote:
> Dennis,
>
> Thanks for sharing the logs.
>
> It seems like a volume created with tcp,rdma transport fails to
> start (at least in
It seems like an actual bug; if you can file a bug in Bugzilla, that
would be great.
At least I don't see a workaround for this issue; maybe until the next
update is available with a fix, you can use either an rdma-only or a
tcp-only volume.
Let me know whether this is acceptable; if so, I can give you
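The rdma-only or tcp-only workaround suggested above could be sketched roughly as follows. The volume name, replica count, and brick paths here are placeholders, not taken from the thread:

```shell
# Workaround sketch: create the volume with a single transport instead of
# "tcp,rdma", which reportedly fails to start in this setup.
# "testvol", server names, and brick paths are hypothetical.
gluster volume create testvol replica 2 transport tcp \
    server1:/bricks/brick1 server2:/bricks/brick1
gluster volume start testvol

# Or, if the fabric supports it, rdma alone:
# gluster volume create testvol replica 2 transport rdma \
#     server1:/bricks/brick1 server2:/bricks/brick1
```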
- Original Message -
> From: "Gandalf Corvotempesta"
> To: "Prashanth Pai"
> Cc: "John Mark Walker" , "gluster-users"
>
> Sent: Thursday, 29 September, 2016 3:42:06 PM
> Subject: Re:
On Wed, Sep 28, 2016 at 10:09:34PM +0530, Prasanna Kalever wrote:
> On Wed, Sep 28, 2016 at 11:24 AM, Muthu Vigneshwaran
> wrote:
> >
> > Hi,
> >
> > This is an update to the previous mail about Fine graining of the
> > GlusterFS upstream bugzilla components.
> >
> > Finally we
Thank you Kaleb for putting this together. I think it would also be useful
to list where our official container images would be present too.
Should we make this content persistent somewhere on our website and have a
link from the release notes? The complaints that we encountered after
releasing
Dennis,
Thanks for sharing the logs.
It seems like a volume created with tcp,rdma transport fails to
start (at least in my local set up). The issue here is that although the brick
process comes up, glusterd receives a non-zero ret code from the runner
interface which spawns the brick
On 09/30/2016 02:35 AM, Dennis Michael wrote:
>
> Are there any workarounds to this? RDMA is configured on my servers.
By this, I assume your rdma setup/configuration over IPoIB is working fine.
Can you tell us what machine you are using and whether SELinux is
configured on the machine or
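The checks being asked for above (whether IPoIB works and whether SELinux is configured) might look something like this; the interface name `ib0` is a typical default and an assumption here:

```shell
# Check SELinux mode (Enforcing / Permissive / Disabled):
getenforce

# Confirm the IPoIB interface is up and addressed ("ib0" is an assumption):
ip addr show ib0

# Confirm the RDMA device and port state (requires libibverbs utilities):
ibv_devinfo
```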
Yes, this is a GlusterFS problem. Adding gluster users ML
On Thu, Sep 29, 2016 at 5:11 PM, Davide Ferrari wrote:
> Hello
>
> maybe this is more GlusterFS- than oVirt-related, but since oVirt integrates
> Gluster management and I'm experiencing the problem in an ovirt cluster,
On 30 Sep 2016 at 11:35, "mabi" wrote:
>
> That's not correct. There is no risk of corruption using "sync=disabled".
In the worst case you just end up with old data but no corruption. See the
following comment from a master of ZFS (Aaron Toponce):
>
>
On 09/29/2016 08:03 PM, Davide Ferrari wrote:
It's strange. I've tried to trigger the error again by putting vm04 in
maintenance and stopping the gluster service (from the oVirt GUI) and now
the VM starts correctly. Maybe the arbiter indeed blamed the brick
that was still up before, but how's that
2016-09-29 12:22 GMT+02:00 Prashanth Pai :
> In pure vanilla Swift, ACL information is stored in container DBs (sqlite)
> In gluster-swift, ACLs are stored in the extended attribute of the directory.
So, as long as the directory is stored on gluster, gluster makes this redundant
>
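Since the ACLs are said to live in extended attributes of the container directory, they could be inspected directly on the gluster mount. The mount point and container name below are placeholders, and the exact xattr key name is not given in the thread:

```shell
# Dump all extended attributes on a container directory to see where
# gluster-swift stores its metadata; paths are hypothetical.
getfattr -d -m . /mnt/glustervolume/container1
```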
- Original Message -
> From: "Gandalf Corvotempesta"
> To: "Prashanth Pai"
> Cc: "John Mark Walker" , "gluster-users"
>
> Sent: Thursday, 29 September, 2016 3:23:27 PM
> Subject: Re:
2016-09-29 11:49 GMT+02:00 Prashanth Pai :
> Swift can enforce allowing/denying access to swift users.
> The Swift API provides Account ACLs and Container ACLs for this.
> http://docs.openstack.org/developer/swift/overview_auth.html
>
> There is no mapping between a swift user and
- Original Message -
> From: "Gandalf Corvotempesta"
> To: "Prashanth Pai"
> Cc: "John Mark Walker" , "gluster-users"
>
> Sent: Thursday, 29 September, 2016 2:50:33 PM
> Subject: Re:
2016-09-29 0:02 GMT+02:00 Gandalf Corvotempesta:
> Shouldn't gluster increase the image size?
This morning I've checked the image size and it was properly increased.
So, gluster is able to increase (by adding shards) the VM image only
when needed, right?
I've
2016-09-29 11:03 GMT+02:00 Prashanth Pai :
> Each account can have as many users you'd want.
>
> If you'd like 10 accounts, you'll need 10 volumes.
> If you have 10 volumes, you'd have 10 accounts.
>
> For example (uploading an object):
> curl -v -X PUT -T mytestfile
>
>
> is this quick start guide correct?
> https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md
Except for the part where you get the packages from,
the guide is correct.
>
> What does it mean "NOTE: In Gluster-Swift, accounts must be GlusterFS
> volumes." ?
>
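The truncated upload example above could plausibly be completed along these lines; the endpoint, port, token, and the volume name `myvolume` are assumptions, not from the thread (gluster-swift maps one account to one GlusterFS volume, hence `AUTH_myvolume` in the path):

```shell
# Sketch of an object upload against a gluster-swift endpoint.
# Host, port, token, and "myvolume" are hypothetical.
curl -v -X PUT -T mytestfile \
    -H "X-Auth-Token: $TOKEN" \
    http://localhost:8080/v1/AUTH_myvolume/container1/mytestfile
```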
2016-09-29 6:58 GMT+02:00 Prashanth Pai :
> But gluster-swift isn't so. The distribution and replication
> functionality of Swift is suppressed and delegated to gluster.
> gluster-swift is front-end which processes and converts all
> incoming object requests into filesystem
On 09/29/2016 12:48 AM, Kaushal M wrote:
On Thu, Sep 29, 2016 at 10:10 AM, Vijay Bellur wrote:
On 09/22/2016 07:28 AM, Kaushal M wrote:
The first preview/dev release of GlusterD-2.0 is available now. A
prebuilt binary is available for download from the release-page[1].
On Thu, Sep 29, 2016 at 10:10 AM, Vijay Bellur wrote:
> On 09/22/2016 07:28 AM, Kaushal M wrote:
>>
>> The first preview/dev release of GlusterD-2.0 is available now. A
>> prebuilt binary is available for download from the release-page[1].
>>
>> This is just a preview of what
On 09/22/2016 07:28 AM, Kaushal M wrote:
The first preview/dev release of GlusterD-2.0 is available now. A
prebuilt binary is available for download from the release-page[1].
This is just a preview of what has been happening in GD2, to give
users a taste of how GD2 is evolving.
GD2 can now
On Wed, Sep 28, 2016 at 11:28 AM, Gandalf Corvotempesta
wrote:
> 2016-09-28 16:27 GMT+02:00 Prashanth Pai :
>> There's gluster-swift[1]. It works with both the Swift API and S3 API[2] (using
>> Swift).
>>
>> [1]:
I'm doing some tests with proxmox.
I've created a test VM with a 100GB qcow2 image stored on gluster with sharding.
All shards were created properly.
Then, I've increased the qcow2 image size from 100GB to 150GB.
Proxmox did this well, but on gluster I'm still seeing the old qcow2
image size (1600
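One way to check what sharding actually allocated, as discussed above, is to look at the hidden `.shard` directory on a brick. The brick path and image filename below are placeholders:

```shell
# Shards beyond the first live under the hidden .shard directory on each brick;
# the base file holds only the first shard. Paths are hypothetical.
ls -lh /bricks/brick1/.shard | head

# Apparent size vs. blocks actually allocated can differ for sparse images:
du -h --apparent-size /bricks/brick1/images/vm100.qcow2
du -h /bricks/brick1/images/vm100.qcow2
```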
That's not correct. There is no risk of corruption using "sync=disabled". In
the worst case you just end up with old data but no corruption. See the
following comment from a master of ZFS (Aaron Toponce):
https://pthree.org/2013/01/25/glusterfs-linked-list-topology/#comment-227906
Btw: I have
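The `sync=disabled` setting being debated above is a per-dataset ZFS property; the dataset name `tank/gluster` here is a placeholder:

```shell
# Disable synchronous writes on the backing dataset (the trade-off discussed
# above: a crash can lose up to ~5s of acknowledged writes, but ZFS itself
# stays consistent). "tank/gluster" is a hypothetical dataset name.
zfs set sync=disabled tank/gluster
zfs get sync tank/gluster      # verify the current value

# Restore the default behaviour:
zfs set sync=standard tank/gluster
```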
On Wed, Sep 28, 2016 at 11:24 AM, Muthu Vigneshwaran
wrote:
>
> Hi,
>
> This is an update to the previous mail about Fine graining of the
> GlusterFS upstream bugzilla components.
>
> Finally we have come out with a new structure that would help in easy
> access of the bugs for
These are interesting projects:
https://github.com/prashanthpai/antbird
https://github.com/kshlm/gogfapi
Are there plans for an official go gfapi client library?
On Wed, Sep 28, 2016 at 12:16 PM, John Mark Walker
wrote:
> No - gluster-swift adds the swift API on top of
2016-09-28 18:16 GMT+02:00 John Mark Walker :
> No - gluster-swift adds the swift API on top of GlusterFS. It doesn't
> require Swift itself.
>
> This project is 4 years old now - how do people not know this?
gluster-swift is obsolete.
The "proper" way to use the object