That is very good news!
On Sun, Nov 13, 2016 at 11:58 AM, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:
> On Thu, Oct 20, 2016 at 11:56 AM, Humble Devassy Chirammal <
> humble.deva...@gmail.com> wrote:
>
>> Hi All,
>>
>> We have kept our official Gluster Container images in Docker hub for
>> CentOS and Fedora distros for some time now.
>>
>> https://hub.docker.com/r/gluster/gluster-centos/
>>
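For anyone who wants to try the image, usage looks roughly like this (the image name comes from the hub link above; the run flags are only a sketch, check the image's README on Docker Hub for the supported options):

```shell
# Pull the official CentOS-based Gluster image linked above.
docker pull gluster/gluster-centos

# Gluster daemons manage storage and ports, so such containers are
# typically run privileged with host networking; treat these flags as a
# starting point, not a recommendation.
docker run -d --name gluster --privileged --net=host gluster/gluster-centos
```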
On Sat, Nov 12, 2016 at 2:11 PM, Kevin Lemonnier wrote:
> >
> > On the other hand at home, I tried to use GlusterFS for VM images in a
> > simple replica 2 setup with Pacemaker for HA. VMs were constantly
> > failing en masse even without making any changes. Very often the images
> > got corrupted and had to be restored from backups. This was over a year
> > ago.
On 12 Nov 2016 9:04 PM, "Alex Crow" wrote:
IMHO GlusterFS would be a great
> product if it tried to:
>
> a) Add fewer features per release, and/or slow down the release cycle.
> Maybe have a "Feature"
> those that need to try new, well, features.
> b) Concentrate on
>
> On the other hand at home, I tried to use GlusterFS for VM images in a
> simple replica 2 setup with Pacemaker for HA. VMs were constantly
> failing en masse even without making any changes. Very often the images
> got corrupted and had to be restored from backups. This was over a year
> ago
> Sure, but thinking about it later we realised that it might be for the better.
> I believe when sharding is enabled the shards will be dispersed across all the
> replica sets, meaning that losing a replica set will kill all your VMs.
>
> Imagine a 16x3 volume for example, losing 2 bricks
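Kevin's point can be put into rough numbers. A minimal sketch, assuming shards land uniformly and independently on replica sets (an idealisation for illustration, not a description of gluster's actual DHT placement):

```python
# Back-of-envelope risk estimate for sharded VM images, assuming shards
# are placed uniformly and independently across replica sets. This is an
# illustrative assumption, not gluster's real layout algorithm.

def survival_probability(replica_sets: int, shards: int) -> float:
    """Probability that none of a file's shards sit on one failed replica set."""
    return ((replica_sets - 1) / replica_sets) ** shards

# A 100 GB image split into 64 MB shards has 1600 shards; on a 16x3
# volume (16 replica sets) it almost surely loses shards when one
# replica set dies.
print(survival_probability(16, 1600))
```

Under this model the survival chance collapses towards zero as shard count grows, which matches the intuition in the thread that losing one replica set kills essentially every VM.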
On 12 Nov 2016 19:29, "Kevin Lemonnier" wrote:
> I don't understand the issue. Let's say I can fit 30 VMs on a 3 node
> cluster; whenever I need to create VM 31 I just order 3 nodes and replicate
> the exact same cluster. I get the exact same performance as on the
On 11/11/2016 09:09 PM, Sander Eikelenboom wrote:
Friday, November 11, 2016, 4:28:36 PM, you wrote:
Feature requests go in Bugzilla anyway.
Create your volume with the populated brick as brick one. Start it and "heal
full".
For example (server names and brick paths are placeholders):
gluster> volume create testvolume replica 2 transport tcp server1:/bricks/b1 server2:/bricks/b1
gluster> volume start testvolume
gluster> volume heal testvolume full
On 12 Nov 2016 16:13, "David Gossage" wrote:
>
> also maybe a code monkey to sit at my keyboard and screech at me whenever
> I type sudo so I pay attention to what I am about to do.
>
Obviously yes, but for destructive operations a confirmation should always be asked.
On Sat, Nov 12, 2016 at 7:42 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 12 Nov 2016 14:27, "Lindsay Mathieson" wrote:
> >
> > gluster volume reset *finger twitch*
> >
> >
> > And boom! volume gone.
> >
>
> There are too many destructive operations in gluster :)
> More security on stored data please!
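The confirmation step people are asking for can be approximated today with a small wrapper around the CLI. A sketch (the subcommand names come from the thread; the wrapper itself is hypothetical, not part of gluster):

```shell
#!/bin/sh
# Hypothetical guard: require a typed "yes" before destructive gluster
# subcommands ("volume reset", "volume delete") are forwarded.
confirm_destructive() {
    case "$1 $2" in
        "volume reset"|"volume delete")
            printf 'About to run: gluster %s -- type yes to continue: ' "$*"
            read -r answer
            [ "$answer" = "yes" ] || return 1
            ;;
    esac
    return 0
}

# Intended use as a gluster front-end: confirm_destructive "$@" && exec gluster "$@"
# Non-destructive commands pass straight through:
confirm_destructive volume info && echo "volume info: allowed"
```

This only guards the handful of subcommands listed in the case statement; a real fix would belong in the CLI itself.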
On 12/11/2016 9:58 PM, Gandalf Corvotempesta wrote:
Exactly. I've proposed a warning in the cli when changing the shard
size, but this is still unfixed, and this scares me.
It's a critical bug, IMHO, and should be addressed ASAP, or any user
could destroy the whole cluster with a simple command.
On 12 Nov 2016 12:53, "Kevin Lemonnier" wrote:
> Sure, but thinking about it later we realised that it might be for the
> better.
> I believe when sharding is enabled the shards will be dispersed across
> all the replica sets, meaning that losing a replica set will kill all
> your VMs.
>
>Having to create multiple clusters is not a solution and is much more
>expensive.
>And if you corrupt data from a single cluster you still have issues.
>
Sure, but thinking about it later we realised that it might be for the better.
I believe when sharding is enabled the shards will be dispersed across all
the replica sets, meaning that losing a replica set will kill all your VMs.
On 12 Nov 2016 10:21, "Kevin Lemonnier" wrote:
> We've had a lot of problems in the past, but at least for us 3.7.12 (and
> 3.7.15) seems to be working pretty well as long as you don't add bricks.
> We started doing multiple little clusters and abandoned the idea of
>Don't get me wrong, but I'm seeing too many "critical" issues like file
>corruption, crashes or similar recently.
>Is gluster ready for production?
>I'm scared about placing our production VMs (more or less 80) on gluster;
>in case of corruption I'll lose everything.