Re: [Gluster-users] How gluster parallelize reads

2016-10-03 Thread Jeff Darcy
> Anyway, in Gerrit you are talking about "local" reads. How could you
> have a "local" read? This would only be possible by mounting the volume
> locally on a server. Is this a supported configuration?

Whether or not it's supported for native protocol, it's a common case
when using NFS or SMB with the servers for those protocols appearing
as native-protocol clients on the server machines.

> Perhaps a "priority" could be added as a mount option, so that when
> mounting the Gluster volume I can set the preferred host for reads.
> 
> Something like this:
> 
> mount -t glusterfs -o preferred-read-host=1.2.3.4 server1:/test-volume
> /mnt/glusterfs

It's a great idea that would work well for a volume containing a single
replica set, but what about when that volume contains multiple?  Specify
a preferred read source for each?  Even that will get tricky when we
start to work around the limitation of adding bricks in multiples of the
replica count.  Then we'll be building new replica sets "automatically"
so the user would have to keep re-examining the volume structure to
decide on a new priority list.  Also, what should we do if that priority
list is "pathological" in the sense of creating unnecessary hot spots?
Should we accept it as an expression of the user's will anyway, or
override it to ensure continued smooth operation?

IMO we should try harder to find the right answers *autonomously*,
perhaps based on user-specified relationships between client networks
and servers.  (Ceph does some of this in their CRUSH maps, but I think
that conflates separate problems of managing placement and traffic.)  To
look at it another way, we'd be doing the same calculations the user
might do to create that explicit priority list, except we'd be in a
better position to *re*calculate that list when appropriate.  We're
thinking about some of this in the context of handling multiple networks
better in general, but it's still a bit of a research effort because
AFAICT nobody else has come up with much empirically-backed research to
guide solutions.
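As a rough illustration only (the function name, scoring policy, and inputs are all invented here, not anything AFR implements), the autonomous selection described above could amount to: score each server in a replica set by the user-specified client-network/server relationship, then break ties deterministically so different clients still spread across equally-near servers.

```python
import ipaddress

# Invented sketch of autonomous read-source selection: prefer servers
# on a network the client considers "near", with a deterministic
# per-client tie-break so equally-near servers share the read load.
def pick_read_source(client_ip, replica_set, near_nets):
    def near(server_ip):
        addr = ipaddress.ip_address(server_ip)
        return any(addr in ipaddress.ip_network(net) for net in near_nets)
    # nearness dominates; the hash only breaks ties between peers
    return max(replica_set,
               key=lambda s: (near(s), hash((client_ip, s)) % 97))

# a client on 10.0.0.0/24 prefers the server on its own subnet
print(pick_read_source("10.0.0.5",
                       ["10.0.0.1", "192.168.1.1"],
                       ["10.0.0.0/24"]))  # -> 10.0.0.1
```

Recomputing such a list whenever replica sets change is exactly the recalculation step a static mount option would push onto the user.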

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] How gluster parallelize reads

2016-10-03 Thread Gandalf Corvotempesta
2016-10-03 20:48 GMT+02:00 Jeff Darcy :
> Basic storage-developer conservatism.  Zero was the behavior before
> read-hash-mode was implemented.  As strongly as some of us might believe
> that such tweaks lead to better behavior - as I did with this one in
> 2012[1] - we've kind of learned the hard way that existing users often
> disagree with our estimations.  Thus, new behavior is often kept as a
> "special" for particular known environments or use cases, and the
> default is left unchanged until there's clear feedback indicating it
> should be otherwise.

That's fine, but I haven't found a description of these values in the docs.
Are they documented?
Anyway, in Gerrit you are talking about "local" reads. How could you
have a "local" read? This would only be possible by mounting the volume
locally on a server. Is this a supported configuration?

And, as you write, the local server isn't always the fastest to
respond, so that's not a good check.
Perhaps a "priority" could be added as a mount option, so that when
mounting the Gluster volume I can set the preferred host for reads.

Something like this:

mount -t glusterfs -o preferred-read-host=1.2.3.4 server1:/test-volume
/mnt/glusterfs


Re: [Gluster-users] How gluster parallelize reads

2016-10-03 Thread Jeff Darcy
> > 0 means use the first server to respond, I think - at least that's my guess
> > of what "first up server" means.
> > 1 means hashed by GFID, so clients will use the same server for a given file but
> > different files may be accessed from different nodes.
> 
> I think that 1 is better.
> Why is "0" the default?

Basic storage-developer conservatism.  Zero was the behavior before
read-hash-mode was implemented.  As strongly as some of us might believe
that such tweaks lead to better behavior - as I did with this one in
2012[1] - we've kind of learned the hard way that existing users often
disagree with our estimations.  Thus, new behavior is often kept as a
"special" for particular known environments or use cases, and the
default is left unchanged until there's clear feedback indicating it
should be otherwise.

[1] http://review.gluster.org/#/c/2926/


Re: [Gluster-users] How gluster parallelize reads

2016-10-03 Thread Gandalf Corvotempesta
2016-10-03 20:13 GMT+02:00 Alastair Neil :
> I think this might give you something like the behaviour you are looking
> for: it will not balance blocks across different servers, but it will distribute
> reads from clients across all the servers.
>
> cluster.read-hash-mode 2
>
> 0 means use the first server to respond, I think - at least that's my guess
> of what "first up server" means.
> 1 means hashed by GFID, so clients will use the same server for a given file but
> different files may be accessed from different nodes.

I think that 1 is better.
Why is "0" the default?


Re: [Gluster-users] How gluster parallelize reads

2016-10-03 Thread Alastair Neil
I think this might give you something like the behaviour you are looking
for: it will not balance blocks across different servers, but it will
distribute reads from clients across all the servers.

cluster.read-hash-mode 2

0 means use the first server to respond, I think - at least that's my guess
of what "first up server" means.
1 means hashed by GFID, so clients will use the same server for a given file but
different files may be accessed from different nodes.
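A toy model of the GFID-hash mode (value 1), not AFR's actual code: hashing the file's GFID means every client independently computes the same read source for a given file, while different files spread across the replicas.

```python
import uuid

def pick_read_child(gfid, n_replicas):
    # toy version of read-hash-mode 1: the file's GFID (a UUID)
    # is hashed, so a given file always maps to the same replica,
    # no matter which client does the computation
    return uuid.UUID(gfid).int % n_replicas

gfid = "a0b1c2d3-e4f5-4789-abcd-ef0123456789"
# every client agrees on the read source for this file
assert pick_read_child(gfid, 3) == pick_read_child(gfid, 3)
```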

On 3 October 2016 at 05:50, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2016-10-03 11:33 GMT+02:00 Joe Julian :
> > By default, the client reads from localhost first, if the client is also
> a
> > server, or the first to respond. This can be tuned to balance the load
> > better (see "gluster volume set help") but that's not necessarily more
> > efficient. As always, it depends on the workload.
>
> So it's not true that Gluster aggregates bandwidth on reads.
> Each client will always read from one node. Having 3 nodes means that
> I can support three times the number of clients.
>
> It's like Ethernet bonding: each transfer is always subject to the
> single port speed, but I can support twice the connections by creating
> a bond of 2.
>
> > Reading as you suggested is actually far less efficient. The reads would
> > always be coming from disk and never in any readahead cache.
>
> What I mean is reading different parts of the same file from multiple
> servers, not reading the same part of a file from multiple servers.

Re: [Gluster-users] FUSE mounts and Docker integration

2016-10-03 Thread Kremmyda, Olympia (Nokia - GR/Athens)
Hi,

We have a similar setup and we use Kubernetes and its Persistent Volumes 
subsystem for this purpose.
However, how can a user exit from a mount point?

Best Regards,
Olia

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Gandalf Corvotempesta
Sent: Thursday, September 22, 2016 3:39 PM
To: gluster-users 
Subject: [Gluster-users] FUSE mounts and Docker integration

I would like to use Gluster as shared storage for apps deployed
through a PaaS that we are creating.
Currently I'm able to mount a gluster volume in each "compute" nodes
and then mount a sudirectory from this shared volume to each Docker
app.

Obviously this is not very secure, as also written in the official Docker
docs. There are some cases where users could escape from the mount point
and traverse the whole FS.

One solution (I think) would be to use a Docker volume plugin like this:
https://github.com/amarkwalder/docker-volume-glusterfs

but having to create one volume (with replicas and so on) for each app
wastes resources.

Any solution?


Re: [Gluster-users] FUSE mounts and Docker integration

2016-10-03 Thread Kremmyda, Olympia (Nokia - GR/Athens)
I’m sorry, I don’t understand. If it has been fixed, why is it an issue on
the latest deployments?


-Original Message-
From: Gandalf Corvotempesta [mailto:gandalf.corvotempe...@gmail.com] 
Sent: Monday, October 03, 2016 12:25 PM
To: Kremmyda, Olympia (Nokia - GR/Athens) 
Cc: gluster-users 
Subject: Re: [Gluster-users] FUSE mounts and Docker integration

2016-10-03 11:02 GMT+02:00 Kremmyda, Olympia (Nokia - GR/Athens)
:
> Hi,
>
> We have a similar setup and we use Kubernetes and its Persistent Volumes 
> subsystem for this purpose.
> However, how can a user exit from a mount point?

There was a bug in LXC, fixed last year.

[Gluster-users] 3.6 branch upgrade plan

2016-10-03 Thread Roman
Hello, dear community!
It has been quite a while since I last wrote here, but that only means that
everything has been just fine with our Gluster storage for KVM VMs.

We are running 3.6.5 on Debian wheezy servers. As wheezy is now part of
history, we would like to upgrade. We're pretty happy with the 3.6 branch and
we'd like to stay with it. As far as I can see from the Gluster repo, 3.6.9
supports both jessie and wheezy.

So I've made a little plan and I'd like to ask if it is OK (maybe someone
has already done something like this and could advise me).

1. stop all KVM VMs, unmount the Gluster share from the KVM hosts
2. stop glusterd on both of our storage servers
3. upgrade both storage servers to jessie via apt-get dist-upgrade
4. update 3.6.5 to 3.6.9
5. start everything

Should this work, or might I face some problems?

-- 
Best regards,
Roman.

Re: [Gluster-users] [Gluster-devel] New commands for supporting add/remove brick and rebalance on tiered volume

2016-10-03 Thread Atin Mukherjee
Hari,

I think you misunderstood my statement; probably I shouldn't have mentioned
existing semantics. One example here should clarify it, so this is what I
propose:

gluster v tier <volname> remove-brick tier-type hot <brick> start

Note that my request was to add an argument, i.e. tier-type, here.
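A toy parser (invented for illustration, not glusterd's actual CLI code) showing how the proposed tier-type argument would keep add-brick/remove-brick/rebalance parsing uniform instead of introducing per-tier verbs:

```python
# Toy sketch of parsing the proposed syntax, e.g.:
#   gluster v tier <volname> remove-brick tier-type hot <brick> start
def parse_tier_cmd(words):
    vol, op, rest = words[2], words[3], words[4:]
    tier = None
    if rest[:1] == ["tier-type"]:
        tier, rest = rest[1], rest[2:]  # "hot" or "cold"
    return {"volume": vol, "op": op, "tier": tier, "args": rest}

cmd = "v tier myvol remove-brick tier-type hot server1:/b1 start"
parsed = parse_tier_cmd(cmd.split())
# the same parser handles add-brick and rebalance unchanged,
# since tier-type is just an optional extra argument
```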


On Monday 3 October 2016, Hari Gowtham  wrote:

> Hi Atin,
> Yes, we can do it. The existing semantics need some changes because of the
> attach-tier command (gluster volume tier <volname> attach ...): the
> parsing has to be changed to accommodate the attach-tier command. If used
> as I mentioned, then we can make the attach-tier functions generic for
> adding bricks as well. The other thing with using args is that it needs
> changes to support keywords like replica <count> as well. So when we try
> to make a generic function for add-brick on a tiered volume and
> attach-tier, these keywords like replica <count> and tier-type <type>
> will need more changes.
>
> So I feel it's better to have a separate command instead of the args.
> If I'm missing any pros of having the args, let me know.
>
> - Original Message -
> > From: "Atin Mukherjee" >
> > To: "Hari Gowtham" >
> > Cc: "gluster-devel" >,
> "gluster-users" >
> > Sent: Monday, October 3, 2016 2:31:40 PM
> > Subject: Re: [Gluster-devel] New commands for supporting add/remove
> brick and rebalance on tiered volume
> >
> > On Mon, Oct 3, 2016 at 12:21 PM, Hari Gowtham  > wrote:
> >
> > > Hi,
> > >
> > > The current add and remove brick commands aren't sufficient to support
> > > add/remove brick on tiered volumes. So the commands need minor changes
> > > like mentioning which tier we are doing the operation on. So in order
> > > to specify the tier on which we are performing the changes, I thought
> > > of using the following commands for add and remove brick
> > >
> > > adding brick on tiered volume:
> > > gluster volume tier  add-hot-brick/add-cold-brick  ...
> > > 
> > >
> > > removing brick on a tiered volume:
> > > gluster volume tier  remove-hot-brick/remove-cold-brick
> 
> > > ... 
> > >
> > > I have framed it this way because once we mention details about tiering
> > > these commands become specific to tier and the syntax that we follow
> for
> > > commands are gluster volume component  ...
> > > So i have made sure that the keyword tier comes after volume.
> > > Need suggestions to make these commands better.
> > >
> > > Similarly once we support add/remove brick we will be having rebalance
> > > commands and the idea is to support rebalance separately for each tier.
> > > So we will have to display the rebalance status, for which we need
> > > rebalance commands specific to each tier. So these are the commands we have
> > > thought of:
> > > gluster v tier  hot-rebalance/cold-rebalance
> 
> > >
> > > Need your comments regarding this.
> > >
> >
> > Overall it makes sense. Just a comment here. Instead of mentioning
> > remove/add/rebalance-hot/cold-brick can we have an additional arg called
> > tier-type  and continue with the existing semantics like
> > remove-brick, add-brick and rebalance?
> >
> >
> > > --
> > > Regards,
> > > Hari.
> > >
> > > ___
> > > Gluster-devel mailing list
> > > gluster-de...@gluster.org 
> > > http://www.gluster.org/mailman/listinfo/gluster-devel
> > >
> >
> >
> >
> > --
> >
> > --Atin
> >
>
> --
> Regards,
> Hari.
>
>

-- 
--Atin

Re: [Gluster-users] How gluster parallelize reads

2016-10-03 Thread Gandalf Corvotempesta
2016-10-03 11:33 GMT+02:00 Joe Julian :
> By default, the client reads from localhost first, if the client is also a
> server, or the first to respond. This can be tuned to balance the load
> better (see "gluster volume set help") but that's not necessarily more
> efficient. As always, it depends on the workload.

So it's not true that Gluster aggregates bandwidth on reads.
Each client will always read from one node. Having 3 nodes means that
I can support three times the number of clients.

It's like Ethernet bonding: each transfer is always subject to the
single port speed, but I can support twice the connections by creating
a bond of 2.
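The bonding analogy can be put into toy numbers (illustrative arithmetic only, ignoring protocol overhead): with replica 3, a FUSE client's write stream is sent to all three replicas at once, while each read comes from a single replica at full link speed, so it is the aggregate read bandwidth across clients, not the per-client rate, that scales with the number of servers.

```python
def client_write_rate(client_gbps, replica_count):
    # the client sends every write to all replicas at once,
    # so its uplink is divided by the replica count
    return client_gbps / replica_count

def aggregate_read_rate(server_gbps, n_servers):
    # each client reads from one server, but different clients
    # can read from different servers in parallel
    return server_gbps * n_servers

assert client_write_rate(1.0, 3) == 1.0 / 3   # ~0.33 Gb/s per writing client
assert aggregate_read_rate(1.0, 3) == 3.0     # 3 Gb/s spread over many clients
```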

> Reading as you suggested is actually far less efficient. The reads would
> always be coming from disk and never in any readahead cache.

What I mean is reading different parts of the same file from multiple
servers, not reading the same part of a file from multiple servers.


Re: [Gluster-users] How gluster parallelize reads

2016-10-03 Thread Joe Julian
By default, the client reads from localhost first, if the client is also 
a server, or the first to respond. This can be tuned to balance the load 
better (see "gluster volume set help") but that's not necessarily more 
efficient. As always, it depends on the workload.


Reading as you suggested is actually far less efficient. The reads would 
always be coming from disk and never in any readahead cache.


On 10/03/2016 02:24 AM, Gandalf Corvotempesta wrote:

Hi to all.
I know that when writing, a client writes to all replicas at once,
so the transfer rate is total_bandwidth/replica_count.

But what about reads? Is a client able to read from multiple nodes at
once? What kind of data does it read?

Let me try to explain:

let's assume a 3MB file stored on 3 bricks on 3 servers (1 brick per
server). For simplicity, let's assume 1MB is stored on each brick.

When a client reads, will it read the first MB from all 3 servers in
parallel, then move on to the second MB and so on, or is it smart enough
to read the first MB from server1, the second from server2 and the
third from server3 at the same time?

I think the second case is way faster than the first.




[Gluster-users] How gluster parallelize reads

2016-10-03 Thread Gandalf Corvotempesta
Hi to all.
I know that when writing, a client writes to all replicas at once,
so the transfer rate is total_bandwidth/replica_count.

But what about reads? Is a client able to read from multiple nodes at
once? What kind of data does it read?

Let me try to explain:

let's assume a 3MB file stored on 3 bricks on 3 servers (1 brick per
server). For simplicity, let's assume 1MB is stored on each brick.

When a client reads, will it read the first MB from all 3 servers in
parallel, then move on to the second MB and so on, or is it smart enough
to read the first MB from server1, the second from server2 and the
third from server3 at the same time?

I think the second case is way faster than the first.
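A back-of-the-envelope model of the two strategies in the question (toy arithmetic: no latency, no caching, both of which in practice favour whole-file reads from a warm server): striping only helps when the server links, not the client link, are the bottleneck.

```python
def read_time(size_mb, client_mbps, server_mbps, n_sources):
    # pull equal parts of the file from n_sources servers in parallel
    per_source_mb = size_mb / n_sources
    server_limited = per_source_mb * 8 / server_mbps  # slowest stream
    client_limited = size_mb * 8 / client_mbps        # shared client NIC
    return max(server_limited, client_limited)

# equal links: striping a 3 MB file 3 ways doesn't beat the client NIC
assert read_time(3, 1000, 1000, 3) == read_time(3, 1000, 1000, 1)
# fast client, slower servers: striping 3 ways is genuinely faster
assert read_time(3, 10000, 1000, 3) < read_time(3, 10000, 1000, 1)
```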


Re: [Gluster-users] FUSE mounts and Docker integration

2016-10-03 Thread Gandalf Corvotempesta
2016-10-03 11:02 GMT+02:00 Kremmyda, Olympia (Nokia - GR/Athens)
:
> Hi,
>
> We have a similar setup and we use Kubernetes and its Persistent Volumes 
> subsystem for this purpose.
> However, how can a user exit from a mount point?

There was a bug in LXC, fixed last year.


Re: [Gluster-users] [Gluster-devel] New commands for supporting add/remove brick and rebalance on tiered volume

2016-10-03 Thread Atin Mukherjee
On Mon, Oct 3, 2016 at 12:21 PM, Hari Gowtham  wrote:

> Hi,
>
> The current add and remove brick commands aren't sufficient to support
> add/remove brick on tiered volumes. So the commands need minor changes
> like mentioning which tier we are doing the operation on. So in order
> to specify the tier on which we are performing the changes, I thought
> of using the following commands for add and remove brick
>
> adding brick on tiered volume:
> gluster volume tier <volname> add-hot-brick/add-cold-brick <brick> ...
> 
>
> removing brick on a tiered volume:
> gluster volume tier <volname> remove-hot-brick/remove-cold-brick <brick>
> ... 
>
> I have framed it this way because once we mention details about tiering
> these commands become specific to tier, and the syntax that we follow for
> commands is gluster volume <component> ...
> So I have made sure that the keyword tier comes after volume.
> Need suggestions to make these commands better.
>
> Similarly once we support add/remove brick we will be having rebalance
> commands and the idea is to support rebalance separately for each tier.
> So we will have to display the rebalance status, for which we need
> rebalance commands specific to each tier. So these are the commands we have
> thought of:
> gluster v tier <volname> hot-rebalance/cold-rebalance
>
> Need your comments regarding this.
>

Overall it makes sense. Just one comment: instead of mentioning
remove/add/rebalance-hot/cold-brick, can we have an additional arg called
tier-type <type> and continue with the existing semantics of
remove-brick, add-brick and rebalance?


> --
> Regards,
> Hari.
>
>



-- 

--Atin

Re: [Gluster-users] What application workloads are too slow for you on gluster?

2016-10-03 Thread Gandalf Corvotempesta
On 03 Oct 2016 09:33, "Kevin Lemonnier" wrote:
>
> Not sure about that for the web hosting workload, but that would be very
> interesting for our VM hosting service. I read the topic about that,
> if there are docs about how to set that up, I might test it next time
> I have to set up a VM cluster.
>

How do you plan to use block storage for VM hosting?
With VM hosting you could create an additional qcow2 disk and access it
natively with QEMU, couldn't you?

That should be way faster than using iSCSI to access a block device on top
of Gluster.

I think the top priorities for Gluster should be:

- small-file performance, so that Gluster could replace NFS on almost every
workload (I know, writing 3 times as Gluster does will always be slower
than writing once as NFS does, but the replica count is not the only issue
Gluster has with small files. Even an "ls -la" is slow)

- some way to add bricks in a number smaller than the replica count. I don't
know how, but Ceph does it even without EC, so there must be a way to
accomplish it.

In a full replica 3 cluster, adding a brick means adding at least 3 servers
with 1 brick each.

DRBD9 does something similar by adding one server at a time and rebalancing
the cluster:
http://www.drbd.org/en/doc/users-guide-90/s-rebalance

- native SNMP monitoring (with smuxpeer or similar, traps and so on).
Having a good monitoring system is mandatory in every production
environment. This could really be a plus for Gluster. Triggering traps on
events is cool!
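The "add fewer servers than the replica count" idea above can be sketched with a toy placement (invented for illustration; not how Gluster, Ceph, or DRBD9 actually place data): if replica sets are laid out in a ring over the servers, any server count at or above the replica count is valid, so servers can be added one at a time and data rebalanced onto the new ring.

```python
def ring_placement(n_servers, replica=3):
    # replica set i lives on servers i, i+1, i+2 (mod n_servers),
    # so the layout stays valid for any n_servers >= replica
    return [tuple((i + r) % n_servers for r in range(replica))
            for i in range(n_servers)]

print(ring_placement(3))  # [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(ring_placement(4))  # one server added: the sets re-ring over 4 nodes
```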

Re: [Gluster-users] What application workloads are too slow for you on gluster?

2016-10-03 Thread Kevin Lemonnier
Not sure about that for the web hosting workload, but that would be very
interesting for our VM hosting service. I read the topic about that,
if there are docs about how to set that up, I might test it next time
I have to set up a VM cluster.

On Mon, Oct 03, 2016 at 12:53:36PM +0530, Pranith Kumar Karampuri wrote:
>If doing a VM on top of gluster is the kind of solution you are resorting
>to. Do you think using gluster-block would be a good idea here? It is
>still in development state. We are working on how to get snapshot
>functionalities. But if you guys want to test and give us feedback, that
>would be very helpful and we can move very quickly to get it in GA state.
>On Tue, Sep 27, 2016 at 5:13 PM, André Bauer  wrote:
> 
>  Dito...
>  On 24.09.2016 at 17:29, Kevin Lemonnier wrote:
>  > On Sat, Sep 24, 2016 at 07:48:53PM +0530, Pranith Kumar Karampuri wrote:
>  >> hi,
>  >> I want to get a sense of the kinds of applications you tried
>  >> out on gluster but you had to find other alternatives because gluster
>  >> didn't perform well enough or the solution would become too expensive if
>  >> you move to an all-SSD kind of setup.
>  >
>  > Hi,
>  >
>  > Web Hosting is what comes to mind for me. Applications like
>  prestashop, wordpress,
>  > some custom apps ... I know that I try to use DRBD as much as I can
>  for that since
>  > GlusterFS makes the sites just way too slow to use, I tried both fuse
>  and NFS (not
>  > ganesha since I'm on debian every time though, don't know if that
>  matters).
>  > Using things like OPCache and moving the application's cache outside
>  of the volume
>  > are helping a lot but that brings a whole loads of other problems you
>  can't always
>  > deal with, so most of the time I just don't use gluster for that.
>  >
>  > Last time I really had to use gluster to host a web app I ended up
>  installing a VM
>  > with a disk stored on glusterfs and configuring a simple NFS server,
>  that was way
>  > faster than mounting a gluster volume directly on the web servers. At
>  least that
>  > proves VM hosting works pretty well now though !
>  >
>  > Now I can't try tiering, unfortunately I don't have the option of
>  having hardware for
>  > that, but maybe that would indeed solve it if it makes looking up lots
>  of tiny files
>  > quicker.
>  >
>  >
>  >
> 
>  --
>  Best regards,
>  André Bauer
> 
>  MAGIX Software GmbH
>  André Bauer
>  Administrator
>  August-Bebel-Straße 48
>  01219 Dresden
>  GERMANY
> 
>  tel.: 0351 41884875
>  e-mail: aba...@magix.net
>  aba...@magix.net 
>  www.magix.com 
> 
>  Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
>  Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205
> 
>  --
>  The information in this email is intended only for the addressee named
>  above. Access to this email by anyone else is unauthorized. If you are
>  not the intended recipient of this message any disclosure, copying,
>  distribution or any action taken in reliance on it is prohibited and
>  may be unlawful. MAGIX does not warrant that any attachments are free
>  from viruses or other defects and accepts no liability for any losses
>  resulting from infected email transmissions. Please note that any
>  views expressed in this email may be those of the originator and do
>  not necessarily represent the agenda of the company.
>  --
> 
>--
>Pranith



-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111



Re: [Gluster-users] What application workloads are too slow for you on gluster?

2016-10-03 Thread Pranith Kumar Karampuri
If doing a VM on top of gluster is the kind of solution you are resorting
to. Do you think using gluster-block would be a good idea here? It is still
in development state. We are working on how to get snapshot
functionalities. But if you guys want to test and give us feedback, that
would be very helpful and we can move very quickly to get it in GA state.

On Tue, Sep 27, 2016 at 5:13 PM, André Bauer  wrote:

> Dito...
>
> On 24.09.2016 at 17:29, Kevin Lemonnier wrote:
> > On Sat, Sep 24, 2016 at 07:48:53PM +0530, Pranith Kumar Karampuri wrote:
> >> hi,
> >> I want to get a sense of the kinds of applications you tried
> >> out on gluster but you had to find other alternatives because gluster
> >> didn't perform well enough or the solution would become too expensive if
> >> you move to an all-SSD kind of setup.
> >
> > Hi,
> >
> > Web Hosting is what comes to mind for me. Applications like prestashop,
> wordpress,
> > some custom apps ... I know that I try to use DRBD as much as I can for
> that since
> > GlusterFS makes the sites just way too slow to use, I tried both fuse
> and NFS (not
> > ganesha since I'm on debian every time though, don't know if that
> matters).
> > Using things like OPCache and moving the application's cache outside of
> the volume
> > are helping a lot but that brings a whole loads of other problems you
> can't always
> > deal with, so most of the time I just don't use gluster for that.
> >
> > Last time I really had to use gluster to host a web app I ended up
> installing a VM
> > with a disk stored on glusterfs and configuring a simple NFS server,
> that was way
> > faster than mounting a gluster volume directly on the web servers. At
> least that
> > proves VM hosting works pretty well now though !
> >
> > Now I can't try tiering, unfortunately I don't have the option of having
> hardware for
> > that, but maybe that would indeed solve it if it makes looking up lots
> of tiny files
> > quicker.
> >
> >
> >
> >
>
> --
> Best regards,
> André Bauer
>
> MAGIX Software GmbH
> André Bauer
> Administrator
> August-Bebel-Straße 48
> 01219 Dresden
> GERMANY
>
> tel.: 0351 41884875
> e-mail: aba...@magix.net
> aba...@magix.net 
> www.magix.com 
>
> Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
> Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205
>
> --
> The information in this email is intended only for the addressee named
> above. Access to this email by anyone else is unauthorized. If you are
> not the intended recipient of this message any disclosure, copying,
> distribution or any action taken in reliance on it is prohibited and
> may be unlawful. MAGIX does not warrant that any attachments are free
> from viruses or other defects and accepts no liability for any losses
> resulting from infected email transmissions. Please note that any
> views expressed in this email may be those of the originator and do
> not necessarily represent the agenda of the company.
> --
>



-- 
Pranith

Re: [Gluster-users] Block storage

2016-10-03 Thread Prasanna Kalever
On Mon, Oct 3, 2016 at 12:40 PM, Gandalf Corvotempesta
 wrote:
> On 03 Oct 2016 08:48, "Pranith Kumar Karampuri" wrote:
>>
>> It is in early development phase. If you don't want snapshot
>> functionality, then I think the work is reasonably ready. Do let us know if
>> you want to take it for a spin and give us feedback.
>
> Any guide on how to configure it?

Here are some POC guides for using the block store in containers:

Docker: 
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
Kubernetes: 
https://pkalever.wordpress.com/2016/06/29/non-shared-persistent-gluster-storage-with-kubernetes/
Openshift: 
https://pkalever.wordpress.com/2016/08/16/read-write-once-persistent-storage-for-openshift-origin-using-gluster/

Performance numbers measured can be found at:
htmlpreview.github.io?https://github.com/pkalever/iozone_results_gluster/blob/master/block-store/iscsi-fuse-virt-mpath-shard-4/html_out/index.html

Let me know how this goes :)

Cheers,
--
Prasanna

>
>


Re: [Gluster-users] Block storage

2016-10-03 Thread Gandalf Corvotempesta
On 03 Oct 2016 08:48, "Pranith Kumar Karampuri" wrote:
>
> It is in early development phase. If you don't want snapshot
> functionality, then I think the work is reasonably ready. Do let us know if
> you want to take it for a spin and give us feedback.

Any guide on how to configure it?

Re: [Gluster-users] Block storage

2016-10-03 Thread Pranith Kumar Karampuri
It is in early development phase. If you don't want snapshot functionality,
then I think the work is reasonably ready. Do let us know if you want to
take it for a spin and give us feedback.

On Sat, Oct 1, 2016 at 2:48 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> I was looking for block storage in Gluster, but I don't see the docs anymore.
> Is this an unsupported feature?
>



-- 
Pranith

Re: [Gluster-users] Healing Delays

2016-10-03 Thread Pranith Kumar Karampuri
On Sun, Oct 2, 2016 at 5:49 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> On 2/10/2016 12:48 AM, Lindsay Mathieson wrote:
>
>> Only the heal count does not change; it just does not seem to start. It
>> can take hours before it shifts, but once it does, it's quite rapid. Node 1
>> has restarted and the heal count has been static at 511 shards for 45
>> minutes now. Nodes 1 & 2 have low CPU load; node 3 has glusterfsd pegged at
>> 800% CPU.
>>
>
> Ok, had a try at systematically reproducing it this morning and was
> actually unable to do so - quite weird. Testing was the same as last night
> - move all the VM's off a server and reboot it, wait for the healing to
> finish. This time I tried it with various different settings.
>
>
> Test 1
> --
> cluster.granular-entry-heal no
> cluster.locking-scheme full
> Shards / Min: 350 / 8
>
>
> Test 2
> --
> cluster.granular-entry-heal yes
> cluster.locking-scheme granular
> Shards / Min:  391 / 10
>
> Test 3
> --
> cluster.granular-entry-heal yes
> cluster.locking-scheme granular
> heal command issued
> Shards / Min: 358 / 11
>
> Test 4
> --
> cluster.granular-entry-heal yes
> cluster.locking-scheme granular
> heal full command issued
> Shards / Min: 358 / 27
>
>
> Best results were with cluster.granular-entry-heal=yes,
> cluster.locking-scheme=granular but they were all quite good.
>
>
> Don't know why it was so much worse last night - i/o load, cpu and memory
> were the same. However, one thing that is different, and that I can't easily
> reproduce, is that the cluster had been running for several weeks, but last
> night I rebooted all nodes. Could gluster be developing an issue after
> running for some time?


From the algorithm's point of view, the only thing that matters is the load
it needs to heal; it doesn't depend on age. So whether the 100GB to heal
accumulated in very little time or over a few months, the time to heal
should be the same.


>
>
>
> --
> Lindsay Mathieson
>
>



-- 
Pranith