On 11.07.2012 21:35, Brian Candler wrote:
On Wed, Jul 11, 2012 at 12:55:50PM -0500, Mark Nipper wrote:
Would that be using something like O_DIRECT which FUSE
doesn't support at the moment?
Yes. FUSE does support it in recent kernels (3.4), and I tried it. Nothing
happened until I
On Thu, Jul 12, 2012 at 09:01:48AM +0300, Samuli Heinonen wrote:
On 11.07.2012 21:35, Brian Candler wrote:
On Wed, Jul 11, 2012 at 12:55:50PM -0500, Mark Nipper wrote:
Would that be using something like O_DIRECT which FUSE
doesn't support at the moment?
Yes. FUSE does support it in
On Wed, Jul 11, 2012 at 06:06:44PM +0100, Brian Candler wrote:
But my understanding from reading previous posts on this
list is that using something other than a cache mode of none is
acceptable and safe with Gluster at least.
cache=none is definitely what we want, but doesn't currently
On Wed, Jul 11, 2012 at 03:06:19PM +, Stefan Schloesser wrote:
My Idea is to use GlusterFS as a replicated filesystem (for apache) and
built-in mysql replication for the database. In the event of a failure
I need to run a script to implement an ip switch.
To switch IP for
Hi,
are NFS mounts made on a single server (i.e. where glusterd is running)
supposed to be stable (with gluster 3.2.6)?
I'm using the following line in /etc/fstab:
localhost:/sites /var/ftp/sites nfs _netdev,mountproto=tcp,nfsvers=3,bg 0 0
The problem is, after some time (~1-6 hours),
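One commonly suggested alternative, sketched here as an assumption rather than a confirmed fix for the disconnects, is mounting with the native FUSE client instead of NFS over loopback:

```shell
# Hypothetical /etc/fstab line using the native glusterfs client instead of
# the kernel NFS client against localhost (volume name assumed to match the
# NFS export in the report):
localhost:/sites  /var/ftp/sites  glusterfs  _netdev,defaults  0 0
```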
Hi,
Is there a way to get good performance when an application does small
writes?
Most of the applications using NetCDF write big files (up to 100GB) but
use small block-sized writes (block size less than 1KB).
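The pattern described above (large files written in sub-1KB blocks) can be reproduced with dd; a minimal sketch, assuming writable paths under /tmp. On a local filesystem both runs finish quickly, but over a FUSE/Gluster mount the per-write overhead makes the small-block case dramatically slower:

```shell
# Write the same 1 MiB of data two ways: 1024 writes of 1 KiB each,
# vs. a single 1 MiB write (paths are illustrative):
dd if=/dev/zero of=/tmp/netcdf-small.bin bs=1K count=1024 2>/dev/null
dd if=/dev/zero of=/tmp/netcdf-large.bin bs=1M count=1    2>/dev/null
ls -l /tmp/netcdf-small.bin /tmp/netcdf-large.bin
```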
Hi Pranith;
Thanks for your reply.
I checked md5sum too, it is different.
Here is my output:
# getfattr -d -m . -e hex
/export/data10/.glusterfs/d9/b0/d9b0c350-33ba-4090-ab08-f91f30dd661f
getfattr: Removing leading '/' from absolute path names
# file:
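For a replica mismatch like the one above, a hedged diagnostic sketch (the hostnames and file path below are assumptions, not taken from the report) compares checksums and the trusted.afr changelog xattrs of the same file on each brick:

```shell
# Run the same checks against both replicas; differing md5sums plus non-zero
# trusted.afr.* changelog xattrs typically indicate a split-brain candidate.
for host in serverA serverB; do
    echo "== $host =="
    ssh "$host" "md5sum /export/data10/path/to/file; \
                 getfattr -d -m trusted.afr -e hex /export/data10/path/to/file"
done
```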
On Thu, Jul 12, 2012 at 03:40:14AM -0500, Mark Nipper wrote:
On 12 Jul 2012, Brian Candler wrote:
And I forgot to add: since a KVM VM is a userland process anyway, I'd expect
a big performance gain when KVM gets the ability to talk to libglusterfs to
send its disk I/O directly, without
Folks,
Gluster is not ready to run Virtual Machines at all. Yes, you can build a 2 node
cluster and live migrate machines, but the performance is poor and a lot of work
still needs to be done on it.
I wouldn't put in production even a cluster with low performance web server VMs
until this is
On 12 Jul 2012, Brian Candler wrote:
On 12 Jul 2012, Brian Candler wrote:
http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg01745.html
I read it as:
(aggrb)
- base 72.9MB/s
- fuse bypass (libglusterfs) 66.MB/s
(minb)
- base 18.2MB/s
- fuse bypass 16.6MB/s
(maxb)
- base
On Thu, Jul 12, 2012 at 07:38:18AM +, Stefan Schloesser wrote:
To switch IP for what? glusterfs in replicated mode, using the native
(FUSE) client, doesn't need this. The client talks to both backends,
and if either backend fails, it continues to work.
[...]
I am slightly confused
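A minimal sketch of the mount the quoted reply describes: the native client fetches the volume layout from one server but then talks to all bricks itself, so a second volfile server can be named for mount-time redundancy (hostnames follow the thread's cluster-1/cluster-2 naming; the backupvolfile-server option name should be verified against your mount.glusterfs version):

```shell
# Mount via cluster-1, falling back to cluster-2 if cluster-1 is down at
# mount time; after mounting, the client talks to both bricks directly.
mount -t glusterfs -o backupvolfile-server=cluster-2 cluster-1:/shared /shared
```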
On 12.07.2012 11:40, Mark Nipper wrote:
Something concerns me about those performance figures.
If I'm reading them correctly, the normal fuse mount performance
is about what I was seeing, 2-3MB. And now bypassing everything,
libglusterfs is still capping out a little under 20MB/s.
It's
On 12 Jul 2012, Fernando Frediani (Qube) wrote:
Gluster is not ready to run Virtual Machines at all. Yes you can build a 2
node cluster and live migrate machines, but the performance is poor and they
need to do a lot of work on it yet.
I wouldn't put in production even a cluster with low
On Thu, Jul 12, 2012 at 04:01:29AM -0500, Mark Nipper wrote:
I read it as:
(aggrb)
- base 72.9MB/s
- fuse bypass (libglusterfs) 66.MB/s
(minb)
- base 18.2MB/s
- fuse bypass 16.6MB/s
(maxb)
- base 18.9MB/s
- fuse bypass 17.8MB/s
I was trying to figure out what
Hi group,
I've been in production with gluster for the last 2 weeks. No problems until
today.
As of today I've got the "Transport endpoint is not connected" problem
on the client, maybe once every hour.
df: `/services/users/6': Transport endpoint is not connected
Here is my setup:
I have 1 Client and
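When a FUSE mount ends up in this state, a common recovery step is a lazy unmount followed by a remount; a sketch only, using the mount point from the df output above and an assumed server/volume name (the real one isn't shown), and not a fix for whatever causes the disconnects:

```shell
# Lazily detach the wedged mount point, then remount the volume
# (server1:/users is a placeholder for the actual volume spec):
umount -l /services/users/6
mount -t glusterfs server1:/users /services/users/6
```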
On Thu, Jul 12, 2012 at 01:13:26PM +, Stefan Schloesser wrote:
I am mounting it via
mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log
cluster-1:/shared /shared
and sure, apache will write to it.
OK. Then that's the client I was talking about.
On 12 Jul 2012, Brian Candler wrote:
I don't believe he is getting that speed with cache=writethrough. I think
he means cache=writeback.
No, I meant writethrough. Initially I had just done dd
using something like:
---
$ dd if=/dev/zero of=/home/user/testing bs=1024k count=1000
1000+0
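For reference, the cache mode under discussion is a per-drive qemu option; an illustrative invocation, in which the image path, memory size, and everything except the cache= knob are placeholders:

```shell
# Boot a guest with its disk on a gluster mount, using writethrough caching
# (swap in cache=none or cache=writeback to compare the modes discussed):
qemu-system-x86_64 -m 1024 \
    -drive file=/mnt/gluster/vm.img,if=virtio,cache=writethrough
```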
Hi Harry,
Running Gluster-3.3 on Redhat 6.2 with the RPMs provided by gluster.org.
I removed the previous gluster install with rpm -e; it could be possible that
it's picking up older files.
I did a find across the entire system for '*gluster*' and cleaned those out.
So, it's a bit weird to me.
Robin
On 6/27/12
I see both glusterfsd and glusterfs eat a good 70-100% of CPU while dd
runs (see below)
[root@lab0 ~]# gluster volume info
Volume Name: testrdma
Type: Replicate
Volume ID: bf7b42aa-5680-4f5c-8027-d0a56cc5e65d
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: rdma
Bricks:
Brick1:
I was using the ISO from Oracle and it's booting Oracle Unbreakable Kernel
(rather than the upstream Redhat).
Switching to upstream Redhat Kernel fixed the issue.
Robin
On 7/12/12 2:30 PM, Robin, Robin rob...@muohio.edu wrote:
Hi Harry,
Running Gluster-3.3 on Redhat 6.2 with the RPMs provided
Hello everybody,
I would like to export through NFS, or possibly the native GlusterFS client,
some directories from a volume, but each directory should only be exported to
a specific IP address.
As an example:
/data1 would be exported only to 10.0.0.1
/data2 would be exported only to 10.0.0.2
/dataX
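Assuming the built-in Gluster NFS server, per-directory exports can be sketched with the nfs.export-dir volume option (the volume name is a placeholder, and the parenthesised per-IP syntax should be verified against your Gluster release, since early 3.x versions only restricted which directories were exported, not to whom):

```shell
# Restrict NFS exports to subdirectories, each limited to one client IP:
gluster volume set myvol nfs.export-dirs on
gluster volume set myvol nfs.export-dir "/data1(10.0.0.1),/data2(10.0.0.2)"
```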