this?
Gerald
--
Gerald Brandt
Majentis Technologies
g...@majentis.com
204-229-6595
www.majentis.com
(the
time varies) it ramps up to use about 88% of each interface.
Is there something in Linux, Gluster, or ??? that slows down initial writes?
Gerald
ps: on a side note, NFS reads are REALLY slow... roughly 43 MB/s over the
bonded link, which is about 12% of each interface.
--
Gerald Brandt
Majentis
Why are there big reads when I was only doing a write
test? Why is the ethernet still busy?
Looks like I'm back to trying DRBD.
Gerald
Hi,
Is anyone using GlusterFS on Ubuntu in production? Specifically, I'm looking
at using the NFS portion of it over a bonded interface. I believe I'll get
better speed than using the gluster client across a single interface.
Setup:
3 servers running KVM (about 24 VM's)
2 NAS boxes running
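For context, a minimal Ubuntu bonding sketch (assuming the ifenslave package,
LACP configured on the switch, and placeholder addresses):

  # /etc/network/interfaces
  auto bond0
  iface bond0 inet static
      address 192.168.10.10
      netmask 255.255.255.0
      bond-mode 802.3ad
      bond-miimon 100
      bond-slaves eth0 eth1

Note that LACP hashes per flow, so a single NFS TCP connection still rides one
slave link; the aggregate gain shows up across multiple clients.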
Hi,
I reboot a brick in a replica, and now I get an error, and no replication.
[2013-12-05 23:17:23.691651] I
[glusterd-utils.c:954:glusterd_volume_brickinfo_get] 0-management: Found brick
[2013-12-05 23:17:26.691910] E [socket.c:2788:socket_connect] 0-management:
connection attempt failed
On 12-12-26 10:24 PM, Miles Fidelman wrote:
Hi Folks,
I find myself trying to expand a 2-node high-availability cluster to a
4-node cluster. I'm running Xen virtualization, and currently
using DRBD to mirror data, and pacemaker to failover cleanly.
The thing is, I'm trying to add 2
Hi,
My nfs.log is filled with this:
[2012-12-08 16:12:56.170509] E [nfs3.c:2163:nfs3_write_resume] 0-nfs-nfsv3:
Unable to resolve FH: (192.168.20.13:943) NFS_RAID6_FO :
06799271-0ef4-4b7c-96ed-175c2c84994a
[2012-12-08 16:12:56.187355] W [nfs3-helpers.c:3389:nfs3_log_common_res]
0-nfs-nfsv3:
Hi,
I got the following error:
[2012-12-02 11:20:48.548129] C
[client-handshake.c:126:rpc_client_ping_timer_expired] 0-NFS_RAID1_FO-client-0:
server 192.168.10.2:24011 has not responded in the last 42 seconds,
disconnecting.
[2012-12-02 11:20:48.634077] E [rpc-clnt.c:373:saved_frames_unwind]
Hi,
I have a 2 server replicate. The main server is running 3.3.0 and the second
server is running 3.3.1. The clients connect with NFS, using the Gluster NFS
server (to 3.3.0).
Today, my 3.3.1 server went down. I don't know why yet, I'll figure that out
tomorrow. When the 3.3.1 server
How about an option to throttle/limit the self heal speed? DRBD has a speed
limit, which very effectively cuts down on the resources needed.
That being said, I have not had a problem with self heal on my VM images. Just
two days ago, I deleted all images from one brick and let the self heal
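For comparison, DRBD's throttle is a one-line setting (8.3-era drbd.conf
syntax; the 40M figure is just an example):

  syncer {
      rate 40M;
  }

The nearest gluster knob I'm aware of, cluster.background-self-heal-count,
caps the number of concurrent background heals rather than their bandwidth.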
Hi,
I'll actually add some info here I keep forgetting. I heal (and sync) on a
separate interface, so:
eth0: clients mount NFS
eth1: heal and sync
eth2: management
I don't think it's a 'supported' configuration, but it works very well.
Gerald
- Original Message -
From: Gerald
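A sketch of that split-network layout, using hypothetical server names
nas1/nas2: the trick is letting the same brick hostnames resolve differently
on servers and clients, so heal/sync traffic stays on eth1.

  # /etc/hosts on each storage server (peers resolve over eth1)
  10.0.1.1   nas1
  10.0.1.2   nas2

  # /etc/hosts on each client (the same names resolve over eth0)
  10.0.0.1   nas1
  10.0.0.2   nas2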
Hi,
I've had a few incidents over the last week where GlusterFS NFS server started
using 400% CPU, and the Gluster server went to a load of 29. I couldn't figure
out the issue, but a system reset fixed it. This server has been in production
since 3.3.0 came out.
Yesterday, I may have fixed
Hi,
I have speed and stability increases with 3.3.0/3.3.1. If you're running VM's
on gluster, it's a no-brainer as well.
Gerald
- Original Message -
From: Paul Simpson p...@realisestudio.com
To: gluster-users@gluster.org
Sent: Tuesday, November 27, 2012 9:33:51 AM
Subject: Re:
Hi Whit, To be honest, I don't see any improvements since 3.2
concerning virtual machines (maybe under the hood?). Self-heal is
still blocking and performance is not better.
We've waited over 6 months for the 3.3 release and it was really
disappointing for me personally.
Cheers,
Interestingly enough, a couple of reboots later, things started syncing again.
Gerald
- Original Message -
From: Bryan Whitehead dri...@megahappy.net
To: Gerald Brandt g...@majentis.com
Cc: gluster-users gluster-users@gluster.org
Sent: Friday, November 23, 2012 8:59:05 PM
Subject
Any ideas? I'm going in tomorrow to try and fix things, so any help is
appreciated.
Gerald
- Original Message -
From: Gerald Brandt g...@majentis.com
To: gluster-users gluster-users@gluster.org
Sent: Thursday, November 22, 2012 9:17:49 AM
Subject: Re: [Gluster-users] Rebuilt RAID
Hi,
Any ideas on this? I'm currently running non-replicated, and I'm not
comfortable with that.
Gerald
- Original Message -
From: Gerald Brandt g...@majentis.com
To: gluster-users gluster-users@gluster.org
Sent: Wednesday, November 21, 2012 12:34:12 PM
Subject: [Gluster-users
Hi,
I had a RAID-6 array fail on me, so I got some new HDDs and rebuilt it. The
glusterfs config didn't change at all.
When the array was rebuilt and mounted, it (naturally) had no files on it.
GlusterFS seems to have created the .glusterfs directory.
However, self heal isn't working. I tried
Wow. I have 2 replica servers that host VM's via GlusterNFS. uCARP handles
the IP failover if one system dies.
Both systems were running fine. In fact, I was logged into the backup NFS
server watching a large file get created on a RAID-6 array exported via Gluster.
NAS-1 - primary
Hi,
I did all my testing with 3.2.6, but went into production with 3.3 (with
minimal testing). I'm happy with it.
Gerald
- Original Message -
From: Frank Sonntag f.sonn...@metocean.co.nz
To: gluster-users gluster-users@gluster.org
Sent: Saturday, October 13, 2012 2:37:48 AM
Hi,
I had a running/working GlusterFS system, but the boot drive died. I
re-installed and set up gluster to export the same data (using NFS). This is
what I get when I mount it on a remote system:
#ls -l
ls: cannot access dbbackups: No such file or directory
ls: cannot access fanart: No such
or directory), POSIX: 14(Bad address)
- Original Message -
From: Gerald Brandt g...@majentis.com
To: gluster-users gluster-users@gluster.org
Sent: Tuesday, September 25, 2012 6:29:46 PM
Subject: [Gluster-users] Files and directories not accessible
Hi,
I had a running/working GlusterFS
The mount works for about a minute, then starts getting the errors.
Gerald
Hi,
Are there any proposed dates for a new release of Gluster? I'm currently
running 3.3, and the gluster heal info commands all segfault.
Gerald
Hi,
You need to write to the gluster mounted partition, not the XFS mounted one.
Gerald
- Original Message -
Greetings,
I'm trying to setup a small glusterFS test cluster, in order to gauge
the feasibility for using it in a large production environment. I've
been working through
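The distinction, as a sketch with placeholder names: the brick directory is
gluster's private backend, and all reads and writes must go through a mount of
the volume.

  mount -t glusterfs server1:/testvol /mnt/testvol
  cp somefile /mnt/testvol/     # correct: goes through gluster
  cp somefile /data/brick1/     # wrong: bypasses gluster entirely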
Hi,
There was a previous thread about segfaults on Ubuntu 12.04 with the
replicate heal commands. Has this issue been resolved, and if so, where
can I find the .debs? (3.3.0)
I have a self-heal that seems to have failed:
# gluster volume heal NFS_RAID6_FO info heal-failed
Heal operation
the files on the underlying file system, not the gluster
mounted one. (I have no issue forcing a full resync)
Gerald
- Original Message -
From: Louis Zuckerman m...@louiszuckerman.com
To: Gerald Brandt g...@majentis.com
Cc: John Mark Walker johnm...@redhat.com
Sent: Thursday, September
Hi,
I just noticed this too while fixing a split-brain during testing. For me, any
of the info arguments (healed, heal-failed, split-brain) crashes.
Gerald
- Original Message -
From: Brian Candler b.cand...@pobox.com
To: gluster-users@gluster.org
Sent: Friday, July 13, 2012
Hi,
I have a replica 2 setup that I use via Gluster NFS (3.3). Both bricks show
all files in sync (gluster volume heal VOLNAME info). I forced a sync the
new way (gluster volume heal VOLNAME all) and the old way (find /root/tmp/
-noleaf -print0 | xargs --null stat >/dev/null) [gluster volume
I created a replica 2 system, and removed one of the replicas. Gluster
switched the system from replica to distribute.
I now want to add a system, and go back to replica. Is it possible to do this?
Why did Gluster switch to distribute?
Gerald
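On 3.3, the replica count can be raised while adding a brick back; a hedged
sketch with placeholder names:

  gluster volume add-brick VOLNAME replica 2 newserver:/export/brick

On 3.2 and earlier the count was fixed at volume creation, which is presumably
why removing the brick silently demoted the volume to distribute.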
- Original Message -
From: Nicolas Sebrecht nsebre...@piing.fr
To: gluster-users gluster-users@gluster.org
Sent: Wednesday, June 27, 2012 3:06:30 AM
Subject: [Gluster-users] about HA infrastructure for hypervisors
Hi,
We are going to try glusterfs for our new HA servers.
To
Hi,
Does anyone have NFS failover working with Glusters NFS server?
I can't get it to work reliably with Gluster 3.3/Ubuntu 12.04/uCARP.
My IP switches over just fine, but XenServer fails to remake the connection.
I believe it may be FSID related, but I don't know how to set the Gluster NFS
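For reference, a typical uCARP invocation on each server (addresses, password,
and script paths are placeholders):

  ucarp --interface=eth0 --srcip=192.168.10.1 --vhid=1 --pass=secret \
        --addr=192.168.10.100 \
        --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh

The up/down scripts only move the virtual IP; whether the NFS client rebuilds
its connection afterwards is the separate problem described above.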
Hi,
I created a test volume, deleted it, and can not re-create it.
# gluster volume create nfstest replica 2 transport tcp nfstest1:/nfstest
nfstest2:/nfstest
# gluster volume delete nfstest
# gluster volume create nfstest replica 2 transport tcp nfstest1:/export/nfs
nfstest2:/export/nfs
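If the second create is rejected because the bricks are "already part of a
volume", the usual cleanup (3.3-era xattrs; paths from the example above) is
to strip the leftover markers on every brick before retrying:

  setfattr -x trusted.glusterfs.volume-id /nfstest
  setfattr -x trusted.gfid /nfstest
  rm -rf /nfstest/.glusterfs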
Here's the link:
http://community.gluster.org/a/nfs-performance-with-fuse-client-redundancy/
Sent again with a reply to all.
Gerald
- Original Message -
From: Christian Meisinger em_got...@gmx.net
To: olav johansen luxis2...@gmail.com
Cc: gluster-users@gluster.org
Sent: Thursday,
Anthony,
I have the exact same issue with GlusterFS 3.2.5 under Ubuntu 10.04. I haven't
got an answer yet on what is happening.
Are you using the NFS server of GlusterFS?
Gerald
- Original Message -
From: anthony garnier sokar6...@hotmail.com
To: gluster-users@gluster.org
Sent:
archive 27-04-16-00_test_glusterfs_save.tar
Maybe we should open a bug if we don't get an answer?
Anthony,
Message: 1
Date: Fri, 27 Apr 2012 08:07:33 -0500 (CDT)
From: Gerald Brandt g...@majentis.com
Subject: Re: [Gluster-users] remote operation failed: No space left
Can anyone answer the question below?
- Forwarded Message -
From: Gerald Brandt g...@majentis.com
To: gluster-users gluster-users@gluster.org
Sent: Monday, April 23, 2012 12:39:02 PM
Subject: [Gluster-users] deleted files not giving space back
Hi,
I'm running 3.2.5 on Ubuntu 10.04. I
- Original Message -
From: Rodrigo Severo rodr...@fabricadeideias.com
To: Gerald Brandt g...@majentis.com
Cc: gluster-users gluster-users@gluster.org
Sent: Wednesday, April 25, 2012 6:37:13 AM
Subject: Re: [Gluster-users] Fwd: deleted files not giving space back
On Wed, Apr
Hi,
We're currently using GlusterFS and its NFS server to serve VM images to
Citrix XenServer. GlusterFS was set up as replicate, but no replicate machine
was added (so, replicate with 1 brick active).
We're about to add the second brick (looking forward to 3.3), and I have a
couple of
Hi,
I'm running 3.2.5 on Ubuntu 10.04. I have a USB drive that stores rotating
snapshots of our VM's, and I keep 2 days of snapshots on the drive.
Every night a cron job runs and deletes the oldest snapshots off the drive.
After a few days, the gluster mount, and its base drive, show 100%
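One generic check when deletes don't free space: a file that is unlinked but
still held open keeps its blocks until the last descriptor closes. lsof can
list such files (link count under 1); the path is a placeholder:

  lsof +L1 | grep /mnt/usb

This is only a hypothesis for the case above; an open descriptor held by
glusterfs(d) or the snapshot cron job would show up in that listing.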
Hi,
I have Gluster 3.2.6 RPM's for Citrix XenServer 6.0. I've installed and
mounted exports, but that's where I stopped.
My issues are:
1. XenServer mounts the NFS server's SR subdirectory, not the export. Gluster
won't do that.
-- I can, apparently, mount the gluster export somewhere else,
Hi,
- Original Message -
Dear All-
I find that I have to restart glusterd every few days on my servers to
stop NFS performance from becoming unbearably slow. When the problem
occurs, volumes can take several minutes to mount and there are long
delays responding to ls. Mounting
Hi,
On April 1st, I started seeing the following errors in my nfs.log file. I
didn't notice until today, when most of my VM's under Citrix XenServer made
their drives read-only. The VM disk images are NFS mounted via gluster nfs.
I didn't notice any issues until the remount as read-only
However, in the 3.3 branch there is a provision to change the replica count
after volume creation.
Best Regards,
Vishwanath
That is great news!
Gerald
Hi,
I asked this a while back, but haven't received any answers. Has anyone tried
some form of dedup?
Gerald
- Forwarded Message -
From: Gerald Brandt g...@majentis.com
To: gluster-users@gluster.org
Sent: Saturday, October 22, 2011 7:55:59 PM
Subject: GlusterFS over lessfs/opendedupe
Hi,
If I have a replica 2 group, and I take one server offline and reboot the
other, it takes 3 minutes for the connection to time out and the gluster share
to come up.
Is that standard? It's not a DNS lookup problem, since the second server is in
the hosts file.
Gerald
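The option that usually governs this reconnect delay is the client-side ping
timeout, which defaults to 42 seconds (a sketch; whether it explains the full
3 minutes here is unclear):

  gluster volume set VOLNAME network.ping-timeout 10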
Hi,
I can't stop or delete a replica volume:
# gluster volume info
Volume Name: sync1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: thinkpad:/gluster/export
Brick2: quad:/raid/gluster/export
# gluster volume stop sync1
Stopping volume will make its
Thanks for the pointer.
Step 1 fails to return a UUID. If I 'gluster peer status' on the working
server, I get:
Number of Peers: 1
Hostname: quad
Uuid: 03eaca98-ac4f-4b61-9b30-7e1b40b01d9b
State: Peer in Cluster (Connected)
I then go to step 2 with the shown UUID.
I start glusterd on
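For reference, the UUID a rebuilt peer presents lives in glusterd's working
directory: /etc/glusterd/glusterd.info on 3.2-era installs, or
/var/lib/glusterd/glusterd.info on later versions. A sketch, assuming quad is
the machine being rebuilt:

  service glusterd stop
  echo 'UUID=03eaca98-ac4f-4b61-9b30-7e1b40b01d9b' > /etc/glusterd/glusterd.info
  service glusterd start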
Hi,
I just ended up rebuilding my bricks (by removing glusterfs completely,
including config files, and reinstalling from scratch).
Gerald
- Original Message -
From: Gerald Brandt g...@majentis.com
To: Gabriel Othmezouri gabriel.othmezo...@toyota-europe.com
Cc: gluster-users
Hi,
I ran 3.2.3 under Ubuntu 10.04 LTS with some pretty serious IO tests. My
install was rock solid. That doesn't help much, but it may suggest the problem
lies outside of gluster.
Gerald
- Original Message -
From: anthony garnier sokar6...@hotmail.com
To: gluster-users@gluster.org
Sent:
Hi,
I can answer point 1. GlusterFS 3.3 (still in beta) does finer-grained locking
during self-heal, which is what VM images like.
Gerald
- Original Message -
From: Miles Fidelman mfidel...@meetinghouse.net
To: gluster-users@gluster.org
Sent: Tuesday, November 1, 2011 6:31:35 PM
Hi,
I'm currently running GlusterFS over XFS, and it works quite well.
I'm wondering if it's possible to add data deduplication into the mix by:
glusterfs -> lessfs -> xfs, or
glusterfs -> opendedupe -> xfs
Has anybody tried doing this? We're running VM images on gluster, and I figure
we
Hi,
- Original Message -
From: Thai. Ngo Bao tha...@vng.com.vn
To: Anush Shetty an...@gluster.com, gluster-users@gluster.org
Sent: Friday, October 21, 2011 2:05:12 AM
Subject: Re: [Gluster-users] ACL
Hi Anush,
Well, I was aware of this feature of glusterfs several months ago and
Hi,
Are there any 'optimal' settings for XFS formatting under GlusterFS? The
storage will be used for Virtual Disk storage, virtual disk size from 8GB to
100 GB in size.
One of the VM's (separate gluster volume) will be running MSSQL server (4K
reads and writes). The other will be running
mounting with this :
mount -t xfs -o rw,noatime,nodiratime,logbufs=8
HTH,
Sabuj
On Thu, Oct 20, 2011 at 8:18 AM, Gerald Brandt g...@majentis.com
wrote:
Hi,
Are there any 'optimal' settings for XFS formatting under
GlusterFS? The storage will be used for Virtual Disk storage
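For the mkfs side, the commonly repeated recommendation for gluster bricks is
a larger inode size, so gluster's extended attributes fit inside the inode
(device and mount point are placeholders):

  mkfs.xfs -i size=512 /dev/sdb1
  mount -t xfs -o rw,noatime,nodiratime,logbufs=8 /dev/sdb1 /export/brick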
Hi,
Any news on this? I'm planning to place VM images on it, and the 3.3 replica
locking mechanism is way better.
For those using 3.3 RC, how stable is it?
Gerald
- Original Message -
From: Gerald Brandt g...@majentis.com
To: gluster-users@gluster.org
Sent: Tuesday, October 11
Is there any known date for the release of 3.3?
Gerald
Hi,
If I have a gluster system sitting on top of ext4 with RAID5, will expanding
the RAID5 with another disk, and resizing the ext4 partition, automatically be
noticed by gluster?
I would:
1. take down a replica brick
2. add the physical disk
3. Restart system
4. stop gluster
5. and unmount
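Assuming Linux software RAID underneath, the grow itself is routine
(placeholder devices; ext4 can also be resized online, which may make some of
the steps above unnecessary):

  mdadm --add /dev/md0 /dev/sde1
  mdadm --grow /dev/md0 --raid-devices=4
  resize2fs /dev/md0    # after the reshape completes

Gluster reports whatever capacity the brick filesystem has, so the extra space
should appear without any gluster-side change.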
Hi,
I just completed testing on using Gluster via its NFS server in a XenServer
environment. My comparison was iSCSI in the same environment.
You can see the results at
http://majentis.com/2011/09/21/xenserver-iscsi-and-glusterfsnfs/
Gerald
Hi,
I hope I can explain this properly.
1. I have a two brick system replicating each other. (10.1.4.181 and 10.1.40.2)
2. I have a third system that mounts the gluster fileshares (192.168.30.111)
3. I export the share on 192.168.30.111 as NFS to a XenServer.
What I'm hoping for is the
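One detail that matters in step 3: re-exporting a FUSE (gluster) mount through
the kernel NFS server needs an explicit fsid in /etc/exports, because FUSE
filesystems lack a stable device number (path, network, and fsid value are
placeholders):

  /mnt/gluster 192.168.30.0/24(rw,fsid=42,no_subtree_check,no_root_squash)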
Hi,
I have two replicated bricks. I mounted brick1 as NFS, and started
writing/creating files.
I didn't see those changes replicated to brick2.
I then mounted the gluster share with the gluster client, and did a:
# find gluster-mount -noleaf -print0 | xargs --null stat >/dev/null
And then
Hi,
I'm testing gluster/nfs as a replacement for an existing DRBD/iSCSI system.
Speed tests show gluster NFS to be pretty close to iSCSI, but I have some
questions.
If I do a sequential write of data, I get ~118 MB/s. A sequential read of data
gets about 65 MB/s.
If I do a sequential read and