Hi,
I'm attempting to install the 3.3 beta3 on Debian.
The files are located in a directory that looks like they were built for
Debian Lenny, here:
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.3.0beta3/Debian/5.0.3/
Note the 5.0.3 at the end of the path..
However, when
On 03/05/12 17:55, Sachidananda Urs wrote:
Hi,
On Thu, May 3, 2012 at 10:49 AM, Toby Corkindale
<toby.corkind...@strategicdata.com.au> wrote:
The files are located in a directory that looks like they were built for
Debian Lenny, here:
http
that sync start happening immediately, or after a certain time
period, or only after we manually run the 'volume heal' command?
ta,
Toby
On 03/05/12 19:45, Toby Corkindale wrote:
Hi,
I eventually installed three Debian unstable machines, so I could
install the GlusterFS 3.3 beta3.
I have
Hi,
I saw in the 3.3 changelog that now it is possible to set a secondary
server to retrieve the volume information from, when mounting a volume
via the native client.
However... I can't find any documentation in the man pages explaining
how to do this.
Currently I have:
mount -t
On 04/05/12 11:35, Toby Corkindale wrote:
Hi,
I saw in the 3.3 changelog that now it is possible to set a secondary
server to retrieve the volume information from, when mounting a volume
via the native client.
However... I can't find any documentation in the man pages explaining
how to do
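For anyone hitting this thread from the archives, a hedged sketch of how that
secondary-server mount is usually written; the option name backupvolfile-server
is my assumption for the 3.3 client (check man mount.glusterfs for the exact
spelling), and the hostnames, volume and mountpoint are placeholders:

mount -t glusterfs -o backupvolfile-server=server2.example.com server1.example.com:/myvol /mnt/gluster
# fstab equivalent:
# server1.example.com:/myvol /mnt/gluster glusterfs defaults,backupvolfile-server=server2.example.com 0 0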
On 04/05/12 17:53, Amar Tumballi wrote:
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
3.3.0qa39
[2012-05-04 17:21:47.918568] E [fuse-bridge.c::init] 0-fuse:
Mountpoint gluster seems to have a stale mount, run 'umount gluster' and
try again.
Hi Toby,
Thanks for the
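For reference, clearing a stale mountpoint before retrying, as that error
message suggests, looks roughly like this (hedged; /mnt/gluster and the
server/volume names are placeholders):

umount /mnt/gluster        # or, from the parent directory, the relative form the log uses: umount gluster
umount -l /mnt/gluster     # lazy unmount, if the plain umount reports the target is busy
mount -t glusterfs server1.example.com:/myvol /mnt/gluster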
On 07/05/12 15:25, Amar Tumballi wrote:
On 05/07/2012 06:20 AM, Toby Corkindale wrote:
On 04/05/12 17:53, Amar Tumballi wrote:
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
3.3.0qa39
[2012-05-04 17:21:47.918568] E [fuse-bridge.c::init] 0-fuse:
Mountpoint gluster seems
Hi,
Just wanted to confirm something..
On Linux clients, using the FUSE method of mounting volumes, do you need
glusterd to be running?
I don't *think* so, but want to check.
Thanks,
Toby
On 22/05/12 16:59, Amar Tumballi wrote:
On 05/22/2012 10:54 AM, Toby Corkindale wrote:
Hi,
Just wanted to confirm something..
On Linux clients, using the FUSE method of mounting volumes, do you need
glusterd to be running?
I don't *think* so, but want to check.
'glusterd' is *not* required
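In other words (a hedged illustration; names are placeholders): the mount is
served by the glusterfs FUSE client process, while glusterd is the server-side
management daemon and need not be present on a pure client.

mount -t glusterfs server1.example.com:/myvol /mnt/gluster
ps -C glusterfs    # the client process backing the mount
ps -C glusterd     # may legitimately show nothing on a client-only machine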
Hi,
This method of installing libssl1.0.0 is really not going to be
acceptable to most system administrators.
A version of Gluster that's been built properly for Debian Squeeze (and
also separately for Ubuntu Precise) would be much appreciated.
If you already have a build on Lenny (3.2) and
Hi,
I'm trying to find official documentation that describes the procedure
for recovering from a split-brain situation with replicated volumes.
I can find various posts on the mailing list that refer to the version
2.x series, but nothing good for 3.x.
Can anyone point me in the right
On 07/06/12 01:10, Sachidananda URS wrote:
Hi Filipe,
I have built the RDMA packages and they can be found in:
http://download.gluster.com/pub/gluster/glusterfs/LATEST/Debian/5.0.3/
Why is the .deb in a Debian 5.0.3 directory, when it can't be installed
on anything earlier than version 7?
On 07/06/12 14:34, Toby Corkindale wrote:
Hi,
I'm trying to find official documentation that describes the procedure
for recovering from a split-brain situation with replicated volumes.
I can find various posts on the mailing list that refer to the version
2.x series, but nothing good for 3.x
On 25/06/12 21:04, samuel wrote:
Is there any guide or procedure to handle split brains on 3.3?
I've asked the list several times for this information and been ignored.
I'm sure people are just busy with higher priority issues.. but it'd be
nice to see this failure case documented.
-Toby
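Since this keeps coming up, here is a hedged outline of the manual procedure
that usually circulates on this list for replica volumes; it is not official
documentation, the brick path /export/brick and volume name myvol are
placeholders, and the gfid link path is built from the first two byte pairs of
the file's gfid:

gluster volume heal myvol info split-brain            # list the affected entries (3.3 CLI)
# on the brick holding the copy you have decided to discard:
getfattr -m . -d -e hex /export/brick/path/to/file    # note the trusted.gfid value
rm /export/brick/path/to/file
rm /export/brick/.glusterfs/aa/bb/aabbccdd-....       # the hard link named after the gfid
gluster volume heal myvol                             # trigger a heal, or stat the file from a client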
Hi,
Has there been any progress on building 3.3.1 .deb packages for Debian
Squeeze yet?
If no-one else has come forward, I can get one knocked up. I build
packages pretty regularly, though I consider myself reasonably competent
rather than expert at it.
If you're interested, can you
Hi,
Last night I attempted to upgrade some GlusterFS servers from 3.2.x to
3.3.1.
The upgrade did NOT go smoothly, and I'm quite disappointed in the
documentation for the upgrade as it was quite erroneous.
I followed this guide:
http://www.gluster.org/2012/05/upgrading-to-glusterfs-3-3-0/
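For anyone reading this later, the server-side sequence that guide describes is
roughly the following (a hedged outline only; as the rest of this thread notes,
the documented procedure did not go smoothly here, so verify against the guide
and your own setup first):

service glusterd stop                            # on each server
# install the 3.3.x packages
glusterd --xlator-option '*.upgrade=on' -N       # regenerate volfiles for 3.3, then exit
service glusterd start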
On 24/01/13 10:57, Joe Julian wrote:
On 01/23/2013 03:43 PM, Toby Corkindale wrote:
Hi,
Last night I attempted to upgrade some GlusterFS servers from 3.2.x to
3.3.1.
The upgrade did NOT go smoothly, and I'm quite disappointed in the
documentation for the upgrade as it was quite erroneous.
I
I'm seeing these messages in logfiles a lot now, since upgrading to
3.3.1. What do they mean and how do I fix it? (I am running the same
version of Gluster on clients and servers, of course)
Server and Client lk-version numbers are not same, reopening the fds
Server lk version = 1
On 24/01/13 15:41, Toby Corkindale wrote:
I'm seeing these messages in logfiles a lot now, since upgrading to
3.3.1. What do they mean and how do I fix it? (I am running the same
version of Gluster on clients and servers, of course)
Server and Client lk-version numbers are not same, reopening
and normal.
OK, thanks.
It seems concerning as it happens quite frequently now; you'd assume
that they should have stayed in sync once the fd was reopened.
You only really need to be concerned with E (error) and C (critical).
Toby Corkindale toby.corkind...@strategicdata.com.au wrote:
On 24
logrotate.d/glusterfs-common (in the debian package for 3.3.1) is faulty.
It rotates the log files, but it doesn't tell glusterd to re-open them,
so it continues to write to what is now .1 (and then later it gets
gzipped and corrupted).
I also note that the debian packages do not include the
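A hedged workaround for the rotation problem described above, until the
packaging is fixed, is to switch the stanza to copytruncate so the daemons keep
writing to the same inode and no re-open signal is needed (a sketch only, not
the packaged file):

/var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log {
    weekly
    rotate 4
    missingok
    compress
    delaycompress
    copytruncate
}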
On 22/02/13 11:18, Joe Julian wrote:
On 02/20/2013 05:05 PM, Toby Corkindale wrote:
logrotate.d/glusterfs-common (in the debian package for 3.3.1) is faulty.
It rotates the log files, but it doesn't tell glusterd to re-open
them, so it continues to write to what is now .1 (and then later
/3.3.1/Debian/squeeze.repo/pool/main/g/glusterfs/
The source for those packages is here:
https://github.com/semiosis/glusterfs-debian
On Thu, Feb 21, 2013 at 2:05 AM, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:
logrotate.d/glusterfs-common (in the debian package for 3.3.1
Toby Corkindale toby.corkind...@strategicdata.com.au wrote:
On 22/02/13 11:18, Joe Julian wrote:
On 02/20/2013 05:05 PM, Toby Corkindale wrote:
logrotate.d/glusterfs-common (in the debian package for
3.3.1) is faulty.
It rotates the log files
On 25/02/13 20:50, Robert Hajime Lanning wrote:
On 02/24/13 20:41, Toby Corkindale wrote:
In the meantime, could someone advise me on the correct way to tell
Glusterfs to rotate the logs for the bricks and mounts?
Have you tried:
# gluster volume log rotate VOLNAME [BRICK]
Is there a way
On 06/03/13 03:33, Joe Julian wrote:
It comes up on this list from time to time that there's not sufficient
documentation on troubleshooting. I assume that's what some people mean
when they refer to disappointing documentation as the current
documentation is far more detailed and useful than it
Hi Torbjorn,
I notice that your package still contains the totally-broken logrotate
scripts that Semiosis used in his packaging.
You may remember (since you were involved in it) the discussion around
this fairly recently.
It'd be great if you could update the debian packaging to include the
We experienced the gluster self-heal daemon crashing on us after a
reboot of the server. Logs are below.
Is this a known issue?
[2013-04-03 20:54:06.500173] E [dict.c:2424:dict_unserialize]
(--/lib/libc.so.6(+0x41600) [0x7f6fff533600]
(--/usr/lib/libglusterfs.so.0(synctask_wrap+0x12)
On 11/05/13 00:40, Matthew Day wrote:
Hi all,
I'm pretty new to Gluster, and the company I work for uses it for
storage across 2 data centres. An issue has cropped up fairly recently
with regards to the self-heal mechanism.
Occasionally the connection between these 2 Gluster servers breaks or
On 09/07/13 15:38, Bobby Jacob wrote:
Hi,
I have a 2-node gluster with 3 TB storage.
1) I believe the “glusterfsd” is responsible for the self healing between
the 2 nodes.
2) Due to some network error, the replication stopped for some reason but
the application was accessing the data from
of
volumes defined, I think?
Toby
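A hedged aside for archive readers: in 3.3 the replica self-heal is driven by
the self-heal daemon (a glusterfs process, often called glustershd) rather than
the brick-side glusterfsd processes, and its state can be checked per volume
(volume name is a placeholder):

gluster volume status myvol        # the "Self-heal Daemon" lines show whether it is online
gluster volume heal myvol info     # entries still waiting to be healed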
2013/7/9 Toby Corkindale <toby.corkind...@strategicdata.com.au>
On 09/07/13 15:38, Bobby Jacob wrote:
Hi,
I have a 2-node gluster with 3 TB storage.
1) I believe the “glusterfsd” is responsible
On 12/07/13 06:44, Michael Peek wrote:
Hi gurus,
So I have a cluster that I've set up and I'm banging on. It's comprised
of four machines with two drives in each machine. (By the way, the
3.2.5 version that comes with stock Ubuntu 12.04 seems to have a lot of
bugs/instability. I was screwing
Hi,
I saw that there are Debian 7 (Wheezy) packages for Gluster 3.3 and 3.4
available currently. Are there any plans to provide Debian 6 (Squeeze)
packages?
cheers,
Toby
Hi,
What does it mean when you use peer probe to add a new host, but then
afterwards the peer status is reported as Rejected yet Connected?
And of course -- how does one fix this?
gluster peer status
Number of Peers: 1
Hostname: 192.168.10.32
Uuid: 32497846-6e02-4b68-b147-6f4b936b3373
State:
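The recovery steps usually suggested for a peer stuck in "Rejected (Connected)"
look roughly like this, run on the rejected node (a hedged outline; it discards
that node's local volume definitions, so back up /var/lib/glusterd first):

service glusterd stop
# remove everything under /var/lib/glusterd except glusterd.info (this node's UUID)
find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
service glusterd start
gluster peer probe good-node       # a peer that already holds the correct config; hostname is a placeholder
service glusterd restart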
Hi,
I'm getting some confusing Incorrect brick errors when attempting to
remove OR replace a brick.
gluster volume info condor
Volume Name: condor
Type: Replicate
Volume ID: 9fef3f76-525f-4bfe-9755-151e0d8279fd
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1:
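For a 1 x 2 replica volume like the one above, the two operations being
attempted are normally written like this in the 3.3 CLI (a hedged sketch;
mel-storage02, mel-storage03 and the /export/condor brick paths are
placeholders, not the real bricks in this volume):

# drop to a single copy by removing one brick and reducing the replica count
gluster volume remove-brick condor replica 1 mel-storage02:/export/condor force
# or migrate a brick to a new server
gluster volume replace-brick condor mel-storage02:/export/condor mel-storage03:/export/condor start
gluster volume replace-brick condor mel-storage02:/export/condor mel-storage03:/export/condor status
gluster volume replace-brick condor mel-storage02:/export/condor mel-storage03:/export/condor commit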
On 06/08/13 18:12, Toby Corkindale wrote:
Hi,
What does it mean when you use peer probe to add a new host, but then
afterwards the peer status is reported as Rejected yet Connected?
And of course -- how does one fix this?
gluster peer status
Number of Peers: 1
Hostname: 192.168.10.32
Uuid
On 06/08/13 21:25, Kaushal M wrote:
Toby,
What versions of gluster are on the peers? And does the cluster have
just two peers or more?
Version 3.3.1.
The cluster has/had two nodes; we're trying to replace one with another one.
On Tue, Aug 6, 2013 at 4:32 PM, Toby Corkindale
toby.corkind
why this was required.
We still can't remove, replace or add bricks but I'll continue that in
another thread..
-T
On 07/08/13 10:51, Toby Corkindale wrote:
On 06/08/13 21:25, Kaushal M wrote:
Toby,
What versions of gluster are on the peers? And does the cluster have
just two peers or more
On 06/08/13 18:24, Toby Corkindale wrote:
Hi,
I'm getting some confusing Incorrect brick errors when attempting to
remove OR replace a brick.
gluster volume info condor
Volume Name: condor
Type: Replicate
Volume ID: 9fef3f76-525f-4bfe-9755-151e0d8279fd
Status: Started
Number of Bricks: 1 x 2
Is this a bug or a feature?
# gluster volume create foo mel-storage01:/tmp/foo
Creation of volume foo has been successful. Please start the volume to
access data.
# gluster volume delete foo
Deleting volume foo has been successful
# gluster volume create foo mel-storage01:/tmp/foo
/tmp/foo
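When re-creating a volume on the same brick path fails like this, the usual
cause is volume metadata left on the brick directory by the earlier create; the
commonly suggested cleanup is below (hedged, and destructive: only run it on a
brick directory you intend to reuse):

setfattr -x trusted.glusterfs.volume-id /tmp/foo
setfattr -x trusted.gfid /tmp/foo
rm -rf /tmp/foo/.glusterfs
# then retry: gluster volume create foo mel-storage01:/tmp/foo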
, and if anything version 3.3 has
been worse than 3.2 for bugs. (And I have no faith at all that 3.4 is an
improvement)
-Toby
On 07/08/13 11:44, Toby Corkindale wrote:
On 06/08/13 18:24, Toby Corkindale wrote:
Hi,
I'm getting some confusing Incorrect brick errors when attempting to
remove
On 08/08/13 13:09, Krishnan Parthasarathi wrote:
Hi Toby,
- Original Message -
Hi,
I'm getting some confusing Incorrect brick errors when attempting to
remove OR replace a brick.
gluster volume info condor
Volume Name: condor
Type: Replicate
Volume ID:
Hi,
Having built a fresh Gluster cluster, this time out of Ubuntu LTS with
Gluster 3.3.2, we've found that the replace-brick command now seems to
succeed. (Unlike our Debian 6 + 3.3.1 cluster before it)
I say "seems to" succeed, because it fails after about half a dozen
volumes have been
On 10/10/13 05:22, Pruner, Anne (Anne) wrote:
I’m evaluating gluster for use in our product, and I want to ensure that
I understand the failover behavior. What I’m seeing isn’t great, but it
doesn’t look from the docs I’ve read that this is what everyone else is
experiencing.
Is this normal?