Hi, I just started using Gluster today to build a new fileserver, and so
far I'm impressed with the ease of set-up and configuration.
However, my cluster appears to be working normally, yet no updates are
made on the actual filesystem. I change files on one node, and nothing
shows up on the
On 2014-09-04 16:31, Kaleb KEITHLEY wrote:
On 09/04/2014 06:18 PM, Ernie Dunbar wrote:
Hi, I just started using Gluster today to build a new fileserver, and so far
I'm impressed with the ease of set-up and configuration. However, my cluster
appears to be working normally, yet
Hi list.
Did I ask this question the wrong way, or does nobody know how to
diagnose this issue?
On 2015-01-29 09:28, Ernie Dunbar wrote:
I've created a GlusterFS server pair, with GlusterFS v 3.2.5 (because Debian
uses that version, and all our servers are Debian), using the official
as it could lead to losing existing data.
So if you want to reuse a brick, you need to clean it up and recreate the
brick directory.
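As a rough sketch of that cleanup, assuming the brick lives at /brick1/gv0 (adjust to your own brick path):

# remove the volume-id and gfid markers Gluster left on the old brick root
setfattr -x trusted.glusterfs.volume-id /brick1/gv0
setfattr -x trusted.gfid /brick1/gv0
# remove Gluster's internal metadata directory from the old brick
rm -rf /brick1/gv0/.glusterfs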
On Wed, Feb 11, 2015 at 4:38 AM, Ernie Dunbar maill...@lightspeed.ca wrote:
I'm just going to paste this here to see if it drives you as mad as it does
me.
Bump!
On 2015-02-16 16:19, Ernie Dunbar wrote:
Hi list.
I've searched around and I've found that nobody seems to have asked this
question before. Is it 100% necessary to have Gluster bricks that are
formatted with XFS, and is it also 100% necessary that each brick be its own
On 2015-02-12 02:55, Atin Mukherjee wrote:
On 02/12/2015 12:36 AM, Ernie Dunbar wrote:
I nuked the entire partition with mkfs, just to be *sure*, and I still get
the error message: "volume create: gv0: failed: /brick1/gv0 is already part
of a volume". Clearly, there's some bit of data
On 11 Feb 2015, at 19:06, Ernie Dunbar maill...@lightspeed.ca wrote:
I nuked the entire partition with mkfs, just to be *sure*, and I still get
the error message: "volume create: gv0: failed: /brick1/gv0 is already part
of a volume". Clearly, there's some bit of data being kept somewhere else
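That leftover bit of data is usually either the extended attributes Gluster sets on the brick root or the old volume definition under /var/lib/glusterd; a quick way to check the former, assuming the brick is /brick1/gv0:

# dump all extended attributes on the brick root; trusted.glusterfs.volume-id
# being present is what triggers "already part of a volume"
getfattr -d -m . -e hex /brick1/gv0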
Hi list.
I've searched around and I've found that nobody seems to have asked this
question before. Is it 100% necessary to have Gluster bricks that are
formatted with XFS, and is it also 100% necessary that each brick be
its own partition and/or drive?
If it does, I think there needs to be
I've created a GlusterFS server pair, with GlusterFS v 3.2.5 (because
Debian uses that version, and all our servers are Debian), using the
official
guide: http://www.gluster.org/community/documentation/index.php/Getting_started_overview
I'm able to successfully mount the Gluster volume
I'm just going to paste this here to see if it drives you as mad as it
does me.
I'm trying to re-create a new volume in gluster. The old volume is empty
and can be removed. And besides that, this is just an experimental
server that isn't in production just yet. Who cares. I just want to
start
Hello everyone.
I've built a replicated Gluster cluster (volume info shown below) of two
Dell servers on a 1 Gbit/s switch, plus a second NIC on each server for
replication data. But when I try to copy our mail store from our backup
server onto the Gluster volume, I've been having nothing but
On 2015-04-23 12:58, Ben Turner wrote:
+1, let's nuke everything and start from a known good. Those error
messages make me think something is really wrong with how we are
copying the data. Gluster does NFS by default so you shouldn't have
to reconfigure anything after you recreate the
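As an aside on that NFS point, mounting the volume over Gluster's built-in NFS server (NFSv3 in the 3.x series) would look roughly like this; the volume name gv0, the hostname, and the mount point are assumptions drawn from earlier messages:

# Gluster 3.x exports volumes over NFSv3 by default; force vers=3 on the client
mount -t nfs -o vers=3,nolock nfs1.lightspeed.ca:/gv0 /mnt/gluster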
I've nuked my gluster brick, configuration, and files with the intent to
rebuild them. This is what happens when I try to start glusterd with the
--debug option:
root@nfs1:/etc/apt/sources.list.d# glusterd --debug
[2015-04-24 18:26:07.978598] I [MSGID: 100030] [glusterfsd.c:2018:main]
Note: this was fixed by removing the files in /var/lib/glusterd like
this:
for file in /var/lib/glusterd/*; do if ! echo $file | grep 'hooks' > /dev/null 2>&1; then rm -rf $file; fi; done
I could swear I'd done this once before during the reinstall, but
whatever.
On 2015-04-24 11:29, Ernie
On 2015-04-23 18:10, Joe Julian wrote:
On 04/23/2015 04:41 PM, Ernie Dunbar wrote:
On 2015-04-23 12:58, Ben Turner wrote:
+1, let's nuke everything and start from a known good. Those error
messages make me think something is really wrong with how we are
copying the data. Gluster does NFS
On 2015-04-24 11:43, Joe Julian wrote:
This should get you where you need to be. Before you start to migrate
the data maybe do a couple DDs and send me the output so we can get an
idea of how your cluster performs:
time `dd if=/dev/zero of=gluster-mount/myfile bs=1024k count=1000;
sync`
On 2015-04-27 14:09, Joe Julian wrote:
I've also noticed that if I increase the count of those writes, the
transfer speed increases as well:
2097152 bytes (2.1 MB) copied, 0.036291 s, 57.8 MB/s
root@backup:/home/webmailbak# dd if=/dev/zero of=/mnt/testfile
count=2048 bs=1024; sync
2048+0
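That matches the usual explanation: with bs=1024 every 1 KB write pays a network round trip, so latency dominates, and larger blocks amortize it. A hedged way to compare, using conv=fdatasync so the final flush is included in the timing (the mount point and sizes are only illustrative):

# small blocks: many requests, latency-bound
dd if=/dev/zero of=/mnt/testfile bs=1024 count=65536 conv=fdatasync
# large blocks: the same 64 MB of data in far fewer requests
dd if=/dev/zero of=/mnt/testfile bs=1024k count=64 conv=fdatasync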
Hi all.
First, I have a specific question about what hardware should be used for
Gluster, then after that I have a question about how Gluster does its
multithreading/hyperthreading.
So, we have a new Gluster cluster (currently, two servers with one
replicated volume) serving up our files
On 2015-06-18 15:10, Ernie Dunbar wrote:
Hi everyone.
Today I did the latest security updates for Ubuntu 14.04 LTS, and
after rebooting my failover/testing node for the new kernel version
(3.13.0-55.62), the server no longer boots with the following message:
The disk drive for /brick1
Hi everyone.
Today I did the latest security updates for Ubuntu 14.04 LTS, and after
rebooting my failover/testing node for the new kernel version
(3.13.0-55.62), the server no longer boots with the following message:
The disk drive for /brick1 is not ready yet or not present.
Continue to wait, or press S to skip mounting or M for manual recovery.
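One common workaround for that boot-time hang, assuming /brick1 is a local XFS filesystem listed in /etc/fstab, is the nofail mount option so boot can continue even if the device isn't ready yet (the UUID below is a placeholder):

# /etc/fstab entry for the brick; nofail keeps boot from blocking if the device is late
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /brick1  xfs  defaults,nofail  0  2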
must be installed to perform the upgrade, then that will be listed
as kept-back.
http://www.debian-administration.org/article/69/Some_upgrades_show_packages_being_kept_back
On 06/15/2015 11:03 AM, Ernie Dunbar wrote:
That's nice to see, but apparently the new packages are being held
back
That's nice to see, but apparently the new packages are being held back,
preventing an upgrade. Any ideas as to why?
# apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
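Held-back packages usually mean the new version needs dependencies that a plain upgrade won't add or remove; a dist-upgrade, or installing the held package explicitly, normally clears it. A sketch, where glusterfs-server is only a guess at the held package:

# allow new dependencies to be installed or removed
apt-get dist-upgrade
# or target just the held package (hypothetical package name)
apt-get install glusterfs-server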
Hi Tiemen
It sounds like you're trying to rsync files directly onto your Gluster
server's brick, rather than into the Gluster filesystem. You want to copy
these files into the mounted filesystem (typically on a system other than
the Gluster servers), because Gluster is designed to handle it that way.
I
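Concretely, something along these lines on the machine doing the copy; the hostname, volume name, and paths are only illustrative:

# mount the Gluster volume as a client, then copy into the mount point
mount -t glusterfs nfs1.lightspeed.ca:/gv0 /mnt/gluster
rsync -av --progress /backup/mailstore/ /mnt/gluster/mailstore/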
Hi everyone.
I'm trying to add a new Gluster node to our cluster, and when trying to
probe the first node in the cluster, the new node crashes with the
following report (logs start when the daemon starts):
-
[2016-03-30 20:32:05.191659] I [MSGID: 100030] [glusterfsd.c:2332:main]
Great, upgrading to 3.7.10 did indeed fix this issue.
On 2016-03-31 21:07, Atin Mukherjee wrote:
On 03/31/2016 11:18 PM, Ernie Dunbar wrote:
Oops. I replied to Mohammed and not the whole list. Here's the backtrace,
and the full backtrace too:
root@nfs3:/home/ernied# gdb /usr/sbin/glusterd
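For reference, a typical way to pull a full backtrace out of a core file non-interactively looks roughly like this (the core file path is hypothetical):

# print a full backtrace for every thread, without dropping into an interactive session
gdb --batch -ex 'thread apply all bt full' /usr/sbin/glusterd /var/core/core.glusterd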
I've already successfully created a Gluster cluster, but when I try to
add a new node, gluster on the new node claims it can't find the
hostname of the first node in the cluster.
I've added the hostname nfs1.lightspeed.ca to /etc/hosts like this:
root@nfs3:/home/ernied# cat /etc/hosts
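The entries would need to look something like this on every node, the new one included, since the existing peers must also resolve the new node's name when the probe goes back the other way (the addresses below are made up):

# /etc/hosts on each Gluster node (example addresses only)
192.0.2.11  nfs1.lightspeed.ca  nfs1
192.0.2.13  nfs3.lightspeed.ca  nfs3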
On 2016-04-06 21:20, Atin Mukherjee wrote:
On 04/07/2016 04:04 AM, Ernie Dunbar wrote:
On 2016-04-06 11:42, Ernie Dunbar wrote:
I've already successfully created a Gluster cluster, but when I try to
add a new node, gluster on the new node claims it can't find the
hostname of the first node
On 2016-04-06 11:42, Ernie Dunbar wrote:
I've already successfully created a Gluster cluster, but when I try to
add a new node, gluster on the new node claims it can't find the
hostname of the first node in the cluster.
I've added the hostname nfs1.lightspeed.ca to /etc/hosts like this:
root
Rafi K C wrote:
Hi Ernie,
Can you please paste the backtrace from the core file?
Regards
Rafi KC
On 03/31/2016 02:31 AM, Ernie Dunbar wrote:
Hi everyone.
I'm trying to add a new Gluster node to our cluster, and when trying
to probe the first node in the cluster, the new node crashes with
t
Hi everyone.
My Gluster cluster is finally behaving fairly well: CPU, disk, and
network performance have returned to a stable state, and I'd like to
start doing some performance tuning. To do that, though, we need to have
some metrics to see if the changes we make are making any difference at
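One built-in source of such metrics is Gluster's volume profiling, which reports per-brick latency and operation counts; a rough sketch, assuming the volume is named gv0:

# start collecting per-brick stats, let real traffic run, then read them out
gluster volume profile gv0 start
gluster volume profile gv0 info
# stop collecting when done so it doesn't add overhead
gluster volume profile gv0 stop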
We had one of our gluster servers in the cluster fail on us yesterday,
and now one (and only one) of the other servers in the cluster has
managed to collect about 7 gigabytes of logs in the past 12 hours,
seemingly only with lines like this:
[2016-05-20 16:08:05.119529] I
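If those really are plain informational ("I") messages, one stopgap while investigating is to raise the log level on the bricks and clients; a sketch, assuming the volume is named gv0:

# log only warnings and worse from the brick and client translators
gluster volume set gv0 diagnostics.brick-log-level WARNING
gluster volume set gv0 diagnostics.client-log-level WARNING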
Hi everyone.
So, a few days ago, I installed another gluster server to our cluster to
prevent split-brains. I told the server to do a self-heal operation, and
sat back and waited while the performance of the cluster dropped
dramatically and our customers all lost patience with us over the
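When a full heal hurts clients that badly, it can help to throttle it rather than let it run flat out; a hedged sketch of the kind of knobs involved in the 3.x option set, assuming the volume is gv0 and the values are only examples:

# fewer concurrent background heals per client
gluster volume set gv0 cluster.background-self-heal-count 4
# heal only changed regions instead of copying whole files
gluster volume set gv0 cluster.data-self-heal-algorithm diff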
Hi everyone!
We have a gluster array of three servers supporting a large mail
server with about 10,000 e-mail accounts stored in the Maildir
format. This means lots of random small file reads and writes.
Gluster's performance hasn't been great since we switched to
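For what it's worth, a few volume options commonly tried for small-file, Maildir-style workloads, offered only as a hedged starting point (gv0 and the values are assumptions, not recommendations):

# cache more metadata on the client side to cut lookup round trips
gluster volume set gv0 performance.md-cache-timeout 60
# larger read cache and more io threads on the bricks
gluster volume set gv0 performance.cache-size 1GB
gluster volume set gv0 performance.io-thread-count 32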
Oh, I also forgot to include the fact that this is a Replicate
volume. That's kind of a critical feature, if I want to use
dangerous RAID configurations like RAID0.
On 2017-02-24 11:36 AM, Ernie Dunbar wrote:
Hi everyone!
We
Hi everyone!
After a bit of an ordeal with our Gluster servers last week, I
discovered a few coincidences that can badly affect Gluster
performance when they occur.
Should the Mlocate updater start when Gluster is going through
the self-heal
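If the culprit is updatedb walking the FUSE mount or the raw brick, the usual guard is to exclude both in /etc/updatedb.conf; roughly like this, with illustrative paths, and appending to the distribution's existing lists rather than replacing them:

# /etc/updatedb.conf: keep updatedb off the Gluster mount and the raw brick
PRUNEFS="fuse.glusterfs"
PRUNEPATHS="/brick1 /mnt/gluster"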
We currently have a Gluster array of three baremetal servers in a
Replicate 1x3 configuration. This single brick has about 1.1 TB of
data and is configured for 3.7 TB of total space. This array is
mostly hosting mail in Maildir format, although we'd like it to
On 2017-02-28 04:01 PM, Lindsay Mathieson wrote:
On 1 March 2017 at 09:20, Ernie Dunbar <maill...@lightspeed.ca> wrote:
Every node in the Gluster array has its RAID array configured as
Hi everyone!
I need a sanity check on our Server Quorum Ratio settings to
ensure the maximum uptime for our virtual machines. I'd like to
modify them slightly, but I'm not really interested in
experimenting with live servers to see if what I'm doing is going
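For reference, the knobs involved look roughly like this; the 51% value is only an example, not a recommendation, and gv0 stands in for the real volume name:

# enforce server-side quorum on the volume
gluster volume set gv0 cluster.server-quorum-type server
# cluster-wide ratio of peers that must be up before bricks are allowed to run
gluster volume set all cluster.server-quorum-ratio 51%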
Hi everyone. I have a question about performance, hoping that perhaps
someone has already tested these scenarios so that I don't have to.
In order to maximize a Gluster array's performance, which is faster:
Gluster servers with 6 SAS disks each set up in a RAID0 configuration,
letting Gluster
Hi everyone. I need some sage advice for upcoming upgrades we're
planning for our Gluster array.
I'll start by describing our server cluster:
We currently have 3 Proxmox nodes. Two of them are the workhorses,
running 12 of our production VMs and a handful of dev VMs that don't see
the heavy