Having read a lot, I want to give gluster a try on my network. Most
systems run Debian. I plan to use Jessie for the servers, since I have
to set them up from scratch anyhow. Jessie comes with gluster 3.5.2. But
I cannot do much about the clients, which mostly run Wheezy, coming with
gluster
Lars,
Why not use the Gluster repo instead of the default Debian one? Gluster is
already updated to 3.6.2-1 for both Jessie and Wheezy:
http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
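A minimal sketch of enabling the upstream repo on a Debian box; the exact
directory layout under LATEST/Debian/ is an assumption here, so check the
URL above for your release before using it:

```shell
# Compose the apt source line for the upstream Gluster repo.
# The path layout (<codename>/apt) is assumed -- verify it on the server.
CODENAME="${CODENAME:-wheezy}"
REPO_LINE="deb http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${CODENAME}/apt ${CODENAME} main"
echo "$REPO_LINE"
# As root, something like:
#   echo "$REPO_LINE" > /etc/apt/sources.list.d/gluster.list
#   apt-get update && apt-get install glusterfs-client
```

This keeps Wheezy clients and Jessie servers on the same 3.6.x release
instead of mixing 3.5.2 servers with older clients.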
On 01/28/15 08:30, Lars Hanke wrote:
Having read a lot I want to give gluster a try on my network.
On Wed, Jan 28, 2015 at 02:30:54PM +0100, Lars Hanke wrote:
Having read a lot I want to give gluster a try on my network. Most systems
run Debian. I plan to use Jessie for the servers, since I have to set them
up from scratch anyhow. Jessie comes with gluster 3.5.2. But I cannot do
much about
Could you attach the client, brick logs to check where the error is
coming from.
Pranith
On 01/27/2015 01:52 PM, 肖力 wrote:
Yes!
It has enough space, thanks!
I use XFS. Does that have something to do with this?
At 2015-01-27 16:13:46, Anatoly Pugachev mator...@gmail.com wrote:
Does the host system (KVM host) have
On 01/27/2015 02:43 PM, yang.bi...@zte.com.cn wrote:
Hi,
Can the replica position be configured?
Could you give an example?
Pranith
Regards.
Bin.Yang
gfapi currently doesn't provide a way to set up backup volfile servers. This
would make a nice feature request. You could open one by filing an RFE bug
at [1]. If you want, one of us could file it in your stead.
But even if gfapi were to provide an API to set backup volfile servers,
On Wed, Jan 28, 2015 at 02:06:32PM +0530, Pranith Kumar Karampuri wrote:
Added Niels and Shyam who may know about this.
Pranith
On 01/27/2015 12:25 PM, Arash Shams wrote:
Is there anyone paying attention to my question?
Hi,
is the issue fixed in 3.6.2 GA?
Thanks
Alessandro
From: Pranith Kumar Karampuri [mailto:pkara...@redhat.com]
Sent: Thursday, 22 January 2015 06:24
To: Xavier Hernandez
Cc: RASTELLI Alessandro; gluster-users@gluster.org; panpan feng
Subject: Re: [Gluster-users] Cannot move or rename files on
On 01/27/2015 11:29 PM, Paul E Stallworth wrote:
Hello,
I ran a gluster volume profile while doing a dd to read from /dev/zero and
write to a file and noticed unusually high latency on the FSYNC, INODELK, and
FINODELK operations. This latency seems to correspond with very slow page
loads
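The profile numbers above can be reproduced with the volume profile
commands; a rough sketch, where the volume name "gv0" and mount point
"/mnt/gv0" are placeholders for your setup:

```shell
# Placeholder names: volume "gv0", mount "/mnt/gv0".
VOL="gv0"
echo "gluster volume profile $VOL info"
# Typical sequence, run on a server node:
#   gluster volume profile gv0 start
#   dd if=/dev/zero of=/mnt/gv0/testfile bs=1M count=1024 conv=fsync
#   gluster volume profile gv0 info    # check FSYNC / INODELK / FINODELK latency
#   gluster volume profile gv0 stop
```

Comparing the per-op latency between bricks can show whether one brick's
storage is dragging the whole replica set down.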
Oops. I missed the note when I went through the core. Sorry everyone for
the incorrect information I provided earlier about libgfapi.
And thank you Niels for bringing up the correct information.
Regarding libvirt parsing the provided XML: according to a table provided
under [1], a source of type
Added Niels and Shyam who may know about this.
Pranith
On 01/27/2015 12:25 PM, Arash Shams wrote:
Is there anyone paying attention to my question?
From: ara...@hotmail.com
To: gluster-users@gluster.org
Date: Sun, 25 Jan
On 01/27/2015 11:43 PM, Joe Julian wrote:
No, there's not. I've been asking for this for years.
Hey Joe,
Vijay and I were just talking about this today. We were
wondering if you could give us the inputs to make it a feature to implement.
Here are the questions I have:
Basic
Can anyone help me here please?
On Tue, Jan 27, 2015 at 7:09 PM, Ml Ml mliebher...@googlemail.com wrote:
Hello List,
I was able to produce a split-brain:
[root@ovirt-node03 splitmount]# gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/
On 01/28/2015 02:02 PM, Ml Ml wrote:
I want to either take the file from node03 or node04, I really don't
mind. Can I not just tell Gluster that it should use one node as the
"current" one?
Policy-based split-brain resolution [1], which does just that, has been
merged in master and should be
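Until the policy-based mechanism is released, the usual manual route is to
discard the copy on the brick you don't trust and let self-heal copy from
the good one. A sketch, using the UUID from this thread purely as an
example gfid and assuming node04 holds the unwanted copy (both assumptions):

```shell
# A file's gfid hard link lives under .glusterfs/<first 2 hex>/<next 2 hex>/<gfid>
# on each brick. Compute that path for an example gfid.
GFID="1701d5ae-6a44-4374-8b29-61c699da870b"
P1=$(printf '%s' "$GFID" | cut -c1-2)
P2=$(printf '%s' "$GFID" | cut -c3-4)
LINK=".glusterfs/$P1/$P2/$GFID"
echo "$LINK"
# On the brick whose copy you want to DISCARD (here assumed node04),
# remove both the file and its gfid hard link, then trigger a heal:
#   rm /raidvol/volb/brick/path/to/file
#   rm /raidvol/volb/brick/$LINK
#   gluster volume heal RaidVolB
```

Always do this on the brick directly, never through the mount point, and
only after double-checking which copy is the good one.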
Is this phenomenon normal when using Gluster's default configuration?
At 2015-01-28 13:19:47, Dang Zhiqiang dzq...@163.com wrote:
Hi,
I want to know why the network receives 91 MB/s but the read is only
22 MB/s when doing a FIO read test.
When I set cache-size=2GB or disable open-behind, this phenomenon
disappears.
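The two tunables mentioned can be toggled per volume; a sketch, where the
volume name "gv0" is a placeholder:

```shell
# Compose the two "volume set" commands the poster toggled.
VOL="gv0"
SET_CACHE="gluster volume set $VOL performance.cache-size 2GB"
SET_OB="gluster volume set $VOL performance.open-behind off"
echo "$SET_CACHE"
echo "$SET_OB"
# Run them on a server node; revert an option with:
#   gluster volume reset gv0 performance.open-behind
```

Comparing the FIO numbers before and after each change in isolation should
show which translator is responsible for the gap.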
Hi all,
In about 30 minutes from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 7:00 EST, 12:00 UTC, 13:00 CET, 17:30 IST (in your terminal,
run: date -d '12:00 UTC')
- agenda:
Hi All,
Thanks to everyone who responded to the recent community survey [1], we
have an idea of what you think would be necessary in GlusterFS. I have
tried to collate the wishlist of features under appropriate categories
here [2].
As a continuation of this, the approaching feature freeze
/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids is a binary file.
Here is the output of gluster volume info:
--
[root@ovirt-node03 ~]# gluster volume info
Volume Name: RaidVolB
Type: Replicate
Volume ID:
Hi
This must be a stupid question, but how do I replace a brick once it has
died if the replacement has the same name?
# gluster volume replace-brick tmp cubi:/wd1 cubi:/wd1 commit force
volume replace-brick: failed: Brick: cubi:/wd1 not available. Brick may be
containing or be contained by an
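One common workaround is to give the replacement disk the old path but wipe
its stale Gluster identity first, then let self-heal repopulate it. A sketch
only, not a guaranteed procedure; the xattr names are the ones glusterd
checks when it refuses a brick:

```shell
# The brick path being reused, taken from the thread.
BRICK="/wd1"
echo "preparing to reuse brick at $BRICK"
# On the server with the replaced disk, something like:
#   mkdir -p /wd1
#   setfattr -x trusted.glusterfs.volume-id /wd1   # clear stale volume identity, if present
#   setfattr -x trusted.gfid /wd1
#   rm -rf /wd1/.glusterfs
#   service glusterd restart
#   gluster volume heal tmp full                   # resync from the surviving replica
```

If the disk is brand new (no leftover xattrs), the setfattr steps simply
fail harmlessly; the full heal is what repopulates the data.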
On 01/28/2015 08:34 PM, Ml Ml wrote:
Hello Ravi,
thanks a lot for your reply.
The data on ovirt-node03 is the one that I want.
Here are the infos collected by following the howto:
https://github.com/GlusterFS/glusterfs/blob/master/doc/debugging/split-brain.md
[root@ovirt-node03 ~]#
We are running version 3.6.1 on Ubuntu 12.04.5 LTS. The file system for the
volume is ext4 and is mounted on clients as glusterfs.
I verified apache wasn't running, and there shouldn't be any other workload on
the web server and ran another test using the following:
dd if=/dev/zero
Hello Ravi,
thanks a lot for your reply.
The data on ovirt-node03 is the one that I want.
Here are the infos collected by following the howto:
https://github.com/GlusterFS/glusterfs/blob/master/doc/debugging/split-brain.md
[root@ovirt-node03 ~]# gluster volume heal RaidVolB info split-brain
On 01/28/2015 10:58 PM, Ml Ml wrote:
/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids is a binary file.
Here is the output of gluster volume info:
--
[root@ovirt-node03 ~]# gluster volume info
Volume Name:
We noticed a VERY strange NFS issue with sqlplus.
When we have the tnsnames.ora sitting on a Gluster NFS mount, sqlplus
throws out an error:
ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified
It works fine if the tnsnames.ora is on local file systems or any other NFS
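A first diagnostic step is to watch which system call fails when sqlplus
opens tnsnames.ora from the NFS mount; the mount path below is hypothetical:

```shell
# Point the Oracle client at the tnsnames.ora on the Gluster NFS mount.
# The path is a placeholder for wherever the mount actually is.
TNS_ADMIN="/mnt/glusternfs/oracle/network/admin"
export TNS_ADMIN
echo "TNS_ADMIN=$TNS_ADMIN"
# With the Oracle client installed:
#   strace -f -e trace=open,read,stat sqlplus user/pass@alias 2>&1 | grep tnsnames
# Comparing that trace against a run with TNS_ADMIN on local disk shows
# whether the open fails outright or the read returns short/stale data.
```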
We were able to stop the lockd error after rebooting the server.
However, we are still encountering another strange issue with sqlplus.
I sent another email to the group on it.
Thanks
Peter
From: gluster-users-boun...@gluster.org
Hi All,
No ideas from your side? Is nobody using geo-replication?
I updated to version 3.6.2 and the issue remains the same.
Can someone help, please?
Thanks
PM
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Janvre, Pierre-Marie
(Agoda)
Sent:
An object interface to GlusterFS would greatly benefit Swift integration
efforts. Currently swiftonfile project uses FUSE mount to perform I/O on
GlusterFS volumes. Moving to libgfapi would increase performance if there were
an object interface. Maybe something like this:
On 01/28/2015 09:24 PM, Paul E Stallworth wrote:
We are running version 3.6.1 on Ubuntu 12.04.5 LTS. The file system for the
volume is ext4 and is mounted on clients as glusterfs.
I verified apache wasn't running, and there shouldn't be any other workload on
the web server and ran another
Since I stopped writing to the clients (so I could cleanly work on the
split-brain), I get no more entries in /var/log/gluster.log (this is the
client log, right?)
While working with the diff command to fix the split-brain, I saw
several entries like these:
diff: