Bump?
Anyone have any thoughts on this?
Cheers,
James
On Mon, Nov 11, 2013 at 7:24 PM, James wrote:
> Hi there,
>
> This is a hypothetical problem, not one that describes specific hardware
> at the moment.
>
> As we all know, gluster usually works best when each brick is
> the same size, and each host has the same number of bricks.
Apologies for interrupting the normal business...
Hi all,
The ICCLab [1] has opened another new position that you or someone
you know might be interested in. Briefly, the position is an Applied
Researcher in the area of Cloud Computing (more IaaS than PaaS) and would
need particular skills
On 11/12/2013 05:54 AM, James wrote:
Hi there,
This is a hypothetical problem, not one that describes specific hardware
at the moment.
As we all know, gluster usually works best when each brick is
the same size, and each host has the same number of bricks. Let's call
this a "homogeneous" setup.
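For concreteness, a homogeneous layout can be sketched with the standard CLI (the hostnames, brick paths, and volume name below are hypothetical, not from the thread):

```shell
# A "homogeneous" setup sketch: every host contributes the same number of
# equally sized bricks. Consecutive bricks form a replica pair.
gluster volume create homevol replica 2 \
    host1:/export/brick1 host2:/export/brick1 \
    host1:/export/brick2 host2:/export/brick2
gluster volume start homevol
```

The heterogeneous question is what to do when hosts cannot all contribute bricks of this shape.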
On 11/19/2013 06:04 PM, French Teddy wrote:
I'm using GlusterFS 3.3.2. Two servers, a brick on each one. The Volume
is "ARCHIVE80"
I can mount the volume on Server2; if I touch a new file, it appears
inside the brick on Server1.
However, if I try to mount the volume on Server1, I have an error:
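For reference, a minimal sketch of the FUSE mount and where to look when it fails (the server name and mount point are hypothetical; the actual error usually appears in the client log named after the mount point):

```shell
# Mount the volume with the FUSE client, then inspect the client log,
# which typically records the reason for a failed mount.
mount -t glusterfs server1:/ARCHIVE80 /mnt/archive80
tail -n 50 /var/log/glusterfs/mnt-archive80.log
```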
On 11/19/2013 09:12 AM, Xiao Bin XB Zhang wrote:
Hey,
I am new to Gluster; I have found it very useful, and it sees a lot of
production usage.
While setting up Gluster for a customer engagement project, a problem
arose:
Suppose that I set up my Gluster with Stripe + Replica 2 on my 4
physic
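Assuming the intended layout is stripe 2 × replica 2 across four servers, the volume creation might look like this sketch (server names, paths, and the volume name are hypothetical):

```shell
# Stripe 2 x Replica 2 on 4 servers: adjacent bricks are grouped into
# replica pairs first, and the pairs are then striped.
gluster volume create stripevol stripe 2 replica 2 \
    server1:/export/brick server2:/export/brick \
    server3:/export/brick server4:/export/brick
```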
Hey John Mark,
I saw that you recently mentioned some work using gluster for sequencing
data (where there are a lot of intermediates and, sometimes, huge raw input
data sets that get denoised).
http://184.106.200.248/2012/07/improving-high-throughput-next-gen-sequencing/
Well, today fredrick san
On 11/19/2013 10:49 PM, Alexandre Fournier wrote:
Hello,
We are experiencing strange behavior when writing files to the Gluster
mount point. On some occasions, when writing to the Gluster mount, we
get an Open Stream error. We've looked at the gluster logs and found
the following faulty entri
I've included straces from both successful and unsuccessful executions, as
well as the PHP error information below. Let me know if there is anything
else I can provide which would be helpful.
PHP Error (as provided by error_get_last()):
Array
(
[type] => 2
[message] => symlink(): No such
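One way to isolate the failing call is to trace only the symlink-related syscalls while reproducing the error (the PHP script path is hypothetical):

```shell
# Follow child processes and capture only syscalls relevant to symlink
# creation; the failing path appears on the line returning ENOENT.
strace -f -e trace=symlink,lstat,stat php /var/www/repro.php 2> repro.strace
grep -n symlink repro.strace
```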
Peter,
Thanks, this was helpful. Can you please try out the following patch:
http://review.gluster.org/6319
Thanks,
Avati
On Wed, Nov 20, 2013 at 6:35 PM, Peter Drake wrote:
> I've included straces from both successful and unsuccessful executions, as
> well as the PHP error information below.
Alexandre,
Seems like there is an entry split-brain (same file/dir name but on one
brick it is a file and on the other it is a directory) according to the
following log:
> [2013-11-18 18:18:43.052446] W [afr-common.c:1411:afr_conflicting_iattrs]
> 0-gv0-replicate-0: /aa/aa/aa/aa: filetype diff
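One way to confirm an entry split-brain of this kind is to stat the path directly on each brick (the brick paths below are hypothetical); one replica should report a regular file and the other a directory:

```shell
# Compare the conflicting entry's type on each replica's brick.
stat /export/brick1/aa/aa/aa/aa   # e.g. on the first server
stat /export/brick2/aa/aa/aa/aa   # e.g. on the second server
```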
On 11/20/13, 10:40 PM, Randy Breunling wrote:
Hi.
We met at a storage meetup in SF a couple months ago...and I think
exchanged a couple emails regarding some gluster-related questions I had
(which I can't seem to find at this time).
Anyway...I'm interested in learning a little more about gluste
Hi, I've got a similar issue on CentOS 6.4 + GlusterFS 3.4.0.
Yesterday I shut down one node in a replicated volume for some hardware
maintenance, and after bringing it up a few minutes later the healing
started automatically. The healing has been going for 18 hours and
seems to be in a loop or s
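To see whether the self-heal daemon is actually progressing or cycling over the same entries, the heal status commands can help (the volume name is hypothetical):

```shell
# Repeated runs that list the same entries suggest the heal is stuck
# rather than making progress.
gluster volume heal myvol info
gluster volume heal myvol info heal-failed
gluster volume heal myvol info split-brain
```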
On Wed, 2013-11-20 at 18:30 +0530, Lalatendu Mohanty wrote:
> On 11/12/2013 05:54 AM, James wrote:
> > Hi there,
> >
> > This is a hypothetical problem, not one that describes specific hardware
> > at the moment.
> >
> > As we all know, gluster usually works best when each brick is
> > the same size, and each host has the same number of bricks.