Hi Susant ,
Hi,
Can you check the bricks and send the `ls` output?
Regards,
Susant
OK, I did that, and the directory is not empty. But while searching the
forums, I found that this error comes up often, so I tried a "rebalance" of
the volume and it works!
Many thanks to all who answered me.
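For reference, the rebalance described above is started and monitored from the gluster CLI; a minimal sketch, where the volume name "myvol" is a placeholder:

```shell
# Start a rebalance of the volume, then poll its progress.
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```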
--
On 04/28/2015 06:53 AM, Atin Mukherjee wrote:
>
> On 04/28/2015 06:37 AM, 何亦军 wrote:
>> Hi Guys,
>>
>> How do we upgrade GlusterFS from 3.6.2 to 3.6.3? Is there any document
>> about that?
> http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6
> talks about how to upgrade from 3.4/3.5 to 3.6; however, you could follow
> the same steps.
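The upgrade guide linked above boils down to a rolling package upgrade, one server at a time. A rough sketch for a yum-based distro; package and service names may differ on yours:

```shell
# On each server in turn (not all at once), assuming yum-based packaging.
systemctl stop glusterd                                  # stop the management daemon
yum update glusterfs glusterfs-server glusterfs-fuse     # pull in 3.6.3 packages
systemctl start glusterd
gluster volume status                                    # confirm bricks are back before the next server
```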
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
/bin/ls just asks the filesystem for a list of files in the directory.
/bin/ls -F, --color, -l, or anything else that requires some sort of
decoration will do an fstat for each file in order to find out the mode,
ownership, type, etc., in order to determine how to decorate that
filename. Each
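The per-file fstat cost described above is easy to see for yourself. A throwaway demonstration (directory path is illustrative; strace is optional and skipped if absent):

```shell
# Count stat-family syscalls for plain vs decorated ls on a small directory.
mkdir -p /tmp/lsdemo
touch /tmp/lsdemo/a /tmp/lsdemo/b /tmp/lsdemo/c
if command -v strace >/dev/null 2>&1; then
    strace -f -c -e trace=%stat ls /tmp/lsdemo    >/dev/null 2>/tmp/lsdemo.plain || true
    strace -f -c -e trace=%stat ls -l /tmp/lsdemo >/dev/null 2>/tmp/lsdemo.long  || true
    grep -h stat /tmp/lsdemo.plain /tmp/lsdemo.long || true   # ls -l shows far more stat calls
fi
```

On a network filesystem like gluster, each of those extra stat calls is a round trip, which is why the decorated forms feel so much slower.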
On 04/27/2015 03:00 PM, Ernie Dunbar wrote:
On 2015-04-27 14:09, Joe Julian wrote:
I've also noticed that if I increase the count of those writes, the
transfer speed increases as well:
2097152 bytes (2.1 MB) copied, 0.036291 s, 57.8 MB/s
root@backup:/home/webmailbak# dd if=/dev/zero of=/mnt/
Do you think this issue is related to the one seen when you have 'ls'
aliased to 'ls -F' or 'ls --color=auto'?
I included a snippet from a previous email that I had sent to the
gluster devels (see below).
David
> My code developers were moved over to the gluster 3.6.1 system and were
> stru
On 2015-04-27 14:09, Joe Julian wrote:
I've also noticed that if I increase the count of those writes, the
transfer speed increases as well:
2097152 bytes (2.1 MB) copied, 0.036291 s, 57.8 MB/s
root@backup:/home/webmailbak# dd if=/dev/zero of=/mnt/testfile
count=2048 bs=1024; sync
2048+0 record
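Joe's observation (throughput rising with count) is what you would expect when per-write latency dominates small transfers. A sketch that repeats the dd with growing counts; OUT defaults to a local path for safety, but the thread's measurements were against a gluster mount:

```shell
# Larger transfers amortize per-operation latency; override OUT to point at
# a gluster mount (e.g. OUT=/mnt/testfile) to reproduce the thread's numbers.
OUT=${OUT:-/tmp/ddtest}
for count in 512 2048 8192; do
    dd if=/dev/zero of="$OUT" bs=1024 count=$count conv=fsync 2>&1 | tail -n1
done
```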
- Original Message -
> From: "David Robinson"
> To: "Ben Turner" , "Ernie Dunbar"
> Cc: "Gluster Users"
> Sent: Monday, April 27, 2015 5:21:08 PM
> Subject: Re[2]: [Gluster-users] Disastrous performance with rsync to mounted
> Gluster volume.
>
> I am also having a terrible time with r
I am also having a terrible time with rsync and gluster. The vast
majority of my time is spent figuring out what to sync... This sync
takes 17 hours even though very little data is being transferred.
sent 120,523 bytes received 74,485,191,265 bytes 1,210,720.02
bytes/sec
total size is 27,
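A frequently suggested mitigation for rsync-onto-gluster (not stated in this thread, just a common combination) is to avoid rsync's write-to-temp-then-rename pattern and its delta algorithm; both flags below are standard rsync options:

```shell
# --inplace skips the temp-file-plus-rename dance; the rename is expensive on
# gluster because the new name may hash to a different brick.
# --whole-file skips delta-transfer, which rarely pays off on a fast LAN.
rsync -av --inplace --whole-file /source/dir/ /mnt/gluster/dir/
```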
On 04/27/2015 01:52 PM, Ben Turner wrote:
- Original Message -
From: "Ernie Dunbar"
To: "Gluster Users"
Sent: Monday, April 27, 2015 4:24:56 PM
Subject: Re: [Gluster-users] Disastrous performance with rsync to mounted
Gluster volume.
On 2015-04-24 11:43, Joe Julian wrote:
This sh
On 2015-04-24 11:43, Joe Julian wrote:
This should get you where you need to be. Before you start to migrate
the data, maybe do a couple of dd runs and send me the output so we can get
an idea of how your cluster performs:
time `dd if=/dev/zero of=/myfile bs=1024k count=1000;
sync`
echo 3 > /proc/
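The snippet above is cut off mid-command; the usual companion to a cold-cache dd test is dropping the page cache first (requires root). A sketch of that pattern; /myfile is assumed to sit on the gluster mount, as in Joe's command:

```shell
# Cold-cache write test (run as root on the client).
sync
echo 3 > /proc/sys/vm/drop_caches   # standard kernel knob: drop page cache + dentries/inodes
time dd if=/dev/zero of=/myfile bs=1024k count=1000
sync                                 # include the flush in what you time
```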
FYI, I’ve tried with both glusterfs and NFS mounts, and the reaction is the
same. The value of ping.timeout seems to have no effect at all.
I did discover one thing that makes a difference on reboot. There is a second
service descriptor for “glusterfsd”, which is not enabled by default, but is
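To check whether that second unit is present and enabled on a systemd host (the glusterfsd unit name is taken from the message above; the systemctl commands are standard):

```shell
# glusterd is the management daemon; glusterfsd handles brick shutdown on some distros.
systemctl status glusterd glusterfsd
systemctl is-enabled glusterfsd || systemctl enable glusterfsd
```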
Hi Kiran, thanks for the feedback! I already put up a repo on GitHub:
https://github.com/bennyturns/gluster-bench
On my TODO list:
- The benchmark is currently RHEL / RHGS (Red Hat Gluster Storage) specific;
  I want to make it work with at least the free RPM distros and Ubuntu.
- Other files
Hello all,
I am a graduate student at the University of Waterloo working on
filesystems. I have two questions about GlusterFS, and I would appreciate it
if anyone could comment on them.
First, does the GlusterFS FUSE client bypass the Linux page cache (mounting
with -o direct_io)? I have a machine with 64
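On the first question: the FUSE client's use of the page cache can be influenced at mount time via the glusterfs mount helper's direct-io-mode option. A sketch; server and volume names are placeholders:

```shell
# Enable FUSE direct I/O for this mount, bypassing the client-side page cache.
mount -t glusterfs -o direct-io-mode=enable server:/myvol /mnt/gluster
```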
- Original Message -
> From: "Khoi Mai"
> To: gluster-users@gluster.org
> Sent: Monday, April 27, 2015 10:43:27 AM
> Subject: [Gluster-users] unusual gluster-fuse client load
>
> All,
>
> I have an unusual situation.
>
> I have a client whose 1 fuse mount to a volume adds increased load
All,
I have an unusual situation.
I have a client where a single FUSE mount of a volume adds load to the
machine when it is mounted. When it is not mounted, the server is well below
1.00 in top/uptime for load average.
The strange part is that this gluster volume is mounted across multiple
servers i
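One way to see what that one mount is doing is gluster's built-in profiler, which reports per-brick call counts and latencies. A sketch; "myvol" is a placeholder:

```shell
# Collect per-brick latency and FOP statistics while the load is occurring.
gluster volume profile myvol start
gluster volume profile myvol info
gluster volume profile myvol stop
```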
2015-04-22 14:27 GMT+02:00 Krutika Dhananjay :
> 3.2.7 is a really old version. Do you mind upgrading to a more recent
> release and see if the said problem reappears there?
Thank you very much!
I upgraded to a newer version and realized that node2 didn't have
sysadmin capabilities enabled on OpenVZ.
Corey—
I was able to get a third node setup. I recreated the volume as “replica 3”.
The hang still happens (on two nodes, now) when I reboot a single node, even
though two are still surviving, which should constitute a quorum.
—CJ
> On Apr 17, 2015, at 6:18 AM, Corey Kovacs wrote:
>
> Typical
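If the hang CJ describes is quorum-related, the relevant knobs are the volume quorum options; a sketch using the documented option names, with "myvol" as a placeholder:

```shell
# Server-side quorum: glusterd kills bricks on nodes that lose quorum.
gluster volume set myvol cluster.server-quorum-type server
# Client-side quorum for replica 3: writes require a majority of bricks up.
gluster volume set myvol cluster.quorum-type auto
```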
Hi all,
I am trying to do some cross-layer optimization between GlusterFS and the
underlying EXT4 file system. Does GlusterFS care which underlying file
system is used, or is the user/admin supposed to tune GlusterFS parameters
to suit the underlying file system?
Please let me know if any such
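GlusterFS largely treats the backing filesystem as a black box, but a couple of commonly tuned points do touch it directly. A sketch of the usual knobs; device, volume name, and the window size value are illustrative:

```shell
# Brick filesystems are often created with a larger inode size so gluster's
# extended attributes fit in the inode (XFS example shown here).
mkfs.xfs -i size=512 /dev/sdb1
# On the gluster side, I/O behaviour is tuned through volume options.
gluster volume set myvol performance.write-behind-window-size 4MB
```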
Hi,
glusterfs-3.6.3 has been released and can be found here.
http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/.
This release fixes the bugs listed below, found since 3.6.2 was made
available. Thanks to all who submitted patches and reviewed the changes.
1187526 - Disperse volum
Hi,
I came across "Gluster Benchmark Kit" while reading [Gluster-users]
Disastrous performance with rsync to mounted Gluster volume thread.
http://54.82.237.211/gluster-benchmark/gluster-bench-README
http://54.82.237.211/gluster-benchmark
The Kit includes tools such as iozone, smallfile and fio