Hi Luis, I speak Spanish, you've really made my day; what I don't speak, or
speak very badly, is English, heh.
Regards,
César
From: Luis Cerezo [mailto:l...@luiscerezo.org]
Sent: Tuesday 21 February 2012 13:32
To: Cesar Miguel Fuentes
Cc: 'gluster-users@gluster.org'
Subject: Re: [Gluster-users]
Hi all,
I have several servers; they all have a user "a". I try to mount the
glusterfs client on them, but user "a" on two of them has a different
uid/gid from the others,
so the mount results in different permissions.
I saw there was a solution:
features/filter:
* root-squashing
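Another approach, sketched here with placeholder ids rather than anything from
the thread, is simply to give user "a" the same uid/gid on every server before
mounting, for example:

  groupmod -g 1500 a                                  # 1500 is an arbitrary example gid
  usermod -u 1500 -g 1500 a                           # repeat on every server
  find / -xdev -uid OLD_UID -exec chown -h a {} \;    # re-own files left with the old uid

where OLD_UID is whatever uid user "a" had before on that server.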
On Wed, Feb 22, 2012 at 7:56 AM, Lars Erik Dangvard Jensen
l...@dcmediahosting.com wrote:
On 22/02/2012 at 16.46, Dipeit wrote:
we have this running with zfsonlinux and glusterfs 3.2.5 and are using a
60TB volume across 3 storage servers. In the last 6 months we had one
unexplained reboot
On 02/23/2012 03:17 PM, Vijay Bellur wrote:
On 02/23/2012 11:01 AM, Deepak C Shetty wrote:
Is gluster supported/tested or tried on powerpc platform ?
Are there rpms available for the same ?
Gluster is not supported on the powerpc platform. There have been
community testing and trial reports on
On 02/23/2012 04:10 PM, Fabricio wrote:
On 23-02-2012 07:47, Vijay Bellur wrote:
On 02/23/2012 11:01 AM, Deepak C Shetty wrote:
Is gluster supported/tested or tried on powerpc platform ?
Are there rpms available for the same ?
Gluster is not supported on the powerpc platform. There have been
I should add that if you really want great performance (random,
metadata, throughput) you may want to use this: http://www.fhgfs.com/
We installed it next to the gluster folder on our zfs volumes. fhgfs
has better performance than gluster, but backup and DR are much more
difficult. However they run
Thanks Jeff, that's interesting.
It is reassuring to know that these errors are self-repairing. That
does appear to be happening, but only when I run find -print0 | xargs
--null stat >/dev/null in affected directories. I will run that
self-heal on the whole volume as well, but I have had to
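For reference, the full-volume form of that trigger is normally written with
the output redirected; the mount point below is just a placeholder:

  find /mnt/gluster -noleaf -print0 | xargs --null stat >/dev/null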
On 02/23/2012 08:58 AM, Dan Bretherton wrote:
It is reassuring to know that these errors are self-repairing. That does
appear to be happening, but only when I run find -print0 | xargs --null stat
>/dev/null in affected directories.
Hm. Then maybe the xattrs weren't *set* on that brick.
I
Jeff,
The main question is therefore why
we're losing connectivity to these servers.
Could there be a hardware issue? I have replaced the network cables for
the two servers but I don't really know what else to check. The network
switch hasn't recorded any errors for those two ports. There
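A few things that could still be checked on the servers themselves are the
interface error counters, along these lines (eth0 is an assumed interface name):

  ip -s link show eth0                      # RX/TX error and drop counters
  ethtool -S eth0 | grep -iE 'err|drop'     # NIC-level error statistics
  dmesg | grep -i eth0                      # link flaps logged by the kernel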
On 02/23/2012 11:45 AM, Dan Bretherton wrote:
The main question is therefore why
we're losing connectivity to these servers.
Could there be a hardware issue? I have replaced the network cables for
the two servers but I don't really know what else to check. The network
switch hasn't
(Sorry, I sent my answer to Rahul directly and not to the list)
I have the rpm version installed on my test cluster at the moment
Got these libxml2 packages installed:
libxml2-static-2.7.6-4.el6_2.4.x86_64
libxml2-python-2.7.6-4.el6_2.4.x86_64
libxml2-2.7.6-4.el6_2.4.i686
Hello
I have just started to prepare a smallish production setup (nothing
critical running on it yet).
I have 2 gluster servers with 8 volumes and I'm getting a lot of these
warnings in the cli.log
[2012-02-23 22:32:15.808271] W [rpc-transport.c:606:rpc_transport_load]
0-rpc-transport: missing
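A quick way to gauge how often the warning is appearing, assuming the default
log location, is something like:

  grep -c rpc_transport_load /var/log/glusterfs/cli.log
  tail -n 20 /var/log/glusterfs/cli.log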
Hi,
I've been migrating data from an old striped 3.0.x gluster install to
a 3.3 beta install. I copied all the data to a regular XFS partition
(4K blocksize) from the old gluster striped volume and it totaled
9.2TB. With the old setup I used the following option in a volume
stripe block in the
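For anyone who hasn't seen the old hand-written volfiles, a stripe block looks
roughly like the sketch below; the option and values are purely illustrative,
not the ones referred to above:

  volume stripe0
    type cluster/stripe
    option block-size 2MB            # illustrative value only
    subvolumes brick1 brick2 brick3
  end-volume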
This seems to be a bug in XFS, as Joe pointed out:
http://oss.sgi.com/archives/xfs/2011-06/msg00233.html
http://stackoverflow.com/questions/6940516/create-sparse-file-with-alternate-data-and-hole-on-ext3-and-xfs
It seems to be present in the XFS available natively in RHEL6 and RHEL5
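A minimal way to see the effect is to create a sparse file and compare its
apparent size with the blocks actually allocated; the path is just an example:

  dd if=/dev/zero of=/tmp/sparse.img bs=1M count=1 seek=1024   # ~1GB apparent size, 1MB written
  ls -lh /tmp/sparse.img    # apparent size
  du -h /tmp/sparse.img     # allocated size, where the discrepancy shows up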
On Thu, Feb 23,
Well, I'm still getting weird results with ext4:
gluster1:/pirstripe   34T   88G   34T   1%   /pirstripe
gluster1:/pirdist     34T   88G   34T   1%   /pirdist
[root@gluster1 ~]# du -sh /pirstripe /pirdist
10G /pirstripe
38G /pirdist
10 * 5 + 38 = 88, as if the 10G of striped data were being counted five times
plus the 38G of distributed data, rather than 10 + 38
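One way to see where the extra usage is being charged is to compare allocated
and apparent sizes directly on the backend bricks; the brick paths here are
placeholders:

  du -sh --apparent-size /export/brick*/pirstripe   # logical file sizes per brick
  du -sh /export/brick*/pirstripe                   # blocks actually allocated per brick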
On Fri, Feb 24,