>
>
> Gary Lloyd
>
> I.T. Systems:Keele University
> Finance & IT Directorate
> Keele:Staffs:IC1 Building:ST5 5NB:UK
> +44 1782 733063
> ______
Hi,
There are a number of tweaks/hacks that improve it, but IMHO overall
performance with small files is still unacceptable for folders with
thousands of entries.
If your shares are not too large to fit on a single filesystem and you
still want to use Gluster, it is possible to run
Hello,
For offline migration you can use a storage domain of type Export, shared
between clusters. For online storage migration, the source and destination
storage have to be present in the current cluster.
Regarding the different GlusterFS versions: it should not be a problem,
because oVirt uses vm
Hi,
According to the man page for setfacl, you can specify either a name or a
number for uid and gid.
But the information will actually be stored in xattrs in the form of numbers,
AFAIK.
One way to solve your problem is consistent name/id mapping, which can be
achieved by using directory
Hi,
I always thought that hardware RAID is a requirement for SDS, as it hides all
the dirty work with raw disks from software, which simply cannot deal with all
kinds of hardware faults. If a disk starts experiencing long delays, then after
about 7 seconds the RAID controller marks the disk as failed
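Without a RAID controller, a similar effect can sometimes be approximated in software by capping the drive's own error-recovery time. A hedged sketch, assuming a drive that supports SCT ERC; /dev/sdX is a placeholder device name:

```shell
# Cap the drive's internal error recovery at 7 seconds (the value
# is in 100 ms units), so a struggling disk returns an error
# instead of stalling I/O indefinitely -- mirroring the ~7 s
# timeout a hardware RAID controller enforces.
smartctl -l scterc,70,70 /dev/sdX
# Verify the current SCT ERC setting:
smartctl -l scterc /dev/sdX
```

Note this setting is often lost on power cycle and not all desktop drives support SCT ERC at all.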
5 PM, Olivier Lambert
>>>>>> <lambert.oliv...@gmail.com> wrote:
>>>>>>> It's planned to have an arbiter soon :) It was just preliminary
>>>>>>> tests.
>>>>>>>
>>
Hi,
I've tried Minio and Scality S3 (both as Docker containers). Neither of them
gives me more than 60 MB/s for a single stream.
--
Dmitry Glushenok
Jet Infosystems
> On 28 Sep 2016, at 1:04, Gandalf Corvotempesta
> wrote:
>
> Anyone tried Minio as object
Hi,
Red Hat only supports XFS for some reason:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Installation_Guide/sect-Prerequisites1.html
--
Dmitry Glushenok
Jet Infosystems
> On 26 Sep 2016, at 14:26, Lindsay Mathieson
> wrote:
>
>
Hi,
It looks like for NFS you have to change nfs.rpc-auth-allow, not auth.allow
(which controls access via the API). The docs for nfs.rpc-auth-allow state
that "By default, all clients are disallowed", but in fact the option has
"all" as its default value.
Regarding auth.allow and information disclosure
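Setting and verifying the NFS option described above might look like the following sketch; the volume name "vol01" and the subnet are placeholders, not taken from the original thread:

```shell
# Restrict Gluster's built-in NFS server to one subnet.
# auth.allow alone does not cover NFS clients.
gluster volume set vol01 nfs.rpc-auth-allow 10.0.0.0/24
# Confirm the effective value (shows the "all" default if unset):
gluster volume get vol01 nfs.rpc-auth-allow
```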
Hi,
It is because your switch is not performing round-robin distribution when
sending data to the server (probably it can't). Usually it is enough to
configure ip-port LACP hashing to evenly distribute traffic across all ports
in the aggregation. But any single TCP connection will still be using only
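On the Linux side, ip-port hashing corresponds to the bonding driver's layer3+4 transmit hash policy. A sketch, with placeholder interface names, assuming an 802.3ad-capable switch configured with matching ip-port hashing:

```shell
# Create an LACP bond that hashes on IP addresses and TCP/UDP
# ports, so different connections can use different member links.
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set bond0 up
```

Even with this policy, each individual TCP connection still maps to exactly one member link, so single-stream throughput is capped at one link's speed.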
You are right, stat triggers self-heal. Thank you!
--
Dmitry Glushenok
Jet Infosystems
> On 17 Aug 2016, at 13:38, Ravishankar N <ravishan...@redhat.com> wrote:
>
> On 08/17/2016 03:48 PM, Дмитрий Глушенок wrote:
>> Unfortunately not:
>>
>> Remount FS,
cluster.granular-entry-heal no
[root@srv01 ~]#
--
Dmitry Glushenok
Jet Infosystems
> On 17 Aug 2016, at 11:30, Ravishankar N <ravishan...@redhat.com> wrote:
>
> On 08/17/2016 01:48 PM, Дмитрий Глушенок wrote:
>> Hello Ravi,
>&
8/16/2016 10:44 PM, Дмитрий Глушенок wrote:
>> Hello,
>>
>> While testing healing after bitrot error it was found that self healing
>> cannot heal files which were manually deleted from brick. Gluster 3.8.1:
>>
>> - Create volume, mount it locally and copy test
Hello,
While testing healing after a bitrot error, it was found that self-healing
cannot heal files which were manually deleted from the brick. Gluster 3.8.1:
- Create volume, mount it locally and copy test file to it
[root@srv01 ~]# gluster volume create test01 replica 2 srv01:/R1/test01
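The reproduction steps begun above can be sketched end to end as follows; the second brick, mount point, and file name are hypothetical stand-ins, since the original command is truncated:

```shell
# Create and mount a replica 2 volume (placeholder brick paths):
gluster volume create test01 replica 2 srv01:/R1/test01 srv02:/R1/test01
gluster volume start test01
mount -t glusterfs srv01:/test01 /mnt/test01
cp testfile /mnt/test01/
# Simulate the problem: delete the file directly from one brick,
# then ask self-heal to repair it and check the result.
rm /R1/test01/testfile
gluster volume heal test01
gluster volume heal test01 info
```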
Hi,
Same problem on 3.8.1. Even on the loopback interface (traffic does not leave
the gluster node):
Writing locally to a replica 2 volume (each brick is a separate local RAID6):
613 MB/s
Writing locally to a 1-brick volume: 877 MB/s
Writing locally to the brick itself (directly to XFS): 1400 MB/s
Tests
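The three-way comparison above can be reproduced with a simple sequential write; a minimal sketch, where the output path is a placeholder you would point in turn at the replica mount, the 1-brick mount, and the brick filesystem itself:

```shell
# Sequential-write micro-benchmark; conv=fsync makes dd flush
# before reporting, so the MB/s figure reflects stable storage.
dd if=/dev/zero of=/tmp/dd_bench bs=1M count=64 conv=fsync
# Confirm the full 64 MiB landed on disk:
stat -c %s /tmp/dd_bench
```

Note that /dev/zero writes are highly compressible and purely sequential, so the numbers are only comparable between the three targets, not absolute.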
ok
Jet Infosystems
> On 15 June 2016, at 19:42, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
>
> 2016-06-15 18:12 GMT+02:00 Дмитрий Глушенок <gl...@jet.msk.su>:
>> Hello.
>>
>> May be because of current implementation of rotten bits de
Hello.
Maybe because of the current implementation of rotten-bit detection: one hash
for the whole file. Imagine a 40 GB VM image: a few parts of the image are
modified continuously (VM log files and application data are constantly
changing). Those writes invalidate the checksum, and BitD has to
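The cost of a single whole-file hash is easy to illustrate: flipping one byte anywhere invalidates the checksum of the entire file. A small sketch using a 1 MiB stand-in for the VM image (path and offset are arbitrary):

```shell
# One whole-file hash: a single-byte write anywhere forces the
# entire file to be re-hashed to restore a valid checksum.
head -c 1048576 /dev/zero > /tmp/vmimage
h1=$(sha256sum /tmp/vmimage | cut -d' ' -f1)
# Modify one byte mid-file, as a VM's log area would:
printf 'X' | dd of=/tmp/vmimage bs=1 seek=524288 conv=notrunc 2>/dev/null
h2=$(sha256sum /tmp/vmimage | cut -d' ' -f1)
[ "$h1" != "$h2" ] && echo "hash invalidated"
```

For a 40 GB image under constant small writes, re-reading the whole file to recompute one hash is what keeps BitD busy; per-block checksums would localize that work.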