Re: [Gluster-users] How to sync content to a file through standard Java APIs

2018-03-18 Thread Niels de Vos
On Wed, Mar 14, 2018 at 06:07:19PM +0100, Sven Ludwig wrote:
> Hello,
> 
> we have a Gluster FS mounted at some /mnt/... path on a Server. The
> actual physical device behind this resides on some other Server.
> 
> Now, the requirement is to write files to this Gluster FS Volume in a
> durable fashion, i.e. for an officially succeeded write the contents
> MUST have been actually synced to disk on at least one of the Gluster
> FS nodes in the replica set.
> 
> Our questions:
> 
> 1. Which JDK 8 APIs can we use that would fulfill this requirement
> with an actually working sync?
> 
> 2. Is java.nio.channels.FileChannel.open with StandardOpenOption#SYNC
> in the set of options given to this method an option, or would it
> perhaps not actually guarantee the sync?
> 
> 3. Apart from 2., are there other ways to achieve the requirement
> through the standard JDK 8 APIs?

There is nothing special to take care of; all languages should support
the required features. These are the two things you need to take into
account (just like with other filesystems):

1. call a flush() or sync() function after writing to a file(descriptor)
2. when creating a new file (or any directory entry), call flush() or
   sync() on the directory where the new entry is created to persist the
   metadata of the directory.

See 'man 2 fsync' for more details.
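As a concrete illustration of the two steps above in JDK 8 terms (this sketch is not from the thread; the class name, error handling, and paths are illustrative): opening the channel with StandardOpenOption.SYNC answers question 2 — each write() returns only after data and metadata reach the device, like O_SYNC — and fsyncing the parent directory covers the directory-entry step. Opening a directory as a FileChannel for force() is the trick Lucene uses; it works on Linux JDKs but is not guaranteed by the Java spec on every platform.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DurableWrite {

    /** Writes data to path so that both the file data and the parent
     *  directory entry are flushed to stable storage. */
    static void writeDurably(Path path, byte[] data) throws IOException {
        // SYNC: every write() returns only after data AND file metadata
        // have been flushed to the underlying device.
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE,
                StandardOpenOption.SYNC)) {
            ByteBuffer buf = ByteBuffer.wrap(data);
            while (buf.hasRemaining()) {
                ch.write(buf);
            }
        }

        // For a newly created file, also fsync the parent directory so
        // the directory entry itself is persisted (step 2 above).
        try (FileChannel dir = FileChannel.open(path.getParent(),
                StandardOpenOption.READ)) {
            dir.force(true);
        } catch (IOException e) {
            // Some platforms refuse to open directories as channels;
            // decide whether that meets your durability requirements.
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempDirectory("durable").resolve("data.txt");
        writeDurably(p, "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(Files.readAllBytes(p),
                StandardCharsets.UTF_8));
    }
}
```

Alternatively, open the file without SYNC and call ch.force(true) once before closing; that batches the flush instead of syncing on every write.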

HTH,
Niels
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-18 Thread TomK

On 3/18/2018 6:13 PM, Sam McLeod wrote:
Even your NFS transfers are around 12.5 MB per second or less.

1) Did you use fdisk and LVM under that XFS filesystem?

2) Did you benchmark the XFS with something like bonnie++?  (There are 
probably newer benchmark suites now.)


3) Did you benchmark your Network transfer speeds?  Perhaps your NIC 
negotiated a lower speed.


4) I've done XFS tuning for another purpose but got good results.  If it 
helps, I can send you the doc.


Cheers,
Tom


Howdy all,

We're experiencing terrible small file performance when copying or 
moving files on gluster clients.


In the example below, Gluster takes ~6 minutes to copy 128 MB / 21,000 
files sideways on a client, while doing the same thing on NFS (which I 
know is a totally different solution etc. etc.) takes approximately 10-15 
seconds(!).


Any advice for tuning the volume or XFS settings would be greatly 
appreciated.


Hopefully I've included enough relevant information below.


## Gluster Client

root@gluster-client:/mnt/gluster_perf_test/  # du -sh .
127M    .
root@gluster-client:/mnt/gluster_perf_test/  # find . -type f | wc -l
21791
root@gluster-client:/mnt/gluster_perf_test/  # du 9584toto9584.txt
4    9584toto9584.txt


root@gluster-client:/mnt/gluster_perf_test/  # time cp -a private 
private_perf_test


real    5m51.862s
user    0m0.862s
sys    0m8.334s

root@gluster-client:/mnt/gluster_perf_test/ # time rm -rf private_perf_test/

real    0m49.702s
user    0m0.087s
sys    0m0.958s


## Hosts

- 16x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz per Gluster host / client
- Storage: iSCSI provisioned (via 10Gbit DAC/Fibre), SSD disk, 50K R/RW 
4k IOP/s, 400MB/s per Gluster host

- Volumes are replicated across two hosts and one arbiter only host
- Networking is 10Gbit DAC/Fibre between Gluster hosts and clients
- 18GB DDR4 ECC memory

## Volume Info

root@gluster-host-01:~ # gluster pool list
UUID                                  Hostname              State
ad02970b-e2aa-4ca8-998c-bd10d5970faa  gluster-host-02.fqdn Connected
ea116a94-c19e-48db-b108-0be3ae622e2e  gluster-host-03.fqdn Connected
2e855c25-e7ac-4ff6-be85-e8bcc6f45ee4  localhost             Connected


root@gluster-host-01:~ # gluster volume info uat_storage

Volume Name: uat_storage
Type: Replicate
Volume ID: 7918f1c5-5031-47b8-b054-56f6f0c569a2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster-host-01.fqdn:/mnt/gluster-storage/uat_storage
Brick2: gluster-host-02.fqdn:/mnt/gluster-storage/uat_storage
Brick3: gluster-host-03.fqdn:/mnt/gluster-storage/uat_storage (arbiter)
Options Reconfigured:
performance.rda-cache-limit: 256MB
network.inode-lru-limit: 5
server.outstanding-rpc-limit: 256
performance.client-io-threads: true
nfs.disable: on
transport.address-family: inet
client.event-threads: 8
cluster.eager-lock: true
cluster.favorite-child-policy: size
cluster.lookup-optimize: true
cluster.readdir-optimize: true
cluster.use-compound-fops: true
diagnostics.brick-log-level: ERROR
diagnostics.client-log-level: ERROR
features.cache-invalidation-timeout: 600
features.cache-invalidation: true
network.ping-timeout: 15
performance.cache-invalidation: true
performance.cache-max-file-size: 6MB
performance.cache-refresh-timeout: 60
performance.cache-size: 1024MB
performance.io-thread-count: 16
performance.md-cache-timeout: 600
performance.stat-prefetch: true
performance.write-behind-window-size: 256MB
server.event-threads: 8
transport.listen-backlog: 2048

root@gluster-host-01:~ # xfs_info /dev/mapper/gluster-storage-unlocked
meta-data=/dev/mapper/gluster-storage-unlocked isize=512    agcount=4, agsize=196607360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=786429440, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=383998, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod

Words are my own opinions and do not necessarily represent those of 
my employer or partners.








--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.



Re: [Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-18 Thread Sam McLeod
Hi Tom,

Thanks for your reply.

1. Yes, XFS is on a LUKS LV (see below).
2. Yes, I prefer FIO but each Gluster host gets between 50-100K 4K random IOP/s 
both write and read to disk.
3. Yes, we actually use 2x 10Gbit DACs in LACP, but we get full 10Gbit speeds 
(and very low latency thanks to the DACs).
4. I'd love to see that, it'd be much appreciated thanks.



# lsblk
NAME                           MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
xvdc                           202:32   0  1.5T  0 disk
└─xvdc1                        202:33   0  1.5T  0 part
  └─gluster-storage            253:1    0    3T  0 lvm
    └─gluster-storage-unlocked 253:3    0    3T  0 crypt /mnt/gluster-storage
xvda                           202:0    0   18G  0 disk
├─xvda2                        202:2    0 17.5G  0 part
│ ├─centos-var                 253:2    0  9.5G  0 lvm   /var
│ └─centos-root                253:0    0    8G  0 lvm   /
└─xvda1                        202:1    0  500M  0 part  /boot
sr0                             11:0    1 1024M  0 rom
xvdb                           202:16   0  1.5T  0 disk
└─xvdb1                        202:17   0  1.5T  0 part
  └─gluster-storage            253:1    0    3T  0 lvm
    └─gluster-storage-unlocked 253:3    0    3T  0 crypt /mnt/gluster-storage

--
Sam McLeod
Please respond via email when possible.
https://smcleod.net
https://twitter.com/s_mcleod

> On 19 Mar 2018, at 10:37 am, TomK  wrote:
> 
> On 3/18/2018 6:13 PM, Sam McLeod wrote:
> Even your NFS transfers are 12.5 or so MB per second or less.
> 
> 1) Did you use fdisk and LVM under that XFS filesystem?
> 
> 2) Did you benchmark the XFS with something like bonnie++?  (There's probably 
> newer benchmark suites now.)
> 
> 3) Did you benchmark your Network transfer speeds?  Perhaps your NIC 
> negotiated a lower speed.
> 
> 4) I've done XFS tuning for another purpose but got good results.  If it 
> helps, I can send you the doc.
> 
> Cheers,
> Tom
> 
>> Howdy all,
>> We're experiencing terrible small file performance when copying or moving 
>> files on gluster clients.
>> In the example below, Gluster is taking 6mins~ to copy 128MB / 21,000 files 
>> sideways on a client, doing the same thing on NFS (which I know is a totally 
>> different solution etc. etc.) takes approximately 10-15 seconds(!).
>> Any advice for tuning the volume or XFS settings would be greatly 
>> appreciated.
>> Hopefully I've included enough relevant information below.
>> ## Gluster Client
>> root@gluster-client:/mnt/gluster_perf_test/  # du -sh .
>> 127M    .
>> root@gluster-client:/mnt/gluster_perf_test/  # find . -type f | wc -l
>> 21791
>> root@gluster-client:/mnt/gluster_perf_test/  # du 9584toto9584.txt
>> 4    9584toto9584.txt
>> root@gluster-client:/mnt/gluster_perf_test/  # time cp -a private 
>> private_perf_test
>> real    5m51.862s
>> user    0m0.862s
>> sys    0m8.334s
>> root@gluster-client:/mnt/gluster_perf_test/ # time rm -rf private_perf_test/
>> real    0m49.702s
>> user    0m0.087s
>> sys    0m0.958s
>> ## Hosts
>> - 16x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz per Gluster host / client
>> - Storage: iSCSI provisioned (via 10Gbit DAC/Fibre), SSD disk, 50K R/RW 4k 
>> IOP/s, 400MB/s per Gluster host
>> - Volumes are replicated across two hosts and one arbiter only host
>> - Networking is 10Gbit DAC/Fibre between Gluster hosts and clients
>> - 18GB DDR4 ECC memory
>> ## Volume Info
>> root@gluster-host-01:~ # gluster pool list
>> UUID                                  Hostname              State
>> ad02970b-e2aa-4ca8-998c-bd10d5970faa  gluster-host-02.fqdn Connected
>> ea116a94-c19e-48db-b108-0be3ae622e2e  gluster-host-03.fqdn Connected
>> 2e855c25-e7ac-4ff6-be85-e8bcc6f45ee4  localhost             Connected
>> root@gluster-host-01:~ # gluster volume info uat_storage
>> Volume Name: uat_storage
>> Type: Replicate
>> Volume ID: 7918f1c5-5031-47b8-b054-56f6f0c569a2
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster-host-01.fqdn:/mnt/gluster-storage/uat_storage
>> Brick2: gluster-host-02.fqdn:/mnt/gluster-storage/uat_storage
>> Brick3: gluster-host-03.fqdn:/mnt/gluster-storage/uat_storage (arbiter)
>> Options Reconfigured:
>> performance.rda-cache-limit: 256MB
>> network.inode-lru-limit: 5
>> server.outstanding-rpc-limit: 256
>> performance.client-io-threads: true
>> nfs.disable: on
>> transport.address-family: inet
>> client.event-threads: 8
>> cluster.eager-lock: true
>> cluster.favorite-child-policy: size
>> cluster.lookup-optimize: true
>> cluster.readdir-optimize: true
>> cluster.use-compound-fops: true
>> diagnostics.brick-log-level: ERROR
>> diagnostics.client-log-level: ERROR
>> features.cache-invalidation-timeout: 600
>> features.cache-invalidation: true
>> network.ping-timeout: 15
>> performance.cache-invalidation: true
>> performance.cache-max-file-size: 6MB
>> performance.cache-refresh-timeout: 60
>> performance.cache-size: 1024MB
>> 

[Gluster-users] Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)

2018-03-18 Thread Sam McLeod
Howdy all,

We're experiencing terrible small file performance when copying or moving files 
on gluster clients.

In the example below, Gluster takes ~6 minutes to copy 128 MB / 21,000 files 
sideways on a client, while doing the same thing on NFS (which I know is a 
totally different solution etc. etc.) takes approximately 10-15 seconds(!).

Any advice for tuning the volume or XFS settings would be greatly appreciated.

Hopefully I've included enough relevant information below.


## Gluster Client

root@gluster-client:/mnt/gluster_perf_test/  # du -sh .
127M    .
root@gluster-client:/mnt/gluster_perf_test/  # find . -type f | wc -l
21791
root@gluster-client:/mnt/gluster_perf_test/  # du 9584toto9584.txt
4    9584toto9584.txt


root@gluster-client:/mnt/gluster_perf_test/  # time cp -a private 
private_perf_test

real    5m51.862s
user    0m0.862s
sys    0m8.334s

root@gluster-client:/mnt/gluster_perf_test/ # time rm -rf private_perf_test/

real    0m49.702s
user    0m0.087s
sys    0m0.958s


## Hosts

- 16x Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz per Gluster host / client
- Storage: iSCSI provisioned (via 10Gbit DAC/Fibre), SSD disk, 50K R/RW 4k 
IOP/s, 400MB/s per Gluster host
- Volumes are replicated across two hosts and one arbiter only host
- Networking is 10Gbit DAC/Fibre between Gluster hosts and clients
- 18GB DDR4 ECC memory

## Volume Info

root@gluster-host-01:~ # gluster pool list
UUID                                  Hostname              State
ad02970b-e2aa-4ca8-998c-bd10d5970faa  gluster-host-02.fqdn Connected
ea116a94-c19e-48db-b108-0be3ae622e2e  gluster-host-03.fqdn Connected
2e855c25-e7ac-4ff6-be85-e8bcc6f45ee4  localhost   Connected

root@gluster-host-01:~ # gluster volume info uat_storage

Volume Name: uat_storage
Type: Replicate
Volume ID: 7918f1c5-5031-47b8-b054-56f6f0c569a2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster-host-01.fqdn:/mnt/gluster-storage/uat_storage
Brick2: gluster-host-02.fqdn:/mnt/gluster-storage/uat_storage
Brick3: gluster-host-03.fqdn:/mnt/gluster-storage/uat_storage (arbiter)
Options Reconfigured:
performance.rda-cache-limit: 256MB
network.inode-lru-limit: 5
server.outstanding-rpc-limit: 256
performance.client-io-threads: true
nfs.disable: on
transport.address-family: inet
client.event-threads: 8
cluster.eager-lock: true
cluster.favorite-child-policy: size
cluster.lookup-optimize: true
cluster.readdir-optimize: true
cluster.use-compound-fops: true
diagnostics.brick-log-level: ERROR
diagnostics.client-log-level: ERROR
features.cache-invalidation-timeout: 600
features.cache-invalidation: true
network.ping-timeout: 15
performance.cache-invalidation: true
performance.cache-max-file-size: 6MB
performance.cache-refresh-timeout: 60
performance.cache-size: 1024MB
performance.io-thread-count: 16
performance.md-cache-timeout: 600
performance.stat-prefetch: true
performance.write-behind-window-size: 256MB
server.event-threads: 8
transport.listen-backlog: 2048

root@gluster-host-01:~ # xfs_info /dev/mapper/gluster-storage-unlocked
meta-data=/dev/mapper/gluster-storage-unlocked isize=512    agcount=4, agsize=196607360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=786429440, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=383998, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


--
Sam McLeod (protoporpoise on IRC)
https://smcleod.net
https://twitter.com/s_mcleod

Words are my own opinions and do not necessarily represent those of my employer 
or partners.


Re: [Gluster-users] Bug 1442983 on 3.10.11 Unable to acquire lock for gluster volume leading to 'another transaction in progress' error

2018-03-18 Thread Atin Mukherjee
The fix is already available from glusterfs-3.12.3 onwards. Please refer to
https://bugzilla.redhat.com/show_bug.cgi?id=1503239 .

On Sat, Mar 17, 2018 at 2:21 PM, Paolo Margara wrote:

> Hi,
>
> is this patch already available in the community version of gluster
> 3.12? In which version? If not, is there a plan to backport it?
>
>
> Greetings,
>
> Paolo
>
> On 16/03/2018 13:24, Atin Mukherjee wrote:
>
> Have sent a backport request https://review.gluster.org/19730 at the
> release-3.10 branch. Hopefully this fix will be picked up in the next update.
>
> On Fri, Mar 16, 2018 at 4:47 PM, Marco Lorenzo Crociani <
> mar...@prismatelecomtesting.com> wrote:
>
>> Hi,
>> I'm hitting bug https://bugzilla.redhat.com/show_bug.cgi?id=1442983
>> on glusterfs 3.10.11 and oVirt 4.1.9 (and before on glusterfs 3.8.14)
>>
>> The bug report says fixed in glusterfs-3.12.2-1
>>
>
> The above is not a community bug, so even though the version says 3.12.2
> it's technically not the same as the community version of 3.12.2. From what
> I can see, this fix was introduced from glusterfs-3.13; it could be
> backported if needed.
>
>>
>> Is there a plan to backport the fix to 3.10.x releases or the only way to
>> fix is upgrade to 3.12?
>>
>> Regards,
>>
>> --
>> Marco Crociani
>>
>
>
>
>
>