[Gluster-users] Hot tiering and data writes

2019-11-06 Thread Green Lantern
Is it possible to force all writes to a cold tier in a hot/cold tier
arrangement?


Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] how to downgrade GlusterFS from version 7 to 3.13?

2019-11-06 Thread Amar Tumballi
On Wed, Nov 6, 2019 at 2:05 PM Riccardo Murri wrote:

> Hello,
>
> Is there any way to downgrade a GlusterFS cluster?  Given the
> performance issues that I have seen with GlusterFS 6 and 7 (reported
> elsewhere on this mailing-list), I am now considering downgrading back
> to GlusterFS 3.13.
>
> I have set up a test cluster, copied some files on it, and tried to
> downgrade the naive way (uninstall GlusterFS 7, install GlusterFS
> 3.13, do not touch bricks or other config).  This didn't work:
> `glusterd` v3.13 complains that the op-version left by version 7 cannot be
> restored.
>
> So I tried again, this time trying to lower op-version before
> downgrading the system packages, but `gluster set all op-version`
> won't let me:
>
> ```
> $ sudo gluster volume set all cluster.op-version 31000
> volume set: failed: Required op-version (31000) should not be equal or
> lower than current cluster op-version (7).
> ```
>
> What can I try next?
>
>
We don't advise downgrades, and we don't currently test downgrading as part of
any automated tests.

Disclaimer: the steps below are not tested.

Considering you have a genuine issue, and assuming no new options were added
with the new version, try editing the '/var/lib/glusterd/glusterd.info' file,
changing the op-version manually on all machines, and restarting glusterd.
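
A rough sketch of what I mean, assuming glusterd.info carries an
'operating-version=' line (untested, as said above; 31000 is simply the value
from your attempt, so use whatever op-version your 3.13 packages expect):

```
# Run on every node in the pool; back up /var/lib/glusterd first.
systemctl stop glusterd
cp -a /var/lib/glusterd/glusterd.info /var/lib/glusterd/glusterd.info.bak
sed -i 's/^operating-version=.*/operating-version=31000/' /var/lib/glusterd/glusterd.info
systemctl start glusterd
```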

-Amar


> Thanks,
> Riccardo


Re: [Gluster-users] Performance is falling rapidly when updating from v5.5 to v7.0

2019-11-06 Thread David Spisla
I did another test with the inode size on the XFS bricks set to 1024 bytes, but
it also had no effect. Here is the measurement (all values in MiB/s):

64 KiB   1 MiB   10 MiB
0,16     2,52    76,58

Besides that, I was not able to set the xattr trusted.io-stats-dump. I am
wondering why it is not working.
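
For reference, a minimal sketch of how I understand the dump is supposed to be
triggered, assuming the virtual xattr has to be set on a GlusterFS client mount
point rather than on a brick path (the mount point below is made up):

```
# Hypothetical FUSE mount of the volume (paths are placeholders).
mount -t glusterfs fs-dl380-c1-n1:/archive1 /mnt/archive1

# Setting the virtual xattr asks the io-stats translator to dump its
# counters; the value names the requested output file.
setfattr -n trusted.io-stats-dump -v /tmp/iostat.log /mnt/archive1
```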

Regards
David Spisla

On Wed, Nov 6, 2019 at 11:16 AM RAFI KC wrote:

>
> On 11/6/19 3:42 PM, David Spisla wrote:
>
> Hello Rafi,
>
> I tried to set the xattr via
>
> setfattr -n trusted.io-stats-dump -v '/tmp/iostat.log'
> /gluster/repositories/repo1/
>
> but it had no effect. There is no such xattr visible via getfattr and no
> logfile. The command setxattr is not available. What am I doing wrong?
>
>
> I will check it out and get back to you.
>
>
> By the way, did you mean to increase the inode size of the XFS layer from 512
> bytes to 1024 KB(!)? I think it should be 1024 bytes, because 2048 bytes is
> the maximum.
>
> It was a typo, I meant 1024 bytes, sorry for that.
>
>
> Regards
> David
>
> On Wed, Nov 6, 2019 at 4:10 AM RAFI KC wrote:
>
>> I will take a look at the profile info shared. Since there is a huge
>> difference in the performance numbers between FUSE and Samba, it would be
>> great if we could get the profile info of FUSE (on v7). This will help to
>> compare the number of calls for each fop. There should be some fops that
>> Samba repeats, and we can find them by comparing with FUSE.
>>
>> Also, if possible, can you please get client profile info from the FUSE mount
>> using the command `setxattr -n trusted.io-stats-dump -v <output file, e.g.
>> /tmp/iostat.log> <mount point>`.
>>
>>
>> Regards
>>
>> Rafi KC
>>
>> On 11/5/19 11:05 PM, David Spisla wrote:
>>
>> I did the test with Gluster 7.0 and ctime disabled, but it had no effect
>> (all values in MiB/s):
>> 64 KiB   1 MiB   10 MiB
>> 0,16     2,60    54,74
>>
>> Attached is now the complete profile file, including the results from the
>> last test. I will not repeat it with a higher inode size because I don't
>> think this will have an effect. There must be another cause for the low
>> performance.
>>
>>
>> Yes. No need to try with a higher inode size.
>>
>>
>>
>> Regards
>> David Spisla
>>
>> On Tue, Nov 5, 2019 at 4:25 PM David Spisla <spisl...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, Nov 5, 2019 at 12:06 PM RAFI KC wrote:
>>>

 On 11/4/19 8:46 PM, David Spisla wrote:

 Dear Gluster Community,

 I also have an issue concerning performance. Over the last few days I updated
 our test cluster from GlusterFS v5.5 to v7.0. The setup in general:

 2 HP DL380 servers with 10 Gbit NICs, 1 Distributed-Replicate volume with
 2 replica pairs. The client is Samba/SMB (access via vfs_glusterfs). I did
 several tests to make sure that Samba itself does not cause the drop.
 The setup is completely the same except for the Gluster version.
 Here are my results (file sizes 64 KiB / 1 MiB / 10 MiB, values in MiB/s):
 64 KiB   1 MiB    10 MiB
 3,49     47,41    300,50   (GlusterFS v5.5)
 0,16     2,61     76,63    (GlusterFS v7.0)


 Can you please share the profile information [1] for both versions?
 Also it would be really helpful if you could mention the I/O patterns
 used for these tests.

 [1] :
 https://docs.gluster.org/en/latest/Administrator%20Guide/Monitoring%20Workload/

>>> Hello Rafi,
>>> thank you for your help.
>>>
>>> * First, more information about the I/O patterns: as a client we use a
>>> DL360 Windows Server 2017 machine with a 10 Gbit NIC connected to the storage
>>> machines. The share is mounted via SMB and the tests write with fio.
>>> We use these job files (see attachment). Each job file is executed
>>> separately, and there is a sleep of about 60 s between test runs to let the
>>> system calm down before starting a new test.
>>>
>>> * Attached below you find the profile output from the tests with v5.5
>>> (ctime enabled), v7.0 (ctime enabled).
>>>
>>> * Besides the Samba tests, I also ran some fio tests directly on the FUSE
>>> mounts (locally on one of the storage nodes). The results show only a small
>>> decrease in performance between v5.5 and v7.0 (all values in MiB/s):
>>> 64 KiB   1 MiB    10 MiB
>>> 50,09    679,96   1023,02   (v5.5)
>>> 47,00    656,46   977,60    (v7.0)
>>>
>>> It seems that the combination of Samba + Gluster 7.0 has a lot of problems,
>>> doesn't it?
>>>
>>>

 We use these volume options (GlusterFS 7.0):

 Volume Name: archive1
 Type: Distributed-Replicate
 Volume ID: 44c17844-0bd4-4ca2-98d8-a1474add790c
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 2 x 2 = 4
 Transport-type: tcp
 Bricks:
 Brick1: fs-dl380-c1-n1:/gluster/brick1/glusterbrick
 Brick2: fs-dl380-c1-n2:/gluster/brick1/glusterbrick
 Brick3: fs-dl380-c1-n1:/gluster/brick2/glusterbrick
 Brick4: 

[Gluster-users] Add existing data to new glusterfs install

2019-11-06 Thread Michael Rightmire
This is related to my previous post "Sudden, dramatic performance drops
with Glusterfs".


Is it possible to install GlusterFS on an existing server and filesystem, and
migrate all of that existing data into a GlusterFS volume?


The ultimate goal would be to create a single-brick volume, migrate the data
into that volume, and then add mirrored bricks afterwards, such that the data
would then be synchronized to the newly added servers (see the command sketch
after the list below).

I.e.
- Have a raid6 virtual disk at server1:/dev/sdX filled with data
- Install glusterfs-server on server1.
- Create a volume with one brick (server1:/dev/sdX) called Data1
- Manage to migrate all the data already on /dev/sdX into the gluster
volume data1 (i.e. get the data recognized by the gluster index).

- And then add server2:/dev/sdY to the data1 volume as a mirror.
- Have all the data on server1:data1 sync over to server2:/data1
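
In gluster commands, the rough sequence I have in mind would look something
like this (hostnames, paths and the volume name are placeholders; bricks are
directories on a mounted filesystem rather than raw block devices, and whether
gluster actually picks up the pre-existing files is exactly what I am asking
about):

```
# On server1: create a single-brick volume over a directory that already
# holds the data, then start it ('force' may be needed, e.g. if the brick
# sits on the root partition).
gluster volume create data1 server1:/data/brick1 force
gluster volume start data1

# Later: turn it into a 2-way replica by adding a brick on server2,
# then trigger a full self-heal to copy the existing files over.
gluster peer probe server2
gluster volume add-brick data1 replica 2 server2:/data/brick1 force
gluster volume heal data1 full
```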

We are talking about 5 volumes and a total of 60 TB of data.

Thanks!
--

Mike

Karlsruher Institut für Technologie (KIT)

Institut für Anthropomatik und Robotik (IAR)

Hochperformante Humanoide Technologien (H2T)

Michael Rightmire

B.Sci, HPUXCA, MCSE, MCP, VDB, ISCB

Systems IT/Development

Adenauerring 2 , Gebäude 50.20, Raum 022

76131 Karlsruhe

Telefon: +49 721 608-45032

Fax: +49 721 608-44077

E-Mail: michael.rightm...@kit.edu

http://www.humanoids.kit.edu/

http://h2t.anthropomatik.kit.edu 

KIT – The Research University in the Helmholtz Association

KIT has been certified as a family-friendly university since 2010





Re: [Gluster-users] Sudden, dramatic performance drops with Glusterfs

2019-11-06 Thread DUCARROZ Birgit


Hi,

I had a complete crash with Gluster on ZFS. I installed ext4 instead and it
worked much better.


Regards,
Birgit

On 06/11/19 11:50, Michael Rightmire wrote:

Hello list!

I'm new to Glusterfs in general. We have chosen to use it as our 
distributed file system on a new set of HA file servers.


The setup is:
2 SUPERMICRO SuperStorage Server 6049PE1CR36L with 24 x 4 TB spinning disks
and NVMe for cache and SLOG.

HBA not RAID card
Ubuntu 18.04 server (on both systems)
ZFS filestorage
Glusterfs 5.10

Step one was to install Ubuntu, ZFS, and gluster. This all went without 
issue.

We have 3 identical ZFS raidz2 pools on both servers.
We have three mirrored GlusterFS volumes, 1 attached to each raidz on
each server. I.e.


And mounted the gluster volumes as (for example) "/glusterfs/homes -> 
/zpool/homes". I.e.
gluster volume create homes replica 2 transport tcp 
server1:/zpool-homes/homes server2:/zpool-homes/homes force
(on server1) server1:/homes 44729413504 16032705152 28696708352  36% 
/glusterfs/homes


The problem is, the performance has deteriorated terribly.
We needed to copy all of our data from the old server to the new
glusterfs volumes (approx. 60 TB).
We decided to do this with multiple rsync commands (around 400 simultaneous
rsyncs).
The copy went well for the first 4 days, with an average across all 
rsyncs of 150-200 MBytes per second.

Then, suddenly, on the fourth day, it dropped to about 50 MBytes/s.
Then, by the end of the day, down to ~5MBytes/s (five).
I've stopped the rsyncs, and I can still copy an individual file across
to the glusterfs shared directory at 100 MB/s.

But actions such as "ls -la" or "find" take forever!

Are there obvious flaws in my setup to correct?
How can I better troubleshoot this?

Thanks!
--

Mike






[Gluster-users] Sudden, dramatic performance drops with Glusterfs

2019-11-06 Thread Michael Rightmire

Hello list!

I'm new to Glusterfs in general. We have chosen to use it as our 
distributed file system on a new set of HA file servers.


The setup is:
2 SUPERMICRO SuperStorage Server 6049PE1CR36L with 24 x 4 TB spinning disks
and NVMe for cache and SLOG.

HBA not RAID card
Ubuntu 18.04 server (on both systems)
ZFS filestorage
Glusterfs 5.10

Step one was to install Ubuntu, ZFS, and gluster. This all went without 
issue.

We have 3 identical ZFS raidz2 pools on both servers.
We have three mirrored GlusterFS volumes, 1 attached to each raidz on
each server. I.e.


And mounted the gluster volumes as (for example) "/glusterfs/homes -> 
/zpool/homes". I.e.
gluster volume create homes replica 2 transport tcp 
server1:/zpool-homes/homes server2:/zpool-homes/homes force
(on server1) server1:/homes 44729413504 16032705152 28696708352  36% 
/glusterfs/homes


The problem is, the performance has deteriorated terribly.
We needed to copy all of our data from the old server to the new
glusterfs volumes (approx. 60 TB).
We decided to do this with multiple rsync commands (around 400 simultaneous
rsyncs).
The copy went well for the first 4 days, with an average across all 
rsyncs of 150-200 MBytes per second.

Then, suddenly, on the fourth day, it dropped to about 50 MBytes/s.
Then, by the end of the day, down to ~5MBytes/s (five).
I've stopped the rsyncs, and I can still copy an individual file across
to the glusterfs shared directory at 100 MB/s.

But actions such as "ls -la" or "find" take forever!

Are there obvious flaws in my setup to correct?
How can I better troubleshoot this?
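
In case it helps anyone point me in the right direction, this is the kind of
gluster-side data I could collect next (the volume name 'homes' is from the
example above; these are standard status/profile commands, not something I
have run yet):

```
# Overall brick health and basic counters.
gluster volume status homes detail

# Per-fop latency profile: start collection, reproduce the slow
# "ls -la"/"find", then dump the counters.
gluster volume profile homes start
gluster volume profile homes info

# Most frequently accessed files (reads and writes).
gluster volume top homes read
gluster volume top homes write
```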

Thanks!
--

Mike





Re: [Gluster-users] Performance is falling rapidly when updating from v5.5 to v7.0

2019-11-06 Thread RAFI KC


On 11/6/19 3:42 PM, David Spisla wrote:

Hello Rafi,

I tried to set the xattr via

setfattr -n trusted.io-stats-dump -v '/tmp/iostat.log' 
/gluster/repositories/repo1/


but it had no effect. There is no such xattr visible via getfattr and no
logfile. The command setxattr is not available. What am I doing wrong?



I will check it out and get back to you.


By the way, did you mean to increase the inode size of the XFS layer from 512
bytes to 1024 KB(!)? I think it should be 1024 bytes, because 2048 bytes
is the maximum.

It was a typo, I meant 1024 bytes, sorry for that.


Regards
David

On Wed, Nov 6, 2019 at 4:10 AM RAFI KC wrote:


I will take a look at the profile info shared. Since there is a
huge difference in the performance numbers between FUSE and Samba,
it would be great if we could get the profile info of FUSE (on v7).
This will help to compare the number of calls for each fop. There
should be some fops that Samba repeats, and we can find them by
comparing with FUSE.

Also, if possible, can you please get client profile info from the FUSE
mount using the command `setxattr -n trusted.io-stats-dump -v
<output file, e.g. /tmp/iostat.log> <mount point>`.


Regards

Rafi KC


On 11/5/19 11:05 PM, David Spisla wrote:

I did the test with Gluster 7.0 and ctime disabled, but it had no effect
(all values in MiB/s):
64 KiB   1 MiB   10 MiB
0,16     2,60    54,74

Attached is now the complete profile file, including the results from the
last test. I will not repeat it with a higher inode size because I don't
think this will have an effect. There must be another cause for the low
performance.



Yes. No need to try with a higher inode size.




Regards
David Spisla

On Tue, Nov 5, 2019 at 4:25 PM David Spisla <spisl...@gmail.com> wrote:



On Tue, Nov 5, 2019 at 12:06 PM RAFI KC <rkavu...@redhat.com> wrote:


On 11/4/19 8:46 PM, David Spisla wrote:

Dear Gluster Community,

I also have an issue concerning performance. Over the last few days I
updated our test cluster from GlusterFS v5.5 to v7.0. The setup in general:

2 HP DL380 servers with 10 Gbit NICs, 1 Distributed-Replicate volume with
2 replica pairs. The client is Samba/SMB (access via vfs_glusterfs). I did
several tests to make sure that Samba itself does not cause the drop.
The setup is completely the same except for the Gluster version.
Here are my results (file sizes 64 KiB / 1 MiB / 10 MiB, values in MiB/s):
64 KiB   1 MiB    10 MiB
3,49     47,41    300,50   (GlusterFS v5.5)
0,16     2,61     76,63    (GlusterFS v7.0)



Can you please share the profile information [1] for both versions?
Also it would be really helpful if you could mention the I/O patterns
used for these tests.

[1] :

https://docs.gluster.org/en/latest/Administrator%20Guide/Monitoring%20Workload/

Hello Rafi,
thank you for your help.

* First, more information about the I/O patterns: as a client we use a
DL360 Windows Server 2017 machine with a 10 Gbit NIC connected to the
storage machines. The share is mounted via SMB and the tests write with
fio. We use these job files (see attachment). Each job file is executed
separately, and there is a sleep of about 60 s between test runs to let
the system calm down before starting a new test.

* Attached below you find the profile output from the tests
with v5.5 (ctime enabled), v7.0 (ctime enabled).

* Besides the Samba tests, I also ran some fio tests directly on the FUSE
mounts (locally on one of the storage nodes). The results show only a
small decrease in performance between v5.5 and v7.0 (all values in MiB/s):
64 KiB   1 MiB    10 MiB
50,09    679,96   1023,02   (v5.5)
47,00    656,46   977,60    (v7.0)

It seems that the combination of Samba + Gluster 7.0 has a lot of problems,
doesn't it?




We use these volume options (GlusterFS 7.0):

Volume Name: archive1
Type: Distributed-Replicate
Volume ID: 44c17844-0bd4-4ca2-98d8-a1474add790c
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: fs-dl380-c1-n1:/gluster/brick1/glusterbrick
Brick2: fs-dl380-c1-n2:/gluster/brick1/glusterbrick
Brick3: fs-dl380-c1-n1:/gluster/brick2/glusterbrick
Brick4: fs-dl380-c1-n2:/gluster/brick2/glusterbrick
Options Reconfigured:
performance.client-io-threads: off
 

Re: [Gluster-users] Performance is falling rapidly when updating from v5.5 to v7.0

2019-11-06 Thread David Spisla
Hello Rafi,

I tried to set the xattr via

setfattr -n trusted.io-stats-dump -v '/tmp/iostat.log'
/gluster/repositories/repo1/

but it had no effect. There is no such xattr visible via getfattr and no logfile.
The command setxattr is not available. What am I doing wrong?
By the way, did you mean to increase the inode size of the XFS layer from 512
bytes to 1024 KB(!)? I think it should be 1024 bytes, because 2048 bytes is the
maximum.

Regards
David

On Wed, Nov 6, 2019 at 4:10 AM RAFI KC wrote:

> I will take a look at the profile info shared. Since there is a huge
> difference in the performance numbers between FUSE and Samba, it would be
> great if we could get the profile info of FUSE (on v7). This will help to
> compare the number of calls for each fop. There should be some fops that
> Samba repeats, and we can find them by comparing with FUSE.
>
> Also, if possible, can you please get client profile info from the FUSE mount
> using the command `setxattr -n trusted.io-stats-dump -v <output file, e.g.
> /tmp/iostat.log> <mount point>`.
>
>
> Regards
>
> Rafi KC
>
> On 11/5/19 11:05 PM, David Spisla wrote:
>
> I did the test with Gluster 7.0 and ctime disabled, but it had no effect
> (all values in MiB/s):
> 64 KiB   1 MiB   10 MiB
> 0,16     2,60    54,74
>
> Attached is now the complete profile file, including the results from the
> last test. I will not repeat it with a higher inode size because I don't
> think this will have an effect. There must be another cause for the low
> performance.
>
>
> Yes. No need to try with a higher inode size.
>
>
>
> Regards
> David Spisla
>
> On Tue, Nov 5, 2019 at 4:25 PM David Spisla wrote:
>
>>
>>
>> On Tue, Nov 5, 2019 at 12:06 PM RAFI KC wrote:
>>
>>>
>>> On 11/4/19 8:46 PM, David Spisla wrote:
>>>
>>> Dear Gluster Community,
>>>
>>> I also have an issue concerning performance. Over the last few days I updated
>>> our test cluster from GlusterFS v5.5 to v7.0. The setup in general:
>>>
>>> 2 HP DL380 servers with 10 Gbit NICs, 1 Distributed-Replicate volume with
>>> 2 replica pairs. The client is Samba/SMB (access via vfs_glusterfs). I did
>>> several tests to make sure that Samba itself does not cause the drop.
>>> The setup is completely the same except for the Gluster version.
>>> Here are my results (file sizes 64 KiB / 1 MiB / 10 MiB, values in MiB/s):
>>> 64 KiB   1 MiB    10 MiB
>>> 3,49     47,41    300,50   (GlusterFS v5.5)
>>> 0,16     2,61     76,63    (GlusterFS v7.0)
>>>
>>>
>>> Can you please share the profile information [1] for both versions?
>>> Also it would be really helpful if you could mention the I/O patterns
>>> used for these tests.
>>>
>>> [1] :
>>> https://docs.gluster.org/en/latest/Administrator%20Guide/Monitoring%20Workload/
>>>
>> Hello Rafi,
>> thank you for your help.
>>
>> * First, more information about the I/O patterns: as a client we use a
>> DL360 Windows Server 2017 machine with a 10 Gbit NIC connected to the storage
>> machines. The share is mounted via SMB and the tests write with fio.
>> We use these job files (see attachment). Each job file is executed
>> separately, and there is a sleep of about 60 s between test runs to let the
>> system calm down before starting a new test.
>>
>> * Attached below you find the profile output from the tests with v5.5
>> (ctime enabled), v7.0 (ctime enabled).
>>
>> * Besides the Samba tests, I also ran some fio tests directly on the FUSE
>> mounts (locally on one of the storage nodes). The results show only a small
>> decrease in performance between v5.5 and v7.0 (all values in MiB/s):
>> 64 KiB   1 MiB    10 MiB
>> 50,09    679,96   1023,02   (v5.5)
>> 47,00    656,46   977,60    (v7.0)
>>
>> It seems that the combination of Samba + Gluster 7.0 has a lot of problems,
>> doesn't it?
>>
>>
>>>
>>> We use these volume options (GlusterFS 7.0):
>>>
>>> Volume Name: archive1
>>> Type: Distributed-Replicate
>>> Volume ID: 44c17844-0bd4-4ca2-98d8-a1474add790c
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 2 x 2 = 4
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: fs-dl380-c1-n1:/gluster/brick1/glusterbrick
>>> Brick2: fs-dl380-c1-n2:/gluster/brick1/glusterbrick
>>> Brick3: fs-dl380-c1-n1:/gluster/brick2/glusterbrick
>>> Brick4: fs-dl380-c1-n2:/gluster/brick2/glusterbrick
>>> Options Reconfigured:
>>> performance.client-io-threads: off
>>> nfs.disable: on
>>> storage.fips-mode-rchecksum: on
>>> transport.address-family: inet
>>> user.smb: disable
>>> features.read-only: off
>>> features.worm: off
>>> features.worm-file-level: on
>>> features.retention-mode: enterprise
>>> features.default-retention-period: 120
>>> network.ping-timeout: 10
>>> features.cache-invalidation: on
>>> features.cache-invalidation-timeout: 600
>>> performance.nl-cache: on
>>> performance.nl-cache-timeout: 600
>>> client.event-threads: 32
>>> server.event-threads: 32
>>> cluster.lookup-optimize: on
>>> performance.stat-prefetch: on
>>> performance.cache-invalidation: on

[Gluster-users] how to downgrade GlusterFS from version 7 to 3.13?

2019-11-06 Thread Riccardo Murri
Hello,

Is there any way to downgrade a GlusterFS cluster?  Given the
performance issues that I have seen with GlusterFS 6 and 7 (reported
elsewhere on this mailing-list), I am now considering downgrading back
to GlusterFS 3.13.

I have set up a test cluster, copied some files on it, and tried to
downgrade the naive way (uninstall GlusterFS 7, install GlusterFS
3.13, do not touch bricks or other config).  This didn't work:
`glusterd` v3.13 complains that the op-version left by version 7 cannot be
restored.

So I tried again, this time trying to lower op-version before
downgrading the system packages, but `gluster set all op-version`
won't let me:

```
$ sudo gluster volume set all cluster.op-version 31000
volume set: failed: Required op-version (31000) should not be equal or
lower than current cluster op-version (7).
```

What can I try next?

Thanks,
Riccardo




Re: [Gluster-users] Performance drop when upgrading from 3.8 to 6.5

2019-11-06 Thread Riccardo Murri
Dear Rafi, all,

please find attached two profile files; both are profiling the same command:
```
time rsync -a $SRC root@172.23.187.207:/glusterfs
```

In both cases, the target is a Ubuntu 16.04 VM mounting a pure
distributed GlusterFS 7 filesystem on `/glusterfs`.  The GlusterFS 7
cluster is comprised of 3 identical Ubuntu 16.04 VMs, each with one
brick of 200GB.  I have turned ctime off, as suggested in a previous
email.

In the case of file `profile1.txt` (larger file), $SRC is a directory
tree containing ~94'500 files, collectively weighing ~376GB.  The
transfer takes between 250 and 300 minutes (I've made several attempts
now), for an average bandwidth of ~21MB/s.

In the case of `profile3.txt`, $SRC is a single tar file weighing
72GB.  It takes between 16 and 30 minutes to write it into the
GlusterFS 7 filesystem; average bandwidth is ~60MB/s.

To me, this seems to indicate that, while write performance on data is
good, metadata ops on GlusterFS 7 are rather slow, and much slower
than in the 3.x series.  Is there any other tweak that I may try to
apply?
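
For concreteness, the kind of tweaks I have in mind are metadata/caching
volume options such as the ones from a volume configuration posted on this
list recently; they are untested candidates on my setup, not a recommendation
(volume name is a placeholder):

```
# Candidate metadata-caching options (taken from a configuration posted
# elsewhere on this list; effect on rsync metadata load untested).
gluster volume set myvol features.cache-invalidation on
gluster volume set myvol features.cache-invalidation-timeout 600
gluster volume set myvol performance.cache-invalidation on
gluster volume set myvol performance.stat-prefetch on
gluster volume set myvol performance.nl-cache on
gluster volume set myvol performance.nl-cache-timeout 600
gluster volume set myvol cluster.lookup-optimize on
```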

Thanks,
Riccardo
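
(The per-brick report below is the kind of output produced by Gluster's volume
profiling; roughly, it is collected like this, with the volume name as a
placeholder:)

```
# Enable profiling, run the rsync workload, dump the cumulative and
# interval per-fop statistics, then turn profiling off again.
gluster volume profile myvol start
gluster volume profile myvol info
gluster volume profile myvol stop
```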
Brick: server001:/srv/glusterfs
---
Cumulative Stats:
   Block Size:  2b+   4b+   8b+ 
 No. of Reads:0 0 0 
No. of Writes:4 2 9 
 
   Block Size: 16b+  32b+  64b+ 
 No. of Reads:0 0 0 
No. of Writes:   32    28    37 
 
   Block Size:128b+ 256b+ 512b+ 
 No. of Reads:0 0 0 
No. of Writes:  108   297   575 
 
   Block Size:   1024b+2048b+4096b+ 
 No. of Reads:0 0 0 
No. of Writes:  984  1994  4334 
 
   Block Size:   8192b+   16384b+   32768b+ 
 No. of Reads:0 0 0 
No. of Writes: 8304 16146 33163 
 
   Block Size:  65536b+  131072b+  262144b+ 
 No. of Reads:0 0 0 
No. of Writes:64431   4173687   207 
 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------         ---
      0.00       0.00 us       0.00 us       0.00 us         114999      FORGET
      0.00       0.00 us       0.00 us       0.00 us         130690     RELEASE
      0.00       0.00 us       0.00 us       0.00 us           1886  RELEASEDIR
      0.00     449.84 us     222.38 us    2924.41 us             31       MKNOD
      0.01     177.17 us     104.16 us     436.34 us            281        LINK
      0.01     128.41 us      44.77 us    2512.16 us            414    SETXATTR
      0.01      82.37 us      29.26 us    4792.82 us           1319       FSTAT
      0.02     414.20 us     181.30 us   10818.54 us            414       MKDIR
      0.02     741.77 us     103.38 us  158364.81 us            281      UNLINK
      0.29      77.67 us      17.57 us    4887.93 us          34118      STATFS
      0.37      50.29 us      10.83 us    7341.33 us          67134       FLUSH
      0.73      46.83 us       9.41 us    7990.78 us         140764     ENTRYLK
      0.85     219.73 us      75.32 us  125748.37 us          35106      RENAME
      1.21      51.71 us      10.32 us   10816.56 us         211798     INODELK
      1.33     377.95 us      38.65 us  150160.77 us          31778   FTRUNCATE
      1.64      97.82 us      23.85 us    6386.67 us         151947        STAT
      3.25     123.40 us      35.48 us   43775.28 us         238449     SETATTR
      4.31     581.91 us     123.91 us  261522.99 us          67134      CREATE
      7.30     137.63 us      23.45 us  153825.73 us         480472      LOOKUP
     78.65     321.99 us      32.95 us  169393.74 us        2211891       WRITE
 
Duration: 156535 seconds
   Data Read: 0 bytes
Data Written: 555615327151 bytes
 
Interval 2 Stats:
   Block Size:  2b+   8b+  16b+ 
 No. of Reads:0 0 0 
No. of Writes:    3     6    24 
 
   Block Size: 32b+  64b+ 128b+ 
 No. of Reads:0 0 0 
No. of Writes:   2127