Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Michael Rasmussen
On Thu, 20 Jul 2017 00:49:29 +0300
Mikhail  wrote:

> 
> I heard that the original OmniOS, by OmniTI, is being discontinued and
> that OmniOSCE is taking over from it.
> But I could not find installation ISO images of OmniOSCE. Which
> procedure should I follow to get OmniOSCE installed? I guess it is to
> get the latest available OmniOS installation ISO from
> https://omnios.omniti.com/wiki.php/Installation and then follow the
> procedure described on the http://www.omniosce.org/ page to convert it
> into OmniOSCE?
> 
This is correct. Make sure to upgrade to the latest kernel release,
r151022i, if you have HBAs based on LSI SAS >= 2300 (using the mr_sas
driver).
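
For reference, a minimal sketch of checking the running release and
pulling in such an update on an OmniOS/illumos box using IPS - treat it
as an illustration only; the authoritative steps and publisher setup are
on omniosce.org:

uname -v          # prints the running release string, e.g. omnios-r151022-...
pkg update -v     # apply available updates into a new boot environment
init 6            # reboot into the new boot environment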

-- 
Hilsen/Regards
Michael Rasmussen



Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Mikhail
On 07/20/2017 12:33 AM, Michael Rasmussen wrote:
>> The time to give OmniOS a try has come; I also noticed that X550 NICs
>> have been supported by OmniOS since late autumn 2016. Luckily, Proxmox
>> supports online storage migration (Move disk) without bringing VMs down -
>> this simplifies storage migration a lot in a live environment!
>>
> Remember to get it here: http://www.omniosce.org/

I heard that the original OmniOS, by OmniTI, is being discontinued and
that OmniOSCE is taking over from it.
But I could not find installation ISO images of OmniOSCE. Which
procedure should I follow to get OmniOSCE installed? I guess it is to
get the latest available OmniOS installation ISO from
https://omnios.omniti.com/wiki.php/Installation and then follow the
procedure described on the http://www.omniosce.org/ page to convert it
into OmniOSCE?
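
From what I can tell, the conversion boils down to repointing the IPS
publisher at the community repository and then updating - a rough sketch,
where the repository URL is an assumption based on the omniosce.org
instructions of that time, so verify it against the page itself:

pkg set-publisher -G '*' -g https://pkg.omniosce.org/r151022/core omnios   # replace the existing origin with the community one (URL assumed)
pkg update -v                                                              # update from the new publisher into a fresh boot environment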

thanks,
Mikhail.


Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Michael Rasmussen
On Thu, 20 Jul 2017 00:07:44 +0300
Mikhail  wrote:

> 
> The time to give OmniOS a try has come; I also noticed that X550 NICs
> have been supported by OmniOS since late autumn 2016. Luckily, Proxmox
> supports online storage migration (Move disk) without bringing VMs down -
> this simplifies storage migration a lot in a live environment!
> 
Remember to get it here: http://www.omniosce.org/

-- 
Hilsen/Regards
Michael Rasmussen



Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Mikhail
On 07/19/2017 06:38 PM, Michael Rasmussen wrote:
>> I guess my only way to fix this is to migrate everything off that server
>> and reinstall it from scratch, throwing away things like MDADM and LVM
>> this time and replacing them with ZFS for storage purposes.
>>
> You mentioned before that you hoped to use OmniOS. The latest stable
> release now supports your NICs.

Yes, that was more than a year ago when I deployed this storage server -
you have a good memory, Michael! =)

The time to give OmniOS a try has come; I also noticed that X550 NICs
have been supported by OmniOS since late autumn 2016. Luckily, Proxmox
supports online storage migration (Move disk) without bringing VMs down -
this simplifies storage migration a lot in a live environment!
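
For the record, the GUI's Move disk corresponds to something like the
following on the CLI (the VM ID, disk name and target storage here are
placeholders, and the exact syntax may differ between PVE versions):

qm move_disk 101 virtio0 nas-storage --delete   # move disk virtio0 of VM 101 to storage 'nas-storage' and drop the old copy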

cheers,
Mikhail.


Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Michael Rasmussen
On Wed, 19 Jul 2017 18:30:01 +0300
Mikhail  wrote:

> 
> Basically, it looks like the MDADM array, the LVM on top (and possibly the
> FS inside the VMs) need to be created with manual alignment calculations,
> and these calculations need to be specified on the command line at
> creation time. It is a pity to find this out now, when the server is in
> active use - many manuals claim that MDADM, LVM, etc. are smart enough
> these days to do these calculations automatically at creation time, but
> this does not appear to be true, and that's where problems come from
> later on.
> 
Your problem is that your disks are native 4K but advertise 512-byte
logical sectors as well (512e). This means LVM and mdadm got confused ;-)
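
A quick way to see what a drive actually reports to the kernel, for anyone
who wants to double-check this on their own disks:

lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda    # physical vs. logical sector size as seen by the kernel
blockdev --getpbsz --getss /dev/sda       # same data from the block layer: physical block size, then logical sector size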

> I guess my only way to fix this is to migrate everything off that server
> and reinstall it from scratch, throwing away things like MDADM and LVM
> this time and replacing them with ZFS for storage purposes.
> 
You mentioned before that you hoped to use OmniOS. The latest stable
release now supports your NICs.

-- 
Hilsen/Regards
Michael Rasmussen



Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Mikhail
On 07/19/2017 05:15 PM, Michael Rasmussen wrote:
> Try reading this:
> http://dennisfleurbaaij.blogspot.dk/2013/01/setting-up-linux-mdadm-raid-array-with.html

Hello,

Thanks, I also checked that post earlier today.

Basically, it looks like the MDADM array, the LVM on top (and possibly the
FS inside the VMs) need to be created with manual alignment calculations,
and these calculations need to be specified on the command line at
creation time. It is a pity to find this out now, when the server is in
active use - many manuals claim that MDADM, LVM, etc. are smart enough
these days to do these calculations automatically at creation time, but
this does not appear to be true, and that's where problems come from
later on.
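
For illustration, this is roughly what specifying the alignment explicitly
at creation time looks like; the device names, chunk size and alignment
values below are examples matching this array's layout, not a tested
recipe:

parted -s -a optimal /dev/sda mkpart raidpart 1MiB 100%                          # start the data partition on a 1 MiB boundary
mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=512 /dev/sd[abcd]2   # chunk in KiB, i.e. 512K
pvcreate --dataalignment 1m /dev/md0                                             # align the LVM data area to the full stripe (2 x 512K)
vgcreate vg0 /dev/md0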

I guess my only way to fix this is to migrate everything off that server
and reinstall it from scratch, throwing away things like MDADM and LVM
this time and replacing them with ZFS for storage purposes.

Thanks all.


Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Michael Rasmussen
On Wed, 19 Jul 2017 15:10:22 +0300
Mikhail  wrote:

> Here's what I can see now:
> 
> 1) fdisk output for one of disks in array:
> # fdisk -l /dev/sda
> 
> Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 4096 bytes
> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> Disklabel type: gpt
> Disk identifier: 482FED1A-9CD0-4AEF-ACFC-D981C9916FE2
> 
> Device       Start        End    Sectors  Size Type
> /dev/sda1     2048    1953791    1951744  953M Linux filesystem
> /dev/sda2  1953792 7814035455 7812081664  3.7T Linux RAID
> 
> 2) MDADM array details:
> 
> # mdadm --detail /dev/md0
> /dev/md0:
> Version : 1.2
>   Creation Time : Fri Mar 18 18:27:06 2016
>  Raid Level : raid10
>  Array Size : 7811819520 (7449.93 GiB 7999.30 GB)
>   Used Dev Size : 3905909760 (3724.97 GiB 3999.65 GB)
>    Raid Devices : 4
>   Total Devices : 4
> Persistence : Superblock is persistent
> 
>   Intent Bitmap : Internal
> 
> Update Time : Wed Jul 19 14:58:57 2017
>   State : active, checking
>  Active Devices : 4
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 0
> 
>  Layout : near=2
>  Chunk Size : 512K
> 
Try reading this:
http://dennisfleurbaaij.blogspot.dk/2013/01/setting-up-linux-mdadm-raid-array-with.html
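
Whether the existing partitions start on boundaries the disk considers
optimal can also be checked directly with parted; the device and partition
number below are the ones from this thread:

parted /dev/sda align-check optimal 2    # reports whether partition 2 is optimally aligned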

-- 
Hilsen/Regards
Michael Rasmussen



Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Mikhail
On 07/19/2017 02:43 PM, Yannis Milios wrote:
> Have you checked whether these drives are properly aligned? Sometimes
> that can cause low r/w performance.
> Is there any particular reason you use mdadm instead of a h/w RAID controller?

Hello Yannis,

There's no h/w RAID controller because we originally wanted to adopt ZFS
on that storage server. I wanted to use OmniOS as the base OS, but at the
time (about 15 months ago) OmniOS did not support the Intel X550 10GbE
NICs we have in that server (no driver in the kernel), so I had to fall
back to Linux. As you know, ZFS works best when it has direct access to
the drives, without a h/w RAID layer in between.

The MDADM RAID10 array was created without specifying any special
alignment options. What's the best way to check whether the drives are
properly aligned on the existing array?

Here's what I can see now:

1) fdisk output for one of disks in array:
# fdisk -l /dev/sda

Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 482FED1A-9CD0-4AEF-ACFC-D981C9916FE2

Device       Start        End    Sectors  Size Type
/dev/sda1     2048    1953791    1951744  953M Linux filesystem
/dev/sda2  1953792 7814035455 7812081664  3.7T Linux RAID

2) MDADM array details:

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Mar 18 18:27:06 2016
     Raid Level : raid10
     Array Size : 7811819520 (7449.93 GiB 7999.30 GB)
  Used Dev Size : 3905909760 (3724.97 GiB 3999.65 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Jul 19 14:58:57 2017
          State : active, checking
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

   Check Status : 43% complete

           Name : storage:0  (local to host storage)
           UUID : 7346ef36:0a6b33f6:37eb29cd:58d04b7c
         Events : 1010431

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync set-A   /dev/sda2
       1       8       18        1      active sync set-B   /dev/sdb2
       2       8       34        2      active sync set-A   /dev/sdc2
       3       8       50        3      active sync set-B   /dev/sdd2

3) LVM information for the PV that resides on the md0 array:

# pvdisplay
  --- Physical volume ---
  PV Name   /dev/md0
  VG Name   vg0
  PV Size   7.28 TiB / not usable 2.00 MiB
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  1907182
  Free PE   182430
  Allocated PE  1724752
  PV UUID   CefFFF-Q6yz-eX2p-Ziev-jdFW-3G6h-vHaesD
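
The LVM side of the alignment can be inspected as well - the interesting
value is where the first physical extent starts relative to the underlying
device:

pvs -o +pe_start /dev/md0    # the '1st PE' column shows the offset of the first physical extent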

The mdadm array is running a check right now, but the speed is limited to
its defaults (these can be raised at runtime via sysctl; see the sketch
after the mdstat output below):

# cat /proc/sys/dev/raid/speed_limit_max
200000
# cat /proc/sys/dev/raid/speed_limit_min
1000

# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      7811819520 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      [========>............]  check = 43.6% (3412377088/7811819520)
      finish=17599.2min speed=4165K/sec
      bitmap: 16/59 pages [64KB], 65536KB chunk

unused devices: <none>
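
If the check needs to finish faster, those limits can be raised at runtime
(the values below are arbitrary examples and are not persistent across
reboots):

sysctl -w dev.raid.speed_limit_min=50000    # per-device floor in KB/s, default 1000
sysctl -w dev.raid.speed_limit_max=500000   # ceiling in KB/s, default 200000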

Thanks for your help!


Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Yannis Milios
>> (storage server has 4x4TB SAS
>> drives in RAID10 configured with MDADM)

Have you checked whether these drives are properly aligned? Sometimes that
can cause low r/w performance.
Is there any particular reason you use mdadm instead of a h/w RAID controller?

Yannis


Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Mikhail
On 07/19/2017 12:52 PM, Emmanuel Kasper wrote:
> do not use dd to benchmark storage, use fio
> 
> with a command line like
> 
> fio  --size=9G --bs=64k --rw=write --direct=1 --runtime=60
> --name=64kwrite --group_reporting | grep bw
> 
> inside your mount point
> 
> or use the --filename option to point to a block device
> 
> from this you will get reliable sequential write info


Emmanuel, thanks for the hint!
I just tried benchmarking with fio using your command line. Results below -
it looks very slow (avg=24888.52 KB/s):

# fio  --size=9G --bs=64k --rw=write --direct=1 --runtime=60
--name=64kwrite --group_reporting
64kwrite: (g=0): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=sync,
iodepth=1
fio-2.1.11
Starting 1 process
64kwrite: Laying out IO file(s) (1 file(s) / 9216MB)
Jobs: 1 (f=1): [W(1)] [15.4% done] [0KB/1022KB/0KB /s] [0/15/0 iops]
[eta 05m:34s]
64kwrite: (groupid=0, jobs=1): err= 0: pid=7841: Wed Jul 19 12:57:15 2017
  write: io=1422.6MB, bw=24231KB/s, iops=378, runt= 60117msec
clat (usec): min=87, max=293416, avg=2637.70, stdev=14667.15
 lat (usec): min=87, max=293418, avg=2639.85, stdev=14667.17
clat percentiles (usec):
 |  1.00th=[   87],  5.00th=[   88], 10.00th=[   88], 20.00th=[   89],
 | 30.00th=[  101], 40.00th=[  135], 50.00th=[  195], 60.00th=[  235],
 | 70.00th=[  334], 80.00th=[  414], 90.00th=[  700], 95.00th=[ 8384],
 | 99.00th=[81408], 99.50th=[117248], 99.90th=[193536],
99.95th=[211968],
 | 99.99th=[250880]
bw (KB  /s): min=  555, max=172928, per=100.00%, avg=24888.52,
stdev=34949.10
lat (usec) : 100=29.27%, 250=32.97%, 500=25.85%, 750=2.35%, 1000=1.41%
lat (msec) : 2=0.49%, 4=0.37%, 10=3.04%, 20=1.57%, 50=1.22%
lat (msec) : 100=0.78%, 250=0.67%, 500=0.01%
  cpu  : usr=0.18%, sys=1.34%, ctx=26211, majf=0, minf=8
  IO depths: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%,
>=64=0.0%
 submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>=64=0.0%
 complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>=64=0.0%
 issued: total=r=0/w=22761/d=0, short=r=0/w=0/d=0
 latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=1422.6MB, aggrb=24231KB/s, minb=24231KB/s, maxb=24231KB/s,
mint=60117msec, maxt=60117msec

Disk stats (read/write):
dm-7: ios=0/22961, merge=0/0, ticks=0/77576, in_queue=77692,
util=98.84%, aggrios=2437/28407, aggrmerge=0/0, aggrticks=0/0,
aggrin_queue=0, aggrutil=0.00%
md0: ios=2437/28407, merge=0/0, ticks=0/0, in_queue=0, util=0.00%,
aggrios=1035/14632, aggrmerge=53/259, aggrticks=4785/68958,
aggrin_queue=73796, aggrutil=67.74%
  sda: ios=1782/14834, merge=50/265, ticks=8488/77372, in_queue=85876,
util=67.74%
  sdb: ios=1153/14837, merge=50/264, ticks=4460/71308, in_queue=75792,
util=63.19%
  sdc: ios=737/14428, merge=57/254, ticks=3924/65828, in_queue=69896,
util=56.76%
  sdd: ios=471/14431, merge=55/255, ticks=2268/61324, in_queue=63620,
util=54.84%
#

I have also changed the CPU frequency to the maximum 3.40 GHz, but it
looks like this was not the issue.

Mikhail.


Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Emmanuel Kasper
On 07/19/2017 11:32 AM, Mikhail wrote:
> Hello,
> 
> Thanks for your responses.
> The issue appears to be somewhere beyond iSCSI.
> I just tried to do some "dd" tests locally on the storage server and I'm
> getting very low write speeds:

do not use dd to benchmark storage, use fio

with a command line like

fio --size=9G --bs=64k --rw=write --direct=1 --runtime=60 \
    --name=64kwrite --group_reporting | grep bw

inside your mount point

or use the --filename option to point to a block device

from this you will get reliable sequential write info
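
For completeness, a variant run directly against a block device with random
4k writes and a deeper queue - the LV path here is only an example, and note
that writing to a block device destroys whatever is on it:

fio --filename=/dev/vg0/benchtest --ioengine=libaio --iodepth=32 --direct=1 \
    --rw=randwrite --bs=4k --runtime=60 --name=4krandwrite --group_reporting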









Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Mikhail
Hello,

Thanks for your responses.
The issue appears to be somewhere beyond iSCSI.
I just tried to do some "dd" tests locally on the storage server and I'm
getting very low write speeds:

root@storage:/root# dd if=/dev/vg0/isoimages of=isoimages.vg0
62914560+0 records in
62914560+0 records out
32212254720 bytes (32 GB) copied, 945.573 s, 34.1 MB/s
root@storage:/root#

(/dev/vg0/isoimages is a local LV on the storage server)

So I will have to find out where the problem or bottleneck really is.
The load average on the storage server is 4.0-5.0 for the following CPU,
according to lscpu output:

Model name:    Intel(R) Xeon(R) CPU E3-1230 v5 @ 3.40GHz
Stepping:      3
CPU MHz:       800.000
CPU max MHz:   3401.0000
CPU min MHz:   800.0000

Could this be due to the low (800 MHz) CPU frequency?
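
A reading of 800 MHz usually just means a powersave/ondemand governor has
clocked the idle cores down; it can be checked and pinned like this,
assuming the cpupower tool is installed:

cpupower frequency-info                 # show the current governor and frequency range
cpupower frequency-set -g performance   # pin the governor to performance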

Thanks!

On 07/19/2017 10:03 AM, Eneko Lacunza wrote:
> On 19/07/17 at 08:41, Dietmar Maurer wrote:
>>> So I cannot figure out why LVM-over-iSCSI is so slow.
>> I guess your benchmark is simply wrong. You are testing the
>> local cache, because you do not sync the data back to the storage.
> Really, 2.7GB/s for 4x4TB disks in RAID10 seems totally unreasonable (I
> guess they're not SSD drives...)
> 
> I think that in the best conditions that could give about 200-250MB/s
> max, totally sequential writes, etc.
> 
> Don't know why iSCSI is so slow, have you checked CPU usage on both sides?
> 
> Anyhow your test copy is too small, use a file that is at least double
> the available RAM on the storage server, or otherwise force sync.
> 
> Cheers
> Eneko
> 



Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Eneko Lacunza

On 19/07/17 at 08:41, Dietmar Maurer wrote:
>> So I cannot figure out why LVM-over-iSCSI is so slow.
> I guess your benchmark is simply wrong. You are testing the
> local cache, because you do not sync the data back to the storage.

Really, 2.7GB/s for 4x4TB disks in RAID10 seems totally unreasonable (I
guess they're not SSD drives...)

I think that in the best conditions that could give about 200-250MB/s
max, totally sequential writes, etc.

Don't know why iSCSI is so slow, have you checked CPU usage on both sides?

Anyhow your test copy is too small, use a file that is at least double
the available RAM on the storage server, or otherwise force sync.

Cheers
Eneko

--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943493611
  943324914
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es



Re: [PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

2017-07-19 Thread Dietmar Maurer
> So I cannot figure out why LVM-over-iSCSI is so slow. 

I guess your benchmark is simply wrong. You are testing the
local cache, because you do not sync the data back to the storage.
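
In other words, a dd run only says something about the storage once the
page cache is taken out of the equation, for example with one of these
(path and size are placeholders):

dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=16384 conv=fdatasync   # flush to disk before reporting the rate
dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=16384 oflag=direct     # bypass the page cache entirely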
