Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-09 Thread squadra
Another point is that correctly configured multipathing is way more solid
when it comes to a single-path outage. On the software side, I have seen
countless NFS servers that were unresponsive because of lockd issues, for
example, and only a reboot fixed this since lockd is kernel-based.

Another con for me is that an NFS HA setup is rather complicated, and it is
a 50/50 chance that an NFS failover works without any clients dying.

Don't get me wrong, NFS is great for small setups: easy to set up, easy to
scale, and I use it widely for content sharing and home directories. But I
am cured when it comes to VM images on NFS.
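For comparison, "correctly configured multipathing" on the iSCSI side usually means a dm-multipath policy along these lines on a Linux initiator. This is a minimal sketch only; the vendor/product strings and the timeout values are assumptions, not values from this thread.

```
# /etc/multipath.conf -- illustrative sketch, not a tested config.
defaults {
    user_friendly_names yes
    polling_interval    5
}
devices {
    device {
        # Example strings for a FreeBSD CTL-backed LUN; check your own
        # LUNs with `multipath -ll` and adjust.
        vendor               "FREEBSD"
        product              "CTLDISK"
        path_grouping_policy multibus   # spread I/O across all paths
        path_checker         tur
        no_path_retry        12         # queue briefly, then fail paths
    }
}
```

With a policy like this, losing one path just removes it from the group while I/O continues on the remaining ones, which is the robustness being described above.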


On Thu, Jan 9, 2014 at 8:48 AM, Karli Sjöberg karli.sjob...@slu.se wrote:

 On Thu, 2014-01-09 at 08:35 +0100, squadra wrote:
  Right, try multipathing with nfs :)

 Yes, that's what I meant, maybe could have been more clear about that,
 sorry. Multipathing (and the load-balancing it brings) is what really
 separates iSCSI from NFS.

 What I'd be interested in knowing is at what breaking-point, not having
 multipathing becomes an issue. I mean, we might not have such a big
 VM-park, about 300-400 VMs. But so far running without multipathing
 using good ole' NFS and no performance issues this far. Would be good to
 know beforehand if we're headed for a wall of some sorts, and about
 when we'll hit it...

 /K

 
  On Jan 9, 2014 8:30 AM, Karli Sjöberg karli.sjob...@slu.se wrote:
  On Thu, 2014-01-09 at 07:10 +, Markus Stockhausen wrote:
 From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf
  of squadra [squa...@gmail.com]
 Sent: Wednesday, 8 January 2014 17:15
 To: users@ovirt.org
 Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
   
 better go for iSCSI or something else... I would avoid NFS for VM hosting
 FreeBSD 10 delivers a kernel iSCSI target now, which works great so far.
 Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution
   
Cheers,
   
Juergen
  
  That is usually a matter of taste and the available environment.
  The minimal differences in performance usually only show up
  if you drive the storage to its limits. I guess you could help Sven
  better if you had some hard facts why to favour iSCSI.

  Best regards.

  Markus
 
 Only technical difference I can think of is the iSCSI-level
 load-balancing. With NFS you set up the network with LACP and let that
 load-balance for you (and you should probably do that with iSCSI as well
 but you don't strictly have to). I think it has to do with a chance of
 trying to go beyond the capacity of 1 network interface at the same
 time, from one Host (higher bandwidth) that makes people try iSCSI
 instead of plain NFS. I have tried that but was never able to achieve
 that effect, so in our situation, there's no difference. In comparing
 them both in benchmarks, there was no performance difference at all, at
 least for our storage systems that are based on FreeBSD.

 /K




-- 

Sent from the Delta quadrant using Borg technology!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-09 Thread Markus Stockhausen
 From: Karli Sjöberg [karli.sjob...@slu.se]
 Sent: Thursday, 9 January 2014 08:48
 To: squa...@gmail.com
 Cc: users@ovirt.org; Markus Stockhausen
 Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
 
 On Thu, 2014-01-09 at 08:35 +0100, squadra wrote:
 Right, try multipathing with nfs :)

 Yes, that's what I meant, maybe could have been more clear about that,
 sorry. Multipathing (and the load-balancing it brings) is what really
 separates iSCSI from NFS.
 
 What I'd be interested in knowing is at what breaking-point, not having
 multipathing becomes an issue. I mean, we might not have such a big
 VM-park, about 300-400 VMs. But so far running without multipathing
 using good ole' NFS and no performance issues this far. Would be good to
 know beforehand if we're headed for a wall of some sorts, and about
 when we'll hit it...

/K

If that is really a concern for the initial question about a low cost NFS
solution, LACP on the NFS filer side will mitigate the bottleneck caused by
too many hypervisors.

My personal headache is the I/O performance of QEMU. More details here:
http://lists.nongnu.org/archive/html/qemu-discuss/2013-12/msg00028.html
To make it short: each I/O in a VM gets a penalty of 370us. That is much
more than in ESX environments.

I would be interested to know whether it is the same in iSCSI setups.

Markus

This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497




Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-09 Thread squadra
Try it, I bet that you will get better latency results with a properly
configured iSCSI target/initiator.

Btw, FreeBSD 10 includes a kernel-based iSCSI target now, which has worked
pretty well for me for some time: easy to set up and performing well (ZFS
not to forget ;) )
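For anyone who wants to try the FreeBSD 10 kernel target, it is driven by ctld. A minimal configuration sketch follows; the IQN, portal address and zvol path are invented examples, not from this thread.

```
# /etc/ctl.conf -- minimal sketch for the FreeBSD 10 kernel iSCSI target.
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 192.168.10.1
}

target iqn.2014-01.org.example:vmstore {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        # ZFS zvol backing the LUN (example path)
        path /dev/zvol/tank/vmstore
    }
}
```

Enable and start it with something like `sysrc ctld_enable=YES` followed by `service ctld start` (again, a sketch; check ctl.conf(5) on your release).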


On Thu, Jan 9, 2014 at 9:20 AM, Markus Stockhausen
stockhau...@collogia.dewrote:

  From: Karli Sjöberg [karli.sjob...@slu.se]
  Sent: Thursday, 9 January 2014 08:48
  To: squa...@gmail.com
  Cc: users@ovirt.org; Markus Stockhausen
  Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
 
  On Thu, 2014-01-09 at 08:35 +0100, squadra wrote:
  Right, try multipathing with nfs :)
 
  Yes, that's what I meant, maybe could have been more clear about that,
  sorry. Multipathing (and the load-balancing it brings) is what really
  separates iSCSI from NFS.
 
  What I'd be interested in knowing is at what breaking-point, not having
  multipathing becomes an issue. I mean, we might not have such a big
  VM-park, about 300-400 VMs. But so far running without multipathing
  using good ole' NFS and no performance issues this far. Would be good to
  know beforehand if we're headed for a wall of some sorts, and about
  when we'll hit it...
 
 /K

 If that is really a concern for the initial question about a low cost NFS
 solution, LACP on the NFS filer side will mitigate the bottleneck caused by
 too many hypervisors.

 My personal headache is the I/O performance of QEMU. More details here:
 http://lists.nongnu.org/archive/html/qemu-discuss/2013-12/msg00028.html
 To make it short: each I/O in a VM gets a penalty of 370us. That is much
 more than in ESX environments.

 I would be interested to know whether it is the same in iSCSI setups.

 Markus




-- 

Sent from the Delta quadrant using Borg technology!


Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-09 Thread Karli Sjöberg
On Thu, 2014-01-09 at 09:30 +0100, squadra wrote:
 try it, i bet that you will get better latency results with proper
 configured iscsitarget/initiator. 
 
 
  btw, freebsd 10 includes a kernel based iscsi-target now, which has
  worked pretty well for me for some time, easy to set up and
  performing well (zfs not to forget ;) )

Yeah, I see 10's reached RC4 now; it'll probably be out for real soon, and
then a while more to wait for 10.1 to have longer support :)

Have you compared the new iscsi-target with ports/istgt btw?

/K

 
 
 On Thu, Jan 9, 2014 at 9:20 AM, Markus Stockhausen
 stockhau...@collogia.de wrote:
   From: Karli Sjöberg [karli.sjob...@slu.se]
   Sent: Thursday, 9 January 2014 08:48
   To: squa...@gmail.com
   Cc: users@ovirt.org; Markus Stockhausen
   Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
  
   On Thu, 2014-01-09 at 08:35 +0100, squadra wrote:
   Right, try multipathing with nfs :)
  
   Yes, that's what I meant, maybe could have been more clear about that,
   sorry. Multipathing (and the load-balancing it brings) is what really
   separates iSCSI from NFS.
  
   What I'd be interested in knowing is at what breaking-point, not having
   multipathing becomes an issue. I mean, we might not have such a big
   VM-park, about 300-400 VMs. But so far running without multipathing
   using good ole' NFS and no performance issues this far. Would be good to
   know beforehand if we're headed for a wall of some sorts, and about
   when we'll hit it...
  
  /K
  
  If that is really a concern for the initial question about a low cost NFS
  solution, LACP on the NFS filer side will mitigate the bottleneck caused by
  too many hypervisors.
  
  My personal headache is the I/O performance of QEMU. More details here:
  http://lists.nongnu.org/archive/html/qemu-discuss/2013-12/msg00028.html
  Or to make it short: each I/O in a VM gets a penalty of 370us. That is much
  more than in ESX environments.
  
  I would be interested to know whether it is the same in iSCSI setups.
  
  Markus
 
 
 
 
 -- 
 Sent from the Delta quadrant using Borg technology!



Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-09 Thread Markus Stockhausen
 From: squadra [squa...@gmail.com]
 Sent: Thursday, 9 January 2014 09:30
 To: Markus Stockhausen
 Cc: Karli Sjöberg; users@ovirt.org
 Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

 try it, I bet that you will get better latency results with a properly
 configured iSCSI target/initiator.

I guess you did not take the time to read the whole post. The latency I speak
of comes on top of the NFS latency. So my setup has:

- 83us latency per I/O in the hypervisor on an NFS share
- 450us latency per I/O in the VM on a disk hosted on the same NFS share

Even if iSCSI could reduce the 83us to 40us in our most wishful dreams,
the QEMU penalty would still hit too hard.
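To put those numbers in perspective, here is a small calculation (not a measurement; it simply restates the latencies above as the IOPS ceiling of a single outstanding request):

```python
# Serial (queue-depth 1) IOPS ceiling implied by the per-I/O latencies
# quoted above; pure arithmetic, no measurement involved.

def qd1_iops(latency_us: float) -> float:
    """I/Os per second a single outstanding request can achieve."""
    return 1_000_000 / latency_us

hypervisor = qd1_iops(83)    # NFS share as seen from the hypervisor
guest      = qd1_iops(450)   # same share as seen from inside the VM
# Dream case: 40us storage latency, but the ~367us QEMU penalty stays.
dream      = qd1_iops(40 + (450 - 83))

print(round(hypervisor))  # 12048
print(round(guest))       # 2222
print(round(dream))       # 2457
```

This is the point being made: shaving storage latency from 83us to 40us only lifts the serial ceiling inside the VM from about 2200 to about 2450 IOPS, because the QEMU overhead dominates.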

Markus


Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-09 Thread Sander Grendelman
On Thu, Jan 9, 2014 at 9:39 AM, Markus Stockhausen
stockhau...@collogia.de wrote:
 From: squadra [squa...@gmail.com]
 Sent: Thursday, 9 January 2014 09:30
 To: Markus Stockhausen
 Cc: Karli Sjöberg; users@ovirt.org
 Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

 try it, I bet that you will get better latency results with a properly
 configured iSCSI target/initiator.

 I guess you did not take the time to read the whole post. The latency I speak
 of comes on top of the NFS latency. So my setup has:

 - 83us latency per I/O in the hypervisor on an NFS share
 - 450us latency per I/O in the VM on a disk hosted on the same NFS share

 Even if iSCSI could reduce the 83us to 40us in our most wishful dreams,
 the QEMU penalty would still hit too hard.

There are some interesting tests here:
http://www.linux-kvm.org/page/Virtio/Block/Latency
Results seem to depend a lot on the guest OS IO stack/drivers (I see
you use win2k3?).


Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-09 Thread Karli Sjöberg
On Thu, 2014-01-09 at 09:53 +0100, Sander Grendelman wrote:
 On Thu, Jan 9, 2014 at 9:39 AM, Markus Stockhausen
 stockhau...@collogia.de wrote:
  From: squadra [squa...@gmail.com]
  Sent: Thursday, 9 January 2014 09:30
  To: Markus Stockhausen
  Cc: Karli Sjöberg; users@ovirt.org
  Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
 
  try it, I bet that you will get better latency results with a properly
  configured iSCSI target/initiator.
 
  I guess you did not take the time to read the whole post. The latency I speak
  of comes on top of the NFS latency. So my setup has:
 
  - 83us latency per I/O in the hypervisor on an NFS share
  - 450us latency per I/O in the VM on a disk hosted on the same NFS share
 
  Even if iSCSI could reduce the 83us to 40us in our most wishful dreams,
  the QEMU penalty would still hit too hard.
 
 There are some interesting tests here:
 http://www.linux-kvm.org/page/Virtio/Block/Latency

Very interesting:
"...23% overhead compared to a host read request. This deserves closer
study so that the overhead can be reduced."

Good to know people are aware of it and at least thinking about it :)

Seeing as it's such a fast-paced development, have you done any
benchmarks on different distributions as well? I mean comparing the
same test against both, say, Fedora and CentOS, to see if that makes any
difference?

/K

 Results seem to depend a lot on the guest OS IO stack/drivers (I see
 you use win2k3?).


[Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-08 Thread Sven Kieske
Hi,

I'd like to ask around if someone runs oVirt
with NFS-backed storage provided by simple servers (no SAN or NAS),
and what your experience is so far?

In particular I'm interested in what happens if there is a connection
loss to the NFS volume.

How does this affect running VMs and the compute nodes they run on?

I suspect they would first write their changes to RAM instead of the
virtual HDD. But once the RAM is full, does just the VM become
unresponsive, or does the whole compute node die?

I couldn't test this myself yet, but my limited experience with
NFS servers tells me that they become unresponsive when under heavy
load, and I'd like to know how this affects the VMs and compute nodes.

Thanks!

PS: Bonus question: does someone utilize the NFS servers also as
compute nodes?

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen


Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-08 Thread Markus Stockhausen
 Hi,

 I'd like to ask around if someone does run oVirt
 with NFS backed Storage provided by simple servers (no SAN or NAS)
 and what your experience is so far?

We are running oVirt (still in the test phase) on three self-built, identical
Ubuntu NFS servers over InfiniBand (50 euros for a ConnectX card + 50 euros
for a cable + 400 euros for a switch). Thanks to the massive bandwidth, the
bottleneck is the SATA disks and the RAID controller. I put some info at
http://www.ovirt.org/Infiniband
Our soon-to-be-replaced ESX infrastructure uses the same platform.

 In particular I'm interested what happens if there is a connection
 loss to the NFS-Volume.

You can be sure that the applications in your VMs will get stuck. They
usually flush their write caches at regular intervals, and that flush is
directly translated into an NFS write on the hypervisor. At least it should
be configured that way; otherwise you could lose data in case of a crash,
e.g. the VM thinks data is persistent, but it was only cached in the
hypervisor's RAM.
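For illustration, this is the kind of libvirt disk definition that enforces that behaviour; treat it as a sketch (the file path is a made-up example), the key point being cache='none', which makes the hypervisor bypass its page cache so a guest flush really reaches the NFS server instead of hypervisor RAM.

```xml
<!-- Illustrative libvirt disk definition, not taken from the thread. -->
<disk type='file' device='disk'>
  <!-- cache='none': no hypervisor page cache between guest and NFS;
       error_policy='stop' pauses the VM instead of corrupting data
       when the storage goes away. -->
  <driver name='qemu' type='raw' cache='none' error_policy='stop'/>
  <source file='/mnt/nfs-example/vm01.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```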

 How does this affect running vms and the compute nodes they run on?

The hypervisor has no problems at all (unless you run a df command), and the
VMs will usually continue their operation after NFS recovers. Nevertheless,
there are a lot of timeout settings in the architecture stack that may stop
a running application inside the VM. Expect to have a lot of manual cleanup
after an NFS failure.

 I suspect they would first write their changes to RAM instead of virtual HDD.
 But once the RAM is full, does just the vm become unresponsive or
 does the whole compute node die?

That would be bad practice. See above.

 I couldn't test this yet myself, but my limited experience with
 NFS-Servers tells me, that they become unresponsive if they are under
 heavy load and I'd like to know how this affects the vms and
 computenode.

Defining heavy load is basically some kind of I/O calculation. A simple
example from our setup:

- Each NFS server has a RAID6 consisting of 14 SATA disks (7.2K rpm)
- The most heavily loaded NFS server runs a minimum of 140 8K I/Os per second
- Assuming a random pattern, we speak of 140 read-modify-write cycles
- With RAID6's write penalty of 6, that translates to roughly 6*140=840 disk
  I/Os per second
- The 14 SATA disks offer a maximum of 14*90=1260 I/Os per second
- No wonder the NFS server usually shows around 20% I/O wait
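The same arithmetic as a tiny script, using the rules of thumb from the figures above (a RAID6 write penalty of 6 and ~90 IOPS per 7.2K SATA disk):

```python
# Back-of-the-envelope check of the RAID6 numbers above.

frontend_iops = 140      # random 8K writes hitting the NFS server
raid6_penalty = 6        # disk I/Os per random small write on RAID6
disks         = 14
iops_per_disk = 90       # rule of thumb for a 7.2K SATA drive

backend_iops = frontend_iops * raid6_penalty   # load on the disks
ceiling      = disks * iops_per_disk           # what the disks can do

print(backend_iops, ceiling, f"{backend_iops / ceiling:.0%}")  # 840 1260 67%
```

So the array already spends about two thirds of its random-I/O budget on a seemingly modest 140 front-end IOPS, which is why the server sits at a noticeable wait-I/O percentage.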

 Thanks!

 PS: Bonus question: Does someone utilize the NFS-Servers also as
 computenodes ?

We separated that strictly. Otherwise we would go with GlusterFS.

Markus

P.S. We are leaving VMware because live storage migration of oVirt VMs
gives us the possibility to reduce complexity on the storage side.



Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-08 Thread noc

On 8-1-2014 13:18, Sven Kieske wrote:

PS: Bonus question: Does someone utilize the NFS-Servers also as
computenodes ?

We do, temporarily. It is NOT recommended :-) because:
- you can't update your NFS server without shutting down all VMs
- a myriad of other reasons

Still, I did a reboot of our NFS server to update all nodes/engine from
3.2.2 to 3.3.2. How? I made a script that did a virsh suspend of each VM,
which freezes all its I/O, and then ran yum update/reboot on the NFS
server. It worked, but it's not good for your stress levels.


Joop



Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-08 Thread squadra
Better go for iSCSI or something else... I would avoid NFS for VM hosting.

FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. Or go
with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution.


Cheers,

Juergen


On Wed, Jan 8, 2014 at 2:34 PM, noc n...@nieuwland.nl wrote:

 On 8-1-2014 13:18, Sven Kieske wrote:

 PS: Bonus question: Does someone utilize the NFS-Servers also as
 computenodes ?

 We do, temporarily. It is NOT recommended :-) because:
 - you can't update your NFS server without shutting down all VMs
 - a myriad of other reasons

 Still, I did a reboot of our NFS server to update all nodes/engine from
 3.2.2 to 3.3.2. How? I made a script that did a virsh suspend of each VM,
 which freezes all its I/O, and then ran yum update/reboot on the NFS
 server. It worked, but it's not good for your stress levels.

 Joop


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users




-- 

Sent from the Delta quadrant using Borg technology!


Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-08 Thread Markus Stockhausen
 From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
 squadra [squa...@gmail.com]
 Sent: Wednesday, 8 January 2014 17:15
 To: users@ovirt.org
 Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

 better go for iSCSI or something else... I would avoid NFS for VM hosting
 FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. Or
 go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution

 Cheers,
 
 Juergen

That is usually a matter of taste and the available environment.
The minimal differences in performance usually only show up
if you drive the storage to its limits. I guess you could help Sven
better if you had some hard facts on why to favour iSCSI.

Best regards.

Markus


Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-08 Thread Karli Sjöberg
On Thu, 2014-01-09 at 07:10 +, Markus Stockhausen wrote:
  From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
  squadra [squa...@gmail.com]
  Sent: Wednesday, 8 January 2014 17:15
  To: users@ovirt.org
  Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
 
  better go for iSCSI or something else... I would avoid NFS for VM hosting
  FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. Or
  go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution
 
  Cheers,
  
  Juergen
 
 That is usually a matter of taste and the available environment.
 The minimal differences in performance usually only show up
 if you drive the storage to its limits. I guess you could help Sven
 better if you had some hard facts on why to favour iSCSI.
 
 Best regards.
 
 Markus

The only technical difference I can think of is iSCSI-level
load-balancing. With NFS you set up the network with LACP and let that
load-balance for you (and you should probably do that with iSCSI as well,
but you don't strictly have to). I think it is the chance of going beyond
the capacity of one network interface at the same time, from one host
(higher bandwidth), that makes people try iSCSI instead of plain NFS. I
have tried that but was never able to achieve that effect, so in our
situation there's no difference. Comparing them both in benchmarks, there
was no performance difference at all, at least for our storage systems,
which are based on FreeBSD.

/K
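As a sketch of the LACP setup described above (interface name, bonding options and address are examples, not from this thread):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- illustrative LACP bond
# for the NFS/storage network on a Linux hypervisor.
DEVICE=bond0
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
IPADDR=192.168.10.11
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes
```

Note that even with layer3+4 hashing, a single NFS TCP connection is pinned to one slave link, which is consistent with the observation above that one host never exceeds the bandwidth of one NIC over plain NFS.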


Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-08 Thread squadra
There are already enough articles on the web about NFS problems related to
locking, latency, etc. Eh, stacking one protocol onto another to fix a
problem, and then maybe one more to glue them together.

Google for the SUSE PDF "Why NFS sucks". I don't agree with the whole
sheet; NFS has its place, too. But not as a production filer for VMs.

Cheers,

Juergen, the NFS lover
On Jan 9, 2014 8:10 AM, Markus Stockhausen stockhau...@collogia.de
wrote:

  From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
 squadra [squa...@gmail.com]
  Sent: Wednesday, 8 January 2014 17:15
  To: users@ovirt.org
  Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
 
  better go for iSCSI or something else... I would avoid NFS for VM hosting
  FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. Or
  go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution
 
  Cheers,
 
  Juergen

 That is usually a matter of taste and the available environment.
 The minimal differences in performance usually only show up
 if you drive the storage to its limits. I guess you could help Sven
 better if you had some hard facts why to favour iSCSI.

 Best regards.

 Markus


Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-08 Thread squadra
Right, try multipathing with nfs :)
On Jan 9, 2014 8:30 AM, Karli Sjöberg karli.sjob...@slu.se wrote:

 On Thu, 2014-01-09 at 07:10 +, Markus Stockhausen wrote:
   From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf
 of squadra [squa...@gmail.com]
   Sent: Wednesday, 8 January 2014 17:15
   To: users@ovirt.org
   Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
  
   better go for iSCSI or something else... I would avoid NFS for VM hosting
   FreeBSD 10 delivers a kernel iSCSI target now, which works great so far.
   Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution
  
   Cheers,
  
   Juergen
 
  That is usually a matter of taste and the available environment.
  The minimal differences in performance usually only show up
  if you drive the storage to its limits. I guess you could help Sven
  better if you had some hard facts why to favour iSCSI.
 
  Best regards.
 
  Markus

 Only technical difference I can think of is the iSCSI-level
 load-balancing. With NFS you set up the network with LACP and let that
 load-balance for you (and you should probably do that with iSCSI as well
 but you don't strictly have to). I think it has to do with a chance of
 trying to go beyond the capacity of 1 network interface at the same
 time, from one Host (higher bandwidth) that makes people try iSCSI
 instead of plain NFS. I have tried that but was never able to achieve
 that effect, so in our situation, there's no difference. In comparing
 them both in benchmarks, there was no performance difference at all, at
 least for our storage systems that are based on FreeBSD.

 /K



Re: [Users] Experience with low cost NFS-Storage as VM-Storage?

2014-01-08 Thread Karli Sjöberg
On Thu, 2014-01-09 at 08:35 +0100, squadra wrote:
 Right, try multipathing with nfs :)

Yes, that's what I meant; maybe I could have been clearer about that,
sorry. Multipathing (and the load-balancing it brings) is what really
separates iSCSI from NFS.

What I'd be interested in knowing is at what breaking point not having
multipathing becomes an issue. I mean, we might not have such a big
VM park, about 300-400 VMs. But so far we are running without multipathing
on good ole' NFS with no performance issues. It would be good to know
beforehand if we're headed for a wall of some sort, and about when
we'll hit it...

/K

 
 On Jan 9, 2014 8:30 AM, Karli Sjöberg karli.sjob...@slu.se wrote:
 On Thu, 2014-01-09 at 07:10 +, Markus Stockhausen wrote:
   From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf
 of squadra [squa...@gmail.com]
   Sent: Wednesday, 8 January 2014 17:15
   To: users@ovirt.org
   Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
  
   better go for iSCSI or something else... I would avoid NFS for VM hosting
   FreeBSD 10 delivers a kernel iSCSI target now, which works great so far.
   Or go with OmniOS to get COMSTAR iSCSI, which is a rock-solid solution
  
   Cheers,
  
   Juergen
 
  That is usually a matter of taste and the available environment.
  The minimal differences in performance usually only show up
  if you drive the storage to its limits. I guess you could help Sven
  better if you had some hard facts why to favour iSCSI.
 
  Best regards.
 
  Markus
 
 Only technical difference I can think of is the iSCSI-level
 load-balancing. With NFS you set up the network with LACP and let that
 load-balance for you (and you should probably do that with iSCSI as well
 but you don't strictly have to). I think it has to do with a chance of
 trying to go beyond the capacity of 1 network interface at the same
 time, from one Host (higher bandwidth) that makes people try iSCSI
 instead of plain NFS. I have tried that but was never able to achieve
 that effect, so in our situation, there's no difference. In comparing
 them both in benchmarks, there was no performance difference at all, at
 least for our storage systems that are based on FreeBSD.
 
 /K
