Re: [ovirt-users] iSCSI and multipath deliberation

2017-03-16 Thread Pavel Gashev
Following this logic, oVirt should detect existing storage instead of 
attaching/mounting it. The same goes for networks. A VM could be placed on any 
host that has the required storage(s) and network(s) attached. Datacenters/clusters 
would become purely virtual constructs.

Pros:
* Flexibility for Linux admins, wider usage
* Less code, fewer bugs.
Cons:
* You must be a Linux admin to maintain oVirt instances.
* It’s a step back in RHEV/oVirt functionality. oVirt would lose users, and 
Red Hat would lose customers.


From: <users-boun...@ovirt.org> on behalf of Marcin Kruk 
<askifyoun...@gmail.com>
Date: Thursday, 16 March 2017 at 20:17
To: users <users@ovirt.org>
Subject: [ovirt-users] iSCSI and multipath deliberation

In my opinion, the main problem with configuring iSCSI and multipath is that 
the oVirt developers try to start everything automatically during installation, 
and then again when services like vdsmd start.
During the installation process the administrator should only have to choose 
the right multipath WWID identifier.
The administrator should then be responsible for setting up multipath and iSCSI 
properly.
Otherwise the oVirt installer does everything automatically in a universal way, 
which is weak given how many storage types there are.
Howgh :)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] iSCSI and multipath deliberation

2017-03-16 Thread Marcin Kruk
In my opinion, the main problem with configuring iSCSI and multipath is
that the oVirt developers try to start everything automatically during
installation, and then again when services like vdsmd start.
During the installation process the administrator should only have to
choose the right multipath WWID identifier.
The administrator should then be responsible for setting up multipath and
iSCSI properly.
Otherwise the oVirt installer does everything automatically in a universal
way, which is weak given how many storage types there are.
Howgh :)
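
To make the WWID point concrete, here is a rough sketch of the kind of
/etc/multipath.conf fragment an administrator might maintain by hand; the WWID
and alias below are made-up placeholders (the real WWID would come from
multipath -ll or /etc/multipath/wwids), and on oVirt hosts VDSM manages this
file, so hand edits can be overwritten when the host is reconfigured:

    # hypothetical /etc/multipath.conf fragment -- WWID and alias are placeholders
    multipaths {
        multipath {
            wwid  "36001405abcdef0123456789abcdef012"
            alias ovirt_data_lun
        }
    }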
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI and multipath

2014-07-07 Thread Gary Lloyd
Is there any chance of multipath working with direct LUN instead of just
storage domains? I've asked/checked a couple of times, but haven't had much
luck.

Thanks

*Gary Lloyd*
--
IT Services
Keele University
---


On 9 June 2014 15:17, John Taylor jtt77...@gmail.com wrote:

 On Mon, Jun 9, 2014 at 9:23 AM, Nicolas Ecarnot nico...@ecarnot.net
 wrote:
  On 09-06-2014 14:44, Maor Lipchuk wrote:
 
  basically, you should upgrade your DC to 3.4, and then upgrade the
  clusters you desire also to 3.4.
 
 
  Well, that seems to have worked, except I had to raise the cluster level
  first, then the DC level.
 
  Now, I can see the iSCSI multipath tab has appeared.
  But I confirm what I wrote below :
 
  I saw that multipathing is talked here :
  http://www.ovirt.org/Feature/iSCSI-Multipath
 
  Add an iSCSI Storage to the Data Center
  Make sure the Data Center contains networks.
  Go to the Data Center main tab and choose the specific Data
 Center
  At the sub tab choose iSCSI Bond
  Press the new button to add a new iSCSI Bond
  Configure the networks you want to add to the new iSCSI Bond.
 
 
  Anyway, I'm not sure to understand the point of this wiki page and
 this
  implementation : it looks like a much higher level of multipathing
 over
  virtual networks, and not at all what I'm talking about above...?
 
 
  I am actually trying to know whether bonding interfaces (at low level)
 for
  the iSCSI network is a bad thing, as was told by my storage provider?
 
  --
  Nicolas Ecarnot


 Hi Nicolas,
 I think naming the managed iSCSI multipathing feature a "bond" might be
 a bit confusing. It's not an Ethernet/NIC bond, but a way to group
 networks and targets together, so it's not bonding interfaces.
 Behind the scenes it creates iSCSI ifaces (/var/lib/iscsi/ifaces) and
 changes the way the iscsiadm calls are constructed so that they use
 those ifaces (instead of the default) to connect and log in to the
 targets.
 Hope that helps.

 -John
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI and multipath

2014-07-07 Thread Sven Kieske
Am 07.07.2014 10:17, schrieb Gary Lloyd:
 Is there any chance of multipath working with direct LUN instead of just
 storage domains ? I've asked/checked a couple of times, but not had much
 luck.

Hi,

The best way to get features into oVirt is to create a bug
titled as an RFE (request for enhancement) here:

https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt

If you have any custom VDSM code, it would be cool to share
it with the community; then you might not even be responsible
for maintaining it in the future, but that's your decision
to make.

HTH



-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI and multipath

2014-07-07 Thread Gary Lloyd
Hi Sven

Thanks

My colleague has already tried to submit the code; I think it was Itamar
we spoke with, sometime in Oct/Nov '13.
I think it was decided that the functionality should be driven from the
engine itself rather than being set on the VDSM nodes.

I will look into putting in a feature request, though.

Cheers

*Gary Lloyd*
--
IT Services
Keele University
---


On 7 July 2014 09:45, Sven Kieske s.kie...@mittwald.de wrote:

 Am 07.07.2014 10:17, schrieb Gary Lloyd:
  Is there any chance of multipath working with direct LUN instead of just
  storage domains ? I've asked/checked a couple of times, but not had much
  luck.

 Hi,

 the best way to get features into ovirt is to create a Bug
 titled as RFE (request for enhancement) here:

 https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt

 if you have any custom vdsm code it would be cool to share
 it with the community, so you might even not be responsible
 for maintaining it in the future, but that's your decision
 to make.

 HTH



 --
 Mit freundlichen Grüßen / Regards

 Sven Kieske

 Systemadministrator
 Mittwald CM Service GmbH & Co. KG
 Königsberger Straße 6
 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI and multipath

2014-07-07 Thread jplor...@gmail.com
From: Jorick Astrego j.astr...@netbulae.eu
 To: users@ovirt.org
 Subject: Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5
 Message-ID: 53ba48d3.4020...@netbulae.eu
 Content-Type: text/plain; charset=iso-8859-1; Format=flowed


 On 07/05/2014 04:39 PM, Karli Sjöberg wrote:
 
 
  On 5 Jul 2014 16:22, Brad Bendy brad.be...@gmail.com wrote:
  
   Haha, yeah never have been a Fedora fan, and nothing has changed. Is
   the only big feature im missing out on is snapshots? From what I can
   tell, and in my testing, everything else seems to work. Was deploying
   GlusterFS but without the live migration to another host that is
   somewhat defeated.
 
  VM live migration works, live _disk_ migration does not.
 
   Only way to get that is with RHEL really then?
 
  No, as I earlier pointed out, there is a place you can get the
  packages you need for CentOS:
 
 http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
 
   You'll have to download and force-install them over the already
   installed versions of those packages on all Hosts, and then it'll work.
 
  Though, next time there are updates, yum will update from the standard
  repos and it just stops working again until you repeat the procedure.
 
  /K
 
 
 Just add exclude=qemu-kvm* in /etc/yum.conf so yum will leave them alone.

 Kind regards,
 Jorick Astrego
 Netbulae
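
For reference, a minimal sketch of that /etc/yum.conf change, assuming a stock
[main] section (any other options already in the file stay as they are):

    [main]
    # keep yum from replacing the manually installed qemu-kvm-rhev packages
    # with the stock CentOS qemu-kvm packages on the next update
    exclude=qemu-kvm*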

[ovirt-users] iSCSI and multipath

2014-06-09 Thread Nicolas Ecarnot

Hi,

Context here :
- 2 setups (2 datacenters) in oVirt 3.4.1 with CentOS 6.4 and 6.5 hosts
- connected to some LUNs in iSCSI on a dedicated physical network

Every host has two interfaces used for management and end-user LAN 
activity. Every host also has 4 additional NICs dedicated to the iSCSI 
network.


Those 4 NICs were set up from the oVirt web GUI in a bond with a 
single IP address and connected to the SAN.


Everything is working fine. I just had to manually tweak some points 
(MTU, other small things) but it is working.



Recently, our SAN dealer told us that using bonding in an iSCSI context 
was terrible, and that the recommendation is to use multipathing.
My pre-oVirt experience leads me to agree with that. Long story 
short, when setting up the host from oVirt it was so convenient to 
click, set up the bond, and watch it work that I did not pay further 
attention (and we seem to have no bottleneck yet).


Anyway, I dedicated a host to experimenting, but things are not clear to me.
I know how to set up NICs, iSCSI and multipath to present the host OS with a 
partition or a logical volume, using multipathing instead of bonding.


But in this precise case, what disturbs me is that many of the layers 
described above are managed by oVirt (mounting/unmounting LVs, creating 
bridges on top of bonded interfaces, managing the WWIDs across the cluster).


And I see nothing related to multipath at the NIC level.
Though I can set everything up fine on the host, this setup does not 
match what oVirt expects: oVirt expects a bridge named after the 
iSCSI network and able to connect to the SAN.
What my multipathing offers is access to the partitions on the LUNs, 
which is not the same thing.


I saw that multipathing is discussed here:
http://www.ovirt.org/Feature/iSCSI-Multipath

There I read:

Add an iSCSI Storage to the Data Center
Make sure the Data Center contains networks.
Go to the Data Center main tab and choose the specific Data Center
At the sub tab choose iSCSI Bond


The only tabs I see are Storage/Logical Networks/Network 
QoS/Clusters/Permissions.


In this datacenter, I have one iSCSI master storage domain, two iSCSI 
storage domains and one NFS export domain.


What did I miss?


Press the new button to add a new iSCSI Bond
Configure the networks you want to add to the new iSCSI Bond.


Anyway, I'm not sure I understand the point of this wiki page and this 
implementation: it looks like a much higher level of multipathing over 
virtual networks, and not at all what I'm talking about above...?


Well, as you can see, I need enlightenment.

--
Nicolas Ecarnot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI and multipath

2014-06-09 Thread Maor Lipchuk
Hi Nicolas,

Which DC compatibility level are you using?
iSCSI multipath is only supported in a DC with compatibility
version 3.4.

regards,
Maor

On 06/09/2014 01:06 PM, Nicolas Ecarnot wrote:
 Hi,
 
 Context here :
 - 2 setups (2 datacenters) in oVirt 3.4.1 with CentOS 6.4 and 6.5 hosts
 - connected to some LUNs in iSCSI on a dedicated physical network
 
 Every host has two interfaces used for management and end-user LAN
 activity. Every host also have 4 additional NICs dedicated to the iSCSI
 network.
 
 Those 4 NICs were setup from the oVirt web GUI in a bonding with a
 unique IP address and connected to the SAN.
 
 Everything is working fine. I just had to manually tweak some points
 (MTU, other small things) but it is working.
 
 
 Recently, our SAN dealer told us that using bonding in an iSCSI context
 was terrible, and the recommendation is to use multipathing.
 My previous experience pre-oVirt was to agree with that. Long story
 short is just that when setting up the host from oVirt, it was so
 convenient to click and setup bonding, and observe it working that I did
 not pay further attention. (and we seem to have no bottleneck yet).
 
 Anyway, I dedicated a host to experiment, I things are not clear to me.
 I know how to setup NICs, iSCSI and multipath to present the host OS a
 partition or a logical volume, using multipathing instead of bonding.
 
 But in this precise case, what is disturbing me is that many layers
 described above are managed by oVirt (mount/unmount of LV, creation of
 bridges on top of bonded interfaces, managing the WWID amongst the
 cluster).
 
 And I see nothing related to multipath at the NICs level.
 Though I can setup everything fine in the host, this setup does not
 match what oVirt is expecting : oVirt is expecting a bridge named as the
 iSCSI network, and able to connect to the SAN.
 My multipathing is offering the access to the partition of the LUNs, it
 is not the same.
 
 I saw that multipathing is talked here :
 http://www.ovirt.org/Feature/iSCSI-Multipath
 
 I here read :
 Add an iSCSI Storage to the Data Center
 Make sure the Data Center contains networks.
 Go to the Data Center main tab and choose the specific Data Center
 At the sub tab choose iSCSI Bond
 
 The only tabs I see are Storage/Logical Networks/Network
 QoS/Clusters/Permissions.
 
 In this datacenter, I have one iSCSI master storage domain, two iSCSI
 storage domains and one NFS export domain.
 
 What did I miss?
 
 Press the new button to add a new iSCSI Bond
 Configure the networks you want to add to the new iSCSI Bond.
 
 Anyway, I'm not sure to understand the point of this wiki page and this
 implementation : it looks like a much higher level of multipathing over
 virtual networks, and not at all what I'm talking about above...?
 
 Well as you see, I need enlightenments.
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI and multipath

2014-06-09 Thread Nicolas Ecarnot

On 09-06-2014 13:55, Maor Lipchuk wrote:

Hi Nicolas,

Which DC level are you using?
iSCSI multipath should be supported only from DC with compatibility
version of 3.4


Hi Maor,

Oops, you're right: both my 3.4 datacenters are using the 3.3 level.
I migrated recently.

How safe or risky is it to increase this DC level?



regards,
Maor

On 06/09/2014 01:06 PM, Nicolas Ecarnot wrote:

Hi,

Context here :
- 2 setups (2 datacenters) in oVirt 3.4.1 with CentOS 6.4 and 6.5 
hosts

- connected to some LUNs in iSCSI on a dedicated physical network

Every host has two interfaces used for management and end-user LAN
activity. Every host also have 4 additional NICs dedicated to the 
iSCSI

network.

Those 4 NICs were setup from the oVirt web GUI in a bonding with a
unique IP address and connected to the SAN.

Everything is working fine. I just had to manually tweak some points
(MTU, other small things) but it is working.


Recently, our SAN dealer told us that using bonding in an iSCSI 
context

was terrible, and the recommendation is to use multipathing.
My previous experience pre-oVirt was to agree with that. Long story
short is just that when setting up the host from oVirt, it was so
convenient to click and setup bonding, and observe it working that I 
did

not pay further attention. (and we seem to have no bottleneck yet).

Anyway, I dedicated a host to experiment, I things are not clear to 
me.

I know how to setup NICs, iSCSI and multipath to present the host OS a
partition or a logical volume, using multipathing instead of bonding.

But in this precise case, what is disturbing me is that many layers
described above are managed by oVirt (mount/unmount of LV, creation of
bridges on top of bonded interfaces, managing the WWID amongst the
cluster).

And I see nothing related to multipath at the NICs level.
Though I can setup everything fine in the host, this setup does not
match what oVirt is expecting : oVirt is expecting a bridge named as 
the

iSCSI network, and able to connect to the SAN.
My multipathing is offering the access to the partition of the LUNs, 
it

is not the same.

I saw that multipathing is talked here :
http://www.ovirt.org/Feature/iSCSI-Multipath

I here read :

Add an iSCSI Storage to the Data Center
Make sure the Data Center contains networks.
Go to the Data Center main tab and choose the specific Data 
Center

At the sub tab choose iSCSI Bond


The only tabs I see are Storage/Logical Networks/Network
QoS/Clusters/Permissions.

In this datacenter, I have one iSCSI master storage domain, two iSCSI
storage domains and one NFS export domain.

What did I miss?


Press the new button to add a new iSCSI Bond
Configure the networks you want to add to the new iSCSI Bond.


Anyway, I'm not sure to understand the point of this wiki page and 
this
implementation : it looks like a much higher level of multipathing 
over

virtual networks, and not at all what I'm talking about above...?

Well as you see, I need enlightenments.



--
Nicolas Ecarnot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI and multipath

2014-06-09 Thread Maor Lipchuk
Basically, you should upgrade your DC to 3.4, and then also upgrade the
clusters you want to 3.4.

You might need to upgrade your hosts to be compatible with the cluster's
emulated machine type, or they might become non-operational if qemu-kvm
does not support it.

Either way, you can always ask for advice on the mailing list if you
encounter any problems.

Regards,
Maor

On 06/09/2014 03:30 PM, Nicolas Ecarnot wrote:
 On 09-06-2014 13:55, Maor Lipchuk wrote:
 Hi Nicolas,

 Which DC level are you using?
 iSCSI multipath should be supported only from DC with compatibility
 version of 3.4
 
 Hi Maor,
 
 Oops you're right, my both 3.4 datacenters are using 3.3 level.
 I migrated recently.
 
 How safe or risky is it to increase this DC level ?
 

 regards,
 Maor

 On 06/09/2014 01:06 PM, Nicolas Ecarnot wrote:
 Hi,

 Context here :
 - 2 setups (2 datacenters) in oVirt 3.4.1 with CentOS 6.4 and 6.5 hosts
 - connected to some LUNs in iSCSI on a dedicated physical network

 Every host has two interfaces used for management and end-user LAN
 activity. Every host also have 4 additional NICs dedicated to the iSCSI
 network.

 Those 4 NICs were setup from the oVirt web GUI in a bonding with a
 unique IP address and connected to the SAN.

 Everything is working fine. I just had to manually tweak some points
 (MTU, other small things) but it is working.


 Recently, our SAN dealer told us that using bonding in an iSCSI context
 was terrible, and the recommendation is to use multipathing.
 My previous experience pre-oVirt was to agree with that. Long story
 short is just that when setting up the host from oVirt, it was so
 convenient to click and setup bonding, and observe it working that I did
 not pay further attention. (and we seem to have no bottleneck yet).

 Anyway, I dedicated a host to experiment, I things are not clear to me.
 I know how to setup NICs, iSCSI and multipath to present the host OS a
 partition or a logical volume, using multipathing instead of bonding.

 But in this precise case, what is disturbing me is that many layers
 described above are managed by oVirt (mount/unmount of LV, creation of
 bridges on top of bonded interfaces, managing the WWID amongst the
 cluster).

 And I see nothing related to multipath at the NICs level.
 Though I can setup everything fine in the host, this setup does not
 match what oVirt is expecting : oVirt is expecting a bridge named as the
 iSCSI network, and able to connect to the SAN.
 My multipathing is offering the access to the partition of the LUNs, it
 is not the same.

 I saw that multipathing is talked here :
 http://www.ovirt.org/Feature/iSCSI-Multipath

 I here read :
 Add an iSCSI Storage to the Data Center
 Make sure the Data Center contains networks.
 Go to the Data Center main tab and choose the specific Data Center
 At the sub tab choose iSCSI Bond

 The only tabs I see are Storage/Logical Networks/Network
 QoS/Clusters/Permissions.

 In this datacenter, I have one iSCSI master storage domain, two iSCSI
 storage domains and one NFS export domain.

 What did I miss?

 Press the new button to add a new iSCSI Bond
 Configure the networks you want to add to the new iSCSI Bond.

 Anyway, I'm not sure to understand the point of this wiki page and this
 implementation : it looks like a much higher level of multipathing over
 virtual networks, and not at all what I'm talking about above...?

 Well as you see, I need enlightenments.

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI and multipath

2014-06-09 Thread Nicolas Ecarnot

On 09-06-2014 14:44, Maor Lipchuk wrote:

basically, you should upgrade your DC to 3.4, and then upgrade the
clusters you desire also to 3.4.


Well, that seems to have worked, except I had to raise the cluster level 
first, then the DC level.


Now, I can see the iSCSI multipath tab has appeared.
But I confirm what I wrote below:


I saw that multipathing is discussed here:
http://www.ovirt.org/Feature/iSCSI-Multipath


Add an iSCSI Storage to the Data Center
Make sure the Data Center contains networks.
Go to the Data Center main tab and choose the specific Data 
Center

At the sub tab choose iSCSI Bond
Press the new button to add a new iSCSI Bond
Configure the networks you want to add to the new iSCSI Bond.


Anyway, I'm not sure to understand the point of this wiki page and 
this
implementation : it looks like a much higher level of multipathing 
over

virtual networks, and not at all what I'm talking about above...?


What I am actually trying to find out is whether bonding interfaces (at a 
low level) for the iSCSI network is a bad thing, as my storage provider 
told me.


--
Nicolas Ecarnot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI and multipath

2014-06-09 Thread John Taylor
On Mon, Jun 9, 2014 at 9:23 AM, Nicolas Ecarnot nico...@ecarnot.net wrote:
 On 09-06-2014 14:44, Maor Lipchuk wrote:

 basically, you should upgrade your DC to 3.4, and then upgrade the
 clusters you desire also to 3.4.


 Well, that seems to have worked, except I had to raise the cluster level
 first, then the DC level.

 Now, I can see the iSCSI multipath tab has appeared.
 But I confirm what I wrote below :

 I saw that multipathing is talked here :
 http://www.ovirt.org/Feature/iSCSI-Multipath

 Add an iSCSI Storage to the Data Center
 Make sure the Data Center contains networks.
 Go to the Data Center main tab and choose the specific Data Center
 At the sub tab choose iSCSI Bond
 Press the new button to add a new iSCSI Bond
 Configure the networks you want to add to the new iSCSI Bond.


 Anyway, I'm not sure to understand the point of this wiki page and this
 implementation : it looks like a much higher level of multipathing over
 virtual networks, and not at all what I'm talking about above...?


 I am actually trying to know whether bonding interfaces (at low level) for
 the iSCSI network is a bad thing, as was told by my storage provider?

 --
 Nicolas Ecarnot


Hi Nicolas,
I think naming the managed iSCSI multipathing feature a "bond" might be
a bit confusing. It's not an Ethernet/NIC bond, but a way to group
networks and targets together, so it's not bonding interfaces.
Behind the scenes it creates iSCSI ifaces (/var/lib/iscsi/ifaces) and
changes the way the iscsiadm calls are constructed so that they use
those ifaces (instead of the default) to connect and log in to the
targets.
Hope that helps.

-John
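
As a rough illustration of the iscsiadm calls this boils down to, a minimal
sketch follows; the iface name, NIC name, portal address and target IQN are
placeholders, and the exact options VDSM generates may differ:

    # create an iSCSI iface bound to a specific NIC (stored under /var/lib/iscsi/ifaces)
    iscsiadm -m iface -I iface_em1 --op=new
    iscsiadm -m iface -I iface_em1 --op=update -n iface.net_ifacename -v em1

    # discover and log in to the target through that iface instead of the default one
    iscsiadm -m discovery -t sendtargets -p 192.168.100.10:3260 -I iface_em1
    iscsiadm -m node -T iqn.2014-06.example.com:lun1 -p 192.168.100.10:3260 -I iface_em1 --login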
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users