[ovirt-users] Re: Hosts are non responsive Ovirt 3.6

2021-01-06 Thread Gary Lloyd
I ended up cheating in the end.
I copied vdsmcert.pem and vdsmkey.pem from the host that is still working to 
the others and did a chown on the files.
All the hosts are back online without downtime. I must have rebuilt that server 
a couple of years after the other ones.
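
For anyone hitting the same expiry, the copy boils down to something like the
sketch below. The certificate/key paths and the vdsm:kvm ownership are
assumptions based on a typical vdsm install, so check where the files actually
live on your own hosts before copying anything:

# run from the host whose certificate is still valid; HOSTX is a placeholder
scp /etc/pki/vdsm/certs/vdsmcert.pem root@HOSTX:/etc/pki/vdsm/certs/vdsmcert.pem
scp /etc/pki/vdsm/keys/vdsmkey.pem root@HOSTX:/etc/pki/vdsm/keys/vdsmkey.pem
ssh root@HOSTX 'chown vdsm:kvm /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/vdsm/keys/vdsmkey.pem'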

Thanks

Gary Lloyd 
IT Infrastructure Manager 
Information and Digital Services 
01782 733063 
Innovation Centre 2 | Keele University | ST5 5NH

-Original Message-
From: Strahil Nikolov  
Sent: 06 January 2021 12:11
To: Gary Lloyd 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Hosts are non responsive Ovirt 3.6

Have you tried to put a host into maintenance, remove and then readd it ?

You can access all Red Hat solutions with their free developer subscription.



Best Regards,
Strahil Nikolov






On Wednesday, 6 January 2021 at 13:17:42 GMT+2, Gary Lloyd wrote:

Hi, please could someone point me in the right direction to renew the SSL
certificates that vdsm uses to communicate with Ovirt 3.6?

I'm aware that this version hasn't been supported for some time; this is a
legacy environment which we are working towards decommissioning.

There seems to be a fix article for RHEV but we don’t have a subscription to 
view this information:

How to update expired RHEV certificates when all RHEV hosts got
'Non-responsive' - Red Hat Customer Portal

These are what the vdsm hosts are showing:

Reactor thread::ERROR::2021-01-06 
11:04:59,505::m2cutils::337::ProtocolDetector.SSLHandshakeDispatcher::(handle_read)
 Error during handshake: sslv3 alert certificate expired
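
A quick way to confirm which hosts are affected is to check the end date on the
vdsm certificate itself; the path below is the usual vdsm location but is an
assumption, so adjust it if your install differs:

openssl x509 -noout -enddate -in /etc/pki/vdsm/certs/vdsmcert.pem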

I have rerun engine-setup but this only seems to have fixed one of the vdsm 
hosts and the others are non responsive.

The others are in different clusters and we have some important services still
running on these.

Thanks

Gary Lloyd
IT Infrastructure Manager
Information and Digital Services
01782 733063
Innovation Centre 2 | Keele University | ST5 5NH

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EYI5L6Q3QJICSUNVXHDVTMRCLMIH5II3/


[ovirt-users] Hosts are non responsive Ovirt 3.6

2021-01-06 Thread Gary Lloyd
Hi, please could someone point me in the right direction to renew the SSL
certificates that vdsm uses to communicate with Ovirt 3.6?
I'm aware that this version hasn't been supported for some time; this is a
legacy environment which we are working towards decommissioning.

There seems to be a fix article for RHEV but we don't have a subscription to 
view this information:
How to update expired RHEV certificates when all RHEV hosts got 
'Non-responsive' - Red Hat Customer 
Portal<https://access.redhat.com/solutions/3028811>

These are what the vdsm hosts are showing:
Reactor thread::ERROR::2021-01-06 
11:04:59,505::m2cutils::337::ProtocolDetector.SSLHandshakeDispatcher::(handle_read)
 Error during handshake: sslv3 alert certificate expired

I have rerun engine-setup but this only seems to have fixed one of the vdsm 
hosts and the others are non responsive.
The others are in different clusters and we have some important services still 
running on these.

Thanks

Gary Lloyd
IT Infrastructure Manager
Information and Digital Services
01782 733063
Innovation Centre 2 | Keele University | ST5 5NH


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HT6Z3WHPFTBIVKQYQZBZEXHSO24LPKVS/


[ovirt-users] Re: VMs paused due to IO issues - Dell Equallogic controller failover

2019-05-15 Thread Gary Lloyd
I asked on the Dell Storage Forum and they recommend the following:

*I recommend not using a numeric value for the "no_path_retry" variable
within /etc/multipath.conf as once that numeric value is reached, if no
healthy LUNs were discovered during that defined time multipath will
disable the I/O queue altogether.*

*I do recommend, however, changing the variable value from "12" (or even
"60") to "queue" which will then allow multipathd to continue queuing I/O
until a healthy LUN is discovered (time of fail-over between controllers)
and I/O is allowed to flow once again.*

Can you see any issues with this recommendation as far as Ovirt is
concerned ?

Thanks again

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063


On 4 October 2016 at 19:11, Nir Soffer  wrote:

> On Tue, Oct 4, 2016 at 10:51 AM, Gary Lloyd  wrote:
>
>> Hi
>>
>> We have Ovirt 3.65 with a Dell Equallogic SAN and we use Direct Luns for
>> all our VMs.
>> At the weekend during early hours an Equallogic controller failed over to
>> its standby on one of our arrays and this caused about 20 of our VMs to be
>> paused due to IO problems.
>>
>> I have also noticed that this happens during Equallogic firmware upgrades
>> since we moved onto Ovirt 3.65.
>>
>> As recommended by Dell disk timeouts within the VMs are set to 60 seconds
>> when they are hosted on an EqualLogic SAN.
>>
>> Is there any other timeout value that we can configure in vdsm.conf to
>> stop VMs from getting paused when a controller fails over ?
>>
>
> You can set the timeout in multipath.conf.
>
> With current multipath configuration (deployed by vdsm), when all paths to
> a device
> are lost (e.g. you take down all ports on the server during upgrade), all
> io will fail
> immediately.
>
> If you want to allow 60 seconds gracetime in such case, you can configure:
>
> no_path_retry 12
>
> This will continue to monitor the paths 12 times, each 5 seconds
> (assuming polling_interval=5). If some path recover during this time, the
> io
> can complete and the vm will not be paused.
>
> If no path is available after these retries, io will fail and vms with
> pending io
> will pause.
>
> Note that this will also cause delays in vdsm in various flows, increasing
> the chance
> of timeouts in engine side, or delays in storage domain monitoring.
>
> However, the 60 seconds delay is expected only on the first time all paths
> become
> faulty. Once the timeout has expired, any access to the device will fail
> immediately.
>
> To configure this, you must add the # VDSM PRIVATE tag at the second line
> of
> multipath.conf, otherwise vdsm will override your configuration in the
> next time
> you run vdsm-tool configure.
>
> multipath.conf should look like this:
>
> # VDSM REVISION 1.3
> # VDSM PRIVATE
>
> defaults {
> polling_interval    5
> no_path_retry       12
> user_friendly_names no
> flush_on_last_del   yes
> fast_io_fail_tmo    5
> dev_loss_tmo        30
> max_fds             4096
> }
>
> devices {
> device {
> all_devs        yes
> no_path_retry   12
> }
> }
>
> This will use 12 retries (60 seconds) timeout for any device. If you like
> to
> configure only your specific device, you can add a device section for
> your specific server instead.
>
>
>>
>> Also is there anything that we can tweak to automatically unpause the VMs
>> once connectivity with the arrays is re-established ?
>>
>
> Vdsm will resume the vms when storage monitor detect that storage became
> available again.
> However we cannot guarantee that storage monitoring will detect that
> storage was down.
> This should be improved in 4.0.
>
>
>> At the moment we are running a customized version of storageServer.py, as
>> Ovirt has yet to include iscsi multipath support for Direct Luns out of the
>> box.
>>
>
> Would you like to share this code?
>
> Nir
>


[ovirt-users] Re: VMs paused due to IO issues - Dell Equallogic controller failover

2019-05-15 Thread Gary Lloyd
From the sounds of it the best we can do then is to use a 60 second timeout
on paths in multipathd.
The main reason we use Direct Lun is because we replicate /snapshot VMs
associated Luns at SAN level as a means of disaster recovery.

I have read a bit of documentation of how to backup virtual machines in
storage domains, but the process of mounting snapshots for all our machines
within a dedicated VM doesn't seem as efficient when we have almost 300
virtual machines and only 1Gb networking.

Thanks for the advice.

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063


On 6 October 2016 at 11:07, Nir Soffer  wrote:

> On Thu, Oct 6, 2016 at 10:19 AM, Gary Lloyd  wrote:
>
>> I asked on the Dell Storage Forum and they recommend the following:
>>
>> *I recommend not using a numeric value for the "no_path_retry" variable
>> within /etc/multipath.conf as once that numeric value is reached, if no
>> healthy LUNs were discovered during that defined time multipath will
>> disable the I/O queue altogether.*
>>
>> *I do recommend, however, changing the variable value from "12" (or even
>> "60") to "queue" which will then allow multipathd to continue queuing I/O
>> until a healthy LUN is discovered (time of fail-over between controllers)
>> and I/O is allowed to flow once again.*
>>
>> Can you see any issues with this recommendation as far as Ovirt is
>> concerned ?
>>
> Yes, we cannot work with unlimited queue. This will block vdsm for
> unlimited
> time when the next command try to access storage. Because we don't have
> good isolation between different storage domains, this may cause other
> storage
> domains to become faulty. Also engine flows that have a timeout will fail
> with
> a timeout.
>
> If you are on 3.x, this will be very painful, on 4.0 it should be better,
> but it is not
> recommended.
>
> Nir
>
>



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SGEGODHSGVRCQ3A6KZI5BGB3HVT5JEER/


[ovirt-users] VMs paused due to IO issues - Dell Equallogic controller failover

2019-05-14 Thread Gary Lloyd
Hi

We have Ovirt 3.65 with a Dell Equallogic SAN and we use Direct Luns for
all our VMs.
At the weekend during early hours an Equallogic controller failed over to
its standby on one of our arrays and this caused about 20 of our VMs to be
paused due to IO problems.

I have also noticed that this happens during Equallogic firmware upgrades
since we moved onto Ovirt 3.65.

As recommended by Dell disk timeouts within the VMs are set to 60 seconds
when they are hosted on an EqualLogic SAN.

Is there any other timeout value that we can configure in vdsm.conf to stop
VMs from getting paused when a controller fails over ?

Also is there anything that we can tweak to automatically unpause the VMs
once connectivity with the arrays is re-established ?

At the moment we are running a customized version of storageServer.py, as
Ovirt has yet to include iscsi multipath support for Direct Luns out of the
box.

Many Thanks


*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AIQDZRSVWWMK5IAN7K3S5MX4JAHDPKJU/


Re: [ovirt-users] Ovirt 3.6 to 4.2 upgrade

2018-02-14 Thread Gary Lloyd
Hi Yaniv

We attempted to share the code a few years back, but I don't think it got
accepted.

In vdsm.conf we have two bridged interfaces, each connected to a SAN uplink:

[irs]
iscsi_default_ifaces = san1,san2
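
When this works, each target should end up with one session per iface. A quick
sanity check with stock iscsiadm (just a suggestion, not part of the change
itself) is:

iscsiadm -m session -P 1 | grep -E 'Iface Name|Target'

which should list every target once for san1 and once for san2.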

And here is a diff of the file
/usr/lib/python2.7/site-packages/vdsm/storage/ vs the original for
vdsm-4.20.17-1
:

463,498c463,464
<
< # Original Code ##
<
< #iscsi.addIscsiNode(self._iface, self._target, self._cred)
< #timeout = config.getint("irs", "udev_settle_timeout")
< #udevadm.settle(timeout)
<
< ### Altered Code for EqualLogic Direct LUNs for Keele University : G.Lloyd ###
<
< ifaceNames = config.get('irs', 'iscsi_default_ifaces').split(',')
< if not ifaceNames:
<     iscsi.addIscsiNode(self._iface, self._target, self._cred)
< else:
<     self.log.debug("Connecting on interfaces: {}".format(ifaceNames))
<     #for ifaceName in ifaceNames:
<     success = False
<     while ifaceNames:
<         self.log.debug("Remaining interfaces to try: {}".format(ifaceNames))
<         ifaceName = ifaceNames.pop()
<         try:
<             self.log.debug("Connecting on {}".format(ifaceName))
<             iscsi.addIscsiNode(iscsi.IscsiInterface(ifaceName), self._target, self._cred)
<             self.log.debug("Success connecting on {}".format(ifaceName))
<             success = True
<         except:
<             self.log.debug("Failure connecting on interface {}".format(ifaceName))
<             if ifaceNames:
<                 self.log.debug("More iscsi interfaces to try, continuing")
<                 pass
<             elif success:
<                 self.log.debug("Already succeeded on an interface, continuing")
<                 pass
<             else:
<                 self.log.debug("Could not connect to iscsi target on any interface, raising exception")
<                 raise
< timeout = config.getint("irs", "scsi_settle_timeout")
---
> iscsi.addIscsiNode(self._iface, self._target, self._cred)
> timeout = config.getint("irs", "udev_settle_timeout")
501,502d466
< ### End of Custom Alterations ###
<

Regards

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063


On 11 February 2018 at 08:38, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Fri, Feb 9, 2018 at 4:06 PM, Gary Lloyd <g.ll...@keele.ac.uk> wrote:
>
>> Hi
>>
>> Is it possible/supported to upgrade from Ovirt 3.6 straight to Ovirt 4.2 ?
>>
>
> No, you go through 4.0, 4.1.
>
>
>> Does live migration still function between the older vdsm nodes and vdsm
>> nodes with software built against Ovirt 4.2 ?
>>
>
> Yes, keep the cluster level at 3.6.
>
>
>>
>> We changed a couple of the vdsm python files to enable iscsi multipath on
>> direct luns.
>> (It's a fairly simple change to a couple of the python files).
>>
>
> Nice!
> Can you please contribute those patches to oVirt?
> Y.
>
>
>>
>> We've been running it this way since 2012 (Ovirt 3.2).
>>
>> Many Thanks
>>
>> *Gary Lloyd*
>> 
>> I.T. Systems:Keele University
>> Finance & IT Directorate
>> Keele:Staffs:IC1 Building:ST5 5NB:UK
>> +44 1782 733063
>> 
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt 3.6 to 4.2 upgrade

2018-02-09 Thread Gary Lloyd
Hi

Is it possible/supported to upgrade from Ovirt 3.6 straight to Ovirt 4.2 ?
Does live migration still function between the older vdsm nodes and vdsm
nodes with software built against Ovirt 4.2 ?

We changed a couple of the vdsm python files to enable iscsi multipath on
direct luns.
(It's a fairly simple change to a couple of the python files).

We've been running it this way since 2012 (Ovirt 3.2).

Many Thanks

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Anyone running Shared SAS hosted engine ?

2017-05-09 Thread Gary Lloyd
Hi

I was just wondering if anyone is running Ovirt using a shared SAS array
with the ability to live migrate between hosts ?
If so has anyone been able to get hosted engine working with it ?

Thanks

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disaster Recovery Testing

2017-02-15 Thread Gary Lloyd
Hi Nir thanks for the guidance

We started to use ovirt a good few years ago now (version 3.2).

At the time iscsi multipath wasn't supported, so we made our own
modifications to vdsm and this worked well with direct lun.
We decided to go with direct lun in case things didn't work out with OVirt
and in that case we would go back to using vanilla kvm / virt-manager.

At the time I don't believe that you could import iscsi data domains that
had already been configured into a different installation, so we replicated
each raw VM volume using the SAN to another server room for DR purposes.
We use Dell Equallogic and there is a documented limitation of 1024 iscsi
connections and 256 volume replications. This isn't a problem at the
moment, but the more VMs that we have the more conscious I am about us
reaching those limits (we have around 300 VMs at the moment and we have a
vdsm hook that closes off iscsi connections if a vm is migrated /powered
off).

Moving to storage domains keeps the number of iscsi connections /
replicated volumes down and we won't need to make custom changes to vdsm
when we upgrade.
We can then use the SAN to replicate the storage domains to another data
centre and bring that online with a different install of OVirt (we will
have to use these arrays for at least the next 3 years).

I didn't realise that each storage domain contained the configuration
details/metadata for the VMs.
This to me is an extra win as we can recover VMs faster than we can now if
we have to move them to a different data centre in the event of a disaster.


Are there any maximum size / vm limits or recommendations for each storage
domain ?
Does Ovirt support moving VMs between different storage domain types, e.g.
ISCSI to gluster ?


Many Thanks

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063


On 15 February 2017 at 18:56, Nir Soffer <nsof...@redhat.com> wrote:

> On Wed, Feb 15, 2017 at 2:32 PM, Gary Lloyd <g.ll...@keele.ac.uk> wrote:
> > Hi
> >
> > We currently use direct lun for our virtual machines and I would like to
> > move away from doing this and move onto storage domains.
> >
> > At the moment we are using an ISCSI SAN and we use on replicas created on
> > the SAN for disaster recovery.
> >
> > As a test I thought I would replicate an existing storage domain's volume
> > (via the SAN) and try to mount again as a separate storage domain (This
> is
> > with ovirt 4.06 (cluster mode 3.6))
>
> Why do want to replicate a storage domain and connect to it?
>
> > I can log into the iscsi disk but then nothing gets listed under Storage
> > Name / Storage ID (VG Name)
> >
> >
> > Should this be possible or will it not work due the the uids being
> identical
> > ?
>
> Connecting 2 storage domains with same uid will not work. You can use
> either
> the old or the new, but not both at the same time.
>
> Can you explain how replicating the storage domain volume is related to
> moving from direct luns to storage domains?
>
> If you want to move from direct lun to storage domain, you need to create
> a new disk on the storage domain, and copy the direct lun data to the new
> disk.
>
> We don't support this yet, but you can copy manually like this:
>
> 1. Find the lv of the new disk
>
> lvs -o name --select "{IU_} = lv_tags" vg-name
>
> 2. Activate the lv
>
> lvchange -ay vg-name/lv-name
>
> 3. Copy the data from the lun
>
> qemu-img convert -p -f raw -O raw -t none -T none
> /dev/mapper/xxxyyy /dev/vg-name/lv-name
>
> 4. Deactivate the disk
>
> lvchange -an vg-name/lv-name
>
> Nir
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Disaster Recovery Testing

2017-02-15 Thread Gary Lloyd
Hi

We currently use direct lun for our virtual machines and I would like to
move away from doing this and move onto storage domains.

At the moment we are using an ISCSI SAN and we use on replicas created on
the SAN for disaster recovery.

As a test I thought I would replicate an existing storage domain's volume
(via the SAN) and try to mount again as a separate storage domain (This is
with ovirt 4.06 (cluster mode 3.6))

I can log into the iscsi disk but then nothing gets listed under Storage
Name / Storage ID (VG Name)


Should this be possible or will it not work due to the uids being
identical ?


Many Thanks

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VMs paused due to IO issues - Dell Equallogic controller failover

2016-10-07 Thread Gary Lloyd
From the sounds of it the best we can do then is to use a 60 second timeout
on paths in multipathd.
The main reason we use Direct Lun is because we replicate /snapshot VMs
associated Luns at SAN level as a means of disaster recovery.

I have read a bit of documentation of how to backup virtual machines in
storage domains, but the process of mounting snapshots for all our machines
within a dedicated VM doesn't seem as efficient when we have almost 300
virtual machines and only 1Gb networking.

Thanks for the advice.

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063


On 6 October 2016 at 11:07, Nir Soffer <nsof...@redhat.com> wrote:

> On Thu, Oct 6, 2016 at 10:19 AM, Gary Lloyd <g.ll...@keele.ac.uk> wrote:
>
>> I asked on the Dell Storage Forum and they recommend the following:
>>
>> *I recommend not using a numeric value for the "no_path_retry" variable
>> within /etc/multipath.conf as once that numeric value is reached, if no
>> healthy LUNs were discovered during that defined time multipath will
>> disable the I/O queue altogether.*
>>
>> *I do recommend, however, changing the variable value from "12" (or even
>> "60") to "queue" which will then allow multipathd to continue queuing I/O
>> until a healthy LUN is discovered (time of fail-over between controllers)
>> and I/O is allowed to flow once again.*
>>
>> Can you see any issues with this recommendation as far as Ovirt is
>> concerned ?
>>
> Yes, we cannot work with unlimited queue. This will block vdsm for
> unlimited
> time when the next command try to access storage. Because we don't have
> good isolation between different storage domains, this may cause other
> storage
> domains to become faulty. Also engine flows that have a timeout will fail
> with
> a timeout.
>
> If you are on 3.x, this will be very painful, on 4.0 it should be better,
> but it is not
> recommended.
>
> Nir
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VMs paused due to IO issues - Dell Equallogic controller failover

2016-10-06 Thread Gary Lloyd
I asked on the Dell Storage Forum and they recommend the following:

*I recommend not using a numeric value for the "no_path_retry" variable
within /etc/multipath.conf as once that numeric value is reached, if no
healthy LUNs were discovered during that defined time multipath will
disable the I/O queue altogether.*

*I do recommend, however, changing the variable value from "12" (or even
"60") to "queue" which will then allow multipathd to continue queuing I/O
until a healthy LUN is discovered (time of fail-over between controllers)
and I/O is allowed to flow once again.*

Can you see any issues with this recommendation as far as Ovirt is
concerned ?

Thanks again

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063


On 4 October 2016 at 19:11, Nir Soffer <nsof...@redhat.com> wrote:

> On Tue, Oct 4, 2016 at 10:51 AM, Gary Lloyd <g.ll...@keele.ac.uk> wrote:
>
>> Hi
>>
>> We have Ovirt 3.65 with a Dell Equallogic SAN and we use Direct Luns for
>> all our VMs.
>> At the weekend during early hours an Equallogic controller failed over to
>> its standby on one of our arrays and this caused about 20 of our VMs to be
>> paused due to IO problems.
>>
>> I have also noticed that this happens during Equallogic firmware upgrades
>> since we moved onto Ovirt 3.65.
>>
>> As recommended by Dell disk timeouts within the VMs are set to 60 seconds
>> when they are hosted on an EqualLogic SAN.
>>
>> Is there any other timeout value that we can configure in vdsm.conf to
>> stop VMs from getting paused when a controller fails over ?
>>
>
> You can set the timeout in multipath.conf.
>
> With current multipath configuration (deployed by vdsm), when all paths to
> a device
> are lost (e.g. you take down all ports on the server during upgrade), all
> io will fail
> immediately.
>
> If you want to allow 60 seconds gracetime in such case, you can configure:
>
> no_path_retry 12
>
> This will continue to monitor the paths 12 times, each 5 seconds
> (assuming polling_interval=5). If some path recover during this time, the
> io
> can complete and the vm will not be paused.
>
> If no path is available after these retries, io will fail and vms with
> pending io
> will pause.
>
> Note that this will also cause delays in vdsm in various flows, increasing
> the chance
> of timeouts in engine side, or delays in storage domain monitoring.
>
> However, the 60 seconds delay is expected only on the first time all paths
> become
> faulty. Once the timeout has expired, any access to the device will fail
> immediately.
>
> To configure this, you must add the # VDSM PRIVATE tag at the second line
> of
> multipath.conf, otherwise vdsm will override your configuration in the
> next time
> you run vdsm-tool configure.
>
> multipath.conf should look like this:
>
> # VDSM REVISION 1.3
> # VDSM PRIVATE
>
> defaults {
> polling_interval    5
> no_path_retry       12
> user_friendly_names no
> flush_on_last_del   yes
> fast_io_fail_tmo    5
> dev_loss_tmo        30
> max_fds             4096
> }
>
> devices {
> device {
> all_devsyes
> no_path_retry   12
> }
> }
>
> This will use 12 retries (60 seconds) timeout for any device. If you like
> to
> configure only your specific device, you can add a device section for
> your specific server instead.
>
>
>>
>> Also is there anything that we can tweak to automatically unpause the VMs
>> once connectivity with the arrays is re-established ?
>>
>
> Vdsm will resume the vms when storage monitor detect that storage became
> available again.
> However we cannot guarantee that storage monitoring will detect that
> storage was down.
> This should be improved in 4.0.
>
>
>> At the moment we are running a customized version of storageServer.py, as
>> Ovirt has yet to include iscsi multipath support for Direct Luns out of the
>> box.
>>
>
> Would you like to share this code?
>
> Nir
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VMs paused due to IO issues - Dell Equallogic controller failover

2016-10-04 Thread Gary Lloyd
Hi

We have Ovirt 3.65 with a Dell Equallogic SAN and we use Direct Luns for
all our VMs.
At the weekend during early hours an Equallogic controller failed over to
its standby on one of our arrays and this caused about 20 of our VMs to be
paused due to IO problems.

I have also noticed that this happens during Equallogic firmware upgrades
since we moved onto Ovirt 3.65.

As recommended by Dell disk timeouts within the VMs are set to 60 seconds
when they are hosted on an EqualLogic SAN.

Is there any other timeout value that we can configure in vdsm.conf to stop
VMs from getting paused when a controller fails over ?

Also is there anything that we can tweak to automatically unpause the VMs
once connectivity with the arrays is re-established ?

At the moment we are running a customized version of storageServer.py, as
Ovirt has yet to include iscsi multipath support for Direct Luns out of the
box.

Many Thanks


*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] changing the mtu on bridge devices

2015-11-16 Thread Gary Lloyd
Hi Ido

I updated the database and all seemed to be OK.
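
For anyone else needing to do this, the change amounted to something along the
lines of the sketch below, following Ido's steps quoted underneath. The network
name 'iscsi1', the device names and the 'engine' database name are placeholders
/ assumptions, so adapt them to your setup and take a database backup first:

# bump the MTU on every device in the chain (bridge, vlan device, bond, physical NIC)
ip link set dev iscsi1 mtu 9000
ip link set dev eth2 mtu 9000
# mirror the change in the vdsm persistence files under /var/run/vdsm/netconf/
# and /var/lib/vdsm/persistence/netconf/ as Ido describes, then record the new
# value in the engine database:
su - postgres -c "psql engine -c \"UPDATE network SET mtu = 9000 WHERE name = 'iscsi1';\""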

Cheers

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063


On 15 November 2015 at 13:08, Ido Barkan <ibar...@redhat.com> wrote:

> Hi Gary.
> First, In oVirt 3.6 we will enable editing a few network attributes
> while there vms attached to it.
> The reason this is forbidden is that until v3.6, by default, vdsm will
> recreate the network for
> every network edition and this can seriously disturb the network for
> the running vms.
> Second, if you want to tweak the MTU you should use 'ip link set dev
>  mtu 9000'
> for each device in the network (Bridge, Vlan device, Bond, physical
> NIC). Then, you should edit
> vdsm persistence json files under both /var/run/vdsm/netconf//
> and /var/lib/vdsm/persistence/netconf//. You should find a
> file for each network/bond
> under those paths.
> On the engine DB you should find the proper row in the 'network' table
> and update the 'mtu' attribute.
> Good luck,
> Ido
>
> On Thu, Nov 12, 2015 at 4:45 PM, Gary Lloyd <g.ll...@keele.ac.uk> wrote:
> > Hi we have upgraded to ovirt 3.5.5 and it seems to have gone well.
> However
> > when we initially configured one of our clusters (back in the day of
> ovirt
> > 3.2) we forgot to adjust two of our vm guest bridges that we use for
> iscsi
> > traffic to an mtu of 9000.
> >
> > We have around 50 or so vms using these bridges and we cant change the
> mtu
> > on the data center screen due to the vms being attached to the bridges
> (This
> > message comes up trying to change it).
> >
> > We were able to easily override the settings on the hosts in prior
> versions
> > of ovirt (but I don't remember where we did it). Our current 3.4 cluster
> is
> > fine, but I am now setting up a 3.5 cluster with new servers.
> > I don't see how changing it to 9000 will cause us any problems due to the
> > fact that our older clusters are running those bridges at 9000 anyway.
> >
> > Also has anyone experienced the new ui not being as responsive when
> showing
> > the state of vm migrations (doesn't seem to refresh very well on the
> hosts
> > screen).
> >
> > Does anyone know if there a way I can force the value to mtu 9000 in the
> > database somewhere without blowing it up ?
> >
> > Cheers
> >
> > Gary Lloyd
> > 
> > I.T. Systems:Keele University
> > Finance & IT Directorate
> > Keele:Staffs:IC1 Building:ST5 5NB:UK
> > +44 1782 733063
> > 
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
>
>
> --
> Thanks,
> Ido Barkan
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] changing the mtu on bridge devices

2015-11-12 Thread Gary Lloyd
Hi we have upgraded to ovirt 3.5.5 and it seems to have gone well. However
when we initially configured one of our clusters (back in the day of ovirt
3.2) we forgot to adjust two of our vm guest bridges that we use for iscsi
traffic to an mtu of 9000.

We have around 50 or so vms using these bridges and we can't change the mtu
on the data center screen due to the vms being attached to the bridges
(this message comes up when trying to change it).

We were able to easily override the settings on the hosts in prior versions
of ovirt (but I don't remember where we did it). Our current 3.4 cluster is
fine, but I am now setting up a 3.5 cluster with new servers.
I don't see how changing it to 9000 will cause us any problems due to the
fact that our older clusters are running those bridges at 9000 anyway.

Also has anyone experienced the new ui not being as responsive when showing
the state of vm migrations (doesn't seem to refresh very well on the hosts
screen).

Does anyone know if there is a way I can force the value to mtu 9000 in the
database somewhere without blowing it up ?

Cheers

*Gary Lloyd*

I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VM Affinity groups ovirt 3.4.4

2015-01-08 Thread Gary Lloyd
Hi we have recently updated our production environment to ovirt 3.4.4 .

I have created a positive enforcing VM Affinity Group with 2 vms in one of
our clusters, but they don't seem to be moving (currently on different
hosts). Is there something else I need to activate ?

Thanks

*Gary Lloyd*
--
IT Services
Keele University
---
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Spice on Ovirt 3.4.4 Windows 2012 Guests

2014-10-16 Thread Gary Lloyd
Thanks David that appears to work.

I did a few more tests and it seems that without your fix Ovirt tries to
connect via RDP to Windows 2012 machines that had the spice console set
prior to the 3.4.4 update.

One other thing I have noticed on our test install is that a powered off
machine could not be moved between different clusters in the same data
centre. The list doesn't expand to include the other clusters on the vm
properties for some reason.

This doesn't affect us at the moment though.

*Gary Lloyd*
--
IT Services
Keele University
---

On 14 October 2014 12:00, David Jaša dj...@redhat.com wrote:

 Hi Gary,

 I can't tell you what happens on upgrade but in order to be able to
 choose Spice consoles for win8/win2012 guests, drop this file
 to /etc/ovirt-engine/osinfo.conf.d:


 cat > /etc/ovirt-engine/osinfo.conf.d/99-spice_for_win8_and_win2012.properties << EOF
 os.windows_8.devices.display.protocols.value = qxl/qxl,vnc/cirrus
 os.windows_8x64.devices.display.protocols.value = qxl/qxl,vnc/cirrus
 os.windows_2012x64.devices.display.protocols.value = qxl/qxl,vnc/cirrus
 EOF

 and restart the engine.

 You won't be able to use qxl driver (and by extension multi-monitor and
 continuous resolution change) but the features provided by spice-vdagent
 should continue to work for you.

 Regards,

 David

 On Út, 2014-10-14 at 11:07 +0100, Gary Lloyd wrote:
  Hi
 
 
  We are currently running Ovirt 3.3.5 in production, currently our
  Windows Guests are using spice for the console. I have noticed that in
  our test environment running 3.4.4 that this option cannot be selected
  for Windows 2012 guests.
 
 
  Does anyone know what will happen to the console on VMs running
  windows 2012 if we upgrade to 3.4.4 on our production environment ?
 
 
  Will it simply not work resulting in us having to power down the
  affected VMs or will the existing VMs carry on working with the spice
  console setting ?
 
 
  Thanks
 
 
  Gary Lloyd
  --
  IT Services
  Keele University
  ---
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Spice on Ovirt 3.4.4 Windows 2012 Guests

2014-10-14 Thread Gary Lloyd
Hi

We are currently running Ovirt 3.3.5 in production, currently our Windows
Guests are using spice for the console. I have noticed that in our test
environment running 3.4.4 that this option cannot be selected for Windows
2012 guests.

Does anyone know what will happen to the console on VMs running windows
2012 if we upgrade to 3.4.4 on our production environment ?

Will it simply not work resulting in us having to power down the affected
VMs or will the existing VMs carry on working with the spice console
setting ?

Thanks

*Gary Lloyd*
--
IT Services
Keele University
---
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [RFI] oVirt 3.6 Planning

2014-09-12 Thread Gary Lloyd
Proper iscsi multipath support/config for direct lun , not just storage
domains.

(I've already put an RFE request in for this as well).

*Gary Lloyd*
--
IT Services
Keele University
---

On 12 September 2014 13:22, Itamar Heim ih...@redhat.com wrote:

 With oVirt 3.5 nearing GA, time to ask for what do you want to see in
 oVirt 3.6?

 Thanks,
Itamar
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI and multipath

2014-07-07 Thread Gary Lloyd
Is there any chance of multipath working with direct LUN instead of just
storage domains ? I've asked/checked a couple of times, but not had much
luck.

Thanks

*Gary Lloyd*
--
IT Services
Keele University
---


On 9 June 2014 15:17, John Taylor jtt77...@gmail.com wrote:

 On Mon, Jun 9, 2014 at 9:23 AM, Nicolas Ecarnot nico...@ecarnot.net
 wrote:
  On 09-06-2014 14:44, Maor Lipchuk wrote:
 
  basically, you should upgrade your DC to 3.4, and then upgrade the
  clusters you desire also to 3.4.
 
 
  Well, that seems to have worked, except I had to raise the cluster level
  first, then the DC level.
 
  Now, I can see the iSCSI multipath tab has appeared.
  But I confirm what I wrote below :
 
  I saw that multipathing is talked here :
  http://www.ovirt.org/Feature/iSCSI-Multipath
 
  Add an iSCSI Storage to the Data Center
  Make sure the Data Center contains networks.
  Go to the Data Center main tab and choose the specific Data
 Center
  At the sub tab choose iSCSI Bond
  Press the new button to add a new iSCSI Bond
  Configure the networks you want to add to the new iSCSI Bond.
 
 
  Anyway, I'm not sure to understand the point of this wiki page and
 this
  implementation : it looks like a much higher level of multipathing
 over
  virtual networks, and not at all what I'm talking about above...?
 
 
  I am actually trying to know whether bonding interfaces (at low level)
 for
  the iSCSI network is a bad thing, as was told by my storage provider?
 
  --
  Nicolas Ecarnot


 Hi Nicolas,
 I think the naming of the managed iscsi multipathing feature a bond
 might be a bit confusing. It's not an ethernet/nic bond, but a way to
 group networks and targets together, so it's not bonding interfaces
 Behind the scenes what it does is creates iscsi
 ifaces(/var/lib/iscsi/ifaces) and changes the way the iscsiadm calls
 are constructed to use those ifaces (instead of the default) to
 connect and login to the targets
 Hope that helps.

 -John
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI and multipath

2014-07-07 Thread Gary Lloyd
Hi Sven

Thanks

My colleague has already tried to submit the code and I think it was Itamar
we spoke with sometime in Oct/Nov 13.
I think someone decided that the functionality should be driven from the
engine itself rather than having it set on the vdsm nodes.

I will see about putting in for a feature request though.

Cheers

*Gary Lloyd*
--
IT Services
Keele University
---


On 7 July 2014 09:45, Sven Kieske s.kie...@mittwald.de wrote:

 On 07.07.2014 10:17, Gary Lloyd wrote:
  Is there any chance of multipath working with direct LUN instead of just
  storage domains ? I've asked/checked a couple of times, but not had much
  luck.

 Hi,

 the best way to get features into ovirt is to create a Bug
 titled as RFE (request for enhancement) here:

 https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt

 if you have any custom vdsm code it would be cool to share
 it with the community, so you might even not be responsible
 for maintaining it in the future, but that's your decision
 to make.

 HTH



 --
 Mit freundlichen Grüßen / Regards

 Sven Kieske

 Systemadministrator
 Mittwald CM Service GmbH  Co. KG
 Königsberger Straße 6
 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ISCSI question

2014-06-02 Thread Gary Lloyd
Hi Michael

The command you mentioned is what I'm doing as we use direct lun:

iscsiadm -m node -u -T iqn.2001-05.com.equallogic:0-1cb196-9691c713e-748004c63d151d52-vm-test

You can craft a vdsm hook to do this to go in after_vm_destroy if you are
using direct lun as well to save you having to do this every time you
migrate / power down a vm.
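
A minimal sketch of such a hook is below. /usr/libexec/vdsm/hooks/after_vm_destroy/
is the usual vdsm hook directory, but the script name and the hard-coded target
are only illustrative; a real hook would work out the VM's target from the VM
definition rather than hard-coding it:

#!/bin/bash
# e.g. saved as /usr/libexec/vdsm/hooks/after_vm_destroy/50_iscsi_logout
# Log out of the per-VM EqualLogic target once the VM has been destroyed.
TARGET="iqn.2001-05.com.equallogic:0-1cb196-9691c713e-748004c63d151d52-vm-test"
iscsiadm -m node -u -T "$TARGET" || true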

*Gary Lloyd*
--
IT Services
Keele University
---


On 2 June 2014 12:49, Michael Wagenknecht wagenkne...@fuh-e.de wrote:

 Hi,
 what is the best way to logout a single ISCSI Path?
 I can't find a way in the GUI.
 Can I use the iscsiadm command (iscsiadm -u -T ... -p ...)?
 We use oVirt 3.3.2-1.el6.

 Best regards,
 Michael
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Fwd: [Users] Ovirt 3.4 EqualLogic multipath Bug 953343

2014-05-21 Thread Gary Lloyd
Hi

I was just wondering if ISCSI Multipathing is supported yet on Direct Lun ?
I have deployed 3.4.0.1 but i can only see the option for ISCSI
multipathing on storage domains.
We will be glad if it could be, as it saves us having to inject new code
into our vdsm nodes with each new version.

Thanks


*Gary Lloyd*
--
IT Services
Keele University
---

Guys,

Please, pay attention that this feature currently may not work. I resolved
several bugs related to this feature but some of my patches are still
waiting for merge.

Regards,

Sergey

- Original Message -
 From: Maor Lipchuk mlipc...@redhat.com
 To: Gary Lloyd g.ll...@keele.ac.uk, users@ovirt.org, Sergey Gotliv 
sgot...@redhat.com
 Sent: Thursday, March 27, 2014 6:50:10 PM
 Subject: Re: [Users] Ovirt 3.4 EqualLogic multipath Bug 953343

 IIRC it should also support direct luns as well.
 Sergey?

 regards,
 Maor

 On 03/27/2014 06:25 PM, Gary Lloyd wrote:
  Hi I have just had a look at this thanks. Whilst it seemed promising we
  are in a situation where we use Direct Lun for all our production VM's
  in order to take advantage of being able to individually replicate and
  restore vm volumes using the SAN tools. Is multipath supported for
  Direct Luns or only data domains ?
 
  Thanks
 
  /Gary Lloyd/
  --
  IT Services
  Keele University
  ---
 
 
  On 27 March 2014 16:02, Maor Lipchuk mlipc...@redhat.com wrote:
 
  Hi Gary,
 
  Please take a look at
  http://www.ovirt.org/Feature/iSCSI-Multipath#User_Experience
 
  Regards,
  Maor
 
  On 03/27/2014 05:59 PM, Gary Lloyd wrote:
   Hello
  
   I have just deployed Ovirt 3.4 on our test environment. Does
  anyone know
   how the ISCSI multipath issue is resolved ? At the moment it is
  behaving
   as before and only opening one session per lun ( we bodged vdsm
   python
   code in previous releases to get it to work).
  
   The Planning sheet shows that its fixed but I am not sure what to
do
   next:
 
 https://docs.google.com/spreadsheet/ccc?key=0AuAtmJW_VMCRdHJ6N1M3d1F1UTJTS1dSMnZwMF9XWVE&usp=drive_web#gid=0
  
  
   Thanks
  
   /Gary Lloyd/
   --
   IT Services
   Keele University
   ---
  
  
   ___
   Users mailing list
    Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
 
 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



-- 
Martin Goldstone
IT Systems Administrator - Finance & IT
Keele University, Keele, Staffordshire, United Kingdom, ST5 5BG
Telephone: +44 1782 734457
G+: http://google.com/+MartinGoldstoneKeele
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ovirt Python SDK adding a directlun

2014-05-08 Thread Gary Lloyd
We are working on a script so that we can create an ISCSI LUN on our SAN
and then directly assign it to a vm.

We have been able to get it to work but with one small annoyance. I can't
figure out how to populate size,serial,vendor_id and product_id via the
api. Would anyone be able to point me in the right direction ? code (see
def add_disk):

def get_clusterid(cluster_name):
    cluster = ovirt_api.clusters.get(cluster_name)
    try:
        return cluster.id
    except:
        logging.error('the cluster: %s does not appear to exist' %
                      cluster_name)
        sys.exit(1)

def nominate_host(cluster_id):
    for host in ovirt_api.hosts.list():
        if host.cluster.id == cluster_id and host.status.state == 'up':
            host.iscsidiscover
            return host
    logging.error('could not find a suitable host to nominate in cluster:')
    sys.exit(1)


def iscsi_discover_and_login(cluster, target, portal, chap_user, chap_pass):
    clusterid = get_clusterid(cluster)
    host = nominate_host(clusterid)

    iscsidet = params.IscsiDetails()
    iscsidet.address = portal
    iscsidet.username = chap_user
    iscsidet.password = chap_pass
    iscsidet.target = target

    host.iscsidiscover(params.Action(iscsi=iscsidet))
    result = host.iscsilogin(params.Action(iscsi=iscsidet))

    if result.status.state == 'complete':

        storecon = params.StorageConnection()
        storecon.address = portal
        storecon.type_ = 'iscsi'
        storecon.port = 3260
        storecon.target = target
        storecon.username = chap_user
        storecon.password = chap_pass

        ovirt_api.storageconnections.add(storecon)

    return result
    # error checking code needs to be added to this function

def add_disk(vm_name, wwid, target, size, portal):

    logunit = params.LogicalUnit()
    logunit.id = wwid
    logunit.vendor_id = 'EQLOGIC'
    logunit.product_id = '100E-00'
    logunit.port = 3260
    logunit.lun_mapping = 0
    logunit.address = portal
    logunit.target = target
    logunit.size = size * 1073741824

    stor = params.Storage(logical_unit=[logunit])
    stor.type_ = 'iscsi'

    disk = params.Disk()
    disk.alias = 'vm-' + vm_name
    disk.name = disk.alias
    disk.interface = 'virtio'
    disk.bootable = True
    disk.type_ = 'iscsi'
    disk.format = 'raw'
    disk.set_size(size * 1073741824)
    #disk.size=size * 1073741824
    #disk.active=True

    disk.lun_storage = stor

    try:
        result = ovirt_api.disks.add(disk)
    except:
        logging.error('Could not add disk')
        sys.exit(1)

    attachdisk = ovirt_api.disks.get(disk.alias)
    attachdisk.active = True

    try:
        ovirt_api.vms.get(vm_name).disks.add(attachdisk)
    except:
        logging.error('Could attach disk to vm')
        sys.exit(1)

    return result



If we could just get the size to show correctly that would be enough, the
others don't really matter to me.


Thanks

*Gary Lloyd*
--
IT Services
Keele University
---
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Python SDK adding a directlun

2014-05-08 Thread Gary Lloyd
When I add direct Luns this way the size shows as 1 on the GUI and 0 when 
called from the rest api. All the other items mentioned are not present.

Thanks

 On 8 May 2014, at 18:05, Juan Hernandez jhern...@redhat.com wrote:
 
 On 05/08/2014 05:04 PM, Gary Lloyd wrote:
 
 We are working on a script so that we can create an ISCSI LUN on our SAN
 and then directly assign it to a vm.
 
 We have been able to get it to work but with one small annoyance. I
 can't figure out how to populate size,serial,vendor_id and product_id
 via the api. Would anyone be able to point me in the right direction ?
 code (see def add_disk):
 
 def get_clusterid(cluster_name):
cluster = ovirt_api.clusters.get(cluster_name)
try:
 return cluster.id
except:
logging.error('the cluster: %s does not appear to exist' %
 cluster_name )
sys.exit(1)
 
 def nominate_host(cluster_id):
for host in ovirt_api.hosts.list():
 if host.cluster.id == cluster_id and
 host.status.state == 'up':
host.iscsidiscover
return host
logging.error('could not find a suitable host to nominate in cluster:')
sys.exit(1)
 
 
 def iscsi_discover_and_login(cluster,target,portal,chap_user,chap_pass):
clusterid=get_clusterid(cluster)
host=nominate_host(clusterid)
 
iscsidet = params.IscsiDetails()
iscsidet.address=portal
iscsidet.username=chap_user
iscsidet.password=chap_pass
iscsidet.target=target
 
host.iscsidiscover(params.Action(iscsi=iscsidet))
result = host.iscsilogin(params.Action(iscsi=iscsidet))
 
if result.status.state == 'complete':
 
storecon = params.StorageConnection()
storecon.address=portal
storecon.type_='iscsi'
storecon.port=3260
storecon.target=target
storecon.username=chap_user
storecon.password=chap_pass
 
ovirt_api.storageconnections.add(storecon)
 
return result
# error checking code needs to be added to this function
 
 def add_disk(vm_name,wwid,target,size,portal):
 
logunit = params.LogicalUnit()
 logunit.id=wwid
logunit.vendor_id='EQLOGIC'
logunit.product_id='100E-00'
logunit.port=3260
logunit.lun_mapping=0
logunit.address=portal
logunit.target=target
logunit.size=size * 1073741824
 
stor = params.Storage(logical_unit=[logunit])
stor.type_='iscsi'
 
 
disk = params.Disk()
disk.alias = 'vm-' + vm_name
 disk.name = disk.alias
disk.interface = 'virtio'
disk.bootable = True
disk.type_ = 'iscsi'
disk.format='raw'
disk.set_size(size * 1073741824)
#disk.size=size * 1073741824
#disk.active=True
 
disk.lun_storage=stor
 
try:
result = ovirt_api.disks.add(disk)
except:
logging.error('Could not add disk')
sys.exit(1)
 
attachdisk=ovirt_api.disks.get(disk.alias)
attachdisk.active = True
 
try:
ovirt_api.vms.get(vm_name).disks.add(attachdisk)
except:
logging.error('Could attach disk to vm')
sys.exit(1)
 
return result
 
 
 
 If we could just get the size to show correctly that would be enough,
 the others don't really matter to me.
 
 
 Thanks
 
 /Gary Lloyd/
 
 For a direct LUN disk all these values are ready only. Why do you need
 to change them?
 
 -- 
 Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
 3ºD, 28016 Madrid, Spain
 Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Add a Direct Lun via rest API (Oivrt 3.3.5)

2014-04-23 Thread Gary Lloyd
Hello

I was just wondering if anyone would be able to help me figure out if there
is a way to login to an ISCSI target (EqualLogic) and add its associated
volume as a Direct LUN via the REST api.

I have figured out how to add an existing Direct LUN to a vm.

I have created a volume on the SAN and then I am attempting to upload some
xml to the API:

curl -v -u 'admin@internal:mypass' -H "Content-type: application/xml" -d @disk.xml https://ovirt-test/disks/ --insecure

cat disk.xml

<disk>
  <alias>direct_lun</alias>
  <interface>virtio</interface>
  <format>raw</format>
  <lunStorage>
    <type>iscsi</type>
    <logical_unit>
      <address>10.0.0.1</address>
      <port>3260</port>
      <target>iqn.2001-05.com.equallogic:0-1cb196-cff1c713e-e2a004dfcc65357b-dev-directlun</target>
    </logical_unit>
  </lunStorage>
</disk>


At the moment the API is returning with a HTTP 400:

<fault>
  <reason>Incomplete parameters</reason>
  <detail>Disk [provisionedSize|size] required for add</detail>
</fault>

Is it possible to achieve my goal via the API ?
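
For what it's worth, the fault above suggests the request is only missing a size
element. A sketch of the payload with one added (the <size> element name comes
straight from the fault text, the 10 GiB value is just an example, and this is
not verified against 3.3.5):

cat > disk.xml << EOF
<disk>
  <alias>direct_lun</alias>
  <interface>virtio</interface>
  <format>raw</format>
  <size>10737418240</size>
  <lunStorage>
    <type>iscsi</type>
    <logical_unit>
      <address>10.0.0.1</address>
      <port>3260</port>
      <target>iqn.2001-05.com.equallogic:0-1cb196-cff1c713e-e2a004dfcc65357b-dev-directlun</target>
    </logical_unit>
  </lunStorage>
</disk>
EOF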

Thanks


*Gary Lloyd*
--
IT Services
Keele University
---
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Ovirt 3.4 EqualLogic multipath Bug 953343

2014-03-27 Thread Gary Lloyd
Hello

I have just deployed Ovirt 3.4 on our test environment. Does anyone know
how the ISCSI multipath issue is resolved ? At the moment it is behaving as
before and only opening one session per lun ( we bodged vdsm python code in
previous releases to get it to work).

The Planning sheet shows that it's fixed but I am not sure what to do next:
https://docs.google.com/spreadsheet/ccc?key=0AuAtmJW_VMCRdHJ6N1M3d1F1UTJTS1dSMnZwMF9XWVE&usp=drive_web#gid=0


Thanks

*Gary Lloyd*
--
IT Services
Keele University
---
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Ovirt 3.4 EqualLogic multipath Bug 953343

2014-03-27 Thread Gary Lloyd
Hi I have just had a look at this thanks. Whilst it seemed promising we are
in a situation where we use Direct Lun for all our production VM's in order
to take advantage of being able to individually replicate and restore vm
volumes using the SAN tools. Is multipath supported for Direct Luns or only
data domains ?

Thanks

*Gary Lloyd*
--
IT Services
Keele University
---


On 27 March 2014 16:02, Maor Lipchuk mlipc...@redhat.com wrote:

 Hi Gary,

 Please take a look at
 http://www.ovirt.org/Feature/iSCSI-Multipath#User_Experience

 Regards,
 Maor

 On 03/27/2014 05:59 PM, Gary Lloyd wrote:
  Hello
 
  I have just deployed Ovirt 3.4 on our test environment. Does anyone know
  how the ISCSI multipath issue is resolved ? At the moment it is behaving
  as before and only opening one session per lun ( we bodged vdsm python
  code in previous releases to get it to work).
 
  The Planning sheet shows that its fixed but I am not sure what to do
  next:
 https://docs.google.com/spreadsheet/ccc?key=0AuAtmJW_VMCRdHJ6N1M3d1F1UTJTS1dSMnZwMF9XWVE&usp=drive_web#gid=0
 
 
  Thanks
 
  /Gary Lloyd/
  --
  IT Services
  Keele University
  ---
 
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Ovirt 3.3.3 to 3.3.4 upgrade issues Centos 6

2014-03-20 Thread Gary Lloyd
I must have checked that box at some point and forgot about it. It is
working fine now thanks.

*Gary Lloyd*
--
IT Services
Keele University
---


On 19 March 2014 17:15, René Koch rk...@linuxland.at wrote:

 On 03/19/2014 05:19 PM, Gary Lloyd wrote:

 I have tested an upgrade from engine 3.3.3 to 3.3.4 under centos 6. It
 seems that the vdsm nodes are now showing as non-operational and the
 only way I have been able to cure this was to:

 - Upgrade the vdsm version on a node to 4.13.3-4
 - install vdsm-gluster
 - start glusterd service


 Did you check Enable Gluster Service for your cluster? If so, you need
 vdsm-gluster installed on all hosts. If you don't use GlusterFS on your
 hosts, disable this setting for your cluster.


 Regards,
 René


 We are only currently using ISCSI. I cannot get a node running
 vdsm-4.13.3-3 to show as being operational under engine 3.3.4

 Has anyone been able to get 3.3.4 functioning with previous version vdsm
 nodes ?

 This issue may hinder our progress when it comes time to upgrade our
 production engine / nodes when putting hosts into maintenance mode and
 performing migrations etc.

 Thanks

 /Gary Lloyd/
 --
 IT Services
 Keele University
 ---


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Ovirt 3.3.3 to 3.3.4 upgrade issues Centos 6

2014-03-19 Thread Gary Lloyd
I have tested an upgrade from engine 3.3.3 to 3.3.4 under centos 6. It
seems that the vdsm nodes are now showing as non-operational and the only
way I have been able to cure this was to:

- Upgrade the vdsm version on a node to 4.13.3-4
- install vdsm-gluster
- start glusterd service
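
On CentOS 6 those steps amount to roughly the following, assuming the 3.3.4
repositories are already enabled on the host (treat this as a sketch rather
than an exact recipe):

yum update vdsm            # should bring the node to vdsm-4.13.3-4
yum install vdsm-gluster
service glusterd start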

We are only currently using ISCSI. I cannot get a node running vdsm-4.13.3-3
to show as being operational under engine 3.3.4

Has anyone been able to get 3.3.4 functioning with previous version vdsm
nodes ?

This issue may hinder our progress when it comes time to upgrade our
production engine / nodes when putting hosts into maintenance mode and
performing migrations etc.

Thanks

*Gary Lloyd*
--
IT Services
Keele University
---
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Dell EqualLogic and Ovirt

2013-10-25 Thread Gary Lloyd
Hello

I just thought I'd send a quick message to say Hi and to see if anyone
on here is using Ovirt with Dell EqualLogic as the back-end storage.
If anyone is having difficulty getting up and running this may be helpful:

https://sites.google.com/a/keele.ac.uk/partlycloudy/ovirt/gettingovirttoworkwithdellequallogic

Any feedback is welcome.

Thanks

Gary Lloyd
--
Systems Administrator
Keele University
---
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] CentOS 6 and multiple ISCSI interfaces/multipath

2013-06-18 Thread Gary Lloyd

Dan Yasny dyasny@... writes:
 
 
 This is an old issue with EQL, best practice is to have different iSCSI 
paths on different subnets or VLANs, and to login to every iSCSI portal when 
you create the storage domain. 
 Alternatively, you can use bonds of course
 
 
 On Thu, Jun 13, 2013 at 4:41 PM, Martin Goldstone m.j.goldstone-
gmjiioya...@public.gmane.org.uk wrote:
 Hi all,
 I've recently started looking at oVirt to provide the next iteration of 
our virtualization infrastructure, as we have recently acquired some ISCSI 
storage. Up to now, we've been using KVM managed by libvirt/virt-manager 
locally on each of our hosts, using direct attached storage, which was 
obviously less than ideal.
 
 
 In setting it all up, I've hit a bit of a snag. We've got 4 GbE network 
interfaces on our pilot host (running vdsm on CentOS 6.4) which are 
connected to our storage arrays (equallogic). I've created 4 interfaces with 
iscsiadm for these and bound them, but when setting up the discs in oVirt, 
I've not seen a way of telling it which interfaces to use. The node makes 
the connection to the target successfully, but it seems its only connecting 
via the default iscsi interface, and not making a connection via each 
interface that I've defined. Obviously this means I can't use multipath, and 
being only GbE interfaces it means I'm not getting the performance I should.
 
 
 I've done some searching via Google, but I've not really found any thing 
that helps. Perhaps I've missed something, but can anyone give me any 
pointers for getting this to work across multiple interfaces?
 
 
 Thanks?
 
 Martin
 ___
 Users mailing list
 Users-dEQiMlfYlSzYtjvyW6yDsg@public.gmane.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 
 
 
 
 
 ___
 Users mailing list
 Users@...
 http://lists.ovirt.org/mailman/listinfo/users
 

Hi Dan

I work with Martin.

The problem we are having seems to be with regards to how ovirt/vdsm manages
iscsi sessions and not how our server is connected. I have manually tested
iscsiadm on the same machine (CentOS 6.4) and we are indeed getting multiple
sessions from 2 ifaces with mpio working via dm-multipath, etc.
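
A manual multi-iface login of the kind described here looks roughly like the
following; the portal address and target IQN are placeholders, and eth2/eth3
are the two SAN-facing interfaces mentioned above:

iscsiadm -m iface -I eth2 --op=new
iscsiadm -m iface -I eth2 --op=update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I eth3 --op=new
iscsiadm -m iface -I eth3 --op=update -n iface.net_ifacename -v eth3
iscsiadm -m discovery -t st -p 10.0.0.10:3260 -I eth2 -I eth3
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-volume -l
multipath -ll    # should now show one path per iface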

vdsm however does not appear to be logging into the targets using the ifaces 
we use in iscsiadm, I have tried adding this to /etc/vdsm/vdsm.conf, but it 
makes little difference:

[irs]
iscsi_default_ifaces = eth2,eth3

Thanks

Gary



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users