Re: [ovirt-users] for some reason ovirtnode creates unnecessary vg and lvs

2017-04-02 Thread martin chamambo
Hi Scott, I read the links you sent, but since I am using targetcli on
CentOS 7.3, isn't that supposed to have incorporated that patch already? I
will read deeper to see how I can get it fixed.
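
A quick sanity check - a sketch assuming the standard CentOS 7 package and
service names - to see which targetcli/LIO bits are installed and whether the
saved configuration is restored at boot:

    rpm -q targetcli python-rtslib
    systemctl status target.service   # restores the saved LIO config at boot
    targetcli saveconfig              # persist the current LIO configuration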

On Sun, Apr 2, 2017 at 8:40 PM, Scott Worthington <
scott.c.worthing...@gmail.com> wrote:

> Also, this 'targetctlfix' script seems to have helped others, too:
>   https://github.com/wfurmank/targetctlfix/blob/master/targetctlfix
>
>
> On 4/2/2017 2:15 PM, martin chamambo wrote:
> > I managed to configure the main iSCSI domain for my oVirt 4.1 engine and
> > node; it connects to the storage initially and initialises the data
> > center, but after rebooting the node and engine it creates unnecessary
> > VGs and LVs like the ones below:
> >
> > LV                                    VG                                    Attr    LSize
> > 280246d3-ac7b-44ff-8c03-dc2bcb9edb70  d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> > 2d57ab88-16e4-4007-9047-55fc4a35b534  d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> > ids                                   d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> > inbox                                 d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> > leases                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    2.00g
> > master                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    1.00g
> > metadata                              d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  512.00m
> > outbox                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> > xleases                               d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    1.00g
> >
> > What's the cause of this?
> >
> > NB: My iSCSI storage is on a CentOS 7 box.
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipathing

2017-04-02 Thread Devin A. Bougie
Thanks for following up, Gianluca.  At this point, my main question is why I
should configure iSCSI Bonds within the oVirt engine instead of, or in
addition to, configuring iSCSI initiators and multipathd directly in the host's
OS.

The multipath.conf created by VDSM works fine with our devices, as do the stock 
EL6/7 kernels and drivers.  We've had great success using these devices for 
over a decade in various EL6/7 High-Availability server clusters, and when we 
configure everything manually they seem to work great with oVirt.  We're just 
wondering exactly what the advantage is to taking the next step of configuring 
iSCSI Bonds within the oVirt engine.

For what it's worth, these are Infortrend ESDS devices with redundant 
controllers and two 10GbE ports per controller.  We connect each host and each 
controller to two separate switches, so we can simultaneously lose both a 
controller and a switch without impacting availability.
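
For context, the "manual" configuration mentioned above is essentially the
standard open-iscsi plus multipath workflow; a minimal sketch with placeholder
portal addresses:

    # discover and log in to each portal (placeholder IPs)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1:3260
    iscsiadm -m discovery -t sendtargets -p 192.168.20.1:3260
    iscsiadm -m node -L all

    # verify that every LUN shows one path per portal
    multipath -ll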

Thanks again!
Devin

> On Apr 2, 2017, at 7:47 AM, Gianluca Cecchi  wrote:
> 
> 
> 
> On 02 Apr 2017 05:20, "Devin A. Bougie"  wrote:
> We have a new 4.1.1 cluster up and running with OVS switches and an iSCSI 
> hosted_storage and VM data domain (same target, different LUNs).  Everything 
> works fine, and I can configure iscsid and multipathd outside of the oVirt 
> engine to ensure redundancy with our iSCSI device.  However, if I try to 
> configure iSCSI Multipathing within the engine, all of the hosts get stuck in 
> the "Connecting" status and the Data Center and Storage Domains go down.  The 
> hosted engine, however, continues to work just fine.
> 
> Before I provide excerpts from our logs and more details on what we're 
> seeing, it would be helpful to understand better what the advantages are of 
> configuring iSCSI Bonds within the oVirt engine.  Is this mainly a feature 
> for oVirt users that don't have experience configuring and managing iscsid 
> and multipathd directly?  Or, is it important to actually setup iSCSI Bonds 
> within the engine instead of directly in the underlying OS?
> 
> Any advice or links to documentation I've overlooked would be greatly 
> appreciated.
> 
> Many thanks,
> Devin
> 
> What kind of iSCSI storage array are you using?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iscsi config for dell ps series

2017-04-02 Thread Marcin Kruk
From my perspective, the oVirt developers ship a very general multipath.conf,
which in their opinion should work with as many arrays as possible.
So you should modify this file and do some tests: plug links in, pull them
out, and so on.

If you want a picture of the LUNs, it is better to use: iscsiadm -m session
-P 3.
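
A minimal sketch of those checks (standard iscsiadm/multipath tooling; the
refresh interval is just an example):

    # per-session and per-LUN details
    iscsiadm -m session -P 3

    # watch path state while plugging/unplugging links
    watch -n 2 'multipath -ll'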

2017-03-28 18:25 GMT+02:00 Gianluca Cecchi :

>
> Hello,
> I'm configuring a hypervisor for an iSCSI Dell PS Series array.
> It is a CentOS 7.3 + updates server.
> The server has been already added to oVirt as a node, but without any
> storage domain configured yet.
> It has access to one lun that will become the storage domain one.
>
> Default oVirt generated multipath.conf is like this:
>
> defaults {
>     polling_interval    5
>     no_path_retry       fail
>     user_friendly_names no
>     flush_on_last_del   yes
>     fast_io_fail_tmo    5
>     dev_loss_tmo        30
>     max_fds             4096
> }
>
> devices {
>     device {
>         # These settings overrides built-in devices settings. It does not apply
>         # to devices without built-in settings (these use the settings in the
>         # "defaults" section), or to devices defined in the "devices" section.
>         # Note: This is not available yet on Fedora 21. For more info see
>         # https://bugzilla.redhat.com/1253799
>         all_devs        yes
>         no_path_retry   fail
>     }
> }
>
>
> Apparently in device-mapper-multipath there is no builtin for this
> combination
>
>   Vendor: EQLOGIC  Model: 100E-00  Rev: 8.1
>
> So, with the oVirt provided configuration a "show config" for multipath
> reports something like this at the end:
>
> polling_interval 5
> path_selector "service-time 0"
> path_grouping_policy "failover"
> path_checker "directio"
> rr_min_io_rq 1
> max_fds 4096
> rr_weight "uniform"
> failback "manual"
> features "0"
>
> and multipath layout this way
>
> [root@ov300 etc]# multipath -l
> 364817197b5dfd0e5538d959702249b1c dm-3 EQLOGIC ,100E-00
> size=1.0T features='0' hwhandler='0' wp=rw
> |-+- policy='service-time 0' prio=0 status=active
> | `- 7:0:0:0 sde 8:64 active undef  running
> `-+- policy='service-time 0' prio=0 status=enabled
>   `- 8:0:0:0 sdf 8:80 active undef  running
> [root@ov300 etc]#
>
> Following recommendations from Dell here:
> http://en.community.dell.com/techcenter/extras/m/white_papers/20442422
>
> I should put into defaults section these directives:
>
> defaults {
>     polling_interval      10
>     path_selector         "round-robin 0"
>     path_grouping_policy  multibus
>     path_checker          tur
>     rr_min_io_rq          10
>     max_fds               8192
>     rr_weight             priorities
>     failback              immediate
>     features              0
> }
>
> I'm trying to mix the EQL and oVirt recommendations to get the best of both
> for my use case, and arrived at this config (plus a blacklist section with my
> internal HD and flash WWIDs, which is not relevant here):
>
> # VDSM REVISION 1.3
> # VDSM PRIVATE
>
> defaults {
>     polling_interval    5
>     no_path_retry       fail
>     user_friendly_names no
>     flush_on_last_del   yes
>     fast_io_fail_tmo    5
>     dev_loss_tmo        30
>     # Default oVirt value overwritten
>     #max_fds            4096
>     max_fds             8192
> }
>
> devices {
>     device {
>         # These settings overrides built-in devices settings. It does not apply
>         # to devices without built-in settings (these use the settings in the
>         # "defaults" section), or to devices defined in the "devices" section.
>         # Note: This is not available yet on Fedora 21. For more info see
>         # https://bugzilla.redhat.com/1253799
>         all_devs        yes
>         no_path_retry   fail
>     }
>     device {
>         vendor                "EQLOGIC"
>         product               "100E-00"
>         # Default EQL configuration overwritten by oVirt default
>         #polling_interval     10
>         path_selector         "round-robin 0"
>         path_grouping_policy  multibus
>         path_checker          tur
>         rr_min_io_rq          10
>         rr_weight             priorities
>         failback              immediate
>         features              "0"
>     }
> }
>
> After activating this config I have this multipath layout:
>
> [root@ov300 etc]# multipath -l
> 364817197b5dfd0e5538d959702249b1c dm-3 EQLOGIC ,100E-00
> size=1.0T features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=0 status=active
>   |- 7:0:0:0 sde 8:64 active undef  running
>   `- 8:0:0:0 sdf 8:80 active undef  running
> [root@ov300 etc]#
>
> NOTE: at 

Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-02 Thread Marcin Kruk
No. You have to edit vdsm.conf when:
1) the link is broken and it points to the iSCSI target IP, and
2) you want to reboot your host or restart VDSM.
I don't know why, but during startup VDSM tries to connect to the target IP;
in my opinion it should use the /var/lib/iscsi configuration that was set
previously.

I also had the "Device is not on preferred path" problem, but I edited the
multipath.conf file and set the round-robin algorithm, because multipath.conf
was changed during installation.

If you want to get the right configuration for your array, execute (see the
sketch below):
1) multipathd -k    # interactive console mode
2) show config      # find the proper configuration for your array
3) modify multipath.conf and put the above configuration there.
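
A minimal sketch of those steps (the interactive console belongs to the
multipathd client):

    multipathd -k               # enter the interactive console
    multipathd> show config     # find the built-in section matching your array
    multipathd> exit
    # copy the relevant device stanza into /etc/multipath.conf, then:
    multipathd -k'reconfigure'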
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Discovery cannot detect LUN

2017-04-02 Thread Lukáš Kaplan
I am using CentOS 7 and tgtd (scsi-target-utils).

# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)

# tgtd -V
1.0.55

# rpm -qi scsi-target-utils
Name: scsi-target-utils
Version : 1.0.55
Release : 4.el7
Architecture: x86_64
... etc

--
Lukas Kaplan



2017-03-31 21:59 GMT+02:00 Yaniv Kaul :

>
>
> On Fri, Mar 31, 2017 at 3:43 PM, Lukáš Kaplan  wrote:
>
>> I solved this issue now.
>>
>> I thought until today that an iSCSI LUN ID (WWN or WWID) is globally unique.
>> It is not true!
>> If you power on two identical Linux machines and create an iSCSI target on
>> them, their LUN IDs will be the same...
>>
>> 36e010001 - for the first LUN
>> 36e010002 - for the second LUN, etc.
>>
>
> How did you get to such a number with so many zeros? Usually there's
> some vendor ID and so on in there...
> What target are you using?
> Y.
>
>
>>
>> You have to change the LUN ID manually (and take care of its uniqueness
>> within your domain) in /etc/tgtd/targets.conf, for example:
>>
>> 
>> 
>> scsi_id 00020001
>> 
>> 
>> scsi_id 00020002
>> 
>> initiator-address 192.168.1.0/24
>> 
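
For reference, a generic tgtd targets.conf stanza of the kind quoted above
(the IQN and backing-store devices below are placeholders, not the original
values) looks roughly like this:

    <target iqn.2017-03.example.com:storage1>
        <backing-store /dev/vg0/lun1>
            scsi_id 00020001
        </backing-store>
        <backing-store /dev/vg0/lun2>
            scsi_id 00020002
        </backing-store>
        initiator-address 192.168.1.0/24
    </target>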
>>
>>
>>
>> --
>> Lukas Kaplan
>>
>>
>>
>> 2017-03-31 9:06 GMT+02:00 Lukáš Kaplan :
>>
>>> Is it possible that the problem is conflicting LUN IDs?
>>> I see that the first LUN from both storage servers has the same LUN ID
>>>  36e010001. One storage server is connected to
>>> oVirt and the second is not connected because of the described problem
>>> (oVirt doesn't show the LUN after login and discovery).
>>>
>>> I am using tgtd as the iSCSI target server on both servers. Both have the
>>> same configuration (same disks, md RAID6), but different IQNs and IP addresses...
>>>
>>> --
>>> Lukas Kaplan
>>>
>>> Dragon Internet a.s.
>>>
>>
>>
>>> 2017-03-29 12:12 GMT+02:00 Liron Aravot :
>>>


 On Wed, Mar 29, 2017 at 12:59 PM, Eduardo Mayoral 
 wrote:

> I had a similar problem. In my case it was related to multipath: it
> was not masking the LUNs correctly and was seeing them multiple times (once
> per path), so I could not select the LUNs in the oVirt interface.
>
> Once I configured multipath correctly, everything worked like a charm.
>
> Best regards,
>
> --
>
> Eduardo Mayoral.
>
> On 29/03/17 11:30, Lukáš Kaplan wrote:
>
> Hello all,
>
> I did all the steps I described in the previous email, but no change. I
> can't see any LUN after discovery and login to the new iSCSI storage.
> (That storage is OK; if I connect it to another, older oVirt
> domain, it works...)
>
> I tried it on 3 new iSCSI targets already; all have the same problem...
>
> Can somebody help me, please?
>
> --
> Lukas Kaplan
>
>
 Hi Lukas,
 If you try to perform the discovery yourself, do you see the luns?

>
>
> 2017-03-27 16:22 GMT+02:00 Lukáš Kaplan :
>
>> I did following steps:
>>
>>  - delete target on all initiators (ovirt nodes)
>>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
>> 10.53.1.201:3260 -u
>>  iscsiadm -m node -T iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T -p
>> 10.53.1.201:3260 -o delete
>>
>>  - stop tgtd on target
>>  - fill storage by zeroes (dd if=/dev/zero of=/dev/md125 bs=4096
>> status=progress)
>>  - start tgtd
>>  - tried to connect to oVirt (Discovery = OK, Login = OK, but cannot see
>> any LUN).
>>
>> === After that I ran this commands on one node: ===
>>
>> [root@fudi-cn1 ~]# iscsiadm -m session -o show
>> tcp: [1] 10.53.0.10:3260,1 iqn.2017-03.cz.dragon.ovirt:ovirtengine
>> (non-flash)
>> tcp: [11] 10.53.0.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
>> (non-flash)
>> tcp: [12] 10.53.1.201:3260,1 iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
>> (non-flash)
>>
>> [root@fudi-cn1 ~]# iscsiadm -m discoverydb -P1
>> SENDTARGETS:
>> DiscoveryAddress: 10.53.0.201,3260
>> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
>> Portal: 10.53.0.201:3260,1
>> Iface Name: default
>> iSNS:
>> No targets found.
>> STATIC:
>> Target: iqn.2017-03.cz.dragon.ovirt.fuvs-sn1:10T
>> Portal: 10.53.1.201:3260,1
>> Iface Name: default
>> Target: iqn.2017-03.cz.dragon.ovirt:ovirtengine
>> Portal: 10.53.0.10:3260,1
>> Iface Name: default
>> Target: iqn.2017-03.cz.dragon.ovirt.fudi-sn1:10T
>> Portal: 10.53.0.201:3260,1
>> Iface Name: default
>> FIRMWARE:
>> No targets found.
>>
>> === On iscsi target: ===
>> [root@fuvs-sn1 ~]# cat /proc/mdstat
>> Personalities : [raid1] [raid6] [raid5] 

Re: [ovirt-users] for some reason ovirtnode creates unnecessary vg and lvs

2017-04-02 Thread Scott Worthington
Also, this 'targetctlfix' script seems to have helped others, too:
  https://github.com/wfurmank/targetctlfix/blob/master/targetctlfix


On 4/2/2017 2:15 PM, martin chamambo wrote:
> I managed to configure the main iSCSI domain for my oVirt 4.1 engine and
> node; it connects to the storage initially and initialises the data
> center, but after rebooting the node and engine it creates unnecessary
> VGs and LVs like the ones below:
> 
> LV                                    VG                                    Attr    LSize
> 280246d3-ac7b-44ff-8c03-dc2bcb9edb70  d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> 2d57ab88-16e4-4007-9047-55fc4a35b534  d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> ids                                   d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> inbox                                 d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> leases                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    2.00g
> master                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    1.00g
> metadata                              d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  512.00m
> outbox                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> xleases                               d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    1.00g
> 
> What's the cause of this?
>
> NB: My iSCSI storage is on a CentOS 7 box.
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] for some reason ovirtnode creates unnecessary vg and lvs

2017-04-02 Thread Scott Worthington
You need to set a global filter to ignore PVs found in LVs.

Please read the comments in this bugzilla:
  https://bugzilla.redhat.com/show_bug.cgi?id=1139441

Hope that helps you.
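
For illustration, a minimal sketch of the kind of lvm.conf global_filter
described in that bug (the accepted device below is a placeholder -
whitelist your host's own local/boot devices and reject everything else):

    # /etc/lvm/lvm.conf (sketch; adjust the accepted devices to your host)
    devices {
        # accept only the local boot disk, reject everything else, so LVM
        # does not scan PVs that live inside oVirt storage-domain LVs
        global_filter = [ "a|^/dev/sda2$|", "r|.*|" ]
    }

    # then rebuild the initramfs so the filter also applies at boot
    dracut -f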

On 4/2/2017 2:15 PM, martin chamambo wrote:
> I managed to configure the main iSCSI domain for my oVirt 4.1 engine and
> node; it connects to the storage initially and initialises the data
> center, but after rebooting the node and engine it creates unnecessary
> VGs and LVs like the ones below:
> 
> LV                                    VG                                    Attr    LSize
> 280246d3-ac7b-44ff-8c03-dc2bcb9edb70  d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> 2d57ab88-16e4-4007-9047-55fc4a35b534  d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> ids                                   d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> inbox                                 d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> leases                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    2.00g
> master                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    1.00g
> metadata                              d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  512.00m
> outbox                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
> xleases                               d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    1.00g
> 
> What's the cause of this?
>
> NB: My iSCSI storage is on a CentOS 7 box.
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] for some reason ovirtnode creates unnecessary vg and lvs

2017-04-02 Thread martin chamambo
I managed to configure the main iSCSI domain for my oVirt 4.1 engine and
node; it connects to the storage initially and initialises the data center,
but after rebooting the node and engine it creates unnecessary VGs and LVs
like the ones below:

  LV                                    VG                                    Attr    LSize
  280246d3-ac7b-44ff-8c03-dc2bcb9edb70  d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
  2d57ab88-16e4-4007-9047-55fc4a35b534  d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
  ids                                   d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
  inbox                                 d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
  leases                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    2.00g
  master                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    1.00g
  metadata                              d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  512.00m
  outbox                                d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-  128.00m
  xleases                               d5104206-5863-4f9d-9ea7-2b140c97d65f  -wi-a-    1.00g

What's the cause of this?

NB: My iSCSI storage is on a CentOS 7 box.
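
A quick way to see where these LVs actually live (standard LVM reporting
options):

    lvs -o lv_name,vg_name,lv_attr,lv_size,devices
    pvs -o pv_name,vg_name,pv_size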
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt upgrade => vms refusing to (re)boot

2017-04-02 Thread Michal Skrivanek

> On 31 Mar 2017, at 15:49, Nelson Lameiras  
> wrote:
> 
> Hello,
> 
> We had a rather unpleasant surprise while upgrading our production
> datacenters from oVirt 4.0 to 4.1 (although I think this problem is not related
> to oVirt 4.1).
> Some critical VMs simply refused to reboot, without any particular error
> message other than "VM failed to boot".
> 
> After a stressful 2h of investigation, it turned out that the problematic VMs
> shared the same "exotic" condition: they had not been rebooted since the last
> major upgrade, 3.6 => 4.0 (I know this is not good practice and we
> underestimated its priority; we will change our reboot policies).

it’s indeed not a good practice. You can’t use the features (those related to 
hw) without updating the guest HW and that will only happen once you shut the 
VM down. If you can’t do that then just keep the old cluster level, that is 
fully supported.

> Their "custom compatilibity setting" was set to 3.6 !! When changing this 
> setting to "Empty", The VM booted normaly. Hard to find, easy to fix !! ;)
> 
> Neverthess, there is little to none information/documentation about the 
> usefullness of this setting, and I have a hard time understanding the 
> consequences of changing it manually.
> 
> Our oVirt datacenter (which has  engine 4.1.1, but some hosts on 4.0, so it's 
> still on 4.0 compatibility) hosts currently +- 200 Vms,
> - 10 with "custom compatilibity setting" set to 3.6 - which will not 
> (reb)boot in current state,
> - 30some with "custom compatilibity setting" set to 4.0 - which (re)boot 
> normally 
> - the rest with "custom compatilibity setting" set to "empty" - which 
> (re)boot normally 
> 
> So can an oVirt guru please answer the following questions:
> 
> - What does this setting do? What are the consequences of changing it
> manually?

it emulates the corresponding guest hardware and engine feature behavior.

> - Is it "normal" that a VM not reboooted since the 3.6 update does not boot 
> if a new major upgrade is done on hostig host ? (maybe a exotic bug worth 
> correcting ?)

no, it’s a bug https://bugzilla.redhat.com/show_bug.cgi?id=1436325 

will be fixed in 4.1.2

> - What are the possible consequences of manually setting this setting to
> "Empty" on all my VMs (running or stopped)?

then it will use the cluster settings. that is the “normal” state

> - Which events will change this setting automatically ? (cluster major 
> version upgrade, first reboot after upgrade, ...) ?

cluster level upgrade while the VM is running. Since the VM is running, the
changes to hardware cannot be applied, so it is temporarily reconfigured to use
the previous cluster level. On VM shutdown the settings are all updated and the
field is set to empty.
There was another bug (hopefully fixed) which kept the value there even on
shutdown. Just edit the VM and set it to "empty"; then it will start as a
regular VM in its cluster.

> - Some of my VMs have custom_compatibility_version set to 4.0 (in the REST API)
> even though they have been recently rebooted and the "custom compatibility setting"
> is empty in the GUI. How is this possible?

Was it done before the reboot? Are there pending changes to be applied, perhaps?
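
One way to double-check what the engine actually has stored for a VM (a sketch
using the REST API; the engine FQDN, credentials and VM id below are
placeholders):

    curl -s -k -u admin@internal:PASSWORD \
      https://engine.example.com/ovirt-engine/api/vms/VM_ID \
      | grep -i custom_compatibility_version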

Thanks,
michal

> 
> cordialement, regards,
> 
> 
>  
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70
> nelson.lamei...@lyra-network.com 
> www.lyra-network.com  | www.payzen.eu 
> 
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] One of my 2 identical Nodes keeps restarting every couple of hours

2017-04-02 Thread Yaniv Kaul
On Fri, Mar 31, 2017 at 3:23 PM, George Mcro  wrote:

> Hello,
>
> My infrastructure consist of 2 ovirt-nodes and one ovirt-engine. All of
> them use Centos 7.
>
> I have already configured the oVirt engine and both oVirt nodes. Before I
> installed the two oVirt nodes in the oVirt engine, I installed this repo on
> both of them: yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
>
> I want to clarify that the oVirt engine is an HP ProLiant DL380 G6 and the 2
> oVirt nodes are HP ProLiant DL380 G7s. Also, the oVirt node servers are
> hardware-identical (same motherboard, same HP model, same NICs, etc.).
>
> Now, the issue.
>
> oVirt node no. 2 (hype02) operates perfectly for days with 4 VMs on it. But
> when I migrate VMs from oVirt node no. 2 (hype02) to oVirt node no. 1
> (hype01), to see if it is capable of operating like hype02, it restarts after
> a couple of hours (2-4).
>
>
>
> Ovirt engine event logs report :
>
> VDSM hype01 command GetStatsVDS failed: Heartbeat exceeded (hype01).
>
> Or
>
> VDSM hype01 command GetStatsVDS failed: Connection issue
> java.rmi.ConnectException: Connection timeout
>
>
>
> I have tried almost every change I could think of. I reinstalled CentOS 7 a
> couple of times and upgraded the BIOS and iLO to the latest version. Moreover,
> I changed the hard drives (with the same HP model), motherboard, and RAM, but
> nothing worked.
>
>
>
> Then, I tried something else. I put the server in Maintenance mode and
> voila, it was operating for 4 days straight without restarting.
>

Could it be unrelated HW? Is your network connection (switch), for example,
flapping? Anything on the storage?
Y.
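
A couple of quick checks for link flaps on the host (a sketch; replace the
interface name with yours):

    # kernel messages about link state changes
    journalctl -k | grep -iE 'link (is )?(up|down)'

    # per-interface error/drop counters and current link state
    ip -s link show em1
    ethtool em1 | grep -i 'link detected'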


>
>
> So, I do not know what is wrong, and the logs sadly do not help me understand.
>
> I will post some log files here: dmesg, messages, supervdsm and vdsm.
>
> Any ideas what the issue is here - hardware or software? Any help would be
> appreciated.
>
>
> Kind Regards,
>
> George Mcro
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] New SAN Storage Domain: Cannot deactivate Logical Volume

2017-04-02 Thread Liron Aravot
Hi Alexey,
can you please attach the engine/vdsm logs?

thanks.
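
For reference, the usual locations (standard oVirt paths):

    # on the engine machine
    tail -n 200 /var/log/ovirt-engine/engine.log
    # on the host that failed to create the storage domain
    tail -n 200 /var/log/vdsm/vdsm.log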

On Fri, Mar 31, 2017 at 11:08 AM, Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:

> Hi, community!
>
> I'm trying to use an iSCSI data domain with oVirt 4.1.1.6-1 and oVirt Node
> 4.1.1, but I get this error while adding the data domain.
>
> Error while executing action New SAN Storage Domain: Cannot deactivate
> Logical Volume
>
>
> 2017-03-31 10:47:45,099+03 ERROR [org.ovirt.engine.core.
> vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-41)
> [717af17f] Command 'CreateStorageDomainVDSCommand(HostName = node169-07.,
> CreateStorageDomainVDSCommandParameters:{runAsync='true',
> hostId='ca177a14-56ef-4736-a0a9-ab9dc2a8eb90', storageDomain='
> StorageDomainStatic:{name='data1', 
> id='56038842-2fbe-4ada-96f1-c4b4e66fd0b7'}',
> args='sUG5rL-zqvF-DMox-uMK3-Xl91-ZiYw-k27C3B'})' execution failed:
> VDSGenericException: VDSErrorException: Failed in vdscommand to
> CreateStorageDomainVDS, error = Cannot deactivate Logical Volume: ('General
> Storage Exception: (\'5 [] [\\\'  /dev/mapper/
> 36001405b7dd7b800e7049ecbf6830637: read failed after 0 of 4096 at 0:
> Input/output error\\\', \\\'  /dev/mapper/36001405b7dd7b800e7049ecbf6830637:
> read failed after 0 of 4096 at 4294967230464: Input/output error\\\', \\\'
> /dev/mapper/36001405b7dd7b800e7049ecbf6830637: read failed after 0 of
> 4096 at 4294967287808: Input/output error\\\', \\\'  WARNING: Error counts
> reached a limit of 3. Device /dev/mapper/36001405b7dd7b800e7049ecbf6830637
> was disabled\\\', \\\'  WARNING: Error counts reached a limit of 3. Device
> /dev/mapper/36001405b7dd7b800e7049ecbf6830637 was disabled\\\', \\\'
> Volume group "56038842-2fbe-4ada-96f1-c4b4e66fd0b7" not found\\\', \\\'
> Cannot process volume group 56038842-2fbe-4ada-96f1-
> c4b4e66fd0b7\\\']\\n56038842-2fbe-4ada-96f1-c4b4e66fd0b7/[\
> \\'master\\\']\',)',)
>
>
> Previously, this iSCSI data domain worked well with oVirt 3.6 and a CentOS
> 7.2 host.
> How can I debug this error?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Multipathing

2017-04-02 Thread Gianluca Cecchi
On 02 Apr 2017 05:20, "Devin A. Bougie"  wrote:

We have a new 4.1.1 cluster up and running with OVS switches and an iSCSI
hosted_storage and VM data domain (same target, different LUNs).
Everything works fine, and I can configure iscsid and multipathd outside of
the oVirt engine to ensure redundancy with our iSCSI device.  However, if I
try to configure iSCSI Multipathing within the engine, all of the hosts get
stuck in the "Connecting" status and the Data Center and Storage Domains go
down.  The hosted engine, however, continues to work just fine.

Before I provide excerpts from our logs and more details on what we're
seeing, it would be helpful to understand better what the advantages are of
configuring iSCSI Bonds within the oVirt engine.  Is this mainly a feature
for oVirt users that don't have experience configuring and managing iscsid
and multipathd directly?  Or, is it important to actually setup iSCSI Bonds
within the engine instead of directly in the underlying OS?

Any advice or links to documentation I've overlooked would be greatly
appreciated.

Many thanks,
Devin


What kind of iSCSI storage array are you using?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users