Re: [ovirt-users] import qcow2 and sparse

2017-08-16 Thread Marcin Kruk
I have an iSCSI storage.
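
For what it's worth, I usually check how much of the source image is really
allocated before the import, and shrink it if needed. A minimal sketch, assuming
the qcow2 file is still reachable on the KVM host and that qemu-img and
virt-sparsify (libguestfs-tools) are installed; the paths are placeholders:

    # virtual size vs. bytes actually allocated on disk
    qemu-img info /kvm/images/vm01.qcow2

    # rewrite the image into a new file, dropping zeroed and unused space
    virt-sparsify --tmp /var/tmp /kvm/images/vm01.qcow2 /kvm/images/vm01-sparse.qcow2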

2017-08-15 18:04 GMT+02:00 Michal Skrivanek <mskri...@redhat.com>:

> > On 14 Aug 2017, at 13:19, Marcin Kruk <askifyoun...@gmail.com> wrote:
> >
> > After importing a machine from KVM that was based on a qemu disk with 2 GB
> > physical and 50 GB virtual size,
> > I have got a disk that occupies 50 GB, and even the sparse option does
> > not help. It still occupies 50 GB?
>
> Depends on what kind of storage you have on the ovirt side. Is it file
> based?
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] import qcow2 and sparse

2017-08-14 Thread Marcin Kruk
After importing a machine from KVM that was based on a qemu disk with 2 GB
physical and 50 GB virtual size,
I have got a disk that occupies 50 GB, and even the sparse option does not
help. It still occupies 50 GB?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iscsi config for dell ps series

2017-04-02 Thread Marcin Kruk
From my perspective, the oVirt developers ship a very general multipath.conf
configuration which, in their opinion, should work with as many arrays as
possible.
So you should modify this file and run some tests: plug links in, plug them
out, etc.

If you want to get a picture of the LUNs and paths, you are better off using:
iscsiadm -m session -P 3.
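
For completeness, the commands I use to check the result after plugging a link
out and back in; a generic sketch, nothing Dell-specific (device and session
names will differ on your host):

    iscsiadm -m session -P 3      # per-session state and the attached SCSI devices
    multipath -ll                 # path groups and the status of every path
    multipathd -k"show config"    # the effective, merged multipath configuration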

2017-03-28 18:25 GMT+02:00 Gianluca Cecchi :

>
> Hello,
> I'm configuring an hypervisor for iSCSI Dell PS Series
> It is a CentOS 7.3 + updates server.
> The server has been already added to oVirt as a node, but without any
> storage domain configured yet.
> It has access to one lun that will become the storage domain one.
>
> Default oVirt generated multipath.conf is like this:
>
> defaults {
>     polling_interval    5
>     no_path_retry       fail
>     user_friendly_names no
>     flush_on_last_del   yes
>     fast_io_fail_tmo    5
>     dev_loss_tmo        30
>     max_fds             4096
> }
>
> devices {
>     device {
>         # These settings overrides built-in devices settings. It does not apply
>         # to devices without built-in settings (these use the settings in the
>         # "defaults" section), or to devices defined in the "devices" section.
>         # Note: This is not available yet on Fedora 21. For more info see
>         # https://bugzilla.redhat.com/1253799
>         all_devs        yes
>         no_path_retry   fail
>     }
> }
>
>
> Apparently in device-mapper-multipath there is no builtin for this
> combination
>
>   Vendor: EQLOGIC  Model: 100E-00  Rev: 8.1
>
> So, with the oVirt provided configuration a "show config" for multipath
> reports something like this at the end:
>
> polling_interval 5
> path_selector "service-time 0"
> path_grouping_policy "failover"
> path_checker "directio"
> rr_min_io_rq 1
> max_fds 4096
> rr_weight "uniform"
> failback "manual"
> features "0"
>
> and the multipath layout looks like this:
>
> [root@ov300 etc]# multipath -l
> 364817197b5dfd0e5538d959702249b1c dm-3 EQLOGIC ,100E-00
> size=1.0T features='0' hwhandler='0' wp=rw
> |-+- policy='service-time 0' prio=0 status=active
> | `- 7:0:0:0 sde 8:64 active undef  running
> `-+- policy='service-time 0' prio=0 status=enabled
>   `- 8:0:0:0 sdf 8:80 active undef  running
> [root@ov300 etc]#
>
> Following recommendations from Dell here:
> http://en.community.dell.com/techcenter/extras/m/white_papers/20442422
>
> I should put these directives into the defaults section:
>
> defaults {
>     polling_interval     10
>     path_selector        "round-robin 0"
>     path_grouping_policy multibus
>     path_checker         tur
>     rr_min_io_rq         10
>     max_fds              8192
>     rr_weight            priorities
>     failback             immediate
>     features             0
> }
>
> I'm trying to mix EQL and oVirt recommendations to get the best for my use
> and arrived at this config (plus a blacklist section with my internal hd
> and my flash wwids that is not relevant here):
>
> # VDSM REVISION 1.3
> # VDSM PRIVATE
>
> defaults {
>     polling_interval    5
>     no_path_retry       fail
>     user_friendly_names no
>     flush_on_last_del   yes
>     fast_io_fail_tmo    5
>     dev_loss_tmo        30
>     # Default oVirt value overwritten
>     #max_fds            4096
>     max_fds             8192
> }
>
> devices {
>     device {
>         # These settings overrides built-in devices settings. It does not apply
>         # to devices without built-in settings (these use the settings in the
>         # "defaults" section), or to devices defined in the "devices" section.
>         # Note: This is not available yet on Fedora 21. For more info see
>         # https://bugzilla.redhat.com/1253799
>         all_devs        yes
>         no_path_retry   fail
>     }
>     device {
>         vendor  "EQLOGIC"
>         product "100E-00"
>         # Default EQL configuration overwritten by oVirt default
>         #polling_interval     10
>         path_selector        "round-robin 0"
>         path_grouping_policy multibus
>         path_checker         tur
>         rr_min_io_rq         10
>         rr_weight            priorities
>         failback             immediate
>         features             "0"
>     }
> }
>
> After activating this config I have this multipath layout:
>
> [root@ov300 etc]# multipath -l
> 364817197b5dfd0e5538d959702249b1c dm-3 EQLOGIC ,100E-00
> size=1.0T features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=0 status=active
>   |- 7:0:0:0 sde 8:64 active undef  running
>   `- 8:0:0:0 sdf 8:80 active undef  running
> [root@ov300 etc]#
>
> NOTE: at 

Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-04-02 Thread Marcin Kruk
No. You have to edit vdsm.conf when:
1) a link is broken and it points to the iSCSI target IP, and
2) you want to reboot your host or restart VDSM.
I don't know why, but VDSM during startup tries to connect to the target IP; in
my opinion it should use the /var/lib/iscsi configuration which was set
previously.

I also had the "Device is not on preferred path" problem, but I edited the
multipath.conf file and set the round-robin algorithm, because multipath.conf
had been changed during installation.

If you want to get the right configuration for your array, execute:
1) multipathd -k   # console mode
2) show config     # find the proper configuration for your array
3) modify multipath.conf and put the above configuration there (see the sketch below).
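
A minimal sketch of that flow (assumes multipathd is running; whatever
device { } block "show config" reports for your array's vendor/product is what
you copy, nothing below is specific to any array):

    multipathd -k                 # opens the interactive multipathd console
    multipathd> show config       # locate the device { } section matching your array
    multipathd> exit

    # copy the matching device { } block into /etc/multipath.conf, adjust it, then:
    multipathd -k"reconfigure"    # or: systemctl restart multipathd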
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best Storage Option: iSCSI/NFS/GlusterFS?

2017-03-26 Thread Marcin Kruk
But on the Dell MD32x00 you have got two controllers. The trick is that you
have to sustain links to both controllers, so the best option is to use
multipath, as Yaniv said. Otherwise you get error notifications from the
array.
The problem is with the iSCSI target.
After a server reboot, VDSM tries to connect to the target which was previously
set, but it could be inactive.
So in that case you have to remember to edit the configuration in vdsm.conf,
because vdsm.conf does not accept a target with multiple IP addresses.
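
As an illustration, discovering and logging in through both controllers so that
multipath ends up with every path; a sketch with placeholder portal IPs and IQN:

    iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260   # controller A
    iscsiadm -m discovery -t sendtargets -p 10.0.1.10:3260   # controller B
    iscsiadm -m node -T iqn.1984-05.com.dell:powervault.example -p 10.0.0.10:3260 --login
    iscsiadm -m node -T iqn.1984-05.com.dell:powervault.example -p 10.0.1.10:3260 --login
    multipath -ll                  # all paths should now appear under one map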

2017-03-26 9:40 GMT+02:00 Yaniv Kaul :

>
>
> On Sat, Mar 25, 2017 at 9:20 AM, Charles Tassell 
> wrote:
>
>> Hi Everyone,
>>
>>   I'm about to setup an oVirt cluster with two hosts hitting a Linux
>> storage server.  Since the Linux box can provide the storage in pretty much
>> any form, I'm wondering which option is "best." Our primary focus is on
>> reliability, with performance being a close second.  Since we will only be
>> using a single storage server I was thinking NFS would probably beat out
>> GlusterFS, and that NFSv4 would be a better choice than NFSv3.  I had
>> assumed that iSCSI would be better performance-wise, but from what I'm
>> seeing online that might not be the case.
>>
>
> NFS 4.2 is better than NFS 3 in the sense that you'll get DISCARD support,
> which is nice.
> Gluster probably requires 3 servers.
> In most cases, I don't think people see the difference in performance
> between NFS and iSCSI. The theory is that block storage is faster, but in
> practice, most don't get to those limits where it matters really.
>
>
>>
>>   Our servers will be using a 1G network backbone for regular traffic and
>> a dedicated 10G backbone with LACP for redundancy and extra bandwidth for
>> storage traffic if that makes a difference.
>>
>
> LACP many times (especially on NFS) does not provide extra bandwidth, as
> the (single) NFS connection tends to be sticky to a single physical link.
> It's one of the reasons I personally prefer iSCSI with multipathing.
>
>
>>
>>   I'll probably try to do some performance benchmarks with 2-3 options,
>> but the reliability issue is a little harder to test for.  Has anyone had
>> any particularly bad experiences with a particular storage option?  We have
>> been using iSCSI with a Dell MD3x00 SAN and have run into a bunch of issues
>> with the multipath setup, but that won't be a problem with the new SAN
>> since it's only got a single controller interface.
>>
>
> A single controller is not very reliable. If reliability is your primary
> concern, I suggest ensuring there is no single point of failure - or at
> least you are aware of all of them (does the storage server have redundant
> power supply? to two power sources? Of course in some scenarios it's an
> overkill and perhaps not practical, but you should be aware of your weak
> spots).
>
> I'd stick with what you are most comfortable managing - creating, backing
> up, extending, verifying health, etc.
> Y.
>
>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt-shell backup

2017-03-23 Thread Marcin Kruk
Hello, is it possible to execute the actions below from ovirt-shell:
1. Make a snapshot
( ovirt-shell -E 'add snapshot --parent-vm-name <vm-name> --description
<description>' )
2. Make a clone from the snapshot above
( ? )
3. Export the clone
( ? )
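
I cannot vouch for the exact ovirt-shell flags for steps 2 and 3, but the same
operations are exposed by the REST API that ovirt-shell wraps; a rough sketch
with curl, where the engine URL, credentials, IDs and names are all
placeholders:

    base="https://engine.example/ovirt-engine/api"
    auth="admin@internal:password"

    # 2. clone a new VM from an existing snapshot
    curl --insecure --user "$auth" --request POST \
         --header "Content-Type: application/xml" \
         --data '<vm><name>vm01-clone</name><cluster><name>Default</name></cluster><snapshots><snapshot id="SNAPSHOT_ID"/></snapshots></vm>' \
         "$base/vms"

    # 3. export the clone to an export storage domain
    curl --insecure --user "$auth" --request POST \
         --header "Content-Type: application/xml" \
         --data '<action><storage_domain><name>export1</name></storage_domain></action>' \
         "$base/vms/CLONE_VM_ID/export"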
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] iSCSI and multipath deliberation

2017-03-16 Thread Marcin Kruk
In my opinion, the main problem with configuring iSCSI and multipath is
that the oVirt developers try to start everything automatically during
installation, and then during the startup of services like vdsmd.
But during the installation process the administrator should only choose the
right multipath WWID identifier.
And the administrator should be responsible for setting up multipath and iSCSI
properly.
Otherwise the oVirt installer does everything automatically in a universal
way, which is weak given so many storage types.
Howgh :)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OVirt new cluster host only logins on one iSCSI

2017-03-16 Thread Marcin Kruk
How to find the relation between "list storagedomains" and "list
storageconnections"?
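
In case it helps, the REST API exposes the relation directly: every storage
domain has a storageconnections sub-collection. A sketch, with the engine
address, credentials and the domain id as placeholders:

    curl --insecure --user 'admin@internal:password' \
         --header 'Accept: application/xml' \
         "https://engine.example/ovirt-engine/api/storagedomains/STORAGE_DOMAIN_ID/storageconnections"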

2017-03-07 17:55 GMT+01:00 Duarte Fernandes Rocha :

> Hello,
>
>
>
> For anyone interested this is how I solved the problem.
>
>
>
> First, please note that as I mentioned earlier the OVirt documentation
> states that ALL the iscsi sessions/paths must be configured when adding the
> storage domain:
>
>
>
> http://www.ovirt.org/documentation/admin-guide/chap-Storage/
>
> Adding iSCSI Storage
>
> ...
>
> Important: If more than one path access is required, ensure to discover
> and log in to the target through all the required paths. Modifying a
> storage domain to add additional paths is currently not supported.
>
>
>
> So one way to fix this issue is:
>
>
>
> 1. Put the Storage domain under maintenance
>
> 2. Remove the storage domain from the datacenter (do not wipe/format)
>
> 3. Import storage domain using all the iscsi paths
>
>
>
> Another way that I think is better because it does not involve deleting
> the storage domain is to use the REST API and/or OVirt Shell.
>
>
>
> The storage domain must be in maintenance and the iscsi storage connection
> must exist. The upside is that the storage domain does not have to be
> deleted and imported which may not be easy if there are VMs using that
> storage domain.
>
>
>
> First we need to have the storage domain ID and the storage connection ID
> (this one I got using the first method on an empty LUN)
>
>
>
>
>
> [oVirt shell (connected)]# list storagedomains
>
>
>
> id : d591d9f0-.
>
> name : ISO_DOMAIN
>
> description: ISO_DOMAIN
>
>
>
> id : 7d30dff2-.
>
> name : LUN
>
> description: ISCSI LUN
>
>
>
>
>
> [oVirt shell (connected)]# list storageconnections
>
>
>
> id : b793b463-.
>
>
>
> id : 65300f39-
>
>
>
> [oVirt shell (connected)]# show storageconnection 65300f39-ac93-423a-9692-
> 15984f842ae2
>
>
>
> id : 65300f39-.
>
> address : ip.addr.x.x
>
> port : 3260
>
> target : iqn.1992-04
>
> type : iscsi
>
> username: username
>
>
>
> With this info make a POST request to the API using curl
>
>
>
> #!/bin/sh -ex
>
>
>
> # https://ovirt.addr/ovirt-engine/api/storagedomains/STORAGE-DOMAIN-ID/
> storageconnections
>
>
>
> url="https://ovirt.addr/ovirt-engine/api/storagedomains/521fcfdc-./
> storageconnections"
>
> User="admin at internal"
>
> password="*"
>
>
>
> curl \
>
> --insecure \
>
> --user "${user}:${password}" \
>
> --request POST \
>
> --header "Accept: application/xml" --header "Content-Type:
> application/xml" \
>
> --data '
> <storage_connection id="65300f39-ac93-423a-9692-15984f842ae2"/>
> ' \
>
> "${url}"
>
>
>
> If there are easier/simpler ways please let me know =)
>
>
>
> --
>
> Duarte Fernandes Rocha 
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Maintenance GUI vs CLI

2017-03-14 Thread Marcin Kruk
So if maintenance mode from the GUI is necessary to upgrade a host, how
can it be launched from the CLI on that host?
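
For the engine-side notion of maintenance described in the reply below, the
closest CLI equivalent I know of is the REST API's deactivate action on the
host; a sketch, with the engine address, credentials and host id as
placeholders:

    curl --insecure --user 'admin@internal:password' --request POST \
         --header "Content-Type: application/xml" \
         --data '<action/>' \
         "https://engine.example/ovirt-engine/api/hosts/HOST_ID/deactivate"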

2017-03-14 11:30 GMT+01:00 Yedidyah Bar David <d...@redhat.com>:

> On Tue, Mar 14, 2017 at 12:12 PM, Marcin Kruk <askifyoun...@gmail.com>
> wrote:
> > What is the difference between:
>
> There are two different notions of host maintenance:
>
> 1. in the engine, meaning the engine will migrate away VMs from
> this host, not start new ones on it, etc. This applies to all
> hosts, not just hosted-engine ones
>
> 2. in ovirt-hosted-engine-ha, the high availability daemons.
> Here it means similar things, but applies only to the hosted
> engine vm, and is maintained in the HE shared storage (not in
> the engine db).
>
> > host CLI command:  hosted-engine --vm-maintenance --mode=local
>
> This one does (2.).
>
> > and
> > RHVEM gui click action:  Hosts ->  -> Maintenance?
>
> In the past, this did only (1.), but now it does both.
>
> See also:
>
> https://www.ovirt.org/develop/release-management/features/
> engine/self-hosted-engine-maintenance-flows/
> https://bugzilla.redhat.com/show_bug.cgi?id=1047649
> https://bugzilla.redhat.com/show_bug.cgi?id=1277646
>
> Best,
> --
> Didi
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Maintenance GUI vs CLI

2017-03-14 Thread Marcin Kruk
What is the difference between:
host CLI command:  hosted-engine --vm-maintenance --mode=local
and
RHVEM gui click action:  Hosts ->  -> Maintenance?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iscsi data domain when engine is down

2017-03-12 Thread Marcin Kruk
Are you 1000% sure? Because during startup vdsmd executes the iscsiadm
command with a specific target IP. In my opinion vdsmd should rely on the
/var/lib/iscsi settings.

2017-03-12 15:02 GMT+01:00 Chris Adams <c...@cmadams.net>:

> Once upon a time, Marcin Kruk <askifyoun...@gmail.com> said:
> > OK, but what does your script do? It only adds paths with the iscsiadm
> > command, but the question is whether hosted-engine can see them.
> > I do not know how to add an extra path. For example, when I configured
> > hosted-engine during installation there was only one path in the target,
> > but now there are four. So how can I verify how many paths there are now,
> > and how to eventually change them.
>
> oVirt accesses iSCSI storage through multipath devices, so adding a path
> to the multipath device will work.  Adding an additional path with
> iscsiadm causes multipathd to recognize it; you can verify that with
> "multipath -ll".
> --
> Chris Adams <c...@cmadams.net>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Marcin Kruk
OK, but what does your script do? It only adds paths with the iscsiadm
command, but the question is whether hosted-engine can see them.
I do not know how to add an extra path. For example, when I configured
hosted-engine during installation there was only one path in the target,
but now there are four. So how can I verify how many paths there are now, and
how to eventually change them.
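
To check the current state I use plain open-iscsi and multipath tooling; a
short sketch, nothing oVirt-specific:

    iscsiadm -m session     # one line per logged-in path (portal and target)
    iscsiadm -m node        # node records stored under /var/lib/iscsi/nodes
    multipath -ll           # how many of those paths ended up in the multipath map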

2017-03-12 0:32 GMT+01:00 Chris Adams :

> Once upon a time, Devin A. Bougie  said:
> > On Mar 11, 2017, at 10:59 AM, Chris Adams  wrote:
> > > Hosted engine runs fine on iSCSI since oVirt 3.5.  It needs a separate
> > > target from VM storage, but then that access is managed by the hosted
> > > engine HA system.
> >
> > Thanks so much, Chris.  It sounds like that is exactly what I was
> missing.
> >
> > It would be great to know how to add multiple paths to the hosted
> engine's iSCSI target, but hopefully I can figure that out once I have
> things up and running.
>
> oVirt doesn't currently support adding paths to an existing storage
> domain; they all have to be selected when the domain is created.  Since
> the hosted engine setup doesn't support that, there's no way to add
> additional paths after the fact.  I think adding paths to a regular
> storage domain is something that is being looked at (believe I read that
> on the list), so maybe if that gets added, support for adding to the
> hosted engine domain will be added as well.
>
> I have a script that gets run out of cron periodically that looks at
> sessions and configured paths and tries to add them if necessary.  I
> manually created the config for the additional paths with iscsiadm.  It
> works, although I'm not sure that the hosted engine HA is really "happy"
> with what I did. :)
>
> --
> Chris Adams 
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iscsi data domain when engine is down

2017-03-11 Thread Marcin Kruk
"Hosted engine runs fine on iSCSI since oVirt 3.5.", I have similar
configuration and also I have got problem with resolve the hosted-engine
storage by vdsm but the cause is that I do know how to edit iSCSI settings.
I suspect that there is only one path to target not four :(

Devin what is the output of "hosted-engine --vm-status"?

2017-03-11 16:59 GMT+01:00 Chris Adams :

> Once upon a time, Devin A. Bougie  said:
> > Thanks for replying, Juan.  I was under the impression that the hosted
> engine would run on an iSCSI data domain, based on
> http://www.ovirt.org/develop/release-management/features/
> engine/self-hosted-engine-iscsi-support/ and the fact that "hosted-engine
> --deploy" does give you the option to choose iscsi storage (but only one
> path, as far as I can tell).
>
> Hosted engine runs fine on iSCSI since oVirt 3.5.  It needs a separate
> target from VM storage, but then that access is managed by the hosted
> engine HA system.
>
> If all the engine hosts are shut down together, it will take a bit after
> boot for the HA system to converge and try to bring the engine back
> online (including logging in to the engine iSCSI LUN).  You can force
> this on one host by running "hosted-engine --vm-start".
>
> --
> Chris Adams 
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iscsi failover MD3820i

2017-03-08 Thread Marcin Kruk
Regarding non-hosted-engine storage, I found the "storage_server_connections"
and "lun_storage_server_connection_map" tables.
Is it possible to insert rows there with info about my extra iSCSI
connections?
connections?

2017-03-08 11:45 GMT+01:00 Marcin Kruk <askifyoun...@gmail.com>:

> I found the configuration file /etc/ovirt-hosted-engine/hosted-engine.conf
> with the field storage=.
> Is it possible to change that value to put four IPs there (the four array
> interface IPs)?
>
> 2017-03-08 10:58 GMT+01:00 Marcin Kruk <askifyoun...@gmail.com>:
>
>> How to verify this?
>> In the RHEV-M GUI, Storage ->  -> Manage Domain, I can see all
>> four target names,
>> but maybe that is only the present state and I should verify something else
>> in the database or wherever?
>>
>> Where is the info about the iSCSI hosted-engine storage target IP?
>> Information about the other storages is in the table
>> engine.storage_server_connections.
>> I presume that information about the iSCSI storage target IP cannot be in
>> the database due to a "chicken or the egg" dilemma.
>>
>> 2017-03-07 22:50 GMT+01:00 Dan Yasny <dya...@gmail.com>:
>>
>>> "*Important:* If more than one path access is required, ensure to
>>> discover and log in to the target through all the required paths. Modifying
>>> a storage domain to add additional paths is currently not supported."
>>>
>>> This is from the oVirt admin guide, did you discover and login to all
>>> the MD3xxxi controllers/portals when you were setting the storage domain up?
>>>
>>> On Tue, Mar 7, 2017 at 4:43 PM, Marcin Kruk <askifyoun...@gmail.com>
>>> wrote:
>>>
>>>> Hello, I have got a Dell MD3820i and four interfaces which are connected
>>>> to the hosts via switches.
>>>>
>>>> But there is only one IP which I can set during the configuration of
>>>> iSCSI RHV storage.
>>>> Everything had been fine until I unplugged the cable from the storage NIC
>>>> interface with the IP configured in the RHV iSCSI section.
>>>>
>>>> Now VDSM has a problem because it tries to connect via iscsiadm to that
>>>> exact IP and does not use the configuration from /var/lib/iscsi/, where
>>>> there are three extra paths to reach the LUN. In multipath -ll I see an
>>>> active path, of course with the second storage interface's IP
>>>>
>>>> systemctl status vdsmd:
>>>>CGroup: /system.slice/vdsmd.service
>>>>├─4975 /usr/bin/python /usr/share/vdsm/vdsm
>>>>├─5558 /usr/bin/sudo -n /usr/sbin/iscsiadm -m node -T
>>>> iqn.1984-05.com.dell:powervault.md3800i.600a098000ae589c5893ade2
>>>> -I default -p 192...
>>>>└─5559 /usr/sbin/iscsiadm -m node -T
>>>> iqn.1984-05.com.dell:powervault.md3800i.600a098000ae589c5893ade2
>>>> -I default -p 192.168.130.101 32...
>>>>
>>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iscsi failover MD3820i

2017-03-08 Thread Marcin Kruk
I found the configuration file /etc/ovirt-hosted-engine/hosted-engine.conf
with the field storage=.
Is it possible to change that value to put four IPs there (the four array
interface IPs)?

2017-03-08 10:58 GMT+01:00 Marcin Kruk <askifyoun...@gmail.com>:

> How to verify this?
> In the RHEV-M GUI, Storage ->  -> Manage Domain, I can see all
> four target names,
> but maybe that is only the present state and I should verify something else in
> the database or wherever?
>
> Where is the info about the iSCSI hosted-engine storage target IP?
> Information about the other storages is in the table
> engine.storage_server_connections.
> I presume that information about the iSCSI storage target IP cannot be in
> the database due to a "chicken or the egg" dilemma.
>
> 2017-03-07 22:50 GMT+01:00 Dan Yasny <dya...@gmail.com>:
>
>> "*Important:* If more than one path access is required, ensure to
>> discover and log in to the target through all the required paths. Modifying
>> a storage domain to add additional paths is currently not supported."
>>
>> This is from the oVirt admin guide, did you discover and login to all the
>> MD3xxxi controllers/portals when you were setting the storage domain up?
>>
>> On Tue, Mar 7, 2017 at 4:43 PM, Marcin Kruk <askifyoun...@gmail.com>
>> wrote:
>>
>>> Hello, I have got a Dell MD3820i and four interfaces which are connected to
>>> the hosts via switches.
>>>
>>> But there is only one IP which I can set during the configuration of
>>> iSCSI RHV storage.
>>> Everything had been fine until I unplugged the cable from the storage NIC
>>> interface with the IP configured in the RHV iSCSI section.
>>>
>>> Now VDSM has a problem because it tries to connect via iscsiadm to that
>>> exact IP and does not use the configuration from /var/lib/iscsi/, where
>>> there are three extra paths to reach the LUN. In multipath -ll I see an
>>> active path, of course with the second storage interface's IP
>>>
>>> systemctl status vdsmd:
>>>CGroup: /system.slice/vdsmd.service
>>>├─4975 /usr/bin/python /usr/share/vdsm/vdsm
>>>├─5558 /usr/bin/sudo -n /usr/sbin/iscsiadm -m node -T
>>> iqn.1984-05.com.dell:powervault.md3800i.600a098000ae589c5893ade2
>>> -I default -p 192...
>>>└─5559 /usr/sbin/iscsiadm -m node -T
>>> iqn.1984-05.com.dell:powervault.md3800i.600a098000ae589c5893ade2
>>> -I default -p 192.168.130.101 32...
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iscsi failover MD3820i

2017-03-08 Thread Marcin Kruk
How to verify this?
In the RHEV-M GUI, Storage ->  -> Manage Domain, I can see all
four target names,
but maybe that is only the present state and I should verify something else in
the database or wherever?

Where is the info about the iSCSI hosted-engine storage target IP?
Information about the other storages is in the table
engine.storage_server_connections.
I presume that information about the iSCSI storage target IP cannot be in
the database due to a "chicken or the egg" dilemma.

2017-03-07 22:50 GMT+01:00 Dan Yasny <dya...@gmail.com>:

> "*Important:* If more than one path access is required, ensure to
> discover and log in to the target through all the required paths. Modifying
> a storage domain to add additional paths is currently not supported."
>
> This is from the oVirt admin guide, did you discover and login to all the
> MD3xxxi controllers/portals when you were setting the storage domain up?
>
> On Tue, Mar 7, 2017 at 4:43 PM, Marcin Kruk <askifyoun...@gmail.com>
> wrote:
>
>> Hello, I have got a Dell MD3820i and four interfaces which are connected to
>> the hosts via switches.
>>
>> But there is only one IP which I can set during the configuration of
>> iSCSI RHV storage.
>> Everything had been fine until I unplugged the cable from the storage NIC
>> interface with the IP configured in the RHV iSCSI section.
>>
>> Now VDSM has a problem because it tries to connect via iscsiadm to that
>> exact IP and does not use the configuration from /var/lib/iscsi/, where
>> there are three extra paths to reach the LUN. In multipath -ll I see an
>> active path, of course with the second storage interface's IP
>>
>> systemctl status vdsmd:
>>CGroup: /system.slice/vdsmd.service
>>├─4975 /usr/bin/python /usr/share/vdsm/vdsm
>>├─5558 /usr/bin/sudo -n /usr/sbin/iscsiadm -m node -T
>> iqn.1984-05.com.dell:powervault.md3800i.600a098000ae589c5893ade2
>> -I default -p 192...
>>└─5559 /usr/sbin/iscsiadm -m node -T
>> iqn.1984-05.com.dell:powervault.md3800i.600a098000ae589c5893ade2
>> -I default -p 192.168.130.101 32...
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] iscsi failover MD3820i

2017-03-07 Thread Marcin Kruk
Hello, I have got a Dell MD3820i and four interfaces which are connected to
the hosts via switches.

But there is only one IP which I can set during the configuration of iSCSI
RHV storage.
Everything had been fine until I unplugged the cable from the storage NIC
interface with the IP configured in the RHV iSCSI section.

Now VDSM has a problem because it tries to connect via iscsiadm to that exact
IP and does not use the configuration from /var/lib/iscsi/, where there are
three extra paths to reach the LUN. In multipath -ll I see an active path,
of course with the second storage interface's IP.

systemctl status vdsmd:
   CGroup: /system.slice/vdsmd.service
   ├─4975 /usr/bin/python /usr/share/vdsm/vdsm
   ├─5558 /usr/bin/sudo -n /usr/sbin/iscsiadm -m node -T
iqn.1984-05.com.dell:powervault.md3800i.600a098000ae589c5893ade2 -I
default -p 192...
   └─5559 /usr/sbin/iscsiadm -m node -T
iqn.1984-05.com.dell:powervault.md3800i.600a098000ae589c5893ade2 -I
default -p 192.168.130.101 32...
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] lvscan on rhv host

2017-03-04 Thread Marcin Kruk
Could somebody give me a link to docs, or explain, the behavior of LVs in an
RHV 4 cluster? Sometimes they are active, other times they are inactive. And
how do I correlate the LV names with the disks of virtual machines, or other
disks?
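
A sketch of the correlation as far as I understand it: on a block storage
domain the VG name is the storage-domain UUID, each LV name is a volume UUID,
and the IU_ tag that VDSM sets on the LV is the image/disk ID shown in the
Admin Portal. VDSM also activates an LV only while the volume is in use, which
would explain them flipping between active and inactive. The VG name below is
a placeholder:

    # list the LVs of one storage domain together with their VDSM tags
    lvs -o vg_name,lv_name,lv_tags,lv_attr <storage-domain-uuid>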
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] hosted-engine status timestamp

2017-03-02 Thread Marcin Kruk
RHV 4.0 and a self-hosted engine.
Please explain why, when I run hosted-engine --vm-status, the timestamp
field in the "Extra metadata (valid at timestamp)" section is stale rather
than the present time?
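
As far as I understand, that timestamp is the last time the HA agent on the
host refreshed the shared metadata, so a stale value usually points at the
agent or broker rather than the engine itself. A quick check (service names as
shipped with hosted-engine):

    systemctl status ovirt-ha-agent ovirt-ha-broker   # both should be active
    hosted-engine --vm-status                         # re-check after restarting them if needed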
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users