[ovirt-users] Re: Desktops as Nodes and Client?

2018-11-03 Thread Drew Gilbert
Thanks for the suggestions, I'll look into them. I am looking for a user 
management solution, but we mostly use Windows desktops (along with some CentOS 
and macOS) and Office365, so we will probably end up with a solution based on 
Azure AD.

For extra context, one other reason for wanting the desktops to be nodes is 
that the base OS could be oVirt Node. Our DR site provider charges a lot if we 
need to update the OS image of the desktops, so using VMs could be a way to 
keep things up to date and in sync with the DR site without incurring those 
costs.

Thanks again,
Drew
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AK5NQQDOZXA2LWUVYC5UBYBDCCUTQMBH/


[ovirt-users] Re: iSCSI Boot from SAN

2018-11-03 Thread Nir Soffer
On Sat, Nov 3, 2018 at 1:36 PM Alan G  wrote:

> I eventually figured out that shared targets was the problem. I'm now
> using three targets: one for BFS, one for hosted_storage and one for an
> additional storage domain. This seems to be working fine. However, I've
> noticed that the hosted_storage domain is only utilising a single path. Is
> there any way to get hosted_storage working with MP?
>

It should work the same way you got multiple paths for BFS and the 2nd
storage domain.

Did you try to configure iSCSI multipathing? The setting should be available
as a sub-tab in the Data Center (DC) tab.
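
To double-check on the host side, you can also compare the iSCSI sessions and
the paths multipath sees for the LUN; roughly something like this (the WWID
below is just the hosted_storage GUID from your output):

    iscsiadm -m session -P 1
    multipath -ll 3600a098038304630662b4d612d736762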

Simone can add specific details for hosted engine setup and iSCSI
multipathing.


>
> The output of getDeviceList is below
>
> BFS - 3600a098038304631373f4d2f70305a6b
> hosted_storage - 3600a098038304630662b4d612d736762
> 2nd data domain - 3600a098038304630662b4d612d736764
>
> [
> ...
>
> {
> "status": "used",
> "vendorID": "NETAPP",
> "GUID": "3600a098038304630662b4d612d736762",
> "capacity": "107374182400",
> "fwrev": "9300",
> "discard_zeroes_data": 0,
> "vgUUID": "CeFXY1-34gB-NJPP-tw18-nZWo-qAWu-6cx82z",
> "pathlist": [
> {
> "connection": "172.31.6.7",
> "iqn":
> "iqn.1992-08.com.netapp:sn.39d910dede8311e8a98a00a098d7cd76:vs.5",
> "portal": "1030",
>

Do you have an additional portal defined on the server side for this
connection?
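
You can check which portals the target advertises with a sendtargets
discovery against that portal, e.g. (IP taken from the pathlist above):

    iscsiadm -m discovery -t sendtargets -p 172.31.6.7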


> "port": "3260",
> "initiatorname": "default"
> }
> ],
> "pvsize": "106971529216",
> "discard_max_bytes": 0,
> "pathstatus": [
> {
> "capacity": "107374182400",
> "physdev": "sdc",
> "type": "iSCSI",
> "state": "active",
> "lun": "0"
> }
> ],
> "devtype": "iSCSI",
> "physicalblocksize": "4096",
> "pvUUID": "eejJVE-BTns-VgK0-s00D-t1sP-Fc4y-l60XUt",
> "serial": "SNETAPP_LUN_C-Mode_80F0f+Ma-sgb",
> "logicalblocksize": "512",
> "productID": "LUN C-Mode"
> },
> ...
> ]
>

Nir


>
>
>  On Sat, 03 Nov 2018 00:01:02 + Nir Soffer wrote:
>
> On Fri, 2 Nov 2018, 20:31 Alan G wrote:
> I'm setting up a lab with oVirt 4.2. All hosts are disk-less and boot from
> a NetApp using iSCSI. All storage domains are also iSCSI, to the same
> NetApp as BFS.
>
> Whenever I put a host into maintenance vdsm seems to try to un-mount all
> iSCSI partitions including the OS partition, causing the host fail.
>
> Is this a supported configuration?
>
>
> This works (with some issues, see below) for FC, when all LUNs are always
> connected.
>
> For iSCSI we don't have a way to prevent the disconnect, since we are not
> aware that you boot from one of the LUNs. I guess we could detect that and
> avoid the disconnect, but nobody has sent a patch to implement it.
>
> It can work if you serve the boot LUNs from a different portal on the same
> server. The system will create an additional iSCSI connection for the oVirt
> storage domains, and disconnecting from storage will not affect your boot
> LUN connection.
>
> It can also work if your LUNs look like FC devices - to check this
> option, can you share the output of:
>
> vdsm-client Host getDeviceList
>
> On one of the hosts?
>
> Elad, did we test such setup?
>
> You also need to blacklist the boot LUN in the vdsm config - this requires
> the following patch for 4.2:
> https://gerrit.ovirt.org/c/93301/
>
> And add a multipath configuration for the boot LUN with "no_path_retry
> queue", to avoid a read-only file system if you lose all paths to storage.
>
> Nir
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y46ASEUKSH3O3GUL4SWRXZDNIMWNGBQU/
>
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4C73ZBMUUC5ROZ3RAT6WRURVUGFHRNVE/


[ovirt-users] Re: iSCSI Boot from SAN

2018-11-03 Thread Alan G
I eventually figured out that shared targets was the problem. I'm now using 
three targets: one for BFS, one for hosted_storage and one for an additional 
storage domain. This seems to be working fine. However, I've noticed that the 
hosted_storage domain is only utilising a single path. Is there any way to get 
hosted_storage working with MP?

The output of getDeviceList is below:

BFS - 3600a098038304631373f4d2f70305a6b
hosted_storage - 3600a098038304630662b4d612d736762
2nd data domain - 3600a098038304630662b4d612d736764

[
    {
        "status": "used",
        "vendorID": "NETAPP",
        "GUID": "3600a098038304631373f4d2f70305a6b",
        "capacity": "53687091200",
        "fwrev": "9000",
        "discard_zeroes_data": 0,
        "vgUUID": "",
        "pathlist": [
            {
                "connection": "172.31.6.4",
                "iqn": "iqn.1992-08.com.netapp:sn.cd78fb1bdc5311e8a98a00a098d7cd76:vs.4",
                "portal": "1027",
                "port": "3260",
                "initiatorname": "default"
            },
            {
                "connection": "172.31.6.5",
                "iqn": "iqn.1992-08.com.netapp:sn.cd78fb1bdc5311e8a98a00a098d7cd76:vs.4",
                "portal": "1028",
                "port": "3260",
                "initiatorname": "default"
            }
        ],
        "pvsize": "",
        "discard_max_bytes": 0,
        "pathstatus": [
            {
                "capacity": "53687091200",
                "physdev": "sda",
                "type": "iSCSI",
                "state": "active",
                "lun": "0"
            },
            {
                "capacity": "53687091200",
                "physdev": "sdb",
                "type": "iSCSI",
                "state": "active",
                "lun": "0"
            }
        ],
        "devtype": "iSCSI",
        "physicalblocksize": "4096",
        "pvUUID": "",
        "serial": "SNETAPP_LUN_C-Mode_80F17_M_p0Zk",
        "logicalblocksize": "512",
        "productID": "LUN C-Mode"
    },
    {
        "status": "used",
        "vendorID": "NETAPP",
        "GUID": "3600a098038304630662b4d612d736762",
        "capacity": "107374182400",
        "fwrev": "9300",
        "discard_zeroes_data": 0,
        "vgUUID": "CeFXY1-34gB-NJPP-tw18-nZWo-qAWu-6cx82z",
        "pathlist": [
            {
                "connection": "172.31.6.7",
                "iqn": "iqn.1992-08.com.netapp:sn.39d910dede8311e8a98a00a098d7cd76:vs.5",
                "portal": "1030",
                "port": "3260",
                "initiatorname": "default"
            }
        ],
        "pvsize": "106971529216",
        "discard_max_bytes": 0,
        "pathstatus": [
            {
                "capacity": "107374182400",
                "physdev": "sdc",
                "type": "iSCSI",
                "state": "active",
                "lun": "0"
            }
        ],
        "devtype": "iSCSI",
        "physicalblocksize": "4096",
        "pvUUID": "eejJVE-BTns-VgK0-s00D-t1sP-Fc4y-l60XUt",
        "serial": "SNETAPP_LUN_C-Mode_80F0f+Ma-sgb",
        "logicalblocksize": "512",
        "productID": "LUN C-Mode"
    },
    {
        "status": "used",
        "vendorID": "NETAPP",
        "GUID": "3600a098038304630662b4d612d736764",
        "capacity": "1099529453568",
        "fwrev": "9300",
        "discard_zeroes_data": 0,
        "vgUUID": "QNJvat-uz1N-s53M-WlH7-NM6L-CH5F-LpVhtR",
        "pathlist": [
            {
                "connection": "172.31.6.9",
                "iqn": "iqn.1992-08.com.netapp:sn.7d9d3bb2dece11e8a98a00a098d7cd76:vs.6",
                "portal": "1032",
                "port": "3260",
                "initiatorname": "default"
            },
            {
                "connection": "172.31.6.10",
                "iqn": "iqn.1992-08.com.netapp:sn.7d9d3bb2dece11e8a98a00a098d7cd76:vs.6",
                "portal": "1033",
                "port": "3260",
                "initiatorname": "default"
            }
        ],
        "pvsize": "1099243192320",
        "discard_max_bytes": 0,
        "pathstatus": [
            {
                "capacity": "1099529453568",
                "physdev": "sdd",
                "type": "iSCSI",
                "state": "active",
                "lun": "0"
            },
            {
                "capacity": "1099529453568",
                "physdev": "sdf",
                "type": "iSCSI",
                "state": "active",
                "lun": "0"
            }
        ],
        "devtype": "iSCSI",
        "physicalblocksize": "4096",
        "pvUUID": "cWucdo-DYZc-IlLU-VuED-6FAa-iLdx-dq3RWU",
        "serial": "SNETAPP_LUN_C-Mode_80F0f+Ma-sgd",
        "logicalblocksize": "512",
        "productID": "LUN C-Mode"
    },
    {
        "status": "used",
        "vendorID": "NETAPP",
        "GUID": "3600a098038304631373f4d2f70305a6e",
        "capacity": "536952700928",

[ovirt-users] Re: ovirt-engine-extension-aaa-ldap-setup failed

2018-11-03 Thread Jeremy Tourville
I have been trying to find the setting to confirm that.
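
One thing I can try is to verify the bind directly with the OpenLDAP client
tools over startTLS; something like this (the host and bind DN below are just
placeholders for my values):

    ldapwhoami -H ldap://ldap.example.com -ZZ -D "cn=Manager,dc=example,dc=com" -W

If that succeeds but the aaa setup script still fails, at least it narrows
things down to the setup tool's profile rather than the directory itself.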

On Nov 2, 2018 7:43 AM, Donny Davis  wrote:
Is binding allowed in your 389ds instance?


On Fri, Nov 2, 2018, 8:11 AM Jeremy Tourville <jeremy_tourvi...@hotmail.com> wrote:
The backend is 389 DS; no, this is not Govt related. This will be used as a 
training platform for my local ISSA chapter. This is a new 389 DS server. I 
followed the instructions at 
https://www.unixmen.com/install-and-configure-ldap-server-in-centos-7/
The server is "stock" with the exception of the settings for startTLS, adding 
certificates, etc. (basically, whatever is needed to integrate with the oVirt 
Engine).
I am using my Admin account to perform the bind. What I don't understand is 
why everything else in the aaa setup script works except the login sequence. 
It would seem that my certificates are correct, the admin DN is used correctly, 
etc. The funny part is that I can log in to the server using the admin account 
and password, yet the same admin account and password fail when using the aaa 
setup script. But that is why I am drawing on the expert knowledge on the 
list! Maybe I have overlooked a simple prerequisite setting needed for setup 
somewhere?

I'll wait for someone to chime in on possible reasons to get this message:
SEVERE  Authn.Result code is: CREDENTIALS_INVALID
[ ERROR ] Login sequence failed

__
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TGT7ASCWSUTU6TDT2HIBLBCRL2CEF3G6/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JN4AMQUNTFGL2NDUWNDG2AZTF7YIQPN6/