Re: [ovirt-users] imageio: Error setting ImageProxyAddress's value. No such entry with version general.

2017-08-28 Thread Daniel Erez
Hi Richard,

This issue has been already addressed by
https://bugzilla.redhat.com/show_bug.cgi?id=1476979
The fix should be in the latest build (4.1.5.1).
Alternatively, you can amend the value manually in the DB if you prefer to
avoid updating.
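
For reference, a minimal sketch of the manual route, assuming a default install
where the engine database is named 'engine' and configuration keys live in the
vdc_options table (the proxy address below is a placeholder; check the table
layout first, and treat this as an unsupported workaround):

  engine-config -g ImageProxyAddress        # confirm the key is missing
  sudo -u postgres psql engine -c "\d vdc_options"
  sudo -u postgres psql engine -c \
    "INSERT INTO vdc_options (option_name, option_value, version) \
     VALUES ('ImageProxyAddress', 'proxy.example.com:54323', 'general');"
  systemctl restart ovirt-engine            # pick up the new value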


On Sat, Jul 29, 2017 at 10:21 AM Richard Chan wrote:

> oVirt 4.1.4 - engine-setup of imageio proxy on a host, at the end it
> requires:
>
> engine-config -s ImageProxyAddress=:54323
>
> But on engine:
>
> Error setting ImageProxyAddress's value. No such entry with version
> general.
>
> Any ideas? Thanks!
>
>
>
> --
> Richard Chan
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Kasturi Narra
Can you please check whether you have any additional disk in the system? If
you have a disk other than the one used for the root partition, you can
specify that disk (with no partitions on it) in the Cockpit UI (I hope you
are using the Cockpit UI to do the installation). That will take care of the
installation and make your life easier, as Cockpit + gdeploy will configure
the Gluster bricks and volumes for you.
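
A quick way to check for such a disk before starting (a hedged example;
/dev/sdb is only an assumed device name):

  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
  # a suitable brick disk (e.g. /dev/sdb) shows up as TYPE "disk"
  # with no child partitions and no mountpoint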

On Mon, Aug 28, 2017 at 2:55 PM, Anzar Esmail Sainudeen <
an...@it.thumbay.com> wrote:

> Dear Nara,
>
>
>
> All the partitions, PVs and VGs were created automatically during the initial
> setup.
>
>
>
> [root@ovirtnode1 ~]# vgs
>
>   VG  #PV #LV #SN Attr   VSize   VFree
>
>   onn   1  12   0 wz--n- 555.73g 14.93g
>
>
>
> All the space is mounted at the locations below; the free space is mounted
> on /.
>
>
>
> Filesystem                                              Size  Used Avail Use% Mounted on
> /dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1  513G  4.2G  483G   1% /
> devtmpfs                                                 44G     0   44G   0% /dev
> tmpfs                                                    44G  4.0K   44G   1% /dev/shm
> tmpfs                                                    44G   33M   44G   1% /run
> tmpfs                                                    44G     0   44G   0% /sys/fs/cgroup
> /dev/sda2                                               976M  135M  774M  15% /boot
> /dev/mapper/onn-home                                    976M  2.6M  907M   1% /home
> /dev/mapper/onn-tmp                                     2.0G  6.3M  1.8G   1% /tmp
> /dev/sda1                                               200M  9.5M  191M   5% /boot/efi
> /dev/mapper/onn-var                                      15G  1.8G   13G  13% /var
> /dev/mapper/onn-var--log                                7.8G  224M  7.2G   3% /var/log
> /dev/mapper/onn-var--log--audit                         2.0G   44M  1.8G   3% /var/log/audit
> tmpfs                                                   8.7G     0  8.7G   0% /run/user/0
>
>
>
> If we need more space, we would have to reduce the VG size and create a new
> one. (Is this correct?)
>
>
>
>
>
> If the above step is complicated, can you please suggest how to set up a
> GlusterFS data store in oVirt?
>
>
>
> Anzar Esmail Sainudeen
>
> Group Datacenter Incharge| IT Infra Division | Thumbay Group
>
> P.O Box : 4184 | Ajman | United Arab Emirates.
>
> Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303
>
> Email: an...@it.thumbay.com | Website: www.thumbay.com
>
>
>
>
>
>
>
> *From:* Kasturi Narra [mailto:kna...@redhat.com]
> *Sent:* Monday, August 28, 2017 1:14 PM
>
> *To:* Anzar Esmail Sainudeen
> *Cc:* users
> *Subject:* Re: [ovirt-users] hosted engine setup with Gluster fail
>
>
>
> Yes, you can create it; I do not see any problems there.
>
>
>
> May I know how these VGs were created? If they were not created using
> gdeploy, you will have to create the bricks manually from the new VG you
> have created.
>
>
>
> On Mon, Aug 28, 2017 at 2:10 PM, Anzar Esmail Sainudeen <
> an...@it.thumbay.com> wrote:
>
> Dear Nara,
>
>
>
> Thank you for your great reply.
>
>
>
> 1) Can you please check that the disks that would be used for brick creation
> do not have labels or any partitions on them?
>
>
>
> Yes, I agree; there are no labels or partitions on it. My doubt is whether it
> is possible to create the required brick partitions from the available 406.7G
> of Linux LVM space. The physical volume and volume group information follows.
>
>
>
>
>
> [root@ovirtnode1 ~]# pvdisplay
>
>   --- Physical volume ---
>
>   PV Name   /dev/sda3
>
>   VG Name   onn
>
>   PV Size   555.73 GiB / not usable 2.00 MiB
>
>   Allocatable   yes
>
>   PE Size   4.00 MiB
>
>   Total PE  142267
>
>   Free PE   3823
>
>   Allocated PE  138444
>
>   PV UUID   v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe
>
>
>
> [root@ovirtnode1 ~]# vgdisplay
>
>   --- Volume group ---
>
>   VG Name   onn
>
>   System ID
>
>   Formatlvm2
>
>   Metadata Areas1
>
>   Metadata Sequence No  48
>
>   VG Access read/write
>
>   VG Status resizable
>
>   MAX LV0
>
>   Cur LV12
>
>   Open LV   7

Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Anzar Esmail Sainudeen
Dear Nara,

 

All the partitions, PVs and VGs were created automatically during the initial
setup.

 

[root@ovirtnode1 ~]# vgs

  VG  #PV #LV #SN Attr   VSize   VFree 

  onn   1  12   0 wz--n- 555.73g 14.93g

 

All the space is mounted at the locations below; the free space is mounted on /.

 

Filesystem                                              Size  Used Avail Use% Mounted on
/dev/mapper/onn-ovirt--node--ng--4.1.4--0.20170728.0+1  513G  4.2G  483G   1% /
devtmpfs                                                 44G     0   44G   0% /dev
tmpfs                                                    44G  4.0K   44G   1% /dev/shm
tmpfs                                                    44G   33M   44G   1% /run
tmpfs                                                    44G     0   44G   0% /sys/fs/cgroup
/dev/sda2                                               976M  135M  774M  15% /boot
/dev/mapper/onn-home                                    976M  2.6M  907M   1% /home
/dev/mapper/onn-tmp                                     2.0G  6.3M  1.8G   1% /tmp
/dev/sda1                                               200M  9.5M  191M   5% /boot/efi
/dev/mapper/onn-var                                      15G  1.8G   13G  13% /var
/dev/mapper/onn-var--log                                7.8G  224M  7.2G   3% /var/log
/dev/mapper/onn-var--log--audit                         2.0G   44M  1.8G   3% /var/log/audit
tmpfs                                                   8.7G     0  8.7G   0% /run/user/0

 

If we need more space, we would have to reduce the VG size and create a new
one. (Is this correct?)

 

 

If the above step is complicated, can you please suggest how to set up a
GlusterFS data store in oVirt?

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: an...@it.thumbay.com | Website: www.thumbay.com



 


 

From: Kasturi Narra [mailto:kna...@redhat.com] 
Sent: Monday, August 28, 2017 1:14 PM
To: Anzar Esmail Sainudeen
Cc: users
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail

 

Yes, you can create it; I do not see any problems there.

 

May I know how these VGs were created? If they were not created using gdeploy,
you will have to create the bricks manually from the new VG you have created.

 

On Mon, Aug 28, 2017 at 2:10 PM, Anzar Esmail Sainudeen wrote:

Dear Nara,

 

Thank you for your great reply.

 

1) Can you please check that the disks that would be used for brick creation
do not have labels or any partitions on them?

 

Yes, I agree; there are no labels or partitions on it. My doubt is whether it is
possible to create the required brick partitions from the available 406.7G of
Linux LVM space. The physical volume and volume group information follows.

 

 

[root@ovirtnode1 ~]# pvdisplay 

  --- Physical volume ---

  PV Name   /dev/sda3

  VG Name   onn

  PV Size   555.73 GiB / not usable 2.00 MiB

  Allocatable   yes 

  PE Size   4.00 MiB

  Total PE  142267

  Free PE   3823

  Allocated PE  138444

  PV UUID   v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe

   

[root@ovirtnode1 ~]# vgdisplay 

  --- Volume group ---

  VG Name   onn

  System ID 

  Formatlvm2

  Metadata Areas1

  Metadata Sequence No  48

  VG Access read/write

  VG Status resizable

  MAX LV0

  Cur LV12

  Open LV   7

  Max PV0

  Cur PV1

  Act PV1

  VG Size   555.73 GiB

  PE Size   4.00 MiB

  Total PE  142267

  Alloc PE / Size   138444 / 540.80 GiB

  Free  PE / Size   3823 / 14.93 GiB

  VG UUID   nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy

   

 

I am thinking of reducing the VG size and creating a new VG for Gluster. Is that
a good approach?

   

 

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: an...@it.thumbay.com   | Website: 

Re: [ovirt-users] Question on Datacenters / clusters / data domains

2017-08-28 Thread Shani Leviim
Hi Eduardo,
Welcome aboard!

First, you may find some relevant information here:
http://www.ovirt.org/documentation/admin-guide/administration-guide/

Regarding your questions:
* A data domain in an oVirt Data Center must be available to every Host on
the Data Center: Am I right?
Yes, you're right.

* Can I manually migrate VMs between Datacenters?
VM migration can't be performed between data centers, so you can't use the
'migrate VM' function.
In order to "migrate" a VM between different data centers, you can use the
'export' and 'import' functions together with an 'export domain':
create an export domain on one of your DCs (each DC can have at most one
export domain), export your VM to that storage domain, detach the export
domain from that DC and attach it to the other DC, and then import the VM
there to complete the move.

Another option is to detach the VM's storage domain from one DC and attach
it to the second one.
That way you move the whole storage domain between your DCs.
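
For reference, the same flow can be driven through the REST API; here is a
heavily hedged sketch (the engine URL, credentials, IDs and the export domain
name are placeholders, and the export domain must be in maintenance before it
can be detached):

  # export the VM to the export domain attached to the source DC
  curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
    -d '<action><storage_domain><name>export1</name></storage_domain></action>' \
    https://engine.example.com/ovirt-engine/api/vms/VM_ID/export
  # detach the export domain from the source DC (deactivate it first)
  curl -k -u admin@internal:PASSWORD -X DELETE \
    https://engine.example.com/ovirt-engine/api/datacenters/SRC_DC_ID/storagedomains/EXPORT_SD_ID
  # attach it to the destination DC
  curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
    -d '<storage_domain id="EXPORT_SD_ID"/>' \
    https://engine.example.com/ovirt-engine/api/datacenters/DST_DC_ID/storagedomains

After the attach, the exported VMs appear under the destination DC's VM Import
tab, from where you can import them.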

If you have any further questions, don't hesitate to ask :)


*Regards,*

*Shani Leviim*

On Thu, Aug 24, 2017 at 2:51 PM, Eduardo Mayoral  wrote:

> Hi,
>
> First of all, sorry for the naive question, but I have not been able
> to find good guidance on the docs.
>
> I come from the VMWare environment, now I am starting to migrate
> some workload from VMWare to oVirt (v4.1.4 , CentOS 7.3 hosts).
>
> In VMware I am used to having one datacenter, several host clusters,
> and a bunch of iSCSI datastores, but we do not map every iSCSI
> LUN/datastore to every host. Actually we used to do that, but we hit
> limits on the number of iSCSI paths with our infrastructure.
>
> Rather than that, we have groups of LUNs/Datastores mapped to the
> ESXi hosts which form a given VMware cluster. Then we have a couple of
> datastores mapped to every ESXi in the vmware datacenter, and we use
> those to store the ISO images and as storage that we use when we need to
> migrate VMs between clusters for some reason.
>
> Given the role of the Master data domain and the SPM in oVirt, it is
> my understanding that I cannot replicate this kind of setup in oVirt: a
> data domain in an oVirt Data Center must be available to every Host on
> the Data Center: Am I right?
>
> So, our current setup is still small, but I am concerned that as it
> grows, if I stay with one Datacenter, several clusters and a group of
> data domains mapped to every host I may run again into problems with the
> number of iSCSI paths (the limit in VMWare was around 1024), it is easy
> to reach that limit as it is (number of hosts) * (number of LUNs) *
> (number of paths/LUN).
>
> If I split my setup into several datacenters controlled by a single
> oVirt engine in order to keep the number of iSCSI paths reasonable, can
> I manually migrate VMs between datacenters? I assume that in order to do
> that, those datacenters will need to share some data domain. Can this
> be done? Maybe with NFS?
>
> Thanks for your help!
>
> --
> Eduardo Mayoral Jimeno (emayo...@arsys.es)
> Administrador de sistemas. Departamento de Plataformas. Arsys internet.
> +34 941 620 145 ext. 5153
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Kasturi Narra
Yes, you can create it; I do not see any problems there.

May I know how these VGs were created? If they were not created using
gdeploy, you will have to create the bricks manually from the new VG you
have created.
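
If you do end up creating the bricks by hand, a minimal sketch looks like the
following, assuming a spare disk /dev/sdb; the names gluster_vg, gluster_lv and
/gluster_bricks/engine are placeholders:

  # create a dedicated VG and an XFS-formatted logical volume for the brick
  vgcreate gluster_vg /dev/sdb
  lvcreate -n gluster_lv -L 100G gluster_vg
  mkfs.xfs -i size=512 /dev/gluster_vg/gluster_lv
  mkdir -p /gluster_bricks/engine
  mount /dev/gluster_vg/gluster_lv /gluster_bricks/engine
  echo '/dev/gluster_vg/gluster_lv /gluster_bricks/engine xfs defaults 0 0' >> /etc/fstab

Repeat on each host, then build the Gluster volume on top of those brick
directories.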

On Mon, Aug 28, 2017 at 2:10 PM, Anzar Esmail Sainudeen <
an...@it.thumbay.com> wrote:

> Dear Nara,
>
>
>
> Thank you for your great reply.
>
>
>
> 1) Can you please check that the disks that would be used for brick creation
> do not have labels or any partitions on them?
>
>
>
> Yes, I agree; there are no labels or partitions on it. My doubt is whether it
> is possible to create the required brick partitions from the available 406.7G
> of Linux LVM space. The physical volume and volume group information follows.
>
>
>
>
>
> [root@ovirtnode1 ~]# pvdisplay
>
>   --- Physical volume ---
>
>   PV Name   /dev/sda3
>
>   VG Name   onn
>
>   PV Size   555.73 GiB / not usable 2.00 MiB
>
>   Allocatable   yes
>
>   PE Size   4.00 MiB
>
>   Total PE  142267
>
>   Free PE   3823
>
>   Allocated PE  138444
>
>   PV UUID   v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe
>
>
>
> [root@ovirtnode1 ~]# vgdisplay
>
>   --- Volume group ---
>
>   VG Name   onn
>
>   System ID
>
>   Formatlvm2
>
>   Metadata Areas1
>
>   Metadata Sequence No  48
>
>   VG Access read/write
>
>   VG Status resizable
>
>   MAX LV0
>
>   Cur LV12
>
>   Open LV   7
>
>   Max PV0
>
>   Cur PV1
>
>   Act PV1
>
>   VG Size   555.73 GiB
>
>   PE Size   4.00 MiB
>
>   Total PE  142267
>
>   Alloc PE / Size   138444 / 540.80 GiB
>
>   Free  PE / Size   3823 / 14.93 GiB
>
>   VG UUID   nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy
>
>
>
>
>
> I am thinking of reducing the VG size and creating a new VG for Gluster. Is
> that a good approach?
>
>
>
>
>
>
>
> Anzar Esmail Sainudeen
>
> Group Datacenter Incharge| IT Infra Division | Thumbay Group
>
> P.O Box : 4184 | Ajman | United Arab Emirates.
>
> Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303
>
> Email: an...@it.thumbay.com | Website: www.thumbay.com
>
>
>
>
>
>
>
> *From:* Kasturi Narra [mailto:kna...@redhat.com]
> *Sent:* Monday, August 28, 2017 9:48 AM
> *To:* Anzar Esmail Sainudeen
> *Cc:* users
> *Subject:* Re: [ovirt-users] hosted engine setup with Gluster fail
>
>
>
> Hi,
>
>
>
> If I understand right, the gdeploy script is failing at [1]. There could be
> two possible reasons why that would fail.
>
>
>
> 1) Can you please check that the disks that would be used for brick creation
> do not have labels or any partitions on them?
>
>
>
> 2) Can you please check whether the path [1] exists. If it does not, can you
> please change the path of the script in the gdeploy.conf file
> to /usr/share/gdeploy/scripts/grafton-sanity-check.sh
>
>
>
> [1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
>
>
>
> Thanks
>
> kasturi
>
>
>
> On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen <
> an...@it.thumbay.com> wrote:
>
> Dear Team Ovirt,
>
>
>
> I am trying to deploy the hosted engine setup with Gluster, and the hosted
> engine setup failed. The total number of hosts is 3.
>
>
>
>
>
> PLAY [gluster_servers] *********************************************************
>
> TASK [Run a shell script] ******************************************************
>
> fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
> fatal: [ovirtnode3.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
> fatal: [ovirtnode2.thumbaytechlabs.int]: FAILED! => {"failed": true,
> "msg": "The conditional check 'result.rc != 0' failed. The error was: error
> while evaluating conditional (result.rc != 0): 'dict object' has no
> attribute 'rc'"}
> to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry
>
> PLAY RECAP *********************************************************************

Re: [ovirt-users] hosted engine setup with Gluster fail

2017-08-28 Thread Anzar Esmail Sainudeen
Dear Nara,

 

Thank you for your great reply.

 

1) Can you please check that the disks that would be used for brick creation
do not have labels or any partitions on them?

 

Yes, I agree; there are no labels or partitions on it. My doubt is whether it is
possible to create the required brick partitions from the available 406.7G of
Linux LVM space. The physical volume and volume group information follows.

 

 

[root@ovirtnode1 ~]# pvdisplay 

  --- Physical volume ---

  PV Name   /dev/sda3

  VG Name   onn

  PV Size   555.73 GiB / not usable 2.00 MiB

  Allocatable   yes 

  PE Size   4.00 MiB

  Total PE  142267

  Free PE   3823

  Allocated PE  138444

  PV UUID   v1eGGf-r1he-3XZt-JUOM-8XiT-iGkf-0xClUe

   

[root@ovirtnode1 ~]# vgdisplay 

  --- Volume group ---

  VG Name   onn

  System ID 

  Formatlvm2

  Metadata Areas1

  Metadata Sequence No  48

  VG Access read/write

  VG Status resizable

  MAX LV0

  Cur LV12

  Open LV   7

  Max PV0

  Cur PV1

  Act PV1

  VG Size   555.73 GiB

  PE Size   4.00 MiB

  Total PE  142267

  Alloc PE / Size   138444 / 540.80 GiB

  Free  PE / Size   3823 / 14.93 GiB

  VG UUID   nFfNXN-DcJt-bX1Q-UQ2U-07J5-ceT3-ULFtcy

   

 

I am thinking of reducing the VG size and creating a new VG for Gluster. Is that
a good approach?

   

 

 

Anzar Esmail Sainudeen

Group Datacenter Incharge| IT Infra Division | Thumbay Group 

P.O Box : 4184 | Ajman | United Arab Emirates. 

Mobile: 055-8633699|Tel: 06 7431333 |  Extn :1303

Email: an...@it.thumbay.com | Website: www.thumbay.com



 


 

From: Kasturi Narra [mailto:kna...@redhat.com] 
Sent: Monday, August 28, 2017 9:48 AM
To: Anzar Esmail Sainudeen
Cc: users
Subject: Re: [ovirt-users] hosted engine setup with Gluster fail

 

Hi,

 

   If I understand right, the gdeploy script is failing at [1]. There could be
two possible reasons why that would fail.

 

1) Can you please check that the disks that would be used for brick creation
do not have labels or any partitions on them?

 

2) Can you please check whether the path [1] exists. If it does not, can you
please change the path of the script in the gdeploy.conf file to
/usr/share/gdeploy/scripts/grafton-sanity-check.sh

 

[1] /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh
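
For reference, the stanza that invokes this script in the generated gdeploy.conf
usually looks roughly like the following (a hedged sketch; the -d/-h arguments
are assumptions that depend on your disks and hosts, and only the file= path
would need to change):

  [script1]
  action=execute
  file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h ovirtnode2.thumbaytechlabs.int,ovirtnode3.thumbaytechlabs.int,ovirtnode4.thumbaytechlabs.int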

 

Thanks

kasturi

 

On Sun, Aug 27, 2017 at 6:52 PM, Anzar Esmail Sainudeen wrote:

Dear Team Ovirt,

 

I am trying to deploy the hosted engine setup with Gluster, and the hosted
engine setup failed. The total number of hosts is 3.

 

 

PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ******************************************************

fatal: [ovirtnode4.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while evaluating
conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}

fatal: [ovirtnode3.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while evaluating
conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}

fatal: [ovirtnode2.thumbaytechlabs.int]: FAILED! => {"failed": true, "msg": "The
conditional check 'result.rc != 0' failed. The error was: error while evaluating
conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}

to retry, use: --limit @/tmp/tmp59G7Vc/run-script.retry

PLAY RECAP *********************************************************************

ovirtnode2.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1

ovirtnode3.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1

ovirtnode4.thumbaytechlabs.int : ok=0    changed=0    unreachable=0    failed=1

 

 

Re: [ovirt-users] Ovirt 4.1 testing backup and restore Self-hosted Engine

2017-08-28 Thread wodel youchi
Thanks, it worked like a charm.
Regards

On 27 August 2017 at 09:26, "Yedidyah Bar David" wrote:

On Sat, Aug 26, 2017 at 1:56 AM, wodel youchi 
wrote:

> Hi again,
>
> I found this article:
> https://keithtenzer.com/2017/05/02/rhev-4-1-lab-installation-and-configuration-guide/
> I used the last section to delete the old hosted-engine storage, and it
> worked; the minute I deleted the old hosted-storage, the system imported the
> new one and then imported the new VM-Manager into the Web admin portal.
>

In 4.1, engine-backup has new options during restore:
'--he-remove-storage-vm' and '--he-remove-hosts'. Check '--help'. Sadly, we
do not have enough documentation for this yet. It is being worked on, and I
hope to have updates soon.
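
For example, a full restore using those options might look like this (a hedged
sketch; the backup file and log paths are placeholders, and the exact option
set depends on whether you restore into a freshly provisioned database):

  engine-backup --mode=restore \
    --file=/root/engine-backup.tar.gz \
    --log=/root/engine-restore.log \
    --provision-db --restore-permissions \
    --he-remove-storage-vm --he-remove-hosts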

Best,



>
> Regards.
>
> 2017-08-25 23:15 GMT+01:00 wodel youchi :
>
>> Hi again,
>>
>> I redid the test again, I re-read the Self-Hosted Engine documentation,
>> there is a link to a RedHat article https://access.redhat.com/solu
>> tions/1517683 which talks about how to remove the dead hostedEngine VM
>> from the web admin portal.
>>
>> But the article does not talk about how to remove the old hosted engine
>> storage, and this is what causes the problem.
>>
>> This storage is still pointing to the old iSCSI disk used by the dead
>> Manager; it is down, but the new manager cannot detach it, saying that
>> the storage domain doesn't exist, which is right, but how to force the
>> Manager to delete it? I have no idea, I tried to remove it with REST API,
>> without luck.
>>
>> I tried to import the new hosted storage, but the system said: the
>> storage name is already in use. So I am stuck.
>>
>> any idea? do I have to delete it from the database? if yes how?
>>
>> Regards.
>>
>> 2017-08-25 20:07 GMT+01:00 wodel youchi :
>>
>>> Hi,
>>>
>>> I was able to remove the hostedEngine VM, but I didn't succeed to remove
>>> the old hostedEngine Storage domain.
>>> I tried several time to remove it, but I couldn't, the VM engine goes in
>>> pause mode. All I could do is to detach the hostedEngine from the
>>> datacenter. I then put all the other data domains in maintenance mode, then
>>> I reactivated my master data domain, hoping that it would import the new
>>> hostedEngine domain, but without luck.
>>>
>>> It seems like there is something missing in this procedure.
>>>
>>> Regards
>>>
>>> 2017-08-25 9:28 GMT+01:00 Alan Griffiths :
>>>
 As I recall (a few weeks ago now) it was after restore, once the host
 had been registered in the Manager. However, I was testing on 4.0, so maybe
 the behaviour is slightly different in 4.1.

 Can you see anything in the Engine or vdsm logs as to why it won't
 remove the storage? Perhaps try removing the stale HostedEngine VM ?

 On 25 August 2017 at 09:14, wodel youchi 
 wrote:

> Hi and thanks,
>
> But when to remove the hosted_engine storage ? During the restore
> procedure or after ? Because after I couldn't do it, the manager refused 
> to
> put that storage in maintenance mode.
>
> Regards
>
> On 25 August 2017 at 08:49, "Alan Griffiths" wrote:
>
>> As I recall from my testing. If you remove the old hosted_storage
>> domain then the new one should get automatically imported.
>>
>> On 24 August 2017 at 23:03, wodel youchi 
>> wrote:
>>
>>> Hi,
>>>
>>> I am testing the backup and restore procedure of the Self-hosted
>>> Engine, and I have a problem.
>>>
>>> This is how I did the test.
>>>
>>> I have two hosted-engine hypervisors. I used an iSCSI disk for the
>>> engine VM.
>>>
>>> I followed the procedure described in the Self-hosted Engine
>>> document to execute the backup: I put the first host in maintenance mode,
>>> then I created the backup and saved it elsewhere.
>>>
>>> Then I created a new iSCSI disk, reinstalled the first host with
>>> the same IP/hostname, and followed the restore procedure to get the
>>> Manager up and running again.
>>> - hosted-engine --deploy
>>> - do not execute engine-setup, restore backup first
>>> - execute engine-setup
>>> - remove the host from the manager
>>> - synchronize the restored manger with the host
>>> - finalize deployment.
>>>
>>> All went well up to this point, but I have a problem with the
>>> engine VM: it is shown as down in the admin portal, and the ovirt-ha-agent
>>> cannot retrieve the VM config from the shared storage.
>>>
>>> I think the problem is that the hosted-engine storage domain is
>>> still pointing to the old disk of the old manager and not the new one. I
>>> don't know where this information is stored, in the DB or in the
>>> Manager's config files, but when I click 

Re: [ovirt-users] Failed to deploy hosted-engine with iSCSI

2017-08-28 Thread Shani Leviim
Hi Willie,
Can you please attach the log file?

/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170824112038-v09rvf.log
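
In the meantime, one common cause of "Failed to initialize physical device" is
leftover metadata on a previously used LUN. A hedged way to check (the device
path is the one shown in the error below; the second command is destructive and
only for a LUN you are sure is disposable):

  wipefs /dev/mapper/36f01faf000e05ff01f3659483c7c       # list existing signatures
  # wipefs -a /dev/mapper/36f01faf000e05ff01f3659483c7c  # wipe them (destructive)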



*Regards,*

*Shani Leviim*

On Thu, Aug 24, 2017 at 5:45 PM, Willie Cadete wrote:

> Hello,
>
> It's my first time using a mailing list; I hope someone can help me.
>
> I'm trying to deploy hosted-engine on a server, but I can not use ISCSI
> storage.
>
> Configuration preview:
>
>  --== CONFIGURATION PREVIEW ==--
>
>   Bridge interface   : eno1
>   Engine FQDN:
> srsp-lab-ovirt01.example.org
>   Bridge name: ovirtmgmt
>   Host address   : srsp-lab-srv01.example.org
>   SSH daemon port: 22
>   Firewall manager   : iptables
>   Gateway address: 192.168.200.254
>   Storage Domain type: iscsi
>   LUN ID :
> 36f01faf000e05ff01f3659483c7c
>   Image size GB  : 58
>   iSCSI Portal IP Address: 192.168.130.102
>   iSCSI Target Name  :
> iqn.1984-05.com.dell:powervault.md3600i.6f01faf000e05ff052fd7354
>   iSCSI Portal port  : 3260
>   Host ID: 1
>   iSCSI Portal user  :
>   Console type   : vnc
>   Memory size MB : 4096
>   MAC address: 00:16:3e:18:85:93
>   Number of CPUs : 4
>   OVF archive (for disk boot):
> /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.1-20170821.1.el7.centos.ova
>   Appliance version  : 4.1-20170821.1.el7.centos
>   Restart engine VM after engine-setup: True
>   Engine VM timezone : America/Sao_Paulo
>   CPU Type   : model_SandyBridge
>
>   Please confirm installation settings (Yes, No)[Yes]:
>
>
> During hosted-engine setup I'm getting this error message:
>
> [ INFO  ] Creating Volume Group
> [ ERROR ] Error creating Volume Group: Failed to initialize physical
> device: ("[u'/dev/mapper/36f01faf000e05ff01f3659483c7c']",)
> [ ERROR ] Failed to execute stage 'Misc configuration': Failed to initialize
> physical device: ("[u'/dev/mapper/36f01faf000e05ff01f3659483c7c']",)
> [ INFO  ] Yum Performing yum transaction rollback
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170824112807.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
> please check the issue,fix and redeploy
>   Log file is located at
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170824112038-v09rvf.log
>
> Thanks for any help.
>
> Best regards
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users