[ovirt-users] Re: HyperConverged Self-Hosted deployment fails

2019-01-19 Thread Strahil Nikolov
Thanks Simone,
I will check the broker. I didn't specify the layout correctly - it's 'replica 3 arbiter 1', which was OK last time I used this layout.
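For reference, a 'replica 3 arbiter 1' volume is normally created along these lines - a sketch only; the volume name and brick paths below are examples and not taken from this deployment:

gluster volume create engine replica 3 arbiter 1 \
    ovirt1.localdomain:/gluster_bricks/engine/engine \
    ovirt2.localdomain:/gluster_bricks/engine/engine \
    ovirt3.localdomain:/gluster_bricks/engine/engine
gluster volume start engine

The third brick is the arbiter and holds only metadata, so it protects against split brain without a full third copy of the data.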
Best Regards,Strahil Nikolov

  From: Simone Tiraboschi
  To: hunter86bg
  Cc: users
  Sent: Saturday, 19 January 2019, 17:42
  Subject: Re: [ovirt-users] HyperConverged Self-Hosted deployment fails
   


On Sat, Jan 19, 2019 at 1:07 PM  wrote:

Hello Community,

recently I managed somehow to deploy a 2-node cluster on GlusterFS, but after a serious engine failure I have decided to start from scratch.


A 2-node hyperconverged gluster is definitely a bad idea since it's not going to protect you from split brains. Please choose 1 or 3 but not 2.
What I have done so far:
1. Install CentOS7 from scratch
2. Add oVirt repositories, vdo, cockpit for oVirt
3. Deployed the gluster cluster using cockpit
4. Trying to deploy the hosted-engine, which has failed several times.


Without any logs it's difficult to guess what really happened, but I think it could be related to the two-node approach, which is explicitly prevented.

Up to now I have detected that ovirt-ha-agent is giving:

яну 19 13:54:57 ovirt1.localdomain ovirt-ha-agent[16992]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent
    return action(he)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper
    return he.start_monitoring()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 413, in start_monitoring
    self._initialize_broker()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 535, in _initialize_broker
    m.get('options', {}))
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 83, in start_monitor
    .format(type, options, e))
RequestError: Failed to start monitor ping, options {'addr': '192.168.1.1'}: [Errno 2] No such file or directory


This simply means that ovirt-ha-agent fails to communicate (in order to send a ping to check network connectivity) with ovirt-ha-broker over a unix domain socket.
'[Errno 2] No such file or directory' means that the socket is closed on the ovirt-ha-broker side: you can probably see why by checking /var/log/ovirt-hosted-engine-ha/broker.log, but if the setup didn't complete successfully this doesn't surprise me, and I strongly suggest correctly completing the deployment before trying anything else.
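A quick way to confirm that (a sketch; the paths are the usual defaults for ovirt-hosted-engine-ha and may differ on other versions):

systemctl status ovirt-ha-broker ovirt-ha-agent        # both services should be active
ls -l /var/run/ovirt-hosted-engine-ha/                 # the broker's unix socket normally lives here
tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log  # look for why the broker keeps restarting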

According to https://access.redhat.com/solutions/3353391, /etc/ovirt-hosted-engine/hosted-engine.conf should be empty for this error to appear, but mine is OK:

[root@ovirt1 tmp]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=engine.localdomain
vm_disk_id=bb0a9839-a05d-4d0a-998c-74da539a9574
vm_disk_vol_id=c1fc3c59-bc6e-4b74-a624-557a1a62a34f
vmid=d0e695da-ec1a-4d6f-b094-44a8cac5f5cd
storage=ovirt1.localdomain:/engine
nfs_version=
mnt_options=backup-volfile-servers=ovirt2.localdomain:ovirt3.localdomain
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=1
console=vnc
domainType=glusterfs
spUUID=----
sdUUID=444e524e-9008-48f8-b842-1ce7b95bf248
connectionUUID=e29cf818-5ee5-46e1-85c1-8aeefa33e95d
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=192.168.1.1
bridge=ovirtmgmt
metadata_volume_UUID=a3be2390-017f-485b-8f42-716fb6094692
metadata_image_UUID=368fb8dc-6049-4ef0-8cf8-9d3c4d772d59
lockspace_volume_UUID=41762f85-5d00-488f-bcd0-3de49ec39e8b
lockspace_image_UUID=de100b9b-07ac-4986-9d86-603475572510
conf_volume_UUID=4306f6d6-7fe9-499d-81a5-6b354e8ecb79
conf_image_UUID=d090dd3f-fc62-442a-9710-29eeb56b0019

# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=

Ovirt-ha-agent version is:
ovirt-hosted-engine-ha-2.2.18-1.el7.noarch

Can you guide me in order to resolve this issue and to deploy the self-hosted 
engine ?
Where should I start from ?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-l

[ovirt-users] Re: HyperConverged Self-Hosted deployment fails

2019-01-19 Thread Strahil Nikolov
Hello All,
it seems that the ovirt-ha-broker has some problems:
Thread-8::DEBUG::2019-01-19 19:30:16,048::stompreactor::479::jsonrpc.AsyncoreClient::(send) Sending response
...skipping...
smtp-server = localhost
smtp-port = 25
source-email = root@localhost
destination-emails = root@localhost

[notify]
state_transition = maintenance|start|stop|migrate|up|down

Listener::DEBUG::2019-01-19 
19:30:31,741::heconflib::95::ovirt_hosted_engine_ha.broker.notifications.Notifications.config.broker::(_dd_pipe_tar)
 stderr
: 
Thread-3::DEBUG::2019-01-19 
19:30:31,747::stompreactor::479::jsonrpc.AsyncoreClient::(send) Sending response
StatusStorageThread::ERROR::2019-01-19 19:30:31,751::status_broker::90::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run) Failed to update state.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 82, in run
    if (self._status_broker._inquire_whiteboard_lock() or
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 190, in _inquire_whiteboard_lock
    self.host_id, self._lease_file)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 128, in host_id
    raise ex.HostIdNotLockedError("Host id is not set")
HostIdNotLockedError: Host id is not set
StatusStorageThread::ERROR::2019-01-19 19:30:31,751::status_broker::70::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(trigger_restart) Trying to restart the broker
And most probably the issue is within the sanlock:
2019-01-19 19:29:57 4739 [4602]: worker0 aio collect WR 
0x7f92a8c0:0x7f92a8d0:0x7f92acc7 result 1048576:0 other free
2019-01-19 19:30:01 4744 [4603]: s8 lockspace 
hosted-engine:1:/var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde:0
2019-01-19 19:30:01 4744 [2779]: verify_leader 1 wrong magic 0 
/var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde
2019-01-19 19:30:01 4744 [2779]: leader1 delta_acquire_begin error -223 
lockspace hosted-engine host_id 1
2019-01-19 19:30:01 4744 [2779]: leader2 path 
/var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde
 offset 0
2019-01-19 19:30:01 4744 [2779]: leader3 m 0 v 30003 ss 512 nh 0 mh 1 oi 0 og 0 
lv 0
2019-01-19 19:30:01 4744 [2779]: leader4 sn hosted-engine rn  ts 0 cs 60346c59
2019-01-19 19:30:02 4745 [4603]: s8 add_lockspace fail result -223
2019-01-19 19:30:07 4750 [4603]: s9 lockspace 
hosted-engine:1:/var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde:0
2019-01-19 19:30:07 4750 [2837]: verify_leader 1 wrong magic 0 
/var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde
2019-01-19 19:30:07 4750 [2837]: leader1 delta_acquire_begin error -223 
lockspace hosted-engine host_id 1
2019-01-19 19:30:07 4750 [2837]: leader2 path 
/var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde
 offset 0
2019-01-19 19:30:07 4750 [2837]: leader3 m 0 v 30003 ss 512 nh 0 mh 1 oi 0 og 0 
lv 0
2019-01-19 19:30:07 4750 [2837]: leader4 sn hosted-engine rn  ts 0 cs 60346c59
2019-01-19 19:30:08 4751 [4603]: s9 add_lockspace fail result -223
Can someone guide me on how to go further? Can debug be enabled for sanlock?
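For what it's worth, sanlock keeps an in-memory debug buffer that can be dumped without reconfiguring the daemon - a sketch:

sanlock client status      # lockspaces and resources sanlock currently knows about
sanlock client log_dump    # dump the internal debug log, more verbose than /var/log/sanlock.log
grep -E 'verify_leader|add_lockspace' /var/log/sanlock.log | tail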
Best Regards,Strahil Nikolov

[ovirt-users] Re: HyperConverged Self-Hosted deployment fails

2019-01-19 Thread Strahil Nikolov
Hi Again,
it seems that sanlock error -223 indicates a sanlock lockspace error. I have somehow reinitialized the lockspace and the engine is up and running, but I have 2 VMs defined:
1. The engine itself
2. A VM called "External-HostedEngineLocal"
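One way to reinitialize the lockspace is the hosted-engine helper itself, run with the engine VM down and the cluster in global maintenance - a sketch:

hosted-engine --set-maintenance --mode=global
hosted-engine --reinitialize-lockspace --force
hosted-engine --set-maintenance --mode=none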
I'm pretty sure that there are some tasks that the wizard completes after a successful power-on of the engine, which should clean up this situation and which, in my case, are not actually working.
Could someone advise how to get rid of that VM and what I should do in order to complete the deployment?
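If the leftover VM is only defined in libvirt, removing it is typically just the following - a sketch; the domain name is taken from the message above, and write operations through virsh may ask for the vdsm libvirt credentials on an oVirt host:

virsh -r list --all                         # read-only check of the defined domains
virsh undefine External-HostedEngineLocal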
Thanks in advance for all who read this thread.
Best Regards,Strahil Nikolov


[ovirt-users] Re: Sanlock volume corrupted on deployment

2019-01-28 Thread Strahil Nikolov
 Hi Simone,
I will reinstall the nodes and will provide an update.
Best Regards,Strahil Nikolov
On Sat, Jan 26, 2019 at 5:13 PM Strahil  wrote:

Hey guys,
I have noticed that with 4.2.8 the sanlock issue (during deployment) is still not fixed. Am I the only one with bad luck, or is there something broken there?

Hi,
I'm not aware of anything breaking hosted-engine deployment on 4.2.8. Which kind of storage are you using? Can you please share your logs?

The sanlock service reports code 's7 add_lockspace fail result -233' 'leader1 
delta_acquire_begin error -233 lockspace hosted-engine host_id 1'.
Best Regards,Strahil Nikolov___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SZMF5KKHSXOUTLGX3LR2NBN7E6QGS6G3/



[ovirt-users] Re: Vm status not update after update

2019-04-02 Thread Strahil Nikolov
 I think I have already seen a solution in the mailing lists. Can you check and apply the fix mentioned there?
Best Regards,Strahil Nikolov

On Tuesday, 2 April 2019 at 14:39:10 GMT+3, Marcelo Leandro wrote:
 
 Hi, after updating my hosts to oVirt Node 4.3.2 with vdsm version vdsm-4.30.11-1.el7, my VMs' status does not update. If I do anything with a VM, like shutdown or migrate, its status does not change; only a restart of vdsm on the host where the VM is running helps.
vdsmd status:

 ERROR Internal server error
     Traceback (most recent call last):
       File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request..
Thanks,___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/23ERLSYUQKPXIAAPDZ6KAOBTHW7DMCSA/


[ovirt-users] Re: Controller recomandation - LSI2008/9265

2019-04-05 Thread Strahil Nikolov
 At least based on the specs, I would prefer the LSI9265-8i as it supports hot spares, SSDs and cache, and I would set it up in RAID 0 - but only with replica 3 or replica 3 arbiter 1 volumes.
Best Regards,Strahil Nikolov

On Friday, 5 April 2019 at 9:20:57 GMT+3, Leo David wrote:
 
 Thank you Strahil for that.
On Fri, Apr 5, 2019, 06:45 Strahil  wrote:


Adding Gluster users' mail list.
On Apr 5, 2019 06:02, Leo David  wrote:

Hi Everyone,Any thoughts on this ?

On Wed, Apr 3, 2019, 17:02 Leo David  wrote:

Hi Everyone,
For a hyperconverged setup starting with 3 nodes and growing over time up to 12 nodes, I have to choose between LSI2008 (JBOD) and LSI9265 (RAID). A PERC H710 (RAID) might be an option too, but on a different chassis. There will not be many disks installed on each node, so the replication will be replica 3 replicated-distribute volumes across the nodes, as:
node1/disk1  node2/disk1  node3/disk1
node1/disk2  node2/disk2  node3/disk2
and so on...
As I add nodes to the cluster, I intend to expand the volumes using the same rule. What would be the better way: to use JBOD cards (no cache), or a RAID card and create RAID0 arrays (one for each disk) and therefore have a bit of RAID cache (512MB)? Is RAID caching a benefit to have underneath oVirt/Gluster as long as I go for a "JBOD" installation anyway? Thank you very much!
-- 
Best regards, Leo David


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6STI7U7LTOXSSH6WUNHX63WDIF2LZ46K/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-05 Thread Strahil Nikolov
 Hi Simone,
in a short mail chain on gluster-users, Amar confirmed my suspicion that Gluster v5.5 is performing a little bit slower than 3.12.15. As a result, the sanlock reservations take too much time.
I have updated my setup, uncached my data bricks (they used LVM caching in writeback mode) and used the SSD for the engine volume. Now the engine is running quite well and no more issues have been observed.
Can you share any thoughts about oVirt being updated to Gluster v6.x? I know that there are some hooks between vdsm and gluster, and I'm not sure how vdsm will react to the new version.
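In case it helps someone else, detaching an lvmcache layer from a brick is usually a single lvconvert call - a sketch; the VG/LV names are examples:

lvs -a -o +cache_mode                    # identify the cached LVs and their cache mode
lvconvert --uncache gluster_vg/data_lv   # flush and detach the cache pool from the brick LV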
Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CEQUSJEZMIA6R6TB6OHTFFA3ZA6FSM6B/


[ovirt-users] oVirt 4.3.2 Importing VMs from a detached domain not keeping cluster info

2019-04-05 Thread Strahil Nikolov
Hello,
can someone tell me if this is expected behaviour:
1. I have created a data storage domain exported by nfs-ganesha via NFS
2. Stopped all VMs on the storage domain
3. Set it to maintenance and detached (without wipe) the storage domain
3.2 All VMs are gone (which was expected)
4. Imported the existing data domain via Gluster
5. Went to the Gluster domain and imported all templates and VMs
5.2 Powered on some of the VMs, but some of them failed
The reason for the failure is that some of the re-imported VMs were automatically assigned to the Default cluster, while they belonged to another one.
Most probably this is not a supported activity, but can someone clarify it ?
Thanks in advance.
Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SMPIPRXOM6BVPJ7ELN6KVYZGI2WKYRY2/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-05 Thread Strahil Nikolov
 
>This definitely helps, but in my experience the network speed is really the determinant here. Can you describe your network configuration?
>A 10 Gbps net is definitely fine here.
>A few bonded 1 Gbps nics could work.
>A single 1 Gbps nic could be an issue.


I have a gigabit interface on my workstations and sadly I have no option for an upgrade without switching the hardware.
I have observed my network traffic for days with iftop and gtop and I have never reached my Gbit interface's maximum bandwidth, not even half of it.
Even when resetting my bricks (gluster volume reset-brick) and running a full heal, I do not observe more than 50MiB/s utilization. I am not sure if FUSE uses the network for accessing the local brick, but I hope that it does not.
Checking disk performance - everything is in the expected ranges.
I suspect that the Gluster v5 enhancements increase both network and IOPS requirements and my setup was not dealing with that properly.

>It's definitely planned, see: https://bugzilla.redhat.com/1693998
>I'm not really sure about its time plan.
I will try to get involved and provide feedback both to the oVirt and Gluster dev teams.
Best Regards,Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XSNSV63YJFY7LGFVMNYIZYQMNPGAAMCH/


[ovirt-users] oVirt 4.3.2 Disk extended broken (UI)

2019-04-05 Thread Strahil Nikolov
Hi,
I have just extended the disk of one of my openSUSE VMs and I have noticed that despite the disk being only 140GiB (in the UI), the VM sees it as 180GiB.
I think that this should not happen at all.
[root@ovirt1 ee8b1dce-c498-47ef-907f-8f1a6fe4e9be]# qemu-img info c525f67d-92ac-4f36-a0ef-f8db501102fa
image: c525f67d-92ac-4f36-a0ef-f8db501102fa
file format: raw
virtual size: 180G (193273528320 bytes)
disk size: 71G
Attaching some UI screenshots.
Note: I have extended the disk via the UI by selecting 40GB (old value in the UI -> 100GB).
Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YDIMSTFH74YPK7RKBKWKKPJ3TP3YI64B/


[ovirt-users] Hosted-Engine constantly dies

2019-03-31 Thread Strahil Nikolov
Hi Guys,
As I'm still quite new to oVirt, I have some problems finding the cause of this one. My Hosted Engine (4.3.2) is constantly dying (even when Global Maintenance is enabled). My interpretation of the logs indicates some lease problem, but I don't get the whole picture yet.
I'm attaching the output of 'journalctl -f | grep -Ev "Started Session|session 
opened|session closed"' after I have tried to power on the hosted engine 
(hosted-engine --vm-start).
The nodes are fully updated and I don't see anything in the gluster v5.5 logs, 
but I can double check.
Any hints are appreciated and thanks in advance.
Best Regards,Strahil Nikolov

hosted-engine-crash
Description: Binary data
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TRQL5EOCRLELX46GSLJI4V5KT2QCME7U/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-01 Thread Strahil Nikolov
Hi Simone,
I am attaching the gluster logs from ovirt1.I hope you see something I missed.
Best Regards,Strahil Nikolov
<>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RRN34XK24F67EPN5UXGF4NVKWAE5235X/


[ovirt-users] Re: Hosted-Engine constantly dies

2019-04-01 Thread Strahil Nikolov
Hi Simone,
>Sorry, it looks empty.
Sadly it's true. This one should be OK.

Best Regards,Strahil Nikolov


  <>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DGB7TYSWORVXZAGE7UXXCLZS4ANIH72O/


[ovirt-users] Re: Ovirt 4.3 RC missing glusterfs-gnfs

2019-02-20 Thread Strahil Nikolov
el7
          Available: glusterfs-client-xlators-3.12.9-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
              glusterfs-client-xlators(x86-64) = 3.12.9-1.el7
          Available: glusterfs-client-xlators-3.12.11-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
              glusterfs-client-xlators(x86-64) = 3.12.11-1.el7
          Available: glusterfs-client-xlators-3.12.13-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
              glusterfs-client-xlators(x86-64) = 3.12.13-1.el7
          Available: glusterfs-client-xlators-3.12.14-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
              glusterfs-client-xlators(x86-64) = 3.12.14-1.el7
          Available: glusterfs-client-xlators-5.0-1.el7.x86_64 (ovirt-4.3-centos-gluster5)
              glusterfs-client-xlators(x86-64) = 5.0-1.el7
          Available: glusterfs-client-xlators-5.1-1.el7.x86_64 (ovirt-4.3-centos-gluster5)
              glusterfs-client-xlators(x86-64) = 5.1-1.el7
          Available: glusterfs-client-xlators-5.2-1.el7.x86_64 (ovirt-4.3-centos-gluster5)
              glusterfs-client-xlators(x86-64) = 5.2-1.el7
 You could try using --skip-broken to work around the problem
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BYI3QLCVB7XPGP7XHWYXBGV2JUSF4TKU/


[ovirt-users] Re: Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-17 Thread Strahil Nikolov
 Dear Sandro, Nir,
usually I avoid the test repos for 2 reasons:
1. I had a bad experience getting away from the RHEL 7.5 Beta to the 7.5 standard repos, so now I prefer to update only when patches are in the standard repo
2. My lab is a kind of test environment, but I prefer to be able to spin up a VM or 2 when needed. The last issues rendered the lab unmanageable for several days due to 0-byte OVF_STORE files and other issues.
About the gluster update issue - I think this is a serious one. If we had known that in advance, the wisest approach would have been to stay on gluster v3 until the issue is resolved.
I have a brick down almost every day and, despite it not being a "killer", the experience is nowhere near 4.2.7 - 4.2.8.
Can someone point me to a document with the minimum requirements for a nested oVirt lab? I'm planning to create a nested test environment in order to both provide feedback on new releases and be prepared before deploying to my lab.
Best Regards,Strahil Nikolov

On Saturday, 16 March 2019 at 15:35:05 GMT+2, Nir Soffer wrote:
 
 

On Fri, Mar 15, 2019, 15:16 Sandro Bonazzola wrote:

Hi,
something that I’m seeing in the vdsm.log, that I think is gluster related is 
the following message:
2019-03-15 05:58:28,980-0700 INFO  (jsonrpc/6) [root] managedvolume not 
supported: Managed Volume Not Supported. Missing package os-brick.: ('Cannot 
import os_brick',) (caps:148)
os_brick seems something available by openstack channels but I didn’t verify.

Fred, I see you introduced the above error in vdsm commit 9646c6dc1b875338b170df2cfa4f41c0db8a6525 back in November 2018. I guess you are referring to python-os-brick. Looks like it's related to the cinderlib integration. I would suggest to:
- fix the error message to point to python-os-brick
- add the python-os-brick dependency in the spec file if the dependency is not optional
- if the dependency is optional, as it seems to be, adjust the error message to say so. I feel nervous seeing errors about missing packages :-)
  There is no error message here. This is an INFO level message, not an ERROR 
or WARN, and it just explains why managed volumes will not be available on this 
host.
Having this information in the log is extremely important for developers and 
support.
I think we can improve the message to mention the actual package name, but 
otherwise there is no issue in this info message.
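A hedged aside: on a given host this can be checked with something like:

rpm -q python-os-brick                                      # the package name referenced above
grep 'managedvolume not supported' /var/log/vdsm/vdsm.log   # the INFO line quoted earlier in this thread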
Nir



Simon


On Mar 15, 2019, at 1:54 PM, Sandro Bonazzola  wrote:


On Fri, 15 Mar 2019 at 13:46, Strahil Nikolov wrote:


>I along with others had GlusterFS issues after 4.3 upgrades, the "failed to dispatch handler" issue with bricks going down intermittently. After some time it seemed to have corrected itself (at least in my environment) and I hadn't had any brick problems in a while. I upgraded my three node HCI cluster to 4.3.1 yesterday and again I'm running into brick issues. They will all be up and running fine, then all of a sudden a brick will randomly drop and I have to force start the volume to get it back up.
>
>Have any of these Gluster issues been addressed in 4.3.2 or any other releases/patches that may be available to help the problem at this time?
>
>Thanks!
Yep,
sometimes a brick dies (usually my ISO domain) and then I have to "gluster volume start isos force". Sadly I had several issues with 4.3.x - problematic OVF_STORE (0 bytes), issues with gluster, an out-of-sync network - so for me 4.3.0 & 4.3.1 are quite unstable.
Is there a convention indicating stability? Does 4.3.xxx mean unstable, while 4.2.yyy means stable?

No, there's no such convention. 4.3 is supposed to be stable and production ready. The fact it isn't stable enough for all the cases means it has not been tested for those cases. In the oVirt 4.3.1 RC cycle testing (https://trello.com/b/5ZNJgPC3/ovirt-431-test-day-1) we got participation from only 6 people, and not even all the tests were completed. Helping to test during the release candidate phase helps us get more stable final releases. oVirt 4.3.2 is at its second release candidate; if you have time and resources, it would be helpful to test it on an environment which is similar to your production environment and give feedback / report bugs.
Thanks
 

Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACQE2DCN2LP3RPIPZNXYSLCBXZ4VOPX2/



-- 
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R RHV
Red Hat EMEA
sbona...@redhat.com   


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https:

[ovirt-users] Re: Hosted Engine I/O scheduler

2019-03-18 Thread Strahil Nikolov
 Hi All,
I have changed my I/O scheduler to none and here are the results so far:

Before (mq-deadline):
Adding a disk to VM (initial creation) START:     2019-03-17 16:34:46.709
Adding a disk to VM (initial creation) COMPLETED: 2019-03-17 16:45:17.996

After (none):
Adding a disk to VM (initial creation) START:     2019-03-18 08:52:02.xxx
Adding a disk to VM (initial creation) COMPLETED: 2019-03-18 08:52:20.xxx

Of course the results are inconclusive, as I have tested only once - but I feel the engine is more responsive.
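The switch itself is the usual sysfs one - a sketch; it is not persistent across reboots, and a udev rule is one way to make it stick:

cat /sys/block/vda/queue/scheduler          # shows [mq-deadline] kyber none
echo none > /sys/block/vda/queue/scheduler
# example udev rule (an assumption - adjust the device match to your disks):
# ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"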
Best Regards,Strahil Nikolov

On Sunday, 17 March 2019 at 18:30:23 GMT+2, Strahil wrote:
 
 
Dear All,

I have just noticed that my Hosted Engine has  a strange I/O scheduler:

Last login: Sun Mar 17 18:14:26 2019 from 192.168.1.43
[root@engine ~]# cat /sys/block/vda/queue/scheduler
[mq-deadline] kyber none
[root@engine ~]#

Based on my experience, anything other than noop/none is useless and degrades performance for a VM.


Is there any reason that we have this scheduler?
It is quite pointless to process (and delay) the I/O in the VM and then process (and again delay) it at the host level.
If there is no reason to keep the deadline, I will open a bug about it.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YY5ZAPMTD5HUYEBEGD2YYO7EOSTVYIE7/


[ovirt-users] Ovirt 4.3.1 cannto set host to maintenance

2019-03-12 Thread Strahil Nikolov
Hi Community,
I have tried to download my OVF_STORE images that were damaged on the shared storage, but it failed. As a result, I cannot set any host into maintenance via the UI. I have found this bug, "1586126 – After upgrade to RHV 4.2.3, hosts can no longer be set into maintenance mode", but the case is different, as mine has failed and should not block that operation.


Here are 2 screenshots (Imgur links).


How can I recover from that situation?
Best Regards,Strahil Nikolov

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BVCJDVPO63RZWRM2N6RINGP5OHP2L64G/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-12 Thread Strahil Nikolov
 Latest update - the system is back and running normally. After a day (or maybe a little more), the OVF is OK:
[root@ovirt1 ~]# ls -l 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/{441abdc8-6cb1-49a4-903f-a1ec0ed88429,94ade632-6ecc-4901-8cec-8e39f3d69cb0}
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/441abdc8-6cb1-49a4-903f-a1ec0ed88429:
total 66591
-rw-rw. 1 vdsm kvm   30720 Mar 12 08:06 c3309fc0-8707-4de1-903d-8d4bbb024f81
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 
c3309fc0-8707-4de1-903d-8d4bbb024f81.lease
-rw-r--r--. 1 vdsm kvm 435 Mar 12 08:06 
c3309fc0-8707-4de1-903d-8d4bbb024f81.meta

/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0:
total 66591
-rw-rw. 1 vdsm kvm   30720 Mar 12 08:06 9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 
9460fc4b-54f3-48e3-b7b6-da962321ecf4.lease
-rw-r--r--. 1 vdsm kvm 435 Mar 12 08:06 
9460fc4b-54f3-48e3-b7b6-da962321ecf4.meta


Once it got fixed, I managed to start the hosted-engine properly (I rebooted the whole cluster just to be on the safe side):
[root@ovirt1 ~]# hosted-engine --vm-status


--== Host ovirt1.localdomain (id: 1) status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt1.localdomain
Host ID    : 1
Engine status  : {"health": "good", "vm": "up", "detail": 
"Up"}
Score  : 3400
stopped    : False
Local maintenance  : False
crc32  : 8ec26591
local_conf_timestamp   : 49704
Host timestamp : 49704
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=49704 (Tue Mar 12 10:47:43 2019)
    host-id=1
    score=3400
    vm_conf_refresh_time=49704 (Tue Mar 12 10:47:43 2019)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUp
    stopped=False


--== Host ovirt2.localdomain (id: 2) status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt2.localdomain
Host ID    : 2
Engine status  : {"reason": "vm not running on this host", 
"health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped    : False
Local maintenance  : False
crc32  : f9f39dcd
local_conf_timestamp   : 14458
Host timestamp : 14458
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=14458 (Tue Mar 12 10:47:41 2019)
    host-id=2
    score=3400
    vm_conf_refresh_time=14458 (Tue Mar 12 10:47:41 2019)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineDown
    stopped=False



Best Regards,Strahil Nikolov

On Sunday, 10 March 2019 at 5:05:33 GMT+2, Strahil Nikolov wrote:
 
  Hello again,
Latest update: the engine is up and running (or at least the login portal).
[root@ovirt1 ~]# hosted-engine --check-liveliness
Hosted Engine is up!
I have found online the xml for the network:
[root@ovirt1 ~]# cat ovirtmgmt_net.xml
  vdsm-ovirtmgmt
Sadly, I had to create a symbolic link to the main disk in 
/var/run/vdsm/storage , as it was missing.
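Two sketches of the recovery steps just described. The XML body and the UUIDs below are assumptions/placeholders: the bridge name matches the ovirtmgmt bridge on these hosts, and the symlink layout follows the "Creating symlink from ... to ..." lines that vdsm itself logs elsewhere in this thread.

cat > ovirtmgmt_net.xml <<'EOF'
<network>
  <name>vdsm-ovirtmgmt</name>
  <forward mode='bridge'/>
  <bridge name='ovirtmgmt'/>
</network>
EOF
virsh net-define ovirtmgmt_net.xml
virsh net-autostart vdsm-ovirtmgmt
virsh net-start vdsm-ovirtmgmt

mkdir -p /var/run/vdsm/storage/<sdUUID>
ln -s /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/<sdUUID>/images/<imgUUID> \
      /var/run/vdsm/storage/<sdUUID>/<imgUUID>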
So, what's next?
Issues up to now:
- 2 OVF files with 0 bytes
- Problem with the local copy of the HostedEngine config - used xml from an old vdsm log
- Missing vdsm-ovirtmgmt definition
- No link for the main raw disk in /var/run/vdsm/storage
Can you hint me how to recover the 2 OVF tars now ?
Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ODFPHVS5LYY6JWFWKWR3PBYTF3QSDKGV/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-13 Thread Strahil Nikolov
 Dear Simone,
it seems that there is some kind of problem, as the OVF got updated with the wrong configuration:
[root@ovirt2 ~]# ls -l /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/{441abdc8-6cb1-49a4-903f-a1ec0ed88429,94ade632-6ecc-4901-8cec-8e39f3d69cb0}
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/441abdc8-6cb1-49a4-903f-a1ec0ed88429:
total 66591
-rw-rw. 1 vdsm kvm   30720 Mar 12 08:06 c3309fc0-8707-4de1-903d-8d4bbb024f81
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 
c3309fc0-8707-4de1-903d-8d4bbb024f81.lease
-rw-r--r--. 1 vdsm kvm 435 Mar 12 08:06 
c3309fc0-8707-4de1-903d-8d4bbb024f81.meta

/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0:
total 66591
-rw-rw. 1 vdsm kvm   30720 Mar 13 11:07 9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 
9460fc4b-54f3-48e3-b7b6-da962321ecf4.lease
-rw-r--r--. 1 vdsm kvm 435 Mar 13 11:07 
9460fc4b-54f3-48e3-b7b6-da962321ecf4.meta

Starting the hosted-engine fails with:
2019-03-13 12:48:21,237+0200 ERROR (vm/8474ae07) [virt.vm] 
(vmId='8474ae07-f172-4a20-b516-375c73903df7') The vm start process failed 
(vm:937)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in 
_startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2852, in _run
    dom = self._connection.defineXML(self._domain.xml)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", 
line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in 
wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3743, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirtError: XML error: No PCI buses available

Best Regards,Strahil Nikolov


On Tuesday, 12 March 2019 at 14:14:26 GMT+2, Strahil Nikolov wrote:
 
  Dear Simone,
it should be 60 min, but I checked several hours after that and it hadn't updated it.
[root@engine ~]# engine-config -g OvfUpdateIntervalInMinutes
OvfUpdateIntervalInMinutes: 60 version: general

How can I make a backup of the VM config, as you have noticed the local copy in /var/run/ovirt-hosted-engine-ha/vm.conf won't work?
I will keep the HostedEngine's xml, so I can redefine it if needed.
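One low-tech way to keep such a backup - a sketch - is to dump the current libvirt definition and copy the broker's local vm.conf while the VM is up:

virsh -r dumpxml HostedEngine > /root/HostedEngine-$(date +%F).xml
cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/vm.conf.backup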
Best Regards,Strahil Nikolov
  
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3XPJXJ4I4LVDDV47BTSXA4FQE3OM5T5J/


[ovirt-users] Re: Ovirt 4.3.1 cannto set host to maintenance

2019-03-13 Thread Strahil Nikolov
 It seems to be working properly , but the OVF got updated recently and 
powering up the hosted-engine is not working :)
[root@ovirt2 ~]# sudo -u vdsm tar -tvf  
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/441abdc8-6cb1-49a4-903f-a1ec0ed88429/c3309fc0-8707-4de1-903d-8d4bbb024f81
-rw-r--r-- 0/0 138 2019-03-12 08:06 info.json
-rw-r--r-- 0/0   21164 2019-03-12 08:06 
8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0  72 2019-03-12 08:06 metadata.json

[root@ovirt2 ~]# sudo -u vdsm tar -tvf 
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-r--r-- 0/0 138 2019-03-13 11:06 info.json
-rw-r--r-- 0/0   21164 2019-03-13 11:06 
8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0  72 2019-03-13 11:06 metadata.json

Best Regards,Strahil Nikolov

On Wednesday, 13 March 2019 at 11:08:57 GMT+2, Simone Tiraboschi wrote:
 
 

On Wed, Mar 13, 2019 at 9:57 AM Strahil Nikolov  wrote:

Hi Simone,Nir,

>Adding also Nir on this, the whole sequence is tracked here:
>I'd suggest to check ovirt-imageio and vdsm logs on ovirt2.localdomain about 
>the same time.
I have tested again (first wiped current transfers) and it is happening the 
same (phase 10).
engine=# \x
Expanded display is on.
engine=# select * from image_transfers;
-[ RECORD 1 ]-+-
command_id    | 11b2c162-29e0-46ef-b0a4-f41ebe3c2910
command_type  | 1024
phase | 10
last_updated  | 2019-03-13 09:38:30.365+02
message   |
vds_id    |
disk_id   | 94ade632-6ecc-4901-8cec-8e39f3d69cb0
imaged_ticket_id  |
proxy_uri |
signed_ticket |
bytes_sent    | 0
bytes_total   | 134217728
type  | 1
active    | f
daemon_uri    |
client_inactivity_timeout | 60

engine=# delete from image_transfers where 
disk_id='94ade632-6ecc-4901-8cec-8e39f3d69cb0';

This is the VDSM log from the last test:

2019-03-13 09:38:23,229+0200 INFO  (jsonrpc/4) [vdsm.api] START 
prepareImage(sdUUID=u'808423f9-8a5c-40cd-bc9f-2568c85b8c74', 
spUUID=u'b803f7e4-2543-11e9-ba9a-00163e6272c8', 
imgUUID=u'94ade632-6ecc-4901-8cec-8e39f3d69cb0', 
leafUUID=u'9460fc4b-54f3-48e3-b7b6-da962321ecf4', allowIllegal=True) 
from=:::192.168.1.2,42644, flow_id=d48d9272-2e65-438d-a7b2-46979309833b, 
task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:48)
2019-03-13 09:38:23,253+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Fixing 
permissions on 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
 (fileSD:623)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Creating 
domain run directory 
u'/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74' (fileSD:577)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.fileUtils] Creating 
directory: /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74 mode: 
None (fileUtils:199)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Creating 
symlink from 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0
 to 
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0
 (fileSD:580)
2019-03-13 09:38:23,260+0200 INFO  (jsonrpc/4) [vdsm.api] FINISH prepareImage 
error=Volume does not exist: (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',) 
from=:::192.168.1.2,42644, flow_id=d48d9272-2e65-438d-a7b2-46979309833b, 
task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:52)
2019-03-13 09:38:23,261+0200 ERROR (jsonrpc/4) [storage.TaskManager.Task] 
(Task='bb534320-451c-45c0-b7a6-0cce017ec7cb') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
    return fn(*args, **kargs)
  File "", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3212, in 
prepareImage
    leafInfo = dom.produceVolume(imgUUID, leafUUID).getVmVolumeInfo()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 822, in 
produceVolume
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterVolume.py", line 
45, in __init__
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 801, in 
__init__
    self._manifest = self.manifestClass(repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/lib/python2.7/site-packa

[ovirt-users] Re: Ovirt 4.3.1 cannto set host to maintenance

2019-03-13 Thread Strahil Nikolov
s not exist: 
(u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',) (dispatcher:83)

Yet, the volume is there and is accessible:
[root@ovirt1 94ade632-6ecc-4901-8cec-8e39f3d69cb0]# tar -tvf 
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-r--r-- 0/0 138 2019-03-12 08:06 info.json
-rw-r--r-- 0/0   21164 2019-03-12 08:06 
8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0  72 2019-03-12 08:06 metadata.json


[root@ovirt1 94ade632-6ecc-4901-8cec-8e39f3d69cb0]# tar -tvf 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-r--r-- 0/0 138 2019-03-12 08:06 info.json
-rw-r--r-- 0/0   21164 2019-03-12 08:06 
8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0  72 2019-03-12 08:06 metadata.json

Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R2XDRYELOJKZDFTHQFZ5TTJY4SZJ6KHQ/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-07 Thread Strahil Nikolov
{"reason": "vm not running on this host", 
"health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped    : False
Local maintenance  : False
crc32  : 45e6772b
local_conf_timestamp   : 288
Host timestamp : 287
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=287 (Thu Mar  7 15:34:06 2019)
    host-id=1
    score=3400
    vm_conf_refresh_time=288 (Thu Mar  7 15:34:07 2019)
    conf_on_shared_storage=True
    maintenance=False
    state=GlobalMaintenance
    stopped=False


--== Host ovirt2.localdomain (id: 2) status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt2.localdomain
Host ID    : 2
Engine status  : {"reason": "vm not running on this host", 
"health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped    : False
Local maintenance  : False
crc32  : 2e9a0444
local_conf_timestamp   : 3886
Host timestamp : 3885
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=3885 (Thu Mar  7 15:34:05 2019)
    host-id=2
    score=3400
    vm_conf_refresh_time=3886 (Thu Mar  7 15:34:06 2019)
    conf_on_shared_storage=True
    maintenance=False
    state=GlobalMaintenance
    stopped=False


!! Cluster is in GLOBAL MAINTENANCE mode !!

[root@ovirt1 ovirt-hosted-engine-ha]# hosted-engine --vm-start
Command VM.getStats with args {'vmID': '8474ae07-f172-4a20-b516-375c73903df7'} 
failed:
(code=1, message=Virtual machine does not exist: {'vmId': 
u'8474ae07-f172-4a20-b516-375c73903df7'})
[root@ovirt1 ovirt-hosted-engine-ha]# hosted-engine --vm-start
VM exists and is down, cleaning up and restarting

[root@ovirt1 ovirt-hosted-engine-ha]# hosted-engine --vm-status


!! Cluster is in GLOBAL MAINTENANCE mode !!



--== Host ovirt1.localdomain (id: 1) status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt1.localdomain
Host ID    : 1
Engine status  : {"reason": "bad vm status", "health": 
"bad", "vm": "down", "detail": "Down"}
Score  : 3400
stopped    : False
Local maintenance  : False
crc32  : 6b086b7c
local_conf_timestamp   : 328
Host timestamp : 327
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=327 (Thu Mar  7 15:34:46 2019)
    host-id=1
    score=3400
    vm_conf_refresh_time=328 (Thu Mar  7 15:34:47 2019)
    conf_on_shared_storage=True
    maintenance=False
    state=GlobalMaintenance
    stopped=False


--== Host ovirt2.localdomain (id: 2) status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt2.localdomain
Host ID    : 2
Engine status  : {"reason": "vm not running on this host", 
"health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped    : False
Local maintenance  : False
crc32  : c5890e9c
local_conf_timestamp   : 3926
Host timestamp : 3925
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=3925 (Thu Mar  7 15:34:45 2019)
    host-id=2
    score=3400
    vm_conf_refresh_time=3926 (Thu Mar  7 15:34:45 2019)
    conf_on_shared_storage=True
    maintenance=False
    state=GlobalMaintenance
    stopped=False


!! Cluster is in GLOBAL MAINTENANCE mode !!

[root@ovirt1 ovirt-hosted-engine-ha]# virsh list --all
 Id    Name   State

 - HostedEngine   shut off

I am really puzzled why both volumes are wiped out .

Best Regards,Strahil Nikolov




  
  
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/453BKSP3XMIIF3K2OEXMXFIVC7OHGXU4/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-07 Thread Strahil Nikolov
 Hi Simone,
I think I found the problem - ovirt-ha cannot extract the file containing the needed data. In my case it is completely empty:

[root@ovirt1 ~]# ll /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0
total 66561
-rw-rw. 1 vdsm kvm       0 Mar  4 05:21 9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 9460fc4b-54f3-48e3-b7b6-da962321ecf4.lease
-rw-r--r--. 1 vdsm kvm     435 Mar  4 05:22 9460fc4b-54f3-48e3-b7b6-da962321ecf4.meta

Any hint how to recreate that? Maybe wipe and restart the ovirt-ha-broker and agent?
Also, I think this happened when I was upgrading ovirt1 (the last one in the gluster cluster) from 4.3.0 to 4.3.1. The engine got restarted, because I forgot to enable global maintenance.


Best Regards,Strahil Nikolov
On Wednesday, 6 March 2019 at 16:57:30 GMT+2, Simone Tiraboschi wrote:
 
 

On Wed, Mar 6, 2019 at 3:09 PM Strahil Nikolov  wrote:

 Hi Simone,
thanks for your reply.
>Are you really sure that the issue was on the ping?
>on storage errors the broker restarts itself and while the broker is restarting the agent cannot ask the broker to trigger the gateway monitor (the ping one), hence that error message.
It seemed so in that moment, but I'm not so sure right now :)
>Which kind of storage are you using?
>can you please attach /var/log/ovirt-hosted-engine-ha/broker.log ?
I'm using glusterfs v5 from oVirt 4.3.1 with a FUSE mount. Please have a look at the attached logs.

Nothing seems that strange there but that error. Can you please try with ovirt-ha-agent and ovirt-ha-broker in debug mode? You have to set level=DEBUG in the [logger_root] section of /etc/ovirt-hosted-engine-ha/agent-log.conf and /etc/ovirt-hosted-engine-ha/broker-log.conf and restart the two services.
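A sketch of that change, assuming the current value in both files is level=INFO:

sed -i 's/^level=INFO/level=DEBUG/' \
    /etc/ovirt-hosted-engine-ha/agent-log.conf \
    /etc/ovirt-hosted-engine-ha/broker-log.conf
systemctl restart ovirt-ha-broker ovirt-ha-agent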

Best Regards,Strahil Nikolov

On Wednesday, 6 March 2019 at 9:53:20 GMT+2, Simone Tiraboschi wrote:
 
 

On Wed, Mar 6, 2019 at 6:13 AM Strahil  wrote:


Hi guys,

After updating to 4.3.1 I had an issue where the ovirt-ha-broker was 
complaining that it couldn't ping the gateway.



Are you really sure that the issue was on the ping? On storage errors the broker restarts itself, and while the broker is restarting the agent cannot ask the broker to trigger the gateway monitor (the ping one), hence that error message.

As I have seen that before - I stopped ovirt-ha-agent, ovirt-ha-broker, vdsmd, 
supervdsmd and sanlock on the nodes and reinitialized the lockspace.

I guess I didn't do it properly, as now I receive:

ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Failed 
extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf

Any hints on how to fix this? A redeploy is of course possible, but I'd prefer to recover from it.


Which kind of storage are you using? Can you please attach /var/log/ovirt-hosted-engine-ha/broker.log ? 

Best Regards,
Strahil Nikolov
  
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGBP6GYMCMEMI7GM2RB5OQOWMMNILDX5/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-09 Thread Strahil Nikolov
 Hi Simone,
and thanks for your help.
So far I found out that there is some problem with the local copy of the 
HostedEngine config (see attached part of vdsm.log).
I have found an older xml configuration (in an old vdsm.log); defining the VM works, but powering it on reports:
[root@ovirt1 ~]# virsh define hosted-engine.xml
Domain HostedEngine defined from hosted-engine.xml

[root@ovirt1 ~]# virsh list --all
 Id    Name           State
 -     HostedEngine   shut off

[root@ovirt1 ~]# virsh start HostedEngine
error: Failed to start domain HostedEngine
error: Network not found: no network with matching name 'vdsm-ovirtmgmt'

[root@ovirt1 ~]# virsh net-list --all
 Name                 State      Autostart     Persistent
 ;vdsmdummy;          active     no            no
 default              inactive   no            yes

[root@ovirt1 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
;vdsmdummy;             8000.           no
ovirtmgmt               8000.bc5ff467f5b3       no              enp2s0

[root@ovirt1 ~]# ip a s
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp2s0:  mtu 9000 qdisc mq master ovirtmgmt state UP group default qlen 1000
    link/ether bc:5f:f4:67:f5:b3 brd ff:ff:ff:ff:ff:ff
3: ovs-system:  mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether f6:78:c7:2d:32:f9 brd ff:ff:ff:ff:ff:ff
4: br-int:  mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 66:36:dd:63:dc:48 brd ff:ff:ff:ff:ff:ff
20: ovirtmgmt:  mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether bc:5f:f4:67:f5:b3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.90/24 brd 192.168.1.255 scope global ovirtmgmt
       valid_lft forever preferred_lft forever
    inet 192.168.1.243/24 brd 192.168.1.255 scope global secondary ovirtmgmt
       valid_lft forever preferred_lft forever
    inet6 fe80::be5f:f4ff:fe67:f5b3/64 scope link
       valid_lft forever preferred_lft forever
21: ;vdsmdummy;:  mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ce:36:8d:b7:64:bd brd ff:ff:ff:ff:ff:ff

192.168.1.243/24 is one of the IPs in ctdb.

So, now comes the question - is there an xml in the logs that defines the network? My hope is to power up the HostedEngine properly so that it will push all the configurations to the right places ... maybe this is way too optimistic.
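For illustration, a sketch of how such a network could be recreated by hand with virsh; the XML below is the generic form vdsm uses for the management bridge, so the forward mode and bridge name here are assumptions rather than values taken from these logs:

cat > vdsm-ovirtmgmt.xml <<'EOF'
<network>
  <name>vdsm-ovirtmgmt</name>
  <forward mode='bridge'/>
  <bridge name='ovirtmgmt'/>
</network>
EOF
virsh net-define vdsm-ovirtmgmt.xml
virsh net-start vdsm-ovirtmgmt
virsh net-autostart vdsm-ovirtmgmt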
At least I have learned a lot for oVirt.
Best Regards,Strahil Nikolov


On Thursday, March 7, 2019 at 17:55:12 GMT+2, Simone Tiraboschi wrote:
 
 

On Thu, Mar 7, 2019 at 2:54 PM Strahil Nikolov  wrote:

 

  
>The OVF_STORE volume is going to get periodically recreated by the engine so 
>at least you need a running engine.
>In order to avoid this kind of issue we have two OVF_STORE disks, in your case:
>MainThread::INFO::2019-03-06 
>06:50:02,391::ovf_store::120::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
> Found >OVF_STORE: imgUUID:441abdc8-6cb1-49a4-903f-a1ec0ed88429, 
>volUUID:c3309fc0-8707-4de1-903d-8d4bbb024f81>MainThread::INFO::2019-03-06 
>06:50:02,748::ovf_store::120::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
> Found >OVF_STORE: imgUUID:94ade632-6ecc-4901-8cec-8e39f3d69cb0, 
>volUUID:9460fc4b-54f3-48e3-b7b6-da962321ecf4
>Can you please check if you have at lest the second copy?
Second Copy is empty too:[root@ovirt1 ~]# ll 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/441abdc8-6cb1-49a4-903f-a1ec0ed88429
total 66561
-rw-rw. 1 vdsm kvm   0 Mar  4 05:23 c3309fc0-8707-4de1-903d-8d4bbb024f81
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 
c3309fc0-8707-4de1-903d-8d4bbb024f81.lease
-rw-r--r--. 1 vdsm kvm 435 Mar  4 05:24 
c3309fc0-8707-4de1-903d-8d4bbb024f81.meta



>And even in the case you lost both, we are storing on the shared storage the 
>initial vm.conf:>MainThread::ERROR::2019-03-06 
>>06:50:02,971::config_ovf::70::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::>(_get_vm_conf_content_from_ovf_store)
> Failed extracting VM OVF from the OVF_STORE volume, falling back to initial 
>vm.conf

>Can you please check what do you have in 
>/var/run/ovirt-hosted-engine-ha/vm.conf ? It exists and has the following:
[root@ovirt1 ~]# cat /var/run/ovirt-hosted-engine-ha/vm.conf
# Editing the hosted engine VM is only possible via the manager UI\API
# This file was generated at Thu Mar  7 15:37:26 2019

vmId=8474ae07-f172-4a20-b516-375c73903df7
memSize=4096
display=vnc
devices={index:2,iface:ide

[ovirt-users] Re: Migrate HE beetwen hosts failed.

2019-03-18 Thread Strahil Nikolov
 Dear Kiv,
It seems that you have hosts with different CPUs in the same cluster - which shouldn't happen. In your case the HE is on a host with the Intel SandyBridge IBRS SSBD Family, but you have no other host with that CPU.
Can you power off and edit the CPU of this VM to match the rest of the host CPUs? Usually, the older the CPU type on the VM, the higher compatibility it has, but performance drops - so keep that in mind.
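A quick way to see which of the missing features each host actually exposes (the feature names are the ones from the vdsm.log error quoted below; adjust as needed):

grep -m1 '^flags' /proc/cpuinfo | grep -o -w -E 'xsave|avx|xsaveopt' | sort -u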
Best Regards,Strahil Nikolov


On Monday, March 18, 2019 at 8:36:01 GMT+2, k...@intercom.pro wrote:
 
 Hi all.

I have oVirt 4.3.1 and 3 node hosts.
All VMs migrate between all hosts successfully.
The VM with the HE does not migrate.

vdsm.log:

libvirtError: operation failed: guest CPU doesn't match specification: missing 
features: xsave,avx,xsaveopt

Nodes:
1. Intel Westmere IBRS SSBD Family
2. Intel Westmere IBRS SSBD Family
3. Intel SandyBridge IBRS SSBD Family <- HE now here

Cluster CPU Type: Intel Westmere Family

Info from VM with HE:

Guest CPU Type: Intel Westmere Family

Does anyone know what needs to be done to make migration work?
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7NPTNTUBX76EUQVOXBEG4Z56FSFUZ4JC/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-18 Thread Strahil Nikolov
 Hi Alexei,
In order to debug it, check the following:

1. Check gluster:
1.1 All bricks up?
1.2 All bricks healed (gluster volume heal data info summary) and no split-brain

2. Go to the problematic host and check that the mount point is there:
2.1 Check permissions (should be vdsm:kvm) and fix with chown -R if needed
2.2 Check from the logs that the OVF_STORE exists
2.3 Check that vdsm can extract the file:
sudo -u vdsm tar -tvf /rhev/data-center/mnt/glusterSD/msk-gluster-facility.:_data/DOMAIN-UUID/Volume-UUID/Image-ID

3. Configure a virsh alias, as it's quite helpful:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'

4. If the VM is running - go to the host and get the xml:
virsh dumpxml HostedEngine > /root/HostedEngine.xml
4.1 Get the network:
virsh net-dumpxml vdsm-ovirtmgmt > /root/vdsm-ovirtmgmt.xml
4.2 If not, here is mine:
[root@ovirt1 ~]# virsh net-dumpxml vdsm-ovirtmgmt

  vdsm-ovirtmgmt
  7ae538ce-d419-4dae-93b8-3a4d27700227


The UUID is not important, as my first recovery was with a different one.

5. If your Hosted Engine is down:
5.1 Remove the VM (if it exists anywhere) on all nodes:
virsh undefine HostedEngine
5.2 Verify that the nodes are in global maintenance:
hosted-engine --vm-status
5.3 Define the Engine on only 1 machine:
virsh define HostedEngine.xml
virsh net-define vdsm-ovirtmgmt.xml
virsh start HostedEngine

Note: if it complains about the storage - there is no link in /var/run/vdsm/storage/DOMAIN-UUID/Volume-UUID to your Volume-UUID. Here is how mine looks:
[root@ovirt1 808423f9-8a5c-40cd-bc9f-2568c85b8c74]# ll /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74
total 24
lrwxrwxrwx. 1 vdsm kvm 139 Mar 17 07:42 2c74697a-8bd9-4472-8a98-bf624f3462d5 -> 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/2c74697a-8bd9-4472-8a98-bf624f3462d5
lrwxrwxrwx. 1 vdsm kvm 139 Mar 17 07:45 3ec27d6d-921c-4348-b799-f50543b6f919 -> 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/3ec27d6d-921c-4348-b799-f50543b6f919
lrwxrwxrwx. 1 vdsm kvm 139 Mar 17 08:28 441abdc8-6cb1-49a4-903f-a1ec0ed88429 -> 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/441abdc8-6cb1-49a4-903f-a1ec0ed88429
lrwxrwxrwx. 1 vdsm kvm 139 Mar 17 21:15 8ec7a465-151e-4ac3-92a7-965ecf854501 -> 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/8ec7a465-151e-4ac3-92a7-965ecf854501
lrwxrwxrwx. 1 vdsm kvm 139 Mar 17 08:28 94ade632-6ecc-4901-8cec-8e39f3d69cb0 -> 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0
lrwxrwxrwx. 1 vdsm kvm 139 Mar 17 07:42 fe62a281-51e9-4b23-87b3-2deb52357304 -> 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/fe62a281-51e9-4b23-87b3-2deb52357304


Once you create your link , start it again.
6. Wait till the OVF is fixed (it takes longer than the setting in the engine suggests :) )
Good Luck!
Best Regards,Strahil Nikolov


On Monday, March 18, 2019 at 12:57:30 GMT+2, Николаев Алексей wrote:
 
 Hi all! I have a very similar problem after updating one of the two nodes to version 4.3.1. This node77-02 lost connection to the gluster volume named DATA, but not to the volume with the hosted engine.

node77-02 /var/log/messages:
Mar 18 13:40:00 node77-02 journal: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Failed 
scanning for OVF_STORE due to Command Volume.getInfo with args 
{'storagepoolID': '----', 'storagedomainID': 
'2ee71105-1810-46eb-9388-cc6caccf9fac', 'volumeID': 
u'224e4b80-2744-4d7f-bd9f-43eb8fe6cf11', 'imageID': 
u'43b75b50-cad4-411f-8f51-2e99e52f4c77'} failed:#012(code=201, message=Volume 
does not exist: (u'224e4b80-2744-4d7f-bd9f-43eb8fe6cf11',))Mar 18 13:40:00 
node77-02 journal: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Unable 
to identify the OVF_STORE volume, falling back to initial vm.conf. Please 
ensure you already added your first data domain for regular VMs

HostedEngine VM works fine on all nodes. But node77-02 failed with an error in the webUI: 
ConnectStoragePoolVDS failed: Cannot find master domain: 
u'spUUID=5a5cca91-01f8-01af-0297-025f, 
msdUUID=7d5de684-58ff-4fbc-905d-3048fc55b2b1'

node77-02 vdsm.log:
2019-03-18 
13:51:46,287+0300 WARN  (jsonrpc/7) [storage.StorageServer.MountConnection] 
gluster server u'msk-gluster-facility.' is not in bricks 
['node-msk-gluster203', 'node-msk-gluster205', 'node-msk-gluster201'], possibly 
mounting duplicate servers (storageServer:317)2019-03-18 13:51:46,287+0300 INFO 
 (jsonrpc/7) [storage.Mount] mounting msk-gluster-facility.ipt.fsin.uis:/data 
at /rhev/data-center/mnt/glusterSD/msk-gluster-facility.:_data 
(mount:204

[ovirt-users] Re: Ovirt 4.3.1 cannot set host to maintenance

2019-03-12 Thread Strahil Nikolov

> Can you please check engine.log and vdsm.log to try understanding why the upload of the OVF_STORE content is failing on your environment?
> I fear it could cause other issues in the future.

The failure for 94ade632-6ecc-4901-8cec-8e39f3d69cb0 is because I didn't click the save button (Google Chrome OS has issues with oVirt, and this will be avoided):
2019-03-10 05:11:33,066+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-77) 
[b25001bb-dc7f-4ff0-9223-e63b6f3
8c5e2] Transfer failed. Download disk 'OVF_STORE' (disk id: 
'94ade632-6ecc-4901-8cec-8e39f3d69cb0', image id: 
'9460fc4b-54f3-48e3-b7b6-da962321ecf4')
2019-03-10 05:11:34,215+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] 
(EE-ManagedThreadFactory-engineScheduled-Thread-69) 
[b25001bb-dc7f-4ff0-9223-e63b6f38c5e2] Updating image transfer 
2773daf9-5920-404d-8a5b-6f04e431a9aa (image 
94ade632-6ecc-4901-8cec-8e39f3d69cb0) phase to Finished Failure
2019-03-10 05:11:34,329+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-69) 
[b25001bb-dc7f-4ff0-9223-e63b6f38c5e2] Failed to transfer disk 
'----' (command id 
'2773daf9-5920-404d-8a5b-6f04e431a9aa')
2019-03-10 05:11:34,331+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-69) 
[b25001bb-dc7f-4ff0-9223-e63b6f38c5e2] Ending command 
'org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand' 
successfully.
2019-03-10 05:11:34,331+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-69) 
[b25001bb-dc7f-4ff0-9223-e63b6f38c5e2] Lock freed to object 
'EngineLock:{exclusiveLocks='', 
sharedLocks='[94ade632-6ecc-4901-8cec-8e39f3d69cb0=DISK]'}'
2019-03-10 05:11:34,705+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-69) 
[b25001bb-dc7f-4ff0-9223-e63b6f38c5e2] EVENT_ID: 
TRANSFER_IMAGE_CANCELLED(1,033), Image Download with disk OVF_STORE was 
cancelled.



And the failure for 441abdc8-6cb1-49a4-903f-a1ec0ed88429 happened on Fedora28 
and the logs report:
2019-03-11 18:16:33,251+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-7) 
[434241fa-037c-44d2-8f83-0a69baa027e4] Finalizing failed transfer. Download 
disk 'OVF_STORE' (disk id: '441abdc8-6cb1-49a4-903f-a1ec0ed88429', image id: 
'c3309fc0-8707-4de1-903d-8d4bbb024f81')
2019-03-11 18:16:43,704+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-74) 
[434241fa-037c-44d2-8f83-0a69baa027e4] Transfer failed. Download disk 
'OVF_STORE' (disk id: '441abdc8-6cb1-49a4-903f-a1ec0ed88429', image id: 
'c3309fc0-8707-4de1-903d-8d4bbb024f81')
2019-03-11 18:16:44,929+02 WARN  
[org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
(EE-ManagedThreadFactory-engineScheduled-Thread-72) 
[434241fa-037c-44d2-8f83-0a69baa027e4] Trying to release a shared lock for key: 
'441abdc8-6cb1-49a4-903f-a1ec0ed88429DISK' , but lock does not exist


For the cancelled event - I think it shouldn't go into this "Failed" state, as the user has cancelled the action. For the second - I have no explanation.
Now comes the question - what should be done in order to fix that?
Best Regards,Strahil Nikolov

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BVK6Z7VGB4T7MJM7WGJYT45ISHY3ZZRK/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-12 Thread Strahil Nikolov
 Dear Simone,
it should be 60 min, but I checked several hours after that and it didn't update it.
[root@engine ~]# engine-config -g OvfUpdateIntervalInMinutes
OvfUpdateIntervalInMinutes: 60 version: general

How can I make a backup of the VM config? As you have noticed, the local copy in /var/run/ovirt-hosted-engine-ha/vm.conf won't work.
I will keep the HostedEngine's xml - so I can redefine if needed.
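A small sketch of the kind of backup discussed here, using the virsh alias shown elsewhere in this thread (the destination paths are just examples):

alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
virsh dumpxml HostedEngine > /root/HostedEngine.xml
virsh net-dumpxml vdsm-ovirtmgmt > /root/vdsm-ovirtmgmt.xml
cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/vm.conf.backup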
Best Regards,Strahil Nikolov
  
  
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GHH4TKNKFWSBKJVX6UHIVB6R4EKS54EH/


[ovirt-users] Re: iSCSI domain creation ; nothing happens

2019-03-12 Thread Strahil Nikolov
 Do you have the iscsi-initiator-utils rpm installed ?
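To rule the host-side initiator in or out, a manual discovery and login can be attempted with the portal and target taken from the logs below (iscsi-initiator-utils provides iscsiadm; treat the exact invocation as a sketch):

rpm -q iscsi-initiator-utils
iscsiadm -m discovery -t sendtargets -p 10.199.9.16:3260
iscsiadm -m node -T iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a -p 10.199.9.16:3260 --login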
Best Regards,Strahil Nikolov

On Tuesday, March 12, 2019 at 15:46:36 GMT+2, Guillaume Pavese wrote:
 
 My setup: oVirt 4.3.1 HC on CentOS 7.6, everything up to date. I am trying to create a new iSCSI Domain. It's a new LUN/Target created on a Synology bay, no CHAP (I tried with CHAP too but that does not help).
I first entered the Synology's address and clicked discover. I saw the existing targets; I clicked on the arrow on the right. I then get the following error: "Error while executing action: Failed to setup iSCSI subsystem"

In the host's logs, I get:
conn 0 login rejected: initiator error (02/00)
Connection1:0 to [target: iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a, portal: 10.199.9.16,3260] through [iface: default] is shutdown.

In engine logs, I get :
2019-03-12 14:33:35,504+01 INFO  [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] Running command: ConnectStorageToVdsCommand internal: false. Entities affected :  ID: aaa0----123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2019-03-12 14:33:35,511+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] START, ConnectStorageServerVDSCommand(HostName = ps-inf-int-kvm-fr-305-210.hostics.fr, StorageServerConnectionManagementVDSParameters:{hostId='6958c4f7-3716-40e4-859a-bfce2f6dbdba', storagePoolId='----', storageType='ISCSI', connectionList='[StorageServerConnections:{id='null', connection='10.199.9.16', iqn='iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 7f36d8a9
2019-03-12 14:33:36,302+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] FINISH, ConnectStorageServerVDSCommand, return: {----=465}, log id: 7f36d8a9
2019-03-12 14:33:36,310+01 ERROR [org.ovirt.engine.core.bll.storage.connection.ISCSIStorageHelper] (default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] The connection with details '----' failed because of error code '465' and error message is: failed to setup iscsi subsystem
2019-03-12 14:33:36,315+01 ERROR [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand] (default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] Transaction rolled-back for command 'org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand'.
2019-03-12 14:33:36,676+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (default task-24) [70251a16-0049-4d90-a67c-653b229f7639] START, GetDeviceListVDSCommand(HostName = ps-inf-int-kvm-fr-305-210.hostics.fr, GetDeviceListVDSCommandParameters:{hostId='6958c4f7-3716-40e4-859a-bfce2f6dbdba', storageType='ISCSI', checkStatus='false', lunIds='null'}), log id: 539fb345
2019-03-12 14:33:36,995+01 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (default task-24) [70251a16-0049-4d90-a67c-653b229f7639] FINISH, GetDeviceListVDSCommand, return: [], log id: 539fb345



Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MAQYGZAA2WK3BOPALWS2EJ3BRVMEZDIN/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-15 Thread Strahil Nikolov
 Ok,
I have managed to recover again and no issues are detected this time. I guess this case is quite rare and nobody else has experienced it.
Best Regards,Strahil Nikolov

On Wednesday, March 13, 2019 at 13:03:38 GMT+2, Strahil Nikolov wrote:
 
  Dear Simone,
it seems that there is some kind of problem, as the OVF got updated with the wrong configuration:

[root@ovirt2 ~]# ls -l /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/{441abdc8-6cb1-49a4-903f-a1ec0ed88429,94ade632-6ecc-4901-8cec-8e39f3d69cb0}
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/441abdc8-6cb1-49a4-903f-a1ec0ed88429:
total 66591
-rw-rw. 1 vdsm kvm   30720 Mar 12 08:06 c3309fc0-8707-4de1-903d-8d4bbb024f81
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 
c3309fc0-8707-4de1-903d-8d4bbb024f81.lease
-rw-r--r--. 1 vdsm kvm 435 Mar 12 08:06 
c3309fc0-8707-4de1-903d-8d4bbb024f81.meta

/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0:
total 66591
-rw-rw. 1 vdsm kvm   30720 Mar 13 11:07 9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 
9460fc4b-54f3-48e3-b7b6-da962321ecf4.lease
-rw-r--r--. 1 vdsm kvm 435 Mar 13 11:07 
9460fc4b-54f3-48e3-b7b6-da962321ecf4.meta

Starting the hosted-engine fails with:
2019-03-13 12:48:21,237+0200 ERROR (vm/8474ae07) [virt.vm] 
(vmId='8474ae07-f172-4a20-b516-375c73903df7') The vm start process failed 
(vm:937)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in 
_startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2852, in _run
    dom = self._connection.defineXML(self._domain.xml)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", 
line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in 
wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3743, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirtError: XML error: No PCI buses available

Best Regards,Strahil Nikolov


On Tuesday, March 12, 2019 at 14:14:26 GMT+2, Strahil Nikolov wrote:
 
  Dear Simone,
it should be 60 min , but I have checked several hours after that and it didn't 
update it.
[root@engine ~]# engine-config -g OvfUpdateIntervalInMinutes
OvfUpdateIntervalInMinutes: 60 version: general

How can i make a backup of the VM config , as you have noticed the local copy 
in /var/run/ovirt-hosted-engine-ha/vm.conf won't work ?
I will keep the HostedEngine's xml - so I can redefine if needed.
Best Regards,Strahil Nikolov
  
  
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZRPIBZKOD533HODP6VER726XWGQEZXM7/


[ovirt-users] oVirt 4.3.1 - Remove VM greyed out

2019-03-15 Thread Strahil Nikolov
Hi Community,
I have the following problem. A VM was created based on a template and after poweroff/shutdown it cannot be removed - the button is greyed out.
Has anyone hit such an issue? Any hint where to look?
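One thing that is sometimes worth checking in this situation is whether the VM or one of its disks is stuck in a locked state; the engine ships a helper for querying that (the path and options below are as on a standard engine install, so treat them as an assumption):

cd /usr/share/ovirt-engine/setup/dbutils
./unlock_entity.sh -t all -q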
Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YMLA37UBITKQT5VZYFL3L6P4PXKB7UGE/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-09 Thread Strahil Nikolov
 Hello again,
Latest update: the engine is up and running (or at least the login portal).
[root@ovirt1 ~]# hosted-engine --check-liveliness
Hosted Engine is up!

I have found online the xml for the network:
[root@ovirt1 ~]# cat ovirtmgmt_net.xml
  vdsm-ovirtmgmt

Sadly, I had to create a symbolic link to the main disk in /var/run/vdsm/storage, as it was missing.
So, what's next?
Issues up to now:
- 2 OVF - 0 bytes
- Problem with the local copy of the HostedEngine config - used an xml from an old vdsm log
- Missing vdsm-ovirtmgmt definition
- No link for the main raw disk in /var/run/vdsm/storage
Can you hint me how to recover the 2 OVF tars now?
Best Regards,Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/57QC4DEXCVF6AEIDFDLDBYSPZQIYJGOR/


[ovirt-users] oVirt 4.3.2 - Cannot update Host via UI

2019-03-22 Thread Strahil Nikolov
Hello guys,
I have the following issue after successfully updating my engine from 4.3.1 to 4.3.2 - I cannot update any host via the UI.
The event log shows the startup of the update, but there is no process running on the host, yum.log is not updated and the engine log doesn't show anything meaningful.
Any hint where to look?
Thanks in advance.
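A few places that usually capture the host-upgrade activity (standard log locations on a 4.3 engine and host, so treat the exact paths as an assumption):

# on the engine
tail -f /var/log/ovirt-engine/engine.log
ls -lrt /var/log/ovirt-engine/host-deploy/
# on the host being updated
tail -f /var/log/vdsm/vdsm.log /var/log/yum.log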
Best Regards,Strahil Nikolov



engine-log-without-Gluster
Description: Binary data
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RBXXBPZH5FFP23KYSNIAS6MSVUW6ZIUW/


[ovirt-users] Re: Ovirt self-hosted engine won't come up

2019-02-14 Thread Strahil Nikolov
 I have noticed that sometimes virsh shows the real error (like firewalld being stopped). Can you try to start the VM paused and then ask virsh to resume it:

hosted-engine --vm-start-paused
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf list
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine

Best Regards,Strahil Nikolov
On Thursday, February 14, 2019 at 19:39:35 GMT+2, joshuao...@gmail.com wrote:
 
 It appears the engine is down entirely now and hosted-engine --vm-start 
doesn't appear to change anything.

Engine status                      : {"reason": "bad vm status", "health": 
"bad", "vm": "down_unexpected", "detail": "Down"}
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HJZO23SUZS7UIHOSGRAZFUFLWFTOR2F6/


[ovirt-users] Re: Unable to deatch/remove ISO DOMAIN

2019-01-25 Thread Strahil Nikolov
 Hi Martin,
this is my history (please keep in mind that it might get distorted by the mail client). Note: I didn't stop ovirt-engine.service and this caused some errors to be logged - but the engine is still working without issues. As I said, this is my test lab and I was willing to play around :)
Good Luck!
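Before touching the engine DB by hand, a full backup is cheap insurance; a minimal sketch (the file names are arbitrary):

engine-backup --mode=backup --scope=all --file=/root/engine-backup.tar.bz2 --log=/root/engine-backup.log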

ssh root@engine

# Switch to the postgres user
su - postgres

# If you don't load this, there will be no path for psql, nor will it start at all
source /opt/rh/rh-postgresql95/enable

# Open the DB
psql engine

# Commands in the DB:
select id, storage_name from storage_domain_static;
select storage_domain_id, ovf_disk_id from storage_domains_ovf_info where storage_domain_id='fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_dynamic where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_static where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from base_disks where disk_id = '7a155ede-5317-4860-aa93-de1dc283213e';
delete from base_disks where disk_id = '7dedd0e1-8ce8-444e-8a3d-117c46845bb0';
delete from storage_domains_ovf_info where storage_domain_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_pool_iso_map where storage_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';

# I think this shows all tables:
select table_schema, table_name from information_schema.tables order by table_schema, table_name;

# Maybe you don't need this one and you need to find the NFS volume:
select * from gluster_volumes;
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';

select table_schema, table_name from information_schema.tables order by table_schema, table_name;

# The previous delete failed as there was an entry in storage_server_connections.
# In your case it could be different
select * from storage_server_connections;
delete from storage_server_connections where id = '490ee1c7-ae29-45c0-bddd-6170822c8490';
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';


Best Regards,Strahil Nikolov
On Friday, January 25, 2019 at 11:04:01 GMT+2, Martin Humaj wrote:
 
 Hi Strahil,
I have tried to use the same IP and NFS export to replace the original; it did not work properly.
If you can guide me how to do it in the engine DB I would appreciate it. This is a test system.
Thank you, Martin

On Fri, Jan 25, 2019 at 9:56 AM Strahil  wrote:

Can you create a temporary NFS server which can be accessed during the removal? I have managed to edit the engine's DB to get rid of the cluster domain, but this is not recommended for production systems :)
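A throw-away NFS export of the kind suggested above could look roughly like this (paths and export options are assumptions; 36:36 is the vdsm:kvm uid/gid oVirt expects):

yum install -y nfs-utils
mkdir -p /exports/tmpiso
chown 36:36 /exports/tmpiso
echo '/exports/tmpiso *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -rav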
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FHVNCODMC2POM5ISTICNMJ462VX72WXT/


[ovirt-users] Re: Ovirt 4.2.8 allows to remove a gluster volume without detaching the storage domain

2019-01-25 Thread Strahil Nikolov
 Hey Community,
where can I report this one ?
Best Regards,Strahil Nikolov

On Thursday, January 24, 2019 at 19:25:37 GMT+2, Strahil Nikolov wrote:
 
 Hello Community,
As I'm still experimenting with my oVirt lab, I have somehow managed to remove my gluster volume ('gluster volume list' confirms it) without detaching the storage domain.
This sounds like a bug to me, am I right?
Steps to reproduce:
1. Create a replica 3 arbiter 1 gluster volume
2. Create a storage domain on it
3. Go to Volumes and select the name of the volume
4. Press remove and confirm. The task fails, but the volume is now gone in gluster.
I guess I have to do some cleanup in the DB in order to fix that.
Best Regards,Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4B2U6XEK6XIXTF5SZEJWAGGX5ENGSS52/


[ovirt-users] Re: Sanlock volume corrupted on deployment

2019-01-30 Thread Strahil Nikolov
 Dear All,
I have rebuilt the gluster cluster, but it seems that with the latest updates (I started over from scratch) I am not able to complete the "Prepare VM" phase, and thus I cannot reach the last phase where the sanlock issue happens.

I have checked the contents of " 
/var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-31T06:54:22Z/ovirt-engine/engine.log"
 and the only errors I see are:

[root@ovirt1 ovirt-engine]# grep ERROR engine.log
2019-01-31 08:56:33,326+02 ERROR 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-55) [3806b629] Failed in 
'GlusterServersListVDS' method
2019-01-31 08:56:33,343+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-55) [3806b629] EVENT_ID: 
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirt1.localdomain command 
GlusterServersListVDS failed: The method does not exist or is not available: 
{'method': u'GlusterHost.list'}
2019-01-31 08:56:33,344+02 ERROR 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-55) [3806b629] Command 
'GlusterServersListVDSCommand(HostName = ovirt1.localdomain, 
VdsIdVDSCommandParametersBase:{hostId='07c6b36a-6939-4059-8dd3-4e47ea094538'})' 
execution failed: VDSGenericException: VDSErrorException: Failed to 
GlusterServersListVDS, error = The method does not exist or is not available: 
{'method': u'GlusterHost.list'}, code = -32601
2019-01-31 08:56:33,591+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-55) [51bf8a11] EVENT_ID: 
GLUSTER_COMMAND_FAILED(4,035), Gluster command [] failed on server 
.
2019-01-31 08:56:34,856+02 ERROR 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-60) [3ee4bd51] Failed in 
'GlusterServersListVDS' method
2019-01-31 08:56:34,857+02 ERROR 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-60) [3ee4bd51] Command 
'GlusterServersListVDSCommand(HostName = ovirt1.localdomain, 
VdsIdVDSCommandParametersBase:{hostId='07c6b36a-6939-4059-8dd3-4e47ea094538'})' 
execution failed: VDSGenericException: VDSErrorException: Failed to 
GlusterServersListVDS, error = The method does not exist or is not available: 
{'method': u'GlusterHost.list'}, code = -32601
2019-01-31 08:56:35,191+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-60) [3fd826e] EVENT_ID: 
GLUSTER_COMMAND_FAILED(4,035), Gluster command [] failed on server 
.



Any hint how to proceed further ?
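One common cause of the 'GlusterHost.list' method error above (an assumption here, not confirmed from these logs) is that the host is missing the vdsm gluster bindings; a quick check and fix could look like this:

rpm -q vdsm-gluster
yum install -y vdsm-gluster
systemctl restart supervdsmd vdsmd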
Best Regards,Strahil Nikolov



On Tuesday, January 29, 2019 at 14:01:17 GMT+2, Strahil wrote:
 
 Dear Nir,
According to redhat solution 1179163 'add_lockspace fail result -233' indicates 
corrupted ids lockspace.
During the install, the VM fails to come up. In order to fix it, I stop ovirt-ha-agent, ovirt-ha-broker, vdsmd, supervdsmd and sanlock, then reinitialize the lockspace via 'sanlock direct init -s' (I used bug report 1116469 as guidance). Once the init is successful and all the services are up, the VM is started - but the deployment is long over by then and the setup needs additional cleaning up.
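For reference, a sketch of that sequence; the lockspace path below is hypothetical (the lockspace file lives under the hosted-engine storage domain and the exact path differs per setup), while the lockspace name 'hosted-engine' is the one shown in the sanlock error further down in this thread:

systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd sanlock
sanlock direct init -s hosted-engine:0:/rhev/data-center/mnt/<server>:_engine/<sd_uuid>/ha_agent/hosted-engine.lockspace:0
systemctl start sanlock supervdsmd vdsmd ovirt-ha-broker ovirt-ha-agent

Newer hosted-engine versions also ship a helper for this (hosted-engine --reinitialize-lockspace), which may be preferable where available.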
I will rebuild the gluster cluster and then will repeat the deployment.
Can you guide me what information will be needed , as I'm quite new in 
ovirt/RHV ?
Best Regards,Strahil Nikolov
On Jan 28, 2019 20:34, Nir Soffer  wrote:

On Sat, Jan 26, 2019 at 6:13 PM Strahil  wrote:

Hey guys,
I have noticed that with 4.2.8 the sanlock issue (during deployment) is still not fixed. Am I the only one with bad luck, or is something broken there?
The sanlock service reports 's7 add_lockspace fail result -233' and 'leader1 delta_acquire_begin error -233 lockspace hosted-engine host_id 1'.

Sanlock does not have such an error code - are you sure this is -233?
Here are the sanlock return values: https://pagure.io/sanlock/blob/master/f/src/sanlock_rv.h

Can you share your sanlock log?
 

Best Regards,Strahil Nikolov___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SZMF5KKHSXOUTLGX3LR2NBN7E6QGS6G3/




[ovirt-users] Re: Deploying single instance - error

2019-01-30 Thread Strahil Nikolov
Hi All,
I have managed to fix this by reinstalling the gdeploy package. Yet it still asks for the "Disckount" section - but as the fix has not been rolled out for CentOS yet, this is expected.
Best Regards,Strahil Nikolov

 

On Thu, Jan 31, 2019 at 8:01 AM Strahil Nikolov  wrote:

Hey Guys/Gals,
did you update the gdeploy for CentOS ?

gdeploy is updated for Fedora; for CentOS the packages will be updated shortly, we are testing the packages.
However, this issue you are facing where RAID is selected over JBOD is strange. Gobinda will look into this, and might need more details. 

 
It seems to not be working - now it doesn't honour the whole cockpit wizard. Instead of JBOD it selects raid6, instead of md0 it uses sdb, etc.
[root@ovirt1 ~]# gdeploy --version
gdeploy 2.0.2
[root@ovirt1 ~]# rpm -qa gdeploy
gdeploy-2.0.8-1.el7.noarch
Note: This is a fresh install.
Best Regards,Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QTHXJY4XARH4IFZX57LMUM2PXNBO4TN2/


[ovirt-users] Re: Deploying single instance - error

2019-01-30 Thread Strahil Nikolov
Hey Guys/Gals,

did you update the gdeploy for CentOS ?
It seems to not be working - now it doesn't honour the whole cockpit
wizard.
Instead of JBOD - it selects raid6, instead of md0 - it uses sdb , etc.
[root@ovirt1 ~]# gdeploy --version
gdeploy 2.0.2
[root@ovirt1 ~]# rpm -qa gdeploy
gdeploy-2.0.8-1.el7.noarch

Note: This is a fresh install.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QRCPZ6J4DT7W4PZYBAO6NKKYRHX3VYV6/


[ovirt-users] Re: But in the web interface?

2019-02-05 Thread Strahil Nikolov
Dear Hetz,
I have opened a bug for that : 1662047 – [UI] 2 dashboard icons after upgrade


You can check the workaround described there.
Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BBGG2UGGYJR3SIYTW35XWA6BV3Y64KCI/


[ovirt-users] Re: Ovirt 4.3 RC missing glusterfs-gnfs

2019-02-05 Thread Strahil Nikolov
 
CentOS-7 - oVirt 4.3
    218
ovirt-4.3-centos-qemu-ev/x86_64   
CentOS-7 - QEMU EV  
 71
ovirt-4.3-epel/x86_64 Extra 
Packages for Enterprise Linux 7 - x86_64
   12,900
ovirt-4.3-pre/7   oVirt 
4.3 Pre-Release 
  648
ovirt-4.3-virtio-win-latest   
virtio-win builds roughly matching what will be shipped in upcoming RHEL
 39
ovirtwebui-ovirt-web-ui-master/x86_64 Copr 
repo for ovirt-web-ui-master owned by ovirtwebui
 2
updates/7/x86_64  
CentOS-7 - Updates  
  1,057
repolist: 51,664
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use 
subscription-manager to register.
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository ovirt-4.3-epel is listed more than once in the configuration
Repository ovirt-4.3-centos-gluster5 is listed more than once in the 
configuration
Repository ovirt-4.3-virtio-win-latest is listed more than once in the 
configuration
Repository ovirt-4.3-centos-qemu-ev is listed more than once in the 
configuration
Repository ovirt-4.3-centos-ovirt43 is listed more than once in the 
configuration
Repository ovirt-4.3-centos-opstools is listed more than once in the 
configuration
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository ovirtwebui-ovirt-web-ui-master is listed more than once in the 
configuration
Repository ovirt-4.3 is listed more than once in the configuration
Cannot upload enabled repos report, is this client registered?
As you might have noticed, there is no glusterfs-gnfs in the ovirt-4.3-centos-gluster5 repository.

Best Regards,Strahil Nikolov
[root@ovirt2 yum.repos.d]# yum update
Loaded plugins: enabled_repos_upload, fastestmirror, package_upload, 
product-id, search-disabled-repos, subscription-manager, vdsmupgrade
This system is not registered with an entitlement server. You can use 
subscription-manager to register.
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository ovirt-4.3-epel is listed more than once in the configuration
Repository ovirt-4.3-centos-gluster5 is listed more than once in the 
configuration
Repository ovirt-4.3-virtio-win-latest is listed more than once in the 
configuration
Repository ovirt-4.3-centos-qemu-ev is listed more than once in the 
configuration
Repository ovirt-4.3-centos-ovirt43 is listed more than once in the 
configuration
Repository ovirt-4.3-centos-opstools is listed more than once in the 
configuration
Repository centos-sclo-rh-release is listed more than once in the configuration
Repository ovirtwebui-ovirt-web-ui-master is listed more than once in the 
configuration
Repository ovirt-4.3 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: mirrors.neterra.net
 * extras: mirrors.neterra.net
 * ovirt-4.2: mirror.slu.cz
 * ovirt-4.2-epel: mirrors.neterra.net
 * ovirt-4.3: mirror.slu.cz
 * ovirt-4.3-epel: mirrors.neterra.net
 * updates: mirrors.neterra.net
Resolving Dependencies
--> Running transaction check
---> Package cockpit-ovirt-dashboard.noarch 0:0.11.38-1.el7 will be updated
---> Package cockpit-ovirt-dashboard.noarch 0:0.12.1-1.el7 will be an update
---> Package glusterfs.x86_64 0:3.12.15-1.el7 will be updated
--> Processing Dependency: glusterfs(x86-64) = 3.12.15-1.el7 for package: 
glusterfs-gnfs-3.12.15-1.el7.x86_64
---> Package glusterfs.x86_64 0:5.3-1.el7 will be an update
---> Package glusterfs-api.x86_64 0:3.12.15-1.el7 will be updated
---> Package glusterfs-api.x86_64 0:5.3-1.el7 will be an update
---> Package glusterfs-api-devel.x86_64 0:3.12.15-1.el7 will be updated
---> Package glusterfs-api-devel.x86_64 0:5.3-1.el7 will be an update
---> Package glusterfs-cli.x86_64 0:3.12.15-1.el7 will be updated
---> Package glusterfs-cli.x86_64 0:5.3-1.el7 will be an update
---> Package glusterfs-client-xlators.x86_64 0:3.12.15-1.el7 will be updated
--> Processing Dependency: glusterfs-client-xlators(x86-64) = 3.12.15-1.el7 for 
package: glusterfs-gnfs-3.12.15-1.el7.x86_64
---> Package glusterfs-client-xlators.x86_64 0:5.3-1.el7 will be an update
---> Package gluster

[ovirt-users] Re: ovirt-node 4.2 iso - hyperconverged wizard doesn't write gdeployConfig settings

2019-02-05 Thread Strahil Nikolov
 Dear Feral,
> On that note, have you also had issues with gluster not restarting on reboot, as well as all of the HA stuff failing on reboot after power loss? Thus far, the only way I've got the cluster to come back to life is to manually restart glusterd on all nodes, then put the cluster back into "not maintenance" mode, and then manually start the hosted-engine vm. This also fails after 2 or 3 power losses, even though the entire cluster is happy through the first 2.

About gluster not starting - use systemd .mount unit files. Here is my setup, and it works for now:
[root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.mount
# /etc/systemd/system/gluster_bricks-engine.mount
[Unit]
Description=Mount glusterfs brick - ENGINE
Requires = vdo.service
After = vdo.service
Before = glusterd.service
Conflicts = umount.target

[Mount]
What=/dev/mapper/gluster_vg_md0-gluster_lv_engine
Where=/gluster_bricks/engine
Type=xfs
Options=inode64,noatime,nodiratime

[Install]
WantedBy=glusterd.service
[root@ovirt2 yum.repos.d]# systemctl cat gluster_bricks-engine.automount
# /etc/systemd/system/gluster_bricks-engine.automount
[Unit]
Description=automount for gluster brick ENGINE

[Automount]
Where=/gluster_bricks/engine

[Install]
WantedBy=multi-user.target
[root@ovirt2 yum.repos.d]# systemctl cat glusterd
# /etc/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
Requires=rpcbind.service gluster_bricks-engine.mount gluster_bricks-data.mount 
gluster_bricks-isos.mount
After=network.target rpcbind.service gluster_bricks-engine.mount 
gluster_bricks-data.mount gluster_bricks-isos.mount
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
Environment="LOG_LEVEL=INFO"
EnvironmentFile=-/etc/sysconfig/glusterd
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid  --log-level $LOG_LEVEL 
$GLUSTERD_OPTIONS
KillMode=process
SuccessExitStatus=15

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/glusterd.service.d/99-cpu.conf
[Service]
CPUAccounting=yes
Slice=glusterfs.slice


Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K24KAM7RXA77EWJDNYDFJYDDMNXX7OMB/


[ovirt-users] Ovirt 4.2.8 allows to remove a gluster volume without detaching the storage domain

2019-01-24 Thread Strahil Nikolov
Hello Community,
As I'm still experimenting with my oVirt lab, I have somehow managed to remove my gluster volume ('gluster volume list' confirms it) without detaching the storage domain.
This sounds like a bug to me, am I right?
Steps to reproduce:
1. Create a replica 3 arbiter 1 gluster volume
2. Create a storage domain on it
3. Go to Volumes and select the name of the volume
4. Press remove and confirm. The task fails, but the volume is now gone in gluster.
I guess I have to do some cleanup in the DB in order to fix that.
Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CIU7OGRQU5IJ2JJLSRYS7DJXB3DNQSLQ/


[ovirt-users] Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
 Hello All,
it seems that "systemd-1" comes from the automount unit, and not from the systemd .mount unit.
[root@ovirt1 system]# systemctl cat gluster_bricks-isos.automount
# /etc/systemd/system/gluster_bricks-isos.automount
[Unit]
Description=automount for gluster brick ISOS

[Automount]
Where=/gluster_bricks/isos

[Install]
WantedBy=multi-user.target



Best Regards,Strahil Nikolov

On Friday, April 12, 2019 at 4:12:31 GMT-4, Strahil Nikolov wrote:
 
  Hello All,
I have tried to enable debug and see the reason for the issue. Here is the 
relevant glusterd.log:
[2019-04-12 07:56:54.526508] E [MSGID: 106077] 
[glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get 
pool name for device systemd-1
[2019-04-12 07:56:54.527509] E [MSGID: 106121] 
[glusterd-snapshot.c:2523:glusterd_snapshot_create_prevalidate] 0-management: 
Failed to pre validate
[2019-04-12 07:56:54.527525] E [MSGID: 106024] 
[glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: 
Snapshot is supported only for thin provisioned LV. Ensure that all bricks of 
isos are thinly provisioned LV.
[2019-04-12 07:56:54.527539] W [MSGID: 106029] 
[glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot 
create pre-validation failed
[2019-04-12 07:56:54.527552] W [MSGID: 106121] 
[glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot 
Prevalidate Failed
[2019-04-12 07:56:54.527568] E [MSGID: 106121] 
[glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
Validation failed for operation Snapshot on local node
[2019-04-12 07:56:54.527583] E [MSGID: 106121] 
[glusterd-mgmt.c:2377:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre 
Validation Failed

here is the output of lvscan & lvs:
[root@ovirt1 ~]# lvscan
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [9.86 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [168.59 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/swap' [6.70 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/root' [60.00 GiB] inherit
[root@ovirt1 ~]# lvs --noheadings -o pool_lv



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt2 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [<9.77 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [<161.40 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/root' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/swap' [16.00 GiB] inherit



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt3 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_sda3/gluster_thinpool_sda3' [41.00 GiB] 
inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_data' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_isos' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_engine' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/root' [20.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/swap' [8.00 GiB] inherit



  gluster_thinpool_sda3
  gluster_thinpool_sda3
  gluster_thinpool_sda3


I am mounting my bricks via systemd, as I had issues with the bricks being mounted before VDO was started.
[root@ovirt1 ~]# findmnt /gluster_bricks/isos
TARGET   SOURCE FSTYPE OPTIONS
/gluster_bricks/isos systemd-1  autofs 
rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21843
/gluster_bricks/isos /dev/mapper/gluster_vg_md0-gluster_lv_isos xfs    
rw,noatime,nodiratime,seclabel,attr2,inode64,noquota
[root@ovirt1 ~]# ssh ovirt2 "findmnt /gluster_bricks/isos "
TARGET   SOURCE FSTYPE OPTIONS
/gluster_bricks/isos systemd-1  autofs 
rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14279
/gluster_bricks/isos /dev/mapper/gluster_vg_md0-gluster_lv_isos xfs    
rw,noatime,nodiratime,seclabel,attr2,inode64,noquota
[root@ovirt1 ~]# ssh ovirt3 "findmnt /gluster_bricks/isos "
TARGET   SOURCE  FSTYPE OPTIONS
/gluster_br

[ovirt-users] Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
e_ino=21513 
0 0
systemd-1 /gluster_bricks/engine autofs 
rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21735 
0 0
systemd-1 /gluster_bricks/isos autofs 
rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21843 
0 0
/dev/mapper/gluster_vg_ssd-gluster_lv_engine /gluster_bricks/engine xfs 
rw,seclabel,noatime,nodiratime,attr2,inode64,sunit=256,swidth=256,noquota 0 0
/dev/mapper/gluster_vg_md0-gluster_lv_isos /gluster_bricks/isos xfs 
rw,seclabel,noatime,nodiratime,attr2,inode64,noquota 0 0
/dev/mapper/gluster_vg_md0-gluster_lv_data /gluster_bricks/data xfs 
rw,seclabel,noatime,nodiratime,attr2,inode64,noquota 0 0




Obviously, gluster is picking up "systemd-1" as the device and tries to check whether it's a thin LV. Where should I open a bug for that?
P.S.: Adding the oVirt users list.
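While this gets triaged, one possible workaround (an assumption, not a verified fix) is to take the automount unit out of the picture so that only the real device mount shows up in /proc/mounts, which is what the snapshot pre-validation appears to parse:

systemctl disable --now gluster_bricks-isos.automount
systemctl start gluster_bricks-isos.mount
findmnt --target /gluster_bricks/isos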

Best Regards,Strahil Nikolov


On Thursday, April 11, 2019 at 4:00:31 GMT-4, Strahil Nikolov wrote:
 
   Hi Rafi,
thanks for your update.
I have tested again with another gluster volume.

[root@ovirt1 glusterfs]# gluster volume info isos

Volume Name: isos
Type: Replicate
Volume ID: 9b92b5bd-79f5-427b-bd8d-af28b038ed2a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/isos/isos
Brick2: ovirt2:/gluster_bricks/isos/isos
Brick3: ovirt3.localdomain:/gluster_bricks/isos/isos (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable

Command run:
logrotate -f glusterfs ; logrotate -f glusterfs-georep;  gluster snapshot 
create isos-snap-2019-04-11 isos  description TEST

Logs:
[root@ovirt1 glusterfs]# cat cli.log
[2019-04-11 07:51:02.367453] I [cli.c:769:main] 0-cli: Started running gluster 
with version 5.5
[2019-04-11 07:51:02.486863] I [MSGID: 101190] 
[event-epoll.c:621:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 1
[2019-04-11 07:51:02.556813] E [cli-rpc-ops.c:11293:gf_cli_snapshot] 0-cli: 
cli_to_glusterd for snapshot failed
[2019-04-11 07:51:02.556880] I [input.c:31:cli_batch] 0-: Exiting with: -1
[root@ovirt1 glusterfs]# cat glusterd.log
[2019-04-11 07:51:02.553357] E [MSGID: 106024] 
[glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: 
Snapshot is supported only for thin provisioned LV. Ensure that all bricks of 
isos are thinly provisioned LV.
[2019-04-11 07:51:02.553365] W [MSGID: 106029] 
[glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot 
create pre-validation failed
[2019-04-11 07:51:02.553703] W [MSGID: 106121] 
[glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot 
Prevalidate Failed
[2019-04-11 07:51:02.553719] E [MSGID: 106121] 
[glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
Validation failed for operation Snapshot on local node

My LVs hosting the bricks are:
[root@ovirt1 ~]# lvs gluster_vg_md0
  LV  VG Attr   LSize   Pool    Origin 
Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data gluster_vg_md0 Vwi-aot--- 500.00g my_vdo_thinpool    35.97
  gluster_lv_isos gluster_vg_md0 Vwi-aot---  50.00g my_vdo_thinpool    52.11
  my_vdo_thinpool gluster_vg_md0 twi-aot---   9.86t    2.04 
  11.45

[root@ovirt1 ~]# ssh ovirt2 "lvs gluster_vg_md0"
  LV  VG Attr   LSize   Pool    Origin 
Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data gluster_vg_md0 Vwi-aot--- 500.00g my_vdo_thinpool    35.98
  gluster_lv_isos gluster_vg_md0 Vwi-aot---  50.00g my_vdo_thinpool    25.94
  my_vdo_thinpool gluster_vg_md0 twi-aot---  <9.77t    1.93 
  11.39
[root@ovirt1 ~]# ssh ovirt3 "lvs gluster_vg_sda3"
  LV    VG  Attr   LSize  Pool  
Origin Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data   gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3 
   0.17
  gluster_lv_engine gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3 
   0.16
  gluster_lv_isos   gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3 
   0.12
  gluster_thinpool_sda3 gluster_vg_sda3 twi-aotz-- 41.00g   
   0.16   1.58

As you can see, all bricks are thin LVs and space is not the issue.
Can someone hint me how to enable debug, so gluster logs
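In case it helps anyone hitting this later, a rough sketch of raising gluster log verbosity (the diagnostics.* volume options are standard; how glusterd itself picks up its log level depends on the packaging, so treat that part as an assumption):

# per-volume log verbosity for bricks and clients
gluster volume set isos diagnostics.brick-log-level DEBUG
gluster volume set isos diagnostics.client-log-level DEBUG

# glusterd's own log level can usually be raised via its --log-level option
# (on RPM installs it is typically wired through /etc/sysconfig/glusterd)
systemctl stop glusterd && glusterd --log-level DEBUG

# remember to drop back to INFO once the logs have been collected
gluster volume set isos diagnostics.brick-log-level INFO
gluster volume set isos diagnostics.client-log-level INFO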

[ovirt-users] oVirt 4.3.2 missing/wrong status of VM

2019-04-14 Thread Strahil Nikolov
As I couldn't find the exact mail thread, I'm attaching my 
/usr/lib/python2.7/site-packages/vdsm/virt/guestagent.py which fixes the 
missing/wrong status of VMs.
You will need to restart vdsmd (I'm not sure how safe that is with running guests) in order for it to start working.
Best Regards,
Strahil Nikolov

guestagent.py
Description: Binary data
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KK4NVC3U37HPKCO4KPO4YRBFCKYPDRGE/


[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Strahil Nikolov
Some kernels do not like values below 5%, thus I prefer to use vm.dirty_bytes & vm.dirty_background_bytes.
Try the following ones (comment out the vdsm.conf values):
vm.dirty_background_bytes = 2
vm.dirty_bytes = 45000
It's more like shooting in the dark, but it might help.
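A sketch of making such settings persistent (the two numbers below are placeholders only - size them for the host's RAM; note that setting the *_bytes variants makes the kernel ignore the *_ratio ones):

cat > /etc/sysctl.d/90-dirty-bytes.conf <<'EOF'
vm.dirty_background_bytes = 67108864
vm.dirty_bytes = 268435456
EOF
sysctl -p /etc/sysctl.d/90-dirty-bytes.conf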
Best Regards,
Strahil Nikolov
On Sunday, April 14, 2019 at 19:06:07 GMT+3, Alex McWhirter wrote:
 
 On 2019-04-13 03:15, Strahil wrote:
> Hi,
> 
> What is your dirty  cache settings on the gluster servers  ?
> 
> Best Regards,
> Strahil NikolovOn Apr 13, 2019 00:44, Alex McWhirter  
> wrote:
>> 
>> I have 8 machines acting as gluster servers. They each have 12 drives
>> raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as
>> one).
>> 
>> They connect to the compute hosts and to each other over lacp'd 10GB
>> connections split across two cisco nexus switched with VPC.
>> 
>> Gluster has the following set.
>> 
>> performance.write-behind-window-size: 4MB
>> performance.flush-behind: on
>> performance.stat-prefetch: on
>> server.event-threads: 4
>> client.event-threads: 8
>> performance.io-thread-count: 32
>> network.ping-timeout: 30
>> cluster.granular-entry-heal: enable
>> performance.strict-o-direct: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> features.shard: on
>> cluster.shd-wait-qlength: 1
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: off
>> performance.low-prio-threads: 32
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> auth.allow: *
>> user.cifs: off
>> transport.address-family: inet
>> nfs.disable: off
>> performance.client-io-threads: on
>> 
>> 
>> I have the following sysctl values on gluster client and servers, 
>> using
>> libgfapi, MTU 9K
>> 
>> net.core.rmem_max = 134217728
>> net.core.wmem_max = 134217728
>> net.ipv4.tcp_rmem = 4096 87380 134217728
>> net.ipv4.tcp_wmem = 4096 65536 134217728
>> net.core.netdev_max_backlog = 30
>> net.ipv4.tcp_moderate_rcvbuf =1
>> net.ipv4.tcp_no_metrics_save = 1
>> net.ipv4.tcp_congestion_control=htcp
>> 
>> reads with this setup are perfect, benchmarked in VM to be about 
>> 770MB/s
>> sequential with disk access times of < 1ms. Writes on the other hand 
>> are
>> all over the place. They peak around 320MB/s sequential write, which 
>> is
>> what i expect but it seems as if there is some blocking going on.
>> 
>> During the write test i will hit 320MB/s briefly, then 0MB/s as disk
>> access time shoot to over 3000ms, then back to 320MB/s. It averages 
>> out
>> to about 110MB/s afterwards.
>> 
>> Gluster version is 3.12.15 ovirt is 4.2.7.5
>> 
>> Any ideas on what i could tune to eliminate or minimize that blocking?
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7F72BKYKAGICERZETSA4KCLQYR3AORR/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMB6NCNJL2WKEDWPAM4OJIRF2GIDJUUE/

Just the vdsm defaults

vm.dirty_ratio = 5
vm.dirty_background_ratio = 2

these boxes only have 8gb of ram as well, so those percentages should be 
super small.

  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5U6QGARQSLFXMPP2EB57DSEACZ3H5SBY/


[ovirt-users] Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
 I hope this is the last update on the issue -> opened a bug 
https://bugzilla.redhat.com/show_bug.cgi?id=1699309

Best regards,
Strahil Nikolov

On Friday, April 12, 2019 at 7:32:41 GMT-4, Strahil Nikolov wrote:
 
  Hi All,
I have tested gluster snapshot without systemd.automount units and it works as 
follows:

[root@ovirt1 system]# gluster snapshot create isos-snap-2019-04-11 isos  
description TEST
snapshot create: success: Snap isos-snap-2019-04-11_GMT-2019.04.12-11.18.24 
created successfully

[root@ovirt1 system]# gluster snapshot list
isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
[root@ovirt1 system]# gluster snapshot info 
isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
Snapshot  : isos-snap-2019-04-11_GMT-2019.04.12-11.18.24
Snap UUID : 70d5716e-4633-43d4-a562-8e29a96b0104
Description   : TEST
Created   : 2019-04-12 11:18:24
Snap Volumes:

    Snap Volume Name  : 584e88eab0374c0582cc544a2bc4b79e
    Origin Volume name    : isos
    Snaps taken for isos  : 1
    Snaps available for isos  : 255
    Status    : Stopped


Best Regards,
Strahil Nikolov

On Friday, April 12, 2019 at 4:32:18 GMT-4, Strahil Nikolov wrote:
 
  Hello All,
it seems that "systemd-1" is from the automount unit , and not from the systemd 
unit.
[root@ovirt1 system]# systemctl cat gluster_bricks-isos.automount
# /etc/systemd/system/gluster_bricks-isos.automount
[Unit]
Description=automount for gluster brick ISOS

[Automount]
Where=/gluster_bricks/isos

[Install]
WantedBy=multi-user.target
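For comparison, a sketch of a static mount unit that waits for VDO instead of using autofs - this keeps the real device as the mount source in /proc/mounts (device path as in the findmnt output earlier in the thread; the vdo.service ordering is an assumption):

# /etc/systemd/system/gluster_bricks-isos.mount
[Unit]
Description=Mount for gluster brick ISOS
Requires=vdo.service
After=vdo.service

[Mount]
What=/dev/mapper/gluster_vg_md0-gluster_lv_isos
Where=/gluster_bricks/isos
Type=xfs
Options=noatime,nodiratime

[Install]
WantedBy=multi-user.target

# switch over with:
systemctl disable --now gluster_bricks-isos.automount
systemctl enable --now gluster_bricks-isos.mount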



Best Regards,
Strahil Nikolov

On Friday, April 12, 2019 at 4:12:31 GMT-4, Strahil Nikolov wrote:
 
  Hello All,
I have tried to enable debug and see the reason for the issue. Here is the 
relevant glusterd.log:
[2019-04-12 07:56:54.526508] E [MSGID: 106077] 
[glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get 
pool name for device systemd-1
[2019-04-12 07:56:54.527509] E [MSGID: 106121] 
[glusterd-snapshot.c:2523:glusterd_snapshot_create_prevalidate] 0-management: 
Failed to pre validate
[2019-04-12 07:56:54.527525] E [MSGID: 106024] 
[glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: 
Snapshot is supported only for thin provisioned LV. Ensure that all bricks of 
isos are thinly provisioned LV.
[2019-04-12 07:56:54.527539] W [MSGID: 106029] 
[glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot 
create pre-validation failed
[2019-04-12 07:56:54.527552] W [MSGID: 106121] 
[glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot 
Prevalidate Failed
[2019-04-12 07:56:54.527568] E [MSGID: 106121] 
[glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
Validation failed for operation Snapshot on local node
[2019-04-12 07:56:54.527583] E [MSGID: 106121] 
[glusterd-mgmt.c:2377:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre 
Validation Failed
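For reference, the prevalidation does roughly the equivalent of the following (simplified sketch), which is why it ends up probing "systemd-1" instead of the LV:

# the first /proc/mounts entry for the brick mountpoint is the autofs placeholder
mnt_device=$(awk '$2 == "/gluster_bricks/isos" {print $1; exit}' /proc/mounts)
echo "$mnt_device"                          # -> systemd-1
lvs --noheadings -o pool_lv "$mnt_device"   # fails -> "not a thin LV"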

here is the output of lvscan & lvs:
[root@ovirt1 ~]# lvscan
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [9.86 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [168.59 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/swap' [6.70 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt1/root' [60.00 GiB] inherit
[root@ovirt1 ~]# lvs --noheadings -o pool_lv



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt2 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_md0/my_vdo_thinpool' [<9.77 TiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/my_ssd_thinpool' [<161.40 GiB] inherit
  ACTIVE    '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/root' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt2/swap' [16.00 GiB] inherit



  my_vdo_thinpool
  my_vdo_thinpool

  my_ssd_thinpool

[root@ovirt1 ~]# ssh ovirt3 "lvscan;lvs --noheadings -o pool_lv"
  ACTIVE    '/dev/gluster_vg_sda3/gluster_thinpool_sda3' [41.00 GiB] 
inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_data' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_isos' [15.00 GiB] inherit
  ACTIVE    '/dev/gluster_vg_sda3/gluster_lv_engine' [15.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/root' [20.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/home' [1.00 GiB] inherit
  ACTIVE    '/dev/centos_ovirt3/swap' [8.00 GiB] inherit



  gluster_thinpool_sda3
  gluster_thinpool_sda3
  gluster_thinpool_sda3


I am mounting my bricks via systemd, as I have issues with bricks being started before VDO is ready.

[ovirt-users] Re: Are people still experiencing issues with GlusterFS on 4.3x?

2019-03-15 Thread Strahil Nikolov
 
>I along with others had GlusterFS issues after 4.3 upgrades, the failed to
>dispatch handler issue with bricks going down intermittently. After some time
>it seemed to have corrected itself (at least in my environment) and I hadn't
>had any brick problems in a while. I upgraded my three node HCI cluster to
>4.3.1 yesterday and again I'm running in to brick issues. They will all be up
>running fine then all of a sudden a brick will randomly drop and I have to
>force start the volume to get it back up.
>
>Have any of these Gluster issues been addressed in 4.3.2 or any other
>releases/patches that may be available to help the problem at this time?
>
>Thanks!
Yep,
sometimes a brick dies (usually my ISO domain) and then I have to "gluster volume start isos force". Sadly I had several issues with 4.3.X - problematic OVF_STORE (0 bytes), issues with gluster, an out-of-sync network - so for me 4.3.0 & 4.3.0 are quite unstable.
Is there a convention indicating stability? Does 4.3.xxx mean unstable, while 4.2.yyy means stable?
Best Regards,
Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ACQE2DCN2LP3RPIPZNXYSLCBXZ4VOPX2/


[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-15 Thread Strahil Nikolov

On Fri, Mar 15, 2019 at 8:12 AM Strahil Nikolov  wrote:

 Ok,
I have managed to recover again and no issues are detected this time. I guess this case is quite rare and nobody has experienced it.

>Hi,
>can you please explain how you fixed it?
I have set global maintenance again, defined the HostedEngine from the old xml (taken from an old vdsm log), defined the network and powered it off. I set the OVF update period to 5 min, but it took several hours until the OVF_STORE was updated. Once this happened I restarted ovirt-ha-agent and ovirt-ha-broker on both nodes. Then I powered off the HostedEngine and undefined it from ovirt1.

Then I set the maintenance to 'none' and the VM powered up on ovirt1.
In order to test a failure, I removed the global maintenance and powered off the HostedEngine from itself (via ssh). It was brought back up on the other node.
In order to test a failure of ovirt2, I set ovirt1 in local maintenance and then removed it (mode 'none'), again shut down the VM via ssh, and it started again on ovirt1.
It seems to be working, as I have later shut down the Engine several times and 
it managed to start without issues. 
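For anyone hitting the same situation, the steps above map roughly to the following commands (a sketch only - the domain XML and network definition have to come from your own vdsm logs, and the virsh auth file path is an assumption):

hosted-engine --set-maintenance --mode=global
virsh -c 'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' define /root/HostedEngine.xml
virsh -c 'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' net-define /root/vdsm-ovirtmgmt.xml
# start the VM, wait until the OVF_STORE volumes get regenerated, then:
systemctl restart ovirt-ha-agent ovirt-ha-broker   # on both nodes
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status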

I'm not sure this is related, but I had detected that ovirt2 was out of sync on the vdsm-ovirtmgmt network, but it got fixed easily via the UI.



Best Regards,
Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3B7OQUA733ETUA66TB7HF5Y24BLSI4XO/


[ovirt-users] Re: oVirt 4.3.1 - Remove VM greyed out

2019-03-15 Thread Strahil Nikolov
Please ignore this one - I'm just too stupid and I didn't realize that the Deletion Protection was enabled.
Strahil

On Friday, March 15, 2019 at 11:27:08 GMT+2, Strahil Nikolov wrote:
 
 Hi Community,
I have the following problem. A VM was created based on a template and after poweroff/shutdown it cannot be removed - the button is greyed out.
Has anyone got such an issue? Any hint where to look?
Best Regards,
Strahil Nikolov
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7V6YQQQAKXGUSKCRTF2KKQAYCTAPTYKT/


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil Nikolov
Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me to create the storage domain without any issues. I set it on all 4 new gluster volumes and the storage domains were successfully created.
I have created a bug for that: https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else has already opened one - please ping me to mark this one as a duplicate.
Best Regards,
Strahil Nikolov

On Thursday, May 16, 2019 at 22:27:01 GMT+3, Darrell Budic wrote:
 
 On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:


On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:

I tried adding a new storage domain on my hyper converged test cluster running 
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume 
fine, but it’s not able to add the gluster storage domain (as either a managed 
gluster volume or directly entering values). The created gluster volume mounts 
and looks fine from the CLI. Errors in VDSM log:

... 
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't support direct IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
createStorageDomain error=Storage Domain target is unsupported: () 
from=:::10.100.90.5,44732, flow_id=31d993dd, 
task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)

The direct I/O check has failed.

So something is wrong in the file system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument
If it succeeds, but oVirt fails to connect to this domain, file a bug and we will investigate.
Nir

Yep, it fails as expected. Just to check, it is working on pre-existing 
volumes, so I poked around at gluster settings for the new volume. It has 
network.remote-dio=off set on the new volume, but enabled on old volumes. After 
enabling it, I’m able to run the dd test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
I’m also able to add the storage domain in ovirt now.
I see network.remote-dio=enable is part of the gluster virt group, so apparently it's not getting set by oVirt during the volume creation/optimize for storage?
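As a sketch, the whole virt group (which contains network.remote-dio=enable among other options) can be applied instead of setting options one by one - volume name taken from the test above:

gluster volume set test group virt
gluster volume get test network.remote-dio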


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPBXHYOHZA4XR5CHU7KMD2ISQWLFRG5N/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B7K24XYG3M43CMMM7MMFARH52QEBXIU5/


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil Nikolov

>This may be another issue. This command works only for storage with 512 bytes sector size.
>Hyperconverged systems may use VDO, and it must be configured in compatibility mode to support 512 bytes sector size.
>I'm not sure how this is configured but Sahina should know.
>Nir
I do use VDO.
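For reference, a sketch assuming the 'vdo' manager CLI: 512-byte sector emulation is chosen at creation time and cannot be toggled on an existing volume (device name is a placeholder):

vdo create --name=vdo_gluster --device=/dev/sdb --emulate512=enabled
# for an existing volume the current mode can be checked with:
vdo status --name=vdo_gluster | grep -i emulate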
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ONURR6EWEOC7ERV5FYMMBTWYFAVDMWR/


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-16 Thread Strahil Nikolov
 In my case the dio is off, but I can still do direct io:
[root@ovirt1 glusterfs]# cd 
/rhev/data-center/mnt/glusterSD/gluster1\:_data__fast/
[root@ovirt1 gluster1:_data__fast]# gluster volume info data_fast | grep dio
network.remote-dio: off
[root@ovirt1 gluster1:_data__fast]# dd if=/dev/zero of=testfile bs=4096 count=1 
oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00295952 s, 1.4 MB/s


Most probably the 2 cases are different.
Best Regards,
Strahil Nikolov


On Thursday, May 16, 2019 at 22:17:23 GMT+3, Nir Soffer wrote:
 
 On Thu, May 16, 2019 at 10:12 PM Darrell Budic  wrote:

On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:


On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:

I tried adding a new storage domain on my hyper converged test cluster running 
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume 
fine, but it’s not able to add the gluster storage domain (as either a managed 
gluster volume or directly entering values). The created gluster volume mounts 
and looks fine from the CLI. Errors in VDSM log:

... 
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't support direct IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
createStorageDomain error=Storage Domain target is unsupported: () 
from=:::10.100.90.5,44732, flow_id=31d993dd, 
task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)

The direct I/O check has failed.

So something is wrong in the file system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument
If it succeeds, but oVirt fails to connect to this domain, file a bug and we will investigate.
Nir

Yep, it fails as expected. Just to check, it is working on pre-existing 
volumes, so I poked around at gluster settings for the new volume. It has 
network.remote-dio=off set on the new volume, but enabled on old volumes. After 
enabling it, I’m able to run the dd test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
I’m also able to add the storage domain in ovirt now.
I see network.remote-dio=enable is part of the gluster virt group, so apparently it's not getting set by oVirt during the volume creation/optimize for storage?

I'm not sure who is responsible for changing these settings. oVirt always required direct I/O, and we never had to change anything in gluster.
Sahina, maybe gluster changed the defaults?
Darrell, please file a bug, probably for RHHI.
Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IC4FIKTK5DSGMRCYXBTK7BLIDFSM76WN/


[ovirt-users] Re: Administration Portal "Uncaught Exception" Issue

2019-05-27 Thread Strahil Nikolov
Well, in older versions I had an issue similar to yours, which was resolved by updating to the latest version at that time.
Best Regards,
Strahil Nikolov
On Monday, May 27, 2019 at 23:31:13 GMT+3, Zachary Winter wrote:
 
  
Yes, I am planning to do so.  Is this fixed in that version?  Do you mind 
explaining what the issue was for future reference?
 
 On 5/27/2019 2:03 PM, Strahil Nikolov wrote:
  
 
 Are you considering updating to 4.3.7 ? 
  Best Regards, Strahil Nikolov
  
On Monday, May 27, 2019 at 20:51:13 GMT+3, Zachary Winter wrote:
  
 
Thank you for the log location.  With apologies, it happens "consistently" on 
some pages but not constantly everywhere.  It generally occurs on pages that 
are attempting to auto-update.  For instance, it happens on the dashboard home 
page and on the Hosts page when a host is installing and then activating and 
changing statuses on screen.
 
The error log records the following, which is consistent with the property 'a' 
warning seen prior:
 

 
 

 
2019-05-27 12:36:48,008-04 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-2) [] Permutation name: C2713AD3F5A2D6197F7340BE88B50A14
2019-05-27 12:36:48,008-04 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-2) [] Uncaught exception: com.google.gwt.core.client.JavaScriptException: (TypeError) : Cannot read property 'a' of null
    at org.ovirt.engine.core.common.businessentities.VDS.$hasSmtDiscrepancyAlert(VDS.java:1731) [common.jar:]
    at org.ovirt.engine.ui.uicommonweb.models.hosts.HostGeneralModel.$updateAlerts(HostGeneralModel.java:1046)
    at org.ovirt.engine.ui.uicommonweb.models.hosts.HostGeneralModel.onEntityChanged(HostGeneralModel.java:912)
    at org.ovirt.engine.ui.uicommonweb.models.EntityModel.$setEntity(EntityModel.java:35)
    at org.ovirt.engine.ui.uicommonweb.models.hosts.HostGeneralModel.$setEntity(HostGeneralModel.java:117)
    at org.ovirt.engine.ui.uicommonweb.models.hosts.HostGeneralModel.setEntity(HostGeneralModel.java:117)
    at org.ovirt.engine.ui.uicommonweb.models.ListWithDetailsModel.$onSelectedItemChanged(ListWithDetailsModel.java:89)
    at org.ovirt.engine.ui.uicommonweb.models.hosts.HostListModel.onSelectedItemChanged(HostListModel.java:1773)
    at org.ovirt.engine.ui.uicommonweb.models.SearchableListModel.$setItems(SearchableListModel.java:708)
    at org.ovirt.engine.ui.uicommonweb.models.hosts.HostListModel$lambda$2$Type.run(HostListModel.java:525)
    at org.ovirt.engine.ui.uicommonweb.dataprovider.AsyncDataProvider.lambda$55(AsyncDataProvider.java:3341)
    at org.ovirt.engine.ui.uicommonweb.dataprovider.AsyncDataProvider$lambda$55$Type.executed(AsyncDataProvider.java:3341)
    at org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess(Frontend.java:319) [frontend.jar:]
    at org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend.java:319) [frontend.jar:]
    at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.$onSuccess(OperationProcessor.java:170) [frontend.jar:]
    at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.onSuccess(OperationProcessor.java:170) [frontend.jar:]
    at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:270) [frontend.jar:]
    at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:270) [frontend.jar:]
    at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
    at com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:233) [gwt-servlet.jar:]
    at com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409) [gwt-servlet.jar:]
    at Unknown.eval(webadmin-0.js)
    at com.google.gwt.core.client.impl.Impl.apply(Impl.java:236) [gwt-servlet.jar:]
    at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:275) [gwt-servlet.jar:]
    at Unknown.eval(webadmin-0.js)
 
 

 
  On 5/27/2019 1:25 PM, Lucie Leistnerova wrote:
  
 
 
Hi Zachary,
 
 On 5/27/19 6:43 PM, Zachary Winter wrote:
  
 
 
I am consistently receiving error warnings in the Administration Portal that read as follows:
 
 
 
"Uncaught exception occurred. Please try reloading the page. Details: 
(TypeError) : Cannot read property 'a' of null
   
Please have your administrator check the UI logs"
 

 
 
My questions are:
 
1)  Where/What are the UI logs?
 /var/log/ovirt-engine/ui.log
 
 
2)  Is this a known issue, and how do I fix it?
 

 
 
 
This error doesn't say anything specific to see what is the problem without the 
logs. 
What does it mean constantly? On all pages?


[ovirt-users]Re: Bond Mode 1 (Active-Backup),vm unreachable for minutes when bond link change

2019-05-25 Thread Strahil Nikolov
On May 25, 2019 5:04:33 AM GMT+03:00, henaum...@sina.com wrote:
>Hello, 
>
>I've a problem, all my ovirt hosts and vms are linked with a bonding
>mode 1(Active-Backup)2x10Gbps 
>ovirt version:4.3
>topology:
>   --eno2  
>vm--ovirtmgmt--bond0---eno1
>
>ifcfg-bond0:
># Generated by VDSM version 4.30.9.1
>DEVICE=bond0
>BONDING_OPTIOS='mode=1 miion=100'
>BRIDGE=ovirtmgmt
>MACADDR=a4:be:26:16:e9:b2
>ONBOOT=yes
>MTU=1500
>DEFROUTE=no
>NM_CONTROLLER=no
>IPV6INIT=no
Shouldn't it be 'NM_CONTROLLED' ?


>ifcfg-eno1:
># Generated by VDSM version 4.30.9.1
>DEVICE=eno1
>MASTER=bond0
>SLAVE=yes
>ONBOOT=yes
>MTU=1500
>DEFROUTE=no
>NM_CONTROLLER=no
>IPV6INIT=no

Shouldn't it be 'NM_CONTROLLED' ?

>ifcfg-eno2:
># Generated by VDSM version 4.30.9.1
>DEVICE=eno2
>MASTER=bond0
>SLAVE=yes
>ONBOOT=yes
>MTU=1500
>DEFROUTE=no
>NM_CONTROLLER=no
>IPV6INIT=no

Shouldn't it be 'NM_CONTROLLED' ?

>ifcfg-ovirtmgmt:
># Generated by VDSM version 4.30.9.1
>DEVICE=ovirtmgmt
>TYPE=Brodge
>DELAY=0
>STP=off
>ONBOOT=yes
>IPADDR=x.x.x.x
>NEYMASK=255.255.255.0
>GATEWAY=x.x.x.x
>BOOTPROTO=none
>MTU=1500
>DEFROUTE=yes
>NM_CONTROLLER=no
>IPV6INIT=yes
>IPV6_AUTOCONF=yes
>
Shouldn't it be 'TYPE=BRIDGE' ?
Also, check that there are no 'ifcfg-XXX.bkp' files in the folder, as the network scripts will read them. If there are any, move them to /root.
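For clarity, a sketch of how the corrected keys would look (the standard initscripts names are NM_CONTROLLED, BONDING_OPTS, miimon, TYPE=Bridge and NETMASK; keep your own values):

# ifcfg-bond0
DEVICE=bond0
BONDING_OPTS='mode=1 miimon=100'
BRIDGE=ovirtmgmt
ONBOOT=yes
NM_CONTROLLED=no

# ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
NETMASK=255.255.255.0
NM_CONTROLLED=no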

>cat /proc/net/bonding/bond0
>Ethernet Chanel Bonding Driver:v3.7.1(April 27, 2011)
>
>Bonding Mode:fault-tolerance(active-ackup)
>Primary Slave:none
>Currently Active Slave:eno1
>MII Status:up
>MII Polling Intercal (ms):100
>Up Delay (ms) : 0
>Down Delay (ms) : 0
>
>Slave Interface :eno1
>MII Status:up
>Speed : 1 Mbps
>Link Failure Count : 0
>Permanent HW addr :a4:be:26:16:e9:b2
>Slave queue ID: 0
>
>Slave Interface :eno2
>MII Status:up
>Speed : 1 Mbps
>Link Failure Count : 0
>Permanent HW addr :a4:be:26:16:e9:b2
>Slave queue ID: 0
As you have a bridge, maybe setting a delay might help.
What is the output once you plug the second NIC out?

>ping vm from different subnet.
>
>Everything is okay if I don't change the bond link interface. When I unplug the
>Currently Active Slave eno1, the bond link changes to eno2 as expected but the vm
>becomes unreachable until the external physical switch MAC Table ageing time
>expires. It seems that the vm doesn't send gratuitous ARP when the bond link
>changes. How can I fix it?
>
>vm os is Centos 7.5
>ovirt version 4.2 also tested.

CentOS 7.5  is quite old and could have some bugs - consider updating !!!

Best Regards,
Strahil Nikolov
Check inline.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5PB2FHGQZNC5CZAI23AXGN4BL66TF6SY/


[ovirt-users] oVirt 4.3.4 RC1 to RC2 - Dashboard error / VM/Host/Gluster Volumes OK

2019-05-26 Thread Strahil Nikolov
Hello All,
Just upgraded my engine from 4.3.4 RC1 to RC2 and my Dashboard is giving an error (see attached screenshot) despite everything seeming to end well:
Error!
Could not fetch dashboard data. Please ensure that data warehouse is properly installed and configured.
I have checked and the VMs, Hosts and Gluster Volumes are properly detected (yet all my VMs have been powered off since before the RC2 upgrade).

Any clues that might help you solve that before I roll back (I have a gluster snapshot on 4.3.3-7)?
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ISW3HVK6FILOLO3UL3WGR2HUPCGHDPQQ/


[ovirt-users] Re: Single instance scaleup.

2019-05-26 Thread Strahil Nikolov
Yeah, it seems different from the docs. I'm adding the gluster users list, as they are more experienced in that.
@Gluster-users,
can you provide some hint on how to add additional replicas to the volumes below, so they become 'replica 2 arbiter 1' or 'replica 3' type volumes?
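Something along these lines is what I would guess (a sketch only - brick paths on the new nodes are assumptions based on the existing layout, and for 'replica 3 arbiter 1' the arbiter brick is the last one listed) - but please confirm:

gluster volume add-brick engine replica 3 \
    192.168.80.192:/gluster_bricks/engine/engine \
    192.168.80.193:/gluster_bricks/engine/engine
gluster volume add-brick ssd-samsung replica 3 arbiter 1 \
    192.168.80.192:/gluster_bricks/sdc/data \
    192.168.80.193:/gluster_bricks/sdc/data
# then let self-heal populate the new bricks
gluster volume heal engine full
gluster volume heal ssd-samsung full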

Best Regards,
Strahil Nikolov

On Sunday, May 26, 2019 at 15:16:18 GMT+3, Leo David wrote:
 
Thank you Strahil. The engine and ssd-samsung are distributed... So these are the ones that I need to have replicated across the new nodes. I am not very sure about the procedure to accomplish this.
Thanks,
Leo
On Sun, May 26, 2019, 13:04 Strahil  wrote:


Hi Leo,
As you do not have a distributed volume , you can easily switch to replica 2 
arbiter 1 or replica 3 volumes.

You can use the following for adding the bricks:

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html

Best Regards,
Strahil Nikoliv
On May 26, 2019 10:54, Leo David  wrote:

Hi Strahil,
Thank you so much for your input!
gluster volume info

Volume Name: engine
Type: Distribute
Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.80.191:/gluster_bricks/engine/engine
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: off
network.remote-dio: off
network.ping-timeout: 30
user.cifs: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable

Volume Name: ssd-samsung
Type: Distribute
Volume ID: 76576cc6-220b-4651-952d-99846178a19e
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.80.191:/gluster_bricks/sdc/data
Options Reconfigured:
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on
The other two hosts will be 192.168.80.192/193 - this is a gluster dedicated network over a 10GB sfp+ switch.
- host 2 will have an identical hardware configuration with host 1 (each disk is actually a raid0 array)
- host 3 has:
  - 1 ssd for the OS
  - 1 ssd for adding to the engine volume in a full replica 3
  - 2 ssd's in a raid 1 array to be added as arbiter for the data volume (ssd-samsung)
So the plan is to have "engine" scaled in a full replica 3, and "ssd-samsung" scaled in a replica 3 arbitrated.



On Sun, May 26, 2019 at 10:34 AM Strahil  wrote:


Hi Leo,

Gluster is quite smart, but in order to provide any hints , can you provide 
output of 'gluster volume info '.
If you have 2 more systems , keep in mind that it is best to mirror the storage 
on the second replica (2 disks on 1 machine -> 2 disks on the new machine), 
while for the arbiter this is not neccessary.

What is your network and NICs? Based on my experience, I can recommend at least 10 gbit/s interface(s).

Best Regards,
Strahil Nikolov
On May 26, 2019 07:52, Leo David  wrote:

Hello Everyone,
Can someone help me to clarify this? I have a single-node 4.2.8 installation (only two gluster storage domains - distributed single drive volumes). Now I just got two identical servers and I would like to go for a 3 node bundle. Is it possible (after joining the new nodes to the cluster) to expand the existing volumes across the new nodes and change them to replica 3 arbitrated? If so, could you share with me what the procedure would be? Thank you very much!
Leo



-- 
Best regards, Leo David

  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KLQAIK2SYERFL4IBPC7RQ6UT6ZRVU7GW/


[ovirt-users] Re: Unable to find OVF_STORE after recovery / upgrade

2019-05-25 Thread Strahil Nikolov
I found your email in SPAM... no idea how that happened. Well, I don't know how this happened for me, but the ovirt-ha-agent was not able to open the OVF for the HostedEngine - it was 0 bytes.
So in my case I manually defined the HostedEngine via an xml found in the vdsm logs and I also defined the ovirtmgmt network (found on the web). Once you power up the HostedEngine, the OVF gets updated a little bit slower than the timer set inside it.
In my case I needed at least 2-3 hours before the OVF got updated and I could power down and power up via the 'hosted-engine' tool.
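For reference, the regeneration interval can be checked and lowered on the engine (a sketch; changing it needs an ovirt-engine restart to take effect):

engine-config -g OvfUpdateIntervalInMinutes
engine-config -s OvfUpdateIntervalInMinutes=5
systemctl restart ovirt-engine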
Best Regards,
Strahil Nikolov

On Tuesday, May 14, 2019 at 23:48:17 GMT+3, Sam Cappello wrote:
 
   Hi,
so I was running a 3.4 hosted engine two node setup on CentOS 6, had some disk issues so I tried to upgrade to CentOS 7 and follow the path 3.4 > 3.5 > 3.6 > 4.0. I screwed up big time somewhere between 3.6 and 4.0, so I wiped the drives, installed a fresh 4.0.3, then created the database and restored the 3.6 engine backup before running engine-setup as per the docs. Things seemed to work, but I have the following issues / symptoms:
 - ovirt-ha-agent running 100% CPU on both nodes
 - messages in the UI that the Hosted Engine storage Domain isn't active and 
Failed to import the Hosted Engine Storage Domain
 - hosted engine is not visible in the UI
 and the following repeating in the agent.log:
 
 
MainThread::INFO::2016-10-0312:38:27,718::hosted_engine::461::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Current state EngineUp (score: 3400)
 
MainThread::INFO::2016-10-0312:38:27,720::hosted_engine::466::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
 Best remote host vmhost1.oracool.net (id: 1, score: 3400)
 
MainThread::INFO::2016-10-0312:38:37,979::states::421::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
 Engine vm running on localhost
 
MainThread::INFO::2016-10-0312:38:37,985::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
 Initializing VDSM
 
MainThread::INFO::2016-10-0312:38:45,645::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
 Connecting the storage
 
MainThread::INFO::2016-10-0312:38:45,647::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
 Connecting storage server
 
MainThread::INFO::2016-10-0312:39:00,543::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
 Connecting storage server
 
MainThread::INFO::2016-10-0312:39:00,562::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
 Refreshing the storage domain
 
MainThread::INFO::2016-10-0312:39:01,235::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
 Preparing images
 
MainThread::INFO::2016-10-0312:39:01,236::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
 Preparing images
 
MainThread::INFO::2016-10-0312:39:09,295::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
 Reloading vm.conf from the shared storage domain
 
MainThread::INFO::2016-10-0312:39:09,296::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
 Trying to get a fresher copy of vm configuration from the OVF_STORE
 
MainThread::WARNING::2016-10-0312:39:16,928::ovf_store::107::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
 Unable to find OVF_STORE
 
MainThread::ERROR::2016-10-0312:39:16,934::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
 Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf
 
I have searched a bit and not really found a solution, and have come to the conclusion that I have made a mess of things, and am wondering if the best solution is to export the VMs, and reinstall everything then import them back?
I am using remote NFS storage.
If I try and add the hosted engine storage domain it says it is already registered.
I have also upgraded and am now running oVirt Engine Version: 4.0.4.4-1.el7.centos
Hosts were installed using ovirt-node, currently at 3.10.0-327.28.3.el7.x86_64.
If a fresh install is best, any advice / pointer to a doc that explains the best way to do this?
I have not moved my most important server over to this cluster yet so I can take some downtime to reinstall.
Thanks!
 sam
 
 
 

[ovirt-users] Re: Feature Request: oVirt to warn when VDO is getting full

2019-06-04 Thread Strahil Nikolov
 Hi Sahina,
thanks for your response. Currently I'm below 70% usage, so I guess it's working properly. Actually the VDO is the brick device for gluster. I didn't know we have such a feature - this will make everyone's life way better.
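For reference, physical VDO usage can also be watched from the shell (vdostats ships with the vdo package):

vdostats --human-readable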
Best Regards,
Strahil Nikolov


On Tuesday, June 4, 2019 at 6:19:22 GMT-4, Sahina Bose wrote:
 
 

On Tue, Jun 4, 2019 at 3:26 PM Strahil  wrote:


Hello All,

I would like to ask how many of you use VDO  before asking the oVirt Devs to 
assess a feature in oVirt for monitoring the size of the VDOs on  
hyperconverged systems.

I think such a warning will save a lot of headaches, but it will not be useful if most of the community is not using VDO at all.


We do have a feature that monitors the space usage of VDO volumes. If this is not working as expected, can you raise a bug? Is the storage domain linked to the gluster volume using the VDO devices?


Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R2VAHSMRPJQA5P6O5IAX5UZFRXRCIJWO/

  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LHC6QRUKFQMW2VJHP4QTVNFNQSMGSG3G/


[ovirt-users] fence_rhevm not working with ovirt 4.3.4.2-1.el7 (RC2)

2019-06-04 Thread Strahil Nikolov
Hello Community,
I'm sending this e-mail just to notify you that I have raised a bug for fence_rhevm (RHEL 8), which has problems parsing the response from the oVirt API.
The bug is: 1717179 – fence_rhevm cannot obtain plug status on oVirt 4.3.4.2-1.el7 (RC2)

I guess the package won't work with RHV either (unless the API changes are also in a recent RHV version).
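For reference, the failure can be reproduced from the command line roughly like this (hostname and credentials are placeholders; --ssl-insecure is only for testing):

fence_rhevm --ip=engine.example.com --username=admin@internal \
    --password=secret --ssl --ssl-insecure \
    --plug=myvm --action=status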
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XO3JGXS44ENTKOGBWQDCTVWQFHWNTC4E/


[ovirt-users] Re: possible to clear vm started under different name warning in gui?

2019-05-31 Thread Strahil Nikolov
Have you tried to power off and then power on the VM?
Best Regards,
Strahil Nikolov

On Friday, May 31, 2019 at 8:59:54 GMT-4, Jayme wrote:
 
 When a VM is renamed a warning in engine gui appears with an exclamation point 
stating "vm was started with a different name".  Is there a way to clear this 
warning?  The VM has been restarted a few times since but it doesn't go away. 
Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FLA643IOFHYXZZEWRJ6R46GQ3IVAQ2IB/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GOJSZGMRE5IMYKX72FS7F2LRIAX26THO/


[ovirt-users] Re: change he vm memoy size

2019-05-30 Thread Strahil Nikolov
 Hi Alexey,
better open a bug for that. If the Description is updated but after a reboot the engine is still using the old values, it seems that it is a bug.
Best Regards,
Strahil Nikolov

On Thursday, May 30, 2019 at 9:26:51 GMT-4, Valkov, Alexey wrote:
 
Indeed, after editing the HE VM settings via the manager UI, the ovf update triggered immediately (checked in /var/log/ovirt-engine/engine.log). I dumped the HE ovf_store and untarred the .ovf from it. And I checked that all the changes I made for Description, MaxMemorySizeMb and minGuaranteedMemoryMb were applied (written to the ovf) and remain after reboot. It works as expected. But not for memory or Memory Size - these settings remained initial and were not written to the ovf. Well, memory hotplug works - via adding new memory devices, but after reboot these memory devices are detached and Memory Size is not increased.
--
Best regards
Alexey

Actually, you need to untar the OVF from the shared storage and check the 
configuration from the tar.
Just keep it like that (running) and tomorrow power down and then power up the 
HostedEngine.
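For reference, a rough way to inspect those OVF_STORE archives from a host; the 
paths below are placeholders for whatever sits under the hosted_storage mount, 
not real values, and locating the volumes this way assumes the volume metadata 
carries the OVF_STORE description:

  # the OVF_STORE volumes are plain tar archives under the storage domain mount
  find /rhev/data-center/mnt -name "*.meta" -exec grep -l OVF_STORE {} +
  tar -tvf <path-to-ovf-store-volume>
  tar -xvf <path-to-ovf-store-volume> <vm_uuid>.ovf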

Best Regards,
Strahil Nikolov
On May 30, 2019 12:06, "Valkov, Alexey"  wrote:

Hello, Strahil. I've just tried with:
engine-config -s OvfUpdateIntervalInMinutes=1
systemctl restart ovirt-engine.service
After that, I changed Memory Size in the manager UI and waited about 30 
minutes. Then I checked memSize in /var/run/ovirt-hosted-engine-ha/vm.conf 
(which, if I understand correctly, is synchronized with the OVF every minute) 
and saw that memSize had not been changed. The Memory Size property (in the 
manager UI) also remains at the initial value. Thus I think the OVF does not 
change. I returned OvfUpdateIntervalInMinutes=60 and will wait until tomorrow; 
maybe the setting will be magically applied.
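A quick way to check what actually landed on the host side (the path is the one 
mentioned above; just an illustration):

  grep -i memSize /var/run/ovirt-hosted-engine-ha/vm.conf
  # compare the value with the Memory Size shown in the UI and with the OVF
  # extracted from the OVF_STORE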
--
Best regards,
Alexey


Hi Alexey,
How much time did you wait before rebooting?
I have noticed that despite the default OVF update interval of 1 hour, it 
takes 5-6 hours for the engine to update the OVF.

Best Regards,
Strahil Nikolov
On May 30, 2019 10:30, "Valkov, Alexey"  wrote:

I am trying to increase the memory of the HE VM (oVirt 4.2.8). If I do it from 
the manager UI, I see that hot plug works - new memory devices appear and the 
corresponding memory increase shows up inside the engine guest. But the 'Memory 
Size' property of the hosted engine (in the manager UI) doesn't reflect the new 
amount of memory. Also, after a reboot of the engine VM, the memory size 
changes back to the initial value. Is it possible to change the memory size of 
the HE VM (as far as I know the settings are stored in an OVF on the HE 
domain), and how can I make this change persistent?
--
Best regards,
Alexey

___Users mailing list -- 
users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IKID3B2TH3VR273KZNQB4QC66WYC4PCQ/



  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PKCLZCNLA2U7VXEXQFJCOTVMXBM53FA5/


[ovirt-users] Re: [ANN] oVirt 4.3.4 Third Release Candidate is now available

2019-05-30 Thread Strahil Nikolov
 Hi Sandro,
thanks for the update.
I have installed RC3 on the engine and I can confirm that the dashboard is now 
fixed, but BZ#1704782 (https://bugzilla.redhat.com/show_bug.cgi?id=1704782) is 
only partially fixed: the default policy for Gluster-based storage is still 
"Preallocated", but this time "Thin Provisioned" is working as expected:
[root@ovirt1 948f106c-7bd6-49f1-b88f-30ac8c408d72]# qemu-img info 
fc230fd5-9b07-46be-88c2-937a3eeb01aa
image: fc230fd5-9b07-46be-88c2-937a3eeb01aa
file format: raw
virtual size: 1.0G (1073741824 bytes)
disk size: 0

Best Regards,Strahil Nikolov

On Thursday, May 30, 2019 at 3:00:24 AM GMT-4, Sandro Bonazzola wrote:
 
 The oVirt Project is pleased to announce the availability of the oVirt 4.3.4 
Third Release Candidate, as of May 30th, 2019.

This update is a release candidate of the fourth in a series of stabilization 
updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in 
production.

This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)

Experimental tech preview for x86_64 and s390x architectures for Fedora 28 is 
also included.

See the release notes [1] for installation / upgrade instructions and a list of 
new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]

Additional Resources:
* Read more about the oVirt 4.3.4 release 
highlights:http://www.ovirt.org/release/4.3.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.3.4/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/

-- 

Sandro Bonazzola



MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA

sbona...@redhat.com   


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/64HFRFMOGXDPTWSEF7V56A6BIB75YCPC/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7BMYIBFSPWVXMI2LSPWBE36UAROQL7ED/


[ovirt-users] Re: [ANN] oVirt 4.3.4 Fourth Release Candidate is now available

2019-06-06 Thread Strahil Nikolov
 Hi Sandro,
thanks for the update. I have noticed in RC3 and now in RC4 that the data 
gluster bricks do not provide "Advanced Details", while the arbiter does.
I'm mentioning that as oVirt is currently being rebased for gluster v6 (my 
setup is using gluster v6.1 from the CentOS 7 repos), so you can keep that in 
mind. For details, check 1693998 – [Tracker] Rebase on Gluster 6

I can't find any other issues in RC4. Maybe someone with gluster v5 can check 
their "Advanced Details" and confirm they are OK.
Best Regards,Strahil Nikolov
On Thursday, June 6, 2019 at 11:02:00 AM GMT+3, Sandro Bonazzola wrote:
 
 The oVirt Project is pleased to announce the availability of the oVirt 4.3.4 
Fourth Release Candidate, as of June 6th, 2019.

This update is a release candidate of the fourth in a series of stabilization 
updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in 
production.

This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)

Experimental tech preview for x86_64 and s390x architectures for Fedora 28 is 
also included.

See the release notes [1] for installation / upgrade instructions and a list of 
new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]
- oVirt Windows Guest Tools iso is already available [2]

Additional Resources:
* Read more about the oVirt 4.3.4 release 
highlights:http://www.ovirt.org/release/4.3.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.3.4/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/

-- 

Sandro Bonazzola



MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA

sbona...@redhat.com   


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZDVUI3KHHJCFEOYLMHVDIHPWE37TAKTK/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DKNVUJYQ6GH3T6NES5OT3EETGHXZ7EO6/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-06 Thread Strahil Nikolov
 Have you tried with "Force remove" tick ?
Best Regards,Strahil Nikolov
On Thursday, June 6, 2019 at 9:47:20 PM GMT+3, Adrian Quintero wrote:
 
 I tried removing the bad host but running into the following issue , any idea?

Operation Canceled
Error while executing action: 

host1.mydomain.com   
   - Cannot remove Host. Server having Gluster volume.



On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero  
wrote:

Leo, I forgot to mention that I have 1 SSD disk for caching purposes, wondering 
how that setup should be achieved?
thanks,
Adrian

On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero  
wrote:

Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
Will test tomorrow and post the results.
Thanks again
Adrian
On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:

Hi Adrian,
I think the steps are:
- reinstall the host
- join it to the virtualisation cluster
And if it was a member of the gluster cluster as well:
- go to Host -> Storage Devices
- create the bricks on the devices, as they are on the other hosts
- go to Storage -> Volumes
- replace each failed brick with the corresponding new one (see the CLI sketch 
  below)
Hope it helps.
Cheers,
Leo
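For the "replace each failed brick" step, the gluster CLI equivalent is roughly 
the following; the volume name and brick paths are placeholders modelled on the 
naming used elsewhere in this thread:

  gluster volume replace-brick <VOLNAME> \
      <dead-host>:/gluster_bricks/<vol>/<vol> \
      <new-host>:/gluster_bricks/<vol>/<vol> commit force
  gluster volume heal <VOLNAME> full
  gluster volume heal <VOLNAME> info     # watch the heal progress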

On Wed, Jun 5, 2019, 23:09  wrote:

Has anybody had to replace a failed host in a 3, 6, or 9 node hyperconverged 
setup with gluster storage?

One of my hosts is completely dead, I need to do a fresh install using ovirt 
node iso, can anybody point me to the proper steps?

thanks,
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/


-- 
Adrian Quintero



-- 
Adrian Quintero



-- 
Adrian Quintero
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PB2YWWPO2TRJ6EYXAETPUV2DSVQLXDRR/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6EDIM2TLIFPEKANZ2QIUTXGSIWKYC2ET/


[ovirt-users] Re: oVirt 4.3.4 RC1 to RC2 - Dashboard error / VM/Host/Gluster Volumes OK

2019-05-27 Thread Strahil Nikolov
 Hi Sandro,
thanks for your feedback. I'm providing the logs from the engine (including the 
setup logs). Also, I'm providing the yum history from the engine (maybe 
something didn't get installed).
All files can be located at: ovirt-4.3.4-RC1-to-RC2 - Google Drive

I hope this helps with finding the reason for the DWH failure.
Can you tell me what will happen if I purge the DWH data via the setup utility? 
What kind of data will be lost, as my VMs, storage and network settings seem to 
be OK?
Best Regards,
Strahil Nikolov

On Monday, May 27, 2019 at 2:44:27 AM GMT-4, Sandro Bonazzola wrote:
 
 

On Sun, May 26, 2019 at 12:46 Strahil Nikolov wrote:

Hello All,
I have just upgraded my engine from 4.3.4 RC1 to RC2 and my Dashboard is giving 
an error (see attached screenshot) despite everything seeming to end well:
Error!
Could not fetch dashboard data. Please ensure that data warehouse is properly 
installed and configured.
I have checked and the VMs, Hosts and Gluster Volumes are properly detected 
(yet all my VMs have been powered off since before the RC2 upgrade).

Any clues that might help you solve that before I roll back (I have a gluster 
snapshot on 4.3.3-7) ?
Best Regards,Strahil Nikolov



Looks like the DWH service is not feeding data to the dashboard, can you please 
share your engine and dwh logs? Adding Shirly and Sharon.

-- 

Sandro Bonazzola



MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA

sbona...@redhat.com   


  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DVMIF2ZQBGNRRK3KEJRWSLYE67DW7QIS/


[ovirt-users] Re: iso files

2019-06-24 Thread Strahil Nikolov
 ISO domains are deprecated. You can upload an ISO to a data domain via the UI 
(and probably also via the API).
Best Regards,Strahil Nikolov
On Monday, June 24, 2019 at 4:33:57 PM GMT+3,  wrote:
 
 Hi,

Is it possible to install a VM without an ISO domain, for version 4.3.4.3?

Thanks


-- 
Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P2XUVZKEHSZBDWGSNGCZYDME4HJS34WA/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J7KCKW2F4YNESVGXZPWK5Q5M56JV2KLA/


[ovirt-users] Re: RFE: HostedEngine to use boom by default

2019-06-12 Thread Strahil Nikolov
Hi Simone,
yes - it will work inside the VM, but I can assure you that this will require 
little or no effort from the dev side and will allow easier fixing of the VM in 
most cases.
Of course a system snapshot is also nice, but that requires a lot of 
development. For hyperconverged systems we have another nice feature - gluster 
snapshots - which I also use.
If the approach in https://bugzilla.redhat.com/1670788 (1670788 – [RFE] Enable 
Storage Live Migration for Hosted Engine from wit...) is easy to implement, 
then BOOM won't be needed.

Best Regards,Strahil Nikolov


On Tue, Jun 11, 2019 at 11:44 PM Strahil Nikolov  wrote:

Hello All,
I have seen a lot of cases where the HostedEngine gets corrupted/broken and 
beyond repair.
I think that BOOM is a good option for our HostedEngine appliances due to the 
fact that it supports booting from LVM snapshots and thus being able to easily 
recover after upgrades or other outstanding situations.
Sadly, BOOM has 1 drawback - that everything should be under a single snapshot 
- thus no separation of /var /log or /audit.
Do you think that changing the appliance layout is worth it ?

That idea is going to work at the LVM level inside the VM, but in the end the 
hosted-engine VM is a VM, so potentially taking a snapshot at VM level is a 
better option. Currently this is not working because the hosted-engine VM disk 
is protected against split brains by a volume lease (VM leases weren't 
available when we started hosted-engine), and this inhibits snapshots and so 
live storage migration and so on. We already have an open RFE to implement it 
for 4.4: https://bugzilla.redhat.com/1670788

Note: I might have an unsupported layout that could cause my confusion. Is your 
layout a single root LV?
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OTOIAI4BXMVRFN5MCDGXNZHYB46XWLF/



-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat

stira...@redhat.com   

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VRRL7PRMLW6MQJNOP5OVICWFKQ6Q3QJD/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ASS47B4QCQ6CDKMC5HRZ2OVL4VSITJC6/


[ovirt-users] Re: Ovirt hiperconverged setup error

2019-06-12 Thread Strahil Nikolov
 The command run is 'dig', which tries to resolve the hostname of each server. 
Do you have a DNS resolver properly configured?
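A quick check on each node (hostnames as in the failing task below):

  for h in ov-node-1 ov-node-2 ov-node-3; do dig "$h" +short; done
  # every line must print an IP address; the empty stdout in the task output is
  # exactly what makes the check fail. Note the check uses dig, so the names
  # need a resolvable DNS record - /etc/hosts alone will not satisfy it.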
Best Regards,Strahil Nikolov

On Wednesday, June 12, 2019 at 3:59:14 AM GMT-4, PS Kazi wrote:
 
 ovirt Node version 4.3.3.1
I am trying to configure 3 node Gluster storage and oVirt hosted engine but 
gettng following error:

TASK [gluster.features/roles/gluster_hci : Check if valid FQDN is provided] 
failed: [ov-node-2 -> localhost] (item=ov-node-2) => {"changed": true, "cmd": 
["dig", "ov-node-2", "+short"], "delta": "0:00:00.041003", "end": "2019-06-12 
12:52:34.158688", "failed_when_result": true, "item": "ov-node-2", "rc": 0, 
"start": "2019-06-12 12:52:34.117685", "stderr": "", "stderr_lines": [], 
"stdout": "", "stdout_lines": []}
failed: [ov-node-2 -> localhost] (item=ov-node-3) => {"changed": true, "cmd": 
["dig", "ov-node-3", "+short"], "delta": "0:00:00.038688", "end": "2019-06-12 
12:52:34.459176", "failed_when_result": true, "item": "ov-node-3", "rc": 0, 
"start": "2019-06-12 12:52:34.420488", "stderr": "", "stderr_lines": [], 
"stdout": "", "stdout_lines": []}
failed: [ov-node-2 -> localhost] (item=ov-node-1) => {"changed": true, "cmd": 
["dig", "ov-node-1", "+short"], "delta": "0:00:00.047938", "end": "2019-06-12 
12:52:34.768149", "failed_when_result": true, "item": "ov-node-1", "rc": 0, 
"start": "2019-06-12 12:52:34.720211", "stderr": "", "stderr_lines": [], 
"stdout": "", "stdout_lines": []}


Please help 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BXMOTKHGI5TNP5OYWVGINBVUYNVFOGDO/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ORRDWBCUAX66V4OUOYCA634HDE226YNS/


[ovirt-users] Re: Can't import some VMs after storage domain detach and reattach to new datacenter.

2019-06-23 Thread Strahil Nikolov
 I have seen a similar situation, where a VM had one disk on one storage domain 
and a second disk on another.
Are you sure that all disks of the problematic VMs were moved to the iSCSI 
storage domain?
Best Regards,Strahil Nikolov
On Sunday, June 23, 2019 at 11:28:56 AM GMT+3, m black wrote:
 
 Hi. I have a problem with importing some VMs after importing a storage domain 
in a new datacenter.

I have 5 servers with oVirt version 4.1.7, a hosted-engine setup and a 
datacenter with iscsi, fc and nfs storages. I also have 3 servers with oVirt 
4.3.4, hosted-engine and nfs storage. I set the iscsi and fc storages to 
maintenance and detached them successfully in the 4.1.7 datacenter. Then I 
imported these storage domains via Import Domain in the 4.3.4 datacenter 
successfully.

After the storage domains were imported to the new 4.3.4 datacenter I tried to 
import VMs from the VM Import tab on the storages. On the FC storage it went 
well, all VMs imported and started, all VMs in place. With the iSCSI storage I 
got problems: some VMs imported and started, but some of them are missing; some 
of the missing VMs' disks are showing in Disk Import. I tried to import disks 
from the Disk Import tab and got the error 'Failed to register disk'. I tried 
to scan disks with 'Scan Disks' in the storage domain, and also tried 'Update 
OVF' - no result.

What caused this? What can I do to recover the missing VMs? What logs should I 
examine? Can it be storage domain disk corruption? Please help. Thank you.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MF5IUXURKIQZNNG4YW6ELENFD4GZIDQZ/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6ZCNXVAUNM6SZB5FCFZALCXEZ5OBBTLW/


[ovirt-users] Re: Issues when Creating a Gluster Brick with Cache

2019-06-24 Thread Strahil Nikolov
 Did you blacklist all local disks in /etc/multipath.conf? In other words, when 
you run 'lsblk', do you see the disk having a child device (usually the wwid)?
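As an illustration only (the WWID and device pattern below are placeholders, 
not values from this setup), local disks can be excluded in /etc/multipath.conf 
roughly like this:

  # add "# VDSM PRIVATE" as the second line of the file so vdsm does not
  # overwrite the local changes
  blacklist {
      wwid "<wwid-of-the-local-ssd>"
      devnode "^nvme"
  }
  # then reload the configuration: systemctl reload multipathd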

Best Regards,Strahil Nikolov

On Monday, June 24, 2019 at 2:08:37 AM GMT-4, Robert Crawford wrote:
 
 Hey Everyone,

When I am in the server manager creating a brick from the storage device, the 
brick creation fails whenever I attach a cache device to it.

I'm not really sure why - it just says unknown.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2XEP3P7SMUN2CWWNNVWC2ZDQVGFLMHGS/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DETEWMK5IVSFUEU2RYKRJXNRJQKBVSHW/


[ovirt-users] RFE: HostedEngine to use boom by default

2019-06-11 Thread Strahil Nikolov
Hello All,
I have seen a lot of cases where the HostedEngine gets corrupted/broken and 
beyond repair.
I think that BOOM is a good option for our HostedEngine appliances due to the 
fact that it supports booting from LVM snapshots and thus being able to easily 
recover after upgrades or other outstanding situations.
Sadly, BOOM has 1 drawback - that everything should be under a single snapshot 
- thus no separation of /var /log or /audit.
Do you think that changing the appliance layout is worth it ?
Note: I might have an unsupported layout that could cause my confusion. Is your 
layout a single root LV?
Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OTOIAI4BXMVRFN5MCDGXNZHYB46XWLF/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-11 Thread Strahil Nikolov
 Do you have empty space to store the VMs? If yes, you can always script the 
migration of the disks via the API. Even a bash script and curl can do the 
trick.
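A very rough sketch of that "bash and curl" idea against the v4 REST API; the 
engine FQDN, credentials, disk ID and target storage domain name are all 
placeholders, and the exact action body should be verified against the API 
documentation for your oVirt version:

  curl -s -k -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
       -X POST \
       -d '<action><storage_domain><name>data2</name></storage_domain></action>' \
       'https://engine.example.com/ovirt-engine/api/disks/<disk-id>/move'
  # loop over the disk IDs of the VMs you want to evacuate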
About the /dev/sdb, I still don't get it. A pure "df -hT" from a node will 
make it way clearer. I guess '/dev/sdb' is a PV and you have got 2 LVs on top 
of it.
Note: I should admit that as an admin - I don't use UI for gluster management.
For now do not try to remove the brick. The approach is either to migrate the 
qemu disks to another storage or to reset-brick/replace-brick in order to 
restore the replica count. I will check the file and I will try to figure it 
out.
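For the reset-brick path (reusing the same host name and brick path once the 
host has been rebuilt; names are placeholders):

  gluster volume reset-brick <VOLNAME> <host>:/gluster_bricks/<vol>/<vol> start
  # ... reinstall the host / recreate the brick filesystem ...
  gluster volume reset-brick <VOLNAME> <host>:/gluster_bricks/<vol>/<vol> \
      <host>:/gluster_bricks/<vol>/<vol> commit force
  gluster volume heal <VOLNAME> full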
Redeployment never fixes the issue, it just speeds up the recovery. If you can 
afford the time to spend on fixing the issue - then do not redeploy.
I would be able to take a look next week, but keep in mind that I'm not so deep 
into oVirt - I only started playing with it when I deployed my lab.
Best Regards,Strahil Nikolov 
 Strahil,
  
Looking at your suggestions I think I need to provide a bit more info on my 
current setup.



   
- I have 9 hosts in total
- I have 5 storage domains:
  - hosted_storage (Data Master)
  - vmstore1 (Data)
  - data1 (Data)
  - data2 (Data)
  - ISO (NFS) // had to create this one because oVirt 4.3.3.1 would not let me
    upload disk images to a data domain without an ISO (I think this is due to
    a bug)
- Each volume is of the type "Distributed Replicate" and each one is composed
  of 9 bricks. I started with 3 bricks per volume due to the initial
  hyperconverged setup, then I expanded the cluster and the gluster cluster by
  3 hosts at a time until I got to a total of 9 hosts.
- Disks, bricks and sizes used per volume:
  /dev/sdb  engine    100GB
  /dev/sdb  vmstore1  2600GB
  /dev/sdc  data1     2600GB
  /dev/sdd  data2     2600GB
  /dev/sde  400GB SSD used for caching purposes
From the above layout a few questions came up:

- Using the web UI, how can I create a 100GB brick and a 2600GB brick to
  replace the bad bricks for "engine" and "vmstore1" within the same block
  device (sdb)?
  What about /dev/sde (the caching disk)? When I tried creating a new brick
  through the UI I saw that I could use /dev/sde for caching, but only for 1
  brick (i.e. vmstore1), so if I try to create another brick how would I
  specify that it is the same /dev/sde device to be used for caching?

- If I want to remove a brick, it being a replica 3, I go to Storage > Volumes
  > select the volume > Bricks; once in there I can select the 3 servers that
  compose the replicated bricks and click remove. This gives a pop-up window
  with the following info:

  Are you sure you want to remove the following Brick(s)?
  - vmm11:/gluster_bricks/vmstore1/vmstore1
  - vmm12.virt.iad3p:/gluster_bricks/vmstore1/vmstore1
  - 192.168.0.100:/gluster-bricks/vmstore1/vmstore1
  - Migrate Data from the bricks?

  If I proceed with this, that means I will have to do this for all the 4
  volumes, which is just not very efficient; but if that is the only way, then
  I am hesitant to put this into a real production environment, as there is no
  way I can take that kind of a hit for +500 vms :) and also I won't have that
  much storage or extra volumes to play with in a real scenario.

- After modifying /etc/vdsm/vdsm.id yesterday by following
  (https://stijn.tintel.eu/blog/2013/03/02/ovirt-problem-duplicate-uuids) I was
  able to add the server back to the cluster using a new fqdn and a new IP, and
  tested replacing one of the bricks. This is my mistake, as mentioned in #3
  above: I used /dev/sdb entirely for 1 brick because through the UI I could
  not split the block device to be used for 2 bricks (one for the engine and
  one for vmstore1). So in the "gluster vol info" you might see
  vmm102.mydomain.com but in reality it is myhost1.mydomain.com

- I am also attaching gluster_peer_status.txt, and in the last 2 entries of
  that file you will see an entry vmm10.mydomain.com (old/bad entry) and
  vmm102.mydomain.com (new entry, same server vmm10, but renamed to vmm102).
  Also please find the gluster_vol_info.txt file.

- I am ready to redeploy this environment if needed, but I am also ready to
  test any other suggestion. If I can get a good understanding of how to
  recover from this, I will be ready to move to production.

- Wondering if you'd be willing to have a look at my setup through a shared
  screen?


Thanks




Adrian

On Mon, Jun 10, 2019 at 11:41 PM Strahil  wrote:


Hi Adrian,

You have several options:
A) If you have space on another gluster volume (or volumes) or on NFS-based 
storage, you can migrate all VMs live . Once you do it,  the simple way will be 
to stop and remove the storage domain (from 

[ovirt-users] Re: VM Disk Performance metrics?

2019-06-11 Thread Strahil Nikolov
 +1 vote from me.

Best Regards,Strahil Nikolov
On Tuesday, June 11, 2019 at 6:54:54 PM GMT+3, Wesley Stewart wrote:
 
 Is there any way to get ovirt disk performance metrics into the web interface? 
 It would be nice to see some type of IOPs data, so we can see which VMs are 
hitting our data stores the most.
It seems you can run virt-top on a host to get some of these metrics, but it 
would be nice to get some sort of data in the gui.
Thanks!___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LMOOJ6JVZYAM74PWYPBCQ4FCNYTCY5KQ/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PLTFNTEJF26IFTT65XZNRR4MFVDOM4NR/


[ovirt-users] HostedEngine migration from ovirt1:/engine to gluster1:/engine (new ip)

2019-05-13 Thread Strahil Nikolov
Hello Community,
I have added new interfaces and bonded them in order to split storage from 
oVirt traffic and I have one last issue.
In Storage -> Storage Domains , I have the "hosted_storage" domain that is 
pointing to the old "ovirt1.localdomain:/engine" instead of "gluster1:/engine".
I have managed to reconfigure the ha agent to bring up the new storage, but it 
seems the engine mounts the old gluster path and this causes problems with the 
ha agent.
How can I edit the "hosted_storage" domain in a safe manner so that it points 
to "gluster1:/engine" with mount options of 
"backup-volfile-servers=gluster2:ovirt3"?
Should I edit the DB?

P.S.: My google skills did not show any results on this topic and thus I'm 
raising it on the mailing list. Thanks in advance.

Best Regards,Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PEY7C63TWTQKDXEW7CSY7GP57ZFBDBSA/


[ovirt-users] Re: HostedEngine migration from ovirt1:/engine to gluster1:/engine (new ip)

2019-05-13 Thread Strahil Nikolov
On May 13, 2019 1:42:53 PM GMT+03:00, Andreas Elvers 
 wrote:
>Shouldn't this be done by restoring the engine? Initial engine host and
>storage parameters are collected while doing the restore. It might be a
>bit far stretched, but at least be an automated repeatable experience.
>
>Are there really procedures where you manipulate the DHW directly? I
>never saw a reference in the documentation. 
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/5JYINPWJINWQPIN2LQO2ZTEUGHY3YKQ2/

Updating the base is faster than restoring the engine.
I'm avoiding the restore, as I cannot find a dummy-style instruction for the 
restore and, with my luck, I will definitely hit a wall.

In my case this is the final piece left and DB manipulation is far easier .
Of course , I wouldn't manipulate the DB on a production site - but for a lab 
is acceptable.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DQLDE7NDQQJ2DFRBOKD65OFTO5RHEDCJ/


[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-19 Thread Strahil Nikolov
 Ok,
so it seems that Darell's case and mine are different, as I use VDO.
Now I have destroyed the storage domains, gluster volumes and VDO and recreated 
them again (4 gluster volumes on a single VDO). This time VDO has 
'--emulate512=true' and no issues have been observed.
Gluster volume options before 'Optimize for virt':
Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable

Gluster volume after 'Optimize for virt':
Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Stopped
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
network.ping-timeout: 30
performance.strict-o-direct: on
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
cluster.enable-shared-storage: enable
After that, adding the volumes as storage domains (via the UI) worked without 
any issues.
Can someone clarify why we now have 'cluster.choose-local: off' when in oVirt 
4.2.7 (gluster v3.12.15) we didn't have that? I'm using storage that is faster 
than the network, and reading from the local brick gives a very high read speed.
Best Regards,Strahil Nikolov


On Sunday, May 19, 2019 at 9:47:27 AM GMT+3, Strahil wrote:
 
 
On this one 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/configuring_red_hat_virtualization_with_red_hat_gluster_storage/index#proc-To_Configure_Volumes_Using_the_Command_Line_Interface
 
We should have the following options:

performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=1
features.shard=on
user.cifs=off

By the way, the 'virt' gluster group disables 'cluster.choose-local', and I 
think it wasn't like that before.
Any reasons behind that? I use it to speed up my reads, as local storage is 
faster than the network.
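If reads from the local brick are preferred again, it can be re-enabled per 
volume; a one-line sketch with the volume name from this setup (whether that is 
advisable alongside the rest of the 'virt' group settings is exactly the open 
question above):

  gluster volume set data_fast cluster.choose-local on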

Best Regards,
Strahil Nikolov
On May 19, 2019 09:36, Strahil  wrote:


OK,

Can we summarize it:
1. VDO must 'emulate512=true'
2. 'network.remote-dio' should be off ?

As per this: 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/configuring_red_hat_openstack_with_red_hat_storage/sect-setting_up_red_hat_storage_trusted_storage_pool

We should have these:

quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=on 
quorum-type=auto
server-quorum-type=server

I'm a little bit confused here.

Best Regards,
Strahil Nikolov
On May 19, 2019 07:44, Sahina Bose  wrote:



On Sun, 19 May 2019 at 12:21 AM, Nir Soffer  wrote:

On Fri, May 17, 2019 at 7:54 AM Gobinda Das  wrote:

From the RHHI side, by default we are setting the volume options below:
{ group: 'virt',
  storage.owner-uid: '36',
  storage.owner-gid: '36',
  network.ping-timeout: '30',
  performance.strict-o-direct: 'on',
  network.remote-dio: 'off'

According to the user reports, this configuration is not compatible with oVirt.
Was this tested?

Yes, this is set by default in all test configuration. We’re checking on the 
bug, but the error is likely when the underlying device does not support 512b 
writes. With network.remote-dio off gluster will ensure o-direct writes


   }

On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov  wrote:

 Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me 
to create the storage domain without any issues. I set it on all 4 new gluster 
volumes and the storage domains were successfully created.
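Applied to all four volumes in one go (the volume names here are assumed from 
this thread - adjust to your own):

  for vol in data_fast data_fast2 data_fast3 data_fast4; do
      gluster volume set "$vol" network.remote-dio on
  done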
I have created a bug for that: https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else has already opened one - please ping me to mark this one as a 
duplicate.
Best Regards,Stra

[ovirt-users] Re: oVirt upgrade version from 4.2 to 4.3

2019-05-20 Thread Strahil Nikolov
 No need,
I already have the number -> https://bugzilla.redhat.com/show_bug.cgi?id=1704782

I have just mentioned it ,as the RC1 for 4.3.4 still doesn't have the fix.
Best Regards,Strahil Nikolov

On Monday, May 20, 2019 at 3:00:12 AM GMT-4, Sahina Bose wrote:
 
 

On Sun, May 19, 2019 at 4:11 PM Strahil  wrote:

I would recommend you to postpone your upgrade if you use gluster (without the 
API), as creation of virtual disks via the UI on gluster is having issues - 
only preallocated disks can be created.


+Gobinda Das +Satheesaran Sundaramoorthi 
Sas, can you log a bug on this?


Best Regards,
Strahil Nikolov
On May 19, 2019 09:53, Yedidyah Bar David wrote:
>
> On Thu, May 16, 2019 at 3:40 PM  wrote: 
> > 
> > I cannot find an official upgrade procedure from 4.2 to 4.3 oVirt version 
> > on this page: 
> > https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide.html 
> > 
> > Can you help me? 
>
> As others noted, the above should be sufficient, for general upgrade 
> instructions, even though it does require some updates. 
>
> You probably want to read also: 
>
> https://ovirt.org/release/4.3.0/ 
>
> as well as all the other relevant pages in: 
>
> https://ovirt.org/release/ 
>
> Best regards, 
>
> > 
> > Thanks 
> > ___ 
> > Users mailing list -- users@ovirt.org 
> > To unsubscribe send an email to users-le...@ovirt.org 
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/ 
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/WG2EI6HL3S2AT6PITGEAJQFGKC6XMYRD/
> >  
>
>
>
> -- 
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KAJGM3URCFSNN6S6X3VZFFOSJF52A4RS/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6T7MO4AA7QHKGTD2E7OUNMSFLM4TXRPA/

  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SJ6FKBGSRZR3YVSZLCUX2ZVFJUDA2WKU/


[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-21 Thread Strahil Nikolov
 Do you use VDO? If yes, consider setting up systemd ".mount" units, as this is 
the only way to set up the dependencies.
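A minimal sketch of such a unit, assuming an XFS brick on a VDO volume named 
vdo_gluster_ssd mounted at /gluster_bricks/data (both names are placeholders; 
the unit file name must match the mount point, e.g. gluster_bricks-data.mount):

  # /etc/systemd/system/gluster_bricks-data.mount
  [Unit]
  Description=Gluster brick on VDO
  Requires=vdo.service
  After=vdo.service

  [Mount]
  What=/dev/mapper/vdo_gluster_ssd
  Where=/gluster_bricks/data
  Type=xfs
  Options=inode64,noatime

  [Install]
  WantedBy=multi-user.target

  # then remove the fstab line and:
  # systemctl daemon-reload && systemctl enable --now gluster_bricks-data.mount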
Best Regards,Strahil Nikolov

On Tuesday, May 21, 2019 at 10:44:06 PM GMT+3, mich...@wanderingmad.com wrote:
 
 I'm sorry, i'm still working on my linux knowledge, here is the output of my 
blkid on one of the servers:

/dev/nvme0n1: PTTYPE="dos"
/dev/nvme1n1: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020: PTTYPE="dos"
/dev/mapper/eui.0025385881b40f60: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020p1: 
UUID="pfJiP3-HCgP-gCyQ-UIzT-akGk-vRpV-aySGZ2" TYPE="LVM2_member"
/dev/mapper/eui.0025385881b40f60p1: 
UUID="Q0fyzN-9q0s-WDLe-r0IA-MFY0-tose-yzZeu2" TYPE="LVM2_member"

/dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H: PTTYPE="dos"
/dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H1: 
UUID="lQrtPt-nx0u-P6Or-f2YW-sN2o-jK9I-gp7P2m" TYPE="LVM2_member"
/dev/mapper/vg_gluster_ssd-lv_gluster_ssd: 
UUID="890feffe-c11b-4c01-b839-a5906ab39ecb" TYPE="vdo"
/dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1: 
UUID="7049fd2a-788d-44cb-9dc5-7b4c0ee309fb" TYPE="vdo"
/dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2: 
UUID="2c541b70-32c5-496e-863f-ea68b50e7671" TYPE="vdo"
/dev/mapper/vdo_gluster_ssd: UUID="e59a68d5-2b73-487a-ac5e-409e11402ab5" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme1: UUID="d5f53f17-bca1-4cb9-86d5-34a468c062e7" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme2: UUID="40a41b5f-be87-4994-b6ea-793cdfc076a4" 
TYPE="xfs"

#2
/dev/nvme0n1: PTTYPE="dos"
/dev/nvme1n1: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020p1: 
UUID="GiBSqT-JJ3r-Tn3X-lzCr-zW3D-F3IE-OpE4Ga" TYPE="LVM2_member"
/dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-0001:
 PTTYPE="dos"
/dev/sda: PTTYPE="gpt"
/dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-0001p1:
 UUID="JBhj79-Uk0E-DdLE-Ibof-VwBq-T5nZ-F8d57O" TYPE="LVM2_member"
/dev/sdb: PTTYPE="dos"
/dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B: PTTYPE="dos"
/dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B1: 
UUID="6yp5YM-D1be-M27p-AEF5-w1pv-uXNF-2vkiJZ" TYPE="LVM2_member"
/dev/mapper/vg_gluster_ssd-lv_gluster_ssd: 
UUID="9643695c-0ace-4cba-a42c-3f337a7d5133" TYPE="vdo"
/dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2: 
UUID="79f5bacc-cbe7-4b67-be05-414f68818f41" TYPE="vdo"
/dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1: 
UUID="2438a550-5fb4-48f4-a5ef-5cff5e7d5ba8" TYPE="vdo"
/dev/mapper/vdo_gluster_ssd: UUID="5bb67f61-9d14-4d0b-8aa4-ae3905276797" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme1: UUID="732f939c-f133-4e48-8dc8-c9d21dbc0853" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme2: UUID="f55082ca-1269-4477-9bf8-7190f1add9ef" 
TYPE="xfs"

#3
/dev/nvme1n1: UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
/dev/nvme0n1: PTTYPE="dos"
/dev/mapper/nvme.c0a9-313931304531454644323630-4354353030503153534438-0001: 
UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
/dev/mapper/eui.6479a71892882020: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020p1: 
UUID="FwBRJJ-ofHI-1kHq-uEf1-H3Fn-SQcw-qWYvmL" TYPE="LVM2_member"
/dev/sda: PTTYPE="gpt"
/dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A: PTTYPE="gpt"
/dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A1: 
UUID="weCmOq-VZ1a-Itf5-SOIS-AYLp-Ud5N-S1H2bR" TYPE="LVM2_member" 
PARTUUID="920ef5fd-e525-4cf0-99d5-3951d3013c19"
/dev/mapper/vg_gluster_ssd-lv_gluster_ssd: 
UUID="fbaffbde-74f0-4e4a-9564-64ca84398cde" TYPE="vdo"
/dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2: 
UUID="ae0bd2ad-7da9-485b-824a-72038571c5ba" TYPE="vdo"
/dev/mapper/vdo_gluster_ssd: UUID="f0f56784-bc71-46c7-8bfe-6b71327c87c9" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme1: UUID="0ddc1180-f228-4209-82f1-1607a46aed1f" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme2: UUID="bcb7144a-6ce0-4b3f-9537-f465c46d4843" 
TYPE="xfs"

I don't have any errors on mount until I reboot, and once I reboot it takes 
~6hrs for everything to work 100%, since I have to delete the mount entries for 
the 3 gluster volumes out of fstab and reboot. I'd rather wait until the next 
update to do that.

I don't have a variable file or playbook since I made the storage manually, I 
stopped using the playbook since at that point I couldn't enable RDMA or 
over-provision the disks correctly unless I made t

[ovirt-users] Re: Dropped RX Packets

2019-05-16 Thread Strahil Nikolov
 Hi Magnus,
do you notice any repetition there? Does it happen completely randomly?
Usually, to debug network issues you will need tcpdump from the Guest, the Host 
and the other side if possible. Is that an option?
Do you see those RX errors in the host's tab?
What is the output of "ip -s link" on the Guest?
Best Regards,Strahil Nikolov

On Thursday, May 16, 2019 at 9:19:57 AM GMT-4, Magnus Isaksson wrote:
 
 Hello all!

I'm having quite some trouble with VMs that have a large amount of dropped 
packets on RX.
On top of this, customers complain about briefly dropped connections; for 
example, one customer has a SQL server and another server connecting to it, and 
it randomly drops connections. Before they moved their VMs to us they did not 
have any of these issues.

Does anyone have an idea of what this can be due to? And how can i fix it? It 
is starting to be a deal breaker for our customers on whether they will stay 
with us or not.

I was thinking of reinstalling the nodes with oVirt Node, instead of the full 
CentOS, would this perhaps fix the issue?

The enviroment is:
Huawei x6000 with 4 nodes
Each node having Intel X722 network card and connecting with 10G (fiber) to a 
Juniper EX 4600. Storage via FC to a IBM FS900.
Each node is running a full CentOS 7.6 connecting to a Engine 4.2.8.2

Regards
 Magnus
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QXGQSKYBUCFPDCBIQVAAZAWFQX54A2BD/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SCZNYVKLR54USPJW3EYA2NX5IH7BZDR6/


[ovirt-users] Fw: ovirt 4.3.3.7 cannot create a gluster storage domain

2019-05-16 Thread Strahil Nikolov
 It seems that the issue is within the 'dd' command as it stays waiting for 
input:
[root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock of=file oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync
^C
0+0 records in
0+0 records out
0 bytes (0 B) copied, 19.3282 s, 0.0 kB/s

 Changing the dd command works and shows that the gluster is working:
[root@ovirt1 mnt]# cat /dev/urandom | /usr/bin/dd of=file oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync
0+1 records in
0+1 records out
131072 bytes (131 kB) copied, 0.00705081 s, 18.6 MB/s

Best Regards,Strahil Nikolov


   ----- Forwarded message ----- From: Strahil Nikolov To: Users
Sent: Thursday, May 16, 2019 at 5:56:44 AM GMT-4
Subject: ovirt 4.3.3.7 cannot create a gluster storage domain
 Hey guys,
I have recently updated (yesterday) my platform to the latest available version 
(v4.3.3.7) and upgraded to gluster v6.1. The setup is a hyperconverged 3 node 
cluster with ovirt1/gluster1 & ovirt2/gluster2 as replica nodes (glusterX is 
for gluster communication) while ovirt3 is the arbiter.
Today I have tried to add new storage domains but they fail with the following:
2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [vdsm.api] FINISH 
createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', 
u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 
'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' 
err="/usr/bin/dd: error writing 
'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
 Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\n" from=:::192.168.1.2,43864, flow_id=4a54578a, 
task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task] 
(Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
    return fn(*args, **kargs)
  File "", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614, in 
createStorageDomain
    storageType, domVersion, block_size, alignment)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 106, in 
create
    block_size)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 466, in 
_prepareMetadata
    cls.format_external_leases(sdUUID, xleases_path)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255, in 
format_external_leases
    xlease.format_index(lockspace, backend)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 681, in 
format_index
    index.dump(file)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 843, in 
dump
    file.pwrite(INDEX_BASE, self._buf)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1076, in 
pwrite
    self._run(args, data=buf[:])
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1093, in 
_run
    raise cmdutils.Error(args, rc, "[suppressed]", err)
Error: Command ['/usr/bin/dd', 'iflag=fullblock', 
u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 
'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' 
err="/usr/bin/dd: error writing 
'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
 Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\n"
2019-05-16 10:15:21,296+0300 INFO  (jsonrpc/2) [storage.TaskManager.Task] 
(Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') aborting: Task is aborted: 
u'Command [\'/usr/bin/dd\', \'iflag=fullblock\', 
u\'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\',
 \'oflag=direct,seek_bytes\', \'seek=1048576\', \'bs=256512\', \'count=1\', 
\'conv=notrunc,nocreat,fsync\'] failed with rc=1 out=\'[suppressed]\' 
err="/usr/bin/dd: error writing 
\'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases\':
 Invalid argument\\n1+0 records in\\n0+0 records out\\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\\n"' - code 100 (task:1181)
2019-05-16 10:15:21,297+0300 ERROR (jsonrpc/2) [storage.Dispatcher] FINISH 
createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullb

[ovirt-users] Re: ovirt 4.3.3.7 cannot create a gluster storage domain

2019-05-16 Thread Strahil Nikolov
 Due to the issue with dom_md/ids not getting in sync and always pending heal 
on ovirt2/gluster2 & ovirt3

Best Regards,Strahil Nikolov

On Thursday, May 16, 2019 at 6:08:44 AM GMT-4, Andreas Elvers wrote:
 
 Why  did you move to gluster v6? For the kicks? :-) The devs are currently 
evaluating for themselves whether they can switch to V6 for the upcoming 
releases.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JYZRKA4QBTXYDR3WXFRW7IXLCSGGVSLC/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TM5NAZBYXEC2KZCVWKIWOBUAXR5QHKQ4/


[ovirt-users] ovirt 4.3.3.7 cannot create a gluster storage domain

2019-05-16 Thread Strahil Nikolov
*a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1189, in 
prepare
    raise self.error
Error: Command ['/usr/bin/dd', 'iflag=fullblock', 
u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 
'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' 
err="/usr/bin/dd: error writing 
'/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
 Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 
0.0138582 s, 0.0 kB/s\n"
2019-05-16 10:15:21,297+0300 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call 
StorageDomain.create failed (error 351) in 0.45 seconds (__init__:312)
2019-05-16 10:15:22,068+0300 INFO  (jsonrpc/1) [vdsm.api] START 
disconnectStorageServer(domType=7, 
spUUID=u'----', conList=[{u'mnt_options': 
u'backup-volfile-servers=gluster2:ovirt3', u'id': 
u'7442e9ab-dc54-4b9a-95d9-5d98a1e81b05', u'connection': 
u'gluster1:/data_fast2', u'iqn': u'', u'user': u'', u'tpgt': u'1', 
u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password': '', 
u'port': u''}], options=None) from=:::192.168.1.2,43864, 
flow_id=33ced9b2-cdd5-4147-a223-d0eb398a2daf, 
task_id=a9a8f90a-1603-40c6-a959-3cbff29d1d7b (api:48)
2019-05-16 10:15:22,068+0300 INFO  (jsonrpc/1) [storage.Mount] unmounting 
/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2 (mount:212)

I have tested manually mounting and trying it again:
[root@ovirt1 logs]# mount -t glusterfs -o backupvolfile-server=gluster2:ovirt3 
gluster1:/data_fast2 /mnt
[root@ovirt1 logs]# cd /mnt/
[root@ovirt1 mnt]# ll
total 0
[root@ovirt1 mnt]# dd if=/dev/zero of=file bs=4M status=progress count=250
939524096 bytes (940 MB) copied, 8.145447 s, 115 MB/s
250+0 records in
250+0 records out
1048576000 bytes (1.0 GB) copied, 9.08347 s, 115 MB/s
[root@ovirt1 mnt]#  /usr/bin/dd iflag=fullblock of=file oflag=direct,seek_bytes 
seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync status=progress
^C0+0 records in
0+0 records out
0 bytes (0 B) copied, 46.5877 s, 0.0 kB/s


Can someone give a hint ? Maybe it's related to gluster v6 ? 
Can someone test with older version of Gluster ?
Best Regards,Strahil Nikolov

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U53LDKNVXP4QN3SPDBLMITVOSGQMSA6K/


[ovirt-users] ovirt 4.3.3 SELINUX issue

2019-05-15 Thread Strahil Nikolov
Hello All,
I want to warn you that selinux-policy & selinux-policy-targeted version '3.13.1-229.el7_6.12' caused an issue with my HostedEngine, where I got a "Login incorrect" screen of death.
I have also raised a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1710083

If you want to test your luck, increase the default grub timeout in "/etc/default/grub" to 15 and rebuild the grub menu via 'grub2-mkconfig -o /boot/grub2/grub.cfg'.
If the issue hits you, just append 'enforcing=0' to your kernel command line and the issue will be over. Of course, you can always roll back either from a rescue DVD or from the running 'enforcing=0' system.
Best Regards,
Strahil Nikolov
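A rough sketch of that workaround, assuming a stock CentOS 7 layout (the grubby call for making 'enforcing=0' persistent and the yum downgrade for rolling back the packages are additions for illustration, not commands from this thread):

sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=15/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
# if the "Login incorrect" issue appears, boot with SELinux permissive:
grubby --update-kernel=ALL --args="enforcing=0"
# roll back the problematic policy packages from the running system:
yum downgrade selinux-policy selinux-policy-targeted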
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NQTBNNVWUXP33PCNVON6GSB42BG7LV5E/


[ovirt-users] Re: Change host names/IPs

2019-05-15 Thread Strahil Nikolov
I think you need to:
1. Set the host into maintenance
2. Uninstall
3. Remove the host (if HostedEngine is running there)
4. Change the hostname & IPs
5. Add the host
6. Install (if HostedEngine will be running there)
Best Regards,
Strahil Nikolov

On Tuesday, May 14, 2019 at 6:05:35 PM GMT-4, Davide Ferrari wrote:
 
 Hello

Is there a clean way and possibly without downtime to change the hostname and 
IP addresses of all the hosts in a running oVirt cluster?

-- 
Davide Ferrari
Senior Systems Engineer

-- 
IMPORTANT!
This message has been scanned for viruses and phishing links.
However, it is your responsibility to evaluate the links and attachments you 
choose to click.
If you are uncertain, we always try to help.
Greetings helpdesk@actnet.se
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4XLQ3ZQIEHPXOFIH2AQEYU5JUMSLFDAF/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AEP4RP4ALYJPDS4PZG5WDH6WTFBRJ2E3/


[ovirt-users] Re: oVirt Open Source Backup solution?

2019-05-14 Thread Strahil Nikolov
In such a case, you use the same approach for the VM as a whole - lock + snapshot on oVirt + unlock. This way you keep the OS + app backup in one place, which has its own pluses and minuses.
Best Regards,
Strahil Nikolov

On Tuesday, May 14, 2019 at 6:40:56 AM GMT-4, Derek Atkins wrote:
 
 Hi,

I am sorry I was unclear.  Of course the long operation happens with the
DB unlocked.

Once the LVM snapshot is created (from within the locked environment), the
lock is of course released and the backup proceeds from a db-unlocked
environment.

I apologize for my lack of clarity with "and then I backup off the
snapshot" not making that clear.

-derek

On Tue, May 14, 2019 6:20 am, Strahil wrote:
> Derek,
>
> That's risky.
> Just read lock the DB, create the lvm snapshot and release the lock.
> Otherwise you risk a transaction to be  interrupted.
>
> Best Regards,
> Strahil Nikolov
> On May 13, 2019 16:47, Derek Atkins  wrote:
>>
>> Strahil  writes:
>>
>> > Another option is to create a snapshot, backup the snapshot and merge
>> > the disks (delete the snapshot actually).
>> > Sadly that option doesn't work with Databases, as you might interrupt
>> > a transaction and leave the DB in an inconsistent state.
>>
>> Yet another reason to do it from inside the VM.
>>
>> What I do (on systems that have a running database) is to run a "flush"
>> operation to sync the database to disk, and then from within the flush
>> operation I create an LVM snapshot, and then I backup off the snapshot.
>> If I'm not running a database, then I just create the snapshot directly.
>>
>> > Best Regards,
>> > Strahil Nikolov
>>
>> -derek
>> --
>>    Derek Atkins 617-623-3745
>>    de...@ihtfp.com www.ihtfp.com
>>    Computer and Internet Security Consultant
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JS6YVB3S33VYLPEQTUE3UJVZOBBO5W7H/
>


-- 
      Derek Atkins                617-623-3745
      de...@ihtfp.com            www.ihtfp.com
      Computer and Internet Security Consultant

  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MXK2PNDKTGGK4F55VACFRHRB2UADIQVP/


[ovirt-users] Re: Is teamd supported on 4.3?

2019-05-14 Thread Strahil Nikolov
I'm still implementing the change, so I'm not sure.
By the way, as a workaround we can use VLAN interfaces, right?
Best Regards,
Strahil Nikolov


On Tuesday, May 14, 2019 at 6:46:06 AM GMT-4, Dominik Holler wrote:
 
 On Tue, 14 May 2019 13:33:30 +0300
Strahil  wrote:

> I'm using teaming and I don't see issues.


Are you able to connect the teaming device to VMs via logical
networks / bridges?

> Just cannot control the teaming device.
> 
> Best Regards,
> Strahil Nikolov
> On May 13, 2019 22:26, Dominik Holler  wrote:
> >
> > On Mon, 13 May 2019 15:30:11 +0200 
> > Valentin Bajrami  wrote: 
> >
> > > Hello guys, 
> > > 
> > > Next week, I'm planning to deploy ovirt-node 4.3 on a few hosts. I've 
> > > been running bonds for the past years but I'd like to know if teaming 
> > > (teamd) is also supported with this version.  
> > > 
> >
> > No, unfortunately not. 
> > May I ask why you want to use teaming instead of bonding? 
> >
> >
> > > My current package version(s): 
> > > 
> > > OS Version: | RHEL - 7 - 6.1810.2.el7.centos 
> > > OS Description: | oVirt Node 4.3.1 
> > > Kernel Version: | 3.10.0 - 957.5.1.el7.x86_64 
> > > KVM Version: | 2.12.0 - 18.el7_6.3.1 
> > > LIBVIRT Version: | libvirt-4.5.0-10.el7_6.4 
> > > VDSM Version: | vdsm-4.30.9-1.el7 
> > > SPICE Version: | 0.14.0 - 6.el7_6.1 
> > > GlusterFS Version: | glusterfs-5.3-2.el7 
> > > CEPH Version: | librbd1-10.2.5-4.el7 
> > > Open vSwitch Version: | openvswitch-2.10.1-3.el7 
> > > Kernel Features: | PTI: 1, IBRS: 0, RETP: 1 
> > > VNC Encryption: | Disabled 
> > > 
> > > Is anyone running teamd on this version ? 
> > > 
> > > Thanks in advance 
> > > 
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/XVFQQWUJK3QTVJJ7H3PI2T7FQJEIA6L5/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QW5RYXJ6IF6DT7P3AGJ3KD6D5S7FMKXK/


[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Strahil Nikolov
Hi Adrian,
are you using local storage?
If yes, set a blacklist in multipath.conf (don't forget the "# VDSM PRIVATE" flag) and rebuild the initramfs, then reboot. When multipath locks a path, no direct access is possible - thus your pvcreate would not be possible. Also, multipath is not needed for local storage ;)
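A minimal sketch of what that looks like, assuming the gluster disks are plain local devices (per the thread, the '# VDSM PRIVATE' marker belongs in the header of multipath.conf so vdsm leaves the file alone; adjust the blacklist to your own disks):

# /etc/multipath.conf (excerpt)
# VDSM PRIVATE
blacklist {
    devnode "*"
}

systemctl restart multipathd
dracut -f      # rebuild the initramfs so the blacklist also applies in early boot
reboot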

Best Regards,
Strahil Nikolov

On Monday, May 20, 2019 at 7:31:04 PM GMT+3, Adrian Quintero wrote:
 
Sahina,
Yesterday I started with a fresh install: I completely wiped all the disks and recreated the arrays from within the controller of our DL380 Gen 9's.
OS: RAID 1 (2x600GB HDDs): /dev/sda    // Using ovirt node 4.3.3.1 iso.
engine and VMSTORE1: JBOD (1x3TB HDD): /dev/sdb
DATA1: JBOD (1x3TB HDD): /dev/sdc
DATA2: JBOD (1x3TB HDD): /dev/sdd
Caching disk: JBOD (1x440GB SSD): /dev/sde

After the OS install on the first 3 servers and setting up ssh keys, I started the Hyperconverged deploy process:
1. Logged in to the first server http://host1.example.com:9090
2. Selected Hyperconverged, clicked on "Run Gluster Wizard"
3. Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks, Review)
Hosts/FQDNs:
host1.example.com
host2.example.com
host3.example.com
Packages:
Volumes:
engine:replicate:/gluster_bricks/engine/engine
vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
data1:replicate:/gluster_bricks/data1/data1
data2:replicate:/gluster_bricks/data2/data2
Bricks:
engine:/dev/sdb:100GB:/gluster_bricks/engine
vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstrore1
data1:/dev/sdc:2700GB:/gluster_bricks/data1
data2:/dev/sdd:2700GB:/gluster_bricks/data2
LV Cache:/dev/sde:400GB:writethrough
4. After I hit deploy on the last step of the "Wizard" is when I get the disk filter error.
TASK [gluster.infra/roles/backend_setup : Create volume groups] 
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a 
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": 
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a 
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": 
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': 
u'/dev/sdb'}) => {"changed": false, "err": "  Device /dev/sdb excluded by a 
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": 
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': 
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a 
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": 
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': 
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a 
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": 
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': 
u'/dev/sdc'}) => {"changed": false, "err": "  Device /dev/sdc excluded by a 
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": 
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': 
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a 
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": 
"Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': 
u'/dev/sdd'}) => {"changed": false, "err": "  Device /dev/sdd excluded by a 
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": 
"Creating phys

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-05-20 Thread Strahil Nikolov
Hey Sahina,
it seems that almost all of my devices are locked - just like Fred's. What exactly does that mean? I don't have any issues with my bricks/storage domains.
Best Regards,
Strahil Nikolov

On Monday, May 20, 2019 at 2:56:11 PM GMT+3, Sahina Bose wrote:
 
To scale existing volumes you need to add bricks and run rebalance on the gluster volume so that data is correctly redistributed, as Alex mentioned. We do support expanding existing volumes, as the bug https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed.
As to the procedure to expand volumes:
1. Create bricks from the UI - select Host -> Storage Devices -> Storage device and click on "Create Brick". If the device is shown as locked, make sure there's no signature on the device. If multipath entries have been created for local devices, you can blacklist those devices in multipath.conf and restart multipath (a hedged sketch of these checks follows this list). If you see the device as locked even after you do this - please report back.
2. Expand the volume using Volume -> Bricks -> Add Bricks, and select the 3 bricks created in the previous step.
3. Run Rebalance on the volume: Volume -> Rebalance.
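A hedged sketch of the checks behind step 1 (/dev/sdX is a placeholder for the disk you intend to turn into a brick; wipefs -a is destructive, so only run it against a disk whose contents you really want gone):

wipefs /dev/sdX                    # list any leftover filesystem/RAID signatures
wipefs -a /dev/sdX                 # remove them (destructive)
multipath -ll | grep -i sdX        # see whether a multipath map still claims the disk
# after blacklisting the local device in /etc/multipath.conf:
systemctl restart multipathd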

On Thu, May 16, 2019 at 2:48 PM Fred Rolland  wrote:

Sahina,
Can someone from your team review the steps done by Adrian?
Thanks,
Freddy

On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero  
wrote:

Ok, I will remove the extra 3 hosts, rebuild them from scratch and re-attach 
them to clear any possible issues and try out the suggestions provided.
thank you!

On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov  wrote:

I have the same locks, despite having blacklisted all local disks:
# VDSM PRIVATE
blacklist {
    devnode "*"
    wwid Crucial_CT256MX100SSD1_14390D52DCF5
    wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
    wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
    wwid 
nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-0001
}

If you have reconfigured multipath, do not forget to rebuild the initramfs (dracut -f). It's a Linux issue, not an oVirt one.
In your case you had a stack like this:
/dev/VG/LV
/dev/disk/by-id/pvuuid
/dev/mapper/multipath-uuid
/dev/sdb

Linux will not allow you to work with /dev/sdb while multipath is locking the block device.
Best Regards,
Strahil Nikolov

On Thursday, April 25, 2019 at 8:30:16 AM GMT-4, Adrian Quintero wrote:
 
Under Compute -> Hosts, select the host that has the locks on /dev/sdb, /dev/sdc, etc., then select Storage Devices; that is where you see a small column with a bunch of lock icons, one per row.

However, as a workaround, on the newly added hosts (3 total) I had to manually modify /etc/multipath.conf and add the following at the end, as this is what I noticed from the original 3 node setup:

-
# VDSM REVISION 1.3
# VDSM PRIVATE
# BEGIN Added by gluster_hci role

blacklist {
    devnode "*"
}
# END Added by gluster_hci role
--

After this I restarted multipath and the lock went away, and I was able to configure the new bricks through the UI. However, my concern is what will happen if I reboot the server - will the disks be read the same way by the OS?
I am also now able to expand the gluster setup with a new replica 3 volume if needed, using http://host4.mydomain.com:9090.

thanks again

On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov  wrote:

In which menu do you see it this way?
Best Regards,
Strahil Nikolov

On Wednesday, April 24, 2019 at 8:55:22 AM GMT-4, Adrian Quintero wrote:
 
Strahil,
this is the issue I am seeing now:


This is through the UI when I try to create a new brick.
So my concern is: if I modify the filters on the OS, what impact will that have after the server reboots?
thanks,


On Mon, Apr 22, 2019 at 11:39 PM Strahil  wrote:

I have edited my multipath.conf to exclude local disks, but you need to set '# VDSM PRIVATE' as per the comments in the header of the file.
Otherwise, use the /dev/mapper/multipath-device notation - as you would do with any Linux system.

Best Regards,
Strahil Nikolov
On Apr 23, 2019 01:07, adrianquint...@gmail.com wrote:
>
> Thanks Alex, that makes more sense now. While trying to follow the 
> instructions provided I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd 
> are locked and indicating "multipath_member", hence not letting me create new 
> bricks. And in the logs I see 
>
> Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", 
> "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' 
> failed", "rc": 5} 
> Same thing for sdc, sdd 
>
> Should I manually edit the filters inside the OS, what will be the impact? 
>
> thanks again.
> ___
> Use

[ovirt-users] Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available

2019-05-20 Thread Strahil Nikolov
I got confused along the way. What is best for oVirt: remote-dio off or on? My latest gluster volumes were set to 'off' while the older ones are 'on'.
Best Regards,
Strahil Nikolov

On Monday, May 20, 2019 at 11:42:09 PM GMT+3, Darrell Budic wrote:
 
Wow, I think Strahil and I both hit different edge cases on this one. I was running that on my test cluster with a ZFS backed brick, which does not support O_DIRECT (in the current version; 0.8 will, when it's released). I tested on an XFS backed brick with the gluster virt group applied and network.remote-dio disabled, and ovirt was able to create the storage volume correctly. So not a huge problem for most people, I imagine.
Now I’m curious about the apparent disconnect between gluster and ovirt though. 
Since the gluster virt group sets network.remote-dio on, what’s the reasoning 
behind disabling it for these tests?


On May 18, 2019, at 11:44 PM, Sahina Bose  wrote:


On Sun, 19 May 2019 at 12:21 AM, Nir Soffer  wrote:

On Fri, May 17, 2019 at 7:54 AM Gobinda Das  wrote:

From the RHHI side we are setting the below volume options by default:
{ group: 'virt',
  storage.owner-uid: '36',
  storage.owner-gid: '36',
  network.ping-timeout: '30',
  performance.strict-o-direct: 'on',
  network.remote-dio: 'off'

According to the user reports, this configuration is not compatible with oVirt.
Was this tested?

Yes, this is set by default in all test configurations. We're checking on the bug, but the error likely occurs when the underlying device does not support 512b writes. With network.remote-dio off, gluster will ensure o-direct writes.


   }

On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov  wrote:

Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me to create the storage domain without any issues. I set it on all 4 new gluster volumes and the storage domains were successfully created.
I have created a bug for that: https://bugzilla.redhat.com/show_bug.cgi?id=1711060
If someone else already opened one - please ping me to mark this one as a duplicate.
Best Regards,
Strahil Nikolov

On Thursday, May 16, 2019 at 10:27:01 PM GMT+3, Darrell Budic wrote:
 
 On May 16, 2019, at 1:41 PM, Nir Soffer  wrote:


On Thu, May 16, 2019 at 8:38 PM Darrell Budic  wrote:

I tried adding a new storage domain on my hyper converged test cluster running 
Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume 
fine, but it’s not able to add the gluster storage domain (as either a managed 
gluster volume or directly entering values). The created gluster volume mounts 
and looks fine from the CLI. Errors in VDSM log:

... 
2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't support direct IO (fileSD:110)
2019-05-16 10:25:09,584-0500 INFO  (jsonrpc/5) [vdsm.api] FINISH 
createStorageDomain error=Storage Domain target is unsupported: () 
from=:::10.100.90.5,44732, flow_id=31d993dd, 
task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)

The direct I/O check has failed.

So something is wrong in the file system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountoint/test bs=4096 count=1 oflag=direct
This will probably fail with:
dd: failed to open '/path/to/mountoint/test': Invalid argument
If it succeeds, but oVirt fails to connect to this domain, file a bug and we will investigate.
Nir

Yep, it fails as expected. Just to check, it is working on pre-existing volumes, so I poked around at gluster settings for the new volume. It has network.remote-dio=off set on the new volume, but enabled on old volumes. After enabling it, I'm able to run the dd test:
[root@boneyard mnt]# gluster vol set test network.remote-dio enable
volume set: success
[root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
I'm also able to add the storage domain in ovirt now.
I see network.remote-dio=enable is part of the gluster virt group, so apparently it's not getting set by ovirt during the volume creation/optimize for storage?
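A hedged way to compare what the virt group would apply with what a given volume actually has (the volume name data_fast4 is taken from earlier in this thread; substitute your own):

cat /var/lib/glusterd/groups/virt                      # options the 'virt' group would set
gluster volume get data_fast4 network.remote-dio
gluster volume get data_fast4 performance.strict-o-direct
# to apply the whole group by hand if it was skipped:
gluster volume set data_fast4 group virt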


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OPBXHYOHZA4XR5CHU7KMD2ISQWLFRG5N/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B7K24XYG3M43CMMM7MMFARH52QEBXIU5/



-- 


Thanks,
Gobinda




[ovirt-users] ovirt 4.3.3 Disk for New VM is always Preallocated

2019-04-29 Thread Strahil Nikolov
Hi All,
I have stumbled upon a potential bug in the UI. Can someone test it, in order to reproduce it?
How to reproduce:
1. Create a VM.
2. Within the new VM wizard, create a disk and select Thin Provision.
3. Create the VM and wait for the disk to be completed.
4. Check the disk within the UI - Allocation Policy is set to "Preallocation".

Best Regards,
Strahil Nikolov
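For anyone trying to reproduce this, a hedged way to see what was actually allocated on file-based storage (the image path is a placeholder; a thin disk should report a much smaller 'disk size' and on-disk usage than its virtual size):

qemu-img info /path/to/images/DISK_UUID/IMAGE_UUID    # compare 'virtual size' vs 'disk size'
du -h /path/to/images/DISK_UUID/IMAGE_UUID            # blocks actually allocated
ls -lh /path/to/images/DISK_UUID/IMAGE_UUID           # apparent (possibly sparse) size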

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FUB67DZTZHNRB65QUV2FFCOA257564QM/


[ovirt-users] Re: ovirt 4.3.3 cannot create VM in non-default cluster

2019-04-29 Thread Strahil Nikolov
I've upgraded to version 4.3.3.6-1.el7 and the issue is gone.
Best Regards,
Strahil Nikolov

On Sunday, April 28, 2019 at 4:14:57 AM GMT-4, Strahil Nikolov wrote:
 
It seems that no matter which cluster is selected, the UI uses only the "Default" one.
I'm attaching a screenshot.

Best Regards,
Strahil Nikolov


>Hi All,
>
>I'm having an issue to create a VM in my second cluster called "Intel" which 
>>consists of only 1 Host -> ovirt3 which plays the role of a gluster arbiter 
>in my >hyperconverged setup.
>
>When I try to create the  VM (via UI) I receive:
 >"Cannot add VM. CPU Profile doesn't match provided Cluster."
>
>When I try to select the host where I can put the VM , I see only ovirt1 or 
>>ovirt2 which are part of the 'Default' Cluster .
>
>Do we have an opened bug  for that ?
>
>Note: A workaround is to create the VM in the Default cluster and later edit 
>it >to match the needed Cluster.
>
>Best Regards,
>Strahil Nikolov  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W55NNRKVRLDR5U2KRELO5PCTFTQJOITV/


[ovirt-users] Re: ovirt 4.3.3 Disk for New VM is always Preallocated

2019-04-30 Thread Strahil Nikolov
I have raised a bug (1704782 – ovirt 4.3.3 doesn't allow creation of VM with "Thin Provision"-ed disk (always preallocated)), despite not being sure if I have selected the right category.

Best Regards,
Strahil Nikolov

On Tuesday, April 30, 2019 at 9:31:46 AM GMT-4, Strahil Nikolov wrote:

Hi Oliver,

can you check your version of the UI?

It seems that both VMs I had created are fully "Preallocated" instead of being "Thin Provisioned".

Can someone tell me in which section of Bugzilla I should open the bug?

Here is some output:

[root@ovirt1 images]# qemu-img info 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9
image: 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 20G
[root@ovirt1 images]# qemu-img info 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/9e1065ed-fbc3-455b-a611-f650d56dadc9/aed4306e-7c45-4cf5-82ee-7bed3c9631ce
image: 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/9e1065ed-fbc3-455b-a611-f650d56dadc9/aed4306e-7c45-4cf5-82ee-7bed3c9631ce
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 20G
[root@ovirt1 images]# ssh engine "rpm -qa | grep ovirt | sort "
root@engine's password:
ovirt-ansible-cluster-upgrade-1.1.13-1.el7.noarch
ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.17-1.el7.noarch
ovirt-ansible-image-template-1.1.9-1.el7.noarch
ovirt-ansible-infra-1.1.12-1.el7.noarch
ovirt-ansible-manageiq-1.1.13-1.el7.noarch
ovirt-ansible-repositories-1.1.5-1.el7.noarch
ovirt-ansible-roles-1.1.6-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
ovirt-ansible-vm-infra-1.1.14-1.el7.noarch
ovirt-cockpit-sso-0.1.1-1.el7.noarch
ovirt-engine-4.3.3.6-1.el7.noarch
ovirt-engine-api-explorer-0.0.4-1.el7.noarch
ovirt-engine-backend-4.3.3.6-1.el7.noarch
ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
ovirt-engine-dbscripts-4.3.3.6-1.el7.noarch
ovirt-engine-dwh-4.3.0-1.el7.noarch
ovirt-engine-dwh-setup-4.3.0-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.10-1.el7.noarch
ovirt-engine-extension-aaa-ldap-1.3.9-1.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.3.9-1.el7.noarch
ovirt-engine-extensions-api-impl-4.3.3.6-1.el7.noarch
ovirt-engine-metrics-1.3.0.2-1.el7.noarch
ovirt-engine-restapi-4.3.3.6-1.el7.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-engine-setup-4.3.3.6-1.el7.noarch
ovirt-engine-setup-base-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-cinderlib-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.3.3.6-1.el7.noarch
ovirt-engine-tools-4.3.3.6-1.el7.noarch
ovirt-engine-tools-backup-4.3.3.6-1.el7.noarch
ovirt-engine-ui-extensions-1.0.4-1.el7.noarch
ovirt-engine-vmconsole-proxy-helper-4.3.3.6-1.el7.noarch
ovirt-engine-webadmin-portal-4.3.3.6-1.el7.noarch
ovirt-engine-websocket-proxy-4.3.3.6-1.el7.noarch
ovirt-engine-wildfly-15.0.1-1.el7.x86_64
ovirt-engine-wildfly-overlay-15.0.1-1.el7.noarch
ovirt-guest-agent-common-1.0.16-1.el7.noarch
ovirt-guest-tools-iso-4.3-2.el7.noarch
ovirt-host-deploy-common-1.8.0-1.el7.noarch
ovirt-host-deploy-java-1.8.0-1.el7.noarch
ovirt-imageio-common-1.5.1-0.el7.x86_64
ovirt-imageio-proxy-1.5.1-0.el7.noarch
ovirt-imageio-proxy-setup-1.5.1-0.el7.noarch
ovirt-iso-uploader-4.3.1-1.el7.noarch
ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch
ovirt-provider-ovn-1.2.20-1.el7.noarch
ovirt-release43-4.3.3.1-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
ovirt-vmconsole-proxy-1.0.7-2.el7.noarch
ovirt-web-ui-1.5.2-1.el7.noarch
python2-ovirt-engine-lib-4.3.3.6-1.el7.noarch
python2-ovirt-host-deploy-1.8.0-1.el7.noarch
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
python-ovirt-engine-sdk4-4.3.1-2.el7.x86_64

Best Regards,
Strahil Nikolov




On Monday, April 29, 2019 at 8:45:57 PM GMT-4, Oliver Riesener wrote:





Hi Strahil,

sorry can’t reproduce it on NFS SD.

- UI and Disk usage looks ok, Thin Provision for Thin Provision created Disks. 
Sparse File with (0 Blocks)

Second:

UI and Disk usage looks ok also for Preallocated. Preallocated File with 
(2097152 Blocks)

Regards

Oliver


root@ovn-elem images]# stat 
620b4bc0-3e46-4abc-b995-41f34ea84280/23bf0bea-c1ca-43fd-b9c3-bf35d9cfcd0c
  Datei: 
„620b4bc0-3e46-4abc-b995-41f34ea84280/23bf0bea-c1ca-43fd-b9c3-bf35d9cfcd0c“
  Größe: 5368709120 Blöcke: 0          EA Block: 4096   reguläre Datei
Gerät: fd12h/64786d Inode: 12884902

[ovirt-users] Re: ovirt 4.3.3 Disk for New VM is always Preallocated

2019-04-30 Thread Strahil Nikolov
Hi Oliver,
can you check your version of the UI?
It seems that both VMs I had created are fully "Preallocated" instead of being "Thin Provisioned".
Can someone tell me in which section of Bugzilla I should open the bug?
Here is some output:
[root@ovirt1 images]# qemu-img info 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9
image: 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/b87f1fe7-127a-4574-b835-85202f76368a/41fcb56c-7ee0-4575-9366-72ae051444f9
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 20G
[root@ovirt1 images]# qemu-img info 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain\:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/9e1065ed-fbc3-455b-a611-f650d56dadc9/aed4306e-7c45-4cf5-82ee-7bed3c9631ce
image: 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data/cd0018d3-05cd-4667-a5f8-b26dca65a680/images/9e1065ed-fbc3-455b-a611-f650d56dadc9/aed4306e-7c45-4cf5-82ee-7bed3c9631ce
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 20G
[root@ovirt1 images]# ssh engine "rpm -qa | grep ovirt | sort "
root@engine's password:
ovirt-ansible-cluster-upgrade-1.1.13-1.el7.noarch
ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.17-1.el7.noarch
ovirt-ansible-image-template-1.1.9-1.el7.noarch
ovirt-ansible-infra-1.1.12-1.el7.noarch
ovirt-ansible-manageiq-1.1.13-1.el7.noarch
ovirt-ansible-repositories-1.1.5-1.el7.noarch
ovirt-ansible-roles-1.1.6-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
ovirt-ansible-vm-infra-1.1.14-1.el7.noarch
ovirt-cockpit-sso-0.1.1-1.el7.noarch
ovirt-engine-4.3.3.6-1.el7.noarch
ovirt-engine-api-explorer-0.0.4-1.el7.noarch
ovirt-engine-backend-4.3.3.6-1.el7.noarch
ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
ovirt-engine-dbscripts-4.3.3.6-1.el7.noarch
ovirt-engine-dwh-4.3.0-1.el7.noarch
ovirt-engine-dwh-setup-4.3.0-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.10-1.el7.noarch
ovirt-engine-extension-aaa-ldap-1.3.9-1.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.3.9-1.el7.noarch
ovirt-engine-extensions-api-impl-4.3.3.6-1.el7.noarch
ovirt-engine-metrics-1.3.0.2-1.el7.noarch
ovirt-engine-restapi-4.3.3.6-1.el7.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-engine-setup-4.3.3.6-1.el7.noarch
ovirt-engine-setup-base-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-cinderlib-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.3.3.6-1.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.3.3.6-1.el7.noarch
ovirt-engine-tools-4.3.3.6-1.el7.noarch
ovirt-engine-tools-backup-4.3.3.6-1.el7.noarch
ovirt-engine-ui-extensions-1.0.4-1.el7.noarch
ovirt-engine-vmconsole-proxy-helper-4.3.3.6-1.el7.noarch
ovirt-engine-webadmin-portal-4.3.3.6-1.el7.noarch
ovirt-engine-websocket-proxy-4.3.3.6-1.el7.noarch
ovirt-engine-wildfly-15.0.1-1.el7.x86_64
ovirt-engine-wildfly-overlay-15.0.1-1.el7.noarch
ovirt-guest-agent-common-1.0.16-1.el7.noarch
ovirt-guest-tools-iso-4.3-2.el7.noarch
ovirt-host-deploy-common-1.8.0-1.el7.noarch
ovirt-host-deploy-java-1.8.0-1.el7.noarch
ovirt-imageio-common-1.5.1-0.el7.x86_64
ovirt-imageio-proxy-1.5.1-0.el7.noarch
ovirt-imageio-proxy-setup-1.5.1-0.el7.noarch
ovirt-iso-uploader-4.3.1-1.el7.noarch
ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch
ovirt-provider-ovn-1.2.20-1.el7.noarch
ovirt-release43-4.3.3.1-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
ovirt-vmconsole-proxy-1.0.7-2.el7.noarch
ovirt-web-ui-1.5.2-1.el7.noarch
python2-ovirt-engine-lib-4.3.3.6-1.el7.noarch
python2-ovirt-host-deploy-1.8.0-1.el7.noarch
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
python-ovirt-engine-sdk4-4.3.1-2.el7.x86_64

Best Regards,
Strahil Nikolov

On Monday, April 29, 2019 at 8:45:57 PM GMT-4, Oliver Riesener wrote:
 
 Hi Strahil,
sorry can’t reproduce it on NFS SD.
- UI and Disk usage looks ok, Thin Provision for Thin Provision created Disks. 
Sparse File with (0 Blocks)
Second:
UI and Disk usage looks ok also for Preallocated. Preallocated File with 
(2097152 Blocks)
Regards
Oliver

[root@ovn-elem images]# stat 620b4bc0-3e46-4abc-b995-41f34ea84280/23bf0bea-c1ca-43fd-b9c3-bf35d9cfcd0c
  Datei: „620b4bc0-3e46-4abc-b995-41f34ea84280/23bf0bea-c1ca-43fd-b9c3-bf35d9cfcd0c“
  Größe: 5368709120  Blöcke: 0  EA Block: 4096  reguläre Datei
Gerät: fd12h/64786d  Inode: 12884902045  Verknüpfungen: 1
Zugriff: (0660/-rw-rw)  Uid: (36/vdsm)  Gid: (36/kvm)
Kontext: system_u:object_r:unlabeled_t:s0
Zugriff    : 2019-04-30 02:23:36.170064398 +0200
Modifiziert: 2019-04-30 02:20:48.082782687 +0200
Geändert   : 2019-04-30 02:20:48.083782558 +0200
 Geburt    : -
[root@ovn-elem images]#

[ovirt-users] Re: ovirt 4.3.3 cannot create VM in non-default cluster

2019-04-28 Thread Strahil Nikolov
It seems that no matter which cluster is selected, the UI uses only the "Default" one.
I'm attaching a screenshot.

Best Regards,
Strahil Nikolov


>Hi All,
>
>I'm having an issue to create a VM in my second cluster called "Intel" which 
>>consists of only 1 Host -> ovirt3 which plays the role of a gluster arbiter 
>in my >hyperconverged setup.
>
>When I try to create the  VM (via UI) I receive:
 >"Cannot add VM. CPU Profile doesn't match provided Cluster."
>
>When I try to select the host where I can put the VM , I see only ovirt1 or 
>>ovirt2 which are part of the 'Default' Cluster .
>
>Do we have an opened bug  for that ?
>
>Note: A workaround is to create the VM in the Default cluster and later edit 
>it >to match the needed Cluster.
>
>Best Regards,
>Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UVACXXWFJB6YO7FWKJL2CUAD5GSQET3E/


[ovirt-users] ovirt 4.3.3 cannot create VM in non-default cluster

2019-04-28 Thread Strahil Nikolov
Hi All,
I'm having an issue creating a VM in my second cluster called "Intel", which consists of only 1 host -> ovirt3, which plays the role of a gluster arbiter in my hyperconverged setup.
When I try to create the VM (via the UI) I receive: "Cannot add VM. CPU Profile doesn't match provided Cluster."

When I try to select the host where I can put the VM, I see only ovirt1 or ovirt2, which are part of the 'Default' Cluster.
Do we have an open bug for that?
Note: A workaround is to create the VM in the Default cluster and later edit it to match the needed Cluster.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7QT3DB5P6QIFGERLHGS4PGAT2DAHIM2P/


[ovirt-users] Re: HE deployment failing

2019-07-05 Thread Strahil Nikolov
Did you expand all your Gluster bricks to have at least 61GiB (the arbiter brick does not need that much)?
A simple "df -h /gluster_bricks/engine/engine" should show the available space on your brick.
Best Regards,
Strahil Nikolov
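A hedged way to check this on every node at once (the host names and the volume name 'engine' are assumptions based on a typical hyperconverged layout, adjust to yours):

for h in host1 host2 host3; do ssh root@"$h" 'df -h /gluster_bricks/engine/engine'; done
# or let gluster report free space per brick:
gluster volume status engine detail | grep -E 'Brick|Free'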

On Friday, July 5, 2019 at 10:12:01 AM GMT-4, Parth Dhanjal wrote:
 
 Hey!

I'm trying to deploy a 3 node cluster with gluster storage.
After the gluster deployment is completed successfully, the creation of storage 
domain fails during HE deployment giving the error:

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error: the 
target storage domain contains only 46.0GiB of available space while a minimum 
of 61.0GiB is required If you wish to use the current target storage domain by 
extending it, make sure it contains nothing before adding it."}
I have tried to increase the disk size (provided in the storage tab) to 90GiB, but the deployment still fails. A 50GiB storage domain is created by default even if some other size is provided.

Has anyone faced a similar issue?
Regards
Parth Dhanjal
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W2CM74EU6KPPJ2NL3HXBTYPHDO7BMZB6/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5CT6Q4QLGYZQYMPG3NHTQ7MA4URW7YJG/

