[ovirt-users] Re: Issue upgrading 4.4 to 4.5 Gluster HCG

2022-04-26 Thread Alessandro De Salvo

Hi,

the XML error with gluster is the same one I reported, together with a
possible fix in vdsm, in another thread.


The following fix worked for me, i.e. changing the following line in
/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py:


429c429 (diff of the patched file against the original: the "<" line is the
fix, the ">" line is the stock vdsm code)
< if (el.find('stripeCount')): value['stripeCount'] = el.find('stripeCount').text

---
> value['stripeCount'] = el.find('stripeCount').text

In this way, after restarting vdsmd and supervdsmd, I was able to
connect to gluster 10 volumes. I can file a bug if someone can please
point me to where to file it :-)
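A note of caution on the guard as written: with Python's xml.etree, an Element that has no child elements tests as false, so "if (el.find('stripeCount')):" can skip the tag even when it is present; comparing the result against None is the robust form. A minimal sketch of the difference, using illustrative XML snippets rather than gluster's full cliOutput:

```python
import xml.etree.ElementTree as ET

def read_stripe_count(volume_el):
    """Return the stripeCount text, or None when the tag is absent."""
    node = volume_el.find('stripeCount')
    # 'if node:' would also be False for a present-but-childless element,
    # so compare against None explicitly.
    return node.text if node is not None else None

v8 = ET.fromstring('<volume><stripeCount>1</stripeCount></volume>')
v10 = ET.fromstring('<volume><name>vm-01</name></volume>')  # gluster 10: no tag

print(read_stripe_count(v8))   # 1
print(read_stripe_count(v10))  # None
```

The "is not None" form would also be the safer variant to propose upstream.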


Cheers,


    Alessandro


On 26/04/22 10:55, Sandro Bonazzola wrote:

@Gobinda Das  can you please have a look?

On Tue, 26 Apr 2022 at 06:47, Abe E wrote:


Hey All,

I am having an issue upgrading from 4.4 to 4.5.
My setup
3 Node Gluster (Cluster 1) + 3 Node Cluster (Cluster 2)

If I recall correctly, this is the process I followed last week:

On all my Nodes:
dnf install -y centos-release-ovirt45 --enablerepo=extras

On Ovirt Engine:
dnf install -y centos-release-ovirt45
dnf update -y --nobest
engine-setup

Once the engine was upgraded successfully I ran the upgrade from
the GUI on the Cluster 2 nodes one by one, although when they came
back they complained of "Host failed to attach one of the Storage
Domains attached to it." The domains in question are "hosted_storage"
and "data" (gluster).

I thought maybe it was due to the fact that 4.5 brings an update to
the glusterfs version, so I decided to upgrade Node 3 in my
Gluster Cluster, and it booted to emergency mode after the install
"succeeded".

I feel like I did something wrong, aside from my bravery in
upgrading so much before realizing something's not right.

My VDSM logs from one of the nodes that fails to connect to
storage (FYI I have 2 networks, one for mgmt and one for storage,
both of which are up):

[root@ovirt-4 ~]# tail -f /var/log/vdsm/vdsm.log
2022-04-25 22:41:31,584-0600 INFO  (jsonrpc/3) [vdsm.api] FINISH
repoStats return={} from=:::172.17.117.80,38712,
task_id=8370855e-dea6-4168-870a-d6235d9044e9 (api:54)
2022-04-25 22:41:31,584-0600 INFO  (jsonrpc/3) [vdsm.api] START
multipath_health() from=:::172.17.117.80,38712,
task_id=14eb199a-7fbf-4638-a6bf-a384dfbb9d2c (api:48)
2022-04-25 22:41:31,584-0600 INFO  (jsonrpc/3) [vdsm.api] FINISH
multipath_health return={} from=:::172.17.117.80,38712,
task_id=14eb199a-7fbf-4638-a6bf-a384dfbb9d2c (api:54)
2022-04-25 22:41:31,602-0600 INFO  (periodic/1) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=08a5c00b-1f66-493f-a408-d4006ddaa959 (api:48)
2022-04-25 22:41:31,603-0600 INFO  (periodic/1) [vdsm.api] FINISH
repoStats return={} from=internal,
task_id=08a5c00b-1f66-493f-a408-d4006ddaa959 (api:54)
2022-04-25 22:41:31,606-0600 INFO  (jsonrpc/3) [api.host] FINISH
getStats return={'status': {'code': 0, 'message': 'Done'}, 'info':
(suppressed)} from=:::172.17.117.80,38712 (api:54)
2022-04-25 22:41:35,393-0600 INFO  (jsonrpc/5) [api.host] START
getAllVmStats() from=:::172.17.117.80,38712 (api:48)
2022-04-25 22:41:35,393-0600 INFO  (jsonrpc/5) [api.host] FINISH
getAllVmStats return={'status': {'code': 0, 'message': 'Done'},
'statsList': (suppressed)} from=:::172.17.117.80,38712 (api:54)
2022-04-25 22:41:39,366-0600 INFO  (jsonrpc/2) [api.host] START
getAllVmStats() from=::1,53634 (api:48)
2022-04-25 22:41:39,366-0600 INFO  (jsonrpc/2) [api.host] FINISH
getAllVmStats return={'status': {'code': 0, 'message': 'Done'},
'statsList': (suppressed)} from=::1,53634 (api:54)
2022-04-25 22:41:46,530-0600 INFO  (jsonrpc/1) [api.host] START
getStats() from=:::172.17.117.80,38712 (api:48)
2022-04-25 22:41:46,568-0600 INFO  (jsonrpc/1) [vdsm.api] START
repoStats(domains=()) from=:::172.17.117.80,38712,
task_id=30404767-9761-4f8c-884a-5561dd0d82fe (api:48)
2022-04-25 22:41:46,568-0600 INFO  (jsonrpc/1) [vdsm.api] FINISH
repoStats return={} from=:::172.17.117.80,38712,
task_id=30404767-9761-4f8c-884a-5561dd0d82fe (api:54)
2022-04-25 22:41:46,569-0600 INFO  (jsonrpc/1) [vdsm.api] START
multipath_health() from=:::172.17.117.80,38712,
task_id=8dbfa47f-e1b7-408c-a060-8d45012f0b90 (api:48)
2022-04-25 22:41:46,569-0600 INFO  (jsonrpc/1) [vdsm.api] FINISH
multipath_health return={} from=:::172.17.117.80,38712,
task_id=8dbfa47f-e1b7-408c-a060-8d45012f0b90 (api:54)
2022-04-25 22:41:46,574-0600 INFO  (jsonrpc/1) [api.host] FINISH
getStats return={'status': {'code': 0, 'message': 'Done'}, 'info':
(suppressed)} from=:::172.17.117.80,38712 (api:54)
2022-04-25 22:41:46,651-0600 INFO  (periodic/0) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=92c69020-d0b1-4813-8610-3f3e1892c20b (api:48)

[ovirt-users] Re: Installing new self-hosted engine v4.5.0 on gluster 10

2022-04-26 Thread Alessandro De Salvo

Hi,

yes, sure, can you please point me to the github area where to file the bug?

Thanks,


    Alessandro


On 25/04/22 19:45, Strahil Nikolov via Users wrote:

Would you be able to open an issue for that?

@Sandro Bonazzola,
as far as I remember, we started using GitHub, right?

Best Regards,
Strahil Nikolov

On Mon, Apr 25, 2022 at 14:16, Alessandro De Salvo
 wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/TDWHLUQUPRYNKL4BH2IDDMAR5EPCCKJ3/





[ovirt-users] Re: failed to mount hosted engine gluster storage - how to debug?

2022-04-25 Thread Alessandro De Salvo

Hi,

please try this workaround, replace the following line in 
/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py



value['stripeCount'] = el.find('stripeCount').text


with:

if (el.find('stripeCount')): value['stripeCount'] = 
el.find('stripeCount').text



Then restart vdsmd and supervdsmd and retry. It worked for me, and it
looks like a serious bug for people upgrading to glusterfs 10.


Cheers,


    Alessandro


On 25/04/22 10:58, diego.ercol...@ssis.sm wrote:

I saw your report, in fact; they suggested downgrading jdbc. For completeness,
I also found an error report in vdsm.log while issuing "hosted-engine
--connect-storage", corresponding to what you are noticing. I report the log
excerpt here in case it is useful.
By the way, why is vdsm searching for the hosted-engine storage UUID as an
LVM volume group name?


2022-04-25 10:53:35,506+0200 INFO  (Reactor thread) 
[ProtocolDetector.AcceptorImpl] Accepted connection from ::1:47350 
(protocoldetector:61)
2022-04-25 10:53:35,510+0200 INFO  (Reactor thread) [ProtocolDetector.Detector] 
Detected protocol stomp from ::1:47350 (protocoldetector:125)
2022-04-25 10:53:35,510+0200 INFO  (Reactor thread) [Broker.StompAdapter] 
Processing CONNECT request (stompserver:95)
2022-04-25 10:53:35,512+0200 INFO  (JsonRpc (StompReactor)) 
[Broker.StompAdapter] Subscribe command received (stompserver:124)
2022-04-25 10:53:35,518+0200 INFO  (jsonrpc/3) [vdsm.api] START 
getStorageDomainInfo(sdUUID='7b8f1cc9-e3de-401f-b97f-8c281ca30482') 
from=::1,47350, task_id=1803abb2-9e9a-4292-8349-678c793f7264 (api:48)
2022-04-25 10:53:35,518+0200 INFO  (jsonrpc/3) [storage.storagedomaincache] 
Refreshing storage domain cache (resize=True) (sdc:80)
2022-04-25 10:53:35,518+0200 INFO  (jsonrpc/3) [storage.iscsi] Scanning iSCSI 
devices (iscsi:462)
2022-04-25 10:53:35,532+0200 INFO  (jsonrpc/3) [storage.iscsi] Scanning iSCSI 
devices: 0.01 seconds (utils:390)
2022-04-25 10:53:35,532+0200 INFO  (jsonrpc/3) [storage.hba] Scanning FC 
devices (hba:59)
2022-04-25 10:53:35,565+0200 INFO  (jsonrpc/3) [storage.hba] Scanning FC 
devices: 0.03 seconds (utils:390)
2022-04-25 10:53:35,565+0200 INFO  (jsonrpc/3) [storage.multipath] Waiting 
until multipathd is ready (multipath:112)
2022-04-25 10:53:37,556+0200 INFO  (periodic/3) [vdsm.api] START 
repoStats(domains=()) from=internal, 
task_id=f4266860-9162-417e-85a5-087f9cb5cd51 (api:48)
2022-04-25 10:53:37,556+0200 INFO  (periodic/3) [vdsm.api] FINISH repoStats 
return={} from=internal, task_id=f4266860-9162-417e-85a5-087f9cb5cd51 (api:54)
2022-04-25 10:53:37,558+0200 WARN  (periodic/3) [root] Failed to retrieve 
Hosted Engine HA info, is Hosted Engine setup finished? (api:168)
2022-04-25 10:53:37,584+0200 INFO  (jsonrpc/3) [storage.multipath] Waited 2.02 
seconds for multipathd (tries=2, ready=2) (multipath:139)
2022-04-25 10:53:37,584+0200 INFO  (jsonrpc/3) [storage.multipath] Resizing 
multipath devices (multipath:220)
2022-04-25 10:53:37,586+0200 INFO  (jsonrpc/3) [storage.multipath] Resizing 
multipath devices: 0.00 seconds (utils:390)
2022-04-25 10:53:37,586+0200 INFO  (jsonrpc/3) [storage.storagedomaincache] 
Refreshing storage domain cache: 2.07 seconds (utils:390)
2022-04-25 10:53:37,586+0200 INFO  (jsonrpc/3) [storage.storagedomaincache] 
Looking up domain 7b8f1cc9-e3de-401f-b97f-8c281ca30482 (sdc:171)
2022-04-25 10:53:37,643+0200 WARN  (jsonrpc/3) [storage.lvm] All 1 tries have failed: LVM command 
failed: 'cmd=[\'/sbin/lvm\', \'vgs\', \'--devices\', 
\'/dev/mapper/Samsung_SSD_870_EVO_4TB_S6BCNG0R300064E,/dev/mapper/Samsung_SSD_870_EVO_4TB_S6BCNG0R300066N,/dev/mapper/Samsung_SSD_870_EVO_4TB_S6BCNG0R300067L,/dev/mapper/Samsung_SSD_870_EVO_4TB_S6BCNG0R300230B\',
 \'--config\', \'devices {  preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1  
write_cache_state=0  disable_after_error_count=3hints="none"  
obtain_device_list_from_udev=0 } global {  prioritise_write_locks=1  wait_for_locks=1  use_lvmpolld=1 } 
backup {  retain_min=50  retain_days=0 }\', \'--noheadings\', \'--units\', \'b\', \'--nosuffix\', 
\'--separator\', \'|\', \'--ignoreskippedcluster\', \'-o\', 
\'uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name\',
 \'7b8f1cc9-e3de-401f-b97f-8c281ca30482\'] rc=5 out=[] err=[\'  Volume group "7b8f1cc9-e3de-401f
  -b97f-8c281ca30482" not found\', \'  Cannot process volume group 
7b8f1cc9-e3de-401f-b97f-8c281ca30482\']' (lvm:482)
2022-04-25 10:53:37,643+0200 INFO  (jsonrpc/3) [storage.storagedomaincache] 
Looking up domain 7b8f1cc9-e3de-401f-b97f-8c281ca30482: 0.06 seconds (utils:390)
2022-04-25 10:53:37,643+0200 INFO  (jsonrpc/3) [vdsm.api] FINISH 
getStorageDomainInfo error=Storage domain does not exist: 
('7b8f1cc9-e3de-401f-b97f-8c281ca30482',) from=::1,47350, 
task_id=1803abb2-9e9a-4292-8349-678c793f7264 (api:52)
2022-04-25 10:53:37,643+0200 ERROR (jsonrpc/3) [storage.taskmanager.task] 
(Task='1803abb2-9e9a-4292-8349-678c793f7264') Unexpected 

[ovirt-users] Re: Installing new self-hosted engine v4.5.0 on gluster 10

2022-04-25 Thread Alessandro De Salvo

Hi,

I think I've found the root of the problem: it is a bug in vdsm. Gluster
10 produces its XML volume description without the stripeCount tag, while
vdsm expects it to be present.
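The failure mode is easy to reproduce outside vdsm: find() returns None for a missing tag, and dereferencing .text on None raises AttributeError. A minimal sketch with an illustrative XML snippet (not gluster's actual cliOutput schema):

```python
import xml.etree.ElementTree as ET

# gluster 8 emitted a <stripeCount> tag in its cli XML; gluster 10 dropped it
v10_volume = ET.fromstring('<volume><name>vm-01</name></volume>')

# Mirrors the unguarded vdsm line: find() returns None for a missing tag,
# so dereferencing .text raises AttributeError.
try:
    stripe = v10_volume.find('stripeCount').text
except AttributeError:
    print('vdsm-style parse fails on gluster 10 XML')
```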


I've tried to fix it by simply adding a check in
/usr/lib/python3.6/site-packages/vdsm/gluster/cli.py:


429c429 (diff of the patched file against the original: the "<" line is the
fix, the ">" line is the stock vdsm code)
< if (el.find('stripeCount')): value['stripeCount'] = el.find('stripeCount').text

---
> value['stripeCount'] = el.find('stripeCount').text

In this way, after restarting vdsmd and supervdsmd, I'm able to connect 
to gluster 10 volumes.


I guess this should be fixed in a more proper way upstream.

Cheers,


    Alessandro


On 25/04/22 09:41, Alessandro De Salvo wrote:

Hi,
thanks, unfortunately I’ve done it already, otherwise it would not 
even start the engine. This error appears after the engine is up with 
the downgraded postgresql-jdbc.

Cheers,

   Alessandro

On 25 Apr 2022, at 06:11, Strahil Nikolov wrote:


Maybe it's worth trying to downgrade postgresql-jdbc and trying again.

Best Regards,
Strahil Nikolov

On Mon, Apr 25, 2022 at 4:52, Alessandro De Salvo
 wrote:
To complete the diagnosis, in vdsm.log I see the following error:


vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=()
err=[b'...']

(The gluster "volume info" XML in the err output was mangled by the list
archive, which stripped the markup. The recoverable fields: volume vm-01,
id d77d9a24-5f30-4acb-962c-559e63917229, status Started, type Replicate,
replica 3 across host1:/gluster/vm/01/data, host2:/gluster/vm/01/data and
host3:/gluster/vm/01/data, followed by the volume options, including
storage.owner-uid 36 and storage.owner-gid 36. Notably, there is no
stripeCount tag.)


Thanks,


    Alessandro


On 25/04/22 01:02, Alessandro De Salvo wrote:
> Hi,
>
> I'm trying to install a new self-hosted engine 4.5.0 on an
upgraded
> gluster v10.1, but the deployment fails at the domain activation
> stage, with this error:
>
>
> [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage
> domain]
> [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed".
Fault
> detail is "[]". HTTP response code is 400.
>
>
> Looking at the server.log in the engine I see the following error:
>
>
> 2022-04-25 00:55:58,266+02 ERROR
> [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1)
> RESTEASY002010: Failed to execute:
> javax.ws.rs.WebApplicationException: HTTP 404 Not Found
>     at
>

org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BaseBackendResource.handleError(BaseBackendResource.java:236)
>     at
>

org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendResource.getEntity(BackendResource.java:119)
>     at
>

org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendResource.getEntity(BackendResource.java:99)
>     at
>

org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.AbstractBackendSubResource.performGet(AbstractBackendSubResource.java:34)
>     at
>

org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.AbstractBackendSubResource.performGet(AbstractBackendSubResource.java:30)
>     at
>

org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendAttachedStorageDomainReso

[ovirt-users] Re: failed to mount hosted engine gluster storage - how to debug?

2022-04-25 Thread Alessandro De Salvo
Hi,
I think it may be a problem with vdsm and gluster 10, I’ve reported a similar 
issue in another thread. Vdsm is throwing an exception when parsing the XML 
from the gluster volume info when using the latest gluster version 10. This is 
particularly bad when the gluster server updates have been completed by moving 
the op-version, as it's basically irreversible and it's not even possible to 
easily downgrade gluster. Staying on, or downgrading to, gluster 8 does work,
in fact.
I’m digging the code of vdsm to see if I can find the root cause.
Cheers,

   Alessandro

> On 25 Apr 2022, at 09:24, diego.ercol...@ssis.sm wrote:
> 
> Hello, I have an issue probably related to my particular setup, but 
> I think some checks are missing.
> Here is the story.
> I have a cluster of two nodes on 4.4.10.3 with an upgraded kernel, as the CPU 
> (Ryzen 5) suffers from an incompatibility issue with the kernel provided by 
> the 4.4.10.x series.
> On each node there are three glusterfs "partitions" in replica mode, one for 
> the hosted_engine, the other two are for user usage.
> The third node was an old i3 workstation only used to provide the arbiter 
> partition to the glusterfs cluster.
> I installed a new server (Ryzen processor) with 4.5.0, successfully installed 
> glusterfs 10.1, and inserted arbiter bricks on glusterfs 10.1 (while the 
> replica bricks are on 8.6) after removing the bricks provided by the old i3.
> I successfully imported the new node in the ovirt engine (after updating the 
> engine to 4.5)
> The problem is that ovirt-ha-broker doesn't start, complaining that it is not 
> possible to connect to the storage (the hosted_engine storage, I suppose), so 
> I did some digging that I'm going to show here:
> 
> 
> 1. The node seem to be correctly configured:
> [root@ovirt-node3 devices]# vdsm-tool validate-config   
> SUCCESS: ssl configured to true. No conflicts
> [root@ovirt-node3 devices]# vdsm-tool configure   
> 
> Checking configuration status...
> 
> libvirt is already configured for vdsm
> SUCCESS: ssl configured to true. No conflicts
> sanlock is configured for vdsm
> Managed volume database is already configured
> lvm is configured for vdsm
> Current revision of multipath.conf detected, preserving
> 
> Running configure...
> 
> Done configuring modules to VDSM.
> [root@ovirt-node3 devices]# vdsm-tool validate-config 
> SUCCESS: ssl configured to true. No conflicts
> 
> 
> 2. the node refuses to mount via hosted-engine (same error in broker.log)
> [root@ovirt-node3 devices]# hosted-engine --connect-storage
> Traceback (most recent call last):
>  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
>"__main__", mod_spec)
>  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
>exec(code, run_globals)
>  File 
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/connect_storage_server.py",
>  line 30, in 
>timeout=ohostedcons.Const.STORAGE_SERVER_TIMEOUT,
>  File 
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", 
> line 312, in connect_storage_server
>sserver.connect_storage_server(timeout=timeout)
>  File 
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_server.py",
>  line 451, in connect_storage_server
>'Connection to storage server failed'
> RuntimeError: Connection to storage server failed
> 
> #
> 3. manually mount of glusterfs work correctly
> [root@ovirt-node3 devices]# grep storage 
> /etc/ovirt-hosted-engine/hosted-engine.conf   
> storage=ovirt-node2.ovirt:/gveng
> # The following are used only for iSCSI storage
> [root@ovirt-node3 devices]# 
> [root@ovirt-node3 devices]# mount -t glusterfs ovirt-node2.ovirt:/gveng 
> /mnt/tmp/
> [root@ovirt-node3 devices]# ls -l /mnt/tmp
> total 0
> drwxr-xr-x. 6 vdsm kvm 64 Dec 15 19:04 7b8f1cc9-e3de-401f-b97f-8c281ca30482
> 
> 
> What else should I check? Thank you, and sorry for the long message
> Diego


[ovirt-users] Re: Error 500 on Hosted Engine admin portal!!!

2022-04-25 Thread Alessandro De Salvo
Hi,
I’m not sure if it works with the web installer, but if you deploy with CLI you 
can use the following to ask the installer to pause before running engine-setup:

hosted-engine --deploy --ansible-extra-vars=he_pause_before_engine_setup=true

This gives you the time to ssh and exclude the offending package before the 
installer attempts to upgrade any package.
Cheers,

   Alessandro

> On 25 Apr 2022, at 09:27, lpat...@siriusag.com wrote:
> 
> This is the cause, yes Martin, but it won't help, since the 
> installer/deployment does an update while running, so there is "almost" no 
> way to bypass that.
> 
> WORKAROUND:
> preparation: 
> - ssh login to the node, prepare ssh root@
> - copy paste buffer ready: echo exclude=postgresql-jdbc >> /etc/dnf/dnf.conf
> 
> when the webinstaller is running, the moment when the ip of the hosted engine 
> vm is displayed (something like 192.168.222.208) immediately ssh to that vm 
> from the host
> and execute the copy paste buffer before the dnf update takes place
> 
> Ugly workaround; maybe there's a better option, but hey, it describes 
> the problem and it works :)
> 
> Cheers
> Luis


[ovirt-users] Re: Installing new self-hosted engine v4.5.0 on gluster 10

2022-04-25 Thread Alessandro De Salvo
Hi,
thanks, unfortunately I’ve done it already, otherwise it would not even start 
the engine. This error appears after the engine is up with the downgraded 
postgresql-jdbc.
Cheers,

   Alessandro

> On 25 Apr 2022, at 06:11, Strahil Nikolov wrote:
> 
> Maybe it's worth trying to downgrade postgresql-jdbc and trying again.
> 
> Best Regards,
> Strahil Nikolov
> 
> On Mon, Apr 25, 2022 at 4:52, Alessandro De Salvo
>  wrote:
> To complete the diagnosis, in vdsm.log I see the following error:
> 
> 
> vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() 
> err=[b'...']
> 
> (Quoted gluster "volume info" XML, mangled by the list archive: volume 
> vm-01, id d77d9a24-5f30-4acb-962c-559e63917229, status Started, type 
> Replicate, replica 3 across host1:/gluster/vm/01/data, 
> host2:/gluster/vm/01/data and host3:/gluster/vm/01/data, plus the volume 
> options, with no stripeCount tag.)
> 
> 
> Thanks,
> 
> 
> Alessandro
> 
> 
> On 25/04/22 01:02, Alessandro De Salvo wrote:
> > Hi,
> >
> > I'm trying to install a new self-hosted engine 4.5.0 on an upgraded 
> > gluster v10.1, but the deployment fails at the domain activation 
> > stage, with this error:
> >
> >
> > [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage 
> > domain]
> > [ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault 
> > detail is "[]". HTTP response code is 400.
> >
> >
> > Looking at the server.log in the engine I see the following error:
> >
> >
> > 2022-04-25 00:55:58,266+02 ERROR 
> > [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1) 
> > RESTEASY002010: Failed to execute: 
> > javax.ws.rs.WebApplicationException: HTTP 404 Not Found
> > at 
> > org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BaseBackendResource.handleError(BaseBackendResource.java:236)
> > at 
> > org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendResource.getEntity(BackendResource.java:119)
> > at 
> > org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendResource.getEntity(BackendResource.java:99)
> > at 
> > org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.AbstractBackendSubResource.performGet(AbstractBackendSubResource.java:34)
> > at 
> > org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.AbstractBackendSubResource.performGet(AbstractBackendSubResource.java:30)
> > at 
> > org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendAttachedStorageDomainResource.get(BackendAttachedStorageDomainResource.java:35)
> > at 
> > org.ovirt.engine.api.restapi-definition//org.ovirt.engine.api.resource.AttachedStorageDomainResource.doGet(AttachedStorageDomainResource.java:81)
> > at 
> > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 

[ovirt-users] Re: Installing new self-hosted engine v4.5.0 on gluster 10

2022-04-24 Thread Alessandro De Salvo

To complete the diagnosis, in vdsm.log I see the following error:


vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() 
err=[b'...']

(The gluster "volume info" XML in the err output was mangled by the list
archive, which stripped the markup. The recoverable fields: volume vm-01,
id d77d9a24-5f30-4acb-962c-559e63917229, status Started, type Replicate,
replica 3 across host1:/gluster/vm/01/data, host2:/gluster/vm/01/data and
host3:/gluster/vm/01/data, followed by the volume options, including
storage.owner-uid 36 and storage.owner-gid 36. Notably, there is no
stripeCount tag.)



Thanks,


    Alessandro


On 25/04/22 01:02, Alessandro De Salvo wrote:

Hi,

I'm trying to install a new self-hosted engine 4.5.0 on an upgraded 
gluster v10.1, but the deployment fails at the domain activation 
stage, with this error:



[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage 
domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault 
detail is "[]". HTTP response code is 400.



Looking at the server.log in the engine I see the following error:


2022-04-25 00:55:58,266+02 ERROR 
[org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1) 
RESTEASY002010: Failed to execute: 
javax.ws.rs.WebApplicationException: HTTP 404 Not Found
    at 
org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BaseBackendResource.handleError(BaseBackendResource.java:236)
    at 
org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendResource.getEntity(BackendResource.java:119)
    at 
org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendResource.getEntity(BackendResource.java:99)
    at 
org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.AbstractBackendSubResource.performGet(AbstractBackendSubResource.java:34)
    at 
org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.AbstractBackendSubResource.performGet(AbstractBackendSubResource.java:30)
    at 
org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendAttachedStorageDomainResource.get(BackendAttachedStorageDomainResource.java:35)
    at 
org.ovirt.engine.api.restapi-definition//org.ovirt.engine.api.resource.AttachedStorageDomainResource.doGet(AttachedStorageDomainResource.java:81)
    at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
    at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

    at java.base/java.lang.reflect.Method.invoke(Method.java:566)


The gluster volume itself is working fine and has the storage uid/gid 
set to 36 as it should be, and if I use a server with gluster 8 the 
installation works, while it fails with gluster 10 servers.


Any help is appreciated, thanks,


    Alessandro

[ovirt-users] Installing new self-hosted engine v4.5.0 on gluster 10

2022-04-24 Thread Alessandro De Salvo

Hi,

I'm trying to install a new self-hosted engine 4.5.0 on an upgraded 
gluster v10.1, but the deployment fails at the domain activation stage, 
with this error:



[ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault 
detail is "[]". HTTP response code is 400.



Looking at the server.log in the engine I see the following error:


2022-04-25 00:55:58,266+02 ERROR [org.jboss.resteasy.resteasy_jaxrs.i18n] (default task-1) RESTEASY002010: Failed to execute: javax.ws.rs.WebApplicationException: HTTP 404 Not Found
    at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BaseBackendResource.handleError(BaseBackendResource.java:236)
    at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendResource.getEntity(BackendResource.java:119)
    at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendResource.getEntity(BackendResource.java:99)
    at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.AbstractBackendSubResource.performGet(AbstractBackendSubResource.java:34)
    at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.AbstractBackendSubResource.performGet(AbstractBackendSubResource.java:30)
    at org.ovirt.engine.api.restapi-jaxrs//org.ovirt.engine.api.restapi.resource.BackendAttachedStorageDomainResource.get(BackendAttachedStorageDomainResource.java:35)
    at org.ovirt.engine.api.restapi-definition//org.ovirt.engine.api.resource.AttachedStorageDomainResource.doGet(AttachedStorageDomainResource.java:81)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)


The gluster volume itself is working fine and has the storage uid/gid 
set to 36, as it should be. The installation works if I use a server 
with gluster 8, while it fails with gluster 10 servers.


Any help is appreciated, thanks,


    Alessandro
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BZG3UARP2NFC4PZPMD743JSTPTEOWZMK/


[ovirt-users] Re: Best Openstack version to integrate with oVirt 4.2.7

2018-12-04 Thread Alessandro De Salvo

Hi,

we're also extensively using ceph via cinder on docker (Kolla project), 
but we're stuck on Pike because of the missing keystone v3 support.


While cinderlib is desirable and it's a good solution for simple uses, 
adding support for keystone v3 and keeping the native cinder interface 
would also be good, for cases where you already have an openstack 
infrastructure available and you want to manage a storage system from a 
single place. The other reason to keep the current solution, at least 
for now, is what Matthias is mentioning: the transition phases.


How difficult would it be to add keystone v3 support on the storage side 
in ovirt, given that you should already have the needed libs, since you 
use them for the networking?


Thanks,


   Alessandro


On 15/11/18 12:58, Matthias Leopold wrote:

Hi,

we are extensively using Ceph storage through the present 
OpenStack/Cinder integration in oVirt 4.2, which works for us. 
The OpenStack version in use is Pike.


I already heard about the plan to move to cinderlib which sounds 
promising. I very much hope there will be a migration scenario for 
users of "full" Openstack/Cinder installations when upgrading to oVirt 
4.3.


thanks
Matthias

Am 11.11.18 um 15:49 schrieb Nir Soffer:
On Sat, Nov 10, 2018 at 6:52 PM Gianluca Cecchi 
<gianluca.cec...@gmail.com> wrote:


    Hello,
    do you think it is ok to use Rocky version of Openstack to integrate
    its services with oVirt 4.2.7 on CentOS 7?
    I see on https://repos.fedorapeople.org/repos/openstack/ that, if
    Rocky is too new, between the older releases available there are,
    from newer to older:
    Queens
    Pike
    Ocata
    Newton


Nobody working on oVirt has been testing any release of OpenStack in 
recent years.


The Cinder/Ceph support was released as tech preview in 3.6, and no work 
has been done on it since then; I think it will be deprecated soon.

For 4.3 we are working on a different direction, using Cinderlib
https://github.com/Akrog/cinderlib

This is a way to use Cinder drivers without Openstack installation.
The same library is used to provide Cinder based storage in Kubernetes.
https://github.com/Akrog/ember-csi

You can find an early draft of this feature here. Note that it is 
expected to be updated in the coming weeks, but it can give you some 
idea of what we are working on.
https://github.com/oVirt/ovirt-site/blob/f88f38ebb9afff656ab68a2d60c2b3ae88c21860/source/develop/release-management/features/storage/cinderlib-integration.html.md 



This will be tested with some version of Cinder drivers. I guess we 
will have

more info about it during 4.3 development.

    At the moment I have two separate lab environments:
    oVirt with 4.2.7
    Openstack with Rocky (single host with packstack allinone)

    just trying first integration steps with these versions, it seems
    I'm not able to communicate with glance, because I get in engine.log
    2018-11-10 17:32:58,386+01 ERROR
[org.ovirt.engine.core.bll.provider.storage.AbstractOpenStackStorageProviderProxy]
    (default task-51) [e2fccee7-1bb2-400f-b8d3-b87b679117d1] Not Found
    (OpenStack response error code: 404)


I think Glance support should work. Elad, which version of Glance was
tested for 4.2?

Regarding which Openstack version can work best with oVirt, maybe
Openstack guys I added can give a better answer.

Nir

    Nothing in glance logs on openstack, apparently.
    In my test I'm using
    http://xxx.xxx.xxx.xxx:9292 as provider url
    checked the authentication check box and
    glance user with its password
    35357 as the port and services as the tenant

    a telnet on port 9292 of openstack server from engine to 
openstack is ok


    similar with cinder I get:
    2018-11-10 17:45:42,226+01 ERROR
[org.ovirt.engine.core.bll.provider.storage.AbstractOpenStackStorageProviderProxy]
    (default task-50) [32a31aa7-fe3f-460c-a8b9-cc9b277deab7] Not Found
    (OpenStack response error code: 404)

    So before digging more I would like to be certain which one is
    currently the best combination, possibly keeping the oVirt
    version fixed at 4.2.7.

    Thanks,
    Gianluca
    ___
    Users mailing list -- users@ovirt.org 
    To unsubscribe send an email to users-le...@ovirt.org
    
    Privacy Statement: https://www.ovirt.org/site/privacy-policy/
    oVirt Code of Conduct:
    https://www.ovirt.org/community/about/community-guidelines/
    List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C46XG5YF3JTAT7BF72RXND4EHD4ZB5GC/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: Upgrade from 4.1.9 to 4.2.3 fails to upgrade postgresql

2018-05-17 Thread Alessandro De Salvo
Hi Ian,
I had the very same problem, but the upgrade was complaining about a
different locale.
Try to add the following line at the beginning of the
file /opt/rh/rh-postgresql95/root/usr/bin/postgresql-setup, after the
initial comments and before the code:


export PGSETUP_INITDB_OPTIONS="--lc-collate=en_GB.UTF-8"

Once done, please try running engine-setup, and thus the upgrade, again.
You might need to add more options to PGSETUP_INITDB_OPTIONS, in case
the upgrade procedure still complains.
Cheers,

Alessandro


On Wed, 2018-05-16 at 22:32 +, Ian Fraser wrote:
> Hi, 
> 
> Could someone please provide advice on a production oVirt instance
> that fails to upgrade to 4.2.3?
> 
>  
> 
> Engine and hosts are all running CentOS 7
> 
>  
> 
> As per the upgrade instructions in the release notes I run:
> 
> # yum install
> http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
> 
> # yum update "ovirt-*-setup*"
> 
> # engine-setup
> 
>  
> 
> Then I answer the questions and it fails with:
> 
> …
> 
> [ INFO  ] Upgrading PostgreSQL
> 
> [ ERROR ] Failed to execute stage 'Misc configuration': Command
> '/opt/rh/rh-postgresql95/root/usr/bin/postgresql-setup' failed to
> execute
> 
> [ INFO  ] Yum Performing yum transaction rollback
> 
> [ INFO  ] Rolling back to the previous PostgreSQL instance
> (postgresql).
> 
> [ INFO  ] Stage: Clean up
> 
>   Log file is located
> at /var/log/ovirt-engine/setup/ovirt-engine-setup-20180516231622-3giicb.log
> 
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20180516231735-setup.conf'
> 
> [ INFO  ] Stage: Pre-termination
> 
> [ INFO  ] Stage: Termination
> 
> [ ERROR ] Execution of setup failed
> 
>  
> 
>  
> 
> When I check  /var/lib/pgsql/upgrade_rh-postgresql95-postgresql.log:
> 
>  
> 
> Performing Consistency Checks
> 
> -
> 
> Checking cluster versions   ok
> 
> Checking database user is the install user  ok
> 
> Checking database connection settings   ok
> 
> Checking for prepared transactions  ok
> 
> Checking for reg* system OID user data typesok
> 
> Checking for contrib/isn with bigint-passing mismatch   ok
> 
> Checking for invalid "line" user columnsok
> 
> Creating dump of global objects ok
> 
> Creating dump of database schemas
> 
>   engine
> 
>   ovirt_engine_history
> 
>   postgres
> 
>   template1
> 
> ok
> 
>  
> 
> lc_collate values for database "postgres" do not match:  old
> "en_GB.UTF-8", new "en_US.UTF-8"
> 
> Failure, exiting
> 
>  
> 
>  
> 
> I have tried running the script here:
> https://gist.github.com/turboladen/6790847 and rerunning engine-setup
> but it fails with the same error.
> 
>  
> 
> Can anyone offer any other suggestions?
> 
>  
> 
> Best regards
> 
>  
> 
> Ian Fraser
> 
>  
> 
> Systems Administrator | Agency Sector Management (UK) Limited |
> www.asm.org.uk
> 
> [t] +44 (0)1784 242200 | [f] +44 (0)1784 242012 | [e]
> ian.fra...@asm.org.uk
> 
> Registered in England No. 2053849 | Registered address: Ashford House
> 41-45 Church Road  Ashford  Middx  TW15 2TQ
> 
> Follow us on twitter @asmukltd
> 
>  
> 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


Re: [ovirt-users] Hosted Engine VM not imported

2018-02-26 Thread Alessandro De Salvo

Ciao Simone,

many thanks. So, how are we supposed to use those hooks? Should we just 
create a file 
/usr/share/ovirt-hosted-engine-setup/ansible/hooks/enginevm_before_engine_setup/enginevm_before_engine_setup.yml 
with the instructions to restore? Do you have an example for doing that?


For the moment I think I'll stick to the old procedure by calling 
--noansible, as you suggest.


I think the documentation should be updated anyway, at least to add 
the --he-remove-storage-vm and --he-remove-hosts options, as well as the 
new procedure and the override with --noansible. Also, wouldn't it be 
safer to stick to the old procedure until the new one is fully 
operational? Or at least show a warning to the user, otherwise no one 
will ever be able to restore a db and have it all functional with the 
default options.


Thanks,


    Alessandro


On 26/02/18 18:17, Simone Tiraboschi wrote:



On Sat, Feb 24, 2018 at 2:32 PM, Alessandro De Salvo 
<alessandro.desa...@roma1.infn.it 
<mailto:alessandro.desa...@roma1.infn.it>> wrote:


Hi,

I have just migrated my dev cluster to the latest master,
reinstalling the engine VM and reimporting from a previous backup.
I'm trying with 4.3.0-0.0.master.20180222192611.git01e6ace.el7.centos

I had a few problems:

- the documentation seems to be outdated, and I just find by
searching the archives that it's needed to add the two
(undocumented) options --he-remove-storage-vm --he-remove-hosts

- despite the fact I selected "No" to running the engine-setup
command in the VM (the ovirt appliance), the engine-setup is
executed when running hosted-engine --deploy, and as a result the
procedure does not stop allowing to reload the db backup. The only
way I found was to put the hosted-engine in global maintenance
mode, stop the ovirt-engine, do an engine-cleanup and reload the
db, then it's possible to add the first host in the GUI, but must
be done manually

- after it's all done, I can see the hosted_storage is imported,
but the HostedEngine is not imported, and in the Events I see
messages like this:

VDSM atlas-svc-18 command GetVolumeInfoVDS failed: Image path does
not exist or cannot be accessed/created:

(u'/rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/c815ec3f-6e31-4b08-81be-e515e803edce',)

   the path here is clearly wrong, it should be

/rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/b7bc6468-438c-47e7-b7a4-7ed06b786da0/c815ec3f-6e31-4b08-81be-e515e803edce,
and I see the hosted_engine.conf in the shared storage has it
correctly set as vm_disk_id=b7bc6468-438c-47e7-b7a4-7ed06b786da0.


Any hint on what is not allowing the HostedEngine to be imported?
I didn't find a way to add other hosted engine nodes if the HE VM
is not imported in the cluster, like we were used in the past with
the CLI using hosted-engine --deploy on multiple hosts.


Ciao Alessandro,
with 4.2.1 we introduced a new deployment flow for hosted-engine based 
on ansible.
In this new flow we run a local VM with a running engine, and we use 
that engine to create a storage domain and a VM there.
At the end we shut down the locally running engine and move its 
disk over the disk of the VM created by the engine on the shared 
storage. At this point we no longer need the autoimport process, 
since the engine migrated there already contains the engine VM and its 
storage domain.


We have an RFE, for this new flow, to add a mechanism to inject an 
existing engine backup to be automatically restored before executing 
engine-setup for migration/disaster-recovery scenarios.
Unfortunately it's still not ready, but we have a hook mechanism to 
have hosted-engine-setup execute custom ansible tasks before running 
engine-setup; we have an example 
in /usr/share/ovirt-hosted-engine-setup/ansible/hooks/enginevm_before_engine_setup/enginevm_before_engine_setup.yml.example


Otherwise the old flow is still there; you just have to add 
--noansible and everything should work as in the past.



Thanks for any help,


    Alessandro

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine VM not imported

2018-02-26 Thread Alessandro De Salvo

Hi,

after checking the engine.log I see a bunch of errors like this too:


2018-02-26 03:22:06,806+01 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] HostName = 
atlas-svc-18
2018-02-26 03:22:06,806+01 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] Command 
'GetVolumeInfoVDSCommand(HostName = atlas-svc-18, 
GetVolumeInfoVDSCommandParameters:{hostId='b18c40d8-7932-4b5d-995e-8ebc5ab2e3e2', 
storagePoolId='0001-0001-0001-0001-0056', 
storageDomainId='f02d7d5d-1459-48b8-bf27-4225cdfdce23', 
imageGroupId='c815ec3f-6e31-4b08-81be-e515e803edce', 
imageId='c815ec3f-6e31-4b08-81be-e515e803edce'})' execution failed: 
VDSGenericException: VDSErrorException: Failed to GetVolumeInfoVDS, 
error = Image path does not exist or cannot be accessed/created: 
(u'/rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/c815ec3f-6e31-4b08-81be-e515e803edce',), 
code = 254
2018-02-26 03:22:06,806+01 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] FINISH, 
GetVolumeInfoVDSCommand, log id: 38adaef0
2018-02-26 03:22:06,806+01 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] Failed to get 
the volume information, marking as FAILED
2018-02-26 03:22:06,806+01 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] FINISH, 
GetImageInfoVDSCommand, log id: 3ad29b91
2018-02-26 03:22:06,806+01 WARN 
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand] 
(EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] Validation of 
action 'ImportVm' failed for user SYSTEM. Reasons: 
VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
2018-02-26 03:22:06,807+01 INFO 
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand] 
(EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] Lock freed to 
object 'EngineLock:{exclusiveLocks='[HostedEngine=VM_NAME, 
235b91ce-b6d8-44c6-ac26-791ac3946727=VM]', 
sharedLocks='[235b91ce-b6d8-44c6-ac26-791ac3946727=REMOTE_VM]'}'
2018-02-26 03:22:06,807+01 ERROR 
[org.ovirt.engine.core.bll.HostedEngineImporter] 
(EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] Failed 
importing the Hosted Engine VM



Any help?

Thanks,


      Alessandro


On 24/02/18 14:32, Alessandro De Salvo wrote:

Hi,

I have just migrated my dev cluster to the latest master, reinstalling 
the engine VM and reimporting from a previous backup. I'm trying with 
4.3.0-0.0.master.20180222192611.git01e6ace.el7.centos


I had a few problems:

- the documentation seems to be outdated, and I just find by searching 
the archives that it's needed to add the two (undocumented) options 
--he-remove-storage-vm --he-remove-hosts


- despite the fact I selected "No" to running the engine-setup command 
in the VM (the ovirt appliance), the engine-setup is executed when 
running hosted-engine --deploy, and as a result the procedure does not 
stop allowing to reload the db backup. The only way I found was to put 
the hosted-engine in global maintenance mode, stop the ovirt-engine, 
do an engine-cleanup and reload the db, then it's possible to add the 
first host in the GUI, but must be done manually


- after it's all done, I can see the hosted_storage is imported, but 
the HostedEngine is not imported, and in the Events I see messages 
like this:


VDSM atlas-svc-18 command GetVolumeInfoVDS failed: Image path does not 
exist or cannot be accessed/created: 
(u'/rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/c815ec3f-6e31-4b08-81be-e515e803edce',)


   the path here is clearly wrong, it should be 
/rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/b7bc6468-438c-47e7-b7a4-7ed06b786da0/c815ec3f-6e31-4b08-81be-e515e803edce, 
and I see the hosted_engine.conf in the shared storage has it 
correctly set as vm_disk_id=b7bc6468-438c-47e7-b7a4-7ed06b786da0.



Any hint on what is not allowing the HostedEngine to be imported? I 
didn't find a way to add other hosted engine nodes if the HE VM is not 
imported in the cluster, like we were used in the past with the CLI 
using hosted-engine --deploy on multiple hosts.


Thanks for any help,


    Alessandro

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted Engine VM not imported

2018-02-24 Thread Alessandro De Salvo

Hi,

I have just migrated my dev cluster to the latest master, reinstalling 
the engine VM and reimporting from a previous backup. I'm trying with 
4.3.0-0.0.master.20180222192611.git01e6ace.el7.centos


I had a few problems:

- the documentation seems to be outdated; I only found by searching 
the archives that it's necessary to add the two (undocumented) options 
--he-remove-storage-vm --he-remove-hosts


- despite the fact that I selected "No" to running the engine-setup command 
in the VM (the ovirt appliance), engine-setup is executed when 
running hosted-engine --deploy, and as a result the procedure does not 
stop to allow reloading the db backup. The only way I found was to put 
the hosted-engine in global maintenance mode, stop the ovirt-engine, do 
an engine-cleanup and reload the db; then it's possible to add the first 
host in the GUI, but it must be done manually


- after it's all done, I can see the hosted_storage is imported, but the 
HostedEngine is not imported, and in the Events I see messages like this:


VDSM atlas-svc-18 command GetVolumeInfoVDS failed: Image path does not 
exist or cannot be accessed/created: 
(u'/rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/c815ec3f-6e31-4b08-81be-e515e803edce',)


   the path here is clearly wrong, it should be 
/rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/b7bc6468-438c-47e7-b7a4-7ed06b786da0/c815ec3f-6e31-4b08-81be-e515e803edce, 
and I see the hosted_engine.conf in the shared storage has it correctly 
set as vm_disk_id=b7bc6468-438c-47e7-b7a4-7ed06b786da0.
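A sketch of the path mismatch described above, using the UUIDs from the log: vdsm lays out volumes as <mount>/<storage_domain_id>/images/<image_group_id>/<volume_id>, and the failing call is missing the image group (vm_disk_id) directory level entirely.

```python
# UUIDs taken from the log and hosted_engine.conf quoted above.
MNT = ("/rhev/data-center/mnt/glusterSD/"
       "atlas-fsserv-07.roma1.infn.it:_atlas-engine-02")
SD = "f02d7d5d-1459-48b8-bf27-4225cdfdce23"     # storage domain id
DISK = "b7bc6468-438c-47e7-b7a4-7ed06b786da0"   # vm_disk_id (image group)
VOL = "c815ec3f-6e31-4b08-81be-e515e803edce"    # volume id

def image_path(mount, sd_id, group_id, vol_id):
    # Layout vdsm expects: <mount>/<sd>/images/<image_group>/<volume>
    return "{}/{}/images/{}/{}".format(mount, sd_id, group_id, vol_id)

expected = image_path(MNT, SD, DISK, VOL)           # what exists on disk
reported = "{}/{}/images/{}".format(MNT, SD, VOL)   # what the log complains about

# The reported path skips the images/<vm_disk_id>/ level, so the lookup fails.
assert reported != expected
assert "/images/{}/{}".format(DISK, VOL) in expected
```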



Any hint on what is not allowing the HostedEngine to be imported? I 
didn't find a way to add other hosted engine nodes if the HE VM is not 
imported in the cluster, like we were used in the past with the CLI 
using hosted-engine --deploy on multiple hosts.


Thanks for any help,


    Alessandro

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2

2017-11-09 Thread Alessandro De Salvo

Hi again,

OK, I tried stopping all the VMs except the engine, setting engine-config -s 
LibgfApiSupported=true (for 4.2 only) and restarting the engine.


When I then restarted the VMs they were still not using gfapi, so it does 
not seem to help.


Cheers,


Alessandro



On 09/11/17 09:12, Alessandro De Salvo wrote:

Hi,
where should I enable gfapi via the UI?
The only command I tried was engine-config -s LibgfApiSupported=true 
but the result is what is shown in my output below, so it’s set to 
true for v4.2. Is it enough?
I’ll try restarting the engine. Is it really needed to stop all the 
VMs and restart them all? Of course this is a test setup and I can do 
it, but for production clusters in the future it may be a problem.

Thanks,

 Alessandro

On 9 Nov 2017, at 07:23, Kasturi Narra <kna...@redhat.com> wrote:



Hi ,

    The procedure to enable gfapi is below.

1) stop all the vms running
2) Enable gfapi via UI or using engine-config command
3) Restart ovirt-engine service
4) start the vms.

Hope you have not missed any !!

Thanks
kasturi

On Wed, Nov 8, 2017 at 11:58 PM, Alessandro De Salvo 
<alessandro.desa...@roma1.infn.it 
<mailto:alessandro.desa...@roma1.infn.it>> wrote:


Hi,

I'm using the latest 4.2 beta release and want to try the gfapi
access, but I'm currently failing to use it.

My test setup has an external glusterfs cluster v3.12, not
managed by oVirt.

The compatibility flag is correctly showing gfapi should be
enabled with 4.2:

# engine-config -g LibgfApiSupported
LibgfApiSupported: false version: 3.6
LibgfApiSupported: false version: 4.0
LibgfApiSupported: false version: 4.1
LibgfApiSupported: true version: 4.2

The data center and cluster have the 4.2 compatibility flags as well.

However, when starting a VM with a disk on gluster I can still
see the disk is mounted via fuse.

Any clue of what I'm still missing?

Thanks,


   Alessandro

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Enabling libgfapi disk access with oVirt 4.2

2017-11-09 Thread Alessandro De Salvo
Hi,
where should I enable gfapi via the UI?
The only command I tried was engine-config -s LibgfApiSupported=true but the 
result is what is shown in my output below, so it’s set to true for v4.2. Is it 
enough?
I’ll try restarting the engine. Is it really needed to stop all the VMs and 
restart them all? Of course this is a test setup and I can do it, but for 
production clusters in the future it may be a problem.
Thanks,

   Alessandro

> On 9 Nov 2017, at 07:23, Kasturi Narra <kna...@redhat.com> wrote:
> 
> Hi ,
> 
> The procedure to enable gfapi is below.
> 
> 1) stop all the vms running
> 2) Enable gfapi via UI or using engine-config command
> 3) Restart ovirt-engine service
> 4) start the vms.
> 
> Hope you have not missed any !!
> 
> Thanks
> kasturi 
> 
>> On Wed, Nov 8, 2017 at 11:58 PM, Alessandro De Salvo 
>> <alessandro.desa...@roma1.infn.it> wrote:
>> Hi,
>> 
>> I'm using the latest 4.2 beta release and want to try the gfapi access, but 
>> I'm currently failing to use it.
>> 
>> My test setup has an external glusterfs cluster v3.12, not managed by oVirt.
>> 
>> The compatibility flag is correctly showing gfapi should be enabled with 4.2:
>> 
>> # engine-config -g LibgfApiSupported
>> LibgfApiSupported: false version: 3.6
>> LibgfApiSupported: false version: 4.0
>> LibgfApiSupported: false version: 4.1
>> LibgfApiSupported: true version: 4.2
>> 
>> The data center and cluster have the 4.2 compatibility flags as well.
>> 
>> However, when starting a VM with a disk on gluster I can still see the disk 
>> is mounted via fuse.
>> 
>> Any clue of what I'm still missing?
>> 
>> Thanks,
>> 
>> 
>>Alessandro
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Enabling libgfapi disk access with oVirt 4.2

2017-11-08 Thread Alessandro De Salvo

Hi,

I'm using the latest 4.2 beta release and want to try the gfapi access, 
but I'm currently failing to use it.


My test setup has an external glusterfs cluster v3.12, not managed by oVirt.

The compatibility flag is correctly showing gfapi should be enabled with 
4.2:


# engine-config -g LibgfApiSupported
LibgfApiSupported: false version: 3.6
LibgfApiSupported: false version: 4.0
LibgfApiSupported: false version: 4.1
LibgfApiSupported: true version: 4.2

The data center and cluster have the 4.2 compatibility flags as well.

However, when starting a VM with a disk on gluster I can still see the 
disk is mounted via fuse.
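A quick way to verify which access mode a running VM actually got is to inspect its libvirt domain XML (for example via `virsh -r dumpxml <vm>`): with fuse the disk <source> is a plain file under the glusterSD mount, while with libgfapi it is a network source speaking the gluster protocol. A minimal sketch, with made-up disk snippets standing in for real dumpxml output:

```python
import xml.etree.ElementTree as ET

# Hypothetical <disk> elements as they would appear in `virsh dumpxml` output.
FUSE_DISK = """<disk type='file' device='disk'>
  <source file='/rhev/data-center/mnt/glusterSD/srv:_vol/sd/images/img/vol'/>
</disk>"""

GFAPI_DISK = """<disk type='network' device='disk'>
  <source protocol='gluster' name='vol/sd/images/img/vol'>
    <host name='srv' port='24007'/>
  </source>
</disk>"""

def uses_gfapi(disk_xml):
    # libgfapi disks are network devices with the gluster protocol;
    # fuse disks are ordinary files under the glusterSD mountpoint.
    disk = ET.fromstring(disk_xml)
    source = disk.find("source")
    return (disk.get("type") == "network"
            and source is not None
            and source.get("protocol") == "gluster")

assert uses_gfapi(FUSE_DISK) is False
assert uses_gfapi(GFAPI_DISK) is True
```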


Any clue of what I'm still missing?

Thanks,


   Alessandro

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] WebUI error with nightly 4.0.7

2016-12-19 Thread Alessandro De Salvo
Oh, that's good news, but then why is it in the nightly release of 4.0.7 
if it's fixed in 4.0.6?

Thanks,

Alessandro

On 19/12/16 19:15, Maton, Brett wrote:

Sounds very much like the issue that was recently fixed in 4.0.6

On 19 December 2016 at 17:02, Alessandro De Salvo 
<alessandro.desa...@roma1.infn.it 
<mailto:alessandro.desa...@roma1.infn.it>> wrote:


Hi,
since a few days, after the upgrade my dev machine to the nightly repo
of 4.0.7, I'm getting these kind of errors from the WebUI after a few
minutes the ovirt-engine is up:

Error while executing action: A Request to the Server failed:
java.lang.reflect.InvocationTargetException

The errors go away if I restart the engine service, but after about 15
minutes they show up again. These errors are very annoying as I cannot
use the UI unless I restart the engine.
When I use my Mac I also get other errors like this:

ERROR: Possible problem with your *.gwt.xml module file. The compile
time user.agent (gecko1_8) does not match the runtime user.agent value
(safari). Expect more errors.

Does anyone know if those errors will be corrected in a future
release?
Thanks,

Alessandro

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] WebUI error with nightly 4.0.7

2016-12-19 Thread Alessandro De Salvo
Hi,
since a few days ago, after upgrading my dev machine to the nightly repo
of 4.0.7, I've been getting this kind of error from the WebUI a few
minutes after the ovirt-engine comes up:

Error while executing action: A Request to the Server failed:
java.lang.reflect.InvocationTargetException

The errors go away if I restart the engine service, but after about 15
minutes they show up again. These errors are very annoying as I cannot
use the UI unless I restart the engine.
When I use my Mac I also get other errors like this:

ERROR: Possible problem with your *.gwt.xml module file. The compile
time user.agent (gecko1_8) does not match the runtime user.agent value
(safari). Expect more errors.

Does anyone know if those errors will be corrected in a future release?
Thanks,

Alessandro

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Rajat,
3 is the bare minimum, but yes, it works well, as I said before. But you still 
have to decide whether you want more resiliency for ovirt, and standard 
NFS is not helping much there.
If you plan to run your cinder or openstack all-in-one box as a VM in ovirt as 
well, you should consider moving from standard NFS to something else, like 
gluster.
Cheers,

  Alessandro

> On 18 Dec 2016, at 18:56, rajatjpatel <rajatjpa...@gmail.com> wrote:
> 
> 
> 
>> On Sun, Dec 18, 2016 at 9:31 PM, Alessandro De Salvo 
>> <alessandro.desa...@roma1.infn.it> wrote:
>> Alessandro
> 
> Thank you Alessandro for all your support. If I add one more ovirt-hyp to my 
> setup with the same h/w config, will it work for ceph?
> 
> Regards
> Rajat​
> 
> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Rajat,
OK, I see. Well, just consider that ceph will not work at its best in your 
setup unless you add at least one more physical machine. The same is true for 
ovirt if you are only using native NFS, as you lose real HA.
Having said this, of course you choose what's best or affordable for your site, 
but your setup looks quite fragile to me. Happy to help more if you need.
Regards,

   Alessandro

> On 18 Dec 2016, at 18:22, rajatjpatel <rajatjpa...@gmail.com> wrote:
> 
> Alessandro,
> 
> Right now I don't have cinder running in my setup. In case ceph doesn't work, 
> I can get one VM running OpenStack all-in-one, connect all these disks to my 
> OpenStack using cinder, and present the storage to my ovirt.
> 
> At the same time I'm not finding a case study for the same.
> 
> Regards
> Rajat
> 
> Hi
> 
> 
> Regards,
> Rajat Patel
> 
> http://studyhat.blogspot.com
> FIRST THEY IGNORE YOU...
> THEN THEY LAUGH AT YOU...
> THEN THEY FIGHT YOU...
> THEN YOU WIN...
> 
> 
>> On Sun, Dec 18, 2016 at 9:17 PM, Alessandro De Salvo 
>> <alessandro.desa...@roma1.infn.it> wrote:
>> Hi,
>> oh, so you have only 2 physical servers? I understood they were 3! Well, 
>> in this case ceph would not work very well: too few resources and too little 
>> redundancy. You could try a replica 2, but it's not safe. A replica 3 
>> could be forced, but you would end up with one server holding 2 replicas, which 
>> is dangerous/useless.
>> OK, so you use nfs as the storage domain, but in your setup HA is not 
>> guaranteed: if a physical machine goes down and it's the one where the 
>> storage domain resides, you are lost. Why not use gluster instead of nfs 
>> for the ovirt disks? You can still reserve a small gluster space for the 
>> non-ceph machines (for example a cinder VM) and ceph for the rest. Where do 
>> you have your cinder running?
>> Cheers,
>> 
>> Alessandro
>> 
>>> Il giorno 18 dic 2016, alle ore 18:05, rajatjpatel <rajatjpa...@gmail.com> 
>>> ha scritto:
>>> 
>>> Hi Alessandro,
>>> 
>>> Right now I have 2 physical servers where I host ovirt; these are HP 
>>> ProLiant DL380, each with 1*500GB SAS, 4*1TB SAS disks and 1*500GB SSD. 
>>> Right now I use only the one 500GB SAS disk for ovirt on both servers; the 
>>> rest are not in use. At present I am using NFS, coming from the mapper, as 
>>> ovirt storage. Going forward we would like to use all these disks 
>>> hyper-converged for ovirt. In RH I could see there is a KB for using 
>>> gluster, but we are looking at Ceph because of its performance and scale.
>>> 
>>> 
>>> Regards
>>> Rajat
>>> 
>>> 
>>> 
>>>> On Sun, Dec 18, 2016 at 8:49 PM, Alessandro De Salvo 
>>>> <alessandro.desa...@roma1.infn.it> wrote:
>>>> Hi Rajat,
>>>> sorry but I do not really have a clear picture of your actual setup, can 
>>>> you please explain a bit more?
>>>> In particular:
>>>> 
>>>> 1) what to you mean by using 4TB for ovirt? In which machines and how do 
>>>> you make it available to ovirt?
>>>> 
>>>> 2) how do you plan to use ceph with ovirt?
>>>> 
>>>> I guess we can give more help if you clarify those points.
>>>> Thanks,
>>>> 
>>>>Alessandro 
>>>> 
>>>>> Il giorno 18 dic 2016, alle ore 17:33, rajatjpatel 
>>>>> <rajatjpa...@gmail.com> ha scritto:
>>>>> 
>>>>> Great, thanks! Alessandro ++ Yaniv ++ 
>>>>> 
>>>>> What I want to use around 4 TB of SAS disk for my Ovirt (which going to 
>>>>> be RHV4.0.5 once POC get 100% successful, in fact all product will be RH )
>>>>> 
>>>>> I had done so much duckduckgo for all these solution and use lot of 
>>>>> reference from ovirt.org & access.redhat.com for setting up a Ovirt 
>>>>> engine and hyp.
>>>>> 
>>>>> We dont mind having more guest running and creating ceph block storage 
>>>>> and which will be presented to ovirt as storage. Gluster is not is use 
>>>>> right now bcoz we have DB will be running on guest.

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi,
oh, so you have only 2 physical servers? I understood they were 3! Well, in 
this case ceph would not work very well: too few resources and too little 
redundancy. You could try a replica 2, but it's not safe. Having a replica 3 
could be forced, but you would end up with one server holding 2 replicas, which 
is dangerous/useless.
Okay, so you use nfs as the storage domain, but in your setup HA is not 
guaranteed: if a physical machine goes down and it's the one where the storage 
domain resides, you are lost. Why not use gluster instead of nfs for the ovirt 
disks? You can still reserve a small gluster space for the non-ceph machines 
(for example a cinder VM) and use ceph for the rest. Where do you have your 
cinder running?
Cheers,

Alessandro

> On 18 Dec 2016, at 18:05, rajatjpatel <rajatjpa...@gmail.com> 
> wrote:
> 
> Hi Alessandro,
> 
> Right now I have 2 physical servers where I host ovirt; these are HP 
> ProLiant DL380, each with 1*500GB SAS, 4*1TB SAS disks and 1*500GB SSD. 
> Right now I use only the one 500GB SAS disk for ovirt on both servers; the 
> rest are not in use. At present I am using NFS, coming from the mapper, as 
> ovirt storage. Going forward we would like to use all these disks 
> hyper-converged for ovirt. In RH I could see there is a KB for using gluster, 
> but we are looking at Ceph because of its performance and scale.
> 
> 
> Regards
> Rajat
> 
> 
> 
>> On Sun, Dec 18, 2016 at 8:49 PM, Alessandro De Salvo 
>> <alessandro.desa...@roma1.infn.it> wrote:
>> Hi Rajat,
>> sorry but I do not really have a clear picture of your actual setup, can you 
>> please explain a bit more?
>> In particular:
>> 
>> 1) what to you mean by using 4TB for ovirt? In which machines and how do you 
>> make it available to ovirt?
>> 
>> 2) how do you plan to use ceph with ovirt?
>> 
>> I guess we can give more help if you clarify those points.
>> Thanks,
>> 
>>Alessandro 
>> 
>>> Il giorno 18 dic 2016, alle ore 17:33, rajatjpatel <rajatjpa...@gmail.com> 
>>> ha scritto:
>>> 
>>> Great, thanks! Alessandro ++ Yaniv ++ 
>>> 
>>> What I want to use around 4 TB of SAS disk for my Ovirt (which going to be 
>>> RHV4.0.5 once POC get 100% successful, in fact all product will be RH )
>>> 
>>> I had done so much duckduckgo for all these solution and use lot of 
>>> reference from ovirt.org & access.redhat.com for setting up a Ovirt engine 
>>> and hyp.
>>> 
>>> We dont mind having more guest running and creating ceph block storage and 
>>> which will be presented to ovirt as storage. Gluster is not is use right 
>>> now bcoz we have DB will be running on guest.
>>> 
>>> Regard
>>> Rajat 
>>> 
>>>> On Sun, Dec 18, 2016 at 8:21 PM Alessandro De Salvo 
>>>> <alessandro.desa...@roma1.infn.it> wrote:
>>>> Hi,
>>>> having a 3-node ceph cluster is the bare minimum you can have to make it 
>>>> working, unless you want to have just a replica-2 mode, which is not safe.
>>>> It's not true that ceph is not easy to configure, you might use very 
>>>> easily ceph-deploy, have puppet configuring it or even run it in 
>>>> containers. Using docker is in fact the easiest solution, it really 
>>>> requires 10 minutes to make a cluster up. I've tried it both with jewel 
>>>> (official containers) and kraken (custom containers), and it works pretty 
>>>> well.
>>>> The real problem is not creating and configuring a ceph cluster, but using 
>>>> it from ovirt, as it requires cinder, i.e. a minimal setup of openstack. 
>>>> We have it and it's working pretty well, but it requires some work. For 
>>>> your reference we have cinder running on an ovirt VM using gluster.
>>>> Cheers,
>>>> 
>>>>Alessandro 
>>>> 
>>>>> Il giorno 18 dic 2016, alle ore 17:07, Yaniv Kaul <yk...@redhat.com> ha 
>>>>> scritto:
>>>>> 
>>>>> 
>>>>> 
>>>>> On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpa...@gmail.com> 
>>>>> wrote:
>>>>> ​Dear Team,
>>>>> 
>>>>> We are using Ovirt 4.0 for POC what we are doing I want to check with all 
>>>>> Guru's Ovirt.
>>>&

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Yaniv,

> On 18 Dec 2016, at 17:37, Yaniv Kaul <yk...@redhat.com> 
> wrote:
> 
> 
> 
>> On Sun, Dec 18, 2016 at 6:21 PM, Alessandro De Salvo 
>> <alessandro.desa...@roma1.infn.it> wrote:
>> Hi,
>> having a 3-node ceph cluster is the bare minimum you can have to make it 
>> working, unless you want to have just a replica-2 mode, which is not safe.
> 
> How well does it perform?

One of the ceph clusters we use has exactly this setup: 3 DELL R630 (ceph 
jewel), 6 1TB NL-SAS disks, so 3 mons and 6 osds. We bound the cluster network 
to a dedicated 1Gbps interface. I can say it works pretty well: the performance 
reaches up to 100MB/s per rbd device, which is the expected maximum for the 
network connection. Resiliency is also pretty good; we can lose 2 osds (i.e. a 
full machine) without impacting performance.
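A hedged sketch of the pool settings such a layout implies (the pool name and
pg_num here are assumptions, not values from this thread): a size-3 pool with
min_size 2 keeps I/O running after the loss of one host's two OSDs.

```shell
# Assumed pool name and pg count; size/min_size reflect the replica-3
# behaviour described above (cluster survives losing one 2-osd host).
ceph osd pool create rbd-vms 128
ceph osd pool set rbd-vms size 3
ceph osd pool set rbd-vms min_size 2
```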

>  
>> It's not true that ceph is not easy to configure, you might use very easily 
>> ceph-deploy, have puppet configuring it or even run it in containers. Using 
>> docker is in fact the easiest solution, it really requires 10 minutes to 
>> make a cluster up. I've tried it both with jewel (official containers) and 
>> kraken (custom containers), and it works pretty well.
> 
> This could be a great blog post in ovirt.org site - care to write something 
> describing the configuration and setup?

Oh sure, if it may be of general interest I'll be glad to. How can I do it? :-)
Cheers,

   Alessandro 

> Y.
>  
>> The real problem is not creating and configuring a ceph cluster, but using 
>> it from ovirt, as it requires cinder, i.e. a minimal setup of openstack. We 
>> have it and it's working pretty well, but it requires some work. For your 
>> reference we have cinder running on an ovirt VM using gluster.
>> Cheers,
>> 
>>Alessandro 
>> 
>>> Il giorno 18 dic 2016, alle ore 17:07, Yaniv Kaul <yk...@redhat.com> ha 
>>> scritto:
>>> 
>>> 
>>> 
>>>> On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpa...@gmail.com> wrote:
>>>> ​Dear Team,
>>>> 
>>>> We are using Ovirt 4.0 for POC what we are doing I want to check with all 
>>>> Guru's Ovirt.
>>>> 
>>>> We have 2 hp proliant dl 380 with 500GB SAS & 1TB *4 SAS Disk and 500GB 
>>>> SSD.
>>>> 
>>>> Waht we are done we have install ovirt hyp on these h/w and we have 
>>>> physical server where we are running our manager for ovirt. For ovirt hyp 
>>>> we are using only one 500GB of one HDD rest we have kept for ceph, so we 
>>>> have 3 node as guest running on ovirt and for ceph. My question you all is 
>>>> what I am doing is right or wrong.
>>> 
>>> I think Ceph requires a lot more resources than above. It's also a bit more 
>>> challenging to configure. I would highly recommend a 3-node cluster with 
>>> Gluster.
>>> Y.
>>>  
>>>> 
>>>> Regards
>>>> Rajat​
>>>> 
>>>> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Rajat,
sorry but I do not really have a clear picture of your actual setup, can you 
please explain a bit more?
In particular:

1) what do you mean by using 4TB for ovirt? On which machines, and how do you 
make it available to ovirt?

2) how do you plan to use ceph with ovirt?

I guess we can give more help if you clarify those points.
Thanks,

   Alessandro 

> On 18 Dec 2016, at 17:33, rajatjpatel <rajatjpa...@gmail.com> 
> wrote:
> 
> Great, thanks! Alessandro ++ Yaniv ++ 
> 
> What I want is to use around 4 TB of SAS disk for my Ovirt (which is going to 
> be RHV 4.0.5 once the POC is 100% successful; in fact all products will be RH).
> 
> I have done so much duckduckgo-ing for all these solutions, and used a lot of 
> reference from ovirt.org & access.redhat.com for setting up the Ovirt engine 
> and hyp.
> 
> We don't mind having more guests running and creating ceph block storage, 
> which will be presented to ovirt as storage. Gluster is not in use right now 
> because our DB will be running on guests.
> 
> Regard
> Rajat 
> 
>> On Sun, Dec 18, 2016 at 8:21 PM Alessandro De Salvo 
>> <alessandro.desa...@roma1.infn.it> wrote:
>> Hi,
>> having a 3-node ceph cluster is the bare minimum you can have to make it 
>> working, unless you want to have just a replica-2 mode, which is not safe.
>> It's not true that ceph is not easy to configure, you might use very easily 
>> ceph-deploy, have puppet configuring it or even run it in containers. Using 
>> docker is in fact the easiest solution, it really requires 10 minutes to 
>> make a cluster up. I've tried it both with jewel (official containers) and 
>> kraken (custom containers), and it works pretty well.
>> The real problem is not creating and configuring a ceph cluster, but using 
>> it from ovirt, as it requires cinder, i.e. a minimal setup of openstack. We 
>> have it and it's working pretty well, but it requires some work. For your 
>> reference we have cinder running on an ovirt VM using gluster.
>> Cheers,
>> 
>>Alessandro 
>> 
>>> Il giorno 18 dic 2016, alle ore 17:07, Yaniv Kaul <yk...@redhat.com> ha 
>>> scritto:
>>> 
>>> 
>>> 
>>> On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpa...@gmail.com> wrote:
>>> ​Dear Team,
>>> 
>>> We are using Ovirt 4.0 for POC what we are doing I want to check with all 
>>> Guru's Ovirt.
>>> 
>>> We have 2 hp proliant dl 380 with 500GB SAS & 1TB *4 SAS Disk and 500GB SSD.
>>> 
>>> Waht we are done we have install ovirt hyp on these h/w and we have 
>>> physical server where we are running our manager for ovirt. For ovirt hyp 
>>> we are using only one 500GB of one HDD rest we have kept for ceph, so we 
>>> have 3 node as guest running on ovirt and for ceph. My question you all is 
>>> what I am doing is right or wrong.
>>> 
>>> I think Ceph requires a lot more resources than above. It's also a bit more 
>>> challenging to configure. I would highly recommend a 3-node cluster with 
>>> Gluster.
>>> Y.
>>>  
>>> 
>>> Regards
>>> Rajat​
>>> 
>>> 
> 
> -- 
> Sent from my Cell Phone - excuse the typos & auto incorrect
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi,
sorry, forgot to mention you may have both gluster and ceph on the same 
machines, as long as you have enough disk space.
Cheers,

   Alessandro 

> On 18 Dec 2016, at 17:07, Yaniv Kaul  
> wrote:
> 
> 
> 
>> On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel  wrote:
>> ​Dear Team,
>> 
>> We are using Ovirt 4.0 for POC what we are doing I want to check with all 
>> Guru's Ovirt.
>> 
>> We have 2 hp proliant dl 380 with 500GB SAS & 1TB *4 SAS Disk and 500GB SSD.
>> 
>> What we have done: we installed the ovirt hyp on this h/w, and we have a 
>> physical server where we are running our manager for ovirt. For the ovirt 
>> hyp we are using only one 500GB HDD; the rest we have kept for ceph, so we 
>> have 3 nodes as guests running on ovirt for ceph. My question to you all is 
>> whether what I am doing is right or wrong.
> 
> I think Ceph requires a lot more resources than above. It's also a bit more 
> challenging to configure. I would highly recommend a 3-node cluster with 
> Gluster.
> Y.
>  
>> 
>> Regards
>> Rajat​
>> 
>> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi,
having a 3-node ceph cluster is the bare minimum you can have to make it 
work, unless you want to have just a replica-2 mode, which is not safe.
It's not true that ceph is not easy to configure: you can very easily use 
ceph-deploy, have puppet configure it, or even run it in containers. Using 
docker is in fact the easiest solution; it really takes 10 minutes to bring a 
cluster up. I've tried it both with jewel (official containers) and kraken 
(custom containers), and it works pretty well.
The real problem is not creating and configuring a ceph cluster, but using it 
from ovirt, as it requires cinder, i.e. a minimal setup of openstack. We have 
it and it's working pretty well, but it requires some work. For your reference, 
we have cinder running on an ovirt VM using gluster.
Cheers,

   Alessandro 
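The docker-based setup mentioned above can be sketched roughly as follows,
using the official ceph/daemon image. The image tag, IPs, network, and disk
device are placeholder assumptions; adapt them to your hosts and consult the
ceph-container docs for your release.

```shell
# One mon per host first (MON_IP and the public network are assumptions):
docker run -d --net=host \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.0.1 -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon mon

# Then one osd per data disk on each host (/dev/sdb is a placeholder):
docker run -d --net=host --privileged=true \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -v /dev:/dev \
  -e OSD_DEVICE=/dev/sdb \
  ceph/daemon osd
```

Repeating the two commands on three hosts yields the 3-mon cluster discussed
in the thread.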

> On 18 Dec 2016, at 17:07, Yaniv Kaul  
> wrote:
> 
> 
> 
>> On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel  wrote:
>> ​Dear Team,
>> 
>> We are using Ovirt 4.0 for POC what we are doing I want to check with all 
>> Guru's Ovirt.
>> 
>> We have 2 hp proliant dl 380 with 500GB SAS & 1TB *4 SAS Disk and 500GB SSD.
>> 
>> Waht we are done we have install ovirt hyp on these h/w and we have physical 
>> server where we are running our manager for ovirt. For ovirt hyp we are 
>> using only one 500GB of one HDD rest we have kept for ceph, so we have 3 
>> node as guest running on ovirt and for ceph. My question you all is what I 
>> am doing is right or wrong.
> 
> I think Ceph requires a lot more resources than above. It's also a bit more 
> challenging to configure. I would highly recommend a 3-node cluster with 
> Gluster.
> Y.
>  
>> 
>> Regards
>> Rajat​
>> 
>> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error while extending a cinder/ceph disk

2016-07-21 Thread Alessandro De Salvo
Hi Daniel,

On Thu, 2016-07-21 at 17:34 +0300, Daniel Erez wrote:
> 
> 
> On Thu, Jul 21, 2016 at 4:21 PM, Alessandro De Salvo
> <alessandro.desa...@roma1.infn.it> wrote:
> Hi,
> when trying to extend a ceph disk in ovirt 4 (Virtual Machines
> => Edit
> Virtual Machine => Instance images => Edit => Extend size by)
> I get the
> following error:
> 
> Error while executing action: interface is required
> 
> 
> Sounds like a similar root cause
> of https://bugzilla.redhat.com/show_bug.cgi?id=1346887
> Should be already fixed on latest build. Can you please try to upgrade
> to latest?

I'm running on the latest snapshot, upgraded a few hours ago, so I guess
it's not totally fixed yet.
Those are the ovirt-engine-* RPMS I have in my machine:

ovirt-engine-setup-plugin-ovirt-engine-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-dashboard-1.0.1-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.8.0-1.el7.centos.noarch
ovirt-engine-setup-plugin-dockerc-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-tools-backup-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-tools-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-wildfly-10.0.0-1.el7.x86_64
ovirt-engine-setup-base-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-setup-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.1.1-0.0.master.20160623200644.git2e68ef6.el7.noarch
ovirt-engine-restapi-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
ovirt-engine-dwh-setup-4.0.2-0.1.master.20160706084440.el7.centos.noarch
ovirt-engine-lib-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
python-ovirt-engine-sdk4-4.0.0-0.5.a5.el7.centos.x86_64
ovirt-engine-extensions-api-impl-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-cli-3.6.8.1-1.el7.centos.noarch
ovirt-engine-userportal-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-webadmin-portal-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-dwh-4.0.2-0.1.master.20160706084440.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-websocket-proxy-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-backend-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch
ovirt-engine-dbscripts-4.0.3-0.0.master.20160720203246.git9c88731.el7.centos.noarch

> 
> 
> Or, try using the Edit disk dialog under 'VMs => Disks' instead (could
> be a specific issue with the 'Instance images' flow).

Tried this as well, but I still get the same error.
Thanks,

Alessandro

>  
> 
> In engine.log I see the following errors as well:
> 
> 2016-07-21 15:14:55,266 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-35) [] Correlation ID: 47ce3344, Call Stack:
> null, Custom
> Event ID: -1, Message: Failed to update VM test01 disk
> test01_Disk1
> (User: admin@internal-authz).
> 2016-07-21 15:14:56,091 ERROR
> 
> [org.ovirt.engine.core.bll.storage.disk.cinder.ExtendCinderDiskCommandCallback]
>  (DefaultQuartzScheduler8) [2819eb7c] Failed extending disk. ID: 
> a5dd90b1-3a76-4e38-af8c-e829b3b86a40
> 2016-07-21 15:14:56,124 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler8) [2819eb7c] Correlation ID: 2819eb7c,
> Call
> Stack: null, Custom Event ID: -1, Message: Failed to extend
> size of the
> disk 'test01_Disk1' to 20 GB, User: admin@internal-authz.
> 2016-07-21 15:14:56,209 ERROR
> 
> [org.ovirt.engine.core.bll.storage.disk.cinder.ExtendCinderDiskCommand]
> (DefaultQuartzScheduler8) [2819eb7c] Ending command
> 
> 'org.ovirt.engine.core.bll.storage.disk.cinder.ExtendCinderDiskCommand'
> with failure.
> 
> 
> After this, the cinder image is flagged as illegal and I can
> just delete
> it.
> Extending the image on the cinder side is possible, and the
> com

Re: [ovirt-users] Exporting VMs using cinder disks and importing ceph disks from cinder

2016-07-21 Thread Alessandro De Salvo
Hi Nir,

On Thu, 2016-07-21 at 17:25 +0300, Nir Soffer wrote:
> On Thu, Jul 21, 2016 at 4:58 PM, Alessandro De Salvo
> <alessandro.desa...@roma1.infn.it> wrote:
> > Hi,
> > I'm trying to export VMs using ceph disks on ovirt 4. The export itself
> > works, but the disks are not saved in the export domains, only the VM
> > definition is stored. Also, I cannot easily re-import them, unless I
> > clone the VM instead of importing it, it just fails.
> > Cinder itself can backup volumes, if instructed to do so.
> > However, let's say I would like to export the machine from an ovirt
> > infrastructure and load it into another one passing by the export domain
> > and sharing the same underlying ceph cluster. I could easily load the VM
> > from the export domain into the second ovirt infrastructure, but how can
> > I define the disk, whose volume exists in the cinder instance, in the
> > second ovirt? I do not see any obvious way to map an existing cinder
> > volume into ovirt, without creating a new disk, which would not be
> > helpful.
> > As a consequence of all this, in case of problems with the disks, there
> > is no way to recover them.
> > Any help?
> 
> We do not support copying, moving, or exporting disks to/from ceph yet.
> 
> You can copy the disks manually from ceph using ceph command line
> tools, and qemu-img should also handle them.

Yes, I just found a trick to re-import the disk: using qemu-img to dump
the image to a file from rbd:/, creating a new disk in ovirt,
and using qemu-img to copy it back onto the new image. Technically it is
working, but it's a bit convoluted. Any plans to add the relevant
support in ovirt?
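The trick described above, as a hedged sketch. The pool name, volume UUIDs,
and the `id=cinder` ceph user are placeholders (the real rbd URLs were elided
from the thread); the actual values come from the cinder volume IDs.

```shell
# 1) Dump the existing cinder/ceph volume to a local raw file.
qemu-img convert -p -f raw \
  rbd:volumes/volume-OLD-UUID:id=cinder:conf=/etc/ceph/ceph.conf disk.raw

# 2) Create a new disk of at least the same size in ovirt (cinder), then
#    write the image back onto the rbd volume backing the new disk.
qemu-img convert -p -f raw -O raw disk.raw \
  rbd:volumes/volume-NEW-UUID:id=cinder:conf=/etc/ceph/ceph.conf
```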

> 
> After you copy an image, you will have a raw image that can be
> uploaded to a new ovirt disk from the engine UI (new image upload feature).

Are you talking to the upload button in the Disks tab? I do not see how
I can upload to cinder/ceph from there. The only way I found was the
trick described above.
Thanks,

Alessandro

> 
> Nir


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Exporting VMs using cinder disks and importing ceph disks from cinder

2016-07-21 Thread Alessandro De Salvo
Hi,
I'm trying to export VMs using ceph disks on ovirt 4. The export itself
works, but the disks are not saved in the export domains, only the VM
definition is stored. Also, I cannot easily re-import them, unless I
clone the VM instead of importing it, it just fails.
Cinder itself can backup volumes, if instructed to do so.
However, let's say I would like to export the machine from an ovirt
infrastructure and load it into another one passing by the export domain
and sharing the same underlying ceph cluster. I could easily load the VM
from the export domain into the second ovirt infrastructure, but how can
I define the disk, whose volume exists in the cinder instance, in the
second ovirt? I do not see any obvious way to map an existing cinder
volume into ovirt, without creating a new disk, which would not be
helpful.
As a consequence of all this, in case of problems with the disks, there
is no way to recover them.
Any help?
Thanks,

Alessandro

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Error while extending a cinder/ceph disk

2016-07-21 Thread Alessandro De Salvo
Hi,
when trying to extend a ceph disk in ovirt 4 (Virtual Machines => Edit
Virtual Machine => Instance images => Edit => Extend size by) I get the
following error:

Error while executing action: interface is required

In engine.log I see the following errors as well:

2016-07-21 15:14:55,266 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-35) [] Correlation ID: 47ce3344, Call Stack: null, Custom
Event ID: -1, Message: Failed to update VM test01 disk test01_Disk1
(User: admin@internal-authz).
2016-07-21 15:14:56,091 ERROR
[org.ovirt.engine.core.bll.storage.disk.cinder.ExtendCinderDiskCommandCallback] 
(DefaultQuartzScheduler8) [2819eb7c] Failed extending disk. ID: 
a5dd90b1-3a76-4e38-af8c-e829b3b86a40
2016-07-21 15:14:56,124 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler8) [2819eb7c] Correlation ID: 2819eb7c, Call
Stack: null, Custom Event ID: -1, Message: Failed to extend size of the
disk 'test01_Disk1' to 20 GB, User: admin@internal-authz.
2016-07-21 15:14:56,209 ERROR
[org.ovirt.engine.core.bll.storage.disk.cinder.ExtendCinderDiskCommand]
(DefaultQuartzScheduler8) [2819eb7c] Ending command
'org.ovirt.engine.core.bll.storage.disk.cinder.ExtendCinderDiskCommand'
with failure.


After this, the cinder image is flagged as illegal and I can just delete
it.
Extending the image on the cinder side is possible, and the command
"cinder extend " works, but then the new size is not properly
reported back in ovirt.
Any clue?
Thanks,

Alessandro

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt and Ceph

2016-06-27 Thread Alessandro De Salvo

Hi,
the cinder container has been broken for a while, since the kollaglue 
changed the installation method upstream, AFAIK.
Also, it seems that even the latest ovirt 4.0 pulls down the "kilo" 
version of openstack, so you will need to install your own if you need a 
more recent one.
We are using a VM managed by ovirt itself for keystone/glance/cinder 
with our ceph cluster, and it works quite well with the Mitaka version, 
which is the latest one. The DB is hosted outside, so that even if we 
lose the VM we don't lose the state, besides all the performance reasons. 
The installation does not use containers; the services are installed 
directly via puppet/Foreman.
So far we are happily using ceph in this way. The only drawback of this 
setup is that if the VM is not up we cannot start machines with ceph 
volumes attached, but running machines survive without problems 
even if the cinder VM is down.

Cheers,

Alessandro


On 27/06/16 09:37, Barak Korren wrote:

You may like to check this project providing production-ready openstack
containers:
https://github.com/openstack/kolla


Also, the oVirt installer can actually deploy these containers for you:

https://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Using ceph volumes with ovirt

2016-05-30 Thread Alessandro De Salvo

Hi,
just to answer myself, I found these instructions that solved my problem:

http://7xqb88.com1.z0.glb.clouddn.com/Features_Cinder%20Integration.pdf

Basically, I was missing the step to add the ceph key to the 
Authentication Keys tab of the cinder External Provider.

It's all working now.
Cheers,

Alessandro

On 30/05/16 10:55, Alessandro De Salvo wrote:


Hi,
I'm happily using our research cluster in Italy via gluster, and now 
I'm trying to hotplug a ceph disk on a VM of my cluster, without success.
The ceph cluster is managed via openstack cinder and I can create 
correctly the disk via ovirt (3.6.6.2-1 on CentOS 7.2).
The problem comes when trying to hotplug, or start a machine with the 
given disk attached.
In the vdsm log of the host where the VM is running or starting I see 
the following error:



jsonrpc.Executor/5::INFO::2016-05-30 
10:35:29,197::vm::2729::virt.vm::(hotplugDisk) 
vmId=`c189472e-25d2-4df1-b089-590009856dd3`::Hotplug disk xml: <disk 
address="" device="disk" snapshot="no" type="network">
<source name="images/volume-9134b639-c23c-4ff1-91ca-0462c80026d2" protocol="rbd">
...
</source>
<driver name="qemu" type="raw"/>
...
</disk>

jsonrpc.Executor/5::ERROR::2016-05-30 
10:35:29,198::vm::2737::virt.vm::(hotplugDisk) 
vmId=`c189472e-25d2-4df1-b089-590009856dd3`::Hotplug failed

Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2735, in hotplugDisk
    self._dom.attachDevice(driveXml)
  File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 530, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)

libvirtError: XML error: invalid auth secret uuid



In fact the uuid of the secret used by ovirt to hotplug seems to be 
the ceph secret (masked here as ), while libvirt 
expects the uuid of the libvirt secret, by looking at the instructions 
http://docs.ceph.com/docs/jewel/rbd/libvirt/.

Anyone got it working?
Thanks,

Alessandro


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Using ceph volumes with ovirt

2016-05-30 Thread Alessandro De Salvo

Hi,
I'm happily using our research cluster in Italy via gluster, and now I'm 
trying to hotplug a ceph disk on a VM of my cluster, without success.
The ceph cluster is managed via openstack cinder and I can create 
correctly the disk via ovirt (3.6.6.2-1 on CentOS 7.2).
The problem comes when trying to hotplug, or start a machine with the 
given disk attached.
In the vdsm log of the host where the VM is running or starting I see 
the following error:



jsonrpc.Executor/5::INFO::2016-05-30 
10:35:29,197::vm::2729::virt.vm::(hotplugDisk) 
vmId=`c189472e-25d2-4df1-b089-590009856dd3`::Hotplug disk xml: <disk 
address="" device="disk" snapshot="no" type="network">
<source name="images/volume-9134b639-c23c-4ff1-91ca-0462c80026d2" protocol="rbd">
...
</source>
<driver name="qemu" type="raw"/>
...
</disk>

jsonrpc.Executor/5::ERROR::2016-05-30 
10:35:29,198::vm::2737::virt.vm::(hotplugDisk) 
vmId=`c189472e-25d2-4df1-b089-590009856dd3`::Hotplug failed

Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2735, in hotplugDisk
    self._dom.attachDevice(driveXml)
  File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 530, in attachDevice
    if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)

libvirtError: XML error: invalid auth secret uuid



In fact the uuid of the secret used by ovirt to hotplug seems to be the 
ceph secret (masked here as ), while libvirt 
expects the uuid of the libvirt secret, according to the instructions at 
http://docs.ceph.com/docs/jewel/rbd/libvirt/.
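For reference, the workaround those ceph docs describe can be sketched as
follows. The uuid and the `client.cinder` ceph user are placeholders, not
values from this setup: the key step is defining a libvirt secret whose uuid
matches the one referenced in the disk XML, then loading the ceph key into it.

```shell
# Define a libvirt secret with the uuid the disk XML references
# (REPLACE-WITH-SECRET-UUID and client.cinder are placeholders).
cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>REPLACE-WITH-SECRET-UUID</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml

# Attach the ceph client key to the libvirt secret.
virsh secret-set-value --secret REPLACE-WITH-SECRET-UUID \
  --base64 "$(ceph auth get-key client.cinder)"
```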

Anyone got it working?
Thanks,

Alessandro
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users