[ovirt-users] sanlock issues after 4.3 to 4.4 migration

2022-01-05 Thread Strahil Nikolov via Users
Hello All,

I was trying to upgrade my single-node setup (it actually used to be a 2+1 
arbiter setup, but one of the data nodes died) from 4.3.10 to 4.4.?

The deployment failed on 'hosted-engine --reinitialize-lockspace --force' and 
it seems that sanlock fails to obtain a lock:

# hosted-engine --reinitialize-lockspace --force
Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py", line 30, in <module>
    ha_cli.reset_lockspace(force)
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 286, in reset_lockspace
    stats = broker.get_stats_from_storage()
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 148, in get_stats_from_storage
    result = self._proxy.get_stats()
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
    verbose=self.__verbose
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python3.6/http/client.py", line 1268, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1044, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 982, in send
    self.connect()
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 74, in connect
    self.sock.connect(base64.b16decode(self.host))
FileNotFoundError: [Errno 2] No such file or directory
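
Note that the FileNotFoundError at the bottom is not from sanlock itself: unixrpc.py is trying to connect to the ovirt-ha-broker Unix socket (the hex-encoded path passed to base64.b16decode()) and the socket file is missing. A minimal first check -- the socket location below is an assumption based on a default hosted-engine HA install:

# systemctl status ovirt-ha-broker ovirt-ha-agent
# ls -l /var/run/ovirt-hosted-engine-ha/        (broker.socket is expected here)
# journalctl -u ovirt-ha-broker -b | tail -n 50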

# grep sanlock /var/log/messages | tail
Jan  6 08:29:48 ovirt2 sanlock[1269]: 2022-01-06 08:29:48 19341 [77108]: s1777 
failed to read device to find sector size error -223 
/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1
Jan  6 08:29:49 ovirt2 sanlock[1269]: 2022-01-06 08:29:49 19342 [1310]: s1777 
add_lockspace fail result -223
Jan  6 08:29:54 ovirt2 sanlock[1269]: 2022-01-06 08:29:54 19347 [77113]: s1778 
failed to read device to find sector size error -223 
/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1
Jan  6 08:29:55 ovirt2 sanlock[1269]: 2022-01-06 08:29:55 19348 [1310]: s1778 
add_lockspace fail result -223
Jan  6 08:30:00 ovirt2 sanlock[1269]: 2022-01-06 08:30:00 19353 [77138]: s1779 
failed to read device to find sector size error -223 
/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1
Jan  6 08:30:01 ovirt2 sanlock[1269]: 2022-01-06 08:30:01 19354 [1311]: s1779 
add_lockspace fail result -223
Jan  6 08:30:06 ovirt2 sanlock[1269]: 2022-01-06 08:30:06 19359 [77144]: s1780 
failed to read device to find sector size error -223 
/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1
Jan  6 08:30:07 ovirt2 sanlock[1269]: 2022-01-06 08:30:07 19360 [1310]: s1780 
add_lockspace fail result -223
Jan  6 08:30:12 ovirt2 sanlock[1269]: 2022-01-06 08:30:12 19365 [77151]: s1781 
failed to read device to find sector size error -223 
/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1
Jan  6 08:30:13 ovirt2 sanlock[1269]: 2022-01-06 08:30:13 19366 [1310]: s1781 
add_lockspace fail result -223


# sanlock client status
daemon 5f37f400-b865-11dc-a4f5-2c4d54502372
p -1 helper
p -1 listener
p -1 status
s 
ca3807b9-5afc-4bcd-a557-aacbcc53c340:1:/rhev/data-center/mnt/glusterSD/ovirt2\:_engine44/ca3807b9-5afc-4bcd-a557-aacbcc53c340/dom_md/ids:0


Could it be related to the sector size of the Gluster brick?

# smartctl -a /dev/sdb | grep  'Sector Sizes'
Sector Sizes: 512 bytes logical, 4096 bytes physical
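
The sanlock errors above say it "failed to read device to find sector size", so it may be worth ruling out a plain read failure on that volume file (ownership/permissions on the gluster-backed path; sanlock reads it with O_DIRECT) before blaming the 512/4096 geometry. A rough check using the paths from the logs above -- the dd read only approximates how sanlock opens the device:

# ls -lL /run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1
# dd if=/run/vdsm/storage/ca3807b9-5afc-4bcd-a557-aacbcc53c340/39ee18b2-3d7b-4d48-8a0e-3ed7947b5038/d95ae3ee-b6d3-46c4-b6a2-75f96134c7f1 of=/dev/null bs=4096 count=1 iflag=direct
# blockdev --getss --getpbsz /dev/sdb        (logical and physical sector size as the kernel sees them)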


Any hint would be helpful.


Best Regards,
Strahil Nikolov

Re: [ovirt-users] Sanlock issues after upgrading to 3.4

2014-06-11 Thread Maor Lipchuk
Hi Jairo,

Can you please open a bug on this at [1]?
Also, can you please attach the sanlock, vdsm, and engine logs to the bug?

[1] https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt

Thanks,
Maor


On 06/07/2014 11:22 PM, Jairo Rizzo wrote:
> Hello, 
> 
> I have a small 2-node cluster setup running GlusterFS in replication mode:
> 
> CentOS v6.5 
> kernel-2.6.32-431.17.1.el6.x86_64
> vdsm-4.14.6-0.el6.x86_64
> ovirt-engine-3.4.0-1.el6.noarch  (on 1 node)
> 
> Basically I was running ovirt-engine 3.3 for months without issues and then
> upgraded to the latest 3.3.x release two days ago, but could not join the
> nodes to the cluster due to a version mismatch, basically this:
> https://www.mail-archive.com/users@ovirt.org/msg17241.html . While
> trying to correct that problem I ended up upgrading to 3.4, which created
> a new and challenging problem for me. Every couple of hours I get error
> messages like this:
> 
> Jun  7 13:40:01 hv1 sanlock[2341]: 2014-06-07 13:40:01-0400 19647
> [2341]: s3 check_our_lease warning 70 last_success 19577
> Jun  7 13:40:02 hv1 sanlock[2341]: 2014-06-07 13:40:02-0400 19648
> [2341]: s3 check_our_lease warning 71 last_success 19577
> Jun  7 13:40:03 hv1 sanlock[2341]: 2014-06-07 13:40:03-0400 19649
> [2341]: s3 check_our_lease warning 72 last_success 19577
> Jun  7 13:40:04 hv1 sanlock[2341]: 2014-06-07 13:40:04-0400 19650
> [2341]: s3 check_our_lease warning 73 last_success 19577
> Jun  7 13:40:05 hv1 sanlock[2341]: 2014-06-07 13:40:05-0400 19651
> [2341]: s3 check_our_lease warning 74 last_success 19577
> Jun  7 13:40:06 hv1 sanlock[2341]: 2014-06-07 13:40:06-0400 19652
> [2341]: s3 check_our_lease warning 75 last_success 19577
> Jun  7 13:40:07 hv1 sanlock[2341]: 2014-06-07 13:40:07-0400 19653
> [2341]: s3 check_our_lease warning 76 last_success 19577
> Jun  7 13:40:08 hv1 sanlock[2341]: 2014-06-07 13:40:08-0400 19654
> [2341]: s3 check_our_lease warning 77 last_success 19577
> Jun  7 13:40:09 hv1 wdmd[2330]: test warning now 19654 ping 19644 close
> 0 renewal 19577 expire 19657 client 2341
> sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
> Jun  7 13:40:09 hv1 wdmd[2330]: /dev/watchdog closed unclean
> Jun  7 13:40:09 hv1 kernel: SoftDog: Unexpected close, not stopping
> watchdog!
> Jun  7 13:40:09 hv1 sanlock[2341]: 2014-06-07 13:40:09-0400 19655
> [2341]: s3 check_our_lease warning 78 last_success 19577
> Jun  7 13:40:10 hv1 wdmd[2330]: test warning now 19655 ping 19644 close
> 19654 renewal 19577 expire 19657 client 2341
> sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
> Jun  7 13:40:10 hv1 sanlock[2341]: 2014-06-07 13:40:10-0400 19656
> [2341]: s3 check_our_lease warning 79 last_success 19577
> Jun  7 13:40:11 hv1 wdmd[2330]: test warning now 19656 ping 19644 close
> 19654 renewal 19577 expire 19657 client 2341
> sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
> Jun  7 13:40:11 hv1 sanlock[2341]: 2014-06-07 13:40:11-0400 19657
> [2341]: s3 check_our_lease failed 80
> Jun  7 13:40:11 hv1 sanlock[2341]: 2014-06-07 13:40:11-0400 19657
> [2341]: s3 all pids clear
> 
> Jun  7 13:40:11 hv1 wdmd[2330]: /dev/watchdog reopen
> Jun  7 13:41:32 hv1 sanlock[2341]: 2014-06-07 13:41:32-0400 19738
> [5050]: s3 delta_renew write error -202
> Jun  7 13:41:32 hv1 sanlock[2341]: 2014-06-07 13:41:32-0400 19738
> [5050]: s3 renewal error -202 delta_length 140 last_success 19577
> Jun  7 13:41:42 hv1 sanlock[2341]: 2014-06-07 13:41:42-0400 19748
> [5050]: 1e8615b0 close_task_aio 0 0x7fd3040008c0 busy
> Jun  7 13:41:52 hv1 sanlock[2341]: 2014-06-07 13:41:52-0400 19758
> [5050]: 1e8615b0 close_task_aio 0 0x7fd3040008c0 busy
> 
> This makes one of the nodes unable to see the storage, and all of its
> VMs go into pause mode or stop. I was wondering if you could provide some
> advice. Thank you
> 
> --Rizzo



[ovirt-users] Sanlock issues after upgrading to 3.4

2014-06-07 Thread Jairo Rizzo
Hello,

I have a small 2-node cluster setup running GlusterFS in replication mode:

CentOS v6.5
kernel-2.6.32-431.17.1.el6.x86_64
vdsm-4.14.6-0.el6.x86_64
ovirt-engine-3.4.0-1.el6.noarch  (on 1 node)

Basically I was running ovirt-engine 3.3 for months without issues and then
upgraded to the latest 3.3.x release two days ago, but could not join the
nodes to the cluster due to a version mismatch, basically this:
https://www.mail-archive.com/users@ovirt.org/msg17241.html . While trying
to correct that problem I ended up upgrading to 3.4, which created a new and
challenging problem for me. Every couple of hours I get error messages like
this:

Jun  7 13:40:01 hv1 sanlock[2341]: 2014-06-07 13:40:01-0400 19647 [2341]:
s3 check_our_lease warning 70 last_success 19577
Jun  7 13:40:02 hv1 sanlock[2341]: 2014-06-07 13:40:02-0400 19648 [2341]:
s3 check_our_lease warning 71 last_success 19577
Jun  7 13:40:03 hv1 sanlock[2341]: 2014-06-07 13:40:03-0400 19649 [2341]:
s3 check_our_lease warning 72 last_success 19577
Jun  7 13:40:04 hv1 sanlock[2341]: 2014-06-07 13:40:04-0400 19650 [2341]:
s3 check_our_lease warning 73 last_success 19577
Jun  7 13:40:05 hv1 sanlock[2341]: 2014-06-07 13:40:05-0400 19651 [2341]:
s3 check_our_lease warning 74 last_success 19577
Jun  7 13:40:06 hv1 sanlock[2341]: 2014-06-07 13:40:06-0400 19652 [2341]:
s3 check_our_lease warning 75 last_success 19577
Jun  7 13:40:07 hv1 sanlock[2341]: 2014-06-07 13:40:07-0400 19653 [2341]:
s3 check_our_lease warning 76 last_success 19577
Jun  7 13:40:08 hv1 sanlock[2341]: 2014-06-07 13:40:08-0400 19654 [2341]:
s3 check_our_lease warning 77 last_success 19577
Jun  7 13:40:09 hv1 wdmd[2330]: test warning now 19654 ping 19644 close 0
renewal 19577 expire 19657 client 2341
sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
Jun  7 13:40:09 hv1 wdmd[2330]: /dev/watchdog closed unclean
Jun  7 13:40:09 hv1 kernel: SoftDog: Unexpected close, not stopping
watchdog!
Jun  7 13:40:09 hv1 sanlock[2341]: 2014-06-07 13:40:09-0400 19655 [2341]:
s3 check_our_lease warning 78 last_success 19577
Jun  7 13:40:10 hv1 wdmd[2330]: test warning now 19655 ping 19644 close
19654 renewal 19577 expire 19657 client 2341
sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
Jun  7 13:40:10 hv1 sanlock[2341]: 2014-06-07 13:40:10-0400 19656 [2341]:
s3 check_our_lease warning 79 last_success 19577
Jun  7 13:40:11 hv1 wdmd[2330]: test warning now 19656 ping 19644 close
19654 renewal 19577 expire 19657 client 2341
sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
Jun  7 13:40:11 hv1 sanlock[2341]: 2014-06-07 13:40:11-0400 19657 [2341]:
s3 check_our_lease failed 80
Jun  7 13:40:11 hv1 sanlock[2341]: 2014-06-07 13:40:11-0400 19657 [2341]:
s3 all pids clear

Jun  7 13:40:11 hv1 wdmd[2330]: /dev/watchdog reopen
Jun  7 13:41:32 hv1 sanlock[2341]: 2014-06-07 13:41:32-0400 19738 [5050]:
s3 delta_renew write error -202
Jun  7 13:41:32 hv1 sanlock[2341]: 2014-06-07 13:41:32-0400 19738 [5050]:
s3 renewal error -202 delta_length 140 last_success 19577
Jun  7 13:41:42 hv1 sanlock[2341]: 2014-06-07 13:41:42-0400 19748 [5050]:
1e8615b0 close_task_aio 0 0x7fd3040008c0 busy
Jun  7 13:41:52 hv1 sanlock[2341]: 2014-06-07 13:41:52-0400 19758 [5050]:
1e8615b0 close_task_aio 0 0x7fd3040008c0 busy

This makes one of the nodes unable to see the storage, and all of its VMs
go into pause mode or stop. I was wondering if you could provide some advice.
Thank you

--Rizzo
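
The pattern in the log is sanlock failing to renew its host lease on the storage: the check_our_lease warning number is the seconds since the last successful renewal, and at 80 the lease is treated as lost, so sanlock stops the processes using it before the watchdog can fire -- which is why the VMs get paused or stopped. When it happens again it would help to grab sanlock's internal log and confirm whether the Gluster mount was actually writable at that moment. A rough sketch -- the volume name and mount path below are placeholders:

# sanlock client log_dump > /tmp/sanlock.log_dump
# gluster volume status <VOLNAME>
# gluster volume heal <VOLNAME> info        (if the installed Gluster version supports it)
# dd if=/dev/zero of=/rhev/data-center/mnt/<GLUSTER_MOUNT>/__write_test bs=512 count=1 oflag=direct conv=fsync
(remove the __write_test file afterwards)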


Re: [Users] sanlock issues

2012-08-08 Thread rino
On Wed, Aug 8, 2012 at 8:00 AM, Itamar Heim  wrote:

> On 08/08/2012 12:10 PM, rino wrote:
>
>>
>>
>> On Wed, Aug 8, 2012 at 4:31 AM, Itamar Heim wrote:
>>
>> On 08/08/2012 04:02 AM, Rino Rondan wrote:
>>
>> Hi , I did an install of 3.1 on a desktop using all in one with
>> local
>> data, I had some problem  with sanlock socket
>> (/var/run/sanlock), maybe
>> it was for selinux or permission because it have sanlock:sanlock,
>> i
>> change to vdsm:sanlock and permissive selinux, because vm doesnt
>> start
>> with sanlock down.
>>
>>
>> check for versions of sanlock and libvirt on fedora - there were
>> quite a few discussions round this.
>>
>> All in one script failed, but I did it with Web tool.
>>
>>
>> due to the sanlock issue?
>>
>>
>> Something happen with the function isHostUp -- never take the value up ..
>>
>>
>> In my laptop I can not activate local data storage domain, I did
>> not
>> test sanlock or selinux for the moment, maybe I Will try use
>> another
>> logical volume to add it, but I can remove the storage domain by
>> Web...
>> Just trying it with virt-shell but I need some practise to do
>> that..
>> Finally I have a problem with spice when I try to access outside
>> of my
>> network .
>> At least I have a simple configuration to show it maybe I can try
>> to
>> configure a vm to host in order to use migration, can I use the
>> same
>> host to create a vm as host too?
>>
>>
>> yes, look up faqemu for details on setting up a virtual host (though
>> the VM won't really run), or enable nested virtualization on fedora
>> 17.
>> http://wiki.ovirt.org/wiki/Vdsm_Developers#Running_Node_as_guest_-_Nested_KVM
>>
>>
>> Yes I have it all in one, Node, and Ovirt, but i want to add another
>> Node using a VM created on the first node.. is that possible, because
>> the new vm create not support kvm...
>>
>> I did the installation on a amd G1 processor with 8gb, i have all in one
>> and works fine, at least i can show it..
>> How can I install reports ?? and how can i add another use to show the
>> User portal to provide a vm for each student ...
>>
>
> history and reports are still not packaged in rpm form in ovirt - still
> work in progress. you can install on your own if you want to tackle (yaniv
> - any link to how to setup history/reports from sources?)
>
> ovirt supports external directories.
> easiest to add users would be to install freeipa and add it via manage
> domains utility
>
>
Thank you for all support!!

I have some trouble with nfs on laptop but i installed in another manchine
and works fine.



-- 
---
Rondan Rino
Certificado en LPIC-2  
LPI ID:LPI000209832
Verification Code:gbblvwyfxu

Blog:http://www.itrestauracion.com.ar
Cv: http://cv.rinorondan.com.ar 
http://counter.li.org  Linux User -> #517918
Viva La Santa Federacion!!
Mueran Los Salvages Unitarios!!
^^^Transcripcion de la epoca ^^^


Re: [Users] sanlock issues

2012-08-08 Thread Itamar Heim

On 08/08/2012 12:10 PM, rino wrote:



On Wed, Aug 8, 2012 at 4:31 AM, Itamar Heim <ih...@redhat.com> wrote:

On 08/08/2012 04:02 AM, Rino Rondan wrote:

Hi , I did an install of 3.1 on a desktop using all in one with
local
data, I had some problem  with sanlock socket
(/var/run/sanlock), maybe
it was for selinux or permission because it have sanlock:sanlock, i
change to vdsm:sanlock and permissive selinux, because vm doesnt
start
with sanlock down.


check for versions of sanlock and libvirt on fedora - there were
quite a few discussions round this.

All in one script failed, but I did it with Web tool.


due to the sanlock issue?


Something happen with the function isHostUp -- never take the value up ..


In my laptop I can not activate local data storage domain, I did not
test sanlock or selinux for the moment, maybe I Will try use another
logical volume to add it, but I can remove the storage domain by
Web...
Just trying it with virt-shell but I need some practise to do that..
Finally I have a problem with spice when I try to access outside
of my
network .
At least I have a simple configuration to show it maybe I can try to
configure a vm to host in order to use migration, can I use the same
host to create a vm as host too?


yes, look up faqemu for details on setting up a virtual host (though
the VM won't really run), or enable nested virtualization on fedora 17.

http://wiki.ovirt.org/wiki/Vdsm_Developers#Running_Node_as_guest_-_Nested_KVM




Yes I have it all in one, Node, and Ovirt, but i want to add another
Node using a VM created on the first node.. is that possible, because
the new vm create not support kvm...

I did the installation on a amd G1 processor with 8gb, i have all in one
and works fine, at least i can show it..
How can I install reports ?? and how can i add another use to show the
User portal to provide a vm for each student ...


history and reports are still not packaged in rpm form in ovirt - still 
work in progress. you can install them on your own if you want to tackle it 
(yaniv - any link on how to set up history/reports from sources?)


ovirt supports external directories.
easiest to add users would be to install freeipa and add it via manage 
domains utility




Re: [Users] sanlock issues (was: help with a presentation)

2012-08-08 Thread rino
On Wed, Aug 8, 2012 at 4:31 AM, Itamar Heim  wrote:

> On 08/08/2012 04:02 AM, Rino Rondan wrote:
>
>> Hi , I did an install of 3.1 on a desktop using all in one with local
>> data, I had some problem  with sanlock socket (/var/run/sanlock), maybe
>> it was for selinux or permission because it have sanlock:sanlock, i
>> change to vdsm:sanlock and permissive selinux, because vm doesnt start
>> with sanlock down.
>>
>
> check for versions of sanlock and libvirt on fedora - there were quite a
> few discussions round this.
>
>  All in one script failed, but I did it with Web tool.
>>
>
> due to the sanlock issue?
>

Something happens with the isHostUp function -- it never takes the value 'up'.

>
>  In my laptop I can not activate local data storage domain, I did not
>> test sanlock or selinux for the moment, maybe I Will try use another
>> logical volume to add it, but I can remove the storage domain by Web...
>> Just trying it with virt-shell but I need some practise to do that..
>> Finally I have a problem with spice when I try to access outside of my
>> network .
>> At least I have a simple configuration to show it maybe I can try to
>> configure a vm to host in order to use migration, can I use the same
>> host to create a vm as host too?
>>
>
> yes, look up faqemu for details on setting up a virtual host (though the
> VM won't really run), or enable nested virtualization on fedora 17.
> http://wiki.ovirt.org/wiki/Vdsm_Developers#Running_Node_as_guest_-_Nested_KVM
>
>
Yes, I have it all-in-one -- Node and oVirt -- but I want to add another Node
using a VM created on the first node. Is that possible? Because the newly
created VM does not support KVM...

I did the installation on an AMD G1 processor with 8 GB; I have all-in-one
and it works fine, at least I can show it.
How can I install reports? And how can I add another user to show the User
Portal, to provide a VM for each student?

Regards

>
>> Rino Rondan
>>
>> On Aug 7, 2012, 8:58 p.m., "Itamar Heim" wrote:
>>
>> On 08/06/2012 01:23 PM, rino wrote:
>>
>> HI:
>>
>> Thank you for the update...
>>
>> I did an installation on my laptop but i have some trouble with
>> create a
>> storage domain my i5 with 8gb is not enough
>>
>> Can I create an instance with a rhel or Fedora 17 on amazon using
>> an
>> special credit?? I need just to show it on Thursday.
>>
>> I saw the videos of jbrooks and it is good to show live
>> migrations.. but
>> I will be in an Spanish event.
>>
>>
>> sorry, no better suggestion.
>> once we'll have a few more non EC2 servers at our disposal, we could
>> try and setup such a demo environment.
>>
>> you should be able to setup an "all in one" deployment from ovirt
>> rpms using fedora 17, ovirt 3.1 with the allinone plugin installed
>> on a single machine with say, 8GB of RAM.
>>
>> Regards
>>
>> On Mon, Aug 6, 2012 at 3:33 AM, Itamar Heim wrote:
>>
>>  On 08/04/2012 02:07 AM, Rino Rondan wrote:
>>
>>  Hi
>>
>>  I want to know if you have a demo of a configured ovirt
>> system
>>  because I
>>  will be on this event
>> http://www.fcad.uner.edu.ar/destacadas/x-jornadas-nacionales-de-administracion-e-informatica
>>  ,
>>  and I need to show ovirt as open-source implementation .
>>
>>
>>  did you get any offline replies with info?
>>  for a demo, i suggest you install an instance.
>>  for screenshots you can ask people to send you some.
>>  also, jbrooks started working on videos:
>> 
>> http://blog.jebpages.com/archives/screencasting-ovirt/

Re: [Users] sanlock issues (was: help with a presentation)

2012-08-08 Thread Itamar Heim

On 08/08/2012 04:02 AM, Rino Rondan wrote:

Hi, I did an install of 3.1 on a desktop using all-in-one with local
data. I had some problems with the sanlock socket (/var/run/sanlock), maybe
because of SELinux or permissions, since it was owned sanlock:sanlock; I
changed it to vdsm:sanlock and set SELinux to permissive, because VMs don't
start with sanlock down.


check for versions of sanlock and libvirt on fedora - there were quite a 
few discussions round this.



All in one script failed, but I did it with Web tool.


due to the sanlock issue?


On my laptop I cannot activate the local data storage domain. I did not
test sanlock or SELinux for the moment; maybe I will try to use another
logical volume to add it, but I can remove the storage domain via the Web...
I am just trying it with virt-shell, but I need some practice to do that.
Finally, I have a problem with SPICE when I try to access it from outside my
network.
At least I have a simple configuration to show. Maybe I can try to
configure a VM as a host in order to use migration -- can I use the same
host to create a VM that acts as a host too?


yes, look up faqemu for details on setting up a virtual host (though the 
VM won't really run), or enable nested virtualization on fedora 17.

http://wiki.ovirt.org/wiki/Vdsm_Developers#Running_Node_as_guest_-_Nested_KVM
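
For the nested-virtualization route, a minimal sketch of enabling it on a Fedora 17 host -- the module and parameter names are the standard KVM ones; use kvm_amd and its nested parameter on AMD hosts:

# cat /sys/module/kvm_intel/parameters/nested     (N or 0 means nested is off)
# echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
# modprobe -r kvm_intel && modprobe kvm_intel     (no VMs may be running during the reload)
# cat /sys/module/kvm_intel/parameters/nested     (should now report Y/1)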



Rino Rondan

On Aug 7, 2012, 8:58 p.m., "Itamar Heim" <ih...@redhat.com> wrote:

On 08/06/2012 01:23 PM, rino wrote:

Hi,

Thank you for the update...

I did an installation on my laptop, but I have some trouble with
creating a
storage domain; my i5 with 8 GB is not enough.

Can I create an instance with a RHEL or Fedora 17 on Amazon using a
special credit? I just need to show it on Thursday.

I saw the videos by jbrooks and they are good for showing live
migrations... but
I will be at a Spanish-language event.


sorry, no better suggestion.
once we'll have a few more non EC2 servers at our disposal, we could
try and setup such a demo environment.

you should be able to setup an "all in one" deployment from ovirt
rpms using fedora 17, ovirt 3.1 with the allinone plugin installed
on a single machine with say, 8GB of RAM.

Regards

On Mon, Aug 6, 2012 at 3:33 AM, Itamar Heim <ih...@redhat.com> wrote:

 On 08/04/2012 02:07 AM, Rino Rondan wrote:

 Hi

 I want to know if you have a demo of a configured ovirt
system
 because I
 will be on this event

http://www.fcad.uner.edu.ar/destacadas/x-jornadas-nacionales-de-administracion-e-informatica




 ,
 and I need to show ovirt as open-source implementation .


 did you get any offline replies with info?
 for a demo, i suggest you install an instance.
 for screenshots you can ask people to send you some.
 also, jbrooks started working on videos:
http://blog.jebpages.com/archives/screencasting-ovirt/


 Regards
 --
 Rondan Rino
 Ambassador Fedora
 https://fedoraproject.org/wiki/User:Villadalmine









--
---
Rondan Rino
Certificado en LPIC-2
LPI ID:LPI000209832
Verification Code:gbblvwyfxu

Blog: http://www.itrestauracion.com.ar
Cv: http://cv.rinorondan.com.ar

Re: [Users] sanlock issues

2012-08-07 Thread Dennis Jacobfeuerborn
On 08/07/2012 04:08 PM, Jacob Wyatt wrote:
> Solved with a hack.
> 
> oVirt Node Hypervisor release 2.5.0 (2.0.fc17)
> 
> Couldn't start a VM because sanlock wasn't running.
> Sanlock wasn't running because wdmd wasn't running.
> wdmd wasn't running because the softdog kernel module wasn't loaded.
> 
> As I didn't know of another way of making the change persistent I edited 
> /config/usr/sbin/ifup and added "/sbin/modprobe softdog" to the top of the 
> script.
> 
> I really think it would be a good idea for the oVirt Node team to add in some 
> persistent script files that are run at various points in the boot process so 
> that people like me can add in hacks where needed to make it work.

Shouldn't the following issue have fixed this already?

https://bugzilla.redhat.com/show_bug.cgi?id=832935

Regards,
  Dennis


[Users] sanlock issues

2012-08-07 Thread Jacob Wyatt
Solved with a hack.

oVirt Node Hypervisor release 2.5.0 (2.0.fc17)

Couldn't start a VM because sanlock wasn't running.
Sanlock wasn't running because wdmd wasn't running.
wdmd wasn't running because the softdog kernel module wasn't loaded.

As I didn't know of another way of making the change persistent I edited 
/config/usr/sbin/ifup and added "/sbin/modprobe softdog" to the top of the 
script.

I really think it would be a good idea for the oVirt Node team to add in some 
persistent script files that are run at various points in the boot process so 
that people like me can add in hacks where needed to make it work.
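
A less hack-ish way to get the same effect might be a modules-load.d drop-in persisted with oVirt Node's persist tool -- this is only a sketch and assumes the node image honours /etc/modules-load.d and ships the persist command:

# echo softdog > /etc/modules-load.d/softdog.conf
# persist /etc/modules-load.d/softdog.conf
# lsmod | grep softdog        (after the next reboot, to verify the module is loaded)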


Re: [Users] sanlock issues

2012-08-07 Thread Jacob Wyatt
# grep -v ^#  /etc/libvirt/qemu-sanlock.conf
auto_disk_leases=0
require_lease_for_disks=0

I did try systemd-vdsmd reconfigure.


From: Mark Wu [wu...@linux.vnet.ibm.com]

What does  "grep -v ^#  /etc/libvirt/qemu-sanlock.conf"  say?

Have you tried `/usr/lib/systemd/systemd-vdsmd reconfigure` ?
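
With auto_disk_leases=0 and require_lease_for_disks=0, libvirt's sanlock plugin should not demand a lease for the disks, so if the "no leases specified" error persists, libvirtd may still be running with its old configuration. A rough follow-up -- the lock_manager value and the service names are assumptions about a vdsm-configured host:

# grep '^lock_manager' /etc/libvirt/qemu.conf        (expected: lock_manager = "sanlock")
# systemctl restart libvirtd
# systemctl restart vdsmd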






Re: [Users] sanlock issues

2012-08-07 Thread Mark Wu

On 08/07/2012 03:47 PM, Johan Kragsterman wrote:

Hi!

Since I'm a storage guy, I would like to know more about this! Are these 
watchdog/SANlock features LVM functionality, or is it a part of oVirt? Since I 
can't find any info about it in oVirt wiki documentation...

http://wiki.ovirt.org/wiki/SANLock -- hope it helps.



Since LUN's are mapped to all the machines in the cluster, I guess it is not on 
that level, but more on the Logical Volume level, since it is the LV that is 
used by the host.

I would like to get some deeper info about how the storage architecture is 
implemented, and what the storage strategy is for the future... Can someone 
enlighten me? Point me to some documentation?

Rgrds Johan

-users-boun...@ovirt.org wrote: -
To: Jacob Wyatt
From: Mark Wu
Sent by: users-boun...@ovirt.org
Date: 2012.08.07 09:14
Cc: "users@ovirt.org"
Subject: Re: [Users] sanlock issues

On 08/07/2012 02:35 AM, Jacob Wyatt wrote:

oVirt Node Hypervisor release 2.5.0 (2.0.fc17)

Can't start a VM.  Same error in any of /var/log/libvirt/qemu/vmname.log

libvir: Locking error : unsupported configuration: Read/write, exclusive 
access, disks were present, but no leases specified

I've tried several suggestions including:

modprobe softdog
systemctl restart wdmd.service
systemctl restart sanlock.service

I haven't changed the default configs but I'm heading there next.


What does  "grep -v ^#  /etc/libvirt/qemu-sanlock.conf"  say?

Have you tried `/usr/lib/systemd/systemd-vdsmd reconfigure` ?







Re: [Users] sanlock issues

2012-08-07 Thread Johan Kragsterman
Hi!

Since I'm a storage guy, I would like to know more about this! Are these 
watchdog/SANlock features LVM functionality, or is it a part of oVirt? Since I 
can't find any info about it in oVirt wiki documentation...

Since LUNs are mapped to all the machines in the cluster, I guess it is not on 
that level, but more on the Logical Volume level, since it is the LV that is 
used by the host.

I would like to get some deeper info about how the storage architecture is 
implemented, and what the storage strategy is for the future... Can someone 
enlighten me? Point me to some documentation?

Rgrds Johan

-users-boun...@ovirt.org wrote: -
To: Jacob Wyatt 
From: Mark Wu 
Sent by: users-boun...@ovirt.org
Date: 2012.08.07 09:14
Cc: "users@ovirt.org" 
Subject: Re: [Users] sanlock issues

On 08/07/2012 02:35 AM, Jacob Wyatt wrote:
> oVirt Node Hypervisor release 2.5.0 (2.0.fc17)
>
> Can't start a VM.  Same error in any of /var/log/libvirt/qemu/vmname.log
>
> libvir: Locking error : unsupported configuration: Read/write, exclusive 
> access, disks were present, but no leases specified
>
> I've tried several suggestions including:
>
> modprobe softdog
> systemctl restart wdmd.service
> systemctl restart sanlock.service
>
> I haven't changed the default configs but I'm heading there next.
What does  "grep -v ^#  /etc/libvirt/qemu-sanlock.conf"  say?

Have you tried `/usr/lib/systemd/systemd-vdsmd reconfigure` ?




Re: [Users] sanlock issues

2012-08-07 Thread Mark Wu

On 08/07/2012 02:35 AM, Jacob Wyatt wrote:

oVirt Node Hypervisor release 2.5.0 (2.0.fc17)

Can't start a VM.  Same error in any of /var/log/libvirt/qemu/vmname.log

libvir: Locking error : unsupported configuration: Read/write, exclusive 
access, disks were present, but no leases specified

I've tried several suggestions including:

modprobe softdog
systemctl restart wdmd.service
systemctl restart sanlock.service

I haven't changed the default configs but I'm heading there next.


What does  "grep -v ^#  /etc/libvirt/qemu-sanlock.conf"  say?

Have you tried `/usr/lib/systemd/systemd-vdsmd reconfigure` ?




[Users] sanlock issues

2012-08-06 Thread Jacob Wyatt
oVirt Node Hypervisor release 2.5.0 (2.0.fc17)

Can't start a VM.  Same error in any of /var/log/libvirt/qemu/vmname.log

libvir: Locking error : unsupported configuration: Read/write, exclusive 
access, disks were present, but no leases specified

I've tried several suggestions including:

modprobe softdog
systemctl restart wdmd.service
systemctl restart sanlock.service

I haven't changed the default configs but I'm heading there next.  