Re: [ovirt-users] Creating an Instance

2015-10-18 Thread Budur Nagaraju
Below are the logs:



 tail -f /var/log/vdsm
vdsm/ vdsm-reg/
[root@pbuovirt3 ~]# tail -f /var/log/vdsm/vdsm.log
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 173, in _configure_broker_conn
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py",
line 129, in get
Exception: Configuration value not found:
file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=sdUUID
Thread-178420::DEBUG::2015-10-19
03:17:31,101::stompReactor::162::yajsonrpc.StompServer::(send) Sending
response
Thread-25::DEBUG::2015-10-19
03:17:31,112::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
if=/rhev/data-center/mnt/10.204.207.171:_var_lib_exports_iso/342d943d-bccb-49eb-abf5-be9f5a2afbb5/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-25::DEBUG::2015-10-19
03:17:31,123::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n342 bytes (342 B) copied,
0.000262612 s, 1.3 MB/s\n'; <rc> = 0
Thread-178421::DEBUG::2015-10-19
03:17:31,131::stompReactor::162::yajsonrpc.StompServer::(send) Sending
response
Thread-74698::DEBUG::2015-10-19
03:17:31,221::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
Thread-178422::DEBUG::2015-10-19
03:17:34,156::stompReactor::162::yajsonrpc.StompServer::(send) Sending
response
Thread-178423::DEBUG::2015-10-19
03:17:37,164::stompReactor::162::yajsonrpc.StompServer::(send) Sending
response
Thread-32442::DEBUG::2015-10-19
03:17:37,824::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
if=/rhev/data-center/mnt/10.204.207.171:_home_export__domain/1484ea07-4269-44c4-a503-fa6bf43d8bd9/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-32442::DEBUG::2015-10-19
03:17:37,834::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n347 bytes (347 B) copied,
0.000355762 s, 975 kB/s\n'; <rc> = 0
Thread-26::DEBUG::2015-10-19
03:17:39,764::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
if=/rhev/data-center/mnt/10.204.206.10:_ovirt/49d4a9cd-946d-41e0-a7ae-f2620f010302/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-26::DEBUG::2015-10-19
03:17:39,773::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n329 bytes (329 B) copied,
0.00037456 s, 878 kB/s\n'; <rc> = 0
Thread-178424::DEBUG::2015-10-19
03:17:40,172::stompReactor::162::yajsonrpc.StompServer::(send) Sending
response
Thread-25::DEBUG::2015-10-19
03:17:41,131::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
if=/rhev/data-center/mnt/10.204.207.171:_var_lib_exports_iso/342d943d-bccb-49eb-abf5-be9f5a2afbb5/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-25::DEBUG::2015-10-19
03:17:41,141::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n342 bytes (342 B) copied,
0.000363463 s, 941 kB/s\n'; <rc> = 0
Thread-178425::DEBUG::2015-10-19
03:17:43,179::stompReactor::162::yajsonrpc.StompServer::(send) Sending
response
Thread-74187::DEBUG::2015-10-19
03:17:44,816::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
Thread-74194::DEBUG::2015-10-19
03:17:44,816::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
Thread-178426::DEBUG::2015-10-19
03:17:46,187::task::595::Storage.TaskManager.Task::(_updateState)
Task=`5e671cc5-9f6d-457b-8ad1-2f41e898aa56`::moving from state init ->
state preparing
Thread-178426::INFO::2015-10-19
03:17:46,187::logUtils::44::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-178426::INFO::2015-10-19
03:17:46,188::logUtils::47::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {u'342d943d-bccb-49eb-abf5-be9f5a2afbb5':
{'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
'0.000363463', 'lastCheck': '5.0', 'valid': True},
u'1484ea07-4269-44c4-a503-fa6bf43d8bd9': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.000355762', 'lastCheck': '8.3',
'valid': True}, u'49d4a9cd-946d-41e0-a7ae-f2620f010302': {'code': 0,
'actual': True, 'version': 3, 'acquired': True, 'delay': '0.00037456',
'lastCheck': '6.4', 'valid': True}}
Thread-178426::DEBUG::2015-10-19
03:17:46,188::task::1191::Storage.TaskManager.Task::(prepare)
Task=`5e671cc5-9f6d-457b-8ad1-2f41e898aa56`::finished:
{u'342d943d-bccb-49eb-abf5-be9f5a2afbb5': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.000363463', 'lastCheck': '5.0',
'valid': True}, u'1484ea07-4269-44c4-a503-fa6bf43d8bd9': {'code': 0,
'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000355762',
'lastCheck': '8.3', 'valid': True},
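The repeated traceback at the top of the log ends in "Configuration value not found:
file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=sdUUID", i.e. the hosted-engine HA
client cannot find the storage-domain UUID in its config file. As a first, hedged check
(the key name and file path are taken straight from the exception, nothing else is assumed):

# Does the hosted-engine config actually contain the key the HA client is asking for?
grep -n 'sdUUID' /etc/ovirt-hosted-engine/hosted-engine.conf
# If the key is missing, the exception above can be expected to repeat on every
# broker connection attempt.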

Re: [ovirt-users] This VM is not managed by the engine

2015-10-18 Thread Maor Lipchuk



- Original Message -
> From: "Nir Soffer" 
> To: "Jaret Garcia" 
> Cc: "users" , "Maor Lipchuk" 
> Sent: Sunday, October 18, 2015 10:14:15 PM
> Subject: Re: [ovirt-users] This VM is not managed by the engine
> 
> On Sun, Oct 18, 2015 at 7:00 PM, Jaret Garcia  wrote:
> > Hi everyone,
> >
> > A few weeks ago we had a problem with the SPM and all hosts in the cluster
> > got stuck in contending; we restarted the hosts one by one, and the issue was
> > solved. However, we didn't notice that one server, even though it never stopped
> > running, somehow changed its state, and then no changes could be made to
> > the VM. We tried to add more RAM and we saw the message "Cannot run VM. This
> > VM is not managed by the engine",
> 
> I would open a bug about this, and attach engine and vdsm logs showing the
> timeframe of this event.
> 
> > so we SSHed into the VM and rebooted it, and
> > once we did that the VM never came back
> 
> Sure, if the engine does not know this vm, it will never restart it. The
> libvirt vm is not persistent; the engine keeps the vm info in the engine
> database and keeps the vm up on some host.
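
As a hedged, read-only check on the host that was running it (nothing oVirt-specific is
assumed here), you can see that the vdsm-created libvirt domain is transient:

# Read-only libvirt connection; a transient domain is listed only while it is running
virsh -r list --all
# Being transient, it has no persistent definition to come back from after a reboot.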
> 
> > , we still see the VM in the engine
> > administration portal, but it does not show any information regarding network,
> > disk, and so on.
> 
> Please attach an engine db dump to the bug, so we can understand what "does
> not show any information" means.
> 
> > We created another VM to replace the services of the one we
> > lost; however, we need to recover the files in the lost VM. We believe the
> > image should be in the storage, but we haven't found a way to recover it.
> > Some time ago we came across a similar situation, but at that time it was an
> > NFS data domain, so it was easier for us to go inside the storage server and
> > search for the VM ID to scp the image and mount it somewhere else. This time
> > the storage is iSCSI, and even though we found that the hosts mount the target in
> > /rhev/data-center/mnt/blockSD/   we only see there the active images for the
> > cluster. Can anyone point us to how we can recover the lost image? We know the
> > VM ID and the Disk ID from oVirt.
> 
> To recover the images, you need the image id. If you don't see it in the
> engine
> ui, you can try to search in the engine database.
> (Adding Maor to help with finding the image id in the database)
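
A hedged sketch of such a database lookup (database name, table and column names are
what engine versions of that era use, but they may differ on your setup; 'lost-vm' is a
placeholder for the VM name shown in the administration portal):

# Run on the engine machine; maps a VM name to its disk (image group) and volume ids.
su - postgres -c "psql engine -c \"
    SELECT v.vm_name, i.image_group_id AS disk_id, i.image_guid AS volume_id
    FROM images i
    JOIN vm_device d ON d.device_id = i.image_group_id
    JOIN vm_static v ON v.vm_guid = d.vm_id
    WHERE v.vm_name = 'lost-vm';\""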

Hi Jaret,

If you know the image id and you don't see the disk in the UI, you can try to
register it.
Please take a look at
http://www.ovirt.org/Features/ImportStorageDomain#Register_an_unregistered_disk
for how to add an unregistered disk.
Let me know if that helps.
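
If the REST API is more convenient than the UI, the call that page describes should look
roughly like the following (engine URL, credentials and ids are placeholders, and the
exact form may differ between versions):

# Register an unregistered disk into a storage domain via the REST API
curl -k -u admin@internal:password \
     -H "Content-Type: application/xml" \
     -X POST \
     -d '<disk id="DISK-UUID"/>' \
     "https://engine.example.com/api/storagedomains/STORAGE-DOMAIN-UUID/disks;unregistered"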

Regards,
Maor


> 
> The pool id can be found on the host in /rhev/data-center - there
> should be one directory,
> its name is the pool id. If you have more than one, use the one which
> is not empty.
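
A minimal way to list it on the host (the output below is illustrative only):

# The UUID-named, non-empty directory under /rhev/data-center is the pool id
ls /rhev/data-center/
# 591475db-6fa9-455d-9c05-7f6e30fb06d5  hsm-tasks  mnt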
> 
> # Assuming this value (taken from my test setup)
> 
> pool_id = 591475db-6fa9-455d-9c05-7f6e30fb06d5
> image_id = 5b10b1b9-ee82-46ee-9f3d-3659d37e4851
> 
> Once you found the image id, do:
> 
> # Update lvm metadata daemon
> 
> pvscan --cache
> 
> # Find the volumes
> 
> # lvs -o lv_name,vg_name,tags | awk '/IU_/ {print $1,$2}'
> 2782e797-e49a-4364-99d7-d7544a42e939 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318
> 4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318
> 
> Now we know that:
> domain_id = 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318
> 
> # Activate the lvs
> 
> lvchange -ay
> 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/2782e797-e49a-4364-99d7-d7544a42e939
> lvchange -ay
> 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
> 
> # Find the top volume by running qemu-img info on all the lvs
> 
> # qemu-img info
> /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/2782e797-e49a-4364-99d7-d7544a42e939
> image:
> /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/2782e797-e49a-4364-99d7-d7544a42e939
> file format: qcow2
> virtual size: 8.0G (8589934592 bytes)
> disk size: 0
> cluster_size: 65536
> Format specific information:
> compat: 0.10
> 
> # qemu-img info
> /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
> image:
> /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
> file format: qcow2
> virtual size: 8.0G (8589934592 bytes)
> disk size: 0
> cluster_size: 65536
> backing file:
> ../5b10b1b9-ee82-46ee-9f3d-3659d37e4851/2782e797-e49a-4364-99d7-d7544a42e939
> (actual path:
> /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/../5b10b1b9-ee82-46ee-9f3d-3659d37e4851/2782e797-e49a-4364-99d7-d7544a42e939)
> backing file format: qcow2
> Format specific information:
> compat: 0.10
> 
> The top volume is the one with the longest "backing file" chain (the volume
> that no other volume uses as a backing file).
> In this case, it is
> /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
> 
> So:
> volume_id = 4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
> 
> # Prepare the image to create the links in 
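
The archived message is cut off at this point. Purely as a hedged sketch of one manual
alternative (not necessarily the step Nir had in mind; all ids reuse the example values
from this thread, so substitute your own), the relative backing-file path can be made
resolvable by recreating the image directory layout, after which the disk can be copied out:

VG=6c77adb1-74fc-4fa9-a0ac-3b5a4b789318
IMG=5b10b1b9-ee82-46ee-9f3d-3659d37e4851
mkdir -p /var/tmp/recover/$IMG
ln -s /dev/$VG/2782e797-e49a-4364-99d7-d7544a42e939 /var/tmp/recover/$IMG/
ln -s /dev/$VG/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12 /var/tmp/recover/$IMG/
# The top volume's backing file (../$IMG/<base volume>) now resolves, so the
# whole chain can be flattened into a standalone file:
qemu-img convert -O qcow2 \
    /var/tmp/recover/$IMG/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12 \
    /var/tmp/recovered-disk.qcow2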

Re: [ovirt-users] Ha cluster 3 nodes but 1 slow

2015-10-18 Thread Ravishankar N



On 10/18/2015 07:27 PM, Nicolas LIENARD wrote:

Hey Nir

What about 
https://gluster.readthedocs.org/en/release-3.7.0/Features/afr-arbiter-volumes/ 
?


Regards
Nico


On 18 October 2015 15:12:23 GMT+02:00, Nir Soffer
wrote:


On Sat, Oct 17, 2015 at 12:45 PM, Nicolas LIENARD  
wrote:

Hi. Currently, I have 3 nodes, 2 in the same DC and a third in
another DC. They are all bridged together through a VPN. I
know a cluster needs at least 3 nodes to satisfy the quorum.




Just adding a 3rd node (without actually using it for 3 way replication) 
might not help in preventing split-brains. gluster has client-quorum and 
server-quorum. Have a look at 
http://comments.gmane.org/gmane.comp.file-systems.gluster.user/22609 for 
some information.
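
A hedged example of turning those on for an existing replica volume (the volume name
"vmstore" is a placeholder):

# Client-side quorum: writes are allowed only while a majority of bricks is reachable
gluster volume set vmstore cluster.quorum-type auto
# Server-side quorum: bricks are stopped when the peer quorum is lost
gluster volume set vmstore cluster.server-quorum-type server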


If you are indeed using it as a replica-3, then it is better to have all
3 nodes in the same DC. Gluster clients send every write() to all
bricks of the replica (and wait for their responses too), so if one of
them is in another DC, it can slow down writes due to network latency.


My question is whether I can have my VMs balanced across the 2
fast nodes with HA and glusterfs replica 2.




Replica 2 definitely provides HA, but you have a higher chance of files
ending up in split-brain if there are frequent network disconnects, which
is why replica 3 with client-quorum set to 'auto' is better for
preventing split-brains.
Arbiter volumes are a kind of sweet spot between replica-2 and
replica-3 that can also prevent split-brains. The link shared earlier in
the thread describes them and how to create one.
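
For completeness, a hedged sketch of creating such an arbiter volume with the
GlusterFS 3.7 syntax from that link (host names and brick paths are placeholders):

# Replica 3 volume whose third brick holds only metadata (the arbiter)
gluster volume create vmstore replica 3 arbiter 1 \
    node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore-arbiter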


Regards,
Ravi


gluster replica 2 is not supported.

Nir




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Question about the ovirt-engine-sdk-java

2015-10-18 Thread Michael Pasternak
Hi Salifou,
Actually, the Java SDK intentionally hides transport-level internals so
developers can stay in the Java domain. If your headers are static, the easiest
way would be to use a reverse proxy in the middle to intercept the requests.

Can you tell me why you need this?
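
A hedged sketch of what such a proxy could look like with nginx (the header values come
from the example request below; the port, paths and backend URL are placeholders, not a
tested setup):

# Write a minimal nginx config that injects static headers into every API request
cat > /etc/nginx/conf.d/ovirt-header-proxy.conf <<'EOF'
server {
    listen 8080;
    location / {
        proxy_set_header ID "us...@ad.xyz.com";
        proxy_set_header PASSWORD "Pwssd";
        proxy_set_header TARGET "kobe";
        proxy_pass https://vm0.smalick.com;
    }
}
EOF
# Then point the Java SDK at this proxy instead of the engine directly.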
 


 On Friday, October 16, 2015 1:14 AM, Salifou Sidi M. Malick 
 wrote:
   

Hi Michael,

I have a question about the ovirt-engine-sdk-java.

Is there a way to add custom request headers to each RHEVM API call?

Here is an example of a request that I would like to do:

$ curl -v -k \
          -H "ID: us...@ad.xyz.com" \
          -H "PASSWORD: Pwssd" \
          -H "TARGET: kobe" \
          https://vm0.smalick.com/api/hosts


I would like to add ID, PASSWORD and TARGET as HTTP request headers.

Thanks,
Salifou



  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ha cluster 3 nodes but 1 slow

2015-10-18 Thread Yaniv Kaul
Perhaps using
https://gluster.readthedocs.org/en/release-3.7.0/Features/afr-arbiter-volumes/
?
This has not been tested, AFAIK.
Y.

On Sat, Oct 17, 2015 at 12:45 PM, Nicolas LIENARD 
wrote:

> Hi
>
> Currently, I have 3 nodes, 2 in the same DC and a third in another DC.
>
> They are all bridged together through a vpn.
>
> I know a cluster needs at least 3 nodes to satisfy the quorum.
>
> My question is whether I can have my VMs balanced across the 2 fast nodes
> with HA and glusterfs replica 2.
>
> And use the slow third node to satisfy quorum, and gluster
> geo-replication to act as a backup.
>
> Let me know if this is technically suitable with oVirt.
>
> Thanks a lot
> Regards
> Nico
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ha cluster 3 nodes but 1 slow

2015-10-18 Thread Nir Soffer
On Sat, Oct 17, 2015 at 12:45 PM, Nicolas LIENARD  wrote:
> Hi
>
> Currently, I have 3 nodes, 2 in the same DC and a third in another DC.
>
> They are all bridged together through a vpn.
>
> I know a cluster needs at least 3 nodes to satisfy the quorum.
>
> My question is whether I can have my VMs balanced across the 2 fast nodes
> with HA and glusterfs replica 2.

gluster replica 2 is not supported.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] integrate iSCSI and FC on the same oVirt datacenter

2015-10-18 Thread Nir Soffer
On Fri, Oct 16, 2015 at 1:31 PM, Kapetanakis Giannis
 wrote:
> On 16/10/15 01:01, Nir Soffer wrote:
>>
>>
>>
>> >
>> > I thought something like this might work:
>> >
>> > node[1]: FC - ISCSI <-> node[2]: ISCSI - FC
>>
>> I dont follow - how do you want to share your fc storage over iscsi?
>>
>> And what are these nodes? Storage nodes? Hypervisors?
>>
>> Nir
>>
>
> The initial idea was to share FC over the IP network. All nodes are oVirt nodes
> (hypervisors).
>
> One node sees the FC storage share as a block device. Take this block device
> and share it over iSCSI to node[2].
> Node[2] will then see the block device and create a new FC target to use
> itself...
>
> I know, it's science fiction...

I still don't understand the problem you are trying to solve.

Can you explain the network topology?

- Do you have an FC storage server and an FC switch?
- How many nodes do you have with an FC HBA?

Do you want to add nodes without an FC HBA that should still consume
the FC storage?

Or do you want to add nodes with an FC HBA, but without an FC switch?

Nir

> After reading a bit on the subject, another solution would be FCoE VN2VN, but
> this probably requires specific hardware: switches with DCB and FIP snooping,
> which I don't have.
>
> Any idea for extending FC to nodes without FC HBAs is welcome.
>
> G
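
Purely to illustrate the relaying idea described above (this is not a supported or
recommended oVirt setup, and every device path and IQN below is a placeholder),
re-exporting an FC-backed block device over iSCSI with LIO/targetcli would look roughly
like:

# On node[1]: expose the FC LUN as an iSCSI target that node[2] can log in to
targetcli /backstores/block create name=fc_relay dev=/dev/mapper/mpatha
targetcli /iscsi create iqn.2015-10.com.example:node1-fc-relay
targetcli /iscsi/iqn.2015-10.com.example:node1-fc-relay/tpg1/luns create /backstores/block/fc_relay
targetcli /iscsi/iqn.2015-10.com.example:node1-fc-relay/tpg1/acls create iqn.2015-10.com.example:node2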
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users