Re: [ovirt-users] Autostart VMS

2016-04-07 Thread Brett I. Holcomb
On Wed, 2016-04-06 at 21:44 +0200, Joop wrote:
> On 6-4-2016 21:10, Brett I. Holcomb wrote:
> > 
> > On Wed, 2016-04-06 at 13:42 -0400, Adam Litke wrote:
> > > 
> > > On 06/04/16 01:46 -0400, Brett I. Holcomb wrote:
> > > > 
> > > > In VMware we could set up guests to autostart when the host
> > > > started and define the order.  Is that doable in oVirt?  The only
> > > > thing I've seen is the watchdog, telling it to reset, but nothing
> > > > that allows me to define who starts up when and whether they
> > > > autostart.  I assume it's there but I must be missing it or
> > > > haven't found it in the web portal.
> > > 
> > > In oVirt guests aren't tied to a host by default (although you can
> > > set them to run only on a specific host if you want).  The closest
> > > thing I can think of would be the High Availability features
> > > (VM->Edit).  oVirt will try to restart highly available VMs if they
> > > go down.  You can also set the priority for migration and restart
> > > in that pane.  Hopefully a combination of host pinning and the high
> > > availability settings will get you close enough to where you want
> > > to be.
> > > 
> > > Otherwise, you could always do some scripting with the ovirt REST
> > > API
> > > using the SDK or CLI.
> > > 
> > If you had the extra VMware migration add-on, guests could move
> > between hosts as needed, so they were not tied to any host either,
> > but we could still set a startup order and specify auto or manual
> > start so that once the host started, the VMs were brought up as
> > specified no matter what host they were running on.
> > 
> > I am running a hosted-engine deployment with the Engine VM on the
> > host.
> > 
> > I set high availability on, did not pin to any host, and also set
> > the watchdog, which should reset the VMs if they go down, but I'm
> > not sure that will start them if the host comes up and the VMs are
> > not running.  I'll look at the CLI first.
> > 
> > It would be nice if oVirt added this feature, as it's really
> > required for large installations and a help for installations of any
> > size, even small ones.
> > 
> Maybe there is another way. It involves ovirt-shell and/or an SDK
> script. The idea is to create a custom property and set it to, for
> example, y:01 or n:02, then read that back when the host comes up and
> start the VMs that have 'y', using the number for ordering.
> If reading back the properties is a problem you might be able to
> 'use' the tagging feature to do something similar.
> 
> You can always create an RFE on the oVirt bug tracker.
> 
> Joop
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
I'll do that.  It's really a needed feature to make administration
easier.
I finally got the ovirt-shell connected with the help of this:
https://bugzilla.redhat.com/show_bug.cgi?id=1186365.  The silly thing
can't handle having no domain, and the error doesn't tell you that.
Oh, well.  We'll play with this and then check out the API.
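
For anyone trying the same thing later, here is a minimal sketch of
Joop's tagging idea using the Python SDK.  It is untested, assumes the
3.x SDK (ovirtsdk), and the engine URL, credentials and "autostart_NN"
tag convention are made-up examples, not anything oVirt ships:

#!/usr/bin/env python
# Hedged sketch: start VMs tagged "autostart_NN" in NN order after boot.
import re
import time
from ovirtsdk.api import API

api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal', password='secret',
          ca_file='/etc/pki/ovirt-engine/ca.pem')

def order_key(vm):
    # Look for a tag like "autostart_01" and sort by its number.
    for tag in vm.tags.list():
        m = re.match(r'autostart_(\d+)$', tag.get_name())
        if m:
            return int(m.group(1))
    return None

tagged = []
for vm in api.vms.list():
    key = order_key(vm)
    if key is not None:
        tagged.append((key, vm))

for key, vm in sorted(tagged, key=lambda pair: pair[0]):
    if vm.get_status().get_state() == 'down':
        vm.start()
        time.sleep(30)  # crude ordering delay; polling get_status() is better

api.disconnect()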
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted engine on gluster problem

2016-04-07 Thread Bond, Darryl
There seems to be a pretty severe bug with using hosted engine on gluster.

If the host that was used as the initial hosted-engine --deploy host goes away, 
the engine VM will crash and cannot be restarted until that host comes back.

This is regardless of which host the engine was currently running on.


The issue seems to be buried in the bowels of VDSM and is not an issue with 
gluster itself.

The gluster filesystem is still accessible from the host that was running the
engine. The issue has been submitted to bugzilla but the fix is some way off 
(4.1).


Can my hosted engine be converted to use NFS (using the gluster NFS server on 
the same filesystem) without rebuilding my hosted engine (i.e. changing
domainType=glusterfs to domainType=nfs)?

What effect would that have on the hosted-engine storage domain inside oVirt, 
i.e. would the same filesystem be mounted twice, or would it just break?


Will this actually fix the problem, or does it have the same issue when the hosted
engine is on NFS?
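
For reference, the setting in question lives in
/etc/ovirt-hosted-engine/hosted-engine.conf on each hosted-engine host;
the values below are illustrative examples rather than from my setup,
so check your own file:

storage=server.example.com:/engine
domainType=glusterfs    # the line that would become domainType=nfs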


Darryl






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can not access storage domain hosted_storage

2016-04-07 Thread Bond, Darryl
The workaround for this bug is here:
https://bugzilla.redhat.com/show_bug.cgi?id=1317699



From: users-boun...@ovirt.org  on behalf of Simone 
Tiraboschi 
Sent: Friday, 8 April 2016 1:30 AM
To: Richard Neuboeck; Roy Golan
Cc: users
Subject: Re: [ovirt-users] Can not access storage domain hosted_storage

On Thu, Apr 7, 2016 at 4:17 PM, Richard Neuboeck  wrote:
> Hi oVirt Users/Developers,
>
> I'm having trouble adding another host to a working hosted engine
> setup. Through the WebUI I try to add another host. The package
> installation and configuration processes seemingly run without
> problems. When the second host tries to mount the engine storage
> volume it halts with the WebUI showing the following message:
>
> 'Failed to connect Host cube-two to the Storage Domain hosted_engine'
>
> The mount fails, which leaves the host in status 'non operational'.
>
> Checking the vdsm.log on the newly added host shows that the mount
> attempt of the engine volume doesn't use -t glusterfs. On the other
> hand the VM storage volume (also a glusterfs volume) is mounted the
> right way.
>
> It seems the Engine configuration that is given to the second host
> lacks the vfs_type property. Without glusterfs given as the filesystem
> type, the system assumes an NFS mount and obviously fails.

It seems that the auto-import procedure in the engine didn't recognize
that the hosted-engine storage domain was on gluster and took it for
NFS.

Adding Roy here to take a look.


> Here are the relevant log lines showing the JSON reply to the
> configuration request, the working mount of the VM storage (called
> plexus) and the failing mount of the engine storage.
>
> ...
> jsonrpc.Executor/4::INFO::2016-04-07
> 15:45:53,043::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'0001-0001-0001-0001-03ce', conList=[{u'id':
> u'981cd3aa-052b-498a-914e-5e8f314357a8', u'connection':
> u'borg-sphere-one:/plexus', u'iqn': u'', u'user': u'', u'tpgt':
> u'1', u'vfs_type': u'glusterfs', u'password': '', u'port':
> u''}, {u'id': u'cceaa988-9607-4bef-8854-0e7a585720aa',
> u'connection': u'borg-sphere-one:/engine', u'iqn': u'', u'user':
> u'', u'tpgt': u'1', u'password': '', u'port': u''}],
> options=None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,062::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o
> backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/plexus
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_plexus (cwd None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,380::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -o backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/engine
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
> ...
>
> The problem seems to have been introduced since March 22nd. On this
> install I have added two additional hosts without problem. Three
> days ago I tried to reinstall the whole system for testing and
> documentation purposes but now am not able to add other hosts.
>
> All the installs follow the same documented procedure. I've verified
> several times that the problem exists with the components in the
> current 3.6 release repo as well as in the 3.6 snapshot repo.
>
> If I check the storage configuration of hosted_engine domain in the
> WebUI it shows glusterfs as VFS type.
>
> The initial mount during the hosted engine setup on the first host
> shows the correct parameters (vfs_type) in vdsm.log:
>
> Thread-42::INFO::2016-04-07
> 14:56:29,464::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID='----', conList=[{'id':
> 'b13ae31f-d66a-43a7-8aba-eaf4e62a6fb0', 'tpgt': '1', 'vfs_type':
> 'glusterfs', 'connection': 'borg-sphere-one:/engine', 'user':
> 'kvm'}], options=None)
> Thread-42::DEBUG::2016-04-07
> 14:56:29,591::fileUtils::143::Storage.fileUtils::(createdir)
> Creating directory:
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine mode: None
> Thread-42::DEBUG::2016-04-07
> 14:56:29,592::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
> Using bricks: ['borg-sphere-one', 'borg-sphere-two',
> 'borg-sphere-three']
> Thread-42::DEBUG::2016-04-07
> 14:56:29,592::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o
> backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/engine
> 

Re: [ovirt-users] Lots of "domain in problem" warnings

2016-04-07 Thread Nir Soffer
On Thu, Apr 7, 2016 at 5:05 PM,   wrote:
> Hi,
>
> Lately we're having a lot of events like these:
>
> 2016-04-07 14:54:25,247 WARN
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
> (org.ovirt.thread.pool-8-thread-2) [] domain
> '5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' in problem. vds:
> 'host7.domain.com'
> 2016-04-07 14:54:40,501 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
> (org.ovirt.thread.pool-8-thread-17) [] Domain
> '5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' recovered from problem. vds:
> 'host7.domain.com'
> 2016-04-07 14:54:40,501 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
> (org.ovirt.thread.pool-8-thread-17) [] Domain
> '5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' has recovered from problem.
> No active host in the DC is reporting it as problematic, so clearing the
> domain recovery timer.
> 2016-04-07 14:54:46,314 WARN
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
> (org.ovirt.thread.pool-8-thread-30) [] domain
> '5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' in problem. vds:
> 'host5.domain.com'
> 2016-04-07 14:55:01,589 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
> (org.ovirt.thread.pool-8-thread-32) [] Domain
> '5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' recovered from problem. vds:
> 'host5.domain.com'
> 2016-04-07 14:55:01,589 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
> (org.ovirt.thread.pool-8-thread-32) [] Domain
> '5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' has recovered from problem.
> No active host in the DC is reporting it as problematic, so clearing the
> domain recovery timer.
>
> Up until now it's been only one domain that I see with this warning, but
> this doesn't look good nevertheless. Not sure if related, but I can't find a
> disk with this UUID. How can I start debugging?
>
> This is oVirt 3.6.4.1-1, and using an iSCSI-based storage backend.

This may be related to this bug:
https://bugzilla.redhat.com/1081962

Running this tool on the vdsm log will give a better picture of what is
happening on the vdsm side:
https://github.com/oVirt/vdsm/blob/master/contrib/repoplot

You can see examples of the output in the bug:
https://bugzilla.redhat.com/attachment.cgi?id=1130967
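
As a side note, the 'UUID:iscsi01' string in those messages names a
storage domain (iscsi01), not a disk, which is why no disk carries that
UUID.  To fetch and run repoplot locally, something like the following
should work; the exact invocation is from memory, so check the script
itself if it complains:

  curl -O https://raw.githubusercontent.com/oVirt/vdsm/master/contrib/repoplot
  python repoplot /var/log/vdsm/vdsm.log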

We are working on improved monitoring that will eliminate this issue.


Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Restoring oVirt after hardware crash

2016-04-07 Thread Oved Ourfali
Hard to help without the logs.
Can you attach the engine log and server log?

Regards,
Oved
On Apr 7, 2016 4:37 PM, "Morten A. Middelthon"  wrote:

> Hi,
>
> the machine running my oVirt Engine recently died on me. I have been
> able to get most of it back up and running again on a new virtual
> machine with CentOS 6.x in another virtual environment. The local
> postgresql database is running, and I can log in via the web
> interface. All my VMs, storage domains, networks and hosts are
> visible, but marked as being offline/non-responsive. Is there a way I
> can re-add or re-enable my existing hosts on this restored manager?
>
> with regards,
>
> --
> Morten A. Middelthon
> Email: mor...@flipp.net
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error: Storage format V3 is not supported

2016-04-07 Thread Allon Mureinik
I've reproduced the issue on my env, and cooked up a patch that seems to
solve it for me, in case anyone wants to cherry-pick and help verify it:
https://gerrit.ovirt.org/#/c/55836
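
For anyone following the metadata workaround quoted below, here is a
rough sketch of recomputing the _SHA_CKSUM line after editing the file.
It reflects my reading of how vdsm checksums the metadata (sha1 over
the KEY=VALUE lines with newlines stripped, skipping the _SHA_CKSUM
line itself) -- verify against the procedure in the 2012 list post
linked below before trusting the result:

import hashlib

# Path reused from the thread below; adjust to your own domain UUID.
path = ('/mnt/export_ovirt/images/'
        '4be3f6ac-7946-4e7b-9ca2-11731c8ba236/dom_md/metadata')
csum = hashlib.sha1()
with open(path) as f:
    for line in f:
        if not line.startswith('_SHA_CKSUM'):
            csum.update(line.rstrip('\n').encode())
print('_SHA_CKSUM=' + csum.hexdigest())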


On Wed, Apr 6, 2016 at 4:06 AM, Alex R  wrote:

> Thank you!  This worked, though I wish I had documented the process
> better, because I am not sure exactly what I did that helped.
>
> # cat
> /mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/dom_md/metadata
>
> CLASS=Backup
> # changed
> DESCRIPTION=eport_storage
> IOOPTIMEOUTSEC=10
> LEASERETRIES=3
> LEASETIMESEC=60
> LOCKPOLICY=
> LOCKRENEWALINTERVALSEC=5
> POOL_UUID=
> # I removed this
> REMOTE_PATH=..com:/mnt/export_ovirt/images # I
> changed this to what is listed
> ROLE=Regular
> SDUUID=4be3f6ac-7946-4e7b-9ca2-11731c8ba236
> TYPE=NFS
> # changed
> VERSION=0
> changed from 3 to 0   ### I have tried this before with no success, so it
> must be a combination of other changes?
> _SHA_CKSUM=16dac1d1c915c4d30433f35dd668dd35f60dc22c   # I
> changed this to what was found in the logs
>
>
>
> -Alex
>
>
>
> On Sun, Apr 3, 2016 at 2:31 AM, Vered Volansky  wrote:
>
>> I've reported the issue:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1323462
>>
>> A verified workaround is to change the metadata Version to 0.
>>
>> The checksum will be invalidated by the change, so follow
>> http://lists.ovirt.org/pipermail/users/2012-April/007149.html if you
>> have any issues with adjusting it.
>>
>> Please let me know how this worked out for you.
>>
>> Regards,
>> Vered
>>
>> On Thu, Mar 31, 2016 at 5:28 AM, Alex R  wrote:
>>
>>> I am trying to import a domain that I have used as an export on a
>>> previous install.  The previous install was no older than v3.5 and was
>>> built with the all-in-one plugin.  Before destroying that system I took a
>>> portable drive and made an export domain to export my VMs and templates.
>>>
>>> The new system is up to date and was built as a hosted engine.  When I
>>> try to import the domain I get the following error:
>>>
>>> "Error while executing action: Cannot add Storage. Storage format V3 is
>>> not supported on the selected host version."
>>>
>>> I just need to recover the VMs.
>>>
>>> I connect the USB hard drive to the host and make an export directory
>>> just like I did on the old host.
>>>
>>> # ls -ld /mnt/export_ovirt
>>> drwxr-xr-x. 5 vdsm kvm 4096 Mar  6 11:27 /mnt/export_ovirt
>>>
>>> I have tried both doing an NFS mount
>>> # cat /etc/exports.d/ovirt.exports
>>> /home/engineha  127.0.0.1/32(rw,anonuid=36,anongid=36,all_squash)
>>> /mnt/backup-vm/ 10.3.1.0/24(rw,anonuid=36,anongid=36,all_squash)
>>> 127.0.0.1/32(rw,anonuid=36,anongid=36,all_squash)
>>>
>>> # cat
>>> /mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/dom_md/metadata
>>> CLASS=Backup
>>> DESCRIPTION=eport_storage
>>> IOOPTIMEOUTSEC=10
>>> LEASERETRIES=3
>>> LEASETIMESEC=60
>>> LOCKPOLICY=
>>> LOCKRENEWALINTERVALSEC=5
>>> POOL_UUID=053926e4-e63d-450e-8aa7-6f1235b944c6
>>> REMOTE_PATH=/mnt/export_ovirt/images
>>> ROLE=Regular
>>> SDUUID=4be3f6ac-7946-4e7b-9ca2-11731c8ba236
>>> TYPE=LOCALFS
>>> VERSION=3
>>> _SHA_CKSUM=2e6e203168bd84f3dc97c953b520ea8f78119bf0
>>>
>>> # ls -l
>>> /mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/master/vms/4873de49-9090-40b1-a21d-665633109aa2/4873de49-9090-40b1-a21d-665633109aa2.ovf
>>>
>>> -rw-r--r--. 1 vdsm kvm 9021 Mar  6 11:50
>>> /mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/master/vms/4873de49-9090-40b1-a21d-665633109aa2/4873de49-9090-40b1-a21d-665633109aa2.ovf
>>>
>>> Thanks,
>>> Alex
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can not access storage domain hosted_storage

2016-04-07 Thread Simone Tiraboschi
On Thu, Apr 7, 2016 at 4:17 PM, Richard Neuboeck  wrote:
> Hi oVirt Users/Developers,
>
> I'm having trouble adding another host to a working hosted engine
> setup. Through the WebUI I try to add another host. The package
> installation and configuration processes seemingly run without
> problems. When the second host tries to mount the engine storage
> volume it halts with the WebUI showing the following message:
>
> 'Failed to connect Host cube-two to the Storage Domain hosted_engine'
>
> The mount fails, which leaves the host in status 'non operational'.
>
> Checking the vdsm.log on the newly added host shows that the mount
> attempt of the engine volume doesn't use -t glusterfs. On the other
> hand the VM storage volume (also a glusterfs volume) is mounted the
> right way.
>
> It seems the Engine configuration that is given to the second host
> lacks the vfs_type property. Without glusterfs given as the filesystem
> type, the system assumes an NFS mount and obviously fails.

It seems that the auto-import procedure in the engine didn't recognize
that the hosted-engine storage domain was on gluster and took it for
NFS.

Adding Roy here to take a look.


> Here are the relevant log lines showing the JSON reply to the
> configuration request, the working mount of the VM storage (called
> plexus) and the failing mount of the engine storage.
>
> ...
> jsonrpc.Executor/4::INFO::2016-04-07
> 15:45:53,043::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'0001-0001-0001-0001-03ce', conList=[{u'id':
> u'981cd3aa-052b-498a-914e-5e8f314357a8', u'connection':
> u'borg-sphere-one:/plexus', u'iqn': u'', u'user': u'', u'tpgt':
> u'1', u'vfs_type': u'glusterfs', u'password': '', u'port':
> u''}, {u'id': u'cceaa988-9607-4bef-8854-0e7a585720aa',
> u'connection': u'borg-sphere-one:/engine', u'iqn': u'', u'user':
> u'', u'tpgt': u'1', u'password': '', u'port': u''}],
> options=None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,062::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o
> backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/plexus
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_plexus (cwd None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,380::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -o backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/engine
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
> ...
>
> The problem seems to have been introduced since March 22nd. On this
> install I have added two additional hosts without problem. Three
> days ago I tried to reinstall the whole system for testing and
> documentation purposes but now am not able to add other hosts.
>
> All the installs follow the same documented procedure. I've verified
> several times that the problem exists with the components in the
> current 3.6 release repo as well as in the 3.6 snapshot repo.
>
> If I check the storage configuration of hosted_engine domain in the
> WebUI it shows glusterfs as VFS type.
>
> The initial mount during the hosted engine setup on the first host
> shows the correct parameters (vfs_type) in vdsm.log:
>
> Thread-42::INFO::2016-04-07
> 14:56:29,464::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID='----', conList=[{'id':
> 'b13ae31f-d66a-43a7-8aba-eaf4e62a6fb0', 'tpgt': '1', 'vfs_type':
> 'glusterfs', 'connection': 'borg-sphere-one:/engine', 'user':
> 'kvm'}], options=None)
> Thread-42::DEBUG::2016-04-07
> 14:56:29,591::fileUtils::143::Storage.fileUtils::(createdir)
> Creating directory:
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine mode: None
> Thread-42::DEBUG::2016-04-07
> 14:56:29,592::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
> Using bricks: ['borg-sphere-one', 'borg-sphere-two',
> 'borg-sphere-three']
> Thread-42::DEBUG::2016-04-07
> 14:56:29,592::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o
> backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/engine
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
>
>
> I've already created a bug report, but since I didn't know where to
> put it I filed it as a VDSM bug, which it doesn't seem to be.
> https://bugzilla.redhat.com/show_bug.cgi?id=1324075
>
>
> I would really like to help resolve this problem. If there is
> anything I can test, please let me know. I appreciate any help in
> this matter.
>
> Currently I'm running an oVirt 3.6 

Re: [ovirt-users] Network & local disk on same host, live storage migration

2016-04-07 Thread Matthew Bohnsack
Thanks for your responses.

Is gaining the ability to have local and shared storage domains in the same
datacenter (and therefore host) on the roadmap? It's most certainly
functionality we require in our virtual environment.

On Thu, Apr 7, 2016 at 8:44 AM, Alexander Wels  wrote:

> On Thursday, April 07, 2016 07:47:59 AM Matthew Bohnsack wrote:
> > Hello.
> >
> > I've been doing some experimentation with oVirt 3.6 and CentOS 7.2 hosts,
> > and I have a few high level questions, related to what I've seen in my
> > testing:
> >
> > 1/ It seems that for a given host, it's impossible to have both a local
> > storage domain and a shared storage domain such as NFS.  Is this correct?
> > If so, is adding this capability on the roadmap somewhere?  I would
> really
> > like to have the ability for a single host to simultaneously support both
> > local and network disks.
> >
>
> Storage is defined on a data center level, not on a host level, so you are
> correct you cannot have local storage and shared storage in the same data
> center.
>
> > 2/ Does oVirt support any sort of live storage migration functionality?
> >
>
> Yes, if you go to the disks main tab, you can select the disk you want to
> migrate and click 'move'. This will give you a dialog that allows you to
> select the target storage domain and profile.
>
> >   2.A/ For example, say I have one host (host#1) with two separate NFS
> > storage domains (NFS#1 and NFS#2).  In this case,  can a VM on host#1
> > associated with NFS#1 be live migrated to NFS#2 storage?  Or can this
> sort
> > of storage movement only be accomplished by a shutdown of the VM and an
> > export/import/restart workflow?
> >
> >   2.B/ Is the scenario described in 2.A possible across two different
> hosts
> > that share both NFS storage domains?  That is, can a VM guest on
> > host#1/NFS#1 be live migrated to host#2/NFS#2?
> >
>
> See above answer, you can live migrate between storage domains, and since
> storage domains are at a data center level if both hosts are in the same
> data
> center, and both storage domains are in the same data center, you can live
> migrate the storage as well as the VMs. It will be 2 separate operations
> though.
>
> > 3/ Let's say that you set up a host (host#1) with a shared storage domain
> > (NFS#1) and a second host (host#2) with a local storage domain (local#2).
> > If you wanted to move a VM from host#1/NFS#1 to host#2/local#2, how would
> > you most easily accomplish this?  All I could come up with is: Add NFS
> > export domain to host#1, turn VM on host#1 off, export VM, add NFS import
> > domain to host#2, import VM on host#2 onto local#2, turn VM on on
> > host#2?
> >
>
> Since these hosts cannot be in the same data center (cannot mix shared and
> local storage), that is the only option I know of for this particular
> scenario.
>
> > Thanks for your help,
> >
> > -Matthew
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Can not access storage domain hosted_storage

2016-04-07 Thread Richard Neuboeck
Hi oVirt Users/Developers,

I'm having trouble adding another host to a working hosted engine
setup. Through the WebUI I try to add another host. The package
installation and configuration processes seemingly run without
problems. When the second host tries to mount the engine storage
volume it halts with the WebUI showing the following message:

'Failed to connect Host cube-two to the Storage Domain hosted_engine'

The mount fails, which leaves the host in status 'non operational'.

Checking the vdsm.log on the newly added host shows that the mount
attempt of the engine volume doesn't use -t glusterfs. On the other
hand the VM storage volume (also a glusterfs volume) is mounted the
right way.

It seems the Engine configuration that is given to the second host
lacks the vfs_type property. Without glusterfs given as the filesystem
type, the system assumes an NFS mount and obviously fails.
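
To illustrate: vdsm builds the mount command from that connection
dictionary, and when 'vfs_type' is missing there is no "-t glusterfs",
so mount(8) treats the "server:/path" spec as NFS.  A simplified sketch
of that logic -- not vdsm's actual code -- would be:

def mount_cmd(conn, mountpoint):
    cmd = ['/usr/bin/mount']
    if conn.get('vfs_type'):   # present for /plexus, absent for /engine
        cmd += ['-t', conn['vfs_type']]
    if conn.get('mnt_options'):
        cmd += ['-o', conn['mnt_options']]
    return cmd + [conn['connection'], mountpoint]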

Here are the relevant log lines showing the JSON reply to the
configuration request, the working mount of the VM storage (called
plexus) and the failing mount of the engine storage.

...
jsonrpc.Executor/4::INFO::2016-04-07
15:45:53,043::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID=u'0001-0001-0001-0001-03ce', conList=[{u'id':
u'981cd3aa-052b-498a-914e-5e8f314357a8', u'connection':
u'borg-sphere-one:/plexus', u'iqn': u'', u'user': u'', u'tpgt':
u'1', u'vfs_type': u'glusterfs', u'password': '', u'port':
u''}, {u'id': u'cceaa988-9607-4bef-8854-0e7a585720aa',
u'connection': u'borg-sphere-one:/engine', u'iqn': u'', u'user':
u'', u'tpgt': u'1', u'password': '', u'port': u''}],
options=None)
...
jsonrpc.Executor/4::DEBUG::2016-04-07
15:45:53,062::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
/usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
-t glusterfs -o
backup-volfile-servers=borg-sphere-two:borg-sphere-three
borg-sphere-one:/plexus
/rhev/data-center/mnt/glusterSD/borg-sphere-one:_plexus (cwd None)
...
jsonrpc.Executor/4::DEBUG::2016-04-07
15:45:53,380::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
/usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
-o backup-volfile-servers=borg-sphere-two:borg-sphere-three
borg-sphere-one:/engine
/rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
...

The problem seems to have been introduced since March 22nd. On this
install I have added two additional hosts without problem. Three
days ago I tried to reinstall the whole system for testing and
documentation purposes but now am not able to add other hosts.

All the installs follow the same documented procedure. I've verified
several times that the problem exists with the components in the
current 3.6 release repo as well as in the 3.6 snapshot repo.

If I check the storage configuration of hosted_engine domain in the
WebUI it shows glusterfs as VFS type.

The initial mount during the hosted engine setup on the first host
shows the correct parameters (vfs_type) in vdsm.log:

Thread-42::INFO::2016-04-07
14:56:29,464::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID='----', conList=[{'id':
'b13ae31f-d66a-43a7-8aba-eaf4e62a6fb0', 'tpgt': '1', 'vfs_type':
'glusterfs', 'connection': 'borg-sphere-one:/engine', 'user':
'kvm'}], options=None)
Thread-42::DEBUG::2016-04-07
14:56:29,591::fileUtils::143::Storage.fileUtils::(createdir)
Creating directory:
/rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine mode: None
Thread-42::DEBUG::2016-04-07
14:56:29,592::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
Using bricks: ['borg-sphere-one', 'borg-sphere-two',
'borg-sphere-three']
Thread-42::DEBUG::2016-04-07
14:56:29,592::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
/usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
-t glusterfs -o
backup-volfile-servers=borg-sphere-two:borg-sphere-three
borg-sphere-one:/engine
/rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)


I've already created a bug report, but since I didn't know where to
put it I filed it as a VDSM bug, which it doesn't seem to be.
https://bugzilla.redhat.com/show_bug.cgi?id=1324075


I would really like to help resolve this problem. If there is
anything I can test, please let me know. I appreciate any help in
this matter.

Currently I'm running an oVirt 3.6 snapshot installation on CentOS
7.2. The two storage volumes are both replica 3 on separate gluster
storage nodes.

Thanks in advance!
Richard

-- 
/dev/null



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Lots of "domain in problem" warnings

2016-04-07 Thread nicolas

Hi,

Lately we're having a lot of events like these:

2016-04-07 14:54:25,247 WARN  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] 
(org.ovirt.thread.pool-8-thread-2) [] domain 
'5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' in problem. vds: 
'host7.domain.com'
2016-04-07 14:54:40,501 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] 
(org.ovirt.thread.pool-8-thread-17) [] Domain 
'5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' recovered from problem. 
vds: 'host7.domain.com'
2016-04-07 14:54:40,501 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] 
(org.ovirt.thread.pool-8-thread-17) [] Domain 
'5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' has recovered from 
problem. No active host in the DC is reporting it as problematic, so 
clearing the domain recovery timer.
2016-04-07 14:54:46,314 WARN  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] 
(org.ovirt.thread.pool-8-thread-30) [] domain 
'5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' in problem. vds: 
'host5.domain.com'
2016-04-07 14:55:01,589 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] 
(org.ovirt.thread.pool-8-thread-32) [] Domain 
'5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' recovered from problem. 
vds: 'host5.domain.com'
2016-04-07 14:55:01,589 INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] 
(org.ovirt.thread.pool-8-thread-32) [] Domain 
'5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' has recovered from 
problem. No active host in the DC is reporting it as problematic, so 
clearing the domain recovery timer.


Up until now it's been only one domain that I see with this warning, but
this doesn't look good nevertheless. Not sure if related, but I can't
find a disk with this UUID. How can I start debugging?


This is oVirt 3.6.4.1-1, and using an iSCSI-based storage backend.

Thanks.





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Restoring oVirt after hardware crash

2016-04-07 Thread Morten A. Middelthon

Hi,

the machine running my oVirt Engine recently died on me. I have been
able to get most of it back up and running again on a new virtual
machine with CentOS 6.x in another virtual environment. The local
postgresql database is running, and I can log in via the web interface.
All my VMs, storage domains, networks and hosts are visible, but marked
as being offline/non-responsive. Is there a way I can re-add or
re-enable my existing hosts on this restored manager?


with regards,

--
Morten A. Middelthon
Email: mor...@flipp.net

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Network & local disk on same host, live storage migration

2016-04-07 Thread Alexander Wels
On Thursday, April 07, 2016 07:47:59 AM Matthew Bohnsack wrote:
> Hello.
> 
> I've been doing some experimentation with oVirt 3.6 and CentOS 7.2 hosts,
> and I have a few high level questions, related to what I've seen in my
> testing:
> 
> 1/ It seems that for a given host, it's impossible to have both a local
> storage domain and a shared storage domain such as NFS.  Is this correct?
> If so, is adding this capability on the roadmap somewhere?  I would really
> like to have the ability for a single host to simultaneously support both
> local and network disks.
> 

Storage is defined on a data center level, not on a host level, so you are 
correct you cannot have local storage and shared storage in the same data 
center.

> 2/ Does oVirt support any sort of live storage migration functionality?
> 

Yes, if you go to the disks main tab, you can select the disk you want to 
migrate and click 'move'. This will give you a dialog that allows you to 
select the target storage domain and profile.
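
For completeness, a hedged sketch of the same move through the Python
SDK (3.x), assuming the SDK exposes the REST "move" action on disks;
the disk alias and target domain name are examples:

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal', password='secret',
          ca_file='/etc/pki/ovirt-engine/ca.pem')

disk = api.disks.get(alias='myvm_Disk1')
# Ask the engine to migrate the disk image to the other storage domain.
disk.move(params.Action(storage_domain=params.StorageDomain(name='NFS2')))
api.disconnect()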

>   2.A/ For example, say I have one host (host#1) with two separate NFS
> storage domains (NFS#1 and NFS#2).  In this case,  can a VM on host#1
> associated with NFS#1 be live migrated to NFS#2 storage?  Or can this sort
> of storage movement only be accomplished by a shutdown of the VM and an
> export/import/restart workflow?
> 
>   2.B/ Is the scenario described in 2.A possible across two different hosts
> that share both NFS storage domains?  That is, can a VM guest on
> host#1/NFS#1 be live migrated to host#2/NFS#2?
> 

See above answer, you can live migrate between storage domains, and since 
storage domains are at a data center level if both hosts are in the same data 
center, and both storage domains are in the same data center, you can live 
migrate the storage as well as the VMs. It will be 2 separate operations 
though.

> 3/ Let's say that you set up a host (host#1) with a shared storage domain
> (NFS#1) and a second host (host#2) with a local storage domain (local#2).
> If you wanted to move a VM from host#1/NFS#1 to host#2/local#2, how would
> you most easily accomplish this?  All I could come up with is: Add NFS
> export domain to host#1, turn VM on host#1 off, export VM, add NFS import
> domain to host#2, import VM on host#2 onto local#2, turn VM on on host#2?
> 

Since these hosts cannot be in the same data center (cannot mix shared and
local storage), that is the only option I know of for this particular
scenario.
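
And if you script the export/import workflow from question 3, the
export step via the Python SDK (3.x) could look roughly like this --
again a sketch, with made-up VM and export-domain names:

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/ovirt-engine/api',
          username='admin@internal', password='secret',
          ca_file='/etc/pki/ovirt-engine/ca.pem')

vm = api.vms.get(name='myvm')
vm.stop()  # real code should poll until the VM reports 'down'
vm.export(params.Action(storage_domain=params.StorageDomain(name='export1')))
api.disconnect()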

> Thanks for your help,
> 
> -Matthew

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt + OpenVSwitch

2016-04-07 Thread Dan Kenigsberg
On Wed, Apr 06, 2016 at 11:57:08AM -0400, Martin Mucha wrote:
> Hi,
> 
> I think OpenVSwitch should be supported in 4.0.
> 
> M.
> 
> - Original Message -
> > Has anybody succeeded in installing Ovirt 3.6 with hosted engine on a
> > server which uses OpenVSwitch for the network config?
> > 
> > I believe my issue is that Ovirt wants to control the network to create
> > a bridge for its management, and I want it to just use whatever network
> > is available on the host without trying to be clever about it. I was
> > able to tweak it to get to the final stage, where it fails waiting for
> > the engine to start.

Martin is right, but we should understand your usage of OpenVSwitch
first.

Do you intend to use it for networking of ovirt VMs? For something else?

How did you tweak "it" (ovirt? hosted engine? ovs?)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Network & local disk on same host, live storage migration

2016-04-07 Thread Matthew Bohnsack
Hello.

I've been doing some experimentation with oVirt 3.6 and CentOS 7.2 hosts,
and I have a few high level questions, related to what I've seen in my
testing:

1/ It seems that for a given host, it's impossible to have both a local
storage domain and a shared storage domain such as NFS.  Is this correct?
If so, is adding this capability on the roadmap somewhere?  I would really
like to have the ability for a single host to simultaneously support both
local and network disks.

2/ Does oVirt support any sort of live storage migration functionality?

  2.A/ For example, say I have one host (host#1) with two separate NFS
storage domains (NFS#1 and NFS#2).  In this case,  can a VM on host#1
associated with NFS#1 be live migrated to NFS#2 storage?  Or can this sort
of storage movement only be accomplished by a shutdown of the VM and an
export/import/restart workflow?

  2.B/ Is the scenario described in 2.A possible across two different hosts
that share both NFS storage domains?  That is, can a VM guest on
host#1/NFS#1 be live migrated to host#2/NFS#2?

3/ Let's say that you set up a host (host#1) with a shared storage domain
(NFS#1) and a second host (host#2) with a local storage domain (local#2).
If you wanted to move a VM from host#1/NFS#1 to host#2/local#2, how would
you most easily accomplish this?  All I could come up with is: Add NFS
export domain to host#1, turn VM on host#1 off, export VM, add NFS import
domain to host#2, import VM on host#2 onto local#2, turn VM on on host#2?

Thanks for your help,

-Matthew
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4 engine and node

2016-04-07 Thread Yedidyah Bar David
On Thu, Apr 7, 2016 at 10:17 AM, Zeev Mindali  wrote:

> Dear all,
>
>
>
> Where can I find a step-by-step oVirt engine install guide for version 4?
>

Just yesterday the first alpha version was announced:

http://lists.ovirt.org/pipermail/users/2016-April/038886.html

You can also try the nightly snapshot:

http://www.ovirt.org/develop/dev-process/install-nightly-snapshot/


> And an oVirt node 4 install guide as well?
>

I think it's this one:

http://resources.ovirt.org/pub/ovirt-4.0_alpha1/iso/ovirt-node-ng-installer/

You can also try the jenkins build:

http://jenkins.ovirt.org/job/ovirt-node-ng_master_build-artifacts-fc22-x86_64/

See also:

http://www.ovirt.org/develop/projects/node/4.0/


>
>
> I want to test it ….
>

Enjoy.

Keep in mind that it's far from ready.



>
>
>
>
>
> Zeev Mindali
> Windows & Mobile Developer
> Chip PC, 5 Nahum Hat St.
> Haifa
> Israel 3508504
>
> Tel+972-4-8501121
> Fax   +972-4-8501088
> Cell   +972-52-4043142
> Email ze...@chippc.com
> Web  www.chippc.com
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt reports

2016-04-07 Thread Shirly Radco
Hi,

Please check again now whether you see data from recent days in the
"host_hourly_history" table.
What reports are you running, and for which period?
Are all host-related reports empty? Can you please attach screenshots?
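
One quick way to check, assuming the default DWH database name
ovirt_engine_history (run on the engine/DWH machine):

su - postgres -c "psql ovirt_engine_history -c 'SELECT max(history_datetime) FROM host_hourly_history'"

If that timestamp is recent, collection works and the problem is on the
reports side.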

Best regards,
Shirly Radco

Shirly Radco
BI Software Engineer
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109


On Wed, Apr 6, 2016 at 8:55 PM, Fernando Fuentes 
wrote:

> Didi,
>
> Yes. I still get empty reports. :(
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
>
> On Wed, Apr 6, 2016, at 02:08 AM, Yedidyah Bar David wrote:
> > On Wed, Apr 6, 2016 at 9:51 AM, Shirly Radco  wrote:
> > > Hi,
> > >
> > > No. You don't need to reinstall.
> > >
> > > There should be a cron job that restarts the dwh service after it
> > > is down for an hour.
> > > Didi, in what cases will it not start?
> >
> > The log, posted earlier in this thread, seems to show it's actually
> > running:
> >
> > >
> > > Best regards,
> > > Shirly
> > >
> > > Shirly Radco
> > > BI Software Engineer
> > > Red Hat Israel Ltd.
> > > 34 Jerusalem Road
> > > Building A, 4th floor
> > > Ra'anana, Israel 4350109
> > >
> > >
> > > On Tue, Apr 5, 2016 at 6:49 PM, Fernando Fuentes  >
> > > wrote:
> > >>
> > >> Team,
> > >>
> > >> Digging deeper I found that I do have data and can create reports
> > >> up to Oct of 2015. Somewhere in Nov 2015 it stopped, hence why I
> > >> can't see recent data.
> > >> A good note to the list is that November is right around the month
> > >> I did the update to 3.5.4.2-1.el6.
> > >> Do I have to re-install ovirt-engine-reports?
> > >>
> > >> Regards,
> > >>
> > >> --
> > >> Fernando Fuentes
> > >> ffuen...@txweather.org
> > >> http://www.txweather.org
> > >>
> > >>
> > >>
> > >> On Tue, Apr 5, 2016, at 10:23 AM, Fernando Fuentes wrote:
> > >>
> > >> As requested all dumps and info are at:
> > >>
> > >> http://pastebin.com/RMSP8HFB
> > >>
> > >> All systems are running the guest agent.
> > >>
> > >> Regards,
> > >>
> > >> --
> > >> Fernando Fuentes
> > >> ffuen...@txweather.org
> > >> http://www.txweather.org
> > >>
> > >>
> > >>
> > >> On Tue, Apr 5, 2016, at 03:38 AM, Shirly Radco wrote:
> > >>
> > >> Hi Fernando,
> > >>
> > >> Please check guest agent is running on all hosts.
> > >> Please also check if you see data from recent days in table
> > >> "host_hourly_history".
> > >>
> > >> Best regards,
> > >> Shirly Radco
> > >>
> > >>
> > >>
> > >> Shirly Radco
> > >> BI Software Engineer
> > >> Red Hat Israel Ltd.
> > >> 34 Jerusalem Road
> > >> Building A, 4th floor
> > >> Ra'anana, Israel 4350109
> > >>
> > >>
> > >> On Tue, Apr 5, 2016 at 9:02 AM, Yedidyah Bar David 
> > >> wrote:
> > >>
> > >> On Mon, Apr 4, 2016 at 8:53 PM, Fernando Fuentes <
> ffuen...@darktcp.net>
> > >> wrote:
> > >> > Didi,
> > >> >
> > >> > As requested.
> > >> >
> > >> >
> > >> > [root@ovirt-dev ~]# service ovirt-engine-dwhd status
> > >> > ovirt-engine-dwhd (pid  32410) is running...
> > >> > [root@ovirt-dev ~]#
> > >> >
> > >> > The log is over at pastebin to prevent a huge email/
> > >> >
> > >> > http://pastebin.com/hPcYCKVE
> > >> >
> > >> > Hope that is ok.
> > >>
> > >> The log looks mostly ok to me.
> >
> > The end of it looked like this:
> >
> > 2015-11-01
> >
> 02:00:29|YigZCz|UkJGNp|IMHBxO|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can
> > not sample data, oVirt Engine is not updating the statistics. Please
> > check your oVirt Engine status.|9704
> > 2016-04-03 14:57:41|ETL Service Stopped
> > 2016-04-03 15:12:41|ETL Service Started
> >
> > From this, I understood that:
> > 1. Many months ago, either there was some problem (e.g. dwhd could not
> > connect to engine db, or not see updates there, or whatever) or simply
> > there was no new data (engine was down, or simply nothing changed (is
> > this possible?)).
> > 2. Then things became ok for several months (or at least, nothing
> > caused dwhd to say otherwise)
> > 3. Then dwhd was restarted, perhaps for an upgrade, and still no new
> > errors.
> >
> > Fernando: Do you still only get empty reports?
> >
> > >>
> > >> Was this a normal, default setup?
> > >>
> > >> Please post the output of:
> > >>
> > >> su - postgres -c 'psql -c \\l'
> > >>
> > >> su - postgres -c "psql -c 'SELECT pg_database.datname,
> > >> pg_size_pretty(pg_database_size(pg_database.datname)) FROM
> > >> pg_database'"
> > >>
> > >> su - postgres -c "psql ovirt_engine_reports -c 'select
> > >> connectionurl,username from jijdbcdatasource'"
> > >>
> > >> grep DATABASE /etc/ovirt-engine/engine.conf.d/*
> > >> /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*
> > >>
> > >> grep dbName /var/lib/ovirt-engine-reports/build-conf/master.properties
> > >>
> > >> Adding Shirly.
> > >> --
> > >> Didi
> > >>
> > >>
> > >> ___
> > >> Users mailing list
> > >> Users@ovirt.org
> > >> 

[ovirt-users] ovirt 4 engine and node

2016-04-07 Thread Zeev Mindali
Dear all,

Where can I find a step-by-step oVirt engine install guide for version 4,
and an oVirt node 4 install guide as well?

I want to test it.


Zeev Mindali
Windows & Mobile Developer
Chip PC, 5 Nahum Hat St.
Haifa
Israel 3508504

Tel+972-4-8501121
Fax   +972-4-8501088
Cell   +972-52-4043142
Email ze...@chippc.com
Web  www.chippc.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-shell issue

2016-04-07 Thread Juan Hernández
On 04/06/2016 07:28 PM, Nathanaël Blanchet wrote:
> Hello,
> in an interactive shell, I can successfully execute such a command:
> add network --name brv13 --vlan-id 13  --datacenter-identifier Cines 
> --description A_FORM
> but the same as an argument of ovirt-shell,
> # ovirt-shell -E "add network --name brv13 --vlan-id 13 
> --datacenter-identifier Cines --description A_FORM"
> it leads me to "datacenter Cines does not exist."
> 
> of course, datacenter Cines does exist!
> 
> What's wrong there?
> 

That command should fail in both the interactive mode and with the "-E"
option. The problem is that the "--datacenter-identifier" option expects
an identifier, not a name. You should use "--datacenter-name" instead:

  $ ovirt-shell -E "add network --name brv13 --vlan-id 13
--datacenter-name Cines --description A_FORM"
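
If you ever do need the identifier form, the datacenter's UUID can be
read from the shell first; something like this should print it (syntax
from memory, so double-check with "help show"):

  $ ovirt-shell -E "show datacenter Cines"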

-- 
Business address: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Registered in the Mercantile Registry of Madrid – C.I.F. B82657941 - Red Hat S.L.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users