[Users] Cannot connect to VM via browser if engine was not in /etc/hosts

2013-06-06 Thread lof yer
I connect to https://192.168.1.111 and connect to the VM; the
remote-viewer shows up but fails to show the VM desktop.
Is it an HTTPS problem?
Can I connect to the VM without modifying /etc/hosts?
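
remote-viewer is launched with a console file handed out by the engine, which typically references hosts by name, so the client machine must be able to resolve the engine's FQDN. A minimal resolution check, assuming "engine.example.com" as a placeholder for whatever hostname the engine was set up with:

```shell
# Check whether this client can resolve the engine's hostname.
# "engine.example.com" is a placeholder -- substitute the FQDN your
# oVirt engine was installed with.
ENGINE_FQDN="engine.example.com"

if getent hosts "$ENGINE_FQDN" >/dev/null; then
    echo "$ENGINE_FQDN resolves"
else
    echo "$ENGINE_FQDN does not resolve; add a line like this to /etc/hosts:"
    echo "192.168.1.111  $ENGINE_FQDN"
fi
```

The cleaner long-term fix is a proper DNS record for the engine, so clients don't need /etc/hosts entries at all.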
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] SD again ...

2013-06-06 Thread Alessandro Bianchi

  
  
Hi all,

Another problem with SD (these SDs drive me crazy!)

I've attached the Export Domain to a Local domain and successfully
exported a VM.

Then I placed the Export SD in Maintenance.

Then I asked oVirt to Detach the Domain, and that seems impossible.

I see the mount point has been unmounted on the node, but oVirt
doesn't mark the SD as unattached ...

I tried to reactivate it, with no luck (it's not mounted at the
moment).

Any advice on how to get the Export domain available again?

I'm on Fedora 18.

Here is the log:
  
  2013-06-06 13:03:12,676 INFO
  [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
  (pool-3-thread-50) [59909ba8] Running command:
  DetachStorageDomainFromPoolCommand internal: false. Entities
  affected : ID: e79cd423-ae17-4f8b-9f53-28d851cc9822 Type: Storage
  2013-06-06 13:03:12,676 INFO
  [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
  (pool-3-thread-50) [59909ba8] Start detach storage domain
  2013-06-06 13:03:12,684 INFO
  [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
  (pool-3-thread-50) [59909ba8] Detach storage domain: before
  connect
  2013-06-06 13:03:12,689 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
  (pool-3-thread-47) [69a827a7] START,
  ValidateStorageServerConnectionVDSCommand(HostName = nodo1, HostId
  = 3156bdac-ebfb-44cf-bea6-53d668b74a10, storagePoolId =
  ----, storageType = NFS,
  connectionList = [{ id: 45085cbf-da10-4852-9d85-754707d20a92,
  connection: 172.16.0.5:/home/external/migration, iqn: null,
  vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans:
  null, nfsTimeo: null };]), log id: 1b41b932
  2013-06-06 13:03:12,695 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
  (pool-3-thread-47) [69a827a7] FINISH,
  ValidateStorageServerConnectionVDSCommand, return:
  {45085cbf-da10-4852-9d85-754707d20a92=0}, log id: 1b41b932
  2013-06-06 13:03:12,695 INFO
  [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
  (pool-3-thread-47) [69a827a7] Running command:
  ConnectStorageToVdsCommand internal: true. Entities affected :
  ID: aaa0----123456789aaa Type: System
  2013-06-06 13:03:12,697 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
  (pool-3-thread-47) [69a827a7] START,
  ConnectStorageServerVDSCommand(HostName = nodo1, HostId =
  3156bdac-ebfb-44cf-bea6-53d668b74a10, storagePoolId =
  ----, storageType = NFS,
  connectionList = [{ id: 45085cbf-da10-4852-9d85-754707d20a92,
  connection: 172.16.0.5:/home/external/migration, iqn: null,
  vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans:
  null, nfsTimeo: null };]), log id: 3f7041da
  2013-06-06 13:03:12,738 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
  (pool-3-thread-47) [69a827a7] FINISH,
  ConnectStorageServerVDSCommand, return:
  {45085cbf-da10-4852-9d85-754707d20a92=100}, log id: 3f7041da
  2013-06-06 13:03:12,739 ERROR
  [org.ovirt.engine.core.bll.storage.NFSStorageHelper]
  (pool-3-thread-47) [69a827a7] The connection with details
  172.16.0.5:/home/external/migration failed because of error code
  100 and error message is: generalexception
  2013-06-06 13:03:12,741 ERROR
  [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
  (pool-3-thread-47) [69a827a7] Transaction rolled-back for command:
  org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand.
  2013-06-06 13:03:12,741 INFO
  [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
  (pool-3-thread-50) [59909ba8] Detach storage domain: after
  connect
  2013-06-06 13:03:12,742 INFO
  [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
  (pool-3-thread-50) [59909ba8] START,
  DetachStorageDomainVDSCommand( storagePoolId =
  d76c9edf-34cb-48eb-b53b-32d27bedc26a, ignoreFailoverLimit = false,
  compatabilityVersion = null, storageDomainId =
  e79cd423-ae17-4f8b-9f53-28d851cc9822, masterDomainId =
  ----, masterVersion = 1, force =
  false), log id: 6e692e6d
  2013-06-06 13:03:14,327 INFO
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
  (DefaultQuartzScheduler_Worker-3) No string for UNASSIGNED type.
  Use default Log
  2013-06-06 13:03:14,922 ERROR
  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
  (pool-3-thread-50) [59909ba8] Failed in 
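
Error code 100 ("generalexception") from ConnectStorageServerVDSCommand usually means the NFS mount itself failed on the node. A quick manual check from the node might look like this, using the server and path from the log above (the scratch mount point is only for testing, not oVirt's own /rhev tree):

```shell
# Manually verify the NFS export the engine failed to connect to.
# Server and path are taken from the engine log above.
NFS_EXPORT="172.16.0.5:/home/external/migration"
MNT=$(mktemp -d)

showmount -e 172.16.0.5                      # is the export still published?
sudo mount -t nfs "$NFS_EXPORT" "$MNT" \
    && echo "mount OK" \
    && sudo umount "$MNT"
rmdir "$MNT"
```

If the manual mount hangs or fails, the problem is on the NFS server side rather than in oVirt.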

Re: [Users] SD Again

2013-06-06 Thread Alessandro Bianchi

  
  
Hi guys

I solved it by manually remounting the Export SD after restarting the
NFS server and then detaching the export domain.

I wonder if someone knows why these things happen ...

Best regards
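
For the record, the manual workaround described above roughly amounts to the following (the service name and mount path are assumptions for a Fedora 18 / systemd setup; adjust to your environment):

```shell
# On the NFS server: restart the NFS service (Fedora 18 uses systemd).
sudo systemctl restart nfs-server

# On the node: remount the export domain where vdsm expects it,
# then retry Detach from the engine UI.
sudo mount -t nfs 172.16.0.5:/home/external/migration \
    "/rhev/data-center/mnt/172.16.0.5:_home_external_migration"
```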

  



Re: [Users] SD Again

2013-06-06 Thread Alessandro Bianchi

  
  

  
  
On 06/06/2013 13:56, Maor Lipchuk wrote:

  
  
Hi Alessandro, Is there a possibility someone else is using the export
domain in another DC or setup?

Regards,
Maor

On 06/06/2013 02:25 PM, Alessandro Bianchi wrote:


  Hi guys

I solved it by manually remounting the Export SD after restarting the NFS
server and then detaching the export domain

I wonder if someone knows why these things happen ...

Best regards




  


Hi, and thank you for your answer.

No: it's not possible, since I'm the only admin of the oVirt nodes
(3 nodes holding about 14 VMs), and I attached the Export SD, then
exported the VM and got the lock on the export domain.

It's really funny that I sometimes get into trouble with SDs: they
don't come up or don't go down (or they simply don't like me :-)

All my nodes and the engine host are Fedora 18 with all the latest
updates applied (the engine is also checked daily for updates), except
for the kernel, due to this nasty bug:
https://bugzilla.redhat.com/show_bug.cgi?id=902012

Am I the only one fighting with SDs?

If so, it may be some sort of network/NFS/"X-file" problem; otherwise
I don't know what to investigate ...

Thanks again

Best regards

Alessandro Bianchi
  
-- 

SkyNet SRL
Via Maggiate 67/a - 28021 Borgomanero (NO) - tel. +39 0322-836487/834765
- fax +39 0322-836608
http://www.skynet.it
Ministerial authorization n. 197
The information contained in this message is private and confidential,
and any form of disclosure is forbidden. If you are not the intended
recipient of this message, please delete and destroy it without
disclosing it, kindly informing us. For any information please contact
i...@skynet.it (company e-mail). Ref. D.L. 196/2003
  

  



Re: [Users] SD Again

2013-06-06 Thread Alessandro Bianchi

  
  

  
  
On 06/06/2013 15:55, users-requ...@ovirt.org wrote:

  
  
Message: 5
Date: Thu, 06 Jun 2013 15:38:48 +0300
From: Dafna Ron d...@redhat.com
To: users@ovirt.org
Subject: Re: [Users] SD Again
Message-ID: 51b082d8.5040...@redhat.com
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

You can attach full engine and vdsm logs and I'll try to debug it (I 
need to see the first umount).



On 06/06/2013 02:25 PM, Alessandro Bianchi wrote:


   Hi guys

 I solved it by manually remounting the Export SD after restarting the NFS 
 server and then detaching the export domain

 I wonder if someone knows why these things happen ...

 Best regards


  

Hope I'm providing the right stuff.

I can send you the full files, but I suppose these are the relevant
parts.

vdsm.log
  
  Thread-281877::DEBUG::2013-06-06
  12:36:49,932::BindingXMLRPC::161::vds::(wrapper) [172.16.0.5]
  Thread-281877::DEBUG::2013-06-06
  12:36:49,932::task::568::TaskManager.Task::(_updateState)
Task=`c0d1c115-cb3d-4f39-8c9b-448401097921`::moving from state
init -> state preparing
  Thread-281877::INFO::2013-06-06
  12:36:49,933::logUtils::41::dispatcher::(wrapper) Run and protect:
  disconnectStorageServer(domType=1,
  spUUID='----',
  conList=[{'connection': '172.16.0.5:/home/external/migration',
  'iqn': '', 'portal': '', 'user': '', 'password': '**', 'id':
  '45085cbf-da10-4852-9d85-754707d20a92', 'port': ''}],
  options=None)
  Thread-281877::DEBUG::2013-06-06
12:36:49,933::misc::84::Storage.Misc.excCmd::(lambda)
'/usr/bin/sudo -n /usr/bin/umount -f -l
/rhev/data-center/mnt/172.16.0.5:_home_external_migration' (cwd
None)
  Thread-281877::DEBUG::2013-06-06
  12:36:50,012::misc::1054::SamplingMethod::(__call__) Trying to
  enter sampling method (storage.sdc.refreshStorage)
  Thread-281877::DEBUG::2013-06-06
  12:36:50,012::misc::1056::SamplingMethod::(__call__) Got in to
  sampling method
  Thread-281877::DEBUG::2013-06-06
  12:36:50,012::misc::1054::SamplingMethod::(__call__) Trying to
  enter sampling method (storage.iscsi.rescan)
  Thread-281877::DEBUG::2013-06-06
  12:36:50,013::misc::1056::SamplingMethod::(__call__) Got in to
  sampling method
  Thread-281877::DEBUG::2013-06-06
  12:36:50,013::misc::84::Storage.Misc.excCmd::(lambda)
  '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
  Thread-281877::DEBUG::2013-06-06
  12:36:50,021::misc::84::Storage.Misc.excCmd::(lambda)
  FAILED: err = 'iscsiadm: No session found.\n'; rc
  = 21
  Thread-281877::DEBUG::2013-06-06
  12:36:50,021::misc::1064::SamplingMethod::(__call__) Returning
  last result
  Thread-281877::DEBUG::2013-06-06
  12:36:52,029::misc::84::Storage.Misc.excCmd::(lambda)
  '/usr/bin/sudo -n /sbin/multipath' (cwd None)
  Thread-281877::DEBUG::2013-06-06
  12:36:52,188::misc::84::Storage.Misc.excCmd::(lambda)
  SUCCESS: err = ''; rc = 0
  Thread-281877::DEBUG::2013-06-06
  12:36:52,188::lvm::477::OperationMutex::(_invalidateAllPvs)
  Operation 'lvm invalidate operation' got the operation mutex
  Thread-281877::DEBUG::2013-06-06
  12:36:52,189::lvm::479::OperationMutex::(_invalidateAllPvs)
  Operation 'lvm invalidate operation' released the operation mutex
  Thread-281877::DEBUG::2013-06-06
  12:36:52,189::lvm::488::OperationMutex::(_invalidateAllVgs)
  Operation 'lvm invalidate operation' got the operation mutex
  Thread-281877::DEBUG::2013-06-06
  12:36:52,189::lvm::490::OperationMutex::(_invalidateAllVgs)
  Operation 'lvm invalidate operation' released the operation mutex
  Thread-281877::DEBUG::2013-06-06
  12:36:52,189::lvm::508::OperationMutex::(_invalidateAllLvs)
  Operation 'lvm invalidate operation' got the operation mutex
  Thread-281877::DEBUG::2013-06-06
  12:36:52,189::lvm::510::OperationMutex::(_invalidateAllLvs)
  Operation 'lvm invalidate operation' released the operation mutex
  Thread-281877::DEBUG::2013-06-06
  12:36:52,189::misc::1064::SamplingMethod::(__call__) Returning
  last result
  Thread-281877::INFO::2013-06-06
  12:36:52,189::logUtils::44::dispatcher::(wrapper) Run and protect:
  disconnectStorageServer, Return response: {'statuslist':
  [{'status': 0, 'id': '45085cbf-da10-4852-9d85-754707d20a92'}]}
  Thread-281877::DEBUG::2013-06-06
  12:36:52,190::task::1151::TaskManager.Task::(prepare)
  Task=`c0d1c115-cb3d-4f39-8c9b-448401097921`::finished:
  {'statuslist': [{'status': 0, 'id':
  '45085cbf-da10-4852-9d85-754707d20a92'}]}
  Thread-281877::DEBUG::2013-06-06
  12:36:52,190::task::568::TaskManager.Task::(_updateState)
  

[Users] Outage window :: resources.ovirt.org / Mailman / etc. :: 2013-06-07 03:00 - 11:00 UTC

2013-06-06 Thread Karsten 'quaid' Wade
There is going to be an outage of resources.ovirt.org for up to 2 hours
sometime during an 8-hour service window.

The outage will occur 2013-06-07 between 03:00 and 11:00 UTC. To view in
your local time:

date -d '2013-06-07 03:00 UTC'
date -d '2013-06-07 11:00 UTC'

== Details ==

From the Linode service bulletin/reminder:

This is just a friendly reminder that your maintenance window starts at
8PM PDT tonight and lasts until 4AM PDT.

Downtime from this maintenance is expected to be less than 120 minutes,
however please note that the entire maintenance window may be used if
required. Your Linode will be gracefully powered down and rebooted
during the maintenance. Services not configured to start on a reboot
will need to be manually started.
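
A sketch of the post-reboot check implied by that last sentence, for systemd hosts (the unit names below are examples; list whatever actually runs on the machine):

```shell
# After the reboot, spot services that did not come back because they
# are not enabled at boot. Unit names here are illustrative only.
for unit in httpd mailman; do
    systemctl is-enabled "$unit" >/dev/null 2>&1 \
        || echo "$unit is not enabled -- start it manually, or: systemctl enable $unit"
done
```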

Additional information regarding this maintenance event can be found
here: https://blog.linode.com/2013/05/17/fremont-upgrades/ 

== Affected services ==

* resources.ovirt.org
* Mailman (lists.ovirt.org)
* some backup services

== Not-affected services ==

* www.ovirt.org / wiki.ovirt.org
* gerrit.ovirt.org
* jenkins.ovirt.org
* Anything at AlterWay and RackSpace facilities
** Foreman
** Puppet
** Jenkins slaves
** ...

== Future plans ==

We'll be migrating all services from this host and decommissioning it ASAP.




