The novnc-server will translate WebSockets traffic to normal socket
traffic, so you don't have to expose the host IP to the end user;
she will only interact with the proxy.
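For reference, the relevant Sunstone settings look roughly like this (a fragment of the stock /etc/one/sunstone-server.conf from the 4.x series; the values shown are just the defaults and may differ on your install):

```yaml
# sunstone-server.conf fragment: the websocket proxy Sunstone starts
:vnc_proxy_port: 29876         # port the novnc/websocket proxy listens on
:vnc_proxy_support_wss: no     # switch to yes when Sunstone is served over HTTPS
:vnc_proxy_cert:               # certificate/key for wss, if enabled
:vnc_proxy_key:
```

With this in place the browser only ever talks to the proxy port on the frontend, never to the hypervisors' VNC ports directly.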
Cheers
On 10 February 2015 at 11:33, Nico Schottelius
nico-opennebula@schottelius.org wrote:
Hey,
I think I
Dear community,
Besides our annual OpenNebula Conference, we are planning to organize
Technology Day events in multiple cities globally during 2015. In the
shorter term we are planning to organize TechDays in:
* Prague, Czech Republic
* Dublin, Ireland
* Dallas, USA
* Chicago, USA
Please send us
Hey Daniel,
thanks for following up - I will lock down VNC ports to only
allow access from the frontend and give this a try today!
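One way to sketch that lockdown with iptables on a hypervisor node (the frontend address 192.0.2.10 and the port range 5900:6100 are assumptions, adjust both to your deployment):

```shell
# Accept VNC connections only from the OpenNebula frontend, drop the rest.
iptables -A INPUT -p tcp --dport 5900:6100 -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 5900:6100 -j DROP
```

Remember to persist the rules with your distribution's usual mechanism, or they are gone after a reboot.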
Cheers,
Nico
Daniel Molina [Fri, Feb 13, 2015 at 09:17:55AM +0100]:
The novnc-server will translate WebSockets traffic to normal socket
traffic, therefore you don't
Hi,
In this file you can check the headers used by the x509 auth
https://github.com/OpenNebula/one/blob/master/src/cloud/common/CloudAuth/X509CloudAuth.rb
and this is an old guide on how to set up this configuration in Apache:
http://community.opennebula.org/sunstone_x509
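The guide boils down to making Apache demand a client certificate and export its data so the x509 driver can read headers such as SSL_CLIENT_S_DN. A minimal mod_ssl fragment of that shape (the directives are standard mod_ssl; the CA path and verify depth are assumptions, adjust to your PKI):

```apache
SSLEngine on
SSLVerifyClient require
SSLVerifyDepth  3
SSLCACertificateFile /etc/ssl/certs/ca-bundle.pem
# Export the SSL_CLIENT_* variables and the client cert to the backend:
SSLOptions +StdEnvVars +ExportCertData
```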
Hope this helps
On 10
Hi,
I would suspect this happens when the probe times out somehow (due to
network issues, etc.) or it simply cannot detect the VM at the moment. Does the
VM remain in 'poweroff' state forever, or does it become 'running' again after
some time?
Ondra
From: Users
Hi Everyone,
Has anyone ever encountered an issue where a VM's state changes to POWEROFF a
few minutes (2 ~ 3 minutes) after RUNNING?
The actual problem is that the POWEROFF state is incorrect, as the VM is
actually running on the compute node. If you try to boot it back, it
will fail as that
I have found the solution. GATEWAY_IFACE must be set, but in upper case. I
don't know why. Any idea? If I put GATEWAY_IFACE=eth1 (eth1 is the real
name) it doesn't work, but GATEWAY_IFACE=ETH1 works...
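A plausible explanation (an educated guess, not verified against the vmcontext code): the contextualization scripts compose their variable names in upper case, e.g. ETH1_IP, ETH1_GATEWAY, so the interface reference in GATEWAY_IFACE has to match that spelling. In a VM template the context section would look like:

```
# VM template fragment (sketch): context keys and the interface
# reference are upper-case, even though the device itself is eth1.
CONTEXT = [
  NETWORK       = "YES",
  GATEWAY_IFACE = "ETH1" ]
```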
On Fri Feb 13 2015 at 10:03:33, Madko madk...@gmail.com wrote:
Hi,
It seems vmcontext
Hi
This problem is, as described by Ondra, due to a kind of race condition
between the boot process and the monitor probes. You may or may not see it
depending on timing, network conditions, and the like.
OpenNebula 4.10.2 includes some logic to deal with this and also to
automatically recover the VM when
I have had my OpenNebula 4.8 host up for a while with a single cluster
that has 150 hosts, one vnet, and a system and image datastore.
I am now adding hosts from a different vnet.
I want to make a second host + vnet cluster but still use
the same system and image datastores.
What's the right way to do
Yes, you can do:
Cluster A: Host_A0, Host_A1... + VNET_A0, VNET_A1...
Cluster B: Host_B0, Host_B1... + VNET_B0, VNET_B1...
Cluster Default: DS, DS_System
Then a VM that uses VNET_A0 + DS would be scheduled to Cluster A. Note
that using VNET_A0 constrains resources to Cluster A + Cluster
Default.
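That layout can be set up with the standard onecluster subcommands (the cluster and resource names below are placeholders):

```shell
onecluster create clusterA
onecluster addhost clusterA host_a0
onecluster addvnet clusterA vnet_a0

onecluster create clusterB
onecluster addhost clusterB host_b0
onecluster addvnet clusterB vnet_b0

# Do NOT run `onecluster adddatastore` for the shared datastores:
# leaving them in no cluster ("default") keeps them usable from both.
```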
I know if I just take the vnet and the datastore out of the cluster, and have
no clusters at all, then everything will work. I was hoping to have a cluster
structure of (host, vnet) pairings that could all share a common datastore.
However, from the documentation, it looks like if your
Hi
If both clusters have access to the same datastores, just move them out
of the first cluster. When a datastore or network is not assigned to
any cluster (cluster default), OpenNebula assumes it can be used with
any host (no matter which cluster the host is in).
BTW, although you do not need
Alberto Zuin - Liste li...@albertozuin.eu writes:
Hello all,
Hello,
just for information, I know there is a pre-compiled version of
OpenNebula in your repository, but in the official Debian repository
there is only an old version of OpenNebula for Wheezy (3.4) and only
the
Daniel Dehennin daniel.dehen...@baby-gnu.org writes:
[...]
It's a lot of work; for now lintian is far from being happy[3]
[...]
[3] cf. attachment
Missing attachment, sorry.
--
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8
OK here we go:
The VM in question is taking an image from image store 102 (currently in no
cluster),
vnet 0 (routable private) from cluster 100 "cloud worker",
and a number of hosts, including hosts #0 and #2, also part of cluster "cloud
worker".
The VM stays pending forever; the hold reason is below. It is
One more followup:
host 156 + vnet2 + ds 100/102, all outside the cluster, no problem
host 156 + vnet2 + ds 100/102, all in the cluster, no problem
host 156 and vnet2 in the cluster, DS outside of the cluster, problem.
SCHED_MESSAGE=Fri Feb 13 18:06:29 2015 : No system datastore meets
PS--if there are other VMs still launched and running from the time when the
datastore used to be part of
a cluster, could that confuse anything? Do I have to restart oned to clear
anything up?
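Given the observation above, one sketch of a workaround is to put the system and image datastores back into the cluster the hosts belong to (the IDs and the cluster name are taken from the messages above; double-check them with onedatastore list and onecluster list first):

```shell
# Re-attach the datastores to the cluster so the scheduler finds a
# system datastore that meets the requirements.
onecluster adddatastore "cloud worker" 100
onecluster adddatastore "cloud worker" 102
```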
Steve Timm
From: Steven C Timm
Sent: Friday, February