The novnc-server will translate WebSockets traffic to normal socket
traffic, therefore you don't have to expose the host IP to the final user;
she will interact only with the proxy.
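To illustrate the kind of translation the proxy performs, here is a minimal RFC 6455 frame encoder/decoder. This is a sketch only, not websockify's actual code; the 16-bit length limit and function names are my own simplifications:

```python
import struct

def decode_client_frame(frame: bytes) -> bytes:
    """Decode one masked, unfragmented WebSocket frame as sent by a
    browser client (RFC 6455). Payloads up to 2**16-1 bytes only."""
    b1, b2 = frame[0], frame[1]
    assert b1 & 0x80, "FIN bit expected (no fragmentation handled here)"
    assert b2 & 0x80, "client-to-server frames must be masked"
    length = b2 & 0x7F
    offset = 2
    if length == 126:
        (length,) = struct.unpack(">H", frame[2:4])
        offset = 4
    mask = frame[offset:offset + 4]
    data = frame[offset + 4:offset + 4 + length]
    # Unmasking yields the raw bytes that get relayed to the VNC socket.
    return bytes(b ^ mask[i % 4] for i, b in enumerate(data))

def encode_client_frame(payload: bytes, mask: bytes = b"\x12\x34\x56\x78") -> bytes:
    """Build a masked binary frame as a browser client would send it."""
    header = bytes([0x82])  # FIN + binary opcode
    if len(payload) < 126:
        header += bytes([0x80 | len(payload)])
    else:
        header += bytes([0x80 | 126]) + struct.pack(">H", len(payload))
    masked = bytes(b ^ mask[i % 4] for i, b in enumerate(payload))
    return header + mask + masked
```

The proxy essentially does this unmasking in one direction (and the reverse framing in the other) while shuttling bytes between the browser and the hypervisor's VNC port.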
Cheers
On 10 February 2015 at 11:33, Nico Schottelius <
nico-opennebula@schottelius.org> wrote:
> Hey,
>
> I think
Hi,
In this file you can check the headers used by the x509 auth
https://github.com/OpenNebula/one/blob/master/src/cloud/common/CloudAuth/X509CloudAuth.rb
and this is an old guide on how to set up this configuration in Apache:
http://community.opennebula.org/sunstone_x509
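For illustration only, a hypothetical Apache fragment in the spirit of that guide. The location and header names here are assumptions, so check them against X509CloudAuth.rb and the guide itself before using anything like this:

```apache
# Hypothetical sketch: require a client certificate and forward it so
# the x509 auth driver can read it. Verify the exact header name the
# driver expects before relying on this.
<Location /sunstone>
    SSLVerifyClient require
    SSLOptions +ExportCertData +StdEnvVars
    RequestHeader set SSL_CLIENT_CERT "%{SSL_CLIENT_CERT}s"
</Location>
```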
Hope this helps
On 10 Fe
Hi,
It seems vmcontext scripts try to guess the gateway IP as soon as any
interface has a gateway set.
Here is my case:
one vm with eth0 to internal admin network, and eth1 to wan. Only eth1 has
a gateway set.
When the vmcontext init script starts, it finds that there is a gateway (but
on eth1), a
Dear community,
Besides our annual OpenNebula Conference, we are planning to organize
Technology Day events in multiple cities globally during 2015. In the
shorter term we are planning to organize TechDays in:
* Prague, Czech Republic
* Dublin, Ireland
* Dallas, USA
* Chicago, USA
Please send us
Hi:
I don't really know if some recent version of libxmlrpc-c fixes this
bug... but the patch seems to be in some branch:
http://sourceforge.net/p/xmlrpc-c/code/2520/
I don't want to confuse you with how we discovered this bug, but in
short it appears when executing some accounting so
Hey Daniel,
thanks for following up - I will lock down VNC ports to only
allow access from the frontend and give this a try today!
Cheers,
Nico
Daniel Molina [Fri, Feb 13, 2015 at 09:17:55AM +0100]:
> The novnc-server will translate WebSockets traffic to normal socket
> traffic, therefore you don'
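For reference, locking the VNC port range down to the frontend could look something like this. A sketch only: the port range 5900-5999 and the frontend address 192.0.2.10 are placeholder assumptions for illustration, and this is host firewall configuration to adapt, not a drop-in script:

```shell
# Allow VNC connections from the frontend only, drop everything else.
# Assumed values: frontend at 192.0.2.10, VNC ports 5900-5999.
iptables -A INPUT -p tcp --dport 5900:5999 -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 5900:5999 -j DROP
```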
Alberto Zuin - Liste writes:
> Hello all,
Hello,
> just for information, I know there is a pre-compiled version of
> OpenNebula in your repository, but in the "official" Debian repository
> there is only an old version of OpenNebula for Wheezy (3.4) and only
> the contextualization package for
Daniel Dehennin writes:
[...]
> It's a big work, for now lintian is far from being happy[3]
[...]
> [3] c.f. attachement
Missing attachment, sorry.
--
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF
I have found the solution. GATEWAY_IFACE must be set, but in upper case. I
don't know why. Any idea? If I put GATEWAY_IFACE=eth1 (eth1 is the real
name) it doesn't work, but GATEWAY_IFACE=ETH1 works...
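A guess at why the upper-case value is needed: context variables are exported in upper case (ETH0_IP, ETH1_GATEWAY, ...), and the script likely builds the lookup key by concatenating the GATEWAY_IFACE value as-is, without upcasing it first. A toy reproduction of that lookup (my reconstruction, not the actual vmcontext code; check your script):

```python
# Context variables as they would appear in context.sh: all upper case.
context = {
    "ETH0_IP": "10.0.0.5",
    "ETH1_IP": "203.0.113.5",
    "ETH1_GATEWAY": "203.0.113.1",
}

def find_gateway(context: dict, gateway_iface: str):
    """Mimics shell logic like `eval GW=\\$${GATEWAY_IFACE}_GATEWAY`:
    the value of GATEWAY_IFACE is spliced verbatim into the key name,
    so "eth1" misses while "ETH1" hits."""
    return context.get(gateway_iface + "_GATEWAY")
```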
On Fri Feb 13 2015 at 10:03:33, Madko wrote:
> Hi,
>
> It seems vmcontext scripts try to gu
Hi,
here is my use case:
1) VM has a gateway role. So it needs to be ready/deployed first.
2) Other VMs depend on the gateway, so they have the Gateway VM checked as
parent node.
If the gateway is not ready, the other VMs won't have access to some network
resources and therefore can't finish
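If this is modeled as a OneFlow service, the dependency can be expressed with role parents and the "straight" deployment strategy, which only deploys a role once its parents are running. A sketch of such a service template, where the names and the vm_template IDs are placeholders:

```json
{
  "name": "gw-and-workers",
  "deployment": "straight",
  "roles": [
    { "name": "gateway", "cardinality": 1, "vm_template": 0 },
    { "name": "worker",  "cardinality": 2, "vm_template": 1,
      "parents": ["gateway"] }
  ]
}
```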
Hi Everyone,
Has anyone ever encountered an issue where a VM's state changes to "POWEROFF" a
few minutes (2 ~ 3 minutes) after "RUNNING"?
The actual problem is that the "POWEROFF" state is incorrect, as the VM is
actually running on the compute node. If you try to "boot" it back, it
will fail as that
Hi,
I would suspect this to happen when the probe times out somehow (due to the
network issues etc.) or simply it cannot detect the VM at the moment. Does the
VM remain in 'poweroff' state forever or does it become 'running' again after
some time?
Ondra
From: Users [mailto:users-boun...@lists.
Hi
This problem is, as described by Ondra, because of a kind of race condition
between the boot process and the monitor probes. You may or may not see it
depending on timing, the network and the like.
OpenNebula 4.10.2 includes some logic to deal with this and also to
automatically recover the VM when i
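As a toy illustration of why extra logic helps here, requiring several consecutive missed probes before declaring POWEROFF hides the race. This is a hypothetical reconstruction of the idea, not OpenNebula's actual implementation:

```python
class VMMonitor:
    """If a single monitor probe misses the VM (still booting, network
    hiccup), naively flipping the state to POWEROFF is wrong. Requiring
    N consecutive misses masks the boot/probe race."""

    def __init__(self, misses_required: int = 3):
        self.misses_required = misses_required
        self.misses = 0
        self.state = "RUNNING"

    def probe(self, vm_seen: bool) -> str:
        if vm_seen:
            # Any successful probe resets the counter and the state.
            self.misses = 0
            self.state = "RUNNING"
        else:
            self.misses += 1
            if self.misses >= self.misses_required:
                self.state = "POWEROFF"
        return self.state
```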
I have had my one4.8 host up for a while with a single cluster
that has 150 hosts, one vnet, and a system and image datastore.
I am now adding hosts from a different vnet.
I want to make a second host + vnet cluster but still use
the same system and image datastores.
What's the right way to do tha
Hi
If both clusters have access to the same datastores, just move them out
of the first cluster. When a datastore or network is not assigned to
any cluster (cluster default), OpenNebula assumes it can be used with
any host (no matter which cluster the host is in).
BTW, although you do not need it for you
I know if I just take the vnet and the datastore out of the cluster, and have
no clusters at all, then everything will work... I was hoping to have a
cluster structure of (host, vnet) pairings that could all share a common
datastore. However, from the documentation, it looks like if
your templa
Yes, you can do:
Cluster A: Host_A0, Host_A1... + VNET_A0, VNET_A1...
Cluster B: HostB0, HostB1... + VNET_B0, VNET_B1...
Cluster Default: DS, DS_System
Then a VM that uses VNET_A0 + DS would be scheduled to Cluster A. Note
that using VNET_A0 constrains resources to Cluster A + Cluster
Default.
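The matching rule described above can be restated in a few lines. This is a toy model of the rule, not the real scheduler; None stands for "no cluster", i.e. cluster default:

```python
def compatible(host_cluster, resource_cluster):
    """A vnet or datastore is usable from a host if it belongs to the
    same cluster, or to no cluster at all (cluster default)."""
    return resource_cluster is None or resource_cluster == host_cluster

def schedulable(host_cluster, vnet_clusters, ds_clusters):
    """A host can run the VM only if every vnet and datastore the VM
    uses is compatible with the host's cluster."""
    return all(compatible(host_cluster, c)
               for c in vnet_clusters + ds_clusters)
```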
OK here we go:
VM in question is taking an image from image store 102 (currently in no
cluster),
vnet 0 "routable private" from cluster 100 "cloud worker"
and a number of hosts, including hosts #0 and #2, also part of cluster "cloud
worker".
The VM stays pending forever; the hold reason is below. --it
One more followup:
host 156 + vnet2 + ds 100/102, all outside the cluster, no problem
host 156 + vnet2 + ds100/102, all in the cluster, no problem
host 156 and vnet2 in the cluster, DS outside of the cluster, problem.
SCHED_MESSAGE="Fri Feb 13 18:06:29 2015 : No system datastore meets
SCHED_DS_R
PS: if there are other VMs still launched and running from the time when the
datastore used to be part of a cluster, could that confuse anything? Do I
have to restart oned to clear anything up?
Steve Timm
From: Steven C Timm
Sent: Friday, February 13,