On 8 February 2012 08:57, Ulrich Schwickerath
ulrich.schwicker...@cern.ch wrote:
Hi, Ruben,
I confirm I get the same timing when I do NOT use the SSL proxy:
(...)
rack.url_scheme: http
rack.version: [1, 0]
Hi Paul,
This is a very interesting feature. You should open a new ecosystem project
[1] as soon as your code is usable, so others can test it. If you would
like to see your code merged upstream once it gets to a mature enough
state, make sure that whoever has to give the thumbs up in your
Hi, Daniel,
sure, here it is. I have 3 of these guys now. Never seen that before.
[lsfadmin@oneadmin02 ~]$ onevm show 22976 -x
<VM>
  <ID>22976</ID>
  <UID>7</UID>
  <GID>102</GID>
  <UNAME>lsfadmin</UNAME>
  <GNAME>batch</GNAME>
  <NAME>LXBATCH</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>0</OWNER_A>
    <GROUP_U>0</GROUP_U>
Hi,
I have 5 blades with 32 GB RAM each and 500 GB of disk space each, used for a
distributed filesystem (GlusterFS). This shared filesystem is accessible under /var/lib/one (~1.3 TB).
Trying to start 4 VMs, two were successful; the other two failed with errors:
Wed Feb 8 12:04:47 2012 [DiM][I]: New VM state is ACTIVE.
Wed Feb 8
OK, I see. Actually everything looks good in there - after investigating
how libvirt handles this and checking with 'virsh dumpxml', it looks as
though the default CPU type is 64-bit, so that's OK.
I've tried running the following:
- sudo virsh create /var/lib/one/20/images/deployment.0
Domain
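For reference, the guest architecture can be checked directly from the libvirt domain XML; a minimal sketch, assuming a domain named one-20 (an illustrative name matching the VM ID 20 in the deployment path above):

```shell
# Dump the running domain's XML and look for the 'arch' attribute of the
# <os><type> element, which shows the guest CPU architecture (e.g. x86_64).
# 'one-20' is an assumed domain name, not taken from the original mail.
virsh dumpxml one-20 | grep -o "arch='[^']*'"
```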
Hi Joao.
I did a similar search recently. Here are my results:
Lustre: does not have built-in redundancy. If you use file striping across
nodes and one node goes offline, all the data becomes unavailable.
Gluster: does not support KVM virtualization. The lead software developer
mentioned that it will be
Our research project is currently using GlusterFS for our distributed
NFS storage system. We're using the distributed-replicate
configuration, in which every two servers form a replica pair and
all pairs together form a distributed cluster. We do not do data striping since
to achieve up-time
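For anyone curious, a distributed-replicate layout like the one described is created by listing bricks in replica order; a minimal sketch, assuming four hypothetical servers and brick paths (all names are illustrative, not from the original mail):

```shell
# replica 2: each consecutive pair of bricks forms a mirror,
# and the mirrored pairs are then distributed into one large volume.
gluster volume create vmstore replica 2 \
  server1:/export/brick1 server2:/export/brick1 \
  server3:/export/brick1 server4:/export/brick1
gluster volume start vmstore
```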
2012/2/8 João Pagaime j...@fccn.pt:
Can anyone share their experience on this topic? Any hints would be nice...
I wrote a small article a few months ago after a successful
deployment of OpenNebula on top of a MooseFS volume:
http://blog.opennebula.org/?p=1512
Short answer: yes, it works,
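In case it helps, the basic idea of running OpenNebula on MooseFS is mounting the shared volume at OpenNebula's datastore path; a minimal sketch, assuming the MooseFS master is reachable under the hostname mfsmaster (an assumption, not from the original mail):

```shell
# Mount the MooseFS volume at OpenNebula's shared directory so all
# hosts see the same image and deployment files.
mfsmount /var/lib/one -H mfsmaster
```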
On 2012/02/08 5:42 PM, Alberto Zuin - Liste wrote:
In the past we used OpenNebula 2.2 with the MooseFS driver for immediate
deployment via snapshots, but the driver wasn't updated for OpenNebula 3.0.
We are very happy with MooseFS: it's very easy and robust. The only
consideration is on disk side: SATA
On 2012/02/08 7:54 PM, Chris Picton wrote:
On 2012/02/08 5:42 PM, Alberto Zuin - Liste wrote:
In the past we used OpenNebula 2.2 with the MooseFS driver for immediate
deployment via snapshots, but the driver wasn't updated for OpenNebula
3.0.
We are very happy with MooseFS: it's very easy and robust.
I think I've finally nailed the root cause of my troubles. I posted this
on http://serverfault.com/q/358118/2101 but you guys may be able to
answer with more authority:
I have a fresh OpenNebula 3.2.1 installation which I'm trying to get
working, to manage some freshly-installed Debian Squeeze
Hello,
I'm trying to evaluate OpenNebula 3.2.0 (CentOS 6.1) using KVM. I chose
a two-node installation environment with one frontend and one
KVM server, with the following packages installed:
KVM:
libvirt-client-0.9.4-23.el6_2.4.x86_64
libvirt-devel-0.9.4-23.el6_2.4.x86_64
Hi Everyone,
There is a lack of information on how to use 9p on KVM. I finally made it
work! Here is how:
On host:
1. Modify /etc/libvirt/qemu.conf: change the owner and group to those which will be
used for starting KVM (a description of how to use user/group is at the end).
In my case I used
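For reference, the libvirt side of a 9p share is a <filesystem> element in the domain XML, plus a matching mount inside the guest; a minimal sketch, assuming a host directory /srv/share and a mount tag hostshare (both illustrative, not from the original mail):

```xml
<!-- Host side: domain XML fragment exporting /srv/share over virtio-9p -->
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/srv/share'/>
  <target dir='hostshare'/>
</filesystem>
```

Inside the guest, the share is then mounted with: mount -t 9p -o trans=virtio hostshare /mnt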
I'd say you added your host with vm_kvm instead of vmm_kvm.
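For reference, with the OpenNebula 3.x CLI the driver names are given when registering the host; a minimal sketch, assuming an illustrative host name plus transfer and network drivers (all of which are assumptions, not from the original mail):

```shell
# Register a KVM host: information driver im_kvm, virtualization driver
# vmm_kvm (note: vmm_kvm, not vm_kvm), transfer driver tm_shared,
# network driver dummy.
onehost create host01 im_kvm vmm_kvm tm_shared dummy
```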
--
Hector Sanjuan
Opennebula developer
Original message
Subject: [one-users] [VMM][E]: deploy_action, error getting driver vm_kvm
From: Hendrik Wißmann wissm...@gmx.de
To: users@lists.opennebula.org
CC:
Hello,
I'm