Re: [one-users] Fault Tolerance and shared storage

2014-03-27 Thread Nuno Serro

  
  
Hi,
  
  Thanks for the input. We were using "-r" when we tested again this
  morning, and paying closer attention to the log showed that the
  problem is in the delete phase. The delete command seems to be
  executed on the host that is unreachable (in this case bl05):
  
  Thu Mar 27 11:07:36 2014 [VMM][I]: fi" failed: ssh: connect to
  host bl05 port 22: Connection refused
  Thu Mar 27 11:07:36 2014 [VMM][E]: Error deleting
  /var/lib/one/datastores/109/416/disk.0
  Thu Mar 27 11:07:36 2014 [VMM][I]: ExitCode: 255
  Thu Mar 27 11:07:36 2014 [VMM][I]: Failed to execute transfer
  manager driver operation: tm_delete.
  Thu Mar 27 11:07:36 2014 [VMM][I]: Command execution fail:
  /var/lib/one/remotes/tm/shared/delete
  bl05:/var/lib/one//datastores/109/416 416 109
  
  I attached a log for the VM running on host bl05.
  
  Thanks for the help,
  
Nuno

  Nuno Serro
  Coordenador
  Núcleo de Infraestruturas e Telecomunicações
  Departamento de Informática 
  
Alameda da Universidade - Cidade Universitária
1649-004 Lisboa, PORTUGAL
T. +351 210 443 566 - Ext. 19816
E. nse...@reitoria.ulisboa.pt

www.ulisboa.pt

  On 26-03-2014 16:57, Tino Vazquez wrote:


  
  Hi,


Thanks for the info.


The hook for host error in OpenNebula 4.4 allows defining
one, and only one, of the "-r" and "-d" flags:

 * -r will "delete --recreate" the VM on the failed host. This goes
   through the epilog_delete phase, which should erase the symlinks
   and launch the VM again. This is probably what you want; please
   come back if the problem does not go away.

 * -d will "delete" the VM on the failed host, but won't launch the
   VM again.

These two are mutually exclusive.


Regards,


-Tino



  --
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova

--
Confidentiality Warning: The information contained in this
e-mail and any accompanying documents, unless otherwise
expressly indicated, is confidential and privileged, and is
intended solely for the person and/or entity to whom it is
addressed (i.e. those identified in the "To" and "cc" box).
They are the property of C12G Labs S.L.. Unauthorized
distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited
and may be unlawful. If you have received this e-mail in
error, please notify us immediately by e-mail at ab...@c12g.com and delete the e-mail
and attachments and any copy from your system. C12G thanks
you for your cooperation.
  
  
  On 26 March 2014 17:48, Nuno Serro nse...@reitoria.ulisboa.pt
wrote:

  
Hello Tino,
  
  We are using version 4.4.1. If you need any details on
  the configuration I can provide them.
  



  

  

  


  
  
Nuno
Serro
Coordenador
Núcleo de Infraestruturas e
Telecomunicações
Departamento de Informática 

Alameda da Universidade  -  Cidade
Universitária
1649-004 LisboaPORTUGAL
T. +351 210 443 566
  

[one-users] problem with compressed images from Marketplace 4.2

2014-03-27 Thread Maxim Terletskiy

Hi!

We're trying to use the Marketplace in OpenNebula 4.2 and are seeing strange
behaviour with compressed images. When importing the image named "Ubuntu
Server 12.04 (Precise Pangolin) - kvm", I see that the script downloads a
bzipped file and extracts it to a temp directory. After downloading and
unpacking, the image registers successfully, and oneimage says it is 10 GB in
size, but the real size of the file is 4 GB. qemu-img info shows a raw-format
file of 4 GB. The image is not usable: GRUB starts, but the OS does not boot
because of missing data. The same thing happens with the other images I've
tried (CentOS 6, ttylinux).

The OpenNebula server runs CentOS 6. Can someone help with this problem?
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] Add `SSH_PUBLIC_KEY` to contextualization to set root credentials.

2014-03-27 Thread Christophe Duez
Hello,

In the marketplace you can download images. However, the description
says: "Add `SSH_PUBLIC_KEY` to contextualization to set root credentials".
What does this mean?

-- 
Kind regards,
Duez Christophe
Student at University of Antwerp :
Master of Industrial Sciences: Electronics-ICT

E christophe.d...@student.uantwperen.be
L linkedin duez-christophehttp://www.linkedin.com/pub/duez-christophe/74/7/39


Re: [one-users] Add `SSH_PUBLIC_KEY` to contextualization to set root credentials.

2014-03-27 Thread Stefan Kooman
Quoting Christophe Duez (christophe.d...@student.uantwerpen.be):
 Hello
 In the marketplace you can download images.
 However in the discription this is says: Add `SSH_PUBLIC_KEY` to
 contextualization to set root credentials
 what do they mean with this?
In the template you create to use this image, you have to provide your
SSH_PUBLIC_KEY so you can log in over ssh with your private key. The root
password is not set, so you can only log in through ssh. Alternatively, you
can chroot into the image (using a Linux live environment like GRML [1]),
set a password there, and then log in through the console or over ssh with
that password.

Gr. Stefan

[1]: http://grml.org/

P.S. Make sure the image is persistent if you set a password; otherwise
your changes will be lost.
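
For concreteness, the template you create from the image would carry the key
in its CONTEXT section. A sketch (the key string is a placeholder; in
OpenNebula 4.x you can also reference the key stored in your user template
with $USER[SSH_PUBLIC_KEY]):

CONTEXT = [
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]"
]

or, with an explicit key:

CONTEXT = [
  SSH_PUBLIC_KEY = "ssh-rsa AAAA... user@workstation"
]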

-- 
| BIT BV  http://www.bit.nl/Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl


Re: [one-users] Fault Tolerance and shared storage

2014-03-27 Thread Tino Vazquez
Hi Nuno,

Let's try a little modification to see if it alleviates the problem. Please
change the following line in /var/lib/one/remotes/tm/lvm/ln:

ln -s "$TARGET_DEV" "$DST_PATH"

to

ln -sf "$TARGET_DEV" "$DST_PATH"

Try again and let us know; we can include this modification in the upcoming
OpenNebula 4.6.
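
A plausible reason "-f" helps here: after the failed delete/recreate, the
destination symlink can still exist, and a plain "ln -s" refuses to
overwrite it. A minimal stand-alone illustration of the difference
(temporary paths only, no OpenNebula involved):

```shell
# "ln -s" fails when the link name already exists; "ln -sf" replaces it.
tmp=$(mktemp -d)
touch "$tmp/target"
ln -s "$tmp/target" "$tmp/link"                        # first run: created
if ln -s "$tmp/target" "$tmp/link" 2>/dev/null; then   # second run: error
    echo "unexpected: second ln -s succeeded"
else
    echo "second ln -s failed as expected"
fi
ln -sf "$tmp/target" "$tmp/link" && echo "ln -sf replaced the link"
rm -r "$tmp"
```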

Best regards,

-Tino



--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova



Re: [one-users] More ipv6 network questions

2014-03-27 Thread Steven Timm

Thanks for the help Javier.
I am familiar with the advanced context rpm package, we use it a lot
but had never used it for IPV6 before. It might be helpful to
have a link to this page in the Networking section of the docs.

On Thu, 27 Mar 2014, Javier Fontan wrote:


I fear that OpenNebula's automatic generation of IPv6 addresses does not
suit your needs, but you can take advantage of the context package and set
the parameters manually in the context. The parameters it allows are
described in the documentation [1]. Make sure you use version 4.4 of
the packages, and do not set NETWORK=YES in the context. One idea is
to do something like this:

CONTEXT=[
  ETH0_IP = "$NIC[IP, NETWORK=\"public\"]",
  ETH0_NETWORK = "$NETWORK[NETWORK_ADDRESS, NETWORK=\"public\"]",
  ETH0_MASK = "$NETWORK[NETWORK_MASK, NETWORK=\"public\"]",
  ETH0_GATEWAY = "$NETWORK[GATEWAY, NETWORK=\"public\"]",
  ETH0_DNS = "$NETWORK[DNS, NETWORK=\"public\"]",
  ETH0_IPV6 = "$IPV6"
]

If you are using the CLI to instantiate a template, you can set the value
of IPV6 using the --raw parameter:

onetemplate instantiate <id> --raw "IPV6=<ipv6 address>"



Are you saying that this is the only way to force a specific IPv6 address
into a given machine, specifying it from the command line? If I am reading
it right, I could also just hardwire an IPV6 value into the context
section, is that correct?


Are there any restrictions on the size of a RANGED IPv6 subnet?

i.e. can I have a range of 5 IPv6 addresses from 2001:400:2410:29::182
to 2001:400:2410:29::186 inclusive and define a ranged subnet that way?
(These happen to correspond to the same 5 IPv4 addresses, 131.225.41.182 -
131.225.41.186.)


If so, maybe I can make it work that way.

But in the meantime, where do we go to file a feature request to
make IPv6 leases work just like IPv4 fixed leases do? Right now
the leases table in the database is keyed only on IPv4 addresses. It doesn't
seem like it would be too hard to make it work the same for IPv6, would it?


The urgent problem I need to solve is that I need 4 or 5 VMs with
existing IPv4 and IPv6 addresses (with gaps in the range) up on my test
ONE 4.4 cloud pretty fast.


Thanks

Steve Timm




Add any other parameters from the table I've linked above to configure the
rest.

I hope it helps.


[1] 
http://docs.opennebula.org/4.4/user/virtual_machine_setup/cong.html#network-configuration

On Wed, Mar 26, 2014 at 11:04 PM, Steven Timm t...@fnal.gov wrote:



A follow-up: I did find an example in the documentation, but it is only for
a RANGED IPv6 network. I need a FIXED IPv6 network.

I saw that when I set an IP6_GLOBAL PREFIX in the network file, it would
append the IPv6-ified MAC address of the machine and construct an
IP6_GLOBAL for me. But that's not what I want.

Would like to do something like this:
LEASES = [ IP=131.225.41.182, MAC=54:52:00:02:0B:01,
IP6_GLOBAL=2001:400:2410:29::182 ]
LEASES = [ IP=131.225.41.183, MAC=54:52:00:02:0B:02,
IP6_GLOBAL=2001:400:2410:29::183]
LEASES = [ IP=131.225.41.184, MAC=54:52:00:02:0B:03,
IP6_GLOBAL=2001:400:2410:29::184 ]
LEASES = [ IP=131.225.41.185, MAC=54:52:00:02:0B:04,
IP6_GLOBAL=2001:400:2410:29::185 ]
LEASES = [ IP=131.225.41.186, MAC=54:52:00:02:0B:05,
IP6_GLOBAL=2001:400:2410:29::186 ]

But this doesn't work; the IP6_GLOBAL in the LEASES field is ignored.

Is there any IPV6-related field that is accepted in the LEASES
field of a fixed-network network template?  This is of some urgency.
(I promised my users who depend on ipv6 cloud vm's I would have them
up this morning local time and it is now quitting time today).

Steve



On Wed, 26 Mar 2014, Steven Timm wrote:



Below is the network template that I used to successfully create the IPv4
side of a dual-stack IPv4/IPv6 network in ONE 4.4.


-bash-4.1$ cat static-ipv6-net
NAME = Static_IPV6_Public
TYPE = FIXED

#Now we'll use the cluster private network (physical)
BRIDGE = br0
DNS = 131.225.0.254
GATEWAY = 131.225.41.200
NETWORK_MASK = 255.255.255.128
LEASES = [ IP=131.225.41.132, MAC=00:16:3E:06:01:01 ]

--

and here's what I get back:

-bash-4.1$ onevnet show 1
VIRTUAL NETWORK 1 INFORMATION
ID : 1
NAME   : Static_IPV6_Public
USER   : oneadmin
GROUP  : oneadmin
CLUSTER: -
TYPE   : FIXED
BRIDGE : br0
VLAN   : No
USED LEASES: 0

PERMISSIONS
OWNER  : um-
GROUP  : ---
OTHER  : ---

VIRTUAL NETWORK TEMPLATE
DNS=131.225.0.254
GATEWAY=131.225.41.200
NETWORK_MASK=255.255.255.128

FREE LEASES
LEASE=[ MAC=00:16:3e:06:01:01, IP=131.225.41.132,
IP6_LINK=fe80::216:3eff:fe06:101, USED=0, VID=-1 ]

VIRTUAL MACHINES

   ID USER     GROUP    NAME     STAT  UCPU  UMEM  HOST  TIME


I have several questions:

1) Does the OpenNebula head node also have to have access to the
IPv6 network, or just the VM hosts?

2) Is there any way to specify, on a host-by-host basis,
the IPv6 address as well as the IPv4 address?

Re: [one-users] Fault Tolerance and shared storage

2014-03-27 Thread Nuno Serro

  
  
Hi Tino,
  
  Forcing the link creation solved the problem.
  
  Thanks for your help,
  
Nuno

  On 27-03-2014 14:46, Tino Vazquez wrote:


  
  
Hi Nuno,

Let's try a little modification to see if it alleviates the problem. Please
change the following line in /var/lib/one/remotes/tm/lvm/ln:

    ln -s "$TARGET_DEV" "$DST_PATH"

to

    ln -sf "$TARGET_DEV" "$DST_PATH"

Try again and let us know, we can include this modification in the upcoming
OpenNebula 4.6.

Best regards,

-Tino
  


[one-users] Resume paused VM due to full system datastore

2014-03-27 Thread Daniel Dehennin
Hello,

I just encountered an issue with KVM-based VMs when the non-shared system
datastore became full.

libvirt/KVM paused the VMs that were trying to write to their discs, and I
had to run:

for vm in $(virsh -c qemu:///system list | awk '/paused/ {print $1}')
do
virsh -c qemu:///system resume ${vm}
done
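
For reference, the awk program in that loop prints the first column (the
libvirt domain Id) of every line mentioning "paused"; a dry run on captured
sample output (plain text only, no libvirt needed):

```shell
# Sample "virsh list" output; awk picks the Id column of paused domains.
sample=' Id    Name         State
----------------------------------
 3     one-416      paused
 5     one-417      running
 7     one-418      paused'
echo "$sample" | awk '/paused/ {print $1}'
# prints:
# 3
# 7
```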

In ONE they were in the UNKNOWN state.

Shouldn't it be handled by ONE directly?

Regards.
-- 
Daniel Dehennin
Récupérer ma clef GPG: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF




[one-users] Recommended NFS options for datastores

2014-03-27 Thread ML mail
Hello,

I have my ONE datastores located on an HP server with an external SAS JBOD,
running Debian Squeeze and sharing the datastores over NFS. Currently, on my
ONE frontend and hosts, I mount the datastores as follows (fstab):

IP:/nfs/one/datastores/0  /var/lib/one/datastores/0  nfs   
soft,intr,rsize=8192,wsize=8192
IP:/nfs/one/datastores/1  /var/lib/one/datastores/1  nfs   
soft,intr,rsize=8192,wsize=8192
IP:/nfs/one/datastores/100  /var/lib/one/datastores/100  nfs   
soft,intr,rsize=8192,wsize=8192

If I remember correctly, the NFS options I use here come from the ONE
documentation, but I wanted to know: are these options (soft, intr, etc.)
the recommended ones?

I am asking because this morning the server rebooted for no reason, and
while some of the VMs were fine (they just got some timeout error messages),
others had remounted their root filesystem read-only, which required a
reboot of the VM. Some of them did not make it and needed an fsck before
they could boot correctly.


I guess this is pretty normal and cannot really be avoided when the NFS
server crashes, but what I would really like to know is whether there are
NFS options that are more robust against an NFS server crash. I am thinking
here especially of the hard,intr options; maybe these would be more
appropriate? What do you think?
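
For comparison, a hard-mount variant of the first entry above might look
like the following (illustrative values, not a tested recommendation). With
"hard", the client blocks and retries until the server comes back, so guests
stall rather than see I/O errors on their root filesystems; note that "intr"
has been a no-op on Linux kernels since 2.6.25:

IP:/nfs/one/datastores/0  /var/lib/one/datastores/0  nfs  hard,intr,tcp,rsize=32768,wsize=32768  0 0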

Regards
ML



Re: [one-users] blktap xen 4.2 and opennebula 4.2

2014-03-27 Thread kenny . kenny
It didn't work.

I received this message:

invalid option: --force

And it always uses file: instead of tap2:aio. I don't know what to do.

From: jfon...@gmail.com
Sent: Thursday, 27 March 2014 18:08
To: kenny.ke...@bol.com.br
Subject: [one-users] blktap xen 4.2 and opennebula 4.2

You can change it in "/var/lib/one/remotes/vmm/xen4/xenrc"; the parameter is
DEFAULT_FILE_PREFIX. Remember to do a "onehost sync --force" so these files
are copied to the remote hosts.

On Thu, Mar 27, 2014 at 3:54 AM, kenny.ke...@bol.com.br wrote:
 Hello, I need to use blktap instead of the default disk driver. I changed
 /var/lib/one/remotes/vmm/xen4/attach_disk and
 /etc/one/vmm_exec/vmm_exec_xen4.conf, but when I take a look at
 deployment.0, it always has "file:". What do I need to do to change that?
 I need to change it because with file: I can run just 8 VMs per node.
 Thanks.

--
Javier Fontán Muiños
OpenNebula Developer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | @OpenNebula | github.com/jfontan


Re: [one-users] Fault Tolerance and shared storage

2014-03-27 Thread Tino Vazquez
Hi Nuno,

Thanks for the feedback. I've already pushed the patch upstream, so it
will be present in the upcoming OpenNebula 4.6.

https://github.com/OpenNebula/one/commit/05618fff9ce17974bee23f9d5887318190b77e27

Regards,

-Tino
--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova



On 27 March 2014 16:46, Nuno Serro nse...@reitoria.ulisboa.pt wrote:
 Hi Tino,

 Forcing the link creation solved the problem.

 Thanks for your help,

 Nuno



 On 27-03-2014 14:46, Tino Vazquez wrote:

 Hi Nuno,

 Let's try a little modification to see if it alleviates the problem. Please
 change the following line in /var/lib/one/remotes/tm/lvm/ln:

 ln -s "$TARGET_DEV" "$DST_PATH"

 to

 ln -sf "$TARGET_DEV" "$DST_PATH"

 Try again and let us know, we can include this modification in the upcoming
 OpenNebula 4.6.

 Best regards,

 -Tino








Re: [one-users] blktap xen 4.2 and opennebula 4.2

2014-03-27 Thread kenny . kenny
See attached files.
Thanks.

From: jfon...@gmail.com
Sent: Thursday, 27 March 2014 19:18
To: kenny.ke...@bol.com.br
Subject: [one-users] blktap xen 4.2 and opennebula 4.2

Disregard the --force; I've misread the problem. The parameter --force does
not work in ONE 4.2. Just execute:

onehost sync

On Thu, Mar 27, 2014 at 7:07 PM, Javier Fontan jfon...@gmail.com wrote:
 Are you sure that the drivers uncommented are xen4 and not xen3? Also, can
 you send me the xenrc file you've changed? That "invalid option: --force"
 is so strange.

--
Javier Fontán Muiños
OpenNebula Developer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | @OpenNebula | github.com/jfontan


attach_disk
Description: Binary data


attach_diskxen4
Description: Binary data


vmm_exec_xen3.conf
Description: Binary data


vmm_exec_xen4.conf
Description: Binary data


[one-users] data store not created in VMHosts

2014-03-27 Thread Hyun Woo Kim
Hi,

I am testing the datastore feature of ONE 4.4 in a very simple configuration,
and I am getting an error that I cannot understand.

I have one cluster with
one image-type DS (ID 1, DS=fs, TM=ssh) and
one system-type DS (ID 100, which I created with the ssh TM).

Then I attach one VM host to this cluster.
According to the manual, the first deployment of a VM creates
datastores/100 under /var/lib/one/, right?

But in my case, the first deployed VM stays pending,
with an error message in sched.log saying:
 Local Datastore 100 in Host 8 filtered out. Not enough capacity.
 No suitable System DS found for Host: 8. Filtering out host.

The VM gets deployed if I manually create this directory:
mkdir -p /var/lib/one/datastores/100

This error sometimes does not happen for a certain VM host,
so I would like to understand why datastores/<ID> is created on some VM hosts
and not on others.

If the ON developers could point me at the right code to look at,
or describe what I am doing wrong,
it would be very helpful.

Thanks,
Hyuwoo Kim
FermiCloud





Re: [one-users] ceph+flashcache datastore driver

2014-03-27 Thread Shankhadeep Shome
Yes, bcache allows real-time configuration of cache policies; there are
a lot of tunables. It allows a single cache device to map to multiple
backing devices. We are using bcache with LVM and the Linux SCSI target to
implement the storage target devices. We use several of these devices to
export block devices to servers over Fibre Channel, then use those block
devices as building blocks for local or distributed file systems. The
OpenNebula implementation was using glusterfs-fuse, but we hope to
transition to native glusterfs with qemu.


On Wed, Mar 26, 2014 at 8:39 AM, Stuart Longland stua...@vrt.com.au wrote:

 Hi Shankhadeep,
 On 26/03/14 12:35, Shankhadeep Shome wrote:
  Try bcache as a flash backend, I feel its more flexible as a caching
  tier and its well integrated into the kernel. The kernel 3.10.X version
  is now quite mature so an epel6 long term kernel would work great. We
  are using it in a linux based production SAN as a cache tier with pci-e
  SSDs, a very flexible subsystem and rock solid.

 Cheers for the heads up, I will have a look.  What are you using to
 implement the SAN and what sort of VMs are you using with it?

 One thing I'm finding: when I tried using this, I had a stack of RBD
 images created by OpenNebula that were in RBD v1 format.  I converted
 them to v2 format by means of a simple script: basically renaming the
 old images then doing a pipe from 'rbd export' to 'rbd import'.

 I had a few images in there, most originally for other hypervisors:
 - Windows 2000 Pro image
 - Windows XP Pro image (VMWare ESXi image)
 - Windows 2012 Standard Evaluation image (CloudBase OpenStack image)
 - Windows 2008 R2 Enterprise Evaluation (HyperV image)
 - Windows 2012 R2 Data Centre Evaluation (HyperV image)

 The latter two were downloaded from Microsoft's site and are actually
 supposed to run on HyperV, however they ran fine with IDE storage under
 KVM under the out-of-the-box Ceph support in OpenNebula 4.4.

 I'm finding that after conversion of the RBDs to RBDv2 format, and
 re-creating the image in OpenNebula to clear out the DISK_TYPE attribute
 (DISK_TYPE=RBD kept creeping in), the image would deploy but then the OS
 would crash.

 Win2008r2 would crash after changing the Administrator password (hang
 with black screen), Win2012r2 would crash with a CRITICAL_PROCESS_DIED
 blue-screen-of-death when attempting to set the Administrator password.

 The other images run fine.  The only two that were actually intended for
 KVM are the Windows 2012 evaluation image produced by CloudBase (for
 OpenStack), and the Windows 2000 image that I personally created.  The
 others were all built on other hypervisors, then converted.

 I'm not sure if it's something funny with the conversion of the RBDs or
 whether it's an oddity with FlashCache+RBD that's causing this.  These
 images were fine before I got FlashCache involved (if a little slow).
 Either there's a bug in my script, in FlashCache, or I buggered up the
 RBD conversion.

 But I will have a look at bcache and see how it performs in comparison.
  One thing we are looking for is the ability to throttle or control
 cache write-backs for non-production work-loads ... that is, we wish to
 prioritise Ceph traffic for production VMs during work hours.
 FlashCache doesn't offer this feature at this time.

 Do you know if bcache offers any such controls?
 --
 Stuart Longland
 Contractor
  _ ___
 \  /|_) |   T: +61 7 3535 9619
  \/ | \ | 38b Douglas StreetF: +61 7 3535 9699
SYSTEMSMilton QLD 4064   http://www.vrt.com.au





Re: [one-users] ceph+flashcache datastore driver

2014-03-27 Thread Shankhadeep Shome
We have been running KVM very successfully, and I find Ceph very
interesting. I think a combination of Ceph and the Linux SCSI target with a
scale-out architecture is the future of storage in the enterprise.

