[one-users] sequel rpm

2010-11-19 Thread Ruben Diez

Does anyone know of a sequel RPM package for CentOS 5.5?

sequel is required by oneacct...

The Fedora RPM (rubygem-sequel-3.16.0-6.el6.noarch.rpm) doesn't work for
me.


Thanks in advance




Re: [one-users] Migration Path from 1.4 to 2.0

2010-11-19 Thread opennebula

I posted the latest scripts and a small how-to under:

Request #415:

http://dev.opennebula.org/issues/415

Good luck with the migration!

Greetings,

Marlon Nerling


Re: [one-users] Making scheduler allocation aware

2010-11-19 Thread Rangababu Chakravarthula
Javier,

Can you please let us know what you are referring to when you say "I have
noticed another mismatch even with your probe changes"? What other changes
are required besides those to kvm.rb?

Ranga


On Thu, Nov 11, 2010 at 8:06 AM, Javier Fontan jfon...@gmail.com wrote:

 Hello,

 Are you sure that those are the exact values for the host? OpenNebula
 counts both real (from probes) and allocated (from the database) memory,
 so that should not happen.

 Snippet from a onehost show:

 --8<--
 USED MEM (REAL)   : 0
 USED MEM (ALLOCATED)  : 65536
 --8<--

 I am now working on the kvm monitoring and I have noticed another
 mismatch even with your probe changes. The values stored in the
 database for total memory should be changed and that's what I am
 working on.

 I am connected to irc.freenode.org in channel #opennebula if you want
 to discuss this further.

 Bye

 On Thu, Nov 11, 2010 at 5:20 AM, Shashank Rachamalla
 shashank.rachama...@hexagrid.com wrote:
  Hi Javier
 
  Thanks for the inputs, but I came across another problem while testing:
 
  If OpenNebula receives multiple VM requests in a short span of time, the
  scheduler might take decisions for all of these VMs using the host
  monitoring information available from the last monitoring cycle. Ideally,
  before processing each pending request, fresh host monitoring information
  has to be taken into account, as the previous requests might have already
  changed the host's state. This can result in overcommitting when a host
  is being used close to its full capacity.

  Is there any workaround that helps the scheduler overcome this problem?
 
  Steps to reproduce the problem scenario:

  Host 1: total memory = 3GB
  Host 2: total memory = 2GB
  Assume Host1 and Host2 have the same number of CPU cores. (Host1 will
  have a higher RANK value.)

  VM1: memory = 2GB
  VM2: memory = 2GB

  Start VM1 and VM2 immediately one after the other: both VM1 and VM2 will
  come up on Host1 (thus overcommitting).

  Start VM1 and VM2 with an intermediate delay of 60 seconds: VM1 will come
  up on Host1 and VM2 will come up on Host2. This works because OpenNebula
  would have fetched a fresh set of host monitoring information in that
  time.
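
  To make the race concrete, here is a minimal Ruby sketch (hypothetical
  data structures, not OpenNebula code) of a scheduler that ranks hosts
  using only the cached monitoring view; because nothing refreshes the
  cache between the two placements, both 2GB VMs pick Host1:

  --8<--
  # Cached monitoring view: total/used memory in MB per host
  hosts = {
    "host1" => { :total => 3072, :used => 0 },
    "host2" => { :total => 2048, :used => 0 },
  }

  # Rank hosts by free memory as seen in the cached view only
  def pick_host(hosts)
    hosts.max_by { |name, h| h[:total] - h[:used] }.first
  end

  puts pick_host(hosts)  # => host1 (VM1, 2 GB, placed)
  # No monitoring refresh happens in between, so host1 still looks empty:
  puts pick_host(hosts)  # => host1 again -- VM2 overcommits Host1
  --8<--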
 
 
  On 4 November 2010 02:04, Javier Fontan jfon...@gmail.com wrote:
 
  Hello,
 
  It looks fine to me. I think that taking out the memory the hypervisor
  may be consuming is key to making it work.
 
  Bye
 
  On Wed, Nov 3, 2010 at 8:32 PM, Rangababu Chakravarthula
  rb...@hexagrid.com wrote:
   Javier
  
   Yes, we are using KVM and OpenNebula 1.4.
  
   We have been having this problem for a long time, and we were doing all
   kinds of validations ourselves before submitting the request to
   OpenNebula (there should be enough memory in the cloud that matches the
   requested memory, and there should be at least one host that has
   memory > requested memory). We had to do those because OpenNebula would
   schedule to an arbitrary host based on the existing logic it had.
   So at last we thought we needed to make OpenNebula aware of the memory
   allocated to running VMs on each host, and started this discussion.
  
   Thanks for taking up this issue as a priority. We appreciate it.
  
   Shashank came up with this patch to kvm.rb. Please take a look and let
   us know if that will work until we get a permanent solution.
  
  
  
 
  
    # Sum the maximum memory (kB) of every running domain reported by virsh
    $mem_allocated_for_running_vms = 0
    `virsh list | grep running | tr -s ' ' ' ' | cut -f2 -d' '`.split(/\n/).each do |vm|
      `virsh dominfo #{vm}`.split(/\n/).each do |line|
        # e.g. "Max memory:     2097152 kB" -> third field is the kB value
        if line.match(/^Max memory/)
          $mem_allocated_for_running_vms += line.split[2].to_i
        end
      end
    end

    # kB to set aside for the hypervisor itself (site-specific placeholder)
    $mem_used_by_base_hypervisor = 0 # [some xyz kb that we want to set aside]

    $free_memory = $total_memory.to_i -
      ($mem_allocated_for_running_vms + $mem_used_by_base_hypervisor)
  
  
  
 ==
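
    Assuming the probe then reports the adjusted value in place of the one
    derived from free (a sketch; the exact output code in kvm.rb may
    differ):

    --8<--
    # Report the adjusted free memory to the information manager
    puts "FREEMEMORY=#{$free_memory}"
    --8<--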
  
   Ranga
  
   On Wed, Nov 3, 2010 at 2:16 PM, Javier Fontan jfon...@gmail.com
 wrote:
  
   Hello,
  
   Sorry for the delay in the response.
  
   It looks like the problem is OpenNebula calculating available memory.
   For Xen >= 3.2 there is a reliable way to get available memory, which is
   calling xm info and reading the max_free_memory attribute.
   Unfortunately, for KVM or Xen < 3.2 there is no such attribute. I
   suppose you are using KVM since you mention the free command.
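
   As a sketch of what reading that attribute could look like (assuming the
   usual "name : value" layout of xm info output, with the value in MB):

   --8<--
   # Read max_free_memory (MB) from `xm info` output on Xen >= 3.2
   line = `xm info`.split(/\n/).grep(/^max_free_memory/).first
   max_free_mb = line.split(':')[1].strip.to_i if line
   --8<--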
  
   I began analyzing the KVM IM probe that gets memory information, and
   there is a problem in the way it gets total memory. Here is how it
   currently gets memory information:
  
   TOTALMEMORY: runs virsh info, which gets the real physical memory
   installed in the machine
   FREEMEMORY: runs free command and gets the free column data without