Re: [Openstack] UK OpenStack Group

2012-03-06 Thread Cole
Hey Tom,

Boris Devouge should be working on an event for OpenStack UK.  Details to
come.


On Tue, Mar 6, 2012 at 2:43 PM, Tom Ellis tom.el...@canonical.com wrote:

 On 06/03/12 14:28, John Garbutt wrote:
  Hi,
 
  Are there people keen for a UK based OpenStack group?
 
  I noticed these:
 
  http://www.meetup.com/OpenStack-London
  http://www.meetup.com/openstack-uk
 
  But there doesn't seem to have been much happening yet.

 Yes, certainly interested. I'd be up for a bug hunting / QA / Triage
 and/or doc session.

 Cheers,

 Tom

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Announcing StackTach ...

2012-02-21 Thread Cole
Very cool.  If there is any interest in extending the tool and making it
pluggable to work with other wire protocols, I'd think the OpenMAMA
project (http://www.openmama.org/) would be an interesting possibility.

Nice work!


Re: [Openstack] [Nova] Essex dead wood cutting

2012-01-29 Thread Cole Robinson
On 01/28/2012 04:32 PM, Wayne Walls wrote:

snip

 - To what extent will Microsoft support problems reported with a Windows
 guest running on a non-Microsoft hypervisor ?
 
 I think this is a much harder question to answer, as in the past
 (http://www.redhat.com/promo/svvp) there has been a reciprocal agreement
 between Red Hat and MS to support each other's efforts on their own
 respective virtualization platforms.  Seeing that a) Ubuntu+KVM/libvirt is
 the current standard, and b) Red Hat is not actively participating in the
 OpenStack community, it leaves us with a big question mark.

Just want to point out that Red Hat is definitely participating in the
OpenStack community; there's even a Red Hatter on the Nova core team :) Right
now a lot of us are focused on making OpenStack and Fedora work great
together. Essex will even be advertised as a primary feature of the upcoming
Fedora release; see the relevant Fedora 17 feature pages (well,
work-in-progress marketing pages, really):

http://fedoraproject.org/wiki/Features/OpenStack_Essex
http://fedoraproject.org/wiki/Features/OpenStack_using_Qpid
http://fedoraproject.org/wiki/Features/OpenStack_using_libguestfs
http://fedoraproject.org/wiki/Features/OpenStack_Quantum
http://fedoraproject.org/wiki/Features/OpenStack_Horizon

(This is all completely tangential to the topic of a Microsoft/Red Hat support
guarantee, since I have no idea how that works :) )

Thanks,
Cole



Re: [Openstack] Boot from volume invalid device name /dev/vda

2012-01-22 Thread Cole Robinson
On 01/21/2012 01:04 PM, Tres Henry wrote:
 Getting an error trying to boot an instance from volume (the following is the
 traceback from nova compute):
 
 (nova.rpc): TRACE: Traceback (most recent call last):
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/rpc/impl_kombu.py", line 723, in _process_data
 (nova.rpc): TRACE:     rval = node_func(context=ctxt, **node_args)
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/exception.py", line 126, in wrapped
 (nova.rpc): TRACE:     return f(*args, **kw)
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 150, in decorated_function
 (nova.rpc): TRACE:     self.add_instance_fault_from_exc(context, instance_uuid, e)
 (nova.rpc): TRACE:   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
 (nova.rpc): TRACE:     self.gen.next()
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 145, in decorated_function
 (nova.rpc): TRACE:     return function(self, context, instance_uuid, *args, **kwargs)
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 565, in run_instance
 (nova.rpc): TRACE:     self._run_instance(context, instance_uuid, **kwargs)
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 394, in _run_instance
 (nova.rpc): TRACE:     vm_state=vm_states.ERROR)
 (nova.rpc): TRACE:   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
 (nova.rpc): TRACE:     self.gen.next()
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 381, in _run_instance
 (nova.rpc): TRACE:     self._deallocate_network(context, instance)
 (nova.rpc): TRACE:   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
 (nova.rpc): TRACE:     self.gen.next()
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 378, in _run_instance
 (nova.rpc): TRACE:     injected_files, admin_password)
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 511, in _spawn
 (nova.rpc): TRACE:     network_info, block_device_info)
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/exception.py", line 126, in wrapped
 (nova.rpc): TRACE:     return f(*args, **kw)
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/virt/libvirt/connection.py", line 681, in spawn
 (nova.rpc): TRACE:     domain = self._create_new_domain(xml)
 (nova.rpc): TRACE:   File "/opt/stack/nova/nova/virt/libvirt/connection.py", line 1255, in _create_new_domain
 (nova.rpc): TRACE:     domain = self._conn.defineXML(xml)
 (nova.rpc): TRACE:   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1708, in defineXML
 (nova.rpc): TRACE:     if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
 (nova.rpc): TRACE: libvirtError: internal error Invalid harddisk device name: /dev/vda
 (nova.rpc): TRACE:
 
 The block_device_mapping supplied was {"/dev/vda": "1:::1"}, which results in:
 [{u'volume_size': u'', u'device_name': u'/dev/vda', u'delete_on_termination':
 u'1', u'volume_id': u'1'}]. However, I've tried about every combination of
 values I can think of (supplying type, size, changing device name, etc.) with
 the same result (although the error becomes "Invalid harddisk device name:
 /dev/vdb" or whatever I supplied as the device name).
 
 If it helps:
 Running devstack @ af0f7cadb9
 Tried to launch an instance with both the cirros default devstack image and
 UEC oneiric x64.
 The existing volume is larger than the image's ephemeral volume (not sure if
 that matters).
 
 What am I doing wrong?
 

I think libvirt is expecting a device name like 'vda' and not '/dev/vda', so
try giving that a spin in block_device_mapping.
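If you are building tooling around this, a small normalization step avoids the problem entirely. A minimal sketch (the helper name is mine, not a nova API):

```python
def to_libvirt_dev(device_name):
    """Normalize a block device name for libvirt.

    libvirt's disk <target dev=...> element expects a bare name like
    'vda', not a full path like '/dev/vda'.
    """
    prefix = "/dev/"
    if device_name.startswith(prefix):
        return device_name[len(prefix):]
    return device_name

print(to_libvirt_dev("/dev/vda"))  # vda
print(to_libvirt_dev("vdb"))       # vdb
```

So a mapping keyed on "/dev/vda" would be passed through as "vda" before the domain XML is generated.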

- Cole



Re: [Openstack] HPC with Openstack?

2011-12-03 Thread Cole
First and foremost:
http://wiki.openstack.org/HeterogeneousSgiUltraVioletSupport

With NUMA and lightweight container technology (LXC / OpenVZ) you can
achieve very close to real-hardware performance for certain HPC
applications.  The problem with technologies like LXC is that there isn't
a ton of logic to address CPU affinity the way other hypervisors offer it
(and those hypervisors generally wouldn't be ideal for HPC anyway).
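For LXC specifically, what affinity control exists today is mostly manual cgroup pinning via the container config; a sketch (core and NUMA-node numbers are illustrative):

```
# Pin the container to the cores and memory of one NUMA node
lxc.cgroup.cpuset.cpus = 0-7
lxc.cgroup.cpuset.mems = 0
```

That keeps a container's threads and allocations on one node, which is most of what the HPC case needs.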

On the interconnect side: there are plenty of Open-MX
(http://open-mx.gforge.inria.fr/) HPC applications running on everything
from single-channel 1 GbE to bonded 10 GbE.

This is an area I'm personally interested in; I have done some testing and
will be doing more.  If you are going to try HPC over Ethernet, Arista
makes the lowest-latency switches in the business.

Cole
Nebula

On Sat, Dec 3, 2011 at 11:11 AM, Tim Bell tim.b...@cern.ch wrote:

 At CERN, we are also faced with similar thoughts as we look to the cloud:
 how to match the VM creation performance (typically O(minutes)) with the
 required batch job system rates for a single program (O(sub-second)).

 Data locality, so that a job runs close to its source data, makes this
 more difficult, along with fair share to align the priority of the jobs to
 achieve the agreed quota between competing requests for a limited and shared
 resource.  The classic IaaS model of 'have credit card, will compute' does
 not apply for some private cloud use cases/users.

 We would be interested to discuss further with other sites.  There is
 further background from OpenStack Boston at http://vimeo.com/31678577.

 Tim
 tim.b...@cern.ch





Re: [Openstack] 55PB storage cloud hosted using Swift

2011-09-29 Thread Cole
Anyone else think of Office Space just then?  Still a great accomplishment!

On Thu, Sep 29, 2011 at 1:37 PM, John Dickinson m...@not.mn wrote:

 clarification: 5.5PB


 On Sep 29, 2011, at 3:00 PM, Brian Schott wrote:

  In case you missed it, SDSC is hosting a commercial 55 petabyte storage
 cloud using Swift:
 
 
 http://arstechnica.com/business/news/2011/09/supercomputing-center-targets-55-petabyte-storage-at-academics-students.ars
  https://cloud.sdsc.edu/hp/index.php
 
  Congrats to the Swift team!
  Brian
 
  -
  Brian Schott, CTO
  Nimbis Services, Inc.
  brian.sch...@nimbisservices.com
  ph: 443-274-6064  fx: 443-274-6060
 
 
 
 
 
 
 




Re: [Openstack] Database replacement?

2011-09-25 Thread Cole
Hey Monty,

The only thing I might add is a thought about the horizontal scalability of
MySQL in a single zone. I know a lot of work has gone into various MySQL
clustering technologies, but as we continue down the road of durable and
distributed services, MySQL ends up being the red-headed stepchild of the
deployment.

For me the question isn't "can MySQL handle the load?" but rather: in a
standard and repeatable deployment, when is it detrimental to add another
MySQL instance to a running cluster due to the replication requirements?

Sorry for not responding inline, on my phone!




On Sun, Sep 25, 2011 at 5:15 PM, Monty Taylor mord...@inaugust.com wrote:



 On 09/24/2011 10:50 AM, Brian Lamar wrote:
  Hey Josh,
 
  Has there been any thought on having a nova-db service that
  responds to requests for information from the db (or something
  like a db).
 
  No plans that I'm aware of, there is a Database-as-a-Service project
  called 'Red Dwarf' which might fit this bill however. I honestly
  haven't looked too much into it.
 
  This could be useful for companies that don't necessarily want to
  have a limiting factor being a database. Since when u scale past
  a certain number of compute nodes the database connections
  themselves may become a bottleneck (especially the heartbeat
  mechanism which updates a table every X seconds).
 
  Not sure what you mean by this. Currently the OpenStack architecture
  was built to allow hundreds and thousands (maybe?) of compute nodes
  in the same environment. The key is to group compute nodes into
  clusters as outlined here:
 
  http://wiki.openstack.org/MultiClusterZones
 
  Long story short the database isn't being shared between all compute
  clusters, but instead a hierarchy of clusters is formed (something I,
  in a pinch, would consider akin to a distributed Map/Reduce model of
  data sharing).

 What are the actual scaling concerns? Have you seen scaling problems, or
 are you just concerned that they might be hit? I'm not seeing any
 mention of numbers here that would even come close to exceeding the
 MySQL-scales-that-far-without-breaking-a-sweat range of things... so I'd
 love to try and help address specific problems rather than re-architect
  something before we even know what the problems we're trying to solve are.

  Does something like this help out with your scaling concerns? I do
  know that personally I'd be interested in a CouchDB/NoSQL alternative
  to the Nova database layer...but what we have right now seems to
  conceptually work for scaling out to many hundreds of compute nodes.

 Again - to what end? What is it that the current db setup isn't
 providing that CouchDB would do a better job of?

  It would be interesting if these types of request could go to the
  message queue instead
 
  110% agree. Hopefully this is something we can talk about at the
  upcoming conference in Boston. :)

 I will definitely agree that message queues can be a way of adding
 scalability (async systems are often able to provide for interesting
 parallelism) ... but at the end of the day the unit of work still has to
 get accomplished, and if the request for data to the underlying message
 store is still slow (sql or nosql, whatever) - under extremely high load
 if your disk and/or cpu are saturated on the db infrastructure, async or
 sync is going to make a flips work of difference. So I'm going to be
 really annoying and again ask: to solve what actual problem? Example
 queries and/or any logging/capturing of system information during
 scaling issues would be a great start ... we can take a stab at solving
 any current problems that are there - and as part of solving those
 problems we can of course discuss approaches such as async message
 queues or nosql alternatives.

 Monty

 
  -Original Message-
  From: Joshua Harlow harlo...@yahoo-inc.com
  Sent: Friday, September 23, 2011 5:40pm
  To: openstack openstack@lists.launchpad.net
  Subject: [Openstack] Database replacement?
 
  Howdy all, congrats on the diablo release!
 
  Has there been any thought on having a nova-db service that responds
  to requests for information from the db (or something like a db).
 
  This could be useful for companies that don't necessarily want to
  have a limiting factor being a database. Since when u scale past a
  certain number of compute nodes the database connections themselves
  may become a bottleneck (especially the heartbeat mechanism which
  updates a table every X seconds). It would be interesting if these
  types of request could go to the message queue instead and then the
  db backing could be swapped out with something more 

Re: [Openstack] Database replacement?

2011-09-23 Thread Cole
I know from talking to both DataStax and 10gen that there is interest in
doing this.
On Sep 23, 2011 3:40 PM, Debo Dutta (dedutta) dedu...@cisco.com wrote:
 This is a good idea. Actually, it might be a very good idea to think of
 scalable/distributed NoSQL engines to interface with Nova and other
 OpenStack projects.



 Regards

 debo



 From: openstack-bounces+dedutta=cisco@lists.launchpad.net
 [mailto:openstack-bounces+dedutta=cisco@lists.launchpad.net] On
 Behalf Of Joshua Harlow
 Sent: Friday, September 23, 2011 2:40 PM
 To: openstack
 Subject: [Openstack] Database replacement?



 Howdy all, congrats on the diablo release!

 Has there been any thought on having a nova-db service that responds to
 requests for information from the db (or something like a db).

 This could be useful for companies that don't necessarily want to have a
 limiting factor being a database. Since when u scale past a certain
 number of compute nodes the database connections themselves may become a
 bottleneck (especially the heartbeat mechanism which updates a table
 every X seconds). It would be interesting if these types of request
 could go to the message queue instead and then the db backing could be
 swapped out with something more scalable (or still use mysql/sqlite...).

 Any thoughts?

 -Josh
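The pattern being proposed here, compute nodes sending db lookups over the message bus to a single "nova-db" service, can be sketched with stdlib queues standing in for the real AMQP bus. All names below are illustrative, not nova APIs:

```python
import queue
import threading

# Toy "nova-db" service: it consumes lookup requests from a message queue
# and replies on a per-request response queue. A real deployment would use
# an AMQP broker (e.g. RabbitMQ) instead of in-process queue.Queue objects.
DB = {"instance-1": {"state": "running"}}

request_q = queue.Queue()

def db_service():
    while True:
        req = request_q.get()
        if req is None:           # shutdown sentinel
            break
        key, reply_q = req
        reply_q.put(DB.get(key))  # only this service ever touches the DB

def lookup(key):
    """Client side: publish a request and block on the reply."""
    reply_q = queue.Queue()
    request_q.put((key, reply_q))
    return reply_q.get(timeout=5)

worker = threading.Thread(target=db_service)
worker.start()
result = lookup("instance-1")
print(result)                     # {'state': 'running'}
request_q.put(None)               # stop the service
worker.join()
```

The point of the indirection is that the backing store behind `db_service` could be swapped (MySQL, NoSQL, or anything else) without changing the clients.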



Re: [Openstack] Integrating GlusterFS with OpenStack

2011-06-06 Thread Cole
GFS2/rgmanager works reliably for KVM, so I don't see why Gluster wouldn't.

On Mon, Jun 6, 2011 at 5:20 AM, Shehjar Tikoo shehj...@gluster.com wrote:

 My understanding is that the --instances_path will be the share to which
 the VM state will be synced on the source hypervisor. This synced image will
 then be used to restart the VM at  the destination hypervisor. Ideally, I'd
 like to avoid having a local copy of the VM state in order to provide
 persistence in VMs so that if the hypervisor fails or crashes, etc. the VM
 state on a Gluster volume can be used to restart it. I am not sure whether
 this is possible in OpenStack yet or if KVM will even support using a shared
 mount as a VM state storage. This is something I will be looking into. Of
 course, some references to docs or emails would be nice. Thanks.
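One way to prototype the shared-state idea is to make nova's instances path a GlusterFS mount on every compute node, so that a surviving hypervisor can restart a VM from the shared image. The mount point, server, and volume name below are assumptions for illustration, not a tested recipe:

```
# /etc/fstab on each compute node (server and volume name illustrative)
gluster1:/nova-instances  /mnt/gluster/instances  glusterfs  defaults,_netdev  0 0

# nova flag file
--instances_path=/mnt/gluster/instances
```

Whether KVM is happy using such a shared mount for live VM state is exactly the open question raised above.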


 
 From: masumo...@nttdata.co.jp [masumo...@nttdata.co.jp]
 Sent: Monday, June 06, 2011 6:23 AM
 To: Shehjar Tikoo
 Cc: openstack@lists.launchpad.net; so...@linux2go.dk
 Subject: RE: [Openstack] Integrating GlusterFS with OpenStack

 Hi,

  When doing live migration, we could let KVM migrate the
  VM state and somehow tell gluster to migrate the disk image to the new
  host (to maintain locality between the virtual machine and its storage).
  Gluster already has the means to do live rebalancing, but I don't believe
  there's an API to specify "I'd like for this particular file to reside on
  this particular brick. Please make it so." That would *rock*.

 In the current OpenStack live migration feature, a flag --instances_path
 specifies the path which has to be on shared storage. Live re-balancing
 sounds like a good idea, and I think, at the least, having just
 --instances_path on GlusterFS is a good option for users.
 If you want any help, my colleague and I would love to pitch in ^^;


  -Original Message-
  From: openstack-bounces+masumotok=nttdata.co...@lists.launchpad.net
  [mailto:openstack-bounces+masumotok=nttdata.co...@lists.launchpad.ne
  t] On Behalf Of Soren Hansen
  Sent: Monday, June 06, 2011 7:40 AM
  To: Shehjar Tikoo
  Cc: openstack@lists.launchpad.net
  Subject: Re: [Openstack] Integrating GlusterFS with OpenStack
 
  2011/6/3 Shehjar Tikoo shehj...@gluster.com:
   We're aiming to integrate Gluster with Openstack in a way that allows
   GlusterFS to be used as the storage for application volumes as well
  as VMs.
   I am interested in hearing your ideas on how such an integration can
   be performed. Being the openstack experts, could you please share some
   issues I should be considering, the problems I may run into or
   anything else you think I should know.
 
  One thing I'd love to be able to do is store my virtual disks on a
 gluster
  filesystem that is shared across my compute nodes. The data would be
  stored on the host that is actually running the virtual machine (think
  NUFA scheduler). When doing live migration, we could let KVM migrate the
  VM state and somehow tell gluster to migrate the disk image to the new
  host (to maintain locality between the virtual machine and its storage).
  Gluster already has the means to do live rebalancing, but I don't believe
  there's an API to specify "I'd like for this particular file to reside on
  this particular brick. Please make it so." That would *rock*.
 
  --
  Soren Hansen| http://linux2go.dk/ Ubuntu Developer|
  http://www.ubuntu.com/ OpenStack Developer | http://www.openstack.org/
 
