Re: [Openstack] (no subject)

2011-04-11 Thread Diego Parrilla Santamaría
I think it's a great feature, considering the problems of scaling
shared storage like NFS.

I was wondering if some Service Provider out there is going to
implement a shared storage to take advantage of the new KVM live
migration features of Cactus.

-- 
Diego Parrilla
CEO
www.stackops.com |  diego.parri...@stackops.com | +34 649 94 43 29 |
skype:diegoparrilla


* PRIVILEGED AND CONFIDENTIAL 
We hereby inform you, as addressee of this message, that e-mail and
Internet do not guarantee the confidentiality, nor the completeness or
proper reception of the messages sent and, thus, STACKOPS TECHNOLOGIES
S.L. does not assume any liability for those circumstances. Should you
not agree to the use of e-mail or to communications via Internet, you
are kindly requested to notify us immediately. This message is
intended exclusively for the person to whom it is addressed and
contains privileged and confidential information protected from
disclosure by law. If you are not the addressee indicated in this
message, you should immediately delete it and any attachments and
notify the sender by reply e-mail. In such case, you are hereby
notified that any dissemination, distribution, copying or use of this
message or any attachments, for any purpose, is strictly prohibited by
law.






On Mon, Apr 11, 2011 at 7:41 AM, igoigo246 igoigo...@gmail.com wrote:
 Hi all,

 KVM block migration is a wonderful feature.

 http://www.linux-kvm.com/content/qemu-kvm-012-adds-block-migration-feature

 It allows live migration to be done without shared storage.


 When will KVM block migration be supported?


 Thanks for reading.
 --
 Hisashi Ikari


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




[Openstack] Floating IP in OpenStack API

2011-04-11 Thread Eldar Nugaev
Hello everyone,

We are going to add the possibility of assigning floating IP addresses in the OpenStack API.
Our goal is to reproduce the AWS behavior, where creating an instance
automatically assigns any free floating IP, or to add methods to the
OpenStack API for allocating and associating IP addresses.

At this time we see three ways:

1. A flag --auto_assign_floating_ip (default=False)
2. An optional parameter auto_assign_floating_ip in the existing create method
3. New OpenStack API methods for floating IPs: allocate_floating_ip, associate_floating_ip

Which way is more suitable at this time?
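To make the trade-off concrete, here is a minimal Python sketch of how options 1/2 (auto-assignment at create time) differ from option 3 (explicit allocate/associate calls). All names here (FloatingIPPool, create_instance) are invented for illustration and are not actual nova code:

```python
# Hypothetical sketch only -- class and function names are invented for
# illustration and are not part of nova or the OpenStack API.
class FloatingIPPool:
    def __init__(self, addresses):
        self.free = list(addresses)
        self.associations = {}  # instance_id -> floating IP

    def allocate_and_associate(self, instance_id):
        """Option 3: an explicit API call the client makes after create."""
        if not self.free:
            raise RuntimeError("no free floating IPs")
        address = self.free.pop(0)
        self.associations[instance_id] = address
        return address


def create_instance(pool, instance_id, auto_assign_floating_ip=False):
    """Option 2: a per-request parameter; option 1 would instead read a
    deployment-wide flag defaulting to False."""
    instance = {"id": instance_id, "floating_ip": None}
    if auto_assign_floating_ip:
        instance["floating_ip"] = pool.allocate_and_associate(instance_id)
    return instance


pool = FloatingIPPool(["10.0.0.5", "10.0.0.6"])
print(create_instance(pool, "i-1", auto_assign_floating_ip=True))
# -> {'id': 'i-1', 'floating_ip': '10.0.0.5'}
```

The AWS-style behavior described above corresponds to option 1 with the flag set to True; option 3 keeps pool management explicit for the client.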

-- 
Eldar
Skype: eldar.nugaev



Re: [Openstack] Moving code hosting to GitHub

2011-04-11 Thread Thomas Goirand
On 04/11/2011 10:52 AM, Robert Collins wrote:
 Also, the fact that Git doesn't do network connections
 unless its really needed is very welcome.

 bzr shouldn't do network connections except when really needed
 *either* : the world is big and networks are slow, so like other DVCS
 the strong preference it has is to cache data locally and only talk on
 the network when really needed.

It unfortunately does. bzr launchpad-login, for example, does, and if
I'm not mistaken, bzr commit does as well. With Git, that's not
the case. The issue isn't caching data; the issue is that a commit
should *never* access any remote data, so that I can work on the train
without connectivity, for example, and still be able to do bzr commit.
Only pull and push should do network access.

 We're desperately short of technical data on the slownesses reported
 from China *specifically*.

I'd be happy to help, but I'm very surprised that you didn't get reports
from Canonical people working in Beijing or Shanghai.

 Things that we'd love to know - how long does SSL handshake take for
 you, do you suffer packet loss talking to our servers, whats the peak
 bandwidth you can get back to our servers.

I can speak for connections from ChinaNet, which is what we have in half
of China; I cannot speak for connectivity via China Unicom (these are
the two operators in China, each selling ADSL access to half of the country).

From here, in Shanghai, I hardly get 8KB/s when I do an initial bzr
branch (the equivalent of a clone in Git). That's the maximum speed I saw;
sometimes it is just stuck, and curl fails to download, getting half an
SSL packet and printing a Python stack trace.

The reason is simple: the traceroute goes via Sprint, which I believe has
poor connectivity to China (only very few times do I see them in
the traceroutes). If you were getting some connectivity via twtelecom (or
maybe via PCCW, the biggest cable operator in Hong Kong, though twtelecom is
better), the situation would be much better. We have connectivity from
twtelecom in Atlanta, and it's really good, much better than what we
have in Seattle via HE.

  - we have some analysis about performance of push and pull itself
 which the bzr guys are working on, that will go live as soon as they
 cut another release and we upgrade to bzr $thatversion

I was quite satisfied with the performance of pull and push; the
initial bzr branch lp:xxx was working at 2MB/s on some of my servers.
That's really good, but only *if* you have a good enough connection, which
isn't the case when I want to work on my laptop here.

  - we're considering an SSL frontend CDN with a node in asia

Not needed. Just get bandwidth from the correct providers (like
twtelecom or PCCW), and it will be acceptable. Adding a cache won't help
much if the cache is badly connected...

, but its
 not at the very top of the list for performance: we're fixing the
 things that have the most impact - that affect everyone - before we
 start segmenting and improving performance for just one subset of the
 user base.

I'm not talking about *improving* performance, but about simply being
able to work with bzr at all. Can you imagine the frustration when I had
to run bzr launchpad-login 7 times before it worked (and of course,
having to wait for the timeout each time)? Currently, doing that on my
laptop with a direct connection to Launchpad is nearly impossible at
peak hours (like 5 or 6 pm local time). For that reason, I've been
working at night (which also lets me go on IRC and get in touch with people
helping me understand OpenStack as I discover it). So I have to go
through my servers, which not everyone can do here (not everyone has
dozens of servers all around the world like I do).

  - the time it takes to deliver the html/json for a page is a key
 metric that we're driving down. 1/2 of the Launchpad developers are
 now in maintenance mode doing performance fixes and customer support.
 I'm completely confident we'll continue to make massive strides on
 this metric in the next 3-6 months. So far, we've dropped the peak
 time - the time the slowest pages in Launchpad take to render - by 9
 seconds (from a peak of 20 seconds).

Frankly, I very rarely make direct connections to websites from here,
because of the slowness in China (and simply because I have solutions to
speed everything up). But that's not the case when I use bzr, unless I
use a VPN or something like that, which isn't something I like doing. So
I'm not really the one to ask about how the Launchpad website's
performance feels.

 - I've been trying to find a Launchpad user there who can help rule
 out whats making things slow.

Don't search: Sprint is the one! As I'm writing this mail, it's 11pm,
and I get 20% packet loss... And that's not even peak hours here
(which are between 5 and 8pm local time). I can send traceroutes with mtr
if you like, but I believe they would be annoying reads for the readers of
this list. Maybe we should switch to private emails?

I hope the above helps,


Re: [Openstack] Moving code hosting to GitHub

2011-04-11 Thread Jay Pipes
Looks like some awesome enhancements. Thanks for the link, Eric!

-jay

On Sat, Apr 9, 2011 at 11:26 PM, Eric Day e...@oddments.org wrote:
 Well, GitHub issues may be a bit more suitable for our needs now:

 https://github.com/blog/831-issues-2-0-the-next-generation

 -Eric

 On Fri, Apr 08, 2011 at 05:21:20PM -0400, Jay Pipes wrote:
 All,

 In an effort to speed up our code development processes, reduce the
 friction amongst existing contributors and reduce barriers to entry
 for new contributors familiar with the popular git DVCS, we (the
 OpenStack@Rackspace team) have been studying a transition of our code
 hosting from Launchpad to GitHub. We understand others would be
 proposing the same at the design summit, but we figured it would be
 good to get the discussion started earlier.

 GitHub has a number of major strengths when it comes to managing source code:
 - Contributors seem to be more familiar with, and comfortable using, git
 - The code review process on GitHub is easier to use for reviewers
   who use the website interface and allows for fine-grained comment
   control per line in diffs

 As good as the GitHub review system is, there are some deficiencies,
 such as the lack of ability to mark a request as definitively
 approved. We hope to work with the GitHub team to investigate how this
 can be rectified.

 Of course, there is much more to delivering a professionally released
 open source software package than just the code hosting platform. This
 is the primary interface for code contributors who are actively
 developing, but the project also needs to have processes in place for
 handling bug reports, managing distribution, packaging, translations,
 and releasing the code in an efficient manner.

 There are a number of things that Launchpad provides OpenStack
 projects that GitHub does not have the ability to do. Examples of
 these things include translation services, project management
 abilities, package archives for projects, and release-management
 functionality.

 Therefore, at this time, we are only proposing moving the code hosting
 functionality to GitHub, and not radically changing any other parts of
 the development and release process.

 Soren, Monty, and Thierry, who are the developers responsible for
 keeping our release management and development infrastructure in good
 shape, have identified the pieces of our existing infrastructure that
 they will have to modify. Some of these changes are small, some
 require a bit more work. They are all committed to making these
 changes and to moving us along in the process of transitioning code
 hosting over to GitHub.

 There will be a design summit session about this transition where the
 process will be discussed in more detail, as well as the possibility
 to migrate other parts of our infrastructure.

 Comments and discussion welcome.

 Cheers,
 -jay





Re: [Openstack] Moving code hosting to GitHub

2011-04-11 Thread Robert Collins
On Tue, Apr 12, 2011 at 3:13 AM, Thomas Goirand tho...@goirand.fr wrote:
 I'm not mistaken, bzr commit does as well. With Git, that's not
 the case. The issue isn't caching data; the issue is that a commit
 should *never* access any remote data, so that I can work on the train
 without connectivity, for example, and still be able to do bzr commit.
 Only pull and push should do network access.

If 'bzr commit' is doing network access, it's been configured to work
directly with a branch over the network, or you have a plugin that is
doing network access. The former may have been done for workflow
reasons in maintaining the trunk - but that surprises me, as I thought
your project was using tarmac, where you would only push to personal
branches and the robot would test and promote branches that pass the
test suite.

The next time you observe 'bzr commit' do networking, could you grab a
'bzr info' from that tree and show it to me, or #bzr on freenode, and
we can sort out what is up.

 We're desperately short of technical data on the slownesses reported
 from China *specifically*.

 I'd be happy to help, but I'm very surprised that you didn't get reports
 from Canonical people working in Beijing or Shanghai.

Sadly no, the folk I hear about suffering are staff on-site with other
companies, and I haven't managed to get the contacts in place to
diagnose yet.


  - we're considering an SSL frontend CDN with a node in asia

 Not needed. Just get bandwidth from the correct providers (like
 twtelecom or PCCW), and it will be acceptable. Adding a cache wont help
 much if the cache is badly connected...

It wouldn't cache, just do SSL in the region; this could help in a few ways:
 - we could get bandwidth to it from twtelecom
 - it would be close enough to do ssl handshaking in ~ 1 second rather
than the many seconds you're paying at the moment
 - we would have dedicated backhaul bandwidth on it to our primary
datacentre (London) rather than depending on local ISP prioritisation.
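As a rough back-of-envelope (the RTT figures below are assumptions, not measurements: ~300 ms Shanghai to London versus ~50 ms to a regional node, with a pre-TLS-1.3 full handshake costing about 2 round trips on top of TCP's 1):

```python
# Illustrative arithmetic only; the RTT values are assumed, not measured.
def setup_time_ms(rtt_ms, tls_round_trips=2):
    """TCP handshake (1 RTT) plus a full TLS handshake (~2 RTTs pre-TLS-1.3)."""
    return rtt_ms * (1 + tls_round_trips)

print(setup_time_ms(300))  # direct to London: 900 ms, i.e. roughly 1 second
print(setup_time_ms(50))   # via a regional SSL frontend: 150 ms
```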

Anyhow -
...
 Don't search: Sprint is the one! As I'm writing this mail, it's 11pm,
 and I get 20% packet loss... And that's not even peak hours here
 (which are between 5 and 8pm local time). I can send traceroutes with mtr
 if you like, but I believe they would be annoying reads for the readers of
 this list. Maybe we should switch to private emails?

The data about Sprint is interesting; I'm going to go find the ticket
we have about performance in China and add this information to it.

Thanks!
Rob



Re: [Openstack] KVM Block Migration

2011-04-11 Thread igoigo246
Hi,

I looked at a Japanese site:
http://www.cuspy.org/blog/archives/917

This site describes the following.

image file
vda.qcow

sending host

qemu -enable-kvm -m 512 \
-drive file=vda.qcow,if=virtio,boot=on \
-net nic,macaddr=00:16:3E:00:FF:32,model=virtio

receive host

touch vda.qcow
qemu -enable-kvm -m 512 \
-drive file=vda.qcow,if=virtio \
-net nic,macaddr=00:16:3E:00:FF:32,model=virtio \
-incoming tcp:0:

sending host

migrate -d -b tcp:wasabi:

 
(qemu) info migrate
Migration status: active
transferred ram: 48 kbytes
remaining ram: 147792 kbytes
total ram: 147840 kbytes
transferred disk: 206848 kbytes
remaining disk: 10278912 kbytes
total disk: 10485760 kbytes
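The counters in that output can be turned into progress figures; here is a small helper (written for this mail, not part of qemu) that parses the "info migrate" text above:

```python
# Helper written for this mail (not part of qemu): parse "info migrate"
# output and report RAM/disk transfer progress as integer percentages.
def migration_progress(text):
    stats = {}
    for line in text.splitlines():
        if ":" in line and "kbytes" in line:
            key, value = line.split(":", 1)
            stats[key.strip()] = int(value.split()[0])
    ram_pct = 100 * stats["transferred ram"] // stats["total ram"]
    disk_pct = 100 * stats["transferred disk"] // stats["total disk"]
    return ram_pct, disk_pct


sample = """Migration status: active
transferred ram: 48 kbytes
remaining ram: 147792 kbytes
total ram: 147840 kbytes
transferred disk: 206848 kbytes
remaining disk: 10278912 kbytes
total disk: 10485760 kbytes"""

print(migration_progress(sample))  # -> (0, 1)
```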

Thanks,

--
Hisashi Ikari





On Mon, 2011-04-11 at 16:50 +0900, Masanori ITOH wrote:
 Hi,
 
 We are considering whether it's possible to support KVM block migration
 as the next step of live migration.
 
 Actually, our main issue at this moment is whether that KVM feature is
 stable enough, because we got several errors while trying it
 using the Ubuntu 10.10 code base. In particular, I'm not sure whether the
 feature is enabled in the qemu-kvm bundled with Ubuntu/RHEL.
 
 Do you have any information about stability?
 
 Thanks,
 Masanori
 
 ---
 Masanori ITOH  RD Headquarters, NTT DATA CORPORATION
e-mail: itou...@nttdata.co.jp
 
 From: igoigo246 igoigo...@gmail.com
 Subject: [Openstack] (no subject)
 Date: Mon, 11 Apr 2011 14:41:20 +0900
 
  Hi all,

  KVM block migration is a wonderful feature.

  http://www.linux-kvm.com/content/qemu-kvm-012-adds-block-migration-feature

  It allows live migration to be done without shared storage.


  When will KVM block migration be supported?
  
  
  Thanks for reading.
  --
  Hisashi Ikari
  
  





Re: [Openstack] KVM Block Migration

2011-04-11 Thread Masanori ITOH

Hi,

Vish also mentioned that we should support the KVM block migration feature
despite the stability concerns I raised, because it's very useful.
I agree with Vish, of course. :)
Actually, we discussed inclusion of the block migration feature at San Antonio. :)


I've also seen the page Hisashi mentioned. My point is that
the page explains interacting with the qemu monitor directly, but we need to
invoke the feature through the libvirt layer, not the qemu monitor.
A colleague of mine mentioned that the --copy-storage-all option of
virsh did not appear to be supported yet, at least in Ubuntu 10.10.

Hisashi,
do you have any information on whether it's enabled in the Ubuntu 11.04 libvirt?
I mean, using virsh, did you succeed in invoking block live migration with
something like the following?

  $ virsh migrate --live --copy-storage-all DOMID DESTURL

Unfortunately, we haven't been able to make enough time to try the Natty alpha
because of our schedule delay caused by the earthquake in Northeastern Japan. :(
At least, the libvirt (virsh) included in RHEL6 does not support the feature.

But anyway, I will talk about the issue with Kei and Muneyuki before going to
the Design Summit. :)

Regards,
Masanori


From: igoigo246 igoigo...@gmail.com
Subject: Re: [Openstack] KVM Block Migration
Date: Tue, 12 Apr 2011 09:28:38 +0900
