[Openstack] Questions about novnc with multihost OpenStack Nova Compute (Essex)

2012-10-24 Thread ??????
Dear all,
I have some questions about OpenStack Nova Compute (Essex) using novnc.
I built a cluster of 4 computers running OpenStack Nova Compute in
multihost mode.
Here is the information about my cluster:

nova01:compute server,api server,controller server  192.168.3.3
nova02:compute server   192.168.3.4
nova03:compute server   192.168.3.5
nova04:compute server   192.168.3.6

And the VMs' fixed range was 10.0.0.0/8.

Here is my nova.conf:
http://pastebin.com/K6ArR1HA

When I executed the command
"nova get-vnc-console  novnc"
an error occurred.

Here is the error information:

2012-10-25 14:25:27 ERROR nova.rpc.impl_qpid [req-1ad62be7-8eeb-43a5-898c-f3552b9f7748 3faf7062208c456c9a9365ee50bf15cd 561a547e94c74ce797d0ef1f4bc91f91] Timed out waiting for RPC response: None
2012-10-25 14:25:27 TRACE nova.rpc.impl_qpid Traceback (most recent call last):
2012-10-25 14:25:27 TRACE nova.rpc.impl_qpid   File "/usr/lib/python2.6/site-packages/nova/rpc/impl_qpid.py", line 364, in ensure
2012-10-25 14:25:27 TRACE nova.rpc.impl_qpid     return method(*args, **kwargs)
2012-10-25 14:25:27 TRACE nova.rpc.impl_qpid   File "/usr/lib/python2.6/site-packages/nova/rpc/impl_qpid.py", line 413, in _consume
2012-10-25 14:25:27 TRACE nova.rpc.impl_qpid     nxt_receiver = self.session.next_receiver(timeout=timeout)
2012-10-25 14:25:27 TRACE nova.rpc.impl_qpid   File "", line 6, in next_receiver
2012-10-25 14:25:27 TRACE nova.rpc.impl_qpid   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 651, in next_receiver
2012-10-25 14:25:27 TRACE nova.rpc.impl_qpid     raise Empty
2012-10-25 14:25:27 TRACE nova.rpc.impl_qpid Empty: None
2012-10-25 14:25:27 TRACE nova.rpc.impl_qpid
2012-10-25 14:25:27 ERROR nova.api.openstack [req-1ad62be7-8eeb-43a5-898c-f3552b9f7748 3faf7062208c456c9a9365ee50bf15cd 561a547e94c74ce797d0ef1f4bc91f91] Caught error: Timeout while waiting on RPC response.
2012-10-25 14:25:27 TRACE nova.api.openstack Traceback (most recent call last):
2012-10-25 14:25:27 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/api/openstack/__init__.py", line 82, in __call__
2012-10-25 14:25:27 TRACE nova.api.openstack     return req.get_response(self.application)
2012-10-25 14:25:27 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/request.py", line 1053, in get_response
2012-10-25 14:25:27 TRACE nova.api.openstack     application, catch_exc_info=False)
2012-10-25 14:25:27 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/request.py", line 1022, in call_application
2012-10-25 14:25:27 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
2012-10-25 14:25:27 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/keystone/middleware/auth_token.py", line 176, in __call__
2012-10-25 14:25:27 TRACE nova.api.openstack     return self.app(env, start_response)
2012-10-25 14:25:27 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 159, in __call__
2012-10-25 14:25:27 TRACE nova.api.openstack     return resp(environ, start_response)
2012-10-25 14:25:27 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 159, in __call__
2012-10-25 14:25:27 TRACE nova.api.openstack     return resp(environ, start_response)
2012-10-25 14:25:27 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 159, in __call__
2012-10-25 14:25:27 TRACE nova.api.openstack     return resp(environ, start_response)
2012-10-25 14:25:27 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py", line 131, in __call__
2012-10-25 14:25:27 TRACE nova.api.openstack     response = self.app(environ, start_response)
2012-10-25 14:25:27 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 159, in __call__
2012-10-25 14:25:27 TRACE nova.api.openstack     return resp(environ, start_response)
2012-10-25 14:25:27 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 147, in __call__
2012-10-25 14:25:27 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
2012-10-25 14:25:27 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/WebOb-1.0.8-py2.6.egg/webob/dec.py", line 208, in call_func
2012-10-25 14:25:27 TRACE nova.api.openstack     return self.func(req, *args, **kwarg

[Openstack] Retrieve Endpoints

2012-10-24 Thread Tummala Pradeep

Hi,

I want to configure "openstack.endpoint" and
"openstack.identity.endpoint". However, I am a bit confused about the
difference between the two. How can I configure them from the terminal?


Thanks

Pradeep

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] API Credentials

2012-10-24 Thread Tummala Pradeep

Thanks, got the point; I misunderstood the GUI.

Pradeep

On 10/24/2012 03:58 AM, Vishvananda Ishaya wrote:
Alternatively, you can follow the method used in devstack[1] to 
retrieve valid credentials by using python-keystoneclient:


keystone --os-username=xxx --os-tenant-name=xxx 
--os-auth-url=http://$KEYSTONE_HOST:5000/v2.0 ec2-credentials-create


[1] https://github.com/openstack-dev/devstack/blob/master/eucarc

Vish

On Oct 22, 2012, at 10:12 PM, Sam Stoelinga wrote:


No, I think what Vish is saying is that it's possible to get the
OpenStack access key and secret by doing the following:

(Based on Folsom, but I think it's the same in Essex)
1. Log in with your account in the OpenStack dashboard (Horizon)
2. Go to the Settings page
3. Click on EC2 Credentials
4. Click on Download EC2 Credentials
The access key and secret seem to be in the file ec2rc.sh.
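Once downloaded, the keys in ec2rc.sh are plain shell exports; a hedged sketch of pulling them out in Python follows. The variable names (EC2_ACCESS_KEY, EC2_SECRET_KEY) are assumed from the usual EC2 shell-export convention, so check them against your actual file.

```python
# Sketch: parse 'export KEY=value' lines from a downloaded ec2rc.sh.
# The variable names below are assumptions from the EC2 convention.
import re

def parse_ec2rc(text):
    """Return a dict of KEY -> value for lines like 'export KEY=value'."""
    creds = {}
    for match in re.finditer(r'^export\s+(\w+)=["\']?([^"\'\n]+)', text, re.M):
        creds[match.group(1)] = match.group(2)
    return creds

sample = 'export EC2_ACCESS_KEY=abc123\nexport EC2_SECRET_KEY=s3cr3t\n'
print(parse_ec2rc(sample))
```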


  Description:

Clicking "Download EC2 Credentials" will download a zip file which 
includes an rc file with your access/secret keys, as well as your 
x509 private key and certificate.


That's what you want, right? Hope it helps.

Sam
On Tue, Oct 23, 2012 at 12:50 PM, Tummala Pradeep
<pradeep.tumm...@ericsson.com> wrote:


Actually, I am trying to integrate PaaS with OpenStack, so I
require an access key and secret access key for that, and I don't
think EC2 credentials will work. Are you saying it is not
possible to set up OpenStack's access key and secret access key?

Pradeep


On 10/22/2012 10:26 PM, Vishvananda Ishaya wrote:

Access and secret keys are EC2 credentials, and they can be
retrieved using "Download EC2 Credentials" from the Settings
page in Horizon.

Vish

On Oct 22, 2012, at 4:56 AM, Tummala Pradeep
<pradeep.tumm...@ericsson.com> wrote:

I deployed OpenStack Essex on my server using the
documentation provided. Now, I need help with getting API
credentials similar to what HP OpenStack has.

For example, users having an account in HP OpenStack can
retrieve an access key and secret access key from the API
keys section. In my deployment, I can download OpenStack
credentials from the Settings tab in .pem format, but they
do not contain an access key and secret access key.
Therefore I want to set up API keys so that users can view
their credentials, similar to HP OpenStack.

Someone please guide me to get started on this.

Thanks
Pradeep



Re: [Openstack] Ceilometer, StackTach, Tach / Scrutinize, CloudWatch integration ... Summit followup

2012-10-24 Thread Angus Salkeld

On 24/10/12 23:35 +, Sandy Walsh wrote:

Hey y'all,

Great to chat during the summit last week, but it's been a crazy few days of 
catch-up since then.

The main takeaway for me was the urgent need to get some common libraries under 
these efforts.


Yip.



So, to that end ...

1. To those that asked, I'm going to get my slides / video presentation made 
available via the list. Stay tuned.

2. I'm having a hard time following all the links to various efforts going on 
(seems every time I turn around there's a new metric/instrumentation effort, 
which is good I guess :)


Here is some fun I have been having with a bit of tach+ceilometer code.
https://github.com/asalkeld/statgen



Is there a single location I can place my feedback? If not, should we create 
one? I've got lots of suggestions/ideas and would hate to have to duplicate the 
threads or leave other groups out.


I'll add some links here that I am aware of:
https://bugs.launchpad.net/ceilometer/+bug/1071061
https://etherpad.openstack.org/grizzly-common-instrumentation
https://etherpad.openstack.org/grizzly-ceilometer-actions
https://blueprints.launchpad.net/nova/+spec/nova-instrumentation-metrics-monitoring




3. I'm wrapping up the packaging / cleanup of StackTach v2 with Stacky and hope 
to make a more formal announcement on this by the end of the week. Lots of 
great changes to make it easier to use/deploy based on the Summit feedback!

Unifying the stacktach worker (consumer of events) into ceilometer should be a 
first step to integration (or agree upon a common YAGI-based consumer?)

4. If you're looking at Tach, you should also consider looking at Scrutinize (my 
replacement effort) https://github.com/SandyWalsh/scrutinize (needs packaging/docs and 
some notifier tweaks on the cprofiler to be called "done for now")


Looks great! I like the monkey patching for performance as you have
done here, but we also need a nice clean way of manually inserting
instrumentation too (that is what I have been experimenting with in
statgen).

Can we chat in #openstack-metering so we are a bit more aware of what we
are all up to?


-Angus



Looking forward to moving ahead on this ...

Cheers,
-S







Re: [Openstack] nova-br100.conf

2012-10-24 Thread heut2008
It contains the mapping between instance IP and MAC addresses, and is
read by dnsmasq to allocate IPs to instances at boot time.
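For reference, the entries follow dnsmasq's dhcp-hostsfile format, one line per instance: MAC address, hostname, fixed IP. Something along these lines (the addresses and hostnames here are invented for illustration):

```
fa:16:3e:3c:91:2a,instance-00000001.novalocal,10.0.0.2
fa:16:3e:0b:44:9d,instance-00000002.novalocal,10.0.0.3
```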

2012/10/25 Daniel Vázquez :
> what will be the content of the /var/lib/nova/networks/nova-br100.conf file?
>
> thx!
>



-- 
Yaguang Tang



Re: [Openstack] How to communicate between VMs on multiple nodes using quantum?

2012-10-24 Thread livemoon
Good work, you are so nice.

On Thu, Oct 25, 2012 at 3:26 AM, Robert Kukura  wrote:

> On 10/24/2012 12:42 PM, Dan Wendlandt wrote:
> > On Wed, Oct 24, 2012 at 3:22 AM, Gary Kotton  wrote:
> >> Hi,
> >> In addition to Dan's comments you can also take a look at the following
> link
> >> http://wiki.openstack.org/ConfigureOpenvswitch.
> >
> > Is there any content on that wiki page that is not yet in the quantum
> > admin guide (http://docs.openstack.org/trunk/openstack-network/admin/content/)?
> > If so, we should file a bug to make sure it ends up in the admin
> > guide and that the wiki page is deleted, so there is exactly one place
> > where we direct people and we avoid stale content.
> >
> > Bob is probably best to answer that question.
>
I've already filed a docs bug to update the admin guide with the current
configuration details for linuxbridge and openvswitch, and it's assigned
to me. I hope to get to this in the next few days. I'll remove the wiki
page, which is also out-of-date, when it's complete.
>
> -Bob
>
> >
> > Dan
> >
> >
> >> Thanks
> >> Gary
> >>
> >>
> >> On 10/24/2012 08:21 AM, livemoon wrote:
> >>
> >> Thanks Dan
> >>
> >> On Wed, Oct 24, 2012 at 2:15 PM, Dan Wendlandt  wrote:
> >>>
> >>> On Tue, Oct 23, 2012 at 10:56 PM, livemoon  wrote:
>  Dan:
>  Thank you for your help.
>  If the server has three NICs, which one will be used as the port of
>  "br-int"? I must know how "br-int" works between two machines, and
>  then I can connect the physical interface which "br-int" uses to one
>  switch.
> >>>
> >>> If you are using tunneling, the traffic will exit out the NIC based on
> >>> your physical server's routing table and the destination IP of the
> >>> tunnel.  For example, if your physical server is tunneling a packet to
> >>> a VM on a physical server with IP W.X.Y.Z, the packet will leave
> >>> whatever NIC has the route to reach W.X.Y.Z .
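Dan's point, that the kernel routing table decides which local address (and hence which NIC) carries the tunnel traffic, can be illustrated with a small stdlib-only sketch (not OpenStack code): connecting a UDP socket performs the same route lookup without sending any packet, and getsockname() reveals the source address the kernel chose.

```python
# Illustration: ask the kernel which local IP would be used to reach a
# destination. UDP connect() sends nothing; it only binds the route.
import socket

def source_ip_for(dest_ip, port=53):
    """Return the local IP the routing table would use to reach dest_ip.
    The port number is irrelevant for the route lookup."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest_ip, port))
        return s.getsockname()[0]
    finally:
        s.close()

# Loopback destinations always route via lo:
print(source_ip_for("127.0.0.1"))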
> >>>
> >>> Dan
> >>>
> >>>
> >>>
> >>>
> 
>  On Wed, Oct 24, 2012 at 11:52 AM, Dan Wendlandt 
> wrote:
> >
> > all you need to do is create a bridge named "br-int", which is what
> > the linux devices representing the vm nics will be plugged into.
> >
> > since you are using tunneling, there is no need to create a br-ethX
> > and add a physical interface to it.
> >
> > dan
> >
> > p.s. btw, your config looks like it's using database polling, which is
> > not preferred.  I'd suggest you use the default config, which uses RPC
> > communication between agents and the main quantum-server process
> >
> >
> > On Tue, Oct 23, 2012 at 8:44 PM, livemoon 
> wrote:
> >> I know that on one node, VMs can work well.
> >> I want to know, with multiple nodes, do I need to create a br-ethX
> >> and add the physical interface to it? How do I do that in the
> >> configuration?
> >>
> >> On Wed, Oct 24, 2012 at 11:36 AM, 刘家军  wrote:
> >>>
> >>> you just need to create one or more networks and specify which
> >>> network to use when booting a VM.
> >>>
> >>> 2012/10/24 livemoon 
> 
>  Hi, I use quantum for networking. A question: if there are multiple
>  nodes, how do I configure things so that VMs can communicate with
>  each other in the same subnet?
> 
>  I use openvswitch as my plugin. And my setting is blow:
> 
>  [DATABASE]
>  sql_connection = mysql://
> quantum:openstack@172.16.1.1:3306/quantum
>  reconnect_interval = 2
> 
>  [OVS]
> 
>  tenant_network_type = gre
>  tunnel_id_ranges = 1:1000
>  integration_bridge = br-int
>  tunnel_bridge = br-tun
>  local_ip = 172.16.1.2
> 
>  enable_tunneling = True
> 
> 
>  [AGENT]
>  polling_interval = 2
>  root_helper = sudo /usr/bin/quantum-rootwrap
>  /etc/quantum/rootwrap.conf
> 
>  --
>  Without detachment one cannot clarify one's aims; without serenity one cannot reach far. (非淡薄无以明志,非宁静无以致远)
> 
> 
> >>>
> >>>
> >>>
> >>> --
> >>> 刘家军@ljjjustin
> >>>
> >>
> >>
> >>
> >> --
> >> Without detachment one cannot clarify one's aims; without serenity one cannot reach far. (非淡薄无以明志,非宁静无以致远)
> >>
> >>
> >
> >
> >
> > --
> > ~~~
> > Dan Wendlandt
> > Nicira, Inc: www.nicira.com
> > twitter: danwendlandt
> > ~~~
> >>

Re: [Openstack] [ceilometer] Potential New Use Cases

2012-10-24 Thread Angus Salkeld

On 24/10/12 23:35 +0200, Julien Danjou wrote:

On Wed, Oct 24 2012, Dan Dyer wrote:


Use Case 1
Service Owned Instances
There are a set of use cases where a service is acting on behalf of a
user: the service is the owner of the VM, but billing needs to be
attributed to the end user of the system. This scenario drives two
requirements:
1. Pricing is similar to base VMs but with a premium, so the type of
service for a VM needs to be identifiable so that the appropriate
pricing can be applied.
2. The actual end user of the VM needs to be identified so usage can be
properly attributed.


I think that for this, you just need to add more meters on top of the
existing ones with your own user and project id information.


As an example, in some of our PaaS use cases, there is a service
controller running on top of the base VM that maintains control and
manages the customer experience. The idea is to expose the service and
not have the customer have to (or even be able to) manipulate the
virtual machine directly. So in this case, from a Nova perspective, the
PaaS service owns the VM and its tenant ID is what is reported back in
events. The way we resolve this is to query the service controller for
metadata about the instances they own. This is stored off in a separate
"table" and used to determine the real user at aggregation time.


This is probably where you should emit the meters you need.
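A minimal sketch of that suggestion: the PaaS controller emits its own usage sample attributed to the real end user rather than the service tenant. The field names below only loosely mirror ceilometer-style counters, and the meter name paas.instance.hours is invented for illustration; this is not the actual ceilometer API.

```python
# Hedged sketch: a PaaS controller emits its own usage sample, attributing
# the VM's usage to the real end user instead of the service's own tenant.
# Field names are illustrative, not the real ceilometer counter API.
import datetime

def make_service_meter(real_user_id, real_project_id, instance_id, hours):
    return {
        "name": "paas.instance.hours",   # premium-priced service meter
        "type": "cumulative",
        "volume": hours,
        "user_id": real_user_id,         # the end user, not the service owner
        "project_id": real_project_id,
        "resource_id": instance_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

sample = make_service_meter("enduser-42", "endproject-7", "vm-0001", 3)
print(sample["user_id"], sample["volume"])
```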


Use Case 2
Multiple instances combine to make a billable "product/service"
In this use case, a service might consist of several VMs, but the actual
number does not directly drive the billing. An example of this might be a
redundant service that has a primary and two backup VMs that make up a
deployment. The customer is charged for the service, not the fact that
there are 3 VMs running. Once again, we need metadata that is able to
describe this relationship so that when the billing records are
processed, this relationship can be identified and billed properly.


Kind of the same here: if you don't want to really bill the VMs, just
don't meter them (or ignore the meters) and emit your own meter via your
PaaS platform to bill your customer.

Or is there a limitation I am missing?


If you do auto scaling you will have a similar problem. Here you want
to monitor the group (with instances coming and going) as a logical
unit. One way would be to tag the instances, then extract the tag and
send it with the metadata associated with the meter. Then you could
query the ceilometer db for that group.

(In CloudWatch this is just another "Dimension").
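A rough sketch of that tagging idea, with invented field names: carry a group tag in each sample's metadata and aggregate per tag, so a 3-VM redundant service or an autoscaling group becomes one billable line.

```python
# Sketch: aggregate usage samples by a logical-group tag carried in each
# sample's metadata. Field names ("metadata", "group", "volume") are
# illustrative, not ceilometer's actual schema.
from collections import defaultdict

def usage_by_group(samples):
    totals = defaultdict(float)
    for s in samples:
        group = s["metadata"].get("group", "ungrouped")
        totals[group] += s["volume"]
    return dict(totals)

samples = [
    {"volume": 1.0, "metadata": {"group": "svc-primary"}},
    {"volume": 1.0, "metadata": {"group": "svc-primary"}},
    {"volume": 2.5, "metadata": {"group": "svc-primary"}},
]
print(usage_by_group(samples))  # one billable line for the whole service
```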

-Angus



--
Julien Danjou
-- Free Software hacker & freelance
-- http://julien.danjou.info







[Openstack] Impossible to terminate an instance in Essex

2012-10-24 Thread Daniel Vázquez
Hi here!

I can't terminate an instance in the Essex version.

I tried from Horizon and with the nova delete command.

I tried killall and restarting nova-network, and restarting the host too.
I re-tried setting task_state to null with an SQL query.

I re-tried nova.conf with dhcp release set to false.

... good work, this instance is indestructible ;)
I don't want to delete the instance folder, because OpenStack needs to
release IPs and update and synchronize some data in the database.


What can I do?

Thx!



[Openstack] nova-br100.conf

2012-10-24 Thread Daniel Vázquez
what will be the content of the /var/lib/nova/networks/nova-br100.conf file?

thx!



Re: [Openstack] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM

2012-10-24 Thread Adam Young

On 10/24/2012 07:45 PM, heckj wrote:

John brought the concern over auth_token middleware up to me directly -

I don't know of anyone that's driven the keystone middleware to these 
rates and determined where the bottlenecks are other than folks 
deploying swift and driving high performance numbers.


The concern that John detailed to me is how the middleware handles 
memcache connections, which is directly impacted by how you're 
deploying it. From John:


"Specifically, I'm concerned with the way auth_token handles memcache 
connections. I'm not sure how well it will work in swift with 
eventlet. If the memcache module being used caches sockets, then 
concurrency in eventlet (different greenthreads) will cause problems. 
Eventlet detects and prevents concurrent access to the same socket 
(for good reason--data from the socket may be delivered to the wrong 
listener)."


We should probably disable memcache for PKI tokens.



I haven't driven any system this hard to suss out the issues, but 
there's the nut of it - how to keep from cascading that load out to 
validation of authorization tokens. The middleware is assuming that 
eventlet and any needed patching has already been done when it's 
invoked (i.e. no monkeypatching in there), and loads the "memcache" 
module and uses whatever it has in there directly.


This is all assuming you're using the current standard of UUID based 
tokens. Keystone is also supporting PKI based tokens, which removes 
the need to constantly make the validation call, but at the 
computational cost of unspinning the decryption around the signed 
token. I don't know of any load numbers and analysis with that backing 
set up at this time, and would expect that any initial analysis would 
lead to some clear performance optimizations that may be needed.


The fork is the expensive part, but it should also parallelize fairly 
well.  One reason we chose this approach is that it seems to fit in best 
with how Eventlet works:  a fork means there should be no reason to do a 
"real" thread, and thus it can use greenlets and processes.  But we 
won't know until we try to scale it.


PKI tokens just went default, so we should find out soon.



- joe


On Oct 24, 2012, at 1:20 PM, Alejandro Comisario wrote:

Thanks Josh, and Thanks John.
I know it was an exciting Summit! Congrats to everyone!

John, let me give you extra data and something that I've already
said, that might be wrong.


First, the request sizes that will compose the 90.000 RPM - 200.000 RPM
will be 90% 20K objects and 10% 150/200K objects.
Second, all the "GET" requests are going to be "public", configured
through ACLs; so, if the GET requests are public (no X-Auth-Token is
passed), why should I be worried about the keystone middleware?


Just to clarify, because I really want to understand what my real
metrics are, so I can know where to tune in case I need to.

Thanks !
---
Alejandrito


On Wed, Oct 24, 2012 at 3:28 PM, John Dickinson wrote:


Sorry for the delay. You've got an interesting problem, and we
were all quite busy last week with the summit.

First, the standard caveat: Your performance is going to be
highly dependent on your particular workload and your particular
hardware deployment. 3500 req/sec in two different deployments
may be very different based on the size of the requests, the
spread of the data requested, and the type of requests. Your
experience may vary, etc, etc.

However, for an attempt to answer your question...

6 proxies for 3500 req/sec doesn't sound unreasonable. It's in
line with other numbers I've seen from people and what I've seen
from other large scale deployments. You are basically looking at
about 600 req/sec/proxy.
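The sizing arithmetic here can be checked quickly (the mail rounds the peak to 3500 req/s, hence the ~600 per proxy):

```python
# Quick check of the sizing arithmetic: 200.000 RPM peak over 6 proxies.
peak_rpm = 200000
proxies = 6

req_per_sec = peak_rpm / 60.0       # ~3333 req/s overall (rounded up to 3500 in the mail)
per_proxy = req_per_sec / proxies   # ~556 req/s per proxy (~600 with the rounding)

print(round(req_per_sec), round(per_proxy))
```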

My first concern is not the swift workload, but how keystone
handles the authentication of the tokens. A quick glance at the
keystone source seems to indicate that keystone's auth_token
middleware is using a standard memcached module that may not play
well with concurrent connections in eventlet. Specifically,
sockets cannot be reused concurrently by different greenthreads.
You may find that the token validation in the auth_token
middleware fails under any sort of load. This would need to be
verified by your testing or an examination of the memcache module
being used. An alternative would be to look at the way swift
implements it's memcache connections in an eventlet-friendly way
(see swift/common/memcache.py:_get_conns() in the swift codebase).

--John



On Oct 11, 2012, at 4:28 PM, Alejandro Comisario
<alejandro.comisa...@mercadolibre.com> wrote:

> Hi Stackers !
> This is the thing: today we have 24 datanodes (3 copies, 90TB
usable); each datanode has 2 Intel hexacore CPUs with HT and 96GB
of RAM, and 6 proxies with the same hardware configuration

Re: [Openstack] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM

2012-10-24 Thread heckj
John brought the concern over auth_token middleware up to me directly - 

I don't know of anyone that's driven the keystone middleware to these rates and 
determined where the bottlenecks are other than folks deploying swift and 
driving high performance numbers. 

The concern that John detailed to me is how the middleware handles memcache 
connections, which is directly impacted by how you're deploying it. From John:

"Specifically, I'm concerned with the way auth_token handles memcache 
connections. I'm not sure how well it will work in swift with eventlet. If the 
memcache module being used caches sockets, then concurrency in eventlet 
(different greenthreads) will cause problems. Eventlet detects and prevents 
concurrent access to the same socket (for good reason--data from the socket may 
be delivered to the wrong listener)."

I haven't driven any system this hard to suss out the issues, but there's the 
nut of it - how to keep from cascading that load out to validation of 
authorization tokens. The middleware is assuming that eventlet and any needed 
patching has already been done when it's invoked (i.e. no monkeypatching in 
there), and loads the "memcache" module and uses whatever it has in there 
directly. 
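The eventlet-friendly pattern pointed at here (swift/common/memcache.py:_get_conns()) boils down to a per-server connection pool, so no two concurrent greenthreads ever hold the same socket. A stdlib-only sketch of the shape of the idea, under the assumption that the real code uses eventlet-aware primitives rather than a plain queue:

```python
# Sketch: a connection pool in which a socket is held by at most one user
# at a time, so concurrent greenthreads never share a socket. The real
# swift code uses eventlet-friendly primitives; this just shows the shape.
import queue

class ConnPool:
    def __init__(self, factory, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(factory())

    def acquire(self):
        # Blocks until a connection is free: exclusive use while held.
        return self._free.get()

    def release(self, conn):
        self._free.put(conn)

pool = ConnPool(factory=lambda: object(), size=2)
a = pool.acquire()
b = pool.acquire()
pool.release(a)
c = pool.acquire()   # reuses the first connection, only after release
print(c is a)
```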

This is all assuming you're using the current standard of UUID based tokens. 
Keystone is also supporting PKI based tokens, which removes the need to 
constantly make the validation call, but at the computational cost of 
unspinning the decryption around the signed token. I don't know of any load 
numbers and analysis with that backing set up at this time, and would expect 
that any initial analysis would lead to some clear performance optimizations 
that may be needed.

- joe


On Oct 24, 2012, at 1:20 PM, Alejandro Comisario wrote:
> Thanks Josh, and Thanks John.
> I know it was an exciting Summit! Congrats to everyone !
> 
> John, let me give you extra data and something that I've already said, that 
> might be wrong.
> 
> First, the request size that will compose the 90.000RPM - 200.000 RPM will be 
> from 90% 20K objects, and 10% 150/200K objects.
> Second, all the "GET" requests, are going to be "public", configured through 
> ACL, so, if the GET requests are public (so, no X-Auth-Token is passed) why 
> should i be worried about the keystone middleware ?
> 
> Just to clarify, because i really want to understand what my real metrics are 
> so i can know where to tune in case i need to.
> Thanks !
> 
> ---
> Alejandrito
> 
> 
> On Wed, Oct 24, 2012 at 3:28 PM, John Dickinson  wrote:
> Sorry for the delay. You've got an interesting problem, and we were all quite 
> busy last week with the summit.
> 
> First, the standard caveat: Your performance is going to be highly dependent 
> on your particular workload and your particular hardware deployment. 3500 
> req/sec in two different deployments may be very different based on the size 
> of the requests, the spread of the data requested, and the type of requests. 
> Your experience may vary, etc, etc.
> 
> However, for an attempt to answer your question...
> 
> 6 proxies for 3500 req/sec doesn't sound unreasonable. It's in line with 
> other numbers I've seen from people and what I've seen from other large scale 
> deployments. You are basically looking at about 600 req/sec/proxy.
> 
> My first concern is not the swift workload, but how keystone handles the 
> authentication of the tokens. A quick glance at the keystone source seems to 
> indicate that keystone's auth_token middleware is using a standard memcached 
> module that may not play well with concurrent connections in eventlet. 
> Specifically, sockets cannot be reused concurrently by different 
> greenthreads. You may find that the token validation in the auth_token 
> middleware fails under any sort of load. This would need to be verified by 
> your testing or an examination of the memcache module being used. An 
> alternative would be to look at the way swift implements its memcache 
> connections in an eventlet-friendly way (see 
> swift/common/memcache.py:_get_conns() in the swift codebase).
> 
> --John
> 
> 
> 
> On Oct 11, 2012, at 4:28 PM, Alejandro Comisario wrote:
> 
> > Hi Stackers !
> > This is the thing: today we have 24 datanodes (3 copies, 90TB usable); 
> > each datanode has 2 Intel hexacore CPUs with HT and 96GB of RAM, and 6 
> > proxies with the same hardware configuration, using swift 1.4.8 with 
> > keystone.
> > Regarding the networking, each proxy / datanodes has a dual 1Gb nic, bonded 
> > in LACP mode 4, each of the proxies are behind an F5 BigIP Load Balancer ( 
> > so, no worries over there ).
> >
> > Today, we are receiving 5000 RPM ( Requests per Minute ) with 660 RPM per 
> > proxy. I know it's low, but now ... with a new product migration, soon ( 
> > really soon ) we are expecting to receive about a total of 90.000 RPM 
> > average ( 1500 req / s ) with weekly peaks of 200.000 RPM ( 3500 req / s ) 
> > to the swift api, which will be 90% publi

[Openstack] Ceilometer, StackTach, Tach / Scrutinize, CloudWatch integration ... Summit followup

2012-10-24 Thread Sandy Walsh
Hey y'all,

Great to chat during the summit last week, but it's been a crazy few days of 
catch-up since then. 

The main takeaway for me was the urgent need to get some common libraries under 
these efforts. 

So, to that end ...

1. To those that asked, I'm going to get my slides / video presentation made 
available via the list. Stay tuned.

2. I'm having a hard time following all the links to various efforts going on 
(seems every time I turn around there's a new metric/instrumentation effort, 
which is good I guess :) 

Is there a single location I can place my feedback? If not, should we create 
one? I've got lots of suggestions/ideas and would hate to have to duplicate the 
threads or leave other groups out. 

3. I'm wrapping up the packaging / cleanup of StackTach v2 with Stacky and hope 
to make a more formal announcement on this by the end of the week. Lots of 
great changes to make it easier to use/deploy based on the Summit feedback!

Unifying the stacktach worker (consumer of events) into ceilometer should be a 
first step to integration (or agree upon a common YAGI-based consumer?) 

4. If you're looking at Tach, you should also consider looking at Scrutinize 
(my replacement effort) https://github.com/SandyWalsh/scrutinize (needs 
packaging/docs and some notifier tweaks on the cprofiler to be called "done for 
now")

Looking forward to moving ahead on this ...

Cheers,
-S







Re: [Openstack] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM

2012-10-24 Thread Sina Sadeghi

The guys from Zmanda presented some evaluation of swift at the summit,
which might be useful here: http://www.zmanda.com/blogs/?p=947
They've written a blog, but it doesn't have all the findings which they
presented at the summit.

Maybe Chander would be willing to share? I've CC'd him in.

--
Sina Sadeghi
Lead Cloud Engineer

Aptira Pty Ltd
1800 APTIRA
aptira.com
Follow @aptira

On 25/10/12 08:03, Alejandro Comisario wrote:

Wow nice, I think we have a lot to look at guys.
I'll get back to you as soon as we have more metrics to share regarding
this matter. Basically, we are going to try to add more proxies, since
indeed, the requests are too small (20K not 20MB).

Thanks guys !
---
Alejandrito

On Wed, Oct 24, 2012 at 5:49 PM, John Dickinson wrote:

  Smaller requests, of course, will have a higher percentage
  overhead for each request, so you will need more proxies
  for many small requests than the same number of larger
  requests (all other factors being equal).
  
  If most of the requests are reads, then you probably won't
  have to worry about keystone keeping up.
  
  You may want to look at tuning the object server config
  variable "keep_cache_size". This variable is the maximum
  size of an object to keep in the buffer cache for publicly
  requested objects. So if you tuned it to be 20MB
  (20971520)--by default it is 5424880--you should be able
  to serve most of your requests without needing to do a
  disk seek, assuming you have enough RAM on the object
  servers. Note that background processes on the object
  servers end up using the cache for storing the filesystem
  inodes, so lots of RAM will be a very good thing in your
  use case. Of course, the usefulness of this caching is
  dependent on how frequently a given object is accessed.
  You may consider an external caching system (anything from
  varnish or squid to a CDN provider) if the direct public
  access becomes too expensive.
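
For reference, the tuning described above lives in the object server's configuration. A sketch of the relevant fragment (the section layout is standard Swift object-server config; the 20971520 value is the one quoted above, i.e. 20 MiB):

```ini
# object-server.conf (illustrative fragment)
[app:object-server]
use = egg:swift#object
# Maximum size (bytes) of an object to keep in the buffer cache for
# publicly requested objects. Default is 5424880; raising it to 20 MiB
# lets the 20K-150/200K objects in this workload be served from RAM.
keep_cache_size = 20971520
```

Restart the object servers after changing it, and remember the cache is shared with filesystem inode caching by background processes, so size RAM accordingly.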
  
  One other factor to consider is that since swift stores 3
  replicas of the data, there are 3 servers that can serve a
  request for a given object, regardless of how many storage
  nodes you have. This means that if all 3500 req/sec are to
  the same object, only 3 object servers are handling that.
  However, if the 3500 req/sec are spread over many objects,
  the full cluster will be utilized. Some of us have talked
  about how to improve swift's performance for concurrent
  access to a single object, but those improvements have not
  been coded yet.
  
  --John

  On Oct 24, 2012, at 1:20 PM, Alejandro Comisario wrote:
  
  > Thanks Josh, and Thanks John.
  > I know it was an exciting Summit! Congrats to
  everyone !
  >
  > John, let me give you extra data and something
  that I've already said that might be wrong.
  >
  > First, the request size that will compose the
  90.000 RPM - 200.000 RPM will be 90% 20K objects
  and 10% 150/200K objects.
  > Second, all the "GET" requests are going to be
  "public", configured through ACLs; so, if the GET
  requests are public (no X-Auth-Token is passed),
  why should I be worried about the keystone middleware?
  >
  > Just to clarify, because I really want to
  understand what my real metrics are so I can know
  where to tune in case I need to.
  > Thanks !
  >
  > ---
  > Alejandrito
  >
  >
  > On Wed, Oct 24, 2012 at 3:28 PM, John Dickinson
  
  wrote:
  > Sorry for the delay. You've got an interesting
  problem, and we were all quite busy last week with the
  summit.
  >
 

[Openstack] iptables rule missing in multi node setup

2012-10-24 Thread Qin, Xiaohong
Hi All,

In one of my lab setups, I found the following iptable rules are missing on the 
controller node,

Chain nova-compute-inst-3 (1 references)
target prot opt source   destination
DROP   all  --  anywhere anywhere state INVALID
ACCEPT all  --  anywhere anywhere state 
RELATED,ESTABLISHED
nova-compute-provider  all  --  anywhere anywhere
ACCEPT udp  --  usxxcoberbmbp1.corp.emc.com  anywhere udp 
spt:bootps dpt:bootpc
ACCEPT all  --  10.0.0.0/24  anywhere
ACCEPT icmp --  anywhere anywhere
ACCEPT tcp  --  anywhere anywhere tcp dpt:ssh
nova-compute-sg-fallback  all  --  anywhere anywhere

Especially this entry,

ACCEPT all  --  10.0.0.0/24  anywhere

This is the network (10.0.0.0/24) we used for all VMs. I'm using the latest 
Folsom quantum code.
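
A quick way to confirm whether the expected rule made it into the chain is to scan `iptables-save` output for it. A rough sketch (the chain name and CIDR below are the ones from this report; adjust for your own instance):

```python
import re


def has_fixed_net_accept(rules_text, chain, cidr):
    """Check iptables-save output for an ACCEPT rule on `chain` whose
    source is `cidr` (e.g. the fixed network used by the VMs)."""
    # iptables-save emits rules like:
    #   -A nova-compute-inst-3 -s 10.0.0.0/24 -j ACCEPT
    pattern = r'-A %s .*-s %s .*-j ACCEPT' % (re.escape(chain), re.escape(cidr))
    return any(re.search(pattern, line) for line in rules_text.splitlines())


# In practice, feed it the output of `iptables-save` captured on the
# controller node; here a one-line sample stands in for that output.
sample = "-A nova-compute-inst-3 -s 10.0.0.0/24 -j ACCEPT"
print(has_fixed_net_accept(sample, 'nova-compute-inst-3', '10.0.0.0/24'))  # True
```

If the rule is missing, restarting the nova-compute service normally rebuilds the per-instance chains from the security-group rules in the database.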

Thanks.

Dennis Qin


Re: [Openstack] Possible upgrade bug in nova-volume (& cinder)?

2012-10-24 Thread John Griffith
On Wed, Oct 24, 2012 at 3:20 PM, Jonathan Proulx  wrote:

> On Wed, Oct 24, 2012 at 3:01 PM, John Griffith
>  wrote:
>
> > Hey Jon,
> >
> > Couple of things going on, one is the volume naming (in progress here:
> > https://review.openstack.org/#/c/14615/).  I'll take a closer look at
> some
> > of the other issues you pointed out.
>
> Hi John,
>
> On this issue I think the issue you link to above covers my problem
> (even if that exact implementation now seems abandoned).  So "the
> right thing" is to normalize all the naming on UUID in my case using
> lvrename (or symlinks as the patchset above does) and updating the
> cinder db provider_location.  What other issues did you see to look
> into?  Seems this covers everything I brought up here and the issues
> you helped me with on IRC earlier (thanks) seem like documentation
> issues (which I hope to get into the docs once I clean up my notes)
>
> -Jon
>
Hey Jon,

Cool... Yeah, I had intended for that patch to be a stable/folsom patch but
shouldn't have submitted it to master :(  The real problem isn't just
normalizing the lvm names, but also the provider_location information that
is stored in the DB for when you try to attach to your compute node.

The cinder version is the same (https://review.openstack.org/#/c/14790/)
and depending on the feedback it's a candidate for back-port.

The other possible issues that I've seen people run in to:
1. volumes directory not being specified correctly
2. not restarting tgtadm
3. not having the include statement in /etc/tgt/conf.d

I think you and I covered most of these in our chat on IRC earlier today...

One other thing that Vish pointed out is I made an assumption about
attached volumes that may result in you having to detach/reattach after the
upgrade.  I'm looking into that one now.
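
For anyone doing the normalization by hand in the meantime, the rename step can be scripted. A sketch under stated assumptions: the volume group name, and the old `volume-%08x` nova naming template, are illustrative and must be verified against your own LVs (and the cinder DB `provider_location` still has to be updated separately, as noted above):

```python
def lvrename_commands(vg, id_to_uuid):
    """Build lvrename commands mapping old sequential nova-volume LV
    names (volume-%08x style) to the UUID-based names cinder expects.

    `id_to_uuid` maps the old integer volume id to the new volume UUID.
    """
    cmds = []
    for vol_id, uuid in sorted(id_to_uuid.items()):
        old = 'volume-%08x' % vol_id   # assumed old naming template
        new = 'volume-%s' % uuid
        cmds.append('lvrename %s %s %s' % (vg, old, new))
    return cmds


# Example with a hypothetical mapping; print the commands and review
# them before running anything against a real volume group.
for cmd in lvrename_commands('nova-volumes', {1: 'aaaa-bbbb'}):
    print(cmd)  # lvrename nova-volumes volume-00000001 volume-aaaa-bbbb
```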

Thanks,
John


Re: [Openstack] swift tempURL requests yield 401 Unauthorized

2012-10-24 Thread Dieter Plaetinck
Thanks for the help.
Along with yours, and other people's in #openstack-swift on IRC, we fixed it.

I had not added tempurl to the pipeline in proxy-server.conf. Once that was
fixed, it worked immediately on 1.7, but not on 1.4, which started saying "500
Internal Server Error". After some more tinkering, I still couldn't get it to
work.
Ultimately I just upgraded this cluster to 1.7 too, and it worked straight away.

Dieter
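
For reference, the tempURL signature is an HMAC-SHA1 over the method, expiry timestamp, and object path, which is what the script quoted below computes. A self-contained sketch of the generator (the host and account values are placeholders; the path must start at /v1/<account>):

```python
import hmac
from hashlib import sha1
from time import time


def make_temp_url(base, path, key, method='GET', ttl=60):
    """Build a Swift tempURL: sig = HMAC-SHA1(key, 'METHOD\\nEXPIRES\\nPATH')."""
    expires = int(time() + ttl)
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
    return '%s%s?temp_url_sig=%s&temp_url_expires=%s' % (base, path, sig, expires)


# Placeholder host/account; the key must match the Temp-Url-Key metadata
# set on the account, and tempurl must be in the proxy pipeline.
url = make_temp_url('http://10.90.151.5:8080',
                    '/v1/AUTH_system/uploads/mylogfile.log', 'key')
print(url)
```

The usual failure modes are the ones Orion lists below: a missing or mismatched key, tempurl absent from the pipeline, or clock skew making the expiry invalid.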

On Wed, 24 Oct 2012 11:24:38 -0700
Orion Auld  wrote:

> First, is that the exact logging code that you have?  Because AFAIK,
> 
> self.logger("Message")
> 
> won't work.  self.logger is just the logger object.  You'd need to say
> something like:
> 
> self.logger.info("Message")
> 
> to see the message.  So you might try that, and then you can see what the
> issue is more clearly.  For me, it's usually one of the following, in order
> of likelihood:
> 
>1. I bungled the TempUrlKey header name setting it with the swift
>utility.
>2. I have a mismatched TempUrlKey.
>3. I forgot to set the TempUrlKey.
>4. Clock skew.
> 
> -- Orion
> 
> >
> > Hi,
> > using swift 1.4.8 on Centos machines. (latest packages for centos.  note
> > that i'm assuming tempurl works with this version merely because all the
> > code seems to be there, i couldn't find clear docs on whether it should
> > work or not?)
> > I want to use the swift tempURL feature as per
> > http://failverse.com/using-temporary-urls-on-rackspace-cloud-files/
> >
> > http://docs.rackspace.com/files/api/v1/cf-devguide/content/TempURL-d1a4450.html
> >
> > http://docs.rackspace.com/files/api/v1/cf-devguide/content/Set_Account_Metadata-d1a4460.html
> >
> > TLDR: set up metadata correctly, but tempurl requests yield http 401,
> > can't figure it out, _get_hmac() doesn't seem to be called.
> >
> > First, I set the key metadata (this works fine) (tried both the swift CLI
> > program as well as curl), and I tried setting it both on container level
> > (container "uploads") as well as account level
> > (though i would prefer container level)
> >
> > alias vimeoswift=swift -A http://$ip:8080/auth/v1.0 -U system:root -K
> > testpass'
> > vimeoswift post -m Temp-Url-Key:key uploads
> > vimeoswift post -m Temp-Url-Key:key
> > curl -i -X POST -H X-Auth-Token:$t -H X-Account-Meta-Temp-URL-Key:key
> > http://$ip:8080/v1/AUTH_system
> >
> > this seems to work, because when I stat the account and the container, they
> > show up:
> >
> >
> > [root@dfvimeodfsproxy1 ~]# vimeoswift stat uploads
> >   Account: AUTH_system
> > Container: uploads
> >   Objects: 1
> > Bytes: 1253
> >  Read ACL:
> > Write ACL:
> >   Sync To:
> >  Sync Key:
> > Meta Temp-Url-Key: key <--
> > Accept-Ranges: bytes
> > [root@dfvimeodfsproxy1 ~]# vimeoswift stat
> >Account: AUTH_system
> > Containers: 1
> >Objects: 1
> >  Bytes: 1253
> > Meta Temp-Url-Key: key <--
> > Accept-Ranges: bytes
> > [root@dfvimeodfsproxy1 ~]#
> >
> > I have already put a file in container uploads (which I can retrieve just
> > fine using an auth token):
> > [root@dfvimeodfsproxy1 ~]# vimeoswift stat uploads mylogfile.log | grep
> > 'Content Length'
> > Content Length: 1253
> >
> > now however, if i want to retrieve this file using the tempURL feature, it
> > doesn't work:
> >
> > using this script
> > #!/usr/bin/python2
> > import hmac
> > from hashlib import sha1
> > from time import time
> > method = 'GET'
> > expires = int(time() + 60)
> > base = 'http://10.90.151.5:8080'
> > path = '/v1/AUTH_system/uploads/mylogfile.log'
> > key = 'key'
> > hmac_body = '%s\n%s\n%s' % (method, expires, path)
> > sig = hmac.new(key, hmac_body, sha1).hexdigest()
> > print '%s%s?temp_url_sig=%s&temp_url_expires=%s' % (base, path, sig,
> > expires)
> >
> > ~ ❯ openstack-signed-url2.py
> >
> > http://10.90.151.5:8080/v1/AUTH_system/uploads/mylogfile.log?temp_url_sig=e700f568cd099a432890db00e263b29b999d3604&temp_url_expires=1350666309
> > ~ ❯ wget '
> > http://10.90.151.5:8080/v1/AUTH_system/uploads/mylogfile.log?temp_url_sig=e700f568cd099a432890db00e263b29b999d3604&temp_url_expires=1350666309
> > '
> > --2012-10-19 13:04:14--
> > http://10.90.151.5:8080/v1/AUTH_system/uploads/mylogfile.log?temp_url_sig=e700f568cd099a432890db00e263b29b999d3604&temp_url_expires=1350666309
> > Connecting to 10.90.151.5:8080... connected.
> > HTTP request sent, awaiting response... 401 Unauthorized
> > Authorization failed.
> >
> >
> > I thought I could easily debug this myself by changing the _get_hmac()
> > function
> > in /usr/lib/python2.6/site-packages/swift/common/middleware/tempurl.py
> > like so:
> >
> > def _get_hmac(self, env, expires, key, request_method=None):
> > """
> >(...)
> > """
> > if not request_method:
> > request_method = env['REQUEST_METHOD']
> > self.logger("getting HMAC for method %s, expires %s, path %s" %
> > (request_method, expires, env['PATH_INFO']))
> > hmac = hmac.new(key, '%s\n%s\n%s' % (

Re: [Openstack] [ceilometer] Potential New Use Cases

2012-10-24 Thread Dan Dyer
I don't think it's just a matter of adding more meters or events for a 
couple of reasons:
1. In many cases the metadata I am referring to comes from a different 
source than the base usage data. Nova is still emitting its normal 
events, but we get the service/user mapping from a different source. I 
would not characterize this data as usage metrics but more data about 
the system relationships.
2. In the multiple VM case, we need to have the relationships specified 
so that we can ignore the proper VM's. There has also been talk of 
hybrid billing models that charge for some part of the VM usage as well 
as other metrics. Once again we need a way to characterize the 
relationships so that processing can associate and filter correctly.


Dan

On 10/24/2012 3:35 PM, Julien Danjou wrote:

On Wed, Oct 24 2012, Dan Dyer wrote:


Use Case 1
Service Owned Instances
There are a set of use cases where a service is acting on behalf of a user,
the service is the owner of the VM but billing needs to be attributed to the
end user of the system. This scenario drives two requirements:
1. Pricing is similar to base VM's but with a premium. So the type of
service for a VM needs to be identifiable so that the appropriate pricing
can be applied.
2. The actual end user of the VM needs to be identified so usage can be
properly attributed

I think that for this, you just need to add more meters on top of the
existing one with your own user and project id information.


As an example, in some of our PAAS use cases, there is a service controller
running on top of the base VM that maintains the control and manages the
customer experience. The idea is to expose the service and not have the
customer have to (or even be able to) manipulate the virtual machine
directly. So in this case, from a Nova perspective, the PAAS service owns
the VM and its tenantID is what is reported back in events. The way we
resolve this is to query the service controller for metadata about the
instances they own. This is stored off in a separate "table" and used to
determine the real user at aggregation time.

This is probably where you should emit the meters you need.


Use Case 2
Multiple Instances combine to make a billable "product/service"
In this use case, a service might consist of several VM's, but the actual
number does not directly drive the billing.  An example of this might be a
redundant service that has a primary and two backup VM's that make up a
deployment. The customer is charged for the service, not the fact that there
are 3 VM's running. Once again, we need meta data that is able to describe
this relationship so that when the billing records are processed, this
relationship can be identified and billed properly.

Kind of the same here, if you don't want to really bill the vm, just
don't meter them (or ignore the meters) and emit your own meter via your
PaaS platform to bill your customer.

Or is there a limitation I miss?






Re: [Openstack] Using nova-volumes openstack LVM group for other pourposes

2012-10-24 Thread Daniel Vázquez
Yeah!! Jon, I agree with you about organizing/separating LVM
groups; this is for a very, very special situation.
In any case, if I use nova's labeling pattern when creating the logical
volume (or by renaming the label), I hope I can switch the content of this
custom logical volume to use with openstack, and attach it to a VM in the future.



2012/10/24 Jonathan Proulx :
> On Wed, Oct 24, 2012 at 08:56:26PM +0200, Daniel Vázquez wrote:
> :Hi here!
> :
> :Can we create and use news logical volumes for own/custom use(out of
> :openstack) on nova-volumes openstack LVM group, and use it beside
> :openstack operational?
> :IMO it's LVM and no problem, but it has openstack collateral consequences?
>
> If you are talking about creating random logical volumes for
> non-openstack use in the same volume group nova-volume or cinder is
> using to create volumes (lv are in the same vg but don't otherwise
> interact), yes you can do that without confusing openstack or having
> your volumes trampled.  For example, only having one volume group and
> using that for operating system partitions as well as volumes for
> cinder.
>
> I don't think it's a particularly good idea from an organizational
> standpoint; I'd rather have distinct vg's for each purpose so it is
> clear which resources are operating system and which are data, but in
> my environment (a private computing/research cloud with a small admin
> group and 1k users in a few 10's of closely related tenants) it's
> probably more an aesthetic than technical choice.  The larger and more
> diverse your situation the stronger I'd argue for keeping them in
> separate VGs.
>
> -Jon



Re: [Openstack] [ceilometer] Potential New Use Cases

2012-10-24 Thread Asher Newcomer
+1 for both of these use cases
On Oct 24, 2012 5:06 PM, "Dan Dyer"  wrote:

>  Based on a discussion with Doug at the Summit, I would like to propose a
> couple of new use cases for Ceilometer. As background, up until now, the
> usage data that Ceilometer collects could be considered atomic in the sense
> that everything needed to understand/process the information could be
> contained in a single generated event. We have identified some use cases
> that will require additional meta data about the source and/or type of the
> event so that later processing can be performed.
>
> Use Case 1
> Service Owned Instances
> There are a set of use cases where a service is acting on behalf of a
> user, the service is the owner of the VM but billing needs to be attributed
> to the end user of the system. This scenario drives two requirements:
> 1. Pricing is similar to base VM's but with a premium. So the type of
> service for a VM needs to be identifiable so that the appropriate pricing
> can be applied.
> 2. The actual end user of the VM needs to be identified so usage can be
> properly attributed
>
> As an example, in some of our PAAS use cases, there is a service
> controller running on top of the base VM that maintains the control and
> manages the customer experience. The idea is to expose the service and not
> have the customer have to (or even be able to) manipulate the virtual
> machine directly. So in this case, from a Nova perspective, the PAAS
> service owns the VM and its tenantID is what is reported back in events.
> The way we resolve this is to query the service controller for metadata
> about the instances they own. This is stored off in a separate "table" and
> used to determine the real user at aggregation time. Note that in theory
> you could do this in the agent as part of collection, but we have found
> that this is very expensive and scales best if the actual substitution is
> delayed until the latest point possible (at which point there are
> potentially fewer records to process, or they can be better handled with
> parallel processing using something like MapReduce). From a billing
> perspective these instances will have unique pricing (i.e. premium on top
> of the base VM cost). Part of the aggregation process is to substitute
> the billable account for the service account and identify the service type
> so that proper billing can be applied. We would like to see the Ceilometer
> data model expanded to store this kind of metadata.
>
>
> Use Case 2
> Multiple Instances combine to make a billable "product/service"
> In this use case, a service might consist of several VM's, but the actual
> number does not directly drive the billing.  An example of this might be a
> redundant service that has a primary and two backup VM's that make up a
> deployment. The customer is charged for the service, not the fact that
> there are 3 VM's running. Once again, we need meta data that is able to
> describe this relationship so that when the billing records are processed,
> this relationship can be identified and billed properly.
>
> Both of these use cases point to a general need to be able to store
> meta-data that will allow the usage processing logic to identify
> relationships between VM's and provide additional context for determining
> billing policy.
>
> Dan Dyer
> HP Cloud Services
> aka: DanD
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


Re: [Openstack] Keystone connection issue

2012-10-24 Thread Kevin L. Mitchell
On Wed, 2012-10-24 at 21:40 +, Bhandaru, Malini K wrote:
> I have an Ubuntu 12.10 install with devstack freshly downloaded.
> Does anybody have an issue where devstack/stack.sh script fails because 
> keystone is unable to start, and consequently, none of the services start?
> ..
> 
> 'one/keystone.conf --log-config /etc/keystone/logging.conf -d --debug
> + echo 'Waiting for keystone to start...'
> 
> keystone endpoint-create: error: argument --service-id/--service_id: expected 
> one argument

Actually, it seems like I've seen that happen with our gate jobs, which
run tests under a fresh devstack environment.  You might try running it
again and seeing if it runs the second time…
-- 
Kevin L. Mitchell 




[Openstack] Keystone connection issue

2012-10-24 Thread Bhandaru, Malini K
Hello All!

I have an Ubuntu 12.10 install with devstack freshly downloaded.
Does anybody have an issue where devstack/stack.sh script fails because 
keystone is unable to start, and consequently, none of the services start?
..

'one/keystone.conf --log-config /etc/keystone/logging.conf -d --debug
+ echo 'Waiting for keystone to start...'

keystone endpoint-create: error: argument --service-id/--service_id: expected 
one argument

Thanks
Malini




Re: [Openstack] [ceilometer] Potential New Use Cases

2012-10-24 Thread Matt Joyce
I think a good deal of ceilometer's messaging and event tracking could
additionally be used for event audit logging.

-Matt

On Wed, Oct 24, 2012 at 2:35 PM, Julien Danjou  wrote:

> On Wed, Oct 24 2012, Dan Dyer wrote:
>
> > Use Case 1
> > Service Owned Instances
> > There are a set of use cases where a service is acting on behalf of a
> user,
> > the service is the owner of the VM but billing needs to be attributed to
> the
> > end user of the system. This scenario drives two requirements:
> > 1. Pricing is similar to base VM's but with a premium. So the type of
> > service for a VM needs to be identifiable so that the appropriate pricing
> > can be applied.
> > 2. The actual end user of the VM needs to be identified so usage can be
> > properly attributed
>
> I think that for this, you just need to add more meters on top of the
> existing one with your own user and project id information.
>
> > As an example, in some of our PAAS use cases, there is a service
> controller
> > running on top of the base VM that maintains the control and manages
> the
> > customer experience. The idea is to expose the service and not have the
> > customer have to (or even be able to) manipulate the virtual machine
> > directly. So in this case, from a Nova perspective, the PAAS service owns
> > the VM and its tenantID is what is reported back in events. The way we
> > resolve this is to query the service controller for metadata about the
> > instances they own. This is stored off in a separate "table" and used to
> > determine the real user at aggregation time.
>
> This is probably where you should emit the meters you need.
>
> > Use Case 2
> > Multiple Instances combine to make a billable "product/service"
> > In this use case, a service might consist of several VM's, but the actual
> > number does not directly drive the billing.  An example of this might be
> a
> > redundant service that has a primary and two backup VM's that make up a
> > deployment. The customer is charged for the service, not the fact that
> there
> > are 3 VM's running. Once again, we need meta data that is able to
> describe
> > this relationship so that when the billing records are processed, this
> > relationship can be identified and billed properly.
>
> Kind of the same here, if you don't want to really bill the vm, just
> don't meter them (or ignore the meters) and emit your own meter via your
> PaaS platform to bill your customer.
>
> Or is there a limitation I miss?
>
> --
> Julien Danjou
> -- Free Software hacker & freelance
> -- http://julien.danjou.info
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


Re: [Openstack] [ceilometer] Potential New Use Cases

2012-10-24 Thread Julien Danjou
On Wed, Oct 24 2012, Dan Dyer wrote:

> Use Case 1
> Service Owned Instances
> There are a set of use cases where a service is acting on behalf of a user,
> the service is the owner of the VM but billing needs to be attributed to the
> end user of the system. This scenario drives two requirements:
> 1. Pricing is similar to base VM's but with a premium. So the type of
> service for a VM needs to be identifiable so that the appropriate pricing
> can be applied.
> 2. The actual end user of the VM needs to be identified so usage can be
> properly attributed

I think that for this, you just need to add more meters on top of the
existing one with your own user and project id information.

> As an example, in some of our PAAS use cases, there is a service controller
> running on top of the base VM that maintains the control and manages the
> customer experience. The idea is to expose the service and not have the
> customer have to (or even be able to) manipulate the virtual machine
> directly. So in this case, from a Nova perspective, the PAAS service owns
> the VM and its tenantID is what is reported back in events. The way we
> resolve this is to query the service controller for metadata about the
> instances they own. This is stored off in a separate "table" and used to
> determine the real user at aggregation time.

This is probably where you should emit the meters you need.

> Use Case 2
> Multiple Instances combine to make a billable "product/service"
> In this use case, a service might consist of several VM's, but the actual
> number does not directly drive the billing.  An example of this might be a
> redundant service that has a primary and two backup VM's that make up a
> deployment. The customer is charged for the service, not the fact that there
> are 3 VM's running. Once again, we need meta data that is able to describe
> this relationship so that when the billing records are processed, this
> relationship can be identified and billed properly.

Kind of the same here, if you don't want to really bill the vm, just
don't meter them (or ignore the meters) and emit your own meter via your
PaaS platform to bill your customer.

Or is there a limitation I miss?

-- 
Julien Danjou
-- Free Software hacker & freelance
-- http://julien.danjou.info




Re: [Openstack] Possible upgrade bug in nova-volume (& cinder)?

2012-10-24 Thread Jonathan Proulx
On Wed, Oct 24, 2012 at 3:01 PM, John Griffith
 wrote:

> Hey Jon,
>
> Couple of things going on, one is the volume naming (in progress here:
> https://review.openstack.org/#/c/14615/).  I'll take a closer look at some
> of the other issues you pointed out.

Hi John,

On this issue I think the issue you link to above covers my problem
(even if that exact implementation now seems abandoned).  So "the
right thing" is to normalize all the naming on UUID in my case using
lvrename (or symlinks as the patchset above does) and updating the
cinder db provider_location.  What other issues did you see to look
into?  Seems this covers everything I brought up here and the issues
you helped me with on IRC earlier (thanks) seem like documentation
issues (which I hope to get into the docs once I clean up my notes)

-Jon



[Openstack] [ceilometer] Potential New Use Cases

2012-10-24 Thread Dan Dyer
Based on a discussion with Doug at the Summit, I would like to propose a 
couple of new use cases for Ceilometer. As background, up until now, the 
usage data that Ceilometer collects could be considered atomic in the 
sense that everything needed to understand/process the information could 
be contained in a single generated event. We have identified some use 
cases that will require additional meta data about the source and/or 
type of the event so that later processing can be performed.


Use Case 1
Service Owned Instances
There are a set of use cases where a service is acting on behalf of a 
user: the service is the owner of the VM, but billing needs to be 
attributed to the end user of the system. This scenario drives two 
requirements:
1. Pricing is similar to base VM's but with a premium. So the type of 
service for a VM needs to be identifiable so that the appropriate 
pricing can be applied.
2. The actual end user of the VM needs to be identified so usage can be 
properly attributed


As an example, in some of our PAAS use cases, there is a service 
controller running on top of the base VM that maintains the control 
and manages the customer experience. The idea is to expose the service 
and not have the customer have to (or even be able to) manipulate the 
virtual machine directly. So in this case, from a Nova perspective, the 
PAAS service owns the VM and its tenantID is what is reported back in 
events. The way we resolve this is to query the service controller for 
metadata about the instances they own. This is stored off in a 
separate "table" and used to determine the real user at aggregation 
time. Note that in theory you could do this in the agent as part of 
collection, but we have found that this is very expensive and scales 
best if the actual substitution is delayed until the latest point 
possible (at which point there are potentially fewer records to 
process, or they can be better handled with parallel processing using 
something like MapReduce). From a billing perspective these instances 
will have unique pricing (i.e. a premium on top of the base VM cost). 
Part of the aggregation process is to substitute the billable account 
for the service account and identify the service type so that proper 
billing can be applied. We would like to see the Ceilometer data model 
expanded to store this kind of metadata.



Use Case 2
Multiple Instances combine to make a billable "product/service"
In this use case, a service might consist of several VM's, but the 
actual number does not directly drive the billing.  An example of this 
might be a redundant service that has a primary and two backup VM's that 
make up a deployment. The customer is charged for the service, not the 
fact that there are 3 VM's running. Once again, we need meta data that 
is able to describe this relationship so that when the billing records 
are processed, this relationship can be identified and billed properly.


Both of these use cases point to a general need to be able to store 
metadata that will allow the usage-processing logic to identify 
relationships between VMs and provide additional context for 
determining billing policy.


Dan Dyer
HP Cloud Services
aka: DanD
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM

2012-10-24 Thread Alejandro Comisario
Wow, nice. I think we have a lot to look at, guys.
I'll get back to you as soon as we have more metrics to share regarding
this matter.
Basically, we are going to try to add more proxies, since indeed the
requests are too small (20K, not 20MB).

Thanks, guys!
---
Alejandrito

On Wed, Oct 24, 2012 at 5:49 PM, John Dickinson  wrote:

> Smaller requests, of course, will have a higher percentage overhead for
> each request, so you will need more proxies for many small requests than
> the same number of larger requests (all other factors being equal).
>
> If most of the requests are reads, then you probably won't have to worry
> about keystone keeping up.
>
> You may want to look at tuning the object server config variable
> "keep_cache_size". This variable is the maximum size of an object to keep
> in the buffer cache for publicly requested objects. So if you tuned it to
> be 20MB (20971520)--by default it is 5424880--you should be able to serve
> most of your requests without needing to do a disk seek, assuming you have
> enough RAM on the object servers. Note that background processes on the
> object servers end up using the cache for storing the filesystem inodes, so
> lots of RAM will be a very good thing in your use case. Of course, the
> usefulness of this caching is dependent on how frequently a given object is
> accessed. You may consider an external caching system (anything from
> varnish or squid to a CDN provider) if the direct public access becomes too
> expensive.
>
> One other factor to consider is that since swift stores 3 replicas of the
> data, there are 3 servers that can serve a request for a given object,
> regardless of how many storage nodes you have. This means that if all 3500
> req/sec are to the same object, only 3 object servers are handling that.
> However, if the 3500 req/sec are spread over many objects, the full cluster
> will be utilized. Some of us have talked about how to improve swift's
> performance for concurrent access to a single object, but those
> improvements have not been coded yet.
>
> --John
>
>
>
> On Oct 24, 2012, at 1:20 PM, Alejandro Comisario <
> alejandro.comisa...@mercadolibre.com> wrote:
>
> > Thanks Josh, and Thanks John.
> > I know it was an exciting Summit! Congrats to everyone !
> >
> > John, let me give you extra data and something that i've already said,
> that might be wrong.
> >
> > First, the request size that will compose the 90.000RPM - 200.000 RPM
> will be from 90% 20K objects, and 10% 150/200K objects.
> > Second, all the "GET" requests, are going to be "public", configured
> through ACL, so, if the GET requests are public (so, no X-Auth-Token is
> passed) why should i be worried about the keystone middleware ?
> >
> > Just to clarify, because i really want to understand what my real
> metrics are so i can know where to tune in case i need to.
> > Thanks !
> >
> > ---
> > Alejandrito
> >
> >
> > On Wed, Oct 24, 2012 at 3:28 PM, John Dickinson  wrote:
> > Sorry for the delay. You've got an interesting problem, and we were all
> quite busy last week with the summit.
> >
> > First, the standard caveat: Your performance is going to be highly
> dependent on your particular workload and your particular hardware
> deployment. 3500 req/sec in two different deployments may be very different
> based on the size of the requests, the spread of the data requested, and
> the type of requests. Your experience may vary, etc, etc.
> >
> > However, for an attempt to answer your question...
> >
> > 6 proxies for 3500 req/sec doesn't sound unreasonable. It's in line with
> other numbers I've seen from people and what I've seen from other large
> scale deployments. You are basically looking at about 600 req/sec/proxy.
> >
> > My first concern is not the swift workload, but how keystone handles the
> authentication of the tokens. A quick glance at the keystone source seems
> to indicate that keystone's auth_token middleware is using a standard
> memcached module that may not play well with concurrent connections in
> eventlet. Specifically, sockets cannot be reused concurrently by different
> greenthreads. You may find that the token validation in the auth_token
> middleware fails under any sort of load. This would need to be verified by
> your testing or an examination of the memcache module being used. An
> alternative would be to look at the way swift implements its memcache
> connections in an eventlet-friendly way (see
> swift/common/memcache.py:_get_conns() in the swift codebase).
> >
> > --John
> >
> >
> >
> > On Oct 11, 2012, at 4:28 PM, Alejandro Comisario <
> alejandro.comisa...@mercadolibre.com> wrote:
> >
> > > Hi Stackers !
> > > This is the thing, today we have a 24 datanodes (3 copies, 90TB
> usables) each datanode has 2 intel hexacores CPU with HT and 96GB of RAM,
> and 6 Proxies with the same hardware configuration, using swift 1.4.8 with
> keystone.
> > > Regarding the networking, each proxy / datanodes has a dual 1Gb nic,
> bonded in LACP mode

Re: [Openstack] Finding/Making Windows 7 Images for OpenStack

2012-10-24 Thread Pádraig Brady

On 10/21/2012 12:56 AM, Curtis C. wrote:

Hi,

Has anyone seen any recent documentation on creating Windows 7 images
for OpenStack? I know it's supposed to be as easy as using kvm to
install it initially (as per the OpenStack docs) then importing that
image into glance, but there are some subtle things that I might be
missing because I haven't really used Windows in a decade. I've
certainly done a lot of searching so if it's a link on the first ten
pages page of google I've probably seen it already. :)

Perhaps the best thing would be if anyone knew of a virtualbox/vagrant
or similar methodology for automatically creating Windows 7 images
that I could start from. Someone must be generating new Win 7 images
daily for their private cloud somewhere/somehow... :)

Any pointers much appreciated.


You could generate images for glance using OZ:
https://github.com/clalancette/oz/wiki

(I've not tried windows myself, but it's listed as being supported)

thanks,
Pádraig.



Re: [Openstack] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM

2012-10-24 Thread John Dickinson
Smaller requests, of course, will have a higher percentage overhead for each 
request, so you will need more proxies for many small requests than the same 
number of larger requests (all other factors being equal).

If most of the requests are reads, then you probably won't have to worry about 
keystone keeping up.

You may want to look at tuning the object server config variable 
"keep_cache_size". This variable is the maximum size of an object to keep in 
the buffer cache for publicly requested objects. So if you tuned it to be 20MB 
(20971520)--by default it is 5424880--you should be able to serve most of your 
requests without needing to do a disk seek, assuming you have enough RAM on the 
object servers. Note that background processes on the object servers end up 
using the cache for storing the filesystem inodes, so lots of RAM will be a 
very good thing in your use case. Of course, the usefulness of this caching is 
dependent on how frequently a given object is accessed. You may consider an 
external caching system (anything from varnish or squid to a CDN provider) if 
the direct public access becomes too expensive.
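For reference, the knob John mentions lives in the object-server configuration. A minimal sketch, assuming a stock Swift object-server of that era; `keep_cache_size` and its 5424880-byte default are real options, and the value shown is simply the 20MB figure discussed in the thread:

```ini
[app:object-server]
use = egg:swift#object
# Largest object size (in bytes) kept in the buffer cache when serving
# publicly requested objects; the default is 5424880 (~5 MB).
keep_cache_size = 20971520
```

Whether raising it helps depends entirely on how much RAM the object servers can spare alongside the inode cache.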

One other factor to consider is that since swift stores 3 replicas of the data, 
there are 3 servers that can serve a request for a given object, regardless of 
how many storage nodes you have. This means that if all 3500 req/sec are to the 
same object, only 3 object servers are handling that. However, if the 3500 
req/sec are spread over many objects, the full cluster will be utilized. Some 
of us have talked about how to improve swift's performance for concurrent 
access to a single object, but those improvements have not been coded yet.

--John



On Oct 24, 2012, at 1:20 PM, Alejandro Comisario 
 wrote:

> Thanks Josh, and Thanks John.
> I know it was an exciting Summit! Congrats to everyone !
> 
> John, let me give you extra data and something that i've already said, that 
> might be wrong.
> 
> First, the request size that will compose the 90.000RPM - 200.000 RPM will be 
> from 90% 20K objects, and 10% 150/200K objects.
> Second, all the "GET" requests, are going to be "public", configured through 
> ACL, so, if the GET requests are public (so, no X-Auth-Token is passed) why 
> should i be worried about the keystone middleware ?
> 
> Just to clarify, because i really want to understand what my real metrics are 
> so i can know where to tune in case i need to.
> Thanks !
> 
> ---
> Alejandrito
> 
> 
> On Wed, Oct 24, 2012 at 3:28 PM, John Dickinson  wrote:
> Sorry for the delay. You've got an interesting problem, and we were all quite 
> busy last week with the summit.
> 
> First, the standard caveat: Your performance is going to be highly dependent 
> on your particular workload and your particular hardware deployment. 3500 
> req/sec in two different deployments may be very different based on the size 
> of the requests, the spread of the data requested, and the type of requests. 
> Your experience may vary, etc, etc.
> 
> However, for an attempt to answer your question...
> 
> 6 proxies for 3500 req/sec doesn't sound unreasonable. It's in line with 
> other numbers I've seen from people and what I've seen from other large scale 
> deployments. You are basically looking at about 600 req/sec/proxy.
> 
> My first concern is not the swift workload, but how keystone handles the 
> authentication of the tokens. A quick glance at the keystone source seems to 
> indicate that keystone's auth_token middleware is using a standard memcached 
> module that may not play well with concurrent connections in eventlet. 
> Specifically, sockets cannot be reused concurrently by different 
> greenthreads. You may find that the token validation in the auth_token 
> middleware fails under any sort of load. This would need to be verified by 
> your testing or an examination of the memcache module being used. An 
> alternative would be to look at the way swift implements its memcache 
> connections in an eventlet-friendly way (see 
> swift/common/memcache.py:_get_conns() in the swift codebase).
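The eventlet-safe pattern John points at (swift/common/memcache.py:_get_conns()) boils down to never letting two concurrent greenthreads share one socket: each caller checks a connection out of a pool, uses it exclusively, and returns it. A stdlib-only sketch of that idea, with plain threads standing in for greenthreads and bare objects standing in for sockets (this is an illustration of the pattern, not swift's actual code):

```python
import queue
import threading

class ConnPool:
    """Check a connection out, use it exclusively, put it back.

    Swift's memcache client does this per memcached server with eventlet
    greenthreads; here dummy objects stand in for sockets.
    """
    def __init__(self, factory, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def with_conn(self, fn):
        conn = self._pool.get()      # blocks until a connection is free
        try:
            return fn(conn)
        finally:
            self._pool.put(conn)     # returned; never shared concurrently

pool = ConnPool(factory=object, size=2)
seen = set()

def worker():
    pool.with_conn(lambda c: seen.add(id(c)))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(seen) <= 2)  # -> True: no more than the pool's 2 connections are ever used
```

A memcache module that instead stores one shared socket per server is exactly what fails under concurrent greenthread load.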
> 
> --John
> 
> 
> 
> On Oct 11, 2012, at 4:28 PM, Alejandro Comisario 
>  wrote:
> 
> > Hi Stackers !
> > This is the thing, today we have a 24 datanodes (3 copies, 90TB usables) 
> > each datanode has 2 intel hexacores CPU with HT and 96GB of RAM, and 6 
> > Proxies with the same hardware configuration, using swift 1.4.8 with 
> > keystone.
> > Regarding the networking, each proxy / datanodes has a dual 1Gb nic, bonded 
> > in LACP mode 4, each of the proxies are behind an F5 BigIP Load Balancer ( 
> > so, no worries over there ).
> >
> > Today, we are receiving 5000 RPM ( Requests per Minute ) with 660 RPM per 
> > Proxies, i know its low, but now ... with a new product migration, soon ( 
> > really soon ) we are expecting to receive about a total of 90.000 RPM 
> > average ( 1500 req / s ) with weekly peaks of 200.000 RPM ( 3500 req / s ) 
> > to the swift api, which will be 90% public gets ( no keystone 

Re: [Openstack] Finding/Making Windows 7 Images for OpenStack

2012-10-24 Thread Luis Fernandez Alvarez
Yes, you can do it with virt-manager. First, if you need to, you can modify the 
ISO... and then, with virt-manager, you can create a machine to install the 
operating system (obtaining the disk image that should be uploaded to glance).

The best way (IMHO) to deploy the template is to generalize the system, because 
that gives you the opportunity to reuse the image template. To do that, when 
the installer asks for your computer name, press Ctrl + Shift + F3 and you will 
enter audit mode.

In this state, you can modify the system before generalizing it, and finally you 
should run "c:\windows\system32\sysprep\sysprep.exe /shutdown /generalize 
/oobe" (have a look at the command's options). Then your virtual machine is 
turned off and you'll be able to take the disk, convert it to qcow2 (if needed), 
and add it to glance. With the image generalized, the next time it boots (on 
openstack), the OS will check the system to install drivers, etc.

Good luck!

Cheers,

Luis

De: Curtis C. [serverasc...@gmail.com]
Enviado el: lunes, 22 de octubre de 2012 19:05
Para: Luis Fernandez Alvarez
Cc: openstack@lists.launchpad.net
Asunto: Re: [Openstack] Finding/Making Windows 7 Images for OpenStack

On Mon, Oct 22, 2012 at 3:11 AM, Luis Fernandez Alvarez
 wrote:
> Hi Curtis,
>
> If you're planning to use Windows 7 images with KVM hypervisors, the main 
> steps I follow are:
>
> 1- Modify your Windows 7 image to inject virtio drivers 
> (http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers).
> To do that in a stylish way... you can use the Windows AIK that includes 
> DISM.exe and IMAGEX.exe that
> let you inject drivers to the WIM files included in Windows 7 
> installation media.
> I don't know if you have experience with it...but...after mounting the 
> wim files with imagex, you should run
> something like: "dism /image:C:\path\mount /add-driver 
> /driver:c:\virtiodrivers\win7\x86"

Hi,

Yeah, I will not be able to do anything stylish with Windows b/c of my
lack of experience with Windows. :)

> * Alternatively: you can just add it during installation from an external 
> media.

That I have done. I definitely get the need for the virtio drivers.

> 2- Use a tool like Aeolus Oz to create the image (you can automate it with 
> unattend.xml files) or the graphical KVM/qemu interface.

So once I create the image I should use something like virt-manager to
set it up? Maybe that's the step I'm missing.

> 3- Convert it to qcow2 and send it to glance.

Right.

>
> If you need further information, do not hesitate to contact me.
>

I very well may have to, thanks for the offer. :)

I will checkout setting up the image after the initial install with
virt-manager.

Thanks,
Curtis.

> Cheers,
>
> Luis.
>
> PS: You can also test the HyperV support, this way, you could use VHD images.
>
>
> 
> De: openstack-bounces+luis.fernandez.alvarez=cern...@lists.launchpad.net 
> [openstack-bounces+luis.fernandez.alvarez=cern...@lists.launchpad.net] en 
> nombre de Curtis C. [serverasc...@gmail.com]
> Enviado el: domingo, 21 de octubre de 2012 1:56
> Para: openstack@lists.launchpad.net
> Asunto: [Openstack] Finding/Making Windows 7 Images for OpenStack
>
> Hi,
>
> Has anyone seen any recent documentation on creating Windows 7 images
> for OpenStack? I know it's supposed to be as easy as using kvm to
> install it initially (as per the OpenStack docs) then importing that
> image into glance, but there are some subtle things that I might be
> missing because I haven't really used Windows in a decade. I've
> certainly done a lot of searching so if it's a link on the first ten
> pages page of google I've probably seen it already. :)
>
> Perhaps the best thing would be if anyone knew of a virtualbox/vagrant
> or similar methodology for automatically creating Windows 7 images
> that I could start from. Someone must be generating new Win 7 images
> daily for their private cloud somewhere/somehow... :)
>
> Any pointers much appreciated.
>
> Thanks,
> Curtis.
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp



--
Twitter: @serverascode
Blog: serverascode.com



Re: [Openstack] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM

2012-10-24 Thread Rick Jones

On Oct 11, 2012, at 4:28 PM, Alejandro Comisario 
 wrote:


Hi Stackers !
This is the thing, today we have a 24 datanodes (3 copies, 90TB
usables) each datanode has 2 intel hexacores CPU with HT and 96GB
of RAM, and 6 Proxies with the same hardware configuration, using
swift 1.4.8 with keystone. Regarding the networking, each proxy /
datanodes has a dual 1Gb nic, bonded in LACP mode 4,


Are you seeing good balancing of traffic across the two interfaces in 
the bonds?



each of the proxies are behind an F5 BigIP Load Balancer ( so, no
worries over there ).


What is the "pipe" into/out-of the F5 (cluster of F5's?) and how 
utilized is that pipe already?  If it is already running at anything more 
than 2.5% (5000/200,000) to 5.5% (5000/90,000) of capacity in the 
direction the GETs will flow, it will become a bottleneck. (handwaving it 
as 100% GETs rather than 90%)
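Rick's back-of-the-envelope check is just linear scaling of today's link utilization by the expected traffic growth. A tiny illustration using the thread's numbers (5000 RPM today, growing to a 90,000 RPM average and 200,000 RPM peaks):

```python
def future_utilization(current_rpm, future_rpm, current_util_pct):
    """Projected link utilization (%) assuming traffic scales linearly."""
    return current_util_pct * (future_rpm / current_rpm)

# Growing from 5000 RPM to the 200.000 RPM peak is a 40x increase, so a
# pipe that is already 2.5% utilized would be driven to saturation:
print(future_utilization(5000, 200_000, 2.5))  # -> 100.0
# The 90.000 RPM average is an 18x increase; 5.5% today saturates too:
print(future_utilization(5000, 90_000, 5.5))   # -> 99.0
```

Any current utilization above those thresholds means the F5's uplink, not the proxies, becomes the limit.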


rick jones



Today, we are receiving 5000 RPM ( Requests per Minute ) with 660
RPM per proxy. I know it's low, but now ... with a new product
migration, soon ( really soon ) we are expecting to receive a
total of about 90.000 RPM average ( 1500 req / s ) with weekly peaks
of 200.000 RPM ( 3500 req / s ) to the swift api, which will be 90%
public GETs ( no keystone auth ) and 10% authorized PUTs (keystone
in the middle; worth knowing that we have a pool of 10 keystone vms,
connected to a 5-node galera mysql cluster, so no worries there
either )

So, 3500 req/s divided by 6 proxy nodes doesn't sound like too much,
but well, it's a number that we can't ignore. What do you think about
these numbers? Do these 6 proxies sound good, or should we double
or triple the proxies? Does anyone have this size of requests and
can share their configs?




Re: [Openstack] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM

2012-10-24 Thread Alejandro Comisario
Thanks Josh, and Thanks John.
I know it was an exciting Summit! Congrats to everyone !

John, let me give you extra data and something that i've already said, that
might be wrong.

First, the request size that will compose the 90.000RPM - 200.000 RPM will
be from 90% 20K objects, and 10% 150/200K objects.
Second, all the "GET" requests, are going to be "public", configured
through ACL, so, if the GET requests are public (so, no X-Auth-Token is
passed) why should i be worried about the keystone middleware ?

Just to clarify, because i really want to understand what my real metrics
are so i can know where to tune in case i need to.
Thanks !
---
Alejandrito


On Wed, Oct 24, 2012 at 3:28 PM, John Dickinson  wrote:

> Sorry for the delay. You've got an interesting problem, and we were all
> quite busy last week with the summit.
>
> First, the standard caveat: Your performance is going to be highly
> dependent on your particular workload and your particular hardware
> deployment. 3500 req/sec in two different deployments may be very different
> based on the size of the requests, the spread of the data requested, and
> the type of requests. Your experience may vary, etc, etc.
>
> However, for an attempt to answer your question...
>
> 6 proxies for 3500 req/sec doesn't sound unreasonable. It's in line with
> other numbers I've seen from people and what I've seen from other large
> scale deployments. You are basically looking at about 600 req/sec/proxy.
>
> My first concern is not the swift workload, but how keystone handles the
> authentication of the tokens. A quick glance at the keystone source seems
> to indicate that keystone's auth_token middleware is using a standard
> memcached module that may not play well with concurrent connections in
> eventlet. Specifically, sockets cannot be reused concurrently by different
> greenthreads. You may find that the token validation in the auth_token
> middleware fails under any sort of load. This would need to be verified by
> your testing or an examination of the memcache module being used. An
> alternative would be to look at the way swift implements its memcache
> connections in an eventlet-friendly way (see
> swift/common/memcache.py:_get_conns() in the swift codebase).
>
> --John
>
>
>
> On Oct 11, 2012, at 4:28 PM, Alejandro Comisario <
> alejandro.comisa...@mercadolibre.com> wrote:
>
> > Hi Stackers !
> > This is the thing, today we have a 24 datanodes (3 copies, 90TB usables)
> each datanode has 2 intel hexacores CPU with HT and 96GB of RAM, and 6
> Proxies with the same hardware configuration, using swift 1.4.8 with
> keystone.
> > Regarding the networking, each proxy / datanodes has a dual 1Gb nic,
> bonded in LACP mode 4, each of the proxies are behind an F5 BigIP Load
> Balancer ( so, no worries over there ).
> >
> > Today, we are receiving 5000 RPM ( Requests per Minute ) with 660 RPM
> per Proxies, i know its low, but now ... with a new product migration, soon
> ( really soon ) we are expecting to receive about a total of 90.000 RPM
> average ( 1500 req / s ) with weekly peaks of 200.000 RPM ( 3500 req / s )
> to the swift api, which will be 90% public gets ( no keystone auth ) and
> 10% authorized PUTS (keystone in the middle, worth to know that we have a
> 10 keystone vms pool, connected to a 5 nodes galera mysql cluster, so no
> worries there either )
> >
> > So, 3500 req/s divided by 6 proxy nodes doesnt sounds too much, but
> well, its a number that we cant ignore.
> > What do you think about this numbers? does this 6 proxies sounds good,
> or we should double or triple the proxies ? Does anyone has this size of
> requests and can share their configs ?
> >
> > Thanks a lot, hoping to ear from you guys !
> >
> > -
> > alejandrito
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
>
>


Re: [Openstack] Not able to get IP address for VM

2012-10-24 Thread Jānis Ģeņģeris
To test if it's running, you can check whether the metadata process is
running; you can also use the solution Daniel suggested.

To check if VMs are able to access metadata, I think you have to connect
from an address that is registered as a legitimate network in nova. VMs
try to connect to the address 169.254.169.254 to receive information
provided by the metadata service. This address is non-routable, so you
must have iptables NAT rules that rewrite it to the proper destination,
the nova-metadata service address (the IP of the server where the
metadata service is running); if things are done properly, these rules
are already in place, added by nova.

If you manage to get inside VM, you can run this command:
curl http://169.254.169.254/latest/meta-data/public-ipv4

to check if you can get to metadata.

As I can see from your config, you are using Quantum, so you can just run:
nc -v 169.254.169.254 80
from the dhcp network namespace of your fixed network. To debug it further,
run tcpdump in every namespace involved to find out how far the packets go.

And make sure that you have your network topology setup as in docs:
http://docs.openstack.org/trunk/openstack-network/admin/content/connectivity.html
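The curl/nc reachability test can also be wrapped in a small, hypothetical helper for scripting; only the well-known metadata address and port are taken from the thread, the function itself is an illustration:

```python
import socket

def metadata_reachable(host="169.254.169.254", port=80, timeout=2.0):
    """Return True if a TCP connection to the metadata address succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Run from inside a VM (or the fixed network's dhcp namespace);
    # True means the NAT rule to the nova-metadata service is in place.
    print(metadata_reachable())
```

If this fails from the VM but the metadata service answers locally on the controller, the NAT rewrite of 169.254.169.254 is the place to look.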

On Wed, Oct 24, 2012 at 9:40 PM, Daniel Vázquez wrote:

> As root user
>
> $ service openstack-nova-metadata-api status
>
> or
>
> $ /etc/init.d/openstack-nova-metadata-api status
>
> bests,
>
> 2012/10/24 Srikanth Kumar Lingala :
> > @janis: How can I check that metadata service is working?
> >
> > @Salvatore:
> > DHCP Agent is working fine and I am not seeing any ERROR logs.
> > I am able to see dnsmasq services. I am able to see those MAC entries in
> the
> > hosts file.
> > tap interface is creating on Host Node, which is attached to br-int.
> >
> > Regards,
> > Srikanth.
> >
> > On Wed, Oct 24, 2012 at 7:24 PM, Salvatore Orlando 
> > wrote:
> >>
> >> Srikanth,
> >>
> >> from your analysis it seems that L2 connectivity between the compute and
> >> the controller node is working as expected.
> >> Before looking further, it is maybe worth ruling out the obvious
> problems.
> >> Hence:
> >> 1) is the dhcp-agent service running (or is it stuck in some error
> state?)
> >> 2) Can you see dnsmasq instances running on the controller node? If yes,
> >> do you see your VM's MAC in the hosts file for the dnsmasq instance?
> >> 3) If dnsmasq instances are running, can you confirm the relevant tap
> >> ports are inserted on Open vSwitch instance br-int?
> >>
> >> Salvatore
> >>
> >>
> >> On 24 October 2012 14:14, Jānis Ģeņģeris 
> wrote:
> >>>
> >>> Hi Srikanth,
> >>>
> >>> Can you confirm that metadata service is working and the VMs are able
> to
> >>> access it? Usually if VM's can't get network settings is because of
> >>> inaccessible metadata service.
> >>>
> >>> --janis
> >>>
> >>> On Wed, Oct 24, 2012 at 4:00 PM, Srikanth Kumar Lingala
> >>>  wrote:
> 
>  Here is the nova.conf file contents:
> 
>  [DEFAULT]
>  # MySQL Connection #
>  sql_connection=mysql://nova:password@10.232.91.33/nova
> 
>  # nova-scheduler #
>  rabbit_host=10.232.91.33
>  rabbit_userid=guest
>  rabbit_password=password
>  #scheduler_driver=nova.scheduler.simple.SimpleScheduler
>  #scheduler_default_filters=ImagePropertiesFilter
> 
> 
>  scheduler_driver=nova.scheduler.multi.MultiScheduler
> 
> compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
>  scheduler_available_filters=nova.scheduler.filters.standard_filters
>  scheduler_default_filters=ImagePropertiesFilter
> 
> 
>  # nova-api #
>  cc_host=10.232.91.33
>  auth_strategy=keystone
>  s3_host=10.232.91.33
>  ec2_host=10.232.91.33
>  nova_url=http://10.232.91.33:8774/v1.1/
>  ec2_url=http://10.232.91.33:8773/services/Cloud
>  keystone_ec2_url=http://10.232.91.33:5000/v2.0/ec2tokens
>  api_paste_config=/etc/nova/api-paste.ini
>  allow_admin_api=true
>  use_deprecated_auth=false
>  ec2_private_dns_show_ip=True
>  dmz_cidr=169.254.169.254/32
>  ec2_dmz_host=169.254.169.254
>  metadata_host=169.254.169.254
>  enabled_apis=ec2,osapi_compute,metadata
> 
> 
>  # Networking #
>  network_api_class=nova.network.quantumv2.api.API
>  quantum_url=http://10.232.91.33:9696
>  libvirt_vif_type=ethernet
>  linuxnet_vif_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
>  firewall_driver=nova.virt.firewall.NoopFirewallDriver
>  libvirt_use_virtio_for_bridges=True
> 
>  # Cinder #
>  #volume_api_class=cinder.volume.api.API
> 
>  # Glance #
>  glance_api_servers=10.232.91.33:9292
>  image_service=nova.image.glance.GlanceImageService
> 
>  # novnc #
>  novnc_enable=true
>  novncproxy_base_url=http://10.232.91.33:6080/vnc_auto.html
>  vncserver_proxyclient_address=127.0.0.1
>  vncserver_listen=0.0.0.0
> 
>  # Misc #
>  logdir=/var/log/nova
>  state_

Re: [Openstack] Using nova-volumes openstack LVM group for other pourposes

2012-10-24 Thread Jonathan Proulx
On Wed, Oct 24, 2012 at 08:56:26PM +0200, Daniel Vázquez wrote:
:Hi here!
:
:Can we create and use news logical volumes for own/custom use(out of
:openstack) on nova-volumes openstack LVM group, and use it beside
:openstack operational?
:IMO it's LVM and no problem, but it has openstack collateral consequences?

If you are talking about creating arbitrary logical volumes for
non-openstack use in the same volume group that nova-volume or cinder is
using to create volumes (the LVs are in the same VG but don't otherwise
interact), then yes, you can do that without confusing openstack or
having your volumes trampled -- for example, having only one volume
group and using it for operating system partitions as well as the
volumes cinder creates.

I don't think it's a particularly good idea from an organizational
standpoint; I'd rather have distinct VGs for each purpose so it is
clear which resources are operating system and which are data. But in
my environment (a private computing/research cloud with a small admin
group and 1k users in a few tens of closely related tenants) it's
probably more an aesthetic than a technical choice. The larger and more
diverse your situation, the more strongly I'd argue for keeping them in
separate VGs.
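One way to keep a shared-VG arrangement auditable is to lean on the LV naming convention: nova-volume/cinder's LVM driver names its volumes "volume-<uuid>" (the volume_name_template default), so anything else in the VG is yours. A small illustrative sketch; the VG name and sample LV names are hypothetical:

```python
import re

# Cinder/nova-volume's LVM driver names its LVs "volume-<uuid>" by default
# (the volume_name_template option); anything else in the VG is assumed to
# be a custom, non-openstack volume.
CINDER_LV = re.compile(
    r"^volume-[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$")

def classify_lvs(lv_names):
    """Split LV names into (openstack-managed, custom) lists."""
    managed = [n for n in lv_names if CINDER_LV.match(n)]
    custom = [n for n in lv_names if not CINDER_LV.match(n)]
    return managed, custom

# e.g. names taken from: lvs --noheadings -o lv_name nova-volumes
names = ["volume-3f2504e0-4f89-41d3-9a0c-0305e82c3301", "backup-scratch"]
managed, custom = classify_lvs(names)
print(managed)  # -> ['volume-3f2504e0-4f89-41d3-9a0c-0305e82c3301']
print(custom)   # -> ['backup-scratch']
```

Avoiding the "volume-" prefix for your own LVs keeps the two populations trivially separable even in one VG.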

-Jon



Re: [Openstack] Not able to get IP address for VM

2012-10-24 Thread Daniel Vázquez
As root user

$ service openstack-nova-metadata-api status

or

$ /etc/init.d/openstack-nova-metadata-api status

bests,

2012/10/24 Srikanth Kumar Lingala :
> @janis: How can I check that metadata service is working?
>
> @Salvatore:
> DHCP Agent is working fine and I am not seeing any ERROR logs.
> I am able to see dnsmasq services. I am able to see those MAC entries in the
> hosts file.
> tap interface is creating on Host Node, which is attached to br-int.
>
> Regards,
> Srikanth.
>
> On Wed, Oct 24, 2012 at 7:24 PM, Salvatore Orlando 
> wrote:
>>
>> Srikanth,
>>
>> from your analysis it seems that L2 connectivity between the compute and
>> the controller node is working as expected.
>> Before looking further, it is maybe worth ruling out the obvious problems.
>> Hence:
>> 1) is the dhcp-agent service running (or is it stuck in some error state?)
>> 2) Can you see dnsmasq instances running on the controller node? If yes,
>> do you see your VM's MAC in the hosts file for the dnsmasq instance?
>> 3) If dnsmasq instances are running, can you confirm the relevant tap
>> ports are inserted on Open vSwitch instance br-int?
>>
>> Salvatore
>>
>>
>> On 24 October 2012 14:14, Jānis Ģeņģeris  wrote:
>>>
>>> Hi Srikanth,
>>>
>>> Can you confirm that metadata service is working and the VMs are able to
>>> access it? Usually if VM's can't get network settings is because of
>>> inaccessible metadata service.
>>>
>>> --janis
>>>
>>> On Wed, Oct 24, 2012 at 4:00 PM, Srikanth Kumar Lingala
>>>  wrote:

 Here is the nova.conf file contents:

 [DEFAULT]
 # MySQL Connection #
 sql_connection=mysql://nova:password@10.232.91.33/nova

 # nova-scheduler #
 rabbit_host=10.232.91.33
 rabbit_userid=guest
 rabbit_password=password
 #scheduler_driver=nova.scheduler.simple.SimpleScheduler
 #scheduler_default_filters=ImagePropertiesFilter


 scheduler_driver=nova.scheduler.multi.MultiScheduler
 compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
 scheduler_available_filters=nova.scheduler.filters.standard_filters
 scheduler_default_filters=ImagePropertiesFilter


 # nova-api #
 cc_host=10.232.91.33
 auth_strategy=keystone
 s3_host=10.232.91.33
 ec2_host=10.232.91.33
 nova_url=http://10.232.91.33:8774/v1.1/
 ec2_url=http://10.232.91.33:8773/services/Cloud
 keystone_ec2_url=http://10.232.91.33:5000/v2.0/ec2tokens
 api_paste_config=/etc/nova/api-paste.ini
 allow_admin_api=true
 use_deprecated_auth=false
 ec2_private_dns_show_ip=True
 dmz_cidr=169.254.169.254/32
 ec2_dmz_host=169.254.169.254
 metadata_host=169.254.169.254
 enabled_apis=ec2,osapi_compute,metadata


 # Networking #
 network_api_class=nova.network.quantumv2.api.API
 quantum_url=http://10.232.91.33:9696
 libvirt_vif_type=ethernet
 linuxnet_vif_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
 firewall_driver=nova.virt.firewall.NoopFirewallDriver
 libvirt_use_virtio_for_bridges=True

 # Cinder #
 #volume_api_class=cinder.volume.api.API

 # Glance #
 glance_api_servers=10.232.91.33:9292
 image_service=nova.image.glance.GlanceImageService

 # novnc #
 novnc_enable=true
 novncproxy_base_url=http://10.232.91.33:6080/vnc_auto.html
 vncserver_proxyclient_address=127.0.0.1
 vncserver_listen=0.0.0.0
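
One thing worth flagging in this # novnc # block (it is also the usual culprit in the multi-host console problem this digest opened with): vncserver_proxyclient_address=127.0.0.1 tells the proxy to reach each instance's VNC server over loopback, which only works when the proxy and nova-compute share a host. In a multi-host setup, each compute node should generally advertise its own reachable address, e.g.:

```ini
# novnc # (on each compute node; fill in that node's own IP)
novnc_enable=true
novncproxy_base_url=http://10.232.91.33:6080/vnc_auto.html
vncserver_proxyclient_address=<this compute node's IP>
vncserver_listen=0.0.0.0
```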

 # Misc #
 logdir=/var/log/nova
 state_path=/var/lib/nova
 lock_path=/var/lock/nova
 root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
 verbose=true
 dhcpbridge_flagfile=/etc/nova/nova.conf
 dhcpbridge=/usr/bin/nova-dhcpbridge
 force_dhcp_release=True
 iscsi_helper=tgtadm
 connection_type=libvirt
 libvirt_type=kvm
 libvirt_ovs_bridge=br-int
 libvirt_vif_type=ethernet
 libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver


 Regards,
 Srikanth.


 On Mon, Oct 22, 2012 at 7:48 AM, gong yong sheng
  wrote:
>
> can u send out nova.conf file?
>
> On 10/22/2012 07:30 PM, Srikanth Kumar Lingala wrote:
>
> Hi,
> I am using the latest devstack and am trying to create a VM with one Ethernet
> interface. I am able to create the VM successfully, but the interface does
> not get an IP.
> I have Openstack Controller running the following:
>
> nova-api
> nova-cert
> nova-consoleauth
> nova-scheduler
> quantum-dhcp-agent
> quantum-openvswitch-agent
>
>
> And I also have an OpenStack Host Node running the following:
>
> nova-api
> nova-compute
> quantum-openvswitch-agent
>
>
> I am not seeing any errors in the nova or quantum logs.
> I observed that when I execute 'dhclient' in the VM, the 'br-int' interface
> on the OpenStack Controller receives the DHCP requests but does not send a
> reply.
> 

Re: [Openstack] Using nova-volumes openstack LVM group for other purposes

2012-10-24 Thread Daniel Vázquez
About security: you're talking about attaching volumes whose data is outside
the tenants' concerns, or something else?
Anyway, we can use the volume group like any other LVM group, and:
  - If we use the nova-volume labeling pattern, we can keep the volumes for
future use by openstack instances.
  - But if we use our own labeling pattern, then we can use them outside of
openstack... But can we use the lvrename command? Are there no other setup
concerns for openstack? Is that right?


2012/10/24 Eric Windisch :
>
> On Wednesday, October 24, 2012 at 14:56 PM, Daniel Vázquez wrote:
>
> Hi here!
>
> Can we create and use new logical volumes for our own/custom use (outside of
> openstack) on the nova-volumes openstack LVM group, and use them alongside
> normal openstack operation?
> IMO it's just LVM and no problem, but does it have collateral consequences
> for openstack?
>
> I generally advise not to do this due to potential security concerns.
>
> In practice, your concerns will be with deleting manually created volumes
> and creating volumes that match the pattern set in the nova-volumes/cinder
> configuration.
>
> Regards,
> Eric Windisch

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Expanding Storage - Rebalance Extreeemely Slow (or Stalled?)

2012-10-24 Thread Emre Sokullu
Thanks Andi, that helps, it's true that my expectations were misplaced; I
was expecting all nodes to "rebalance" until they each store the same size.

What's weird though is there are missing folders in the newly created
c0d4p1 node. Here's what I get

root@storage3:/srv/node# ls c0d1p1/
accounts  async_pending  containers  objects  tmp

root@storage3:/srv/node# ls c0d4p1/
accounts  tmp

Is that normal?

And when I check /var/log/rsyncd.log for the moves between storage nodes, I
see many entries like the following, which, again, makes me wonder whether
something is wrong:

2012/10/24 19:22:56 [6514] rsync to
container/c0d4p1/tmp/e49cf526-1d53-4069-bbea-b74f6dbec5f1 from storage2
(192.168.1.4)
2012/10/24 19:22:56 [6514] receiving file list
2012/10/24 19:22:56 [6514] sent 54 bytes  received 17527 bytes  total size
17408
2012/10/24 21:22:56 [6516] connect from storage2 (192.168.1.4)
2012/10/24 19:22:56 [6516] rsync to
container/c0d4p1/tmp/4b8b0618-077b-48e2-a7a0-fb998fcf11bc from storage2
(192.168.1.4)
2012/10/24 19:22:56 [6516] receiving file list
2012/10/24 19:22:56 [6516] sent 54 bytes  received 26743 bytes  total size
26624
2012/10/24 21:22:56 [6518] connect from storage2 (192.168.1.4)
2012/10/24 19:22:56 [6518] rsync to
container/c0d4p1/tmp/53452ee6-c52c-4e3b-abe2-a31a2c8d65ba from storage2
(192.168.1.4)
2012/10/24 19:22:56 [6518] receiving file list
2012/10/24 19:22:57 [6518] sent 54 bytes  received 24695 bytes  total size
24576
2012/10/24 21:22:57 [6550] connect from storage2 (192.168.1.4)
2012/10/24 19:22:57 [6550] rsync to
container/c0d4p1/tmp/b858126d-3152-4d71-a0e8-eea115f69fc8 from storage2
(192.168.1.4)
2012/10/24 19:22:57 [6550] receiving file list
2012/10/24 19:22:57 [6550] sent 54 bytes  received 24695 bytes  total size
24576
2012/10/24 21:22:57 [6552] connect from storage2 (192.168.1.4)
2012/10/24 19:22:57 [6552] rsync to
container/c0d4p1/tmp/f3ce8205-84ac-4236-baea-3a3aef2da6ab from storage2
(192.168.1.4)
2012/10/24 19:22:57 [6552] receiving file list
2012/10/24 19:22:58 [6552] sent 54 bytes  received 25719 bytes  total size
25600
2012/10/24 21:22:58 [6554] connect from storage2 (192.168.1.4)
2012/10/24 19:22:58 [6554] rsync to
container/c0d4p1/tmp/91b4f046-eacb-4a1d-aed1-727d0c982742 from storage2
(192.168.1.4)
2012/10/24 19:22:58 [6554] receiving file list
2012/10/24 19:22:58 [6554] sent 54 bytes  received 18551 bytes  total size
18432
2012/10/24 21:22:58 [6556] connect from storage2 (192.168.1.4)
2012/10/24 19:22:58 [6556] rsync to
container/c0d4p1/tmp/94d223f9-b84d-4911-be6b-bb28f89b6647 from storage2
(192.168.1.4)
2012/10/24 19:22:58 [6556] receiving file list
2012/10/24 19:22:58 [6556] sent 54 bytes  received 24695 bytes  total size
24576
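
If it helps to sanity-check the rebalance traffic, those per-transfer summary lines are easy to total up; a small sketch (the log format is assumed from the excerpt above):

```python
import re

# Matches the per-transfer summary lines in rsyncd.log, e.g.
# "2012/10/24 19:22:56 [6514] sent 54 bytes  received 17527 bytes  total size 17408"
SUMMARY = re.compile(
    r"^\S+ \S+ \[\d+\] sent (\d+) bytes\s+received (\d+) bytes\s+total size (\d+)"
)

def summarize(log_text):
    """Return (number of transfers, bytes sent, bytes received)."""
    transfers = sent = received = 0
    for line in log_text.splitlines():
        m = SUMMARY.match(line)
        if m:
            transfers += 1
            sent += int(m.group(1))
            received += int(m.group(2))
    return transfers, sent, received

sample = """\
2012/10/24 19:22:56 [6514] sent 54 bytes  received 17527 bytes  total size 17408
2012/10/24 19:22:56 [6516] sent 54 bytes  received 26743 bytes  total size 26624
"""
print(summarize(sample))  # (2, 108, 44270)
```

As far as I know, the tmp/&lt;uuid&gt; rsync targets themselves are normal: the replicator syncs to a temporary file first and moves it into place afterwards.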




On Tue, Oct 23, 2012 at 11:17 AM, andi abes  wrote:

> On Tue, Oct 23, 2012 at 12:16 PM, Emre Sokullu 
> wrote:
> > Folks,
> >
> > This is the 3rd day and I see no or very little (KBs of) change with the
> > new disks.
> >
> > Could it be normal, is there a long computation process that takes time
> > first before actually filling newly added disks?
> >
> > Or should I just start from scratch with the "create" command this time?
> > The last time I did it, I didn't use the "swift-ring-builder create 20 3 1
> > .." command first but just started with "swift-ring-builder add ..." and
> > used the existing ring.gz files, thinking otherwise I could be reformatting
> > the whole stack. I'm not sure if that's the case.
> >
>
> That is correct - you don't want to recreate the rings, since that is
> likely to cause redundant partition movement.
>
> > Please advise. Thanks,
> >
>
> I think your expectations might be misplaced. The ring builder tries
> not to move partitions needlessly. In your cluster, you had 3
> zones (and I'm assuming 3 replicas). Swift placed the partitions as
> efficiently as it could, spread across the 3 zones (servers). As
> things stand, there's no real reason for partitions to move across the
> servers. I'm guessing that the data growth you've seen is from new
> data, not from existing data movement (but there are some calls to
> random in the code which might have produced some partition movement).
>
> If you truly want to move things around forcefully, you could:
> * Decrease the weight of the old devices. They would then hold more
> partitions than their new weight warrants, and partitions would be
> reassigned away from them.
> * Delete and re-add devices to the ring. This will cause all the
> partitions from the deleted devices to be spread across the new set of
> devices.
>
> After you perform your ring manipulation commands, execute the
> rebalance command and copy the ring files.
> This is likely to cause *lots* of activity in your cluster... which
> seems to be the desired outcome. It's likely to have a negative impact
> on service requests to the proxy, so it's something you probably want
> to be careful about.
>
> If you leave things alone as they are, new data will be distributed on
> the new devices, and as old data gets deleted usage will rebalance
> over 
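
Andi's point, that a well-behaved placement only moves the partitions it must, is easy to see with a toy model. This is a generic weighted rendezvous-hashing sketch, not Swift's actual ring-builder code: when a fourth equal-weight device appears, roughly a quarter of the partitions move, and every moved partition lands on the new device:

```python
import hashlib
import math

def _score(part, dev, weight):
    # Deterministic pseudo-random draw in (0, 1) for this (partition, device) pair
    h = int(hashlib.md5(f"{part}-{dev}".encode()).hexdigest(), 16) / 2.0**128
    # Weighted rendezvous hashing: the device with the highest score wins
    return -weight / math.log(h)

def assign(parts, devices):
    """Map each partition to the device with the best weighted score."""
    return {p: max(devices, key=lambda d: _score(p, d, devices[d])) for p in parts}

parts = range(1024)
before = assign(parts, {"z1": 1.0, "z2": 1.0, "z3": 1.0})
after = assign(parts, {"z1": 1.0, "z2": 1.0, "z3": 1.0, "z4": 1.0})
moved = [p for p in parts if before[p] != after[p]]
# Roughly a quarter of the partitions move, all of them TO the new device
print(len(moved), all(after[p] == "z4" for p in moved))
```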

Re: [Openstack] How to communicate between VMs on multiple nodes using quantum?

2012-10-24 Thread Robert Kukura
On 10/24/2012 12:42 PM, Dan Wendlandt wrote:
> On Wed, Oct 24, 2012 at 3:22 AM, Gary Kotton  wrote:
>> Hi,
>> In addition to Dan's comments you can also take a look at the following link
>> http://wiki.openstack.org/ConfigureOpenvswitch.
> 
> Is there any content on that wiki page that is not yet in the quantum
> admin guide: http://docs.openstack.org/trunk/openstack-network/admin/content/?
>If so, we should file a bug to make sure it ends up in the admin
> guide and that the wiki page is deleted so there is exactly one place
> where we direct people and we avoid stale content.
> 
> Bob is probably best to answer that question.

I've already filed a docs bug to update the admin guide with the current
configuration details for linuxbridge and openvswitch, and it's assigned
to me. I hope to get to this in the next few days. I'll remove the wiki
page, which is also out-of-date, when it's complete.

-Bob

> 
> Dan
> 
> 
>> Thanks
>> Gary
>>
>>
>> On 10/24/2012 08:21 AM, livemoon wrote:
>>
>> Thanks Dan
>>
>> On Wed, Oct 24, 2012 at 2:15 PM, Dan Wendlandt  wrote:
>>>
>>> On Tue, Oct 23, 2012 at 10:56 PM, livemoon  wrote:
 Dan:
 Thank you for your help.
 If the server has three NICs, which one will be used as the port for
 "br-int"? I must know how "br-int" works between two machines, so that I
 can connect the physical interface that "br-int" uses to one switch.
>>>
>>> If you are using tunneling, the traffic will exit out the NIC based on
>>> your physical server's routing table and the destination IP of the
>>> tunnel.  For example, if your physical server is tunneling a packet to
>>> a VM on a physical server with IP W.X.Y.Z, the packet will leave
>>> whatever NIC has the route to reach W.X.Y.Z .
>>>
>>> Dan
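
You can confirm which source address (and hence NIC) the kernel picks for a given tunnel peer with `ip route get W.X.Y.Z`, or from Python: a connected UDP socket consults the routing table without sending anything (port 4789 below is just an arbitrary placeholder):

```python
import socket

def egress_ip(dest_ip, port=4789):
    """Return the local source IP the kernel would use to reach dest_ip."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # connect() on a UDP socket only consults the routing table;
        # no packet is actually transmitted
        s.connect((dest_ip, port))
        return s.getsockname()[0]
    finally:
        s.close()

print(egress_ip("127.0.0.1"))  # 127.0.0.1 (loopback destinations route via lo)
```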
>>>
>>>
>>>
>>>

 On Wed, Oct 24, 2012 at 11:52 AM, Dan Wendlandt  wrote:
>
> all you need to do is create a bridge named "br-int", which is what
> the linux devices representing the vm nics will be plugged into.
>
> since you are using tunneling, there is no need to create a br-ethX
> and add a physical interface to it.
>
> dan
>
> p.s. btw, your config looks like its using database polling, which is
> not preferred.  I'd suggest you use the default config, which uses RPC
> communication between agents and the main quantum-server process
>
>
> On Tue, Oct 23, 2012 at 8:44 PM, livemoon  wrote:
>> I know that on one node the VMs work well.
>> I want to know: with multiple nodes, do I need to create a br-ethX and add
>> the physical interface to it as a port? How do I do that in the
>> configuration?
>>
>> On Wed, Oct 24, 2012 at 11:36 AM, 刘家军  wrote:
>>>
>>> You just need to create one or more networks and specify which network
>>> to use when booting a VM.
>>>
>>> 2012/10/24 livemoon 

 Hi, I use quantum for networking. My question: if there are multiple
 nodes, how do I configure things so that VMs on the same subnet can
 communicate with each other?

 I use openvswitch as my plugin. And my setting is blow:

 [DATABASE]
 sql_connection = mysql://quantum:openstack@172.16.1.1:3306/quantum
 reconnect_interval = 2

 [OVS]

 tenant_network_type = gre
 tunnel_id_ranges = 1:1000
 integration_bridge = br-int
 tunnel_bridge = br-tun
 local_ip = 172.16.1.2

 enable_tunneling = True


 [AGENT]
 polling_interval = 2
 root_helper = sudo /usr/bin/quantum-rootwrap
 /etc/quantum/rootwrap.conf

 --
 非淡薄无以明志,非宁静无以致远


>>>
>>>
>>>
>>> --
>>> 刘家军@ljjjustin
>>>
>>
>>
>>
>> --
>> 非淡薄无以明志,非宁静无以致远
>>
>>
>
>
>
> --
> ~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~




 --
 非淡薄无以明志,非宁静无以致远
>>>
>>>
>>>
>>> --
>>> ~~~
>>> Dan Wendlandt
>>> Nicira, Inc: www.nicira.com
>>> twitter: danwendlandt
>>> ~~~
>>
>>
>>
>>
>> --
>> Blog Site: livemoon.org
>> Twitter: mwjpiero
>> 非淡薄无以明志,非宁静无以致远
>>
>>
>>

Re: [Openstack] SDKs

2012-10-24 Thread Everett Toews

From: Kiall Mac Innes mailto:ki...@managedit.ie>>
Date: Wednesday, October 24, 2012 1:34 PM
To: Everett Toews 
mailto:everett.to...@rackspace.com>>
Cc: "openstack@lists.launchpad.net" 
mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] SDKs


Should the official Python SDKs not be listed? E.g. novaclient etc.

---

They're already listed on the page as

python-client


Re: [Openstack] Using nova-volumes openstack LVM group for other purposes

2012-10-24 Thread Eric Windisch


On Wednesday, October 24, 2012 at 14:56 PM, Daniel Vázquez wrote:

> Hi here!
>  
> Can we create and use new logical volumes for our own/custom use (outside of
> openstack) on the nova-volumes openstack LVM group, and use them alongside
> normal openstack operation?
> IMO it's just LVM and no problem, but does it have collateral consequences
> for openstack?
>  
>  
>  

I generally advise not to do this due to potential security concerns.

In practice, your concerns will be with deleting manually created volumes and 
creating volumes that match the pattern set in the nova-volumes/cinder 
configuration.

Regards,
Eric Windisch



Re: [Openstack] Possible upgrade bug in nova-volume (& cinder)?

2012-10-24 Thread John Griffith
On Wed, Oct 24, 2012 at 12:57 PM, Jonathan Proulx  wrote:

> On Wed, Oct 24, 2012 at 2:45 PM, Jonathan Proulx 
> wrote:
>
> > To fix this for me I can look up the volumes by ID in the database and
> > then lvrename the logical volumes (I don't have too many and all on
> > one volume server right now).
>
> That maybe the wrong answer as the database (both cinder and the older
> nova leavings) has a provider_location that implies the "right"
> logical volume name:
>
>
> +--++---+
> | id   | ec2_id | provider_location
>
>   |
>
> +--++---+
> | 25cb6abc-1938-41da-b4a4-7639fa122117 | NULL   | 128.52.x.x:3260,9
> iqn.2010-10.org.openstack:volume-001c 1
>   |
> | 60cd2c0e-6d61-4010-aee2-df738adb3581 | NULL   | 128.52.x.x:3260,4
> iqn.2010-10.org.openstack:volume-001a 1
>   |
> | 67ba5863-9f92-4694-b639-6c9520e0c6f3 | NULL   | 128.52.x.x:3260,2
> iqn.2010-10.org.openstack:volume-0016 1
>   |
> | 7397daa1-f4a7-47d4-b0dc-0b306defdf62 | NULL   | 128.52.x.x:3260,14
> iqn.2010-10.org.openstack:volume-0014 1
>  |
> | 7d8c51bc-9cac-4edf-b1e6-1c37d5a8256f | NULL   | 128.52.x.x:3260,10
> iqn.2010-10.org.openstack:volume-7d8c51bc-9cac-4edf-b1e6-1c37d5a8256f
> 1 |
> | 86426e77-e396-489d-9e66-49f0beef46bb | NULL   | 128.52.x.x:3260,16
> iqn.2010-10.org.openstack:volume-0019 1
>  |
> | 98ac28f5-77d8-476b-b3e1-c90a0fd3e880 | NULL   | 128.52.x.x:3260,1
> iqn.2010-10.org.openstack:volume-0010 1
>   |
> | a6e68eae-23a9-483e-bd42-e4b8a7f47dc4 | NULL   | 128.52.x.x:3260,24
> iqn.2010-10.org.openstack:volume-a6e68eae-23a9-483e-bd42-e4b8a7f47dc4
> 1 |
> | a89b9891-571c-43be-bc1b-0c346a161d38 | NULL   | 128.52.x.x:3260,9
> iqn.2010-10.org.openstack:volume-a89b9891-571c-43be-bc1b-0c346a161d38
> 1  |
> | cbd32221-7794-41d1-abf2-623c49f4ff03 | NULL   | 128.52.x.x:3260,6
> iqn.2010-10.org.openstack:volume-001b 1
>   |
>
> +--++---+
>
> so I'm also open to suggestions on the "right" resolution to this.
> Should I rename the logical volume sand update the provider_location
> or should I make the  /var/lib/cinder/volumes/* files match what is in
> the database and LVM (and if I do the latter will something come along
> and undo that)?
>
> -Jon
>
>
Hey Jon,

Couple of things going on, one is the volume naming (in progress here:
https://review.openstack.org/#/c/14615/).  I'll take a closer look at some
of the other issues you pointed out.

Thanks,
John


Re: [Openstack] Possible upgrade bug in nova-volume (& cinder)?

2012-10-24 Thread Jonathan Proulx
On Wed, Oct 24, 2012 at 2:45 PM, Jonathan Proulx  wrote:

> To fix this for me I can look up the volumes by ID in the database and
> then lvrename the logical volumes (I don't have too many and all on
> one volume server right now).

That maybe the wrong answer as the database (both cinder and the older
nova leavings) has a provider_location that implies the "right"
logical volume name:

id                                   | ec2_id | provider_location
25cb6abc-1938-41da-b4a4-7639fa122117 | NULL   | 128.52.x.x:3260,9 iqn.2010-10.org.openstack:volume-001c 1
60cd2c0e-6d61-4010-aee2-df738adb3581 | NULL   | 128.52.x.x:3260,4 iqn.2010-10.org.openstack:volume-001a 1
67ba5863-9f92-4694-b639-6c9520e0c6f3 | NULL   | 128.52.x.x:3260,2 iqn.2010-10.org.openstack:volume-0016 1
7397daa1-f4a7-47d4-b0dc-0b306defdf62 | NULL   | 128.52.x.x:3260,14 iqn.2010-10.org.openstack:volume-0014 1
7d8c51bc-9cac-4edf-b1e6-1c37d5a8256f | NULL   | 128.52.x.x:3260,10 iqn.2010-10.org.openstack:volume-7d8c51bc-9cac-4edf-b1e6-1c37d5a8256f 1
86426e77-e396-489d-9e66-49f0beef46bb | NULL   | 128.52.x.x:3260,16 iqn.2010-10.org.openstack:volume-0019 1
98ac28f5-77d8-476b-b3e1-c90a0fd3e880 | NULL   | 128.52.x.x:3260,1 iqn.2010-10.org.openstack:volume-0010 1
a6e68eae-23a9-483e-bd42-e4b8a7f47dc4 | NULL   | 128.52.x.x:3260,24 iqn.2010-10.org.openstack:volume-a6e68eae-23a9-483e-bd42-e4b8a7f47dc4 1
a89b9891-571c-43be-bc1b-0c346a161d38 | NULL   | 128.52.x.x:3260,9 iqn.2010-10.org.openstack:volume-a89b9891-571c-43be-bc1b-0c346a161d38 1
cbd32221-7794-41d1-abf2-623c49f4ff03 | NULL   | 128.52.x.x:3260,6 iqn.2010-10.org.openstack:volume-001b 1

so I'm also open to suggestions on the "right" resolution to this.
Should I rename the logical volume sand update the provider_location
or should I make the  /var/lib/cinder/volumes/* files match what is in
the database and LVM (and if I do the latter will something come along
and undo that)?

-Jon



[Openstack] Using nova-volumes openstack LVM group for other purposes

2012-10-24 Thread Daniel Vázquez
Hi here!

Can we create and use new logical volumes for our own/custom use (outside of
openstack) on the nova-volumes openstack LVM group, and use them alongside
normal openstack operation?
IMO it's just LVM and no problem, but does it have collateral consequences for
openstack?

Thx



[Openstack] Possible upgrade bug in nova-volume (& cinder)?

2012-10-24 Thread Jonathan Proulx
Hi All,

I'm seeing a bug, due to my recent essex to folsom upgrade, relating to
LVM-backed volume storage. I'm not sure where it got introduced; most
likely either in nova-volume or in the Ubuntu cloud archive
packaging... I only noticed it after transitioning from
folsom nova-volume to folsom cinder, but despite thinking I'd tested the
nova-volume service before moving to cinder, I'm pretty sure it had to
exist in nova-volume as well (perhaps it was masked because I didn't
restart tgtd until cinder).

The symptom is that volumes created under essex cannot be attached under
folsom (with nova-volume or cinder).

The reason is that the backing-store devices in both
/var/lib/nova/volumes/* and /var/lib/cinder/volumes/* are all named
/dev/<vg>/volume-<UUID>, while under essex the volumes were named
/dev/<vg>/volume-<ID>.

To fix this for me I can look up the volumes by ID in the database and
then lvrename the logical volumes (I don't have too many and all on
one volume server right now).
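
For anyone scripting the same fix, here is a sketch that turns (old integer id, uuid) rows from the volumes table into lvrename commands. The volume-%08x form of the old essex names is an assumption here; verify it against `lvs` on your own volume group before running anything:

```python
def lvrename_cmds(vg, rows):
    """rows: iterable of (old_int_id, uuid) pairs pulled from the volumes table."""
    cmds = []
    for old_id, uuid in rows:
        old_name = "volume-%08x" % old_id   # essex-style name (assumed format)
        new_name = "volume-%s" % uuid       # folsom/cinder-style name
        cmds.append("lvrename %s %s %s" % (vg, old_name, new_name))
    return cmds

print(lvrename_cmds("nova-volumes",
                    [(0x1C, "25cb6abc-1938-41da-b4a4-7639fa122117")]))
# ['lvrename nova-volumes volume-0000001c volume-25cb6abc-1938-41da-b4a4-7639fa122117']
```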

Before I go sifting through postinst scripts and openstack code to see
whre this came from anyone know where I should file this (and has
anyone else run into it)?

-Jon



Re: [Openstack] SDKs

2012-10-24 Thread Kiall Mac Innes
Should the official Python SDKs not be listed? E.g. novaclient etc.
On Oct 24, 2012 7:28 PM, "Everett Toews" 
wrote:

>  One of the things that came out of the SDK Doc Discussion [1] at the
> Grizzly Summit was an action item for me to create a wiki page dedicated to
> Software Development Kits that support OpenStack. And here it is [2]! I
> also summarized some of the SDK related stuff from the Summit in [3].
>
>  In part the wiki page is a survey of existing SDKs. If there's an SDK
> that you think belongs on the list, please add it.
>
>  The goal of all of this is to raise the profile and legitimacy of SDKs
> that support OpenStack in the ecosystem.
>
>  Thanks,
> Everett
>
>  [1] https://etherpad.openstack.org/sdk-documentation
> [2] http://wiki.openstack.org/SDKs
> [3]
> http://blog.phymata.com/2012/10/22/sdks-and-an-openstack-grizzly-summit-wrap-up/
>
>
>


Re: [Openstack] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM

2012-10-24 Thread John Dickinson
Sorry for the delay. You've got an interesting problem, and we were all quite 
busy last week with the summit.

First, the standard caveat: Your performance is going to be highly dependent on 
your particular workload and your particular hardware deployment. 3500 req/sec 
in two different deployments may be very different based on the size of the 
requests, the spread of the data requested, and the type of requests. Your 
experience may vary, etc, etc.

However, for an attempt to answer your question...

6 proxies for 3500 req/sec doesn't sound unreasonable. It's in line with other 
numbers I've seen from people and what I've seen from other large scale 
deployments. You are basically looking at about 600 req/sec/proxy.

My first concern is not the swift workload, but how keystone handles the 
authentication of the tokens. A quick glance at the keystone source seems to 
indicate that keystone's auth_token middleware is using a standard memcached 
module that may not play well with concurrent connections in eventlet. 
Specifically, sockets cannot be reused concurrently by different greenthreads. 
You may find that the token validation in the auth_token middleware fails under 
any sort of load. This would need to be verified by your testing or an 
examination of the memcache module being used. An alternative would be to look 
at the way swift implements it's memcache connections in an eventlet-friendly 
way (see swift/common/memcache.py:_get_conns() in the swift codebase).
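
For reference, the eventlet-friendly pattern John is pointing at amounts to a per-server connection pool, so that no two greenthreads ever share a memcache socket. A generic sketch (using the stdlib queue as a stand-in for eventlet's, and bare objects as stand-in connections):

```python
import contextlib
import queue

class ConnPool:
    """Hand each caller exclusive use of one connection at a time."""
    def __init__(self, factory, size=2):
        self._q = queue.Queue()
        for _ in range(size):
            self._q.put(factory())

    @contextlib.contextmanager
    def connection(self):
        conn = self._q.get()   # blocks (cooperatively, under eventlet) when exhausted
        try:
            yield conn
        finally:
            self._q.put(conn)  # always return the connection to the pool

# Demo with dummy "connections"
pool = ConnPool(lambda: object(), size=2)
with pool.connection() as a:
    with pool.connection() as b:
        print(a is b)  # False -- concurrent users never share a connection
```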

--John



On Oct 11, 2012, at 4:28 PM, Alejandro Comisario 
 wrote:

> Hi Stackers !
> This is the thing: today we have 24 datanodes (3 copies, 90TB usable); each 
> datanode has 2 Intel hexacore CPUs with HT and 96GB of RAM, and 6 proxies 
> with the same hardware configuration, using swift 1.4.8 with keystone.
> Regarding the networking, each proxy / datanodes has a dual 1Gb nic, bonded 
> in LACP mode 4, each of the proxies are behind an F5 BigIP Load Balancer ( 
> so, no worries over there ).
> 
> Today, we are receiving 5000 RPM ( Requests per Minute ) with 660 RPM per 
> Proxies, I know it's low, but now ... with a new product migration, soon ( 
> really soon ) we are expecting to receive about a total of 90.000 RPM average 
> ( 1500 req / s ) with weekly peaks of 200.000 RPM ( 3500 req / s ) to the 
> swift API, which will be 90% public GETs ( no keystone auth ) and 10% 
> authorized PUTS (keystone in the middle, worth to know that we have a 10 
> keystone vms pool, connected to a 5 nodes galera mysql cluster, so no worries 
> there either ) 
> 
> So, 3500 req/s divided by 6 proxy nodes doesn't sound like too much, but 
> well, it's a number that we can't ignore.
> What do you think about these numbers? Do these 6 proxies sound good, or 
> should we double or triple the proxies? Does anyone handle this volume of 
> requests and can share their configs?
> 
> Thanks a lot, hoping to hear from you guys!
> 
> -
> alejandrito





[Openstack] SDKs

2012-10-24 Thread Everett Toews
One of the things that came out of the SDK Doc Discussion [1] at the Grizzly 
Summit was an action item for me to create a wiki page dedicated to Software 
Development Kits that support OpenStack. And here it is [2]! I also summarized 
some of the SDK related stuff from the Summit in [3].

In part the wiki page is a survey of existing SDKs. If there's an SDK that you 
think belongs on the list, please add it.

The goal of all of this is to raise the profile and legitimacy of SDKs that 
support OpenStack in the ecosystem.

Thanks,
Everett

[1] https://etherpad.openstack.org/sdk-documentation
[2] http://wiki.openstack.org/SDKs
[3] 
http://blog.phymata.com/2012/10/22/sdks-and-an-openstack-grizzly-summit-wrap-up/


Re: [Openstack] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM

2012-10-24 Thread Alejandro Comisario
Guys ??
Anyone ??

Alejandro Comisario
#melicloud CloudBuilders
Arias 3751, Piso 7 (C1430CRG)
Ciudad de Buenos Aires - Argentina
Cel: +549(11) 15-3770-1857
Tel : +54(11) 4640-8443


On Mon, Oct 15, 2012 at 11:59 AM, Kiall Mac Innes wrote:

> While I can't answer your question (I've never used swift) - it's worth
> mentioning many of the openstack folks are en-route/at the design summit.
>
> Also - you might have more luck on the openstack-operators list, rather
> than the general list.
>
> Kiall
> On Oct 15, 2012 2:57 PM, "Alejandro Comisario" <
> alejandro.comisa...@mercadolibre.com> wrote:
>
>> It's worth knowing that the objects in the cluster will range from 50KB at
>> the smallest to 200KB at the biggest.
>> Any considerations regarding this ?
>>
>> -
>> alejandrito
>>
>> On Thu, Oct 11, 2012 at 8:28 PM, Alejandro Comisario <
>> alejandro.comisa...@mercadolibre.com> wrote:
>>
>>> Hi Stackers !
>>> This is the thing: today we have 24 datanodes (3 copies, 90TB usable);
>>> each datanode has 2 Intel hexacore CPUs with HT and 96GB of RAM, and 6
>>> proxies with the same hardware configuration, using swift 1.4.8 with
>>> keystone.
>>> Regarding the networking, each proxy / datanodes has a dual 1Gb nic,
>>> bonded in LACP mode 4, each of the proxies are behind an F5 BigIP Load
>>> Balancer ( so, no worries over there ).
>>>
>>> Today, we are receiving 5000 RPM ( Requests per Minute ) with 660 RPM
>>> per Proxies, I know it's low, but now ... with a new product migration, soon
>>> ( really soon ) we are expecting to receive about a total of 90.000 RPM
>>> average ( 1500 req / s ) with weekly peaks of 200.000 RPM ( 3500 req / s )
>>> to the swift API, which will be 90% public GETs ( no keystone auth ) and
>>> 10% authorized PUTS (keystone in the middle, worth to know that we have a
>>> 10 keystone vms pool, connected to a 5 nodes galera mysql cluster, so no
>>> worries there either )
>>>
>>> So, 3500 req/s divided by 6 proxy nodes doesn't sound like too much, but
>>> well, it's a number that we can't ignore.
>>> What do you think about these numbers? Do these 6 proxies sound good,
>>> or should we double or triple the proxies? Does anyone handle this volume
>>> of requests and can share their configs?
>>>
>>> Thanks a lot, hoping to hear from you guys!
>>>
>>> -
>>> alejandrito
>>>
>>
>>
>>
>>


[Openstack] FROM "traditional KVM virtualization infrastructure" TO "OpenStack cloud infrastructure". Refining concepts.

2012-10-24 Thread Daniel Vázquez
Hi here!

I think this is of interest to a lot of people in the same scenario as us.
After reading and re-reading a lot of (good) OpenStack documentation,
wiki pages, scripts, demos, videos and blogs over many days and weeks, IMO
OpenStack has very good resources, but OpenStack is a very big and extensive
product and those resources can't cover everything ... I think the following
is a common scenario.

We need to refine some concepts about OpenStack infrastructure as we move
from a "traditional (KVM) VM infrastructure" to an "OpenStack cloud
infrastructure". Please review and comment on these concepts so we can
refine/correct/tune them and get the basics right:

We're in a typical scenario with traditional KVM virtualization and working
traditional VM instances. The KVM host has the two required interfaces
(public and private). The VM instances also have the two required interfaces
(private and public), as is typical for webApp && database application
deployments.

Cloud image:
   - Cloud image == dynamic setup injection into the VM instance.
   - It is not just a fast OS install for a VM instance.
   - Scripting lets you "encapsulate" functionality.
   - When you launch it, OpenStack injects variable configuration (vCPUs,
vRAM, networking).
   - Launched instances run on ephemeral storage, but you can persist some
setup in a previously encapsulated cloud image or attach a nova-volume
resource.
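
For concreteness, the "dynamic setup injection" above is typically carried by cloud-init user-data supplied at boot; a minimal illustrative #cloud-config (all values hypothetical):

```yaml
#cloud-config
hostname: web01
packages:
  - nginx
runcmd:
  - [sh, -c, "echo 'configured at boot by cloud-init' > /etc/motd"]
```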

Snapshot image:
   - Snapshot image == static setup inside the VM instance* (more or less;
some nuance is needed: OpenStack still injects vCPUs and vRAM, still sets up
the networking environment, still maintains the ephemeral resource, ... [
anything else???])
   - It is used LIKE TRADITIONAL images to launch traditional VM instances.
   - [???] How do we migrate and reflect the networks of old, working
traditional VM instances?
- [???] We can have old running VMs with their own networks, typically
static public and private IP assignments, disabling DHCP on instance network
start.
  - [???] The private network range can be reserved in OpenStack;
disabling DHCP can preserve the setup.
  - [???] The public network is routed by NAT from OpenStack, BUT you
have no DHCP mapping to route !!!
- [???] Alternative solutions && use cases

Please add and complete for refine concepts ...
Thx
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Quantum Floating IPs

2012-10-24 Thread Ivan Kolodyazhny
Do you ping VMs using network namespaces?
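For context on the question: with Folsom Quantum, the router and DHCP ports live inside Linux network namespaces on the network node, so a plain ping from the host shell never enters the tenant network. A sketch of the check Ivan is asking about (the router UUID and VM IP are hypothetical placeholders; the real commands are commented out because they need root on an actual network node):

```shell
# The l3 and dhcp agents create qrouter-<uuid> and qdhcp-<uuid>
# namespaces on the network node; list them, then ping the VM's
# fixed IP from inside the router's namespace:
#   ip netns list
#   sudo ip netns exec qrouter-<router-uuid> ping -c 3 <vm-fixed-ip>
CMD="ip netns exec qrouter-<router-uuid> ping -c 3 <vm-fixed-ip>"
echo "run on the network node: sudo ${CMD}"
```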

On Wednesday, October 24, 2012, Dan Wendlandt wrote:

> On Wed, Oct 24, 2012 at 8:06 AM, Mohammad Banikazemi wrote:
> > Using Quantum (Folsom) I have a weird situation. I can ping the outside
> > world from my VMs only after I assign a floating IP address to them. In
> > other words, I cannot ping the outside world by just setting up the
> quantum
> > router and without using floating IP addresses. The other issue is that
> > after assigning floating IPs I still cannot ping the VM from outside.
> > I can provide more information about my setup but thought I might be
> missing
> > something simple here and people may have seen the same problem. Thanks.
>
> I haven't seen anything like this, so best to create a LP question
> with your config and setup info.
>
> dan
>
> >
> >
> >
>
>
>
> --
> ~~~
> Dan Wendlandt
> Nicira, Inc: www.nicira.com
> twitter: danwendlandt
> ~~~
>
>


-- 
Regards,
Ivan Kolodyazhny,
Web Developer,
http://blog.e0ne.info/,
http://notacash.com/,
http://kharkivpy.org.ua/


Re: [Openstack] swift tempURL requests yield 401 Unauthorized

2012-10-24 Thread Dieter Plaetinck
I upgraded our test cluster to 1.7.4, and still have the same issue.
I also bumped the expires to time() + 600 and made sure the clocks on the
client and servers are in sync to the second (the client was 2 minutes off
earlier), but that didn't change anything. expires is definitely higher than
the current time on the server, so that isn't it.

any help appreciated.

thanks,
Dieter

On Fri, 19 Oct 2012 13:17:39 -0400
Dieter Plaetinck  wrote:

> Hi,
> using swift 1.4.8 on CentOS machines (latest packages for CentOS; note that
> I'm assuming tempurl works with this version merely because all the code
> seems to be there; I couldn't find clear docs on whether it should work or
> not).
> I want to use the swift tempURL feature as per
> http://failverse.com/using-temporary-urls-on-rackspace-cloud-files/
> http://docs.rackspace.com/files/api/v1/cf-devguide/content/TempURL-d1a4450.html
> http://docs.rackspace.com/files/api/v1/cf-devguide/content/Set_Account_Metadata-d1a4460.html
> 
> TLDR: set up metadata correctly, but tempurl requests yield http 401, can't 
> figure it out, _get_hmac() doesn't seem to be called.
> 
> First, I set the key metadata (this works fine) (tried both the swift CLI 
> program as well as curl), and I tried setting it both on container level 
> (container "uploads") as well as account level
> (though i would prefer container level)
> 
> alias vimeoswift='swift -A http://$ip:8080/auth/v1.0 -U system:root -K
> testpass'
> vimeoswift post -m Temp-Url-Key:key uploads
> vimeoswift post -m Temp-Url-Key:key
> curl -i -X POST -H X-Auth-Token:$t -H X-Account-Meta-Temp-URL-Key:key 
> http://$ip:8080/v1/AUTH_system
> 
> this seems to work, because when I stat the account and the container, they
> show up:
> 
> 
> [root@dfvimeodfsproxy1 ~]# vimeoswift stat uploads
>   Account: AUTH_system
> Container: uploads
>   Objects: 1
> Bytes: 1253
>  Read ACL: 
> Write ACL: 
>   Sync To: 
>  Sync Key: 
> Meta Temp-Url-Key: key <--
> Accept-Ranges: bytes
> [root@dfvimeodfsproxy1 ~]# vimeoswift stat
>Account: AUTH_system
> Containers: 1
>Objects: 1
>  Bytes: 1253
> Meta Temp-Url-Key: key <--
> Accept-Ranges: bytes
> [root@dfvimeodfsproxy1 ~]# 
> 
> I have already put a file in container uploads (which I can retrieve just 
> fine using an auth token):
> [root@dfvimeodfsproxy1 ~]# vimeoswift stat uploads mylogfile.log | grep 
> 'Content Length'
> Content Length: 1253
> 
> now however, if i want to retrieve this file using the tempURL feature, it 
> doesn't work:
> 
> using this script
> #!/usr/bin/python2
> import hmac
> from hashlib import sha1
> from time import time
> method = 'GET'
> expires = int(time() + 60)
> base = 'http://10.90.151.5:8080'
> path = '/v1/AUTH_system/uploads/mylogfile.log'
> key = 'key'
> hmac_body = '%s\n%s\n%s' % (method, expires, path)
> sig = hmac.new(key, hmac_body, sha1).hexdigest()
> print '%s%s?temp_url_sig=%s&temp_url_expires=%s' % (base, path, sig, expires)
> 
> ~ ❯ openstack-signed-url2.py
> http://10.90.151.5:8080/v1/AUTH_system/uploads/mylogfile.log?temp_url_sig=e700f568cd099a432890db00e263b29b999d3604&temp_url_expires=1350666309
> ~ ❯ wget 
> 'http://10.90.151.5:8080/v1/AUTH_system/uploads/mylogfile.log?temp_url_sig=e700f568cd099a432890db00e263b29b999d3604&temp_url_expires=1350666309'
> --2012-10-19 13:04:14--  
> http://10.90.151.5:8080/v1/AUTH_system/uploads/mylogfile.log?temp_url_sig=e700f568cd099a432890db00e263b29b999d3604&temp_url_expires=1350666309
> Connecting to 10.90.151.5:8080... connected.
> HTTP request sent, awaiting response... 401 Unauthorized
> Authorization failed.
> 
> 
> I thought I could easily debug this myself by changing the _get_hmac()
> function
> in /usr/lib/python2.6/site-packages/swift/common/middleware/tempurl.py like 
> so:
> 
> def _get_hmac(self, env, expires, key, request_method=None):
> """
>(...)
> """
> if not request_method:
> request_method = env['REQUEST_METHOD']
> self.logger("getting HMAC for method %s, expires %s, path %s" % 
> (request_method, expires, env['PATH_INFO']))
> hmac = hmac.new(key, '%s\n%s\n%s' % (request_method, expires,
> env['PATH_INFO']), sha1).hexdigest()
> self.logger("hmac is " + hmac)
> return hmac
> 
> 
> however, after restarting the proxy, I don't see my messages showing up
> anywhere (logging works otherwise, because proxy-server messages are showing
> up in /var/log/message, showing all incoming http requests and their responses
> 
> 
> any help is appreciated, thanks!
> 
> Dieter
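
Two Python problems are visible in the debug patch above and would explain why nothing is logged: assigning the result to a local variable named `hmac` shadows the imported module (so `hmac.new` raises UnboundLocalError inside the function), and `self.logger` is a logger object, so it must be called as `self.logger.info(...)`, not `self.logger(...)`. A standalone rework of the same signature computation (a sketch, written for Python 3; the expires and path values are taken from the post):

```python
import hmac
from hashlib import sha1

def get_tempurl_hmac(method, expires, path, key):
    # Same "METHOD\nexpires\npath" body that swift's tempurl
    # middleware signs; avoid naming any local variable "hmac".
    body = '%s\n%s\n%s' % (method, expires, path)
    return hmac.new(key.encode(), body.encode(), sha1).hexdigest()

sig = get_tempurl_hmac('GET', 1350666309,
                       '/v1/AUTH_system/uploads/mylogfile.log', 'key')
print(sig)  # 40-character hex SHA-1 signature
```

Comparing this value against the `temp_url_sig` the client generates is a quick way to rule out a signing mismatch before suspecting the middleware.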




Re: [Openstack] Quantum Floating IPs

2012-10-24 Thread Dan Wendlandt
On Wed, Oct 24, 2012 at 8:06 AM, Mohammad Banikazemi  wrote:
> Using Quantum (Folsom) I have a weird situation. I can ping the outside
> world from my VMs only after I assign a floating IP address to them. In
> other words, I cannot ping the outside world by just setting up the quantum
> router and without using floating IP addresses. The other issue is that
> after assigning floating IPs I still cannot ping the VM from outside.
> I can provide more information about my setup but thought I might be missing
> something simple here and people may have seen the same problem. Thanks.

I haven't seen anything like this, so best to create a LP question
with your config and setup info.

dan

>
>
>



-- 
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~



Re: [Openstack] How to commucation vms in multi nodes using quantum?

2012-10-24 Thread Dan Wendlandt
On Wed, Oct 24, 2012 at 3:22 AM, Gary Kotton  wrote:
> Hi,
> In addition to Dan's comments you can also take a look at the following link
> http://wiki.openstack.org/ConfigureOpenvswitch.

Is there any content on that wiki page that is not yet in the quantum
admin guide: http://docs.openstack.org/trunk/openstack-network/admin/content/?
   If so, we should file a bug to make sure it ends up in the admin
guide and that the wiki page is deleted so there is exactly one place
where we direct people and we avoid stale content.

Bob is probably best to answer that question.

Dan
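
Dan's explanation further down in the thread (tunneled traffic leaves whichever NIC the host's routing table selects for the tunnel peer's IP, i.e. what `ip route get W.X.Y.Z` would report) amounts to a longest-prefix-match lookup. A conceptual sketch only, not kernel or OVS code; the interface names are made up, and the 172.16.1.0/24 subnet mirrors the local_ip in the quoted config:

```python
import ipaddress

def pick_nic(routes, dest):
    """Return the NIC of the most specific route matching dest.

    routes: list of (cidr, nic) pairs, a stand-in for the kernel
    routing table that decides where tunnel packets exit.
    """
    addr = ipaddress.ip_address(dest)
    matching = [(ipaddress.ip_network(cidr), nic)
                for cidr, nic in routes
                if addr in ipaddress.ip_network(cidr)]
    # Longest prefix wins, like the kernel's FIB lookup.
    return max(matching, key=lambda t: t[0].prefixlen)[1]

routes = [('0.0.0.0/0', 'eth0'),      # default route, public NIC
          ('172.16.1.0/24', 'eth1')]  # tunnel/management network
print(pick_nic(routes, '172.16.1.2'))  # -> eth1 (GRE peer is on eth1)
```

So the answer to "which NIC does br-int use between two machines" is: whichever NIC the host would route to the other machine's local_ip.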


> Thanks
> Gary
>
>
> On 10/24/2012 08:21 AM, livemoon wrote:
>
> Thanks Dan
>
> On Wed, Oct 24, 2012 at 2:15 PM, Dan Wendlandt  wrote:
>>
>> On Tue, Oct 23, 2012 at 10:56 PM, livemoon  wrote:
>> > Dan:
>> > Thank you for your help.
>> > If the server has three NICs, which one will be used as the port of
>> > "br-int"? I must know how "br-int" works between two machines, and then
>> > I can connect the physical interface that "br-int" uses to one switch.
>>
>> If you are using tunneling, the traffic will exit out the NIC based on
>> your physical server's routing table and the destination IP of the
>> tunnel.  For example, if your physical server is tunneling a packet to
>> a VM on a physical server with IP W.X.Y.Z, the packet will leave
>> whatever NIC has the route to reach W.X.Y.Z .
>>
>> Dan
>>
>>
>>
>>
>> >
>> > On Wed, Oct 24, 2012 at 11:52 AM, Dan Wendlandt  wrote:
>> >>
>> >> all you need to do is create a bridge named "br-int", which is what
>> >> the linux devices representing the vm nics will be plugged into.
>> >>
>> >> since you are using tunneling, there is no need to create a br-ethX
>> >> and add a physical interface to it.
>> >>
>> >> dan
>> >>
>> >> p.s. btw, your config looks like its using database polling, which is
>> >> not preferred.  I'd suggest you use the default config, which uses RPC
>> >> communication between agents and the main quantum-server process
>> >>
>> >>
>> >> On Tue, Oct 23, 2012 at 8:44 PM, livemoon  wrote:
>> >> > I know that on one node the VMs work well.
>> >> > I want to know: on multiple nodes, do I need to create a br-ethX and
>> >> > add the physical interface to it as a port? How do I do that in the
>> >> > configuration?
>> >> >
>> >> > On Wed, Oct 24, 2012 at 11:36 AM, 刘家军  wrote:
>> >> >>
>> >> >> you just need to create one or more networks and specify which
>> >> >> network
>> >> >> to
>> >> >> use when booting vm.
>> >> >>
>> >> >> 2012/10/24 livemoon 
>> >> >>>
>> >> >>> Hi, I use quantum for networking. A question: if there are multiple
>> >> >>> nodes, how do I configure things so that VMs communicate with each
>> >> >>> other on the same subnet?
>> >> >>>
>> >> >>> I use openvswitch as my plugin. And my settings are below:
>> >> >>>
>> >> >>> [DATABASE]
>> >> >>> sql_connection = mysql://quantum:openstack@172.16.1.1:3306/quantum
>> >> >>> reconnect_interval = 2
>> >> >>>
>> >> >>> [OVS]
>> >> >>>
>> >> >>> tenant_network_type = gre
>> >> >>> tunnel_id_ranges = 1:1000
>> >> >>> integration_bridge = br-int
>> >> >>> tunnel_bridge = br-tun
>> >> >>> local_ip = 172.16.1.2
>> >> >>>
>> >> >>> enable_tunneling = True
>> >> >>>
>> >> >>>
>> >> >>> [AGENT]
>> >> >>> polling_interval = 2
>> >> >>> root_helper = sudo /usr/bin/quantum-rootwrap
>> >> >>> /etc/quantum/rootwrap.conf
>> >> >>>
>> >> >>> --
>> >> >>> 非淡薄无以明志,非宁静无以致远
>> >> >>>
>> >> >>>
>> >> >>
>> >> >>
>> >> >>
>> >> >> --
>> >> >> 刘家军@ljjjustin
>> >> >>
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > 非淡薄无以明志,非宁静无以致远
>> >> >
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> ~~~
>> >> Dan Wendlandt
>> >> Nicira, Inc: www.nicira.com
>> >> twitter: danwendlandt
>> >> ~~~
>> >
>> >
>> >
>> >
>> > --
>> > 非淡薄无以明志,非宁静无以致远
>>
>>
>>
>> --
>> ~~~
>> Dan Wendlandt
>> Nicira, Inc: www.nicira.com
>> twitter: danwendlandt
>> ~~~
>
>
>
>
> --
> Blog Site: livemoon.org
> Twitter: mwjpiero
> 非淡薄无以明志,非宁静无以致远
>
>
>
>
>
>

Re: [Openstack] Slides & Notes Share of Summit Topic: DevOps in OpenStack Public Cloud

2012-10-24 Thread Hui Cheng
For topic introduction, see here:
http://openstacksummitfall2012.sched.org/event/f072cfd0e6a0c3341288a1191c52e41a

On Thu, Oct 25, 2012 at 12:29 AM, Hui Cheng  wrote:

> Hi all:
>
> I was honored to give the presentation *DevOps in OpenStack Public Cloud*
> at the OpenStack Summit, where I shared my humble thoughts and my team's
> experiences in operating a production OpenStack public cloud. I think it is
> useful material for anyone who also wants to build and operate a public
> cloud based on OpenStack. To make it more readable for attendees, and
> especially for those who were not able to participate in person, I posted
> the slides and the detailed notes on my team's blog; I hope you enjoy them:
>
> http://freedomhui.com/2012/10/devops-in-openstack-public-cloud/
>
> Too Long, please read with patience:)
>
> I am also looking forward to any feedback to improve our works, thank you.
>
> Cheers,
> Hui
>
>


-- 
Hui Cheng - 程辉

Community Manager of COSUG
Technical Manager of Sina Corporation

Twitter: @freedomhui 
Blog: freedomhui.com
Weibo: weibo.com/freedomhui


[Openstack] Slides & Notes Share of Summit Topic: DevOps in OpenStack Public Cloud

2012-10-24 Thread Hui Cheng
Hi all:

I was honored to give the presentation *DevOps in OpenStack Public Cloud*
at the OpenStack Summit, where I shared my humble thoughts and my team's
experiences in operating a production OpenStack public cloud. I think it is
useful material for anyone who also wants to build and operate a public
cloud based on OpenStack. To make it more readable for attendees, and
especially for those who were not able to participate in person, I posted
the slides and the detailed notes on my team's blog; I hope you enjoy them:

http://freedomhui.com/2012/10/devops-in-openstack-public-cloud/

Too Long, please read with patience:)

I am also looking forward to any feedback to improve our works, thank you.

Cheers,
Hui

On Wed, Oct 24, 2012 at 11:06 PM, Mohammad Banikazemi  wrote:

> Using Quantum (Folsom) I have a weird situation. I can ping the outside
> world from my VMs only after I assign a floating IP address to them. In
> other words, I cannot ping the outside world by just setting up the quantum
> router and without using floating IP addresses. The other issue is that
> after assigning floating IPs I still cannot ping the VM from outside.
> I can provide more information about my setup but thought I might be
> missing something simple here and people may have seen the same problem.
> Thanks.
>
>
>


-- 
Hui Cheng - 程辉

Community Manager of COSUG
Technical Manager of Sina Corporation

Twitter: @freedomhui 
Blog: freedomhui.com
Weibo: weibo.com/freedomhui


[Openstack] Quantum Floating IPs

2012-10-24 Thread Mohammad Banikazemi


Using Quantum (Folsom) I have a weird situation. I can ping the outside
world from my VMs only after I assign a floating IP address to them. In
other words, I cannot ping the outside world by just setting up the quantum
router and without using floating IP addresses. The other issue is that
after assigning floating IPs I still cannot ping the VM from outside.
I can provide more information about my setup but thought I might be
missing something simple here and people may have seen the same problem.
Thanks.


Re: [Openstack] Not able to get IP address for VM

2012-10-24 Thread Srikanth Kumar Lingala
@janis: How can I check that metadata service is working?

@Salvatore:
DHCP Agent is working fine and I am not seeing any ERROR logs.
I am able to see dnsmasq services. I am able to see those MAC entries in
the hosts file.
The tap interface is created on the Host Node and is attached to br-int.

Regards,
Srikanth.
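
One way to answer the "@janis" question above (a sketch: 169.254.169.254 is the standard EC2-style metadata address, which the quoted nova.conf below also sets as metadata_host):

```shell
# Run from inside the VM: a fast reply listing instance-id, hostname,
# etc. means the metadata service is reachable; a hang or error
# usually means networking/DHCP is broken before metadata is.
METADATA_URL="http://169.254.169.254/latest/meta-data/"
echo "checking ${METADATA_URL}"
# curl --max-time 5 "${METADATA_URL}"   # uncomment inside the VM
```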

On Wed, Oct 24, 2012 at 7:24 PM, Salvatore Orlando wrote:

> Srikanth,
>
> from your analysis it seems that L2 connectivity between the compute and
> the controller node is working as expected.
> Before looking further, it is maybe worth ruling out the obvious problems.
> Hence:
> 1) is the dhcp-agent service running (or is it stuck in some error state?)
> 2) Can you see dnsmasq instances running on the controller node? If yes,
> do you see your VM's MAC in the hosts file for the dnsmasq instance?
> 3) If dnsmasq instances are running, can you confirm the relevant tap
> ports are inserted on Open vSwitch instance br-int?
>
> Salvatore
>
>
> On 24 October 2012 14:14, Jānis Ģeņģeris  wrote:
>
>> Hi Srikanth,
>>
>> Can you confirm that the metadata service is working and the VMs are able
>> to access it? Usually when VMs can't get network settings, it is because
>> the metadata service is inaccessible.
>>
>> --janis
>>
>> On Wed, Oct 24, 2012 at 4:00 PM, Srikanth Kumar Lingala <
>> srikanthkumar.ling...@gmail.com> wrote:
>>
>>> Here is the *nova.conf* file contents:
>>>
>>> *[DEFAULT]*
>>> *# MySQL Connection #*
>>> *sql_connection=mysql://nova:password@10.232.91.33/nova*
>>> *
>>> *
>>> *# nova-scheduler #*
>>> *rabbit_host=10.232.91.33*
>>> *rabbit_userid=guest*
>>> *rabbit_password=password*
>>> *#scheduler_driver=nova.scheduler.simple.SimpleScheduler*
>>> *#scheduler_default_filters=ImagePropertiesFilter*
>>> *
>>> *
>>> *
>>> *
>>> *scheduler_driver=nova.scheduler.multi.MultiScheduler*
>>> *
>>> compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
>>> *
>>> *scheduler_available_filters=nova.scheduler.filters.standard_filters*
>>> *scheduler_default_filters=ImagePropertiesFilter*
>>> *
>>> *
>>> *
>>> *
>>> *# nova-api #*
>>> *cc_host=10.232.91.33*
>>> *auth_strategy=keystone*
>>> *s3_host=10.232.91.33*
>>> *ec2_host=10.232.91.33*
>>> *nova_url=http://10.232.91.33:8774/v1.1/*
>>> *ec2_url=http://10.232.91.33:8773/services/Cloud*
>>> *keystone_ec2_url=http://10.232.91.33:5000/v2.0/ec2tokens*
>>> *api_paste_config=/etc/nova/api-paste.ini*
>>> *allow_admin_api=true*
>>> *use_deprecated_auth=false*
>>> *ec2_private_dns_show_ip=True*
>>> *dmz_cidr=169.254.169.254/32*
>>> *ec2_dmz_host=169.254.169.254*
>>> *metadata_host=169.254.169.254*
>>> *enabled_apis=ec2,osapi_compute,metadata*
>>> *
>>> *
>>> *
>>> *
>>> *# Networking #*
>>> *network_api_class=nova.network.quantumv2.api.API*
>>> *quantum_url=http://10.232.91.33:9696*
>>> *libvirt_vif_type=ethernet*
>>> *linuxnet_vif_driver=nova.network.linux_net.LinuxOVSInterfaceDriver*
>>> *firewall_driver=nova.virt.firewall.NoopFirewallDriver*
>>> *libvirt_use_virtio_for_bridges=True*
>>> *
>>> *
>>> *# Cinder #*
>>> *#volume_api_class=cinder.volume.api.API*
>>> *
>>> *
>>> *# Glance #*
>>> *glance_api_servers=10.232.91.33:9292*
>>> *image_service=nova.image.glance.GlanceImageService*
>>> *
>>> *
>>> *# novnc #*
>>> *novnc_enable=true*
>>> *novncproxy_base_url=http://10.232.91.33:6080/vnc_auto.html*
>>> *vncserver_proxyclient_address=127.0.0.1*
>>> *vncserver_listen=0.0.0.0*
>>> *
>>> *
>>> *# Misc #*
>>> *logdir=/var/log/nova*
>>> *state_path=/var/lib/nova*
>>> *lock_path=/var/lock/nova*
>>> *root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf*
>>> *verbose=true*
>>> *dhcpbridge_flagfile=/etc/nova/nova.conf*
>>> *dhcpbridge=/usr/bin/nova-dhcpbridge*
>>> *force_dhcp_release=True*
>>> *iscsi_helper=tgtadm*
>>> *connection_type=libvirt*
>>> *libvirt_type=kvm*
>>> *libvirt_ovs_bridge=br-int*
>>> *libvirt_vif_type=ethernet*
>>> *libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver*
>>>
>>>
>>> Regards,
>>> Srikanth.
>>>
>>>
>>> On Mon, Oct 22, 2012 at 7:48 AM, gong yong sheng <
>>> gong...@linux.vnet.ibm.com> wrote:
>>>
  Can you send out your nova.conf file?

 On 10/22/2012 07:30 PM, Srikanth Kumar Lingala wrote:

 Hi,
 I am using the latest devstack. I am trying to create a VM with one Ethernet
 interface card. I am able to create the VM successfully, but not able to
 get an IP for the Ethernet interface.
 I have Openstack Controller running the following:

- nova-api
- nova-cert
- nova-consoleauth
- nova-scheduler
- quantum-dhcp-agent
- quantum-openvswitch-agent


  And I also have an OpenStack Host Node running the following:

- nova-api
- nova-compute
- quantum-openvswitch-agent


  I am not seeing any kind of errors in the logs related to nova or quantum.
 I observed that when I execute 'dhclient' in the VM, the 'br-int' interface
 on the 'Openstack Controller' is getting DHCP requests but not sending
 replies.
 Pleas

Re: [Openstack] Not able to get IP address for VM

2012-10-24 Thread Salvatore Orlando
Srikanth,

from your analysis it seems that L2 connectivity between the compute and
the controller node is working as expected.
Before looking further, it is maybe worth ruling out the obvious problems.
Hence:
1) is the dhcp-agent service running (or is it stuck in some error state?)
2) Can you see dnsmasq instances running on the controller node? If yes, do
you see your VM's MAC in the hosts file for the dnsmasq instance?
3) If dnsmasq instances are running, can you confirm the relevant tap ports
are inserted on Open vSwitch instance br-int?

Salvatore

On 24 October 2012 14:14, Jānis Ģeņģeris  wrote:

> Hi Srikanth,
>
> Can you confirm that the metadata service is working and the VMs are able
> to access it? Usually when VMs can't get network settings, it is because
> the metadata service is inaccessible.
>
> --janis
>
> On Wed, Oct 24, 2012 at 4:00 PM, Srikanth Kumar Lingala <
> srikanthkumar.ling...@gmail.com> wrote:
>
>> Here is the *nova.conf* file contents:
>>
>> *[DEFAULT]*
>> *# MySQL Connection #*
>> *sql_connection=mysql://nova:password@10.232.91.33/nova*
>> *
>> *
>> *# nova-scheduler #*
>> *rabbit_host=10.232.91.33*
>> *rabbit_userid=guest*
>> *rabbit_password=password*
>> *#scheduler_driver=nova.scheduler.simple.SimpleScheduler*
>> *#scheduler_default_filters=ImagePropertiesFilter*
>> *
>> *
>> *
>> *
>> *scheduler_driver=nova.scheduler.multi.MultiScheduler*
>> *compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
>> *
>> *scheduler_available_filters=nova.scheduler.filters.standard_filters*
>> *scheduler_default_filters=ImagePropertiesFilter*
>> *
>> *
>> *
>> *
>> *# nova-api #*
>> *cc_host=10.232.91.33*
>> *auth_strategy=keystone*
>> *s3_host=10.232.91.33*
>> *ec2_host=10.232.91.33*
>> *nova_url=http://10.232.91.33:8774/v1.1/*
>> *ec2_url=http://10.232.91.33:8773/services/Cloud*
>> *keystone_ec2_url=http://10.232.91.33:5000/v2.0/ec2tokens*
>> *api_paste_config=/etc/nova/api-paste.ini*
>> *allow_admin_api=true*
>> *use_deprecated_auth=false*
>> *ec2_private_dns_show_ip=True*
>> *dmz_cidr=169.254.169.254/32*
>> *ec2_dmz_host=169.254.169.254*
>> *metadata_host=169.254.169.254*
>> *enabled_apis=ec2,osapi_compute,metadata*
>> *
>> *
>> *
>> *
>> *# Networking #*
>> *network_api_class=nova.network.quantumv2.api.API*
>> *quantum_url=http://10.232.91.33:9696*
>> *libvirt_vif_type=ethernet*
>> *linuxnet_vif_driver=nova.network.linux_net.LinuxOVSInterfaceDriver*
>> *firewall_driver=nova.virt.firewall.NoopFirewallDriver*
>> *libvirt_use_virtio_for_bridges=True*
>> *
>> *
>> *# Cinder #*
>> *#volume_api_class=cinder.volume.api.API*
>> *
>> *
>> *# Glance #*
>> *glance_api_servers=10.232.91.33:9292*
>> *image_service=nova.image.glance.GlanceImageService*
>> *
>> *
>> *# novnc #*
>> *novnc_enable=true*
>> *novncproxy_base_url=http://10.232.91.33:6080/vnc_auto.html*
>> *vncserver_proxyclient_address=127.0.0.1*
>> *vncserver_listen=0.0.0.0*
>> *
>> *
>> *# Misc #*
>> *logdir=/var/log/nova*
>> *state_path=/var/lib/nova*
>> *lock_path=/var/lock/nova*
>> *root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf*
>> *verbose=true*
>> *dhcpbridge_flagfile=/etc/nova/nova.conf*
>> *dhcpbridge=/usr/bin/nova-dhcpbridge*
>> *force_dhcp_release=True*
>> *iscsi_helper=tgtadm*
>> *connection_type=libvirt*
>> *libvirt_type=kvm*
>> *libvirt_ovs_bridge=br-int*
>> *libvirt_vif_type=ethernet*
>> *libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver*
>>
>>
>> Regards,
>> Srikanth.
>>
>>
>> On Mon, Oct 22, 2012 at 7:48 AM, gong yong sheng <
>> gong...@linux.vnet.ibm.com> wrote:
>>
>>>  Can you send out your nova.conf file?
>>>
>>> On 10/22/2012 07:30 PM, Srikanth Kumar Lingala wrote:
>>>
>>> Hi,
>>> I am using the latest devstack. I am trying to create a VM with one Ethernet
>>> interface card. I am able to create the VM successfully, but not able to
>>> get an IP for the Ethernet interface.
>>> I have Openstack Controller running the following:
>>>
>>>- nova-api
>>>- nova-cert
>>>- nova-consoleauth
>>>- nova-scheduler
>>>- quantum-dhcp-agent
>>>- quantum-openvswitch-agent
>>>
>>>
>>>  And I also have an OpenStack Host Node running the following:
>>>
>>>- nova-api
>>>- nova-compute
>>>- quantum-openvswitch-agent
>>>
>>>
>>>  I am not seeing any kind of errors in the logs related to nova or
>>> quantum.
>>> I observed that when I execute 'dhclient' in the VM, the 'br-int' interface
>>> on the 'Openstack Controller' is getting DHCP requests but not sending
>>> replies. Please let me know what I am doing wrong here.
>>> Thanks in advance.
>>>
>>>  --
>>> 
>>> Srikanth.
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> 
>> Srikanth.
>>
>>

Re: [Openstack] Not able to get IP address for VM

2012-10-24 Thread Jānis Ģeņģeris
Hi Srikanth,

Can you confirm that the metadata service is working and the VMs are able to
access it? Usually when VMs can't get network settings, it is because the
metadata service is inaccessible.

--janis

On Wed, Oct 24, 2012 at 4:00 PM, Srikanth Kumar Lingala <
srikanthkumar.ling...@gmail.com> wrote:

> Here is the *nova.conf* file contents:
>
> *[DEFAULT]*
> *# MySQL Connection #*
> *sql_connection=mysql://nova:password@10.232.91.33/nova*
> *
> *
> *# nova-scheduler #*
> *rabbit_host=10.232.91.33*
> *rabbit_userid=guest*
> *rabbit_password=password*
> *#scheduler_driver=nova.scheduler.simple.SimpleScheduler*
> *#scheduler_default_filters=ImagePropertiesFilter*
> *
> *
> *
> *
> *scheduler_driver=nova.scheduler.multi.MultiScheduler*
> *compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler*
> *scheduler_available_filters=nova.scheduler.filters.standard_filters*
> *scheduler_default_filters=ImagePropertiesFilter*
> *
> *
> *
> *
> *# nova-api #*
> *cc_host=10.232.91.33*
> *auth_strategy=keystone*
> *s3_host=10.232.91.33*
> *ec2_host=10.232.91.33*
> *nova_url=http://10.232.91.33:8774/v1.1/*
> *ec2_url=http://10.232.91.33:8773/services/Cloud*
> *keystone_ec2_url=http://10.232.91.33:5000/v2.0/ec2tokens*
> *api_paste_config=/etc/nova/api-paste.ini*
> *allow_admin_api=true*
> *use_deprecated_auth=false*
> *ec2_private_dns_show_ip=True*
> *dmz_cidr=169.254.169.254/32*
> *ec2_dmz_host=169.254.169.254*
> *metadata_host=169.254.169.254*
> *enabled_apis=ec2,osapi_compute,metadata*
> *
> *
> *
> *
> *# Networking #*
> *network_api_class=nova.network.quantumv2.api.API*
> *quantum_url=http://10.232.91.33:9696*
> *libvirt_vif_type=ethernet*
> *linuxnet_vif_driver=nova.network.linux_net.LinuxOVSInterfaceDriver*
> *firewall_driver=nova.virt.firewall.NoopFirewallDriver*
> *libvirt_use_virtio_for_bridges=True*
> *
> *
> *# Cinder #*
> *#volume_api_class=cinder.volume.api.API*
> *
> *
> *# Glance #*
> *glance_api_servers=10.232.91.33:9292*
> *image_service=nova.image.glance.GlanceImageService*
> *
> *
> *# novnc #*
> *novnc_enable=true*
> *novncproxy_base_url=http://10.232.91.33:6080/vnc_auto.html*
> *vncserver_proxyclient_address=127.0.0.1*
> *vncserver_listen=0.0.0.0*
> *
> *
> *# Misc #*
> *logdir=/var/log/nova*
> *state_path=/var/lib/nova*
> *lock_path=/var/lock/nova*
> *root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf*
> *verbose=true*
> *dhcpbridge_flagfile=/etc/nova/nova.conf*
> *dhcpbridge=/usr/bin/nova-dhcpbridge*
> *force_dhcp_release=True*
> *iscsi_helper=tgtadm*
> *connection_type=libvirt*
> *libvirt_type=kvm*
> *libvirt_ovs_bridge=br-int*
> *libvirt_vif_type=ethernet*
> *libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver*
>
>
> Regards,
> Srikanth.
>
>
> On Mon, Oct 22, 2012 at 7:48 AM, gong yong sheng <
> gong...@linux.vnet.ibm.com> wrote:
>
>>  Can you send out your nova.conf file?
>>
>> On 10/22/2012 07:30 PM, Srikanth Kumar Lingala wrote:
>>
>> Hi,
>> I am using the latest devstack. I am trying to create a VM with one Ethernet
>> interface card. I am able to create the VM successfully, but not able to
>> get an IP for the Ethernet interface.
>> I have Openstack Controller running the following:
>>
>>- nova-api
>>- nova-cert
>>- nova-consoleauth
>>- nova-scheduler
>>- quantum-dhcp-agent
>>- quantum-openvswitch-agent
>>
>>
>>  And I also have an OpenStack Host Node running the following:
>>
>>- nova-api
>>- nova-compute
>>- quantum-openvswitch-agent
>>
>>
>>  I am not seeing any kind of errors in the logs related to nova or
>> quantum.
>> I observed that when I execute 'dhclient' in the VM, the 'br-int' interface
>> on the 'Openstack Controller' is getting DHCP requests but not sending
>> replies. Please let me know what I am doing wrong here.
>> Thanks in advance.
>>
>>  --
>> 
>> Srikanth.
>>
>>
>>
>>
>>
>>
>
>
> --
> 
> Srikanth.
>
>
>
>


Re: [Openstack] ERROR: string indices must be integers, not str

2012-10-24 Thread Dolph Mathews
Sorry for the delayed response; I know I've seen this message before. I
believe it had something to do with endpoints configured in a manner
keystone did not expect. Can you paste the full backtrace from the logs,
and if it appears to be related, your keystone endpoint-list?

-Dolph


On Thu, Oct 18, 2012 at 5:00 AM, Xu, HongnaX  wrote:

> Hi
>   I am installing OpenStack on Ubuntu 12.10 beta2 with the
> precise-updates/folsom repo. After syncing the keystone database, I set
> these parameters in ~/.bashrc:
>
> export SERVICE_TOKEN=admin
> export OS_TENANT_NAME=admin
> export OS_USERNAME=admin
> export OS_PASSWORD=openstack
> export OS_AUTH_URL=http://10.211.55.17:5000/v2.0/
> export SERVICE_ENDPOINT=http://10.211.55.17:35357/v2.0/
>
> When I add keystone users with "keystone user-create --name admin --pass
> openstack --email ad...@foobar.com" or run "keystone user-list", I get
> "string indices must be integers, not str". Is there any solution? My
> settings are exactly the same as the wiki:
> http://docs.openstack.org/trunk/openstack-compute/install/apt/content/ap_installingfolsomubuntuprecise.html
>
>
> Best Regards,
> Hongna
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Not able to get IP address for VM

2012-10-24 Thread Srikanth Kumar Lingala
Here is the content of my *nova.conf* file:

[DEFAULT]
# MySQL Connection #
sql_connection=mysql://nova:password@10.232.91.33/nova

# nova-scheduler #
rabbit_host=10.232.91.33
rabbit_userid=guest
rabbit_password=password
#scheduler_driver=nova.scheduler.simple.SimpleScheduler
#scheduler_default_filters=ImagePropertiesFilter

scheduler_driver=nova.scheduler.multi.MultiScheduler
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.standard_filters
scheduler_default_filters=ImagePropertiesFilter

# nova-api #
cc_host=10.232.91.33
auth_strategy=keystone
s3_host=10.232.91.33
ec2_host=10.232.91.33
nova_url=http://10.232.91.33:8774/v1.1/
ec2_url=http://10.232.91.33:8773/services/Cloud
keystone_ec2_url=http://10.232.91.33:5000/v2.0/ec2tokens
api_paste_config=/etc/nova/api-paste.ini
allow_admin_api=true
use_deprecated_auth=false
ec2_private_dns_show_ip=True
dmz_cidr=169.254.169.254/32
ec2_dmz_host=169.254.169.254
metadata_host=169.254.169.254
enabled_apis=ec2,osapi_compute,metadata

# Networking #
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.232.91.33:9696
libvirt_vif_type=ethernet
linuxnet_vif_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
libvirt_use_virtio_for_bridges=True

# Cinder #
#volume_api_class=cinder.volume.api.API

# Glance #
glance_api_servers=10.232.91.33:9292
image_service=nova.image.glance.GlanceImageService

# novnc #
novnc_enable=true
novncproxy_base_url=http://10.232.91.33:6080/vnc_auto.html
vncserver_proxyclient_address=127.0.0.1
vncserver_listen=0.0.0.0

# Misc #
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=true
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
force_dhcp_release=True
iscsi_helper=tgtadm
connection_type=libvirt
libvirt_type=kvm
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver


Regards,
Srikanth.

On Mon, Oct 22, 2012 at 7:48 AM, gong yong sheng  wrote:

>  Can you send out your nova.conf file?
>
> On 10/22/2012 07:30 PM, Srikanth Kumar Lingala wrote:
>
> Hi,
> I am using the latest devstack. I am trying to create a VM with one
> Ethernet interface card. I am able to create the VM successfully, but I am
> not able to get an IP for the Ethernet interface.
> I have Openstack Controller running the following:
>
>- nova-api
>- nova-cert
>- nova-consoleauth
>- nova-scheduler
>- quantum-dhcp-agent
>- quantum-openvswitch-agent
>
>
>  And I also have the OpenStack Host Node running the following:
>
>- nova-api
>- nova-compute
>- quantum-openvswitch-agent
>
>
>  I am not seeing any errors in the logs related to nova or quantum.
> I observed that when I execute 'dhclient' in the VM, the 'br-int'
> interface on the OpenStack Controller gets DHCP requests but does not
> send replies.
> Please let me know what I am doing wrong here.
> Thanks in advance.
>
>  --
> 
> Srikanth.
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
>


-- 

Srikanth.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Minutes from the Technical Committee meeting (Oct 23)

2012-10-24 Thread Thierry Carrez
The OpenStack Technical Committee held its first official (and public)
meeting in #openstack-meeting at 20:00 UTC yesterday.

Here is a quick summary of the outcome of this meeting:

* Ryan Lane was nominated to the User Committee to work with Tim Bell in
its initial setup.

* The Ceilometer project was granted Incubation status and will
therefore be able to access additional OpenStack common resources,
including tapping into CI, QA and Release management teams.

See details and full logs at:
http://eavesdrop.openstack.org/meetings/tc/2012/tc.2012-10-23-20.01.html

More information on the Technical Committee at:
http://wiki.openstack.org/Governance/TechnicalCommittee

-- 
Thierry Carrez (ttx)
Chair, OpenStack Technical Committee

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Additional iptables when two network interfaces

2012-10-24 Thread Gui Maluf
Did you enable ip_forwarding?
Try using nova-api-metadata on the node.

When I tried to do the same setup I couldn't get it working; my solution
was setting multi_host = true and starting nova-api-metadata on the nodes,
so each VM gets its metadata through its own host and all traffic goes
through that same host.

My 0.1 cents, I hope you can fix it!
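A sketch of the pieces Gui describes (flag names follow the nova.conf conventions of this era; values are illustrative, not taken from the thread):

```
# /etc/nova/nova.conf on each compute node (sketch, not a full config)
multi_host=true
enabled_apis=ec2,osapi_compute,metadata   # serve metadata locally

# and enable forwarding on the host itself:
#   sysctl -w net.ipv4.ip_forward=1
```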

On Tue, Oct 23, 2012 at 11:19 AM, Daniel Vázquez wrote:

> Hi there!
>
> Are there any additional iptables rules to add when running on a host with
> two network interfaces?
> Following the documentation, I gather that nova.conf has all the info
> needed to build the rules at runtime.
>
> I'm having problems connecting to VM instances; please can you review:
> https://answers.launchpad.net/nova/+question/212025
> I can't see where the problem is.
>
> Thx!
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
*guilherme* \n
\tab *maluf*
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How to make vms communicate across multiple nodes using quantum?

2012-10-24 Thread Gary Kotton

Hi,
In addition to Dan's comments you can also take a look at the following 
link http://wiki.openstack.org/ConfigureOpenvswitch.
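For reference, the bridge setup Dan describes further down reduces to one command per compute node (sketch; with tenant_network_type=gre no br-ethX is needed, since the agent creates the tunnel bridge br-tun itself):

```
# On every node running quantum-openvswitch-agent:
ovs-vsctl add-br br-int    # integration bridge the VM taps plug into
```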

Thanks
Gary

On 10/24/2012 08:21 AM, livemoon wrote:

Thanks Dan

On Wed, Oct 24, 2012 at 2:15 PM, Dan Wendlandt wrote:


On Tue, Oct 23, 2012 at 10:56 PM, livemoon <mwjpi...@gmail.com> wrote:
> Dan:
> Thank you for your help.
> If the server have three nics, which one will be used as port of
"br-int". I
> must know how "br-int" work between two machines, and then I can
make the
> physical interface which "br-int" use to one switch

If you are using tunneling, the traffic will exit out the NIC based on
your physical server's routing table and the destination IP of the
tunnel.  For example, if your physical server is tunneling a packet to
a VM on a physical server with IP W.X.Y.Z, the packet will leave
whatever NIC has the route to reach W.X.Y.Z .
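This route lookup can be sketched in Python (the peer IP 172.16.1.2 is the local_ip from the OVS config quoted below; the UDP connect() trick only consults the kernel routing table, no packet is sent):

```python
import socket

def outgoing_ip(dest: str) -> str:
    """Return the local source IP the kernel's routing table picks to
    reach dest; the NIC owning that IP is the one a tunneled packet to
    dest will leave on. UDP connect() sends nothing on the wire."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest, 9))   # the port number is irrelevant here
        return s.getsockname()[0]
    finally:
        s.close()

# e.g. for the tunnel peer from this thread's OVS config:
try:
    print(outgoing_ip("172.16.1.2"))
except OSError:
    pass  # no route toward that subnet on this machine
```

The same answer is available from the shell via `ip route get <peer-ip>`.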

Dan




>
> On Wed, Oct 24, 2012 at 11:52 AM, Dan Wendlandt <d...@nicira.com> wrote:
>>
>> all you need to do is create a bridge named "br-int", which is what
>> the linux devices representing the vm nics will be plugged into.
>>
>> since you are using tunneling, there is no need to create a br-ethX
>> and add a physical interface to it.
>>
>> dan
>>
>> p.s. btw, your config looks like its using database polling,
which is
>> not preferred.  I'd suggest you use the default config, which
uses RPC
>> communication between agents and the main quantum-server process
>>
>>
>> On Tue, Oct 23, 2012 at 8:44 PM, livemoon <mwjpi...@gmail.com> wrote:
>> > I know in one node,vm can work well.
>> > I want to know in multi nodes, do I need to create a br-ethX,
and port
>> > the
>> > physical interface to it? how to do that in configuration?
>> >
>> > On Wed, Oct 24, 2012 at 11:36 AM, ??? <iam...@gmail.com> wrote:
>> >>
>> >> you just need to create one or more networks and specify
which network
>> >> to
>> >> use when booting vm.
>> >>
>> >> 2012/10/24 livemoon <mwjpi...@gmail.com>
>> >>>
>> >>> Hi, I use quantum as network. A question is if there are
multi nodes,
>> >>> how
>> >>> to config to make vms communicate with each other in the
same subnet.
>> >>>
>> >>> I use openvswitch as my plugin. And my setting is blow:
>> >>>
>> >>> [DATABASE]
>> >>> sql_connection = mysql://quantum:openstack@172.16.1.1:3306/quantum
>> >>> reconnect_interval = 2
>> >>>
>> >>> [OVS]
>> >>>
>> >>> tenant_network_type = gre
>> >>> tunnel_id_ranges = 1:1000
>> >>> integration_bridge = br-int
>> >>> tunnel_bridge = br-tun
>> >>> local_ip = 172.16.1.2
>> >>>
>> >>> enable_tunneling = True
>> >>>
>> >>>
>> >>> [AGENT]
>> >>> polling_interval = 2
>> >>> root_helper = sudo /usr/bin/quantum-rootwrap
>> >>> /etc/quantum/rootwrap.conf
>> >>>
>> >>> --
>> >>> ???,???
>> >>>
>> >>> ___
>> >>> Mailing list: https://launchpad.net/~openstack
>> >>> Post to : openstack@lists.launchpad.net
>> >>> Unsubscribe : https://launchpad.net/~openstack
>> >>> More help   : https://help.launchpad.net/ListHelp
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> ???@ljjjustin
>> >>
>> >
>> >
>> >
>> > --
>> > ???,???
>> >
>> > ___
>> > Mailing list: https://launchpad.net/~openstack
>> > Post to : openstack@lists.launchpad.net
>> > Unsubscribe : https://launchpad.net/~openstack
>> > More help   : https://help.launchpad.net/ListHelp
>> >
>>
>>
>>
>> --
>> ~~~
>> Dan Wendlandt
>> Nicira, Inc: www.nicira.com 
>> twitter: danwendlandt
>> ~~~
>
>
>
>
> --
> ???,???



--
~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com 
twitter: danwendlandt
~~~




--
Blog Site: livemoon.org 
Twitter: mwjpiero
???,???



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



Re: [Openstack] Use of MAC addresses in Openstack VMs

2012-10-24 Thread Thierry Carrez
Eric Windisch wrote:
> Xen, VMware, and HyperV provide their own OUI for automatically
> generated addresses. Yes, these collide between deployments. 

Your examples are all at hypervisor-level though... Could we rather
piggy-back on hypervisor allocation (and maybe help funding one for KVM
if it's missing), or do we need one anyway for Quantum?

-- 
Thierry

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Ceilometer API Glossary

2012-10-24 Thread Julien Danjou
On Wed, Oct 24 2012, 吴亚伟 wrote:

> I still have questions about the data in mongodb.
> First: all the data for "source" in the db is "?", for example:
> is it correct?

Yes, that's what we used for now in the source code, so this is correct.

> Second: when I create an instance, meter data is created in mongodb as
> "resource" and "meter", but for a volume there is no data at all in the
> db. Why? Is there something wrong with my devstack, or is there a
> configuration file that should be modified?

If you're talking about cinder volumes, you need to configure and run
cinder-volume-usage-audit regularly to get messages.
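For example, a cron entry is a common way to run it regularly (the path and user are assumptions; check where your distribution installs the script):

```
# /etc/cron.d/cinder-audit (sketch)
*/10 * * * * cinder /usr/bin/cinder-volume-usage-audit
```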

> Third: I can only see cpu, disk, and network info for instances in the
> meter data so far, but there is no memory info. Why?

Make sure you enabled the notifications via RabbitMQ in nova.
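The usual way to do that in this era's nova.conf was along these lines (flag names as used around Folsom; treat as a sketch to verify against your version):

```
# nova.conf (sketch)
notification_driver=nova.openstack.common.notifier.rabbit_notifier
notification_topics=notifications
```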

-- 
Julien Danjou
;; Free Software hacker & freelance
;; http://julien.danjou.info


pgp9T7Xj8PXY6.pgp
Description: PGP signature
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp