Joe Smithian wrote:
I browsed the OpenStack documentation but couldn't find information
about the Nova API server.
What's the web server used in the Nova API server?
Can we use a different web server such as Apache or Tomcat?
I'd appreciate your comments.
Nova uses Python's eventlet WSGI server.
According to the Swift docs, devices in a ring are identified by disk
name, like /dev/sdb1:

    device (string) - The on-disk name of the device on the server. For example: sdb1

http://swift.openstack.org/overview_ring.html
But such a disk name can change: when one of the disks fails, the names of all the following disks would shift.
Seems like the wiki docs misled me. sdb1 is actually a mount point, not
/dev/sdb1.
Thus there won't be any problem in the source code.
But the risk is still there: during the storage installation phase, the disk
should be mounted by UUID or label, not by /dev/sdb1:
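For example, a sketch of an /etc/fstab entry that mounts a Swift device by UUID rather than by device name (the UUID and mount options here are placeholders, not taken from the thread):

```
# /etc/fstab -- mount the Swift device by UUID, not by /dev/sdb1,
# so the mount survives device-name reshuffling after a disk failure
UUID=0f3a1c2d-0000-0000-0000-000000000000  /srv/node/sdb1  xfs  noatime,nodiratime,logbufs=8  0 0
```

The UUID can be read off the device with `blkid` before adding the entry.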
Alessio,
Your answer solved my problem, thank you so much. But now I'm getting a '500
Internal Server Error'.
I created a user admin with password secrete, and used curl to produce
the token successfully. But when I ran swift -A http://127.0.0.1:5000/v1.0 -U
admin -K secrete stat -v, I got Auth GET
On Mon, Jan 16, 2012 at 10:29:19AM -0200, Rogério Vinhal Nunes wrote:
As Daniel suggested, I just ignored the ID == 0 and it seems to work fine
now. The resulting code is even simpler than suggested by Vish:
def list_instances(self):
    # ID 0 is the host domain; skip it
    return [self._conn.lookupByID(x).name()
            for x in self._conn.listDomainsID() if x != 0]
Just curious, what's the reason we went with rolling our own instead of using
something like nginx/apache2/etc w/ mod_wsgi?
On Jan 16, 2012, at 2:14 AM, Thierry Carrez wrote:
Joe Smithian wrote:
I browsed the OpenStack documentation but couldn't find information
about the Nova API server.
Hi Michael,
we deploy with Apache + mod_proxy in front of all the Nova API
processes. It works reasonably well. For Horizon we use Apache + mod_wsgi
(well... everybody knows Horizon is a different beast...).
Tomcat is a different beast, for JVM stuff, you know. OpenStack is Python.
Cheers
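A sketch of what such a mod_proxy front end might look like (port number and paths are assumptions, not taken from the thread; 8774 is the default nova-api compute port):

```apache
# Reverse-proxy the Nova API through Apache (requires mod_proxy and
# mod_proxy_http to be enabled)
<VirtualHost *:80>
    ProxyPass        /v1.1 http://127.0.0.1:8774/v1.1
    ProxyPassReverse /v1.1 http://127.0.0.1:8774/v1.1
</VirtualHost>
```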
It's not like we wrote our own webserver :) Eventlet has a wsgi
container. We just went ahead and used it. Not only does it perform
very well, it's also very straightforward to use for testing (compared
to having to install and configure Apache to get going).
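Eventlet's WSGI container simply serves a standard WSGI application; the same idea can be sketched with the stdlib's wsgiref (used here instead of eventlet so the example carries no extra dependency — the application and port are illustrative, not Nova's actual ones):

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # A minimal WSGI application, of the same shape Nova hands to its container
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

# Port 0 lets the OS pick a free port
server = make_server('127.0.0.1', 0, app)
port = server.server_address[1]

# Serve exactly one request in a background thread, then query it
t = threading.Thread(target=server.handle_request)
t.start()
body = urllib.request.urlopen('http://127.0.0.1:%d/' % port).read()
t.join()
server.server_close()
print(body.decode())  # -> ok
```

Swapping wsgiref for eventlet.wsgi mainly buys green-thread concurrency; the application code itself is unchanged.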
2012/1/16 Michael Basnight
I'm interested in running some white-box tests that check scalability and
limits of parts of a Nova system. I want to start from a
full working configuration however, as I get that from my dev team and it
includes all sorts of settings that would be tedious
and error prone to reproduce in other
On Mon, Jan 16, 2012 at 8:34 AM, Leander Bessa leande...@gmail.com wrote:
Hello,
I've setup a test installation of OpenStack inside a virtual machine with
the ec2 api. I would also like to test out the OpenStack API with the
nova-python-client, therefore I installed keystone in a separate
Hi all,
I'm a new user of OpenStack Swift.
I installed it on 6 VMs and it works perfectly.
When I want to increase the storage space dedicated to Swift, should I add a
new VM?
Is it possible to add a shared folder dedicated to Swift instead of adding a new
VM?
Is there any other solution?
Hi all -
With lots of help from reviewers at Rackspace and beyond, I've drafted
a new installation guide for the stable Diablo release. Currently it
points to packages from two sources - Cloud Builders and Managed IT.
Much gratitude goes to both groups for their hard work. These
instructions
Hi Gavin! Comments inline...
On Mon, Jan 16, 2012 at 10:18 AM, Brebner, Gavin gavin.breb...@hp.com wrote:
I’m interested in running some “white-box” tests that check scalability and
limits of parts of a Nova system. I want to start from a
full working configuration however, as I get that from
All, I haven't yet proposed a merge since I'm looking for input on
pointing to community packages, but the doc is here:
https://github.com/annegentle/openstack-manuals/tree/123011
Look in /doc/src/docbkx/openstack-install for the source, and to build
HTML and PDF, run mvn generate-sources in that
On Sat, Jan 14, 2012 at 1:15 PM, Samuel Hassine, Another Service
samuel.hass...@anotherservice.com wrote:
I checked the log files; there is nothing I can identify as a fatal error. I do not
know how to debug this, or how to find a solution or a workaround.
Hi Sam, please pastebin your nova-compute log file.
On Fri, Jan 13, 2012 at 10:57 PM, Deepak Garg deepakgarg.i...@gmail.com wrote:
The slides were __really__ nice for getting started with OpenStack QA. I am going
to give it a shot and will get back if I run into problems.
However, can we subscribe / be notified if there are more follow-on webinars in
the
Hi!
Please!
Why are you using v1.0 in this command:
swift -A http://127.0.0.1:5000/v1.0 -U admin -K secrete stat -v
Could you try:
swift -A http://127.0.0.1:5000/v2.0 -U admin -K secrete stat -v
On 01/16/2012 02:22 PM, Xuyun Zhang wrote:
Alessio,
Your answer
Great slides! All my doubts about tempest are now gone. Thanks for sharing it.
Ghe Rivero
On Mon, Jan 16, 2012 at 6:18 PM, Jay Pipes jaypi...@gmail.com wrote:
On Fri, Jan 13, 2012 at 10:57 PM, Deepak Garg deepakgarg.i...@gmail.com
wrote:
The slides were __really__ nice to get started with
Sandy, Ed, Chris,
We're addressing this mail to you because in the source you are listed as
the authors of the Scheduler module. Here's the thing, we have:
- 2 nova Diablo clusters (16 nodes, 1 controller, 15 computes - controller
running sched, api and network services - KVM as HV) + keystone
Hello,
I have read about the BSaaS (Block Storage as a Service) project codenamed 'Lunr' in the OpenStack wiki and
in this list archive. But it seems that its development was abandoned by
the team and focus was switched to developing integrated nova-volume
service. Was that a governance decision not to use external block storage
I used the fake virt driver when I was testing zones. Very handy.
From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net on behalf of
Brebner, Gavin [gavin.breb...@hp.com]
On Mon, Jan 16, 2012 at 11:18 AM, Anne Gentle a...@openstack.org wrote:
Hi all -
With lots of help from reviewers at Rackspace and beyond, I've drafted
a new installation guide for the stable Diablo release. Currently it
points to packages from two sources - Cloud Builders and Managed IT.
Hi,
This was not a governance decision. AFAIK Lunr is still under development, but
will be exposed to OpenStack as a backend to the existing nova-volume code. We
have made significant progress during the Essex release at separating out
nova-volume so that it can be branched into its own
As of the following changeset, Nova trunk is completely broken on Python 2.6.
I presume that we're still supporting 2.6? (We'd better be!)
commit 035b43b1fd320008234e066e30629fb0e359b424
Author: Naveed Massjouni navee...@gmail.com
Date: Thu Jan 12 18:38:21 2012 +
Refactoring
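The changeset itself isn't quoted here, but a classic example of code that runs on Python 2.7 yet breaks on 2.6 is auto-numbered str.format fields (a hypothetical illustration of this kind of regression, not necessarily the actual one):

```python
# '{}' (auto-numbered format fields) was added in Python 2.7;
# on 2.6 it raises ValueError: zero length field name in format
msg = "instance {} on host {}".format("i-0001", "node1")
print(msg)  # -> instance i-0001 on host node1

# The 2.6-safe spelling uses explicit indices:
msg26 = "instance {0} on host {1}".format("i-0001", "node1")
```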
Good question.
I think the wiki doc is wrong.
In the swift-ring-builder section, it says:
For example, if you were setting up a storage node with a partition of
/dev/sdb1 in Zone 1 on IP 10.0.0.1, the DEVICE would be sdb1 and the
commands would look like:
swift-ring-builder account.builder add z1-10.0.0.1:6002/sdb1 100
On Mon, Jan 16, 2012 at 9:48 PM, Ewan Mellor ewan.mel...@eu.citrix.com wrote:
While I’m here, any chance we can have a unit test running on Python 2.6?
Monty and Jim have been working on getting parallel 2.7 and 2.6 tests
going with the tox library.
-jay
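A sketch of what a tox configuration covering both interpreters might look like (the section contents here are assumptions, not taken from the thread):

```ini
# tox.ini -- run the test suite under both Python 2.6 and 2.7
[tox]
envlist = py26,py27

[testenv]
deps = -r{toxinidir}/tools/pip-requires
       -r{toxinidir}/tools/test-requires
commands = nosetests {posargs}
```

With this in place, `tox -e py26` exercises the 2.6 environment alone.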
Hi all,
If you're interested in helping me gather throughput numbers for
various Glance installations, please contact me. I wrote a little tool
tonight that gathers some throughput details after attempting to
concurrently add images to a Glance server.
You can see the output of the tool below:
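The tool's output did not survive in this excerpt, but a minimal sketch of such a concurrent-add benchmark (with a stub standing in for the real Glance client call, which is an assumption) might look like:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def add_image(data):
    # Stub standing in for a real Glance image-create call
    time.sleep(0.01)  # simulate network/disk latency
    return len(data)

def benchmark(num_images=20, concurrency=5, size=1024):
    # Fire num_images uploads at the server, concurrency at a time,
    # and report how long the whole batch took
    payload = b'x' * size
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: add_image(payload), range(num_images)))
    elapsed = time.time() - start
    return len(results), elapsed

count, elapsed = benchmark()
print("added %d images in %.2fs (%.1f images/sec)" % (count, elapsed, count / elapsed))
```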
On 01/17/2012 08:02 AM, Vishvananda Ishaya wrote:
Hi,
This was not a governance decision. AFAIK Lunr is still under
development, but will be exposed to OpenStack as a backend to the existing
nova-volume code. We have made significant progress during the Essex
release at separating out
Nice!
Jay - are you expecting folks to run this on the same server or in the
same rack as the glance server? (eg, do you expect the transfer
between the client and glance to make a noticeable impact on
performance)
Perhaps if people are going to share these numbers they should share
benchmarks