Sorry, at the moment I'm unable to fetch the latest version of swift :s
On Tue, Jan 15, 2013 at 12:14 PM, Chmouel Boudjnah wrote:
Any chance you can try with the latest swift as well and set:
log_level = DEBUG
in the swift proxy-server.conf, and see what the authtoken middleware is doing
in /var/log/syslog (or wherever syslog logs on your distro)?
Chmouel.
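For reference, the logging change suggested above is a two-line edit to the proxy config; a minimal sketch (section layout per a standard paste-deploy proxy-server.conf):

```ini
[DEFAULT]
log_level = DEBUG
```

Restart the proxy after the change so the new level takes effect.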
On Tue, Jan 15, 2013 at 12:09 PM, Leander Bessa Beernaert <leande...@gmail.com> wrote:
I've updated the keystoneclient to the lastest version available in GitHub
and I'm still not getting any speedups. I've switched to the tempauth
system and noticed an immediate increase in throughput (6.4 GB/day to 53.5
GB/day).
Since I'm pressed for time, I'll stick with the tempauth system for now.
As Chuck explained, you usually would see that on the server. Having said
that, if you use the latest swiftclient from GitHub you will be able to see
the requests that swiftclient makes via keystoneclient to keystone to get a
token. If you go on the swift proxy server, and only if you use a recent
checkout
Rather than ping-ponging emails back and forth on this list, it would
be easier if you could hop on to the #openstack-swift IRC channel on
freenode to discuss further.
--
Chuck
On Mon, Jan 14, 2013 at 1:00 PM, Leander Bessa Beernaert
wrote:
Neither keystone nor swift proxy are producing any logs. I'm not sure what
to do :S
On Mon, Jan 14, 2013 at 6:50 PM, Chuck Thier wrote:
You would have to look at the proxy log to see if a request is being
made. The results from the swift command line are just the calls that
the client makes. The server still has to validate the token on
every request.
--
Chuck
On Mon, Jan 14, 2013 at 12:37 PM, Leander Bessa Beernaert
wrote:
If memcache is being utilized by your keystone middleware, you should see
keystone attaching to it on the first incoming request, e.g.:
keystoneclient.middleware.auth_token [INFO]: Using Keystone memcache for
caching token
You may also want to use auth_token from keystoneclient >= v0.2.0 if you
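For what it's worth, wiring the auth_token middleware to memcache is a small config change; a sketch of the relevant filter section in proxy-server.conf (the exact option name has varied between keystoneclient releases, so verify it against your installed version):

```ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
# assumed option name; check your keystoneclient version's docs
memcache_servers = 127.0.0.1:11211
```

With this in place you should see the "Using Keystone memcache for caching token" line on the first request.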
Are you by any chance referring to this topic
https://lists.launchpad.net/openstack/msg08639.html regarding the keystone
token cache? If so, I've already added the configuration line and have not
noticed any speedup :/
On Mon, Jan 14, 2013 at 5:19 PM, Leander Bessa Beernaert <leande...@gmail.com> wrote:
I'm using the Ubuntu 12.04 packages from the Folsom repository, by the way.
On Mon, Jan 14, 2013 at 5:18 PM, Chuck Thier wrote:
On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
wrote:
> Also, I'm unable to run the swift-bench with keystone.
>
Hrm... That was supposed to be fixed with this bug:
https://bugs.launchpad.net/swift/+bug/1011727
My keystone dev instance isn't working at the moment, but I'll see if
I ca
On Mon, Jan 14, 2013 at 11:01 AM, Leander Bessa Beernaert
wrote:
You might also double check that you
That should be fine, but it doesn't have any way of reporting stats
currently. You could use tools like ifstat to look at how much
bandwidth you are using. You can also look at how much cpu the swift
tool is using. Depending on how your data is set up, you could run
several swift-client processes
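One simple way to run several swift-client processes over a fixed data set is to pre-partition the file list, one slice per process; a minimal sketch (the helper name is mine, not part of swiftclient):

```python
def partition(paths, n):
    """Split a list of file paths into n round-robin slices, one per client process."""
    return [paths[i::n] for i in range(n)]

# e.g. 10 files spread across 4 uploader processes
chunks = partition(['file-%d' % i for i in range(10)], 4)
# each chunk can then be fed to its own `swift upload` process
```

Round-robin slicing keeps the slices balanced even when file sizes cluster by name.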
Also, I'm unable to run the swift-bench with keystone.
I always get this error:
Traceback (most recent call last):
File "/usr/bin/swift-bench", line 149, in
controller.run()
File "/usr/lib/python2.7/dist-packages/swift/common/bench.py", line 159,
in run
puts = BenchPUT(self.logger, s
I currently have 4 machines running 10 clients each, with each client
uploading 1/40th of the data. Running more than 40 simultaneous clients
starts to severely affect Keystone's ability to handle these operations.
On Mon, Jan 14, 2013 at 4:58 PM, Chuck Thier wrote:
I'm currently using the swift client to upload files, would you recommend
another approach?
On Mon, Jan 14, 2013 at 4:43 PM, Chuck Thier wrote:
Using swift stat probably isn't the best way to determine cluster
performance, as those stats are updated async, and could be delayed
quite a bit as you are heavily loading the cluster. It also might be
worthwhile to use a tool like swift-bench to test your cluster to make
sure it is properly set up.
I'm getting around 5-6.5 GB a day written to Swift. I calculated this by
calling "swift stat && sleep 60s && swift stat" and extrapolating from the
difference between the two readings.
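The extrapolation described above (two `swift stat` samples a minute apart, scaled up to a day) can be sketched as follows; the function name is mine:

```python
def gb_per_day(bytes_before, bytes_after, interval_s=60):
    """Extrapolate daily throughput from two byte counters taken interval_s apart."""
    delta = bytes_after - bytes_before
    return delta * 86400.0 / interval_s / 1e9

# e.g. ~4.5 MB written in one minute extrapolates to ~6.5 GB/day
```

Note the async container/account stat updates mentioned below mean this is a rough lower bound under load, not an exact rate.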
Currently I'm resetting swift with a node size of 64, since 90% of the
files are less than 70KB
Hey Leander,
Can you post what performance you are getting? If they are all
sharing the same GigE network, you might also check that the links
aren't being saturated, as it is pretty easy to saturate pushing 200k
files around.
--
Chuck
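As a rough sanity check on the saturation point mentioned above, the back-of-envelope arithmetic looks like this (1 GigE carries at most ~125 MB/s of payload; the numbers are illustrative, not measured):

```python
def line_rate_seconds(total_bytes, link_bits_per_s=1e9, replicas=3):
    """Lower bound on transfer time if a single GigE link carried every replica."""
    return total_bytes * 8 * replicas / link_bits_per_s

# 200 GB with 3 replicas over one GigE link: at least ~4800 s (~80 min)
```

In practice small objects add per-request overhead well beyond the raw line rate, so saturation shows up earlier than this bound suggests.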
On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert wrote:
Well, I've fixed the node size and disabled all the replicator and
auditor processes. However, it is even slower now than it was before :/.
Any suggestions?
On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert <
leande...@gmail.com> wrote:
Ok, thanks for all the tips/help.
Regards,
Leander
On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen <
robert.vanleeu...@spilgames.com> wrote:
Allow me to rephrase. I've read somewhere (can't remember where) that it
would be faster to upload files if they were uploaded to separate
containers. This was suggested for a standard swift installation with a
certain replication factor. Since I'll be uploading the files with the
replicators
I'm not sure what your question is, but maybe this helps.
In short: the replication daemon is "walking" across your files to check
if any files need to be replicated
I see. With replication switched off during upload, does inserting into
various containers speed up the process or is it irrelevant?
On Mon, Jan 14, 2013 at 1:49 PM, Robert van Leeuwen <
robert.vanleeu...@spilgames.com> wrote:
> By stopping, do you mean halt the service (kill the process) or is it a
> change in the configuration file?
Just halt the service.
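Halting the consistency daemons for the duration of the bulk load is a sketch like the following (`swift-init` service names assumed from a standard install; verify against your packaging):

```shell
# stop the background consistency daemons during the upload
swift-init object-replicator stop
swift-init object-auditor stop
swift-init container-replicator stop
swift-init account-replicator stop
```

Remember to start them again afterwards, or the cluster will never heal under-replicated objects.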
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://lau
According to the info below, I think the current size is 256, right? If I
format the storage partition, will that automatically clear all the
contents from the storage, or do I need to clean something else as well?
Output from xfs_info:
meta-data=/dev/sda3 isize=256 agcount=4, agsize
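If you do reformat to change the inode size, it is a single mkfs flag; a sketch, assuming /dev/sda3 is dedicated to swift storage and its contents are disposable (mkfs.xfs destroys everything on the partition, so nothing else needs cleaning):

```shell
# reformat the storage partition with a different XFS inode size
mkfs.xfs -f -i size=512 /dev/sda3
```

Note that mkfs.xfs enforces a minimum inode size of 256 bytes, so values smaller than that will be rejected.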
By stopping, do you mean halt the service (kill the process) or is it a
change in the configuration file?
On Mon, Jan 14, 2013 at 1:20 PM, Robert van Leeuwen <
robert.vanleeu...@spilgames.com> wrote:
On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert <leande...@gmail.com> wrote:
I forgot to mention that I'm also using the suggestions mentioned here:
http://docs.openstack.org/developer/swift/deployment_guide.html#general-system-tuning
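The general system tuning section linked above largely comes down to a few sysctl settings; a sketch of the commonly cited ones from that era of the guide (verify the exact keys and values against the current document):

```ini
# /etc/sysctl.conf fragment, per the swift deployment guide
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 0
```

Apply with `sysctl -p`; the tcp_tw settings matter most with many short-lived client connections.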
On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert <
leande...@gmail.com> wrote:
Hello all,
I'm trying to upload 200GB of 200KB files to Swift. I'm using 4 clients
(each hosted on a different machine) with 10 threads each uploading files
using the official python-swiftclient. Each thread is uploading to a
separate container.
I have 5 storage nodes and 1 proxy node. The nodes
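For what it's worth, the per-thread upload described above can be sketched as one `swift` CLI call per file, each thread owning its own container (the `-A/-U/-K` flags are the tool's v1.0-auth style, which also fits tempauth; the endpoint and credentials here are placeholders):

```python
import subprocess

def swift_upload_cmd(auth_url, user, key, container, path):
    """Build one `swift upload` invocation; each thread gets its own container."""
    return ['swift', '-A', auth_url, '-U', user, '-K', key,
            'upload', container, path]

cmd = swift_upload_cmd('http://proxy:8080/auth/v1.0',
                       'account:user', 'secret', 'container-0', 'file.dat')
# subprocess.call(cmd)  # would run the upload against a live cluster
```

With keystone auth the equivalent would use the tool's `--os-*` options instead.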