Re: [Openstack] swift storage, getting it working

2013-07-12 Thread Kuo Hugo
Hi Alex ,

Did you re-check the drive information in the ring?
Would you like to show it?
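
For reference, the device table in each ring can be dumped with swift-ring-builder
(a sketch; paths assume the usual /etc/swift location):

$> swift-ring-builder /etc/swift/account.builder
$> swift-ring-builder /etc/swift/container.builder
$> swift-ring-builder /etc/swift/object.builder

Run with just the builder file, it prints the partition/replica settings and each
device's id, zone, ip:port, device name, weight and assigned partition count.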

+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/7/12 Axel Christiansen 

> Hello.
>
>
> my issue is solved. What did I do wrong, or get wrong from Google? ;)
>
> I mixed up the default ports when building the rings.
> The proxy-server options "allow_account_management = true" and
> "account_autocreate = true" were set to false.
>
>
> All the best.
> Axel
>
>
>
>
> Am 12.07.13 12:50, schrieb Axel Christiansen:
> >
> >
> > Thank you. That looks all right. Switching to user swift on a storage
> > node, cd-ing to a mountpoint (/srv/node/sdb1/) and creating a file
> > works. I checked the mount points and rights twice.
> >
> >
> > Here is a little larger snippet from the log server:
> > http://paste.openstack.org/show/40222/
> >
> >
> > Does someone have another hint? What should I check next?
> >
> >
> > Thx List. Axel
> >
> >
> >
> >
> > Am 12.07.13 10:58, schrieb Kuo Hugo:
> >> Agree with Jonathan +1
> >>
> >> Change the owner of disk mount point to the relevant user which you set
> >> in /etc/swift/*.
> >>
> >> +Hugo Kuo+
> >> h...@swiftstack.com
> >> tonyt...@gmail.com
> >> +886 935004793
> >>
> >>
> >> 2013/7/12 Jonathan Lu <jojokur...@gmail.com>
> >>
> >> Hi,
> >> I once hit this problem because I forgot to change the owner of
> >> the directory of the mounted device to swift:swift.
> >>
> >>
> >> On 2013/7/12 16:44, Axel Christiansen wrote:
> >>
> >> Hello List,
> >>
> >>
> >> I got stuck getting a Swift store running. The base components, a proxy and
> >> some storage nodes, are prepared. The Keystone service is up and seems to be
> >> working OK. Authentication works.
> >>
> >>
> >> When trying to create a container, this happens:
> >>
> >> swift -v -s -V 2.0 -A http://10.42.44.206:5000/v2.0 -U
> demo:admin -K
> >> XZ5OOQSKWSNJ post tesadfdsafds
> >> Container PUT failed:
> >> https://cs1.internet4you.com:443/v1/AUTH_adb1bcba4b2548589b67c8aee6be09fb/tesadfdsafds
> >> 404 Not Found  [first 60 chars of response] Not
> >> FoundThe resource could not be found.<
> >>
> >>
> >> On the storage nodes:
> >> Jul 12 10:33:55 sn04 object-server 10.42.45.203 - -
> >> [12/Jul/2013:08:33:55 +] "HEAD
> >> /sdz1/80228/AUTH_adb1bcba4b2548589b67c8aee6be09fb" 400 63 "-"
> >> "tx169c1f37bcee47e083eff0f6916f9392" "-" 0.0002
> >>
> >> a log snippet.
> >> http://paste.openstack.org/show/40211/
> >>
> >>
> >> It would be really nice if someone could point me in the right direction.
> >> Where should I dig?
> >>
> >> Thx, Axel
> >>
> >>
> >
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> >
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swift storage, getting it working

2013-07-12 Thread Kuo Hugo
Agree with Jonathan +1

Change the owner of the disk mount point to the relevant user which you set
in /etc/swift/*.
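
For example (a sketch; /srv/node and swift:swift are the common defaults, so adjust
to the device path and the user named in your configs):

$> sudo chown -R swift:swift /srv/node
$> ls -ld /srv/node/*                          # verify owner/group on every mounted device
$> grep '^user' /etc/swift/object-server.conf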

+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/7/12 Jonathan Lu 

> Hi,
> I once hit this problem because I forgot to change the owner of the
> directory of the mounted device to swift:swift.
>
>
> On 2013/7/12 16:44, Axel Christiansen wrote:
>
>> Hello List,
>>
>>
>> I got stuck getting a Swift store running. The base components, a proxy and
>> some storage nodes, are prepared. The Keystone service is up and seems to be
>> working OK. Authentication works.
>>
>>
>> When trying to create a container, this happens:
>>
>> swift -v -s -V 2.0 -A http://10.42.44.206:5000/v2.0 -U demo:admin -K
>> XZ5OOQSKWSNJ post tesadfdsafds
>> Container PUT failed:
>> https://cs1.internet4you.com:443/v1/AUTH_adb1bcba4b2548589b67c8aee6be09fb/tesadfdsafds
>> 404 Not Found  [first 60 chars of response] Not
>> FoundThe resource could not be found.<
>>
>>
>> On the storage nodes:
>> Jul 12 10:33:55 sn04 object-server 10.42.45.203 - -
>> [12/Jul/2013:08:33:55 +] "HEAD
>> /sdz1/80228/AUTH_adb1bcba4b2548589b67c8aee6be09fb" 400 63 "-"
>> "tx169c1f37bcee47e083eff0f6916f9392" "-" 0.0002
>>
>> a log snippet.
>> http://paste.openstack.org/show/40211/
>>
>>
>> It would be really nice if someone could point me in the right direction.
>> Where should I dig?
>>
>> Thx, Axel
>>
>>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] raising network traffic on the storage node

2013-07-08 Thread Kuo Hugo
Hi Klaus,

Would you please grep your swift log for object-replicator?  thx
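
For example, assuming syslog is the default destination (adjust the path to wherever
your Swift logs end up):

$> grep object-replicator /var/log/syslog | tail -n 50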



+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/7/8 Klaus Schürmann 

> On Monday some more mailboxes started storing their mail in the object storage.
> But that only made the increase steeper.
>
> Traffic Storagenode: http://www.schuermann.net/temp/storagenode2.png
> Traffic Proxyserver: http://www.schuermann.net/temp/proxyserver2.png
>
>
> -----Original Message-----
> From: Peter Portante [mailto:peter.a.porta...@gmail.com]
> Sent: Monday, July 8, 2013 16:04
> To: Klaus Schürmann
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] [SWIFT] raising network traffic on the storage node
>
> Can you zoom in past the spike, most recent 2 or three weeks and see
> how it looks?
>
> My guess is that the proxy traffic is also rising.
>
> On Mon, Jul 8, 2013 at 9:50 AM, Klaus Schürmann
>  wrote:
> > Hi,
> >
> > I use a swift storage as a mail-store. Now I have about  1.000.000
> objects
> > stored in the cluster.
> >
> >
> >
> > I'm wondering about the rising network traffic on my storage nodes. The
> > traffic from the proxy-server shows a normal pattern.
> >
> >
> >
> > Traffic Storagenode: http://www.schuermann.net/temp/storagenode.png
> >
> > Traffic Proxyserver: http://www.schuermann.net/temp/proxyserver.png
> >
> >
> >
> > Can someone explain such behavior?
> >
> >
> >
> > Thanks
> >
> > Klaus
> >
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> >
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift Client

2013-07-04 Thread Kuo Hugo
--- Ones I have used ---
1) CyberDuck
2) Owncloud  (What's the problem in your test?)
3) Gladinet client
4) SwiftStack Web Console
5) Most AWS S3 client tools, but you need to enable Swift3 middleware
support (see the sketch below)
6) OpenStack DashBoard
7) Maldivica gateway

-- Never used, but should work --
8) SME
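
For item 5, a rough sketch of what enabling the Swift3 middleware can look like in
proxy-server.conf (this assumes the separate swift3 package is installed and tempauth
is used; with Keystone an s3token-style filter is also needed, so treat it only as a
starting point):

[pipeline:main]
pipeline = healthcheck cache swift3 tempauth proxy-server

[filter:swift3]
use = egg:swift3#swift3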

Hope it helps


+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/7/5 CHABANI Mohamed El Hadi 

> Hi people,
>
> I want to use my Swift All In One with a graphical client to put /
> retrieve objects. I heard that we can use Cyberduck (but there is a problem
> with the port) or ownCloud as a Swift client, but I didn't find good references.
>
> If anyone could help me or suggest other clients, it would be really
> great.
>
> Thank you
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Design] Why/how object storage(Swift) better than scale-out NAS?

2013-07-03 Thread Kuo Hugo
REF:
http://www.quora.com/What-features-differentiate-HDFS-and-OpenStack-Object-Storage
by Chuck Thier

While there are some similarities between HDFS and OpenStack Object Storage
(Swift), the overall designs of the systems are very different.

1.  HDFS uses a central system to maintain file metadata (the Namenode), whereas
in Swift the metadata is distributed and replicated across the cluster.
Having a central meta-data system is a single point of failure for HDFS,
and makes it more difficult to scale to very large sizes.

2.  Swift is designed with multi-tenancy in mind, whereas HDFS has no notion
of multi-tenancy.

3.  HDFS is optimized for larger files (as is typical for processing data),
whereas Swift is designed to store files of any size.

4.  Files in HDFS are write-once and can only have one writer at a time;
in Swift, files can be written many times, and under concurrency the last
write wins.

5.  HDFS is written in Java, whereas Swift is written in Python.

TLDR: HDFS is designed to store a medium number of large files to support
data processing, whereas Swift is designed as a more generic storage solution
to reliably store very large numbers of files of varying sizes.

(HDFS Architecture information attained from
http://hadoop.apache.org/hdfs/do...
)

Hope it helps

+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/7/3 Li, Leon 

> Hi,
>
> I have googled and found some answers, but I think they are not to the
> point. Why did the foundation choose object storage rather than scale-out NAS?
>
> I see some points about the benefits of object storage (Swift):
>
> · Storing billions of files.
>
> · Storing petabytes (millions of gigabytes) of data.
>
> · Using cheap servers.
>
> · Keeping several copies of each file.
>
> However, a scale-out NAS could also have these benefits if you build the
> scale-out NAS with an open-source cluster FS (for example HDFS), just like many
> Internet companies did.
>
> Leon
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift cleaning tenant after deletion on Keystone

2013-06-20 Thread Kuo Hugo
Hi Heiko,

No, Swift won't clean up by itself: the objects won't be deleted when the
tenant is deleted in Keystone.
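
If you want to clean everything up first, a sketch with the swift client (the auth
URL and credentials are placeholders for the tenant in question):

$> swift -V 2.0 -A http://keystone:5000/v2.0 -U tenant:user -K password delete --all
$> swift -V 2.0 -A http://keystone:5000/v2.0 -U tenant:user -K password stat

"delete --all" removes every container and object in that account; the stat afterwards
should report 0 containers and 0 objects. Only then delete the tenant in Keystone.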

Hugo

+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/6/20 Heiko Krämer 

> Heyho guys,
>
> I've a short question because I can't find anything in the docs.
>
> Will Swift clean up after itself if I delete a tenant in Keystone?
>
> Or do I need to ensure that all files and all buckets/containers are
> deleted on Swift before the tenant will be deleted on Keystone?
>
> Greetings and thx
> Heiko
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift Object Storage authentication

2013-06-19 Thread Kuo Hugo
Hi CHABANI ,

Would you please show me the proxy-server.conf ?

You can paste it on http://paste.openstack.org/ or in a gist.

Cheers


+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/6/19 CHABANI Mohamed El Hadi 

> Hi all,
>
> I'm trying to install Swift Object Storage according
> http://docs.openstack.org/grizzly/openstack-compute/install/apt/content/ch_installing-openstack-object-storage.html
>
> when i try to validate my installation with :
>
> *swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U swift:swift -K swift stat*
>
> i get : "*Unauthorised. Check username, password and tenant name/id*"
>
> I tried different possibilities for the username and password
> (admin, demo...) but nothing is working. The username and password should
> be the same as in the proxy server, no? I don't know if I missed other
> things; I'm new to Swift.
>
> i attached here my proxy-server.conf for more details.
>
> Thanks for your help.
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Cache pressure tuning

2013-06-17 Thread Kuo Hugo
Hi Huang,

Storage nodes will eventually run out of memory for caching inodes, right?
Have you ever measured the upper limit of the caching capacity of your
storage nodes?


Hi Jonathan,

The default reclaim time is 7 days. Did you wait for 7 days, or just
change the setting to 1 minute in the conf file?
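
For reference, that window is the reclaim_age option (in seconds) in the replicator
section; a sketch of a short test setting in /etc/swift/object-server.conf (604800,
i.e. 7 days, is the default; 60 is only for experiments, not production):

[object-replicator]
reclaim_age = 60

The account and container replicators have the same option in their own config files.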

Hugo

+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/6/18 Jonathan Lu 

>  Hi Hugo,
> I know the tombstone mechanism. My understanding is that after the reclaim
> time, the xxx.tombstone files will be deleted entirely. Is that right? Maybe I
> misunderstand the doc :( ...
> We tried to "cool down" the Swift system (just wait for the reclaiming)
> and test, but the result is not satisfying.
>
> Thanks,
> Jonathan Lu
>
>
> On 2013/6/18 11:04, Kuo Hugo wrote:
>
> Hi Jonathan ,
>
>  How did you perform "delete all the objects in the storage" ?  Those
> deleted objects still consume inodes in tombstone status until the reclaim
> time.
> Would you mind to compare the result of $> sudo cat /proc/slabinfo | grep
> xfs   ,  before/after set the vfs_cache_pressure
>
>  spongebob@patrick1:~$ sudo cat /proc/slabinfo | grep xfs
> xfs_ili70153  70182216   181 : tunables00
>0 : slabdata   3899   3899  0
>  xfs_inode 169738 170208   1024   164 : tunables00
>  0 : slabdata  10638  10638  0
> xfs_efd_item  60 60400   202 : tunables000
> : slabdata  3  3  0
> xfs_buf_item 234234224   181 : tunables000
> : slabdata 13 13  0
> xfs_trans 28 28280   141 : tunables000
> : slabdata  2  2  0
> xfs_da_state  32 32488   162 : tunables000
> : slabdata  2  2  0
> xfs_btree_cur 38 38208   191 : tunables000
> : slabdata  2  2  0
> xfs_log_ticket40 40200   201 : tunables000
> : slabdata  2  2  0
>
>
>  Hi Robert,
> The performance degradation still there even only main swift workers are
> running in storage node. ( stop replicator/updater/auditor ). In my
> knowing.
> I'll check xs_dir_lookup and xs_ig_missed here. Thanks
>
>
>
>
>
>
>
>  +Hugo Kuo+
> h...@swiftstack.com
>  tonyt...@gmail.com
>  +886 935004793
>
>
> 2013/6/18 Jonathan Lu 
>
>> On 2013/6/17 18:59, Robert van Leeuwen wrote:
>>
>>>  I'm facing the issue about the performance degradation, and once I
>>>> glanced that changing the value in /proc/sys
>>>> /vm/vfs_cache_pressure will do a favour.
>>>> Can anyone explain to me whether and why it is useful?
>>>>
>>> Hi,
>>>
>>> When this is set to a lower value the kernel will try to keep the
>>> inode/dentry cache longer in memory.
>>> Since the swift replicator is scanning the filesystem continuously it
>>> will eat up a lot of iops if those are not in memory.
>>>
>>> To see if a lot of cache misses are happening, for xfs, you can look at
>>> xs_dir_lookup and xs_ig_missed.
>>> ( look at http://xfs.org/index.php/Runtime_Stats )
>>>
>>> We greatly benefited from setting this to a low value but we have quite
>>> a lot of files on a node ( 30 million)
>>> Note that setting this to zero will result in the OOM killer killing the
>>> machine sooner or later.
>>> (especially if files are moved around due to a cluster change ;)
>>>
>>> Cheers,
>>> Robert van Leeuwen
>>>
>>
>>  Hi,
>> We set this to a low value(20) and the performance is better than
>> before. It seems quite useful.
>>
>> According to your description, this issue is related with the object
>> quantity in the storage. We delete all the objects in the storage but it
>> doesn't help anything. The only method to recover is to format and re-mount
>> the storage node. We try to install swift on different environment but this
>> degradation problem seems to be an inevitable one.
>>
>> Cheers,
>> Jonathan Lu
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Cache pressure tuning

2013-06-17 Thread Kuo Hugo
Hi Jonathan ,

How did you perform "delete all the objects in the storage" ?  Those
deleted objects still consume inodes in tombstone status until the reclaim
time.
Would you mind comparing the result of $> sudo cat /proc/slabinfo | grep
xfs before/after setting vfs_cache_pressure?

spongebob@patrick1:~$ sudo cat /proc/slabinfo | grep xfs
xfs_ili            70153  70182    216   18    1 : tunables    0    0    0 : slabdata   3899   3899      0
xfs_inode         169738 170208   1024   16    4 : tunables    0    0    0 : slabdata  10638  10638      0
xfs_efd_item          60     60    400   20    2 : tunables    0    0    0 : slabdata      3      3      0
xfs_buf_item         234    234    224   18    1 : tunables    0    0    0 : slabdata     13     13      0
xfs_trans             28     28    280   14    1 : tunables    0    0    0 : slabdata      2      2      0
xfs_da_state          32     32    488   16    2 : tunables    0    0    0 : slabdata      2      2      0
xfs_btree_cur         38     38    208   19    1 : tunables    0    0    0 : slabdata      2      2      0
xfs_log_ticket        40     40    200   20    1 : tunables    0    0    0 : slabdata      2      2      0
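
For reference, a sketch of checking and lowering the value (20 here just mirrors
Jonathan's test value):

$> cat /proc/sys/vm/vfs_cache_pressure
$> sudo sysctl -w vm.vfs_cache_pressure=20
$> echo 'vm.vfs_cache_pressure = 20' | sudo tee -a /etc/sysctl.conf   # persist across reboots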


Hi Robert,
The performance degradation is still there even when only the main Swift
workers are running on the storage node (replicator/updater/auditor stopped),
as far as I know.
I'll check xs_dir_lookup and xs_ig_missed here. Thanks







+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/6/18 Jonathan Lu 

> On 2013/6/17 18:59, Robert van Leeuwen wrote:
>
>>> I'm facing an issue with performance degradation, and I once read
>>> that changing the value in /proc/sys/vm/vfs_cache_pressure
>>> will help.
>>> Can anyone explain to me whether and why it is useful?
>>>
>> Hi,
>>
>> When this is set to a lower value the kernel will try to keep the
>> inode/dentry cache longer in memory.
>> Since the swift replicator is scanning the filesystem continuously it
>> will eat up a lot of iops if those are not in memory.
>>
>> To see if a lot of cache misses are happening, for xfs, you can look at
>> xs_dir_lookup and xs_ig_missed.
>> ( look at http://xfs.org/index.php/Runtime_Stats )
>>
>> We greatly benefited from setting this to a low value but we have quite a
>> lot of files on a node ( 30 million)
>> Note that setting this to zero will result in the OOM killer killing the
>> machine sooner or later.
>> (especially if files are moved around due to a cluster change ;)
>>
>> Cheers,
>> Robert van Leeuwen
>>
>
> Hi,
> We set this to a low value(20) and the performance is better than
> before. It seems quite useful.
>
> According to your description, this issue is related to the object
> quantity in the storage. We deleted all the objects in the storage but it
> didn't help at all. The only method to recover is to format and re-mount
> the storage node. We tried to install Swift in different environments but this
> degradation problem seems to be inevitable.
>
> Cheers,
> Jonathan Lu
>
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Another basic Swift question

2013-06-13 Thread Kuo Hugo
Hi Mark,

Well, the ring without a rebalance will not affect anything.
With such an update, no partitions have been assigned to the new
devices. The partition count of a new device will be 0, which means
no objects will be mapped to these new devices.

In the case of adding a new server (devices) to the ring, it should still
work properly.
What you need is to understand the mechanism of the replicator and the theory
of partitions in Swift.

I have to point out a key concept, the "partition". It's a "logical partition"
in the Swift layer instead of a real partition on disk.

When a partition is assigned to a new device, it's much like your parking
slot changing from the first floor to the second floor. Your car won't
be destroyed; it just waits to be moved to the new place. :)  Hope it helps.
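
For reference, a minimal sketch of the sequence Mark describes (zone, IP, port,
device and weight are made up):

$> swift-ring-builder object.builder add z3-10.0.0.5:6000/sdb1 100
$> swift-ring-builder object.builder              # the new device shows 0 partitions
$> swift-ring-builder object.builder rebalance    # only now are partitions reassigned and the ring file rewritten

Until the rebalance (and pushing the new object.ring.gz to the nodes), requests keep
hashing to the old devices.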

Cheers


+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/6/14 Mark Brown 

>
> When a new server is added to an existing cluster, and I now update the
> ring with the new device, but at the same time, I do NOT rebalance, will
> things work correctly?
>
> I am assuming if I don't rebalance, but I do update the ring, the ring has
> the new partition scheme with the new device information, so new data will
> go to the new device. But at the same time, an existing object which
> previously hashed to a specific partition on a specific server can possibly
> hash to a different partition on a different server, so how do old objects
> get accessed? I do understand I should do the rebalance, and I will at a
> certain point in time, but I wanted to understand the behavior if I update
> the ring and don't do the rebalance
>
>
> Cheers,
> -- Mark
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] grizzly keystone with v1.0 API

2013-06-05 Thread Kuo Hugo
I thought the legacy auth (v1.0) had been removed from Keystone since Essex.


Hope it helps
Hugo

+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/6/5 Axel Christiansen 

> Hello List again,
>
>
> On the Zmanda blog there is a description of how to make a Swift/Keystone
> setup work again via the v1.0 API. Has anyone had success doing this on Grizzly?
>
> I sadly did not.
>
>
> Regards, Axel
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] grizzly swift/keystone clients

2013-06-05 Thread Kuo Hugo
Gladinet might be the one you're looking for.

Hope it helps
Hugo

+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/6/5 Axel Christiansen 

> Hello List,
>
>
> What GUI clients like Cyberduck exist which can talk the v2.0 API?
>
>
> Thank you
>
> Axel
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift performance issues with requests

2013-06-03 Thread Kuo Hugo
Hi Klaus,

How's the disk space usage now?
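
For example (the second command assumes the recon middleware is enabled on the
object servers):

$> df -h /srv/node/*
$> swift-recon -d          # cluster-wide disk usage report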

Cheers

+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/5/31 Klaus Schürmann 

> Hi,
>
> when I test my new swift cluster I get a strange behavior with GET and PUT
> requests.
> Most time it is really fast. But sometimes it takes a long time to get the
> data.
> Here is an example with the same request which took one time 17 seconds:
>
> May 31 10:33:08 swift-proxy1 proxy-logging 10.4.2.99 10.4.2.99
> 31/May/2013/08/33/08 GET /v1/AUTH_provider1/129450/829188397.31 HTTP/1.0
> 200 - Wget/1.12%20%28linux-gnu%29
> provider1%2CAUTH_tke6408efec4b2439091fb6f4e75911602 - 283354 -
> tx2804381fef91455dabf6c9fd0edf4206 - 0.0546 -
> May 31 10:33:08 swift-proxy1 proxy-logging 10.4.2.99 10.4.2.99
> 31/May/2013/08/33/08 GET /v1/AUTH_provider1/129450/829188397.31 HTTP/1.0
> 200 - Wget/1.12%20%28linux-gnu%29
> provider1%2CAUTH_tke6408efec4b2439091fb6f4e75911602 - 283354 -
> tx90025e3259d74b9faa8f17efaf85b104 - 0.0516 -
> May 31 10:33:08 swift-proxy1 proxy-logging 10.4.2.99 10.4.2.99
> 31/May/2013/08/33/08 GET /v1/AUTH_provider1/129450/829188397.31 HTTP/1.0
> 200 - Wget/1.12%20%28linux-gnu%29
> provider1%2CAUTH_tke6408efec4b2439091fb6f4e75911602 - 283354 -
> tx942d79f78ee345138df6cd87bac0f860 - 0.0942 -
> May 31 10:33:08 swift-proxy1 proxy-logging 10.4.2.99 10.4.2.99
> 31/May/2013/08/33/08 GET /v1/AUTH_provider1/129450/829188397.31 HTTP/1.0
> 200 - Wget/1.12%20%28linux-gnu%29
> provider1%2CAUTH_tke6408efec4b2439091fb6f4e75911602 - 283354 -
> tx73f053e15ed345caad38a6191fe7f196 - 0.0584 -
> May 31 10:33:08 swift-proxy1 proxy-logging 10.4.2.99 10.4.2.99
> 31/May/2013/08/33/08 GET /v1/AUTH_provider1/129450/829188397.31 HTTP/1.0
> 200 - Wget/1.12%20%28linux-gnu%29
> provider1%2CAUTH_tke6408efec4b2439091fb6f4e75911602 - 283354 -
> txd4a3a4bf3f384936a0bc14dbffddd275 - 0.1020 -
> May 31 10:33:26 swift-proxy1 proxy-logging 10.4.2.99 10.4.2.99
> 31/May/2013/08/33/26 GET /v1/AUTH_provider1/129450/829188397.31 HTTP/1.0
> 200 - Wget/1.12%20%28linux-gnu%29
> provider1%2CAUTH_tke6408efec4b2439091fb6f4e75911602 - 283354 -
> txd8c6b34b8e41460bb2c5f3f4b6def0ef - 17.7330 -   <<
> May 31 10:33:26 swift-proxy1 proxy-logging 10.4.2.99 10.4.2.99
> 31/May/2013/08/33/26 GET /v1/AUTH_provider1/129450/829188397.31 HTTP/1.0
> 200 - Wget/1.12%20%28linux-gnu%29
> provider1%2CAUTH_tke6408efec4b2439091fb6f4e75911602 - 283354 -
> tx21aaa822f8294d9592fe04b3de27c98e - 0.0226 -
> May 31 10:33:26 swift-proxy1 proxy-logging 10.4.2.99 10.4.2.99
> 31/May/2013/08/33/26 GET /v1/AUTH_provider1/129450/829188397.31 HTTP/1.0
> 200 - Wget/1.12%20%28linux-gnu%29
> provider1%2CAUTH_tke6408efec4b2439091fb6f4e75911602 - 283354 -
> txcabe6adf73f740efb2b82d479a1e6b20 - 0.0385 -
> May 31 10:33:26 swift-proxy1 proxy-logging 10.4.2.99 10.4.2.99
> 31/May/2013/08/33/26 GET /v1/AUTH_provider1/129450/829188397.31 HTTP/1.0
> 200 - Wget/1.12%20%28linux-gnu%29
> provider1%2CAUTH_tke6408efec4b2439091fb6f4e75911602 - 283354 -
> txc1247a1bb6c04bd3b496b3b986373170 - 0.0247 -
> May 31 10:33:26 swift-proxy1 proxy-logging 10.4.2.99 10.4.2.99
> 31/May/2013/08/33/26 GET /v1/AUTH_provider1/129450/829188397.31 HTTP/1.0
> 200 - Wget/1.12%20%28linux-gnu%29
> provider1%2CAUTH_tke6408efec4b2439091fb6f4e75911602 - 283354 -
> txdf295a88e513443393992f37785f8aed - 0.0144 -
> May 31 10:33:26 swift-proxy1 proxy-logging 10.4.2.99 10.4.2.99
> 31/May/2013/08/33/26 GET /v1/AUTH_provider1/129450/829188397.31 HTTP/1.0
> 200 - Wget/1.12%20%28linux-gnu%29
> provider1%2CAUTH_tke6408efec4b2439091fb6f4e75911602 - 283354 -
> tx62bb33e8c20d43b7a4c3512232de6fe4 - 0.0125 -
>
> All requests on the storage nodes are below 0.01 sec.
>
> The tested cluster contain one proxy (DELL R420, 16 G RAM, 2 CPU) and 5
> storage-nodes (DELL R720xd, 16 G RAM 2 CPU, 2 HDD). The proxy-server
> configuration:
>
> [DEFAULT]
> log_name = proxy-server
> log_facility = LOG_LOCAL1
> log_level = INFO
> log_address = /dev/log
> bind_port = 80
> user = swift
> workers = 32
> log_statsd_host = 10.4.100.10
> log_statsd_port = 8125
> log_statsd_default_sample_rate = 1
> log_statsd_metric_prefix = Proxy01
> #set log_level = DEBUG
>
> [pipeline:main]
> pipeline = healthcheck cache proxy-logging tempauth proxy-server
>
> [app:proxy-server]
> use = egg:swift#proxy
> allow_account_management = true
> account_autocreate = true
>
> [filter:tempauth]
> use = egg:swift#tempauth
> user_provider1_ =  .xxx http://10.4.100.1/v1/AUTH_provider1
> log_name = tempauth
> log_facility = LOG_LOCAL2
> log_level = INFO
> log_address = /dev/log
>
> [filter:cache]
> use = egg:swift#memcache
> memcache_servers = 10.12.0.2:11211,10.12.0.3:11211
> set log_name = cache
>
> [filter:catch_errors]
> use = egg:swift#catch_errors
>
> [filter:healthcheck]
> use = egg:swift#healthcheck
>
> [filter:proxy-logging]
> use = egg:swift#proxy_logging
> access_log_name = proxy-logging
> access_log_facility = LOG_LOCAL3
> access_log_level = DEBUG
> access_log_address = /dev/log
>
>
> Can someone explain su

Re: [Openstack] Object Replication fails

2013-04-21 Thread Kuo Hugo
Hi Philip ,

Which Swift version are you running in your cluster?



+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/4/13 Philip 

> Hi,
>
> I just tried to add two new servers into the ring. Only the containers
> were replicated to the new servers but there are no objects being
> replicated. The disks don't even have an objects folder yet. On the old
> servers there are plenty of log entries that indicate that something is
> going wrong:
>
> Apr 13 11:14:45 z1-n1 object-replicator Bad rsync return code: ['rsync',
> '--recursive', '--whole-file', '--human-readable', '--xattrs',
> '--itemize-changes', '--ignore-existing', '--timeout=30',
> '--contimeout=30', '/srv/node/sdq1/objects/80058/b70',
> '/srv/node/sdq1/objects/80058/ff9', '/srv/node/sdq1/objects/80058/5d3',
> '/srv/node/sdq1/objects/80058/389', '/srv/node/sdq1/objects/80058/473',
> '/srv/node/sdq1/objects/80058/81a', '/srv/node/sdq1/objects/80058/a67',
> '/srv/node/sdq1/objects/80058/b72', '/srv/node/sdq1/objects/80058/8f5',
> '/srv/node/sdq1/objects/80058/ed3', '/srv/node/sdq1/objects/80058/8db',
> '/srv/node/sdq1/objects/80058/4e5', '/srv/node/sdq1/objects/80058/fbf',
> '/srv/node/sdq1/objects/80058/5cc', '/srv/node/sdq1/objects/80058/318',
> '172.16.100.4::object/sdg1/objects/80058'] -> 12
>
> Apr 13 11:14:46 z1-n1 object-replicator rsync: mkdir "/sdl1/objects/75331"
> (in object) failed: No such file or directory (2)
>
> Apr 13 11:14:46 z1-n1 object-replicator rsync error: error in file IO
> (code 11) at main.c(605) [Receiver=3.0.9]
> Apr 13 11:14:46 z1-n1 object-replicator rsync: read error: Connection
> reset by peer (104)
>
> What could be the reason for this?
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swift: Account not found[grizzly]

2013-04-09 Thread Kuo Hugo
1) There is no minimum limit currently.
2) Did you set the following options to true?
allow_account_management
https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L61

account_autocreate
https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L69
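
i.e. in proxy-server.conf it looks like this (matching the sample linked above):

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

and then restart the proxy ($> swift-init proxy-server restart).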


+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793


2013/4/9 Liu Wenmao 

> Hi all:
>
> I just installed swift from github, after I configure a proxy node and a
> storage node, and run the stat command, it fails:
> # swift -v -V 2.0 -A http://controller:5000/v2.0 -U service:swift -K
> nsfocus stat
> Account not found
>
> Keystone and disk configuration seem OK; syslog gives:
> Apr  9 13:45:21 node1 account-server AUTH_2755db390fcd4c9bb504242617d5f6a0
> (txn: tx6919d8c66d454e50a9b03deded9b2ec8)
> Apr  9 13:45:21 node1 account-server 20.0.0.1 - - [09/Apr/2013:05:45:21
> +] "HEAD /swr/27113/AUTH_2755db390fcd4c9bb504242617d5f6a0" 404 -
> "tx6919d8c66d454e50a9b03deded9b2ec8" "-" "-" 0.0020 ""
>
> I read the code and found that the server tries to visit the db file:
> /srv/node/swr/accounts/27113/e03/1a7a753448a645fdf2b6bcc7223e5e03, but my
> directory /srv/node/swr/accounts/ is empty, so the server returns a 404 error.
>
> I found that the db file is only created when the server receives a
> REPLICATE request, but I do not know how to generate such a request, or why
> it is not generated automatically.
>
> Moreover, what is the minimum number of storage nodes?
>
> Thanks
> Wenmao Liu
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] swift error

2013-03-31 Thread Kuo Hugo
Hi Beshoy ,

Could you please provide the following information ?


1) The first issue is that Keystone is running on "192.168.5.29", and the
AUTH_URL in swift-client should point to the authentication server (Keystone
in your case). The command should be:

swift -V 2.0 -A http://192.168.5.29:5000/v2.0 -U openstackDemo:admin -K $ADMINPASS stat


2) Which Keystone version are you running now?

3) Try a curl call to the Keystone v2.0 API with the USERNAME/PASSWORD (see the
example after this list). Let me know if you can retrieve the JSON response,
which contains the TOKEN and the Swift service URL.

4) Check the bind port:

$> netstat -antulp | grep 
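
A sketch for item 3, using the Keystone host and the swift service credentials from
your proxy-server.conf (adjust as needed):

$> curl -s -X POST http://192.168.5.29:5000/v2.0/tokens \
     -H 'Content-Type: application/json' \
     -d '{"auth": {"passwordCredentials": {"username": "swift", "password": "swift"}, "tenantName": "service"}}'

The JSON response should contain access.token.id and a serviceCatalog entry of type
object-store with the Swift URLs.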



Hope it helps


2013/3/31 beshoy abdelmaseh 

> I installed Keystone on CentOS -> 192.168.5.29 and stopped iptables on
> CentOS, then I installed Swift on Ubuntu -> 192.168.5.37 where ufw is
> stopped/waiting. Swift is working well: all
> servers (proxy, account, container, object) are working, but when I run this
> command on Ubuntu:
>
> swift -V 2.0 -A http://192.168.5.37:5000/v2.0 -U openstackDemo:admin -K 
> $ADMINPASS stat
>
> error: [Errno 111] ECONNREFUSED
>
>
> proxy-server.conf
>
> [DEFAULT]
> bind_port = 
> user = swift
>
> [pipeline:main]
> pipeline = healthcheck cache authtoken keystone proxy-server
>
> [app:proxy-server]
> use = egg:swift#proxy
> allow_account_management = true
> account_autocreate = true
>
> [filter:keystone]
> paste.filter_factory = keystone.middleware.swift_auth:filter_factory
> operator_roles = Member,admin, swiftoperator
>
> [filter:authtoken]
> paste.filter_factory = keystone.middleware.auth_token:filter_factory
> # Delaying the auth decision is required to support token-less
> # usage for anonymous referrers ('.r:*').
> delay_auth_decision = 10
> service_port = 5000
> service_host = 192.168.5.37
> auth_port = 35357
> auth_host = 192.168.5.37
> auth_protocol = http
> auth_uri = http://192.168.5.37:5000/ 
> auth_token = 012345SECRET99TOKEN012345
> admin_token = 012345SECRET99TOKEN012345
> admin_tenant_name = service
> admin_user = swift
> admin_password = swift
> signing_dir=/tmp/keystone-signing-swift
>
> [filter:cache]
> use = egg:swift#memcache
> set log_name = cache
>
> [filter:catch_errors]
> use = egg:swift#catch_errors
>
> [filter:healthcheck]
> use = egg:swift#healthcheck
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] private cloud

2013-03-28 Thread Kuo Hugo
Forwarding Ralph's reply to the mailing list:

Ralph Lewis wrote (20:35, 14 hours ago):

The OpenStack project I'm trying to do is a school project.

I have to make a cloud computing solution using OpenStack in which, for
example, any teacher could run a Windows or Linux machine with all the
resources allocated, plus other optional extra services for students, or I
don't know what else.

KAN MELEDJE RALPH LEWIS
---
Master's student in Networks & Telecoms (RT), Université de Savoie
Engineering student in Networks & Telecom, ESTEM-Casablanca


2013/3/29 Mark Lehrer 

>
>  Could i Have a complete configuration for openstack running in local?
>> (private cloud)
>>
>
>
> http://lmgtfy.com/?q=openstack+easy+install
>
> I used the Hastexo guide and it was pretty easy.
>
> Mark
>
>
>



-- 
+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] private cloud

2013-03-27 Thread Kuo Hugo
Hi Ralph Lewis ,

The configuration can vary a lot.

Which OpenStack project would you like to try out?

1. For a quick and global view, you can try http://devstack.org/ .
It's not for production, though you can run it in a VM for testing.

2. Each OpenStack project has plenty of configuration options. Which project
do you want: Nova / Glance / Keystone / Swift / Quantum / Cinder?

Let me know if I can do anything for you.


Cheers
Hugo


2013/3/28 Ralph lewis 

> Could I have a complete configuration for OpenStack running locally?
> (private cloud)
>
> *KAN MELEDJE RALPH LEWIS *
> ---
> Master's student in Networks & Telecoms (RT), Université de Savoie
> Engineering student in Networks & Telecom, ESTEM-Casablanca
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift]

2013-03-15 Thread Kuo Hugo
As far as I know, legacy auth was removed from Keystone in the Essex release.





2013/3/15 Gareth 

> Kuo's answer is the point. But v1.0 is not the answer because Keystone
> uses v2.0 now.
> Look at your document: http://:5000/auth/v2.0 and
> http://:5000/v2.0
> have been used.
>
> The second works for me. Try these two yourself.
>
>
> On Fri, Mar 15, 2013 at 11:55 PM, Kuo Hugo  wrote:
>
>> It should be a doc bug in the instruction .
>>
>> The first one is v1.0 auth (legacy auth)
>> The URL suppose to be http://localhost:5000/auth/v1.0
>>
>>
>> Hope it help
>>
>>
>> 2013/3/15 Tomáš Šoltys 
>>
>>> Hi,
>>>
>>> I am following the OpenStack WalkThrough instructions and I am failing
>>> to verify my setup as described here:
>>>
>>> http://docs.openstack.org/folsom/openstack-compute/install/yum/content/verify-swift-installation.html
>>>
>>> In this forum I have found that the instructions are not exactly correct
>>> so I tried what was suggested but without any success.
>>>
>>> Following command always return '404 Not Found'
>>>
>>> curl -k -v -H 'X-Storage-User: service:swift' -H 'X-Storage-Pass:
>>> 12345678' -X 'POST' http://localhost:5000/v2.0/auth
>>>
>>> But when for following it works:
>>>
>>> curl -k -v -X 'POST' http://localhost:5000/v2.0/tokens -d
>>> '{"auth":{"passwordCredentials":{"username":"swift",
>>> "password":"12345678"}, "tenantName":"service"}}' -H 'Content-type:
>>> application/json' -H 'Accept: application/xml'
>>>
>>> What am I missing here?
>>>
>>> Thanks
>>>
>>> Tomáš Šoltys
>>>
>>> tomas.sol...@gmail.com
>>> http://www.range-software.com
>>> (+420) 776-843-663
>>>
>>> ___
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to : openstack@lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help   : https://help.launchpad.net/ListHelp
>>>
>>>
>>
>>
>> --
>> +Hugo Kuo+
>> h...@swiftstack.com
>> tonyt...@gmail.com
>> +886 935004793
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
>
> --
> Gareth
> Cloud Computing, Openstack, Fitness, Basketball
> Novice Openstack contributor
> My promise: if you find any spelling or grammar mistake in my email from
> Mar 1 2013, notify me
> and I'll donate 1$ or 1¥ to an open organization specified by you.
>



-- 
+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift]

2013-03-15 Thread Kuo Hugo
It should be a doc bug in the instructions.

The first one is v1.0 auth (legacy auth).
The URL is supposed to be http://localhost:5000/auth/v1.0
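
For reference, a v1.0-style request is a GET with the X-Storage-* headers rather than
a POST; a sketch reusing the credentials from your first command (it only works if the
endpoint actually still speaks the legacy v1.0 protocol):

$> curl -v -H 'X-Storage-User: service:swift' -H 'X-Storage-Pass: 12345678' \
     http://localhost:5000/auth/v1.0

On success the response carries X-Storage-Url and X-Auth-Token headers.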


Hope it helps


2013/3/15 Tomáš Šoltys 

> Hi,
>
> I am following the OpenStack WalkThrough instructions and I am failing to
> verify my setup as described here:
>
> http://docs.openstack.org/folsom/openstack-compute/install/yum/content/verify-swift-installation.html
>
> In this forum I have found that the instructions are not exactly correct
> so I tried what was suggested but without any success.
>
> The following command always returns '404 Not Found':
>
> curl -k -v -H 'X-Storage-User: service:swift' -H 'X-Storage-Pass:
> 12345678' -X 'POST' http://localhost:5000/v2.0/auth
>
> But the following works:
>
> curl -k -v -X 'POST' http://localhost:5000/v2.0/tokens -d
> '{"auth":{"passwordCredentials":{"username":"swift",
> "password":"12345678"}, "tenantName":"service"}}' -H 'Content-type:
> application/json' -H 'Accept: application/xml'
>
> What am I missing here?
>
> Thanks
>
> Tomáš Šoltys
>
> tomas.sol...@gmail.com
> http://www.range-software.com
> (+420) 776-843-663
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
h...@swiftstack.com
tonyt...@gmail.com
+886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Disk Recommendation - OpenStack Swift

2013-01-29 Thread Kuo Hugo
There's one more important point:

A better monitoring/notification method for the disks.




2013/1/30 Jan Drake 

> A presentation at the first openstack design summit was given in which a
> vendor did a couple of petabyte swift implementations.  In that
> presentation, they explained that in one implementation they used
> enterprise drives and another desktop drives.
>
> Enterprise drive failures were higher due to lack of burn-in and cost was
> higher.
> Desktop drive failures were close to zero as they found a vendor to burn
> them in upon purchase.
>
> The net net here is this:
>
> - Use commodity hardware
> - Plan for failures
> - Use chassis that make it quick/easy to replace drives
>
> Intel is working on a 12 drive array (inexpensive by their terms) that is
> meant to be throw-away... sealed chassis... automatically fails drives over
> and links to other drive arrays.
>
> Cheapest is best for file/object storage.  Block storage may be a
> different matter depending on your requirements.
>
>
> Jan
>
> On 1/29/13 8:42 AM, "Chuck Thier"  wrote:
>
> >Hi John,
> >
> >It would be difficult to recommend a specific drive, because things
> >change so often.   New drives are being introduced all the time.
> >Manufacturers buy their competition and cancel their awesome products.
> > So the short answer is that you really need to test the drives out in
> >your environment and in your use case.  I can pass on some wisdom from
> >our experience.
> >
> >1.  "Enterprise" drives are not worth it.  We have not seen a
> >significant difference between the failure rate of enterprise class
> >drives and commodity drives.  I have heard this as well from other
> >large swift deployers, as well as other large storage providers.  Even
> >if enterprise drives had a significantly less failure rate, the added
> >cost would not be worth it.
> >
> >2.  Be wary of "Green" drives.  The green features on these drives can
> >work against you in a swift cluster (like auto parking heads and
> >spinning down).  If you are going with a green drive, make sure they
> >are well tested, and/or at least have the capability to turn these
> >features off.
> >
> >3.  Go big.  If you can, use 3T or larger drives.  You get a more even
> >distribution and better overall utilization with larger drives.
> >
> >4.  Don't believe everything you read on the internet (including me
> >:))  Test! Test! Test!
> >
> >--
> >Chuck
> >
> >On Mon, Jan 28, 2013 at 7:11 PM, John van Ommen 
> >wrote:
> >> Does anyone on the list have a disk they'd recommend for OpenStack
> >>swift?
> >>
> >> I am looking at hardware from Dell and HP, and I've found that the
> >> disks they offer are very expensive.  For instance, HP's 2TB disk has
> >> a MSRP of over $500, while you can get a Western Digital 2TB 'Red'
> >> disk for $127.
> >>
> >> Is there any reason to opt for the drives offered by Dell or HP?  (I
> >> assume they're re-branded disks from Seagate and WD anyways.)
> >>
> >> Are there any disk SKUs that you'd recommend?
> >>
> >> ___
> >> Mailing list: https://launchpad.net/~openstack
> >> Post to : openstack@lists.launchpad.net
> >> Unsubscribe : https://launchpad.net/~openstack
> >> More help   : https://help.launchpad.net/ListHelp
> >
> >___
> >Mailing list: https://launchpad.net/~openstack
> >Post to : openstack@lists.launchpad.net
> >Unsubscribe : https://launchpad.net/~openstack
> >More help   : https://help.launchpad.net/ListHelp
> >
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Network setup - Swift / keystone location and configuraton?

2013-01-20 Thread Kuo Hugo
Thanks for Chmouel's mention; that's much more flexible with swift-client.

Brian, please have a look at the following reply.

2013/1/19 Brian Ipsen 

>  Hi
>
> ** **
>
> As for the network diagram the one on the referred page (
> http://docs.openstack.org/trunk/openstack-object-storage/admin/content/figures/swift_install_arch.png)
> more or less looks what I plan on doing. I would just put a NAT’ing
> firewall between the public switch and the internet. For security reasons,
> I think it would make more sense to have the Auth node (keystone service)
> located on the private switch – but I am not sure whether it is possible.
>
[Reply]
In your case, you put a NAT server in front of the Swift ecosystem. Keystone
could be located in the private network without any problem, but in this
scenario there's some more work you have to do:
1) Set up NAT for Keystone too, either a DNAT rule or a port redirect in your
firewall. Remember that if you want to use username/password authentication
for Swift, the easiest way is using Keystone or tempauth. Both provide a
token-auth mechanism to determine whether the user currently has access. When
you want to access Swift, you must have a "TOKEN", and the "TOKEN" is managed
by Keystone. By default the token expires after a period, 24 hours as I
recall. The client has to get the token from Keystone first.


> I am still trying to figure out how the different components interact, and
> exactly what the different parameters on the keystone command does. Once I
> get that understanding, things will probably be much easier J
>
[Reply]
Yes, that's the key point. You must understand the workflow.
My assumption is that your proxy pipeline is using authtoken plus the
keystone (swift_auth) middleware.
The full request workflow is:

client sends username/password --> keystone verifies it --> returns token and
service (swift) URL to the client --> client uses the returned URL and token
against the swift proxy --> proxy verifies the token by asking keystone
immediately --> keystone confirms it, with information that includes the role
etc. --> the request passes the token-auth filter --> the role is checked by
the swift-auth middleware --> the operation is performed for the user --> the
result (status) is returned
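
In curl terms the same flow looks roughly like this (a sketch; host names, credentials
and the AUTH_ account are placeholders):

# 1) username/password -> keystone; the JSON response contains the token and the swift public URL
$> curl -s -X POST http://keystone:5000/v2.0/tokens \
     -H 'Content-Type: application/json' \
     -d '{"auth": {"passwordCredentials": {"username": "demo", "password": "secret"}, "tenantName": "demo"}}'

# 2) token + returned URL -> the swift proxy, which validates the token against keystone
$> curl -v -H 'X-Auth-Token: <token from step 1>' http://proxy:8080/v1/AUTH_<tenant_id>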



>
> ** **
>
> Regarding the location of the keystone server – and please correct me, if
> I’m wrong; user authentication is done via the proxy. When a user
> authenticates, I assume that the proxy asks the keystone/auth server –
> instead of the client asks the auth/keystone server directly? If it is the
> proxy that handles the authentication request towards the keystone server –
> then the keystone might as well be located on the private switch on the
> drawing (for enhanced security). Of course, if the keystone service is
> located on the private switch, the IP addresses in the URL’s for the
> endpoint creation will need to match the IP address of the server in this
> network.
>
[Reply]
As described in the previous section, user authentication is done by
Keystone, and token validation is done by the proxy.
If you want to send username/password to Swift directly, yes you can, but you
need to write another middleware for it, and it would be a little complicated.
In the original design, Keystone is accessed by both the client and the proxy.

>
>

>
> ** **
>
> Clients will be located on the internet side on the drawing (again – I
> want to put a NAT’ing firewall between the public switch and what is
> referred to as “internet” on the drawing).
>
[Reply]
Anywhere could be possible


> 
>
> ** **
>
> Maybe I should start digging into the book “OpenStack Cloud Computing
> Cookbook” by Kevin Jackson to see if this can make things clearer for me J
>

Sure , also official documents .
1) play with it
2) IRC
3) mailing list


> 
>
> ** **
>
> Regards
>
> Brian
>

Hope it helps
Cheers
Hugo Kuo

> 
>
> ** **
>
> ** **
>
> *From:* Kuo Hugo [mailto:tonyt...@gmail.com]
> *Sent:* 19. januar 2013 09:58
> *To:* Brian Ipsen
> *Cc:* openstack@lists.launchpad.net
> *Subject:* Re: [Openstack] Network setup - Swift / keystone location and
> configuraton?
>
> ** **
>
> The answer is depends on your service plan . 
>
> ** **
>
> Generally , the IP for keystone is the network which could be accessed
> from client . 
>
> Also , the publicurl / adminurl / internal could be different . 
>
> ** **
>
> Keystone is the auth agent for swift(and all other services) , while you
> produce a request to ask for "services URLs / role / token" with your
> username/password . It will return a bunch of of information . 
>
> In keystone v1.0 legacy auth method , it presents

Re: [Openstack] Network setup - Swift / keystone location and configuraton?

2013-01-19 Thread Kuo Hugo
Would you mind sharing a network diagram of your environment?

Hugo


2013/1/19 Brian Ipsen 

>  Hi
>
> ** **
>
> I am trying to figure out how to build a swift setup with Keystone
> identity management – and have the environment secured by a firewall.
>
> ** **
>
> I expect, that a number of proxy nodes are accessible through the firewall
> (traffic will be NAT’ed). The proxy nodes are connected to a private
> “storage network” (not accessible from the outside) on a second network
> interface. Will the keystone have to be on the “public” side of the proxy
> nodes – or can it be on the “private” side (see
> http://docs.openstack.org/trunk/openstack-object-storage/admin/content/example-object-storage-installation-architecture.html-
>  here it is on the “public” side)
> 
>
> ** **
>
> But I am not quite sure about the configuration of the different service
> when it comes to specifying the different URL’s…
>
> For example, for the Keystone service:
>
> ** **
>
> Assuming, that storage/swift nodes are located in the range
> 172.21.100.20-172.21.100.80, the keystone server on 172.21.100.10 – and the
> proxies on 172.21.100.100-172.21.100.120 (and external
> 10.32.30.10-10.32.30.30). What would be the correct IP’s to use on this
> command ?
>
> keystone service-create --name keystone --type=identity --description
> "Keystone Identity Service"
>
> keystone endpoint-create --region RegionOne --service-id $KEYSVC_ID
> --publicurl 'http://x.x.x.x5000/v2.0' --adminurl '
> http://x.x.x.x:35357/v2.0' --internalurl 'http://x.x.x.x:5000/v2.0'
>
> ** **
>
> And for swift:
>
> keystone service-create --name keystone --type=identity --description
> "Swift Storage Service"
>
> keystone endpoint-create --service-id $SWIFTSVC_ID --publicurl '
> http://x.x.x.x:8080/v1/AUTH_\$(tenant_id)s' --adminurl '
> http://x.x.x.x:8080/v1/AUTH_\$(tenant_id)s ' --internalurl '
> http://x.x.x.x:8080/v1/AUTH_\$(tenant_id)s '
>
> ** **
>
> Regards
>
> Brian
>
> ** **
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Network setup - Swift / keystone location and configuraton?

2013-01-19 Thread Kuo Hugo
The answer depends on your service plan.

Generally, the IP for Keystone is on a network that can be reached from the
client.
Also, the publicurl / adminurl / internalurl can be different.

Keystone is the auth agent for Swift (and all other services): you send a
request asking for "service URLs / role / token" with your
username/password, and it returns a bunch of information.
In the Keystone v1.0 legacy auth method, it presents this as several X- headers.
In Keystone v2.0, it returns a JSON document which includes more
information, such as service URLs; in your case the service type is
object-storage (aka Swift).

The client can parse out the URL it needs.
swift-client uses the publicurl, as far as I know.

[Q] Could I ask a question?

Which network will the client be located on?


For x.x.x.x, you can just fill in the IP that is reachable from the client. If
there's a NAT or LB, you need to point to the NAT entry point or LB IP and
redirect to the service port or internal IP.

keystone endpoint-create --region RegionOne --service-id $KEYSVC_ID
--publicurl 'http://x.x.x.x:5000/v2.0' --adminurl 'http://x.x.x.x:35357/v2.0'
--internalurl 'http://x.x.x.x:5000/v2.0'
keystone endpoint-create --service-id $SWIFTSVC_ID --publicurl '
http://x.x.x.x:8080/v1/AUTH_\$(tenant_id)s' --adminurl '
http://x.x.x.x:8080/v1/AUTH_\$(tenant_id)s ' --internalurl '
http://x.x.x.x:8080/v1/AUTH_\$(tenant_id)s '
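For Brian's layout, one possible arrangement (a sketch only; 10.32.30.10
and 10.32.30.20 are hypothetical NAT mappings for the proxy and for
Keystone's public port, internal traffic stays on 172.21.100.x, and note
that the Swift service itself would normally be registered with
--type=object-store rather than identity):

keystone service-create --name swift --type=object-store --description "Swift Storage Service"

keystone endpoint-create --region RegionOne --service-id $KEYSVC_ID \
 --publicurl 'http://10.32.30.20:5000/v2.0' \
 --adminurl 'http://172.21.100.10:35357/v2.0' \
 --internalurl 'http://172.21.100.10:5000/v2.0'

keystone endpoint-create --region RegionOne --service-id $SWIFTSVC_ID \
 --publicurl 'http://10.32.30.10:8080/v1/AUTH_$(tenant_id)s' \
 --adminurl 'http://172.21.100.100:8080/v1/AUTH_$(tenant_id)s' \
 --internalurl 'http://172.21.100.100:8080/v1/AUTH_$(tenant_id)s'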


2013/1/19 Brian Ipsen 

>  Hi
>
>
> I am trying to figure out how to build a swift setup with Keystone
> identity management – and have the environment secured by a firewall.
>
>
> I expect, that a number of proxy nodes are accessible through the firewall
> (traffic will be NAT’ed). The proxy nodes are connected to a private
> “storage network” (not accessible from the outside) on a second network
> interface. Will the keystone have to be on the “public” side of the proxy
> nodes – or can it be on the “private” side (see
> http://docs.openstack.org/trunk/openstack-object-storage/admin/content/example-object-storage-installation-architecture.html-
>  here it is on the “public” side)
> 
>
>
> But I am not quite sure about the configuration of the different service
> when it comes to specifying the different URL’s…
>
> For example, for the Keystone service:
>
>
> Assuming, that storage/swift nodes are located in the range
> 172.21.100.20-172.21.100.80, the keystone server on 172.21.100.10 – and the
> proxies on 172.21.100.100-172.21.100.120 (and external
> 10.32.30.10-10.32.30.30). What would be the correct IP’s to use on this
> command ?
>
> keystone service-create --name keystone --type=identity --description
> "Keystone Identity Service"
>
> keystone endpoint-create --region RegionOne --service-id $KEYSVC_ID
> --publicurl 'http://x.x.x.x5000/v2.0' --adminurl '
> http://x.x.x.x:35357/v2.0' --internalurl 'http://x.x.x.x:5000/v2.0'
>
>
> And for swift:
>
> keystone service-create --name keystone --type=identity --description
> "Swift Storage Service"
>
> keystone endpoint-create --service-id $SWIFTSVC_ID --publicurl '
> http://x.x.x.x:8080/v1/AUTH_\$(tenant_id)s' --adminurl '
> http://x.x.x.x:8080/v1/AUTH_\$(tenant_id)s ' --internalurl '
> http://x.x.x.x:8080/v1/AUTH_\$(tenant_id)s '
>
>
> Regards
>
> Brian
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] The server has either erred or is incapable of performing the requested operation

2013-01-18 Thread Kuo Hugo
Hola Sujay ,

Would you please add more information on the following items? (Example
commands for items 1-3 are sketched after the list.)
1. Dump the ring information
2. The mount point permissions
3. The output of "swift-init all status" on all storage nodes
4. [Question] Why do you use the "-k" option in the curl request? For SSL?
The endpoint does not seem to use it.
5. Are there any "error" entries in the logs on the Swift storage nodes?
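For items 1-3, something along these lines should do (a sketch; the builder
files usually live on the proxy/admin box, and /mnt/sdb1 is the mount point
from your mail):

swift-ring-builder /etc/swift/account.builder
swift-ring-builder /etc/swift/container.builder
swift-ring-builder /etc/swift/object.builder
ls -ld /mnt/sdb1        # on each storage node; should be owned by the user set in the *-server.conf
swift-init all status   # on each storage node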


Cheers

Hugo


2013/1/18 Sujay M 

> Hi all,
>
> I have set up a proxy node ubuntu at 10.0.2.15 and 4 storage nodes vm[0-3]
> on 10.0.2.16-19
>
> Each storage node has loop as storage partition /mnt/sdb1/ is mounted
>
> When i try to GET an account The server has either erred or is incapable
> of performing the requested operation
>
> root@ubuntu:~# curl -k -v -H 'X-Auth-Token:
> AUTH_tk2f483541775649e39dd7c20a0e704505'
> http://10.0.2.15:8080/v1/AUTH_test
> * About to connect() to 10.0.2.15 port 8080 (#0)
> *   Trying 10.0.2.15... connected
> > GET /v1/AUTH_test HTTP/1.1
> > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
> OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> > Host: 10.0.2.15:8080
> > Accept: */*
> > X-Auth-Token: AUTH_tk2f483541775649e39dd7c20a0e704505
> >
> < HTTP/1.1 500 Internal Server Error
> < Content-Length: 228
> < Content-Type: text/html; charset=UTF-8
> < Date: Fri, 18 Jan 2013 08:47:56 GMT
> <
> 
>  
>   500 Internal Server Error
>  
>  
>   500 Internal Server Error
>   The server has either erred or is incapable of performing the requested
> operation.
>
>
>
>  
> * Connection #0 to host 10.0.2.15 left intact
> * Closing connection #0
>
> My storage nodes configuration
>
> root@vm0:~# cat /etc/swift/account-server.conf
> [DEFAULT]
> devices = /mnt
> bind_ip = 10.0.2.16
> bind_port = 6002
> workers = 2
>
> [pipeline:main]
> pipeline = account-server
>
> [app:account-server]
> use = egg:swift#account
>
> [account-replicator]
>
> [account-auditor]
>
> [account-reaper]
> Similarly other config files are there.
>
> When i examined the storage nodes in tcpdump i got to know that the proxy
> server is sending packets to the storage nodes.
>
> I followed
> http://docs.openstack.org/developer/swift/howto_installmultinode.html but
> using loopback partition on the storage nodes for storage. I have only
> installed the packages as shown in the above url and sqlite3 in addition to
> them.
>
> Please help me. Thanks
> --
> Best Regards,
>
> Sujay M
> Final year B.Tech
> Computer Engineering
> NITK Surathkal
>
> contact: +918971897571
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] VMs accesible from external LAN

2013-01-17 Thread Kuo Hugo
Did you ever try to trunk two VLANs on your switch ?

Hugo


2013/1/17 JuanFra Rodriguez Cardoso 

> Hi guys:
>
> This is my scenario: Centos 6.3 / Folsom / nova-network / vlanManager
>
> I created one vlan (10.129.130.0/24) for my project. How to can I allow
> to reach VMs from hosts of VLAN (10.129.128.0/24)?
> Do I have to add manually an iptables rule? or modify a nova-network chain?
>
> Thanks!
>
> Best regards,
> --
> JuanFra
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Multiple Auth URL's for same account problem

2013-01-17 Thread Kuo Hugo
Btw, the issue of the returned

X-Storage-Url: http://127.0.0.1:8080/v1/AUTH_test

is due to the default IP in that header being localhost.
You can override it with a user-defined URL; in your case it would
be
http://10.0.2.15:8080/v1/AUTH_test

If you're accessing the cluster with the swift client from a remote
machine, requests will fail because the returned URL is incorrect.
You had better change it to match your environment.
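For example, with tempauth the per-user storage URL can be set as the last
value on the user line in proxy-server.conf (a minimal sketch based on your
existing [filter:tempauth] section; restart the proxy after editing):

[filter:tempauth]
use = egg:swift#tempauth
user_test_tester = testing .admin http://10.0.2.15:8080/v1/AUTH_test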

Hope it helps.


Cheers
Hugo



2013/1/17 Sujay M 

> Thanks Hugo Kuo,
>
> Yes that was a problem of memcached, i started it. There was a
> configuration problem.
>
>
> On 17 January 2013 18:59, Kuo Hugo  wrote:
>
>> Hello Sujay ,
>>
>> That should be the problem on memcached .
>>
>> Would you please check the status of memcached ?
>> 1. grep it from ps command
>> 2. Restart it to check it the token been fixed again
>> 3. If the result is still fail , please check the configuration of
>> memcached.conf under /etc/ , make sure the port is bind to correct ip
>> 4. you can telnet into memcached to check the contents in current memcache
>>
>>
>> In your information , seems memcached is not in running status. Please
>> fire it up first.
>>
>> Cheers
>> Hugo Kuo
>>
>>
>> 2013/1/17 Sujay M 
>>
>>>
>>> Hi all,
>>>
>>> I have set up a proxy server on 10.0.2.15 and 4 storage nodes on
>>> 10.0.2.16-19
>>>
>>> My proxy-server configuration file
>>>
>>> root@ubuntu:~# /etc/swift/proxy-server.conf [DEFAULT]
>>> bind_port = 8080
>>> user = ug26
>>> workers = 8
>>>
>>> [pipeline:main]
>>> pipeline = healthcheck cache tempauth proxy-server
>>>
>>> [app:proxy-server]
>>> use = egg:swift#proxy
>>> allow_account_management = true
>>> account_autocreate = true
>>>
>>> [filter:tempauth]
>>> use = egg:swift#tempauth
>>> user_admin_admin = admin .admin .reseller_admin
>>> user_test_tester = testing .admin
>>> user_test2_tester2 = testing2 .admin
>>> user_test_tester3 = testing3
>>>
>>> [filter:healthcheck]
>>> use = egg:swift#healthcheck
>>>
>>> [filter:cache]
>>> use = egg:swift#memcache
>>>
>>>
>>> I am getting a different auth token each time i try to get an auth url
>>>
>>> root@ubuntu:~# curl -k -v -H 'X-Storage-User: test:tester' -H
>>> 'X-Storage-Pass: testing' http://10.0.2.15:8080/auth/v1.0
>>> * About to connect() to 10.0.2.15 port 8080 (#0)
>>> *   Trying 10.0.2.15... connected
>>> > GET /auth/v1.0 HTTP/1.1
>>> > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
>>> OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
>>> > Host: 10.0.2.15:8080
>>> > Accept: */*
>>> > X-Storage-User: test:tester
>>> > X-Storage-Pass: testing
>>> >
>>> < HTTP/1.1 200 OK
>>> < X-Storage-Url: http://127.0.0.1:8080/v1/AUTH_test
>>> < X-Storage-Token: AUTH_tkf673fe7a7fc5428398c53bc633f5ff5e
>>> < X-Auth-Token: AUTH_tkf673fe7a7fc5428398c53bc633f5ff5e
>>> < Content-Length: 0
>>> < Date: Thu, 17 Jan 2013 10:44:03 GMT
>>> <
>>> * Connection #0 to host 10.0.2.15 left intact
>>> * Closing connection #0
>>> root@ubuntu:~# curl -k -v -H 'X-Storage-User: test:tester' -H
>>> 'X-Storage-Pass: testing' http://10.0.2.15:8080/auth/v1.0
>>> * About to connect() to 10.0.2.15 port 8080 (#0)
>>> *   Trying 10.0.2.15... connected
>>> > GET /auth/v1.0 HTTP/1.1
>>> > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
>>> OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
>>> > Host: 10.0.2.15:8080
>>> > Accept: */*
>>> > X-Storage-User: test:tester
>>> > X-Storage-Pass: testing
>>> >
>>> < HTTP/1.1 200 OK
>>> < X-Storage-Url: http://127.0.0.1:8080/v1/AUTH_test
>>> < X-Storage-Token: AUTH_tke4fec7d8413d46df9eb867064e07ac83
>>> < X-Auth-Token: AUTH_tke4fec7d8413d46df9eb867064e07ac83
>>> < Content-Length: 0
>>> < Date: Thu, 17 Jan 2013 10:44:06 GMT
>>> <
>>> * Connection #0 to host 10.0.2.15 left intact
>>> * Closing connection #0
>>>
>>>
>>> I am alos unable to GEt an account
>>> root@ubuntu:~# curl -k -v -H 'X-Auth-Token: AUTH_
>>> tke4fec7d8413d46df9eb867064

Re: [Openstack] [OpenStack][Swift] Multiple Auth URL's for same account problem

2013-01-17 Thread Kuo Hugo
Hello Sujay ,

That is most likely a problem with memcached.

Would you please check the status of memcached? (Some example commands are
sketched after this list.)
1. grep for it in the ps output
2. Restart it and check whether the token stays the same afterwards
3. If it still fails, check memcached.conf under /etc/ and make sure it is
bound to the correct IP and port
4. You can also telnet into memcached to inspect the contents of the current
memcache
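A quick pass over those checks might look like this (a sketch, assuming the
default memcached port 11211 and your proxy host 10.0.2.15):

ps aux | grep [m]emcached                 # is it running?
sudo service memcached restart
grep -E '^-l|^-p' /etc/memcached.conf     # listen address and port
echo stats | nc 10.0.2.15 11211           # or: telnet 10.0.2.15 11211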


From the information you provided, memcached does not appear to be
running. Please fire it up first.

Cheers
Hugo Kuo


2013/1/17 Sujay M 

>
> Hi all,
>
> I have set up a proxy server on 10.0.2.15 and 4 storage nodes on
> 10.0.2.16-19
>
> My proxy-server configuration file
>
> root@ubuntu:~# /etc/swift/proxy-server.conf [DEFAULT]
> bind_port = 8080
> user = ug26
> workers = 8
>
> [pipeline:main]
> pipeline = healthcheck cache tempauth proxy-server
>
> [app:proxy-server]
> use = egg:swift#proxy
> allow_account_management = true
> account_autocreate = true
>
> [filter:tempauth]
> use = egg:swift#tempauth
> user_admin_admin = admin .admin .reseller_admin
> user_test_tester = testing .admin
> user_test2_tester2 = testing2 .admin
> user_test_tester3 = testing3
>
> [filter:healthcheck]
> use = egg:swift#healthcheck
>
> [filter:cache]
> use = egg:swift#memcache
>
>
> I am getting a different auth token each time i try to get an auth url
>
> root@ubuntu:~# curl -k -v -H 'X-Storage-User: test:tester' -H
> 'X-Storage-Pass: testing' http://10.0.2.15:8080/auth/v1.0
> * About to connect() to 10.0.2.15 port 8080 (#0)
> *   Trying 10.0.2.15... connected
> > GET /auth/v1.0 HTTP/1.1
> > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
> OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> > Host: 10.0.2.15:8080
> > Accept: */*
> > X-Storage-User: test:tester
> > X-Storage-Pass: testing
> >
> < HTTP/1.1 200 OK
> < X-Storage-Url: http://127.0.0.1:8080/v1/AUTH_test
> < X-Storage-Token: AUTH_tkf673fe7a7fc5428398c53bc633f5ff5e
> < X-Auth-Token: AUTH_tkf673fe7a7fc5428398c53bc633f5ff5e
> < Content-Length: 0
> < Date: Thu, 17 Jan 2013 10:44:03 GMT
> <
> * Connection #0 to host 10.0.2.15 left intact
> * Closing connection #0
> root@ubuntu:~# curl -k -v -H 'X-Storage-User: test:tester' -H
> 'X-Storage-Pass: testing' http://10.0.2.15:8080/auth/v1.0
> * About to connect() to 10.0.2.15 port 8080 (#0)
> *   Trying 10.0.2.15... connected
> > GET /auth/v1.0 HTTP/1.1
> > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
> OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> > Host: 10.0.2.15:8080
> > Accept: */*
> > X-Storage-User: test:tester
> > X-Storage-Pass: testing
> >
> < HTTP/1.1 200 OK
> < X-Storage-Url: http://127.0.0.1:8080/v1/AUTH_test
> < X-Storage-Token: AUTH_tke4fec7d8413d46df9eb867064e07ac83
> < X-Auth-Token: AUTH_tke4fec7d8413d46df9eb867064e07ac83
> < Content-Length: 0
> < Date: Thu, 17 Jan 2013 10:44:06 GMT
> <
> * Connection #0 to host 10.0.2.15 left intact
> * Closing connection #0
>
>
> I am alos unable to GEt an account
> root@ubuntu:~# curl -k -v -H 'X-Auth-Token: AUTH_
> tke4fec7d8413d46df9eb867064e07ac83' http://10.0.2.15:8080/v1/AUTH_test
> * About to connect() to 10.0.2.15 port 8080 (#0)
> *   Trying 10.0.2.15... connected
> > GET /v1/AUTH_test HTTP/1.1
> > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
> OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> > Host: 10.0.2.15:8080
> > Accept: */*
> > X-Auth-Token: AUTH_tke4fec7d8413d46df9eb867064e07ac83
> >
> < HTTP/1.1 401 Unauthorized
> < Content-Length: 358
> < Content-Type: text/html; charset=UTF-8
> < Date: Thu, 17 Jan 2013 10:44:09 GMT
> <
> 
>  
>   401 Unauthorized
>  
>  
>   401 Unauthorized
>   This server could not verify that you are authorized to access the
> document you requested. Either you supplied the wrong credentials (e.g.,
> bad password), or your browser does not understand how to supply the
> credentials required.
>
>
>
>  
> * Connection #0 to host 10.0.2.15 left intact
> * Closing connection #0
>
> Also if i do
>
> ps -A | grep "memcached"  it is no returning anything. I think its a
> problem with memcached.
>
> please help me. Thanks in advance.
>
>
> --
> Best Regards,
>
> Sujay M
> Final year B.Tech
> Computer Engineering
> NITK Surathkal
>
> contact: +918971897571
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Difference between Swift and Cinder

2013-01-16 Thread Kuo Hugo
[Cinder] (born from nova-volume)
The goal of the Cinder project is to separate the existing nova-volume
block service into its own project.


[Swift]
Swift is a highly available, distributed, eventually consistent object/blob
store. Organizations can use Swift to store lots of data efficiently,
safely, and cheaply.


Cinder is a project that leverages different backend storage pools to
provide "block devices" for Nova instances.

Swift is an object store; it keeps files (objects) eventually consistent.

There're many differences between Cinder and Swift.

In short, Swift is not well suited to very fast real-time I/O, and object
contents are unstructured: an object is like a sealed box you cannot open
in place, and once you modify the content, the whole object becomes a new
box. Cinder, on the other hand, gives users a pool from which to create
volume disks that are presented as block-level devices.
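To make that concrete: updating an object in Swift means re-uploading the
whole object, there is no seek-and-write as on a block device. A small
sketch, with hypothetical proxy address, token, and names:

curl -X PUT -H "X-Auth-Token: $TOKEN" --data-binary @report-v2.csv \
 http://proxy.example.com:8080/v1/AUTH_test/reports/report.csv
# the previous report.csv is replaced wholesale; partial updates are not possible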

If you ask me: could a Swift container serve as an instance's virtual
disk? The answer is "YES", but at high risk.


Cheers
Hugo




2013/1/17 harryxiyou 

> Hi all,
>
> Swift is oriented Openstack object storage but Cinder is oriented Openstack
> block storage. What are the detail differences betwwen object storage and
> block storage? Cloud anyone tell me his/her ideas? Thanks inadvance.
>
> --
> Thanks
> Harry Wei
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] CY12-Q4 Community Analysis — OpenStack vs OpenNebula vs Eucalyptus vs CloudStack

2013-01-04 Thread Kuo Hugo
appreciate ~~ Thanks



2013/1/4 Qingye Jiang (John) 

> Hi all,
>
> I would like to let you know that I have just finished my 5th quarterly
> report on comparing the community activities of different open source IaaS
> technologies. "CY12-Q4 Community Analysis — OpenStack vs OpenNebula vs
> Eucalyptus vs CloudStack" is now available on my blog at the following URL.
>
> English Version:
> http://www.qyjohn.net/?p=2733
>
> Chinese Version:
> http://www.qyjohn.net/?p=2731
>
> Best regards,
>
> Qingye Jiang (John)
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone doesn't return X-Auth-Token and X-Storage-URL

2012-11-22 Thread Kuo Hugo
http://docs.openstack.org/developer/keystone/api_curl_examples.html

curl -d '{"auth":{"passwordCredentials":{"username": "admin",
"password": "$ADMINPASS"}}}' -H "Content-type: application/json"
http://localhost:35357/v2.0/tokens



2012/11/22 Shashank Sahni 

> Hi,
>
> I'm trying to install swift. I'm able to start all the relevant services,
> but I'm getting error during verification while following the instructions
> mentioned here.
>
>
> http://docs.openstack.org/trunk/openstack-object-storage/admin/content/verify-swift-installation.html
>
> The first command
>
> $ swift -V 2.0 -A http://:5000/v2.0 -U demo:admin -K
> $ADMINPASS stat
>
> returns
>
> Account: AUTH_2b2f3b2f1db5442ca05a823dcbb047e1
> Containers: 0
> Objects: 0
> Bytes: 0
> Accept-Ranges: bytes
> X-Timestamp: 1353569489.57971
>
> But when I try to run
>
> $ curl -k -v -H 'X-Storage-User: demo:admin' -H 'X-Storage-Pass:
> $ADMINPASS' http://:5000/auth/v2.0
>
> It doesn't return X-Auth-Token and X-Storage-URL. I believe this shows
> some trouble with Keystone, but I already have glance successfully
> configured. Here is the output.
>
> * About to connect() to 10.2.4.115 port 5000 (#0)
> *   Trying 10.2.4.115... connected
> > GET /v2.0 HTTP/1.1
> > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
> OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> > Host: 10.2.4.115:5000
> > Accept: */*
> > X-Storage-User: admin:admin
> > X-Storage-Pass: x
> >
> < HTTP/1.1 200 OK
> < Vary: X-Auth-Token
> < Content-Type: application/json
> < Date: Wed, 21 Nov 2012 05:46:25 GMT
> < Transfer-Encoding: chunked
> <
> * Connection #0 to host 10.2.4.115 left intact
> * Closing connection #0
> {"version": {"status": "beta", "updated": "2011-11-19T00:00:00Z",
> "media-types": [{"base": "application/json", "type":
> "application/vnd.openstack.
> identity-v2.0+json"}, {"base": "application/xml", "type":
> "application/vnd.openstack.identity-v2.0+xml"}], "id": "v2.0", "links":
> [{"href": "http://10.2.4.115:5000/v2.0/";, "rel": "self"}, {"href": "
> http://docs.openstack.org/api/openstack-identity-service/2.0/content/";,
> "type": "text/html", "rel": "describedby"}, {"href": "
> http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf";,
> "type": "application/pdf", "rel": "describedby"}]}}
>
>
> Any thoughts?
>
> --
> Shashank Sahni
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift installation verification fails

2012-11-21 Thread Kuo Hugo
Hi,
For Keystone 2.0 auth, the request should provide a JSON body that includes
the username, tenant, and password.

In your curl test, you are only providing two headers to the 2.0 auth
endpoint, which is the v1.0 style.

Please have a look at the official documentation for the right API call.
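For reference, a v2.0 token request looks roughly like this (a sketch;
substitute your own tenant, user, and password, and note that without a
tenant the catalog, and hence the Swift endpoint, is normally not returned):

curl -s -H "Content-Type: application/json" \
 -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "ADMINPASS"}}}' \
 http://10.2.4.115:5000/v2.0/tokens

The token and the object-store URL can then be read out of the returned JSON
and used directly against the Swift proxy.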


2012/11/21 Shashank Sahni 

> Hi,
>
> Thanks for the response. I went head to verify using curl and ran.
>
> $ curl -k -v -H 'X-Storage-User: admin:admin' -H 'X-Storage-Pass: '
> http://10.2.4.115:5000/v2.0
>
> Here is the output. I don't see the token or storage-url anywhere. Note
> that, 10.2.4.115 is the keystone server.
>
> * About to connect() to 10.2.4.115 port 5000 (#0)
> *   Trying 10.2.4.115... connected
> > GET /v2.0 HTTP/1.1
> > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
> OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> > Host: 10.2.4.115:5000
> > Accept: */*
> > X-Storage-User: admin:admin
> > X-Storage-Pass: x
> >
> < HTTP/1.1 200 OK
> < Vary: X-Auth-Token
> < Content-Type: application/json
> < Date: Wed, 21 Nov 2012 05:46:25 GMT
> < Transfer-Encoding: chunked
> <
> * Connection #0 to host 10.2.4.115 left intact
> * Closing connection #0
> {"version": {"status": "beta", "updated": "2011-11-19T00:00:00Z",
> "media-types": [{"base": "application/json", "type":
> "application/vnd.openstack.identity-v2.0+json"}, {"base":
> "application/xml", "type": "application/vnd.openstack.identity-v2.0+xml"}],
> "id": "v2.0", "links": [{"href": "http://10.2.4.115:5000/v2.0/";, "rel":
> "self"}, {"href": "
> http://docs.openstack.org/api/openstack-identity-service/2.0/content/";,
> "type": "text/html", "rel": "describedby"}, {"href": "
> http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf";,
> "type": "application/pdf", "rel": "describedby"}]}}
>
> --
> Shashank Sahni
>
>
>
>
> On Wed, Nov 21, 2012 at 12:48 AM, Hugo  wrote:
>
>> In my suggestion, using curl for verifying keystone first. And then using
>> curl to access swift proxy with the returned token and service-endpoint
>> from previous keystone operation.
>>
>> It must give u more clear clues.
>>
>>
>>
>> 從我的 iPhone 傳送
>>
>> Shashank Sahni  於 2012/11/20 下午6:40 寫道:
>>
>> Hi,
>>
>> I'm trying to install Swift 1.7.4 on Ubuntu 12.04. The installation is
>> multi-node with keystone and swift(proxy+storage) running on separate
>> systems. Keystone is up and running perfectly fine. Swift user and service
>> endpoints are created correctly to point to the swift_node. Swift is
>> configured and all its services are up. But during swift installation
>> verification, the following commands hangs with no output.
>>
>> swift -V 2 -A http://keystone_server:5000/v2.0 -U admin:admin -K admin_pass stat
>>
>> I'm sure its able to contact the keystone server. This is because if I
>> change admin_pass, it throws authentication failure error. It probably
>> fails in the next step which I'm unaware of.
>>
>> Here is my proxy-server.conf file.
>>
>> [DEFAULT]
>> # Enter these next two values if using SSL certifications
>> cert_file = /etc/swift/cert.crt
>> key_file = /etc/swift/cert.key
>> bind_port = 
>> user = swift
>>
>> [pipeline:main]
>> #pipeline = healthcheck cache swift3 authtoken keystone proxy-server
>> pipeline = healthcheck cache swift3 authtoken keystone proxy-server
>>
>> [app:proxy-server]
>> use = egg:swift#proxy
>> allow_account_management = true
>> account_autocreate = true
>>
>> [filter:swift3]
>> use=egg:swift3#swift3
>>
>> [filter:keystone]
>> paste.filter_factory = keystone.middleware.swift_auth:filter_factory
>> operator_roles = Member,admin, swiftoperator
>>
>> [filter:authtoken]
>> paste.filter_factory = keystone.middleware.auth_token:filter_factory
>> # Delaying the auth decision is required to support token-less
>> # usage for anonymous referrers ('.r:*').
>> delay_auth_decision = 10
>> service_port = 5000
>> service_host = keystone_server
>> auth_port = 35357
>> auth_host = keystone_server
>> auth_protocol = http
>> auth_uri = http://keystone_server:5000/
>> auth_token = 
>> admin_token = 
>> admin_tenant_name = service
>> admin_user = swift
>> admin_password = 
>> signing_dir = /etc/swift
>>
>> [filter:cache]
>> use = egg:swift#memcache
>> set log_name = cache
>>
>> [filter:catch_errors]
>> use = egg:swift#catch_errors
>>
>> [filter:healthcheck]
>> use = egg:swift#healthcheck
>>
>> Any suggestion?
>>
>> --
>> Shashank Sahni
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] Does there anyone play with s3backer for Swift?

2012-11-15 Thread Kuo Hugo
Hi folks ,

I'm testing the s3backer for swift via swift3.

It works fine.

In my environment, swift-bench can sustain about 120 MB/s (with 1 MB
objects).

But when I use s3backer (block size set to 1 MB) and test the speed with
dd, the result is only 20-40 MB/s.

Does anyone have experience tuning Swift + s3backer?

I would expect the speed to reach 100+ MB/s.

THANKS

-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] does swift support resume upload?

2012-10-15 Thread Kuo Hugo
Do you mean "resume broken transfer" ?


2012/10/13 符永涛 

> Dear swift experts,
>
> We're planning to use swift to implement an upload service. And we want to
> know if swift support resume upload feature? Thank you.
>
> --
> 符永涛
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack][Swift] Does .meta file is using for storing extended metadat in Swift?

2012-09-27 Thread Kuo Hugo
Hi folks ,

We would like to add more metadata to an object.
As I recall, a file named ".meta" gets created in some cases.

I'm not sure whether this file is used for storing additional metadata.

Could someone share more information?

When will a .meta file be produced?
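For background, custom object metadata is normally attached with a POST
carrying X-Object-Meta-* headers; a sketch, with hypothetical proxy address,
token, and names:

curl -X POST -H "X-Auth-Token: $TOKEN" -H "X-Object-Meta-Color: blue" \
 http://10.0.0.1:8080/v1/AUTH_test/Con_1/Obj1

On disk the object server stores the object itself in a .data file; the
question above is whether a metadata POST like this is what produces the
accompanying .meta file.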


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift][Replicator] Does object replicator push "exist" object to handoff node while a node/disk/network fails ?

2012-09-06 Thread Kuo Hugo
Thanks for your quick reply, John ~

It seems I missed something in my earlier umount-disk test.

I'll try it again later.
Appreciate it ~
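For anyone following along, a minimal way to reproduce what John describes
(a sketch, assuming mount_check = true in the object-server config and the
device layout quoted below):

# on a primary node for the partition, e.g. 192.168.1.101
sudo umount /srv/node/DISK1
# with the device unmounted the object server answers 507 for that disk,
# and the replicators on the other primaries push the partition to the handoff
# then, on the handoff node 192.168.1.104:
ls /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/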

2012/9/7 John Dickinson 

> you can force a replicator to push to a handoff node by unmounting the
> drive one of the primary replicas is on.
>
> --John
>
>
> On Sep 6, 2012, at 9:00 AM, Kuo Hugo  wrote:
>
> > Hi folks , John and Chmouel ,
> >
> > I did post a question about this long time ago. And my test result is
> match to Chmouel's answer.
> >
> > https://answers.launchpad.net/swift/+question/191924
> > "The object replicator will push an object to a handoff node if another
> primary node returns that the drive the object is supposed to go on is bad.
> We don't push to handoff nodes on general errors, otherwise things like
> network partitions or rebooting machines would cause storms of unneeded
> handoff traffic."
> >
> > But I read something different from John (or just my misunderstanding)
>  , so want to clarify it.
> >
> > Assumption :
> > Storage Nodes :  5 (each for one zone)
> > Zones :   5
> > Replica :  3
> > Disks :   2*5   ( 1 disk/per node )
> >
> > Account   AUTH_test
> > ContainerCon_1
> > Object  Obj1
> >
> >
> > Partition   3430
> > Hash6b342ac122448ef16bf1655d652bfe1e
> >
> > Server:Port Device  192.168.1.101:36000 DISK1
> > Server:Port Device  192.168.1.102:36000 DISK1
> > Server:Port Device  192.168.1.103:36000 DISK1
> > Server:Port Device  192.168.1.104:36000 DISK1[Handoff]
> > Server:Port Device  192.168.1.105:36000 DISK1[Handoff]
> >
> >
> > curl -I -XHEAD "
> http://192.168.1.101:36000/DISK1/3430/AUTH_test/Con_1/Obj1";
> > curl -I -XHEAD "
> http://192.168.1.102:36000/DISK1/3430/AUTH_test/Con_1/Obj1";
> > curl -I -XHEAD "
> http://192.168.1.103:36000/DISK1/3430/AUTH_test/Con_1/Obj1";
> > curl -I -XHEAD "
> http://192.168.1.104:36000/DISK1/3430/AUTH_test/Con_1/Obj1"; # [Handoff]
> > curl -I -XHEAD "
> http://192.168.1.105:36000/DISK1/3430/AUTH_test/Con_1/Obj1"; # [Handoff]
> >
> >
> > ssh 192.168.1.101 "ls -lah
> /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/"
> > ssh 192.168.1.102 "ls -lah
> /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/"
> > ssh 192.168.1.103 "ls -lah
> /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/"
> > ssh 192.168.1.104 "ls -lah
> /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/" #
> [Handoff]
> > ssh 192.168.1.105 "ls -lah
> /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/" #
> [Handoff]
> >
> > Case :
> > Obj1 is already been uploaded to 3 primary devices properly. What kind
> of fails on "192.168.1.101:3600 DISK1" will trigger replicator push a
> copy to "192.168.1.104:36000 DISK1 [handoff] " device ?
> >
> > In my past test , the replicator does not push a copy to handoff node
> for an "existing" object. Whatever network fail / reboot machine / umount
> disk , I think these are general errors from Chmouel mentioned before. But
> I'm not that sure about the meaning of "replicator will push an object to a
> handoff node if another primary node returns that the drive the object is
> supposed to go on is bad" . How object-replicator to know that the drive
> the object is supposed to go on is bad (I think replicator will never know
> it. Should it work with object-auditor ?)
> >
> > How to produce a fail to trigger replicator push object to handoff node ?
> >
> > In my consideration , for replicator pushes an object to handoff node
> there's a condition is that primary device does not have the object , also
> can not push into the device(192.168.1.101:36000 DISK1). It might be
> moved to quarantine due to the object-auditor found the object is broken.
> >
> > So that even the disk(192.168.1.101:3600 DISK1) is still mounted and
> the target partition 3430 does not have Obj1 . Another node's
> object-replicator try to push it's Obj1 to "192.168.1.101:36000 DISK1" ,
> but unluckily , the "192.168.1.101:36000 DISK1" is bad. So the
> object-replicator will push object to "192.168.1.104:36000 DISK1
> [handoff] " now .
> >
> > That's my inference , please feel free to correct it . I'm really
> confusing about to produce the kind of fails for replicator to push object
> to handoff node .
> > Any idea would be great .
> >
> >
> > Cheers
> > --
> > +Hugo Kuo+
> > tonyt...@gmail.com
> > +886 935004793
> >
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack][Swift][Replicator] Does object replicator push "exist" object to handoff node while a node/disk/network fails ?

2012-09-06 Thread Kuo Hugo
Hi folks , John and Chmouel ,

I posted a question about this a long time ago, and my test result matches
Chmouel's answer.

https://answers.launchpad.net/swift/+question/191924

"The object replicator will push an object to a handoff node if another
primary node returns that the drive the object is supposed to go on is bad.
We don't push to handoff nodes on general errors, otherwise things like
network partitions or rebooting machines would cause storms of unneeded
handoff traffic."


But I read something different from John (or maybe I just misunderstood),
so I want to clarify it.

Assumption :

Storage Nodes :  5 (each for one zone)
Zones :   5
Replica :  3

Disks :   2*5   ( 1 disk/per node )


Account      AUTH_test
Container    Con_1
Object       Obj1

Partition    3430
Hash         6b342ac122448ef16bf1655d652bfe1e

Server:Port Device  192.168.1.101:36000 DISK1
Server:Port Device  192.168.1.102:36000 DISK1
Server:Port Device  192.168.1.103:36000 DISK1
Server:Port Device  192.168.1.104:36000 DISK1 [Handoff]
Server:Port Device  192.168.1.105:36000 DISK1 [Handoff]

curl -I -XHEAD "http://192.168.1.101:36000/DISK1/3430/AUTH_test/Con_1/Obj1"
curl -I -XHEAD "http://192.168.1.102:36000/DISK1/3430/AUTH_test/Con_1/Obj1"
curl -I -XHEAD "http://192.168.1.103:36000/DISK1/3430/AUTH_test/Con_1/Obj1"
curl -I -XHEAD "http://192.168.1.104:36000/DISK1/3430/AUTH_test/Con_1/Obj1"  # [Handoff]
curl -I -XHEAD "http://192.168.1.105:36000/DISK1/3430/AUTH_test/Con_1/Obj1"  # [Handoff]

ssh 192.168.1.101 "ls -lah /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/"
ssh 192.168.1.102 "ls -lah /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/"
ssh 192.168.1.103 "ls -lah /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/"
ssh 192.168.1.104 "ls -lah /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/"  # [Handoff]
ssh 192.168.1.105 "ls -lah /srv/node/DISK1/objects/3430/e1e/6b342ac122448ef16bf1655d652bfe1e/"  # [Handoff]


Case :

Obj1 has already been uploaded to the 3 primary devices properly. What kind
of failure on "192.168.1.101:36000 DISK1" will trigger the replicator to
push a copy to the "192.168.1.104:36000 DISK1 [Handoff]" device?


In my past tests, the replicator did not push a copy of an "existing"
object to a handoff node. Network failures, machine reboots, unmounted
disks: I think these are the general errors Chmouel mentioned before. But
I'm not sure about the meaning of "replicator will push an object to a
handoff node if another primary node returns that the drive the object is
supposed to go on is bad". How does the object-replicator know that the
drive the object is supposed to go on is bad? (I think the replicator can
never know it on its own. Should it work together with the object-auditor?)

How do I produce a failure that triggers the replicator to push an object to a handoff node?

My thinking is that for the replicator to push an object to a handoff node,
the condition is that a primary device does not have the object and the
object also cannot be pushed onto that device (192.168.1.101:36000 DISK1),
for example because it was moved to quarantine after the object-auditor
found the object was broken.

So even if the disk (192.168.1.101:36000 DISK1) is still mounted but the
target partition 3430 does not have Obj1, another node's object-replicator
will try to push its Obj1 to "192.168.1.101:36000 DISK1"; but unluckily
"192.168.1.101:36000 DISK1" is bad, so the object-replicator will now push
the object to "192.168.1.104:36000 DISK1 [Handoff]" instead.


That's my inference; please feel free to correct it. I'm really confused
about how to produce the kind of failure that makes the replicator push an
object to a handoff node.
Any idea would be great.


Cheers
-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift + keystone integration

2012-08-11 Thread Kuo Hugo
I usually debug with curl so that the auth section (Keystone) and the data
section (Swift proxy) can be checked separately:

 #> curl -v -d '{%json%}' -H "Content-Type: application/json" http://keystone_ip:port/v2.0/tokens
 #> curl -H "X-Auth-Token: %TOKEN%" http://swift_ip:port/v1/AUTH_%account%

And monitor the logs on both Keystone and Swift.

Several steps you can follow (rough example commands below):

1. Check that Keystone is listening on the proper port
2. Check that Swift is listening on the proper port
3. Check the swift endpoint registered in Keystone's database
4. Check that the network between Keystone and Swift is reachable
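Roughly, with the addresses from your mail (a sketch; SWIFT_PROXY_IP is a
placeholder, the /healthcheck path relies on the healthcheck middleware in
your proxy pipeline, and SERVICE_TOKEN matches the admin_token you
configured):

curl -s http://10.17.12.163:5000/v2.0               # 1. Keystone answers on the public port
curl -s http://SWIFT_PROXY_IP:8080/healthcheck      # 2. proxy answers "OK"
export SERVICE_TOKEN=admin
export SERVICE_ENDPOINT=http://10.17.12.163:35357/v2.0
keystone endpoint-list                              # 3. the object-store endpoint should be listed
nc -zv 10.17.12.163 5000 && nc -zv 10.17.12.163 35357   # 4. run from the Swift node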



2012/8/12 Miguel Alejandro González 

> Hello
>
> I have 3 nodes with ubuntu 12.04 server and installed openstack with
> packages from the ubuntu repos
>
>- controller (where keystone is installed)
>- compute
>- swift
>
> I'm trying to configure Swift with Keystone but I'm having some problems,
> here's my proxy-server.conf
>
> [DEFAULT]
> bind_port = 8080
> user = swift
> swift_dir = /etc/swift
> [pipeline:main]
> # Order of execution of modules defined below
> pipeline = catch_errors healthcheck cache authtoken keystone proxy-server
> [app:proxy-server]
> use = egg:swift#proxy
> allow_account_management = true
> account_autocreate = true
> set log_name = swift-proxy
> set log_facility = LOG_LOCAL0
> set log_level = INFO
> et access_log_name = swift-proxy
> set access_log_facility = SYSLOG
> set access_log_level = INFO
> set log_headers = True
> account_autocreate = True
> [filter:healthcheck]
> use = egg:swift#healthcheck
> [filter:catch_errors]
> use = egg:swift#catch_errors
> [filter:cache]
> use = egg:swift#memcache
> set log_name = cache
> [filter:authtoken]
> paste.filter_factory = keystone.middleware.auth_token:filter_factory
> auth_protocol = http
> auth_host = 10.17.12.163
> auth_port = 35357
> auth_token = admin
> service_protocol = http
> service_host = 10.17.12.163
> service_port = 5000
> admin_token = admin
> admin_tenant_name = admin
> admin_user = admin
> admin_password = admin
> delay_auth_decision = 0
> [filter:keystone]
> paste.filter_factory = keystone.middleware.swift_auth:filter_factory
> operator_roles = admin, swiftoperator
> is_admin = true
>
> On Horizon I get a Django error page and says [Errno 111] ECONNREFUSED
>
> From the Swift server I try this command:
>
> swift -v -V 2.0 -A http://10.17.12.163:5000/v2.0/ -U admin:admin -K admin
> stat
>
> And I also get [Errno 111] ECONNREFUSED
>
>
> Is there any way to debug this??? Is there any conf or packages that I'm
> missing for this to work on a multi-node deployment? Can you help me?
>
> Regards!
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Some questions about the performance of swift .

2012-07-20 Thread Kuo Hugo
2012/7/21 Paulo Ricardo Motta Gomes 

> Have you monitored the cpu utilization of the proxy server and the storage
> nodes? I did similar tests with Swift and the proxy server exhausted its
> capacity with a only few concurrent requests for very small objects.


CPU utilization on the proxy reaches 100% on all cores at the beginning,
and 60-70% on the storage nodes.
It rises and falls periodically. I did several system tunings, such as
sysctl, ulimit, etc. The request concurrency can still reach 500+ for 4K
objects, though.


> If you notice object servers are not overloaded, but proxy is overloaded,
> a solution might be to have more proxy servers if you hav
>

The result is still the same with multiple proxy servers (2-4), each driven
by its own powerful swift-bench client. I also ran a test using
swift-bench's direct-client function against a particular node and got a
very similar result. Once more objects have been uploaded, the line

for chunk in iter(lambda: reader(self.network_chunk_size), ''):

takes a lot of time periodically.



>
> It seems a problem of overload, since there are only 4 servers in the
> system and a large level of concurrency. Have you tried slowly increasing
> the number of concurrency to find the point where the problem starts? This
> point may be the capacity of your system.
>

Last week I got more servers from another hardware provider, with more
CPU/RAM/disks: 12 disks in each storage node. This Swift deployment kept up
better performance for a longer time. Unfortunately, after 15,000,000
objects the performance dropped to half and the failures appeared.
I am wondering whether the ratio (total number of objects / number of
disks) causes this effect in large deployments (e.g. cloud storage
providers, telecoms, banks, etc.).

Really confusing...


>
> Also, are you using persistent connections to the proxy server to send the
> object? If so, maybe try to renew them once in a while.
>

As far as I know, swift-bench renews connections for each round.

swift-bench creates a connection pool with concurrency = x connections, and
I think those connections are renewed every round.

Something strange is that the performance goes back to its initial level
once I flush all data from the storage nodes (whether by reformatting the
disks or by rm).


>
> Cheers,
>
> Paulo
>
Thanks for your reply

>
> 2012/7/20 Kuo Hugo 
>
>> Hi Sam , and all openstacker
>>
>> This is Hugo . I'm facing an issue about the performance  *degradation*  of
>> swift .
>> I tried to figure out the problem of the issue which I faced in recent
>> days.
>>
>> Environment :
>> Swift version : master branch . latest code.
>> Tried on Ubuntu 12.04/11.10
>> 1 Swift-proxy : 32GB-ram / CPU 4*2 / 1Gb NIC*2
>> 3 Storage-nodes : each for 32GB-ram / CPU 4*2 / 2TB*7 / 1Gb NIC*2
>>
>> storage nodes runs only main workers(object-server , container-server ,
>> account-server)
>>
>> I'm in testing with 4K size objects by swift-bench.
>>
>> Per round bench.conf
>> object_size = 4096
>> Concurrency : 200
>> Object number: 20
>> Containers : 200
>> no delete objects ..
>>
>> At beginning , everything works fine in my environment.  The average
>> speed of PUT is reached to 1200/s .
>> After several rounds test . I found that the performance is down to
>> 300~400/s
>> And after more rounds , failures appeared  , and ERROR in proxy's log as
>> followed
>>
>> Jul 20 18:44:54 angryman-proxy-01 proxy-server ERROR with Object server
>> 192.168.100.101:36000/DISK5 re: Trying to get final status of PUT to
>> /v1/AUTH_admin/9cbb3f9336b34019a6e7651adfc06a86_51/87b48a3474c7485c95aeef95c6911afb:
>> Timeout (10s) (txn: txb4465d895c9345be95d81632db9729af) (client_ip:
>> 172.168.1.2)
>> Jul 20 18:44:54 angryman-proxy-01 proxy-server ERROR with Object server
>> 192.168.100.101:36000/DISK4 re: Trying to get final status of PUT to
>> /v1/AUTH_admin/9cbb3f9336b34019a6e7651adfc06a86_50/7405e5824cff411f8bb3ecc7c52ffd5a:
>> Timeout (10s) (txn: txe0efab51f99945a7a09fa664b821777f) (client_ip:
>> 172.168.1.2)
>> Jul 20 18:44:55 angryman-proxy-01 proxy-server ERROR with Object server
>> 192.168.100.101:36000/DISK5 re: Trying to get final status of PUT to
>> /v1/AUTH_admin/9cbb3f9336b34019a6e7651adfc06a86_33/f322f4c08b124666bf7903812f4799fe:
>> Timeout (10s) (txn: tx8282ecb118434f828b9fb269f0fb6bd0) (client_ip:
>> 172.168.1.2)
>>
>>
>> After trace the code of object-server swift/obj/server.py and insert a
>> timer on
>> https://github.com/openstack/swift/blob/master/swift/obj/server.py#L591
>>
>>
>> for chunk in iter(lambda: reader(self.network_ch

[Openstack] [OpenStack][Swift] Some questions about the performance of swift .

2012-07-20 Thread Kuo Hugo
Hi Sam , and all openstacker

This is Hugo. I'm facing an issue with the performance *degradation* of
Swift, and I have been trying to pin down the cause over the past few
days.

Environment :
Swift version : master branch . latest code.
Tried on Ubuntu 12.04/11.10
1 Swift-proxy : 32GB-ram / CPU 4*2 / 1Gb NIC*2
3 Storage-nodes : each for 32GB-ram / CPU 4*2 / 2TB*7 / 1Gb NIC*2

storage nodes runs only main workers(object-server , container-server ,
account-server)

I'm in testing with 4K size objects by swift-bench.

Per round bench.conf
object_size = 4096
Concurrency : 200
Object number: 20
Containers : 200
no delete objects ..

At the beginning, everything works fine in my environment; the average PUT
speed reaches 1200/s.
After several rounds of testing, the performance drops to 300-400/s.
And after more rounds, failures appear, with ERRORs in the proxy's log like
the following:

Jul 20 18:44:54 angryman-proxy-01 proxy-server ERROR with Object server
192.168.100.101:36000/DISK5 re: Trying to get final status of PUT to
/v1/AUTH_admin/9cbb3f9336b34019a6e7651adfc06a86_51/87b48a3474c7485c95aeef95c6911afb:
Timeout (10s) (txn: txb4465d895c9345be95d81632db9729af) (client_ip:
172.168.1.2)
Jul 20 18:44:54 angryman-proxy-01 proxy-server ERROR with Object server
192.168.100.101:36000/DISK4 re: Trying to get final status of PUT to
/v1/AUTH_admin/9cbb3f9336b34019a6e7651adfc06a86_50/7405e5824cff411f8bb3ecc7c52ffd5a:
Timeout (10s) (txn: txe0efab51f99945a7a09fa664b821777f) (client_ip:
172.168.1.2)
Jul 20 18:44:55 angryman-proxy-01 proxy-server ERROR with Object server
192.168.100.101:36000/DISK5 re: Trying to get final status of PUT to
/v1/AUTH_admin/9cbb3f9336b34019a6e7651adfc06a86_33/f322f4c08b124666bf7903812f4799fe:
Timeout (10s) (txn: tx8282ecb118434f828b9fb269f0fb6bd0) (client_ip:
172.168.1.2)


After tracing the object-server code in swift/obj/server.py and putting a
timer around
https://github.com/openstack/swift/blob/master/swift/obj/server.py#L591

for chunk in iter(lambda: reader(self.network_chunk_size), ''):

it seems the reader sometimes takes a long time to receive data from
wsgi.input. Not on every request; it looks like it happens periodically.

So I checked the history of Swift and saw your commit
https://github.com/openstack/swift/commit/783f16035a8e251d2138eb5bbaa459e9e4486d90
That's the only one that looks close to my issue, so I hope you can offer
some suggestions.

My considerations:

1. Could it be caused by the greenio switching?

2. Is it related to the number of objects already on the storage disks?

3. Has anyone tested Swift with small objects and fast client requests?

4. I found that the performance never goes back to 1200/s. The only remedy
is to flush all data from the disks; once the disks are cleaned, the
performance returns to its best.

5. I re-read the entire object-server workflow for handling a PUT request,
and I don't understand why the number of objects would affect reading the
wsgi.input data. With 4K objects there is no need for chunking, as far as I
know.


The time consumed by *reader(self.network_chunk_size)*

Jul 20 17:09:36 angryman-storage-01 object-server Reader: 0.001391

Jul 20 17:09:36 angryman-storage-01 object-server Reader: 0.001839

Jul 20 17:09:36 angryman-storage-01 object-server Reader: 0.00164

Jul 20 17:09:36 angryman-storage-01 object-server Reader: 0.002786

Jul 20 17:09:36 angryman-storage-01 object-server Reader: 2.716707

Jul 20 17:09:36 angryman-storage-01 object-server Reader: 1.005659

Jul 20 17:09:36 angryman-storage-01 object-server Reader: 0.055982

Jul 20 17:09:36 angryman-storage-01 object-server Reader: 0.002205


Jul 20 18:39:14 angryman-storage-01 object-server WTF: 0.000968

Jul 20 18:39:14 angryman-storage-01 object-server WTF: 0.001328

Jul 20 18:39:14 angryman-storage-01 object-server WTF: 10.003368

Jul 20 18:39:14 angryman-storage-01 object-server WTF: 0.001243

Jul 20 18:39:14 angryman-storage-01 object-server WTF: 0.001562


Jul 20 17:52:41 angryman-storage-01 object-server WTF: 0.001067

Jul 20 17:52:41 angryman-storage-01 object-server WTF: 13.804413

Jul 20 17:52:41 angryman-storage-01 object-server WTF: 5.301166

Jul 20 17:52:41 angryman-storage-01 object-server WTF: 0.001167




Could this be a bug in eventlet or Swift? Please let me know whether I
should file a bug against Swift.

Appreciate ~

-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift][Object-server] Why that arp_cache consumes memory followed with uploading objects?

2012-07-12 Thread Kuo Hugo
Hi all

I found that the arp_cache entry in slabinfo on the object server keeps
growing along with the number of uploaded objects.

Does any code use it?

   OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
2352000 1329606  56%    0.06K  36750       64    147000K kmalloc-64
1566617 1257226  80%    0.21K  42341       37    338728K xfs_ili
1539808 1257748  81%    1.00K  48119       32   1539808K xfs_inode
 538432  470882  87%    0.50K  16826       32    269216K kmalloc-512
 403116  403004  99%    0.19K   9598       42     76784K dentry
 169250  145824  86%    0.31K   6770       25     54160K arp_cache


Could it cause any performance concern?

Btw, how could I flush the arp_cache memory that is being used along with
XFS (Swift)?
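For what it's worth, the slab caches above are kernel memory; the
reclaimable ones (dentries/inodes) can be released with drop_caches, while
the ARP table itself is flushed through the ip tool. A sketch, assuming
root:

slabtop -o | head -20                        # snapshot of the largest slab caches
grep -E 'xfs_inode|arp_cache' /proc/slabinfo
sync && echo 2 > /proc/sys/vm/drop_caches    # drop reclaimable slab objects (dentries/inodes)
ip -s -s neigh flush all                     # flush the ARP/neighbour cache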


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] [Storage node] Lots of timeouts in load test after several hours around 1, 000, 0000 operations

2012-07-10 Thread Kuo Hugo
Hello , folks

It seems most of the time is consumed by the following code in
obj/server.py:

iter(lambda: reader(self.network_chunk_size), '')

L591 - L605
https://github.com/openstack/swift/blob/master/swift/obj/server.py#L591



10323  Jul 10 14:34:42 object-server WTF: InitTime: 0.000183#012
SavingTime: 0.055627#012 OS-Write 0.15 MetadataTime: 15.848296#012
UpdateContainerTime: 0.042656 X-Trans-ID :
tx7a2181d0e9444ef5a13f9f60f657288f

10324  Jul 10 14:34:42 object-server WTF: InitTime: 0.000248#012
SavingTime: 0.862101#012 OS-Write 0.14 MetadataTime: 0.089192#012
UpdateContainerTime: 0.003802 X-Trans-ID :
tx37f8a2e958734083ba064f898e9fdcb2

*10325  Jul 10 14:34:42 object-server WTF: InitTime: 0.000379#012
SavingTime: 14.094034#012 OS-Write 0.13 MetadataTime: 0.033566#012
UpdateContainerTime: 0.004655 X-Trans-ID :
tx9ef952731e5e463daa05a0c973907f32*

10326  Jul 10 14:34:42 object-server WTF: InitTime: 0.000310#012
SavingTime: 0.801216#012 OS-Write 0.17 MetadataTime: 0.122491#012
UpdateContainerTime: 0.008453 X-Trans-ID :
tx6a5a0c634bf9439282ea4736e7ba7422

10327  Jul 10 14:34:42 object-server WTF: InitTime: 0.000176#012
SavingTime: 0.006937#012 OS-Write 0.11 MetadataTime: 15.642381#012
UpdateContainerTime: 0.297634 X-Trans-ID :
tx1b0f4e03daef48d68cbfdc6c6e915a0b
10328  Jul 10 14:34:42 object-server WTF: InitTime: 0.000268#012
SavingTime: 0.012993#012 OS-Write 0.16 MetadataTime: 0.001211#012
UpdateContainerTime: 0.001846 X-Trans-ID :

As the results above show, there is the occasional request that drags the
average speed down.

What would cause iter(lambda: reader(self.network_chunk_size), '') to
consume that much time?

Too many files in XFS, or something else? Could it possibly be a bug?


Thanks


2012/7/4 Kuo Hugo 

> I found that updater and replicator could improve this issue.
>
> In my original practice , for getting best performance , I only start main
> workers ( account-server , container-server , object-server) , And keep
> upload / download / delete objects over 100 times.
>
> Issues:
>
> 1. XFS or Swift consumes lots of memory for some reason , does anyone know
> what's been cached(or buffered , cached usage is not too much though) in
> memory in this practice ? After running container/object replicator , those
> memory all released. I'm curious the contents in memory . Is that all about
> object's metadata or something else?
>
> 2. Plenty of 10s timeout in proxy-server's log . Due to timeout for
> getting final status of put object from storage node.
> At beginning , object-workers complain about 3s timeout for updating
> container (async later). but there's not too much complains . As more and
> more put / get / delete  operations , more and more timeout happend.
> Seems that updater can improve this issue.
> Does this behavior related to the number of data in pickle ?
>
>
> Thanks
> Hugo
>
>
> 2012/7/2 Kuo Hugo 
>
>> Hi all ,
>>
>> I did several loading tests for swift in recent days.
>>
>> I'm facing an issue ... Hope you can share you consideration to me
>> ...
>>
>> My environment:
>> Swift-proxy with Tempauth in one server : 4 cores/32G rams
>>
>> Swift-object + Swift-account + Swift-container in storage node * 3 , each
>> for : 8 cores/32G rams   2TB SATA HDD * 7
>>
>> =
>> bench.conf :
>>
>> [bench]
>> auth = http://172.168.1.1:8082/auth/v1.0
>> user = admin:admin
>> key = admin
>> concurrency = 200
>> object_size = 4048
>> num_objects = 10
>> num_gets = 10
>> delete = yes
>> =
>>
>> After 70 rounds .
>>
>> PUT operations get lots of failures , but GET still works properly
>> *ERROR log:*
>> Jul  1 04:35:03 proxy-server ERROR with Object server
>> 192.168.100.103:36000/DISK6 re: Trying to get final status of PUT to
>> /v1/AUTH_admin/af5862e653054f7b803d8cf1728412d2_6/24fc2f997bcc4986a86ac5ff992c4370:
>> Timeout (10s) (txn: txd60a2a729bae46be9b667d10063a319f) (client_ip:
>> 172.168.1.2)
>> Jul  1 04:34:32 proxy-server ERROR with Object server
>> 192.168.100.103:36000/DISK2 re: Expect: 100-continue on
>> /AUTH_admin/af5862e653054f7b803d8cf1728412d2_19/35993faa53b849a89f96efd732652e31:Timeout
>>  (10s)
>>
>>
>> And kernel starts to report failed message as below
>> *kernel failed log:*
>> 7 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.020736] w83795
>> 0-002f: Failed to read from register 0x03c, err -6
>>76667 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.052654]
>> w83795 0-002f: Failed to rea

Re: [Openstack] [Swift] [Storage node] Lots of timeouts in load test after several hours around 1, 000, 0000 operations

2012-07-03 Thread Kuo Hugo
I found that running the updaters and replicators can improve this issue
(a command sketch follows the issues below).

In my original practice, to get the best performance I only started the
main workers (account-server, container-server, object-server), and then
kept uploading / downloading / deleting objects for over 100 rounds.

Issues:

1. XFS or Swift consumes a lot of memory for some reason. Does anyone know
what is being cached (or buffered; the reported "cached" usage is not that
large) in memory in this setup? After running the container/object
replicators, all that memory is released. I'm curious what the memory
holds: is it all object metadata, or something else?

2. There are plenty of 10s timeouts in the proxy-server's log, caused by
timeouts while getting the final status of a PUT from the storage node.
At the beginning the object workers complain about the 3s timeout for
updating the container (deferred to an async pending), but there are not
too many complaints. As more and more PUT / GET / DELETE operations run,
more and more timeouts happen.
It seems the updater can improve this issue.
Is this behavior related to the amount of data in the async pending pickles?
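For reference, a minimal way to run those consistency daemons one pass at a
time (assuming the standard swift-init service names):

swift-init object-updater once
swift-init container-updater once
swift-init object-replicator once
swift-init container-replicator once
swift-init account-replicator once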


Thanks
Hugo


2012/7/2 Kuo Hugo 

> Hi all ,
>
> I did several loading tests for swift in recent days.
>
> I'm facing an issue ... Hope you can share you consideration to me ...
>
> My environment:
> Swift-proxy with Tempauth in one server : 4 cores/32G rams
>
> Swift-object + Swift-account + Swift-container in storage node * 3 , each
> for : 8 cores/32G rams   2TB SATA HDD * 7
>
> =
> bench.conf :
>
> [bench]
> auth = http://172.168.1.1:8082/auth/v1.0
> user = admin:admin
> key = admin
> concurrency = 200
> object_size = 4048
> num_objects = 10
> num_gets = 10
> delete = yes
> =
>
> After 70 rounds .
>
> PUT operations get lots of failures , but GET still works properly
> *ERROR log:*
> Jul  1 04:35:03 proxy-server ERROR with Object server
> 192.168.100.103:36000/DISK6 re: Trying to get final status of PUT to
> /v1/AUTH_admin/af5862e653054f7b803d8cf1728412d2_6/24fc2f997bcc4986a86ac5ff992c4370:
> Timeout (10s) (txn: txd60a2a729bae46be9b667d10063a319f) (client_ip:
> 172.168.1.2)
> Jul  1 04:34:32 proxy-server ERROR with Object server
> 192.168.100.103:36000/DISK2 re: Expect: 100-continue on
> /AUTH_admin/af5862e653054f7b803d8cf1728412d2_19/35993faa53b849a89f96efd732652e31:Timeout
>  (10s)
>
>
> And kernel starts to report failed message as below
> *kernel failed log:*
> 7 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.020736] w83795
> 0-002f: Failed to read from register 0x03c, err -6
>76667 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.052654]
> w83795 0-002f: Failed to read from register 0x015, err -6
>76668 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.080613]
> w83795 0-002f: Failed to read from register 0x03c, err -6
>76669 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.112583]
> w83795 0-002f: Failed to read from register 0x016, err -6
>76670 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.144517]
> w83795 0-002f: Failed to read from register 0x03c, err -6
>76671 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.176468]
> w83795 0-002f: Failed to read from register 0x017, err -6
>76672 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.208455]
> w83795 0-002f: Failed to read from register 0x03c, err -6
>76673 Jul  1 16:37:51 angryman-storage-03 kernel: [350840.240410]
> w83795 0-002f: Failed to read from register 0x01b, err -6
>76674 Jul  1 16:37:51 angryman-storage-03 kernel: [350840.272Jul  1
> 17:05:28 angryman-storage-03 kernel: imklog 6.2.0, log source  =
> /proc/kmsg started.
>
> PUTs become slower and slower , from 1,200/s to 200/s ...
>
> I'm not sure if this is a bug or that's the limitation of XFS. If it's an
> limit of XFS . How to improve it ?
>
> An additional question is XFS seems consume lots of memory , does anyone
> know about the reason of this behavior?
>
>
> Appreciate ...
>
>
> --
> +Hugo Kuo+
> tonyt...@gmail.com
> + 886 935004793
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] [Storage node] Lots of timeouts in load test after several hours around 1, 000, 0000 operations

2012-07-01 Thread Kuo Hugo
An update:
the "76667 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.052654]
w83795 0-002f: Failed to read from register 0x015, err -6" errors
seem to come from lm-sensors collecting hardware temperatures.
It might just be a system-level alert.



2012/7/2 John Dickinson 

> I hope you are able to get an answer. I'm traveling this week, so I won't
> have a chance to look in to it. I hope some of the other core devs will
> have a chance to help you find an answer.
>
> --John
>

Thanks John, I'll keep working on it ... Enjoy your day


>
>
> On Jul 1, 2012, at 2:03 PM, Kuo Hugo  wrote:
>
> Hi all ,
>
> I did several loading tests for swift in recent days.
>
> I'm facing an issue ... Hope you can share you consideration to me ...
>
> My environment:
> Swift-proxy with Tempauth in one server : 4 cores/32G rams
>
> Swift-object + Swift-account + Swift-container in storage node * 3 , each
> for : 8 cores/32G rams   2TB SATA HDD * 7
>
> =
> bench.conf :
>
> [bench]
> auth = http://172.168.1.1:8082/auth/v1.0
> user = admin:admin
> key = admin
> concurrency = 200
> object_size = 4048
> num_objects = 10
> num_gets = 10
> delete = yes
> =
>
> After 70 rounds .
>
> PUT operations get lots of failures , but GET still works properly
> *ERROR log:*
> Jul  1 04:35:03 proxy-server ERROR with Object server
> 192.168.100.103:36000/DISK6 re: Trying to get final status of PUT to
> /v1/AUTH_admin/af5862e653054f7b803d8cf1728412d2_6/24fc2f997bcc4986a86ac5ff992c4370:
> Timeout (10s) (txn: txd60a2a729bae46be9b667d10063a319f) (client_ip:
> 172.168.1.2)
> Jul  1 04:34:32 proxy-server ERROR with Object server
> 192.168.100.103:36000/DISK2 re: Expect: 100-continue on
> /AUTH_admin/af5862e653054f7b803d8cf1728412d2_19/35993faa53b849a89f96efd732652e31:Timeout
>  (10s)
>
>
> And kernel starts to report failed message as below
> *kernel failed log:*
> 7 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.020736] w83795
> 0-002f: Failed to read from register 0x03c, err -6
>76667 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.052654]
> w83795 0-002f: Failed to read from register 0x015, err -6
>76668 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.080613]
> w83795 0-002f: Failed to read from register 0x03c, err -6
>76669 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.112583]
> w83795 0-002f: Failed to read from register 0x016, err -6
>76670 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.144517]
> w83795 0-002f: Failed to read from register 0x03c, err -6
>76671 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.176468]
> w83795 0-002f: Failed to read from register 0x017, err -6
>76672 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.208455]
> w83795 0-002f: Failed to read from register 0x03c, err -6
>76673 Jul  1 16:37:51 angryman-storage-03 kernel: [350840.240410]
> w83795 0-002f: Failed to read from register 0x01b, err -6
>76674 Jul  1 16:37:51 angryman-storage-03 kernel: [350840.272Jul  1
> 17:05:28 angryman-storage-03 kernel: imklog 6.2.0, log source  =
> /proc/kmsg started.
>
> PUTs become slower and slower , from 1,200/s to 200/s ...
>
> I'm not sure if this is a bug or that's the limitation of XFS. If it's an
> limit of XFS . How to improve it ?
>
> An additional question is XFS seems consume lots of memory , does anyone
> know about the reason of this behavior?
>
>
> Appreciate ...
>
>
> --
> +Hugo Kuo+
> tonyt...@gmail.com
> + 886 935004793
>
>  ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] [Storage node] Lots of timeouts in load test after several hours around 1, 000, 0000 operations

2012-07-01 Thread Kuo Hugo
Hi all ,

I did several load tests for Swift in recent days.

I'm facing an issue ... I hope you can share your thoughts with me ...

My environment:
Swift-proxy with Tempauth in one server : 4 cores/32G rams

Swift-object + Swift-account + Swift-container in storage node * 3 , each
for : 8 cores/32G rams   2TB SATA HDD * 7
=
bench.conf :

[bench]
auth = http://172.168.1.1:8082/auth/v1.0
user = admin:admin
key = admin
concurrency = 200
object_size = 4048
num_objects = 10
num_gets = 10
delete = yes
=

After 70 rounds:

PUT operations get lots of failures, but GET still works properly.
*ERROR log:*
Jul  1 04:35:03 proxy-server ERROR with Object server
192.168.100.103:36000/DISK6 re: Trying to get final status of PUT to
/v1/AUTH_admin/af5862e653054f7b803d8cf1728412d2_6/24fc2f997bcc4986a86ac5ff992c4370:
Timeout (10s) (txn: txd60a2a729bae46be9b667d10063a319f) (client_ip:
172.168.1.2)
Jul  1 04:34:32 proxy-server ERROR with Object server
192.168.100.103:36000/DISK2 re: Expect: 100-continue on
/AUTH_admin/af5862e653054f7b803d8cf1728412d2_19/35993faa53b849a89f96efd732652e31:Timeout
(10s)


And the kernel starts to report failure messages as below:
*kernel failed log:*
7 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.020736] w83795
0-002f: Failed to read from register 0x03c, err -6
   76667 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.052654] w83795
0-002f: Failed to read from register 0x015, err -6
   76668 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.080613] w83795
0-002f: Failed to read from register 0x03c, err -6
   76669 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.112583] w83795
0-002f: Failed to read from register 0x016, err -6
   76670 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.144517] w83795
0-002f: Failed to read from register 0x03c, err -6
   76671 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.176468] w83795
0-002f: Failed to read from register 0x017, err -6
   76672 Jul  1 16:37:50 angryman-storage-03 kernel: [350840.208455] w83795
0-002f: Failed to read from register 0x03c, err -6
   76673 Jul  1 16:37:51 angryman-storage-03 kernel: [350840.240410] w83795
0-002f: Failed to read from register 0x01b, err -6
   76674 Jul  1 16:37:51 angryman-storage-03 kernel: [350840.272Jul  1
17:05:28 angryman-storage-03 kernel: imklog 6.2.0, log source  =
/proc/kmsg started.

PUTs become slower and slower, from 1,200/s to 200/s ...

I'm not sure if this is a bug or a limitation of XFS. If it's an XFS limit,
how can I improve it?

An additional question: XFS seems to consume lots of memory. Does anyone
know the reason for this behavior?


Appreciate ...


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] rsyslog daemon reloading causes swift related services hangs and CPU reach to 100%

2012-06-22 Thread Kuo Hugo
Nice suggestion.
We have decided to use UDP now.
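A minimal sketch of what logging over UDP looks like with the stock Python
SysLogHandler (a host/port address uses a UDP socket by default); the address
and logger name here are only illustrative:

    import logging
    from logging.handlers import SysLogHandler

    # UDP avoids the blocking unix-socket reconnect path that the patch
    # quoted below touches; point this at wherever rsyslog listens for UDP.
    handler = SysLogHandler(address=('127.0.0.1', 514))
    logger = logging.getLogger('swift-test')
    logger.addHandler(handler)
    logger.error('object-server test message over UDP syslog')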

2012/6/22 MORITA Kazutaka 

> At Fri, 22 Jun 2012 00:00:26 +0800,
> Kuo Hugo wrote:
> >
> > Hi folks ,
> >
> > We're facing an issue related to the bug as below
> >
> > /dev/log rotations can cause object-server failures
> >
> https://bugs.launchpad.net/swift/+bug/780025
> >
> > My Swift version : 1.4.9
> >
> > But I found that not only object-server but also all swift related
> workers
> > those log through rsyslog.
> > There's a easy way to reproduce it ,
> > 1. Run swift-bench
> > 2. restart/stop rsyslog during swift-bench progress
> >
> > You can see that all CPU usage reach to 100%
> >
> > Should it be an additional bug ? If so , I can file it .
> >
> > Is there anyway to improve this behavior ? I expect that all swift
> workers
> > should keep working even though that rsyslog dead or restart.
>
> I've faced with the same problem and found that it was a bug of the
> python logging module.  I think the following patch against the module
> would solve the problem.
>
> diff --git a/logging/handlers.py b/logging/handlers.py
> index 756baf0..d2a042a 100644
> --- a/logging/handlers.py
> +++ b/logging/handlers.py
> @@ -727,7 +727,11 @@ class SysLogHandler(logging.Handler):
>          except socket.error:
>              self.socket.close()
>              self.socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
> -            self.socket.connect(address)
> +            try:
> +                self.socket.connect(address)
> +            except socket.error:
> +                self.socket.close()
> +                raise
>
>      # curious: when talking to the unix-domain '/dev/log' socket, a
>      #   zero-terminator seems to be required.  this string is placed
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift][performance degradation] How many objects for PUT/GET in swift-bench is reasonable to you ? I got lots of failure after 50000

2012-06-21 Thread Kuo Hugo
2012/6/22 Huang Zhiteng 

> On Fri, Jun 22, 2012 at 12:28 AM, Kuo Hugo  wrote:
> > Hi Folks ,
> >
> > I'm in progress of swift QA.
> >
> > Also interesting about the number of PUT/GET operation in your
> swift-bench
> > configuration.
> >
> > Well , I thought that swift should handle as much as I set in bench.conf.
> >
> > However , the performance degradation came after 4+
> >
> > Does my configuration reasonable in your mind ?
> >
> > [bench]
> > auth = http://%swift_ip%:8082/auth/v1.0
> > user = admin:admin
> > key = admin
> > concurrency = 100
> > object_size = 4
> > num_objects = 10
> > num_gets = 10
> > delete = yes
> >
> >
> > The  performance degradation of swift
> >PUT result from 1200/s to 400/s
> >GET result from 1800/s to 800/s (but failures around 400)
> >DELETE result from 800/s to 300/s (lots of failures)
> >
> > 1. Does my configuration is reasonable in reality ?
> You didn't give us the configuration of you Swift. :)
>
Hi Winston,
4 servers, each with 32 GB RAM and a 4-core i5.
1 Gb NICs

> >
> > 2. I saw that most of failures is been log as "object-server failled to
> > connect to %storage_ip%:%port&/%device% ... connection timeout(0.5)"
> > What's the reason cause the kind of timeout? Also the loading is very
> > low at this period and almost 0 request send to storage-nodes.
> >
> > 3. Another odd behavior , swift proxy does not consistent send requests
> to
> > storage-nodes . In my 10 PUT period. The storage-node's loading is
> > not balanced .
> > It might in 70% loading and 10% in next second.  Seems is a
> periodically
> > behavior.
> > I'm really confusing about this issue.
> One generic suggestion is you may try turning auditor/replicator
> service and re-do your test to see if it performs any better.
>
Only the main workers are running during the test.
"Main workers" means swift-account-server / swift-container-server /
swift-object-server.


> >
> >
>

Thanks

> >
> > --
> > +Hugo Kuo+
> > tonyt...@gmail.com
> > +886 935004793
> >
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> >
>
>
>
> --
> Regards
> Huang Zhiteng
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift][performance degradation] How many objects for PUT/GET in swift-bench is reasonable to you ? I got lots of failure after 50000

2012-06-21 Thread Kuo Hugo
Hi Folks ,

I'm in the middle of Swift QA.

I'm also interested in the number of PUT/GET operations in your swift-bench
configurations.

Well, I thought that Swift should handle as many as I set in bench.conf.

However, the performance degradation came after 4+.

Does my configuration seem reasonable to you?

[bench]
auth = http://%swift_ip%:8082/auth/v1.0
user = admin:admin
key = admin
concurrency = 100
object_size = 4
num_objects = 10
num_gets = 10
delete = yes


The  performance degradation of swift
   PUT result from 1200/s to 400/s
   GET result from 1800/s to 800/s (but failures around 400)
   DELETE result from 800/s to 300/s (lots of failures)

1. Is my configuration reasonable in reality?

2. I saw that most of the failures are logged as "object-server failed to
connect to %storage_ip%:%port%/%device% ... connection timeout(0.5)".
What causes this kind of timeout? Also, the load is very low during this
period, and almost no requests are sent to the storage nodes.

3. Another odd behavior: the Swift proxy does not send requests to the
storage nodes consistently. During my 10 PUT period, the storage nodes'
load is not balanced; it might be at 70% load one second and 10% the next.
It seems to be a periodic behavior.
I'm really confused about this issue.



-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] rsyslog daemon reloading causes swift related services hangs and CPU reach to 100%

2012-06-21 Thread Kuo Hugo
Hi folks ,

We're facing an issue related to the bug as below

/dev/log rotations can cause object-server failures
https://bugs.launchpad.net/swift/+bug/780025

My Swift version : 1.4.9

But I found that it affects not only the object-server but all Swift-related
workers that log through rsyslog.
There's an easy way to reproduce it:
1. Run swift-bench
2. Restart/stop rsyslog while swift-bench is in progress

You can see all CPU usage reach 100%.

Should this be filed as an additional bug? If so, I can file it.

Is there any way to improve this behavior? I expect all Swift workers to
keep working even if rsyslog dies or restarts.

Any suggestion would be appreciated :)

Thanks


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Where's Object's metadata located ?

2012-06-18 Thread Kuo Hugo
Hi Adrian ,

Thanks for your explanation ...

About Q2, the manifest question:
Is there any audit mechanism to delete the segments of a failed upload?
What if the upload procedure is interrupted by the user?
As you said, I think the segments are still available for access.
On the other hand, it means those segment objects will stay on disk on the
object servers, even when the upload failure is caused by the user himself.

Is there any approach to remove those segment objects? Otherwise they might
waste some disk space.
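In the meantime a manual cleanup is easy to script. A rough sketch with
python-swiftclient; the auth endpoint, credentials and container name are
placeholders, and note this blindly deletes everything still listed in the
segments container:

    from swiftclient import client

    conn = client.Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                             user='admin:admin', key='admin')

    # List whatever segments were left behind by the aborted upload and
    # delete them by hand.
    _, leftovers = conn.get_container('con1_segments')
    for obj in leftovers:
        conn.delete_object('con1_segments', obj['name'])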

Thanks


2012/6/17 Adrian Smith 

> > Q1: Where's the metadata of an object ?
> It's stored in extended attributes on the filesystem itself. This is
> reason XFS (or other filesystem supporting extended attributes) is
> required.
>
> > Could I find the value?
> Sure. You just need some way of a) identifying the object on disk and,
> b) a means of querying the extended metadata (using for example the
> python xattrs package).
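A minimal sketch of that lookup with the python xattr package; the object
path below is illustrative, and the attribute key name is an assumption about
how Swift stores the pickled metadata:

    import pickle
    import xattr

    # Swift pickles object metadata into the user.swift.metadata xattr
    # (large values may spill over into user.swift.metadata1, 2, ...).
    obj_path = '/srv/node/sdb1/objects/80228/abc/<hash>/1341100000.00000.data'
    raw = xattr.getxattr(obj_path, 'user.swift.metadata')
    metadata = pickle.loads(raw)
    print(metadata.get('X-Object-Manifest'))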
>
> > Q2: What if a large object be interrupted during upload , will the
> manifest objects be deleted?
> Large objects (i.e. those > 5Gb) must be split up client-side and the
> segments uploaded individually. When all the segments are uploaded the
> manifest must then be created by the client. What I'm trying to get at
> is that each segment and even the manifest are completely independent
> objects. A failure during the upload of any one segment has no impact
> on other segments or the manifest.
>
> Adrian
>
>
> On 16 June 2012 09:53, Kuo Hugo  wrote:
> > Hi folks ,
> >
> > Q1:
> > Where's the metadata of an object ?
> > For example the "X-Object-Manifest". Does it store in inode ?
> > I did not see the matadata "X-Object=Manifest" in container's DB.
> >
> >  Could I find the value?
> >
> > Q2:
> > What if a large object be interrupted during upload , will the manifest
> > objects be deleted?
> > For example ,
> > OBJ1 :200MB
> > I execute $>swift upload con1 OBJ1 -S 1024000
> > I do a force interrupt while send segment 10.
> >
> > I believe that OBJ1 won't live in con1 , what will happen to the rest
> > manifest objects?
> >
> > Those objects seems still live in con1_segments container. Is there any
> > mechanism to audit OBJ1 and delete those manifest objects ?
> >
> >
> >
> > --
> > +Hugo Kuo+
> > tonyt...@gmail.com
> > +886 935004793
> >
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> >
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] Where's Object's metadata located ?

2012-06-16 Thread Kuo Hugo
Hi folks ,

Q1:
Where is the metadata of an object stored?
For example, "X-Object-Manifest": is it stored in the inode?
I did not see the metadata "X-Object-Manifest" in the container's DB.

Could I find the value somewhere?

Q2:
If a large object upload is interrupted, will the segment objects be deleted?
For example:
OBJ1: 200MB
I execute $>swift upload con1 OBJ1 -S 1024000
I force an interrupt while segment 10 is being sent.

I believe OBJ1 won't live in con1; what will happen to the remaining
segment objects?

Those objects seem to still live in the con1_segments container. Is there any
mechanism to audit OBJ1 and delete those segment objects?



-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack][Keystone]Does legacy_auth v1.0 exist in Keystone Essex ?

2012-05-22 Thread Kuo Hugo
Hi folks ,

Does legacy_auth v1.0 exist in Keystone Essex?

Several client tools, such as Cyberduck or Gladinet, still use the v1.0
authentication method.

These applications look for the X-Auth-Token and X-Storage-Url headers to
access Swift.

Does this method still live in Keystone Essex?

-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Keystone][LDAP] Does LDAP driver support for validating subtree user?

2012-05-22 Thread Kuo Hugo
Thanks for your quick reply.

I'll review whether a subtree query is really necessary.

It really depends on the user's demands. I did some more research on AD and
LDAP structure design.

I found that if an enterprise has an existing AD server with a structure
as follows:

dc=foo,dc=com
   |__OU-HR
   | |_cn:hr-user1
   | |_cn:hr-user2
   | |_cn:hr-user3
   |
   |__OU-IT
 |_cn:it-user1
 |_cn:it-user2
 |_cn:it-user3

For such an LDAP structure, only the HR or the IT users could be validated,
depending on which OU user_tree_dn points to.

Is there any existing approach within LDAP to import users from one OU into
another OU, like the diagram below?


dc=foo,dc=com
   |__OU-HR
   | |_cn:hr-user1
   | |_cn:hr-user2
   | |_cn:hr-user3
   |
   |__OU-IT
   | |_cn:it-user1
   | |_cn:it-user2
   | |_cn:it-user3
   |
   |
   |__OU-Keystone-Users
|_cn:it-user1
|_cn:hr-user1

If so, I can point user_tree_dn at ou=OU-Keystone-Users.
Any suggestions?
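For reference, the behaviour boils down to the LDAP search scope. A short
python-ldap sketch using the example from my first mail; the server URI and
bind credentials are placeholders:

    import ldap

    conn = ldap.initialize('ldap://ldap.taiwan.com')
    conn.simple_bind_s('cn=admin,dc=taiwan,dc=com', 'secret')

    base = 'ou=foo,dc=taiwan,dc=com'   # what user_tree_dn points at
    flt = '(cn=jordan)'                # entry lives under ou=bar,ou=foo,...

    # A one-level search (what a non-subtree lookup amounts to) misses the
    # nested user; a subtree search would find it.
    one_level = conn.search_s(base, ldap.SCOPE_ONELEVEL, flt)
    subtree = conn.search_s(base, ldap.SCOPE_SUBTREE, flt)
    print('%d vs %d entries found' % (len(one_level), len(subtree)))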

Cheers


2012/5/22 Adam Young 

>  On 05/22/2012 07:07 AM, Kuo Hugo wrote:
>
> Hi Folks ,
>
>  I have try with keystone backend by LDAP and Windows AD.
>
>  It looks fine . Just want to clarify one point.
>
>  For my test result , LDAP driver could only validate users in the
> particular container (OU,CN etc.)  and does not include the subtree users.
>
>  [ldap]
>  tree_dn = dc=taiwan,dc=com
> user_tree_dn = ou=foo,dc=taiwan,dc=com
>
>
>  For example 
> User1 :  cn=jeremy,ou=foo,dc=taiwan,dc=com
>
>  User2 :  cn=jordan,ou=bar,ou=foo,dc=taiwan,dc=com
>
> User1 could be validated , and get the token generated by keystone.
> User2 could not be validated
>
>
>  Is there any way to validate both User1 and User2  in current design ?
>
>
> No, there is not.  Queries are not done against subtrees.
>
> If this is important to you,  please file a ticket:
> https://bugs.launchpad.net/keystone/+filebug
>
>
>
>
>
>  --
> +Hugo Kuo+
> tonyt...@gmail.com
>  + 886 935004793
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack][Keystone][LDAP] Does LDAP driver support for validating subtree user?

2012-05-22 Thread Kuo Hugo
Hi Folks ,

I have tried the Keystone backend with LDAP and Windows AD.

It looks fine; I just want to clarify one point.

From my test results, the LDAP driver can only validate users in the
specified container (OU, CN, etc.) and does not include subtree users.

[ldap]
tree_dn = dc=taiwan,dc=com
user_tree_dn = ou=foo,dc=taiwan,dc=com


For example:
User1: cn=jeremy,ou=foo,dc=taiwan,dc=com

User2: cn=jordan,ou=bar,ou=foo,dc=taiwan,dc=com

User1 can be validated and gets a token generated by Keystone.
User2 cannot be validated.


Is there any way to validate both User1 and User2 in the current design?


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Looking for an approach to attach an account or a container in operating system . Like a NAS of SAN driver.

2012-05-07 Thread Kuo Hugo
Hi Justin,
I gave your version a try for Keystone authentication via the 2.0 API.

It seems to do a "GET" request to Keystone.

Shouldn't it be a "POST"?
http://docs.openstack.org/api/openstack-identity-service/2.0/content/POST_authenticate_v2.0_tokens_Service_API_Client_Operations.html

Hugo



2012/4/13 Justin Santa Barbara 

> I made some patches to cloudfuse to get it to support Keystone auth:
> http://blog.justinsb.com/blog/2012/03/29/openstack-storage-with-fuse/
>
> I'd like to get this merged upstream, but haven't heard anything on my
> pull request...  Redbo?  Bueller? :-)
>
> I also saw an SFTP gateway floating about somewhere; that also seemed like
> a good approach.
>
> Should we make cloudfuse an openstack hosted project?
>
> Frederik:  If you do have any issues that should be fixed, I'd love to
> know what they are!
>
> Justin
>
>
>
>
> On Thu, Apr 12, 2012 at 10:03 AM, Frederik Van Hecke 
> wrote:
>
>> Hi Kuo,
>>
>> Here are some quick links:
>>
>> https://github.com/redbo/cloudfuse
>> http://gladinet.blogspot.com/2010/10/openstack-windows-client.html
>>
>>
>> I'm running cloudfuse on Ubuntu without much to complain about.
>>
>>
>>
>> Kind regards,
>> Frederik Van Hecke
>>
>> *T:*  +32487733713
>> *E:*  frede...@cluttr.be
>> *W:* www.cluttr.be
>>
>>
>>
>> *This e-mail and any attachments thereto may contain information which is 
>> confidential and/or protected by intellectual property rights and are 
>> intended for the sole use of the recipient(s)named above. Any use of the 
>> information contained herein (including, but not limited to, total or 
>> partial reproduction, communication or distribution in any form) by persons 
>> other than the designated recipient(s) is prohibited. If you have received 
>> this e-mail in error, please notify the sender either by telephone or by 
>> e-mail and delete the material from any computer. Thank you for your 
>> cooperation.*
>>
>>
>>
>>
>> On Thu, Apr 12, 2012 at 17:57, Kuo Hugo  wrote:
>>
>>> I'm keeping in search and think about  the easiest way for users to
>>> leverage their swift account.
>>>
>>> There has several client applications for accessing swift around .
>>> Either Windows / Linux / Mac.
>>>
>>> But eventually , user still need to install a client for this purpose.
>>>
>>> What if user can attach their own account from swift-proxy (or something
>>> else) directly via NFS or CIFS  or iscsi target will be much better.
>>>
>>> As I know , both linux and Windows has application to reach  this
>>> target.
>>> Such as Gladinet Server / S3FS etc.
>>>
>>> How about provide such interfaces in Swift-proxy ?
>>>
>>> If there's an exist discussion ticket , please let me know it .
>>>
>>> Cheers
>>>
>>>
>>> --
>>> +Hugo Kuo+
>>> tonyt...@gmail.com
>>> + 886 935004793
>>>
>>>
>>> ___
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to : openstack@lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help   : https://help.launchpad.net/ListHelp
>>>
>>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Looking for an approach to attach an account or a container in operating system . Like a NAS of SAN driver.

2012-04-12 Thread Kuo Hugo
SFTP looks interesting.

I plan to give your cloudfuse branch a try in later testing.

Any findings will come back to this thread.

Justin Santa Barbara  於 2012年4月13日上午5:40 寫道:

> I made some patches to cloudfuse to get it to support Keystone auth:
> http://blog.justinsb.com/blog/2012/03/29/openstack-storage-with-fuse/
>
> I'd like to get this merged upstream, but haven't heard anything on my
> pull request...  Redbo?  Bueller? :-)
>
> I also saw an SFTP gateway floating about somewhere; that also seemed like
> a good approach.
>
> Should we make cloudfuse an openstack hosted project?
>
> Frederik:  If you do have any issues that should be fixed, I'd love to
> know what they are!
>
> Justin
>
>
>
>
> On Thu, Apr 12, 2012 at 10:03 AM, Frederik Van Hecke 
> wrote:
>
>> Hi Kuo,
>>
>> Here are some quick links:
>>
>> https://github.com/redbo/cloudfuse
>> http://gladinet.blogspot.com/2010/10/openstack-windows-client.html
>>
>>
>> I'm running cloudfuse on Ubuntu without much to complain about.
>>
>>
>>
>> Kind regards,
>> Frederik Van Hecke
>>
>> *T:*  +32487733713
>> *E:*  frede...@cluttr.be
>> *W:* www.cluttr.be
>>
>>
>>
>> *This e-mail and any attachments thereto may contain information which is 
>> confidential and/or protected by intellectual property rights and are 
>> intended for the sole use of the recipient(s)named above. Any use of the 
>> information contained herein (including, but not limited to, total or 
>> partial reproduction, communication or distribution in any form) by persons 
>> other than the designated recipient(s) is prohibited. If you have received 
>> this e-mail in error, please notify the sender either by telephone or by 
>> e-mail and delete the material from any computer. Thank you for your 
>> cooperation.*
>>
>>
>>
>>
>> On Thu, Apr 12, 2012 at 17:57, Kuo Hugo  wrote:
>>
>>> I'm keeping in search and think about  the easiest way for users to
>>> leverage their swift account.
>>>
>>> There has several client applications for accessing swift around .
>>> Either Windows / Linux / Mac.
>>>
>>> But eventually , user still need to install a client for this purpose.
>>>
>>> What if user can attach their own account from swift-proxy (or something
>>> else) directly via NFS or CIFS  or iscsi target will be much better.
>>>
>>> As I know , both linux and Windows has application to reach  this
>>> target.
>>> Such as Gladinet Server / S3FS etc.
>>>
>>> How about provide such interfaces in Swift-proxy ?
>>>
>>> If there's an exist discussion ticket , please let me know it .
>>>
>>> Cheers
>>>
>>>
>>> --
>>> +Hugo Kuo+
>>> tonyt...@gmail.com
>>> + 886 935004793
>>>
>>>
>>> ___
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to : openstack@lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help   : https://help.launchpad.net/ListHelp
>>>
>>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Looking for an approach to attach an account or a container in operating system . Like a NAS of SAN driver.

2012-04-12 Thread Kuo Hugo
Thanks, I will have a look at cloudfuse.

I tried the Gladinet solution already, but it does not support the NFS
protocol currently.

Frederik Van Hecke  於 2012年4月13日上午1:03 寫道:

> Hi Kuo,
>
> Here are some quick links:
>
> https://github.com/redbo/cloudfuse
> http://gladinet.blogspot.com/2010/10/openstack-windows-client.html
>
>
> I'm running cloudfuse on Ubuntu without much to complain about.
>
>
>
> Kind regards,
> Frederik Van Hecke
>
> *T:*  +32487733713
> *E:*  frede...@cluttr.be
> *W:* www.cluttr.be
>
>
>
> *This e-mail and any attachments thereto may contain information which is 
> confidential and/or protected by intellectual property rights and are 
> intended for the sole use of the recipient(s)named above. Any use of the 
> information contained herein (including, but not limited to, total or partial 
> reproduction, communication or distribution in any form) by persons other 
> than the designated recipient(s) is prohibited. If you have received this 
> e-mail in error, please notify the sender either by telephone or by e-mail 
> and delete the material from any computer. Thank you for your cooperation.*
>
>
>
>
> On Thu, Apr 12, 2012 at 17:57, Kuo Hugo  wrote:
>
>> I'm keeping in search and think about  the easiest way for users to
>> leverage their swift account.
>>
>> There has several client applications for accessing swift around . Either
>> Windows / Linux / Mac.
>>
>> But eventually , user still need to install a client for this purpose.
>>
>> What if user can attach their own account from swift-proxy (or something
>> else) directly via NFS or CIFS  or iscsi target will be much better.
>>
>> As I know , both linux and Windows has application to reach  this target.
>> Such as Gladinet Server / S3FS etc.
>>
>> How about provide such interfaces in Swift-proxy ?
>>
>> If there's an exist discussion ticket , please let me know it .
>>
>> Cheers
>>
>>
>> --
>> +Hugo Kuo+
>> tonyt...@gmail.com
>> + 886 935004793
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] control user quota

2012-04-12 Thread Kuo Hugo
nova-manage
http://nova.openstack.org/runnova/nova.manage.html

nova-manage project quota <project>

Example:
# list xin-project's quota
$>nova-manage project quota xin-project
This will return several key/value pairs.

# modify a key with a new value
$>nova-manage project quota xin-project --key=???  --value=???

https://github.com/openstack/nova/blob/master/nova/quota.py

For the configuration flags, please refer to quota.py at the link above.


Xin Zhao  於 2012年4月13日上午12:15 寫道:

>  Hi Kuo,
>
> Could you give more details, like the commands used, and settings in the
> config file ? I can't find a good example for them.
>
> Thanks,
> Xin
>
>
> On 4/12/2012 12:04 PM, Kuo Hugo wrote:
>
> I did a quick test in Essex .
> The process almost same as before(Cactus/Diablo)
>
>  1. Manage Quota for a specified "Tenant" from nova-manage .
> 2. Manage Default Quota parameters from nova.conf with several flags
> 3. Hacking Nova source code quota.py for default values.
>
>
>
>  Hope it helps.
>
>
>
> Xin Zhao  於 2012年4月12日下午10:27 寫道:
>
>> Hello,
>>
>> I try to assign quota to individual users, to control how many instances
>> each user can run concurrently. But I don't see a doc describing how to do
>> that. I use diablo release.
>> Any help or doc pointer will be greatly appreciated.
>>
>> Xin
>>
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>
>
>
>  --
> +Hugo Kuo+
> tonyt...@gmail.com
>  + 886 935004793
>
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] control user quota

2012-04-12 Thread Kuo Hugo
I did a quick test in Essex.
The process is almost the same as before (Cactus/Diablo):

1. Manage Quota for a specified "Tenant" from nova-manage .
2. Manage Default Quota parameters from nova.conf with several flags
3. Hacking Nova source code quota.py for default values.



Hope it helps.



Xin Zhao  於 2012年4月12日下午10:27 寫道:

> Hello,
>
> I try to assign quota to individual users, to control how many instances
> each user can run concurrently. But I don't see a doc describing how to do
> that. I use diablo release.
> Any help or doc pointer will be greatly appreciated.
>
> Xin
>
>
> __**_
> Mailing list: 
> https://launchpad.net/~**openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : 
> https://launchpad.net/~**openstack
> More help   : 
> https://help.launchpad.net/**ListHelp
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [SWIFT] Looking for an approach to attach an account or a container in operating system . Like a NAS of SAN driver.

2012-04-12 Thread Kuo Hugo
I keep searching for, and thinking about, the easiest way for users to
leverage their Swift accounts.

There are several client applications for accessing Swift around, for
Windows / Linux / Mac.

But eventually, users still need to install a client for this purpose.

It would be much better if users could attach their own account from the
swift-proxy (or something else) directly via NFS, CIFS, or an iSCSI target.

As far as I know, both Linux and Windows have applications that reach this
target, such as Gladinet Server, S3FS, etc.

How about providing such interfaces in the swift-proxy?

If there's an existing discussion ticket, please let me know.

Cheers


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] Does there any exist blueprint or sub-project of user's storage space quota or counting method for Swift ?

2012-04-12 Thread Kuo Hugo
Hi folks ,

I'm thinking about a better approach to managing a user's or an account's
storage space quota in Swift.
Is there any related blueprint, sub-project, or even an idea around?
Any thoughts on whether it is better done as an external service or as a
middleware in the swift-proxy?

I'm concerned that such a feature would reduce the performance of the entire
Swift environment.
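As a starting point for the counting side, the account HEAD already reports
usage. A small sketch with python-swiftclient; the auth endpoint and
credentials are placeholders:

    from swiftclient import client

    conn = client.Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                             user='admin:admin', key='admin')

    # Swift keeps running usage totals per account, so a quota check could
    # start from these headers instead of walking every object.
    headers = conn.head_account()
    print('bytes used   : %s' % headers.get('x-account-bytes-used'))
    print('object count : %s' % headers.get('x-account-object-count'))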

Appreciate :>



-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack]Swift + Keystone + Cyberduck

2012-04-10 Thread Kuo Hugo
Actually, both the Cyberduck and Gladinet clients work with Keystone properly.
Is there any further information from your test?
What exactly do you need?

William Herry  於 2012年4月10日上午9:32 寫道:

> Hi
>
> I am try to use Cyberduck as the client of Swift storage, my swift use
> keystone as the auth system, any one has successful experience can share
> with me, or is there any other client software for swift
>
> In fact I can't make Cyberduck work when I use tempauth as the auth, which
> can work with cloudberry and cloudfuse, but as I know, in windows only
> Cyberduck can support Keystone
>
> Thanks
>
> William
>
> --
>
> ===
> William Herry
>
> williamherrych...@gmail.com
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] Is there any limitation for arranging Rings. Failed to do operation in this case

2012-04-02 Thread Kuo Hugo
Hi folks  ,

I'm facing an issue. Before filing a bug, I have to clarify something
about arranging rings.

My case:

There are three zones in this test. The replica counts are as below:
Objects: 2 replicas
Accounts: 3 replicas
Containers: 3 replicas

With this setup, I can check the status of an account, and the account's DB
properly has three replicas, one in each zone.
Once I try to upload an object, it hangs. Even using curl to PUT the
object, the result is the same.

After monitoring the storage disks, I saw that three empty temp files were
created, and then no more activity. All logs show no errors.

Has anyone tried to set up rings like this? Does this make sense?

Should it be filed as a bug?


Cheers
-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] 403 Forbidden integrate keystone essex with swift 1.4.9

2012-03-29 Thread Kuo Hugo
Well, it looks normal to me.

You specified that an operator must have the Member or admin role to access
Swift.
If a user is associated with a role named "watcher", it will be
blocked by the swift_auth filter.
That's the function of swift_auth, right? It looks like it matches
your expectation.

 於 2012年3月29日下午5:38 寫道:

> Hi all:
>
> The story is: when i integrate keystone essex with swift 1.4.9, i use
> swift_auth for authorization, the configuration is belows:
>
> [filter:keystone]
> paste.filter_factory = keystone.middleware.swift_auth:filter_factory
> operator_roles = Member,admin
>
> If I access the swift service using swift command with the user who has
> the "Member, admin" role, it works successfully, but if i access using a
> user who has another role, it get the "403 Forbidden".
>
>
> That is why?
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Have anyone integrated spice with openstack ?

2012-03-20 Thread Kuo Hugo
As far as I know, you have to change the libvirt and qemu-kvm versions.

For further information, please check
http://www.spice-space.org/download.html

From that page, under "Server":

The SPICE server code is needed when building SPICE support into QEMU.
0.10.x is the latest stable series. The 0.10.x releases contain the
addition of usb redirection (linux client only), semi-seamless migration,
disabled-by-default multiple client support, and 32 bit server support. It
should be available as a package in your favourite Linux distribution,
which is the preferred way of getting it.

There's some more work to do beyond that, though.



2012/3/21 suyi wang 

> Hi all:
> I want to use spice instead of vnc ,  but failed. Have anyone
> integrated spice with openstack ?  Could you share your knowledge with me ?
> Thanks a lot!
>
> --
> Yours.
> suyi
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Could tokens be cached in memcached on swift-proxy while auth by keystone ?

2012-03-13 Thread Kuo Hugo
Hi , folks

I'm confused about the usage of memcached on the Swift proxy server. There's
this description in the Swift deployment guide:
http://swift.openstack.org/deployment_guide.html?highlight=token#memcached-considerations

Memcached Considerations
Several of the Services rely on Memcached for caching certain types of
lookups, such as auth tokens, and container/account existence. Swift does
not do any caching of actual object data. Memcached should be able to run
on any servers that have available RAM and CPU. At Rackspace, we run
Memcached on the proxy servers. The memcache_servers config option in the
proxy-server.conf should contain all memcached servers.

As far as I know, TempAuth uses memcached to store tokens, so I always
thought the swift-proxy would cache "validated tokens" in memcached until
they expire.

That assumption stuck in my mind, but it does not seem to hold when
authenticating with Keystone. The swift-proxy does not cache validated
tokens in memcached; it always tries to validate the client's token by
querying Keystone for every request.
I'm interested in the reasons for this design. Isn't it better to let the
swift-proxy check the client's token in memcached first (roughly as in the
sketch after the list below)?

My reasons :
1. Decreasing keystone's loading
2. Faster response time for a request
3. Better bandwidth leverage
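What I have in mind is roughly the following, sketched with python-memcached;
the key layout and TTL are my own assumptions, not Swift's or Keystone's
actual scheme:

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])
    token = 'tk1234567890abcdef'

    cached = mc.get('cached_token/%s' % token)
    if cached is None:
        # only now fall back to validating against Keystone, then cache
        # the result until the token expires
        cached = {'tenant': 'demo', 'roles': ['Member']}
        mc.set('cached_token/%s' % token, cached, time=3600)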

So, with Keystone as the authentication server for Swift, memcached is in
charge of caching rings and container/account existence, but it is no longer
responsible for caching validated tokens. Am I right? Or did I just
miss some configuration for this option?

Any clarification would be great ...



Cheers
-- 
Hugo Kuo @ Taiwan

tonyt...@gmail.com
+ 886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Dashboard extension

2012-02-03 Thread Kuo Hugo
What do you mean?? And what do you want??

Send me a private mail and maybe I can help you look into the problem.



2012/2/2 

>
> Dear all,
>
> I have been read Horizon documentation, and I've tried to add a new
> dashboard by myself.
>
> But there are some question I didn't find the ans, could anyone help me to
> solve these problems.
>
> I've register a new panel at "Syspanel" dashboard, but there is an error
> msg. which said
>
> *Caught NotRegistered while rendering: Panel with slug "test" is not
> registered with Dashboard "Admin".*
>
> and exception in template
> /opt/stack/horizon/horizon/horizon/templates/horizon/common/_sidebar.htmlline 
> 28
>
>  *{% horizon_dashboard_nav %}*
>
> I've been google at this error msg, but can not find any ans. So help me
> to solve this problem pls.
>
> Michael Lin 2012.2.2
>
> -
> This email contains information that is for sole use of the intended
> recipient and may be confidential or privileged.
> If you are not the intended recipient, any further review, disclosure,
> copying, distribution, or use of this email, or the contents of this email
> is prohibited.
> Please notify the sender by reply this email and destroy the original
> email if your receipt of this email is in error.
> -
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift S3 with Keystone anyone?

2012-02-01 Thread Kuo Hugo
I would love to know more about this topic too.
push

Hugo Kuo

2012/2/2 Pete Zaitcev 

> Hello:
>
> Does anyone happen to have Swift running with S3 and Keystone? If yes,
> send me the proxy-server.conf, please. Also, I'd like to ask a few
> questions, if I may. I tried to piece it together from the code,
> but failed.
>
> The authentication is done with a special hook into Keystone. It supplies
> middleware, keystone/keystone/middleware/s3_token.py, which invokes
> a POST to v2 Keysone with OS-KSS3:s3Credentials, then sets a req. header
> X-Auth-Token. So far so good.
>
> However, how does it fit in with Swift? The actual S3 operations are
> implemented by swift/common/middleware/swift3.py, which rolls up the
> canonical string, then stuffs it into env['HTTP_X_AUTH_TOKEN'].
> The intent is, as I understand, to invoke the special purpose
> code in tempauth and thus is useless for Keystone. So, how is this
> supposed to work?
>
> I imagine the pipeline should look something like this:
>
>  [pipeline:main]
>  pipeline = healthcheck cache s3auth swift3 proxy-server
>
>  [filter:s3auth]
>  use = egg:keystone#swiftauth
>  service_protocol = http
>  service_host = 192.168.129.18
>  service_port = 5000
>
>  [filter:swift3]
>  use = egg:swift#swift3
>
> Except... There is no entry point for s3_auth in keystone egg.
>
> Documentation seems to be absent. I suppose I could put it together,
> if I got it all working at least once.
>
> Confused,
> -- Pete
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift client tool

2011-12-19 Thread Kuo Hugo
As far as I know, both Cyberduck and Gladinet provide this feature,
but Gladinet requires the Pro version to enable its sync and backup solution
with Swift.


2011/12/20 Tim Bell 

>
> A Dropbox like ‘sync’ function would be very interesting..  does anyone
> know one which is compatible with OpenStack Swift ?
>
>
> Tim
>
>
> *From:* openstack-bounces+tim.bell=cern...@lists.launchpad.net [mailto:
> openstack-bounces+tim.bell=cern...@lists.launchpad.net] *On Behalf Of *Kuo
> Hugo
> *Sent:* 19 December 2011 17:38
> *To:* Prakashan Korambath
>
> *Cc:* openstack@lists.launchpad.net
> *Subject:* Re: [Openstack] Swift client tool
>
>
> Several options
>
>
> 1. Cyberduck (for Mac & Win only) , swift will present like a FTP server
> user experience for you
>
> 2. Gladinet desktop (free version) , under gladinet , you might get feel
> swift more like a NAS device ... but only for static object files , not
> that easy to setup a gladinet compatible swift environment . It requires
> SSL and validate SSL certification .
>
> Only for Win OS
>
> 3. Under Linux , you can leverage swift client , the easiest way is
> #apt-get install swift
>
> 4. Write your own client by call swift client module 
>
> 5. Write your own client through swift API endpoint
>
> 6. Using OpenStack Dashboard , it includes Swift feature. but it requires
> keystone integration
>
> 7. develop your own Web server for access Swift 
>
>
> We can confirm all approaches above .  but might need to dig out some more
> tricky skill from google . 
>
>
> If your swift only for personal usage , you can easily install cyberduck
> to access swift. In my using , I just need to setup auth server endpoint
> manually in cyberduck's configuration file to point the correct auth server
> endpoint  which depends on your auth server . 
>
> more information plz goole it .   
>
> Feel free to drop your question over here . I'll have an answer for you as
> I can. 
>
>
> +Hugo Kuo+
>
> tonyt...@gmail.com
>
> hugo@cloudena.com
>
> +886-935-004-793
>
>
> www.cloudena.com
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift client tool

2011-12-19 Thread Kuo Hugo
Several options

1. Cyberduck (for Mac & Win only) , swift will present like a FTP server
user experience for you
2. Gladinet desktop (free version) , under gladinet , you might get feel
swift more like a NAS device ... but only for static object files , not
that easy to setup a gladinet compatible swift environment . It requires
SSL and validate SSL certification .
Only for Win OS
3. Under Linux , you can leverage swift client , the easiest way is
#apt-get install swift
4. Write your own client by call swift client module
5. Write your own client through swift API endpoint
6. Using OpenStack Dashboard , it includes Swift feature. but it requires
keystone integration
7. develop your own Web server for access Swift

We can confirm all approaches above .  but might need to dig out some more
tricky skill from google .

If your swift only for personal usage , you can easily install cyberduck to
access swift. In my using , I just need to setup auth server endpoint
manually in cyberduck's configuration file to point the correct auth server
endpoint  which depends on your auth server .
more information plz goole it .
Feel free to drop your question over here . I'll have an answer for you as
I can.


+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone + Swift integration

2011-11-26 Thread Kuo Hugo
Thanks Chmouel ,

a typo :>

I do mean v2.0

2011/11/26 Chmouel Boudjnah 

> On Sat, Nov 26, 2011 at 6:33 AM, Kuo Hugo  wrote:
> > For Swift API v1.0 + keystone ... an User could only access one  tenant..
> > But in Swift API v1.0 + keystone  your can access different tenants
> via
>  ^^^
> I guess you mean 2.0 and not 1.0 here.
>
> > "-U %tenant%:%USER%: hope it help
>
> Chmouel.
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone + Swift integration

2011-11-25 Thread Kuo Hugo
2011/11/26 Pete Zaitcev 

> On Wed, 23 Nov 2011 09:28:01 -0300
> Leandro Reox  wrote:
>
> > keystone-manage endpointTemplates add RegionOne swift
> > http://172.16.0.88:8080/v1/AUTH_%tenant_id% http://172.16.0.88:8080/
> > http://172.16.0.88:8080/v1/AUTH_%tenant_id% 1 1
>
> I'm curious, did you actually put the '%' (percent) into those URLs,
> or you replaced it with the appropriate tennant ID? The documentation
> (doc/configuration.rst) tells to use square brackets, not percents,
> which is probably another hint to substitute the actual tenant ID.
> But what to do if we have 2 tenants?
>
>
For Swift API v1.0 + Keystone ... a user could only access one tenant.

But in Swift API v1.0 + Keystone you can access different tenants via
"-U %tenant%:%USER%".

Hope it helps.


> -- Pete
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone + Swift integration

2011-11-23 Thread Kuo Hugo
We need more information.

Maybe you can check the Keystone log to verify where the 404 comes
from.

If the Keystone log tells you that it already returned 200 to Swift, I think
the 404 is returned by Swift.

The failing 404 request should be logged in both the proxy log and the
Keystone log.


-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone + Swift integration

2011-11-23 Thread Kuo Hugo
Hi Leandro ,

Posting on the Launchpad Q&A would be a better place, though. Please post it
on Launchpad and we will jump over there for further discussion.

1. Verify that your Keystone is running, via curl -v -H "X-Auth-User: MAX"
-H "X-Auth-Key: Infa" http://%keystone_IP%:5000/v1.0

If Keystone is working properly, it should return X-Auth-Token, X-Storage-Url,
etc.

After step 1:

2. Use the returned X-Auth-Token and X-Storage-Url in the following cURL
command:

   curl -k -v -H "X-Auth-Token: %token%" %Returned_X-Storage-Url%

The storage URL should look like
http://%Swift-proxy_IP%:8080/v1/AUTH_%tenant_id%, so I guess it will be:

   curl -v -k -H "X-Auth-Token: 1234567890" http://172.16.0.88:8080/v1/AUTH_%tenant_ID%


If so, I believe your problem is the protocol used by the Swift proxy: as
in your post, it is running under HTTPS according to your proxy-server.conf.

To disable SSL, comment out these lines in proxy-server.conf:

cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key

or, alternatively, correct the Swift endpoint in Keystone to use https.
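For reference, a minimal sketch of the relevant section (values are the
usual examples from this thread, not necessarily yours):

   [DEFAULT]
   bind_port = 8080
   # Comment these two lines out to serve plain HTTP from the proxy,
   # or keep them and register the endpoint as https:// in Keystone.
   # cert_file = /etc/swift/cert.crt
   # key_file = /etc/swift/cert.key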

If you deployed Keystone via the cloudbuilders/devstack script, you might
need to check the sample data: with the Swift client using v1.0 auth, the
Tenant_Id value must be present in the USERS table.

Hope it helps.
-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Swift test plan & test cases , need recommends

2011-10-18 Thread Kuo Hugo
Hello folks,
We are working hard on Swift development and testing right now.
Since there is currently only a Nova QA team, I'm wondering whether there
is any test plan around Swift.
Please also share your use cases and issues; we will run tests against
various Swift deployment architectures.
The most valuable recommendations are "problems": problems and questions
will make this project stronger, more valuable and more useful.
Anything would be helpful, and I'll report back the result for each
recommendation.

Feel free to drop any suggestions; I'd appreciate it.

Cloudena Swift team @ Taiwan
-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Several keystone questions ...

2011-10-10 Thread Kuo Hugo
Thanks, Grid Dynamics,
I'm already using it :> . It's really useful for firing up instances via
nova with the Grid Dynamics branch.
It would be great if the branch could also assign a keypair, and an
additional recommendation is that the nova client should fall back to a
default keypair if the user does not specify one when booting an instance.



2011/10/10 Dmitry Maslennikov 

> On Thu, Oct 6, 2011 at 3:49 PM, Kuo Hugo  wrote:
> > 3. We can using cloud with Dashbord smoothly for basic usage. So does
> > nova-client . I found that nova-client CLI tool could not
> > add-key-pair/show-keypair ...I think it's fine , just add it direct from
> API
> > endpoint .  But the problem is , "nova boot" can not assign key-pair
> while
> > fire up an instance.
> We did keypair creation in our build and are going to implement
> assignation in nova boot soon:
>
>
> http://openstackgd.wordpress.com/2011/10/06/using-nova-instead-of-eucatools-while-working-with-keypairs/
>
> --
> Dmitry Maslennikov
> Principal Software Engineer, Grid Dynamics
> SkypeID: maslennikovdm
> E-mail: dmaslenni...@griddynamics.com
> www.griddynamics.com
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Several keystone questions ...

2011-10-06 Thread Kuo Hugo
Hi folks,

After several days of work on keystone / python-novaclient / glance,
there are a few more questions I'd like to discuss with everyone.

1. What's the difference between the devstack
<https://github.com/cloudbuilders/python-novaclient> and rackspace
<https://github.com/rackspace/python-novaclient> versions of
python-novaclient? As far as I know, devstack's version was forked from
rackspace's, but when I install each of them, only devstack's
python-novaclient works with Keystone
<https://github.com/cloudbuilders/keystone>. With the rackspace version I
always get "Invalid OpenStack Nova Credentials". How come?

2. For now, a keystone server is running, and it seems to support only the
OSAPI. Am I right? I've dug through many docs that only talk about adding
Nova OSAPI endpoint templates, but none of them mention the EC2 API (even
though EC2 features are in the source code).

3. We can use the cloud smoothly through the Dashboard for basic usage, and
the same goes for novaclient. I found that the novaclient CLI tool cannot
add or show keypairs. I think that's fine, since they can be added directly
through the API endpoint, but the problem is that "nova boot" cannot assign
a keypair when firing up an instance.

4. The workflow between all the services and keystone; please correct me if
I'm wrong:
novaclient -> keystone -> nova -> keystone -> novaclient
glance client -> keystone -> glance -> keystone -> glance client
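For reference, the first hop of that workflow (the client obtaining a token
from Keystone) looks roughly like this; host, tenant and credentials are
placeholders:

   # The client exchanges credentials for a token, then sends that token to
   # nova/glance, which validate it against keystone.
   curl -s -X POST http://%keystone_IP%:5000/v2.0/tokens \
     -H "Content-Type: application/json" \
     -d '{"auth": {"tenantName": "%tenant%",
                   "passwordCredentials": {"username": "%user%",
                                           "password": "%password%"}}}'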


Cheers
Hugo Kuo




-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] RBAC handled by keystone or each services ?

2011-10-05 Thread Kuo Hugo
Hello folks,

While playing with Keystone, I see there are four roles named
[Admin, Member, KeystoneAdmin, KeystoneServiceAdmin].
I'm confused about who handles these roles' permissions / privileges.
I mean, RBAC in Nova includes the admin, itsec, projectmanager, netadmin
and developer roles, but not Admin/Member.
Is that handled by Keystone or by each service itself?

Is there any API to add roles (and also set their permissions / privileges)?

My guess is that RBAC still lives in each service (nova / swift), but then
how does Nova know the permissions of the "Admin" role?
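As one illustration of the per-service side of this (a sketch based on how
Swift's Keystone middleware later exposed it; the filter and option names
here are assumptions that depend on which middleware version you run), the
proxy decides which Keystone roles get operator rights in its own config:

   # proxy-server.conf (sketch only; names are assumptions)
   [filter:keystoneauth]
   use = egg:swift#keystoneauth
   # Keystone roles allowed full (operator) access to an account:
   operator_roles = admin, swiftoperator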


-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack] About the SLBs of Swift-proxy again ....pound could only handle peaking 600 connections ?

2011-09-27 Thread Kuo Hugo
Hello folks,
Following up on the previous discussion of Pound and Nginx as SLBs for Swift:

In our testing, Pound could only handle a peak of about 600 connections.

But when I send requests directly to a single Swift proxy, it can handle
over 2000 connections.
Given that result, how can Pound play the SLB role well in the Swift
ecosystem?
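For anyone who wants to reproduce a comparison like this, a sketch (the
token, URLs and object are placeholders, and ab is just one possible load
generator, not necessarily the one we used):

   # Run once against the Pound VIP and once against a single proxy, with
   # the same concurrency, and compare failures and latency.
   ab -n 20000 -c 600 -H "X-Auth-Token: %token%" \
      https://%pound_VIP%/v1/AUTH_%tenant%/c/o
   ab -n 20000 -c 600 -H "X-Auth-Token: %token%" \
      http://%proxy_IP%:8080/v1/AUTH_%tenant%/c/o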


Hugo Kuo

-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Fwd: Load Balancers for Swift Proxy-servers ----why Pound ?

2011-09-19 Thread Kuo Hugo
-- Forwarded message --
From: Kuo Hugo 
Date: 2011/9/19
Subject: Re: [Openstack] Load Balancers for Swift Proxy-servers why
Pound ?
To: Chuck Thier 


Hello Chuck,
That's a really helpful response for us.
I would love to do some more testing, and if we come up with further
considerations we will share them with you.
One more thing: do you have any suggestions on splitting the individual
workers onto separate bare-metal machines?
We currently have only proxy nodes / an auth node / storage nodes.
Would it be possible to isolate more workers for better performance?
Each storage node seems quite busy. As I understand it, the object server,
account server and container server workers should run on the same host
(as shown in the Cactus admin manual). Am I right?

I believe this discussion will help more stackers with choosing an SLB and
even with overall architecture design.

Thanks


2011/9/19 Chuck Thier 

> Howdy,
>
> In general Nginx is really good, and we like it a lot, but it has one
> design flaw that causes it to not work well with swift.  Nginx spools
> all requests, so if you are getting a lot large (say 5GB) uploads, it
> can be problematic.  In our testing a while ago, Pound proved to have
> the best SSL performance over a 10G link.  There are a couple of
> things that would be interesting to test that have come out since the
> last time that we did testing:
>
> 1.  Encryption offloading to the newer Intel chips with it built in.
> 2.  Yahoo's Traffic Server has since become better documented.
>
> Interesting .....


> --
> Chuck
>
> On Mon, Sep 19, 2011 at 6:47 AM, Kuo Hugo  wrote:
> > Hello , Stackers
> > I'm interesting about the reason of Pound as SLB(software Load balance)
> in
> > Swift docs.
> > Most articles talk about the performance of SLB , and Nginx seems the
> winner
> > of SLB battle .
> > Lower CPU usage / lots of connections etc
> > Does Pound has better performance for Swift ?
> > And if there's a clear comparison table would be great , really confusing
> > about that ..
> > In my study , Pound using 7 times of cpu usage than Nginx of SLB in same
> > conditions .
> > btw , most of articles just using HTTP instead of HTTPS.   Does Pound
> better
> > than Nginx under HTTPS?
> > Cheers
> > Hugo Kuo
> > --
> > +Hugo Kuo+
> > tonyt...@gmail.com
> > hugo@cloudena.com
> > +886-935-004-793
> > www.cloudena.com
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
> >
> >
>



-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com



-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Load Balancers for Swift Proxy-servers ----why Pound ?

2011-09-19 Thread Kuo Hugo
Hello, Stackers,

I'm interested in the reason for choosing Pound as the SLB (software load
balancer) in the Swift docs.
Most articles compare SLB performance, and Nginx seems to be the winner of
that battle: lower CPU usage, large numbers of connections, etc.
Does Pound have better performance for Swift?
A clear comparison table would be great; I'm really confused about this.
In my own testing, Pound used about 7 times the CPU of Nginx as an SLB
under the same conditions.

By the way, most articles only test HTTP instead of HTTPS. Is Pound better
than Nginx under HTTPS?
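For anyone repeating the comparison, a minimal Pound front end for two
Swift proxies looks roughly like this (addresses and certificate path are
placeholders):

   # /etc/pound/pound.cfg (sketch with placeholder addresses and cert)
   ListenHTTPS
       Address 0.0.0.0
       Port    443
       Cert    "/etc/pound/swift.pem"
       Service
           # first swift proxy
           BackEnd
               Address 10.0.0.11
               Port    8080
           End
           # second swift proxy
           BackEnd
               Address 10.0.0.12
               Port    8080
           End
       End
   End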

Cheers
Hugo Kuo

-- 
+Hugo Kuo+
tonyt...@gmail.com
hugo@cloudena.com
+886-935-004-793

www.cloudena.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Making Nova HA summit notes

2011-07-21 Thread Kuo Hugo
Thanks Vish

I'll test this new option in the next few days.

Hugo Kuo

2011/7/22 Vishvananda Ishaya 

> We just recently merged a new HA networking option.  See details in my blog
> post here:
>
> http://unchainyourbrain.com/openstack/13-networking-in-nova
>
> Vish
>
> On Jul 20, 2011, at 10:04 PM, Mike Scherbakov wrote:
>
> Hi,
> Thank you for the work on making nova components HA.
>
> Did you have a chance to move further in this topic?
> I especially interested in making nova-network HA and looking for possible
> active-active implementations,
> so the downtime of the service would be minimal.
>
> Thank you,
>
> On Tue, May 3, 2011 at 1:22 PM, Edward Konetzko <
> konet...@quixoticagony.com> wrote:
>
>> I have attached the slides and Tushar Patil doc on making nova-network ha
>> along with the etherpad notes on the bottom.
>>
>>
>> I hope to follow this email up later on in the week with plans for a full
>> reference document based on Cacti.  Thanks for everyone’s participation at
>> the Summit.
>>
>> Thanks
>> Edward Konetzko
>>
>> Etherpad notes
>>
>>
>> This Etherpad is for the
>> Discussion on Design & Software Considerations for Making Nova HA/Fault
>> Tolerant
>> Please put ideas or comments in the appropriate sections
>>
>>
>> Database
>> - Does zones alleviate the need for HAing the DB?
>>
>>
>>
>> RabbitMQ
>> For comparison
>> http://wiki.secondlife.com/wiki/Message_Queue_Evaluation_Notes
>> - Need to update managers to create persistent queues and messages
>> - XMPP an alternate?
>> Talk to RabbitMQ devs about
>> - Long term can we use Burrow?
>>
>>
>> Nova-Network
>> NTT Data documentation mailed to openstack list for their heartbeat POC
>> tests
>> Are there issues running multiple network nodes and assigning the same IP
>> to multiple instances?
>> How about VRRP protocol?
>>   --> we (NTT) are planning to evaluate VRRP using keepalived or some
>> other software. Does anyone knows suitable software?
>>
>>
>> Nova-scheduler
>> Vish said you can run more then one
>> - Yeah with zones and how the scheduler is structured now, it can
>>
>>
>> Nova-api
>> Possibly run this behind a real web server (apache, nginx)
>>
>>
>> Nova-volume
>>
>> Nova-Objectstore
>>
>> Nova-Compute
>>
>>
>> Other ideas
>> Services should use DNS SRV records or something similar to automate
>> service discovery; this would make running large infrastructures and ipv6
>> configuration a lot easier.
>> - zeroconf? <-- like the idea, but anyone can announce anything in
>> zeroconf; it has no notion of a master for security.
>> Agreed
>> Look at vrrp and keepalived
>>
>>
>> Take aways
>> Start Discussion with Rabbitmq
>> Message Bus needs more investigation
>> Discussion on how we make messages have guaranteed delivery
>> Give feedback to the end user, fail or pass; just don't leave the state
>> pending forever
>>
>> ___
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>
>
> --
> Mike Scherbakov
>  ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
Hugo Kuo@AMI. TW-CCG
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Multi-nic support question

2011-07-06 Thread Kuo Hugo
Thanks Joseph,
That's also what I'm looking for.

Cheers
Hugo Kuo

2011/7/7 Joseph Heck 

> To answer my own question on the list (Thanks Vish & Trey):
>
> The command to create a network should be updated - aka:
>
>nova-manage network create private 10.0.0.0/24 1 32
>
> additionally, the "nova-manage floating create" also changed, and no longer
> requires a hostname in there.
>
> Vish has updated his novascript (
> https://github.com/vishvananda/novascript/commit/d36f5775b2d8d6d736294cb866937bd9ccfd0d33)
> with the relevant changes.
>
> (as I write this, the cloudbuilders nova.sh script hasn't been updated - so
> it needs those little tweaks if you're using it with trunk)
>
> -joe
>
> On Jul 6, 2011, at 12:06 PM, Joseph Heck wrote:
> > Afternoon!
> >
> > I ran into an issue with the multi-nic addition that just hit trunk -
> wanted to see how best to resolve or if this is a bug.
> >
> > The signature for the create() method in NetworkCommands (in
> nova/bin/nova-manage) changed - which means that the existing docs to create
> a network:
> >
> >   nova-manage network create 10.0.0.0/24 1 32
> >
> > Fails with an index out of range error. The reason is that the arguments
> ['10.0.0.0/24', '1', '32'] no longer match up as expected with the
> arguments in create().
> >
> > "Label=None" was added at the front of that method signature. So with
> multi-nic added in, does the command need to be updated, or should that
> Label component be pushed back in the positional list of arguments?
> >
> > -joe
> >
> > ___
> > Mailing list: https://launchpad.net/~openstack
> > Post to : openstack@lists.launchpad.net
> > Unsubscribe : https://launchpad.net/~openstack
> > More help   : https://help.launchpad.net/ListHelp
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp