[Openstack] [Swift] Where's an object's metadata located?

2012-06-16 Thread Kuo Hugo
Hi folks,

Q1:
Where is the metadata of an object stored?
For example, the X-Object-Manifest header. Is it stored in the inode?
I did not see the metadata X-Object-Manifest in the container's DB.

Could I find the value?

Q2:
If a large object upload is interrupted, will the segment objects be
deleted?
For example:
OBJ1: 200MB
I execute: $ swift upload con1 OBJ1 -S 1024000
I force an interrupt while segment 10 is being sent.

I believe OBJ1 won't live in con1, but what will happen to the remaining
segment objects?

Those objects seem to still live in the con1_segments container. Is there
any mechanism to audit OBJ1 and delete those segment objects?



-- 
+Hugo Kuo+
tonyt...@gmail.com
+886 935004793
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] HTTP/1.1 404 Not Found error in swift

2012-06-16 Thread udit agarwal
Hi,
  I am setting up OpenStack in my lab. I followed the manual at
http://docs.openstack.org/essex/openstack-compute/install/apt/content/ for
installing & configuring it. I have already set up Keystone, Glance, and
Swift on my system, but I am having problems getting Swift to work.
The command "swift -V 2.0 -A http://192.168.20.7:5000/v2.0 -U
openstackDemo:adminUser -K 1234qwer stat" worked fine. But when I
executed the command "curl -k -v -H 'X-Storage-User:
openstackDemo:adminUser' -H 'X-Storage-Pass: 1234qwer'
http://192.168.20.7:5000/auth/v1.0", I got the following output:

* About to connect() to 192.168.20.7 port 5000 (#0)
*   Trying 192.168.20.7...
* connected
* Connected to 192.168.20.7 (192.168.20.7) port 5000 (#0)
> GET /auth/v1.0 HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-unknown-linux-gnu) libcurl/7.25.0 OpenSSL/1.0.1c zlib/1.2.5 libidn/1.22 libssh2/1.4.0
> Host: 192.168.20.7:5000
> Accept: */*
> X-Storage-User: openstackDemo:adminUser
> X-Storage-Pass: 1234qwer
>
< HTTP/1.1 404 Not Found
< Content-Length: 154
< Content-Type: text/html; charset=UTF-8
< Date: Sat, 16 Jun 2012 08:04:57 GMT
<
<html>
 <head>
  <title>404 Not Found</title>
 </head>
 <body>
  <h1>404 Not Found</h1>
  The resource could not be found.<br /><br />
 </body>
* Connection #0 to host 192.168.20.7 left intact
* Closing connection #0

My proxy-server.conf file is as follows:

[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 
# backlog = 4096
# swift_dir = /etc/swift
#workers = 8
user = swift
#cert_file = /etc/swift/cert.crt
#key_file = /etc/swift/cert.key
# expiring_objects_container_divisor = 86400
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO

[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystone proxy-server

[app:proxy-server]
use = egg:swift#proxy
# You can override the default log routing for this app here:
# set log_name = proxy-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set access_log_name = proxy-server
# set access_log_facility = LOG_LOCAL0
# set access_log_level = INFO
# set log_headers = False
# recheck_account_existence = 60
# recheck_container_existence = 60
# object_chunk_size = 8192
# client_chunk_size = 8192
# node_timeout = 10
# client_timeout = 60
# conn_timeout = 0.5
# How long without an error before a node's error count is reset. This will
# also be how long before a node is reenabled after suppression is triggered.
# error_suppression_interval = 60
# How many errors can accumulate before a node is temporarily ignored.
# error_suppression_limit = 10
# If set to 'true' any authorized user may create and delete accounts; if
# 'false' no one, even authorized, can.
#allow_account_management = true
# Set object_post_as_copy = false to turn on fast posts where only the
# metadata changes are stored anew and the original data file is kept in
# place. This makes for quicker posts; but since the container metadata
# isn't updated in this mode, features like container sync won't be able
# to sync posts.
# object_post_as_copy = true
# If set to 'true' authorized accounts that do not yet exist within the Swift
# cluster will be automatically created.
account_autocreate = true
# If set to a positive value, trying to create a container when the account
# already has at least this maximum containers will result in a 403 Forbidden.
# Note: This is a soft limit, meaning a user might exceed the cap for
# recheck_account_existence before the 403s kick in.
# max_containers_per_account = 0
# This is a comma separated list of account hashes that ignore the
# max_containers_per_account cap.
# max_containers_whitelist =

# [filter:tempauth]
# use = egg:swift#tempauth
# You can override the default log routing for this filter here:
# set log_name = tempauth
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = False
# The reseller prefix will verify a token begins with this prefix before even
# attempting to validate it. Also, with authorization, only Swift storage
# accounts with this prefix will be authorized by this middleware. Useful if
# multiple auth systems are in use for one Swift cluster.
# reseller_prefix = AUTH
# The auth prefix will cause requests beginning with this prefix to be routed
# to the auth subsystem, for granting tokens, etc.
# auth_prefix = /auth/
# token_life = 86400
# This is a comma separated list of hosts allowed to send X-Container-Sync-Key
# requests.
# allowed_sync_hosts = 127.0.0.1
# This allows middleware higher in the WSGI pipeline to override auth
# processing, useful for middleware such as tempurl and formpost. If you know
# you're not going to use such middleware and you want a bit of extra
# security, you can set this to false.
# allow_overrides = true
# Lastly, you need to list all the accounts/users you want here. The format is:
#   user_<account>_<user> = <key> [group] [group] [...] [storage_url]
# There are special groups of:
#   .reseller_admin = can do anything to any account for this reseller

Re: [Openstack] HTTP/1.1 404 Not Found error in swift

2012-06-16 Thread Dolph Mathews
The URL http://192.168.20.7:5000/auth/v1.0 is not one supported by
Keystone; does that command work if you use http://AUTH_HOSTNAME:5000/v2.0
instead?
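
For reference, here's a rough sketch (in Python, with the requests
library) of the v2.0 exchange the swift CLI performs under the hood. The
tenant, user, and password values are the ones from your command; the
rest is the standard Keystone v2.0 token API:

import json
import requests

# Authenticate against Keystone's v2.0 API rather than /auth/v1.0.
body = {"auth": {"tenantName": "openstackDemo",
                 "passwordCredentials": {"username": "adminUser",
                                         "password": "1234qwer"}}}
resp = requests.post("http://192.168.20.7:5000/v2.0/tokens",
                     data=json.dumps(body),
                     headers={"Content-Type": "application/json"})
access = resp.json()["access"]

# The token goes into X-Auth-Token on subsequent Swift requests; the
# storage URL comes from the service catalog's object-store entry.
token = access["token"]["id"]
storage_url = next(svc["endpoints"][0]["publicURL"]
                   for svc in access["serviceCatalog"]
                   if svc["type"] == "object-store")
print(token, storage_url)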

For anyone interested, direct link to the doc in question:
http://docs.openstack.org/essex/openstack-compute/install/apt/content/verify-swift-installation.html

-Dolph

On Sat, Jun 16, 2012 at 6:04 AM, udit agarwal fzdu...@gmail.com wrote:

> Hi,
>   I am setting up OpenStack in my lab. [...] The command "swift -V 2.0
> -A http://192.168.20.7:5000/v2.0 -U openstackDemo:adminUser -K 1234qwer
> stat" worked fine. But when I executed the command "curl -k -v -H
> 'X-Storage-User: openstackDemo:adminUser' -H 'X-Storage-Pass: 1234qwer'
> http://192.168.20.7:5000/auth/v1.0", I got the following output:
>
> [...]
> < HTTP/1.1 404 Not Found
> [...]

Re: [Openstack] [Swift] Where's an object's metadata located?

2012-06-16 Thread Adrian Smith
> Q1: Where is the metadata of an object stored?
It's stored in extended attributes on the filesystem itself. This is the
reason XFS (or another filesystem supporting extended attributes) is
required.

> Could I find the value?
Sure. You just need a) some way of identifying the object on disk and
b) a means of querying the extended attributes (for example, using the
Python xattr package).
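
Here's a minimal sketch of step b), assuming you've already located the
object's .data file on a storage node (step a, e.g. via swift-get-nodes),
and assuming Swift's usual on-disk convention of pickling the metadata
dict into the user.swift.metadata xattr (spilling over into
user.swift.metadata1, user.swift.metadata2, ... when it's large):

import pickle
import xattr

def read_object_metadata(path):
    # Reassemble the pickled metadata from one or more xattr chunks.
    raw = b''
    index = 0
    while True:
        key = 'user.swift.metadata' + (str(index) if index else '')
        try:
            raw += xattr.getxattr(path, key)
        except (IOError, OSError):
            break  # no more chunks
        index += 1
    return pickle.loads(raw)

# Hypothetical path; find the real one through the ring.
meta = read_object_metadata(
    '/srv/node/sdb1/objects/.../1339837200.12345.data')
print(meta.get('X-Object-Manifest'))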

> Q2: If a large object upload is interrupted, will the segment objects
> be deleted?
Large objects (i.e. those > 5 GB) must be split up client-side and the
segments uploaded individually. When all the segments are uploaded, the
manifest must then be created by the client. What I'm trying to get at
is that each segment, and even the manifest, is a completely independent
object. A failure during the upload of any one segment has no impact on
the other segments or the manifest.
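
To make that independence concrete, here's a rough sketch of the whole
flow, plus manual cleanup of the segments an aborted upload leaves
behind; as far as I know there's no built-in auditor that
garbage-collects them. The storage URL and token below are assumptions
standing in for whatever your auth system returns:

import requests

STORAGE_URL = "http://192.168.20.7:8080/v1/AUTH_openstackDemo"  # assumption
TOKEN = "AUTH_tk_example"                                       # assumption
HDRS = {"X-Auth-Token": TOKEN}

# 1) Segments are plain, independent objects in con1_segments.
for i, chunk in enumerate([b"first part", b"second part"]):  # stand-in data
    requests.put("%s/con1_segments/OBJ1/%08d" % (STORAGE_URL, i),
                 headers=HDRS, data=chunk)

# 2) The manifest is a zero-byte object whose X-Object-Manifest header
#    names the segment prefix. An interrupted upload never reaches this
#    step, which is why con1/OBJ1 doesn't exist afterwards.
requests.put(STORAGE_URL + "/con1/OBJ1", data=b"",
             headers=dict(HDRS, **{"X-Object-Manifest": "con1_segments/OBJ1/"}))

# 3) Cleanup after an aborted upload: list and delete leftover segments.
listing = requests.get(STORAGE_URL + "/con1_segments?prefix=OBJ1/",
                       headers=HDRS)
for name in listing.text.splitlines():
    requests.delete("%s/con1_segments/%s" % (STORAGE_URL, name),
                    headers=HDRS)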

Adrian


On 16 June 2012 09:53, Kuo Hugo tonyt...@gmail.com wrote:
> Hi folks,
>
> Q1:
> Where is the metadata of an object stored?
> For example, the X-Object-Manifest header. Is it stored in the inode?
> [...]
>
> Q2:
> If a large object upload is interrupted, will the segment objects be
> deleted?
> [...]


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp