Re: [ceph-users] Ceph OSDs advice

2017-02-16 Thread Khang Nguyễn Nhật
Dear John,
Thanks for your response.
My cluster has 4616 PGs and 13 pools, as shown below:

in which default.rgw.buckets.data is an erasure-coded pool with the following configuration:
+ pg_num=pgp_num=1792
+ size=12
+ erasure-code-profile:
   directory=/usr/lib64/ceph/erasure-code
   k=9
   m=3
   plugin=isa
   ruleset-failure-domain=osd
   ruleset-root=default

All other pools (rgw.root, default.rgw.control, ...) are replicated pools with
pg_num=pgp_num=256 and size=3.
Could these configurations affect system resources?
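To put rough numbers on that question, here is a quick sketch using only the figures above (nothing authoritative, just arithmetic; "size" counts each PG's replicas or EC chunks):

# PG shards per OSD, from the numbers in this message.
total_pgs = 4616
ec_pgs    = 1792          # default.rgw.buckets.data, k=9 + m=3 -> size 12
ec_size   = 12
rep_size  = 3
osds      = 72
pg_shards = ec_pgs * ec_size + (total_pgs - ec_pgs) * rep_size
print(pg_shards, pg_shards / osds)   # ~30000 shards in total, roughly 416 per OSD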
Here is my memory info:
Thanks,

2017-02-15 19:00 GMT+07:00 John Petrini <jpetr...@coredial.com>:

> You should subtract buffers and cache from the used memory to get a more
> accurate picture of how much memory is actually available to processes. In
> this case that puts you at around 22G of used - or a better term might be
> unavailable - memory. Buffers and cache can be reallocated when needed;
> it's just Linux taking advantage of memory on the theory of "why not use it
> if it's there?" Memory is fast, so Linux will make use of it.
>
> With 72 OSDs, 22G of memory puts you below the 500MB/daemon figure you've
> mentioned, so I don't think you have anything to be concerned about.
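A quick check of that arithmetic, using the free -hw output quoted further down in this thread (just a sketch of the subtraction, nothing more):

# All values in GiB, taken from the free -hw output below.
used          = 58
buffers_cache = 36 + 0.004        # cache + buffers (3.7M is negligible)
unavailable   = used - buffers_cache
budget        = 72 * 0.5          # ~500 MB per OSD daemon
print(round(unavailable), budget) # ~22 GiB actually tied up vs. a ~36 GiB budget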
>
> ___
>
> John Petrini
>
>
> On Tue, Feb 14, 2017 at 11:24 PM, Khang Nguyễn Nhật <
> nguyennhatkhang2...@gmail.com> wrote:
>
>> Hi Sam,
>> Thanks for your reply. I use the BTRFS file system on the OSDs.
>> Here is result of "*free -hw*":
>>
>>                 total   used   free   shared   buffers   cache   available
>> Mem:              125G    58G    31G     1.2M      3.7M     36G         60G
>>
>> and "*ceph df*":
>>
>> GLOBAL:
>>     SIZE   AVAIL   RAW USED   %RAW USED
>>     523T    522T      1539G        0.29
>> POOLS:
>>     NAME                       ID   USED   %USED   MAX AVAIL   OBJECTS
>>     default.rgw.buckets.data   92   597G    0.15        391T     84392
>>
>> I retrieved this output a few minutes ago.
>>
>> 2017-02-15 10:50 GMT+07:00 Sam Huracan <nowitzki.sa...@gmail.com>:
>>
>>> Hi Khang,
>>>
>>> What file system do you use on the OSD nodes?
>>> XFS always uses memory for caching data before writing to disk.
>>>
>>> So don't worry; it will hold as much memory in your system as it can.
>>>
>>>
>>>
>>> 2017-02-15 10:35 GMT+07:00 Khang Nguyễn Nhật <
>>> nguyennhatkhang2...@gmail.com>:
>>>
>>>> Hi all,
>>>> My Ceph OSDs run on Fedora Server 24 with the following configuration:
>>>> 128GB DDR3 RAM, Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz, 72 OSDs
>>>> (8TB per OSD). My cluster uses the Ceph Object Gateway with the S3 API. It
>>>> now holds about 500GB of data but is already using more than 50GB of RAM.
>>>> I'm worried my OSDs will die if I keep putting files into the cluster. I
>>>> have read "OSDs do not require as much RAM for regular operations (e.g.,
>>>> 500MB of RAM per daemon instance); however, during recovery they need
>>>> significantly more RAM (e.g., ~1GB per 1TB of storage per daemon)." in the
>>>> Ceph Hardware Recommendations. Can someone give me advice on this issue? Thanks
>>>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph OSDs advice

2017-02-14 Thread Khang Nguyễn Nhật
Hi Sam,
Thanks for your reply. I use the BTRFS file system on the OSDs.
Here is result of "*free -hw*":

                total   used   free   shared   buffers   cache   available
Mem:             125G    58G    31G     1.2M      3.7M     36G         60G

and "*ceph df*":

GLOBAL:
    SIZE   AVAIL   RAW USED   %RAW USED
    523T    522T      1539G        0.29
POOLS:
    NAME                       ID   USED   %USED   MAX AVAIL   OBJECTS
    default.rgw.buckets.data   92   597G    0.15        391T     84392

I retrieved this output a few minutes ago.

2017-02-15 10:50 GMT+07:00 Sam Huracan <nowitzki.sa...@gmail.com>:

> Hi Khang,
>
> What file system do you use on the OSD nodes?
> XFS always uses memory for caching data before writing to disk.
>
> So don't worry; it will hold as much memory in your system as it can.
>
>
>
> 2017-02-15 10:35 GMT+07:00 Khang Nguyễn Nhật <
> nguyennhatkhang2...@gmail.com>:
>
>> Hi all,
>> My Ceph OSDs run on Fedora Server 24 with the following configuration:
>> 128GB DDR3 RAM, Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz, 72 OSDs
>> (8TB per OSD). My cluster uses the Ceph Object Gateway with the S3 API. It
>> now holds about 500GB of data but is already using more than 50GB of RAM.
>> I'm worried my OSDs will die if I keep putting files into the cluster. I
>> have read "OSDs do not require as much RAM for regular operations (e.g.,
>> 500MB of RAM per daemon instance); however, during recovery they need
>> significantly more RAM (e.g., ~1GB per 1TB of storage per daemon)." in the
>> Ceph Hardware Recommendations. Can someone give me advice on this issue? Thanks
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph OSDs advice

2017-02-14 Thread Khang Nguyễn Nhật
Hi all,
My Ceph OSDs run on Fedora Server 24 with the following configuration: 128GB
DDR3 RAM, Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz, 72 OSDs (8TB per OSD).
My cluster uses the Ceph Object Gateway with the S3 API. It now holds about
500GB of data but is already using more than 50GB of RAM. I'm worried my OSDs
will die if I keep putting files into the cluster. I have read "OSDs do not
require as much RAM for regular operations (e.g., 500MB of RAM per daemon
instance); however, during recovery they need significantly more RAM (e.g.,
~1GB per 1TB of storage per daemon)." in the Ceph Hardware Recommendations.
Can someone give me advice on this issue? Thanks
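For reference, the arithmetic implied by the quoted recommendation, as a rough sketch (it assumes all 72 daemons share this one 128GB host, which is how the question reads):

osds, osd_tb = 72, 8
regular_gb  = osds * 0.5           # ~500 MB per daemon       -> ~36 GB
recovery_gb = osds * osd_tb * 1.0  # ~1 GB per TB per daemon  -> ~576 GB
print(regular_gb, recovery_gb)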
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW authentication fail with AWS S3 v4

2017-02-06 Thread Khang Nguyễn Nhật
Dear Daniel,
I think it's a bug, because if I have a big file and a 15-minute expiry, I
can't finish downloading it.
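In the meantime, a minimal client-side sketch of Wido's suggestion below (re-sign right before each request; endpoint, credentials, bucket and key are placeholders, not the real values):

import boto3
import requests
from botocore.client import Config

s3 = boto3.client('s3', endpoint_url='http://rgw.example.com',
                  aws_access_key_id='...', aws_secret_access_key='...',
                  config=Config(signature_version='s3v4'))

# Generate the presigned URL immediately before the download, so the request
# time is well inside the server's grace window no matter how long the
# transfer itself takes.
url = s3.generate_presigned_url('get_object',
                                Params={'Bucket': 'bucket', 'Key': 'key.mp4'},
                                ExpiresIn=900)
rsp = requests.get(url, stream=True)
for chunk in rsp.iter_content(chunk_size=1 << 20):
    pass  # write each chunk to disk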

2017-02-06 15:12 GMT+07:00 Khang Nguyễn Nhật <nguyennhatkhang2...@gmail.com>
:

> Dear Daniel,
> I think it's a bug, because if I have a big file and a 15-minute expiry, I
> can't finish downloading it.
>
> 2017-02-03 20:55 GMT+07:00 Daniel Gryniewicz <d...@redhat.com>:
>
>> It looks like, as it's now coded, the 15 minute time limit is hard
>> coded.  It checks that X-Amz-Expires is not exceeded, and then
>> unconditionally checks that the request time is within 15 minutes of now.
>>
>> Daniel
>>
>> On 02/03/2017 04:06 AM, Khang Nguyễn Nhật wrote:
>>
>>> Dear Wido,
>>>
>>> I have used X-Amz-Expires=86400 in url but it doesn't work
>>>
>>> 2017-02-03 16:00 GMT+07:00 Wido den Hollander <w...@42on.com
>>> <mailto:w...@42on.com>>:
>>>
>>>
>>> > On 3 February 2017 at 9:52, Khang Nguyễn Nhật
>>> > <nguyennhatkhang2...@gmail.com> wrote:
>>>
>>> >
>>> >
>>> > Hi all,
>>> > I'm using Ceph Object Gateway with S3 API
>>> (ceph-radosgw-10.2.5-0.el7.x86_64
>>> > on CentOS Linux release 7.3.1611) and  I use
>>> generate_presigned_url method
>>> > of boto3 to create rgw url. This url working fine in period of 15
>>> minutes,
>>> > after 15 minutes I recived *RequestTimeTooSkewed* error. My
>>> radosgw use
>>> > Asia/Ho_Chi_Minh timezone and running ntp service. Here is url and
>>> rgw log:
>>> >
>>>
>>> That is normal. The time is part of the signature. You have to
>>> generate a new signature after 15 minutes.
>>>
>>> Normal behavior.
>>>
>>> Wido
>>>
>>> > - URL:
>>> > http://rgw.xxx.vn/bucket/key.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256=86400=7AHTO4E1JBZ1VG1U96F1%2F20170203%2F%2Fs3%2Faws4_request=host=20170203T081233Z=682be59232443fee58bc4744f656c533da8ddd828e36b739b332736fa22bef51
>>> >
>>> > - RGW LOG:
>>> > // //
>>> > NOTICE: request time skew too big.
>>> > now_req = 1486109553 now = 1486110512; now -
>>> > RGW_AUTH_GRACE_MINS=1486109612; now +
>>> RGW_AUTH_GRACE_MINS=1486111412
>>> > failed to authorize request
>>> > handler->ERRORHANDLER: err_no=-2012 new_err_no=-2012
>>> > // //
>>> >
>>> > Someone can help me reslove this problem ? Thank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW authentication fail with AWS S3 v4

2017-02-03 Thread Khang Nguyễn Nhật
Dear Wido,
I have used X-Amz-Expires=86400 in the URL, but it doesn't work.

2017-02-03 16:00 GMT+07:00 Wido den Hollander <w...@42on.com>:

>
> > On 3 February 2017 at 9:52, Khang Nguyễn Nhật <
> nguyennhatkhang2...@gmail.com> wrote:
> >
> >
> > Hi all,
> > I'm using Ceph Object Gateway with S3 API (ceph-radosgw-10.2.5-0.el7.
> x86_64
> > on CentOS Linux release 7.3.1611) and  I use generate_presigned_url
> method
> > of boto3 to create rgw url. This url working fine in period of 15
> minutes,
> > after 15 minutes I recived *RequestTimeTooSkewed* error. My radosgw use
> > Asia/Ho_Chi_Minh timezone and running ntp service. Here is url and rgw
> log:
> >
>
> That is normal. The time is part of the signature. You have to generate a
> new signature after 15 minutes.
>
> Normal behavior.
>
> Wido
>
> > - URL:
> > http://rgw.xxx.vn/bucket/key.mp4?X-Amz-Algorithm=AWS4-HMAC-
> SHA256=86400=7AHTO4E1JBZ1VG1U96F1%
> 2F20170203%2F%2Fs3%2Faws4_request=host=
> 20170203T081233Z=682be59232443fee58bc4744f656c5
> 33da8ddd828e36b739b332736fa22bef51
> >
> > - RGW LOG:
> > // //
> > NOTICE: request time skew too big.
> > now_req = 1486109553 now = 1486110512; now -
> > RGW_AUTH_GRACE_MINS=1486109612; now + RGW_AUTH_GRACE_MINS=1486111412
> > failed to authorize request
> > handler->ERRORHANDLER: err_no=-2012 new_err_no=-2012
> > // //
> >
> > Someone can help me reslove this problem ? Thank
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RGW authentication fail with AWS S3 v4

2017-02-03 Thread Khang Nguyễn Nhật
Hi all,
I'm using the Ceph Object Gateway with the S3 API (ceph-radosgw-10.2.5-0.el7.x86_64
on CentOS Linux release 7.3.1611) and I use the generate_presigned_url method
of boto3 to create RGW URLs. The URL works fine for a period of 15 minutes;
after 15 minutes I receive a *RequestTimeTooSkewed* error. My radosgw uses the
Asia/Ho_Chi_Minh timezone and runs the ntp service. Here are the URL and RGW log:

- URL:
http://rgw.xxx.vn/bucket/key.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256=86400=7AHTO4E1JBZ1VG1U96F1%2F20170203%2F%2Fs3%2Faws4_request=host=20170203T081233Z=682be59232443fee58bc4744f656c533da8ddd828e36b739b332736fa22bef51

- RGW LOG:
// //
NOTICE: request time skew too big.
now_req = 1486109553 now = 1486110512; now -
RGW_AUTH_GRACE_MINS=1486109612; now + RGW_AUTH_GRACE_MINS=1486111412
failed to authorize request
handler->ERRORHANDLER: err_no=-2012 new_err_no=-2012
// //
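For reference, the skew implied by those two log values (just arithmetic, using the numbers above):

now_req = 1486109553          # request time embedded in the signature
now     = 1486110512          # server time when the request was checked
grace   = 15 * 60             # RGW_AUTH_GRACE_MINS
print(now - now_req)          # 959 seconds, i.e. just under 16 minutes
print(now_req < now - grace)  # True -> "request time skew too big"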

Can someone help me resolve this problem? Thanks.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Memory leak in ceph OSD.

2016-08-23 Thread Khang Nguyễn Nhật
Hi,
I'm using Ceph Jewel 10.2.2. I noticed that when I PUT multiple objects of the
same file, with the same user, to ceph-rgw S3, the RAM usage of ceph-osd
increases and is never released. At the same time, the upload speed drops
significantly.

Please help me solve this problem.
Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] issuse with data duplicated in ceph storage cluster.

2016-08-23 Thread Khang Nguyễn Nhật
Hi,

I'm using Ceph Jewel 10.2.2 and I would like to know what Ceph does with
duplicate data. Will the Ceph OSDs automatically delete the duplicates, or
will the Ceph RGW do it? My Ceph storage cluster uses the S3 API to PUT
objects.
Example:
1. Suppose I use one ceph-rgw S3 user to put two different objects created
from the same source file to ceph-rgw S3; how will my Ceph storage cluster
handle this?
2. If I use two ceph-rgw S3 users to put two different objects created from
the same source file to ceph-rgw S3, how will my Ceph storage cluster handle
this?

Please help me solve this problem.
Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph RGW issue.

2016-08-02 Thread Khang Nguyễn Nhật
Hi,
I have run into an error when using Ceph RGW v10.2.2 with the S3 API, as
follows:
I have three S3 users: A, B and C. Each of A, B and C has some buckets and
objects. When I used A or C to PUT or GET an object via RGW, I saw
"decode_policy Read
AccessControlPolicy

Re: [ceph-users] [RGW] how to choose the best placement groups ?

2016-07-31 Thread Khang Nguyễn Nhật
Thanks, Chengwei Yang.
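For the record, the rule of thumb from pgcalc works out like this for the cluster quoted below (a minimal sketch of the arithmetic only):

# ~100 PGs per OSD, divided by the pool's size (replicas, or k+m for EC),
# rounded up to the next power of two.
def pg_count(num_osds, size, target_per_osd=100):
    raw = num_osds * target_per_osd // size
    return 1 << max(raw - 1, 1).bit_length()

print(pg_count(24, 3))    # replicated pools, size=3      -> 1024
print(pg_count(24, 24))   # erasure pool, k=20 + m=4 = 24 -> 128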

2016-07-29 17:17 GMT+07:00 Chengwei Yang <chengwei.yang...@gmail.com>:

> Would http://ceph.com/pgcalc/ help?
>
> On Mon, Jul 18, 2016 at 01:27:38PM +0700, Khang Nguyễn Nhật wrote:
> > Hi all,
> > I have a cluster consists of: 3 Monitors, 1 RGW, 1 host of 24
> OSDs(2TB/OSD) and
> > some pool as:
> > ap-southeast.rgw.data.root
> > ap-southeast.rgw.control
> > ap-southeast.rgw.gc
> > ap-southeast.rgw.log
> > ap-southeast.rgw.intent-log
> > ap-southeast.rgw.usage
> > ap-southeast.rgw.users.keys
> > ap-southeast.rgw.users.email
> > ap-southeast.rgw.users.swift
> > ap-southeast.rgw.users.uid
> > ap-southeast.rgw.buckets.index
> > ap-southeast.rgw.buckets.data
> > ap-southeast.rgw.buckets.non-ec
> > ap-southeast.rgw.meta
> > In which "ap-southeast.rgw.buckets.data" is a erasure pool(k=20, m=4)
> and all
> > of the remaining pool are replicated(size=3). I've used (100*OSDs)/size
>  to
> > calculate the number of PGs, e.g. 100*24/3 = 800(nearest power of 2:
> 1024) for
> > replicated pools and 100*24/24=100(nearest power of 2: 128) for erasure
> pool.
> > I'm not sure this is the best placement group number, someone can give
> me some
> > advice ?
> > Thank !
> > SECURITY NOTE: file ~/.netrc must not be accessible by others
>
>
>
> --
> Thanks,
> Chengwei
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] [RGW] how to choose the best placement groups ?

2016-07-18 Thread Khang Nguyễn Nhật
Hi all,
I have a cluster consisting of 3 monitors, 1 RGW and 1 host with 24 OSDs
(2TB/OSD), and the following pools:
ap-southeast.rgw.data.root
ap-southeast.rgw.control
ap-southeast.rgw.gc
ap-southeast.rgw.log
ap-southeast.rgw.intent-log
ap-southeast.rgw.usage
ap-southeast.rgw.users.keys
ap-southeast.rgw.users.email
ap-southeast.rgw.users.swift
ap-southeast.rgw.users.uid
ap-southeast.rgw.buckets.index
ap-southeast.rgw.buckets.data
ap-southeast.rgw.buckets.non-ec
ap-southeast.rgw.meta
In which "ap-southeast.rgw.buckets.data" is a erasure pool(k=20, m=4) and
all of the remaining pool are replicated(size=3). I've used (100*OSDs)/size
 to calculate the number of PGs, e.g. 100*24/3 = 800(nearest power of 2:
1024) for replicated pools and 100*24/24=100(nearest power of 2: 128) for
erasure pool. I'm not sure this is the best placement group number, someone
can give me some advice ?
Thank !
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW AWS4 SignatureDoesNotMatch when requests with port != 80 or != 443

2016-06-27 Thread Khang Nguyễn Nhật
Thanks, Javier Muñoz. I will take a look at it.

2016-06-24 22:30 GMT+07:00 Javier Muñoz <jmun...@igalia.com>:

> Hi Khang,
>
> Today I had a look in a very similar issue...
>
> http://tracker.ceph.com/issues/16463
>
> I guess it could be the same bug you hit. I added some info in the
> ticket. Feel free to comment there.
>
> Thanks,
> Javier
>
> On 06/05/2016 04:17 PM, Khang Nguyễn Nhật wrote:
> > Hi!
> > I get the error "  SignatureDoesNotMatch" when I used
> > presigned url with endpoint port != 80 and != 443. For example, if I use
> > host http://192.168.1.1: then this is what I have in RGW log:
> > //
> > RGWEnv::set(): HTTP_HOST: 192.168.1.1:
> > //
> > RGWEnv::set(): SERVER_PORT: 
> > //
> > HTTP_HOST=192.168.1.1:
> > //
> > SERVER_PORT=
> > //
> > host=192.168.1.1
> > //
> > canonical headers format = host:192.168.1.1::
> > //
> > canonical request = GET
> > /
> >
> X-Amz-Algorithm=AWS4-HMAC-SHA256=%2F20160605%2Fap%2Fs3%2Faws4_request=20160605T125927Z=3600=host
> > host:192.168.1.1::
> >
> > host
> > UNSIGNED-PAYLOAD
> > //
> > - Verifying signatures
> > //
> > failed to authorize request
> > //
> >
> >
>>> > I see this in src/rgw/rgw_rest_s3.cc:
>>> > int RGW_Auth_S3::authorize_v4() {
>>> >   // ...
>>> >   string port = s->info.env->get("SERVER_PORT", "");
>>> >   string secure_port = s->info.env->get("SERVER_PORT_SECURE", "");
>>> >   // ...
>>> >   if (using_qs && (token == "host")) {
>>> >     if (!port.empty() && port != "80") {
>>> >       token_value = token_value + ":" + port;
>>> >     } else if (!secure_port.empty() && secure_port != "443") {
>>> >       token_value = token_value + ":" + secure_port;
>>> >     }
>>> >   }
> >
> > Is it caused my fault ? Can somebody please help me out ?
> > Thank !
> >
> >
> >
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] SignatureDoesNotMatch when authorize v4 with HTTPS.

2016-06-08 Thread Khang Nguyễn Nhật
Hello all,
I'm having problems with AWS4 authentication when using HTTPS (my cluster
runs Ceph Jewel 10.2.1 on CentOS 7). I used boto3 to create a presigned URL;
here is my example:

s3 = boto3.client(service_name='s3', region_name='', use_ssl=False,
                  endpoint_url='https://rgw.x.x',
                  aws_access_key_id= ,
                  aws_secret_access_key= ,
                  config=Config(signature_version='s3v4', region_name=''))
url = s3.generate_presigned_url(ClientMethod='list_buckets',
                                HttpMethod='GET', ExpiresIn=3600)
rsp = requests.get(url, proxies={'http': '', 'https': ''}, headers={'': ''})

Then I received error 403 SignatureDoesNotMatch. And this is my rgw.log:

SERVER_PORT = 0
SERVER_PORT_SECURE = 443
HTTP_HOST: rgw.x.x
format = canonical host headers: rgw.x.x: 0
..
failed to authorize the request
req 1: 0.007245: s3: GET /: list_buckets: http status = 403
..

I've seen this in
https://github.com/ceph/ceph/blob/master/src/rgw/rgw_rest_s3.cc:
int RGW_Auth_S3::authorize_v4(RGWRados *store, struct req_state *s){
  ..
  string port = s->info.env->get("SERVER_PORT", "");
  string secure_port = s->info.env->get("SERVER_PORT_SECURE", "");
 ...
if (using_qs && (token == "host")) {
  if (!port.empty() && port != "80") {
token_value = token_value + ":" + port;
  } else if (!secure_port.empty() && secure_port != "443") {
token_value = token_value + ":" + secure_port;
  }
}
.

So if SERVER_PORT = 0, the canonical header becomes host:rgw.x.x:0, and that
leads to the SignatureDoesNotMatch error?
I do not know how to make civetweb in RGW listen on ports 80/443s so that
this code path is avoided.
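To illustrate why that trailing ":0" matters, here is a deliberately simplified SigV4 sketch (not the actual RGW or botocore code; the canonical requests are abbreviated): any difference in the canonical host header yields a different signature.

import hashlib
import hmac

def hmac_sha256(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

def sigv4(secret, canonical_request, date='20160608', region='', service='s3'):
    # Derive the signing key, then sign the hash of the canonical request.
    scope = '/'.join([date, region, service, 'aws4_request'])
    string_to_sign = '\n'.join(['AWS4-HMAC-SHA256', date + 'T000000Z', scope,
                                hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()])
    key = hmac_sha256(('AWS4' + secret).encode('utf-8'), date)
    for part in (region, service, 'aws4_request'):
        key = hmac_sha256(key, part)
    return hmac.new(key, string_to_sign.encode('utf-8'), hashlib.sha256).hexdigest()

client_side = 'GET\n/\n...\nhost:rgw.x.x\n\nhost\nUNSIGNED-PAYLOAD'    # what the client signed
server_side = 'GET\n/\n...\nhost:rgw.x.x:0\n\nhost\nUNSIGNED-PAYLOAD'  # what RGW reconstructs
print(sigv4('secret', client_side) == sigv4('secret', server_side))    # False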
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RGW AWS4 SignatureDoesNotMatch when requests with port != 80 or != 443

2016-06-05 Thread Khang Nguyễn Nhật
Hi!
I get a "SignatureDoesNotMatch" error when I use a
presigned URL with an endpoint port != 80 and != 443. For example, if I use
the host http://192.168.1.1: then this is what I see in the RGW log:
//
RGWEnv::set(): HTTP_HOST: 192.168.1.1:
//
RGWEnv::set(): SERVER_PORT: 
//
HTTP_HOST=192.168.1.1:
//
SERVER_PORT=
//
host=192.168.1.1
//
canonical headers format = host:192.168.1.1::
//
canonical request = GET
/
X-Amz-Algorithm=AWS4-HMAC-SHA256=%2F20160605%2Fap%2Fs3%2Faws4_request=20160605T125927Z=3600=host
host:192.168.1.1::

host
UNSIGNED-PAYLOAD
//
- Verifying signatures
//
failed to authorize request
//


I see this in src/rgw/rgw_rest_s3.cc:
int RGW_Auth_S3::authorize_v4() {
  // ...
  string port = s->info.env->get("SERVER_PORT", "");
  string secure_port = s->info.env->get("SERVER_PORT_SECURE", "");
  // ...
  if (using_qs && (token == "host")) {
    if (!port.empty() && port != "80") {
      token_value = token_value + ":" + port;
    } else if (!secure_port.empty() && secure_port != "443") {
      token_value = token_value + ":" + secure_port;
    }
  }

Is this caused by a mistake on my side? Can somebody please help me out?
Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 403 AccessDenied with presigned url in Jewel AWS4.

2016-06-05 Thread Khang Nguyễn Nhật
Thanks, Robin H. Johnson!

I've set "debug rgw = 20" in the RGW config file and I saw "NOTICE: now =
1464998270, now_req = 1464973070, exp = 3600" in the RGW log file. I see that
now is the local time on the RGW server (my timezone is UTC+7) and now_req is
UTC time. This trips the check in src/rgw/rgw_rest_s3.cc:
int RGW_Auth_S3::authorize_v4(...) {
  // ...
  if (now >= now_req + exp) {
    dout(10) << "NOTICE: now = " << now << ", now_req = " << now_req
             << ", exp = " << exp << dendl;
    return -EPERM;
  }
  // ...
}
Then I set the time on the RGW server to UTC and it works fine!
Is this a bug?
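For reference, the arithmetic behind that rejection (values copied from the log line above):

now, now_req, exp = 1464998270, 1464973070, 3600
print(now - now_req)         # 25200 seconds = 7 hours, exactly the UTC+7 offset
print(now >= now_req + exp)  # True -> authorize_v4() returns -EPERM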

2016-06-03 11:44 GMT+07:00 Robin H. Johnson <robb...@gentoo.org>:

> On Fri, Jun 03, 2016 at 11:34:35AM +0700, Khang Nguyễn Nhật wrote:
> > s3 = boto3.client(service_name='s3', region_name='', use_ssl=False,
> > endpoint_url='http://192.168.1.10:', aws_access_key_id=access_key,
> >   aws_secret_access_key= secret_key,
> >   config=Config(signature_version='s3v4',
> region_name=''))
> The region part doesn't seem right. Try setting it to 'ap' or
> 'ap-southeast'.
>
> Failing that, turn up the RGW loglevel to 20, and run a request, then
> look at the logs of how it created the signature, and manually compare
> them to what your client should have built (with boto in verbose
> debugging).
>
> --
> Robin Hugh Johnson
> Gentoo Linux: Dev, Infra Lead, Foundation Trustee & Treasurer
> E-Mail   : robb...@gentoo.org
> GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
> GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] 403 AccessDenied with presigned url in Jewel AWS4.

2016-06-02 Thread Khang Nguyễn Nhật
Hi,
I have a problem when using presigned URLs with AWS4 in RGW Jewel. My
cluster runs on CentOS 7 and its health is HEALTH_OK.
- This is my *user information*:

"user_id": "1",
"display_name": "KhangNN",
"email": "khan...@ceph.com.vn",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "1",
"access_key": "VVEP64910WZEVFSHZ0ER",
"secret_key": "UF8eM2BIlcLsXg5RF0gfK4JtZK7EmA64VGlPUJ0w"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"temp_url_keys": []

- *Python* code:

access_key = "VVEP64910WZEVFSHZ0ER"
secret_key = "UF8eM2BIlcLsXg5RF0gfK4JtZK7EmA64VGlPUJ0w"

s3 = boto3.client(service_name='s3', region_name='', use_ssl=False,
endpoint_url='http://192.168.1.10:', aws_access_key_id=access_key,
  aws_secret_access_key= secret_key,
  config=Config(signature_version='s3v4', region_name=''))

print s3.list_buckets() // It work fine !
//
url = s3.generate_presigned_url(ClientMethod='list_buckets',
HttpMethod='GET', ExpiresIn=1800)
requests.get(url, proxies={'http': '', 'https': ''}) // *403 AccessDenied*

- *Zone* infor:

"id": "ef6eca77-29f6-4d5e-8d04-5c486ea7ad19",
"name": "ap-southeast",
"domain_root": "ap-southeast.rgw.data.root",
"control_pool": "ap-southeast.rgw.control",
"gc_pool": "ap-southeast.rgw.gc",
"log_pool": "ap-southeast.rgw.log",
"intent_log_pool": "ap-southeast.rgw.intent-log",
"usage_log_pool": "ap-southeast.rgw.usage",
"user_keys_pool": "ap-southeast.rgw.users.keys",
"user_email_pool": "ap-southeast.rgw.users.email",
"user_swift_pool": "ap-southeast.rgw.users.swift",
"user_uid_pool": "ap-southeast.rgw.users.uid",
"system_key": {
"access_key": "",
"secret_key": ""
},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "ap-southeast.rgw.buckets.index",
"data_pool": "ap-southeast.rgw.buckets.data",
"data_extra_pool": "ap-southeast.rgw.buckets.non-ec",
"index_type": 0
}
}
],
"metadata_heap": "ap-southeast.rgw.meta",
"realm_id": "515b5a90-9d02-489f-b7e4-e67fb838fa1e"

- *Zonegroup* infor:

"id": "3b6cbc8f-470e-4a3d-87ea-7941b6ae7206",
"name": "ap",
"api_name": "ap",
"is_master": "true",
"endpoints": [
"http:\/\/192.168.1.10:"
],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "ef6eca77-29f6-4d5e-8d04-5c486ea7ad19",
"zones": [
{
"id": "ef6eca77-29f6-4d5e-8d04-5c486ea7ad19",
"name": "ap-southeast",
"endpoints": [
"http:\/\/192.168.1.10:"
],
"log_meta": "true",
"log_data": "false",
"bucket_index_max_shards": 0,
"read_only": "false"
}
],
"placement_targets": [
{
"name": "default-placement",
"tags": []
}
],
"default_placement": "default-placement",
"realm_id": "515b5a90-9d02-489f-b7e4-e67fb838fa1e"

Have I configured something wrong? Can somebody please help me out?
Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW Could not create user

2016-06-02 Thread Khang Nguyễn Nhật
I can confirm that when I create a user, the cluster health is HEALTH_OK.
Then I tried
# radosgw-admin user metadata list
[ ]

Here are my results after running the command:
radosgw-admin user create --uid="johndoe" --display-name="John Doe"
--email="j...@example.com" --debug-rgw=20

RGWDataChangesLog::ChangesRenewThread: start
get_system_obj_state: rctx=0x7ffe457c84c0 obj=.rgw.root:default.realm
state=0x7f5b7b4ab358 s->prefetch_data=0
get_system_obj_state: s->obj_tag was set empty
//
get_system_obj_state: rctx=0x7ffe457c8520
obj=.rgw.root:realms.3e3c4b59-5d67-424c-848a-fe88bfe1f5a9
state=0x7f5b7b4ab358 s->prefetch_data=0
get_system_obj_state: s->obj_tag was set empty
get_system_obj_state: rctx=0x7ffe457c8520
obj=.rgw.root:realms.3e3c4b59-5d67-424c-848a-fe88bfe1f5a9
state=0x7f5b7b4ab358 s->prefetch_data=0
rados->read ofs=0 len=524288
rados->read r=0 bl.length=106
///
period zonegroup init ret 0
period zonegroup name ap
using current period zonegroup ap
//
run: stack=0x7f5b7b4d7f50 is io blocked
cr:s=0x7f5b7b4d8a40:op=0x7f5b7b4d8120:12RGWStatObjCR: operate()
//
cache get:
name=ap-southeast.rgw.log+meta.log.dc2ac369-3e0b-465f-bc2b-88d8c77469d7.3 :
miss
dequeued request req=0x7f5b7b50b8c0
RGWWQ: empty
//
cache get:
name=ap-southeast.rgw.log+meta.log.dc2ac369-3e0b-465f-bc2b-88d8c77469d7.26
: miss
//
find_oldest_log_period found no log shards for period
dc2ac369-3e0b-465f-bc2b-88d8c77469d7; returning period
dc2ac369-3e0b-465f-bc2b-88d8c77469d7
init_complete bucket index max shards: 0
get_system_obj_state: rctx=0x7ffe457c8670
obj=ap-southeast.rgw.users.uid:johndoe state=0x7f5b7b4f45c8
s->prefetch_data=0
cache get: name=ap-southeast.rgw.users.uid+johndoe : miss
//
cache get: name=ap-southeast.rgw.users.email+j...@example.com : miss
//
cache get: name=ap-southeast.rgw.users.keys+MQHDSB6XJXURM8AV7T3G : miss

Pools:
- ap-southeast.rgw.log
- ap-southeast.rgw.users.uid
- ap-southeast.rgw.users.email
- ap-southeast.rgw.users.keys
are erasure-coded (jerasure); could that be the problem?

2016-06-02 10:46 GMT+07:00 Khang Nguyễn Nhật <nguyennhatkhang2...@gmail.com>
:

> Thank Wang!
> I will check it again.
>
> 2016-06-02 7:37 GMT+07:00 David Wang <linuxhunte...@gmail.com>:
>
>> First, please check your ceph cluster is HEALTH_OK and then check if you
>> have the caps the create users.
>>
>> 2016-05-31 16:11 GMT+08:00 Khang Nguyễn Nhật <
>> nguyennhatkhang2...@gmail.com>:
>>
>>> Thank, Wasserman!
>>> I followed the instructions here:
>>> http://docs.ceph.com/docs/master/radosgw/multisite/
>>> Step 1:  radosgw-admin realm create --rgw-realm=default  --default
>>> Step 2:  radosgw-admin zonegroup delete --rgw-zonegroup=default
>>> Step3:   *radosgw-admin zonegroup create --rgw-zonegroup=ap --master
>>> --default*
>>> radosgw-admin zonegroup default --rgw-zonegroup=ap
>>> Step4:  *radosgw-admin zone create --rgw-zonegroup=ap
>>> --rgw-zone=ap-southeast --default --master*
>>> radosgw-admin zone default --rgw-zone=ap-southeast
>>> radosgw-admin zonegroup add --rgw-zonegroup=ap
>>> --rgw-zone=ap-southeast
>>>
>>> I tried to create the zone group, zone, realm with another name and also
>>> similar problems.
>>>
>>>
>>> 2016-05-31 13:33 GMT+07:00 Orit Wasserman <owass...@redhat.com>:
>>>
>>>> did you set the realm, zonegroup and zone as defaults?
>>>>
>>>>
>>>> On Tue, May 31, 2016 at 4:45 AM, Khang Nguyễn Nhật
>>>> <nguyennhatkhang2...@gmail.com> wrote:
>>>> > Hi,
>>>> > I'm having problems with CEPH v10.2.1 Jewel when create user. My
>>>> cluster is
>>>> > used CEPH Jewel, including: 3 OSD, 2 monitors and 1 RGW.
>>>> > - Here is the list of cluster pools:
>>>> > .rgw.root
>>>> > ap-southeast.rgw.control
>>>> > ap-southeast.rgw.data.root
>>>> > ap-southeast.rgw.gc
>>>> > ap-southeast.rgw.users.uid
>>>> > ap-southeast.rgw.buckets.data
>>>> > ap-southeast.rgw.users.email
>>>> > ap-southeast.rgw.users.keys
>>>> > ap-southeast.rgw.buckets.index
>>>> > ap-southeast.rgw.buckets.non-ec
>>>> > ap-southeast.rgw.log
>>>> > ap-southeast.rgw.meta
>>>> > ap-southeast.rgw.intent-log
>>>> > ap-southeast.rgw.usage
>>>> > ap-southeast.rgw.users.swift
>>>> > - Zonegroup info:
>>>> > {
>>>> > "id": "e9585cbd-df92-42a0-964b-15efb1cc0ad6",
>>>> > "name": &q

Re: [ceph-users] Ceph Pool JERASURE issue.

2016-06-02 Thread Khang Nguyễn Nhật
Oh, thanks Somnath Roy!

I've set ruleset-failure-domain=osd and it works fine. There is also a problem
with the CRUSH map: when I delete a pool, its rules are still in the CRUSH map,
and it will not update to the new rules; instead it keeps using the old ones.
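The arithmetic behind Somnath's point below, as a minimal sketch (2147483647 in the acting sets is the placeholder CRUSH uses when it cannot pick an OSD):

k, m = 3, 2
hosts = 3
chunks = k + m          # each PG needs k+m = 5 distinct failure domains
print(chunks <= hosts)  # False -> with failure domain = host, some chunks stay unmapped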


2016-06-02 11:37 GMT+07:00 Somnath Roy :

> You need to either change the failure domain to osd or have at least 5 hosts
> to satisfy the host failure domain.
>
> Since the failure domain cannot be satisfied, the PGs are undersized and
> degraded.
>
>
>
> Thanks & Regards
>
> Somnath
>
>
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *Khang Nguy?n Nh?t
> *Sent:* Wednesday, June 01, 2016 9:33 PM
> *To:* ceph-users@lists.ceph.com
> *Subject:* [ceph-users] Ceph Pool JERASURE issue.
>
>
>
> Hi,
>
> I have 1 cluster as pictured below:
>
>
>
> - OSD-host1 run 2 ceph-osd daemon is mounted in /var/ceph/osd0 and
>  /var/ceph/osd1.
>
> - OSD-host2 run 2 ceph-osd daemon is mounted in /var/ceph/osd2 and
>  /var/ceph/osd3.
>
> - OSD-host3 only run 1 ceph-osd daemon is mounted in the /var/ceph/osd4.
>
> - This is my myprofile:
>
>  jerasure-per-chunk-alignment = false
>
>  k = 3
>
>  m = 2
>
>  plugin = jerasure
>
>  ruleset-failure-domain = host
>
>  ruleset-root = default
>
>  technique = reed_sol_van
>
>  w = 8
>
> When I used it to create a pool
>
> CLI: ceph osd create test pool myprofile 8 8 erasure. (id test pool=62)
>
> CLI: ceph-s
>
> ​Here are the results
>
> ///
>
>  health HEALTH_WARN
>
> 8 pgs degraded
>
> 8 pgs stuck unclean
>
> 8 pgs undersized
>
>  monmap e1: 1 mons at {mon0 = x.x.x.x: 6789/0}
>
> election epoch 7, quorum 0 mon0
>
>  osdmap e441: 5 osds: 5 up, 5 in
>
> flags sortbitwise
>
>   pgmap ///
>
>8 Active + undersized + degraded
>
>
>
> CLI: health CePH detail
>
> HEALTH_WARN 8 pgs degraded; 8 pgs stuck unclean; 8 pgs undersized
>
> 62.6 pg is stuck unclean since forever, current degraded state active + +
> undersized, last acting [1,2,2147483647,2147483647,4]
>
> 62.7 pg is stuck unclean since forever, current degraded state active + +
> undersized, last acting [2,0,2147483647,4,2147483647]
>
> 62.4 pg is stuck unclean since forever, current degraded state active + +
> undersized, last acting [3,0,4,2147483647,2147483647]
>
> 62.5 pg is stuck unclean since forever, current degraded state active + +
> undersized, last acting [0,4,2147483647,3,2147483647]
>
> 62.2 pg is stuck unclean since forever, current degraded state active + +
> undersized, last acting [1,2147483647,2147483647,4,2]
>
> 62.3 pg is stuck unclean since forever, current degraded state active + +
> undersized, last acting [2,2147483647,0,4,2147483647]
>
> 62.0 pg is stuck unclean since forever, current degraded state active + +
> undersized, last acting [0,3,2147483647,4,2147483647]
>
> 62.1 pg is stuck unclean since forever, current degraded state active + +
> undersized, last acting [4,0,3,2147483647,2147483647]
>
> is active + 62.1 pg undersized + degraded, acting
> [4,0,3,2147483647,2147483647]
>
> is active + 62.0 pg undersized + degraded, acting
> [0,3,2147483647,4,2147483647]
>
> is active + 62.3 pg undersized + degraded, acting
> [2,2147483647,0,4,2147483647]
>
> is active + 62.2 pg undersized + degraded, acting
> [1,2147483647,2147483647,4,2]
>
> is active + 62.5 pg undersized + degraded, acting
> [0,4,2147483647,3,2147483647]
>
> is active + 62.4 pg undersized + degraded, acting
> [3,0,4,2147483647,2147483647]
>
> is active + 62.7 pg undersized + degraded, acting
> [2,0,2147483647,4,2147483647]
>
> is active + 62.6 pg undersized + degraded, acting
> [1,2,2147483647,2147483647,4]
>
>
>
> This is related to reasonable ruleset-failure-domain? Can somebody please
> help me out ?
>
> Thank !
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Pool JERASURE issue.

2016-06-01 Thread Khang Nguyễn Nhật
Hi,
I have a cluster laid out as pictured below:

- OSD-host1 runs 2 ceph-osd daemons, mounted at /var/ceph/osd0 and
  /var/ceph/osd1.
- OSD-host2 runs 2 ceph-osd daemons, mounted at /var/ceph/osd2 and
  /var/ceph/osd3.
- OSD-host3 runs only 1 ceph-osd daemon, mounted at /var/ceph/osd4.
- This is my myprofile:
 jerasure-per-chunk-alignment = false
 k = 3
 m = 2
 plugin = jerasure
 ruleset-failure-domain = host
 ruleset-root = default
 technique = reed_sol_van
 w = 8
When I used it to create a pool:
CLI: ceph osd pool create testpool 8 8 erasure myprofile   (id of testpool = 62)
CLI: ceph -s
Here are the results:
///
 health HEALTH_WARN
8 pgs degraded
8 pgs stuck unclean
8 pgs undersized
 monmap e1: 1 mons at {mon0 = x.x.x.x: 6789/0}
election epoch 7, quorum 0 mon0
 osdmap e441: 5 osds: 5 up, 5 in
flags sortbitwise
  pgmap ///
            8 active+undersized+degraded

CLI: ceph health detail
HEALTH_WARN 8 pgs degraded; 8 pgs stuck unclean; 8 pgs undersized
pg 62.6 is stuck unclean since forever, current state active+undersized+degraded, last acting [1,2,2147483647,2147483647,4]
pg 62.7 is stuck unclean since forever, current state active+undersized+degraded, last acting [2,0,2147483647,4,2147483647]
pg 62.4 is stuck unclean since forever, current state active+undersized+degraded, last acting [3,0,4,2147483647,2147483647]
pg 62.5 is stuck unclean since forever, current state active+undersized+degraded, last acting [0,4,2147483647,3,2147483647]
pg 62.2 is stuck unclean since forever, current state active+undersized+degraded, last acting [1,2147483647,2147483647,4,2]
pg 62.3 is stuck unclean since forever, current state active+undersized+degraded, last acting [2,2147483647,0,4,2147483647]
pg 62.0 is stuck unclean since forever, current state active+undersized+degraded, last acting [0,3,2147483647,4,2147483647]
pg 62.1 is stuck unclean since forever, current state active+undersized+degraded, last acting [4,0,3,2147483647,2147483647]
pg 62.1 is active+undersized+degraded, acting [4,0,3,2147483647,2147483647]
pg 62.0 is active+undersized+degraded, acting [0,3,2147483647,4,2147483647]
pg 62.3 is active+undersized+degraded, acting [2,2147483647,0,4,2147483647]
pg 62.2 is active+undersized+degraded, acting [1,2147483647,2147483647,4,2]
pg 62.5 is active+undersized+degraded, acting [0,4,2147483647,3,2147483647]
pg 62.4 is active+undersized+degraded, acting [3,0,4,2147483647,2147483647]
pg 62.7 is active+undersized+degraded, acting [2,0,2147483647,4,2147483647]
pg 62.6 is active+undersized+degraded, acting [1,2,2147483647,2147483647,4]

Is this related to the ruleset-failure-domain setting? Can somebody please
help me out?
Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW Could not create user

2016-06-01 Thread Khang Nguyễn Nhật
Thanks, Wang!
I will check it again.

2016-06-02 7:37 GMT+07:00 David Wang <linuxhunte...@gmail.com>:

> First, please check your ceph cluster is HEALTH_OK and then check if you
> have the caps the create users.
>
> 2016-05-31 16:11 GMT+08:00 Khang Nguyễn Nhật <
> nguyennhatkhang2...@gmail.com>:
>
>> Thank, Wasserman!
>> I followed the instructions here:
>> http://docs.ceph.com/docs/master/radosgw/multisite/
>> Step 1:  radosgw-admin realm create --rgw-realm=default  --default
>> Step 2:  radosgw-admin zonegroup delete --rgw-zonegroup=default
>> Step3:   *radosgw-admin zonegroup create --rgw-zonegroup=ap --master
>> --default*
>> radosgw-admin zonegroup default --rgw-zonegroup=ap
>> Step4:  *radosgw-admin zone create --rgw-zonegroup=ap
>> --rgw-zone=ap-southeast --default --master*
>> radosgw-admin zone default --rgw-zone=ap-southeast
>> radosgw-admin zonegroup add --rgw-zonegroup=ap
>> --rgw-zone=ap-southeast
>>
>> I tried to create the zone group, zone, realm with another name and also
>> similar problems.
>>
>>
>> 2016-05-31 13:33 GMT+07:00 Orit Wasserman <owass...@redhat.com>:
>>
>>> did you set the realm, zonegroup and zone as defaults?
>>>
>>>
>>> On Tue, May 31, 2016 at 4:45 AM, Khang Nguyễn Nhật
>>> <nguyennhatkhang2...@gmail.com> wrote:
>>> > Hi,
>>> > I'm having problems with CEPH v10.2.1 Jewel when create user. My
>>> cluster is
>>> > used CEPH Jewel, including: 3 OSD, 2 monitors and 1 RGW.
>>> > - Here is the list of cluster pools:
>>> > .rgw.root
>>> > ap-southeast.rgw.control
>>> > ap-southeast.rgw.data.root
>>> > ap-southeast.rgw.gc
>>> > ap-southeast.rgw.users.uid
>>> > ap-southeast.rgw.buckets.data
>>> > ap-southeast.rgw.users.email
>>> > ap-southeast.rgw.users.keys
>>> > ap-southeast.rgw.buckets.index
>>> > ap-southeast.rgw.buckets.non-ec
>>> > ap-southeast.rgw.log
>>> > ap-southeast.rgw.meta
>>> > ap-southeast.rgw.intent-log
>>> > ap-southeast.rgw.usage
>>> > ap-southeast.rgw.users.swift
>>> > - Zonegroup info:
>>> > {
>>> > "id": "e9585cbd-df92-42a0-964b-15efb1cc0ad6",
>>> > "name": "ap",
>>> > "api_name": "ap",
>>> > "is_master": "true",
>>> > "endpoints": [
>>> > "http:\/\/192.168.1.1:"
>>> > ],
>>> > "hostnames": [],
>>> > "hostnames_s3website": [],
>>> > "master_zone": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
>>> > "zones": [
>>> > {
>>> > "id": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
>>> > "name": "ap-southeast",
>>> > "endpoints": [
>>> > "http:\/\/192.168.1.1:"
>>> > ],
>>> > "log_meta": "true",
>>> > "log_data": "false",
>>> > "bucket_index_max_shards": 0,
>>> > "read_only": "false"
>>> > }
>>> > ],
>>> > "placement_targets": [
>>> > {
>>> > "name": "default-placement",
>>> > "tags": []
>>> > }
>>> > ],
>>> > "default_placement": "default-placement",
>>> > "realm_id": "93dc1f56-6ec6-48f8-8caa-a7e864eeaeb3"
>>> > }
>>> > - Zone:
>>> > {
>>> > "id": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
>>> > "name": "ap-southeast",
>>> > "domain_root": "ap-southeast.rgw.data.root",
>>> > "control_pool": "ap-southeast.rgw.control",
>>> > "gc_pool": "ap-southeast.rgw.gc",
>>> > "log_pool": "ap-southeast.rgw.log",
>>> > "intent_log_pool": "ap-southeast.rgw.intent-log",
>>> > "usage_log_pool": "ap-southeast.rgw.usage",
>&g

Re: [ceph-users] RGW Could not create user

2016-05-31 Thread Khang Nguyễn Nhật
Thanks, Wasserman!
I followed the instructions here:
http://docs.ceph.com/docs/master/radosgw/multisite/
Step 1:  radosgw-admin realm create --rgw-realm=default  --default
Step 2:  radosgw-admin zonegroup delete --rgw-zonegroup=default
Step3:   *radosgw-admin zonegroup create --rgw-zonegroup=ap --master
--default*
radosgw-admin zonegroup default --rgw-zonegroup=ap
Step4:  *radosgw-admin zone create --rgw-zonegroup=ap
--rgw-zone=ap-southeast --default --master*
radosgw-admin zone default --rgw-zone=ap-southeast
radosgw-admin zonegroup add --rgw-zonegroup=ap
--rgw-zone=ap-southeast

I also tried creating the zonegroup, zone and realm with different names and
ran into similar problems.


2016-05-31 13:33 GMT+07:00 Orit Wasserman <owass...@redhat.com>:

> did you set the realm, zonegroup and zone as defaults?
>
> On Tue, May 31, 2016 at 4:45 AM, Khang Nguyễn Nhật
> <nguyennhatkhang2...@gmail.com> wrote:
> > Hi,
> > I'm having problems with CEPH v10.2.1 Jewel when create user. My cluster
> is
> > used CEPH Jewel, including: 3 OSD, 2 monitors and 1 RGW.
> > - Here is the list of cluster pools:
> > .rgw.root
> > ap-southeast.rgw.control
> > ap-southeast.rgw.data.root
> > ap-southeast.rgw.gc
> > ap-southeast.rgw.users.uid
> > ap-southeast.rgw.buckets.data
> > ap-southeast.rgw.users.email
> > ap-southeast.rgw.users.keys
> > ap-southeast.rgw.buckets.index
> > ap-southeast.rgw.buckets.non-ec
> > ap-southeast.rgw.log
> > ap-southeast.rgw.meta
> > ap-southeast.rgw.intent-log
> > ap-southeast.rgw.usage
> > ap-southeast.rgw.users.swift
> > - Zonegroup info:
> > {
> > "id": "e9585cbd-df92-42a0-964b-15efb1cc0ad6",
> > "name": "ap",
> > "api_name": "ap",
> > "is_master": "true",
> > "endpoints": [
> > "http:\/\/192.168.1.1:"
> > ],
> > "hostnames": [],
> > "hostnames_s3website": [],
> > "master_zone": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
> > "zones": [
> > {
> > "id": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
> > "name": "ap-southeast",
> > "endpoints": [
> > "http:\/\/192.168.1.1:"
> > ],
> > "log_meta": "true",
> > "log_data": "false",
> > "bucket_index_max_shards": 0,
> > "read_only": "false"
> > }
> > ],
> > "placement_targets": [
> > {
> > "name": "default-placement",
> > "tags": []
> > }
> > ],
> > "default_placement": "default-placement",
> > "realm_id": "93dc1f56-6ec6-48f8-8caa-a7e864eeaeb3"
> > }
> > - Zone:
> > {
> > "id": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
> > "name": "ap-southeast",
> > "domain_root": "ap-southeast.rgw.data.root",
> > "control_pool": "ap-southeast.rgw.control",
> > "gc_pool": "ap-southeast.rgw.gc",
> > "log_pool": "ap-southeast.rgw.log",
> > "intent_log_pool": "ap-southeast.rgw.intent-log",
> > "usage_log_pool": "ap-southeast.rgw.usage",
> > "user_keys_pool": "ap-southeast.rgw.users.keys",
> > "user_email_pool": "ap-southeast.rgw.users.email",
> > "user_swift_pool": "ap-southeast.rgw.users.swift",
> > "user_uid_pool": "ap-southeast.rgw.users.uid",
> > "system_key": {
> > "access_key": "1555b35654ad1656d805",
> > "secret_key":
> > "h7GhxuBLTrlhVUyxSPUKUV8r\/2EI4ngqJxD7iBdBYLhwluN30JaT3Q12"
> > },
> > "placement_pools": [
> > {
> > "key": "default-placement",
> > "val": {
> > "index_pool": "ap-southeast.rgw.buckets.index",
> > "data_pool": "ap-southeast.rgw.buckets.data",
> > "data_extra_pool": "ap-southeast.rgw.buckets.non-ec",
> >

[ceph-users] RGW Could not create user

2016-05-30 Thread Khang Nguyễn Nhật
Hi,
I'm having problems with Ceph v10.2.1 Jewel when creating a user. My cluster
runs Ceph Jewel and includes 3 OSDs, 2 monitors and 1 RGW.
- Here is the list of *cluster pools*:
.rgw.root
ap-southeast.rgw.control
ap-southeast.rgw.data.root
ap-southeast.rgw.gc
ap-southeast.rgw.users.uid
ap-southeast.rgw.buckets.data
ap-southeast.rgw.users.email
ap-southeast.rgw.users.keys
ap-southeast.rgw.buckets.index
ap-southeast.rgw.buckets.non-ec
ap-southeast.rgw.log
ap-southeast.rgw.meta
ap-southeast.rgw.intent-log
ap-southeast.rgw.usage
ap-southeast.rgw.users.swift
- *Zonegroup* info:
{
"id": "e9585cbd-df92-42a0-964b-15efb1cc0ad6",
"name": "ap",
"api_name": "ap",
"is_master": "true",
"endpoints": [
"http:\/\/192.168.1.1:"
],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
"zones": [
{
"id": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
"name": "ap-southeast",
"endpoints": [
"http:\/\/192.168.1.1:"
],
"log_meta": "true",
"log_data": "false",
"bucket_index_max_shards": 0,
"read_only": "false"
}
],
"placement_targets": [
{
"name": "default-placement",
"tags": []
}
],
"default_placement": "default-placement",
"realm_id": "93dc1f56-6ec6-48f8-8caa-a7e864eeaeb3"
}
- *Zone*:
{
"id": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
"name": "ap-southeast",
"domain_root": "ap-southeast.rgw.data.root",
"control_pool": "ap-southeast.rgw.control",
"gc_pool": "ap-southeast.rgw.gc",
"log_pool": "ap-southeast.rgw.log",
"intent_log_pool": "ap-southeast.rgw.intent-log",
"usage_log_pool": "ap-southeast.rgw.usage",
"user_keys_pool": "ap-southeast.rgw.users.keys",
"user_email_pool": "ap-southeast.rgw.users.email",
"user_swift_pool": "ap-southeast.rgw.users.swift",
"user_uid_pool": "ap-southeast.rgw.users.uid",
"system_key": {
"access_key": "1555b35654ad1656d805",
"secret_key":
"h7GhxuBLTrlhVUyxSPUKUV8r\/2EI4ngqJxD7iBdBYLhwluN30JaT3Q12"
},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "ap-southeast.rgw.buckets.index",
"data_pool": "ap-southeast.rgw.buckets.data",
"data_extra_pool": "ap-southeast.rgw.buckets.non-ec",
"index_type": 0
}
}
],
"metadata_heap": "ap-southeast.rgw.meta",
"realm_id": "93dc1f56-6ec6-48f8-8caa-a7e864eeaeb3"
}
- *Realm*:
{
"id": "93dc1f56-6ec6-48f8-8caa-a7e864eeaeb3",
"name": "default",
"current_period": "345bcfd4-c120-4862-9c13-1575d8876ce1",
"epoch": 1
}
- *Period:*
"period_map": {
"id": "5e66c0e2-a195-4ab4-914f-2b3d7977be0c",
"zonegroups": [
{
"id": "e9585cbd-df92-42a0-964b-15efb1cc0ad6",
"name": "ap",
"api_name": "ap",
"is_master": "true",
"endpoints": [
"http:\/\/192.168.1.1:"
],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
"zones": [
{
"id": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
"name": "ap-southeast",
"endpoints": [
"http:\/\/192.168.1.1:"
],
"log_meta": "true",
"log_data": "false",
"bucket_index_max_shards": 0,
"read_only": "false"
}
],
 /// /// ///
"master_zonegroup": "e9585cbd-df92-42a0-964b-15efb1cc0ad6",
"master_zone": "e1d58724-e44f-4520-b56f-19a40b2ce8c4",
"period_config": {
"bucket_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
}
},
"realm_id": "93dc1f56-6ec6-48f8-8caa-a7e864eeaeb3",
"realm_name": "default",
"realm_epoch": 2
}

When I run radosgw-admin user create --uid=1 --display-name="user1"
--email=us...@example.com, I get the error "could not create user: unable to
create user, unable to store user info".

Did I do something wrong? Can somebody please help me out?
Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW AWS4 issue.

2016-05-29 Thread Khang Nguyễn Nhật
Thanks, Yehuda!

I tried:
s3.create_bucket(Bucket='image', CreateBucketConfiguration={'LocationConstraint': ''})
s3.create_bucket(Bucket='image', CreateBucketConfiguration={'LocationConstraint': '""'})
s3.create_bucket(Bucket='image')   # no LocationConstraint
But they all returned *HTTP/1.1 400 Bad Request*. Here are the results I got
when running ngrep on the RGW host:
CLI: ngrep -lq -t port   -W byline -d eth0

*Request:*
T 2016/05/30 08:37:06.572958 x.x.x.x:55668 -> x.x.x.x: [AP]
PUT /image HTTP/1.1.
Host: x.x.x.x:.
Accept-Encoding: identity.
X-Amz-Content-SHA256:
80520b2b573177f05f6db633afa0d5bbb6900d10807b2cc0786ce75a85577acf.
Content-Length: 125.
x-amz-acl: public-read-write.
x-amz-grant-full-control: image.
User-Agent: Boto3/1.3.1 Python/2.7.5 Linux/3.10.0-327.10.1.el7.x86_64
Botocore/1.4.24.
X-Amz-Date: 20160530T013706Z.
Authorization: AWS4-HMAC-SHA256
Credential=AEVU2WGDL03NI76H893F/20160530/default/s3/aws4_request,
SignedHeaders=host;x-amz-acl;x-amz-content-sha256;x-amz-date;x-amz-grant-full-control,
Signature=cc83e058786f8e021502e892da91ed9a4d2c10a42b353fd357f6596e1bd6a778.
.
http://s3.amazonaws.com/doc/2006-03-01/;>...

*Respone:*
T 2016/05/30 08:37:06.574290 x.x.x.x: -> x.x.x.x:55668 [AP]
HTTP/1.1 400 Bad Request.
x-amz-request-id: tx0001d-00574b9942-6304-default.
Content-Length: 217.
Accept-Ranges: bytes.
Content-Type: application/xml.
Date: Mon, 30 May 2016 01:37:06 GMT.
.

T 2016/05/30 08:37:06.614104 x.x.x.x: -> x.x.x.x:55668 [AP]
InvalidRequestimagetx0001d-00574b9942-6304-default6304-default-default

2016-05-30 8:46 GMT+07:00 Khang Nguyễn Nhật <nguyennhatkhang2...@gmail.com>:

> Thank Yehuda !
>
> I tried to make:
> s3.create_bucket (// Bucket = 'image', CreateBucketConfiguration = {
> 'LocationConstraint': ''} //),
> s3.create_bucket (// Bucket = 'image', CreateBucketConfiguration = {
> 'LocationConstraint': '""'} //),
> s3.create_bucket (// Bucket = 'image', //) # no LocationConstraint
> But they all returned *HTTP/1.1 400 Bad Request*, here are the results I
> have when using ngrep in RGW:
> CLI: ngrep -lq -t port   -W byline -d eth0
> 
> *Request:*
> T 2016/05/30 08:37:06.572958 x.x.x.x:55668 -> x.x.x.x: [AP]
> PUT /image HTTP/1.1.
> Host: x.x.x.x:.
> Accept-Encoding: identity.
> X-Amz-Content-SHA256:
> 80520b2b573177f05f6db633afa0d5bbb6900d10807b2cc0786ce75a85577acf.
> Content-Length: 125.
> x-amz-acl: public-read-write.
> x-amz-grant-full-control: image.
> User-Agent: Boto3/1.3.1 Python/2.7.5 Linux/3.10.0-327.10.1.el7.x86_64
> Botocore/1.4.24.
> X-Amz-Date: 20160530T013706Z.
> Authorization: AWS4-HMAC-SHA256
> Credential=AEVU2WGDL03NI76H893F/20160530/default/s3/aws4_request,
> SignedHeaders=host;x-amz-acl;x-amz-content-sha256;x-amz-date;x-amz-grant-full-control,
> Signature=cc83e058786f8e021502e892da91ed9a4d2c10a42b353fd357f6596e1bd6a778.
> .
>  xmlns="http://s3.amazonaws.com/doc/2006-03-01/;> />...
>
> *Respone:*
> T 2016/05/30 08:37:06.574290 x.x.x.x: -> x.x.x.x:55668 [AP]
> HTTP/1.1 400 Bad Request.
> x-amz-request-id: tx0001d-00574b9942-6304-default.
> Content-Length: 217.
> Accept-Ranges: bytes.
> Content-Type: application/xml.
> Date: Mon, 30 May 2016 01:37:06 GMT.
> .
>
> T 2016/05/30 08:37:06.614104 x.x.x.x: -> x.x.x.x:55668 [AP]
>  encoding="UTF-8"?>InvalidRequestimagetx0001d-00574b9942-6304-default6304-default-default
>
>
>
> 2016-05-30 1:30 GMT+07:00 Yehuda Sadeh-Weinraub <yeh...@redhat.com>:
>
>> On Sun, May 29, 2016 at 11:13 AM, Khang Nguyễn Nhật
>> <nguyennhatkhang2...@gmail.com> wrote:
>> > Hi,
>> > I'm having problems with AWS4 in the CEPH Jewel when interact with the
>> > bucket, object.
>> > First I will talk briefly about my cluster. My cluster is used CEPH
>> Jewel
>> > v10.2.1, including: 3 OSD, 2 monitors and 1 RGW.
>> > - Information in zonegroup:
>> > CLI: radosgw-admin zone list. (CLI is comand line)
>> > read_default_id : 0
>> > {
>> > "default_info": "03cde122-441d-46c5-a02d-19d28f3fd882",
>> > "zonegroups": [
>> > "default"
>> > ]
>> > }
>> >
>> > CLI: radosgw-admin zonegroup get
>> > {
>> > "id": "03cde122-441d-46c5-a02d-19d28f3fd882",
>> > "name": "default",
>> > "api_name": "",
>>
>> ^^^ api name
>>
>> > "is_master": "true",
>> >

[ceph-users] RGW AWS4 issue.

2016-05-29 Thread Khang Nguyễn Nhật
Hi,
I'm having problems with AWS4 in Ceph Jewel when interacting with buckets and
objects.
First, briefly about my cluster: it runs Ceph Jewel v10.2.1 and includes
3 OSDs, 2 monitors and 1 RGW.
- Information in *zonegroup*:
CLI: radosgw-admin zonegroup list (CLI = command line)
read_default_id : 0
{
"default_info": "03cde122-441d-46c5-a02d-19d28f3fd882",
"zonegroups": [
"default"
]
}

CLI: radosgw-admin zonegroup get
{
"id": "03cde122-441d-46c5-a02d-19d28f3fd882",
"name": "default",
"api_name": "",
"is_master": "true",
"endpoints": [],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "cb991931-88b1-4415-9d7f-a22cdce55ce7",
"zones": [
{
"id": "cb991931-88b1-4415-9d7f-a22cdce55ce7",
"name": "default",
"endpoints": [],
"log_meta": "false",
"log_data": "false",
"bucket_index_max_shards": 0,
"read_only": "false"
}
],
"placement_targets": [
{
"name": "default-placement",
"tags": []
}
],
"default_placement": "default-placement",
"realm_id": "a62bf866-f52b-4732-80b0-50a7287703f1"
}
- *Zone*:
CLI: radosgw-admin zone list
{
"default_info": "cb991931-88b1-4415-9d7f-a22cdce55ce7",
"zones": [
"default"
]
}

CLI: radosgw-admin zone get
{
"id": "cb991931-88b1-4415-9d7f-a22cdce55ce7",
"name": "default",
"domain_root": "default.rgw.data.root",
"control_pool": "default.rgw.control",
"gc_pool": "default.rgw.gc",
"log_pool": "default.rgw.log",
"intent_log_pool": "default.rgw.intent-log",
"usage_log_pool": "default.rgw.usage",
"user_keys_pool": "default.rgw.users.keys",
"user_email_pool": "default.rgw.users.email",
"user_swift_pool": "default.rgw.users.swift",
"user_uid_pool": "default.rgw.users.uid",
"system_key": {
"access_key": "",
"secret_key": ""
},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "default.rgw.buckets.index",
"data_pool": "default.rgw.buckets.data",
"data_extra_pool": "default.rgw.buckets.non-ec",
"index_type": 0
}
}
],
"metadata_heap": "default.rgw.meta",
"realm_id": ""
}
- *User infor:*
{
"user_id": "1",
"display_name": "User1",
"email": "us...@ceph.com",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "1",
"access_key": "",
"secret_key": ""
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"temp_url_keys": []
}

-RGW config:
[global]
//
rgw zonegroup root pool = .rgw.root
[client.rgw.radosgw1]
rgw_frontends = "civetweb port=
error_log_file=/var/log/ceph/civetweb.error.log
access_log_file=/var/log/ceph/civetweb.access.log debug-civetweb=10"
rgw_zone  = default
rgw region= default
rgw enable ops log = true
rgw log nonexistent bucket = true
rgw enable usage log = true
rgw log object name utc  = true
rgw intent log object name = %Y-%m-%d-%i-%n
rgw intent log object name utc = true

User1 does not own any buckets or objects. I use python boto3 to interact
with S3; here is my code:
s3 = boto3.client(service_name='s3',
                  region_name='default',
                  aws_access_key_id='', aws_secret_access_key='',
                  use_ssl=False, endpoint_url='http://192.168.1.1:',
                  config=Config(signature_version='s3v4'))
print s3.list_buckets()
And this is result:
{u'Owner': {u'DisplayName': 'User1', u'ID': '1'}, u'Buckets': [],
'ResponseMetadata': {'HTTPStatusCode': 200, 'HostId': '', 'RequestId':
'tx1-00574b2e2f-6304-default'}}
print s3.create_bucket(ACL='public-read-write', Bucket='image',
   CreateBucketConfiguration={'LocationConstraint':
'default'},
   GrantFullControl='image')
And I receive:
HTTP/1.1 400 Bad Request.
botocore.exceptions.ClientError: An error occurred (InvalidRequest) when
calling the CreateBucket operation: Unknown

Did I do something wrong? Can somebody please help me out?
Thanks!
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com