Re: [ceph-users] v9.0.3 cephfs ceph-fuse ping_pong test failed

2015-09-28 Thread Yan, Zheng

> On Sep 29, 2015, at 10:20, maoqi1982  wrote:
> 
> Hi 
> While using ping_pong to test the cephfs file system mounted via
> ceph-fuse, I found that it fails.
> My cluster has 4 nodes: one mon/mds and three servers (3 OSDs per server),
> OS: CentOS 6.6. I added "fuse_disable_pagecache=true,
> fuse_use_invalidate_cb=true" to ceph.conf.
> 

Ceph version 9.0.3 does not contain the fuse_disable_pagecache option. The pull
request (https://github.com/ceph/ceph/pull/5521) is still pending.
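
(One quick way to check whether a running ceph-fuse client actually knows an
option is to ask its admin socket; a sketch, assuming the default asok path:

ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep fuse

an option the build doesn't know about simply won't show up there.)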

Yan, Zheng

> [root@node2 opt]#  ./ping_pong -rw /export/test/zzz 4
> data increment = 1
> ^C   367 locks/sec
> [root@node3 opt]#  ./ping_pong -rw /export/test/zzz 4
> data increment = 1
> ^C   402 locks/sec
> 
> [root@node0 /]# ceph -v
> ceph version 9.0.3 (7295612d29f953f46e6e88812ef372b89a43b9da)
> 
> [root@node0 /]# rpm -qa | grep fuse
> rbd-fuse-9.0.3-0.el6.x86_64
> fuse-libs-2.8.3-4.el6.x86_64
> ceph-fuse-9.0.3-0.el6.x86_64
> 
> [root@node1 /]# uname -r
> 2.6.32-504.el6.x86_64
> 
> [root@node2 opt]# cat /proc/version 
> Linux version 2.6.32-504.el6.x86_64 (mockbu...@c6b9.bsys.dev.centos.org) (gcc 
> version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) #1 SMP Wed Oct 15 04:27:16 
> UTC 2014
> 
> [root@node2 opt]# vim /etc/ceph/ceph.conf
> [global]
> fsid =7be14c22-5144-4a3e-9197-eaf02ac079d2
> mon initial members = node0
> mon host = 192.168.50.5
> auth cluster required = cephx
> auth service required = cephx
> auth client required = cephx
> osd journal size = 1024
> filestore xattr use omap = true
> fuse_disable_pagecache=true
> fuse_use_invalidate_cb=true
> osd pool default size = 2
> osd pool default min size = 1
> osd pool default pg num = 333
> osd pool default pgp num = 333
> osd crush chooseleaf type = 1
> [mon.node0]
> host=node0
> mon addr  =192.168.50.5
> [osd.0]
> host=node1
> 
> 
> thanks
> 
> 
> 
>  



[ceph-users] v9.0.3 cephfs ceph-fuse ping_pong test failed

2015-09-28 Thread maoqi1982
Hi 
While using ping_pong to test the cephfs file system mounted via ceph-fuse, I
found that it fails.
My cluster has 4 nodes: one mon/mds and three servers (3 OSDs per server),
OS: CentOS 6.6. I added "fuse_disable_pagecache=true,
fuse_use_invalidate_cb=true" to ceph.conf.


[root@node2 opt]#  ./ping_pong -rw /export/test/zzz 4
data increment = 1
^C   367 locks/sec
[root@node3 opt]#  ./ping_pong -rw /export/test/zzz 4
data increment = 1
^C   402 locks/sec


[root@node0 /]# ceph -v
ceph version 9.0.3 (7295612d29f953f46e6e88812ef372b89a43b9da)


[root@node0 /]# rpm -qa | grep fuse
rbd-fuse-9.0.3-0.el6.x86_64
fuse-libs-2.8.3-4.el6.x86_64
ceph-fuse-9.0.3-0.el6.x86_64


[root@node1 /]# uname -r
2.6.32-504.el6.x86_64


[root@node2 opt]# cat /proc/version 
Linux version 2.6.32-504.el6.x86_64 (mockbu...@c6b9.bsys.dev.centos.org) (gcc 
version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) #1 SMP Wed Oct 15 04:27:16 
UTC 2014


[root@node2 opt]# vim /etc/ceph/ceph.conf
[global]
fsid =7be14c22-5144-4a3e-9197-eaf02ac079d2
mon initial members = node0
mon host = 192.168.50.5
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
fuse_disable_pagecache=true
fuse_use_invalidate_cb=true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
[mon.node0]
host=node0
mon addr  =192.168.50.5
[osd.0]
host=node1



thanks



Re: [ceph-users] CephFS file to rados object mapping

2015-09-28 Thread John Spray
On Mon, Sep 28, 2015 at 9:46 PM, Andras Pataki
 wrote:
> Hi,
>
> Is there a way to find out which rados objects a file in cephfs is mapped
> to from the command line?  Or vice versa, which file a particular rados
> object belongs to?

The part of the object name before the period is the inode number (in hex).
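
For example, something along these lines should work (a rough sketch; the mount
point, the data pool name and the inode value below are only placeholder
examples):

# inode number of a file, printed in hex
printf '%x\n' $(stat -c %i /mnt/cephfs/path/to/file)
# forward direction: list the rados objects backing that inode
# (object names look like <hex_inode>.<offset>; the first one ends in .00000000)
rados -p cephfs_data ls | grep '^100000003ab\.'
# reverse direction: given an object named 100000003ab.00000002, find the file
find /mnt/cephfs -inum $((16#100000003ab))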

John

> Our ceph cluster has some inconsistencies/corruptions and I am trying to
> find out which files are impacted in cephfs.
>
> Thanks,
>
> Andras
>
>
>


[ceph-users] CephFS file to rados object mapping

2015-09-28 Thread Andras Pataki
Hi,

Is there a way to find out which rados objects a file in cephfs is mapped to
from the command line?  Or vice versa, which file a particular rados object
belongs to?
Our ceph cluster has some inconsistencies/corruptions and I am trying to find 
out which files are impacted in cephfs.

Thanks,

Andras




Re: [ceph-users] rsync broken?

2015-09-28 Thread David Clarke
On 28/09/15 23:55, Paul Mansfield wrote:
> 
> Hi,
> 
> We used to rsync from eu.ceph.com into a local mirror for when we build
> our code. We need to re-do this to pick up fresh packages built since
> the intrusion.
> 
> It doesn't seem possible to rsync from any current ceph download site.

download.ceph.com::ceph seems to work for us.
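
For example, something like this (the local destination path is just an
example):

rsync -avz download.ceph.com::ceph /srv/mirror/ceph/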


-- 
David Clarke






[ceph-users] Ceph Consulting

2015-09-28 Thread Robert LeBlanc

Ceph consulting was provided by Inktank[1], but the Inktank website is
down. How do we go about getting consulting services now?

[1] http://ceph.com/help/professional/

Thanks,
- 
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


[ceph-users] Ceph Storage Cluster on Amazon EC2 across different regions

2015-09-28 Thread Raluca Halalai
Hello,

I am trying to deploy a Ceph Storage Cluster on Amazon EC2, in different
regions.
This is my set-up: a monitor - mon.node0 in eu.west, and two OSDs -
osd.node1 in eu.west and osd.node2 in us.east.

I am having issues due to the way Amazon sets up network interfaces: each
host has a private IP (accessible from its region) and a public IP
(accessible from other regions too); hosts only see their private IPs,
while the public IP is handled via 1-1 network translation at a higher
level.

Ceph config files only have one IP assigned to each node, so that leaves a
problem: either I put private IPs, and nodes cannot access each other from
different regions, or I put public IPs, and nodes cannot find themselves in
the configuration. I also tried various combinations, starting nodes
manually, but still get errors.

I set up a virtual network interface using the public IPs of the hosts. I
manage to start mon.node0, but it only listens on that interface, while
incoming traffic comes from the NAT. I can make connections possible by
running Debian's redir locally (on 16789 for example) and redirecting
traffic from 0.0.0.0 to the local interface, but this doesn't seem right.

Can I somehow make the monitor listen on all interfaces?

Best regards,
-- 
Raluca


Re: [ceph-users] radosgw Storage policies

2015-09-28 Thread Yehuda Sadeh-Weinraub
On Mon, Sep 28, 2015 at 4:00 AM, Luis Periquito  wrote:
> Hi All,
>
> I was listening to the ceph talk about radosgw where Yehuda talks about
> storage policies. I started looking for it in the documentation, on how to
> implement/use it, and couldn't find much information:
> http://docs.ceph.com/docs/master/radosgw/s3/ says it doesn't currently
> support it, and http://docs.ceph.com/docs/master/radosgw/swift/ doesn't
> mention it.
>
> From the release notes it seems to be for the swift interface, not S3. Is
> this correct? Can we create them for S3 interface, or only Swift?
>
>

You can create buckets in both swift and s3 that utilize this feature.
You need to define different placement targets in the zone
configuration.
In S3, when you create a bucket, you need to specify a location
constraint that specifies this policy. The location constraint should
be specified as follows: [region][:policy]. So if you're creating a
bucket in the current region using your 'gold' policy that you
defined, you'll need to set it to ':gold'.
In swift, the api requires sending it through a special http header
(X-Storage-Policy).
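
For illustration, roughly like this (untested sketches; the bucket name is made
up, and 'gold' must already exist as a placement target in your zone):

# S3, e.g. with s3cmd: pass the policy via the location constraint
s3cmd mb --bucket-location=':gold' s3://mybucket
# Swift: pass the policy via the X-Storage-Policy header
swift post -H 'X-Storage-Policy: gold' mybucket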

Yehuda


[ceph-users] radosgw and keystone users

2015-09-28 Thread Xabier Elkano

Hi,

I've just deployed the ceph object gateway as object storage in
OpenStack. I've followed this doc to achieve the integration with
Keystone:

http://docs.ceph.com/docs/master/radosgw/keystone/

"It is possible to integrate the Ceph Object Gateway with Keystone, the
OpenStack identity service. This sets up the gateway to accept Keystone
as the users authority. A user that Keystone authorizes to access the
gateway will also be automatically created on the Ceph Object Gateway
(if didn’t exist beforehand). A token that Keystone validates will be
considered as valid by the gateway."

According to this, I was expecting the keystone user to be created in radosgw
when it was authorized by a keystone token, but instead what gets created is a
user named after the tenant id of the project that the user uses to manage his
objects.

# radosgw-admin user stats --uid=db4d25b13eaa4645a180f564b3817e1c
{ "stats": { "total_entries": 1,
  "total_bytes": 24546,
  "total_bytes_rounded": 24576},
  "last_stats_sync": "2015-09-25 12:09:12.795775Z",
  "last_stats_update": "2015-09-28 11:58:43.422859Z"}

Being that "db4d25b13eaa4645a180f564b3817e1c" is the project id I'm
using.

Is this the expected behavior and the doc pointed me in the wrong direction,
or did I misconfigure something? Really, I prefer this behavior,
because in this way I can set quotas on a project basis without worrying
about the users, but I would like to know if the integration is Ok.
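
For example, a quota on that project id could then be set roughly like this (a
sketch; the size value is arbitrary and the exact flag names may differ between
releases):

radosgw-admin quota set --quota-scope=user --uid=db4d25b13eaa4645a180f564b3817e1c --max-size-kb=10485760
radosgw-admin quota enable --quota-scope=user --uid=db4d25b13eaa4645a180f564b3817e1c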

My rados setup:

[client.radosgw.gateway]
host = hostname
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = ""
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
rgw print continue = false
rgw keystone url = http://keystone_host:5000
rgw keystone admin token = _
rgw keystone accepted roles = _member_, Member, admin
rgw s3 auth use keystone = true
nss db path = /var/ceph/nss


Ceph Firefly 0.80.10
OpenStack Juno
OS: Ubuntu 14.04

Best regards,
Xabier




Re: [ceph-users] radosgw and keystone version 3 domains

2015-09-28 Thread Robert Duncan
Hi Shinobu,

Here are my keystone logs; the most severe level is 'WARNING', and there seems
to be no ERROR message in the keystone logs.

I have copied the warnings to the top and a full debug log below that.

Essentially, this is the warning
WARNING Authorization failed. Non-default domain is not supported (Disable 
debug mode to suppress these details.) (Disable debug mode to suppress these 
details.) from 172.25.60.2

There is no error message

http://pastebin.com/dFUZZNE7


thanks,
Rob.
-Original Message-
From: Shinobu Kinjo [mailto:ski...@redhat.com] 
Sent: 26 September 2015 06:06
To: Robert Duncan
Cc: Luis Periquito; Abhishek L; ceph-users
Subject: Re: [ceph-users] radosgw and keystone version 3 domains

If any of you could provide keystone.log with me, it would be more helpful.

and: keystone --version

Shinobu

- Original Message -
From: "Shinobu Kinjo" 
To: "Robert Duncan" 
Cc: "Luis Periquito" , "Abhishek L" 
, "ceph-users" 
Sent: Saturday, September 26, 2015 12:03:17 PM
Subject: Re: [ceph-users] radosgw and keystone version 3 domains

> and need to use openstack client.

Yes, you have to for v3 anyway.

Shinobu

- Original Message -
From: "Robert Duncan" 
To: "Luis Periquito" 
Cc: "Shinobu Kinjo" , "Abhishek L" 
, "ceph-users" 
Sent: Friday, September 25, 2015 11:29:14 PM
Subject: RE: [ceph-users] radosgw and keystone version 3 domains

A few other things that don't work:

- appending /v3 into the rgw.conf file (worth a try)
- adding the user into the default domain
- removing the v2 endpoints from the keystone catalog
- using a domain scoped token in rgw.conf
- using admin username and password in rgw.conf

According to the keystone documents we shouldn't use a versioned endpoint in
the catalog anymore, as ports 5000 and 35357 return an HTTP 300 'Multiple
Choices' response.

Horizon, though, doesn't work without explicitly stating 'use identity v3'.
Anyway, the keystone python client is pretty much broken, as we can't list
domain users or their projects (tenants), and we need to use the openstack
client. This is the crux of the issue: if only keystone v2 could list domain
users as having a role on a project - but it doesn't understand the domain id
part of the token. Arrghhh!

curl -i 172.25.60.2:35357
HTTP/1.1 300 Multiple Choices
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 759
Date: Fri, 25 Sep 2015 14:11:08 GMT
Connection: close

{"versions": {"values": [{"status": "stable", "updated": 
"2013-03-06T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}, {"base": "application/xml", 
"type": "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links": 
[{"href": "http://172.25.60.2:35357/v3/";, "rel": "self"}]}, {"status": 
"stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": 
"application/json", "type": "application/vnd.openstack.identity-v2.0+json"}, 
{"base": "application/xml", "type": 
"application/vnd.openstack.identity-v2.0+xml"}], "id": "v2.0", "links": 
[{"href": "http://172.25.60 
 .2:35357/v2.0/", "rel": "self"}, 
{"href": "http://docs.openstack.org/";, "type":


From: Luis Periquito [mailto:periqu...@gmail.com]
Sent: 25 September 2015 14:37
To: Robert Duncan
Cc: Shinobu Kinjo; Abhishek L; ceph-users
Subject: Re: [ceph-users] radosgw and keystone version 3 domains

This was reported in http://tracker.ceph.com/issues/8052 about a year ago. This 
ticket hasn't been updated...

On Fri, Sep 25, 2015 at 1:37 PM, Robert Duncan <robert.dun...@ncirl.ie> wrote:
I would be interested if anyone even has a work around to this - no matter how 
arcane.
If anyone gets this to work I would be most obliged

-Original Message-
From: Shinobu Kinjo [mailto:ski...@redhat.com]
Sent: 25 September 2015 13:31
To: Luis Periquito
Cc: Abhishek L; Robert Duncan; ceph-users
Subject: Re: [ceph-users] radosgw and keystone version 3 domains

Thanks for the info.

Shinobu

- Original Message -
From: "Luis Periquito" mailto:periqu...@gmail.com>>
To: "Shinobu Kinjo" mailto:ski...@redhat.com>>
Cc: "Abhishek L" 
mailto:abhishek.lekshma...@gmail.com>>, "Robert 
Duncan" mailto:robert.dun...@ncirl.ie>>, "ceph-users" 
mailto:ceph-us...@ceph.com>>
Sent: Friday, September 25, 2015 8:52:48 PM
Subject: Re: [ceph-users] radosgw and keystone version 3 domains

I'm having the exact same issue, and after looking it seems that radosgw is 
hardcoded to authenticate using v2 api.

from the config file: rgw keystone url = http://openstackcontrol.lab:35357/

the "/v2.0/" is hardcoded and gets appended to the authentication request.

a snippet taken from radosgw (run with "-d --debug-ms=1 --debug-rgw=20"
options):

2015-09-25 12:40:00.359333 7ff4bcf61700  1 == starting new request
req=0x7ff57801b810 =
2015-09-25 12:40:00.359355 7ff4bcf61700  2 req 1:0.21::GET 
/swift/v1::initializing
2015-09-25 12:40:0

Re: [ceph-users] Ceph incremental & external backup solution

2015-09-28 Thread John Spray
On Mon, Sep 28, 2015 at 1:13 PM, David Bayle  wrote:
> Hi everyone,
>
> I just read this feature request: http://tracker.ceph.com/issues/
>
> and I have one quick question: I'm looking for an external backup solution
> for ceph (like zfs send snapshot | zfs receive snapshot for ZFS). Is there
> any mechanism or solution that can serve this purpose?
> 
> As I don't see any resolution in this post, our goal is to have another
> solution to back up our production ceph; it could be another ceph in another
> datacenter or any other technology. Do you have any idea? I would prefer an
> incremental solution, to avoid performance degradation on the production
> setup.

The ticket you've linked is marked as resolved: presumably from when
the rbd import/export functionality was added.  The "rbd export-diff"
and "rbd import-diff" are probably what you're looking for?  Some
scripting is required to apply these to many images at once.
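
A minimal sketch of such a cycle for a single image (untested; the pool, image
and snapshot names and the image size are assumptions):

# production cluster: snapshot, then export everything up to that snapshot
rbd snap create rbd/img@snap1
rbd export-diff rbd/img@snap1 img-snap1.diff
# later: snapshot again and export only the changes since snap1
rbd snap create rbd/img@snap2
rbd export-diff --from-snap snap1 rbd/img@snap2 img-snap1-to-snap2.diff
# backup cluster: create a same-sized empty image once, then apply the diffs in order
rbd create backup/img --size 10240
rbd import-diff img-snap1.diff backup/img
rbd import-diff img-snap1-to-snap2.diff backup/img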

John

>
> Best regards,
>
> David.
>
> --
>
> David Bayle
> System Administrator
> GloboTech Communications
> Phone: 1-514-907-0050
> Toll Free: 1-(888)-GTCOMM1
> Fax: 1-(514)-907-0750
> supp...@gtcomm.net
> http://www.gtcomm.net
>


[ceph-users] Ceph incremental & external backup solution

2015-09-28 Thread David Bayle

Hi everyone,

I just read this feature request: http://tracker.ceph.com/issues/

and I have one quick question: I'm looking for an external backup solution for
ceph (like zfs send snapshot | zfs receive snapshot for ZFS). Is there any
mechanism or solution that can serve this purpose?

As I don't see any resolution in this post, our goal is to have another
solution to back up our production ceph; it could be another ceph in another
datacenter or any other technology. Do you have any idea? I would prefer an
incremental solution, to avoid performance degradation on the production
setup.


Best regards,

David.

--

David Bayle
System Administrator
GloboTech Communications
Phone: 1-514-907-0050
Toll Free: 1-(888)-GTCOMM1
Fax: 1-(514)-907-0750
supp...@gtcomm.net
http://www.gtcomm.net



Re: [ceph-users] CephFS: removing default data pool

2015-09-28 Thread John Spray
On Mon, Sep 28, 2015 at 12:39 PM, John Spray  wrote:
> On Mon, Sep 28, 2015 at 11:08 AM, Burkhard Linke
>  wrote:
>> Hi,
>>
>> I've created CephFS with a certain data pool some time ago (using firefly
>> release). I've added additional pools in the meantime and moved all data to
>> them. But a large number of empty (or very small) objects are left in the
>> pool according to 'ceph df':
>>
>> cephfs_test_data 7918M 0 45424G  6751721
>>
>> The number of objects changes if new files are added to CephFS or deleted.
>>
>> Does the first data pool play a different role and is used to store
>> additional information? How can I remove this pool? In the current
>> configuration the pool is a burden both to recovery/backfilling (many
>> objects) and to performance due to object creation/deletion.
>
> You can't remove it.  The reason is an implementation quirk for
> hardlinks: where inodes have layouts pointing to another data pool,
> they also write backtraces to the default data pool (whichever was the
> first is the default).  It's so that when we resolve a hardlink, we
> don't have to chase through all data pools looking for an inode, we
> can just look it up in the default data pool.
>
> Clearly this isn't optimal, but that's how it works right now.  For
> each file you create, you'll get a few hundred bytes-ish written to an
> xattr on an object in the default data pool.

I've created a feature ticket with some thoughts about how we could
improve this situation in the future:
http://tracker.ceph.com/issues/13259

John


Re: [ceph-users] CephFS: removing default data pool

2015-09-28 Thread John Spray
On Mon, Sep 28, 2015 at 11:08 AM, Burkhard Linke
 wrote:
> Hi,
>
> I've created CephFS with a certain data pool some time ago (using firefly
> release). I've added additional pools in the meantime and moved all data to
> them. But a large number of empty (or very small) objects are left in the
> pool according to 'ceph df':
>
> cephfs_test_data 7918M 0 45424G  6751721
>
> The number of objects changes if new files are added to CephFS or deleted.
>
> Does the first data pool play a different role and is used to store
> additional information? How can I remove this pool? In the current
> configuration the pool is a burden both to recovery/backfilling (many
> objects) and to performance due to object creation/deletion.

You can't remove it.  The reason is an implementation quirk for
hardlinks: where inodes have layouts pointing to another data pool,
they also write backtraces to the default data pool (whichever was the
first is the default).  It's so that when we resolve a hardlink, we
don't have to chase through all data pools looking for an inode, we
can just look it up in the default data pool.

Clearly this isn't optimal, but that's how it works right now.  For
each file you create, you'll get a few hundred bytes-ish written to an
xattr on an object in the default data pool.
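
If you're curious, you can see those backtraces directly: the inode's first
object in the default data pool carries a 'parent' xattr holding the encoded
backtrace, e.g. (the object name here is only an example):

rados -p cephfs_test_data listxattr 100000003ab.00000000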

John


[ceph-users] radosgw Storage policies

2015-09-28 Thread Luis Periquito
Hi All,

I was listening to the ceph talk about radosgw where Yehuda talks about storage
policies. I started looking for it in the documentation, on how to
implement/use it, and couldn't find much information:
http://docs.ceph.com/docs/master/radosgw/s3/ says it doesn't currently
support it, and http://docs.ceph.com/docs/master/radosgw/swift/ doesn't
mention it.

From the release notes it seems to be for the swift interface, not S3. Is
this correct? Can we create them for S3 interface, or only Swift?


thanks,


[ceph-users] rsync broken?

2015-09-28 Thread Paul Mansfield

Hi,

We used to rsync from eu.ceph.com into a local mirror for when we build
our code. We need to re-do this to pick up fresh packages built since
the intrusion.

It doesn't seem possible to rsync from any current ceph download site.

thanks
Paul


[ceph-users] CephFS: removing default data pool

2015-09-28 Thread Burkhard Linke

Hi,

I've created CephFS with a certain data pool some time ago (using 
firefly release). I've added additional pools in the meantime and moved 
all data to them. But a large number of empty (or very small) objects 
are left in the pool according to 'ceph df':


cephfs_test_data 7918M 0 45424G  6751721

The number of objects changes if new files are added to CephFS or deleted.

Does the first data pool play a different role and is used to store 
additional information? How can I remove this pool? In the current 
configuration the pool is a burden both to recovery/backfilling (many 
objects) and to performance due to object creation/deletion.


Regards,
Burkhard




[ceph-users] RE: RE: How to get RBD volume to PG mapping?

2015-09-28 Thread Межов Игорь Александрович
Hi!

Ilya Dryomov wrote:
>Internally there is a way to list objects within a specific PG
>(actually more than one way IIRC), but I don't think anything like that
>is exposed in a CLI (it might be exposed in librados though).  Grabbing
>an osdmap and iterating with osdmaptool --test-map-object over
>rbd_data..* is probably the fastest way for you to get what you
>want.

Yes, I dumped the osdmap, did 'rados ls' for all objects into a file, and
started a simple shell script that reads the object list and runs osdmaptool.
It is surprisingly slow: it has been running since Friday afternoon and has
processed only 5,000,000 of the more than 11,000,000 objects. So maybe I'll
try to dig deeper into the librados headers and write some homemade tool.

David Burley wrote:
>So figure out which OSDs are active for the PG, and run the find in the subdir 
>for the placement group on one of those. It should run really fast unless you
>have tons of tiny objects in the PG.
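
(As a concrete sketch of that suggestion, with placeholders for the OSD id, the
PG id and the image's block_name_prefix from 'rbd info':

find /var/lib/ceph/osd/ceph-12/current/2.1a_head -name '*<block_name_prefix_id>*'

keeping in mind that on a filestore OSD the '_' in object names is escaped on
disk, so matching only on the hex id part of the prefix is safest.)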

I think finding objects in the directory structure is a good way, but only for
a healthy cluster, where object placement is not changing. In my case, for a
strange reason, I can't figure out all three OSDs for this one PG. After a node
crash I have this one PG in a degraded state; it has only two replicas, while
the pool min_size=3.

And, even stranger, I can't force it to repair: neither 'ceph pg repair' nor an
OSD restart helped me to recover the PG. In 'ceph health detail' I can see only
two OSDs for this PG.




Megov Igor
CIO, Yuterra




From: Ilya Dryomov
Sent: 25 September 2015 18:21
To: Межов Игорь Александрович
Cc: David Burley; Jan Schermer; ceph-users
Subject: Re: [ceph-users] RE: How to get RBD volume to PG mapping?

On Fri, Sep 25, 2015 at 5:53 PM, Межов Игорь Александрович
 wrote:
> Hi!
>
> Thanks!
>
> I have some suggestions for the 1st method:
>
>>You could get the name prefix for each RBD from rbd info,
> Yes, I already did that at steps 1 and 2. I forgot to mention that I grab the
> rbd prefix from the 'rbd info' command.
>
>
>>then list all objects (run find on the osds?) and then you just need to
>> grep the OSDs for each prefix.
> So, you advise running find over ssh on all OSD hosts to traverse the OSD
> filesystems and find the files (objects) named with the rbd prefix? Am I
> right? If so, I have two thoughts: (1) it may not be that fast either,
> because even when limiting find with the rbd prefix and pool index, it has
> to recursively go through the whole OSD filesystem hierarchy; and (2) find
> will put additional load on the OSD drives.
>
>
> The second method is more attractive and I will try it soon. As we have an
> object name and can get a crushmap in some usable form, either to inspect
> ourselves or indirectly through a library/API call, finding the
> object-to-PG-to-OSDs chain is a local computational task, and it can be done
> without remote calls (accessing OSD hosts, running find, etc.).
>
> Also, the slow looping through 'ceph osd map <pool> <object>' can be
> explained: for every object we have to spawn a process, connect to the
> cluster (with auth), receive the maps on the client, calculate the
> placement, and ... finally throw it all away when the process exits. I think
> this overhead is the main reason for the slowness.

Internally there is a way to list objects within a specific PG
(actually more than one way IIRC), but I don't think anything like that
is exposed in a CLI (it might be exposed in librados though).  Grabbing
an osdmap and iterating with osdmaptool --test-map-object over
rbd_data..* is probably the fastest way for you to get what you
want.
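
In practice that could look roughly like this (the pool id and the object name
prefix below are only placeholders):

ceph osd getmap -o /tmp/osdmap
osdmaptool /tmp/osdmap --test-map-object rbd_data.164d4e71f88622.0000000000000000 --pool 2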

Thanks,

Ilya