Re: [ceph-users] RGW lifecycle not expiring objects

2017-06-29 Thread Félix Barbeira
I recently checked the repo and the new version of s3cmd was released 3 days ago, including lifecycle commands: https://github.com/s3tools/s3cmd/releases These are the lifecycle options: https://github.com/s3tools/s3cmd/blob/master/s3cmd#L2444-L2448 2017-06-29 17:51 GMT+02:00 Daniel Gryniewicz :
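For reference, the new subcommands can be exercised roughly like this (the bucket name is only an example; the exact option set is whatever the linked release ships):

  s3cmd setlifecycle lifecycle.xml s3://mybucket   # apply a policy from an XML file
  s3cmd getlifecycle s3://mybucket                 # show the policy attached to the bucket
  s3cmd dellifecycle s3://mybucket                 # remove the policy again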

Re: [ceph-users] RadosGW Swift public links

2017-06-29 Thread David Turner
Are you accessing the bucket from a URL that is not configured as an endpoint in your zone? I bet if you looked at the log you would see that the bucket that doesn't exist is the URL that you are using to access it. On Thu, Jun 29, 2017, 9:07 PM Donny Davis wrote: > I have swift working well wi
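A rough sketch of checking and fixing that (hostnames are examples; zonegroup get/set and the rgw dns name option exist in Jewel and later):

  radosgw-admin zonegroup get > zonegroup.json
  # add the public hostname to the "hostnames"/"endpoints" lists, then:
  radosgw-admin zonegroup set < zonegroup.json
  radosgw-admin period update --commit    # only needed in a realm/multisite setup
  # or, in ceph.conf on the gateway:  rgw dns name = objects.example.com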

[ceph-users] dropping filestore+btrfs testing for luminous

2017-06-29 Thread Sage Weil
We're having a series of problems with the valgrind included in xenial[1] that have led us to restrict all valgrind tests to centos nodes. At the same time, we're also seeing spurious ENOSPC errors from btrfs on both centos and xenial kernels[2], making trusty the only distro where btrfs works

[ceph-users] ask about async recovery

2017-06-29 Thread donglifec...@gmail.com
zhiqiang, Josn what about the async recovery feature? I didn't see any update on github recently, will it be further developed? Thanks a lot. donglifec...@gmail.com

Re: [ceph-users] Multi Tenancy in Ceph RBD Cluster

2017-06-29 Thread Alex Gorbachev
On Mon, Jun 26, 2017 at 2:00 AM Mayank Kumar wrote: > Hi Ceph Users > I am relatively new to Ceph and trying to Provision CEPH RBD Volumes using > Kubernetes. > > I would like to know what are the best practices for hosting a multi > tenant CEPH cluster. Specifically i have the following question
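One common isolation pattern (not necessarily what this thread settled on; pool and client names are made up) is a pool per tenant plus a cephx key restricted to that pool, which is then handed to that tenant's Kubernetes provisioner:

  ceph osd pool create tenant-a 64
  ceph auth get-or-create client.tenant-a \
      mon 'allow r' \
      osd 'allow rwx pool=tenant-a' \
      -o /etc/ceph/ceph.client.tenant-a.keyring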

Re: [ceph-users] Kernel mounted RBD's hanging

2017-06-29 Thread Alex Gorbachev
On Thu, Jun 29, 2017 at 10:30 AM Nick Fisk wrote: > Hi All, > > Putting out a call for help to see if anyone can shed some light on this. > > Configuration: > Ceph cluster presenting RBD's->XFS->NFS->ESXi > Running 10.2.7 on the OSD's and 4.11 kernel on the NFS gateways in a > pacemaker cluster >

[ceph-users] RadosGW Swift public links

2017-06-29 Thread Donny Davis
I have swift working well with keystone authentication, and I can upload and download files. However, when I make a link public, I get NoSuchBucket, and I have no idea what URL to find the buckets at. When I list the buckets with radosgw-admin bucket list I get back some tenant URL + the bucket. I

Re: [ceph-users] Very HIGH Disk I/O latency on instances

2017-06-29 Thread Gregory Farnum
On Thu, Jun 29, 2017 at 12:16 AM Peter Maloney < peter.malo...@brockmann-consult.de> wrote: > On 06/28/17 21:57, Gregory Farnum wrote: > > On Wed, Jun 28, 2017 at 9:17 AM Peter Maloney < > peter.malo...@brockmann-consult.de> wrote: > > On 06/28/17 16:52, keynes_...@wistron.com wrote: >> > [...]bac

Re: [ceph-users] Kernel mounted RBD's hanging

2017-06-29 Thread Ilya Dryomov
On Thu, Jun 29, 2017 at 6:22 PM, Nick Fisk wrote: >> -Original Message- >> From: Ilya Dryomov [mailto:idryo...@gmail.com] >> Sent: 29 June 2017 16:58 >> To: Nick Fisk >> Cc: Ceph Users >> Subject: Re: [ceph-users] Kernel mounted RBD's hanging >> >> On Thu, Jun 29, 2017 at 4:30 PM, Nick F

Re: [ceph-users] slow cluster perfomance during snapshot restore

2017-06-29 Thread Jason Dillaman
On Thu, Jun 29, 2017 at 1:33 PM, Gregory Farnum wrote: > I'm not sure if there are built-in tunable commands available (check the > manpages? Or Jason, do you know?), but if not you can use any generic > tooling which limits how much network traffic the RBD command can run. Long-running RBD actio
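One example of such generic tooling, assuming the rbd binary is dynamically linked (rates, pool and image names are made up, and this is not confirmed anywhere in the thread):

  # cap the rbd client at roughly 50 MB/s in each direction with trickle
  trickle -d 51200 -u 51200 rbd snap rollback rbd/myimage@mysnap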

Re: [ceph-users] Hammer patching on Wheezy?

2017-06-29 Thread Scott Gilbert
I have experienced a similar problem. I found that the ceph repos had previously resided at "ceph.com", but have since been moved to "download.ceph.com". In my case, simply updating the URL in the repo file resolved the issue. (i.e. changing "https://ceph.com/..." to "https://download.ceph.
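On a Debian-style host the fix amounts to one line in the repo file, e.g. (release and distro names are examples):

  # /etc/apt/sources.list.d/ceph.list
  # old: deb https://ceph.com/debian-hammer/ wheezy main
  deb https://download.ceph.com/debian-hammer/ wheezy main

followed by an apt-get update.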

Re: [ceph-users] slow cluster perfomance during snapshot restore

2017-06-29 Thread Gregory Farnum
On Thu, Jun 29, 2017 at 7:44 AM Stanislav Kopp wrote: > Hi, > > we're testing ceph cluster as storage backend for our virtualization > (proxmox), we're using RBD for raw VM images. If I'm trying to restore > some snapshot with "rbd snap rollback", the whole cluster becomes > really slow, the "ap

Re: [ceph-users] Question about rbd-mirror

2017-06-29 Thread Jason Dillaman
On Wed, Jun 28, 2017 at 11:42 PM, YuShengzuo wrote: > Hi Jason Dillaman, > > > > I am using rbd-mirror now (release Jewel). > > > > 1. > > And in many webs or other information introduced rbd-mirror notices that two > ceph cluster should be the ‘same fsid’. > > But It’s nothing bad or wrong when I

Re: [ceph-users] Ideas on the UI/UX improvement of ceph-mgr: Cluster Status Dashboard

2017-06-29 Thread Reed Dier
I’d like to see per pool iops/usage, et al. Being able to see rados vs rbd vs whatever else performance, or pools with different backing mediums and see which workloads result in what performance. Most of this I pretty well cobble together with collectd, but it would still be nice to have out o
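Until the dashboard grows that, rough per-pool numbers are already available from the CLI:

  ceph osd pool stats   # per-pool client and recovery I/O rates
  ceph df detail        # per-pool usage and object counts
  rados df              # per-pool read/write op counters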

Re: [ceph-users] Kernel mounted RBD's hanging

2017-06-29 Thread Nick Fisk
> -Original Message- > From: Ilya Dryomov [mailto:idryo...@gmail.com] > Sent: 29 June 2017 16:58 > To: Nick Fisk > Cc: Ceph Users > Subject: Re: [ceph-users] Kernel mounted RBD's hanging > > On Thu, Jun 29, 2017 at 4:30 PM, Nick Fisk wrote: > > Hi All, > > > > Putting out a call for hel

Re: [ceph-users] slow cluster perfomance during snapshot restore

2017-06-29 Thread Ashley Merrick
Many others, I’m sure, will comment on the snapshot specifics. However, running a cluster with some 8TB drives, I have noticed huge differences between 4TB and 8TB drives and their peak latencies when busy. So along with the known snapshot performance you may find the higher seek time and higher TB

Re: [ceph-users] Kernel mounted RBD's hanging

2017-06-29 Thread Ilya Dryomov
On Thu, Jun 29, 2017 at 4:30 PM, Nick Fisk wrote: > Hi All, > > Putting out a call for help to see if anyone can shed some light on this. > > Configuration: > Ceph cluster presenting RBD's->XFS->NFS->ESXi > Running 10.2.7 on the OSD's and 4.11 kernel on the NFS gateways in a > pacemaker cluster >

Re: [ceph-users] RGW lifecycle not expiring objects

2017-06-29 Thread Daniel Gryniewicz
On 06/28/2017 02:30 PM, Graham Allan wrote: That seems to be it! I couldn't see a way to specify the auth version with aws cli (is there a way?). However it did work with s3cmd and v2 auth: % s3cmd --signature-v2 setlifecycle lifecycle.xml s3://testgta s3://testgta/: Lifecycle Policy updated G
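The lifecycle.xml itself is not shown in the thread; a minimal policy in the standard S3 schema, applied with the same v2-auth workaround, would look something like this (the rule ID and 30-day expiry are arbitrary):

  <LifecycleConfiguration>
    <Rule>
      <ID>expire-old-objects</ID>
      <Prefix></Prefix>
      <Status>Enabled</Status>
      <Expiration><Days>30</Days></Expiration>
    </Rule>
  </LifecycleConfiguration>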

Re: [ceph-users] Zabbix plugin for ceph-mgr

2017-06-29 Thread Wido den Hollander
Just opened a PR: https://github.com/ceph/ceph/pull/16019 Reviews and comments are welcome! Wido > On 27 June 2017 at 16:57, Wido den Hollander wrote: > > > > On 27 June 2017 at 16:13, David Turner wrote: > > > > > > Back to just Zabbix, this thread could go on forever if it devolves
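Assuming the module lands roughly as proposed in that PR, enabling it would look something like this (the Zabbix host name is an example):

  ceph mgr module enable zabbix
  ceph zabbix config-set zabbix_host zabbix.example.com
  ceph zabbix send     # push a batch of values immediately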

[ceph-users] slow cluster perfomance during snapshot restore

2017-06-29 Thread Stanislav Kopp
Hi, we're testing ceph cluster as storage backend for our virtualization (proxmox), we're using RBD for raw VM images. If I'm trying to restore some snapshot with "rbd snap rollback", the whole cluster becomes really slow, the "apply_latency" goes to 4000-6000ms from normally 0-10ms, I see load o

Re: [ceph-users] free space calculation

2017-06-29 Thread Papp Rudolf Péter
Looks like I found the answer. The preparation was not done the proper way; I found valuable information in the ceph-disk prepare --help page, and the cluster is operating better now: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 931,5G 0 disk ├─sda1 8:1 0 476M 0 part ├─sda2 8:2
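For anyone hitting the same thing, the usual ceph-disk sequence is roughly (device names are placeholders):

  ceph-disk zap /dev/sdX                 # destroys any existing partition table!
  ceph-disk prepare /dev/sdX /dev/sdY    # data disk, optional separate journal device
  ceph-disk activate /dev/sdX1           # normally triggered automatically by udev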

[ceph-users] Kernel mounted RBD's hanging

2017-06-29 Thread Nick Fisk
Hi All, Putting out a call for help to see if anyone can shed some light on this. Configuration: Ceph cluster presenting RBD's->XFS->NFS->ESXi Running 10.2.7 on the OSD's and 4.11 kernel on the NFS gateways in a pacemaker cluster Both OSD's and clients go into a pair of switches, single L2 do

Re: [ceph-users] v12.1.0 Luminous RC released

2017-06-29 Thread Lars Marowsky-Bree
On 2017-06-26T11:28:36, Ashley Merrick wrote: > With the EC Overwite support, if currently running behind a cache tier in > Jewel will the overwrite still be of benefit through the cache tier and > remove the need to promote the full block to make any edits? > > Or we better totally removing t
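If it does come to removing the tier, the usual sequence for a writeback cache is roughly the following (pool names invented; newer releases may want cache-mode proxy and/or --yes-i-really-mean-it):

  ceph osd tier cache-mode cachepool forward
  rados -p cachepool cache-flush-evict-all
  ceph osd tier remove-overlay basepool
  ceph osd tier remove basepool cachepool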

Re: [ceph-users] risk mitigation in 2 replica clusters

2017-06-29 Thread Lars Marowsky-Bree
On 2017-06-22T00:51:38, Blair Bethwaite wrote: > I'm doing some work to evaluate the risks involved in running 2r storage > pools. On the face of it my naive disk failure calculations give me 4-5 > nines for a 2r pool of 100 OSDs (no copyset awareness, i.e., secondary disk > failure based purely

Re: [ceph-users] Cannot mount Ceph FS

2017-06-29 Thread Riccardo Murri
On 29 June 2017 at 13:58, Jaime Ibar wrote: > > I'd say there is no mds running This was indeed the issue: the configuration file named the MDS as `mds.0` which is not allowed (error message was "MDS names may not start with a numeric digit.") so the MDS daemon kept "committing suicide" (unnotice
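In other words, something like this in ceph.conf resolves it (the host name is only an example):

  [mds.a]        ; was [mds.0] -- MDS names may not start with a digit
      host = mymdshost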

Re: [ceph-users] Ceph and IPv4 -> IPv6

2017-06-29 Thread Wido den Hollander
> On 29 June 2017 at 14:50, Brenno Augusto Falavinha Martinez > wrote: > > > What about moving just the gateways to IPv6? Is that possible? I assume you mean the RGW? Yes, no problem! I have it running the other way around. The RGW has IPv4 and IPv6, but the Ceph cluster is IPv6-only. RGW/lib
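On the gateway itself this is mostly a question of what the frontend binds to, e.g. with civetweb (section name is an example; on most Linux hosts the v6 wildcard also accepts v4-mapped connections unless bindv6only is set):

  [client.rgw.gateway1]
      rgw frontends = civetweb port=[::]:7480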

Re: [ceph-users] Ceph and IPv4 -> IPv6

2017-06-29 Thread Brenno Augusto Falavinha Martinez
What about moving just the gateways to IPv6? Is that possible? Brenno On 29/06/2017 03:13:26, Wido den Hollander wrote: > > On 28 June 2017 at 22:12, Gregory Farnum > wrote: > > > On Wed, Jun 28, 2017 at 6:33 AM > wrote: > > > > I don't think either. I don't think there is another way than j

Re: [ceph-users] Cannot mount Ceph FS

2017-06-29 Thread Riccardo Murri
On 29 June 2017 at 13:48, John Spray wrote: > On Thu, Jun 29, 2017 at 12:26 PM, Riccardo Murri > wrote: >> Hello! >> >> I tried to create and mount a filesystem using the instructions at >> and >>

Re: [ceph-users] Cannot mount Ceph FS

2017-06-29 Thread John Spray
On Thu, Jun 29, 2017 at 12:53 PM, Riccardo Murri wrote: > On 29 June 2017 at 13:48, John Spray wrote: >> On Thu, Jun 29, 2017 at 12:26 PM, Riccardo Murri >> wrote: >>> Hello! >>> >>> I tried to create and mount a filesystem using the instructions at >>>

Re: [ceph-users] Cannot mount Ceph FS

2017-06-29 Thread John Spray
On Thu, Jun 29, 2017 at 12:26 PM, Riccardo Murri wrote: > Hello! > > I tried to create and mount a filesystem using the instructions at > and > but I am getting > errors: > > $ sudo ceph fs new cep

Re: [ceph-users] What caps are necessary for FUSE-mounts of the FS?

2017-06-29 Thread John Spray
On Thu, Jun 29, 2017 at 11:42 AM, Riccardo Murri wrote: > Hello! > > The documentation at states: > > """ > Before mounting a Ceph File System in User Space (FUSE), ensure that > the client host has a copy of the Ceph configuration file and a > keyri

Re: [ceph-users] Cannot mount Ceph FS

2017-06-29 Thread Jaime Ibar
Hi, I'd say there is no mds running $ ceph mds stat e47262: 1/1/1 up {0=ceph01=up:active}, 2 up:standby $ ceph -s [...] fsmap e47262: 1/1/1 up {0=ceph01=up:active}, 2 up:standby [...] Is mds up and running? Jaime On 29/06/17 12:26, Riccardo Murri wrote: Hello! I tried to create and m

Re: [ceph-users] Cannot mount Ceph FS

2017-06-29 Thread Burkhard Linke
Hi, On 06/29/2017 01:26 PM, Riccardo Murri wrote: Hello! I tried to create and mount a filesystem using the instructions at and but I am getting errors: $ sudo ceph fs new cephfs cephfs_metada

[ceph-users] Cannot mount Ceph FS

2017-06-29 Thread Riccardo Murri
Hello! I tried to create and mount a filesystem using the instructions at and but I am getting errors: $ sudo ceph fs new cephfs cephfs_metadata cephfs_data new fs with metadata pool 1 and data po
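For completeness, the sequence that normally precedes a successful mount (PG counts are arbitrary test values):

  ceph osd pool create cephfs_data 64
  ceph osd pool create cephfs_metadata 64
  ceph fs new cephfs cephfs_metadata cephfs_data
  ceph mds stat     # wait for an MDS to reach up:active before mounting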

[ceph-users] What caps are necessary for FUSE-mounts of the FS?

2017-06-29 Thread Riccardo Murri
Hello! The documentation at states: """ Before mounting a Ceph File System in User Space (FUSE), ensure that the client host has a copy of the Ceph configuration file and a keyring with CAPS for the Ceph metadata server. """ Now, I have two questio
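A client key of roughly this shape is what ceph-fuse ends up using (the client id and data pool name are examples, not the documented minimum):

  ceph auth get-or-create client.fuseuser \
      mon 'allow r' \
      mds 'allow rw' \
      osd 'allow rw pool=cephfs_data' \
      -o /etc/ceph/ceph.client.fuseuser.keyring
  ceph-fuse -n client.fuseuser -k /etc/ceph/ceph.client.fuseuser.keyring /mnt/cephfs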

Re: [ceph-users] Ceph New OSD cannot be started

2017-06-29 Thread Eugen Block
I'm pretty sure (the OP didn't specify that or other obvious things) that this is a Debian version that predates the horror that is systemd, given the Ceph version alone. I thought about that, but was not really sure since I dived into ceph only a year ago ;-) In this case I mean the systemv eq

Re: [ceph-users] Ceph New OSD cannot be started

2017-06-29 Thread Christian Balzer
Hello, On Thu, 29 Jun 2017 08:53:25 + Eugen Block wrote: > Hi, > > what does systemctl status -l ceph-osd@4.service say? Is anything > suspicious in the syslog? > I'm pretty sure (the OP didn't specify that or other obvious things) that this is a Debian version that predates the horror tha

Re: [ceph-users] Ceph New OSD cannot be started

2017-06-29 Thread Eugen Block
Hi, what does systemctl status -l ceph-osd@4.service say? Is anything suspicious in the syslog? Regards, Eugen Zitat von Luescher Claude : Hello, I have a cluster of 3 debian ceph machines running version: ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74) A new disk was ad

[ceph-users] Ceph New OSD cannot be started

2017-06-29 Thread Luescher Claude
Hello, I have a cluster of 3 debian ceph machines running version: ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74) A new disk was added to one node but it does not want to start it. I have tried everything, like removing and re-adding the disk many times. The current ceph osd tre
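On a pre-systemd Debian like this, checking and starting the OSD by hand looks roughly like the following (OSD id and device name are placeholders):

  /etc/init.d/ceph status osd.4      # or: service ceph status osd.4
  /etc/init.d/ceph start osd.4
  # if the disk was prepared with ceph-disk, activating its data partition may be the missing step:
  ceph-disk activate /dev/sdX1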

Re: [ceph-users] Very HIGH Disk I/O latency on instances

2017-06-29 Thread Peter Maloney
On 06/28/17 21:57, Gregory Farnum wrote: > > > On Wed, Jun 28, 2017 at 9:17 AM Peter Maloney > > wrote: > > On 06/28/17 16:52, keynes_...@wistron.com > wrote: >> [...]backup VMs is create a snapshot by Ceph comm

[ceph-users] Luminous radosgw hangs after a few hours

2017-06-29 Thread Martin Emrich
Since upgrading to 12.1, our Object Gateways hang after a few hours. I only see these messages in the log file: 2017-06-29 07:52:20.877587 7fa8e01e5700 0 ERROR: keystone revocation processing returned error r=-22 2017-06-29 08:07:20.877761 7fa8e01e5700 0 ERROR: keystone revocation processing