Re: [ceph-users] Deprecating ext4 support

2016-04-14 Thread Christian Balzer
Hello, On Fri, 15 Apr 2016 07:02:13 +0200 Michael Metz-Martini | SpeedPartner GmbH wrote: > Hi, > > On 15.04.2016 at 03:07, Christian Balzer wrote: > >> We thought this was a good idea so that we can set the replication > >> size differently for doc_root and raw-data if we like. Seems this

Re: [ceph-users] Deprecating ext4 support

2016-04-14 Thread Michael Metz-Martini | SpeedPartner GmbH
Hi, On 15.04.2016 at 03:07, Christian Balzer wrote: >> We thought this was a good idea so that we can set the replication >> size differently for doc_root and raw-data if we like. Seems this was a >> bad idea for all objects. > I'm not sure how you managed to get into that state or if it's a
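For reference, setting a different replication size per pool is done with the standard pool commands; a minimal sketch using the pool names quoted in the thread (doc_root and raw-data, and the size values, are assumptions for illustration):

    # keep more copies of the doc_root pool, fewer of raw-data
    ceph osd pool set doc_root size 3
    ceph osd pool set doc_root min_size 2
    ceph osd pool set raw-data size 2
    ceph osd pool set raw-data min_size 1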

Re: [ceph-users] Deprecating ext4 support

2016-04-14 Thread Christian Balzer
On Thu, 14 Apr 2016 19:39:01 +0200 Michael Metz-Martini | SpeedPartner GmbH wrote: > Hi, > > On 14.04.2016 at 03:32, Christian Balzer wrote: [massive snip] Thanks for that tree/du output, it matches what I expected. You'd think XFS wouldn't be that intimidated by directories of that size. >

Re: [ceph-users] osd prepare 10.1.2

2016-04-14 Thread Michael Hanscho
Hi Ben! Thanks for the information - I will try that (although I am not happy to leave the CentOS / Red Hat path)... Regards, Michael On 2016-04-14 20:44, Benjeman Meekhof wrote: > Hi Michael, > > The partprobe issue was resolved for me by updating parted to the > package from Fedora 22:

Re: [ceph-users] my cluster is down after upgrade to 10.1.2

2016-04-14 Thread Lomayani S. Laizer
Hello, Upgraded the cluster but still seeing the same issue. Is the cluster not recoverable? ceph --version ceph version 10.1.2-64-ge657ecf (e657ecf8e437047b827aa89fb9c10be82643300c) root@mon-b:~# ceph -w 2016-04-14 22:17:56.766169 7f5da3fff700 0 -- 10.10.200.3:0/1828342317 >>

Re: [ceph-users] osd prepare 10.1.2

2016-04-14 Thread Benjeman Meekhof
Hi Michael, The partprobe issue was resolved for me by updating parted to the package from Fedora 22: parted-3.2-16.fc22.x86_64. It shouldn't require any other dependencies updated to install on EL7 varieties. http://tracker.ceph.com/issues/15176 regards, Ben On Thu, Apr 14, 2016 at 12:35
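A hedged sketch of applying that workaround on an EL7 host, assuming the Fedora 22 parted RPM named above has already been downloaded locally (the download URL is not given in the thread):

    # install the newer parted over the stock EL7 package
    yum install ./parted-3.2-16.fc22.x86_64.rpm
    # verify the version and re-run the probe that was failing
    parted --version
    partprobe /dev/sdm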

Re: [ceph-users] Deprecating ext4 support

2016-04-14 Thread Samuel Just
It doesn't seem like it would be wise to run such systems on top of rbd. -Sam On Thu, Apr 14, 2016 at 11:05 AM, Jianjian Huo wrote: > On Wed, Apr 13, 2016 at 6:06 AM, Sage Weil wrote: >> On Tue, 12 Apr 2016, Jan Schermer wrote: >>> Who needs to have

Re: [ceph-users] my cluster is down after upgrade to 10.1.2

2016-04-14 Thread Gregory Farnum
Yep! This is fixed in the jewel and master branches now, but we're going to wait until the next rc (or final release!) to push official packages for it. In the meantime, you can install those from our gitbuilders following the instructions at

Re: [ceph-users] my cluster is down after upgrade to 10.1.2

2016-04-14 Thread Lomayani S. Laizer
Hello Gregory, Thanks for your reply. I think I am hitting the same bug. Below is the link to the log from just after the upgrade: https://justpaste.it/ta16 -- Lomayani On Thu, Apr 14, 2016 at 6:24 PM, Gregory Farnum wrote: > On Thu, Apr 14, 2016 at 7:05 AM, Lomayani S. Laizer

Re: [ceph-users] Deprecating ext4 support

2016-04-14 Thread Michael Metz-Martini | SpeedPartner GmbH
Hi, On 14.04.2016 at 03:32, Christian Balzer wrote: > On Wed, 13 Apr 2016 14:51:58 +0200 Michael Metz-Martini | SpeedPartner GmbH > wrote: >> On 13.04.2016 at 04:29, Christian Balzer wrote: >>> On Tue, 12 Apr 2016 09:00:19 +0200 Michael Metz-Martini | SpeedPartner GmbH >>> wrote: On

[ceph-users] osd prepare 10.1.2

2016-04-14 Thread Michael Hanscho
Hi! A fresh install of 10.1.2 on CentOS 7.2.1511 fails adding osds: [ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdm /dev/sdi [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs The reason seems to be a failing partprobe

Re: [ceph-users] my cluster is down after upgrade to 10.1.2

2016-04-14 Thread Gregory Farnum
On Thu, Apr 14, 2016 at 7:05 AM, Lomayani S. Laizer wrote: > Hello, > I upgraded from 10.1.0 to 10.1.2 with ceph-deploy and my cluster is down > now. getting below errors > > ceph -s > > 2016-04-14 17:04:58.909894 7f14686e4700 0 -- :/2590574876 >> > 10.10.200.4:6789/0

Re: [ceph-users] Deprecating ext4 support

2016-04-14 Thread Christian Balzer
Hello, [reduced to ceph-users] On Thu, 14 Apr 2016 11:43:07 +0200 Steffen Weißgerber wrote: > > > >>> Christian Balzer wrote on Tuesday, 12 April 2016 > >>> at 01:39: > > > Hello, > > > > Hi, > > > I'm officially only allowed to do (preventative) maintenance during >

Re: [ceph-users] v10.1.2 Jewel release candidate release

2016-04-14 Thread Milosz Tanski
On Thu, Apr 14, 2016 at 6:32 AM, John Spray wrote: > On Thu, Apr 14, 2016 at 8:31 AM, Vincenzo Pii > wrote: >> >> On 14 Apr 2016, at 00:09, Gregory Farnum wrote: >> >> On Wed, Apr 13, 2016 at 3:02 PM, Sage Weil

Re: [ceph-users] my cluster is down after upgrade to 10.1.2

2016-04-14 Thread c
On 2016-04-14 16:05, Lomayani S. Laizer wrote: Hello, I upgraded from 10.1.0 to 10.1.2 with ceph-deploy and my cluster is down now. getting below errors ceph -s 2016-04-14 17:04:58.909894 7f14686e4700 0 -- :/2590574876 >> 10.10.200.4:6789/0 pipe(0x7f146405adf0 sd=3 :0 s=1 pgs=0 cs=0

[ceph-users] my cluster is down after upgrade to 10.1.2

2016-04-14 Thread Lomayani S. Laizer
Hello, I upgraded from 10.1.0 to 10.1.2 with ceph-deploy and my cluster is down now. getting below errors ceph -s 2016-04-14 17:04:58.909894 7f14686e4700 0 -- :/2590574876 >> 10.10.200.4:6789/0 pipe(0x7f146405adf0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f146405c0b0).fault 2016-04-14 17:05:01.909949
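When ceph -s only prints pipe ... fault lines like the above, the monitors are unreachable over the network, but each monitor can still be queried locally through its admin socket. A minimal sketch, run on a monitor host (default socket path and a monitor id matching the short hostname are assumed):

    ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status
    ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok version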

Re: [ceph-users] Advice on OSD upgrades

2016-04-14 Thread Stephen Mercier
Thank you for the advice. Our crush map is actually set up with replication set to 3, and at least one copy in each cabinet, ensuring no one host is a single point of failure. We fully intended on performing this maintenance over the course of many weeks, one host at a time. We felt that the

Re: [ceph-users] remote logging

2016-04-14 Thread Wido den Hollander
> On 14 April 2016 at 14:46, Steffen Weißgerber wrote: > > > Hello, > > I tried to configure ceph logging to a remote syslog host based on > Sébastien Han's blog > (http://www.sebastien-han.fr/blog/2013/01/07/logging-in-ceph/): > > ceph.conf > > [global] > ... >

Re: [ceph-users] Advice on OSD upgrades

2016-04-14 Thread Stephen Mercier
Sadly, this is not an option. Not only are there no free slots on the hosts, but we're downgrading in size for each OSD because we decided to sacrifice space to make a significant jump in drive quality. We're not really too concerned about the rebalancing, as we monitor the cluster closely

Re: [ceph-users] Advice on OSD upgrades

2016-04-14 Thread Wido den Hollander
> On 14 April 2016 at 15:29, Stephen Mercier > wrote: > > > Good morning, > > We've been running a medium-sized (88 OSDs - all SSD) ceph cluster for the > past 20 months. We're very happy with our experience with the platform so far. > > Shortly, we will be

[ceph-users] Antw: Advice on OSD upgrades

2016-04-14 Thread Steffen Weißgerber
Hi, that's how I did it for my osd's 25 to 30 (you can include as many OSD numbers as you like, as long as you have free space). First you reweight the osd's to 0 to move their copies to other osd's: for i in {25..30}; do ceph osd crush reweight osd.$i 0; done and then wait until it's done (when
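Once the reweight has finished draining the data, the usual removal sequence for each OSD is roughly the following (a sketch of the standard steps, not quoted from the truncated mail):

    for i in {25..30}; do
        ceph osd out $i
        systemctl stop ceph-osd@$i    # pre-systemd: service ceph stop osd.$i
        ceph osd crush remove osd.$i
        ceph auth del osd.$i
        ceph osd rm $i
    done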

[ceph-users] Advice on OSD upgrades

2016-04-14 Thread Stephen Mercier
Good morning, We've been running a medium-sized (88 OSDs - all SSD) ceph cluster for the past 20 months. We're very happy with our experience with the platform so far. Shortly, we will be embarking on an initiative to replace all 88 OSDs with new drives (Planned maintenance and lifecycle

[ceph-users] remote logging

2016-04-14 Thread Steffen Weißgerber
Hello, I tried to configure ceph logging to a remote syslog host based on Sébastien Han's blog (http://www.sebastien-han.fr/blog/2013/01/07/logging-in-ceph/): ceph.conf [global] ... log_file = none log_to_syslog = true err_to_syslog = true [mon] mon_cluster_log_to_syslog = true
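For comparison, a minimal sketch of both halves of such a setup, assuming rsyslog on the sending and receiving hosts (option names as in the blog post; the remote host name is a placeholder):

    # ceph.conf on the cluster nodes
    [global]
    log_to_syslog = true
    err_to_syslog = true
    [mon]
    mon_cluster_log_to_syslog = true

    # /etc/rsyslog.conf fragment on every ceph node: forward everything via UDP
    *.* @syslog.example.com:514

    # /etc/rsyslog.conf fragment on the log host: accept UDP syslog
    $ModLoad imudp
    $UDPServerRun 514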

Re: [ceph-users] Auth capability required to run ceph daemon commands

2016-04-14 Thread John Spray
On Thu, Apr 14, 2016 at 11:17 AM, Sergio A. de Carvalho Jr. wrote: > Hi, > > Does anybody know what auth capabilities are required to run commands such > as: When you're doing "ceph daemon", no ceph authentication is happening: this is a local connection to a UNIX socket
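In other words, the command must be run on the host where the daemon lives, and no keyring is consulted at all; a short sketch of the two equivalent forms (default socket path assumed):

    # both talk to the daemon's local UNIX admin socket
    ceph daemon osd.0 perf dump
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump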

Re: [ceph-users] v10.1.2 Jewel release candidate release

2016-04-14 Thread John Spray
On Thu, Apr 14, 2016 at 8:31 AM, Vincenzo Pii wrote: > > On 14 Apr 2016, at 00:09, Gregory Farnum wrote: > > On Wed, Apr 13, 2016 at 3:02 PM, Sage Weil wrote: > > Hi everyone, > > The third (and likely final) Jewel release

[ceph-users] Auth capability required to run ceph daemon commands

2016-04-14 Thread Sergio A. de Carvalho Jr.
Hi, Does anybody know what auth capabilities are required to run commands such as: ceph daemon osd.0 perf dump Even with the client.admin user, I can't get it to work: $ ceph daemon osd.0 perf dump --name client.admin --keyring=/etc/ceph/ceph.client.admin.keyring {} $ ceph auth get

[ceph-users] Official website of the developer mailing list address is wrong

2016-04-14 Thread m13913886148
The developer mailing list (ceph-devel) address on the official website is wrong. Who can give me a correct address to subscribe? Thanks!
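For the record, ceph-devel was hosted on vger.kernel.org at the time and subscribed to via majordomo rather than a web form; a sketch, assuming a working local mail command:

    echo "subscribe ceph-devel" | mail majordomo@vger.kernel.org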

[ceph-users] Antw: Re: Deprecating ext4 support

2016-04-14 Thread Steffen Weißgerber
>>> Christian Balzer wrote on Tuesday, 12 April 2016 at >>> 01:39: > Hello, > Hi, > I'm officially only allowed to do (preventative) maintenance during weekend > nights on our main production cluster. > That would mean 13 ruined weekends at the realistic rate of 1 OSD
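The usual way to keep per-host OSD restarts in such a maintenance window from triggering rebalancing is the noout flag; a minimal sketch (not quoted from the truncated mail):

    ceph osd set noout
    # ... restart/upgrade the OSDs on one host ...
    ceph osd unset noout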

Re: [ceph-users] v10.1.2 Jewel release candidate release

2016-04-14 Thread Vincenzo Pii
> On 14 Apr 2016, at 00:09, Gregory Farnum wrote: > > On Wed, Apr 13, 2016 at 3:02 PM, Sage Weil wrote: >> Hi everyone, >> >> The third (and likely final) Jewel release candidate is out. We have a >> very small number of remaining blocker issues and a bit

[ceph-users] Using CEPH for replication -- evaluation

2016-04-14 Thread Kumar Suraj
Hi I am thinking of using CEPH to replicate some non-critical stats data of our system for redundancy purposes. I have the following questions - we do not want to write data through CEPH but just use it as a hook to our current DB to make it replicate data asynchronously at periodic intervals on few
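If the stats are pushed at intervals rather than written through Ceph on every update, the simplest hook is the rados CLI against a replicated pool; a hedged sketch (the pool name, pg count, and dump file are hypothetical):

    # one-time: create a replicated pool for the stats dumps
    ceph osd pool create statsdata 64
    # periodic job: upload the latest dump as a timestamped object
    rados -p statsdata put stats-$(date +%Y%m%d%H%M) /tmp/stats.dump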