Re: [ceph-users] iSCSI on Ubuntu and HA / Multipathing

2019-07-10 Thread Michael Christie
On 07/11/2019 05:34 AM, Edward Kalk wrote: > The Docs say : http://docs.ceph.com/docs/nautilus/rbd/iscsi-targets/ > > * Red Hat Enterprise Linux/CentOS 7.5 (or newer); Linux kernel v4.16 > (or newer) > > ^^Is there a version combination of CEPH and Ubuntu that works? Is > anyone running iSC

Re: [ceph-users] out of date python-rtslib repo on https://shaman.ceph.com/

2019-06-19 Thread Michael Christie
On 06/17/2019 03:41 AM, Matthias Leopold wrote: > thank you very much for updating python-rtslib!! > could you maybe also do this for tcmu-runner (version 1.4.1)? I am just about to make a new 1.5 release. Give me a week. I am working on a last feature/bug for the gluster team, and then I am going

Re: [ceph-users] ISCSI Setup

2019-06-19 Thread Michael Christie
On 06/19/2019 12:34 AM, Brent Kennedy wrote: > Recently upgraded a ceph cluster to nautilus 14.2.1 from Luminous, no > issues. One of the reasons for doing so was to take advantage of some > of the new ISCSI updates that were added in Nautilus. I installed > CentOS 7.6 and did all the basic stuff

Re: [ceph-users] Grow bluestore PV/LV

2019-05-15 Thread Michael Andersen
Thanks! I'm on mimic for now, but I'll give it a shot on a test nautilus cluster. On Wed, May 15, 2019 at 10:58 PM Yury Shevchuk wrote: > Hello Michael, > > growing (expanding) bluestore OSD is possible since Nautilus (14.2.0) > using bluefs-bdev-expand tool as di

[ceph-users] Grow bluestore PV/LV

2019-05-15 Thread Michael Andersen
l and that original poster spent a lot of effort in explaining exactly what he meant, but I could not find a reply to his email. Thanks Michael

Re: [ceph-users] x pgs not deep-scrubbed in time

2019-04-04 Thread Michael Sudnick
Thanks, I'll mess around with them and see what I can do. -Michael On Thu, 4 Apr 2019 at 05:58, Alexandru Cucu wrote: > Hi, > > You are limited by your drives so not much can be done but it should > at least catch up a bit and reduce the number of pgs that have not > b

Re: [ceph-users] x pgs not deep-scrubbed in time

2019-04-03 Thread Michael Sudnick
Hi Alex, I'm okay myself with the number of scrubs performed; would you expect tweaking any of those values to let the deep-scrubs finish in time? Thanks, Michael On Wed, 3 Apr 2019 at 10:30, Alexandru Cucu wrote: > Hello, > > You can increase *osd scrub max interval* and *
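
The options mentioned in the quoted reply are ordinary OSD settings; a minimal sketch of raising them at runtime, assuming Luminous/Nautilus-era option names and purely illustrative values (intervals are in seconds):

    # allow up to 14 days between deep scrubs instead of the default 7
    ceph tell osd.* injectargs '--osd_deep_scrub_interval 1209600'
    # optionally allow two concurrent scrubs per OSD (default is 1)
    ceph tell osd.* injectargs '--osd_max_scrubs 2'
    # persist the same values in ceph.conf under [osd] so they survive restarts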

[ceph-users] x pgs not deep-scrubbed in time

2019-04-03 Thread Michael Sudnick
about 3 days ago where I had a disk die and replaced it. Any suggestions on what I can do? Thank you for any suggestions. -Michael

Re: [ceph-users] Bluestore 32bit max_object_size limit

2019-01-18 Thread KEVIN MICHAEL HRPCEK
On 1/18/19 7:26 AM, Igor Fedotov wrote: Hi Kevin, On 1/17/2019 10:50 PM, KEVIN MICHAEL HRPCEK wrote: Hey, I recall reading about this somewhere but I can't find it in the docs or list archive and confirmation from a dev or someone who knows for sure would be nice. What I recall is

[ceph-users] Bluestore 32bit max_object_size limit

2019-01-17 Thread KEVIN MICHAEL HRPCEK
Hey, I recall reading about this somewhere but I can't find it in the docs or list archive and confirmation from a dev or someone who knows for sure would be nice. What I recall is that bluestore has a max 4GB file size limit based on the design of bluestore not the osd_max_object_size setting.

Re: [ceph-users] HDD spindown problem

2019-01-04 Thread Nieporte, Michael
Hello, we likely faced the same issue with spindowns. We set max spindown timers on all HDDs and disabled the tuned.service, which we were told might change back/affect the set timers. To correctly disable tuned: tuned-adm stop tuned-adm off systemctl stop tuned.service systemctl disable tune
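
The command list at the end of this preview runs together; spelled out, the sequence the poster describes appears to be the following (the last command is cut off and is completed here on the assumption that it is "systemctl disable tuned.service"):

    tuned-adm stop                    # as listed in the post
    tuned-adm off
    systemctl stop tuned.service
    systemctl disable tuned.service   # keeps tuned from starting again on boot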

Re: [ceph-users] RDMA/RoCE enablement failed with (113) No route to host

2018-12-21 Thread Michael Green
I was informed today that the CEPH environment I’ve been working on is no longer available. Unfortunately this happened before I could try any of your suggestions, Roman. Thank you for all the attention and advice. -- Michael Green > On Dec 20, 2018, at 08:21, Roman Penyaev wr

Re: [ceph-users] RDMA/RoCE enablement failed with (113) No route to host

2018-12-19 Thread Michael Green
d[837488]: 8: (clone()+0x6d) [0x7f9ab71dcbad] Dec 20 02:27:00 bonjovi0 ceph-osd[837488]: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. The above block repeats for each OSD. Any advice where to go from here will be much appreciated. -- Michael Green Customer Suppo

Re: [ceph-users] RDMA/RoCE enablement failed with (113) No route to host

2018-12-19 Thread Michael Green
LABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/13.2.2/rpm/el7/BUILD/ceph-13.2.2/src/msg/async/rdma/RDMAConnectedSocketImpl.cc: 224: FAILED assert(!r) -- Michael Green > On Dec 19, 2018, at 5:21 AM, Roman Penyaev wrote: > > > Well, I am playing with ceph rdma impleme

Re: [ceph-users] RDMA/RoCE enablement failed with (113) No route to host

2018-12-18 Thread Michael Green
tiple posts in this very mailing list from people trying to make it work. -- Michael Green Customer Support & Integration Tel. +1 (518) 9862385 gr...@e8storage.com E8 Storage has a new look, find out more <https://e8storage.com/when-performance-matters-a-new-look-for-e8-storage/>

[ceph-users] RDMA/RoCE enablement failed with (113) No route to host

2018-12-12 Thread Michael Green
_gid=:::::ffff:c0a8:013b # #[mon.medellin] #ms_async_rdma_local_gid=::::::c0a8:0141 -ceph.conf---end- -- Michael Green

Re: [ceph-users] rbd IO monitoring

2018-12-04 Thread Michael Green
Interesting, thanks for sharing. I'm looking at the example output in the PR 25114: write_bytes 409600/107 409600/107 write_latency 2618503617/107 How should these values be interpreted? -- Michael Green > On Dec 3, 2018, at 2:47 AM, Jan Fajerski wrote: > >> Que

[ceph-users] rbd IO monitoring

2018-11-29 Thread Michael Green
Hello collective wisdom, Ceph neophyte here, running v13.2.2 (mimic). Question: what tools are available to monitor IO stats on RBD level? That is, IOPS, Throughput, IOs inflight and so on? I'm testing with FIO and want to verify independently the IO load on each RBD image. -- Michael

Re: [ceph-users] Don't upgrade to 13.2.2 if you use cephfs

2018-10-17 Thread Michael Sudnick
What exactly are the symptoms of the problem? I use cephfs with 13.2.2 with two active MDS daemons and at least on the surface everything looks fine. Is there anything I should avoid doing until 13.2.3? On Wed, Oct 17, 2018, 14:10 Patrick Donnelly wrote: > On Wed, Oct 17, 2018 at 11:05 AM Alexan

Re: [ceph-users] Fwd: [Ceph-community] After Mimic upgrade OSD's stuck at booting.

2018-09-26 Thread KEVIN MICHAEL HRPCEK
Hey, don't lose hope. I just went through two 3-5 day outages after a mimic upgrade with no data loss. I'd recommend looking through the thread about it to see how close it is to your issue. From my point of view there seem to be some similarities. http://lists.ceph.com/pipermail/ceph-users-ceph

Re: [ceph-users] Mimic upgrade failure

2018-09-24 Thread KEVIN MICHAEL HRPCEK
pdate on the state of the cluster? I've opened a ticket http://tracker.ceph.com/issues/36163 to track the likely root cause we identified, and have a PR open at https://github.com/ceph/ceph/pull/24247 Thanks! sage On Thu, 20 Sep 2018, Sage Weil wrote: On Thu, 20 Sep 2018, KEVIN MICHAEL HRP

Re: [ceph-users] Mimic upgrade failure

2018-09-20 Thread KEVIN MICHAEL HRPCEK
= 0 munmap(0x7f2ea8c97000, 2468005) = 0 open("/var/lib/ceph/mon/ceph-sephmon1/store.db/26299338.sst", O_RDONLY) = 429 stat("/var/lib/ceph/mon/ceph-sephmon1/store.db/26299338.sst", {st_mode=S_IFREG|0644, st_size=2484001, ...}) = 0 mmap(NULL, 2484001, PROT_

Re: [ceph-users] Mimic upgrade failure

2018-09-20 Thread KEVIN MICHAEL HRPCEK
(429) = 0 munmap(0x7f2ea8c97000, 2468005) = 0 open("/var/lib/ceph/mon/ceph-sephmon1/store.db/26299338.sst", O_RDONLY) = 429 stat("/var/lib/ceph/mon/ceph-sephmon1/store.db/26299338.sst", {st_mode=S_IFREG|0644, st_size=2484001, ...}) = 0 mmap(

Re: [ceph-users] Mimic upgrade failure

2018-09-19 Thread KEVIN MICHAEL HRPCEK
mmap(NULL, 2484001, PROT_READ, MAP_SHARED, 429, 0) = 0x7f2eda74b000 close(429) = 0 munmap(0x7f2ee21dc000, 2472343) = 0 Kevin On 09/19/2018 06:50 AM, Sage Weil wrote: On Wed, 19 Sep 2018, KEVIN MICHAEL HRPCEK wrote: Sage, Unfortunately the mon election p

Re: [ceph-users] Mimic upgrade failure

2018-09-18 Thread KEVIN MICHAEL HRPCEK
Sage, Unfortunately the mon election problem came back yesterday and it makes it really hard to get a cluster to stay healthy. A brief unexpected network outage occurred and sent the cluster into a frenzy and when I had it 95% healthy the mons started their nonstop reelections. In the previous

[ceph-users] Error-code 2002/API 405 S3 REST API. Creating a new bucket

2018-09-17 Thread Michael Schäfer
Hi, We have a problem with the radosgw using the S3 REST API. Trying to create a new bucket does not work. We get a 405 at the API level and the log indicates a 2002 error. Does anybody know what this error code means? Find the radosgw log attached. Best regards, Michael 2018-09-17 11:58

Re: [ceph-users] Cephfs kernel driver availability

2018-07-23 Thread Michael Kuriger
If you're using CentOS/RHEL you can try the elrepo kernels Mike Kuriger -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of John Spray Sent: Monday, July 23, 2018 5:07 AM To: Bryan Henderson Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users
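
For reference, installing a mainline ELRepo kernel on CentOS/RHEL 7 generally follows the sketch below; the release RPM URL and package name are from memory and should be checked against elrepo.org before use:

    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install kernel-ml   # mainline kernel
    grub2-set-default 0                                # boot the newest kernel entry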

Re: [ceph-users] SSDs for data drives

2018-07-16 Thread Michael Kuriger
I dunno, to me benchmark tests are only really useful to compare different drives. From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Paul Emmerich Sent: Monday, July 16, 2018 8:41 AM To: Satish Patel Cc: ceph-users Subject: Re: [ceph-users] SSDs for data drives This does

Re: [ceph-users] Ceph Mimic on CentOS 7.5 dependency issue (liboath)

2018-06-23 Thread Michael Kuriger
CentOS 7.5 is pretty new. Have you tried CentOS 7.4? Mike Kuriger Sr. Unix Systems Engineer -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Brian : Sent: Saturday, June 23, 2018 1:41 AM To: Stefan Kooman Cc: ceph-users Subject: Re: [ceph-us

Re: [ceph-users] Install ceph manually with some problem

2018-06-18 Thread Michael Kuriger
Don’t use the installer scripts. Try yum install ceph Mike Kuriger Sr. Unix Systems Engineer T: 818-649-7235 M: 818-434-6195 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ch Wan Sent: Monday

Re: [ceph-users] cannot add new OSDs in mimic

2018-06-10 Thread Michael Kuriger
Oh boy! Thankfully I upgraded our sandbox cluster so I’m not in a sticky situation right now :-D Mike Kuriger Sr. Unix Systems Engineer From: Sergey Malinin [mailto:h...@newmail.com] Sent: Friday, June 08, 2018 4:22 PM To: Michael Kuriger; Paul Emmerich Cc: ceph-users Subject: Re: [ceph-users

Re: [ceph-users] mimic: failed to load OSD map for epoch X, got 0 bytes

2018-06-08 Thread Michael Sudnick
I'm getting the same issue.

Re: [ceph-users] cannot add new OSDs in mimic

2018-06-08 Thread Michael Kuriger
Hi everyone, I appreciate the suggestions. However, this is still an issue. I've tried adding the OSD using ceph-deploy, and manually from the OSD host. I'm not able to start newly added OSDs at all, even if I use a new ID. It seems the OSD is added to CEPH but I cannot start it. OSDs that exist

Re: [ceph-users] cannot add new OSDs in mimic

2018-06-07 Thread Michael Kuriger
Sr. Unix Systems Engineer T: 818-649-7235 M: 818-434-6195 -Original Message- From: Vasu Kulkarni [mailto:vakul...@redhat.com] Sent: Thursday, June 07, 2018 1:53 PM To: Michael Kuriger Cc: ceph-users Subject: Re: [ceph-users] cannot add new OSDs in mimic It is actually documented in rep

Re: [ceph-users] cannot add new OSDs in mimic

2018-06-07 Thread Michael Kuriger
Do you mean: ceph osd destroy {ID} --yes-i-really-mean-it Mike Kuriger -Original Message- From: Vasu Kulkarni [mailto:vakul...@redhat.com] Sent: Thursday, June 07, 2018 12:28 PM To: Michael Kuriger Cc: ceph-users Subject: Re: [ceph-users] cannot add new OSDs in mimic There is a osd

[ceph-users] cannot add new OSDs in mimic

2018-06-07 Thread Michael Kuriger
CEPH team, Is there a solution yet for adding OSDs in mimic - specifically re-using old IDs? I was looking over this BUG report - https://tracker.ceph.com/issues/24423 and my issue is similar. I removed a bunch of OSDs after upgrading to mimic and I'm not able to re-add them using the new vo
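
The workflow this thread is trying to get working (and which the linked tracker bug interferes with) is roughly the destroy-and-reuse sequence below; a sketch only, with the OSD id and device path as placeholders:

    ID=12                                    # old OSD id to be re-used
    ceph osd destroy $ID --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdX --destroy   # wipe the replacement device
    ceph-volume lvm create --osd-id $ID --data /dev/sdX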

Re: [ceph-users] ceph-osd@ service keeps restarting after removing osd

2018-06-04 Thread Michael Burk
On Thu, May 31, 2018 at 4:40 PM Gregory Farnum wrote: > On Thu, May 24, 2018 at 9:15 AM Michael Burk > wrote: > >> Hello, >> >> I'm trying to replace my OSDs with higher capacity drives. I went through >> the steps to remove the OSD on the OSD node: >

[ceph-users] ceph-osd@ service keeps restarting after removing osd

2018-05-24 Thread Michael Burk
4,logbsize=256k,sunit=512,swidth=512,noquota) ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable) What am I missing? Thanks, Michael
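
The preview cuts off before the removal steps, but the symptom in the subject is commonly just the systemd unit still being enabled after the OSD itself is gone; a sketch, assuming the removed OSD had id 12:

    systemctl stop ceph-osd@12
    systemctl disable ceph-osd@12
    systemctl reset-failed ceph-osd@12   # clear the accumulated failure/restart state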

Re: [ceph-users] a big cluster or several small

2018-05-14 Thread Michael Kuriger
The more servers you have in your cluster, the less impact a failure causes to the cluster. Monitor your systems and keep them up to date. You can also isolate data with clever crush rules and creating multiple zones. Mike Kuriger From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On

Re: [ceph-users] Have an inconsistent PG, repair not working

2018-04-30 Thread Michael Sudnick
As such I went ahead and created a [1] bug tracker for this. >>> Hopefully it gets some traction as I'm not particularly looking forward to >>> messing with deleting PGs with the ceph-objectstore-tool in production. >>> >>> [1] http://tracker.ceph.com/issu

Re: [ceph-users] Have an inconsistent PG, repair not working

2018-04-06 Thread Michael Sudnick
deep-scrub 1 missing, 0 inconsistent objects > 2018-04-04 17:32:37.916865 7f54d1820700 -1 log_channel(cluster) log [ERR] > : 145.2e3 deep-scrub 1 errors > > On Mon, Apr 2, 2018 at 4:51 PM Michael Sudnick > wrote: > >> Hi Kjetil, >> >> I've tried to get the pg

Re: [ceph-users] Have an inconsistent PG, repair not working

2018-04-02 Thread Michael Sudnick
eal before repair/deep scrub works? -Michael On 2 April 2018 at 14:13, Kjetil Joergensen wrote: > Hi, > > scrub or deep-scrub the pg, that should in theory get you back to > list-inconsistent-obj spitting out what's wrong, then mail that info to the > list. > > -KJ > &g

[ceph-users] Have an inconsistent PG, repair not working

2018-04-01 Thread Michael Sudnick
at a loss here as to what to do to recover. That pg is part of a cephfs_data pool with compression set to force/snappy. Does anyone have any suggestions? -Michael

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-14 Thread Michael Christie
On 03/14/2018 01:27 PM, Michael Christie wrote: > On 03/14/2018 01:24 PM, Maxim Patlasov wrote: >> On Wed, Mar 14, 2018 at 11:13 AM, Jason Dillaman > <mailto:jdill...@redhat.com>> wrote: >> >> Maxim, can you provide steps for a reproducer? >> >>

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-14 Thread Michael Christie
On 03/14/2018 01:26 PM, Michael Christie wrote: > On 03/14/2018 01:06 PM, Maxim Patlasov wrote: >> On Sun, Mar 11, 2018 at 5:10 PM, Mike Christie > <mailto:mchri...@redhat.com>> wrote: >> >> On 03/11/2018 08:54 AM, shadow_lin wrote: >> > Hi Jas

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-14 Thread Michael Christie
On 03/14/2018 01:24 PM, Maxim Patlasov wrote: > On Wed, Mar 14, 2018 at 11:13 AM, Jason Dillaman > wrote: > > Maxim, can you provide steps for a reproducer? > > > Yes, but it involves adding two artificial delays: one in tcmu-runner > and another in kernel iscsi.

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-14 Thread Michael Christie
On 03/14/2018 01:06 PM, Maxim Patlasov wrote: > On Sun, Mar 11, 2018 at 5:10 PM, Mike Christie > wrote: > > On 03/11/2018 08:54 AM, shadow_lin wrote: > > Hi Jason, > > How the old target gateway is blacklisted? Is it a feature of the target > > gateway(

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Michael Christie
On 01/04/2018 03:50 AM, Joshua Chen wrote: > Dear all, > Although I managed to run gwcli and created some iqns, or luns, > but I do need some working config example so that my initiator could > connect and get the lun. > > I am familiar with targetcli and I used to do the following ACL style >

Re: [ceph-users] rbd and cephfs (data) in one pool?

2017-12-27 Thread Michael Kuriger
Making the filesystem might blow away all the rbd images though. Mike Kuriger Sr. Unix Systems Engineer T: 818-649-7235 M: 818-434-6195 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of David Tur

Re: [ceph-users] Replaced a disk, first time. Quick question

2017-12-04 Thread Michael Kuriger
I've seen that before (over 100%) but I forget the cause. At any rate, the way I replace disks is to first set the osd weight to 0, wait for data to rebalance, then down / out the osd. I don't think ceph does any reads from a disk once you've marked it out so hopefully there are other copies.
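
Spelled out as commands, the reweight-first replacement procedure described here looks roughly like the sketch below, assuming a Luminous-or-newer cluster and osd.12 as the disk being replaced:

    ceph osd crush reweight osd.12 0          # drain the OSD; data migrates off it
    ceph -s                                   # wait until the rebalance has finished
    ceph osd out 12
    systemctl stop ceph-osd@12                # on the OSD host
    ceph osd purge 12 --yes-i-really-mean-it  # remove it from CRUSH, auth and the OSD map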

Re: [ceph-users] HW Raid vs. Multiple OSD

2017-11-13 Thread Michael
l replication and so on will trigger *before* you remove it. (There is a configurable timeout for how long an OSD can be down, after which the OSD is essentially treated as dead already, at which point replication and rebalancing starts). -Michael

Re: [ceph-users] rocksdb: Corruption: missing start of fragmented record

2017-11-13 Thread Michael
Konstantin Shalygin wrote: > I think Christian talks about version 12.2.2, not 12.2.* Which isn't released yet, yes. I could try building the development repository if you think that has a chance of resolving the issue? Although I'd still like to know how I could theoretically get my hands at

Re: [ceph-users] FAILED assert(p.same_interval_since) and unusable cluster

2017-11-04 Thread Michael
ch. As you might see on the bug tracker, the patch did apparently avoid the immediate error for me, but Ceph then ran into another error. - Michael

Re: [ceph-users] rocksdb: Corruption: missing start of fragmented record

2017-11-01 Thread Michael
ount, the OSD won't activate and the error is the same. Is there any fix in .2 that might address this, or do you just mean that in general there will be bug fixes? Thanks for your response! - Michael

[ceph-users] rocksdb: Corruption: missing start of fragmented record

2017-11-01 Thread Michael
ome way in which I can tell rocksdb to truncate or delete / skip the respective log entries? Or can I get access to rocksdb('s files) in some other way to just manipulate it or delete corrupted WAL files manually? -Michael

Re: [ceph-users] [Luminous]How to choose the proper ec profile?

2017-10-30 Thread Michael
. shadow_lin wrote: What would be a good ec profile for archive purpose(decent write perfomance and just ok read performace)? I don't actually know that - but the default is not bad if you ask me (not that it features writes faster than reads). Plus it lets you pick m. - Michael
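
For context, an erasure-code profile and a pool using it are created along these lines; the k/m values and failure domain are illustrative, not a recommendation from the thread:

    ceph osd erasure-code-profile set archive_profile \
        k=4 m=2 crush-failure-domain=host
    ceph osd pool create archive_data 128 128 erasure archive_profile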

Re: [ceph-users] OSD daemons active in nodes after removal

2017-10-25 Thread Michael Kuriger
When I do this, I reweight all of the OSDs I want to remove to 0 first, wait for the rebalance, then proceed to remove the OSDs. Doing it your way, you have to wait for the rebalance after removing each OSD one by one. Mike Kuriger Sr. Unix Systems Engineer 818-434-6195

[ceph-users] Bluestore compression and existing CephFS filesystem

2017-10-19 Thread Michael Sudnick
this is in the documentation somewhere - I searched and haven't been able to find anything. Thank you, -Michael
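
The compression settings in question are per-pool options; a sketch of enabling them on an existing CephFS data pool (with the caveat, relevant to the question, that compression is generally applied only to data written after it is enabled):

    ceph osd pool set cephfs_data compression_algorithm snappy
    ceph osd pool set cephfs_data compression_mode aggressive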

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-13 Thread Michael Kuriger
You may not have enough OSDs to satisfy the crush ruleset. Mike Kuriger Sr. Unix Systems Engineer 818-434-6195 On 10/13/17, 9:53 AM, "ceph-users on behalf of dE" wrote: Hi, I'm running ceph 10.2.5 on Debian (official package). It cant s

Re: [ceph-users] can't figure out why I have HEALTH_WARN in luminous

2017-09-25 Thread Michael Kuriger
Thanks!! I did see that warning, but it never occurred to me I need to disable it. Mike Kuriger Sr. Unix Systems Engineer T: 818-649-7235 M: 818-434-6195 On 9/23/17, 5:52 AM, "John Spray" wrote: On Fri, Sep 22, 2017 at 6:48 PM, Michae

[ceph-users] can't figure out why I have HEALTH_WARN in luminous

2017-09-22 Thread Michael Kuriger
I have a few running ceph clusters. I built a new cluster using luminous, and I also upgraded a cluster running hammer to luminous. In both cases, I have a HEALTH_WARN that I can't figure out. The cluster appears healthy except for the HEALTH_WARN in overall status. For now, I’m monitoring h

Re: [ceph-users] CephFS billions of files and inline_data?

2017-08-16 Thread Michael Metz-Martini | SpeedPartner GmbH
that use-case). We will give glusterfs with Raid6 underneath and nfs a try - more "basic" and hopefully more robust. -- Kind regards Michael Metz-Martini

Re: [ceph-users] Systemd dependency cycle in Luminous

2017-07-17 Thread Michael Andersen
Thanks for pointing me towards that! You saved me a lot of stress On Jul 17, 2017 4:39 PM, "Tim Serong" wrote: > On 07/17/2017 11:22 AM, Michael Andersen wrote: > > Hi all > > > > I recently upgraded two separate ceph clusters from Jewel to Luminous. > >

[ceph-users] Systemd dependency cycle in Luminous

2017-07-16 Thread Michael Andersen
/reinstall ceph on. Any ideas? Thanks Michael

Re: [ceph-users] Changing pg_num on cache pool

2017-05-27 Thread Michael Shuey
I don't recall finding a definitive answer - though it was some time ago. IIRC, it did work but made the pool fragile; I remember having to rebuild the pools for my test rig soon after. Don't quite recall the root cause, though - could have been newbie operator error on my part. May have also had

[ceph-users] Ceph-disk prepare not properly preparing disks on one of my OSD nodes, running 11.2.0-0 on CentOS7

2017-04-16 Thread Michael Sudnick
sorry for not being more descriptive, I worked around the problem by preparing the disk on a working box and substituting it back in. Any suggestions on where to look to start debugging this? SELinux is off on all nodes. Thanks for any help you are able to provide. -Mi

Re: [ceph-users] PG calculator improvement

2017-04-13 Thread Michael Kidd
in a combination of the above or in something I've not thought of. Please do weigh in as any and all suggestions are more than welcome. Thanks, Michael J. Kidd Principal Software Maintenance Engineer Red Hat Ceph Storage +1 919-442-8878 On Wed, Apr 12, 2017 at 6:35 AM, Frédéric Nass < frede

Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Michael Andersen
> > > On Sat, Feb 11, 2017 at 2:49 PM, Michael Andersen > wrote: > >> Right, so yes libceph is loaded > >> > >> root@compound-7:~# lsmod | egrep "ceph|rbd" > >> rbd69632 0 > >> libceph 245760 1 r

Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Michael Andersen
module. Do you have the libceph kernel module loaded? If the answer to that question is "yes" the follow-up question is "Why?" as it is not required for a MON or OSD host. On Sat, Feb 11, 2017 at 1:18 PM, Michael Andersen wrote: > Yeah, all three mons have OSDs on the same

Re: [ceph-users] Cannot shutdown monitors

2017-02-10 Thread Michael Andersen
Yeah, all three mons have OSDs on the same machines. On Feb 10, 2017 7:13 PM, "Shinobu Kinjo" wrote: > Is your primary MON running on the host which some OSDs are running on? > > On Sat, Feb 11, 2017 at 11:53 AM, Michael Andersen > wrote: > > Hi > > >

[ceph-users] Cannot shutdown monitors

2017-02-10 Thread Michael Andersen
really annoying because it's hard for me to get access to the datacenter. Thanks Michael

Re: [ceph-users] Running 'ceph health' as non-root user

2017-02-01 Thread Michael Hartz
job and reading that periodically. That is fully sufficient for my situation. Many thanks 01.02.2017, 09:58, "Henrik Korkuc" : > On 17-02-01 10:55, Michael Hartz wrote: >>  I am running ceph as part of a Proxmox Virtualization cluster, which is >> doing great. >>

[ceph-users] Running 'ceph health' as non-root user

2017-02-01 Thread Michael Hartz
I am running ceph as part of a Proxmox Virtualization cluster, which is doing great. However for monitoring purpose I would like to periodically check with 'ceph health' as a non-root user. This fails with the following message: > su -c 'ceph health' -s /bin/bash nagios Error initializing cluste
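
The follow-up in this thread settles on a root cron job, but a dedicated read-only cephx user is another common approach; a sketch, assuming the keyring is made readable by the monitoring account:

    ceph auth get-or-create client.nagios mon 'allow r' \
        -o /etc/ceph/ceph.client.nagios.keyring
    chown nagios /etc/ceph/ceph.client.nagios.keyring
    su -c 'ceph --id nagios health' -s /bin/bash nagios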

Re: [ceph-users] Javascript error at http://ceph.com/pgcalc/

2017-01-11 Thread Michael Kidd
how it will behave with other alphabets. Thanks, Michael J. Kidd Sr. Software Maintenance Engineer Red Hat Ceph Storage +1 919-442-8878 On Wed, Jan 11, 2017 at 6:01 PM, 林自均 wrote: > Hi Michael, > > Thanks for your link! > > However, when I am using yo

Re: [ceph-users] Javascript error at http://ceph.com/pgcalc/

2017-01-11 Thread Michael Kidd
Hello John, Apologies for the error. We will be working to correct it, but in the interim, you can use http://linuxkidd.com/ceph/pgcalc.html Thanks, Michael J. Kidd Sr. Software Maintenance Engineer Red Hat Ceph Storage +1 919-442-8878 On Wed, Jan 11, 2017 at 12:03 AM, 林自均 wrote: > Hi

Re: [ceph-users] CEPH - best books and learning sites

2016-12-29 Thread Michael Hackett
Hello Andre, The Ceph site would be the best place to get the information you are looking for, specifically the docs section: http://docs.ceph.com/docs/master/. Karan Singh actually wrote two books which can be useful as initial resources as well Learning Ceph: https://www.amazon.com/Learning-C

[ceph-users] ERROR: flush_read_list(): d->client_c->handle_data() returned -5

2016-11-23 Thread Riederer, Michael
-sdk-java/1.11.14 Linux/2.6.32-573.18.1.el6.x86_64 OpenJDK_64-Bit_Server_VM/25.71-b15/1.8.0_71 Regards Michael -- Bayerischer Rundfunk; Rundfunkplatz 1; 80335 München Telefon: +49 89 590

Re: [ceph-users] fixing zones

2016-09-28 Thread Michael Parson
On Wed, 28 Sep 2016, Orit Wasserman wrote: see blow On Tue, Sep 27, 2016 at 8:31 PM, Michael Parson wrote: We googled around a bit and found the fix-zone script: https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone Which ran fine until the last command

[ceph-users] fixing zones

2016-09-27 Thread Michael Parson
t;" }, "placement_pools": [ { "key": "default-placement", "val": { "index_pool": ".rgw.buckets.index_", "data_pool": ".rgw.buckets_", "da

[ceph-users] PGs lost from cephfs data pool, how to determine which files to restore from backup?

2016-09-07 Thread Michael Sudnick
I've had to force recreate some PGs on my cephfs data pool due to some cascading disk failures in my homelab cluster. Is there a way to easily determine which files I need to restore from backup? My metadata pool is completely intact. Thanks for any help and suggestions. Sincerely, Mi
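
With the metadata pool intact, cephfs-data-scan can map lost data PGs back to file paths; a sketch, assuming the filesystem is mounted at /mnt/cephfs and the recreated PG ids (placeholders below) are known:

    cephfs-data-scan pg_files /mnt/cephfs 1.2a3 1.4b5
    # prints the paths of files that had objects in those PGs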

[ceph-users] PG down, primary OSD no longer exists

2016-09-06 Thread Michael Sudnick
0] pg 33.34e is stuck unclean since forever, current state stale+down+remapped+peering, last acting [24,10] pg 33.34e is stale+down+remapped+peering, acting [24,10] OSD.24 no longer exists and the disk is fried. So recovery is not possible. Thanks for any suggestions, Sincerely,
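
When the acting primary is gone for good and the data with it, the usual (and destructive) path is to declare the OSD lost and recreate the PG empty; a heavily hedged sketch based on Jewel-era commands, only for when restoring the data is already ruled out:

    ceph osd lost 24 --yes-i-really-mean-it   # declare the dead OSD lost
    ceph pg force_create_pg 33.34e            # recreate the PG empty; its data is gone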

[ceph-users] Repairing a broken leveldb

2016-07-11 Thread Michael Metz-Martini | SpeedPartner GmbH
105], "acting": [ 9], "backfill_targets": [ "34", "105"], "actingbackfill": [ "9", "34", "105"], [1] http://www.michael-metz.de/ceph-osd.9.log.gz -- Kind reg

Re: [ceph-users] cephfs mount /etc/fstab

2016-06-27 Thread Michael Hanscho
On 2016-06-27 11:40, John Spray wrote: > On Sun, Jun 26, 2016 at 10:51 AM, Michael Hanscho wrote: >> On 2016-06-26 10:30, Christian Balzer wrote: >>> >>> Hello, >>> >>> On Sun, 26 Jun 2016 09:33:10 +0200 Willi Fehler wrote: >>> >>>>

[ceph-users] unsubscribe

2016-06-26 Thread Michael Ferguson
Thanks

Re: [ceph-users] cephfs mount /etc/fstab

2016-06-26 Thread Michael Hanscho
v" mount option is for. > http://docs.ceph.com/docs/master/cephfs/fstab/ No hint in the documentation - although a full page on cephfs and fstab?! Gruesse Michael ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Criteria for Ceph journal sizing

2016-06-20 Thread Michael Hanscho
t in this regard will be helpful for my understanding. See documentation: http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/ osd journal size = {2 * (expected throughput * filestore max sync interval)} http://comments.gmane.org/gmane.comp.file-systems.ceph.user/2
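
Plugging numbers into that formula: with an assumed sustained throughput of 100 MB/s and the default filestore max sync interval of 5 seconds, the journal should be at least 2 * (100 * 5) = 1000 MB, which would be set as:

    # ceph.conf, [osd] section; the value is in MB
    osd journal size = 1000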

Re: [ceph-users] which CentOS 7 kernel is compatible with jewel?

2016-06-15 Thread Michael Kuriger
map `hostname` /dev/rbd0 [root@test ~]# rbd info `hostname` rbd image 'test.np.wc1.example.com': size 102400 MB in 25600 objects order 22 (4096 kB objects) block_name_prefix: rbd_data.13582ae8944a format: 2 features: layering flags: Mi

Re: [ceph-users] which CentOS 7 kernel is compatible with jewel?

2016-06-15 Thread Michael Kuriger
inux test.np.4.6.2-1.el7.elrepo.x86_64 #1 SMP Wed Jun 8 14:49:20 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux [root@test ~]# ceph --version ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269) Michael Kuriger Sr. Unix Systems Engineer mk7...@yp.com 818-649-7235 On 6/14

Re: [ceph-users] which CentOS 7 kernel is compatible with jewel?

2016-06-13 Thread Michael Kuriger
10.1.77.165:6789/0 pipe(0x7fe608008350 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fe6080078e0).fault 2016-06-13 11:24:21.884259 7fe61137f700 0 -- :/3877046932 >> 10.1.78.75:6789/0 pipe(0x7fe608000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fe608007110).fault Michael Kuriger Sr. Unix Systems Engineer mk7...@yp

[ceph-users] which CentOS 7 kernel is compatible with jewel?

2016-06-10 Thread Michael Kuriger
t' does not work with the stock 3.10 kernel, but works with the 4.6 kernel. Very odd. Any advice? Thanks! Michael Kuriger Sr. Unix Systems Engineer mk7...@yp.com 818-649-7235
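
A frequent cause of this exact symptom is that Jewel creates format 2 images with features the stock 3.10 kernel client cannot handle; a sketch of checking and trimming them (pool and image names are placeholders):

    rbd info rbd/myimage            # inspect the "features:" line
    rbd feature disable rbd/myimage deep-flatten fast-diff object-map exclusive-lock
    rbd map rbd/myimage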

Re: [ceph-users] Migrating from one Ceph cluster to another

2016-06-09 Thread Michael Kuriger
u can remove the old OSD servers from the cluster.   Michael Kuriger Sr. Unix Systems Engineer -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido den Hollander Sent: Thursday, June 09, 2016 12:47 AM To: Marek Dohojda; ceph-users@lists.cep

Re: [ceph-users] CoreOS Cluster of 7 machines and Ceph

2016-06-03 Thread Michael Shuey
Sorry for the late reply - been traveling. I'm doing exactly that right now, using the ceph-docker container. It's just in my test rack for now, but hardware arrived this week to seed the production version. I'm using separate containers for each daemon, including a container for each OSD. I've

Re: [ceph-users] Problems with Calamari setup

2016-06-02 Thread Michael Kuriger
For me, this same issue was caused by having too new a version of salt. I’m running salt-2014.1.5-1 in centos 7.2, so yours will probably be different. But I thought it was worth mentioning. Michael Kuriger Sr. Unix Systems Engineer mk7...@yp.com | 8

Re: [ceph-users] ceph pg status problem

2016-05-31 Thread Michael Hackett
Hello, Check your CRUSH map and verify what the failure domain for your CRUSH rule is set to (for example OSD or host). You need to verify that your failure domain can satisfy your pool replication value. You may need to decrease your pool replication value or modify your CRUSH map. Thank you, Mi
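
The checks being suggested map onto a handful of commands; a sketch, with the pool and rule names as placeholders (on pre-Luminous releases the pool property is crush_ruleset rather than crush_rule):

    ceph osd pool get mypool size              # replica count the pool wants
    ceph osd pool get mypool crush_rule        # which CRUSH rule the pool uses
    ceph osd crush rule dump replicated_rule   # failure domain in the chooseleaf step
    ceph osd tree                              # hosts/OSDs actually available to satisfy it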

Re: [ceph-users] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid

2016-05-26 Thread Michael Kuriger
Are you using an old ceph.conf with the original FSID from your first attempt (in your deploy directory)? Michael Kuriger Sr. Unix Systems Engineer mk7...@yp.com | 818-649-7235 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Beh

Re: [ceph-users] Can't Start / Stop Ceph jewel under Centos 7.2

2016-05-26 Thread Michael Kuriger
Did you update to ceph version 10.2.1 (3a66dd4f30852819c1bdaa8ec23c795d4ad77269)? This issue should have been resolved with the last update. (It was for us)   Michael Kuriger Sr. Unix Systems Engineer r mk7...@yp.com |  818-649-7235 -Original Message- From: ceph-users

Re: [ceph-users] How to remove a placement group?

2016-05-15 Thread Michael Kuriger
I would try: ceph pg repair 15.3b3 Michael Kuriger Sr. Unix Systems Engineer mk7...@yp.com | 818-649-7235 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Romero Junior Sent: Saturday, May 14, 2016 11:46 AM To: ceph

[ceph-users] ceph-hammer - problem adding / removing monitors

2016-05-13 Thread Michael Kuriger
Hi everyone. We’re running ceph-hammer, and I was trying to rename our monitor servers. I tried following the procedure for removing a monitor, and adding a monitor. Removing seems to have worked ok, as now I have 2 monitors up. When I try to add the 3rd monitor, and the ceph-deploy completes,

Re: [ceph-users] Small cluster PG question

2016-05-05 Thread Michael Shuey
as also wondering about the pros and cons performance wise of having a > pool size of 3 vs 2. It seems there would be a benefit for reads (1.5 times > the bandwidth) but a penalty for writes because the primary has to forward > to 2 nodes instead of 1. Does that make sense? > > -Roland

Re: [ceph-users] Small cluster PG question

2016-05-05 Thread Michael Shuey
Reads will be limited to 1/3 of the total bandwidth. A set of PGs has a "primary" - that's the first one (and only one, if it's up & in) consulted on a read. The other PGs will still exist, but they'll only take writes (and only after the primary PG forwards along data). If you have multiple PGs
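
To see the effect being described, note that the acting set of a PG lists the primary first; a quick way to inspect which OSD serves reads for a given object or PG:

    ceph osd map rbd some_object      # "acting ([3,7,12], p3)" means OSD 3 is the primary
    ceph pg dump pgs_brief | head     # up/acting sets, with the primary, for each PG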
