Re: [ceph-users] Ceph community - how to make it even stronger

2019-01-05 Thread ceph . novice
Hi. What makes us struggle / wonder again and again is the absence of CEPH __man pages__. On *NIX systems, man pages are always the first place to go for help, right? Or is this considered "old school" by the CEPH makers / community? :O And as many people complain again and again, the same here as

Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-30 Thread ceph . novice
Hey Nathan. No blaming here. I'm very thankful for this great piece (OK, sometimes more of a beast ;) ) of open-source SDS and all the great work around it, incl. community and users... and happy the problem is identified and can be fixed for others / the future as well :)   Well, yes, can confirm

Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-29 Thread ceph . novice
Strange... - wouldn't swear to it, but pretty sure v13.2.0 was working OK before - so what do others say/see? - no one on v13.2.1 so far (hard to believe) OR - they just don't have this "systemctl ceph-osd.target" problem and it all just works? If you also __MIGRATED__ from Luminous (say ~ v12.2.5 or older)

Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-28 Thread ceph . novice
Have you guys changed something in the systemctl startup of the OSDs? I've stopped and disabled all the OSDs on all my hosts via "systemctl stop|disable ceph-osd.target" and rebooted all the nodes. Everything looks just the same. Then I started all the OSD daemons one after the other via the
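For reference, the stop/start sequence described above maps to the standard systemd units shipped with Ceph (a sketch; the OSD id 0 is just an example):

   # stop and disable all OSDs on this node, then reboot
   systemctl stop ceph-osd.target
   systemctl disable ceph-osd.target
   reboot

   # afterwards, bring the OSDs back one by one
   systemctl enable ceph-osd.target
   systemctl start ceph-osd@0
   systemctl status ceph-osd@0
   systemctl list-units 'ceph-osd@*'   # check which instances came up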

Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-28 Thread ceph . novice
Hi Sage. Sure. Any specific OSD(s) log(s)? Or just any? Sent: Saturday, 28 July 2018 at 16:49 From: "Sage Weil" To: ceph.nov...@habmalnefrage.de, ceph-users@lists.ceph.com, ceph-de...@vger.kernel.org Subject: Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released") Can you

[ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-28 Thread ceph . novice
Dear users and developers.   I updated our dev cluster from v13.2.0 to v13.2.1 yesterday and since then everything is badly broken. I've restarted all Ceph components via "systemctl" and also rebooted the servers SDS21 and SDS24; nothing changed. This cluster started as Kraken, was updated to
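A quick way to see how far an upgrade like this actually got is to compare what the daemons report (a sketch; "ceph versions" is available since Luminous):

   ceph -s          # overall cluster health
   ceph versions    # which daemons already run v13.2.1
   ceph osd tree    # which OSDs are down/out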

Re: [ceph-users] mimic (13.2.0) and "Failed to send data to Zabbix"

2018-07-12 Thread ceph . novice
There was no change in the ZABBIX environment... I got this warning some minutes after the Linux and Luminous->Mimic update via YUM and a reboot of all the Ceph servers... Is there anyone who also had the ZABBIX module enabled under Luminous AND then migrated to Mimic? If yes, does it work

Re: [ceph-users] mimic (13.2.0) and "Failed to send data to Zabbix"

2018-07-11 Thread ceph . novice
at about the same time we also updated the Linux OS via "YUM" to: # more /etc/redhat-release Red Hat Enterprise Linux Server release 7.5 (Maipo) from the given error message, it seems like there are 32 "measure points" which are to be sent, but 3 of them are somehow failing: >>>

[ceph-users] mimic (13.2.0) and "Failed to send data to Zabbix"

2018-07-11 Thread ceph . novice
anyone with "mgr Zabbix enabled" migrated from Luminous (12.2.5 or 5) and has the same problem in Mimic now? If I disable and re-enable the "zabbix" module, the status is "HEALTH_OK" for some seconds and then changes to "HEALTH_WARN" again... --- # ceph -s cluster: id:
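For anyone trying to reproduce this, the disable / re-enable cycle mentioned above maps to the standard mgr module commands (a sketch; the zabbix_host value is a placeholder):

   ceph mgr module disable zabbix
   ceph mgr module enable zabbix
   ceph zabbix config-show                                # check endpoint and identifier
   ceph zabbix config-set zabbix_host zabbix.example.com  # hypothetical host
   ceph zabbix send                                       # trigger one send by hand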

Re: [ceph-users] Mimic 13.2.1 release date

2018-07-11 Thread ceph . novice
- adding ceph-devel - Same here. An estimated date would already help with internal planning :|   Sent: Tuesday, 10 July 2018 at 11:59 From: "Martin Overgaard Hansen" To: ceph-users Subject: Re: [ceph-users] Mimic 13.2.1 release date > On 9 Jul 2018, at 17:12, Wido den

Re: [ceph-users] CentOS release 7.4.1708 and selinux-policy-base >= 3.13.1-166.el7_4.9

2018-05-03 Thread ceph . novice
Hi Ruben and community.   Thanks a lot for all the help and hints. Finally I figured out that "base" is also provided by e.g. "selinux-policy-minimum". After installing this pkg via "yum install", the usual "ceph installation" continues... Seems like the "ceph packaging" is too RHEL-oriented
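The workaround described above boils down to the following (a sketch; on RHEL proper, "selinux-policy-targeted" should satisfy the same dependency):

   yum provides 'selinux-policy-base'   # shows which packages provide the capability
   yum install selinux-policy-minimum   # or selinux-policy-targeted
   yum install ceph                     # the dependency should now resolve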

[ceph-users] CentOS release 7.4.1708 and selinux-policy-base >= 3.13.1-166.el7_4.9

2018-05-02 Thread ceph . novice
Hi all. We are trying to set up our first CentOS 7.4.1708 CEPH cluster, based on Luminous 12.2.5. What we get is:   Error: Package: 2:ceph-selinux-12.2.5-0.el7.x86_64 (Ceph-Luminous)    Requires: selinux-policy-base >= 3.13.1-166.el7_4.9 __Host infos__: root> lsb_release -d Description:

Re: [ceph-users] ceph-deploy: recommended?

2018-04-05 Thread ceph . novice
... we use (only!) ceph-deploy in all our environments, tools and scripts. If I look at the effort that went into ceph-volume and all the related issues, the "manual LVM" overhead and/or still missing features, PLUS the recommendations mentioned in the same discussions to use something like
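For context, the ceph-deploy workflow referred to here is only a handful of commands (a sketch with hypothetical host and device names, using the ceph-deploy 2.x syntax):

   ceph-deploy new mon1
   ceph-deploy install mon1 osd1 osd2
   ceph-deploy mon create-initial
   ceph-deploy admin mon1 osd1 osd2
   ceph-deploy osd create --data /dev/sdb osd1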

Re: [ceph-users] Help ! how to recover from total monitor failure in luminous

2018-02-02 Thread ceph . novice
there pick your "DISTRO", click on the "ID", click "Repo URL"...   Sent: Friday, 02 February 2018 at 21:34 From: ceph.nov...@habmalnefrage.de To: "Frank Li" Cc: "ceph-users@lists.ceph.com" Subject: Re: [ceph-users] Help ! how to
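In other words, the "Repo URL" from shaman points at a ready-made .repo file that can be dropped into yum's config (a sketch; the URL below is illustrative, built from the branch/sha1 linked in the message that follows):

   curl -L -o /etc/yum.repos.d/ceph-wip-22847.repo \
     'https://shaman.ceph.com/api/repos/ceph/wip-22847-luminous/f04a4a36f01fdd5d9276fa5cfa1940f5cc11fb81/centos/7/repo'
   yum clean metadata && yum update ceph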

Re: [ceph-users] Help ! how to recover from total monitor failure in luminous

2018-02-02 Thread ceph . novice
https://shaman.ceph.com/repos/ceph/wip-22847-luminous/f04a4a36f01fdd5d9276fa5cfa1940f5cc11fb81/   Sent: Friday, 02 February 2018 at 21:27 From: "Frank Li" To: "Sage Weil" Cc: "ceph-users@lists.ceph.com" Subject:

Re: [ceph-users] ceph luminous - performance issue

2018-01-03 Thread ceph . novice
Hi Steven. Interesting... I'm quite curious after your post now. I migrated our prod CEPH cluster to 12.2.2 and Bluestore just today and haven't heard anything "bad" back from the applications/users so far. Performance tests on our test cluster were good before, but we use S3/RGW only

Re: [ceph-users] RGW Logging pool

2017-12-15 Thread ceph . novice
we never managed to make it work, but I guess the "RGW metadata search" [c|sh]ould have been "the official solution"... - http://ceph.com/geen-categorie/rgw-metadata-search/ - https://marc.info/?l=ceph-devel&m=149152531005431&w=2 - http://ceph.com/rgw/new-luminous-rgw-metadata-search/ there was

Re: [ceph-users] S3 object notifications

2017-11-28 Thread ceph . novice
Hi Yehuda.   Are there any examples (docs, blog posts, ...): - how to use that "framework", especially the "callbacks" - for the latest "Metasearch" feature / usage with an S3 client/tool like CyberDuck, s3cmd, AWSCLI or at least boto3?   - i.e. is an external ELK still needed or is this

[ceph-users] docs.ceph.com broken since... days?!?

2017-08-17 Thread ceph . novice
... or at least since yesterday!

Re: [ceph-users] RGW lifecycle not expiring objects

2017-06-04 Thread ceph . novice
grrr... sorry, and again as text :|   Sent: Monday, 05 June 2017 at 01:12 From: ceph.nov...@habmalnefrage.de To: "Yehuda Sadeh-Weinraub" Cc: "ceph-users@lists.ceph.com" , ceph-de...@vger.kernel.org Subject: Re: [ceph-users] RGW lifecycle

Re: [ceph-users] RGW lifecycle not expiring objects

2017-06-04 Thread ceph . novice
Hi (again) Yehuda.   Looping in ceph-devel...   Could it be that lifecycle is still not implemented in either Jewel or Kraken, even though the release notes and other places say so?   https://www.spinics.net/lists/ceph-devel/msg34492.html

Re: [ceph-users] RGW multisite sync data sync shard stuck

2017-06-04 Thread ceph . novice
Hi Andreas.   Well, we do _NOT_ need multisite in our environment, but unfortunately it is the basis for the announced "metasearch" based on ElasticSearch... so we have been trying to implement a "multisite" config on Kraken (v11.2.0) for weeks now, but have not succeeded so far. We have purged and started all

Re: [ceph-users] RGW lifecycle not expiring objects

2017-06-04 Thread ceph . novice
Hi Yehuda. Well, here we go: http://tracker.ceph.com/issues/20177 As it's my first one, I hope it's OK as it is... Thanks & regards Anton Sent: Saturday, 03 June 2017 at 00:14 From: "Yehuda Sadeh-Weinraub" To: ceph.nov...@habmalnefrage.de Cc: "Graham Allan"

Re: [ceph-users] RGW lifecycle not expiring objects

2017-06-02 Thread ceph . novice
Hi Graham.   We are on Kraken and have the same problem with "lifecycle". Various (other) tools like s3cmd or CyberDuck do show the applied "expiration" settings, but objects never seem to be purged. If you have any new findings or hints, PLEASE share / let me know. Thanks a lot! Anton
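For anyone hitting the same thing, this is roughly how such an expiration rule is applied and checked on the RGW side (a sketch; the bucket name is hypothetical, and "radosgw-admin lc list" / "lc process" should exist from Kraken on):

   cat > lifecycle.xml <<'EOF'
   <LifecycleConfiguration>
     <Rule>
       <ID>expire-everything</ID>
       <Prefix></Prefix>
       <Status>Enabled</Status>
       <Expiration><Days>1</Days></Expiration>
     </Rule>
   </LifecycleConfiguration>
   EOF
   s3cmd setlifecycle lifecycle.xml s3://mybucket
   s3cmd getlifecycle s3://mybucket

   # on an RGW node: show lifecycle processing state / force a run
   radosgw-admin lc list
   radosgw-admin lc process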

Re: [ceph-users] Seems like majordomo doesn't send mails since some weeks?!

2017-05-22 Thread ceph . novice
Thanks for answering, David.   No idea who changed what and where, but I've been flooded with mails since yesterday ;) --> THANKS   Sent: Saturday, 20 May 2017 at 16:42 From: "David Turner" To: ceph.nov...@habmalnefrage.de, ceph-users Subject:

[ceph-users] Seems like majordomo doesn't send mails since some weeks?!

2017-05-20 Thread ceph . novice
I haven't received the list mails for weeks. I thought it was my mail provider filtering them out and checked with them for days; according to them, everything is OK. I then subscribed with my business mail account and still don't receive any posts/mails from the list. Any ideas, anyone?

Re: [ceph-users] Kraken release and RGW --> "S3 bucket lifecycle API has been added. Note that currently it only supports object expiration."

2017-04-13 Thread ceph . novice
Oops... thanks for your efforts, Ben!   This could explain some bits. Still, I have lots of questions, as different S3 tools/clients seem to behave differently. We need to stick with CyberDuck on Windows and s3cmd and boto on Linux, and many things are not the same with RadosGW :|   And more on my

Re: [ceph-users] Question about RadosGW subusers

2017-04-13 Thread ceph . novice
Thanks a lot, Trey. I'll try that stuff next week, once back from the Easter holidays. Some "multisite" and "metasearch" testing is also still on my to-be-tested list. I badly need to free up some time for all the interesting "future of storage" things. BTW, we are on Kraken and I'd hope to see more

Re: [ceph-users] Question about RadosGW subusers

2017-04-13 Thread ceph . novice
Hey Trey. Sounds great. We were discussing the same kind of requirements and couldn't agree on / find something "useful"... so THANK YOU for sharing!!! It would be great if you could provide some more details or an example of how you configure the "bucket user", the sub-users and all that stuff.
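A minimal sketch of what such a setup might look like with radosgw-admin (user and subuser names are hypothetical; --access takes read, write, readwrite or full):

   radosgw-admin user create --uid=bucketuser --display-name="Shared bucket owner"
   radosgw-admin subuser create --uid=bucketuser --subuser=bucketuser:alice \
       --access=full --key-type=s3 --gen-access-key --gen-secret
   radosgw-admin user info --uid=bucketuser    # shows the subuser and its S3 keys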

[ceph-users] "RGW Metadata Search" and related

2017-04-07 Thread ceph . novice
Hi Cephers.   We are trying to get "metadata search" working on our test cluster. This is one of two things we promised an internal customer for a very soon-to-be-started PoC... the second feature is, as I wrote already in another post, "object expiration" (lifecycle?!) [objects should be
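For the record, the documented route to metadata search is a second zone with an elasticsearch tier type, along these lines (a sketch; zone names and endpoints are hypothetical):

   radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=zone-es \
       --endpoints=http://rgw-es-host:80 --tier-type=elasticsearch
   radosgw-admin zone modify --rgw-zone=zone-es \
       --tier-config=endpoint=http://es-host:9200,num_shards=10,num_replicas=1
   radosgw-admin period update --commit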

[ceph-users] how-to undo a "multisite" config

2017-04-03 Thread ceph . novice
Hi Cephers.   Quick question; I couldn't find a "how-to" or documentation... not even sure if anyone else ever had to do it...   What would be the steps to undo a (failed) multisite config change made exactly following - http://docs.ceph.com/docs/master/radosgw/multisite/ ?   And as I'm
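Lacking an official how-to, the rough shape of an undo would presumably be to detach and delete the extra zone and commit a new period (a sketch only, untested; zone/zonegroup/realm names are hypothetical and the order of operations matters):

   radosgw-admin zonegroup remove --rgw-zonegroup=mygroup --rgw-zone=myzone2   # detach the zone
   radosgw-admin zone delete --rgw-zone=myzone2
   radosgw-admin period update --commit
   radosgw-admin realm delete --rgw-realm=myrealm   # only if the realm itself should go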

Re: [ceph-users] Kraken release and RGW --> "S3 bucket lifecycle API has been added. Note that currently it only supports object expiration."

2017-04-03 Thread ceph . novice
... hmm, "modify" gives no error and may be the option to use, but I don't see anything related to an "expires" meta field   [root s3cmd-master]# ./s3cmd --no-ssl --verbose modify s3://Test/INSTALL --expiry-days=365 INFO: Summary: 1 remote files to modify modify: 's3://Test/INSTALL' [root

Re: [ceph-users] Kraken release and RGW --> "S3 bucket lifecycle API has been added. Note that currently it only supports object expiration."

2017-04-03 Thread ceph . novice
... some additional strange but slightly different info related to the "permission denied"   [root s3cmd-master]# ./s3cmd --no-ssl put INSTALL s3://Test/ --expiry-days=5 upload: 'INSTALL' -> 's3://Test/INSTALL' [1 of 1] 3123 of 3123 100% in 0s 225.09 kB/s done [root s3cmd-master]# ./s3cmd

Re: [ceph-users] Kraken release and RGW --> "S3 bucket lifecycle API has been added. Note that currently it only supports object expiration."

2017-04-03 Thread ceph . novice
Hi Cephers... I did set the "lifecycle" via CyberDuck. I also get an error first, then suddenly CyberDuck refreshes the window and the lifecycle is there. I see the following when I check it via s3cmd (GitHub master version, because the regular installed version doesn't offer the
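The s3cmd check referred to here is presumably "getlifecycle", which prints the stored lifecycle XML (a sketch; the bucket name matches the examples above):

   ./s3cmd --no-ssl getlifecycle s3://Test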

[ceph-users] Kraken release and RGW --> "S3 bucket lifecycle API has been added. Note that currently it only supports object expiration."

2017-03-27 Thread ceph . novice
Hi Cephers.   I couldn't find any specific documentation about "S3 object expiration", so I assume it should work "AWS S3 like" (?!?) ... BUT ... we have a test cluster based on 11.2.0 - Kraken, and I set some object expiration dates via CyberDuck and DragonDisk, but the objects are still