Hi.
What makes us struggle / wonder again and again is the absence of CEPH __man
pages__. On *NIX systems, man pages are always the first place to go for help,
right? Or is this considered "old school" by the CEPH makers / community? :O
And as many people complain again and again, the same here as
Hey Nathan.
No blaming here. I'm very thankful for this great piece (ok, sometimes more of a
beast ;) ) of open-source SDS and all the great work around it incl. community
and users... and happy the problem is identified and can be fixed for
others/the future as well :)
Well, yes, can confirm
Strange...
- wouldn't swear to it, but pretty sure v13.2.0 was working OK before
- so what do others say/see?
- no one on v13.2.1 so far (hard to believe) OR
- they just don't have this "systemctl ceph-osd.target" problem and everything just works?
If you also __MIGRATED__ from Luminous (say ~ v12.2.5 or older)
Have you guys changed something with the systemctl startup of the OSDs?
I've stopped and disabled all the OSDs on all my hosts via "systemctl
stop|disable ceph-osd.target" and rebooted all the nodes. Everything looks just
the same.
Then I started all the OSD daemons one after the other via the
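For reference, the sequence described above would look roughly like this (only a
sketch, assuming the standard ceph-osd@<id> systemd units; the OSD ids below are
examples):
# systemctl stop ceph-osd.target
# systemctl disable ceph-osd.target
# reboot
... and after the reboot, per host and per OSD id:
# systemctl start ceph-osd@0
# systemctl start ceph-osd@1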
Hi Sage.
Sure. Any specific OSD(s) log(s)? Or just any?
Sent: Saturday, 28 July 2018 at 16:49
From: "Sage Weil"
To: ceph.nov...@habmalnefrage.de, ceph-users@lists.ceph.com,
ceph-de...@vger.kernel.org
Subject: Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")
Can you
Dear users and developers.
I've updated our dev-cluster from v13.2.0 to v13.2.1 yesterday and since then
everything is badly broken.
I've restarted all Ceph components via "systemctl" and also rebooted the servers
SDS21 and SDS24; nothing changed.
This cluster started as Kraken, was updated to
There was no change in the ZABBIX environment... I got this warning some
minutes after the Linux and Luminous->Mimic update via YUM and a reboot of all
the Ceph servers...
Is there anyone who also had the ZABBIX module enabled under Luminous AND then
migrated to Mimic? If yes, does it work
at about the same time we also updated the Linux OS via "YUM" to:
# more /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)
from the given error message, it seems like there are 32 "measure points",
which are to be sent, but 3 of them are somehow failing:
>>>
is there anyone with "mgr Zabbix enabled" who migrated from Luminous (12.2.5 or 5)
and now has the same problem in Mimic?
if I disable and re-enable the "zabbix" module, the status is "HEALTH_OK" for
a few seconds and then changes to "HEALTH_WARN" again...
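In plain commands, that disable / re-enable cycle is just (a sketch; adjust to
your setup):
# ceph mgr module disable zabbix
# ceph mgr module enable zabbix
# ceph -s
A send can also be forced with "ceph zabbix send" to see whether the items reach
the Zabbix server at all.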
---
# ceph -s
cluster:
id:
- adding ceph-devel
- Same here. An estimated date would already help for internal planning :|
Sent: Tuesday, 10 July 2018 at 11:59
From: "Martin Overgaard Hansen"
To: ceph-users
Subject: Re: [ceph-users] Mimic 13.2.1 release date
> On 9 Jul 2018 at 17:12, Wido den
Hi Ruben and community.
Thanks a lot for all the help and hints. Finally I figured out that "base" is
also part of e.g. "selinux-policy-minimum". After installing this package via "yum
install", the usual "ceph installation" continues...
Seems like the "ceph packaging" is too RHEL-oriented
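In short, the workaround was simply (a sketch, assuming CentOS 7 and ceph-deploy;
"node1" is a placeholder for your own node name):
# yum install selinux-policy-minimum
# ceph-deploy install --release luminous node1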
Hi all.
We are trying to set up our first CentOS 7.4.1708 CEPH cluster, based on Luminous
12.2.5. What we get is:
Error: Package: 2:ceph-selinux-12.2.5-0.el7.x86_64 (Ceph-Luminous)
Requires: selinux-policy-base >= 3.13.1-166.el7_4.9
__Host info__:
root> lsb_release -d
Description:
... we use (only!) ceph-deploy in all our environments, tools and scripts.
If I look at the effort that went into ceph-volume and all the related issues,
the "manual LVM" overhead and/or still-missing features, PLUS the recommendations
mentioned in the same discussions to use something like
there pick your "DISTRO", click on the "ID", click "Repo URL"...
Sent: Friday, 2 February 2018 at 21:34
From: ceph.nov...@habmalnefrage.de
To: "Frank Li"
Cc: "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] Help ! how to
https://shaman.ceph.com/repos/ceph/wip-22847-luminous/f04a4a36f01fdd5d9276fa5cfa1940f5cc11fb81/
Sent: Friday, 2 February 2018 at 21:27
From: "Frank Li"
To: "Sage Weil"
Cc: "ceph-users@lists.ceph.com"
Subject:
Hi Steven.
Interesting... I'm quite curious after your post now.
I've migrated our prod. CEPH cluster to 12.2.2 and Bluestore just today and
haven't heard back anything "bad" from the applications/users so far.
Performance tests on our test cluster were good before, but we use S3/RGW only
we never managed to make it work, but I guess the "RGW metadata search"
[c|sh]ould have been "the official solution"...
- http://ceph.com/geen-categorie/rgw-metadata-search/
- https://marc.info/?l=ceph-devel&m=149152531005431&w=2
- http://ceph.com/rgw/new-luminous-rgw-metadata-search/
there was
Hi Yehuda.
Are there any examples (docs, blog posts, ...):
- how to use that "framework" and especially the "callbacks"
- for the latest "Metasearch" feature / usage with S3 clients/tools like
CyberDuck, s3cmd, AWSCLI or at least boto3?
- i.e. is an external ELK still needed or is this
... or at least since yesterday!
grrr... sorry, and again as text :|
Sent: Monday, 5 June 2017 at 01:12
From: ceph.nov...@habmalnefrage.de
To: "Yehuda Sadeh-Weinraub"
Cc: "ceph-users@lists.ceph.com" ,
ceph-de...@vger.kernel.org
Subject: Re: [ceph-users] RGW lifecycle
Hi (again) Yehuda.
Looping in ceph-devel...
Could it be that lifecycle is still not implemented in either Jewel or Kraken, even though the release notes and other places say so?
https://www.spinics.net/lists/ceph-devel/msg34492.html
Hi Andreas.
Well, we do _NOT_ need multisite in our environment, but unfortunately it is
the basis for the announced "metasearch", based on ElasticSearch... so we have been
trying to implement a "multisite" config on Kraken (v11.2.0) for weeks, but never
succeeded so far. We have purged and started all
Hi Yehuda.
Well, here we go: http://tracker.ceph.com/issues/20177
As it's my first one, hope it's ok as it is...
Thanks & regards
Anton
Sent: Saturday, 3 June 2017 at 00:14
From: "Yehuda Sadeh-Weinraub"
To: ceph.nov...@habmalnefrage.de
Cc: "Graham Allan"
Hi Graham.
We are on Kraken and have the same problem with "lifecycle". Various (other)
tools like s3cmd or CyberDuck do show the applied "expiration" settings, but
objects seem never to be purged.
If you have any new findings or hints... PLEASE share / let me know.
Thanks a lot!
Anton
Thanks for answering, David.
No idea who changed what and where, but I've been flooded with mails since yesterday
;) --> THANKS
Sent: Saturday, 20 May 2017 at 16:42
From: "David Turner"
To: ceph.nov...@habmalnefrage.de, ceph-users
Subject:
I haven't received the list mails for weeks. Thought it was my mail provider
filtering them out. Have checked with them for days; according to them, all is OK.
I then subscribed with my business mail account and still do not receive any
posts/mails from the list.
Any ideas anyone?
Oops... thanks for your efforts, Ben!
This could explain some bits. Still, I have lots of questions, as different S3
tools/clients seem to behave differently. We need to stick with CyberDuck on Windows
and s3cmd and boto on Linux, and many things are not the same with RadosGW :|
And more on my
Thanks a lot, Trey.
I'll try that stuff next week, once back from the Easter holidays.
And some "multi site" and "metasearch" is also still on my to-be-tested list.
I badly need to free up some time for all the interesting "future of storage"
things.
BTW, we are on Kraken and I'd hope to see more
Hey Trey.
Sounds great; we were discussing the same kind of requirements and couldn't
agree on/find something "useful"... so THANK YOU for sharing!!!
It would be great if you could provide some more details or an example of how you
configure the "bucket user" and sub-users and all that stuff.
Hi Cephers.
We are trying to get "metadata search" working on our test cluster. This is one of two things we promised an internal customer for a very soon to be started PoC... the second feature is, as I wrote already in another post, the "object expiration" (lifecycle?!) [objects should be
Hi Cephers.
Quick question; couldn't find a "how-to" or "docu"... not even sure if someone else ever had to do it...
What would be the steps to make a (failed) multisite config change, done exactly following
- http://docs.ceph.com/docs/master/radosgw/multisite/
undone again?
And as I'm
... hmm, "modify" gives no error and may be the option to use, but I don't see anything related to an "expires" meta field
[root s3cmd-master]# ./s3cmd --no-ssl --verbose modify s3://Test/INSTALL --expiry-days=365
INFO: Summary: 1 remote files to modify
modify: 's3://Test/INSTALL'
[root
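One way to look for the expiry metadata on the object would be s3cmd's "info"
command (a sketch; whether an expiration / "Expiration Rule" line shows up depends
on the s3cmd version and on RGW actually reporting it):
# ./s3cmd --no-ssl info s3://Test/INSTALL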
... additional strange, but slightly different, info related to the "permission
denied"
[root s3cmd-master]# ./s3cmd --no-ssl put INSTALL s3://Test/ --expiry-days=5
upload: 'INSTALL' -> 's3://Test/INSTALL' [1 of 1]
3123 of 3123   100% in    0s   225.09 kB/s  done
[root s3cmd-master]# ./s3cmd
Hi Cephers...
I did set the "lifecycle" via Cyberduck. I do also get an error first, then
suddenly Cyberduck refreshes the window and the lifecycle is there.
I see the following when I check it via s3cmd (GitHub master version, because
the regular installed version doesn't offer the
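For reference, the check with the GitHub master version of s3cmd would look
something like this (a sketch; "Test" is the bucket from the earlier examples):
# ./s3cmd --no-ssl getlifecycle s3://Test
... which should print the LifecycleConfiguration XML that Cyberduck applied.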
Hi Cephers.
Couldn't find any special documentation about the "S3 object expiration" so I assume it should work "AWS S3 like" (?!?) ... BUT ...
we have a test cluster based on 11.2.0 - Kraken and I set some object expiration dates via CyberDuck and DragonDisk, but the objects are still