That recommendation changed to upgrading OSDs first, then Monitors, due to
http://tracker.ceph.com/issues/17386#note-6
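A minimal sketch of that order on a single node, assuming Debian-style packages and systemd units (the package and unit names are standard Ceph ones, but the exact steps are illustrative, not taken from the thread — always check the release notes for your upgrade path):

```shell
# Sketch only: upgrade OSDs before Monitors, per tracker issue 17386.
# Assumes Debian-style packages and systemd; adapt to your distro.

# 1. On each OSD node, upgrade the packages and restart the OSD daemons:
apt-get update && apt-get install -y ceph-osd
systemctl restart ceph-osd.target

# 2. Only after all OSD nodes are done, upgrade and restart the Monitors:
apt-get install -y ceph-mon
systemctl restart ceph-mon.target
```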
Ian
On Wed, Nov 9, 2016 at 3:11 PM, Peter Maloney <peter.malo...@brockmann-consult.de> wrote:
> On 11/09/16 15:06, Alexander Walker wrote:
>
> Hello,
>
> I've a cluster of three
Am I the only one who finds it funny that the "ceph problem" was fixed by
an update to the disk controller firmware? :-)
Ian
On Thu, Sep 3, 2015 at 11:13 AM, Vickey Singh
wrote:
> Hey Mark / Community
>
> These are the sequences of changes that seems to have fixed
Quentin,
Red Hat Ceph Storage 1.3 is based on Hammer. I guess you can take away from
that that we at Red Hat consider it production-ready :-)
Ian
On Thu, Aug 27, 2015 at 10:30 AM, Quentin Hartman
qhart...@direwolfdigital.com wrote:
I'm currently running Giant in my cluster, and there are a
Add to your Gcal here:
https://www.google.com/calendar/render?eid=Yzh0cDdiOWVsYjVyZXBlZmVocjAxdTNrMzggYW1hdmVyaWZ5QG0&ctz=America/New_York&pli=1&sf=true&output=xml#eventpage_6
Ian
On Aug 27, 2015, at 10:36, Patrick McGarry pmcga...@redhat.com wrote:
Hey cephers,
Just wanted to start sharing
It looks like you may have hit http://tracker.ceph.com/issues/7915
Ian R. Colle
Global Director of Software Engineering
Red Hat (Inktank is now part of Red Hat!)
http://www.linkedin.com/in/ircolle
http://www.twitter.com/ircolle
Cell: +1.303.601.7713
Email: ico...@redhat.com
- Original
Christian,
Why are you not fond of ceph-deploy?
Ian R. Colle
- Original Message -
From:
Thanks, Filippos! Very interesting reading.
Are you comfortable enough yet to remove the RAID-1 from your architecture and
get all that space back?
Ian R. Colle
Cédric,
See http://tracker.ceph.com/issues/8221
The S3 and Swift APIs handle versioning very differently, so we'll
implement S3 in the Giant time frame and consider how to handle Swift once
that's completed.
Ian Colle
Director of Engineering
Inktank
On Sunday, April 27, 2014, Cedric Lemarchand
Moving to ceph-users.
Ian R. Colle
Director of Engineering
Inktank
On 4/16/14, 7:52 AM, Ilya Storozhilov ilya_storozhi...@epam.com wrote:
Hello Ceph
On Aug 27, 2013, at 2:08, Oliver Daudey oli...@xs4all.nl wrote:
Hey Samuel,
The PGLog::check() is now no longer visible in profiling, so it helped
for that. Unfortunately, it doesn't seem to have helped to bring down
the OSD's CPU-loading much. Leveldb still uses much more than in
http://tracker.ceph.com/issues/6057
Ian R. Colle
Director of Engineering, Inktank
Twitter: ircolle
LinkedIn: www.linkedin.com/in/ircolle
On Aug 21, 2013, at 6:41, Jeff Bachtel jbach...@bericotechnologies.com wrote:
Is there an issue ID associated with this? For those of us who made the long
Please note that you do not need to specify the version of the deb or rpm
repo if you want the latest. Just continue to point to http://ceph.com/debian or
http://ceph.com/rpm and you'll get the same thing as
http://ceph.com/debian-dumpling and http://ceph.com/rpm-dumpling.
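The layout described above can be sketched as a repo configuration. This is an illustration only: the distro codename ("trusty") and file path are assumptions, not from the mail.

```shell
# Sketch only: point apt at the top-level repo, which tracks the latest
# release, instead of pinning a release-specific one.
# Codename "trusty" is an assumption; use your distro's codename.
echo "deb http://ceph.com/debian/ trusty main" \
  > /etc/apt/sources.list.d/ceph.list

# Equivalent idea on rpm-based systems: baseurl points at the symlinked
# repo, e.g. http://ceph.com/rpm/ rather than http://ceph.com/rpm-dumpling.
```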
Ian R. Colle
Director of Engineering
There are version-specific repos, but you shouldn't need them if you want
the latest.
In fact, http://ceph.com/rpm/ is simply a link to
http://ceph.com/rpm-dumpling
Ian R. Colle
Director of Engineering
Inktank
Cell: +1.303.601.7713
Email: i...@inktank.com
Delivering the
Florian,
It's building now, should be out in a few hours.
Ian R. Colle
Ceph Program Manager
Inktank
Cell: +1.303.601.7713
Email: i...@inktank.com
Delivering the Future of Storage
http://www.linkedin.com/in/ircolle
http://www.twitter.com/ircolle
On 5/13/13 3:37