Sorry about the mangled urls in there, these are all from download.ceph.com
rpm-jewel/el7/x86_64
Steve
> On Apr 21, 2016, at 1:17 PM, Stephen Lord <steve.l...@quantum.com> wrote:
>
>
>
> Running this command
>
> ceph-deploy install --stable jewel ceph00
>
Running this command
ceph-deploy install --stable jewel ceph00
with the 1.5.32 version of ceph-deploy against a Red Hat 7.2 system is failing
today (it worked yesterday).
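A possible workaround, assuming the jewel repo itself is still reachable, is to point ceph-deploy straight at it rather than letting it resolve the URL; --repo-url and --gpg-url are standard ceph-deploy install options, though this is untested against 1.5.32:
ceph-deploy install --repo-url https://download.ceph.com/rpm-jewel/el7 --gpg-url https://download.ceph.com/keys/release.asc ceph00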
I have a setup using some Intel P3700 devices as a cache tier, with 33 SATA
drives hosting the pool behind them. I set up the cache tier with writeback,
gave it a size and max object count, etc.:
ceph osd pool set nvme target_max_bytes 5000
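For reference, the full writeback-tier wiring along those lines would look something like the following; the backing pool name 'sata' and the actual limits are assumptions, not the poster's values:
# attach the nvme pool as a writeback cache in front of the backing pool
ceph osd tier add sata nvme
ceph osd tier cache-mode nvme writeback
ceph osd tier set-overlay sata nvme
# cap the cache by bytes and objects so the tiering agent knows when to flush/evict
ceph osd pool set nvme target_max_bytes 1099511627776
ceph osd pool set nvme target_max_objects 1000000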
I was experimenting with using bluestore OSDs and appear to have found a fairly
consistent way to crash them…
Changing the number of copies in a pool down from 3 to 1 has now twice caused
the mass panic of a whole pool of OSDs. In one case it was a cache tier, in
another case it was just a
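Concretely, the trigger is just the replica-count change; the pool name here is a placeholder:
ceph osd pool set <pool> size 1    # dropping copies from 3 to 1 is what set off the crashes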
eps that should be enough to test it out, I hope you got the
> latest ceph-deploy either from pip or through GitHub.
>
> On Tue, Mar 15, 2016 at 12:29 PM, Stephen Lord <steve.l...@quantum.com> wrote:
> I would have to nuke my cluster right now, and I do not have a spare
> Do you mind giving the full failed logs somewhere in fpaste.org along with
> some os version details?
> There are some known issues on RHEL. If you use 'osd prepare' and 'osd
> activate' (specifying just the journal partition here), it might work better
> (example after this thread).
>
> On Tue, Mar 15, 2016 at 1
<bhi...@gmail.com> wrote:
> > It seems like ceph-disk is often breaking on centos/redhat systems. Does it
> > have automated tests in the ceph release structure?
> >
> > -Ben
> >
> >
> > On Tue, Mar 15, 2016 at 8:52 AM, Stephen Lord <steve.l...@quantum.com> wrote:
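For the 'osd prepare' / 'osd activate' route suggested above, the invocation looks roughly like this; the host, data disk, and journal partition names are assumptions:
ceph-deploy osd prepare ceph00:/dev/sdb:/dev/sdc1
ceph-deploy osd activate ceph00:/dev/sdb1:/dev/sdc1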
Hi,
The ceph-disk command (version 10.0.4) seems to have problems operating on a
Red Hat 7 system: it unconditionally uses the partprobe command to update the
partition table. I had to change this to partx -u to get past it.
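At the shell level the change amounts to the following; the device name is illustrative:
# what ceph-disk runs today -- fails on RHEL 7 when the kernel still holds the device
partprobe /dev/sdb
# the substitution that worked here: update the kernel's partition view in place
partx -u /dev/sdb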
processed, i.e. the 95-ceph-osd.rules
>
> On Fri, Feb 5, 2016 at 6:39 AM, Stephen Lord <steve.l...@quantum.com> wrote:
>>
>> I looked at this system this morning, and it actually finished what it was
>> doing. The erasure coded pool still contains all the data and the cache
>>
objects in the erasure coded pool.
Looks like I am a little too bleeding edge for this, or the contents of the
.ceph_ attribute are not an object_info_t
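A sketch of that decode attempt, with an illustrative filestore object path; the xattr name and ceph-dencoder invocation are the usual ones, but treat this as an assumption about what was run:
attr -q -g "ceph._" /var/lib/ceph/osd/ceph-0/current/2.0_head/someobject > obj_info.bin
ceph-dencoder type object_info_t import obj_info.bin decode dump_json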
Steve
> On Feb 4, 2016, at 7:10 PM, Gregory Farnum <gfar...@redhat.com> wrote:
>
> On Thu, Feb 4, 2016 at 5:07 PM, Stephen Lord <stev
I set up a cephfs file system with a cache tier over an erasure coded tier as an
experiment:
ceph osd erasure-code-profile set raid6 k=4 m=2
ceph osd pool create cephfs-metadata 512 512
ceph osd pool set cephfs-metadata size 3
ceph osd pool create cache-data 2048 2048
ceph osd pool
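A sketch of how the rest of that setup typically goes; everything beyond the pool names already shown is an assumption, not the poster's actual commands:
ceph osd pool create cephfs-data 2048 2048 erasure raid6
ceph osd tier add cephfs-data cache-data
ceph osd tier cache-mode cache-data writeback
ceph osd tier set-overlay cephfs-data cache-data
ceph fs new cephfs cephfs-metadata cephfs-data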
> On Feb 4, 2016, at 6:51 PM, Gregory Farnum wrote:
>
> I presume we're doing reads in order to gather some object metadata
> from the cephfs-data pool; and the (small) newly-created objects in
> cache-data are definitely whiteout objects indicating the object no
> longer
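One way to see what each tier actually holds, reusing the pool names from the setup above, is to compare object listings between the cache and base pools:
rados -p cache-data ls | sort > cache.txt
rados -p cephfs-data ls | sort > base.txt
diff cache.txt base.txt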
I have a configuration with 18 OSDs spread across 3 hosts. I am struggling to
get an even distribution of placement groups between the OSDs for a specific
pool. All the OSDs are the same size, with the same weight in the CRUSH map.
The fill level of the individual placement groups is very
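For anyone diagnosing the same imbalance, these are the usual starting points; the reweight threshold and pool name are placeholders:
ceph osd df tree                    # per-OSD utilization and PG counts
ceph pg dump pgs_brief              # which OSDs each PG maps to
ceph osd reweight-by-pg 110 <pool>  # nudge overfull OSDs down for that pool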