On Tue, Sep 25, 2018 at 2:23 AM Andras Pataki
wrote:
>
> The whole cluster, including ceph-fuse is version 12.2.7.
>
If this issue happens again, please set the "debug_objectcacher" option of
ceph-fuse to 15 (for 30 seconds) and send the ceph-fuse log to us.
Regards,
Yan, Zheng
> Andras
>
> On 9/24/18
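For reference, a minimal sketch of how that debug level can be raised on a
running ceph-fuse client through its admin socket; the socket path below is
illustrative, adjust it to the actual client socket on the mount host:

  # raise objectcacher logging on the running ceph-fuse client (illustrative socket path)
  ceph daemon /var/run/ceph/ceph-client.admin.asok config set debug_objectcacher 15
  # ... let the workload run for ~30 seconds, then drop back to the default
  ceph daemon /var/run/ceph/ceph-client.admin.asok config set debug_objectcacher 0/5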
Hello,
During normal operation our cluster suddenly threw an error, and since then we
have had 1 inconsistent PG, and one of the clients sharing the cephfs mount has
started to occasionally log "ceph: Failed to find inode X".
"ceph pg repair" deep scrubs the PG and fails with the same error in the log.
Can
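For anyone else in the same spot, a rough sketch of the usual inspection steps
around such a repair attempt; the PG id 2.1ab below is only a placeholder:

  # find the inconsistent PG
  ceph health detail | grep inconsistent
  # list the objects/shards the scrub flagged (placeholder PG id)
  rados list-inconsistent-obj 2.1ab --format=json-pretty
  # re-run the deep scrub and attempt the repair
  ceph pg deep-scrub 2.1ab
  ceph pg repair 2.1ab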
On Fri, Sep 21, 2018 at 5:04 PM Thomas White wrote:
> Hi all,
>
>
>
> I have recently performed a few tasks, namely purging several buckets from
> our RGWs and added additional hosts into Ceph causing some data movement
> for a rebalance. As this is now almost completed, I kicked off some deep
>
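For context, the kind of commands involved here look roughly like the
following; the bucket name and ids are placeholders, not the ones actually
used:

  # purge a bucket and all of its objects (placeholder bucket name)
  radosgw-admin bucket rm --bucket=old-bucket --purge-objects
  # kick off deep scrubs manually, per OSD or per PG (placeholder ids)
  ceph osd deep-scrub 12
  ceph pg deep-scrub 7.3f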
I would say that we consider Mimic production-ready now -- it was
released a few months ago, with the second point release in final
testing right now.
On Mon, Sep 24, 2018 at 2:49 PM Florian Florensa
wrote:
>
> For me it's more about whether Mimic will be production ready by mid-October
>
> On Mon. 24
For me it's more about whether Mimic will be production ready by mid-October
On Mon, Sep 24, 2018 at 19:11, Jason Dillaman wrote:
> On Mon, Sep 24, 2018 at 12:18 PM Florian Florensa
> wrote:
> >
> > Currently building 4.18.9 on Ubuntu to try it out, also wondering if I
> should plan for
The cluster is healthy and stable. I'll leave a summary for the archive in case
anyone else has a similar problem.
CentOS 7.5
Ceph Mimic 13.2.1
3 mon/mgr/mds hosts, 862 OSDs (41 hosts)
This was all triggered by an unexpected ~1 min network blip on our 10Gbit
switch. The ceph cluster lost
The whole cluster, including ceph-fuse, is version 12.2.7.
Andras
On 9/24/18 6:27 AM, Yan, Zheng wrote:
On Fri, Sep 21, 2018 at 5:40 AM Andras Pataki
wrote:
I've done some more experiments playing with client config parameters,
and it seems like the client_oc_size parameter is very
Hello Cephers,
I use ceph-ansible v3.1.5 to build a new Mimic Ceph cluster for OpenStack.
I want to use Erasure Coding for certain pools (images, cinder backups, cinder
for one additional backend, rgw data...).
The examples in group_vars/all.yml.sample don't show how to specify an erasure
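As a starting point, a pool entry in group_vars can in principle carry a pool
type and an erasure profile; the sketch below is only an assumption about the
key names (type, erasure_profile), so check them against the pool-creation
tasks of the ceph-ansible version you run:

  openstack_pools:
    - name: images
      pg_num: 64
      type: erasure                  # assumption: the role accepts "erasure" here
      erasure_profile: my-ec-profile # the profile must already exist on the cluster
      application: rbd

If the role in use does not handle these keys, the erasure-coded pools can be
created by hand beforehand (ceph osd pool create images 64 64 erasure
my-ec-profile) and then referenced from the OpenStack configuration. Note also
that RBD keeps image metadata in a replicated pool; the EC pool is normally
used as a data pool with overwrites enabled.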
On Mon, Sep 24, 2018 at 12:18 PM Florian Florensa wrote:
>
> Currently building 4.18.9 on Ubuntu to try it out, also wondering if I should
> plan for xenial+luminous or directly target bionic+mimic
There shouldn't be any technical restrictions on the Ceph iSCSI side,
so it would come down to
It is not quite clear to me what you are trying to achieve.
If you want to separate hypervisors from Ceph, that would not give you much. The HV
is a man-in-the-middle anyway, so it would be able to tap into traffic whatever
you do. iSCSI won't help you here. Also, you would probably need to let the
Currently building 4.18.9 on Ubuntu to try it out, also wondering if I
should plan for xenial+luminous or directly target bionic+mimic
On Mon, Sep 24, 2018 at 18:08, Jason Dillaman wrote:
> It *should* work against any recent upstream kernel (>=4.16) and
> up-to-date dependencies [1]. If you
On Mon, Sep 24, 2018 at 8:59 AM Andrei Mikhailovsky wrote:
>
> Hi Eugen,
>
> Many thanks for the links and the blog article. Indeed, the process of
> changing the journal device seems far more complex than with FileStore OSDs.
> Far more complex than it should be from an administrator's point of
It *should* work against any recent upstream kernel (>=4.16) and
up-to-date dependencies [1]. If you encounter any distro-specific
issues (like the PR that Mike highlighted), we would love to get them
fixed.
[1] http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/
On Mon, Sep
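A quick sanity check on a prospective gateway node, as a rough sketch, is to
confirm the kernel version and that the LIO userspace-passthrough (TCMU)
module used by tcmu-runner is available:

  # the upstream-based gateway stack expects a recent kernel (>= 4.16)
  uname -r
  # LIO's userspace passthrough module
  modinfo target_core_user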
Hi Eugen,
Many thanks for the links and the blog article. Indeed, the process of changing
the journal device seems far more complex than with FileStore OSDs. Far more
complex than it should be from an administrator's point of view. I guess
developers and admins live on different planets and it
So from my understanding, as of right now it is not possible to have an
iSCSI gateway outside of RHEL?
On Mon, Sep 24, 2018 at 17:45, Mike Christie wrote:
> On 09/24/2018 05:47 AM, Florian Florensa wrote:
> > Hello there,
> >
> > I am still in the works of preparing a deployment with iSCSI
On 09/24/2018 05:47 AM, Florian Florensa wrote:
> Hello there,
>
> I am still in the works of preparing a deployment with iSCSI gateways
> on Ubuntu, but the latest Ubuntu LTS ships with kernel 4.15,
> and I don't see support for iSCSI.
> What kernel are people using for this?
> -
On Thu, Sep 13, 2018 at 8:48 PM kefu chai wrote:
> my question is: is it okay to drop the support of centos/rhel 7.4? so
> we will solely build and test the supported Ceph releases (luminous,
> mimic) on 7.5?
CentOS itself does not support old point releases, and I don't think
we should imply
Hi Alfredo,
I've packaged the latest version in Fedora, but I didn't update EPEL.
I've submitted the update for EPEL now at
https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2018-7f8d3be3e2 .
solarflow99, you can test this package and report "+1" in Bodhi there.
It's also in the CentOS Storage
Hi,
I am wondering if it is possible to move the SSD journal for a
BlueStore OSD? I would like to move it from one SSD drive to another.
Yes, this question has been asked several times.
Depending on your deployment there are several things to be aware of;
maybe you should first read [1]
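In broad strokes, the manual procedure described in such write-ups goes
something like the sketch below; the OSD id and device paths are placeholders,
the target partition must be at least as large as the source, and later Ceph
releases grew ceph-bluestore-tool commands that make this less hands-on:

  # stop the OSD so nothing touches the DB device (placeholder id)
  systemctl stop ceph-osd@12
  # copy the old block.db partition to the new device (placeholder paths)
  dd if=/dev/sdb1 of=/dev/sdc1 bs=4M
  # repoint the OSD's block.db symlink and make sure ceph owns the new device
  ln -sfn /dev/sdc1 /var/lib/ceph/osd/ceph-12/block.db
  chown ceph:ceph /dev/sdc1   # persistent ownership usually needs a udev rule or the right partition type
  systemctl start ceph-osd@12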
Hi Kevin,
Do you have an update on the state of the cluster?
I've opened a ticket http://tracker.ceph.com/issues/36163 to track the
likely root cause we identified, and have a PR open at
https://github.com/ceph/ceph/pull/24247
Thanks!
sage
On Thu, 20 Sep 2018, Sage Weil wrote:
> On Thu, 20
Hello everyone,
I am wondering if it is possible to move the SSD journal for a BlueStore OSD?
I would like to move it from one SSD drive to another.
Thanks
Hello there,
I am still in the works of preparing a deployment with iSCSI gateways
on Ubuntu, but the latest Ubuntu LTS ships with kernel 4.15,
and I don't see support for iSCSI.
What kernel are people using for this?
- Mainline v4.16 from the Ubuntu kernel team?
- Kernel from
On Fri, Sep 21, 2018 at 5:40 AM Andras Pataki
wrote:
>
> I've done some more experiments playing with client config parameters,
> and it seems like the client_oc_size parameter is very correlated to
> how big ceph-fuse grows. With its default value of 200MB, ceph-fuse
> gets to about 22GB of
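For anyone tuning the same thing, the knob lives in the client section of
ceph.conf on the machine running ceph-fuse; a minimal sketch with an
illustrative lower value (the default is roughly 200 MB):

  [client]
  # client_oc_size caps the ceph-fuse object cacher, in bytes (default ~200 MB)
  client_oc_size = 104857600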
On 09/24/2018 08:53 AM, Nicolas Huillard wrote:
Thanks for your anecdote ;-)
Could it be that I stack too many things (XFS in LVM in md-RAID in
SSD's FTL)?
No, we regularly use the same combination of layers, just without the SSD.
mj
On 09/24/2018 08:46 AM, Nicolas Huillard wrote:
Too bad, since this FS has a lot of very promising features. I view it
as the single-host-ceph-like FS, and do not see any equivalent (apart
from ZFS, which will also never be included in the kernel).
Agreed. It's also so much more flexible than
On Sunday, September 23, 2018 at 20:28 +0200, mj wrote:
> XFS has *always* treated us nicely, and we have been using it for a VERY
> long time, ever since the pre-2000 SUSE 5.2 days on pretty much all our
> machines.
>
> We have seen only very few corruptions on xfs, and the few times we
On Sunday, September 23, 2018 at 17:49 -0700, solarflow99 wrote:
> ya, sadly it looks like btrfs will never materialize as the next
> filesystem of the future. Red Hat, as an example, even dropped it from its
> future, as others probably will and have too.
Too bad, since this FS has a lot of