Hi All,
I thought I should make a little noise about a project some of us at
SUSE have been working on, called DeepSea. It's a collection of Salt
states, runners and modules for orchestrating deployment of Ceph
clusters. To help everyone get a feel for it, I've written a blog post
which walks
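For anyone who hasn't used Salt orchestration before, the short version is
that a deployment is driven in numbered stages from the salt master, roughly
like this (a minimal sketch from memory -- check the DeepSea README for the
authoritative list of stages):

  # run on the salt master, one stage at a time
  salt-run state.orch ceph.stage.0   # prep: update/patch the minions
  salt-run state.orch ceph.stage.1   # discovery: collect hardware profiles
  salt-run state.orch ceph.stage.2   # configure: generate the proposed cluster config
  salt-run state.orch ceph.stage.3   # deploy: create mons and OSDs
  salt-run state.orch ceph.stage.4   # services: MDS, RGW, etc.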
Hi guys,
I'm not sure whether this was asked before, as I wasn't able to find anything
by googling (and the search function of the list archive at
http://lists.ceph.com/pipermail/ceph-users-ceph.com/ is broken) - anyway:
- How would you back up the config of all users and the bucket configurations
for the radosgw
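In case it helps, one approach (untested here, and it needs jq installed) is
to dump everything radosgw keeps in its metadata sections with radosgw-admin
and keep the JSON somewhere safe:

  # export all user and bucket metadata to per-key JSON files
  for section in user bucket bucket.instance; do
    mkdir -p rgw-backup/$section
    for key in $(radosgw-admin metadata list $section | jq -r '.[]'); do
      radosgw-admin metadata get "$section:$key" > "rgw-backup/$section/$key.json"
    done
  done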
I'm running Kraken built from Git right now and I've found that my OSDs eat
as much memory as they can before they're killed by the OOM killer. I
understand that Bluestore is experimental, but I thought this behaviour
should be known.
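If it helps anyone compare numbers, the tcmalloc heap commands work on these
OSDs too, so you can watch the growth and check whether any of it is memory
the allocator is simply holding on to (a rough check, nothing
Bluestore-specific):

  # print per-OSD heap usage, then ask tcmalloc to return free pages to the OS
  ceph tell osd.* heap stats
  ceph tell osd.* heap release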
My setup:
- Xeon D-1540, 32GB DDR4 ECC RAM
- Arch Linux
- Single
In case anyone was disappointed and isn't on the call: there were technical
difficulties that split it, but we are on now.
https://bluejeans.com/707503600
On Wed, Nov 2, 2016 at 9:02 PM, Patrick McGarry wrote:
> Due to low attendance we have had to cancel CDM tonight. Sorry for the
Due to low attendance we have had to cancel CDM tonight. Sorry for the
confusion.
--
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http://community.redhat.com
@scuttlemonkey || @ceph
After a bit more digging, the original crash appears to be similar (but not
identical) to this tracker report
http://tracker.ceph.com/issues/16983
I can see that this was fixed in 10.2.3, so I will probably look to upgrade.
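Before and after the upgrade it's worth confirming what the daemons are
actually running, since the installed packages and the running binaries can
differ until everything has been restarted:

  # version of the locally installed ceph packages
  ceph -v
  # version each running OSD reports
  ceph tell osd.* version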
If the logs make sense to anybody with a bit more knowledge I would
We currently have one master RADOS pool in our cluster that is shared among
many applications. All objects stored in the pool are currently stored using
specific namespaces -- nothing is stored in the default namespace.
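For reference, this is how we audit what lives where today; every application
gets its own namespace in the shared pool (pool and namespace names below are
made up):

  # list the objects one application owns
  rados -p master-pool --namespace=app1 ls
  # list objects across every namespace, prefixed with the namespace name
  rados -p master-pool ls --all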
We would like to add a CephFS filesystem to our cluster, and would like to
Hi all,
Just had a bit of an outage with CephFS around the MDSes. I managed to get
everything up and running again after a bit of head scratching, and thought I
would share here what happened.
Cause
I believe the MDSes, which were running as VMs, suffered when the hypervisor
ran out of RAM and
Yes a rolling restart should work. That was enough in my case.
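On our boxes that loop looks roughly like the following (assuming a systemd
install with the usual ceph-osd.target unit; adjust for your distro):

  # on each OSD host in turn
  systemctl restart ceph-osd.target
  # wait for the cluster to settle before moving to the next host
  while ! ceph health | grep -q HEALTH_OK; do sleep 10; done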
On 2 November 2016 01:20:48 CET, "Will.Boege" wrote:
>Start with a rolling restart of just the OSDs one system at a time,
>checking the status after each restart.
>
>On Nov 1, 2016, at 6:20 PM, Ronny Aasen
On this particular occasion most of the cephfs developers are in Europe, so
we are unlikely to make it.
John
On 2 Nov 2016 5:27 p.m., "Patrick McGarry" wrote:
> Hey cephers,
>
> I wanted to both post a reminder that our Ceph Developer Monthly
> meeting was tonight at 9p
Hey cephers,
I wanted to both post a reminder that our Ceph Developer Monthly
meeting was tonight at 9p EDT, and pose a question:
Are periodic Ceph Developer Meetings helpful and desired? Lately the
participation has been sadly lacking, and I want to make sure we are
providing a worthwhile
> On 2 November 2016 at 16:21, Sage Weil wrote:
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > > > I'm pretty sure this is a race condition that got cleaned up as part of
> > > > > https://github.com/ceph/ceph/pull/9078/commits. The mon only checks
On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > > I'm pretty sure this is a race condition that got cleaned up as part of
> > > > https://github.com/ceph/ceph/pull/9078/commits. The mon only checks the
> > > > pg_temp entries that are getting set/changed, and since those are
> On 2 November 2016 at 16:00, Sage Weil wrote:
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > On 2 November 2016 at 15:06, Sage Weil wrote:
> > > On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > > > On 2 November
On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > On 2 November 2016 at 15:06, Sage Weil wrote:
> > On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > > On 2 November 2016 at 14:30, Sage Weil wrote:
> > > > On Wed, 2 Nov
> On 2 November 2016 at 15:06, Sage Weil wrote:
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > On 2 November 2016 at 14:30, Sage Weil wrote:
> > > On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > > > On 26
On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > On 2 November 2016 at 14:30, Sage Weil wrote:
> > On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > > On 26 October 2016 at 11:18, Wido den Hollander wrote:
Hey,
http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure/tmp/en-US/html/Allocation_Groups.html
"Each AG can be up to one terabyte in size (512 bytes * 2^31), regardless of
the underlying device's sector size."
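(That figure is just 512 * 2^31 = 2^40 bytes = 1 TiB.) You can see what mkfs
picked for an existing filesystem with xfs_info; the mount point below is
only an example:

  # agcount and agsize (in blocks) are printed on the meta-data line
  xfs_info /var/lib/ceph/osd/ceph-0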
"The only global information maintained by the first AG (primary) is free
> On 2 November 2016 at 14:30, Sage Weil wrote:
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > On 26 October 2016 at 11:18, Wido den Hollander wrote:
> > > > On 26 October 2016 at 10:44, Sage Weil wrote
On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > On 26 October 2016 at 11:18, Wido den Hollander wrote:
> > > On 26 October 2016 at 10:44, Sage Weil wrote:
> > > On Wed, 26 Oct 2016, Dan van der Ster wrote:
> > > > On Tue, Oct
> On 26 October 2016 at 11:18, Wido den Hollander wrote:
> > On 26 October 2016 at 10:44, Sage Weil wrote:
> > On Wed, 26 Oct 2016, Dan van der Ster wrote:
> > > On Tue, Oct 25, 2016 at 7:06 AM, Wido den Hollander wrote:
I have a 3-cluster Giant setup with 8 OSDs each. During the installation I
had to redo one cluster, but it looks like the old info is still in the crush
map (based on my reading). How do I fix this?
[root@avatar0-ceph1 ~]# ceph -s
cluster 2f0d1928-2ee5-4731-a259-64c0dc16110a
health HEALTH_WARN 139
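From what I have read so far, the usual cleanup for OSD entries left over
from the old cluster is something like the below (osd.7 is just an example
id), but I would like confirmation before running it:

  # remove a stale OSD completely: crush map entry, auth key, and osdmap entry
  ceph osd crush remove osd.7
  ceph auth del osd.7
  ceph osd rm 7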
Hi, everyone.
What are the meanings of the fields actingbackfill, want_acting and
backfill_targets of the PG class?
Thank you :-)