My apologies, I don't seem to be getting notifications on PRs. I'll review
this week.
Thanks,
Berant
On Mon, Mar 19, 2018 at 5:55 AM, Konstantin Shalygin wrote:
> Hi Berant
>
>> I've created a prometheus exporter that scrapes the RADOSGW Admin Ops API
>> and exports the usage information for all users and buckets.
>
> Kind regards,
>
> Ben Morrice
>
> __
> Ben Morrice | e: ben.morr...@epfl.ch | t: +41-21-693-9670
> EPFL / BBP
> Biotech Campus
> Chemin des Mines 9
Hello all,
I've created a prometheus exporter that scrapes the RADOSGW Admin Ops API and
exports the usage information for all users and buckets. This is my first
prometheus exporter so if anyone has feedback I'd greatly appreciate it.
I've tested it against Hammer, and will shortly test against
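In case it helps to see the shape of the thing, here is a rough sketch of how
such an exporter can be put together (this is not the actual project code; the
endpoint, port, metric names and JSON field handling below are all assumptions,
and it presumes an admin user whose key has usage read caps and a gateway that
accepts v4-signed requests):

    import time
    import requests
    from requests_aws4auth import AWS4Auth
    from prometheus_client import Gauge, start_http_server

    RGW = "http://rgw.example.com:7480"    # example endpoint, not from this thread
    auth = AWS4Auth("ACCESS_KEY", "SECRET_KEY", "us-east-1", "s3")

    ops = Gauge("radosgw_usage_ops", "Operations per user/bucket/category",
                ["user", "bucket", "category"])
    bytes_sent = Gauge("radosgw_usage_bytes_sent", "Bytes sent per user/bucket/category",
                       ["user", "bucket", "category"])

    def scrape():
        # GET /admin/usage returns per-user entries with per-bucket category counters
        r = requests.get(RGW + "/admin/usage",
                         params={"format": "json", "show-entries": "True"},
                         auth=auth, timeout=10)
        r.raise_for_status()
        for entry in r.json().get("entries", []):
            user = entry.get("user") or entry.get("owner", "")
            for bucket in entry.get("buckets", []):
                for cat in bucket.get("categories", []):
                    labels = (user, bucket.get("bucket", ""), cat["category"])
                    ops.labels(*labels).set(cat["ops"])
                    bytes_sent.labels(*labels).set(cat["bytes_sent"])

    if __name__ == "__main__":
        start_http_server(9242)    # arbitrary port for this sketch
        while True:
            scrape()
            time.sleep(60)

The real exporter may well do this differently; the sketch is just the minimal
loop of pulling the usage JSON, flattening it into labels, and exposing gauges
for Prometheus to scrape.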
at 7:13 PM, Berant Lemmenes ber...@lemmenes.com
wrote:
Sam,
It is for a valid pool; however, the up and acting sets for 2.14 both show
OSDs 8 and 7. I'll take a look at 7 and 8 and see if they are good.
If so, it seems like it being present on osd.3 could be an artifact from
previous topologies.
Thanks for the assistance!
Berant
On Tuesday, May 19, 2015, Samuel Just sj...@redhat.com wrote:
If 2.14 is part of a non-existent pool, you should be able to rename it
out of current/ in the osd directory to prevent the osd from seeing it on
startup.
-Sam
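(For illustration only: assuming the usual filestore layout of
/var/lib/ceph/osd/ceph-<id>/current/<pgid>_head and that the OSD is stopped
first, the rename amounts to something like the sketch below. The paths are
examples, not commands Sam gave.)

    import os
    import shutil

    osd_dir = "/var/lib/ceph/osd/ceph-3"                # osd.3, as in this thread
    pgid = "2.14"
    src = os.path.join(osd_dir, "current", pgid + "_head")
    dst = os.path.join(osd_dir, pgid + "_head.stray")   # anywhere outside current/

    if os.path.isdir(src):
        shutil.move(src, dst)   # the OSD no longer sees this PG on the next startup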
- Original Message -
From: Berant Lemmenes ber...@lemmenes.com
: []}},
{ name: Started,
enter_time: 2015-05-18 10:18:05.335040}],
agent_state: {}}
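(For context, that fragment looks like the tail of a 'ceph pg <pgid> query',
i.e. the recovery_state / agent_state section. Assuming that's what it is,
something like the following pulls just that part back out; the pg id is only
an example.)

    import json
    import subprocess

    out = subprocess.check_output(["ceph", "pg", "2.14", "query", "--format=json"])
    for state in json.loads(out).get("recovery_state", []):
        print(state.get("name"), state.get("enter_time"))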
On Mon, May 18, 2015 at 2:34 PM, Berant Lemmenes ber...@lemmenes.com
wrote:
Sam,
Thanks for taking a look. It does seem to fit my issue. Would just
removing the 5.0_head directory be appropriate?
Hello all,
I've encountered a problem when upgrading my single node home cluster from
giant to hammer, and I would greatly appreciate any insight.
I upgraded the packages like normal, then proceeded to restart the mon and
once that came back restarted the first OSD (osd.3). However it
http://tracker.ceph.com/issues/11429. There are
some workarounds in the bugs marked as duplicates of that bug, or you can
wait for the next hammer point release.
-Sam
- Original Message -
From: Berant Lemmenes ber...@lemmenes.com
To: ceph-users@lists.ceph.com
Sent: Monday, May 18, 2015 10:24:38 AM
Greg,
So is the consensus that the appropriate way to implement this scenario is
to have the fs created on the EC backing pool vs. the cache pool but that
the UI check needs to be tweaked to distinguish between this scenario and
just trying to use an EC pool alone?
I'm also interested in the
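For reference, the setup being discussed (an EC data pool fronted by a
replicated writeback cache tier, with the filesystem created against the EC
backing pool rather than the cache pool) looks roughly like the sketch below
on a release that has 'ceph fs new'. Pool and fs names are made up, and the
standard ceph CLI calls are only wrapped in subprocess to keep the example
self-contained:

    import subprocess

    def ceph(*args):
        subprocess.check_call(["ceph"] + list(args))

    ceph("osd", "pool", "create", "cephfs_data_ec", "128", "128", "erasure")
    ceph("osd", "pool", "create", "cephfs_cache", "128", "128")
    ceph("osd", "pool", "create", "cephfs_metadata", "64", "64")

    # put the replicated pool in front of the EC pool as a writeback cache tier
    ceph("osd", "tier", "add", "cephfs_data_ec", "cephfs_cache")
    ceph("osd", "tier", "cache-mode", "cephfs_cache", "writeback")
    ceph("osd", "tier", "set-overlay", "cephfs_data_ec", "cephfs_cache")

    # point the fs at the EC backing pool (the scenario in question); this last
    # step is the one the UI check being discussed currently refuses for EC pools
    ceph("fs", "new", "cephfs", "cephfs_metadata", "cephfs_data_ec")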
Argh, premature email sending!
On Thu, Nov 14, 2013 at 12:28 PM, Berant Lemmenes ber...@lemmenes.com wrote:
On Thu, Nov 14, 2013 at 12:11 PM, Alfredo Deza
alfredo.d...@inktank.com wrote:
This is just one of the items that caught my attention:
2013-10-29 15:56:03,982 [ceph11][ERROR ] 2013
On Tue, Nov 12, 2013 at 7:28 PM, Joao Eduardo Luis joao.l...@inktank.com wrote:
This looks an awful lot like you started another instance of an OSD with
the same ID while another was running. I'll walk you through the log lines
that point me towards this conclusion. Would still be weird if
I noticed the same behavior on my dumpling cluster. They wouldn't show up
after boot, but after a service restart they were there.
I haven't tested a node reboot since I upgraded to emperor today. I'll give
it a shot tomorrow.
Thanks,
Berant
On Nov 11, 2013 9:29 PM, Peter Matulis
it actually does?
Thanks in advance.
Berant
On Mon, May 6, 2013 at 4:43 PM, Berant Lemmenes ber...@lemmenes.com wrote:
TL;DR
bobtail Ceph cluster unable to finish rebalance after drive failure, usage
increasing even with no clients connected.
I've been running a test bobtail cluster
TL;DR
bobtail Ceph cluster unable to finish rebalance after drive failure, usage
increasing even with no clients connected.
I've been running a test bobtail cluster for a couple of months and it's
been working great. Last week I had a drive die and rebalance; during that
time another OSD