On Mon, Apr 13, 2020 at 3:16 PM Josh Haft wrote:
>
> On Mon, Apr 13, 2020 at 4:14 PM Gregory Farnum wrote:
> >
> > On Mon, Apr 13, 2020 at 1:33 PM Josh Haft wrote:
> > >
> > > Hi,
> > >
> > > I upgraded from 13.2.5 to 14.2.6 last week and am now seeing
> > > significantly higher latency on
I have logged the following bug ticket for it:
https://tracker.ceph.com/issues/45091
I have also noticed another bug with cephadm, which I have logged under:
https://tracker.ceph.com/issues/45092
Thanks
On Mon, 13 Apr 2020 12:36:01 +0800 Ashley Merrick wrote:
Completed the
Hi Matt,
We upgraded our cluster to 13.2.8 yesterday. After restarting radosgw, the gc
process successfully cleaned up those objects and omap entries.
Thanks again!
By the way, for other users: in our case the backlog had grown
to more than 3 million. The cleanup after upgrading
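For anyone who wants to watch their own backlog, something like this
should work (a sketch; adjust for your environment):

    # rough count of pending gc entries (can be slow with a large backlog)
    radosgw-admin gc list --include-all | grep -c '"oid"'
    # kick off a gc pass manually instead of waiting for the next scheduled one
    radosgw-admin gc process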
On Wed, Apr 15, 2020 at 9:40 AM Xinying Song wrote:
>
> Hi, Greg:
> Thanks for your reply!
> I think the master can always know whether a request has finished,
> regardless of whether there is a Commit log event, because it has
> already written an EUpdate log event that records the unfinished request.
>
Hi, Greg:
Thanks for your reply!
I think the master can always know whether a request has finished,
regardless of whether there is a Commit log event, because it has already
written an EUpdate log event that records the unfinished request.
Of course, we need to do the commit, in which we clean up the mdcache and
Hello,
I have a CephFS running on v14.2.8 correctly. I also have a VM which
runs Samba as AD controller and fileserver (Zentyal). My plan was to
mount a CephFS path on that VM and make Samba share those files to a
Windows network. But I cant make the shares work as Samba is asking to
mount the
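In case it helps to see what I'm doing, a minimal sketch (monitor
address, user name and paths are just examples):

    # kernel mount of the CephFS path on the Samba VM
    mount -t ceph 10.0.0.1:6789:/shares /mnt/cephfs \
        -o name=samba,secretfile=/etc/ceph/samba.secret

    # smb.conf stanza exporting the mounted path
    [projects]
        path = /mnt/cephfs
        read only = no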
I'm happy to announce another release of the go-ceph API
bindings. This is a regular release following our every-two-months release
cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.3.0
The bindings aim to play a similar role to the "pybind" python bindings in the
ceph tree but for
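As a taste of the API, a minimal connection example (an illustrative
sketch, not taken from the release notes):

    package main

    import (
        "fmt"

        "github.com/ceph/go-ceph/rados"
    )

    func main() {
        // create a handle and read the default /etc/ceph/ceph.conf
        conn, err := rados.NewConn()
        if err != nil {
            panic(err)
        }
        if err := conn.ReadDefaultConfigFile(); err != nil {
            panic(err)
        }
        // connect to the cluster and print its fsid
        if err := conn.Connect(); err != nil {
            panic(err)
        }
        defer conn.Shutdown()
        fsid, err := conn.GetFSID()
        if err != nil {
            panic(err)
        }
        fmt.Println("connected to cluster", fsid)
    }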
On Tue, 2020-04-14 at 06:27 +0000, Stolte, Felix wrote:
> Hi Jeff,
>
> thank you for the hint. I set Entries_HWMark = 100 in the MDCACHE section
> of ganesha.conf and upgraded ganesha to 3.2 this weekend. Cache
> Pressure warnings still keep occurring, but not as frequently as before.
> Is there another
Thanks Ilya. I am indeed using the lock ls command with a workload ID
corresponding to the lock tag - it works reasonably well. I was just wondering
if there were better options. Thanks for all the inputs.
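For reference, this is roughly what I run (pool, image and lock names
are illustrative):

    # list current locks on an image, including the lock tag
    rbd lock ls rbd/myimage
    # remove a specific lock: rbd lock rm <image> <lock-id> <locker>
    rbd lock rm rbd/myimage myworkload client.4123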
Thanks
Shridhar
On Mon, Apr 13, 2020 at 4:23 AM Ilya Dryomov wrote:
> Tying this with your other
That makes sense. Thanks Ilya.
On Mon, Apr 13, 2020 at 4:10 AM Ilya Dryomov wrote:
> As Paul said, a lock is typically broken by a new client trying
> to grab it. As part of that the existing lock holder needs to be
> blacklisted, unless you fence using some type of STONITH.
>
> The question
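For anyone following along, fencing the old holder looks roughly like
this (the address is an example):

    # blacklist the current lock holder so it can no longer write
    ceph osd blacklist add 10.0.0.2:0/3710937212
    # verify the entry
    ceph osd blacklist ls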
Hi,
Any thoughts on this?
Regards
Shridhar
On Thu, Apr 9, 2020 at 5:17 PM Void Star Nill wrote:
> Hi,
>
> I am seeing a large number of connections from ceph-mgr stuck in
> CLOSE_WAIT state with data stuck in the receive queue. It looks like the
> ceph-mgr process is not reading the data
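This is how I am spotting them, in case it helps to reproduce:

    # list TCP sockets stuck in CLOSE_WAIT, with queued bytes and owning process
    ss -tnp state close-wait | grep ceph-mgr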
Hi all,
Following some cephfs issues today we have a stable cluster, but
num_strays is incorrect.
After starting the mds, the values are reasonable, but they very soon
underflow and start showing 18E (2^64 minus a few).
[truncated perf output: ---mds --mds_cache--- --mds_log--]
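For what it's worth, I am reading the counter straight off the admin
socket (the daemon name is an example):

    # query the stray counters from the mds admin socket
    ceph daemon mds.mds1 perf dump mds_cache | grep num_strays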
On Sun, Apr 12, 2020 at 5:19 AM Xinying Song wrote:
>
> Hi, cephers:
> What's the purpose of using a LogEvent with an empty metablob?
> For example, in a link/unlink operation across two active MDSs,
> when the slave receives OP_FINISH it will write an ESlaveUpdate::OP_COMMIT
> to the journal, then
> send
Hi Peter,
You won't need to do anything--the gc process will clear the stall and
begin clearing its backlog immediately after the upgrade.
Matt
On Sat, Apr 11, 2020 at 10:42 PM Peter Parker <346415...@qq.com> wrote:
>
> thanks a lot
> i'm not sure if the PR is
Hi,
On Mon, Apr 13, 2020 at 3:08 PM Frank Schilder wrote:
>
> Hi Paul,
>
> thanks for the fast reply. When you say "bit 21", do you mean "(feature_map &
> 2^21) == true" (i.e., counting from 0 starting at the right-hand end)?
yes
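A quick way to check (the feature value below is just an example taken
from `ceph features` output):

    # test whether bit 21 is set in a client feature mask
    python -c "print(bool(0x3ffddff8ffacffff & (1 << 21)))"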
> Assuming upmap is supported by all clients. If I understand
On Tue, Apr 14, 2020 at 9:41 PM Dan van der Ster wrote:
>
> On Tue, Apr 14, 2020 at 2:50 PM Dan van der Ster wrote:
> >
> > On Sun, Apr 12, 2020 at 9:33 PM Dan van der Ster
> > wrote:
> > >
> > > Hi John,
> > >
> > > Did you make any progress on investigating this?
> > >
> > > Today I also saw
Might be an issue with cephadm.
Do you have the output of `ceph orch host ls --format json` and `ceph
orch ls --format json`?
On 09.04.20 at 13:23, Dr. Marco Savoca wrote:
> Hi all,
>
> Last week I successfully upgraded my cluster to Octopus and converted it
> to cephadm. The conversion
On Tue, Apr 14, 2020 at 2:50 PM Dan van der Ster wrote:
>
> On Sun, Apr 12, 2020 at 9:33 PM Dan van der Ster wrote:
> >
> > Hi John,
> >
> > Did you make any progress on investigating this?
> >
> > Today I also saw huge relative buffer_anon usage on our 2 active mds's
> > running 14.2.8:
> >
> >
Dear Casey
I hope you had a good Easter and that this mail finds you in good health.
I was wondering if you had some time to answer the question below regarding the
backward compatibility of the RGW.
Many thanks!
Sincerely
Francois
From: Scheurer
On Sun, Apr 12, 2020 at 9:33 PM Dan van der Ster wrote:
>
> Hi John,
>
> Did you make any progress on investigating this?
>
> Today I also saw huge relative buffer_anon usage on our 2 active mds's
> running 14.2.8:
>
> "mempool": {
> "by_pool": {
> "bloom_filter": {
>
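For anyone wanting to compare numbers, the dump comes from the admin
socket (the daemon name is an example):

    # dump per-pool mempool usage from a running mds
    ceph daemon mds.mds1 dump_mempools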
Hello to all confined people (and the others too)!
On one of my Ceph clusters (Nautilus 14.2.3), I previously set up 3 MDS
daemons in an active/standby-replay/standby configuration.
For design reasons, I would like to replace this configuration with an
active/active/standby one.
It means replacing
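If I understand the documentation correctly, the switch itself boils
down to something like this (the fs name is an example):

    # drop standby-replay and allow a second active mds
    ceph fs set cephfs allow_standby_replay false
    ceph fs set cephfs max_mds 2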
Hi Jeff,
thank you for the hint. I set Entries_HWMark = 100 in the MDCACHE section of
ganesha.conf and upgraded ganesha to 3.2 this weekend. Cache Pressure warnings
still keep occurring, but not as frequently as before. Is there another suggestion
I missed?
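For reference, the stanza as I set it (a sketch; formatting from memory):

    # ganesha.conf: cap the number of cached entries
    MDCACHE {
        Entries_HWMark = 100;
    }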
Regards
Felix