Nov 16, 2023 at 3:21 AM Xiubo Li wrote:
> >
> > Hi Matt,
> >
> > On 11/15/23 02:40, Matt Larson wrote:
> > > On CentOS 7 systems with the CephFS kernel client, if the data pool
> > > has a `nearfull` status there is a slight reduction i
ion or to have
behavior more similar to the CentOS 7 CephFS clients?
Do different OSes or Linux kernels respond in greatly different ways, or
limit IOPS differently? Are there any options to adjust how they limit
IOPS?
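For context, the threshold itself is simple ratio math; a minimal Python sketch, assuming Ceph's default mon_osd_nearfull_ratio of 0.85 (the client-side throttling behavior is kernel-dependent and not modeled here):

```python
# Minimal sketch of the nearfull threshold math. NEARFULL_RATIO mirrors
# Ceph's default mon_osd_nearfull_ratio (0.85); adjust for your cluster.
NEARFULL_RATIO = 0.85

def is_nearfull(used_bytes: int, total_bytes: int,
                ratio: float = NEARFULL_RATIO) -> bool:
    """True once usage crosses the nearfull threshold."""
    return used_bytes / total_bytes >= ratio

print(is_nearfull(900, 1000))  # True: 90% used is past 85%
print(is_nearfull(500, 1000))  # False
```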
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
"ssd"-named
> OSDs to land on, and move themselves if possible. It is a fairly safe
> operation: the OSDs keep working, but will try to evacuate the PGs that
> should not be there.
>
> Worst case, your planning is wrong, and the "ssd" OSDs c
> <https://www.youtube.com/watch?v=w91e0EjWD6E>
>
> On Oct 24, 2023, at 11:42, Matt Larson wrote:
>
> I am looking to create a new pool that woul
Should they be moved one by one? What is
the way to safely protect the data in the existing pool that they are
mapped to?
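One common pattern (my assumption, not something prescribed in this thread) is to drain OSDs gradually by lowering their CRUSH weight in steps, waiting for the cluster to settle between steps, rather than remapping everything at once. A toy schedule generator:

```python
def reweight_schedule(start: float, steps: int) -> list:
    """Evenly spaced CRUSH weights stepping down from start to 0.0."""
    return [round(start * (steps - i) / steps, 2) for i in range(1, steps + 1)]

# Each weight would be applied with `ceph osd crush reweight osd.N <w>`,
# waiting for backfill to finish (HEALTH_OK) before the next step.
print(reweight_schedule(1.0, 4))  # [0.75, 0.5, 0.25, 0.0]
```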
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
and keep it down? There is an example to stop the OSD on the server using
systemctl, outside of cephadm:

ssh {osd-host}
sudo systemctl stop ceph-osd@{osd-num}
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
> Maybe someone else can help here?
>
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> ____
> From: Matt Larson
> Sent: 04 December 2022 02:00:11
> To: Eneko Lacunza
> Cc: Frank S
the OSDs but keep the IDs
> intact (ceph osd destroy). Then, no further re-balancing will happen and you
> can re-use the OSD ids later when adding a new host. That's a stable
> situation from an operations point of view.
>
> Hope that helps.
>
> Best regards,
> ======
istent? Will
this be problematic?
Thanks for any advice,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
this while
the host is offline, or should I bring it online first before setting
weights or using `ceph orch osd rm`?
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
ral minor versions behind?
ceph orch upgrade start --ceph-version 15.2.13
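To reason about how far behind a target release is, tuple comparison on the parsed version string works; a small sketch (the version strings below are examples only):

```python
def parse_ver(v: str) -> tuple:
    """Split 'x.y.z' into an integer tuple so versions compare correctly."""
    return tuple(int(part) for part in v.split("."))

current, target = "15.2.8", "15.2.13"
# Plain string comparison would order these wrongly ("15.2.13" < "15.2.8").
print(parse_ver(current) < parse_ver(target))  # True
```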
-Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
, the MON was able to get back in the quorum
with the other monitors
I think the issue was that when the MON was out of quorum, the ceph client
could no longer connect, since it had only that MON as an option.
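A toy illustration of the failure mode (not the real Ceph client logic): a client configured with several monitor addresses can fail over, while one that knows only a single out-of-quorum MON has nothing to fall back on. `reachable` here is a stand-in for a real connection attempt:

```python
def pick_mon(mons, reachable):
    """Return the first monitor that answers, or None if none do."""
    for mon in mons:
        if reachable(mon):
            return mon
    return None

down = {"mon1"}  # pretend mon1 is out of quorum
print(pick_mon(["mon1", "mon2", "mon3"], lambda m: m not in down))  # mon2
print(pick_mon(["mon1"], lambda m: m not in down))                  # None
```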
Problem is solved -
-Matt
On Mon, Jun 14, 2021 at 2:07 PM Matt Larson wrote:
>
and
containerized daemons.
How can I restore the ability to connect with the command-line `ceph`
client to check the status and all other interactions?
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
-latest-storage-reliability-figures-add-ssd-boot.html
).
Are there any major caveats to consider when working with larger SSDs for
data pools?
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
> From: Matt Larson
> Sent: 12 November 2020 00:40:21
> To: ceph-users
> Subject: [ceph-users] Unable to clarify error using vfs_ceph (Samba gateway
> for CephF
ust the Samba smb.conf,
but also what should be in /etc/ceph/ceph.conf, and how to provide the
key for the ceph:user_id ? I am really struggling to find good
first-hand documentation for this.
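For what it's worth, a minimal smb.conf sketch, assuming a cephx user `client.samba`; option names follow the vfs_ceph(8) manpage, while the share name and path are placeholders:

```ini
[cephfs-share]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no
    kernel share modes = no
```

The key itself is not placed in smb.conf: libcephfs reads it from a keyring (e.g. /etc/ceph/ceph.client.samba.keyring) found via ceph.conf or the default keyring search paths.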
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
anually to inspect this? Maybe put it in the
> manual[1]?
>
>
> [1]
> https://docs.ceph.com/en/latest/ceph-volume/lvm/activate/
>
splaced ratio).
> > > You can check this by running "ceph osd pool ls detail" and check for
> > > the value of pg target.
> > >
> > > Also: Looks like you've set osd_scrub_during_recovery = false; this
> > > setting can be annoying on large erasure-coded setups on HDDs that see
> > > long recovery times. It's better to get IO priorities right; search the
> > > mailing list for "osd op queue cut off high".
> > >
> > > Paul
> >
> > --
> > Dr Jake Grimmett
> > Head Of Scientific Computing
> > MRC Laboratory of Molecular Biology
> > Francis Crick Avenue,
> > Cambridge CB2 0QH, UK.
> >
> >
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
ate --bluestore --data /dev/sd --block.db /dev/nvme0n1`
Is there a workaround for this problem where the container process is
unable to read the label of the LVM partition and fails to start the
OSD?
Thanks,
Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
up.
The commands are from
(http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-January/023844.html)
-Matt
On Mon, Sep 21, 2020 at 7:20 PM Matt Larson wrote:
>
> Hi Wout,
>
> None of the OSDs are greater than 20% full. However, only 1 PG is
> backfilling at a time, w
once the PGs are "active+clean"
>
> Kind regards,
>
> Wout
> 42on
>
>
> From: Matt Larson
> Sent: Monday, September 21, 2020 6:22 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Troubleshooting stuck unclean PGs?
er using `cephadm` tools? At last check, a
server running 16 OSDs and 1 MON is using 39G of disk space for its
running containers. Can restarting containers help to start with a
fresh slate or reduce the disk use?
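As a generic way to see where that space is going (this is plain filesystem accounting, not a cephadm feature; the path below is an example), one can walk the container directory tree:

```python
import os

def dir_size_bytes(root: str) -> int:
    """Sum sizes of all regular files below root, e.g. a container dir."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skip broken symlinks etc.
                total += os.path.getsize(path)
    return total

# Example: dir_size_bytes("/var/lib/ceph") on the affected host.
```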
Thanks,
Matt
Matt Larson
Associate Scientist
Computer Scientist/Sy
.noarch.rpm)
- Python version 3.6.8
Any suggestions? I am wondering if this could require Python 2.7 to run?
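A quick way to rule out a Python 2 vs 3 mismatch is to check which interpreter the tool actually runs under; a tiny sketch:

```python
import sys

# Prints the interpreter's (major, minor), e.g. (3, 6) for Python 3.6.8.
print(sys.version_info[:2])
print(sys.version_info.major == 3)  # False would mean Python 2 is running
```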
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.
ings I have
> "client X is failing to respond to cache pressure."
> Besides that there are no complaints, but I think you would need the 256GB
> of RAM, especially if the datasets will increase... just my 2 cents.
>
> Will you have SSD ?
>
>
>
> On Fri, Feb 7, 2020
processing of the images.
Thanks!
-Matt
--
Matt Larson, PhD
Madison, WI 53705 U.S.A.