SST file in RocksDB to be able
to completely blow everything else out of the block cache during
compaction, only to quickly become invalid, be removed from the cache,
and make it look to the priority cache system like RocksDB doesn't
actually need any more memory for cache.
Mark
On 8/7/19 7:44
M per osd (1 GB of RAM per 1 TB).
So in this case 48 GB of RAM would be needed. Am I right?
Are these the minimum requirements for BlueStore?
In case adding more RAM is not an option, can any of
osd_memory_target, osd_memory_cache_min, bluestore_cache_size_hdd
be decreased to fit our server specs?
Would this ha
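The arithmetic behind the question can be checked quickly. The OSD count and drive size below are hypothetical (not stated in the thread), chosen only to reproduce the 48 GB figure:

```shell
# Rule of thumb from above: roughly 1 GB of RAM per 1 TB of OSD storage.
# Hypothetical layout: 12 OSDs with 4 TB drives each.
osds=12
tb_per_osd=4
ram_gb=$((osds * tb_per_osd))
echo "${ram_gb} GB of RAM needed"   # 48 GB of RAM needed
```

Note that osd_memory_target is a per-daemon value, so the total RAM budget is roughly the per-OSD target multiplied by the number of OSDs on the host, plus headroom for the OS and other daemons.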
Paul
On Tue, 2 Oct 2018 at 10:45, Jaime Ibar wrote:
Hi Paul,
we're using the 4.4 kernel. Not sure if more recent kernels are stable
enough for production services. In any case, as there are some production
services running on those servers, rebooting wouldn't be an option
if
On 01/10/18 21:10, Paul Emmerich wrote:
Which kernel version are you using for the kernel cephfs clients?
I've seen this problem with "older" kernels (where old is as recent as 4.9)
Paul
On Mon, 1 Oct 2018 at 18:35, Jaime Ibar wrote:
Hi all,
we're running a ceph 12.2.7 Luminous
daemon socket in case of ceph-fuse, or the
debug information in /sys/kernel/debug/ceph/... for the kernel client.
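As a quick reference for the two places just mentioned, the snippet below only prints the commands to run rather than querying a live client; the admin socket path and debugfs glob are assumed defaults, not verified here:

```shell
# Where to look for stuck CephFS requests.
# ceph-fuse: ask the client's admin socket; kernel client: debugfs.
# Both paths are assumed default locations.
echo "ceph-fuse: ceph daemon /var/run/ceph/ceph-client.*.asok mds_requests"
echo "kernel:    cat /sys/kernel/debug/ceph/*/mdsc"
```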
Regards,
Burkhard
On 01.10.2018 18:34, Jaime Ibar wrote:
Hi all,
we're running a ceph 12.2.7 Luminous cluster; two weeks ago we
enabled multi-MDS and after a few hours
thes
8] ceph: mds1 recovery completed
Not sure what else we can try to bring hanging clients back without
rebooting, as they're in production and rebooting is not an option.
Does anyone know how we can deal with this, please?
Thanks
Jaime
--
Jaime Ibar
High Performance & Research Computing, IS Services
Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
http://www.tchpc.tcd.ie/ | ja...@tchpc.tcd.ie
Tel: +353-1-896-3725
impact on clients during migration, I would set the OSD's
primary-affinity to 0 beforehand. This should prevent the slow requests;
at least this setting has helped us a lot with problematic OSDs.
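A minimal sketch of the sequence Eugen describes. The OSD id is hypothetical, and the script echoes the commands instead of running them against a cluster:

```shell
# Drop the OSD's primary affinity before migrating it, restore afterwards.
# osd.12 is an assumed id; commands are echoed for illustration only.
osd=12
echo "ceph osd primary-affinity osd.${osd} 0"
# ... migrate the OSD to BlueStore here ...
echo "ceph osd primary-affinity osd.${osd} 1"
```

With primary affinity at 0, clients read from other replicas, so slow requests on the OSD being drained are far less visible.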
Regards
Eugen
Zitat von Jaime Ibar :
Hi all,
we recently upgraded from Jewel 10.2.10 to Luminous
network
Does anyone know how to fix or where the problem could be?
Thanks a lot in advance.
Jaime
[0] http://docs.ceph.com/docs/luminous/rados/operations/bluestore-migration/
to the latest minor release before upgrading major versions, but
my own migration from 10.2.10 to 12.2.5 went seamlessly and I can't
see any technical limitation which would hinder or prevent this
process.
Kind Regards,
Tom
From: ceph-users On Behalf Of Jaime Ibar
Sent: 14 August 20
upgrading to Jewel 10.2.11, we wonder if it would be possible to skip this Jewel
release and
upgrade directly to Luminous 12.2.7.
Thanks
Jaime
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Thanks,
Riccardo
Nope, all osds are running 0.94.9
On 28/03/17 14:53, Brian Andrus wrote:
Well, you said you were running v0.94.9, but are there any OSDs
running pre-v0.94.4 as the error states?
On Tue, Mar 28, 2017 at 6:51 AM, Jaime Ibar <ja...@tchpc.tcd.ie> wrote:
On 28/03/17 14:
ones are running Hammer
Thanks
On Tue, Mar 28, 2017 at 1:21 AM, Jaime Ibar <ja...@tchpc.tcd.ie> wrote:
Hi,
I did change the ownership to user ceph. In fact, OSD processes
are running
ps aux | grep ceph
ceph     2199  0.0  2.7 1729044 918792 ?  Ssl  Ma
running. If you didn't change
the ownership to user ceph, they won't start.
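The ownership change referred to above can be sketched like this; the data path is the packaged default (assumed here), and the command is echoed rather than executed:

```shell
# Jewel daemons run as the "ceph" user, so the mon/osd data directories
# must be owned by it. /var/lib/ceph is the assumed default location.
echo "chown -R ceph:ceph /var/lib/ceph"
```

On large OSDs the recursive chown can take a long time, which is worth planning for during the upgrade window.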
On Mar 27, 2017, at 11:53, Jaime Ibar wrote:
Hi all,
I'm upgrading the ceph cluster from Hammer 0.94.9 to Jewel 10.2.6.
The ceph cluster has 3 servers (one mon and one mds each) and another 6 servers
with
12
doesn't mark it as up and the cluster health remains
in a degraded state.
Do I have to upgrade all the osds to Jewel first?
Any help? I'm running out of ideas.
Thanks
Jaime