I don’t know about the memory, but your CPUs would be overkill. What would
you need 20 cores (40 threads) for?
When using 2 sockets I would go for 2 memory modules. Does it even work with
just 1 module?
Regards,
Sinan
> On 21 Nov 2018, at 22:30, Georgios Dimitrakakis
> wrote the following:
Hello,
I would like to see people's opinion about memory configurations.
Would you prefer 2x8GB over 1x16GB or the opposite?
In addition, what are the latest memory recommendations? Should we
keep the rule of thumb of 1GB per TB,
or now with Bluestore things have changed?
I am planning
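For BlueStore, the per-OSD memory budget is mostly governed by the osd_memory_target option (roughly 4 GiB per OSD by default in recent releases) rather than the old 1GB-per-TB rule. A minimal sketch for inspecting and adjusting it; osd.0 is a placeholder id and the commands assume Mimic-era ceph with admin access:

```shell
# Check the effective memory target on a running OSD (osd.0 is a placeholder):
ceph daemon osd.0 config get osd_memory_target

# Lower it, e.g. to 3 GiB on memory-constrained nodes
# (uses the centralized config database available in Mimic and later):
ceph config set osd osd_memory_target 3221225472
```

Older releases without the config database would set the same option in ceph.conf instead.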
Great support, Igor! Both thumbs up! We will try to build the tool
today and expand those bluefs devices once again.
On 11/20/18 at 6:54 PM, Igor Fedotov wrote:
FYI: https://github.com/ceph/ceph/pull/25187
On 11/20/2018 8:13 PM, Igor Fedotov wrote:
On 11/20/2018 7:05 PM, Florian
Actually (given that your devices are already expanded) you don't need
to expand them again: you can just update the size labels with my new PR.
For new migrations you can use the updated bluefs expand command, which
sets the size label automatically.
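The two operations above can be sketched with ceph-bluestore-tool; the OSD path is a placeholder and the OSD must be stopped first:

```shell
# Expand BlueFS onto the newly grown device; newer builds also update
# the size label automatically (run against a stopped OSD):
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0

# Verify the recorded size labels afterwards:
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0
```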
Thanks,
Igor
On 11/21/2018 11:11 AM,
Hi,
Answering my own question: the high load was related to the cpufreq kernel
module. After unloading the cpufreq module, the CPU load instantly dropped
and the mirroring started to work.
Obviously there is a bug somewhere, but for the moment I’m just happy it
works.
/Magnus
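For anyone hitting the same symptom, a sketch for inspecting and unloading the frequency-scaling governor; the exact module name (cpufreq_ondemand here) varies by kernel and is an assumption:

```shell
# Inspect which cpufreq driver/governor is active (standard sysfs paths):
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Unload the governor module; the name depends on the kernel/distribution:
modprobe -r cpufreq_ondemand
```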
On Thu, 15 Nov 2018
Hi,
I was wondering whether it is a good idea to simply move the disk of an OSD
to another node.
The prerequisite is that the FileStore journal or, for BlueStore, the
RocksDB and WAL are located on the same device.
I have tested this move on a virtual ceph cluster and it seems to work.
Set noout, stopped the
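The procedure described above can be sketched as follows; the OSD id 12 is a placeholder and this assumes a ceph-volume (LVM) deployment:

```shell
# Prevent rebalancing while the disk is in transit:
ceph osd set noout

# On the old node: stop the OSD before pulling the disk (id is a placeholder):
systemctl stop ceph-osd@12

# ...physically move the disk to the new node...

# On the new node: scan and start any OSDs found on attached devices:
ceph-volume lvm activate --all

# Once the OSD reports up again, allow rebalancing:
ceph osd unset noout
```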
Hi all,
it's not the first time we have had this kind of problem, usually with HP
RAID controllers:
1. One disk is failing, bringing the whole controller into a slow state
where its performance degrades dramatically
2. Some OSDs are reported as down by other OSDs and marked as down
3. At same time other OSDs on
Hi Igor,
sad to say but I failed building the tool. I tried to build the whole
project like documented here:
http://docs.ceph.com/docs/mimic/install/build-ceph/
But since my workstation is running Ubuntu, the binary fails on SLES:
./ceph-bluestore-tool --help
./ceph-bluestore-tool: symbol
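A symbol error like this usually means the binary was linked against newer glibc/library versions than the target host provides. One hedged workaround is to build inside a container matching the target distribution; the image tag and dependency step below are assumptions, not a tested recipe:

```shell
# Build ceph-bluestore-tool inside an openSUSE/SLES-like container so the
# runtime symbols match the target host (image tag is an assumption):
docker run --rm -v "$PWD/ceph:/ceph" -w /ceph opensuse/leap bash -c '
  ./install-deps.sh            # pulls in the build dependencies
  ./do_cmake.sh
  cd build && make -j"$(nproc)" ceph-bluestore-tool
'
```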
Hi guys, maybe someone can help me.
I'm new to CephFS and I was testing the installation of Ceph Mimic with
ceph-deploy on 2 Ubuntu 16.04 nodes.
These two nodes have 6 OSD disks each.
I've installed CephFS and 2 MDS services.
The problem is that I copied a lot of data (15 million small files)
Hi,
On 11/21/2018 07:04 PM, Rodrigo Embeita wrote:
> Reduced data availability: 7 pgs inactive, 7 pgs down
this is your first problem: unless you have all data available again,
cephfs will not be back.
after that, I would take care about the redundancy next, and get the one
missing
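To chase down what is holding those PGs inactive/down, a sketch of the usual triage commands; the pg id 1.2f is a placeholder:

```shell
# Summarize which PGs are unhealthy and why:
ceph health detail

# List PGs stuck in the inactive state:
ceph pg dump_stuck inactive

# Query one PG in depth; look at "recovery_state" and "blocked_by"
# to see which OSDs it is waiting for (pg id is a placeholder):
ceph pg 1.2f query
```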
On Tue, Nov 20, 2018 at 6:18 PM 楼锴毅 wrote:
> Hello
> Yesterday I upgraded my cluster to v12.2.9, but the mons still failed for
> the same reason. And when I ran 'ceph versions', it returned
> "
> "mds": {
> "ceph version 12.2.8 (ae699615bac534ea496ee965ac6192cb7e0e07c0)
> luminous
Yeah, we also observed problems with HP RAID controllers misbehaving
when a single disk starts to fail. We would never recommend building a
Ceph cluster on HP RAID controllers until they fix that issue.
There are several features in Ceph which detect dead disks: there are
timeouts for OSDs
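The timeouts mentioned above are tunable; a sketch showing the two most relevant knobs (the values shown are the usual defaults, and `ceph config set` assumes a Mimic-era config database; older releases set these in ceph.conf):

```shell
# Seconds without heartbeats before peers report an OSD as down (default 20):
ceph config set osd osd_heartbeat_grace 20

# Seconds a down OSD waits before being marked out and data re-replicated
# (default 600):
ceph config set mon mon_osd_down_out_interval 600
```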