Hi,
as I am planning to set up a Ceph cluster with 6 OSD nodes, each with 10
hard disks, could you please give me some advice on hardware selection?
CPU? RAM?
I am planning a 10 GBit/s public and a separate 10 GBit/s private network.
For a smaller test cluster with 5 OSD nodes and 4
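A minimal ceph.conf sketch for the two-network layout described above (the subnets here are placeholders, not from the original post):
[global]
public network = 192.168.10.0/24
cluster network = 192.168.20.0/24
With this in place, client traffic uses the public 10 GBit/s network while replication and recovery traffic stays on the separate cluster network.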
Hi,
for testing I would like to create some OSDs on the hammer release with
journal size 0.
I included this in ceph.conf:
[osd]
osd journal size = 0
Then I zapped the disk in question and tried:
'ceph-deploy disk zap o1:sda'
Thank you for your advice on how to prepare an OSD without a journal /
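For reference, the usual ceph-deploy sequence on that release would be something like the following (host and device names as in the post, the data partition name after prepare is an assumption):
ceph-deploy disk zap o1:sda
ceph-deploy osd prepare o1:sda
ceph-deploy osd activate o1:sda1
Whether prepare actually accepts a journal size of 0 is the open question here.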
Hi,
can you please help me with a question I am currently thinking about?
I am considering an OSD node design with a mixture of SATA-spinner-based
OSD daemons and SSD-based OSD daemons.
Is it possible to have incoming write traffic go to the SSD first and
then when write traffic is becoming
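One way to get that kind of behaviour is a cache tier in writeback mode in front of the spinner pool; a rough sketch with placeholder pool names and sizes (sata-pool, ssd-cache) would be:
ceph osd tier add sata-pool ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay sata-pool ssd-cache
ceph osd pool set ssd-cache target_max_bytes 500000000000
Writes then land on the SSD pool first and are flushed down to the SATA pool as the cache fills.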
Hi,
sorry, the question might seem very easy, probably my bad, but can you
please help me understand why I am unable to change the read-ahead size
and other options when mounting CephFS?
mount.ceph m2:6789:/ /foo2 -v -o name=cephfs,secret=,rsize=1024000
the result is:
ceph: Unknown mount option rsize
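For what it's worth, on kernels whose CephFS client supports it, the read-ahead knob is rasize (in bytes) rather than rsize, e.g. (same mount as above, the value is only an example):
mount.ceph m2:6789:/ /foo2 -v -o name=cephfs,secret=,rasize=16777216
If the kernel is too old it will likely reject rasize in the same way.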
Hi,
is there a way to debug / monitor the osd journal usage?
Thanks and regards,
Mike
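One place to look is the OSD admin socket; a sketch (OSD id is just an example, run on that OSD's host) that pulls the journal-related perf counters:
ceph daemon osd.14 perf dump | python -m json.tool | grep -i journal
Depending on the release this should show counters such as journal_queue_bytes and journal_latency.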
Hi,
can someone please help me with this error?
$ ceph tell mds.0 version
Error EPERM: problem getting command descriptions from mds.0
'tell' is not working for me on the MDS.
Version: infernalis - trusty
Thanks and regards,
Mike
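One thing worth checking (a guess, not a confirmed cause) is whether the key being used carries any mds caps at all:
ceph auth get client.admin
If the caps output has no mds entry, a tell directed at an MDS may come back with EPERM on this release.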
Hi,
in my cluster with 16 OSD daemons and more than 20 million files on
CephFS, the memory usage on the MDS is around 16 GB. It seems that 'mds
cache size' has no real influence on the memory usage of the MDS.
Is there a formula that relates 'mds cache size' directly to memory
consumption on
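As a back-of-the-envelope estimate only (the per-inode cost is a rule of thumb, not a documented figure): memory is roughly mds cache size times about 2 KB per cached inode/dentry, so a cache size of 4,000,000 would suggest something on the order of 8 GB, with real-world usage often noticeably higher.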
ays about stability issues.
Is more than one MDS considered stable enough with hammer?
Thanks and regards,
Mike
On 11/25/15 12:51 PM, Gregory Farnum wrote:
On Tue, Nov 24, 2015 at 10:26 PM, Mike Miller <millermike...@gmail.com> wrote:
Hi,
in my cluster with 16 OSD daemons and more than 2
Hi,
what is the meaning of the directory "current.remove.me.846930886" in
/var/lib/ceph/osd/ceph-14?
Thanks and regards,
Mike
Hi,
in case of a failure in the storage tier, say a single OSD disk failure or
a complete system failure with several OSD disks, will the remaining cache
tier (on other nodes) be used for rapid backfilling/recovery first
until it is full? Or is backfill/recovery done directly to the storage
Hi Dietmar,
it all depends on how many inodes have caps on the MDS. I have run a very
similar configuration with 0.5 TB raw and about 200 million files, with the
mds collocated with a mon and 32 GB RAM.
When rsyncing files from other servers onto CephFS I have observed that
the mds sometimes runs out of
Hi,
can someone report their experiences with the PMC Adaptec HBA 1000
series of controllers?
https://www.adaptec.com/en-us/smartstorage/hba/
Thanks and regards,
Mike
was introduced in 4.4, so I'm not sure if it's worth
trying a slightly newer kernel?
*From:* Mike Miller <millermike...@gmail.com>
*Sent:* 21 Apr 2016 2:20 pm
*To:
%) but not
much.
I also found this info
http://tracker.ceph.com/issues/9192
Maybe Ilya can help us; he probably knows best how this can be improved.
Thanks and cheers,
Mike
On 4/21/16 4:32 PM, Udo Lembke wrote:
Hi Mike,
On 21.04.2016 at 09:07, Mike Miller wrote:
Hi Nick and Udo,
thanks
osd_disk_threads = 1
osd_enable_op_tracker = false
osd_op_num_shards = 10
osd_op_num_threads_per_shard = 1
osd_op_threads = 4
Udo
On 19.04.2016 11:21, Mike Miller wrote:
Hi,
RBD mount
ceph v0.94.5
6 OSD with 9 HDD each
10 GBit/s public and private networks
3 MON nodes 1G
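One knob that is often tried for sequential RBD reads in a setup like this (the device name is just an example) is the block-layer read-ahead on the client:
echo 4096 > /sys/block/rbd0/queue/read_ahead_kb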
Hi,
we have started to migrate user homes to CephFS with an MDS server with 32 GB
RAM. With multiple rsync threads copying, this seems to be undersized;
the MDS process consumes all 32 GB of memory, fitting about 4 million caps.
Any hardware recommendation for about 40 million files and about 500
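If the goal is simply to cap how much the MDS tries to hold, note that in these releases 'mds cache size' counts inodes; a ceph.conf sketch (the value is only an illustration based on the 4 million caps observed above):
[mds]
mds cache size = 4000000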
Hi Greg,
thanks, highly appreciated. And yes, that was on an osd with btrfs. We
switched back to xfs because of btrfs instabilities.
Regards,
-Mike
On 6/27/16 10:13 PM, Gregory Farnum wrote:
On Sat, Jun 25, 2016 at 11:22 AM, Mike Miller <millermike...@gmail.com> wrote:
Hi,
typing errors.
Hi,
don't go there, we tried this with SMR drives, which will slow down to
somewhere around 2-3 IOPS during backfilling/recovery and that renders
the cluster useless for client IO. Things might change in the future,
but for now, I would strongly recommend against SMR.
Go for normal SATA
Are there other alternatives to this suggested configuration?
I am a little paranoid about starting to play around with crush rules
in the running system.
Regards,
Mike
On 1/5/17 11:40 PM, jiajia zhong wrote:
2017-01-04 23:52 GMT+08:00 Mike Miller <millermike...@gmail.com>:
t parameters and see if you can change
the file layout to get more parallelization.
https://github.com/ceph/ceph/blob/master/doc/dev/file-striping.rst
https://github.com/ceph/ceph/blob/master/doc/cephfs/file-layouts.rst
Regards,
Eric
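To inspect and adjust layouts in practice, the virtual xattrs can be used; a sketch with placeholder paths:
getfattr -n ceph.file.layout /mnt/cephfs/bigfile
setfattr -n ceph.dir.layout.stripe_count -v 8 /mnt/cephfs/testdir
Note that the layout of a file that already contains data cannot be changed; set the layout on a directory and re-create the file inside it.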
On Sun, Nov 20, 2016 at 3:24 AM, Mike Miller <millermike..
mounted successfully,
run "mount" in a terminal to check the actual mount options in mtab.
-- Original --
*From: * "Mike Miller"<millermike...@gmail.com>;
*Date: * Wed, Nov 23, 2016 02:38 PM
*To: * "Eric Eastman"<eric.east...@keepert
?
Regards,
Mike
Regards,
Eric
On Sun, Nov 20, 2016 at 3:24 AM, Mike Miller <millermike...@gmail.com> wrote:
Hi,
reading a big file 50 GB (tried more too)
dd if=bigfile of=/dev/zero bs=4M
in a cluster with 112 SATA disks in 10 osd (6272 pgs, replication 3) gives
me only about *122
John,
thanks for emphasizing this; before this workaround we tried many
different kernel versions, including 4.5.x, all the same. The problem
might be particular to our environment, as most of the client machines
(compute servers) have large RAM, so plenty of cache space for
inodes/dentries.
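One client-side knob that can be experimented with in this situation (no guarantee it addresses the underlying problem) is how aggressively the kernel reclaims dentries/inodes:
sysctl -w vm.vfs_cache_pressure=200
Values above the default of 100 make the kernel drop cached inodes/dentries sooner, which should also let the client release the corresponding caps back to the MDS.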
Hi,
you need to flush all caches before starting read tests. With fio you
can probably do this if you keep the files that it creates.
As root on all clients and all OSD nodes run:
echo 3 > /proc/sys/vm/drop_caches
But fio is a little problematic for ceph because of the caches in the
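A sketch of a matching fio run that keeps its files so they can be re-read after a drop_caches (the mount point is a placeholder):
fio --name=seqread --directory=/mnt/cephfs --rw=read --bs=4M --size=20G --numjobs=4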
Hi,
some time ago, when starting a Ceph evaluation cluster, I used SSDs with
similar specs. I would strongly recommend against it; during normal
operation things might be fine, but wait until the first disk fails and
things have to be backfilled.
If you still try, please let me know how
Hi,
you have given up too early. rsync is not a nice workload for CephFS; in
particular, with most Linux kernel clients CephFS will end up caching all
inodes/dentries. The result is that MDS servers crash due to memory
limitations. And rsync basically scans all inodes/dentries, so it is
the
Hi,
Happy New Year!
Can anyone point me to specific walkthrough / howto instructions on how to
move the CephFS metadata to SSD in a running cluster?
How is crush to be modified, step by step, so that the metadata migrates
to SSD?
Thanks and regards,
Mike
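A rough sketch of the crush side, assuming the SSD OSDs are already grouped under a separate root (here called ssd-root; names, the pool name and the ruleset number are placeholders):
ceph osd crush rule create-simple ssd-rule ssd-root host
ceph osd crush rule dump ssd-rule
ceph osd pool set cephfs_metadata crush_ruleset 3
The last command points the metadata pool at the new rule; the metadata objects then migrate to the SSD OSDs via backfill.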
7 10:49 AM, Wido den Hollander wrote:
On 3 January 2017 at 2:49, Mike Miller <millermike...@gmail.com> wrote:
will metadata on SSD improve latency significantly?
No, as I said in my previous e-mail, recent benchmarks showed that storing
CephFS metadata on SSD does not improve performance
CephFS metadata on
SSD doesn't really improve performance though.
Wido
On Mon, Jan 2, 2017 at 2:36 PM, Mike Miller <millermike...@gmail.com> wrote:
Hi,
Happy New Year!
Can anyone point me to specific walkthrough / howto instructions how to move
cephfs metadata to SSD in a running c