Folks,
Has anyone been using BlueStore with CephFS? If so, did you test with
ZetaScale vs RocksDB? Any install steps/best practices are appreciated.
PS: I still see that BlueStore is an "experimental feature". Is there any
timeline for when it will be GA/stable?
--
Deepak
Hi,
We have a minimal Ceph cluster setup: 1 admin node, 1 mon node and 2 OSDs.
All servers run CentOS 7.
Currently, while deploying the servers, we receive the errors below.
=
[root@admin-ceph ~]# ceph health detail
2017-02-13 16:14:49.652786 7f6b8c6b6700 0 --
Could one of the reporters open a tracker for this issue and attach
the requested debugging data?
On Mon, Feb 13, 2017 at 11:18 PM, Donny Davis wrote:
> I am having the same issue. When I looked at my idle cluster this morning,
> one of the nodes had 400% cpu utilization,
Hi Ceph experts,
after updating from Ceph 0.94.9 to Ceph 10.2.5 on Ubuntu 14.04, 2 out of 3 OSD
processes are unable to start. On another machine the same happened, but only on
1 out of 3 OSDs.
The update procedure is done via ceph-deploy 1.5.37.
It shouldn’t be a permissions problem, because
Hi:
I used nginx + FastCGI + radosgw, and configured radosgw with "rgw
print continue = true". RFC 2616 says an origin server that
sends a 100 (Continue) response MUST ultimately send a final status
code, once the request body is received and processed, unless it
terminates the
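For context: the usual workaround when the frontend (nginx + FastCGI here) does not relay 100-continue correctly is to disable it on the radosgw side. A minimal ceph.conf sketch; the section name is a placeholder for your gateway instance:

```ini
# Hypothetical gateway section name; adjust to your radosgw instance.
[client.radosgw.gateway]
# Do not send "100 Continue" responses, since the nginx/FastCGI frontend
# may not relay them to the client correctly.
rgw print continue = false
```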
> 2 active+clean+scrubbing+deep
* Set noscrub and nodeep-scrub
# ceph osd set noscrub
# ceph osd set nodeep-scrub
* Wait for scrubbing+deep to complete
* Do `ceph -s`
If you still see high CPU usage, please identify which process(es)
are consuming the CPU.
* ps aux | sort -rk 3,4 |
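A sketch of one way to complete that truncated pipeline; "head -n 5" is an assumption, any pager or line count works:

```shell
# Show the five processes consuming the most CPU (column 3) and
# memory (column 4); a plausible completion of the truncated
# pipeline above.
ps aux | sort -rk 3,4 | head -n 5
```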
Hi Wido,
no, I did not set any special flags. I used ceph-deploy without further
parameters, apart from the journal disk/partition that these OSDs should use.
Bernhard
Wido den Hollander wrote on Mon, 13 Feb 2017 at 17:47:
>
> > On 13 February 2017 at 16:49, … wrote:
OK, the Partition GUID code was the same as the Partition unique GUID. I used
`sudo sgdisk --new=1:0:+20480M --change-name=1:'ceph journal'
--partition-guid=1:$journal_uuid --typecode=1:$journal_uuid --mbrtogpt
-- /dev/sdk` to recreate my journal. However, the typecode part should be
the
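A sketch of the corrected invocation: the partition's *type* code should be Ceph's well-known journal GUID (it matches the "Partition GUID code" shown by `sgdisk --info` elsewhere in this thread), not the partition's unique GUID. The command is printed rather than run, since it rewrites the partition table of the placeholder device /dev/sdk:

```shell
# Ceph journal partition *type* GUID -- a fixed, well-known value.
JOURNAL_TYPE_GUID=45B0969E-9B03-4F30-B4C6-B4B80CEFF106
# Example unique GUID for this particular partition (value taken from
# the sgdisk --info output quoted in this thread).
journal_uuid=396A0C50-738C-449E-9FC6-B2D3A4469E51

# Build and print the corrected command instead of executing it.
cmd="sgdisk --new=1:0:+20480M --change-name=1:'ceph journal' \
--partition-guid=1:$journal_uuid --typecode=1:$JOURNAL_TYPE_GUID \
--mbrtogpt -- /dev/sdk"
echo "$cmd"
```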
> On 13 February 2017 at 16:49, "Bernhard J. M. Grün" wrote:
>
>
> Hi,
>
> we are using SMR disks for backup purposes in our Ceph cluster.
> We have had massive problems with those disks prior to upgrading to Kernel
> 4.9.x. We also dropped XFS as filesystem and
Hi,
we are using SMR disks for backup purposes in our Ceph cluster.
We have had massive problems with those disks prior to upgrading to Kernel
4.9.x. We also dropped XFS as the filesystem and now use btrfs (only for
those disks).
Since we made these changes we haven't had such problems anymore.
If you don't
Thanks for your quick responses.
While I was writing my answer, a rebalance was running because I had
started a new crush reweight to get rid of the old re-activated OSDs;
now that it has finished, the cluster is back in a healthy state.
Thanks,
Eugen
Quoting Gregory Farnum
Hi Piotr,
is your partition GUID right?
Look with sgdisk:
# sgdisk --info=2 /dev/sdd
Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown)
Partition unique GUID: 396A0C50-738C-449E-9FC6-B2D3A4469E51
First sector: 2048 (at 1024.0 KiB)
Last sector: 10485760 (at 5.0 GiB)
Partition
I run it on CentOS Linux release 7.3.1611. After running "udevadm test
/sys/block/sda/sda1" I don't see this rule being applied to this disk.
Hmm, I remember that it used to work properly, but some time ago I
retested journal disk recreation. I followed the same tutorial as the
one pasted here
On Mon, Feb 13, 2017 at 7:05 AM Wido den Hollander wrote:
>
> > On 13 February 2017 at 16:03, Eugen Block wrote:
> >
> >
> > Hi experts,
> >
> > I have a strange situation right now. We are re-organizing our 4 node
> > Hammer cluster from LVM-based OSDs to HDDs.
> On 13 February 2017 at 16:03, Eugen Block wrote:
>
>
> Hi experts,
>
> I have a strange situation right now. We are re-organizing our 4 node
> Hammer cluster from LVM-based OSDs to HDDs. When we did this on the
> first node last week, everything went smoothly, I removed
> On 13 February 2017 at 15:57, Peter Maloney wrote:
>
>
> Then you're not aware of what the SMR disks do. They are just slow for
> all writes, having to read the tracks around, then write it all again
> instead of just the one thing you really wanted to
Hi experts,
I have a strange situation right now. We are re-organizing our 4 node
Hammer cluster from LVM-based OSDs to HDDs. When we did this on the
first node last week, everything went smoothly: I removed the OSDs
from the crush map, and the rebalancing and recovery finished
Then you're not aware of what SMR disks do. They are simply slow for
all writes: due to the overlapping tracks, they have to read the
surrounding tracks and then write them all back, instead of just the
one thing you actually wanted to write. To partially mitigate this,
they have a small write buffer, something like 8GB of flash,
Hi,
I have an odd case with SMR disks in a Ceph cluster. Before I continue: yes, I
am fully aware that SMR and Ceph don't play along well, but something is
happening here which I'm not able to fully explain.
On a 2x replica cluster with 8TB Seagate SMR disks I can write at about
30MB/sec to
I am having the same issue. When I looked at my idle cluster this morning,
one of the nodes had 400% cpu utilization, and ceph-mgr was 300% of that.
I have 3 AIO nodes, and only one of them seemed to be affected.
On Sat, Jan 14, 2017 at 12:18 AM, Brad Hubbard wrote:
> Want
> On 13 February 2017 at 12:57, Muthusamy Muthiah wrote:
>
>
> Hi All,
>
> We also have same issue on one of our platforms which was upgraded from
> 11.0.2 to 11.2.0 . The issue occurs on one node alone where CPU hits 100%
> and OSDs of that node marked down.
Thanks for the response, Shinobu
The warning disappeared thanks to your suggested solution; however, the
nearly 100% CPU usage still exists and concerns me a lot.
Do you know why the CPU usage is so high?
Are there any solutions or suggestions for this problem?
Cheers
-----Original Message-----
From: Shinobu
On 2017-02-13 13:47, Wido den Hollander wrote:
>
> The udev rules of Ceph should chown the journal to ceph:ceph if it's set to
> the right partition UUID.
>
> This blog shows it partially:
> http://ceph.com/planet/ceph-recover-osds-after-ssd-journal-failure/
>
> This is done by
Hi All,
We also have the same issue on one of our platforms, which was upgraded from
11.0.2 to 11.2.0. The issue occurs on one node only, where CPU hits 100%
and the OSDs of that node are marked down. The issue is not seen on a cluster
installed from scratch with 11.2.0.
> On 13 February 2017 at 12:06, Piotr Dzionek wrote:
>
>
> Hi,
>
> I am running ceph Jewel 10.2.5 with separate journals - ssd disks. It
> runs pretty smooth, however I stumble upon an issue after system reboot.
> Journal disks become owned by root and ceph failed
Hi,
What is your OS? The permissions of the journal partition should be set by the
udev rules in /lib/udev/rules.d/95-ceph-osd.rules.
In this file, the rule is described as:
# JOURNAL_UUID
ACTION=="add", SUBSYSTEM=="block", \
ENV{DEVTYPE}=="partition", \
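For reference, the full journal rule looks roughly like this (paraphrased from memory, so check the copy shipped with your Ceph release; the type GUID matches the one quoted elsewhere in this thread):

```
# JOURNAL_UUID
ACTION=="add", SUBSYSTEM=="block", \
  ENV{DEVTYPE}=="partition", \
  ENV{ID_PART_ENTRY_TYPE}=="45b0969e-9b03-4f30-b4c6-b4b80ceff106", \
  OWNER:="ceph", GROUP:="ceph", MODE:="660"
```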
Hi,
I am running Ceph Jewel 10.2.5 with separate journals on SSD disks. It
runs pretty smoothly; however, I stumbled upon an issue after a system reboot.
The journal disks become owned by root and Ceph fails to start.
starting osd.4 at :/0 osd_data /var/lib/ceph/osd/ceph-4
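A common stopgap, assuming the udev rules are not firing: give the ceph user ownership of the journal partition, then restart the OSD. /dev/sdb1 and osd.4 are placeholders for your journal partition and OSD id; the commands are printed rather than executed, since they need root and a real journal device:

```shell
# Temporary workaround until the udev/partition-typecode issue is fixed;
# printed only, because chown/systemctl here need root and real devices.
workaround='chown ceph:ceph /dev/sdb1
systemctl start ceph-osd@4'
printf '%s\n' "$workaround"
```

Note this does not survive a reboot; the durable fix is making the udev rule match (correct partition typecode).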
On 13-2-2017 04:22, Alex Gorbachev wrote:
> Hello, with the preference for IT mode HBAs for OSDs and journals,
> what redundancy method do you guys use for the boot drives. Some
> options beyond RAID1 at hardware level we can think of:
>
> - LVM
>
> - ZFS RAID1 mode
Since it is not quite Ceph,
On Mon, Feb 13, 2017 at 10:53 AM, Shinobu Kinjo wrote:
> O.k, that's reasonable answer. Would you do on all hosts which the MON
> are running on:
>
> # ceph --admin-daemon /var/run/ceph/ceph-mon.`hostname -s`.asok
> config show | grep leveldb_log
>
> Anyway you can compact
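The truncated suggestion above presumably refers to compacting the monitor's leveldb store. "mon.a" is a placeholder for your monitor id (often `hostname -s`); printed rather than executed, since it needs a running cluster:

```shell
# Ask a monitor to compact its backing store online; "mon.a" is a
# placeholder monitor id. Printed only, as it needs a live cluster.
compact_cmd='ceph tell mon.a compact'
echo "$compact_cmd"
# Alternatively, "mon compact on start = true" in ceph.conf compacts
# the store every time the monitor starts.
```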