Hi, just double-checked the stack trace and I can confirm it is the same
as the one in the tracker.
Compaction also worked for me; I can now mount CephFS without problems.
Thanks for the help,
Serkan
On Tue, Aug 13, 2019 at 6:44 PM Ilya Dryomov wrote:
>
> On Tue, Aug 13, 2019 at 4:30 PM Serkan Çoban
I am out of the office right now, but I am pretty sure it was the same
stack trace as the one in the tracker.
I will confirm tomorrow.
Any workarounds?
On Tue, Aug 13, 2019 at 5:16 PM Ilya Dryomov wrote:
>
> On Tue, Aug 13, 2019 at 3:57 PM Serkan Çoban wrote:
> >
> > I checked /var/log/mess
erkan
On Tue, Aug 13, 2019 at 3:42 PM Ilya Dryomov wrote:
>
> On Tue, Aug 13, 2019 at 12:36 PM Serkan Çoban wrote:
> >
> > Hi,
> >
> > Just installed nautilus 14.2.2 and setup cephfs on it. OS is all centos 7.6.
> > From a client I can mount the cephfs with ce
Hi,
Just installed nautilus 14.2.2 and setup cephfs on it. OS is all centos 7.6.
From a client I can mount the cephfs with ceph-fuse, but I cannot
mount with ceph kernel client.
It gives "mount error 110 connection timeout" and I can see "libceph:
corrupt full osdmap (-12) epoch 2759 off 656" in
In this thread [1] it is suggested to bump up
mds log max segments = 200
mds log max expiring = 150
1- http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-December/023490.html
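For reference, a sketch of where those values would go in ceph.conf (the numbers are the ones from the linked thread, not tuned recommendations, and newer releases may have dropped mds log max expiring):

```ini
[mds]
# values suggested in the linked 2017 thread, not general recommendations
mds log max segments = 200
mds log max expiring = 150
```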
On Sun, Apr 28, 2019 at 2:58 PM Winger Cheng wrote:
>
> Hello Everyone,
>
> I have a CephFS cluster which has 4 no
>Where did you get those numbers? I would like to read more if you can
point to a link.
Just found the link:
https://github.com/facebook/rocksdb/wiki/Leveled-Compaction
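The 3/30/300 GB tiers can be reproduced from the leveled-compaction defaults described on that page. A back-of-envelope sketch (assuming the RocksDB defaults of a 256 MB base level and a 10x level multiplier; the values BlueStore compiles in may differ):

```python
# Back-of-envelope sketch of why block.db sizing tiers of roughly
# 3 GB / 30 GB / 300 GB are quoted for RocksDB leveled compaction.
# Assumes max_bytes_for_level_base = 256 MB and a 10x multiplier
# (RocksDB defaults; BlueStore's compiled-in options may differ).
BASE_MB = 256
MULTIPLIER = 10

def db_size_gb(levels):
    """Total size in GB if levels L1..L<levels> are completely full."""
    total_mb = sum(BASE_MB * MULTIPLIER ** i for i in range(levels))
    return total_mb / 1024

for n in (2, 3, 4):
    print(f"L1..L{n} full: ~{db_size_gb(n):.2f} GB")
```

A partition between two tiers only ever fills to the lower tier, which is why the quoted 28 GB and 240 GB examples waste most of their space.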
On Fri, Feb 22, 2019 at 4:22 PM Serkan Çoban wrote:
>
> >>These sizes are roughly 3GB,30GB,300GB. Anything i
>>These sizes are roughly 3GB,30GB,300GB. Anything in-between those sizes are
>>pointless. Only ~3GB of SSD will ever be used out of a
>>28GB partition. Likewise a 240GB partition is also pointless as only
>>~30GB will be used.
Where did you get those numbers? I would like to read more if you can
poi
If the ToR switches are L3, then you cannot use LACP.
On Mon, Jan 21, 2019 at 4:02 PM Burkhard Linke
wrote:
>
> Hi,
>
>
> I'm curious. What is the advantage of OSPF in your setup over e.g.
> LACP bonding both links?
>
>
> Regards,
>
> Burkhard
>
>
>I will also see a few uncorrected read errors in smartctl.
Uncorrected read errors in smartctl are a reason for us to replace the drive.
On Wed, Dec 19, 2018 at 6:48 AM Frank Ritchie wrote:
>
> Hi all,
>
> I have been receiving alerts for:
>
> Possible data damage: 1 pg inconsistent
>
> almost dai
e if Bad
> BBU
> Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad
> BBU
>
>
> On 11/18/2018 12:45 AM, Serkan Çoban wrote:
> > Is the write cache on the SSDs enabled on all three servers? Can you check them?
> > On Sun, Nov 18, 2018 at 9:05 AM Alex Litva
occasion
> Cache, RAID, and battery situation is the same.
>
> On 11/17/2018 11:38 PM, Serkan Çoban wrote:
> >> 10ms w_await for an SSD is too much. How is that SSD connected to the system?
> >> Is there any raid card installed in this system? What is the raid mode?
> > On Sun, Nov
>10ms w_await for an SSD is too much. How is that SSD connected to the system?
>Is there any raid card installed in this system? What is the raid mode?
On Sun, Nov 18, 2018 at 8:25 AM Alex Litvak
wrote:
>
> Here is another snapshot. I wonder if this write io wait is too big
> Device: rrqm/s wrqm/s
Hi,
Does anyone know if slides/recordings will be available online?
Thanks,
Serkan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
I think you don't have enough hosts for your EC pool crush rule.
If your failure domain is host, then you need at least ten hosts.
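As an illustration (the profile and pool names here are made up, and k=8, m=2 is just one example that needs ten hosts): with failure domain host, an EC profile places one shard per host, so it needs k+m distinct hosts.

```shell
# Hypothetical example: k + m = 10 shards, one per host, so the rule
# needs at least ten hosts when crush-failure-domain=host.
ceph osd erasure-code-profile set example-profile \
    k=8 m=2 crush-failure-domain=host
ceph osd pool create example-ecpool 128 128 erasure example-profile
```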
On Wed, Oct 24, 2018 at 9:39 PM Brady Deetz wrote:
>
> My cluster (v12.2.8) is currently recovering and I noticed this odd OSD ID in
> ceph health detail:
> "214748364
m using VGs/LVs). The output shows block and block.db, but nothing about
> wal.db. How can I learn where my wal lives?
>
>
>
>
> On Sun, Oct 21, 2018 at 12:43 AM Serkan Çoban wrote:
>>
>> ceph-bluestore-tool can show you the disk labels.
>> ceph-bluestore-tool sho
an
>> wrote:
>>>
>>> I get that, but isn’t 4TiB to track 2.45M objects excessive? These numbers
>>> seem very high to me.
>>>
>>> Get Outlook for iOS
>>>
>>>
>>>
>>> On Sat, Oct 20, 2018 at 10:27 AM -0700, &quo
ceph-bluestore-tool can show you the disk labels.
ceph-bluestore-tool show-label --dev /dev/sda1
On Sun, Oct 21, 2018 at 1:29 AM Robert Stanford wrote:
>
>
> An email from this list stated that the wal would be created in the same
> place as the db, if the db were specified when running ceph-vol
4.65 TiB includes the size of the wal and db partitions too.
On Sat, Oct 20, 2018 at 7:45 PM Waterbly, Dan wrote:
>
> Hello,
>
>
>
> I have inserted 2.45M 1,000 byte objects into my cluster (radosgw, 3x
> replication).
>
>
>
> I am confused by the usage ceph df is reporting and am hoping someone can
> sh
ek now.
>
> I’m just about to go live with this system ( in the next couple of weeks ) so
> I'm trying to start out as clean as possible.
>
> If anyone has any insights I'd appreciate it.
>
> There should be no data in the system yet... unless I'm missing somet
The used space is the wal+db size on each OSD.
On Wed, Sep 19, 2018 at 3:50 PM Jakub Jaszewski
wrote:
>
> Hi, I've recently deployed fresh cluster via ceph-ansible. I've not yet
> created pools, but storage is used anyway.
>
> [root@ceph01 ~]# ceph version
> ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0
The Intel DC series is also popular for both NVMe and SSD use cases.
https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/data-center-ssds/dc-d3-s4610-series.html
On Mon, Sep 17, 2018 at 8:10 PM Robert Stanford wrote:
>
>
> Awhile back the favorite SSD for Ceph was the Samsung
>Is there a way of doing this without running multiple filesystems within the
>same cluster?
Yes, have a look at the following link:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/ceph_file_system_guide/index#working-with-file-and-directory-layouts
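As a minimal sketch of what the linked doc describes (the pool name and mount path are made up, and the pool must already exist):

```shell
# Hypothetical sketch: direct new files under one directory to a
# separate data pool via a directory layout, instead of running a
# second filesystem in the cluster.
ceph fs add_data_pool cephfs ssd-pool
setfattr -n ceph.dir.layout.pool -v ssd-pool /mnt/cephfs/fast
getfattr -n ceph.dir.layout /mnt/cephfs/fast   # verify the layout
```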
On Thu, Sep 6, 2018
You can do it by exporting CephFS via Samba. I don't think any other
way exists for CephFS.
On Thu, Jul 26, 2018 at 9:12 AM, Manuel Sopena Ballesteros
wrote:
> Dear Ceph community,
>
>
>
> I am quite new to Ceph but trying to learn as quickly as I can. We are
> deploying our first Ceph producti
Hi,
Can I create device class types like sata-hdd and sas-hdd and use them?
From the docs I understand there are only ssd, hdd and nvme device classes.
I would like ssd,nvme,sata-hdd,sas-hdd
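If I understand the docs correctly, the class name is a free-form string, so something like the sketch below might work (the OSD id and rule name are made up; please correct me if this is wrong):

```shell
# Hypothetical sketch: assign a custom device class and build a rule on it.
ceph osd crush rm-device-class osd.0             # drop the auto-detected class
ceph osd crush set-device-class sata-hdd osd.0   # assign the custom class
ceph osd crush rule create-replicated sata-rule default host sata-hdd
```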
Serkan
rados bench uses a 4MB block size for IO. Try with an IO size of 4KB;
you will see the SSD being used for write operations.
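For example (the pool name is made up; -b sets the object size in bytes):

```shell
# rados bench defaults to 4 MB objects; -b overrides the size in bytes,
# and -t sets the number of concurrent operations.
rados bench -p testpool 60 write -b 4096 -t 16
```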
On Fri, Apr 27, 2018 at 4:54 PM, Steven Vacaroaia wrote:
> Hi
>
> During rados bench tests, I noticed that HDD usage goes to 100% but SSD
> stays at ( or very close to 0)
>
>
You can try the --block.db /dev/nvme0n1p1 parameter alone; the wal will
use the same partition.
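Something like this sketch (the HDD path /dev/sdb is made up; adjust to your layout):

```shell
# Hypothetical sketch: db on the NVMe partition; with no --block.wal
# given, the wal is colocated with the db.
ceph-volume lvm create --bluestore \
    --data /dev/sdb --block.db /dev/nvme0n1p1
```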
On Thu, Apr 26, 2018 at 3:43 PM, Kevin Olbrich wrote:
> Hi!
>
> Yesterday I deployed 3x SSDs as OSDs fine but today I get this error when
> deploying an HDD with separted WAL/DB:
> stderr: 2018-04-26 11:58:19.53196
h formatted OSD?
> Thank you.
>
> - Kevin
>
>
> 2018-04-26 12:36 GMT+02:00 Serkan Çoban :
>>
>> >On bluestore, is it safe to move both Block-DB and WAL to this journal
>> > NVMe?
>> Yes, just specify block-db with ceph-volume and wal also use that
>&
>On bluestore, is it safe to move both Block-DB and WAL to this journal NVMe?
Yes, just specify block.db with ceph-volume and the wal will also use that
partition. You can put 12-18 HDDs per NVMe.
>What happens if the NVMe dies?
You lose all the OSDs backed by that NVMe and need to re-add them to the cluster.
On Thu,
dd
> - osd recovery sleep ssd
>
> There are other throttling params you can change, though most defaults are
> just fine in my environment, and I don’t have experience with them.
>
> Good luck,
>
> Hans
>
>
>> On Apr 18, 2018, at 1:32 PM, Serkan Çoban wrote:
> Thank you for the quick response. We will try to adapt the script to
> increase the osd weight.
> How to create osd with weight 0 using ceph-volume tool? Or we have to create
> OSD and later modify the OSD weight to 0.
>
> Thanks,
> Muthu
>
>
>
> On Wed, Apr 1
You can add the new OSDs with weight 0 and edit the script below to
increase the osd weights instead of decreasing them.
https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight
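A rough sketch of the idea (the OSD id, target weight, and sleep interval are made up; the linked script does this properly, with cluster health checks). If I remember correctly, new OSDs can also be made to start at weight 0 via the osd crush initial weight = 0 config option.

```shell
# Hypothetical sketch of a gradual weight-up; prefer the linked
# ceph-gentle-reweight script, which also waits for recovery to finish.
TARGET=7.27   # final crush weight for the new OSD (made-up value)
for w in 1 2 3 4 5 6 7 "$TARGET"; do
    ceph osd crush reweight osd.42 "$w"
    sleep 600   # crude stand-in for checking PG states between steps
done
```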
On Wed, Apr 18, 2018 at 2:16 PM, nokia ceph wrote:
> Hi All,
>
> We are having 5 node cluster with EC 4+1 .
Here it is:
https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight
On Tue, Apr 17, 2018 at 10:59 AM, Caspar Smit wrote:
> Hi John,
>
> Thanks for pointing out that script, do you have a link to it? I'm not able
> to find it.
> Just want to look at the script to understan
Hi,
Where can I find slides/videos of the conference?
I already tried (1), but cannot view the videos.
Serkan
1- http://www.itdks.com/eventlist/detail/1962
Hi, I am using ceph-ansible to build a test cluster. I want to confirm
that if I use the lvm scenario with the settings below in osds.yml:
- data: data-lv1
data_vg: vg2
db: db-lv1
db_vg: vg1
the wal will also use the db logical volume, right? I plan to use one
NVMe for 10 OSDs, so I am creating 10 LVs from
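If it helps, this is roughly how I plan to carve the LVs (the device path and LV sizes are made up; the names match the osds.yml snippet above):

```shell
# Hypothetical sketch: 10 db LVs on one NVMe, matching db-lv*/vg1 above.
pvcreate /dev/nvme0n1
vgcreate vg1 /dev/nvme0n1
for i in $(seq 1 10); do
    lvcreate -L 60G -n "db-lv$i" vg1
done
```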
May I ask why you are using the ELRepo kernel with CentOS?
AFAIK, Red Hat backports all Ceph features to its 3.10 kernels. Am I wrong?
On Fri, Feb 2, 2018 at 2:44 PM, Richard Hesketh
wrote:
> On 02/02/18 08:33, Kevin Olbrich wrote:
>> Hi!
>>
>> I am planning a new Flash-based cluster. In the past we used S
The answer is in the logs:
[mon01][WARNIN] To connect to download.ceph.com insecurely, use
`--no-check-certificate'.
It would be better to mirror the repos and use them offline...
On Fri, Jan 5, 2018 at 12:08 PM, Hüseyin Atatür YILDIRIM <
hyildi...@havelsan.com.tr> wrote:
> I’ve upgraded the ceph-dep
>Also, are there any benchmark comparisons between hdfs and ceph specifically
>around performance of apps benefiting from data locality ?
There will be no data locality in Ceph, because all data is
accessed over the network.
On Fri, Dec 22, 2017 at 4:52 AM, Traiano Welcome wrote:
> Hi List
>
The steps below are taken from the Red Hat documentation.
Follow this procedure to shut down the Ceph cluster:
1. Stop the clients from using the RBD images/RADOS Gateway on this
cluster, as well as any other clients.
2. The cluster must be in a healthy state before proceeding.
3. Set the noout, no
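The flag-setting step usually looks like this (a sketch; verify the exact flag list against the Red Hat procedure):

```shell
# Commonly cited flags for a clean cluster shutdown; unset them in
# reverse order when bringing the cluster back up.
ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
ceph osd set nobackfill
ceph osd set nodown
ceph osd set pause
```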
u
> Geschäftsführung: Oliver Dzombic
>
> Steuer Nr.: 35 236 3622 1
> UST ID: DE274086107
>
>
> Am 21.04.2016 um 16:22 schrieb Serkan Çoban:
>> Hi,
>> I would like to install and test ceph jewel release.
>> My servers are rhel 7.2 but clients are rhel6.7.
>>
Hi,
I would like to install and test ceph jewel release.
My servers are rhel 7.2 but clients are rhel6.7.
Is it possible to install the jewel release on the servers and use the
hammer ceph-fuse rpms on the clients?
Thanks,
Serkan