Re: [ceph-users] Pool Max Avail and Ceph Dashboard Pool Usage on Nautilus giving different percentages

2020-01-14 Thread ceph
delay. >I tried to resend it but it just returned the same error that mail was >not deliverable to the ceph mailing list. I will send the message >beneath as soon it's finally possible, but for now this should help you >out. > >Stephan > >-- > >Hi, > >if

[ceph-users] One Mon out of Quorum

2020-01-12 Thread nokia ceph
to the fifth node and restart the ceph service still we are unable to make the fifth node enter the quorum. # ceph -s cluster: id: 92e8e879-041f-49fd-a26a-027814e0255b health: HEALTH_WARN 1/5 mons down, quorum cn1,cn2,cn3,cn4 services: mon: 5 daemons, quorum cn1,cn2,cn3,cn4

Re: [ceph-users] rbd du command

2020-01-06 Thread ceph
TE: Expected behavior is the same as "Linux du command" > > Thanks > Swami

Re: [ceph-users] Pool Max Avail and Ceph Dashboard Pool Usage on Nautilus giving different percentages

2019-12-16 Thread ceph
I have observed this in the Ceph Nautilus dashboard too and think it is a display bug... but sometimes it shows the right values. Which Nautilus version do you use? On 10 December 2019 14:31:05 CET, "David Majchrzak, ODERLAND Webbhotell AB" wrote: >Hi! > >While browsing /#/po

[ceph-users] rados_ioctx_selfmanaged_snap_set_write_ctx examples

2019-12-02 Thread nokia ceph
Hi Team, We would like to create multiple snapshots inside the ceph cluster, initiate the request from a librados client, and came across this rados API: rados_ioctx_selfmanaged_snap_set_write_ctx. Can someone give us sample code on how to use this API? Thanks, Muthu

[ceph-users] Ceph osd's crashing repeatedly

2019-11-13 Thread nokia ceph
Hi, We have upgraded a 5 node ceph cluster from Luminous to Nautilus and the cluster was running fine. Yesterday when we tried to add one more osd into the ceph cluster we find that the OSD is created in the cluster but suddenly some of the other OSD's started to crash and we are not able

[ceph-users] Ceph Osd operation slow

2019-11-12 Thread nokia ceph
Hi Team, In one of our ceph cluster we observe that there are many slow IOPS in all our OSD's and most of the latency is happening between two set of operations which are shown below. { "time": "2019-11-12

Re: [ceph-users] Fwd: OSD's not coming up in Nautilus

2019-11-10 Thread nokia ceph
Please find the below output. cn1.chn8be1c1.cdn ~# ceph osd metadata 0 { "id": 0, "arch": "x86_64", "back_addr": "[v2:10.50.12.41:6883/12650,v1:10.50.12.41:6887/12650]", "back_iface": "dss-p

Re: [ceph-users] Fwd: OSD's not coming up in Nautilus

2019-11-09 Thread nokia ceph
Hi, yes, the cluster is still unrecovered. Not able to even bring up osd.0 yet. osd logs: https://pastebin.com/4WrpgrH5 Mon logs: https://drive.google.com/open?id=1_HqK2d52Cgaps203WnZ0mCfvxdcjcBoE # ceph daemon /var/run/ceph/ceph-mon.cn1.asok config show|grep debug_mon "debug_mon

Re: [ceph-users] Fwd: OSD's not coming up in Nautilus

2019-11-09 Thread nokia ceph
: > The mon log shows that the all mismatch fsid osds are from node > 10.50.11.45, > maybe that the fith node? > BTW i don't found the osd.0 boot message in ceph-mon.log > do you set debug_mon=20 first and then restart osd.0 process, and make > sure the osd.0 is restarted. > > > no

Re: [ceph-users] Fwd: OSD's not coming up in Nautilus

2019-11-09 Thread nokia ceph
Hi, Please find the ceph osd tree output in the pastebin https://pastebin.com/Gn93rE6w On Fri, Nov 8, 2019 at 7:58 PM huang jun wrote: > can you post your 'ceph osd tree' in pastebin? > do you mean the osds reporting fsid mismatch are from old removed nodes? > > nokia ceph wrote on 8 November 2019

Re: [ceph-users] Fwd: OSD's not coming up in Nautilus

2019-11-08 Thread nokia ceph
Hi, The fifth node in the cluster was affected by hardware failure and hence the node was replaced in the ceph cluster. But we were not able to replace it properly and hence we uninstalled the ceph in all the nodes, deleted the pools and also zapped the osd's and recreated them as new ceph

Re: [ceph-users] Fwd: OSD's not coming up in Nautilus

2019-11-08 Thread nokia ceph
Hi, Below is the status of the OSD after restart. # systemctl status ceph-osd@0.service ● ceph-osd@0.service - Ceph object storage daemon osd.0 Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled) Drop-In: /etc/systemd/system/ceph-osd

[ceph-users] Fwd: OSD's not coming up in Nautilus

2019-11-08 Thread nokia ceph
Adding my official mail id -- Forwarded message - From: nokia ceph Date: Fri, Nov 8, 2019 at 3:57 PM Subject: OSD's not coming up in Nautilus To: Ceph Users Hi Team, There is one 5 node ceph cluster which we have upgraded from Luminous to Nautilus and everything was going

[ceph-users] OSD's not coming up in Nautilus

2019-11-08 Thread nokia ceph
Hi Team, There is one 5 node ceph cluster which we have upgraded from Luminous to Nautilus and everything was going well until yesterday when we noticed that the ceph osd's are marked down and not recognized by the monitors as running even though the osd processes are running. We noticed

[ceph-users] Is deepscrub Part of PG increase?

2019-11-02 Thread ceph
timespan for scrubbing from the admin? Hope you can enlighten me :) - Mehmet

Re: [ceph-users] cluster network down

2019-10-25 Thread ceph
parts", since >networks >> can break in thousands of not-very-obvious ways which are not >0%-vs-100% >> but somewhere in between. >> > >Ok. I ask my question in a new way. >What does ceph do, when I switch off all switches of the cluster >network?

[ceph-users] Increase of Ceph-mon memory usage - Luminous

2019-10-16 Thread nokia ceph
Hi Team, We have noticed that memory usage of ceph-monitor processes increased by 1GB in 4 days. We monitored the ceph-monitor memory usage every minute and we can see it increases and decreases by few 100 MBs at any point; but over time, the memory usage increases. We also noticed some monitor

[ceph-users] ceph stats on the logs

2019-10-08 Thread nokia ceph
Hi Team, With default log settings , the ceph stats will be logged like cluster [INF] pgmap v30410386: 8192 pgs: 8192 active+clean; 445 TB data, 1339 TB used, 852 TB / 2191 TB avail; 188 kB/s rd, 217 MB/s wr, 1618 op/s Jewel : on mon logs Nautilus : on mgr logs Luminous : not able to view

Re: [ceph-users] KVM userspace-rbd hung_task_timeout on 3rd disk

2019-09-29 Thread ceph
I guess this depends on your cluster setup... do you have slow requests also? - Mehmet On 11 September 2019 12:22:08 CEST, Ansgar Jazdzewski wrote: >Hi, > >we are running ceph version 13.2.4 and qemu 2.10, we figured out that >on VMs with more than three disks IO fails with hung

Re: [ceph-users] Nautilus : ceph dashboard ssl not working

2019-09-24 Thread nokia ceph
Thank you Ricardo Dias On Tue, Sep 17, 2019 at 2:13 PM Ricardo Dias wrote: > Hi Muthu, > > The command you used is only available in v14.2.3. To set the ssl > certificate in v14.2.2 you need to use the following commands: > > $ ceph config-key set mgr/dashboard/crt -i dash
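
A minimal sketch of the older config-key method this reply appears to describe, for clusters before set-ssl-certificate existed; the file names dashboard.crt and dashboard.key are illustrative:

ceph config-key set mgr/dashboard/crt -i dashboard.crt   # certificate in PEM format
ceph config-key set mgr/dashboard/key -i dashboard.key   # matching private key
ceph mgr module disable dashboard                        # restart the module so it picks up the new certificate
ceph mgr module enable dashboard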

[ceph-users] Nautilus : ceph dashboard ssl not working

2019-09-16 Thread nokia ceph
Hi Team, In ceph 14.2.2, ceph dashboard does not have set-ssl-certificate. We are trying to enable the ceph dashboard, and while using the ssl certificate and key, it is not working. cn5.chn5au1c1.cdn ~# ceph dashboard set-ssl-certificate -i dashboard.crt no valid command found; 10 closest matches

[ceph-users] multiple RESETSESSION messages

2019-09-13 Thread nokia ceph
the correct Luminous release in which this is/will be available. https://github.com/ceph/ceph/pull/25343 Can someone help us with this please? Thanks,

Re: [ceph-users] mon db change from rocksdb to leveldb

2019-08-22 Thread nokia ceph
d one, > let it sync, etc. > Still a bad idea. > > Paul > > -- > Paul Emmerich > > Looking for help with your Ceph cluster? Contact us at https://croit.io > > croit GmbH > Freseniusstr. 31h > 81247 München > www.croit.io > Tel: +49 89 1896585 9

[ceph-users] mon db change from rocksdb to leveldb

2019-08-21 Thread nokia ceph
Hi Team, One of our old customers had Kraken and they are going to upgrade to Luminous. In the process they are also requesting a downgrade procedure. Kraken used leveldb for ceph-mon data; from Luminous it changed to rocksdb, and the upgrade works without any issues. When we downgrade, the ceph-mon

Re: [ceph-users] bluestore write iops calculation

2019-08-06 Thread nokia ceph
the link of existing rocksdb ticket which does 2 write + > > syncs. > > My PR is here https://github.com/ceph/ceph/pull/26909, you can find the > issue tracker links inside it. > > > 3. Any configuration by which we can reduce/optimize the iops ? > > As already said part of

Re: [ceph-users] bluestore write iops calculation

2019-08-05 Thread nokia ceph
iops on top of each of (1) and (2). > > so 5*((2 or 3)+1+2)*750 = either 18750 or 22500. 18750/120 = 156.25, > 22500/120 = 187.5 > > the rest may be compaction or metadata reads if you update some objects. > or maybe I'm missing something else. however this is already closer to

Re: [ceph-users] details about cloning objects using librados

2019-08-02 Thread nokia ceph
Thank you Greg, it is now clear to us; the option is only available in C++, so we need to rewrite the client code in C++. Thanks, Muthu On Fri, Aug 2, 2019 at 1:05 AM Gregory Farnum wrote: > On Wed, Jul 31, 2019 at 10:31 PM nokia ceph > wrote: > > > > Thank you Gre

[ceph-users] bluestore write iops calculation

2019-08-02 Thread nokia ceph
Hi Team, Could you please help us in understanding the write iops inside the ceph cluster? There seems to be a mismatch between the theoretical iops and what we see in disk status. Our platform: 5 node cluster, 120 OSDs, with each node having 24 HDD disks (data, rocksdb and rocksdb.WAL all reside

Re: [ceph-users] details about cloning objects using librados

2019-07-31 Thread nokia ceph
use librados.h in our client to communicate with ceph cluster. Also any equivalent librados api for the command rados -p poolname Thanks, Muthu On Wed, Jul 31, 2019 at 11:13 PM Gregory Farnum wrote: > > > On Wed, Jul 31, 2019 at 1:32 AM nokia ceph > wrote: > >> Hi Greg,

Re: [ceph-users] details about cloning objects using librados

2019-07-31 Thread nokia ceph
Hi Greg, We were trying to implement this however having issues in assigning the destination object name with this api. There is a rados command "rados -p cp " , is there any librados api equivalent to this ? Thanks, Muthu On Fri, Jul 5, 2019 at 4:00 PM nokia ceph wrote: > T

Re: [ceph-users] Nautilus:14.2.2 Legacy BlueStore stats reporting detected

2019-07-24 Thread nokia ceph
: > bluestore warn on legacy statfs = false > > -- > Paul Emmerich > > Looking for help with your Ceph cluster? Contact us at https://croit.io > > croit GmbH > Freseniusstr. 31h > 81247 München > www.croit.io > Tel: +49 89 1896585 90 > > > On Fri, Jul 19, 2019
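
A short sketch of how the quoted setting can be applied on a 14.2.x cluster, plus the per-OSD repair that clears the warning permanently; <id> is a placeholder:

ceph config set global bluestore_warn_on_legacy_statfs false   # silence the warning cluster-wide
# or convert each OSD to the new statfs accounting, one at a time:
systemctl stop ceph-osd@<id>
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-<id>
systemctl start ceph-osd@<id>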

Re: [ceph-users] Nautilus:14.2.2 Legacy BlueStore stats reporting detected

2019-07-21 Thread nokia ceph
Thank you Paul Emmerich On Fri, Jul 19, 2019 at 5:22 PM Paul Emmerich wrote: > bluestore warn on legacy statfs = false > > -- > Paul Emmerich > > Looking for help with your Ceph cluster? Contact us at https://croit.io > > croit GmbH > Freseniusstr. 31h > 81247 Mün

[ceph-users] Nautilus:14.2.2 Legacy BlueStore stats reporting detected

2019-07-19 Thread nokia ceph
Hi Team, After upgrading our cluster from 14.2.1 to 14.2.2 , the cluster moved to warning state with following error cn1.chn6m1c1ru1c1.cdn ~# ceph status cluster: id: e9afb5f3-4acf-421a-8ae6-caaf328ef888 health: HEALTH_WARN Legacy BlueStore stats reporting detected

Re: [ceph-users] details about cloning objects using librados

2019-07-05 Thread nokia ceph
vise flags we have in various > places that let you specify things like not to cache the data. > Probably leave them unset. > > -Greg > > > > On Wed, Jul 3, 2019 at 2:47 AM nokia ceph > wrote: > > > > Hi Greg, > > > > Can you please share the api deta

Re: [ceph-users] details about cloning objects using librados

2019-07-03 Thread nokia ceph
ect class will still need to > > > connect to the relevant primary osd and send the write (presumably in > > > some situations though this will be the same machine). > > > > > > On Tue, Jul 2, 2019 at 4:08 PM nokia ceph > wrote: > > > > >

Re: [ceph-users] details about cloning objects using librados

2019-07-02 Thread nokia ceph
Hi Brett, I think I was wrong here in the requirement description. It is not about data replication; we need the same content stored under a different object name. We store video content inside the ceph cluster. And our new requirement is that we need to store the same content for different users, hence need

Re: [ceph-users] details about cloning objects using librados

2019-07-01 Thread nokia ceph
will clone/copy multiple objects and stores inside the cluster. Thanks, Muthu On Fri, Jun 28, 2019 at 9:23 AM Brad Hubbard wrote: > On Thu, Jun 27, 2019 at 8:58 PM nokia ceph > wrote: > > > > Hi Team, > > > > We have a requirement to create multiple copies o

[ceph-users] details about cloning objects using librados

2019-06-27 Thread nokia ceph
? Please share the document details if it is feasible. Thanks, Muthu

Re: [ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-06-23 Thread ceph
Hello, I would advise using this script from Dan: https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight I have used it many times and it works great - also if you want to drain the OSDs. HTH Mehmet On 30 May 2019 22:59:05 CEST, Michel Raabe wrote: >Hi M
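
For readers without the script, a rough manual equivalent of what it automates (its own command-line options are not shown here); the weights and <id> are illustrative:

# ceph.conf on the new nodes, so freshly created OSDs start empty:
osd crush initial weight = 0
# then raise each OSD's CRUSH weight in small steps, waiting for HEALTH_OK in between:
ceph osd crush reweight osd.<id> 0.5
ceph osd crush reweight osd.<id> 1.0   # ...continue up to the disk's full weight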

Re: [ceph-users] Error when I compare hashes of export-diff / import-diff

2019-06-11 Thread ceph
d9df83 > rbd -c ${BACKUP-CLUSTER} -p ${POOL-DESTINATION} export > ${KVM-IMAGE}@${TODAY-SNAP} - | md5sum > => 2c4962870fdd67ca758c154760d9df83 > > > Does someone have an idea of what's happening? > > Does someone have a way to successfully compare the export-diff > /import-di

[ceph-users] ceph master - src/common/options.cc - size_t / uint64_t incompatibility on ARM 32bit

2019-06-03 Thread Dyweni - Ceph-Users
Hi List / James, In the Ceph master (and also Ceph 14.2.1), file: src/common/options.cc, line # 192: Option::size_t sz{strict_iecstrtoll(val.c_str(), error_message)}; On ARM 32-bit, compiling with CLang 7.1.0, compilation fails hard at this line. The reason is because

Re: [ceph-users] ceph nautilus deep-scrub health error

2019-05-15 Thread nokia ceph
state for not deep scrubbing. Thanks, Muthu On Tue, May 14, 2019 at 4:30 PM EDH - Manuel Rios Fernandez < mrios...@easydatahost.com> wrote: > Hi Muthu > > > > We found the same issue near 2000 pgs not deep-scrubbed in time. > > > > We’re manually force scrubbing

[ceph-users] ceph nautilus deep-scrub health error

2019-05-14 Thread nokia ceph
Hi Team, After upgrading from Luminous to Nautilus, we see a "654 pgs not deep-scrubbed in time" error in ceph status. How can we disable this warning? In our setup we disable deep-scrubbing for performance reasons. Thanks, Muthu
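
One possible way to push the warning out in Nautilus, assuming deep scrubs really are meant to stay off; the interval value is only an example:

ceph config set global osd_deep_scrub_interval 2592000   # e.g. 30 days; the "not deep-scrubbed in time" check is relative to this interval
ceph config get osd osd_deep_scrub_interval              # verify what the daemons will use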

Re: [ceph-users] obj_size_info_mismatch error handling

2019-04-30 Thread ceph
in a way that >I haven't seen before. >> $ ceph versions >> { >> "mon": { >> "ceph version 13.2.5 >(cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic (stable)": 3 >> }, >> "mgr": { >>

Re: [ceph-users] VM management setup

2019-04-24 Thread ceph
Hello, I would also recommend Proxmox. It is very easy to install and to manage your KVM/LXC guests, with a huge amount of support for possible storages. Just my 2 cents. HTH - Mehmet On 6 April 2019 17:48:32 CEST, Marc Roos wrote: > >We have also a hybrid ceph/libvirt-kvm setup, using some s

Re: [ceph-users] Chasing slow ops in mimic

2019-04-13 Thread ceph
; >> Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz >> 128 GB RAM >> Each OSD is SSD Intel DC-S3710 800GB >> It runs mimic 13.2.2 in containers. >> >> Cluster was operating normally for 4 month and then recently I had an >outage with multiple VMs (RBD) showing

[ceph-users] Kraken - Pool storage MAX AVAIL drops by 30TB after disk failure

2019-04-11 Thread nokia ceph
Hi, We have a 5 node EC 4+1 cluster with 335 OSDs running Kraken Bluestore 11.2.0. There was a disk failure on one of the OSDs and the disk was replaced. After which it was noticed that there was a ~30TB drop in the MAX_AVAIL value for the pool storage details on output of 'ceph df' Even though

Re: [ceph-users] problems with pg down

2019-04-09 Thread ceph
cting my pg query, I can't find the osd to apply the lost >paremeter. >> > >> > >> >http://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-pg/#placement-group-down-peering-failure >> > >> > Did someone have same scenario with

Re: [ceph-users] PGs stuck in created state

2019-04-08 Thread ceph
Hello Simon, Another idea is to increase choose_total_tries. HTH Mehmet On 7 March 2019 09:56:17 CET, Martin Verges wrote: >Hello, > >try restarting every osd if possible. >Upgrade to a recent ceph version. > >-- >Martin Verges >Managing director > >Mobile: +49 1
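
A sketch of raising choose_total_tries by editing the CRUSH map; the value 100 is only an example:

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt        # decompile to text
# edit crush.txt: change "tunable choose_total_tries 50" to e.g. 100
crushtool -c crush.txt -o crush.new        # recompile
ceph osd setcrushmap -i crush.new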

Re: [ceph-users] Blocked ops after change from filestore on HDD to bluestore on SDD

2019-03-28 Thread ceph
88da775c5ad4 > keyring = /etc/pve/priv/$cluster.$name.keyring > public network = 169.254.42.0/24 > >[mon] > mon allow pool delete = true > mon data avail crit = 5 > mon data avail warn = 15 > >[osd] > keyring = /var/lib/ceph/osd/ceph-$id/keyring > osd journal

Re: [ceph-users] Problems with osd creation in Ubuntu 18.04, ceph 13.2.4-1bionic

2019-03-15 Thread ceph
Hi Rainer, Try something like dd if=/dev/zero of=/dev/sdX bs=4096 to wipe/zap any information on the disk. HTH Mehmet On 14 February 2019 13:57:51 CET, Rainer Krienke wrote: >Hi, > >I am quite new to ceph and am just trying to set up a ceph cluster. Initially >I used

Re: [ceph-users] Upgrade Luminous to mimic on Ubuntu 18.04

2019-02-18 Thread ceph
Hello people, On 11 February 2019 12:47:36 CET, c...@elchaka.de wrote: >Hello Ashley, > >On 9 February 2019 17:30:31 CET, Ashley Merrick >wrote: >>What does the output of apt-get update look like on one of the nodes? >> >>You can just list the lines that ment

Re: [ceph-users] Upgrade Luminous to mimic on Ubuntu 18.04

2019-02-11 Thread ceph
Hello Ashley, On 9 February 2019 17:30:31 CET, Ashley Merrick wrote: >What does the output of apt-get update look like on one of the nodes? > >You can just list the lines that mention CEPH > ... .. . Get:6 https://download.ceph.com/debian-luminous bionic InRelease [8393 B] ... ..

Re: [ceph-users] Upgrade Luminous to mimic on Ubuntu 18.04

2019-02-09 Thread ceph
ing the Ubuntu repos or the CEPH >18.04 repo. > >The updates will always be slower to reach you if you’re waiting for it >to >hit the Ubuntu repo vs adding CEPH’s own. > > >On Sun, 10 Feb 2019 at 12:19 AM, wrote: > >> Hello m8s, >> >> I’m curious how we should d

[ceph-users] Upgrade Luminous to mimic on Ubuntu 18.04

2019-02-09 Thread ceph
Hello m8s, I'm curious how we should do an upgrade of our Ceph cluster on Ubuntu 16/18.04, as (at least on our 18.04 nodes) we only have 12.2.7 (or .8?). For an upgrade to Mimic we should first update to the latest version, actually 12.2.11 (IIRC), which is not possible on 18.04. Is there an update

Re: [ceph-users] Questions about using existing HW for PoC cluster

2019-02-09 Thread ceph
Hi, On 27 January 2019 18:20:24 CET, Will Dennis wrote: >Been reading "Learning Ceph - Second Edition" >(https://learning.oreilly.com/library/view/learning-ceph-/9781787127913/8f98bac7-44d4-45dc-b672-447d162ea604.xhtml) >and in Ch. 4 I read this: > >"We

Re: [ceph-users] block.db on a LV? (Re: Mixed SSD+HDD OSD setup recommendation)

2019-02-01 Thread ceph
use of LVM you have to specify the name of the Volume Group >> : and the respective Logical Volume instead of the path, e.g. >> : >> : ceph-volume lvm prepare --bluestore --block.db ssd_vg/ssd00 --data >/dev/sda >> >> Eugen, >> >> thanks, I will try it. In

Re: [ceph-users] Bionic Upgrade 12.2.10

2019-01-30 Thread ceph
e those OSDs, downgrade to 16.04 and re-add them, >this is going to take a while. > >--Scott > >On Mon, Jan 14, 2019 at 10:53 AM Reed Dier >wrote: >> >> This is because Luminous is not being built for Bionic for whatever >reason. >> There are some other mailing list en

Re: [ceph-users] repair do not work for inconsistent pg which three replica are the same

2019-01-26 Thread ceph
part\\udumbo\\s180888654\\s20181221\\sxtrabackup\\ufull\\ux19\\u30044\\u20181221025000\\sx19.xbstream.2~ntwW9vwutbmOJ4bDZYehERT2AokbtAi.3595__head_4F7F0C29__184 >> all md5 is : 73281ed56c92a56da078b1ae52e888e0 >> >> stat info is: >> root@cld-osd3-48:/home/ceph/var/lib/osd/ceph-33/current/38

Re: [ceph-users] Configure libvirt to 'see' already created snapshots of a vm rbd image

2019-01-24 Thread ceph
y created >on >the rbd image it is using for the vm? > >I have already a vm running connected to the rbd pool via >protocol='rbd', and rbd snap ls is showing for snapshots.

Re: [ceph-users] Using Ceph central backup storage - Best practice creating pools

2019-01-22 Thread ceph
believe you have to implement that on top of Ceph. For instance, let's say you simply create a pool, with an rbd volume in it. You then create a filesystem on that, and map it on some server. Finally, you can push your files onto that mountpoint, using various Linux users, ACLs or whatever; beyond that point

Re: [ceph-users] dropping python 2 for nautilus... go/no-go

2019-01-16 Thread ceph
us, we'll be doing it for Octopus. > > Are there major python-{rbd,cephfs,rgw,rados} users that are still Python > 2 that we need to be worried about? (OpenStack?) > > sage

Re: [ceph-users] Ceph community - how to make it even stronger

2019-01-05 Thread ceph . novice
Hi. What makes us struggle / wonder again and again is the absence of CEPH __man pages__. On *NIX systems man pages are always the first way to go for help, right? Or is this considered "old school" from the CEPH makers / community? :O And as many ppl complain again and again, the

Re: [ceph-users] Strange Data Issue - Unexpected client hang on OSD I/O Error

2018-12-26 Thread Dyweni - Ceph-Users
bed, also with no errors, at '2018-12-25 21:47'. In the past, when a read error occurs, the PG goes inconsistent and the admin has to repair it. The client operations are unaffected, because the data from the remaining 2 OSDs is available. In this case, there was data missing, Ceph detec

Re: [ceph-users] Strange Data Issue - Unexpected client hang on OSD I/O Error

2018-12-25 Thread Dyweni - Ceph-Users
Hi again! Prior to rebooting the client, I found this file (and its contents): # cat /sys/kernel/debug/ceph/8abf116d-a710-4245-811d-c08473cb9fb4.client7412370/osdc REQUESTS 1 homeless 0 1459933 osd24.3120c635 [2,18,9]/2 [2,18,9]/2 rbd_data.6b60e8643c9869.157f

[ceph-users] Strange Data Issue - Unexpected client hang on OSD I/O Error

2018-12-25 Thread Dyweni - Ceph-Users
nel output regarding I/O errors on the disk. These errors occurred 1 second prior to the message being issued on the client. This OSD has a drive that is developing bad sectors. This is known and tolerated. The data sits in a pool with 3 replicas. Normally, when I/O errors occur, Ceph repo

Re: [ceph-users] Ceph OOM Killer Luminous

2018-12-21 Thread Dyweni - Ceph-Users
r. We didn't have this issue before the upgrade. The only difference is the > nodes without any issue are R730xd and the ones with the memory leak are > R740xd. The hardware vendor doesn't see anything wrong with the hardware. From > the Ceph end we are not seeing any issue when it comes to runnin

Re: [ceph-users] Ceph Cluster to OSD Utilization not in Sync

2018-12-21 Thread Dyweni - Ceph-Users
Hi, If you are running Ceph Luminous or later, use the Ceph Manager Daemon's Balancer module. (http://docs.ceph.com/docs/luminous/mgr/balancer/). Otherwise, tweak the OSD weights (not the OSD CRUSH weights) until you achieve uniformity. (You should be able to get under 1 STDDEV). I would
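
A minimal sketch of both suggestions, assuming Luminous or later and that every client supports upmap; the reweight value and <id> are illustrative:

ceph osd set-require-min-compat-client luminous   # needed before upmap mode can be used
ceph mgr module enable balancer
ceph balancer mode upmap
ceph balancer on
# or, manually nudge an overfull OSD:
ceph osd reweight osd.<id> 0.95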

[ceph-users] Your email to ceph-uses mailing list: Signature check failures.

2018-12-21 Thread Dyweni - Ceph-Users
Hi Cary, I ran across your email on the ceph-users mailing list, 'Signature check failures.'. I've just run across the same issue on my end. Also a Gentoo user here. Running Ceph 12.2.5... 32bit/armhf and 64bit/x86_64. Was your environment mixed or strictly just x86_64? What

Re: [ceph-users] RBD snapshot atomicity guarantees?

2018-12-18 Thread ceph
ze + thaw), but > preceded by an fstrim. With virtio-scsi, using fstrim propagates the > discards from within the VM to Ceph RBD (if qemu is configured > accordingly), > and a lot of space is saved. > > We have yet to observe these hangs; we are running this with ~5 VMs with > ~10 d

Re: [ceph-users] Ceph 10.2.11 - Status not working

2018-12-17 Thread Dyweni - Ceph-Users
On 2018-12-17 20:16, Brad Hubbard wrote: On Tue, Dec 18, 2018 at 10:23 AM Mike O'Connor wrote: Hi All I have a ceph cluster which has been working with out issues for about 2 years now, it was upgrade about 6 month ago to 10.2.11 root@blade3:/var/lib/ceph/mon# ceph status 2018-12-18 10

Re: [ceph-users] Decommissioning cluster - rebalance questions

2018-12-12 Thread Dyweni - Ceph-Users
to remove OSD Server X with all its OSD's. I am following these steps for all OSD's of Server X: - ceph osd out - Wait for rebalance (active+clean) - On OSD: service ceph stop osd. Once the steps above are performed, the following steps should be performed: - ceph osd crush remove osd. - ceph auth del
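
A consolidated sketch of the steps quoted above for a single OSD; <id> is a placeholder:

ceph osd out <id>                     # stop new data going to the OSD
ceph -s                               # wait until all PGs are active+clean again
systemctl stop ceph-osd@<id>          # or: service ceph stop osd.<id>
ceph osd crush remove osd.<id>
ceph auth del osd.<id>
ceph osd rm <id>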

[ceph-users] [cephfs] Kernel outage / timeout

2018-12-04 Thread ceph
Hi, I have some wild freezes using cephfs with the kernel driver. For instance: [Tue Dec 4 10:57:48 2018] libceph: mon1 10.5.0.88:6789 session lost, hunting for new mon [Tue Dec 4 10:57:48 2018] libceph: mon2 10.5.0.89:6789 session established [Tue Dec 4 10:58:20 2018] ceph: mds0 caps stale

Re: [ceph-users] ceph.conf mon_max_pg_per_osd not recognized / set

2018-10-31 Thread ceph
Isn't this a mgr variable? On 10/31/2018 02:49 PM, Steven Vacaroaia wrote: > Hi, > > Any idea why a different value for mon_max_pg_per_osd is not "recognized"? > I am using mimic 13.2.2 > > Here is what I have in /etc/ceph/ceph.conf
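
One way to check where the option actually takes effect at runtime, and to set it centrally on Mimic; this assumes the daemon names match the short hostname, and the value 400 is only an example:

ceph daemon mon.$(hostname -s) config get mon_max_pg_per_osd   # value the local monitor is using
ceph daemon mgr.$(hostname -s) config get mon_max_pg_per_osd   # value the local manager is using
ceph config set global mon_max_pg_per_osd 400                  # centralized config instead of ceph.conf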

Re: [ceph-users] Verifying the location of the wal

2018-10-28 Thread ceph
IIRC there is a command like "ceph osd metadata" where you should be able to find information like this. HTH - Mehmet On 21 October 2018 19:39:58 CEST, Robert Stanford wrote: > I did exactly this when creating my osds, and found that my total >utilization is about the same as t

[ceph-users] Error while installing ceph

2018-10-13 Thread ceph ceph

Re: [ceph-users] deep scrub error caused by missing object

2018-10-05 Thread ceph
Hello Roman, I am not sure if I can be of help, but perhaps these commands can help to find the objects in question: ceph health detail; rados list-inconsistent-pg rbd; rados list-inconsistent-obj 2.10d. I guess it is also interesting to know whether you use bluestore or filestore... Hth - Mehmet On

Re: [ceph-users] Some questions concerning filestore --> bluestore migration

2018-10-04 Thread ceph
2018 at 4:25 AM Massimo Sgaravatto < >massimo.sgarava...@gmail.com> wrote: > >> Hi >> >> I have a ceph cluster, running luminous, composed of 5 OSD nodes, >which is >> using filestore. >> Each OSD node has 2 E5-2620 v4 processors, 64 GB of RAM, 10x6TB SATA >

Re: [ceph-users] RBD Mirror Question

2018-10-04 Thread ceph
Hello Vikas, Could you please tell us which commands you used to set up rbd-mirror? It would be great if you could provide a short howto :) Thanks in advance - Mehmet On 2 October 2018 22:47:08 CEST, Vikas Rana wrote: >Hi, > >We have a CEPH 3 node cluster at the primary site. W

Re: [ceph-users] [CEPH]-[RADOS] Deduplication feature status

2018-09-27 Thread ceph
As of today, there is no such feature in Ceph. Best regards, On 09/27/2018 04:34 PM, Gaël THEROND wrote: > Hi folks! > > As I’ll soon start to work on a new, really large and distributed CEPH > project for cold data storage, I’m checking out a few features' availability > and status

Re: [ceph-users] backup ceph

2018-09-19 Thread ceph
For cephfs & rgw, it all depends on your needs, as with rbd. You may want to blindly trust Ceph, or you may back up all your data, just in case (better safe than sorry, as they say). To my knowledge, there is no (or little) impact from keeping a large number of snapshots on a cluster. With rbd, you

Re: [ceph-users] backup ceph

2018-09-18 Thread ceph
Hi, I assume that you are speaking of rbd only. Taking snapshots of rbd volumes and keeping all of them on the cluster is fine; however, this is not a backup. A snapshot is only a backup if it is exported off-site. On 09/18/2018 11:54 AM, ST Wong (ITSC) wrote: > Hi, > > We're newbies to Ceph.
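
A rough sketch of off-site export with rbd snapshots; pool, image, snapshot and host names are all made up:

rbd snap create rbd/vm1@2018-09-18                 # point-in-time snapshot
rbd export-diff --from-snap 2018-09-17 rbd/vm1@2018-09-18 - \
  | ssh backuphost "rbd import-diff - backup/vm1"  # ship only the changes since the previous day's snapshot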

Re: [ceph-users] How to setup Ceph OSD auto boot up on node reboot

2018-09-07 Thread ceph
Hi Karri, On 4 September 2018 23:30:01 CEST, Pardhiv Karri wrote: >Hi, > >I created a ceph cluster manually (not using ceph-deploy). When I >reboot >the node, the OSDs don't come back up because the OS doesn't know that >it >needs to bring up the OSDs. I am runnin
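
For a manually deployed cluster the likely fix is simply enabling the systemd units, assuming systemd-managed OSDs; <id> is a placeholder:

systemctl enable ceph-osd@<id>.service   # one unit per OSD id on that node
systemctl enable ceph.target             # pulls the ceph daemons in at boot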

Re: [ceph-users] Understanding the output of dump_historic_ops

2018-09-02 Thread ceph
"event": "done" >} >] >} >}, > >Seems like I have an operation that was delayed over 2 seconds in >queued_for_pg state. >What does that mean? What was it waiting for? > >Regards, >*Ronnie Lazar* >*R* > >T: +972 77 556-1727 >E: ron...@stratoscale.com

Re: [ceph-users] New Ceph community manager: Mike Perez

2018-08-28 Thread ceph
age Weil" wrote: >> >Hi everyone, >> > >> >Please help me welcome Mike Perez, the new Ceph community manager! >> > >> >Mike has a long history with Ceph: he started at DreamHost working >on >> >OpenStack and Ceph back in the early

Re: [ceph-users] Still risky to remove RBD-Images?

2018-08-21 Thread ceph
On 20 August 2018 17:22:35 CEST, Mehmet wrote: >Hello, Hello me, > >AFAIK removing big RBD images would lead Ceph to produce blocked >requests - I don't mean ones caused by poor disks. > >Is this still the case with "Luminous (12.2.4)"? >

Re: [ceph-users] Least impact when adding PG's

2018-08-13 Thread ceph
uld be increased first - not sure which one, but the docs and mailing list history should be helpful. Hope I could give a bit of useful hints - Mehmet >Thanks, > >John

Re: [ceph-users] Optane 900P device class automatically set to SSD not NVME

2018-08-12 Thread ceph
On 1 August 2018 10:33:26 CEST, Jake Grimmett wrote: >Dear All, Hello Jake, > >Not sure if this is a bug, but when I add Intel Optane 900P drives, >their device class is automatically set to SSD rather than NVME. > AFAIK Ceph actually differentiates only between hdd and
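
If the nvme class is wanted anyway, it can be overridden per OSD on Luminous and later; <id> is a placeholder:

ceph osd crush rm-device-class osd.<id>         # the class must be cleared before it can be changed
ceph osd crush set-device-class nvme osd.<id>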

Re: [ceph-users] Running 12.2.5 without problems, should I upgrade to 12.2.7 or wait for 12.2.8?

2018-08-10 Thread ceph
dictions when the 12.2.8 release will be available? > > >Micha Krause

Re: [ceph-users] is there any filesystem like wrapper that dont need to map and mount rbd ?

2018-08-01 Thread ceph
Sounds like cephfs to me. On 08/01/2018 09:33 AM, Will Zhao wrote: > Hi: >I want to use ceph rbd, because it shows better performance. But I don't > like the kernel module and iSCSI target process. So here are my requirements: >I don't want to map it and mount it, but I still want

Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-30 Thread ceph . novice
Your reported "error" also shows up here: [root@sds20 ~]# ceph-detect-init Traceback (most recent call last): File "/usr/bin/ceph-detect-init", line 9, in load_entry_point('ceph-detect-init==1.0.1', 'console_scripts', 'ceph-detect-init')() File "/usr/lib/python2.7/site-

Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-29 Thread ceph . novice
Strange... - wouldn't swear, but pretty sure v13.2.0 was working ok before - so what do others say/see? - no one on v13.2.1 so far (hard to believe) OR - just don't have this "systemctl ceph-osd.target" problem and all just works? If you also __MIGRATED__ from Luminous (say ~ v12.2.

Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-28 Thread ceph . novice
Have you guys changed something with the systemctl startup of the OSDs? I've stopped and disabled all the OSDs on all my hosts via "systemctl stop|disable ceph-osd.target" and rebooted all the nodes. Everything looks just the same. Then I started all the OSD daemons one after the

Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-28 Thread ceph . novice
Hi Sage. Sure. Any specific OSD(s) log(s)? Or just any? Sent: Saturday, 28 July 2018 at 16:49 From: "Sage Weil" To: ceph.nov...@habmalnefrage.de, ceph-users@lists.ceph.com, ceph-de...@vger.kernel.org Subject: Re: [ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic r

[ceph-users] HELP! --> CLUSTER DOWN (was "v13.2.1 Mimic released")

2018-07-28 Thread ceph . novice
Dear users and developers. I've updated our dev-cluster from v13.2.0 to v13.2.1 yesterday and since then everything is badly broken. I've restarted all Ceph components via "systemctl" and also rebooted the servers SDS21 and SDS24; nothing changes. This cluster started as Kraken, w

[ceph-users] ceph bluestore data cache on osd

2018-07-23 Thread nokia ceph
Hi Team, We need a mechanism to have some data cache on OSDs built on bluestore. Is there an option available to enable the data cache? With default configurations, the OSD logs state that the data cache is disabled by default: bluestore(/var/lib/ceph/osd/ceph-66) _set_cache_sizes cache_size 1073741824
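
A sketch of the Luminous-era cache knobs behind this, for the [osd] section of ceph.conf; the sizes and ratios are illustrative, and whatever is left after the KV and metadata shares is what caches object data:

bluestore cache size hdd = 3221225472    # 3 GiB total cache per HDD-backed OSD
bluestore cache size ssd = 3221225472
bluestore cache kv ratio = 0.2           # share for the RocksDB block cache
bluestore cache meta ratio = 0.2         # share for onode metadata; the remainder is the data cache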

Re: [ceph-users] mimic (13.2.0) and "Failed to send data to Zabbix"

2018-07-12 Thread ceph . novice
There was no change in the ZABBIX environment... I got this warning some minutes after the Linux and Luminous->Mimic update via YUM and a reboot of all the Ceph servers... Is there anyone who also had the ZABBIX module enabled under Luminous AND then migrated to Mimic? If yes, does it w

Re: [ceph-users] mimic (13.2.0) and "Failed to send data to Zabbix"

2018-07-11 Thread ceph . novice
This is the problem: the zabbix_sender process is exiting with a non-zero status. You didn't change anything? You just upgraded from Luminous to Mimic and this came along? Wido

[ceph-users] mimic (13.2.0) and "Failed to send data to Zabbix"

2018-07-11 Thread ceph . novice
anyone with "mgr Zabbix enabled" migrated from Luminous (12.2.5 or 5) and has the same problem in Mimic now? if I disable and re-enable the "zabbix" module, the status is "HEALTH_OK" for some sec. and changes to "HEALTH_WARN" again...
