[ceph-users] Cannot create Initial Monitor

2015-12-03 Thread Aakanksha Pudipeddi-SSI
Hello Cephers, I am unable to create the initial monitor during ceph cluster deployment. I do not know what changed, since the same recipe used to work until very recently. These are the steps I used: ceph-deploy new (works), dpkg -i -R (works), ceph-deploy mon create-initial (fails). Log:

Re: [ceph-users] ceph-disk activate Permission denied problems

2015-12-03 Thread Goncalo Borges
Hi Adrien... Thanks for the pointer. It effectively solved our issue. Cheers G. On 12/04/2015 12:53 AM, Adrien Gillard wrote: This is the clean way to handle this. But you can also use udev to do this at boot. From what I found on the mailing list and got working before using GUIDs: cat >

[ceph-users] Fwd: Confused about priority of client OP.

2015-12-03 Thread Wukongming
Hi haomai, A bit tough question I asked above, but do you know the answer? - wukongming ID: 12019 Tel: 0571-86760239 Dept: 2014 UIS2 ONEStor -----Original Message----- From: wukongming 12019 (RD) Sent: 2015-12-03 22:15 To: ceph-de...@vger.kernel.org;

[ceph-users] Re: How long will the logs be kept?

2015-12-03 Thread Wukongming
Yes, I can find the ceph logrotate configuration file in the /etc/logrotate.d directory. Also, I found something weird:
drwxr-xr-x  2 root root   4.0K Dec 3 14:54 ./
drwxrwxr-x 19 root syslog 4.0K Dec 3 13:33 ../
-rw------- 1 root root 0 Dec 2 06:25 ceph.audit.log
-rw------- 1 root root 85K

[ceph-users] ceph-disk list crashes in infernalis

2015-12-03 Thread Stolte, Felix
Hi all, I upgraded from hammer to infernalis today, and even though I had a hard time doing so, I finally got my cluster running in a healthy state (mainly my fault, because I did not read the release notes carefully). But when I try to list my disks with "ceph-disk list" I get the following

[ceph-users] ceph infernalis - cannot find the dependency package selinux-policy-base-3.13.1-23.el7_1.18.noarch.rpm

2015-12-03 Thread Xiangyu (Raijin, BP Dept)
When installing Ceph Infernalis (v9.2.0), it requires the package selinux-policy-base-3.13.1-23.el7_1.18.noarch.rpm. I tried searching for it on Google but got nothing. Does anyone know how to get it?

Re: [ceph-users] Ceph Sizing

2015-12-03 Thread Nick Fisk
I would suggest you forget about 15k disks; there probably isn't much point in using them vs SSDs nowadays. For 10k disks, if cost is a key factor I would maybe look at the WD Raptor disks. In terms of numbers of disks, it's very hard to calculate with the numbers you have provided. That

Re: [ceph-users] New cluster performance analysis

2015-12-03 Thread Nick Fisk
A couple of things to check (see the sketch below): 1. Can you create a plain non-cached pool and test performance against it, to rule out anything odd going on there? 2. Can you also run something like iostat during the benchmarks and see whether all your disks are getting saturated?
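
A minimal sketch of those two checks, assuming a hypothetical pool name of testbench and 128 placement groups (adjust for your cluster; rados bench and iostat are the stock tools):

    # 1. create a plain replicated pool with no cache tier and benchmark it
    ceph osd pool create testbench 128 128
    rados bench -p testbench 30 write --no-cleanup
    rados bench -p testbench 30 seq

    # 2. meanwhile, on each OSD host, watch per-disk utilisation
    iostat -x 1

If %util sits near 100 on the spinners while the bench runs, the disks themselves are the bottleneck.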

Re: [ceph-users] How long will the logs be kept?

2015-12-03 Thread Jan Schermer
You can set up logrotate however you want - not sure what the default is for your distro. Usually logrotate doesn't touch files that are smaller than some size, even if they are old. It will also not delete logs for OSDs that no longer exist. Ceph itself has nothing to do with log rotation,
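
For illustration only - this is not the stock packaged file, so check the /etc/logrotate.d/ceph your distro ships - a minimal policy covering the behaviour described above might look like:

    /var/log/ceph/*.log {
        daily
        rotate 7
        minsize 100k
        compress
        missingok
        notifempty
    }

Here minsize is the size threshold below which old files are left alone, and nothing in the policy removes logs for OSDs that no longer exist - hence stale files piling up.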

Re: [ceph-users] Ceph Sizing

2015-12-03 Thread Sam Huracan
I'm following this presentation from the Mirantis team: http://www.slideshare.net/mirantis/ceph-talk-vancouver-20 They calculate Ceph IOPS = Disk IOPS * HDD Quantity * 0.88 (4-8k random read proportion), and VM IOPS = Ceph IOPS / VM Quantity. But if I use a replication factor of 3, would VM IOPS be divided by
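
To make the formula concrete, a worked example with hypothetical numbers (100 IOPS per spinner, 72 HDDs, 100 VMs), including the divide-by-replica-count step the question asks about - whether that step is the right model is exactly what is in doubt:

    # Ceph IOPS = Disk IOPS * HDD Quantity * 0.88
    echo '100 * 72 * 0.88' | bc -l            # 6336 cluster IOPS
    # if writes are additionally divided by the replication factor of 3:
    echo '100 * 72 * 0.88 / 3' | bc -l        # 2112 cluster write IOPS
    # VM IOPS = Ceph IOPS / VM Quantity
    echo '100 * 72 * 0.88 / 3 / 100' | bc -l  # ~21 write IOPS per VM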

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-03 Thread Loic Dachary
Hi Felix, This is a bug, I filed an issue for you at http://tracker.ceph.com/issues/13970 Cheers On 03/12/2015 10:56, Stolte, Felix wrote: > Hi all, > > > > i upgraded from hammer to infernalis today and even so I had a hard time > doing so I finally got my cluster running in a healthy

Re: [ceph-users] Confused about priority of client OP.

2015-12-03 Thread huang jun
In SimpleMessenger, client ops like OSD_OP are dispatched by ms_fast_dispatch and are not queued in the PrioritizedQueue in the Messenger. 2015-12-03 22:14 GMT+08:00 Wukongming : > Hi, All: > I've got a question about a priority. We defined > osd_client_op_priority = 63.

Re: [ceph-users] ceph infernalis - cannot find the dependency package selinux-policy-base-3.13.1-23.el7_1.18.noarch.rpm

2015-12-03 Thread Alfredo Deza
What distribution and version/release are you trying to install it on? On a CentOS 7 box I see it is available:
$ sudo yum provides selinux-policy-base
...
selinux-policy-minimum-3.13.1-23.el7.noarch : SELinux minimum base policy
Repo: base
Matched from:
Provides: selinux-policy-base
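
Since several packages provide the selinux-policy-base capability, installing any of them should satisfy the dependency. A sketch, assuming a CentOS 7 box (selinux-policy-targeted, the usual default policy, also provides selinux-policy-base there):

    sudo yum install selinux-policy-targeted
    # or, matching the yum output above:
    sudo yum install selinux-policy-minimum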

Re: [ceph-users] ceph-disk activate Permission denied problems

2015-12-03 Thread Adrien Gillard
This is the clean way to handle this. But you can also use udev to do this at boot. From what I found on the mailing list and got working before using GUIDs: cat > /etc/udev/rules.d/89-ceph-journal.rules << EOF KERNEL=="sda?" SUBSYSTEM=="block" OWNER="ceph" GROUP="disk" MODE="0660"
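
The snippet is cut off above; a complete sketch of that rule file, assuming the journal partitions live on sda (udev expects commas between the match and assignment keys, which the flattened quote loses):

    cat > /etc/udev/rules.d/89-ceph-journal.rules << EOF
    KERNEL=="sda?", SUBSYSTEM=="block", OWNER="ceph", GROUP="disk", MODE="0660"
    EOF
    udevadm control --reload-rules && udevadm trigger

This reapplies the ownership on every boot, keeping the journal devices writable by the ceph user after the Infernalis switch to running daemons as ceph.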

Re: [ceph-users] New cluster performance analysis

2015-12-03 Thread Adrien Gillard
I did some more tests: fio on a raw RBD volume (4K, numjobs=32, QD=1) gives me around 3000 IOPS. I also tuned the xfs mount options on the client (I realized I hadn't done that already), and with "largeio,inode64,swalloc,logbufs=8,logbsize=256k,attr2,auto,nodev,noatime,nodiratime" I get better performance:
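
For reference, a hypothetical fio invocation matching those parameters (4K blocks, 32 jobs, queue depth 1) - randwrite is assumed here, since the post doesn't say whether the test was reads or writes, and /dev/rbd0 is a placeholder for the mapped volume:

    fio --name=rbd-4k-qd1 --filename=/dev/rbd0 \
        --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --numjobs=32 --iodepth=1 \
        --runtime=60 --time_based --group_reporting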

[ceph-users] Confused about priority of client OP.

2015-12-03 Thread Wukongming
Hi, All: I've got a question about priorities. We defined osd_client_op_priority = 63, and CEPH_MSG_PRIO_LOW = 64. We are clear that there are multiple kinds of IO to be discussed. Why not define osd_client_op_priority > 64, so that client IO is simply handled at the highest priority?

Re: [ceph-users] v9.2.0 Infernalis released

2015-12-03 Thread François Lafont
Hi, On 03/12/2015 12:12, Florent B wrote: > It seems that if some OSDs are using journal devices, the ceph user needs to > be a member of the "disk" group on Debian. Can someone confirm this? Yes, I confirm... if you are talking about the journal partitions of OSDs. Another solution: via a udev rule,
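
A sketch of the group-membership fix, assuming the standard Debian ceph user and disk group (the udev-rule alternative is spelled out in the ceph-disk activate thread above):

    # add the ceph user to the disk group so it can open the journal partitions
    usermod -a -G disk ceph

Group membership only applies to newly started processes, so restart the OSD daemons afterwards.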

Re: [ceph-users] ceph-osd@.service does not mount OSD data disk

2015-12-03 Thread Timofey Titovets
Some users have already asked on the list about this problem on Debian. You can fix it with: ln -sv Or with: systemctl edit --full ceph-disk@.service Just choose one way. 2015-12-03 23:00 GMT+03:00 Florent B : > Ok and /bin/flock is supposed to exist on all systems ? Don't have it on >

Re: [ceph-users] ceph-osd@.service does not mount OSD data disk

2015-12-03 Thread Jan Schermer
echo add > /sys/block/sdX/sdXY/uevent - the easiest way to make it mount automagically. Jan > On 03 Dec 2015, at 20:31, Timofey Titovets wrote: > > Lol, it's opensource guys > https://github.com/ceph/ceph/tree/master/systemd > ceph-disk@ > > 2015-12-03 21:59 GMT+03:00
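
With a concrete (hypothetical) OSD data partition, say sdb1, that would be:

    # re-emit the 'add' uevent so udev re-runs the ceph rules,
    # which detect the OSD partition and mount/activate it
    echo add > /sys/block/sdb/sdb1/uevent

The kernel replays the add event for that partition, and the ceph udev rules handle it exactly as they would at boot.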

Re: [ceph-users] ceph-osd@.service does not mount OSD data disk

2015-12-03 Thread Timofey Titovets
Lol, it's open source, guys: https://github.com/ceph/ceph/tree/master/systemd ceph-disk@ 2015-12-03 21:59 GMT+03:00 Florent B : > "ceph" service does mount : > > systemctl status ceph -l > ● ceph.service - LSB: Start Ceph distributed file system daemons at boot > time >

Re: [ceph-users] Ceph Sizing

2015-12-03 Thread Warren Wang - ISD
I would be a lot more conservative in terms of what a spinning drive can do. The Mirantis presentation has pretty high expectations of a spinning drive, as they're somewhat ignoring latency (until the last few slides). Look at the max latencies for anything above 1 QD on a spinning drive. If

Re: [ceph-users] ceph-osd@.service does not mount OSD data disk

2015-12-03 Thread Adrien Gillard
I think OSDs are automatically mounted at boot via udev rules, and the ceph service does not handle the mounting part. On Thu, Dec 3, 2015 at 7:40 PM, Florent B wrote: > Hi, > > On 12/03/2015 07:36 PM, Timofey Titovets wrote: > > > On 3 Dec 2015 8:56 p.m., "Florent B" <

Re: [ceph-users] ceph-osd@.service does not mount OSD data disk

2015-12-03 Thread Loic Dachary
Hi, On 03/12/2015 21:00, Florent B wrote: > Ok and /bin/flock is supposed to exist on all systems ? Don't have it on > Debian... flock is at /usr/bin/flock. I filed a bug for this: http://tracker.ceph.com/issues/13975 Cheers > > My problem is that "ceph" service is doing everything, and all
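
A trivial, non-Ceph-specific way to check where (or whether) flock exists on a given box:

    command -v flock   # prints /usr/bin/flock on Debian when present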

Re: [ceph-users] Flapping OSDs, Large meta directories in OSDs

2015-12-03 Thread Tom Christensen
We were able to prevent the blacklist operations, and now the cluster is much happier, however, the OSDs have not started cleaning up old osd maps after 48 hours. Is there anything we can do to poke them to get them to start cleaning up old osd maps? On Wed, Dec 2, 2015 at 11:25 AM, Gregory

Re: [ceph-users] [Ceph-maintainers] ceph packages link is gone

2015-12-03 Thread Dan Mick
This was sent to the ceph-maintainers list; answering here: On 11/25/2015 02:54 AM, Alaâ Chatti wrote: > Hello, > > I used to install qemu-ceph on centos 6 machine from > http://ceph.com/packages/, but the link has been removed, and there is > no alternative in the documentation. Would you

Re: [ceph-users] ceph-osd@.service does not mount OSD data disk

2015-12-03 Thread Timofey Titovets
On 3 Dec 2015 8:56 p.m., "Florent B" wrote: > > By the way, when the system boots, the "ceph" service is starting everything > fine. So the "ceph-osd@" service is disabled => how to restart an OSD ?! > AFAIK, ceph now has 2 services: 1. Mount the device 2. Start the OSD Also, the service can be

Re: [ceph-users] Remap PGs with size=1 on specific OSD

2015-12-03 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 Reweighting the OSD to 0.0 or setting the osd out (but not terminating the process) should allow it to backfill the PGs to a new OSD. I would try the reweight first (and in a test environment). - Robert LeBlanc PGP Fingerprint 79A2
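
A sketch of both options, using a hypothetical osd.12 (Ceph has both a CRUSH weight and an override reweight; the CRUSH variant and the out flag are shown here):

    # drain osd.12 by zeroing its CRUSH weight; the daemon keeps running
    ceph osd crush reweight osd.12 0.0
    # alternatively, mark it out without terminating the process
    ceph osd out 12
    # then follow the backfill
    ceph -w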

Re: [ceph-users] Remap PGs with size=1 on specific OSD

2015-12-03 Thread Timofey Titovets
On 3 Dec 2015 9:35 p.m., "Robert LeBlanc" wrote: > > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > Reweighting the OSD to 0.0 or setting the osd out (but not terminating > the process) should allow it to backfill the PGs to a new OSD. I would > try the reweight first

Re: [ceph-users] [Ceph-maintainers] ceph packages link is gone

2015-12-03 Thread Ken Dreyer
On Thu, Dec 3, 2015 at 5:53 PM, Dan Mick wrote: > This was sent to the ceph-maintainers list; answering here: > > On 11/25/2015 02:54 AM, Alaâ Chatti wrote: >> Hello, >> >> I used to install qemu-ceph on centos 6 machine from >> http://ceph.com/packages/, but the link has been

[ceph-users] cephfs ceph: fill_inode badness

2015-12-03 Thread Don Waterloo
I have a file which is untouchable: ls -i gives an error, stat gives an error, and it shows ??? for all fields except the name. How do I clean this up? I'm on Ubuntu 15.10, running 0.94.5. # ceph -v ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) The node that accessed the file then

[ceph-users] ceph-deploy osd prepare for journal size 0

2015-12-03 Thread Mike Miller
Hi, for testing I would like to create some OSDs in the hammer release with journal size 0. I included this in ceph.conf: [osd] osd journal size = 0 Then I zapped the disk in question and tried: 'ceph-deploy disk zap o1:sda' Thank you for your advice on how to prepare an osd without a journal /
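
Presumably the step after the zap is the prepare call; a sketch of the whole sequence under the same assumptions (host o1, disk sda, hammer-era ceph-deploy syntax):

    # in ceph.conf on the admin node
    [osd]
    osd journal size = 0

    ceph-deploy disk zap o1:sda
    ceph-deploy osd prepare o1:sda

Whether ceph-osd actually accepts a zero-sized journal at --mkjournal time is the open question here; the configuration syntax itself is as shown.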

[ceph-users] Bug on rbd rm when using cache tiers Was: OSD on XFS ENOSPC at 84% data / 5% inode and inode64?

2015-12-03 Thread Laurent GUERBY
On Fri, 2015-11-27 at 10:00 +0100, Laurent GUERBY wrote: > > > > Hi, from given numbers one can conclude that you are facing some kind > > of XFS preallocation bug, because ((raw space divided by number of > > files)) is four times lower than the ((raw space divided by 4MB > > blocks)). At a