Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Alexandre DERUMIER
Hi, there are missing target files in the Debian packages http://tracker.ceph.com/issues/15573 https://github.com/ceph/ceph/pull/8700 I have also filed some other trackers about packaging bugs in jewel: debian package: wrong /etc/default/ceph/ceph location http://tracker.ceph.com/issues/15587

Re: [ceph-users] radosgw crash - Infernalis

2016-04-27 Thread Ben Hines
Aha, I see how to use the debuginfo - trying it by running through gdb. On Wed, Apr 27, 2016 at 10:09 PM, Ben Hines wrote: > Got it again - however, the stack is exactly the same, no symbols - > debuginfo didn't resolve. Do I need to do something to enable that? > > The
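One way to catch the next crash with resolved symbols, assuming the matching ceph-debuginfo and glibc debuginfo packages are installed, is to attach gdb to the running radosgw and wait for it to fault (the PID lookup is illustrative):
  # gdb -p $(pidof radosgw)
  (gdb) continue
  ... wait for the crash ...
  (gdb) thread apply all bt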

Re: [ceph-users] radosgw crash - Infernalis

2016-04-27 Thread Brad Hubbard
- Original Message - > From: "Ben Hines" > To: "Brad Hubbard" > Cc: "Karol Mroz" , "ceph-users" > Sent: Thursday, 28 April, 2016 3:09:16 PM > Subject: Re: [ceph-users] radosgw crash - Infernalis > Got

Re: [ceph-users] radosgw crash - Infernalis

2016-04-27 Thread Ben Hines
Got it again - however, the stack is exactly the same, no symbols - debuginfo didn't resolve. Do I need to do something to enable that? The server was running with 'debug ms=10' this time, so there is a bit more spew: -14> 2016-04-27 21:59:58.811919 7f9e817fa700 1 -- 10.30.1.8:0/3291985349 -->
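For reference, one way to keep the messenger logging raised across restarts is a ceph.conf section on the gateway host, followed by a radosgw restart; the client section name below is only an example and must match the local instance:
  [client.radosgw.gateway]
      debug ms = 10
      debug rgw = 20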

Re: [ceph-users] radosgw crash - Infernalis

2016-04-27 Thread Ben Hines
Yes, CentOS 7.2. Happened twice in a row, both times shortly after a restart, so I expect I'll be able to reproduce it. However, I've now tried a bunch of times and it's not happening again. In any case I have glibc + ceph-debuginfo installed so we can get more info if it does happen. Thanks!

[ceph-users] about slides on VAULT of 2016

2016-04-27 Thread 席智勇
Hi Sage, I found the slides from VAULT 2016 on this page (http://events.linuxfoundation.org/events/vault/program/slides), but the set seems incomplete compared to the schedule, and I didn't find yours. Could you share your slides or anything else useful from VAULT about BlueStore? Regards~

Re: [ceph-users] radosgw crash - Infernalis

2016-04-27 Thread Brad Hubbard
- Original Message - > From: "Karol Mroz" > To: "Ben Hines" > Cc: "ceph-users" > Sent: Wednesday, 27 April, 2016 7:06:56 PM > Subject: Re: [ceph-users] radosgw crash - Infernalis > > On Tue, Apr 26, 2016 at 10:17:31PM -0700,

[ceph-users] Jewel Compilaton Error

2016-04-27 Thread Dyweni - Ceph-Users
Hi List, Ceph 10.2.0 errors out during compilation when compiling without radosgw support. ./configure --prefix=/usr --build=i686-pc-linux-gnu --host=i686-pc-linux-gnu --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --sysconfdir=/etc --localstatedir=/var/lib
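For reference, a hedged sketch of requesting a build without RGW with the autotools build of that era; the exact flag name should be confirmed against ./configure --help before relying on it:
  $ ./configure --help | grep -i radosgw
  $ ./configure --prefix=/usr --without-radosgw [remaining flags as above]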

Re: [ceph-users] pgnum warning and decrease

2016-04-27 Thread Christian Balzer
Hello, On Wed, 27 Apr 2016 22:55:35 + Carlos M. Perez wrote: > Hi, > > My current setup is running on 12 OSDs split between 3 hosts. We're > using this for VMs (Proxmox) and nothing else. > I assume evenly split (4 OSDs per host)? > According to: >

Re: [ceph-users] "rbd diff" disparity vs mounted usage

2016-04-27 Thread Tyler Wilson
Hello Jason, Yes, I believe that is my question. Is there any way I can reclaim the space for this disk? On Wed, Apr 27, 2016 at 1:25 PM, Jason Dillaman wrote: > The image size (50G) minus the fstrim size (1.7G) approximately equals > the actual usage (48.19G).

[ceph-users] pgnum warning and decrease

2016-04-27 Thread Carlos M. Perez
Hi, My current setup is running on 12 OSDs split between 3 hosts. We're using this for VMs (Proxmox) and nothing else. According to: http://docs.ceph.com/docs/master/rados/operations/placement-groups/ - my pg_num should be set to 4096. If I use the calculator, and put in Size 3, OSD 12,
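For reference, the rule of thumb behind that calculator works out roughly as follows for this setup: total PGs ≈ (12 OSDs × ~100 PGs per OSD) / replica size 3 = 400, rounded up to the next power of two = 512 for a single pool (to be split across pools if there are several).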

Re: [ceph-users] mount -t ceph

2016-04-27 Thread David Disseldorp
Hi Tom, On Wed, 27 Apr 2016 20:17:51 +, Deneau, Tom wrote: > I was using SLES 12, SP1 which has 3.12.49 > > It did have a /usr/sbin/mount.ceph command but using it gave > modprobe: FATAL: Module ceph not found. > failed to load ceph kernel module (1) The SLES 12 SP1 kernel doesn't

[ceph-users] osd problem upgrading from hammer to jewel

2016-04-27 Thread Randy Orr
Hi, I have a small dev/test ceph cluster that sat neglected for quite some time. It was on the firefly release until very recently. I successfully upgraded from firefly to hammer without issue as an intermediate step to get to the latest jewel release. This cluster has 3 ubuntu 14.04 hosts with

Re: [ceph-users] "rbd diff" disparity vs mounted usage

2016-04-27 Thread Jason Dillaman
The image size (50G) minus the fstrim size (1.7G) approximately equals the actual usage (48.19G). Therefore, I guess the question is why doesn't fstrim think it can discard more space? On a semi-related note, we should probably improve the rbd copy sparsify logic. Right now it requires the full

Re: [ceph-users] mount -t ceph

2016-04-27 Thread Gregory Farnum
On Wed, Apr 27, 2016 at 3:17 PM, Deneau, Tom wrote: > I was using SLES 12, SP1 which has 3.12.49 > > It did have a /usr/sbin/mount.ceph command but using it gave > modprobe: FATAL: Module ceph not found. > failed to load ceph kernel module (1) So that's about what is in

Re: [ceph-users] mount -t ceph

2016-04-27 Thread Deneau, Tom
I was using SLES 12, SP1 which has 3.12.49 It did have a /usr/sbin/mount.ceph command but using it gave modprobe: FATAL: Module ceph not found. failed to load ceph kernel module (1) -- Tom > -Original Message- > From: Gregory Farnum [mailto:gfar...@redhat.com] > Sent: Wednesday,

Re: [ceph-users] mount -t ceph

2016-04-27 Thread Gregory Farnum
On Wed, Apr 27, 2016 at 2:55 PM, Deneau, Tom wrote: > What kernel versions are required to be able to use CephFS thru mount -t ceph? The CephFS kernel client has been in for ages (2.6.34, I think?), but you want the absolute latest you can make happen if you're going to try
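A quick way to confirm the running kernel actually ships the CephFS client before trying the mount (monitor address, mount point, and key are placeholders):
  $ uname -r
  $ modinfo ceph >/dev/null 2>&1 && echo "ceph kernel module available"
  # mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secret=<admin key>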

[ceph-users] mount -t ceph

2016-04-27 Thread Deneau, Tom
What kernel versions are required to be able to use CephFS thru mount -t ceph? -- Tom Deneau

Re: [ceph-users] "rbd diff" disparity vs mounted usage

2016-04-27 Thread Tyler Wilson
Hello Jason, Thanks for the quick reply, this was copied from a VM instance snapshot to my backup pool (rbd snap create, rbd cp (to backup pool), rbd snap rm). I've tried piping through grep per your recommendation and it still reports the same usage $ rbd diff
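For context, that backup sequence looks roughly like this (the source pool, image, and snapshot names are made up; the destination is the backup image named in this thread):
  $ rbd snap create vms/instance-disk@backup-20160427
  $ rbd cp vms/instance-disk@backup-20160427 backup/cd4e5d37-3023-4640-be5a-5577d3f9307e
  $ rbd snap rm vms/instance-disk@backup-20160427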

Re: [ceph-users] "rbd diff" disparity vs mounted usage

2016-04-27 Thread Jason Dillaman
On Wed, Apr 27, 2016 at 2:07 PM, Tyler Wilson wrote: > $ rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9307e | awk '{ SUM += $2 } > END { print SUM/1024/1024 " MB" }' > 49345.4 MB Is this a cloned image? That awk trick doesn't account for discarded regions (i.e. when
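rbd diff prints an offset, a length, and an extent type (data or zero) per line; a sketch of a sum that skips the discarded extents, which may or may not be what was meant here:
  $ rbd diff backup/cd4e5d37-3023-4640-be5a-5577d3f9307e | \
      awk '$3 == "data" { SUM += $2 } END { print SUM/1024/1024 " MB" }'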

[ceph-users] "rbd diff" disparity vs mounted usage

2016-04-27 Thread Tyler Wilson
Hello All, I am currently trying to get an accurate count of bytes used for an rbd image. I've tried trimming the filesystem, which frees about 1.7 GB, however there is still a huge disparity between the size reported in the filesystem and what 'rbd diff' shows; $ rbd map

Re: [ceph-users] NO mon start after Jewel Upgrade using systemctl

2016-04-27 Thread Iban Cabrillo
Hi Karsten, I have checked that the files are the same as the git ones. -rw-r--r-- 1 root root 810 Apr 20 18:45 /lib/systemd/system/ceph-mon@.service -rw-r--r-- 1 root root 162 Apr 20 18:45 /lib/systemd/system/ceph-mon.target [root@cephmon03 ~]# cat /lib/systemd/system/ceph-mon.target [Unit]

Re: [ceph-users] Any Docs to configure NFS to access RADOSGW buckets on Jewel

2016-04-27 Thread Matt Benjamin
Hi WD, No, it's not the same. The new mechanism uses an nfs-ganesha server to export the RGW namespace. Some upstream documentation will be forthcoming... Regards, Matt - Original Message - > From: "WD Hwang" > To: "a jazdzewski"
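As a very rough sketch (not official documentation) of what an nfs-ganesha export of the RGW namespace can look like, with every name and key below a placeholder:
  EXPORT {
      Export_ID = 1;
      Path = "/";
      Pseudo = "/rgw";
      Access_Type = RW;
      FSAL {
          Name = RGW;
          User_Id = "s3user";
          Access_Key_Id = "<access key>";
          Secret_Access_Key = "<secret key>";
      }
  }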

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Karsten Heymann
2016-04-27 15:18 GMT+02:00 Loris Cuoghi : > On 27/04/2016 14:45, Karsten Heymann wrote: >> one workaround I found was to add >> >> [Install] >> WantedBy=ceph-osd.target >> >> to /lib/systemd/system/ceph-disk@.service and then manually enable my >> disks with >> >> #

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Loris Cuoghi
On 27/04/2016 14:45, Karsten Heymann wrote: Hi, one workaround I found was to add [Install] WantedBy=ceph-osd.target to /lib/systemd/system/ceph-disk@.service and then manually enable my disks with # systemctl enable ceph-disk\@dev-sdi1 # systemctl start ceph-disk\@dev-sdi1 That way

Re: [ceph-users] NO mon start after Jewel Upgrade using systemctl

2016-04-27 Thread Karsten Heymann
Hi Iban, the current jewel packages seem to be missing some important systemd files. Try to copy https://github.com/ceph/ceph/blob/master/systemd/ceph-mon.target to /lib/systemd/system and enable it: systemctl enable ceph-mon.target I also would disable the legacy init script with systemctl
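A sketch of that workaround, assuming the file from the link above has been downloaded to the current directory:
  # cp ceph-mon.target /lib/systemd/system/
  # systemctl daemon-reload
  # systemctl enable ceph-mon.target
  # systemctl start ceph-mon.target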

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Karsten Heymann
Hi, one workaround I found was to add [Install] WantedBy=ceph-osd.target to /lib/systemd/system/ceph-disk@.service and then manually enable my disks with # systemctl enable ceph-disk\@dev-sdi1 # systemctl start ceph-disk\@dev-sdi1 That way they at least are started at boot time. Best
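A variation on that workaround which leaves the packaged unit untouched is to copy it to /etc (which takes precedence over /lib) and append the [Install] section there; a sketch, untested:
  # cp /lib/systemd/system/ceph-disk@.service /etc/systemd/system/
  # printf '\n[Install]\nWantedBy=ceph-osd.target\n' >> /etc/systemd/system/ceph-disk@.service
  # systemctl daemon-reload
  # systemctl enable ceph-disk@dev-sdi1 && systemctl start ceph-disk@dev-sdi1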

Re: [ceph-users] Slow read on RBD mount, Hammer 0.94.5

2016-04-27 Thread Mike Miller
Nick, all, fantastic, that did it! I installed kernel 4.5.2 on the client, now the single threaded read performance using krbd mount is up to about 370 MB/s with standard 256 readahead size, and touching 400 MB/s with larger readahead sizes. All single threaded. Multi-threaded krbd read on
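For reference, the readahead of a mapped krbd device can be inspected and changed per device; rbd0 and the 4 MB value below are only examples:
  # blockdev --getra /dev/rbd0                        # current readahead, in 512-byte sectors
  # echo 4096 > /sys/block/rbd0/queue/read_ahead_kb   # raise readahead to 4 MB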

[ceph-users] NO mon start after Jewel Upgrade using systemctl

2016-04-27 Thread Iban Cabrillo
Hi cephers, I've been following the upgrade instructions... but I'm sure I did something wrong. I just upgraded using ceph-deploy on one monitor (after, of course, stopping the mon service). Then I chowned /var/lib/ceph and /var/log/ceph to the ceph user. [root@cephmon03 ~]# systemctl start
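For Jewel the daemons run as the ceph user, so the ownership change usually looks something like this (assuming the mon id matches the hostname):
  # systemctl stop ceph-mon@cephmon03
  # chown -R ceph:ceph /var/lib/ceph /var/log/ceph
  # systemctl start ceph-mon@cephmon03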

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Loris Cuoghi
On 27/04/2016 13:51, Karsten Heymann wrote: Hi Loris, thank you for your feedback. As I plan to go into production with the cluster later this year, I'm really hesitant to update udev and systemd to a version newer than jessie, especially as there is no official backport for those packages yet. I

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Karsten Heymann
Hi Loris, thank you for your feedback. As I plan to go into production with the cluster later this year, I'm really hesitant to update udev and systemd to a version newer than jessie, especially as there is no official backport for those packages yet. I really would expect ceph to work out of the box

[ceph-users] Unable to unmap rbd device (Jewel)

2016-04-27 Thread Diego Castro
Here's one of my nodes that can't unmap a device: [root@nodebr6 ~]# rbd unmap /dev/rbd0 2016-04-27 11:36:56.975668 7fcd61ae67c0 -1 did not load config file, using default settings. Option -q no longer supported. rbd: run_cmd(udevadm): exited with status 1 rbd: sysfs write failed rbd: unmap
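When unmap fails like this it is worth checking what still holds the device before retrying; a generic sketch (the mount point is a placeholder):
  # rbd showmapped
  # grep rbd0 /proc/mounts        # still mounted?
  # lsof /dev/rbd0                # open file handles?
  # umount /mnt/somewhere && rbd unmap /dev/rbd0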

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Loris Cuoghi
Hi Karsten, I've had the same experience updating our test cluster (Debian 8) from Infernalis to Jewel. I've updated udev/systemd to the version in testing (so, from 215 to 229), and it worked much better at reboot. So... Are the udev rules written for the udev version in RedHat (219) or

[ceph-users] Fwd: google perftools on ceph-osd

2016-04-27 Thread 席智勇
Can anyone give me some advice? -- Forwarded message -- From: Date: 2016-04-26 18:50 GMT+08:00 Subject: google perftools on ceph-osd To: Stefan Priebe - Profihost AG Hi Stefan: While we are using Ceph, I found the osd process uses
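For what it's worth, one built-in entry point to gperftools on a running OSD is the heap profiler, driven over the admin interface (osd.0 is just an example):
  # ceph tell osd.0 heap start_profiler
  # ceph tell osd.0 heap stats
  # ceph tell osd.0 heap dump
  # ceph tell osd.0 heap stop_profiler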

Re: [ceph-users] radosgw crash - Infernalis

2016-04-27 Thread Karol Mroz
On Tue, Apr 26, 2016 at 10:17:31PM -0700, Ben Hines wrote: [...] > --> 10.30.1.6:6800/10350 -- osd_op(client.44852756.0:79 > default.42048218. [getxattrs,stat,read 0~524288] 12.aa730416 > ack+read+known_if_redirected e100207) v6 -- ?+0 0x7f49c41880b0 con > 0x7f49c4145eb0 > 0> 2016-04-26

Re: [ceph-users] Any Docs to configure NFS to access RADOSGW buckets on Jewel

2016-04-27 Thread WD_Hwang
Hi Ansgar, Thanks for your information. I have tried 's3fs-fuse' to mount RADOSGW buckets on an Ubuntu client node. It works. But I am not sure this is the technique for accessing RADOSGW buckets via NFS on Jewel. Best Regards, WD -Original Message- From: ceph-users
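For comparison, an s3fs-fuse mount against RGW typically looks something like this; the endpoint, bucket, and credentials are placeholders, and the Jewel RGW-NFS feature is a different, nfs-ganesha-based mechanism:
  $ echo 'ACCESS_KEY:SECRET_KEY' > ~/.passwd-s3fs && chmod 600 ~/.passwd-s3fs
  $ s3fs mybucket /mnt/mybucket -o passwd_file=~/.passwd-s3fs \
      -o url=http://rgw.example.com:7480 -o use_path_request_style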

Re: [ceph-users] Any Docs to configure NFS to access RADOSGW buckets on Jewel

2016-04-27 Thread Ansgar Jazdzewski
All the information I have so far is from FOSDEM: https://fosdem.org/2016/schedule/event/virt_iaas_ceph_rados_gateway_overview/attachments/audio/1079/export/events/attachments/virt_iaas_ceph_rados_gateway_overview/audio/1079/Fosdem_RGW.pdf Cheers, Ansgar 2016-04-27 2:28 GMT+02:00

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Karsten Heymann
Add-on question: What is the intended purpose of ceph-disk@.service? I can run systemctl start ceph-disk@/dev/sdr1 but I can't 'enable' it like the ceph-osd@.service, so why is it there? Best regards Karsten 2016-04-27 9:33 GMT+02:00 Karsten Heymann : > Hi! >

[ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-04-27 Thread Karsten Heymann
Hi! Over the last few days I updated my jessie evaluation cluster to jewel, and now the OSDs are not started automatically after reboot because they are not mounted. This is the output of ceph-disk list after boot: /dev/sdh : /dev/sdh1 ceph data, prepared, cluster ceph, osd.47, journal /dev/sde1 /dev/sdi :
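Until the unit/udev issue is sorted out, prepared OSDs can usually be brought up by hand with ceph-disk (device names per the listing above):
  # ceph-disk activate /dev/sdh1
  # or, for everything marked prepared:
  # ceph-disk activate-all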