Re: [ceph-users] NFS interaction with RBD

2015-05-27 Thread Jens-Christian Fischer
binaries (x86) ii qemu-utils 2.0.0+dfsg-2ubuntu1.11 amd64 QEMU utilities cheers jc

Re: [ceph-users] NFS interaction with RBD

2015-05-26 Thread Jens-Christian Fischer
for every mounted volume - exceeding the 1024 FD limit. So no deep scrubbing etc., but simply too many connections… cheers jc
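A quick way to confirm the exhaustion and raise the ceiling (a sketch, assuming the volumes hang off a libvirt/qemu guest; max_files in qemu.conf needs a reasonably recent libvirt, so check your version):

  pid=$(pgrep -of qemu)                    # rough: oldest qemu process on the host
  ls /proc/$pid/fd | wc -l                 # open descriptors right now
  grep 'open files' /proc/$pid/limits      # the current ceiling
  echo 'max_files = 32768' >> /etc/libvirt/qemu.conf   # per-guest FD limit
  service libvirt-bin restart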

Re: [ceph-users] NFS interaction with RBD

2015-05-23 Thread Jens-Christian Fischer
to migrate the last data off of it to one of the smaller volumes). The NFS server has been running for 30 minutes now (with close to no load) but we don’t really expect it to make it until tomorrow. send help Jens-Christian

Re: [ceph-users] Ceph Cinder Capabilities reports wrong free size

2014-08-22 Thread Jens-Christian Fischer
=cinder.volume.drivers.rbd.RBDDriver cheers jc On 21.08.2014, at 17:55, Gregory

[ceph-users] Ceph Cinder Capabilities reports wrong free size

2014-08-21 Thread Jens-Christian Fischer
=False rbd_user=cinder rbd_ceph_conf=/etc/ceph/ceph.conf rbd_secret_uuid=1234-5678-ABCD-…-DEF rbd_max_clone_depth=5 volume_driver=cinder.volume.drivers.rbd.RBDDriver — cut --- any ideas? cheers Jens-Christian
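For comparison, a minimal sketch of the whole rbd block as it should look on Havana/Icehouse (pool name and secret UUID are placeholders; glance_api_version=2 is needed for cloning images from Glance):

--- cut ---
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_user=cinder
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_secret_uuid=<libvirt secret uuid>
rbd_flatten_volume_from_snapshot=False
rbd_max_clone_depth=5
glance_api_version=2
--- cut ---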

Re: [ceph-users] RBD clone for OpenStack Nova ephemeral volumes

2014-05-28 Thread Jens-Christian Fischer
We are currently starting to set up a new Icehouse/Ceph based cluster and will help to get this patch in shape as well. I am currently collecting the information needed to patch Nova, and I have this: https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse on my
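For anyone following along, a sketch of how we pull that branch:

  git clone https://github.com/openstack/nova.git
  cd nova
  git remote add angdraug https://github.com/angdraug/nova.git
  git fetch angdraug
  git checkout -b rbd-ephemeral angdraug/rbd-ephemeral-clone-stable-icehouse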

[ceph-users] aborted downloads from Radosgw when multiple clients access same object

2013-12-05 Thread Jens-Christian Fischer
/s wr, 34op/s mdsmap e1: 0/0/1 up root@server1:/etc# ceph --version ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7) more log files available upon request…. any ideas? cheers Jens-Christian

Re: [ceph-users] Number of threads for osd processes

2013-11-27 Thread Jens-Christian Fischer
The largest group of threads is those from the network messenger — in the current implementation it creates two threads per process the daemon is communicating with. That's two threads for each OSD it shares PGs with, and two threads for each client which is accessing any data on that OSD.
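To sanity-check that against a running daemon (a sketch; nlwp is the kernel's per-process thread count):

  ps -o nlwp= -p $(pidof ceph-osd | awk '{print $1}')   # threads of the first ceph-osd on this host
  # rough expectation: ~2 per peer OSD + ~2 per connected client, plus internal worker threads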

Re: [ceph-users] Openstack Havana, boot from volume fails

2013-11-27 Thread Jens-Christian Fischer
-integration - it cleared a bunch of things for me cheers jc

Re: [ceph-users] how to Testing cinder and glance with CEPH

2013-11-27 Thread Jens-Christian Fischer
? good luck jc On 27.11.2013, at 08:51, Karan Singh ksi

[ceph-users] Number of threads for osd processes

2013-11-26 Thread Jens-Christian Fischer
osd pool get images pg_num pg_num: 1000 root@h2:/var/log/ceph# ceph osd pool get volumes pg_num pg_num: 128 That could possibly have been the day the number of threads started to rise. Feedback appreciated! thanks Jens-Christian
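To compare pg_num across all pools in one go (a sketch built from the same commands as above):

  for p in $(rados lspools); do printf '%s: ' "$p"; ceph osd pool get "$p" pg_num; done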

Re: [ceph-users] Openstack Havana, boot from volume fails

2013-11-25 Thread Jens-Christian Fischer
/libvirt/imagebackend.py virt/libvirt/utils.py good luck :) cheers jc

Re: [ceph-users] Openstack Havana, boot from volume fails

2013-11-25 Thread Jens-Christian Fischer
Hi Steffen, the virsh secret is defined on all compute hosts. Booting from a volume works; it's the boot-from-image (create volume) part that doesn't work. cheers jc
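For reference, how we define the secret on each compute host (a sketch following the usual Ceph/libvirt recipe; client.cinder is an assumption):

  # secret.xml:  <secret ephemeral='no' private='no'>
  #                <usage type='ceph'><name>client.cinder secret</name></usage>
  #              </secret>
  virsh secret-define --file secret.xml                  # prints the new UUID
  virsh secret-set-value --secret <uuid> --base64 "$(ceph auth get-key client.cinder)"
  virsh secret-list                                      # must show the same UUID on every host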

Re: [ceph-users] Openstack Havana, boot from volume fails

2013-11-21 Thread Jens-Christian Fischer
the volumes…. I re-snapshotted the instance whose volume wouldn't boot, and made a volume out of it, and this instance booted nicely from the volume. weirder and weirder… /jc

[ceph-users] OpenStack, Boot from image (create volume) failed with volumes in rbd

2013-11-21 Thread Jens-Christian Fischer
Image (v2), 2147483648 bytes It is our understanding that we need raw volumes to get the boot process working. Why is the volume created as a qcow2 volume? cheers jc
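A sketch of how to check and convert (file and image names are hypothetical):

  qemu-img info myimage.img        # reports "file format: qcow2" or "raw"
  qemu-img convert -f qcow2 -O raw myimage.img myimage.raw
  glance image-create --name myimage-raw --disk-format raw --container-format bare --file myimage.raw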

[ceph-users] Openstack Havana, boot from volume fails

2013-11-21 Thread Jens-Christian Fischer
be started. CONTROL-D will terminate this shell and reboot the system. root@box-web1:~# The console is stuck; I can't get to the rescue shell. I can rbd map the volume and mount it from a physical host - the filesystem etc. is all in good order. Any ideas? cheers jc
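The steps we use on the physical host, as a sketch (placeholder volume name; assumes ext4 and no partition table):

  rbd map volumes/<volume-id>
  fsck.ext4 -n /dev/rbd0           # read-only check
  mount -o ro /dev/rbd0 /mnt && ls /mnt
  umount /mnt && rbd unmap /dev/rbd0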

Re: [ceph-users] Ephemeral RBD with Havana and Dumpling

2013-11-14 Thread Jens-Christian Fischer
On 14.11.2013, at 13:18, Haomai Wang haomaiw...@gmail.com wrote: Yes, we

Re: [ceph-users] Havana RBD - a few problems

2013-11-08 Thread Jens-Christian Fischer
Hi Josh Using libvirt_image_type=rbd to replace ephemeral disks is new with Havana, and unfortunately some bug fixes did not make it into the release. I've backported the current fixes on top of the stable/havana branch here: https://github.com/jdurgin/nova/tree/havana-ephemeral-rbd that

Re: [ceph-users] Havana RBD - a few problems

2013-11-08 Thread Jens-Christian Fischer
Using libvirt_image_type=rbd to replace ephemeral disks is new with Havana, and unfortunately some bug fixes did not make it into the release. I've backported the current fixes on top of the stable/havana branch here: https://github.com/jdurgin/nova/tree/havana-ephemeral-rbd that looks
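To try those fixes, a sketch of pulling the branch directly:

  git clone -b havana-ephemeral-rbd https://github.com/jdurgin/nova.git
  cd nova && git log --oneline -10         # inspect the backported commits on top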

Re: [ceph-users] Havana RBD - a few problems

2013-11-08 Thread Jens-Christian Fischer
of the glance - cinder RBD improvements) cheers jc On 08.11.2013

[ceph-users] Havana RBD - a few problems

2013-11-07 Thread Jens-Christian Fischer

Re: [ceph-users] interested questions

2013-10-30 Thread Jens-Christian Fischer
, but it works reasonably well for testing purposes. We are planning/building our next cluster now (a production cluster) and plan to separate OSD/MON servers from OpenStack compute servers. cheers Jens-Christian

Re: [ceph-users] one pg stuck with 2 unfound pieces

2013-09-23 Thread Jens-Christian Fischer
This stuck pg seems to fill up our mons (they need to keep old data, right?), which makes starting a new mon a task of seemingly Herculean proportions. Any ideas on how to proceed? thanks Jens-Christian
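If the two pieces really cannot be recovered, the last resort (a sketch; <pgid> is a placeholder, and revert rolls the objects back to an older version or forgets them entirely, i.e. accepts the data loss):

  ceph pg <pgid> mark_unfound_lost revert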

[ceph-users] Sparse files copied to CephFS not sparse

2013-09-16 Thread Jens-Christian Fischer
we used Kernel 3.10 and recently ceph-fuse to mount the CephFS. Are we doing something wrong, or is this not supported by CephFS? cheers jc

Re: [ceph-users] Sparse files copied to CephFS not sparse

2013-09-16 Thread Jens-Christian Fischer
For cephfs, the size reported by 'ls -s' is the same as file size. see http://ceph.com/docs/next/dev/differences-from-posix/ ah! So if I understand correctly, the files are indeed sparse on CephFS? thanks /jc

Re: [ceph-users] Sparse files copied to CephFS not sparse

2013-09-16 Thread Jens-Christian Fischer
For cephfs, the size reported by 'ls -s' is the same as file size. see http://ceph.com/docs/next/dev/differences-from-posix/ ...but the files are still in fact stored sparsely. It's just hard to tell. perfect - thanks! /jc
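A quick way to see both facts at once (a sketch, assuming the CephFS is mounted at /mnt/cephfs):

  truncate -s 1G /mnt/cephfs/sparse.img   # a 1 GB file that is one big hole
  ls -ls /mnt/cephfs/sparse.img           # block count == file size on CephFS
  ceph df                                 # bytes used in the data pool stay (almost) flat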

Re: [ceph-users] Inconsistent view on mounted CephFS

2013-09-13 Thread Jens-Christian Fischer
All servers mount the same filesystem. Needless to say, we are a bit worried… The bug was introduced in the 3.10 kernel and will be fixed in the 3.12 kernel by commit 590fb51f1c (vfs: call d_op->d_prune() before unhashing dentry). Sage may backport the fix to the 3.11 and 3.10 kernels soon.

Re: [ceph-users] Inconsistent view on mounted CephFS

2013-09-13 Thread Jens-Christian Fischer
Just out of curiosity: why are you using cephfs instead of rbd? Two reasons: - we are still on Folsom - experience with shared storage, as this is something our customers are asking for all the time cheers jc

Re: [ceph-users] adding SSD only pool to existing ceph cluster

2013-09-04 Thread Jens-Christian Fischer
Hi Greg If you saw your existing data migrate that means you changed its hierarchy somehow. It sounds like maybe you reorganized your existing nodes slightly, and that would certainly do it (although simply adding single-node higher levels would not). It's also possible that you introduced

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-03 Thread Jens-Christian Fischer
Why wait for the data to migrate away? Normally you have replicas of the whole osd data, so you can simply stop the osd, reformat the disk and restart it again. It'll join the cluster and automatically get all the data it's missing. Of course the risk of data loss is a bit higher during that

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-03 Thread Jens-Christian Fischer
On 03.09.2013, at 16:27, Sage Weil s...@inktank.com wrote:
  ceph osd create # this should give you back the same osd number as the one you just removed
  OSD=`ceph osd create` # may or may not be the same osd id
good point - so far it has been good to us!
  umount ${PART}1
  parted $PART
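Putting the whole sequence together, a sketch of the reuse-the-same-id variant (sysvinit assumed, /dev/sdX1 is a placeholder device, caps as in the dumpling docs):

  service ceph stop osd.$OSD
  umount /var/lib/ceph/osd/ceph-$OSD
  mkfs.xfs -f /dev/sdX1
  mount /dev/sdX1 /var/lib/ceph/osd/ceph-$OSD
  ceph-osd -i $OSD --mkfs --mkkey        # writes a fresh keyring
  ceph auth del osd.$OSD
  ceph auth add osd.$OSD osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-$OSD/keyring
  service ceph start osd.$OSD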

[ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Jens-Christian Fischer

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Jens-Christian Fischer
Why wait for the data to migrate away? Normally you have replicas of the whole osd data, so you can simply stop the osd, reformat the disk and restart it again. It'll join the cluster and automatically get all the data it's missing. Of course the risk of data loss is a bit higher during that

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Jens-Christian Fischer
Hi Martin On 2013-09-02 19:37, Jens-Christian Fischer wrote: we have a Ceph cluster with 64 OSD drives in 10 servers. We originally formatted the OSDs with btrfs but have had numerous problems (server kernel panics) that we could trace back to btrfs. We are therefore in the process

[ceph-users] adding SSD only pool to existing ceph cluster

2013-09-02 Thread Jens-Christian Fischer
, don't upset the current pools (We don't want the regular/existing data to migrate towards the SSD pool, and no disruption of service? thanks Jens-Christian -- SWITCH Jens-Christian Fischer, Peta Solutions Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15, direct +41 44

[ceph-users] one pg stuck with 2 unfound pieces

2013-08-13 Thread Jens-Christian Fischer
status change. What next? Take the OSDs (9, 18) out again and rebuild? thanks for your help Jens-Christian
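Before deciding, it helps to see exactly what is unfound (a sketch; the <pgid> placeholder comes out of ceph health detail):

  ceph health detail              # names the pg with the unfound objects
  ceph pg <pgid> query            # peering state and which OSDs were probed
  ceph pg <pgid> list_missing     # lists the unfound objects themselves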

Re: [ceph-users] XFS or btrfs for production systems with modern Kernel?

2013-08-02 Thread Jens-Christian Fischer
the offending servers to 13.04. Yesterday one of these machines locked up with btrfs issues (that weren't easily diagnosed). I have now started migrating our OSDs to xfs … (taking them out, making a new filesystem on the drive, putting them back into the cluster again) cheers jc