binaries (x86)
ii  qemu-utils  2.0.0+dfsg-2ubuntu1.11  amd64  QEMU utilities
cheers
jc
for every mounted volume - exceeding the 1024 FD limit.
So no deep scrubbing etc., but simply too many connections…
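If it really is just the per-process descriptor limit, raising it for the daemons may be enough. A minimal sketch (the value is a guess; 'max open files' makes the init script raise the rlimit before starting the daemon):
--- cut ---
[global]
max open files = 131072
--- cut ---
On a running daemon, grep 'Max open files' /proc/<pid>/limits shows the effective limit.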
cheers
jc
to migrate the last data off of it to one of
the smaller volumes). The NFS server has been running for 30 minutes now (with
close to no load) but we don’t really expect it to make it until tomorrow.
send help
Jens-Christian
volume_driver=cinder.volume.drivers.rbd.RBDDriver
cheers
jc
On 21.08.2014, at 17:55, Gregory
=False
rbd_user=cinder
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_secret_uuid=1234-5678-ABCD-…-DEF
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
--- cut ---
any ideas?
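One sanity check that can help here (a sketch; the keyring path and pool name are assumptions):
rbd --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring -p volumes ls
If that works, the next suspect is usually the libvirt secret on the compute hosts.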
cheers
Jens-Christian
We are currently starting to set up a new Icehouse/Ceph based cluster and will
help to get this patch in shape as well.
I am currently collecting the information needed to allow us to patch Nova,
and I have this:
https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse on my
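For anyone following along, a hedged sketch of pulling that branch onto a local nova checkout (the checkout path is an assumption; remote and branch names are from the URL above):
cd /usr/src/nova   # local checkout, path assumed
git remote add angdraug https://github.com/angdraug/nova.git
git fetch angdraug
git checkout -b rbd-ephemeral-clone-stable-icehouse angdraug/rbd-ephemeral-clone-stable-icehouse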
/s wr, 34 op/s
mdsmap e1: 0/0/1 up
root@server1:/etc# ceph --version
ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
more log files available upon request….
any ideas?
cheers
Jens-Christian
The largest group of threads is those from the network messenger — in
the current implementation it creates two threads per process the
daemon is communicating with. That's two threads for each OSD it
shares PGs with, and two threads for each client which is accessing
any data on that OSD.
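That arithmetic is easy to check against a live OSD; nlwp is the per-process thread count ps reports (pid lookup is an assumption):
# expect roughly 2*(peer OSDs + connected clients) plus a fixed baseline
for pid in $(pidof ceph-osd); do
    echo "osd pid $pid: $(ps -o nlwp= -p $pid) threads"
done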
-integration -
it cleared a bunch of things for me
cheers
jc
Thanks again!
Narendra
From: Jens-Christian Fischer [mailto:jens-christian.fisc...@switch.ch]
Sent: Monday, November 25, 2013 8:19 AM
To: Trivedi, Narendra
Cc: ceph-users@lists.ceph.com; Rüdiger Rissmann
Subject: Re: [ceph
?
good luck
jc
On 27.11.2013, at 08:51, Karan Singh ksi
root@h2:/var/log/ceph# ceph osd pool get images pg_num
pg_num: 1000
root@h2:/var/log/ceph# ceph osd pool get volumes pg_num
pg_num: 128
That could possibly have been the day the number of threads started to rise.
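If the imbalance between the pools turns out to matter: pg_num can only be raised, never lowered, and raising it moves data. A sketch (the target value is an assumption):
ceph osd pool set volumes pg_num 1024
# once the new pgs exist, bump pgp_num too, or nothing rebalances
ceph osd pool set volumes pgp_num 1024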
Feedback appreciated!
thanks
Jens-Christian
virt/libvirt/imagebackend.py
virt/libvirt/utils.py
good luck :)
cheers
jc
Hi Steffen
the virsh secret is defined on all compute hosts. Booting from a volume works
(it's the boot from image (create volume) part that doesn't work).
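A check worth running on each compute host (standard virsh/ceph CLI; the uuid must match rbd_secret_uuid in the cinder/nova config):
virsh secret-list
virsh secret-get-value <uuid-from-the-list>
ceph auth get-key client.cinder
# the two values must be identical on every compute host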
cheers
jc
the volumes….
I re-snapshotted the instance whose volume wouldn't boot, and made a volume out
of it, and this instance booted nicely from the volume.
weirder and weirder…
/jc
QEMU QCOW Image (v2), 2147483648 bytes
It is our understanding that we need raw volumes to get the boot process
working. Why is the volume created as a qcow2 volume?
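One way to see what actually landed in the rbd image (a sketch; pool and volume names are assumptions):
rbd map volumes/volume-<uuid>
qemu-img info /dev/rbd/volumes/volume-<uuid>
# 'file format: qcow2' here means the image was copied in unconverted;
# qemu-img convert -O raw can repair the data, but the creation path is
# the real problem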
cheers
jc
be started.
CONTROL-D will terminate this shell and reboot the system.
root@box-web1:~#
The console is stuck; I can't get to the rescue shell.
I can rbd map the volume and mount it from a physical host - the filesystem
etc. is all in good order.
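For completeness, the steps that do work, as a sketch (pool and volume names are assumptions):
rbd map volumes/volume-<uuid>
fsck -n /dev/rbd0    # read-only check: clean
mount /dev/rbd0 /mnt # contents look fine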
Any ideas?
cheers
jc
On 14.11.2013, at 13:18, Haomai Wang haomaiw...@gmail.com wrote:
Yes, we
Hi Josh
Using libvirt_image_type=rbd to replace ephemeral disks is new with
Havana, and unfortunately some bug fixes did not make it into the
release. I've backported the current fixes on top of the stable/havana
branch here:
https://github.com/jdurgin/nova/tree/havana-ephemeral-rbd
that looks
of the glance - cinder RBD
improvements)
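For reference, the Havana-era nova.conf side of this looks roughly like the following. libvirt_image_type=rbd is from the thread itself; the pool and ceph.conf options are my assumptions about the matching settings:
--- cut ---
libvirt_image_type=rbd
libvirt_images_rbd_pool=ephemeral-vms
libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
--- cut ---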
cheers
jc
On 08.11.2013
, but it works
reasonably well for testing purposes. We are planning/building our next cluster
now (a production cluster) and plan to separate OSD/MON servers from OpenStack
compute servers.
cheers
Jens-Christian
This stuck pg seems to fill up our mons (they need to keep old data, right?)
which makes starting a new mon a task of seemingly herculean proportions.
Any ideas on how to proceed?
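The usual first steps for pinning down a stuck pg (standard ceph CLI):
ceph health detail            # names the stuck pgs
ceph pg dump_stuck unclean    # or: inactive, stale
ceph pg <pgid> query          # shows which osds it is waiting for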
thanks
Jens-Christian
we used Kernel 3.10 and
recently ceph-fuse to mount the CephFS.
Are we doing something wrong, or is this not supported by CephFS?
cheers
jc
For cephfs, the size reported by 'ls -s' is the same as file size. see
http://ceph.com/docs/next/dev/differences-from-posix/
ah! So if I understand correctly, the files are indeed sparse on CephFS?
thanks
/jc
For cephfs, the size reported by 'ls -s' is the same as file size. see
http://ceph.com/docs/next/dev/differences-from-posix/
...but the files are still in fact stored sparsely. It's just hard to
tell.
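Easy to demonstrate with a throwaway file (a sketch): on a local filesystem du counts allocated blocks, while cephfs reports the logical size in both places even though unwritten extents take no space on the OSDs:
truncate -s 1G sparse.img
ls -lh sparse.img   # 1.0G logical size everywhere
du -h sparse.img    # near 0 on ext4/xfs, 1.0G on cephfs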
perfect - thanks!
/jc
All servers mount the same filesystem. Needless to say, we are a bit
worried…
The bug was introduced in the 3.10 kernel and will be fixed in the 3.12 kernel by commit
590fb51f1c (vfs: call d_op->d_prune() before unhashing dentry). Sage may
backport the fix to the 3.11 and 3.10 kernels soon.
Just out of curiosity: why are you using cephfs instead of rbd?
Two reasons:
- we are still on Folsom
- experience with shared storage, as this is something our customers are
asking for all the time
cheers
jc
Hi Greg
If you saw your existing data migrate that means you changed its
hierarchy somehow. It sounds like maybe you reorganized your existing
nodes slightly, and that would certainly do it (although simply adding
single-node higher levels would not). It's also possible that you
introduced
Why wait for the data to migrate away? Normally you have replicas of the
whole osd data, so you can simply stop the osd, reformat the disk and restart
it again. It'll join the cluster and automatically get all data it's missing.
Of course the risk of data loss is a bit higher during that
On 03.09.2013, at 16:27, Sage Weil s...@inktank.com wrote:
ceph osd create  # this should give you back the same osd number as the one you just removed
OSD=`ceph osd create` # may or may not be the same osd id
good point - so far it has been good to us!
umount ${PART}1
parted $PART
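Put together, the reformat-in-place flow discussed here, as a sketch (osd id, device and dumpling-era init scripts are assumptions):
service ceph stop osd.9
umount /var/lib/ceph/osd/ceph-9
mkfs.xfs -f /dev/sdd1
mount /dev/sdd1 /var/lib/ceph/osd/ceph-9
ceph-osd -i 9 --mkfs --mkkey    # empty data dir plus a fresh key
# re-register the new key (a 'ceph auth del osd.9' first may be needed)
ceph auth add osd.9 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-9/keyring
service ceph start osd.9        # the osd rejoins and backfills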
Hi Martin
On 2013-09-02 19:37, Jens-Christian Fischer wrote:
we have a Ceph cluster with 64 OSD drives in 10 servers. We originally
formatted the OSDs with btrfs but have had numerous problems (server kernel
panics) that we could trace back to btrfs. We are therefore in the process
, don't upset the current
pools? (We don't want the regular/existing data to migrate towards the SSD
pool, and we want no disruption of service.)
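One way to get that is a separate CRUSH root for the SSDs: existing pools keep their old rule, so no data moves. A sketch using the offline map tools (rule id and pool name are assumptions):
ceph osd getcrushmap -o map.bin
crushtool -d map.bin -o map.txt
# edit map.txt: add a 'root ssd' hierarchy containing only the ssd osds
# and a rule that selects from it; leave the default root untouched
crushtool -c map.txt -o map-new.bin
ceph osd setcrushmap -i map-new.bin
ceph osd pool create ssd-pool 128 128
ceph osd pool set ssd-pool crush_ruleset 2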
thanks
Jens-Christian
status change.
What next? Take the OSDs (9, 18) out again and rebuild?
thanks for your help
Jens-Christian
the offending servers to 13.04.
Yesterday one of these machines locked up with btrfs issues (that weren't
easily diagnosed).
I have now started migrating our OSDs to xfs … (taking them out, making a new
filesystem on the drive, putting them back into the cluster again)
cheers
jc