On 28.02.2017 at 09:48, linux...@boku.ac.at wrote:
> Hi,
>
> actually I can't install hammer on wheezy:
>
> ~# cat /etc/apt/sources.list.d/ceph.list
> deb http://download.ceph.com/debian-hammer/ wheezy main
>
> ~# cat /etc/issue
> Debian GNU/Linux 7 \n \l
>
> ~# apt-cache search c
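For reference, a minimal sketch of the usual next steps once that sources line
is in place (the key URL is an assumption on my part, not taken from the
truncated post):

~# wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
~# apt-get update
~# apt-get install ceph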
Hi,
On 11.12.2014 00:22, Sanders, Bill wrote:
> Thank you for the reply, Florian.
>
> Yes, MON data is on the RAID drive. Is it recommended to get its own drive?
> What do MON writes look like in terms of size/frequency?
>
> So slow IOs and monitor elections are a symptom of not enough dis
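A rough way to gauge the MON's disk footprint and write pattern before
deciding on a dedicated drive; the mon id 'a' and the device name are
placeholders:

~# du -sh /var/lib/ceph/mon/ceph-a    # current size of the mon store
~# iostat -x /dev/sdX 5               # watch write sizes and frequency live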
Hi,
On 08.12.2014 20:23, Sanders, Bill wrote:
> I've just stood up a Ceph cluster for some experimentation. Unfortunately,
> we're having some performance and stability problems I'm trying to pin down.
> More unfortunately, I'm new to Ceph, so I'm not sure where to start looking
> for
> the p
Hi,
On 26.11.2014 23:36, Geoff Galitz wrote:
>
> Hi.
>
> If I create an RBD instance, and then use fusemount to access it from various
> locations as a POSIX entity, I assume I'll need to create a filesystem on it.
> To access it from various remote servers I assume I'd also need a
> distribu
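A minimal sketch of that first step, assuming a pool named 'rbd', the kernel
client, and an image name I made up; note that an ordinary local filesystem
like ext4 must only ever be mounted on one host at a time:

~# rbd create shared-img --size 10240   # 10 GiB image
~# rbd map shared-img                   # appears as e.g. /dev/rbd0
~# mkfs.ext4 /dev/rbd0
~# mount /dev/rbd0 /mnt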
Hi Christoph,
On 12.11.2014 17:29, Christoph Adomeit wrote:
> Hi,
>
> I installed a Ceph cluster with 50 OSDs on 4 hosts and finally I am really
> happy with it.
>
> Linux and Windows VMs run really fast in KVM on the Ceph Storage.
>
> Only my Solaris 10 guests are terribly slow on ceph rbd
On 14.08.2014 13:29, Guang Yang wrote:
> Hi cephers,
> Most recently I am drafting the run books for OSD disk replacement. I think
> the rule of thumb is to reduce data migration (recovery/backfill), and I
> thought the following procedure should achieve the purpose:
> 1. ceph osd out osd.XXX
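The quoted runbook is cut off after step 1; one commonly used sequence that
keeps migration down looks roughly like this (osd.XXX is the poster's
placeholder, and the noout flag is my assumption, not necessarily his
remaining steps):

~# ceph osd set noout              # suppress rebalancing during the swap
~# /etc/init.d/ceph stop osd.XXX
# ... replace the physical disk and recreate the OSD ...
~# ceph osd unset noout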
On 14.04.2014 01:01, Andrew Thrift wrote:
> Hi List,
>
>
> Each of our OSD hosts has 2x Intel S3700 SSDs for journal, and 12x Seagate
Which Intel S3700 exactly? The small versions perform poorly on write IOPS and
could be your bottleneck...
--
Kind regards,
Florian Wiessner
Smart Weblications GmbH
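To check whether the journal SSDs are the bottleneck, a quick single-threaded
sync write test along these lines is often telling (the device name is a
placeholder, and note this writes to the raw device and destroys its
contents):

~# fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
       --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based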
Hi,
On 05.06.2014 11:27, ale...@kurnosov.spb.ru wrote:
>
> ceph 0.72.2 on SL6.5 from the official repo.
>
> After taking down one of the OSDs (to later take the server out), one of the
> PGs became incomplete:
> $ ceph health detail
> HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs stuck unclean;
> 2 requests a
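A hedged sketch of how to drill into such a PG; the pg id is a placeholder:

$ ceph health detail | grep incomplete   # find the affected pg id
$ ceph pg <pgid> query                   # inspect 'recovery_state' for the blocker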
Hi,
On 04.06.2014 14:51, yalla.gnan.ku...@accenture.com wrote:
> Hi All,
>
>
>
> I have a ceph storage cluster with four nodes. I have created block storage
> using Cinder in OpenStack, with Ceph as its storage backend.
>
> So, I see a volume is created in ceph in one of the pools. But how
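The truncated question is presumably how to find that volume; a sketch,
assuming Cinder's default pool name 'volumes':

$ rbd -p volumes ls                    # images are named volume-<UUID>
$ rbd -p volumes info volume-<UUID>    # match the UUID against 'cinder list'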
Hi,
On 03.06.2014 23:24, Jason Harley wrote:
> On Jun 3, 2014, at 4:17 PM, Smart Weblications GmbH - Florian Wiessner
> <f.wiess...@smart-weblications.de> wrote:
>
>> You could try to recreate the OSDs and start them. Then I think the recovery
>> shoul
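A rough outline of recreating a lost OSD by hand in that era (ids, paths and
caps are illustrative; consult the docs for the exact release):

$ ceph osd create                       # returns a new osd id, e.g. 12
$ mkdir -p /var/lib/ceph/osd/ceph-12
$ ceph-osd -i 12 --mkfs --mkkey
$ ceph auth add osd.12 osd 'allow *' mon 'allow rwx' \
      -i /var/lib/ceph/osd/ceph-12/keyring
$ /etc/init.d/ceph start osd.12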
Hi,
On 03.06.2014 22:04, Jason Harley wrote:
> # ceph pg 4.ff3 query
>> { "state": "active+recovering",
>> "epoch": 1642,
>> "up": [
>> 7,
>> 26],
>> "acting": [
>> 7,
>> 26],
[...]
>> "recovery_state": [
>> { "name": "Started\/Primary\/Active",
Hi,
On 03.06.2014 21:46, Jason Harley wrote:
> Howdy —
>
> I’ve had a failure on a small, Dumpling (0.67.4) cluster running on Ubuntu
> 13.10 machines. I had three OSD nodes (running 6 OSDs each), and lost two of
> them in a beautiful failure. One of these nodes even went so far as to
> sc
On 26.05.2014 15:52, Listas@Adminlinux wrote:
> Thanks Pieter!
>
> I tried using OCFS2 over DRBD, but was not satisfied. I was affected by
> various bugs in OCFS2, and Oracle was not committed to solving them.
>
When did you try it? We use such a setup with ocfs2 on top of rbd with kernel 3.10.4
Hi,
On 02.12.2013 04:27, James Harper wrote:
>> Hi,
>>
>>>> The low points are all ~35Mbytes/sec and the high points are all
>>>> ~60Mbytes/sec. This is very reproducible.
>>>
>>> It occurred to me that just stopping the OSDs selectively would allow me to
>>> see if there was a change when one
Hi Knut Moe,
On 21.11.2013 22:51, Knut Moe wrote:
> Thanks, Alfredo.
>
> The challenge is that it calls those links when issuing the following
> command:
>
> sudo apt-get update && sudo apt-get install ceph-deploy
>
> It then goes through a lot of different steps before displaying those er
Hi,
Is really no one on the list interested in fixing this? Or am I the only one
having this kind of bug/problem?
On 11.06.2013 16:19, Smart Weblications GmbH - Florian Wiessner wrote:
> Hi List,
>
> I observed that an 'rbd rm' results in some OSDs marking one OSD as down
> wrongly i
Hi Gandalf,
On 04.06.2013 21:45, Gandalf Corvotempesta wrote:
> 2013/6/4 Smart Weblications GmbH - Florian Wiessner
> <f.wiess...@smart-weblications.de>
>
> we use ocfs2 on top of rbd... the only bad thing is that ocfs2 will fence
> all
> nodes if rbd
On 04.06.2013 20:03, Gandalf Corvotempesta wrote:
> Any experiences with clustered FS on top of RBD devices?
> Which FS do you suggest for more or less 10.000 mailboxes accessed by 10
> dovecot
> nodes ?
>
we use ocfs2 on top of rbd... the only bad thing is that ocfs2 will fence all
nodes if rb
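For the curious, the setup sketched (image and mount point names are mine,
and the OCFS2 cluster.conf plus a one-time mkfs.ocfs2 are assumed to exist
already):

~# rbd map mailstore                    # map the same image on every node
~# mount -t ocfs2 /dev/rbd0 /var/mail   # OCFS2 coordinates concurrent access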
On 02.06.2013 10:51, Bond, Darryl wrote:
> Cluster has gone into HEALTH_WARN because the mon filesystem is 12%
> The cluster was upgraded to cuttlefish last week and had been running on
> bobtail for a few months.
>
> How big can I expect /var/lib/ceph/mon to get, and what influences its size?
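On cuttlefish the mon store is leveldb and can be compacted; a hedged sketch
(the mon id is a placeholder, and the tell command may require a recent
cuttlefish point release):

~# ceph tell mon.a compact        # one-off compaction of the mon store
# or, in ceph.conf, compact at every daemon start:
[mon]
    mon compact on start = true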
On 01.06.2013 22:20, Sage Weil wrote:
> git submodule update --init
done.
>
> Is this a tar ball? We may not be including the libs3 submodule...
>
no, I only did git clone, git co cuttlefish...
node01:/usr/src/ceph# git submodule update --init
Submodule 'ceph-object-corpus' (git://ceph.com
Hi,
I tried to build squeeze packages from cuttlefish:
node01:/usr/src/ceph# LANG=C dpkg-buildpackage
dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2
dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor):
dpkg-buildpackage: export CXXFLAGS from d
Hi,
could someone update
http://ceph.com/docs/next/install/build-prerequisites/
and add a note for debian squeeze on how to get libleveldb-dev and
libsnappy-dev?
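Until the docs are updated, the route I would sketch for squeeze (an
assumption on my part: neither library is packaged there, so both are built
from upstream source before running dpkg-buildpackage):

~# apt-get install build-essential
# fetch, build and install snappy first, then leveldb against it,
# each with the usual make && make install; or take both packages
# from a backports repository if one carries them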
--
Kind regards,
Florian Wiessner
Smart Weblications GmbH
Martinsberger Str. 1
D-95119 Naila
fon.: +49 9282 9638 2
Hi Jon,
On 29.05.2013 03:24, Jon wrote:
> Hello,
>
> I would like to mount a single RBD on multiple hosts to be able to share the
> block device.
> Is this possible? I understand that it's not possible to share data between
> the
> different interfaces, e.g. CephFS and RBDs, but I don't see
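Short answer sketched in commands: mapping the same image on several hosts
works, but what you put on it decides safety; the names below are made up:

~# rbd map shared-img              # fine on every host simultaneously
~# mount -t ext4 /dev/rbd0 /mnt    # safe on ONE host only; two mounts
                                   # corrupt the fs, since each node
                                   # caches metadata independently
~# mount -t ocfs2 /dev/rbd0 /mnt   # safe on many hosts, given an OCFS2
                                   # cluster is configured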
Hi,
On 29.05.2013 16:23, w sun wrote:
> I believe the async_flush fix went in after the 1.4.1 release. Unless someone
> had backported the patch to 1.4.0, it is unlikely that the 1.4.0 package would
> contain the fix.
>
Yes, qemu 1.4.2 has the AIO_FLUSH patch included.
--
Kind regards,
On 29.05.2013 18:18, Erdem Agaoglu wrote:
> We are running Ubuntu 12.04 and Folsom. Compiling qemu 1.5 only caused random
> complaints about 'qemu query-commands not found' or something like that on the
> libvirt end. Upgrading libvirt to 1.0.5 fixed it. But that had some problems
> with attaching rbd disk
> [osd.20]
> host = b1
> devs = /dev/sda4
>
> [osd.21]
> host = b2
> devs = /dev/sda4
>
> [osd.22]
> host = b3
> devs = /dev/sda4
>
> [osd.23]
> host = b4
> devs = /dev/sda4
>
> [mds.a]
>
On 14.05.2013 02:11, James Harper wrote:
>>
>> On 14.05.2013 01:46, James Harper wrote:
>>> After replacing a failed hard disk, ceph health reports "HEALTH_ERR 14 pgs
>> inconsistent; 18 scrub errors"
>>>
>>> The disk was a total loss so I replaced it, ran mkfs etc and rebuilt the osd
>> and whi
On 14.05.2013 01:46, James Harper wrote:
> After replacing a failed hard disk, ceph health reports "HEALTH_ERR 14 pgs
> inconsistent; 18 scrub errors"
>
> The disk was a total loss so I replaced it, ran mkfs etc and rebuilt the osd
> and while it has resynchronised everything the above still re
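The usual next step for lingering scrub errors, hedged because one should
first check which replica is actually bad (the pg id is a placeholder):

$ ceph health detail | grep inconsistent   # list the inconsistent PGs
$ ceph pg repair <pgid>                    # repair them one by one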
On 13.05.2013 23:49, Ian Colle wrote:
> Florian,
>
> It's building now, should be out in a few hours.
Thank you.
--
Kind regards,
Florian Wiessner
Smart Weblications GmbH
Martinsberger Str. 1
D-95119 Naila
fon.: +49 9282 9638 200
fax.: +49 9282 9638 205
24/7: +49 900 144 000
On 13.05.2013 22:47, Gregory Farnum wrote:
> See http://tracker.ceph.com/issues/4974; we're testing the fix out for
> a packaged release now.
I see this has been resolved; when will a new package for Debian squeeze be
ready?
--
Kind regards,
Florian Wiessner
Smart Weblications GmbH
Hi,
On 11.05.2013 09:40, Pawel Stefanski wrote:
> hello!
>
> I'm trying to upgrade my test cluster to cuttlefish, but I'm stuck on the mon
> upgrade.
>
> Bobtail version - 0.56.6 (previous rolling upgrades)
> cuttlefish version - 0.61.1
>
> While starting the upgraded mon daemon, it's faulting on
Hi,
I upgraded from 0.56.6 to 0.61.1 and tried to restart one monitor:
/etc/init.d/ceph start mon
=== mon.4 ===
Starting Ceph mon.4 on node05...
[16366]: (33) Numerical argument out of domain
failed: 'ulimit -n 8192; /usr/bin/ceph-mon -i 4 --pid-file
/var/run/ceph/mon.4.pid -c /etc/ceph/ceph.con
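First things I would look at for a mon that refuses to start after an upgrade
(the paths and mon id follow the output above):

~# tail -n 50 /var/log/ceph/ceph-mon.4.log   # the real error is usually here
~# ceph-mon -i 4 -d                          # run in the foreground with debug output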