Re: [ceph-users] handling different disk sizes

2017-06-05 Thread Loic Dachary
On 06/05/2017 02:48 PM, Christian Balzer wrote: > > Hello, > > On Mon, 5 Jun 2017 13:54:02 +0200 Félix Barbeira wrote: > >> Hi, >> >> We have a small cluster for radosgw use only. It has three nodes, witch 3 >> osds each. Each

Re: [ceph-users] handling different disk sizes

2017-06-05 Thread Loic Dachary
Hi Félix, Could you please send me the output of the "ceph report" command (privately, the output is likely too big for the list) ? I suspect what you're seeing is because the smaller disks have more PGs than they should for the default.rgw.buckets.data pool. With the output of "ceph report"
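For reference, a quick way to check whether the smaller disks carry more PGs than their size warrants; a minimal sketch, assuming a Jewel-era cluster:

  # per-OSD size, weight, PG count and utilization
  ceph osd df tree
  # full PG-to-OSD mapping, for a closer look at default.rgw.buckets.data
  ceph pg dump pgs_brief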

[ceph-users] tools to display information from ceph report

2017-06-01 Thread Loic Dachary
Hi, Is there a tool that displays information (such as the total bytes in each pool) using the content of the "ceph report" json ? Cheers -- Loïc Dachary, Artisan Logiciel Libre
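No dedicated tool comes to mind, but for the example given (total bytes per pool) the data is also available without going through the report; a small sketch:

  # human readable per-pool usage
  ceph df detail
  # the same as JSON, convenient for scripting
  ceph df detail --format json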

Re: [ceph-users] http://planet.eph.com/ is down

2017-05-27 Thread Loic Dachary
The URL is http://ceph.com/category/planet/ and works like a charm :-) There is a blog at http://eph.com/ but it's more about Bible than Squids. On 05/28/2017 08:27 AM, Loic Dachary wrote: > Hi Patrick, > > http://planet.eph.com/ is down and shows a white page containing "pageo

[ceph-users] http://planet.eph.com/ is down

2017-05-27 Thread Loic Dachary
Hi Patrick, http://planet.eph.com/ is down and shows a white page containing "pageok" (amusing ;-). I kind of remember reading messages about troubles regarding planet.ceph.com but forgot the specifics. Is this a permanent situation ? Cheers -- Loïc Dachary, Artisan Logiciel Libre

Re: [ceph-users] How to calculate the nearfull ratio ?

2017-05-04 Thread Loic Dachary
On 05/04/2017 03:58 PM, Xavier Villaneau wrote: > Hello Loïc, > > On Thu, May 4, 2017 at 8:30 AM Loic Dachary <l...@dachary.org> wrote: > > Is there a way to calculate the optimum nearfull ratio for a given > crushmap ?

[ceph-users] How to calculate the nearfull ratio ?

2017-05-04 Thread Loic Dachary
Hi, In a cluster where the failure domain is the host and there are dozens of hosts, the 85% default for the nearfull ratio is fine. A host failing won't suddenly make the cluster 99% full. In smaller clusters, with 10 hosts or fewer, it is likely not to be enough. And in larger clusters 85% may be too much
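A rough rule of thumb matching that reasoning, as a hedged sketch (numbers are illustrative): with N hosts and one host down, usage grows by roughly N/(N-1), so the nearfull ratio should stay below full_ratio * (N-1)/N.

  # 10 hosts, full ratio 0.95: nearfull should be at most 0.95 * 9 / 10 = 0.855,
  # minus some safety margin, e.g. 0.80
  echo "0.95 * (10 - 1) / 10" | bc -l

  # pre-Luminous, the ratio is set in ceph.conf on the monitors:
  # [mon]
  # mon osd nearfull ratio = 0.80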

Re: [ceph-users] LRC low level plugin configuration can't express maximal erasure resilience

2017-04-29 Thread Loic Dachary
Hi Matan, On 04/29/2017 10:47 PM, Matan Liram wrote: > LRC low level plugin configuration of the following example copes with a > single erasure while it can easily protect from two. > > In case I use the layers: > 1: DDc_ _ > 2: DDD_ _ _ _c_ > 3: _ _ _DDD_ _c > > Neither of the rules
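For context, a low-level LRC profile is declared with a mapping string and a list of layers; a minimal sketch of the syntax only, using a generic layout rather than the one discussed above (names and layout are illustrative, see the LRC plugin documentation for the exact semantics):

  ceph osd erasure-code-profile set lrc-example \
      plugin=lrc \
      mapping=__DD__DD \
      layers='[
                [ "_cDD_cDD", "" ],
                [ "cDDD____", "" ],
                [ "____cDDD", "" ],
              ]'
  ceph osd pool create lrcpool 128 128 erasure lrc-example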

Re: [ceph-users] Replication (k=1) in LRC

2017-04-28 Thread Loic Dachary
erasure coding to work you need to split the object > into at least 2 pieces (k) and then have at least one parity copy (m). With > m=0 you have no redundancy and just made a super slow raid 0. :-D > > > On Thu, Apr 27, 2017, 6:49 PM Loic Dachary <l...@dachary.org &

Re: [ceph-users] Replication (k=1) in LRC

2017-04-27 Thread Loic Dachary
sure it won't work. You can also give it a try with https://github.com/ceph/ceph/blob/master/src/test/erasure-code/ceph_erasure_code_benchmark.cc Cheers > Regards, > Oleg > > On Fri, Apr 28, 2017 at 12:33 AM, Loic Dachary <l...@dachary.org > <mailto:l...@dachary.o

Re: [ceph-users] chooseleaf updates

2017-04-20 Thread Loic Dachary
On 04/20/2017 02:25 AM, Donny Davis wrote: > In reading the docs, I am curious if I can change the chooseleaf parameter as > my cluster expands. I currently only have one node and used this parameter in > ceph.conf > > osd crush chooseleaf type = 0 > > Can this be changed after I expand
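Yes, the failure domain can be changed on a live cluster by editing the rule; a minimal sketch of the usual decompile/edit/recompile cycle (file names are illustrative):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # in crushmap.txt, change "step chooseleaf firstn 0 type osd"
  #                  to     "step chooseleaf firstn 0 type host"
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin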

Re: [ceph-users] Brainstorming ideas for Python-CRUSH

2017-03-21 Thread Loic Dachary
Hi Logan, On 03/21/2017 03:27 PM, Logan Kuhn wrote: > I like the idea > > Being able to play around with different configuration options and using this > tool as a sanity checker or showing what will change as well as whether or > not the changes could cause health warn or health err. The

[ceph-users] 10.2.5 Jewel released

2016-12-10 Thread Loic Dachary
This point release fixes an important regression introduced in v10.2.4. We recommend that all v10.2.x users upgrade. Notable Changes --- * msg/simple/Pipe: avoid returning 0 on poll timeout (issue#18185, pr#12376, Sage Weil) For more detailed information refer to the complete

Re: [ceph-users] osds udev rules not triggered on reboot (jewel, jessie)

2016-06-21 Thread Loic Dachary
UMIER > Sent: Thursday, June 16, 2016 17:53 > To: Karsten Heymann; Loris Cuoghi > Cc: Loic Dachary; ceph-users > Subject: Re: [ceph-users] osds udev rules not triggered on reboot (jewel, > jessie) > > Hi, > > I have the same problem with osd disks not mounted at boot on

Re: [ceph-users] [Ceph-maintainers] Deprecating ext4 support

2016-04-12 Thread Loic Dachary
Hi Sage, I suspect most people nowadays run tests and develop on ext4. Not supporting ext4 in the future means we'll need to find a convenient way for developers to run tests against the supported file systems. My 2cts :-) On 11/04/2016 23:39, Sage Weil wrote: > Hi, > > ext4 has never been

Re: [ceph-users] [Ceph-maintainers] v10.1.1 Jewel candidate released

2016-04-08 Thread Loic Dachary
Hi, The notable changes since 10.1.0 are at: http://docs.ceph.com/docs/master/release-notes/#v10-1-1 and the Jewel release candidate information is at: http://docs.ceph.com/docs/master/release-notes/#v10-1-0-jewel-release-candidate Cheers On 07/04/2016 20:14, Sage Weil wrote: > Hi all, > >

Re: [ceph-users] [Ceph-community] Fw: need help in mount ceph fs with the kernel driver

2016-04-02 Thread Loic Dachary
Redirecting to the ceph user mailing list. On 02/04/2016 05:48, mansour amini wrote: > > > On Sunday, March 27, 2016 3:38 PM, mansour amini > wrote: > > > hello > > I want to mount ceph fs with the kernel driver, but when I execute below > command on my admin node

Re: [ceph-users] v0.94.6 Hammer released

2016-03-22 Thread Loic Dachary
On 23/03/2016 01:12, Chris Dunlop wrote: > Hi Loïc, > > On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote: >> On 23/03/2016 00:39, Chris Dunlop wrote: >>> "The old OS'es" that were being supported up to v0.94.5 includes debian >>> wheezy.

Re: [ceph-users] v0.94.6 Hammer released

2016-03-22 Thread Loic Dachary
Hi Chris, On 23/03/2016 00:39, Chris Dunlop wrote: > Hi Loïc, > > On Wed, Mar 23, 2016 at 12:14:27AM +0100, Loic Dachary wrote: >> On 22/03/2016 23:49, Chris Dunlop wrote: >>> Hi Stable Release Team for v0.94, >>> >>> Let's try again... Any news

Re: [ceph-users] v0.94.6 Hammer released

2016-03-22 Thread Loic Dachary
; <ch...@onthe.net.au> wrote: >>> >>>> Hi Stable Release Team for v0.94, >>>> >>>> On Thu, Mar 10, 2016 at 11:00:06AM +1100, Chris Dunlop wrote: >>>>> On Wed, Mar 02, 2016 at 06:32:18PM +0700, Loic Dachary wrote: >>>>>

Re: [ceph-users] v10.0.4 released

2016-03-19 Thread Loic Dachary
Hi, Because of a tiny mistake preventing deb packages from being built, v10.0.5 was released shortly after v10.0.4 and is now the current development release. The Stable release team[0] collectively decided to help by publishing development packages[1], starting with v10.0.5. The packages for

Re: [ceph-users] v0.94.6 Hammer released

2016-03-02 Thread Loic Dachary
Hi Dan, On 02/03/2016 19:48, Dan van der Ster wrote: > Hi Loic, > > On Wed, Mar 2, 2016 at 12:32 PM, Loic Dachary <l...@dachary.org> wrote: >> >> >> On 02/03/2016 17:15, Odintsov Vladislav wrote: >>> Hi, >>> >>> it looks ve

Re: [ceph-users] v0.94.6 Hammer released

2016-03-02 Thread Loic Dachary
ease process that will be fixed. Cheers > > Regards, > > Vladislav Odintsov > > > From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Chris > Dunlop <ch...@onthe.net.au>

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Loic Dachary
On 29/02/2016 22:49, Nathan Cutler wrote: >> The basic idea is to copy the packages that are built by gitbuilders or by >> the buildpackage teuthology task in a central place. Because these packages >> are built, for development versions as well as stable versions[2]. And they >> are tested

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Loic Dachary
I've created a pad at http://pad.ceph.com/p/development-releases for the next CDM (see http://tracker.ceph.com/projects/ceph/wiki/Planning for details). On 29/02/2016 22:49, Nathan Cutler wrote: > The basic idea is to copy the packages that are built by gitbuilders or by > the buildpackage

Re: [ceph-users] v0.94.6 Hammer released

2016-02-29 Thread Loic Dachary
Hi Dan & al, I think it would be relatively simple to have these binaries published as part of the current "Stable release" team effort[1]. Essentially doing what you did and electing a central place to store these binaries. The trick is to find a sustainable way to do this which means having

[ceph-users] Ceph stable release team: call for participation

2016-02-23 Thread Loic Dachary
ith Hammer. Loic Dachary (Red Hat), one of the Ceph core developers, oversees the process and provides help and advice when necessary. After these two releases are published (which should happen in the next few weeks), the roles will change and we would like to invite you to participate. If you'r

Re: [ceph-users] errors when install-deps.sh

2015-12-23 Thread Loic Dachary
Hi, On which operating system are you running this ? Cheers On 23/12/2015 08:50, gongfengguang wrote: > Hi all, > >When I exec ./install-deps.sh, there are some errors: > > > >--> Already installed : junit-4.11-8.el7.noarch > > No uninstalled build requires > > Running virtualenv

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-18 Thread Loic Dachary
lve : why /dev/sdc does not generate > udev events (different driver than /dev/sda maybe ?). Once it does, Ceph > should work. > > A workaround could be to add somethink like: > > ceph-disk-udev 3 sdc3 sdc > ceph-disk-udev 4 sdc4 sdc > > in /etc/rc.local. > &g

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-18 Thread Loic Dachary
--> udev-147-2.63.el6_7.1.x86_64 >> >> from what i can see this update fixes stuff related to symbolic links / >> external devices. /dev/sdc sits on external eSata. So... >> >> https://rhn.redhat.com/errata/RHBA-2015-1382.html >> >> will reboot tonight and ge

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-17 Thread Loic Dachary
And 95-ceph-osd.rules contains the following ?

  # Check gpt partion for ceph tags and activate
  ACTION=="add", SUBSYSTEM=="block", \
    ENV{DEVTYPE}=="partition", \
    ENV{ID_PART_TABLE_TYPE}=="gpt", \
    RUN+="/usr/sbin/ceph-disk-udev $number $name $parent"

On 17/12/2015 08:29, Jesper Thorhauge

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-17 Thread Loic Dachary
The non-symlink files in /dev/disk/by-partuuid come to existence because of:
* system boots
* udev rule calls ceph-disk-udev via 95-ceph-osd.rules on /dev/sda1
* ceph-disk-udev creates the symlink /dev/disk/by-partuuid/c83b5aa5-fe77-42f6-9415-25ca0266fb7f -> ../../sdb1
* ceph-disk activate
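To check whether that chain runs for /dev/sdc3, the udev processing can be replayed by hand; a small sketch (device names follow the thread, exact options may differ on the older CentOS 6 udev):

  # show what udev would do for the partition, including the RUN+= rules
  udevadm test $(udevadm info -q path -n /dev/sdc3)
  # or run the helper directly, as the rule would
  /usr/sbin/ceph-disk-udev 3 sdc3 sdc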

Re: [ceph-users] v10.0.0 released

2015-12-17 Thread Loic Dachary
The script handles UTF-8 fine, the copy/paste is at fault here ;-) On 24/11/2015 07:59, piotr.da...@ts.fujitsu.com wrote: >> -Original Message- >> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel- >> ow...@vger.kernel.org] On Behalf Of Sage Weil >> Sent: Monday, November 23, 2015

Re: [ceph-users] data partition and journal on same disk

2015-12-17 Thread Loic Dachary
Hi, You can try "ceph-deploy osd prepare osdserver:/dev/sdb", it will create the /dev/sdb1 and /dev/sdb2 partitions for you. Cheers On 17/12/2015 12:41, Dan Nica wrote: > Well I get an error when I try to create data and journal on same disk > > > > [rimu][INFO ] Running command: sudo

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-17 Thread Loic Dachary
On 17/12/2015 11:33, Jesper Thorhauge wrote: > Hi Loic, > > Sounds like something does go wrong when /dev/sdc3 shows up. Is there any way > I can debug this further? Log-files? Modify the .rules file...? Do you see traces of what happens when /dev/sdc3 shows up in boot.log ? > > /Jesper > >

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-17 Thread Loic Dachary
12:01, Jesper Thorhauge wrote: > Nope, the previous post contained all that was in the boot.log :-( > > /Jesper > > ** > > - Den 17. dec 2015, kl. 11:53, Loic Dachary <l...@dachary.org> skrev: > > On 17/12/2015 11:33, Jesper Thorhauge wrote: >&g

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Loic Dachary
Hi, On 17/12/2015 07:53, Jesper Thorhauge wrote: > Hi, > > Some more information showing in the boot.log; > > 2015-12-16 07:35:33.289830 7f1b990ad800 -1 > filestore(/var/lib/ceph/tmp/mnt.aWZTcE) mkjournal error creating journal on > /var/lib/ceph/tmp/mnt.aWZTcE/journal: (22) Invalid argument

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread Loic Dachary
Hi Paul, On 16/12/2015 10:26, HEWLETT, Paul (Paul) wrote: > When installing Hammer on RHEL7.1 we regularly got the message that partprobe > failed to inform the kernel. We are using the ceph-disk command from ansible > to prepare the disks. The partprobe failure seems harmless and our OSDs >

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Loic Dachary
Hi, On 16/12/2015 07:39, Jesper Thorhauge wrote: > Hi, > > A fresh server install on one of my nodes (and yum update) left me with > CentOS 6.7 / Ceph 0.94.5. All the other nodes are running Ceph 0.94.2. > > "ceph-disk prepare /dev/sda /dev/sdc" seems to work as expected, but > "ceph-disk

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread Loic Dachary
Hi Matt, Could you please add your report to http://tracker.ceph.com/issues/14080 ? I think what you're seeing is a partprobe timeout because things take too long to complete (that's also why adding a sleep as mentioned in the mail thread sometimes helps). There is a variant of that problem where

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Loic Dachary
import os

def list_partitions_device(dev):
    """
    Return a list of partitions on the given device name
    """
    partitions = []
    basename = os.path.basename(dev)
    # block_path() is the ceph-disk helper returning the /sys/block path of dev
    for name in os.listdir(block_path(dev)):
        if name.startswith(basename):
            partitions.append(name)
    return partitions

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Loic Dachary
ens, > > output is attached (stderr + stdout) > > Regards > > -Original Message- > From: Jens Rosenboom [mailto:j.rosenb...@x-ion.de] > Sent: Friday, December 11, 2015 09:10 > To: Stolte, Felix > Cc: Loic Dachary; ceph-us...@ceph.com > Subject: Re:

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-10 Thread Loic Dachary
> -Original Message- > From: Loic Dachary [mailto:l...@dachary.

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-10 Thread Loic Dachary

Re: [ceph-users] Ceph 9.2 fails to install in COS 7.1.1503: Report and Fix

2015-12-09 Thread Loic Dachary
Hi, It also had to be fixed for the development environment (see http://tracker.ceph.com/issues/14019). Cheers On 09/12/2015 09:37, Ben Hines wrote: > FYI - same issue when installing Hammer, 94.5. I also fixed it by enabling > the cr repo. > > -Ben > > On Tue, Dec 8, 2015 at 5:13 PM,
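For reference, enabling the CentOS continuous release repository looks roughly like this; a sketch, assuming a stock CentOS 7 with yum-utils available:

  yum install -y yum-utils
  yum-config-manager --enable cr
  yum clean all && yum update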

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-09 Thread Loic Dachary

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-08 Thread Loic Dachary
Hi Felix, Could you please ls -l /dev/cciss /sys/block/cciss*/ ? Thanks for being the cciss proxy in fixing this problem :-) Cheers On 07/12/2015 11:43, Loic Dachary wrote: > Thanks ! > > On 06/12/2015 17:50, Stolte, Felix wrote: >> Hi Loic, >> >> output is: >&g

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-08 Thread Loic Dachary
I also need to confirm that the names that show in /sys/block/*/holders are with a ! (it would not make sense to me if they were not but ...) On 08/12/2015 15:05, Loic Dachary wrote: > Hi Felix, > > Could you please ls -l /dev/cciss /sys/block/cciss*/ ? > > Thanks for being

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-05 Thread Loic Dachary
-Original Message- > From: Loic Dachary [mailto:l...@dachary.org] > Sent: Thursday, December 3, 2015 11:01 > To: Stolte, Felix; ceph-us...@ceph.com > Subject: Re: [ceph-users] ceph-disk list crashes in infernali

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-03 Thread Loic Dachary
Hi Felix, This is a bug, I filed an issue for you at http://tracker.ceph.com/issues/13970 Cheers On 03/12/2015 10:56, Stolte, Felix wrote: > Hi all, > > > > i upgraded from hammer to infernalis today and even though I had a hard time > doing so I finally got my cluster running in a healthy

Re: [ceph-users] ceph-osd@.service does not mount OSD data disk

2015-12-03 Thread Loic Dachary
Hi, On 03/12/2015 21:00, Florent B wrote: > Ok and /bin/flock is supposed to exist on all systems ? Don't have it on > Debian... flock is at /usr/bin/flock. I filed a bug for this: http://tracker.ceph.com/issues/13975 Cheers > > My problem is that "ceph" service is doing everything, and all

Re: [ceph-users] v0.80.11 Firefly released

2015-11-20 Thread Loic Dachary
Hi, On 20/11/2015 02:13, Yonghua Peng wrote: > I have been using firefly release. is there an official documentation for > upgrading? thanks. Here it is : http://docs.ceph.com/docs/firefly/install/upgrading-ceph/ Enjoy ! > > > On 2015/11/20 6:08, Sage Weil wrote: >> This is a bugfix release

Re: [ceph-users] ceph-disk prepare with systemd and infernalis

2015-10-30 Thread Loic Dachary
Hi Mathias, On 31/10/2015 02:05, MATHIAS, Bryn (Bryn) wrote: > Hi All, > > I have been rolling out an infernalis cluster, however I get stuck on the > ceph-disk prepare stage. > > I am deploying ceph via ansible along with a whole load of other software. > > Log output at the end of the

Re: [ceph-users] Cache tier experiences (for ample sized caches ^o^)

2015-10-06 Thread Loic Dachary
Hi Christian, Interesting use case :-) How many OSDs / hosts do you have ? And how are they connected together ? Cheers On 07/10/2015 04:58, Christian Balzer wrote: > > Hello, > > a bit of back story first, it may prove educational for others and future > generations. > > As some may recall,

[ceph-users] Ceph stable releases team: call for participation

2015-10-03 Thread Loic Dachary
xt Firefly release[10] and Abhishek Varshney (Flipkart) drives the next Hammer release. Loic Dachary (Red Hat), one of the Ceph core developers, and Abhishek Lekshmanan (Reliance Jio Infocomm Ltd.) oversee the process and provide help and advice when necessary. After these two releases are publis

Re: [ceph-users] Ceph stable releases team: call for participation

2015-10-03 Thread Loic Dachary
Hi Robin, On 03/10/2015 21:38, Robin H. Johnson wrote: > On Sat, Oct 03, 2015 at 11:07:22AM +0200, Loic Dachary wrote: >> Hi Ceph, >> >> TL;DR: If you have one day a week to work on the next Ceph stable releases >> [1] your help would be most welcome.

Re: [ceph-users] [puppet] Moving puppet-ceph to the Openstack big tent

2015-09-29 Thread Loic Dachary
Good move :-) On 29/09/2015 23:45, Andrew Woodward wrote: > [I'm cross posting this to the other Ceph threads to ensure that it's seen] > > We've discussed this on Monday on IRC and again in the puppet-openstack IRC > meeting. The current consensus is that we will move from the deprecated >

Re: [ceph-users] erasure pool, ruleset-root

2015-09-18 Thread Loic Dachary
On 18/09/2015 09:00, Loic Dachary wrote: > Hi Tom, > > Could you please share command you're using and their output ? A dump of the > crush rules would also be useful to figure out why it did not work as > expected. > s/command/the commands/ > Cheers > > On 18

Re: [ceph-users] erasure pool, ruleset-root

2015-09-18 Thread Loic Dachary
Hi Tom, Could you please share command you're using and their output ? A dump of the crush rules would also be useful to figure out why it did not work as expected. Cheers On 18/09/2015 01:01, Deneau, Tom wrote: > I see that I can create a crush rule that only selects osds > from a certain
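For comparison, the Hammer-era profile parameters that pin an erasure coded pool under a specific crush root; a minimal sketch (root and profile names are made up):

  ceph osd erasure-code-profile set ec-ssd \
      k=2 m=1 \
      ruleset-root=ssd \
      ruleset-failure-domain=host
  ceph osd pool create ecpool-ssd 128 128 erasure ec-ssd
  # verify the generated rule really takes the intended root
  ceph osd crush rule dump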

Re: [ceph-users] ISA erasure code plugin in debian

2015-09-15 Thread Loic Dachary
Hi Gerd, On 16/09/2015 00:52, Gerd Jakobovitsch wrote: > Dear all, > > I have a ceph cluster deployed in debian; I'm trying to test ISA > erasure-coded pools, but there is no plugin (libec_isa.so) included in the > library. > > Looking at the packages at debian Ceph repository, I found a

Re: [ceph-users] Append data via librados C API in erasure coded pool

2015-09-01 Thread Loic Dachary
Hi, Like Shylesh said: you need to obey alignment constraints. See rados_ioctx_pool_requires_alignment in http://ceph.com/docs/hammer/rados/api/librados/ Cheers On 01/09/2015 08:49, shylesh kumar wrote: > I think this could be misaligned writes. > Is it multiple of 4k ?? Its just a wild
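The required alignment is the stripe width of the erasure coded pool; a quick way to check it from the command line (a sketch, the pool name is made up):

  # appends must be multiples of the stripe_width shown for the pool
  ceph osd dump | grep ecpool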

[ceph-users] modifying a crush rule

2015-08-28 Thread Loic Dachary
Hi, abhishekvrshny Is there a way to modify a ruleset in crush map, without decompiling and recompiling it with crush tool? There are a few commands at http://workbench.dachary.org/ceph/ceph/blob/hammer/src/mon/MonCommands.h#L416 allowing you to modify the crushmap but it's very limited and

Re: [ceph-users] Confusion in Erasure Code benchmark app

2015-07-14 Thread Loic Dachary
Hi, I've observed the same thing but never spent time to figure that out. It would be nice to know. I don't think it's a bug, just something slightly confusing. Cheers On 14/07/2015 14:52, Nitin Saxena wrote: Hi All, I am trying to debug ceph_erasure_code_benchmark_app available in ceph

Re: [ceph-users] 0.80.10 released ?

2015-07-10 Thread Loic Dachary
The release notes have not yet been published. On 10/07/2015 17:31, Pierre BLONDEAU wrote: Hi, I can update my ceph's packages to 0.80.10. But I can't find information about this version (website, mailing list). Does someone know where I can find this information? Regards

Re: [ceph-users] Redhat Storage Ceph Storage 1.3 released

2015-07-02 Thread Loic Dachary
Hi, On 02/07/2015 08:16, Stefan Priebe - Profihost AG wrote: Hi, Am 01.07.2015 um 23:35 schrieb Loic Dachary: Hi, The details of the differences between the Hammer point releases and the RedHat Ceph Storage 1.3 can be listed as described at http://www.spinics.net/lists/ceph-devel

Re: [ceph-users] Redhat Storage Ceph Storage 1.3 released

2015-07-02 Thread Loic Dachary
...@profihost.ag wrote: Hi, Am 01.07.2015 um 23:35 schrieb Loic Dachary: Hi, The details of the differences between the Hammer point releases and the RedHat Ceph Storage 1.3 can be listed as described at http://www.spinics.net/lists/ceph-devel/msg24489.html

Re: [ceph-users] Ceph erasure code benchmark failing

2015-07-01 Thread Loic Dachary
Hi, Like David said: the most probable cause is that there is no recent yasm installed. You can run ./install-deps.sh to ensure the necessary dependencies are installed. Cheers On 01/07/2015 13:46, Nitin Saxena wrote: Hi, I am new to ceph project. I am trying to benchmark erasure code on

Re: [ceph-users] Redhat Storage Ceph Storage 1.3 released

2015-07-01 Thread Loic Dachary
Hi, The details of the differences between the Hammer point releases and the RedHat Ceph Storage 1.3 can be listed as described at http://www.spinics.net/lists/ceph-devel/msg24489.html reconciliation between hammer and v0.94.1.2 The same analysis should be done for

Re: [ceph-users] RHEL 7.1 ceph-disk failures creating OSD

2015-06-26 Thread Loic Dachary
Hi, Prior to firefly v0.80.8 ceph-disk zap did not call partprobe and that was causing the kind of problems you're experiencing. It was fixed by https://github.com/ceph/ceph/commit/e70a81464b906b9a304c29f474e6726762b63a7c and is described in more details at http://tracker.ceph.com/issues/9665.
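On setups where the kernel does not pick up the new partition table, a workaround often used in this era was to nudge it by hand between zap and prepare; a sketch (device name is illustrative):

  ceph-disk zap /dev/sdb
  partprobe /dev/sdb || partx -u /dev/sdb   # ask the kernel to reread the partition table
  sleep 5                                   # give udev time to settle
  ceph-disk prepare --cluster ceph --fs-type xfs /dev/sdb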

Re: [ceph-users] EC pool needs hosts equal to k + m?

2015-06-22 Thread Loic Dachary
Hi Nigel, On 22/06/2015 02:52, Nigel Williams wrote: I recall a post to the mailing list in the last week(s) where someone said that for an EC Pool the failure-domain defaults to having k+m hosts in some versions of Ceph? Can anyone recall the post? have I got the requirement correct? Yes.

Re: [ceph-users] Erasure Coded Pools and PGs

2015-06-17 Thread Loic Dachary
Hi, On 17/06/2015 18:04, Garg, Pankaj wrote: Hi, I have 5 OSD servers, with total of 45 OSDS in my clusters. I am trying out Erasure Coding with different K and m values. I seem to always get Warnings about : Degraded and Undersized PGs, whenever I create a profile and create a
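With 5 OSD hosts and the default failure domain of host, k+m must not exceed 5 or some PGs cannot be fully mapped and stay undersized; a minimal sketch of a profile that fits (names and PG counts are illustrative):

  ceph osd erasure-code-profile set k3m2 \
      k=3 m=2 \
      ruleset-failure-domain=host
  ceph osd pool create ecpool 256 256 erasure k3m2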

Re: [ceph-users] Hammer 0.94.1 - install-deps.sh script error

2015-05-29 Thread Loic Dachary
Hi, On 28/05/2015 05:13, Dyweni - Ceph-Users wrote: Hi Guys, Running the install-deps.sh script on Debian Squeeze results in the package 'cryptsetup-bin' not being found (and 'cryptsetup' not being used). This is due to the pipe character being deleted. To fix this, I replaced this

Re: [ceph-users] osd id == 2147483647 (2^31 - 1)

2015-05-26 Thread Loic Dachary
Hi, When an erasure coded pool (here "pool 4 'ecpool' erasure size 5 min_...") does not have enough OSDs to map a PG, the missing OSDs show as 2147483647 and that's what you have in [7,2,2147483647,6,10]. In the case of a replicated pool, the missing OSDs would be omitted instead. In Hammer
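A quick way to spot the placeholder in a PG mapping (a sketch; the pg id and the output line are illustrative):

  ceph pg map 4.0
  # osdmap e1234 pg 4.0 (4.0) -> up [7,2,2147483647,6,10] acting [7,2,2147483647,6,10]
  ceph health detail | grep undersized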

Re: [ceph-users] ceph tell changed?

2015-05-21 Thread Loic Dachary
Hi, It should work. Could you copy/paste the command you run and its output ? Cheers On 21/05/2015 17:34, Kenneth Waegeman wrote: Hi, We're using ceph tell in our configuration system since emperor, and before we could run 'ceph tell *.$host injectargs -- ...' , and while I'm honestly

Re: [ceph-users] ceph tell changed?

2015-05-21 Thread Loic Dachary
handling command target: unknown type * Same with other config options, eg. mds_cache_size Those warnings I always get;-) Running on 0.94.1 On 05/21/2015 05:36 PM, Loic Dachary wrote: Hi, It should work. Could you copy/paste the command you run and its output ? Cheers On 21/05/2015
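For reference, the forms that keep working here address a daemon type or a single daemon explicitly; a sketch (the option and values are examples only):

  # all OSDs at once
  ceph tell osd.* injectargs '--osd_max_backfills 1'
  # a single daemon by id
  ceph tell osd.12 injectargs '--osd_max_backfills 1'
  # alternative on the host running the daemon, via the admin socket
  ceph daemon osd.12 config set osd_max_backfills 1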

Re: [ceph-users] Ceph mon leader oneliner?

2015-05-13 Thread Loic Dachary
Hi, On 13/05/2015 09:35, Kai Storbeck wrote: Hello fellow Ceph admins, I have a need to run some periodic scripts against my Ceph cluster. For example creating new snapshots or cleaning up old ones. I'd preferably want to configure this periodic artifact on all my monitors, but only
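A hedged sketch of such a oneliner, run from cron on every monitor so that only the current leader does the work (the snapshot script path is made up):

  leader=$(ceph quorum_status -f json |
           python -c 'import json,sys; print(json.load(sys.stdin)["quorum_leader_name"])')
  [ "$leader" = "$(hostname -s)" ] && /usr/local/sbin/make-rbd-snapshots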

Re: [ceph-users] EC backend benchmark

2015-05-11 Thread Loic Dachary
Hi, [Sorry I missed the body of your questions, here is my answer ;-] On 11/05/2015 23:13, Somnath Roy wrote: Summary : - 1. It is doing pretty good in Reads and 4 Rados Bench clients are saturating 40 GB network. With more physical server, it is scaling almost linearly

Re: [ceph-users] EC backend benchmark

2015-05-11 Thread Loic Dachary
Hi, Thanks for sharing :-) Have you published the tools that you used to gather these results ? It would be great to have a way to reproduce the same measures in different contexts. Cheers On 11/05/2015 23:13, Somnath Roy wrote: Hi Loic and community, I have gathered the

Re: [ceph-users] ceph_argparse packaging error in Hammer/debian?

2015-05-07 Thread Loic Dachary
Hi, https://github.com/ceph/ceph/pull/4517 is the fix for http://tracker.ceph.com/issues/11388 Cheers On 07/05/2015 20:28, Andy Allan wrote: Hi all, I've found what I think is a packaging error in Hammer. I've tried registering for the tracker.ceph.com site but my confirmation email has

Re: [ceph-users] The first infernalis dev release will be v9.0.0

2015-05-04 Thread Loic Dachary
+1 ;-) On 04/05/2015 18:09, Sage Weil wrote: The first Ceph release back in Jan of 2008 was 0.1. That made sense at the time. We haven't revised the versioning scheme since then, however, and are now at 0.94.1 (first Hammer point release). To avoid reaching 0.99 (and 0.100 or 1.00?) we

Re: [ceph-users] Erasure Coding : gf-Complete

2015-04-24 Thread Loic Dachary
of the different techniques have ARM optimizations or do I have to select a particular one to take advantage of them? The optimizations do not depend on the technique. Cheers -Pankaj -Original Message- From: Loic Dachary [mailto:l...@dachary.org] Sent: Thursday, April 23, 2015 2:47 PM

Re: [ceph-users] Erasure Coding : gf-Complete

2015-04-23 Thread Loic Dachary
Hi, The ARMv8 optimizations for gf-complete are in Hammer, not in Firefly. The libec_jerasure*.so plugin contains gf-complete. Cheers On 23/04/2015 23:29, Garg, Pankaj wrote: Hi, I would like to use the gf-complete library for Erasure coding since it has some ARM v8 based
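In other words the SIMD optimizations are picked at runtime and do not constrain the choice of technique; for completeness, a sketch of a jerasure profile (values are examples):

  ceph osd erasure-code-profile set jerasure-k2m1 \
      plugin=jerasure \
      technique=reed_sol_van \
      k=2 m=1
  ceph osd erasure-code-profile get jerasure-k2m1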

Re: [ceph-users] CEPHFS with erasure code

2015-04-17 Thread Loic Dachary
Hi, You should set a cache tier for CephFS to use and have the erasure coded pool behind it. You will find detailed information at http://docs.ceph.com/docs/master/rados/operations/cache-tiering/ Cheers On 17/04/2015 12:39, MEGATEL / Rafał Gawron wrote: Hello I would create cephfs with
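The usual sequence to put a replicated cache pool in front of the erasure coded pool looks roughly like this (pool names, sizes and profile are illustrative):

  ceph osd pool create cephfs-data-ec 256 256 erasure myprofile
  ceph osd pool create cephfs-data-cache 256 256 replicated
  ceph osd tier add cephfs-data-ec cephfs-data-cache
  ceph osd tier cache-mode cephfs-data-cache writeback
  ceph osd tier set-overlay cephfs-data-ec cephfs-data-cache
  # then use cephfs-data-ec as the CephFS data pool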

Re: [ceph-users] CephFS and Erasure Codes

2015-04-17 Thread Loic Dachary
Hi, Although erasure coded pools cannot be used with CephFS, they can be used behind a replicated cache pool as explained at http://docs.ceph.com/docs/master/rados/operations/cache-tiering/. Cheers On 18/04/2015 00:26, Ben Randall wrote: Hello all, I am considering using Ceph for a new

Re: [ceph-users] ceph-disk command raises partx error

2015-04-13 Thread Loic Dachary
Hi, On 13/04/2015 16:15, HEWLETT, Paul (Paul)** CTR ** wrote: Hi Everyone I am using the ceph-disk command to prepare disks for an OSD. The command is: ceph-disk prepare --zap-disk --cluster $CLUSTERNAME --cluster-uuid $CLUSTERUUID --fs-type xfs /dev/${1} and this consistently

Re: [ceph-users] Interesting problem: 2 pgs stuck in EC pool with missing OSDs

2015-04-08 Thread Loic Dachary
Hi Paul, Contrary to what the documentation states at http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon the crush ruleset can be modified (an update at https://github.com/ceph/ceph/pull/4306 will fix that). Placement groups will move around, but
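The change referenced by that troubleshooting page is to raise the retry count inside the erasure coded rule; a sketch of the edit, using the same decompile/recompile cycle as for any crushmap change (values are examples):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # in the erasure coded ruleset, before the chooseleaf step, add:
  #     step set_choose_tries 100
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin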

Re: [ceph-users] Installing firefly v0.80.9 on RHEL 6.5

2015-04-07 Thread Loic Dachary
-md [root@essperf13 ceph-mon01]# -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Loic Dachary Sent: Monday, April 06, 2015 5:33 PM To: ceph-users Subject: [ceph-users] Installing firefly v0.80.9 on RHEL 6.5 Hi, I tried to install

Re: [ceph-users] Installing firefly v0.80.9 on RHEL 6.5

2015-04-07 Thread Loic Dachary
the distribution and not from the ceph.com repositories. Cheers Bruce -Original Message- From: Loic Dachary [mailto:l...@dachary.org] Sent: Tuesday, April 07, 2015 1:32 AM To: Bruce McFarland; ceph-users Subject: Re: [ceph-users] Installing firefly v0.80.9 on RHEL 6.5 Hi

Re: [ceph-users] OSD auto-mount after server reboot

2015-04-06 Thread Loic Dachary
where mount and services are started automatically. Output paste from ceph 0.80.9 is : http://pastebin.com/1Yqntadi On Sun, Apr 5, 2015 at 11:22 AM, Loic Dachary l...@dachary.org wrote: On 04/04/2015 22:09, shiva rkreddy wrote: HI, I'm currently

[ceph-users] Installing firefly v0.80.9 on RHEL 6.5

2015-04-06 Thread Loic Dachary
Hi, I tried to install firefly v0.80.9 on a freshly installed RHEL 6.5 by following http://ceph.com/docs/master/start/quick-ceph-deploy/#create-a-cluster but it installed v0.80.5 instead. Is it really what we want by default ? Or is it me misreading the instructions somehow ? Cheers -- Loïc

Re: [ceph-users] OSD auto-mount after server reboot

2015-04-05 Thread Loic Dachary
On 04/04/2015 22:09, shiva rkreddy wrote: HI, I'm currently testing Firefly 0.80.9 and noticed that OSD are not auto-mounted after server reboot. It used to mount auto with Firefly 0.80.7. OS is RHEL 6.5. There was another thread earlier on this topic with v0.80.8, suggestion was to

Re: [ceph-users] Ceph Code Coverage

2015-04-05 Thread Loic Dachary
Hi, On 05/04/2015 18:32, Rajesh Raman wrote: Hi All, Does anyone has executed code coverage run on Ceph recently using Teuthology? (Some old reports from Loic's blog is here http://dachary.org/wp-uploads/2013/01/teuthology/total/ taken in Jan 2013, but I am interested in latest runs

Re: [ceph-users] Erasure coding

2015-03-25 Thread Loic Dachary
Hi Tom, On 25/03/2015 11:31, Tom Verdaat wrote: Hi guys, We've got a very small Ceph cluster (3 hosts, 5 OSD's each for cold data) that we intend to grow later on as more storage is needed. We would very much like to use Erasure Coding for some pools but are facing some challenges

Re: [ceph-users] error creating image in rbd-erasure-pool

2015-03-24 Thread Loic Dachary
Hi Markus, On 24/03/2015 14:47, Markus Goldberg wrote: Hi, this is ceph version 0.93. I can't create an image in an rbd-erasure-pool: root@bd-0:~# root@bd-0:~# ceph osd pool create bs3.rep 4096 4096 replicated pool 'bs3.rep' created root@bd-0:~# rbd create --size 1000 --pool bs3.rep test

Re: [ceph-users] how to compute Ceph durability?

2015-03-20 Thread Loic Dachary
Hi Ghislain, You will find more information about tools and methods at On 20/03/2015 11:47, ghislain.cheval...@orange.com wrote: Hi all, I would like to compute the durability of data stored in a ceph environment according to the cluster topology (failure domains) and the data

Re: [ceph-users] how to compute Ceph durability?

2015-03-20 Thread Loic Dachary
(that's what happens when typing Control-Enter V instead of Control-V enter ;-) On 20/03/2015 11:50, Loic Dachary wrote: Hi Ghislain, You will find more information about tools and methods at https://wiki.ceph.com/Development/Reliability_model/Final_report Enjoy ! On 20/03/2015 11:47

[ceph-users] Sunday's Ceph based business model

2015-03-15 Thread Loic Dachary
Hi Ceph, Disclaimer: I'm no entrepreneur, the business model idea that came to me this Sunday should not be taken seriously ;-) Let say individuals can buy hardware that are Ceph ready (i.e. contain some variation of https://wiki.ceph.com/Clustering_a_few_NAS_into_a_Ceph_cluster) and build a

Re: [ceph-users] HEALTH_WARN too few pgs per osd (0 min 20)

2015-03-15 Thread Loic Dachary
Hi, On 15/03/2015 16:23, Jesus Chavez (jeschave) wrote: Hi all, does anybody know why I still get the WARN status message? I don’t even have pools yet so I am not sure why it is warning me… [root@capricornio ceph-cluster]# ceph status cluster d39f6247-1543-432d-9247-6c56f65bb6cd

[ceph-users] Ceph release timeline

2015-03-15 Thread Loic Dachary
Hi Ceph, In an attempt to clarify what Ceph release is stable, LTS or development. a new page was added to the documentation: http://ceph.com/docs/master/releases/ It is a matrix where each cell is a release number linked to the release notes from http://ceph.com/docs/master/release-notes/.

Re: [ceph-users] Adding Monitor

2015-03-13 Thread Loic Dachary
Hi, I think ceph-deploy mon add (instead of create) is what you should be using. Cheers On 13/03/2015 22:25, Georgios Dimitrakakis wrote: On an already available cluster I 've tried to add a new monitor! I have used ceph-deploy mon create {NODE} where {NODE}=the name of the node and
