Joachim Peters
Cc: Ceph Development
Subject: Re: Caching the erasure code decoding matrix
Hi Andreas,
After discussing this a little with a few people, I'm tempted to conclude
caching the decoding matrix is probably not worth the complexity. It's even
difficult for me to know if maintaining
Hi Loic,
I looked at that some time ago.
Table 1 in the paper says it all:
If you care about decoding and reconstruction of data it gives a good
improvement.
If you care mainly about encoding speed, it is not the optimal choice (+72.1%).
The algorithm optimizes the reconstruction of data
-in rather than on top of N plug-ins ...
Cheers Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 20 March 2015 13:42
To: Andreas Joachim Peters
Cc: ceph-devel@vger.kernel.org
Subject: Re: Hitchhiker erasure code
On 20/03/2015 13:37, Andreas Joachim Peters
Hi all,
Happy new year, and thanks for your feedback; some things became much clearer to
me after these comments.
I have reworked the implementation but before I make a new pull request I would
like to clarify some points.
I have the following problems:
- to do a table printout I need to use
centers.
Cheers Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 17 November 2014 02:04
To: Zhou, Yuan; Andreas Joachim Peters
Cc: ceph-devel@vger.kernel.org
Subject: Re: Question on Ceph LRC design
Hi,
I believe Andreas has a more elaborate answer
Hi all,
find attached a pull request to honor 32 byte buffer alignment independent of
compilation environment (vector size) as it was discussed on the list
beforehand.
https://github.com/ceph/ceph/pull/2573
Cheers Andreas.
--
To unsubscribe from this list: send the line unsubscribe
Hi Janne,
> (src/erasure-code/isa/README claims it needs 16*k byte aligned buffers
I should update the README since it is misleading ... it should say 8*k or 16*k
byte aligned chunk size depending on the compiler/platform used; it is not the
alignment of the allocated buffer addresses. The
Andreas.
From: ceph-devel-ow...@vger.kernel.org [ceph-devel-ow...@vger.kernel.org] on
behalf of Andreas Joachim Peters [andreas.joachim.pet...@cern.ch]
Sent: 18 September 2014 14:18
To: Janne Grunau; ceph-devel@vger.kernel.org
Subject: RE: v2 aligned buffer
To: Andreas Joachim Peters
Cc: ceph-devel@vger.kernel.org
Subject: Re: v2 aligned buffer changes for erasure codes
Hi,
On 2014-09-18 12:18:59 +, Andreas Joachim Peters wrote:
> (src/erasure-code/isa/README claims it needs 16*k byte aligned buffers
> I should update the README since it is misleading
I fail to see how the 32 * k is related to alignment. It's only used
to pad the total size so it becomes a multiple of k * 32. That is ok
since we want k 32-byte aligned chunks. The alignment for each chunk is
just 32 bytes.
Yes, agreed! The alignment for each chunk should be 32 bytes.
Hi Loic,
I saw (if I am not mistaken) that you actually test only encoding ... so your
idea is to guarantee that the encoding results in the same output and the
encoding/decoding functionality is validated by the unit tests in each new
version?
In principle this restricts the encoding to
Hi all,
I have created the following pull request:
https://github.com/ceph/ceph/pull/2470
The reason behind is explained in the pull request.
Cheers Andreas.
locks in CEPH.
Cheers Andreas.
From: Gregory Farnum [g...@inktank.com]
Sent: 09 September 2014 21:56
To: Sage Weil
Cc: Andreas Joachim Peters; ceph-devel@vger.kernel.org
Subject: Re: Question to RWLock reverse DNS ip=hostname
On Tue, Sep 9, 2014 at 10:50 AM
Hi,
by chance I had a look at the RWLock class. To the best of my knowledge, the way
you create RW locks defaults to writer starvation, i.e. all readers will always
jump ahead of a pending writer. I cannot imagine that you never have the
opposite requirement in the CEPH multithreaded code, but I
Hi Loic,
as discussed I have prepared the pull request to add the shared table cache. It
is rebased today against master.
The pull request is here: https://github.com/ceph/ceph/pull/2262
I am not around for 10 days, so if you see issues or want changes, you can just
postpone it until I am back
Hi Loic,
looks very good and removes all the duplication!
I will then do the cache modification against this code base including this
pull request.
Cheers Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 05 August 2014 09:53
To: Andreas Joachim
for a fixed (k,m),
if not it can stay.
Just need to know that ... this boils down to the fact that encoding/decoding
should not be considered 'stateless'.
Cheers Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 04 August 2014 13:56
To: Andreas Joachim
]
Sent: 04 August 2014 14:37
To: Andreas Joachim Peters
Cc: Ma, Jianpeng; Ceph Development
Subject: Re: ISA erasure code plugin and cache
On 04/08/2014 14:15, Andreas Joachim Peters wrote: Hi Loic,
the background relevant to your comments has (unfortunately) never been
answered on the mailing
remove also the lock? Nevertheless, it has
no measurable performance impact ...
Cheers Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 04 August 2014 14:56
To: Ma, Jianpeng; Andreas Joachim Peters
Cc: Ceph Development
Subject: Re: ISA erasure code
that the CODEC configuration is by pool and not by PG ?!?!?
Cheers Andreas.
From: ceph-devel-ow...@vger.kernel.org [ceph-devel-ow...@vger.kernel.org] on
behalf of Sage Weil [sw...@redhat.com]
Sent: 04 August 2014 16:22
To: Andreas Joachim Peters
Cc: Loic Dachary; Ma
Hi Loic et al.
I managed to prototype (and understand) LRC encoding similar to Xorbas in the
ISA plug-in.
As an example take a (16,4) code (which gives nice alignment for 4k blocks) :
For 4 sub groups of the data chunks you build e.g. local parities LP1-LP4
LP1 = 1 ^ 2 ^ 3 ^ 4
LP2 = 5 ^ 6
August 2014 14:32
To: Andreas Joachim Peters; ceph-devel@vger.kernel.org
Subject: Re: Simplified LRC in CEPH
Hi Andreas,
Enlightening explanation, thank you !
On 01/08/2014 13:45, Andreas Joachim Peters wrote:
Hi Loic et al.
I managed to prototype (and understand) LRC encoding similar
2014 15:14
To: Andreas Joachim Peters; ceph-devel@vger.kernel.org
Subject: Re: Simplified LRC in CEPH
Hi Andreas,
It probably is just what we need. Although
https://github.com/ceph/ceph/pull/1921 is more flexible in terms of chunk
placement, I can't think of a use case where it would actually
of Sage Weil [sw...@redhat.com]
Sent: 31 July 2014 07:27
To: Ma, Jianpeng
Cc: Andreas Joachim Peters; ceph-devel@vger.kernel.org
Subject: RE: Pull Request for ISA EC plug-in
On Thu, 31 Jul 2014, Ma, Jianpeng wrote:
Hi,
At my machine, I also met this bug. But I modify this, it can work.
diff
the behaviour for relative -I paths, so I now use -I $(abs_srcdir) hoping
that this works with both YASM versions.
Let me know,
thanks Andreas.
From: ceph-devel-ow...@vger.kernel.org [ceph-devel-ow...@vger.kernel.org] on
behalf of Andreas Joachim Peters
.
From: Sage Weil [sw...@redhat.com]
Sent: 30 July 2014 19:46
To: Andreas Joachim Peters
Cc: ceph-devel@vger.kernel.org
Subject: RE: Pull Request for ISA EC plug-in
On Mon, 28 Jul 2014, Andreas Joachim Peters wrote:
Hi Sage,
I fixed that. I missed '$(srcdir
Hi all,
here is a PULL request for the ISA EC plugin rebased against master of today
for review.
https://github.com/ceph/ceph/pull/2155
I have added, as discussed, the exhaustive test of all possible failure
scenarios for both supported matrix types for a (k=12,m=4) configuration to the
unit
Weil [sw...@redhat.com]
Sent: 29 July 2014 00:22
To: Andreas Joachim Peters
Cc: ceph-devel@vger.kernel.org
Subject: Re: Pull Request for ISA EC plug-in
Hi Andreas!
On Mon, 28 Jul 2014, Andreas Joachim Peters wrote:
Hi all,
here is a PULL request for the ISA EC plugin rebased against master
.
From: Xavier Hernandez [xhernan...@datalab.es]
Sent: 04 July 2014 09:43
To: Andreas Joachim Peters
Cc: Loic Dachary; ceph-devel@vger.kernel.org
Subject: Re: Intel ISA-L EC plugin
On Thursday 03 July 2014 21:24:59 Andreas Joachim Peters wrote:
Hi Loic,
I have chosen after
.
From: Loic Dachary [l...@dachary.org]
Sent: 02 July 2014 20:33
To: Andreas Joachim Peters; ceph-devel@vger.kernel.org
Subject: Re: FW: Intel ISA-L EC plugin
Hi Andreas,
On 02/07/2014 19:54, Andreas Joachim Peters wrote: Hi Sage Loic et al
(k,m) there are many failure scenarios (chunk combinations) to test.
Cheers Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 03 July 2014 12:22
To: Andreas Joachim Peters
Cc: Ceph Development
Subject: Checking of Reed-Solomon Vandermonde parameter
are invertible, for cauchy
they are all invertible anyway.
I will rebase to your branch wip-7238-lrc then!
Cheers Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 02 July 2014 20:33
To: Andreas Joachim Peters; ceph-devel@vger.kernel.org
Subject: Re: FW: Intel
Hi Sage Loic et al ...
With some support from Paul Luse, I have finished the refactoring of the EC
ISA-L plug-in.
The essential ISA-L v 2.10 sources are now part of the source tree and it
builds a single shared library which is portable on platforms with varying CPU
extensions (SSE2, AVX,
Hi Loic,
I think the best is to read through the sources. It is very readable!
https://github.com/madiator/HadoopUSC/blob/developUSC/src/contrib/raid/src/java/org/apache/hadoop/raid/SimpleRegeneratingCode.java
If there is a high interest in this, you could port the code from Java to C++
and
must be missing something
:-) I do understand how the code works though, which is troubling. It also is
reassuring because the logic is very similar to what is proposed in
https://github.com/ceph/ceph/pull/1921
Cheers
On 30/06/2014 11:26, Andreas Joachim Peters wrote:
Hi
,
event: done}]]}]}
From: Milosz Tanski [mil...@adfin.com]
Sent: 23 June 2014 22:33
To: Gregory Farnum
Cc: Alexandre DERUMIER; Andreas Joachim Peters; ceph-devel
Subject: Re: CEPH IOPS Baseline Measurements with MemStore
I'm working
with google pert tools now.
Cheers Andreas.
__
From: Andreas Joachim Peters
Sent: 19 June 2014 11:05
To: ceph-devel@vger.kernel.org
Subject: CEPH IOPS Baseline Measurements with MemStore
Hi,
I made some benchmarks/testing using the firefly branch and GCC 4.9. Hardware
is 2 CPUs
Hi,
I made some benchmarks/testing using the firefly branch and GCC 4.9. Hardware
is 2 CPUs with 6-core Intel(R) Xeon(R) CPU E5-2630L 0 @ 2.00GHz with
Hyperthreading and 256 GB of memory (kernel 2.6.32-431.17.1.el6.x86_64).
In my tests I run two OSD configurations on a single box:
[A] 4
.
From: Alexandre DERUMIER [aderum...@odiso.com]
Sent: 19 June 2014 11:21
To: Andreas Joachim Peters
Cc: ceph-devel@vger.kernel.org
Subject: Re: CEPH IOPS Baseline Measurements with MemStore
Hi,
Thanks for your benchmark !
If you have some ideas for parameters to tune
.
From: Loic Dachary [l...@dachary.org]
Sent: 05 June 2014 16:05
To: Andreas Joachim Peters
Cc: Ceph Development
Subject: Re: Locally repairable code description revisited (was Pyramid ...)
Hi Andreas,
Here is a preliminary implementation https://github.com/ceph/ceph/pull/1921
and parity
chunk the achieved 'redundancy' and the overall volume and maximal
reconstruction 'overhead'.
Cheers Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 31 May 2014 19:10
To: Andreas Joachim Peters
Cc: Ceph Development
Subject: Pyramid erasure code
.
From: Loic Dachary [l...@dachary.org]
Sent: 09 May 2014 17:35
To: Andreas Joachim Peters
Cc: Ceph Development
Subject: Implied parity and erasure code
Hi Andreas,
The implied parity block mentioned on page 4 of
http://anrg.usc.edu/~maheswaran/Xorbas.pdf is something
Hi,
I did some Firefly ceph-0.77-900.gce9bfb8 testing of EC/Tiering deploying 64
OSDs with in-memory filesystems (RapidDisk with ext4) on a single 256 GB box.
The raw write performance of this box is ~3 GB/s for all and ~450 MB/s per OSD.
It provides 250k IOPS per OSD.
I compared several
. Not more than ~2.5k
IOPS when writing, even for tiny blocks. On your setup you seem not to get much
more in the 4M case.
Cheers Andreas.
From: Mark Nelson [mark.nel...@inktank.com]
Sent: 20 March 2014 14:09
To: Andreas Joachim Peters
Cc: ceph-devel
2014 14:55
To: Andreas Joachim Peters
Cc: ceph-devel@vger.kernel.org
Subject: Re: ceph-0.77-900.gce9bfb8 Testing Rados EC/Tiering CephFS ...
On 03/20/2014 08:43 AM, Andreas Joachim Peters wrote:
Hi Mark,
I tested write performance with a single rados bench (32 threads), everything
on localhost
To me these numbers look identical within error bars, and isn't that expected?
The main benefit of RocksDB vs. LevelDB shows up when you create large
tables approaching 1 billion entries.
How many keys did you create per OSD in your Rados benchmarks?
Cheers Andreas.
...@vger.kernel.org] on
behalf of Loic Dachary [l...@dachary.org]
Sent: 04 February 2014 17:17
To: Andreas Joachim Peters
Cc: Ceph Development
Subject: Re: controlling erasure code chunk size
Hi Andreas,
For w=(multiple of 8) we could probably skip the (*sizeof(int)) and get the
chunksize
Dachary [l...@dachary.org]
Sent: 02 February 2014 16:15
To: Samuel Just
Cc: Ceph Development; Andreas Joachim Peters
Subject: controlling erasure code chunk size
[cc' ceph-devel]
Hi Sam,
Here is how chunks are expected to be aligned:
https://github.com/ceph/ceph/blob
. At the moment, get_chunksize(4*(2^10)) *
get_data_chunk_count() = 393216 using the jerasure plugin, where
get_data_chunk_count() = 4. This seems a bit big?
-Sam
On Sun, Feb 2, 2014 at 8:18 AM, Andreas Joachim Peters
andreas.joachim.pet...@cern.ch wrote:
Hi Loic et al.
I think there is now some
Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 17 January 2014 14:56
To: Andreas Joachim Peters
Cc: Ceph Development
Subject: Re: Pyramid Erasure Code plugin (draft)
On 17/01/2014 12:18, Andreas Joachim Peters wrote:
Is k:4 not wrong? I want to build
Great,
I think this is now very flexible!
Cheers Andreas.
From: ceph-devel-ow...@vger.kernel.org [ceph-devel-ow...@vger.kernel.org] on
behalf of Loic Dachary [l...@dachary.org]
Sent: 17 January 2014 15:19
To: Andreas Joachim Peters
Cc: Ceph Development
After some exchange with Loic and the recent list discussion,
the API of the EC plugin might need some clarification/extension in the
::encode method:
Currently ::encode returns a map of bufferlists where the key is the index of [
0 .. (m+k) ]
and the value is the encoded buffer belonging to
Hi all,
few points from my side:
In the case of three data centers to protect against 1 out of 3 failing data
centers one has to fulfill 2M=K e.g. (12,6)
K=12 M=6
mm mm mm
Then one can add three local parities to optimize data center local failures
mml mml mml
Hi,
I have a question concerning the variable used by
objecter-set_client_incarnation()..
What I have seen is that currently it is always set to 0.
Would it be possible to add this as a variable to an IoCtx and do
objecter-set_client_incarnation() according to the context before each
problem ...
Cheers Andreas.
From: Sage Weil [s...@inktank.com]
Sent: 08 January 2014 15:56
To: Andreas Joachim Peters
Cc: ceph-devel@vger.kernel.org
Subject: Re: Client Incarnation librados documentation
On Wed, 8 Jan 2014, Andreas Joachim Peters wrote
Hi Loic,
this looks excellent on new INTEL hardware certainly ;-)
I was just running some benchmarks on different platforms. It should be
extremely simple to make a plugin ... the API is simple ...
I configured (10+4):
1)
model name : Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
;
}
}
To measure with warm cache, change the #define:
//#define CACHED_TEST
Cheers Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 16 December 2013 14:15
To: Andreas Joachim Peters; Ceph Development
Subject: Re: Intel Erasure Code library
On 16/12/2013
...@inktank.com]
Sent: 11 December 2013 14:00
To: Loic Dachary
Cc: Andreas Joachim Peters; ceph-devel@vger.kernel.org
Subject: Re: CEPH Erasure Encoding + OSD Scalability
On 12/11/2013 06:28 AM, Loic Dachary wrote:
On 11/12/2013 10:49, Andreas Joachim Peters wrote: Hi Loic,
I am a little bit
Hi Loic,
I am a little bit confused which kind of tool you actually want. You want a
simple benchmark to check for degradation or you want a full profiler tool?
Most of the external tools have the problem that you measure the whole thing
including buffer allocation and initialization. We
...@dachary.org]
Sent: 10 December 2013 09:32
To: Andreas Joachim Peters
Cc: Ceph Development
Subject: Buffer alignment
Hi Andreas,
In Ceph, buffers can be aligned if required using buffer::create_page_aligned
https://github.com/ceph/ceph/blob/master/src/common/buffer.cc#L519
https://github.com
ssse3 22 112
rec6 ssse3 16 86
From: Loic Dachary [l...@dachary.org]
Sent: 12 November 2013 19:06
To: Andreas Joachim Peters
Cc: ceph-devel@vger.kernel.org
Subject: Re: CEPH Erasure Encoding + OSD Scalability
Hi Andreas,
On 12/11
Hi,
I think you need to support the following functionality to support HSM (file
not block based):
1. implement a trigger on file creation/modification/deletion
2. store the additional HSM identifier for recall as a file attribute
3. policy-based purging of file-related blocks (LRU cache etc.)
Hi Loic,
I am finally doing the benchmark tool and I found a bunch of wrong parameter
checks which can make the whole thing SEGV.
All the RAID-6 codes have restrictions on their parameters, but they are not
correctly enforced for the Liberation and Blaum-Roth codes in the CEPH wrapper
class ... see
]
latency=27.737 ms
Cheers Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 27 September 2013 11:40
To: Andreas Joachim Peters
Cc: Ceph Development
Subject: Re: CEPH Erasure Encoding + OSD Scalability
On 26/09/2013 23:49, Andreas Joachim Peters wrote: Sure
Yes, sure. I actually thought the same in the meanwhile ... I have some
questions:
Q: Can/should it stay in the framework of google test's or you would prefer
just a plain executable ?
I have added local parity support to your erasure class adding a new argument:
erasure-code-lp and
two new
()%alignment;
+ unsigned in_length = in.length() + (tail ? (alignment - tail) : 0);
Cheers Andreas.
__
From: Loic Dachary [l...@dachary.org]
Sent: 23 September 2013 09:27
To: Andreas Joachim Peters
Cc: ceph-devel@vger.kernel.org
Subject: Re: CEPH Erasure Encoding
Hi Loic,
I have modified the Jerasure unit test, to record the encoding reconstruction
performance and to store this value in the optional google-test xml outputfile.
I have put (4,2) with a 4MB random object as default and one can pass a
different object size via '--object-size=1000' (for
Hi Loic,
I was applying the changes and the situation improves; however, there is still
one important thing which actually dominated all the measurements that needed
larger packet sizes (everything besides Raid6):
pad_in_length(unsigned in_length)
The
Hi Loic,
I have now some benchmarks on a Xeon 2.27 GHz 4-core with gcc 4.4 (-O2) for
ENCODING based on the CEPH Jerasure port.
I measured for objects from 128k to 512 MB with random contents (if you encode
1 GB objects you see slowdowns due to cache inefficiencies ...), otherwise
results
Hi,
we made some benchmarks about object read/write latencies on the CERN ceph
installation.
The cluster has 44 nodes and ~1k disks, all on 10GE and the pool configuration
has 3 copies.
Client Server is 0.67.
The latencies we observe (using tiny objects ... 5 bytes) on the idle pool:
.
From: Loic Dachary [l...@dachary.org]
Sent: 25 August 2013 13:49
To: Andreas Joachim Peters
Cc: Ceph Development
Subject: Re: CEPH Erasure Encoding + OSD Scalability
On 24/08/2013 21:41, Loic Dachary wrote:
On 24/08/2013 15:30, Andreas-Joachim Peters wrote:
Hi Loic,
I will start to review
:37
To: Andreas Joachim Peters
Cc: Loic Dachary; ceph-devel@vger.kernel.org
Subject: RE: CEPH Erasure Encoding + OSD Scalability
On Sun, 7 Jul 2013, Andreas Joachim Peters wrote:
Considering the crc32c-intel code you added ... I would provide a
function which provides a crc32c checksum
this emulation ...
Cheers Andreas.
From: Loic Dachary [l...@dachary.org]
Sent: 06 July 2013 22:47
To: Andreas Joachim Peters
Cc: ceph-devel@vger.kernel.org
Subject: Re: CEPH Erasure Encoding + OSD Scalability
Hi Andreas,
Since it looks like we're going to use