It is awesome!
I think CephFS may be the next big thing for Ceph, since many
companies are about to use NAS in the cloud.
On 7 January 2015 at 12:09, Ketor D wrote:
> Hi everyone,
>
> A new project ceph-dokan (https://github.com/ceph/ceph-dokan) is
> open-source to all now.
> You can access the
Hi Sage,
Pull request is https://github.com/ceph/ceph/pull/3305.
Thanks!
Jianpeng Ma
> -----Original Message-----
> From: Sage Weil [mailto:sw...@redhat.com]
> Sent: Wednesday, January 7, 2015 10:18 AM
> To: Ma, Jianpeng
> Cc: c...@sadziu.pl; vijayendra.shama...@sandisk.com;
> ceph-devel@vger.
Hi everyone,
A new project ceph-dokan (https://github.com/ceph/ceph-dokan) is
open-source to all now.
You can access CephFS on Windows directly using ceph-dokan, rather than
via SMB or other gateways.
And you will get the full experience and performance of CephFS on Windows now.
Here is something about this
Hi all,
I cannot restart some OSDs after a flapping of all 54 OSDs. The
log is shown below:
-9> 2015-01-06 10:53:07.976997 7f35695177a0 20 read_log
31150'2273018 (31150'2273012) modify
4a8b7974/rb.0.bc58e8.6b8b4567.3e37/head//2 by
client.16829289.1:720306459 2015-01-05 21:57:54.65
2015-01-07 0:30 GMT+08:00 Travis Rhoden :
> On Tue, Jan 6, 2015 at 11:23 AM, Sage Weil wrote:
>> On Tue, 6 Jan 2015, Travis Rhoden wrote:
>>> On Tue, Jan 6, 2015 at 9:28 AM, Sage Weil wrote:
>>> > On Tue, 6 Jan 2015, Wei-Chung Cheng wrote:
>>> >> 2015-01-06 13:08 GMT+08:00 Sage Weil :
>>> >> > On
On Wed, 7 Jan 2015, Ma, Jianpeng wrote:
> > -- Forwarded message --
> > From: Paweł Sadowski
> > Date: 2014-12-30 21:40 GMT+08:00
> > Subject: Re: Ceph data consistency
> > To: Vijayendra Shamanna ,
> > "ceph-devel@vger.kernel.org"
> >
> >
> > On 12/30/2014 01:40 PM, Vijayendra
> -- Forwarded message --
> From: Paweł Sadowski
> Date: 2014-12-30 21:40 GMT+08:00
> Subject: Re: Ceph data consistency
> To: Vijayendra Shamanna ,
> "ceph-devel@vger.kernel.org"
>
>
> On 12/30/2014 01:40 PM, Vijayendra Shamanna wrote:
> > Hi,
> >
> > There is a sync thread (sy
As background for people who are not familiar with this situation: for a
long time Ceph has used some forked copies of Apache and mod_fastcgi to
power the RADOS Gateway.
From discussions with Dan Mick and Yehuda, Ceph's changes to Apache were
mainly cosmetic, and it's ok to use the distro-supplie
So last month a bunch of librados functions around watch-notify were
marked as deprecated, and because RBD still uses them everything went
yellow on the gitbuilders. I believe we're expecting a patch series to
move to the new APIs pretty soon, but I was wondering when.
In particular, a spot check de
On 06/01/2015 19:21, Gregory Farnum wrote:
> On Tue, Jan 6, 2015 at 12:39 AM, Loic Dachary wrote:
>>
>>
>> On 06/01/2015 01:22, Gregory Farnum wrote:
>>> On Mon, Jan 5, 2015 at 4:12 PM, Loic Dachary wrote:
:-) This process is helpful if it allows me to help a little more than I
curre
On Tue, Jan 6, 2015 at 12:39 AM, Loic Dachary wrote:
>
>
> On 06/01/2015 01:22, Gregory Farnum wrote:
>> On Mon, Jan 5, 2015 at 4:12 PM, Loic Dachary wrote:
>>> :-) This process is helpful if it allows me to help a little more than I
>>> currently do with the backport process. It would be a loss
On Tue, 6 Jan 2015, Gregory Farnum wrote:
> On Tue, Jan 6, 2015 at 8:44 AM, Sage Weil wrote:
> > Hey,
> >
> > In an exchange on linux-fsdevel yesterday it became clear that even when
> > FIEMAP isn't buggy it's not a good interface to build a map of sparse
> > files. For example, XFS defrag or ot
On Mon, Jan 5, 2015 at 6:01 PM, Gregory Farnum wrote:
> On Thu, Dec 18, 2014 at 1:21 PM, Robert LeBlanc wrote:
>> Before we base thousands of VM image clones off of one or more snapshots, I
>> want to test what happens when the snapshot becomes corrupted. I don't
>> believe the snapshot will beco
On Tue, Jan 6, 2015 at 8:44 AM, Sage Weil wrote:
> Hey,
>
> In an exchange on linux-fsdevel yesterday it became clear that even when
> FIEMAP isn't buggy it's not a good interface to build a map of sparse
> files. For example, XFS defrag or other future fs features may muck with
> fiemap results.
Things were pretty slow over the holidays, so here's the current plan.
- Extend the current sprint (v0.92) until the end of this week (1/11).
At that time the already-frozen v0.91 will get released and v0.92 will
freeze.
- Do one more two week sprint (v0.93) from 1/12 to 1/25. v0.92 will
rel
Hey,
In an exchange on linux-fsdevel yesterday it became clear that even when
FIEMAP isn't buggy it's not a good interface to build a map of sparse
files. For example, XFS defrag or other future fs features may muck with
fiemap results. One wouldn't expect those things to change whether a fil
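The alternative that came out of that linux-fsdevel exchange is to walk a file with SEEK_DATA/SEEK_HOLE instead of FIEMAP. A minimal sketch, assuming Python >= 3.3 on Linux (the `extent_map` helper name is ours, not from any Ceph code); on filesystems that don't report holes, the whole file comes back as one data extent, which is still a correct, conservative map:

```python
import os

def extent_map(path):
    """Return a [(offset, length), ...] list of data extents,
    built by alternating SEEK_DATA and SEEK_HOLE seeks."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        extents, pos = [], 0
        while pos < size:
            try:
                data = os.lseek(fd, pos, os.SEEK_DATA)
            except OSError:          # ENXIO: no data past pos (trailing hole)
                break
            hole = os.lseek(fd, data, os.SEEK_HOLE)
            extents.append((data, hole - data))
            pos = hole
    finally:
        os.close(fd)
    return extents
```

Unlike FIEMAP, this interface only promises "data here / hole there", so defrag or other filesystem-internal layout changes can't change the answer for a fully written file.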
On Tue, Jan 6, 2015 at 11:23 AM, Sage Weil wrote:
> On Tue, 6 Jan 2015, Travis Rhoden wrote:
>> On Tue, Jan 6, 2015 at 9:28 AM, Sage Weil wrote:
>> > On Tue, 6 Jan 2015, Wei-Chung Cheng wrote:
>> >> 2015-01-06 13:08 GMT+08:00 Sage Weil :
>> >> > On Tue, 6 Jan 2015, Wei-Chung Cheng wrote:
>> >> >>
On Tue, 6 Jan 2015, Travis Rhoden wrote:
> On Tue, Jan 6, 2015 at 9:28 AM, Sage Weil wrote:
> > On Tue, 6 Jan 2015, Wei-Chung Cheng wrote:
> >> 2015-01-06 13:08 GMT+08:00 Sage Weil :
> >> > On Tue, 6 Jan 2015, Wei-Chung Cheng wrote:
> >> >> Dear all:
> >> >>
> >> I agree with Robert's opinion because
On Tue, Jan 6, 2015 at 9:28 AM, Sage Weil wrote:
> On Tue, 6 Jan 2015, Wei-Chung Cheng wrote:
>> 2015-01-06 13:08 GMT+08:00 Sage Weil :
>> > On Tue, 6 Jan 2015, Wei-Chung Cheng wrote:
>> >> Dear all:
>> >>
>> >> I agree with Robert's opinion because I hit a similar problem once.
>> >> I think that how
Hi,
On 06/01/2015 12:49, Miyamae, Takeshi wrote:
> Dear Loic,
>
> I'm Takeshi Miyamae, one of the authors of SHEC's blueprint.
>
> Shingled Erasure Code (SHEC)
> https://wiki.ceph.com/Planning/Blueprints/Hammer/Shingled_Erasure_Code_(SHEC)
The work you have done is quite impressive :-)
> We ha
Dear Ceph co-developers,
Here is a list of the pending backports. If your name shows on an issue that
needs to be backported to a stable release, it would be great if you could
update the corresponding issue and/or create a pull request to cherry-pick the
relevant commits. If you would like he
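The "create a pull request to cherry-pick the relevant commits" step can be sketched against a throwaway repository; the branch name `firefly` mirrors the stable release named above, but the paths, commit messages, and repo layout here are purely illustrative, not the real ceph.git workflow:

```python
import os, subprocess, tempfile

def run(*args, cwd):
    """Run a git command and return its stdout."""
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = os.path.join(tempfile.mkdtemp(), "demo")
os.mkdir(repo)
run("git", "init", "-q", cwd=repo)
run("git", "config", "user.email", "dev@example.com", cwd=repo)
run("git", "config", "user.name", "Dev", cwd=repo)

with open(os.path.join(repo, "file"), "w") as f:
    f.write("base")
run("git", "add", "file", cwd=repo)
run("git", "commit", "-qm", "base", cwd=repo)
run("git", "branch", "firefly", cwd=repo)      # stand-in for the stable branch

with open(os.path.join(repo, "file"), "w") as f:
    f.write("fix")                             # the fix lands on master
run("git", "commit", "-qam", "fix bug", cwd=repo)
sha = run("git", "rev-parse", "HEAD", cwd=repo).strip()

run("git", "checkout", "-q", "firefly", cwd=repo)
# -x appends "(cherry picked from commit <sha>)" so the backport
# stays traceable to the original master commit.
run("git", "cherry-pick", "-x", sha, cwd=repo)
msg = run("git", "log", "-1", "--format=%B", cwd=repo)
```

The `-x` trailer is what lets the tracker issue be cross-referenced back to the original commit once the backport branch is pushed and a PR opened.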
On Tue, 6 Jan 2015, Wei-Chung Cheng wrote:
> 2015-01-06 13:08 GMT+08:00 Sage Weil :
> > On Tue, 6 Jan 2015, Wei-Chung Cheng wrote:
> >> Dear all:
> >>
> >> I agree with Robert's opinion because I hit a similar problem once.
> >> I think that how to handle the journal partition is another problem about
> >>
On Tue, Jan 6, 2015 at 3:31 PM, Chaitanya Huilgol
wrote:
> Hi Ilya,
>
> We routinely hit the RBD crash when OSD nodes go away in our setups.
> We have not been able to get a good stack trace for this one due to our
> console capture issues, and these don't end up in the syslogs either after the
Hi Ilya,
We routinely hit the RBD crash when OSD nodes go away in our setups.
We have not been able to get a good stack trace for this one due to our console
capture issues, and these don't end up in the syslogs either after the crash.
Will get you the traces soon.
Most of the time this happens
Dear Loic,
I'm Takeshi Miyamae, one of the authors of SHEC's blueprint.
Shingled Erasure Code (SHEC)
https://wiki.ceph.com/Planning/Blueprints/Hammer/Shingled_Erasure_Code_(SHEC)
We have revised our blueprint shown in the last CDS to extend our erasure code
layouts and describe the guideline for
Hi,
I tried various things to update tracker.ceph.com from the command line but
failed. The best result I have is the following:
cat /tmp/a.xml
10281
firefly: make check fails on fedora 20 (1)
firefly
curl --http1.0 --verbose -T /tmp/a.xml
http://tracker.ceph.com/i
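An alternative to hand-rolled XML over curl is Redmine's JSON REST API. A minimal sketch, assuming tracker.ceph.com exposes the standard Redmine REST interface (the `/issues/<id>.json` path and `X-Redmine-API-Key` header come from the Redmine REST docs; the key value and field contents are placeholders, and the issue id 10281 is the one quoted above):

```python
import json
import urllib.request

def build_issue_update(base_url, issue_id, api_key, fields):
    """Build (but do not send) a PUT request that updates a Redmine issue."""
    body = json.dumps({"issue": fields}).encode("utf-8")
    return urllib.request.Request(
        url="%s/issues/%d.json" % (base_url, issue_id),
        data=body,
        method="PUT",
        headers={
            "Content-Type": "application/json",
            "X-Redmine-API-Key": api_key,  # placeholder credential
        },
    )

req = build_issue_update(
    "http://tracker.ceph.com", 10281, "YOUR-API-KEY",
    {"subject": "firefly: make check fails on fedora 20",
     "notes": "updated via REST"})
# send with: urllib.request.urlopen(req)
```

Since the payload is plain JSON, this sidesteps both the HTTP/1.0 workaround and the XML quoting issues of the curl attempt.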
On Mon, Jan 5, 2015 at 10:12 AM, Dan van der Ster
wrote:
> Hi Sage,
>
> On Tue, Dec 23, 2014 at 10:10 PM, Sage Weil wrote:
>>
>> This fun issue came up again in the form of 10422:
>>
>> http://tracker.ceph.com/issues/10422
>>
>> I think we have 3 main options:
>>
>> 1. Ask users to do a m
On 06/01/2015 01:22, Gregory Farnum wrote:
> On Mon, Jan 5, 2015 at 4:12 PM, Loic Dachary wrote:
>> :-) This process is helpful if it allows me to help a little more than I
>> currently do with the backport process. It would be a loss if the end result
>> is that everyone cares less about back