-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
It seems in our situation the cluster is just busy, usually with
really small RBD I/O. We have gotten things to where it doesn't happen
as much in a steady state, but when we have an OSD fail (mostly from
an XFS log bug we hit at least once a week),
On Wed, 14 Oct 2015, Xusangdi wrote:
> Please see inline.
>
> > -----Original Message-----
> > From: ceph-devel-ow...@vger.kernel.org
> > [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of
> > Sage Weil
> > Sent: Wednesday, October 14, 2015 12:45 AM
> > To: xusangdi 11976 (RD)
> > Cc:
Hi Mark,
The Async result at 128K drops quickly after some point; is that because
of the testing methodology?
The other conclusion, to me, is that simple messenger + jemalloc is the best
practice so far, as it delivers the same performance as async while using much
less memory.
On Wed, 14 Oct 2015, Robert LeBlanc wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> It seems in our situation the cluster is just busy, usually with
> really small RBD I/O. We have gotten things to where it doesn't happen
> as much in a steady state, but when we have an OSD fail
Hi Xiaoxi,
I would ignore the tails on those tests. I suspect it's just some fio
processes finishing earlier than others and the associated aggregate
performance dropping off. These read tests are so fast that my
original guess at reasonable volume sizes for 300-second tests appears to
be
I tried an rpmbuild on Fedora21 from the tarball which seemed to work ok.
But I'm having trouble doing "ceph-deploy --overwrite-conf mon create-initial" with
9.1.0.
This is using ceph-deploy version 1.5.24.
Is this part of the "needs Fedora 22 or later" story?
-- Tom
[myhost][DEBUG ] create a done
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
I'm sure I have a log of a 1,000 second block somewhere, I'll have to
look around for it.
I'll try turning that knob and see what happens. I'll come back with
the results.
Thanks,
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4
On Wed, 14 Oct 2015, Deneau, Tom wrote:
> I tried an rpmbuild on Fedora21 from the tarball which seemed to work ok.
> But I'm having trouble doing "ceph-deploy --overwrite-conf mon create-initial"
> with 9.1.0.
> This is using ceph-deploy version 1.5.24.
> Is this part of the "needs Fedora 22 or
On Wed, Oct 14, 2015 at 7:37 PM, David Disseldorp wrote:
> On Fri, 9 Oct 2015 16:43:09 +0200, David Disseldorp wrote:
>
>> Allows for xattr retrieval. Response data buffer allocation is the
>> responsibility of the osd_req_op_xattr_init() caller.
>
> Ping, any feedback on the
On Fri, 9 Oct 2015 16:43:09 +0200, David Disseldorp wrote:
> Allows for xattr retrieval. Response data buffer allocation is the
> responsibility of the osd_req_op_xattr_init() caller.
Ping, any feedback on the patch?
Cheers, David
--
To unsubscribe from this list: send the line "unsubscribe
On Wed, Oct 14, 2015 at 1:03 AM, Sage Weil wrote:
> On Mon, 12 Oct 2015, Robert LeBlanc wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA256
>>
>> After a weekend, I'm ready to hit this from a different direction.
>>
>> I replicated the issue with Firefly so it doesn't
Hi
Sage Weil writes:
> Upgrading from Firefly
> ----------------------
>
> Upgrading directly from Firefly v0.80.z is not possible. All clusters
> must first upgrade to Hammer v0.94.4 or a later v0.94.z release; only
> then is it possible to upgrade to Infernalis 9.2.z.
>
Hi Goncalo,
On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
wrote:
> Hi Sage...
>
> I've seen that the rh6 derivatives have been ruled out.
>
> This is a problem in our case since the OS choice in our systems is,
> somehow, imposed by CERN. The experiments software
Trying to bring up a cluster using the pre-built binary packages on Ubuntu
Trusty:
Installed using "ceph-deploy install --dev infernalis `hostname`"
This install seemed to work but then when I later tried
ceph-deploy --overwrite-conf mon create-initial
it failed with
[][INFO ] Running
Hi Matthew,
Glad to hear you were able to see a similar effect for reads at least!
FWIW, I also have not been able to hit 700K IOPS, though my CPUs are
slower than the ones they are using at Intel. On my setup I'm hitting
about 40K read IOPS per node and about 13-14K write IOPS per node with
In general, I like the approach.
I am concerned about passing a void* + length to specify the option value since
you really can't protect against the user providing data in the incorrect
format. For example, if the backend treated RBD_OPTION_STRIPE_UNIT as a 4-byte
int, what happens if
Hi,
TL;DR: the jenkins instance running the make check bot hangs daily; looking for a
solution.
In the past two weeks the make check bot has experienced trouble for which
I've been unable to find a cause. The same jenkins instance has been running it for
the past nine months and now freezes at random times.
On Wed, 14 Oct 2015, Deneau, Tom wrote:
> Trying to bring up a cluster using the pre-built binary packages on Ubuntu
> Trusty:
> Installed using "ceph-deploy install --dev infernalis `hostname`"
>
> This install seemed to work but then when I later tried
>ceph-deploy --overwrite-conf mon
On Wed, 14 Oct 2015, Deneau, Tom wrote:
> > -----Original Message-----
> > From: Sage Weil [mailto:s...@newdream.net]
> > Sent: Wednesday, October 14, 2015 3:59 PM
> > To: Deneau, Tom
> > Cc: ceph-devel@vger.kernel.org
> > Subject: RE: v9.1.0 Infernalis release candidate released
> >
> > On Wed,
On Wed, 14 Oct 2015 19:57:46 +0200, Ilya Dryomov wrote:
> On Wed, Oct 14, 2015 at 7:37 PM, David Disseldorp wrote:
...
> > Ping, any feedback on the patch?
>
> The patch itself looks OK, except for the part where you rename a local
> variable for no reason, AFAICT.
Thanks for the
> -----Original Message-----
> From: Sage Weil [mailto:s...@newdream.net]
> Sent: Wednesday, October 14, 2015 3:59 PM
> To: Deneau, Tom
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: v9.1.0 Infernalis release candidate released
>
> On Wed, 14 Oct 2015, Deneau, Tom wrote:
> > Trying to bring up
> -----Original Message-----
> From: Sage Weil [mailto:s...@newdream.net]
> Sent: Wednesday, October 14, 2015 4:30 PM
> To: Deneau, Tom
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: v9.1.0 Infernalis release candidate released
>
> On Wed, 14 Oct 2015, Deneau, Tom wrote:
> > > -----Original
On Thu, 15 Oct 2015, Goncalo Borges wrote:
> Hi Sage, Dan...
>
> In our case, we have strongly invested in the testing of CephFS. It seems like a
> good solution to some of the issues we currently experience regarding the use
> cases from our researchers.
>
> While I do not see a problem in
Hi Sage, Dan...
In our case, we have strongly invested in the testing of CephFS. It
seems like a good solution to some of the issues we currently experience
regarding the use cases from our researchers.
While I do not see a problem in deploying Ceph cluster in SL7, I suspect
that we will need
Hi Sam/David,
We came across this problem a couple of times and it is extremely painful to
work around it via operational steps. I would like to work on a patch, but
before I start, it would be nice to hear your suggestions.
The problem is:
On erasure coded pool, when there is a corruption, and
On Wed, 14 Oct 2015, Xusangdi wrote:
> Straw2. But I had also run the same test for straw alg, which generated
> quite similar results.
This post explains the current behavior:
http://marc.info/?l=ceph-devel&m=143862308610881&w=2
sage
>
> > -----Original Message-----
> > From: Robert LeBlanc
On Wed, 14 Oct 2015, Gaudenz Steinlin wrote:
>
> Hi
>
> Sage Weil writes:
> > Upgrading from Firefly
> > ----------------------
> >
> > Upgrading directly from Firefly v0.80.z is not possible. All clusters
> > must first upgrade to Hammer v0.94.4 or a later v0.94.z release;
On Wed, 14 Oct 2015, Dan van der Ster wrote:
> Hi Goncalo,
>
> On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
> wrote:
> > Hi Sage...
> >
> > I've seen that the rh6 derivatives have been ruled out.
> >
> > This is a problem in our case since the OS choice in our
Hi Josh, Yehuda and Greg,
It is my understanding that there is a chance we may need to use the OpenStack
teuthology backend as a backup while machines in the sepia lab migrate from one
data center to another. Zack has set up a new teuthology cluster that will
transparently behave as the cluster
On Mon, Oct 12, 2015 at 3:36 AM, Milosz Tanski wrote:
> On Sun, Oct 11, 2015 at 6:44 PM, Milosz Tanski wrote:
>> On Sun, Oct 11, 2015 at 6:01 PM, Milosz Tanski wrote:
>>> On Sun, Oct 11, 2015 at 5:33 PM, Milosz Tanski
On 10/11/2015 11:05 AM, Ilya Dryomov wrote:
Mapping an image with a long parent chain (e.g. image foo, whose parent
is bar, whose parent is baz, etc) currently leads to a kernel stack
overflow, due to the following recursion in the reply path:
rbd_osd_req_callback()
On 10/14/2015 12:34 PM, Jason Dillaman wrote:
In general, I like the approach.
I am concerned about passing a void* + length to specify the option value since
you really can't protect against the user providing data in the incorrect
format. For example, if the backend treated
I took a decent look at the pull request 5872
https://github.com/ceph/ceph/pull/5872
It implements something called "bucket namespaces": a way to qualify
buckets with a prefix so that different users can use
buckets with the same name.
I think I like the idea overall, but the