Hi Michael,
Sorry, I can't reproduce it anymore. I used Chrome and didn't use any
non-alphabetic characters. I must have done something else wrong.
Thanks for the quick reply.
Best,
John Lin
Michael Kidd wrote on Thursday, 12 January 2017 at 11:33 AM:
> Hello John,
> Thanks for the bug
Thanks, Wido, for the information. I hope that from 11.1.0 onward it will be
possible to upgrade through the intermediate Kraken releases and to upcoming releases.
Thanks,
Muthu
On 11 January 2017 at 19:12, Wido den Hollander wrote:
>
> > On 11 January 2017 at 12:24, Jayaram R wrote
On Wed, Jan 11, 2017 at 10:43 PM, Shinobu Kinjo wrote:
> +2
> * Reduce manual operation as much as possible.
> * A recovery tool in case we break something that would not
> be apparent to us initially.
I definitely agree that this is an overdue tool and we have an
upstream
On Thu, Jan 12, 2017 at 12:28 PM, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 11 Jan 2017 11:09:46 -0500 Jason Dillaman wrote:
>
>> I would like to propose that starting with the Luminous release of Ceph,
>> RBD will no longer support the creation of v1 image format images via
Hello John,
Thanks for the bug report. Unfortunately, I'm not able to reproduce the
error. I tested from both Firefox and Chrome on Linux. Can you let me
know what OS/browser you're using? Also, I've not tested any non-'en-US'
characters, so I can't attest to how it will behave with other
Hello,
On Wed, 11 Jan 2017 11:09:46 -0500 Jason Dillaman wrote:
> I would like to propose that starting with the Luminous release of Ceph,
> RBD will no longer support the creation of v1 image format images via the
> rbd CLI and librbd.
>
> We previously made the v2 image format the default
We are going to set up a test cluster with Kraken using CentOS 7, and we'd
obviously like to stay as close as possible to using their repositories.
If we need to install kernel 4.1.4 or later, is there a Ceph-recommended
repository to choose? For instance, should we use the elrepo 4.9ml/4.4lt
kernels?
On Thu, Jan 12, 2017 at 2:19 AM, Eugen Block wrote:
> Hi,
>
> I simply grepped for "slow request" in ceph.log. What exactly do you mean by
> "effective OSD"?
>
> If I have this log line:
> 2017-01-11 [...] osd.16 [...] cluster [WRN] slow request 32.868141 seconds
> old, received at
Interesting. I feel silly for not having checked ownership of the dev device.
I'll chown before the next deploy and report back, for the sake of possibly
helping someone else down the line.
Thanks,
Reed
> On Jan 11, 2017, at 3:07 PM, Stillwell, Bryan J
> wrote:
>
> On
On 1/11/17, 10:31 AM, "ceph-users on behalf of Reed Dier"
wrote:
>>2017-01-03 12:10:23.514577 7f1d821f2800 0 ceph version 10.2.5
>>(c461ee19ecbc0c5c330aca20f7392c9a00730367), process ceph-osd, pid 19754
>> 2017-01-03
On Wed, Jan 11, 2017 at 1:01 PM, Sage Weil wrote:
> Jason, where does librbd fall?
Option (2) won't help for users like QEMU unless we can tie the
reference counting back into the AioCompletion (i.e. delay firing
until all references to the memory are released).
--
Jason
On Wed, 11 Jan 2017, Jason Dillaman wrote:
> +1
>
> I'd be happy to tweak the internals of librbd to support pass-through
> of C buffers all the way to librados. librbd clients like QEMU use the
> C API and this currently results in several extra copies (in librbd
> and librados).
+1 from me
On Thu, Jan 12, 2017 at 2:41 AM, Ilya Dryomov wrote:
> On Wed, Jan 11, 2017 at 6:01 PM, Shinobu Kinjo wrote:
>> It would be fine to not support v1 image format at all.
>>
>> But it would probably be friendlier to users to provide them with more
>>
It does internally -- which requires the extra copy from C array to a
bufferlist. I had a PR for wrapping the C array into a bufferlist (w/o
the copy), but Sage pointed out a potential issue with such
implementations (which might still be an issue w/ this PR).
[1]
Jason: librbd itself uses the librados C++ api though, right?
-Sam
On Wed, Jan 11, 2017 at 9:37 AM, Jason Dillaman wrote:
> +1
>
> I'd be happy to tweak the internals of librbd to support pass-through
> of C buffers all the way to librados. librbd clients like QEMU use the
>
+1
I'd be happy to tweak the internals of librbd to support pass-through
of C buffers all the way to librados. librbd clients like QEMU use the
C API and this currently results in several extra copies (in librbd
and librados).
On Wed, Jan 11, 2017 at 11:44 AM, Piotr Dałek
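The extra-copy cost being discussed can be illustrated in plain Python. This is only an analogy for copying a C buffer into a bufferlist versus passing the caller's buffer through, not actual librbd/librados code:

```python
# Analogy only: bytes(...) copies the payload (like copying a C buffer into
# a bufferlist), while memoryview(...) shares the underlying storage (like
# the proposed zero-copy pass-through of the caller's buffer).
payload = bytearray(b"x" * 16)

copied = bytes(payload)       # independent copy of the data
shared = memoryview(payload)  # zero-copy view onto the same memory

payload[0] = ord("y")         # mutate the original buffer

print(copied[:1])             # b'x' -- the copy is unaffected
print(bytes(shared[:1]))      # b'y' -- the view sees the mutation
```

The trade-off mirrors the AioCompletion point raised earlier in the thread: as long as a zero-copy view into the caller's memory is outstanding, the caller must not release that memory, which is why the reference counting would need to be tied back into the completion.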
So I was attempting to add an OSD to my Ceph cluster (running Jewel 10.2.5),
using ceph-deploy (1.5.35), on Ubuntu.
I have 2 OSDs on this node and am attempting to add a third.
The first two OSDs I created with on-disk journals, then later moved them to
partitions on the NVMe system disk (Intel
On Wed, Jan 11, 2017 at 5:09 PM, Jason Dillaman wrote:
> I would like to propose that starting with the Luminous release of Ceph, RBD
> will no longer support the creation of v1 image format images via the rbd
> CLI and librbd.
>
> We previously made the v2 image format the
It would be fine not to support the v1 image format at all.
But it would probably be friendlier to users to provide them with a more
understandable message when they face a feature mismatch, instead of just
displaying:
* rbd: map failed: (6) No such device or address
For instance, show the following
Hello,
As the subject says - are there any users/consumers of the librados C API? I'm asking because we're researching whether this PR:
https://github.com/ceph/ceph/pull/12216 will actually be beneficial to a larger group of users. This PR adds a bunch of new APIs that perform
object writes without
Hi,
I simply grepped for "slow request" in ceph.log. What exactly do you
mean by "effective OSD"?
If I have this log line:
2017-01-11 [...] osd.16 [...] cluster [WRN] slow request 32.868141
seconds old, received at 2017-01-11 [...]
ack+ondisk+write+known_if_redirected e12440) currently
I would like to propose that starting with the Luminous release of Ceph,
RBD will no longer support the creation of v1 image format images via the
rbd CLI and librbd.
We previously made the v2 image format the default and deprecated the v1
format under the Jewel release. It is important to note
Hi,
just for clarity:
Did you parse the slow request messages and use the effective OSD in the
statistics? Some messages may refer to other OSDs, e.g. "waiting for sub
op on OSD X,Y". The reporting OSD is not the root cause in that case,
but one of the mentioned OSDs (and I'm currently not
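The attribution rule described above (blame the sub-op OSDs when the message names them, otherwise the reporting OSD) can be sketched in Python. This is a hypothetical parser: the regexes and sample lines are assumptions modeled on the log excerpts quoted in this thread, and the exact ceph.log wording may differ between Ceph versions:

```python
import re
from collections import Counter

# Hypothetical sample lines, modeled on the excerpts quoted in this thread;
# real ceph.log lines carry more fields (addresses, epochs, etc.).
LINES = [
    "2017-01-11 10:00:00 osd.16 cluster [WRN] slow request 32.868141 seconds"
    " old, currently waiting for sub ops from 3,7",
    "2017-01-11 10:00:05 osd.16 cluster [WRN] slow request 30.100000 seconds"
    " old, currently waiting for rw locks",
]

def effective_osds(line):
    """Return the OSDs to blame for one slow-request line.

    If the request is waiting for sub ops on other OSDs, blame those;
    otherwise blame the reporting OSD itself.
    """
    sub = re.search(r"waiting for sub ops? (?:on|from) ([\d,]+)", line)
    if sub:
        return ["osd." + n for n in sub.group(1).split(",")]
    reporter = re.search(r"(osd\.\d+)", line)
    return [reporter.group(1)] if reporter else []

counts = Counter()
for line in LINES:
    if "slow request" in line:
        counts.update(effective_osds(line))

print(dict(counts))  # {'osd.3': 1, 'osd.7': 1, 'osd.16': 1}
```

Counting per effective OSD rather than per reporting OSD avoids pinning the blame on an OSD that is merely waiting on its replicas.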
Hi list,
I'm having trouble with slow requests; they have a noticeable impact
on performance. I'd like to find out what the root cause is, and I
guess there are a lot of possible causes, but I'll just describe what
I'm seeing and hopefully someone can give advice.
I just counted the
On 11-1-2017 08:06, Adrian Saul wrote:
>
> I would concur having spent a lot of time on ZFS on Solaris.
>
> ZIL will reduce the fragmentation problem a lot (because it is not
> doing intent logging into the filesystem itself which fragments the
> block allocations) and write response will be a
Hi Marcus
Please refer to the documentation:
http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map
I believe your suggestion only modifies the in-memory map and you never get a
changed version written to the outfile, but it could easily be tested by
decompiling the
Ah right, I was using the kernel client on kernel 3.x.
Thanks for the answer. I'll try updating tomorrow and will let you know if
it works!
Cheers,
Boris
On Wed, Jan 11, 2017 at 1:03 PM John Spray wrote:
> On Wed, Jan 11, 2017 at 11:39 AM, Boris Mattijssen
>
Yes, but all I want to know is whether my way of changing the tunables is
right or not.
> On 11.01.2017 at 13:11, Shinobu Kinjo wrote:
>
> Please refer to Jens's message.
>
> Regards,
>
>> On Wed, Jan 11, 2017 at 8:53 PM, Marcus Müller
>>
> On 11 January 2017 at 12:24, Jayaram R wrote:
>
> Hello,
>
> We from Nokia are validating bluestore on a 3 node cluster with EC 2+1.
>
> While upgrading our cluster from Kraken 11.0.2 to 11.1.1 with bluestore,
> the cluster was affected: more than half of
Hello all,
I have an issue with radosgw-admin regionmap update. It doesn't update the map.
With zone configured like this:
radosgw-admin zone get
{
"id": "fc12ac44-e27e-44e3-9b13-347162d3c1d2",
"name": "oak-1",
"domain_root": "oak-1.rgw.data.root",
"control_pool":
Hello,
On Tue, Jan 10, 2017 at 11:11 PM, Nick Fisk wrote:
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Daznis
>> Sent: 09 January 2017 12:54
>> To: ceph-users
>> Subject: [ceph-users]
Please refer to Jens's message.
Regards,
On Wed, Jan 11, 2017 at 8:53 PM, Marcus Müller wrote:
> Ok, thank you. I thought I had to set Ceph to a tunables profile. If I'm
> right, then I just have to export the current crush map, edit it, and import
> it again, like:
On Wed, Jan 11, 2017 at 11:39 AM, Boris Mattijssen
wrote:
> Hi Brukhard,
>
> Thanks for your answer. I've tried two things now:
> * ceph auth get-or-create client.boris mon 'allow r' mds 'allow r path=/,
> allow rw path=/boris' osd 'allow rw pool=cephfs_data'. This is
Hi,
On 01/11/2017 12:39 PM, Boris Mattijssen wrote:
Hi Brukhard,
Thanks for your answer. I've tried two things now:
* ceph auth get-or-create client.boris mon 'allow r' mds 'allow r
path=/, allow rw path=/boris' osd 'allow rw pool=cephfs_data'. This is
according to your suggestion. I am
Ok, thank you. I thought I had to set Ceph to a tunables profile. If I'm
right, then I just have to export the current crush map, edit it, and import it
again, like:
ceph osd getcrushmap -o /tmp/crush
crushtool -i /tmp/crush --set-choose-total-tries 100 -o /tmp/crush.new
ceph osd setcrushmap -i
Hi Brukhard,
Thanks for your answer. I've tried two things now:
* ceph auth get-or-create client.boris mon 'allow r' mds 'allow r path=/,
allow rw path=/boris' osd 'allow rw pool=cephfs_data'. This is according to
your suggestion. However, I am still able to mount the root path and
read all
Hello,
We from Nokia are validating bluestore on a 3 node cluster with EC 2+1.
While upgrading our cluster from Kraken 11.0.2 to 11.1.1 with bluestore,
the cluster was affected: more than half of the OSDs went down.
$ ceph -s
cluster cb55baa8-d5a5-442e-9aae-3fd83553824e
health HEALTH_ERR
Your current problem has nothing to do with clients and neither does
choose_total_tries.
Try setting just this value to 100 and see if your situation improves.
Ultimately you need to take a good look at your cluster configuration
and how your crush map is configured to deal with that
Hi,
On 01/11/2017 11:02 AM, Boris Mattijssen wrote:
Hi all,
I'm trying to use *path restriction* on CephFS, running a Ceph Jewel
(ceph version 10.2.5) cluster.
For this I'm using the command specified in the official docs
(http://docs.ceph.com/docs/jewel/cephfs/client-auth/):
ceph auth
Hi all,
I'm trying to use *path restriction* on CephFS, running a Ceph Jewel (ceph
version 10.2.5) cluster.
For this I'm using the command specified in the official docs (
http://docs.ceph.com/docs/jewel/cephfs/client-auth/):
ceph auth get-or-create client.boris mon 'allow r' mds 'allow r, allow
Hi all,
I am not sure if this is the correct mailing list. Correct me if I am wrong.
I failed to add a pool at http://ceph.com/pgcalc/ because of a JavaScript
error:
(index):345 Uncaught TypeError: $(...).dialog is not a function
at addPool (http://ceph.com/pgcalc/:345:31)
at