I patched the 4.4rc4 kernel source and restarted the test. Shortly
after starting it, this showed up in dmesg:
[Thu Dec 17 03:29:55 2015] WARNING: CPU: 0 PID: 2547 at
fs/ceph/addr.c:1162 ceph_write_begin+0xfb/0x120 [ceph]()
[Thu Dec 17 03:29:55 2015] Modules linked in: iscsi_target_mod
On Thu, Dec 17, 2015 at 4:56 PM, Eric Eastman
wrote:
> I patched the 4.4rc4 kernel source and restarted the test. Shortly
> after starting it, this showed up in dmesg:
>
> [Thu Dec 17 03:29:55 2015] WARNING: CPU: 0 PID: 2547 at
> fs/ceph/addr.c:1162
Hello cephers:
In our test there are three monitors. We find that running a ceph
command on a client is slow when the leader mon is down. Even after a
long time, the first ceph command a client runs is still slow.
From strace, we find that the client first tries to connect to the
leader, then after 3s it
Hi Sage,
On 17/12/2015 14:31, Sage Weil wrote:
> On Thu, 17 Dec 2015, Loic Dachary wrote:
>> Hi Ilya,
>>
>> This is another puzzling behavior (the log of all commands is at
http://tracker.ceph.com/issues/14094#note-4). In a nutshell, after a
>> series of sgdisk -i commands to examine various
On Thu, 17 Dec 2015, Jaze Lee wrote:
> Hello cephers:
> In our test there are three monitors. We find that running a ceph
> command on a client is slow when the leader mon is down. Even after a
> long time, the first ceph command a client runs is still slow.
> From strace, we find that the client first
On Thu, 17 Dec 2015, Loic Dachary wrote:
> Hi Ilya,
>
> This is another puzzling behavior (the log of all commands is at
> http://tracker.ceph.com/issues/14094#note-4). In a nutshell, after a
> series of sgdisk -i commands to examine various devices including
> /dev/sdc1, the /dev/sdc1 file
On Thu, Dec 17, 2015 at 3:10 PM, Loic Dachary wrote:
> Hi Sage,
>
> On 17/12/2015 14:31, Sage Weil wrote:
>> On Thu, 17 Dec 2015, Loic Dachary wrote:
>>> Hi Ilya,
>>>
>>> This is another puzzling behavior (the log of all commands is at
>>>
Hi.
It may help to flip on more debugging to narrow this issue down.
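A minimal sketch, assuming the kernel was built with
CONFIG_DYNAMIC_DEBUG and debugfs is mounted at /sys/kernel/debug:

    # enable pr_debug output from the cephfs client module
    echo 'module ceph +p' > /sys/kernel/debug/dynamic_debug/control
    # and from the shared messenger/monitor code
    echo 'module libceph +p' > /sys/kernel/debug/dynamic_debug/control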
Thanks
Minfei
On 12/17/15 at 01:56P, Eric Eastman wrote:
> I patched the 4.4rc4 kernel source and restarted the test. Shortly
> after starting it, this showed up in dmesg:
>
> [Thu Dec 17 03:29:55 2015] WARNING: CPU: 0 PID:
On Thu, Dec 17, 2015 at 1:19 PM, Loic Dachary wrote:
> Hi Ilya,
>
> I'm seeing a partprobe failure right after a disk was zapped with sgdisk
> --clear --mbrtogpt -- /dev/vdb:
>
> partprobe /dev/vdb failed : Error: Partition(s) 1 on /dev/vdb have been
> written, but we have
Hey cephers,
In the pursuit of openness I wanted to share a ceph-related bit of
work that is happening beyond our immediate sphere of influence and
see who is already contributing, or might be interested in the
results.
https://groups.google.com/forum/?hl=en#!topic/coprhddevsupport/llZeiTWxddM
On Thu, Dec 17, 2015 at 9:04 AM, Derek Yarnell wrote:
> I am having an issue with the 'radosgw-admin subuser create' command
> doing something different from the '/{admin}/user?subuser=json'
> admin API. I want to leverage subusers in S3 which looks to be possible
> in my
I am having an issue with the 'radosgw-admin subuser create' command
doing something different from the '/{admin}/user?subuser=json'
admin API. I want to leverage subusers in S3, which looks to be
possible in my testing, for a bit more control without resorting to ACLs.
radosgw-admin subuser create
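For reference, a sketch of the CLI invocation I mean (the subuser name
s3sub is made up for illustration, and the exact flags may vary by
version):

    # create an S3 subuser under an existing user and generate its keys
    radosgw-admin subuser create --uid=cephtest \
        --subuser=cephtest:s3sub --key-type=s3 \
        --gen-access-key --gen-secret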
On Thu, Dec 17, 2015 at 12:06 PM, Derek Yarnell wrote:
> On 12/17/15 2:36 PM, Yehuda Sadeh-Weinraub wrote:
>> Try 'section=user=cephtests'
>
> Doesn't seem to work either.
>
> # radosgw-admin metadata get user:cephtest
> {
> "key": "user:cephtest",
> "ver": {
>
On 12/17/15 2:36 PM, Yehuda Sadeh-Weinraub wrote:
> Try 'section=user=cephtests'
Doesn't seem to work either.
# radosgw-admin metadata get user:cephtest
{
"key": "user:cephtest",
"ver": {
"tag": "_dhpzgdOjqJI-OsR1MsYV5-p",
"ver": 1
},
"mtime": 1450378246,
On 12/17/15 1:09 PM, Yehuda Sadeh-Weinraub wrote:
>> Bug? Design?
>
> Somewhat a bug. Subusers using S3 were unintentional, so when we
> created the subuser API, we didn't think of needing the access key.
> For some reason we do get the key type. Can you open a ceph tracker
> issue for
On Thu, Dec 17, 2015 at 11:05 AM, Derek Yarnell wrote:
> On 12/17/15 1:09 PM, Yehuda Sadeh-Weinraub wrote:
>>> Bug? Design?
>>
>> Somewhat a bug. Subusers using S3 were unintentional, so when we
>> created the subuser API, we didn't think of needing the access
>>
With cephfs.patch and cephfs1.patch applied, I am now seeing:
[Thu Dec 17 14:27:59 2015] ------------[ cut here ]------------
[Thu Dec 17 14:27:59 2015] WARNING: CPU: 0 PID: 3036 at
fs/ceph/addr.c:1171 ceph_write_begin+0xfb/0x120 [ceph]()
[Thu Dec 17 14:27:59 2015] Modules linked in:
On Fri, Dec 18, 2015 at 2:23 PM, Eric Eastman
wrote:
>> Hi Yan Zheng, Eric Eastman
>>
>> A similar bug was reported in f2fs and btrfs; it does affect 4.4-rc4.
>> The fix was merged into 4.4-rc5: dfd01f026058 ("sched/wait: Fix the
>> signal handling fix").
>>
>>
> Hi Yan Zheng, Eric Eastman
>
> A similar bug was reported in f2fs and btrfs; it does affect 4.4-rc4.
> The fix was merged into 4.4-rc5: dfd01f026058 ("sched/wait: Fix the
> signal handling fix").
>
> Related report & discussion was here:
> https://lkml.org/lkml/2015/12/12/149
>
> I'm not
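A quick sketch for checking whether a tree already carries that fix,
using the commit id quoted above:

    # list the fix if it landed between rc4 and rc5
    git log --oneline v4.4-rc4..v4.4-rc5 | grep -i 'sched/wait'
    # or backport just that one commit onto a 4.4-rc4 tree for testing
    git cherry-pick dfd01f026058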
On 12/17/15 3:15 PM, Yehuda Sadeh-Weinraub wrote:
>
> Right. Reading the code again:
>
> Try:
> GET /admin/metadata/user=cephtest
Thanks, this is very helpful and works, and I was able to also get the PUT
working. Only question: is it expected to return a 204 No Content?
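For anyone following along, a sketch of the request shape (the hostname
is hypothetical, and the admin API also wants an S3-style signed
Authorization header, omitted here, so a bare curl will be rejected):

    # -i prints the status line, so a 204 on PUT would be visible
    curl -i 'http://rgw.example.com/admin/metadata/user=cephtest'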
On 17/12/2015 16:49, Ilya Dryomov wrote:
> On Thu, Dec 17, 2015 at 1:19 PM, Loic Dachary wrote:
>> Hi Ilya,
>>
>> I'm seeing a partprobe failure right after a disk was zapped with sgdisk
>> --clear --mbrtogpt -- /dev/vdb:
>>
>> partprobe /dev/vdb failed : Error: Partition(s)
On Thu, Dec 17, 2015 at 2:44 PM, Derek Yarnell wrote:
> On 12/17/15 3:15 PM, Yehuda Sadeh-Weinraub wrote:
>>
>> Right. Reading the code again:
>>
>> Try:
>> GET /admin/metadata/user=cephtest
>
> Thanks, this is very helpful and works, and I was able to also get the PUT
>
The script handles UTF-8 fine; the copy/paste is at fault here ;-)
On 24/11/2015 07:59, piotr.da...@ts.fujitsu.com wrote:
>> -----Original Message-----
>> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
>> ow...@vger.kernel.org] On Behalf Of Sage Weil
>> Sent: Monday, November 23, 2015
Hi Ilya,
I'm seeing a partprobe failure right after a disk was zapped with sgdisk
--clear --mbrtogpt -- /dev/vdb:
partprobe /dev/vdb failed : Error: Partition(s) 1 on /dev/vdb have been
written, but we have been unable to inform the kernel of the change, probably
because it/they are in use.
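A workaround sketch, assuming the holder is transient (for example udev
still processing the zap events) rather than a real mount:

    # wait for udev to finish its event queue, then retry partprobe
    udevadm settle --timeout=30
    for try in 1 2 3; do
        partprobe /dev/vdb && break
        sleep 2
    done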
Hi Ilya,
This is another puzzling behavior (the log of all commands is at
http://tracker.ceph.com/issues/14094#note-4). In a nutshell, after a series of
sgdisk -i commands to examine various devices including /dev/sdc1, the
/dev/sdc1 file disappears (and I think it will show up again, although I
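A debugging sketch to see whether each sgdisk -i triggers a remove/add
cycle on the partition node (partition number 1 assumed):

    # terminal 1: watch block-layer udev events
    udevadm monitor --kernel --udev --subsystem-match=block
    # terminal 2: reproduce, then check whether the node vanished
    sgdisk -i 1 /dev/sdc; ls -l /dev/sdc1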
On 17/12/15 21:27, Sage Weil wrote:
On Thu, 17 Dec 2015, Jaze Lee wrote:
Hello cephers:
In our test there are three monitors. We find that running a ceph
command on a client is slow when the leader mon is down. Even after a
long time, the first ceph command a client runs is still slow.
From strace,
On Fri, Dec 18, 2015 at 3:49 AM, Eric Eastman
wrote:
> With cephfs.patch and cephfs1.patch applied, I am now seeing:
>
> [Thu Dec 17 14:27:59 2015] ------------[ cut here ]------------
> [Thu Dec 17 14:27:59 2015] WARNING: CPU: 0 PID: 3036 at
> fs/ceph/addr.c:1171