when an osd is started up, IO will be blocked

2015-10-22 Thread wangsongbo

Hi all,

When an OSD is started, the IO related to it is blocked.
According to the test results, the higher the IOPS the clients send, the longer the blockage lasts.
Adjusting all the parameters associated with recovery operations was also found to be useless.


How can we reduce the impact of this process on IO?
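
For reference, the recovery-related parameters mentioned above are the ones usually lowered to protect client IO; a minimal sketch of doing that at runtime is below. The values are purely illustrative and the option names assume a Hammer-era release, so treat this as a starting point rather than a recommendation.

    # Throttle recovery/backfill on all OSDs at runtime (illustrative values).
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'

    # On the node hosting osd.0, confirm what the daemon is actually using.
    ceph daemon osd.0 config show | grep -E 'recovery|backfill'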

Thanks and Regards,
WangSongbo



Re: when an osd is started up, IO will be blocked

2015-10-25 Thread wangsongbo
,  5.00th=[4], 10.00th=[5], 20.00th=[6],
 | 30.00th=[6], 40.00th=[7], 50.00th=[8], 60.00th=[9],
 | 70.00th=[   10], 80.00th=[   12], 90.00th=[   14], 95.00th=[ 709],
 | 99.00th=[16712], 99.50th=[16712], 99.90th=[16712], 99.95th=[16712],
 | 99.99th=[16712]
bw (KB  /s): min=  129, max= 2038, per=100.00%, avg=774.00, stdev=1094.73

  write: io=24976KB, bw=401264B/s, iops=48, runt= 63737msec
slat (usec): min=0, max=40, avg= 2.48, stdev= 3.30
clat (msec): min=2, max=31379, avg=786.91, stdev=4829.02
 lat (msec): min=2, max=31379, avg=786.92, stdev=4829.02
clat percentiles (msec):
 |  1.00th=[4],  5.00th=[6], 10.00th=[6], 20.00th=[8],
 | 30.00th=[9], 40.00th=[9], 50.00th=[   11], 60.00th=[   12],
 | 70.00th=[   13], 80.00th=[   15], 90.00th=[   19], 95.00th=[   29],
 | 99.00th=[16712], 99.50th=[16712], 99.90th=[16712], 99.95th=[16712],
 | 99.99th=[16712]
bw (KB  /s): min=  317, max= 5228, per=100.00%, avg=1957.33, stdev=2832.48

lat (msec) : 2=0.07%, 4=3.23%, 10=52.29%, 20=35.76%, 50=4.41%
lat (msec) : 750=1.36%, 1000=0.02%, 2000=0.02%, >=2000=2.83%
  cpu  : usr=0.03%, sys=0.00%, ctx=228, majf=0, minf=19
  IO depths: 1=0.1%, 2=0.1%, 4=0.4%, 8=3.9%, 16=18.9%, 32=73.1%, >=64=3.5%
     submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=97.5%, 8=0.0%, 16=0.2%, 32=0.9%, 64=1.5%, >=64=0.0%
     issued: total=r=1363/w=3122/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0

 latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=10904KB, aggrb=171KB/s, minb=171KB/s, maxb=171KB/s, mint=63737msec, maxt=63737msec
  WRITE: io=24976KB, aggrb=391KB/s, minb=391KB/s, maxb=391KB/s, mint=63737msec, maxt=63737msec


Disk stats (read/write):
dm-0: ios=81/34, merge=0/0, ticks=472/26, in_queue=498, util=0.30%, aggrios=143/102, aggrmerge=106/122, aggrticks=1209/134, aggrin_queue=1343, aggrutil=0.60%
  sdd: ios=143/102, merge=106/122, ticks=1209/134, in_queue=1343, util=0.60%
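
For context, the output above is consistent with a mixed random read/write fio job at queue depth 64 run for roughly 60 seconds; a sketch of such a job is below. The target file, block size, and read/write mix are assumptions for illustration, not the exact job that produced these numbers.

    fio --name=osd-restart-test \
        --filename=/mnt/rbd-test/fio.dat --size=1G \
        --ioengine=libaio --direct=1 \
        --rw=randrw --bs=8k --iodepth=64 \
        --runtime=60 --time_based --group_reporting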



Thanks and Regards,
WangSongbo

On 15/10/22 at 10:43 PM, wangsongbo wrote:

Hi all,

When an osd is started, the IO related to it is blocked.
According to the test results, the higher the IOPS the clients send, the longer the blockage lasts.
Adjusting all the parameters associated with recovery operations was also found to be useless.


How can we reduce the impact of this process on IO?

Thanks and Regards,
WangSongbo





How to reduce the influence on the IO when an osd is marked out?

2015-10-09 Thread wangsongbo

Hi all,
When an OSD is marked out, the IO related to it is blocked, and in that case applications built on Ceph fail. According to the test results, the larger the data set, the longer the blockage lasts.

How can we reduce the impact of this process on IO?
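
A sketch of the usual mitigations, assuming the blockage comes from the rebalancing that starts once the OSD is marked out (the values are illustrative only):

    # For planned maintenance, stop OSDs from being marked out automatically.
    ceph osd set noout
    # ... perform the maintenance, then restore the default behaviour:
    ceph osd unset noout

    # If rebalancing must run, throttle it so client IO keeps more bandwidth.
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'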


Thanks and Regards,
WangSongbo


failed to open http://apt-mirror.front.sepia.ceph.com

2015-09-23 Thread wangsongbo

Hi Loic and other Cephers,

I am running teuthology-suites in our testing environment, and "ceph-cm-ansible" failed because the connection to "apt-mirror.front.sepia.ceph.com" timed out.
From a web browser I get a "502 Bad Gateway" response.
"64.90.32.37 apt-mirror.front.sepia.ceph.com" has already been added to /etc/hosts.
Have the resources been removed?
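
A quick sketch of how the failure can be narrowed down from the test node (the repodata path is the one quoted later in this thread; the exact commands are just an example):

    # Which IP does the name resolve to on this node (includes /etc/hosts)?
    getent hosts apt-mirror.front.sepia.ceph.com

    # What does the mirror answer over HTTP? A 502 here points at the server side.
    curl -sI http://apt-mirror.front.sepia.ceph.com/misc-rpms/repodata/repomd.xml | head -n 1

    # Does the route actually go to the public IP?
    traceroute apt-mirror.front.sepia.ceph.com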


Thanks and Regards,
WangSongbo


Re: failed to open http://apt-mirror.front.sepia.ceph.com

2015-09-23 Thread wangsongbo

Sage and Loic,
Thanks for your reply.
I am running teuthology in our testing environment, and I can send a traceroute to 64.90.32.37.
However, when ceph-cm-ansible runs the "yum-complete-transaction --cleanup-only" command, it gets this response:
"http://apt-mirror.front.sepia.ceph.com/misc-rpms/repodata/repomd.xml: [Errno 14] PYCURL ERROR 7 - "Failed connect to apt-mirror.front.sepia.ceph.com:80; Connection timed out""
When I replace "apt-mirror.front.sepia.ceph.com" with "64.90.32.37" in the repo file and run "yum-complete-transaction --cleanup-only" again, I get a response like this:
"http://64.90.32.37/misc-rpms/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 502 Bad Gateway""

I do not know whether this was affected by last week's attack.

Thanks and Regards,
WangSongbo

On 15/9/23 at 11:22 PM, Loic Dachary wrote:


On 23/09/2015 15:11, Sage Weil wrote:

On Wed, 23 Sep 2015, Loic Dachary wrote:

Hi,

On 23/09/2015 12:29, wangsongbo wrote:

64.90.32.37 apt-mirror.front.sepia.ceph.com

It works for me. Could you send a traceroute to apt-mirror.front.sepia.ceph.com?

This is a private IP internal to the sepia lab.  Anything outside the lab
shouldn't be using it...

This is the public facing IP and is required for teuthology to run outside of 
the lab (http://tracker.ceph.com/issues/12212).

64.90.32.37 apt-mirror.front.sepia.ceph.com

suggests the workaround was used. A traceroute will confirm whether the resolution happens as expected (with the public IP) or with a private IP (meaning the workaround is not in place where it should be).
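
A minimal sketch of putting the workaround in place and confirming which resolution is in effect (assuming root on the test node; the public IP is the one quoted above):

    # Add the public-IP workaround if the line is not already present.
    echo "64.90.32.37 apt-mirror.front.sepia.ceph.com" >> /etc/hosts

    # The traceroute then shows whether the public or a private IP is being used.
    traceroute apt-mirror.front.sepia.ceph.com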

Cheers





Re: [ceph-users] Important security notice regarding release signing key

2015-09-22 Thread wangsongbo

Hi Ken,
Thanks for your reply. However, in the ceph-cm-ansible project scheduled by teuthology, "ceph.com/packages/ceph-extras" is still in use, for packages such as qemu-kvm-0.12.1.2-2.415.el6.3ceph and qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.

Will any new releases be provided?

On 15/9/22 at 10:24 PM, Ken Dreyer wrote:

On Tue, Sep 22, 2015 at 2:38 AM, Songbo Wang <songbo1...@gmail.com> wrote:

Hi, all,
Since last week's attack, "ceph.com/packages/ceph-extras" can no longer be opened. Where can I get the ceph-extras releases now?

Thanks and Regards,
WangSongbo


The packages in "ceph-extras" were old and subject to CVEs (the big
one being VENOM, CVE-2015-3456). So I don't intend to host ceph-extras
in the new location.

- Ken




Re: [ceph-users] Important security notice regarding release signing key

2015-09-22 Thread wangsongbo

Hi Ken,
Thank you, I will update my repo and continue my test.

- Songbo

On 15/9/23 at 10:50 AM, Ken Dreyer wrote:

Hi Songbo, It's been removed from Ansible now:
https://github.com/ceph/ceph-cm-ansible/pull/137

- Ken

On Tue, Sep 22, 2015 at 8:33 PM, wangsongbo <songbo1...@gmail.com> wrote:

Hi Ken,
 Thanks for your reply. However, in the ceph-cm-ansible project scheduled by teuthology, "ceph.com/packages/ceph-extras" is still in use, for packages such as qemu-kvm-0.12.1.2-2.415.el6.3ceph and qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.
 Will any new releases be provided?


On 15/9/22 at 10:24 PM, Ken Dreyer wrote:

On Tue, Sep 22, 2015 at 2:38 AM, Songbo Wang <songbo1...@gmail.com> wrote:

Hi, all,
  Since last week's attack, "ceph.com/packages/ceph-extras" can no longer be opened. Where can I get the ceph-extras releases now?

Thanks and Regards,
WangSongbo


The packages in "ceph-extras" were old and subject to CVEs (the big
one being VENOM, CVE-2015-3456). So I don't intend to host ceph-extras
in the new location.

- Ken






test

2015-09-22 Thread wangsongbo

test


the release of ceph-extra

2015-09-22 Thread wangsongbo

Hi, all,
Since last week's attack, "ceph.com/packages/ceph-extras <http://ceph.com/packages/ceph-extras>" can no longer be opened. Where can I get the ceph-extras releases now?


Thanks and Regards,
WangSongbo


Re: failed to open http://apt-mirror.front.sepia.ceph.com

2015-09-23 Thread wangsongbo
Loic, it was my fault. The DNS server I had configured was unreachable. After I fixed that, everything works.
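
(A minimal check for this kind of problem, assuming the public IP mentioned earlier in the thread is the expected answer:)

    # Which resolvers is the node actually using?
    cat /etc/resolv.conf

    # Does the name resolve through DNS, and to the expected public IP (64.90.32.37)?
    dig +short apt-mirror.front.sepia.ceph.com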


Thanks and Regards,
WangSongbo

On 15/9/24 at 1:01 AM, Loic Dachary wrote:


On 23/09/2015 18:50, wangsongbo wrote:

Sage and Loic,
Thanks for your reply.
I am running teuthology in our testing environment, and I can send a traceroute to 64.90.32.37.
However, when ceph-cm-ansible runs the "yum-complete-transaction --cleanup-only" command, it gets this response:
"http://apt-mirror.front.sepia.ceph.com/misc-rpms/repodata/repomd.xml: [Errno 14] PYCURL ERROR 7 - "Failed connect to apt-mirror.front.sepia.ceph.com:80; Connection timed out""
When I replace "apt-mirror.front.sepia.ceph.com" with "64.90.32.37" in the repo file and run "yum-complete-transaction --cleanup-only" again, I get a response like this:
"http://64.90.32.37/misc-rpms/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 502 Bad Gateway""
I do not know whether this was affected by last week's attack.

Querying the IP directly won't get you to the mirror (it's a vhost). I think ansible fails because it queries the DNS and does not use the entry you set in the /etc/hosts file. The OpenStack teuthology backend sets a specific entry in the DNS to work around the problem (see https://github.com/ceph/teuthology/blob/master/teuthology/openstack/setup-openstack.sh#L318).
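
A sketch of how the vhost behaviour can be confirmed, assuming the frontend routes requests by Host header (the repodata path is the one from the error messages above):

    # The bare IP misses the vhost and returns the 502 seen above ...
    curl -sI http://64.90.32.37/misc-rpms/repodata/repomd.xml | head -n 1
    # ... while the same request with the right Host header should reach the mirror.
    curl -sI -H 'Host: apt-mirror.front.sepia.ceph.com' http://64.90.32.37/misc-rpms/repodata/repomd.xml | head -n 1

    # What the configured DNS (not /etc/hosts) returns, which is what ansible appears to use.
    dig +short apt-mirror.front.sepia.ceph.com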

Cheers


Thanks and Regards,
WangSongbo

On 15/9/23 at 11:22 PM, Loic Dachary wrote:

On 23/09/2015 15:11, Sage Weil wrote:

On Wed, 23 Sep 2015, Loic Dachary wrote:

Hi,

On 23/09/2015 12:29, wangsongbo wrote:

64.90.32.37 apt-mirror.front.sepia.ceph.com

It works for me. Could you send a traceroute to apt-mirror.front.sepia.ceph.com?

This is a private IP internal to the sepia lab.  Anything outside the lab
shouldn't be using it...

This is the public facing IP and is required for teuthology to run outside of 
the lab (http://tracker.ceph.com/issues/12212).

64.90.32.37 apt-mirror.front.sepia.ceph.com

suggests the workaround was used. A traceroute will confirm whether the resolution happens as expected (with the public IP) or with a private IP (meaning the workaround is not in place where it should be).

Cheers





Re: [ceph-users] Important security notice regarding release signing key

2015-09-23 Thread wangsongbo

Hi Ken,
Just now I ran teuthology-suites in our testing environment, and it failed because these packages are missing: qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64, qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph, etc.
The change "rm ceph-extras repository config #137" only removes the repository; it does not resolve ansible's dependency on these packages.
How can this dependency be resolved?
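
A sketch of how the remaining references can be located in a checkout of ceph-cm-ansible (repository URL taken from Ken's reply; the grep patterns are just examples):

    git clone https://github.com/ceph/ceph-cm-ansible.git
    cd ceph-cm-ansible
    # Find any task or package list that still expects the ceph-extras qemu-kvm builds.
    grep -rn "qemu-kvm" .
    grep -rn "ceph-extras" .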

Thanks and Regards,
WangSongbo


On 15/9/23 at 10:50 AM, Ken Dreyer wrote:

Hi Songbo, It's been removed from Ansible now:
https://github.com/ceph/ceph-cm-ansible/pull/137

- Ken

On Tue, Sep 22, 2015 at 8:33 PM, wangsongbo <songbo1...@gmail.com> wrote:

Hi Ken,
 Thanks for your reply. However, in the ceph-cm-ansible project scheduled by teuthology, "ceph.com/packages/ceph-extras" is still in use, for packages such as qemu-kvm-0.12.1.2-2.415.el6.3ceph and qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.
 Will any new releases be provided?


On 15/9/22 at 10:24 PM, Ken Dreyer wrote:

On Tue, Sep 22, 2015 at 2:38 AM, Songbo Wang <songbo1...@gmail.com> wrote:

Hi, all,
  Since last week's attack, "ceph.com/packages/ceph-extras" can no longer be opened. Where can I get the ceph-extras releases now?

Thanks and Regards,
WangSongbo


The packages in "ceph-extras" were old and subject to CVEs (the big
one being VENOM, CVE-2015-3456). So I don't intend to host ceph-extras
in the new location.

- Ken






The osd process locked itself when I tested cephfs through filebench

2016-01-07 Thread wangsongbo
inLockDelay(int volatile*, int, int) () from /usr/lib64/libtcmalloc.so.4
#1  0x7fa9b772f9bc in SpinLock::SlowLock() () from /usr/lib64/libtcmalloc.so.4
#2  0x7fa9b772c0f1 in ?? () from /usr/lib64/libtcmalloc.so.4
#3  0x7fa9b7725626 in MallocHook::InvokeNewHookSlow(void const*, unsigned long) () from /usr/lib64/libtcmalloc.so.4
#4  0x7fa9b7732d73 in tc_new () from /usr/lib64/libtcmalloc.so.4
#5  0x00b2b982 in ceph::log::Log::create_entry (this=<optimized out>, level=0, subsys=27) at log/Log.cc:175
#6  0x00c03075 in Pipe::fault (this=0x3b53d800, onread=<optimized out>) at msg/simple/Pipe.cc:1392
#7  0x00c12114 in Pipe::reader (this=0x3b53d800) at msg/simple/Pipe.cc:1674
#8  0x00c1606d in Pipe::Reader::entry (this=<optimized out>) at msg/simple/Pipe.h:50
#9  0x7fa9b6f69a51 in start_thread () from /lib64/libpthread.so.0
#10 0x7fa9b5cf193d in clone () from /lib64/libc.so.6
(gdb)
<-
This is not the first time I have seen the above problem.
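
A sketch of how a fuller picture of the hang can be captured the next time it happens (the PID lookup assumes a single ceph-osd process on the node):

    # Dump backtraces of every thread of the stuck OSD without killing it.
    gdb -p $(pidof ceph-osd) -batch \
        -ex 'set pagination off' \
        -ex 'thread apply all bt' > ceph-osd-threads.txt 2>&1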

Thanks,
wangsongbo