On Mon, Jan 4, 2016 at 7:21 PM, Sage Weil wrote:
> On Mon, 4 Jan 2016, Guang Yang wrote:
>> Hi Cephers,
>> Happy New Year! I have a question regarding the long PG peering.
>>
>> Over the last several days I have been looking into the *long peering*
>> problem when
Hi Cephers,
Happy New Year! I have a question regarding the long PG peering.
Over the last several days I have been looking into the *long peering*
problem when we start an OSD / OSD host. What I observed was that the
two peering worker threads were throttled (stuck) when trying to
queue new transa
Hi cephers,
Most recently I have been drafting the runbooks for OSD disk replacement. I think
the rule of thumb is to reduce data migration (recovery/backfill), and I thought the
following procedure should achieve that purpose:
1. ceph osd out osd.XXX (mark it out to trigger data migration)
2. ceph o
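For illustration, here is a minimal sketch of one commonly used low-migration
approach (the names osd.XXX and the exact recreate step are only examples; the
recreate command depends on the deployment tooling, e.g. ceph-disk / ceph-deploy):

  ceph osd set noout                 # stop the cluster from marking OSDs out
  # stop the OSD daemon on its host, e.g.:
  #   service ceph stop osd.XXX
  # physically replace the disk, then recreate the OSD reusing the same ID
  # (ceph-disk prepare/activate or your own runbook step), and finally:
  ceph osd unset noout               # re-enable normal out-marking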
Hi cephers,
We are investigating a backup solution for Ceph. In short, we would like a
solution to back up a Ceph cluster to another data store (not a Ceph cluster;
assume it has a SWIFT API). We would like to have both full backups and
incremental backups on top of the full backup.
After going throug
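As a rough illustration of the incremental pass, here is a naive sketch assuming
the python-swiftclient CLI ("swift") as the target store; the pool, container,
and manifest names are only examples:

  POOL=rgw-data
  CONTAINER=ceph-backup
  MANIFEST=backup-manifest.txt          # "object md5" lines from the last run

  touch "$MANIFEST"
  rados -p "$POOL" ls | while read -r obj; do
      rados -p "$POOL" get "$obj" /tmp/obj.dat
      sum=$(md5sum /tmp/obj.dat | awk '{print $1}')
      # upload only objects that are new or changed (incremental);
      # dropping this check gives a full backup
      if ! grep -q "^$obj $sum\$" "$MANIFEST"; then
          swift upload "$CONTAINER" /tmp/obj.dat --object-name "$obj"
          echo "$obj $sum" >> "$MANIFEST"
      fi
  done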
n was hanging there waiting for other ops to finish their work.
>
> thanks
> baijia...@126.com
>
> From: Guang Yang
> Sent: 2014-06-30 14:57
> To: baijiaruo
> Cc: ceph-users
> 主题: Re: [ceph-users] Ask a performance question for the RGW
> Hello,
> There is a k
Hello,
There is a known limitation in bucket scalability, and there is a blueprint
tracking it -
https://wiki.ceph.com/Planning/Blueprints/Submissions/rgw%3A_bucket_index_scalability.
For the time being, I would recommend sharding at the application level (create
multiple buckets) to work around th
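As a rough illustration of that workaround, here is a sketch assuming an
S3-compatible endpoint already configured for s3cmd; the shard count, bucket
prefix, and key are only examples:

  NSHARDS=8
  PREFIX=mydata

  # one-time setup: create the shard buckets
  for i in $(seq 0 $((NSHARDS - 1))); do
      s3cmd mb "s3://${PREFIX}-${i}"
  done

  # route each object to a shard chosen by a hash of its key, so no single
  # bucket index grows unbounded
  key="images/2013/10/cat.jpg"
  idx=$(( 16#$(echo -n "$key" | md5sum | cut -c1-8) % NSHARDS ))
  s3cmd put /path/to/cat.jpg "s3://${PREFIX}-${idx}/${key}"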
Hello Cephers,
We used to have a Ceph cluster with our data pool set up as 3 replicas. We
estimated the number of files (given disk size and object size) for each PG was
around 8K, and we disabled folder splitting, which means all files are located in
the root PG folder. Our testing showed good performanc
On May 28, 2014, at 5:31 AM, Gregory Farnum wrote:
> On Sun, May 25, 2014 at 6:24 PM, Guang Yang wrote:
>> On May 21, 2014, at 1:33 AM, Gregory Farnum wrote:
>>
>>> This failure means the messenger subsystem is trying to create a
>>> thread and is getting an
On May 21, 2014, at 1:33 AM, Gregory Farnum wrote:
> This failure means the messenger subsystem is trying to create a
> thread and is getting an error code back — probably due to a process
> or system thread limit that you can turn up with ulimit.
>
> This is happening because a replicated PG pr
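For reference, a quick sketch of the limits worth checking (and raising) when
thread creation fails on an OSD host; the values shown are only examples:

  ulimit -u                          # max user processes/threads for this shell
  cat /proc/sys/kernel/threads-max   # system-wide thread cap
  cat /proc/sys/kernel/pid_max       # PID space, which also bounds thread count

  # raise them for the current shell / system (values are illustrative):
  ulimit -u 32768
  sysctl -w kernel.pid_max=131072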
Hi Matt,
The problem you came across was due to a change made to rados bench in the
Firefly release; it aimed to solve the problem that if there were multiple
rados bench instances (for writing), we want to be able to do a rados read for
each run as well.
Unfortunately, that change broke your user ca
Hello all,
Recently I have been working on Ceph performance analysis on our cluster. Our OSD
hardware looks like:
11 SATA disks, 4TB for each, 7200RPM
48GB RAM
When breaking down the latency, we found that half of it (the average latency
is around 60 milliseconds via radosgw) comes from file loo
Hi ceph-users,
We are using Ceph (radosgw) to store user-generated images. As GET latency is
critical for us, I recently did some investigation of the GET path to
understand where the time is spent.
I first confirmed that the latency came from the OSD (read op), so we
instrumented the code to trac
Thanks all for the help.
We finally identified that the root cause of the issue was lock contention
during folder splitting, and here is the tracking ticket (thanks Inktank for
the fix!): http://tracker.ceph.com/issues/7207
Thanks,
Guang
On Tuesday, December 31, 2013 8:22 AM, Guang
Thanks Mark, my comments inline...
Date: Mon, 30 Dec 2013 07:36:56 -0600
From: Mark Nelson
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph cluster performance degrade (radosgw)
after running some time
On 12/30/2013 05:45 AM, Guang wrote:
> Hi ceph-users and ceph-devel,
> Merry C
Thanks Wido, my comments inline...
>Date: Mon, 30 Dec 2013 14:04:35 +0100
>From: Wido den Hollander
>To: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] Ceph cluster performance degrade (radosgw)
> after running some time
>On 12/30/2013 12:45 PM, Guang wrote:
> Hi ceph-users and ceph-dev
Hello ceph-users,
I am a little bit confused by these two options. I understand that crush reweight
determines the weight of the OSD in the CRUSH map, so it impacts I/O and
utilization; however, I am confused by the osd reweight option. Is
that something that controls the I/O distribution acros
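For reference, the two commands side by side (osd.12 and the weight values are
only examples):

  # CRUSH weight: stored in the CRUSH map, normally sized to the disk capacity
  # in TB; changing it shifts placement permanently and cluster-wide
  ceph osd crush reweight osd.12 3.64

  # reweight: a 0..1 override applied on top of the CRUSH weight, typically
  # used to temporarily move some PGs off an overfull OSD
  ceph osd reweight 12 0.85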
Thanks Mark.
I cannot connect to my hosts; I will do the check and get back to you tomorrow.
Thanks,
Guang
On 2013-10-24, at 9:47 PM, Mark Nelson wrote:
> On 10/24/2013 08:31 AM, Guang Yang wrote:
>> Hi Mark, Greg and Kyle,
>> Sorry to respond this late, and thanks for providing the
e OSDs, the cluster will need to maintain
far more connections between OSDs, which could potentially slow things down?
3. Anything else I might have missed?
Thanks all for the constant help.
Guang
On 2013-10-22, at 10:22 PM, Guang Yang wrote:
> Hi Kyle and Greg,
> I will get back to you with more details tomo
etwork
> topology and does your CRUSH map reflect the network topology?
>
> On Oct 21, 2013 9:43 AM, "Gregory Farnum" wrote:
> On Mon, Oct 21, 2013 at 7:13 AM, Guang Yang wrote:
> > Dear ceph-users,
> > Recently I deployed a ceph cluster with RadosGW, from a s
:13 AM, Guang Yang wrote:
> Dear ceph-users,
Hi!
> Recently I deployed a ceph cluster with RadosGW, from a small one (24 OSDs)
> to a much bigger one (330 OSDs).
>
> When using rados bench to test the small cluster (24 OSDs), it showed the
> average latency was around 3ms (objec
Dear ceph-users,
Recently I deployed a ceph cluster with RadosGW, from a small one (24 OSDs) to
a much bigger one (330 OSDs).
When using rados bench to test the small cluster (24 OSDs), it showed the
average latency was around 3ms (object size is 5K), while for the larger one
(330 OSDs), the av
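For reference, a sketch of the kind of rados bench invocation behind numbers
like these; the pool name, duration, and concurrency are only examples (-b 5120
roughly matches the 5K object size mentioned):

  rados bench -p testpool 60 write -b 5120 -t 16 --no-cleanup
  rados bench -p testpool 60 seq -t 16        # read back what was written
  rados -p testpool cleanup                   # remove the benchmark objects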
Thanks all for the recommendations. I worked around it by modifying ceph-deploy
to give the full path for sgdisk.
Thanks,
Guang
On 2013-10-16, at 10:47 PM, Alfredo Deza wrote:
> On Tue, Oct 15, 2013 at 9:19 PM, Guang wrote:
>> -bash-4.1$ which sgdisk
>> /usr/sbin/sgdisk
>>
>> Which path does ceph-dep
t to consider
each option we have and compare the pros / cons.
Thanks,
Guang
From: Gregory Farnum
To: Guang Yang
Cc: Gregory Farnum ; "ceph-us...@ceph.com"
Sent: Tuesday, August 20, 2013 9:51 AM
Subject: Re: [ceph-users] Usage pattern and des
Then that makes total sense to me.
Thanks,
Guang
From: Mark Kirkwood
To: Guang Yang
Cc: "ceph-users@lists.ceph.com"
Sent: Tuesday, August 20, 2013 1:19 PM
Subject: Re: [ceph-users] Usage pattern and design of Ceph
On 20/08/13 13:27, Guang
Thanks Greg.
Some comments inline...
On Sunday, August 18, 2013, Guang Yang wrote:
Hi ceph-users,
>This is Guang and I am pretty new to ceph, glad to meet you guys in the
>community!
>
>
>After walking through some documents of Ceph, I have a couple of questions:
> 1. Is th
Thanks Mark.
What are the design considerations for breaking large files into 4MB chunks rather
than storing the large file directly?
Thanks,
Guang
From: Mark Kirkwood
To: Guang Yang
Cc: "ceph-users@lists.ceph.com"
Sent: Monday, August 19, 2013 5:18
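As a toy illustration of what that chunking looks like from the client side,
assuming a pool named "data" (the higher layers such as RBD, radosgw, and CephFS
do this striping for you automatically):

  # cut a large file into 4 MB pieces and store each piece as its own RADOS object
  split -b 4M bigfile.bin bigfile.chunk.
  for c in bigfile.chunk.*; do
      rados -p data put "$c" "$c"     # object name == chunk file name
  done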
Hi ceph-users,
I would like to check whether there is any manual / set of steps that would let me
try deploying Ceph on RHEL.
Thanks,
Guang
Hi ceph-users,
This is Guang and I am pretty new to Ceph, glad to meet you guys in the
community!
After walking through some documents of Ceph, I have a couple of questions:
1. Is there any comparison between Ceph and AWS S3, in terms of the ability
to handle different workloads (from KB to G