Hello there,
I'm trying to reduce the impact of recovery on client operations, using mclock
for this purpose. I've tested different weights for the queues but didn't see
any impact on real performance.
ceph version 12.2.8 luminous (stable)
Last tested config:
"osd_op_queue": "mclock_opclass",
"
Hi,
I have experience deleting a big bucket (25M small objects) with the
--purge-data option. It took ~20h (run in screen) and didn't have any
significant effect on cluster performance.
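For reference, the invocation was along these lines (the bucket name is a
placeholder; note that for bucket rm the documented purge flag is
--purge-objects, while --purge-data belongs to user rm):
# radosgw-admin bucket rm --bucket=big-bucket --purge-objects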
Stas
On Thu, Oct 13, 2016 at 9:42 AM, Василий Ангапов wrote:
> Hello,
>
> I have a huge R
Hi,
I faced similar problems on CentOS 7 - it looks like a race condition
with parted.
Updating to 3.2 solved my problem (from the 3.1 in the CentOS 7 base repo):
rpm -Uhv
ftp://195.220.108.108/linux/fedora/linux/updates/22/x86_64/p/parted-3.2-16.fc22.x86_64.rpm
Stas
On Mon, Oct 3, 2016 at 6:39 PM
developers can add some comments.
Thanks!
Stas
On Thu, Sep 22, 2016 at 5:29 AM, Василий Ангапов wrote:
> And how can I make ordinary and blind buckets coexist in one Ceph cluster?
>
> 2016-09-22 11:57 GMT+03:00 Василий Ангапов :
>> Can I make an existing bucket blind?
>>
>>
Hi all,
Sorry, I made a typo in the previous message.
According to my tests there is _no_ difference in Ceph RadosGW performance
between those types of bucket names.
Thanks.
Stas
On Thu, Sep 22, 2016 at 5:25 AM, Василий Ангапов wrote:
> Stas,
>
> Are you talking about Ceph or AWS?
>
>
uploads/s) with
SSD-backed indexes or 'blind buckets' feature enabled.
Stas
> On Sep 21, 2016, at 1:28 PM, Félix Barbeira wrote:
>
> Hi,
>
> According to the Amazon S3 documentation, it is advised to insert a few random
> chars in the bucket name in order to gain per
delete them manually by prefix.
That would be a pain with more than a few million objects :)
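For illustration, a manual prefix cleanup boils down to a one-object-at-a-time
loop like the following (pool and prefix are placeholders), which is exactly
why it doesn't scale:
# rados -p .rgw.buckets ls | grep '^default.4507.1_' | while read obj; do rados -p .rgw.buckets rm "$obj"; done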
Stas
> On Sep 21, 2016, at 9:10 PM, Ben Hines wrote:
>
> Thanks. Will try it out once we get on Jewel.
>
> Just curious, does bucket deletion with --purge-objects work via
> radosgw-admin
To apply the changes you have to restart all the RGW daemons. Then all newly
created buckets will have no index (bucket list will return empty output),
but GET/PUT works perfectly.
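For reference, enabling it is roughly the following dance (a sketch; the field
to change is the index_type of the zone's placement target, and the systemd
unit name depends on your deployment):
# radosgw-admin zone get > zone.json
(edit zone.json: under placement_pools -> val, set "index_type": 1)
# radosgw-admin zone set --infile zone.json
# systemctl restart ceph-radosgw.target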
In my tests there is no performance difference between SSD-backed indexes and
'blind bucket' configurat
Disk util raised to 80%: http://pasteboard.co/2B6YREzoC.png
Disk operations: http://pasteboard.co/2B7uI5PWB.png
Disk operations - reads: http://pasteboard.co/2B8U8E33d.png
Stanislav Butkeev
15.10.2015, 21:49, "John Spray" :
> On Thu, Oct 15, 2015 at 5:11 PM, Butkeev Stas wrote:
>> Hello all,
>> Has anybody tried using CephFS?
>>
>> I have two servers with RHEL 7.1 (latest kernel 3.10.0-229.14.1.el7.x86_64).
>> Each server
Best Regards,
Stanislav Butkeev
15.10.2015, 23:05, "John Spray" :
> On Thu, Oct 15, 2015 at 8:46 PM, Butkeev Stas wrote:
>> Thank you for your comment. I know what the oflag=direct option means and
>> other things about stress testing.
>> Unfortunately the speed is very slo
Stanislav Butkeev
15.10.2015, 23:26, "Max Yehorov" :
> Stas,
>
> as you said: "Each server has 15G of flash for the Ceph journal and 12*2TB
> SATA disks for"
>
> What is this 15G flash and is it used for all 12 SATA drives?
>
> On Thu, Oct 15, 2015 at 1:05 PM, John Spray
9 MB/s
I hope I'm just missing some options in the configuration, or something else.
--
Best Regards,
Stanislav Butkeev
15.10.2015, 22:36, "John Spray" :
> On Thu, Oct 15, 2015 at 8:17 PM, Butkeev Stas wrote:
>> Hello John
>>
>> Yes, of course, write speed is rising
Hello all,
Has anybody tried using CephFS?
I have two servers with RHEL 7.1 (latest kernel 3.10.0-229.14.1.el7.x86_64). Each
server has 15G of flash for the Ceph journal and 12*2TB SATA disks for data.
I have Infiniband(ipoib) 56Gb/s interconnect between nodes.
Cluster version
# ceph -v
ceph version 0.94
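(For context, the direct-I/O write test discussed elsewhere in this thread has
roughly this shape; the CephFS mount point and sizes are illustrative
assumptions:)
# dd if=/dev/zero of=/mnt/cephfs/testfile bs=1M count=1024 oflag=direct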
Hello everybody
We have a Ceph cluster that consists of 8 hosts with 12 OSDs per host, using
2TB SATA disks.
[13:23]:[root@se087 ~]# ceph osd tree
ID  WEIGHT     TYPE NAME     UP/DOWN  REWEIGHT  PRIMARY-AFFINITY
-1  182.99203  root default
Hello, all
I have a Ceph+RGW installation, and I have some problems with "shadow" objects.
For example:
#rados ls -p .rgw.buckets|grep "default.4507.1"
.
default.4507.1__shadow_test_s3.2/2vO4WskQNBGMnC8MGaYPSLfGkhQY76U.1_5
default.4507.1__shadow_test_s3.2/2vO4WskQNBGMnC8MGaYPSLfGkhQY76U.2_2
defa
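(For anyone hitting the same thing: later radosgw-admin releases ship an
orphan scanner that can track down leaked shadow objects; a sketch, with the
pool and job id as placeholders:)
# radosgw-admin orphans find --pool=.rgw.buckets --job-id=shadow-scan
# radosgw-admin orphans finish --job-id=shadow-scan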
Thank you Lionel,
Indeed, I had forgotten about size > min_size. I set min_size to 1 and my
cluster is UP now. I have deleted the crashed OSD and set size to 3 and
min_size to 2.
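For the record, the adjustments were along these lines (the pool name is a
placeholder):
# ceph osd pool set <pool> min_size 1     # temporary, to let the PGs go active
# ceph osd pool set <pool> size 3
# ceph osd pool set <pool> min_size 2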
---
With regards,
Stanislav
01.12.2014, 19:15, "Lionel Bouton" :
> On 01/12/2014 at 17:08, Lionel Bouton wrote:
Hi all,
I have a Ceph cluster + RGW. Now I have a problem with one of the OSDs; it's down. I
check the ceph status and see this information:
[root@node-1 ceph-0]# ceph -s
cluster fc8c3ecc-ccb8-4065-876c-dc9fc992d62d
health HEALTH_WARN 4 pgs incomplete; 4 pgs stuck inactive; 4 pgs stuck
unclean
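For completeness, the follow-up checks to see which PGs are affected and why
would be something like:
# ceph health detail
# ceph pg dump_stuck inactive
# ceph pg <pgid> query        # for one of the PGs reported above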
lay a part).
To summarize, you recommend focusing on 2U servers rather than 4U (HP,
SuperMicro and so on), and the best strategy seems to be to start filling them
with 3TB disks, spreading the load over the servers evenly.
By the way, why are 5 servers so important? Why not 3 or 7, for that matter?
Thanks,
nt, you would go with
multiple 2U boxes to minimize cluster impacts in case of any downtime?
2) Barring service and SLA, is it really worth taking HP over SuperMicro,
or is it simply overpaying for a brand?
Thanks,
Stas.
>> 1) If data availability and redundancy is most important, you would go
>> with multiple 2U boxes to minimize cluster impacts in case of any
>> downtime?
>
> My general feeling here is that it depends on the size of the cluster. For
> small clusters, 2U or even 1U boxes may be ideal. For very l
On Sun, Apr 7, 2013 at 4:43 PM, Stas Oskin wrote:
> Hi,
>
> When erasing a lot of small files (4kb - 32mb), Ceph starts to eat a lot
> of CPU.
>
>
To be more exact, the CPU load on the storage nodes climbs up to 35%-40% (iowait
is slightly above it, but that might be due to slow disks
Hi,
When erasing a lot of small files (4kb - 32mb), Ceph starts to eat a lot of
CPU.
Any idea how to resolve it?
Thanks.
> For us, we have seen a SuperMicro machine which is 2U with 2 CPUs and 24 2.5
> inch SATA/SAS drives, together with 2 onboard 10Gb NICs. I think it's good
> enough for both density and computing power.
>
>
Can this configuration also hold 12 3.5" drives? What model do you use?
~30% more expensive (according to the postings) than the SuperMicro SC847,
while having ~2x the drive density.
Regards,
Stas.
Hi.
First of all, nice to meet you, and thanks for the great software!
I've thoroughly read the benchmarks on the SuperMicro hardware, with and
without SSD combinations, and wondered whether any tests were done on HP
file servers.
According to this article:
http://www.theregister.co.uk/2012/11/15