Hi,
I thought that XFS fragmentation or leveldb (gc list growing, locking,
...) could be a problem.
Do you have any experience with this?
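If it is XFS fragmentation you suspect, one quick read-only check on an OSD
data partition is xfs_db's frag report (the device name is only an example):
# prints actual vs. ideal extent counts and a fragmentation factor
xfs_db -r -c frag /dev/sdb1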
---
Regards
Dominik
2016-04-24 13:40 GMT+02:00 <c...@jack.fr.eu.org>:
> I do not see any issue with that
>
> On 24/04/2016 12:39, Dominik
Hi,
I'm curious whether using S3 like a cache - frequent put/delete over the
long term - may cause problems in radosgw or the OSDs (XFS)?
-
Regards
Dominik
Hi,
Maybe this is the cause of another bug?
http://tracker.ceph.com/issues/13764
The situation is very similar...
--
Regards
Dominik
2016-02-25 16:17 GMT+01:00 Ritter Sławomir :
> Hi,
>
>
>
> We have two CEPH clusters running on Dumpling 0.67.11 and some of our
>
Hi,
In my cluster I have one OSD with many strange log entries:
0 cls/rgw/cls_rgw.cc:1555: couldn't find tag in name index
tag=default.
zgrep -c 'find tag in name index tag' /var/log/ceph/ceph-osd.151.log.1.gz
7531
This OSD is overloaded on CPU; iostat seems to be OK.
It has many slow requests.
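A sketch of what can be looked at on that OSD (the admin-socket path below is
the default; adjust it if yours differs):
# requests currently stuck in flight on osd.151
ceph --admin-daemon /var/run/ceph/ceph-osd.151.asok dump_ops_in_flight
# per-OSD performance counters, e.g. op latencies
ceph --admin-daemon /var/run/ceph/ceph-osd.151.asok perf dump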
Hi,
For a few days we have noticed many slow requests on our cluster.
Cluster:
ceph version 0.67.11
3 x mon
36 hosts - 10 OSDs (4T) + 2 SSDs (journals) each
Scrubbing and deep scrubbing are disabled, but the count of slow requests is
still increasing.
Disk utilisation is very low since we disabled scrubbing.
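For reference, this is how scrubbing is usually disabled cluster-wide (assuming
the flag-based approach was used here rather than per-OSD config options):
ceph osd set noscrub
ceph osd set nodeep-scrub
# re-enable later with 'ceph osd unset noscrub' and 'ceph osd unset nodeep-scrub'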
Thanks,
Is there any option to fix the bucket index automatically?
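(The closest thing I know of is radosgw-admin's bucket check; whether the --fix
and --check-objects flags are available depends on the version, and the bucket
name below is only an example:)
radosgw-admin bucket check --bucket=mybucket --check-objects --fix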
--
Regards
2015-03-14 4:49 GMT+01:00 Yehuda Sadeh-Weinraub yeh...@redhat.com:
- Original Message -
From: Dominik Mostowiec dominikmostow...@gmail.com
To: ceph-users@lists.ceph.com
Sent: Friday, March 13, 2015 4:50:18 PM
Hi,
I found a strange problem with a non-existent file in S3.
The object exists in the listing:
# s3 -u list bucketimages | grep 'files/fotoobject_83884@2/55673'
files/fotoobject_83884@2/55673.JPG 2014-03-26T22:25:59Z 349K
but:
# s3 -u head 'bucketimages/files/fotoobject_83884@2/55673.JPG'
ERROR:
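One thing worth checking (a sketch; .rgw.buckets is the default data pool, and
listing a large pool is slow, so this is only for illustration) is whether the
head object is really present in RADOS or only in the bucket index:
# the bucket's marker (e.g. default.NNNN.N) prefixes head objects in the data pool
radosgw-admin bucket stats --bucket=bucketimages | grep marker
rados -p .rgw.buckets ls | grep 'files/fotoobject_83884@2/55673.JPG'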
Hi
I have a strange problem when I try to start radosgw in a Docker container.
When I have a single container with one radosgw process inside, everything
is OK and performance seems good, I think: for a single test thread, 80
put/s for 4k objects with radosgw debug enabled.
When I start two containers on the
-21 09:42:06.410951}]}
--
Regards
Dominik
2014-08-18 23:27 GMT+02:00 Dominik Mostowiec dominikmostow...@gmail.com:
After replacing the broken disk and recreating the ceph OSD on it, the cluster shows:
ceph health detail
HEALTH_WARN 2 pgs stuck unclean; recovery 60/346857819 degraded (0.000%)
pg 3.884 is stuck unclean
,
enter_time: 2014-08-17 21:12:28.436021}]}
---
Regards
Dominik
2014-08-17 21:57 GMT+02:00 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
After 'ceph osd out' (1 osd) the cluster stopped rebalancing at:
10621 active+clean, 2 active+remapped, 1 active+degraded+remapped;
My crushmap is clean
:31 GMT+01:00 Dominik Mostowiec dominikmostow...@gmail.com:
Great!
Thanks for your help.
--
Regards
Dominik
2014-02-06 21:10 GMT+01:00 Sage Weil s...@inktank.com:
On Thu, 6 Feb 2014, Dominik Mostowiec wrote:
Hi,
Thanks!!
Can you suggest any workaround for now?
You can adjust the crush
On Thu, Feb 6, 2014 at 2:12 PM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
Hi Ceph Users,
What do you think about virtualizing the radosgw machines?
Does somebody have production-level experience with such an architecture?
--
Regards
Dominik
: []}},
{ name: Started,
enter_time: 2014-02-04 09:49:01.156626}]}
---
Regards
Dominik
2014-02-04 12:09 GMT+01:00 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
Thanks for your help!
We ran 'ceph osd reweight-by-utilization 105' again.
The cluster got stuck at 10387 active+clean, 237 active
) that is pending review, but it's not a quick fix because of
compatibility issues.
sage
On Thu, 6 Feb 2014, Dominik Mostowiec wrote:
Hi,
Maybe this info can help to find what is wrong.
For one PG (3.1e4a) which is active+remapped:
{ state: active+remapped,
epoch: 96050,
up: [
119
Great!
Thanks for your help.
--
Regards
Dominik
2014-02-06 21:10 GMT+01:00 Sage Weil s...@inktank.com:
On Thu, 6 Feb 2014, Dominik Mostowiec wrote:
Hi,
Thanks!!
Can you suggest any workaround for now?
You can adjust the crush weights on the overfull nodes slightly. You'd
need to do
:
https://github.com/ceph/ceph/pull/1178
sage
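(A sketch of the kind of adjustment described above - the osd id and the new
weight are only examples:)
ceph osd tree                        # check the current crush weights
ceph osd crush reweight osd.12 1.85  # nudge an overfull OSD down slightly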
On Mon, 3 Feb 2014, Dominik Mostowiec wrote:
In other words,
1. we've got 3 racks (1 replica per rack; a crush rule sketch for this layout follows below)
2. in every rack we have 3 hosts
3. every host has 22 OSDs
4. all pg_nums are 2^n for every pool
5. we enabled crush tunables optimal
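For reference, a minimal sketch of the kind of rule that places one replica per
rack for this layout (the root and rule names are assumptions, not taken from
the actual map):
rule rgw_per_rack {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
}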
Hi,
After command:
ceph osd reweight-by-utilization 105
the cluster stopped at 249 active+remapped.
I have 'crush tunables optimal'.
head -n 6 /tmp/crush.txt
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable
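(For anyone reproducing this: /tmp/crush.txt above is the decompiled crush map;
the file paths are only examples.)
ceph osd getcrushmap -o /tmp/crush.bin
crushtool -d /tmp/crush.bin -o /tmp/crush.txt
head -n 6 /tmp/crush.txt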
.rgw.buckets: one OSD has 105 PGs and another one (on the same
machine) has 144 PGs (37% more!).
Other pools also have this problem. The placement is not efficient.
--
Regards
Dominik
2014-02-02 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
For more info:
crush: http://dysk.onet.pl/link/r4wGK
is in progress.
If you need it, I'll send you the osdmap from the clean cluster. Let me know.
--
Regards
Dominik
2014-02-03 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
Thanks,
In the attachment.
--
Regards
Dominik
2014-02-03 Sage Weil s...@inktank.com:
Hi Dominik,
Can you send a copy of your
on OSDs.
Can I do something with it?
--
Regards
Dominik
2014-02-01 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
Changing pg_num for .rgw.buckets to a power of 2 and 'crush tunables
optimal' didn't help :(
Graph: http://dysk.onet.pl/link/BZ968
What can I do with this?
Something is broken
Hi,
Did you bump pgp_num as well?
Yes.
See: http://dysk.onet.pl/link/BZ968
25% of the PGs are two times smaller than the others.
This changes after scrubbing.
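For completeness, this is the kind of change being discussed; the target value
8192 (the next power of two above 4800) is an assumption, and pg_num can only
be increased, never decreased:
ceph osd pool set .rgw.buckets pg_num 8192
ceph osd pool set .rgw.buckets pgp_num 8192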
--
Regards
Dominik
2014-02-01 Kyle Bader kyle.ba...@gmail.com:
Changing pg_num for .rgw.buckets to a power of 2 and 'crush tunables
optimal'
18446744073709551615
pool 12 '.rgw.root' rep size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 8 pgp_num 8 last_change 44540 owner 0
pool 13 '' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins
pg_num 8 pgp_num 8 last_change 46912 owner 0
--
Regards
Dominik
2014-02-01 Dominik Mostowiec
Hi,
For more info:
crush: http://dysk.onet.pl/link/r4wGK
osd_dump: http://dysk.onet.pl/link/I3YMZ
pg_dump: http://dysk.onet.pl/link/4jkqM
--
Regards
Dominik
2014-02-02 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
Hmm,
You mean summarizing the PGs from different pools on one OSD
GB avail;
--
Regards
Dominik
2014-01-30 Sage Weil s...@inktank.com:
On Thu, 30 Jan 2014, Dominik Mostowiec wrote:
Hi,
Thanks for your response.
- with ~6,5k objects, size ~1,4G
- with ~13k objects, size ~2,8G
is in the biggest pool, 5 '.rgw.buckets'
This is because pg_num
Hi,
I have a problem with data distribution.
Smallest disk usage 40% vs highest 82%.
All PGs: 6504.
Almost all data is in the '.rgw.buckets' pool with pg_num 4800.
Is the best way to improve data distribution to increase pg_num in this pool?
Is there another way? (e.g. crush tunables, or something like that
Hi,
I'm looking for a solution to verify a file downloaded from S3 when the
ETag is multipart (with '-') and the part size is unknown.
When the part size is known, it is possible to do it, e.g., with this script:
https://github.com/Teachnova/s3md5/blob/master/s3md5
In the AWS doc I found that there is only a lower
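In case it is useful, a minimal sketch of the check when a part size is guessed
(the 5 MiB value and the file name are assumptions; the '-N' suffix of the ETag
at least fixes the part count):
FILE=downloaded.jpg                 # local copy of the object (example name)
PART_SIZE=$((5 * 1024 * 1024))      # guessed part size
TMP=$(mktemp -d)
split -b "$PART_SIZE" -d "$FILE" "$TMP/part_"
# multipart ETag = md5 of the concatenated binary md5 digests of the parts, plus "-<count>"
for p in "$TMP"/part_*; do md5sum "$p" | cut -d' ' -f1; done | xxd -r -p > "$TMP/digests.bin"
echo "$(md5sum "$TMP/digests.bin" | cut -d' ' -f1)-$(ls "$TMP"/part_* | wc -l)"
rm -r "$TMP"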
Hi,
I found something else.
'ceph pg dump' shows PGs:
- with zero or near zero objects count
- with ~6,5k objects, size ~1,4G
- with ~13k objects, size ~2,8G
Can this be the reason for the wrong data distribution on the OSDs?
---
Regards
Dominik
2014-01-30 Dominik Mostowiec dominikmostow...@gmail.com
Hi,
I found something else that I think can help.
The PG distribution, it seems, isn't OK.
Graph: http://dysk.onet.pl/link/AVzTe
Total PGs range from 70 to 140 per OSD.
Primary PGs from 15 to 58 per OSD.
Is there some way to fix it?
--
Regards
Dominik
2014-01-30 Dominik Mostowiec dominikmostow...@gmail.com:
Hi
try, ceph osd crush tunables optimal
No, I'll try it after changing pg_num to the correct value.
--
Regards
Dominik
2014-01-30 Sage Weil s...@inktank.com:
On Thu, 30 Jan 2014, Dominik Mostowiec wrote:
Hi,
I found something else.
'ceph pg dump' shows PGs:
- with zero or near zero objects count
, 30 Jan 2014, Dominik Mostowiec wrote:
Hi,
Thanks for your response.
- with ~6,5k objects, size ~1,4G
- with ~13k objects, size ~2,8G
is in the biggest pool, 5 '.rgw.buckets'
This is because pg_num is not a power of 2
Is this for all PGs (the sum of all pools) or for pool 5 '.rgw.buckets
On Sun, Jan 26, 2014 at 12:59 PM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
Hi,
Is it safe to remove these files
rados -p .rgw ls | grep '.bucket.meta.my_deleted_bucket:'
for a deleted bucket via
rados -p .rgw rm .bucket.meta.my_deleted_bucket:default.4576.1
I have a problem
Mostowiec dominikmostow...@gmail.com:
Is there any possibility to remove these meta files (without recreating the cluster)?
File names:
{path}.bucket.meta.test1:default.4110.{sequence number}__head_...
--
Regards
Dominik
2013/12/8 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
My api app to put
Is there any possibility to remove these meta files (without recreating the cluster)?
File names:
{path}.bucket.meta.test1:default.4110.{sequence number}__head_...
--
Regards
Dominik
2013/12/8 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
My api app to put files to s3/ceph checks if bucket
Hi,
My API app that puts files to s3/ceph checks whether a bucket exists by
creating that bucket.
Each bucket-create command adds 2 meta files.
-
root@vm-1:/vol0/ceph/osd# find | grep meta | grep test1 | wc -l
44
root@vm-1:/vol0/ceph/osd# s3 -u create test1
Bucket successfully created.
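A possible workaround, assuming the libs3 's3' client used above has the 'test'
subcommand (I believe it does, but please verify): check for the bucket instead
of re-creating it.
s3 -u test test1 && echo 'bucket exists'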
...@inktank.com:
I'm having trouble reproducing this one. Are you running on latest
dumpling? Does it happen with any newly created bucket, or just with
buckets that existed before?
Yehuda
On Fri, Dec 6, 2013 at 5:07 AM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
Hi,
In version dumpling
what could be the reason. Can you set 'debug ms = 1'
and 'debug rgw = 20'?
Thanks,
Yehuda
On Sat, Dec 7, 2013 at 4:33 AM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
Are you running on latest dumpling
Yes. It was installed fresh, not upgraded from a previous version.
This is a newly created
cache enabled = false')?
On Sat, Dec 7, 2013 at 8:34 AM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
Hi,
Log:
-
2013-12-07 17:32:42.736396 7ffbe36d3780 10 allocated request req=0xe66f40
2013-12-07 17:32:42.736438 7ff79b1c6700 1 == starting new request
OK, enabling the cache helps :-)
What was wrong?
--
Dominik
2013/12/7 Dominik Mostowiec dominikmostow...@gmail.com:
Yes, it is disabled
grep 'cache' /etc/ceph/ceph.conf | grep rgw
rgw_cache_enabled = false ;rgw cache enabled
rgw_cache_lru_size = 1 ;num of entries in rgw
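For reference, re-enabling the cache in ceph.conf looks roughly like this (the
section name is an assumption; 10000 is the default LRU size):
[client.radosgw.gateway]
rgw cache enabled = true
rgw cache lru size = 10000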
Thanks for your help!
---
Regards
Dominik
On Dec 7, 2013 6:34 PM, Yehuda Sadeh yeh...@inktank.com wrote:
Sounds like disabling the cache triggers some bug. I'll open a relevant
ticket.
Thanks,
Yehuda
On Sat, Dec 7, 2013 at 9:29 AM, Dominik Mostowiec
dominikmostow...@gmail.com wrote
Hi,
In a dumpling version upgraded from bobtail, creating the same bucket works:
root@vm-1:/etc/apache2/sites-enabled# s3 -u create testcreate
Bucket successfully created.
root@vm-1:/etc/apache2/sites-enabled# s3 -u create testcreate
Bucket successfully created.
I installed new dumpling cluster
Thanks.
--
Regards
Dominik
2013/12/3 Yehuda Sadeh yeh...@inktank.com:
For bobtail at this point yes. You can try the unofficial version with
that fix off the gitbuilder. Another option is to upgrade everything
to dumpling.
Yehuda
On Mon, Dec 2, 2013 at 10:24 PM, Dominik Mostowiec
Hi,
I have a strange problem.
An object copy (0 size) is killing radosgw.
Head for this file:
Content-Type: application/octet-stream
Server: Apache/2.2.22 (Ubuntu)
ETag: d41d8cd98f00b204e9800998ecf8427e-0
Last-Modified: 2013-12-01T10:37:15Z
rgw log.
2013-12-02 08:18:59.196651 7f5308ff1700 1 ==
Hi,
I found that the issue is related to ETag: -0 (ending in -0).
Is this a known bug?
--
Regards
Dominik
2013/12/2 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
I have strange problem.
Obj copy (0 size) killing radosgw.
Head for this file:
Content-Type: application/octet-stream
Server
Mostowiec dominikmostow...@gmail.com
wrote:
Hi,
I found that issue is related with ETag: -0 (ends -0)
This is known bug ?
--
Regards
Dominik
2013/12/2 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
I have strange problem.
Obj copy (0 size) killing radosgw.
Head
, but only copy fails?
On Dec 2, 2013 4:53 AM, Dominik Mostowiec dominikmostow...@gmail.com
wrote:
Hi,
I found that issue is related with ETag: -0 (ends -0)
This is known bug ?
--
Regards
Dominik
2013/12/2 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
I have strange problem
Yes, this is probably an upload of an empty file.
Is this the problem?
--
Regards
Dominik
2013/12/2 Yehuda Sadeh yeh...@inktank.com:
By any chance are you uploading empty objects through the multipart upload
api?
On Mon, Dec 2, 2013 at 12:08 PM, Dominik Mostowiec
dominikmostow...@gmail.com wrote
that if there's more
than 1 part, all parts except for the last one need to be 5M. Which
means that for uploads that are smaller than 5M there should be zero
or one parts.
On Mon, Dec 2, 2013 at 12:54 PM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
You're right.
S3 api doc:
http
for another object.
http://pastebin.com/VkVAYgwn
2013/12/3 Yehuda Sadeh yeh...@inktank.com:
I see. Do you have backtrace for the crash?
On Mon, Dec 2, 2013 at 6:19 PM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
0.56.7
On Monday, December 2, 2013, Yehuda Sadeh
Hi,
I have replication = 3; obj A -> PG -> [1,2,3].
osd.1 is the primary (master); 2 and 3 are replicas.
osd.1 - host1,
osd.2 - host2,
osd.3 - host3.
Does radosgw on host2 send its GET requests for obj A to osd.1 or to the local osd.2?
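As far as I know, librados clients (radosgw included) read from the PG's primary,
so the GET goes to osd.1 even though osd.2 is local. The mapping and the primary
can be checked with (the pool and object names below are only examples):
# the first OSD in the up/acting set is the primary
ceph osd map .rgw.buckets default.1234.5_some_object_key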
--
Regards
Dominik
Hi,
I found in doc: http://ceph.com/docs/master/start/os-recommendations/
Putting multiple ceph-osd daemons using XFS or ext4 on the same host
will not perform as well as they could.
For now the recommended filesystem is XFS.
Does this mean that for the best performance the setup should be 1 OSD per host?
Ok, Thanks :-)
--
Regards
Dominik
2013/11/26 Jens Kristian Søgaard j...@mermaidconsulting.dk:
Hi,
Putting multiple ceph-osd daemons using XFS or ext4 on the same host
will not perform as well as they could.
This means that for the best performance setup should be 1 OSD per host?
The
Hi,
I plan to delete 2 buckets, with 5M and 15M files.
Can this be dangerous if I do it via:
radosgw-admin --bucket=largebucket1 --purge-objects bucket rm
?
--
Regards
Dominik
I hope it will help.
crush: https://www.dropbox.com/s/inrmq3t40om26vf/crush.txt
ceph osd dump: https://www.dropbox.com/s/jsbt7iypyfnnbqm/ceph_osd_dump.txt
--
Regards
Dominik
2013/11/6 yy-nm yxdyours...@gmail.com:
On 2013/11/5 22:02, Dominik Mostowiec wrote:
Hi,
After remove ( ceph osd out X
Hi,
After removing (ceph osd out X) the OSDs from one server (11 OSDs), ceph
started the data migration process.
It stopped on:
32424 pgs: 30635 active+clean, 191 active+remapped, 1596
active+degraded, 2 active+clean+scrubbing;
degraded (1.718%)
All OSDs with reweight==1 are UP.
ceph -v
ceph version 0.56.7
Hi,
This is an s3/ceph cluster; .rgw.buckets has 3 copies of the data.
Many PGs are only on 2 OSDs and are marked as 'degraded'.
Can scrubbing fix the degraded objects?
I haven't set tunables in crush; maybe this can help (is it safe?)?
--
Regards
Dominik
2013/11/5 Dominik Mostowiec
Hi,
I have a strange radosgw error:
==
2013-10-26 21:18:29.844676 7f637beaf700 0 setting object
tag=_ZPeVs7d6W8GjU8qKr4dsilbGeo6NOgw
2013-10-26 21:18:30.049588 7f637beaf700 0 WARNING: set_req_state_err
err_no=125 resorting to 500
2013-10-26 21:18:30.049738 7f637beaf700 2 req
Hi,
Can radosgw-admin object unlink do something like a 'blind bucket'
(an object in a bucket without an rgw index entry)?
--
Regards
Dominik
2013/10/13 Dominik Mostowiec dominikmostow...@gmail.com:
hmm, 'tail' - do you mean file/object content?
I thought that this command might be a workaround for a 'blind
Hi,
I am also looking for something like that.
It is possible to set FULL_CONTROL permissions for the All Users group, and then:
- it is possible to put an object into the bucket (without authentication - anonymously)
- setacl, getacl, get, and delete do not work for this object.
--
Regards
Dominik
2013/9/26 david zhang
Hi,
I had a server failure that started from one disk failure:
Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.023986] sd 4:2:26:0:
[sdaa] Unhandled error code
Oct 14 03:25:04 s3-10-177-64-6 kernel: [1027237.023990] sd 4:2:26:0:
[sdaa] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
Oct 14 03:25:04
Hi
I have found something.
After the restart, the time on the server was wrong (+2 hours) before ntp fixed it.
I restarted these 3 OSDs - it did not help.
Is it possible that ceph banned this OSD? Or, after starting with the wrong
time, did the OSD break its filestore?
--
Regards
Dominik
2013/10/14 Dominik Mostowiec
to be removed later by the garbage
collector.
On Sat, Oct 12, 2013 at 11:02 PM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
Thanks :-)
Does this command remove the object from the rgw index (not just mark it as removed)?
--
Regards
Dominik
2013/10/13 Yehuda Sadeh yeh...@inktank.com:
On Sat, Oct 12
Hi,
How does radosgw-admin object unlink work?
After:
radosgw-admin object unlink --bucket=testbucket 'test_file_1001.txt'
The file still exists in the bucket listing:
s3 -u list testbucket | grep 'test_file_1001.txt'
test_file_1001.txt 2013-10-11T11:46:54Z 5
ceph -v
ceph
for that skew.
I don't remember all the constraints you'll need to satisfy when doing that
so I really recommend the first option.
-Greg
On Friday, September 13, 2013, Dominik Mostowiec wrote:
Hi,
I have ntpd installed on the servers; the time seems to be OK.
I have a strange log:
2013-09-12 07:34
can't
find this).
In this doc http://ceph.com/docs/next/install/upgrading-ceph/
For example, I found argonaut-cuttlefish.
--
Regards
Dominik
2013/10/8 Corin Langosch corin.lango...@netskin.com:
http://ceph.com/docs/master/release-notes/
On 08.10.2013 07:37, Dominik Mostowiec wrote:
hi
OK, if I do not know for sure that it is safe, I will do this step by step.
But I'm almost sure that I have seen instructions for upgrading bobtail
to dumpling.
--
Regards
Dominik
2013/10/8 Maciej Gałkiewicz mac...@shellycloud.com:
On 8 October 2013 09:23, Dominik Mostowiec dominikmostow...@gmail.com
OK, I found where I have seen info about the bobtail-dumpling upgrade:
http://www.spinics.net/lists/ceph-users/msg03408.html
--
Regards
Dominik
2013/10/8 Dominik Mostowiec dominikmostow...@gmail.com:
ok, if I do not know for sure it is safe i will do this step by step.
But i'm almost sure that i
hi,
Is it possible to (safely) upgrade directly from bobtail (0.56.6) to
dumpling (latest)?
Are there any instructions?
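For what it's worth, the usual rolling order is roughly the following (a sketch,
not an official procedure; the package and init commands depend on your distro):
apt-get update && apt-get install ceph ceph-common radosgw   # on each host
service ceph restart mon.a      # monitors first, one at a time
service ceph restart osd.0      # then OSDs, waiting for HEALTH_OK between hosts
service radosgw restart         # gateways last
ceph health                     # verify between every step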
--
Regards
Dominik
connect_seq 122 vs existing 122 state connecting
2013-09-13 00:11:21.553559 7fd63ac3e700 0 log [INF] : mon.4 calling
new monitor election
--
Dominik
2013/9/13 Joao Eduardo Luis joao.l...@inktank.com:
On 09/13/2013 03:38 AM, Sage Weil wrote:
On Thu, 12 Sep 2013, Dominik Mostowiec wrote:
Hi
Thanks for your answer.
Regards
Dominik
On Aug 30, 2013 4:59 PM, Yehuda Sadeh yeh...@inktank.com wrote:
On Fri, Aug 30, 2013 at 7:44 AM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
(echo -n 'GET /dysk/files/test.test%
40op.pl/DOMIWENT%202013/Damian%20DW/dw/Specyfikacja%20istotnych
Hi,
I got an error (400) from radosgw on this request:
2013-08-30 08:09:19.396812 7f3b307c0700 2 req 3070:0.000150::POST
Hi,
The rgw bucket index is in one object (single-OSD performance issues).
Is sharding, or another change to increase performance, on the roadmap?
--
Regards
Dominik
Hi,
Something interesting: the OSD with problems eats much more memory.
The standard is about 300M;
this OSD eats up to 30G.
Can I run any tests to help find where the problem is?
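If the OSD is built with tcmalloc (the usual case), one test that could help is
the built-in heap profiler; the osd id below is taken from later in this thread
as an example, and the syntax may differ slightly on older releases:
ceph tell osd.87 heap stats
ceph tell osd.87 heap start_profiler
# ...leave it running under load for a while, then:
ceph tell osd.87 heap dump
ceph tell osd.87 heap stop_profiler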
--
Regards
Dominik
2013/7/16 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
I noticed that the problem is more frequent
osd recovery threads = 1
osd recovery max active = 1
osd recovery op priority = 1
osd client op priority = 100
osd max backfills = 1
--
Regards
Dominik
2013/7/4 Dominik Mostowiec dominikmostow...@gmail.com:
I reported bug: http://tracker.ceph.com/issues/5504
I reported bug: http://tracker.ceph.com/issues/5504
--
Regards
Dominik
2013/7/2 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
Some osd.87 performance graphs:
https://www.dropbox.com/s/o07wae2041hu06l/osd_87_performance.PNG
After 11:05 I restarted it.
Mons .., maybe
mon
resulting in a slight data placement change at the moment when the _first rebooted_
monitor came up; it did not show up with one-hour delays between quorum restarts.
On Tue, Jul 2, 2013 at 1:37 PM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
Hi,
I got it.
ceph health detail
HEALTH_WARN
Hi,
We took osd.71 out and now the problem is on osd.57.
Curiously, op_rw on osd.57 is much higher than on the others.
See here: https://www.dropbox.com/s/o5q0xi9wbvpwyiz/op_rw_osd57.PNG
In the data on this OSD I found:
data/osd.57/current# du -sh omap/
2.3G    omap/
That much higher op_rw on one osd
Today I had the peering problem not when I took osd.71 out, but during normal Ceph operation.
Regards
Dominik
2013/6/28 Andrey Korolyov and...@xdel.ru:
There is almost the same problem with the 0.61 cluster, at least with the same
symptoms. It can be reproduced quite easily - remove an OSD and then
mark it as out
reached pg
Regards
Dominik
2013/6/3 Gregory Farnum g...@inktank.com:
On Sunday, June 2, 2013, Dominik Mostowiec wrote:
Hi,
I am trying to start a postgres cluster on VMs with a second disk mounted from
ceph (rbd - kvm).
I started some writes (pgbench initialisation) on 8 VMs and the VMs froze.
Ceph
Hi,
I am trying to start a postgres cluster on VMs with a second disk mounted from
ceph (rbd - kvm).
I started some writes (pgbench initialisation) on 8 VMs and the VMs froze.
Ceph reported a slow request on 1 OSD. I restarted this OSD to remove the
slow requests, and the VMs hung permanently.
Is this a normal situation after