Hi, I ran some tests to reproduce this problem.
As you can see, only one drive (each drive is in the same PG) is much more
utilized than the others, and there are some ops queued on this slow
OSD. The test fetches the heads of S3 objects, sorted
alphabetically. This is strange. Why are these files going in
Great, thanks. Now I understand everything.
Best Regards
SS
On 6 Mar 2013, at 15:04, Yehuda Sadeh yeh...@inktank.com wrote:
On Wed, Mar 6, 2013 at 5:06 AM, Sławomir Skowron szi...@gmail.com wrote:
Hi, I ran some tests to reproduce this problem.
As you can see, only one drive (each
, Sławomir Skowron szi...@gmail.com wrote:
Hi,
We have a big problem with RGW. I don't know what the initial
trigger is, but I have a theory.
2-3 OSDs, out of 78 in the cluster (6480 PGs in the RGW pool), have 3x more
RAM usage, many more operations in the journal, and much higher
latency
good.
On Mon, Mar 4, 2013 at 6:25 PM, Gregory Farnum g...@inktank.com wrote:
On Mon, Mar 4, 2013 at 9:23 AM, Sławomir Skowron szi...@gmail.com wrote:
On Mon, Mar 4, 2013 at 6:02 PM, Sage Weil s...@inktank.com wrote:
On Mon, 4 Mar 2013, Sławomir Skowron wrote:
OK, thanks for the response. But if I
On Mon, Mar 4, 2013 at 6:42 PM, Sławomir Skowron szi...@gmail.com wrote:
Alone (one of these slow OSDs in the mentioned triple)
2013-03-04 18:39:27.683035 osd.23 [INF] bench: wrote 1024 MB in blocks
of 4096 KB in 15.241943 sec at 68795 KB/sec
in a for loop (some slow requests appear):
for x in `seq 0
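A plausible completion of that truncated loop (a sketch only; it assumes the bench ran across all 78 OSDs, osd.0 through osd.77, with the old `ceph osd tell` syntax):
# for x in `seq 0 77`; do ceph osd tell $x bench; done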
Like I said, yes. For now the only option is to migrate data from one
cluster to the other, and for now that has to be enough, with some automation.
But is there any timeline, or any brainstorming in Ceph internal
meetings, about possible block-level replication, or something
like that?
On 20 Feb
Hi,
I have a problem. After an OSD expansion and a CRUSH reorganization, I
have 1 PG in the incomplete state. How can I solve this problem? (See the diagnostic sketch after the output below.)
ceph -s
health HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs
stuck unclean
monmap e21: 3 mons at
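For a PG stuck incomplete, the usual first diagnostics look like this (a sketch; the pg id is a placeholder):
# ceph pg dump_stuck inactive
# ceph pg 2.1f query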
, for
some time.
I am in the middle of writing an article about this, but my sickness
has slowed the process down slightly.
On Thu, Jan 31, 2013 at 10:50 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/31 Sławomir Skowron szi...@gmail.com:
We are using nginx, on top of rgw
with restart, or many other cases.
Volumes come in many sizes, 1-500GB:
external block devices for KVM VMs, like EBS.
On Mon, Feb 18, 2013 at 3:07 PM, Sławomir Skowron szi...@gmail.com wrote:
Hi, sorry for the very late response, but I was sick.
Our use case is to make a failover RBD instance in another
We are using nginx on top of RGW. In nginx we managed to build logic
for using AMQP and async operations via queues. Workers on
each side then get data from their own queue and copy data from the
source to the destination via the S3 API. It works for PUT/DELETE, and works
automatically when production
, Nov 17, 2012 at 1:50 PM, Sławomir Skowron szi...@gmail.com wrote:
Welcome,
I have a question. Is there any way to support multiple domain names
in one radosgw with virtual-host-style connections in S3?
Are you aiming at having multiple virtual domain names pointing at the
same bucket
Welcome,
I have a question. Is there any way to support multiple domain names
in one radosgw with virtual-host-style connections in S3? (See the config sketch below.)
Regards
SS
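For context: at the time radosgw took a single virtual-host domain via rgw dns name, roughly like this in ceph.conf (the section name and domain are placeholders):
[client.radosgw.gateway]
    rgw dns name = s3.example.com
Serving additional domains therefore meant normalizing the Host header in a frontend such as nginx.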
OK, I will dig into nginx, thanks.
On 8 Nov 2012, at 22:48, Yehuda Sadeh yeh...@inktank.com wrote:
On Wed, Nov 7, 2012 at 6:16 AM, Sławomir Skowron szi...@gmail.com wrote:
I have realized that requests from fastcgi in nginx from radosgw return:
HTTP/1.1 200, not HTTP/1.1 200 OK
I have realized that requests from fastcgi in nginx from radosgw return:
HTTP/1.1 200, not HTTP/1.1 200 OK
Any other CGI that I run, for example PHP via fastcgi, returns it as the
RFC says, with OK.
Has anyone else experienced this problem?
I see in code:
./src/rgw/rgw_rest.cc line 36
const
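A quick way to inspect the status line being returned (the host is a placeholder):
# curl -sI http://rgw.example.com/ | head -n 1
A compliant reply begins with HTTP/1.1 200 OK, reason phrase included.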
OK, I will try, but I have all-day meetings today and tomorrow.
One more question: is there any way to check the configuration not from
ceph.conf, but from a running daemon in the cluster?
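For reference, a running daemon's effective configuration can be dumped over its admin socket; a minimal sketch, assuming the default socket path:
# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show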
On Fri, Sep 14, 2012 at 9:12 PM, Sage Weil s...@inktank.com wrote:
Hi,
On Wed, 5 Sep 2012, Skowron Sławomir
=403
2012-09-11 22:37:34.713093 7faec76c6700 1 == req done
req=0x1368860 http_status=403 ==
On Tue, Sep 11, 2012 at 7:46 PM, Sławomir Skowron szi...@gmail.com wrote:
On Tue, Sep 11, 2012 at 6:48 PM, Yehuda Sadeh yeh...@inktank.com wrote:
On Tue, Sep 11, 2012 at 9:45 AM, Yehuda Sadeh
Every ACL operation ends with 403 on PUT.
~# s3 -u test oc
Bucket               Status
oc                   Access Denied
Anyone know
On Tue, Sep 11, 2012 at 6:48 PM, Yehuda Sadeh yeh...@inktank.com wrote:
On Tue, Sep 11, 2012 at 9:45 AM, Yehuda Sadeh yeh...@inktank.com wrote:
On Tue, Sep 11, 2012 at 7:28 AM, Sławomir Skowron szi...@gmail.com wrote:
Every ACL operation ends with 403 on PUT.
~# s3 -u test oc
:34.712887 7faec76c6700 2 req 71188:0.004554:s3:PUT /ocdn/files/pulscms/MjU7MDA_/ecebacddde95224b96f46333912049b1:put_obj:http status=403
2012-09-11 22:37:34.713093 7faec76c6700 1 == req done
req=0x1368860 http_status=403 ==
On Tue, Sep 11, 2012 at 7:46 PM, Sławomir Skowron szi...@gmail.com wrote
...@inktank.com wrote:
On Tue, Sep 11, 2012 at 1:41 PM, Sławomir Skowron szi...@gmail.com wrote:
And more logs:
2012-09-11 21:03:38.357304 7faf0bf4f700 1 == req done
req=0x141a650 http_status=403 ==
2012-09-11 21:23:54.423185 7faf0bf4f700 20 dequeued request req=0x139a3d0
2012-09-11
OK, but why did this happen? No new code was deployed before this
problem appeared. Is there any way to recover the cluster to normal
operation, without Access Denied on every S3 ACL operation?
On Tue, Sep 11, 2012 at 11:32 PM, Yehuda Sadeh yeh...@inktank.com wrote:
On Tue, Sep 11, 2012 at 2:28 PM, Sławomir
:
Maybe your s3 library got updated and now uses a newer s3 dialect?
Basically you need to update the bucket acl, e.g.:
# s3 -u create foo
# s3 -u getacl foo | s3 -u setacl oldbucket acl
On Tue, Sep 11, 2012 at 2:38 PM, Sławomir Skowron szi...@gmail.com wrote:
OK, but why did this happen
mem_stacks_B=0
heap_tree=empty
On Wed, Sep 5, 2012 at 8:44 PM, Sławomir Skowron szi...@gmail.com wrote:
On Wed, Sep 5, 2012 at 5:51 PM, Sage Weil s...@inktank.com wrote:
On Wed, 5 Sep 2012, Sławomir Skowron wrote:
Unfortunately, here is the problem on my Ubuntu 12.04.1
--9399-- You may be able to write
Unfortunately, here is the problem on my Ubuntu 12.04.1
--9399-- You may be able to write your own handler.
--9399-- Read the file README_MISSING_SYSCALL_OR_IOCTL.
--9399-- Nevertheless we consider this a bug. Please report
--9399-- it at http://valgrind.org/support/bug_reports.html.
==9399==
OK, but in the general case, when I use Chef/Puppet or any other tool, I want
to change the configuration in ceph.conf and reload the daemon to pick up
the new configuration from ceph.conf; this feature would be very
helpful in Ceph administration.
Injectargs is OK for testing or debugging.
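The injection mechanism referred to here looked roughly like this (the option and target are examples only):
# ceph osd tell \* injectargs '--debug-osd 20'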
On Tue, Sep 4, 2012
On Wed, Sep 5, 2012 at 5:51 PM, Sage Weil s...@inktank.com wrote:
On Wed, 5 Sep 2012, Sławomir Skowron wrote:
Unfortunately, here is the problem on my Ubuntu 12.04.1
--9399-- You may be able to write your own handler.
--9399-- Read the file README_MISSING_SYSCALL_OR_IOCTL.
--9399--
PM, Sławomir Skowron szi...@gmail.com wrote:
I have this problem too. My mons in a 0.48.1 cluster use 10GB of RAM
each, with 78 OSDs, and 2k requests per minute (max) in radosgw.
Now I have run one via valgrind. I will send the output when the mon has grown.
On Fri, Aug 31, 2012 at 6:03 PM, Sage Weil s
I have this problem too. My mons in a 0.48.1 cluster use 10GB of RAM
each, with 78 OSDs, and 2k requests per minute (max) in radosgw.
Now I have run one via valgrind. I will send the output when the mon has grown.
On Fri, Aug 31, 2012 at 6:03 PM, Sage Weil s...@inktank.com wrote:
On Fri, 31 Aug 2012, Xiaopong
Ubuntu precise, ceph 0.48.1
After a CRUSH change, the whole cluster reorganized, but one machine got very
high load, and 4 OSDs on this machine died with this in the log.
After that I rebooted the machine and re-initialized these OSDs (I left one to
diagnose if needed) for full stability. Now everything is OK, but
maybe
is online again, then the number of waiting objecter
requests in rgw goes up, and in this case scrubbing does not
finish for many hours; I have quite a big problem.
Is this a known bug, or maybe a new one?
On Tue, Aug 14, 2012 at 8:33 AM, Sławomir Skowron
slawomir.skow...@gmail.com wrote
On 21 Jul 2012, at 20:08, Yehuda Sadeh yeh...@inktank.com wrote:
On Sat, Jul 21, 2012 at 10:13 AM, Gregory Farnum g...@inktank.com wrote:
On Fri, Jul 20, 2012 at 1:15 PM, Tommi Virtanen t...@inktank.com
(mailto:t...@inktank.com) wrote:
On Fri, Jul 20, 2012 at 8:31 AM, Sławomir
OK, everything is clear now, thanks. I will try this when planning service maintenance.
Regards
Slawomir Skowron.
On 23 Jul 2012, at 18:00, Tommi Virtanen t...@inktank.com wrote:
On Sun, Jul 22, 2012 at 11:57 PM, Sławomir Skowron szi...@gmail.com wrote:
My workload looks like this:
- Max 20
I know that this feature is disabled; are you planning to enable it
in the near future?
I have many drives, and my S3 installation uses only a few of them at a
time, and I need to improve that.
When I use it as RBD it uses all of them.
Regards
Slawomir Skowron
I have two questions. My newly created cluster has XFS on all OSDs,
Ubuntu precise, kernel 3.2.0-23-generic, Ceph 0.47.2-1precise
pool 0 'data' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num
64 pgp_num 64 last_change 1228 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 3
On Thu, May 24, 2012 at 7:15 AM, Wido den Hollander w...@widodh.nl wrote:
On 22-05-12 20:07, Yehuda Sadeh wrote:
RGW is maturing. Besides looking at performance, which ties closely into
RADOS performance, we'd like to hear whether there are certain pain
points or future directions that you
-rw-r--r-- 1 root root 536870912 May 22 10:16 journal
drwx------ 2 root root 16384 May 22 10:15 lost+found
-rw-r--r-- 1 root root 4 May 22 10:16 store_version
-rwx------ 1 root root 0 May 22 10:16 xattr_test
On Mon, May 21, 2012 at 11:25 PM, Sławomir Skowron szi...@gmail.com
OK, now it is clear to me.
I will disable filestore_xattr_use_omap for now, and I will try to move the
Puppet class to XFS for a new cluster init :)
Thanks
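For reference, the option under discussion lives in ceph.conf, roughly like this (a sketch):
[osd]
    filestore xattr use omap = true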
On Tue, May 22, 2012 at 7:47 PM, Greg Farnum g...@inktank.com wrote:
On Tuesday, May 22, 2012 at 1:21 AM, Sławomir Skowron wrote:
One more thing
On Tue, May 22, 2012 at 8:07 PM, Yehuda Sadeh yeh...@inktank.com wrote:
RGW is maturing. Besides looking at performance, which ties closely into
RADOS performance, we'd like to hear whether there are certain pain
points or future directions that you (you as in the Ceph community)
would like to
On Tue, May 22, 2012 at 9:09 PM, Yehuda Sadeh yeh...@inktank.com wrote:
On Tue, May 22, 2012 at 11:25 AM, Sławomir Skowron szi...@gmail.com wrote:
On Tue, May 22, 2012 at 8:07 PM, Yehuda Sadeh yeh...@inktank.com wrote:
RGW is maturing. Besides looking at performance, which ties closely
Ubuntu precise:
Linux obs-10-177-66-4 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10
20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
# mount
/dev/sdc on /vol0/data/osd.0 type ext4
(rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0)
# ceph-osd -i 0 --mkjournal --mkfs --monmap
it was broken for a few
hours in a way that will manifest somewhat like this.
On Mon, May 21, 2012 at 1:49 PM, Stefan Priebe s.pri...@profihost.ag wrote:
On 21.05.2012 22:41, Sławomir Skowron wrote:
# ceph-osd -i 0 --mkjournal --mkfs --monmap /tmp/monmap
2012-05-21 22:36:54.150374 7f65fbc0b780
Maybe two cheap MLC Intel drives on SandForce (320/520), 120GB or 240GB,
with the HPA reduced to 20-30GB, would be good for the journal, used only as
separate journaling partitions behind hardware RAID1.
I would like to test a setup like this, but maybe someone has real-life info?
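An OSD journal on such a RAID1 SSD partition would be pointed at from ceph.conf roughly like this (the device path is a placeholder):
[osd.0]
    osd journal = /dev/md0p1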
On Mon, May 21, 2012 at 5:07 PM,
Great, thanks.
On Mon, May 21, 2012 at 11:24 PM, Sage Weil s...@inktank.com wrote:
On Mon, 21 May 2012, Stefan Priebe wrote:
On 21.05.2012 22:58, Sławomir Skowron wrote:
Yes, as root. 30 minutes ago on 0.46 the same operation worked.
...
Repo:
deb http://ceph.com/debian/ precise main
I get performance from the RBD cluster of nearly 320MB/s in a VM on a 3-node
cluster, but with 10GbE, and with 26 2.5" SAS drives used in every
machine, that is not all it could be.
Every OSD drive is a single-drive RAID0 behind battery-backed NVRAM cache in
a hardware RAID controller.
Every OSD takes much RAM for
More info: this happened after I set filestore_xattr_use_omap = 1 in the conf, and
ceph -w looks like the attachment in my previous mail.
I have downgraded to 0.44, and everything is OK now, but why did this happen?
2012/4/13 Sławomir Skowron szi...@gmail.com:
2012-04-13 11:03:20.017166 7f63d62b47a0 -- 0.0.0.0
2012/3/5 Sławomir Skowron szi...@gmail.com:
On 5 mar 2012, at 19:59, Yehuda Sadeh Weinraub
yehuda.sa...@dreamhost.com wrote:
On Mon, Mar 5, 2012 at 2:23 AM, Sławomir Skowron
slawomir.skow...@gmail.com wrote:
2012/3/1 Sławomir Skowron slawomir.skow...@gmail.com:
2012/2/29 Yehuda Sadeh
On 6 mar 2012, at 18:53, Yehuda Sadeh Weinraub
yehuda.sa...@dreamhost.com wrote:
On Tue, Mar 6, 2012 at 2:08 AM, Sławomir Skowron szi...@gmail.com wrote:
All logs from osd.24, osd.62, and osd.36, with osd debug = 20 and
filestore debug = 20, from 2012-03-06 10:25 onward.
http
2012/3/1 Sławomir Skowron slawomir.skow...@gmail.com:
2012/2/29 Yehuda Sadeh Weinraub yehuda.sa...@dreamhost.com:
On Wed, Feb 29, 2012 at 5:06 AM, Sławomir Skowron
slawomir.skow...@gmail.com wrote:
OK, it's intentional.
We are checking meta info about the files, then checking the MD5 of each file
On 5 mar 2012, at 19:59, Yehuda Sadeh Weinraub
yehuda.sa...@dreamhost.com wrote:
On Mon, Mar 5, 2012 at 2:23 AM, Sławomir Skowron
slawomir.skow...@gmail.com wrote:
2012/3/1 Sławomir Skowron slawomir.skow...@gmail.com:
2012/2/29 Yehuda Sadeh Weinraub yehuda.sa...@dreamhost.com:
On Wed, Feb 29
/data/osd.7
/dev/sdt 275G 604M 260G 1% /vol0/data/osd.19
2012/2/28 Yehuda Sadeh Weinraub yehuda.sa...@dreamhost.com:
(resending to list)
On Tue, Feb 28, 2012 at 11:53 AM, Sławomir Skowron
slawomir.skow...@gmail.com wrote:
2012/2/28 Yehuda Sadeh Weinraub yehuda.sa
After some parallel copy commands via boto for many files, everything
slows down, and eventually we get a timeout from nginx@radosgw.
# ceph -s
2012-02-28 12:16:57.818566 pg v20743: 8516 pgs: 8516 active+clean;
2154 MB data, 53807 MB used, 20240 GB / 21379 GB avail
2012-02-28 12:16:57.845274
Unfortunately, 3 hours ago I made the decision to re-init the cluster :(
Some data was available via rados, but the cluster was unstable, and
migrating the data was difficult under time pressure from outside :)
After initializing a new cluster on one machine, with clean pools, I was able
to increase the number of PGs in
After increasing pg_num from 8 to 100 in .rgw.buckets I have some
serious problems. (The command used is sketched after the table below.)
pool name        category     KB       objects  clones  degraded  unfound  rd  rd KB  wr  wr KB
.intent-log      -            4662     19
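For context, the resize itself would have been a single command like this (a sketch; as the reply further down notes, pg splitting was broken at the time):
# ceph osd pool set .rgw.buckets pg_num 100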
2012-02-20 20:34:07.619113 osd.20
10.177.64.4:6839/6735 64 : [ERR] mkpg 7.5f up [51,20,64] != acting
[20,51,64]
2012/2/20 Sławomir Skowron slawomir.skow...@gmail.com:
After increasing pg_num from 8 to 100 in .rgw.buckets I have some
serious problems.
pool name category
40 GB in 3 copies in an RGW bucket, and some data in RBD, but that can be
destroyed.
ceph -s reports 224 GB in the normal state.
Regards
iSS
On 20 Feb 2012, at 21:19, Sage Weil s...@newdream.net wrote:
Ooh, the pg split functionality is currently broken, and we weren't
planning on
On 17 Feb 2012, at 19:06, Tommi Virtanen
tommi.virta...@dreamhost.com wrote:
2012/2/17 Sławomir Skowron szi...@gmail.com:
1. Are there any plans for tier support? Example:
I have a Ceph cluster with fast SAS drives, lots of RAM, SSD
acceleration, and a 10GbE network. I use
Excellent, thanks.
Regards
iSS
On 8 Feb 2012, at 15:45, Yehuda Sadeh Weinraub
yehuda.sa...@dreamhost.com wrote:
2012/2/8 Sławomir Skowron szi...@gmail.com:
Is there any way to disable logging inside rados for radosgw?
pool name category KB objects
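The usual knob for this sits in the gateway section of ceph.conf (option name as in later releases; treat it as an assumption for this era):
[client.radosgw.gateway]
    rgw enable ops log = false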
did you encounter?
-Sam
2012/1/10 Sławomir Skowron slawomir.skow...@gmail.com:
I have a problem with adding a new mon to an existing Ceph cluster.
The cluster now contains 3 mons, but I started with only one on one
machine. Then I added a second and a third machine, with new mons, and
OSD
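For reference, manually adding a mon at the time went roughly like this (the id and address are placeholders):
# ceph mon getmap -o /tmp/monmap
# ceph-mon -i newmon --mkfs --monmap /tmp/monmap
# ceph mon add newmon 10.0.0.4:6789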
:
At the moment, expanding the number of pgs in a pool is not working.
We hope to get it working in the somewhat near future (probably a few
months). Are you attempting to expand the number of osds and running
out of pgs?
-Sam
2012/1/10 Sławomir Skowron slawomir.skow...@gmail.com:
How to expand
Ehhh, too long on this :) I forgot to load acpiphp.
Thanks for everything, now it's working beautifully. After 10h of
iozone on RBD devices, no hangs or problems.
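The fix being referred to: load the ACPI PCI hotplug module in the guest and make it persistent (paths are standard Ubuntu):
# modprobe acpiphp
# echo acpiphp >> /etc/modules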
2011/12/20 Josh Durgin josh.dur...@dreamhost.com:
On 12/19/2011 07:37 AM, Sławomir Skowron wrote:
Hi,
Actual setup:
ii
new RBD drives appear in the VM.
Is there any way to hot-add an RBD device to a running VM without a reboot?
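With libvirt this is typically done with virsh attach-device and a small disk XML (a sketch; the domain name, pool/image, and cephx auth details are placeholders or omitted):
# cat > rbd-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/myimage'/>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
# virsh attach-device myvm rbd-disk.xml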
2011/12/16 Sławomir Skowron szi...@gmail.com:
2011/12/13 Josh Durgin josh.dur...@dreamhost.com:
On 12/13/2011 04:56 AM, Sławomir Skowron wrote:
Finally, I managed the problem with rbd
2011/12/13 Josh Durgin josh.dur...@dreamhost.com:
On 12/13/2011 04:56 AM, Sławomir Skowron wrote:
Finally, I managed the problem with rbd with KVM 1.0 and libvirt
0.9.8, or I think I managed it :), but I got stuck on one thing afterwards.
2011-12-13 12:13:31.173+: 21512: error
2011/12/12 Josh Durgin josh.dur...@dreamhost.com:
On 12/09/2011 04:48 AM, Sławomir Skowron wrote:
Sorry for my lag, but I was sick.
I handled the AppArmor problem before, and now it's not a problem;
even when I sent the earlier mail, it was already solved.
OK, when I create an image from qemu-img it looks
josh.dur...@dreamhost.com:
On 12/01/2011 11:37 AM, Sławomir Skowron wrote:
I have some problems. Can you help me?
ceph cluster:
ceph 0.38, oneiric, kernel 3.0.0 x86_64 - now only one machine.
kvm hypervisor:
kvm version 1.0-rc4 (0.15.92), libvirt 0.9.2, kernel 3.0.0, Ubuntu
oneiric x86_64.
I
I have some problems. Can you help me?
ceph cluster:
ceph 0.38, oneiric, kernel 3.0.0 x86_64 - now only one machine.
kvm hypervisor:
kvm version 1.0-rc4 (0.15.92), libvirt 0.9.2, kernel 3.0.0, Ubuntu
oneiric x86_64.
I create an image with qemu-img on the machine with the KVM VMs, and it works very
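Creating the image presumably looked like this with the qemu rbd driver of that era (pool, image, and size are placeholders):
# qemu-img create -f rbd rbd:rbd/myimage 10G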
Maybe I have forgotten something, but there is no doc about that.
I created a configuration with nginx and radosgw for S3.
On top of radosgw stands nginx with cache capability. Everything
was OK in version 0.32 of Ceph. I have created a new filesystem with the
newest 0.37 version, and now I have
:
rgw print continue = false
If it doesn't help we'll need to dig deeper. Thanks,
Yehuda
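That option goes in the gateway section of ceph.conf, roughly like this (the section name is an assumption):
[client.radosgw.gateway]
    rgw print continue = false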
2011/11/8 Sławomir Skowron szi...@gmail.com:
Maybe I have forgotten something, but there is no doc about that.
I created a configuration with nginx and radosgw for S3.
On top of radosgw stands nginx
Yes, I have run a test, and now everything is OK. Thanks for the help.
iSS
On 28 Jul 2011, at 18:36, Gregory Farnum
gregory.far...@dreamhost.com wrote:
2011/7/28 Sławomir Skowron szi...@gmail.com:
Because of my earlier tests I mounted ext4 filesystems at /data/osd.(osd
id), but /data
On 27 Jul 2011, at 18:15, Gregory Farnum
gregory.far...@dreamhost.com wrote:
2011/7/27 Sławomir Skowron szi...@gmail.com:
OK, I will show an example:
rados df
pool name        KB       objects  clones  degraded  unfound  rd  rd KB  wr  wr KB
On 27 Jul 2011, at 17:52, Sage Weil s...@newdream.net wrote:
On Wed, 27 Jul 2011, Sławomir Skowron wrote:
Hello. I have some questions.
1. Is there any chance to change the default 4MB object size to, for
example, 1MB or less?
Yeah. You can use the cephfs tool to set the default layout
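With the old cephfs tool that looked roughly like this (the path and flag spellings are from memory, so treat them as assumptions; 1048576 bytes = 1MB objects), and for RBD an order of 20 likewise gives 1MB objects:
# cephfs /mnt/ceph/dir set_layout -u 1048576 -c 1 -s 1048576
# rbd create myimage --size 1024 --order 20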
2011/7/28 Sławomir Skowron szi...@gmail.com:
On 27 Jul 2011, at 17:52, Sage Weil s...@newdream.net wrote:
On Wed, 27 Jul 2011, Sławomir Skowron wrote:
Hello. I have some questions.
1. Is there any chance to change the default 4MB object size to, for
example, 1MB or less?
Yeah. You
Thanks.
2011/7/27 Wido den Hollander w...@widodh.nl:
Hi,
On Wed, 2011-07-27 at 07:58 +0200, Sławomir Skowron wrote:
Hello. I have some questions.
1. Is there any chance to change the default 4MB object size to, for
example, 1MB or less?
If you are using the filesystem, you can change
Hello. I have some questions.
1. Is there any chance to change the default 4MB object size to, for
example, 1MB or less?
2. I have created a cluster of two mons and 32 OSDs (1TB each) on two
machines, with radosgw and apache2 on top for testing. When I put data
from an S3 client into rados, everything is OK,