OK. About the LIBRADOS_VER_MINOR, do you want me to bump it and submit a
new patch?
Best regards,
Filippos
On 12/15/2012 09:49 AM, Yehuda Sadeh wrote:
Went through it briefly, looks fine, though I'd like to go over it
some more before picking this up. Note that LIBRADOS_VER_MINOR needs
to be bumped.
On 12/19/2012 09:03 AM, Roman Hlynovskiy wrote:
My first problem: I am getting spurious mon deaths, which usually
look like this:
--- begin dump of recent events ---
0> 2012-12-19 10:35:58.912119 b41eab70 -1 *** Caught signal (Aborted) **
in thread b41eab70
ceph version 0.55.1 (8e
On 12/19/2012 10:58 AM, Roman Hlynovskiy wrote:
Hello Joao,
Thanks for the feedback. Is this fix available on the svn? I can provide
heavy testing for it.
Yes, the fix is on github's (not svn ;) master branch.
All testing is most welcome!
Thanks.
-Joao
2012/12/19 Joao Eduardo Luis :
On
This patch renames the --format option to --image-format, for specifying the RBD
image format, and uses --format to specify the output formatting (to be
consistent with the other ceph tools). To avoid breaking backwards compatibility
with existing scripts, rbd will still accept --format [1|2] for the
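Assuming the renamed flags land as described, usage would look roughly like this (the pool and image names are made up for illustration):

```shell
# Old spelling, still accepted for backwards compatibility:
rbd create --format 2 --size 1024 mypool/myimage

# New spellings: --image-format selects the RBD image format,
# while --format now controls the tool's output formatting:
rbd create --image-format 2 --size 1024 mypool/myimage
rbd info mypool/myimage --format json
```

These are command sketches against a running cluster, not a tested transcript.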
On 12/19/2012 03:03 AM, Roman Hlynovskiy wrote:
Hello,
I have 2 issues with ceph stability and am looking for help to resolve them.
My setup is pretty simple - 3 debian 32bit stable systems each running
osd, mon and mds.
the conf is the following:
[global]
auth cluster req
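The pasted conf is cut off; for reference, a minimal cephx-enabled [global] section for a small 3-node setup of that era looked roughly like this (hostnames and addresses below are assumptions, not taken from the message):

```
[global]
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx

[mon.a]
    host = node1
    mon addr = 192.168.0.1:6789

[osd.0]
    host = node1
```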
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Hi
I'm seeing a couple of issues with Ceph 0.55.1 on Ubuntu raring
(current development release) whilst testing the keystone integration
with radosgw.
1) Crash in RADOS Gateway when Content-Type not specified in Upload
https://bugs.launchpad.net/u
We had a bunch of disks that failed. That's why ceph was having trouble keeping
OSDs up.
And we found that during recovery the rados gateway failed to initialize: the
init_watch function timed out.
As it is only used when the cache is activated, we disabled the cache (rgw cache enable
false) and the radosgate
On Wed, 19 Dec 2012, Roman Hlynovskiy wrote:
> My second problem - I have 2 systems which mount ceph. Whenever I
> mount ceph on any other system it usually mounts but get stuck on
> stat* operations (i.e. a simple ls -al will hang in read() on the
> ceph-mounted directory for ages). This kind of
On Wed, Dec 19, 2012 at 7:10 AM, James Page wrote:
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Hi
>
> I'm seeing a couple of issues with Ceph 0.55.1 on Ubuntu raring
> (current development release) whilst testing the keystone integration
> with radosgw.
>
> 1) Crash in RADOS Gateway
On Wed, 19 Dec 2012, Filippos Giannakos wrote:
> OK. About the LIBRADOS_VER_MINOR, do you want me to bump it and submit a
> new patch?
Yes, please. Also, one other thing: can you add a functional test to
ceph.git/src/test/librados/aio.cc so that all of the regular testing
and test suites wi
On 12/19/2012 07:43 AM, Sage Weil wrote:
On Wed, 19 Dec 2012, Filippos Giannakos wrote:
OK. About the LIBRADOS_VER_MINOR, do you want me to bump it and submit a
new patch?
Yes, please. Also, one other thing: can you add a functional test to
ceph.git/src/test/librados/aio.cc so that all of the
2012/12/19 Sage Weil :
> On Wed, 19 Dec 2012, Mark Kirkwood wrote:
>> On 19/12/12 15:56, Drunkard Zhang wrote:
>> > 2012/12/19 Mark Kirkwood :
>> > > On 19/12/12 14:44, Drunkard Zhang wrote:
>> > > > 2012/12/16 Drunkard Zhang :
>> > > > > I couldn't rm files in ceph, which was backuped files of one
No more suggestions? :(
--
Regards,
Sébastien Han.
On Tue, Dec 18, 2012 at 6:21 PM, Sébastien Han wrote:
> Nothing terrific...
>
> Kernel logs from my clients are full of "libceph: osd4
> 172.20.11.32:6801 socket closed"
>
> I saw this somewhere on the tracker.
>
> Does this harm?
>
> Thanks.
>
Hello Ceph Team, Community
I'm doing just my first steps with ceph.
I have upgraded my 3 test systems to ubuntu/raring and ran mkcephfs; below is
the output, ceph.conf, and ceph -s output... any help would be appreciated.
Thanks
Tibet
--
root@host1:/var/lib/ceph# mkcephfs -a -c /etc/ceph/
On 12/19/2012 04:48 PM, Tibet Himalkaya wrote:
Hello Ceph Team, Community
I'm doing just my first steps with ceph.
I have upgraded my 3 test systems to ubuntu/raring and ran mkcephfs; below is
the output, ceph.conf, and ceph -s output... any help would be appreciated.
Thanks
Tibet
Have you started your mo
Hi List,
how can I delete non-existing PGs?
The OSDs where the PGs were stored have crashed, and now I see this:
pg 2.80 is stuck stale for 38971.810705, current state
stale+active+clean, last acting [2,0]
pg 0.82 is stuck stale for 38971.810712, current state
stale+active+clean, last acting [2,0
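Before doing anything destructive, commands along these lines can be used to inspect the stale PGs (ids taken from the output above; note that `pg query` may simply hang for a stale PG whose acting OSDs are gone):

```shell
# List all PGs stuck in the stale state:
ceph pg dump_stuck stale

# Show what the cluster knows about one of them:
ceph pg 2.80 query

# Which OSDs were last acting for it:
ceph pg map 2.80
```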
Can't bring the OSDs back. I thought that ceph replicates data over hosts,
not only over OSDs, so I stopped two OSDs on one host and deleted the
data/OSDs; after that I saw the mistake...
On 19.12.2012 22:05, Samuel Just wrote:
Note, however, that it will render the objects previously stored ther
On 12/18/2012 12:05 PM, Nick Bartos wrote:
> I've added the output of "ps -ef" in addition to triggering a trace
> when a hang is detected. Not much is generally running at that point,
> but you can have a look:
>
> https://gist.github.com/raw/4330223/2f131ee312ee43cb3d8c307a9bf2f454a7edfe57/rbd-
Ceph can be configured that way using crush. See
http://ceph.com/docs/master/rados/operations/crush-map/
-Sam
On Wed, Dec 19, 2012 at 1:13 PM, norbi wrote:
> cant bring the osds back, thought that ceph replicates data over hosts not
> only over osds. so i stopped two OSDs on one host, and delete
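The host-level separation Sam points to is expressed by the chooseleaf step in a crush rule; a sketch of such a rule (ruleset number and bucket name are assumptions):

```
rule replicate_over_hosts {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    # place each replica under a different host, not merely a different osd
    step chooseleaf firstn 0 type host
    step emit
}
```

With `type osd` instead of `type host` in the chooseleaf step, two replicas can land on OSDs of the same machine, which is the situation norbi ran into.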
Sorry, it's been very busy. The next step would be to try to get a heap
dump. You can start a heap profile on osd N by:
ceph osd tell N heap start_profiler
and you can get it to dump the collected profile using
ceph osd tell N heap dump.
The dumps should show up in the osd log directory.
Assumi
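Concretely, for osd.2 the sequence Sam describes would be something like the following (this assumes the daemons were built with tcmalloc, which the heap profiler relies on; the dump filename is illustrative):

```shell
# Start collecting a heap profile on osd 2:
ceph osd tell 2 heap start_profiler

# ... wait for memory use to grow, then dump the collected profile:
ceph osd tell 2 heap dump

# Stop profiling when done:
ceph osd tell 2 heap stop_profiler

# Dumps land in the osd log directory,
# e.g. /var/log/ceph/osd.2.profile.0001.heap
```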
On 12/19/2012 03:25 PM, Alex Elder wrote:
> On 12/18/2012 12:05 PM, Nick Bartos wrote:
>> I've added the output of "ps -ef" in addition to triggering a trace
>> when a hang is detected. Not much is generally running at that point,
>> but you can have a look:
>>
>> https://gist.github.com/raw/43302
On 12/19/2012 05:17 PM, Ugis wrote:
> Hi all,
>
> I have been struggling to map ceph rbd images for last week, but
> constantly get kernel crashes.
>
> What has been done:
> Previously we had v0.48 set up as test cluster(4 hosts, 5 osds, 3
> mons, 3 mds, custom crushmap) on Ubuntu 12.04 and clien
On 12/19/2012 9:42 AM, Joao Eduardo Luis wrote:
On 12/19/2012 04:48 PM, Tibet Himalkaya wrote:
Hello Ceph Team, Community
I'm doing just my first steps with ceph.
I have upgraded my 3 test systems to ubuntu/raring and ran mkcephfs;
below is the output, ceph.conf, and ceph -s output... any help would be a