>>> On 2013/04/23 at 12:51, Sylvain Munaut wrote:
> Hi,
>
>> My distro (openSuSE 12.1) has /usr/sbin/tapdisk for original tapdisk v1, and
>> /usr/sbin/tapdisk2 for version 2 stuff. I'm replacing /usr/sbin/tapdisk2.
>
> Mm, do you know where I could find the source for those tapdisk binari
(Apologies in advance, this is somewhat off-topic from ceph-devel...)
>
> First off, you need a working blktap setup for your distribution.
> So for example, you should be able to use
> "tap2:tapdisk:aio:/path/to/image.raw" as a vbd.
So, this works perfectly fine *before* replacing my /usr/sbin/
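For context, a blktap vbd like the one quoted above would be declared in a Xen guest config roughly like this (a sketch; the image path and device name are placeholders, only the "tap2:tapdisk:aio:" prefix comes from the thread):

```
# Xen domU config sketch: attach a raw image through tapdisk (tap2)
disk = [ 'tap2:tapdisk:aio:/path/to/image.raw,xvda,w' ]
```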
>>> On 2013/01/18 at 12:36, Sage Weil wrote:
> On Fri, 18 Jan 2013, Stefan Priebe wrote:
>> Hi,
>>
>> what's bobtail-next then? bobtail itself already contains updates since
>> 0.56.1?
>
> Just a few backported things I wanted a few eyeballs on before I put it in
> bobtail.
>
> Generally
>>> On 2013/01/08 at 10:08, Gregory Farnum wrote:
> On Mon, Jan 7, 2013 at 9:36 PM, Cesar Mello wrote:
>> Hi,
>>
>> I have been playing with ceph and reading the docs/thesis the last
>> couple of nights just to learn something during my vacation. I was not
>> expecting to find such an awesome an
h and the error happens again, your MDS will fail on replay as it
did here. If you leave it in, it has no effect other than handling
that particular bad case.
-Greg
On Tue, Oct 30, 2012 at 3:22 AM, Nick Couchman wrote:
> Okay, that patch worked and it seems to be running, again. Should I continue
a different platform?).
-Greg
On Fri, Oct 19, 2012 at 1:52 PM, Nick Couchman wrote:
> One of the MDSs crashed over the weekend (late Friday night), but I believe
> that one was not active and was just in Replay mode. Other than that, I
> don't know of anything that would have af
out it with our team here and get back to you tomorrow
> sometime.
> -Greg
>
> On Thu, Oct 18, 2012 at 8:56 AM, Nick Couchman wrote:
>> Hopefully this is what you're looking for...
>> (gdb) bt
>> #0 ESession::replay (this=0x7fffcc49a7c0, mds=0x127d5f0) at
>
Log.h:86
#3 0x7764df05 in start_thread () from /lib64/libpthread.so.0
#4 0x7680d10d in clone () from /lib64/libc.so.6
>>> On 2012/10/17 at 09:53, Sam Lang wrote:
> On 10/17/2012 09:42 AM, Nick Couchman wrote:
>> Thanks...here's the backtrace:
>> (gd
Hmmm...I don't seem to have the dbg packages built...will have to go back and
figure out how to build those.
-Nick
>>> On 2012/10/17 at 09:53, Sam Lang wrote:
> On 10/17/2012 09:42 AM, Nick Couchman wrote:
>> Thanks...here's the backtrace:
>> (gdb) bt
>
ceph.conf -f
> ...
> (gdb) run
>
> Once you hit the segfault you can get the backtrace with:
>
> (gdb) bt
>
> -sam
>
>
>> -Greg
>>
>> On Mon, Oct 15, 2012 at 10:59 AM, Nick Couchman wrote:
>>> Well, hopefully this is still okay..
MDS log is bad or is poking at a bug in the code. Can
> you turn on MDS debugging and restart a daemon and put that log
> somewhere accessible?
> debug mds = 20
> debug journaler = 20
> debug ms = 1
> -Greg
>
> On Mon, Oct 15, 2012 at 10:02 AM, Nick Couchman wrote:
>
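The debug settings Greg suggests above would conventionally go in ceph.conf under the daemon's section before restarting it (a sketch; the section placement is the usual convention, not spelled out in the thread):

```
# ceph.conf sketch: verbose MDS logging as suggested in the thread
[mds]
    debug mds = 20
    debug journaler = 20
    debug ms = 1
```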
Well, both of my MDSs seem to be down right now, and they continually segfault
(every time I try to start them) with the following:
ceph-mdsmon-a:~ # ceph-mds -n mds.b -c /etc/ceph/ceph.conf -f
starting mds.b at :/0
*** Caught signal (Segmentation fault) **
in thread 7fbe0d61d700
ceph version 0
John
On Tue, Sep 18, 2012 at 12:53 AM, Mark Nelson wrote:
> Hi Nick,
>
> All I have to say, is that is totally awesome and scary at the same time. :)
>
> Glad to hear that it recovers well when people shut their desktops off!
>
> Mark
>
>
> On 09/17/2012 05:47 PM,
say, is that is totally awesome and scary at the same time. :)
Glad to hear that it recovers well when people shut their desktops off!
Mark
On 09/17/2012 05:47 PM, Nick Couchman wrote:
> My use of Ceph is probably pretty unique in some of the aspects of where/how
> I'm using it. I run a
My use of Ceph is probably pretty unique in some of the aspects of where/how
I'm using it. I run an IT department for a medium-sized engineering firm. One
of my goals is to try to make the best possible use of the hardware we're
deploying to users' desktops. Often times users cannot get by wi
>>
>> Interesting, thanks for the results, Mark. So, I guess don't tune unless
> you have a very good reason to do so? Or, if you're really going to try to
> squeeze all the performance possible, put your metadata on a separate FS with
> a different alloc size (or no alloc size specified) so t
>
> Hi Guys,
>
> There was a change in 2.6.38 to the way that speculative preallocation
> works that basically lets small writes behave like allocsize is not set,
> and large writes behave like a large one is set:
>
> http://permalink.gmane.org/gmane.comp.file-systems.xfs.general/38403
>
> Havin
>
>> While I'm talking about XFS... I know that RBDs use a default object
>> size of 4MB. I've stuck with that so far.. Would it be beneficial to
>> mount XFS with -o allocsize=4M ? What is the object size that gets
>> used for non-RBD cases -- i.e. just dumping objects into data pool?
>
> D
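As a sketch of the mount option being discussed (the device, mount point, and other options here are placeholders, not from the thread):

```
# /etc/fstab sketch: XFS data filesystem with a 4 MB allocation-size hint
/dev/sdb1  /data  xfs  noatime,allocsize=4M  0 0
```

Note that, per the 2.6.38 change linked above, recent kernels already size speculative preallocation adaptively, which makes a fixed allocsize largely unnecessary.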
, Yehuda Sadeh wrote:
> Error 5 means EIO. Can you set 'debug ms = 1' on the radosgw (and
> probably also on the osd to correlate)?
>
> Thanks,
> Yehuda
>
> On Mon, Sep 3, 2012 at 12:56 AM, Nick Couchman wrote:
>> So, after much trial and error I finally
>>> On 2012/09/03 at 06:56, ramu eppa wrote:
> Hi,
>
> I disabled the default configuration and enabled rgw.conf, but I am getting
> the same error.
>
> Syntax error on line 1 of /etc/apache2/sites-enabled/rgw.conf:
> FastCgiExternalServer: redefinition of previously defined class
> "/var/www/s3
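The error above means FastCgiExternalServer is being declared more than once for the same path across the enabled sites; with the default site disabled, it should appear exactly once, e.g. (a sketch; the socket path is an assumption, only the /var/www/s3... path stem comes from the error message):

```
# /etc/apache2/sites-enabled/rgw.conf sketch: declare the external
# FastCGI server exactly once; a second definition of the same class
# in another enabled site triggers the redefinition error
FastCgiExternalServer /var/www/s3gw.fcgi -socket /var/run/ceph/radosgw.sock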
So, after much trial and error I finally got ceph up and running and radosgw up
and running. However, I'm now running into a situation where, when I try to
create a new bucket under my test account, I'm getting an HTTP status 500 on
the client. I have radosgw running in debug mode, and I get t
One additional piece of info...I did find the "-d" flag (documented in the
radosgw-admin man page, but not in the radosgw man page) that keeps the daemon
in the foreground and prints messages to stderr. When I use this flag I get
the following:
[root@desktop-ceph ~]# radosgw -c /etc/ceph/ceph.c
So, I'm running ceph 0.48.1, I have my mds, mon, and osds up and running, and
now I'm trying to get radosgw working, as well. However, I'm running into an
issue where the radosgw daemon runs for a minute and then exits without any
explanation - nothing in the log files, system output, etc., to
> On 08/02/2012 12:39 PM, Nick Couchman wrote:
>> Running into some errors compiling ceph-0.48 (argonaut) on RHEL5. It gets
> most of the way through the build process and then throws the following:
>>
>>   CXX    librbd_la-cls_rbd_client.lo
>> /usr/include/sys/
>>> On 2012/08/07 at 13:23, Josh Durgin wrote:
>
> It looks like this might be src/include/types.h including sys/types.h
> and src/include/rbd_types.h, which is including linux/types.h.
>
> Does adding ifdefs to src/include/types.h so it includes linux/types.h
> on linux work?
>
I enabled G
>>> On 2012/08/07 at 13:23, Josh Durgin wrote:
>
> It looks like this might be src/include/types.h including sys/types.h
> and src/include/rbd_types.h, which is including linux/types.h.
>
> Does adding ifdefs to src/include/types.h so it includes linux/types.h
> on linux work?
>
Here's what
Running into some errors compiling ceph-0.48 (argonaut) on RHEL5. It gets most
of the way through the build process and then throws the following:
  CXX    librbd_la-cls_rbd_client.lo
/usr/include/sys/types.h:46: error: conflicting declaration 'typedef __loff_t
loff_t'
/usr/include/linux/types.
On Sat, 2012-04-07 at 08:48 -0700, Sage Weil wrote:
> On Sat, 7 Apr 2012, Nick Couchman wrote:
> > I'm trying to compile 0.44.1 on CentOS 5, and am running into the following
> > compile-time error:
> >
> > g++ -DHAVE_CONFIG_H -I. -I. -I. -I/usr/include/nss3
I'm trying to compile 0.44.1 on CentOS 5, and am running into the following
compile-time error:
g++ -DHAVE_CONFIG_H -I. -I. -I. -I/usr/include/nss3 -I/usr/include/nspr4 -Wall
-D__CEPH__ -D_FILE_OFFSET_BITS=64 -D_REENTRANT -D_THREAD_SAFE
-D__STDC_FORMAT_MACROS -D_GNU_SOURCE -rdynamic -Winit-sel
t RPM packaging issues.
-Nick
>>> On 2012/03/25 at 09:40, Sage Weil wrote:
> On Sun, 25 Mar 2012, Nick Couchman wrote:
>> Well, leveldb is fine, but I'm still seeing a build error:
>>
>>   CXX    libcommon_la-version.lo
>> In file included from
Hey Nick-
Can you confirm this fixes your problem? If so I can apply this to the
stable branch (for 0.44.1).
Thanks!
sage
On Thu, 22 Mar 2012, Alexandre Oliva wrote:
> On Mar 22, 2012, "Nick Couchman" wrote:
>
> > ./db/builder.h:8:28: fatal error: leveldb/status.h: No suc
Ajit,
Did you see my previous post to the list about leveldb build issues in 0.44?
This is an issue specifically with rpmbuild, with the spec file. If you check
out the list archives you'll see that Alexandre posted a patch for the spec
file that fixes the rpmbuild (have not tried it, yet - ju
>>> On 2012/03/22 at 11:29, Samuel Just wrote:
> Hmm, leveldb/include must not be in the include path. Anyone with
> some rpm experience know how to fix that?
> -Sam
>
I actually partially figured out what the issue is. In the spec file, the make
command sets CFLAGS and CXXFLAGS with a bunch
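For illustration, the kind of spec-file override being described looks roughly like this; adding the in-tree leveldb include directory back to CXXFLAGS is the hypothetical fix (the exact flags and path are assumptions):

```
# ceph.spec %build sketch: overriding CXXFLAGS wholesale drops the
# leveldb include path, so it has to be added back explicitly
make CFLAGS="%{optflags}" CXXFLAGS="%{optflags} -I$PWD/src/leveldb/include"
```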
I'm trying to build ceph 0.44, but am running into some issues with leveldb not
compiling correctly. Here is the output:
make all-recursive
make[2]: Entering directory `/home/abuild/rpmbuild/BUILD/ceph-0.44/src'
Making all in ocf
make[3]: Entering directory `/home/abuild/rpmbuild/BUILD/ceph-0.4
I'm setting up ceph and looking at the output of "ceph -w" and I'm seeing lots
of warnings about messages from mon.X being stamped in the future (142
seconds), clocks not synchronized. However, all of these systems use NTP, and
if I use the "date" command to see the system time, they are defini
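The skew being reported is just the difference between a monitor's message timestamp and the local clock; with the number from this report hard-coded for illustration:

```shell
# Compute clock skew from two epoch timestamps (values are illustrative;
# in practice compare `date +%s` on the two hosts at the same moment).
local_time=1332280800
mon_time=$((local_time + 142))   # mon messages stamped 142s in the future
skew=$((mon_time - local_time))
echo "clock skew: ${skew}s"
```

Even hosts running NTP can disagree transiently; `ntpq -p` on each host shows the measured offsets against its peers.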
>>> On 2012/03/20 at 15:42, "Nick Couchman" wrote:
> Just noticed this with ceph-0.43 - ceph-0.39 does not have this issue. I'm
> trying to use Ceph on multiple platforms, one of which is CentOS 5. CentOS 5
> still uses Python 2.4, and byte-compilation
Just noticed this with ceph-0.43 - ceph-0.39 does not have this issue. I'm
trying to use Ceph on multiple platforms, one of which is CentOS 5. CentOS 5
still uses Python 2.4, and byte-compilation of the rados.py file fails during
the build of ceph 0.43 with the following error:
Byte-compiling