On 04.06.2013 20:03, Gandalf Corvotempesta wrote:
Any experiences with clustered FS on top of RBD devices?
Which FS do you suggest for roughly 10,000 mailboxes accessed by 10
dovecot nodes?
There is an ongoing effort to implement librados storage in Dovecot,
AFAIK. Maybe it's worth
Hello!
We have a simple setup as follows:
Debian GNU/Linux 6.0 x64
Linux h08 2.6.32-19-pve #1 SMP Wed May 15 07:32:52 CEST 2013 x86_64
GNU/Linux
ii ceph 0.61.2-1~bpo60+1
distributed storage and file system
ii ceph-common 0.61.2-1~bpo60+1
Hi,
I have Cuttlefish and I'm using ceph-deploy.
My ceph.conf is this:
fsid = 775cb230-1b4c-41fb-8473-5b92cexx
mon_initial_members = bd-0, bd-1, bd-2
mon_host = 147.172.xxx.x0,147.172.xxx.x1,147.172.xxx.x2
auth_supported = cephx
public_network = 147.172.xxx.0/24
Good day!
Tried to nullify this OSD and reinject it, with no success. It runs for a
little while, then crashes again.
Regards, Artem Silenkov, 2GIS TM.
---
2GIS LLC
http://2gis.ru
a.silen...@2gis.ru
gtalk:artem.silen...@gmail.com
cell:+79231534853
2013/6/5 Artem Silenkov artem.silen...@gmail.com
and I'm unable to mount the cluster with the following command:
root@ceph1:/mnt# mount -t ceph 192.168.2.170:6789:/ /mnt
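A common cause of that mount failing is cephx being enabled: the kernel client then needs a user name and secret on the mount line. A hedged sketch; "admin" and the secret-file path below are assumptions, not from this thread:

```shell
# Extract the admin key once (the path is an arbitrary choice):
ceph auth get-key client.admin > /etc/ceph/admin.secret

# Mount with explicit credentials; "name=admin" assumes the default admin user.
mount -t ceph 192.168.2.170:6789:/ /mnt \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```

If that still fails, `dmesg` usually shows the actual kernel-client error.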
So, what does it say?
I also recommend that you start from my Russian doc:
http://habrahabr.ru/post/179823
On Tue, Jun 4, 2013 at 4:22 PM, Явор Маринов ymari...@neterra.net
I've managed to start and mount the cluster by completely restarting the
process from scratch. Another thing I'm searching for is documentation on
how to add another node (or hard drives) to a running cluster without
affecting the mount point and the running service. Can you point me to
I did, what would you like to know?

Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72 – Mobile: +33 (0)6 52 84 44 70
Email: sebastien@enovance.com – Skype: han.sbastien
Address: 10, rue de la Victoire – 75009 Paris
Web: www.enovance.com – Twitter
Ohh,
I just realized that after a reboot all OSDs were automatically mounted
from /dev/sdc. Wonderful.
Now the next thing is to change the journal from /dev/sdc? to the newly
created /dev/sda?
How do I do that, and what is the preferred fs-type for the journal (my
OSDs are btrfs)?
Thank you,
Markus
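For what it's worth, a hedged sketch of how a journal move is usually done on Cuttlefish. The OSD id and partition names below are assumptions, not taken from this thread; note the journal is written to the partition raw, so it needs no filesystem at all:

```shell
# Assumed OSD id 0 and new journal partition /dev/sda2 - adjust to your setup.
ceph osd set noout                 # keep the cluster from rebalancing meanwhile
service ceph stop osd.0
ceph-osd -i 0 --flush-journal      # drain pending journal writes to the store

# Point the OSD at the new journal, e.g. in ceph.conf:
#   [osd.0]
#   osd journal = /dev/sda2

ceph-osd -i 0 --mkjournal          # initialize the new journal
service ceph start osd.0
ceph osd unset noout
```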
This would be easier to see with a log than with all the GDB stuff, but the
reference in the backtrace to SyncEntryTimeout::finish(int) tells me that
the filesystem is taking too long to sync things to disk. Either this disk
is bad or you're somehow subjecting it to a much heavier load than the
Hi all,
I already installed Ceph Bobtail on CentOS machines and it runs perfectly.
But now I have to install Ceph Cuttlefish on Red Hat 6.4. I have two machines
(so far). We can assume the hostnames IP1 and IP2 ;). I want (just
to test) two monitors (one per host) and two osds
Hmm, no joy so far :(
Still getting:
hduser@dfs01:~$ hadoop fs -ls
Bad connection to FS. command aborted. exception: No FileSystem for scheme: ceph
hadoop-cephfs.jar from http://ceph.com/download/hadoop-cephfs.jar is in the
classpath
libcephfs.jar from libcephfs-java (0.61.2-1precise) package
I wonder if it has something to do with them renaming /usr/bin/kvm; in qemu 1.4
as packaged with Ubuntu 13.04 it has been replaced with the following:
#! /bin/sh
echo "W: kvm binary is deprecated, please use qemu-system-x86_64 instead" 1>&2
exec qemu-system-x86_64 -machine accel=kvm:tcg "$@"
On Jun 3,
You need to specify the ceph implementation in core-site.xml:
<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  <description>
  </description>
</property>
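In addition to the fs.ceph.impl property above, the Hadoop-on-CephFS setups of that era usually needed the filesystem URI and a pointer to ceph.conf. A hedged fragment; the monitor address and conf path are placeholders, not values from this thread:

```xml
<property>
  <name>fs.default.name</name>
  <value>ceph://192.168.2.170:6789/</value>
</property>
<property>
  <name>ceph.conf.file</name>
  <value>/etc/ceph/ceph.conf</value>
</property>
```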
Mike
On 5 June 2013 16:19, Ilja Maslov ilja.mas...@openet.us wrote:
Hmm, no joy so far :(
Hi,
I'm having trouble setting a CORS policy on a bucket.
Using the boto python library, I can create a bucket and so on, but
when I try to get or set the CORS policy radosgw responds with a 403:
<?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code></Error>
Would anyone be able to help me with
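For reference, a minimal sketch of the kind of CORS document boto sends; the rule values here are illustrative, and the boto call at the end is shown as a comment since it needs a live radosgw. An AccessDenied on get/set CORS usually means the acting user lacks ownership rights on the bucket rather than the XML being malformed:

```python
import xml.etree.ElementTree as ET

# Build a CORS policy document by hand (same shape boto generates).
root = ET.Element("CORSConfiguration")
rule = ET.SubElement(root, "CORSRule")
ET.SubElement(rule, "AllowedOrigin").text = "*"
ET.SubElement(rule, "AllowedMethod").text = "GET"
ET.SubElement(rule, "MaxAgeSeconds").text = "3000"

cors_xml = ET.tostring(root, encoding="unicode")
print(cors_xml)

# With boto, this would then be applied to a bucket you own:
#   bucket.set_cors_xml(cors_xml)
```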
Thanks for all the help, guys!
Let me give back to the community by listing all the minimal steps I needed to
make it work on current versions of the software.
Ubuntu 12.04.2 LTS
ceph 0.61.2-1precise
Hadoop 1.1.2
1. Install additional packages:
libcephfs-java
libcephfs-jni
2. Download
Thanks a lot for this Ilja! I'm going to update the documentation again soon,
so this is very helpful.
On Jun 5, 2013, at 12:21 PM, Ilja Maslov ilja.mas...@openet.us wrote:
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
Was there actually a problem if you didn't set this?
4. Symlink JNI
On Jun 5, 2013, at 12:51 PM, Noah Watkins noahwatk...@gmail.com wrote:
I have tried adding -Djava.library.path=/usr/lib/jni to HADOOP_OPTS in
hadoop-env.sh and exporting LD_LIBRARY_PATH=/usr/lib/jni in hadoop-env.sh,
but it didn't work for me. I'd love to hear about a more elegant method
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
Was there actually a problem if you didn't set this?
I have commented this out and restarted mapred and everything still worked ok.
It is probably only needed for the HDFS processes.
4. Symlink JNI library
cd
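Step 4 is cut off above; as a hedged sketch, the symlink in question typically looked roughly like this. HADOOP_HOME and the Linux-amd64-64 directory name are assumptions based on Hadoop 1.x defaults, and /usr/lib/jni matches the path mentioned earlier in the thread:

```shell
# Make the libcephfs JNI library visible to Hadoop's native-library loader.
mkdir -p $HADOOP_HOME/lib/native/Linux-amd64-64
ln -s /usr/lib/jni/libcephfs_jni.so $HADOOP_HOME/lib/native/Linux-amd64-64/
```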
System info:
Ubuntu Server 13.04, AMD64.
QEMU 1.4.0
Ceph 0.61.2
I got a core dump when executing:
root@ceph-node1:~# qemu-img info -f rbd rbd:vm_disks/box1_disk1
Segmentation fault (core dumped)
Call dump info:
Core was generated by `qemu-img info -f rbd rbd:vm_disks/box1_disk1'.
Program