[ceph-users] how to install radosgw from source code?

2013-09-23 Thread yy-nm

hey, folks:
I used the Ceph source code to install a Ceph cluster: ceph version
0.61.8 (a6fdcca3bddbc9f177e4e2bf0d9cdd85006b028b).

After finishing the install, I can't find the radosgw command.
I used the parameters below for Ceph's installation:

install package:
#apt-get install automake autoconf gcc g++ libboost-dev libedit-dev 
libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers 
libcrypto++-dev libaio-dev libgoogle-perftools-dev libkeyutils-dev 
uuid-dev libatomic-ops-dev libboost-program-options-dev 
libboost-thread-dev libexpat1-dev libleveldb-dev libsnappy-dev


configure:
#./autogen.sh
#./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib/ceph

make:
#make
#make install

Is anything wrong with the above?

thanks!



Re: [ceph-users] one pg stuck with 2 unfound pieces

2013-09-23 Thread Jens-Christian Fischer
Hi Sam

in the meantime, the output of ceph pg 0.cfa query has become quite a bit 
longer (for better or worse) - see:  http://pastebin.com/0Jxmm353

I have restarted osd.23 with the debug log settings and have extracted these 
0.cfa related log lines - I can't interpret them. There might be more, I can 
provide the complete log file if you need it: http://pastebin.com/dYsihsx4
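
For reference, the debug settings in question (the ones Sam suggests in his mail quoted below), roughly as they would look in ceph.conf before restarting that OSD; this is just a sketch, and the per-daemon section name is assumed:

    [osd.23]
        debug osd = 20
        debug ms = 1
        debug filestore = 20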

0.cfa has been out for so long that it shows up as being down forever:

HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs stuck unclean; 1 mons 
down, quorum 0,1,2,4 h1,h5,s2,s4
pg 0.cfa is stuck inactive since forever, current state incomplete, last acting 
[23,50,18]
pg 0.cfa is stuck unclean since forever, current state incomplete, last acting 
[23,50,18]
pg 0.cfa is incomplete, acting [23,50,18]

Also, we can't revert 0.cfa:

root@h0:~# ceph pg 0.cfa mark_unfound_lost revert
pg has no unfound objects

This stuck pg seems to be filling up our mons (they need to keep old data, right?),
which makes starting a new mon a task of seemingly Herculean proportions.

Any ideas on how to proceed?

thanks

Jens-Christian




-- 
SWITCH
Jens-Christian Fischer, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 15 71
jens-christian.fisc...@switch.ch
http://www.switch.ch

http://www.switch.ch/socialmedia

On 14.08.2013, at 20:53, Samuel Just sam.j...@inktank.com wrote:

 Try restarting the two osd processes with debug osd = 20, debug ms =
 1, debug filestore = 20.  Restarting the osds may clear the problem,
 but if it recurs, the logs should help explain what's going on.
 -Sam
 
 On Wed, Aug 14, 2013 at 12:17 AM, Jens-Christian Fischer
 jens-christian.fisc...@switch.ch wrote:
 On 13.08.2013, at 21:09, Samuel Just sam.j...@inktank.com wrote:
 
 You can run 'ceph pg 0.cfa mark_unfound_lost revert'. (Revert Lost
 section of http://ceph.com/docs/master/rados/operations/placement-groups/).
 -Sam
 
 
 As I wrote further down in the info, ceph wouldn't let me do that:
 
 root@ineri ~$ ceph pg 0.cfa  mark_unfound_lost revert
 pg has 2 objects but we haven't probed all sources, not marking lost
 
 I'm looking for a way that forces the (re) probing of the sources…
 
 cheers
 jc
 
 
 
 
 
 On Tue, Aug 13, 2013 at 6:50 AM, Jens-Christian Fischer
 jens-christian.fisc...@switch.ch wrote:
 We have a cluster with 10 servers, 64 OSDs and 5 Mons on them. The OSDs are
 3 TB disks, formatted with btrfs, and the servers are on either Ubuntu 12.10
 or 13.04.
 
 Recently one of the servers (13.04) stood still (due to problems with btrfs
 - something we have seen a few times). I decided to not try to recover the
 disks, but reformat them with XFS. I removed the OSDs, reformatted, and
 re-created them (they got the same OSD numbers)
 
 I redid this twice (because I wrongly partitioned the disks in the first
 place) and I ended up with 2 unfound pieces in one pg:
 
 root@s2:~# ceph health details
 HEALTH_WARN 1 pgs degraded; 1 pgs recovering; 1 pgs stuck unclean; recovery
 4448/28915270 degraded (0.015%); 2/9854766 unfound (0.000%)
 pg 0.cfa is stuck unclean for 1004252.309704, current state
 active+recovering+degraded+remapped, last acting [23,50]
 pg 0.cfa is active+recovering+degraded+remapped, acting [23,50], 2 unfound
 recovery 4448/28915270 degraded (0.015%); 2/9854766 unfound (0.000%)
 
 
 root@s2:~# ceph pg 0.cfa query
 
 { state: active+recovering+degraded+remapped,
 epoch: 28197,
 up: [
   23,
   50,
   18],
 acting: [
   23,
   50],
 info: { pgid: 0.cfa,
 last_update: 28082'7774,
 last_complete: 23686'7083,
 log_tail: 14360'4061,
 last_backfill: MAX,
 purged_snaps: [],
 history: { epoch_created: 1,
 last_epoch_started: 28197,
 last_epoch_clean: 24810,
 last_epoch_split: 0,
 same_up_since: 28195,
 same_interval_since: 28196,
 same_primary_since: 26036,
 last_scrub: 20585'6801,
 last_scrub_stamp: 2013-07-28 15:40:53.298786,
 last_deep_scrub: 20585'6801,
 last_deep_scrub_stamp: 2013-07-28 15:40:53.298786,
 last_clean_scrub_stamp: 2013-07-28 15:40:53.298786},
 stats: { version: 28082'7774,
 reported: 28197'41950,
 state: active+recovering+degraded+remapped,
 last_fresh: 2013-08-13 14:34:33.057271,
 last_change: 2013-08-13 14:34:33.057271,
 last_active: 2013-08-13 14:34:33.057271,
 last_clean: 2013-08-01 23:50:18.414082,
 last_became_active: 2013-05-29 13:10:51.366237,
 last_unstale: 2013-08-13 14:34:33.057271,
 mapping_epoch: 28195,
 log_start: 14360'4061,
 ondisk_log_start: 14360'4061,
 created: 1,
 last_epoch_clean: 24810,
 parent: 0.0,
 parent_split_bits: 0,
 last_scrub: 20585'6801,
 last_scrub_stamp: 2013-07-28 15:40:53.298786,
 last_deep_scrub: 20585'6801,
 last_deep_scrub_stamp: 2013-07-28 15:40:53.298786,
 

[ceph-users] where to put config and whats the correct syntax

2013-09-23 Thread Fuchs, Andreas (SwissTXT)
I'm following different threads here, mainly the poor radosgw performance one.
What I see there are often recommendations to put a certain config setting into
ceph.conf, but it's often unclear to me where exactly to put them.

- does it matter if I put a config setting for all OSDs in [global] or [osd]?
  Example:
[osd]
osd max attr size = 655360

or should it be
[global]
osd max attr size = 655360

- different syntax
  We saw recommendations to add
rgw enable ops log = false
  but also
rgw_enable_ops_log disabled

 which one is correct?
 can it be added to [client.radosgw] so that it is valid for both of our radosgws,
 or does it need to be added to [global] or somewhere else?

- is there a way to verify which config settings are applied?
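
For illustration, here is a minimal ceph.conf sketch touching on the questions above: settings in [global] are seen by every daemon, OSDs included, while [osd] applies to OSDs only, and option names accept spaces or underscores interchangeably; the rgw section name is just an example:

    [global]
        osd max attr size = 655360      # seen by every daemon, OSDs included
    # or, for the OSDs only:
    [osd]
        osd max attr size = 655360

    [client.radosgw.gateway]
        rgw enable ops log = false

To check what a running daemon actually applied, its admin socket can be queried, e.g. (socket path illustrative):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_max_attr_size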


Thanks
Andi



Re: [ceph-users] monitor deployment during quick start

2013-09-23 Thread Alfredo Deza
On Fri, Sep 20, 2013 at 3:58 PM, Gruher, Joseph R
joseph.r.gru...@intel.com wrote:
 Sorry, not trying to repost or bump my thread, but I think I can restate my 
 question here for better clarity.  I am confused about the --cluster 
 argument used when ceph-deploy mon create invokes ceph-mon on the target 
 system.  I always get a failure at this point when running ceph-deploy mon 
 create and this then halts the whole ceph quick start process.

 Here is the line where ceph-deploy mon create fails:
 [cephtest02][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i 
 cephtest02 --keyring /var/lib/ceph/tmp/ceph-cephtest02.mon.keyring

 Running the same command manually on the target system gives an error.  As 
 far as I can tell from the man page and the built-in help and the website 
 (http://ceph.com/docs/next/man/8/ceph-mon/) it seems --cluster is not a 
 valid argument for ceph-mon?  Is this a problem in ceph-deploy?  Does this 
 work for anyone else?

 ceph@cephtest02:~$ sudo ceph-mon --cluster ceph --mkfs -i cephtest02 
 --keyring /var/lib/ceph/tmp/ceph-cephtest02.mon.keyring
 too many arguments: [--cluster,ceph]
 usage: ceph-mon -i <monid> [--mon-data=<pathtodata>] [flags]
   --debug_mon n
         debug monitor level (e.g. 10)
   --mkfs
         build fresh monitor fs
 --conf/-c        Read configuration from the given configuration file
 -d               Run in foreground, log to stderr.
 -f               Run in foreground, log to usual location.
 --id/-i          set ID portion of my name
 --name/-n        set name (TYPE.ID)
 --version        show version and quit

--debug_ms N
 set message debug level (e.g. 1)
 ceph@cephtest02:~$

 Can anyone clarify if --cluster is a supported argument for ceph-mon?

This is a *weird* corner you've stumbled upon. The flag is indeed used
by ceph-deploy and that hasn't changed in a while. However, as you
point out, there is no trace of that flag anywhere! I can't find where
it is defined at all.

Running the latest version of ceph-deploy + ceph, that flag *does* work for me.

What version of ceph are you using?

 Thanks!

 Here's the more complete output from the admin system when this fails:

 ceph@cephtest01:/my-cluster$ ceph-deploy --overwrite-conf mon create 
 cephtest02
 [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts cephtest02
 [ceph_deploy.mon][DEBUG ] detecting platform for host cephtest02 ...
 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
 [ceph_deploy.mon][INFO  ] distro info: Ubuntu 12.04 precise
 [cephtest02][DEBUG ] determining if provided host has same hostname in remote
 [cephtest02][DEBUG ] deploying mon to cephtest02
 [cephtest02][DEBUG ] remote hostname: cephtest02
 [cephtest02][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
 [cephtest02][DEBUG ] checking for done path: 
 /var/lib/ceph/mon/ceph-cephtest02/done
 [cephtest02][DEBUG ] done path does not exist: 
 /var/lib/ceph/mon/ceph-cephtest02/done
 [cephtest02][INFO  ] creating keyring file: 
 /var/lib/ceph/tmp/ceph-cephtest02.mon.keyring
 [cephtest02][INFO  ] create the monitor keyring file
 [cephtest02][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i 
 cephtest02 --keyring /var/lib/ceph/tmp/ceph-cephtest02.mon.keyring
 [cephtest02][ERROR ] Traceback (most recent call last):
 [cephtest02][ERROR ]   File 
 /usr/lib/python2.7/dist-packages/ceph_deploy/hosts/common.py, line 72, in 
 mon_create
 [cephtest02][ERROR ]   File 
 /usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py, line 10, 
 in inner
 [cephtest02][ERROR ]   File 
 /usr/lib/python2.7/dist-packages/ceph_deploy/util/wrappers.py, line 6, in 
 remote_call
 [cephtest02][ERROR ]   File /usr/lib/python2.7/subprocess.py, line 511, in 
 check_call
 [cephtest02][ERROR ] raise CalledProcessError(retcode, cmd)
 [cephtest02][ERROR ] CalledProcessError: Command '['ceph-mon', '--cluster', 
 'ceph', '--mkfs', '-i', 'cephtest02', '--keyring', 
 '/var/lib/ceph/tmp/ceph-cephtest02.mon.keyring']' returned non-zero exit 
 status 1
 [cephtest02][INFO  ] --conf/-c        Read configuration from the given
 configuration file
 [cephtest02][INFO  ] -d               Run in foreground, log to stderr.
 [cephtest02][INFO  ] -f               Run in foreground, log to usual
 location.
 [cephtest02][INFO  ] --id/-i          set ID portion of my name
 [cephtest02][INFO  ] --name/-n        set name (TYPE.ID)
 [cephtest02][INFO  ] --version        show version and quit
 [cephtest02][INFO  ]    --debug_ms N
 [cephtest02][INFO  ] set message debug level (e.g. 1)
 [cephtest02][ERROR ] too many arguments: [--cluster,ceph]
 [cephtest02][ERROR ] usage: ceph-mon -i <monid> [--mon-data=<pathtodata>] [flags]
 [cephtest02][ERROR ]   --debug_mon n
 [cephtest02][ERROR ] debug monitor level (e.g. 10)
 [cephtest02][ERROR ]   --mkfs
 [cephtest02][ERROR ] build fresh monitor fs
 [ceph_deploy.mon][ERROR ] Failed to execute command: ceph-mon --cluster ceph 
 --mkfs -i 

[ceph-users] Question about Ceph performance

2013-09-23 Thread Dafan Dong


Hi folks, I am Dafan from Yahoo! corp. We are really interested in Ceph now. I
wish to know where I can get some performance reports about the newly released
DUMPLING, like throughput and latency at different cluster scales and hardware
types. Thanks.

Dafan


[ceph-users] clients in cluster network?

2013-09-23 Thread Kurt Bauer
Hi,
 just a short question to which I couldn't find an answer in the
documentation:
When I run a cluster with the public and cluster network separated, would it
still be possible to have clients accessing the cluster (i.e. RBDs) from
within the cluster network?

Thanks for your help,
best regards,
Kurt

 
-- 
Kurt Bauer kurt.ba...@univie.ac.at
Vienna University Computer Center - ACOnet - VIX
Universitaetsstrasse 7, A-1010 Vienna, Austria, Europe
Tel: ++43 1 4277 - 14070 (Fax: - 9140)  KB1970-RIPE




[ceph-users] ceph-deploy again

2013-09-23 Thread Bernhard Glomm
Hi all,

something with ceph-deploy doesn't work at all anymore.
After an upgrade, ceph-deploy failed to roll out a new monitor
with "permission denied, are you root?"
(Obviously there shouldn't be a root login, so I had another user
for ceph-deploy before, which worked perfectly; why not now?)

[ceph_deploy.install][DEBUG ] Purging host ping ...
Traceback (most recent call last):
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

Does this mean I have to let root log into my cluster with a passwordless key?
I would rather keep using a separate login, as before, if possible.
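
(For what it's worth, the usual non-root setup described for ceph-deploy at the time was roughly the following; a sketch, with the user name purely illustrative:

    # on each target node, give the deploy user passwordless sudo
    echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
    sudo chmod 0440 /etc/sudoers.d/cephuser

ceph-deploy then runs its remote commands, e.g. apt-get, through sudo instead of needing a root login.)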

The howto on ceph.com doesn't say anything about it,
the changelog.Debian.gz isn't very helpful either,
and no other changelog (nor a README) is provided.

ceph-deploy is version 1.2.6
system is freshly installed raring

I have these two lines in my sources.list:
deb http://192.168.242.91:3142/ceph.com/debian/ raring main
deb http://192.168.242.91:3142/ceph.com/packages/ceph-extras/debian/ raring main

since these two didn't work:
#deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/  
 raring main
#deb http://gitbuilder.ceph.com/cdep-deb-raring-x86_64-basic/ref/master/    
raring main
(couldn't find the python-pushy version ceph-deploy depends on)

TIA

Bernhard

-- 
Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax:   +49 (30) 86880 100
Skype: bernhard.glomm.ecologic

Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH



Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Noah Watkins
 <property>
 <name>ceph.root.dir</name>
 <value>/mnt/mycephfs</value>
 </property>

This is probably causing the issue. Is this meant to be a local mount
point? The 'ceph.root.dir' property specifies the root directory
/inside/ CephFS, and the Hadoop implementation doesn't require a local
CephFS mount--it uses a client library to interact with the file
system.

The default value for this property is /, so you can probably just
remove this from your config file unless your CephFS directory
structure is carved up in a special way.

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>
 <property>
 <name>ceph.auth.keyfile</name>
 <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
 </property>

 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
 </property>

These files will need to be available locally on every node Hadoop
runs on. I think the error below will occur after these are loaded, so
it probably isn't your issue, though I don't recall exactly at which
point different configuration files are loaded.

 <property>
 <name>fs.hdfs.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

I don't think this is part of the problem you are seeing, but this
'fs.hdfs.impl' property should probably be removed. We aren't
overriding HDFS, just replacing it.

 <property>
 <name>ceph.mon.address</name>
 <value>hyrax1:6789</value>
 </property>

This was already specified in your 'fs.default.name' property. I don't
think that duplicating it is an issue, but I should probably update
the documentation to make it clear that the monitor only needs to be
listed once.

Thanks!
Noah


Re: [ceph-users] clients in cluster network?

2013-09-23 Thread John Wilkins
Clients use the public network. The cluster network is principally for
OSD-to-OSD communication--heartbeats, replication, backfill, etc.
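
For illustration, the split John describes is configured with two options along these lines (addresses are placeholders); a client host can sit physically on either network, but it talks to the monitors' and OSDs' public addresses:

    [global]
        public network  = 192.168.0.0/24
        cluster network = 10.0.0.0/24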

On Mon, Sep 23, 2013 at 7:42 AM, Kurt Bauer kurt.ba...@univie.ac.at wrote:
 Hi,
  just a short question to which I couldn't find an answer in the
 documentation:
 When I run a cluster with public and cluster network seperated, would it
 still be possible to have clients accessing the cluster (ie. RBDs) from
 within the cluster network?

 Thanks for your help,
 best regards,
 Kurt


 --
 Kurt Bauer kurt.ba...@univie.ac.at
 Vienna University Computer Center - ACOnet - VIX
 Universitaetsstrasse 7, A-1010 Vienna, Austria, Europe
 Tel: ++43 1 4277 - 14070 (Fax: - 9140)  KB1970-RIPE

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com


Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Noah Watkins
Shoot, I thought I had it figured out :)

There is a default admin user created when you first create your
cluster. After a typical install via ceph-deploy, there should be a
file called 'ceph.client.admin.keyring', usually sibling to ceph.conf.
If this is in a standard location (e.g. /etc/ceph) you shouldn't need
the keyring option, otherwise point 'ceph.auth.keyring' at that
keyring file. You shouldn't need both the keyring and the keyfile
options set, but it just depends on how your authentication / users
are all setup.

The easiest thing to do if that doesn't solve your problem is probably
to turn on logging so we can see what is blowing up.

In your ceph.conf you can add 'debug client = 20' and 'debug
javaclient = 20' to the client section. You may also need to set the
log file 'log file = /path/...'. You don't need to do this on all your
nodes, just one node where you get the failure.
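
As a sketch, the ceph.conf fragment being described would look something like this (the log path is just an example):

    [client]
        debug client = 20
        debug javaclient = 20
        log file = /var/log/ceph/client.log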

- Noah

 Thanks,
 Rolando

 P.S.: I have the cephFS mounted locally, so the cluster is ok.

 cluster d9ca74d0-d9f4-436d-92de-762af67c6534
health HEALTH_OK
monmap e1: 9 mons at
 {hyrax1=10.10.10.10:6789/0,hyrax2=10.10.10.12:6789/0,hyrax3=10.10.10.15:6789/0,hyrax4=10.10.10.13:6789/0,hyrax5=10.10.10.16:6789/0,hyrax6=10.10.10.14:6789/0,hyrax7=10.10.10.18:6789/0,hyrax8=10.10.10.17:6789/0,hyrax9=10.10.10.11:6789/0},
 election epoch 6, quorum 0,1,2,3,4,5,6,7,8
 hyrax1,hyrax2,hyrax3,hyrax4,hyrax5,hyrax6,hyrax7,hyrax8,hyrax9
osdmap e30: 9 osds: 9 up, 9 in
 pgmap v2457: 192 pgs: 192 active+clean; 10408 bytes data, 44312 MB
 used, 168 GB / 221 GB avail
mdsmap e4: 1/1/1 up {0=hyrax1=up:active}


 <property>
 <name>fs.ceph.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
 <name>fs.default.name</name>
 <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
 <name>ceph.root.dir</name>
 <value>/</value>
 </property>

 <property>
 <name>ceph.auth.keyfile</name>
 <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
 </property>

 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
 </property>

 On Mon, Sep 23, 2013 at 11:42 AM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 <property>
 <name>ceph.root.dir</name>
 <value>/mnt/mycephfs</value>
 </property>

 This is probably causing the issue. Is this meant to be a local mount
 point? The 'ceph.root.dir' property specifies the root directory
 /inside/ CephFS, and the Hadoop implementation doesn't require a local
 CephFS mount--it uses a client library to interact with the file
 system.

 The default value for this property is /, so you can probably just
 remove this from your config file unless your CephFS directory
 structure is carved up in a special way.

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>
 <property>
 <name>ceph.auth.keyfile</name>
 <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
 </property>

 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
 </property>

 These files will need to be available locally on every node Hadoop
 runs on. I think the error below will occur after these are loaded, so
 it probably isn't your issue, though I don't recall exactly at which
 point different configuration files are loaded.

 <property>
 <name>fs.hdfs.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 I don't think this is part of the problem you are seeing, but this
 'fs.hdfs.impl' property should probably be removed. We aren't
 overriding HDFS, just replacing it.

 <property>
 <name>ceph.mon.address</name>
 <value>hyrax1:6789</value>
 </property>

 This was already specified in your 'fs.default.name' property. I don't
 think that duplicating it is an issue, but I should probably update
 the documentation to make it clear that the monitor only needs to be
 listed once.

 Thanks!
 Noah


Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Rolando Martins
Hi Noah,
I updated the core-site.xml (below).
All the nodes have the same files, but the problem remains the same.

What is the value for ceph.auth.keyring? Is it the path to the
ceph.mon.keyring file?


Thanks,
Rolando

P.S.: I have the cephFS mounted locally, so the cluster is ok.

cluster d9ca74d0-d9f4-436d-92de-762af67c6534
   health HEALTH_OK
   monmap e1: 9 mons at
{hyrax1=10.10.10.10:6789/0,hyrax2=10.10.10.12:6789/0,hyrax3=10.10.10.15:6789/0,hyrax4=10.10.10.13:6789/0,hyrax5=10.10.10.16:6789/0,hyrax6=10.10.10.14:6789/0,hyrax7=10.10.10.18:6789/0,hyrax8=10.10.10.17:6789/0,hyrax9=10.10.10.11:6789/0},
election epoch 6, quorum 0,1,2,3,4,5,6,7,8
hyrax1,hyrax2,hyrax3,hyrax4,hyrax5,hyrax6,hyrax7,hyrax8,hyrax9
   osdmap e30: 9 osds: 9 up, 9 in
pgmap v2457: 192 pgs: 192 active+clean; 10408 bytes data, 44312 MB
used, 168 GB / 221 GB avail
   mdsmap e4: 1/1/1 up {0=hyrax1=up:active}


<property>
<name>fs.ceph.impl</name>
<value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>

<property>
<name>fs.default.name</name>
<value>ceph://hyrax1:6789/</value>
</property>

<property>
<name>ceph.conf.file</name>
<value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
</property>

<property>
<name>ceph.root.dir</name>
<value>/</value>
</property>

<property>
<name>ceph.auth.keyfile</name>
<value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
</property>

<property>
<name>ceph.auth.keyring</name>
<value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
</property>

On Mon, Sep 23, 2013 at 11:42 AM, Noah Watkins noah.watk...@inktank.com wrote:
 <property>
 <name>ceph.root.dir</name>
 <value>/mnt/mycephfs</value>
 </property>

 This is probably causing the issue. Is this meant to be a local mount
 point? The 'ceph.root.dir' property specifies the root directory
 /inside/ CephFS, and the Hadoop implementation doesn't require a local
 CephFS mount--it uses a client library to interact with the file
 system.

 The default value for this property is /, so you can probably just
 remove this from your config file unless your CephFS directory
 structure is carved up in a special way.

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>
 <property>
 <name>ceph.auth.keyfile</name>
 <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
 </property>

 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
 </property>

 These files will need to be available locally on every node Hadoop
 runs on. I think the error below will occur after these are loaded, so
 it probably isn't your issue, though I don't recall exactly at which
 point different configuration files are loaded.

 <property>
 <name>fs.hdfs.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 I don't think this is part of the problem you are seeing, but this
 'fs.hdfs.impl' property should probably be removed. We aren't
 overriding HDFS, just replacing it.

 <property>
 <name>ceph.mon.address</name>
 <value>hyrax1:6789</value>
 </property>

 This was already specified in your 'fs.default.name' property. I don't
 think that duplicating it is an issue, but I should probably update
 the documentation to make it clear that the monitor only needs to be
 listed once.

 Thanks!
 Noah


Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Rolando Martins
Hi Noah,
I enabled the debugging and got:

2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): ERROR:
missing keyring, cannot use cephx for authentication
2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit ret -2

I have the ceph.client.admin.keyring file in /etc/ceph and I tried
with and without the
parameter in core-site.xml. Unfortunately without success:(

Thanks,
Rolando


<property>
<name>fs.ceph.impl</name>
<value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>

<property>
<name>fs.default.name</name>
<value>ceph://hyrax1:6789/</value>
</property>

<property>
<name>ceph.conf.file</name>
<value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
</property>

<property>
<name>ceph.root.dir</name>
<value>/</value>
</property>
<property>
<name>ceph.auth.keyring</name>
<value>/hyrax/hadoop-ceph/ceph/ceph.client.admin.keyring</value>
</property>

On Mon, Sep 23, 2013 at 2:24 PM, Noah Watkins noah.watk...@inktank.com wrote:
 Shoot, I thought I had it figured out :)

 There is a default admin user created when you first create your
 cluster. After a typical install via ceph-deploy, there should be a
 file called 'ceph.client.admin.keyring', usually sibling to ceph.conf.
 If this is in a standard location (e.g. /etc/ceph) you shouldn't need
 the keyring option, otherwise point 'ceph.auth.keyring' at that
 keyring file. You shouldn't need both the keyring and the keyfile
 options set, but it just depends on how your authentication / users
 are all setup.

 The easiest thing to do if that doesn't solve your problem is probably
 to turn on logging so we can see what is blowing up.

 In your ceph.conf you can add 'debug client = 20' and 'debug
 javaclient = 20' to the client section. You may also need to set the
 log file 'log file = /path/...'. You don't need to do this on all your
 nodes, just one node where you get the failure.

 - Noah

 Thanks,
 Rolando

 P.S.: I have the cephFS mounted locally, so the cluster is ok.

 cluster d9ca74d0-d9f4-436d-92de-762af67c6534
health HEALTH_OK
monmap e1: 9 mons at
 {hyrax1=10.10.10.10:6789/0,hyrax2=10.10.10.12:6789/0,hyrax3=10.10.10.15:6789/0,hyrax4=10.10.10.13:6789/0,hyrax5=10.10.10.16:6789/0,hyrax6=10.10.10.14:6789/0,hyrax7=10.10.10.18:6789/0,hyrax8=10.10.10.17:6789/0,hyrax9=10.10.10.11:6789/0},
 election epoch 6, quorum 0,1,2,3,4,5,6,7,8
 hyrax1,hyrax2,hyrax3,hyrax4,hyrax5,hyrax6,hyrax7,hyrax8,hyrax9
osdmap e30: 9 osds: 9 up, 9 in
 pgmap v2457: 192 pgs: 192 active+clean; 10408 bytes data, 44312 MB
 used, 168 GB / 221 GB avail
mdsmap e4: 1/1/1 up {0=hyrax1=up:active}


 <property>
 <name>fs.ceph.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
 <name>fs.default.name</name>
 <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
 <name>ceph.root.dir</name>
 <value>/</value>
 </property>

 <property>
 <name>ceph.auth.keyfile</name>
 <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
 </property>

 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
 </property>

 On Mon, Sep 23, 2013 at 11:42 AM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 <property>
 <name>ceph.root.dir</name>
 <value>/mnt/mycephfs</value>
 </property>

 This is probably causing the issue. Is this meant to be a local mount
 point? The 'ceph.root.dir' property specifies the root directory
 /inside/ CephFS, and the Hadoop implementation doesn't require a local
 CephFS mount--it uses a client library to interact with the file
 system.

 The default value for this property is /, so you can probably just
 remove this from your config file unless your CephFS directory
 structure is carved up in a special way.

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>
 <property>
 <name>ceph.auth.keyfile</name>
 <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
 </property>

 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
 </property>

 These files will need to be available locally on every node Hadoop
 runs on. I think the error below will occur after these are loaded, so
 it probably isn't your issue, though I don't recall exactly at which
 point different configuration files are loaded.

 <property>
 <name>fs.hdfs.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 I don't think this is part of the problem you are seeing, but this
 'fs.hdfs.impl' property should probably be removed. We aren't
 overriding HDFS, just replacing it.

 <property>
 <name>ceph.mon.address</name>
 <value>hyrax1:6789</value>
 </property>

 This was already specified in your 'fs.default.name' property. I don't
 think that duplicating it is an issue, but I should probably update
 the documentation to make it 

Re: [ceph-users] monitor deployment during quick start

2013-09-23 Thread Sage Weil
On Mon, 23 Sep 2013, Alfredo Deza wrote:
 On Fri, Sep 20, 2013 at 3:58 PM, Gruher, Joseph R
 joseph.r.gru...@intel.com wrote:
  Sorry, not trying to repost or bump my thread, but I think I can restate my 
  question here and for better clarity.  I am confused about the --cluster 
  argument used when ceph-deploy mon create invokes ceph-mon on the 
  target system.  I always get a failure at this point when running 
  ceph-deploy mon create and this then halts the whole ceph quick start 
  process.
 
  Here is the line where ceph-deploy mon create fails:
  [cephtest02][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i 
  cephtest02 --keyring /var/lib/ceph/tmp/ceph-cephtest02.mon.keyring
 
  Running the same command manually on the target system gives an error.  As 
  far as I can tell from the man page and the built-in help and the website 
  (http://ceph.com/docs/next/man/8/ceph-mon/) it seems --cluster is not a 
  valid argument for ceph-mon?  Is this a problem in ceph-deploy?  Does this 
  work for anyone else?
 
  ceph@cephtest02:~$ sudo ceph-mon --cluster ceph --mkfs -i cephtest02 
  --keyring /var/lib/ceph/tmp/ceph-cephtest02.mon.keyring
  too many arguments: [--cluster,ceph]
  usage: ceph-mon -i <monid> [--mon-data=<pathtodata>] [flags]
    --debug_mon n
          debug monitor level (e.g. 10)
    --mkfs
          build fresh monitor fs
  --conf/-c        Read configuration from the given configuration file
  -d               Run in foreground, log to stderr.
  -f               Run in foreground, log to usual location.
  --id/-i          set ID portion of my name
  --name/-n        set name (TYPE.ID)
  --version        show version and quit
 
 --debug_ms N
  set message debug level (e.g. 1)
  ceph@cephtest02:~$
 
  Can anyone clarify if --cluster is a supported argument for ceph-mon?
 
 This is a *weird* corner you've stumbled upon. The flag is indeed used
 by ceph-deploy and that hasn't changed in a while. However, as you
 point out, there is no trace of that flag anywhere! I can't find where
 is that defined at all.
 
 Running the latest version of ceph-deploy + ceph, that flag *does* work 
 for me.

--cluster is parsed by everything after bobtail (or thereabouts).  Mostly 
all it does is change the internal value of $cluster that gets substituted 
into other config options.  I'll add it to the usage.
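
As an illustration of that substitution (the paths here are the stock defaults, shown only to make the mechanism concrete), a monitor started with --cluster ceph ends up with

    mon data = /var/lib/ceph/mon/$cluster-$id   ->   /var/lib/ceph/mon/ceph-cephtest02

whereas --cluster backup would expand the same option to /var/lib/ceph/mon/backup-cephtest02, and the configuration file read by default becomes /etc/ceph/$cluster.conf.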

sage

 
 What version of ceph are you using?
 
  Thanks!
 
  Here's the more complete output from the admin system when this fails:
 
  ceph@cephtest01:/my-cluster$ ceph-deploy --overwrite-conf mon create 
  cephtest02
  [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts cephtest02
  [ceph_deploy.mon][DEBUG ] detecting platform for host cephtest02 ...
  [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
  [ceph_deploy.mon][INFO  ] distro info: Ubuntu 12.04 precise
  [cephtest02][DEBUG ] determining if provided host has same hostname in 
  remote
  [cephtest02][DEBUG ] deploying mon to cephtest02
  [cephtest02][DEBUG ] remote hostname: cephtest02
  [cephtest02][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf
  [cephtest02][DEBUG ] checking for done path: 
  /var/lib/ceph/mon/ceph-cephtest02/done
  [cephtest02][DEBUG ] done path does not exist: 
  /var/lib/ceph/mon/ceph-cephtest02/done
  [cephtest02][INFO  ] creating keyring file: 
  /var/lib/ceph/tmp/ceph-cephtest02.mon.keyring
  [cephtest02][INFO  ] create the monitor keyring file
  [cephtest02][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i 
  cephtest02 --keyring /var/lib/ceph/tmp/ceph-cephtest02.mon.keyring
  [cephtest02][ERROR ] Traceback (most recent call last):
  [cephtest02][ERROR ]   File 
  /usr/lib/python2.7/dist-packages/ceph_deploy/hosts/common.py, line 72, in 
  mon_create
  [cephtest02][ERROR ]   File 
  /usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py, line 10, 
  in inner
  [cephtest02][ERROR ]   File 
  /usr/lib/python2.7/dist-packages/ceph_deploy/util/wrappers.py, line 6, in 
  remote_call
  [cephtest02][ERROR ]   File /usr/lib/python2.7/subprocess.py, line 511, 
  in check_call
  [cephtest02][ERROR ] raise CalledProcessError(retcode, cmd)
  [cephtest02][ERROR ] CalledProcessError: Command '['ceph-mon', '--cluster', 
  'ceph', '--mkfs', '-i', 'cephtest02', '--keyring', 
  '/var/lib/ceph/tmp/ceph-cephtest02.mon.keyring']' returned non-zero exit 
  status 1
  [cephtest02][INFO  ] --conf/-c        Read configuration from the given
  configuration file
  [cephtest02][INFO  ] -d               Run in foreground, log to stderr.
  [cephtest02][INFO  ] -f               Run in foreground, log to usual
  location.
  [cephtest02][INFO  ] --id/-i          set ID portion of my name
  [cephtest02][INFO  ] --name/-n        set name (TYPE.ID)
  [cephtest02][INFO  ] --version        show version and quit
  [cephtest02][INFO  ]--debug_ms N
  [cephtest02][INFO  ] set message debug level (e.g. 1)
  [cephtest02][ERROR ] too many arguments: 

Re: [ceph-users] how to install radosgw from source code?

2013-09-23 Thread Somnath Roy
I think some dependency is missing. For example, I can see the libcurl package 
is missing; see the build prerequisites:

http://ceph.com/docs/next/install/build-prerequisites/

The configure/autogen script checks which packages are installed on your system 
and creates the makefiles accordingly.
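
For example, something along these lines should pull in the missing dependency and let configure enable the gateway (the libcurl dev package name is from memory, so treat it as an assumption):

    #apt-get install libcurl4-gnutls-dev
    #./autogen.sh
    #./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib/ceph
    #make && make install

Checking configure's output before running make should show whether the gateway prerequisites were found this time.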

Hope this helps.

Thanks & Regards
Somnath

From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of yy-nm
Sent: Monday, September 23, 2013 1:19 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] how to install radosgw from source code?

hay, folks:
i use ceph source code to install a ceph cluster. ceph version 0.61.8 
(a6fdcca3bddbc9f177e4e2bf0d9cdd85006b028b)
After finish install, i can't find radosgw command?
i use below paramter in ceph's installation:

install package:
#apt-get install automake autoconf gcc g++ libboost-dev libedit-dev libssl-dev 
libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev 
libaio-dev libgoogle-perftools-dev libkeyutils-dev uuid-dev libatomic-ops-dev 
libboost-program-options-dev libboost-thread-dev libexpat1-dev libleveldb-dev 
libsnappy-dev

configure:
#./autogen.sh
#./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib/ceph

make:
#make
#make install

is anything wrong with above ??

thinks!






Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Noah Watkins
What happens when you run `bin/hadoop fs -ls` ? This is entirely
local, and a bit simpler and easier to grok.

On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
rolando.mart...@gmail.com wrote:
 I am trying to start hadoop using bin/start-mapred.sh.
 In the HADOOP_HOME/lib, I have:
 lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
 (the first I downloaded from
 http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I copied
 from my system (after installing the ubuntu package for the ceph java
 client))

 I added to conf/hadoop-env.sh:
 export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib

 I confirmed using bin/hadoop classpath that both jar are in the classpath.
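
 (Restated as commands, the setup described above is roughly the following; a sketch, with paths taken from this thread:

     cp hadoop-cephfs.jar libcephfs.jar libcephfs_jni.so $HADOOP_HOME/lib/
     echo 'export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib' >> $HADOOP_HOME/conf/hadoop-env.sh
     bin/hadoop classpath | grep -i ceph
 )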

 On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 How are you invoking Hadoop? Also, I forgot to ask, are you using the
 wrappers located in github.com/ceph/hadoop-common (or the jar linked
 to on http://ceph.com/docs/master/cephfs/hadoop/)?

 On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 Hi Noah,
 I enabled the debugging and got:

 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 I have the ceph.client.admin.keyring file in /etc/ceph and I tried
 with and without the
 parameter in core-site.xml. Unfortunately without success:(

 Thanks,
 Rolando


 <property>
 <name>fs.ceph.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
 <name>fs.default.name</name>
 <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
 <name>ceph.root.dir</name>
 <value>/</value>
 </property>
 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.client.admin.keyring</value>
 </property>

 On Mon, Sep 23, 2013 at 2:24 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 Shoot, I thought I had it figured out :)

 There is a default admin user created when you first create your
 cluster. After a typical install via ceph-deploy, there should be a
 file called 'ceph.client.admin.keyring', usually sibling to ceph.conf.
 If this is in a standard location (e.g. /etc/ceph) you shouldn't need
 the keyring option, otherwise point 'ceph.auth.keyring' at that
 keyring file. You shouldn't need both the keyring and the keyfile
 options set, but it just depends on how your authentication / users
 are all setup.

 The easiest thing to do if that doesn't solve your problem is probably
 to turn on logging so we can see what is blowing up.

 In your ceph.conf you can add 'debug client = 20' and 'debug
 javaclient = 20' to the client section. You may also need to set the
 log file 'log file = /path/...'. You don't need to do this on all your
 nodes, just one node where you get the failure.

 - Noah

 Thanks,
 Rolando

 P.S.: I have the cephFS mounted locally, so the cluster is ok.

 cluster d9ca74d0-d9f4-436d-92de-762af67c6534
health HEALTH_OK
monmap e1: 9 mons at
 {hyrax1=10.10.10.10:6789/0,hyrax2=10.10.10.12:6789/0,hyrax3=10.10.10.15:6789/0,hyrax4=10.10.10.13:6789/0,hyrax5=10.10.10.16:6789/0,hyrax6=10.10.10.14:6789/0,hyrax7=10.10.10.18:6789/0,hyrax8=10.10.10.17:6789/0,hyrax9=10.10.10.11:6789/0},
 election epoch 6, quorum 0,1,2,3,4,5,6,7,8
 hyrax1,hyrax2,hyrax3,hyrax4,hyrax5,hyrax6,hyrax7,hyrax8,hyrax9
osdmap e30: 9 osds: 9 up, 9 in
 pgmap v2457: 192 pgs: 192 active+clean; 10408 bytes data, 44312 MB
 used, 168 GB / 221 GB avail
mdsmap e4: 1/1/1 up {0=hyrax1=up:active}


 <property>
 <name>fs.ceph.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
 <name>fs.default.name</name>
 <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
 <name>ceph.root.dir</name>
 <value>/</value>
 </property>

 <property>
 <name>ceph.auth.keyfile</name>
 <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
 </property>

 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
 </property>

 On Mon, Sep 23, 2013 at 11:42 AM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 <property>
 <name>ceph.root.dir</name>
 <value>/mnt/mycephfs</value>
 </property>

 This is probably causing the issue. Is this meant to be a local mount
 point? The 'ceph.root.dir' property specifies the root directory
 /inside/ CephFS, and the Hadoop implementation doesn't require a local
 CephFS mount--it uses a client library to interact with the file
 system.

 The default value for this property is /, so you can probably just
 remove this from your config file unless your CephFS directory
 structure is carved up in a special way.

 <property>
 <name>ceph.conf.file</name>
 

Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Noah Watkins
In the log file that you are showing, do you see where the keyring file is
being set by Hadoop? You can find it by grepping for: jni: conf_set
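
(For example, assuming the client log file configured earlier:

    grep 'jni: conf_set' /var/log/ceph/client.log

would show which configuration keys, including the keyring path, the Hadoop bindings handed to libcephfs.)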

On Mon, Sep 23, 2013 at 12:43 PM, Rolando Martins
rolando.mart...@gmail.com wrote:
 bin/hadoop fs -ls

 Bad connection to FS. command aborted. exception:

 (no other information is thrown)

 ceph log:
 2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
 2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 What happens when you run `bin/hadoop fs -ls` ? This is entirely
 local, and a bit simpler and easier to grok.

 On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I am trying to start hadoop using bin/start-mapred.sh.
 In the HADOOP_HOME/lib, I have:
 lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
 (the first I downloaded from
 http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I copied
 from my system (after installing the ubuntu package for the ceph java
 client))

 I added to conf/hadoop-env.sh:
 export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib

 I confirmed using bin/hadoop classpath that both jar are in the classpath.

 On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 How are you invoking Hadoop? Also, I forgot to ask, are you using the
 wrappers located in github.com/ceph/hadoop-common (or the jar linked
 to on http://ceph.com/docs/master/cephfs/hadoop/)?

 On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 Hi Noah,
 I enabled the debugging and got:

 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache size 0 
 max 0
 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 I have the ceph.client.admin.keyring file in /etc/ceph and I tried
 with and without the
 parameter in core-site.xml. Unfortunately without success:(

 Thanks,
 Rolando


 <property>
 <name>fs.ceph.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
 <name>fs.default.name</name>
 <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
 <name>ceph.root.dir</name>
 <value>/</value>
 </property>
 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.client.admin.keyring</value>
 </property>

 On Mon, Sep 23, 2013 at 2:24 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 Shoot, I thought I had it figured out :)

 There is a default admin user created when you first create your
 cluster. After a typical install via ceph-deploy, there should be a
 file called 'ceph.client.admin.keyring', usually sibling to ceph.conf.
 If this is in a standard location (e.g. /etc/ceph) you shouldn't need
 the keyring option, otherwise point 'ceph.auth.keyring' at that
 keyring file. You shouldn't need both the keyring and the keyfile
 options set, but it just depends on how your authentication / users
 are all setup.

 The easiest thing to do if that doesn't solve your problem is probably
 to turn on logging so we can see what is blowing up.

 In your ceph.conf you can add 'debug client = 20' and 'debug
 javaclient = 20' to the client section. You may also need to set the
 log file 'log file = /path/...'. You don't need to do this on all your
 nodes, just one node where you get the failure.

 - Noah

 Thanks,
 Rolando

 P.S.: I have the cephFS mounted locally, so the cluster is ok.

 cluster d9ca74d0-d9f4-436d-92de-762af67c6534
health HEALTH_OK
monmap e1: 9 mons at
 {hyrax1=10.10.10.10:6789/0,hyrax2=10.10.10.12:6789/0,hyrax3=10.10.10.15:6789/0,hyrax4=10.10.10.13:6789/0,hyrax5=10.10.10.16:6789/0,hyrax6=10.10.10.14:6789/0,hyrax7=10.10.10.18:6789/0,hyrax8=10.10.10.17:6789/0,hyrax9=10.10.10.11:6789/0},
 election epoch 6, quorum 0,1,2,3,4,5,6,7,8
 hyrax1,hyrax2,hyrax3,hyrax4,hyrax5,hyrax6,hyrax7,hyrax8,hyrax9
osdmap e30: 9 osds: 9 up, 9 in
 pgmap v2457: 192 pgs: 192 active+clean; 10408 bytes data, 44312 MB
 used, 168 GB / 221 GB avail
mdsmap e4: 1/1/1 up {0=hyrax1=up:active}


 <property>
 <name>fs.ceph.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
 <name>fs.default.name</name>
 <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
 <name>ceph.root.dir</name>
 <value>/</value>
 </property>

 <property>
 <name>ceph.auth.keyfile</name>
 <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
 </property>

 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
 

Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Rolando Martins
bin/hadoop fs -ls

Bad connection to FS. command aborted. exception:

(no other information is thrown)

ceph log:
2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
missing keyring, cannot use cephx for authentication
2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret -2

On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins noah.watk...@inktank.com wrote:
 What happens when you run `bin/hadoop fs -ls` ? This is entirely
 local, and a bit simpler and easier to grok.

 On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I am trying to start hadoop using bin/start-mapred.sh.
 In the HADOOP_HOME/lib, I have:
 lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
 (the first I downloaded from
 http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I copied
 from my system (after installing the ubuntu package for the ceph java
 client))

 I added to conf/hadoop-env.sh:
 export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib

 I confirmed using bin/hadoop classpath that both jar are in the classpath.

 On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 How are you invoking Hadoop? Also, I forgot to ask, are you using the
 wrappers located in github.com/ceph/hadoop-common (or the jar linked
 to on http://ceph.com/docs/master/cephfs/hadoop/)?

 On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 Hi Noah,
 I enabled the debugging and got:

 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache size 0 max  0
 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 I have the ceph.client.admin.keyring file in /etc/ceph and I tried
 with and without the
 parameter in core-site.xml. Unfortunately without success:(

 Thanks,
 Rolando


 <property>
 <name>fs.ceph.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
 <name>fs.default.name</name>
 <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
 <name>ceph.root.dir</name>
 <value>/</value>
 </property>
 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.client.admin.keyring</value>
 </property>

 On Mon, Sep 23, 2013 at 2:24 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 Shoot, I thought I had it figured out :)

 There is a default admin user created when you first create your
 cluster. After a typical install via ceph-deploy, there should be a
 file called 'ceph.client.admin.keyring', usually sibling to ceph.conf.
 If this is in a standard location (e.g. /etc/ceph) you shouldn't need
 the keyring option, otherwise point 'ceph.auth.keyring' at that
 keyring file. You shouldn't need both the keyring and the keyfile
 options set, but it just depends on how your authentication / users
 are all setup.

 The easiest thing to do if that doesn't solve your problem is probably
 to turn on logging so we can see what is blowing up.

 In your ceph.conf you can add 'debug client = 20' and 'debug
 javaclient = 20' to the client section. You may also need to set the
 log file 'log file = /path/...'. You don't need to do this on all your
 nodes, just one node where you get the failure.

 - Noah

 Thanks,
 Rolando

 P.S.: I have the cephFS mounted locally, so the cluster is ok.

 cluster d9ca74d0-d9f4-436d-92de-762af67c6534
health HEALTH_OK
monmap e1: 9 mons at
 {hyrax1=10.10.10.10:6789/0,hyrax2=10.10.10.12:6789/0,hyrax3=10.10.10.15:6789/0,hyrax4=10.10.10.13:6789/0,hyrax5=10.10.10.16:6789/0,hyrax6=10.10.10.14:6789/0,hyrax7=10.10.10.18:6789/0,hyrax8=10.10.10.17:6789/0,hyrax9=10.10.10.11:6789/0},
 election epoch 6, quorum 0,1,2,3,4,5,6,7,8
 hyrax1,hyrax2,hyrax3,hyrax4,hyrax5,hyrax6,hyrax7,hyrax8,hyrax9
osdmap e30: 9 osds: 9 up, 9 in
 pgmap v2457: 192 pgs: 192 active+clean; 10408 bytes data, 44312 MB
 used, 168 GB / 221 GB avail
mdsmap e4: 1/1/1 up {0=hyrax1=up:active}


 <property>
 <name>fs.ceph.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
 <name>fs.default.name</name>
 <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
 <name>ceph.root.dir</name>
 <value>/</value>
 </property>

 <property>
 <name>ceph.auth.keyfile</name>
 <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
 </property>

 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
 </property>

 On Mon, Sep 23, 2013 at 11:42 AM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 <property>
 <name>ceph.root.dir</name>
 <value>/mnt/mycephfs</value>
 </property>

 This is probably causing the issue. Is this meant to 

Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Rolando Martins
2013-09-23 19:42:22.515836 7f0b58de7700 10 jni: conf_read_file: exit ret 0
2013-09-23 19:42:22.515893 7f0b58de7700 10 jni: ceph_mount: /
2013-09-23 19:42:22.516643 7f0b58de7700 -1 monclient(hunting): ERROR:
missing keyring, cannot use cephx for authentication
2013-09-23 19:42:22.516969 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
2013-09-23 19:42:22.517210 7f0b58de7700 10 jni: ceph_mount: exit ret -2
2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret 0
2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /



On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins noah.watk...@inktank.com wrote:
 In the log file that you showing, do you see where the keyring file is
 being set by Hadoop? You can find it by grepping for: jni: conf_set

 On Mon, Sep 23, 2013 at 12:43 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 bin/hadoop fs -ls

 Bad connection to FS. command aborted. exception:

 (no other information is thrown)

 ceph log:
 2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
 2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 What happens when you run `bin/hadoop fs -ls` ? This is entirely
 local, and a bit simpler and easier to grok.

 On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I am trying to start hadoop using bin/start-mapred.sh.
 In the HADOOP_HOME/lib, I have:
 lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
 (the first I downloaded from
 http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I copied
 from my system (after installing the ubuntu package for the ceph java
 client))

 I added to conf/hadoop-env.sh:
 export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib

 I confirmed using bin/hadoop classpath that both jar are in the classpath.

 On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 How are you invoking Hadoop? Also, I forgot to ask, are you using the
 wrappers located in github.com/ceph/hadoop-common (or the jar linked
 to on http://ceph.com/docs/master/cephfs/hadoop/)?

 On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 Hi Noah,
 I enabled the debugging and got:

 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache size 0 
 max 0
 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 I have the ceph.client.admin.keyring file in /etc/ceph and I tried
 with and without the
 parameter in core-site.xml. Unfortunately without success:(

 Thanks,
 Rolando


 <property>
 <name>fs.ceph.impl</name>
 <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
 <name>fs.default.name</name>
 <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
 <name>ceph.conf.file</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
 <name>ceph.root.dir</name>
 <value>/</value>
 </property>
 <property>
 <name>ceph.auth.keyring</name>
 <value>/hyrax/hadoop-ceph/ceph/ceph.client.admin.keyring</value>
 </property>

 On Mon, Sep 23, 2013 at 2:24 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 Shoot, I thought I had it figured out :)

 There is a default admin user created when you first create your
 cluster. After a typical install via ceph-deploy, there should be a
 file called 'ceph.client.admin.keyring', usually sibling to ceph.conf.
 If this is in a standard location (e.g. /etc/ceph) you shouldn't need
 the keyring option, otherwise point 'ceph.auth.keyring' at that
 keyring file. You shouldn't need both the keyring and the keyfile
 options set, but it just depends on how your authentication / users
 are all setup.

 The easiest thing to do if that doesn't solve your problem is probably
 to turn on logging so we can see what is blowing up.

 In your ceph.conf you can add 'debug client = 20' and 'debug
 javaclient = 20' to the client section. You may also need to set the
 log file 'log file = /path/...'. You don't need to do this on all your
 nodes, just one node where you get the failure.

 - Noah

 Thanks,
 Rolando

 P.S.: I have the cephFS mounted locally, so the cluster is ok.

 cluster d9ca74d0-d9f4-436d-92de-762af67c6534
health HEALTH_OK
monmap e1: 9 mons at
 {hyrax1=10.10.10.10:6789/0,hyrax2=10.10.10.12:6789/0,hyrax3=10.10.10.15:6789/0,hyrax4=10.10.10.13:6789/0,hyrax5=10.10.10.16:6789/0,hyrax6=10.10.10.14:6789/0,hyrax7=10.10.10.18:6789/0,hyrax8=10.10.10.17:6789/0,hyrax9=10.10.10.11:6789/0},
 election epoch 6, quorum 0,1,2,3,4,5,6,7,8
 hyrax1,hyrax2,hyrax3,hyrax4,hyrax5,hyrax6,hyrax7,hyrax8,hyrax9
osdmap e30: 9 osds: 9 up, 9 in
 pgmap v2457: 192 pgs: 192 

Re: [ceph-users] ceph-deploy again

2013-09-23 Thread Alfredo Deza
On Mon, Sep 23, 2013 at 11:23 AM, Bernhard Glomm bernhard.gl...@ecologic.eu
 wrote:

 Hi all,

 something with ceph-deploy doesn't work at all anymore.
 After an upgrade, ceph-deploy failed to roll out a new monitor
 with "permission denied. are you root?".
 (Obviously there shouldn't be a root login, so I had set up another user
 for ceph-deploy before, which worked perfectly; why not now?)

 [ceph_deploy.install][DEBUG ] Purging host ping ...
 Traceback (most recent call last):
 E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission
 denied)
 E: Unable to lock the administration directory (/var/lib/dpkg/), are you
 root?

 Does this mean I have to let root log into my cluster with a passwordless
 key?
 I would rather keep using a different login, as before, if possible.

 Can you paste here the exact command you are running (and with what user) ?
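 
 (For reference, the usual way to avoid a root login with ceph-deploy is a
 dedicated non-root user with passwordless sudo on every node; a minimal sketch,
 where the user name "ceph" is only an example:
 
  sudo useradd -d /home/ceph -m ceph
  sudo passwd ceph
  echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
  sudo chmod 0440 /etc/sudoers.d/ceph
 
 ceph-deploy is then run from the admin box as that user, with passwordless ssh
 keys pushed to each node.)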


 The howto on ceph.com doesn't say anything about it,
 the changelog.Debian.gz isn't very helpful either, and no other
 changelog (or README) is provided.

 ceph-deploy is version 1.2.6
 system is freshly installed raring

 I have both of these lines in my sources.list:
 deb http://192.168.242.91:3142/ceph.com/debian/ raring main
 deb http://192.168.242.91:3142/ceph.com/packages/ceph-extras/debian/raring 
 main

 since these two didn't work:
 #deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/dumpling/
 raring main
 #deb http://gitbuilder.ceph.com/cdep-deb-raring-x86_64-basic/ref/master/
 raring main
 (couldn't find the python-pushy version ceph-deploy depends on)

 TIA

 Bernhard

 --
   Bernhard Glomm
   IT Administration
   Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
   Phone: +49 (30) 86880 134 | Fax: +49 (30) 86880 100 | Skype: bernhard.glomm.ecologic
   Website: http://ecologic.eu | Video: http://www.youtube.com/v/hZtiK04A9Yo
   Newsletter: http://ecologic.eu/newsletter/subscribe
   Facebook: http://www.facebook.com/Ecologic.Institute
   LinkedIn: http://www.linkedin.com/company/ecologic-institute-berlin-germany
   Twitter: http://twitter.com/EcologicBerlin | YouTube: http://www.youtube.com/user/EcologicInstitute
   Google+: http://plus.google.com/113756356645020994482
   GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
   Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
 --

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] how to install radosgw from source code?

2013-09-23 Thread Yehuda Sadeh
On Mon, Sep 23, 2013 at 1:18 AM, yy-nm yxdyours...@gmail.com wrote:
 hay, folks:
 i use ceph source code to install a ceph cluster. ceph version 0.61.8
 (a6fdcca3bddbc9f177e4e2bf0d9cdd85006b028b)
 After finish install, i can't find radosgw command?
 i use below paramter in ceph's installation:

 install package:
 #apt-get install automake autoconf gcc g++ libboost-dev libedit-dev
 libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers
 libcrypto++-dev libaio-dev libgoogle-perftools-dev libkeyutils-dev uuid-dev
 libatomic-ops-dev libboost-program-options-dev libboost-thread-dev
 libexpat1-dev libleveldb-dev libsnappy-dev

 configure:
 #./autogen.sh
 #./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib/ceph

 make:
 #make
 #make install

 is anything wrong with above ??

You're missing --with-radosgw in your configure.
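 
 Reusing the flags from the original configure line, the radosgw-enabled build
 would presumably become:
 
  #./autogen.sh
  #./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib/ceph --with-radosgw
  #make
  #make install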



 thinks!


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph.conf changes and restarting ceph.

2013-09-23 Thread Snider, Tim
I modified /etc/ceph/ceph.conf to disable authentication and to specify both private and
public networks. /etc/ceph/ceph.conf was distributed to all nodes in the cluster.
ceph was restarted on all nodes using "service ceph -a restart".
After that, authentication is still required and no ports are open on the
cluster-facing (192.168.10.0) network.
Details in http://pastie.org/8349534.
What am I missing?

Thanks,
Tim
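 
 (For reference, a minimal sketch of the kind of ceph.conf fragment this implies;
 both subnets below are placeholders, and the option names are the dumpling-era
 settings, so treat the exact values as assumptions:
 
  [global]
      auth cluster required = none
      auth service required = none
      auth client required = none
      public network  = 10.0.0.0/24       ; placeholder
      cluster network = 192.168.10.0/24   ; placeholder
 )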
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Noah Watkins
I'm not sure what you grepped for. Does this output mean that the
string conf_set didn't show up in the log?

On Mon, Sep 23, 2013 at 12:52 PM, Rolando Martins
rolando.mart...@gmail.com wrote:
 2013-09-23 19:42:22.515836 7f0b58de7700 10 jni: conf_read_file: exit ret 0
 2013-09-23 19:42:22.515893 7f0b58de7700 10 jni: ceph_mount: /
 2013-09-23 19:42:22.516643 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:22.516969 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
 2013-09-23 19:42:22.517210 7f0b58de7700 10 jni: ceph_mount: exit ret -2
 2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret 0
 2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
 


 On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 In the log file that you showing, do you see where the keyring file is
 being set by Hadoop? You can find it by grepping for: jni: conf_set

 On Mon, Sep 23, 2013 at 12:43 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 bin/hadoop fs -ls

 Bad connection to FS. command aborted. exception:

 (no other information is thrown)

 ceph log:
 2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
 2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 What happens when you run `bin/hadoop fs -ls` ? This is entirely
 local, and a bit simpler and easier to grok.

 On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I am trying to start hadoop using bin/start-mapred.sh.
 In the HADOOP_HOME/lib, I have:
 lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
 (the first I downloaded from
 http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I copied
 from my system (after installing the ubuntu package for the ceph java
 client))

 I added to conf/hadoop-env.sh:
 export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib

 I confirmed using bin/hadoop classpath that both jar are in the classpath.

 On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 How are you invoking Hadoop? Also, I forgot to ask, are you using the
 wrappers located in github.com/ceph/hadoop-common (or the jar linked
 to on http://ceph.com/docs/master/cephfs/hadoop/)?

 On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 Hi Noah,
 I enabled the debugging and got:

 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache size 0 
 max 0
 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 I have the ceph.client.admin.keyring file in /etc/ceph and I tried
 with and without the
 parameter in core-site.xml. Unfortunately without success:(

 Thanks,
 Rolando


 <property>
   <name>fs.ceph.impl</name>
   <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
   <name>fs.default.name</name>
   <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
   <name>ceph.conf.file</name>
   <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
   <name>ceph.root.dir</name>
   <value>/</value>
 </property>

 <property>
   <name>ceph.auth.keyring</name>
   <value>/hyrax/hadoop-ceph/ceph/ceph.client.admin.keyring</value>
 </property>

 On Mon, Sep 23, 2013 at 2:24 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 Shoot, I thought I had it figured out :)

 There is a default admin user created when you first create your
 cluster. After a typical install via ceph-deploy, there should be a
 file called 'ceph.client.admin.keyring', usually sibling to ceph.conf.
 If this is in a standard location (e.g. /etc/ceph) you shouldn't need
 the keyring option, otherwise point 'ceph.auth.keyring' at that
 keyring file. You shouldn't need both the keyring and the keyfile
 options set, but it just depends on how your authentication / users
 are all setup.

 The easiest thing to do if that doesn't solve your problem is probably
 to turn on logging so we can see what is blowing up.

 In your ceph.conf you can add 'debug client = 20' and 'debug
 javaclient = 20' to the client section. You may also need to set the
 log file 'log file = /path/...'. You don't need to do this on all your
 nodes, just one node where you get the failure.

 - Noah

 Thanks,
 Rolando

 P.S.: I have the cephFS mounted locally, so the cluster is ok.

 cluster d9ca74d0-d9f4-436d-92de-762af67c6534
health HEALTH_OK
monmap e1: 9 mons at
 

Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Rolando Martins
My bad, I associated conf_read_file with conf_set.
No, it does not appear in the logs.

On Mon, Sep 23, 2013 at 4:20 PM, Noah Watkins noah.watk...@inktank.com wrote:
 I'm not sure what you grepped for. Does this output mean that the
 string conf_set didn't show up in the log?

 On Mon, Sep 23, 2013 at 12:52 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 2013-09-23 19:42:22.515836 7f0b58de7700 10 jni: conf_read_file: exit ret 0
 2013-09-23 19:42:22.515893 7f0b58de7700 10 jni: ceph_mount: /
 2013-09-23 19:42:22.516643 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:22.516969 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
 2013-09-23 19:42:22.517210 7f0b58de7700 10 jni: ceph_mount: exit ret -2
 2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret 0
 2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
 


 On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 In the log file that you showing, do you see where the keyring file is
 being set by Hadoop? You can find it by grepping for: jni: conf_set

 On Mon, Sep 23, 2013 at 12:43 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 bin/hadoop fs -ls

 Bad connection to FS. command aborted. exception:

 (no other information is thrown)

 ceph log:
 2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 0 max  0
 2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 What happens when you run `bin/hadoop fs -ls` ? This is entirely
 local, and a bit simpler and easier to grok.

 On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I am trying to start hadoop using bin/start-mapred.sh.
 In the HADOOP_HOME/lib, I have:
 lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
 (the first I downloaded from
 http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I copied
 from my system (after installing the ubuntu package for the ceph java
 client))

 I added to conf/hadoop-env.sh:
 export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib

 I confirmed using bin/hadoop classpath that both jar are in the 
 classpath.

 On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 How are you invoking Hadoop? Also, I forgot to ask, are you using the
 wrappers located in github.com/ceph/hadoop-common (or the jar linked
 to on http://ceph.com/docs/master/cephfs/hadoop/)?

 On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 Hi Noah,
 I enabled the debugging and got:

 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache size 0 
 max 0
 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 I have the ceph.client.admin.keyring file in /etc/ceph and I tried
 with and without the
 parameter in core-site.xml. Unfortunately without success:(

 Thanks,
 Rolando


 <property>
   <name>fs.ceph.impl</name>
   <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
   <name>fs.default.name</name>
   <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
   <name>ceph.conf.file</name>
   <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
   <name>ceph.root.dir</name>
   <value>/</value>
 </property>

 <property>
   <name>ceph.auth.keyring</name>
   <value>/hyrax/hadoop-ceph/ceph/ceph.client.admin.keyring</value>
 </property>

 On Mon, Sep 23, 2013 at 2:24 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 Shoot, I thought I had it figured out :)

 There is a default admin user created when you first create your
 cluster. After a typical install via ceph-deploy, there should be a
 file called 'ceph.client.admin.keyring', usually sibling to ceph.conf.
 If this is in a standard location (e.g. /etc/ceph) you shouldn't need
 the keyring option, otherwise point 'ceph.auth.keyring' at that
 keyring file. You shouldn't need both the keyring and the keyfile
 options set, but it just depends on how your authentication / users
 are all setup.

 The easiest thing to do if that doesn't solve your problem is probably
 to turn on logging so we can see what is blowing up.

 In your ceph.conf you can add 'debug client = 20' and 'debug
 javaclient = 20' to the client section. You may also need to set the
 log file 'log file = /path/...'. You don't need to do this on all your
 nodes, just one node where you get the failure.

 - Noah

 Thanks,
 Rolando

 P.S.: I have the cephFS mounted locally, so the cluster is ok.

 cluster d9ca74d0-d9f4-436d-92de-762af67c6534
health HEALTH_OK
monmap e1: 9 mons at
 

Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Noah Watkins
Ok thanks. That narrows things down a lot. It seems like the keyring
property is not being recognized, and I don't see it being set in the log,
so I'm wondering if the jar file is out of date and doesn't include
these configuration features.

If you clone http://github.com/ceph/hadoop-common/ and checkout the
cephfs/branch-1.0 branch, you can run 'ant cephfs' to make a fresh jar
file.
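 
 Roughly, assuming a scratch directory:
 
  $ git clone http://github.com/ceph/hadoop-common/
  $ cd hadoop-common
  $ git checkout cephfs/branch-1.0
  $ ant cephfs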

On Mon, Sep 23, 2013 at 1:22 PM, Rolando Martins
rolando.mart...@gmail.com wrote:
 My bad, I associated conf_read_file with conf_set.
 No, it does not appear in the logs.

 On Mon, Sep 23, 2013 at 4:20 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 I'm not sure what you grepped for. Does this output mean that the
 string conf_set didn't show up in the log?

 On Mon, Sep 23, 2013 at 12:52 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 2013-09-23 19:42:22.515836 7f0b58de7700 10 jni: conf_read_file: exit ret 0
 2013-09-23 19:42:22.515893 7f0b58de7700 10 jni: ceph_mount: /
 2013-09-23 19:42:22.516643 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:22.516969 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
 2013-09-23 19:42:22.517210 7f0b58de7700 10 jni: ceph_mount: exit ret -2
 2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret 0
 2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
 


 On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 In the log file that you showing, do you see where the keyring file is
 being set by Hadoop? You can find it by grepping for: jni: conf_set

 On Mon, Sep 23, 2013 at 12:43 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 bin/hadoop fs -ls

 Bad connection to FS. command aborted. exception:

 (no other information is thrown)

 ceph log:
 2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 0 
 max 0
 2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 What happens when you run `bin/hadoop fs -ls` ? This is entirely
 local, and a bit simpler and easier to grok.

 On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I am trying to start hadoop using bin/start-mapred.sh.
 In the HADOOP_HOME/lib, I have:
 lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
 (the first I downloaded from
 http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I copied
 from my system (after installing the ubuntu package for the ceph java
 client))

 I added to conf/hadoop-env.sh:
 export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib

 I confirmed using bin/hadoop classpath that both jar are in the 
 classpath.

 On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 How are you invoking Hadoop? Also, I forgot to ask, are you using the
 wrappers located in github.com/ceph/hadoop-common (or the jar linked
 to on http://ceph.com/docs/master/cephfs/hadoop/)?

 On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 Hi Noah,
 I enabled the debugging and got:

 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache size 
 0 max 0
 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit ret 
 -2

 I have the ceph.client.admin.keyring file in /etc/ceph and I tried
 with and without the
 parameter in core-site.xml. Unfortunately without success:(

 Thanks,
 Rolando


 <property>
   <name>fs.ceph.impl</name>
   <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
   <name>fs.default.name</name>
   <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
   <name>ceph.conf.file</name>
   <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
   <name>ceph.root.dir</name>
   <value>/</value>
 </property>

 <property>
   <name>ceph.auth.keyring</name>
   <value>/hyrax/hadoop-ceph/ceph/ceph.client.admin.keyring</value>
 </property>

 On Mon, Sep 23, 2013 at 2:24 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 Shoot, I thought I had it figured out :)

 There is a default admin user created when you first create your
 cluster. After a typical install via ceph-deploy, there should be a
 file called 'ceph.client.admin.keyring', usually sibling to 
 ceph.conf.
 If this is in a standard location (e.g. /etc/ceph) you shouldn't need
 the keyring option, otherwise point 'ceph.auth.keyring' at that
 keyring file. You shouldn't need both the keyring and the keyfile
 options set, but it just depends on how your authentication / users
 are all setup.

 The easiest thing to do if that doesn't solve your problem is 
 probably
 to turn on logging so we can see what is blowing up.

Re: [ceph-users] Question about Ceph performance

2013-09-23 Thread Gregory Farnum
On Sun, Sep 22, 2013 at 2:35 AM, Dafan Dong don...@yahoo-inc.com wrote:


 Hi folks, I am Dafan from Yahoo! corp. We are really interested in Ceph now.
 I wish to know where I can get some performance reports about the newly released
 DUMPLING, such as throughput and latency at different cluster scales and hardware
 types. Thanks.

Unfortunately there aren't any public numbers on dumpling yet. If you
go through the Ceph blog you'll find some from Cuttlefish, and the
mailing list archives have some more anecdotal information that Mark
may be able to point you at.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Rolando Martins
I tried to compile it, but the compilation failed.
The error log starts with:
compile-core-classes:
  [taskdef] 2013-09-23 20:59:25,540 INFO  mortbay.log
(Slf4jLog.java:info(67)) - Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
[javac] /home/ubuntu/Projects/hadoop-common/build.xml:487:
warning: 'includeantruntime' was not set, defaulting to
build.sysclasspath=last; set to false for repeatable builds
[javac] Compiling 440 source files to
/home/ubuntu/Projects/hadoop-common/build/classes
[javac] 
/home/ubuntu/Projects/hadoop-common/src/core/org/apache/hadoop/fs/ceph/CephFS.java:31:
package com.ceph.fs does not exist
[javac] import com.ceph.fs.CephStat;
[javac]   ^

What are the dependencies that I need to have installed?


On Mon, Sep 23, 2013 at 4:32 PM, Noah Watkins noah.watk...@inktank.com wrote:
 Ok thanks. That narrows things down a lot. It seems like the keyring
 property is not being recognized, and I don't see  so I'm wondering if
 it is possible that the jar file is out of date and doesn't include
 these configuration features.

 If you clone http://github.com/ceph/hadoop-common/ and checkout the
 cephfs/branch-1.0 branch, you can run 'ant cephfs' to make a fresh jar
 file.

 On Mon, Sep 23, 2013 at 1:22 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 My bad, I associated conf_read_file with conf_set.
 No, it does not appear in the logs.

 On Mon, Sep 23, 2013 at 4:20 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 I'm not sure what you grepped for. Does this output mean that the
 string conf_set didn't show up in the log?

 On Mon, Sep 23, 2013 at 12:52 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 2013-09-23 19:42:22.515836 7f0b58de7700 10 jni: conf_read_file: exit ret 0
 2013-09-23 19:42:22.515893 7f0b58de7700 10 jni: ceph_mount: /
 2013-09-23 19:42:22.516643 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:22.516969 7f0b58de7700 20 client.-1 trim_cache size 0 max  0
 2013-09-23 19:42:22.517210 7f0b58de7700 10 jni: ceph_mount: exit ret -2
 2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret 0
 2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
 


 On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 In the log file that you showing, do you see where the keyring file is
 being set by Hadoop? You can find it by grepping for: jni: conf_set

 On Mon, Sep 23, 2013 at 12:43 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 bin/hadoop fs -ls

 Bad connection to FS. command aborted. exception:

 (no other information is thrown)

 ceph log:
 2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 0 
 max 0
 2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 What happens when you run `bin/hadoop fs -ls` ? This is entirely
 local, and a bit simpler and easier to grok.

 On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I am trying to start hadoop using bin/start-mapred.sh.
 In the HADOOP_HOME/lib, I have:
 lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
 (the first I downloaded from
 http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I copied
 from my system (after installing the ubuntu package for the ceph java
 client))

 I added to conf/hadoop-env.sh:
 export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib

 I confirmed using bin/hadoop classpath that both jar are in the 
 classpath.

 On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 How are you invoking Hadoop? Also, I forgot to ask, are you using the
 wrappers located in github.com/ceph/hadoop-common (or the jar linked
 to on http://ceph.com/docs/master/cephfs/hadoop/)?

 On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 Hi Noah,
 I enabled the debugging and got:

 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache size 
 0 max 0
 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit ret 
 -2

 I have the ceph.client.admin.keyring file in /etc/ceph and I tried
 with and without the
 parameter in core-site.xml. Unfortunately without success:(

 Thanks,
 Rolando


 <property>
   <name>fs.ceph.impl</name>
   <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
   <name>fs.default.name</name>
   <value>ceph://hyrax1:6789/</value>
 </property>

 <property>
   <name>ceph.conf.file</name>
   <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
 </property>

 <property>
   <name>ceph.root.dir</name>

Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Noah Watkins
You need to stick the CephFS jar files in the hadoop lib folder.
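 
 Roughly something like the following; the exact source paths depend on where
 'ant cephfs' drops the jar and on how the libcephfs java/jni packages were
 installed, so treat them as assumptions:
 
  $ cp build/hadoop-cephfs.jar       $HADOOP_HOME/lib/
  $ cp /usr/share/java/libcephfs.jar $HADOOP_HOME/lib/
  $ cp /usr/lib/jni/libcephfs_jni.so $HADOOP_HOME/lib/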

On Mon, Sep 23, 2013 at 2:02 PM, Rolando Martins
rolando.mart...@gmail.com wrote:
 I tried to compile it, but the compilation failed.
 The error log starts with:
 compile-core-classes:
   [taskdef] 2013-09-23 20:59:25,540 INFO  mortbay.log
 (Slf4jLog.java:info(67)) - Logging to
 org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
 org.mortbay.log.Slf4jLog
 [javac] /home/ubuntu/Projects/hadoop-common/build.xml:487:
 warning: 'includeantruntime' was not set, defaulting to
 build.sysclasspath=last; set to false for repeatable builds
 [javac] Compiling 440 source files to
 /home/ubuntu/Projects/hadoop-common/build/classes
 [javac] 
 /home/ubuntu/Projects/hadoop-common/src/core/org/apache/hadoop/fs/ceph/CephFS.java:31:
 package com.ceph.fs does not exist
 [javac] import com.ceph.fs.CephStat;
 [javac]   ^

 What are the dependencies that I need to have installed?


 On Mon, Sep 23, 2013 at 4:32 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 Ok thanks. That narrows things down a lot. It seems like the keyring
 property is not being recognized, and I don't see  so I'm wondering if
 it is possible that the jar file is out of date and doesn't include
 these configuration features.

 If you clone http://github.com/ceph/hadoop-common/ and checkout the
 cephfs/branch-1.0 branch, you can run 'ant cephfs' to make a fresh jar
 file.

 On Mon, Sep 23, 2013 at 1:22 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 My bad, I associated conf_read_file with conf_set.
 No, it does not appear in the logs.

 On Mon, Sep 23, 2013 at 4:20 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 I'm not sure what you grepped for. Does this output mean that the
 string conf_set didn't show up in the log?

 On Mon, Sep 23, 2013 at 12:52 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 2013-09-23 19:42:22.515836 7f0b58de7700 10 jni: conf_read_file: exit ret 0
 2013-09-23 19:42:22.515893 7f0b58de7700 10 jni: ceph_mount: /
 2013-09-23 19:42:22.516643 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:22.516969 7f0b58de7700 20 client.-1 trim_cache size 0 
 max 0
 2013-09-23 19:42:22.517210 7f0b58de7700 10 jni: ceph_mount: exit ret -2
 2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret 0
 2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
 


 On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 In the log file that you showing, do you see where the keyring file is
 being set by Hadoop? You can find it by grepping for: jni: conf_set

 On Mon, Sep 23, 2013 at 12:43 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 bin/hadoop fs -ls

 Bad connection to FS. command aborted. exception:

 (no other information is thrown)

 ceph log:
 2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 0 
 max 0
 2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 What happens when you run `bin/hadoop fs -ls` ? This is entirely
 local, and a bit simpler and easier to grok.

 On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I am trying to start hadoop using bin/start-mapred.sh.
 In the HADOOP_HOME/lib, I have:
 lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
 (the first I downloaded from
 http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I copied
 from my system (after installing the ubuntu package for the ceph java
 client))

 I added to conf/hadoop-env.sh:
 export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib

 I confirmed using bin/hadoop classpath that both jar are in the 
 classpath.

 On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 How are you invoking Hadoop? Also, I forgot to ask, are you using the
 wrappers located in github.com/ceph/hadoop-common (or the jar linked
 to on http://ceph.com/docs/master/cephfs/hadoop/)?

 On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 Hi Noah,
 I enabled the debugging and got:

 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): 
 ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache 
 size 0 max 0
 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit 
 ret -2

 I have the ceph.client.admin.keyring file in /etc/ceph and I tried
 with and without the
 parameter in core-site.xml. Unfortunately without success:(

 Thanks,
 Rolando


 <property>
   <name>fs.ceph.impl</name>
   <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
 </property>

 <property>
   <name>fs.default.name</name>
 

Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Rolando Martins
Thanks  Noah!
It worked!
I managed to run the wordcount example!

Can you remove the jar that is posted online? It is misleading...

Thanks!
Rolando



On Mon, Sep 23, 2013 at 5:07 PM, Noah Watkins noah.watk...@inktank.com wrote:
 You need to stick the CephFS jar files in the hadoop lib folder.

 On Mon, Sep 23, 2013 at 2:02 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I tried to compile it, but the compilation failed.
 The error log starts with:
 compile-core-classes:
   [taskdef] 2013-09-23 20:59:25,540 INFO  mortbay.log
 (Slf4jLog.java:info(67)) - Logging to
 org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
 org.mortbay.log.Slf4jLog
 [javac] /home/ubuntu/Projects/hadoop-common/build.xml:487:
 warning: 'includeantruntime' was not set, defaulting to
 build.sysclasspath=last; set to false for repeatable builds
 [javac] Compiling 440 source files to
 /home/ubuntu/Projects/hadoop-common/build/classes
 [javac] 
 /home/ubuntu/Projects/hadoop-common/src/core/org/apache/hadoop/fs/ceph/CephFS.java:31:
 package com.ceph.fs does not exist
 [javac] import com.ceph.fs.CephStat;
 [javac]   ^

 What are the dependencies that I need to have installed?


 On Mon, Sep 23, 2013 at 4:32 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 Ok thanks. That narrows things down a lot. It seems like the keyring
 property is not being recognized, and I don't see  so I'm wondering if
 it is possible that the jar file is out of date and doesn't include
 these configuration features.

 If you clone http://github.com/ceph/hadoop-common/ and checkout the
 cephfs/branch-1.0 branch, you can run 'ant cephfs' to make a fresh jar
 file.

 On Mon, Sep 23, 2013 at 1:22 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 My bad, I associated conf_read_file with conf_set.
 No, it does not appear in the logs.

 On Mon, Sep 23, 2013 at 4:20 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 I'm not sure what you grepped for. Does this output mean that the
 string conf_set didn't show up in the log?

 On Mon, Sep 23, 2013 at 12:52 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 2013-09-23 19:42:22.515836 7f0b58de7700 10 jni: conf_read_file: exit ret  0
 2013-09-23 19:42:22.515893 7f0b58de7700 10 jni: ceph_mount: /
 2013-09-23 19:42:22.516643 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:22.516969 7f0b58de7700 20 client.-1 trim_cache size 0 
 max 0
 2013-09-23 19:42:22.517210 7f0b58de7700 10 jni: ceph_mount: exit ret -2
 2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret  0
 2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
 


 On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 In the log file that you showing, do you see where the keyring file is
 being set by Hadoop? You can find it by grepping for: jni: conf_set

 On Mon, Sep 23, 2013 at 12:43 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 bin/hadoop fs -ls

 Bad connection to FS. command aborted. exception:

 (no other information is thrown)

 ceph log:
 2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 0 
 max 0
 2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret -2

 On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 What happens when you run `bin/hadoop fs -ls` ? This is entirely
 local, and a bit simpler and easier to grok.

 On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I am trying to start hadoop using bin/start-mapred.sh.
 In the HADOOP_HOME/lib, I have:
 lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
 (the first I downloaded from
 http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I 
 copied
 from my system (after installing the ubuntu package for the ceph java
 client))

 I added to conf/hadoop-env.sh:
 export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib

 I confirmed using bin/hadoop classpath that both jar are in the 
 classpath.

 On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 How are you invoking Hadoop? Also, I forgot to ask, are you using 
 the
 wrappers located in github.com/ceph/hadoop-common (or the jar linked
 to on http://ceph.com/docs/master/cephfs/hadoop/)?

 On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 Hi Noah,
 I enabled the debugging and got:

 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): 
 ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache 
 size 0 max 0
 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit 
 ret -2

 I have the ceph.client.admin.keyring file in /etc/ceph and I tried
 with and without the
 parameter in core-site.xml. 

Re: [ceph-users] Hadoop and Ceph integration issues

2013-09-23 Thread Noah Watkins
Yeah, it should be updated automatically, so it looks like there is an
issue with that. Sorry about the headache figuring this out!

On Mon, Sep 23, 2013 at 2:25 PM, Rolando Martins
rolando.mart...@gmail.com wrote:
 Thanks  Noah!
 It worked!
 I managed to run the wordcount example!

 Can you remove the jar that is posted online? It is misleading...

 Thanks!
 Rolando



 On Mon, Sep 23, 2013 at 5:07 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 You need to stick the CephFS jar files in the hadoop lib folder.

 On Mon, Sep 23, 2013 at 2:02 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I tried to compile it, but the compilation failed.
 The error log starts with:
 compile-core-classes:
   [taskdef] 2013-09-23 20:59:25,540 INFO  mortbay.log
 (Slf4jLog.java:info(67)) - Logging to
 org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
 org.mortbay.log.Slf4jLog
 [javac] /home/ubuntu/Projects/hadoop-common/build.xml:487:
 warning: 'includeantruntime' was not set, defaulting to
 build.sysclasspath=last; set to false for repeatable builds
 [javac] Compiling 440 source files to
 /home/ubuntu/Projects/hadoop-common/build/classes
 [javac] 
 /home/ubuntu/Projects/hadoop-common/src/core/org/apache/hadoop/fs/ceph/CephFS.java:31:
 package com.ceph.fs does not exist
 [javac] import com.ceph.fs.CephStat;
 [javac]   ^

 What are the dependencies that I need to have installed?


 On Mon, Sep 23, 2013 at 4:32 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 Ok thanks. That narrows things down a lot. It seems like the keyring
 property is not being recognized, and I don't see  so I'm wondering if
 it is possible that the jar file is out of date and doesn't include
 these configuration features.

 If you clone http://github.com/ceph/hadoop-common/ and checkout the
 cephfs/branch-1.0 branch, you can run 'ant cephfs' to make a fresh jar
 file.

 On Mon, Sep 23, 2013 at 1:22 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 My bad, I associated conf_read_file with conf_set.
 No, it does not appear in the logs.

 On Mon, Sep 23, 2013 at 4:20 PM, Noah Watkins noah.watk...@inktank.com 
 wrote:
 I'm not sure what you grepped for. Does this output mean that the
 string conf_set didn't show up in the log?

 On Mon, Sep 23, 2013 at 12:52 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 2013-09-23 19:42:22.515836 7f0b58de7700 10 jni: conf_read_file: exit 
 ret 0
 2013-09-23 19:42:22.515893 7f0b58de7700 10 jni: ceph_mount: /
 2013-09-23 19:42:22.516643 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:22.516969 7f0b58de7700 20 client.-1 trim_cache size 0 
 max 0
 2013-09-23 19:42:22.517210 7f0b58de7700 10 jni: ceph_mount: exit ret -2
 2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit 
 ret 0
 2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
 


 On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 In the log file that you showing, do you see where the keyring file is
 being set by Hadoop? You can find it by grepping for: jni: conf_set

 On Mon, Sep 23, 2013 at 12:43 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 bin/hadoop fs -ls

 Bad connection to FS. command aborted. exception:

 (no other information is thrown)

 ceph log:
 2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 
 0 max 0
 2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret 
 -2

 On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 What happens when you run `bin/hadoop fs -ls` ? This is entirely
 local, and a bit simpler and easier to grok.

 On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 I am trying to start hadoop using bin/start-mapred.sh.
 In the HADOOP_HOME/lib, I have:
 lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
 (the first I downloaded from
 http://ceph.com/docs/master/cephfs/hadoop/ and the other two, I 
 copied
 from my system (after installing the ubuntu package for the ceph 
 java
 client))

 I added to conf/hadoop-env.sh:
 export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib

 I confirmed using bin/hadoop classpath that both jar are in the 
 classpath.

 On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins 
 noah.watk...@inktank.com wrote:
 How are you invoking Hadoop? Also, I forgot to ask, are you using 
 the
 wrappers located in github.com/ceph/hadoop-common (or the jar 
 linked
 to on http://ceph.com/docs/master/cephfs/hadoop/)?

 On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
 rolando.mart...@gmail.com wrote:
 Hi Noah,
 I enabled the debugging and got:

 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): 
 ERROR:
 missing keyring, cannot use cephx for authentication
 2013-09-23 18:59:34.706106 7f0b58de7700 20 

Re: [ceph-users] ceph.conf changes and restarting ceph.

2013-09-23 Thread Gary Mazzaferro
Tim

Did it work with authentication enabled?

-gary


On Mon, Sep 23, 2013 at 2:10 PM, Snider, Tim tim.sni...@netapp.com wrote:

  I modified /etc/ceph.conf for no authentication and to specify both
 private and public networks. /etc/ceph/ceph.conf was distributed to all
 nodes in the cluster

 ceph was restarted on all nodes using  service ceph -a restart.

 After that authentication is still required and no ports are open on the
 cluster facing (192.168.10.0) network.

 Details in  http://pastie.org/8349534.

 What am I missing?
 

 Thanks,

 Tim

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rgw client

2013-09-23 Thread Samuel Just
You might need to tell Cyberduck the location of your endpoint.
-Sam

On Tue, Sep 17, 2013 at 9:16 PM, lixuehui lixue...@chinacloud.com.cn wrote:
 Hi all,
 I installed rgw on a healthy ceph cluster. Although it works well with the S3
 api, can it be connected to with Cyberduck?
 I've tried with the rgw user configuration below, but it failed every time.


  { "user_id": "johndoe",
    "display_name": "John Doe",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
          { "id": "johndoe:swift",
            "permissions": "full-control"}],
    "keys": [
          { "user": "johndoe",
            "access_key": "SFQZHV7GFI8G0RZLWVAH",
            "secret_key": "LHORhTodoVznYJS74qt2l65iN7NR3p5aTa6wHK+e"}],
    "swift_keys": [
          { "user": "johndoe:swift",
            "secret_key": "1oAcl8Mzz5Jw3QzbP\/b4rKspAfoPHJ2RjUA1zdP1"}],
    "caps": [
          { "type": "usage",
            "perm": "*"},
          { "type": "user",
            "perm": "read"}]}

 
 lixuehui

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Cannot start 5/20 OSDs

2013-09-23 Thread Samuel Just
Can you restart those osds with

debug osd = 20
debug filestore = 20
debug ms = 1

in the [osd] section of the ceph.conf file on the respective machines
and upload the logs?  Sounds like a bug.
-Sam
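 
 i.e. something like this in ceph.conf on the affected nodes before restarting
 the osds (the settings are exactly the ones listed above, shown here only to
 illustrate placement in the [osd] section):
 
  [osd]
      debug osd = 20
      debug filestore = 20
      debug ms = 1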

On Tue, Sep 17, 2013 at 2:05 PM, Matt Thompson watering...@gmail.com wrote:
 Hi All,

 I set up a new cluster today with 20 OSDs spanning 4 machines (journals not
 stored on separate disks), and a single MON running on a separate server
 (I understand a single MON is not ideal for production environments).

 The cluster had the default pools along w/ the ones created by radosgw.
 There was next to no user data on the cluster with the exception of a few
 test files uploaded via swift client.

 I ran the following on one node to increase replica size from 2 to 3:

 for x in $(rados lspools); do ceph osd pool set $x size 3; done

 After doing this, I noticed that 5 OSDs were down and repeatedly restarting
 them using the following brings them back online momentarily but then they
 go down / out again:

 start ceph-osd id=X

 Looking across the affected nodes, I'm seeing errors like this in the
 respective osd logs:

 osd/ReplicatedPG.cc: 5405: FAILED assert(ssc)

  ceph version 0.67.3 (408cd61584c72c0d97b774b3d8f95c6b1b06341a)
  1: (ReplicatedPG::prep_push_to_replica(ObjectContext*, hobject_t const&, int, int, PushOp*)+0x8ea) [0x5fd50a]
  2: (ReplicatedPG::prep_object_replica_pushes(hobject_t const&, eversion_t, int, std::map<int, std::vector<PushOp, std::allocator<PushOp> >, std::less<int>, std::allocator<std::pair<int const, std::vector<PushOp, std::allocator<PushOp> > > > >*)+0x722) [0x5fe552]
  3: (ReplicatedPG::recover_replicas(int, ThreadPool::TPHandle&)+0x657) [0x5ff487]
  4: (ReplicatedPG::start_recovery_ops(int, PG::RecoveryCtx*, ThreadPool::TPHandle&)+0x736) [0x61d9c6]
  5: (OSD::do_recovery(PG*, ThreadPool::TPHandle&)+0x1b8) [0x6863e8]
  6: (OSD::RecoveryWQ::_process(PG*, ThreadPool::TPHandle&)+0x11) [0x6c5541]
  7: (ThreadPool::worker(ThreadPool::WorkThread*)+0x4e6) [0x8b8df6]
  8: (ThreadPool::WorkThread::entry()+0x10) [0x8bac00]
  9: (()+0x7e9a) [0x7f610c09fe9a]
  10: (clone()+0x6d) [0x7f610a91dccd]
  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to
 interpret this.

 Have I done something foolish, or am I hitting a legitimate issue here?

 On a side note, my cluster is now in the following state:

 2013-09-17 20:47:13.651250 mon.0 [INF] pgmap v1536: 248 pgs: 243
 active+clean, 2 active+recovery_wait, 3 active+recovering; 5497 bytes data,
 866 MB used, 999 GB / 1000 GB avail; 21/255 degraded (8.235%); 7/85 unfound
 (8.235%)

 According to ceph health detail, the unfound objects are in the .users.uid and
 .rgw radosgw pools; I suppose I can remove those pools and have radosgw
 recreate them?  If this is not recoverable, is it advisable to just format
 the cluster and start again?

 Thanks in advance for the help.

 Regards,
 Matt

 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] how to install radosgw from source code?

2013-09-23 Thread yy-nm

On 2013/9/24 1:32, Yehuda Sadeh wrote:

On Mon, Sep 23, 2013 at 1:18 AM, yy-nm yxdyours...@gmail.com wrote:

hay, folks:
 i use ceph source code to install a ceph cluster. ceph version 0.61.8
(a6fdcca3bddbc9f177e4e2bf0d9cdd85006b028b)
After finish install, i can't find radosgw command?
i use below paramter in ceph's installation:

install package:
#apt-get install automake autoconf gcc g++ libboost-dev libedit-dev
libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers
libcrypto++-dev libaio-dev libgoogle-perftools-dev libkeyutils-dev uuid-dev
libatomic-ops-dev libboost-program-options-dev libboost-thread-dev
libexpat1-dev libleveldb-dev libsnappy-dev

configure:
#./autogen.sh
#./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var/lib/ceph

make:
#make
#make install

is anything wrong with above ??

You're missing --with-radosgw in your configure.



thinks!


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


thanks a lot!
i get it!
radosgw is installed
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph.conf changes and restarting ceph.

2013-09-23 Thread John Wilkins
I will update the Cephx docs. The usage in those docs for restarting
is for Debian/Ubuntu deployed with mkcephfs.  If you are using
Dumpling and deployed with ceph-deploy, you will need to use Upstart.
See 
http://ceph.com/docs/master/rados/operations/operating/#running-ceph-with-upstart
for details. If you are using Ceph on RHEL, CentOS, etc., use
sysvinit.
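 
 A rough sketch of the difference (daemon ids are placeholders):
 
  Upstart (Ubuntu, deployed with ceph-deploy):
    sudo restart ceph-all
    sudo restart ceph-mon-all
    sudo restart ceph-osd-all
    sudo restart ceph-osd id=0
 
  sysvinit (RHEL/CentOS, or mkcephfs-style deployments):
    sudo /etc/init.d/ceph restart
 
 With ceph-deploy there are usually no per-daemon [mon.X]/[osd.N] sections in
 ceph.conf, which is presumably why "service ceph -a restart" had no visible
 effect here.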

On Mon, Sep 23, 2013 at 3:21 PM, Gary Mazzaferro ga...@oedata.com wrote:
 Tim

 Did it work with authentication enabled  ?

 -gary


 On Mon, Sep 23, 2013 at 2:10 PM, Snider, Tim tim.sni...@netapp.com wrote:

 I modified /etc/ceph.conf for no authentication and to specify both
 private and public networks. /etc/ceph/ceph.conf was distributed to all
 nodes in the cluster

 ceph was restarted on all nodes using  service ceph -a restart.

 After that authentication is still required and no ports are open on the
 cluster facing (192.168.10.0) network.

 Details in  http://pastie.org/8349534.

 What am I missing?



 Thanks,

 Tim


 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com