Hi,
I think I've solved the issue by looking at the source code in the ceph
and ganesha trees. Indeed, the error message comes from ceph, not
ganesha, and the init failure is not reported verbosely at all from there.
The problem was in my ganesha.conf: I had to change the RGW section to
better match my ceph configuration:

RGW {
     ceph_conf = "/etc/ceph/ceph.conf";
     name = "client.rgw.node";
     init_args = "--keyring=/var/lib/ceph/radosgw/node/keyring --log-file=/var/log/radosgw/rgw.log --debug-rgw=0/5";
}
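
For reference, the name and init_args above have to line up with the
corresponding client section in ceph.conf. Mine looks roughly like this
(a sketch; the client.rgw.node name and the paths are specific to my
setup):

# matches the name and keyring passed to librgw above (my setup)
[client.rgw.node]
     keyring = /var/lib/ceph/radosgw/node/keyring
     log file = /var/log/radosgw/rgw.log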

Once that was done, it works!
Well, it works in the sense that I can mount, create directories, touch
files, and so on. But when I try to, e.g., dd a test file, I always get
the following error:

$> dd if=/dev/zero of=/rgw/testdir/test bs=1024k count=100
dd: closing output file `/rgw/testdir/test': Input/output error
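
Since the error only shows up when dd closes the file, I suppose it is
the final flush of buffered writes that fails. I plan to retry with
write-through to see whether the error moves to the write() calls
themselves (standard dd flags, nothing RGW-specific):

# bypass the page cache so errors surface during write() rather than at close()
$> dd if=/dev/zero of=/rgw/testdir/test bs=1024k count=100 oflag=direct
# or flush explicitly before the file is closed
$> dd if=/dev/zero of=/rgw/testdir/test bs=1024k count=100 conv=fsync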

Has anyone seen this error before?
Thanks,

     Alessandro

On 16/01/17 16:31, Daniel Gryniewicz wrote:
> The lack of verbosity is not Ganesha's fault; it only gets the single
> error code back from Ceph.
>
> Try turning up all your client-related logging in your ceph.conf, and
> check its logging.
>
> Daniel
>
> On 01/14/2017 03:02 PM, Alessandro De Salvo wrote:
>> Hi Daniel,
>>
>> indeed, this is the root cause, but I do not understand what's wrong
>> here or what the real cause of the failure is. Since I was suspicious
>> about the ceph.conf setup, I've already checked that it all works, and
>> indeed it does. I can issue ceph -s or rados df without any problem.
>>
>> Here I give you a couple of examples:
>>
>>
>> $ ceph -s
>>
>>       cluster aac2c2c4-5953-44d7-b90c-9922a8ccd77a
>>        health HEALTH_OK
>>        monmap e4: 3 mons at
>> {mon1=<ipmon1>:6789/0,mon2=<ipmon2>:6789/0,mon3=<ipmon3>:6789/0}
>>               election epoch 124, quorum 0,1,2 mon3,mon2,mon1
>>         fsmap e42: 1/1/1 up {0=mds1=up:active}, 1 up:standby
>>           mgr no daemons active
>>        osdmap e16477: 54 osds: 52 up, 52 in
>>               flags sortbitwise,require_jewel_osds,require_kraken_osds
>>         pgmap v656338: 800 pgs, 16 pools, 4136 MB data, 1326 objects
>>               48076 MB used, 71839 GB / 71886 GB avail
>>                    800 active+clean
>>
>>
>> $ rados df
>>
>> POOL_NAME                  USED  OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS    RD WR_OPS    WR
>> .rgw.root                  1681        4      0     12                  0       0        0    156  121k      4  5120
>> cephfs_data                   0        0      0      0                  0       0        0      0     0      0     0
>> cephfs_metadata            2148       20      0     60                  0       0        0     23 24576     41  7168
>> default.rgw.buckets.data  4136M     1092      0   3276                  0       0        0    150  124k   9745 4136M
>> default.rgw.buckets.index     0        2      0      6                  0       0        0    796 1264k    442     0
>> default.rgw.buckets.non-ec    0        0      0      0                  0       0        0    161  161k    128     0
>> default.rgw.control           0        8      0     24                  0       0        0      0     0      0     0
>> default.rgw.data.root      1228        4      0     12                  0       0        0     27 24576     61 15360
>> default.rgw.gc                0       32      0     96                  0       0        0   6620 6588k   4416     0
>> default.rgw.lc                0       32      0     96                  0       0        0    894  862k    448     0
>> default.rgw.log               0      128      0    384                  0       0        0 151070  147M 100778     0
>> default.rgw.users.keys       11        1      0      3                  0       0        0     21 14336      1  1024
>> default.rgw.users.uid       358        2      0      6                  0       0        0   4507 4503k   4385  1024
>> kraken-test                   0        1      0      3                  0       0        0 652378  199G 371644  507G
>> rbd                           0        0      0      0                  0       0        0   1309 5196M   2518 5000M
>> scbench                       0        0      0      0                  0       0        0   1154 4608M   3458 4608M
>>
>>
>> Any other hints? Of course, more verbosity from the rados init would be
>> very helpful, but even with FULL_DEBUG I always get the same messages
>> and nothing more.
>> Thanks,
>>
>>       Alessandro
>>
>> On 13/01/17 19:37, Daniel Gryniewicz wrote:
>>> Hi, Alessandro.
>>>
>>> This error (-5) is caused by the failure to initialize the RADOS client
>>> in librados.  Can you perform ceph operations (say, ceph -s) from that
>>> same host?  It's likely a problem in your ceph.conf, such as wrong or
>>> unreachable MON addresses.
>>>
>>> Daniel
>>>
>>> On 01/13/2017 12:39 PM, Alessandro De Salvo wrote:
>>>> Hi,
>>>> I'm trying to use the RGW FSAL on CentOS 7 with ceph kraken v11.1.1 and
>>>> ganesha 2.4.1-2. I have rebuilt the RPMs from the Fedora rawhide
>>>> version, which now includes the RGW FSAL. When trying to run the
>>>> ganesha daemon I get the following error:
>>>>
>>>> 13/01/2017 17:21:15 : epoch 58790c88 : node1 : ganesha.nfsd-1[main]
>>>> init :FSAL :DEBUG :RGW module registering.
>>>> 13/01/2017 17:21:15 : epoch 58790c88 : node1 : ganesha.nfsd-1[main]
>>>> init_config :FSAL :DEBUG :RGW module setup.
>>>> 13/01/2017 17:21:15 : epoch 58790c88 : node1 : ganesha.nfsd-1[main]
>>>> create_export :FSAL :CRIT :RGW module: librgw init failed (-5)
>>>> 13/01/2017 17:21:15 : epoch 58790c88 : node1 : ganesha.nfsd-1[main]
>>>> fsal_put :FSAL :INFO :FSAL RGW now unused
>>>>
>>>>
>>>> The daemon is run in a privileged docker container with
>>>>
>>>> /usr/bin/ganesha.nfsd -F -N NIV_DEBUG -L /var/log/ganesha.log
>>>> -f /etc/ganesha/ganesha.conf
>>>>
>>>> The ceph.conf and keyrings are properly installed on the machine and in
>>>> the container, and in fact I can access both the ceph cluster and the
>>>> RGW instance correctly.
>>>> The ganesha configuration is the following:
>>>>
>>>> EXPORT
>>>> {
>>>>    Export_ID = 1;
>>>>
>>>>    Path = "/atlas";
>>>>
>>>>    Pseudo = "/atlas";
>>>>
>>>>    Access_Type = RW;
>>>>
>>>>    SecType = "sys";
>>>>
>>>>    FSAL {
>>>>            Name = RGW;
>>>>            User_Id = "testuser";
>>>>            Access_Key_Id = "testkey";
>>>>            Secret_Access_Key = "testsecret";
>>>>    }
>>>> }
>>>>
>>>> RGW {
>>>>       ceph_conf = "/etc/ceph/ceph.conf";
>>>>       name = "auth.node2.roma1.infn.it";
>>>>       cluster = "ceph";
>>>>       init_args = "-d --debug-rgw=16";
>>>> }
>>>>
>>>>
>>>> Does anyone know how to increase the verbosity of ganesha in order to
>>>> get more info on this error? "init failed" alone is not very helpful.
>>>> Any hints?
>>>> Thanks,
>>>>
>>>>    Alessandro
>>>>
>>>>

