Just two messages in dmesg:
client0 fsid <hex number>
mon0 <ip-addr>:6789 session established
Running ceph -s -m ceph-1 -k ceph.client.admin.keyring returns:
health HEALTH_OK
monmap e1: 1 mons at {ceph-1=155.53.104.100:6789/0}, election epoch 2, quorum 0 ceph-1
osdmap e15: 2 osds: 2 up, 2 in
pgmap v1134: 192 pgs: 192 active+clean; 197 MB data, 14166 MB used, 1242 GB / 1323 GB avail
mdsmap e527: 1/1/1 up {0=ceph-1=up:active}
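In case it helps anyone hitting the same "bad option at secretfile=" error: secretfile= is understood only by the userspace mount.ceph helper (ceph-fs-common on Debian/Ubuntu), while the kernel module itself accepts the key inline via secret=. A sketch of both forms — the monitor address, file names, and package names below are examples, not taken from my cluster:

```shell
# Debian/Ubuntu: install the mount.ceph helper (ceph-fuse is only
# needed for the FUSE client):
#   sudo apt-get install ceph-fs-common ceph-fuse

# Example values -- replace with your own monitor and secret file.
MON=192.168.0.1:6789
MOUNTPOINT=/mnt/mycephfs
SECRET_FILE=admin.secret

# With mount.ceph installed, secretfile= is translated in userspace
# as in the docs:
#   sudo mount -t ceph "$MON:/" "$MOUNTPOINT" -o name=admin,secretfile="$SECRET_FILE"

# Without the helper, the kernel ceph module still parses secret=
# itself, so the base64 key can be passed inline. The echo below
# just prints the command; drop it to actually mount.
KEY=$(cat "$SECRET_FILE" 2>/dev/null || echo '<paste-key-here>')
echo "sudo mount -t ceph $MON:/ $MOUNTPOINT -o name=admin,secret=$KEY"
```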
On Tue, Jul 16, 2013 at 10:37 AM, Gregory Farnum <[email protected]> wrote:
> On Tue, Jul 16, 2013 at 10:29 AM, Hariharan Thantry <[email protected]> wrote:
> > Thanks, now I get a different error:
> >
> > mount error 12 = cannot allocate memory.
> >
> > My ceph-client and the storage cluster are at the same version, though I'm
> > running the client inside a VM (VirtualBox) with 1536 MB of RAM and
> > an 8 GB hard disk. Shouldn't this be sufficient?
> >
> > P.S. Maybe you could update the page here:
> > http://ceph.com/docs/master/start/quick-cephfs/#create-a-secret-file, to
> > mention installing ceph-fs-common and ceph-fuse (for FUSE clients).
>
> Is there any output in dmesg? And what's the output of "ceph -s"?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> >
> > Thanks,
> > Hari
> >
> >
> >
> > On Tue, Jul 16, 2013 at 10:07 AM, Gregory Farnum <[email protected]> wrote:
> >>
> >> You don't have mount.ceph installed, and there's some translation that
> >> needs to be done in userspace before the kernel sees the mount, which
> >> isn't happening. On Debian it's in the ceph-fs-common package.
> >> -Greg
> >> Software Engineer #42 @ http://inktank.com | http://ceph.com
> >>
> >>
> >> On Tue, Jul 16, 2013 at 10:02 AM, Hariharan Thantry <[email protected]> wrote:
> >> > While trying to execute these steps in the Ceph users guide, I get
> >> > libceph errors:
> >> >
> >> > no secret set (for auth_x protocol)
> >> >
> >> > error -22 on auth protocol 2 init
> >> >
> >> > Also, when providing the authentication keys (Step #3 below), I get the
> >> > following error:
> >> >
> >> > bad option at secretfile=admin.secret
> >> >
> >> > Any ideas where I could be going wrong? The health check reports OK.
> >> >
> >> >
> >> > Mount Ceph FS as a kernel driver.
> >> >
> >> > sudo mkdir /mnt/mycephfs
> >> > sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs
> >> >
> >> > The Ceph Storage Cluster uses authentication by default. Specify a user
> >> > name and the secretfile you created in the Create a Secret File section.
> >> > For example:
> >> >
> >> > sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o
> >> > name=admin,secretfile=admin.secret
> >> >
> >> >
> >
> >
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com