Re: [ceph-users] CephFS
Example: each server mounts it like this (the addresses all point to the
monitor servers):

    /bin/mount -t ceph -o name=admin,secret= 10.10.10.138,10.10.10.252,10.10.10.103:/ /media/network-storage

On Tue, Jan 17, 2017 at 12:27 PM, Alex Evonosky <alex.evono...@gmail.com> wrote:

> Yes, they are. I created one volume shared by all the webservers, so it is
> essentially acting like a NAS using NFS. All servers see the same data.
>
> On Tue, Jan 17, 2017 at 12:26 PM, Kingsley Tart <c...@dogwind.com> wrote:
>
>> Hi,
>>
>> Are these all sharing the same volume?
>>
>> Cheers,
>> Kingsley.
>>
>> On Tue, 2017-01-17 at 12:19 -0500, Alex Evonosky wrote:
>>> For what it's worth, I have been using CephFS shared between six
>>> servers (all kernel mounted) with no issues. Running three monitors
>>> and two metadata servers (one as a backup). This has been running great.
>>>
>>> On Tue, Jan 17, 2017 at 12:14 PM, Kingsley Tart <c...@dogwind.com> wrote:
>>>> On Tue, 2017-01-17 at 13:49 +0100, Loris Cuoghi wrote:
>>>>> I think you're confusing the CephFS kernel client and the RBD kernel
>>>>> client.
>>>>>
>>>>> The Linux kernel contains both:
>>>>>
>>>>> * a module ceph.ko for accessing a CephFS
>>>>> * a module rbd.ko for accessing an RBD (RADOS Block Device)
>>>>>
>>>>> You can mount a CephFS using the kernel driver [0], or using a
>>>>> userspace helper for FUSE [1].
>>>>>
>>>>> [0] http://docs.ceph.com/docs/master/cephfs/kernel/
>>>>> [1] http://docs.ceph.com/docs/master/cephfs/fuse/
>>>>
>>>> Hi,
>>>>
>>>> Thanks for your reply.
>>>>
>>>> I specifically didn't want a block device because I would like to mount
>>>> the same volume on multiple machines to share the files, like you would
>>>> with NFS. This is why I thought ceph-fuse would be what I needed.
>>>>
>>>> --
>>>> Cheers,
>>>> Kingsley.

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
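A note on that mount line: passing the key with `secret=` on the command line exposes it in the process list. A hedged sketch of the `secretfile=` alternative (the key below is a fake placeholder, and the file would normally live at /etc/ceph/admin.secret):

```shell
# Sketch only: a made-up key is written to a local file for illustration.
# The real key comes from 'ceph auth get-key client.admin'.
SECRET_FILE=./admin.secret
printf 'AQDfakeExampleKey==' > "$SECRET_FILE"
chmod 600 "$SECRET_FILE"              # keep the key readable by root only
stat -c '%a' "$SECRET_FILE"           # prints: 600
# /bin/mount -t ceph -o name=admin,secretfile=$SECRET_FILE \
#     10.10.10.138,10.10.10.252,10.10.10.103:/ /media/network-storage
```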
Re: [ceph-users] CephFS
For what it's worth, I have been using CephFS shared between six servers (all
kernel mounted) with no issues. Running three monitors and two metadata
servers (one as a backup). This has been running great.

On Tue, Jan 17, 2017 at 12:14 PM, Kingsley Tart wrote:

> On Tue, 2017-01-17 at 13:49 +0100, Loris Cuoghi wrote:
>> I think you're confusing the CephFS kernel client and the RBD kernel client.
>>
>> The Linux kernel contains both:
>>
>> * a module ceph.ko for accessing a CephFS
>> * a module rbd.ko for accessing an RBD (RADOS Block Device)
>>
>> You can mount a CephFS using the kernel driver [0], or using a
>> userspace helper for FUSE [1].
>>
>> [0] http://docs.ceph.com/docs/master/cephfs/kernel/
>> [1] http://docs.ceph.com/docs/master/cephfs/fuse/
>
> Hi,
>
> Thanks for your reply.
>
> I specifically didn't want a block device because I would like to mount
> the same volume on multiple machines to share the files, like you would
> with NFS. This is why I thought ceph-fuse would be what I needed.
>
> --
> Cheers,
> Kingsley.
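The two CephFS access paths named in [0] and [1] above, side by side as a sketch. The monitor names, mountpoint, and fstab entry here are illustrative assumptions, not taken from the posters' clusters:

```shell
# Kernel client (ceph.ko) and FUSE client both give a shared POSIX
# filesystem, unlike rbd.ko which gives each client its own block device.
#
#   mount -t ceph mon1,mon2,mon3:/ /mnt/cephfs \
#       -o name=admin,secretfile=/etc/ceph/admin.secret
#   ceph-fuse -m mon1:6789 /mnt/cephfs
#
# A persistent kernel mount would be an /etc/fstab line like this one
# (written to a local example file so the sketch is self-contained):
cat > fstab.example <<'EOF'
mon1,mon2,mon3:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime 0 2
EOF
grep -c ' ceph ' fstab.example        # prints: 1
```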
Re: [ceph-users] Ceph Monitor cephx issues
Since this was a test lab, I totally purged the whole cluster and re-deployed.
Working well now, thank you.

Alex F. Evonosky
<https://twitter.com/alexevon> <https://www.linkedin.com/in/alexevonosky>

On Sat, Jan 7, 2017 at 9:14 PM, Alex Evonosky <alex.evono...@gmail.com> wrote:

> Thank you.
>
> After sending the post, I totally removed the mon and issued the build
> with ceph-deploy. In the logs now:
>
> 2017-01-07 21:12:38.113534 7fa9613fd700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
> 2017-01-07 21:12:38.113546 7fa9613fd700 0 -- 10.10.10.138:6789/0 >> 10.10.10.103:6789/0 pipe(0x55feb2e9 sd=12 :50266 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0a80).failed verifying authorize reply
> 2017-01-07 21:12:38.114529 7fa95787b700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
> 2017-01-07 21:12:38.114567 7fa95787b700 0 -- 10.10.10.138:6789/0 >> 10.10.10.252:6789/0 pipe(0x55feb2e91400 sd=11 :38690 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0c00).failed verifying authorize reply
> 2017-01-07 21:12:40.114522 7fa9613fd700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
> 2017-01-07 21:12:40.114542 7fa9613fd700 0 -- 10.10.10.138:6789/0 >> 10.10.10.103:6789/0 pipe(0x55feb2e9 sd=11 :50278 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0a80).failed verifying authorize reply
> 2017-01-07 21:12:40.115706 7fa95787b700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
> 2017-01-07 21:12:40.115721 7fa95787b700 0 -- 10.10.10.138:6789/0 >> 10.10.10.252:6789/0 pipe(0x55feb2e91400 sd=12 :38702 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0c00).failed verifying authorize reply
> 2017-01-07 21:12:41.621916 7fa956f79700 0 cephx: verify_authorizer could not decrypt ticket info: error: NSS AES final round failed: -8190
> 2017-01-07 21:12:41.621929 7fa956f79700 0 mon.alex-desktop@1(probing) e0 ms_verify_authorizer bad authorizer from mon 10.10.10.103:6789/0
> 2017-01-07 21:12:41.621944 7fa956f79700 0 -- 10.10.10.138:6789/0 >> 10.10.10.103:6789/0 pipe(0x55feb2fb5400 sd=21 :6789 s=0 pgs=0 cs=0 l=0 c=0x55feb2ca1500).accept: got bad authorizer
>
> $ sudo ceph -s
>     cluster f5aba719-4856-4ae2-a5d4-f9ff0f614b60
>      health HEALTH_WARN
>             512 pgs degraded
>             348 pgs stale
>             512 pgs stuck unclean
>             512 pgs undersized
>             6 requests are blocked > 32 sec
>             recovery 25013/50026 objects degraded (50.000%)
>             mds cluster is degraded
>             1 mons down, quorum 0,2 alpha,toshiba-laptop
>      monmap e17: 3 mons at {alex-desktop=10.10.10.138:6789/0,alpha=10.10.10.103:6789/0,toshiba-laptop=10.10.10.252:6789/0}
>             election epoch 806, quorum 0,2 alpha,toshiba-laptop
>       fsmap e201858: 1/1/1 up {0=1=up:replay}
>      osdmap e200229: 3 osds: 2 up, 2 in; 85 remapped pgs
>             flags sortbitwise
>       pgmap v4088774: 512 pgs, 4 pools, 50883 MB data, 25013 objects
>             59662 MB used, 476 GB / 563 GB avail
>             25013/50026 objects degraded (50.000%)
>                  348 stale+active+undersized+degraded
>                  164 active+undersized+degraded
>
> root@alex-desktop:/var/lib/ceph/mon/ceph-alex-desktop# ls -ls
> total 8
> 0 -rw-r--r-- 1 ceph ceph    0 Jan  7 21:11 done
> 4 -rw------- 1 ceph ceph   77 Jan  7 21:05 keyring
> 4 drwxr-xr-x 2 ceph ceph 4096 Jan  7 21:10 store.db
> 0 -rw-r--r-- 1 ceph ceph    0 Jan  7 21:05 systemd
>
> Very odd... never seen this issue on the other monitor deployments...
>
> Alex F. Evonosky
> <https://twitter.com/alexevon> <https://www.linkedin.com/in/alexevonosky>
>
> On Sat, Jan 7, 2017 at 8:54 PM, Shinobu Kinjo <ski...@redhat.com> wrote:
>
>> Using ``ceph-deploy`` will save your life:
>>
>> # https://github.com/ceph/ceph/blob/master/doc/start/quick-ceph-deploy.rst
>> * Please look at: Adding Monitors
>>
>> If you are using CentOS or similar, the latest package is available here:
>>
>> # http://download.ceph.com/rpm-jewel/el7/noarch/ceph-deploy-1.5.37-0.noarch.rpm
>>
>> Regards,
>>
>> On Sun, Jan 8, 2017 at 9:53 AM, Alex Evonosky <alex.evono...@gmail.com> wrote:
>>
>>> Thank you for the reply!
>>>
>>> I followed this article:
>>>
>>> http://docs.ceph.com/docs/jewel/rados/operations/add-or-rm-mons/
>>>
>>> Under the section: ADDING A MONITOR (MANUAL)
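The "purge and re-deploy" path the poster took maps roughly onto these ceph-deploy steps (hostnames are the ones appearing in this thread; everything is commented out because these commands irreversibly destroy cluster data):

```shell
# DESTRUCTIVE - wipes Ceph packages, data, and keys on the named hosts.
# Verify hostnames before uncommenting anything.
# ceph-deploy purge alex-desktop alpha toshiba-laptop
# ceph-deploy purgedata alex-desktop alpha toshiba-laptop
# ceph-deploy forgetkeys
# ceph-deploy new alpha toshiba-laptop alex-desktop   # fresh cluster definition
# ceph-deploy mon create-initial                      # deploy mons, gather keys
```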
Re: [ceph-users] Ceph Monitor cephx issues
Thank you.

After sending the post, I totally removed the mon and issued the build with
ceph-deploy. In the logs now:

2017-01-07 21:12:38.113534 7fa9613fd700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2017-01-07 21:12:38.113546 7fa9613fd700 0 -- 10.10.10.138:6789/0 >> 10.10.10.103:6789/0 pipe(0x55feb2e9 sd=12 :50266 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0a80).failed verifying authorize reply
2017-01-07 21:12:38.114529 7fa95787b700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2017-01-07 21:12:38.114567 7fa95787b700 0 -- 10.10.10.138:6789/0 >> 10.10.10.252:6789/0 pipe(0x55feb2e91400 sd=11 :38690 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0c00).failed verifying authorize reply
2017-01-07 21:12:40.114522 7fa9613fd700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2017-01-07 21:12:40.114542 7fa9613fd700 0 -- 10.10.10.138:6789/0 >> 10.10.10.103:6789/0 pipe(0x55feb2e9 sd=11 :50278 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0a80).failed verifying authorize reply
2017-01-07 21:12:40.115706 7fa95787b700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2017-01-07 21:12:40.115721 7fa95787b700 0 -- 10.10.10.138:6789/0 >> 10.10.10.252:6789/0 pipe(0x55feb2e91400 sd=12 :38702 s=1 pgs=0 cs=0 l=0 c=0x55feb2ca0c00).failed verifying authorize reply
2017-01-07 21:12:41.621916 7fa956f79700 0 cephx: verify_authorizer could not decrypt ticket info: error: NSS AES final round failed: -8190
2017-01-07 21:12:41.621929 7fa956f79700 0 mon.alex-desktop@1(probing) e0 ms_verify_authorizer bad authorizer from mon 10.10.10.103:6789/0
2017-01-07 21:12:41.621944 7fa956f79700 0 -- 10.10.10.138:6789/0 >> 10.10.10.103:6789/0 pipe(0x55feb2fb5400 sd=21 :6789 s=0 pgs=0 cs=0 l=0 c=0x55feb2ca1500).accept: got bad authorizer

$ sudo ceph -s
    cluster f5aba719-4856-4ae2-a5d4-f9ff0f614b60
     health HEALTH_WARN
            512 pgs degraded
            348 pgs stale
            512 pgs stuck unclean
            512 pgs undersized
            6 requests are blocked > 32 sec
            recovery 25013/50026 objects degraded (50.000%)
            mds cluster is degraded
            1 mons down, quorum 0,2 alpha,toshiba-laptop
     monmap e17: 3 mons at {alex-desktop=10.10.10.138:6789/0,alpha=10.10.10.103:6789/0,toshiba-laptop=10.10.10.252:6789/0}
            election epoch 806, quorum 0,2 alpha,toshiba-laptop
      fsmap e201858: 1/1/1 up {0=1=up:replay}
     osdmap e200229: 3 osds: 2 up, 2 in; 85 remapped pgs
            flags sortbitwise
      pgmap v4088774: 512 pgs, 4 pools, 50883 MB data, 25013 objects
            59662 MB used, 476 GB / 563 GB avail
            25013/50026 objects degraded (50.000%)
                 348 stale+active+undersized+degraded
                 164 active+undersized+degraded

root@alex-desktop:/var/lib/ceph/mon/ceph-alex-desktop# ls -ls
total 8
0 -rw-r--r-- 1 ceph ceph    0 Jan  7 21:11 done
4 -rw------- 1 ceph ceph   77 Jan  7 21:05 keyring
4 drwxr-xr-x 2 ceph ceph 4096 Jan  7 21:10 store.db
0 -rw-r--r-- 1 ceph ceph    0 Jan  7 21:05 systemd

Very odd... never seen this issue on the other monitor deployments...

Alex F. Evonosky
<https://twitter.com/alexevon> <https://www.linkedin.com/in/alexevonosky>

On Sat, Jan 7, 2017 at 8:54 PM, Shinobu Kinjo <ski...@redhat.com> wrote:

> Using ``ceph-deploy`` will save your life:
>
> # https://github.com/ceph/ceph/blob/master/doc/start/quick-ceph-deploy.rst
> * Please look at: Adding Monitors
>
> If you are using CentOS or similar, the latest package is available here:
>
> # http://download.ceph.com/rpm-jewel/el7/noarch/ceph-deploy-1.5.37-0.noarch.rpm
>
> Regards,
>
> On Sun, Jan 8, 2017 at 9:53 AM, Alex Evonosky <alex.evono...@gmail.com> wrote:
>
>> Thank you for the reply!
>>
>> I followed this article:
>>
>> http://docs.ceph.com/docs/jewel/rados/operations/add-or-rm-mons/
>>
>> Under the section: ADDING A MONITOR (MANUAL)
>>
>> Alex F. Evonosky
>> <https://twitter.com/alexevon> <https://www.linkedin.com/in/alexevonosky>
>>
>> On Sat, Jan 7, 2017 at 6:36 PM, Shinobu Kinjo <ski...@redhat.com> wrote:
>>
>>> How did you add a third MON?
>>>
>>> Regards,
>>>
>>> On Sun, Jan 8, 2017 at 7:01 AM, Alex Evonosky <alex.evono...@gmail.com> wrote:
>>>> Anyone see this before?
>>>>
>>>> 2017-01-07 16:55:11.406047 7f095b379700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
>>>> 2017-01-07 16:55:11.406053 7f095b379700 0 -- 10.10.10.138:6789/0 >>
Re: [ceph-users] Ceph Monitor cephx issues
Thank you for the reply!

I followed this article:

http://docs.ceph.com/docs/jewel/rados/operations/add-or-rm-mons/

Under the section: ADDING A MONITOR (MANUAL)

Alex F. Evonosky
<https://twitter.com/alexevon> <https://www.linkedin.com/in/alexevonosky>

On Sat, Jan 7, 2017 at 6:36 PM, Shinobu Kinjo <ski...@redhat.com> wrote:

> How did you add a third MON?
>
> Regards,
>
> On Sun, Jan 8, 2017 at 7:01 AM, Alex Evonosky <alex.evono...@gmail.com> wrote:
>> Anyone see this before?
>>
>> 2017-01-07 16:55:11.406047 7f095b379700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
>> 2017-01-07 16:55:11.406053 7f095b379700 0 -- 10.10.10.138:6789/0 >> 10.10.10.252:6789/0 pipe(0x55cf8d028000 sd=11 :47548 s=1 pgs=0 cs=0 l=0 c=0x55cf8ce28f00).failed verifying authorize reply
>>
>> Two monitors are up just fine; just trying to add a third, and a quorum
>> cannot be met. NTP is running and no iptables running at all on the
>> internal cluster.
>>
>> Thank you.
>> -Alex
[ceph-users] Ceph Monitor cephx issues
Anyone see this before?

2017-01-07 16:55:11.406047 7f095b379700 0 cephx: verify_reply couldn't decrypt with error: error decoding block for decryption
2017-01-07 16:55:11.406053 7f095b379700 0 -- 10.10.10.138:6789/0 >> 10.10.10.252:6789/0 pipe(0x55cf8d028000 sd=11 :47548 s=1 pgs=0 cs=0 l=0 c=0x55cf8ce28f00).failed verifying authorize reply

Two monitors are up just fine; just trying to add a third, and a quorum
cannot be met. NTP is running and no iptables running at all on the internal
cluster.

Thank you.
-Alex
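With NTP and firewalling ruled out, "verify_reply couldn't decrypt" usually means the joining monitor's keyring does not match the `mon.` key held by the existing quorum. A sketch of that comparison using two made-up local keyring files (standing in for /var/lib/ceph/mon/&lt;name&gt;/keyring on two monitors; the keys are fake placeholders):

```shell
# Every monitor must hold the identical [mon.] key. Here the two example
# files deliberately differ, so the check reports a mismatch.
cat > keyring.a <<'EOF'
[mon.]
    key = AQBexampleKeyA==
    caps mon = "allow *"
EOF
cat > keyring.b <<'EOF'
[mon.]
    key = AQBexampleKeyB==
    caps mon = "allow *"
EOF
key_a=$(awk -F' = ' '/key = / {print $2}' keyring.a)
key_b=$(awk -F' = ' '/key = / {print $2}' keyring.b)
if [ "$key_a" = "$key_b" ]; then
    echo "mon keys match"
else
    echo "mon keys differ - redistribute the keyring"   # prints this here
fi
```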
[ceph-users] new user error
Hello group--

I have been running Ceph 10.2.3 for a while now without any issues. This
evening my admin node (which is also an OSD and monitor) crashed. I checked
my other OSD servers and the data seems to still be there.

Is there an easy way to bring the admin node back into the cluster? I am
trying to bring this admin node/OSD back up without losing any data...

Thank you!
-Alex
Re: [ceph-users] 10.2.4 Jewel released
Thank you, sir. Ubuntu here as well.

On Fri, Dec 9, 2016 at 12:54 PM, Francois Lafont
<francois.lafont.1...@gmail.com> wrote:

> On 12/09/2016 06:39 PM, Alex Evonosky wrote:
>
>> Sounds great. May I ask what procedure you used to upgrade?
>
> Of course. ;)
>
> It's here: https://shaman.ceph.com/repos/ceph/wip-msgr-jewel-fix2/
> (I think this link was pointed out by Greg Farnum or Sage Weil in a
> previous message).
>
> Personally I use Ubuntu Trusty, so the page above leads me to use this
> line in my "sources.list":
>
> deb http://3.chacra.ceph.com/r/ceph/wip-msgr-jewel-fix2/5d3c76c1c6e991649f0beedb80e6823606176d9e/ubuntu/trusty/flavors/default/ trusty main
>
> And after that, "apt-get update && apt-get upgrade" etc.
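Francois' procedure, spelled out as a sketch. The chacra.ceph.com URL is the one from his message; such shaman/chacra build repos are ephemeral and may no longer exist, so treat the line as illustrative:

```shell
# Write the extra repo line (normally under /etc/apt/sources.list.d/);
# a local file is used here so the sketch is self-contained.
echo 'deb http://3.chacra.ceph.com/r/ceph/wip-msgr-jewel-fix2/5d3c76c1c6e991649f0beedb80e6823606176d9e/ubuntu/trusty/flavors/default/ trusty main' > ceph-fix.list
grep -c '^deb ' ceph-fix.list         # prints: 1
# sudo apt-get update && sudo apt-get upgrade
# then restart daemons in order: mons first, then osds, then the mds
```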
Re: [ceph-users] 10.2.4 Jewel released
Francois--

Sounds great. May I ask what procedure you used to upgrade?

Thank you!

On Fri, Dec 9, 2016 at 12:20 PM, Francois Lafont
<francois.lafont.1...@gmail.com> wrote:

> Hi,
>
> Just for information: after upgrading all of my cluster (osd, mon and mds)
> to version 10.2.4-1-g5d3c76c (5d3c76c1c6e991649f0beedb80e6823606176d9e)
> ~30 hours ago, I have had no problems (my cluster is a small cluster with
> 5 nodes, 4 osds per node, and 3 monitors, and I just use CephFS).
>
> Bye.
Re: [ceph-users] New user on Ubuntu 16.04
Disregard-- found the issue: the remote hostname did not match the local
hostname.

Thank you.

On Thu, Sep 8, 2016 at 10:26 PM, Alex Evonosky <alex.evono...@gmail.com> wrote:

> Hey group-
>
> I am a new Ceph user on Ubuntu and noticed this when creating a brand new
> monitor following the documentation:
>
> storage@alex-desktop:~/ceph$ ceph-deploy --overwrite-conf mon create alex-desktop
> [ceph_deploy.conf][DEBUG ] found configuration file at: /home/storage/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.34): /usr/bin/ceph-deploy --overwrite-conf mon create alex-desktop
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username        : None
> [ceph_deploy.cli][INFO  ]  verbose         : False
> [ceph_deploy.cli][INFO  ]  overwrite_conf  : True
> [ceph_deploy.cli][INFO  ]  subcommand      : create
> [ceph_deploy.cli][INFO  ]  quiet           : False
> [ceph_deploy.cli][INFO  ]  cd_conf         :
> [ceph_deploy.cli][INFO  ]  cluster         : ceph
> [ceph_deploy.cli][INFO  ]  mon             : ['alex-desktop']
> [ceph_deploy.cli][INFO  ]  func            : at 0x7f834118c1b8>
> [ceph_deploy.cli][INFO  ]  ceph_conf       : None
> [ceph_deploy.cli][INFO  ]  keyrings        : None
> [ceph_deploy.cli][INFO  ]  default_release : False
> [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts alex-desktop
> [ceph_deploy.mon][DEBUG ] detecting platform for host alex-desktop ...
> [alex-desktop][DEBUG ] connection detected need for sudo
> [alex-desktop][DEBUG ] connected to host: alex-desktop
> [alex-desktop][DEBUG ] detect platform information from remote host
> [alex-desktop][DEBUG ] detect machine type
> [alex-desktop][DEBUG ] find the location of an executable
> [ceph_deploy.mon][INFO  ] distro info: Ubuntu 16.04 xenial
> [alex-desktop][DEBUG ] determining if provided host has same hostname in remote
> [alex-desktop][DEBUG ] get remote short hostname
> [alex-desktop][DEBUG ] deploying mon to alex-desktop
> [alex-desktop][DEBUG ] get remote short hostname
> [alex-desktop][DEBUG ] remote hostname: alex-desktop
> [alex-desktop][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [alex-desktop][DEBUG ] create the mon path if it does not exist
> [alex-desktop][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-alex-desktop/done
> [alex-desktop][DEBUG ] create a done file to avoid re-doing the mon deployment
> [alex-desktop][DEBUG ] create the init path if it does not exist
> [alex-desktop][INFO  ] Running command: sudo systemctl enable ceph.target
> [alex-desktop][INFO  ] Running command: sudo systemctl enable ceph-mon@alex-desktop
> [alex-desktop][INFO  ] Running command: sudo systemctl start ceph-mon@alex-desktop
> [alex-desktop][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.alex-desktop.asok mon_status
> [alex-desktop][ERROR ] no valid command found; 10 closest matches:
> [alex-desktop][ERROR ] config set [...]
> [alex-desktop][ERROR ] version
> [alex-desktop][ERROR ] git_version
> [alex-desktop][ERROR ] help
> [alex-desktop][ERROR ] config show
> [alex-desktop][ERROR ] get_command_descriptions
> [alex-desktop][ERROR ] config get
> [alex-desktop][ERROR ] perfcounters_dump
> [alex-desktop][ERROR ] 2
> [alex-desktop][ERROR ] config diff
> [alex-desktop][ERROR ] admin_socket: invalid command
> [alex-desktop][WARNIN] monitor: mon.alex-desktop, might not be running yet
> [alex-desktop][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.alex-desktop.asok mon_status
> [alex-desktop][ERROR ] no valid command found; 10 closest matches:
> [alex-desktop][ERROR ] config set [...]
> [alex-desktop][ERROR ] version
> [alex-desktop][ERROR ] git_version
> [alex-desktop][ERROR ] help
> [alex-desktop][ERROR ] config show
> [alex-desktop][ERROR ] get_command_descriptions
> [alex-desktop][ERROR ] config get
> [alex-desktop][ERROR ] perfcounters_dump
> [alex-desktop][ERROR ] 2
> [alex-desktop][ERROR ] config diff
> [alex-desktop][ERROR ] admin_socket: invalid command
> [alex-desktop][WARNIN] monitor alex-desktop does not exist in monmap
> [alex-desktop][WARNIN] neither `public_addr` nor `public_network` keys are defined for monitors
> [alex-desktop][WARNIN] monitors may not be able to form quorum
>
> This is a brand new install of Ceph, just testing on two nodes.
>
> Thank you,
> Alex
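The fix the poster found can be sanity-checked up front: ceph-deploy compares the name you deploy to against the remote host's short hostname. A hedged sketch; the target name "alex-desktop" is taken from the log above, so on any other machine this prints a mismatch, which is exactly the symptom being checked for:

```shell
DEPLOY_TARGET=alex-desktop            # name passed to 'ceph-deploy mon create'
SHORT_HOST=$(hostname -s)             # what the host itself reports
if [ "$SHORT_HOST" = "$DEPLOY_TARGET" ]; then
    echo "hostnames match - ceph-deploy should be happy"
else
    echo "mismatch: target '$DEPLOY_TARGET' vs host '$SHORT_HOST'"
fi
```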
[ceph-users] New user on Ubuntu 16.04
Hey group-

I am a new Ceph user on Ubuntu and noticed this when creating a brand new
monitor following the documentation:

storage@alex-desktop:~/ceph$ ceph-deploy --overwrite-conf mon create alex-desktop
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/storage/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /usr/bin/ceph-deploy --overwrite-conf mon create alex-desktop
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username        : None
[ceph_deploy.cli][INFO  ]  verbose         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf  : True
[ceph_deploy.cli][INFO  ]  subcommand      : create
[ceph_deploy.cli][INFO  ]  quiet           : False
[ceph_deploy.cli][INFO  ]  cd_conf         :
[ceph_deploy.cli][INFO  ]  cluster         : ceph
[ceph_deploy.cli][INFO  ]  mon             : ['alex-desktop']
[ceph_deploy.cli][INFO  ]  func            :
[ceph_deploy.cli][INFO  ]  ceph_conf       : None
[ceph_deploy.cli][INFO  ]  keyrings        : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts alex-desktop
[ceph_deploy.mon][DEBUG ] detecting platform for host alex-desktop ...
[alex-desktop][DEBUG ] connection detected need for sudo
[alex-desktop][DEBUG ] connected to host: alex-desktop
[alex-desktop][DEBUG ] detect platform information from remote host
[alex-desktop][DEBUG ] detect machine type
[alex-desktop][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 16.04 xenial
[alex-desktop][DEBUG ] determining if provided host has same hostname in remote
[alex-desktop][DEBUG ] get remote short hostname
[alex-desktop][DEBUG ] deploying mon to alex-desktop
[alex-desktop][DEBUG ] get remote short hostname
[alex-desktop][DEBUG ] remote hostname: alex-desktop
[alex-desktop][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[alex-desktop][DEBUG ] create the mon path if it does not exist
[alex-desktop][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-alex-desktop/done
[alex-desktop][DEBUG ] create a done file to avoid re-doing the mon deployment
[alex-desktop][DEBUG ] create the init path if it does not exist
[alex-desktop][INFO  ] Running command: sudo systemctl enable ceph.target
[alex-desktop][INFO  ] Running command: sudo systemctl enable ceph-mon@alex-desktop
[alex-desktop][INFO  ] Running command: sudo systemctl start ceph-mon@alex-desktop
[alex-desktop][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.alex-desktop.asok mon_status
[alex-desktop][ERROR ] no valid command found; 10 closest matches:
[alex-desktop][ERROR ] config set [...]
[alex-desktop][ERROR ] version
[alex-desktop][ERROR ] git_version
[alex-desktop][ERROR ] help
[alex-desktop][ERROR ] config show
[alex-desktop][ERROR ] get_command_descriptions
[alex-desktop][ERROR ] config get
[alex-desktop][ERROR ] perfcounters_dump
[alex-desktop][ERROR ] 2
[alex-desktop][ERROR ] config diff
[alex-desktop][ERROR ] admin_socket: invalid command
[alex-desktop][WARNIN] monitor: mon.alex-desktop, might not be running yet
[alex-desktop][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.alex-desktop.asok mon_status
[alex-desktop][ERROR ] no valid command found; 10 closest matches:
[alex-desktop][ERROR ] config set [...]
[alex-desktop][ERROR ] version
[alex-desktop][ERROR ] git_version
[alex-desktop][ERROR ] help
[alex-desktop][ERROR ] config show
[alex-desktop][ERROR ] get_command_descriptions
[alex-desktop][ERROR ] config get
[alex-desktop][ERROR ] perfcounters_dump
[alex-desktop][ERROR ] 2
[alex-desktop][ERROR ] config diff
[alex-desktop][ERROR ] admin_socket: invalid command
[alex-desktop][WARNIN] monitor alex-desktop does not exist in monmap
[alex-desktop][WARNIN] neither `public_addr` nor `public_network` keys are defined for monitors
[alex-desktop][WARNIN] monitors may not be able to form quorum

This is a brand new install of Ceph, just testing on two nodes.

Thank you,
Alex
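The last three WARNIN lines in the log above are worth acting on in their own right: without `public_network` (or a per-mon `public_addr`) in ceph.conf, a new monitor cannot reliably be placed in the monmap. A minimal sketch; the 10.10.10.0/24 subnet is an assumption based on addresses elsewhere in this archive, and a local file is used instead of /etc/ceph/ceph.conf:

```shell
# Append a [global] public_network setting so mon deployment can pick the
# right interface/address. Subnet below is an illustrative assumption.
cat >> ceph.conf <<'EOF'
[global]
public_network = 10.10.10.0/24
EOF
grep public_network ceph.conf         # prints: public_network = 10.10.10.0/24
```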