On Wed, Aug 9, 2017 at 11:42 PM, Timothy Wolgemuth <tim.l...@wolgemuth.cc> 
wrote:
> Here is the output:
>
> [ceph-deploy@ceph01 my-cluster]$ sudo /usr/bin/ceph --connect-timeout=25
> --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring
> auth get client.admin
> 2017-08-09 09:07:00.519683 7f0000389700  0 -- :/1582396262 >>
> 192.168.100.11:6789/0 pipe(0x7efffc0617c0 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7efffc05d670).fault
> 2017-08-09 09:07:03.520486 7f0000288700  0 -- :/1582396262 >>
> 192.168.100.11:6789/0 pipe(0x7efff0000c80 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7efff0001f90).fault
> 2017-08-09 09:07:06.521091 7f0000389700  0 -- :/1582396262 >>
> 192.168.100.11:6789/0 pipe(0x7efff00052b0 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7efff0006570).fault
> 2017-08-09 09:07:09.521483 7f0000288700  0 -- :/1582396262 >>
> 192.168.100.11:6789/0 pipe(0x7efff0000c80 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7efff0002410).fault
> 2017-08-09 09:07:12.522027 7f0000389700  0 -- :/1582396262 >>
> 192.168.100.11:6789/0 pipe(0x7efff00052b0 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7efff0002f60).fault
> 2017-08-09 09:07:15.522433 7f0000288700  0 -- :/1582396262 >>
> 192.168.100.11:6789/0 pipe(0x7efff0000c80 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7efff00036d0).fault
> 2017-08-09 09:07:18.523025 7f0000389700  0 -- :/1582396262 >>
> 192.168.100.11:6789/0 pipe(0x7efff00052b0 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7efff0002a10).fault
> 2017-08-09 09:07:21.523332 7f0000288700  0 -- :/1582396262 >>
> 192.168.100.11:6789/0 pipe(0x7efff0000c80 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7efff0008d40).fault
> 2017-08-09 09:07:24.523353 7f0000389700  0 -- :/1582396262 >>
> 192.168.100.11:6789/0 pipe(0x7efff00052b0 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7efff0003df0).fault
> Traceback (most recent call last):
>   File "/usr/bin/ceph", line 948, in <module>
>     retval = main()
>   File "/usr/bin/ceph", line 852, in main
>     prefix='get_command_descriptions')

What are your public and cluster network configurations, and which interface is
the client using when it tries to connect to 192.168.100.11:6789? You can use
wireshark or a similar tool to find out. ceph01 needs to be able to communicate
with 192.168.100.11 on port 6789, so the next step is to work out why it
currently can't.
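As a starting point, something along these lines run on ceph01 should show which
network carries that address and whether the monitor port is reachable at all.
This is only a sketch; the config path, subnet and interface name (eth0) are
assumptions, so adjust them for your environment:

$ grep -E 'public[ _]network|cluster[ _]network|mon_host' /etc/ceph/ceph.conf
$ ip -4 addr show                        # does any interface sit in 192.168.100.0/24?
$ ip route get 192.168.100.11            # interface/source address the kernel would pick
$ timeout 5 bash -c '</dev/tcp/192.168.100.11/6789' && echo reachable || echo unreachable
$ sudo tcpdump -ni eth0 'tcp port 6789'  # watch the traffic while re-running the ceph command

If the port is unreachable, also check on the monitor host which address the mon
is actually listening on (ss -tlnp | grep 6789) and whether firewalld/iptables
is dropping traffic to 6789.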

>
>
>
> On Wed, Aug 9, 2017 at 12:15 AM, Brad Hubbard <bhubb...@redhat.com> wrote:
>>
>> On ceph01 if you login as ceph-deploy and run the following command
>> what output do you get?
>>
>> $ sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon.
>> --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring auth get client.admin
>>
>> On Tue, Aug 8, 2017 at 11:41 PM, Timothy Wolgemuth
>> <tim.l...@wolgemuth.cc> wrote:
>> > I have a new installation and following the quick start guide at:
>> >
>> > http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
>> >
>> > Running into the following error in the create-initial step.  See below:
>> >
>> >
>> >
>> > $ ceph-deploy --username ceph-deploy mon create-initial
>> > [ceph_deploy.conf][DEBUG ] found configuration file at:
>> > /home/ceph-deploy/.cephdeploy.conf
>> > [ceph_deploy.cli][INFO  ] Invoked (1.5.37): /bin/ceph-deploy --username
>> > ceph-deploy mon create-initial
>> > [ceph_deploy.cli][INFO  ] ceph-deploy options:
>> > [ceph_deploy.cli][INFO  ]  username                      : ceph-deploy
>> > [ceph_deploy.cli][INFO  ]  verbose                       : False
>> > [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
>> > [ceph_deploy.cli][INFO  ]  subcommand                    :
>> > create-initial
>> > [ceph_deploy.cli][INFO  ]  quiet                         : False
>> > [ceph_deploy.cli][INFO  ]  cd_conf                       :
>> > <ceph_deploy.conf.cephdeploy.Conf instance at 0x2764ea8>
>> > [ceph_deploy.cli][INFO  ]  cluster                       : ceph
>> > [ceph_deploy.cli][INFO  ]  func                          : <function mon
>> > at
>> > 0x275e320>
>> > [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
>> > [ceph_deploy.cli][INFO  ]  default_release               : False
>> > [ceph_deploy.cli][INFO  ]  keyrings                      : None
>> > [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph01
>> > [ceph_deploy.mon][DEBUG ] detecting platform for host ceph01 ...
>> > [ceph01][DEBUG ] connection detected need for sudo
>> > [ceph01][DEBUG ] connected to host: ceph-deploy@ceph01
>> > [ceph01][DEBUG ] detect platform information from remote host
>> > [ceph01][DEBUG ] detect machine type
>> > [ceph01][DEBUG ] find the location of an executable
>> > [ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.3.1611 Core
>> > [ceph01][DEBUG ] determining if provided host has same hostname in
>> > remote
>> > [ceph01][DEBUG ] get remote short hostname
>> > [ceph01][DEBUG ] deploying mon to ceph01
>> > [ceph01][DEBUG ] get remote short hostname
>> > [ceph01][DEBUG ] remote hostname: ceph01
>> > [ceph01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>> > [ceph01][DEBUG ] create the mon path if it does not exist
>> > [ceph01][DEBUG ] checking for done path:
>> > /var/lib/ceph/mon/ceph-ceph01/done
>> > [ceph01][DEBUG ] create a done file to avoid re-doing the mon deployment
>> > [ceph01][DEBUG ] create the init path if it does not exist
>> > [ceph01][INFO  ] Running command: sudo systemctl enable ceph.target
>> > [ceph01][INFO  ] Running command: sudo systemctl enable ceph-mon@ceph01
>> > [ceph01][INFO  ] Running command: sudo systemctl start ceph-mon@ceph01
>> > [ceph01][INFO  ] Running command: sudo ceph --cluster=ceph
>> > --admin-daemon
>> > /var/run/ceph/ceph-mon.ceph01.asok mon_status
>> > [ceph01][DEBUG ]
>> >
>> > ********************************************************************************
>> > [ceph01][DEBUG ] status for monitor: mon.ceph01
>> > [ceph01][DEBUG ] {
>> > [ceph01][DEBUG ]   "election_epoch": 3,
>> > [ceph01][DEBUG ]   "extra_probe_peers": [
>> > [ceph01][DEBUG ]     "192.168.100.11:6789/0"
>> > [ceph01][DEBUG ]   ],
>> > [ceph01][DEBUG ]   "monmap": {
>> > [ceph01][DEBUG ]     "created": "2017-08-08 09:00:47.536389",
>> > [ceph01][DEBUG ]     "epoch": 1,
>> > [ceph01][DEBUG ]     "fsid": "89935cd7-d056-4dcd-80b2-925257811fd6",
>> > [ceph01][DEBUG ]     "modified": "2017-08-08 09:00:47.536389",
>> > [ceph01][DEBUG ]     "mons": [
>> > [ceph01][DEBUG ]       {
>> > [ceph01][DEBUG ]         "addr": "10.135.130.95:6789/0",
>> > [ceph01][DEBUG ]         "name": "ceph01",
>> > [ceph01][DEBUG ]         "rank": 0
>> > [ceph01][DEBUG ]       }
>> > [ceph01][DEBUG ]     ]
>> > [ceph01][DEBUG ]   },
>> > [ceph01][DEBUG ]   "name": "ceph01",
>> > [ceph01][DEBUG ]   "outside_quorum": [],
>> > [ceph01][DEBUG ]   "quorum": [
>> > [ceph01][DEBUG ]     0
>> > [ceph01][DEBUG ]   ],
>> > [ceph01][DEBUG ]   "rank": 0,
>> > [ceph01][DEBUG ]   "state": "leader",
>> > [ceph01][DEBUG ]   "sync_provider": []
>> > [ceph01][DEBUG ] }
>> > [ceph01][DEBUG ]
>> >
>> > ********************************************************************************
>> > [ceph01][INFO  ] monitor: mon.ceph01 is running
>> > [ceph01][INFO  ] Running command: sudo ceph --cluster=ceph
>> > --admin-daemon
>> > /var/run/ceph/ceph-mon.ceph01.asok mon_status
>> > [ceph_deploy.mon][INFO  ] processing monitor mon.ceph01
>> > [ceph01][DEBUG ] connection detected need for sudo
>> > [ceph01][DEBUG ] connected to host: ceph-deploy@ceph01
>> > [ceph01][DEBUG ] detect platform information from remote host
>> > [ceph01][DEBUG ] detect machine type
>> > [ceph01][DEBUG ] find the location of an executable
>> > [ceph01][INFO  ] Running command: sudo ceph --cluster=ceph
>> > --admin-daemon
>> > /var/run/ceph/ceph-mon.ceph01.asok mon_status
>> > [ceph_deploy.mon][INFO  ] mon.ceph01 monitor has reached quorum!
>> > [ceph_deploy.mon][INFO  ] all initial monitors are running and have
>> > formed
>> > quorum
>> > [ceph_deploy.mon][INFO  ] Running gatherkeys...
>> > [ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory
>> > /tmp/tmpmn4Gzd
>> > [ceph01][DEBUG ] connection detected need for sudo
>> > [ceph01][DEBUG ] connected to host: ceph-deploy@ceph01
>> > [ceph01][DEBUG ] detect platform information from remote host
>> > [ceph01][DEBUG ] detect machine type
>> > [ceph01][DEBUG ] get remote short hostname
>> > [ceph01][DEBUG ] fetch remote file
>> > [ceph01][INFO  ] Running command: sudo /usr/bin/ceph
>> > --connect-timeout=25
>> > --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph01.asok
>> > mon_status
>> > [ceph01][INFO  ] Running command: sudo /usr/bin/ceph
>> > --connect-timeout=25
>> > --cluster=ceph --name mon.
>> > --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring
>> > auth get client.admin
>> > [ceph01][ERROR ] "ceph auth get-or-create for keytype admin returned 1
>> > [ceph01][DEBUG ] 2017-08-08 09:25:52.594351 7f2d91ffb700  0 --
>> > :/60278403 >>
>> > 192.168.100.11:6789/0 pipe(0x7f2d940617e0 sd=5 :0 s=1 pgs=0 cs=0 l=1
>> > c=0x7f2d9405d690).fault
>> > [ceph01][DEBUG ] 2017-08-08 09:25:55.594924 7f2d91efa700  0 --
>> > :/60278403 >>
>> > 192.168.100.11:6789/0 pipe(0x7f2d88000c80 sd=5 :0 s=1 pgs=0 cs=0 l=1
>> > c=0x7f2d88001f90).fault
>> > [ceph01][DEBUG ] 2017-08-08 09:25:58.595384 7f2d91ffb700  0 --
>> > :/60278403 >>
>> > 192.168.100.11:6789/0 pipe(0x7f2d880052b0 sd=5 :0 s=1 pgs=0 cs=0 l=1
>> > c=0x7f2d88006570).fault
>> > [ceph01][DEBUG ] 2017-08-08 09:26:01.595842 7f2d91efa700  0 --
>> > :/60278403 >>
>> > 192.168.100.11:6789/0 pipe(0x7f2d88000c80 sd=5 :0 s=1 pgs=0 cs=0 l=1
>> > c=0x7f2d88002410).fault
>> > [ceph01][DEBUG ] 2017-08-08 09:26:04.596292 7f2d91ffb700  0 --
>> > :/60278403 >>
>> > 192.168.100.11:6789/0 pipe(0x7f2d880052b0 sd=5 :0 s=1 pgs=0 cs=0 l=1
>> > c=0x7f2d88002f60).fault
>> > [ceph01][DEBUG ] 2017-08-08 09:26:07.596816 7f2d91efa700  0 --
>> > :/60278403 >>
>> > 192.168.100.11:6789/0 pipe(0x7f2d88000c80 sd=5 :0 s=1 pgs=0 cs=0 l=1
>> > c=0x7f2d880036d0).fault
>> > [ceph01][DEBUG ] 2017-08-08 09:26:10.596773 7f2d91ffb700  0 --
>> > :/60278403 >>
>> > 192.168.100.11:6789/0 pipe(0x7f2d880052b0 sd=5 :0 s=1 pgs=0 cs=0 l=1
>> > c=0x7f2d88002a10).fault
>> > [ceph01][DEBUG ] 2017-08-08 09:26:13.597056 7f2d91efa700  0 --
>> > :/60278403 >>
>> > 192.168.100.11:6789/0 pipe(0x7f2d88000c80 sd=5 :0 s=1 pgs=0 cs=0 l=1
>> > c=0x7f2d88008d40).fault
>> > [ceph01][DEBUG ] 2017-08-08 09:26:16.597821 7f2d91ffb700  0 --
>> > :/60278403 >>
>> > 192.168.100.11:6789/0 pipe(0x7f2d880052b0 sd=5 :0 s=1 pgs=0 cs=0 l=1
>> > c=0x7f2d88003df0).fault
>> > [ceph01][DEBUG ] Traceback (most recent call last):
>> > [ceph01][DEBUG ]   File "/usr/bin/ceph", line 948, in <module>
>> > [ceph01][DEBUG ]     retval = main()
>> > [ceph01][DEBUG ]   File "/usr/bin/ceph", line 852, in main
>> > [ceph01][DEBUG ]     prefix='get_command_descriptions')
>> > [ceph01][DEBUG ]   File
>> > "/usr/lib/python2.7/site-packages/ceph_argparse.py",
>> > line 1300, in json_command
>> > [ceph01][DEBUG ]     raise RuntimeError('"{0}": exception
>> > {1}'.format(argdict, e))
>> > [ceph01][DEBUG ] RuntimeError: "None": exception "['{"prefix":
>> > "get_command_descriptions"}']": exception You cannot perform that
>> > operation
>> > on a Rados object in state configuring.
>> > [ceph01][ERROR ] Failed to return 'admin' key from host ceph01
>> > [ceph_deploy.gatherkeys][ERROR ] Failed to connect to host:ceph01
>> > [ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpmn4Gzd
>> > [ceph_deploy][ERROR ] RuntimeError: Failed to connect any mon
>> >
>> >
>> > Any help would be great.
>> >
>> > Tim
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>>
>>
>> --
>> Cheers,
>> Brad
>
>



-- 
Cheers,
Brad
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
