You need to replace # with ; in the inline comments of your ceph.conf — the
trailing "# ..." comments after "osd mkfs options xfs" and "osd mount options
xfs" are what is breaking the parse.
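For example, a sketch of just the two affected [osd] lines with the comment
delimiter changed (everything else in your section stays as you posted it):

    osd mkfs options xfs = -f   ; default for xfs is "-f"
    osd mount options xfs = rw,noatime ; default mount option is "rw,noatime"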
On Jan 2, 2014 at 19:58, "xyx" <[email protected]> wrote:

> *Hello, my Ceph teachers:*
>         I have just finished my Ceph configuration, which is as follows:
>         [global]
>         auth cluster required = none
>         auth service required = none
>         auth client required = none
> [osd]
>     osd journal size = 1000
>     osd data = /home/osd$id
>     osd journal = /home/osd$id/journal
>     filestore xattr use omap = true
>     osd mkfs type = xfs
>     osd mkfs options xfs = -f   # default for xfs is "-f"
>     osd mount options xfs = rw,noatime # default mount option is
> "rw,noatime"
> [mon.a]
>     host = umm-manager
>     mon addr = 192.168.1.130:6789
>     mon data=   /data/mon$id
> [osd.0]
>     host = umm-osd0
>     devs = /dev/sdb1
> #[osd.1]
> #    host = umm-osd1
> #    devs = /dev/sdb1
> [mds.a]
>     host = umm-manage
>
>
> *When I started Ceph and checked the cluster status, it reported the
> following errors:*
>
>
> 2014-01-03 05:45:07.378163 7f6658278700 0 -- :/1001908 >>
> 192.168.1.130:6789/0 pipe(0x7f66480008c0 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7f664800a090).fault
> 2014-01-03 05:45:10.378105 7f6658177700 0 -- :/1001908 >>
> 192.168.1.130:6789/0 pipe(0x7f6648002720 sd=3 :0 s=1 pgs=0 cs=0 l=1
> c=0x7f66480040e0).fault
> [... the same "pipe ... fault" message repeats every three seconds,
> alternating between the two threads, until 05:46:13 ...]
> ^C
> Error connecting to cluster: Error
>
>
> *I searched Google for a long time but did not find a solution, so I can
> only come and ask you, hoping to get your advice. Thank you!*
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
