Hi,

After adding a new monitor to the cluster, I'm getting a strange error:

vdicnode02/store.db/MANIFEST-000086 succeeded,manifest_file_number is 86, next_file_number is 88, last_sequence is 8, log_number is 0,prev_log_number is 0,max_column_family is 0

2017-08-15 22:00:58.832599 7f6791187e40  4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.1.3/rpm/el7/BUILD/ceph-12.1.3/src/rocksdb/db/version_set.cc:2867] Column family [default] (ID 0), log number is 85

2017-08-15 22:00:58.832699 7f6791187e40  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1502827258832681, "job": 1, "event": "recovery_started", "log_files": [87]}
2017-08-15 22:00:58.832726 7f6791187e40  4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.1.3/rpm/el7/BUILD/ceph-12.1.3/src/rocksdb/db/db_impl_open.cc:482] Recovering log #87 mode 2
2017-08-15 22:00:58.832887 7f6791187e40  4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.1.3/rpm/el7/BUILD/ceph-12.1.3/src/rocksdb/db/version_set.cc:2395] Creating manifest 89

2017-08-15 22:00:58.850503 7f6791187e40  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1502827258850484, "job": 1, "event": "recovery_finished"}
2017-08-15 22:00:58.852552 7f6791187e40  4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.1.3/rpm/el7/BUILD/ceph-12.1.3/src/rocksdb/db/db_impl_open.cc:1063] DB pointer 0x7f679c4bc000
2017-08-15 22:00:58.853155 7f6791187e40  0 starting mon.vdicnode02 rank 1 at public addr 192.168.100.102:6789/0 at bind addr 192.168.100.102:6789/0 mon_data /var/lib/ceph/mon/ceph-vdicnode02 fsid 61881df3-1365-4139-a586-92b5eca9cf18
2017-08-15 22:00:58.853329 7f6791187e40  0 starting mon.vdicnode02 rank 1 at 192.168.100.102:6789/0 mon_data /var/lib/ceph/mon/ceph-vdicnode02 fsid 61881df3-1365-4139-a586-92b5eca9cf18
2017-08-15 22:00:58.853685 7f6791187e40  1 mon.vdicnode02@-1(probing) e1 preinit fsid 61881df3-1365-4139-a586-92b5eca9cf18
*2017-08-15 22:00:58.853759 7f6791187e40 -1 mon.vdicnode02@-1(probing) e1 error: cluster_uuid file exists with value d6b54a37-1cbe-483a-94c0-703e072aa6fd, != our uuid 61881df3-1365-4139-a586-92b5eca9cf18*
2017-08-15 22:00:58.853821 7f6791187e40 -1 failed to initialize

Has anybody experienced the same issue or managed to fix it?
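
For reference, the cluster_uuid file mentioned in the error lives in the mon data directory shown in the log, so it can be compared directly against the fsid in ceph.conf (a quick check, assuming the default paths from the log above):

[root@vdicnode02 ~]# cat /var/lib/ceph/mon/ceph-vdicnode02/cluster_uuid
[root@vdicnode02 ~]# grep fsid /etc/ceph/ceph.conf

These should show the same mismatch the monitor reports: d6b54a37-1cbe-483a-94c0-703e072aa6fd on disk versus 61881df3-1365-4139-a586-92b5eca9cf18 in the config.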

[root@vdicnode02 ceph]# cat /etc/ceph/ceph.conf
[global]
fsid = 61881df3-1365-4139-a586-92b5eca9cf18
public_network = 192.168.100.0/24
cluster_network = 192.168.100.0/24
mon_initial_members = vdicnode01,vdicnode02
mon_host = 192.168.100.101,192.168.100.102
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

osd pool default size = 2
rbd_default_format = 2
rbd_cache = false

[mon.vdicnode01]
host = vdicnode01
addr = 192.168.100.101:6789

[mon.vdicnode02]
host = vdicnode02
addr = 192.168.100.102:6789
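
In case it is relevant, the fsid the running cluster actually reports (checked from vdicnode01, where the original monitor is) can be read with something like the command below; presumably it should match the fsid in the ceph.conf above:

[root@vdicnode01 ~]# ceph fsid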

Thanks a lot