Re: [ceph-users] (no subject)

2019-01-09 Thread Jonathan Woytek
I've done something similar. I used a process like this:

ceph osd set noout
ceph osd set nodown
ceph osd set nobackfill
ceph osd set norebalance
ceph osd set norecover

Then I did my work to manually remove/destroy the OSDs I was replacing,
brought the replacements online, and unset all of those options. Then the
I/O world collapsed for a little while as the new OSDs were backfilled.

Some of those might be redundant and/or unnecessary. I'm not a ceph expert.
Do this at your own risk. Etc.
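For reference, a rough sketch of that replace cycle for a single OSD (the ID 7
and /dev/sdX below are placeholders; check the docs for your release first):

systemctl stop ceph-osd@7
ceph osd destroy 7 --yes-i-really-mean-it
ceph-volume lvm create --osd-id 7 --data /dev/sdX   # reuse the old ID
# once every replacement OSD is up and in:
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset nobackfill
ceph osd unset nodown
ceph osd unset noout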

jonathan

On Wed, Jan 9, 2019 at 7:58 AM Mosi Thaunot  wrote:

> Hello,
>
> I have a cluster of 3 nodes with 3 OSDs per node (so 9 OSDs in total) and
> replication set to 3 (so each node has a copy).
>
> For some reason, I would like to recreate node 1. What I have done:
> 1. out the 3 OSDs of node 1, stop them, then destroy them (almost at the
> same time)
> 2. recreate the new node 1 and add the 3 new OSDs
>
> My problem is that after step 1, I had to wait for backfilling to complete
> (to get only active+clean+remapped and active+undersized+degraded PGs).
> Then I had to wait again in step 2 to get the cluster healthy.
>
> Could I avoid the wait in step 1? What should I do then? I was thinking:
> - set noout on the OSDs
> - out/stop/destroy the 3 OSDs of node 1 (at the same time)
> - reinstall node 1 (I have a copy of all the configuration files) and add
> the 3 new OSDs
>
> Would that work ?
>
> Thanks and regards,
> Mosi
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


-- 
Jonathan Woytek
http://www.dryrose.com
KB3HOZ
PGP:  462C 5F50 144D 6B09 3B65  FCE8 C1DC DEC4 E8B6 AABC
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2019-01-09 Thread Mosi Thaunot
Hello,

I have a cluster of 3 nodes with 3 OSDs per node (so 9 OSDs in total) and
replication set to 3 (so each node has a copy).

For some reason, I would like to recreate node 1. What I have done:
1. out the 3 OSDs of node 1, stop them, then destroy them (almost at the
same time)
2. recreate the new node 1 and add the 3 new OSDs

My problem is that after step 1, I had to wait for backfilling to complete
(to get only active+clean+remapped and active+undersized+degraded PGs).
Then I had to wait again in step 2 to get the cluster healthy.

Could I avoid the wait in step 1? What should I do then? I was thinking:
- set noout on the OSDs
- out/stop/destroy the 3 OSDs of node 1 (at the same time)
- reinstall node 1 (I have a copy of all the configuration files) and add
the 3 new OSDs

Would that work ?

Thanks and regards,
Mosi
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2018-09-19 Thread Paul Emmerich
No, it doesn't. In fact, I'm not aware of any client that sets this
flag; I think it's more for custom applications.
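If the hints can't be relied on, compression can instead be forced per pool,
roughly like this (pool name is a placeholder):

ceph osd pool set rbd-pool compression_mode aggressive
ceph osd pool set rbd-pool compression_algorithm snappy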


Paul

2018-09-18 21:41 GMT+02:00 Kevin Olbrich :
> Hi!
>
> is the compressible hint / incompressible hint supported on qemu+kvm?
>
> http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/
>
> If not, only aggressive would work in this case for rbd, right?
>
> Kind regards
> Kevin
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2018-09-18 Thread Kevin Olbrich
Hi!

is the compressible hint / incompressible hint supported on qemu+kvm?

http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/

If not, only aggressive would work in this case for rbd, right?

Kind regards
Kevin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2018-08-31 Thread puyingdong

help

end

-- 
puyingdong
puyingd...@gmail.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2018-08-31 Thread Stas
Hello there,
I'm trying to reduce recovery impact on client operations and am using mclock
for this purpose. I've tested different weights for the queues but didn't see
any impact on real performance.

ceph version 12.2.8  luminous (stable)

Last tested config:
"osd_op_queue": "mclock_opclass",
"osd_op_queue_cut_off": "high",
"osd_op_queue_mclock_client_op_lim": "0.00",
"osd_op_queue_mclock_client_op_res": "1.00",
"osd_op_queue_mclock_client_op_wgt": "1000.00",
"osd_op_queue_mclock_osd_subop_lim": "0.00",
"osd_op_queue_mclock_osd_subop_res": "1.00",
"osd_op_queue_mclock_osd_subop_wgt": "1000.00",
"osd_op_queue_mclock_recov_lim": "0.00",
"osd_op_queue_mclock_recov_res": "1.00",
"osd_op_queue_mclock_recov_wgt": "1.00",

Is this feature really working? Am I doing something wrong?
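One way to double-check what the OSDs are actually running (osd.0 is a
placeholder; this queries the daemon's admin socket):

ceph daemon osd.0 config get osd_op_queue
ceph daemon osd.0 config show | grep mclock

Note that osd_op_queue is read at startup, so the OSDs need a restart after
changing it in ceph.conf.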
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2018-08-17 Thread krwy0...@163.com
Title: Ceph OSD fails to startup with bluefs error
Content:
The crash has happened three times with the same reason:
direct_read_unaligned .. error (5) Input/output error.
When I use ceph-bluestore-tool repair/fsck, it reports:
# ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-1/ --log-level 30
2018-08-17 10:24:42.058156 7f2fa353ed00 -1 bdev(0x55a34bd06600 
/var/lib/ceph/osd/ceph-1//block) direct_read_unaligned 0x1c68d50~43bc 
error: (5) Input/output error
/build/ceph-12.2.6/src/os/bluestore/BlueFS.cc: In function 'int 
BlueFS::_read_random(BlueFS::FileReader*, uint64_t, size_t, char*)' thread 
7f2fa353ed00 time 2018-08-17 10:24:42.058383
/build/ceph-12.2.6/src/os/bluestore/BlueFS.cc: 916: FAILED assert(r == 0)
ceph version 12.2.6 (488df8a1076c4f5fc5b8d18a90463262c438740f) luminous (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x10e) 
[0x7f2f9a07f76e]
2: (BlueFS::_read_random(BlueFS::FileReader*, unsigned long, unsigned long, 
char*)+0x708) [0x55a34a1ce248]
3: (BlueRocksRandomAccessFile::Read(unsigned long, unsigned long, 
rocksdb::Slice*, char*) const+0x20) [0x55a34a384da0]
4: (rocksdb::RandomAccessFileReader::Read(unsigned long, unsigned long, 
rocksdb::Slice*, char*) const+0x30d) [0x55a34a48d87d]
5: (rocksdb::ReadBlockContents(rocksdb::RandomAccessFileReader*, 
rocksdb::Footer const&, rocksdb::ReadOptions const&, rocksdb::BlockHandle 
const&, rocksdb::BlockContents*, rocksdb::ImmutableCFOptions const&, bool, 
rocksdb::Slice const&, rocksdb::PersistentCacheOptions const&)+0x2ae) 
[0x55a34a4674ee]
6: (()+0x455aa8) [0x55a34a457aa8]
7: 
(rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*,
 rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, 
rocksdb::BlockBasedTable::CachableEntry*, bool)+0x352) 
[0x55a34a459f52]
8: 
(rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, 
rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, 
bool, rocksdb::Status)+0x129) [0x55a34a45a289]
9: 
(rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice
 const&)+0x89) [0x55a34a463089]
10: (()+0x486366) [0x55a34a488366]
11: (()+0x4869e6) [0x55a34a4889e6]
12: (()+0x4869f8) [0x55a34a4889f8]
13: (rocksdb::MergingIterator::Seek(rocksdb::Slice const&)+0xd7) 
[0x55a34a46f227]
14: (rocksdb::DBIter::Seek(rocksdb::Slice const&)+0x181) [0x55a34a4fdbd1]
15: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::lower_bound(std::string 
const&, std::string const&)+0x8e) [0x55a34a2a7b9e]
16: (BitmapFreelistManager::init(unsigned long)+0x1a8) [0x55a34a37df48]
17: (BlueStore::_open_fm(bool)+0xafa) [0x55a34a20877a]
18: (BlueStore::_fsck(bool, bool)+0x3f4) [0x55a34a26e584]
19: (main()+0x1582) [0x55a34a12e462]
20: (__libc_start_main()+0xf5) [0x7f2f988b9f45]
21: (()+0x1c05ef) [0x55a34a1c25ef]
NOTE: a copy of the executable, or `objdump -rdS ` is needed to 
interpret this.
2018-08-17 10:24:42.064170 7f2fa353ed00 -1 
/build/ceph-12.2.6/src/os/bluestore/BlueFS.cc: In function 'int 
BlueFS::_read_random(BlueFS::FileReader*, uint64_t, size_t, char*)' thread 
7f2fa353ed00 time 2018-08-17 10:24:42.058383
/build/ceph-12.2.6/src/os/bluestore/BlueFS.cc: 916: FAILED assert(r == 0)

ceph version 12.2.6 (488df8a1076c4f5fc5b8d18a90463262c438740f) luminous (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x10e) 
[0x7f2f9a07f76e]
2: (BlueFS::_read_random(BlueFS::FileReader*, unsigned long, unsigned long, 
char*)+0x708) [0x55a34a1ce248]
3: (BlueRocksRandomAccessFile::Read(unsigned long, unsigned long, 
rocksdb::Slice*, char*) const+0x20) [0x55a34a384da0]
4: (rocksdb::RandomAccessFileReader::Read(unsigned long, unsigned long, 
rocksdb::Slice*, char*) const+0x30d) [0x55a34a48d87d]
5: (rocksdb::ReadBlockContents(rocksdb::RandomAccessFileReader*, 
rocksdb::Footer const&, rocksdb::ReadOptions const&, rocksdb::BlockHandle 
const&, rocksdb::BlockContents*, rocksdb::ImmutableCFOptions const&, bool, 
rocksdb::Slice const&, rocksdb::PersistentCacheOptions const&)+0x2ae) 
[0x55a34a4674ee]
6: (()+0x455aa8) [0x55a34a457aa8]
7: 
(rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*,
 rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, 
rocksdb::BlockBasedTable::CachableEntry*, bool)+0x352) 
[0x55a34a459f52]
8: 
(rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, 
rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, 
bool, rocksdb::Status)+0x129) [0x55a34a45a289]
9: 
(rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice
 const&)+0x89) [0x55a34a463089]
10: (()+0x486366) [0x55a34a488366]
11: (()+0x4869e6) [0x55a34a4889e6]
12: (()+0x4869f8) [0x55a34a4889f8]
13: (rocksdb::MergingIterator::Seek(rocksdb::Slice const&)+0xd7) 
[0x55a34a46f227]
14: (rocksdb::DBIter::Seek(rocksdb::Slice const&)+0x181) [0x55a34a4fdbd1]
15: 
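Since error (5) is EIO coming back from the block device itself, it is worth
checking the underlying drive before digging further into BlueFS (device path
is a placeholder):

dmesg | grep -i 'i/o error'
smartctl -a /dev/sdX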

[ceph-users] (no subject)

2018-05-18 Thread Don Doerner
unsubscribe ceph-users
The information contained in this transmission may be confidential. Any 
disclosure, copying, or further distribution of confidential information is not 
permitted unless such privilege is explicitly granted in writing by Quantum. 
Quantum reserves the right to have electronic communications, including email 
and attachments, sent across its networks filtered through security software 
programs and retain such messages in order to comply with applicable data 
security and retention requirements. Quantum is not responsible for the proper 
and complete transmission of the substance of this communication or for any 
delay in its receipt.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2018-04-16 Thread F21

unsub

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2018-03-10 Thread Oliver Freyermuth
Hi Nathan,

this indeed appears to be a Gentoo-specific issue. 
They install the file at:
/usr/libexec/ceph/ceph-osd-prestart.sh
instead of
/usr/lib/ceph/ceph-osd-prestart.sh

It depends on how strictly you follow the FHS (
http://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch04s07.html )
which of the two is the correct place to use. It seems the Gentoo packagers took
FHS 3.0 seriously when deciding where to install things,
but did not patch the code accordingly, so this surely warrants a Gentoo bug.
The distro packagers should then decide whether they want to change the install
location (and rather follow Ceph upstream than FHS 3.0),
or patch the code to enforce FHS 3.0.

https://bugs.gentoo.org/632028 is also related to that. 
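Until the ebuild is fixed, a local workaround could be either a symlink or a
systemd drop-in (sketch only; the ExecStartPre arguments below assume the stock
ceph-osd@.service unit):

mkdir -p /usr/lib/ceph
ln -s /usr/libexec/ceph/ceph-osd-prestart.sh /usr/lib/ceph/ceph-osd-prestart.sh

# or override the unit instead of symlinking:
systemctl edit ceph-osd@.service
# [Service]
# ExecStartPre=
# ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i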

Cheers,
Oliver

Am 11.03.2018 um 00:34 schrieb Nathan Dehnel:
> Trying to create an OSD:
> 
> gentooserver ~ # ceph-volume lvm create --data /dev/sdb
> Running command: ceph-authtool --gen-print-key
> Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring 
> /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 
> e70500fe-0d51-48c3-a607-414957886726
> Running command: vgcreate --force --yes 
> ceph-a736559a-92d1-483e-9289-d2c7feed510f /dev/sdb
>  stdout: Volume group "ceph-a736559a-92d1-483e-9289-d2c7feed510f" 
> successfully created
> Running command: lvcreate --yes -l 100%FREE -n 
> osd-block-e70500fe-0d51-48c3-a607-414957886726 
> ceph-a736559a-92d1-483e-9289-d2c7feed510f
>  stdout: Logical volume "osd-block-e70500fe-0d51-48c3-a607-414957886726" 
> created.
> Running command: ceph-authtool --gen-print-key
> Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
> Running command: chown -R ceph:ceph /dev/dm-0
> Running command: ln -s 
> /dev/ceph-a736559a-92d1-483e-9289-d2c7feed510f/osd-block-e70500fe-0d51-48c3-a607-414957886726
>  /var/lib/ceph/osd/ceph-0/block
> Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring 
> /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o 
> /var/lib/ceph/osd/ceph-0/activate.monmap
>  stderr: got monmap epoch 1
> Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring 
> --create-keyring --name osd.0 --add-key 
> AQBEZqRalRoRCBAA03R6VshykLcZjMgQnFKDtg==
>  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
>  stdout: added entity osd.0 auth auth(auid = 18446744073709551615 
> key=AQBEZqRalRoRCBAA03R6VshykLcZjMgQnFKDtg== with 0 caps)
> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
> Running command: ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs 
> -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data 
> /var/lib/ceph/osd/ceph-0/ --osd-uuid e70500fe-0d51-48c3-a607-414957886726 
> --setuser ceph --setgroup ceph
> --> ceph-volume lvm prepare successful for: /dev/sdb
> Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev 
> /dev/ceph-a736559a-92d1-483e-9289-d2c7feed510f/osd-block-e70500fe-0d51-48c3-a607-414957886726
>  --path /var/lib/ceph/osd/ceph-0
> Running command: ln -snf 
> /dev/ceph-a736559a-92d1-483e-9289-d2c7feed510f/osd-block-e70500fe-0d51-48c3-a607-414957886726
>  /var/lib/ceph/osd/ceph-0/block
> Running command: chown -R ceph:ceph /dev/dm-0
> Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
> Running command: systemctl enable 
> ceph-volume@lvm-0-e70500fe-0d51-48c3-a607-414957886726
>  stderr: Created symlink 
> /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-e70500fe-0d51-48c3-a607-414957886726.service
>  → /lib/systemd/system/ceph-volume@.service.
> Running command: systemctl start ceph-osd@0
>  stderr: Job for ceph-osd@0.service failed because the control process exited 
> with error code.
> See "systemctl status ceph-osd@0.service" and "journalctl -xe" for details.
> --> Was unable to complete a new OSD, will rollback changes
> --> OSD will be fully purged from the cluster, because the ID was generated
> Running command: ceph osd purge osd.0 --yes-i-really-mean-it
>  stderr: purged osd.0
> -->  RuntimeError: command returned non-zero exit status: 1
> 
> journalctl -xe
> -- Unit ceph-osd@0.service has begun starting up.
> Mar 10 17:14:34 gentooserver systemd[3977]: ceph-osd@0.service: Failed to 
> execute command: No such file or directory
> Mar 10 17:14:34 gentooserver systemd[3977]: ceph-osd@0.service: Failed at 
> step EXEC spawning /usr/lib/ceph/ceph-osd-prestart.sh: No such file or 
> directory
> -- Subject: Process /usr/lib/ceph/ceph-osd-prestart.sh could not be executed
> -- Defined-By: systemd
> -- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
> -- 
> -- The process /usr/lib/ceph/ceph-osd-prestart.sh could not be executed and 
> failed.
> -- 
> -- The error number returned by this process is 2.
> Mar 10 17:14:34 gentooserver systemd[1]: ceph-osd@0.service: Control process 
> exited, code=exited status=203
> Mar 10 17:14:34 gentooserver systemd[1]: ceph-osd@0.service: Failed with 
> 

[ceph-users] (no subject)

2018-03-10 Thread Nathan Dehnel
Trying to create an OSD:

gentooserver ~ # ceph-volume lvm create --data /dev/sdb
Running command: ceph-authtool --gen-print-key
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
e70500fe-0d51-48c3-a607-414957886726
Running command: vgcreate --force --yes
ceph-a736559a-92d1-483e-9289-d2c7feed510f /dev/sdb
 stdout: Volume group "ceph-a736559a-92d1-483e-9289-d2c7feed510f"
successfully created
Running command: lvcreate --yes -l 100%FREE -n
osd-block-e70500fe-0d51-48c3-a607-414957886726
ceph-a736559a-92d1-483e-9289-d2c7feed510f
 stdout: Logical volume "osd-block-e70500fe-0d51-48c3-a607-414957886726"
created.
Running command: ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: chown -R ceph:ceph /dev/dm-0
Running command: ln -s
/dev/ceph-a736559a-92d1-483e-9289-d2c7feed510f/osd-block-e70500fe-0d51-48c3-a607-414957886726
/var/lib/ceph/osd/ceph-0/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
/var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring
--create-keyring --name osd.0 --add-key
AQBEZqRalRoRCBAA03R6VshykLcZjMgQnFKDtg==
 stdout: creating /var/lib/ceph/osd/ceph-0/keyring
 stdout: added entity osd.0 auth auth(auid = 18446744073709551615
key=AQBEZqRalRoRCBAA03R6VshykLcZjMgQnFKDtg== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs
-i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile -
--osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid
e70500fe-0d51-48c3-a607-414957886726 --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: /dev/sdb
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev
/dev/ceph-a736559a-92d1-483e-9289-d2c7feed510f/osd-block-e70500fe-0d51-48c3-a607-414957886726
--path /var/lib/ceph/osd/ceph-0
Running command: ln -snf
/dev/ceph-a736559a-92d1-483e-9289-d2c7feed510f/osd-block-e70500fe-0d51-48c3-a607-414957886726
/var/lib/ceph/osd/ceph-0/block
Running command: chown -R ceph:ceph /dev/dm-0
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: systemctl enable
ceph-volume@lvm-0-e70500fe-0d51-48c3-a607-414957886726
 stderr: Created symlink
/etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-e70500fe-0d51-48c3-a607-414957886726.service
→ /lib/systemd/system/ceph-volume@.service.
Running command: systemctl start ceph-osd@0
 stderr: Job for ceph-osd@0.service failed because the control process
exited with error code.
See "systemctl status ceph-osd@0.service" and "journalctl -xe" for details.
--> Was unable to complete a new OSD, will rollback changes
--> OSD will be fully purged from the cluster, because the ID was generated
Running command: ceph osd purge osd.0 --yes-i-really-mean-it
 stderr: purged osd.0
-->  RuntimeError: command returned non-zero exit status: 1

journalctl -xe
-- Unit ceph-osd@0.service has begun starting up.
Mar 10 17:14:34 gentooserver systemd[3977]: ceph-osd@0.service: Failed to
execute command: No such file or directory
Mar 10 17:14:34 gentooserver systemd[3977]: ceph-osd@0.service: Failed at
step EXEC spawning /usr/lib/ceph/ceph-osd-prestart.sh: No such file or
directory
-- Subject: Process /usr/lib/ceph/ceph-osd-prestart.sh could not be executed
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- The process /usr/lib/ceph/ceph-osd-prestart.sh could not be executed and
failed.
-- 
-- The error number returned by this process is 2.
Mar 10 17:14:34 gentooserver systemd[1]: ceph-osd@0.service: Control
process exited, code=exited status=203
Mar 10 17:14:34 gentooserver systemd[1]: ceph-osd@0.service: Failed with
result 'exit-code'.
Mar 10 17:14:34 gentooserver systemd[1]: Failed to start Ceph object
storage daemon osd.0.
-- Subject: Unit ceph-osd@0.service has failed
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- Unit ceph-osd@0.service has failed.
-- 
-- The result is RESULT.

Why is this file missing? Should I file a bug with my distro's packager?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2018-02-27 Thread Philip Schroth
I have a 3-node production cluster. All works fine, but I have one failing
node. I replaced one disk on Sunday and everything went fine. Last night
another disk broke, and Ceph nicely marks it as down. But when I want to
reboot this node now, all remaining OSDs are kept in and not marked as down,
and the whole cluster locks during the reboot of this node. Once the failing
node is back, I can reboot one of the other two nodes and it works like a
charm. Only this node I cannot reboot anymore without locking, which I still
could on Sunday...
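For what it's worth, the usual way to keep a planned node reboot from causing
unnecessary data movement is to set noout first (a minimal sketch):

ceph osd set noout
# reboot the node and wait for its OSDs to rejoin, then:
ceph osd unset noout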


-- 
Met vriendelijke groet / With kind regards

Philip Schroth
E: i...@schroth.nl
T: +31630973268
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2017-10-27 Thread David Turner
Your client needs to tell the cluster that the objects have been deleted.
'-o discard' is my goto because I'm lazy and it works well enough for me.
If you're in need of more performance, then fstrim is the other option.
Nothing on the Ceph side can be configured to know when a client no longer
needs the contents of an object.  It just acts like a normal harddrive in
that the filesystem on top of the RBD removed the pointers to the objects,
but the disk just lets the file stay where it is until it is eventually
overwritten.

Utilizing discard or fstrim cleans up the objects immediately, but at the
cost of cluster iops.  If you know that a particular RBD overwrites its
data all the time, then you can skip using fstrim on it as it will
constantly be using the same objects anyway.
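As a hedged example of the two approaches (mount point and device are
placeholders; many distros also ship an fstrim.timer unit):

mount -o discard /dev/rbd0 /mnt/data   # continuous discard
fstrim -v /mnt/data                    # or periodic trim instead
systemctl enable --now fstrim.timer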

On Fri, Oct 27, 2017 at 1:17 PM nigel davies  wrote:

> Hey all
>
> I am new to Ceph and have made a test Ceph cluster that supports
> S3 and RBDs (the RBDs are exported using iSCSI).
>
> I have been looking around and noticed that the space is not decreasing when
> I delete a file, which in turn filled up my cluster OSDs.
>
> I have been doing some reading and see people recommend
> adding "-o discard" to every RBD mount (which can be a performance hit)
> or using fstrim. When I try them, both options work, but is there a
> better/another option?
>
> Thanks
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2017-10-27 Thread nigel davies
Hey all

I am new to Ceph and have made a test Ceph cluster that supports
S3 and RBDs (the RBDs are exported using iSCSI).

I have been looking around and noticed that the space is not decreasing when I
delete a file, which in turn filled up my cluster OSDs.

I have been doing some reading and see people recommend
adding "-o discard" to every RBD mount (which can be a performance hit) or
using fstrim. When I try them, both options work, but is there a better/another
option?

Thanks
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2017-09-30 Thread Marc Roos





[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket 
closed (con state OPEN)
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket 
closed (con state CONNECTING)
[Sat Sep 30 15:51:11 2017] libceph: osd5 down
[Sat Sep 30 15:51:11 2017] libceph: osd5 down
[Sat Sep 30 15:52:52 2017] libceph: osd5 up
[Sat Sep 30 15:52:52 2017] libceph: osd5 up



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2017-09-05 Thread Gregory Farnum
On Thu, Aug 31, 2017 at 11:51 AM, Marc Roos  wrote:
>
> Should these messages not be gone in 12.2.0?
>
> 2017-08-31 20:49:33.500773 7f5aa1756d40 -1 WARNING: the following
> dangerous and experimental features are enabled: bluestore
> 2017-08-31 20:49:33.501026 7f5aa1756d40 -1 WARNING: the following
> dangerous and experimental features are enabled: bluestore
> 2017-08-31 20:49:33.540667 7f5aa1756d40 -1 WARNING: the following
> dangerous and experimental features are enabled: bluestore
>
> ceph-selinux-12.2.0-0.el7.x86_64
> ceph-mon-12.2.0-0.el7.x86_64
> collectd-ceph-5.7.1-2.el7.x86_64
> ceph-base-12.2.0-0.el7.x86_64
> ceph-osd-12.2.0-0.el7.x86_64
> ceph-mgr-12.2.0-0.el7.x86_64
> ceph-12.2.0-0.el7.x86_64
> ceph-common-12.2.0-0.el7.x86_64
> ceph-mds-12.2.0-0.el7.x86_64

Yes. We actually produce that in a couple places -- where exactly are
you seeing them?
-Greg
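If the old pre-Luminous switch is still present in ceph.conf it would explain
the message; a quick check (osd.0 is a placeholder):

grep -i 'enable experimental' /etc/ceph/ceph.conf
ceph daemon osd.0 config get enable_experimental_unrecoverable_data_corrupting_features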

>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2017-08-31 Thread Marc Roos

Should these messages not be gone in 12.2.0?

2017-08-31 20:49:33.500773 7f5aa1756d40 -1 WARNING: the following 
dangerous and experimental features are enabled: bluestore
2017-08-31 20:49:33.501026 7f5aa1756d40 -1 WARNING: the following 
dangerous and experimental features are enabled: bluestore
2017-08-31 20:49:33.540667 7f5aa1756d40 -1 WARNING: the following 
dangerous and experimental features are enabled: bluestore

ceph-selinux-12.2.0-0.el7.x86_64
ceph-mon-12.2.0-0.el7.x86_64
collectd-ceph-5.7.1-2.el7.x86_64
ceph-base-12.2.0-0.el7.x86_64
ceph-osd-12.2.0-0.el7.x86_64
ceph-mgr-12.2.0-0.el7.x86_64
ceph-12.2.0-0.el7.x86_64
ceph-common-12.2.0-0.el7.x86_64
ceph-mds-12.2.0-0.el7.x86_64



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2017-06-09 Thread Steele, Tim
unsubscribe ceph-users

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2017-03-20 Thread Shaon
-- 
Imran Hossain Shaon | http://shaon.me/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2016-09-19 Thread ? ?
help

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2016-08-04 Thread Parveen Sharma
I have a cluster, and I want a RadosGW user to have access to a bucket's
objects only (like /*), but the user should not be able to create new
buckets or remove this bucket.
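One possible approach (untested sketch; the uid, bucket name and tool are just
examples) is to cap bucket creation for the user and grant object access via an
ACL on the existing bucket:

radosgw-admin user modify --uid=appuser --max-buckets=0
s3cmd setacl s3://mybucket --acl-grant=read:appuser --acl-grant=write:appuser --recursive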



-
Parveen Kumar Sharma
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-12 Thread Anand Bhat
Use qemu-img convert to convert from one format to another.
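For example (image and pool names are placeholders), you can convert straight
into an RBD image so the huge raw file never has to exist locally; RBD images
are thin-provisioned, so unwritten space is not stored:

qemu-img convert -p -f qcow2 -O raw server.qcow2 rbd:images/server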

Regards,
Anand

On Mon, Jul 11, 2016 at 9:37 PM, Gaurav Goyal 
wrote:

> Thanks!
>
> I need to create a VM whose qcow2 image file is 6.7 GB, but the raw image is
> 600 GB, which is too big.
> Is there a way that I do not need to convert the qcow2 file to raw and it
> still works well with rbd?
>
>
> Regards
> Gaurav Goyal
>
> On Mon, Jul 11, 2016 at 11:46 AM, Kees Meijs  wrote:
>
>> Glad to hear it works now! Good luck with your setup.
>>
>> Regards,
>> Kees
>>
>> On 11-07-16 17:29, Gaurav Goyal wrote:
>> > Hello it worked for me after removing the following parameter from
>> > /etc/nova/nova.conf file
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 

Never say never.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-11 Thread Gaurav Goyal
Thanks!

I need to create a VM whose qcow2 image file is 6.7 GB, but the raw image is
600 GB, which is too big.
Is there a way that I do not need to convert the qcow2 file to raw and it
still works well with rbd?


Regards
Gaurav Goyal

On Mon, Jul 11, 2016 at 11:46 AM, Kees Meijs  wrote:

> Glad to hear it works now! Good luck with your setup.
>
> Regards,
> Kees
>
> On 11-07-16 17:29, Gaurav Goyal wrote:
> > Hello it worked for me after removing the following parameter from
> > /etc/nova/nova.conf file
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-11 Thread Kees Meijs
Glad to hear it works now! Good luck with your setup.

Regards,
Kees

On 11-07-16 17:29, Gaurav Goyal wrote:
> Hello it worked for me after removing the following parameter from
> /etc/nova/nova.conf file

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-11 Thread Gaurav Goyal
Hello, it worked for me after removing the following parameter from the
/etc/nova/nova.conf file:

[root@OSKVM1 ~]# cat /etc/nova/nova.conf|grep hw_disk_discard

#hw_disk_discard=unmap


Though as per the Ceph documentation, for the Kilo release we must set this
parameter. I am using Liberty, but I am not sure if this parameter was
removed in Liberty. If that is the case, please update the documentation.


KILO

Enable discard support for virtual machine ephemeral root disk:

[libvirt]

...

hw_disk_discard = unmap # enable discard support (be careful of performance)
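The "Volume sets discard option, but libvirt (1, 0, 6) or later is required,
qemu (1, 6, 0) or later is required" error seen elsewhere in this thread points
at the hypervisor versions, so it may be worth checking what the compute node
actually runs before dropping the option:

libvirtd --version
rpm -q libvirt qemu-kvm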


Regards

Gaurav Goyal

On Mon, Jul 11, 2016 at 4:38 AM, Kees Meijs  wrote:

> Hi,
>
> I think there's still something misconfigured:
>
> Invalid: 400 Bad Request: Unknown scheme 'file' found in URI (HTTP 400)
>
>
> It seems the RBD backend is not used as expected.
>
> Have you configured both Cinder *and* Glance to use Ceph?
>
> Regards,
> Kees
>
> On 08-07-16 17:33, Gaurav Goyal wrote:
>
>
> I regenerated the UUID as per your suggestion.
> Now i have same UUID in host1 and host2.
> I could create volumes and attach them to existing VMs.
>
> I could create new glance images.
>
> But still finding the same error while instance launch via GUI.
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-11 Thread Kees Meijs
Hi,

I think there's still something misconfigured:
> Invalid: 400 Bad Request: Unknown scheme 'file' found in URI (HTTP 400)

It seems the RBD backend is not used as expected.

Have you configured both Cinder _and_ Glance to use Ceph?
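For reference, the Glance side usually needs something like this in
glance-api.conf (a sketch; pool and user names are the usual examples):

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8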

Regards,
Kees

On 08-07-16 17:33, Gaurav Goyal wrote:
>
> I regenerated the UUID as per your suggestion. 
> Now i have same UUID in host1 and host2.
> I could create volumes and attach them to existing VMs.
>
> I could create new glance images. 
>
> But still finding the same error while instance launch via GUI.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
I even tried with a bare .raw file, but the error is still the same.

016-07-08 16:29:40.931 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Total memory: 193168 MB, used:
1024.00 MB

2016-07-08 16:29:40.931 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] memory limit: 289752.00 MB, free:
288728.00 MB

2016-07-08 16:29:40.932 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Total disk: 8168 GB, used: 1.00 GB

2016-07-08 16:29:40.932 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] disk limit: 8168.00 GB, free: 8167.00
GB

2016-07-08 16:29:40.948 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Claim successful

2016-07-08 16:29:41.384 86007 INFO nova.virt.libvirt.driver
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Creating image

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Instance failed to spawn

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Traceback (most recent call last):

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2156, in
_build_resources

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] yield resources

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in
_build_and_run_instance

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]
block_device_info=block_device_info)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2531,
in spawn

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] write_to_disk=True)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4427,
in _get_guest_xml

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] context)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4286,
in _get_guest_config

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] flavor, guest.os_type)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3387,
in _get_guest_storage_config

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] inst_type)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3320,
in _get_guest_disk_config

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] raise exception.Invalid(msg)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Invalid: Volume sets discard option,
but libvirt (1, 0, 6) or later is required, qemu (1, 6, 0) or later is
required.

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]

2016-07-08 16:29:42.261 86007 INFO nova.compute.manager
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Terminating instance

2016-07-08 16:29:42.267 86007 INFO nova.virt.libvirt.driver [-] 

Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
[root@OSKVM1 ~]# grep -v "^#" /etc/nova/nova.conf|grep -v ^$

[DEFAULT]

instance_usage_audit = True

instance_usage_audit_period = hour

notify_on_state_change = vm_and_task_state

notification_driver = messagingv2

rbd_user=cinder

rbd_secret_uuid=1989f7a6-4ecb-4738-abbf-2962c29b2bbb

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.1.0.4

network_api_class = nova.network.neutronv2.api.API

security_group_api = neutron

linuxnet_interface_driver =
nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver

firewall_driver = nova.virt.firewall.NoopFirewallDriver

enabled_apis=osapi_compute,metadata

[api_database]

connection = mysql://nova:nova@controller/nova

[barbican]

[cells]

[cinder]

os_region_name = RegionOne

[conductor]

[cors]

[cors.subdomain]

[database]

[ephemeral_storage_encryption]

[glance]

host = controller

[guestfs]

[hyperv]

[image_file_url]

[ironic]

[keymgr]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = nova

[libvirt]

inject_password=false

inject_key=false

inject_partition=-2

live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER,
VIR_MIGRATE_LIVE, VIR_MIGRATE_PERSIST_DEST, VIR_MIGRATE_TUNNELLED

disk_cachemodes ="network=writeback"

images_type=rbd

images_rbd_pool=vms

images_rbd_ceph_conf =/etc/ceph/ceph.conf

rbd_user=cinder

rbd_secret_uuid=1989f7a6-4ecb-4738-abbf-2962c29b2bbb

hw_disk_discard=unmap

[matchmaker_redis]

[matchmaker_ring]

[metrics]

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron

service_metadata_proxy = True

metadata_proxy_shared_secret = X

[osapi_v21]

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = X

[oslo_middleware]

[rdp]

[serial_console]

[spice]

[ssl]

[trusted_computing]

[upgrade_levels]

[vmware]

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

novncproxy_base_url = http://controller:6080/vnc_auto.html

vncserver_proxyclient_address = $my_ip

[workarounds]

[xenserver]

[zookeeper]


[root@OSKVM1 ceph]# ls -ltr

total 24

-rwxr-xr-x 1 root   root92 May 10 12:58 rbdmap

-rw--- 1 root   root 0 Jun 28 11:05 tmpfDt6jw

-rw-r--r-- 1 root   root63 Jul  5 12:59 ceph.client.admin.keyring

-rw-r--r-- 1 glance glance  64 Jul  5 14:51 ceph.client.glance.keyring

-rw-r--r-- 1 cinder cinder  64 Jul  5 14:53 ceph.client.cinder.keyring

-rw-r--r-- 1 cinder cinder  71 Jul  5 14:54
ceph.client.cinder-backup.keyring

-rwxrwxrwx 1 root   root   438 Jul  7 14:19 ceph.conf

[root@OSKVM1 ceph]# more ceph.client.cinder.keyring

[client.cinder]

key = AQCIAHxX9ga8LxAAU+S3Vybdu+Cm2bP3lplGnA==

[root@OSKVM1 ~]# rados lspools

rbd

volumes

images

backups

vms

[root@OSKVM1 ~]# rbd -p rbd ls

[root@OSKVM1 ~]# rbd -p volumes ls

volume-27717a88-3c80-420f-8887-4ca5c5b94023

volume-3bd22868-cb2a-4881-b9fb-ae91a6f79cb9

volume-b9cf7b94-cfb6-4b55-816c-10c442b23519

[root@OSKVM1 ~]# rbd -p images ls

9aee6c4e-3b60-49d5-8e17-33953e384a00

a8b45c8a-a5c8-49d8-a529-1e4088bdbf3f

[root@OSKVM1 ~]# rbd -p vms ls

[root@OSKVM1 ~]# rbd -p backup


*I could create a Cinder volume and attach it to one of the already built
Nova instances.*

[root@OSKVM1 ceph]# nova volume-list

WARNING: Command volume-list is deprecated and will be removed after Nova
13.0.0 is released. Use python-cinderclient or openstackclient instead.

+--+---+--+--+-+--+

| ID   | Status| Display Name |
Size | Volume Type | Attached to  |

+--+---+--+--+-+--+

| 14a572d0-2834-40d6-9650-cb3e18271963 | available | nova-vol_gg  | 10
  | -   |  |

| 3bd22868-cb2a-4881-b9fb-ae91a6f79cb9 | in-use| nova-vol_1   | 2
  | -   | d06f7c3b-5bbd-4597-99ce-fa981d2e10db |

| 27717a88-3c80-420f-8887-4ca5c5b94023 | available | cinder-ceph-vol1 | 10
  | -   |  |

+--+---+--+--+-+--+

On Fri, Jul 8, 2016 at 11:33 AM, Gaurav Goyal 
wrote:

> Hi Kees,
>
> I regenerated the UUID as per your suggestion.
> Now i have same UUID in host1 and host2.
> I could create volumes and attach them to existing VMs.
>
> I could create new glance images.
>
> But still 

Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
Hi Kees,

I regenerated the UUID as per your suggestion.
Now I have the same UUID on host1 and host2.
I could create volumes and attach them to existing VMs.

I could create new Glance images.

But I am still getting the same error when launching an instance via the GUI.


2016-07-08 11:23:25.067 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Auditing locally
available compute resources for node controller

2016-07-08 11:23:25.527 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Total usable vcpus:
40, total allocated vcpus: 0

2016-07-08 11:23:25.527 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Final resource view:
name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
used_disk=1GB total_vcpus=40 used_vcpus=0 pci_stats=None

2016-07-08 11:23:25.560 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Compute_service record
updated for OSKVM1:controller

2016-07-08 11:24:25.065 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Auditing locally
available compute resources for node controller

2016-07-08 11:24:25.561 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Total usable vcpus:
40, total allocated vcpus: 0

2016-07-08 11:24:25.562 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Final resource view:
name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
used_disk=1GB total_vcpus=40 used_vcpus=0 pci_stats=None

2016-07-08 11:24:25.603 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Compute_service record
updated for OSKVM1:controller

2016-07-08 11:25:18.138 86007 INFO nova.compute.manager
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Starting instance...

2016-07-08 11:25:18.255 86007 INFO nova.compute.claims
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Attempting claim: memory 512 MB, disk
1 GB

2016-07-08 11:25:18.255 86007 INFO nova.compute.claims
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Total memory: 193168 MB, used:
1024.00 MB

2016-07-08 11:25:18.256 86007 INFO nova.compute.claims
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] memory limit: 289752.00 MB, free:
288728.00 MB

2016-07-08 11:25:18.256 86007 INFO nova.compute.claims
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Total disk: 8168 GB, used: 1.00 GB

2016-07-08 11:25:18.257 86007 INFO nova.compute.claims
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] disk limit: 8168.00 GB, free: 8167.00
GB

2016-07-08 11:25:18.271 86007 INFO nova.compute.claims
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Claim successful

2016-07-08 11:25:18.747 86007 INFO nova.virt.libvirt.driver
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Creating image

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager
[req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Instance failed to spawn

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] Traceback (most recent call last):

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2156, in
_build_resources

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7] yield resources

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in
_build_and_run_instance

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
bf4839c8-2af6-4959-9158-fe411e1cfae7]
block_device_info=block_device_info)

2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager 

Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
Hi Kees,

Thanks for your help!

Node 1 controller + compute

-rw-r--r-- 1 root   root63 Jul  5 12:59 ceph.client.admin.keyring

-rw-r--r-- 1 glance glance  64 Jul  5 14:51 ceph.client.glance.keyring

-rw-r--r-- 1 cinder cinder  64 Jul  5 14:53 ceph.client.cinder.keyring

-rw-r--r-- 1 cinder cinder  71 Jul  5 14:54
ceph.client.cinder-backup.keyring

Node 2 compute2

-rw-r--r--  1 root root  63 Jul  5 12:59 ceph.client.admin.keyring

-rw-r--r--  1 root root  64 Jul  5 14:57 ceph.client.cinder.keyring

[root@OSKVM2 ceph]# chown cinder:cinder ceph.client.cinder.keyring

chown: invalid user: ‘cinder:cinder’


For the section below, should I generate a separate UUID for each compute host?

I executed uuidgen on host1 and put the same UUID on the second one. I need
your help to get rid of this problem.

Then, on the compute nodes, add the secret key to libvirt and remove the
temporary copy of the key:

uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
  --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
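A quick way to verify the secret is actually defined on each compute node
(reusing the example UUID above):

virsh secret-list
virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337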

Moreover, I do not find the libvirtd group.

[root@OSKVM1 ceph]# chown qemu:libvirtd /var/run/ceph/guests/

chown: invalid group: ‘qemu:libvirtd’


Regards

Gaurav Goyal

On Fri, Jul 8, 2016 at 9:40 AM, Kees Meijs  wrote:

> Hi Gaurav,
>
> Have you distributed your Ceph authentication keys to your compute
> nodes? And, do they have the correct permissions in terms of Ceph?
>
> K.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-08 Thread Kees Meijs
Hi,

I'd recommend generating a UUID and using it for all your compute nodes.
This way, you can keep your libvirt configuration constant.

Regards,
Kees

On 08-07-16 16:15, Gaurav Goyal wrote:
>
> For below section, should i generate separate UUID for both compte hosts? 
>

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-08 Thread Kees Meijs
Hi Gaurav,

Have you distributed your Ceph authentication keys to your compute
nodes? And, do they have the correct permissions in terms of Ceph?
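For reference, the rbd-openstack guide creates the cinder key with caps along
these lines (pool names are the usual examples; adjust to your own pools):

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'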

K.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2016-07-08 Thread Gaurav Goyal
Hello,

Thanks, I could restore my Cinder service.

But while trying to launch an instance, I am getting the same error.
Can you please help me figure out what I am doing wrong?

2016-07-08 09:28:31.368 31909 INFO nova.compute.manager
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Starting instance...

2016-07-08 09:28:31.484 31909 INFO nova.compute.claims
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Attempting claim: memory 512 MB, disk
1 GB

2016-07-08 09:28:31.485 31909 INFO nova.compute.claims
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Total memory: 193168 MB, used:
1024.00 MB

2016-07-08 09:28:31.485 31909 INFO nova.compute.claims
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] memory limit: 289752.00 MB, free:
288728.00 MB

2016-07-08 09:28:31.485 31909 INFO nova.compute.claims
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Total disk: 8168 GB, used: 1.00 GB

2016-07-08 09:28:31.486 31909 INFO nova.compute.claims
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] disk limit: 8168.00 GB, free: 8167.00
GB

2016-07-08 09:28:31.503 31909 INFO nova.compute.claims
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Claim successful

2016-07-08 09:28:31.985 31909 INFO nova.virt.libvirt.driver
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Creating image

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager
[req-c56770a7-5bab-426b-b763-7473254c6410 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Instance failed to spawn

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] Traceback (most recent call last):

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88]   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2156, in
_build_resources

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] yield resources

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88]   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in
_build_and_run_instance

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88]
block_device_info=block_device_info)

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2527,
in spawn

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] admin_pass=admin_password)

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2953,
in _create_image

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] instance, size,
fallback_from_host)

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6406,
in _try_fetch_image_cache

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] size=size)

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
240, in cache

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] *args, **kwargs)

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
811, in create_image

2016-07-08 09:28:32.573 31909 ERROR nova.compute.manager [instance:
ded46ee2-9c8f-45f7-b29f-a2d6e0e08b88] prepare_template(target=base,
max_size=size, 

Re: [ceph-users] (no subject)

2016-07-08 Thread Fran Barrera
Hello,

You only need to create a pool and authentication in Ceph for Cinder.

Your configuration should be like this (This is an example configuration
with Ceph Jewel and Openstack Mitaka):


[DEFAULT]
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = c35bd3d8-ec12-2052-9672d-334824635616

And then remove the Cinder database, recreate it, and populate it with
"cinder-manage db sync". Finally, restart the Cinder services and everything
should work fine.
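A rough sketch of that sequence (destructive, so only on a test deployment;
database credentials and service names vary per distro):

mysql -u root -p -e 'DROP DATABASE cinder; CREATE DATABASE cinder;'
su -s /bin/sh -c 'cinder-manage db sync' cinder
systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume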


Regards,
Fran.

2016-07-08 8:18 GMT+02:00 Kees Meijs :

> Hi Gaurav,
>
> The following snippets should suffice (for Cinder, at least):
>
> [DEFAULT]
> enabled_backends=rbd
>
> [rbd]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> rbd_pool = cinder-volumes
> rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_flatten_volume_from_snapshot = false
> rbd_max_clone_depth = 5
> rbd_store_chunk_size = 4
> rados_connect_timeout = -1
> rbd_user = cinder
> rbd_secret = REDACTED
>
> backup_driver = cinder.backup.drivers.ceph
> backup_ceph_conf = /etc/ceph/ceph.conf
> backup_ceph_user = cinder-backup
> backup_ceph_chunk_size = 134217728
> backup_ceph_pool = backups
> backup_ceph_stripe_unit = 0
> backup_ceph_stripe_count = 0
> restore_discard_excess_bytes = true
>
>
> Obviously you'd alter the directives according to your configuration
> and/or wishes.
>
> And no, creating RBD volumes by hand is not needed. Cinder will do this
> for you.
>
> K.
>
> On 08-07-16 04:14, Gaurav Goyal wrote:
>
> Yeah i didnt find additional section for [ceph] in my cinder.conf file.
> Should i create that manually?
> As i didnt find [ceph] section so i modified same parameters in [DEFAULT]
> section.
> I will change that as per your suggestion.
>
> Moreoevr checking some other links i got to know that, i must configure
> following additional parameters
> should i do that and install tgtadm package?
>
> rootwrap_config = /etc/cinder/rootwrap.conf
> api_paste_confg = /etc/cinder/api-paste.ini
> iscsi_helper = tgtadm
> volume_name_template = volume-%s
> volume_group = cinder-volumes
>
> Do i need to execute following commands?
>
> "pvcreate /dev/rbd1" &"vgcreate cinder-volumes /dev/rbd1"
>
>
>


Re: [ceph-users] (no subject)

2016-07-08 Thread Kees Meijs
Hi Gaurav,

The following snippets should suffice (for Cinder, at least):
> [DEFAULT]
> enabled_backends=rbd
>
> [rbd]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> rbd_pool = cinder-volumes
> rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_flatten_volume_from_snapshot = false
> rbd_max_clone_depth = 5
> rbd_store_chunk_size = 4
> rados_connect_timeout = -1
> rbd_user = cinder
> rbd_secret = REDACTED
>
> backup_driver = cinder.backup.drivers.ceph
> backup_ceph_conf = /etc/ceph/ceph.conf
> backup_ceph_user = cinder-backup
> backup_ceph_chunk_size = 134217728
> backup_ceph_pool = backups
> backup_ceph_stripe_unit = 0
> backup_ceph_stripe_count = 0
> restore_discard_excess_bytes = true

Obviously you'd alter the directives according to your configuration
and/or wishes.

And no, creating RBD volumes by hand is not needed. Cinder will do this
for you.

K.

On 08-07-16 04:14, Gaurav Goyal wrote:
> Yeah i didnt find additional section for [ceph] in my cinder.conf
> file. Should i create that manually? 
> As i didnt find [ceph] section so i modified same parameters in
> [DEFAULT] section.
> I will change that as per your suggestion.
>
> Moreoevr checking some other links i got to know that, i must
> configure following additional parameters
> should i do that and install tgtadm package?
> rootwrap_config = /etc/cinder/rootwrap.conf
> api_paste_confg = /etc/cinder/api-paste.ini
> iscsi_helper = tgtadm
> volume_name_template = volume-%s
> volume_group = cinder-volumes
> Do i need to execute following commands? 
> "pvcreate /dev/rbd1" &
> "vgcreate cinder-volumes /dev/rbd1" 



Re: [ceph-users] (no subject)

2016-07-07 Thread Gaurav Goyal
Thanks for the verification!

Yeah, I didn't find an additional [ceph] section in my cinder.conf file.
Should I create that manually?
As I didn't find a [ceph] section, I modified the same parameters in the [DEFAULT]
section.
I will change that as per your suggestion.

Moreover, checking some other links I got to know that I must configure the
following additional parameters.
Should I do that and install the tgtadm package?

rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes

Do i need to execute following commands?

"pvcreate /dev/rbd1" &
"vgcreate cinder-volumes /dev/rbd1"


Regards

Gaurav Goyal



On Thu, Jul 7, 2016 at 10:02 PM, Jason Dillaman  wrote:

> These lines from your log output indicates you are configured to use LVM
> as a cinder backend.
>
> > 2016-07-07 16:20:31.966 32549 INFO cinder.volume.manager
> [req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Starting volume
> driver LVMVolumeDriver (3.0.0)
> > 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Command: sudo
> cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o
> name cinder-volumes
>
> Looking at your provided configuration, I don't see a "[ceph]"
> configuration section. Here is a configuration example [1] for Cinder.
>
> [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-cinder
>
> On Thu, Jul 7, 2016 at 9:35 PM, Gaurav Goyal 
> wrote:
>
>> Hi Kees/Fran,
>>
>>
>> Do you find any issue in my cinder.conf file?
>>
>> it says Volume group "cinder-volumes" not found. When to configure this
>> volume group?
>>
>> I have done ceph configuration for nova creation.
>> But i am still facing the same error .
>>
>>
>>
>> */var/log/cinder/volume.log*
>>
>> 2016-07-07 16:20:13.765 136259 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:20:23.770 136259 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:20:30.789 136259 WARNING oslo_messaging.server [-]
>> start/stop/wait must be called in the same thread
>>
>> 2016-07-07 16:20:30.791 136259 WARNING oslo_messaging.server
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] start/stop/wait must
>> be called in the same thread
>>
>> 2016-07-07 16:20:30.794 136247 INFO oslo_service.service
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Caught SIGTERM,
>> stopping children
>>
>> 2016-07-07 16:20:30.799 136247 INFO oslo_service.service
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Waiting on 1 children
>> to exit
>>
>> 2016-07-07 16:20:30.806 136247 INFO oslo_service.service
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Child 136259 killed by
>> signal 15
>>
>> 2016-07-07 16:20:31.950 32537 INFO cinder.volume.manager
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Determined volume DB
>> was not empty at startup.
>>
>> 2016-07-07 16:20:31.956 32537 INFO cinder.volume.manager
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Image-volume cache
>> disabled for host OSKVM1@ceph.
>>
>> 2016-07-07 16:20:31.957 32537 INFO oslo_service.service
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Starting 1 workers
>>
>> 2016-07-07 16:20:31.960 32537 INFO oslo_service.service
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Started child 32549
>>
>> 2016-07-07 16:20:31.963 32549 INFO cinder.service [-] Starting
>> cinder-volume node (version 7.0.1)
>>
>> 2016-07-07 16:20:31.966 32549 INFO cinder.volume.manager
>> [req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Starting volume driver
>> LVMVolumeDriver (3.0.0)
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>> [req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Failed to initialize
>> driver.
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Traceback (most
>> recent call last):
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 368, in
>> init_host
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>> self.driver.check_for_setup_error()
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in
>> wrapper
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager return
>> f(*args, **kwargs)
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 269,
>> in check_for_setup_error
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>> lvm_conf=lvm_conf_file)
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> 

Re: [ceph-users] (no subject)

2016-07-07 Thread Jason Dillaman
These lines from your log output indicates you are configured to use LVM as
a cinder backend.

> 2016-07-07 16:20:31.966 32549 INFO cinder.volume.manager
[req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Starting volume driver
LVMVolumeDriver (3.0.0)
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Command: sudo
cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o
name cinder-volumes

Looking at your provided configuration, I don't see a "[ceph]"
configuration section. Here is a configuration example [1] for Cinder.

[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-cinder

On Thu, Jul 7, 2016 at 9:35 PM, Gaurav Goyal 
wrote:

> Hi Kees/Fran,
>
>
> Do you find any issue in my cinder.conf file?
>
> it says Volume group "cinder-volumes" not found. When to configure this
> volume group?
>
> I have done ceph configuration for nova creation.
> But i am still facing the same error .
>
>
>
> */var/log/cinder/volume.log*
>
> 2016-07-07 16:20:13.765 136259 ERROR cinder.service [-] Manager for
> service cinder-volume OSKVM1@ceph is reporting problems, not sending
> heartbeat. Service will appear "down".
>
> 2016-07-07 16:20:23.770 136259 ERROR cinder.service [-] Manager for
> service cinder-volume OSKVM1@ceph is reporting problems, not sending
> heartbeat. Service will appear "down".
>
> 2016-07-07 16:20:30.789 136259 WARNING oslo_messaging.server [-]
> start/stop/wait must be called in the same thread
>
> 2016-07-07 16:20:30.791 136259 WARNING oslo_messaging.server
> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] start/stop/wait must
> be called in the same thread
>
> 2016-07-07 16:20:30.794 136247 INFO oslo_service.service
> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Caught SIGTERM,
> stopping children
>
> 2016-07-07 16:20:30.799 136247 INFO oslo_service.service
> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Waiting on 1 children
> to exit
>
> 2016-07-07 16:20:30.806 136247 INFO oslo_service.service
> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Child 136259 killed by
> signal 15
>
> 2016-07-07 16:20:31.950 32537 INFO cinder.volume.manager
> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Determined volume DB
> was not empty at startup.
>
> 2016-07-07 16:20:31.956 32537 INFO cinder.volume.manager
> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Image-volume cache
> disabled for host OSKVM1@ceph.
>
> 2016-07-07 16:20:31.957 32537 INFO oslo_service.service
> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Starting 1 workers
>
> 2016-07-07 16:20:31.960 32537 INFO oslo_service.service
> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Started child 32549
>
> 2016-07-07 16:20:31.963 32549 INFO cinder.service [-] Starting
> cinder-volume node (version 7.0.1)
>
> 2016-07-07 16:20:31.966 32549 INFO cinder.volume.manager
> [req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Starting volume driver
> LVMVolumeDriver (3.0.0)
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
> [req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Failed to initialize
> driver.
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Traceback (most
> recent call last):
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
> "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 368, in
> init_host
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
> self.driver.check_for_setup_error()
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
> "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in
> wrapper
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager return
> f(*args, **kwargs)
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
> "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 269,
> in check_for_setup_error
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
> lvm_conf=lvm_conf_file)
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
> "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 86,
> in __init__
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager if
> self._vg_exists() is False:
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
> "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 123,
> in _vg_exists
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
> run_as_root=True)
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
> "/usr/lib/python2.7/site-packages/cinder/utils.py", line 155, in execute
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager return
> processutils.execute(*cmd, **kwargs)
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
> "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line
> 275, in execute
>
> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
> 

Re: [ceph-users] (no subject)

2016-07-07 Thread Gaurav Goyal
Hi Kees/Fran,


Do you find any issue in my cinder.conf file?

It says Volume group "cinder-volumes" not found. When should this
volume group be configured?

I have done the ceph configuration for nova creation,
but I am still facing the same error.



*/var/log/cinder/volume.log*

2016-07-07 16:20:13.765 136259 ERROR cinder.service [-] Manager for service
cinder-volume OSKVM1@ceph is reporting problems, not sending heartbeat.
Service will appear "down".

2016-07-07 16:20:23.770 136259 ERROR cinder.service [-] Manager for service
cinder-volume OSKVM1@ceph is reporting problems, not sending heartbeat.
Service will appear "down".

2016-07-07 16:20:30.789 136259 WARNING oslo_messaging.server [-]
start/stop/wait must be called in the same thread

2016-07-07 16:20:30.791 136259 WARNING oslo_messaging.server
[req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] start/stop/wait must
be called in the same thread

2016-07-07 16:20:30.794 136247 INFO oslo_service.service
[req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Caught SIGTERM,
stopping children

2016-07-07 16:20:30.799 136247 INFO oslo_service.service
[req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Waiting on 1 children
to exit

2016-07-07 16:20:30.806 136247 INFO oslo_service.service
[req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Child 136259 killed by
signal 15

2016-07-07 16:20:31.950 32537 INFO cinder.volume.manager
[req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Determined volume DB
was not empty at startup.

2016-07-07 16:20:31.956 32537 INFO cinder.volume.manager
[req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Image-volume cache
disabled for host OSKVM1@ceph.

2016-07-07 16:20:31.957 32537 INFO oslo_service.service
[req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Starting 1 workers

2016-07-07 16:20:31.960 32537 INFO oslo_service.service
[req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Started child 32549

2016-07-07 16:20:31.963 32549 INFO cinder.service [-] Starting
cinder-volume node (version 7.0.1)

2016-07-07 16:20:31.966 32549 INFO cinder.volume.manager
[req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Starting volume driver
LVMVolumeDriver (3.0.0)

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
[req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Failed to initialize
driver.

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Traceback (most
recent call last):

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
"/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 368, in
init_host

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
self.driver.check_for_setup_error()

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in
wrapper

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager return
f(*args, **kwargs)

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
"/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 269,
in check_for_setup_error

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
lvm_conf=lvm_conf_file)

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
"/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 86,
in __init__

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager if
self._vg_exists() is False:

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
"/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 123,
in _vg_exists

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
run_as_root=True)

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
"/usr/lib/python2.7/site-packages/cinder/utils.py", line 155, in execute

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager return
processutils.execute(*cmd, **kwargs)

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
"/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line
275, in execute

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
cmd=sanitized_cmd)

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
ProcessExecutionError: Unexpected error while running command.

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Command: sudo
cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o
name cinder-volumes

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Exit code: 5

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Stdout: u''

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Stderr: u'
Volume group "cinder-volumes" not found\n  Cannot process volume group
cinder-volumes\n'

2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager

2016-07-07 16:20:32.108 32549 INFO oslo.messaging._drivers.impl_rabbit
[req-7e229d1f-06af-4b60-8e15-1f8c0e6eb084 - - - - -] Connecting to AMQP
server on controller:5672

2016-07-07 16:20:32.125 32549 INFO oslo.messaging._drivers.impl_rabbit

Re: [ceph-users] (no subject)

2016-07-07 Thread Gaurav Goyal
Hi Fran,

Here is my cinder.conf file. Please help to analyze it.

Do i need to create volume group as mentioned in this link
http://docs.openstack.org/liberty/install-guide-rdo/cinder-storage-install.html


[root@OSKVM1 ~]# grep -v "^#" /etc/cinder/cinder.conf|grep -v ^$

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 10.24.0.4

notification_driver = messagingv2

backup_ceph_conf = /etc/ceph/ceph.conf

backup_ceph_user = cinder-backup

backup_ceph_chunk_size = 134217728

backup_ceph_pool = backups

backup_ceph_stripe_unit = 0

backup_ceph_stripe_count = 0

restore_discard_excess_bytes = true

backup_driver = cinder.backup.drivers.ceph

glance_api_version = 2

enabled_backends = ceph

rbd_pool = volumes

rbd_user = cinder

rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_flatten_volume_from_snapshot = false

rbd_secret_uuid = a536c85f-d660-4c25-a840-e321c09e7941

rbd_max_clone_depth = 5

rbd_store_chunk_size = 4

rados_connect_timeout = -1

volume_driver = cinder.volume.drivers.rbd.RBDDriver

[BRCD_FABRIC_EXAMPLE]

[CISCO_FABRIC_EXAMPLE]

[cors]

[cors.subdomain]

[database]

connection = mysql://cinder:cinder@controller/cinder

[fc-zone-manager]

[keymgr]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = cinder

[matchmaker_redis]

[matchmaker_ring]

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp

[oslo_messaging_amqp]

[oslo_messaging_qpid]

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = 

[oslo_middleware]

[oslo_policy]

[oslo_reports]

[profiler]
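
For comparison, enabled_backends = ceph makes cinder look for a [ceph] section
(this is what Jason points out elsewhere in this thread); a minimal sketch of the
same values rearranged that way, leaving the backup_* and keystone settings where
they are, would be:

[DEFAULT]
...
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = a536c85f-d660-4c25-a840-e321c09e7941
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2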

On Thu, Jul 7, 2016 at 11:38 AM, Fran Barrera 
wrote:

> Hello,
>
> Are you configured these two paremeters in cinder.conf?
>
> rbd_user
> rbd_secret_uuid
>
> Regards.
>
> 2016-07-07 15:39 GMT+02:00 Gaurav Goyal :
>
>> Hello Mr. Kees,
>>
>> Thanks for your response!
>>
>> My setup is
>>
>> Openstack Node 1 -> controller + network + compute1 (Liberty Version)
>> Openstack node 2 --> Compute2
>>
>> Ceph version Hammer
>>
>> I am using dell storage with following status
>>
>> DELL SAN storage is attached to both hosts as
>>
>> [root@OSKVM1 ~]# iscsiadm -m node
>>
>> 10.35.0.3:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1
>>
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1
>>
>> 10.35.0.*:3260,-1
>> iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2
>>
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2
>>
>> 10.35.0.*:3260,-1
>> iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3
>>
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3
>>
>> 10.35.0.*:3260,-1
>> iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4
>> 10.35.0.8:3260,1
>> iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4
>>
>>
>> Since in my setup same LUNs are MAPPED to both hosts
>>
>> i choose 2 LUNS on Openstack Node 1 and 2 on Openstack Node 2
>>
>>
>> *Node1 has *
>>
>> /dev/sdc12.0T  3.1G  2.0T   1% /var/lib/ceph/osd/ceph-0
>>
>> /dev/sdd12.0T  3.8G  2.0T   1% /var/lib/ceph/osd/ceph-1
>>
>> *Node 2 has *
>>
>> /dev/sdd12.0T  3.4G  2.0T   1% /var/lib/ceph/osd/ceph-2
>>
>> /dev/sde12.0T  3.5G  2.0T   1% /var/lib/ceph/osd/ceph-3
>>
>> [root@OSKVM1 ~]# ceph status
>>
>> cluster 9f923089-a6c0-4169-ace8-ad8cc4cca116
>>
>>  health HEALTH_WARN
>>
>> mon.OSKVM1 low disk space
>>
>>  monmap e1: 1 mons at {OSKVM1=10.24.0.4:6789/0}
>>
>> election epoch 1, quorum 0 OSKVM1
>>
>>  osdmap e40: 4 osds: 4 up, 4 in
>>
>>   pgmap v1154: 576 pgs, 5 pools, 6849 MB data, 860 objects
>>
>> 13857 MB used, 8154 GB / 8168 GB avail
>>
>>  576 active+clean
>>
>> *Can you please help me to know if it is correct configuration as per my
>> setup?*
>>
>> After this setup, i am trying to configure Cinder and Glance to use RBD
>> for a backend.
>> Glance image is already stored in RBD.
>> Following this link http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>>
>> I have managed to install glance image in rbd. But i am finding some
>> issue in cinder configuration. Can you please help me on this?
>> As per link, i need to configure these parameters under [ceph] but i do
>> not have different section for [ceph]. infact i could find all these
>> parameters under [DEFAULT]. Is it ok to configure them under [DEFAULT].
>> CONFIGURING CINDER
>> 
>>
>> OpenStack requires a driver to interact with Ceph block devices. You must
>> also specify the pool name for the block device. On your OpenStack node,
>> 

Re: [ceph-users] (no subject)

2016-07-07 Thread Fran Barrera
Hello,

Have you configured these two parameters in cinder.conf?

rbd_user
rbd_secret_uuid
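
For context, rbd_secret_uuid normally points at a libvirt secret holding the
client.cinder key on each compute node; a rough sketch along the lines of the
rbd-openstack guide (the UUID and key file below are placeholders):

uuidgen   # e.g. 457eb676-33da-42ec-9a8c-9293d545c337
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key)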

Regards.

2016-07-07 15:39 GMT+02:00 Gaurav Goyal :

> Hello Mr. Kees,
>
> Thanks for your response!
>
> My setup is
>
> Openstack Node 1 -> controller + network + compute1 (Liberty Version)
> Openstack node 2 --> Compute2
>
> Ceph version Hammer
>
> I am using dell storage with following status
>
> DELL SAN storage is attached to both hosts as
>
> [root@OSKVM1 ~]# iscsiadm -m node
>
> 10.35.0.3:3260,1
> iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1
>
> 10.35.0.8:3260,1
> iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1
>
> 10.35.0.*:3260,-1
> iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2
>
> 10.35.0.8:3260,1
> iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2
>
> 10.35.0.*:3260,-1
> iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3
>
> 10.35.0.8:3260,1
> iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3
>
> 10.35.0.*:3260,-1
> iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4
> 10.35.0.8:3260,1
> iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4
>
>
> Since in my setup same LUNs are MAPPED to both hosts
>
> i choose 2 LUNS on Openstack Node 1 and 2 on Openstack Node 2
>
>
> *Node1 has *
>
> /dev/sdc12.0T  3.1G  2.0T   1% /var/lib/ceph/osd/ceph-0
>
> /dev/sdd12.0T  3.8G  2.0T   1% /var/lib/ceph/osd/ceph-1
>
> *Node 2 has *
>
> /dev/sdd12.0T  3.4G  2.0T   1% /var/lib/ceph/osd/ceph-2
>
> /dev/sde12.0T  3.5G  2.0T   1% /var/lib/ceph/osd/ceph-3
>
> [root@OSKVM1 ~]# ceph status
>
> cluster 9f923089-a6c0-4169-ace8-ad8cc4cca116
>
>  health HEALTH_WARN
>
> mon.OSKVM1 low disk space
>
>  monmap e1: 1 mons at {OSKVM1=10.24.0.4:6789/0}
>
> election epoch 1, quorum 0 OSKVM1
>
>  osdmap e40: 4 osds: 4 up, 4 in
>
>   pgmap v1154: 576 pgs, 5 pools, 6849 MB data, 860 objects
>
> 13857 MB used, 8154 GB / 8168 GB avail
>
>  576 active+clean
>
> *Can you please help me to know if it is correct configuration as per my
> setup?*
>
> After this setup, i am trying to configure Cinder and Glance to use RBD
> for a backend.
> Glance image is already stored in RBD.
> Following this link http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>
> I have managed to install glance image in rbd. But i am finding some issue
> in cinder configuration. Can you please help me on this?
> As per link, i need to configure these parameters under [ceph] but i do
> not have different section for [ceph]. infact i could find all these
> parameters under [DEFAULT]. Is it ok to configure them under [DEFAULT].
> CONFIGURING CINDER
> 
>
> OpenStack requires a driver to interact with Ceph block devices. You must
> also specify the pool name for the block device. On your OpenStack node,
> edit/etc/cinder/cinder.conf by adding:
>
> [DEFAULT]
> ...
> enabled_backends = ceph
> ...
> [ceph]
> volume_driver = cinder.volume.drivers.rbd.RBDDriver
> rbd_pool = volumes
> rbd_ceph_conf = /etc/ceph/ceph.conf
> rbd_flatten_volume_from_snapshot = false
> rbd_max_clone_depth = 5
> rbd_store_chunk_size = 4
> rados_connect_timeout = -1
> glance_api_version = 2
>
> I find following error in cinder service status
>
> systemctl status openstack-cinder-volume.service
>
> Jul 07 09:37:01 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:01.058
> 136259 ERROR cinder.service [-] Manager for service cinder-volume
> OSKVM1@ceph is reporting problems, not sending heartbeat. Service will
> appear "down".
>
> Jul 07 09:37:02 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:02.040
> 136259 WARNING cinder.volume.manager
> [req-561ddd3c-9560-4374-a958-7a2c103af7ee - - - - -] Update driver status
> failed: (config name ceph) is uninitialized.
>
> Jul 07 09:37:11 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:11.059
> 136259 ERROR cinder.service [-] Manager for service cinder-volume
> OSKVM1@ceph is reporting problems, not sending heartbeat. Service will
> appear "down".
>
>
>
> [root@OSKVM2 ~]# rbd -p images ls
>
> a8b45c8a-a5c8-49d8-a529-1e4088bdbf3f
>
> [root@OSKVM2 ~]# rados df
>
> pool name KB  objects   clones degraded
> unfound   rdrd KB   wrwr KB
>
> backups0000
> 00000
>
> images   7013377  86000
> 0 9486 7758 2580  7013377
>
> rbd0000
> 00000
>
> vms0000
> 0000

Re: [ceph-users] (no subject)

2016-07-07 Thread Gaurav Goyal
Hello Mr. Kees,

Thanks for your response!

My setup is

Openstack Node 1 -> controller + network + compute1 (Liberty Version)
Openstack node 2 --> Compute2

Ceph version Hammer

I am using dell storage with following status

DELL SAN storage is attached to both hosts as

[root@OSKVM1 ~]# iscsiadm -m node

10.35.0.3:3260,1
iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1

10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-07a83c107-4770018575af-vol1

10.35.0.*:3260,-1
iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2

10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-20d83c107-729002157606-vol2

10.35.0.*:3260,-1
iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3

10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a00245761a-vol3

10.35.0.*:3260,-1
iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4
10.35.0.8:3260,1
iqn.2001-05.com.equallogic:0-1cb196-fda83c107-92700275761a-vol4


Since in my setup the same LUNs are mapped to both hosts,

I chose 2 LUNs on Openstack Node 1 and 2 on Openstack Node 2.


*Node1 has *

/dev/sdc1   2.0T  3.1G  2.0T   1% /var/lib/ceph/osd/ceph-0

/dev/sdd1   2.0T  3.8G  2.0T   1% /var/lib/ceph/osd/ceph-1

*Node 2 has *

/dev/sdd1   2.0T  3.4G  2.0T   1% /var/lib/ceph/osd/ceph-2

/dev/sde1   2.0T  3.5G  2.0T   1% /var/lib/ceph/osd/ceph-3

[root@OSKVM1 ~]# ceph status

cluster 9f923089-a6c0-4169-ace8-ad8cc4cca116

 health HEALTH_WARN

mon.OSKVM1 low disk space

 monmap e1: 1 mons at {OSKVM1=10.24.0.4:6789/0}

election epoch 1, quorum 0 OSKVM1

 osdmap e40: 4 osds: 4 up, 4 in

  pgmap v1154: 576 pgs, 5 pools, 6849 MB data, 860 objects

13857 MB used, 8154 GB / 8168 GB avail

 576 active+clean

*Can you please help me confirm whether this configuration is correct for my
setup?*

After this setup, i am trying to configure Cinder and Glance to use RBD for
a backend.
Glance image is already stored in RBD.
Following this link http://docs.ceph.com/docs/master/rbd/rbd-openstack/

I have managed to store the glance image in rbd, but I am finding some issues
in the cinder configuration. Can you please help me with this?
As per the link, I need to configure these parameters under [ceph], but I do not
have a separate [ceph] section; in fact I could find all these parameters
under [DEFAULT]. Is it OK to configure them under [DEFAULT]?
CONFIGURING CINDER


OpenStack requires a driver to interact with Ceph block devices. You must
also specify the pool name for the block device. On your OpenStack node,
edit/etc/cinder/cinder.conf by adding:

[DEFAULT]
...
enabled_backends = ceph
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2

I find following error in cinder service status

systemctl status openstack-cinder-volume.service

Jul 07 09:37:01 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:01.058
136259 ERROR cinder.service [-] Manager for service cinder-volume
OSKVM1@ceph is reporting problems, not sending heartbeat. Service will
appear "down".

Jul 07 09:37:02 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:02.040
136259 WARNING cinder.volume.manager
[req-561ddd3c-9560-4374-a958-7a2c103af7ee - - - - -] Update driver status
failed: (config name ceph) is uninitialized.

Jul 07 09:37:11 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:11.059
136259 ERROR cinder.service [-] Manager for service cinder-volume
OSKVM1@ceph is reporting problems, not sending heartbeat. Service will
appear "down".



[root@OSKVM2 ~]# rbd -p images ls

a8b45c8a-a5c8-49d8-a529-1e4088bdbf3f

[root@OSKVM2 ~]# rados df

pool name       KB          objects  clones  degraded  unfound  rd    rd KB  wr    wr KB
backups         0           0        0       0         0        0     0      0     0
images          7013377     860      0       0         0        9486  7758   2580  7013377
rbd             0           0        0       0         0        0     0      0     0
vms             0           0        0       0         0        0     0      0     0
volumes         0           0        0       0         0        0     0      0     0
  total used    14190236    860
  total avail   8550637828
  total space   8564828064




[root@OSKVM2 ~]# ceph auth list

installed auth entries:


mds.OSKVM1

key: AQCK6XtXNBFdDBAAXmX73gBqK3lyakSxxP+XjA==

caps: [mds] allow

caps: [mon] allow profile mds

caps: [osd] allow rwx
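
For completeness: the client.cinder user that rbd_user refers to is normally
created with caps along these lines (a sketch from the rbd-openstack guide,
using the pool names above):

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'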


Re: [ceph-users] (no subject)

2016-07-07 Thread Kees Meijs
Hi Gaurav,

Unfortunately I'm not completely sure about your setup, but I guess it
makes sense to configure Cinder and Glance to use RBD for a backend. It
seems to me, you're trying to store VM images directly on an OSD filesystem.

Please refer to http://docs.ceph.com/docs/master/rbd/rbd-openstack/ for
details.
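
If the aim is to run the instance disks themselves from Ceph (rather than
symlinking /var/lib/nova/instances onto an OSD mount), the Nova side of that
guide boils down to roughly this in nova.conf on each compute node (pool, user
and UUID below are examples):

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337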

Regards,
Kees

On 06-07-16 23:03, Gaurav Goyal wrote:
>
> I am installing ceph hammer and integrating it with openstack Liberty
> for the first time.
>
> My local disk has only 500 GB but i need to create 600 GB VM. SO i
> have created a soft link to ceph filesystem as
>
> lrwxrwxrwx 1 root root 34 Jul 6 13:02 instances ->
> /var/lib/ceph/osd/ceph-0/instances [root@OSKVM1 nova]# pwd
> /var/lib/nova [root@OSKVM1 nova]#
>
> now when i am trying to create an instance it is giving the following
> error as checked from nova-compute.log
> I need your help to fix this issue.
>



[ceph-users] (no subject)

2016-07-06 Thread Gaurav Goyal
Hi,

I am installing ceph hammer and integrating it with openstack Liberty for
the first time.

My local disk has only 500 GB but I need to create a 600 GB VM, so I have
created a soft link to the ceph filesystem as

lrwxrwxrwx 1 root root 34 Jul 6 13:02 instances -> /var/lib/ceph/osd/ceph-0/instances
[root@OSKVM1 nova]# pwd
/var/lib/nova
[root@OSKVM1 nova]#

Now when I am trying to create an instance, it gives the following error,
as checked from nova-compute.log.
I need your help to fix this issue.

2016-07-06 15:49:31.554 136121 INFO nova.compute.manager
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Starting instance... 2016-07-06
15:49:31.655 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Attempting claim: memory 512 MB, disk
1 GB 2016-07-06 15:49:31.656 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Total memory: 193168 MB, used:
1024.00 MB 2016-07-06 15:49:31.656 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] memory limit: 289752.00 MB, free:
288728.00 MB 2016-07-06 15:49:31.657 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Total disk: 2042 GB, used: 1.00 GB
2016-07-06 15:49:31.657 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] disk limit: 2042.00 GB, free: 2041.00
GB 2016-07-06 15:49:31.673 136121 INFO nova.compute.claims
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Claim successful 2016-07-06
15:49:32.154 136121 INFO nova.virt.libvirt.driver
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Creating image 2016-07-06
15:49:32.343 136121 ERROR nova.compute.manager
[req-f24ce706-c846-4bae-bb35-9cfeef522acf db68bdf363ea4358a3d3c22bcfe18d13
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Instance failed to spawn 2016-07-06
15:49:32.343 136121 ERROR nova.compute.manager [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] Traceback (most recent call last):
2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2156, in
_build_resources 2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager
[instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] yield resources 2016-07-06
15:49:32.343 136121 ERROR nova.compute.manager [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in
_build_and_run_instance 2016-07-06 15:49:32.343 136121 ERROR
nova.compute.manager [instance: 27fa3fa0-b290-4a84-8172-8db03764dd67]
block_device_info=block_device_info) 2016-07-06 15:49:32.343 136121 ERROR
nova.compute.manager [instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2527,
in spawn 2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager
[instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] admin_pass=admin_password)
2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager [instance:
27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2953,
in _create_image 2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager
[instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] instance, size,
fallback_from_host) 2016-07-06 15:49:32.343 136121 ERROR
nova.compute.manager [instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6406,
in _try_fetch_image_cache 2016-07-06 15:49:32.343 136121 ERROR
nova.compute.manager [instance: 27fa3fa0-b290-4a84-8172-8db03764dd67]
size=size) 2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager
[instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
240, in cache 2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager
[instance: 27fa3fa0-b290-4a84-8172-8db03764dd67] *args, **kwargs)
2016-07-06 15:49:32.343 136121 ERROR nova.compute.manager 

[ceph-users] (no subject)

2016-05-17 Thread Bruce

unsubscribe ceph-users


[ceph-users] (no subject)

2015-11-24 Thread James Gallagher
Hi there,

I'm currently following the Ceph QSGs; I've finished the
Storage Cluster Quick Start and have the following topology:

admin-node - node1 (mon, mds)
  - node2 (osd0)
  - node3 (osd1)

I am now looking to continue by creating a block device and then implementing
CephFS. However, I was wondering whether I should add a new machine to the
topology, a 'client-machine', or whether this role should double up with node1
(the monitor and metadata server), because the guide doesn't say?

Thanks,
James


[ceph-users] (no subject)

2015-11-12 Thread James Gallagher
Hi, I'm having issues activating my OSDs. I have provided the output of the
fault. I can see that the error message says that the connection is
timing out; however, I am struggling to understand why, as I have followed
each stage within the quick start guide. For example, I can ping node1
(which is the monitor) from all nodes and I can ssh w/o password to it
too. Other things like requiretty have been disabled. I have also opened
the ports on the firewall for 6789 and 6800-7300. I have also ensured that
the ceph-deploy mon create-initial command has been issued too.

Is there anything else that could possibly be preventing the monitor node
from communicating?

Furthermore, I tried a reboot of all nodes. Once the nodes came online
again, I tried again. However, on this occasion I am met with an error due to an
fsid mismatch. It should be noted that I retried the mon create-initial
command with the --overwrite option to get past another error.

Issue 1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
node2:/var/local/osd0: node3:/var/local/osd1:
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] activating host node2 disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[node2][INFO  ] Running command: sudo ceph-disk -v activate --mark-init
sysvinit --mount /var/local/osd0
[node2][WARNING] DEBUG:ceph-disk:Cluster uuid is
5f94aba1-e6e1-41e0-bfa1-4030cf24fda8
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
--cluster=ceph --show-config-value=fsid
[node2][WARNING] DEBUG:ceph-disk:Cluster name is ceph
[node2][WARNING] DEBUG:ceph-disk:OSD uuid is
183274c6-4941-4f47-97f9-b7bbcd0bdbac
[node2][WARNING] DEBUG:ceph-disk:Allocating OSD id...
[node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster
ceph --name client.bootstrap-osd --keyring
/var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise
183274c6-4941-4f47-97f9-b7bbcd0bdbac
[node2][WARNING] 2015-11-10 19:43:36.433091 7fd6717fa700  0 -- :/1019898 >>
192.168.43.11:6789/0 pipe(0x7fd67405e280 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7fd67405af80).fault
[node2][WARNING] 2015-11-10 19:43:39.434067 7fd6716f9700  0 -- :/1019898 >>
192.168.43.11:6789/0 pipe(0x7fd668000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7fd668004ef0).fault
[node2][WARNING] 2015-11-10 19:43:42.435321 7fd6717fa700  0 -- :/1019898 >>
192.168.43.11:6789/0 pipe(0x7fd6680081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7fd66800c450).fault
[node2][WARNING] 2015-11-10 19:43:45.436626 7fd6716f9700  0 -- :/1019898 >>
192.168.43.11:6789/0 pipe(0x7fd668000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7fd668006610).fault
[node2][WARNING] 2015-11-10 19:43:48.439097 7fd6717fa700  0 -- :/1019898 >>
192.168.43.11:6789/0 pipe(0x7fd6680081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7fd6680058b0).fault
[node2][WARNING] 2015-11-10 19:43:51.441654 7fd6716f9700  0 -- :/1019898 >>
192.168.43.11:6789/0 pipe(0x7fd668000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7fd668006f80).fault
[node2][WARNING] 2015-11-10 19:43:54.444116 7fd6717fa700  0 -- :/1019898 >>
192.168.43.11:6789/0 pipe(0x7fd6680081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7fd668007640).fault
[node2][WARNING] 2015-11-10 19:43:57.446729 7fd6716f9700  0 -- :/1019898 >>
192.168.43.11:6789/0 pipe(0x7fd668000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7fd668007120).fault

...

...

[node2][WARNING] 2015-11-06 02:18:52.138230 7f819c327700  0 -- :/1020092 >>
192.168.107.11:6789/0 pipe(0x7f81900081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7f8190016fa0).fault
[node2][WARNING] 2015-11-06 02:23:34.971917 7f819c327700  0 -- :/1020092 >>
192.168.107.11:6789/0 pipe(0x7f8190007c10 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7f81900120e0).fault
[node2][WARNING] 2015-11-06 02:23:38.340389 7f819c226700  0 -- :/1020092 >>
192.168.107.11:6789/0 pipe(0x7f8198c0 sd=4 :0 s=1 pgs=0 cs=0 l=1
c=0x7f8190016fa0).fault
[node2][WARNING] 2015-11-06 02:23:40.117327 7f819ea9c700  0
monclient(hunting): authenticate timed out after 300
[node2][WARNING] 2015-11-06 02:23:40.117766 7f819ea9c700  0 librados:
client.bootstrap-osd authentication error (110) Connection timed out
[node2][WARNING] Error connecting to cluster: TimedOut
[node2][WARNING] ceph-disk: Error: ceph osd create failed: Command
'/usr/bin/ceph' returned non-zero exit status 1:
[node2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v
activate --mark-init sysvinit --mount /var/local/osd0

Issue 2
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/user/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy --overwrite
osd activate node2:/var/local/osd0 node3:/var/local/osd1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username

Re: [ceph-users] (no subject)

2015-11-12 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On the monitor node, does `netstat | grep 6789` show the monitor process running?

On the OSD node, do `telnet 192.168.43.11 6789` and `telnet
192.168.107.11 6789` work? It is not enough to just ping; that does
not test whether you have properly opened up the firewall.
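
On CentOS 7 with firewalld, for example, opening those ports persistently looks
roughly like this (the zone is an assumption):

sudo firewall-cmd --zone=public --permanent --add-port=6789/tcp
sudo firewall-cmd --zone=public --permanent --add-port=6800-7300/tcp
sudo firewall-cmd --reload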
- 
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Thu, Nov 12, 2015 at 7:21 AM, James Gallagher  wrote:
> Hi, I'm having issues activating my OSDs. I have provided the output of the
> fault. I can see that the error message has said that the connection is
> timing out however, I am struggling to understand why as I have followed
> each stage within the quick start guide. For example, I can ping node1
> (which is the monitor) from all nodes and I can ssh w/o password to it too.
> Other things like require tty has been disabled. I have also opened the
> ports on the firewall for 6789 and 6800-7300. I have also ensured that the
> ceph-deploy mon create-initial command has been issued too.
>
> Is there anything else that could possibly be preventing the monitor node
> from communicating?
>
> Furthermore, I tried a reboot of all nodes. Once the nodes came online
> again, I tried again. However, on this occasion I am with an error due to a
> fsid mismatch. It should be noted that I retried the mon-create initial
> command with the --overwrite option to get past another error.
>
> Issue 1
> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
> node2:/var/local/osd0: node3:/var/local/osd1:
> [node2][DEBUG ] connection detected need for sudo
> [node2][DEBUG ] connected to host: node2
> [node2][DEBUG ] detect platform information from remote host
> [node2][DEBUG ] detect machine type
> [node2][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
> [ceph_deploy.osd][DEBUG ] activating host node2 disk /var/local/osd0
> [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
> [node2][INFO  ] Running command: sudo ceph-disk -v activate --mark-init
> sysvinit --mount /var/local/osd0
> [node2][WARNING] DEBUG:ceph-disk:Cluster uuid is
> 5f94aba1-e6e1-41e0-bfa1-4030cf24fda8
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=fsid
> [node2][WARNING] DEBUG:ceph-disk:Cluster name is ceph
> [node2][WARNING] DEBUG:ceph-disk:OSD uuid is
> 183274c6-4941-4f47-97f9-b7bbcd0bdbac
> [node2][WARNING] DEBUG:ceph-disk:Allocating OSD id...
> [node2][WARNING] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster
> ceph --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise
> 183274c6-4941-4f47-97f9-b7bbcd0bdbac
> [node2][WARNING] 2015-11-10 19:43:36.433091 7fd6717fa700  0 -- :/1019898 >>
> 192.168.43.11:6789/0 pipe(0x7fd67405e280 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fd67405af80).fault
> [node2][WARNING] 2015-11-10 19:43:39.434067 7fd6716f9700  0 -- :/1019898 >>
> 192.168.43.11:6789/0 pipe(0x7fd668000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fd668004ef0).fault
> [node2][WARNING] 2015-11-10 19:43:42.435321 7fd6717fa700  0 -- :/1019898 >>
> 192.168.43.11:6789/0 pipe(0x7fd6680081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fd66800c450).fault
> [node2][WARNING] 2015-11-10 19:43:45.436626 7fd6716f9700  0 -- :/1019898 >>
> 192.168.43.11:6789/0 pipe(0x7fd668000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fd668006610).fault
> [node2][WARNING] 2015-11-10 19:43:48.439097 7fd6717fa700  0 -- :/1019898 >>
> 192.168.43.11:6789/0 pipe(0x7fd6680081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fd6680058b0).fault
> [node2][WARNING] 2015-11-10 19:43:51.441654 7fd6716f9700  0 -- :/1019898 >>
> 192.168.43.11:6789/0 pipe(0x7fd668000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fd668006f80).fault
> [node2][WARNING] 2015-11-10 19:43:54.444116 7fd6717fa700  0 -- :/1019898 >>
> 192.168.43.11:6789/0 pipe(0x7fd6680081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fd668007640).fault
> [node2][WARNING] 2015-11-10 19:43:57.446729 7fd6716f9700  0 -- :/1019898 >>
> 192.168.43.11:6789/0 pipe(0x7fd668000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7fd668007120).fault
>
> ...
>
> ...
>
> [node2][WARNING] 2015-11-06 02:18:52.138230 7f819c327700  0 -- :/1020092 >>
> 192.168.107.11:6789/0 pipe(0x7f81900081b0 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7f8190016fa0).fault
> [node2][WARNING] 2015-11-06 02:23:34.971917 7f819c327700  0 -- :/1020092 >>
> 192.168.107.11:6789/0 pipe(0x7f8190007c10 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7f81900120e0).fault
> [node2][WARNING] 2015-11-06 02:23:38.340389 7f819c226700  0 -- :/1020092 >>
> 192.168.107.11:6789/0 pipe(0x7f8198c0 sd=4 :0 s=1 pgs=0 cs=0 l=1
> c=0x7f8190016fa0).fault
> [node2][WARNING] 2015-11-06 02:23:40.117327 7f819ea9c700  0
> monclient(hunting): authenticate timed out after 300
> [node2][WARNING] 2015-11-06 02:23:40.117766 7f819ea9c700  0 librados:
> client.bootstrap-osd authentication error (110) Connection timed out
> [node2][WARNING] Error connecting to cluster: TimedOut
> 

[ceph-users] (no subject)

2015-07-26 Thread Jiwan N
unsubscribe ceph-users


[ceph-users] (no subject)

2015-03-26 Thread Sreenath BH
Thanks for the information.

-Sreenath

-

Date: Wed, 25 Mar 2015 04:11:11 +0100
From: Francois Lafont flafdiv...@free.fr
To: ceph-users ceph-us...@ceph.com
Subject: Re: [ceph-users] PG calculator queries
Message-ID: 5512274f.1000...@free.fr
Content-Type: text/plain; charset=utf-8

Hi,

Sreenath BH wrote :

 consider following values for a pool:

 Size = 3
 OSDs = 400
 %Data = 100
 Target PGs per OSD = 200 (This is default)

 The PG calculator generates number of PGs for this pool as : 32768.

 Questions:

 1. The Ceph documentation recommends around 100 PGs/OSD, whereas the
 calculator takes 200 as default value. Are there any changes in the
 recommended value of PGs/OSD?

Not really I think. Here http://ceph.com/pgcalc/, we can read:

Target PGs per OSD
This value should be populated based on the following guidance:
- 100 If the cluster OSD count is not expected to increase in
  the foreseeable future.
- 200 If the cluster OSD count is expected to increase (up to
  double the size) in the foreseeable future.
- 300 If the cluster OSD count is expected to increase between
  2x and 3x in the foreseeable future.

So, it seems prudent to me to recommend 100 in the official documentation,
because you can increase pg_num but it's impossible to decrease it.
So, if I had to recommend just one value, it would be 100.

 2. Under notes it says:
 Total PG Count below table will be the count of Primary PG copies.
 However, when calculating total PGs per OSD average, you must include
 all copies.

 However, the number of 200 PGs/OSD already seems to include the
 primary as well as replica PGs in a OSD. Is the note a typo mistake or
 am I missing something?

To my mind, on the site, the Total PG Count doesn't include all copies.
So, for me, there is no typo. Here are 2 basic examples from
http://ceph.com/pgcalc/
with just *one* pool.

1.
Pool-Name  Size  OSD#  %Data    Target-PGs-per-OSD  Suggested-PG-count
rbd        2     10    100.00%  100                 512

2.
Pool-Name  Size  OSD#  %Data    Target-PGs-per-OSD  Suggested-PG-count
rbd        2     10    100.00%  200                 1024

In the first example, I have:   512/10 =  51.2  but (Size x  512)/10 = 102.4
In the second example, I have: 1024/10 = 102.4  but (Size x 1024)/10 = 204.8

HTH.
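
For what it's worth, the suggested counts above can be reproduced by hand:
roughly (Target-PGs-per-OSD x OSD# x %Data) / Size, rounded up to the next power
of two (the site may round slightly differently). For the 400-OSD, size-3,
target-200 pool from the question:

echo $(( 200 * 400 / 3 ))   # 26666 -> next power of two is 32768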

--
François Lafont


[ceph-users] (no subject)

2014-10-15 Thread lakshmi k s
I am trying to integrate Openstack keystone with radosgw. I have followed the
instructions as per the link - http://ceph.com/docs/master/radosgw/keystone/.
But for some reason, the keystone flags under the [client.radosgw.gateway] section are
not being honored. That is, with these flags present the gateway never attempts to use
keystone. Hence, any swift v2.0 call results in a 401 Authorization problem. But
if I move the keystone url out under the [global] section, I see that there is an
initial keystone handshake between the keystone and gateway nodes.

Please note that swift v1 calls (without using keystone) work great. 
Any thoughts on how to resolve this problem?


ceph.conf

[global]
fsid = f216cbe1-fa49-42ed-b28a-322aa3d48fff
mon_initial_members = node1
mon_host = 192.168.122.182
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

[client.admin]

keyring = /etc/ceph/ceph.client.admin.keyring

[client.radosgw.gateway]
host = radosgw
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/ceph/client.radosgw.gateway.log
rgw dns name = radosgw

rgw keystone url = http://192.168.122.165:5000
rgw keystone admin token = faedf7bc53e3371924e7b3ddb9d13ddd
rgw keystone accepted roles = admin Member _member_
rgw keystone token cache size = 500
rgw keystone revocation interval = 500
rgw s3 auth use keystone = true
nss db path = /var/ceph/nss

Thanks much.
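
One way to check whether the running gateway actually reads that section is to
ask the binary for a value under the same name it starts with, e.g. (a sketch;
adjust -n to whatever name your init script uses):

radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway --show-config-value rgw_keystone_url

If that comes back empty while the [global] placement works, the daemon is
probably not running under the client.radosgw.gateway name.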

Lakshmi.


Re: [ceph-users] (no subject)

2014-09-11 Thread Alfredo Deza
We discourage users from using `root` to call ceph-deploy or to call
it with `sudo` for this reason.

We have a warning in the docs about it if you are getting started in
the Ceph Node Setup section:
http://ceph.com/docs/v0.80.5/start/quick-start-preflight/#ceph-deploy-setup

The reason for this is that if you configure ssh to log in to the
remote server as a non-root user (say, user ceph), there is no way for
ceph-deploy to know that it needs to call sudo
on the remote server, because it detected that you were root.

ceph-deploy does this detection to prevent calling sudo if you are
root on the remote server.

So, to fix this situation, where you are executing as root but logging
into the remote server as a non-root user, you can use either of these
two options:

* don't execute ceph-deploy as root
* don't configure ssh to login as a non-root user
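
If your ceph-deploy is recent enough, a third option is to stay a regular local
user and name the remote login user explicitly, e.g.:

ceph-deploy --username ceph install ceph-mds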

On Thu, Sep 11, 2014 at 12:16 AM, Subhadip Bagui i.ba...@gmail.com wrote:
 Hi,

 I'm getting the below error while installing ceph on node using ceph-deploy.
 I'm executing the command in admin node as

 [root@ceph-admin ~]$ ceph-deploy install ceph-mds

 [ceph-mds][DEBUG ] Loaded plugins: fastestmirror, security
 [ceph-mds][WARNIN] You need to be root to perform this command.
 [ceph-mds][ERROR ] RuntimeError: command returned non-zero exit status: 1
 [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y
 install wget

 I have changed the Defaults requiretty setting to Defaults:ceph !requiretty
 in /etc/sudoers file and also put ceph as sudo user same as root in node
 ceph-mds. added root privilege on node ceph-mds using command--- echo ceph
 ALL = (root) NOPASSWD:ALL | sudo tee /etc/sudoers sudo chmod 0440
 /etc/sudoers as mentioned in the doc

 All servers are on centOS 6.5

 Please let me know what can be the issue here?


 Regards,
 Subhadip
 ---



[ceph-users] (no subject)

2014-09-10 Thread Subhadip Bagui
Hi,

I'm getting the below error while installing ceph on node using
ceph-deploy. I'm executing the command in admin node as

[root@ceph-admin ~]$ ceph-deploy install ceph-mds

[ceph-mds][DEBUG ] Loaded plugins: fastestmirror, security
[ceph-mds][WARNIN] You need to be root to perform this command.
[ceph-mds][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y
install wget

I have changed the Defaults requiretty setting to Defaults:ceph !requiretty
in the /etc/sudoers file and also made ceph a sudo user, same as root, on node
ceph-mds. I added root privileges on node ceph-mds using the command
"echo ceph ALL = (root) NOPASSWD:ALL | sudo tee /etc/sudoers; sudo chmod 0440
/etc/sudoers", as mentioned in the doc.

All servers are on centOS 6.5

Please let me know what can be the issue here?


Regards,
Subhadip
---


[ceph-users] (no subject)

2014-05-27 Thread minchen
subscribe ceph-us...@ceph.com


[ceph-users] (no subject)

2014-04-29 Thread Vladimir Franciz S. Blando



[ceph-users] (no subject)

2014-04-08 Thread Jobs1158656377





[ceph-users] (no subject)

2013-10-17 Thread Paul_Whittington
I'd like to experiment with the ceph class methods technology.  I've looked at 
the cls_hello sample but I'm having trouble figuring out how to compile, link
and install.  Are there any step-by-step documents on how to compile, link and
deploy the method .so files?

Paul Whittington
Chief Architect

McAfee
2325 West Broadway Street
Suite B
Idaho Falls, ID 83402

Direct: 208.552.8702
Web: www.mcafee.com

The information contained in this email message may be privileged, confidential 
and protected from disclosure. If you are not the intended recipient, any 
review, dissemination, distribution or copying is strictly prohibited. If you 
have received this email message in error, please notify the sender by reply 
email and delete the message and any attachments.


Re: [ceph-users] (no subject)

2013-10-17 Thread Gregory Farnum
On Thu, Oct 17, 2013 at 12:40 PM,  paul_whitting...@mcafee.com wrote:
 I'd like to experiment with the ceph class methods technology.  I've looked
 at the cls_hello sample but I'm having trouble figuring out how to compile,
 like and install.  Are there any step-by-step documents on how to compile,
 link and deploy the method .so files?

Hrm. We haven't done much work to make building object classes
friendly yet — we build all ours directly in the Ceph tree. It's
probably easiest for you to do the same.
Installation just requires dropping the .so in the directory specified
by the osd class dir config option (defaults to
/var/lib/ceph/rados-classes, I think), and then the OSD will find and
load it when the first request for that class comes in.
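
A rough sketch of what that can look like (the paths below are assumptions;
check where your build actually drops the library and what "osd class dir"
is set to on your OSDs):

# build the class inside your Ceph source tree (autotools build assumed)
cd ceph/src && make libcls_hello.la
# copy the resulting shared object to each OSD host's class dir
scp .libs/libcls_hello.so* osd-host:/var/lib/ceph/rados-classes/
# or point the OSDs at another directory via ceph.conf:
#   [osd]
#   osd class dir = /var/lib/ceph/rados-classes
# sanity-check what a running OSD thinks its class dir is
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep class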
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] (no subject)

2013-10-17 Thread Gregory Farnum
[ Adding back the list. ]

On Thu, Oct 17, 2013 at 3:37 PM,  paul_whitting...@mcafee.com wrote:
 Thanks Gregory,

 I assume the .so gets loaded into the process space of each OSD associated 
 with the object whose method is being called.

Yep; the .so is loaded on-demand wherever it's needed.

 Does the .so remain loaded until the OSD is terminated?

Yep.

 Is there any protection from SIGSEGVs in the .so?

I don't believe so — the .so is definitely trusted in general so you
shouldn't be loading arbitrary code.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


 Thanks!

 Paul Whittington
 Chief Architect

 McAfee
 2325 West Broadway Street
 Suite B
 Idaho Falls, ID 83402

 Direct: 208.552.8702
 Web: www.mcafee.com

 The information contained in this email message may be privileged, 
 confidential and protected from disclosure. If you are not the intended 
 recipient, any review, dissemination, distribution or copying is strictly 
 prohibited. If you have received this email message in error, please notify 
 the sender by reply email and delete the message and any attachments.
 
 From: Gregory Farnum [g...@inktank.com]
 Sent: Thursday, October 17, 2013 3:13 PM
 To: Whittington, Paul
 Cc: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] (no subject)

 On Thu, Oct 17, 2013 at 12:40 PM,  paul_whitting...@mcafee.com wrote:
 I'd like to experiment with the ceph class methods technology.  I've looked
 at the cls_hello sample but I'm having trouble figuring out how to compile,
 link and install.  Are there any step-by-step documents on how to compile,
 link and deploy the method .so files?

 Hrm. We haven't done much work to make building object classes
 friendly yet — we build all ours directly in the Ceph tree. It's
 probably easiest for you to do the same.
 Installation just requires dropping the .so in the directory specified
 by the osd class dir config option (defaults to
 /var/lib/ceph/rados-classes, I think), and then the OSD will find and
 load it when the first request for that class comes in.
 -Greg
 Software Engineer #42 @ http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2013-10-10 Thread 何晓波


Sent from my Windows Phone
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2013-05-29 Thread Ta Ba Tuan

subscribe ceph-users

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2013-05-18 Thread koma kai

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] (no subject)

2013-03-26 Thread Igor Laskovy
Hi there!

Are Chris Holcombe and Robert Blair here? Please get back to me about your
awesome work at http://ceph.com/community/ceph-over-fibre-for-vmware/ .
Thanks!

-- 
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com