Yes you can use other features like CephFS and Object Store on this kernel
release that you are running.
- Karan Singh
On 28 Jul 2014, at 07:45, Pratik Rupala pratik.rup...@calsoftinc.com wrote:
Hi Karan,
I have basic setup of Ceph storage cluster in active+clean state on Linux
kernel
The output that you have provided says that OSDs are not IN. Try the below:
ceph osd in osd.0
ceph osd in osd.1
service ceph start osd.0
service ceph start osd.1
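Once they are in and the daemons are started, something like the below should show
both OSDs up and in (exact output will of course differ on your cluster):
ceph osd tree
ceph -s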
If you have 1 more host with 1 disk, add it; starting with Ceph Firefly the default
rep size is 3.
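You can check what a pool is currently set to with something like the below (pool
name rbd is just an example here):
ceph osd pool get rbd size
ceph osd pool set rbd size 2    # only if you really want to stay at two OSD hosts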
- Karan -
On 27 Jul 2014, at 11:17, 10
Looks like osd.1 has a valid auth ID, which was defined previously.
I trust this is your test cluster; try this:
ceph osd crush rm osd.1
ceph osd rm osd.1
ceph auth del osd.1
Once again try to add osd.1 using ceph-deploy (prepare and then activate
commands), and check the logs carefully for any errors.
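For reference, the ceph-deploy sequence would look roughly like the below (hostname
node1 and disk /dev/sdb are placeholders, adjust to your setup):
ceph-deploy osd prepare node1:/dev/sdb
ceph-deploy osd activate node1:/dev/sdb1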
Hi Karan,
So that means I can't have RBD on 2.6.32. Do you know where I can find the
source for rbd.ko for other kernel versions like 2.6.34?
Regards,
Pratik Rupala
On 7/28/2014 12:32 PM, Karan Singh wrote:
Yes you can use other features like CephFS and Object Store on this
kernel release that
Hello,
On Sun, 27 Jul 2014 18:20:43 -0400 Robert Fantini wrote:
Hello Christian,
Let me supply more info and answer some questions.
* Our main concern is high availability, not speed.
Our storage requirements are not huge.
However we want good keyboard response 99.99% of the time. We
It's fixed now. Apparently we cannot share a journal across different
OSDs. I added a journal /dev/sdc1 (20GB) with my first OSD. I was trying
to add the same journal with my second OSD and it was causing the issue.
Then I added the second OSD with a new journal and it worked fine.
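For anyone hitting the same thing, the working layout amounts to one journal
partition per OSD, roughly along these lines (host and device names below are just
placeholders for my setup):
ceph-deploy osd prepare node1:/dev/sdb:/dev/sdc1    # first OSD, its own journal partition
ceph-deploy osd prepare node1:/dev/sdd:/dev/sdc2    # second OSD, a separate journal partition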
Thanks,
Perhaps Cristian is thinking of the clone from journal work that we were
talking about last year:
http://wiki.ceph.com/Planning/Sideboard/osd%3A_clone_from_journal_on_btrfs
I think we never did much beyond Sage's test branch, and it didn't seem
to help as much as you would hope. Speaking of
If you've two rooms then I'd go for two OSD nodes in each room, a target
replication level of 3 with a min of 1 across the node level, then have
5 monitors and put the last monitor outside of either room (the other
MONs can share with the OSD nodes if needed). Then you've got 'safe'
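In practice that replication target means setting the pools to something like the
below (pool name rbd is just an example):
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 1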
You can use multiple steps in your crush map in order to do things
like choose two different hosts then choose a further OSD on one of the
hosts and do another replication so that you can get three replicas onto
two hosts without risking ending up with three replicas on a single node.
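A rough sketch of such a rule (the name and ruleset number are placeholders, and
you'd want to test it with crushtool before injecting it into the cluster):
rule replicated_two_hosts {
        ruleset 1
        type replicated
        min_size 2
        max_size 3
        step take default
        # pick two distinct hosts first
        step choose firstn 2 type host
        # then up to two OSDs on each, so size=3 lands as 2+1 across the two hosts
        step chooseleaf firstn 2 type osd
        step emit
}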