Re: [ceph-users] Multiple MDSes

2016-04-22 Thread Andrus, Brian Contractor
Ah. Thanks for the info.  I just need to know how to interpret the output!


Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238



-----Original Message-----
From: Eric Eastman [mailto:eric.east...@keepertech.com] 
Sent: Friday, April 22, 2016 9:36 PM
To: Andrus, Brian Contractor
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Multiple MDSes

On Fri, Apr 22, 2016 at 9:59 PM, Andrus, Brian Contractor  
wrote:
> All,
>
> Ok, I understand Jewel is considered stable for CephFS with a single 
> active MDS.
>
> But, how do I add a standby MDS? What documentation I find is a bit 
> confusing.
>
> I ran
>
> ceph-deploy create mds systemA
> ceph-deploy create mds systemB
>
> Then I create a ceph filesystem, but it appears systemB is the active 
> and only mds:
>
> e6: 1/1/1 up {1:0=systemB=up:active}, 1 up:standby
>
> Is there something to do to get systemA up and standby?

Your output: "1 up:standby" shows that you have 1 standby MDS.  On my system 
with 3 MDS, running Jewel, the output is:

fsmap e12: 1/1/1 up {1:0=ede-c1-mds01=up:active}, 2 up:standby

You can prove this by shutting down your systemB and seeing that you can still 
access your Ceph file system. By default, if you create multiple MDS, you get 1 
active MDS and the rest are standby.
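
For reference, a quick way to confirm the standby and exercise a failover (a
rough sketch, assuming systemd-managed Jewel daemons and MDS daemons named
systemA and systemB) is:

  ceph mds stat                       # show the current MDS map summary
  systemctl stop ceph-mds@systemB     # on systemB: stop the active MDS
  ceph mds stat                       # systemA should take over as active
  systemctl start ceph-mds@systemB    # systemB returns as a standby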

Eric
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Replace Journal

2016-04-22 Thread Dan van der Ster
On Fri, Apr 22, 2016 at 8:07 AM, Christian Balzer  wrote:
> On Fri, 22 Apr 2016 06:20:17 +0200 Martin Wilderoth wrote:
>
>> I have a ceph cluster and I will change my journal devices to new SSD's.
>>
>> In some instructions of doing this they refer to a journal file (link to
>> UUID of journal )
>>
>> In my OSD folder this journal don’t exist.
>>
> If your cluster is "years old" and not created with ceph-disk, then yes,
> that's not surprising.

Long ago I wrote a couple scripts to convert "old" osds to the new
kind, also moving the journal from the HDD to an SSD:

https://github.com/cernceph/ceph-scripts/tree/master/tools/ceph-deployifier

YMMV.

-- Dan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multiple MDSes

2016-04-22 Thread Eric Eastman
On Fri, Apr 22, 2016 at 9:59 PM, Andrus, Brian Contractor
 wrote:
> All,
>
> Ok, I understand Jewel is considered stable for CephFS with a single active
> MDS.
>
> But, how do I add a standby MDS? What documentation I find is a bit
> confusing.
>
> I ran
>
> ceph-deploy create mds systemA
> ceph-deploy create mds systemB
>
> Then I create a ceph filesystem, but it appears systemB is the active and
> only mds:
>
> e6: 1/1/1 up {1:0=systemB=up:active}, 1 up:standby
>
> Is there something to do to get systemA up and standby?

Your output: "1 up:standby" shows that you have 1 standby MDS.  On my
system with 3 MDS, running Jewel, the output is:

fsmap e12: 1/1/1 up {1:0=ede-c1-mds01=up:active}, 2 up:standby

You can prove this by shutting down your systemB and seeing that you
can still access your Ceph file system. By default, if you create
multiple MDS, you get 1 active MDS and the rest are standby.

Eric
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Multiple MDSes

2016-04-22 Thread Andrus, Brian Contractor
All,

Ok, I understand Jewel is considered stable for CephFS with a single active MDS.

But, how do I add a standby MDS? What documentation I find is a bit confusing.
I ran
ceph-deploy create mds systemA
ceph-deploy create mds systemB

Then I create a ceph filesystem, but it appears systemB is the active and only 
mds:
e6: 1/1/1 up {1:0=systemB=up:active}, 1 up:standby

Is there something to do to get systemA up and standby?


Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Question upgrading to Jewel

2016-04-22 Thread Jason Dillaman
The notice about image format 1 being deprecated was somewhat hidden in the
release notes.  Displaying that message when opening an existing format 1
image is overkill and should be removed (at least until we come up with
some sort of online migration tool in a future Ceph release).

[1] https://github.com/ceph/ceph/blob/master/doc/release-notes.rst#L210
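
For anyone who does want to move an image off format 1 in the meantime, a
common approach (a sketch only; the pool and image names are illustrative) is
to export the data and re-import it as a format 2 image:

  rbd export rbd/oldimage - | rbd import --image-format 2 - rbd/newimage
  rbd info rbd/newimage    # verify the new image before removing the original
  rbd rm rbd/oldimage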

--

Jason

On Fri, Apr 22, 2016 at 3:40 PM, Diego Castro 
wrote:

> One more thing:
> I haven't seen anything regarding the following message:
>
> # rbd lock list 25091
> 2016-04-22 19:39:31.523542 7fd199d57700 -1 librbd::image::OpenRequest: RBD
> image format 1 is deprecated. Please copy this image to image format 2.
>
>
> Is it something that i should worry ?
>
>
> ---
> Diego Castro / The CloudFather
> GetupCloud.com - Eliminamos a Gravidade
>
> 2016-04-22 12:29 GMT-03:00 Diego Castro :
>
>> yeah, i followed the release notes.
>> Everything is working, just hit this issue until enabled services
>> individually.
>>
>> Tks
>>
>>
>> ---
>> Diego Castro / The CloudFather
>> GetupCloud.com - Eliminamos a Gravidade
>>
>> 2016-04-22 12:24 GMT-03:00 Vasu Kulkarni :
>>
>>> Hope you followed the release notes and are on 0.94.4 or above
>>>
>>>http://docs.ceph.com/docs/master/release-notes/#upgrading-from-hammer
>>>
>>> 1) upgrade ( ensure you dont have user 'ceph' before)
>>>
>>> 2) stop the service
>>>  /etc/init.d/ceph stop (since you are on centos/hammer)
>>>
>>> 3) change ownership
>>>  chown -R ceph:ceph /var/lib/ceph
>>>
>>> 4) start the *individual* service, just like you did before ( there is
>>> a known issue with systemctl start ceph.target, it doesn't work )
>>> or systemctl stop ceph-mon.target ceph-osd.target
>>> ceph-mds.target ceph-radosgw.target
>>> systemctl list-units to check the status
>>>
>>>
>>> On Fri, Apr 22, 2016 at 6:32 AM, Diego Castro
>>>  wrote:
>>> > Hello, i've upgraded my hammer cluster with the following steps:
>>> >
>>> > Running CentOS 7.1
>>> >
>>> > upgrade ceph-deploy
>>> > ceph-deploy install --release hammer mon{0..2}
>>> >
>>> > After that i couldn't start the mon service,
>>> >
>>> > systemctl start ceph.target (no errors at all, just don't get the
>>> daemon
>>> > running).
>>> >
>>> > I managed to start the daemon after i ran:
>>> >
>>> > systemctl enable ceph.target
>>> > systemctl enable ceph-mon@mon0
>>> >
>>> > After that, i could start mon daemon with:
>>> >
>>> > systemctl start ceph.target
>>> >
>>> > Question:
>>> >
>>> > Am i missing something ? Wouldn't  ceph-deploy enable it by default?
>>> >
>>> > ---
>>> > Diego Castro / The CloudFather
>>> > GetupCloud.com - Eliminamos a Gravidade
>>> >
>>> > ___
>>> > ceph-users mailing list
>>> > ceph-users@lists.ceph.com
>>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> >
>>>
>>
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Question upgrading to Jewel

2016-04-22 Thread Diego Castro
One more thing:
I haven't seen anything regarding the following message:

# rbd lock list 25091
2016-04-22 19:39:31.523542 7fd199d57700 -1 librbd::image::OpenRequest: RBD
image format 1 is deprecated. Please copy this image to image format 2.


Is it something that I should worry about?


---
Diego Castro / The CloudFather
GetupCloud.com - Eliminamos a Gravidade

2016-04-22 12:29 GMT-03:00 Diego Castro :

> yeah, i followed the release notes.
> Everything is working, just hit this issue until enabled services
> individually.
>
> Tks
>
>
> ---
> Diego Castro / The CloudFather
> GetupCloud.com - Eliminamos a Gravidade
>
> 2016-04-22 12:24 GMT-03:00 Vasu Kulkarni :
>
>> Hope you followed the release notes and are on 0.94.4 or above
>>
>>http://docs.ceph.com/docs/master/release-notes/#upgrading-from-hammer
>>
>> 1) upgrade ( ensure you dont have user 'ceph' before)
>>
>> 2) stop the service
>>  /etc/init.d/ceph stop (since you are on centos/hammer)
>>
>> 3) change ownership
>>  chown -R ceph:ceph /var/lib/ceph
>>
>> 4) start the *individual* service, just like you did before ( there is
>> a known issue with systemctl start ceph.target, it doesn't work )
>> or systemctl stop ceph-mon.target ceph-osd.target
>> ceph-mds.target ceph-radosgw.target
>> systemctl list-units to check the status
>>
>>
>> On Fri, Apr 22, 2016 at 6:32 AM, Diego Castro
>>  wrote:
>> > Hello, i've upgraded my hammer cluster with the following steps:
>> >
>> > Running CentOS 7.1
>> >
>> > upgrade ceph-deploy
>> > ceph-deploy install --release hammer mon{0..2}
>> >
>> > After that i couldn't start the mon service,
>> >
>> > systemctl start ceph.target (no errors at all, just don't get the daemon
>> > running).
>> >
>> > I managed to start the daemon after i ran:
>> >
>> > systemctl enable ceph.target
>> > systemctl enable ceph-mon@mon0
>> >
>> > After that, i could start mon daemon with:
>> >
>> > systemctl start ceph.target
>> >
>> > Question:
>> >
>> > Am i missing something ? Wouldn't  ceph-deploy enable it by default?
>> >
>> > ---
>> > Diego Castro / The CloudFather
>> > GetupCloud.com - Eliminamos a Gravidade
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] fibre channel as ceph storage interconnect

2016-04-22 Thread LOPEZ Jean-Charles
Hi,

your easiest way here if you want to use your FC hardware is to do IP over FC 
so that you can leverage the existing FC HBA in your servers but stick to IP as 
a communication layer. FC here would just be a low latency 
transport/encapsulation layer.

I’ve played with this a gazillion years ago (early 2000s) when I was working 
with EMC and Brocade.

The question here is that I’m not sure who still supports IP over FC; it would 
depend on your FC HBAs and FC switches/directors.

For information, here is a link for AIX setup 
https://www.ibm.com/support/knowledgecenter/#!/ssw_aix_71/com.ibm.aix.networkcomm/fibrechan_intro.htm


> On Apr 21, 2016, at 20:12, Schlacta, Christ  wrote:
> 
> Is it possible?  Can I use fibre channel to interconnect my ceph OSDs?
> Intuition tells me it should be possible, yet experience (Mostly with
> fibre channel) tells me no.  I don't know enough about how ceph works
> to know for sure.  All my googling returns results about using ceph as
> a BACKEND for exporting fibre channel LUNs, which is, sadly, not what
> I'm looking for at the moment.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

JC

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] On-going Bluestore Performance Testing Results

2016-04-22 Thread Somnath Roy
Yes, the kernel should do readahead; it's a block device setting. But whether 
XFS is doing something extra for sequential workloads, I'm not sure...

Sent from my iPhone

> On Apr 22, 2016, at 8:54 AM, Jan Schermer  wrote:
>
> Having correlated graphs of CPU and block device usage would be helpful.
>
> To my cynical eye this looks like a clear regression in CPU usage, which was 
> always bottlenecking pure-SSD OSDs, and now got worse.
> The gains are from doing less IO on IO-saturated HDDs.
>
> Regression of 70% in 16-32K random writes is the most troubling, that's 
> coincidentaly the average IO size for a DB2, and the biggest bottleneck to 
> its performance I've seen (other databases will be similiar).
> It's great
>
> Btw readahead is not dependant on filesystem (it's a mechanism in the IO 
> scheduler), so it should be present even on a block device, I think?
>
> Jan
>
>
>> On 22 Apr 2016, at 17:35, Mark Nelson  wrote:
>>
>> Hi Guys,
>>
>> Now that folks are starting to dig into bluestore with the Jewel release, I 
>> wanted to share some of our on-going performance test data. These are from 
>> 10.1.0, so almost, but not quite, Jewel.  Generally bluestore is looking 
>> very good on HDDs, but there are a couple of strange things to watch out 
>> for, especially with NVMe devices.  Mainly:
>>
>> 1) in HDD+NVMe configurations performance increases dramatically when 
>> replacing the stock CentOS7 kernel with Kernel 4.5.1.
>>
>> 2) In NVMe only configurations performance is often lower at middle-sized 
>> IOs.  Kernel 4.5.1 doesn't really help here.  In fact it seems to amplify 
>> both the cases where bluestore is faster and where it is slower.
>>
>> 3) Medium sized sequential reads are where bluestore consistently tends to 
>> be slower than filestore.  It's not clear yet if this is simply due to 
>> Bluestore not doing read ahead at the OSD (ie being entirely dependent on 
>> client read ahead) or something else as well.
>>
>> I wanted to post this so other folks have some ideas of what to look for as 
>> they do their own bluestore testing.  This data is shown as percentage 
>> differences vs filestore, but I can also release the raw throughput values 
>> if people are interested in those as well.
>>
>> https://drive.google.com/file/d/0B2gTBZrkrnpZOTVQNkV0M2tIWkk/view?usp=sharing
>>
>> Thanks!
>> Mark
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Monitor not starting: Corruption: 12 missing files

2016-04-22 Thread Joao Eduardo Luis
On 20/04/16 14:22, daniel.balsi...@swisscom.com wrote:
> 
> root@ceph2:~#  /usr/bin/ceph-mon --cluster=ceph -i ceph2 -f
> 
> Corruption: 12 missing files; e.g.:
> /var/lib/ceph/mon/ceph-ceph2/store.db/811920.ldb
> 
> 2016-04-20 13:16:49.019857 7f39a9cbe800 -1 error opening mon data
> directory at '/var/lib/ceph/mon/ceph-ceph2': (22) Invalid argument
> 
> [snip]
> What I should/have to do to get that monitor up and working again ?
> 
>  
> 
> Thanks in advance any help is appreciated.

Your best option here is to remove this monitor from the cluster, recreate it,
and re-add it. The monitor will then sync from the other monitors in the
quorum and you'll be good to go.
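
A rough sketch of that procedure (assuming the monitor is named ceph2, default
paths, and that the remaining monitors still form a quorum; the address at the
end is a placeholder):

  ceph mon remove ceph2
  rm -rf /var/lib/ceph/mon/ceph-ceph2
  mkdir -p /var/lib/ceph/mon/ceph-ceph2
  ceph auth get mon. -o /tmp/mon.keyring
  ceph mon getmap -o /tmp/monmap
  ceph-mon -i ceph2 --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  chown -R ceph:ceph /var/lib/ceph/mon/ceph-ceph2   # if daemons run as 'ceph'
  ceph-mon -i ceph2 --public-addr <mon-ip>:6789     # then start it normally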

  -Joao
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] On-going Bluestore Performance Testing Results

2016-04-22 Thread Jan Schermer
Having correlated graphs of CPU and block device usage would be helpful.

To my cynical eye this looks like a clear regression in CPU usage, which was 
always bottlenecking pure-SSD OSDs, and now got worse.
The gains are from doing less IO on IO-saturated HDDs.

Regression of 70% in 16-32K random writes is the most troubling; that's 
coincidentally the average IO size for a DB2, and the biggest bottleneck to its 
performance I've seen (other databases will be similar).
It's great 

Btw readahead is not dependent on the filesystem (it's a mechanism in the IO 
scheduler), so it should be present even on a block device, I think?
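
For example, block-level readahead can be checked and tuned directly on the
device, independent of any filesystem (a sketch; the device name is
illustrative):

  blockdev --getra /dev/rbd0               # current readahead, in 512-byte sectors
  blockdev --setra 8192 /dev/rbd0          # bump it to 4 MB
  cat /sys/block/rbd0/queue/read_ahead_kb  # the same knob exposed via sysfs, in KB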

Jan
 
 
> On 22 Apr 2016, at 17:35, Mark Nelson  wrote:
> 
> Hi Guys,
> 
> Now that folks are starting to dig into bluestore with the Jewel release, I 
> wanted to share some of our on-going performance test data. These are from 
> 10.1.0, so almost, but not quite, Jewel.  Generally bluestore is looking very 
> good on HDDs, but there are a couple of strange things to watch out for, 
> especially with NVMe devices.  Mainly:
> 
> 1) in HDD+NVMe configurations performance increases dramatically when 
> replacing the stock CentOS7 kernel with Kernel 4.5.1.
> 
> 2) In NVMe only configurations performance is often lower at middle-sized 
> IOs.  Kernel 4.5.1 doesn't really help here.  In fact it seems to amplify 
> both the cases where bluestore is faster and where it is slower.
> 
> 3) Medium sized sequential reads are where bluestore consistently tends to be 
> slower than filestore.  It's not clear yet if this is simply due to Bluestore 
> not doing read ahead at the OSD (ie being entirely dependent on client read 
> ahead) or something else as well.
> 
> I wanted to post this so other folks have some ideas of what to look for as 
> they do their own bluestore testing.  This data is shown as percentage 
> differences vs filestore, but I can also release the raw throughput values if 
> people are interested in those as well.
> 
> https://drive.google.com/file/d/0B2gTBZrkrnpZOTVQNkV0M2tIWkk/view?usp=sharing
> 
> Thanks!
> Mark
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] On-going Bluestore Performance Testing Results

2016-04-22 Thread Mark Nelson

Hi Guys,

Now that folks are starting to dig into bluestore with the Jewel 
release, I wanted to share some of our on-going performance test data. 
These are from 10.1.0, so almost, but not quite, Jewel.  Generally 
bluestore is looking very good on HDDs, but there are a couple of 
strange things to watch out for, especially with NVMe devices.  Mainly:


1) in HDD+NVMe configurations performance increases dramatically when 
replacing the stock CentOS7 kernel with Kernel 4.5.1.


2) In NVMe only configurations performance is often lower at 
middle-sized IOs.  Kernel 4.5.1 doesn't really help here.  In fact it 
seems to amplify both the cases where bluestore is faster and where it 
is slower.


3) Medium sized sequential reads are where bluestore consistently tends 
to be slower than filestore.  It's not clear yet if this is simply due 
to Bluestore not doing read ahead at the OSD (ie being entirely 
dependent on client read ahead) or something else as well.


I wanted to post this so other folks have some ideas of what to look for 
as they do their own bluestore testing.  This data is shown as 
percentage differences vs filestore, but I can also release the raw 
throughput values if people are interested in those as well.


https://drive.google.com/file/d/0B2gTBZrkrnpZOTVQNkV0M2tIWkk/view?usp=sharing

Thanks!
Mark
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Question upgrading to Jewel

2016-04-22 Thread Diego Castro
Yeah, I followed the release notes.
Everything is working; I just hit this issue until I enabled the services
individually.

Tks


---
Diego Castro / The CloudFather
GetupCloud.com - Eliminamos a Gravidade

2016-04-22 12:24 GMT-03:00 Vasu Kulkarni :

> Hope you followed the release notes and are on 0.94.4 or above
>
>http://docs.ceph.com/docs/master/release-notes/#upgrading-from-hammer
>
> 1) upgrade ( ensure you dont have user 'ceph' before)
>
> 2) stop the service
>  /etc/init.d/ceph stop (since you are on centos/hammer)
>
> 3) change ownership
>  chown -R ceph:ceph /var/lib/ceph
>
> 4) start the *individual* service, just like you did before ( there is
> a known issue with systemctl start ceph.target, it doesn't work )
> or systemctl stop ceph-mon.target ceph-osd.target
> ceph-mds.target ceph-radosgw.target
> systemctl list-units to check the status
>
>
> On Fri, Apr 22, 2016 at 6:32 AM, Diego Castro
>  wrote:
> > Hello, i've upgraded my hammer cluster with the following steps:
> >
> > Running CentOS 7.1
> >
> > upgrade ceph-deploy
> > ceph-deploy install --release hammer mon{0..2}
> >
> > After that i couldn't start the mon service,
> >
> > systemctl start ceph.target (no errors at all, just don't get the daemon
> > running).
> >
> > I managed to start the daemon after i ran:
> >
> > systemctl enable ceph.target
> > systemctl enable ceph-mon@mon0
> >
> > After that, i could start mon daemon with:
> >
> > systemctl start ceph.target
> >
> > Question:
> >
> > Am i missing something ? Wouldn't  ceph-deploy enable it by default?
> >
> > ---
> > Diego Castro / The CloudFather
> > GetupCloud.com - Eliminamos a Gravidade
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Question upgrading to Jewel

2016-04-22 Thread Vasu Kulkarni
Hope you followed the release notes and are on 0.94.4 or above

   http://docs.ceph.com/docs/master/release-notes/#upgrading-from-hammer

1) upgrade (ensure you don't already have a user 'ceph' beforehand)

2) stop the service
 /etc/init.d/ceph stop (since you are on centos/hammer)

3) change ownership
 chown -R ceph:ceph /var/lib/ceph

4) start the *individual* services, just like you did before (there is
a known issue with systemctl start ceph.target; it doesn't work),
or use systemctl with the per-daemon targets: ceph-mon.target ceph-osd.target
ceph-mds.target ceph-radosgw.target
Run systemctl list-units to check the status.
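
Put together, the per-node sequence might look like this (a sketch, assuming a
CentOS/Hammer node moving to Jewel, a monitor named mon0, and an OSD with id 0):

  /etc/init.d/ceph stop
  chown -R ceph:ceph /var/lib/ceph
  systemctl enable ceph.target
  systemctl enable ceph-mon@mon0 && systemctl start ceph-mon@mon0   # monitor nodes
  systemctl enable ceph-osd@0 && systemctl start ceph-osd@0         # one per OSD id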


On Fri, Apr 22, 2016 at 6:32 AM, Diego Castro
 wrote:
> Hello, i've upgraded my hammer cluster with the following steps:
>
> Running CentOS 7.1
>
> upgrade ceph-deploy
> ceph-deploy install --release hammer mon{0..2}
>
> After that i couldn't start the mon service,
>
> systemctl start ceph.target (no errors at all, just don't get the daemon
> running).
>
> I managed to start the daemon after i ran:
>
> systemctl enable ceph.target
> systemctl enable ceph-mon@mon0
>
> After that, i could start mon daemon with:
>
> systemctl start ceph.target
>
> Question:
>
> Am i missing something ? Wouldn't  ceph-deploy enable it by default?
>
> ---
> Diego Castro / The CloudFather
> GetupCloud.com - Eliminamos a Gravidade
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Weird/normal behavior when creating filesystem on RBD volume

2016-04-22 Thread Edward Huyer
Hi, I'm seeing (have always seen) odd behavior when I first put a filesystem on 
a newly created RBD volume.  I've seen this on two different clusters across 
multiple major Ceph revisions.

When I create an RBD volume, map it on a client and then do (for instance) 
mkfs.xfs on it, mkfs.xfs will just sit there and hang for a number of minutes, 
seemingly doing nothing.  During this time, load on the OSDs (both the CPU 
usage of the daemons and actual IO on the disks) will spike dramatically.  
After a while, the load will subside and the mkfs will proceed as normal.

Can anyone explain what's going on here?  I have a pretty strong notion, but 
I'm hoping someone can give a definite answer.

This behavior appears to be normal, so I'm not actually worried about it.  It 
just makes me and some coworkers go "huh, I wonder what causes that".
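
If the delay turns out to be mkfs issuing a discard (TRIM) pass across the
whole device (a thinly provisioned RBD image turns that into a burst of work on
the OSDs), it can be skipped at mkfs time (a sketch; the device name is
illustrative):

  mkfs.xfs -K /dev/rbd0              # -K skips the initial discard
  mkfs.ext4 -E nodiscard /dev/rbd0   # ext4 equivalent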

-
Edward Huyer
School of Interactive Games and Media
Golisano 70-2373
152 Lomb Memorial Drive
Rochester, NY 14623
585-475-6651
erh...@rit.edu


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-10.1.2, debian stretch and systemd's target files

2016-04-22 Thread kefu chai
John, I filed http://tracker.ceph.com/issues/15573 to address your issue.

On Fri, Apr 22, 2016 at 6:51 AM, Florent B  wrote:
> Hi,
>
> What I understand is that @.service files are templates. And needs to be
> activated with "systemctl enable ceph-osd@1" for example (is it done by
> ceph-deploy ? I don't use it).

It could be enabled by the postinst script. I am working on a fix for it.
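
Until the packaging fix lands, a manual workaround (a sketch, assuming an OSD
with id 1 and a monitor named node1) is to enable the instantiated units
yourself:

  systemctl enable ceph.target
  systemctl enable ceph-osd@1
  systemctl enable ceph-mon@node1
  systemctl start ceph-osd@1 ceph-mon@node1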

>
> @.service files does nothing by themselves.
>
> And ceph.target just starts everything.
>
> Does it answer your questions ?
>
>
> On 04/21/2016 02:04 PM, John Depp wrote:
>
> Hello everyone!
> I'm trying to test the bleeding edge Ceph configuration with ceph-10.1.2 on
> Debian Stretch.
> I've built ceph from git clone with dpkg-buildpackage and managed to start
> it, but run into some issues:
> - i've had to install ceph from debian packages, as ceph-deploy could not
> install it properly
> - no systemd's .target files were created except the ceph.target, so ceph
> don't start after node reboot. ceph*@.service files were created.
> I tried to determine which .deb package should install them and have failed.
> The only thing I've found were debian/ceph-*/lib/systemd/system folders
> lacked the .target files as well.
> If I get it right, template .target files should be created on install and
> .service files should be linked to them by ceph-deploy, is that correct?
> Thanks for your answers in advance!
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Regards
Kefu Chai
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v10.2.0 Jewel released

2016-04-22 Thread Adrien Gillard
I am just quoting Ilya from a few days ago:

> None of the "new" features are currently supported by krbd.  4.7 will
> support exclusive-lock with most of the rest following in 4.8.

> You don't have to recreate images: while those features are enabled in
> jewel by default, you should be able to dynamically disable them with
> "rbd feature disable imagename deep-flatten fast-diff object-map
> exclusive-lock".
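
Spelled out per image (assuming the default pool name rbd and an image named
myimage), that looks like:

  rbd feature disable rbd/myimage deep-flatten fast-diff object-map exclusive-lock

or, to make new images krbd-friendly by default, set "rbd default features = 1"
in the [client] section of ceph.conf as noted in the release notes.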

On Fri, Apr 22, 2016 at 3:11 PM, Florent B  wrote:

> Hi Sage and thank you for this release :)
>
> On 04/21/2016 08:30 PM, Sage Weil wrote:
> > * The default RBD image features for new images have been updated to
> >   enable the following: exclusive lock, object map, fast-diff, and
> >   deep-flatten. These features are not currently supported by the RBD
> >   kernel driver nor older RBD clients. They can be disabled on a
> per-image
> >   basis via the RBD CLI, or the default features can be updated to the
> >   pre-Jewel setting by adding the following to the client section of the
> Ceph
> >   configuration file::
> >
> > rbd default features = 1
>
> When will krbd support these features ? 4.4 does not support it ?
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Question upgrading to Jewel

2016-04-22 Thread Diego Castro
Hello, I've upgraded my Hammer cluster with the following steps:

Running CentOS 7.1

upgrade ceph-deploy
ceph-deploy install --release hammer mon{0..2}

After that I couldn't start the mon service:

systemctl start ceph.target (no errors at all, but the daemon doesn't start).

I managed to start the daemon after I ran:

systemctl enable ceph.target
systemctl enable ceph-mon@mon0

After that, I could start the mon daemon with:

systemctl start ceph.target

Question:

Am I missing something? Wouldn't ceph-deploy enable it by default?

---
Diego Castro / The CloudFather
GetupCloud.com - Eliminamos a Gravidade
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Replace Journal

2016-04-22 Thread Christian Balzer

Hello,

On Fri, 22 Apr 2016 06:20:17 +0200 Martin Wilderoth wrote:

> I have a ceph cluster and I will change my journal devices to new SSD's.
> 
> In some instructions of doing this they refer to a journal file (link to
> UUID of journal )
> 
> In my OSD folder this journal don’t exist.
> 
If your cluster is "years old" and not created with ceph-disk, then yes,
that's not surprising.
Mind, I created a recent one of mine manually and still used that scheme:
--- ls -la /var/lib/ceph/osd/ceph-12/
total 80
drwxr-xr-x   4 root root  4096 Mar  1 14:44 .
drwxr-xr-x   8 root root  4096 Sep 10  2015 ..
-rw-r--r--   1 root root37 Sep 10  2015 ceph_fsid
drwxr-xr-x 320 root root 24576 Mar  2 20:24 current
-rw-r--r--   1 root root37 Sep 10  2015 fsid
lrwxrwxrwx   1 root root44 Sep 10  2015 journal -> 
/dev/disk/by-id/wwn-0x55cd2e404b77573c-part5
-rw---   1 root root57 Sep 10  2015 keyring
---

Ceph isn't magical, so if that link isn't there, you probably have
something like this in your ceph.conf, preferably with UUID instead of the
possibly changing device name:
---
[osd.0]
host = ceph-01
osd journal = /dev/sdc3
---


> This instructions is renaming the UUID of new device to the old UUID not
> to break anything.
> 
> i was planning to use the command ceph-osd --mkjournal and update the
> ceph. conf accordingly.
> 
> Do I need to take care of my missing journal symlink, i think it is a
> symlink ?
> Why is it there, and how is it used ?.
> 
> I actually don’t remember the command i used to create the disk but it's
> some years ago and i doubt i used ceph-disk.
> 
> I found the following process in this list, that seemed good. But its
> still not clear to me if i can skip this journal link ?
> 
> *set noout*
> *stop the osds*
> *flush the journal*
> *replace journal SSDs*
> recreate journal partitions
> update ceph.conf to reflect new journal device names
Either that, or remove the 'osd journal' setting and create the symlink instead
(see the sketch after these steps). In both cases the UUID is the fail-safe way
to go.
> *recreate the journal (for the existing osds)*
> *start the osds*
> *unset noout*
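
Spelled out for a single OSD, those steps might look like this (a sketch,
assuming osd.0 and a new journal partition referenced by its partition UUID,
which is a placeholder here):

  ceph osd set noout
  /etc/init.d/ceph stop osd.0            # or: systemctl stop ceph-osd@0
  ceph-osd -i 0 --flush-journal
  # swap the SSD, recreate the partition, then point the OSD at it:
  ln -sf /dev/disk/by-partuuid/<new-uuid> /var/lib/ceph/osd/ceph-0/journal
  ceph-osd -i 0 --mkjournal
  /etc/init.d/ceph start osd.0
  ceph osd unset noout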

Christian
-- 
Christian BalzerNetwork/Systems Engineer
ch...@gol.com   Global OnLine Japan/Rakuten Communications
http://www.gol.com/
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Replace Journal

2016-04-22 Thread Max A. Krasilnikov
Hello!

On Fri, Apr 22, 2016 at 09:30:15AM +0200, martin.wilderoth wrote:

> >> I have a ceph cluster and I will change my journal devices to new SSD's.
> >>
> >> In some instructions of doing this they refer to a journal file (link to
> >> UUID of journal )
> >>
> >> In my OSD folder this journal don’t exist.
> >>
>> If your cluster is "years old" and not created with ceph-disk, then yes,
>> that's not surprising.
>> Mind, I created a recent one of mine manually and still used that scheme:
>> --- ls -la /var/lib/ceph/osd/ceph-12/
>> total 80
>> drwxr-xr-x   4 root root  4096 Mar  1 14:44 .
>> drwxr-xr-x   8 root root  4096 Sep 10  2015 ..
>> -rw-r--r--   1 root root37 Sep 10  2015 ceph_fsid
>> drwxr-xr-x 320 root root 24576 Mar  2 20:24 current
>> -rw-r--r--   1 root root37 Sep 10  2015 fsid
>> lrwxrwxrwx   1 root root44 Sep 10  2015 journal ->
>> /dev/disk/by-id/wwn-0x55cd2e404b77573c-part5
>> -rw---   1 root root57 Sep 10  2015 keyring
>> ---
>>
>> Ceph isn't magical, so if that link isn't there, you probably have
>> something like this in your ceph.conf, preferably with UUID instead of thet
>> possibly changing device name:
>> ---
>> [osd.0]
>> host = ceph-01
>> osd journal = /dev/sdc3
>> ---


> Yes that is my setup, Would that mean i could either create symlink journal
> -> /dev/disk/..
> remove the osd journal in ceph.conf.

> or change my ceph.conf with osd journal = /dev/

> And the recommended way is actually to use journal symlink ?

I'm using symlinks to /dev/disk/by-partlabel/
It saves me from any troubles when replacing journal SSDs.
The same for mounts: I use LABEL= in fstab because of changing device names when
replacing HW in storage nodes.

Of course, anyone can set up proper udev rules, but I'm too lazy for such exercises :)
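
As a concrete sketch of that approach (GPT disk, sgdisk, partition 1 on
/dev/sdc, and the label journal-osd0 are all illustrative):

  sgdisk -c 1:journal-osd0 /dev/sdc    # give the journal partition a persistent name
  partprobe /dev/sdc
  ln -sf /dev/disk/by-partlabel/journal-osd0 /var/lib/ceph/osd/ceph-0/journal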

-- 
WBR, Max A. Krasilnikov
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v10.2.0 Jewel released

2016-04-22 Thread Василий Ангапов
Cool, thanks!

I see many new features in RGW, but where can the documentation or
something like it be found?

Kind regards, Vasily.

2016-04-21 21:30 GMT+03:00 Sage Weil :
> This major release of Ceph will be the foundation for the next
> long-term stable release.  There have been many major changes since
> the Infernalis (9.2.x) and Hammer (0.94.x) releases, and the upgrade
> process is non-trivial. Please read these release notes carefully.
>
> For the complete release notes, please see
>
>http://ceph.com/releases/v10-2-0-jewel-released/
>
>
> Major Changes from Infernalis
> -
>
> - *CephFS*:
>
>   * This is the first release in which CephFS is declared stable and
> production ready!  Several features are disabled by default, including
> snapshots and multiple active MDS servers.
>   * The repair and disaster recovery tools are now feature-complete.
>   * A new cephfs-volume-manager module is included that provides a
> high-level interface for creating "shares" for OpenStack Manila
> and similar projects.
>   * There is now experimental support for multiple CephFS file systems
> within a single cluster.
>
> - *RGW*:
>
>   * The multisite feature has been almost completely rearchitected and
> rewritten to support any number of clusters/sites, bidirectional
> fail-over, and active/active configurations.
>   * You can now access radosgw buckets via NFS (experimental).
>   * The AWS4 authentication protocol is now supported.
>   * There is now support for S3 request payer buckets.
>   * The new multitenancy infrastructure improves compatibility with
> Swift, which provides a separate container namespace for each
> user/tenant.
>   * The OpenStack Keystone v3 API is now supported.  There are a range
> of other small Swift API features and compatibility improvements
> as well, including bulk delete and SLO (static large objects).
>
> - *RBD*:
>
>   * There is new support for mirroring (asynchronous replication) of
> RBD images across clusters.  This is implemented as a per-RBD
> image journal that can be streamed across a WAN to another site,
> and a new rbd-mirror daemon that performs the cross-cluster
> replication.
>   * The exclusive-lock, object-map, fast-diff, and journaling features
> can be enabled or disabled dynamically. The deep-flatten features
> can be disabled dynamically but not re-enabled.
>   * The RBD CLI has been rewritten to provide command-specific help
> and full bash completion support.
>   * RBD snapshots can now be renamed.
>
> - *RADOS*:
>
>   * BlueStore, a new OSD backend, is included as an experimental
> feature.  The plan is for it to become the default backend in the
> K or L release.
>   * The OSD now persists scrub results and provides a librados API to
> query results in detail.
>   * We have revised our documentation to recommend *against* using
> ext4 as the underlying filesystem for Ceph OSD daemons due to
> problems supporting our long object name handling.
>
> Major Changes from Hammer
> -
>
> - *General*:
>
>   * Ceph daemons are now managed via systemd (with the exception of
> Ubuntu Trusty, which still uses upstart).
>   * Ceph daemons run as 'ceph' user instead of 'root'.
>   * On Red Hat distros, there is also an SELinux policy.
>
> - *RADOS*:
>
>   * The RADOS cache tier can now proxy write operations to the base
> tier, allowing writes to be handled without forcing migration of
> an object into the cache.
>   * The SHEC erasure coding support is no longer flagged as
> experimental. SHEC trades some additional storage space for faster
> repair.
>   * There is now a unified queue (and thus prioritization) of client
> IO, recovery, scrubbing, and snapshot trimming.
>   * There have been many improvements to low-level repair tooling
> (ceph-objectstore-tool).
>   * The internal ObjectStore API has been significantly cleaned up in order
> to faciliate new storage backends like BlueStore.
>
> - *RGW*:
>
>   * The Swift API now supports object expiration.
>   * There are many Swift API compatibility improvements.
>
> - *RBD*:
>
>   * The ``rbd du`` command shows actual usage (quickly, when
> object-map is enabled).
>   * The object-map feature has seen many stability improvements.
>   * The object-map and exclusive-lock features can be enabled or disabled
> dynamically.
>   * You can now store user metadata and set persistent librbd options
> associated with individual images.
>   * The new deep-flatten features allow flattening of a clone and all
> of its snapshots.  (Previously snapshots could not be flattened.)
>   * The export-diff command is now faster (it uses aio).  There is also
> a new fast-diff feature.
>   * The --size argument can be specified with a suffix for units
> (e.g., ``--size 64G``).
>   * There is a new ``rbd status`` command 

Re: [ceph-users] Replace Journal

2016-04-22 Thread Martin Wilderoth
>
> Hello,
>
> On Fri, 22 Apr 2016 06:20:17 +0200 Martin Wilderoth wrote:
>
> > I have a ceph cluster and I will change my journal devices to new SSD's.
> >
> > In some instructions of doing this they refer to a journal file (link to
> > UUID of journal )
> >
> > In my OSD folder this journal don’t exist.
> >
> If your cluster is "years old" and not created with ceph-disk, then yes,
> that's not surprising.
> Mind, I created a recent one of mine manually and still used that scheme:
> --- ls -la /var/lib/ceph/osd/ceph-12/
> total 80
> drwxr-xr-x   4 root root  4096 Mar  1 14:44 .
> drwxr-xr-x   8 root root  4096 Sep 10  2015 ..
> -rw-r--r--   1 root root37 Sep 10  2015 ceph_fsid
> drwxr-xr-x 320 root root 24576 Mar  2 20:24 current
> -rw-r--r--   1 root root37 Sep 10  2015 fsid
> lrwxrwxrwx   1 root root44 Sep 10  2015 journal ->
> /dev/disk/by-id/wwn-0x55cd2e404b77573c-part5
> -rw---   1 root root57 Sep 10  2015 keyring
> ---
>
> Ceph isn't magical, so if that link isn't there, you probably have
> something like this in your ceph.conf, preferably with UUID instead of thet
> possibly changing device name:
> ---
> [osd.0]
> host = ceph-01
> osd journal = /dev/sdc3
> ---


Yes, that is my setup. Would that mean I could either create the symlink journal
-> /dev/disk/..
and remove the osd journal entry from ceph.conf,

or change my ceph.conf with osd journal = /dev/

And is the recommended way actually to use the journal symlink?




>
> > This instructions is renaming the UUID of new device to the old UUID not
> > to break anything.
> >
> > i was planning to use the command ceph-osd --mkjournal and update the
> > ceph. conf accordingly.
> >
> > Do I need to take care of my missing journal symlink, i think it is a
> > symlink ?
> > Why is it there, and how is it used ?.
> >
> > I actually don’t remember the command i used to create the disk but it's
> > some years ago and i doubt i used ceph-disk.
> >
> > I found the following process in this list, that seemed good. But its
> > still not clear to me if i can skip this journal link ?
> >
> > *set noout*
> > *stop the osds*
> > *flush the journal*
> > *replace journal SSDs*
> > recreate journal partitions
> > update ceph.conf to reflect new journal device names
> Either that or remove the location and create the symlink.
> In both cases UUID is the fail-safe way to go.
> > *recreate the journal (for the existing osds)*
> > *start the osds*
> > *unset noout*


I don't really need to keep the old UUID as long as I'm using the correct UUID
in my config (symlink)?

To check which journal an OSD is using, what is the command?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] howto upgrade

2016-04-22 Thread Martin Wilderoth
On 22 April 2016 at 09:09, Csaba Tóth  wrote:

> Hi!
>
> I use ceph hammer on ubuntu 14.04.
> Please give me advice how is the best to upgrade, first the OS to 16.04,
> and than the ceph to jewel, or first the ceph and than the OS?
>
> Thanks,
> Csaba
>
> Hello,

I have not tested upgrading to Jewel and Ubuntu 16.04.

I would update 14.04 & Hammer to the latest point release, then
upgrade Ceph to Jewel.
Last, I would upgrade the OS to 16.04.

The reason is that Xenial doesn't have a Hammer package.
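
A rough sketch of that order (assuming ceph-deploy, illustrative node names, and
the usual restart sequence of monitors first, then OSDs, then MDS):

  ceph-deploy install --release hammer node1 node2 node3   # latest Hammer point release
  # restart all daemons, then:
  ceph-deploy install --release jewel node1 node2 node3
  # restart mons, then OSDs, then MDS; only then do-release-upgrade to 16.04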

I guess you are one of the first to test :-)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] howto upgrade

2016-04-22 Thread Csaba Tóth
Hi!

I use ceph hammer on ubuntu 14.04.
Please give me advice how is the best to upgrade, first the OS to 16.04,
and than the ceph to jewel, or first the ceph and than the OS?

Thanks,
Csaba
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] fibre channel as ceph storage interconnect

2016-04-22 Thread Adrian Saul
> from the responses I've gotten, it looks like there's no viable option to use
> fibre channel as an interconnect between the nodes of the cluster.
> Would it be worth while development effort to establish a block protocol
> between the nodes so that something like fibre channel could be used to
> communicate internally?  Unless I'm waaay wrong (And I'm seldom *that*
> wrong), it would not be worth the effort.  I won't even feature request it.
> Looks like I'll have to look into infiniband or CE, and possibly migrate away
> from Fibre Channel, even though it kinda just works, and therefore I really
> like it :(

I would think even conceptually it would be a mess -  FC as a peer to peer 
network fabric might be useful (in many ways I like it a lot better than 
Ethernet), but you would have to develop an entire transport protocol over it 
(the normal SCSI model would be useless) for Ceph and then write that in to 
replace any of the network code in the existing Ceph code base.

A lot of work for something that is probably easier done swapping your FC HBAs 
for 10G NICs or IB HBAs.

>
> On Thu, Apr 21, 2016 at 11:06 PM, Schlacta, Christ 
> wrote:
> > My primary motivations are:
> > Most of my systems that I want to use with ceph already have fibre
> > Chantel cards and infrastructure, and more infrastructure is
> > incredibly cheap compared to infiniband or {1,4}0gbe cards and
> > infrastructure Most of my systems are expansion slot constrained, and
> > I'd be forced to pick one or the other anyway.
> >
> > On Thu, Apr 21, 2016 at 9:28 PM, Paul Evans  wrote:
> >> In today’s world, OSDs communicate via IP and only IP*. Some
> >> FiberChannel switches and HBAs  support IP-over-FC, but it’s about
> >> 0.02% of the FC deployments.
> >> Therefore, one could technically use FC, but it does’t appear to
> >> offer enough benefit to OSD operations to justify the unique architecture.
> >>
> >> What is your motivation to leverage FC behind OSDs?
> >>
> >> -Paul
> >>
> >> *Ceph on native Infiniband may be available some day, but it seems
> >> impractical with the current releases. IP-over-IB is also known to work.
> >>
> >>
> >> On Apr 21, 2016, at 8:12 PM, Schlacta, Christ 
> wrote:
> >>
> >> Is it possible?  Can I use fibre channel to interconnect my ceph OSDs?
> >> Intuition tells me it should be possible, yet experience (Mostly with
> >> fibre channel) tells me no.  I don't know enough about how ceph works
> >> to know for sure.  All my googling returns results about using ceph
> >> as a BACKEND for exporting fibre channel LUNs, which is, sadly, not
> >> what I'm looking for at the moment.
> >>
> >>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] fibre channel as ceph storage interconnect

2016-04-22 Thread Paul Evans
On Apr 21, 2016, at 11:10 PM, Schlacta, Christ 
> wrote:

Would it be worth while development effort to establish a block
protocol between the nodes so that something like fibre channel could
be used to communicate internally?

With 25/100 Ethernet & IB becoming available now...and with some effort to 
integrate IB and ceph already completed, I can’t see FC for OSDs getting any 
traction.

 Looks like I'll have to look into infiniband or
CE, and possibly migrate away from Fibre Channel, even though it kinda
just works, and therefore I really like it :(

If by CE you’re referring to Converged Enhanced Ethernet and its brethren 
(DCB/DCE), those technologies should be transparent to Ceph and provide some 
degree of improvement over standard Ethernet behaviors.  YMMV.

As for FC ‘just works…’  +1   (but I really don’t want to inspire a flame war 
on the topic)

- Paul
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] fibre channel as ceph storage interconnect

2016-04-22 Thread Schlacta, Christ
So it looks like, because replies go to the user instead of the list by
default (seriously, somebody needs to fix the list headers), the thread got
kind of messed up, so I apologize if you're using a threaded reader.  That
said, here goes.

from the responses I've gotten, it looks like there's no viable option
to use fibre channel as an interconnect between the nodes of the
cluster.
Would it be a worthwhile development effort to establish a block
protocol between the nodes so that something like fibre channel could
be used to communicate internally?  Unless I'm waaay wrong (And I'm
seldom *that* wrong), it would not be worth the effort.  I won't even
feature request it.  Looks like I'll have to look into infiniband or
CE, and possibly migrate away from Fibre Channel, even though it kinda
just works, and therefore I really like it :(

On Thu, Apr 21, 2016 at 11:06 PM, Schlacta, Christ  wrote:
> My primary motivations are:
> Most of my systems that I want to use with ceph already have fibre
> Chantel cards and infrastructure, and more infrastructure is
> incredibly cheap compared to infiniband or {1,4}0gbe cards and
> infrastructure
> Most of my systems are expansion slot constrained, and I'd be forced
> to pick one or the other anyway.
>
> On Thu, Apr 21, 2016 at 9:28 PM, Paul Evans  wrote:
>> In today’s world, OSDs communicate via IP and only IP*. Some FiberChannel
>> switches and HBAs  support IP-over-FC, but it’s about 0.02% of the FC
>> deployments.
>> Therefore, one could technically use FC, but it does’t appear to offer
>> enough benefit to OSD operations to justify the unique architecture.
>>
>> What is your motivation to leverage FC behind OSDs?
>>
>> -Paul
>>
>> *Ceph on native Infiniband may be available some day, but it seems
>> impractical with the current releases. IP-over-IB is also known to work.
>>
>>
>> On Apr 21, 2016, at 8:12 PM, Schlacta, Christ  wrote:
>>
>> Is it possible?  Can I use fibre channel to interconnect my ceph OSDs?
>> Intuition tells me it should be possible, yet experience (Mostly with
>> fibre channel) tells me no.  I don't know enough about how ceph works
>> to know for sure.  All my googling returns results about using ceph as
>> a BACKEND for exporting fibre channel LUNs, which is, sadly, not what
>> I'm looking for at the moment.
>>
>>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] fibre channel as ceph storage interconnect

2016-04-22 Thread Schlacta, Christ
My primary motivations are:
Most of my systems that I want to use with Ceph already have Fibre
Channel cards and infrastructure, and more infrastructure is
incredibly cheap compared to infiniband or {1,4}0gbe cards and
infrastructure
Most of my systems are expansion slot constrained, and I'd be forced
to pick one or the other anyway.

On Thu, Apr 21, 2016 at 9:28 PM, Paul Evans  wrote:
> In today’s world, OSDs communicate via IP and only IP*. Some FiberChannel
> switches and HBAs  support IP-over-FC, but it’s about 0.02% of the FC
> deployments.
> Therefore, one could technically use FC, but it does’t appear to offer
> enough benefit to OSD operations to justify the unique architecture.
>
> What is your motivation to leverage FC behind OSDs?
>
> -Paul
>
> *Ceph on native Infiniband may be available some day, but it seems
> impractical with the current releases. IP-over-IB is also known to work.
>
>
> On Apr 21, 2016, at 8:12 PM, Schlacta, Christ  wrote:
>
> Is it possible?  Can I use fibre channel to interconnect my ceph OSDs?
> Intuition tells me it should be possible, yet experience (Mostly with
> fibre channel) tells me no.  I don't know enough about how ceph works
> to know for sure.  All my googling returns results about using ceph as
> a BACKEND for exporting fibre channel LUNs, which is, sadly, not what
> I'm looking for at the moment.
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com