Re: [ceph-users] access ceph filesystem at storage level and not via ethernet

2017-09-22 Thread James Okken
Thanks again Ronny,
OCFS2 is working well so far.
I have 3 nodes sharing the same 7 TB MSA FC LUN, and I am hoping to add 3 more...

James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA

Tel:   973 967 5179
Email:   james.ok...@dialogic.com
Web:    www.dialogic.com - The Network Fuel Company

This e-mail is intended only for the named recipient(s) and may contain 
information that is privileged, confidential and/or exempt from disclosure 
under applicable law. No waiver of privilege, confidence or otherwise is 
intended by virtue of communication via the internet. Any unauthorized use, 
dissemination or copying is strictly prohibited. If you have received this 
e-mail in error, or are not named as a recipient, please immediately notify the 
sender and destroy all copies of this e-mail.

-Original Message-
From: Ronny Aasen [mailto:ronny+ceph-us...@aasen.cx] 
Sent: Thursday, September 14, 2017 4:18 AM
To: James Okken; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] access ceph filesystem at storage level and not via 
ethernet

On 14. sep. 2017 00:34, James Okken wrote:
> Thanks Ronny! Exactly the info I need. And kind of what I thought the answer 
> would be as I was typing and thinking more clearly about what I was asking. I 
> had just hoped CEPH would work like this, since the openstack fuel tools deploy 
> CEPH storage nodes easily.
> I agree I would not be using CEPH for its strengths.
> 
> I am interested further in what you've said in this paragraph though:
> 
> "if you want to have FC SAN attached storage on servers, shareable 
> between servers in a usable fashion I would rather mount the same SAN 
> lun on multiple servers and use a cluster filesystem like ocfs or gfs 
> that is made for this kind of solution."
> 
> Please allow me to ask you a few questions regarding that even though it 
> isn't CEPH specific.
> 
> Do you mean gfs/gfs2, the global file system?
> 
> Do ocfs and/or gfs require some sort of management/clustering server 
> to maintain and manage them (akin to a CEPH OSD)? I'd love to find a 
> distributed/cluster filesystem where I can just partition and format, and 
> then be able to mount and use that same SAN datastore from multiple servers 
> without a management server.
> If ocfs or gfs do need a server of this sort, does it need to be involved in 
> the I/O? Or will I be able to mount the datastore like any other disk, with 
> the I/O going across the Fibre Channel?

I only have experience with ocfs2, but I think gfs works similarly. 
There are quite a few cluster filesystems to choose from: 
https://en.wikipedia.org/wiki/Clustered_file_system

Servers that mount an ocfs2 shared filesystem must have ocfs2-tools 
installed, have access to the common shared FC LUN, and be aware of the 
other ocfs2 nodes using the same LUN, which you define in the 
/etc/ocfs2/cluster.conf config file; the ocfs2/o2cb cluster daemon must 
also be running.

Then it is just a matter of creating the ocfs2 filesystem (on one server), 
adding it to fstab (on all servers), and mounting it.
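
For illustration only, a minimal sketch of what that could look like (the 
cluster name, node names, IPs and device path below are made up, and the 
exact o2cb commands vary a little between distributions):

    # /etc/ocfs2/cluster.conf -- identical on every node
    cluster:
        node_count = 3
        name = sancluster
    node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 0
        name = node1
        cluster = sancluster
    # ...one more "node:" stanza per server...

    # on every node: bring the o2cb cluster stack online
    service o2cb online

    # on ONE node only: create the filesystem on the shared LUN
    mkfs.ocfs2 -L sandata -N 6 /dev/mapper/msa_lun

    # on every node: add to fstab and mount
    echo '/dev/mapper/msa_lun  /data  ocfs2  _netdev,defaults  0 0' >> /etc/fstab
    mount /data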


> One final question, if you don't mind: do you think I could use ext4 or xfs 
> and "mount the same SAN lun on multiple servers" if I can guarantee each 
> server will only write to its own specific directory and never anywhere the 
> other servers will be writing? (I even have the SAN mapped to each server 
> using different LUNs.)

Mounting the same (non-cluster) filesystem on multiple servers is 
guaranteed to destroy the filesystem: you will have multiple servers 
writing to the same metadata area and the same journal area, generally 
trampling over each other. Luckily, I think most modern filesystems would 
detect that the FS is mounted somewhere else and prevent you from 
mounting it again without big fat warnings.
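
For what it is worth, ext4 has an optional "multiple mount protection" (mmp) 
feature that makes that refusal explicit rather than best-effort. A sketch, 
with a made-up device name:

    mkfs.ext4 -O mmp /dev/mapper/san_lun     # enable MMP at creation time
    tune2fs -O mmp /dev/mapper/san_lun       # or turn it on for an existing, unmounted ext4

It only blocks the second mount, though; it does not make concurrent writes safe.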

kind regards
Ronny Aasen

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] access ceph filesystem at storage level and not via ethernet

2017-09-14 Thread James Okken
Thanks Ric, thanks again Ronny.

I have a lot of good info now! I am going to try ocfs2.

Thanks


-- Jim
-Original Message-
From: Ric Wheeler [mailto:rwhee...@redhat.com] 
Sent: Thursday, September 14, 2017 4:35 AM
To: Ronny Aasen; James Okken; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] access ceph filesystem at storage level and not via 
ethernet

On 09/14/2017 11:17 AM, Ronny Aasen wrote:
> On 14. sep. 2017 00:34, James Okken wrote:
>> Thanks Ronny! Exactly the info I need. And kind of what I thought 
>> the answer would be as I was typing and thinking more clearly about what I 
>> was asking. I had just hoped CEPH would work like this, since the 
>> openstack fuel tools deploy CEPH storage nodes easily.
>> I agree I would not be using CEPH for its strengths.
>>
>> I am interested further in what you've said in this paragraph though:
>>
>> "if you want to have FC SAN attached storage on servers, shareable 
>> between servers in a usable fashion I would rather mount the same SAN 
>> lun on multiple servers and use a cluster filesystem like ocfs or gfs 
>> that is made for this kind of solution."
>>
>> Please allow me to ask you a few questions regarding that even though 
>> it isn't CEPH specific.
>>
>> Do you mean gfs/gfs2, the global file system?
>>
>> Do ocfs and/or gfs require some sort of management/clustering 
>> server to maintain and manage them (akin to a CEPH OSD)? I'd love to find 
>> a distributed/cluster filesystem where I can just partition and 
>> format, and then be able to mount and use that same SAN datastore 
>> from multiple servers without a management server.
>> If ocfs or gfs do need a server of this sort, does it need to be 
>> involved in the I/O? Or will I be able to mount the datastore like 
>> any other disk, with the I/O going across the Fibre Channel?
>
> I only have experience with ocfs2, but I think gfs works similarly. 
> There are quite a few cluster filesystems to choose from:
> https://en.wikipedia.org/wiki/Clustered_file_system
>
> Servers that mount an ocfs2 shared filesystem must have 
> ocfs2-tools installed, have access to the common shared FC LUN, and be 
> aware of the other ocfs2 nodes using the same LUN, which you define in 
> the /etc/ocfs2/cluster.conf config file; the ocfs2/o2cb cluster daemon must 
> also be running.
>
> Then it is just a matter of creating the ocfs2 filesystem (on one server), 
> adding it to fstab (on all servers), and mounting it.
>
>
>> One final question, if you don't mind: do you think I could use 
>> ext4 or xfs and "mount the same SAN lun on multiple servers" if I can 
>> guarantee each server will only write to its own specific directory 
>> and never anywhere the other servers will be writing? (I even have 
>> the SAN mapped to each server using different LUNs.)
>
> Mounting the same (non-cluster) filesystem on multiple servers is 
> guaranteed to destroy the filesystem: you will have multiple servers 
> writing to the same metadata area and the same journal area, generally 
> trampling over each other.
> Luckily, I think most modern filesystems would detect that the FS is 
> mounted somewhere else and prevent you from mounting it again without big 
> fat warnings.
>
> kind regards
> Ronny Aasen

In general, you can get shared file systems (i.e., the clients can all see the 
same files and directories) with lots of different approaches:

* use a shared-disk file system like GFS2 or OCFS2 - all of the "clients" where 
the applications run are part of the cluster, and each server attaches to the 
shared storage (through iSCSI, FC, whatever). They do require HA cluster 
infrastructure for things like fencing.

* use a distributed file system like cephfs, glusterfs, etc. - your clients 
access it through a filesystem-specific protocol; they don't see raw storage.

* take any file system (local or other) and re-export it as a client/server 
type of file system by using an NFS server or Samba server.
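
As a rough illustration of that last option, assuming made-up paths and an 
NFS server that already owns the filesystem, the re-export could look like:

    # on the server: /etc/exports
    /srv/data   10.0.0.0/24(rw,sync,no_subtree_check)

    exportfs -ra                          # reload the export table

    # on each client
    mount -t nfs nfsserver:/srv/data /mnt/data

The clients then never touch the underlying block device; all I/O funnels 
through the one server doing the export.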

Ric

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] access ceph filesystem at storage level and not via ethernet

2017-09-14 Thread Ric Wheeler

On 09/14/2017 11:17 AM, Ronny Aasen wrote:

On 14. sep. 2017 00:34, James Okken wrote:
Thanks Ronny! Exactly the info I need. And kind of what I thought the answer 
would be as I was typing and thinking more clearly about what I was asking. I 
had just hoped CEPH would work like this, since the openstack fuel tools deploy 
CEPH storage nodes easily.

I agree I would not be using CEPH for its strengths.

I am interested further in what you've said in this paragraph though:

"if you want to have FC SAN attached storage on servers, shareable
between servers in a usable fashion I would rather mount the same SAN
lun on multiple servers and use a cluster filesystem like ocfs or gfs
that is made for this kind of solution."

Please allow me to ask you a few questions regarding that even though it 
isn't CEPH specific.


Do you mean gfs/gfs2, the global file system?

Do ocfs and/or gfs require some sort of management/clustering server to 
maintain and manage them (akin to a CEPH OSD)?
I'd love to find a distributed/cluster filesystem where I can just partition 
and format, and then be able to mount and use that same SAN datastore from 
multiple servers without a management server.
If ocfs or gfs do need a server of this sort, does it need to be involved in 
the I/O? Or will I be able to mount the datastore like any other disk, with 
the I/O going across the Fibre Channel?


I only have experience with ocfs2, but I think gfs works similarly. There are 
quite a few cluster filesystems to choose from: 
https://en.wikipedia.org/wiki/Clustered_file_system


Servers that mount an ocfs2 shared filesystem must have ocfs2-tools 
installed, have access to the common shared FC LUN, and be aware of the 
other ocfs2 nodes using the same LUN, which you define in the 
/etc/ocfs2/cluster.conf config file; the ocfs2/o2cb cluster daemon must be running.


Then it is just a matter of creating the ocfs2 filesystem (on one server), 
adding it to fstab (on all servers), and mounting it.



One final question, if you don't mind: do you think I could use ext4 or xfs 
and "mount the same SAN lun on multiple servers" if I can guarantee each 
server will only write to its own specific directory and never anywhere the 
other servers will be writing? (I even have the SAN mapped to each server 
using different LUNs.)


Mounting the same (non-cluster) filesystem on multiple servers is guaranteed 
to destroy the filesystem: you will have multiple servers writing to the same 
metadata area and the same journal area, generally trampling over each other. 
Luckily, I think most modern filesystems would detect that the FS is mounted 
somewhere else and prevent you from mounting it again without big fat warnings.


kind regards
Ronny Aasen 


In general, you can get shared file systems (i.e., the clients can all see the 
same files and directories) with lots of different approaches:


* use a shared-disk file system like GFS2 or OCFS2 - all of the "clients" where 
the applications run are part of the cluster, and each server attaches to the 
shared storage (through iSCSI, FC, whatever). They do require HA cluster 
infrastructure for things like fencing.


* use a distributed file system like cephfs, glusterfs, etc. - your clients 
access it through a filesystem-specific protocol; they don't see raw storage.


* take any file system (local or other) and re-export it as a client/server type 
of file system by using an NFS server or Samba server.


Ric

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] access ceph filesystem at storage level and not via ethernet

2017-09-14 Thread Ronny Aasen

On 14. sep. 2017 00:34, James Okken wrote:

Thanks Ronny! Exactly the info I need. And kind of what I thought the answer 
would be as I was typing and thinking more clearly about what I was asking. I 
had just hoped CEPH would work like this, since the openstack fuel tools deploy 
CEPH storage nodes easily.
I agree I would not be using CEPH for its strengths.

I am interested further in what you've said in this paragraph though:

"if you want to have FC SAN attached storage on servers, shareable
between servers in a usable fashion I would rather mount the same SAN
lun on multiple servers and use a cluster filesystem like ocfs or gfs
that is made for this kind of solution."

Please allow me to ask you a few questions regarding that even though it isn't 
CEPH specific.

Do you mean gfs/gfs2, the global file system?

Do ocfs and/or gfs require some sort of management/clustering server to 
maintain and manage them (akin to a CEPH OSD)?
I'd love to find a distributed/cluster filesystem where I can just partition 
and format, and then be able to mount and use that same SAN datastore from 
multiple servers without a management server.
If ocfs or gfs do need a server of this sort, does it need to be involved in 
the I/O? Or will I be able to mount the datastore like any other disk, with 
the I/O going across the Fibre Channel?


I only have experience with ocfs2, but I think gfs works similarly. 
There are quite a few cluster filesystems to choose from: 
https://en.wikipedia.org/wiki/Clustered_file_system


Servers that mount an ocfs2 shared filesystem must have ocfs2-tools 
installed, have access to the common shared FC LUN, and be aware of the 
other ocfs2 nodes using the same LUN, which you define in the 
/etc/ocfs2/cluster.conf config file; the ocfs2/o2cb cluster daemon must be running.


Then it is just a matter of creating the ocfs2 filesystem (on one server), 
adding it to fstab (on all servers), and mounting it.




One final question, if you don't mind: do you think I could use ext4 or xfs and 
"mount the same SAN lun on multiple servers" if I can guarantee each server 
will only write to its own specific directory and never anywhere the other 
servers will be writing? (I even have the SAN mapped to each server using 
different LUNs.)


Mounting the same (non-cluster) filesystem on multiple servers is 
guaranteed to destroy the filesystem: you will have multiple servers 
writing to the same metadata area and the same journal area, generally 
trampling over each other. Luckily, I think most modern filesystems would 
detect that the FS is mounted somewhere else and prevent you from 
mounting it again without big fat warnings.


kind regards
Ronny Aasen

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] access ceph filesystem at storage level and not via ethernet

2017-09-13 Thread James Okken
Thanks Ronny! Exactly the info I need. And kind of what I thought the answer 
would be as I was typing and thinking more clearly about what I was asking. I 
had just hoped CEPH would work like this, since the openstack fuel tools deploy 
CEPH storage nodes easily.
I agree I would not be using CEPH for its strengths.

I am interested further in what you've said in this paragraph though:

"if you want to have FC SAN attached storage on servers, shareable 
between servers in a usable fashion I would rather mount the same SAN 
lun on multiple servers and use a cluster filesystem like ocfs or gfs 
that is made for this kind of solution."

Please allow me to ask you a few questions regarding that even though it isn't 
CEPH specific.

Do you mean gfs/gfs2, the global file system?

Do ocfs and/or gfs require some sort of management/clustering server to 
maintain and manage them (akin to a CEPH OSD)?
I'd love to find a distributed/cluster filesystem where I can just partition 
and format, and then be able to mount and use that same SAN datastore from 
multiple servers without a management server.
If ocfs or gfs do need a server of this sort, does it need to be involved in 
the I/O? Or will I be able to mount the datastore like any other disk, with 
the I/O going across the Fibre Channel?

One final question, if you don't mind: do you think I could use ext4 or xfs and 
"mount the same SAN lun on multiple servers" if I can guarantee each server 
will only write to its own specific directory and never anywhere the other 
servers will be writing? (I even have the SAN mapped to each server using 
different LUNs.)

Thanks for your expertise!

-- Jim

Date: Wed, 13 Sep 2017 19:56:07 +0200
From: Ronny Aasen <ronny+ceph-us...@aasen.cx>
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] access ceph filesystem at storage level and
not via ethernet


A bit crazy :)

Whether the disks are directly attached to an OSD node or attached over 
Fibre Channel does not make a difference: you cannot shortcut the ceph 
cluster and talk to the osd disks directly without eventually destroying 
the ceph cluster.

Even if you did, ceph stores objects on disk, so you would not find a 
filesystem or RBD disk images there, only objects on your FC-attached 
osd node disks with filestore, and with bluestore not even readable 
objects.

That being said, I think an FC SAN attached ceph osd node sounds a bit 
strange. Ceph's strength is the distributed, scalable solution, and 
having the osd nodes collected on a SAN array would neuter ceph's 
strengths and amplify ceph's weakness of high latency. I would only 
consider such a solution for testing, learning or playing around without 
having actual hardware for a distributed system, and in that case use one 
lun for each osd disk and give 8-10 VMs some luns/osds each, just to 
learn how to work with ceph.
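
As a sketch of that lab-only idea (assuming a Luminous-era ceph-volume and 
made-up device names), each LUN handed to a VM could become one OSD:

    # on each VM acting as an osd node, once per LUN mapped to it
    ceph-volume lvm create --data /dev/mapper/san_lun1
    ceph-volume lvm create --data /dev/mapper/san_lun2

    ceph osd tree    # check that the new osds joined the cluster

Again, this is only sensible for learning; all of those "osds" share one 
array and one failure domain.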

if you want to have FC SAN attached storage on servers, shareable 
between servers in a usable fashion I would rather mount the same SAN 
lun on multiple servers and use a cluster filesystem like ocfs or gfs 
that is made for this kind of solution.


kind regards
Ronny Aasen

On 13.09.2017 19:03, James Okken wrote:
>
> Hi,
>
> Novice question here:
>
> The way I understand CEPH is that it distributes data in OSDs in a 
> cluster. The reads and writes come across the ethernet as RBD requests 
> and the actual data IO then also goes across the ethernet.
>
> I have a CEPH environment being setup on a fiber channel disk array 
> (via an openstack fuel deploy). The servers using the CEPH storage 
> also have access to the same fiber channel disk array.
>
> From what I understand those servers would need to make the RBD 
> requests and do the IO across ethernet, is that correct? Even though 
> with this infrastructure setup there is a "shorter" and faster path to 
> those disks, via the fiber channel.
>
> Is there a way to access storage on a CEPH cluster when one has this 
> "better" access to the disks in the cluster? (How about if it were to 
> be only a single OSD with replication set to 1?)
>
> Sorry if this question is crazy...
>
> thanks
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] access ceph filesystem at storage level and not via ethernet

2017-09-13 Thread Ronny Aasen

On 13.09.2017 19:03, James Okken wrote:


Hi,

Novice question here:

The way I understand CEPH is that it distributes data in OSDs in a 
cluster. The reads and writes come across the ethernet as RBD requests 
and the actual data IO then also goes across the ethernet.


I have a CEPH environment being setup on a fiber channel disk array 
(via an openstack fuel deploy). The servers using the CEPH storage 
also have access to the same fiber channel disk array.


From what I understand those servers would need to make the RBD 
requests and do the IO across ethernet, is that correct? Even though 
with this infrastructure setup there is a “shorter” and faster path to 
those disks, via the fiber channel.


Is there a way to access storage on a CEPH cluster when one has this 
“better” access to the disks in the cluster? (how about if it were to 
be only a single OSD with replication set to 1)


Sorry if this question is crazy…

thanks



A bit crazy :)

Whether the disks are directly attached to an OSD node or attached over 
Fibre Channel does not make a difference: you cannot shortcut the ceph 
cluster and talk to the osd disks directly without eventually destroying 
the ceph cluster.


Even if you did, ceph stores objects on disk, so you would not find a 
filesystem or RBD disk images there, only objects on your FC-attached 
osd node disks with filestore, and with bluestore not even readable 
objects.


That being said, I think an FC SAN attached ceph osd node sounds a bit 
strange. Ceph's strength is the distributed, scalable solution, and 
having the osd nodes collected on a SAN array would neuter ceph's 
strengths and amplify ceph's weakness of high latency. I would only 
consider such a solution for testing, learning or playing around without 
having actual hardware for a distributed system, and in that case use one 
lun for each osd disk and give 8-10 VMs some luns/osds each, just to 
learn how to work with ceph.


if you want to have FC SAN attached storage on servers, shareable 
between servers in a usable fashion I would rather mount the same SAN 
lun on multiple servers and use a cluster filesystem like ocfs or gfs 
that is made for this kind of solution.



kind regards
Ronny Aasen
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] access ceph filesystem at storage level and not via ethernet

2017-09-13 Thread James Okken
Hi,

Novice question here:

The way I understand CEPH is that it distributes data in OSDs in a cluster. The 
reads and writes come across the ethernet as RBD requests and the actual data 
IO then also goes across the ethernet.
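
The way I would normally consume that from a client is, I believe, something 
like the sketch below (made-up pool/image names), with both the rbd requests 
and the data I/O going over the ethernet to the OSD nodes:

    rbd create --size 10240 rbd/vol1     # 10 GB image in the "rbd" pool
    rbd map rbd/vol1                     # kernel client exposes it as e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt/vol1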

I have a CEPH environment being setup on a fiber channel disk array (via an 
openstack fuel deploy). The servers using the CEPH storage also have access to 
the same fiber channel disk array.

From what I understand those servers would need to make the RBD requests and 
do the IO across ethernet, is that correct? Even though with this 
infrastructure setup there is a "shorter" and faster path to those disks, via 
the fiber channel.

Is there a way to access storage on a CEPH cluster when one has this "better" 
access to the disks in the cluster? (how about if it were to be only a single 
OSD with replication set to 1)

Sorry if this question is crazy...

thanks


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com