this is unfair, because i haven't told you all the facts.
i don't want to tell you all the facts, because they are sensitive.

our situation is a very idiosyncratic one, where we are squatting
on the intersection of multiple security domains,
the nexus of two completely distinct network teams and
a firewall team that bridges them,
and a bunch of physical site-specific design rules
(for both the production site and its DR backup site).
while i chafe a little under their combined restrictions,
individually they seem reasonable and the people
who own them are all at least smart.

in particular, to allay david's concern, the cluster file system we are allowed
to use (veritas) has very specific networking requirements for the elements
in a veritas cluster, and we are not able to satisfy those in our (weird) case.
end of argument.

and having two systems share a drive does allow certain attacks between those two
systems, but those are of a different nature from island-hopping
network attacks.

thanks for all your inputs, but we're done here.

                andrew

On Jul 18, 2012, at 11:07 AM, [email protected] wrote:

> You may want to point out to your security team that being able to share a
> drive, but not having enough connectivity to run a cluster filesystem, is
> security theater; the damage you can do through a shared drive is far greater,
> and far harder to trace, than anything you can do over the network.
> 
> In any case, take a look at the code for GFS; it does what you are trying to
> do, but since it's in the kernel, it may be bypassing some layers that you
> will have trouble bypassing with a completely userspace solution.
> 
> David Lang
> 
> 
> On Wed, 18 Jul 2012, Andrew Hume wrote:
> 
>> point taken.
>> due to $WORK constraints (firewalls etc.), the available cluster filesystems
>> can't work here.
>> 
>> On Jul 18, 2012, at 3:52 AM, Edward Ned Harvey wrote:
>> 
>>>> From: [email protected] [mailto:[email protected]]
>>>> On Behalf Of Andrew Hume
>>>> 
>>>> i have two linux servers, each of which has the same piece of SAN
>>>> attached to it as a LUN. that is, svra:/dev/sdbd is the same volume
>>>> (or more exactly WWN) as svrb:/dev/sdaf.
>>>> 
>>>> what i want to do is write something on svra to /dev/sdbd
>>>> and then be able to read it on svrb from /dev/sdaf.
>>>> in principle this should work, but on Linux the buffer
>>>> cache always inserts doubt. how do i reliably probe
>>>> /dev/sdaf for new content? there is no filesystem involved;
>>>> i am just talking about raw disk blocks.
>>> 
>>> I see other people have already answered this, regarding no filesystem.  But
>>> you might also consider using a clustered filesystem.  The whole point of
>>> such a thing is to do precisely this...  With a filesystem.  I believe the
>>> "standard" option built-into most linuxes nowadays would be gfs.  (Not to be
>>> confused with google GFS.)
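
for anyone who finds this thread in the archives with the same raw-block question:
the buffer-cache doubt i mentioned above can be sidestepped from userspace by opening
the raw device with O_DIRECT, so reads and writes go to the disk rather than to a
possibly stale cached copy. a minimal sketch follows; the device name, block size and
offset are placeholders for illustration, not our real layout:

    #define _GNU_SOURCE             /* needed for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLKSZ 4096              /* multiple of the device's logical sector size */

    int main(void)
    {
            const char *dev = "/dev/sdaf";   /* placeholder device name */
            off_t off = 0;                   /* placeholder byte offset, BLKSZ-aligned */
            void *buf;
            int fd;

            /* O_DIRECT requires an aligned buffer, aligned offset and aligned length */
            if (posix_memalign(&buf, BLKSZ, BLKSZ) != 0) {
                    perror("posix_memalign");
                    return 1;
            }

            fd = open(dev, O_RDONLY | O_DIRECT);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* each pread goes to the disk, not to a cached copy */
            if (pread(fd, buf, BLKSZ, off) != BLKSZ) {
                    perror("pread");
                    return 1;
            }

            fwrite(buf, 1, BLKSZ, stdout);   /* hand the block to whatever wants it */

            close(fd);
            free(buf);
            return 0;
    }

the writer side needs the same treatment (O_DIRECT, or at least O_SYNC/fsync) so its
blocks actually reach the array before the other host goes looking for them. and note
this only keeps the caches honest; it does nothing to coordinate who writes which blocks
when, which is the part a cluster filesystem would have handled.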


------------------
Andrew Hume  (best -> Telework) +1 623-551-2845
[email protected]  (Work) +1 973-236-2014
AT&T Labs - Research; member of USENIX and LOPSA




_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
