David Boyes wrote:
"not a good idea"?

It hard-codes a Windows-specific item into infrastructure. If you ever
change the platform, you need to reengineer not only the desktop but
also chunks of infrastructure, which invariably get reused elsewhere,
leading to even bigger ripple effects.

It also limits reusability from a program standpoint. URLs are designed
to be programmatic specifications that can be used by other
applications; UNC names have semantic restrictions that limit them to
representing only files. What happens if an app needs to pull data from
different sources at some point during its life cycle? Do it with a
URL and you don't care, and you're better positioned to think about Web
Services and reusable objects. Do it with a UNC file handle and you're
stuck forever with file semantics.
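To make that concrete, here is a minimal sketch (in Python, purely for
illustration -- the names and URLs are made up, not from either post) of
why a URL keeps your options open: the same call reads from a local file
today or a web server tomorrow, with only the URL string changing.

    from urllib.request import urlopen

    def fetch(url):
        # urlopen understands file://, http://, https:// and ftp:// schemes,
        # so the caller never commits to file semantics.
        with urlopen(url) as resp:
            return resp.read()

    data = fetch("file:///etc/hostname")           # local file today
    # data = fetch("http://example.com/data.txt")  # web service tomorrow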

It's just sloppy architecture -- UNC is a convenient solution based on a
set of existing conditions. It's not thinking forward to what happens if
-- when -- those conditions change.

There is, I think, a certain amount of misinformation in some of the
replies.

UNC is, as its name suggests, a naming convention. It has been
implemented in LANMANAGER/LANSERVER servers and clients, which include
(at least) Windows 95/NT, OS/2 and their successors, and NetWare (see a
mention at
http://www.novell.com/communities/node/1545/connecting+netware+server+using+unc+path
).
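By way of illustration (a hedged sketch of my own, with a hypothetical
server and share name), a UNC name always has the form \\server\share\path,
and Python's ntpath module -- usable on any platform -- splits one
accordingly:

    import ntpath

    # splitdrive() treats the \\server\share prefix as the "drive" part
    # of a UNC name, leaving the path within the share as the remainder.
    unc = r"\\fileserver\public\reports\q3.txt"   # hypothetical share
    print(ntpath.splitdrive(unc))
    # ('\\\\fileserver\\public', '\\reports\\q3.txt')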

UNC has been around a while (as you'd expect); it's mentioned in this
patent, granted Nov '94 and filed in August '89:
http://www.patentstorm.us/patents/5363487/description.html

http://tools.ietf.org/html/rfc1945 says, "HTTP has been in use by the
World-Wide Web global information initiative since 1990."

UNC may be used to address shared resources on any computer on the
local[1] network, and those resources may be shared at the local user's
discretion (or indiscretion!). Shared resources typically appear on
client users' desktops as "just more files."

AFP in Mac OS provides similar convenient sharing of resources.

HTTP lacks that convenience, but really comes into play when
distributing files to remote, usually anonymous and untrusted, users.
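For what it's worth, here is how little it takes to publish files that
way (a sketch using Python's standard http.server as a stand-in for any
web server; the port number is arbitrary):

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serves the current directory, read-only, to anyone who can reach
    # port 8000 -- no shares to define and no credentials to hand out,
    # which suits the remote, anonymous-user case.
    HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()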

[1] It is true that SMB (AKA CIFS, AKA NetBIOS, AKA NetBIOS over TCP) can
be used to share filesystems over the internet, and at one time a group of
us did so, by arrangement amongst ourselves, and I created a file on the
C: drive of some chap's computer in Italy.

The nearest thing Linux/Unix have is NFS, but again it lacks the
convenience (from the users' viewpoint) of CIFS and AFP.

Whether SMB and UNC are proprietary is probably immaterial, since IBM
was in partnership with Microsoft at the time, and one might reasonably
imagine that IBM has the information needed to produce a z/OS client and
integrate it if it chose to do so. (Yes, I know LANSERVER exists. I
don't know much about it; when I came close to encountering it in 2000
or so it was alleged to be glacially slow.)

If you say, "It's just sloppy architecture," then you are judging
yesterday's best practice by today's standards. It was created for LANs
of small computers - Pentiums, 486s, and less.




--

Cheers
John
