easyproglife wrote:
Gilles,
IMO, you are going towards client-server model as you want more and more
repository capabilities (like advanced search).
Personally, I see a lot of advantages if a repository would be kind of
server (say HTTP server with searching capabilities) and the client would
ask it to search in a standard way (e.g. a search URL
0. Something for somebody to handle, which is what mvnrepo does for me
or maybe better: web-service with WSDL).
-1. No SOAP, no WSDL. I speak as the author of one and a bit SOAP stacks.
In the client-only approach (as today) we have possible problems, like a single
repository that may be used by several clients, probably with different Ivy
versions, or transaction issues where 2 clients publish / resolve at the
same time.
I wondered about race conditions.
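One conventional mitigation for that publish race on a shared file-system repository is write-then-atomic-rename: the publisher writes under a temporary name, then renames into place, so a reader never sees a half-written artifact. A minimal sketch in Java; the paths and method names here are mine, not Ivy's:

```java
import java.io.IOException;
import java.nio.file.*;

public class AtomicPublish {
    /**
     * Publish by writing to a temp file in the target directory, then
     * atomically renaming it into place. Readers never observe a partial
     * file; of two simultaneous publishers, the last rename wins.
     * (Sketch only -- layout and names are hypothetical, not Ivy's API.)
     */
    public static Path publish(Path repoDir, String name, byte[] artifact)
            throws IOException {
        Files.createDirectories(repoDir);
        // Temp file in the SAME directory, so the rename stays on one filesystem.
        Path tmp = Files.createTempFile(repoDir, name, ".part");
        Files.write(tmp, artifact);
        Path target = repoDir.resolve(name);
        // A same-directory rename is atomic on POSIX filesystems.
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
        return target;
    }
}
```

This doesn't give you real transactions, but it does make a single publish all-or-nothing, which is most of what concurrent resolvers need.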
So a client-server approach with a well defined interface (e.g. WSDL) is a good
idea, but something is missing: we don't use files any more but URLs.
1. I've never seen a well defined WSDL interface.
2. You only get TX with WS-ReliableMessaging and WS-TX, and nobody does
those.
Personally, I prefer using file-system repositories for internal
organization use, for their speed and ease of management. The 'useOrigin'
feature of Ivy 1.4 also helps. Ideally, if I hadn't seen Maven in the past, I
would not have been thinking about a cache at all. Instead, I would have just
created a tool that builds a classpath on top of file-system 'repositories',
and the cache would not have been needed at all.
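The tool described above can be tiny: walk the repository tree and join the jar paths into a classpath string. A sketch of the idea (the repository layout is assumed; this is not Ivy code):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ClasspathBuilder {
    /** Join every .jar under the repository root into a platform classpath string. */
    public static String classpathOf(Path repoRoot) throws IOException {
        try (Stream<Path> files = Files.walk(repoRoot)) {
            return files.filter(p -> p.toString().endsWith(".jar"))
                        .map(Path::toString)
                        .sorted() // deterministic ordering
                        .collect(Collectors.joining(File.pathSeparator));
        }
    }
}
```

With the jars referenced in place like this, there is nothing to copy into a local cache.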
Why am I telling you this whole story?
What I am trying to say is that we have an inherent conflict between the
concepts: the 'smart' client-server approach vs. the 'simple/dumb' file-system
structure with the logic in the clients.
The problem with the file-system approach is that the logic is in the client,
as I wrote above.
It's a bit like the original Visual SourceSafe tool and early databases,
where everything was on a shared FS. It doesn't scale to many clients or
long-haul connectivity. But HTTP does, because proxies can cache GET calls,
and because most clients do not try to make the remote server look like
something it isn't: a local filesystem.
A possible solution (not trivial at all) is to combine the approaches: write a
client-server system where the 'server' is not an HTTP server but an SMB
(Samba) based server. SMB exposes a file-system interface but may (I am not
sure; need to check this) implement internal logic like transactions, locking
and search (using dummy paths/files, like /proc on Linux) and so on, as you
can do with an HTTP server and servlets.
SMB is mediocre for long-haul connectivity, and doesn't have a search API
(AFAIK) or offer a transacted API. The Vista filesystem has TX support; I
don't know if this is exposed over SMB.
If you want a repository with write access,
0. Stick with URLs. You get caching from proxies, easy browsing and the
ability to fetch from anything with <get>
1. use WebDAV.
2. use a PUT operation to put something new up
3. use GET to retrieve
4. use DELETE to delete
5. use HEAD to poll for changes, or a conditional GET with an ETag
(If-None-Match) header
6. provide an Atom feed of changes that a client can poll to invalidate
the local cache.
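Those verbs map straight onto a stock HTTP client; a sketch with java.net.http (the repository URL is made up), including the conditional GET that makes ETag polling cheap:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.BodyPublishers;

public class RepoVerbs {
    // Hypothetical artifact URL; any WebDAV-enabled server would do.
    static final String ARTIFACT =
        "http://repo.example.org/ivy/org/module/1.0/module-1.0.jar";

    /** Publish: PUT the artifact bytes to its URL. */
    static HttpRequest put(byte[] artifact) {
        return HttpRequest.newBuilder(URI.create(ARTIFACT))
                .PUT(BodyPublishers.ofByteArray(artifact)).build();
    }

    /** Retrieve: plain GET, which proxies can cache. */
    static HttpRequest get() {
        return HttpRequest.newBuilder(URI.create(ARTIFACT)).GET().build();
    }

    /** Remove the artifact. */
    static HttpRequest delete() {
        return HttpRequest.newBuilder(URI.create(ARTIFACT)).DELETE().build();
    }

    /** Poll for changes without downloading the body. */
    static HttpRequest head() {
        return HttpRequest.newBuilder(URI.create(ARTIFACT))
                .method("HEAD", BodyPublishers.noBody()).build();
    }

    /**
     * Conditional GET: the server answers 304 Not Modified while the ETag
     * still matches, so the locally cached copy can be reused.
     */
    static HttpRequest conditionalGet(String etag) {
        return HttpRequest.newBuilder(URI.create(ARTIFACT))
                .header("If-None-Match", etag).GET().build();
    }
}
```

Send any of these with `HttpClient.newHttpClient().send(request, ...)`; the point is that every operation above is plain HTTP, so `<get>`, browsers and caching proxies all work unmodified.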
-Steve