On Tue, Apr 22, 2008 at 7:14 AM, Juliusz Chroboczek <
[EMAIL PROTECTED]> wrote:

> > There is occasional discussion of creating a "libdarcs", where an API is
> > given and the programmer is able to use existing darcs functionality
> > through a library interface.
>
> Jason, I've already mentioned it before -- I think it's a horrible idea.
>
> Having a library means either having stable APIs, in which case you
> can no longer improve on code as easily, or library versioning,
> commonly known as ``DLL hell'' in the Windows world.
>
> The clean and flexible way is to have an easily scriptable Darcs.
> Darcs is fairly scriptable right now -- things like vc-darcs use Darcs
> as is, with no need to link against a libdarcs.


I've never looked at vc-darcs, but how does it handle the case of
selectively recording a hunk?  I believe that, to satisfy the original
poster's needs, hunk selection at record time via a GUI would be necessary.

One approach would be to use the current interactive interface of darcs.  I
think this is inherently hard to do.  Maybe I'm just bad at it, but I've
always found that working programmatically with an interactive interface is
hard.  Sometimes you mistakenly block waiting to hear from the other process,
and vice versa.  That leads to a flaky UI, and none of us wants that.  I also
find that error handling and reporting are usually harder when using the
subprocess technique.
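
To make the blocking problem concrete, here is a rough Python sketch of the
kind of code a frontend ends up writing against the interactive interface.
The prompt text and the answers are made up (this isn't darcs' actual
dialogue); the point is only how easily the naive approach wedges:

    # Hypothetical sketch: driving an interactive `darcs record` over pipes.
    # The prompt and answers are invented, not darcs' real dialogue.
    import subprocess

    proc = subprocess.Popen(
        ["darcs", "record"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )

    # Danger: if darcs buffers its output when not on a tty, or prints a
    # prompt we didn't anticipate, readline() blocks here forever while
    # darcs blocks waiting on stdin -- the classic two-way pipe deadlock.
    prompt = proc.stdout.readline()
    answer = "y" if "hunk" in prompt else "n"   # crude guess at the question
    proc.stdin.write(answer + "\n")
    proc.stdin.flush()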

I was discussing libdarcs vs. subprocess with lelit in #darcs, and he
proposed that the subprocess approach might require the following changes:

darcs whatsnew --xml
This would give you named or indexed hunks.

darcs record --preselected-hunks=1,2,3
This would take the names or indexes from whatsnew.
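
With those two changes, a GUI could drive record non-interactively with
something like the rough Python sketch below.  The element and attribute
names are invented, since we haven't defined the schema, and
--preselected-hunks is only the proposed flag:

    # Sketch assuming the proposed indexed hunks in `darcs whatsnew --xml`
    # and the proposed `--preselected-hunks` flag.  Element and attribute
    # names ("hunk", "index", "summary") are invented for illustration.
    import subprocess
    import xml.etree.ElementTree as ET

    out = subprocess.run(
        ["darcs", "whatsnew", "--xml"],
        capture_output=True, text=True, check=True,
    ).stdout

    root = ET.fromstring(out)
    hunks = [(h.get("index"), h.findtext("summary", ""))
             for h in root.iter("hunk")]

    # The GUI would show `hunks` with checkboxes; pretend 1 and 3 were ticked.
    chosen = ["1", "3"]
    subprocess.run(
        ["darcs", "record", "-m", "my patch",
         "--preselected-hunks=" + ",".join(chosen)],
        check=True,
    )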

It seems to me that we still need to discuss how to properly structure the
xml output, and, as the original poster asked, where is the darcs output
schema or DTD defined?  We may also run into the problem that scripts and
programs that previously depended on a specific schema from
'darcs whatsnew --xml' are now broken by our new format.  Does this mean
we've traded a problem we sought to avoid for an equivalent one?

One of the beautiful things about darcs internally is that it is written in
a strongly typed manner.  But when we expose darcs as a subprocess, we
largely throw that away at the boundary where darcs meets the other
process.  We can bring it back to a degree by using xml and making sure our
schema is properly designed.  Except that I don't see how a program that
relies on darcs' schema gets any guarantees.  For example, a few years back
I broke the xml output of darcs simply by outputting things where I
shouldn't have.  If I recall correctly, a post- or pre-hook could produce
output where XML was expected.
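
To illustrate the "no guarantees" point: nothing stops stray output from
landing in front of the XML, and the consumer only finds out when its parse
blows up.  (The element names below are invented.)

    # Illustrative only: a hook printing before the XML breaks every
    # consumer at runtime, and no type checker saw it coming.
    import xml.etree.ElementTree as ET

    clean = "<changes><hunk index='1'/></changes>"     # what the schema promises
    polluted = "posthook: ran my script\n" + clean     # what might actually arrive

    ET.fromstring(clean)                               # parses fine
    try:
        ET.fromstring(polluted)
    except ET.ParseError as err:
        print("consumer breaks at runtime:", err)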

To me it seems like some tasks are much easier one way than the other and
there isn't always a clear winner.  Having a very scriptable UI is nice for
tools like tailor, but I'm skeptical that it's the right way to go for
putting a different UI on the darcs semantics (the case of TortoiseDarcs).

On a side note, this makes me realize that we ought to (if we don't already)
have tests that ensure darcs outputs its xml consistently as it evolves,
again reinforcing the idea that we should probably provide the xml schema
somewhere and document when it changes.
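
For instance, a test along these lines would catch accidental schema
breakage.  It's only a sketch: 'darcs-whatsnew.dtd' is a hypothetical schema
file we would have to write and publish first, and it leans on pytest and
lxml:

    # Sketch of a regression test for the XML output.  The DTD file is
    # hypothetical; lxml and pytest's tmp_path fixture are assumed.
    import subprocess
    from lxml import etree

    def test_whatsnew_xml_matches_schema(tmp_path):
        def darcs(*args):
            return subprocess.run(["darcs"] + list(args), cwd=tmp_path,
                                  check=True, capture_output=True)

        darcs("init")
        (tmp_path / "file.txt").write_text("hello\n")
        darcs("add", "file.txt")

        out = darcs("whatsnew", "--xml").stdout
        dtd = etree.DTD("darcs-whatsnew.dtd")          # hypothetical schema
        assert dtd.validate(etree.fromstring(out)), dtd.error_log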

Again, I'm not against extending the scriptability, but I think sometimes
libdarcs would just be quicker and easier for some users.  Nor do I see any
benefit in freezing the API.  I'd personally have no problem with updating
it at specific release points.  That's the maintenance risk people take on
by using libdarcs.  Ultimately, in my mind, subprocess vs. libdarcs is about
allowing the tool developer to choose their own trade-offs.  I can't tell
what is best for someone else, nor do I want to take the responsibility of
figuring it out for them.

Jason
_______________________________________________
darcs-users mailing list
[email protected]
http://lists.osuosl.org/mailman/listinfo/darcs-users
