* Dmitry Kurochkin [EMAIL PROTECTED] [2008-01-05 02:58:35 +0300]:
Also related to this idea: can we adjust the order of downloads in the
queue? e.g. maybe I'd like to add a file towards the front of the queue
because I need it right now. This might be doable if waitForURL could bump
up
2008/1/5, Tristan Seligmann [EMAIL PROTECTED]:
* Dmitry Kurochkin [EMAIL PROTECTED] [2008-01-05 02:58:35 +0300]:
Also related to this idea: can we adjust the order of downloads in the
queue? e.g. maybe I'd like to add a file towards the front of the queue
because I need it right now.
On Fri, Dec 21, 2007 at 04:12:49AM +0300, Dmitry Kurochkin wrote:
I have completed initial work on libwww pipelining. Output of darcs whatsnew
is attached (sorry for that, I will try to make a proper patch tomorrow).
What is done:
- libcurl functionality is implemented using libwww. Now
2008/1/4, David Roundy [EMAIL PROTECTED]:
On Fri, Dec 21, 2007 at 04:12:49AM +0300, Dmitry Kurochkin wrote:
I have completed initial work on libwww pipelining. Output of darcs whatsnew
is attached (sorry for that, I will try to make a proper patch tomorrow).
What is done:
- libcurl
Hi all.
Here is the patch.
I get a strange segfault on one machine with Debian testing + custom ghc.
It is not reproducible on Debian unstable.
Segfault happens in the get command after the first patch is downloaded
(when "Copying patch 1 of N" is printed). Despite all my efforts I could not find the
Yeah, the configure step is precisely the place to put that, and sticking
it right into the makefile for now will be great. Thanks!
David
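The configure test suggested here could be sketched as follows, assuming pkg-config is available; this is a config fragment for illustration only (darcs' actual configure machinery may differ), and the HAVE_CURL_PIPELINING flag name is made up:

```sh
# Sketch: detect a pipelining-capable libcurl (>= 7.16.0) and define a
# preprocessor flag the C glue can test; names here are illustrative.
if pkg-config --atleast-version=7.16.0 libcurl; then
    CFLAGS="$CFLAGS $(pkg-config --cflags libcurl) -DHAVE_CURL_PIPELINING"
    LIBS="$LIBS $(pkg-config --libs libcurl)"
fi
```

Building the feature in only when the library is new enough keeps the Debian-stable case (curl < 7.16) compiling unchanged.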
On Wed, Dec 19, 2007 at 07:15:15PM +0300, Dmitry Kurochkin wrote:
The problem is that ghc eats the -DHAVE_CONFIG_H option from libwww-config
--cflags.
If I
I have completed initial work on libwww pipelining. Output of darcs whatsnew
is attached (sorry for that, I will try to make a proper patch tomorrow).
What is done:
- libcurl functionality is implemented using libwww. Now pipelining works.
- New Libcurl module provides 3 functions:
* copyUrl -
On Tue, 2007-12-18 at 18:51 +0900, Stephen J. Turnbull wrote:
The problem with neon in my experience is that they don't hesitate to
change the API in incompatible ways, occasionally requiring
substantial changes in the caller.
That's a bit annoying.
It appears that at least on the
Mac
On Mon, 2007-12-17 at 10:18 -0500, David Roundy wrote:
Any suggestions how to go about doing this? A quick look suggests that
libcurl can't handle http pipelining, and that no haskell HTTP library does
so. The only library I can find that does seem to support it is libwww,
which looks like a
On Mon, 2007-12-17 at 21:42 -0800, Stefan O'Rear wrote:
Neon is GPL and is used as the basis for HTTP support in gnome-vfs and
Nautilus, the file manager in Gnome. It supports WebDAV, TLS, cookies,
connection keep-alive, and transfer compression. I think it only needs
its arms twisted a
On Dec 18, 2007 10:00 PM, Dmitry Kurochkin [EMAIL PROTECTED] wrote:
I tried to get pipelining working with cURL, but no luck. It looks to me
that cURL multi API is overcomplicated and not-too-well documented...
So I have taken a look at libwww and it works great! I created (copied
a sample)
Debian testing. I will try it on unstable when I get home.
I'll see if I can make ghc print the commands it runs.
Regards,
Dmitry
2007/12/19, David Roundy [EMAIL PROTECTED]:
This is puzzling. What OS are you using? It seems like ghc should be
calling the same gcc you've got on the system...
2007/12/19, David Roundy [EMAIL PROTECTED]:
On Dec 19, 2007 9:53 AM, Dmitry Kurochkin [EMAIL PROTECTED] wrote:
I have created a Libwww.hs module and hslibwww.c. Libwww.hs provides
getUrl and getUrls functions. I have changed copyRemotesNormal to use
getUrls. And it is ready for testing. But
2007/12/19, David Roundy [EMAIL PROTECTED]:
On Dec 18, 2007 10:00 PM, Dmitry Kurochkin [EMAIL PROTECTED] wrote:
I tried to get pipelining working with cURL, but no luck. It looks to me
that cURL multi API is overcomplicated and not-too-well documented...
So I have taken a look at libwww
I do not have experience with cURL. But I do not think this is a problem.
The real problem is time... Still this does not seem like too much work. I
will try to look at it tomorrow.
Regards,
Dmitry
David Roundy wrote:
H. Alas, curl 7.16 is not yet in debian stable (my default for when we
I don't have the post from Peter yet, so I'm responding to Stefan's.
Stefan O'Rear writes:
On Tue, Dec 18, 2007 at 06:11:58AM +0100, Peter Lund wrote:
Neon is GPL and is used as the basis for HTTP support in gnome-vfs and
Nautilus, the file manager in Gnome. It supports WebDAV, TLS,
After looking at this a bit more it does not look so good to me.
I thought that darcs uses a new TCP connection for each patch. But
wireshark shows that it uses persistent connection already.
So there is no TCP handshake overhead.
Pipelining is sending multiple requests without waiting for
I tried to get pipelining working with cURL, but no luck. It looks to me
that cURL multi API is overcomplicated and not-too-well documented...
So I have taken a look at libwww and it works great! I created (copied
a sample) simple program to load given URL many times using only
persistent
On Thu, Dec 13, 2007 at 09:35:52AM +, Simon Marlow wrote:
David Roundy wrote:
The difference here is that I haven't implemented the time-stamp
synchronizing feature for hashed repositories. I wasn't sure it was
still needed (and would be nice to drop, as it's a bit hackish), since
for
On Thu, Dec 13, 2007 at 07:12:45PM -0800, David Roundy wrote:
Still on my todo list (of issues that you've reported):
2. figuring out a nice way to speed up a lazy darcs get. Currently it
grabs each file in the repository individually. This means we're not
harmed by long history, but
On Thu, Dec 13, 2007 at 07:19:02PM -0800, Stefan O'Rear wrote:
On Thu, Dec 13, 2007 at 07:12:45PM -0800, David Roundy wrote:
Still on my todo list (of issues that you've reported):
2. figuring out a nice way to speed up a lazy darcs get. Currently it
grabs each file in the repository
On 12/17/07, David Roundy [EMAIL PROTECTED] wrote:
Any suggestions how to go about doing this? A quick look suggests that
libcurl can't handle http pipelining, and that no haskell HTTP library does
so. The only library I can find that does seem to support it is libwww,
which looks like a
H. Alas, curl 7.16 is not yet in debian stable (my default for when we
can require something), but that's still very interesting! Do you by any
chance have experience coding with libcurl? We've only ever used the easy
interface, and if you'd like to take a shot at updating src/hscurl.c to
David Roundy writes:
H. Alas, curl 7.16 is not yet in debian stable (my default for when we
can require something),
This is an optimization. Add a configure test for it, build it in
when available, add a Darcs option for it whether or not it's
actually, and a warning that this option
On Tue, Dec 18, 2007 at 08:27:23AM +0900, Stephen J. Turnbull wrote:
David Roundy writes:
H. Alas, curl 7.16 is not yet in debian stable (my default for when we
can require something),
This is an optimization. Add a configure test for it, build it in
when available, add a Darcs
Hi David,
According to cURL changelog pipeline support has been added in version
7.16.0:
CURLMOPT_PIPELINING added for enabling HTTP pipelined transfers.
I did not use it myself but from the docs it looks like this is just
what you want. Quote:
CURLMOPT_PIPELINING
Pass a long set to 1 to
On Tue, Dec 18, 2007 at 06:11:58AM +0100, Peter Lund wrote:
On Mon, 2007-12-17 at 10:18 -0500, David Roundy wrote:
Any suggestions how to go about doing this? A quick look suggests that
libcurl can't handle http pipelining, and that no haskell HTTP library does
so. The only library I can
My arguments here started looking pretty thin, so I've now added automatic
optimization for hashed repositories. I think this'll be pretty cheap, and
if it turns out to be a performance regression, then we can always revert
it. The only command upon which we really shouldn't be bothering to do
David Roundy [EMAIL PROTECTED] writes:
Okay, I've found a couple of really stupid bits of code, and this goes a
lot faster now. The 17 pull took under three seconds, and that's with
profiling running. Fortunately (and perhaps unsurprisingly) the issue was
largely with the easy parts of the
On Sat, Dec 15, 2007 at 05:38:43PM +0100, Petr Rockai wrote:
Anyhow, it'll be a couple of hours before tests are passed and changes are
pushed, but then I'd appreciate it if you'd take another look at this! (I
could do it myself but right now I think I need a break... and it's far
easier
On 12/14/07, David Roundy [EMAIL PROTECTED] wrote:
You don't need to call optimize on the repository that is used to create
the tag, and you shouldn't need to do so very often.
Ah, I thought perhaps you needed to do this in order to reduce the
search space on both sides of the exchange.
In
On 12/15/07, David Roundy [EMAIL PROTECTED] wrote:
My arguments here started looking pretty thin, so I've now added automatic
optimization for hashed repositories. I think this'll be pretty cheap, and
if it turns out to be a performance regression, then we can always revert
it. The only
David Roundy wrote:
On Wed, Dec 12, 2007 at 01:45:13PM +, Simon Marlow wrote:
darcs changes seems to have a big performance regression:
$ time darcs2 changes --last=10 > /dev/null
I killed it after 3 minutes of CPU time and the process had grown to 1.4Gb.
darcs1 does this in 0.05
David Roundy wrote:
On Fri, Dec 14, 2007 at 10:04:57AM +, Simon Marlow wrote:
David Roundy wrote:
Okay, it turns out that it was indeed bad strictness causing the trouble.
For some reason, I had made the PatchInfoAnd data type strict in both its
components, which meant that every time we
Hi,
first of all, hats off for the progress you have made!
David Roundy [EMAIL PROTECTED] writes:
The future of darcs is in the darcs-2 repository format, which features a
new merge algorithm that introduces two major user-visible changes
1. It should no longer be possible to confuse darcs
On Fri, Dec 14, 2007 at 01:33:33PM +, Simon Marlow wrote:
David Roundy wrote:
On Fri, Dec 14, 2007 at 10:04:57AM +, Simon Marlow wrote:
David Roundy wrote:
Okay, it turns out that it was indeed bad strictness causing the trouble.
For some reason, I had made the PatchInfoAnd data
On Fri, Dec 14, 2007 at 10:04:57AM +, Simon Marlow wrote:
David Roundy wrote:
Okay, it turns out that it was indeed bad strictness causing the trouble.
For some reason, I had made the PatchInfoAnd data type strict in both its
components, which meant that every time we read a patch ID,
Thanks, Peter, for making this investigation!
On Fri, Dec 14, 2007 at 01:42:14PM +0100, Petr Rockai wrote:
Hi,
first of all, hats off for the progress you have made!
David Roundy [EMAIL PROTECTED] writes:
The future of darcs is in the darcs-2 repository format, which features a
new
On Fri, Dec 14, 2007 at 09:18:29AM -0800, David Roundy wrote:
Hmm, I have just tested the nested conflict issue. Now, the behaviour
is *much* better in darcs-2 than it has been in darcs one. I have
modified Pekka Pessi's misery.sh (from [darcs-users] unique features +
exponential time
On 12/14/07, David Roundy [EMAIL PROTECTED] wrote:
On Fri, Dec 14, 2007 at 01:33:33PM +, Simon Marlow wrote:
I guess I don't understand why optimize is exposed to the user at all. If
there's an optimal state for the repository, why can't it be maintained in
that state?
It's because it
On Fri, Dec 14, 2007 at 10:15:13PM +0100, Alexander Staubo wrote:
On 12/14/07, David Roundy [EMAIL PROTECTED] wrote:
On Fri, Dec 14, 2007 at 01:33:33PM +, Simon Marlow wrote:
I guess I don't understand why optimize is exposed to the user at all. If
there's an optimal state for the
David Roundy wrote:
darcs check should work to indicate the conversion went fine.
Just fired one off, I'll let you know if it finishes before I've written
this email :-)
$ darcs2 query repo
Type: darcs
Format: hashed, darcs-2-experimental
Root:
Simon Marlow wrote:
David Roundy wrote:
Yikes! That's actually a very surprising bug. I'd be interested in
hearing if it shows up if you run a darcs2 optimize first? Either way,
of course, it's a serious bug, but that'd give a hint where the
trouble is.
darcs2 check has nearly finished...
On Thu, Dec 13, 2007 at 09:50:50AM +, Simon Marlow wrote:
Simon Marlow wrote:
David Roundy wrote:
Yikes! That's actually a very surprising bug. I'd be interested in
hearing if it shows up if you run a darcs2 optimize first? Either way,
of course, it's a serious bug, but that'd give a
On Wed, Dec 12, 2007 at 01:55:13PM +, Simon Marlow wrote:
A small UI issue:
$ darcs2 get http://darcs.haskell.org/ghc-darcs2
darcs failed: Incompatibility with repository
http://darcs.haskell.org/ghc-darcs2:
Cannot mix darcs-2 repositories with older formats
Since I'm trying to
On Wed, Dec 12, 2007 at 01:45:13PM +, Simon Marlow wrote:
darcs changes seems to have a big performance regression:
$ time darcs2 changes --last=10 > /dev/null
I killed it after 3 minutes of CPU time and the process had grown to 1.4Gb.
darcs1 does this in 0.05 seconds using 2Mb.
On Thu, Dec 13, 2007 at 01:36:09PM -0800, Stefan O'Rear wrote:
On Thu, Dec 13, 2007 at 09:35:52AM +, Simon Marlow wrote:
David Roundy wrote:
The difference here is that I haven't implemented the time-stamp
synchronizing feature for hashed repositories. I wasn't sure it was
still
On Thu, Dec 13, 2007 at 09:50:50AM +, Simon Marlow wrote:
Simon Marlow wrote:
David Roundy wrote:
Yikes! That's actually a very surprising bug. I'd be interested in
hearing if it shows up if you run a darcs2 optimize first? Either way,
of course, it's a serious bug, but that'd give a
On Thu, Dec 13, 2007 at 07:12:45PM -0800, David Roundy wrote:
On Thu, Dec 13, 2007 at 09:50:50AM +, Simon Marlow wrote:
Simon Marlow wrote:
David Roundy wrote:
Yikes! That's actually a very surprising bug. I'd be interested in
hearing if it shows up if you run a darcs2 optimize
David Roundy wrote:
=== Creating a repository in the darcs-2 format ===
Converting an existing repository to the darcs-2 format is as easy as
darcs convert oldrepository newrepository
I did this for GHC's repository. I left it running last night, and I'm not
sure whether it completed
Simon Marlow wrote:
It is also online here:
http://darcs.haskell.org/ghc-darcs2
Getting a lazy partial repository over http isn't particularly quick:
$ time darcs2 get http://darcs.haskell.org/ghc-darcs2 --darcs-2
Finished getting.
495.19s real 2.08s user 1.12s system 0%
A small UI issue:
$ darcs2 get http://darcs.haskell.org/ghc-darcs2
darcs failed: Incompatibility with repository
http://darcs.haskell.org/ghc-darcs2:
Cannot mix darcs-2 repositories with older formats
Since I'm trying to get a darcs-2 format repository, I would expect it to
just work, and
Dear darcs-devel folks:
Oh by the way, let me say: HOORAY! I suspected that darcs 2 was
never going to actually happen, and now I see that it *is* going to
happen! Way to go! This breathes new life into the darcs project!
Regards,
Zooko
HOORAY! Thank you David, Ganesh, Eric and others.
___
darcs-devel mailing list
darcs-devel@darcs.net
http://lists.osuosl.org/mailman/listinfo/darcs-devel
We are happy to announce the first prerelease version of darcs 2! Darcs 2
will feature numerous improvements, and this prerelease will also feature a
few regressions, so we're looking for help, from both Haskell developers
and users willing to try this release out. Read below, to see how you can
55 matches