Hugo Cornelis writes:
On Tue, Feb 23, 2010 at 2:58 AM, Stephen Leake
stephen_le...@stephe-leake.org wrote:
hend...@topoi.pooq.com (hendrik) writes:
[ Code that is used with multiple projects/ split/merge; hard to handle ]
I don't understand the issue. What, exactly, do you mean by
Markus Wanner writes:
Eric Anderson wrote:
Assuming that the code for getting memory usage is derived from the
code I wrote a long time ago (output looks similar, so likely); I
expect that the memory consumption differences are just sampling
effects. It sampled every 1 second or so
Zack Weinberg writes:
On Mon, Sep 22, 2008 at 3:09 PM, Markus Wanner [EMAIL PROTECTED] wrote:
This leads me to think that the STL implementation doesn't provide an O(1)
implementation for size()... the savings in avg memory consumption seems to
confirm this, no?
Now that's just bloody
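For context on the size() speculation above: before C++11, std::list::size() was permitted to be O(n), and some implementations of that era (notably libstdc++) did walk the whole list. A minimal sketch of the usual workaround, tracking the count explicitly; the class name is illustrative, not monotone's:

```cpp
#include <cstddef>
#include <list>

// Pre-C++11, std::list::size() was allowed to be O(n) (libstdc++
// counted nodes by traversal). A common workaround: keep an explicit
// counter alongside the list so size queries never walk it.
template <typename T>
class counted_list {
public:
    void push_back(const T& v) { items_.push_back(v); ++count_; }
    void pop_front()           { items_.pop_front(); --count_; }
    std::size_t size() const   { return count_; }  // O(1)
    bool empty() const         { return count_ == 0; }
private:
    std::list<T> items_;
    std::size_t count_ = 0;
};
```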
Markus Schiltknecht writes:
Eric Anderson wrote:
Revision: 464e510af4959231ff63352c902c689b0f1687aa
Branch: net.venge.monotone.experiment.performance
Hm.. why didn't any of this get merged into mainline? Looks like there
are more good ideas lying around in that branch.
IIRC
Markus Schiltknecht writes:
Some questions that arise here: Why is that specialized hex decoder so
much faster than what's in botan? Does it have to be limited to 40
digits? Can the botan decoder be sped up?
Probably due to the generality of the botan one being a full pipe; the
special case is
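The gap asked about above is plausible on its own terms: a specialized decoder can be a flat table lookup in a tight loop, while a general filter pipeline pays per-chunk buffering and dispatch overhead. The following is an illustrative sketch of such a decoder, not botan's or monotone's actual code:

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>

// Hypothetical specialized hex decoder in the spirit of the thread:
// a flat 256-entry lookup table and a tight loop, with none of the
// buffering or virtual dispatch of a general filter pipeline.
// Function and variable names are illustrative.
static std::string decode_hex(const std::string& in) {
    static int8_t table[256];
    static bool init = false;
    if (!init) {
        for (int i = 0; i < 256; ++i) table[i] = -1;
        for (int i = 0; i < 10; ++i) table['0' + i] = static_cast<int8_t>(i);
        for (int i = 0; i < 6; ++i) {
            table['a' + i] = static_cast<int8_t>(10 + i);
            table['A' + i] = static_cast<int8_t>(10 + i);
        }
        init = true;
    }
    if (in.size() % 2 != 0) throw std::invalid_argument("odd-length hex");
    std::string out;
    out.reserve(in.size() / 2);
    for (std::size_t i = 0; i < in.size(); i += 2) {
        int hi = table[static_cast<unsigned char>(in[i])];
        int lo = table[static_cast<unsigned char>(in[i + 1])];
        if (hi < 0 || lo < 0) throw std::invalid_argument("bad hex digit");
        out.push_back(static_cast<char>((hi << 4) | lo));
    }
    return out;
}
```

Nothing here is limited to 40 digits; a decoder like this handles a 40-digit SHA1 as just one more even-length input.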
Jack Lloyd writes:
On Fri, May 02, 2008 at 09:05:06PM +0200, Christof Petig wrote:
just some notes for myself on the performance problems with OE:
- hex_decode is used extensively by roster.cc: parse_marking (and
expensive)
Lapo Luchini writes:
4. make the compression algorithm parametric (most obvious choices could
be, in order of decreasing speed and compressed size: no compression, lzo,
zlib, lzma; lzma is particularly interesting because it is expensive to
compress, but very fast to decompress)
You probably want
Markus Schiltknecht writes:
[ sundry performance numbers, gzip -1 is slower than disk ]
If you want fast compression, the best I know of is lzf
(http://www.goof.com/pcg/marc/liblzf.html); we clocked it compressing
at 90MB/s on a 2.4GHz Opteron. lzo is great for decompression --
almost 200MB/s
All, I recently learned about a merging program for windows that
appears to be pretty good:
http://winmerge.org/
I haven't tried it out (because I don't use windows), but it might be
nice to have it supported in monotone as one of the complaints I got
from a co-worker trying to do merging on
Ulf Ochsenfahrt writes:
William Uther wrote:
On 06/09/2007, at 7:47 PM, Ulf Ochsenfahrt wrote:
Maybe monotone could do with a 'project' entity (policy branches?),
which collects multiple branches. Then you'd have a
net.venge.projects.eric project which would name all the
Evan Martin writes:
Does anyone have experience using monotone for code review?
Evan,
Yes, we did this internally at HPL for reviews of work done by
a contractor.
What process do you use?
Our primary branch was usi.hpl.hp.com/ManagementTools/main, we had a
centralized machine
All,
Yesterday, while working on a project, I wanted to split a file
into two separate files. I was hoping to preserve the history on one
of the split out files so that things like annotate would tell you
where various bits came from. Unfortunately, I couldn't figure out a
way to do this,
Nathaniel Smith writes:
[ others want relative paths, but it could be messy ]
Also, to just remind everyone who hasn't been lurking for the last n
years, the other traditional argument for mtn commands being
unrestricted by default is that in this model both common use cases
are very
I use contrib/monotone-notify.pl after a bunch of fixes to get it to
work with recent monotone (e.g. the monotone command is now mtn)
I also added features to allow selecting branches by regex and a
configurable subject line.
Then I run it once an hour from cron with a bunch of commands like:
Nathaniel Smith writes:
I also notice, on re-reading, that we are repeatedly calling select
with nothing in the read or write fds, and with a zero timeout. Why
the heck would we be doing that? It's basically a noop by
definition...
It's a probe to see whether any Error data has arrived,
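The probe described in that reply is a standard POSIX pattern: select() with a zero timeval returns immediately and just reports current readability. A minimal sketch (the function name is mine, not monotone's):

```cpp
#include <sys/select.h>
#include <unistd.h>

// select() with an all-zero timeval does not block: it polls once and
// reports whether fd is readable right now. This is the "probe"
// pattern discussed above. POSIX-only; the name is illustrative.
static bool readable_now(int fd) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    struct timeval tv = {0, 0};   // zero timeout: return immediately
    return select(fd + 1, &rfds, nullptr, nullptr, &tv) > 0;
}
```

Called with empty fd sets it really is a no-op, which is why the repeated zero-timeout calls looked suspicious.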
Nathaniel Smith writes:
On Mon, Sep 04, 2006 at 05:42:45PM -0700, Eric Anderson wrote:
net.venge.monotone.experiment.performance.vcache-size-hook
[ redo after recent roster changes ]
Ok.
net.venge.monotone.performance.experiment.botan-gzip
My memory on this one is that I
I've gotten a number of the performance improvements split out for
merging:
net.venge.monotone.experiment.performance.inline-verify
net.venge.monotone.experiment.performance.vcache-size-hook
net.venge.monotone.experiment.performance.whitespace-trim
Rob Schoening writes:
Can anyone comment on the status of viewmtn and whether 0.05 works with
monotone 0.29?
I expect that it wouldn't work since it wasn't working with 0.28.
The attached patch fixes all of the bugs that I know of and works with
the 0.28 performance experiment branch, which
Eric Anderson writes:
Rob Schoening writes:
Can anyone comment on the status of viewmtn and whether 0.05 works with
monotone 0.29?
I expect that it wouldn't work since it wasn't working with 0.28.
The attached patch fixes all of the bugs that I know of and works with
the 0.28
Nathaniel Smith writes:
[ ignored the set a flag approach to signal handling because there
could exist paths that would miss the signal ]
Something that just occurred to me: you could, in the signal handler, set
the flag, and also set a timer signal to occur in the future. If the
timer
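The flag-plus-timer idea can be sketched as below: even if some code path forgets to poll the flag, the armed SIGALRM delivers a second nudge later. Signal choice and names are illustrative assumptions:

```cpp
#include <csignal>
#include <unistd.h>

// Sketch of the idea above: the handler records the signal in an
// async-signal-safe flag and arms a timer, so a code path that never
// checks the flag still gets re-interrupted by SIGALRM later.
static volatile sig_atomic_t interrupted = 0;

extern "C" void on_interrupt(int) {
    interrupted = 1;
    alarm(1);   // fallback nudge: SIGALRM in 1 second if nobody polls
}

static void install_handler() {
    struct sigaction sa = {};
    sa.sa_handler = on_interrupt;
    sigaction(SIGUSR1, &sa, nullptr);
}
```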
Nathaniel Smith writes:
On Mon, Aug 21, 2006 at 03:40:21PM -0700, Eric Anderson wrote:
[ maybe binary rosters ]
I would be curious how much they win at this point; the roster-delta
strategy for reconstruction changes that landscape, at least somewhat.
For instance, my understanding
Nathaniel,
If you're going to do this big change, it may also be worth
switching over to binary rosters deltas as well, as in
464e510af4959231ff63352c902c689b0f1687aa; which measured a somewhat
lower improvement for the pull (1.2x) than your 40% speedup, but also
had the side effect of
Thomas Keller writes:
So, somebody proposed the syntax some.branch/Trunk to let Trunk be a
subbranch of some.branch, so unless branch renaming will be available,
this will pretty much conflict.
I for one proposed this ages ago when there was a discussion
on branch naming. The
Nathaniel Smith writes:
On Mon, Aug 14, 2006 at 06:03:56AM -0700, Brian C. Lane wrote:
[EMAIL PROTECTED] wrote:
http://bugs.openembedded.org/show_bug.cgi?id=1311
You can get the file from http://www.brianlane.com/OE.mtn.gz
This file appears to contain some cached data
Nathaniel Smith writes:
On Wed, Aug 09, 2006 at 10:42:37PM -0700, Eric Anderson wrote:
22457095fd36ea02a44652d45b5e7a6788cdea06: whitespace trimming
Looks fine.
Will separate out.
---
4d389c13b3bb1235c720b8392f9574f1ddb72d13: inline verify
Move the verify function
Nathaniel Smith writes:
997a677db676734acc0d098979d2a9cee8765ec9: libcrypto ssl linking
Enable optional compilation with openssl libcrypto for the optimized
SHA1 hash. Likely to be obsoleted by getting the fast assembly code
from libcrypto used in Botan. Depending on how long
I've synced a whole bunch more performance improvements to
net.venge.monotone.experiment.performance.
Once I get an answer for which ones of these are good for
synchronization to mainline, I'll split them out into individual
branches referenced off of mainline. Detailed changes and performance
Attached is a cumulative patch for all the fixes that I had to make to
get the current viewmtn head working on Debian stable. Should I
a) commit these (in more pieces) into the net.angrygoats.viewmtn branch; or
b) commit these (in more pieces) into the usi.hpl.hp.com/viewmtn branch; or
c) have
Nathaniel Smith writes:
[ benefit of writing files/rosters is not writing full-text versions
that will be deleted; perhaps defer compression ]
Oh, that makes much more sense, I'd imagine you could defer the
compression, but according to my oprofile measurements the compression
piece isn't
Nathaniel Smith writes:
Of the other SHA1's in that asm/ directory, I can't imagine we care
about ia64 (well, or maybe Eric does, being at HP and all ;-)).
I don't care right now since I'm working on x86 machines; dunno in the
future. That said, I think it would be best to just ask that all
Nathaniel Smith writes:
Hmm. I _think_ you can probably do better than this with more
judicious use of tools. For instance, it is easy to run callgrind on
a smaller work set, and you will still probably find that 30% of that
workset is spent doing the same thing as before. You can also,
Jonathan S. Shapiro writes:
On Fri, 2006-08-04 at 13:01 -0700, Eric Anderson wrote:
You're probably running into the problem of monotone keeping the
entire commit in memory. eddb7e59361efeb8d9300ba0ddd7483272565097
from net.venge.monotone.experiment.performance fixes this.
Damn
Nathaniel Smith writes:
(Interestingly, rsync has exactly the same problem -- it starts with
the potentially _very_ lengthy transferring file list part, and
gives no feedback during this.)
If you run it with --progress, rsync 2.6.5 at least prints out file
counts:
7: rsync -av --progress
I've been looking into the cpu usage of pulls on the client. Below is
the oprofile sampling from doing a pull of the monotone database,
measuring only the client. Note that to actually get useful data on
where time is spent not in the monotone code itself, I found I had to
statically link to get
Matt Johnston writes:
On Wed, Aug 02, 2006 at 01:14:43AM -0700, Eric Anderson wrote:
[ lots of pthread usage in monotone ]
Just wondering, does -fno-threadsafe-statics make any
difference? It does on OS X, though I think that may be a
platform-specific bug.
gcc-3.3 on debian sarge
Nathaniel Smith writes:
On Tue, Aug 01, 2006 at 12:39:05AM -0700, Eric Anderson wrote:
Suitable for mainline:
eddb7e59361efeb8d9300ba0ddd7483272565097:
Make an upper bound on the amount of memory that will be consumed
during
a single commit. Right now a commit
Nathaniel Smith writes:
On Wed, Aug 02, 2006 at 01:14:43AM -0700, Eric Anderson wrote:
I was surprised there was so much time spent in lock and unlock, and
by sampling some back traces in gdb, I found it seemed to be all
memory allocation.
[ try using oprofile -c
Daniel Carosone writes:
On Wed, Aug 02, 2006 at 05:38:58PM -0700, Eric Anderson wrote:
My guess is that it will be
very rare that the memory bounding code is used, but if someone is
committing a huge tree (for example an import), I don't think you want
monotone to try to keep
I've created a net.venge.monotone.experiment.performance branch to hold
a bunch of performance-enhancing patches that may or may not be
appropriate for mainline.
I'll try to send a summary from time to time of the results, and I'm
trying to remember to put performance benchmarking in with
Nathaniel Smith writes:
Of course, things are slightly more subtle even than that... I believe
massif is looking at
allocated bytes + per currently allocated block overhead + stack size
- freed bytes
while memtime is looking at
allocated bytes - large freed blocks - re-used freed
I just pushed revision 9e7bf3f1355e0563551aad0fc22de3ef2b8033d7 which
adds in the ability to generate binary files in addition to text
files. So that it is sane to generate large repositories, I also
added in a mkrandom.c program that can generate both the random binary
and text files. My
Nathaniel Smith writes:
Weirdly, the memtime data doesn't really reflect this. For before
I get:
ls_unknown-avg-resident-MiB,4.35963439941,4.33722305298,4.40716743469
ls_unknown-avg-size-MiB,21.4379463196,21.345328331,21.6831884384
[EMAIL PROTECTED] writes:
Date: Wed, 19 Jul 2006 00:32:16 -0700
From: Nathaniel Smith [EMAIL PROTECTED]
Subject: Re: [Monotone-devel] Re: Improving the performance of
annotate
[ speed up annotation of ChangeLog file ]
That's a file I didn't try after adding in my patch. It's very
Nathaniel Smith [EMAIL PROTECTED] writes:
Subject: Re: [Monotone-devel] Patch to add Composite Benchmarks and
somesimple scalable add/commit tests
On Mon, Jul 17, 2006 at 11:55:46PM -0700, Eric Anderson wrote:
The idea behind the composite benchmark is that one can say
[EMAIL PROTECTED] writes:
From: Nathaniel Smith [EMAIL PROTECTED]
Subject: Re: [Monotone-devel] Patch to add Composite Benchmarks and
somesimple scalable add/commit tests
On Mon, Jul 17, 2006 at 11:55:46PM -0700, Eric Anderson wrote:
Content-Description: message body text
I've been working on improving the performance of annotate. I have
found a solution that drops the time for mtn annotate Makefile.am from
about 175 seconds down to 9 seconds (detailed cpu and memory
statistics at the bottom).
I've attached the patch so that people can play with it, but it needs
(resending with patch attached as text/plain)
I've been working on improving the performance of annotate. I have
found a solution that drops the time for mtn annotate Makefile.am from
about 175 seconds down to 9 seconds (detailed cpu and memory
statistics at the bottom).
I've attached the patch
I found a bug in instrumenter.py:alive(). alive() is actually
returning true if the process is dead. The attached patch fixes this
bug and also makes the checking for aliveness a little faster (it
checks 6 times every half second) so that if there is a problem, it
will exit sooner.
-Eric
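The alive() bug is in Python, but the underlying check is the same in any language: poll the child without blocking and treat "still running" and "exited" distinctly, not inverted. A hedged C++ sketch of that liveness test using waitpid(WNOHANG); names are mine, not instrumenter.py's:

```cpp
#include <sys/types.h>
#include <sys/wait.h>
#include <csignal>
#include <unistd.h>

// Non-blocking liveness check for a child process. waitpid() with
// WNOHANG returns 0 while the child is still running, and the pid
// (reaping it) once it has exited -- the inverse of the bug described.
static bool child_alive(pid_t pid) {
    int status = 0;
    return waitpid(pid, &status, WNOHANG) == 0;
}
```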
In net.venge.monotone.contrib.benchmark rev
679ea52924887e816903342104f797ba9c0be6d4
when I run:
#!/bin/sh
rm -rf myscratch myresults
python2.4 benchmark.py \
-m mtn-0.27=/home/anderse/projects/monotone/tar-0.27/mtn \
-b
When I attempt to pull from venge.net, if I don't have any key
specified in my _MTN/options file, everything works fine. If I have a
key specified, then I get:
mtn: connecting to venge.net
mtn: finding items to synchronize:
mtn: certificates | keys | revisions
mtn: 13977 | 26 |
[EMAIL PROTECTED] writes:
From: Nathaniel Smith [EMAIL PROTECTED]
Subject: Re: [Monotone-devel] Re: Monotone-devel Digest, Vol 39, Issue
15
[ code to check that mtn process is still alive after sleep is wrong ]
I just saw the code in mtn.py that does a sleep(3) in order to wait
[EMAIL PROTECTED] writes:
From: Nathaniel Smith [EMAIL PROTECTED]
Subject: Re: [Monotone-devel] Patch to add memory size benchmarking to
benchmark suite
instrumenter.py: add in a sleep after starting the sub command to make
sure that it actually gets going and doesn't
All,
Attached is a patch that adds in a new memtime.c executable
that will time both the memory and CPU usage of a command, and the
update that integrates that into the new benchmark.py script. It should
apply to 35534b3bb56f6472a23a3c99fce5aeb4e004431c of
Richard Levitte - VMS Whacker writes:
In message [EMAIL PROTECTED] on Wed, 31 Aug 2005 11:15:34 -0700, Eric
Anderson [EMAIL PROTECTED] said:
ea-9BKuEh7s7G Naming conventions are tricky,
ea-9BKuEh7s7G guess_binary_file_contents seems good to me.
OK. Did you say you don't have push
Richard Levitte - VMS Whacker writes:
I like the change to incremental binary checking, but I've a small
issue with the new name of the function; I sure hope that the
*filename* would be text most of the times.
It would appear that the version that did string[0] to get a
writeable pointer
[EMAIL PROTECTED] writes:
BenoƮt Dejean wrote:
[ string[0] not guaranteed to be contiguous, although most implement it
that way; c_str() is safe only for reads ]
Is .data() assumed to be contiguous as well? There are a lot of casts
of string.data() to (const char *) or equivalent in the
for the case with lots of small text files.
No effect on other operations in either memory or CPU usage.
Changelog entry:
2005-08-21 Eric Anderson [EMAIL PROTECTED]
* file_io.cc, file_io.hh, lua.cc, std_hooks.lua: determine if a
file is binary by looking at it incrementally
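The incremental approach in that change can be sketched roughly as follows: read the file a chunk at a time and stop at the first NUL byte, rather than loading the whole file before deciding. Chunk size and the function name are my assumptions, not the patch's actual values:

```cpp
#include <fstream>
#include <string>

// Incremental binary detection sketch: stream the file in chunks and
// bail out as soon as a NUL byte is seen, instead of reading the whole
// file first. The 8 KiB chunk size and name are illustrative.
static bool looks_binary(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    char buf[8192];
    while (in.read(buf, sizeof(buf)) || in.gcount() > 0) {
        std::streamsize n = in.gcount();
        for (std::streamsize i = 0; i < n; ++i)
            if (buf[i] == '\0')
                return true;   // NUL byte: treat as binary, stop reading
        if (!in) break;        // short final read: we're done
    }
    return false;
}
```

For a large text file this reads everything, but for typical binaries (which tend to contain NULs early) it touches only the first chunk, matching the "no effect on other operations" note above.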
Nathaniel Smith writes:
On Sun, Aug 14, 2005 at 10:16:59PM -0700, Eric Anderson wrote:
I assume that a boost::circular_buffer isn't suitable since
that doesn't have contiguous storage?
Yes, the rest of the monotone code assumes that buffers are
contiguous
Matt Johnston writes:
I assume that a boost::circular_buffer isn't suitable since
that doesn't have contiguous storage?
Since the question about contiguous storage came up again, I took a
closer look at the circular_buffer. Last time I looked at it, I saw
it had non-contiguous storage, and
Matt Johnston writes:
On Fri, Aug 12, 2005 at 02:52:13PM -0700, Eric Anderson wrote:
Eric Anderson writes:
Summary: The attached patch changes the receive buffer from a string
to a string_queue. This changes an O(n^2) algorithm to an O(n)
algorithm. The practical effect
Matthew Gregan writes:
At 2005-08-09T18:38:10-0700, Eric Anderson wrote:
Summary: The attached patch adds memory usage, copy, and malloc
accounting to monotone, and a repeatable performance test for
evaluating CPU usage, memory usage, copies, and mallocs. Sample output
is shown
Eric Anderson writes:
Summary: The attached patch changes the receive buffer from a string
to a string_queue. This changes an O(n^2) algorithm to an O(n)
algorithm. The practical effect on a smallish database is a 3.48x
CPU usage reduction on the pull side. On a somewhat extreme case
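The O(n^2) behavior described comes from erasing consumed bytes off the front of a std::string on every read, which copies the whole remainder each time. A minimal sketch of the string_queue idea, with an explicit read offset and occasional compaction; names are illustrative and monotone's real string_queue differs in detail:

```cpp
#include <cstddef>
#include <string>

// Receive-buffer sketch: instead of buf.erase(0, n) per read (O(n^2)
// total over a transfer), track a front offset and compact only when
// the consumed prefix dominates, giving amortized O(1) per pop.
class string_queue {
public:
    void append(const std::string& s) { buf_ += s; }

    std::string pop_front(std::size_t n) {
        std::string out = buf_.substr(front_, n);
        front_ += out.size();
        // Compact lazily: one O(n) move only after consuming half the
        // buffer, so total work stays linear in bytes transferred.
        if (front_ > buf_.size() / 2) {
            buf_.erase(0, front_);
            front_ = 0;
        }
        return out;
    }

    std::size_t size() const { return buf_.size() - front_; }

private:
    std::string buf_;
    std::size_t front_ = 0;
};
```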
Nathaniel pointed out in a separate message a point I don't want to
get lost in the discussion of this patch.
The accounting stuff is OS-specific; it would probably compile but not
work on a non-Linux Unix platform, and may not compile on
Windows. There is probably equivalent functionality on
in order to preserve the
directory location of two new files.
The ChangeLog entry (included in the patch):
2005-08-09 Eric Anderson [EMAIL PROTECTED]
* main.cc: allow optional malloc, memcopy, resource
usage accounting, and a sub-process to track memory usage
* tests/perf
Second set of the patches discussed in my earlier message. Summary
from earlier message included below.
-Eric
- put-dat-free: Free up file data after it has been compressed as it isn't
used after that.
- free-base64-encoding: Free up the base64 encoding before passing the
First set of the patches discussed in my earlier message. Summary
from earlier message included below.
-Eric
- me-changelog-fix: fix up the current ChangeLog to point at an e-mail
address that will actually reach me.
- accounting: Add in the ability to account for CPU, Memory,
All,
A while ago I sent a bulk patch that significantly improved
the performance and memory usage of some of the monotone operations.
A very small subset of the patch was applied but the rest wasn't.
I've now gone through and re-done the patches against the current
head, split the patches
Eric Anderson writes:
I'm going to take a look at the cases where memory usage
got worse a bit later to figure out what happened.
This was easier than I expected. Patch is attached. Passes all unit
and integration tests, as did all of the other patches when applied in
order
During a discussion with Nathaniel about a patch for improving the
performance of push/pulls with binary files
(https://savannah.nongnu.org/bugs/?func=detailitem&item_id=13300) that
got a 70x CPU reduction, the discussion changed to another patch in
the mainline to reduce memory usage.
I've now
I was wondering whether it is better to submit patches as bug reports
on the savannah system, or just in e-mail to the mailing list. Is
there a preference?
My most recent patch deals with performance improvement on doing
pulls, in particular it dramatically (73.5x) reduces the CPU usage on
the