On Tuesday 27 May 2008 01:31 pm, Edward K. Ream wrote:
On Tue, May 27, 2008 at 11:29 AM, Randy Kramer [EMAIL PROTECTED] wrote:
I suppose I could have one Leo file for each nedit source file (or maybe for a .c / .h pair), but I was thinking that all files in one Leo file would be more
On May 28, 8:08 am, Edward K. Ream [EMAIL PROTECTED] wrote:
I like the idea of a file cache. It might speed up Leo for everybody.
There are complications. It's not enough to know that a file hasn't changed; we must be able to use cached information to speed up the read. I'm looking for a
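No cache design is shown in the thread; as a purely illustrative sketch (the `FileCache` class and its methods are hypothetical names, not Leo's actual API), a per-file cache might key each entry on the file's size and modification time and reuse the parsed result only while both are unchanged:

```python
import os
import pickle

class FileCache:
    """Hypothetical per-file cache keyed by (size, mtime)."""

    def __init__(self, cache_path):
        self.cache_path = cache_path
        try:
            with open(cache_path, "rb") as f:
                self.entries = pickle.load(f)
        except (OSError, pickle.PickleError):
            self.entries = {}  # no cache yet, or unreadable: start fresh

    def _key(self, path):
        st = os.stat(path)
        return (st.st_size, st.st_mtime)

    def get(self, path):
        entry = self.entries.get(path)
        if entry and entry[0] == self._key(path):
            return entry[1]  # hit: file unchanged, reuse parsed data
        return None          # miss: caller must re-read and re-parse

    def put(self, path, parsed):
        self.entries[path] = (self._key(path), parsed)

    def save(self):
        with open(self.cache_path, "wb") as f:
            pickle.dump(self.entries, f)
```

Note that mtime alone can miss edits made within the filesystem's timestamp resolution; a content hash is more robust at the cost of reading the file anyway.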
On May 27, 8:26 pm, Kayvan A. Sylvan [EMAIL PROTECTED] wrote:
Next, I used the script-button command to make the current node into
a button, moved in the outline to where I wanted the @auto files
to show up, and hit the create-at-auto-nodes button. There were many
errors generated, but the
On 5/28/08, Edward K. Ream [EMAIL PROTECTED] wrote:
- Reading files in the background is tricky. In particular, iterators
must be aware that parts of the tree haven't been fully read.
Perhaps it might be feasible to read in and parse the files in
multiple processes using the processing
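The processing package mentioned above later became the standard library's multiprocessing module; assuming that API, a minimal sketch of farming file reads out to worker processes might look like this (parse_file just counts lines as a stand-in for Leo's real import logic):

```python
from multiprocessing import Pool

def parse_file(path):
    # Stand-in for real parsing: count the lines in the file.
    with open(path, encoding="utf-8", errors="replace") as f:
        return path, sum(1 for _ in f)

def parse_all(paths, workers=4):
    # Distribute the per-file work across a pool of worker processes
    # and gather the results into a {path: parsed} dict.
    with Pool(workers) as pool:
        return dict(pool.map(parse_file, paths))
```

As the thread notes, the hard part is not the parallelism itself but making the rest of the program tolerate partially-read trees while the workers run.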
On Fri, May 23, 2008 at 12:44 AM, mini [EMAIL PROTECTED] wrote:
Does Leo support large projects (like the Linux kernel)? Opening LeoPyRef.leo seems slow.
I've been hoping somebody would ask this :-) Before discussing various
possibilities, let me ask a few questions:
1. My assumption
On Tuesday 27 May 2008 09:12 am, Edward K. Ream wrote:
On Fri, May 23, 2008 at 12:44 AM, mini [EMAIL PROTECTED] wrote:
Does Leo support large projects (like the Linux kernel)? Opening LeoPyRef.leo seems slow.
Thanks for asking this! I was toying with the idea of importing the nedit source
On Tue, May 27, 2008 at 11:29 AM, Randy Kramer [EMAIL PROTECTED] wrote:
On Tuesday 27 May 2008 09:12 am, Edward K. Ream wrote:
On Fri, May 23, 2008 at 12:44 AM, mini [EMAIL PROTECTED] wrote:
Does Leo support large projects (like the Linux kernel)? Opening LeoPyRef.leo seems slow.
Thanks
On Tue, May 27, 2008 at 4:12 PM, Edward K. Ream [EMAIL PROTECTED] wrote:
There are various possible ways to speed up loading a large number of
files. The most effective ways might take a *lot* of work:
- Rewrite the read code in Pyrex to get C-like speed.
- Represent the Leo outline as a
Yes. I'd tell you the size (in nodes and characters) of my project,
but it's too big to do manually and I can't find any Leo commands that
do it.
Of course, what I think is a big project you may consider to be small.
TL
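A short script can produce those numbers: Leo exposes a scripting API that can walk every position in the outline (in current Leo, something like `c.all_positions()`, with `p.h` and `p.b` for headline and body). The helper below is hypothetical and written as a plain function over (headline, body) pairs so it can run outside Leo; inside Leo one would feed it `((p.h, p.b) for p in c.all_positions())`:

```python
def outline_size(positions):
    """Count nodes and characters for an iterable of (headline, body) pairs."""
    nodes = chars = 0
    for headline, body in positions:
        nodes += 1
        chars += len(headline) + len(body)
    return nodes, chars
```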
On Tue, May 27, 2008 at 1:22 PM, Ville M. Vainio [EMAIL PROTECTED] wrote:
On Tue, May 27, 2008 at 4:12 PM, Edward K. Ream [EMAIL PROTECTED] wrote:
Since it's most probably an I/O bound operation, I think the biggest
win would still be lazy nodes (which, as said before, is not easy).
I've
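The "lazy nodes" idea is not spelled out in the thread; one hypothetical shape for it (all names here are illustrative, not Leo's) is a node that defers the expensive read of its children until the first time they are accessed:

```python
class LazyNode:
    """Outline node whose children are materialized only on first access."""

    def __init__(self, headline, loader=None):
        self.headline = headline
        self._loader = loader   # callable that reads and returns child nodes
        self._children = None   # None means "not read yet"

    @property
    def is_loaded(self):
        return self._children is not None

    @property
    def children(self):
        if self._children is None:
            # The expensive read happens here, not at load time.
            self._children = self._loader() if self._loader else []
        return self._children
```

This is exactly why iterators get tricky, as noted earlier in the thread: a naive tree walk would force every node to load, defeating the laziness.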
On Tue, May 27, 2008 at 03:08:51PM -0500, Kent Tenney wrote:
As such, adding sentinels or any type of workflow specific to Leo will
not gain acceptance.
I disagree, sort of. Most of the time, when I slurp a big bunch of files
into Leo for study and annotation, I am doing it only for myself.
The vim bindings work is great, but a seasoned vim user will just be
frustrated by a subset, and opt for the plugin to open a node in vim.
Concerning "opt for the plugin": perhaps in theory, but in practice I
open many nodes in Vim when there is a lot of editing to do or I need
some fancy vi/vim
On Tue, May 27, 2008 at 3:08 PM, Kent Tenney [EMAIL PROTECTED] wrote:
I would expect that most projects which are candidates for management by Leo, and which consist of _many_ files, will be shared among programmers.
I agree. That's why I have been eagerly waiting for the question about the