Caching is definitely worth doing but you don't always have
the opportunity to do it. If you are copying a lot of files
across, it would help quite a bit if you can just pipeline
requests (or send fewer bundled requests). If you are copying
very large files, streaming would help. When
On Sat, 19 Feb 2011 16:15:47 EST erik quanstrom quans...@quanstro.net wrote:
what is the goal?
Better handling of latency at a minimum? If I were to do
this I would experiment with extending the channel concept.
hmm. let me try again ... do you have a concrete goal?
it's hard to
Devon H. O'Dell devon.od...@gmail.com writes:
determine where a node is placed is *not* cheap. In the end, an
optimization that slows things down is not an optimization at all. You
There are many different kinds of optimization one can perform. One may
optimize compiled code for size, speed,
So why does replica use 9P? Because it's *The Plan 9 Protocol*. If
*The Plan 9 Protocol* turns out to not serve our needs, we need to
figure out why.
I really don't get this, what is the problem with replica's speed?
I run replica once every week or two and it typically runs for about 30
Benchmark utilities to measure the overhead of syscalls. It's cheating
to do for getpid, but for other things like gettimeofday, it's
*extremely* nice. Linux's gettimeofday(2) beats the socks off of the
rest of the time implementations. About the only faster thing is to
get CPU speed and use
The point I was trying to make (but clearly not clearly) was
that simplicity and performance are often at cross purposes
and a simple solution is not always good enough. RPC
(which is what 9p is) is simpler and perfectly fine when
latencies are small but not when there is a lot of latency in
So why does replica use 9P? Because it's *The Plan 9 Protocol*. If
*The Plan 9 Protocol* turns out to not serve our needs, we need to
figure out why.
i appreciate the sentiment, but i think that's just taking it a wee bit
overboard. we don't pretend that 9p replaces http, ftp, smtp, etc.
it seems to me that trying Op (Octopus) on Plan 9 would be a logical first step.
On Fri, Feb 18, 2011 at 2:21 PM, Bakul Shah bakul+pl...@bitblocks.com wrote:
On Fri, 18 Feb 2011 13:06:43 PST John Floren j...@jfloren.net wrote:
On Fri, Feb 18, 2011 at 12:15 PM, erik quanstrom
On Saturday 19 of February 2011 11:34:19 Steve Simon wrote:
Benchmark utilities to measure the overhead of syscalls. It's cheating
to do for getpid, but for other things like gettimeofday, it's
*extremely* nice. Linux's gettimeofday(2) beats the socks off of the
rest of the time
On Sat, 19 Feb 2011 10:09:08 EST erik quanstrom quans...@quanstro.net wrote:
It is inherent to 9p (and RPC).
please defend this. i don't see any evidence for this bald claim.
We went over latency issues multiple times in the past but
let us take your 80ms latency. You can get 12.5 rpc
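The arithmetic behind the "12.5 rpc" figure is worth spelling out (a sketch of my own, using the 80 ms number from the message): a strictly sequential RPC must wait a full round trip per request, so the rate is 1000/rtt_ms per second, and keeping a window of requests in flight multiplies it.

```c
/* Sequential RPC throughput on a high-latency link. window is the
   number of requests kept in flight; window == 1 is strict
   request/reply RPC, which is the case being criticized. */
double rpc_per_sec(double rtt_ms, int window)
{
    return window * 1000.0 / rtt_ms;
}
```

At 80 ms that is 12.5 rpc/s sequentially, but 100 rpc/s with only 8 outstanding requests, which is why pipelining dominates on long links.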
On Sat Feb 19 15:10:58 EST 2011, bakul+pl...@bitblocks.com wrote:
On Sat, 19 Feb 2011 10:09:08 EST erik quanstrom quans...@quanstro.net
wrote:
It is inherent to 9p (and RPC).
please defend this. i don't see any evidence for this bald claim.
We went over latency issues multiple
so this is a complete waste of time if forks getpids.
and THREAD_GETMEM must allocate memory. so
the first call isn't exactly cheap. aren't they optimizing
for bad programming?
not only that, ... from getpid(2)
NOTES
Since glibc version 2.3.4, the glibc wrapper function for
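The pid caching that the getpid(2) NOTES excerpt refers to looks roughly like this sketch (my toy version, not glibc's code): pay for one real syscall, then answer later calls from memory. Real glibc also had to invalidate the cache across fork(), which is exactly the kind of subtlety being complained about; this toy omits it.

```c
#include <sys/types.h>
#include <unistd.h>

static pid_t cached_pid;

/* Hypothetical caching wrapper: first call enters the kernel,
   later calls return the remembered value with no kernel entry. */
pid_t my_getpid(void)
{
    if (cached_pid == 0)
        cached_pid = getpid();
    return cached_pid;
}
```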
On Friday, February 18, 2011 02:29:54 pm erik quanstrom wrote:
so this is a complete waste of time if forks getpids.
and THREAD_GETMEM must allocate memory. so
the first call isn't exactly cheap. aren't they optimizing
for bad programming?
not only that, ... from getpid(2)
NOTES
Sent from my iPhone
On Feb 18, 2011, at 5:45 AM, dexen deVries dexen.devr...@gmail.com wrote:
On Friday, February 18, 2011 02:29:54 pm erik quanstrom wrote:
so this is a complete waste of time if forks getpids.
and THREAD_GETMEM must allocate memory. so
the first call isn't exactly
2011/2/18 dexen deVries dexen.devr...@gmail.com:
On Friday, February 18, 2011 02:29:54 pm erik quanstrom wrote:
so this is a complete waste of time if forks getpids.
and THREAD_GETMEM must allocate memory. so
the first call isn't exactly cheap. aren't they optimizing
for bad programming?
I know we're fond of bashing people who need to eke performance out of
systems, and a lot of the time it's all in good fun. There's little
justification for getpid, but getpid isn't the only implementor of
this functionality. For other interfaces, it definitely makes sense to
speed up the system
2011/2/18 erik quanstrom quans...@labs.coraid.com:
I know we're fond of bashing people who need to eke performance out of
systems, and a lot of the time it's all in good fun. There's little
justification for getpid, but getpid isn't the only implementor of
this functionality. For other interfaces,
Arguing that performance is unimportant is counterintuitive. It
certainly is. Arguing that it is unimportant if it causes unnecessary
complexity has merit. Defining when things become unnecessarily
complex is important to the argument. Applications with timers (or
doing lots of logging) using
On Friday, February 18, 2011 04:15:10 pm you wrote:
Benchmark utilities to measure the overhead of syscalls. It's cheating
to do for getpid, but for other things like gettimeofday, it's
*extremely* nice. Linux's gettimeofday(2) beats the socks off of the
rest of the time implementations. About
2011/2/18 dexen deVries dexen.devr...@gmail.com:
On Friday, February 18, 2011 04:15:10 pm you wrote:
Benchmark utilities to measure the overhead of syscalls. It's cheating
to do for getpid, but for other things like gettimeofday, it's
*extremely* nice. Linux's gettimeofday(2) beats the socks
The high level overview is that it is stored in a shared page, mapped
into each new process's memory space at start-up. The kernel is never
entered; there are no context switches. The kernel has a timer that
updates this page atomically.
i wonder if that is uniformly faster. consider that
2011/2/18 erik quanstrom quans...@quanstro.net:
Arguing that performance is unimportant is counterintuitive. It
certainly is. Arguing that it is unimportant if it causes unnecessary
complexity has merit. Defining when things become unnecessarily
complex is important to the argument.
2011/2/18 erik quanstrom quans...@quanstro.net:
The high level overview is that it is stored in a shared page, mapped
into each new process's memory space at start-up. The kernel is never
entered; there are no context switches. The kernel has a timer that
updates this page atomically.
i
2011/2/18 andrey mirtchovski mirtchov...@gmail.com:
I think it's time that we do some real-world style benchmarks on
multiple systems for Plan 9 versus other systems. I'd be interested in
Ron did work measuring syscall costs and latencies in plan9.
I would love to duplicate that across
The kernel has a timer that
updates this page atomically.
which timer updates the page even when nobody is interested in knowing
what the time is, increasing the noise in the system[1]. i still keep
graphs of a full-blown plan9 cpu server with users logged in and close
to 200 running processes
On Fri, Feb 18, 2011 at 12:07 PM, erik quanstrom quans...@quanstro.net wrote:
The high level overview is that it is stored in a shared page, mapped
into each new process's memory space at start-up. The kernel is never
entered; there are no context switches. The kernel has a timer that
updates
i wonder if that is uniformly faster. consider that
making reads of that page coherent enough on a
big multiprocessor and making sure there's not too
much interprocessor skew might be slower than a
system call.
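The usual answer to the coherence worry raised here is a seqlock-style sequence counter rather than a lock. A minimal sketch of the shared-page idea (not the actual Linux vDSO layout, and omitting the memory barriers a real multiprocessor version needs):

```c
#include <stdint.h>

/* The kernel bumps seq to odd before updating and to even after,
   so a reader retries whenever it may have seen a torn update. */
struct timepage {
    volatile uint32_t seq;
    volatile int64_t  sec;
    volatile int64_t  nsec;
};

void page_write(struct timepage *tp, int64_t sec, int64_t nsec)
{
    tp->seq++;            /* odd: update in progress */
    tp->sec = sec;
    tp->nsec = nsec;
    tp->seq++;            /* even: consistent again */
}

void page_read(const struct timepage *tp, int64_t *sec, int64_t *nsec)
{
    uint32_t s;
    do {
        s = tp->seq;
        *sec = tp->sec;
        *nsec = tp->nsec;
    } while ((s & 1) || s != tp->seq);  /* retry on torn read */
}
```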
Real world tests show that it is consistently faster. It's probably
I'd be surprised if things were dissimilar for you at Coraid -- and I
certainly *am not* implying that you guys have poor performance. I'm
just saying if you went to your customers and asked, Given the choice
between something that is the same as what you have now, and something
that's
On Fri, Feb 18, 2011 at 9:32 AM, erik quanstrom quans...@quanstro.net wrote:
wire speed is generally considered good enough. ☺
depends on field of use. In my biz everyone hits wire speed, and the
question from there is: how much of the CPU are you eating to get that
wire speed.
It's a very
On Fri, Feb 18, 2011 at 9:21 AM, erik quanstrom quans...@quanstro.net wrote:
linux optimization is a ratrace. you are only judged on
the immediate effect on your subsystem, not the system
as a whole. so unless you play the game, your system will
appear to regress over time as other
i take a different view of performance.
performance is like scotch. you always want better scotch,
but you only upgrade if the stuff you're drinking is a problem.
- erik
Awesome. That quote is going on my office door below the Tanenbaum
quote on bandwidth and station wagons!
The more you optimize, the better the odds you slow your program down.
Optimization adds instructions and often data, in one of the
paradoxes of engineering. In time, then, what you gain by
optimizing increases cache pressure and slows the whole thing down.
C++ inlines a lot because
On Fri, 18 Feb 2011 10:46:51 PST Rob Pike robp...@gmail.com wrote:
The more you optimize, the better the odds you slow your program down.
Optimization adds instructions and often data, in one of the
paradoxes of engineering. In time, then, what you gain by
optimizing increases cache
2011/2/18 Rob Pike robp...@gmail.com:
The more you optimize, the better the odds you slow your program down.
Optimization adds instructions and often data, in one of the
paradoxes of engineering. In time, then, what you gain by
optimizing increases cache pressure and slows the whole thing
On a slightly different tangent, 9p is simple but it doesn't
handle latency very well. To make efficient use of long fat
pipes you need more complex mechanisms -- there is no getting
around that fact. rsync and hg, in spite of their complexity,
beat the pants off replica. Their cache behavior is
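The "long fat pipe" point can be made quantitative with the bandwidth-delay product (a sketch with hypothetical numbers, not figures from the thread): to keep a link busy you need bandwidth times round-trip time in bytes outstanding, and dividing by the message size gives the number of requests that must be in flight, which strict request/reply never achieves.

```c
/* Bytes that must be in flight to fill a link. */
double bytes_in_flight(double mbit_per_s, double rtt_ms)
{
    return mbit_per_s * 1e6 / 8.0 * rtt_ms / 1000.0;
}

/* Outstanding messages needed, given a fixed message size. */
double msgs_outstanding(double mbit_per_s, double rtt_ms, double msize)
{
    return bytes_in_flight(mbit_per_s, rtt_ms) / msize;
}
```

For example, a 100 Mbit/s link at 80 ms RTT needs 1 MB outstanding; with 8 KB messages that is over 120 concurrent requests.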
2011/2/18 ron minnich rminn...@gmail.com:
On Fri, Feb 18, 2011 at 9:32 AM, erik quanstrom quans...@quanstro.net wrote:
wire speed is generally considered good enough. ☺
Touché.
depends on field of use. In my biz everyone hits wire speed, and the
question from there is: how much of the CPU
DKIM), etc., it's just not really feasible on commodity hardware. (Of
course, these days, operating systems and RAID controllers with
battery-backed caches make it impossible to guarantee that your
message ever ends up in persistent storage, but that's still a small
bb cache is persistent
Sent from my iPhone
On Feb 18, 2011, at 11:15 AM, Bakul Shah bakul+pl...@bitblocks.com wrote:
On Fri, 18 Feb 2011 10:46:51 PST Rob Pike robp...@gmail.com wrote:
The more you optimize, the better the odds you slow your program down.
Optimization adds instructions and often data, in one of
On Fri, 18 Feb 2011 14:26:32 EST erik quanstrom quans...@quanstro.net wrote:
On a slightly different tangent, 9p is simple but it doesn't
handle latency very well. To make efficient use of long fat
pipes you need more complex mechanisms -- there is no getting
around that fact. rsync and hg
2011/2/18 erik quanstrom quans...@quanstro.net:
DKIM), etc., it's just not really feasible on commodity hardware. (Of
course, these days, operating systems and RAID controllers with
battery-backed caches make it impossible to guarantee that your
message ever ends up in persistent storage, but
On Fri, 18 Feb 2011 11:35:18 PST David Leimbach leim...@gmail.com wrote:
C++ inlines a lot because microbenchmarks improve, but inline every
modest function in a big program and you make the binary much bigger
and blow the i-cache.
That's a compiler fault. Surely modern compilers need
i don't think that it makes sense to say that since replica
is slow and hg/rsync are fast, it follows that 9p is slow.
It is the other way around. 9p can't handle latency so on
high latency pipes programs using 9p won't be as fast as
programs using streaming (instead of rpc). Granted that
On Fri, Feb 18, 2011 at 12:10 PM, Bakul Shah bakul+pl...@bitblocks.com wrote:
Templates encourage inlining. There is at least one template
library where the bulk of code is implemented in separate
.cc files (using void* tricks), used by some embedded
products. But IIRC the original STL from
On Fri, Feb 18, 2011 at 12:15 PM, erik quanstrom quans...@quanstro.net wrote:
i don't think that it makes sense to say that since replica
is slow and hg/rsync are fast, it follows that 9p is slow.
It is the other way around. 9p can't handle latency so on
high latency pipes programs using 9p
On Fri, 18 Feb 2011 13:06:43 PST John Floren j...@jfloren.net wrote:
On Fri, Feb 18, 2011 at 12:15 PM, erik quanstrom quans...@quanstro.net wrote:
i don't think that it makes sense to say that since replica
is slow and hg/rsync are fast, it follows that 9p is slow.
It is the other
afaik, templates might be inlined, static or shared... depending on
the compiler and the flags.
for gcc see:
http://gcc.gnu.org/onlinedocs/gcc/Template-Instantiation.html
On Fri, Feb 18, 2011 at 4:35 PM, David Leimbach leim...@gmail.com wrote:
Sent from my iPhone
On Feb 18, 2011, at 11:15
I was looking at another fine example of modern programming from glibc
and just had to share it.
Where does the getpid happen? It's anyone's guess. This is just so
readable too ... I'm glad they went to such effort to optimize getpid.
ron
#ifndef NOT_IN_libc
static inline
Or something equivalent. Example: How do you know moving an
expression out of a for loop is valid? The optimizer needs to
understand the control flow.
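A concrete instance of the hoisting example (my own illustration): the transformation is valid only when the optimizer can prove the expression has no side effects and none of its operands change inside the loop, which is precisely what the control- and data-flow analysis establishes.

```c
/* Loop-invariant code motion, done by hand. Before:
       for (i = 0; i < n; i++) s += a[i] * (k * 2);
   k * 2 is invariant, so it is computed once outside the loop. */
int sum_scaled(const int *a, int n, int k)
{
    int scale = k * 2;
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i] * scale;
    return s;
}
```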
is this still a useful thing to be doing?
Yes.
what's your argument?
my argument is that the cpu is so fast relative to
the
It is a C/C++/Obj-C compiler that does static analysis, has
backends for multiple processor types as well as C as a
target, a lot of optimization tricks etc.
22mbytes is still a lot of etc.. i've no objection
to optimisations big and small, but that still wouldn't explain
the size (to me). FORTRAN H
On Thu, 3 Feb 2011 08:35:53 +, Charles Forsyth wrote:
It is a C/C++/Obj-C compiler that does static analysis, has
backends for multiple processor types as well as C as a
target, a lot of optimization tricks etc.
... FORTRAN H Enhanced did so much with so little! ...
Is there a compiler that
FORTRAN H Enhanced was an early optimising compiler.
FORTRAN H for System/360, then FORTRAN H Extended for System/370;
FORTRAN H Enhanced added further insight to get better code.
On Thu, 3 Feb 2011 09:46:00 +, Charles Forsyth wrote:
FORTRAN H Enhanced was an early optimising compiler.
FORTRAN H for System/360, then FORTRAN H Extended for System/370;
FORTRAN H Enhanced added further insight to get better code.
Ah. Thanks for the info. I asked because some of the
On Thu, Feb 03, 2011 at 03:47:17AM -0600, EBo wrote:
Ah. Thanks for the info. I asked because some of the physicists and
atmospheric scientists I work with are likely to insist on having
FORTRAN. I still have not figured how I will deal with that if at
all.
If the cost can be met,
On Wednesday, February 2, 2011, erik quanstrom quans...@quanstro.net wrote:
It is a C/C++/Obj-C compiler that does static analysis, has
backends for multiple processor types as well as C as a
target, a lot of optimization tricks etc. See llvm.org. But
frankly, I think they have lost the plot. C
To be fair, gcc, g++ and gobjc combined are actually bigger than clang+llvm.
At least on my system. So it could have been worse.
2011/2/3 David Leimbach leim...@gmail.com
On Wednesday, February 2, 2011, erik quanstrom quans...@quanstro.net
wrote:
It is a C/C++/Obj-C compiler that does static
On Thu, 03 Feb 2011 07:08:57 PST David Leimbach leim...@gmail.com wrote:
On Wednesday, February 2, 2011, erik quanstrom quans...@quanstro.net wrote:
It is a C/C++/Obj-C compiler that does static analysis, has
backends for multiple processor types as well as C as a
target, a lot of optimization
I agree with their goal but not its execution. I think a
toolkit for manipulating graph based program representations
to build optimizing compilers is a great idea but did they
do it in C++?
are you sure that the problem isn't the graph representation?
gcc also takes a graph-based approach.
EBo e...@sandien.com writes:
Ah. Thanks for the info. I asked because some of the physicists and
atmospheric scientists I work with are likely to insist on having
FORTRAN. I still have not figured how I will deal with that if at
all.
I thought those folks used languages like Matlab
Consider what `stalin' does in about 3300 lines of Scheme
code. It translates R4RS scheme to C and takes a lot of time
doing so but the code it generates is blazingly fast. The
kind of globally optimized C code you or I wouldn't have the
patience to write. Or the ability to keep all that
On Thu, 03 Feb 2011 13:11:07 EST erik quanstrom quans...@quanstro.net wrote:
I agree with their goal but not its execution. I think a
toolkit for manipulating graph based program representations
to build optimizing compilers is a great idea but did they
do it in C++?
are you sure that
On Thu, Feb 3, 2011 at 10:21 AM, smi...@zenzebra.mv.com wrote:
EBo e...@sandien.com writes:
Ah. Thanks for the info. I asked because some of the physicists and
atmospheric scientists I work with are likely to insist on having
FORTRAN. I still have not figured how I will deal with that if
On Thu Feb 3 13:33:52 EST 2011, bakul+pl...@bitblocks.com wrote:
On Thu, 03 Feb 2011 13:11:07 EST erik quanstrom quans...@quanstro.net
wrote:
I agree with their goal but not its execution. I think a
toolkit for manipulating graph based program representations
to build optimizing
On Thu, 03 Feb 2011 13:54:05 EST erik quanstrom quans...@quanstro.net wrote:
On Thu Feb 3 13:33:52 EST 2011, bakul+pl...@bitblocks.com wrote:
On Thu, 03 Feb 2011 13:11:07 EST erik quanstrom quans...@quanstro.net wrote:
I agree with their goal but not its execution. I think a
I must also say llvm has a lot of functionality. But even so
there is a lot of bloat. Let me just say the bloat is due to
many factors but it has far *less* to do with graphs.
Download llvm and take a peek. I think the chosen language
and the habits it promotes and the impedance match with
I don't know if f2c meets your needs, but it has always worked.
On Thu, Feb 3, 2011 at 9:07 AM, EBo e...@sandien.com wrote:
On Thu, 3 Feb 2011 10:38:30 +, C H Forsyth wrote:
it's not just the FORTRAN but supporting libraries, sometimes large ones,
including ones in C++, are often required
On Thu, Feb 3, 2011 at 12:49 PM, Federico G. Benavento
benave...@gmail.com wrote:
I don't know if f2c meets your needs, but it has always worked.
As compared to modern fortran compilers, it is basically a toy.
ron
I don't know if f2c meets your needs, but it has always worked.
As compared to modern fortran compilers, it is basically a toy.
But he did say some of his source is in ratfor,
I am pretty sure f2c would be happy with ratfor's output.
years ago I supported the pafec FE package - tens of
On Thu, 3 Feb 2011 21:32:24 +, Steve Simon wrote:
I don't know if f2c meets your needs, but it has always worked.
As compared to modern fortran compilers, it is basically a toy.
But he did say some of his source is in ratfor,
I am pretty sure f2c would be happy with ratfor's output.
$ size /usr/local/bin/clang
text data bss dec hex filename
22842862 1023204 69200 23935266 16d3922 /usr/local/bin/clang
It is interesting to note the 5-minute reduction in system time. I
assume that this is in part because of the builtin assembler.
--
On Thu, 03 Feb 2011 15:33:57 EST erik quanstrom quans...@quanstro.net wrote:
I must also say llvm has a lot of functionality. But even so
there is a lot of bloat. Let me just say the bloat is due to
many factors but it has far *less* to do with graphs.
Download llvm and take a peek. I
There was some mention that, during the history of Plan 9, developers
had difficulty maintaining two different languages on the system. I
wonder how much of that difficulty would still apply today. Although
the kernel could conceivably be translated to a modern compiled
language, I doubt it
2011/2/2 erik quanstrom quans...@quanstro.net:
There was some mention that, during the history of Plan 9, developers
had difficulty maintaining two different languages on the system. I
wonder how much of that difficulty would still apply today. Although
the kernel could concievably be
Just to address the unanswered Limbo questions:
The only Limbo compilers extant compile to a portable bytecode for the Dis
virtual machine. The only first-class Dis implementation is built into Inferno.
Dis can be either interpreted or just-in-time compiled. The historical claim
was that the
On Tue, Feb 1, 2011 at 9:14 PM, smi...@zenzebra.mv.com wrote:
ron minnich rminn...@gmail.com writes:
I think you should set your sights higher than the macro approach you
propose. At least in my opinion it's a really ugly idea.
You might be surprised to hear that I agree. :) It's far
On Tue, Feb 1, 2011 at 11:35 PM, Nick LaForge nicklafo...@gmail.com wrote:
I hope it won't seem rude to suggest it, but the go-nuts list is the
optimum place for your specific concerns. The Go authors read it and
are very conscientious in responding to serious questions.
The Go authors did
On Wed, Feb 2, 2011 at 4:54 AM, erik quanstrom quans...@quanstro.netwrote:
There was some mention that, during the history of Plan 9, developers
had difficulty maintaining two different languages on the system. I
wonder how much of that difficulty would still apply today. Although
the
Even C has a runtime. Perhaps you should look more into how programming
languages are implemented :-). C++ has one too, especially in the wake of
exceptions and such.
really? what do you consider to be the c runtime?
i don't think that the asm goo that gets you to main
really counts as
Wait, isn't it the proof is in the *pudding*? YOU MEAN WE DON'T GET
FRENCH BENEFITS!?!
sadly, no. the work week is still 100hrs and we get -3 holidays/decade.
- erik
Where did your C compiler come from? Someone probably compiled it with a C
compiler. Bootstrapping is a fact of life as a new compiler can't just be
culled from /dev/random or willed into existence otherwise. It takes a plan
9 system to build plan 9 right? (This was not always true for
On Wed, Feb 02, 2011 at 09:47:01AM -0800, David Leimbach wrote:
Wait, isn't it the proof is in the *pudding*? YOU MEAN WE DON'T GET
FRENCH BENEFITS!?!
Please explain.
--
Thierry Laronde tlaronde +AT+ polynum +dot+ com
http://www.kergis.com/
Key fingerprint =
On Wed, 2011-02-02 at 12:50 -0500, erik quanstrom wrote:
Even C has a runtime. Perhaps you should look more into how programming
languages are implemented :-). C++ has one too, especially in the wake of
exceptions and such.
really? what do you consider to be the c runtime?
i don't
On Wed, Feb 2, 2011 at 9:50 AM, erik quanstrom quans...@quanstro.netwrote:
Even C has a runtime. Perhaps you should look more into how programming
languages are implemented :-). C++ has one too, especially in the wake
of
exceptions and such.
really? what do you consider to be the c
On Wed, Feb 2, 2011 at 10:07 AM, tlaro...@polynum.com wrote:
On Wed, Feb 02, 2011 at 09:47:01AM -0800, David Leimbach wrote:
Wait, isn't it the proof is in the *pudding*? YOU MEAN WE DON'T GET
FRENCH BENEFITS!?!
Please explain.
I was just pointing out something that happens a lot in
A runtime system is just a library whose entry points are language
keywords.[1] In go, dynamic allocation, threads, channels, etc. are
accessed via language features, so the libraries that implement those
things are considered part of the RTS. That's a terminological
difference only from
On Wed, Feb 2, 2011 at 10:03 AM, erik quanstrom quans...@labs.coraid.comwrote:
Where did your C compiler come from? Someone probably compiled it with a
C
compiler. Bootstrapping is a fact of life as a new compiler can't just
be
culled from /dev/random or willed into existence otherwise.
On Wed, Feb 2, 2011 at 10:21 AM, erik quanstrom quans...@quanstro.netwrote:
A runtime system is just a library whose entry points are language
keywords.[1] In go, dynamic allocation, threads, channels, etc. are
accessed via language features, so the libraries that implement those
things
Also, from this point of view, could pthreads be considered runtime for C?
no. then every library/os function ever bolted onto
c would be part of the c runtime. clearly this isn't
the case and pthreads are not specified in the c standard.
it might be part of /a/ runtime, but not the c
On Wednesday, February 2, 2011, erik quanstrom quans...@quanstro.net wrote:
Also, from this point of view, could pthreads be considered runtime for C?
no. then every library/os function ever bolted onto
c would be part of the c runtime. clearly this isn't
the case and pthreads are not
On Wed, Feb 02, 2011 at 10:26:34AM -0800, David Leimbach wrote:
On Wed, Feb 2, 2011 at 10:07 AM, tlaro...@polynum.com wrote:
On Wed, Feb 02, 2011 at 09:47:01AM -0800, David Leimbach wrote:
Wait, isn't it the proof is in the *pudding*? YOU MEAN WE DON'T GET
FRENCH BENEFITS!?!
On Wed, 2011-02-02 at 13:21 -0500, erik quanstrom wrote:
A runtime system is just a library whose entry points are language
keywords.[1] In go, dynamic allocation, threads, channels, etc. are
accessed via language features, so the libraries that implement those
things are considered part
On Wed, 02 Feb 2011 09:45:56 PST David Leimbach leim...@gmail.com wrote:
Well if I were funded and had an infinite amount of time I'd think LLVM for
Plan 9 would be excellent, as well as Go on LLVM :-).
llvm port would need c++.
$ size /usr/local/bin/clang
text     data    bss   dec
BCPL makes C look like a very high-level language and provides
absolutely no type checking or run-time support.
B. Stroustrup, The Design and Evolution of C++, 1994
C++ was designed to be used in a rather traditional compilation and
run-time environment, the C programming environment on the
I don't follow. Garbage collection certainly can be done in a library
(e.g., Boehm). GC is in my experience normally triggered by
* Allocation --- which is a function call in C
* Explicit call to the `garbage collect now' entry point in the
standard library. A
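The two triggers listed can be illustrated with a toy collector (every name here is invented for illustration; real collectors like Boehm's are far more involved): collection runs inside the allocation call when a byte threshold is crossed, or when the program asks for it explicitly.

```c
#include <stdlib.h>

static size_t allocated, threshold = 1 << 20;
static int collections;

/* Explicit trigger: the `collect now' entry point. */
void collect_now(void)
{
    collections++;
    allocated = 0;   /* toy: pretend everything was garbage */
}

/* Allocation trigger: collect when the threshold is crossed. */
void *gc_alloc(size_t n)
{
    if (allocated + n > threshold)
        collect_now();
    allocated += n;
    return malloc(n);
}
```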
On Feb 2, 2011, at 1:31 PM, erik quanstrom wrote:
i think of it this way, the janitor doesn't insist that the factory shut
down so he can sweep. he waits for the factory to be idle, and then
sweeps.
Clearly I've been working on the wrong floors. That or all the janitors I know
are using
On Wed, 2011-02-02 at 14:31 -0500, erik quanstrom wrote:
I don't follow. Garbage collection certainly can be done in a library
(e.g., Boehm). GC is in my experience normally triggered by
* Allocation --- which is a function call in C
* Explicit call to the `garbage collect
start := now();
while (now() < start + 2hours);
You don't expect GC to be able to trigger, right?
i sure do.
- erik
On Wed, 2011-02-02 at 15:11 -0500, erik quanstrom wrote:
start := now();
while (now() < start + 2hours);
You don't expect GC to be able to trigger, right?
i sure do.
Ah. Interesting. Who's done that?
jcc
$ size /usr/local/bin/clang
text     data    bss   dec      hex     filename
22842862 1023204 69200 23935266 16d3922 /usr/local/bin/clang
impressive. certainly in the sense of `makes quite a dent if dropped'.
you'll hear people call [fringe benefits] French Benefits.
i did not expect that! i'd have guessed: `cheese'.
On Wed Feb 2 19:19:13 EST 2011, fors...@terzarima.net wrote:
$ size /usr/local/bin/clang
text     data    bss   dec      hex     filename
22842862 1023204 69200 23935266 16d3922 /usr/local/bin/clang
impressive. certainly in the sense of `makes quite a dent if dropped'.
and