Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread David Mitchell

James Mastros [EMAIL PROTECTED] wrote:
 The idea is [for Larry] to declare "no, it isn't".  Otherwise, you have to
 do refcounting (or something like it) for DESTROY to get called at the right
 time if the class (or any superclass) has an AUTOLOAD, which is expensive.

I'm coming in halfway through a thread, which is always dangerous, but...
the above seems to imply a discussion that you only need to do expensive
ref-counting (or whatever) on objects which have a DESTROY method.
However, since you don't know in advance what class(es), if any, a thingy
will be blessed as, you always have to ref-count (or whatever technique is
chosen) just to be sure:

my $h = {}; $ref1 = \$h; $ref2 = \$h; ...

# we haven't been ref-counting because $h isn't an object.

bless $h, some_class_with_a_destroy_method;

# whoops, $h now needs to be properly managed. Anyone remember
# what's pointing at it???
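
To spell the same point out as something you can actually run under the
current refcounting scheme (the class and variable names are just for the
example):

package Noisy;
sub DESTROY { print "DESTROY called\n" }

package main;
my $h    = {};
my $ref1 = $h;        # plain data so far; nothing special to track
my $ref2 = $h;
bless $h, 'Noisy';    # now every existing reference suddenly matters
undef $h;
undef $ref1;
undef $ref2;          # DESTROY fires here, at the last reference

Whichever scheme is chosen has to know about $ref1 and $ref2 by the time
the bless happens.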




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread Branden

David Mitchell wrote:
 James Mastros [EMAIL PROTECTED] wrote:
 ... do refcounting (or something like it) for DESTROY to get called at
the right
  time if the class (or any superclass) has an AUTOLOAD, which is
expensive.
 ... the above seems to imply a discussion that you only need to do
expensive
 ref-counting (or whatever) on objects which have a DESTROY method.
 However, since you don't know in advance what class(es), if any, a thingy
 will be blessed as, you always have to ref-count (or whatever technique is
 chosen) just to be sure:

I agree. Mixing ref-counting and whatever won't work (or will work and will
be worse than only ref-counting). Either we stick with ref-counting (and
maybe add something for breaking circular references) or we forget about
this fallacy of having DESTROY called at a predictable time.

After all, why do we need DESTROY to get called at the right time? After all,
Java lives without it, and if Perl is supposed to run on the JVM, we won't
have it there anyway! I think the situation is the same with .NET (Microsoft's
C# VM).

If resource exhaustion is the problem, I think we can deal with it at the
point where we try to allocate a resource and get an error: we then call the
GC explicitly (one or more times if needed) to see if we can free some
resources with it. Resource exhaustion would be a rare situation (I think),
and doing some expensive handling when it happens is OK with me.
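
A rough sketch of the idea, assuming a hypothetical garbage::collect()
that forces a collection run (no such function exists today; the name and
the retry logic are only for illustration):

my $path = 'some/data/file';
my $fh;
unless (open $fh, '<', $path) {
    garbage::collect();        # maybe that frees some unused handles
    open $fh, '<', $path
        or die "still can't open $path: $!";
}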

Anyway, that data flow analysis that was being proposed could well be used
to `avoid' or `delay' resource exhaustion in some cases. But I don't think
any guarantees should be given about when the DESTROY method of an object
would be called.

Also, I think it would be valid for the programmer to explicitly say ``I
would like to DESTROY this object now'', and have the DESTROY method called
at that time, even if the memory would be reclaimed only later. The problem
I see with this is: what if a programmer calls DESTROY on an object that is
still being used by others? The way I suggest dealing with this is to set a
flag once the object has been DESTROYed. Then if anything else tries to use
it, it raises an exception (dies) with a message like ``This object was
already DESTROYed''. The flag could also be used to signal to the GC system
that the object already had its DESTROY method called, and that it shouldn't
be called again. Just an idea, but...
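
A minimal sketch of that flag idea as an ordinary Perl 5 class (the class,
field, and method names are only illustrative):

package Resource;
sub new { bless { destroyed => 0 }, shift }

sub DESTROY {
    my $self = shift;
    return if $self->{destroyed};    # a later GC call becomes a no-op
    $self->{destroyed} = 1;
    # ... release file handles, DB connections, locks here ...
}

sub destroy { $_[0]->DESTROY }       # "I would like to DESTROY this now"

sub do_something {
    my $self = shift;
    die "This object was already DESTROYed" if $self->{destroyed};
    # ... real work ...
}

Whether the check lives in every method or in the method dispatcher is a
separate question; the sketch only shows the flag.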

- Branden




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread James Mastros

On Wed, Feb 14, 2001 at 10:12:36AM -0300, Branden wrote:
 David Mitchell wrote:
  ... the above seems to imply a discussion that you only need to do
 expensive
  ref-counting (or whatever) on objects which have a DESTROY method.
  However, since you don't know in advance what class(es), if any, a thingy
  will be blessed as, you always have to ref-count (or whatever technique is
Blast.  You are absolutely right, Dave.

 [snip about DESTROY predictability not being necessary]
You're probably right about that, Branden.  Quite nice, but not necessary.

 Also, I think it would be valid for the programmer to explicitly say ``I
 would like to DESTROY this object now'', 
I'd think that an extension to delete is in order here.  Basically, delete
should DESTROY the arg, change its value to undef, and trigger a GC that
will get rid of the arg.

If the arg is a ref, it is /not/ derefed, so you'd oft want to use delete
$$foo.

 being used by others. The way I suggest to deal with this is set a flag if
 the object was already DESTROYed. Then if any other tries to use it, it
 raises an exception (dies) with a message about ``This object was already
 DESTROYed.''. 
I think an ordinary "attempt to dereference undef" will work.

  -=- James Mastros
-- 
"All I really want is somebody to curl up with and pretend the world is a
safe place."
AIM: theorbtwo   homepage: http://www.rtweb.net/theorb/



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread John Porter

James Mastros wrote:
 I'd think that an extension to delete is in order here.  Basically, delete
 should DESTROY the arg, change its value to undef,...

Huh?  What delete are you thinking of?  This is Perl, not C++.


 ...and trigger a GC that will get rid of the arg.

No.  Perl decides for itself when to do GC.

-- 
John Porter

You can't keep Perl6 Perl5.




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread James Mastros

On Wed, Feb 14, 2001 at 09:59:31AM -0500, John Porter wrote:
 James Mastros wrote:
  I'd think that an extension to delete is in order here.  Basically, delete
  should DESTROY the arg, change its value to undef,...
 Huh?  What delete are you thinking of?  This is Perl, not C++.
Umm, perldoc -f delete?

Come to think of it, this doesn't mesh particularly well with the current
meaning of delete.  It does, however, with undef.  In fact, it /is/ the
current meaning of undef, except for the GC part.  And perhaps the GC should
be explicit or automatic, but not implicit.

  ...and trigger a GC that will get rid of the arg.
 No.  Perl decides for itself when to do GC.
That's almost certainly a mistake.  The programmer often /does/ know the
expectations of the end-user better than the interpreter.  If the programmer
can GC when /he/ wants to, he can do so when the pause will have the least
effect.

Think of a program that you want to run near-realtime most of the time, but
where you have a bit of downtime every now and again.  A game comes
immediately to mind.

Or, for that matter, a program that spawns an external process that might
take a lot of memory, so does a GC before spawning it.  (Because otherwise
the OS will happily page out your garbage, resulting in massive amounts of
unneeded IO.)

  -=- James Mastros
-- 
"All I really want is somebody to curl up with and pretend the world is a
safe place."
AIM: theorbtwo   homepage: http://www.rtweb.net/theorb/



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread David Mitchell

James Mastros [EMAIL PROTECTED] wrote:
 [snip about DESTROY predictability not being necessary]
 You're probably right about that, Branden.  Quite nice, but not necessary.

Hmm, I'd have to say that predictability is very, *very* nice,
and we shouldn't ditch it unless we *really* have to.

[ lots of examples of freeing up database connections, locked files etc,
not included here because it would involve too much typing :-) ]

Dave M.




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread Branden

James Mastros wrote:
 On Wed, Feb 14, 2001 at 10:12:36AM -0300, Branden wrote:
  Also, I think it would be valid for the programmer to explicitly say ``I
  would like to DESTROY this object now'',
 I'd think that an extension to delete is in order here.  Basicly, delete
 should DESTROY the arg, change it's value to undef, and trigger a GC that
 will get rid of the arg.


Actually, DESTROY has nothing to do with delete or setting a value to undef.
Well, yes, they are related, but if that were all that mattered, every object
could be deleted when I assign a variable to something else. Delete/set-to-undef
changes the value of a variable, while DESTROY is called when no more
variables reference the object. The problem is when objects are shared by
many variables. For example:

$a = new Object();
$b = $a;
...
destroy $a;   ## would call $a->DESTROY()
...
$b->doSomething();    ## should die. Note that $b is not undef

The problem is that $b has a reference to an object that was already
destroyed, right? It has nothing to do with `undef', since $b cannot be
undef'ed when I call destroy $a; the object knows nothing about $b, right?

And there would be another problem when the GC tries to collect the memory
used by the object, because it usually calls DESTROY on collected objects.
Calling it for this object would mean calling it twice, which is probably a
very wrong thing to do.

- Branden




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread Branden

John Porter wrote:
 James Mastros wrote:
  I'd think that an extension to delete is in order here.  Basically, delete
  should DESTROY the arg, change its value to undef,...

 Huh?  What delete are you thinking of?  This is Perl, not C++.


Agreed, definitely Perl is not C++.


  ...and trigger a GC that will get rid of the arg.

 No.  Perl decides for itself when to do GC.


Please read the original message I wrote. The reply had only some
out-of-context snippets of the original idea.

The idea is to *allow* a programmer to explicitly destroy an object, for
better (and sooner) resource disposal. The programmer wouldn't have to do it
(and wouldn't do it most of the time), but if he knows he uses many resources
and he would like to be nice, he *could* do it (not meaning he would have to
do it either...).

- Branden




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread Branden

James Mastros wrote:
 On Wed, Feb 14, 2001 at 09:59:31AM -0500, John Porter wrote:
  Huh?  What delete are you thinking of?  This is Perl, not C++.
 Umm, perldoc -f delete?

 Come to think of it, this doesn't mesh particularly well with the current
 meaning of delete.  It does, however, with undef.  In fact, it /is/ the
 current meaning of undef, except for the GC part.  And perhaps the GC
should
 be explicit or automatic, but not implicit.


As I wrote in the last post, this isn't what I'm talking about. I'm talking
about destroying the object before the GC does.


   ...and trigger a GC that will get rid of the arg.
  No.  Perl decides for itself when to do GC.
 That's almost certainly a mistake.

Yeah, what about a nasty module that decides not to call the GC and blow
your memory??? That's IMO the best thing about programming in Perl compared
to C: not having to keep track of the memory!!! RFC 28!!!


 The programmer often /does/ know the
 expectations of the end-user better than the interpreter.

We must not count on the programmer for almost nothing. Most (Perl)
programmers want to forget about housekeeping details and expect Perl to do
the magic for them. And I think they're right! If Perl can do it, why would
they bother? Why write more code to do things Perl can do for you? Why write
C if you can write Perl?


 If the programmer
 can GC when /he/ wants to, he can do so when the pause will have the least
 effect.


I agree the programmer should have a way to explicitly call the GC, but it
wouldn't be required of him.


 Think of a program that you want to run near-realtime most of the time,

Write C. With no GC below it. Probably, with no OS (or a realtime one) below
it.


 but
 where you have a bit of downtime every now and again.  A game comes
 immediately to mind.


Even if you want to write games in Perl (I would definitely want to), you
should use C extensions to do the screen update (at least for speed...), and
those would definitely not be constrained by GC pauses.


 Or, for that matter, a program that spawns an external process that might
 take a lot of memory, so does a GC before spawning it.  (Because otherwise
 the OS will happily page out your garbage, resulting in massive amounts of
 unneeded IO.)


Call the GC explicitly beforehand; no need to control when *not* to call it
for this, as you were suggesting.

Seriously, man. Not having an implicit GC is not having GC at all! And as Perl
should be Perl, it should keep collecting our garbage as we produce it!


   -=- James Mastros
 --
 "All I really want is somebody to curl up with and pretend the world is a
 safe place."
 AIM: theorbtwo  homepage: http://www.rtweb.net/theorb/



- Branden




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread Branden


[[ reply to this goes only to -internals ]]

Dan Sugalski wrote:
 *) People like it

Well, if people liking it is the only reason (either it is the only one, or it
appears 3 times in a 5-item list, which is pretty much the same to me ;-) to
add a feature to Perl, we'll probably end up much more bloated than we are
now, IMHO.

 *) Scarce external resources (files, DB handles, whatever) don't get
 unnecessarily used

Unless there's a way to do it predictably without impacting programs that
don't depend so much on quick freeing of external resources, I don't believe
it's worth it.

 *) Saves having to write explicit cleanup code yourself

You wouldn't have to; you would just be able to, if you wanted. If you're
writing an application that would possibly open too many files, you'd
probably want to destroy their handles ASAP. OTOH, if you're writing an
application that only opens one file and does a lot of processing over it,
you simply wouldn't care and let it be freed whenever the GC collects its
memory.


 At 10:12 AM 2/14/2001 -0300, Branden wrote:
 If resource exhaustion is the problem, I think we can deal with that when
we
 try to allocate a resource and we get an error, then we call the GC
 explicitly (one or more times if needed) to see if we can free some
 resources with it. Resource exhaustion would be a rare situation (I
think),
 and doing some expensive treatment when it happens is OK for me.

 The point of DESTROY isn't resource exhaustion per se, at least not
anything
 the garbage collector will care about, since it only cares about memory.


Well, I thought DESTROY frees open files, database connections, OS locks,
etc. Aren't those what cause resource exhaustion?


 Also, I think it would be valid for the programmer to explicitly say ``I
 would like to DESTROY this object now'', and have the DESTROY method
called
 in that time, even if the memory would be reclaimed only later.

 So you undef your object reference. If the object doesn't go away, it
means
 that something else probably still has a handle on it somewhere.

I thought that was the whole problem with ``not predictable stuff'':
you undef the variable, no other variable references the object, and it's
still there; it doesn't get destroyed.


 Plus there's nothing stopping you from having $obj->DESTROY in your own
 code, though it may be inadvisable.

It is (mainly) inadvisable because:
1. GC will call DESTROY when it collects the memory, so DESTROY would get
called twice, which is VERY BAD.
2. If I call DESTROY on an object, it would still be a (valid) object after
the call, so that if I call some other method, it will succeed. But that
shouldn't happen, since the object was DESTROYed, right?

That's exactly what I propose: having something that, when called with an
object as its parameter, calls the object's DESTROY and flags the object in
some way, so that the GC doesn't call DESTROY on it again when collecting the
memory, and so that every further attempt to call a method on the object
raises an exception that makes it clear what happened (i.e. ``Method call on
already destroyed object''), so that debugging remains possible under these
semantics.

- Branden




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread James Mastros

On Wed, Feb 14, 2001 at 01:43:22PM -0300, Branden wrote:
 As I wrote in the last post, this isn't what I'm talking about. I'm talking
 about destroying the object before the GC does.
Yah, so am I.  I'm just saying that after the object is destroyed, don't
keep it around.

 Yeah, what about a nasty module that decides not to call the GC and blow
 your memory??? That's IMO the best thing about programming in Perl compared
 to C: not having to keep track of the memory!!! RFC 28!!!
Whoa!  I never meant to say that Perl shouldn't automatically do GC as it
feels like it.  Simply that you should be able to explicitly garbage-collect
if you want to.

(It's arguable that you should be able to disable automatic GC.  In any
case, it should be tunable, so disabling it is just an _extreme_ tune.)

 We must not count on the programmer for almost nothing. 
Watch your double-negatives.  Writing calmly helps.

  If the programmer
  can GC when /he/ wants to, he can do so when the pause will have the least
  effect.
 I agree the programmer should have a way to explicitly call the GC, but it
 wouldn't be required of him.
OK then, we're all in agreement.

  Think of a program that you want to run near-realtime most of the time,
 Write C. With no GC below it. Probably, with no OS (or a realtime one) below
 it.
Sorry.  Near-realtime is apparently a much more restrictive word than I
wanted.

  but
  where you have a bit of downtime every now and again.  A game comes
  immediately to mind.
 
 Even if you want to write games in Perl (I would definitely want to), you
 should use C extensions to do the screen update (at least for speed...), and
 those would definitely not be constrained by GC pauses.
True, but I probably wouldn't for the event loop, and certainly not for the
tick function.  (At least some of the tick functions.)

 Call the GC explicitly beforehand; no need to control when *not* to call it
 for this, as you were suggesting.
 Seriously, man. Not having an implicit GC is not having GC at all! And as Perl
 should be Perl, it should keep collecting our garbage as we produce it!
Sorry.  I should have explained my wording more carefully.  I see three
different types of triggers:
1) Explicit -- A call to garbage::collect or somesuch.
2) Implicit -- Certain program-execution events implicitly do a
   GC run when encountered.  For example, you could say we do
   this now -- we garbage-collect every time a scope exits.  What I was
   suggesting above is that when a 1-arg undef is encountered, implicitly GC.
3) Automatic -- Certain runtime events, not directly (or obviously) related
   to the flow of execution, like when the number of SVs created or the
   amount of memory allocated since the last GC run exceed a certain critical
   value.
(I /think/ a dictionary would agree with me, but I'm not about to get pissy
and look them up.)

I was saying that we should do 1 and 3, but not 2.
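
In code terms (garbage::collect() and Big::Thing are, again, made-up names
just for illustration):

garbage::collect();             # 1) explicit: the program asks for a run

{
    my $tmp = Big::Thing->new;  # 2) implicit: a run tied to an execution
}                               #    event, e.g. scope exit or a 1-arg undef

# 3) automatic: the runtime fires a run on its own once, say, the number of
#    SVs or bytes allocated since the last run crosses a threshold; nothing
#    in the source marks it.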

  -=- James Mastros



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread James Mastros

On Wed, Feb 14, 2001 at 01:25:26PM -0300, Branden wrote:
 The problem is when objects are shared by
 many variables. For example:
 
 $a = new Object();
 $b = $a;
 ...
 destroy $a;   ## would call $a->DESTROY()
 ...
 $b->doSomething();    ## should die. Note that $b is not undef
Hmm?  (Assuming destroy() autoderefs.)  destroy $a would call %$a->DESTROY
(assuming a hash for the example), and remove %$a from the
symbol table/otherwise make it nonexistent.  Then $b->doSomething() will fail
because it's a ref to undef.

 And there would be another problem when the GC tries to collect the memory
 used by the object, because it usually calls DESTROY on collected objects.
 Calling it for this object would mean calling it twice, what is probably a
 very wrong thing to do.
Oh, I rather assumed that there would be an "invalid" marker of some sort.
It's necessary (I think) for a pool, which I assumed.  Bad James, bad.

 -=- James Mastros
-- 
"All I really want is somebody to curl up with and pretend the world is a
safe place."
AIM: theorbtwo   homepage: http://www.rtweb.net/theorb/



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread John Porter

Branden wrote:
 John Porter wrote:
   ...and trigger a GC that will get rid of the arg.
 
  No.  Perl decides for itself when to do GC.
 
 The idea is to *allow* a programmer to explicitly destroy an object, for
 better (and sooner) resource disposal. The programmer wouldn't have to do it
 (and wouldn't do it most of the time), but if he knows he uses many resources
 and he would like to be nice, he *could* do it (not meaning he would have to
 do it either...).

Obviously "freeing" an object marks it as GC'able.
It should *NOT* "trigger" a GC.
If the user wants to explicitly cause GC (and the language
allows), then she can put that in too.
Freeing should NOT trigger a GC; although of course it's
a logical point at which perl may decide to do a GC anyway.

-- 
John Porter




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread abigail

On Wed, Feb 14, 2001 at 01:30:03PM -0300, Branden wrote:
 John Porter wrote:
  James Mastros wrote:
   I'd think that an extension to delete is in order here.  Basically, delete
   should DESTROY the arg, change its value to undef,...
 
  Huh?  What delete are you thinking of?  This is Perl, not C++.
 
 
 Agreed, definitely Perl is not C++.
 
 
   ...and trigger a GC that will get rid of the arg.
 
  No.  Perl decides for itself when to do GC.
 
 
 Please read the original message I wrote. The reply had only some
 out-of-context snippets of the original idea.
 
 The idea is to *allow* a programmer to explicitly destroy an object, for
 better (and sooner) resource disposal. The programmer wouldn't have to do it
 (and wouldn't do it most of the time), but if he knows he uses many resources
 and he would like to be nice, he *could* do it (not meaning he would have to
 do it either...).


There is no need to add that to Perl, as Perl already has a function
for that: C<undef $obj;>.

Naturally, that won't cause DESTROY to be run if there are other
references to it, but then, I don't see what an "object destruction"
is supposed to do if there are still references to the object left.
Nor is this wanted behaviour. All the programmer needs to do for
sooner resource disposal is to let his references go out of scope
when no longer needed.
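
For example (Some::Handle stands in for any class whose DESTROY releases a
resource; under the current refcounting scheme this is all it takes):

{
    my $obj = Some::Handle->new('data.txt');   # holds an open file, say
    $obj->slurp;
}   # last reference gone here: DESTROY runs and the file is closed
# ... lots more work that no longer ties up the file ...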


Abigail



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread abigail

On Wed, Feb 14, 2001 at 02:10:59PM -0300, Branden wrote:
 
 Dan Sugalski wrote:
 
  Plus there's nothing stopping you from having $obj->DESTROY in your own
  code, though it may be inadvisable.
 
 It is (mainly) inadvisable because:
 1. GC will call DESTROY when it collects the memory, so DESTROY would get
 called twice, which is VERY BAD.

*blink*

It is? Why?

I grant you it isn't the clearest way of programming, but "VERY BAD"?

 2. If I call DESTROY on an object, it would still be a (valid) object after
 the call, so that if I call some other method, it will succeed. But that
 shouldn't happen, since the object was DESTROYed, right?

Eh, you don't understand DESTROY.

DESTROY doesn't destroy an object. Perl, the language, does not have the
concept of destroying objects. DESTROY is just a callback from perl, the
binary, saying that everyone is done with the object and it's about to go away.

DESTROY might be called around the same time its memory is being reclaimed,
but from a language perspective, all this memory dealing is nonexistent.

DESTROY is a language thing, garbage collection an implementation detail
of the run-time, purely necessary because of the limited physical model
of the abstract machine Perl is supposed to run on. Their perceived
relation is merely a coincidence. Even if you had a bucketload of memory
and there were a way of telling Perl not to bother with garbage collection,
DESTROY should still be called.

Being able to separate DESTROY and garbage collection is a feature. ;-)


Abigail



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-14 Thread Dan Sugalski

At 07:44 PM 2/14/2001 +, Simon Cozens wrote:
On Wed, Feb 14, 2001 at 08:32:41PM +0100, [EMAIL PROTECTED] wrote:
   DESTROY would get called twice, which is VERY BAD.
 
  *blink*
  It is? Why?
  I grant you it isn't the clearest way of programming, but "VERY BAD"?

package NuclearReactor::CoolingRod;

sub new {
    Reactor->decrease_core_temperature();
    bless {}, shift
}

sub DESTROY {
    Reactor->increase_core_temperature();
}

Time to snag some bits from the Java license agreement.

"...this software is not meant for...aircraft control...nuclear 
reactors...medical equipment..."

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-13 Thread Nicholas Clark

On Tue, Feb 13, 2001 at 10:32:26AM -0800, Peter Scott wrote:
 At 01:16 PM 2/13/01 -0500, James Mastros wrote:
 On Tue, Feb 13, 2001 at 01:09:11PM -0500, John Porter wrote:
 Certainly AUTOLOAD gets
   called if DESTROY is called but not defined ... just
   like any other method.
 The idea is [for Larry] to declare "no, it isn't".  Otherwise, you have to
 do refcounting (or something like it) for DESTROY to get called at the right
 time if the class (or any superclass) has an AUTOLOAD, which is expensive.
 
 Perhaps you could declare, but not define, DESTROY to have AUTOLOAD called
 for DESTROY, and have DESTROY called as soon as the last ref goes out of
 scope.  (IE have a sub DESTROY; line.)

I like this idea, and would have suggested it except that James Mastros got
there first. It is a special case "no AUTOLOAD of DESTROY by default"
but it might be quite a win.


 This may be a naive question, but what is the benefit - aside from 
 consistency, and we don't need to rehash the litany on that - to AUTOLOAD 
 getting called for DESTROY?  I've never actually seen any code that makes 
 use of it.  I have grown somewhat tired of writing, and teaching, "return 
 if $AUTOLOAD =~ /:DESTROY$/", however.

Doesn't

  sub DESTROY {}

have the same effect but with less typing?

Nicholas Clark



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-13 Thread Dan Sugalski

At 10:32 AM 2/13/2001 -0800, Peter Scott wrote:
At 01:16 PM 2/13/01 -0500, James Mastros wrote:
On Tue, Feb 13, 2001 at 01:09:11PM -0500, John Porter wrote:
Certainly AUTOLOAD gets
  called if DESTROY is called but not defined ... just
  like any other method.
The idea is [for Larry] to declare "no, it isn't".  Otherwise, you have to
do refcounting (or something like it) for DESTROY to get called at the right
time if the class (or any superclass) has an AUTOLOAD, which is expensive.

Perhaps you could declare, but not define, DESTROY to have AUTOLOAD called
for DESTROY, and have DESTROY called as soon as the last ref goes out of
scope.  (IE have a sub DESTROY; line.)

This may be a naive question, but what is the benefit - aside from 
consistency, and we don't need to rehash the litany on that - to AUTOLOAD 
getting called for DESTROY?  I've never actually seen any code that makes 
use of it.  I have grown somewhat tired of writing, and teaching, "return 
if $AUTOLOAD =~ /:DESTROY$/", however.

I have no idea. It's legal, though, so unless it's declared illegal (which 
is fine with me) it needs to be supported.



Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: JWZ on s/Java/Perl/

2001-02-12 Thread Simon Cozens

On Mon, Feb 12, 2001 at 12:11:19AM -0800, yaphet jones wrote:
[Ruby]
 *no god complex
 *no high priests

I'll tell Matz you said that.

-- 
hantai mo hantai aru:
The reverse side also has a reverse side.  
-- Japanese proverb



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Branden

Sam Tregar wrote:
 On Mon, 12 Feb 2001, Dan Sugalski wrote:
  Also, the vast majority of perl variables have no finalization
  attached to them.
 
 That's true, but without static typing don't you have to treat them as if
 they did?  At the very least you need to do a "is it an object with a
 DESTROY" check at block boundaries.
 

Just because the type is static doesn't mean they wouldn't still be references.

my $foo = new Baz();
{
my Baz $bar = $foo;
};
# DESTROY should be called on the object ref'd by $bar ?
# It's still ref'd by $foo !!!

- Branden




Re: JWZ on s/Java/Perl/

2001-02-12 Thread yaphet jones


i think Matz will agree with me...

(consider telling dave thomas and andy hunt, too...)

"a language author does not a god make"
 -- a proverb from the days of cobol

 On Mon, Feb 12, 2001 at 12:11:19AM -0800, yaphet jones wrote:
 [Ruby]
 *no god complex
 *no high priests

 I'll tell Matz you said that.

 -- 
 hantai mo hantai aru:
 The reverse side also has a reverse side.  
   -- Japanese proverb

Indeed! Do you speak Japanese? Wonderful!

See you!

yaphet
heretics of perl;


--



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Dan Sugalski

At 10:38 AM 2/12/2001 -0500, Sam Tregar wrote:
On Mon, 12 Feb 2001, Dan Sugalski wrote:

  Perl needs some level of tracking for objects with finalization attached to
  them. Full refcounting isn't required, however.

I think I've heard you state that before.  Can you be more specific?  What
alternate system do you have in mind?  Is this just wishful thinking?

This isn't just wishful thinking, no.

  Also, the vast majority of perl variables have no finalization
  attached to them.

That's true, but without static typing don't you have to treat them as if
they did?  At the very least you need to do a "is it an object with a
DESTROY" check at block boundaries.

Code flow analysis can get an awful lot. Some help from the runtime will 
get the rest.

It's reasonably obvious (which is to say "cheap") which variables aren't 
involved with anything finalizable.

  I do wish people would get garbage collection and finalization split in
  their minds. They are two separate things which can, and will, be dealt
  with separately.

2x the penalty, right?  Instead of a speed increase we carry the burden of
ref-counting in addition to the overhead of an alternate system.

Nowhere near double the penalty. We only need to deal with refcounts when 
references are actually taken, assigned, or destroyed. That's a rare 
occurrence, relatively speaking.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Sam Tregar

On Mon, 12 Feb 2001, Dan Sugalski wrote:

 I think I've heard you state that before.  Can you be more specific?  What
 alternate system do you have in mind?  Is this just wishful thinking?

 This isn't just wishful thinking, no.

You picked the easy one.  Maybe you can get back to the other two when you
have more time?

 Code flow analysis can get an awful lot. Some help from the runtime will
 get the rest.

Do you mean that you can tell from a compile-time flow-control graph
exactly when DESTROY needs to be called for every object?  What kind of
help from the runtime?  Reference counting help?

 It's reasonably obvious (which is to say "cheap") which variables aren't
 involved with anything finalizable.

Probably a simple bit check and branch.  Is that cheap?  I guess it must
be.

 Nowhere near double the penalty. We only need to deal with refcounts when
 references are actually taken, assigned, or destroyed. That's a rare
 occurrence, relatively speaking.

Perhaps.  It's not rare in OO Perl which is coincidentally one area in
serious need of a speedup.  I suppose I'm warped by my own experience -
all the code I see every day is filled with references and objects.
That's probably not the average case Perl usage.

-sam






Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Jan Dubois

On Mon, 12 Feb 2001 13:29:21 -0500, Dan Sugalski [EMAIL PROTECTED] wrote:

At 10:38 AM 2/12/2001 -0500, Sam Tregar wrote:
On Mon, 12 Feb 2001, Dan Sugalski wrote:

  Perl needs some level of tracking for objects with finalization attached to
  them. Full refcounting isn't required, however.

I think I've heard you state that before.  Can you be more specific?  What
alternate system do you have in mind?  Is this just wishful thinking?

This isn't just wishful thinking, no.

You've been asked multiple times to share how this is supposed to work.
Is there a specific reason you don't want to talk about it?

As far as I can see, there is only *one* reason to go to partial
refcounting: it saves some memory.  But beyond that, it is slower, more
complicated and shares all the disadvantages of refcounting.  Why don't
you want to just keep the current scheme and avoid having to think about
mark-and-sweep altogether if you agree that at least partial refcounting
will still be needed?

-Jan




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Nicholas Clark

On Mon, Feb 12, 2001 at 01:33:52PM -0500, Sam Tregar wrote:
 Perhaps.  It's not rare in OO Perl which is coincidentally one area in
 serious need of a speedup.  I suppose I'm warped by my own experience -
 all the code I see every day is filled with references and objects.
 That's probably not the average case Perl usage.

Possibly not. People keep saying that OO is slow, but there are no good
examples of OO code to benchmark.
perl5-porters would happily receive sample code that hammers perl5 so
that it can be profiled to find where in the perl5 source the bottleneck is

I suspect that perlbench would also not object to OO code for benchmarking.

Nicholas Clark



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Jan Dubois

On Mon, 12 Feb 2001 13:33:52 -0500 (EST), Sam Tregar [EMAIL PROTECTED]
wrote:

 It's reasonably obvious (which is to say "cheap") which variables aren't
 involved with anything finalizable.

Probably a simple bit check and branch.  Is that cheap?  I guess it must
be.

Yes, but incrementing the reference count is a single inc instruction too,
and today's CPUs are optimized to do those fast too.  I doubt a memory
fetch, bit test, and jump instruction is much faster than a memory
increment.

 Nowhere near double the penalty. We only need to deal with refcounts when
 references are actually taken, assigned, or destroyed. That's a rare
 occurrence, relatively speaking.

Perhaps.  It's not rare in OO Perl which is coincidentally one area in
serious need of a speedup.  I suppose I'm warped by my own experience -
all the code I see every day is filled with references and objects.
That's probably not the average case Perl usage.

I don't think so.  Most Perl code nowadays makes heavy use of modules and
many modules are written in an OO fashion.  But reference counting is *not*
what makes Perl method calls so slow.

-Jan




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Dan Sugalski

At 01:33 PM 2/12/2001 -0500, Sam Tregar wrote:
On Mon, 12 Feb 2001, Dan Sugalski wrote:

  I think I've heard you state that before.  Can you be more specific?  What
  alternate system do you have in mind?  Is this just wishful thinking?
 
  This isn't just wishful thinking, no.

You picked the easy one.  Maybe you can get back to the other two when you
have more time?

That is the plan, yes.

  Code flow analysis can get an awful lot. Some help from the runtime will
  get the rest.

Do you mean that you can tell from a compile-time flow-control graph
exactly when DESTROY needs to be called for every object?  What kind of
help from the runtime?  Reference counting help?

Every object? No. Most objects? Yes. For this code:

   {
 my $foo = new Some::Thing;
 $foo->whatever;
   }

it's pretty obvious in many cases where $foo needs finalization. (Those 
cases where it isn't include the ones where the whatever method gets 
redefined at runtime, or where there's an eval/do/require without a "static 
assumptions OK" flag set somewhere)

Runtime support for this would include things like the assign vtable method 
for blessed references to finalizable things adding an entry to the "check 
me for cleanup or hoist me out" list for the home block of the variable 
getting the reference assigned to.

  It's reasonably obvious (which is to say "cheap") which variables aren't
  involved with anything finalizable.

Probably a simple bit check and branch.  Is that cheap?  I guess it must
be.

  Nowhere near double the penalty. We only need to deal with refcounts when
  references are actually taken, assigned, or destroyed. That's a rare
  occurrence, relatively speaking.

Perhaps.  It's not rare in OO Perl which is coincidentally one area in
serious need of a speedup.  I suppose I'm warped by my own experience -
all the code I see every day is filled with references and objects.
That's probably not the average case Perl usage.

It *is* rare in OO perl, though. How many of the variables you use are 
really, truly in need of finalization? .1 percent? .01 percent? Less? Don't 
forget that you need to count every scalar in every array or hash, and 
every iteration over a block with my declarations. Perl churns through a 
*lot* of SV pointers in its average run, and most of them aren't in need of 
finalization.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Piers Cawley

Dan Sugalski [EMAIL PROTECTED] writes:

 At 10:38 AM 2/12/2001 -0500, Sam Tregar wrote:
 On Mon, 12 Feb 2001, Dan Sugalski wrote:
 
   Perl needs some level of tracking for objects with finalization attached to
   them. Full refcounting isn't required, however.
 
 I think I've heard you state that before.  Can you be more specific?  What
 alternate system do you have in mind?  Is this just wishful thinking?
 
 This isn't just wishful thinking, no.
 
   Also, the vast majority of perl variables have no finalization
   attached to them.
 
 That's true, but without static typing don't you have to treat them as if
 they did?  At the very least you need to do a "is it an object with a
 DESTROY" check at block boundaries.
 
 Code flow analysis can get an awful lot. Some help from the runtime
 will get the rest.
 
 
 It's reasonably obvious (which is to say "cheap") which variables
 aren't involved with anything finalizable.

Remember too that right now we don't properly finalize everything as
quickly as we should in the cases where stuff is caught up in circular
references. We don't need to be perfect, but we do need to be
predictable.

-- 
Piers




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Dan Sugalski

At 09:08 PM 2/12/2001 +, Piers Cawley wrote:
Dan Sugalski [EMAIL PROTECTED] writes:

  At 10:38 AM 2/12/2001 -0500, Sam Tregar wrote:
  On Mon, 12 Feb 2001, Dan Sugalski wrote:
  
Perl needs some level of tracking for objects with finalization 
 attached to
them. Full refcounting isn't required, however.
  
  I think I've heard you state that before.  Can you be more specific?  What
  alternate system do you have in mind?  Is this just wishful thinking?
 
  This isn't just wishful thinking, no.
 
Also, the vast majority of perl variables have no finalization
attached to them.
  
  That's true, but without static typing don't you have to treat them as if
  they did?  At the very least you need to do a "is it an object with a
  DESTROY" check at block boundaries.
 
  Code flow analysis can get an awful lot. Some help from the runtime
  will get the rest.
 
 
  It's reasonably obvious (which is to say "cheap") which variables
  aren't involved with anything finalizable.

Remember too that right now we don't properly finalize everything as
quickly as we should in the cases where stuff is caught up in circular
references. We don't need to be perfect, but we do need to be
predictable.

Yep, that's another issue, and one I keep forgetting about, though the fact 
that we don't do predictable finalization on some objects isn't a good 
reason to not do it for any of them. I really don't want to guarantee 
predictable end-of-block cleanup, though, since that means a potentially 
expensive GC run more often than we might otherwise do.

One more thing for the GC PDD, I think.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Jan Dubois

On Mon, 12 Feb 2001 16:28:00 -0500, Dan Sugalski [EMAIL PROTECTED] wrote:

Yep, that's another issue, and one I keep forgetting about, though the fact 
that we don't do predictable finalization on some objects isn't a good 

Yes, I know I promised to shut up until you come up with a spec, but there
is one thing that irritates me:

Could you guys please use "destruction" or "cleanup" as the term for the
end-of-scope processing (see e.g. C++).  Finalization is used everywhere
else to mean: called by GC before the memory is released (see e.g
Java/C#).

Thanks,
-Jan



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Dan Sugalski

At 01:44 PM 2/12/2001 -0800, Jan Dubois wrote:
On Mon, 12 Feb 2001 16:28:00 -0500, Dan Sugalski [EMAIL PROTECTED] wrote:

 Yep, that's another issue, and one I keep forgetting about, though the fact
 that we don't do predictable finalization on some objects isn't a good

Yes, I know I promised to shut up until you come up with a spec, but there
is one thing that irritates me:

Could you guys please use "destruction" or "cleanup" as the term for the
end-of-scope processing (see e.g. C++).  Finalization is used everywhere
else to mean: called by GC before the memory is released (see e.g
Java/C#).

Correct terminology's important. Destruction it is.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Dan Sugalski

At 10:46 AM 2/12/2001 -0800, Jan Dubois wrote:
On Mon, 12 Feb 2001 13:29:21 -0500, Dan Sugalski [EMAIL PROTECTED] wrote:

 At 10:38 AM 2/12/2001 -0500, Sam Tregar wrote:
 On Mon, 12 Feb 2001, Dan Sugalski wrote:
 
   Perl needs some level of tracking for objects with finalization 
 attached to
   them. Full refcounting isn't required, however.
 
 I think I've heard you state that before.  Can you be more specific?  What
 alternate system do you have in mind?  Is this just wishful thinking?
 
 This isn't just wishful thinking, no.

You've been asked multiple times to share how this is supposed to work.
Is there a specific reason you don't want to talk about it?

I do want to talk about it. It just hasn't been at the top of the heap of 
things that need discussing, and I don't really have time right now, 
unfortunately.

It's also an internals issue, not a language issue. (I've set followups 
appropriately) The only thing that language really cares about is 
deterministic destruction, and it's arguable whether that's a language 
issue. (And no, I'm not going to argue it. I don't really care one way or 
the other at the moment)

As far as I can see, there is only *one* reason to go to partial
refcounting: it saves some memory.  But beyond that, it is slower, more
complicated and shares all the disadvantages of refcounting.  Why don't
you want to just keep the current scheme and avoid having to think about
mark-and-sweep altogether if you agree that at least partial refcounting
will still be needed?

If you haven't already gone and read up on various garbage collectors, I'd 
recommend you do. Folks more clever than I am have already dealt with 
this--refcounting in general isn't necessary, and in those cases where it 
is needed, it doesn't have to be full refcounting.

Bottom line is that most variables in perl don't need any finalization at 
all, and those that do don't necessarily need refcounting. (Though I'll 
grant that the alternatives I can think of may be more expensive than 
refcounting, but I've not put much thought into it)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Robin Berjon

At 15:37 12/02/2001 -0500, Dan Sugalski wrote:
It *is* rare in OO perl, though. How many of the variables you use are 
really, truly in need of finalization? .1 percent? .01 percent? Less? Don't 
forget that you need to count every scalar in every array or hash, and 
every iteration over a block with my declarations. Perl churns through a 
*lot* of SV pointers in its average run, and most of them aren't in need of 
finalization.

Couldn't we simply (for non-implementer values of simply) provide a way for
people to ask for finalization on an object? Given that most of the time
it isn't needed, it wouldn't be too much of a burden for programmers to
have to write i_want_some_finalization($object, [finalization params])?

That would avoid burdening Perl with more dwimity. Dwimity's cool but it
usually has consequences and costs, and those ought to be balanced against
what it costs not to have it.

just my E0.02,

-- robin b.
There's too much blood in my caffeine system. 




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Dan Sugalski

At 11:28 PM 2/12/2001 +0100, Robin Berjon wrote:
At 15:37 12/02/2001 -0500, Dan Sugalski wrote:
 It *is* rare in OO perl, though. How many of the variables you use are
 really, truly in need of finalization? .1 percent? .01 percent? Less? Don't
 forget that you need to count every scalar in every array or hash, and
 every iteration over a block with my declarations. Perl churns through a
 *lot* of SV pointers in its average run, and most of them aren't in need of
 finalization.

Couldn't we simply (for non-implementer values of simply) provide a way for
people to ask for finalization on an object ? Given that most of the time
it isn't needed, it wouldn't be too much of a burden for programmers to
have to write i_want_some_finalization($object, [finalization params]) ?

Sure. Y'know, maybe we could even have a sub with a special name! Maybe... 
DESTROY? :)

Seriously, I presume Larry will want perl 6 to follow perl 5's lead and use 
the DESTROY sub to indicate that an object should be actively (rather than 
passively) trashed when the interpreter is sure the object is unused. 
Adding something like:


   package foo;
   use attrs qw(cleanup_sub);

would be nice, but I don't know that he'll go for it. (Though it's the only 
way I can think of to avoid AUTOLOAD being considered a potential destructor)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread Robin Berjon

At 17:33 12/02/2001 -0500, Dan Sugalski wrote:
At 11:28 PM 2/12/2001 +0100, Robin Berjon wrote:
Couldn't we simply (for non-implementer values of simply) provide a way for
people to ask for finalization on an object ? Given that most of the time
it isn't needed, it wouldn't be too much of a burden for programmers to
have to write i_want_some_finalization($object, [finalization params]) ?

Sure. Y'know, maybe we could even have a sub with a special name! Maybe... 
DESTROY? :)

Yes, I'm vaguely aware of that possibility :)

I believe I expressed myself poorly. What I meant was about non-refcount GC
and the predictability of destruction. If the author wanted refcount-triggered
destruction for a given object, he'd say so explicitly. That would make it
easy to separate the objects that require deterministic destruction from
those that can be left to the more sophisticated GC.

Adding something like:

   package foo;
   use attrs qw(cleanup_sub);

would be nice, but I don't know that he'll go for it. (Though it's the only 
way I can think of to avoid AUTOLOAD being considered a potential destructor)

Yes that would be nice indeed.

-- robin b.
You can tune a piano, but you can't tuna fish.




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-12 Thread James Mastros

On Mon, Feb 12, 2001 at 05:33:05PM -0500, Dan Sugalski wrote:
package foo;
use attrs qw(cleanup_sub);
 
 would be nice, but I don't know that he'll go for it. (Though it's the only 
 way I can think of to avoid AUTOLOAD being considered a potential destructor)
Fiat?

It's pretty hard (for me) to think of when you'd want an AUTOLOADed DESTROY,
since if you create /any/ objects of the class, DESTROY will be called.

"It isn't possible to AUTOLOAD DESTROY." --perlmem(6)

-=- James Mastros
-- 
"All I really want is somebody to curl up with and pretend the world is a
safe place."
AIM: theorbtwo   homepage: http://www.rtweb.net/theorb/



Re: JWZ on s/Java/Perl/

2001-02-11 Thread Bart Lateur

On Fri, 9 Feb 2001 16:14:34 -0800, Mark Koopman wrote:

but is this an example of the way people SHOULD code, or simply are ABLE to 
code this.   Are we considering deprecating this type of bad style, and forcing
a programmer to, in this case, supply a ref to %baz in the arguments to
this sub?

I think you're trying too hard turning Perl into just another C clone.
Dynamic variable allocation and freeing, like this, are one of the main
selling points for Perl as a language.

Note that %baz can, as values, also contain references to other
lexically scoped variables, like \$foo and \$bar. No prototyping around
that.

  sub test {
  my($foo, $bar, %baz);
  ...
  return \%baz;
  }

You could, theoretically, create special versions of "my", or a "my"
with an attribute, so that these declared variables are kept out of the
normal lexical pool, and garbage collected in a more elaborate way,
perhaps even with reference counting.

-- 
Bart.



Re: JWZ on s/Java/Perl/

2001-02-11 Thread Ken Fox

Bart Lateur wrote:
 On Fri, 09 Feb 2001 12:06:12 -0500, Ken Fox wrote:
  1. Cheap allocations. Most fast collectors have a one or two
 instruction malloc. In C it looks like this:
 
   void *malloc(size) { void *obj = heap; heap += size; return obj; }
  ...
 
 That is not a garbage collector.

I said it was an allocator not a garbage collector. An advanced
garbage collector just makes very simple/fast allocators possible.

 That is "drop everything you don't need, and we'll never use it
 again." Oh, sure, not doing garbage collection at all is faster then
 doing reference counting.

You don't have a clue. The allocator I posted is a very common allocator
used with copying garbage collectors. This is *not* a "pool" allocator
like Apache uses. What happens is when the heap fills up (probably on a
seg fault triggered by using an obj outside the current address space),
the collector is triggered. It traverses live data and copies it into a
new space (in a simple copying collector these are called "from" and "to"
spaces). Generational collectors often work similarly, but they have
more than two spaces and special rules for references between spaces.
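
For the curious, here is a toy Perl simulation of that from-space/to-space
copying step (it only illustrates the algorithm; a real collector works on
raw memory, and the data, roots, and names here are made up for the example):

use strict;
use warnings;

# from-space: each "object" is a hash holding a payload plus the indices
# of the objects it references
my @from = (
    { data => 'root',       refs => [2] },
    { data => 'garbage',    refs => []  },
    { data => 'live child', refs => []  },
);
my @roots = (0);    # the root set: indices known to be reachable

sub collect {
    my (@to, %forward);               # to-space and forwarding pointers
    my @queue = @roots;
    while (@queue) {                  # copy everything reachable from the roots
        my $old = shift @queue;
        next if exists $forward{$old};        # already copied
        push @to, { %{ $from[$old] } };       # "bump-allocate" a copy
        $forward{$old} = $#to;
        push @queue, @{ $from[$old]{refs} };  # its children are live too
    }
    for my $obj (@to) {               # repoint references into to-space
        $obj->{refs} = [ map { $forward{$_} } @{ $obj->{refs} } ];
    }
    @roots = map { $forward{$_} } @roots;
    @from  = @to;                     # to-space becomes the heap
}

collect();
print scalar(@from), " objects survived\n";   # prints 2; 'garbage' was never copied

Note that the collector only ever touches the live cells; the garbage cell
is simply never copied, which is where the "work proportional to live data"
property below comes from.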

  2. Work proportional to live data, not total data. This is hard to
 believe for a C programmer, but good garbage collectors don't have
 to "free" every allocation -- they just have to preserve the live,
 or reachable, data. Some researchers have estimated that 90% or
 more of all allocated data dies (becomes unreachable) before the
 next collection. A ref count system has to work on every object,
 but smarter collectors only work on 10% of the objects.
 
 That may work for C, but not for Perl.

Um, no. It works pretty well for Lisp, ML, Prolog, etc. I'm positive
that it would work fine for Perl too.

 sub test {
 my($foo, $bar, %baz);
 ...
 return \%baz;
 }
 
 You may notice that only PART of the locally malloced memory, gets
 freed. the memory of %baz may well be in the middle of that pool. You're
 making a huge mistake if you simply declare the whole block dead weight.

You don't understand how collectors work. You can't think about individual
allocations anymore -- that's a fundamental and severe restriction on
malloc(). What happens is that the garbage accumulates until a collection
happens. When the collection happens, live data is saved and the garbage
over-written.

In your example above, the memory for $foo and $bar is not reclaimed
until a collection occurs. %baz is live data and will be saved when
the collection occurs (often done by copying it to a new heap space).
Yes, this means it is *totally* unsafe to hold pointers to objects in
places the garbage collector doesn't know about. It also means that
memory working-set sizes may be larger than with a malloc-style system.

There are lots of advantages though -- re-read my previous note.

The one big down-side to non-ref count GC is that finalization is
delayed until collection -- which may be relatively infrequent when
there's lots of memory. Data flow analysis can allow us to trigger
finalizers earlier, but that's a lot harder than just watching a ref
count.

- Ken



Re: JWZ on s/Java/Perl/

2001-02-11 Thread Ken Fox

[Please be careful with attributions -- I didn't write any
 of the quoted material...]

Russ Allbery wrote:
   sub test {
   my($foo, $bar, %baz);
   ...
   return \%baz;
   }

 That's a pretty fundamental aspect of the Perl language; I use that sort
 of construct all over the place.  We don't want to turn Perl into C, where
 if you want to return anything non-trivial without allocation you have to
 pass in somewhere to put it.

There are no problems at all with that code. It's not going to break under
Perl 6. It's not going to be deprecated -- this is one of the ultimate
Keep Perl Perl language features!

I think that there's a lot of concern and confusion about what it means to
replace perl's current memory manager (aka garbage collector) with something
else. The short-term survival guide for dealing with this is "only believe
what Dan says." The longer-term guide is "only believe what Benchmark says."

There are only three Perl-visible features of a collector that I can think
of (besides the obvious "does it work?"):

1. How fast does it run?
2. How efficient is it? (i.e. what's the overhead?)
3. When does it call object destructors?

The first two are too early to talk about, but if Perl 6 is worse than
Perl 5 something is seriously wrong.

The last has never been defined in Perl, but it's definitely something to
discuss before the internals are written. Changing it could be a *major*
job.

- Ken



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-11 Thread Jan Dubois

On Fri, 09 Feb 2001 13:19:36 -0500, Dan Sugalski [EMAIL PROTECTED] wrote:

Almost all refcounting schemes are messy. That's one of their problems. A 
mark and sweep GC system tends to be less prone to leaks because of program 
bugs, and when it *does* leak, the leaks tend to be large. Plus the code to 
do the GC work is very localized, which tends not to be the case in 
refcounting schemes.

Going to a more advanced garbage collection scheme certainly isn't a 
universal panacea--mark and sweep in perl 6 will *not* bring about world 
peace or anything. It will (hopefully) make our lives easier, though.

I currently don't have much time to follow the perl6 discussions, so I
might have missed this, but I have some questions about abandoning
reference counts for Perl internals.  When I reimplemented some of the
Perl guts in C# last year for the "Perl for .NET" research project, I
tried to get rid of reference counting because the runtime already
provides a generational garbage collection scheme.

However, I couldn't solve the problem of "deterministic destruction
behavior": Currently Perl will call DESTROY on any object as soon as the
last reference to it goes out of scope.  This becomes important if the
object owns scarce external resources (e.g. file handles or database
connections) that are only freed during DESTROY.  Postponing DESTROY until
an indeterminate time in the future can lead to program failures due to
resource exhaustion.

The second problem is destruction order:  With reference counts you can
have a dependency graph between objects.  Without them, destruction can
only happen in an arbitrary order, which is sometimes a problem: You may have a
database connection and a recordset.  The recordset may need to be
DESTROYed first because it may contain unsaved data that still needs to be
written back to the database.

I've been discussing this with Sarathy multiple times over the last year,
and he insists that relying on DESTROY for resource cleanup is bad style
and shouldn't be done anyways.  But always explicitly calling e.g. Close()
or whatever is pretty messy at the application level: you have to use
eval{} blocks all over the place to guarantee calling Close() even when
something else blows up.
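i.e. the application-level code ends up looking something like this
(process() is just a stand-in for the real work):

    sub process { my $fh = shift; while (<$fh>) { } }   # stand-in workload

    open my $fh, '<', 'data.txt' or die "open: $!";
    eval {
        process($fh);        # anything in here may die
    };
    my $err = $@;
    close $fh;               # must run on the normal *and* the error path
    die $err if $err;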

As an implementer I most definitely see the advantages of giving up
deterministic destruction behavior for random sequences of finalizer calls.
But as a Perl programmer I loathe the additional complexity my Perl
programs would need to stay robust.  There is a reason memory allocation isn't
exposed to the user either. :-)

Have these issues been discussed somewhere for Perl6?  If yes, could you
point me to that discussion?

-Jan




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-11 Thread Bryan C . Warnock

On Sunday 11 February 2001 19:08, Jan Dubois wrote:
 However, I couldn't solve the problem of "deterministic destruction
 behavior": Currently Perl will call DESTROY on any object as soon as the
 last reference to it goes out of scope.  This becomes important if the
 object owns scarce external resources (e.g. file handles or database
 connections) that are only freed during DESTROY.  Postponing DESTROY until
 an indeterminate time in the future can lead to program failures due to
 resource exhaustion.

But doesn't resource exhaustion usually trigger garbage collection and 
resource reallocation?  (Not that this addresses the remainder of your 
post.)

-- 
Bryan C. Warnock
bwarnock@(gtemail.net|capita.com)



Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-11 Thread Jan Dubois

On Sun, 11 Feb 2001 21:11:09 -0500, "Bryan C. Warnock"
[EMAIL PROTECTED] wrote:

On Sunday 11 February 2001 19:08, Jan Dubois wrote:
 However, I couldn't solve the problem of "deterministic destruction
 behavior": Currently Perl will call DESTROY on any object as soon as the
 last reference to it goes out of scope.  This becomes important if the
 object owns scarce external resources (e.g. file handles or database
 connections) that are only freed during DESTROY.  Postponing DESTROY until
 an indeterminate time in the future can lead to program failures due to
 resource exhaustion.

But doesn't resource exhaustion usually trigger garbage collection and 
resource reallocation?  (Not that this addresses the remainder of your 
post.)

Not necessarily; you would have to implement it that way: When you try to
open a file and you don't succeed, you run the garbage collector and try
again.  But what happens in the case of XS code: some external library
tries to open a file and gets a failure.  How would it trigger a GC in the
Perl internals?  It wouldn't even know that it had been embedded in a
Perl app.
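For pure-Perl code the retry would look roughly like this (collect_garbage()
is a made-up hook, stubbed out here, since nothing like it actually exists):

    use Errno qw(EMFILE);

    sub collect_garbage { }                 # hypothetical hook; no-op stub

    sub open_with_retry {
        my ($path) = @_;
        for (1 .. 2) {
            if (open my $fh, '<', $path) { return $fh }
            last unless $! == EMFILE;       # "too many open files"
            collect_garbage();              # hope a finalizer closes a handle
        }
        die "open $path: $!";
    }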

This scheme would only work if *all* resources including memory and
garbage collection are handled by the OS (or at least by a virtual machine
like JVM or .NET runtime).  But this still doesn't solve the destruction
order problem.

-Jan




Re: JWZ on s/Java/Perl/

2001-02-10 Thread Dan Sugalski

At 01:05 AM 2/10/2001 +0100, Bart Lateur wrote:
On Fri, 09 Feb 2001 12:06:12 -0500, Ken Fox wrote:
  2. Work proportional to live data, not total data. This is hard to
 believe for a C programmer, but good garbage collectors don't have
 to "free" every allocation -- they just have to preserve the live,
 or reachable, data. Some researchers have estimated that 90% or
 more of all allocated data dies (becomes unreachable) before the
 next collection. A ref count system has to work on every object,
 but smarter collectors only work on 10% of the objects.

That may work for C, but not for Perl.

 sub test {
 my($foo, $bar, %baz);
 ...
 return \%baz;
 }

You may notice that only PART of the locally malloced memory gets
freed. The memory of %baz may well be in the middle of that pool. You're
making a huge mistake if you simply declare the whole block dead weight.

This is an argument to make PMCs moveable, I suppose. I don't see what the 
problem is in general, though--the pool of base variable structures might 
get somewhat fragmented, but as it's a pool of fixed-sized structures there 
are tricks you can play to make allocation quick even in a fragmented pool.
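One such trick is a simple free list threaded through the fixed-size slots; a
toy version in Perl, just to show the shape of it:

    my @pool      = map { {} } 0 .. 9;   # ten preallocated fixed-size slots
    my @free_list = (0 .. 9);            # indices of the slots not in use

    sub pool_alloc {                     # O(1) however fragmented the pool is
        @free_list or die "pool exhausted";
        return pop @free_list;
    }

    sub pool_free {
        my ($slot) = @_;
        %{ $pool[$slot] } = ();          # wipe the slot for reuse
        push @free_list, $slot;
    }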

As for the actual contents of the scalars in %baz, that's no big deal 
either. They can be moved about with impunity, and that's probably where 
most of the space will get taken up anyway...

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: JWZ on s/Java/Perl/

2001-02-10 Thread Piers Cawley

Mark Koopman [EMAIL PROTECTED] writes:

  On Fri, 09 Feb 2001 12:06:12 -0500, Ken Fox wrote:
  
  
  That may work for C, but not for Perl.
  
  sub test {
  my($foo, $bar, %baz);
  ...
  return \%baz;
  }
  
  You may notice that only PART of the locally malloced memory gets
  freed. The memory of %baz may well be in the middle of that pool. You're
  making a huge mistake if you simply declare the whole block dead weight.
  
  -- 
  Bart.
 
 but is this an example of the way people SHOULD code, or simply ARE
 ABLE to code?  Are we considering deprecating this type of bad
 style, and forcing a programmer to, in this case, supply a ref to
 %baz in the arguments to this sub?

Err, if this is declared 'bad style' how is one supposed to write an
object constructor?

-- 
Piers




Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-09 Thread Dan Sugalski

At 12:06 PM 2/9/2001 -0500, Ken Fox wrote:
Branden wrote:
  I actually don't understand how traversing a graph can be faster than
  incrementing/decrementing/testing for zero on a refcount.

There are two main reasons advanced garbage collectors are fast:

  1. Cheap allocations. Most fast collectors have a one or two
 instruction malloc. In C it looks like this:

   void *malloc(size) { void *obj = heap; heap += size; return obj; }

 It's easier to do alignments in a macro layer above the allocator
 so the allocator doesn't have to constantly re-align to address
 boundaries. There is basically no difference between the performance
 of heap and stack allocations with a good collector.

This is definitely very true. It cuts out the overhead of free as well, 
since you don't have to free any data (perl pays this cost a lot with realloc,
since a realloc is a malloc, a copy, and a free). Plus there's no need to mess
with any sort of 'allocated memory' list, which malloc and free currently 
need to keep so they don't leak memory.

  2. Work proportional to live data, not total data. This is hard to
 believe for a C programmer, but good garbage collectors don't have
 to "free" every allocation -- they just have to preserve the live,
 or reachable, data. Some researchers have estimated that 90% or
 more of all allocated data dies (becomes unreachable) before the
 next collection. A ref count system has to work on every object,
 but smarter collectors only work on 10% of the objects.

As is this. (Perl can generate a lot of garbage if you're messing around 
with strings and arrays a lot)

Also, one thing people forget is that manipulating reference counts can get 
expensive. It doesn't seem like much--an integer increment or decrement 
here or there. No big deal, right? Well, that cost tends to add up after a 
while. Its paid in lots of tiny little pieces rather than in a few big 
chunks, but the total time taken by it is larger.
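You can watch the bookkeeping in question with the core Devel::Peek module;
every bump shown there is an increment now and a matching decrement later:

    use Devel::Peek;

    my $x  = "hello";
    my $r1 = \$x;
    my $r2 = \$x;
    Dump($x);    # REFCNT = 3: the variable itself plus the two references,
                 # each of which had to be counted in and must be counted out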

It's also possible that by tossing refcounts we can shrink down the size of 
a perl variable structure (though I know it's not that way now) or at least 
move the GC field to the end, where it's less likely to be loaded. Most 
fast processors these days fetch data into cache in 8 or 16 byte chunks, so 
moving the GC field outside of the active chunk area means we won't be 
loading in dead data (okay, it's only resting!) every time we access a 
variable. There's no point in doing this with perl 5, since it's not dead 
data, but with a non-refcount GC scheme it'll be accessed much less.

Finally, all you really need to do is read the last day or so of p5p where 
Alan's trying to plug a batch of perl memory leaks to see how well the 
refcount scheme seems to be working now...

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-09 Thread Branden

Dan Sugalski wrote:
 At 12:06 PM 2/9/2001 -0500, Ken Fox wrote:
   2. Work proportional to live data, not total data. This is hard to
  believe for a C programmer, but good garbage collectors don't have
  to "free" every allocation -- they just have to preserve the live,
  or reachable, data. Some researchers have estimated that 90% or
  more of all allocated data dies (becomes unreachable) before the
  next collection. A ref count system has to work on every object,
  but smarter collectors only work on 10% of the objects.

 As is this. (Perl can generate a lot of garbage if you're messing around
 with strings and arrays a lot)


Let me see if I got that right. If I change the way some objects are used so
that I tend to create other objects instead of reusing the old ones, I'm
actually not degrading GC performance, since its work is proportional to
live data. Right? This increases memory usage, though, right? Would this
cause some thrashing if the excessive memory usage causes degrading to
virtual memory? (I guess not, since live data would probably be accessed,
and dead data would probably be discarded somehow before going to virtual
memory, right?).

What are actually the consequences of generating more or less garbage by
reusing/not reusing structures, under this advanced GC model?

 Finally, all you really need to do is read the last day or so of p5p where
 Alan's trying to plug a batch of perl memory leaks to see how well the
 refcount scheme seems to be working now...

Yeah, I know that... But I actually think this is because Perl 5's
implementation of refcounting is quite messy, especially when weakrefs are in
the game.

- Branden




Re: JWZ on s/Java/Perl/

2001-02-09 Thread abigail

On Fri, Feb 09, 2001 at 12:06:12PM -0500, Ken Fox wrote:
 
  2. Work proportional to live data, not total data. This is hard to
 believe for a C programmer, but good garbage collectors don't have
 to "free" every allocation -- they just have to preserve the live,
 or reachable, data. Some researchers have estimated that 90% or
 more of all allocated data dies (becomes unreachable) before the
 next collection. A ref count system has to work on every object,
 but smarter collectors only work on 10% of the objects.

So, it's more a data preserver than a garbage collector ;-)


Abigail



Re: JWZ on s/Java/Perl/

2001-02-09 Thread Dan Sugalski

At 05:29 PM 2/9/2001 -0200, Branden wrote:
Ken Fox wrote:
   2. Work proportional to live data, not total data. This is hard to
  believe for a C programmer, but good garbage collectors don't have
  to "free" every allocation -- they just have to preserve the live,
  or reachable, data. Some researchers have estimated that 90% or
  more of all allocated data dies (becomes unreachable) before the
  next collection. A ref count system has to work on every object,
  but smarter collectors only work on 10% of the objects.

Does this 90/10 ratio mean that the memory usage is actually 10 times what it
needs to be? (if it were even _possible_ to pack all the data without
fragmentation problems)

No. It means that 90% of the memory allocated between runs of the GC gets 
freed. The memory isn't wasted, by any means.

Most memory is allocated for ephemeral things--objects that come and go, 
temporary buffers, scratch space, and suchlike things. It's normal.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-09 Thread Dan Sugalski

At 06:30 PM 2/9/2001 +, Nicholas Clark wrote:
On Fri, Feb 09, 2001 at 01:19:36PM -0500, Dan Sugalski wrote:
  The less memory you chew through the faster your code will probably be (or
  at least you'll have less overhead). Reuse is generally faster and less
  resource-intensive than recycling. What's true for tin cans is true for 
 memory.

reduce, reuse, recycle.
The first R might also be important :-)

Oh, no doubt. Everything's got tradeoffs, the question is always "what's 
most important". In perl's case, it's speed, and memory usage is of 
secondary importance unless it impacts the speed of the program.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: JWZ on s/Java/Perl/

2001-02-09 Thread Ken Fox

Branden wrote:
 Ken Fox wrote:
  Some researchers have estimated that 90% or
  more of all allocated data dies (becomes unreachable) before the
  next collection. A ref count system has to work on every object,
  but smarter collectors only work on 10% of the objects.
 
 Does this 90/10 ratio mean that the memory usage is actually 10 times what it
 needs to be? (if it were even _possible_ to pack all the data without
 fragmentation problems)

The general rule is the more space you "waste" the faster the collector
is. If you have memory to spare, then don't run the garbage collector as
often and your program will spend less total time garbage collecting.
In other words, the collection cost per object approaches zero.

If you "need" to go faster, then waste more memory.

If you "need" to use less memory, then go slower and collect more
frequently.

When comparing the memory management efficiency of different approaches,
it's very important to remember all the costs that the approaches have.
C-style malloc has quite a bit of overhead per object and tends to
fragment the heap. Many garbage collectors don't have either of these
problems.

Garbage collectors are very good from an efficiency perspective, but
tend to be unreliable in a mixed language environment and sometimes
impose really nasty usage requirements.

- Ken



Re: JWZ on s/Java/Perl/

2001-02-09 Thread Robin Berjon

At 16:16 09/02/2001 -0500, Ken Fox wrote:
The general rule is the more space you "waste" the faster the collector
is. If you have memory to spare, then don't run the garbage collector as
often and your program will spend less total time garbage collecting.
In other words, the collection cost per object approaches zero.

If you "need" to go faster, then waste more memory.

If you "need" to use less memory, then go slower and collect more
frequently.

Which (to me) seems to just beg for the question: Is this something that
the oft discussed use {less,more} {memory,speed} pragma could hook into ?

-- robin b.
Heisenberg might have been here.




Re: Garbage collection (was Re: JWZ on s/Java/Perl/)

2001-02-09 Thread Ken Fox

Dan Sugalski wrote:
 At 04:09 PM 2/9/2001 -0200, Branden wrote:
  If I change the way some objects are used so
  that I tend to create other objects instead of reusing the old ones, I'm
  actually not degrading GC performance, since its work is proportional to
  live data. Right?
 
 Correct. Whether reuse is a win overall is a separate question.

It's totally dependent upon hardware. From a software big-O type of
analysis, creating new objects is never slower than reusing objects.

The problems come about if (a) memory is low and the OS decides to
page without telling the application to prepare for paging or (b) if all
memory isn't the same speed, e.g. caches are faster than main memory.

  This increases memory usage, though, right? Would this
  cause some thrashing if the excessive memory usage causes degrading to
  virtual memory? ...
 
 It depends on whether the old structures are really unused. If they are,
 one of the GC passes will reclaim the space they're taking.

It also depends on locality of reference. Semi-space-based collectors
are not bad at preserving locality -- mark-sweep and malloc-like allocators
are terrible.

The weird thing is that a collector can actually *improve* locality by
moving objects "close" to the things they refer to. In perl's case, the
collector could move the underlying value representation close to the PMC
that refers to it. (But we may want to pin a PMC so that foreign code
can keep references to it. Argh.)

 (It's safe to assume that if perl 6's garbage collector causes otherwise
 small programs to swap then it's busted and needs fixing)

If you mean small as in "tight loop" then I agree. If you mean small as
in a "quick one liner" then I'm not sure. The quick one liners run quickly
and speeding memory management up/down by 100% might not even be noticeable.

 The less memory you chew through the faster your code will probably be (or
 at least you'll have less overhead). Reuse is generally faster and less
 resource-intensive than recycling. What's true for tin cans is true for memory.

The electrons are re-used whether you allocate a new object or not... ;)

 Going to a more advanced garbage collection scheme certainly isn't a
 universal panacea--mark and sweep in perl 6 will *not* bring about world
 peace or anything. It will (hopefully) make our lives easier, though.

Mark-sweep doesn't have a cheap allocator or good locality. At this point
in history, I think if we don't go with a more advanced system we're not
learning.

- Ken



Re: JWZ on s/Java/Perl/

2001-02-09 Thread Dan Sugalski

At 10:21 PM 2/9/2001 +0100, Robin Berjon wrote:
At 16:16 09/02/2001 -0500, Ken Fox wrote:
 The general rule is the more space you "waste" the faster the collector
 is. If you have memory to spare, then don't run the garbage collector as
 often and your program will spend less total time garbage collecting.
 In other words, the collection cost per object approaches zero.
 
 If you "need" to go faster, then waste more memory.
 
 If you "need" to use less memory, then go slower and collect more
 frequently.

Which (to me) seems to just beg for the question: Is this something that
the oft discussed use {less,more} {memory,speed} pragma could hook into ?

Sure. Using it to alter the frequency of garbage collection's not an 
inappropriate thing to do.

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: JWZ on s/Java/Perl/

2001-02-09 Thread Bart Lateur

On Fri, 09 Feb 2001 12:06:12 -0500, Ken Fox wrote:

There are two main reasons advanced garbage collectors are fast:

 1. Cheap allocations. Most fast collectors have a one or two
instruction malloc. In C it looks like this:

  void *malloc(size) { void *obj = heap; heap += size; return obj; }

It's easier to do alignments in a macro layer above the allocator
so the allocator doesn't have to constantly re-align to address
boundaries. There is basically no difference between the performance
of heap and stack allocations with a good collector.

That is not a garbage collector. That is "drop everything you don't
need, and we'll never use it again." Oh, sure, not doing garbage
collection at all is faster than doing reference counting.

 2. Work proportional to live data, not total data. This is hard to
believe for a C programmer, but good garbage collectors don't have
to "free" every allocation -- they just have to preserve the live,
or reachable, data. Some researchers have estimated that 90% or
more of all allocated data dies (becomes unreachable) before the
next collection. A ref count system has to work on every object,
but smarter collectors only work on 10% of the objects.

That may work for C, but not for Perl.

sub test {
my($foo, $bar, %baz);
...
return \%baz;
}

You may notice that only PART of the locally malloced memory gets
freed. The memory of %baz may well be in the middle of that pool. You're
making a huge mistake if you simply declare the whole block dead weight.

-- 
Bart.



Re: JWZ on s/Java/Perl/

2001-02-09 Thread Mark Koopman

 On Fri, 09 Feb 2001 12:06:12 -0500, Ken Fox wrote:
 
 
 That may work for C, but not for Perl.
 
   sub test {
   my($foo, $bar, %baz);
   ...
   return \%baz;
   }
 
 You may notice that only PART of the locally malloced memory gets
 freed. The memory of %baz may well be in the middle of that pool. You're
 making a huge mistake if you simply declare the whole block dead weight.
 
 -- 
   Bart.

but is this an example of the way people SHOULD code, or simply ARE ABLE to
code?  Are we considering deprecating this type of bad style, and forcing
a programmer to, in this case, supply a ref to %baz in the arguments to
this sub?

Mark Koopman
Software Engineer

WebSideStory, Inc

10182 Telesis Court
San Diego CA  92121
858.546.1182.##.318
858.546.0480.fax

perl -e '
eval(lc(join("",
map ({chr}(q(
49877273766940
80827378843973
32767986693280
69827639463932
39883673434341
))=~/../g;'



Re: JWZ on s/Java/Perl/

2001-02-05 Thread Piers Cawley

"Branden" [EMAIL PROTECTED] writes:

 Piers Cawley wrote:
 "Branden" [EMAIL PROTECTED] writes:
  Of course, C++ has no GC, which is a good thing, but you can always
  fake it with Refcounts, which is much more efficient, and easily
  feasible with C++.
 
 Err... current research shows that the refcount approach is one of the
 slowest forms of GC, and it doesn't even do the job properly.
 
 --
 Piers
 
 
 I actually don't understand how traversing a graph can be faster than
 incrementing/decrementing/testing for zero on a refcount. I believe you, but
 I just don't understand. Could you point me to some URLs that talk about
 this?

There's a jolly good book on this called (would you believe) 'Garbage
Collection'. The crux of the matter would appear to be that with
refcounts you have to do a pretty small amount of work very, very
often. With a well designed GC system you do a largish amount of work
much less frequently. The total amount of work done tends to come out
higher in the refcounting scenario.

Consider:

   for my $foo (@list_of_refs) {
  ...
   }

This does N increments, N decrements and N comparisons. At least. A GC
system doesn't. And, unless a GC pass happens during the loop, no GC
overhead will be incurred.

-- 
Piers




Re: JWZ on s/Java/Perl/

2001-01-30 Thread David Mitchell

"Branden" [EMAIL PROTECTED] wrote:
 Well, mandatory locking is something we should definitely NOT have in Perl6.
 Most of perl's code today is not threaded, and I believe much of it will
 continue to be this way. The pseudo-fork thread behaviour that is being
 proposed also makes this OK. Even if you have threads, you have to say
 explicitly if you want anything to be shared. And if you explicitly
 share something, then you should take care of the locks yourself.

Call me lazy, but in my ideal world, if I had some code with multiple
interpreter threads that happened to share some perl variables, then I'd
like Perl to take care of any locking needed to ensure those variables
are accessed in a thread-safe way. I.e. I let the Perl developers do all
the hard locking code behind the scenes, and I don't have to worry my pretty
little head about it.
Now, there may be practical reasons why it isn't possible for perl to do
this for me automatically (reasons, anyone?), but it's a nice
ideal.

Just MHO ;-)

Dave M.




Re: JWZ on s/Java/Perl/

2001-01-30 Thread Branden


David Mitchell wrote:
 I let the Perl developers do all
 the hard locking code behind the scenes, and I don't have to worry my pretty
 little head about it.
 Now, there may be practical reasons why it isn't possible for perl to do
 this for me automatically (reasons, anyone?), but it's a nice
 ideal.

 Dave M.


The thing with mandatory locks per variable is that as long as you only
want to access _that_ variable, it's OK, but if you want to make several
uses of several variables and do it all at once, you've got a
problem. Suppose we have:

$a++;
$x = $a;
$y = $a;
die if ($x != $y);    ## can die in an MT environment

Well, if locking on every access to the variable is mandatory, then this code
would execute the following operations:

lock $a
load $a
inc
store $a
unlock $a
lock $a
lock $x
load $a
store $x
unlock $a
unlock $x
lock $a
lock $y
load $a
store $y
unlock $a
unlock $y
...

See the problem with this code? A statement using $x or $a cannot run
while $x = $a is running, because both variables are locked. That ensures
$x = $a is executed without problems, but any statement modifying $a (for
example, $a++) could still get to run between $x = $a and $y = $a, and so $x
and $y could end up holding different values. So, instead of locking the
variables for each statement, you had better lock them for the whole block of
statements.

lock $a;
$a++;
$x = $a;
$y = $a;
die if ($x != $y);    ## won't ever die
unlock $a;



AFAIK, Java locks the non-local variables for every statement. I think I
read this somewhere but I can't remember where. I guess this kind of
locking makes the statement

$x = $a + $a;

run with the same value of $a for both operands. (I could be wrong, and I
think even if there were no locking, the VM could read $a only once and
use the two operands from the same reading, by copying from one local
register to another, or duplicating it on the stack...). Please correct me
if I'm wrong, but I think that is the kind of thing Java does. If anyone
can find it written somewhere, please post.


Well, IMO, this kind of locking is very weak. If instead of writing

$x = $a + $a;

you wrote

$x = $a;
$x += $a;

You would lose all the locking behaviour that would guarantee that the
two $a's are the same. This gets even worse in Perl because, as you know,
Perl has so many special cases and so many more ways to do it that you
would always have to think "Is this a statement?" or "Is the
lock working?", etc.

That's why I think, in perl,

$x = $a + $a;

should behave as

$x = $a;
$x += $a;

i.e., with no locking, and no guarantee that the two reads of the non-local
variable $a will return the same thing. (I'm not saying they will _always_
return different things, nor that they will _eventually_ return different
things, only that it's not guaranteed that they will return the same thing!)

How can one force the sane behaviour of returning the same thing for both
reads? In all cases, by setting an explicit lock on $a.



The bottom line is: if you are going to write a threaded program, you can be
lazy or careful. If you're lazy, just let it run with no locks in it and pray
that no race conditions occur. If you're careful, you will promptly identify
which variables are shared, and put locks around any section of the code
that either reads or writes those variables.



I've read so many badly implemented threaded programs written in Java that I
almost think they shouldn't include threads in it. People read "Java is
thread-safe" and go on using it as if they would never run into this kind of
situation. But race conditions are probably the hardest kind of bug to spot
and the hardest to reproduce. I think the less magic we put into locks, the
more we force programmers to be conscious of race conditions in
multi-threading.


- Branden




Re: JWZ on s/Java/Perl/

2001-01-30 Thread David Mitchell

"Branden" [EMAIL PROTECTED] wrote:
 The thing with mandatory locks per variable, is that as long as you only
 want to access _that_ variable, it's ok, but if you want to make several
 uses of several variables and want to do it all at once, you've got a
 problem.

[ big snip ]

Sorry, I misunderstood you. I think in fact we agree! What I was
advocating was that Perl should automatically make accesses to
individual shared variables safe, so 2 threads executing
1: $shared = 10;  2: $shared = 20;

won't guarantee whether $shared ends up as 10 or 20, but will guarantee
that the internal representation of $shared won't get corrupted.
Anything that requires consistency across multiple variable accesses
should be left up to the programmer, who decides where to lock, IMHO.
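A sketch of that split using the ithreads shared-variable interface (the
details of that API were still settling at the time):

    use threads;
    use threads::shared;

    my $balance : shared = 0;

    # Perl keeps the *internals* of $balance from being corrupted without
    # any user lock, but two racing += ops can still lose an update.
    sub deposit  { $balance += $_[0] }

    # Consistency across several accesses is the programmer's job:
    sub withdraw {
        my ($amount) = @_;
        lock($balance);              # released when the enclosing scope ends
        return 0 if $balance < $amount;
        $balance -= $amount;         # check and update as one atomic step
        return 1;
    }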




Re: JWZ on s/Java/Perl/

2001-01-29 Thread abigail

On Sun, Jan 28, 2001 at 11:54:13PM -0600, Jarkko Hietaniemi wrote:
 On Sun, Jan 28, 2001 at 08:56:33PM -0500, Michael G Schwern wrote:
  On Sun, Jan 28, 2001 at 10:07:55PM +0100, Bart Lateur wrote:
   Uhm, I'm sorry, but that's not good enough. You cannot distinguish
   between Windows 95/98/ME on one side, and NT/2k on the other, using $^O
   alone. After all, $^O is just a constant burnt into the executable when
   perl was compiled. You can run the same perl.exe on all platforms, and
   indeed, most people do. Yet win9* and NT are different enough in
   behaviour (e.g. flock) to warrant a test on platform. Er... which is: no
   go.
  
  Well, fork works on both now, but I see your point.  There are ways of
  detecting the OS at run-time under Windows, mostly through MFC junk or
  peeking in the registry.  It would probably be good to do it for the
  MacOS versions, too.
 
 The desire to know the name of the runtime platform is a misdirected desire.
 What you really want to know is whether function Foo will be there, what
 kind of signature it has, whether file Bar will be there, what kind of
 format it has, and so on, whether a feature Zog is present, or what
 is the value of parameter Blah.  Just knowing the name of the platform
 doesn't buy you a whole lot.


You want both of course, if only to present the user an error message
(s)he can understand.



Abigail



Re: JWZ on s/Java/Perl/

2001-01-29 Thread Jarkko Hietaniemi

On Sun, Jan 28, 2001 at 11:07:10PM -0700, Nathan Torkington wrote:
 Jarkko Hietaniemi writes:
   True, but you can't do any of all that without knowing the platform
   accurately (nontrivial and requires core mod or XS).  Once that's
   done, the rest is just a matter of extending File::Spec
   (trivial and pure Perl).
  
  Trivial?  *cough* *snigger*
 
 If it was trivial, Configure wouldn't need to exist--we could just use
 hints files.  Lots of hints files.  One for each configuration, in
 fact.  Hey Jarkko, I have an idea ... :-)

Someone please make the evil man go away...

 Nat

-- 
$jhi++; # http://www.iki.fi/jhi/
# There is this special biologist word we use for 'stable'.
# It is 'dead'. -- Jack Cohen



Re: JWZ on s/Java/Perl/

2001-01-29 Thread Jeanna FOx

J. David Blackstone wrote:
 Yeah, that was one of my disappointments when I finally made the
 Java plunge last month.  I kind of expected integers to be objects in
 what I had heard was the "perfect, pure" OO language.

Everybody seems to be missing the fact that jwz bitching about Java's
"32 bit non-object ints" means that at least he thinks they could be
salvaged. What would he think of Perl's "224 bit non-object ints"?!
Don't get smug because Perl can iterate over an array of anything. The
price we pay is incredibly expensive.

- Ken

P.S. I may be wrong, but trying to learn what jwz doesn't like about
Java and then figure out how to "fix" it in Perl is an impossible task.
He's got wonderful, interesting, thoughtful opinions -- they just seem
contrary to Perl's basic existence.

Here's a chat I quickly found on Google. I don't know if it's true, but
it's certainly consistent with everything I've heard/read jwz say about
languages:

jwz: I think java has enough of the lisp nature to satisfy me, even
 though it's not a purely functional language 
jwz: I expect that we're going to be stuck with algol-syntax languages
 for a long long time, because there's too much invested in them
 already 
jwz: but really that doesn't matter, because the surface syntax of
 languages is really trivia 
jwz: for the record, I despise C++ and perl 
jwz: though I use perl, because it does make things easier in the
 current environment 
jwz: C++ just makes everything harder and worse, so I won't use it at
 all.

Notice that he likes Lisp, which has strongly influenced Perl -- much
more so than Java. The main problems from jwz's perspective I'm sure
would be control (giving up some of those 224 bits in an integer) and
syntax (he doesn't like Algol and Perl is Algol with $@%).



Re: JWZ on s/Java/Perl/

2001-01-29 Thread David Mitchell

Jeanna FOx [EMAIL PROTECTED] wrote:
 Everybody seems to be missing the fact that jwz bitching about Java's
 "32 bit non-object ints" means that at least he thinks they could be
 salvaged. What would he think of Perl's "224 bit non-object ints"?!
 Don't get smug because Perl can iterate over an array of anything. The
 price we pay is incredibly expensive.

Perl6's vtable implementation of scalars and arrays etc should
allow lightweight arrays of integers (e.g. 32 bits) which still appear
to the rest of Perl (and the programmer) to be arrays of full-blown SVs.
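For a feel of what "lightweight" buys, compare a plain Perl array with a
packed buffer in today's Perl (pack/unpack only stand in here for what a
specialised vtable implementation could do transparently):

    # One full SV per element: each slot carries flags, a refcount and
    # assorted bookkeeping on top of the 32-bit value itself.
    my @boxed = (1 .. 1_000);

    # Four bytes per element, no per-element SV.  A dedicated array
    # implementation could keep this layout internally and still hand
    # back ordinary scalars on access.
    my $packed = pack 'l*', 1 .. 1_000;
    my @again  = unpack 'l*', $packed;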




Re: JWZ on s/Java/Perl/

2001-01-29 Thread David Grove


Jarkko Hietaniemi [EMAIL PROTECTED] wrote:

  The desire to know the name of the runtime platform is a misdirected
  desire.
  What you really want to know is whether function Foo will be there,
what
  kind of signature it has, whether file Bar will be there, what kind of
  format it has, and so on, whether a feature Zog is present, or what
  is the value of parameter Blah.  Just knowing the name of the platform
  doesn't buy you a whole lot.

It's not limited to perl functionality. I need to know what version of
what can be assumed to be there, and which APIs are available. Knowing the
operating system type (generic) and version (specific) are both helpful
for purposes apart from knowing what perl functions are available. In
fact, for the latter purpose I have only used this functionality a couple
of times, whereas the former comes up in a large number of my programs.

Also, I find $^O quite helpful as "MSWin32" simply to find out whether or
not it's a UN*X operating system. I need to run shell calls or what not
depending on that generic platform. Although I often care what specific
version of Win32 I have (and what my running linux kernel version is),
I find that need to be much rarer.

if ($^O eq "MSWin32") {
  `notepad c:\\temp\\foo$num.txt`;
}
else {
  if (-x $ENV{EDITOR}) {
    `$ENV{EDITOR} /tmp/foo$num.txt`;
  }
  else {
    `vi /tmp/foo$num.txt`;
  }
}

Obtaining the platform name quickly and easily buys quite a bit and is
hugely important to streamlined code. Knowing the version is, to me, a
rarer need and not as important (not to insinuate that it isn't important
to some).

p





Re: JWZ on s/Java/Perl/

2001-01-29 Thread Branden

Jeanna FOx wrote:
 Everybody seems to be missing the fact that jwz bitching about Java's
 "32 bit non-object ints" means that at least he thinks they could be
 salvaged. What would he think of Perl's "224 bit non-object ints"?!
 Don't get smug because Perl can iterate over an array of anything. The
 price we pay is incredibly expensive.

Personally, my big complaints about Java are about having to write

[  Object a; int b;  ]

a = new Integer(b);
b = ((Integer) a).intValue();
// ...or something like that

** instead of **

a = b;
b = a;
// this is perlish!!!


Well, if a compiler can't figure out that the types of the
variables "Object" and "int" are different and it should make 
a conversion to assign one from the other, well, then the 
compiler writers are damn bad programmers! If the int takes 
32 bits, 64 bits, 224 bits, or even 64K, that's an issue of 
memory usage, and that could be handled differently (and
transparently) by different implementations of the VM. Perl6 
will probably be such a case. As it will (probably) run under 
JVM, .NET and native, the sizes of "int" will probably be 
different under the 3 architectures. What matters is that 
the functionality is preserved.



 P.S. I may be wrong, but trying to learn what jwz doesn't like about
 Java and then figure out how to "fix" it in Perl is an impossible task.
 He's got wonderful, interesting, thoughtful opinions -- they just seem
 contrary to Perl's basic existence.

I agree completely. Java is a systems programming language,
i.e. it's a very low level language. (You may disagree with
me, but Java is just as low level as C, and, as far as UNIX
is concerned, it's much less portable, IMO.) Perl, on the
other hand, is a scripting language, i.e. high-level. Perl
should be concerned with things like weak typing, lists that
grow and shrink as we wish, hashes, auto-converting
ints to bigints, regexp matches, closures, and the kinds of
things that are very abstract (to a machine) and that
make programming Perl so great, simple and creative!




About jwz's "I think java has enough of the lisp nature to
satisfy me, even though it's not a purely functional language":
WHAT??? IS THIS GUY SERIOUS??? Java has nothing of a
functional language!!! There is no such thing as a function
pointer (or anything equivalent) in Java. The closest you
can get is an anonymous inner class, which is little more than
an improvised patch. And the verbosity!!! You have to write
two sets of { } and still use an interface just to get
a closure!!! Well, the other thing about Lisp, besides
functions, is lists. And there's nothing Java handles worse
than lists. Arrays [] are fixed-size (which is dumber than dumb!)
and Vectors throw strong typing out the window!!! And try to
make a list of ints with Vector. The bad syntax I was talking
about before gets even worse:

[ Vector v; int i; ]
v.addElement(new Integer(i));                 // push @v, $i;


for (Enumeration e = v.elements(); e.hasMoreElements(); ) {
    i = ((Integer) e.nextElement()).intValue();
    doSomethingWith(i);
}
//  foreach $i (@v) {  do_something_with($i)  }
//** or shorter **
//  map { do_something_with($_) } @v;
//** there are more ways to do it than these **
//  do_something_with($_) for @v;
//** and so on **


About jwz's "I despise C++ and perl": Well, I can even
understand despising Perl. Some people feel threatened
and confused by a language that has a personality,
even one as bright and clever as Perl's ;-)
As for "C++ just makes everything harder and worse, so
I won't use it at all.", I actually agree, but not to
the extent he takes it. I actually didn't read
it, but I guess he means he prefers Java to C++. On this
point I disagree. Of course, C++ has no GC, which is a
good thing, but you can always fake it with Refcounts,
which is much more efficient, and easily feasible with
C++. And at least they didn't chop templates and operator
overloading away from you, features which could have made
Java a usable thing. At least I could then have a list of
integers, instead of the verbose code above!


- Branden




Re: JWZ on s/Java/Perl/

2001-01-29 Thread David Mitchell

 Perhaps you meant that Perl 6 is going to have homogeneous arrays, in
 which case an array of ints would keep 32 bits (per value) of int data in
 the array and auto-generate the extra flags and stuff when a value is
 extracted from the array. That's possible, but it's a special case of small
 ints. You already know how jwz likes special cases.

Basically, yes. All I was pointing out was in Perl6 it *will* be possible
to have a large array of purely integers, thus using 4*@a or 8*@a bytes
of total storage, say.
It will also be possible (if someone can be bothered to code it)
to have an array where some elements are integers, occupying 4 bytes + a bit,
while other entries are full SVs. The particular array implementation
just needs enough storage per element (somehow or another) to note
whether a particular slot contains a 4-byte int, say, or an SV pointer.
Or other such weird and wonderful things. Basically, Perl6 gives you
much, much more freedom in having multiple array (and hash) implementations
optimised for different things. Not enough to keep JWZ happy of course,
but much better nevertheless.




Re: JWZ on s/Java/Perl/

2001-01-29 Thread Jeanna FOx

David Mitchell wrote:
 Jeanna FOx [EMAIL PROTECTED] wrote:
  Everybody seems to be missing the fact that jwz bitching about Java's
  "32 bit non-object ints" means that at least he thinks they could be
  salvaged. What would he think of Perl's "224 bit non-object ints"?!
  Don't get smug because Perl can iterate over an array of anything. The
  price we pay is incredibly expensive.
 
 Perl6's vtable implementation of scalars and arrays etc should
 allow lightweight arrays of integers (e.g. 32 bits) which still appear
 to the rest of Perl (and the programmer) to be arrays of full-blown SVs.

Doubtful. It looks like we're still going to have a relatively
heavy-weight SV (sometimes called a PMC for Perl Magic Cookie). This
is currently 3 words in Perl 5 and it ain't shrinking for Perl 6. So
while it's true that Perl 6 will get "lighter and faster", it doesn't
come close to what jwz thinks of as "light and fast". He's expecting 32
bits per int *total* (boxed values with 31 bit range and a 1 bit type
mark) and a byte/native code compiler that's able to work with the data
in native (unboxed) form when it's type safe to do that. And transparent
auto-promotion to bigints. And built-in ints implemented as real objects
that he can sub-class. And type-restrictions using typedef that can
sub-type without the cost of sub-classing.

Perhaps you meant that Perl 6 is going to have homogeneous arrays, in
which case an array of ints would keep 32 bits (per value) of int data in
the array and auto-generate the extra flags and stuff when a value is
extracted from the array. That's possible, but it's a special case of small
ints. You already know how jwz likes special cases.

It also looks like some features are impossible to turn off -- like the
mandatory locking that jwz hates about Java. It's not safe to turn it
off, but it's not really safe with it on either. Some people would rather
lose the illusion of safety to get better performance.

I'm not saying any of this is a mistake; I agree with most of the Perl 6
design direction. My only point is that we must maintain perspective and
not be all things to all people. IMHO it's impossible to change Perl to
the point where jwz would use it (or not hate using it... ;)

- Ken



Re: JWZ on s/Java/Perl/

2001-01-29 Thread Branden

Jeanna FOx wrote:
 It also looks like some features are impossible to turn off -- like the
 mandatory locking that jwz hates about Java. It's not safe to turn it
 off, but it's not really safe with it on either. Some people would rather
 lose the illusion of safety to get better performance.
 


Well, mandatory locking is something we should definitely NOT have in Perl6.
Most of perl's code today is not threaded, and I believe much of it will
continue to be this way. The pseudo-fork thread behaviour that is being
proposed also makes this OK. Even if you have threads, you have to say
explicitly if you want anything to be shared. And if you explicitly
share something, then you should take care of the locks yourself.

At least, that's my opinion.

- Branden




Re: JWZ on s/Java/Perl/

2001-01-29 Thread Dan Sugalski

At 12:20 PM 1/29/2001 -0500, Jeanna FOx wrote:
David Mitchell wrote:
  Jeanna FOx [EMAIL PROTECTED] wrote:
   Everybody seems to be missing the fact that jwz bitching about Java's
   "32 bit non-object ints" means that at least he thinks they could be
   salvaged. What would he think of Perl's "224 bit non-object ints"?!
   Don't get smug because Perl can iterate over an array of anything. The
   price we pay is incredibly expensive.
 
  Perl6's vtable implementation of scalars and arrays etc should
  allow lightweight arrays of integers (e.g. 32 bits) which still appear
  to the rest of Perl (and the programmer) to be arrays of full-blown SVs.

Doubtful. It looks like we're still going to have a relatively
heavy-weight SV (sometimes called a PMC for Perl Magic Cookie). This
is currently 3 words in Perl 5 and it ain't shrinking for Perl 6.

The smallest value-carrying SV's actually 4 words in perl 5, not that it's 
that big a difference. It's definitely not shrinking for perl 6, though, 
and it'll probably be a bit bigger. Not as much as you might think, since 
SVs generally hide lots of size off in other areas. (Plain integers are the 
size of an SV plus 4 bytes, but 'real' scalars are somewhat larger) 
Regardless, it's actually not that big a deal--most of the memory usage 
you'll see is tied up in hashes and arrays, not in bare scalars.

I don't think that was jwz's big issue with Java ints--my reading made it 
more of a problem with their non-objectness rather than their size. Our 
ints are, for all intents and purposes, objects, or that's the plan at 
least for perl 6.  I doubt he'd complain about that in perl anyway, as 
we're not really putting ourselves forward as a pure OO language the way 
that Java is. (I'm sure he's got a wide range of other problems with perl, but 
I'd bet that's not one of 'em... :)

So
while it's true that Perl 6 will get "lighter and faster", it doesn't
come close to what jwz thinks of as "light and fast". He's expecting 32
bits per int *total* (boxed values with 31 bit range and a 1 bit type
mark) and a byte/native code compiler that's able to work with the data
in native (unboxed) form when it's type safe to do that. And transparent
auto-promotion to bigints. And built-in ints implemented as real objects
that he can sub-class. And type-restrictions using typedef that can
sub-type without the cost of sub-classing.

And a partridge in a pear tree. From reading things, jwz wants lots of 
Really Black Magic built into the compiler. Which is not unreasonable, but 
not all that likely to happen.

Perhaps you meant that Perl 6 is going to have homogeneous arrays, in
which case an array of ints would keep 32 bits (per value) of int data in
the array and auto-generate the extra flags and stuff when a value is
extracted from the array. That's possible, but it's a special case of small
ints. You already know how jwz likes special cases.

There's nothing wrong with special cases as long as they're hidden, which 
they will be with p6.

It also looks like some features are impossible to turn off -- like the
mandatory locking that jwz hates about Java. It's not safe to turn it
off, but it's not really safe with it on either. Some people would rather
loose the illusion of safety to get better performance.

Got me there with Java, but we're trying to be really careful with perl 6. 
(If anyone thinks we've stumbled, do please chime in with reasons you think 
things are wrong) Minimal locking to protect perl from coring is the goal. 
Application stability (which looks to be more the goal for Java) is 
a bad idea in my experience, as it's not really the place of the language 
to enforce it. Not Algolish languages, at least--I can see some of the 
parallel processing languages doing it, but they're radically different and 
that's OK.

I'm not saying any of this is a mistake; I agree with most of the Perl 6
design direction. My only point is that we must maintain perspective and
not be all things to all people. IMHO it's impossible to change Perl to
the point where jwz would use it (or not hate using it... ;)

I'm not at all worried about what jwz thinks of perl, though if he had a 
list of things he hated I'd be thrilled to read it. (I'll take vicious, 
pointed criticism over vague praise any day)

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: JWZ on s/Java/Perl/

2001-01-29 Thread Thomas Butler


: Jeanna FOx wrote:
:  It also looks like some features are impossible to turn off -- like the
:  mandatory locking that jwz hates about Java. It's not safe to turn it
:  off, but it's not really safe with it on either. Some people would rather
:  lose the illusion of safety to get better performance.
: 
:
:
: Well, mandatory locking is something we should definitely NOT have in Perl6.
: Most of perl's code today is not threaded, and I believe much of it will
: continue to be this way. The pseudo-fork thread behaviour that is being
: proposed also makes this OK. Even if you have threads, you have to say
: explicitly if you want anything to be shared. And if you explicitly
: share something, then you should take care of the locks yourself.
:
: At least, that's my opinion.

What would you say to a default behavior of locking, which may be explicitly
altered with the correct commands / arguments to fork() or Thread->new()?  I
kind of think it is nice not having to worry about locking semantics, but I
would like the ability to turn it all off if I want.

- tommy




Re: JWZ on s/Java/Perl/

2001-01-29 Thread Dan Sugalski

At 12:54 PM 1/29/2001 -0800, Thomas Butler wrote:

: Jeanna FOx wrote:
:  It also looks like some features are impossible to turn off -- like the
:  mandatory locking that jwz hates about Java. It's not safe to turn it
:  off, but it's not really safe with it on either. Some people would rather
:  lose the illusion of safety to get better performance.
: 
:
:
: Well, mandatory locking is something we should definitely NOT have in Perl6.
: Most of perl's code today is not threaded, and I believe much of it will
: continue to be this way. The pseudo-fork thread behaviour that is being
: proposed also makes this OK. Even if you have threads, you have to say
: explicitly if you want anything to be shared. And if you explicitly
: share something, then you should take care of the locks yourself.
:
: At least, that's my opinion.

What would you say to a default behavior of locking, which may be 
explicitly altered with the correct commands / arguments to fork() or 
Thread->new()?  I kind of think it is nice not having to worry about 
locking semantics, but would like the ability to turn it all off if I want.

Unfortunately there's really no sort of user-mode locking that perl can 
provide that's not fundamentally broken for a non-trivial number of cases. 
The best we can really do is make sure we don't corrupt internal data 
structures, and leave the rest for explicit programmer locking.

This is, unfortunately, pretty much the state of affairs with any 
Algol-like general purpose language I can think of. Things are different 
with languages that are more special purpose, or that have different 
fundamental assumptions, but for perl you're sort of out of luck.

Been there. Done that. Have the scorched T-shirt. Unfortunately. :(

Dan

--"it's like this"---
Dan Sugalski  even samurai
[EMAIL PROTECTED] have teddy bears and even
  teddy bears get drunk




Re: JWZ on s/Java/Perl/

2001-01-29 Thread Piers Cawley

"Branden" [EMAIL PROTECTED] writes:
 Of course, C++ has no GC, which is a good thing, but you can always
 fake it with Refcounts, which is much more efficient, and easily
 feasible with C++.

Err... current research shows that the refcount approach is one of the
slowest forms of GC, and it doesn't even do the job properly.

-- 
Piers




Re: JWZ on s/Java/Perl/

2001-01-28 Thread Bart Lateur

On Sat, 27 Jan 2001 18:16:52 -0500, Michael G Schwern wrote:

   o The architecture-interrogation primitives are inadequate; there is no
 robust way to ask ``am I running on Windows'' or ``am I running on
 Unix.''

**We have $^O, but it requires parsing every time**

Uhm, I'm sorry, but that's not good enough. You cannot distinguish
between Windows 95/98/ME on one side, and NT/2k on the other, using $^O
alone. After all, $^O is just a constant burnt into the executable when
perl was compiled. You can run the same perl.exe on all platforms, and
indeed, most people do. Yet win9* and NT are different enough in
behaviour (e.g. flock) to warrant a test on platform. Er... which is: no
go.

-- 
Bart.



Re: JWZ on s/Java/Perl/

2001-01-28 Thread Michael G Schwern

On Sun, Jan 28, 2001 at 10:07:55PM +0100, Bart Lateur wrote:
 Uhm, I'm sorry, but that's not good enough. You cannot distinguish
 between Windows 95/98/ME on one side, and NT/2k on the other, using $^O
 alone. After all, $^O is just a constant burnt into the executable when
 perl was compiled. You can run the same perl.exe on all platforms, and
 indeed, most people do. Yet win9* and NT are different enough in
 behaviour (e.g. flock) to warrant a test on platform. Er... which is: no
 go.

Well, fork works on both now, but I see your point.  There are ways of
detecting the OS at run-time under Windows, mostly through MFC junk or
peeking in the registry.  It would probably be good to do it for the
MacOS versions, too.

-- 

Michael G. Schwern   [EMAIL PROTECTED]http://www.pobox.com/~schwern/
That which stirs me, stirs everything.
-- Squonk Opera, "Spoon"



Re: JWZ on s/Java/Perl/

2001-01-28 Thread Jarkko Hietaniemi

On Sun, Jan 28, 2001 at 08:56:33PM -0500, Michael G Schwern wrote:
 On Sun, Jan 28, 2001 at 10:07:55PM +0100, Bart Lateur wrote:
  Uhm, I'm sorry, but that's not good enough. You cannot distinguish
  between Windows 95/98/ME on one side, and NT/2k on the other, using $^O
  alone. After all, $^O is just a constant burnt into the executable when
  perl was compiled. You can run the same perl.exe on all platforms, and
  indeed, most people do. Yet win9* and NT are different enough in
  behaviour (e.g. flock) to warrant a test on platform. Er... which is: no
  go.
 
 Well, fork works on both now, but I see your point.  There are ways of
 detecting the OS at run-time under Windows, mostly through MFC junk or
 peeking in the registry.  It would probably be good to do it for the
 MacOS versions, too.

The desire to know the name of the runtime platform is a misdirected desire.
What you really want to know is whether function Foo will be there, what
kind of signature it has, whether file Bar will be there, what kind of
format it has, and so on, whether a feature Zog is present, or what
is the value of parameter Blah.  Just knowing the name of the platform
doesn't buy you a whole lot.
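In Perl terms, that's the difference between probing for the capability and
switching on the name:

    use Config;

    # Probe for what you actually need ...
    my $can_fork    = $Config{d_fork} ? 1 : 0;            # build-time answer
    my $can_symlink = eval { symlink("", ""); 1 } || 0;   # classic run-time probe

    # ... rather than guessing from the platform name:
    my $looks_like_windows = ($^O eq "MSWin32");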

-- 
$jhi++; # http://www.iki.fi/jhi/
# There is this special biologist word we use for 'stable'.
# It is 'dead'. -- Jack Cohen



Re: JWZ on s/Java/Perl/

2001-01-28 Thread Michael G Schwern

On Sun, Jan 28, 2001 at 11:54:13PM -0600, Jarkko Hietaniemi wrote:
 The desire to know the name of the runtime platform is a misdirected desire.
 What you really want to know is whether function Foo will be there, what
 kind of signature it has, whether file Bar will be there, what kind of
 format it has, and so on, whether a feature Zog is present, or what
 is the value of parameter Blah.  Just knowing the name of the platform
 doesn't buy you a whole lot.

True, but you can't do any of all that without knowing the platform
accurately (nontrivial and requires core mod or XS).  Once that's
done, the rest is just a matter of extending File::Spec (trivial and
pure Perl).

-- 

Michael G. Schwern   [EMAIL PROTECTED]http://www.pobox.com/~schwern/
mendel ScHWeRnsChweRNsChWErN   SchweRN  SCHWErNSChwERnsCHwERN
  sChWErn  ScHWeRn  schweRn   sCHWErN   schWeRnscHWeRN 
   SchWeRN  scHWErn SchwErn   scHWErn   ScHweRN   sChwern  
scHWerNscHWeRn   scHWerNScHwerN   SChWeRN scHWeRn  
SchwERNschwERnSCHwern  sCHWErN   SCHWErN   sChWeRn 



Re: JWZ on s/Java/Perl/

2001-01-28 Thread Jarkko Hietaniemi

On Mon, Jan 29, 2001 at 01:08:21AM -0500, Michael G Schwern wrote:
 On Sun, Jan 28, 2001 at 11:54:13PM -0600, Jarkko Hietaniemi wrote:
  The desire to know the name of the runtime platform is a misdirected desire.
  What you really want to know is whether function Foo will be there, what
  kind of signature it has, whether file Bar will be there, what kind of
  format it has, and so on, whether a feature Zog is present, or what
  is the value of parameter Blah.  Just knowing the name of the platform
  doesn't buy you a whole lot.
 
 True, but you can't do any of all that without knowing the platform
 accurately (nontrivial and requires core mod or XS).  Once that's
 done, the rest is just a matter of extending File::Spec
 (trivial and pure Perl).

Trivial?  *cough* *snigger*

-- 
$jhi++; # http://www.iki.fi/jhi/
# There is this special biologist word we use for 'stable'.
# It is 'dead'. -- Jack Cohen



Re: JWZ on s/Java/Perl/

2001-01-28 Thread Nathan Torkington

Jarkko Hietaniemi writes:
  True, but you can't do any of all that without knowing the platform
  accurately (nontrivial and requires core mod or XS).  Once that's
  done, the rest is just a matter of extending File::Spec
  (trivial and pure Perl).
 
 Trivial?  *cough* *snigger*

If it was trivial, Configure wouldn't need to exist--we could just use
hints files.  Lots of hints files.  One for each configuration, in
fact.  Hey Jarkko, I have an idea ... :-)

Nat



Re: JWZ on s/Java/Perl/

2001-01-28 Thread Michael G Schwern

On Mon, Jan 29, 2001 at 12:10:31AM -0600, Jarkko Hietaniemi wrote:
 Trivial?  *cough* *snigger*

I'd write it up for you right now, but it's too big to fit in the
margin.


-- 

Michael G. Schwern   [EMAIL PROTECTED]http://www.pobox.com/~schwern/
Skrewtape I've heard that semen tastes different depending on diet.  Is that
true?
Skrewtape Hello?
Schwern Skrewtape:  Hang on, I'm conducting research.



Re: JWZ on s/Java/Perl/

2001-01-27 Thread Jarkko Hietaniemi

I like the final point:

 Stay tuned, I'm sure I'll have found something new to hate by tomorrow.

 (Well, that's how this document originally ended. But it's not true,
 because I'm back to hacking in C, since it's the still only way to
 ship portable programs.)

-- 
$jhi++; # http://www.iki.fi/jhi/
# There is this special biologist word we use for 'stable'.
# It is 'dead'. -- Jack Cohen



Re: JWZ on s/Java/Perl/

2001-01-27 Thread John Porter

J. David Blackstone wrote:
 
  And in related news, it's a total pain that one can't iterate over the
  contents of an array without knowing intimate details about its
  contents: you have to know whether it's byte[], or int[], or Object[].
 
 That's one nice thing about Perl; you can foreach over
 an array of all sorts of different things.  In fact, being able to
 just have an array of all sorts of different things is something Perl
 still has over Java, C, and the like.

It's not that big a deal.  An array in Perl is like an array of Object
in Java, or an array of void* in C.

Like jwz said, if only they had done TRT and made intrinsics 
inherit (or appear to) from Object, it wouldn't be an issue
in Java either.


-- 
John Porter

So take a pointed stick and touch Piggy's eyes
He's gonna turn and leave you a big surprise




Re: JWZ on s/Java/Perl/

2001-01-27 Thread J. David Blackstone

 J. David Blackstone wrote:
 That's one nice thing about Perl; you can foreach over
 an array of all sorts of different things.  In fact, being able to
 just have an array of all sorts of different things is something Perl
 still has over Java, C, and the like.

 It's not that big a deal.  An array in Perl is like an array of Object
 in Java, or an array of void* in C.

  Well, not exactly like it.  I try to avoid arrays of void* like the
plague. :)  But yes, you're right, it is _possible_ to get that
effect.

 Like jwz said, if only they had done TRT and made intrinsics
 inherit (or appear to) from Object, it wouldn't be an issue
 in Java either.

  Yeah, that was one of my disappointments when I finally made the
Java plunge last month.  I kind of expected integers to be objects in
what I had heard was the "perfect, pure" OO language.

jdb