Re: 6PAN (was: Half measures all round)

2002-06-05 Thread Steve Simmons

On Tue, Jun 04, 2002 at 01:11:58PM -0700, David Wheeler wrote:
 On 6/4/02 12:59 PM, Steve Simmons [EMAIL PROTECTED] claimed:
 
  Actually, for 6PAN I think they should have to pass.  And maybe we
  need a bug submission setup, and status checks, and . . . OK, OK, I'll
  stop now.  They're nice ideas, but who bells the cat?  Then again, if
  Tim O'Reilly wants to fund me I'd be happy to do it.  :-)
 
 On what platform(s)? Who's going to pay for the test bed for every possible
 combination of perl version, OS, various libraries, etc., etc.? I think that
 *requiring* that all tests pass is unrealistic.

Perhaps so, but I'd prefer to see this as a goal and fall short rather
than set our sights lower and achieve an empty success.

As for funding - well, one source has been mentioned above.  Another
might be the H/W vendors.  Suppose we modify the proposal to have a
list of systems on which the module has been known to pass (or fail)
self-test?  Get H/W and O/S donations from Sun, HP, etc, volunteer
labor to assemble a test framework, and then let people upload.  On
the web page for a given 6PAN module put a pass/fail indicator for
all the stuff that vendors have donated.  Vendor X wants to be on the
list?  Let 'em donate hardware, S/W, and (maybe eventually) something
towards overhead.

Hmmm, we may have a good idea here... 



Re: 6PAN (was: Half measures all round)

2002-06-05 Thread Steve Simmons

On Tue, Jun 04, 2002 at 04:15:02PM -0400, John Siracusa wrote in
response to me:

  Frankly, I'd argue that nothing in 6PAN ought to be in alpha/beta state.
 . . .

 Nah, I think it's useful to be able to upload unstable versions to 6PAN to
 get the widest possible audience of testers.  It's a lot easier than hosting
 it on a web site or whatever--for both the developer and the users.
  . . .

True, true.  You've convinced me.

  . . . .  With (potentially) many modules having
  the same name and multiple authors/versions, the current complexity of
  the perl /lib tree could increase to the third power.  That's gonna be
  very disk-intensive for every frigging perl script that runs.
 
 See my earlier framework/bundle suggestion.  I don't think it's that bad,
 disk-access-wise.

Sticking just to the disk-intensive issue for a moment --

Due to our judicious writing of modules, libraries, etc, we tend to have
a lot of relatively simple (less than 1000 lines) scripts that do quite
useful things for managing our network (sorry, can't talk about 'em --
proprietary software and all that sort of rot).  Each tool does a direct
use or require of about a half-dozen things, but our master modules and
libs grab quite a few more.  All in all I guesstimate we're loading anywhere
from 30 to 100 modules per `simple' tool run.  With standard INC plus
our two or three more, that's 8-10 directory trees that must be searched.

The current perl module naming schema means (essentially) one probe
per directory per package until the module is found.  Once it's found,
all is done.

With the new one, we seem to have agreed that `most recent' will be
used, not `first found'.  That means that every tree must be probed,
and probed with globs or sub-searches to match the various author,
version, $RELEASE, etc stuff.  Only once we know what's available can
we make an intelligent `most recent' decision.  At a minimum, with
10 items in INC we have a tenfold increase in number of dir reads.
Then add in the checking for author, version, etc ... and we've
got to either embed the author/version/release data into the file
names (the horror, the horror) or open the individual modules and
read the author/version/release data from each one (the horrorer, the
horrorer).  This very strongly argues towards indexes.

My seat-of-the-pants numbers say our current tools (which use DBI to
access databases) spend about 10% of their CPU and wall clock time
in compilation.  This is measured by deliberately running the tools
with an error (bad switch) vs running it correctly and comparing
the times.  Go to a module/version/release search pattern and a
ten-fold minimum increase in search time . . . and you double the tool
run times.  That's intolerable in our environment.  I recently spent
a solid month peeling 9 seconds of per-tool overhead down to 1 second,
getting a documented(!) savings of 16.25 hours *per week* in our
operations group.  The prospect of losing that time again with perl6
is, er, not acceptable.  Again, this argues towards indexes.
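For what it's worth, the measurement itself is easy to reproduce; something
like the following, where the tool name and the bad switch are placeholders
for our proprietary stuff:

    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Wall-clock time of an external command.
    sub wall_time {
        my (@cmd) = @_;
        my $t0 = [gettimeofday];
        system(@cmd);             # exit status doesn't matter, only the time
        return tv_interval($t0);
    }

    # A run with a bad switch dies right after compilation; a normal run
    # does compilation plus real work.  The ratio approximates compile cost.
    my $compile_only = wall_time('perl', 'our-tool', '--no-such-switch');
    my $full_run     = wall_time('perl', 'our-tool');
    printf "compilation is roughly %.0f%% of wall-clock time\n",
           100 * $compile_only / $full_run;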

  The current perl module tree is pretty difficult for the average day to
  day scripter to comprehend.  I'd go even further and say that it's so
  complex as to be unmanageable.
 
 Encapsulating by module would make the structure pretty easy to understand,
 IMO.

Yes, that would help with the understanding.  It would not help with
the search time overhead.

 Disk space and RAM are cheap and getting cheaper.  Think about what you mean
 by gigantic.  Anyone who exhausts memory by running 150 different versions
 of the Tk module at once deserves what they get ;)
 
  On the other hand, there's no reason that shared libs couldn't be stored in a
  standard location and symlinked to.
 
 Yes there is: sanity!  I think that's a foolish optimization.  It makes
 things much too fragile.  Stuff like that already drives me nuts in regular
 Unix OS/app installations.

I picked Tk as my example because it *cannot* work with multiple
versions of the shared libraries (the .so files, not the perl modules)
simultaneously.  Every window, every widget, every Ghu-knows-what down
to a fairly low level must reside in the same context.  If they don't,
subwindows don't get managed properly, widgets from version X can't be
attached to units from version Y, etc, etc.  The same is almost certainly
true of DBI modules, Gnome modules, and many others.  The 6PAN installation
tree and perl6 module builds must be able to deal with this issue.  If
they don't, then Larry's promised ability to load versions simultaneously
is only going to apply to pure perl modules.  But I don't think that's what
Larry means, and I don't think it's a restriction we *have* to live with.

Steve
-- 
STEP 4: PRACTICE THE RULE OF THREE: If an e-mail thread has gone back and
forth three times, it is time to pick up the phone.
  from the 12-Step Program for email addicts,
  http://www.time.com/time/columnist/taylor/article/0,9565,257188,00.html



Re: Half measures all round

2002-06-04 Thread Steve Simmons

On Tue, Jun 04, 2002 at 04:13:36PM +0100, Simon Cozens wrote:

Hmm, June 4.  Independence day, with an off-by-one error.  Must be a C
program involved somewhere.  :-)


In brief, I'm with Damien on this one.  IMHO C++ is an ugly bastard of
a programming language because they cut the cord ineffectively and
much too late in the process.  OOPerl is an ugly bastard of a language.
We have the opportunity to clean that up; we should seize it.

As for CPAN . . . don't get me started.  CPAN is a blessing, but has
become a curse as well.  Its contents need to be razed to the ground
and better/more consistent rules set up for how to do installations
into and out of the standard trees.  If you think this is a bitch now,
just wait until simultaneous per-author and per-version installation
and invocation is allowed as Larry has promised.  I have this horrible
fear of perl module installations becoming a bowl of spaghetti that's
been run thru a blender and mixed with a packet of jello.  Speaking as a
20+-year sysadmin, if CPAN is used for Perl6 with those new features
and without a massive cleanup, I foresee a nightmare.

We have said that perl5 will be *mostly* mechanically translatable into
perl6.  IMHO, that's close enough.  Full backwards compatibility leads
to paralysis or an even further expansion of the complexity and
bizarreness that is all too often perl.  We should draw the line on
translation at a program that will translate p5 source to p6 source.
We should not auto-compile it and tolerate it forever; that way lies
madness.

Sorry to be so pessimistic and negative, but that's my story and I'm
sticking to it.



Re: Half measures all round

2002-06-04 Thread Steve Simmons

On Tue, Jun 04, 2002 at 05:40:08PM +0100, Simon Cozens wrote:
 Steve Simmons:

  We have said that perl5 will be *mostly* mechanically translatable into
  perl6.

 And we shall keep saying this until we believe that it is true?

*grin*

My apologies for using the wrong name, Simon.  Doh!
-- 
STEP 4: PRACTICE THE RULE OF THREE: If an e-mail thread has gone back and
forth three times, it is time to pick up the phone.
  from the 12-Step Program for email addicts,
  http://www.time.com/time/columnist/taylor/article/0,9565,257188,00.html



Re: 6PAN (was: Half measures all round)

2002-06-04 Thread Steve Simmons

On Tue, Jun 04, 2002 at 12:59:38PM -0400, John Siracusa wrote:

 In the spirit of Simon's desire to see radical changes when appropriate, I
 propose the following high-level goals for 6PAN . . .

 1. Multiple versions of the same module may be installed on a single system
 with no possibility of conflicts.  That means no conflicts of any kind,
 whether it be in Perl code, XS-ish stuff, private shared libs, etc.

Larry has stated this as a goal, and in this thread on Tue, Jun 04,
2002 at 10:26:22AM -0700, Larry Wall wrote:

 Yes, I'm already on record that multiple installed versions will
 coexist peacably, certainly on disk, and to the extent that they can
 in memory.  We can't stop two versions of a module from fighting over a
 shared resource, for instance . . .

And that's an issue that needs to be resolved -- modules are going to
have to be able to test for each other's state of being loaded and have
some mechanisms to disambiguate and defer to each other.  I won't go
down the syntactic rathole of what *that* entails, just noting it as an
issue.
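(In perl5 terms, the closest thing we have today is poking at %INC and
$VERSION -- the module name here is just an example:)

    # Has some other module already been loaded, and at what version?
    my $loaded  = exists $INC{'Foo/Bar.pm'};
    my $version = $loaded ? $Foo::Bar::VERSION : undef;
    if ( $loaded and defined $version and $version < 2.0 ) {
        warn "deferring to already-loaded Foo::Bar $version\n";
    }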

Back to John S:

 : 2. Module packaging, expansion, and installation only requires a working
 : install of Perl 6 (unless the module contains non-Perl code, such as C,

 3. Module removal should be as easy as installation.

Yes, yes.

 1a. Modules may be use-ed in several ways (syntax ignored for now):

I addressed a few of these in RFC78 and quite a few more in an as-yet
unpublished rewrite of it; Larry mentioned a line or two of syntax in
a recent email that's not terribly dissimilar.

All the things you specify are relatively easy in the perl world.  It's
mapping them into filenames and directories that @INC can use that
gets, er, interesting.  However . . .

 1b. 6PAN modules comply with an informal contract to maintain
 backward-compatibility within all N.MM versions, where N is constant. . .

IMHO it should be a stated requirement, just as one needs a Makefile.PL.
And while one person's tweak is another person's incompatible blotch, we
ultimately have to defer to `in the opinion of the author.'


On Tue, Jun 04, 2002 at 10:11:40AM -0700, David Wheeler responds to
John's stuff above:

 This might be asking too much -- it's not very perlish, in the sense of
 TIMTOWTDI. It might make sense for DKs, but different people may want to use
 the conventions they're comfortable with. Perl is there for you to create
 applications (and APIs) the way you want, not the way the gods demand.

I must respectfully disagree.  Perl has rules, and strict ones -- and
more than just the syntax and the semantics.  Anyone can *change* perl's
syntax, but they have to get buy-in from a fairly large and conservative
community to get it incorporated into perl.  Why should we not be as
careful about 6PAN?

We could take this a step further -- let CPAN be the perl equivalent of
Sourceforge, but hold 6PAN to a higher standard.  TIMTOWTDI.

 One thing I think is as important -- or perhaps more important -- is to
 enforce the presence of unit tests. There are a lot of modules on the CPAN
 that have no tests, and most of them suffer for it.

Yes, but . . .

 It shouldn't be required that all tests pass, however. A statement showing
 what platforms they pass on and what platforms they don't at the top of the
 download page would be good enough. But the tests have got to be there.

Actually, for 6PAN I think they should have to pass.  And maybe we
need a bug submission setup, and status checks, and . . . OK, OK, I'll
stop now.  They're nice ideas, but who bells the cat?  Then again, if
Tim O'Reilly wants to fund me I'd be happy to do it.  :-)

On Tue, Jun 04, 2002 at 01:21:26PM -0400, John Siracusa wrote on
the same thread:

 Heh, I was going to suggest that new minor-version 6PAN submissions
 automatically have all the earlier minor-version test suites run against
 them before allowing them to go into the archive... :)

Yep, that too.  Which means . . .

On Tue, Jun 04, 2002 at 10:57:15AM -0700, David Wheeler wrote:
 On 6/4/02 10:21 AM, John Siracusa [EMAIL PROTECTED] claimed:

 Hmmm...perhaps as a warning:
 
   All regression tests not passed. Do you still want to upload this module?

s/Do you still want to upload this module?/Upload rejected./

On Tue, Jun 04, 2002 at 10:26:22AM -0700, Larry Wall wrote:

 : 3. Module removal should be as easy as installation.
 
 Fine.  There ought to be something in the metadata that says, "This version
 of this module hasn't been used since 2017."  Then you can clear out the
 deadwood easily.  'Course, disk space will be a little cheaper by then, so
 maybe we'll just hang onto everything forever.

Nu?  If you mean used in the sense of 'use foo', that would imply all
running programs would have to write some sort of usage db, or touch
a global file, or a lot of other things that I don't think are going to
fly.

 : 1c. Distinctions like alpha, beta, and stable need to be made
 : according to some convention (a la $VERSION...perhaps $STATUS?)
 
 Can probably burn that 

Re: RFC on Coexistence and simultaneous use of multiple module versions?

2001-02-15 Thread Steve Simmons

Paul Johnson wrote:

 Has anyone considered the problems associated with XS code, or whatever
 its replacement is?

Pardon my ignorance, but what's XS code?



Re: RFC on Coexistence and simultaneous use of multiple module versions?

2001-02-15 Thread Steve Simmons

Many thanks to all for the pointers.

Paul Johnson wrote:

 I don't think any proposal of this nature would be complete without a
 consideration of these aspects.

Agreed.



Re: RFC on Coexistence and simultaneous use of multiple module versions?

2001-02-14 Thread Steve Simmons

On Fri, Jan 26, 2001 at 02:08:01PM -0600, Garrett Goebel wrote:

 Discussion of RFC 271 and 194 on pre and post handlers for subroutines
 reminded me of Larry's desire for Perl 6 to support the coexistence of
 different versions of modules.
 
 Besides http://dev.perl.org/rfc/78.pod, are there any RFC's which directly
 or indirectly relate to this?

Speaking as the author of RFC78, my official answer is ``I don't think
so.''  :-)  And RFC78 doesn't go as far as Larry has promised Perl6 will.

Advance apologies for being away so long; after a month I assumed things
had been wrapped up and there was no point in trying to catch up on the
old perl6-language mail.  Silly me.  Then I got a note from Mike Schwern
asking one of those ugly questions that spins out of this whole ugly
problem.

Sigh.

I believe that the versioning problems are solvable without doing major
damage to the existing modules structure.  This is detailed in the last
version of the RFC, but I am not certain if that version is available at
www.perl.org/perl6 - it is giving `access denied' at the moment.  The
last version is available at http://www.nnaf.net/~scs/Perl6/RFC78.html
(my version 1.3, stated in the text); I will update the perl.org one
once I can see if it's behind the times.

Shortly after the posting of RFC78, Larry replied:

 Quoting RFC78

  Note that it may not be possible to satisfy conflicting requests.  If
  module CA and module CB demand two different versions of the same
  module CC, the compiler should halt and state the module conflicts.
  
 Pardon me for sniping at a great RFC, but I already promised the CPAN
 workers that I'd make that last statement false.  There's no reason in
 principle why two modules shouldn't be allowed to have their own view
 of reality.  Just because you write Foo::bar in your module doesn't mean
 that Perl can't know which version of Foo:: you mean.

[[ . . . other details elided . . . ]]

Version 2.0 of RFC78 would have been an attempt to address the items
Larry raised.  It stalled out because (dammit), it's *hard*.  But it
seems the issue is still open, and I've had a few more ideas.  This
note lays out some issues and ideas.  If there's good discussion and
any sort of near-consensus or coherent streams of thought, it'll get
rolled up into RFC78 2.X.

BUT: That's essentially the same comment I made Aug 9, and discussion
ground to a halt about four messages later.  So if you want me to
work, you gotta respond.  :-)

Read http://www.nnaf.net/~scs/Perl6/RFC78.html to see the starting
point.  And remember that all of this below is open to change if
anybody has a better idea.

Now that you've read RFC78, throw much of it out.  We'll retain
the ideas, but the current (perl5.6) version rules will continue
as they currently do.  In addition, I propose a new mechanism below.

Our goals are to do that while allowing:

  o  At load time for a module, a programmer must be able to specify
 vendor, author, and version on a per-module basis.  
  o  At execution time, a programmer must be able to specify vendor,
 author, and version for use in a script for when multiple versions
 may be present.
  o  There must be a clean, clear mechanism to install these modules
 by vendor/author/version without requiring the module actually
 be loaded (parsed).
  o  At load time, modules must be able to influence their shareability.

RFC78 proposes a mechanism for specifying versions down to a very
fine grain of detail.  I propose that we retain the current perl
syntax/search model as it works today, but add an alternative form:

  use module (version; author; vendor);

Version numbers and search rules are as defined in RFC78, null means
'first found'.

Author is a comma- and/or whitespace-separated list.  Commas separate
preference order groups, whitespace separates no-preference order
groups (similar to version, see RFC78).  If the author field is null
the author list is used.  The default author list is null.  Pragmas
to manipulate the author list are needed.

Vendor works like author in most ways.  However, the default vendor
list is (PERL, site), consisting of the current perl
and site library trees.

Precedence and other rules are needed so that this can be understood
and so we don't break existing functionality:

Search precedence is by vendor, by author within vendor, by version
within author.

Author-specific modules are installed by author name within vendor
trees.

If we grant these rules, *all current CPAN functionality and perl
installations are preserved.*  IMHO this is a huge win.
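To show the precedence rules aren't hand-waving, here's a plain perl5
sketch of the resolution step (the data layout is invented for the
example, not part of the proposal):

    use strict;
    use warnings;

    # Pick by vendor preference first, then author within vendor, then
    # highest version within author.
    sub pick_module {
        my ($candidates, $vendors, $authors) = @_;
        for my $vendor (@$vendors) {
            for my $author (@$authors) {
                my @hits = sort { $b->{version} <=> $a->{version} }
                           grep { $_->{vendor} eq $vendor
                                  and $_->{author} eq $author } @$candidates;
                return $hits[0] if @hits;
            }
        }
        return undef;
    }

    my $best = pick_module(
        [ { vendor => 'PERL', author => 'CONWAY', version => 1.3 },
          { vendor => 'site', author => 'CONWAY', version => 2.0 } ],
        [ 'PERL', 'site' ],   # default vendor list
        [ 'CONWAY' ],
    );
    # $best is the PERL/CONWAY/1.3 entry: vendor preference beats version.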


A first cut at vendor/author pragmas:

These manipulate the vendor list:

  use vendor 'foo'; # Push vendor name 'foo' onto the vendor list
  use popvendor;# Pop top of vendor list and discard
  use popvendor 'foo';  # Remove the topmost instance of 'foo'
  use rmvendor 'foo';   # Remove all instances of 'foo'
  use rmvendor '';  

Re: RFC 120 (v2) Implicit counter in for statements, possibly $#.

2000-08-29 Thread Steve Simmons

On Tue, Aug 29, 2000 at 09:15:35AM +0100, John McNamara wrote:
 At 13:11 28/08/00 -0400, Steve Simmons wrote:
 To tell the truth, this third item should probably become
 a separate RFC, and if you'd like to simply say one is forthcoming,
 that'd be fine by me.
 
 What I really want to do is write a summary, get some consensus and redraft 
 the RFC. I'll do this in the next few days.

 As far as I can see the current consensus is as follows:
  1. Implicit variable: nice but not really worth the trouble.
  2. Explicit variable between foreach and the array: might conflict
 with other proposals.

This doesn't conflict with the idea of assigning multiple variables
per loop execution as long as explicit grouping of the vars assigned
to is required.  As examples:

foreach ( $var, $var2 ) $i ( @arr ) { }  # OK
foreach $var, $var2, $i ( @arr ) { } # Not OK
foreach $var, $var2 $i ( @arr ) { }  # Not OK
foreach ( @list ) $i ( @arr ) { }# OK
foreach ( @list, $var ) $i ( @arr ) { }  # OK
foreach @list, $var $i ( @arr ) { }  # Not OK

Some underlying logic for this:

o  it allows one to use the original $_ in the body while still
   permitting use of an index var, ie
   foreach () $i { something( $_, $i ) }
o  any use of the index var which is inadvertently given to
   pre-perl6 code will bomb with a syntax error

If I get up and write that RFC, that's how I'll propose it be done.
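For contrast, the closest perl5 gets today is an explicit index loop,
which loses the automatic aliasing the proposal would give you:

    my @arr = qw(a b c);
    for my $i (0 .. $#arr) {
        local $_ = $arr[$i];        # simulate the implicit alias by hand
        print "element $i is $_\n";
    }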



Re: Do we really need eq?

2000-08-28 Thread Steve Simmons

I'd like to see eq and its brethren retained, as dammit there are times
I want to know (-w) if numbers are turning up when there should be
words and vice-versa.  However, spinning off of something Randal wrote:

 Yes, but what about:
 
 $a = '3.14'; # from reading a file
 $b = '3.1400'; # from user input
 
 if ($a == $b) { ... } # should this be string or number comparison?

Since we have distinct comparison operators for strings and numbers, and
since we have explicit casting for those who wish it, IMHO one can do

if ( int( $a ) == int( $b ) ) 

to force int (or float) compares, and

if ( "$a" eq "$b" )

to force string compares.

IMHO the code

$a = '3.14'; # from reading a file
$b = '3.1400'; # from user input
if ($a == $b) { ... }

should see the two args being tested in numeric context, do the numeric
casting, get floats, and do a floating compare.  Durned if I know what it
does now.
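For the record, here's what the two operators actually do with those
values in perl5 today:

    my $x = '3.14';       # from reading a file
    my $y = '3.1400';     # from user input
    my $numeric = ($x == $y) ? "equal" : "different";   # numeric context
    my $string  = ($x eq $y) ? "equal" : "different";   # string context
    print "numeric: $numeric, string: $string\n";
    # prints: numeric: equal, string: different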



Re: RFC 143 (v1) Case ignoring eq and cmp operators

2000-08-28 Thread Steve Simmons

On Thu, Aug 24, 2000 at 03:40:00PM -, Perl6 RFC Librarian wrote:
 This and other RFCs are available on the web at
   http://dev.perl.org/rfc/
 
 =head1 TITLE
 
 Case ignoring eq and cmp operators

IMHO this problem is better solved by using =~ and its brethren,
which already allow you to do the right thing for a far wider set
of cases than just this one.
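For example, all of these already work and cover ordering as well as
equality:

    my ($s1, $s2) = ('Perl', 'PERL');
    print "same\n"  if lc($s1) eq lc($s2);        # case-blind equality
    print "match\n" if $s1 =~ /^\Q$s2\E\z/i;      # same test via the regex engine
    my $order = lc($s1) cmp lc($s2);              # case-blind ordering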



Re: Permanent sublists (was Re: Language WG report, August 16th 2000)

2000-08-16 Thread Steve Simmons

On Wed, Aug 16, 2000 at 02:38:33PM -0400, Uri Guttman wrote:

 i see problems with overlapping areas. I/O callbacks fall under both io
 and flow IMO. some of the error handling like dying deep in eval and
 $SIG{DIE} also fall under error and flow.

This is true, and inevitable.  But IMHO it'd be a helluva lot easier to
follow two lists (error, flow) than the uber-language-list.  So while
I don't think sublists are a perfect solution, they're a better solution
than not sublisting.

Wasn't somebody going to set up a news server?  Newsgroups and
crossposting, yeah, that's the ticket.



Unify the Exception and Error Message RFCs?

2000-08-15 Thread Steve Simmons

On Sun, Aug 13, 2000 at 07:35:06PM -0700, Peter Scott wrote:

 At 03:30 PM 8/13/00 -0500, David L. Nicol wrote:

 Whose RFC deals with this?

 63, 70, 80, 88 and 96.  There would appear to be a groundswell of interest :-)

Well yes, but they represent three authors with (as best I can tell)
pretty similar views of what needs to be done.  With advance apologies for
the over-simplification, these RFCs very briefly summarize to:

   63 - Error handling syntax to be based on fatal.pm, a Java-ish
try/catch (Peter Scott).
   70 - Allow full error handling via exception-based errors based on
expansion of and addition to fatal.pm (Bennett Todd)
   80 - Builtins should permit try/throw/catch as per Java/fatal.pm
style (Peter Scott).
   88 - Proposes error handling via mechanisms of try, throw, except,
etc. (Tony Olekshy).
   96 - Proposes establishing a base class for exception objects,
which could work with either of the two groups above and
might subsume RFC3 as well (Tony Olekshy)

I'd like to see these thrown into a pot (possibly including RFC3 (Brust)
as well), and a single proposal pulled out with

  o  a unified proposal for a general try/throw/catch error handling
 mechanism for perl
  o  a recommended object-based implementation of it
  o  recommended areas where it should be applied in the perl core
  o  a mechanism allowing programmers to take over error message
 production for the above group of items

and a separate proposal for

  o  a mechanism allowing programmers to take over error message
 production for other types

I'm assuming that not all errors would move to the object or try/catch
sort of arrangement; there are lots of times that return values or
unrecoverable errors are all you really need.  For the last, one should
use `eval' if you really need to catch them; but an ability to override
the error message may well be sufficient in most cases.  And should the
first proposal fail, the second becomes even more critical.
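(The `eval' escape hatch I mean is the usual perl5 idiom --
risky_operation() here is just a stand-in:)

    my $result = eval { risky_operation() };
    if ($@) {
        warn "risky_operation failed: $@";
    }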

IMHO trading six RFCs for two will greatly improve the chance of passing.



Re: RFC 83 (v1) Make constants look like variables

2000-08-14 Thread Steve Simmons

On Sat, Aug 12, 2000 at 06:18:08AM +1000, Damian Conway wrote:

 Please, please, please, PLEASE, let us not replicate the debacle that is
 C++'s const modifier!

It doesn't feel like a debacle to me, it feels like it put the control
in the programmer's hands.  Yes, the syntax is often unwieldy -- but IMHO
that's because when one marries the idea of constants to complex data
structures and references, the world becomes an unwieldy place.

On the other hand, many of the uses of constants are better-satisfied
by having hidden or inaccessible members.  If constants were limited
to `core' perl data types (scalars, lists, arrays and hashes) and
*completely* freezing the data element, that'd be enough for me.  I lust
to be able to do things like

if ( $HOME =~ /indiana/ ) {
local $::PI : const = 3; do_trig();
} else {
local $::PI : const = 3.14159...; do_trig();
}

(I'm from Indiana and claim the right to use that joke without being
insulting).  Constants are good, and I'm damned tired of fixing code like

if ( $PI = 3  ) {
# We're not in Kansas any more, Toto.
}

Constants have a place, and belong in perl.
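For the simple scalar case, perl5's `use constant' already gets most of
the way there (it won't do the block-scoped Indiana trick above, but it
is genuinely read-only):

    use constant PI => 3.14159;
    print "circumference: ", 2 * PI * 10, "\n";
    # PI = 3;   # refuses to compile rather than silently reassigning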

Now, one may argue that my suggestion was too flexible or too broad.
But the initial proposal left an awful lot of gray areas.  If my proposal
is felt to be too unwieldy, I don't have a problem with that.  Let's
either

  o  decide where to draw the line, define it cleanly, and say
 that more complex usage is better done with objects; or
  o  permit both complex constants and object-based manipulations
 on the principle that TMTOWTDI.

I lean towards the latter (complex constants) myself, but wouldn't
go off and storm the barricades to get it.  On the other hand, I
*would* campaign strongly for constant scalars, lists, arrays, hashes
and refs.  If the only way to get them meant `completely constant',
ie, no addition or removal of members, no re-orderings, etc, that's
fine -- composition of complex constants with complex vars would let
one do most of what was suggested in my longer posting, and I'd be
happy to say the odder features should be done via object methods.



RFC 78 and shared vs unshared modules/data

2000-08-11 Thread Steve Simmons

On Thu, Aug 10, 2000 at 05:46:14PM -0400, Bennett Todd wrote:

 Today there's no difference. If the proposal under discussion were
 to pass, and packages' namespaces were to become local to the
 namespace where the "use" occurred, then perhaps main::whatever
 could be a common, stable, global that they could use for these rare
 variables that really need to be common from multiple invokers.

There's a strong implication here that we've just made package namespaces
hierarchical.  I don't disagree (how's that for a wimpy statement?), but
think we need to go further.  (brain racing) Hmmm ... maybe this is
a good idea ...

How does this sound --

Modules (packages) should have the option of declaring themselves to be
either rooted or relative in the name space.  Thus a module which wanted
to guarantee it would have one and only one copy in residence could
declare itself

   # module foo.pm
   package foo; # preserves current usage
   my $var; # only one copy ever exists no matter how many
# modules load foo

while a relative would do

   # alternative version of foo.pm
   package foo;
   my $var; # one copy per instance of loading of foo

In either case, main would load it with a simple

   use foo;

and the package has control over what is shared and what is not.  One
could even have mixes of shared and unshared with

   package foo;
   use  foo_shared;
   my $var = "x";   # per-instance copy set
   $::foo_shared::var = $var;   # cross-instance copy set
   local $::foo_shared::var as $var;# wow!

Yes, I like this.
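Note that the `rooted' case is just what perl5 gives today, since
use/require loads foo.pm exactly once per interpreter; a minimal sketch:

    # foo.pm -- current perl5 behaviour, which the rooted form would keep
    package foo;
    my $var = 0;          # one copy ever, no matter how many modules use foo
    sub bump { return ++$var }
    1;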



Re: RFC 78 (v1) Improved Module Versioning And Searching

2000-08-10 Thread Steve Simmons

On Wed, Aug 09, 2000 at 05:53:44PM -0400, Ted Ashton wrote:

 I'll take that as my cue ;-).

Ah, nothing like a man who knows when to pick up his cues.

 *shudder*  This whole business is getting pretty scary . . .
  [[ discussion of ugly implications elided ]]

The short answer is that (assuming I understand Larry's statement)
he'd like these issues addressed.  If the resulting `best' answer
is so complex and so ugly that he decides it's a bad idea, that's
fine -- but (if I recall correctly) tcl has addressed this problem
and come up with workable solutions.  I'm not intimately familiar
with them, but will get so.



Re: RFC 78 (v1) Improved Module Versioning And Searching

2000-08-10 Thread Steve Simmons

On Thu, Aug 10, 2000 at 11:01:39AM -0400, Bennett Todd wrote:

 Rather than proliferating the number of keywords eaten with all
 these *ref variants, this sounds like a useful place for returning
 an object with a default stringification of the class:
  . . .
 Ref RFC 37, RFC 73.

I have no problem with this or any other solution that gets the
data back.  I'll add a comment to RFC78 that should those RFCs
prevail, we'll follow them.

For the record, I prefer hashes for that sort of thing too.  But
perl has traditionally done ordered list returns, and I followed in
that vein.



My opposition to RFC20, and an alternative

2000-08-10 Thread Steve Simmons

Overloading an existing operator such that it changes the behavior of
existing code is evil, evil, evil.  Yes, I know it can have some
wins, and I agree they're big ones.  But no win is worth having to
debug this (admittedly contrived for the example) situation:

if ( ( $ares = A() ) && ( B() ) ) {
# A and B succeeded, go do something
} elsif ( $ares ) {
# A succeeded but B failed, do B cleanup
} else {
# Do B differently because A failed
}

If the overload means that A() and B() will now both be processed
no matter what, that block of code is going to go south in a deeply
undetectable way.  And finding the bug is going to be bloody hell.  This
argues that overloading should be restricted in its scope.
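To be concrete, this is the behaviour that block depends on today (A and B
are stand-ins for whatever the real code calls):

    sub A { print "A called\n"; return 0 }
    sub B { print "B called\n"; return 1 }

    my $ares;
    if ( ( $ares = A() ) && B() ) {
        print "both succeeded\n";
    } elsif ( $ares ) {
        print "A succeeded, B failed\n";
    } else {
        print "A failed; B was never even called\n";
    }
    # prints: A called / A failed; B was never even called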

Now consider some poor maintenance coder.  He's looking at his main
and a module, trying to figure out why something doesn't work.  Both
contain a test like

if ( A() && B() ) . . .

In main it works one way, in the module it works the other.  Is it
reasonable to expect that?  I submit not.

With humble acknowledgement of my debt to Socrates, I submit that
this dilemma shows that either solution - universal overloading or
localized overloading - leads to extreme difficulties in maintenance.

IMHO, overloading is just syntactic sugar - `it looks cool.'  If
we need an &&/|| operator that always tests both sides, let's
make them &&& and |||.  That's a tiny new thing in perl, while
avoiding deep, deep problems introduced from overloading standard
operators.  If someone wants a &&/|| that does something wildly
different, then write a module-specific or general function for
it.

Overloading existing operators is evil, evil, evil.  Operators are
not class-specific functions.  They're core language constructs
and the programmer should be able to rely on them to remain fixed
in function.

And in conclusion I'm opposed to this.  What I tell you three**3
times is true**3.  :-)



Re: RFC 78 (v1) Improved Module Versioning And Searching

2000-08-10 Thread Steve Simmons

 Perhaps Damian's want() (RFC 21) can be used to allow either return
 type? 

Yes indeed.

 Assuming that's adopted, of course.

Sure looks to me like a good idea; I hope it does.



Re: Things to remove

2000-08-09 Thread Steve Simmons

On Tue, Aug 08, 2000 at 06:34:19PM -0500, Mike Pastore wrote:
 Perl++

perm -- good old hairy perl, finally under control.

Running and ducking,
 
 --Steve



Re: RFC 78 (v1) Improved Module Versioning And Searching

2000-08-09 Thread Steve Simmons

On Wed, Aug 09, 2000 at 10:44:03AM -0700, Larry Wall wrote:

 : Note that it may not be possible to satisfy conflicting requests.  If
 : module CA and module CB demand two different versions of the same
 : module CC, the compiler should halt and state the module conflicts.

 Pardon me for sniping at a great RFC . . .

No problem, I don't feel sniped at.

  . . . but I already promised the CPAN
 workers that I'd make that last statement false.  There's no reason in
 principle why two modules shouldn't be allowed to have their own view
 of reality.  Just because you write Foo::bar in your module doesn't mean
 that Perl can't know which version of Foo:: you mean.

Near the end of the RFC this topic does come up; see it directly at
http://www.nnaf.net/~scs/Perl6/RFC78.html#Alternative_Idea_Module_Versio.
That section was much longer in my unreleased drafts, but did include
some discussion on disambiguation when multiple versions of the same
module are loaded.  In the RFC I wrote off this idea with ``In This Way
Lies Madness, but perl has done stranger things.''

If we want to open that can of worms I'm willing to pitch in.  I'll even
continue on with the task of maintaining the RFC, or starting another
if appropriate.  But I'd barely started thinking about this issue with
respect to perl until a couple of days ago, so folks who are much more up
to speed on language/inheritance/etc are going to have to carry the load.

Turning back to some specific comments made by Larry and by Dan Sugalski:

LW The larger view of it is how to support package aliasing.  A package
LW name is essentially the name of a public interface.  Suppose you
LW have one interface, but two or more different implementations of
LW that interface by different people.  Each might have their own
LW version sequence.

Package aliasing should certainly be allowed; the RFC already contains
a suggestion for it:

use foo <3.0  as foo_old;
use foo >=3.0 as foo;
use foo >3.0  as new_foo;

Using the alias gets you what you need.

Your comment above implies that CPAN may (will be allowed to) contain
modules of the same name and same version by different authors.  Is this
correct?  If so (shudder), what happens currently when both are installed?
I presume one simply overwrites some or all of the other.

If that's the intent, then yes, author name must become part of the
package identifier.  Rather than change the meaning of "ref $object"
unexpectedly, I'd add one or more alternate forms of ref.  Off the
top of the head, something like

( $version, $class ) = vref $object;  # ref with version
$version = vref $object;  # scalar context gets version
( $author, $class ) = aref $object;   # ref with author
$author = aref $object;   # scalar context gets author
( $version, $author, $class ) = fref $object;   # get all
# Scalar context gets a string suitable for selecting the
# exact object in use
$fq_name = fref $object;

if ( $fq_name eq fref $other_object )...

LW Another question is whether the two different versions of the "same"
LW module are going to stomp on each other?  (Not in package space, since
LW the packages are distinguished internally, but how about in the
LW filesystem?)

If we include author name along with the version number in the file system
namespace (gawd, I can't believe I just wrote that), then clobbering
won't happen.  IMHO this feature is no more difficult to add to the
install process than adding version numbers; it implies tho that we
need a $AUTHOR var in all modules and to allow for it in module `version'
specifications.  Again, off the top of the head, we could allow a string
(any string that would not be confused with a version number) to indicate
the author:

use foo CONWAY >=1.3  as foo_good;
use foo GATES   1.3  as foo_bad;
use foo * as foo_desperate;

As long as there's no author named `as', we're cool.  I suspect the
proposed syntax in the RFC could be extended to handle this unambiguously
both for run/compile-time module naming and for installation, but need
to think it thru.

DS Does that mean, then, that when module A does a "$C::bar = 1" it
DS affects a different package namespace than module B doing a
DS "$C::bar = 2"?

LW Presumably.

Yes, that's how I'd see it too.  Which creates a real problem for
modules which do things like

package foo;

{
my $hidden_var;

sub new {
if ( $hidden_var ) {
do x;
} else {
do y;
}
}
}

Now there are two copies of $hidden_var while the module clearly expects
only one.  Either this kind of construction will have to be avoided,
or some mechanism developed which allows multiple versions of a module
to somehow share important data which is otherwise hidden by such tricks.

IMHO, module authors should be able to forbid multi-loads across
sufficiently different versions of the module (ie, all 

Re: RFC 73 (v1) All Perl core functions should return objects

2000-08-09 Thread Steve Simmons

I'm pretty much opposed to this idea.  It's pushing OO too far onto perl.



Re: RFC 71 (v1) Legacy Perl $pkg'var should die

2000-08-08 Thread Steve Simmons

On Tue, Aug 08, 2000 at 10:59:40PM -0400, Dan Sugalski wrote:
 On Wed, 9 Aug 2000, Damian Conway wrote:
 
   If you take this, I won't be able to port the forthcoming Klingon.pm
   module to Perl 6!!!
  
  And this would be a bad thing how, exactly? :)
  
  I SHOULD KILL YOU WHERE YOU STAND
 
 But, but... I'm sitting! :-P

I think he means he's going to cut you off at the ankles, regardless
of their actual location.



Re: RFC 37 (v1) Positional Return Lists Considered Harmful

2000-08-06 Thread Steve Simmons

 Functions like stat() and get*ent() return long lists of mysterious
 values.  The implementation is assumedly easy: just push some values
 out of C structs into the Perl return stack.
 . . .
 Firstly, who can remember which one of the stat() return values is
 the atime, or which is the 4th return value of localtime()?  The
 perlfunc documentation makes this difficulty painfully obvious by
 having to list the indices alongside the values.

I must respectfully disagree.  The following line of code was
written by doing `man perlfunc', a few keystrokes to find the
right instance of `  stat ', and a cut and paste:

($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,
$atime,$mtime,$ctime,$blksize,$blocks)
= stat($filename);

For everyone who knows they need to make a stat() call, that
line tells them everything they need to know.

A few more keystrokes changes it to

(undef,undef,undef,$nlink) = stat($filename);

which informs the reader that yes, the program really meant to throw
away everything else from the stat call.  In a well-optimised
compiler, it would even pass only the one data element actually used.

Returning pre-parsed lists is a win, especially if one needs some
large subset of the values returned.  I use this feature regularly,
and would not wish to see it go away.

 Secondly, the current solution is seriously non-extensible.  One
 cannot easily add new return values to the APIs because someone may be
 expecting an exact number of return values, or someone may be
 accessing the values by position from the end of the list.  Obsoleting
 values (or marking them as non-applicable to the current platform) has
 to be done by, for example, returning undef.

The above is a more telling criticism, but I think that `stat()' is a
poor example -- it's not likely to change much.  I'm sure there are
better, but have not bothered to look.  :-)

 =head1 IMPLEMENTATION

I strongly support Damian's suggestion of a hashref, and would go so
far as to say it should be a pseudo-hash, with all the attendant
misspelling protections and speed obtained.

Passing lists around can be expensive, manipulating hashes can be
expensive.  I'd extend Damian's suggestion to allow the programmer
to specify only the sub-data desired, eg:

# undef context, store results in passed hashref

my %data_wanted = ( mode => 0, nlink => 0 );
stat( $filename, \%data_wanted );

This could even be done in a list context, with a possible
implementation suggested:

# List context, return only the two values requested in order

($mode,$nlink) = stat($filename,\('mode','nlink'));
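As supporting evidence that by-name access works fine, the core File::stat
module already wraps stat() this way in perl5 today:

    use File::stat;
    my $filename = shift @ARGV || $0;
    my $st = stat($filename) or die "can't stat $filename: $!";
    printf "%s: %d links, mode %04o\n",
           $filename, $st->nlink, $st->mode & 07777;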



Re: try/catch (Was: Re: RFC: Modify open() and opendir() to return handles)

2000-08-04 Thread Steve Simmons

On Fri, Aug 04, 2000 at 09:10:38AM +0200, Johan Vromans wrote:

 You missed the point.

 If you need 6+ lines of code for each elementary error check, this is
 what is going to happen . . .

You're correct, but that's not what I was suggesting.  The magic words are
`for each elementary error check.'  Many (most?) functions do fine with
a simple error code returned.  But there are some cases where things
go wrong deep inside modules and it's not easy/reasonable/feasible to
return anything meaningful in the error code.  For circumstances like
that, a try/catch mechanism is wonderful.  It's superior to if (eval)
in expressiveness and flexibility as well.

Those who write all their code like this with try/catch (as Johan
suggested):

 try {
open file
while read a record
   process its contents
finalize file processing
 }
 catch (Exception e) {
error message: something went wrong
 }

are also probably writing their (pseudo-perl) code like

if ( ! ( $err = open file ) ) {
error message: something went wrong
} else while ( $err = read a line ) {
if ( $err ) {
error message: something went wrong
} else {
   process it's contents
}
if ( ! $err ) {
   finalize file processing
}
}

Bad code and (in this case) feature abuse are independent of features.
I for one would like a try/catch for the tough cases that need it.
If people want to screw themselves over with abusing it for trivial uses,
that's fine.

PROPOSAL: we should add try/catch to perl, with the caveat that uncaught
exceptions are ignored and an error code returned as per current
functioning.

On Fri, Aug 04, 2000 at 05:09:13AM -0600, Tom Christiansen wrote:
  If you do it the C++ way, you can say:
try {
  first_sub_that_throws_exceptions();
  second_sub_that_throws_exceptions();
} catch {
  it went wrong
}

 How does 'it went wrong' know _which_ of the subs went wrong?
 

As was pointed out elsewhere, the object provided to the catch has
that data.  But that's not really relevant to the clarity, obscurity,
or brevity of code using try/catch.  The following working perl code
has recently been removed from some of our tools:

   die "Couldn't open database\n" if (!$dhb = open_db("bdconfig", get_dbms("Oracle", 
get_host_addr("woody";

This statement appeared in the code twice, widely separated, with
different db/dbms/host but the same error message.  Needless to say,
it leaves you pretty clueless as to what went wrong.  I've directed
our programmers here that such statements should be rewritten to

my $dbh;
my $host = "woody";
my $dbms = "Oracle";
my $db   = "bdconfig";
my $why  = "why you want it";
my $addr;
if ( ! ($addr = get_host_addr( $host )) ) {
    die "Couldn't find host '$host' for $why.\n";
} elsif ( ! ($dbms = get_dbms( $dbms, $addr )) ) {
    die "Couldn't access DBMS '$dbms' on $host for $why.\n";
} elsif ( ! ($dbh = open_db( $db )) ) {
    die "Couldn't access $dbms db '$db' on $host for $why.\n";
}

or encapsulated in a function which returns an appropriately
informational error string:

if ( "" ne ( $errstr = opendb( "bdconfig", "woody", "Oracle", \$dbh ) ) ) {
print STDERR $errstr;
}

Both the large code block in the first and the implementation of opendb
(not shown) are pedantic as all hell, but now when the NOC pages me at
3AM, I know where to start looking.  I submit that both the above call
to opendb with a string error code and the try/catch version:

try {
$db_handle = opendb "bdconfig", "woody", "Oracle" ;
} catch {
it went wrong
}

are about equally readable, and the try/catch is ultimately more powerful.
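(And for completeness, today's perl5 spelling of that last block, using
the same hypothetical opendb wrapper as above:)

    my $db_handle = eval { opendb("bdconfig", "woody", "Oracle") };
    if ($@) {
        print STDERR "database setup failed: $@";
    }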

Jeez, don't tell me I've just started writing another RFC  <smiley/2>.
--
   ``Just imagine we are meeting the aliens for the first time.  Most
people would just shoot them to see how many points they are worth.''
  Simon Cozens in msg [EMAIL PROTECTED]



Re: RFC stuff

2000-08-03 Thread Steve Simmons

On Thu, Aug 03, 2000 at 12:51:10PM +1000, [EMAIL PROTECTED] wrote:

 Programmer Modifiable Warnings and Error Messages
   "Brust, Corwin" [EMAIL PROTECTED]
 . . .

 Removing/fixing $[line noise here] variables
   Corwin Brust [EMAIL PROTECTED]

That second is actually mine.  Barring people harassing me too much
to actually work at work, I'll have a rough cut out this afternoon.

At the moment, the suggested implementations for these two things 
overlap a lot, so Corwin and I will stay in close touch.



Recording what we decided *not* to do, and why

2000-08-03 Thread Steve Simmons

On Thu, Aug 03, 2000 at 11:40:24AM +0900, Simon Cozens wrote:

 On Wed, Aug 02, 2000 at 07:34:36PM -0700, Nathan Wiger wrote:

   That Perl should stay Perl

  Do we need an RFC for this? Seems like this is more of a "guiding
  concept" that should be intergrated into everything. Just my opinion.

 Then we need to enshrine it. I'll cook something up soon.

This idea is both important and more general.  If we go thru a huge
discussion of, say, multi-line comments and decide *not* to do it,
we don't want to have the whole thing repeated with perl 6.1, 7.0,
etc, etc.  When something reaches RFC stage but is rejected, part of
the process should include archiving the gist of the arguments for
and against.  IMHO the RFC editor should be responsible for this.



Re: Removing/fixing $[line noise here] variables

2000-08-02 Thread Steve Simmons

On Wed, Aug 02, 2000 at 03:34:56PM -0400, John Porter wrote:

 Brust, Corwin wrote:

  I want perl's error (and warning) messages to be specific to each program I
  write.
 
 Isn't this covered by locales?

Completely different beast.  I don't claim to fully understand locales,
but that's not what they're for.



Re: RFC: Filehandle type-defining punctuation

2000-08-02 Thread Steve Simmons

On Wed, Aug 02, 2000 at 11:46:15AM -0700, Peter Scott wrote:

 =head1 TITLE
 
 Filehandles should use C<*> as a type prefix if typeglobs are eliminated.

I could go for this, given the `if typeglobs are eliminated' caveat.