Re: Perl 6 Summary for week ending 20020728

2002-08-01 Thread Russ Allbery

pdcawley <[EMAIL PROTECTED]> writes:

> Bugger, I used L<...|...> and pod2text broke it.
> http:[EMAIL PROTECTED]/msg10797.html

perlpodspec sez you can't use L<...|...> with a URL, and I'm guessing that
I just didn't look at that case when writing the parsing code in pod2text
because of that.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: !< and !>

2001-09-02 Thread Russ Allbery

Bart Lateur <[EMAIL PROTECTED]> writes:

> Why is it ">=" and not "=>"?

Because in English, it's "less than or equal to" not "equal to or less
than," I presume.

> Simply trying to remember the order of characters might be (a bit of) a
> pain. That problem doesn't exist with "!<" and "!>".

Every other programming language I've ever seen uses >= and <=.  I think
adding additional comparison operators not found in any other language and
identical to (and harder to type than!) existing operators is a really bad
idea.
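
(For plain numeric comparisons the two really are the same test; a trivial
check, with made-up values:)

    for my $pair ([1, 2], [2, 2], [3, 2]) {
        my ($x, $y) = @$pair;
        my $not_less = !($x < $y) ? 1 : 0;    # what !< would spell
        my $ge       = ($x >= $y) ? 1 : 0;    # what we already have
        print "$x vs $y: ", ($not_less == $ge ? "same" : "different"), "\n";
    }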

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: !< and !>

2001-09-01 Thread Russ Allbery

Sterin, Ilya <[EMAIL PROTECTED]> writes:
>> From: Russ Allbery [mailto:[EMAIL PROTECTED]]

>> How is !< different from >=?

> It's just more syntax just like foo != bar 
> is the same as (foo > bar || foo < bar).

> It might prove convenient to express the expression.

It's the same number of characters.  How can it be more convenient?

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: !< and !>

2001-09-01 Thread Russ Allbery

raptor <[EMAIL PROTECTED]> writes:

> I was looking at Interbase SELECT syntax and saw these two handy
> shortcuts :

>  = {= | < | > | <= | >= | !< | !> | <> | !=}

> !<  and !>

How is !< different from >=?

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: ~ for concat / negation (Re: The Perl 6 Emulator)

2001-06-21 Thread Russ Allbery

Simon Cozens <[EMAIL PROTECTED]> writes:
> On Thu, Jun 21, 2001 at 10:31:22PM +0100, Graham Barr wrote:

>> We can have a huge thread, just like before, but until we see any kind
>> of update from Larry as to if he has changed his mind it is all a bit
>> pointless.

> For what it's worth, I like it.

So do I, actually... it's sort of growing on me.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Python...

2001-06-05 Thread Russ Allbery

David Grove <[EMAIL PROTECTED]> writes:

>> Perl is far more practical than experimental.

> Not at the moment. That's the problem.

Pretty much everything proposed, even in the wildest RFCs during the
brainstorming phase, was still stuff that's been done elsewhere by other
languages.  That's the practical vs. experimental distinction that I'm
drawing.  I realize that you don't like the direction that Perl 6 design
is heading, but it's still not heading towards being an experimental
language.  I've seen some *real* experimental languages; they're a lot
more unconventional.

You can still trace nearly everything that was proposed back to C, Lisp,
or Generic Object-Oriented Language, if not in inspiration then at least
in fundamental similarities.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Python...

2001-06-03 Thread Russ Allbery

Vijay Singh <[EMAIL PROTECTED]> writes:

> I always expected Perl to be leading the way, *the* language that broke
> new ground..."where only camels dared to tread..."

Er... that strikes me as a strange expectation.  I can't think of much in
Perl that hasn't appeared elsewhere earlier.  Perl makes a lot of already
developed ideas practical, but breaking new ground isn't really its forte.

If you want to look at languages that are breaking new ground, I recommend
Objective Caml, or Haskell, or Mercury, or even Eiffel.  Languages like
Perl and Python are really almost entirely just attempting to make
practical the ideas already explored in other practical and experimental
languages.

Perl is far more practical than experimental.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Damian Conway's Exegesis 2

2001-05-15 Thread Russ Allbery

Simon Cozens <[EMAIL PROTECTED]> writes:

> Personally, I'd rather not deal with a toke.c that knows more of
> /usr/dict/words than I do.

use thesaurus;

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Curious: -> vs .

2001-04-25 Thread Russ Allbery

Nathan Wiger <[EMAIL PROTECTED]> writes:

>- C compatibility. One of Perl's great strengths
>  over other HLL's is C compatibility. Though
>  this is still arguably not as good as it can be, 
>  why distance ourselves from the language we're
>  trying to interact with?

You're thinking of objects as references and references as akin to
pointers, which makes sense because that's how they're implemented in Perl
5.  If you think of objects as their own entities, however, or think of
references as something other than pointers (in particular, something that
doesn't require explicit dereferencing), then using . to access object
members is entirely compatible with C.

I tried to make this point before, but I don't think people understood
what I was getting at.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: s/./~/g

2001-04-24 Thread Russ Allbery

David M Lloyd <[EMAIL PROTECTED]> writes:
> On 24 Apr 2001, Russ Allbery wrote:

>> It seems relatively unlikely in the course of normal Perl that you're
>> going to end up with very many references to objects.

> Well, right now in Perl, an object *is* a reference.

Precisely.  So there's almost never any reason to create a reference to an
object, which would be a reference to a reference, and for those rare
circumstances the existing dereference syntax is probably adequate.

> Maybe you want to pass around a reference to @myarray because it
> contains a billion elements, or is tied to a file, or something;

I would presume that objects will still be implemented as references under
the hood so that passing them around is efficient no matter what they
contain.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: s/./~/g

2001-04-24 Thread Russ Allbery

David M Lloyd <[EMAIL PROTECTED]> writes:
> On 24 Apr 2001, Russ Allbery wrote:

>> The switch from -> to . makes perfect sense from a C perspective if we're
>> turning objects into first-class entities rather than pointers; think
>> about a struct versus a pointer to a struct.
>> 
>> -> makes you remember that things are pointers.

> What's wrong with using both?  You could use -> if you're working with a
> reference to an object, and you could use . if you're working with the
> object itself.

It seems relatively unlikely in the course of normal Perl that you're
going to end up with very many references to objects.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: s/./~/g

2001-04-24 Thread Russ Allbery

Branden <[EMAIL PROTECTED]> writes:

> 1) Use $obj.method instead of $obj->method :

> The big question is: why fix what is not broken? Why introduce Javaisms
> and VBisms to our pretty C/C++-oid Perl? Why brake compatibility with
> Perl 5 code (and Perl 5 programmers) for a zero net gain?

$obj.method isn't a Java-ism; it's used by both C++ and by Simula (for
class variables), and in C for struct members, which given the origins of
those languages means I wouldn't be surprised if it were in Algol.

The switch from -> to . makes perfect sense from a C perspective if we're
turning objects into first-class entities rather than pointers; think
about a struct versus a pointer to a struct.

-> makes you remember that things are pointers.
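
A small Perl 5 sketch of the parallel (the dotted form is the proposed
syntax, so it only appears in the comment):

    package Counter;
    sub new  { my ($class) = @_; return bless { count => 0 }, $class }
    sub bump { my ($self)  = @_; return ++$self->{count} }

    package main;
    my $c = Counter->new;
    print $c->bump, "\n";   # Perl 5: the arrow, as with a pointer to a struct
    # Under the proposal this would read $c.bump, treating the object as a
    # first-class entity rather than as a reference to be dereferenced.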

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Strings vs Numbers (Re: Tying & Overloading)

2001-04-24 Thread Russ Allbery

Bart Lateur <[EMAIL PROTECTED]> writes:

> My vote is to ditch the concat operator altogether. Hey, we have
> interpolation!

>   "$this$is$just$as$ugly$but$it$works"

How do you concatenate a list of variables that won't fit on one line
without using super-long lines?  Going to the shell syntax of:

PATH=/some/long:/bunch/of:/stuff
PATH="${PATH}:/more/stuff"

would really be a shame.
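
With a concatenation operator you can just build the value up a piece at a
time instead; a trivial sketch:

    my $path = '/some/long'
             . ':/bunch/of'
             . ':/stuff';
    $path   .= ':/more/stuff';    # or append incrementally later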

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Larry's Apocalypse 1

2001-04-15 Thread Russ Allbery

John Porter <[EMAIL PROTECTED]> writes:
> Piers Cawley wrote:

>> Unless you can get at every single one of those and add a '-M5' switch,
>> then they aren't going to work. Which could be very bad indeed.

> The analogous situation with p4->p5 wasn't so bad.  People just kept
> their p4 binaries around for running those old scripts.  No biggie.

There's quite a lot more Perl 5 code out there than there was Perl 4 code.
And it's rather annoying to still be maintaining a perl4 installation at
this point for the stragglers, although I suppose that can't be helped.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Larry's Apocalypse 1

2001-04-08 Thread Russ Allbery

Greg Boug <[EMAIL PROTECTED]> writes:

>>   'Although the Perl Slogan is "There's More Than One Way
>>   to Do It", I hesitate to make 10 ways to do something.'
>>  - Larry Wall

> Just an off topic remark... Does anyone know where I can get a copy of
> all these little gems from? :)

<ftp://ftp.cpan.org/CPAN/misc/lwall-quotes.txt.gz>

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Larry's Apocalypse 1

2001-04-05 Thread Russ Allbery

Nathan Torkington <[EMAIL PROTECTED]> writes:

> Not a comment at all on it?  Was I accidentally unsubscribed to
> perl6-language?

> *tap* *tap* is this thing on?

Using module/class instead of package is exactly the same route that LaTeX
took in the transition from 2.09 to 2e.  It works quite well, and because
the compatibility code is triggered automatically, there are still lots of
2.09 documents out there, what, 10 years later, that still process just
fine with current versions of LaTeX.

The rest of what Larry said included little beyond what I expected, so I
didn't have much additional response, apart from saying that it was rather
more Perl 5 compatibility than I was expecting.
Interesting.

Oh, and I wholeheartedly approve of the approach to handling objects.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: pitching names for the attribute for a function with no memory or side effects

2001-03-31 Thread Russ Allbery

John Porter <[EMAIL PROTECTED]> writes:
> Russ Allbery wrote:

>> It looks like I was misremembering; I remember a proposal for a "pure"
>> attribute in gcc, but it looks like the attribute used for functions
>> with no memory references and no side effects is "const" (a la C++).  I
>> think "pure" was proposed for the somewhat relaxed version of that that
>> allowed memory references but not side effects.

> Are you sure?  That sounds totally backwards to me.  Declaring a
> function "const" is a promise that it's not going to change anything
> outside its call frame.  But a pure function, in the math sense, doesn't
> even look at anything outside its call frame.

I'm sure about the definition of const:

`const'
 Many functions do not examine any values except their arguments,
 and have no effects except the return value.  Such a function can
 be subject to common subexpression elimination and loop
 optimization just as an arithmetic operator would be.  These
 functions should be declared with the attribute `const'.  For
 example,

  int square (int) __attribute__ ((const));

 says that the hypothetical function `square' is safe to call fewer
 times than the program says.

 The attribute `const' is not implemented in GNU C versions earlier
 than 2.5.  An alternative way to declare that a function has no
 side effects, which works in the current version and in some older
 versions, is as follows:

  typedef int intfn ();
  
  extern const intfn square;

 This approach does not work in GNU C++ from 2.6.0 on, since the
 language specifies that the `const' must be attached to the return
 value.

 Note that a function that has pointer arguments and examines the
 data pointed to must *not* be declared `const'.  Likewise, a
 function that calls a non-`const' function usually must not be
 `const'.  It does not make sense for a `const' function to return
 `void'.

My memory of "pure" is from mailing list traffic that I don't have on
hand.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: pitching names for the attribute for a function with no memory or side effects

2001-03-31 Thread Russ Allbery

Frank Tobin <[EMAIL PROTECTED]> writes:

> Just because one programming paradigm happens to name it "pure" doesn't
> mean that name should be carried over to other paradigms.  In a
> functional-programming context, sure, "pure" might be a good name.  But
> in a non-functional context, the name has little meaning with regards to
> the concept of "nosideeffects".

It looks like I was misremembering; I remember a proposal for a "pure"
attribute in gcc, but it looks like the attribute used for functions with
no memory references and no side effects is "const" (a la C++).  I think
"pure" was proposed for the somewhat relaxed version of that that allowed
memory references but not side effects.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: pitching names for the attribute for a function with no memory or side effects

2001-03-30 Thread Russ Allbery

Dan Sugalski <[EMAIL PROTECTED]> writes:

> Doesn't have the right ring to it, unfortunately. It's not really
> immutable, it just has no side-effects.

gcc and the literature both use "pure"; I'd recommend that.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: What can we optimize (was Re: Schwartzian transforms)

2001-03-29 Thread Russ Allbery

James Mastros <[EMAIL PROTECTED]> writes:

> Ahh, bingo.  That's what a number of people (inculding me) are
> suggesting -- a :functional / :pure / :stateless /
> :somthingelseIdontrecall attribute attachable to a sub.

The experience from gcc, which has a similar attribute, is that such an
attribute will be fairly rarely used and that most of your gains will come
from managing to teach the compiler to figure out that information for
itself.

This will probably be harder in Perl than in C because a C compiler can
afford to take more time to do global optimization passes.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: What can we optimize (was Re: Schwartzian transforms)

2001-03-29 Thread Russ Allbery

Dan Sugalski <[EMAIL PROTECTED]> writes:

> Aliasing is actually one of the bigger problems with C, or so I'm lead
> to believe. It gets in the way of a number of optimizations rather
> badly. (So say some of Compaq's C and Fortran compiler folks, and I have
> no reason to doubt them. The Fortran compiler often generates faster
> code than the C compiler for this reason apparently)

Hence the introduction of the restrict keyword in C99 and several of gcc's
attribute extensions for marking pure functions to try to get a handle on
the problem.  *wry grin*  Yeah, that's the main thing that gets in the way
of optimizing C.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Schwartzian transforms

2001-03-28 Thread Russ Allbery

John Porter <[EMAIL PROTECTED]> writes:

>   If the user-supplied key extraction function is tagged with
>   :function/:pure (or whatever), then perl is free to optimize
>   the operation of sort() by memoizing the results of calls to
>   that function.

I'd really like to see a concrete example of a sane sorting function which
cannot be memoized.  (Issues of syntax aside: just caching the result of
comparing any two pairs of data means caching data that a sane sorting
algorithm will never use again.  But provided that there's a way
to separate things out so that Perl *can* usefully memoize, I can't think
of any realistic sort function where this would be a problem.)
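
(The by-hand version of that memoization is already easy enough; a rough
sketch, with a toy key extractor standing in for something expensive:)

    use Memoize;

    sub key_of { my ($rec) = @_; return lc $rec->{name} }   # pretend this is slow
    memoize('key_of');    # cache results so repeated comparisons recompute nothing

    my @records = ({ name => 'Beta' }, { name => 'alpha' });
    my @sorted  = sort { key_of($a) cmp key_of($b) } @records;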

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Schwartzian transforms

2001-03-28 Thread Russ Allbery

Dan Sugalski <[EMAIL PROTECTED]> writes:

> I'm actually considering whether we even need to care what the
> programmer's said. If we can just flat-out say "We may optimize your
> sort function, and we make no guarantees as to the number of times tied
> data is fetched or subs inside the sort sub are called" then life
> becomes much easier.

I am strongly in favor of that approach.  I see no reason to allow for
weird side effects in Perl 6.  (Perl 5 would be a different matter, of
course.)

Not only is it simpler to deal with, it's simpler to *explain*, and that's
important.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Schwartzian Transform

2001-03-26 Thread Russ Allbery

map { $_->[0] } sort { compare($a->[1], $b->[1]) } map { [$_, f($_)] } data

Uri Guttman <[EMAIL PROTECTED]> writes:

> i never assumed that. but your ST example above shows it like that. you
> still have to do a ladder compare with $a and $b do make the ST work
> with multiple keys. each one needs to be given the sort order and
> compare op as well.

That's what compare() does.  compare() is a Perl function.  It can do
anything you want.

> that is my whole point of why putting this into the language is
> silly. it is too open ended for amount of work perl would have to do
> vs. the amount of coding you save. you save very little as you are doing
> most of the work youself in the f() key extraction subs.

The purpose served is that it's conceptually simpler to tell Perl "here's
how to extract keys and here's how to compare them; now sort this data
structure" than it is to tell Perl "convert this data structure into a
different one and then extract keys from it like follows and compare them,
then transform the structure back."  The first route is closer to the way
that people are intuitively thinking.  It doesn't matter to me that the
first isn't going to be that many fewer characters of Perl code than the
second.  I *understand* it better.

It is true that it can be done in a module.  Most things in Perl can.  It
matters very little to me whether it's a standard module or built into the
language; I just think that it should be possible to tell sort to make
this sort of thing easier.
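
For concreteness, here's roughly the shape I have in mind, as a plain
wrapper; sort_by() is an illustrative name, not a proposed interface:

    sub sort_by {
        my ($key_of, $compare, @data) = @_;
        return map  { $_->[0] }
               sort { $compare->($a->[1], $b->[1]) }
               map  { [ $_, $key_of->($_) ] } @data;
    }

    # Sort words by length without ever thinking about the extra arrays.
    my @by_length = sort_by(sub { length $_[0] },
                            sub { $_[0] <=> $_[1] },
                            qw(pearl camel ox));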

>   RA> You have to write slightly more code if you separate the
>   RA> extraction function f() from the comparison function compare()
>   RA> since if the key structure is complex, f() has to build a data
>   RA> structure that compare() takes apart.  That makes the memoizing
>   RA> approach superior.

> and how is this ladder compare built?

The programmer writes it.

> but you don't autogenerate the code in the block.

I haven't heard anyone talking about autogenerating everything other than
the code that wraps each element of the list in an anonymous array holding
the element and the key(s) and then extracts the key(s) for the comparison
function.  That part of the code is identical in every ST that I write.

> it is your code. the supposed goal of this hypothetical builtin ST is to
> make it easier to use it. i say it is not worth the effort since you
> have to do almost as much work anyway.

Less mental effort is the important part, not how many characters have to
be typed.  I don't want to be thinking about that extra level of arrays,
and until you've written *lots* of ST's, you can't ignore it.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Schwartzian Transform

2001-03-26 Thread Russ Allbery

Uri Guttman <[EMAIL PROTECTED]> writes:
>>>>>> "RA" == Russ Allbery <[EMAIL PROTECTED]> writes:
>   RA> Uri Guttman <[EMAIL PROTECTED]> writes:

> map { $_->[0] } sort { compare($a->[1], $b->[1]) } map { [$_, f($_)] } data
>^^^   ^^^

>   RA> Then you need to look at f and compare a little closer, since it's in
>   RA> there.

> and there is only extracted key being compared to another at the same
> level, not multiple key levels. think about sorting by state and THEN
> town. you can't do that with $a and $b and one f().

Yes.  You can.

Don't assume $a->[1] is a simple scalar.  What prevents f() from returning
an array ref?
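
A quick sketch of what I mean, assuming records that carry state and town
fields:

    sub f       { my ($rec) = @_; return [ $rec->{state}, $rec->{town} ] }
    sub compare { my ($x, $y) = @_;
                  return $x->[0] cmp $y->[0]  ||  $x->[1] cmp $y->[1]; }

    my @records = ({ state => 'NY', town => 'Albany' },
                   { state => 'CA', town => 'Davis'  });
    my @sorted  = map  { $_->[0] }
                  sort { compare($a->[1], $b->[1]) }
                  map  { [ $_, f($_) ] } @records;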

> so you need multiple compare ops and multiple f()'s.

No, you don't.

> the point is that you have to generate the ladder compare code as well
> as the calls to your f()'s.

Yes, you have to write the comparison and data manipulation function for
Perl; Perl isn't going to be able to figure it out for itself.  But that's
true regardless of the sorting method; you're always going to have to tell
Perl what the keys are and how to compare them.

You have to write slightly more code if you separate the extraction
function f() from the comparison function compare() since if the key
structure is complex, f() has to build a data structure that compare()
takes apart.  That makes the memoizing approach superior.

>   RA> Without creating a function to extract the key, you can't sort in
>   RA> Perl at all.  sort { $a <=> $b } contains two functions to extract
>   RA> the keys.

> huh? $a and $b are not functions but aliases the the current pair of
> keys (at the primary key level).

Is sub { $a } a function?  $a is equivalent to that.  One way to look at
this is that Perl lets you simplify the function if all you need is the
basic data unit.

> i don't seen any functions in what you show there. you don't need a
> function or even an ST to sort complex records.

{ $a <=> $b } is a function.  (Well, it's a code block, but the difference
is quibbling.)

My point is that writing functions isn't nearly as complicated as you make
it sound.  Almost every time I write a sort, map, or grep in Perl, I write
a function.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Schwartzian Transform

2001-03-26 Thread Russ Allbery

Dan Sugalski <[EMAIL PROTECTED]> writes:

> You're ignoring side-effects. The tied data may well be returned the
> same every time it's accessed, but that doesn't mean that things aren't
> happening behind the scenes. What if we were tracking the number of
> times a scalar/hash/array was accessed? Memoizing would kill that.

Hm.  I don't really understand why this would be significant unless you're
actually benchmarking Perl's sort.  Unless you care about the performance
of Perl's sort algorithm, the number of times each element is accessed in
a sort is *already* indeterminate, being a function of the (hidden) sort
implementation, and will vary a lot depending on how ordered the data
already is.

Counting on side effects determined by the *number* of times elements are
accessed during a sort sounds pretty twisted to me.  I can see a few YAPHs
with such properties, but I don't think we were guaranteeing that Perl 6
would be YAPH-compatible anyway.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Schwartzian Transform

2001-03-26 Thread Russ Allbery

Uri Guttman <[EMAIL PROTECTED]> writes:
>>>>>> "SC" == Simon Cozens <[EMAIL PROTECTED]> writes:

>   SC> No, it wouldn't, don't be silly. The ST can always be generalized to 

>   SC> ST(data, func, compare) =
>   SC> map { $_->[0] } sort { compare($a->[1], $b->[1]) } map { [$_, f($_)] } data

> and i don't see multiple keys or sort order selection per key.

Then you need to look at f and compare a little closer, since it's in
there.

> and even creating a function to extract the key is not for beginners in
> many case.

Without creating a function to extract the key, you can't sort in Perl at
all.  sort { $a <=> $b } contains two functions to extract the keys.

Functions don't have to be complicated, you know.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: The binding of "my" (Re: Closures and default lexical-scope

2001-02-18 Thread Russ Allbery

Bart Lateur <[EMAIL PROTECTED]> writes:

> That doesn't mean that davocates for either side don't have anything
> interesting to say. For starters, it's usually dissatisfaction with
> certain aspects of some languages that causes the birth of yet another
> new language, such as PHP (which is more a different programming
> platform than really a different, full blown language) and Ruby.

Sure.  However, when it's being presented in the fashion that it's being
presented in this thread, it hits my mental filters and is completely
worthless to me.

Compare and contrast with the way we discussed JWZ's disagreements with
Java.

I think it's possible for intelligent adults to figure out how to talk
about the things about Perl they don't like without advocating another
language as better, without insulting people, and without using
over-the-top whining that may have been intended to be funny and ended up
just being stupid and grating.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: The binding of "my" (Re: Closures and default lexical-scope

2001-02-17 Thread Russ Allbery

So since when did perl6-language become perl-advocacy?  Rephrased:  Could
people please take the advocacy traffic elsewhere where it isn't noise?
Thanks.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: JWZ on s/Java/Perl/

2001-02-09 Thread Russ Allbery

Mark Koopman <[EMAIL PROTECTED]> writes:
>> On Fri, 09 Feb 2001 12:06:12 -0500, Ken Fox wrote:

>> That may work for C, but not for Perl.
>> 
>>  sub test {
>>  my($foo, $bar, %baz);
>>  ...
>>  return \%baz;
>>  }

> but is this an example of the way people SHOULD code, or simply are ABLE
> to code this.  are we considering to deprecate this type of bad style,
> and force to a programmer to, in this case, supply a ref to %baz in the
> arguements to this sub?

That's a pretty fundamental aspect of the Perl language; I use that sort
of construct all over the place.  We don't want to turn Perl into C, where
if you want to return anything non-trivial without allocation you have to
pass in somewhere to put it.
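
For what it's worth, a complete little version of that pattern:

    sub word_counts {
        my (@words) = @_;
        my %count;
        $count{$_}++ for @words;
        return \%count;    # the lexical hash lives on through the reference
    }

    my $counts = word_counts(qw(a b a c a));
    print "$_: $counts->{$_}\n" for sort keys %$counts;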

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: TIL redux (was Re: What will the Perl6 code name be?)

2000-10-23 Thread Russ Allbery

Uri Guttman <[EMAIL PROTECTED]> writes:

> not a good sign but we may need to take the hit to support overloading
> any function and supporting TIL and threads. i think a %20 hit to get
> those working cleanly might be a decent tradeoff.

I don't.  I'd find it to be a really good reason to learn Python.

> the TIL speedup over pure interpretation might win that back and
> more.

If that's true, that's a different ballgame of course.

If at all possible, Perl 6 should be *faster* than Perl 5.  Perl is
already too slow IMO.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 357 (v1) Perl should use XML for documentation instead of POD

2000-10-02 Thread Russ Allbery

Garrett Goebel <[EMAIL PROTECTED]> writes:

> 
>   Module::Name
>   0.01
>   short description
>   
> =head1 long description
> 
>   =head2 heading
>   
> foo
>   
>   Type in some text here...
> 
>   
>   Eliott P. Squibb
>   Joe Blogg
>   none
>   Distributed under same terms as Perl
>   
> define your own section
> blab here
>   
> 

Wow, that's completely unreadable.  That's more unreadable than *roff.

(This is due to the fundamental flaw in SGML syntax, namely that at a
glance it's impossible to distinguish the tags from the content because
the delimiters are horribly constructed and you need too many tags.  This
is a fundamental flaw in the entire way that SGML syntax was designed, and
I don't think it's possible to fix.  As far as I'm concerned, *any*
SGML-derived markup language is write-only and usable only as an output
format.  It's slightly more readable than PostScript, and that's about all
I can say for it.)

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 288 (v2) First-Class CGI Support

2000-09-30 Thread Russ Allbery

Bart Lateur <[EMAIL PROTECTED]> writes:

> But anyway: whould this imply that URL- and simple HTML escaping and
> back, will now be available through pack()/unpack()? Just like UUE?
>   ;-)

Adding base64 encoding/decoding and quoted-printable would also be useful.
Either that, or taking uuencode out of pack and putting it plus those
other things into a standard module.
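
To illustrate the current split (uuencode built into pack, base64 off in a
module):

    use MIME::Base64 qw(encode_base64 decode_base64);

    my $data = "hello, world\n";

    my $uu  = pack('u', $data);        # uuencode via pack/unpack
    my $b64 = encode_base64($data);    # base64 only via a module

    my $uu_back  = unpack('u', $uu);
    my $b64_back = decode_base64($b64);
    print "round trips ok\n" if $uu_back eq $data and $b64_back eq $data;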

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 327 (v2) C<\v> for Vertical Tab

2000-09-29 Thread Russ Allbery

David Olbersen <[EMAIL PROTECTED]> writes:
>> From: Russ Allbery [mailto:[EMAIL PROTECTED]]

>> Just out of curiosity, and I'm not objecting to this RFC, has anyone
>> reading this mailing list actually intentionally used a vertical tab
>> for something related to its supposed purpose in the past ten years?

> I don't even know what a vertical tab is, it doesn't sound like anything
> very useful.

It advances the paper on your hardcopy terminal by a terminal-setting-
defined number of lines, usually about eight.  The last time I used a
vertical tab intentionally and for some productive purpose was about 1984.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 327 (v2) C<\v> for Vertical Tab

2000-09-29 Thread Russ Allbery

Perl6 RFC Librarian <[EMAIL PROTECTED]> writes:

> However, lack of C<\v> represents a special case for a C programmer to
> learn.  C<\v> isn't used for anything else in double quoted strings, nor
> is it used in regular expressions, so it won't require removal of an
> existing feature to add it. Currently a C<\v> in a double quoted strings
> will be treated as C, with a warning about unknown escape issued if
> warnings are in force.

> Vertical tab was also omitted from the range of characters considered
> whitespace by C<\s> in regular expressions.

Just out of curiosity, and I'm not objecting to this RFC, has anyone
reading this mailing list actually intentionally used a vertical tab for
something related to its supposed purpose in the past ten years?

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Expunge "use English" from Perl? (was Re: Perl6Storm: Intent to RFC #0101)

2000-09-28 Thread Russ Allbery

Andy Dougherty <[EMAIL PROTECTED]> writes:

> I find that I don't remember many of the less-frequently-used perlvars
> (where less-frequently-used depends on the types of programs I write,
> obviously).  I certainly couldn't tell you off-hand the differences
> among $< $> $( and $).  I'd have to look them up.

I never understood why these were variables.  You don't change UIDs or
GIDs that often, and when you do you tend to want precise control; because
they're variables, they have weird interaction semantics, and you have to
assign to them in just the right order to accomplish what you want.  See
recent threads on comp.lang.perl.moderated.

I'd honestly rather see getuid, geteuid, getgid, getegid, and getgroups,
along with some consistent and complete subset of the setting functions
(with portability magic behind the scenes), in a separate module that only
those programs that need to do UID fiddling need to load.

I guess the exception is getpwuid($<), which is probably done more than
any other operation on UIDs, but maybe just keep that single variable.
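
A rough sketch of the sort of interface I mean; the read-only half already
exists in the POSIX module, and the setter shown is purely hypothetical:

    use POSIX ();

    my $uid  = POSIX::getuid();     # the same information as $<
    my $euid = POSIX::geteuid();    # the same information as $>
    my $name = getpwuid($uid);      # the one really common case noted above

    # A hypothetical, illustrative setter for the other half:
    # set_ids(uid => $uid, euid => $uid);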

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Perl6Storm: Intent to RFC #0101

2000-09-27 Thread Russ Allbery

Robert Mathews <[EMAIL PROTECTED]> writes:

> ... and don't know use English.  Why can't they learn to use it?

Why can't the new users of Perl learn the real variable names?

I guess I don't buy the argument that the real names are harder to learn.
Most of them have fairly useful mnemonics, you see them and use them
constantly so they become familiar quickly, and most Perl code out there
uses them.

> Are you saying that nothing is worth knowing unless the oldsters know it
> already?

\begin{rant}

No, I was not saying that.  I was saying exactly what I said.  I meant
what I said.  If I'd meant something else, I would have said that instead.

\end{rant}

> It's not that I want to jam English down everyone's throats.  But Nate
> asked, "does anyone want this," and I said, "yes."  Or at least, I would
> want it if it worked.

Hey, I'm not claiming you're trying to jam anything anywhere.  We were
discussing use English, and I'm expressing my opinion just like you are.
I've found the use of use English in code I had to maintain to be annoying
and unhelpful, and to actually degrade the maintainability of the code, so
I threw in my two cents.

> You'd learn to recognize the long variable names if you used English
> regularly.  It's a chicken-and-egg problem, but not a very difficult
> one.

I've yet to understand why I'd *want* to use English regularly; so far as
I can tell, it has essentially no benefit in the long term.  Perl is not
now, nor is it likely to ever be, a language that's particularly readable
by people who don't know Perl, and use English in order to learn the
strange names used by use English strikes me as rather circular.  Either
the person maintaining the code learns Perl, in which case the use English
names won't be necessary, or they don't, in which case they're unlikely to
be able to maintain the code anyway.

I know it's not the only stance to take, but I prefer to try to make my
Perl code very readable by people who know Perl, and encourage people who
don't know Perl who are trying to read my code to learn Perl first, or at
the same time.  There are certainly languages out there that are more
readable for people who don't know the language at all than Perl is, but I
don't find this a particularly important feature in a language.  In those
cases where it is, I'd use a language other than Perl.

use English doesn't really address the syntactical points of Perl that
make it hard to read for someone who doesn't know Perl; it strikes me, and
always has struck me, as a bad partial solution to a problem that may not
need to be solved and that only makes things more complicated in the long
run.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Perl6Storm: Intent to RFC #0101

2000-09-27 Thread Russ Allbery

Robert Mathews <[EMAIL PROTECTED]> writes:
> Nathan Wiger wrote:

>> How many people really "use English" other than beginners?

> I would use it, but I heard a nasty rumor that it incurs the same
> penalty as using $' and such.  I try to avoid too much line noise in
> code that has to be maintained.

I have a very serious problem with use English, namely that it makes Perl
code much more difficult to read and maintain for people who know Perl.
Writing something that's marginally easier to understand for a beginner
and harder to understand for an expert doesn't strike me as a good idea.

I know what $/ does; I double-take at $INPUT_RECORD_SEPARATOR and am never
sure whether it's a user's personal global variable or $/ or some other
thing.  And $ARG and $MATCH both really look like global variables to me
and I'd hunt through the program trying to find where they're defined
for a while before realizing they're weird use English things.
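
To make the correspondence concrete (the long names are documented aliases
for the punctuation variables):

    use English;

    # $INPUT_RECORD_SEPARATOR is $/, $ARG is $_, and $MATCH is $&.
    $/ = '';                                            # paragraph mode
    print "alias\n" if $INPUT_RECORD_SEPARATOR eq '';   # sees the same change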

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 283 (v1) C<tr///> in array context should return a histogram

2000-09-26 Thread Russ Allbery

Paris Sinclair <[EMAIL PROTECTED]> writes:

> But as soon as a person labels me a minority, and implies that because I
> have been labeled such that I am a rioter, and that my opinions are
> based upon this label, then your choices are to filter me, or to listen
> to me protest.

Then perhaps you shouldn't have labelled him Euro-centric if you didn't
want a sarcastic response in kind.

I'd just prefer that we discussed the technical issues without this
pointless bickering.  If you were offended, fine; say you were offended
and move on.  I was offended by your implication that people who don't
agree with you are saying that only European scripts matter.  But please
don't escalate the argument as part of being offended.

I'll now stop replying to this thread.  Sorry for sticking my nose in; it
really bugs me when this happens in i18n discussions.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 283 (v1) C<tr///> in array context should return a histogram

2000-09-26 Thread Russ Allbery

Paris Sinclair <[EMAIL PROTECTED]> writes:
> On Tue, 26 Sep 2000, Bennett Todd wrote:

>> Someone wrote:
>>> What's the upper bound in a 16bit language? Or does that case just
>>> have to break? "Sorry, you're not European. Please be assimilated
>>> before using this tool. Resistance is futile."

>> Lordie lordie lordie, you're one of the persecuted minority, and a
>> brand-waving rioter too. I've clearly stepped on a corn, not to mention
>> picked the wrong person to persecute. I'll go speak english to other
>> bigots who only speak english, and leave the future of the civilized
>> universe in your responsible hands.

> That's really ridiculous.

Bennett is reacting, I would assume, to the rather aggressive way of
phrasing arguments quoted above.  It's bothering me as well.  Could you
please start from the assumption that we're all interested in supporting
the full Unicode space to the greatest degree possible?  None of us are
trying to force an ASCII-only alphabet on anyone (although some of us are
interested in keeping ASCII-only operations fast and efficient since
that's most of what we do).

> And, if my being or not being a minority is something that would effect
> the value of my position, then you are even more dangerous than I had
> suspected.

Comments like this don't help the discussion any.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-23 Thread Russ Allbery

Glenn Linderman <[EMAIL PROTECTED]> writes:
> Russ Allbery wrote:

>> Perhaps I don't use those warnings in the same way that you do.  I
>> *very* rarely have undefined value warnings in my programs, and when I
>> do they're usually not actually bugs, just things that require a
>> different way of writing to be -w clean.  So I don't have as high of an
>> opinion of this warning as being particularly important to debugging; I
>> only find it useful in certain particular circumstances.

> I can't say that I often get the warning, but when I do, I find it
> generally results from a bug.  So something about the way we write code
> is different, I guess.

Most likely.  It caught stuff for me before use strict (generally variable
typos), but with use strict, those warnings tend either to come from
checking for keys in aggregate structures that haven't been initialized
(which is mostly just an annoyance) or to be a symptom of something going
wrong somewhere else entirely, in which case they aren't particularly
helpful in tracking down where it's going wrong.

> I find this absolutely amazing.  You've now convinced me you understand
> the arguments I've been making, and the issues I'm concerned
> about... and yet you still hold this opinion.  Certainly we have a
> difference of opinion here.

It's quite possible that I'd have a different opinion if I used it for a
while; I don't know.  I think it's worth trying it with undef first and
writing some code that way and seeing how it works and how hard it is to
debug.

> Russ, I apologize.  I confused you with someone else in this posting.  I
> looked back over your postings, and you, unlike those that seem to just
> hate SQL, have repeatedly expressed interest in using the tristate
> semantics.  I've been trying to keep my nose clean regarding remarks
> like this, but I guess my frustration level finally got the better of
> me...and perhaps partly, I guess I stayed up too late last night and
> probably shouldn't be posting this late tonight either.

No problem; it's easy enough to do.  :)

> Maybe the enlightenment is shed by your earlier remark: you don't find
> the undef warnings to be particularly important to debugging.  So maybe
> that is the reason that you don't see the need to concurrently have both
> sets of semantics available?

That's quite possible.

> Since you don't need the current set of semantics?

The main thing I use undef for is in areas where I'm checking with
defined, which I would assume would continue to work regardless of the
selected semantics of undef.  Having undef propagate would make it useful
in additional areas (or at least I think it would).  From writing language
parsers, I found that it's useful in areas other than SQL to have a
distinguished value that propagates through any arithmetic operation.

> Going back to your first remark about seeing confusion either way, maybe
> explaining the types of confusion that you see with a separate null and
> undef vs the types of confusion that you see with a tristate pragma
> would help me to grasp that logic.

The main thing I'm worried about with undef plus null is that undef is
already very hard to explain and having an additional parallel concept
that behaves slightly differently and that can easily be confused with
undef is worrisome.  The advantage of explaining a tristate pragma is that
with normal undef semantics, most times undef shows up in an arithmetic or
logical operation other than a simple test of true or false, it's
symptomatic of poorly-constructed code; increments are about the only
exception.  So the area that tristate logic changes is not something that
we recommend that people use under normal circumstances.

> And if/when my database needs require the use of multiple different NULL
> values (currently they are not there, multiple NULL values do get talked
> about by relational theorists, and there is some move to put them into
> the SQL standard, but it appears they haven't yet appeared in one) I see
> having multiple "special non-values" (as someone else called them) much
> simpler to extend to the concept of multiple NULL values than the pragma
> approach.

Hm.  Yes, that's a good point.  (At that point, something more like
Quantum::Superpositions may be more what you want.)

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: perl6storm #0011: interactive perl mode

2000-09-23 Thread Russ Allbery

Philip Newton <[EMAIL PROTECTED]> writes:
> On Thu, 21 Sep 2000, Tom Christiansen wrote:

>> =item perl6storm #0011

>> perl w/o args with stdin and out ttys should be perl -de 0.
>> saves novices from typing "perl" and getting confuddled.

> I think it should print out a banner message, too.

> A couple of times I was wondering whether perl was installed on a
> machine and typed 'perl' to see -- and "nothing happened". (I suppose
> either of `which perl` or `perl -v` would be a better way to find out,
> but still.)

> Having Perl tell me 'this is perl5.7.0\n> ' or similar would have been
> nice. But that's just me.

As long as it's possible to get the current "perl" behavior; I actually
use that a lot.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-21 Thread Russ Allbery

Glenn Linderman <[EMAIL PROTECTED]> writes:

> In my opinion, which you probably will also not agree with, attempting
> to toggle between the current undef semantics and tristate semantics is
> like trying to stuff three values into one bit.

I do understand the argument.  I just see confusion either way, and I
think that approach would be the least confusing and allow the code to
remain the most Perl-like.  I can see arguments the other way; that's just
my opinion.

> The problem is, when you toggle the pragma, all variables whose value is
> undef suddenly have the tristate semantics, and when you toggle it back,
> all the variables whose value is undef suddenly have the undef
> semantics.  This leaves it purely to the programmer to make sure that
> the pragma is used in exactly the right places, and, when tristate
> semantics are in effect makes unavailable the normal, helpful warnings
> that Perl produces when you attempt to misuse undef values.

Perhaps I don't use those warnings in the same way that you do.  I *very*
rarely have undefined value warnings in my programs, and when I do they're
usually not actually bugs, just things that require a different way of
writing to be -w clean.  So I don't have as high of an opinion of this
warning as being particularly important to debugging; I only find it
useful in certainly particular circumstances.

To me, toggling the semantics of the variables which are already undef
strikes me as just what I'd want.

> I guess that since you have no intention of using the tristate
> semantics, you don't care whether it is easy to code using them.

Comments like this are what is making it very difficult for me to continue
discussing this with you.  You don't actually know what type of Perl I
write or whether or not I'd use the semantics or not.  As a matter of
fact, I find them very interesting and fully do expect to use those
semantics if they're implemented in Perl, particularly given that I'm
likely to be doing a lot more database and SQL coding in the future than I
am currently.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-21 Thread Russ Allbery

Glenn Linderman <[EMAIL PROTECTED]> writes:
> Philip Newton wrote:

>> Having $seen{$word}++ turn $seen{$word} to undef is bad,

It doesn't "turn it to undef"; if you're using tristate semantics, it
leaves it as undef, because those are the semantics you've selected for
undefined values.

>> if (undef)++ assumes NULL semantics everywhere, hence "one more than
>> unknown" = "still unknown".

No one's proposing that.  People are proposing the ability to turn on NULL
semantics where you need it.

> Right.  Applying NULL semantics to undef would be bad.  The
> counterproposals to RFC 263, along the lines of "use tristate", seem to
> overlook this sort of situation.

I'm not overlooking it; I just don't agree with you.  There *is* a
difference.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-20 Thread Russ Allbery

Damien Neil <[EMAIL PROTECTED]> writes:

> If I could be assured that the performance penalty was minimal, I'd
> be delighted to write

>   if ($errno == any(EAGAIN EINTR)) { ... }

> over

>   if ($errno == EAGAIN || $errno == EINTR) { ... }

> The former is less typing and reads more clearly (to me, at least).

Hm, yeah, good point.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-20 Thread Russ Allbery

Tom Christiansen <[EMAIL PROTECTED]> writes:

>>$a = null;
>>$b = ($a == 42);
>>print defined($b)? "defined" : "not defined";

>> would print "not defined", maybe?

> In a sane world of real (non-oo-sneaky) perl, the "==" operator returns 
> only 1 or "".  Both are defined.

But if you say:

  use tristate;

then $a == 42 returns undef if $a is undef.  Most Perl programs may not
want these semantics, but they're often useful, and for more things than
just databases.  Think error propagation.
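
The pragma itself is hypothetical, of course, but the intended behavior for
comparisons is easy to mock up as a toy:

    # Three-valued equality: unknown compared against anything stays unknown.
    sub eq3 {
        my ($x, $y) = @_;
        return undef unless defined $x && defined $y;
        return $x == $y;
    }

    my $a;                     # undef: "unknown"
    my $r = eq3($a, 42);
    print defined $r ? "defined\n" : "not defined\n";   # prints "not defined"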

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-20 Thread Russ Allbery

Glenn Linderman <[EMAIL PROTECTED]> writes:

> undef has the following semantics:

> 1) all otherwise uninitialized variables are set to undef

And as the RFC says, quite a few times, for database code you often want
all your variables to start as the null value.

> 2) under "use strict", use of undef in expressions is diagnosed with a
> warning

And use of null in an expression would be diagnosed by getting null in the
output.  If you keep them as separate concepts, at this point you're
screwed if that was a bug and you don't know where the null crept in.  If
you keep them the same, you just turn off the pragma for that section and
see where you get the warning.

Keeping them the same lets you turn this warning on selectively for
database code, which could be a significant aid in debugging.

> 3) undef is coerced to 0 in numeric expressions, false in boolean
> expressions, and the empty string in string expressions.

> The semantics for NULL is different, read the SQL standard.

The semantics of undef can be chosen by the programmer.  My point is that
if the undefined value called "undef" and the undefined value called
"null" behave differently in Perl, *that* would be a serious bug in my
opinion.  Talk about horribly confusing.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-20 Thread Russ Allbery

Glenn Linderman <[EMAIL PROTECTED]> writes:

> With the multitudinous operator approach, please show me how to make
> each of the following conditional statements print true, without
> cluttering the code with interleaved additional pragmas and scoping
> blocks.  Use of pragmas before the code might be acceptable.

>  no strict;
>  $a = undef;
>  $b = null;
>  $c = $a + $b;
>  $d = $a + 1;
>  $e = $b + 1;

>  print "true"  if defined $c;
>  print "true"  if defined $b;
>  print "true"  if isnull $e;
>  print "true"  if defined $d;
>  print "true"  if $d == 1;
>  print "true"  if $e != 1;
>  print "true"  if ! ($b == 0);
>  print "true"  if $a == 0;

Why on earth would you want to do this in real code?

I don't believe you actually need both semantics active at the same time;
it might take really minor rewording of the code (initialize to 0 instead
of undef for counters, etc.), but I'm very unconvinced that you need both
concepts at once.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-20 Thread Russ Allbery

Jonathan Scott Duff <[EMAIL PROTECTED]> writes:

> Yep, this is bad IMHO.  Your concern is valid I think, but your
> "solution" isn't a good one.  Why not just use a module like Damian's
> Quantum::Superpositions?

No offense to Damian, but I tried to read and understand his documentation
and I thought I was back in grad school.  I don't think it's the fault of
the writing either; I think that Quantum::Superpositions is trying to do
something that's rather too complicated to explain clearly to the average
programmer.

It's a neat idea, but I don't expect to see it ever widely used.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-20 Thread Russ Allbery

Nathan Wiger <[EMAIL PROTECTED]> writes:

> undef has a very well-defined (ha!) Perl meaning: that something is
> undefined. null has a very well-defined RDBMS meaning: that something is
> unknown. Perl allows you to add and concatenate stuff to undef, because
> that value can be coerced into 0 and "" without harm.

This isn't a major loss with a pragma in effect since -w clean code
already can't do this.  I don't see the harm in changing this to null
semantics when you ask for that.

About the only piece of code of mine that this would affect are places
where I use ++ on an undef value, and that's not a bad thing to avoid as a
matter of style anyway (usually I'm just setting a flag and = 1 would work
just as well; either that, or it's easy enough to explicitly initialize
the counter to 0).

> Using the proposed tristate pragma does not strike me as any better - in
> fact, worse - than adding null() because you are now changing the
> meaning of fundamental Perl operations.

But that's exactly what you want to do.

> You're *still* introducing "yet another state of null", but to do so
> you're conflating undef and null, which are themselves different
> concepts.

I strongly disagree.  You're not changing the data types at all; you're
changing what Perl's operations (logical, addition, concatenation, etc.)
do with undefined values.  Instead of coercing to 0, you coerce to an
undefined value.

I really like this.  I could see lots of cases other than just databases
where this would be a useful thing to do with undef.  It becomes
considerably less useful if you introduce a new keyword, since then it
requires rewriting code.  Those undef semantics could be useful for error
checking in existing code.

> For example, assuming this code:

>$name = undef;
>print "Hello world!" if ($name eq undef);

So don't do that.  Use C<defined> if you want to ask that question.
Most code that I've seen already does that; checking equality with undef
is an odd way of writing it.  *If* you want to use the pragma, just always
write that as C<!defined $name>.

> The same operation would print "Hello world!" in one circumstance, but
> nothing under the tristate pragma. This is just as dangerous as having a
> pragma like so:

>use 'zeroistrue';
>$num = 0;
>print "Got data" if ( ! $num );

> Where the above would print out "Got data" normally, but not under the
> pragma.

I strongly disagree here too.  0 as false and 1 as true is an assumption
made in multiple other programming languages, something used by the
majority of Perl scripts that I write, and something that's very
intuitive.  undef semantics, on the other hand, are specific to Perl and
the default is chosen to be friendly to quick and dirty scripts.  Changing
those semantics to propagate undef makes perfect sense to me.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-19 Thread Russ Allbery

Glenn Linderman <[EMAIL PROTECTED]> writes:
> Russ Allbery wrote:

>> I agree with Tom; I think it's pretty self-evident that they're the
>> same thing.  undef means exactly the same thing as null; that's not the
>> problem.  The problem is that Perl doesn't implement the tri-state
>> logic semantics that most users of null are used to, which is a
>> different issue.

> So, to paraphrase your statement a bit:

> It is self-evident that they're the same, the problem is that they work
> differently.

No, that's not a paraphrase.  That's saying something completely different
which is wrong.

If undef functioned differently than null, that would be a bug.  What's
missing is a way to say "I want tri-state logic" as a pragma.  When that
pragma is enabled, undef would be the null-like state.

Perl already has exactly the data value that you're looking for.  This RFC
is proposing to fix the wrong problem; the things that need to be changed
(conditionally) are the logical operators, not the data value.

> Nota Bene: IEEE floating point defines two different concepts that are
> not numbers, but can be mixed with numbers in expressions: Inf and NaN.
> And actually, there are positive and negative varieties of both Inf and
> NaN.  So I guess you might say that they are the same; but the problem
> is that they work differently.

There are positive and negative infinities, but that's a different
situation entirely; infinity is a degenerate value, not an undefined
value.  This is the first time I've ever heard of -NaN; are you sure about
that?  (There are, in fact, different types of NaN, such as signalling vs.
non-signalling, but that's due to floating point traps and exceptions,
issues that don't crop up in the situations where you want undef/null.)

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-19 Thread Russ Allbery

Glenn Linderman <[EMAIL PROTECTED]> writes:
> Tom Christiansen wrote:

>>> Currently, Perl has the concept of C<undef>, which means that a value
>>> is not defined. One thing it lacks, however, is the concept of
>>> C<null>, which means that a value is known to be unknown or not
>>> applicable. These are two separate concepts.

>> No, they aren't.

> Yes, they are.*

> * [What other appropriate response is there to someone states a position
> without justification?  All six-year olds understand this response, and
> by age seven have realized its futility.]

I agree with Tom; I think it's pretty self-evident that they're the same
thing.  undef means exactly the same thing as null; that's not the
problem.  The problem is that Perl doesn't implement the tri-state logic
semantics that most users of null are used to, which is a different issue.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-17 Thread Russ Allbery

Karl Glazebrook <[EMAIL PROTECTED]> writes:

> o Why do I think perl has too much line noise? Because of code like this:

>   @{$x->{$$fred{Blah}}}[1..3]

You're taking the value of the key "Blah" in the hash referred to by $fred
and using it as the key into the hash referred to by $x, treating the
value as an anonymous array and taking a slice containing the 2nd through
the 4th elements.

Hm.  Personally, I think that's a very *small* amount of line noise for
expressing an action so complicated it takes more than three lines of
English text to explain what's going on.  Expressions that do complicated
things are going to look complicated.

If you want to cut down on the line noise, temporary variables are the
standard tool:

my $key = $$fred{Blah};      # the value of Blah in %$fred
my $array = $$x{$key};       # the array reference stored under that key in %$x
@$array[1..3];               # the slice of that array

And finally, what causes all the line noise here are the curlies.
Removing the *one* @ in that expression isn't going to make it look any
simpler.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-16 Thread Russ Allbery

Kai Henningsen <[EMAIL PROTECTED]> writes:

> That would be nice if the punctuation actually *were* part of the
> variable name.

> However, it isn't: to access 'second', you'd say $args[1], NOT @args[1].
> It's one of the Perl features that most confuses newcomers.

Well, I think it is; it's just that $args[1] is a different variable than
@args.  Maybe people think that's an odd notion of what a variable is, but
I think of @args as a collection containing a bunch of individual
variables, each of which has its own name that's disambiguated from $args
by [].  You can operate on the collection, or you can address the
variables individually.

This makes even more sense when you look at %args, and start looking at
multi-level hashes.

> If there's no better argument than this, I'd throw this distinction away
> in a heartbeat.

It's always easy to throw away other people's distinctions.  :)

> If the syntax can be changed so I never have to write @{some array ref}
> again to explain to perl that yes, I really want to use this array as an
> array, I'll be a happy man.

Now this I'll agree with; I find the @{ $$hash{value} } syntax rather
bletcherous.  But I think that's a separate problem and could well have a
separate solution.
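
For reference, the sort of thing I mean, in a trivial sketch:

    my $hash  = { value => [ 'a', 'b', 'c' ] };
    my @list  = @{ $hash->{value} };          # the circumfix deref in question
    my @slice = @{ $hash->{value} }[0, 1];    # and it's worse for slices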

Perhaps @->$$hash{value} as has been proposed before, and Perl 6 can deal
with the issue of the @- array in some other way.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-16 Thread Russ Allbery

John Porter <[EMAIL PROTECTED]> writes:
> Russ Allbery wrote:

>> $args = 'first second third';
>> @args = split (' ', $args);
>> my $i = 0;
>> %args = map { $_ => ++$i } @args;

>> This is very Perlish to me; the punctuation is part of the variable
>> name and disambiguates nicely.

> No, it's not.  Where are we taught this?  It's a myth.

> The punctuation imposes context on the variable expression.

>   $foo[0]

> accesses an array.  Where's the "@"?

Now the [0] is disambiguating.  Same difference.  I'm not interested in
nit-picking semantics.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 84 (v1) Replace => (stringifying comma) with =>

2000-08-16 Thread Russ Allbery

Stephen P Potter <[EMAIL PROTECTED]> writes:

> What stops us from imposing order on this chaos?  If they are currently
> defined as not having any specific order, why can't we say they always
> return in numeric || alphabetic || ASCII || whatever order we want?

Because the fewer guarantees you make, the more efficiency you can get.
The above would prevent a hypothetical future smart Perl interpreter from
reordering your hash behind the scenes in another thread while your
program is using it to optimize for the usage pattern that it's seeing,
for example.

If you have to guarantee a sorted traversal of the hash keys, your choices
of data structures are *far* more limited.
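
For comparison, when you really do need a deterministic order, the usual
idiom imposes it at iteration time rather than demanding it of the hash:

    for my $key (sort keys %hash) {
        print "$key => $hash{$key}\n";
    }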

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 84 (v1) Replace => (stringifying comma) with =>

2000-08-15 Thread Russ Allbery

Damien Neil <[EMAIL PROTECTED]> writes:

> Arrays are ordered.  Hashes are not.  Sure, you can iterate over a hash,
> but add an element to one and you can change the order of everything in
> it.

Formally, I believe it's permissible for a hash implementation to return a
different order the second time you iterate through it from the first
time, even if you haven't touched the hash inbetween.  That's the
definition of an iterable but unordered data structure; there's some way
of getting all of the members one and only one time, but each time you
look at it the order in which the members show up may be different (maybe
garbage collection happened behind the scenes, the hash was reorganized
due to an observation of how you were using it, etc.).

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 99 (v2) Standardize ALL Perl platforms on UNIX epoch

2000-08-15 Thread Russ Allbery

Buddha Buck <[EMAIL PROTECTED]> writes:

> Leap-seconds are a PITA for generic time routines.

Unix time ignores leap seconds.  POSIX basically says "don't worry about
them" and by and large that works.  It means your system clock drifts a
little over time and then gets corrected back by xntpd or something, but
in practice time on a Unix clock is monotonic.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-15 Thread Russ Allbery

Steve Fink <[EMAIL PROTECTED]> writes:

> I would very much hate to see the prefixes go away or merge into a
> single one, but I'm not so sure I agree with Russ. I've had to teach
> lots of beginners that even though $x refers to scalar x, $x{...} refers
> to %x, but don't think of it that way because the $ is saying what value
> you're getting back, not which variable you're using, unless you're
> calling a function, or...

This falls firmly in the category of things that are powerful for
experienced users of the language but may be somewhat difficult to learn.
I don't think Perl has being easy to learn as its primary goal, nor
should it.

> I'll just say I wouldn't mind having a stricture forbidding $x and %x in
> the same package.

Ugh.  I'll definitely never use it.  I don't object *provided* that it
doesn't become like the other strictures, things that people expect all
Perl scripts to use; I think it's an essentially worthless constraint.

> I've fairly frequently used code like the above, but I don't really like
> that code in the first place because the only purpose for the $args and
> @args is as temporaries. I like the way mjd describes it:  this is
> "synthetic" code. If you really did have distinct long-lived variables
> with the same name, then I bet it would be confusing.

I do this all the time and I don't find it confusing.  Please let's not
mandate programming style.  Often times the difference between the
variables changes some as the program proceeds, but context makes it quite
clear what's going on.

This strikes me as the same sort of meaningless style guideline as "all
variables must have names that are at least five characters long."

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-15 Thread Russ Allbery

Dan Sugalski <[EMAIL PROTECTED]> writes:

> If the symbol becomes content-free, perhaps the problem is with what
> made it useless, not with the symbol itself...

Wholeheartedly agreed.  If something is an array, it should start with @.
If we're adding language changes that introduce arrays that don't start
with @, that's the mistake.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-15 Thread Russ Allbery

> All variables should be C<$x>. They should behave appropriately
> according to their object types and methods.

No thanks.  I frequently use variables $foo, @foo, and %foo at the same
time when they contain the same information in different formats.  For
example:

$args = 'first second third';        # one string
@args = split (' ', $args);          # the same data as a list of words
my $i = 0;
%args = map { $_ => ++$i } @args;    # and as a word => position map

This is very Perlish to me; the punctuation is part of the variable name
and disambiguates nicely.  I'd be very upset if this idiom went away.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 99 (v1) Maintain internal time in Modified Julian (not epoch)

2000-08-14 Thread Russ Allbery

Tim Jenness <[EMAIL PROTECTED]> writes:
> On 14 Aug 2000, Russ Allbery wrote:

>> Day resolution is insufficient for most purposes in all the Perl
>> scripts I've worked on.  I practically never need sub-second precision;
>> I almost always need precision better than one day.

> MJD allows fractional days (otherwise it would of course be useless).

> As I write this the MJD is 51771.20833

Floating point?  Or is the proposal to use fixed-point adjusted by some
constant multiplier?  (Floating point is a bad idea, IMO; it has some
nasty arithmetic properties, the main one being that the concept of
incrementing by some small amount is somewhat ill-defined.)
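
A trivial sketch of the sort of property I mean (the exact digits printed
depend on the platform, but the inequality holds for IEEE doubles):

    printf "%.17f\n", 0.1 + 0.2;             # 0.30000000000000004, not 0.3
    print "unequal\n" if 0.1 + 0.2 != 0.3;   # prints "unequal"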

> At some level time() will have to be changed to support fractions of a
> second and this may break current code that uses time() explicitly
> rather than passing it straight to localtime() and gmtime().

Agreed.

I guess I don't really care what we use for an epoch for our sub-second
interface; I just don't see MJD as obviously better or more portable.  I'd
actually be tentatively in favor of taking *all* of the time stuff and
removing it from the core, under the modularity principle, but I don't
have a firm enough grasp of where the internals use time to be sure that's
a wise idea.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 99 (v1) Maintain internal time in Modified Julian (not epoch)

2000-08-14 Thread Russ Allbery

Nathan Wiger <[EMAIL PROTECTED]> writes:

>> Anyway, it doesn't matter; it's a lot more widely used than any other
>> epoch, and epochs are completely arbitrary anyway.  What's wrong with
>> it?

> I think the "What's wrong with it?" part is the wrong approach to this
> discussion.

That's exactly what I disagree with, I think.  I don't understand why this
would be the wrong approach to the discussion.  It seems to me that it
follows automatically from "epochs are completely arbitrary anyway."

> That being said, what we need to say "is it possible UNIX might not be
> perfect?" (hard to imagine, true... :-). More specifically, "is there
> something that would work better for putting Perl in Palm pilots,
> watches, cellphones, Windows and Mac hosts, *plus* everything else it's
> already in?"

How does it make any difference what epoch you use?  Why would this make
Perl more portable?

> No, but currently Perl IS forcing Windows, Mac, and BeOS users to
> understand what the UNIX epoch is.

In that case, I don't understand what the difference is between that and
forcing those users *plus* Unix users to understand what the MJD epoch is.

> There's some other advantages to MJD beyond system-independence.

But MJD isn't any more system-independent than Unix time.  Absolutely
nothing about Unix time is specific to Unix; it's just as portable as any
other arbitrary epoch.

> Namely, it allows easy date arithmetic, meaning complex objects are not
> required to modify dates even down to the nanosecond level.

Unix time allows this down to the second level already.  If we wanted to
allow it down to the nanosecond level through a different interface to
return something like TAI64NA or something, that would make sense to me.
What doesn't make sense to me is a change of epoch; I just don't see what
would be gained.
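
That is, the arithmetic people actually want already falls out of the
existing representation, down to the second:

    my $now      = time();
    my $tomorrow = $now + 24 * 60 * 60;   # one day later (ignoring DST)
    my $earlier  = $now - 90;             # ninety seconds ago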

I must be very confused.  I don't understand what we gain from MJD dates
at all, and the arguments in favor don't make any sense to me; all of the
advantages listed apply equally well to the time system we have already.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 99 (v1) Maintain internal time in Modified Julian (not epoch)

2000-08-14 Thread Russ Allbery

Tim Jenness <[EMAIL PROTECTED]> writes:

> Of course, "seconds since 1970" is only obvious to unix systems
> programmers.

I disagree; I don't think that's been true for a while.  It's certainly
familiar, if not obvious, to *any* Unix programmer (not just systems
programmers), as it's what time() returns, and pretty much any C
programmer will have used that at some point or another.  It's also so
widespread as to be at least somewhat familiar to non-Unix programmers.

Anyway, it doesn't matter; it's a lot more widely used than any other
epoch, and epochs are completely arbitrary anyway.  What's wrong with it?

> MJD is doable with current perl 32bit doubles. I use it all the time in
> perl programs and am not suffering from a lack of precision.

Day resolution is insufficient for most purposes in all the Perl scripts
I've worked on.  I practically never need sub-second precision; I almost
always need precision better than one day.

If we're aiming at replacing time, it has to provide *at least* second
precision, at which point I really don't see the advantage of MJD over
Unix time.  Why change something that works?

Is Perl currently using different epochs on different platforms?  If so, I
can definitely see the wisdom in doing something about *that* and
off-loading the system-local time processing into modules (although I can
also see the wisdom in leaving well enough alone).  But why not go with
the most commonly used and most widely analyzed epoch?

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 99 (v1) Maintain internal time in Modified Julian (not epoch)

2000-08-14 Thread Russ Allbery

Nathan Wiger <[EMAIL PROTECTED]> writes:

> The idea would be twofold:

>1. time() would still return UNIX epoch time. However, it
>   would not be in core, and would not be the primary
>   timekeeping method. It would be in Time::Local for 
>   compatibility (along with localtime and gmtime).

>2. mjdate() would return MJD. It _would_ be in core, and
>   it _would_ be the internal timekeeping method. All
>   of the new date functions would be designed to be based
>   off of it.

Here's the significant problem that I have with this:  It feels very much
like it's putting the cart before the horse.  Perl is fundamentally a Unix
language (portable Unix, to a degree).  Its core user base has always
been sysadmins and hackers with a Unix-like mindset, regardless of the
platform they're using.  As an example, I've written literally hundreds of
scripts that use Unix time in one way or another; it has innumerable
really nice properties and is compatible with all the other programs
written in other languages that I have to interact with.

By comparison, who uses MJD?  Practically no one.  It's a theoretically
nice time scale, but outside of the astronomy community, how many people
even have any idea what it is?

This appears to be a proposal to replace a *very* well-known time base
with very well-known and commonly-used properties with a time base that
practically no one knows or currently uses just because some of its epoch
properties make slightly more sense.  Unless I'm missing something
fundamental here, this strikes me as a horrible idea.

Unix's time representation format has no fundamental problems that aren't
simple implementation issues.  Negative values represent times before 1970
just fine.  The range problem is easily solved by making it a 64-bit
value, something that apparently we'd need to do with an MJD-based time
anyway.  And everyone already knows how it works and often relies on the
base being consistent with their other applications.
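
For instance, on platforms whose C library accepts a negative time_t:

    print scalar gmtime(0), "\n";         # Thu Jan  1 00:00:00 1970
    print scalar gmtime(-86400), "\n";    # Wed Dec 31 00:00:00 1969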

It really doesn't sound like a good idea to change all that.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 65 (v1) Add change bar functionality to pod

2000-08-14 Thread Russ Allbery

skud <[EMAIL PROTECTED]> writes:

> I don't think this is a language issue.  However, I don't believe
> there's a -doc working group yet, either.

> Is it time for a -doc group to form?

[EMAIL PROTECTED] already exists; maybe it should be blessed as a Perl 6
working group as well?

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 48 (v2) Replace localtime() and gmtime() with da

2000-08-11 Thread Russ Allbery

Buddha Buck <[EMAIL PROTECTED]> writes:

> UT and UTC are different scales, ref:
> http://tycho.usno.navy.mil/systime.html

I believe, as reflected on that page, that UT isn't a time scale in and of
itself, but a system of them (including UT0, UT1, and UTC as a weird
step-child based on TAI with corrections for UT1).

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 48 (v2) Replace localtime() and gmtime() with da

2000-08-11 Thread Russ Allbery

Nathan Wiger <[EMAIL PROTECTED]> writes:

> The problem is, many people on this list claimed that GMT != UTC,

Correct.

> and that machine time is only in GMT, making UTC dicey to derive.

Nope.  Other way around.  Machine time is only UTC; GMT has fractional
skew instead of leap seconds, making it rather incompatible with
computers.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 48 (v2) Replace localtime() and gmtime() with da

2000-08-11 Thread Russ Allbery

Jarkko Hietaniemi <[EMAIL PROTECTED]> writes:

> s/gmt/ut/

> IIRC GMT got obsoleted in the 70s by UT (Universal Time). 

Officially called UTC, so utcdate would be a better name I think.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: AGAINST RFC 48 (v1) Replace localtime() and gmtime() with da

2000-08-10 Thread Russ Allbery

Bart Lateur <[EMAIL PROTECTED]> writes:

> What's so hard? Subtracting 2 hours and 30 minutes from the official
> referential time (GMT)? Or the Daylight Savings Time rules?

It's not a problem of implementation.  It's a problem of semantics due to
the way Perl parses the language.

Suppose you call:

date (time, undef, -0230);

What does that mean in terms of time-zone offsets?  Are you subtracting
230 seconds from UTC?  230 minutes?  A negative octal number?  :)  The
syntax people are used to for specifying time zone offsets *looks* like a
number but actually isn't one.

You can require that it be passed as a string, but writing something like
the above would be a *very* common new user mistake.
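
For what it's worth, the octal trap is easy to demonstrate:

    my $zone = -0230;     # a leading zero means octal: 0230 is 152
    print $zone, "\n";    # prints -152, not -230, and certainly not -2:30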

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: AGAINST RFC 48 (v1) Replace localtime() and gmtime() with da

2000-08-10 Thread Russ Allbery

Bart Lateur <[EMAIL PROTECTED]> writes:

> As for the parameter's format: GMT is easy, you can pass "GMT" (or
> "+"). For localtime(), you often don't explicitely know the time
> zone and Daylight savings Time rule, so this looks like a good candidate
> for undef.

The string "GMT" is technically wrong.

I'm opposed to allowing one to pass in any sort of string for time zone
information; if you allow "GMT", people are going to expect to be able to
use "EST", and who knows what they actually mean.  If you want GMT, pass
an offset of 0.

Be careful about time zone offsets, btw, if the interface is going to
support them.  +0700 is *not* "700 minutes"; it's 7 hours and 0 minutes.
And there are half-hour time zones.  This is an area where there's a *lot*
of potential confusion; people in Newfoundland are going to expect to be
able to pass in -0230 and have that work, and that's interestingly hard.
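
Handling the conventional string form correctly means treating it as
hours and minutes rather than as a number; a rough sketch:

    my ($sign, $hours, $minutes) = ('-0230' =~ /^([+-])(\d\d)(\d\d)$/);
    my $offset = ($hours * 60 + $minutes) * 60;   # 9000 seconds
    $offset = -$offset if $sign eq '-';           # i.e. 2.5 hours west of UTC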

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: AGAINST RFC 48 (v1) Replace localtime() and gmtime() with da

2000-08-10 Thread Russ Allbery

Jonathan Scott Duff <[EMAIL PROTECTED]> writes:

> By "local timezone" do you mean that some sort of inspection happens to
> determine the local timezone and the date() intrinsically knows about
> it?  What about daylight savings time?

This all should be handled by the operating system.  If you call localtime
in C, you should get back local time, whatever the local time zone.  The
whole point is to not try to duplicate that information in Perl core.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: AGAINST RFC 48 (v1) Replace localtime() and gmtime() with da

2000-08-09 Thread Russ Allbery

John Tobey <[EMAIL PROTECTED]> writes:
> On Tue, Aug 08, 2000 at 10:47:10PM -0700, Russ Allbery wrote:

>> It's far worse than non-portable; it's completely insufficient.  The
>> POSIX TZ syntax cannot represent many real time zones.  You need the
>> Olson-style naming scheme which refers to entries in a fairly large
>> external database

> You mean the "EST5EDT", "US/Pacific", "America/New_York", and suchlike
> files in /usr/share/zoneinfo.

Specifically America/New_York, yes.  The other two names you list are from
inferior naming schemes that ideally should go away; they're basically
just aliases for the more accurate names and have been for a while.

Okay, I guess the question is whether the goal is to provide access to the
real time zones or to the system time zone library.  If the latter,
that's simpler, but a lot of systems don't have adequate time zone support
to do what you really want.  Particularly non-Unix systems; pretty much
all Unix systems at this point use *some* version of the Olson library,
although their data may be woefully out of date.

> Actually, I do use those indirectly, though probably non-portably, by
> localizing C<$ENV{TZ}>.  GNU Libc takes care of finding the zoneinfo
> file, but lamentably reparses it every time TZ changes and
> C is called.

That's fine; zoneinfo files are designed for that.  They're extremely fast
to parse.  That's what you want.

I'm not as worried about Unix, since most Unixes have a decent time zone
library.  I'm worried more about all the *other* platforms that Perl
supports; if we want time zone support to be portable, I'm pretty sure
we'd end up embedding the tz library, since otherwise the results are
going to vary wildly in quality from system to system.  (Even worse than
the situation with sprintf.)

I think this is a bit heavy-weight for core.

> My module does not parse old-style TZ formats,

Nothing should really ever use them anyway, so I wouldn't be too worried
about that.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: AGAINST RFC 48 (v1) Replace localtime() and gmtime() with da

2000-08-08 Thread Russ Allbery

John Tobey <[EMAIL PROTECTED]> writes:
> On Wed, Aug 09, 2000 at 02:22:22AM +0200, Bart Lateur wrote:

>> date() would be more general, and replace both. You can pass it a time
>> zone, ANY time zone, and it will tell you what time it is in that time
>> zone.

You're proposing embedding the full power of the Olson TZ library into
Perl core.  This is a nontrivial amount of data that changes four or five
times a year.  I really don't think this is a good idea.  Furthermore, the
only time zone database that can actually do this doesn't use the naming
scheme that you're probably used to.

> The JTobey::Date module uses the TZ environment variable (which, I'm
> told, is non-portable), the esoteric POSIX routines tzset and tzname,
> and some functions from the CPAN modules Date::Parse and Date::Format.

It's far worse than non-portable; it's completely insufficient.  The POSIX
TZ syntax cannot represent many real time zones.  You need the Olson-style
naming scheme which refers to entries in a fairly large external database
of time zones and their current and historic data, not just a wide variety
of bizarre daylight savings changes but time zone changes that often vary
by political whim.  (Like Australia fiddling with its daylight savings
rules this year because of the Olympics.)

People in the EU, where there's a standard for daylight savings, and
particularly people in the US, where we haven't changed our rules in quite
a while, often don't realize just how baroque this can get.

> It is designed to give it all an easy OO interface, and to be as
> correct as possible on systems like mine.  It is not expected to be
> very fast, portable, or locale-friendly.

> To overcome these problems would be a Herculean task which I simply
> doubt that anyone here is willing to do.  Therefore, I oppose the
> notion that Perl 6 will magically handle all this.

> -John

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Things to remove

2000-08-08 Thread Russ Allbery

Bennett Todd <[EMAIL PROTECTED]> writes:

> The poster you are replying to said "I use this in one-liners, and it's
> _dead_ handy."; that conjures up the idioms like

>   perl -nle 'print if 1.. ?^$?' [filename]

> which barfs out only the header; replace "if" with "unless" and it
> chops the head off.

Why do you need one-time matching here?  /^$/ should work fine.

I've very rarely found cases where ?? was useful and // didn't work, and
never in regular code.
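
That is, the same head-of-message trick works with an ordinary match:

    perl -nle 'print if 1 .. /^$/' message

The flip-flop turns on at line one and off at the first blank line, so
only the header block (plus that blank line) is printed.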

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 48 (v1) Replace localtime() and gmtime() with da

2000-08-06 Thread Russ Allbery

Perl6 RFC Librarian <[EMAIL PROTECTED]> writes:

> The C<$time> specifier can be followed by a C<$timezone> argument, which
> returns the date relative to that timezone. By default, the time is
> returned relative to the local timezone. You can get UTC, for example,
> by specifying C<UTC> or C<GMT> as the timezone.

># Access UTC information
>$scalar  =  date time, '%H:%M', 'UTC';  # return time in UTC
>$object  =  date time, undef, 'GMT';    # return object in UTC

># Explicity get ctime date for Eastern US time
># If $time is undef, time() is assumed
>$scalar  =  date undef, undef, 'EST';

Whatever you do, don't use those timezone names.  Is EST Eastern US time
or Eastern Standard Time in Australia?  The same abbreviation is used in
both places.

Naming of time zones is a *huge* rathole that you probably just don't want
to crawl into.  The short abbreviations are *not* standardized and are
quite frequently ambiguous.  There are three other prevalent time-zone
naming schemes:  the POSIX one (EST5EDT, for example) is completely
insufficient to actually represent time zone variations as they occur in
the real world, the "old Olson" found in most Unix operating systems these
days with names like US/Pacific doesn't offer enough granularity, and the
"new Olson" method (the best of the lot) uses names that most people don't
know (America/Los_Angeles for US Pacific for example).

Basically, you don't want to go anywhere near this mess; it eats people.

I see two reasonable options to go with instead.  One is to just use a
binary flag that says use UTC or not; this is the simplest and most
reliable to explain.  The other is to allow a timezone offset; this
doesn't deal with daylight savings time and historic time zone changes,
but it provides enough power for most of what people want to do and if you
want to deal with the rest you have to deal with time zone naming.
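
The binary-flag option in particular is only a sketch away from what the
existing builtins already provide:

    my $when  = time();             # or any other epoch value
    my @local = localtime($when);   # the local zone, as the OS understands it
    my @utc   = gmtime($when);      # UTC, no zone database required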

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC: multiline comments

2000-08-05 Thread Russ Allbery

Jarkko Hietaniemi <[EMAIL PROTECTED]> writes:

> I also confess to liking // more for till-end-of-line comment marker
> than #, the hash looks so messy to my eye...of course, // already has
> a meaning...

I'm the other way around.

This may depend a lot on whether one comes from a shell scripting
background or from a C++ background.  I strongly dislike C++ and other
than Perl primarily use C and shell, so # is the most natural to me and //
looks really odd.

Of course, like you said, we really can't use // anyway, as it's valid
Perl code and actually semi-frequently used.

I do agree that there's a lot to be said for using /* ... */ for multiline
comments, but then I'm a C programmer.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC: Rename local() operator

2000-08-05 Thread Russ Allbery

Nick Ing-Simmons <[EMAIL PROTECTED]> writes:

> What about C ?

> I think C or C has merit - "while I am out contact ...".

> But I still think C is the essence of what it does.

I like either C or C too, but just to throw out the other idea
that occurred to me, what's being done here is in other languages often
called shadowing.  What about C<shadow>?

shadow $/ = "\n";

seems to have the right implications to me.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Sublist auto*

2000-08-05 Thread Russ Allbery

Johan Vromans <[EMAIL PROTECTED]> writes:

> I would plea for autosubscribing perl6-language list members to every
> sublist that gets spawned. The reason is continuity.

Currently, I'm trying to deal with the volume of Perl lists by subscribing
to just the "top-level" lists and relying on the promised summaries from
the sublists.  That so far seems to be working pretty well; I feel like I
have a good overview of what's going on, without getting deluged.  I'd
really rather not automatically be put on the sublists, as I don't think I
want to receive them unless I care a lot about that particular topic.

Instead, what about a temporary freeze when each list is created?  Give it
a day or two after it's created before it will accept traffic; have the
traffic be held for that long while people subscribe.  Would that help
this problem?

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Preprocessing (Was: Re: Recording what we decided *not* to do, and why)

2000-08-05 Thread Russ Allbery

Johan Vromans <[EMAIL PROTECTED]> writes:

> I fail to see this point.
> Having a program depend on a preprocessing stage that, if skipped,
> would still result in valid but erroneous source seems dangerous to me.

No, the point is more that normal Perl source is *full* of active m4
characters.  Without quoting, all your paired quotes would disappear,
comments would be stripped even when they're not actually comments but
really regexes, and m4 wouldn't understand things like Perl strings and
regexes, so it would do substitutions where it shouldn't, etc.

The problem is not that you can skip the preprocessing stage, but rather
that as soon as you want to use m4 on a Perl program, you'd have to do a
*huge* amount of work on all the parts of the program you *don't* need to
preprocess just to be able to do things with the part that you do want to
preprocess.

cpp, on the other hand, has very few active constructs or characters, just
identifiers, function calls, and # at the beginning of a line.  It still
causes a few problems where it recognizes something it shouldn't, but it's
trivial to deal with compared to m4.
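
For instance, a file like this is still valid Perl before preprocessing,
because the cpp lines are just comments as far as Perl is concerned:

    #ifdef DEBUG
    warn "debug build\n";
    #endif

cpp keeps or drops the warn line depending on whether DEBUG is defined,
and none of the surrounding Perl has to be quoted or escaped for cpp's
benefit.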

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: RFC 29 (v1) unlink() should be left alone

2000-08-04 Thread Russ Allbery

Myers, Dirk <[EMAIL PROTECTED]> writes:

> If a remove() is added, it should (IMHO) seek-and-destroy.

This is impossible on a Unix file system.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>



Re: Recording what we decided *not* to do, and why

2000-08-04 Thread Russ Allbery

Steve Simmons <[EMAIL PROTECTED]> writes:

> m4.

> IMHO perl6 should continue the rich tradition of stealing from the best
> rather than re-inventing an only marginally better wheel.  m4 is better
> than cpp, and was intended to be a general macro package.  Are there
> versions available which are not strongly unfettered by license issues?

Yes, BSD m4 should be usable, and IIRC the OpenBSD version has sufficient
power to handle autoconf (which pounds the hell out of m4, much more so
than we'd be likely to).

However, cpp has the significant advantage that its active syntax is
designed to be embedded in a programming language, and its directives are
Perl comments.
This is *not* true of m4, which would be horribly, horribly confused by a
Perl script.  m4 was not designed with embedding in a programming language
in mind, and lots of things like macro invocation syntax and default
quoting characters would interact very poorly with Perl.

-- 
Russ Allbery ([EMAIL PROTECTED]) <http://www.eyrie.org/~eagle/>