Re: Perl 6 Summary for week ending 20020728

2002-08-01 Thread Russ Allbery

pdcawley [EMAIL PROTECTED] writes:

 Bugger, I used L<questionnaire|...> and pod2text broke it.
 http:[EMAIL PROTECTED]/msg10797.html

perlpodspec sez you can't use L<...|...> with a URL, and I'm guessing that
I just didn't look at that case when writing the parsing code in pod2text
because of that.
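
For the curious, the difference looks roughly like this (the URL is made up
purely for illustration):

    L<questionnaire|http://example.org/poll>    # text|URL form; perlpodspec forbids it
    L<http://example.org/poll>                  # bare URL form, which is allowed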

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: !< and !>

2001-09-02 Thread Russ Allbery

Bart Lateur [EMAIL PROTECTED] writes:

 Why is it <= and not =<?

Because in English, it's "less than or equal to" not "equal to or less
than", I presume.

 Simply trying to remember the order of characters might be (a bit of) a
 pain. That problem doesn't exist with !< and !>.

Every other programming language I've ever seen uses <= and >=.  I think
adding additional comparison operators not found in any other language and
identical to (and harder to type than!) existing operators is a really bad
idea.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: !< and !>

2001-09-01 Thread Russ Allbery

raptor [EMAIL PROTECTED] writes:

 I was looking at Interbase SELECT syntax and saw these two handy
 shortcuts :

 <operator> = {= | < | > | <= | >= | !< | !> | <> | !=}

 !< and !>

How is !< different from >=?

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: !< and !>

2001-09-01 Thread Russ Allbery

Sterin, Ilya [EMAIL PROTECTED] writes:
 From: Russ Allbery [mailto:[EMAIL PROTECTED]]

 How is !< different from >=?

 It's just more syntax just like foo != bar 
 is the same as (foo < bar || foo > bar).

 It might prove convenient to express the expression.

It's the same number of characters.  How can it be more convenient?

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: ~ for concat / negation (Re: The Perl 6 Emulator)

2001-06-21 Thread Russ Allbery

Simon Cozens [EMAIL PROTECTED] writes:
 On Thu, Jun 21, 2001 at 10:31:22PM +0100, Graham Barr wrote:

 We can have a huge thread, just like before, but until we see any kind
 of update from Larry as to if he has changed his mind it is all a bit
 pointless.

 For what it's worth, I like it.

So do I, actually... it's sort of growing on me.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Python...

2001-06-05 Thread Russ Allbery

David Grove [EMAIL PROTECTED] writes:

 Perl is far more practical than experimental.

 Not at the moment. That's the problem.

Pretty much everything proposed, even in the wildest RFCs during the
brainstorming phase, was still stuff that's been done elsewhere by other
languages.  That's the practical vs. experimental distinction that I'm
drawing.  I realize that you don't like the direction that Perl 6 design
is heading, but it's still not heading towards being an experimental
language.  I've seen some *real* experimental languages; they're a lot
more unconventional.

You can still trace nearly everything that was proposed back to C, Lisp,
or Generic Object-Oriented Language, if not in inspiration than at least
in fundamental similarities.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Python...

2001-06-03 Thread Russ Allbery

Vijay Singh [EMAIL PROTECTED] writes:

 I always expected Perl to be leading the way, *the* language that broke
 new ground...where only camels dared to tread...

Er... that strikes me as a strange expectation.  I can't think of much in
Perl that hasn't appeared elsewhere earlier.  Perl makes a lot of already
developed ideas practical, but breaking new ground isn't really its forte.

If you want to look at languages that are breaking new ground, I recommend
Objective Caml, or Haskell, or Mercury, or even Eiffel.  Languages like
Perl and Python are really almost entirely just attempting to make
practical ideas already explored in other practical and experimental
languages.

Perl is far more practical than experimental.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Curious: -> vs .

2001-04-25 Thread Russ Allbery

Nathan Wiger [EMAIL PROTECTED] writes:

- C compatibility. One of Perl's great strengths
  over other HLL's is C compatibility. Though
  this is still arguably not as good as it can be, 
  why distance ourselves from the language we're
  trying to interact with?

You're thinking of objects as references and references as akin to
pointers, which makes sense because that's how they're implemented in Perl
5.  If you think of objects as their own entities, however, or think of
references as something other than pointers (in particular, something that
doesn't require explicit dereferencing), then using . to access object
members is entirely compatible with C.

I tried to make this point before, but I don't think people understood
what I was getting at.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Strings vs Numbers (Re: Tying Overloading)

2001-04-24 Thread Russ Allbery

Bart Lateur [EMAIL PROTECTED] writes:

 My vote is to ditch the concat operator altogether. Hey, we have
 interpolation!

   $this$is$just$as$ugly$but$it$works

How do you concatenate together a list of variables that's longer than one
line without using super-long lines?  Going to the shell syntax of:

PATH=/some/long:/bunch/of:/stuff
PATH=${PATH}:/more/stuff

would really be a shame.
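
With a concatenation operator you can build the same value across lines
without restating the variable; a minimal sketch (the paths are made up):

    my $path = '/some/long:/bunch/of:/stuff'
             . ':/more/stuff'
             . ':/and/more/still';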

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: s/./~/g

2001-04-24 Thread Russ Allbery

Branden [EMAIL PROTECTED] writes:

 1) Use $obj.method instead of $obj->method :

 The big question is: why fix what is not broken? Why introduce Javaisms
 and VBisms to our pretty C/C++-oid Perl? Why brake compatibility with
 Perl 5 code (and Perl 5 programmers) for a zero net gain?

$obj.method isn't a Java-ism; it's used by both C++ and by Simula (for
class variables), and in C for struct members, which given the origins of
those languages means I wouldn't be surprised if it were in Algol.

The switch from -> to . makes perfect sense from a C perspective if we're
turning objects into first-class entities rather than pointers; think
about a struct versus a pointer to a struct.

-> makes you remember that things are pointers.
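
A rough sketch of the contrast; Widget is a made-up class, and the dot line
is the proposed syntax, commented out since it isn't valid Perl 5:

    package Widget;
    sub new  { return bless {}, shift }
    sub spin { print "whirr\n" }

    package main;
    my $obj = Widget->new;   # Perl 5 today: $obj is a reference, so the arrow
    $obj->spin;              # spells out the dereference, like ptr->member in C
    # $obj.spin;             # proposed: the object as a first-class entity,
                             # read like struct.member in C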

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: s/./~/g

2001-04-24 Thread Russ Allbery

David M Lloyd [EMAIL PROTECTED] writes:
 On 24 Apr 2001, Russ Allbery wrote:

 The switch from -> to . makes perfect sense from a C perspective if we're
 turning objects into first-class entities rather than pointers; think
 about a struct versus a pointer to a struct.
 
 -> makes you remember that things are pointers.

 What's wrong with using both?  You could use - if you're working with a
 reference to an object, and you could use . if you're working with the
 object itself.

It seems relatively unlikely in the course of normal Perl that you're
going to end up with very many references to objects.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Larry's Apocalypse 1

2001-04-15 Thread Russ Allbery

John Porter [EMAIL PROTECTED] writes:
 Piers Cawley wrote:

 Unless you can get at every single one of those and add a '-M5' switch,
 then they aren't going to work. Which could be very bad indeed.

 The analogous situation with p4->p5 wasn't so bad.  People just kept
 their p4 binaries around for running those old scripts.  No biggie.

There's quite a lot more Perl 5 code out there than there was Perl 4 code.
And it's rather annoying to still be maintaining a perl4 installation at
this point for the stragglers, although I suppose that can't be helped.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Larry's Apocalypse 1

2001-04-05 Thread Russ Allbery

Nathan Torkington [EMAIL PROTECTED] writes:

 Not a comment at all on it?  Was I accidentally unsubscribed to
 perl6-language?

 *tap* *tap* is this thing on?

Using module/class instead of package is exactly the same route that LaTeX
took in the transition from 2.09 to 2e.  It works quite well, and has also
meant that because of the automatic triggering of the compatibility code,
there are still lots of 2.09 documents out there (what, 10 years later?)
that still process just fine with current versions of LaTeX.

The rest of what Larry said included little that wasn't about what I
expected, so I didn't have much additional response, apart from saying
that that was rather more Perl 5 compatibility than I was expecting.
Interesting.

Oh, and I wholeheartedly approve of the approach to handling objects.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: pitching names for the attribute for a function with no memory or side effects

2001-03-31 Thread Russ Allbery

Frank Tobin [EMAIL PROTECTED] writes:

 Just because one programming paradigm happens to name it "pure" doesn't
 mean that name should be carried over to other paradigms.  In a
 functional-programming context, sure, "pure" might be a good name.  But
 in a non-functional context, the name has little meaning with regards to
 the concept of "nosideeffects".

It looks like I was misremembering; I remember a proposal for a "pure"
attribute in gcc, but it looks like the attribute used for functions with
no memory references and no side effects is "const" (a la C++).  I think
"pure" was proposed for the somewhat relaxed version of that that allowed
memory references but not side effects.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: pitching names for the attribute for a function with no memory or side effects

2001-03-30 Thread Russ Allbery

Dan Sugalski [EMAIL PROTECTED] writes:

 Doesn't have the right ring to it, unfortunately. It's not really
 immutable, it just has no side-effects.

gcc and the literature both use "pure"; I'd recommend that.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: What can we optimize (was Re: Schwartzian transforms)

2001-03-29 Thread Russ Allbery

Dan Sugalski [EMAIL PROTECTED] writes:

 Aliasing is actually one of the bigger problems with C, or so I'm lead
 to believe. It gets in the way of a number of optimizations rather
 badly. (So say some of Compaq's C and Fortran compiler folks, and I have
 no reason to doubt them. The Fortran compiler often generates faster
 code than the C compiler for this reason apparently)

Hence the introduction of the restrict keyword in C99 and several of gcc's
attribute extensions for marking pure functions to try to get a handle on
the problem.  *wry grin*  Yeah, that's the main thing that gets in the way
of optimizing C.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: What can we optimize (was Re: Schwartzian transforms)

2001-03-29 Thread Russ Allbery

James Mastros [EMAIL PROTECTED] writes:

 Ahh, bingo.  That's what a number of people (inculding me) are
 suggesting -- a :functional / :pure / :stateless /
 :somthingelseIdontrecall attribute attachable to a sub.

The experience from gcc, which has a similar attribute, is that such an
attribute will be fairly rarely used and that most of your gains will come
from managing to teach the compiler to figure out that information for
itself.

This will probably be harder in Perl than in C because C can afford to
take more time to do global optimization passes.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Schwartzian transforms

2001-03-28 Thread Russ Allbery

Dan Sugalski [EMAIL PROTECTED] writes:

 I'm actually considering whether we even need to care what the
 programmer's said. If we can just flat-out say "We may optimize your
 sort function, and we make no guarantees as to the number of times tied
 data is fetched or subs inside the sort sub are called" then life
 becomes much easier.

I am strongly in favor of that approach.  I see no reason to allow for
weird side effects in Perl 6.  (Perl 5 would be a different matter, of
course.)

Not only is it simpler to deal with, it's simpler to *explain*, and that's
important.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Schwartzian Transform

2001-03-26 Thread Russ Allbery

Uri Guttman [EMAIL PROTECTED] writes:
 "SC" == Simon Cozens [EMAIL PROTECTED] writes:

   SC No, it wouldn't, don't be silly. The ST can always be generalized to 

   SC ST(data, func, compare) =
   SC map { $_->[0] } sort { compare($a->[1], $b->[1]) } map { [$_, f($_)] } data

 and i don't see multiple keys or sort order selection per key.

Then you need to look at f and compare a little closer, since it's in
there.

 and even creating a function to extract the key is not for beginners in
 many case.

Without creating a function to extract the key, you can't sort in Perl at
all.  sort { $a <=> $b } contains two functions to extract the keys.

Functions don't have to be complicated, you know.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Schwartzian Transform

2001-03-26 Thread Russ Allbery

Dan Sugalski [EMAIL PROTECTED] writes:

 You're ignoring side-effects. The tied data may well be returned the
 same every time it's accessed, but that doesn't mean that things aren't
 happening behind the scenes. What if we were tracking the number of
 times a scalar/hash/array was accessed? Memoizing would kill that.

Hm.  I don't really understand why this would be significant unless you're
actually benchmarking Perl's sort.  Unless you care about the performance
of Perl's sort algorithm, the number of times each element is accessed in
a sort is *already* indeterminate, being a function of the (hidden) sort
implementation, and will vary a lot depending on how ordered the data
already is.

Counting on side effects determined by the *number* of times elements are
accessed during a sort sounds pretty twisted to me.  I can see a few YAPHs
with such properties, but I don't think we were guaranteeing that Perl 6
would be YAPH-compatible anyway.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Schwartzian Transform

2001-03-26 Thread Russ Allbery

Uri Guttman [EMAIL PROTECTED] writes:
 "RA" == Russ Allbery [EMAIL PROTECTED] writes:
   RA Uri Guttman [EMAIL PROTECTED] writes:

 map { $_->[0] } sort { compare($a->[1], $b->[1]) } map { [$_, f($_)] } data
^^^   ^^^

   RA Then you need to look at f and compare a little closer, since it's in
   RA there.

 and there is only extracted key being compared to another at the same
 level, not multiple key levels. think about sorting by state and THEN
 town. you can't do that with $a and $b and one f().

Yes.  You can.

Don't assume $a->[1] is a simple scalar.  What prevents f() from returning
an array ref?
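
A minimal sketch of what I mean, with made-up field names and @records
standing in for your data: one f() returns both keys in an array ref, and
one compare() does the state-then-town ladder:

    sub f       { my ($rec) = @_; return [ $rec->{state}, $rec->{town} ] }
    sub compare { my ($x, $y) = @_;
                  return $x->[0] cmp $y->[0] || $x->[1] cmp $y->[1] }

    my @sorted = map  { $_->[0] }
                 sort { compare($a->[1], $b->[1]) }
                 map  { [ $_, f($_) ] } @records;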

 so you need multiple compare ops and multiple f()'s.

No, you don't.

 the point is that you have to generate the ladder compare code as well
 as the calls to your f()'s.

Yes, you have to write the comparison and data manipulation function for
Perl; Perl isn't going to be able to figure it out for itself.  But that's
true regardless of the sorting method; you're always going to have to tell
Perl what the keys are and how to compare them.

You have to write slightly more code if you separate the extraction
function f() from the comparison function compare() since if the key
structure is complex, f() has to build a data struction that compare()
takes apart.  That makes the memoizing approach superior.

   RA Without creating a function to extract the key, you can't sort in
   RA Perl at all.  sort { $a <=> $b } contains two functions to extract
   RA the keys.

 huh? $a and $b are not functions but aliases to the current pair of
 keys (at the primary key level).

Is sub { $a } a function?  $a is equivalent to that.  One way to look at
this is that Perl lets you simplify the function if all you need is the
basic data unit.

 i don't seen any functions in what you show there. you don't need a
 function or even an ST to sort complex records.

{ $a <=> $b } is a function.  (Well, it's a code block, but the difference
is quibbling.)

My point is that writing functions isn't nearly as complicated as you make
it sound.  Almost every time I write a sort, map, or grep in Perl, I write
a function.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Schwartzian Transform

2001-03-26 Thread Russ Allbery

map { $_->[0] } sort { compare($a->[1], $b->[1]) } map { [$_, f($_)] } data

Uri Guttman [EMAIL PROTECTED] writes:

 i never assumed that. but your ST example above shows it like that. you
 still have to do a ladder compare with $a and $b do make the ST work
 with multiple keys. each one needs to be given the sort order and
 compare op as well.

That's what compare() does.  compare() is a Perl function.  It can do
anything you want.

 that is my whole point of why putting this into the language is
 silly. it is too open ended for amount of work perl would have to do
 vs. the amount of coding you save. you save very little as you are doing
 most of the work yourself in the f() key extraction subs.

The purpose served is that it's conceptually simpler to tell Perl "here's
how to extract keys and here's how to compare them; now sort this data
structure" than it is to tell Perl "convert this data structure into a
different one and then extract keys from it like follows and compare them,
then transform the structure back."  The first route is closer to the way
that people are intuitively thinking.  It doesn't matter to me that the
first isn't going to be that many fewer characters of Perl code than the
second.  I *understand* it better.

It is true that it can be done in a module.  Most things in Perl can.  It
matters very little to me whether it's a standard module or built into the
language; I just think that it should be possible to tell sort to make
this sort of thing easier.
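
Something like this hypothetical wrapper (sketched only to illustrate the
interface; it's not an existing module) would be enough for my purposes,
with an f() and compare() like the ones upthread:

    sub sort_with_keys {
        my ($extract, $compare, @data) = @_;
        return map  { $_->[0] }
               sort { $compare->($a->[1], $b->[1]) }
               map  { [ $_, $extract->($_) ] } @data;
    }

    my @sorted = sort_with_keys(\&f, \&compare, @records);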

   RA You have to write slightly more code if you separate the
   RA extraction function f() from the comparison function compare()
   RA since if the key structure is complex, f() has to build a data
   RA struction that compare() takes apart.  That makes the memoizing
   RA approach superior.

 and how is this ladder compare built?

The programmer writes it.

 but you don't autogenerate the code in the block.

I haven't heard anyone talking about autogenerating everything other than
the code that wraps each element of the list in an anonymous array holding
the element and the key(s) and then extracts the key(s) for the comparison
function.  That part of the code is identical in every ST that I write.

 it is your code. the supposed goal of this hypothetical builtin ST is to
 make it easier to use it. i say it is not worth the effort since you
 have to do almost as much work anyway.

Less mental effort is the important part, not how many characters have to
be typed.  I don't want to be thinking about that extra level of arrays,
and until you've written *lots* of ST's, you can't ignore it.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: The binding of my (Re: Closures and default lexical-scope

2001-02-18 Thread Russ Allbery

Bart Lateur [EMAIL PROTECTED] writes:

 That doesn't mean that advocates for either side don't have anything
 interesting to say. For starters, it's usually dissatisfaction with
 certain aspects of some languages that causes the birth of yet another
 new language, such as PHP (which is more a different programming
 platform than really a different, full blown language) and Ruby.

Sure.  However, when it's being presented in the fashion that it's being
presented in this thread, it hits my mental filters and is completely
worthless to me.

Compare and contrast with the way we discussed JWZ's disagreements with
Java.

I think it's possible for intelligent adults to figure out how to talk
about the things about Perl they don't like without advocating another
language as better, without insulting people, and without using
over-the-top whining that may have been intended to be funny and ended up
just being stupid and grating.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: The binding of my (Re: Closures and default lexical-scope

2001-02-17 Thread Russ Allbery

So since when did perl6-language become perl-advocacy?  Rephrased:  Could
people please take the advocacy traffic elsewhere where it isn't noise?
Thanks.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: TIL redux (was Re: What will the Perl6 code name be?)

2000-10-23 Thread Russ Allbery

Uri Guttman [EMAIL PROTECTED] writes:

 not a good sign but we may need to take the hit to support overloading
 any function and supporting TIL and threads. i think a 20% hit to get
 those working cleanly might be a decent tradeoff.

I don't.  I'd find it to be a really good reason to learn Python.

 the TIL speedup over pure interpretation might win that back and
 more.

If that's true, that's a different ballgame of course.

If at all possible, Perl 6 should be *faster* than Perl 5.  Perl is
already too slow IMO.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 288 (v2) First-Class CGI Support

2000-09-30 Thread Russ Allbery

Bart Lateur [EMAIL PROTECTED] writes:

 But anyway: whould this imply that URL- and simple HTML escaping and
 back, will now be available through pack()/unpack()? Just like UUE?
   ;-)

Adding base64 encoding/decoding and quoted-printable would also be useful.
Either that, or taking uuencode out of pack and putting it plus those
other things into a standard module.
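
For base64 and quoted-printable, the module route already exists on CPAN as
MIME::Base64 and MIME::QuotedPrint, which is roughly the shape I have in mind:

    use MIME::Base64      qw(encode_base64 decode_base64);
    use MIME::QuotedPrint qw(encode_qp decode_qp);

    my $b64 = encode_base64("some raw bytes");
    my $raw = decode_base64($b64);
    my $qp  = encode_qp("text that needs = signs escaped");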

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 327 (v2) C<\v> for Vertical Tab

2000-09-29 Thread Russ Allbery

David Olbersen [EMAIL PROTECTED] writes:
 From: Russ Allbery [mailto:[EMAIL PROTECTED]]

 Just out of curiosity, and I'm not objecting to this RFC, has anyone
 reading this mailing list actually intentionally used a vertical tab
 for something related to its supposed purpose in the past ten years?

 I don't even know what a vertical tab is, it doesn't sound like anything
 very useful.

It advances the paper of your hardcopy terminal a terminal-setting-
defined number of lines, usually about eight.  The last time I used a
vertical tab intentionally and for some productive purpose was about 1984.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Expunge use English from Perl? (was Re: Perl6Storm: Intent to RFC #0101)

2000-09-28 Thread Russ Allbery

Andy Dougherty [EMAIL PROTECTED] writes:

 I find that I don't remember many of the less-frequently-used perlvars
 (where less-frequently-used depends on the types of programs I write,
 obviously).  I certainly couldn't tell you off-hand the differences
 among $< $> $( and $).  I'd have to look them up.

I never understood why these were variables.  You don't change UIDs or
GIDs that often, and when you do you tend to want precise control and
because they're variables, they have weird interaction semantics and you
have to assign to them in just the right order to get done what you want
to get done.  See recent threads on comp.lang.perl.moderated.

I'd honestly rather see getuid, geteuid, getgid, getegid, and getgroups,
along with some consistent and complete subset of the setting functions
(with portability magic behind the scenes), in a separate module that only
those programs that need to do UID fiddling need to load.

I guess the exception is getpwuid($<), which is probably done more than
any other operation on UIDs, but maybe just keep that single variable.
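
A sketch of the function-style interface I mean; the getters are already in
the POSIX module, and the setter side would live alongside them:

    use POSIX qw(getuid geteuid getgid getegid);

    my $name = getpwuid(getuid());   # the common case: who am I?
    printf "running as %s (uid %d, euid %d)\n", $name, getuid(), geteuid();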

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Perl6Storm: Intent to RFC #0101

2000-09-27 Thread Russ Allbery

Robert Mathews [EMAIL PROTECTED] writes:
 Nathan Wiger wrote:

 How many people really "use English" other than beginners?

 I would use it, but I heard a nasty rumor that it incurs the same
 penalty as using $' and such.  I try to avoid too much line noise in
 code that has to be maintained.

I have a very serious problem with use English, namely that it makes Perl
code much more difficult to read and maintain for people who know Perl.
Writing something that's marginally easier to understand for a beginner
and harder to understand for an expert doesn't strike me as a good idea.

I know what $/ does; I double-take at $INPUT_RECORD_SEPARATOR and am never
sure whether it's a user's personal global variable or $/ or some other
thing.  And $ARG and $MATCH both really look like global variables to me
and I'd hunt through the program trying to find where they're defined
for a while before realizing they're weird use English things.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Perl6Storm: Intent to RFC #0101

2000-09-27 Thread Russ Allbery

Robert Mathews [EMAIL PROTECTED] writes:

 ... and don't know use English.  Why can't they learn to use it?

Why can't the new users of Perl learn the real variable names?

I guess I don't buy the argument that the real names are harder to learn.
Most of them have fairly useful mnemonics, you see them and use them
constantly so they become familiar quickly, and most Perl code out there
uses them.

 Are you saying that nothing is worth knowing unless the oldsters know it
 already?

\begin{rant}

No, I was not saying that.  I was saying exactly what I said.  I meant
what I said.  If I'd meant something else, I would have said that instead.

\end{rant}

 It's not that I want to jam English down everyone's throats.  But Nate
 asked, "does anyone want this," and I said, "yes."  Or at least, I would
 want it if it worked.

Hey, I'm not claiming you're trying to jam anything anywhere.  We were
discussing use English, and I'm expressing my opinion just like you are.
I've found the use of use English in code I had to maintain to be annoying
and unhelpful, and to actually degrade the maintainability of the code, so
I threw in my two cents.

 You'd learn to recognize the long variable names if you used English
 regularly.  It's a chicken-and-egg problem, but not a very difficult
 one.

I've yet to understand why I'd *want* to use English regularly; so far as
I can tell, it has essentially no benefit in the long term.  Perl is not
now, nor is it likely to ever be, a language that's particularly readable
by people who don't know Perl, and use English in order to learn the
strange names used by use English strikes me as rather circular.  Either
the person maintaining the code learns Perl, in which case the use English
names won't be necessary, or they don't, in which case they're unlikely to
be able to maintain the code anyway.

I know it's not the only stance to take, but I prefer to try to make my
Perl code very readable by people who know Perl, and encourage people who
don't know Perl who are trying to read my code to learn Perl first, or at
the same time.  There are certainly languages out there that are more
readable for people who don't know the language at all than Perl is, but I
don't find this a particularly important feature in a language.  In those
cases where it is, I'd use a language other than Perl.

use English doesn't really address the syntactical points of Perl that
make it hard to read for someone who doesn't know Perl; it strikes me, and
always has struck me, as a bad partial solution to a problem that may not
need to be solved and that only makes things more complicated in the long
run.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 48 (v4) Replace localtime() and gmtime() with date() and utcdate()

2000-09-26 Thread Russ Allbery

Jonathan Scott Duff [EMAIL PROTECTED] writes:

 Do you mean local time now or local time for all time?  The former is
 easy, the latter hard.  Well, it's not hard for those places where the
 offset from UTC has remained (mostly) constant, but there are some
 places that have an offset from UTC that is a function of time more
 complex than daylight savings.

 Or would C<date()> just use C<localtime()> and punt to the OS/C
 RTL/etc.?

It should just punt.  ANSI/ISO C already requires that the C localtime
call deal with all of this.  We can look at providing our own localtime if
the system is grossly deficient in this respect, but that's an internals
rather than a language issue.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 283 (v1) C<tr///> in array context should return a histogram

2000-09-26 Thread Russ Allbery

Paris Sinclair [EMAIL PROTECTED] writes:

 But as soon as a person labels me a minority, and implies that because I
 have been labeled such that I am a rioter, and that my opinions are
 based upon this label, then your choices are to filter me, or to listen
 to me protest.

Then perhaps you shouldn't have labelled him Euro-centric if you didn't
want a sarcastic response in kind.

I'd just prefer that we discussed the technical issues without this
pointless bickering.  If you were offended, fine; say you were offended
and move on.  I was offended by your implication that people who don't
agree with you are saying that only European scripts matter.  But please
don't escalate the argument as part of being offended.

I'll now stop replying to this thread.  Sorry for sticking my nose in; it
really bugs me when this happens in i18n discussions.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: perl6storm #0011: interactive perl mode

2000-09-23 Thread Russ Allbery

Philip Newton [EMAIL PROTECTED] writes:
 On Thu, 21 Sep 2000, Tom Christiansen wrote:

 =item perl6storm #0011

 perl w/o args with stdin and out ttys should be perl -de 0.
 saves novices from typing "perlCR" and getting confuddled.

 I think it should print out a banner message, too.

 A couple of times I was wondering whether perl was installed on a
 machine and typed 'perl' to see -- and "nothing happened". (I suppose
 either of `which perl` or `perl -v` would be a better way to find out,
 but still.)

 Having Perl tell me 'this is perl5.7.0\n ' or similar would have been
 nice. But that's just me.

As long as it's possible to get the current "perl" behavior; I actually
use that a lot.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-23 Thread Russ Allbery

Glenn Linderman [EMAIL PROTECTED] writes:
 Russ Allbery wrote:

 Perhaps I don't use those warnings in the same way that you do.  I
 *very* rarely have undefined value warnings in my programs, and when I
 do they're usually not actually bugs, just things that require a
 different way of writing to be -w clean.  So I don't have as high of an
 opinion of this warning as being particularly important to debugging; I
 only find it useful in certain particular circumstances.

 I can't say that I often get the warning, but when I do, I find it
 generally results from a bug.  So something about the way we write code
 is different, I guess.

Most likely.  It caught stuff for me before use strict (generally variable
typos) but with use strict, those warnings tend to be either checking for
keys in aggregate structures when they've not been initialized (which is
mostly just an annoyance) or they're just a symptom of something going
wrong somewhere else entirely and aren't particularly helpful in tracking
down where it's going wrong.

 I find this absolutely amazing.  You've now convinced me you understand
 the arguments I've been making, and the issues I'm concerned
 about... and yet you still hold this opinion.  Certainly we have a
 difference of opinion here.

It's quite possible that I'd have a different opinion if I used it for a
while; I don't know.  I think it's worth trying it with undef first and
writing some code that way and seeing how it works and how hard it is to
debug.

 Russ, I apologize.  I confused you with someone else in this posting.  I
 looked back over your postings, and you, unlike those that seem to just
 hate SQL, have repeatedly expressed interest in using the tristate
 semantics.  I've been trying to keep my nose clean regarding remarks
 like this, but I guess my frustration level finally got the better of
 me...and perhaps partly, I guess I stayed up too late last night and
 probably shouldn't be posting this late tonight either.

No problem; it's easy enough to do.  :)

 Maybe the enlightenment is shed by your earlier remark: you don't find
 the undef warnings to be particularly important to debugging.  So maybe
 that is the reason that you don't see the need to concurrently have both
 sets of semantics available?

That's quite possible.

 Since you don't need the current set of semantics?

The main thing I use undef for is in areas where I'm checking with
defined, which I would assume would continue to work regardless of the
selected semantics of undef.  Having undef propagate would make it useful
in additional areas (or at least I think it would).  From writing language
parsers, I found that it's useful in areas other than SQL to have a
distinguished value that propagates through any arithmetic operation.

 Going back to your first remark about seeing confusion either way, maybe
 explaining the types of confusion that you see with a separate null and
 undef vs the types of confusion that you see with a tristate pragma
 would help me to grasp that logic.

The main thing I'm worried about with undef plus null is that undef is
already very hard to explain and having an additional parallel concept
that behaves slightly differently and that can easily be confused with
undef is worrisome.  The advantage of explaining a tristate pragma is that
with normal undef semantics, most times undef shows up in an arithmetic or
logical operation other than a simple test of true or false, it's
symptomatic of poorly-constructed code; increments are about the only
exception.  So the area that tristate logic changes is not something that
we recommend that people use under normal circumstances.

 And if/when my database needs require the use of multiple different NULL
 values (currently they are not there, multiple NULL values do get talked
 about by relational theorists, and there is some move to put them into
 the SQL standard, but it appears they haven't yet appeared in one) I see
 having multiple "special non-values" (as someone else called them) much
 simpler to extend to the concept of multiple NULL values than the pragma
 approach.

Hm.  Yes, that's a good point.  (At that point, something more like
Quantum::Superpositions may be more what you want.)

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-21 Thread Russ Allbery

Glenn Linderman [EMAIL PROTECTED] writes:
 Philip Newton wrote:

 Having $seen{$word}++ turn $seen{$word} to undef is bad,

It doesn't "turn it to undef"; if you're using tristate semantics, it
leaves it as undef, because those are the semantics you've selected for
undefined values.

 if (undef)++ assumes NULL semantics everywhere, hence "one more than
 unknown" = "still unknown".

No one's proposing that.  People are proposing the ability to turn on NULL
semantics where you need it.

 Right.  Applying NULL semantics to undef would be bad.  The
 counterproposals to RFC 263, along the lines of "use tristate", seem to
 overlook this sort of situation.

I'm not overlooking it; I just don't agree with you.  There *is* a
difference.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-21 Thread Russ Allbery

Glenn Linderman [EMAIL PROTECTED] writes:

 In my opinion, which you probably will also not agree with, attempting
 to toggle between the current undef semantics and tristate semantics is
 like trying to stuff three values into one bit.

I do understand the argument.  I just see confusion either way, and I
think that approach would be the least confusing and allow the code to
remain the most Perl-like.  I can see arguments the other way; that's just
my opinion.

 The problem is, when you toggle the pragma, all variables whose value is
 undef suddenly have the tristate semantics, and when you toggle it back,
 all the variables whose value is undef suddenly have the undef
 semantics.  This leaves it purely to the programmer to make sure that
 the pragma is used in exactly the right places, and, when tristate
 semantics are in effect makes unavailable the normal, helpful warnings
 that Perl produces when you attempt to misuse undef values.

Perhaps I don't use those warnings in the same way that you do.  I *very*
rarely have undefined value warnings in my programs, and when I do they're
usually not actually bugs, just things that require a different way of
writing to be -w clean.  So I don't have as high of an opinion of this
warning as being particularly important to debugging; I only find it
useful in certain particular circumstances.

To me, toggling the semantics of the variables which are already undef
strikes me as just what I'd want.

 I guess that since you have no intention of using the tristate
 semantics, you don't care whether it is easy to code using them.

Comments like this are what is making it very difficult for me to continue
discussing this with you.  You don't actually know what type of Perl I
write or whether or not I'd use the semantics or not.  As a matter of
fact, I find them very interesting and fully do expect to use those
semantics if they're implemented in Perl, particularly given that I'm
likely to be doing a lot more database and SQL coding in the future than I
am currently.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-20 Thread Russ Allbery

Glenn Linderman [EMAIL PROTECTED] writes:
 Russ Allbery wrote:

 I agree with Tom; I think it's pretty self-evident that they're the
 same thing.  undef means exactly the same thing as null; that's not the
 problem.  The problem is that Perl doesn't implement the tri-state
 logic semantics that most users of null are used to, which is a
 different issue.

 So, to paraphrase your statement a bit:

 It is self-evident that they're the same, the problem is that they work
 differently.

No, that's not a paraphrase.  That's saying something completely different
which is wrong.

If undef functioned differently than null, that would be a bug.  What's
missing is a way to say "I want tri-state logic" as a pragma.  When that
pragma is enabled, undef would be the null-like state.

Perl already has exactly the data value that you're looking for.  This RFC
is proposing to fix the wrong problem; the things that need to be changed
(conditionally) are the logical operators, not the data value.

 Nota Bene: IEEE floating point defines two different concepts that are
 not numbers, but can be mixed with numbers in expressions: Inf and NaN.
 And actually, there are positive and negative varieties of both Inf and
 NaN.  So I guess you might say that they are the same; but the problem
 is that they work differently.

There are positive and negative infinities, but that's a different
situation entirely; infinity is a degenerate value, not an undefined
value.  This is the first time I've ever heard of -NaN; are you sure about
that?  (There are, in fact, different types of NaN, such as signalling vs.
non-signalling, but that's due to floating point traps and exceptions,
issues that don't crop up in the situations where you want undef/null.)

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-20 Thread Russ Allbery

Nathan Wiger [EMAIL PROTECTED] writes:

 undef has a very well-defined (ha!) Perl meaning: that something is
 undefined. null has a very well-defined RDBMS meaning: that something is
 unknown. Perl allows you to add and concatenate stuff to undef, because
 that value can be coerced into 0 and "" without harm.

This isn't a major loss with a pragma in effect since -w clean code
already can't do this.  I don't see the harm in changing this to null
semantics when you ask for that.

About the only piece of code of mine that this would affect are places
where I use ++ on an undef value, and that's not a bad thing to avoid as a
matter of style anyway (usually I'm just setting a flag and = 1 would work
just as well; either that, or it's easy enough to explicitly initialize
the counter to 0).

 Using the proposed tristate pragma does not strike me as any better - in
 fact, worse - than adding null() because you are now changing the
 meaning of fundamental Perl operations.

But that's exactly what you want to do.

 You're *still* introducing "yet another state of null", but to do so
 you're conflating undef and null, which are themselves different
 concepts.

I strongly disagree.  You're not changing the data types at all; you're
changing what Perl's operations (logical, addition, concatenation, etc.)
do with undefined values.  Instead of coercing to 0, you coerce to an
undefined value.
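
A sketch of the difference, with the pragma name purely hypothetical (nothing
like it exists yet):

    my $x;                 # undef
    my $sum = $x + 5;      # current semantics: undef coerces to 0, so $sum is 5

    # Under the proposed pragma:
    #   use tristate;
    #   my $sum = $x + 5;  # would leave $sum undef -- unknown plus 5 is unknown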

I really like this.  I could see lots of cases other than just databases
where this would be a useful thing to do with undef.  It becomes
considerably less useful if you introduce a new keyword, since then it
requires rewriting code.  Those undef semantics could be useful for error
checking in existing code.

 For example, assuming this code:

$name = undef;
print "Hello world!" if ($name eq undef);

So don't do that.  Use C<defined $name> if you want to ask that question.
Most code that I've seen already does that; checking equality with undef
is an odd way of writing it.  *If* you want to use the pragma, just always
write that as C<defined $name>.

 The same operation would print "Hello world!" in one circumstance, but
 nothing under the tristate pragma. This is just as dangerous as having a
 pragma like so:

use 'zeroistrue';
$num = 0;
print "Got data" if ( ! $num );

 Where the above would print out "Got data" normally, but not under the
 pragma.

I strongly disagree here too.  0 as false and 1 as true is an assumption
made in multiple other programming languages, something used by the
majority of Perl scripts that I write, and something that's very
intuitive.  undef semantics, on the other hand, are specific to Perl and
the default is chosen to be friendly to quick and dirty scripts.  Changing
those semantics to propagate undef makes perfect sense to me.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-20 Thread Russ Allbery

Jonathan Scott Duff [EMAIL PROTECTED] writes:

 Yep, this is bad IMHO.  Your concern is valid I think, but your
 "solution" isn't a good one.  Why not just use a module like Damian's
 Quantum::Superpositions?

No offense to Damian, but I tried to read and understand his documentation
and I thought I was back in grad school.  I don't think it's the fault of
the writing either; I think that Quantum::Superpositions is trying to do
something that's rather too complicated to explain clearly to the average
programmer.

It's a neat idea, but I don't expect to see it ever widely used.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 263 (v1) Add null() keyword and fundamental data type

2000-09-20 Thread Russ Allbery

Damien Neil [EMAIL PROTECTED] writes:

 If I could be assured that the performance penalty was minimal, I'd
 be delighted to write

   if ($errno == any(EAGAIN, EINTR)) { ... }

 over

   if ($errno == EAGAIN || $errno == EINTR) { ... }

 The former is less typing and reads more clearly (to me, at least).

Hm, yeah, good point.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 99 (v3) Standardize ALL Perl platforms on UNIX epoch

2000-09-14 Thread Russ Allbery

Bart Lateur [EMAIL PROTECTED] writes:

 Now, on those platforms without 64 bit support, a double float has a lot
 more mantissa bits than 32, typically 50-something (on a total of 64
 bits). This means that all integers with up to more than 50 significant
 bits can exactly be represented. That would be a lot better than the
 current situation of 32 bits.

Everything I've heard from anyone who's done work on time handling
libraries is that you absolutely never want to use floating point for
time.  Even if you think that the precision will precisely represent it,
you don't want to go there; floating point rounding *will* find a way to
come back and bite you.

Seconds since epoch is an integral value; using floating point to
represent an integral value is asking for it.

As an aside, I also really don't understand why people would want to
increase the precision of the default return value of time to more than
second precision.  Sub-second precision *isn't available* on quite a few
platforms, so right away you have portability problems.  It's not used by
the vast majority of applications that currently use time, and I'm quite
sure that looking at lots of real-world Perl code will back me up on this.
It may be significantly more difficult, complicated, or slower to get at
on a given platform than the time in seconds.  I just really don't see the
gain.

Sure, we need an interface to sub-second time for some applications, but
please let's not try to stuff it into a single number with seconds since
epoch.
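
Time::HiRes (on CPAN) is roughly the kind of separate interface I'd expect:
sub-second time comes back as its own pieces, and plain time() is untouched:

    use Time::HiRes qw(gettimeofday);

    my ($secs, $usecs) = gettimeofday();   # integer seconds plus microseconds
    my $plain          = time();           # still just seconds since epoch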

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 99 (v3) Standardize ALL Perl platforms on UNIX epoch

2000-09-13 Thread Russ Allbery

Chaim Frenkel [EMAIL PROTECTED] writes:

 One other that might be useful is have strftime() (or something similar)
 built-in without having to use POSIX; and the default should be
 YYYYMMDDHHMMSS.fff, (the ISO format)

The more commonly-used ISO format is the extended format rather than the
basic one:

YYYY-MM-DDTHH:MM:SS+hh:mm

(and yes, the T is part of the format).  More commonly, people use the ISO
extended date format and the ISO extended time format separately with a
space between them and the time zone also separated out:

YYYY-MM-DD HH:MM:SS +hh:mm
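
The existing POSIX strftime can produce the extended form directly (whether
%z gives you a numeric offset is platform-dependent, so treat that part as an
assumption):

    use POSIX qw(strftime);

    my $stamp = strftime("%Y-%m-%d %H:%M:%S %z", localtime());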

 I personally prefer to pass around the string representation, more
 that perl and unix systems need to handle datetime. (And I find it
 easier to read the ISO version than a time in seconds)

I agree.  The ISO format is better if you need to write out a date to a
file that you're reading later and you don't need absolutely maximum
speed, particularly if you have good tools to parse it and turn it back
into a native format again.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-17 Thread Russ Allbery

Karl Glazebrook [EMAIL PROTECTED] writes:

 o Why do I think perl has too much line noise? Because of code like this:

   @{$x->{$$fred{Blah}}}[1..3]

You're taking the value of the key "Blah" in the hash referred to by $fred
and using it as the key into the hash referred to by $x, treating the
value as an anonymous array and taking a slice containing the 2nd through
the 4th elements.

Hm.  Personally, I think that's a very *small* amount of line noise for
expressing an action so complicated it takes more than three lines of
English text to explain what's going on.  Expressions that do complicated
things are going to look complicated.

If you want to cut down on the line noise, temporary variables are the
standard tool:

my $key = $$fred{Blah};
my $array = $$x{$key};
@$array[1..3];

And finally, what causes all the line noise here are the curlies.
Removing the *one* @ in that expression isn't going to make it look any
simpler.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 84 (v1) Replace = (stringifying comma) with =

2000-08-16 Thread Russ Allbery

Damien Neil [EMAIL PROTECTED] writes:

 Arrays are ordered.  Hashes are not.  Sure, you can iterate over a hash,
 but add an element to one and you can change the order of everything in
 it.

Formally, I believe it's permissable for a hash implementation to return a
different order the second time you iterate through it from the first
time, even if you haven't touched the hash inbetween.  That's the
definition of an iterable but unordered data structure; there's some way
of getting all of the members one and only one time, but each time you
look at it the order in which the members show up may be different (maybe
garbage collection happened behind the scenes, the hash was reorganized
due to an observation of how you were using it, etc.).

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-16 Thread Russ Allbery

John Porter [EMAIL PROTECTED] writes:
 Russ Allbery wrote:

 $args = 'first second third';
 @args = split (' ', $args);
 my $i = 0;
 %args = map { $_ => ++$i } @args;

 This is very Perlish to me; the punctuation is part of the variable
 name and disambiguates nicely.

 No, it's not.  Where are we taught this?  It's a myth.

 The punctuation imposes context on the variable expression.

   $foo[0]

 accesses an array.  Where's the "@"?

Now the [0] is disambiguating.  Same difference.  I'm not interested in
nit-picking semantics.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-16 Thread Russ Allbery

Kai Henningsen [EMAIL PROTECTED] writes:

 That would be nice if the punctuation actually *were* part of the
 variable name.

 However, it isn't: to access 'second', you'd say $args[1], NOT @args[1].
 It's one of the Perl features that most confuses newcomers.

Well, I think it is; it's just that $args[1] is a different variable than
@args.  Maybe people think that's an odd notion of what a variable is, but
I think of @args as a collection containing a bunch of individual
variables, each of which has its own name that's disambiguated from $args
by [].  You can operate on the collection, or you can address the
variables individually.

This makes even more sense when you look at %args, and start looking at
multi-level hashes.

 If there's no better argument than this, I'd throw this distinction away
 in a heartbeat.

It's always easy to throw away other people's distinctions.  :)

 If the syntax can be changed so I never have to write @{some array ref}
 again to explain to perl that yes, I really want to use this array as an
 array, I'll be a happy man.

Now this I'll agree with; I find the @{ $$hash{value} } syntax rather
bletcherous.  But I think that's a separate problem and could well have a
separate solution.

Perhaps @-$$hash{value} as has been proposed before, and Perl 6 can deal
with the issue of the @- array in some other way.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-15 Thread Russ Allbery

 All variables should be C$x. They should behave appropriately
 according to their object types and methods.

No thanks.  I frequently use variables $foo, @foo, and %foo at the same
time when they contain the same information in different formats.  For
example:

$args = 'first second third';
@args = split (' ', $args);
my $i = 0;
%args = map { $_ => ++$i } @args;

This is very Perlish to me; the punctuation is part of the variable name
and disambiguates nicely.  I'd be very upset if this idiom went away.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-15 Thread Russ Allbery

Dan Sugalski [EMAIL PROTECTED] writes:

 If the symbol becomes content-free, perhaps the problem is with what
 made it useless, not with the symbol itself...

Wholeheartedly agreed.  If something is an array, it should start with @.
If we're adding language changes that introduce arrays that don't start
with @, that's the mistake.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 109 (v1) Less line noise - let's get rid of @%

2000-08-15 Thread Russ Allbery

Steve Fink [EMAIL PROTECTED] writes:

 I would very much hate to see the prefixes go away or merge into a
 single one, but I'm not so sure I agree with Russ. I've had to teach
 lots of beginners that even though $x refers to scalar x, $x{...} refers
 to %x, but don't think of it that way because the $ is saying what value
 you're getting back, not which variable you're using, unless you're
 calling a function, or...

This falls firmly in the category of things that are powerful for
experienced users of the language but may be somewhat difficult to learn.
I don't think Perl has being easy to learn as its primary goal, nor
should it.

 I'll just say I wouldn't mind having a stricture forbidding $x and %x in
 the same package.

Ugh.  I'll definitely never use it.  I don't object *provided* that it
doesn't become like the other strictures, things that people expect all
Perl scripts to use; I think it's an essentially worthless constraint.

 I've fairly frequently used code like the above, but I don't really like
 that code in the first place because the only purpose for the $args and
 @args is as temporaries. I like the way mjd describes it:  this is
 "synthetic" code. If you really did have distinct long-lived variables
 with the same name, then I bet it would be confusing.

I do this all the time and I don't find it confusing.  Please let's not
mandate programming style.  Often times the difference between the
variables changes some as the program proceeds, but context makes it quite
clear what's going on.

This strikes me as the same sort of meaningless style guideline as "all
variables must have names that are at least five characters long."

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 99 (v2) Standardize ALL Perl platforms on UNIX epoch

2000-08-15 Thread Russ Allbery

Buddha Buck [EMAIL PROTECTED] writes:

 Leap-seconds are a PITA for generic time routines.

Unix time ignores leap seconds.  POSIX basically says "don't worry about
them" and by and large that works.  It means your system clock drifts a
little over time and then gets corrected back by xntpd or something, but
in practice time on a Unix clock is monotonic.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 65 (v1) Add change bar functionality to pod

2000-08-14 Thread Russ Allbery

skud [EMAIL PROTECTED] writes:

 I don't think this is a language issue.  However, I don't believe
 there's a -doc working group yet, either.

 Is it time for a -doc group to form?

[EMAIL PROTECTED] already exists; maybe it should be blessed as a Perl 6
working group as well?

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 99 (v1) Maintain internal time in Modified Julian (not epoch)

2000-08-14 Thread Russ Allbery

Tim Jenness [EMAIL PROTECTED] writes:
 On 14 Aug 2000, Russ Allbery wrote:

 Day resolution is insufficient for most purposes in all the Perl
 scripts I've worked on.  I practically never need sub-second precision;
 I almost always need precision better than one day.

 MJD allows fractional days (otherwise it would of course be useless).

 As I write this the MJD is 51771.20833

Floating point?  Or is the proposal to use fixed-point adjusted by some
constant multiplier?  (Floating point is a bad idea, IMO; it has some
nasty arithmetic properties, the main one being that the concept of
incrementing by some small amount is somewhat ill-defined.)

 At some level time() will have to be changed to support fractions of a
 second and this may break current code that uses time() explicitly
 rather than passing it straight to localtime() and gmtime().

Agreed.

I guess I don't really care what we use for an epoch for our sub-second
interface; I just don't see MJD as obviously better or more portable.  I'd
actually be tentatively in favor of taking *all* of the time stuff and
removing it from the core, under the modularity principle, but I don't
have a firm enough grasp of where the internals use time to be sure that's
a wise idea.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 48 (v2) Replace localtime() and gmtime() with da

2000-08-11 Thread Russ Allbery

Jarkko Hietaniemi [EMAIL PROTECTED] writes:

 s/gmt/ut/

 IIRC GMT got obsoleted in the 70s by UT (Universal Time). 

Officially called UTC, so utcdate would be a better name I think.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 48 (v2) Replace localtime() and gmtime() with da

2000-08-11 Thread Russ Allbery

Buddha Buck [EMAIL PROTECTED] writes:

 UT and UTC are different scales, ref:
 http://tycho.usno.navy.mil/systime.html

I believe, as reflected on that page, that UT isn't a time scale in and of
itself, but a system of them (including UT0, UT1, and UTC as a weird
step-child based on TAI with corrections for UT1).

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: AGAINST RFC 48 (v1) Replace localtime() and gmtime() with da

2000-08-10 Thread Russ Allbery

Jonathan Scott Duff [EMAIL PROTECTED] writes:

 By "local timezone" do you mean that some sort of inspection happens to
 determine the local timezone and the date() intrinsically knows about
 it?  What about daylight savings time?

This all should be handled by the operating system.  If you call localtime
in C, you should get back local time, whatever the local time zone.  The
whole point is to not try to duplicate that information in Perl core.
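
In other words, the existing builtins are already the right shape (a
sketch, nothing new here):

    my $t = time;
    print scalar localtime($t), "\n";   # rendered in the local zone by the OS
    print scalar gmtime($t), "\n";      # rendered as UTC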

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: AGAINST RFC 48 (v1) Replace localtime() and gmtime() with da

2000-08-10 Thread Russ Allbery

Bart Lateur [EMAIL PROTECTED] writes:

 What's so hard? Subtracting 2 hours and 30 minutes from the official
 referential time (GMT)? Or the Daylight Savings Time rules?

It's not a problem of implementation.  It's a problem of semantics due to
the way Perl parses the language.

Suppose you call:

date (time, undef, -0230);

What does that mean in terms of time-zone offsets?  Are you subtracting
230 seconds from UTC?  230 minutes?  A negative octal number?  :)  The
syntax people are used to for specifying time zone offsets *looks* like a
number but actually isn't one.

You can require that it be passed as a string, but writing something like
the above would be a *very* common new user mistake.
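
To see how easy the mistake is (the date() call is the RFC's proposed
interface, not existing Perl, but the literal itself parses this way
today):

    my $offset = -0230;          # leading 0 means octal: -(2*64 + 3*8) = -152
    print $offset, "\n";         # prints -152, not -230
    print "-0230" + 0, "\n";     # the quoted form numifies to -230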

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Things to remove

2000-08-08 Thread Russ Allbery

Bennett Todd [EMAIL PROTECTED] writes:

 The poster you are replying to said "I use this in one-liners, and it's
 _dead_ handy."; that conjures up the idioms like

   perl -nle 'print if 1.. ?^$?' [filename]

 which barfs out only the header; replace "if" with "unless" and it
 chops the head off.

Why do you need one-time matching here?  /^$/ should work fine.

I've very rarely found cases where ?? was useful and // didn't work, and
never in regular code.
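
For the header case above, the ordinary match does the same job (a
sketch, assuming one message per file so the constant 1 only fires on
the first line):

    perl -nle 'print if 1 .. /^$/' message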

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: AGAINST RFC 48 (v1) Replace localtime() and gmtime() with da

2000-08-08 Thread Russ Allbery

John Tobey [EMAIL PROTECTED] writes:
 On Wed, Aug 09, 2000 at 02:22:22AM +0200, Bart Lateur wrote:

 date() would be more general, and replace both. You can pass it a time
 zone, ANY time zone, and it will tell you what time it is in that time
 zone.

You're proposing embedding the full power of the Olson TZ library into
Perl core.  This is a nontrivial amount of data that changes four or five
times a year.  I really don't think this is a good idea.  Furthermore, the
only time zone database that can actually do this doesn't use the naming
scheme that you're probably used to.

 The JTobey::Date module uses the TZ environment variable (which, I'm
 told, is non-portable), the esoteric POSIX routines tzset and tzname,
 and some functions from the CPAN modules Date::Parse and Date::Format.

It's far worse than non-portable; it's completely insufficient.  The POSIX
TZ syntax cannot represent many real time zones.  You need the Olson-style
naming scheme which refers to entries in a fairly large external database
of time zones and their current and historic data, not just a wide variety
of bizarre daylight savings changes but time zone changes that often vary
by political whim.  (Like Australia fiddling with its daylight savings
rules this year because of the Olympics.)
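
For what it's worth, here's roughly what the two styles look like side
by side (a sketch; it assumes a system with a working tzset and the
Olson database installed, which is exactly the portability problem):

    use POSIX qw(tzset);
    $ENV{TZ} = 'EST5EDT';              # POSIX syntax: one offset plus one DST rule
    tzset();
    print scalar localtime, "\n";
    $ENV{TZ} = 'Australia/Sydney';     # Olson name: full current and historic rules
    tzset();
    print scalar localtime, "\n";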

People in the EU, where there's a standard for daylight savings, and
particularly people in the US, where we haven't changed our rules in quite
a while, often don't realize just how baroque this can get.

 It is designed to give it all an easy OO interface, and to be as
 correct as possible on systems like mine.  It is not expected to be
 very fast, portable, or locale-friendly.

 To overcome these problems would be a Herculean task which I simply
 doubt that anyone here is willing to do.  Therefore, I oppose the
 notion that Perl 6 will magically handle all this.

 -John

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC 48 (v1) Replace localtime() and gmtime() with da

2000-08-06 Thread Russ Allbery

Perl6 RFC Librarian [EMAIL PROTECTED] writes:

 The C<$time> specifier can be followed by a C<$timezone> argument, which
 returns the date relative to that timezone. By default, the time is
 returned relative to the local timezone. You can get UTC, for example,
 by specifying C<UTC> or C<GMT> as the timezone.

# Access UTC information
$scalar  =  date time, '%H:%M', 'UTC';  # return time in UTC
$object  =  date time, undef, 'GMT';# return object in UTC

# Explicitly get ctime date for Eastern US time
# If $time is undef, time() is assumed
$scalar  =  date undef, undef, 'EST';

Whatever you do, don't use those timezone names.  Is EST Eastern US time
or Eastern Standard Time in Australia?  The same abbreviation is used in
both places.

Naming of time zones is a *huge* rathole that you probably just don't want
to crawl into.  The short abbreviations are *not* standardized and are
quite frequently ambiguous.  There are three other prevalent time-zone
naming schemes:  the POSIX one (EST5EDT, for example) is completely
insufficient to actually represent time zone variations as they occur in
the real world, the "old Olson" found in most Unix operating systems these
days with names like US/Pacific doesn't offer enough granularity, and the
"new Olson" method (the best of the lot) uses names that most people don't
know (America/Los_Angeles for US Pacific for example).

Basically, you don't want to go anywhere near this mess; it eats people.

I see two reasonable options to go with instead.  One is to just use a
binary flag that says use UTC or not; this is the simplest and most
reliable to explain.  The other is to allow a timezone offset; this
doesn't deal with daylight savings time and historic time zone changes,
but it provides enough power for most of what people want to do and if you
want to deal with the rest you have to deal with time zone naming.
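
Roughly, I'm imagining something like this (a hypothetical signature,
not the RFC's; the utc and offset names are made up for illustration):

    $s = date($time, '%H:%M');                     # local time
    $s = date($time, '%H:%M', utc => 1);           # option 1: a plain UTC flag
    $s = date($time, '%H:%M', offset => -9000);    # option 2: offset in seconds (-2:30)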

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Preprocessing (Was: Re: Recording what we decided *not* to do, and why)

2000-08-05 Thread Russ Allbery

Johan Vromans [EMAIL PROTECTED] writes:

 I fail to see this point.
 Having a program depend on a preprocessing stage that, if skipped,
 would still result in valid but erroneous source seems dangerous to me.

No, the point is more that normal Perl source is *full* of active m4
characters.  Without quoting, all your paired quotes would disappear,
comments would be stripped even when they're not actually comments but are
really regexes, m4 wouldn't understand things like Perl strings and
regexes and do substitutions where it shouldn't, etc.
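
A couple of lines of perfectly ordinary Perl show the problem (m4's
defaults: ` opens a quote, ' closes one, # starts a comment, and
parentheses and commas after a macro name delimit arguments):

    my $host = `hostname`;            # backticks are m4's open-quote character
    my %opts = (name => 'eagle');     # the ' here is m4's close-quote
    s/#.*$//;                         # this # is part of a regex, not a comment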

The problem is not that you can skip the preprocessing stage, but rather
that as soon as you want to use m4 on a Perl program, you'd have to do a
*huge* amount of work on all the parts of the program you *don't* need to
preprocess just to be able to do things with the part that you do want to
preprocess.

cpp, on the other hand, has very few active constructs or characters, just
identifiers, function calls, and # at the beginning of a line.  It still
causes a few problems where it recognizes something it shouldn't, but it's
trivial to deal with compared to m4.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: Sublist auto*

2000-08-05 Thread Russ Allbery

Johan Vromans [EMAIL PROTECTED] writes:

 I would plea for autosubscribing perl6-language list members to every
 sublist that gets spawned. The reason is continuity.

Currently, I'm trying to deal with the volume of Perl lists by subscribing
to just the "top-level" lists and relying on the promised summaries from
the sublists.  That so far seems to be working pretty well; I feel like I
have a good overview of what's going on, without getting deluged.  I'd
really rather not automatically be put on the sublists, as I don't think I
want to receive them unless I care a lot about that particular topic.

Instead, what about a temporary freeze when each list is created?  Give it
a day or two after it's created before it will accept traffic; have the
traffic be held for that long while people subscribe.  Would that help
this problem?

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC: Rename local() operator

2000-08-05 Thread Russ Allbery

Nick Ing-Simmons [EMAIL PROTECTED] writes:

 What about C<hide> ?

 I think C<proxy> or C<deputy> has merit - "while I am out contact ...".

 But I still think C<save> is the essence of what it does.

I like either C<hide> or C<save> too, but just to throw out the other idea
that occurred to me, what's being done here is in other languages often
called shadowing.  What about C<shadow>?

shadow $/ = "\n";

seems to have the right implications to me.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/



Re: RFC: multiline comments

2000-08-05 Thread Russ Allbery

Jarkko Hietaniemi [EMAIL PROTECTED] writes:

 I also confess to liking // more for till-end-of-line comment marker
 than #, the hash looks so messy to my eye...of course, // already has
 a meaning...

I'm the other way around.

This may depend a lot on whether one comes from a shell scripting
background or from a C++ background.  I strongly dislike C++ and other
than Perl primarily use C and shell, so # is the most natural to me and //
looks really odd.

Of course, like you said, we really can't use // anyway, as it's valid
Perl code and actually semi-frequently used.
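
For example (existing Perl 5 behavior, nothing new):

    my @chars = split //, "abc";      # split // splits a string into characters
    $_ = "foobar";
    print "matched\n" if /foo/;
    print "again\n"   if //;          # an empty pattern reuses the last successful match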

I do agree that there's a lot to be said for using /* ... */ for multiline
comments, but then I'm a C programmer.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/