Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-04 Thread demerphq
On 7/4/05, Yitzchak Scott-Thoennes [EMAIL PROTECTED] wrote:
 On Sun, Jul 03, 2005 at 01:53:45PM +0200, demerphq wrote:
  Actually about the only thing that seems to be really hard is doing
  comparison of blessed regexes with overloaded stringification. For
  that you need XS if you want it to work always.
 
 Now there's a sick idea.
 
 If blessed regexes with overloaded stringification actually work (that
 is, the actual regex is accessible to some part of the perl code),
 perhaps you could fix perl to use the stringification instead?
 Then you wouldn't have to worry about it :)

Support for this has been in DDS since a relatively early version. And
yes indeed it does work. :-)

#!perl -l
use overload '""' => sub { "pie!" };

my $o = bless qr/(foo \w+)/;

print $1 if 'bar foo baz bop' =~ $o;
print $o;
__END__
foo baz
pie!

And I don't think the core should be changed. (Yes, I know you were
joking, but still. :-)

In 5.6.x blessed regexes don't stringify as their pattern even if
overload isn't used, so you have to use the regex() function, which
does the same thing as the core does to find out the pattern.
DDS::regex also has the advantage that you can use it to round-trip a
pattern more easily, as in list context you get the pattern and
modifiers independently. (Without using string manipulation to do it
either.)

  use Data::Dump::Streamer qw(regex);
  my $r = qr/foo bar baz/is;
  my ($pattern, $modifiers) = regex($r);
  if (defined $pattern) {
    print "qr/$pattern/$modifiers";
  } else {
    print "not a regex";
  }
  __END__
  qr/foo bar baz/si

-- 
perl -Mre=debug -e /just|another|perl|hacker/


Re: 5.004_xx in the wild?

2005-07-04 Thread Adam Kennedy

Michael G Schwern wrote:

I'm going through some work to restore Test::More and Test::Harness to work
on 5.4.5, minor stuff really, and I'm wondering if it's worth the trouble.

Has anyone seen 5.004_xx in the wild?  And if so, were people actively
developing using it or was it just there to run some old code and they
were actually developing on a newer perl?




I've seen it on occasion, and it's generally on large old IRIX servers 
and similarly aged things: CVS repositories and other boxes that have 
provided the same services pretty much forever and have never had a 
compelling reason to upgrade.


I guess the larger question for you is not whether you should support 5.004, 
but whether your abandonment of 5.004 would cause a dependency cascade and 
cause other modules that target 5.004 to lose that status as well.


In my case, that's just Config::Tiny, which I got a patch for a while 
ago to make it 5.004 compatible.


I'd say, given the omnipresence of Test::Simple/More, stay 5.004 
compatible if at all possible.


Adam K


Re: 5.004_xx in the wild?

2005-07-04 Thread Ben Evans
On Mon, Jul 04, 2005 at 02:00:57PM +1000, Adam Kennedy wrote:
 Michael G Schwern wrote:
  I'm going through some work to restore Test::More and Test::Harness to work
  on 5.4.5, minor stuff really, and I'm wondering if its worth the trouble.
  
  Has anyone seen 5.004_xx in the wild?  And if so, were people actively
  developing using it or was it just there to run some old code and they
  were actually developing on a newer perl?
 
 I've seen it on occasion, and it's general on large old IRIX servers, 
 and similar aged things. CVS repositories and other boxes that have 
 provided the same services pretty much forever and have never had a 
 compelling reason to upgrade.

Some large financial companies are still using this version.

 I guess the larger question for you is not should you support 5.004, but 
 would your abandonment of 5.004 cause a dependency cascade and cause 
 other modules that target 5.004 to lose their state as well.
 
 I'd say given the omnipresence of Test::Simple/More, stay 5.004 
 compatible is it all possible.

I would say that this cascade effect is precisely why you *should*
drop 5.004 compatibility. There's no excuse other than "if it ain't broke,
don't fix it" for running such an archaic Perl. People should be encouraged
to move to a more modern environment whenever possible.

Ben


Re: 5.004_xx in the wild?

2005-07-04 Thread Michael G Schwern
On Mon, Jul 04, 2005 at 02:00:57PM +1000, Adam Kennedy wrote:
 I've seen it on occasion, and it's general on large old IRIX servers, 
 and similar aged things. CVS repositories and other boxes that have 
 provided the same services pretty much forever and have never had a 
 compelling reason to upgrade.
 
 I guess the larger question for you is not should you support 5.004, but 
 would your abandonment of 5.004 cause a dependency cascade and cause 
 other modules that target 5.004 to lose their state as well.
 
 In my case, that's just Config::Tiny, which I got a patch for a while 
 ago to make it 5.004 compatible.
 
 I'd say given the omnipresence of Test::Simple/More, stay 5.004 
 compatible is it all possible.

At the moment things are holding together.  The only real hurt is not having
qr//, but fortunately my regexes are small.  Other than that there's just
the wild assortment of minor bugs to deal with.
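For what it's worth, the usual pre-qr// workaround is to keep the pattern in a plain string and interpolate it at match time; inline modifiers like (?i) also predate qr//. A sketch of that pattern (my illustration, not from the mail):

```perl
#!perl
# 5.004-safe: no qr//, just a pattern kept in a plain string and
# interpolated into the match; (?i) carries the modifier inline.
use strict;

my $pat = '(?i)foo\s+(\w+)';      # plain string, works on old perls

my $text = 'Foo  bar';
if ($text =~ /$pat/) {
    print "matched: $1\n";        # prints "matched: bar"
}
```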

Fortunately now that I have a working 5.4.5 on my OS X laptop I can seriously
test against it again.


-- 
Michael G Schwern [EMAIL PROTECTED] http://www.pobox.com/~schwern
ROCKS FALL! EVERYONE DIES!
http://www.somethingpositive.net/sp05032002.shtml


Re: 5.004_xx in the wild?

2005-07-04 Thread Michael G Schwern
On Mon, Jul 04, 2005 at 10:36:39AM +0100, Ben Evans wrote:
 I would say that this cascade effect is precisely why you *should*
 drop 5.004 compatability. There's no excuse other than if it ain't broke,
 don't fix it for running such an archaic Perl. People should be encouraged
 to move to a more modern environment whenever possible.

While I'd love it if it worked this way, more often the admins refuse to
upgrade in spite of losing module support and it's the programmer who gets
punished.  The concern is more about not breaking existing code (whether
warranted or not) than furthering development.

I just had exactly this happen to a friend of mine contracting at a company
still running 5.5.3.  He couldn't even convince them to install a modern
Perl in a separate location and leave the old code running 5.5.3.


-- 
Michael G Schwern [EMAIL PROTECTED] http://www.pobox.com/~schwern
Reality is that which, when you stop believing in it, doesn't go away.
-- Phillip K. Dick


make parrot_so? (parrot_get_config_string goes undefined)

2005-07-04 Thread Michael Cummings
Is make parrot_so not ready? (Or is it not what I think it is, chiefly a way 
to generate a libparrot.so?) Attempts to run 'make parrot_so' after a 'make 
all world' seem to always result in:

nomad parrot-0.2.2 # make parrot_so
echo -oimcc/imclexer.c imcc/imcc.l
-oimcc/imclexer.c imcc/imcc.l
/usr/bin/perl5.8.6 -e 'open(A,qq{$_}) or die foreach @ARGV' imcc/imcc.l.flag 
imcc/imclexer.c
c++ -L/usr/lib -Wl,-E   -o ./parrot imcc/main.o -Lblib/lib -lparrot -lpthread 
-lnsl -ldl -lm -lcrypt -lutil -lrt -lgmp
blib/lib/libparrot.so: undefined reference to `parrot_get_config_string'
collect2: ld returned 1 exit status
make: *** [parrot_so] Error 1

Running 'make lib_deps' I noticed:
snip
Found 6498 symbols defined within the 192 supplied object files.
Found 147 external symbols
Of these, 73 are not defined by ANSI C89:
snip
parrot_get_config_string (in src/global_setup.o)

If 'make parrot_so' is just a placeholder, sorry for the bother :)

-
Summary of my parrot 0.2.2 (r0) configuration:
  configdate='Sun Jul  3 20:27:29 2005'
  Platform:
osname=linux, archname=i686-linux
jitcapable=1, jitarchname=i386-linux,
jitosname=LINUX, jitcpuarch=i386
execcapable=1
perl=/usr/bin/perl5.8.6
  Compiler:
cc='i686-pc-linux-gnu-gcc', ccflags=' -pipe -D_LARGEFILE_SOURCE 
-D_FILE_OFFSET_BITS=64',
  Linker and Libraries:
ld='i686-pc-linux-gnu-gcc', ldflags=' -L/usr/local/lib',
cc_ldflags='',
libs='-lpthread -lnsl -ldl -lm -lcrypt -lutil -lrt -lgmp'
  Dynamic Linking:
share_ext='.so', ld_share_flags='-shared -L/usr/local/lib -fPIC',
load_ext='.so', ld_load_flags='-shared -L/usr/local/lib -fPIC'
  Types:
iv=long, intvalsize=4, intsize=4, opcode_t=long, opcode_t_size=4,
ptrsize=4, ptr_alignment=1 byteorder=1234, 
nv=double, numvalsize=8, doublesize=8
(gcc version isn't printed but is 3.3.5)
-


RE: 5.004_xx in the wild?

2005-07-04 Thread Paul Marquess
From: Michael G Schwern [mailto:[EMAIL PROTECTED]
 
 On Mon, Jul 04, 2005 at 10:36:39AM +0100, Ben Evans wrote:
  I would say that this cascade effect is precisely why you *should*
  drop 5.004 compatability. There's no excuse other than if it ain't
 broke,
  don't fix it for running such an archaic Perl. People should be
 encouraged
  to move to a more modern environment whenever possible.
 
 While I'd love it if it worked this way, more often the admins refuse to
 upgrade in spite of losing module support and its the programmer who gets
 punished.  The concern is more about not breaking existing code (whether
 warrented or not) than furthering development.
 
 I just had exactly this happen to a friend of mine contracting at a
 company
 still running 5.5.3.  He couldn't even convince them to install a modern
 Perl in a separate location and leave the old code running 5.5.3.

I've just been through the should-I-shouldn't-I-support-5.4 with my
(painfully slow) rewrite of Compress::Zlib. In the end I included limited
support for 5.004 because I could, plus I have no feel for how much pain I
would cause folk if I didn't. 

Paul





___ 
Yahoo! Messenger - NEW crystal clear PC to PC calling worldwide with voicemail 
http://uk.messenger.yahoo.com



what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wild?)

2005-07-04 Thread Konovalov, Vadim
 I've just been through the should-I-shouldn't-I-support-5.4 with my
 (painfully slow) rewrite of Compress::Zlib. In the end I 

...

I always thought that Compress::Zlib is just a wrapper around zlib, which in
turn is C and developed elsewhere (and in a stable state for a long time now).

What is the (painfully slow) rewrite?


Re: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wild?)

2005-07-04 Thread David Landgren

Konovalov, Vadim wrote:

I've just been through the should-I-shouldn't-I-support-5.4 with my
(painfully slow) rewrite of Compress::Zlib. In the end I 



...

I always thought that Compress::Zlib is just a wrapper around zlib which in
turn is C and developed elsewhere (and in stable state for a long time now).

What is (painfully slow) rewrite?


I think Paul means that it is taking him a long time to write the code, 
not that the code itself is slow.


David





Re: make parrot_so? (parrot_get_config_string goes undefined)

2005-07-04 Thread Leopold Toetsch

Michael Cummings wrote:
Is make parrot_so not ready? 


It was working some time ago and is ok again now (r8501).

Thanks for testing,
leo



Re: Type variables vs type literals

2005-07-04 Thread TSa (Thomas Sandlaß)

HaloO Larry,

you wrote:

On Thu, Jun 30, 2005 at 09:25:10AM +0800, Autrijus Tang wrote:
: Currently, does this:
: 
: sub foo (::T $x, ::T $y) { }
: 
: and this:
: 
: sub foo (T $x, T $y) { }
: 
: Means the same thing, namely
: 
:a) if the package T is defined in scope, use that as the

:   type constraint for $x and $y
: 
:b) otherwise, set ::T to be the most immediate common supertype

:   of $x and $y.
: 
: Is this understanding correct?  I'd like to disambiguate the two cases

: by making ::T in parameter list to always mean type variable (sense b),
: and the bare T always mean type name (sense a)?

I think it would be good to distinguish those, but I've been mulling
over whether ::T should be the syntax for b.  (And also whether b
is the correct way to think about it in Perl 6.)  The problem with
using ::T to autovivify your type from the argument is that ::($x)
doesn't mean that, and it looks like it should.


I think the point is that using a type variable in the signature of a sub
is too subtle for recognizing it as a type function that produces typed
subs. Thus---even if it resembles C++ templates---I suggest to put the
type parameters onto the sub:

sub foo[T] (T $x, T $y) { }

which basically constrains $x and $y to have the same type. A call
to foo might automatically instantiate appropriate implementations.
With MMD and all methods and attributes being virtual this could
usually be a no-op as far as the code generation is concerned.
But the type checker has to know what constraints are in effect.


The syntax for explicit selection could be foo:(Int)('3','2') which
prevents the interpretation foo:(Str,Str). OTOH, this might overload
the :() too much. Could we invent more colon postfix ops?
Thus the above were foo:[Int]('3','2'). And while we are at it
since () is more call-like and [] lookup-like we could reserve :()
for more complicated type literals and use :[] for formal type signatures
on subs as it is the case for roles.



 The :: does imply
indirection of some sort in either case, but it's two very different
kinds of indirection.  ::($x) can't declare a lexical alias for
some type, whereas ::T presumably could, though your formulation seems
more of a unificational approach that would require ::T everywhere.

In other words, if we do type binding like b, we would probably want to
generalize it to other binding (and assigning?) contexts to represent
the type of the bound or assigned object, and at the same time create
a lexical alias for the bare type.  So we might want to be able to say

::T $x := $obj;
my T $y = $x;


The prime use of that feature is to ensure type homogeneity for
temporary and loop vars in a certain scope. But the scope in the
above syntax is not apparent enough---to me at least.
Thus I opt for an explicit

   subtype T of $obj.type; # or just $obj perhaps?
   T $x := $obj;
   my T $y = $x;

With my proposal from above the short form could be

   :[T] $x := $obj;
   my T $y = $x;

or the current form with :()

   :(T) $x := $obj;
   my T $y = $x;


Regards,
--
TSa (Thomas Sandlaß)




Should .assuming be always non-mutating?

2005-07-04 Thread Ingo Blechschmidt
Hi,

.assuming is non-mutating on Code objects:

  my $subref = &some_sub;
  my $assumed_subref = $subref.assuming(:foo<bar>);
  $subref =:= &some_sub;   # true, $subref did not change

Quoting S06:
 The result of a use statement is a (compile-time) object that also
 has an .assuming method, allowing the user to bind parameters in
 all the module's subroutines/methods/etc. simultaneously:
 
 (use IO::Logging).assuming(logfile => ".log");

So, .assuming mutates the Module/whatever objects. And, given the
general trend to non-mutating methods (.wrap, .assuming), and the
easiness of the .= operator, I'd propose making this kind of .assuming
non-mutating, too:

  (use IO::Logging).assuming(logfile => ".log");   # noop
  (use IO::Logging).=assuming(logfile => ".log");  # works

Opinions?


--Ingo

-- 
Linux, the choice of a GNU | To understand recursion, you must first
generation on a dual AMD   | understand recursion.  
Athlon!|



Re: return() in pointy blocks

2005-07-04 Thread TSa (Thomas Sandlaß)

Larry Wall wrote:

On Wed, Jun 08, 2005 at 12:37:22PM +0200, TSa (Thomas Sandlaß) wrote:
: BTW, is -> on the 'symbolic unary' precedence level
: as its read-only companion \ ?

No, -> introduces a term that happens to consist of a formal signature
and a block.  There are no ordinary expressions involved until you
get inside the block.  (Or set a default on one of the parameters, to
the extent that those are ordinary expressions.)


So without a block there is a syntax error?

  $x = -> $y;

Or could this be understood as a short form of

  $x = -> { $y } # $y from outer scope

I still think the pointy looks really cute as a prefix operator
for constructing rw refs. Especially if we use () to evaluate
the blockref returned. For plain variable assignment this is
not overly useful because it amounts to the same as

  $x := $y;

but in lists it is quite nice

  %h = { foo, 'bar', blah, -> $y };

or not?

  %h = { foo => 'bar', blah => -> $y };

hmm, can the following be interpreted as spoiling pair notation?

  %h = { :foo<bar>, :blah -> $y }; # would :blah -> $y parse at all?

I think not because this works

  %h = { :foo<bar>, :blah{ -> $y } };

Regards,
--
TSa (Thomas Sandlaß)




Re: 5.004_xx in the wild?

2005-07-04 Thread Paul Johnson
On Mon, Jul 04, 2005 at 03:00:14AM -0700, Michael G Schwern wrote:

 On Mon, Jul 04, 2005 at 10:36:39AM +0100, Ben Evans wrote:
  I would say that this cascade effect is precisely why you *should*
  drop 5.004 compatability. There's no excuse other than if it ain't broke,
  don't fix it for running such an archaic Perl. People should be encouraged
  to move to a more modern environment whenever possible.
 
 While I'd love it if it worked this way, more often the admins refuse to
 upgrade in spite of losing module support and its the programmer who gets
 punished.  The concern is more about not breaking existing code (whether
 warrented or not) than furthering development.
 
 I just had exactly this happen to a friend of mine contracting at a company
 still running 5.5.3.  He couldn't even convince them to install a modern
 Perl in a separate location and leave the old code running 5.5.3.

As someone whose production code is currently required to run under
5.5.3, I'm very grateful to module authors whose code still runs under
that version at least.  A number of modules which don't run under 5.5.3
do with simple changes, primarily changing "our" to "use vars" and
getting rid of x.y.z version numbers.
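For the record, the two edits in question look roughly like this (my illustration of the pattern, not code from the mail):

```perl
# 5.6+ style that breaks on 5.005 and earlier:
#   our $VERSION = 1.2.3;          # "our" and x.y.z v-strings are 5.6+
#
# 5.005-compatible spelling of the same thing:
use vars qw($VERSION);             # declare the package global
$VERSION = '1.002003';             # plain decimal string instead of x.y.z
```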

Unfortunately, upgrading isn't always an option.  Anyone can type

  $ ./Configure -des && make && make test && make install

but putting the results of such a command into a base operating system
installation, testing that said operating system functions correctly
with hundreds of (often badly written) scripts installing databases and
middleware and who-knows-what, and ensuring that thousands of apps
running on tens of thousands of machines in dozens of different
configurations function at least as well as they did before is a little
harder.

And whilst I know how to manage all this, sometimes it's hard enough
just stopping people from mandating the use of ksh, Java and XML.

Having said all that, feel free to do what you want with 5.004 support.
I don't care!  I have 5.005!

-- 
Paul Johnson - [EMAIL PROTECTED]
http://www.pjcj.net


Re: 5.004_xx in the wild?

2005-07-04 Thread David Landgren

Ben Evans wrote:

On Mon, Jul 04, 2005 at 02:00:57PM +1000, Adam Kennedy wrote:


Michael G Schwern wrote:


I'm going through some work to restore Test::More and Test::Harness to work
on 5.4.5, minor stuff really, and I'm wondering if its worth the trouble.

Has anyone seen 5.004_xx in the wild?  And if so, were people actively
developing using it or was it just there to run some old code and they
were actually developing on a newer perl?


I've seen it on occasion, and it's general on large old IRIX servers, 
and similar aged things. CVS repositories and other boxes that have 
provided the same services pretty much forever and have never had a 
compelling reason to upgrade.



Some large financial companies are still using this version.


At my last place of work, I suspect they're still running 5.002gamma on 
an aging Vax. They still were in 2002, in any event. That said, from 
what I can recall, no CPAN modules were hurt in the making of the 
application.


David



RE: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wild?)

2005-07-04 Thread Paul Marquess
From: Konovalov, Vadim [mailto:[EMAIL PROTECTED]
 
  I've just been through the should-I-shouldn't-I-support-5.4 with my
  (painfully slow) rewrite of Compress::Zlib. In the end I
 
 ...
 
 I always thought that Compress::Zlib is just a wrapper around zlib which
 in
 turn is C and developed elsewhere (and in stable state for a long time
 now).

Yes, that is mostly true, but there have been a few changes made to zlib of
late that I want to make available in my module (the ability to append to
existing gzip/deflate streams being one). Plus I had a list of new
features/enhancements I wanted to add that have been sitting on a TODO list
for ages.

The top issue in my mailbox for Compress::Zlib is the portability of the
zlib gzopen/read/write interface. I've now completely removed all
dependencies on the zlib gzopen code and written the equivalent of that
interface in Perl. A side-effect of that decision is that I now have
complete read/write access to the gzip headers fields.

Another reason is to provide better support for HTTP content encoding. I can
now autodetect and uncompress any of the three zlib-related compression
formats used in HTTP content-encoding, i.e. RFC 1950/1/2.
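For illustration, the three cases can be told apart by their leading bytes using the classic Compress::Zlib API (a rough sketch under my assumptions, not Paul's actual code; inflate_any is a made-up name):

```perl
use Compress::Zlib;

# Sniff the Content-Encoding payload: gzip (RFC 1952) starts with
# \x1f\x8b, a zlib wrapper (RFC 1950) usually with byte 0x78; otherwise
# assume raw deflate (RFC 1951) and inflate with negative WindowBits.
sub inflate_any {
    my $data = shift;
    if (substr($data, 0, 2) eq "\x1f\x8b") {
        return Compress::Zlib::memGunzip($data);        # gzip
    }
    elsif (ord(substr($data, 0, 1)) == 0x78) {
        return Compress::Zlib::uncompress($data);       # zlib wrapper
    }
    else {
        my ($i, $status) = inflateInit(-WindowBits => -MAX_WBITS());
        return undef unless $status == Z_OK;
        my ($out) = $i->inflate(\$data);                # raw deflate
        return $out;
    }
}
```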

etc, etc...

 What is (painfully slow) rewrite?

I don't have as much time to dabble these days, so I've been working at it
on and (mostly) off for at least a year. 

Whilst I'm here, when I do get around to posting a beta on CPAN, I'd prefer
it doesn't get used in anger until it has bedded-in. If I give the module a
version number like 2.000_00, will the CPAN shell ignore it?

Paul



___ 
How much free photo storage do you get? Store your holiday 
snaps for FREE with Yahoo! Photos http://uk.photos.yahoo.com



RE: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wild?)

2005-07-04 Thread Konovalov, Vadim
   What is (painfully slow) rewrite?
  
  I think Paul means that it is taking him a long time to 
 write the code,
  not that the code itself is slow.
 
 Correct. Looks like I answered the wrong question :-)

Indeed I understood incorrectly the first time, but you shed quite a lot of
light on other important aspects of Compress::Zlib (which is now in the Perl
core, as everyone already knows :)

Thanks!


RE: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wi ld?)

2005-07-04 Thread Paul Marquess
From: David Landgren [mailto:[EMAIL PROTECTED]


 Konovalov, Vadim wrote:
 I've just been through the should-I-shouldn't-I-support-5.4 with my
 (painfully slow) rewrite of Compress::Zlib. In the end I
 
 
  ...
 
  I always thought that Compress::Zlib is just a wrapper around zlib which
 in
  turn is C and developed elsewhere (and in stable state for a long time
 now).
 
  What is (painfully slow) rewrite?
 
 I think Paul means that it is taking him a long time to write the code,
 not that the code itself is slow.

Correct. Looks like I answered the wrong question :-)

Paul








RE: 5.004_xx in the wild?

2005-07-04 Thread Paul Marquess


From: Paul Johnson [mailto:[EMAIL PROTECTED]
 
 On Mon, Jul 04, 2005 at 03:00:14AM -0700, Michael G Schwern wrote:
 
  On Mon, Jul 04, 2005 at 10:36:39AM +0100, Ben Evans wrote:
   I would say that this cascade effect is precisely why you *should*
   drop 5.004 compatability. There's no excuse other than if it ain't
 broke,
   don't fix it for running such an archaic Perl. People should be
 encouraged
   to move to a more modern environment whenever possible.
 
  While I'd love it if it worked this way, more often the admins refuse to
  upgrade in spite of losing module support and its the programmer who
 gets
  punished.  The concern is more about not breaking existing code (whether
  warrented or not) than furthering development.
 
  I just had exactly this happen to a friend of mine contracting at a
 company
  still running 5.5.3.  He couldn't even convince them to install a modern
  Perl in a separate location and leave the old code running 5.5.3.
 
 As someone whose production code is currently required to run under
 5.5.3, I'm very grateful to module authors whose code still runs under
 that version at least.  A number of modules which don't run under 5.5.3
 do with simple changes, primarily changing our to use vars and
 getting rid of x.y.z version numbers.

I get around that particular issue in my modules by doing an in-place edit of
all the perl files in the module distribution when "perl Makefile.PL" is
run. It converts "our" to "use vars" (or vice versa). Same goes for $^W
and "use warnings".
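A minimal sketch of that kind of Makefile.PL-time rewrite (hypothetical code, not Paul's; the regexes only handle the simple declaration forms):

```perl
#!perl
# In-place downgrade of 5.6 idioms, run from Makefile.PL when $] < 5.006.
# $^I enables in-place editing (with a .bak backup) for the <> loop.
local ($^I, @ARGV) = ('.bak', glob('lib/*.pm'));
while (<>) {
    s/^our\s+([\$\@%]\w+)\s*;/use vars qw($1);/;   # our -> use vars
    s/^use\s+warnings\s*;/\$^W = 1;/;              # use warnings -> $^W
    print;
}
```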

Paul






Re: Type variables vs type literals

2005-07-04 Thread Larry Wall
On Mon, Jul 04, 2005 at 04:09:59PM +0200, TSa (Thomas Sandlaß) wrote:
: I think the point is that using a type variable in the signature of a sub
: is too subtle for recognizing it as a type function that produces typed
: subs. Thus---even if it resembles C++ templates---I suggest to put the
: type parameters onto the sub:
: 
: sub foo[T] (T $x, T $y) { }
: 
: which basically constrains $x and $y to have the same type. A call
: to foo might automatically instantiate appropriate implementations.
: With MMD and all methods and attributes being virtual this could
: usually be a no-op as far as the code generation is concerned.
: But the type checker has to know what constraints are in effect.

Well, there's something to be said for that, but my gut feeling says
that we should reserve the explicit generic notation for compile time
processing via roles, and then think of run-time lazy type aliasing
as something a little different.  So if you really want to write
that sort of thing, I'd rather generalize roles to work as modules
containing generic subs:

role FooStuff[T] {
sub foo (T $x, T $y) { }
...
}

Otherwise people are tempted to scatter generics all over the
place, and it's probably better to encourage them to group similarly
parameterized generics in the same spot for sanity.  It also encourages
people to instantiate their generics in one spot even if they're
calling them from all over.

: The syntax for explicit selection could be foo:(Int)('3','2') which
: prevents the interpretation foo:(Str,Str). OTOH, this might overload
: the :() too much. Could we invent more colon postfix ops?
: Thus the above were foo:[Int]('3','2'). And while we are at it
: since () is more call-like and [] lookup-like we could reserve :()
: for more complicated type literals and use :[] for formal type signatures
: on subs as it is the case for roles.

I think people should say does FooStuff[Int] and automatically alias
all the routines to the Int form so we don't have to scatter :[Int]
all over.  Or if they just say does FooStuff, they get lazily-bound
type parameters.

:  The :: does imply
: indirection of some sort in either case, but it's two very different
: kinds of indirection.  ::($x) can't declare a lexical alias for
: some type, whereas ::T presumably could, though your formulation seems
: more of a unificational approach that would require ::T everywhere.
: 
: In other words, if we do type binding like b, we would probably want to
: generalize it to other binding (and assigning?) contexts to represent
: the type of the bound or assigned object, and at the same time create
: a lexical alias for the bare type.  So we might want to be able to say
: 
: ::T $x := $obj;
: my T $y = $x;
: 
: The prime use of that feature is to ensure type homogeneity for
: temporary and loop vars in a certain scope. But the scope in the
: above syntax is not apparent enough---to me at least.

Actually, I was holding those up as bad examples of scope visibility,
so I'm agreeing with you there.  (Which is why I suggested that we
require a my context, even if only implied by the formal parameter list.)

: Thus I opt for an explicit
: 
:subtype T of $obj.type; # or just $obj perhaps?
:T $x := $obj;
:my T $y = $x;

Yuck.  I think that completely obscures the lazy type binding.
Plus you've managed to omit the my, so you've created a global
subtype T.  (Unless, as I'm leaning towards, we make such bare inner
types illegal, and require my, our, or *.)

: With my proposal from above the short form could be
: 
::[T] $x := $obj;
:my T $y = $x;
: 
: or the current form with :()
: 
::(T) $x := $obj;
:my T $y = $x;

I think any lexically scoped declaration should be governed by a my
(either explicitly or implicitly), and that includes type declarations.
A bare :[T] or :(T) outside a declaration is not good enough.
It's okay if there's already a my outside, in which case you're
declaring both the variable and the type, but there also needs to
be a way of scoping my to the type without declaring the variable.
I want to be able to distinguish

my (T) $x := $obj

from

(my T) $x := $obj

And maybe that's just how we should leave it.  I confess that

my [T] $x := $obj
[my T] $x := $obj
method foo ($self: [T] $x) {...}

are prettier, and more reminiscent of generics, but I worry about
confusiion with [$h,[EMAIL PROTECTED] bindings, and with reduction operators.
On the other hand, (T) looks like a cast, and [T] isn't *really*
all that ambiguous.  Given how often these things will occur in
other contexts that use colons for other things, I don't think it's
a very good plan to go with :[T].  These are visually confusing:

my :[T] $x := $obj
method foo ($self: :[T] $x) {...}

Bare [T] would preclude its use in rvalue context, however, as it
might be taken as a reduction operator.  No big loss, in my estimation.
Could maybe allow [my T] there, I suppose.

Larry


Re: Should .assuming be always non-mutating?

2005-07-04 Thread Larry Wall
On Mon, Jul 04, 2005 at 05:33:37PM +0200, Ingo Blechschmidt wrote:
: Hi,
: 
: .assuming is non-mutating on Code objects:
: 
:   my $subref = &some_sub;
:   my $assumed_subref = $subref.assuming(:foo<bar>);
:   $subref =:= &some_sub;   # true, $subref did not change

I think .assuming implies a copy of your view of the subroutine, even
if internally it's implemented with the same sub.  (But it needn't be.)
So =:= would be false after .assuming.  And .=assuming would probably
be illegal, though if you succeeded it would change *everyone's*
view of that sub's signature, which seems somewhat antisocial.

Another way to say it is that if you change something's interface,
you really ought to change its identity.

: Quoting S06:
:  The result of a use statement is a (compile-time) object that also
:  has an .assuming method, allowing the user to bind parameters in
:  all the module's subroutines/methods/etc. simultaneously:
:  
:  (use IO::Logging).assuming(logfile => ".log");
: 
: So, .assuming mutates the Module/whatever objects. And, given the
: general trend to non-mutating methods (.wrap, .assuming), and the
: easiness of the .= operator, I'd propose making this kind of .assuming
: non-mutating, too:
: 
:   (use IO::Logging).assuming(logfile => ".log");   # noop
:   (use IO::Logging).=assuming(logfile => ".log");  # works
: 
: Opinions?

I don't think that's how it works.  We almost never want to mutate
a module in place, and if we do, it's probably a job for .=wrap, not
.assuming.  They're really there for different reasons: .assuming
is for changing the interface without changing the implementation,
whereas .wrap is for changing the implementation without changing
the interface.

Perl 6 always aliases a module's short name to a longer name, and the
main reason for that is binding to different versions without having to
respecify the version at every mention of the module name.  However,
another reason is so that we can alias IO::Logging to an anonymous
view class, so that we don't have to respecify the default arguments.
We could conceivably do that just by distributing .assuming to all the
routines and methods involved (presuming we can know them in advance),
but there is probably some good in keeping track of the indirection
at the module name level.

In any event, it makes little sense to mutate a module or class in
place unless you're heavily into AOP (and even there you're trying to
preserve the interface).  If you're going to change the interface it's
usually better to allow different lexical scopes to have different
views to avoid problems with magical action at a distance.

Larry


Re: return() in pointy blocks

2005-07-04 Thread Larry Wall
On Mon, Jul 04, 2005 at 07:01:00PM +0200, TSa (Thomas Sandlaß) wrote:
: Larry Wall wrote:
: On Wed, Jun 08, 2005 at 12:37:22PM +0200, TSa (Thomas Sandlaß) wrote:
: : BTW, is -> on the 'symbolic unary' precedence level
: : as its read-only companion \ ?.
: 
: No, -> introduces a term that happens to consist of a formal signature
: and a block.  There are no ordinary expressions involved until you
: get inside the block.  (Or set a default on one of the parameters, to
: the extent that those are ordinary expressions.)
: 
: So without a block there is a syntax error?
: 
:   $x = -> $y;

Yes.

: Or could this be understood as a short form of
: 
:   $x = -> { $y } # $y from outer scope

Could, but I'm prejudiced against implicit lexical scopes.  && and ||
and ??:: are okay as standard forms of indirect quotation, but natural
languages are notorious for getting into ambiguous situations with
that kind of laziness.  So that's one of the features I'm slow to
copy from human languages.  Curlies are already short, and in Perl
6 have the (almost) universal meaning of direct quotation of code.

: I still think the pointy looks really cute as a prefix operator
: for constructing rw refs. Especially if we use () to evaluate
: the blockref returned. For plain variable assignment this is
: not overly useful because it amounts to the same as
: 
:   $x := $y;
: 
: but in lists it is quite nice
: 
:   %h = { foo, 'bar', blah, -> $y };
: 
: or not?
: 
:   %h = { foo => 'bar', blah => -> $y };

I don't think the semantics pass the Pooh test.   It's also confusing
syntactically.  What state is the parser supposed to be in when it
gets to the end of this:

foo => 'bar', blah => -> $x, $y, $z

If the precedence of -> is list operator (and it needs to be to gobble
up pointy sub arguments), then you have to write one of

foo => 'bar', blah => -> ($x), $y, $z
foo => 'bar', blah => (-> $x), $y, $z

to limit it to the first argument, and you might as well have put some
curlies there.  We're not going to do arbitrary lookahead to find a {
to decide the precedence of -> on the fly.  That would be insane.

: hmm, can the following be interpreted as spoiling pair notation?
: 
:   %h = { :foo<bar>, :blah -> $y }; # would blah -> $y parse at all?

That won't parse.

: I think not because this works
: 
:   %h = { :foo<bar>, :blah{-> $y} };

That won't parse either.

Larry


Re: 5.004_xx in the wild?

2005-07-04 Thread James E Keenan

Paul Johnson wrote:



As someone whose production code is currently required to run under
5.5.3, I'm very grateful to module authors whose code still runs under
that version at least.  A number of modules which don't run under 5.5.3
do with simple changes, primarily changing "our" to "use vars" and
getting rid of x.y.z version numbers.



I've only developed in 5.6+ environments.  Can anyone provide a link to 
what I would have to do to make my modules compatible with 5.4 and/or 5.5?


(I realize there are Changes files distributed with perl, but I'm 
looking for something more focused/succinct.)


TIA

jimk


Re: 5.004_xx in the wild?

2005-07-04 Thread Michael G Schwern
On Mon, Jul 04, 2005 at 03:59:23PM -0400, James E Keenan wrote:
 I've only developed in 5.6+ environments.  Can anyone provide a link to 
 what I would have to do to make my modules compatible with 5.4 and/or 5.5?

Step one:  Install 5.4.5 and 5.5.4.

Step two:  Try out your module with them.

Step three:  Curse, scream and yell as you work around all sorts of long
fixed bugs.

One could attempt to code for the differences, but to really make sure your
module works on an old version you have to try it with that older version.

That said, here's the main differences:

* No qr//.  Even if you target 5.5.4 qr// still has lots of bugs.
* No warnings.pm.  Use $^W instead.
* No our().  Use vars.pm instead.
* No CORE::GLOBAL in 5.4.
* Many modules are not core or use old, broken versions.  I always run into
  problems with File::Spec and consider 0.8 to be the minimum version for
  sanity.  
* No Test::More so you either have to use Test.pm, list Test::More
  as a dependency or ship it with your module in t/lib.
* No Unicode.  If you need to support Unicode you're probably best off 
  dropping 5.4 and 5.5 and even consider dropping 5.6.
* No (remotely sane) threads.
* No attributes.
* No open my $fh.  Use local *FH; open FH instead.
* No lvalue subroutines.
* Older, more insane MakeMaker for those who do MakeMaker hackery.
* No x.y.z versions.
* All sorts of XS differences.
* Different warning and error diagnostics.
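A sketch of a few of those substitutions side by side (the package and subroutine names are invented for illustration; the idioms stay within what 5.004 understood, and still run on later perls):

```perl
use strict;

package Legacy::Widget;

use vars qw($VERSION @ISA);   # instead of:  our ($VERSION, @ISA);
$VERSION = '1.02';            # plain x.yy, not an x.y.z v-string

sub slurp {
    my ($file) = @_;
    local $^W = 0;            # instead of:  no warnings;
    local *FH;                # instead of:  open my $fh, ...
    open FH, "< $file" or return undef;   # 3-arg open is 5.6+
    local $/;                 # slurp mode
    my $data = <FH>;
    close FH;
    return $data;
}

1;
```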

Once you go through the initial pain of backporting it's not too big a deal
to keep things working as long as you're not doing XS.  qr// is the only 
thing I really miss.


-- 
Michael G Schwern [EMAIL PROTECTED] http://www.pobox.com/~schwern
ROCKS FALL! EVERYONE DIES!
http://www.somethingpositive.net/sp05032002.shtml


Re: 5.004_xx in the wild?

2005-07-04 Thread James E Keenan

Michael G Schwern wrote:



That said, here's the main differences:

Thanks.  My modules are sufficiently non-evil that I should be able to 
compensate for these differences.


jimk


Re: Fwd: [EMAIL PROTECTED]: Re: fixing is_deeply]

2005-07-04 Thread Andrew Pimlott
On Tue, Jul 05, 2005 at 01:24:38AM +0100, Fergal Daly wrote:
 There's an easy way to see what's acceptable and what's not, and what
 exactly this level of equality means. Consider the following code
 template:
 
 ###
 # lots of stuff doing anything you like including
 # setting global variables
 
 my $value = do {
 # these can access any globals etc
   my $a = one_way(); 
   my $b = another_way();
   is_very_deeply($a, $b) || die "they're distinguishable";
 
   # choose one of $a and $b at random
   rand(2) < 1 ? $a : $b;
 };
 
 print test($value);
 ###

my $x = [];
sub one_way { $x }
sub another_way { [] }
sub test { $_[0] == $x }

I don't think this breaks your rules, but see below.

 Assuming:
 
 1 - nothing looks at ref addrs (they may compare ref addrs, they can
 even store them in variables for comparison later on as long as
 nothing depends on the actual value, just its value in comparison to
 other ref addrs).

check

 2 - one_way() and another_way() don't have side effects ***

check

 Then test() cannot tell whether it got $a or $b. That is, any attempt
 by one_way() or another_way() to communicate with test() will be
 caught by is_very_deeply().
 
 In this case it's clear that
 
 sub test
 {
   $_[0] == $a
 }
 
 is not acceptable because only one of $a and $b ever makes it back
 into program flow and at that point it's got a new name.

I don't understand what you're saying here.  As you've written it, $a in
test is unrelated to the $a and $b in your do statement above, so your
test will return false in both cases.  Is that all you meant?

Anyway, I don't think you're rejecting my test.  If you do reject my
test, tell me which assumption I violated.

Andrew


Re: DBI v2 - The Plan and How You Can Help

2005-07-04 Thread Dean Arnold

Richard Nuttall wrote:



  - support for automatically pulling database DSN information from a
~/.dbi (or similar) file.  This is constantly re-invented poorly.
Let's just do a connect by logical application name and let the
SysAdmins sort out which DB that connects to, in a standard way.



This reminds me of one thing I hate about DB access, and that is having
the DB password stored in plain text.

Of course there are ways to provide some concealment, but nothing
particularly good or integrated into the access.

If the connecting by logical application name could also include some
level of security access, that would be a big improvement.

R.




Which is why major DBMSs are increasingly relying on SSO
based solutions. (e.g., Kerberos/LDAP authentication).
Not certain if DBI is the proper level to implement that,
(probably needs to be down at the DBD => DBMS level).
And in a standard way may still be wishful thinking.

Also, I'm not sold on the idea that a ~/.dbi file is particularly
secure in that regard. Not necessarily opposed, just not convinced
it's the right solution. (I don't like cleartext passwords either,
but due to the variance in DBMS's authentication methods, I don't know if
DBI can solve that problem).

- Dean


Re: DBI v2 - The Plan and How You Can Help

2005-07-04 Thread Richard Nuttall



  - support for automatically pulling database DSN information from a
~/.dbi (or similar) file.  This is constantly re-invented poorly.
Let's just do a connect by logical application name and let the
SysAdmins sort out which DB that connects to, in a standard way.


This reminds me of one thing I hate about DB access, and that is having
the DB password stored in plain text.

Of course there are ways to provide some concealment, but nothing
particularly good or integrated into the access.

If the connecting by logical application name could also include some
level of security access, that would be a big improvement.

R.



Re: DBI v2 - The Plan and How You Can Help

2005-07-04 Thread Darren Duncan

Tim et al,

Following are some ideas I have for the new DBI, that were thought 
about greatly as I was both working on Rosetta/SQL::Routine and 
writing Perl 6 under Pugs.  These are all language-independent and 
should be implemented at the Parrot-DBI level for all Parrot-hosted 
languages to take advantage of, rather than just in the Perl 6 
specific additions.  I believe in them strongly enough that they are 
in the core of how Rosetta et al operates (partly released, partly 
pending).


0. There were a lot of good ideas in other people's replies to this 
topic and I won't repeat them here, for the most part.


1. Always use distinct functions/methods to separate the declaration 
and destruction of a resource handle / object from any of its 
activities.  With a database connection handle, both the 
open/connect() and close/disconnect() are $dbh methods; the $dbh 
itself is created separately, such as with a DBI.new_connection() 
function.  With a statement handle, the prepare() is also a $sth 
method like with execute() et al; the $sth itself is created 
separately, such as with a $dbh.new_statement() method.  If new 
handle types are created, such as a separate one for cursors, they 
would likewise be declared and used separately.


With this separation, you can re-use the resource handles more 
easily, and you don't have to re-supply static descriptive 
configuration details each time you use it, but rather only when the 
handle is declared.  At the very least, such static details for a 
connection handle include what DBI implementor/driver module to use; 
as well, these details include what database product is being used, 
and locating details for the database, whether internet address or 
local service name or on-disk file name and so on.  This can 
optionally include the authorization identifier / user name and 
password, or those details can be provided at open() time instead if 
they are likely to be variable.
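A sketch of that separation, using the method names from the paragraph above (Perl 6-style pseudocode; the exact signatures are my assumption, not a settled API):

```
# Creation carries only the static details; no database is touched yet.
my $dbh = DBI.new_connection(
    :driver<DBD::SQLite>,          # which implementor module to use
    :database<myapp.db>,           # locating details for the database
);

# Activities are methods on the handle; variable details go here.
$dbh.open( :user<alice>, :pass<secret> );

my $sth = $dbh.new_statement('SELECT * FROM t');  # declare...
$sth.prepare();                                   # ...then act
$sth.execute();

$dbh.close();
# $dbh can later be open()ed again without re-supplying the static details.
```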


2. Always separate out any usage stages that can be performed apart 
from the database itself.  This allows an application to do those 
stages more efficiently, consuming fewer resources of both itself and 
the database.


For example, a pre-forked Apache process can declare all of the 
database and statement handles that it plans to use, and do as much 
of the prepare()-type work that can be done internally as possible, 
prior to forking; all of that work can be done just once, saving CPU, 
and only one instance of it consumes RAM.  All actual invocations of 
a database, the open()/connect() and execute() happen after forking, 
and at that point all of the database-involving work is consolidated.


Or even when you have a single process, most of the work you have to 
do, including any SQL generation et al, can more easily be 
pre-performed and the results cached for multiple later uses.  Some 
DBI wrappers may do a lot of work with SQL generation et al and be 
slow, but if this work is mainly preparatory, they can still be used 
in a high-speed environment as that work tends to only need doing 
once.  Most of the prep work of a DBI wrapper can be done effectively 
prior to ever opening the database connection.


3. Redefine prepare() and execute() such that the first is expressly 
for activities that can be done apart from a database (and hence can 
also be done for a connection handle that is closed at the time) 
while all activities that require database interaction are deferred 
to the second.


Under this new scheme, when a database has native prepared statements 
support that you want to leverage, the database will be invoked to 
prepare said statements the first time you run execute(), and then 
the result of this is cached by DBI or the driver for all subsequent 
execute() to use.  In that case, any input errors detected by the 
database will be thrown at execute() time regardless of their nature; 
only input errors detected by the DBD module itself would be thrown 
at prepare() time.  (Note that module-caught input errors are much 
more likely when the module itself is handling SQL in AST form, 
whereas database-caught input errors are much more likely when SQL is 
always maintained in the program as string form.)  Note also that the 
deferral to execute() time of error detection is what tends to happen 
already with any databases that don't have native prepared statement 
support or for whom the DBI driver doesn't use them; these won't be 
affected by the official definition change.


Now I realize that it may be critically important for an application 
to know at prepare() time about statically-determinable errors, such 
as mal-formed SQL syntax, where error detection is handled just by 
the database.  For their benefit, the prepare()+execute() duality 
could be broken up into more methods, either all used in sequence or 
some alternately to each other, so users get their errors when they 
want them.  But regardless of the solution, it should permit for all 

Re: DBI v2 - The Plan and How You Can Help

2005-07-04 Thread Sam Vilain

Richard Nuttall wrote:

  - support for automatically pulling database DSN information from a
~/.dbi (or similar) file.  This is constantly re-invented poorly.
Let's just do a connect by logical application name and let the
SysAdmins sort out which DB that connects to, in a standard way.
This reminds me of one thing I hate about DB access, and that is having
the DB password stored in plain text.


Sadly, there is really nothing that can be done about this, other than
casual obscuring of the real password like CVS does in ~/.cvspass

However, making it in a file in $HOME/.xxx means that the sysadmin can
set it up to be mode 400 or something like that, to ensure other users
can't access it if someone forgot to set the permissions right on the
application code (or, hopefully, configuration file).

Of course, for more secure access schemes like kerberos, etc, the file
is really just being a little registry of available data sources.

On a similar note could be allowing overriding the data source used by
an application by setting an environment variable.  That way, the
SysAdmin has got lots of options when it comes to managing different
production levels - Oracle has this with TWO_TASK, and while it's a
PITA when it's not there (no doubt why DBD::Oracle allows this to be
specified in the DSN string), it's also useful for what it's intended
for - switching databases from an operational perspective.
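That override can be tiny; a sketch in plain Perl (the variable name APP_DB_SOURCE and the default source name are invented for illustration):

```perl
use strict;
use warnings;

# Resolve the logical data-source name, letting an environment variable
# override the application's default, in the spirit of Oracle's TWO_TASK.
sub resolve_source {
    my ($default) = @_;
    my $override = $ENV{APP_DB_SOURCE};
    return defined $override && length $override ? $override : $default;
}

print resolve_source('myapp-prod'), "\n";       # "myapp-prod" unless overridden
{
    local $ENV{APP_DB_SOURCE} = 'myapp-staging';
    print resolve_source('myapp-prod'), "\n";   # "myapp-staging"
}
```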

Sam.


Re: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wild?)

2005-07-04 Thread Sébastien Aperghis-Tramoni

Paul Marquess wrote:

Whilst I'm here, when I do get around to posting a beta on CPAN, I'd prefer
it doesn't get used in anger until it has bedded-in. If I give the module a
version number like 2.000_00, will the CPAN shell ignore it?


Indeed, if a distribution is numbered with such a number, it is not 
indexed by PAUSE, and therefore can't be installed from CPAN/CPANPLUS



Sébastien Aperghis-Tramoni
 -- - --- -- - -- - --- -- - --- -- - --[ http://maddingue.org ]
Close the world, txEn eht nepO



Re: what slow could be in Compress::Zlib?

2005-07-04 Thread Andreas J. Koenig
 On Mon, 4 Jul 2005 14:19:16 +0100, Paul Marquess [EMAIL PROTECTED] 
 said:

   If I give the module a version number like 2.000_00, will the CPAN
   shell ignore it?

Yes. To be precise, the indexer on PAUSE will ignore it. But don't
forget to write it with quotes around it.

-- 
andreas


Re: what slow could be in Compress::Zlib? (was RE: 5.004_xx in the wild?)

2005-07-04 Thread Yitzchak Scott-Thoennes
On Mon, Jul 04, 2005 at 02:19:16PM +0100, Paul Marquess wrote:
 Whilst I'm here, when I do get around to posting a beta on CPAN, I'd prefer
 it doesn't get used in anger until it has bedded-in. If I give the module a
 version number like 2.000_00, will the CPAN shell ignore it?

This is often done incorrectly. See L<perlmodstyle/"Version numbering">
for the correct WTDI:

   $VERSION = "2.000_00";    # let EU::MM and co. see the _
   $XS_VERSION = $VERSION;   # XS_VERSION has to be an actual string
   $VERSION = eval $VERSION; # but VERSION has to be a number
   
Just doing $VERSION = 2.000_00 doesn't get the _ into the actual
distribution version, and just doing $VERSION = "2.000_00" makes

   use Compress::Zlib 1.0;

give a warning (because it does: 1.0 <= "2.000_00" internally, and _
doesn't work in numified strings).

But if you are doing a beta leading up to a 2.000 release, it should be
numbered  2.000, e.g. 1.990_01.  Nothing wrong with a 2.000_01 beta
in preparation for a release 2.010 or whatever, though.
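The numification behind that warning is easy to see at a prompt (a minimal demonstration, not from the original mail):

```perl
use strict;
use warnings;

# In a numeric *literal* the underscore is legal and simply discarded:
# the number 2.000_00 is the number 2.
my $as_number = 2.000_00;
print $as_number, "\n";             # prints "2"

# Numifying the *string* "2.000_00" stops at the underscore (and warns
# under warnings) -- this is what the version check in
# "use Compress::Zlib 1.0" runs into.
my $as_string = "2.000_00";
{
    no warnings 'numeric';          # silence the warning for the demo
    print $as_string + 0, "\n";     # prints "2"
}
```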


Re: 5.004_xx in the wild?

2005-07-04 Thread steve
On Mon, Jul 04, 2005 at 05:40:20PM -0400, James E Keenan wrote:
 Michael G Schwern wrote:
 
 
 That said, here's the main differences:
 

I'm about a year out from seeing a Perl 4 in the wild, so I'll assume
that early Perl 5's can be found if you look long enough.

Steve Peters
[EMAIL PROTECTED]


Re: DBI v2 - The Plan and How You Can Help

2005-07-04 Thread Sam Vilain

Darren Duncan wrote:
3. Redefine prepare() and execute() such that the first is expressly for 
activities that can be done apart from a database (and hence can also be 
done for a connection handle that is closed at the time) while all 
activities that require database interaction are deferred to the second.


That would be nice, but there are some DBDs for which you need the database
on hand for $dbh.prepare() to work.  In particular, DBD::Oracle.

I think that what you are asking for can still work, though;

  # this module creates lots of SQL::Statement derived objects, without
  # necessarily loading DBI.
  use MyApp::Queries %queries;

  # new, not connect, so doesn't connect
  my $db = DBI.new( :source("myapp") );

  # prepare the objects as far as possible
  my %sths;
  for %queries.kv -> $query_id, $query_ast_or_template {
  %sths{$query_id} = $db.prepare($query_ast_or_template);
  }

  # connect
  $db.connect;

  # now proceed as normal
  my $sth = %sths<some_query_id>;
  $sth.execute( :param("foo"), :this("that") );

So, effectively the prepare can happen at any time, and it's up to the
DBD to decide whether to actually do anything with it immediately or not.
ie, on Pg the STHs would be built before the DB is connected, and on Oracle
they are built the first time they are used (and then cached).

Now I realize that it may be critically important for an application to 
know at prepare() time about statically-determinable errors, such as 
mal-formed SQL syntax, where error detection is handled just by the 
database.  For their benefit, the prepare()+execute() duality could be 
broken up into more methods, either all used in sequence or some 
alternately to each other, so users get their errors when they want 
them.  But regardless of the solution, it should permit for all 
database-independent preparation to be separated out.


OK, so we have these stages;

  1. (optional) generate an AST from SQL
  2. (optional) generate SQL string from an AST
  3. generate a handle for the statement, sans connection
  4. prepare handle for execution, with connection
  5. execute statement

I think these all fit into;

  1. SQL::Statement.new(:sql(...));
  2. $statement.as_sql;
  3. $dbh.prepare($statement) or $dbh.prepare($statement, :nodb);
  4. $dbh.prepare($statement) or $sth.prepare while connected
  5. $sth.execute

In particular, I don't think that the DB driver should automatically
get a chance to interfere with SQL::Statement; if they want to do that,
then they should specialise SQL::Statement.  IMHO.

Perhaps you have some other examples that don't fit this?

5. All details used to construct a connection handle should be 
completely decomposed rather than shoved into an ungainly data 
source.


I interpret this as asking that the detailed parameters to the DBI
connection are expanded into named options rather than simply bundled into
a string.

That, I agree with, and I guess it would be useful occasionally to be
able to specify all that rather than just setting it up once and labelling
those connection parameters with a source that comes from ~/.dbi.
Particularly for writing gui dialogs for interactive database utilities.

Either way, you don't want most applications dealing with this complexity
at all, really.

6. DBI drivers should always be specified by users with their actual 
package name, such as 'DBD::SQLite', and not some alternate or 
abbreviated version that either leaves the 'DBD::' out or is spelled 
differently.  Similarly, the DBI driver loader should simply try to load 
exactly the driver name it is given, without munging of any type.  This 
approach is a lot more simple, flexible and lacks the kludges of the 
current DBI.  DBI driver implementers can also name their module 
anything they want, and don't have to name it 'DBD::*'. A DBI driver 
should not have to conform to anything except a specific API by which it 
is called, which includes its behaviour upon initialization, invocation, 
and destruction.


Is this useful?

I can't see a reason that the DBI.new() / DBI.connect() call shouldn't be
flexible in what it accepts;

  $dbh = DBI.new( :driver<Rosetta> );   # means DBD::Rosetta
  $dbh = DBI.new( :driver<Rosetta::Emulate::DBD> ); # specify full package
  $dbh = DBI.new( :driver(Rosetta::Emulate::DBD) ); # pass type object
  $dbh = DBI.new( :driver(DBD::SQLite.new(:foo<bar>)) ); # pass driver object

Sam.


Re: DBI v2 - The Plan and How You Can Help

2005-07-04 Thread Darren Duncan
Okay, considering that using the same name prepare() like this may 
confuse some people, here is a refined solution that uses 3 methods 
instead; please disregard any contrary statements that I previously 
made:


  # Opt 1: A user that wants the most control can do this (new feature):

  my $sth1 = $dbh.compile( $sql_or_ast ); # always sans connection
  $sth1.prepare(); # always with connection, even if DBD doesn't use it
  $sth1.execute(); # always with connection

  # Opt 2: If they want less control, they do this (same as old DBI):

  my $sth2 = $dbh.prepare( $sql_or_ast ); # combines Opt 1's comp/prep
  $sth2.execute(); # same as Opt 1

  # Opt 3: Alternately, there is this (akin to my older suggestion):

  my $sth3 = $dbh.compile( $sql_or_ast ); # same as Opt 1
  $sth3.execute(); # combines Opt 1's prep/exec

  # Opt 4: Even less control (akin to old DBI's do):

  $dbh.execute( $sql_or_ast ); # combines Opt 1's comp/prep/exec

In this model, when you use just prepare() and execute(), they behave 
identically to the old DBI, including that they require an open 
connection.  So no mystery there.


The new feature is if you decide to use compile(); you then give that 
method the arguments you would have given to prepare(), and you 
invoke prepare() on the result with no arguments; each DBD would 
decide for itself how the work is divided between compile() and 
prepare() with the limitation that compile() is not allowed to access 
the database; ideally the DBD would place as much work there as is 
possible, which would vary between Oracle/Pg/etc.


Invoking just compile() then execute() will cause the execute() to do 
what prepare() normally does against a database, and cache the 
prepared handle.


In option 4, I renamed the old DBI's do() to execute() for 
consistency with the other examples; but this execute() is different 
in that it caches the prepared statement handle.  In any event, with 
all 4 examples, execute() gives you the same result regardless of 
what is called before it.


At 2:49 PM +1200 7/5/05, Sam Vilain wrote:

In particular, I don't think that the DB driver should automatically
get a chance to interfere with SQL::Statement; if they want to do that,
then they should specialise SQL::Statement.  IMHO.


I am operating under the assumption here that while the new DBI is 
designed to effectively support wrapper modules, the wrapper modules 
would also be altered from their current DBI-1-geared designs to 
accommodate DBI-2.


But still, what do you mean by interfere?

5. All details used to construct a connection handle should be 
completely decomposed rather than shoved into an ungainly data 
source.


I interpret this as asking that the detailed parameters to the DBI
connection are expanded into named options rather than simply bundled into
a string.

That, I agree with, and I guess it would be useful occasionally to be
able to specify all that rather than just setting it up once and labelling
those connection parameters with a source that comes from ~/.dbi.
Particularly for writing gui dialogs for interactive database utilities.


I see the act of storing all the data as a single string at any time 
to be a messy affair to be avoided.  The application doesn't have to 
know about the complexity to pass around a hash of values any more 
than it does with a string; but when the application wants to know 
the details, dealing with a hash is easier.



Either way, you don't want most applications dealing with this complexity
at all, really.


I am operating under the assumption that this system should work if 
there are no external config files that the DBI/DBD would read, and 
the application would provide that information; if it's in a file, the 
application would read it in, or would explicitly tell DBI where it 
is.  Or at least it should be possible for this to happen, even if a 
DBD defaults to look in a default location when it doesn't get the 
equivalent from the application.


6. DBI drivers should always be specified by users with their 
actual package name, such as 'DBD::SQLite', and not some alternate 
or abbreviated version that either leaves the 'DBD::' out or is 
spelled differently.  Similarly, the DBI driver loader should 
simply try to load exactly the driver name it is given, without 
munging of any type.  This approach is a lot more simple, flexible 
and lacks the kludges of the current DBI.  DBI driver implementers 
can also name their module anything they want, and don't have to 
name it 'DBD::*'. A DBI driver should not have to conform to 
anything except a specific API by which it is called, which 
includes its behaviour upon initialization, invocation, and 
destruction.


Is this useful?

I can't see a reason that the DBI.new() / DBI.connect() call shouldn't be
flexible in what it accepts;

  $dbh = DBI.new( :driver<Rosetta> );   # means DBD::Rosetta
  $dbh = DBI.new( :driver<Rosetta::Emulate::DBD> ); # specify full package
  $dbh = DBI.new(