On Tue, Sep 13, 2011 at 7:50 AM, Jonathan Leffler
jonathan.leff...@gmail.com wrote:
On Mon, Sep 12, 2011 at 10:29, Byrd, Brendan byr...@insightcom.com
wrote:
I currently have a working and tested model for a “nested hash to table”
conversion. [...]
H.Merijn Brand wrote:
Attached patch implements the following:
f_dir = /data/csv/foo,
f_ext = .csv,
f_ext = .csv/i, # Case ignore on extension, use given on write
f_ext = .csv/r, # Extension is required, ignore other files in
f_ext = .csv/ri, # f_dir. Options can be combined
I
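Assuming that patch, a DSN using the new options might look like this sketch (DBD::CSV-style DSN; the directory is made up for illustration):

```perl
# A sketch, assuming the patched DBD::File; the directory is invented.
my $dsn = 'dbi:CSV:f_dir=/data/csv/foo;f_ext=.csv/ri';
#   /r : extension is required -- other files in f_dir are ignored
#   /i : extension matches case-insensitively; the given case is used on write

# With DBD::CSV installed this would then be used as:
#   my $dbh = DBI->connect($dsn, undef, undef, { RaiseError => 1 });
print "$dsn\n";
```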
Tim Bunce wrote:
On Fri, Jul 20, 2007 at 07:26:28AM -0700, Dean Arnold wrote:
For those attending OSCON next week: I've talked Mssr. Bunce into
an informal BOF at the Oregon Brewers Festival
(http://www.oregonbrewfest.com/) across the
bridge sometime Thursday afternoon. Let me know
if you're
Dean Arnold wrote:
Jeff Zucker wrote:
I am preparing to add the ability to attach metadata to DBDs that
subclass DBD::File.
Will there be a std. i/f to get at that info from within
SQL::Statement (and its subclasses) ?
Yes, definitely. Though probably to start you'll just have access
I am preparing to add the ability to attach metadata to DBDs that
subclass DBD::File. This will mean that DBI metadata methods such as
primary_key() and column_info() will be available and also that other
modules like DBIx::Class and Class::DBI will have access to the metadata.
I've outlined
H.Merijn Brand wrote:
I give up. dbish is not up to the greatness of DBI
I suggest documenting in DBI that there is no more dbish
There are a few problems with dbish, but that's a little harsh. Tom is
working towards a model where there will be a stripped down dbish base
and a variety of
H.Merijn Brand wrote:
Can you point me to the latest version, so that I can patch against
something more actual?
http://svn.perl.org/modules/dbi-shell/trunk/ is sadly empty. Perhaps
Tom could be persuaded to upload his most recent there.
--
Jeff
Jonathan Leffler wrote:
the file seemed to be
a double gzipped file
Hmm, odd, it's the result of make dist and svn commit. Must be
something odd about the svn since the tar.gz appears normal. I've
uploaded to
http://www.vpservices.com/jeff/programs/SQL-Statement-1.14.tar.gz
I believe this one
H.Merijn Brand wrote:
Failed 2/15 test scripts, 86.67% okay. 1/227 subtests failed, 99.56% okay.
I added over 140 tests in recent releases (and doubled the Devel::Cover
coverage) and apparently forgot to wrap some of the requires in evals.
SQL::Statement has no prerequisites but since its most
Hi Greg,
Perhaps this should be obvious, but it wasn't to me. I naively figured
if I had an old DBD::Pg working and PostgreSQL working and I had
everything listed in the README that I could install DBD::Pg 1.41. Not
sure I've stated things correctly, but maybe this will help someone else
in
Dean Arnold wrote:
Where is cpan.forum located ? I can't find any links at cpan.org...
http://www.cpanforum.com/
I think the differentiator is the Wiki-ized content, which tends to
be more structured, and would hopefully become a Wikipedia for the DBI
(and DBD's and DBIx's). PerlMonks content
[Copied to dbi-dev, but please reply to dbi-users]
Since the cat is out of the bag on upcoming SQL::Statement changes,
here's a preview. Comments on the proposed syntax will be much appreciated.
The major additions will be column name aliases (thanks Robert
Rothenberg), improved parsing
Salvador Fandio wrote:
Hi,
I am working on a DBD backend for SWI-Prolog
(http://www.swi-prolog.org). As it runs on top of my module
Language::Prolog::Yaswi
Excellent! Thanks and congratulations! I'm looking forward to seeing it.
--
Jeff
Tim Bunce wrote:
Okay, but the recommended names are db and database (not dbname).
Duh, s/dbname/database/g. (SQLite and Pg folks, are you listening?).
After the dbi:AnyData:; everything should be in name=value; format.
So dbi:AnyData:format=XML;table=Test;file=test.xml;
Ok, I thought one
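The name=value; convention makes such DSNs trivially machine-parseable. A minimal pure-Perl sketch (this is an illustration, not DBI's actual DSN parser):

```perl
# Hypothetical DSN in the name=value; format discussed above.
my $dsn = 'dbi:AnyData:format=XML;table=Test;file=test.xml';

# Split off the "dbi" prefix and the driver name, then parse the pairs.
my (undef, $driver, $rest) = split /:/, $dsn, 3;
my %attr = map { split /=/, $_, 2 } split /;/, $rest;

# $driver is 'AnyData'; $attr{format} is 'XML', $attr{file} is 'test.xml'.
print "$driver: $attr{format} $attr{table} $attr{file}\n";
```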
H.Merijn Brand wrote:
On Thu 16 Sep 2004 22:36, Jeff Zucker [EMAIL PROTECTED] wrote:
SQL::Statement
As a side note, could you rewrite the test cases on this one to fit in the
general format of testing output, instead of the list of 100 numbers?
Suggest using Test::Simple or Test::More
Will do
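For anyone making the same conversion, a minimal Test::More script (Test::More ships with Perl) replaces the bare list of numbers with self-describing TAP output:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Each assertion prints one "ok"/"not ok" TAP line with a description,
# instead of an anonymous number in a long list.
is( 2 + 2, 4, 'arithmetic sanity check' );
ok( uc('csv') eq 'CSV', 'uc() upper-cases' );
```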
[removed dbi-users from the cc list for this bit]
Darren Duncan wrote:
First of all, what is the general policy regarding modules that are
currently bundled with the core DBI distribution itself? Several on
your list are:
DBD::DBM
DBD::File
DBI::PurePerl
DBI::SQL::Nano
The above are part of
Darren Duncan wrote:
Just look at the SYNOPSIS documentation. As near as
I can tell, most functionality is method-only, with a few miscellaneous
extras being tied only.
Another way to look at it is that DBI is made up of handles which have
the very important property that the behaviour of a
Darren Duncan wrote:
At 7:01 AM -0700 8/4/04, Jeff Zucker wrote:
It is not a miscellaneous extra that \%attrs is a parameter to
virtually every method in the DBI or that passing \%attrs does a
fundamentally different thing than passing $dsn, or $sql or $value.
I was not referring to \%attrs
Darren Duncan wrote:
To whomever was at OSCON and attended the DBI Driver Developers BoF, I
would appreciate it if some notes or a summary of goings-on at that
event could be posted to this list. This would help people such as
myself who weren't there know what happened. If there isn't
Jeff Zucker wrote:
b. DBI v2.0 will not introduce new user features
Though it will have some impact on existing apps, see the notes on
AutoCommit and tables() and some deprecated older features.
--
Jeff
Tim Bunce wrote:
Any interest in a DBI Driver Developers BoF at OSCON?
Main topics would be:
Migration to DBI v2
Unified test suite
Count me in. Are you thinking of this as instead of, or in addition to
a broader BOF?
Ideally in addition to, since it's of little relevance to
Tim Bunce wrote:
Are any/many driver authors/maintainers/patchers going to OSCON this year?
Any interest in a DBI Driver Developers BoF at OSCON?
Main topics would be:
Migration to DBI v2
Unified test suite
Count me in. Are you thinking of this as instead of, or in addition to
a
Authors of DBI related modules might want to take a look at the
discussion at
http://www.perlmonks.org/index.pl?node_id=350861
It is written by one of the CPAN testers and is about the degree of
invasiveness of DBD module tests.
--
Jeff
Brendan W.McAdams wrote:
I'm doing preliminary work on writing a driver for the pure java SQL
Database HSQLDB (based upon the old Hypersonic SQL database),
available at http://hsqldb.sourceforge.net. HSQLDB is a 100% java
Is there a reason not to simply connect to it with DBD::JDBC?
--
Jeff
Lance Prais wrote:
How do I get removed from this list?
You look at the header of any message sent from the list and find the
line that says List-unsubscribe, which reads:
mailto:[EMAIL PROTECTED]
And then you send mail to that address.
And then, the next time you are tempted to join a list,
Tim Bunce wrote:
Meanwhile, the best workaround for the DBI probably to make t/50dbm.t
remove from @INC any dirs which match /\/5\.[0-9.]$/ but which don't
also have an arch-specific directory in @INC.
This seems simpler (tests for ability to run a method that can only be
run if the
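The @INC filter quoted above can be sketched over a hypothetical @INC-style list (the arch-subdirectory name varies by platform; a literal "arch" entry stands in for it here):

```perl
# Hypothetical @INC-like list; paths are invented for illustration.
my @libs = (
    '/usr/local/lib/perl/5.8.3',        # version-only dir, arch dir present
    '/usr/local/lib/perl/5.8.3/arch',   # stand-in for the arch-specific dir
    '/opt/perl/5.6',                    # version-only dir, no arch dir
    '/usr/share/perl5',
);

# Drop dirs ending in /5.N or /5.N.N that have no matching
# arch-specific entry elsewhere in the list.
my %seen = map { $_ => 1 } @libs;
my @kept = grep { !( m{/5\.[0-9.]+$} && !$seen{"$_/arch"} ) } @libs;

print "$_\n" for @kept;   # everything except /opt/perl/5.6
```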
I can nmake rc1 but not rc2 on win98 with VC6.
--
Jeff
This is perl, v5.8.1 built for MSWin32-x86-multi-thread
(with 1 registered patch, see perl -V for more detail)
Copyright 1987-2003, Larry Wall
Binary build 807 provided by ActiveState Corp. http://www.ActiveState.com
ActiveState is a
Jeff Urlwin wrote:
A release candidate is available for testing at:
http://homepage.eircom.net/~timbunce/DBI-1.42-rc2-20040311.tar.gz
Compile error on Win32, AS perl 809. The attached patch fixes it.
Thanks, that fixed the nmake problem I reported.
--
(the other) Jeff
Henrik Tougaard wrote:
t/50dbm.t still fail after test 37:
--- Using DB_File () ---
DBD::DBM 0.01 using DB_File
DBD::File 0.30
DBI::SQL::Nano 0.01
DBI 1.42
OS dec_osf (4.0g)
Perl 5.008003 (alpha-dec_osf)
UPDATE DB_File_fruit SET
Argh :-(. On Debian with perl 5.8.3 as set up by my ISP, the new regex
in 50dbm.t removes the legitimate directories where the modules are
stored and skips 50dbm.t with the message No DBM modules available.
Once I commented out the regex, all tests passed for all 6 DBM types.
My @INC shows
I get a warning on 50dbm.t - odd, I could swear it wasn't there when I
uploaded :-(. Anyway, it seems that BerkeleyDB will throw a warning
(not an error) when you attempt to insert an undef into a column. The
warning goes away if I insert something other than NULL in the third
test insert.
Beau E. Cox wrote:
On Wednesday 03 March 2004 11:50 pm, Beau E. Cox wrote:
Hello folks,
I noticed changes in DBD::DBM in the latest svn DBI source.
That's a rapidly moving target and unfortunately you caught it while I
was working on the parts that caused you problems.
1) added dbm to
Tim Bunce wrote:
On Thu, Mar 04, 2004 at 01:23:32AM -1000, Beau E. Cox wrote:
On Wednesday 03 March 2004 11:50 pm, Beau E. Cox wrote:
Hello folks,
I noticed changes in DBD::DBM in the latest svn DBI source.
When I try to test DBI (make is fine):
[TEST ERRORS]
Here is what I did
Tim Bunce wrote:
On Thu, Mar 04, 2004 at 08:56:30AM -0800, Jeff Zucker wrote:
Tim Bunce wrote:
On Thu, Mar 04, 2004 at 01:23:32AM -1000, Beau E. Cox wrote:
On Wednesday 03 March 2004 11:50 pm, Beau E. Cox wrote:
Hello folks,
I noticed changes in DBD::DBM in the latest
Jeff Zucker wrote:
Tim Bunce wrote:
Right, which brings us back to making them subclass STORE.
But why not just let the DBD::File subclasser define an f_valid_attrs
hash and let DBD::File do it like this (only for lower-cased attribute
names)
my $prefix = $dbh->{f_valid_attrs
Tim Bunce wrote:
... the other shoe drops. My suggestion above is probably bad because
the driver may need to check the *value* as well as the attribute so it
should happen in the driver, not in DBD::File.
Not just check the value but also possibly do something other than
simply store
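One way the idea above could look in a driver's own STORE (the class name, attribute list, and value checks are all invented for illustration; a real driver would hand non-f_ attributes up to DBD::File's STORE):

```perl
use strict;
use warnings;

package My::FileDriver::db;    # hypothetical DBD::File subclass handle

# Attributes this driver accepts, each with a value check -- the point
# raised above: the driver, not DBD::File, knows what a valid value is.
my %valid_attr = (
    f_dir => sub { defined $_[0] && length $_[0] },
    f_ext => sub { !defined $_[0] || $_[0] =~ /^\.\w+/ },
);

sub STORE {
    my ($h, $attr, $val) = @_;
    if ($attr =~ /^f_/) {
        my $check = $valid_attr{$attr}
            or die "Unknown driver attribute '$attr'";
        $check->($val) or die "Bad value for '$attr'";
        return $h->{$attr} = $val;
    }
    # A real driver would pass anything else to its parent's STORE.
    $h->{$attr} = $val;
}

package main;
my $h = bless {}, 'My::FileDriver::db';
$h->STORE(f_ext => '.csv');
print "f_ext is $h->{f_ext}\n";
```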
Beau E. Cox wrote:
But, speaking from a pure users' point of view:
If a module's test suite unconditionally contains a test, it should
have a chance to work on properly configured machines, or, some
sort of documentation/diagnostic information should be available
in the module's package.
Yes,
I've committed the changes to DBD::DBM and DBD::File and DBI::SQL::Nano,
they're in revision 184.
I (wisely) don't have rights to t/ so I'm attaching the new 50dbm.t. It
should work as Beau described tests should work. Sorry again for the
inconvenience, the old test was never meant to be a
Tim Bunce wrote:
On Thu, Mar 04, 2004 at 12:47:54PM -0800, Jeff Zucker wrote:
I've committed the changes to DBD::DBM and DBD::File and DBI::SQL::Nano,
they're in revision 184.
I (wisely) don't have rights to t/ so I'm attaching the new 50dbm.t.
You do ...
So go ahead and check
H.Merijn Brand wrote:
Where do these $sql_create's come from?
From a script I'm writing to scrape the dataset to be used in
_Programming_the_Perl_DBI 2nd edition.
If it were user code, it'd probably
better be
$sql_create =~ s/\bVARCHAR\s*\(1024\)/TEXT/gi if $dsn =~ /dbi:(mysql|unify)/i;
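As a runnable illustration of that substitution (the table, column, and DSN are made up):

```perl
# Hypothetical $sql_create and $dsn values for illustration.
my $dsn        = 'dbi:mysql:test';
my $sql_create = 'CREATE TABLE prices (note VARCHAR (1024))';

# Rewrite VARCHAR(1024) as TEXT only for drivers that need it.
$sql_create =~ s/\bVARCHAR\s*\(1024\)/TEXT/gi if $dsn =~ /dbi:(mysql|unify)/i;

print "$sql_create\n";   # CREATE TABLE prices (note TEXT)
```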
Notes on some other issues below, here's my matrix:
DBD::CSV DBD::AnyData
1. not quite sure what you mean
2. yes, but when I add support for functions in select, they will be
used for values eg. SELECT colA - ? AS cost
3. currently yes, but next release will allow placeholders for
Tim Bunce wrote:
On Tue, Oct 21, 2003 at 10:57:22AM -0700, Dean Arnold wrote:
From what you describe it _would_ make sense to first implement a
separate module providing a thin Perl interface to the API, and
then implement a pure-perl driver that uses that.
Dean,
I'm working on the thinnest
I will be releasing a significantly upgraded SQL::Statement and
DBD::File shortly and I have some questions about interface. I'd really
appreciate some feedback. These are the features that are near
finalization:
* heterogeneous SQL across multiple DBI sources
* per-table DBI connections
Tim Bunce wrote:
Jeff, did you get any reply to this?
Nope, your message about bouncing it to the perl5 porters was the only
response.
--
Jeff
Tim.
On Tue, Feb 18, 2003 at 06:33:35PM -0800, Jeff Zucker wrote:
I have recently had some requests for assistance with localization for
DBD::CSV
Jose Blanco wrote:
I just installed SQL::Statement version 1.005 and using a CSV database I tried creating a table with field names that were in lower case. The table that got created had the field names in upper case.
Version 1.006 will be out in the next day or so and it will solve this
I have recently had some requests for assistance with localization for
DBD::CSV. The situation is this: if a user adds use locale in
SQL::Statement, the DBD will be able to use a localized sort order and
localized comparison operations. If a user adds it also in SQL::Parser,
they will also
Jonathan Leffler wrote:
I got one email from the DBI lists overnight, instead of thirty or so.
Did everyone go away for Christmas?
There's nothing wrong with noise that a little silence won't cure. More
signal cures it too. The list has just switched noise-combating tactics
for a week or
Hi Matt,
It'd be cool to see if DBD::SQLite worked there - then you'd have a
fully transaction aware database on a handheld!
At the moment I don't have a compiler or SDK for the pocketPC so I'm
just using other people's prebuilt binaries plus pure perl modules
(don't even have some basics
Just a preliminary report - thanks to Rainier Keuchel's port of perl to
winCE, I now have DBI (PurePerl) and DBD::AnyData doing database read
access on my Toshiba e740 (a palm-like handheld running pocketPC on an
xScale processor). The port also works on most other pocketPC and
winCE device
James Blomo wrote:
I have tried working on Jeff Zucker's SQL::Parser module to fully
support the IN clause.
Thanks James, what would help most is some sample queries that didn't
work before and do work after.
I
have done some light testing on this
Unfortunately, the testing bit is the
I have uploaded the latest version of SQL-Statement to CPAN. All users
of SQL-Statement, SQL-Parser, DBD::CSV, DBD::AnyData, DBD::Excel and
other modules using SQL::Statement are encouraged to upload the file from:
http://www.cpan.org/modules/by-authors/id/J/JZ/JZUCKER/SQL-Statement-1.005.tar.gz
Dean Arnold wrote:
PDA now means Perl Digital Assistant
I finally scraped up some time on a gloomy Sunday here
in Seattle,
Ah, didn't realize that you were just up the road (I'm in gloomy
Portland). If you're ever down this way let me know or if you'd like to
chat when I'm next up
(diff for DBI.pm 1.30)
4516c4516
$hash_ref = $dbh->fetchall_hashref($key_field);
---
$hash_ref = $sth->fetchall_hashref($key_field);
4544a4545
This method can not be used if any of the key field values is NULL.
--
Jeff
Orlando Andico wrote:
It IS possible to read MSAccess files on a Linux box.
http://mdbtools.sourceforge.net/
Hmm, I had no idea that was available, forget my advice about saving the
Access files as CSV.
At first glance it looks like it would be trivial to build an
On Thu, 13 Jun 2002, Tim Bunce wrote:
Don't forget that all driver-private attribute and functions should
start with the driver prefix. So backend_pid should be called pg_backend_pid
1. Just to double check, is this correct?
$dbh->func( $some_flags
, { $NO_PREFIX_attr_name =>
Tim Bunce wrote:
Actually, though, it's crossed my mind more than once to add the
core C code to parse a CSV record into the DBI. It should be tiny.
That would be great for users of real DBI, one less module to download.
It wouldn't do much for pure perl users though unless you want to
Honza Pazdziora wrote:
On Mon, Apr 15, 2002 at 04:02:16PM -0700, Jeff Zucker wrote:
The latest version passes these tests with perl 5.6.1 on win98:
DBI-1.21/t/* 278/278 ok (proxy.t,preparse.t skipped)
DBD-AnyData-0.06/t* 40/40 ok
DBD-CSV-0.2004/t* 244/244 ok
DBD
The latest version passes these tests with perl 5.6.1 on win98:
DBI-1.21/t/* 278/278 ok (proxy.t,preparse.t skipped)
DBD-AnyData-0.06/t* 40/40 ok
DBD-CSV-0.2004/t* 244/244 ok
DBD-Sprite-0.30/t/* 28/28 ok
DBD-XBase-0.161/t*238/239 ok (same with real DBI)
It
Mail arrived from Zucker [EMAIL PROTECTED] but
was deleted because it exceeded the mailbox quota.
Details of the mail deleted for exceeding the quota:
--From: Jeff Zucker [EMAIL PROTECTED]
--To: [EMAIL PROTECTED],
[EMAIL PROTECTED], dbi-dev
[EMAIL PROTECTED]
--Date sent: 2002/04/13 04:31:07
--Subject
[EMAIL PROTECTED] wrote:
if Larry had wanted Perl to be able to distinguish numbers from numberlike
strings he would have given us a way before B.
I think that needs to be amended to: If Larry had wanted us not to be
able to distinguish numbers from numberlike strings, he wouldn't have
[EMAIL PROTECTED] wrote:
Very cool.
Leaving aside the question whether anyone *should* tell the difference between 2
and "2",
In this case, only to get PurePerl to act the same way as DBI when
putting quotes around values in neat(), not a large loss if it's
missing.
here is one (untested)
[EMAIL PROTECTED] wrote:
With these changes plus my previous patch, all but 1 test in basic.t works for
me. The patch fixes a bug in neat() whereby neat(2) was returning ''.
Thanks again, I really appreciate it.
I've
also ported a usage of foreach() to Perl 5.004 syntax and added quotes
[EMAIL PROTECTED] wrote:
I don't want to seem to be defending this patch too vigorously,
And sorry if I made it appear to need defending. I was just
commenting. As a matter of fact, I will certainly be using your patch
in SQL::Statement to test for number/string when I add type-checking.
The latest DBI::PurePerl is attached. Current results for nmake test:
Failed Test Total Fail Failed
---
t\basics.t 369 25.00%
t\examp.t 201 12 5.97%
t\subclass.t 154 26.67%
2 tests skipped. (proxy.t,preparse.t)
Failed 3/8 test scripts,
Tim Bunce wrote:
[other good advice snipped but heeded]
I've added the bitmask checking for IMA_STUB, IMA_KEEP_ERR,
IMA_COPY_STMT, and IMA_FUNC_REDIRECT.
sub set_err {
my($h,$errnum,$msg,$state)=@_;
$msg = $errnum unless defined $msg;
#z
# Avert your eyes if you
Here's the latest DBI::PurePerl. I'm sure I've taken some inappropriate
shortcuts, but the results on DBI-1.21/t/* are looking pretty good, much
improved over last version:
Failed Test Total Fail Failed
-
t\basics.t 369 25.00%
t\examp.t 201
Tim Bunce wrote:
Replacing the bootstrap DBI; line in DBI.pm with require DBI::PurePerl;
and running make test may be a good way to play with it.
Another way which makes it easy to go back and forth between real and PP
DBI is to replace bootstrap DBI; in DBI.pm with
if (defined
Tim Bunce wrote:
I'd expect it to be hooked into the real DBI.pm something like this:
eval {
bootstrap DBI;
};
if ($@) {
my $error = $@;
die $error unless $error =~ /.../ # DBI.xs not available
warn SWITCHING TO DBI::PurePerl! YOU REALLY
Tim Bunce wrote:
On Fri, Mar 22, 2002 at 02:59:00PM -0800, Jeff Zucker wrote:
My previous thinking was more along the lines of providing only the
basics so that no one would be tempted to use it when they really should
be switching to DBI.
I think the performance drop will take care
Tim Bunce wrote:
On Thu, Mar 21, 2002 at 12:17:20AM -0500, Jerrad Pierce wrote:
No word or progress? I independently arrived at the same conclusion myself,
but have yet to get around to doing it. Perhaps next week if you don't
know of any pre-existing work, a good intro to DBM.
These
Jerrad Pierce wrote:
My thoughts were the following:
table name in the statement is ignored, there is only one table per DBM.
this allows you to drop in a different DBD later
the column name is ignored, see above
Sorry, I don't follow you here. The way DBD::AnyData works (and
Randal L. Schwartz wrote:
Wait! Don't we already have a SQL::Statement or something? Why are
we reinventing things?
As I understand it, the pre-parser that will be part of DBI will be very
limited in scope and will not parse SQL. Essentially it will just
handle things like comments,
Sterin, Ilya wrote:
You can use SQL::Parser which is a part of SQL::Statement to parse SQL.
It's almost fully SQL 92 compliant:-)
Umm, a better way to put that would probably be that the modules are
fully compliant in what they support but that they do not yet support
all of SQL92. They
Here are the results for DBD::CSV, first using the XS SQL::Statement,
and then using the Perl SQL::Statement. As you can see, there's quite a
bit of difference.
--
Jeff
DBD::CSV + XS VERSION OF SQL STATEMENT
--
114 SQL_CATALOG_LOCATION1
Steffen Goeldner wrote:
The order of options for SQL_SQL92_PREDICATES and for
SQL_STRING_FUNCTIONS (not SQL92_STRING...) are different in your Local
directory than they are in the MSDN ODBC docs.
Do you mean the order in the output of GetInfoAll.pl? E.g.:
SQL_FN_STR_CONCAT |
Steffen Goeldner wrote:
I used the C header files (sql.h and sqlext.h) from the MDAC SDK 2.6:
Ok, thanks, I've got those header files and will use them. Silly of me
to assume that the docs would be useful :-(.
On another issue ... I have a bit of a complex situation with three
different
Hi Steffen,
I've used your fine scripts to create and test $dbh->get_info() via
DBD::CSV. It works great. I've noticed two minor discrepancies between
the documentation I have and the way the script operates.
The order of options for SQL_SQL92_PREDICATES and for
SQL_STRING_FUNCTIONS (not
Sterin, Ilya wrote:
From: Wiedmann, Jochen
And while we're on it: Is anyone interested in taking
over maintenance of DBD::mysql and DBD::Proxy? That
would be nice.
Jochen, it would be nice if Jeff can take that one over as well, or myself.
We can then maintain them as with Jeff's
Tim Bunce wrote:
On Wed, Mar 06, 2002 at 03:04:51PM -0800, Jeff Zucker wrote:
It's from the PDF of SQL92 I purchased from www.ansi.org. But here's
the text:
13)
A regular identifier and a delimited identifier are equiva-
lent if the identifier body of the regular identifier
Tim Harsch wrote:
I am writing perl software that I would like validate against a test
suite. I kind of like the Perl method of 'make test' I've seen in so
many modules. If any of you know a tutorial or have a trival test suite
I can expand upon I would be eternally grateful.
My all time
Tim Bunce wrote:
Umm, carrying on from there... move %get_info_constants into DBD::Oracle::GetInfo
and make get_info do:
So is this the way to do get_info() for DBD::CSV and DBD::AnyData? The
situation for those is a bit more complex since the return values depend
not only on the driver
Tim Harsch wrote:
Geocrawler at least has some of the archives:
http://www.geocrawler.com/lists/3/Web/183/0/
Or
http:[EMAIL PROTECTED]/mail2.html
And a kludge to search, something like:
http://www.google.com/search?as_q=get_info&as_sitesearch=http%3A%2F%2Farchive.develooper.com
--
I only have six questions so far :-)
1. Is this a good summary of our story to date?
* There is (will be) a database handle method in DBI,
get_info(), that will call a get_info() method in each DBD
that supports it and return the equivalent for that driver of
what DBD::ODBC now returns
I am fixing some glitches involved in running the t/*.t tests for
DBD::CSV with the new SQL::Statement and have reduced things to two
differences between the old and new Statement and am not sure how to
proceed.
1. The new SQL::Statement will allow manipulation of blobs, but only
with
Wiedmann, Jochen wrote:
Hi, Jeff,
1. The new SQL::Statement will allow manipulation of blobs, but only
with placeholders:
This only works in the XS version of SQL::Statement:
$dbh->do("INSERT INTO foo VALUES( $blob )");
But this works in both versions:
Jochen Wiedmann wrote:
b) deletions are performed by adding
a record's tell() into a hash and only physically performed when the
file is closed. This means that the unlocking and the
truncating-rewriting only happens once for each file regardless of how
many operations are performed.
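A self-contained sketch of that deferred-deletion scheme (plain one-line text records stand in for the real record format; the fixture data is invented):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Write a small fixture "table": one record per line.
my ($fh, $fname) = tempfile(UNLINK => 1);
print $fh "$_\n" for qw(apple banana cherry);
close $fh;

# Pass 1: mark deletions by remembering each doomed record's tell()
# in a hash -- no rewriting yet.
my %deleted;
open my $in, '<', $fname or die $!;
while (1) {
    my $pos = tell $in;
    defined(my $line = <$in>) or last;
    chomp $line;
    $deleted{$pos} = 1 if $line eq 'banana';
}
close $in;

# "Close": one truncating rewrite that skips the marked offsets,
# so the rewrite happens once regardless of how many deletes occurred.
open $in, '<', $fname or die $!;
my @kept;
while (1) {
    my $pos = tell $in;
    defined(my $line = <$in>) or last;
    push @kept, $line unless $deleted{$pos};
}
close $in;
open my $out, '>', $fname or die $!;
print $out @kept;
close $out;
```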
I am very close to releasing an alpha of the pure perl, join-enabled
SQL::Statement. Among other questions, is one related to the table
opening/flocking/closing behavior of the current SQL::Statement.
Basically it now operates such that tables are opened (and flocked if
DBD::File is used and