I was looking for something more aggregated. I've been scared to read email
lists until now because of how unsafe it is. Just gotta figure out how I
upgrade the internet to 2.01. I hope it comes with AJAX.
-Original Message-
From: Clay Dowling [mailto:[EMAIL PROTECTED]
Sent: Tuesday,
There is clearly no one correct answer. So instead of arguing the point
over and over, why don't the people who object simply apply the proposed
change and report back what issues their applications have? Let's see how
many people are actually using this functionality, what breaks, and weigh
the
DB2 gives this:

CREATE TABLE t1(a INTEGER, b REAL)
INSERT INTO t1 VALUES(5,5)
SELECT a/2, b/2 FROM t1

1           2
----------- ------------
          2   +2.50E+000

  1 record(s) selected.
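SQLite behaves the same way as the DB2 run above: dividing two INTEGER values truncates toward zero, while dividing a REAL value keeps the fractional part. A minimal sketch using Python's standard sqlite3 module (the table and values mirror the example; nothing else is from the original mail):

```python
import sqlite3

# Reproduce the DB2 example in SQLite: a is INTEGER, b is REAL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1(a INTEGER, b REAL)")
con.execute("INSERT INTO t1 VALUES(5, 5)")

# a/2 is integer division (truncates); b/2 is real division.
row = con.execute("SELECT a/2, b/2 FROM t1").fetchone()
print(row)  # (2, 2.5)
con.close()
```

The deciding factor is the declared column affinity: `5` stored into a REAL column becomes `5.0`, so `b/2` yields `2.5`, while the INTEGER column's `5/2` truncates to `2`.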
-Original Message-
From: Rob Lohman
I took it to mean that he wants:
SELECT sql FROM sqlite_master
WHERE type='table';
But I could be wrong.
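That query returns the original CREATE statements SQLite recorded for each table. A quick sketch of what it produces, using Python's sqlite3 (the sample table is illustrative, not from the original mail):

```python
import sqlite3

# sqlite_master stores the DDL text for every object in the database;
# filtering on type='table' dumps just the CREATE TABLE statements.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1(a INTEGER, b REAL)")
for (sql,) in con.execute("SELECT sql FROM sqlite_master WHERE type='table'"):
    print(sql)  # CREATE TABLE t1(a INTEGER, b REAL)
con.close()
```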
-Original Message-
From: Kiel W. [mailto:[EMAIL PROTECTED]
Sent: Thursday, July 07, 2005 8:52 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] My first post, a few wishes..
On
solution?
-Original Message-
From: Brad DerManouelian [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 30, 2005 6:36 PM
To: sqlite-users@sqlite.org; [EMAIL PROTECTED]
Subject: RE: [sqlite] Insert all rows from old table into new table but
in sorted order
The solution is to always specify an order when you want data returned
in a particular order. The SQL standard does not guarantee that rows come
back in any particular order unless you specify one. Order is never
assumed, so the engine is free to optimize for speed, especially for
operations where order is irrelevant, like totaling.
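The point above can be sketched concretely with Python's sqlite3 (illustrative table and values, not from the original thread): without an ORDER BY, the row order is an accident of storage; with one, it is guaranteed.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nums(n INTEGER)")
# Insert deliberately out of order.
con.executemany("INSERT INTO nums VALUES(?)", [(3,), (1,), (2,)])

# ORDER BY is the only portable way to get a defined row order.
ordered = [n for (n,) in con.execute("SELECT n FROM nums ORDER BY n")]
print(ordered)  # [1, 2, 3]
con.close()
```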
I've been using Tiger since Friday night and feel the need to comment on
how fast the indexing/searching is. It took about an hour to index my
drive (100,000+ files on a PowerBook G4 -- great speed for a relatively
slow drive, I thought), and I can return 86,920 files in about 45 seconds.
I'll never need to
Mail system likely has a quota.
Check this link:
http://www.webservertalk.com/archive280-2004-6-280358.html
-Original Message-
From: Jonathan Zdziarski [mailto:[EMAIL PROTECTED]
Sent: Monday, April 11, 2005 12:27 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] 50MB Size Limit?
$sth2->execute();
# fetchrow() is an older alias for fetchrow_array(): it returns one row
# as a list, or an empty list when there are no rows left.
while ( my @rows2 = $sth2->fetchrow() )
{
    foreach my $row_contents ( @rows2 )
    {
        print "\t$row_contents";
    }
    print "\n";
}
$sth2->finish();
$dbh2->disconnect();
Sorry if this is the wrong forum, but has anyone run into a problem with
leaking file handles when using SQLite with Perl DBI and DBD::SQLite?
I am calling finish() and disconnect() when I'm done with each
connection, but lsof reports that my database file is opened once for
each connection and never closed.
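For comparison, the same cleanup discipline in Python's sqlite3 (an illustration of the pattern, not a fix for DBD::SQLite): wrapping the connection in `closing()` guarantees the handle is released even if the work in between raises.

```python
import sqlite3
from contextlib import closing

# closing() calls con.close() on exit, releasing the file handle
# deterministically rather than waiting for garbage collection.
with closing(sqlite3.connect(":memory:")) as con:
    con.execute("CREATE TABLE t(x INTEGER)")
# After the block, the connection is closed; further use raises
# sqlite3.ProgrammingError.
```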