On Mon, 2004-04-19 at 20:36, D. Richard Hipp wrote:
> The linked-list structure of overflow storage is part of the problem.
> But the fact that SQLite uses synchronous I/O is also a factor. In
> order to make BLOBs fast in SQLite, I would have to change to a different
> indexing technique for overflow storage *and* come up with some
On Mon, 2004-04-19 at 07:04, D. Richard Hipp wrote:
> Darren Duncan wrote:
> > I think the simple answer is that SQLite uses a linked list which can't
> > know where a page is until reading the next one, but other databases use
> > something other than a linked list; they would trade a bit of complexity
> > for speed. -- Darren Duncan
That info is actually incorrect. WinFS is still scheduled for Longhorn.
The tech media misinterpreted something.

On Apr 18, 2004, at 8:31 PM, Greg Miller wrote:
> Greg Obleshchuk wrote:
> > I know the MS is looking at replacing the file system with the SQL
> > engine in Longhorn so they must have solved the issue.
> They're not replacing NTFS with a database. They're implementing a
> database layer (WinFS) on top of NTFS. It's not entirely clear what
> they're doing,
These disk access issues are why no database I know of actually
stores large objects inline. It would be crazy to do so.
mysql, postgres, and oracle all have support for blobs, and
none of them store them inline.
(btw, if you care about disk io performance for blobs,
you can tune the fs
At 10:59 AM +1000 4/19/04, Greg Obleshchuk wrote:
I guess it would depend on the system. I assume (and may ask) that
MS SQL and Oracle use multi-threaded processes to access the
information and that is the way they get around it. I know the MS
is looking at replacing the file system with the SQL engine in
Longhorn so they must have solved the issue.
At 7:50 PM -0400 4/18/04, D. Richard Hipp wrote:
Suppose you have a 1MB row in SQLite and you want to read the whole
thing. SQLite must first ask for the 1st 1K page and wait for it to
be retrieved. Then it asks for the 2nd 1K page and waits for it.
And so forth for all 1000+ pages. If each
Greg
- Original Message -
From: Puneet Kishor
To: SQLite
Sent: Monday, April 19, 2004 10:41 AM
Subject: Re: [sqlite] row size limit
On Apr 18, 2004, at 7:31 PM, Greg Obleshchuk wrote:
> Hi Richard,
> You know that is the first clear and concise
be better and more flexible at managing the physical files. Now I have
a more scientific sounding answer to back my assertion.
;-)
Greg
- Original Message -
From: D. Richard Hipp
Cc: [EMAIL PROTECTED]
Sent: Monday, April 19, 2004 9:50 AM
Subject: Re: [sqlite] row size limit
[EMAIL PROTECTED] wrote:
According to the FAQ on sqlite.org, the row size is arbitrarily
limited to 1MB, which can be increased to 16MB by changing a
#define in the source code.

My question is, why even limit the row size? Is there a way the
code can be modified so that there is no limit for the row size (other
than the