> 2.0 is going to use ODBC, so this should not be an issue.
I would still like there to be an option to use native access to data when a
site is specifically being tweaked for a particular host (maybe an optional
database type parameter on database calls, which defaults to ODBC?). I
almost always use ODBC / JDBC at my work, and it's great, but specialized
functions such as streaming will surely be lost in ODBC access. The only
reason I would ever spend money on something like DB2 or Oracle is to get
some needed extra functionality a free equivalent isn't going to give me.
>> a) With a simple file system, it could read the files in chunks,
>> preventing an initial delay and eliminating the need for much memory.
> PHP has the readfile function for this.
Right. But a standard Midgard function to automate the process would be
much better. It always seems like 95% of programming is reinventing the
wheel, since nobody created a standard function to do what you want, though
it's a common need. Similarly, a lot of what I'm doing on a project funded
by the US military is copying and tweaking code, since much of the needed
functionality exists in some fashion in related pieces of the project. It
should have been created as standardized functions (and will be, by the time
I'm done with it ;-).
>> b) With a database that can't stream (e.g. MySQL), there could be a
>> Midgard function for storing large files. This function would store a
>> large file in several chunks (preventing the need to read everything at
>> once).
>
> With MySQL you'll still have the same problem: there's no way (to my
> knowledge) to get BLOBs in chunks from it, so even if you do send it
> to the FS first you still get it in memory first.
To make it clearer, what I meant is that we could have a storage function
which, when you hand it a really big file, would actually store it as a
series of smaller records in a size and record type appropriate for the
particular database (be it 16K, 512K, or whatever; would need a bit of
experimentation). And of course there would be a metatable which says the
filename, size, permissions, location in the data blocks table, etc. This
isn't as good as a streaming database, but would have the advantage of
working under ODBC.
> What I've been thinking about is a way for Midgard to be able to manage
> BLOBs and store them in the FS or DB given some kind of hint from the
> setup or Midgard site managing them. Access to these BLOBs would then
> be transparent to the Midgard environment.
I had the same idea (which means my ideas aren't that strange?). Very
important.
> If we want to manage BLOBs as first-class Midgard citizens (access
> control, replication, etc), which I do, we need full control over the
> repository.
> FS and DB offer that. If there are more options I'd love to hear about it.
>
> emile
Those two options definitely do it. I think the important thing is to come
up with functions to standardize how to do it, using a single interface for
BLOB management regardless of whether the storage is FS or DB. And I
definitely like the idea of the option to split BLOBs into several data
records to let MySQL handle large BLOBs in a useful way (perhaps
accomplished with a parameter to the function giving block size in K, which
is set to 0 if the file is to be stored whole).
Thanks for your efforts!
(BTW, I'm still working on the project I mentioned a few months ago which
can import sites into Midgard from other site creation tools. It's coming
along nicely. Hopefully we'll have an alpha of the NetObjects Fusion version
out soon.)
-Pat
[EMAIL PROTECTED]
--
This is The Midgard Project's mailing list. For more information,
please visit the project's web site at http://www.midgard-project.org
To unsubscribe from the list, send an empty email message to address
[EMAIL PROTECTED]