Actually, both are correct! But we are talking about two different things:

- How the data is stored - as a string associated with the BLOB in the repository.
- How the data is displayed/accessed - in relational tabular form.

By providing a virtual table view of the BLOB repository, we can present the data to the user/developer in the required form.

For example, the repository provides the following "system table":

CREATE TABLE pbms_repository (
        Repository_id     INT COMMENT 'The repository file number',
        Repo_blob_offset  BIGINT COMMENT 'The offset of the BLOB in the repository file',
        Blob_size         BIGINT COMMENT 'The size of the BLOB in bytes',
        Head_size         SMALLINT UNSIGNED COMMENT 'The size of the BLOB header - precedes the BLOB data',
        Access_code       INT COMMENT 'The 4-byte authorisation code required to access the BLOB - part of the BLOB URL',
        Creation_time     TIMESTAMP COMMENT 'The time the BLOB was created',
        Last_ref_time     TIMESTAMP COMMENT 'The last time the BLOB was referenced',
        Last_access_time  TIMESTAMP COMMENT 'The last time the BLOB was accessed (read)',
        Content_type      CHAR(128) COMMENT 'The content type of the BLOB - returned by HTTP GET calls',
        Blob_data         LONGBLOB COMMENT 'The data of this BLOB'
);

So the Content_type is directly accessible (and could also be updated) using ordinary SQL statements.

At the same time, the content type is available where it is needed when a BLOB is retrieved using HTTP GET.
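To illustrate, a sketch of such SQL access against the pbms_repository table shown above (the Repository_id and Repo_blob_offset values here are made up for the example):

```sql
-- Read the content type of a particular BLOB
SELECT Content_type
  FROM pbms_repository
 WHERE Repository_id = 1 AND Repo_blob_offset = 1024;

-- Correct a wrongly recorded content type
UPDATE pbms_repository
   SET Content_type = 'image/png'
 WHERE Repository_id = 1 AND Repo_blob_offset = 1024;
```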

On Nov 25, 2008, at 7:28 PM, Jim Starkey wrote:

Eric Day wrote:
On Tue, Nov 25, 2008 at 07:07:06AM -0800, Barry Leslie wrote:

As Tim mentioned, the BLOB streaming engine stores the MIME type
along with the BLOB data in the repository.
BLOBs are uploaded with HTTP PUT and downloaded with HTTP GET so
considering the MIME type to be part of the BLOB is easy and
extremely practical.

What about all the other pieces of potential HTTP metadata?

Theoretically it could all be stored in the repository. But I think
the content type, which includes the charset information, is the most
important.

Rather than storing only selected HTTP header fields, we could store them all as one block of text. They could then be packed back into the reply header when the BLOB is read back. If people are interested in particular header fields, they can process the text themselves. That way we are not trying to guess in advance which HTTP headers are important to people and which are not.

Content type and charset could perhaps still have their own columns.


Hi everyone!

Since we are already in a database, why not store this structured
data into real columns, rather than another unstructured block that
needs to be parsed by the application. This gives the user control
of what to store, and the HTTP GET/POST interfaces can be modified
to return/store the headers if the columns exist (or are configured
in some way). Also, applications may want to access those fields
outside of the blob interface, which would be easier if they were
normal columns.

Putting headers in their own little blocks or creating special
attributes just seems a bit hackish.



Give the man a Kewpie doll.  He got the right answer.

--
Jim Starkey
President, NimbusDB, Inc.
978 526-1376


_______________________________________________
Mailing list: https://launchpad.net/~drizzle-discuss
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~drizzle-discuss
More help   : https://help.launchpad.net/ListHelp



--
Paul McCullagh
PrimeBase Technologies
www.primebase.org
www.blobstreaming.org
pbxt.blogspot.com




