Hi,
Does that mean that if I have 4 tables with BLOBs it will take up around 120 MB?
I have made a test program, and as you said it worked fine.
But in my product the situation is much more complex; maybe that is why it
is now taking more memory.
The environment is...
1. Derby is used as an Eclipse plugin.
2. It is being used from two different plugins to create around 20
tables (15 and 5).
3. I have made a static class to do all database operations, with
synchronized methods which are being accessed from different threads
(roughly the shape sketched after this list).
4. There are 5 tables with BLOB columns, out of which only one table is
being used.
5. As JXTA is being used, I think it takes most of the memory. (I don't
know how to measure the heap usage.)
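To give an idea of point 3, the helper is roughly shaped like the sketch
below. This is only a simplified sketch: the class and statement names are
made up and error handling is left out.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Rough shape of the static helper: one embedded connection, the statements
// prepared once at start-up, and every method synchronized because threads
// from both plugins call into it.
public final class DbHelper {
    private static Connection conn;
    private static PreparedStatement insertChunkStmt;   // one of the ~20 shared statements

    private DbHelper() { }

    public static synchronized void open(String dbName) throws SQLException {
        conn = DriverManager.getConnection("jdbc:derby:" + dbName + ";create=true");
        insertChunkStmt = conn.prepareStatement(
            "INSERT INTO dataQueue(messageId, objectId, data, sentId, sequenceNo, inoutstate) "
            + "VALUES (?, ?, ?, ?, ?, ?)");
    }

    // Every database operation is a synchronized static method that reuses
    // one of the statements prepared above, along the lines of the insert
    // and select sketches further down.
}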
Here are my tables and the select query which is giving problems with the
BLOB, and later with the data (I am not able to retrieve what I have inserted).
TABLE CREATION STATEMENTS
-------------------------
"CREATE TABLE dataQueue(messageId BIGINT NOT NULL, objectId VARCHAR(100),
data BLOB, sentId BIGINT, sequenceNo INTEGER, inoutstate SMALLINT,
PRIMARY KEY(objectId, sequenceNo, inoutstate))"

"CREATE TABLE objectQueue(objectId VARCHAR(100), source VARCHAR(100),
destination VARCHAR(100), priority SMALLINT, type SMALLINT, size INTEGER,
splitCount INTEGER, date DATE, isGroup SMALLINT, inoutstate SMALLINT)"

"CREATE TABLE groups(groupId VARCHAR(100), identityId VARCHAR(100),
sentId BIGINT)"
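For context, each 64 KB chunk is inserted into dataQueue along these lines.
Again only a simplified sketch: the method and variable names are just for
illustration, and in the real code the statement is one of the shared
prepared statements from the helper above.

import java.io.ByteArrayInputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ChunkInsertSketch {
    // Writes one 64 KB chunk of a file as a row in dataQueue. The BLOB data
    // is bound as a stream so the driver only has to hold the chunk itself.
    public static void insertChunk(Connection conn, long messageId, String objectId,
            byte[] chunk, long sentId, int sequenceNo, int inoutstate) throws SQLException {
        // Prepared locally here only to keep the sketch self-contained;
        // the real helper reuses a statement prepared once at start-up.
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO dataQueue(messageId, objectId, data, sentId, sequenceNo, inoutstate) "
            + "VALUES (?, ?, ?, ?, ?, ?)");
        try {
            ps.setLong(1, messageId);
            ps.setString(2, objectId);
            ps.setBinaryStream(3, new ByteArrayInputStream(chunk), chunk.length);
            ps.setLong(4, sentId);
            ps.setInt(5, sequenceNo);
            ps.setShort(6, (short) inoutstate);
            ps.executeUpdate();
            if (!conn.getAutoCommit()) {
                conn.commit();   // otherwise other threads never see the new row
            }
        } finally {
            ps.close();
        }
    }
}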
SELECT QUERY
-------------
SELECT dq.messageId, dq.objectId, dq.data, dq.sentId, dq.sequenceNo,
       oq.source, oq.destination, oq.priority, oq.type, oq.size,
       oq.splitCount, oq.date, oq.isGroup
FROM dataQueue AS dq, objectQueue AS oq, groups AS g
WHERE dq.objectId = oq.objectId
  AND (oq.destination = ? OR (oq.destination = g.groupId AND g.identityId = ?))
  AND (dq.sentId = 0 OR dq.sentId > g.sentId)
  AND oq.inoutstate = 0
ORDER BY oq.priority, dq.sentId, dq.sequenceNo
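And this is roughly how the query above is executed and the BLOB read back
(simplified sketch, names for illustration only; the statement is one of the
shared prepared statements in the real helper).

import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class FetchSketch {
    // Runs the select above for one destination/identity and streams each
    // BLOB chunk instead of pulling it into memory with getBytes().
    public static void fetchFor(Connection conn, String destination, String identityId)
            throws SQLException, java.io.IOException {
        String sql =
            "SELECT dq.messageId, dq.objectId, dq.data, dq.sentId, dq.sequenceNo, "
          + "oq.source, oq.destination, oq.priority, oq.type, oq.size, oq.splitCount, "
          + "oq.date, oq.isGroup "
          + "FROM dataQueue AS dq, objectQueue AS oq, groups AS g "
          + "WHERE dq.objectId = oq.objectId "
          + "AND (oq.destination = ? OR (oq.destination = g.groupId AND g.identityId = ?)) "
          + "AND (dq.sentId = 0 OR dq.sentId > g.sentId) "
          + "AND oq.inoutstate = 0 "
          + "ORDER BY oq.priority, dq.sentId, dq.sequenceNo";
        // In the real helper this statement is prepared once and kept open.
        PreparedStatement ps = conn.prepareStatement(sql);
        ps.setString(1, destination);
        ps.setString(2, identityId);
        ResultSet rs = ps.executeQuery();
        try {
            while (rs.next()) {
                long messageId = rs.getLong("messageId");
                int sequenceNo = rs.getInt("sequenceNo");
                InputStream data = rs.getBinaryStream("data");
                long bytes = 0;
                if (data != null) {
                    byte[] buf = new byte[8192];
                    for (int n; (n = data.read(buf)) != -1; ) {
                        bytes += n;   // the real code hands the chunk to the consumer here
                    }
                    data.close();
                }
                System.out.println("chunk " + messageId + "/" + sequenceNo + ": " + bytes + " bytes");
            }
        } finally {
            rs.close();
        }
    }
}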
Is there something very stupid that I am doing here?
This is my second database-dependent application, so I don't have much
experience.
Thanks
Rajesh
On Thu, 01 Sep 2005 23:32:40 +0530, Sunitha Kambhampati
<[EMAIL PROTECTED]> wrote:
Rajes Akkineni wrote:
Hi,
I have got a similar problem.
I was unable to insert bigger files into the database.
But my situation is a little different from this: I am not inserting the
whole 40 MB file into one BLOB.
I have split it into 64 KB chunks and am inserting them in multiple rows.
I got an OutOfMemoryError.
Are you using the embedded driver or the network client driver? I think
the issue discussed in this earlier thread refers to the client
driver (DERBY-326). But if you are noticing the problem with
embedded, it would be great if you could post a reproduction program,
so we can take a look.
I have tested inserting 1000 rows of 64 KB BLOBs on my T40 laptop and it
works OK both with the default JVM heap size and with the JVM max heap
size restricted to 40 MB. Note that by default, for a table containing a BLOB
column, the page size is 32 KB and the page cache is 1000 pages, so the page
cache would take about 32 MB of memory.
Now I seem to have another issue with Derby.
When I am inserting and selecting data using prepared statements (they are
all created once and used from different threads later),
sometimes I am not able to get the data which I have inserted. I
checked the insert operation; it is successful, but later when I query
from a different thread it is not showing any rows in the table.
Are you running with autocommit off? If so, it might be good to check
whether a commit was issued after the insert or not.
Sunitha.
--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/