Need Quick Help!! Production Server Issue

2008-06-28 Thread Somendra Paul
These are the Java objects that we store:
 
Wip -- WipProcess (related table of Wip) -- WipQty (related table of Wip)
Lot -- LotQty (related table of Lot) -- LotProcessLocation (related table of Lot)
Shipment
 
The way our code works is this: the Wip, Lot, and Shipment data arrive in CSV 
files, are converted to their respective Java objects, and are then persisted 
to the DB one by one.
 
A file can contain, say, 100 Lot, 100 Wip, and 100 Shipment records, and they 
can be interleaved; that is, Lot, Wip, and Shipment records can come in any 
sequence, not first 100 Lot, then 100 Wip, and then 100 Shipment.
 
 
Suppose our file has Lot records, then Shipment, and then Wip.
 
Current problem:
In repository-database.xml we have batch-mode set to true and useAutoCommit 
set to 2. When the three kinds of objects are persisted, we find that 
sometimes (quite rarely) some of the objects are actually not persisted to 
the DB, though there is no exception in the logs. We are using OJB 1.0.4.
 
Current code behaviour:
We get a broker from the PB API, start a transaction, and insert the Lot 
object. After that we persist the LotQty and LotProcessLocation objects (the 
information related to that Lot) by setting batch mode to true, calling 
broker.store(pc) for each LotQty or LotProcessLocation, and then calling 
executeBatch (the intention being that, since a single Lot has multiple 
LotQty objects, it is better to use batch updates for such objects); we then 
set batch mode back to false. Next the Shipment record gets inserted, and in 
the code we dynamically generate a Wip record for every Shipment and save it 
to the DB. As with Lot, the Wip has WipProcessLocation and WipQty objects, 
which are again saved via a batch update before batch mode is set back to 
false. Finally we commit or roll back the transaction.
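In PB-API terms, the flow is roughly the following (a minimal sketch, not our 
exact code; the Lot parameter and the detail lists are placeholders for our 
real mapped classes):

    import java.util.Iterator;
    import java.util.List;
    import org.apache.ojb.broker.PersistenceBroker;
    import org.apache.ojb.broker.PersistenceBrokerException;
    import org.apache.ojb.broker.PersistenceBrokerFactory;

    // Sketch of the current behaviour: batch mode is toggled on only
    // around the detail rows of each master object.
    public void storeLot(Object lot, List lotQtys, List lotLocations) {
        PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
        try {
            broker.beginTransaction();
            broker.store(lot);                                  // master row, unbatched

            broker.serviceConnectionManager().setBatchMode(true);
            for (Iterator it = lotQtys.iterator(); it.hasNext();) {
                broker.store(it.next());                        // buffered by the batch
            }
            for (Iterator it = lotLocations.iterator(); it.hasNext();) {
                broker.store(it.next());
            }
            broker.serviceConnectionManager().executeBatch();   // flush the detail rows
            broker.serviceConnectionManager().setBatchMode(false);

            // ...Shipment, dynamic Wip, WipQty, WipProcessLocation are
            // handled the same way before commit...
            broker.commitTransaction();
        } catch (PersistenceBrokerException e) {
            broker.abortTransaction();
            throw e;
        } finally {
            broker.close();
        }
    }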
 
A second way the same behaviour can be achieved in our code:
 
We get the broker, start the transaction, and set batch mode to true. After 
making all the Lot, LotQty, and LotProcessLocation objects persistent we make 
the executeBatch call; we then do the same for Shipment, i.e. persist the 
Shipment and the dynamic Wip with batch mode on and call executeBatch again, 
and likewise for the Wip details; then we commit or roll back the transaction.
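In code this second approach looks roughly like this (a sketch; the variables 
are placeholders and broker is obtained as in the sketch above):

    // Sketch of the second approach: batch mode stays on for the whole
    // unit of work; executeBatch() flushes each object group before commit.
    broker.beginTransaction();
    broker.serviceConnectionManager().setBatchMode(true);

    broker.store(lot);
    broker.store(lotQty);
    broker.store(lotProcessLocation);
    broker.serviceConnectionManager().executeBatch();   // flush the Lot group

    broker.store(shipment);
    broker.store(dynamicWip);
    broker.serviceConnectionManager().executeBatch();   // flush the Shipment group

    // ...same again for WipQty and WipProcessLocation...
    broker.commitTransaction();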
 
One question about this approach: I had written the code so that we start the 
transaction, turn batch mode on, and then save Lot, LotQty, LotProcessLocation, 
Shipment, the dynamic Wip, and then Wip, WipQty, WipProcessLocation, and only 
then call executeBatch. What I found is that while processing the Shipment, 
when we need to check whether the Lot object is already in the DB, the code 
was unable to access the Lot object. I am not able to understand why.
 
 
Third approach:
Set batch-mode to false in repository-database.xml, then start the txn; store 
Lot, LotQty, LotProcessLocation, Shipment, the dynamic Wip, and then Wip, 
WipQty, WipProcessLocation; and then commit or roll back the transaction. But 
this way we will not be able to use the batch-mode features to enhance 
performance.
 
What I want to know:
 
Is the current way of writing the batch update correct, or is something wrong 
that is causing the issue mentioned above?
 
Is the second approach correct, and if so, why?
 
Is the performance of the third approach comparable to the second, or will it 
be much worse?




Handling Large Resultset in Oracle

2008-09-21 Thread Somendra Paul
Hi All,
   I am using OJB 1.0.4 against Oracle 10g with classes12.jar. We are trying 
to export data from the DB using a SQL query which returns 600K records. What 
we found is that when we do iterator = broker.getIteratorByQuery(query) and 
iterate over the results, after about 300K records the VM grows rapidly and 
the entire program crashes with OOM errors. When we used a simple JDBC program 
to implement the same export, the entire 600K records were extracted using 
only 160 MB of memory, whereas the OJB execution takes more than 1.5 GB 
before crashing.
 
Does anyone know how to solve this memory issue when reading a large result 
set from Oracle?
 
One solution in the OJB archives (for ProgressSQL) is to set 
fetchSize=someValue; will that solve this issue?
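 
For reference, the OJB side of the export is essentially the following (a 
sketch; ExportRow and writeRow are placeholders for our mapped class and our 
export step):

    import java.util.Iterator;
    import org.apache.ojb.broker.PersistenceBroker;
    import org.apache.ojb.broker.PersistenceBrokerFactory;
    import org.apache.ojb.broker.query.Criteria;
    import org.apache.ojb.broker.query.Query;
    import org.apache.ojb.broker.query.QueryFactory;

    PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
    Query query = QueryFactory.newQuery(ExportRow.class, new Criteria()); // select all rows
    Iterator it = broker.getIteratorByQuery(query);
    while (it.hasNext()) {
        ExportRow row = (ExportRow) it.next();  // each row is fully materialized
        writeRow(row);                          // placeholder export step
    }
    broker.close();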
 
 
Thanks and Regards
Somendra Paul.


 




Re: Handling Large Resultset in Oracle

2008-09-21 Thread Somendra Paul
Hi Armin,
    The SQL query we have does not pull data from just one table; it is a 
very big SQL query that looks like the following:
 
select a,b,c from x,y,z where ... union all select a,b,c from x,y,z where ... 
union all select a,b,c from x,y,z
 
 
This is generic SQL, but our actual SQL is along similar lines.
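 
On the OJB side we issue it roughly like this (a minimal sketch; ResultRow 
stands in for our mapped result class, and the where-conditions are elided):

    import java.util.Iterator;
    import org.apache.ojb.broker.query.QueryBySQL;
    import org.apache.ojb.broker.query.QueryFactory;

    String sql = "select a,b,c from x,y,z where ... "
               + "union all select a,b,c from x,y,z where ... "
               + "union all select a,b,c from x,y,z";
    QueryBySQL query = QueryFactory.newQuery(ResultRow.class, sql); // SQL-based query
    Iterator it = broker.getIteratorByQuery(query);                 // iterate as before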
 
 
Will the report-query solution you mentioned solve this kind of issue?
 
 
We might think of moving to 1.0.5, depending upon what you say.
 
Thanks and Regards,
Somendra Paul.
 

 


- Original Message 
From: Armin Waibel [EMAIL PROTECTED]
To: OJB Users List ojb-user@db.apache.org
Sent: Sunday, September 21, 2008 9:05:06 PM
Subject: Re: Handling Large Resultset in Oracle

Hi Paul,

Somendra Paul wrote:
> Hi All, I am using OJB 1.0.4 against Oracle 10g with classes12.jar. We
> are trying to export data from the DB using a SQL query which returns
> 600K records. What we found is that when we do iterator =
> broker.getIteratorByQuery(query) and iterate over the results, after
> about 300K records the VM grows rapidly and the entire program crashes
> with OOM errors. When we used a simple JDBC program to implement the
> same export, the entire 600K records were extracted using only 160 MB
> of memory, whereas the OJB execution takes more than 1.5 GB before
> crashing.
>
> Does anyone know how to solve this memory issue when reading a large
> result set from Oracle?
>
> One solution in the OJB archives (for ProgressSQL) is to set
> fetchSize=someValue; will that solve this issue?
 

The problem could be the cache. Depending on the cache implementation in use, 
OJB keeps all materialized objects (or copies of these objects) in memory. 
Most cache implementations use soft references (so OOM errors shouldn't 
occur), but maybe your objects have complex relationships or your layer holds 
hard references to the materialized objects.
You can try to evict the cache while iterating over the result set.
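
A minimal sketch of the eviction idea (process() and the interval of 10000 
are placeholders; query and broker as in your code):

    // Clear the broker cache every N rows so already-materialized objects
    // can be garbage collected while the iterator is still running.
    Iterator it = broker.getIteratorByQuery(query);
    int count = 0;
    while (it.hasNext()) {
        process(it.next());          // placeholder for your per-row work
        if (++count % 10000 == 0) {
            broker.clearCache();     // evict objects cached so far
        }
    }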

If you don't rely on the materialized Java objects you can use a report query
http://db.apache.org/ojb/docu/guides/query.html#Report+Queries
to iterate over the result set (it returns rows as arrays of values). This 
bypasses the cache and should result in memory use comparable to a plain 
JDBC query.
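
A sketch of the report-query variant (ExportRow and the attribute names are 
placeholders for your class and columns):

    import java.util.Iterator;
    import org.apache.ojb.broker.query.Criteria;
    import org.apache.ojb.broker.query.QueryFactory;
    import org.apache.ojb.broker.query.ReportQueryByCriteria;

    ReportQueryByCriteria query = QueryFactory.newReportQuery(ExportRow.class, new Criteria());
    query.setAttributes(new String[] { "a", "b", "c" });  // columns to fetch
    Iterator it = broker.getReportQueryIteratorByQuery(query);
    while (it.hasNext()) {
        Object[] row = (Object[]) it.next();  // raw column values, never cached
        // write the row to the export...
    }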

The upcoming OJB 1.0.5 has enhanced query features and supports limit and 
pagination of query results (Oracle is supported) - 1.0.5rc1:
http://www.mail-archive.com/ojb-user%40db.apache.org/msg16078.html
The query guide in the included documentation shows how to use this feature.
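
With 1.0.5 the paging boils down to something like this (a sketch based on 
the query guide; the index values and ExportRow are placeholders):

    // Restrict the query to rows 1..10000; repeat with shifted indexes
    // to page through the full result set.
    Query query = QueryFactory.newQuery(ExportRow.class, new Criteria());
    query.setStartAtIndex(1);
    query.setEndAtIndex(10000);
    Iterator it = broker.getIteratorByQuery(query);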

regards,
Armin


 
> Thanks and Regards, Somendra Paul.
 
 
 
 
 -
  To unsubscribe, e-mail: [EMAIL PROTECTED] For
 additional commands, e-mail: [EMAIL PROTECTED]
 
 
