Re: LockManagerInMemoryImpl memory leak

2007-03-12 Thread Bruno CROS

Hi Armin,

The attached file is a solution to the LockManagerInMemory memory leak...

Well, the leak was in keyLockMap when a real timeout occurs (without an
abort/commit at the end). The implemented solution is to iterate over this
keyLockMap and detect whether the lock lists (the entries) are no longer
useful. Entries that consist of obsolete locks only are evicted (detected via
the isEmpty() method).

The original parameters, object cleanup frequency and max locks to clean,
have been modified to handle more write locks.

A new parameter is keyCleanupFrequency: how often the keyCleanup runs relative
to the (original) object cleanup. The initial value is 6, which triggers a
keyCleanup pass every 6 * 5 seconds (30 s).

New variables:
- keyCleanupCounter: counter used to reach the frequency.
- lastCleanupTime: the time the last keyCleanup pass took.
- lastCleanupCount: the number of useless lists evicted during the last cleanup.

The modifications have been tested with millions of locks / thousands of keys.
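Below is a minimal sketch of the eviction pass described above, assuming
keyLockMap maps each key to a java.util.List of locks and that timed-out locks
have already been removed from those lists (variable names follow the patch):

synchronized (keyLockMap)
{
    long start = System.currentTimeMillis();
    int evicted = 0;
    for (Iterator it = keyLockMap.entrySet().iterator(); it.hasNext();)
    {
        Map.Entry entry = (Map.Entry) it.next();
        List locks = (List) entry.getValue();
        // an entry whose lock list is empty only held obsolete locks
        if (locks.isEmpty())
        {
            it.remove();
            evicted++;
        }
    }
    lastKeyCleanupCount = evicted;
    lastKeyCleanupTime = System.currentTimeMillis() - start;
}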

Could you post the code after a little comment cleanup? ;-)

Regards.



On 3/9/07, Armin Waibel [EMAIL PROTECTED] wrote:


Hi Bruno,

Bruno CROS wrote:
 Hi all,

 Well, our team is experiencing some memory leaks using the LockManager in
 remote mode, with the servlet and thus the LockManagerRemoteImpl.

 It seems that, after a while, in the LockManagerInMemoryImpl (used by
 LMRemoteImpl), some LockObject instances remain stored in the static HashMap
 keyLockMap, whereas the static LinkedMap resourceLockMap no longer contains
 references to those instances. What is certain is that those locks never
 expire.


I will try to reproduce your problem - stay tuned.

regards,
Armin

 As a result, memory usage grows and the LockManager server crashes with an
 OutOfMemoryError.

 Using the latest implementation of LMRemoteImpl, LMInMemoryImpl, LMServlet
 over OJB 1.0.4, JVM 1.4.

 Thanks for any ideas.

 Regards.

 Bruno.




package org.apache.ojb.broker.locking;

/* Copyright 2002-2005 The Apache Software Foundation
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.io.Serializable;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import org.apache.commons.collections.list.TreeList;
import org.apache.commons.collections.map.LinkedMap;
import org.apache.commons.lang.SystemUtils;
import org.apache.ojb.broker.util.configuration.Configurable;
import org.apache.ojb.broker.util.configuration.Configuration;
import org.apache.ojb.broker.util.configuration.ConfigurationException;
import org.apache.ojb.broker.util.logging.Logger;
import org.apache.ojb.broker.util.logging.LoggerFactory;

/**
 * This implementation of the {@link LockManager} interface supports a simple,
 * fast, non-blocking pessimistic locking for single JVM applications.
 *
 * @version $Id: LockManagerInMemoryImpl.java,v 1.3 2007/03/09 09:58:59 CROS_B Exp $
 */
public class LockManagerInMemoryImpl implements LockManager, Configurable, Serializable
{
private Logger log = LoggerFactory.getLogger(LockManagerInMemoryImpl.class);
/** The period to search for timed out locks. */
// BC: release resources more often
private long cleanupFrequency = 5000; // 1; // milliseconds.

/** The number of lock entries to check for timeout */
// BC: release more resources
private int maxLocksToClean = 2000;

/** 
 * keyLockMap cleaning process frequency
 */
private static int keyCleanupFrequency = 6;

/**
 * keyLockMap cleaning process counter
 */
private static int keyCleanupCounter = 0;

/**
 * last key cleanup time in ms 
 */
private static long lastKeyCleanupTime = 0; 

/**
 * last key cleanup removed lists
 */
private static int lastKeyCleanupCount = 0;  

/**
 * MBAIRD: a LinkedHashMap returns objects in the order you put them in,
 * while still maintaining an O(1) lookup like a normal hashmap. We can then
 * use this to get the oldest entries very quickly, makes cleanup a breeze.
 */
private final Map resourceLockMap = new LinkedMap(70);
private final Map keyLockMap = new HashMap();
private final LockIsolationManager lockStrategyManager = new LockIsolationManager();
private long m_lastCleanupAt = System.currentTimeMillis();
private long

LockManagerInMemoryImpl memory leak

2007-03-06 Thread Bruno CROS

Hi all,

Well, our team is experiencing some memory leaks using the LockManager in
remote mode, with the servlet and thus the LockManagerRemoteImpl.

It seems that, after a while, in the LockManagerInMemoryImpl (used by
LMRemoteImpl), some LockObject instances remain stored in the static HashMap
keyLockMap, whereas the static LinkedMap resourceLockMap no longer contains
references to those instances. What is certain is that those locks never
expire.

As a result, memory usage grows and the LockManager server crashes with an
OutOfMemoryError.

Using the latest implementation of LMRemoteImpl, LMInMemoryImpl, LMServlet
over OJB 1.0.4, JVM 1.4.

Thanks for any ideas.

Regards.

Bruno.


Re: DB dead lock issue with deletePersistent calls

2006-12-15 Thread Bruno CROS

The JIRA issue is posted.

On 12/15/06, Armin Waibel [EMAIL PROTECTED] wrote:


Hi Bruno,

Bruno CROS wrote:
 Hi,

 Yes, we got rid of those dead locks.

Congratulation!


 Actually, OJB does not do anything to cause Oracle dead locks!!

Phew!

 Many
 applications can have dead locks without any lines of bad (OJB) code.

 Dead locks are generated when Oracle can't do simple operations such as
 UPDATE or DELETE, when the time to verify a foreign key is simply too
 long (> 1 s). This can occur when a table reaches a big size (millions of
 records) and those tables don't have an appropriate index on the foreign
 key column. Creating an index on the foreign key column is necessary!!
 (Oracle 10)

 I suggest a doclet/option to automatically generate indexes based on the
 reverse foreign-key declaration code.

Please open a feature request in OJB-JIRA
http://issues.apache.org/jira/browse/OJB


 Anyway, this need deserves a prominent note in the documentation for big
 Oracle databases.

 Afterwards, the DB works much faster and no longer throws dead lock errors
 (ORA-00060). I know this trouble only concerns Oracle usage, but I think it
 should appear in the documentation.

I could add a new section in Platform setting doc with notes/tips for
specific databases.
http://db.apache.org/ojb/docu/guides/platforms.html

If you send me the title and the text I will integrate it in OJB docs.

regards,
Armin


 Regards.

 Thanks for help.



 On 12/8/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Bruno CROS wrote:
  No. I don't know how to write such a test. But I have an idea where it
  occurs in the persistent graph.

  I use locks on the OJB master object (at the start of the transaction) when
  possible (even with deletes). I think that with this, two transactions can't
  run at the same time on the same part of the persistent graph.

 Only if implicit locking is enabled; otherwise only the specified object is
 locked and not the whole graph. If it is not enabled you have to lock all
 objects yourself before changing or deleting them.
 You can enable implicit locking on a per-tx basis using OJB's
 odmg-extensions (TransactionExt).


http://db.apache.org/ojb/docu/guides/odmg-guide.html#The+TransactionExt+Interface

 In this case all referenced objects will be implicitly locked with the same
 lock mode as the master object (on very large object graphs this could
 slow down performance).

 regards,
 Armin









Re: Which class to modify to clock sql statements ?

2006-12-15 Thread Bruno CROS

Hi,

Hmm, last year I tried P6Spy and found a big bug: queries don't work as they
should. I already talked about that on this forum.

And we need to take timings in production (P6Spy is not recommended for that).

Well, I wrote some code in JdbcAccessImpl.java and I do get useful timings.
It captures the measurements and the SQL for the executeQuery(...) method,
but not for methods such as executeUpdate(ClassDescriptor cd, Object o),
because I couldn't get at the SQL instruction (UPDATE TABLE SET XXX=...
WHERE ...). How can I reach the SQL instruction? Measuring time without the
query is a little useless. A well-placed cast, maybe?
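In the meantime, a hedged sketch of the timing pattern inside a method like
executeUpdate(ClassDescriptor cd, Object obj); getFullTableName() is a real
ClassDescriptor method, while pulling the SQL text out of the SqlGenerator
this way is only an assumption about the 1.0.4 internals:

long start = System.currentTimeMillis();
// ... the existing statement preparation and stmt.executeUpdate() call ...
long elapsed = System.currentTimeMillis() - start;
// the target table is always available from the ClassDescriptor:
String table = cd.getFullTableName();
// assumption: the SQL string may be reachable through the SqlGenerator, e.g.
// String sql = broker.serviceSqlGenerator().getPreparedUpdateStatement(cd).getStatement();
log.info("UPDATE on " + table + " took " + elapsed + " ms");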

Thanks.

Regards.


On 12/15/06, Charles Anthony [EMAIL PROTECTED] wrote:


Hi,

Another option would be to use the p6spy jdbc driver
http://www.p6spy.com/ which, if I remember correctly, logs the time taken
to execute the statement as well as the statement itself.


Cheers,

Charles.

 -Original Message-
 From: Armin Waibel [mailto:[EMAIL PROTECTED]
 Sent: 15 December 2006 01:24
 To: OJB Users List
 Subject: Re: Which class to modify to clock sql statements ?

 Bruno CROS wrote:
   Hi,
 
  Well, I just want to measure the time an SQL statement takes to be posted
  (the SQL commit too). I looked a bit at the broker.accesslayer package and
  at StatementForClassImpl.java. Is it the right place to do this?
 

 I would recommend having a look at the class JdbcAccessImpl to
 clock query execution time.

 regards,
 Armin

  Maybe someone has already written an implementation?
 
  Working with 1.0.4
 
  Thanks for any advice.
 
  Regards.
 








Re: DB dead lock issue with deletePersistent calls

2006-12-14 Thread Bruno CROS

Hi,

Yes, we got rid of those dead locks.

Actually, OJB does not do anything to cause Oracle dead locks!! Many
applications can have dead locks without any lines of bad (OJB) code.

Dead locks are generated when Oracle can't do simple operations such as UPDATE
or DELETE, when the time to verify a foreign key is simply too long (> 1 s).
This can occur when a table reaches a big size (millions of records) and those
tables don't have an appropriate index on the foreign key column. Creating an
index on the foreign key column is necessary!! (Oracle 10)
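For illustration, a minimal JDBC sketch of creating such an index; the table,
column, and index names below are hypothetical:

static void createFkIndex(java.sql.Connection con) throws java.sql.SQLException
{
    // DETAIL.MASTER_ID stands for the unindexed foreign key column
    java.sql.Statement stmt = con.createStatement();
    try
    {
        stmt.executeUpdate("CREATE INDEX IDX_DETAIL_MASTER_FK ON DETAIL (MASTER_ID)");
    }
    finally
    {
        stmt.close();
    }
}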

I suggest a doclet/option to automatically generate indexes based on the
reverse foreign-key declaration code.

Anyway, this need deserves a prominent note in the documentation for big
Oracle databases.

Afterwards, the DB works much faster and no longer throws dead lock errors
(ORA-00060). I know this trouble only concerns Oracle usage, but I think it
should appear in the documentation.

Regards.

Thanks for help.



On 12/8/06, Armin Waibel [EMAIL PROTECTED] wrote:


Bruno CROS wrote:
 No. I don't know how to write such a test. But I have an idea where it
 occurs in the persistent graph.

 I use locks on the OJB master object (at the start of the transaction) when
 possible (even with deletes). I think that with this, two transactions can't
 run at the same time on the same part of the persistent graph.

Only if implicit locking is enabled; otherwise only the specified object is
locked and not the whole graph. If it is not enabled you have to lock all
objects yourself before changing or deleting them.
You can enable implicit locking on a per-tx basis using OJB's
odmg-extensions (TransactionExt).

http://db.apache.org/ojb/docu/guides/odmg-guide.html#The+TransactionExt+Interface
In this case all referenced objects will be implicitly locked with the same
lock mode as the master object (on very large object graphs this could
slow down performance).

regards,
Armin
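
A minimal sketch of that per-tx switch (TransactionExt is
org.apache.ojb.odmg.TransactionExt; the object name is a placeholder):

TransactionExt tx = (TransactionExt) odmg.newTransaction();
tx.begin();
tx.setImplicitLocking(true);          // lock the whole reachable graph
tx.lock(master, Transaction.WRITE);   // referenced objects get the same mode
// ... modify or delete parts of the graph ...
tx.commit();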





Which class to modify to clock sql statements ?

2006-12-14 Thread Bruno CROS

 Hi,

Well, I just want to measure the time an SQL statement takes to be posted (the
SQL commit too). I looked a bit at the broker.accesslayer package and at
StatementForClassImpl.java. Is it the right place to do this?

Maybe someone has already written an implementation?

Working with 1.0.4.

Thanks for any advice.

Regards.


ODMG Delete : markDelete or deletePersistent

2006-12-08 Thread Bruno CROS

 Hi,

 Just one question.

Using ODMG, if deletePersistent(o) is called from Impl.getDatabase(), how can
the SQL DELETE query be posted together with the other INSERT/UPDATEs in the
same SQL transaction without referring to THE transaction? Why does the ODMG
delete-object tutorial talk about the deletePersistent way, whereas
ExtTx.markDelete(o) would be more appropriate considering the SQL query
sequence in the transaction?

Need help.

Regards.

Bruno


Re: ODMG Delete : markDelete or deletePersistent

2006-12-08 Thread Bruno CROS

Looking into the OJB implementations, it seems that deletePersistent uses the
transaction of the current thread. So the SQL transaction sequence is OK;
sorry for asking in my last mail. I was confused between the
DB.deletePersistent(o), ExtTx.markDelete(o), and ExtTx.deletePersistent(o)
methods. It seems they all do the same.

I can't get rid of those dead locks, which occur when updating/deleting
objects in complex and different transactions.

Do I have to lock objects that are to be deleted in order to avoid a
concurrent updating transaction (plausibly locking DB records)?

Does someone use such a technique?

Thanks for help.


On 12/8/06, Bruno CROS [EMAIL PROTECTED] wrote:



  Hi,

  Just one question.

Using ODMG, if deletePersistent(o) is called from Impl.getDatabase(), how can
the SQL DELETE query be posted together with the other INSERT/UPDATEs in the
same SQL transaction without referring to THE transaction? Why does the ODMG
delete-object tutorial talk about the deletePersistent way, whereas
ExtTx.markDelete(o) would
be more appropriate considering the SQL query sequence in the transaction?

Need help.

Regards.

Bruno



Re: DB dead lock issue with deletePersistent calls

2006-12-08 Thread Bruno CROS

Hi,

On 12/8/06, Armin Waibel [EMAIL PROTECTED] wrote:


Hi Bruno,

Bruno CROS wrote:
 Hi,

 I'm experiencing some trouble with deletePersistent usage when running
 ODMG transactions. Most of the issues are Oracle dead locks
 (ORA-00060).

 I'm looking for what can be wrong, and can't find it. The error occurs
 on the production server, never on development servers!!


This sounds like a concurrency issue if it isn't possible to reproduce
it on the development server. Did you try to write a multithreaded test to
reproduce the problem?



No. I don't know how to write such a test. But I have an idea where it
occurs in the persistent graph.

I use locks on the OJB master object (at the start of the transaction) when
possible (even with deletes). I think that with this, two transactions can't
run at the same time on the same part of the persistent graph. It seems that
it works well in all other places. Unfortunately it's hard to do at the place
where the dead locks occur, because one of the transactions deals with detail
objects but not really with the master object. I wonder about locking the
master; I'm afraid it would cause more locks! But I'm not sure.


To summarize, the transaction that produces the error is built from loops
 containing several steps:
 1. Unreference the objects to be deleted from the objects holding references
 to them.
 2. Flush with the TX extension to post the DB updates.
 3. Delete the objects with getDatabase().deletePersistent(o).
 4. Flush to post the deletes before the next iteration.

 Finally, when the loop has ended, commit.

 ORA-00060 always occurs on DELETE SQL queries, as if two threads were
 deleting/updating the same record.


Did you trace/log the generated SQL statements with Oracle?



In production that is definitely impossible!!


I ask myself some questions:

 - Can Impl.getDatabase().deletePersistent(o) be called several times in
 one single transaction without trouble?

yep! As long as you don't commit the tx, Database().deletePersistent(o)
should always look up the same tx.



Good. How could I have doubted it... I read the implementation code; it was
easy to see.



 - Do Impl.getDatabase().deletePersistent(o) and a flush always post an SQL
 DELETE when flush is called? (I mean, not at commit only)

yep!


 - Can the update posts of objects that will finally be deleted lock records
 when the delete queries are posted? E.g.: unreferencing A from B and flushing
 causes an SQL UPDATE of B and implicitly of A (why not); then the delete of A
 can't be done because it is DB-locked.

 What does the implicitLocking option do on the DB?

Nothing, the locking-api is completely independent of the DB. But if the
DB lock settings are stricter than the locks used by the locking
api you can get DB-locking issues, e.g. if you use ReadUncommitted in OJB
and RepeatableRead in the DB.



My setup:

ImplicitLocking=false
LockAssociations=WRITE




 What are the differences between markDelete(o) and database.deletePersistent(o)?
 At flush? At commit?

From the api of #markDelete:
"Marks an object for deletion without locking the object. If the object
wasn't locked before, OJB will ask for a WRITE lock at commit."



It's a good thing. I just wonder whether it wouldn't be safer if the lock
were taken on a master object instead of some detail objects.

database.deletePersistent immediately locks the object and marks it for
delete.
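
Side by side, as a minimal sketch (tx obtained from the current ODMG
Implementation; obj is a placeholder):

TransactionExt tx = (TransactionExt) odmg.currentTransaction();

// markDelete: no lock now, OJB asks for a WRITE lock at commit
tx.markDelete(obj);

// deletePersistent: locks the object immediately, then marks it for delete
odmg.getDatabase(null).deletePersistent(obj);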



 And according to the following OJB note about deletePersistent usage:

 "It is important to note that the Database.deletePersistent() call does not
 delete the object itself, just the persistent representation of it. The
 transient object still exists and can be used however desired -- it is
 simply no longer persistent."

 Is it possible that OJB implicitly locks an old persistent object (marked
 by deletePersistent, so transient now) and causes a DB lock on the record
 (since the update occurs before the delete instead of the opposite)?

Normally transient objects are never locked and OJB doesn't lock an
object on DB (e.g. using select ... for update).

regards,
Armin



Thanks.



Note that
 this can occur if the deletePersistent DELETE posts are not issued according
 to the call sequence.

 Working with 1.0.4 and a few patches, Oracle 10g.

 Thanks for answers.

 Regards.







DB dead lock issue with deletePersistent calls

2006-12-06 Thread Bruno CROS

Hi,

I'm experiencing some trouble with deletePersistent usage when running
ODMG transactions. Most of the issues are Oracle dead locks (ORA-00060).

I'm looking for what can be wrong, and can't find it. The error occurs
on the production server, never on development servers!!

To summarize, the transaction that produces the error is built from loops
containing several steps:
1. Unreference the objects to be deleted from the objects holding references
to them.
2. Flush with the TX extension to post the DB updates.
3. Delete the objects with getDatabase().deletePersistent(o).
4. Flush to post the deletes before the next iteration.

Finally, when the loop has ended, commit.
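
A minimal sketch of that loop with OJB's odmg extensions (Master/Detail and
their accessors are hypothetical; TransactionExt is
org.apache.ojb.odmg.TransactionExt):

TransactionExt tx = (TransactionExt) odmg.newTransaction();
Database db = odmg.getDatabase(null); // the current database
tx.begin();
for (Iterator it = detailsToDelete.iterator(); it.hasNext();)
{
    Detail detail = (Detail) it.next();
    Master master = detail.getMaster();
    tx.lock(master, Transaction.WRITE);
    master.getDetails().remove(detail); // 1. unreference the object
    tx.flush();                         // 2. post the UPDATEs
    db.deletePersistent(detail);        // 3. mark it for delete
    tx.flush();                         // 4. post the DELETE
}
tx.commit();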

ORA-00060 always occurs on DELETE SQL queries, as if two threads were
deleting/updating the same record.

I ask myself some questions:

- Can Impl.getDatabase().deletePersistent(o) be called several times in
one single transaction without trouble?

- Do Impl.getDatabase().deletePersistent(o) and a flush always post an SQL
DELETE when flush is called? (I mean, not at commit only)

- Can the update posts of objects that will finally be deleted lock records
when the delete queries are posted? E.g.: unreferencing A from B and flushing
causes an SQL UPDATE of B and implicitly of A (why not); then the delete of A
can't be done because it is DB-locked.

What does the implicitLocking option do on the DB?

What are the differences between markDelete(o) and database.deletePersistent(o)?
At flush? At commit?

And according to the following OJB note about deletePersistent usage:

"It is important to note that the Database.deletePersistent() call does not
delete the object itself, just the persistent representation of it. The
transient object still exists and can be used however desired -- it is
simply no longer persistent."

Is it possible that OJB implicitly locks an old persistent object (marked by
deletePersistent, so transient now) and causes a DB lock on the record (since
the update occurs before the delete instead of the opposite)? Note that this
can occur if the deletePersistent DELETE posts are not issued according to the
call sequence.

Working with 1.0.4 and a few patches, Oracle 10g.

Thanks for answers.

Regards.


Re: Want do query Object width no 1-n reference

2006-12-06 Thread Bruno CROS

Well, try a notExists criteria and a subquery on Group returning anything
light from Group (the key field, for example).

Bruno.


On 12/6/06, Josef Wagner [EMAIL PROTECTED] wrote:


Hello List,

I want to get all Users which are not in a Group.

Here is my repository:

<class-descriptor class="de.on_ergy.lakon.data.model.User" table="user">
    <field-descriptor name="objId" column="obj_id" jdbc-type="INTEGER"
        primarykey="true" autoincrement="true"/>
    <field-descriptor name="username" column="user_name" jdbc-type="VARCHAR"
        length="100"/>
    <!-- m - n via benutzer_gruppen to gruppen -->
    <collection-descriptor
        name="groups"
        collection-class="org.apache.ojb.broker.util.collections.ManageableArrayList"
        element-class-ref="de.on_ergy.lakon.data.model.Group"
        auto-retrieve="true"
        auto-update="false"
        auto-delete="link"
        proxy="true"
        indirection-table="user_group">
        <fk-pointing-to-this-class column="user_obj_id"/>
        <fk-pointing-to-element-class column="group_obj_id"/>
    </collection-descriptor>
</class-descriptor>

How can I get all Users which are currently not in a Group?
Is there a way like this?
Criteria crit = new Criteria();
crit.addIsNull("groups");

Thanks a lot for your help!

regards

--
Josef Wagner






Re: Re: Circular references issue

2006-12-06 Thread Bruno CROS

Hello,

I guess you have to do the 2 steps in the same way with the PB API. The
database constraints need the same sequence of insert/update.

If you are using the PB API, I don't think you're dealing with rollback
ability; if not, write the 2 steps as if they were 2 standalone PB processes
and it should work.

Otherwise, find out how to flush with the PB API (post queries without commit).

Regards



On 6 Dec 2006 18:15:07 -, Virgo Smart [EMAIL PROTECTED] wrote:



Hello,

Is there a way to do the same using Persistence Broker APIs ?

Thanks and Regards,
Gautam.


On Wed, 06 Dec 2006 Bruno CROS wrote :
The circular references have to be built in 2 steps:

First, create the instances and link one relation.
Flush (write the SQL INSERT and UPDATE).
Second, link the second relation (the back side).
Commit.

Your example :

tx.begin();
d = new Drawer();
f = new Finish();
tx.lock(d, Transaction.WRITE);
tx.lock(f, Transaction.WRITE);
d.setFinish(f);
((TransactionExt) tx).flush(); // post the INSERT queries
f.setDrawer(d);
tx.commit();

If you have to delete one object, you have to break the circular reference
in the same way.

tx.lock(d, Transaction.WRITE);
d.setFinish(null);
((TransactionExt) tx).flush(); // post UPDATE ... SET FINISHPK = null
Impl.getDatabase().deletePersistent(f);
tx.commit();

Don't change the Java class definitions (circular reference relations); write
your processes as you would with plain JDBC.

Bruno.

On 12/6/06, Armin Waibel [EMAIL PROTECTED] wrote:

Hello,

I have a scenario in which there are two classes that reference each
other. E.g. class Drawer references Finish and Finish references Drawer.
When I attempt to persist a Drawer instance, an exception is thrown
suggesting that we "cannot add or update a child row: a foreign key
reference fails".

Is there a way to correct this problem without changing the Java class
definitions ?

Thanks and Regards,
Gautam.











Re: Want do query Object width no 1-n reference

2006-12-06 Thread Bruno CROS

Hi,

Well, if you have made a DB index on table Group starting with the foreign
key of Users, it's necessarily the fastest query.
By "query anything light" I mean the field is not important; putting something
useless leaves no doubt about the purpose of this (sub)query.





On 12/7/06, Vasily Ivanov [EMAIL PROTECTED] wrote:


Hello,

I usually do criteria.addNotExists(subQuery) with a
ReportQueryByCriteria subQuery and set subQuery.setAttributes(new
String[] { "1" }); it should be even faster than "anything light of
Group (the key field, for example)".

Cheers,
Vasily
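
Putting both suggestions together, a hedged sketch against the repository
above; the correlation assumes Group maps an inverse collection "users" back
to User, and Criteria.PARENT_QUERY_PREFIX is used to refer to the parent query:

Criteria subCrit = new Criteria();
// assumption: Group carries an inverse collection "users"
subCrit.addEqualToField("users.objId", Criteria.PARENT_QUERY_PREFIX + "objId");
ReportQueryByCriteria subQuery = QueryFactory.newReportQuery(Group.class, subCrit);
subQuery.setAttributes(new String[] { "1" }); // select a constant, per the tip above

Criteria crit = new Criteria();
crit.addNotExists(subQuery);
Query query = QueryFactory.newQuery(User.class, crit);
Collection usersWithoutGroup = broker.getCollectionByQuery(query);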

On 12/7/06, Bruno CROS [EMAIL PROTECTED] wrote:
 Well, try a notExists criteria and a subquery on Group returning anything
 light from Group (the key field, for example).

 Bruno.

 On 12/6/06, Josef Wagner [EMAIL PROTECTED] wrote:
 
  Hello List,
 
  I want to get all Users which are not in a Group.

  Here is my repository:
 
  <class-descriptor class="de.on_ergy.lakon.data.model.User" table="user">
      <field-descriptor name="objId" column="obj_id" jdbc-type="INTEGER"
          primarykey="true" autoincrement="true"/>
      <field-descriptor name="username" column="user_name" jdbc-type="VARCHAR"
          length="100"/>
      <!-- m - n via benutzer_gruppen to gruppen -->
      <collection-descriptor
          name="groups"
          collection-class="org.apache.ojb.broker.util.collections.ManageableArrayList"
          element-class-ref="de.on_ergy.lakon.data.model.Group"
          auto-retrieve="true"
          auto-update="false"
          auto-delete="link"
          proxy="true"
          indirection-table="user_group">
          <fk-pointing-to-this-class column="user_obj_id"/>
          <fk-pointing-to-element-class column="group_obj_id"/>
      </collection-descriptor>
  </class-descriptor>
 
  How can I get all Users which are currently not in a Group?
  Is there a way like this?
  Criteria crit = new Criteria();
  crit.addIsNull("groups");
 
  Thanks a lot for your help!
 
  regards
 
  --
  Josef Wagner
 
 
 
 







Re: How to release cached objects ?

2006-11-14 Thread Bruno CROS

Hi Armin, thank you for answering so quickly. That's always very nice.

Sorry, but I have 3 more questions about clearing object caches:

- When PersistenceBrokerFactory.releaseAllInstances() is done, are the object
caches reset?

- Can either of these ways to clear the cache
(PersistenceBrokerFactory.releaseAllInstances() (if it does so) and
PB.clearCache()) damage running transactions that use released cached objects?

- If I choose PB.clearCache(), how can I iterate over *all* the PBs I have in
my pool? I can get one PB from the pool, but I don't know how to get the
whole collection.

Thanks again, and again...

Regards



Now, my need would be to getAllBroker from


On 11/13/06, Armin Waibel [EMAIL PROTECTED] wrote:


Hi Bruno,

Bruno CROS wrote:
 Hi,

 Just about a little question.

 Does PersistenceBrokerFactory.releaseAllInstances() release all cached
 objects?

 I'm using PersistenceBrokerFactoryDefaultImpl and ObjectCacheDefaultImpl
 (and OJB 1.0.4).

 The goal is to force the server to reload all OJB-mapped objects. Is there a
 proper method foreseen for this?


Evict all caches:

PB.clearCache()

or

PB.serviceObjectCache().clear()

With
PersistenceBrokerFactory.releaseAllInstances()

you will release all pooled PB instances and reset the PB pool.

regards,
Armin
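
A minimal usage sketch; that a single broker suffices rests on the assumption
(true for the shared default ObjectCacheDefaultImpl, as far as I know) that
all brokers see the same underlying cache:

PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
try
{
    broker.clearCache(); // evicts all cached objects
}
finally
{
    broker.close();      // hand the broker back to the pool
}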

 Thanks a lot.

 Bruno






How to release cached objects ?

2006-11-13 Thread Bruno CROS

Hi,

Just about a little question.

Does PersistenceBrokerFactory.releaseAllInstances() release all cached
objects?

I'm using PersistenceBrokerFactoryDefaultImpl and ObjectCacheDefaultImpl (and
OJB 1.0.4).

The goal is to force the server to reload all OJB-mapped objects. Is there a
proper method foreseen for this?

Thanks a lot.

Bruno


Reflexive reference relation

2006-10-30 Thread Bruno CROS

Hi,

At the start of my project, I remember I did a workaround to translate the
mapping of a reflexive relation, but I'm not sure why... so I did it again,
and obtained a PK_VIOLATED exception!

 I don't even know why this shouldn't be possible (I mean without a
simulated indirection table, i.e. referencing directly). Did someone do
something like that?

 E.g.: an instance A of class C references an object B of class C too.

Thanks again.

Release 1.0.4 with some patches.


Re: Reflexive reference relation

2006-10-30 Thread Bruno CROS

Well, never mind my last mail. The reflexive relation works fine if you
don't forget the flush or checkpoint, which is absolutely necessary.

OJB is really a great tool.
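
A minimal sketch of the working order of operations (class C and its
setParent accessor are hypothetical):

tx.begin();
C a = new C();
C b = new C();
tx.lock(a, Transaction.WRITE);
tx.lock(b, Transaction.WRITE);
((TransactionExt) tx).flush(); // INSERT both rows first
a.setParent(b);                // then set the reflexive reference
tx.commit();                   // UPDATE a with the FK to b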

Thanks.


On 10/30/06, Bruno CROS [EMAIL PROTECTED] wrote:



 Hi,

  At the start of my project, I remember I did a workaround to translate the
mapping of a reflexive relation, but I'm not sure why... so I did it again,
and obtained a PK_VIOLATED exception!

  I don't even know why this shouldn't be possible (I mean without a
simulated indirection table, i.e. referencing directly). Did someone do
something like that?

  E.g.: an instance A of class C references an object B of class C too.

Thanks again.

Release 1.0.4 with some patches.




Re: Monitoring DB cursor leaks

2006-10-10 Thread Bruno CROS

 Hi,

Here is my connector; it is already using Oracle9i...

Maybe I have to set the jdbc-level to 3.0?

Referring to this article, it seems that this is a hard-to-resolve problem.

http://www.orafaq.com/node/758

Apart from looking into the code, how can I check for real cursor leaks (if
there are any)?

Regards

<jdbc-connection-descriptor
    jcd-alias="default"
    default-connection="true"
    platform="Oracle9i"
    jdbc-level="2.0"
    driver="oracle.jdbc.driver.OracleDriver"
    protocol="jdbc"
    subprotocol="oracle"
    dbalias="thin:@:1521:Z"
    username="u"
    password=""
    batch-mode="false"
    useAutoCommit="1"
    ignoreAutoCommitExceptions="false"




  -- Forwarded message --
From: Danilo Tommasina [EMAIL PROTECTED]
Date: Oct 9, 2006 6:08 PM
Subject: Re: Monitoring DB cursor leaks
To: OJB Users List ojb-user@db.apache.org

Hi,

we were having that problem too, a long time ago. You should use the
'Oracle9i' platform setting in your jdbc-connection-descriptor instead of
the 'Oracle' platform; that should fix the problem. IIRC the 'Oracle'
platform does its own caching of PreparedStatements; however, the Oracle
driver has its own PreparedStatement cache, and this double caching causes
too many cursors to stay open.
The 'Oracle9i' platform should not make use of the local PreparedStatement
cache.
Furthermore, 'Oracle9i' offers better performance and correct handling
of BLOBs > 4 kB, for 9i or later Oracle versions.

Good luck
cheers
Danilo

Bruno CROS wrote:

 Hi all,

  Experiencing some MAX OPEN CURSOR Oracle errors (ORA-01000), I'm looking
for the cursor leaks (unclosed ResultSets, it seems) in the code.

I guess that broker queries properly close the result set. So when are report
query result iterations closed? On hasNext() == false?

Is it possible to check open cursors when releasing connections? And, by the
way, close them if needed?

Note: the MAX_OPEN_CURSOR Oracle parameter is currently set to 500 per
connection (it was 1000 earlier).

Oracle 10, OJB 1.0.4 with some little updates.

 Regards.





Re: Monitoring DB cursor leaks

2006-10-10 Thread Bruno CROS

http://oracleandy.blogspot.com/2006/03/vopencursor-find-cursors-that-are.html

Perhaps I found something to detect the bad code, but I don't know how to set
the session parameter on OJB connections. The equivalent of:
alter session set "_close_cached_open_cursors" = TRUE

Has someone already done this?

Thanks
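
A hedged sketch of issuing that statement through a broker's connection; note
that it only affects the single pooled connection you borrow, and Oracle
requires double quotes around underscore-prefixed parameters:

static void closeCachedOpenCursors() throws Exception
{
    PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
    try
    {
        java.sql.Connection con = broker.serviceConnectionManager().getConnection();
        java.sql.Statement stmt = con.createStatement();
        stmt.execute("alter session set \"_close_cached_open_cursors\" = TRUE");
        stmt.close();
    }
    finally
    {
        broker.close();
    }
}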




On 10/10/06, Bruno CROS [EMAIL PROTECTED] wrote:


  Hi,

Here is my connector; it is already using Oracle9i...

Maybe I have to set the jdbc-level to 3.0?

Referring to this article, it seems that this is a hard-to-resolve problem.

http://www.orafaq.com/node/758

Apart from looking into the code, how can I check for real cursor leaks (if
there are any)?

Regards

<jdbc-connection-descriptor
    jcd-alias="default"
    default-connection="true"
    platform="Oracle9i"
    jdbc-level="2.0"
    driver="oracle.jdbc.driver.OracleDriver"
    protocol="jdbc"
    subprotocol="oracle"
    dbalias="thin:@:1521:Z"
    username="u"
    password=""
    batch-mode="false"
    useAutoCommit="1"
    ignoreAutoCommitExceptions="false"



   -- Forwarded message --
From: Danilo Tommasina [EMAIL PROTECTED] 
Date: Oct 9, 2006 6:08 PM
Subject: Re: Monitoring DB cursor leaks
To: OJB Users List ojb-user@db.apache.org

Hi,

we were having that problem too, a long time ago. You should use the
'Oracle9i' platform setting in your jdbc-connection-descriptor instead of
the 'Oracle' platform; that should fix the problem. IIRC the 'Oracle'
platform does its own caching of PreparedStatements; however, the Oracle
driver has its own PreparedStatement cache, and this double caching causes
too many cursors to stay open.
The 'Oracle9i' platform should not make use of the local PreparedStatement
cache.
Furthermore, 'Oracle9i' offers better performance and correct handling
of BLOBs > 4 kB, for 9i or later Oracle versions.

Good luck
cheers
Danilo

Bruno CROS wrote:
  Hi all,

   Experiencing some MAX OPEN CURSOR Oracle errors (ORA-01000), I'm looking
  for the cursor leaks (unclosed ResultSets, it seems) in the code.

  I guess that broker queries properly close the result set. So when are
  report query result iterations closed? On hasNext() == false?

  Is it possible to check open cursors when releasing connections? And, by
  the way, close them if needed?

  Note: the MAX_OPEN_CURSOR Oracle parameter is currently set to 500 per
  connection (it was 1000 earlier).

  Oracle 10, OJB 1.0.4 with some little updates.

  Regards.







Monitoring DB cursor leaks

2006-10-09 Thread Bruno CROS

 Hi all,

 Experiencing some MAX OPEN CURSOR Oracle errors (ORA-01000), I'm looking
for the cursor leaks (unclosed ResultSets, it seems) in the code.

I guess that broker queries properly close the result set. So when are report
query result iterations closed? On hasNext() == false?

Is it possible to check open cursors when releasing connections? And, by the
way, close them if needed?

Note: the MAX_OPEN_CURSOR Oracle parameter is currently set to 500 per
connection (it was 1000 earlier).

Oracle 10, OJB 1.0.4 with some little updates.

 Regards.


Re: LockManagerRemoteImpl EOFException on URL

2006-09-29 Thread Bruno CROS

The EOF is the cause on the client (RemoteImpl client);
the resulting exception is a LockRuntimeException("Cannot remove write lock
for...") thrown by the releaseLock method.

I'm not running a mass test, only a single one. One server, one client.

I will run some tests today too.

--- The stack trace looks like this

LockRuntimeException(Cannot remove write lock for...
...
RemoteImpl.releaseLock...
...
my call for WRITE lock.


- and then


Caused by: java.io.EOFException
    at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2165)
    at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2634)
    at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:734)
    at java.io.ObjectInputStream.<init>(ObjectInputStream.java:253)
    at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequestObject(LockManagerRemoteImpl.java:383)
    at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequest(LockManagerRemoteImpl.java:335)
    at org.apache.ojb.broker.locking.LockManagerRemoteImpl.releaseLock(LockManagerRemoteImpl.java:193)
    ... 41 more



On 9/29/06, Armin Waibel [EMAIL PROTECTED] wrote:

Bruno CROS wrote:
 Hi,

 I updated LockManagerFactory from SVN, and configure is now called
 correctly, thanks.

 The EOFException still remains; it seems there is a problem with the
 serialization of LockInfo and the HttpObjectStream.


Is EOFException the root exception?

I can reproduce a similar (windows specific) problem when running a mass
lock test. On the server side I get
java.io.EOFException
    at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2228)
    at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2694)



But the real issue is caused on the client side
java.net.BindException: Address already in use: connect
   at java.net.PlainSocketImpl.socketConnect(Native Method)
   at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:305)


Is this the same behavior you mentioned?

regards,
Armin

 Lots of people seem to have this problem (searching Google), but I don't
 really understand what it is. I know what serialization is, I've done it
 (with a file), but I can't see where something is wrong in
 LockManagerRemoteImpl.

 Thanks for any help.

 OJB 1.0.4.
 Sun JVM 1.4



 On 9/28/06, Bruno CROS [EMAIL PROTECTED] wrote:



 Hi,

 Once I had modified the constructor of LockManagerRemoteImpl to initialize
 the lockserver variable, I started to test LockManagerRemoteImpl and ran
 into an EOFException.



 URL sun implementation ,  release 1.0.4

 Thanks

  Caused by: java.io.EOFException
      at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2165)
      at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2634)
      at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:734)
      at java.io.ObjectInputStream.<init>(ObjectInputStream.java:253)
      at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequestObject(LockManagerRemoteImpl.java:383)
      at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequest(LockManagerRemoteImpl.java:335)
      at org.apache.ojb.broker.locking.LockManagerRemoteImpl.releaseLock(LockManagerRemoteImpl.java:193)
      ... 41 more







Re: LockManagerRemoteImpl EOFException on URL

2006-09-29 Thread Bruno CROS

Hi Armin,

I think the classpath (jars) I provide for the servlet is wrong (server side).
Can you list the jars that need to be deployed with the LockManagerServlet?

Thanks



On 9/29/06, Bruno CROS [EMAIL PROTECTED] wrote:


The EOF is the cause on the client (RemoteImpl client);
the resulting exception is a LockRuntimeException("Cannot remove write
lock for...") thrown by the releaseLock method.

I'm not running a mass test, only a single one. One server, one client.

I will run some tests today too.

--- The stack trace looks like this

LockRuntimeException(Cannot remove write lock for...
...
RemoteImpl.releaseLock...
...
my call for WRITE lock.


- and then


Caused by: java.io.EOFException
    at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2165)
    at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2634)
    at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:734)
    at java.io.ObjectInputStream.<init>(ObjectInputStream.java:253)
    at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequestObject(LockManagerRemoteImpl.java:383)
    at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequest(LockManagerRemoteImpl.java:335)
    at org.apache.ojb.broker.locking.LockManagerRemoteImpl.releaseLock(LockManagerRemoteImpl.java:193)
    ... 41 more



On 9/29/06, Armin Waibel [EMAIL PROTECTED] wrote:
 Bruno CROS wrote:
  Hi,
 
  I updated LockManagerFactory from SVN, and configure is now called
  correctly, thanks.

  The EOFException still remains; it seems there is a problem with the
  serialization of LockInfo and the HttpObjectStream.
 

 Is EOFException the root exception?

 I can reproduce a similar (windows specific) problem when running a mass
 lock test. On the server side I get
 java.io.EOFException
 at
 java.io.ObjectInputStream$PeekInputStream.readFully (
ObjectInputStream.java:2228)
 at
 java.io.ObjectInputStream$BlockDataInputStream.readShort(
ObjectInputStream.java:2694)


 But the real issue is caused on the client side
 java.net.BindException: Address already in use: connect
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:305)


 Is this the same behavior you mentioned?

 regards,
 Armin

  Lots of people seem to have this problem (searching Google), but I don't
  really understand what it is. I know what serialization is, I've done it
  (with a file), but I can't see where something is wrong in
  LockManagerRemoteImpl.
 
  Thanks for any help.
 
  OJB 1.0.4.
  Sun JVM 1.4
 
 
 
  On 9/28/06, Bruno CROS [EMAIL PROTECTED] wrote:
 
 
 
  Hi,
 
  Once I had modified the constructor of LockManagerRemoteImpl to initialize
  the lockserver variable, I started to test LockManagerRemoteImpl and ran
  into an EOFException.
 
 
 
  URL sun implementation ,  release 1.0.4
 
  Thanks
 
   Caused by: java.io.EOFException
       at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2165)
       at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2634)
       at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:734)
       at java.io.ObjectInputStream.<init>(ObjectInputStream.java:253)
       at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequestObject(LockManagerRemoteImpl.java:383)
       at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequest(LockManagerRemoteImpl.java:335)
       at org.apache.ojb.broker.locking.LockManagerRemoteImpl.releaseLock(LockManagerRemoteImpl.java:193)
       ... 41 more
 
 








Re: LockManagerRemoteImpl EOFException on URL

2006-09-29 Thread Bruno CROS

Thanks.

I was running the servlet under the same JVM as the application, so
LockManagerInMemoryImpl couldn't be instantiated in the servlet's configure
step.

Then I set up 2 servers. I had to update 3 files to get it to work:

- LockManagerRemoteImpl
- LockManagerServlet (working with Remote)
- And so LockManagerInMemoryImpl.

Now it's working. Phew.

The question now is: updates are slower, but how much? It seems to be 10x
slower.

I still have to measure my batches and the mass load tests, but can you tell
me if there is some setup to get this faster? What I'm asking is: are 2
servers with the remote implementation faster than one big server with
InMemory? It depends on the application... yes. But what do you think?

Best regards.

Bruno



On 9/28/06, Bruno CROS [EMAIL PROTECTED] wrote:


Hi,

I updated LockManagerFactory from SVN, and configure is now called correctly,
thanks.

The EOFException still remains; it seems there is a problem with the
serialization of LockInfo and the HttpObjectStream.

Lots of people seem to have this problem (searching Google), but I don't
really understand what it is. I know what serialization is, I've done it
(with a file), but I can't see where something is wrong in
LockManagerRemoteImpl.

Thanks for any help.

OJB 1.0.4.
Sun JVM 1.4



On 9/28/06, Bruno CROS [EMAIL PROTECTED] wrote:



 Hi,

 Once I had modified the constructor of LockManagerRemoteImpl to initialize
 the lockserver variable, I started to test LockManagerRemoteImpl and ran
 into an EOFException.



 URL sun implementation ,  release 1.0.4

 Thanks

 Caused by: java.io.EOFException
     at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2165)
     at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2634)
     at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:734)
     at java.io.ObjectInputStream.<init>(ObjectInputStream.java:253)
     at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequestObject(LockManagerRemoteImpl.java:383)
     at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequest(LockManagerRemoteImpl.java:335)
     at org.apache.ojb.broker.locking.LockManagerRemoteImpl.releaseLock(LockManagerRemoteImpl.java:193)
     ... 41 more





Re: Configurable interface for LockManagers

2006-09-28 Thread Bruno CROS

Hello,

While waiting for the 1.0.5 release, which files should I get from SVN to
get LockManagerRemoteImpl to work?

Thanks Armin






On 1/28/06, Armin Waibel [EMAIL PROTECTED] wrote:


Hi,

is fixed in SVN and will be included in upcoming 1.0.5 maintenance
release.

regards,
Armin

Armin Waibel wrote:
 Armin Waibel wrote:
 A workaround would be to configure LockManagerRemoteImpl once by hand
 before the first use of the odmg-api:

 LockManager lm = LockManagerFactory.getLockManager();
 if(lm instanceof Configurable)
 {
 Configurator configurator = OjbConfigurator.getInstance();
 configurator.configure((LockManager)lm);
 }


 Sorry this doesn't work, because the LockManagerFactory always wraps
 LockManager instances with LockManagerOdmgImpl.

 I'll try to fix the LockManagerFactory bug by tomorrow - stay tuned.

 regards,
 Armin



 HTH
 regards,
 Armin



 --Phil













LockManagerRemoteImpl EOFException on URL

2006-09-28 Thread Bruno CROS

Hi,

Once I had modified the constructor of LockManagerRemoteImpl to initialize the
lockserver variable, I started to test LockManagerRemoteImpl and ran into an
EOFException.



URL sun implementation ,  release 1.0.4

Thanks

Caused by: java.io.EOFException
    at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2165)
    at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2634)
    at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:734)
    at java.io.ObjectInputStream.<init>(ObjectInputStream.java:253)
    at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequestObject(LockManagerRemoteImpl.java:383)
    at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequest(LockManagerRemoteImpl.java:335)
    at org.apache.ojb.broker.locking.LockManagerRemoteImpl.releaseLock(LockManagerRemoteImpl.java:193)
    ... 41 more


Re: LockManagerRemoteImpl EOFException on URL

2006-09-28 Thread Bruno CROS

Hi,

I updated LockManagerFactory from SVN, and configure is now called correctly,
thanks.

The EOFException still remains; it seems there is a problem with the
serialization of LockInfo and the HttpObjectStream.

Lots of people seem to have this problem (searching Google), but I don't
really understand what it is. I know what serialization is, I've done it
(with a file), but I can't see where something is wrong in
LockManagerRemoteImpl.

Thanks for any help.

OJB 1.0.4.
Sun JVM 1.4



On 9/28/06, Bruno CROS [EMAIL PROTECTED] wrote:




Hi,

Once I had modified the constructor of LockManagerRemoteImpl to initialize the
lockserver variable, I started to test LockManagerRemoteImpl and ran into an
EOFException.



URL sun implementation ,  release 1.0.4

Thanks

 Caused by: java.io.EOFException
     at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2165)
     at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2634)
     at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:734)
     at java.io.ObjectInputStream.<init>(ObjectInputStream.java:253)
     at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequestObject(LockManagerRemoteImpl.java:383)
     at org.apache.ojb.broker.locking.LockManagerRemoteImpl.performRequest(LockManagerRemoteImpl.java:335)
     at org.apache.ojb.broker.locking.LockManagerRemoteImpl.releaseLock(LockManagerRemoteImpl.java:193)
     ... 41 more



Re: Xdoclet proxy values

2006-07-06 Thread Bruno CROS

Done

On 7/6/06, Thomas Dudziak [EMAIL PROTECTED] wrote:


On 7/5/06, Bruno CROS [EMAIL PROTECTED] wrote:

   I noticed that the proxy keyword can't accept the "dynamic" value.
 "dynamic" is an accepted value in the repository file and seems to be
 different from "true". On my setup, "dynamic" works fine with CGLib proxies,
 and proxy="true" seems not to...

   Can someone confirm this? If it's an omission, has the XDoclet module
 been updated since 1.0.4?

   The proxy attribute is used in reference tags, release 1.0.4.

Yep, the proxy attribute for references does not currently allow
"dynamic". Could you file an issue in JIRA for this?

Tom





Xdoclet proxy values

2006-07-05 Thread Bruno CROS

Hi all,

 I noticed that the proxy keyword can't accept the "dynamic" value. "dynamic"
is an accepted value in the repository file and seems to be different from
"true". On my setup, "dynamic" works fine with CGLib proxies, and proxy="true"
seems not to...

 Can someone confirm this? If it's an omission, has the XDoclet module been
updated since 1.0.4?

 The proxy attribute is used in reference tags, release 1.0.4.

Thanks.


Re: Only one PB from second database

2006-05-11 Thread Bruno CROS

Hi Armin,

autoCommit set to 1 does not avoid the freeze. The fact is that this database
is only used for reading, so rollbacks (on disconnect) will never be long.
autoCommit is nevertheless set to 1 now. Thanks for the advice.

I don't know why it freezes when maxIdle is set to -1 and why it does not when
maxIdle is set to 0. But if I guess correctly, 0 closes usable connections,
is that it?! So it's not optimized.

Here's my well-working connection setup.

Regards.


<connection-pool
    maxActive="5"
    maxIdle="0"
    minIdle="2"
    maxWait="1000"
    whenExhaustedAction="1"

    validationQuery="SELECT CURRENT_TIMESTAMP"
    testOnBorrow="true"
    testOnReturn="false"
    testWhileIdle="true"
    timeBetweenEvictionRunsMillis="6"
    numTestsPerEvictionRun="2"
    minEvictableIdleTimeMillis="180"
    removeAbandoned="false"
    removeAbandonedTimeout="300"
    logAbandoned="true"
>
    <!-- Set fetchSize to 0 to use the driver's default. -->
    <attribute attribute-name="fetchSize" attribute-value="0"/>

    <!-- Attributes with name prefix "jdbc." are passed directly to the JDBC driver. -->
    <!-- Example setting (used by the Oracle driver when Statement batching is enabled) -->
    <attribute attribute-name="jdbc.defaultBatchValue" attribute-value="5"/>

    <!-- Attributes determining if ConnectionFactoryDBCPImpl should also pool
         PreparedStatements. This is programmatically disabled when using
         platform="Oracle9i", since Oracle statement caching will conflict with
         DBCP ObjectPool-based PreparedStatement caching (i.e. setting "true"
         here has no effect for the Oracle9i platform). -->
    <attribute attribute-name="dbcp.poolPreparedStatements" attribute-value="false"/>
    <attribute attribute-name="dbcp.maxOpenPreparedStatements" attribute-value="10"/>
    <!-- Attribute determining if the Commons DBCP connection wrapper will allow
         access to the underlying concrete Connection instance from the JDBC
         driver (normally this is not allowed, as in J2EE containers using
         wrappers). -->
    <attribute attribute-name="dbcp.accessToUnderlyingConnectionAllowed" attribute-value="false"/>
</connection-pool>







On 5/6/06, Armin Waibel [EMAIL PROTECTED] wrote:


Hi Bruno,

Bruno CROS wrote:
 Hi Armin,

 In fact, I looked at the DB connections in the DB console. That was a bad
 idea, because connections disappear!! I looked with netstat -a, and I saw
 several sockets/connections...

 Well, I was experiencing some freezes with these connections with a pool
 setup where maxActive was set to -1. I didn't find any documentation on that
 value.

Both ConnectionFactoryPooledImpl and ConnectionFactoryDBCPImpl use
commons-pool to manage connections. There you can find details about the
settings
http://jakarta.apache.org/commons/pool/apidocs/index.html

I would recommend setting maxActive at most to the maximum number of
connections provided by your database server.


 What I know is that, when I put 0 (no limit), it seems there is no more
 freezing.


I think there is a typo in the documentation. For an unlimited connection pool
you have to set -1.

http://jakarta.apache.org/commons/pool/apidocs/org/apache/commons/pool/impl/GenericObjectPool.html
Will fix this till next release.

In your jdbc-connection-descriptor (posted some time ago) you set
useAutoCommit="0". In this case you have to set autoCommit to 'false' in
your jdbc-driver configuration setup, or else you will run into rollback
hassle (if autoCommit is 'true' for connections).

regards,
Armin

 Can you enlighten me about that?

 Thanks.

 Regards.



 On 5/5/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 Bruno CROS wrote:
  Hi,
 
  I have a strange behaviour with the second database I use. It seems that
  using broker =
  PersistenceBrokerFactory.createPersistenceBroker(rushDb);
  always returns the same broker/connection.

  My connection pool is set up so that it should keep 2 idle connections
  available, but that never happened. Still only one.

  How can I use several connections in this case?

  Note that this database is not used to update data. No transactions
  are used on it.
 

 How do you test this behavior? Please set up a test and look up two PB
 instances at the same time:

 broker_A = PersistenceBrokerFactory.createPersistenceBroker(rushDb);
 broker_B = PersistenceBrokerFactory.createPersistenceBroker(rushDb);

 Are A and B really the same broker instances? If you execute a query on
 both broker instances (don't close the brokers afterwards) and then look up
 the Connection from A and B - are the connections the same?

 regards,
 Armin
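
A minimal sketch of that test (the jcd-alias is assumed to be "rushDb"):

static void testDistinctBrokers() throws Exception
{
    PersistenceBroker a = PersistenceBrokerFactory.createPersistenceBroker(new PBKey("rushDb"));
    PersistenceBroker b = PersistenceBrokerFactory.createPersistenceBroker(new PBKey("rushDb"));
    System.out.println("same broker instance? " + (a == b));
    java.sql.Connection conA = a.serviceConnectionManager().getConnection();
    java.sql.Connection conB = b.serviceConnectionManager().getConnection();
    System.out.println("same connection? " + (conA == conB));
    a.close();
    b.close();
}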

 
  Thanks.
 
 
  Here's my connection setup.
 
<jdbc-connection-descriptor
    jcd-alias="rushDb"
    default-connection="false"
    platform="MsSQLServer"
    jdbc-level="2.0"
    driver="com.microsoft.jdbc.sqlserver.SQLServerDriver"

Re: Only one PB from second database

2006-05-11 Thread Bruno CROS

When maxIdle is set to 0, it works well, and the 5 maxActive are sufficient.
No freeze at all.

The whenExhaustedAction "block" behaviour is exactly what I want, no error.
And it works with maxIdle set to 0.

I don't see why no connection remaining in the pool should lead to
serialization. Dead lock is a good assumption, it looks like that, but if the
code only reads, I don't know at all how it could produce a dead lock. I'm
going to look at the commons-pool settings, to see if maxIdle is a
commons-pool setting.

regards





On 5/11/06, Armin Waibel [EMAIL PROTECTED] wrote:


Bruno CROS wrote:
 Hi Armin,

 autoCommit set to 1 does not avoid the freeze. The fact is that this
 database is only used for reading, so rollbacks (on disconnect) will never
 be long. autoCommit is nevertheless set to 1 now. Thanks for the advice.

 I don't know why it freezes when maxIdle is set to -1 and why it does not
 when maxIdle is set to 0. But if I guess correctly, 0 closes usable
 connections, is that it?! So it's not optimized.


You enabled whenExhaustedAction="1"; this means that the pool blocks until
a connection is returned. Max active connections is set to 5, which isn't
much for a multithreaded application. Maybe your application causes a
deadlock: five different broker instances exhaust the pool and another
broker instance tries to look up a connection while the other broker
instances are still not closed.
What happens when you set whenExhaustedAction="0"?

I think if you set maxIdle="0" only one connection or no connections
will remain in the pool (maybe I'm wrong, I didn't check the commons-pool
sources) and all access is serialized by this.

regards,
Armin


  Here's my well-working connection setup.

 Regards.


<connection-pool
    maxActive="5"
    maxIdle="0"
    minIdle="2"
    maxWait="1000"
    whenExhaustedAction="1"

    validationQuery="SELECT CURRENT_TIMESTAMP"
    testOnBorrow="true"
    testOnReturn="false"
    testWhileIdle="true"
    timeBetweenEvictionRunsMillis="6"
    numTestsPerEvictionRun="2"
    minEvictableIdleTimeMillis="180"
    removeAbandoned="false"
    removeAbandonedTimeout="300"
    logAbandoned="true"
>
    <!-- Set fetchSize to 0 to use the driver's default. -->
    <attribute attribute-name="fetchSize" attribute-value="0"/>

    <!-- Attributes with name prefix "jdbc." are passed directly to the JDBC driver. -->
    <!-- Example setting (used by the Oracle driver when Statement batching is enabled) -->
    <attribute attribute-name="jdbc.defaultBatchValue" attribute-value="5"/>

    <!-- Attributes determining if ConnectionFactoryDBCPImpl should also pool
         PreparedStatements. This is programmatically disabled when using
         platform="Oracle9i", since Oracle statement caching will conflict with
         DBCP ObjectPool-based PreparedStatement caching (i.e. setting "true"
         here has no effect for the Oracle9i platform). -->
    <attribute attribute-name="dbcp.poolPreparedStatements" attribute-value="false"/>
    <attribute attribute-name="dbcp.maxOpenPreparedStatements" attribute-value="10"/>
    <!-- Attribute determining if the Commons DBCP connection wrapper will allow
         access to the underlying concrete Connection instance from the JDBC
         driver (normally this is not allowed, as in J2EE containers using
         wrappers). -->
    <attribute attribute-name="dbcp.accessToUnderlyingConnectionAllowed" attribute-value="false"/>
</connection-pool>







 On 5/6/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 Bruno CROS wrote:
  Hi Armin,
 
 In fact, I looked at the DB connections in the DB console. It was a bad
 idea, because connections disappear!! I looked with netstat -a, and I saw
 several sockets/connections...

 Well, I was experiencing some freezes with these connections with a pool
 setup where maxActive was set to -1. I didn't find any documentation on
 that value.

 Both ConnectionFactoryPooledImpl and ConnectionFactoryDBCPImpl use
 commons-pool to manage connections. There you can find details about
the
 settings
 http://jakarta.apache.org/commons/pool/apidocs/index.html

 I would recommend setting maxActive connections at most to the maximum
 number of connections provided by your database server.


  What I know is that, when I put 0 (no limit), it seems there is no
 more freeze.
 

 I think there is a typo in the documentation. For an unlimited connection
 pool you have to set -1.


http://jakarta.apache.org/commons/pool/apidocs/org/apache/commons/pool/impl/GenericObjectPool.html

 Will fix this in the next release.

 In your jdbc-connection-descriptor (posted some time ago) you set
 useAutoCommit=0. In this case you have to set autoCommit to 'false' in
 your jdbc-driver configuration, else you will run into rollback hassle
 (if autoCommit is 'true' for connections).

 regards,
 Armin

  Can you enlighten me about that?
 
  Thanks.
 
  Regards

Re: Only one PB from second database

2006-05-11 Thread Bruno CROS

No idea. All is ok on paper!

http://jakarta.apache.org/commons/pool/apidocs/org/apache/commons/pool/impl/GenericObjectPool.html




On 5/11/06, Bruno CROS [EMAIL PROTECTED] wrote:


  when maxIdle is set to 0, it works well, and the 5 maxActive are
 sufficient. No freeze at all.

The whenExhaustedAction=1 (block) behaviour is exactly what I want: no error.
And it works with maxIdle set to 0.

I don't see why having no connections remaining in the pool would lead to
serialized access. Deadlock is a good assumption, it looks like that, but if
the code only reads, I don't know at all how to produce a deadlock. I'm going
to look at the commons-pool settings, if maxIdle is a commons-pool setting.

regards





On 5/11/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Bruno CROS wrote:
  Hi Armin,
 
  autoCommit set to 1 does not avoid the freeze. The fact is that the
  database is only used for reading, so rollbacks (by disconnecting) will
  never be long. autoCommit is however set to 1 now. Thanks for the advice.

  I don't know why it freezes when maxIdle is set to -1 and why it does not
  when maxIdle is set to 0. But if I guess well, 0 closes usable connections,
  is that it?! So it's not optimized.
 

 You enabled whenExhaustedAction=1; this means that the pool blocks till
 a connection is returned. Max active connections is set to 5, which isn't
 much for a multithreaded application. Maybe your application causes a
 deadlock: five different broker instances exhaust the pool and another
 broker instance tries to look up a connection, but the other broker
 instances are still not closed.
 What happens when you set whenExhaustedAction=0?

 I think if you set maxIdle=0 only one connection or no connections
 will remain in the pool (maybe I'm wrong, I didn't check the commons-pool
 sources) and all access is serialized by this.

 regards,
 Armin


  Here's my well working conn setup
 
  Regards.
 
 
 <connection-pool
     maxActive="5"
     maxIdle="0"
     minIdle="2"
     maxWait="1000"
     whenExhaustedAction="1"

     validationQuery="SELECT CURRENT_TIMESTAMP"
     testOnBorrow="true"
     testOnReturn="false"
     testWhileIdle="true"
     timeBetweenEvictionRunsMillis="6"
     numTestsPerEvictionRun="2"
     minEvictableIdleTimeMillis="180"
     removeAbandoned="false"
     removeAbandonedTimeout="300"
     logAbandoned="true">

     <!-- Set fetchSize to 0 to use driver's default. -->
     <attribute attribute-name="fetchSize" attribute-value="0"/>

     <!-- Attributes with name prefix "jdbc." are passed directly to
          the JDBC driver. -->
     <!-- Example setting (used by Oracle driver when Statement
          batching is enabled) -->
     <attribute attribute-name="jdbc.defaultBatchValue" attribute-value="5"/>

     <!-- Attributes determining if ConnectionFactoryDBCPImpl
          should also pool PreparedStatement. This is programmatically
          disabled when using platform="Oracle9i" since Oracle statement
          caching will conflict with DBCP ObjectPool-based PreparedStatement
          caching (ie setting true here has no effect for the Oracle9i
          platform). -->
     <attribute attribute-name="dbcp.poolPreparedStatements" attribute-value="false"/>
     <attribute attribute-name="dbcp.maxOpenPreparedStatements" attribute-value="10"/>
     <!-- Attribute determining if the Commons DBCP connection wrapper
          will allow access to the underlying concrete Connection instance
          from the JDBC-driver (normally this is not allowed, like in
          J2EE-containers using wrappers). -->
     <attribute attribute-name="dbcp.accessToUnderlyingConnectionAllowed" attribute-value="false"/>
 </connection-pool>
 
 
 
 
 
 
 
  On 5/6/06, Armin Waibel [EMAIL PROTECTED] wrote:
 
  Hi Bruno,
 
  Bruno CROS wrote:
   Hi Armin,
  
   In fact, I looked at the DB connections in the DB console. It was a bad
   idea, because connections disappear!! I looked with netstat -a, and I saw
   several sockets/connections...

   Well, I was experiencing some freezes with these connections with a pool
   setup where maxActive was set to -1. I didn't find any documentation on
   that value.
 
  Both ConnectionFactoryPooledImpl and ConnectionFactoryDBCPImpl use
  commons-pool to manage connections. There you can find details about
 the
  settings
  http://jakarta.apache.org/commons/pool/apidocs/index.html
 
  I would recommend setting maxActive connections at most to the maximum
  number of connections provided by your database server.
 
 
   What I know is that, when I put 0 (no limit), it seems there is no
  more freeze.
  
 
  I think there is a typo in the documentation. For an unlimited connection
  pool you have to set -1.
 
  
http://jakarta.apache.org/commons/pool/apidocs/org/apache/commons/pool/impl/GenericObjectPool.html

 
  Will fix this in the next release.
 
  In your jdbc

Re: Only one PB from second database

2006-05-11 Thread Bruno CROS

Yep, I'm afraid that _pool.size() is always > -1 (the maxIdle), so
shouldDestroy is true, and nothing is added back to the pool.

Maybe it's me. Can someone confirm this?



    private void addObjectToPool(Object obj, boolean decrementNumActive)
    throws Exception {
        boolean success = true;
        if(_testOnReturn && !(_factory.validateObject(obj))) {
            success = false;
        } else {
            try {
                _factory.passivateObject(obj);
            } catch(Exception e) {
                success = false;
            }
        }

        boolean shouldDestroy = !success;

        if (decrementNumActive) {
            _numActive--;
        }
        if((_maxIdle >= 0) && (_pool.size() >= _maxIdle)) {
            shouldDestroy = true;
        } else if(success) {
            _pool.addLast(new ObjectTimestampPair(obj));
        }
        notifyAll(); // _numActive has changed

        if(shouldDestroy) {
            try {
                _factory.destroyObject(obj);
            } catch(Exception e) {
                // ignored
            }
        }
    }


On 5/11/06, Armin Waibel [EMAIL PROTECTED] wrote:




Bruno CROS wrote:
 when MaxIdle is set to 0, it works well, and the 5 maxActive are
 sufficient.
 No freeze at all.

it's a moot point whether only one connection or five connections are
used when maxIdle is 0. I think that maxIdle=0 will immediately close
returned connections.



 The whenExhaustedAction=1 (block) behaviour is exactly what I want: no
 error. And it works with maxIdle set to 0.

 I don't see why having no connections remaining in the pool would lead to
 serialized access. Deadlock is a good assumption, it looks like that, but
 if the code only reads, I don't know at all how to produce a deadlock.

A deadlock caused by the pool and dependent broker instances (one
broker waits for a result of another one, or a broker leak), not a
database deadlock.

 I'm going to look at the commons-pool settings, if maxIdle is a
 commons-pool setting.

yep, maxIdle is a commons-pool setting. I would recommend checking the
source code.
http://jakarta.apache.org/commons/pool/cvs-usage.html
(currently the Apache SVN is down due to an SVN-server problem last night,
so download the sources instead).

regards,
Armin


 regards





 On 5/11/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Bruno CROS wrote:
  Hi Armin,
 
  autoCommit set to 1 does not avoid the freeze. The fact is that the
  database is only used for reading, so rollbacks (by disconnecting) will
  never be long. autoCommit is however set to 1 now. Thanks for the advice.

  I don't know why it freezes when maxIdle is set to -1 and why it does not
  when maxIdle is set to 0. But if I guess well, 0 closes usable connections,
  is that it?! So it's not optimized.
 

 You enabled whenExhaustedAction=1; this means that the pool blocks till
 a connection is returned. Max active connections is set to 5, which isn't
 much for a multithreaded application. Maybe your application causes a
 deadlock: five different broker instances exhaust the pool and another
 broker instance tries to look up a connection, but the other broker
 instances are still not closed.
 What happens when you set whenExhaustedAction=0?

 I think if you set maxIdle=0 only one connection or no connections
 will remain in the pool (maybe I'm wrong, I didn't check the commons-pool
 sources) and all access is serialized by this.

 regards,
 Armin


  Here's my well working conn setup
 
  Regards.
 
 
 <connection-pool
     maxActive="5"
     maxIdle="0"
     minIdle="2"
     maxWait="1000"
     whenExhaustedAction="1"

     validationQuery="SELECT CURRENT_TIMESTAMP"
     testOnBorrow="true"
     testOnReturn="false"
     testWhileIdle="true"
     timeBetweenEvictionRunsMillis="6"
     numTestsPerEvictionRun="2"
     minEvictableIdleTimeMillis="180"
     removeAbandoned="false"
     removeAbandonedTimeout="300"
     logAbandoned="true">

     <!-- Set fetchSize to 0 to use driver's default. -->
     <attribute attribute-name="fetchSize" attribute-value="0"/>

     <!-- Attributes with name prefix "jdbc." are passed directly to
          the JDBC driver. -->
     <!-- Example setting (used by Oracle driver when Statement
          batching is enabled) -->
     <attribute attribute-name="jdbc.defaultBatchValue" attribute-value="5"/>

     <!-- Attributes determining if ConnectionFactoryDBCPImpl
          should also pool PreparedStatement. This is programmatically
          disabled when using platform="Oracle9i" since Oracle statement
          caching will conflict with DBCP ObjectPool-based PreparedStatement
          caching (ie setting true here has no effect for the Oracle9i
          platform). -->
     <attribute attribute-name="dbcp.poolPreparedStatements" attribute-value="false"/>
     <attribute attribute-name="dbcp.maxOpenPreparedStatements" attribute-value="10"/>
     <!-- Attribute determining

Re: Only one PB from second database

2006-05-11 Thread Bruno CROS

Oops, it's me.

Sorry


On 5/11/06, Bruno CROS [EMAIL PROTECTED] wrote:


 Yep, I'm afraid that _pool.size() is always > -1 (the maxIdle), so
shouldDestroy is true, and nothing is added back to the pool.

Maybe it's me. Can someone confirm this?



private void addObjectToPool(Object obj, boolean decrementNumActive)
throws Exception {
    boolean success = true;
    if(_testOnReturn && !(_factory.validateObject(obj))) {
        success = false;
    } else {
        try {
            _factory.passivateObject(obj);
        } catch(Exception e) {
            success = false;
        }
    }

    boolean shouldDestroy = !success;

    if (decrementNumActive) {
        _numActive--;
    }
    if((_maxIdle >= 0) && (_pool.size() >= _maxIdle)) {
        shouldDestroy = true;
    } else if(success) {
        _pool.addLast(new ObjectTimestampPair(obj));
    }
    notifyAll(); // _numActive has changed

    if(shouldDestroy) {
        try {
            _factory.destroyObject(obj);
        } catch(Exception e) {
            // ignored
        }
    }
}


On 5/11/06, Armin Waibel [EMAIL PROTECTED] wrote:



 Bruno CROS wrote:
  when MaxIdle is set to 0, it works well, and the 5 maxActive are
  sufficient.
  No freeze at all.

 it's a moot point whether only one connection or five connections are
 used when maxIdle is 0. I think that maxIdle=0 will immediately close
 returned connections.


 
  The whenExhaustedAction=1 (block) behaviour is exactly what I want: no
  error. And it works with maxIdle set to 0.

  I don't see why having no connections remaining in the pool would lead to
  serialized access. Deadlock is a good assumption, it looks like that, but
  if the code only reads, I don't know at all how to produce a deadlock.

 A deadlock caused by the pool and dependent broker instances (one
 broker waits for a result of another one, or a broker leak), not a
 database deadlock.

  I'm going to look at the commons-pool settings, if maxIdle is a
  commons-pool setting.

 yep, maxIdle is a commons-pool setting. I would recommend checking the
 source code.
 http://jakarta.apache.org/commons/pool/cvs-usage.html
 (currently the Apache SVN is down due to an SVN-server problem last night,
 so download the sources instead).

 regards,
 Armin

 
  regards
 
 
 
 
 
  On 5/11/06, Armin Waibel [EMAIL PROTECTED] wrote:
 
  Bruno CROS wrote:
   Hi Armin,
  
   autoCommit set to 1 does not avoid the freeze. The fact is that the
   database is only used for reading, so rollbacks (by disconnecting) will
   never be long. autoCommit is however set to 1 now. Thanks for the advice.

   I don't know why it freezes when maxIdle is set to -1 and why it does not
   when maxIdle is set to 0. But if I guess well, 0 closes usable
   connections, is that it?! So it's not optimized.
  
 
  You enabled whenExhaustedAction=1; this means that the pool blocks till
  a connection is returned. Max active connections is set to 5, which isn't
  much for a multithreaded application. Maybe your application causes a
  deadlock: five different broker instances exhaust the pool and another
  broker instance tries to look up a connection, but the other broker
  instances are still not closed.
  What happens when you set whenExhaustedAction=0?

  I think if you set maxIdle=0 only one connection or no connections
  will remain in the pool (maybe I'm wrong, I didn't check the commons-pool
  sources) and all access is serialized by this.
 
  regards,
  Armin
 
 
   Here's my well working conn setup
  
   Regards.
  
  
  <connection-pool
      maxActive="5"
      maxIdle="0"
      minIdle="2"
      maxWait="1000"
      whenExhaustedAction="1"

      validationQuery="SELECT CURRENT_TIMESTAMP"
      testOnBorrow="true"
      testOnReturn="false"
      testWhileIdle="true"
      timeBetweenEvictionRunsMillis="6"
      numTestsPerEvictionRun="2"
      minEvictableIdleTimeMillis="180"
      removeAbandoned="false"
      removeAbandonedTimeout="300"
      logAbandoned="true">

      <!-- Set fetchSize to 0 to use driver's default. -->
      <attribute attribute-name="fetchSize" attribute-value="0"/>

      <!-- Attributes with name prefix "jdbc." are passed directly to
           the JDBC driver. -->
      <!-- Example setting (used by Oracle driver when Statement
           batching is enabled) -->
      <attribute attribute-name="jdbc.defaultBatchValue" attribute-value="5"/>

      <!-- Attributes determining if ConnectionFactoryDBCPImpl
           should also pool PreparedStatement. This is programmatically
           disabled when using platform="Oracle9i" since Oracle statement
           caching will conflict with DBCP ObjectPool-based PreparedStatement
           caching (ie setting true here has no effect for the Oracle9i
           platform

Re: Avoiding Persistence Broker Leak

2006-05-06 Thread Bruno CROS

On 5/6/06, Edson Carlos Ericksson Richter 
[EMAIL PROTECTED] wrote:


I've used a similar solution, but when I get a broker, I first check if
a broker was already taken for this thread (and a usage counter is
incremented).
Then, when I start one operation, I just check if there is not already a
transaction. If there is no transaction, then I open one. Finally, when
I ask to close the broker, the usage counter for the current thread is
decremented, and if it's zero, then the broker is really closed.



Oh, well, I did something like that with a personal getPersistenceBroker()
method and a closePersistenceBrokerIfNeeded(PersistenceBroker broker) method.
Those methods are called to execute read operations. getPersistenceBroker()
looks for a current transaction; if one is found, the broker of the
transaction is returned, else a new broker is created (*).
closePersistenceBrokerIfNeeded(broker) looks for a current transaction too;
if one is found it does not close anything, else it closes the broker passed
as parameter (that's because the broker of a transaction will be closed by a
commit() or an abort()).

I think we have done the same kind of system.
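A minimal sketch of such a helper pair, assuming an ODMG Implementation whose
currentTransaction() returns null when no transaction is open (behaviour can
vary, so check your version); BrokerHelper and its methods are application
code, not OJB API:

import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.PersistenceBrokerFactory;
import org.odmg.Implementation;
import org.odmg.Transaction;

public final class BrokerHelper {
    private static Implementation odmg; // initialized once at startup

    // Returns the broker bound to the current ODMG transaction if one is
    // running, else a fresh broker from the pool (the caller must close it).
    public static PersistenceBroker getPersistenceBroker() {
        Transaction tx = odmg.currentTransaction();
        if (tx != null) {
            // assumption: OJB's TransactionImpl exposes its broker
            return ((org.apache.ojb.odmg.TransactionImpl) tx).getBroker();
        }
        return PersistenceBrokerFactory.defaultPersistenceBroker();
    }

    // Closes the broker only when no transaction is running; a transaction's
    // broker is released later by commit() or abort().
    public static void closePersistenceBrokerIfNeeded(PersistenceBroker broker) {
        if (broker != null && odmg.currentTransaction() == null) {
            broker.close();
        }
    }
}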


This technique allows:


- Cross-object transactions in the same thread
- Avoiding beginning more than one transaction per broker
- Obligating everyone to always open one transaction, which guarantees
standard behaviour independent of developers' personal preferences (important
for groups). So, I can reuse a component written by another programmer
because I know that if he executes some operation in the database, I'll be
in the same transaction.
- When no more objects are using a broker, the broker is automatically
closed.



(*) What can differ is that my read-operation methods borrow (or create) and
give back the broker. Within a running transaction, the same broker is used
for everything (to work with not-yet-written objects), and if there is no
transaction, objects are read with any newly created broker. I just
understood the advantage of using only one broker for several stacked read
calls: the cache. Maybe I will change to your technique, but I have too many
getDefaultBroker calls left in the code at this moment.

Summing up, all my code ends in one class that is responsible for taking a
broker, starting a transaction (if needed), executing the operation, and
closing the broker (if there are no more objects using it, of course).

When I execute one operation, I delegate to the Action method to start the
transaction, commit or rollback. So, every action in my code has the
following structure:




Yep, programmers must know how to count now ;-). Arrfff...

public void actionPerformed(ActionEvent evt) {
    try {
        // starts a transaction and increments usage (to 1) for this thread
        MyPersonBrokerUC.beginTransaction();
        // detects insert vs update (increments usage to 2), does the job
        // (returns the broker and decrements to 1 again); uses the same
        // broker and transaction started above
        MyPersonBrokerUC.store(somePerson);
        // runs under the same transaction (usage to 2, execute, back to 1);
        // I don't even need to know if there is a bunch of other calls
        // inside this method: all will run under the same transaction
        OtherPersonUC.dealWithNewPersons(somePerson);
        // commits the transaction and decrements usage (to 0, so the broker
        // is really closed)
        MyPersonBrokerUC.commitTransaction();
    } catch (Exception e) {
        // rolls back the transaction and decrements usage (to 0, so the
        // broker is really closed)
        MyPersonBrokerUC.rollbackTransaction();
        DebugUtil.handleException(e);
    }
}
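A sketch of the per-thread usage counter behind such a helper (MyPersonBrokerUC
is application code; this ThreadLocal-based variant is an assumption, not
OJB API):

import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.PersistenceBrokerFactory;

public final class BrokerUsage {
    private static final ThreadLocal holder = new ThreadLocal(); // holds an Entry

    private static final class Entry {
        PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
        int count;
    }

    // Borrow the thread's broker, creating it on first use.
    public static PersistenceBroker acquire() {
        Entry e = (Entry) holder.get();
        if (e == null) {
            e = new Entry();
            holder.set(e);
        }
        e.count++;
        return e.broker;
    }

    // Give the broker back; really close it when the last user releases it.
    public static void release() {
        Entry e = (Entry) holder.get();
        if (e != null && --e.count == 0) {
            e.broker.close();
            holder.set(null);
        }
    }
}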

UC (use case) classes never begin, commit or rollback: that's the Action's
task. Because a task always executes under a single thread, there are no
problems (if you wish to execute an async operation, just start the
transaction inside the new thread). It also works for MVC web development
(a servlet or a JSP will be the action in this case).



Generally, I avoid manipulating O/R-mapped objects in JSPs.

Thanks to the try...catch structure, there is no way to leave a broker
open, nor a transaction open.




That's a very good assurance.

Only one con for this: when debugging, don't try to fix and continue,
because you will get broken brokers and transactions, leading to deadlock
and ultimately to a stop and restart.



The same problem occurs with other techniques.

OT: hmm, trying to explain this in words alone makes it appear really
complicated, but in fact it isn't. Maybe sometime I'll get some spare time
to create some nice sequence and collaboration diagrams to explain this.



Thanks a lot for this. Checking many methods without such a technique takes
very long... OJB developments are shorter!!


Best regards,




Best regards.


Edson Richter




Bruno CROS escreveu:
  Hi Armin,

 Thanks for the idea to detect broker leaks. It will show some badly coded
 methods, even though they have been checked: commit never reached, broker
 not closed... no commit/abort!!! (found one, arghh)

 Meanwhile, there were still some open brokers detected. When I looked into
 the code, I found some old methods that were reading objects with a
 dedicated transaction. I know now that this transaction

Only one PB from second database

2006-05-05 Thread Bruno CROS

Hi,

 I have a strange behaviour with the second database I use. It seems that
using broker = PersistenceBrokerFactory.createPersistenceBroker(rushDb);
always returns the same broker/connection.

My connection pool is set up so that it should keep 2 idle connections
available, but that never occurred. Still only one.

How can I use several connections in this case?

Note that this database is not used to update data. No transactions are
used on it.


Thanks.


Here's my connection setup.

    <jdbc-connection-descriptor
        jcd-alias="rushDb"
        default-connection="false"
        platform="MsSQLServer"
        jdbc-level="2.0"
        driver="com.microsoft.jdbc.sqlserver.SQLServerDriver"
        protocol="JDBC"
        subprotocol="microsoft:sqlserver"
        dbalias="//xxx.x.x.x:1433"
        username=""
        password=""
        batch-mode="true"
        useAutoCommit="0"
        ignoreAutoCommitExceptions="true"


and pool setup :

    maxActive="5"
    maxIdle="-1"
    minIdle="2"
    maxWait="5000"
    whenExhaustedAction="2"

    validationQuery="SELECT CURRENT_TIMESTAMP"
    testOnBorrow="true"
    testOnReturn="false"
    testWhileIdle="true"
    timeBetweenEvictionRunsMillis="6"
    numTestsPerEvictionRun="2"
    minEvictableIdleTimeMillis="180"
    removeAbandoned="false"
    removeAbandonedTimeout="300"
    logAbandoned="true"


Re: Only one PB from second database

2006-05-05 Thread Bruno CROS

Hi Armin,

In fact, I looked at the DB connections in the DB console. It was a bad
idea, because connections disappear!! I looked with netstat -a, and I saw
several sockets/connections...

Well, I was experiencing some freezes with these connections with a pool
setup where maxActive was set to -1. I didn't find any documentation on that
value. What I know is that, when I put 0 (no limit), it seems there is no
more freeze.

Can you enlighten me about that?

Thanks.

Regards.



On 5/5/06, Armin Waibel [EMAIL PROTECTED] wrote:


Hi Bruno,

Bruno CROS wrote:
 Hi,

  I have a strange behaviour with the second database I use. It seems that
 using broker = PersistenceBrokerFactory.createPersistenceBroker(rushDb);
 always returns the same broker/connection.

 My connection pool is set up so that it should keep 2 idle connections
 available, but that never occurred. Still only one.

 How can I use several connections in this case?

 Note that this database is not used to update data. No transactions are
 used on it.


how do you test this behavior? Please set up a test and look up two PB
instances at the same time:

broker_A = PersistenceBrokerFactory.createPersistenceBroker(rushDb);
broker_B = PersistenceBrokerFactory.createPersistenceBroker(rushDb);

Are A and B really the same broker instances? If you execute a query on
both broker instances (don't close the brokers afterwards) and then look up
the Connection from A and B - are the connections the same?

regards,
Armin


 Thanks.


 Here's my connection setup.

 <jdbc-connection-descriptor
     jcd-alias="rushDb"
     default-connection="false"
     platform="MsSQLServer"
     jdbc-level="2.0"
     driver="com.microsoft.jdbc.sqlserver.SQLServerDriver"
     protocol="JDBC"
     subprotocol="microsoft:sqlserver"
     dbalias="//xxx.x.x.x:1433"
     username=""
     password=""
     batch-mode="true"
     useAutoCommit="0"
     ignoreAutoCommitExceptions="true"
 

 and pool setup :

 maxActive="5"
 maxIdle="-1"
 minIdle="2"
 maxWait="5000"
 whenExhaustedAction="2"

 validationQuery="SELECT CURRENT_TIMESTAMP"
 testOnBorrow="true"
 testOnReturn="false"
 testWhileIdle="true"
 timeBetweenEvictionRunsMillis="6"
 numTestsPerEvictionRun="2"
 minEvictableIdleTimeMillis="180"
 removeAbandoned="false"
 removeAbandonedTimeout="300"
 logAbandoned="true"


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Avoiding Persistence Broker Leak

2006-05-05 Thread Bruno CROS

 Hi Armin,

Thanks for the idea to detect broker leaks. It will show some badly coded
methods, even though they have been checked: commit never reached, broker not
closed... no commit/abort!!! (found one, arghh)

Meanwhile, there were still some open brokers detected. When I looked into
the code, I found some old methods that were reading objects with a dedicated
transaction. I know now that this transaction is not necessary, and I know
now it's even unwanted! It seems to burn connections/brokers.

So I added a little check to my getTransaction() method. Now it searches
for a current transaction and, if one is found, throws an "Already open
transaction" exception. This lets us detect the standalone update methods
(opening and closing a transaction) which are called inside an already open
transaction (as the old bad read methods were called by update methods).
Everything is OK now.

Maybe it can be a development setting to avoid broker leaks due to
double-opening a transaction (with the same broker).
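A minimal sketch of such a guard, assuming an ODMG Implementation whose
currentTransaction() returns null when no transaction is open (some
implementations throw instead, so adapt the check); getTransaction() here is
application code, not OJB API:

import org.odmg.Implementation;
import org.odmg.Transaction;

// Application-level guard: refuse to open a nested transaction on the same
// thread, so double-open broker leaks are caught during development.
public static Transaction getTransaction(Implementation odmg) {
    if (odmg.currentTransaction() != null) {
        throw new IllegalStateException("Already open transaction");
    }
    Transaction tx = odmg.newTransaction();
    tx.begin();
    return tx;
}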

Thanks a lot. Again.

Regards


Re: standalone getDefaultBroker auto-retrieve

2006-04-28 Thread Bruno CROS
Yep. I did something like that... with an immediately caught exception!! It
works...

I added a method to PersistenceBrokerFactory to artificially fire all the
finalize methods. This method calls releaseAllInstances and System.gc(); the
finalizers are then all fired. It works well.
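A minimal sketch of that helper; fireFinalizers is a hypothetical name, and
PersistenceBrokerFactory.releaseAllInstances() is the call the post refers to:

// Force pending PersistenceBroker finalizers to run, so unclosed brokers
// show up through the leak detection in finalize().
public static void fireFinalizers() {
    PersistenceBrokerFactory.releaseAllInstances(); // drop pooled instances
    System.gc();              // request collection of unreachable brokers
    System.runFinalization(); // ask the JVM to run pending finalize() methods
}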

Will release 1.0.5 come out soon? I need the release with the circular
references update.

Else, is there a downloadable nightly build?

Thanks a lot.

Regards

On 4/28/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 below you can find #setClosed(...) and #finalize() of the modified PB
 source (will check this in in a few days).

 regards,
 Armin

 /**
 * A stack trace of this broker user.
 * Used when broker leak detection is enabled.
 */
 protected String brokerStackTrace;

 public void setClosed(boolean closed)
 {
     // When a PB instance is looked up from the pool, setClosed(false)
     // is called before returning the instance from the pool; in this
     // case OJB has to refresh the instance.
     if(!closed)
     {
         refresh();
         // configurable boolean
         if(brokerLeakDetection)
         {
             brokerStackTrace = ExceptionUtils.getFullStackTrace(
                     new Exception("PersistenceBroker caller stack"));
         }
     }
     this.isClosed = closed;
 }


 protected void finalize()
 {
     try
     {
         super.finalize();
         // if not closed == broker leak detected
         if (!isClosed)
         {
             String msg = "Garbage collection: Unclosed"
                 + " PersistenceBroker instance detected, check code for PB leaks.";
             if(brokerLeakDetection)
             {
                 logger.error(msg + " Broker caller stack is: "
                     + SystemUtils.LINE_SEPARATOR + brokerStackTrace);
             }
             else
             {
                 logger.warn(msg);
             }
             close();
         }
     }
     catch(Throwable ignore)
     {
         // ignore
     }
 }


 Bruno CROS wrote:
  Hi Armin, thanks for the solution, but I'm not sure I got it all!!

  Can you confirm the solution?

  Well, I understand I have to override the setClosed method to catch the
 moment a broker opens (is borrowed), and throw an exception to save a stack
 trace with a catch. So the PB remembers its last loan. That's it?

  When finalize is run (does that mean the application has ended?), if a
 broker is not closed (how do I know that?), I have to retrieve the last
 stored loan stack trace, log it and (why not) throw a
 BrokerNotClosedException. That's it?

  Does the abandoned mechanism tell the advance stack trace? That would be
 so great.

  Imagine: broker xxx, time-out 120 s, borrowed by --- stacktrace ---

  I will try it tomorrow, thanks a lot, Armin.

  Regards
 
 
 
  On 4/25/06, Armin Waibel [EMAIL PROTECTED] wrote:
  Bruno CROS wrote:
  Using proxies?

  Oh, I see now. The proxy opens and closes the broker properly, that's it?

  I didn't think of that. Tsss...

  I'm sorry.

  It seems that I've been looking for my lost brokers for too long.

  I guess I'm going to check all my DAOs and all my transactions again (or
  maybe someone else will now ;-) )
  You could extend or modify the PersistenceBrokerImpl instance and
  override the method
  public void setClosed(boolean closed)
  If set to 'false' (the PB instance was obtained from the pool), throw and
  catch an exception to trace the current method caller.

  Override the method
  protected void finalize()
  If the current broker instance is not closed, print the stack trace from
  the setClosed method.

  Then it will be easy to find broker leaks. I will add a similar behavior
  for the next release.
 
  regards,
  Armin
 
 
  Thanks again.
 
  Bye.
 
 
 
 
 
  On 4/25/06, Armin Waibel  [EMAIL PROTECTED] wrote:
  Hi Bruno,
 
  Bruno CROS wrote:
  Hi Armin,
 
  Here's is a schematic example :
 
  Consider a service method that returns an object ProductBean.
  ProductBean
  is not O/R mapped but the  reading calls a second method that read
 O/R
  mapped Product object. Then, relations are followed, to find
  description
  of
  his Category (Consider that a product have 1 Category.
 
  2nd method looks like that (classic):
 
  public static Product getProduct(String id) {
      PersistenceBroker broker = null;
      try {
          broker = PersistenceBrokerFactory.defaultPersistenceBroker();
          Identity oid = broker.serviceIdentity().buildIdentity(
                  Product.class, id);
          Product product = (Product) broker.getObjectByIdentity(oid);
          return product;
      } finally {
          if (broker != null) {
              broker.close();
          }
      }
  }
 
  First method looks like this:

  public static ProductBean getProductBean(String id)
  { Product p = getProduct(id); // 2nd method call
      if (p != null)
      { ProductBean product = new ProductBean();
          product.setDescription(p.getDescription());
          product.setID(p.getId());
          // and here's the O/R recall
          product.setCategoryDescription( p.getCategory

Re: standalone getDefaultBroker auto-retrieve

2006-04-28 Thread Bruno CROS
  Hi Armin,

The detection mechanism works, but I have a strange behaviour: I collected
some open-broker detections. So I had a look into the code, and I saw my
close call in a finally clause. This is correct.

So, I don't doubt Java's finally, but rather the implementation of my
closeBroker method. In fact, my closeBroker is an implementation that closes
the broker only if there is no transaction running on the thread. When a
transaction is detected, some methods use the same broker as the transaction
does; this is to work with the objects handled by the transaction. So when
there is no transaction, the default broker is used, and closed at the end,
in the finally clause.

So the method does a broker.close() only if
getImplementation.getDatabase(...).getCurrentTransaction is null (and the
broker is not null, of course).

So, the question is: when is currentTransaction not null (meaning that the
broker will not be closed by the method) even though there is no open
transaction?

I have no idea at all. I will have a look on Tuesday.

Thanks for your help. Again.






On 4/28/06, Bruno CROS [EMAIL PROTECTED] wrote:

  Yep. I did something like that... with an immediately caught exception!!
 It works...

 I added a method to PersistenceBrokerFactory to artificially fire all the
 finalize methods. This method calls releaseAllInstances and System.gc();
 the finalizers are then all fired. It works well.

 Will release 1.0.5 come out soon? I need the release with the circular
 references update.

 Else, is there a downloadable nightly build?

 Thanks a lot.

 Regards

  On 4/28/06, Armin Waibel [EMAIL PROTECTED] wrote:
 
  Hi Bruno,
 
  below you can find #setClosed(...) and #finalize() of the modified PB
  source (will check in this in a few days).
 
  regards,
  Armin
 
  /**
  * A stack trace of this broker user.
  * Used when broker leak detection is enabled.
  */
  protected String brokerStackTrace;
 
  public void setClosed(boolean closed)
  {
      // When a PB instance is looked up from the pool, setClosed(false)
      // is called before returning the instance from the pool; in this
      // case OJB has to refresh the instance.
      if(!closed)
      {
          refresh();
          // configurable boolean
          if(brokerLeakDetection)
          {
              brokerStackTrace = ExceptionUtils.getFullStackTrace(
                      new Exception("PersistenceBroker caller stack"));
          }
      }
      this.isClosed = closed;
  }


  protected void finalize()
  {
      try
      {
          super.finalize();
          // if not closed == broker leak detected
          if (!isClosed)
          {
              String msg = "Garbage collection: Unclosed"
                  + " PersistenceBroker instance detected, check code for PB leaks.";
              if(brokerLeakDetection)
              {
                  logger.error(msg + " Broker caller stack is: "
                      + SystemUtils.LINE_SEPARATOR + brokerStackTrace);
              }
              else
              {
                  logger.warn(msg);
              }
              close();
          }
      }
      catch(Throwable ignore)
      {
          // ignore
      }
  }
 
 
  Bruno CROS wrote:
   Hi Armin, thanks for the solution, but I'm not sure I got it all!!

   Can you confirm the solution?

   Well, I understand I have to override the setClosed method to catch the
  moment a broker opens (is borrowed), and throw an exception to save a
  stack trace with a catch. So the PB remembers its last loan. That's it?

   When finalize is run (does that mean the application has ended?), if a
  broker is not closed (how do I know that?), I have to retrieve the last
  stored loan stack trace, log it and (why not) throw a
  BrokerNotClosedException. That's it?

   Does the abandoned mechanism tell the advance stack trace? That would
  be so great.

   Imagine: broker xxx, time-out 120 s, borrowed by --- stacktrace ---

   I will try it tomorrow, thanks a lot, Armin.

   Regards
  
  
  
   On 4/25/06, Armin Waibel [EMAIL PROTECTED] wrote:
   Bruno CROS wrote:
   Using proxies?

   Oh, I see now. The proxy opens and closes the broker properly, that's it?

   I didn't think of that. Tsss...

   I'm sorry.

   It seems that I've been looking for my lost brokers for too long.

   I guess I'm going to check all my DAOs and all my transactions again
   (or maybe someone else will now ;-) )
   You could extend or modify the PersistenceBrokerImpl instance and
   override the method
   public void setClosed(boolean closed)
   If set to 'false' (the PB instance was obtained from the pool), throw
   and catch an exception to trace the current method caller.

   Override the method
   protected void finalize()
   If the current broker instance is not closed, print the stack trace
   from the setClosed method.

   Then it will be easy to find broker leaks. I will add a similar
   behavior for the next release.
  
   regards,
   Armin
  
  
   Thanks again.
  
   Bye.
  
  
  
  
  
   On 4/25/06, Armin Waibel  [EMAIL PROTECTED] wrote:
   Hi Bruno

Re: standalone getDefaultBroker auto-retrieve

2006-04-25 Thread Bruno CROS
Using proxies?

Oh, I see now. The proxy opens and closes the broker properly, that's it?

I didn't think of that. Tsss...

I'm sorry.

It seems that I've been looking for my lost brokers for too long.

I guess I'm going to check all my DAOs and all my transactions again (or
maybe someone else will now ;-) )

Thanks again.

Bye.





On 4/25/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 Bruno CROS wrote:
  Hi Armin,
 
  Here is a schematic example:

  Consider a service method that returns a ProductBean object. ProductBean
  is not O/R mapped, but the reading calls a second method that reads the
  O/R-mapped Product object. Then relations are followed, to find the
  description of its Category (consider that a product has 1 Category).

  The 2nd method looks like this (classic):
 
  public static Product getProduct(String id) {
      PersistenceBroker broker = null;
      try {
          broker = PersistenceBrokerFactory.defaultPersistenceBroker();
          Identity oid = broker.serviceIdentity().buildIdentity(
                  Product.class, id);
          Product product = (Product) broker.getObjectByIdentity(oid);
          return product;
      } finally {
          if (broker != null) {
              broker.close();
          }
      }
  }
 
  First method looks like this:

  public static ProductBean getProductBean(String id)
  { Product p = getProduct(id); // 2nd method call
      if (p != null)
      { ProductBean product = new ProductBean();
          product.setDescription(p.getDescription());
          product.setID(p.getId());
          // and here's the O/R recall
          product.setCategoryDescription( p.getCategory().getDescription() );
          // now, a broker is open... how does it get closed?

 Sorry, I didn't get it. The Category object associated with Product p
 was materialized too when getProduct(id) was called (auto-retrieve is
 true). Why would p.getCategory() open a new PB instance?

 regards,
 Armin

  return product;
}
return null;
  }
 
  I tried to wrap the code of the first method with a tx.open() and
  tx.abort(), to be sure that the broker is released at the end with the
  abort().
 
 
  thanks
 
  regards.
 
 
 
  On 4/25/06, Armin Waibel [EMAIL PROTECTED] wrote:
  Hi Bruno,
 
  Bruno CROS wrote:
   Hi,
 
   It seems that objects read with a broker can read related objects via
   auto-retrieve set to true despite the broker being closed.
  I can't see what you mean. When OJB materializes an object with related
  objects and auto-retrieve is 'true', the fully materialized object is
  returned. Thus it's not possible to close the PB instance during object
  materialization (except by an illegal concurrent thread).

  Or do you mean materialization of proxy references? In this case OJB
  tries to look up the current PB instance and, if none is found, internally
  a PB instance is used for materialization and immediately closed after use.

  Could you please describe it in more detail (with example code)?
 
  regards,
  Armin
 
  I suppose that a getDefaultBroker is done, and the borrowed broker is
  never closed (because there is no reference to it).
  Note: this occurred because the application has been written with
  several layers, one for DAO, one for services, one for UI.

  How can I avoid auto-retrieve readings taking brokers from the PBPool
  by themselves?
 
  Thanks.
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: standalone getDefaultBroker auto-retrieve

2006-04-25 Thread Bruno CROS
Hi Armin, thanks for the solution, but I'm not sure I got it all!!

Can you confirm the solution?

Well, I understand I have to override the setClosed method to catch the
moment a broker opens (is borrowed), and throw an exception to save a stack
trace with a catch. So the PB remembers its last loan. That's it?

When finalize is run (does that mean the application has ended?), if a broker
is not closed (how do I know that?), I have to retrieve the last stored loan
stack trace, log it and (why not) throw a BrokerNotClosedException. That's
it?

Does the abandoned mechanism tell the advance stack trace? That would be so
great.

Imagine: broker xxx, time-out 120 s, borrowed by --- stacktrace ---

I will try it tomorrow, thanks a lot, Armin.

Regards



On 4/25/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Bruno CROS wrote:
  Using proxies?

  Oh, I see now. The proxy opens and closes the broker properly, that's it?

  I didn't think of that. Tsss...

  I'm sorry.

  It seems that I've been looking for my lost brokers for too long.

  I guess I'm going to check all my DAOs and all my transactions again
  (or maybe someone else will now ;-) )

 You could extend or modify the PersistenceBrokerImpl instance and
 override the method
 public void setClosed(boolean closed)
 If set to 'false' (the PB instance was obtained from the pool), throw and
 catch an exception to trace the current method caller.

 Override the method
 protected void finalize()
 If the current broker instance is not closed, print the stack trace from
 the setClosed method.

 Then it will be easy to find broker leaks. I will add a similar behavior
 for the next release.

 regards,
 Armin


 
  Thanks again.
 
  Bye.
 
 
 
 
 
  On 4/25/06, Armin Waibel [EMAIL PROTECTED] wrote:
  Hi Bruno,
 
  Bruno CROS wrote:
  Hi Armin,
 
  Here is a schematic example:

  Consider a service method that returns a ProductBean object.
  ProductBean is not O/R mapped, but the reading calls a second method
  that reads the O/R-mapped Product object. Then relations are followed,
  to find the description of its Category (consider that a product has
  1 Category).
 
  2nd method looks like that (classic):
 
  public static Product getProduct(String id) {
      PersistenceBroker broker = null;
      try {
          broker = PersistenceBrokerFactory.defaultPersistenceBroker();
          Identity oid = broker.serviceIdentity().buildIdentity(
                  Product.class, id);
          Product product = (Product) broker.getObjectByIdentity(oid);
          return product;
      } finally {
          if (broker != null) {
              broker.close();
          }
      }
  }
 
  First method looks like this:

  public static ProductBean getProductBean(String id)
  { Product p = getProduct(id); // 2nd method call
      if (p != null)
      { ProductBean product = new ProductBean();
          product.setDescription(p.getDescription());
          product.setID(p.getId());
          // and here's the O/R recall
          product.setCategoryDescription( p.getCategory().getDescription() );
          // now, a broker is open... how does it get closed?
  Sorry, I didn't get it. The Category object associated with Product p
  was materialized too when getProduct(id) was called (auto-retrieve is
  true). Why would p.getCategory() open a new PB instance?
 
  regards,
  Armin
 
  return product;
}
return null;
  }
 
  I tried to wrap the code of the first method with a tx.open() and
  tx.abort(), to be sure that the broker is released at the end with the
  abort().
 
 
  thanks
 
  regards.
 
 
 
  On 4/25/06, Armin Waibel [EMAIL PROTECTED] wrote:
  Hi Bruno,
 
  Bruno CROS wrote:
   Hi,
 
   It seems that objects read with a broker can read related objects via
   auto-retrieve set to true despite the broker being closed.
  I can't see what you mean. When OJB materializes an object with related
  objects and auto-retrieve is 'true', the fully materialized object is
  returned. Thus it's not possible to close the PB instance during object
  materialization (except by an illegal concurrent thread).

  Or do you mean materialization of proxy references? In this case OJB
  tries to look up the current PB instance and, if none is found,
  internally a PB instance is used for materialization and immediately
  closed after use.

  Could you please describe it in more detail (with example code)?
 
  regards,
  Armin
 
   I suppose that a getDefaultBroker is done, and the borrowed broker is
   never closed (because there is no reference to it).
   Note: this occurred because the application has been written with
   several layers, one for DAO, one for services, one for UI.

   How can I avoid auto-retrieve readings taking brokers from the PBPool
   by themselves?
 
  Thanks.
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED

standalone getDefaultBroker auto-retrieve

2006-04-24 Thread Bruno CROS
 Hi,

It seems that objects read with a broker can read related objects via
auto-retrieve set to true despite the broker being closed.
I suppose that a getDefaultBroker is done, and the borrowed broker is never
closed (because there is no reference to it).
Note: this occurred because the application has been written with several
layers, one for DAO, one for services, one for UI.

How can I avoid auto-retrieve readings taking brokers from the PBPool by
themselves?

Thanks.


Re: standalone getDefaultBroker auto-retrieve

2006-04-24 Thread Bruno CROS
Hi Armin,

Here is a schematic example:

Consider a service method that returns a ProductBean object. ProductBean
is not O/R mapped, but the reading calls a second method that reads the
O/R-mapped Product object. Then relations are followed, to find the
description of its Category (consider that a product has 1 Category).

The 2nd method looks like this (classic):

public static Product getProduct(String id) {
    PersistenceBroker broker = null;
    try {
        broker = PersistenceBrokerFactory.defaultPersistenceBroker();
        Identity oid = broker.serviceIdentity().buildIdentity(Product.class,
                id);
        Product product = (Product) broker.getObjectByIdentity(oid);
        return product;
    } finally {
        if (broker != null) {
            broker.close();
        }
    }
}

First method looks like this:

public static ProductBean getProductBean(String id) {
    Product p = getProduct(id); // 2nd method call
    if (p != null) {
        ProductBean product = new ProductBean();
        product.setDescription(p.getDescription());
        product.setID(p.getId());
        // and here's the O/R recall
        product.setCategoryDescription(p.getCategory().getDescription());
        // now, a broker is open... how does it get closed?
        return product;
    }
    return null;
}

I tried to wrap the code of the first method with a tx.open() and
tx.abort(), to be sure that the broker is released at the end with the
abort().


thanks

regards.



On 4/25/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 Bruno CROS wrote:
   Hi,
 
  It seems that objects read with a broker can read related objects via
  auto-retrieve set to true despite the broker being closed.

 I can't see what you mean. When OJB materializes an object with related
 objects and auto-retrieve is 'true', the fully materialized object is
 returned. Thus it's not possible to close the PB instance during object
 materialization (except by an illegal concurrent thread).

 Or do you mean materialization of proxy references? In this case OJB tries
 to look up the current PB instance and, if none is found, internally a PB
 instance is used for materialization and immediately closed after use.

 Could you please describe it in more detail (with example code)?

 regards,
 Armin

  I suppose that a getDefaultBroker is done, and the borrowed broker is
  never closed (because there is no reference to it).
  Note: this occurred because the application has been written with several
  layers, one for DAO, one for services, one for UI.

  How can I avoid auto-retrieve readings taking brokers from the PBPool
  by themselves?
 
  Thanks.
 

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: Question about two-level cache

2006-04-22 Thread Bruno CROS
  Hi,

I experimented with the two-level cache implementation. Have a look at the
mail archives about TwoLevelCacheImpl. Beware of checkpoints inside batch
loops: using checkpoints makes the quantity of cached objects grow until
commit. If you have to, replace checkpoints by commit/begin, as sketched
below.
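A sketch of that replacement inside a batch loop (ODMG API; db, odmg, items
and the initial tx are assumed to be set up elsewhere, and the batch size of
500 is arbitrary):

// Instead of tx.checkpoint(), which keeps every touched object cached
// until the final commit, commit in batches and begin a new transaction.
for (int i = 0; i < items.size(); i++) {
    db.makePersistent(items.get(i)); // or lock/modify as needed
    if (i % 500 == 499) {
        tx.commit();                 // flush and release accumulated objects
        tx = odmg.newTransaction();  // fresh transaction for the next batch
        tx.begin();
    }
}
tx.commit();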

Did you try ObjectCacheDefaultImpl instead of OSCacheImpl?

If you do a ReportQuery or read collections with criteria, then I believe the
cache is not used (it is partially used when retrieving collections). I mean
the SQL query is still executed, to find the data / or the PKs. So you can
see queries crossing in P6Spy.

I'm interested in your experiences with the two-level cache, especially the
settings together with your kind of model/processes.

Regards.


On 4/22/06, Westfall, Eric Curtis [EMAIL PROTECTED] wrote:

 Hello, I'm wondering if anyone out there has experience using OJB's
 two-level caching.  I am attempting to use a two-level cache to help
 speed up an application I'm working on and I'm noticing some issues that
 I'm curious about.

 It appears that, even if my object is already in the application cache,
 OJB is still issuing the SQL to query for the object (verified using
 P6Spy).  Is this what's actually happening or am I mistaken?  I can see
 in certain cases where OJB would need to run the query in order to get a
 set of primary keys to check the cache for, however, if I'm doing a
 query by the primary key shouldn't it just go straight to the
 application cache without hitting the database?  I'm using OSCache as my
 application cache, here's my configuration:

 <object-cache class="org.apache.ojb.broker.cache.ObjectCacheTwoLevelImpl">
     <attribute attribute-name="cacheExcludes" attribute-value=""/>
     <attribute attribute-name="applicationCache"
         attribute-value="edu.iu.uis.database.ObjectCacheOSCacheImpl"/>
     <attribute attribute-name="copyStrategy"
         attribute-value="org.apache.ojb.broker.cache.ObjectCacheTwoLevelImpl$CopyStrategyImpl"/>
     <attribute attribute-name="forceProxies" attribute-value="false"/>
 </object-cache>

 Thanks in advance,
 Eric

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: Brokers leak

2006-04-20 Thread Bruno CROS
Why not, but how do I know that the pool has unclosed brokers?

I thought about wrapping all my broker usages to register the open and close
operations.

But it would be so great to check, directly inside the broker pool, all the
brokers given out too long ago. A broker older than 2 minutes would be a good
indication. If I have registered it, it would be easy to know which methods
called it. Unfortunately, I couldn't find where I can read the broker pool
status and the last-usage broker timestamp.

Asking myself:
How can an iteration given by broker.getIteratorByQuery(q) be used outside
the method without holding a db connection open?

And how do I access the low-level pool implementation?

Thanks.

Regards






On 4/20/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Here's one brute force idea:

 Put a javax.servlet.Filter across every URL (/*) of your application.
 Then check for an unclosed broker after calling FilterChain.doFilter(...)
 like this:

 public void doFilter(
ServletRequest aRequest,
ServletResponse aResponse,
FilterChain aChain)
throws IOException, ServletException {

aChain.doFilter(aRequest, aResponse);

/* Check for unclosed broker and log url path here!! */
 }
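What that check might look like, assuming the application funnels all broker
lookups through a per-thread counting helper (like the hypothetical
BrokerUsage sketch earlier in this digest); openCount() is a made-up method,
and aConfig is assumed to be the FilterConfig saved in init(FilterConfig):

    /* Check for unclosed broker and log url path, for example: */
    int open = BrokerUsage.openCount(); // hypothetical per-thread counter
    if (open > 0) {
        aConfig.getServletContext().log("unclosed broker(s) after "
                + ((HttpServletRequest) aRequest).getRequestURI()
                + ": " + open);
    }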

 Jon French
 Programmer
 ECOS Development Team
 [EMAIL PROTECTED]
 970-226-9290

 Fort Collins Science Center
 US Geological Survey
 2150 Centre Ave, Building C
 Fort Collins, CO 80526-8116



 Bruno CROS [EMAIL PROTECTED]
 04/19/2006 03:06 PM
 Please respond to
 OJB Users List ojb-user@db.apache.org


 To
 OJB Users List ojb-user@db.apache.org
 cc

 Subject
 Brokers leak






Hi all,

 I experienced broker leaks. I checked all the open/close broker methods
 and the leaks still remain. The ODMG transactions have been checked too.

 Those leaks result in a PersistenceBroker I/O Exception, freezing the
 application.

 I would be very happy to know which methods take my brokers without
 giving them back.
 Is there a simple way to know where they are lost?
 Can P6Spy help?


 What are the main reasons for a broker leak?
 I suspect unfinished query iterations: what is the clean way to end an
 iteration before the end of the query iteration? Does
 persistenceBroker.close() do it?
 I suspect checkpoint(): but if I guess well, it's only a tx.commit() and
 a re-tx.begin(), so...
 I suspect brokers can't be closed in some cases, even when the close() is
 in finally code. Is that possible?! E.g. when returning a value?
 I suspect JSP iteration. We tried to iterate only Java collections in the
 JSPs, but query iteration still remains inside. Maybe those iterations can
 take brokers automatically, but can't close them? So, is there a setting
 of the broker pool to avoid this? The documentation is short.

 Is there a chance to have 1.0.5 within the current month?

 Thanks for all your ideas.

 Using OJB 1.0.4, oracle 10g, QueriesByCriteria, ReportQueries and ODMG
 transactions.





Brokers leak

2006-04-19 Thread Bruno CROS
Hi all,

 I experienced broker leaks. I checked all the open/close broker methods
and the leaks still remain. The ODMG transactions have been checked too.

 Those leaks result in a PersistenceBroker I/O Exception, freezing the
application.

 I would be very happy to know which methods take my brokers without giving
them back.
 Is there a simple way to know where they are lost?
 Can P6Spy help?


 What are the main reasons for a broker leak?
 I suspect unfinished query iterations: what is the clean way to end an
iteration before the end of the query iteration? Does
persistenceBroker.close() do it? (See the sketch below.)
 I suspect checkpoint(): but if I guess well, it's only a tx.commit() and
a re-tx.begin(), so...
 I suspect brokers can't be closed in some cases, even when the close() is
in finally code. Is that possible?! E.g. when returning a value?
 I suspect JSP iteration. We tried to iterate only Java collections in the
JSPs, but query iteration still remains inside. Maybe those iterations can
take brokers automatically, but can't close them? So, is there a setting of
the broker pool to avoid this? The documentation is short.
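One way to end an iteration cleanly before exhausting it - a sketch, assuming
the iterator returned is OJB's OJBIterator (from
org.apache.ojb.broker.accesslayer), which exposes releaseDbResources():

Iterator it = broker.getIteratorByQuery(query);
try {
    if (it.hasNext()) {
        Object first = it.next();
        // use only the first result...
    }
} finally {
    // free the underlying ResultSet/Statement before abandoning the iterator
    if (it instanceof OJBIterator) {
        ((OJBIterator) it).releaseDbResources();
    }
    broker.close(); // return the broker (and its connection) to the pool
}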

Is there a chance to have 1.0.5 within the current month?

Thanks for all your ideas.

Using OJB 1.0.4, oracle 10g, QueriesByCriteria, ReportQueries and ODMG
transactions.


Re: Im desperated

2006-04-13 Thread Bruno CROS
Take into consideration fields with null values. Tests with <> or = cannot
be done correctly with nulls.

In this case you have to write: A isNull OR A<>B

or A NotNull AND A==B
Hope this may help you.

Regards.


On 4/13/06, Helder Gaspar Rodrigues [EMAIL PROTECTED] wrote:

 Hello everyone, I know that maybe this mailing list is not the appropriate
 place to ask this, but I'm so desperate that I have to try.

 I'm using OJB in a Java project, and now I want to query the objects
 using an ODMG OQL query.

 Imagine classes A and B. A has a set of B objects assigned to the
 variable bs.

 B objects have a variable called date with type java.util.Date.

 I want to retrieve all objects A where no B in the set bs matches the
 given date.

 Example:

 String oqlQuery = "select a from " + A.class.getName();
 oqlQuery += " where name = $1 and bs.data <> $2";


 The problem is that if there is any B in bs that does not match the <>
 condition, the respective object A that contains that B object is put in
 the result.

 I don't want that. I want: if there is any B in the bs set that matches
 the date, object A is not considered.


 Any tips?

 Thank you a lot for your attention!

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: Im desperated

2006-04-13 Thread Bruno CROS
I'm afraid you cannot compare two different types. I think SQL can't, and
anyway it's not recommended.

If you have written your query in SQL, compare it with the SQL query
produced by your OQL query (with the help of P6Spy) and you will find what
goes wrong.

Regards

On 4/13/06, Helder Gaspar Rodrigues [EMAIL PROTECTED] wrote:

 I think that I have understood your point of view, but how can I
 compare A with B if they are different object types?

 Thank you
 Bruno CROS wrote:
  Take into consideration fields with null values. Tests with <> or =
  cannot be done correctly with nulls.

  In this case you have to write: A isNull OR A<>B

  or A NotNull AND A==B
 
  Hope this may help you.
 
  Regards.
 
 
  On 4/13/06, Helder Gaspar Rodrigues [EMAIL PROTECTED] wrote:
  Hello everyone, I know that maybe this mailing list is not the
  appropriate place to ask this, but I'm so desperate that I have to try.

  I'm using OJB in a Java project, and now I want to query the objects
  using an ODMG OQL query.

  Imagine classes A and B. A has a set of B objects assigned to the
  variable bs.

  B objects have a variable called date with type java.util.Date.

  I want to retrieve all objects A where no B in the set bs matches the
  given date.

  Example:

  String oqlQuery = "select a from " + A.class.getName();
  oqlQuery += " where name = $1 and bs.data <> $2";
 
 
  The problem is that if there is any B in bs that does not match the <>
  condition, the respective object A that contains that B object is put
  in the result.

  I don't want that. I want: if there is any B in the bs set that matches
  the date, object A is not considered.
 
 
  Any tips?
 
  Thank you a lot for your attention!
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: reflexive collections

2006-04-08 Thread Bruno CROS
Hi,

Take care with the loading behaviour of auto-retrieve=true on a reflexive
collection.

That consideration aside, querying with an alias should normally help with
your error.

++



On 4/7/06, Daniel Perry [EMAIL PROTECTED] wrote:

 Is it possible for an object to have a reflexive collection? I.e. a
 collection of itself?

 Eg, Class person has:

 /**
  * @ojb.collection element-class-ref="Person"
  *                 indirection-table="friends"
  *                 foreignkey="personId" remote-foreignkey="friendId"
  *                 proxy="true"
  */
 private List<Person> friends;

 I tried this, but got an error relating to friendId being ambiguous in a
 query. Is this possible?


 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




rownum causes order by trouble

2006-04-04 Thread Bruno CROS
Hi,

Inserting a criteria such as rownum < 2 causes the ORDER BY #n DESC to
become ORDER BY #n. The DESC has been lost!!

Can anyone confirm? Is there a workaround?

What's the best way to read only the first result (or object) without
iterating the whole collection?
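One option, a sketch: limit the query itself rather than cutting the
iteration short; setEndAtIndex is part of OJB's query API
(org.apache.ojb.broker.query), and the order column here is made up:

QueryByCriteria q = QueryFactory.newQuery(Product.class, crit);
q.addOrderByDescending("id"); // hypothetical order column
q.setEndAtIndex(1);           // fetch at most the first row
Product first = (Product) broker.getObjectByQuery(q);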

I suspect some broker-connection troubles on aborted iterations. Is that
conceivable?

Thanks.

PS: any news about 1.0.5?


Re: Reflexive ReportQuery

2006-03-27 Thread Bruno CROS
Auto-reply:

Final solution done with a plain SQL query.

The solution includes the subquery attributes as well.

++
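For row-style results that the ReportQuery cannot express, plain JDBC through
the broker's connection is one fallback - a sketch using java.sql types, with
the table/column names from the example quoted below and error handling
omitted:

Connection con = broker.serviceConnectionManager().getConnection();
PreparedStatement stmt = con.prepareStatement(
        "SELECT A0.p1, A1.p1 FROM table A0, table A1 WHERE A0.p2 = A1.p3");
ResultSet rs = stmt.executeQuery();
while (rs.next()) {
    // read rs.getObject(1), rs.getObject(2), ...
}
rs.close();
stmt.close();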


On 3/22/06, Bruno CROS [EMAIL PROTECTED] wrote:


   Hi again,

 I'm desperately searching for a way to produce a ReportQuery with a reflexive
 virtual relation.

 Is it possible to declare an alias (a user alias maybe, perhaps one specially
 created for ReportQuery) to obtain the join of a table with itself?

 Imagine a ReportQuery like this:

 attribute : p1, _LOOPBACK.p1
 class : Table.class
 criteria : p2 = _LOOPBACK.p3

 this would be translated to:

 select A0.p1, A1.p1 FROM table A0, table A1 where A0.p2 = A1.p3

 Regards



 On 3/21/06, Bruno CROS [EMAIL PROTECTED] wrote:
 
 
Hi all,
 
   First, thanks Armin for the patch. I think I will wait for 1.0.5 if it is
  released within the month, because I need the CLOB changes too.
 
   I'm wondering whether it's possible to build a self-join ReportQuery such
  as:
 
  select A1.id, A2.id FROM table1 A1, table1 A2 where A1.joinColumn1 =
  A2.joinColumn2
 
  Has anyone managed to code this (using setAlias, for example)?
 
  Thanks
 





Report query right outer join

2006-03-27 Thread Bruno CROS
   Hi,

 A new challenge: build a left and a right outer join, in fact what I think
is described as a full outer join, in a single ReportQuery.

 Does someone think it's possible with OJB, or do I go SQL again?

Regards


Re: Report query right outer join

2006-03-27 Thread Bruno CROS
Auto-reply:

SQL again.


On 3/27/06, Bruno CROS [EMAIL PROTECTED] wrote:



Hi,

  A new challenge: build a left and a right outer join, in fact what I think
 is described as a full outer join, in a single ReportQuery.

  Does someone think it's possible with OJB, or do I go SQL again?

 Regards









Join on the same table

2006-03-21 Thread Bruno CROS
  Hi all,

 First, thanks Armin for the patch. I think I will wait for 1.0.5 if it is
released within the month, because I need the CLOB changes too.

 I'm wondering whether it's possible to build a self-join ReportQuery such as:

select A1.id, A2.id FROM table1 A1, table1 A2 where A1.joinColumn1 =
A2.joinColumn2

Has anyone managed to code this (using setAlias, for example)?

Thanks


Re: ODMG and markDelete behaviour

2006-03-15 Thread Bruno CROS
Hi,

Well, if I do not flush, the process can't work. We tried to avoid the
flushes, but the process does not seem to allow it (several circular
references, as I explained to you a few days ago).

I didn't get the OJB test suite running (without any change to the 1.0.4
binary dist): the DB creation step crashes with an unknown CLOB type. What am
I supposed to do to get it working?

Would you add a flush into the test, please (as in the example)? And maybe the
collection relation allDetails, storing some details from shop (keeping the
existing 1:1 Detail relation). The key field is the same, no?

I think the bug is due to the flush and markDelete.

Thanks for this work.

I use implicit locking, ordering, and ObjectCacheDefaultImpl.


On 3/15/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 I setup a similar test in
 CircularTest#testBidirectionalOneToOneMoveObject()

 http://svn.apache.org/viewcvs.cgi/db/ojb/branches/OJB_1_0_RELEASE/src/test/org/apache/ojb/odmg/CircularTest.java?rev=386117view=markup

 The test works like a charm.

 Compared with your post this test doesn't need flush() calls

  1. retrieve A1, retrieve B1 with A1.getB()
  2. instantiation and lock of A2
 3. lock A1, B1 (depends on the configuration settings, implicit locking)
  4. A2.setB(B1)
 5. B1.setA(null)
  6. delete A1 (markDelete)
  7. commit

 regards,
 Armin

 Bruno CROS wrote:
 Hi Armin, Hi all.
 
Well, with a little confusion, we believed that our bug of complex
 update
  was not one. But it remains.
 
   I'm suspecting a bug of OJB concerning the transcription of the
  markDelete in SQL operations.
 
   If I have time, I will try to assemble a demonstrative case of test
 with
  Junit. For the moment, I give you the simplified case :
 
  Consider A1 and B1 cross referenced objetcs, each one referring the
 other.
 
   The process consists with create A2, and replace A1.
 
   So, with ODMG, we wrote something like that :
 
  1. retrieve A1, retrieve B1 with A1.getB()
  2. instanciation and lock of A2
  3. A2.setB(B1)
  4. delete A1 (markDelete)
  5. lock B1, B1.setA(null)
  6. flush (assume it is required)
  7. lock B1
  8. B1.setA(A2)
  9. commit
 
  After commit, we observed that in database, B1 doesn't refers A2 !!
 
  Now let's do it again (remember that B1 doesn't reference A2), let's
 create
  A3.
  After commit, B1 refers well A3.
 
  We saw that the markDelete(A1) produces an update equivalent to B1.setA
 (null)
  at commit without taking account of the last state of the object B1. All
  occurs as if the markDelete(A1) considers the state of B1 at the
 beginning
  of the transaction (that is linked to A1 so) and decide to unreference
  A1 which have to be deleted. The evidence is that with no reference from
 B1
  to A2 at start of a new run, B1 refers well A3. Surely because there is
  nothing to unreference.
 
  I specify that the model is obviously more complex than that, but I
  preferred a simple case that a complex one. Only subtlety being a
 relation
  1:N from B to A (in more) which makes it possible to archive the objects
 A
  when they have to be not deleted (another transaction does that). So
 from B
  to A there is a reference-descriptor and a collection-descriptor (using
 same
  DB key field) and from A to B we have 2 reference-descriptors. I do not
  believe there is a cause in that, but i give the information. There's
 more
  than one flush too. Using : OJB1.0.4, DefaultCacheObjectImpl and Oracle
 9i
  and MS SQl server.
 
  We searched everywhere : repository auto-update/auto-retrieve (all at
 none
  as mandatory for ODMG), bad declarations of relations (foreign key
 checked),
  tx settings (ordering/not, implicit locking/not), OJB settings. No way.
 
  If someone experienced such a trouble , please tell us.
 
  Thank you for any help.
 
  Regards
 

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




ODMG and markDelete behaviour

2006-03-11 Thread Bruno CROS
   Hi Armin, Hi all.

  Well, with a little confusion, we believed that our complex-update bug
was not one. But it remains.

 I'm suspecting a bug in OJB concerning the translation of
markDelete into SQL operations.

 If I have time, I will try to assemble a demonstrative test case with
JUnit. For the moment, I give you the simplified case:

Consider A1 and B1, cross-referenced objects, each one referring to the other.

 The process consists of creating A2 to replace A1.

 So, with ODMG, we wrote something like that (sketched in code after the
list):

1. retrieve A1, retrieve B1 with A1.getB()
2. instantiation and lock of A2
3. A2.setB(B1)
4. delete A1 (markDelete)
5. lock B1, B1.setA(null)
6. flush (assume it is required)
7. lock B1
8. B1.setA(A2)
9. commit
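
A minimal ODMG sketch of that sequence, assuming an open Database db and
Implementation odmg, and with readA() standing in for our real lookup code:

  TransactionExt tx = (TransactionExt) odmg.newTransaction();
  tx.begin();
  A a1 = readA(tx);                       // 1. retrieve A1 (lookup not shown)
  B b1 = a1.getB();
  A a2 = new A();                         // 2.
  tx.lock(a2, Transaction.WRITE);
  a2.setB(b1);                            // 3.
  db.deletePersistent(a1);                // 4. markDelete
  tx.lock(b1, Transaction.WRITE);         // 5.
  b1.setA(null);
  tx.flush();                             // 6.
  tx.lock(b1, Transaction.WRITE);         // 7.
  b1.setA(a2);                            // 8.
  tx.commit();                            // 9.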

After commit, we observed that in the database, B1 doesn't refer to A2!!

Now let's do it again (remember that B1 doesn't reference A2) and create
A3.
After commit, B1 correctly refers to A3.

We saw that markDelete(A1) produces an update equivalent to B1.setA(null)
at commit, without taking account of the last state of the object B1.
Everything happens as if markDelete(A1) considered the state of B1 at the
beginning of the transaction (where it is linked to A1) and decided to
unreference A1, which has to be deleted. The evidence is that, with no
reference from B1 to A2 at the start of a new run, B1 correctly refers to A3.
Surely because there is nothing to unreference.

I should specify that the model is obviously more complex than that, but I
preferred a simple case to a complex one. The only subtlety is an additional
1:n relation from B to A, which makes it possible to archive the A objects
when they must not be deleted (another transaction does that). So from B
to A there is a reference-descriptor and a collection-descriptor (using the
same DB key field), and from A to B we have 2 reference-descriptors. I do not
believe the cause lies there, but I mention it for completeness. There's more
than one flush too. Using: OJB 1.0.4, ObjectCacheDefaultImpl, and Oracle 9i
and MS SQL Server.

We searched everywhere: repository auto-update/auto-retrieve (all at none,
as mandatory for ODMG), bad declarations of relations (foreign keys checked),
tx settings (ordering/not, implicit locking/not), OJB settings. No way.

If someone has experienced such trouble, please tell us.

Thank you for any help.

Regards


Re: ODMG ordering with circular pairs

2006-03-07 Thread Bruno CROS
Hi Armin. Hi all.

OK. Well, I discussed the bug with the developer who found it. After
we inspected the code (really, this time), we found the error.
There's no more bug about modifying objects after flush.

Now, we have established some important rules for coding
ODMG transactions (better late than never):

1. *Keep the database order in mind before writing anything*. ODMG ordering is
not an alternative. Circular loops have to be resolved in at least two steps.
(Easy, but many coders have difficulties!!)

2. *In case of circular references between 2 or more objects, insert some
flushes*, but as few as possible, to keep good performance and to not have
to always re-lock objects after the flushes. The sequence has to look
like create-references-flush-reference/delete-commit. When references are
not circular, OJB ordering is really efficient; thanks to the guy who wrote
this.

3. *After a flush, check all objects to be modified and lock them again* (even
if they were locked before the flush).

4. *Read objects that have to be modified by the transaction with the same
broker.* This point led us to write get methods callable inside and outside
a transaction (really great; see the sketch below). The method gets the broker
of the current transaction if one exists, else gets a default one. After the
read, the broker is closed if it was created for the call, but not if it is
used by a transaction.
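
A hedged sketch of such a get method, assuming the odmg Implementation is at
hand, that currentTransaction() returns null when the thread has no
transaction, and that the cast to OJB's TransactionImpl (which exposes
getBroker()) is acceptable:

  TransactionImpl tx = (TransactionImpl) odmg.currentTransaction();
  PersistenceBroker broker;
  boolean created = false;
  if (tx != null && tx.isOpen())
  {
      broker = tx.getBroker();            // reuse the transaction's broker
  }
  else
  {
      broker = PersistenceBrokerFactory.defaultPersistenceBroker();
      created = true;
  }
  try
  {
      // ... perform the read with broker ...
  }
  finally
  {
      if (created) broker.close();        // only close what we created
  }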

The last point is really important for batches and complex schemas. For
batches, it avoids the "Cannot lock for WRITE" errors (implicit locking
helps), and for complex schemas it ensures that objects are correctly modified
(and correctly referenced) all the way to the database.

On my side, the bug described was due to reading an object with a different
broker than the one of the transaction. I don't think I will test this deeper,
because you warmed me about doing this. I understand that an object can't be
successfully treated by a transaction if it was read by another broker.

We have a little work to do to change all the read methods, but I hope
everything will be OK, and maybe I will try the TwoLevelCache again!

Thanks a lot.

Glad to have your help.

Regards.







On 3/6/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 Bruno CROS wrote:

  Our first problem was, that, after commit, P1 was not referencing M in

  database. I deduced that after a flush(), new locked objects (before
  flush)
  can't reference new ones created before flush too (a second time).
 Humm,
  strange... we check all code : No apparent mistakes.
 
 
  Understand reference after have been made, and not in the database.
  It can be a bug for me. the reference is between 2 new instanciated and
  flushed object. We have tested many times, and can't understand at all.
 

 Could you send me a test case to reproduce this issue (classes + mapping
 + test source)?
 The OJB test-suite tests handle some complex object graphs and I never
 notice such a problem, so it's important to reproduce your issue before
 we can fix it (if it's really a bug).


  I have an enourmous doubt about the other processes that still have
  flush
  steps and work on an equivalent part of model (double circular).
  Additional flush() calls should never cause problems. Flush only write
  the current locked objects to DB.
 
 
  Things looks to be different in my case (with 2 flushes). The object
 (locked
  again after flush (why not)) seems to be  ignored by transaction, as
 being
  already treated !

 If the (repeated) locked object is not modified after lock, OJB will
 ignore it on commit. So are you sure that the object is modified after
 lock?

 regards,
 Armin




 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: ODMG ordering with circular pairs

2006-03-07 Thread Bruno CROS
Oops, a typo at the end of my mail:

"I don't think I will test this deeper, because you warned me about doing
this (getting an object from another broker)."

That explains the strange behaviour we had.

Regards.

Sorry for the mistake.



On 3/7/06, Bruno CROS [EMAIL PROTECTED] wrote:


 Hi Armin. Hi all.

 Ok. Well, i discussed about the bug with the developer who found it. After
 we have inspected the code (really this time)  we found the error.
 There's no more bug about modifying objects after flush.

 Now, we have established some important rules to code
 ODMG transactions (never late):

 1. *Keep in mind database order, before writing anything*. ODMG ordering
 is not an alternative. Circular loops have to be resolve at least by two
 steps. (Easy, but many coders have difficulties !!)

 2. *In case of circular references with 2 or many objects, insert some
 flushes*, but the least possible, to keep good performance and to not have
 to always lock objects after flushes. Sequence have to look
 like create-references-flush-reference/delete-commit. When references are
 not circular, OJB ordering is really efficient, thanks the guy who wrote
 this.

 3. *After flush, check all objects to be modified and lock them again* (even
 they are locked before flush) .

 4. *Read objects that have to be modified by the transaction with the same
 broker.* This point leaded us to write some get method callable under
 and outside transaction (really great). The method get the broker of the
 current transaction if existing, else get a default one. after read, broker
 is closed if it have been created for the call, not if it is used by a
 transaction.

 Last point is really important for batch, and complex schemas. For batchs,
 it avoids the Cannot lock for WRITE (implicit locking helps) and for
 complex schema, it assures that objects are well modified (and well
 references so) till database.

 On my own, the bug described was due to a read of object with a different
 broker than the one of transaction. I don't think i will test this deeper,
 because you warmed me about doing this. I understand that an object can't be
 successfull treated by transaction if read by another broker.

 We have a little work to change all read methods, but i hope everything
 will be ok. and may be, i will try again TwoLevelCache !

 Thanks a lot.

 Glad to have your help.

 Regards.







 On 3/6/06, Armin Waibel [EMAIL PROTECTED] wrote:
 
  Hi Bruno,
 
  Bruno CROS wrote:
 
   Our first problem was, that, after commit, P1 was not referencing M
  in
   database. I deduced that after a flush(), new locked objects (before
   flush)
   can't reference new ones created before flush too (a second time).
  Humm,
   strange... we check all code : No apparent mistakes.
  
  
   Understand reference after have been made, and not in the database.
   It can be a bug for me. the reference is between 2 new instanciated
  and
   flushed object. We have tested many times, and can't understand at
  all.
  
 
  Could you send me a test case to reproduce this issue (classes + mapping
  + test source)?
  The OJB test-suite tests handle some complex object graphs and I never
  notice such a problem, so it's important to reproduce your issue before
  we can fix it (if it's really a bug).
 
 
   I have an enourmous doubt about the other processes that still have
   flush
   steps and work on an equivalent part of model (double circular).
   Additional flush() calls should never cause problems. Flush only
  write
   the current locked objects to DB.
  
  
   Things looks to be different in my case (with 2 flushes). The object
  (locked
   again after flush (why not)) seems to be  ignored by transaction, as
  being
   already treated !
 
  If the (repeated) locked object is not modified after lock, OJB will
  ignore it on commit. So are you sure that the object is modified after
  lock?
 
  regards,
  Armin
 
 
 
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 



ODMG ordering with circular pairs

2006-03-04 Thread Bruno CROS
  Hi all, hi Armin,

  I believe I noted a strange behaviour of the new 1.0.4 ODMG
ordering.

  Consider JIRA OJB-18. Before 1.0.4, we used the given workaround. This
means that we put some flush() calls in strategic places to avoid all FK
constraint violations. It worked fine.

  Now, with OJB 1.0.4, circular or 1:1 cross references shouldn't
require flush(). I don't know how it has been done, but I assume it works.
So I consider now that OJB can find the right order (and come back to the
first object to put the last reference in place); however, we still create,
reference, unreference and delete in database sequence order.

  But I'm going to give you an example that worked before 1.0.4 (with
flush()) and does not any more.

 - consider the circular pair of objects P1 and P2 (1:1 relation,
cross-referenced)
 - consider now that we have a Master object that has a collection
of Detail objects, themselves referencing an instance of P1. Let's call
the master object M and the detail objects D.
- consider a last reference relation between P1 and M (the master)

So, the whole writing process is done like this (assume "new" means
instantiating and locking at the same time):

[tx.begin]
- new M
- loop of (D)
{ - new D
  - D references M
  - create P1-P2 circular pair (new P1 , new P2 , P1 references P2 , P2
references P1) [flush #1]
  - D references P1
}
[flush#2]
- loop on created P1
{ - lock P1 (again)
  - P1 references M
}
[tx.commit]

It's schematic, but I believe everything needed to explain it is there.
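
In code, one loop iteration might look like this hedged sketch (tx is a
TransactionExt so flush() is available; the setters are illustrative):

  D d = new D();
  tx.lock(d, Transaction.WRITE);
  d.setM(m);                              // D references M
  P1 p1 = new P1();
  P2 p2 = new P2();
  tx.lock(p1, Transaction.WRITE);
  tx.lock(p2, Transaction.WRITE);
  p1.setP2(p2);                           // circular pair
  p2.setP1(p1);
  tx.flush();                             // [flush #1]
  d.setP1(p1);                            // D references P1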

Our first problem was that, after commit, P1 was not referencing M in the
database. I deduced that after a flush(), objects locked before the flush
can't reference (a second time) new objects also created before the flush.
Hmm, strange... we checked all the code: no apparent mistakes.

Then I found this in the 1.0.4 release notes: [OJB-18] - ODMG ordering
problem with circular/bidirectional 1:1 references. Well, OJB finds the
right order now and closes the circular schema, fine. So we tried to remove
[flush #2], and the process worked.

After that, I told my whole team to remove the flush() calls. Everyone was
happy! And after [flush #1] was removed, the process crashed with the FK
constraint from P2 to P1 (you know, the circular pair). It seems ODMG (without
any flush) does not find the right order to resolve a distant circular pair. I
don't know why at all.

Don't forget we have a doubly nested circular reference there (P1 <-> P2
inside M-D-P1 and P1-M). At first, I didn't think this construction could be
resolved with fewer than 2 flushes. And now, I have only one! Sorry for the
headache, but I'm lost too.

I have enormous doubts about the other processes that still have flush
steps and work on an equivalent part of the model (double circular).

Please, could you explain the different ordering behaviours of ODMG to me,
and when to put in a flush or not?

Thanks a lot.

 References: OJB 1.0.4, ObjectCacheDefaultImpl, ODMG transactions with
same-broker queryObject methods (to possibly retrieve instantiated objects
before commit)


Re: OJB CLOB Support

2006-02-28 Thread Bruno CROS
OK.

About suggestions:

Well, most of the time, we work with ReportQuery to read collections (those
queries don't affect only one table). So, querying the table containing the
CLOB columns shouldn't take too much time; I guess, at that moment, we
don't need to read the CLOB column anyway.

So, materialization of the CLOB columns is only done when the mapped objects
are materialized. We take care to read them one by one (with the getByIdentity
service) and not with an object query returning a List.

But I'm afraid of the cache size used! This leads me to suggest that
a good user Clob implementation (or user Clob wrapper) could sit inside a
cached object, with or without the CLOB content having been read. Indeed, a
Clob wrapper that implements the java.sql.Clob interface and can be
instantiated would be sufficient. I guess problems will occur with cache
implementations using hard references (object cloning). But why would we
clone this kind of object, since it is not a primitive type?

Am I on the right way?

Regards.
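
For reference, with the 1.0.4 behaviour Armin describes below (CLOB
materialized as a String via JdbcTypesHelper), the mapped class simply carries
a String field; a sketch with illustrative names:

  public class Document
  {
      private Integer id;       // PK column
      private String content;   // column mapped with jdbc-type="CLOB";
                                // fully materialized on load, so copy-based
                                // caches clone it like any other field

      // getters/setters omitted
  }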












On 2/28/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 Bruno CROS wrote:
Hi,
 
  I need a little documentation on CLOB too.
 
  To start, does someone can tell me which java type i have to use (to
 map)
  with a jdbc-type CLOB ?
 
  java.sql.Clob? as the OJB types mapping board shows it. but
 java.sql.Clob is
  only an interface. What object to create so ? ( is there a Clob
  implementation ?)
 
  String (or StringBuffer)? does OJB always load all the data on queries
 !?
  oups !
 
  Byte[] ? as BLOB. I need some special character, only found in UNICODE.
  byte[] means that content is in bad ASCII for me.
 
  any other type ?

 you are right, currently the CLOB/BLOB support isn't very sophisticated
 (BLOB columns mapped to byte[], CLOB mapped to String - see
 JdbcTypesHelper class).
 Currently we think about to introduce JdbcType classes which handle the
 Clob/Blob interfaces instead of the materialized String/byte[] objects.
 Any suggestions are welcome.


 regards,
 Armin


 
  Thanks for any ligtht.
 
 
  On 1/4/06, Armin Waibel [EMAIL PROTECTED] wrote:
  Hi Vamsi,
 
  sorry for the late reply.
  I added an JIRA issue (maybe someone can spend time to adapt this
  section).
 
  regards,
  Armin
 
 
  Vamsi Atluri wrote:
  Hello all,
 
  In our application we use OJBs extensively. However, we have a CLOB
  field
  that needs to be populated within the application. I was trying to
 find
  documentation about OJB's support for *LOB objects at this link:
  http://db.apache.org/ojb/docu/howtos/howto-use-lobs.html
 
  However, all I could find was this:
 
  Strategy 1: Using streams for LOB I/O
 
  ## to be written #
  Strategy 2: Embedding OJB content in Java objects
 
  ## to be written #
  Querying CLOB content
 
  ## to be written #
 
 
  Is there any documentation out there that explains these
 concepts?  Any
  help is greatly appreciated. Thanks.
 
  Regards,
  -Vamsi
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Oracle 10g compliance

2006-02-21 Thread Bruno CROS
Hi all,

Apparently, 1.0.4 does not support the Oracle 10g platform (Torque does, but
not the OJB runtime).

So, I'll go on with the Oracle 9i settings. Does anyone have experience (good
or bad) to report with that?

Thanks.


Re: Oracle 10g compliance

2006-02-21 Thread Bruno CROS
OK. Fine.

I supposed that it was good.

It was just to be sure.

Thanks Thomas.


On 2/21/06, Thomas Dudziak [EMAIL PROTECTED] wrote:

 On 2/21/06, Bruno CROS [EMAIL PROTECTED] wrote:

  Apparently, 1.0.4 does not support Oracle 10g platform (torque does but
 not
  the OJB runtime)
 
  So, i go on with oracle 9i settings. Does anyone report experience (good
 or
  bad) of that ?

 OJB does work just fine with 10g. As far as OJB is concerned, there is
 no relevant difference between 9i and 10g, so you should be fine with
 the 9i platform.

 Tom

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: ReportQuery bug 1.0.4

2006-02-20 Thread Bruno CROS
Yes, I confirm.

It's P6Spy. I have no idea why, at all.

It's going to be hard to debug queries with that kind of behaviour. I hope
P6Spy will be fixed soon.
Thanks a lot.




On 2/17/06, Jakob Braeuchi [EMAIL PROTECTED] wrote:

 hi bruno,

 this looks like a p6spy problem !

 with p6spy i get the wrong result:
 11 11 11 called 11 11 11
 11 11 11 called 11 11 11

 without p6spy the result is ok:
 11 11 11 called 66 66 66
 11 11 11 called 166 166 166

 can you confirm this ?

 jakob

 Jakob Braeuchi schrieb:
  hi bruno,
 
  i made a little testcase with a phonenumber having 1:n calls:
 
  reportQuery = QueryFactory.newReportQuery(PhoneNumber.class, crit);
  reportQuery.setAttributes(new String[]{"number", "calls.numberCalled"});
  iter = broker.getReportQueryIteratorByQuery(reportQuery);
  while (iter.hasNext())
  {
     Object[] data = (Object[]) iter.next();
     System.out.println(data[0] + " called " + data[1]);
  }
 
  the sql looks quite ok:
  SELECT A0.NUMBER,A1.NUMBER FROM PHONE_NUMBER A0 INNER JOIN CALLS A1 ON
  A0.PHONE_NUMBER_ID=A1.PHONE_NUMBER_ID WHERE A1.NUMBER LIKE '%66%'
 
  but the data retrieved from the resultset is wrong:
 
  number and calls.numberCalled contain the same value :(
 
  jakob
 
 
 
  Jakob Braeuchi schrieb:
  hi bruno,
 
  could you please provide the generated sql ?
 
  jakob
 
  Bruno CROS schrieb:
 Hi all,
 
  It seems there is big problem on ReportQuery.
 
  Consider 2 classes, Class_A with property_a and Class_B with
 property_b.
  Consider that Class_A is 1:n related  to Class_B.
 
  Build a ReportQuery on Class_B, requesting property_b and 
  classA.property_a. If database field names are the same in each
  table for
  property_a and property_b, surprise, that you believe the value of
   property_b is the property_a one !!
 
  I looked to the generated query, and, awful, aliases are wrong.
 
   Anyone confirms ?
 
  OJB 1.0.4 (bug wasn't related in 1.0.1)
 
  How many databse fields with the same name i have into my model? Too
  much !!
 
  Thanks. Regards.
 
 
 
 
 
 
 
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




ReportQuery bug 1.0.4

2006-02-17 Thread Bruno CROS
   Hi all,

It seems there is a big problem with ReportQuery.

Consider 2 classes, Class_A with property_a and Class_B with property_b.
Consider that Class_A is 1:n related to Class_B.

Build a ReportQuery on Class_B, requesting property_b and
classA.property_a. If the database column names are the same in each table for
property_a and property_b, surprise: the value you believe to be
property_b is actually the property_a one!!

I looked at the generated query and, awful, the aliases are wrong.

 Can anyone confirm?

OJB 1.0.4 (the bug was not present in 1.0.1)

How many database columns with the same name do I have in my model? Too many!!

Thanks. Regards.


SequenceManagerNextValImpl buildIdentity

2006-02-15 Thread Bruno CROS
  Hi all,

I just experienced a strange behaviour of SequenceManagerNextValImpl (I
guess) following the migration from OJB 1.0.1 to 1.0.4.

To build an object, I need the value of the id before writing it to the
database. So I use:

Identity oid = broker.serviceIdentity().getIdentity(objectNewInstanciated);
// (broker is different from the tx one)

and read the pk value with:

oid.getPrimaryKeyValues()[0]

At this point, my id is 1001 and the object is not in the database. Note that
my database sequence is at 975!!

And after tx.commit(), surprise, the object is created with the right id, 975.

I don't understand this at all. Why is the id not the final one (as it
was when using OJB 1.0.1)?
Any setting idea?

OJB 1.0.4
pk setting: autoincrement="true" sequence-name="SEQ_CALCUL"
primarykey="true" access="anonymous"

SequenceManager: autonaming="false"

Thanks


Re: TwoLevelCache troubles and get the running transaction

2006-02-11 Thread Bruno CROS
  Hi Armin,

I have passed my migration changes to my whole team. I approximately got all
the transactions running (except the "primitive at 0" changes!!! We had
just one PK using the 0 value, phew...). Some little changes, then. Now we are
on 1.0.4. But I continue trying to install the TwoLevelCache and CGLib,
convinced it's better, of course.

I answered you inline below.

On 2/11/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 Bruno CROS wrote:
Hi all,  hi Armin,
 
  Using TwoLevelCacheObjectImpl, i experienced some big troubles  with  a
  delete sequence containing some flush() to avoid FK constraints errors.
 
  Assume 2 classes A and B, with 1:n relation.
  We can resume sequence as this :
 
  - open tx
  - read object A
  - read collection of object B
  {
   - lock object B
   - set null to reference object A in B object
  }
  - flush() (with TxExt method) (normally database FK are null now for
  database connection)
  - read collection of object B

 why do you read the collection again? How do you read the collection of
 B? When using a different PB instance (outside of the running odmg-tx)
 you will not see any changes till commit.

 Why don't you simply lock B, nullify the FK to A, delete B and commit or
 flush()?


Consider that what I gave you is a simplified example. Problems occur with
multiple crossed object relations (B collection of A, but C collections of A
too, and C refers 1:1 to B). So these steps (double collection loops) are
needed to avoid FK errors. Although I have correctly ordered the object
operations (nullify the FKs and then delete) in the ODMG transactions, the
query sequence (sent at commit) is not in the same order (delete
before nullify!!). I suppose that's because, at the start of coding, one year
ago, we used to delete objects with a PB method (out of tx I guess; my
mistake, sorry!!). Now we use the database.deletePersistent method. But if I
remember correctly, the need to flush to the database (nullify) before the
deletes is the same (consider ObjectCacheDefaultImpl running, so...).

I understand now why flush is not effective against the database with the
two-level cache, thanks. All is done at commit, fine. What I don't understand
is that, even if with the 2LCache flush doesn't really flush (I guess), the
sequence of queries is still not in the correct order (code order, same as the
DB's logical order) at the end. Does this mean that I have to checkpoint()
instead of flush()? Tell me no!! I can't.

How can I change my transactions to be sure that the order of queries is
the needed database logical order? (so as to use the 2LCache)

 {
   - delete B ( database.deletePersistent method)
 }

 - commit tx

 The troubles don't occur with DefaultObjectCacheImpl.

in the default cache you will always immediately see any changes of
any object (the dirty read problem discussed in a previous mail)

Ok.

 Hum, did i miss another way to flush database with 2LCache ?


 Another question ( different) :
 Is there a way to get the running transaction without no reference as
 Transaction (kind of join with the running thread) ? in the way to get the
 broker of the transaction.

Think I didn't catch the sentence. Nevertheless I will try to give an
answer. A different thread can join a running tx using
Transaction.join() method. To get the running tx of the current thread
call Implementation.currentTransaction()

OK, thanks, good. This is for coding standalone object-read methods. I have
many object reads done with the default PB. Following your advice, I
understand that it will be better to use the transaction's PB.
So object-read methods can run within or without a transaction (if objects
are modified and read several times, it will be safe, I guess).



regards,
Armin



 Regards


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


TwoLevelCache troubles and get the running transaction

2006-02-10 Thread Bruno CROS
  Hi all,  hi Armin,

Using the two-level cache implementation, I experienced some big troubles with
a delete sequence containing some flush() calls to avoid FK constraint errors.

Assume 2 classes A and B, with a 1:n relation.
We can summarize the sequence as this:

- open tx
- read object A
- read collection of object B
{
 - lock object B
 - set null to reference object A in B object
}
- flush() (the TxExt method) (normally the database FKs are now null on the
database connection)
- read collection of object B
{
  - delete B ( database.deletePersistent method)
}

- commit tx

The troubles don't occur with ObjectCacheDefaultImpl.
Hmm, did I miss another way to flush to the database with the two-level cache?


Another question (a different one):
Is there a way to get the running transaction without holding a reference to
the Transaction (a kind of join with the running thread)? The goal is to get
the broker of the transaction.


Regards


Re: Auto-retrieve or not ?

2006-02-08 Thread Bruno CROS
  Hi Armin and all,

Just to clarify my dilemma, I want to summarize my intentions for
configuring my repository.

Consider that the model is done, that 90% of the transactions are coded (with
ReportQuery, query objects, and a few getCollection uses on
update), and of course the model is as complex as you can imagine (lots of
collections).

Well, according to my latest experiments on 1.0.4, I am resigned to not using
the two-level cache implementation (I noted a very heavy load when
retrieving objects) and to not using reference proxies (CGLib) (processes
freeze after a while). Sorry for that, really.
So it appears to me that the only way to reduce loading time is to break
some strategic relations, setting many auto-retrieves to false.
Logically, I chose to break the collection relations of central classes
(imagine a Customer class referred to by 10 other classes). This seems to
speed up all my retrieves (globally), but I know now I'm exposed to
unwanted behaviour. And that's what I can't quite picture.

So, if for my class Customer all the collection-descriptor auto-retrieves are
set to false, then after my object Customer1 is loaded, the collections are
not available to read without calling retrieveAllReferences(...) (or its
friend retrieveReference()). Well, OK.
Now, if I decide to change this object with ODMG (still without
retrieveAllReferences), am I exposed to having all the existing records
populating my collections deleted? By way of example, imagine an Order class
populating my orders collection in Customer. Would all of my orders
disappear?

A second case: still assume the Customer class has a collection of Orders. If
I get Order1 of Customer1 to update it (still without any
retrieveAllReferences on Customer1), do I have to expect that something
disappears? Even if Customer is never locked, just loaded and
(re)referenced.

Thanks very, very much for the answers.

I hope I will not need to ask for any more help.

Regards.







On 2/6/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 Bruno CROS wrote:
  About my precedent batch troubles:
  In fact, a saw my model loaded from every where with all the actual
  auto-retrieve=true, this means, every where !!  This means too, that
  back relations are read too, circling too much. This was the cause of
 my
  OutOfMemoryError.
 
  My model is a big one with a lot of relations, as complex as you can
  imagine.
   So, i'm asking me about get rid of those auto-retrieve, to get all
 objects
  reads faster (and avoid a clogging). But i read that for ODMG dev,
  precognized settings are auto-retrieve=true auto-update=none
  auto-delete=none.  Do i have to absolutely follow this ? If yes, why ?
 

 In generally the auto-retrieve=true is mandatory when using the
 odmg-api. When using it OJB take a snapshot (copy all fields and
 references to other objects) of each object when it's locked. On commit
 OJB compare the snapshot with the state of the object on commit. This
 way OJB can detect changed fields, new or deleted objects in references
 (1:1, 1:n,m:n).
 If auto-retrieve is disabled and the object is locked OJB assume that no
 references exist or the existing ones are deleted although references
 exist and not be deleted. So this can cause unexpected behavior,
 particularly with 1:1 references.

 The easiest way to solve your problem is to use proxy-references. For
 1:n and m:n you can use a collection-proxy:

 http://db.apache.org/ojb/docu/guides/basic-technique.html#Using+a+Single+Proxy+for+a+Whole+Collection

 For 1:1 references you can use proxies too.

 http://db.apache.org/ojb/docu/guides/basic-technique.html#Using+a+Proxy+for+a+Reference
 Normally this requires the usage of a interface as persistent object
 reference field. But when using CGLib based proxies it's not required
 and you can simply set proxy=true without any changes in your source
 code.
 http://db.apache.org/ojb/docu/guides/basic-technique.html#Customizing+the+proxy+mechanism


 If you can't use proxies (e.g. in a 3-tier application) you can disable
 auto-retrieve if you take care and:
 - disable implicit locking in generally
 - carefully lock all objects before change it (new objects too)
 - before you lock an object (for update, delete,...) retrieve the
 references of that object using method
 PersistenceBroker.retrieveAllReferences(obj)/retrieveReference(...).
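
 A minimal sketch of that last point (the object and relation names are
 illustrative):

   broker.retrieveAllReferences(customer);         // fill all 1:1/1:n references
   // or only one named relation:
   broker.retrieveReference(customer, "orders");
   tx.lock(customer, Transaction.WRITE);           // now safe to lock and modify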


  At start, I saw that setting auto-retrieve to true everywhere wasn't
  solution, but all transaction and batch processes were working fine (
 until
  1.0.4. ), with autoretrieve on all n relations (yes!). Chance !!
  But with a
  little doubt, I tell to all the dev team to avoid as possible the read
 by
  iterating collections without any reasons, prefering ReportQuery (single
  shot) and direct object queries.
  Is that the good way to have a fast and robust application ?

 ...yep. The fastest way to lookup a single object by PK is to use
 PB.getObjectByIdentity(...). If a cache is used this method doesn't
 require a DB round trip
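
 A tiny sketch of that PK lookup, assuming IdentityFactory offers
 buildIdentity(Class, Object) in 1.0.4 (the class and key are illustrative):

   Identity oid = broker.serviceIdentity()
           .buildIdentity(Customer.class, new Integer(4711));
   Customer c = (Customer) broker.getObjectByIdentity(oid);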

Re: Can't get default broker !!!

2006-02-07 Thread Bruno CROS
Disregard my last mail!!

For a while, I was in the body of a bad user (nightmare).

In fact, I really did change something, something in the setup of OJB even!!

I was trying to install CGLib without the jar (idiot that I am).

Sorry.



On 2/7/06, Bruno CROS [EMAIL PROTECTED] wrote:

 Help!

 I can't get any broker today. Yesterday, it was all right. And I swear, I
 didn't change anything!!

 I don't understand it at all, since all the other developers can connect to
 the same instance/schema to work on.

 What did I miss again?

 How can I trace any JDBC connection troubles? With logs? With P6Spy?

 Really need help.

 Regards.




Re: Auto-retrieve or not ?

2006-02-07 Thread Bruno CROS
(to Armin) OK. I really do appreciate all your advice. Thanks again.

Here's my situation, resulting from a migration from 1.0.1 to 1.0.4 (within
ojb-blank):

- I tried desperately to use the TwoLevelCache and cannot run the first batch
(note that this batch looks like one big transaction). With 1.0.1, it executes
in a few seconds, creating 420 * 2 records (not so big). From what I saw, it
seems that the two-level cache reads all materialized objects all the time, as
if it wanted to check all records against all the others (following
relations)! Useless reads, in my opinion. It's definitely not possible to have
such a quantity of instances in memory!!

Of course, I tried to break the loading mechanism. First I saw that
auto-retrieve at false can produce fine results, but given its
incompatibility/disadvantage for ODMG transactions (following your advice), I
tried to keep auto-retrieve and mount the CGLib proxies. The batch never
ended, freezing.

The only solution that worked was to first go back to my old cache
settings with ObjectCacheDefaultImpl, and then, yes, the first batch ran
again (faster!!). Phew.

But after that, my problem was the second batch. It seemed there were
problems reading objects by identity. I got rid of the CGLib proxy attributes
(proxy="true" on the reference and proxy="dynamic" on the class descriptor)
and it worked.

So, I'm asking myself:

- Has anyone really used the two-level cache successfully with a little
transaction like the creation/update of 800 records?

- Is the two-level cache really needed (to avoid dirty reads)? We used to
work with ObjectCacheDefaultImpl, (re)reading all objects before update, and
the performance is not so bad.

- Did I miss something when mounting the CGLib ability? I set proxy="true" on
the reference-descriptor and proxy="dynamic" on the referenced class.

- I didn't see significant loading-chain breaks using CGLib with the
two-level cache. Tell me I'm wrong and that it's only down to the cache
manager.

Regards



On 2/6/06, Armin Waibel  [EMAIL PROTECTED] wrote:

 Hi Bruno,

 Bruno CROS wrote:
  About my precedent batch troubles:
  In fact, a saw my model loaded from every where with all the actual
  auto-retrieve=true, this means, every where !!  This means too, that
  back relations are read too, circling too much. This was the cause of
 my
  OutOfMemoryError.
 
  My model is a big one with a lot of relations, as complex as you can
  imagine.
   So, i'm asking me about get rid of those auto-retrieve, to get all
 objects
  reads faster (and avoid a clogging). But i read that for ODMG dev,
  precognized settings are auto-retrieve=true auto-update=none
  auto-delete=none.  Do i have to absolutely follow this ? If yes, why ?
 

 In generally the auto-retrieve=true is mandatory when using the
 odmg-api. When using it OJB take a snapshot (copy all fields and
 references to other objects) of each object when it's locked. On commit
 OJB compare the snapshot with the state of the object on commit. This
 way OJB can detect changed fields, new or deleted objects in references
 (1:1, 1:n,m:n).
 If auto-retrieve is disabled and the object is locked OJB assume that no
 references exist or the existing ones are deleted although references
 exist and not be deleted. So this can cause unexpected behavior,
 particularly with 1:1 references.

 The easiest way to solve your problem is to use proxy-references. For
 1:n and m:n you can use a collection-proxy:

 http://db.apache.org/ojb/docu/guides/basic-technique.html#Using+a+Single+Proxy+for+a+Whole+Collection

 For 1:1 references you can use proxies too.

 http://db.apache.org/ojb/docu/guides/basic-technique.html#Using+a+Proxy+for+a+Reference
 Normally this requires the usage of a interface as persistent object
 reference field. But when using CGLib based proxies it's not required
 and you can simply set proxy=true without any changes in your source
 code.
 http://db.apache.org/ojb/docu/guides/basic-technique.html#Customizing+the+proxy+mechanism


 If you can't use proxies (e.g. in a 3-tier application) you can disable
 auto-retrieve if you take care and:
 - disable implicit locking in generally
 - carefully lock all objects before change it (new objects too)
 - before you lock an object (for update, delete,...) retrieve the
 references of that object using method
 PersistenceBroker.retrieveAllReferences(obj)/retrieveReference(...).


  At start, I saw that setting auto-retrieve to true everywhere wasn't
  solution, but all transaction and batch processes were working fine (
 until
  1.0.4. ), with autoretrieve on all n relations (yes!). Chance !!
  But with a
  little doubt, I tell to all the dev team to avoid as possible the read
 by
  iterating collections without any reasons, prefering ReportQuery (single
  shot) and direct object queries.
  Is that the good way to have a fast and robust application ?

 ...yep. The fastest way to lookup a single object by PK is to use
 PB.getObjectByIdentity(...). If a cache is used

Re: Migrating to 1.0.4

2006-02-06 Thread Bruno CROS
Well, step by step, I hope that everything will be fine soon.

The DB connections are well mounted, thank you very much again.

So, I tested my batch processes and I noted that the log4j trace disappeared.
I think it's not so hard to resolve (just redirect to my commons-logging
setup). But the most worrying thing is that my first process (creation of 400
records, each with 400 others linked) cannot end, crashing with an
out-of-memory error most of the time. Note that this process is the simplest
one I have.

According to the notes, I modified auto-update to none. auto-delete was
already at none.
My repository has been regenerated by the XDoclet Ant task. Should I consider
new repository descriptions (that I missed) for my described collections (or
references)?

I know there is something wrong. It looks like there are exponential reads.
But I don't forget that I'm creating objects from scratch, so I rule out an
infinite read loop.

The processes were working without any trouble in 1.0.1, except the unread
1:n relations during heavy load that I reported to you last week.

I'm looking at the documentation about configuring OJB.properties well, for
little batch processes and multiple-object transactions.

My setup is the one that came with ojb-blank. I'm asking myself if it's

Note that my second connection (to the 1.0.1 schema) just serves me for
reading only (no update, no creation), so I rule out a locking problem between
1.0.4 and a 1.0.1 db core.

I guess that my case is not an isolated one.

Best regards



On 2/4/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 Bruno CROS wrote:
  i already have patch torque3.1.1.jar. thanks for advice. I've done since
  start with the build.xml of ojb-blank.jar (so torque).
 
  I have the right database generated now. That's a good point and i thank
 you
  all.
 
  On the other hand, i have 2 database connections and it seems they do
 not
  work anymore. Of course, i can't see any errors. I d'ont known how to
 spy DB
  connections !!? shame, i known.
 

 you can use p6spy
 http://db.apache.org/ojb/docu/faq.html#traceProfileSQL


  The particularity of one of these connections is that the database has
 been
  made with 1.0.1 OJB release.  Understand that this database is accessed
 by
  my application, and by another one (bigger). So upgrading this DB is not
 so
  suitable.
 
  nevertheless, understand too, that my application need to be connected
 to
  those two databses, mine in 1.0.4 and the big one in 1.0.1. At the same
 time
  of course.
 
  My doubts are about the repository_internal.xml ( describing OJB
 tables). If
  these tables are not the same in the 2 DB. How could it work ??
 

 Seems that the default connection doesn't require any of the internal
 OJB tables (if you don't use odmg named objects or the specific List/Map
 implementations).

 rushDb use SequenceManagerHighLowImpl. In this case table OJB_HL_SEQ
 is mandatory. The metadata mapping of this object/table changed between
 1.0.1 and 1.0.4 but not the column names (one column was removed between
 1.0.1 and 1.0.4). Thus you could try to use the old table with the
 1.0.4 mapping. If this doesn't work you could drop and recreate table
 OJB_HL_SEQ (OJB will automatically take care of existing id's and
 re-populate the table).

 In rushDb you are using useAutoCommit=0. In this case it's mandatory
 that the jdbc-driver/DB disable the connection autoCommit property (else
 you will run into problems with tx-rollback).

 regards,
 Armin


You'll understand what I mean when you see my repository.xml file and the 2
next files, which are parts of it.
 
  ==
  my repository.xml file
  =
 
  <!ENTITY databaseMathieu SYSTEM "repository_database_mathieu.xml">
  <!ENTITY databaseRushDb SYSTEM "repository_database_rushDb.xml">
  <!ENTITY internal SYSTEM "repository_internal.xml">
  <!ENTITY user SYSTEM "repository_user.xml">
  <!ENTITY rushDb SYSTEM "repository_rushDb.xml">
  ]>
 
  <descriptor-repository version="1.0"
      isolation-level="read-uncommitted"
      proxy-prefetching-limit="50">
 
    <!-- connection Mathieu -->
    &databaseMathieu;
    <!-- connection rushDB -->
    &databaseRushDb;
 
    <!-- include ojb internal mappings here; comment this if you don't need
     them -->
    &internal;
 
    <!-- mapping Mathieu -->
    &user;
 
    <!-- mapping RushDb -->
    &rushDb;
 
 
  ==
  repository_database_mathieu.xml
  ===
 
  <!-- This connection is used as the default one within OJB -->
  <jdbc-connection-descriptor
      jcd-alias="default"
      default-connection="true"
      platform="Oracle9i"
      jdbc-level="2.0"
      driver="oracle.jdbc.driver.OracleDriver"
      protocol="jdbc"
      subprotocol="oracle"
      dbalias="thin:@P615-5:1521:MATHIEU"
      username="xxx"
      password=""
      batch-mode="false"
      useAutoCommit="1"
      ignoreAutoCommitExceptions="false"
  >
 
  <!--
  On initialization of connections

Re: Migrating to 1.0.4

2006-02-06 Thread Bruno CROS
The trace is working now.

I may have the start of an explanation. I saw that the 1:n relations are all
read (checked), even if no getter (getCollection) exists on the mapped object.
That's why my process runs slower, I think.

I don't know why this occurs. I guess you're going to tell me that I have
to check all my auto-retrieve settings!!

...




On 2/6/06, Bruno CROS [EMAIL PROTECTED] wrote:


 Well, step by step, i hope that anything will be fine soon.

 DB Connections are well mounted, thank you very much again.

 So, I tested  my batch processes and i noted that log4j trace disappeared.
 I think it's no so hard to resolve (just redirect to my  commons logging
 setup). But the most worrying is that my first process (creation of 400
 records qith 400 others each linked) canno't end, crashing in a outofmemory
 error most of the time. Note that this processes is the simplest i have.

 According to the notes, i modified auto-update to none. auto-delete were
 already at none.
 My repository have been regenerated by xdoclet Ant task. Should i consider
 new repository description (that i miss) in my described collections (or
 references)?

 I known there something wrong. It looks like there exponential reads. But
 i don't forget that i 'm creating objects from scratch. So, i get rid an
 infinite read loop.

 Processes were working without any trouble in 1.0.1. Except the none read
 1.N relations  during  heavy load i related you last week.

 I'm looking at documentation about well configuring OJB.properties, for
 little batch processes and multiples objects transaction.

 My setup is the one who came with ojb_blank. I'm asking me if it's

 Note that my second connections (in 1.0.1 schema ) just serves me to read
 only (no update, no creation). so i get rid of a problem of locking with
 1.0.4 and a 1.0.1 db core.

 I guess that my case is not an isolated one.

 Best regards



 On 2/4/06, Armin Waibel  [EMAIL PROTECTED] wrote:
 
  Hi Bruno,
 
  Bruno CROS wrote:
   i already have patch torque3.1.1.jar. thanks for advice. I've done
  since
   start with the build.xml of ojb-blank.jar (so torque).
  
   I have the right database generated now. That's a good point and i
  thank you
   all.
  
   On the other hand, i have 2 database connections and it seems they do
  not
   work anymore. Of course, i can't see any errors. I d'ont known how to
  spy DB
   connections !!? shame, i known.
  
 
  you can use p6spy
  http://db.apache.org/ojb/docu/faq.html#traceProfileSQL
 
 
   The particularity of one of these connections is that the database has
  been
   made with 1.0.1 OJB release.  Understand that this database is
  accessed by
   my application, and by another one (bigger). So upgrading this DB is
  not so
   suitable.
  
   nevertheless, understand too, that my application need to be connected
  to
   those two databses, mine in 1.0.4 and the big one in 1.0.1. At the
  same time
   of course.
  
   My doubts are about the repository_internal.xml ( describing OJB
  tables). If
   these tables are not the same in the 2 DB. How could it work ??
  
 
  Seems that the default connection doesn't require any of the internal
  OJB tables (if you don't use odmg named objects or the specific List/Map
  implementations).
 
  rushDb use SequenceManagerHighLowImpl. In this case table OJB_HL_SEQ
  is mandatory. The metadata mapping of this object/table changed between
  1.0.1 and 1.0.4 but not the column names (one column was removed between
  1.0.1 and 1.0.4). Thus you could try to use the old table with the
  1.0.4 mapping. If this doesn't work you could drop and recreate table
  OJB_HL_SEQ (OJB will automatically take care of existing id's and
  re-populate the table).
 
  In rushDb you are using useAutoCommit=0. In this case it's mandatory
 
  that the jdbc-driver/DB disable the connection autoCommit property (else
  you will run into problems with tx-rollback).
 
  regards,
  Armin
 
 
   You understand what i mean when you see my repository.xml file. and
  the 2
   next files, parts of it.
  
   ==
   my repository.xml file
   =
  
   <!ENTITY databaseMathieu SYSTEM "repository_database_mathieu.xml">
   <!ENTITY databaseRushDb SYSTEM "repository_database_rushDb.xml">
   <!ENTITY internal SYSTEM "repository_internal.xml">
   <!ENTITY user SYSTEM "repository_user.xml">
   <!ENTITY rushDb SYSTEM "repository_rushDb.xml">
   ]>
  
   <descriptor-repository version="1.0"
       isolation-level="read-uncommitted"
       proxy-prefetching-limit="50">
  
     <!-- connection Mathieu -->
     &databaseMathieu;
     <!-- connection rushDB -->
     &databaseRushDb;
  
     <!-- include ojb internal mappings here; comment this if you don't
     need them -->
     &internal;
  
     <!-- mapping Mathieu -->
     &user;
  
     <!-- mapping RushDb -->
     &rushDb;
  
  
   ==
   repository_database_mathieu.xml

Migrating to 1.0.4

2006-02-03 Thread Bruno CROS
Following Armin's advice, I'm currently migrating from 1.0.1 to 1.0.4.

I started with the 1.0.4 ojb-blank.jar to replace all the files: the jars of
course, and the configuration ones too.

I am stuck with Torque (to generate the Oracle 9i database) with an
UnknownHostException.

In 1.0.1 I was targeting with this (and it used to work fine):
torque.database.createUrl=${urlProtocol}:${urlSubprotocol}:${urlDbalias}
torque.database.buildUrl=${urlProtocol}:${urlSubprotocol}:${urlDbalias}

and in 1.0.4, I tried the original one (with database) and the old one:
torque.database.createUrl=${urlProtocol}:${urlSubprotocol}:${urlDbalias}
and
torque.database.buildUrl=${torque.database.createUrl} (apparently the same)

Has anyone managed to get Torque working with Oracle 9i?

Big thanks.
Big thanks.



On 2/2/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Hi Bruno,

 in OJB 1.0.2 we fixed a bug with partially materialized objects under
 heavy load. Assume that this bug cause your problem.
 http://db.apache.org/ojb/release-notes.txt

 I would recommend the migration to OJB 1.0.4.
 But take care of changes between 1.0.1 and 1.0.4 (e.g. complete
 refactoring of odmg implementation done in 1.0.2/3, changed auto-xxx
 settings for odmg, ...) it's listed in the release-notes and in the docs
 (Caching, Inheritance, ...).
 Strongly recommended:
 http://db.apache.org/ojb/docu/guides/odmg-guide.html

 regards,
 Armin


 Bruno CROS wrote:
   Hello,
 
  I now have spent a lot of time debugging 2 batch processing.
  First processing creates a simple 1-n couple of object (consider that
 class
  UnityCar refers a class Unity with 1-n relation) . Creation is done like
  this :
 
  - tx.begin()
  - Unity u = new Unity()
  - tx.lock(u, Transaction.WRITE);
  - UnityCar uc = new UnityCar();
  - uc.setUnity(u);
  - tx.lock(uc, Transaction.WRITE);
  - tx.commit();
 
  Note that we have wrotten a lot a code working perfectly like this one.
  Chance !
 
  My problem occurs when a second processing read a Unity class. With the
  auto-retrieve of the collection ( we need cache speed),  OJB should send
 me
  a collection of 1 UnityCar, but it do this only a good while after first
  processing !! Immediately after first processing, I have
  NullPointerException getting the collection unityCars. I known that my
  object is bad-loaded (no collection on unity.getUnityCars() ) and if a
 wait
  ( a probably cache expiration ) , that same objet is right loaded, with
 the
  good collection !!
 
  I tried a lot of solutions :
  - mark first created object dirty with markDirty. No way.
  - reload object (with getByIdentity)  after materialization. No way.
  - checkpoint and reload. No way.
  - restart. Yes of corse it works. But i definitely can't !!
 
  -hum, does i have severals objects, many as Transactions !!? after
 commit ,
  no ??
 
  Working with OJB Version 1.0.1. I known it's old, but it works not so
 bad
  for all we all have done at this day.
 
  Does someone can explain that ? Help !!
  Migration to 1.0.3 ? to 1.0.4 ?! Which one ? what is supposed to be
  rewrotten ?
 
  Thanks a lot to help me.
 
 
  Sorry for my poor bad english.
 

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: Migrating to 1.0.4

2006-02-03 Thread Bruno CROS
Bang in the bull 's eye!!

the embedded dtd was not 3_0_1 but 3_1. I just change the value and it find
the database.dtd file included en torque.jar

Then  i have to add to my build.properties
torque.database.user=
and
torque.database.password=x
and the famous
torque.delimiter=/   (apparently needed for Oracle9i)

now i get an error about OJB_DMAP_ENTRIES table !!
here is the ant trace :

StdErr
 [torque-sql-exec] Failed to execute: CREATE TABLE OJB_DMAP_ENTRIES (
ID NUMBER NOT NULL, DMAP_ID NUMBER NOT NULL, KEY_OID LONG RAW, VALUE_OID
LONG RAW )
 [torque-sql-exec] java.sql.SQLException: ORA-01754: a table may
contain only one column of type LONG
 [torque-sql-exec] Failed to execute: ALTER TABLE OJB_DMAP_ENTRIES ADD
CONSTRAINT OJB_DMAP_ENTRIES_PK PRIMARY KEY (ID)
 [torque-sql-exec] java.sql.SQLException: ORA-00942: table or view does
not exist

Is this table needed? Why does Oracle refuse to create it!? It's amazing.


Thanks for your help.


On 2/3/06, Thomas Dudziak [EMAIL PROTECTED] wrote:

 On 2/3/06, Bruno CROS [EMAIL PROTECTED] wrote:

  First you will find the Ant trace of the setup-db task; the Torque error is
  marked in red.
  Second, you will find my build.properties used by the task.


   [torque-sql] 2006-02-03 13:21:48,578 [main] INFO
  org.apache.torque.engine.database.transform.DTDResolver - Resolver: used
  ' http://db.apache.org/torque/dtd/database_3_0_1.dtd'
StdErr
 
   BUILD FAILED
   build-torque.xml :
  file:C:/@Dev/Mathieu/ojb/src/schema/build-torque.xml:203:
  org.apache.torque.engine.EngineException: java.net.UnknownHostException:
  db.apache.org at line 203

 This seems to be the core problem: Ant cannot access the server
 hosting the DTD (db.apache.org), probably because of a firewall. In
 the Ant target that generates the schema, you can specify the DTD to
 use for the schema. There, I think, you should use the value

 http://db.apache.org/torque/dtd/database_3_1.dtd

 because that should be the one that is contained in the torque jar
 (which Ant therefore does not have to fetch from the internet).
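 For illustration, a hedged sketch of the corresponding declaration at the
 top of the Torque schema file (the database name and table contents are
 placeholders, not taken from this thread):

     <?xml version="1.0" encoding="ISO-8859-1"?>
     <!DOCTYPE database SYSTEM "http://db.apache.org/torque/dtd/database_3_1.dtd">
     <database name="mydb" defaultIdMethod="native">
         <table name="EXAMPLE_TABLE">
             <column name="ID" primaryKey="true" required="true" type="INTEGER"/>
         </table>
     </database>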

 Tom

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: Migrating to 1.0.4

2006-02-03 Thread Bruno CROS
Thanks.

I'm afraid I need to re-patch the distributed torque-gen-3.1.1.jar to have
the TIMESTAMP JDBC type generated for java.sql.Date and java.sql.Timestamp,
as I wrote in an old post (specific to Oracle 9i and older).

Can someone confirm that?


On 2/3/06, Armin Waibel [EMAIL PROTECTED] wrote:

 Bruno CROS wrote:
  Bang, bull's eye!!

  The embedded DTD was not 3_0_1 but 3_1. I just changed the value and it
  found the database.dtd file included in torque.jar.

  Then I had to add to my build.properties
  torque.database.user=
  and
  torque.database.password=x
  and the famous
  torque.delimiter=/   (apparently needed for Oracle9i)

  Now I get an error about the OJB_DMAP_ENTRIES table!!
  Here is the Ant trace :
 
  StdErr
   [torque-sql-exec] Failed to execute: CREATE TABLE OJB_DMAP_ENTRIES
 (
  ID NUMBER NOT NULL, DMAP_ID NUMBER NOT NULL, KEY_OID LONG RAW, VALUE_OID
  LONG RAW )
   [torque-sql-exec] java.sql.SQLException: ORA-01754: a table may
  contain only one column of type LONG
   [torque-sql-exec] Failed to execute: ALTER TABLE OJB_DMAP_ENTRIES
 ADD
  CONSTRAINT OJB_DMAP_ENTRIES_PK PRIMARY KEY (ID)
   [torque-sql-exec] java.sql.SQLException: ORA-00942: table or view
 does
  not exist
 
  Is this table needed? Why does Oracle refuse to create it!? It's amazing.

 The table is only needed if you use the odmg-api with a specific DMap
 implementation. Normally you never need it.

 http://db.apache.org/ojb/docu/guides/platforms.html#OJB+internal+tables
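 Should the table ever be needed on Oracle, a hedged sketch of a workaround
 for ORA-01754 (Oracle allows at most one LONG/LONG RAW column per table)
 would be to recreate it with BLOB columns instead, assuming the mapped OID
 columns can live with BLOB:

     -- Sketch only: BLOB instead of LONG RAW sidesteps the one-LONG-per-table limit
     CREATE TABLE OJB_DMAP_ENTRIES (
         ID        NUMBER NOT NULL,
         DMAP_ID   NUMBER NOT NULL,
         KEY_OID   BLOB,
         VALUE_OID BLOB
     );
     ALTER TABLE OJB_DMAP_ENTRIES
         ADD CONSTRAINT OJB_DMAP_ENTRIES_PK PRIMARY KEY (ID);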

 regards,
 Armin

 
 
  Thanks for your help.
 
 
  On 2/3/06, Thomas Dudziak [EMAIL PROTECTED] wrote:
  On 2/3/06, Bruno CROS [EMAIL PROTECTED] wrote:
 
  First you will find the Ant trace of the setup-db task; the Torque error
  is marked in red.
  Second, you will find my build.properties used by the task.
 
   [torque-sql] 2006-02-03 13:21:48,578 [main] INFO
  org.apache.torque.engine.database.transform.DTDResolver - Resolver:
 used
  ' http://db.apache.org/torque/dtd/database_3_0_1.dtd'
StdErr
 
   BUILD FAILED
   build-torque.xml :
  file:C:/@Dev/Mathieu/ojb/src/schema/build-torque.xml:203:
  org.apache.torque.engine.EngineException: java.net.UnknownHostException:
  db.apache.org at line 203
  This seems to be the core problem: Ant cannot access the server
  hosting the DTD (db.apache.org), probably because of a firewall. In
  the Ant target that generates the schema, you can specify the DTD to
  use for the schema. There, I think, you should use the value
 
  http://db.apache.org/torque/dtd/database_3_1.dtd
 
  because that should be the one that is contained in the torque jar
  (which Ant therefore does not have to fetch from the internet).
 
  Tom
 
  -
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 
 

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: Migrating to 1.0.4

2006-02-03 Thread Bruno CROS
   <attribute attribute-name="dbcp.poolPreparedStatements" attribute-value="false"/>
   <attribute attribute-name="dbcp.maxOpenPreparedStatements" attribute-value="10"/>
   <!-- Attribute determining if the Commons DBCP connection wrapper will allow
        access to the underlying concrete Connection instance from the JDBC-driver
        (normally this is not allowed, like in J2EE-containers using wrappers). -->
   <attribute attribute-name="dbcp.accessToUnderlyingConnectionAllowed"
       attribute-value="false"/>
   </connection-pool>

   <!-- alternative sequence manager implementations, see Sequence Manager guide -->
   <sequence-manager className="org.apache.ojb.broker.util.sequence.SequenceManagerHighLowImpl">
       <!-- attributes supported by SequenceManagerHighLowImpl,
            SequenceManagerInMemoryImpl, SequenceManagerNextValImpl,
            please see Sequence Manager guide or/and javadoc of class for
            more information -->
       <attribute attribute-name="seq.start" attribute-value="20"/>
       <attribute attribute-name="autoNaming" attribute-value="true"/>

       <!-- attributes supported by SequenceManagerHighLowImpl,
            please see Sequence Manager guide or/and javadoc of classes
            for more information -->
       <attribute attribute-name="grabSize" attribute-value="20"/>

       <!-- optional attributes supported by SequenceManagerNextValImpl (support depends
            on the used database), please see Sequence Manager guide or/and javadoc of
            classes for more information -->
       <!-- attribute attribute-name="seq.as" attribute-value="INTEGER"/ -->
       <!-- attribute attribute-name="seq.incrementBy" attribute-value="1"/ -->
       <!-- attribute attribute-name="seq.maxValue" attribute-value="999"/ -->
       <!-- attribute attribute-name="seq.minValue" attribute-value="1"/ -->
       <!-- attribute attribute-name="seq.cycle" attribute-value="false"/ -->
       <!-- attribute attribute-name="seq.cache" attribute-value="20"/ -->
       <!-- attribute attribute-name="seq.order" attribute-value="false"/ -->

   </sequence-manager>
  </jdbc-connection-descriptor>

  <!-- Datasource example -->
  <!-- jdbc-connection-descriptor
       jcd-alias="default"
       default-connection="true"
       platform="Hsqldb"
       jdbc-level="2.0"
       jndi-datasource-name="java:DefaultDS"
       username="sa"
       password=""
       batch-mode="false"
       useAutoCommit="0"
       ignoreAutoCommitExceptions="false"
  >
       Add the other elements like object-cache, connection-pool,
       sequence-manager here.

  /jdbc-connection-descriptor -->




On 2/3/06, Thomas Dudziak [EMAIL PROTECTED] wrote:

 On 2/3/06, Bruno CROS [EMAIL PROTECTED] wrote:

  I'm afraid I need to re-patch the distributed torque-gen-3.1.1.jar to have
  the TIMESTAMP JDBC type generated for java.sql.Date and java.sql.Timestamp,
  as I wrote in an old post (specific to Oracle 9i and older).

 You might want to try DdlUtils (http://db.apache.org/ddlutils) instead
 of Torque; it uses the same schema format and contains an Oracle9
 platform (and an Oracle10 one) that can use TIMESTAMP.

 Tom

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: Migrating to 1.0.4

2006-02-03 Thread Bruno CROS
Note that my OJB_DMAP_ENTRIES table has not been created!!



On 2/3/06, Bruno CROS [EMAIL PROTECTED] wrote:

 I have already patched torque-3.1.1.jar, thanks for the advice. I've worked
 from the start with the build.xml of ojb-blank.jar (so Torque).

 I now have the right database generated. That's a good point and I thank
 you all.

 On the other hand, I have two database connections and it seems they no
 longer work. Of course, I can't see any errors. I don't know how to monitor
 DB connections!? Shame on me, I know.

 The particularity of one of these connections is that the database was
 made with the OJB 1.0.1 release. Understand that this database is
 accessed by my application and by another (bigger) one, so upgrading this
 DB is not really suitable.

 Nevertheless, understand too that my application needs to be connected to
 those two databases, mine in 1.0.4 and the big one in 1.0.1. At the same
 time, of course.

 My doubts are about repository_internal.xml (describing the OJB tables).
 If these tables are not the same in the two DBs, how could it work??

 You will understand what I mean when you see my repository.xml file and the
 two next files, parts of it.

 ==
 my repository.xml file
 =

 <!DOCTYPE descriptor-repository SYSTEM "repository.dtd" [
 <!ENTITY databaseMathieu SYSTEM "repository_database_mathieu.xml">
 <!ENTITY databaseRushDb SYSTEM "repository_database_rushDb.xml">
 <!ENTITY internal SYSTEM "repository_internal.xml">
 <!ENTITY user SYSTEM "repository_user.xml">
 <!ENTITY rushDb SYSTEM "repository_rushDb.xml">
 ]>


 <descriptor-repository version="1.0"
     isolation-level="read-uncommitted"
     proxy-prefetching-limit="50">


     <!-- connection Mathieu -->
     &databaseMathieu;
     <!-- connection rushDB -->
     &databaseRushDb;

     <!-- include ojb internal mappings here; comment this if you don't need
          them -->
     &internal;

     <!-- mapping Mathieu -->
     &user;

     <!-- mapping RushDb -->
     &rushDb;


 ==
 repository_database_mathieu.xml
 ===

 <!-- This connection is used as the default one within OJB -->
 <jdbc-connection-descriptor
     jcd-alias="default"
     default-connection="true"
     platform="Oracle9i"
     jdbc-level="2.0"
     driver="oracle.jdbc.driver.OracleDriver"
     protocol="jdbc"
     subprotocol="oracle"
     dbalias="thin:@P615-5:1521:MATHIEU"
     username="xxx"
     password=""
     batch-mode="false"
     useAutoCommit="1"
     ignoreAutoCommitExceptions="false"
 >

     <!--
        On initialization of connections the ConnectionFactory changes the
        'autoCommit' state depending on the used 'useAutoCommit' setting.
        This doesn't work in all situations/environments, thus for
        useAutoCommit=1 the ConnectionFactory no longer sets autoCommit to
        true on connection creation.
        To use the old behavior (OJB version 1.0.3 or earlier) set this
        property to 'true'; then OJB changes the autoCommit state (if needed)
        of newly obtained connections at connection initialization to 'true'.
        If 'false' or if this property is removed, OJB doesn't try to change
        the connection autoCommit state at connection initialization.
     -->
     <attribute attribute-name="initializationCheck" attribute-value="false"/>

     <!-- alternative cache implementations, see docs section "Caching" -->
     <object-cache class="org.apache.ojb.broker.cache.ObjectCacheTwoLevelImpl">
         <!-- meaning of attributes, please see docs section "Caching" -->
         <!-- common attributes -->
         <attribute attribute-name="cacheExcludes" attribute-value=""/>

         <!-- ObjectCacheTwoLevelImpl attributes -->
         <attribute attribute-name="applicationCache"
             attribute-value="org.apache.ojb.broker.cache.ObjectCacheDefaultImpl"/>
         <attribute attribute-name="copyStrategy"
             attribute-value="org.apache.ojb.broker.cache.ObjectCacheTwoLevelImpl$CopyStrategyImpl"/>
         <attribute attribute-name="forceProxies" attribute-value="false"/>

         <!-- ObjectCacheDefaultImpl attributes -->
         <attribute attribute-name="timeout" attribute-value="900"/>
         <attribute attribute-name="autoSync" attribute-value="true"/>
         <attribute attribute-name="cachingKeyType" attribute-value="0"/>
         <attribute attribute-name="useSoftReferences" attribute-value="true"/>
     </object-cache>

     <!-- For more info, see section "Connection Handling" in docs -->
     <connection-pool
         maxActive="30"
         validationQuery=""
         testOnBorrow="true"
         testOnReturn="false"
         whenExhaustedAction="0"
         maxWait="1">

         <!-- Set fetchSize to 0 to use driver's default. -->
         <attribute attribute-name="fetchSize" attribute-value="0"/>

         <!-- Attributes with name prefix "jdbc." are passed directly to
              the JDBC driver. -->
         <!-- Example setting (used by Oracle driver when Statement
              batching is enabled

1-n Auto-retrieve troubles

2006-02-02 Thread Bruno CROS
 Hello,

I have now spent a lot of time debugging two batch processes.
The first process creates a simple 1-n pair of objects (consider that class
UnityCar references a class Unity with a 1-n relation). Creation is done like
this :

- tx.begin()
- Unity u = new Unity()
- tx.lock(u, Transaction.WRITE);
- UnityCar uc = new UnityCar();
- uc.setUnity(u);
- tx.lock(uc, Transaction.WRITE);
- tx.commit();

Note that we have written a lot of code that works perfectly like this one.
Pure luck!

My problem occurs when a second process reads a Unity object. With
auto-retrieve on the collection (we need cache speed), OJB should give me
a collection of one UnityCar, but it only does so a good while after the first
process has run!! Immediately after the first process, I get a
NullPointerException when accessing the unityCars collection. I know that my
object is badly loaded (no collection on unity.getUnityCars()), and if I wait
(probably for a cache expiration), that same object is loaded correctly, with
the right collection!!

I tried a lot of solutions (see also the sketch below) :
- mark the first created object dirty with markDirty. No luck.
- reload the object (with getByIdentity) after materialization. No luck.
- checkpoint and reload. No luck.
- restart. Yes, of course it works. But I definitely can't!!
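Beyond the attempts above, a sketch of one more thing worth trying: forcing a
fresh materialization through the PB-api. This is a guess, under the
assumption that the stale collection comes from the cache; Unity and u are
the class and instance from the example above, and the broker calls are the
standard PersistenceBroker ones:

    // Sketch only: evict the stale instance, re-read it, and make OJB
    // materialize its references/collections again.
    import org.apache.ojb.broker.Identity;
    import org.apache.ojb.broker.PersistenceBroker;
    import org.apache.ojb.broker.PersistenceBrokerFactory;

    public class RefreshHelper
    {
        public static Unity refresh(Unity u)
        {
            PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
            try
            {
                Identity oid = new Identity(u, broker);
                broker.removeFromCache(u);            // drop the cached copy
                Unity fresh = (Unity) broker.getObjectByIdentity(oid);
                broker.retrieveAllReferences(fresh);  // re-load the 1-n collection
                return fresh;
            }
            finally
            {
                broker.close();
            }
        }
    }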

- hmm, do I end up with several objects, as many as transactions, after
commit, no??

Working with OJB version 1.0.1. I know it's old, but it has not worked so
badly for everything we have done to date.

Can someone explain that? Help!!
Migrate to 1.0.3? To 1.0.4?! Which one? What is supposed to be
rewritten?

Thanks a lot for helping me.


Sorry for my poor English.


Re: OJB and temporary tables

2005-06-27 Thread Bruno CROS
Why change the repository? My first idea was to (re)create the table after
modifying the Torque-generated script for the table, so that it says something
like ON COMMIT DELETE ROWS (Oracle).
 Then my table is mapped only for the duration of the transaction, and I use
it only with a ReportQuery to avoid stale-cache errors. Maybe I have to
mount/unmount the repository table description to be sure the cache is
cleared? Is that what you're suggesting?
 Thanks
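A sketch of the Oracle DDL being described here, assuming a
transaction-scoped global temporary table (table and column names are
illustrative, not from this thread):

    -- Rows are private to the session and are deleted at commit
    CREATE GLOBAL TEMPORARY TABLE TMP_CHECK_VALUES (
        CHECK_ID  NUMBER NOT NULL,
        CHECK_VAL VARCHAR2(100)
    ) ON COMMIT DELETE ROWS;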
 On 6/27/05, Charles Anthony [EMAIL PROTECTED] wrote: 
 
 OK - quick response, no, it is not possible to map to temporary tables in
 OJB.
 
 However, it might be possible to dynamically change the repository 
 metadata
 to point to the temporary table...
 
 
 
 -Original Message-
 From: Bruno CROS [mailto:[EMAIL PROTECTED]
 Sent: 27 June 2005 10:17
 To: OJB Users List
 Subject: Re: OJB and temporary tables
 
 
  Help, please, has anyone tried to use a temporary table?
  I'm wondering about a workaround: creating records in an ODMG transaction
  and rolling them back systematically. Risky, isn't it? I don't know exactly
  why, but I think there is a better way...
  Thanks for any ideas
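  For what it's worth, a sketch of the rollback workaround being weighed
  here (ODMG api; odmg and candidates are assumed to exist in scope, and
  note that OJB may not flush the staged rows to the database before
  commit, which is one reason this is risky):

      // Sketch only: stage the rows in a transaction, run the check, then
      // abort so the staged rows are always discarded.
      org.odmg.Transaction tx = odmg.newTransaction();
      tx.begin();
      try
      {
          for (java.util.Iterator it = candidates.iterator(); it.hasNext();)
          {
              tx.lock(it.next(), org.odmg.Transaction.WRITE);  // schedule insert
          }
          // ... run the check query here ...
      }
      finally
      {
          tx.abort();  // systematically roll back
      }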
 On 6/23/05, Bruno CROS [EMAIL PROTECTED] wrote:
 
   Hello,

   I did not find any concrete discussions about dealing with temporary
   tables in OJB.

   My aim is to check some values against records in OJB tables. The
   query is a complex join query, with multiple record entries.

   Making a query for each check would be too heavy and too slow. My first
   idea is to insert the records to be checked into a temporary table,
   and check them against it with a report query.

   Is there a better way? Something I missed?

   Does OJB support temporary table mapping? If not, why?

   Likewise, how would stored procedures help me?

   Thanks a lot.
 
 
 
 ___
 HPD Software Ltd. - Helping Business Finance Business
 Email terms and conditions: 
 www.hpdsoftware.com/disclaimerhttp://www.hpdsoftware.com/disclaimer
 
 
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 



OJB and temporary tables

2005-06-23 Thread Bruno CROS
  Hello,

I did not find any concrete discussions about dealing with temporary
tables in OJB.

My aim is to check some values against records in OJB tables. The
query is a complex join query, with multiple record entries.

Making a query for each check would be too heavy and too slow. My first
idea is to insert the records to be checked into a temporary table,
and check them against it with a report query, as sketched below.
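A sketch of the report-query side of that idea, assuming a class
TmpCheckValue mapped onto the temporary table (all names illustrative; the
report-query calls are the standard PB-api ones):

    // Sketch only: a ReportQuery returns raw rows and bypasses the object cache.
    import java.util.Iterator;
    import org.apache.ojb.broker.PersistenceBroker;
    import org.apache.ojb.broker.PersistenceBrokerFactory;
    import org.apache.ojb.broker.query.Criteria;
    import org.apache.ojb.broker.query.QueryFactory;
    import org.apache.ojb.broker.query.ReportQueryByCriteria;

    public class CheckRunner
    {
        public static void runCheck()
        {
            PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
            try
            {
                Criteria crit = new Criteria();
                crit.addNotNull("checkVal");   // stand-in for the real check criteria
                ReportQueryByCriteria q =
                    QueryFactory.newReportQuery(TmpCheckValue.class, crit);
                q.setAttributes(new String[] { "checkId", "checkVal" });
                for (Iterator rows = broker.getReportQueryIteratorByQuery(q); rows.hasNext();)
                {
                    Object[] row = (Object[]) rows.next();
                    // row[0] = checkId, row[1] = checkVal
                }
            }
            finally
            {
                broker.close();
            }
        }
    }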

Is there a better way? Something I missed?

Does OJB support temporary table mapping? If not, why?

Likewise, how would stored procedures help me?

Thanks a lot.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



repository.xml partially read

2005-03-18 Thread Bruno CROS
   Hello,

I set up my repository.xml as in the ojb-blank project;
a description of the file is below. It works very well on Tomcat
over Windows, but not at all on AIX!

The AIX log tells me that no repository is loaded; the parsing seems to
stop in a sub-file!! (see below) So OJB starts with empty metadata and
no default connection...

Has anyone experienced such a problem? Note that my project works
fine on a PC Tomcat server with JDK 1.4.1... The trouble occurs only
once the project is running on AIX (Unix), with IBM JRE 1.4.1.

First idea: does SAX depend on the JDK?

Thanks for other ideas.

Repository.xml


<!DOCTYPE descriptor-repository SYSTEM "repository.dtd" [
<!ENTITY databaseMathieu SYSTEM "repository_database_mathieu.xml">
<!ENTITY databaseRushDb SYSTEM "repository_database_rushDB.xml">
<!ENTITY internal SYSTEM "repository_internal.xml">
<!ENTITY user SYSTEM "repository_user.xml">
<!ENTITY rushDb SYSTEM "repository_rushDb.xml">
]>

<descriptor-repository version="1.0"
    isolation-level="read-uncommitted"
    proxy-prefetching-limit="51">

    <!-- connection Mathieu -->
    &databaseMathieu;
    <!-- connection rushDB -->
    &databaseRushDb;

    <!-- include ojb internal mappings here; comment this if you don't need
         them -->
    &internal;

    <!-- mapping Mathieu -->
    &user;

    <!-- mapping RushDb -->
    &rushDb;

</descriptor-repository>


Logs
= 

599   INFO  [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryPersistor - OJB Descriptor Repository:
file:/opt/bea/user_projects/domains/mydomain/applications/mathieu/WEB-INF/classes/repository.xml
599   INFO  [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryPersistor - Building repository from
:file:/opt/bea/user_projects/domains/mydomain/applications/mathieu/WEB-INF/classes/repository.xml
603   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryPersistor - RespostoryPersistor using SAXParser :
weblogic.xml.jaxp.RegistrySAXParser
630   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler - startDoc
635   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -   descriptor-repository
636   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -  isolation-level: read-uncommitted
636   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -  proxy-prefetching-limit: 51
636   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -  version: 1.0
638   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute
638   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute-name: timeout
638   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute-value: 900
638   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute
638   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute
639   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute-name: autoSync
639   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute-value: true
639   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute
639   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute
639   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute-name: cachingKeyType
639   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute-value: 0
640   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -attribute
640   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler -   object-cache
640   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler - Ignoring unused Element connection-pool
640   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler - Ignoring unused Element sequence-manager
640   DEBUG [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.RepositoryXmlHandler - Ignoring unused Element
jdbc-connection-descriptor
643   INFO  [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.MetadataManager - No repository.xml file found, starting with empty
metadata and connection configuration
648   INFO  [ExecuteThread: '14' for queue: 'weblogic.kernel.Default']
metadata.MetadataManager - No 'default-connection' attribute set 

Re: repository.xml partially read

2005-03-18 Thread Bruno CROS
repository.dtd is in the classpath, and I tried to declare it in
repository.xml with SYSTEM "repository.dtd". No luck.
I think the failure occurs because I have two jdbc-connection-descriptors,
and the DTD allows only one connector (according to the doc in the file). But
what I cannot understand is that it works fine with Xerces and Tomcat...

It works too if I put only one jdbc-connection-descriptor, but that's not my
aim; I need to access the two databases!!

Can you confirm that the XML file allows only one jdbc-connection-descriptor
in the DTD grammar? (I'm not an XML specialist.)

Thanks




Note also that my parser is the one WebLogic provides...


On Fri, 18 Mar 2005 15:10:26 +0100, Martin Kalén [EMAIL PROTECTED] wrote:
 Bruno CROS wrote:
  I setup a repository.xml as ojb-blank project
  description of my file is below. This file works very well on Tomcat
  over windows, but not at all on AIX !
 
 This is the third thread on the same issue in a pretty short time; the
 other two were resolved by changing the DTD declaration and/or making sure
 repository.dtd is in the CLASSPATH.
 
 Did you try that?
 
 If you have a web address in the DTD declaration, change your doctype
 declaration to SYSTEM "repository.dtd" to prevent HTTP access being the
 point of failure during parsing (not all parsers can do this, and it's silly
 to get the DTD via HTTP in production anyway).
 
 Regards,
  Martin
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: repository.xml partially read

2005-03-18 Thread Bruno CROS
Wow, it was just awful bad naming. I had forgotten that Windows is
not case sensitive!!! Unix is... argh.

The XML database file referenced in my repository.xml was named
repository_database_rushDB instead of repository_database_rushDb.

So confused. So sorry. Thanks again for your help.
++


On Fri, 18 Mar 2005 17:07:47 +0100, Thomas Dudziak [EMAIL PROTECTED] wrote:
 No, you can have multiple jdbc-connection-descriptor elements, but
 they  need to differ in the jcd-alias.
 
 Tom
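 A minimal sketch of what Tom describes, with illustrative aliases and
 hosts (attribute values are placeholders patterned on the descriptors
 quoted earlier in this archive):

     <jdbc-connection-descriptor
         jcd-alias="default"
         default-connection="true"
         platform="Oracle9i"
         jdbc-level="2.0"
         driver="oracle.jdbc.driver.OracleDriver"
         protocol="jdbc"
         subprotocol="oracle"
         dbalias="thin:@host1:1521:MATHIEU"
         username="xxx"
         password="xxx"/>

     <jdbc-connection-descriptor
         jcd-alias="rushDb"
         platform="Oracle9i"
         jdbc-level="2.0"
         driver="oracle.jdbc.driver.OracleDriver"
         protocol="jdbc"
         subprotocol="oracle"
         dbalias="thin:@host2:1521:RUSHDB"
         username="xxx"
         password="xxx"/>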
 
 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]
 


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Oracle 9i date-timestamp tip

2005-03-03 Thread Bruno CROS
Even if it has already been posted, I just want to clarify the tip:

- according to the Oracle 9i notes

http://www.oracle.com/technology/tech/java/sqlj_jdbc/htdocs/jdbc_faq.htm#08_01

it's important to note that mapping date attributes such as
java.sql.Time, java.util.Date (with hours...) and even
java.sql.Timestamp onto the Oracle DATE type IS NOT POSSIBLE since
Oracle 9.2!! (Hours are lost... with everything that can follow from that!)

Oracle's first tip is the best one. Replacing all DATE columns by
TIMESTAMP columns works very well, so you are not obliged to use the
V8compatibility tip (which did not work for me, sorry!) and you can use the
new 9i types.

For me, the problem was to keep using Torque; Torque had to generate
Oracle TIMESTAMP (instead of DATE) for all the JDBC column types DATE, TIME
and TIMESTAMP. I looked in the torque-3.0.2 archive, found a file named
db.props in sql/base/oracle, changed the 3 lines and updated the jar with the
jar tool. Torque now generates TIMESTAMP (according to the 9i specifications)
for the DATE, TIME and TIMESTAMP JDBC types. I don't think the Torque
project can accept such a workaround... the job to get something clean
is bigger than that!!
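For illustration, a hedged sketch of what the three changed lines in
sql/base/oracle/db.props might look like after the patch, assuming the stock
file maps these JDBC types to DATE (the exact stock contents are not quoted
here):

    # JDBC type -> Oracle native type, after the patch
    DATE = TIMESTAMP
    TIME = TIMESTAMP
    TIMESTAMP = TIMESTAMP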

JDBC now works as it did before 9.2, with java.sql.Timestamp, allowing
comparisons and hour access on all Java types... That's what we all need!!

Regards to all posting here ;-)

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]