Hi Jeff,

Thanks for the response. I noticed that the integration tests were in the svn 
trunk, so I updated it and ran the tests. While there were some failing test 
cases, more of them passed than on the git JPA branch, so it appears I have 
introduced a regression of some sort. I will work on getting the branch into 
the same state as the trunk.

If possible, I would like to leave all the TestNG integration tests as is. If 
they do need to be modified, I imagine I would only need to clean up better 
after test execution in the test teardown. I don't believe the tests can run 
in parallel due to port conflicts.
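
For illustration, the kind of teardown cleanup I have in mind would look 
roughly like the following (the class and field names are placeholders, not 
the actual test code; it assumes only that TestNG is on the test classpath):

import java.io.File;
import org.testng.annotations.AfterMethod;

public class Axis2WarTestSupport {
    // placeholder: directory the test copied the war/database into during setup
    protected File workDir;

    @AfterMethod(alwaysRun = true)
    public void cleanUp() {
        // remove everything the test created so the next test starts clean
        if (workDir != null) {
            deleteRecursively(workDir);
            workDir = null;
        }
    }

    private static void deleteRecursively(File f) {
        File[] children = f.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        f.delete();
    }
}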

Is there any documentation on the invocation sequences in the engine, or any 
hints on tracking down execution problems? I noticed there are multiple layers 
of callables and futures, and due to the short timeouts I am having a hard 
time tracking invocations through the call stacks.
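
For example, a small wrapper along these lines (the class name is mine, purely 
a local debugging aid) could at least make each hop show up in the logs:

import java.util.concurrent.Callable;

// debugging aid: logs entry/exit of a wrapped Callable with a label so the
// invocation sequence can be followed across executors
public class TracingCallable<T> implements Callable<T> {
    private final String label;
    private final Callable<T> delegate;

    public TracingCallable(String label, Callable<T> delegate) {
        this.label = label;
        this.delegate = delegate;
    }

    public T call() throws Exception {
        long start = System.currentTimeMillis();
        System.out.println("BEGIN " + label + " on " + Thread.currentThread().getName());
        try {
            return delegate.call();
        } finally {
            System.out.println("END   " + label + " after "
                    + (System.currentTimeMillis() - start) + "ms");
        }
    }
}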

Regards,

Aaron

On Mon Mar 29th, 2010 9:59 AM CDT Jeff Yu wrote:

>Hi Aaron,
>
>comments inline.
>
>
>On Mon, Mar 29, 2010 at 9:31 AM, Aaron Anderson <nickmalt...@yahoo.com> wrote:
>
>> Hi Jeff,
>>
>> I got the axis2-war file tests running but there are failures. I took a
>> look at the ruby file for running the tests and it looks like the main war
>> files are copied to a temp directory per test invocation. Is that the way
>> the tests were designed to be run?
>
>
>From my understanding of the Buildfile, yes, it is.
>
>
>> If so, we may have to add that functionality to the axis2-war integration
>> test setup and teardown methods, since Maven does not support that.
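>>
>> Roughly, the setup side could look something like this (placeholder class
>> and path names, assuming commons-io and TestNG are on the test classpath):
>>
>> import java.io.File;
>> import org.apache.commons.io.FileUtils;
>> import org.testng.annotations.BeforeMethod;
>>
>> public class WarCopyingSetupExample {
>>     // placeholder path to the war produced by the build
>>     private static final File SOURCE_WAR = new File("target/ode-axis2-war.war");
>>
>>     protected File workDir;
>>
>>     @BeforeMethod
>>     public void copyWar() throws Exception {
>>         // give each test its own working copy of the war, like the ruby
>>         // script's per-invocation temp directory
>>         workDir = new File(System.getProperty("java.io.tmpdir"),
>>                 "ode-test-" + System.currentTimeMillis());
>>         FileUtils.forceMkdir(workDir);
>>         FileUtils.copyFileToDirectory(SOURCE_WAR, workDir);
>>     }
>> }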
>
>
>In buildr, TestNG is used to run the test cases. I am wondering whether it
>would be easier if we chose TestNG for this module as well?
>
>
>> Running the tests in a single VM, reusing the same Derby database, causes a
>> Java out-of-memory error for me. At this point I am not sure whether the DAO
>> refactoring or the minor changes I made to the engine caused anything to
>> break. If you get a chance to run the axis2-war tests and could provide some
>> feedback, I would appreciate it.
>>
>
>Didn't get a chance to run it today, will try it tomorrow. Just to check,
>what if you add MaxPermSize for the JVM? Like:
>export JAVA_OPTS="-Xms512M -Xmx512M -XX:MaxPermSize=512M"
>
>-Jeff
>
>
>>
>> Regards,
>>
>> Aaron
>>
>>
>>
>>
>> ________________________________
>> From: Jeff Yu <jeff.yuch...@gmail.com>
>> To: dev@ode.apache.org
>> Sent: Fri, March 26, 2010 12:18:03 AM
>> Subject: Re: JPA DAO refactoring.
>>
>> Hi Aaron,
>>
>> The code is great. IMHO, below are the things that need to be done to get
>> this big patch applied.
>>
>> 1. The axis2-war module test case code is out of date; it still refers to
>> the old dao package, like 'org.apache.ode.bpel.dao'. It seems to me that
>> we didn't compile and run the test cases for this module, did we?
>> 2. Use the buildr build to check if we can get it to build with this. I
>> know this might be the hard part here, unless you are familiar with
>> buildr. We may ask the other devs here to see if they are interested in
>> picking up this task. But I will try to build with it first to see how
>> many problems we have right now.
>>
>> BTW, this refactoring work is so great that I am thinking of migrating it
>> into the Apache ODE 1.x branch. How much effort do you think this move
>> would take? We are trying to add clustering support to the 1.x code base,
>> and a first step there would be to implement the JPA based DAO impl for
>> the scheduler module.
>>
>> Regards
>> Jeff
>>
>> On Fri, Mar 26, 2010 at 7:49 AM, Aaron Anderson <aaronander...@acm.org>
>> wrote:
>>
>> > Hi Jeff,
>> >
>> > I completed the new JPA based SimpleScheduler DAO implementation. Now
>> > there is a JDBC based implementation (the refactored original delegate
>> > implementation), a JPA OpenJPA implementation (the default now), and a
>> > JPA Hibernate implementation. I did not create a new non-JPA Hibernate
>> > implementation since, to my knowledge, JPA will be the persistence
>> > implementation of choice for ODE.
>> >
>> > One last thing that needs to be done is to update the JPA DDL module to
>> > include additional indexes, in case the SQL generator does not index
>> > everything that needs them.
>> >
>> > Also as part of my refactoring I added transactional operations to the
>> > DAOConnection interface so that it can hide the underlying transactional
>> > mechanism in case JTA is not used. To me it makes the DAO usage more
>> > concise. Perhaps in the future the engine and runtime code can be
>> > modified to utilize the DAO transactional operations instead of directly
>> > manipulating the JTA transaction manager.
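>> >
>> > In spirit it is something like the following (only a sketch of the idea,
>> > not the actual interface):
>> >
>> > // sketch: transactional operations on the DAO connection so that callers
>> > // do not have to touch the JTA TransactionManager directly
>> > public interface DaoTransactionOperations {
>> >     void begin();
>> >     void commit();
>> >     void rollback();
>> >
>> >     // run a unit of work, committing on success, rolling back on failure
>> >     <T> T exec(java.util.concurrent.Callable<T> work) throws Exception;
>> > }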
>> >
>> > Please take a look and let me know what more needs to be done for the JPA
>> > refactoring effort.
>> >
>> > Regards,
>> >
>> > Aaron
>> >
>> >
>>
>>
>> --
>> Cheers,
>> Jeff Yu
>>
>> ----------------
>> blog: http://jeff.familyyu.net
>>
>
>
>
>-- 
>Cheers,
>Jeff Yu
>
>----------------
>blog: http://jeff.familyyu.net
