Oh, sorry, I hadn't grasped that,

but then I doubt Jelly can do anything about it... am I wrong, or is it normal for an SQL driver to keep an amount of data in memory as long as the transaction is not committed?

Do you see anything in the Jelly code that stores something?
Or could it be something with tag caching? I have a hard time believing that.

Would you have the time to output profiling data and use HPjmeter (http://www.hp.com/products1/unix/java/hpjmeter/) to see what was allocated?



On 10 Jul 07, at 01:18, Karr, David wrote:

Each insert is a separate "call" to a separate "insert" tag that I defined in my dbutils. Each insert is in a separate transaction.

So, for instance, here's my "insert" tag:

  <define:tag name="insert">
    <sql:transaction dataSource="${ds}">
      <j:catch var="ex">
        <sql:update sql="insert into ${table} VALUES (${values})" var="upd"/>
      </j:catch>
      <j:if test="${ex != null}">
        FAILED INSERT. <j:expr value="${ex}"/>
        <ant:fail message="Failed table insert"/>
      </j:if>
    </sql:transaction>
  </define:tag>

I essentially call this about 18000 times with different parameters. Watching Task Manager, I see the memory usage slowly increase as it inserts rows.
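For reference, a call site presumably looks something like this (the "my" namespace prefix and the literal parameter values here are my assumptions, based on the tag definition above):

```
<!-- Hypothetical invocation; "my" stands for whatever prefix the
     dbutils tag library is bound to in the generated script. -->
<my:insert table="EMPLOYEE" values="'1', 'Smith', 'Engineering'"/>
<my:insert table="EMPLOYEE" values="'2', 'Jones', 'Marketing'"/>
<!-- ...repeated ~18000 times by the generator... -->
```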

-----Original Message-----
From: Paul Libbrecht [mailto:[EMAIL PROTECTED]
Sent: Monday, July 09, 2007 4:13 PM
To: Jakarta Commons Users List
Subject: Re: [jelly] How to make a large Jelly-SQL script more memory efficient?


I do not see any other way than programmatically separating the "lines" of the SQL script. Is this doable? If it's a backup, there's a good chance that each query is on its own line, no?

If yes, then it should be pretty easy to use a LineNumberReader and feed each line as an SQL query... or am I mistaken?
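A minimal Java sketch of that idea. The class and method names are mine, and it assumes one statement per line, as in a dump where each query sits on its own line; blank lines are skipped. The JDBC hand-off is shown only as a comment, since it depends on your connection setup:

```java
import java.io.LineNumberReader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class SqlLineFeeder {

    // Read the script line by line so the whole file never sits in
    // memory as a single string; returns the non-blank lines as
    // individual SQL statements.
    static List<String> readStatements(java.io.Reader script) throws java.io.IOException {
        List<String> statements = new ArrayList<>();
        try (LineNumberReader reader = new LineNumberReader(script)) {
            String line;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                if (!line.isEmpty()) {
                    statements.add(line);
                }
            }
        }
        return statements;
    }

    public static void main(String[] args) throws Exception {
        String script = "insert into t VALUES (1)\n\ninsert into t VALUES (2)\n";
        for (String sql : readStatements(new StringReader(script))) {
            System.out.println(sql);
            // In real use, each statement would be fed to JDBC one at
            // a time, e.g.:
            // try (Statement st = connection.createStatement()) {
            //     st.executeUpdate(sql);
            // }
        }
    }
}
```

In real use you would stream the file with a FileReader and execute each line as you read it, rather than collecting them into a list as this sketch does for illustration.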

Alternatively, we could tweak the sql taglib to actually read the SQL script and not load it as a string, but this would mean decomposing the lines in an appropriate way. I do not know how that could be done. If you know of a generic way, let's open a JIRA issue and work on it; it'd be easy.



On 10 Jul 07, at 01:04, Karr, David wrote:

I have a large Jelly-SQL script (about 3.8 MB) that just does SQL calls to insert rows into a database. Each row it inserts is pretty small, but it inserts a lot of rows (relatively). It currently inserts about 18000 rows. What I'm finding is that the script won't even run on Windows (2 GB process limit). It takes too much memory. A previous version of the script only inserted about 11000 rows, and it was able to run on Windows. The vast majority of the script is generated by a Java app that processes a spreadsheet.

While the script is running, I watch it in Task Manager, and I see the memory very slowly increasing. It runs for quite a while. It finally runs out of memory in the JVM and fails.

Is there some strategy for building a script like this so it is more memory efficient?

The script currently has a top-level "j:jelly" element, imports a utility package (some dbutils), and then imports the generated portion of the script.

I can temporarily work around this by building multiple top-level scripts that call separate pieces of the big script, but that's annoying. I'm also trying to get this set up on a Unix system, to get a larger process size.

To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
