Good job tracking it down.

Regarding the tools for debugging: no ideas from me, sorry; I need to get better at using those kinds of things.

On 12/09/2013 12:03 PM, Brian Craft wrote:
Following up on this, after resolving the memory problems in my code, h2 handles my 1G transaction.

Tracking down the memory problems was enormously painful. jmap/jhat provided better information than VisualVM, but they were unreliable: jhat would randomly refuse to read the heap dumps generated by jmap, and it would consistently freeze when tracing particular objects. I suspect it was mishandling circular references in those cases.

Are there any better tools for jvm memory debugging?

On Tuesday, September 10, 2013 1:21:51 PM UTC-7, Brian Craft wrote:

    Well, that was way more informative than VisualVM. The top three
    items, amounting to 2G:

     num     #instances         #bytes  class name
    ----------------------------------------------
       1:         70466     1104237992  [C
       2:      45338856     1088132544  java.lang.Float
       3:       1544435      403100896  [Ljava.lang.Object;

    I don't believe any of this is h2. There are no floats in the
    commit, and the char arrays in the commit are less than 6M. So my
    current guess is that I'm leaking something, though I don't
    understand how splitting it into multiple transactions is allowing
    it to work.
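For reference, tens of millions of java.lang.Float instances usually point at autoboxing, e.g. floats collected into a List<Float> instead of a primitive float[]. A minimal, self-contained sketch of the difference (class and variable names are hypothetical, not from the original code):

```java
import java.util.ArrayList;
import java.util.List;

public class BoxedFloatDemo {
    public static void main(String[] args) {
        int n = 1_000_000;

        // Each element autoboxes into a java.lang.Float object
        // (~16 bytes of per-object overhead, plus the reference
        // held in the list's backing array).
        List<Float> boxed = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            boxed.add((float) i);
        }

        // A primitive array stores the same values at 4 bytes each,
        // with no per-element object on the heap.
        float[] primitive = new float[n];
        for (int i = 0; i < n; i++) {
            primitive[i] = i;
        }

        System.out.println(boxed.size() + " " + primitive.length);
    }
}
```

In a jmap histogram the boxed version shows up exactly as above: one java.lang.Float entry per element, plus an [Ljava.lang.Object; entry for the list's backing array.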

    On Tuesday, September 10, 2013 12:51:00 PM UTC-7, Brian Craft wrote:

        I do it with a bunch of inserts. The tables already exist. It
        happens even if I use UNDO_LOG=0, which I think disables the
        undo log.

        I'll try jmap.

        On Tuesday, September 10, 2013 11:50:43 AM UTC-7, Thomas
        Mueller wrote:

            Hi,

            It depends on how you do the import. Could you get a heap
            histogram for a large import (jmap -histo:live <pid>), and
            could you tell us how you do the import exactly?

            If you use "create table ... as select" then it can't run
            out of memory (as far as I know), because no undo log is
            needed.

            If you execute many insert or update statements, then the
            undo log is used, and at least a few bytes are needed per
            record (the actual data is stored to disk at some point,
            but the position of the data is not). So in theory it
            could run out of memory, even though the transaction
            would need to be huge.

            By the way, with the new MVStore storage (not production
            ready yet), there should no longer be a limit on the size
            of a transaction, as no undo log is needed.

            Regards,
            Thomas

            On Tue, Sep 10, 2013 at 10:51 AM, Ryan How
            <[email protected]> wrote:

                How come is it getting OOM on import? It shouldn't
                do that, should it? I've imported several GB of data
                before; I did get OOM, but increasing the heap size
                to around 1GB worked for me. I didn't need to go to
                crazy sizes...


                On 10/09/2013 4:32 PM, Noel Grandin wrote:


                    On 2013-09-06 20:15, Brian Craft wrote:

                        I need to load about 1G of data into an
                        existing db, while maintaining data coherence.
                        Wrapping the inserts in one transaction
                        results in out-of-memory problems in the jvm.
                        I increased the max heap size to 8G without
                        improvement. I can split it into a bunch of
                        smaller commits, which works fine, but then on
                        error I need a bunch of application code to
                        delete the data from the commits that
                        succeeded, and those deletes need their own
                        transactions, which could also fail.

                        Is there any better way to do this?


                    Not really.
                    One strategy would be to copy the DB, since with
                    H2 it's just a single file, and then run your
                    import process.
                    If it fails, just replace the modified DB with the
                    backup.
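                    That copy-and-restore strategy is simple to
                    script. A minimal sketch using java.nio (file
                    names are hypothetical, and a throwaway file
                    stands in for the real database file, which
                    should be closed before copying):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class BackupBeforeImport {
    public static void main(String[] args) throws IOException {
        // Stand-in for the single H2 database file; close the DB
        // first so the file on disk is in a consistent state.
        Path db = Paths.get("data.h2.db");
        Files.write(db, "placeholder".getBytes());

        // Take the backup before starting the big import.
        Path backup = Paths.get("data.h2.db.bak");
        Files.copy(db, backup, StandardCopyOption.REPLACE_EXISTING);

        // ... run the import here; if it fails, restore the copy:
        // Files.move(backup, db, StandardCopyOption.REPLACE_EXISTING);

        System.out.println(Files.exists(backup));
    }
}
```

                    If the import succeeds, just delete the .bak
                    file; if it fails, the move puts the untouched
                    copy back.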



                --
                You received this message because you are subscribed
                to the Google Groups "H2 Database" group.
                To unsubscribe from this group and stop receiving
                emails from it, send an email to
                [email protected].
                To post to this group, send email to
                [email protected].
                Visit this group at
                http://groups.google.com/group/h2-database.
                For more options, visit
                https://groups.google.com/groups/opt_out.

