Jason -

First off, your loader code looks valid. My only suggestion would be to
bump the % 100 to % 1000, i.e. flush/clear every 1000 rows instead of
every 100. That will help performance ... as long as your BorderPoint
object isn't huge.
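
To make the suggestion concrete, here's a minimal sketch of that batched
persist loop with the larger batch size. The EntityManager calls are stood
in by a small stub interface so the batching logic can be run on its own;
in your repository class they'd be the real em.persist() / em.flush() /
em.clear() calls. The class and interface names here are just for
illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPersistSketch {

    static final int BATCH_SIZE = 1000; // was 100 in the original loop

    // Stub standing in for the relevant EntityManager operations.
    interface Persister {
        void persist(Object entity);
        void flushAndClear(); // em.flush(); em.clear();
    }

    static void persistList(List<?> objectList, Persister em) {
        int i = 1;
        for (Object bp : objectList) {
            em.persist(bp);
            // Flushing every BATCH_SIZE rows pushes the pending inserts
            // to the database, and clear() detaches the now-flushed
            // entities so the persistence context stays small.
            if (i % BATCH_SIZE == 0) {
                em.flushAndClear();
            }
            i++;
        }
        // Anything persisted since the last flush goes out with the commit.
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<Integer>();
        for (int n = 0; n < 3500; n++) {
            rows.add(n);
        }

        final int[] counts = {0, 0}; // [0] = persists, [1] = flushes
        persistList(rows, new Persister() {
            public void persist(Object entity) { counts[0]++; }
            public void flushAndClear() { counts[1]++; }
        });
        // 3500 rows with BATCH_SIZE = 1000 -> flushes at 1000, 2000, 3000
        System.out.println(counts[0] + " persists, " + counts[1] + " flushes");
    }
}
```

The win comes from doing fewer flush/clear cycles while still keeping the
persistence context from growing without bound; the sweet spot depends on
how big each BorderPoint is in memory.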

 > I'm using it to persist data to a table that is quite large, and
performance seemed to degrade quickly when the table hit 3M rows. Currently,
I'm using MySQL with InnoDB tables.
Can you explain this degradation a little better? If you start your
application and then try to persist data to an already fully loaded DB
(3M+ rows), is that slow from the start? Or does the application need to
run for some amount of time before things start slowing down?

Are you running in a Java EE environment or JSE? Are you using connection
pooling? Is this a part of a normal application run, or a part of a loading
routine?

HTH


On Thu, May 19, 2011 at 2:57 PM, Jason Ferguson <[email protected]> wrote:

> I'm using this method in a Repository class and was wondering if
> someone could do a quick sanity check on it:
>
>    @Transactional
>    public void persistList(List<BorderPoint> objectList) throws
> RepositoryException {
>
>        EntityManager em = entityManagerFactory.createEntityManager();
>
>        try {
>            em.getTransaction().begin();
>            int i = 1;
>            for (BorderPoint bp : objectList) {
>                em.persist(bp);
>                if (i % 100 == 0) {
>                    em.flush();
>                    em.clear();
>                }
>                i++;
>            }
>            em.getTransaction().commit();
>        } catch (EntityExistsException ex) {
>            // need to log this somehow
>            //log.warning("persist() threw EntityExistsException: " +
> ex.getMessage());
>            ex.printStackTrace();
>            throw new RepositoryException(ex);
>        }
>        catch (Exception e) {
>            e.printStackTrace();
>        } finally {
>            em.close();
>        }
>    }
>
> I'm using it to persist data to a table that is quite large, and
> performance seemed to degrade quickly when the table hit 3M rows.
> Currently, I'm using MySQL with InnoDB tables.
>
> Jason
>



-- 
*Rick Curtis*
