John,
we use a fairly large XML schema, and once we started caching the class
descriptors our average unmarshalling time went down from 215ms to 25ms.
Replacing the Xerces parser with Piccolo saved another 5-7ms.
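
For reference, we switched parsers via castor.properties. The property name
and the Piccolo driver class below are from memory, so double-check them
against your Castor and Piccolo versions:

   # castor.properties: tell Castor which SAX parser class to instantiate
   org.exolab.castor.parser=com.bluecast.xml.Piccolo

The Piccolo jar just needs to be on the classpath.
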
Ramesh

John Weir wrote:

> This is just info that I thought others might be interested in.
>
> I was interested to know how much improvement caching the descriptor classes
> made. I tested it on a reasonably large set of files, each with over 500
> types, about 40% of them repeating.
>
> Caching the descriptors gave a 55% to 75% improvement in unmarshalling
> performance. If anyone has similar performance anecdotes, I for one would be
> interested.
>
> Thanks, John
>
> > -----Original Message-----
> > From: Keith Visco [mailto:[EMAIL PROTECTED]
> > Subject: Re: [castor-dev] System freezes when uploading bigger XML file
> > using Castor
> >
> >
> >
> > In addition to the previous suggestions by Erik and Milan,
> >
> > To improve performance in general, you can also try the following:
> >
> > 1. Disable validation
> >
> >    unmarshaller.setValidation(false);
> >
> > 2. Cache the ClassDescriptorResolver (which holds onto the in-memory
> > descriptors/mappings)
> >
> >    import org.exolab.castor.xml.ClassDescriptorResolver;
> >    import org.exolab.castor.xml.util.ClassDescriptorResolverImpl;
> >
> >    ClassDescriptorResolver cdr = new ClassDescriptorResolverImpl();
> >    ...
> >
> >    while (...more stuff to unmarshal...) {
> >       Unmarshaller unmarshaller = new Unmarshaller(...);
> >       unmarshaller.setResolver(cdr);
> >    }
> >
> > 3. If you're using a mapping file, make sure it's loaded only once.
> >
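> >    For example (a rough sketch; Mapping.loadMapping() and the
> >    Unmarshaller(Mapping) constructor are the usual Castor mapping API
> >    calls, but double-check them against your version):
> >
> >    import org.exolab.castor.mapping.Mapping;
> >
> >    Mapping mapping = new Mapping();
> >    mapping.loadMapping("mapping.xml");
> >    ...
> >    while (...more stuff to unmarshal...) {
> >       Unmarshaller unmarshaller = new Unmarshaller(mapping);
> >       ...
> >    }
> >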
> > 4. To reduce object creation/deletion you might want to try out the
> >    *experimental* object-reuse feature, by reusing your existing
> >    in-memory object model from the last unmarshal:
> >
> >    Unmarshaller unmarshaller = new Unmarshaller(myObject);
> >    unmarshaller.setReuseObjects(true);
> >
> >    Be careful though, this feature may have problems with complex
> >    object models (it's an experimental feature).
> >
> > 5. Try using different XML parsers; some are faster and more efficient
> >    than others.
> >
> > --Keith
> >
> >
> >
> > "Beg, Meraj" wrote:
> > >
> > > Hi,
> > > Using Castor, I am trying to upload multiple XML files into an Oracle
> > > database. Once the Java objects are created in memory, I use TopLink to
> > > actually write these objects into the database.
> > > I am running it as a Java application from a DOS prompt, passing the
> > > folder name where all my XML files reside as a parameter.
> > >
> > > The problem I am facing is that when any of the XML files exceeds
> > > 20,000 records of average length, it freezes the whole production
> > > system.
> > >
> > > My production system currently has the following configuration:
> > > Windows 2000, IBM WebSphere 4.0, 2 GB of RAM.
> > >
> > > Could anyone please help me optimize Castor to perform this task
> > > successfully?
> > >
> > > Or at least, let me know some guidelines for optimizing Castor
> > > performance.
> > >
> > > Thanks,
> > > Meraj
> > >