Hi Prasad,

LSC is:
- using the basic SLF4J API for 99% of the code
- using the logback layout API (i.e.
https://lsc-project.org/svn/lsc/trunk/src/main/java/org/lsc/utils/output/LdifLayout.java)
for the particular CSV/LDIF formatting output (see
setUpCsvLogging/setUpLdifLogging
in
https://lsc-project.org/svn/lsc/trunk/src/main/java/org/lsc/Configuration.java
)
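
For reference, changing the level at runtime usually works once the cast is guarded: LoggerFactory only hands back a logback Logger when logback-classic is the SLF4J binding actually on the classpath. A minimal sketch (the class name SetRootLevel is illustrative, and this assumes logback-classic is present):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.Level;

public class SetRootLevel {
    public static void main(String[] args) {
        // LoggerFactory returns whatever implementation SLF4J is bound to,
        // so guard the cast instead of assuming logback-classic.
        Logger root = LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);
        if (root instanceof ch.qos.logback.classic.Logger) {
            ((ch.qos.logback.classic.Logger) root).setLevel(Level.ERROR);
        } else {
            System.err.println("SLF4J is not bound to logback-classic: "
                    + root.getClass().getName());
        }
    }
}
```

If the cast fails in your setup, it generally means another SLF4J binding (or a second copy of slf4j-api) sits on the classpath ahead of logback-classic.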

Regards,


Sebastien BAHLOUL
IAM / Security specialist
Ldap Synchronization Connector : http://lsc-project.org
Blog : http://sbahloul.wordpress.com/


2014-08-26 17:54 GMT+02:00 Prasad Bodapati <[email protected]>:

>  That did work when I used lsc.xml. When I load the configuration
> programmatically, it does not load the logback.xml file.
>
> ch.qos.logback.classic.Logger logger = (ch.qos.logback.classic.Logger)
>     LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);
> logger.setLevel(Level.ERROR);
>
>
>
> I am using the above code to change it programmatically, but the cast from
> org.slf4j.Logger to ch.qos.logback.classic.Logger fails.
>
> I have seen that LSC has its own SLF4J logging setup. Is there any
> particular reason for that?
>
> Could you help me sort out this problem, please?
>
>
>
>
>
> Prasad
>
>
>
>
>
> From: Sébastien Bahloul [mailto:[email protected]]
> Sent: 26 August 2014 15:20
>
> To: Prasad Bodapati
> Cc: [email protected]
> Subject: Re: [lsc-users] failed handling incoming message:
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>
>
>
> Hi Prasad,
>
>
>
> You can look at the logging configuration in the following file,
> especially inside the instruction:
> configurator.doConfigure(logBackXMLPropertiesFile)
>
>
>
>
> https://lsc-project.org/svn/lsc/trunk/src/main/java/org/lsc/Configuration.java
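
A programmatic equivalent of that instruction could be sketched like this (assuming logback-classic is the SLF4J binding on the classpath; the file path is illustrative):

```java
import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.joran.JoranConfigurator;
import ch.qos.logback.core.joran.spi.JoranException;

public class LoadLogbackConfig {
    public static void main(String[] args) throws JoranException {
        // Valid only when SLF4J is bound to logback-classic.
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
        JoranConfigurator configurator = new JoranConfigurator();
        configurator.setContext(context);
        context.reset(); // drop any configuration applied earlier
        configurator.doConfigure("/path/to/logback.xml"); // illustrative path
    }
}
```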
>
>
>
> Regards,
>
>
>  Sebastien BAHLOUL
> IAM / Security specialist
> Ldap Synchronization Connector : http://lsc-project.org
> Blog : http://sbahloul.wordpress.com/
>
>
>
> 2014-08-26 15:48 GMT+02:00 Prasad Bodapati <[email protected]>:
>
> Thanks for your response, Sebastien.
>
> That is what I am doing, and I am only getting back the DNs.
>
> The OOM has gone by increasing the memory. We have temporarily avoided
> that for now.
>
>
>
> The problem I am facing now is with the logging. I have been trying to set
> the logback logging level to ERROR instead of INFO.
>
> It worked when I synchronized using lsc.xml. We need to start the
> synchronization programmatically, so I got rid of the lsc.xml.
>
> I am loading the configuration using LscConfiguration.loadFromInstance(lsc).
> Now it does not seem to read the logback.xml.
>
>
>
> Is there any way to set the logging level to ERROR programmatically? I
> have tried some of the options found on the web, but they do not seem to
> work.
>
> Is there a way to tell LSC to do that?
>
>
>
> Once again thank you very much for your response.
>
> Prasad
>
>
>
> From: Sébastien Bahloul [mailto:[email protected]]
> Sent: 26 August 2014 14:19
> To: Prasad Bodapati
> Cc: [email protected]
> Subject: Re: [lsc-users] failed handling incoming message:
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>
>
>
> Hi Prasad,
>
>
>
> There's no simple way to handle such a situation. Can you limit the memory
> needed by getListPivots() by returning only the list of user identifiers
> and not the complete results?
>
>
>
> Something that would map to SQL like the following:
>
>
>
> select id from users
>
>
>
> instead of
>
>
>
> select * from users
>
>
>
> You should be able to handle millions of users with such a simple
> identifier list.
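
To make that concrete, an id-only getListPivots() could be sketched as follows (fetchIds() is a hypothetical helper running the id-only query, and I'm assuming LscDatasets exposes a put-style setter):

```java
public Map<String, LscDatasets> getListPivots() {
    Map<String, LscDatasets> pivots = new HashMap<String, LscDatasets>();
    // fetchIds() (hypothetical) streams back plain identifiers,
    // e.g. the result of "select id from users".
    for (String id : fetchIds()) {
        LscDatasets datasets = new LscDatasets();
        datasets.put("id", id); // pivot attribute only, never the full record
        pivots.put(id, datasets);
    }
    return pivots;
}
```

Memory then grows with the number of identifiers, not with the size of each record.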
>
>
>
> Kind regards,
>
>
>  Sebastien BAHLOUL
> IAM / Security specialist
> Ldap Synchronization Connector : http://lsc-project.org
> Blog : http://sbahloul.wordpress.com/
>
>
>
> 2014-08-20 13:55 GMT+02:00 Prasad Bodapati <[email protected]>:
>
> Hi,
>
> I am using LSC to synchronize my users/groups from LDAP to SOLR.
>
> The SOLRDestinationService class extends AbstractJdbcService, and I am
> overriding the method getListPivots().
>
> In the clean phase, LSC needs all the records from the destination
> (SOLR), so getListPivots() constructs a Map<String, LscDatasets> and
> sends it back to LSC.
>
> Does anyone think the "GC overhead limit exceeded" error is caused by
> this Map<String, LscDatasets>, given that it holds hundreds of thousands
> of users?
>
>
>
> If so, how should I tell LSC to request only a few records at a time from
> the destination during the clean phase?
>
>
>
> Thanks & Regards
>
> Prasad Bodapati, Software Engineer
>
> Pitney Bowes Software
>
> 6 Hercules Way, Leavesden Park, Watford, Herts WD25 7GS
>
> D: +441923 279174 | M: +447543399223 | www.pb.com/software
>
>
>
> [email protected]
>
>
>
> Every connection is a new opportunity™
>
>
>
>
>
>
>
>
> Please consider the environment before printing or forwarding this email.
> If you do print this email, please recycle the paper.
>
>
>
> This email message may contain confidential, proprietary and/or privileged
> information. It is intended only for the use of the intended recipient(s).
> If you have received it in error, please immediately advise the sender by
> reply email and then delete this email message. Any disclosure, copying,
> distribution or use of the information contained in this email message to
> or by anyone other than the intended recipient is strictly prohibited.
>
>
>
>
> _______________________________________________________________
> Ldap Synchronization Connector (LSC) - http://lsc-project.org
>
> lsc-users mailing list
> [email protected]
> http://lists.lsc-project.org/listinfo/lsc-users
