Re: Disabling client notifications

2016-11-03 Thread lawrencefinn
I meant to say the cache is loaded by
ignite.cache(cacheName).loadCache(null);



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Disabling-client-notifications-tp8704p8705.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Question about Class Not found

2016-11-03 Thread Denis Magda
Raul, could you assist with this issue?

—
Denis

> On Nov 3, 2016, at 8:39 AM, devis76  wrote:
> 
> Hi,
> I have found my problem, but I have no idea how to resolve it.
> 
> public class Activator extends IgniteAbstractOsgiContextActivator {
>     @Override
>     public IgniteConfiguration igniteConfiguration() {
>         IgniteConfiguration cfg = new IgniteConfiguration();
>         cfg.setPeerClassLoadingEnabled(false);
>         RoundRobinLoadBalancingSpi spiRRLB = new RoundRobinLoadBalancingSpi();
>         // Configure SPI to use global round-robin mode.
>         spiRRLB.setPerTask(false);
>         // Override default load balancing SPI.
>         cfg.setLoadBalancingSpi(spiRRLB);
>         return cfg;
>     }
> }
> 
> 
> 
> 
> [1] The first Karaf instance starts fine; my QueryCursor from cache.query(new
> SqlFieldsQuery("select * from ClientUpdate")) works.
> 
> [2] On the second Karaf instance (same bundles/packages as [1], but a
> different machine), the first bundle to start is the
> cluster/activator/configuration bundle, and soon afterwards the query throws
> a ClassNotFoundException, I think because of the RoundRobinLoadBalancingSpi.
> This locks up and corrupts my global cluster until I shut down Karaf [2].
> The ClassNotFoundException is thrown because Karaf [2] has not completed its
> boot sequence.
> 
> Do you have any suggestions, please?
> Devis
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Question-about-Class-Not-found-tp8685p8696.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Fwd: KARAF 4.6/4.8.Snapshot IgniteAbstractOsgiContextActivator

2016-11-03 Thread Denis Magda
Cross-posting to the dev list.

Raul, could you have a look at the issue discussed under this [1] thread? Is
this something that is fixable on the side of our OSGi module?

[1] 
http://apache-ignite-users.70518.x6.nabble.com/KARAF-4-6-4-8-Snapshot-IgniteAbstractOsgiContextActivator-tp8552p8650.html
 


—
Denis

> Begin forwarded message:
> 
> From: devis76 
> Subject: R: KARAF 4.6/4.8.Snapshot IgniteAbstractOsgiContextActivator
> Date: November 3, 2016 at 4:40:07 AM PDT
> To: user@ignite.apache.org
> Reply-To: user@ignite.apache.org
> 
> Hi,
> 
> Thank you for your support.
> 
> Yes, I think Karaf has some issue, because your features.xml looks OK to me.
> I have no idea how to fix it, but for now I can continue to test Ignite with
> Karaf.
> 
> 
> From: Sergej Sidorov [via Apache Ignite Users]
> Sent: Tuesday, 1 November 2016, 17:28
> To: devis76
> Subject: Re: KARAF 4.6/4.8.Snapshot IgniteAbstractOsgiContextActivator
> 
> 
> Hi! 
> 
> I have reproduced your issue with feature:install ignite-core. I also see
> the same issue with ignite-indexing: the lucene and h2 dependencies are not
> installed. The features.xml looks correct, so this is probably a Karaf
> issue. As a workaround you can use the jars from the Ignite release bundle
> [1]. 
> 
> About Fieldable.class: I downloaded the lucene bundle from [2]. It has this
> class and the correct OSGi export in its MANIFEST.MF. Try using this bundle. 
> 
> [1] https://ignite.apache.org/download.cgi 
> 
> [2] 
> http://repo1.maven.org/maven2/org/apache/servicemix/bundles/org.apache.servicemix.bundles.lucene/3.5.0_1/org.apache.servicemix.bundles.lucene-3.5.0_1.jar
>  
> 
> 
> Thanks, 
> Sergej
> 



Re: Null column values - bug

2016-11-03 Thread Andrey Mashenkov
Yes, you are right. This is a bug.

It seems it should be something like this:
wasNull = curr.get(colIdx - 1) == null;

I've created an issue: https://issues.apache.org/jira/browse/IGNITE-4175

Thanks, Anil!
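The pitfall can be demonstrated without Ignite at all. Below is a minimal sketch (plain Java; the helper names are made up and are not the actual JdbcResultSet code) contrasting the buggy String.valueOf conversion with a plain cast:

```java
import java.util.Arrays;
import java.util.List;

public class NullValueDemo {
    // Hypothetical helper mirroring the getTypedValue logic under discussion.
    // Buggy variant: String.valueOf(null) yields the four-character string "null".
    static String buggyGet(List<Object> row, int colIdx) {
        return String.valueOf(row.get(colIdx - 1));
    }

    // Fixed variant: a plain cast keeps null as null.
    static String fixedGet(List<Object> row, int colIdx) {
        return (String) row.get(colIdx - 1);
    }

    public static void main(String[] args) {
        List<Object> row = Arrays.<Object>asList("a", null);

        String buggy = buggyGet(row, 2);
        String fixed = fixedGet(row, 2);

        System.out.println(buggy);          // prints "null", yet buggy is a non-null String
        System.out.println(buggy == null);  // false: wasNull would wrongly stay false
        System.out.println(fixed == null);  // true: wasNull is computed correctly
    }
}
```

This is why wasNull has to be computed from the raw column value rather than from the converted one.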


On Thu, Nov 3, 2016 at 9:45 PM, Anil  wrote:

> following code i see in the JdbcResultSet. val is not null for null values
> for string types. then val == null is false. correct ?
>
>  T val = cls == String.class ? (T)String.valueOf(curr.get(colIdx - 1)) :
> (T)curr.get(colIdx - 1);
>
>  wasNull = val == null;
>
> On 3 November 2016 at 23:48, Andrey Mashenkov 
> wrote:
>
>> No, wasNull is true if column value is null otherwise false.
>> You can have wasNull() is true but getInt() is zero. E.g. getInt() return
>> type is primitive and default value (zero) shoud be return for NULL fields.
>>
>>
>> On Thu, Nov 3, 2016 at 9:11 PM, Anil  wrote:
>>
>>> wasNull is false all the time for string types. correct ?
>>>
>>> On 3 November 2016 at 20:39, Andrey Mashenkov 
>>> wrote:
>>>
 Javadoc says that null value should be returned.

 But from the other side, there is wasNull() method, that should be use
 for null checks.



 On Thu, Nov 3, 2016 at 5:39 PM, Andrey Gura  wrote:

> String.valuOf(null) return "null" string by contract.
>
> On Thu, Nov 3, 2016 at 5:33 PM, Anil  wrote:
>
>> HI ,
>>
>> null values are returned as "null" with ignite jdbc result set.
>>
>>  private  T getTypedValue(int colIdx, Class cls) throws
>> SQLException {
>> ensureNotClosed();
>> ensureHasCurrentRow();
>>
>> try {
>> T val = cls == String.class ?
>> (T)String.valueOf(curr.get(colIdx - 1)) : (T)curr.get(colIdx - 1);
>>
>> wasNull = val == null;
>>
>> return val;
>> }
>> catch (IndexOutOfBoundsException ignored) {
>> throw new SQLException("Invalid column index: " + colIdx);
>> }
>> catch (ClassCastException ignored) {
>> throw new SQLException("Value is an not instance of " +
>> cls.getName());
>> }
>> }
>>
>>
>> if a column value is null (curr.get(colIdx - 1) return null but
>> String.valueOf( (curr.get(colIdx - 1) ) is not null it is "null".
>>
>> ArrayList obj = new ArrayList();
>>   obj.add(null);
>> System.out.println(null == (String)String.valueOf(obj.get(0)));
>>
>> above Sysout is false.
>>
>> Fix :
>>
>> Object colValue = curr.get(colIdx - 1);
>>
>> T val = cls == String.class ? (String) colValue : (T) colValue;
>>
>> or return (T) colValue
>>
>>
>> please let me know if you see any issues. thanks
>>
>>
>>
>

>>>
>>
>


Re: Null column values - bug

2016-11-03 Thread Anil
The following code is in JdbcResultSet: val is not null for null values of
string types, so val == null is false. Correct?

 T val = cls == String.class ? (T)String.valueOf(curr.get(colIdx - 1)) :
(T)curr.get(colIdx - 1);

 wasNull = val == null;

On 3 November 2016 at 23:48, Andrey Mashenkov 
wrote:

> No, wasNull is true if column value is null otherwise false.
> You can have wasNull() is true but getInt() is zero. E.g. getInt() return
> type is primitive and default value (zero) shoud be return for NULL fields.
>
>
> On Thu, Nov 3, 2016 at 9:11 PM, Anil  wrote:
>
>> wasNull is false all the time for string types. correct ?
>>
>> On 3 November 2016 at 20:39, Andrey Mashenkov 
>> wrote:
>>
>>> Javadoc says that null value should be returned.
>>>
>>> But from the other side, there is wasNull() method, that should be use
>>> for null checks.
>>>
>>>
>>>
>>> On Thu, Nov 3, 2016 at 5:39 PM, Andrey Gura  wrote:
>>>
 String.valuOf(null) return "null" string by contract.

 On Thu, Nov 3, 2016 at 5:33 PM, Anil  wrote:

> HI ,
>
> null values are returned as "null" with ignite jdbc result set.
>
>  private  T getTypedValue(int colIdx, Class cls) throws
> SQLException {
> ensureNotClosed();
> ensureHasCurrentRow();
>
> try {
> T val = cls == String.class ?
> (T)String.valueOf(curr.get(colIdx - 1)) : (T)curr.get(colIdx - 1);
>
> wasNull = val == null;
>
> return val;
> }
> catch (IndexOutOfBoundsException ignored) {
> throw new SQLException("Invalid column index: " + colIdx);
> }
> catch (ClassCastException ignored) {
> throw new SQLException("Value is an not instance of " +
> cls.getName());
> }
> }
>
>
> if a column value is null (curr.get(colIdx - 1) return null but
> String.valueOf( (curr.get(colIdx - 1) ) is not null it is "null".
>
> ArrayList obj = new ArrayList();
>   obj.add(null);
> System.out.println(null == (String)String.valueOf(obj.get(0)));
>
> above Sysout is false.
>
> Fix :
>
> Object colValue = curr.get(colIdx - 1);
>
> T val = cls == String.class ? (String) colValue : (T) colValue;
>
> or return (T) colValue
>
>
> please let me know if you see any issues. thanks
>
>
>

>>>
>>
>


Re: Null column values - bug

2016-11-03 Thread Andrey Mashenkov
No, wasNull is true if the column value is null, otherwise false.
You can have wasNull() return true while getInt() returns zero: getInt()'s
return type is primitive, so the default value (zero) should be returned for
NULL fields.


On Thu, Nov 3, 2016 at 9:11 PM, Anil  wrote:

> wasNull is false all the time for string types. correct ?
>
> On 3 November 2016 at 20:39, Andrey Mashenkov 
> wrote:
>
>> Javadoc says that null value should be returned.
>>
>> But from the other side, there is wasNull() method, that should be use
>> for null checks.
>>
>>
>>
>> On Thu, Nov 3, 2016 at 5:39 PM, Andrey Gura  wrote:
>>
>>> String.valuOf(null) return "null" string by contract.
>>>
>>> On Thu, Nov 3, 2016 at 5:33 PM, Anil  wrote:
>>>
 HI ,

 null values are returned as "null" with ignite jdbc result set.

  private  T getTypedValue(int colIdx, Class cls) throws
 SQLException {
 ensureNotClosed();
 ensureHasCurrentRow();

 try {
 T val = cls == String.class ? (T)String.valueOf(curr.get(colIdx
 - 1)) : (T)curr.get(colIdx - 1);

 wasNull = val == null;

 return val;
 }
 catch (IndexOutOfBoundsException ignored) {
 throw new SQLException("Invalid column index: " + colIdx);
 }
 catch (ClassCastException ignored) {
 throw new SQLException("Value is an not instance of " +
 cls.getName());
 }
 }


 if a column value is null (curr.get(colIdx - 1) return null but
 String.valueOf( (curr.get(colIdx - 1) ) is not null it is "null".

 ArrayList obj = new ArrayList();
   obj.add(null);
 System.out.println(null == (String)String.valueOf(obj.get(0)));

 above Sysout is false.

 Fix :

 Object colValue = curr.get(colIdx - 1);

 T val = cls == String.class ? (String) colValue : (T) colValue;

 or return (T) colValue


 please let me know if you see any issues. thanks



>>>
>>
>


Re: Null column values - bug

2016-11-03 Thread Anil
wasNull is always false for string types. Correct?

On 3 November 2016 at 20:39, Andrey Mashenkov 
wrote:

> Javadoc says that null value should be returned.
>
> But from the other side, there is wasNull() method, that should be use for
> null checks.
>
>
>
> On Thu, Nov 3, 2016 at 5:39 PM, Andrey Gura  wrote:
>
>> String.valuOf(null) return "null" string by contract.
>>
>> On Thu, Nov 3, 2016 at 5:33 PM, Anil  wrote:
>>
>>> HI ,
>>>
>>> null values are returned as "null" with ignite jdbc result set.
>>>
>>>  private  T getTypedValue(int colIdx, Class cls) throws
>>> SQLException {
>>> ensureNotClosed();
>>> ensureHasCurrentRow();
>>>
>>> try {
>>> T val = cls == String.class ? (T)String.valueOf(curr.get(colIdx
>>> - 1)) : (T)curr.get(colIdx - 1);
>>>
>>> wasNull = val == null;
>>>
>>> return val;
>>> }
>>> catch (IndexOutOfBoundsException ignored) {
>>> throw new SQLException("Invalid column index: " + colIdx);
>>> }
>>> catch (ClassCastException ignored) {
>>> throw new SQLException("Value is an not instance of " +
>>> cls.getName());
>>> }
>>> }
>>>
>>>
>>> if a column value is null (curr.get(colIdx - 1) return null but
>>> String.valueOf( (curr.get(colIdx - 1) ) is not null it is "null".
>>>
>>> ArrayList obj = new ArrayList();
>>>   obj.add(null);
>>> System.out.println(null == (String)String.valueOf(obj.get(0)));
>>>
>>> above Sysout is false.
>>>
>>> Fix :
>>>
>>> Object colValue = curr.get(colIdx - 1);
>>>
>>> T val = cls == String.class ? (String) colValue : (T) colValue;
>>>
>>> or return (T) colValue
>>>
>>>
>>> please let me know if you see any issues. thanks
>>>
>>>
>>>
>>
>


Concurrent job execution and FifoQueueCollisionSpi.parallelJobsNumber=1

2016-11-03 Thread Ryan Ripken
Can different nodes have different collisionSpi settings? Which setting 
would take effect: the collisionSpi used to start the grid, the setting on 
the node executing the job, or the setting on the node that submitted the 
task?


I recently encountered a log file that could only have been generated if 
two jobs were executing on a node concurrently.  I have three nodes.  
Two are started via ignite.bat and configured with parallelJobsNumber=1.


Due to some recent changes, the third node lost its collisionSpi 
configuration. Notice in the Java code below that the collisionSpi on that 
node isn't specified.


The behavior I'm seeing suggests that it is the grid configuration on 
the node submitting the task that matters - is that the case?


Two nodes started with this:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="marshaller">
        <bean class="org.apache.ignite.marshaller.optimized.OptimizedMarshaller"/>
    </property>
    <property name="collisionSpi">
        <bean class="org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi">
            <property name="parallelJobsNumber" value="1"/>
        </bean>
    </property>
</bean>

The third node starts and submits a task via something similar to:

try {
    IgniteConfiguration igniteConfig = new IgniteConfiguration();
    igniteConfig.setMarshaller(new OptimizedMarshaller());
    Ignite grid = Ignition.start(igniteConfig);
    ClusterGroup forRemotes = grid.cluster().forRemotes();
    MyComputeTask task = new MyComputeTask();
    IgniteCompute withAsync = grid.compute(forRemotes).withAsync();
    withAsync.execute(task, options);
    future = withAsync.future();
    Boolean retval1 = Boolean.FALSE;
    try {
        retval1 = future.get();
    } catch (IgniteException e) {
        e.printStackTrace();
    }
    retval = retval1;
} catch (IgniteException e) {
    e.printStackTrace();
} finally {
    Ignition.stop(true);
}
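For what it's worth, the third node can be given the same collision SPI programmatically. A hedged sketch (mirroring the settings of the XML-configured nodes; since the collision SPI is consulted per node, configuring all three nodes identically sidesteps the question of whose setting wins):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.marshaller.optimized.OptimizedMarshaller;
import org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi;

public class SubmitterNodeConfig {
    public static Ignite startWithCollisionSpi() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setMarshaller(new OptimizedMarshaller());

        // Same collision SPI as the two XML-configured nodes:
        // at most one job may execute at a time on this node.
        FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();
        colSpi.setParallelJobsNumber(1);
        cfg.setCollisionSpi(colSpi);

        return Ignition.start(cfg);
    }
}
```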




Re: Question about Class Not found

2016-11-03 Thread devis76
Hi,
I have found my problem, but I have no idea how to resolve it.

public class Activator extends IgniteAbstractOsgiContextActivator {
    @Override
    public IgniteConfiguration igniteConfiguration() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setPeerClassLoadingEnabled(false);
        RoundRobinLoadBalancingSpi spiRRLB = new RoundRobinLoadBalancingSpi();
        // Configure SPI to use global round-robin mode.
        spiRRLB.setPerTask(false);
        // Override default load balancing SPI.
        cfg.setLoadBalancingSpi(spiRRLB);
        return cfg;
    }
}




[1] The first Karaf instance starts fine; my QueryCursor from cache.query(new
SqlFieldsQuery("select * from ClientUpdate")) works.

[2] On the second Karaf instance (same bundles/packages as [1], but a
different machine), the first bundle to start is the
cluster/activator/configuration bundle, and soon afterwards the query throws
a ClassNotFoundException, I think because of the RoundRobinLoadBalancingSpi.
This locks up and corrupts my global cluster until I shut down Karaf [2].
The ClassNotFoundException is thrown because Karaf [2] has not completed its
boot sequence.

Do you have any suggestions, please?
Devis



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Question-about-Class-Not-found-tp8685p8696.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Killing a node under load stalls the grid with ignite 1.7

2016-11-03 Thread bintisepaha
The problem is: when I am in the write-behind for the order, how do I access
the trade object? It is only present in the cache. At that time I need to
access the trade cache, and that is causing issues.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-grid-with-ignite-1-7-tp8130p8695.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Null column values - bug

2016-11-03 Thread Andrey Mashenkov
The Javadoc says that a null value should be returned.

On the other hand, there is the wasNull() method, which should be used for
null checks.



On Thu, Nov 3, 2016 at 5:39 PM, Andrey Gura  wrote:

> String.valuOf(null) return "null" string by contract.
>
> On Thu, Nov 3, 2016 at 5:33 PM, Anil  wrote:
>
>> HI ,
>>
>> null values are returned as "null" with ignite jdbc result set.
>>
>>  private  T getTypedValue(int colIdx, Class cls) throws
>> SQLException {
>> ensureNotClosed();
>> ensureHasCurrentRow();
>>
>> try {
>> T val = cls == String.class ? (T)String.valueOf(curr.get(colIdx
>> - 1)) : (T)curr.get(colIdx - 1);
>>
>> wasNull = val == null;
>>
>> return val;
>> }
>> catch (IndexOutOfBoundsException ignored) {
>> throw new SQLException("Invalid column index: " + colIdx);
>> }
>> catch (ClassCastException ignored) {
>> throw new SQLException("Value is an not instance of " +
>> cls.getName());
>> }
>> }
>>
>>
>> if a column value is null (curr.get(colIdx - 1) return null but
>> String.valueOf( (curr.get(colIdx - 1) ) is not null it is "null".
>>
>> ArrayList obj = new ArrayList();
>>   obj.add(null);
>> System.out.println(null == (String)String.valueOf(obj.get(0)));
>>
>> above Sysout is false.
>>
>> Fix :
>>
>> Object colValue = curr.get(colIdx - 1);
>>
>> T val = cls == String.class ? (String) colValue : (T) colValue;
>>
>> or return (T) colValue
>>
>>
>> please let me know if you see any issues. thanks
>>
>>
>>
>


Null column values - bug

2016-11-03 Thread Anil
Hi,

Null values are returned as the string "null" by the Ignite JDBC result set.

 private  T getTypedValue(int colIdx, Class cls) throws SQLException {
ensureNotClosed();
ensureHasCurrentRow();

try {
T val = cls == String.class ? (T)String.valueOf(curr.get(colIdx
- 1)) : (T)curr.get(colIdx - 1);

wasNull = val == null;

return val;
}
catch (IndexOutOfBoundsException ignored) {
throw new SQLException("Invalid column index: " + colIdx);
}
catch (ClassCastException ignored) {
throw new SQLException("Value is an not instance of " +
cls.getName());
}
}


If a column value is null, curr.get(colIdx - 1) returns null, but
String.valueOf(curr.get(colIdx - 1)) is not null: it is the string "null".

ArrayList<Object> obj = new ArrayList<>();
obj.add(null);
System.out.println(null == (String) String.valueOf(obj.get(0)));

The println above prints false.

Fix :

Object colValue = curr.get(colIdx - 1);

T val = cls == String.class ? (String) colValue : (T) colValue;

or return (T) colValue


Please let me know if you see any issues. Thanks.


Re: Null column values - bug

2016-11-03 Thread Andrey Gura
String.valueOf(null) returns the string "null" by contract.

On Thu, Nov 3, 2016 at 5:33 PM, Anil  wrote:

> HI ,
>
> null values are returned as "null" with ignite jdbc result set.
>
>  private  T getTypedValue(int colIdx, Class cls) throws SQLException
> {
> ensureNotClosed();
> ensureHasCurrentRow();
>
> try {
> T val = cls == String.class ? (T)String.valueOf(curr.get(colIdx
> - 1)) : (T)curr.get(colIdx - 1);
>
> wasNull = val == null;
>
> return val;
> }
> catch (IndexOutOfBoundsException ignored) {
> throw new SQLException("Invalid column index: " + colIdx);
> }
> catch (ClassCastException ignored) {
> throw new SQLException("Value is an not instance of " +
> cls.getName());
> }
> }
>
>
> if a column value is null (curr.get(colIdx - 1) return null but
> String.valueOf( (curr.get(colIdx - 1) ) is not null it is "null".
>
> ArrayList obj = new ArrayList();
>   obj.add(null);
> System.out.println(null == (String)String.valueOf(obj.get(0)));
>
> above Sysout is false.
>
> Fix :
>
> Object colValue = curr.get(colIdx - 1);
>
> T val = cls == String.class ? (String) colValue : (T) colValue;
>
> or return (T) colValue
>
>
> please let me know if you see any issues. thanks
>
>
>


Re: DataStreamer is closed

2016-11-03 Thread Anil
Regarding the data streamer: I will reproduce and share the complete code.
Regarding cache size(): I understand it completely now. I set the auto-flush
interval and it looks good now.

Thanks to both of you.
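For readers following along, the auto-flush interval mentioned above is configured on the streamer itself. A minimal sketch (the cache name and interval are made-up values):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class StreamerFlushSketch {
    public static void streamWithAutoFlush(Ignite ignite) {
        // try-with-resources closes (and therefore flushes) the streamer.
        try (IgniteDataStreamer<String, String> stmr = ignite.dataStreamer("personCache")) {
            stmr.allowOverwrite(true);

            // Flush buffered entries at least every 500 ms, so
            // IgniteCache#size() reflects streamed data promptly.
            stmr.autoFlushFrequency(500);

            stmr.addData("key1", "value1");

            // An explicit flush() also pushes buffered entries immediately.
            stmr.flush();
        }
    }
}
```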

On 3 November 2016 at 17:41, Vladislav Pyatkov  wrote:

> Hi Anil,
>
> Please attach whole example.
> I can not found where is DataStreamer was closed in your case (Where is
> method o.a.i.IgniteDataStreamer#close() was invoked?).
>
> About cache size, It does not means anything, because Streamer caches
> entries and sends as batch.
> And yes, check each future, as Anton said, is a good point.
>
> On Thu, Nov 3, 2016 at 3:05 PM, Anton Vinogradov  > wrote:
>
>> Anil,
>>
>> getStreamer().addData() will return you IgniteFuture. You can check it
>> result by fut.get().
>> get() will give you null in case data streamed and stored or throw an
>> exception.
>>
>> On Thu, Nov 3, 2016 at 2:32 PM, Anil  wrote:
>>
>>>
>>> Yes,. that is only exception i see in logs. i will try the debug option.
>>> thanks.
>>>
>>> though data streamer is not returning exception all the time,
>>> IgniteCache#size() remains empty all the time. It weird.
>>>
>>> 1.
>>> for (Map.Entry entry : m.entrySet()){
>>> getStreamer().addData(entry.getKey(), entry.getValue());
>>>
>>> }
>>>
>>> 2.
>>>
>>> for (Map.Entry entry : m.entrySet()){
>>>  cache.put((String)entry.getKey(), (Person)
>>> entry.getValue());
>>> }
>>>
>>> 3.
>>> for (Map.Entry entry : m.entrySet()){
>>>  cache.replace((String)entry.getKey(), (Person)
>>> entry.getValue());
>>>  }
>>>
>>>
>>> cache size with #1 & #3  is 0
>>> cache size with #2 is 1 as expected.
>>>
>>> Have you see similar issue before ?
>>>
>>> Thanks
>>>
>>>
>>>
>>> On 3 November 2016 at 16:33, Anton Vinogradov 
>>> wrote:
>>>
 Anil,

 Is it first and only exception at logs?

 Is it possible to debud this?
 You can set breakpoint at first line of org.apache.ignite.internal.pro
 cessors.datastreamer.DataStreamerImpl#closeEx(boolean,
 org.apache.ignite.IgniteCheckedException)
 This will give you information who stopping the datastreamer.

 On Thu, Nov 3, 2016 at 1:41 PM, Anil  wrote:

> Hi Anton,
> No. ignite nodes looks good.
>
> I have attached my KafkaCacheDataStreamer class and following is the
> code to listen to the kafka topic. IgniteCache is created using java
> configuration.
>
> I see cache size is zero after adding the entries to cache as well
> from KafkaCacheDataStreamer. Not sure how to log whether the entries added
> to cache or not.
>
> KafkaCacheDataStreamer kafkaStreamer = new
> KafkaCacheDataStreamer();
>
>  Properties props = new Properties(); // Kafka consumer properties
>  ConsumerConfig consumerConfig = new ConsumerConfig(props);
>
> try {
> IgniteDataStreamer stmr =
> ignite.dataStreamer(CacheManager.PERSON_CACHE);
>// allow overwriting cache data
>stmr.allowOverwrite(true);
>
>kafkaStreamer.setIgnite(ignite);
>kafkaStreamer.setStreamer(stmr);
>
>// set the topic
>kafkaStreamer.setTopic(kafkaConfig.getString("topic",
> "TestTopic"));
>
>// set the number of threads to process Kafka streams
>kafkaStreamer.setThreads(1);
>
>// set Kafka consumer configurations
>kafkaStreamer.setConsumerConfig(consumerConfig);
>
>// set decoders
>kafkaStreamer.setKeyDecoder(new StringDecoder(new
> VerifiableProperties()));
>kafkaStreamer.setValueDecoder(new StringDecoder(new
> VerifiableProperties()));
>kafkaStreamer.setMultipleTupleExtractor(new
> StreamMultipleTupleExtractor() {
> @Override
> public Map extract(String msg) {
> Map entries = new HashMap<>();
> try {
> KafkaMessage request = Json.decodeValue(msg, KafkaMessage.class);
> IgniteCache cache = CacheManager.getCache();
>
> if (CollectionUtils.isNotEmpty(request.getPersons())){
> String id = null;
> for (Person ib : request.getPersons()){
> if (StringUtils.isNotBlank(ib.getId())){
> id = ib.getId();
> if (null != ib.isDeleted() && Boolean.TRUE.equals(ib.isDelet
> ed())){
> cache.remove(id);
> }else {
> // no need to store the id. so setting null.
> ib.setId(null);
> entries.put(id, ib);
> }
> }
> }
> }else {
>
> }
> }catch (Exception ex){
> logger.error("Error while 

Re: Question about Class Not found

2016-11-03 Thread Vladislav Pyatkov
Hi,

I can only guess which class was not found, but I think you have not
deployed the jar on every node.
You have several options:
1) Copy the jar into the classpath on each node.
2) Turn on the peer-to-peer class loader [1].
3) Use a query entity [2], without field annotations and the setIndexTypes
method.

[1]: o.a.i.configuration.IgniteConfiguration#setPeerClassLoadingEnabled
[2]:
https://apacheignite.readme.io/docs/sql-queries#configuring-sql-indexes-using-queryentity
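Option 3 could look roughly like this. A sketch only: the field names follow the Client POJO quoted below, and the exact types are assumptions.

```java
import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class QueryEntityConfig {
    public static CacheConfiguration<String, Object> clientCacheConfig() {
        CacheConfiguration<String, Object> cacheCfg = new CacheConfiguration<>("clientCache");

        // Describe the queryable fields explicitly instead of relying on
        // @QuerySqlField annotations (which require the class on every node).
        QueryEntity entity = new QueryEntity();
        entity.setKeyType("java.lang.String");
        entity.setValueType("Client"); // value type name, not the Class object

        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("registrationDate", "java.util.Date");
        fields.put("lwersion", "java.lang.String");
        entity.setFields(fields);

        entity.setIndexes(Collections.singletonList(new QueryIndex("lwersion")));

        cacheCfg.setQueryEntities(Collections.singletonList(entity));
        return cacheCfg;
    }
}
```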

On Thu, Nov 3, 2016 at 2:29 PM, devis76 
wrote:

> Hi,
> I have a simple POJO:
>
> public class Client implements Serializable {
>     private static final long serialVersionUID = 2747772722162744232L;
>     @QuerySqlField
>     private final Date registrationDate;
>     @QuerySqlField(index = true)
>     private final String lwersion;
>     ...
> }
> 
> I have created a simple cache:
> cacheCfg.setBackups(0);
> cacheCfg.setName(ctx.name());
> cacheCfg.setCacheMode(PARTITIONED);
>
>
> When I use a query:
>
> QueryCursor<List<?>> qryx = cache
>         .query(new SqlFieldsQuery("select * from " +
>                 valueType.getSimpleName()));
> List<V> list = new ArrayList<V>();
> for (List<?> row : qryx) {
> logger.trace("Query {} List size {}",
> cache.getName(), row.size());
> logger.trace("Query {} List size {} {}",
> cache.getName(), row.get(0),
> row.get(1));
> logger.trace("Query Cache  {} Key found {}
> {}", cache.getName(),
> row.get(0));
> list.add((V) row.get(1));
> }
> qryx.close();
> return list;
>
> When I run this query with a single node, all works fine.
> 
> When I start the second Karaf instance, the last node started throws a
> ClassNotFoundException.
> Do you have any suggestions, please?
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Question-about-Class-Not-found-tp8685.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: DataStreamer is closed

2016-11-03 Thread Vladislav Pyatkov
Hi Anil,

Please attach the whole example.
I cannot find where the DataStreamer was closed in your case (where was the
method o.a.i.IgniteDataStreamer#close() invoked?).

About the cache size: it does not mean anything, because the streamer
caches entries and sends them in batches.
And yes, checking each future, as Anton said, is a good point.

On Thu, Nov 3, 2016 at 3:05 PM, Anton Vinogradov 
wrote:

> Anil,
>
> getStreamer().addData() will return you IgniteFuture. You can check it
> result by fut.get().
> get() will give you null in case data streamed and stored or throw an
> exception.
>
> On Thu, Nov 3, 2016 at 2:32 PM, Anil  wrote:
>
>>
>> Yes,. that is only exception i see in logs. i will try the debug option.
>> thanks.
>>
>> though data streamer is not returning exception all the time,
>> IgniteCache#size() remains empty all the time. It weird.
>>
>> 1.
>> for (Map.Entry entry : m.entrySet()){
>> getStreamer().addData(entry.getKey(), entry.getValue());
>>
>> }
>>
>> 2.
>>
>> for (Map.Entry entry : m.entrySet()){
>>  cache.put((String)entry.getKey(), (Person)
>> entry.getValue());
>> }
>>
>> 3.
>> for (Map.Entry entry : m.entrySet()){
>>  cache.replace((String)entry.getKey(), (Person)
>> entry.getValue());
>>  }
>>
>>
>> cache size with #1 & #3  is 0
>> cache size with #2 is 1 as expected.
>>
>> Have you see similar issue before ?
>>
>> Thanks
>>
>>
>>
>> On 3 November 2016 at 16:33, Anton Vinogradov 
>> wrote:
>>
>>> Anil,
>>>
>>> Is it first and only exception at logs?
>>>
>>> Is it possible to debud this?
>>> You can set breakpoint at first line of org.apache.ignite.internal.pro
>>> cessors.datastreamer.DataStreamerImpl#closeEx(boolean,
>>> org.apache.ignite.IgniteCheckedException)
>>> This will give you information who stopping the datastreamer.
>>>
>>> On Thu, Nov 3, 2016 at 1:41 PM, Anil  wrote:
>>>
 Hi Anton,
 No. ignite nodes looks good.

 I have attached my KafkaCacheDataStreamer class and following is the
 code to listen to the kafka topic. IgniteCache is created using java
 configuration.

 I see cache size is zero after adding the entries to cache as well from
 KafkaCacheDataStreamer. Not sure how to log whether the entries added to
 cache or not.

 KafkaCacheDataStreamer kafkaStreamer = new
 KafkaCacheDataStreamer();

 Properties props = new Properties(); // Kafka consumer properties
  ConsumerConfig consumerConfig = new ConsumerConfig(props);

 try {
 IgniteDataStreamer stmr =
 ignite.dataStreamer(CacheManager.PERSON_CACHE);
// allow overwriting cache data
stmr.allowOverwrite(true);

kafkaStreamer.setIgnite(ignite);
kafkaStreamer.setStreamer(stmr);

// set the topic
kafkaStreamer.setTopic(kafkaConfig.getString("topic",
 "TestTopic"));

// set the number of threads to process Kafka streams
kafkaStreamer.setThreads(1);

// set Kafka consumer configurations
kafkaStreamer.setConsumerConfig(consumerConfig);

// set decoders
kafkaStreamer.setKeyDecoder(new StringDecoder(new
 VerifiableProperties()));
kafkaStreamer.setValueDecoder(new StringDecoder(new
 VerifiableProperties()));
kafkaStreamer.setMultipleTupleExtractor(new
 StreamMultipleTupleExtractor() {
 @Override
 public Map extract(String msg) {
 Map entries = new HashMap<>();
 try {
 KafkaMessage request = Json.decodeValue(msg, KafkaMessage.class);
 IgniteCache cache = CacheManager.getCache();

 if (CollectionUtils.isNotEmpty(request.getPersons())){
 String id = null;
 for (Person ib : request.getPersons()){
 if (StringUtils.isNotBlank(ib.getId())){
 id = ib.getId();
 if (null != ib.isDeleted() && Boolean.TRUE.equals(ib.isDeleted())){
 cache.remove(id);
 }else {
 // no need to store the id. so setting null.
 ib.setId(null);
 entries.put(id, ib);
 }
 }
 }
 }else {

 }
 }catch (Exception ex){
 logger.error("Error while updating the cache - {} {} " ,msg, ex);
 }

 return entries;
 }
 });

kafkaStreamer.start();
 }catch (Exception ex){
 logger.error("Error in kafka data streamer ", ex);
 }


 Please let me know if you see any issues. thanks.

 On 3 November 2016 at 15:59, Anton Vinogradov  wrote:

> Anil,
>
> 

Re: DataStreamer is closed

2016-11-03 Thread Anton Vinogradov
Anil,

getStreamer().addData() returns an IgniteFuture. You can check its result
with fut.get(): get() returns null once the data has been streamed and
stored, or throws an exception otherwise.
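Applied to the loop from earlier in the thread, that check might look like this (a sketch; the streamer, types, and helper name are placeholders):

```java
import java.util.Map;

import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.lang.IgniteFuture;

public class StreamerCheckSketch {
    // Hypothetical helper: stream a batch and fail fast on the first error.
    static void addAndVerify(IgniteDataStreamer<String, Object> streamer,
                             Map<String, Object> batch) {
        for (Map.Entry<String, Object> entry : batch.entrySet()) {
            IgniteFuture<?> fut = streamer.addData(entry.getKey(), entry.getValue());

            // get() returns null once the batch containing this entry is
            // flushed and stored, or throws if streaming failed. Blocking
            // per entry defeats batching, though; in production, collect
            // the futures and check them after flush().
            fut.get();
        }
        streamer.flush();
    }
}
```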

On Thu, Nov 3, 2016 at 2:32 PM, Anil  wrote:

>
> Yes,. that is only exception i see in logs. i will try the debug option.
> thanks.
>
> though data streamer is not returning exception all the time,
> IgniteCache#size() remains empty all the time. It weird.
>
> 1.
> for (Map.Entry entry : m.entrySet()){
> getStreamer().addData(entry.getKey(), entry.getValue());
>
> }
>
> 2.
>
> for (Map.Entry entry : m.entrySet()){
>  cache.put((String)entry.getKey(), (Person)
> entry.getValue());
> }
>
> 3.
> for (Map.Entry entry : m.entrySet()){
>  cache.replace((String)entry.getKey(), (Person)
> entry.getValue());
>  }
>
>
> cache size with #1 & #3  is 0
> cache size with #2 is 1 as expected.
>
> Have you see similar issue before ?
>
> Thanks
>
>
>
> On 3 November 2016 at 16:33, Anton Vinogradov 
> wrote:
>
>> Anil,
>>
>> Is it first and only exception at logs?
>>
>> Is it possible to debud this?
>> You can set breakpoint at first line of org.apache.ignite.internal.pro
>> cessors.datastreamer.DataStreamerImpl#closeEx(boolean,
>> org.apache.ignite.IgniteCheckedException)
>> This will give you information who stopping the datastreamer.
>>
>> On Thu, Nov 3, 2016 at 1:41 PM, Anil  wrote:
>>
>>> Hi Anton,
>>> No. ignite nodes looks good.
>>>
>>> I have attached my KafkaCacheDataStreamer class and following is the
>>> code to listen to the kafka topic. IgniteCache is created using java
>>> configuration.
>>>
>>> I see cache size is zero after adding the entries to cache as well from
>>> KafkaCacheDataStreamer. Not sure how to log whether the entries added to
>>> cache or not.
>>>
>>> KafkaCacheDataStreamer kafkaStreamer = new
>>> KafkaCacheDataStreamer();
>>>
>>>  Properties props = new Properties(); // Kafka consumer properties
>>>  ConsumerConfig consumerConfig = new ConsumerConfig(props);
>>>
>>> try {
>>> IgniteDataStreamer stmr =
>>> ignite.dataStreamer(CacheManager.PERSON_CACHE);
>>>// allow overwriting cache data
>>>stmr.allowOverwrite(true);
>>>
>>>kafkaStreamer.setIgnite(ignite);
>>>kafkaStreamer.setStreamer(stmr);
>>>
>>>// set the topic
>>>kafkaStreamer.setTopic(kafkaConfig.getString("topic",
>>> "TestTopic"));
>>>
>>>// set the number of threads to process Kafka streams
>>>kafkaStreamer.setThreads(1);
>>>
>>>// set Kafka consumer configurations
>>>kafkaStreamer.setConsumerConfig(consumerConfig);
>>>
>>>// set decoders
>>>kafkaStreamer.setKeyDecoder(new StringDecoder(new
>>> VerifiableProperties()));
>>>kafkaStreamer.setValueDecoder(new StringDecoder(new
>>> VerifiableProperties()));
>>>kafkaStreamer.setMultipleTupleExtractor(new
>>> StreamMultipleTupleExtractor() {
>>> @Override
>>> public Map extract(String msg) {
>>> Map entries = new HashMap<>();
>>> try {
>>> KafkaMessage request = Json.decodeValue(msg, KafkaMessage.class);
>>> IgniteCache cache = CacheManager.getCache();
>>>
>>> if (CollectionUtils.isNotEmpty(request.getPersons())){
>>> String id = null;
>>> for (Person ib : request.getPersons()){
>>> if (StringUtils.isNotBlank(ib.getId())){
>>> id = ib.getId();
>>> if (null != ib.isDeleted() && Boolean.TRUE.equals(ib.isDeleted())){
>>> cache.remove(id);
>>> }else {
>>> // no need to store the id. so setting null.
>>> ib.setId(null);
>>> entries.put(id, ib);
>>> }
>>> }
>>> }
>>> }else {
>>>
>>> }
>>> }catch (Exception ex){
>>> logger.error("Error while updating the cache - {} {} " ,msg, ex);
>>> }
>>>
>>> return entries;
>>> }
>>> });
>>>
>>>kafkaStreamer.start();
>>> }catch (Exception ex){
>>> logger.error("Error in kafka data streamer ", ex);
>>> }
>>>
>>>
>>> Please let me know if you see any issues. thanks.
>>>
>>> On 3 November 2016 at 15:59, Anton Vinogradov 
>>> wrote:
>>>
 Anil,

 Could you provide getStreamer() code and full logs?
 Possible, ignite node was disconnected and this cause DataStreamer
 closure.

 On Thu, Nov 3, 2016 at 1:17 PM, Anil  wrote:

> HI,
>
> I have created custom kafka data streamer for my use case and i see
> following exception.
>
> java.lang.IllegalStateException: Data streamer has been closed.
> at org.apache.ignite.internal.pro
> cessors.datastreamer.DataStreamerImpl.enterBusy(DataStreamer
> Impl.java:360)
> at 

Re: DataStreamer is closed

2016-11-03 Thread Anil
Yes, that is the only exception I see in the logs. I will try the debug
option, thanks.

Though the data streamer does not throw an exception every time,
IgniteCache#size() always remains 0. It's weird.

1.
for (Map.Entry entry : m.entrySet()){
getStreamer().addData(entry.getKey(), entry.getValue());

}

2.

for (Map.Entry entry : m.entrySet()){
 cache.put((String)entry.getKey(), (Person)
entry.getValue());
}

3.
for (Map.Entry entry : m.entrySet()){
 cache.replace((String)entry.getKey(), (Person)
entry.getValue());
 }


cache size with #1 & #3 is 0
cache size with #2 is 1 as expected.

Have you seen a similar issue before?

Thanks
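
For reference, two behaviors can produce exactly these results: IgniteDataStreamer buffers entries and writes them asynchronously, so size() can read 0 until the streamer is flushed or closed (#1), and cache.replace() is a no-op when the key is not already present (#3). A minimal sketch of forcing the streamer to flush (the cache name is an example):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerFlushSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, String> cache = ignite.getOrCreateCache("personCache");

            try (IgniteDataStreamer<String, String> stmr = ignite.dataStreamer("personCache")) {
                stmr.allowOverwrite(true);
                stmr.addData("k1", "v1");

                // The entry may still be buffered here; size() can report 0.
                stmr.flush(); // blocks until buffered entries reach the cache
            } // close() also flushes any remaining data

            System.out.println(cache.size()); // the entry is now visible
        }
    }
}
```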



On 3 November 2016 at 16:33, Anton Vinogradov 
wrote:

> Anil,
>
> Is it first and only exception at logs?
>
> Is it possible to debud this?
> You can set breakpoint at first line of org.apache.ignite.internal.
> processors.datastreamer.DataStreamerImpl#closeEx(boolean,
> org.apache.ignite.IgniteCheckedException)
> This will give you information who stopping the datastreamer.
>
> On Thu, Nov 3, 2016 at 1:41 PM, Anil  wrote:
>
>> Hi Anton,
>> No. ignite nodes looks good.
>>
>> I have attached my KafkaCacheDataStreamer class and following is the code
>> to listen to the kafka topic. IgniteCache is created using java
>> configuration.
>>
>> I see cache size is zero after adding the entries to cache as well from
>> KafkaCacheDataStreamer. Not sure how to log whether the entries added to
>> cache or not.
>>
>> KafkaCacheDataStreamer kafkaStreamer = new
>> KafkaCacheDataStreamer();
>>
>>  Properites pros = new Properites() // kafka properties
>>  ConsumerConfig consumerConfig = new ConsumerConfig(props);
>>
>> try {
>> IgniteDataStreamer stmr =
>> ignite.dataStreamer(CacheManager.PERSON_CACHE);
>>// allow overwriting cache data
>>stmr.allowOverwrite(true);
>>
>>kafkaStreamer.setIgnite(ignite);
>>kafkaStreamer.setStreamer(stmr);
>>
>>// set the topic
>>kafkaStreamer.setTopic(kafkaConfig.getString("topic",
>> "TestTopic"));
>>
>>// set the number of threads to process Kafka streams
>>kafkaStreamer.setThreads(1);
>>
>>// set Kafka consumer configurations
>>kafkaStreamer.setConsumerConfig(consumerConfig);
>>
>>// set decoders
>>kafkaStreamer.setKeyDecoder(new StringDecoder(new
>> VerifiableProperties()));
>>kafkaStreamer.setValueDecoder(new StringDecoder(new
>> VerifiableProperties()));
>>kafkaStreamer.setMultipleTupleExtractor(new
>> StreamMultipleTupleExtractor() {
>> @Override
>> public Map extract(String msg) {
>> Map entries = new HashMap<>();
>> try {
>> KafkaMessage request = Json.decodeValue(msg, KafkaMessage.class);
>> IgniteCache cache = CacheManager.getCache();
>>
>> if (CollectionUtils.isNotEmpty(request.getPersons())){
>> String id = null;
>> for (Person ib : request.getPersons()){
>> if (StringUtils.isNotBlank(ib.getId())){
>> id = ib.getId();
>> if (null != ib.isDeleted() && Boolean.TRUE.equals(ib.isDeleted())){
>> cache.remove(id);
>> }else {
>> // no need to store the id. so setting null.
>> ib.setId(null);
>> entries.put(id, ib);
>> }
>> }
>> }
>> }else {
>>
>> }
>> }catch (Exception ex){
>> logger.error("Error while updating the cache - {} {} " ,msg, ex);
>> }
>>
>> return entries;
>> }
>> });
>>
>>kafkaStreamer.start();
>> }catch (Exception ex){
>> logger.error("Error in kafka data streamer ", ex);
>> }
>>
>>
>> Please let me know if you see any issues. thanks.
>>
>> On 3 November 2016 at 15:59, Anton Vinogradov 
>> wrote:
>>
>>> Anil,
>>>
>>> Could you provide getStreamer() code and full logs?
>>> Possible, ignite node was disconnected and this cause DataStreamer
>>> closure.
>>>
>>> On Thu, Nov 3, 2016 at 1:17 PM, Anil  wrote:
>>>
 HI,

 I have created custom kafka data streamer for my use case and i see
 following exception.

 java.lang.IllegalStateException: Data streamer has been closed.
 at org.apache.ignite.internal.processors.datastreamer.DataStrea
 merImpl.enterBusy(DataStreamerImpl.java:360)
 at org.apache.ignite.internal.processors.datastreamer.DataStrea
 merImpl.addData(DataStreamerImpl.java:507)
 at org.apache.ignite.internal.processors.datastreamer.DataStrea
 merImpl.addData(DataStreamerImpl.java:498)
 at net.juniper.cs.cache.KafkaCacheDataStreamer.addMessage(Kafka
 CacheDataStreamer.java:128)
 at net.juniper.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCache
 

Question about Class Not found

2016-11-03 Thread devis76
Hi,
I have a simple POJO:

public class Client implements Serializable {
private static final long serialVersionUID = 2747772722162744232L;
@QuerySqlField
private final Date registrationDate;
@QuerySqlField(index = true)
private final String lwersion;
...

I have created a simple cache...
cacheCfg.setBackups(0);
cacheCfg.setName(ctx.name());
cacheCfg.setCacheMode(PARTITIONED);


When i use a Query

QueryCursor<List<?>> qryx = cache
        .query(new SqlFieldsQuery("select * from " + valueType.getSimpleName()));
List<V> list = new ArrayList<>();
for (List<?> row : qryx) {
    logger.trace("Query {} List size {}", cache.getName(), row.size());
    logger.trace("Query {} List size {} {}", cache.getName(), row.get(0), row.get(1));
    logger.trace("Query Cache {} Key found {}", cache.getName(), row.get(0));
    list.add((V) row.get(1));
}
qryx.close();
return list;

When I run this query with a single node, everything works fine.

When I start the second Karaf instance, the last node started throws a
ClassNotFoundException.
Do you have any suggestions?






--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Question-about-Class-Not-found-tp8685.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

2016-11-03 Thread chevy
I am starting my server node within my boot app [1]. Both my client and
server nodes currently start from the same code, but I will split out the
service part that starts the client later. How can I avoid starting an
Ignite server here and still use the same config on my external server
node? (bold part below).

[1] -   /* Initialize cache configuration */
CacheConfiguration<Integer, SalesModel> cacheCfg = new CacheConfiguration<>();
cacheCfg.setName("salesCache");
cacheCfg.setCacheMode(CacheMode.REPLICATED);
cacheCfg.setSwapEnabled(false);
cacheCfg.setOffHeapMaxMemory(0);
cacheCfg.setCopyOnRead(false);

cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

cacheCfg.setAtomicWriteOrderMode(CacheAtomicWriteOrderMode.CLOCK);
cacheCfg.setIndexedTypes(Integer.class, 
SalesModel.class);

/* Start Ignite node. */
*Ignite ignite = 
Ignition.start("src/main/resources/mpm-ignite.xml");*

try (IgniteCache<Integer, SalesModel> cache = ignite.getOrCreateCache(cacheCfg)) {
if 
(ignite.cluster().forDataNodes(cache.getName()).nodes().isEmpty()) {
System.out.println();
System.out.println(">>> requires remote 
cache nodes to be started.");
System.out.println(">>> Please start at 
least 1 remote cache node.");
System.out.println();
return;
}

// Put created data entries to cache.
cache.putAll(salesFinalMap);
} catch (Exception e) {
ignite.destroyCache("salesCache");
throw e;
} finally {
// Delete cache with its content completely.
// ignite.destroyCache("testCache");
}
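
One way to keep the same XML config but avoid hosting data in the boot app is to start the node in client mode before calling Ignition.start(). A minimal sketch, reusing the config path from the snippet above:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ClientStart {
    public static void main(String[] args) {
        // Force this JVM to join as a client node: it uses the same XML
        // config, but stores no cache data and relies on external server
        // nodes started separately with that config.
        Ignition.setClientMode(true);

        try (Ignite ignite = Ignition.start("src/main/resources/mpm-ignite.xml")) {
            // Cache operations here go against the remote server nodes.
        }
    }
}
```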



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Exception-while-trying-to-access-cache-via-JDBC-API-tp8648p8684.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

2016-11-03 Thread chevy
I went to local gradle repo and manually deleted other h2 versions. It’s 
working now. Thanks for looking into it.

I have one more query – is there a way I can make ‘ignite-http-rest’ work with
Spring Boot? Last time I checked with one of the Ignite devs, they mentioned
that I can’t use it with Boot. Without the REST API, I need to create a JDBC
connection and fire up my own REST API, which has a higher response time
compared to Ignite’s REST API.

--
Regards,
Chetan.

From: "Sergi Vladykin [via Apache Ignite Users]" 

Date: Thursday, November 3, 2016 at 3:13 PM
To: "Chetan.V.Yadav" 
Subject: Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

Ok, then please check your effective classpath:

System.out.println(System.getProperty("java.class.path"));

Also you can check where H2 classes come from:

System.out.println(org.h2.Driver.class.getProtectionDomain().getCodeSource().getLocation());

In the end you must have in classpath only h2-1.4.191.jar and no other versions.

Sergi

2016-11-03 9:13 GMT+03:00 chevy <[hidden 
email]>:
There are no remote nodes deployed yet. I am still in dev phase so trying to 
start ignite locally with just one instance from the code itself.

--
Regards,
Chetan.

From: "Sergi Vladykin [via Apache Ignite Users]" >
Date: Thursday, November 3, 2016 at 12:50 AM
To: "Chetan.V.Yadav" <[hidden 
email]>
Subject: Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

Dependency com.h2database:h2:1.4.191 is correct. You definitely have some other 
H2 version in classpath (at least on remote node). Note that you must have 
correct H2 version in classpath on all the cluster nodes.

BTW, can you please post exception stack trace from remote node?

Sergi

2016-11-02 21:41 GMT+03:00 chevy <[hidden email]>:
I see only one dependency which is coming from ignite-indexing jar. Please 
refer dependency chain below –

\--- org.apache.ignite:ignite-indexing:1.7.0
 +--- org.apache.ignite:ignite-core:1.7.0 (*)
 +--- commons-codec:commons-codec:1.6 -> 1.10
 +--- org.apache.lucene:lucene-core:3.5.0
 \--- com.h2database:h2:1.4.191

Also, I am getting errors with both versions of h2.

--
Regards,
Chetan.

From: "Sergi Vladykin [via Apache Ignite Users]" >
Date: Wednesday, November 2, 2016 at 11:26 PM
To: "Chetan.V.Yadav" <[hidden 
email]>
Subject: Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

The problem here is that you have a wrong H2 version in classpath. Most 
probably this wrong transitive dependency comes from Spring Boot, you need to 
exclude it.

Sergi

2016-11-02 20:35 GMT+03:00 chevy <[hidden email]>:
It works perfectly without spring-boot (previously tested) but I need to go 
with spring-boot as business logic needs faster execution using boot features. 
Is there any workaround with which I can fix this problem?

--
Regards,
Chetan.

From: "Sergej Sidorov [via Apache Ignite Users]" >
Date: Wednesday, November 2, 2016 at 9:43 PM
To: "Chetan.V.Yadav" <[hidden 
email]>
Subject: Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

Do your server node is also under spring boot or it is plain ignite assembly?
I did the following:
1. Start server node from regular ignite assembly
2. Run client with your configuration (build.gradle, mpm-ignite.xml)
All worked correct with changes described in my previous message.
Of course, I was still added cache configuration:










java.lang.String

org.ignite.example.Company






I guess it is still issue with dependencies. Check your project dependency 
graph.

Sergej

If you reply to this email, your message will be added to the discussion below:
http://apache-ignite-users.70518.x6.nabble.com/Exception-while-trying-to-access-cache-via-JDBC-API-tp8648p8668.html
To unsubscribe from Exception while trying to access cache via JDBC API, click 
here.
NAML


Re: DataStreamer is closed

2016-11-03 Thread Anton Vinogradov
Anil,

Is it the first and only exception in the logs?

Is it possible to debug this?
You can set a breakpoint at the first line of
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl#closeEx(boolean,
org.apache.ignite.IgniteCheckedException)
That will tell you who is stopping the data streamer.

On Thu, Nov 3, 2016 at 1:41 PM, Anil  wrote:

> Hi Anton,
> No. ignite nodes looks good.
>
> I have attached my KafkaCacheDataStreamer class and following is the code
> to listen to the kafka topic. IgniteCache is created using java
> configuration.
>
> I see cache size is zero after adding the entries to cache as well from
> KafkaCacheDataStreamer. Not sure how to log whether the entries added to
> cache or not.
>
> KafkaCacheDataStreamer kafkaStreamer = new
> KafkaCacheDataStreamer();
>
>  Properites pros = new Properites() // kafka properties
>  ConsumerConfig consumerConfig = new ConsumerConfig(props);
>
> try {
> IgniteDataStreamer stmr = ignite.dataStreamer(
> CacheManager.PERSON_CACHE);
>// allow overwriting cache data
>stmr.allowOverwrite(true);
>
>kafkaStreamer.setIgnite(ignite);
>kafkaStreamer.setStreamer(stmr);
>
>// set the topic
>kafkaStreamer.setTopic(kafkaConfig.getString("topic",
> "TestTopic"));
>
>// set the number of threads to process Kafka streams
>kafkaStreamer.setThreads(1);
>
>// set Kafka consumer configurations
>kafkaStreamer.setConsumerConfig(consumerConfig);
>
>// set decoders
>kafkaStreamer.setKeyDecoder(new StringDecoder(new
> VerifiableProperties()));
>kafkaStreamer.setValueDecoder(new StringDecoder(new
> VerifiableProperties()));
>kafkaStreamer.setMultipleTupleExtractor(new
> StreamMultipleTupleExtractor() {
> @Override
> public Map extract(String msg) {
> Map entries = new HashMap<>();
> try {
> KafkaMessage request = Json.decodeValue(msg, KafkaMessage.class);
> IgniteCache cache = CacheManager.getCache();
>
> if (CollectionUtils.isNotEmpty(request.getPersons())){
> String id = null;
> for (Person ib : request.getPersons()){
> if (StringUtils.isNotBlank(ib.getId())){
> id = ib.getId();
> if (null != ib.isDeleted() && Boolean.TRUE.equals(ib.isDeleted())){
> cache.remove(id);
> }else {
> // no need to store the id. so setting null.
> ib.setId(null);
> entries.put(id, ib);
> }
> }
> }
> }else {
>
> }
> }catch (Exception ex){
> logger.error("Error while updating the cache - {} {} " ,msg, ex);
> }
>
> return entries;
> }
> });
>
>kafkaStreamer.start();
> }catch (Exception ex){
> logger.error("Error in kafka data streamer ", ex);
> }
>
>
> Please let me know if you see any issues. thanks.
>
> On 3 November 2016 at 15:59, Anton Vinogradov 
> wrote:
>
>> Anil,
>>
>> Could you provide getStreamer() code and full logs?
>> Possible, ignite node was disconnected and this cause DataStreamer
>> closure.
>>
>> On Thu, Nov 3, 2016 at 1:17 PM, Anil  wrote:
>>
>>> HI,
>>>
>>> I have created custom kafka data streamer for my use case and i see
>>> following exception.
>>>
>>> java.lang.IllegalStateException: Data streamer has been closed.
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl.enterBusy(DataStreamerImpl.java:360)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl.addData(DataStreamerImpl.java:507)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl.addData(DataStreamerImpl.java:498)
>>> at net.juniper.cs.cache.KafkaCacheDataStreamer.addMessage(Kafka
>>> CacheDataStreamer.java:128)
>>> at net.juniper.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCache
>>> DataStreamer.java:176)
>>> at java.util.concurrent.Executors$RunnableAdapter.call(Executor
>>> s.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>>> Executor.java:1142)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>>> lExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>>
>>>
>>>
>>> addMessage method is
>>>
>>>  @Override
>>> protected void addMessage(T msg) {
>>> if (getMultipleTupleExtractor() == null){
>>> Map.Entry e = getSingleTupleExtractor().extract(msg);
>>>
>>> if (e != null)
>>> getStreamer().addData(e);
>>>
>>> } else {
>>> Map m = getMultipleTupleExtractor().extract(msg);
>>> if (m != null && !m.isEmpty()){
>>> getStreamer().addData(m);
>>> }
>>> }
>>> 

Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

2016-11-03 Thread Sergej Sidorov
Hi!

That part of the code [1] starts an Ignite client node. For the client node
to work, a server node is required (an instance of Ignite with
clientMode(false)). Do you run both of those nodes in the same process, or do
you run a separate Spring Boot instance?

Please note that starting a server node does not require writing any code:
you can download the package from the link below [2] and run it with your
configuration.

Could you share the entire sample project?

[1] Connection conn = DriverManager.getConnection("jdbc:ignite:cfg://")
[2] https://ignite.apache.org/download.cgi
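
Concretely, starting the server node can be just the launcher script from the binary package (the config path here is a placeholder):

```shell
# From the unpacked Ignite binary release directory: start a standalone
# server node with your own Spring XML configuration file.
bin/ignite.sh config/my-ignite-config.xml
```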



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Exception-while-trying-to-access-cache-via-JDBC-API-tp8648p8681.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: DataStreamer is closed

2016-11-03 Thread Anil
Hi Anton,
No, the Ignite nodes look good.

I have attached my KafkaCacheDataStreamer class, and the following is the
code that listens to the Kafka topic. The IgniteCache is created using Java
configuration.

I see that the cache size is zero even after adding entries to the cache from
KafkaCacheDataStreamer. I am not sure how to log whether the entries were
added to the cache or not.

KafkaCacheDataStreamer kafkaStreamer = new KafkaCacheDataStreamer();

Properties props = new Properties(); // Kafka properties
ConsumerConfig consumerConfig = new ConsumerConfig(props);

try {
    IgniteDataStreamer<String, Person> stmr =
        ignite.dataStreamer(CacheManager.PERSON_CACHE);
    // allow overwriting cache data
    stmr.allowOverwrite(true);

    kafkaStreamer.setIgnite(ignite);
    kafkaStreamer.setStreamer(stmr);

    // set the topic
    kafkaStreamer.setTopic(kafkaConfig.getString("topic", "TestTopic"));

    // set the number of threads to process Kafka streams
    kafkaStreamer.setThreads(1);

    // set Kafka consumer configurations
    kafkaStreamer.setConsumerConfig(consumerConfig);

    // set decoders
    kafkaStreamer.setKeyDecoder(new StringDecoder(new VerifiableProperties()));
    kafkaStreamer.setValueDecoder(new StringDecoder(new VerifiableProperties()));

    kafkaStreamer.setMultipleTupleExtractor(new StreamMultipleTupleExtractor<String, String, Person>() {
        @Override
        public Map<String, Person> extract(String msg) {
            Map<String, Person> entries = new HashMap<>();
            try {
                KafkaMessage request = Json.decodeValue(msg, KafkaMessage.class);
                IgniteCache<String, Person> cache = CacheManager.getCache();

                if (CollectionUtils.isNotEmpty(request.getPersons())) {
                    String id = null;
                    for (Person ib : request.getPersons()) {
                        if (StringUtils.isNotBlank(ib.getId())) {
                            id = ib.getId();
                            if (ib.isDeleted() != null && Boolean.TRUE.equals(ib.isDeleted())) {
                                cache.remove(id);
                            } else {
                                // no need to store the id, so setting it to null
                                ib.setId(null);
                                entries.put(id, ib);
                            }
                        }
                    }
                }
            } catch (Exception ex) {
                logger.error("Error while updating the cache - {} {}", msg, ex);
            }

            return entries;
        }
    });

    kafkaStreamer.start();
} catch (Exception ex) {
    logger.error("Error in kafka data streamer ", ex);
}


Please let me know if you see any issues. thanks.
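
The extractor logic above (keep entries with a non-blank id that are not flagged as deleted) can be exercised in isolation with plain collections. A sketch, with a simplified hypothetical stand-in for the Person payload:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ExtractorSketch {
    // Simplified stand-in for the Person payload used in the message above.
    public static final class Person {
        public final String id;
        public final boolean deleted;
        public Person(String id, boolean deleted) { this.id = id; this.deleted = deleted; }
    }

    // Mirrors the extract() logic: keep entries with a non-blank id that are
    // not deleted, keyed by id. (Cache removal of deleted ids is a side
    // effect in the original and is left out here.)
    public static Map<String, Person> extract(List<Person> persons) {
        Map<String, Person> entries = new HashMap<>();
        for (Person p : persons) {
            boolean blankId = p.id == null || p.id.trim().isEmpty();
            if (!blankId && !p.deleted)
                entries.put(p.id, p);
        }
        return entries;
    }

    public static void main(String[] args) {
        Map<String, Person> out = extract(Arrays.asList(
            new Person("a", false),   // kept
            new Person("  ", false),  // dropped: blank id
            new Person("b", true)));  // dropped: deleted
        System.out.println(out.keySet()); // prints [a]
    }
}
```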

On 3 November 2016 at 15:59, Anton Vinogradov 
wrote:

> Anil,
>
> Could you provide getStreamer() code and full logs?
> Possible, ignite node was disconnected and this cause DataStreamer closure.
>
> On Thu, Nov 3, 2016 at 1:17 PM, Anil  wrote:
>
>> HI,
>>
>> I have created custom kafka data streamer for my use case and i see
>> following exception.
>>
>> java.lang.IllegalStateException: Data streamer has been closed.
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.enterBusy(DataStreamerImpl.java:360)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.addData(DataStreamerImpl.java:507)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.addData(DataStreamerImpl.java:498)
>> at net.juniper.cs.cache.KafkaCacheDataStreamer.addMessage(Kafka
>> CacheDataStreamer.java:128)
>> at net.juniper.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCache
>> DataStreamer.java:176)
>> at java.util.concurrent.Executors$RunnableAdapter.call(
>> Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> Executor.java:1142)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>> lExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>>
>>
>> addMessage method is
>>
>>  @Override
>> protected void addMessage(T msg) {
>> if (getMultipleTupleExtractor() == null){
>> Map.Entry e = getSingleTupleExtractor().extract(msg);
>>
>> if (e != null)
>> getStreamer().addData(e);
>>
>> } else {
>> Map m = getMultipleTupleExtractor().extract(msg);
>> if (m != null && !m.isEmpty()){
>> getStreamer().addData(m);
>> }
>> }
>> }
>>
>>
>> Do you see any issue ? Please let me know if you need any additional
>> information. thanks.
>>
>> Thanks.
>>
>
>


KafkaDataSteamer.java
Description: Binary data


Re: DataStreamer is closed

2016-11-03 Thread Anton Vinogradov
Anil,

Could you provide the getStreamer() code and full logs?
Possibly the Ignite node was disconnected, and this caused the DataStreamer closure.

On Thu, Nov 3, 2016 at 1:17 PM, Anil  wrote:

> HI,
>
> I have created custom kafka data streamer for my use case and i see
> following exception.
>
> java.lang.IllegalStateException: Data streamer has been closed.
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl.enterBusy(DataStreamerImpl.java:360)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl.addData(DataStreamerImpl.java:507)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl.addData(DataStreamerImpl.java:498)
> at net.juniper.cs.cache.KafkaCacheDataStreamer.addMessage(
> KafkaCacheDataStreamer.java:128)
> at net.juniper.cs.cache.KafkaCacheDataStreamer$1.run(
> KafkaCacheDataStreamer.java:176)
> at java.util.concurrent.Executors$RunnableAdapter.
> call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
> addMessage method is
>
>  @Override
> protected void addMessage(T msg) {
> if (getMultipleTupleExtractor() == null){
> Map.Entry e = getSingleTupleExtractor().extract(msg);
>
> if (e != null)
> getStreamer().addData(e);
>
> } else {
> Map m = getMultipleTupleExtractor().extract(msg);
> if (m != null && !m.isEmpty()){
> getStreamer().addData(m);
> }
> }
> }
>
>
> Do you see any issue ? Please let me know if you need any additional
> information. thanks.
>
> Thanks.
>


DataStreamer is closed

2016-11-03 Thread Anil
Hi,

I have created a custom Kafka data streamer for my use case and I see the
following exception.

java.lang.IllegalStateException: Data streamer has been closed.
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.enterBusy(DataStreamerImpl.java:360)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addData(DataStreamerImpl.java:507)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addData(DataStreamerImpl.java:498)
at
net.juniper.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCacheDataStreamer.java:128)
at
net.juniper.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCacheDataStreamer.java:176)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)



addMessage method is

@Override
protected void addMessage(T msg) {
    if (getMultipleTupleExtractor() == null) {
        Map.Entry<K, V> e = getSingleTupleExtractor().extract(msg);

        if (e != null)
            getStreamer().addData(e);
    } else {
        Map<K, V> m = getMultipleTupleExtractor().extract(msg);

        if (m != null && !m.isEmpty())
            getStreamer().addData(m);
    }
}


Do you see any issue? Please let me know if you need any additional
information. Thanks.

Thanks.


Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

2016-11-03 Thread Sergi Vladykin
Ok, then please check your effective classpath:

System.out.println(System.getProperty("java.class.path"));

Also you can check where H2 classes come from:

System.out.println(org.h2.Driver.class.getProtectionDomain().getCodeSource().getLocation());

In the end you must have in classpath only h2-1.4.191.jar and no other
versions.

Sergi

2016-11-03 9:13 GMT+03:00 chevy :

> There are no remote nodes deployed yet. I am still in dev phase so trying
> to start ignite locally with just one instance from the code itself.
>
>
>
> --
>
> Regards,
>
> Chetan.
>
>
>
> *From: *"Sergi Vladykin [via Apache Ignite Users]"  >
> *Date: *Thursday, November 3, 2016 at 12:50 AM
> *To: *"Chetan.V.Yadav" <[hidden email]
> >
> *Subject: *Re: [EXTERNAL] Re: Exception while trying to access cache via
> JDBC API
>
>
>
> Dependency com.h2database:h2:1.4.191 is correct. You definitely have some
> other H2 version in classpath (at least on remote node). Note that you must
> have correct H2 version in classpath on all the cluster nodes.
>
>
>
> BTW, can you please post exception stack trace from remote node?
>
>
>
> Sergi
>
>
>
> 2016-11-02 21:41 GMT+03:00 chevy <[hidden email]>:
>
> I see only one dependency which is coming from ignite-indexing jar. Please
> refer dependency chain below –
>
>
>
> \--- org.apache.ignite:ignite-indexing:1.7.0
>
>  +--- org.apache.ignite:ignite-core:1.7.0 (*)
>
>  +--- commons-codec:commons-codec:1.6 -> 1.10
>
>  +--- org.apache.lucene:lucene-core:3.5.0
>
>  \--- com.h2database:h2:1.4.191
>
>
>
> Also, I am getting errors with both versions of h2.
>
>
>
> --
>
> Regards,
>
> Chetan.
>
>
>
> *From: *"Sergi Vladykin [via Apache Ignite Users]"  >
> *Date: *Wednesday, November 2, 2016 at 11:26 PM
> *To: *"Chetan.V.Yadav" <[hidden email]
> >
> *Subject: *Re: [EXTERNAL] Re: Exception while trying to access cache via
> JDBC API
>
>
>
> The problem here is that you have a wrong H2 version in classpath. Most
> probably this wrong transitive dependency comes from Spring Boot, you need
> to exclude it.
>
>
>
> Sergi
>
>
>
> 2016-11-02 20:35 GMT+03:00 chevy <[hidden email]>:
>
> It works perfectly without spring-boot (previously tested) but I need to
> go with spring-boot as business logic needs faster execution using boot
> features. Is there any workaround with which I can fix this problem?
>
>
>
> --
>
> Regards,
>
> Chetan.
>
>
>
> *From: *"Sergej Sidorov [via Apache Ignite Users]"  >
> *Date: *Wednesday, November 2, 2016 at 9:43 PM
> *To: *"Chetan.V.Yadav" <[hidden email]
> >
> *Subject: *Re: [EXTERNAL] Re: Exception while trying to access cache via
> JDBC API
>
>
>
> Do your server node is also under spring boot or it is plain ignite
> assembly?
> I did the following:
> 1. Start server node from regular ignite assembly
> 2. Run client with your configuration (build.gradle, mpm-ignite.xml)
> All worked correct with changes described in my previous message.
> Of course, I was still added cache configuration:
>
> 
> 
>
>  class="org.apache.ignite.configuration.
> CacheConfiguration">
> 
> 
> 
> 
> 
> java.lang.String
> org.ignite.example.
> Company
> 
> 
> 
> 
> 
>
> I guess it is still issue with dependencies. Check your project dependency
> graph.
>
> Sergej
> --
>
> 
>
>
> --
>
> View this message in context: Re: [EXTERNAL] Re: Exception while trying
> to access cache via JDBC API
> 
>
>
> Sent from the Apache Ignite Users mailing list archive
> 

Re: Which ports does ignite cluster need to run normally?

2016-11-03 Thread edwardk
Hi,

Does Apache Ignite need the entire range of ports to be opened?

I mean:
TcpDiscoverySpi: 47500~47600
TcpCommunicationSpi: 47100~47200

Is it mandatory to open the entire range of discovery ports (47500~47600) and
the entire range of communication ports (47100~47200) on every member of the
Ignite cluster?

Can't we just open one or two ports (for both discovery and communication) on
every cluster member node?

I am afraid opening so many ports might be a security risk, and I am not sure
the security/infrastructure team in my organization would accept opening a
huge number of ports.

Thanks,
edwardk
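
For reference, the wide ranges are only fallbacks for running several nodes on one host; with one node per host, each SPI can be pinned to a single port, so only two inbound ports need opening. A sketch (the port numbers are the defaults, chosen here as examples):

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class FixedPortsConfig {
    public static IgniteConfiguration create() {
        // Pin discovery to a single port: a range of 0 disables the
        // "try the next port" fallback, so only 47500 is used.
        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setLocalPort(47500);
        discoSpi.setLocalPortRange(0);

        // Pin communication to a single port as well (47100 only).
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setLocalPort(47100);
        commSpi.setLocalPortRange(0);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoSpi);
        cfg.setCommunicationSpi(commSpi);
        return cfg;
    }
}
```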




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Which-ports-does-ignite-cluster-need-to-run-normally-tp8031p8676.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

2016-11-03 Thread chevy
There are no remote nodes deployed yet. I am still in the dev phase, so I am
trying to start Ignite locally with just one instance from the code itself.

--
Regards,
Chetan.

From: "Sergi Vladykin [via Apache Ignite Users]" 

Date: Thursday, November 3, 2016 at 12:50 AM
To: "Chetan.V.Yadav" 
Subject: Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

Dependency com.h2database:h2:1.4.191 is correct. You definitely have some other 
H2 version in classpath (at least on remote node). Note that you must have 
correct H2 version in classpath on all the cluster nodes.

BTW, can you please post exception stack trace from remote node?

Sergi

2016-11-02 21:41 GMT+03:00 chevy <[hidden 
email]>:
I see only one dependency which is coming from ignite-indexing jar. Please 
refer dependency chain below –

\--- org.apache.ignite:ignite-indexing:1.7.0
 +--- org.apache.ignite:ignite-core:1.7.0 (*)
 +--- commons-codec:commons-codec:1.6 -> 1.10
 +--- org.apache.lucene:lucene-core:3.5.0
 \--- com.h2database:h2:1.4.191

Also, I am getting errors with both versions of h2.

--
Regards,
Chetan.

From: "Sergi Vladykin [via Apache Ignite Users]"
Date: Wednesday, November 2, 2016 at 11:26 PM
To: "Chetan.V.Yadav" <[hidden email]>
Subject: Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

The problem here is that you have a wrong H2 version in the classpath. Most
probably this wrong transitive dependency comes from Spring Boot; you need to
exclude it.

Sergi
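In Gradle, the fix Sergi describes could look like this (a sketch: forcing the version is one option, and the spring-boot-starter coordinates shown in the commented alternative are an assumption about this project's build, not taken from the thread):

```groovy
// Sketch: pin H2 to the version ignite-indexing 1.7.0 declares, so a
// different H2 pulled in transitively cannot shadow it on the classpath.
configurations.all {
    resolutionStrategy {
        force 'com.h2database:h2:1.4.191'
    }
}

// Or exclude H2 from the suspected transitive source (coordinates illustrative):
// compile('org.springframework.boot:spring-boot-starter-jdbc') {
//     exclude group: 'com.h2database', module: 'h2'
// }
```

Either way, the same H2 version must end up on the classpath of every cluster node, as noted earlier in the thread.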

2016-11-02 20:35 GMT+03:00 chevy <[hidden email]>:
It works perfectly without spring-boot (previously tested) but I need to go 
with spring-boot as business logic needs faster execution using boot features. 
Is there any workaround with which I can fix this problem?

--
Regards,
Chetan.

From: "Sergej Sidorov [via Apache Ignite Users]"
Date: Wednesday, November 2, 2016 at 9:43 PM
To: "Chetan.V.Yadav" <[hidden email]>
Subject: Re: [EXTERNAL] Re: Exception while trying to access cache via JDBC API

Is your server node also under Spring Boot, or is it a plain Ignite assembly?
I did the following:
1. Started the server node from the regular Ignite assembly
2. Ran the client with your configuration (build.gradle, mpm-ignite.xml)
Everything worked correctly with the changes described in my previous message.
Of course, I still added the cache configuration:

<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <property name="indexedTypes">
            <list>
                <value>java.lang.String</value>
                <value>org.ignite.example.Company</value>
            </list>
        </property>
    </bean>
</property>
I guess it is still an issue with dependencies. Check your project's dependency
graph.

Sergej







