Re: Multiple Caches in same configuration XML ?

2015-11-12 Thread Anton Vinogradov
Hello,

please try something similar to the following:

<property name="cacheConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="plans"/>
            ...
        </bean>

        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="customers"/>
            ...
        </bean>
    </list>
</property>

On Thu, Nov 12, 2015 at 11:43 AM, edwardk  wrote:

> Hi,
>
> I want to configure multiple caches in apache ignite cache configuration
> xml
> with different expiry and eviction times and also modes.
>
> I was able to do so for one cache(name: 'plans') by adding a property
> cacheConfiguration to the IgniteConfiguration bean and setting its
> properties as below.
>
>
> <property name="cacheConfiguration">
>     <bean class="org.apache.ignite.configuration.CacheConfiguration">
>         <property name="name" value="plans"/>
>
>         <property name="expiryPolicyFactory">
>             <bean id="expiryPolicy" class="javax.cache.expiry.CreatedExpiryPolicy"
>                   factory-method="factoryOf">
>                 <constructor-arg>
>                     ...
>                 </constructor-arg>
>             </bean>
>         </property>
>
>         <property name="evictionPolicy">
>             <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
>                 <property name="maxSize" value="100"/>
>             </bean>
>         </property>
>
>         <property name="cacheMode" value="PARTITIONED"/>
>     </bean>
> </property>
>
> When I want to configure the settings in a different way for another cache
> with 'name' , say, customers, how would I do that in same configuration
> file.
>
> I tried adding another CacheConfiguration bean but was not able to do so.
>
> How can I add another bean of type CacheConfiguration in same xml and
> configure the properties differently.
>
> If not is there way I can configure settings separate for each cache
> (plans,
> customers, etc.,) in XML.
>
>
> Thanks,
> edwardk
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Multiple-Caches-in-same-configuration-XML-tp1939.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Stop the node but keep the process

2015-11-12 Thread Anton Vinogradov
Alexandre,

As far as I understand, "grid" is an Ignite instance.
Could you please check whether grid.close() works correctly in this case?

On Thu, Nov 12, 2015 at 11:03 AM, Alexandre Boudnik <
alexander.boud...@gmail.com> wrote:

> Hello,
>
> In my cluster, each node is running as web application. When web
> application has been stopped and servlet context has been destroyed,
> it needs to terminate the Ignite node. Unfortunately,
>
>
> grid.cluster().stopNodes(Collections.singletonList(service.grid.cluster().localNode().id()));
>
> terminates the entire jvm by calling System.exit().
>
> Could you advise how to terminate/exclude the node from cluster but to
> keep the web server process, and to be able to restart the node?
>
> Take care,
> Alexandre "Sasha" Boudnik
>
> call me via Google Voice:
> 1(405) BUDNIKA
> 1(405) 283-6452
>


Re: Multiple Caches in same configuration XML ?

2015-11-12 Thread Anton Vinogradov
Also,
Please properly subscribe to the user list (this way we will not have to
manually approve your emails).
All you need to do is send an email to “user-subscr...@ignite.apache.org”
and follow simple instructions in the reply.

On Thu, Nov 12, 2015 at 12:56 PM, Anton Vinogradov <avinogra...@gridgain.com
> wrote:

> Hello,
>
> please try something similar to the following:
>
> <property name="cacheConfiguration">
>     <list>
>         <bean class="org.apache.ignite.configuration.CacheConfiguration">
>             <property name="name" value="plans"/>
>             ...
>         </bean>
>
>         <bean class="org.apache.ignite.configuration.CacheConfiguration">
>             <property name="name" value="customers"/>
>             ...
>         </bean>
>     </list>
> </property>
>
> On Thu, Nov 12, 2015 at 11:43 AM, edwardk <ekipli...@gmail.com> wrote:
>
>> Hi,
>>
>> I want to configure multiple caches in apache ignite cache configuration
>> xml
>> with different expiry and eviction times and also modes.
>>
>> I was able to do so for one cache(name: 'plans') by adding a property
>> cacheConfiguration to the IgniteConfiguration bean and setting its
>> properties as below.
>>
>>
>> <property name="cacheConfiguration">
>>     <bean class="org.apache.ignite.configuration.CacheConfiguration">
>>         <property name="name" value="plans"/>
>>
>>         <property name="expiryPolicyFactory">
>>             <bean id="expiryPolicy" class="javax.cache.expiry.CreatedExpiryPolicy"
>>                   factory-method="factoryOf">
>>                 <constructor-arg>
>>                     ...
>>                 </constructor-arg>
>>             </bean>
>>         </property>
>>
>>         <property name="evictionPolicy">
>>             <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
>>                 <property name="maxSize" value="100"/>
>>             </bean>
>>         </property>
>>
>>         <property name="cacheMode" value="PARTITIONED"/>
>>     </bean>
>> </property>
>>
>> When I want to configure the settings in a different way for another cache
>> with 'name' , say, customers, how would I do that in same configuration
>> file.
>>
>> I tried adding another CacheConfiguration bean but was not able to do so.
>>
>> How can I add another bean of type CacheConfiguration in same xml and
>> configure the properties differently.
>>
>> If not is there way I can configure settings separate for each cache
>> (plans,
>> customers, etc.,) in XML.
>>
>>
>> Thanks,
>> edwardk
>>
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/Multiple-Caches-in-same-configuration-XML-tp1939.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: [RESULT] [VOTE] Apache Ignite 1.4.0 Release (RC1)

2015-09-29 Thread Anton Vinogradov
Enrico,

Please try again, everything seems to be ok:
https://repo.maven.apache.org/maven2/org/apache/ignite/ignite-core/

On Tue, Sep 29, 2015 at 10:08 AM, Enrico Olivelli <eolive...@gmail.com>
wrote:

> Hi,
> I'm looking forward to upgrade to 1.4 ! I can't see maven artifacts on
> maven central
>
>
> -- Enrico
>
> 2015-09-28 19:52 GMT+02:00 Konstantin Boudnik <c...@apache.org>:
>
>> Congrats! Well timed to @ApacheCon too!
>>
>> Please make sure to do the announcement cc'ed to annou...@apache.org
>>
>> Thanks
>>   Cos
>>
>> On September 28, 2015 4:09:25 PM CEST, Anton Vinogradov <
>> avinogra...@gridgain.com> wrote:
>> >Ignite 1.4.0 successfully released to
>> >https://dist.apache.org/repos/dist/release/ignite/1.4.0/
>> >Site will be updated soon.
>> >
>> >On Mon, Sep 28, 2015 at 4:13 PM, Yakov Zhdanov <yzhda...@apache.org>
>> >wrote:
>> >
>> >> Hello!
>> >>
>> >> Apache Ignite 1.4.0 release (RC1) has been accepted.
>> >>
>> >> 9 "+1" votes received.
>> >>
>> >> Here are the votes received:
>> >>
>> >>- Denis Magda (binding)
>> >>- Anton Vinogradov (binding)
>> >>- Alexey Kuznetsov (binding)
>> >>- Sergi Vladykin (binding)
>> >>- Gianfranco Murador (binding)
>> >>- Vladimir Ozerov (binding)
>> >>- Raul Kripalani (binding)
>> >>- Konstantin Boudnik (binding)
>> >>- chandresh pancholi
>> >>
>> >> Here is the link to vote thread -
>> >>
>> >>
>> >
>> http://apache-ignite-developers.2346864.n4.nabble.com/VOTE-Apache-Ignite-1-4-0-Release-RC1-tp3474.html
>> >>
>> >> Thanks!
>> >>
>> >> --Yakov
>> >>
>>
>>
>


Re: Upgrading from 1.3 to 1.4

2015-09-29 Thread Anton Vinogradov
Enrico,

Classes inside the internal package have no compatibility guarantee between
releases.

IgniteConfiguration.consistentId can't guarantee correct node identification.
Using a user attribute seems to be the correct choice.

On Tue, Sep 29, 2015 at 11:07 AM, Enrico Olivelli 
wrote:

> Hi,
> I'm upgrading from 1.3 to 1.4.
>
> Some notes/questions:
>
> I implemented my own  GridComponent (in order to implement a security
> plugin), after the upgrade I had  to implement these to new methods
>
> @Override
> public void onDisconnected(IgniteFuture i) throws
> IgniteCheckedException {
> }
>
> @Override
> public void onReconnected(boolean bln) throws IgniteCheckedException
> {
> }
>
> Maybe some other user could have such a "compatibility" issue.
>
> I see a new property IgniteConfiguration.consistentId, can I use it to
> identify a node in the grid ? in 1.3 I used an user-attribute
>
>
> Thank you
>
> Enrico
>
>
>


Re: Help with integrating Ignite(as JCache) with JBoss EAP 6.4

2015-12-17 Thread Anton Vinogradov
Yakov,
TC seems to be ok, could you please review
https://github.com/apache/ignite/pull/345/files before commit?

On Thu, Dec 17, 2015 at 1:27 PM, Yakov Zhdanov <yzhda...@apache.org> wrote:

> Anton, can you please let us know if changes have been merged?
>
> --Yakov
>
> 2015-12-16 17:00 GMT+03:00 Anton Vinogradov <avinogra...@gridgain.com>:
>
>> Val,
>> Yes, Please check my pull-request
>> https://github.com/apache/ignite/pull/345
>> I'll merge the changes tomorrow morning if everything is ok and TC passes.
>>
>> On Wed, Dec 16, 2015 at 6:11 AM, vkulichenko <
>> valentin.kuliche...@gmail.com> wrote:
>>
>>> Completely agree with Juan. Setting class loader only for default
>>> configuration is definitely not enough for the most use cases.
>>>
>>> I reopened the ticket. Anton, will you have a chance to finish the fix?
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-ignite-users.70518.x6.nabble.com/Help-with-integrating-Ignite-as-JCache-with-JBoss-EAP-6-4-tp2134p2228.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>


Re: Help with integrating Ignite(as JCache) with JBoss EAP 6.4

2015-12-15 Thread Anton Vinogradov
Juan,

I've checked fix and pushed it to ignite-1.5.

On Mon, Dec 14, 2015 at 3:29 PM, Yakov Zhdanov <yzhda...@apache.org> wrote:

> Juan,
>
> Anton Vinogradov has identified the problem in cache manager and fixed it
> (one-liner fix) and will check tc today. If this does not break anything,
> we will include it into ignite-1.5 release which should be available early
> next week.
>
> --Yakov
>
> 2015-12-08 18:18 GMT+03:00 juanavelez <jsjunkemai...@gmail.com>:
>
>> That was the original approach but seeing how difficult it is to
>> correctly set the IgniteConfiguration object (either by using Properties as
>> you pointed out earlier in this thread because of Factories, SPIs, etc. or
>> JSON as stated in another thread), we realized we could not avoid using
>> Spring and hence the route of using
>> IgnitionEx.loadConfigurations(springCfgUrl) which I agree tightly couples
>> us to a specific JCache implementation.
>>
>> We have already tested the approach and it seems to work at least for our
>> purposes.
>>
>> And yes, indeed we are bound by what the JCache API allows
>> Ignite/Hazelcast to expose which might not be the full features.
>>
>> BTW: We are very happy with Ignite, it has very strong features (we like
>> the most the seamless integration with JTA/XA!) which fit perfect our
>> intended purpose. Thank you for such a great product and your constant help.
>>
>>
>> Thanks - Juan
>>
>>
>> On Dec 7, 2015, at 2:25 PM, vkulichenko [via Apache Ignite Users] <[hidden
>> email]> wrote:
>>
>> Juan,
>>
>> I don't see how this will help you.
>> IgnitionEx.loadConfigurations(springCfgUrl) still uses Spring to load
>> configuration from the file.
>>
>> I think the best way is to create IgniteConfiguration object, set class
>> loader, start Ignite and create cache using Ignite API. If you then use
>> only JCache APIs, switching to another provider will be very easy even if
>> you don't use CacheManager. But note that not all Ignite features will be
>> available in this case (like SQL queries, for example).
>>
>> -Val
>>
>> --
>> If you reply to this email, your message will be added to the discussion
>> below:
>>
>> http://apache-ignite-users.70518.x6.nabble.com/Help-with-integrating-Ignite-as-JCache-with-JBoss-EAP-6-4-tp2134p2171.html
>> To unsubscribe from Help with integrating Ignite(as JCache) with JBoss
>> EAP 6.4, click here.
>>
>>
>>
>> --
>> View this message in context: Re: Help with integrating Ignite(as
>> JCache) with JBoss EAP 6.4
>> <http://apache-ignite-users.70518.x6.nabble.com/Help-with-integrating-Ignite-as-JCache-with-JBoss-EAP-6-4-tp2134p2178.html>
>> Sent from the Apache Ignite Users mailing list archive
>> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>>
>
>


Re: Help with integrating Ignite(as JCache) with JBoss EAP 6.4

2015-12-17 Thread Anton Vinogradov
PR merged to branch ignite-1.5.
The issue is not closed yet: tests still need to be written.

On Thu, Dec 17, 2015 at 1:59 PM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Looks good to me as well. However, shouldn't any tests be added as well?
>


Re: Can I set an expiry policy for some specify entry?

2015-12-01 Thread Anton Vinogradov
Lin,

As you can see in the example, you can use cache.withExpiryPolicy() to get a
cache wrapper with a specific ExpiryPolicy.
This policy will be used only during operations on this cache wrapper.

You can create as many wrappers as you need and put/get/etc. entries using
them.

I recommend using CreatedExpiryPolicy to set the ExpiryPolicy at entry
creation.
A comparison of ExpiryPolicies can be found here:
https://apacheignite.readme.io/v1.4/docs/expiry-policies
Please have a look at the other ExpiryPolicies; possibly one of them will be
more suitable for your use case.
For example, TouchedExpiryPolicy will renew the timeout on each operation on
an entry.

On Tue, Dec 1, 2015 at 3:33 PM, Vladimir Ershov 
wrote:

> Hi Lin,
> An expiry policy works for all values that were added through
> cacheWithExpiryPolicy, as in the following example:
>
> IgniteCache<Integer, Integer> cacheWithExpiryPolicy = cache.withExpiryPolicy(
> new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 5)));
>
>
> You are welcome to find an explaining example in the end of this message.
> It is also possible, that actually you are looking for something like
> eviction policy. Please take a look here then:
> https://apacheignite.readme.io/v1.5/docs/evictions
> Please, provide the feedback, if this answer was useful, or not.
> Thanks!
>
> public void test() throws Exception {
> Ignite ignite = startGrid(0); // some starting util method
>
> CacheConfiguration<Integer, Integer> cfg = new CacheConfiguration<>();
>
> cfg.setName(CACHE);
> cfg.setCacheMode(CacheMode.PARTITIONED);
> cfg.setRebalanceMode(CacheRebalanceMode.SYNC);
> cfg.setBackups(1);
>
> ignite.getOrCreateCache(cfg);
>
> IgniteCache<Integer, Integer> cache1 = ignite.cache(null);
>
> IgniteCache<Integer, Integer> cache2 = cache1.withExpiryPolicy(
> new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 1)));
>
> cache1.put(1, 1);
> cache1.put(2, 2);
> cache2.put(3, 3);
>
> cache2.get(1); // *Does not affect ExpiryPolicy*.
>
> U.sleep(2000);
>
> assert cache1.get(1) == 1;
> assert cache2.get(1) == 1; // *not Expired*
> assert cache1.get(2) == 2;
> assert cache1.get(3) == null; // *Expired*.
>
> }
>
> On Tue, Dec 1, 2015 at 10:47 AM, Lin  wrote:
>
>> Hi,
>>
>> I have read the docs on jcache expiry policies, the policy  will be used
>> for each operation invoked on the returned cache instance.
>>
>> IgniteCache cache = cache.withExpiryPolicy(
>> new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 5)));
>>
>>
>> and searched the nabble faq and found
>>
>> http://apache-ignite-users.70518.x6.nabble.com/Does-IgniteCache-withExpiryPolicy-affect-existing-cache-entries-td1870.html
>>
>> As I understand it, the expiry policy works for all the entries in the
>> cache. I would like to specify different expiry policies for different
>> entries.
>> How can I do that?
>>
>> Thanks for you help.
>>
>>
>> Regards,
>>
>> Lin
>>
>>
>>
>
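The CreatedExpiryPolicy vs. TouchedExpiryPolicy semantics discussed above can be modeled with a small JDK-only sketch (this is an illustration only, not Ignite code; class and method names are invented for the illustration):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model of the two expiry policies: CREATED fixes the deadline once,
 * when the entry is put; TOUCHED pushes the deadline forward on every access.
 * Time is passed in explicitly so the behaviour is deterministic.
 */
class ExpiryModel {
    enum Policy { CREATED, TOUCHED }

    private static class Entry {
        Object value;
        long deadline;
    }

    private final Map<String, Entry> entries = new HashMap<>();
    private final Policy policy;
    private final long ttl;

    ExpiryModel(Policy policy, long ttl) {
        this.policy = policy;
        this.ttl = ttl;
    }

    void put(String key, Object value, long now) {
        Entry e = new Entry();
        e.value = value;
        e.deadline = now + ttl; // both policies set the deadline on creation
        entries.put(key, e);
    }

    Object get(String key, long now) {
        Entry e = entries.get(key);
        if (e == null || now >= e.deadline) {
            entries.remove(key);
            return null; // expired
        }
        if (policy == Policy.TOUCHED)
            e.deadline = now + ttl; // TOUCHED renews the deadline on access
        return e.value;
    }
}
```

With a TTL of 10, a CREATED entry put at t=0 is gone by t=12 even if it was read at t=5, while a TOUCHED entry read at t=5 stays alive until t=15.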


Re: putting to cache as generic object

2016-01-13 Thread Anton Vinogradov
Hello Ambha,

It seems you need to use the BinaryMarshaller (
https://apacheignite.readme.io/docs/binary-marshaller).
In this case you can create objects using the binary builder and read field
values using BinaryObject (a reflection-like mechanism).

On Wed, Jan 13, 2016 at 7:58 AM, Ambha  wrote:

> In our project, we plan to deploy ignite caching as a shared service
> available for other modules. The caching module is visible to all the
> modules but any module's classes are not visible to caching.
>
> The objects of different components are put into the cache, but this
> requires classes of different modules to be on Ignite's classpath. I plan
> to use a reflection-like mechanism at the Ignite server to fetch the
> properties of the cached object. But before this, Ignite's
> OptimizedMarshaller expects the class to be present in the classpath while
> serializing. For security reasons I can't put other modules' classes into
> the Ignite classpath and also can't use peerClassLoading. I want a solution
> which makes Ignite treat objects to be cached as generic
> 'java.lang.Object's and does not expect the cached object's class to be
> present in Ignite's classpath.
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/putting-to-cache-as-generic-object-tp2525.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite-hibernate latest version is not available in maven central

2016-06-02 Thread Anton Vinogradov
Hi,

You can use non-Apache Maven repositories that contain the full list of
Apache Ignite artifacts.
For example, you can add a dependency on this one:
http://www.gridgainsystems.com/nexus/content/repositories/external/org/apache/ignite/ignite-hibernate/
We just uploaded the release tags 1.5.0-final and 1.6.0 without modifications.

Please note that these jars are NOT official Apache Ignite jars
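If you go that route, the pom.xml side would look roughly like this (the repository id and the version are illustrative):

```xml
<repositories>
    <repository>
        <id>gridgain-external</id>
        <url>http://www.gridgainsystems.com/nexus/content/repositories/external</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>org.apache.ignite</groupId>
        <artifactId>ignite-hibernate</artifactId>
        <version>1.6.0</version>
    </dependency>
</dependencies>
```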

On Mon, May 30, 2016 at 12:36 PM, vkulichenko  wrote:

> Hibernate is LGPL-licensed, so we stopped deploying there. To get artifacts
> that have LGPL dependencies, you need to build them from the source [1] and
> deploy in your local repo.
>
> [1] http://ignite.apache.org/download.cgi#build-source
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-hibernate-latest-version-is-not-available-in-maven-central-tp5288p5304.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Download links broken

2016-06-20 Thread Anton Vinogradov
Download URLs have been changed to https://archive.apache.org/dist/ignite/*


On Mon, Jun 20, 2016 at 4:14 PM, Dmitriy Setrakyan <dsetrak...@apache.org>
wrote:

> I think we should do it, at least for now, until the mirror issue is
> resolved. We should also file an INFRA issue in parallel. Anton, do you
> mind fixing it?
>
> On Mon, Jun 20, 2016 at 6:13 AM, Anton Vinogradov <a...@apache.org> wrote:
>
>> We already provide hard-coded links (without a mirror) to previous
>> versions.
>> For example
>>
>> https://archive.apache.org/dist/ignite/1.4.0/apache-ignite-fabric-1.4.0-bin.zip
>> I think we can do that for all releases.
>>
>>
>> On Mon, Jun 20, 2016 at 4:08 PM, Dmitriy Setrakyan <dsetrak...@apache.org
>> >
>> wrote:
>>
>> > Is it possible to provide hard-coded links (without mirror) in the mean
>> > time, while we are resolving this issue?
>> >
>> > Pavel, I think this issue should be reported to INFRA, not Ignite. I
>> doubt
>> > Ignite community can do anything to fix it.
>> >
>> > D.
>> >
>> > On Mon, Jun 20, 2016 at 6:04 AM, Pavel Tupitsyn <ptupit...@gridgain.com
>> >
>> > wrote:
>> >
>> >> I have reported this issue 4 months ago, please see details there:
>> >> https://issues.apache.org/jira/browse/IGNITE-2743
>> >> Christos, your link is missing .cgi suffix.
>> >>
>> >> Pavel.
>> >>
>> >> On Mon, Jun 20, 2016 at 3:53 PM, Vladisav Jelisavcic <
>> vladis...@gmail.com
>> >> > wrote:
>> >>
>> >>> Not working for me also,
>> >>> but only 1.6.0 (latest) and 1.5.0.final, the rest is working fine.
>> >>>
>> >>> On Mon, Jun 20, 2016 at 8:02 AM, Sergey Kozlov <skoz...@gridgain.com>
>> >>> wrote:
>> >>>
>> >>>> Hi
>> >>>>
>> >>>> It's a known issue: apache site puts links to a nearest site for
>> user (I
>> >>>> suppose it based on IP address) and does it incorrect.
>> >>>>
>> >>>> On Mon, Jun 20, 2016 at 8:37 AM, Dmitriy Setrakyan <
>> >>>> dsetrak...@apache.org>
>> >>>> wrote:
>> >>>>
>> >>>> > Worked for me just now. Can you try again?
>> >>>> >
>> >>>> > On Sun, Jun 19, 2016 at 2:59 AM, Christos Erotocritou <
>> >>>> > chris...@gridgain.com> wrote:
>> >>>> >
>> >>>> >> Hey guys,
>> >>>> >>
>> >>>> >> The links on the website seem to be broken, can someone check
>> this?
>> >>>> >>
>> >>>> >> https://ignite.apache.org/download.html#binaries <
>> >>>> >> https://ignite.apache.org/download.html#binaries>
>> >>>> >>
>> >>>> >> Thanks,
>> >>>> >>
>> >>>> >> Christos
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>>
>> >>>>
>> >>>> --
>> >>>> Sergey Kozlov
>> >>>> GridGain Systems
>> >>>> www.gridgain.com
>> >>>>
>> >>>
>> >>>
>> >>
>> >
>>
>
>


Re: Semaphore blocking on tryAcquire() while holding a cache-lock

2016-03-11 Thread Anton Vinogradov
Yakov,

I've answered.
It seems we should have a special FAQ section on the Ignite wiki to publish
such things.

On Sun, Mar 6, 2016 at 12:21 PM, Yakov Zhdanov  wrote:

> Vlad and all (esp Val and Anton V.),
>
> I reviewed the PR. My comments are in the ticket.
>
> Anton V. there is a question regarding optimized-classnames.properties.
> Can you please respond in ticket?
>
>
> --Yakov
>
> 2016-02-29 16:00 GMT+06:00 Yakov Zhdanov :
>
>> Vlad, that's great! I will take a look this week. Reassigning ticket to
>> myself.
>>
>> --Yakov
>>
>> 2016-02-26 18:37 GMT+03:00 Vladisav Jelisavcic :
>>
>>> Hi,
>>>
>>> i recently implemented distributed ReentrantLock - IGNITE-642,
>>> i made a pull request, so hopefully this could be added to the next
>>> release.
>>>
>>> Best regards,
>>> Vladisav
>>>
>>> On Thu, Feb 18, 2016 at 10:49 AM, Alexey Goncharuk <
>>> alexey.goncha...@gmail.com> wrote:
>>>
>>> > Folks,
>>> >
>>> > The current implementation of IgniteCache.lock(key).lock() has the same
>>> > semantics as the transactional locks - cache topology cannot be changed
>>> > while there exists an ongoing transaction or an explicit lock is held.
>>> The
>>> > restriction for transactions is quite fundamental, the lock() issue
>>> can be
>>> > fixed if we re-implement locking the same way IgniteSemaphore currently
>>> > works.
>>> >
>>> > As for the "Failed to find semaphore with the given name" message, my
>>> first
>>> > guess is that DataStructures were configured with 1 backups which led
>>> to
>>> > the data loss when two nodes were stopped. Mario, can you please
>>> re-test
>>> > your semaphore scenario with 2 backups configured for data structures?
>>> > From my side, I can also take a look at the semaphore issue when I'm
>>> done
>>> > with IGNITE-2610.
>>> >
>>>
>>
>>
>
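The backup count for data structures suggested above is set through AtomicConfiguration; a minimal sketch (the value 2 matches the suggestion in the thread, the rest is illustrative):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="atomicConfiguration">
        <bean class="org.apache.ignite.configuration.AtomicConfiguration">
            <!-- Backups for atomic data structures (semaphores, sequences, etc.). -->
            <property name="backups" value="2"/>
        </bean>
    </property>
</bean>
```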


Re: Cache Problems

2016-09-14 Thread Anton Vinogradov
Hi,
In a nutshell: Ignite writes to one of these files while the other is being
packed by the swap compactor. When packing is finished, the files change
places.


On Wed, Sep 14, 2016 at 5:35 AM, Level D <724172...@qq.com> wrote:

>
> I use a Lucene index.
>
> By the way,
> what do these two files mean?
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Cache-Problems-tp7477p7697.html
> Sent from the Apache Ignite Users mailing list archive at Nabbl
>


Re: Data grid client errors out when datasource not defined

2016-09-21 Thread Anton Vinogradov
Jason,

Thanks for the tips.

I found that CacheJdbcPojoStoreFactory requires the data source bean to be
defined inside the client config as well in order to use this cache.

Another way is to specify the *dataSource* property instead.

for example:

<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
    <property name="dataSource">
        <bean class="org.springframework.jdbc.datasource.DriverManagerDataSource">
            ...
        </bean>
    </property>
</bean>
Here's the code explaining how it works:

    if (dataSrc != null) // Use this case.
        store.setDataSource(dataSrc);
    else {
        if (dataSrcBean != null) {
            if (appCtx == null)
                throw new IgniteException("Spring application context resource is not injected.");

            IgniteSpringHelper spring;

            try {
                spring = IgniteComponentType.SPRING.create(false);

                DataSource data = spring.loadBeanFromAppContext(appCtx, dataSrcBean);

                store.setDataSource(data);

Let me know if it does not help.

On Wed, Sep 21, 2016 at 9:57 AM, amdam23000 <629160...@qq.com> wrote:

> Hi Anton,
>
> I'm sorry that for some reason i can't copy stack trace to text file out of
> the box.
> The more detail about the exception is as below:
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.
> GridDhtPartitionsExchangeFuture.startCaches(GridDhtPartitionsExchangeFutur
> e.java:956)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.
> GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFutur
> e.java:523)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeMana
> ger$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1297)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>
> The ignite version i use is 1.5.0, please notice this.
>
> I debug a little and find that client side will first receive cache config
> from server side,
> and then client will try to create an instance of that cache
> store(cfg.getCacheStoreFactory().create()).
> During the process of the instance creation, "datasource" cannot been found
> locally so error occurs.
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Data-grid-client-errors-out-when-
> datasource-not-defined-tp7820p7852.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Different of publicThreadPoolSize and systemThreadPoolSize

2016-09-20 Thread Anton Vinogradov
Hi,

As far as I can see, no thread pools are used in this case:

flushThreads = new GridWorker[flushThreadCnt];

writeCache = new ConcurrentLinkedHashMap<>(initCap, 0.75f, concurLvl);

for (int i = 0; i < flushThreads.length; i++) {
flushThreads[i] = new Flusher(gridName, "flusher-" + i, log);

new IgniteThread(flushThreads[i]).start();
}

Also, the *system* pool is used for cache operations, and the *public* pool
is used for map/reduce computations.

P.s. Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
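To the write-behind question below: the flush thread count is a per-cache setting, separate from both pools; a minimal sketch (values illustrative, and the cache would also need a store factory configured):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="writeBehindEnabled" value="true"/>
    <!-- Flusher threads are created by the write-behind store itself,
         not taken from the public or system pool. -->
    <property name="writeBehindFlushThreadCount" value="4"/>
</bean>
```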


On Tue, Sep 20, 2016 at 5:04 AM, ght230  wrote:

> We can configure internal thread pool and system thread pool in XML file by
>
> <property name="publicThreadPoolSize" value="..."/>
>
> <property name="systemThreadPoolSize" value="..."/>
>
> What is the different of their usage?
> I want to configure Number of threads for write-behind caching by setting
> "setWriteBehindFlushThreadCount(int)",
> and I want to know is it use "publicThreadPoolSize" or
> "systemThreadPoolSize"?
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Different-of-publicThreadPoolSize-and-
> systemThreadPoolSize-tp7835.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Data grid client errors out when datasource not defined

2016-09-20 Thread Anton Vinogradov
Jason,

Could you please reattach the stack trace as text? Nabble is down now and I
can't recheck the exception details.

Also, is it possible to create a simple Maven project with a test running a
server and a client node as you described?
I tried to run a similar configuration but got no failures.


On Tue, Sep 20, 2016 at 11:58 AM, amdam23000 <629160...@qq.com> wrote:

> Hi,
>
> please take a look at my config and code below, i think i did not specify
> some bean depending on dataSource on client side.
>
> 1. In client side, the spring config is as below:
>
>  class="org.apache.ignite.configuration.IgniteConfiguration">
> 
> 
> 
>
> 2. In server side, config is as below:
>
>  class="org.apache.ignite.configuration.IgniteConfiguration">
> 
> 
> 
>  class="org.apache.ignite.configuration.CacheConfiguration">
> 
> 
> 
> 
> 
>  class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
>  />
> 
> ... some db table and field mappings here, mean
> to load all data from the table ...
> 
> 
> 
> 
> 
> 
> 
>  class="org.springframework.jdbc.datasource.DriverManagerDataSource">
> ... db connection config here ...
> 
>
> 3. Server code
> IgniteCache cache = ignite.getOrCreateCache("xxx");
> cache.loadAll(null, null); // loading succeeds
> cache.get(xxxKey);   // here it works, we can get entity with specified
> key.
>
> 4. Client code (executed after server node launched)
> IgniteCache cache = ignite.getOrCreateCache("xxx");
> cache.get(xxxKey);   // here it failed as i posted previously, no
> datasource
> found.
>
> 5. If i add  client
> code works.
>
>
> Is it possible that client side need to connect db using datasource?
> In this case, client just perform a simple query of the cache.
>
> Thanks,
> Jason
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Data-grid-client-errors-out-when-
> datasource-not-defined-tp7820p7841.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: DataStreamer is closed

2016-11-11 Thread Anton Vinogradov
Anil,

Unfortunately,
  at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(
KafkaCacheDataStreamer.java:149)
does not fit the attached sources.

But,
java.lang.IllegalStateException: Cache has been closed or destroyed:
PERSON_CACHE
is the reason the data streamer is closed.

Is it possible to write a reproducible example, or to attach the full logs
and sources?


BTW, we already have a Kafka streamer; why did you decide to reimplement it?



On Wed, Nov 9, 2016 at 5:39 PM, Anil  wrote:

> Would there be any issues because of size of data ?
> i loaded around 80 gb on 4 node cluster. each node is of 8 CPU and 32 GB
> RAM configuration.
>
> and cache configuration -
>
> CacheConfiguration pConfig = new
> CacheConfiguration();
> pConfig.setName("Person_Cache");
> pConfig.setIndexedTypes(String.class, Person.class);
> pConfig.setBackups(1);
> pConfig.setCacheMode(CacheMode.PARTITIONED);
> pConfig.setCopyOnRead(false);
> pConfig.setSwapEnabled(true);
> pConfig.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
> pConfig.setSqlOnheapRowCacheSize(100_000);
> pConfig.setOffHeapMaxMemory(10 * 1024 * 1024 * 1024);
> pConfig.setStartSize(200);
> pConfig.setStatisticsEnabled(true);
>
> Thanks for your help.
>
> On 9 November 2016 at 19:56, Anil  wrote:
>
>> HI,
>>
>> Data streamer closed exception is very frequent. I did not see any
>> explicit errors/exception about data streamer close. the excption i see
>> only when message is getting added.
>>
>> I have 4 node ignite cluster and each node have consumer to connection
>> and push the message received to streamer.
>>
>> What if the node is down and re-joined when message is getting added
>> cache.
>>
>> Following is the exception from logs -
>>
>> 2016-11-09 05:55:55 ERROR pool-6-thread-1 KafkaCacheDataStreamer:146 -
>> Exception while adding to streamer
>> java.lang.IllegalStateException: Data streamer has been closed.
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.enterBusy(DataStreamerImpl.java:360)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.addData(DataStreamerImpl.java:507)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.addData(DataStreamerImpl.java:498)
>> at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
>> heDataStreamer.java:140)
>> at com.test.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCacheDat
>> aStreamer.java:197)
>> at java.util.concurrent.Executors$RunnableAdapter.call(
>> Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> Executor.java:1142)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>> lExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>> 2016-11-09 05:55:55 ERROR pool-6-thread-1 KafkaCacheDataStreamer:200 -
>> Message is ignored due to an error 
>> [msg=MessageAndMetadata(TestTopic,1,Message(magic
>> = 0, attributes = 0, crc = 2111790081, key = null, payload =
>> java.nio.HeapByteBuffer[pos=0 lim=1155 cap=1155]),2034,kafka.serializ
>> er.StringDecoder@3f77f0b,kafka.serializer.StringDecoder@67fd2da0)]
>> java.lang.IllegalStateException: Cache has been closed or destroyed:
>> PERSON_CACHE
>> at org.apache.ignite.internal.processors.cache.GridCacheGateway
>> .enter(GridCacheGateway.java:160)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>> .onEnter(IgniteCacheProxy.java:2103)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>> .size(IgniteCacheProxy.java:826)
>> at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
>> heDataStreamer.java:149)
>> at com.test.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCacheDat
>> aStreamer.java:197)
>> at java.util.concurrent.Executors$RunnableAdapter.call(
>> Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> Executor.java:1142)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>> lExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> I have attached the KafkaCacheDataStreamer class and let me know if you
>> need any additional details. thanks.
>>
>>
>


Re: DataStreamer is closed

2016-11-11 Thread Anton Vinogradov
Anil,


> I suspect there is a problem when node rejoins the cluster and streamer is
> already closed and not recreated. Correct ?


Correct, this seems to be linked somehow. I need logs and sources to tell
more.

I had to implement my own kafka streamer because of
> https://issues.apache.org/jira/browse/IGNITE-4140


I'd like to propose you to refactor streamer according to this issue and
contribute solution. I can help you with tips and review.
Sounds good?

On Fri, Nov 11, 2016 at 11:41 AM, Anil <anilk...@gmail.com> wrote:

> HI Anton,
>
> Thanks for responding. I will check if I can reproduce the issue with a
> reproducer.
>
> I had to implement my own kafka streamer because of
> https://issues.apache.org/jira/browse/IGNITE-4140
>
> I suspect there is a problem when node rejoins the cluster and streamer is
> already closed and not recreated. Correct ?
>
> In the above case, kafka streamer tries to getStreamer and push the data
> but streamer is not available.
>
> Thanks.
>
>
>
> On 11 November 2016 at 14:00, Anton Vinogradov <a...@apache.org> wrote:
>
>> Anil,
>>
>> Unfortunately,
>>   at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
>> heDataStreamer.java:149)
>> does not match the attached sources.
>>
>> But,
>> java.lang.IllegalStateException: Cache has been closed or destroyed:
>> PERSON_CACHE
>> is the reason the data streamer was closed.
>>
>> Is it possible to write a reproducible example, or to attach full logs
>> and sources?
>>
>>
>> BTW, we already have a Kafka streamer; why did you decide to reimplement it?
>>
>>
>>
>> On Wed, Nov 9, 2016 at 5:39 PM, Anil <anilk...@gmail.com> wrote:
>>
>>> Would there be any issues because of the size of the data?
>>> I loaded around 80 GB on a 4-node cluster; each node has 8 CPUs and 32 GB
>>> of RAM.
>>>
>>> and cache configuration -
>>>
>>> CacheConfiguration<String, Person> pConfig = new
>>> CacheConfiguration<String, Person>();
>>> pConfig.setName("Person_Cache");
>>> pConfig.setIndexedTypes(String.class, Person.class);
>>> pConfig.setBackups(1);
>>> pConfig.setCacheMode(CacheMode.PARTITIONED);
>>> pConfig.setCopyOnRead(false);
>>> pConfig.setSwapEnabled(true);
>>> pConfig.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
>>> pConfig.setSqlOnheapRowCacheSize(100_000);
>>> pConfig.setOffHeapMaxMemory(10L * 1024 * 1024 * 1024); // long literal avoids int overflow
>>> pConfig.setStartSize(200);
>>> pConfig.setStatisticsEnabled(true);
>>>
>>> Thanks for your help.
>>>
>>> On 9 November 2016 at 19:56, Anil <anilk...@gmail.com> wrote:
>>>
>>>> HI,
>>>>
>>>> The "data streamer closed" exception is very frequent. I did not see any
>>>> explicit error/exception about the data streamer closing; the exception I
>>>> see occurs only when a message is being added.
>>>>
>>>> I have a 4-node Ignite cluster, and each node has a consumer that
>>>> connects and pushes the received messages to the streamer.
>>>>
>>>> What if the node goes down and rejoins while a message is being added to
>>>> the cache?
>>>>
>>>> Following is the exception from logs -
>>>>
>>>> 2016-11-09 05:55:55 ERROR pool-6-thread-1 KafkaCacheDataStreamer:146 -
>>>> Exception while adding to streamer
>>>> java.lang.IllegalStateException: Data streamer has been closed.
>>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>>> merImpl.enterBusy(DataStreamerImpl.java:360)
>>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>>> merImpl.addData(DataStreamerImpl.java:507)
>>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>>> merImpl.addData(DataStreamerImpl.java:498)
>>>> at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
>>>> heDataStreamer.java:140)
>>>> at com.test.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCacheDat
>>>> aStreamer.java:197)
>>>> at java.util.concurrent.Executors$RunnableAdapter.call(Executor
>>>> s.java:511)
>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>>>> Executor.java:1142)
>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>>>> lE

Re: DataStreamer is closed

2016-11-03 Thread Anton Vinogradov
Anil,

Could you provide the getStreamer() code and full logs?
Possibly the Ignite node was disconnected, and this caused the DataStreamer
closure.

On Thu, Nov 3, 2016 at 1:17 PM, Anil  wrote:

> HI,
>
> I have created a custom Kafka data streamer for my use case and I see the
> following exception.
>
> java.lang.IllegalStateException: Data streamer has been closed.
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl.enterBusy(DataStreamerImpl.java:360)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl.addData(DataStreamerImpl.java:507)
> at org.apache.ignite.internal.processors.datastreamer.
> DataStreamerImpl.addData(DataStreamerImpl.java:498)
> at net.juniper.cs.cache.KafkaCacheDataStreamer.addMessage(
> KafkaCacheDataStreamer.java:128)
> at net.juniper.cs.cache.KafkaCacheDataStreamer$1.run(
> KafkaCacheDataStreamer.java:176)
> at java.util.concurrent.Executors$RunnableAdapter.
> call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
> addMessage method is
>
>  @Override
> protected void addMessage(T msg) {
> if (getMultipleTupleExtractor() == null){
> Map.Entry<K, V> e = getSingleTupleExtractor().extract(msg);
>
> if (e != null)
> getStreamer().addData(e);
>
> } else {
> Map<K, V> m = getMultipleTupleExtractor().extract(msg);
> if (m != null && !m.isEmpty()){
> getStreamer().addData(m);
> }
> }
> }
>
>
> Do you see any issue ? Please let me know if you need any additional
> information. thanks.
>
> Thanks.
>
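
A defensive variant of the addMessage() method quoted above might look like
the sketch below. This is only an illustration, not a definitive fix:
recreateStreamer() is a hypothetical helper (not part of the Ignite API)
that would re-obtain a streamer via ignite.dataStreamer(cacheName) after the
node reconnects.

```java
// Sketch only: assumes the enclosing class exposes getStreamer(),
// getSingleTupleExtractor() and getMultipleTupleExtractor() as in the
// addMessage() code quoted above.
@Override
protected void addMessage(T msg) {
    try {
        streamExtracted(msg);
    }
    catch (IllegalStateException closed) {
        // "Data streamer has been closed": the node likely left and
        // rejoined the cluster. Rebuild the streamer and retry once.
        recreateStreamer(); // hypothetical helper
        streamExtracted(msg);
    }
}

private void streamExtracted(T msg) {
    if (getMultipleTupleExtractor() == null) {
        Map.Entry<K, V> e = getSingleTupleExtractor().extract(msg);

        if (e != null)
            getStreamer().addData(e);
    }
    else {
        Map<K, V> m = getMultipleTupleExtractor().extract(msg);

        if (m != null && !m.isEmpty())
            getStreamer().addData(m);
    }
}
```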


Re: DataStreamer is closed

2016-11-03 Thread Anton Vinogradov
Anil,

Is it the first and only exception in the logs?

Is it possible to debug this?
You can set a breakpoint at the first line of
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl#closeEx(boolean,
org.apache.ignite.IgniteCheckedException)
This will tell you who is stopping the data streamer.

On Thu, Nov 3, 2016 at 1:41 PM, Anil <anilk...@gmail.com> wrote:

> Hi Anton,
> No, the Ignite nodes look good.
>
> I have attached my KafkaCacheDataStreamer class and following is the code
> to listen to the kafka topic. IgniteCache is created using java
> configuration.
>
> I see the cache size is zero even after adding the entries to the cache
> from KafkaCacheDataStreamer. I am not sure how to log whether the entries
> were added to the cache or not.
>
> KafkaCacheDataStreamer<String, String, Person> kafkaStreamer = new
> KafkaCacheDataStreamer<String, String, Person>();
>
>  Properties props = new Properties(); // Kafka properties
>  ConsumerConfig consumerConfig = new ConsumerConfig(props);
>
> try {
> IgniteDataStreamer<String, Person> stmr = ignite.dataStreamer(
> CacheManager.PERSON_CACHE);
>// allow overwriting cache data
>stmr.allowOverwrite(true);
>
>kafkaStreamer.setIgnite(ignite);
>kafkaStreamer.setStreamer(stmr);
>
>// set the topic
>kafkaStreamer.setTopic(kafkaConfig.getString("topic",
> "TestTopic"));
>
>// set the number of threads to process Kafka streams
>kafkaStreamer.setThreads(1);
>
>// set Kafka consumer configurations
>kafkaStreamer.setConsumerConfig(consumerConfig);
>
>// set decoders
>kafkaStreamer.setKeyDecoder(new StringDecoder(new
> VerifiableProperties()));
>kafkaStreamer.setValueDecoder(new StringDecoder(new
> VerifiableProperties()));
>kafkaStreamer.setMultipleTupleExtractor(new
> StreamMultipleTupleExtractor<String, String, Person>() {
> @Override
> public Map<String, Person> extract(String msg) {
> Map<String, Person> entries = new HashMap<>();
> try {
> KafkaMessage request = Json.decodeValue(msg, KafkaMessage.class);
> IgniteCache<String, Person> cache = CacheManager.getCache();
>
> if (CollectionUtils.isNotEmpty(request.getPersons())){
> String id = null;
> for (Person ib : request.getPersons()){
> if (StringUtils.isNotBlank(ib.getId())){
> id = ib.getId();
> if (null != ib.isDeleted() && Boolean.TRUE.equals(ib.isDeleted())){
> cache.remove(id);
> }else {
> // no need to store the id. so setting null.
> ib.setId(null);
> entries.put(id, ib);
> }
> }
> }
> }else {
>
> }
> }catch (Exception ex){
> logger.error("Error while updating the cache - {} {} " ,msg, ex);
> }
>
> return entries;
> }
> });
>
>kafkaStreamer.start();
> }catch (Exception ex){
> logger.error("Error in kafka data streamer ", ex);
> }
>
>
> Please let me know if you see any issues. thanks.
>
> On 3 November 2016 at 15:59, Anton Vinogradov <avinogra...@gridgain.com>
> wrote:
>
>> Anil,
>>
>> Could you provide the getStreamer() code and full logs?
>> Possibly the Ignite node was disconnected, and this caused the DataStreamer
>> closure.
>>
>> On Thu, Nov 3, 2016 at 1:17 PM, Anil <anilk...@gmail.com> wrote:
>>
>>> HI,
>>>
>>> I have created a custom Kafka data streamer for my use case and I see the
>>> following exception.
>>>
>>> java.lang.IllegalStateException: Data streamer has been closed.
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl.enterBusy(DataStreamerImpl.java:360)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl.addData(DataStreamerImpl.java:507)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl.addData(DataStreamerImpl.java:498)
>>> at net.juniper.cs.cache.KafkaCacheDataStreamer.addMessage(Kafka
>>> CacheDataStreamer.java:128)
>>> at net.juniper.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCache
>>> DataStreamer.java:176)
>>> at java.util.concurrent.Executors$RunnableAdapter.call(Executor
>>> s.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>>> Executor.java:1142)
>>> at java.util.conc

Re: DataStreamer is closed

2016-11-03 Thread Anton Vinogradov
Anil,

getStreamer().addData() will return you an IgniteFuture. You can check its
result with fut.get():
get() will return null if the data was streamed and stored, or throw an
exception otherwise.
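
A minimal sketch of that check (assuming a started Ignite instance and a
cache named "Person_Cache"; flush() is called so the future completes
promptly instead of waiting for the auto-flush interval):

```java
try (IgniteDataStreamer<String, Person> streamer =
         ignite.dataStreamer("Person_Cache")) {
    streamer.allowOverwrite(true);

    IgniteFuture<?> fut = streamer.addData("key-1", person);

    // Push the buffered batch out now.
    streamer.flush();

    // Returns null on success, or throws if streaming failed
    // (e.g. the streamer was closed by a node reconnect).
    fut.get();
}
```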

On Thu, Nov 3, 2016 at 2:32 PM, Anil <anilk...@gmail.com> wrote:

>
> Yes, that is the only exception I see in the logs. I will try the debug
> option, thanks.
>
> Though the data streamer does not throw an exception every time,
> IgniteCache#size() remains zero all the time. It is weird.
>
> 1.
> for (Map.Entry<K, V> entry : m.entrySet()){
> getStreamer().addData(entry.getKey(), entry.getValue());
>
> }
>
> 2.
>
> for (Map.Entry<K, V> entry : m.entrySet()){
>  cache.put((String)entry.getKey(), (Person)
> entry.getValue());
> }
>
> 3.
> for (Map.Entry<K, V> entry : m.entrySet()){
>  cache.replace((String)entry.getKey(), (Person)
> entry.getValue());
>  }
>
>
> cache size with #1 & #3  is 0
> cache size with #2 is 1 as expected.
>
> Have you seen a similar issue before?
>
> Thanks
>
>
>
> On 3 November 2016 at 16:33, Anton Vinogradov <avinogra...@gridgain.com>
> wrote:
>
>> Anil,
>>
>> Is it the first and only exception in the logs?
>>
>> Is it possible to debug this?
>> You can set a breakpoint at the first line of org.apache.ignite.internal.pro
>> cessors.datastreamer.DataStreamerImpl#closeEx(boolean,
>> org.apache.ignite.IgniteCheckedException)
>> This will tell you who is stopping the data streamer.
>>
>> On Thu, Nov 3, 2016 at 1:41 PM, Anil <anilk...@gmail.com> wrote:
>>
>>> Hi Anton,
>>> No, the Ignite nodes look good.
>>>
>>> I have attached my KafkaCacheDataStreamer class and following is the
>>> code to listen to the kafka topic. IgniteCache is created using java
>>> configuration.
>>>
>>> I see the cache size is zero even after adding the entries to the cache
>>> from KafkaCacheDataStreamer. I am not sure how to log whether the entries
>>> were added to the cache or not.
>>>
>>> KafkaCacheDataStreamer<String, String, Person> kafkaStreamer = new
>>> KafkaCacheDataStreamer<String, String, Person>();
>>>
>>>  Properties props = new Properties(); // Kafka properties
>>>  ConsumerConfig consumerConfig = new ConsumerConfig(props);
>>>
>>> try {
>>> IgniteDataStreamer<String, Person> stmr =
>>> ignite.dataStreamer(CacheManager.PERSON_CACHE);
>>>// allow overwriting cache data
>>>stmr.allowOverwrite(true);
>>>
>>>kafkaStreamer.setIgnite(ignite);
>>>kafkaStreamer.setStreamer(stmr);
>>>
>>>// set the topic
>>>kafkaStreamer.setTopic(kafkaConfig.getString("topic",
>>> "TestTopic"));
>>>
>>>// set the number of threads to process Kafka streams
>>>kafkaStreamer.setThreads(1);
>>>
>>>// set Kafka consumer configurations
>>>kafkaStreamer.setConsumerConfig(consumerConfig);
>>>
>>>// set decoders
>>>kafkaStreamer.setKeyDecoder(new StringDecoder(new
>>> VerifiableProperties()));
>>>kafkaStreamer.setValueDecoder(new StringDecoder(new
>>> VerifiableProperties()));
>>>kafkaStreamer.setMultipleTupleExtractor(new
>>> StreamMultipleTupleExtractor<String, String, Person>() {
>>> @Override
>>> public Map<String, Person> extract(String msg) {
>>> Map<String, Person> entries = new HashMap<>();
>>> try {
>>> KafkaMessage request = Json.decodeValue(msg, KafkaMessage.class);
>>> IgniteCache<String, Person> cache = CacheManager.getCache();
>>>
>>> if (CollectionUtils.isNotEmpty(request.getPersons())){
>>> String id = null;
>>> for (Person ib : request.getPersons()){
>>> if (StringUtils.isNotBlank(ib.getId())){
>>> id = ib.getId();
>>> if (null != ib.isDeleted() && Boolean.TRUE.equals(ib.isDeleted())){
>>> cache.remove(id);
>>> }else {
>>> // no need to store the id. so setting null.
>>> ib.setId(null);
>>> entries.put(id, ib);
>>> }
>>> }
>>> }
>>> }else {
>>>
>>> }
>>> }catch (Exception ex){
>>> logger.error("Error while updating the cache - {} {} ", msg, ex);

Re: Ignite cluster

2016-12-14 Thread Anton Vinogradov
Anil,

This situation described here
https://gridgain.readme.io/docs/network-segmentation
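
Which reaction a segmented node takes is controlled by the segmentation
policy. A hedged configuration sketch (property name as on
IgniteConfiguration; STOP and RESTART_JVM are the usual choices):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Stop the local node when it detects it has been segmented
         away from the rest of the cluster. -->
    <property name="segmentationPolicy" value="STOP"/>
</bean>
```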



On Wed, Dec 14, 2016 at 2:48 PM, Anil  wrote:

> HI,
>
> How does an Ignite cluster internally behave when a node disconnects from
> the cluster?
>
> Let's say A, B, and C are three nodes forming an Ignite cluster.
>
> When A disconnects from the cluster, is the Ignite instance on A still
> running? Will there then be two clusters: A, and B & C?
>
> Please clarify.
>
> Thanks
>


Re: Storing JSON data on Ignite cache

2017-04-03 Thread Anton Vinogradov
Hi,

Seems you have to use the cache's query method with a transformer:

public <T, R> QueryCursor<R> query(Query<T> qry, IgniteClosure<T, R> transformer);

Usage example:

IgniteClosure<Cache.Entry<Integer, Person>, Integer> transformer =
    new IgniteClosure<Cache.Entry<Integer, Person>, Integer>() {
        @Override public Integer apply(Cache.Entry<Integer, Person> e) {
            return e.getKey();
        }
    };

List<Integer> keys = cache.query(new ScanQuery<Integer, Person>(),
    transformer).getAll();
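
Applied to the byte-array question below, a transformer could decode the
stored bytes on the server side (a sketch, not a tested solution; it assumes
String keys and UTF-8 JSON payloads as in the example, and needs
java.nio.charset.StandardCharsets):

```java
IgniteClosure<Cache.Entry<String, byte[]>, String> toJson =
    new IgniteClosure<Cache.Entry<String, byte[]>, String>() {
        @Override public String apply(Cache.Entry<String, byte[]> e) {
            // Decode the byte[] payload back into a JSON string.
            return new String(e.getValue(), StandardCharsets.UTF_8);
        }
    };

List<String> jsonRows =
    cache.query(new ScanQuery<String, byte[]>(), toJson).getAll();
```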


On Mon, Apr 3, 2017 at 4:17 PM, austin solomon 
wrote:

> Hi,
>
> My Ignite version is 1.9.0
>
> I am pushing JSON objects into Ignite's cache using Apache NiFi's GetKafka
> => PutIgniteCache processors.
> I am able to save the data in the cache, but it is in byte array format;
> how can I query it?
>
> My JSON object looks like this: {"id":1, "name":"USA", "population": 15}.
> When I print the cache data, I get the following result:
>
>   byte[] bytes = (byte[]) cache.get("1");
>   String str = new String(bytes, StandardCharsets.UTF_8);
>   System.out.println("Cache get id = "+str);
>   Iterator<Cache.Entry<String, byte[]>> it = cache.iterator();
>   while (it.hasNext()) {
>   System.out.println("Cache next==" + it.next());
>   }
>
> Result:
>
>Cache get id = {"id":1, "name":"USA", "population": 15}
>Cache next==Entry [key=1, val=[B@5ab14cb9]
>Cache next==Entry [key=2, val=[B@5fb97279]
>Cache next==Entry [key=3, val=[B@439a8f59]
>
> How can I query this data or map this as key, value.
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Storing-JSON-data-on-Ignite-cache-tp11660.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Apache Ignite Site Changed its Face

2017-07-28 Thread Anton Vinogradov
Awesome!

On Fri, Jul 28, 2017 at 11:22 AM, Sergey Kozlov 
wrote:

> Looks like we're *going over the dark side* :-)
>
> On Fri, Jul 28, 2017 at 11:17 AM, Pavel Tupitsyn 
> wrote:
>
>> Looks great on desktop, but on mobile it is a disaster :(
>>
>> http://imgur.com/a/VIT1B
>>
>>
>>
>> On Fri, Jul 28, 2017 at 11:05 AM, Yakov Zhdanov 
>> wrote:
>>
>>> I like the new design. Looks good!
>>>
>>> --Yakov
>>>
>>
>>
>
>
> --
> Sergey Kozlov
> GridGain Systems
> www.gridgain.com
>


Re: SQL query on client stalling the grid when server node dies

2017-05-24 Thread Anton Vinogradov
Hi,

Is it possible to provide full logs or reproducer?

Anyway, I see that the exchange waits for something; you should see the
reason in the logs after the phrase "Failed to wait for partition release future".

On Wed, May 24, 2017 at 7:31 AM, bintisepaha  wrote:

> Hi Igniters,
>
> We have been testing with Ignite 1.9.0 and have a client that runs a
> simple (no-join) SQL query on a single distributed cache. But if we kill
> the server node for testing while the client is running this query, it
> actually stalls the whole cluster.
>
> All we have to do for the grid to resume functioning is restart the client.
> This may have something to do with data rebalancing when a server node
> dies.
> Would setting a rebalanceDelay help? We are using the default of 0 now.
>
> How does a client affect the whole cluster like this? and restarting it
> fixes the stall? The server nodes exchange worker threads are stuck on
> partitioning data.
>
> Client thread stuck below (thread dump)
>
> Name: main
> State: TIMED_WAITING
> Total blocked: 40  Total waited: 102,828
>
> Stack trace:
> java.lang.Thread.sleep(Native Method)
> org.apache.ignite.internal.processors.query.h2.twostep.
> GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:494)
> org.apache.ignite.internal.processors.query.h2.
> IgniteH2Indexing$7.iterator(IgniteH2Indexing.java:1315)
> org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(
> QueryCursorImpl.java:94)
> org.apache.ignite.internal.processors.query.h2.
> IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1355)
> org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(
> QueryCursorImpl.java:94)
> com.tudor.server.grid.matching.GridMatcher.getTradeOrdersForPSGroup(
> GridMatcher.java:322)
> com.tudor.server.grid.matching.MatcherDelegate.unmatchRematch(
> MatcherDelegate.java:101)
> com.tudor.server.grid.matching.GridMatcher.processPendingOrder(
> GridMatcher.java:275)
> com.tudor.server.grid.matching.GridMatcher.run(GridMatcher.java:201)
> com.tudor.server.grid.matching.GridMatcher.main(GridMatcher.java:99)
>
>
> server node exchange worker thread dump
>
>
> "exchange-worker-#34%DataGridServer-Development%" Id=68 in TIMED_WAITING
> on
> lock=org.apache.ignite.internal.util.future.GridCompoundFuture@7e9c149b
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.parkNanos(
> LockSupport.java:215)
>   at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.
> doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>   at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.
> tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>   at
> org.apache.ignite.internal.util.future.GridFutureAdapter.
> get0(GridFutureAdapter.java:189)
>   at
> org.apache.ignite.internal.util.future.GridFutureAdapter.
> get(GridFutureAdapter.java:139)
>   at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.
> GridDhtPartitionsExchangeFuture.waitPartitionRelease(
> GridDhtPartitionsExchangeFuture.java:779)
>   at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.
> GridDhtPartitionsExchangeFuture.distributedExchange(
> GridDhtPartitionsExchangeFuture.java:732)
>   at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.
> GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFutur
> e.java:489)
>   at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeMana
> ger$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1674)
>   at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at java.lang.Thread.run(Thread.java:745)
>
> Any help is appreciated.
>
> Thanks,
> Binti
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/SQL-query-on-client-stalling-the-grid-
> when-server-node-dies-tp13107.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: With same load Ignite is not able to respond after enabling SSL

2017-05-02 Thread Anton Vinogradov
Hi,

I see that the code from the stack trace you provided was refactored in 1.9;
possibly this will fix your issue.

On Fri, Apr 21, 2017 at 3:00 PM, Ankit Singhai  wrote:

> igniteClient.gz
> server1.gz
> server2.gz
> server3.gz
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/With-same-load-Ignite-is-not-able-to-
> respond-after-enabling-SSL-tp12146p12149.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: BinaryObject

2017-05-03 Thread Anton Vinogradov
Anil,

This seems to be fixed at https://issues.apache.org/jira/browse/IGNITE-4891,
please check.


On Wed, May 3, 2017 at 12:08 PM, Anil  wrote:

> Hi Team,
>
> Did you get a chance to look into it? Thanks.
>
> Thanks
>
> On 2 May 2017 at 11:19, Anil  wrote:
>
>> Hi,
>>
>> A java.lang.ClassCastException:
>> org.apache.ignite.internal.binary.BinaryObjectImpl
>> cannot be cast to org.apache.ignite.cache.affinity.Affinity exception is
>> thrown when a field is updated via BinaryObject for a cache entry, and it
>> is intermittent.
>>
>> Following is the snippet i am using
>>
>> IgniteCache cache =
>> ignite.cache(CacheManager.CACHE).withKeepBinary();
>> IgniteCache lCache =
>> ignite.cache(CacheManager.LOCK_CACHE).withKeepBinary();
>> ScanQuery scanQuery = new
>> ScanQuery();
>> scanQuery.setLocal(true);
>> scanQuery.setPartition(1);
>>
>> Iterator> iterator =
>> cache.query(scanQuery).iterator();
>> Integer oldStat = null, newStat = null;
>> boolean changed = false;
>> Entry row = null;
>> while (iterator.hasNext()) {
>> try {
>> row = iterator.next();
>> BinaryObject itrVal = row.getValue();
>> String id = itrVal.field("id");
>> Lock lock = lCache.lock(id);
>> try {
>> lock.lock();
>> BinaryObject val = cache.get(row.getKey());
>> if (null != val){
>> BinaryObjectBuilder bldr = val.toBuilder();
>> oldStat = val.field("stat");
>> Status status = null ; // determine status
>> if (!CommonUtils.equalsObject(oldStat, newStat)){
>> changed = true;
>> bldr.setField("stat", status.getStatus());
>> bldr.setField("status", status.getDescription());
>> }
>>
>> // update other
>> fields
>> if(changed){
>> cache.put(row.getKey(), bldr.build());
>> }
>> }
>> }catch (Exception ex){
>> log.error("Failed to update the status of  {}  {} ", id, ex);
>> }finally {
>> lock.unlock();
>> }
>> }catch (Exception ex){
>> log.error("Failed to process and update status of  {}  {} ", row, ex);
>> }
>> }
>>
>>
>> Do you see any issue in the above snippet ? thanks.
>>
>>
>> Thanks
>>
>>
>


Re: Cannot start/stop cache within lock or transaction

2017-09-20 Thread Anton Vinogradov
Hi All.

I've updated the reproducer and provided some investigation results in the issue.

On Fri, Sep 15, 2017 at 7:12 PM, rajivgandhi  wrote:

> Hi Yakov,
> The test will pass if you comment the line:
> "IgniteCache cache3 =
> ignite.getOrCreateCache(getConfig("cache3"));"
>
> Question is why is creation of cache3 causing deadlock?
>
> Please note:
> 1. Lock is being acquired on cache2
> 2. clear is being called on cache1
>
> The error being reported and reproduced is nasty enough that it has kind of
> eroded our confidence in the library.
>
> thanks,
> Rajeev
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite node crashes after one query fetches many entries from cache

2017-11-29 Thread Anton Vinogradov
Ray,

Seems you're looking
for org.apache.ignite.cache.query.SqlFieldsQuery#timeout?
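
For illustration, a server-side timeout can be attached to the query like
this (a sketch; setTimeout(int, TimeUnit) is available on SqlFieldsQuery in
recent 2.x versions):

```java
SqlFieldsQuery qry = new SqlFieldsQuery("select * from table_name");

// Cancel the query on the server if it runs longer than 30 seconds,
// rather than letting it exhaust the node's heap.
qry.setTimeout(30, TimeUnit.SECONDS);

try (QueryCursor<List<?>> cursor = cache.query(qry)) {
    for (List<?> row : cursor) {
        // Process each row lazily instead of materializing
        // everything with getAll().
    }
}
```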

On Tue, Nov 28, 2017 at 5:30 PM, Alexey Kukushkin  wrote:

> Ignite Developers,
>
> I know the community is developing an "Internal Problems Detection"
> feature.
> Do you know if it addresses the problem Ray described below? May be we
> already have a setting to prevent this from happening?
>
> On Tue, Nov 28, 2017 at 5:13 PM, Ray  wrote:
>
>> I am trying to fetch all the results of a table with billions of entries
>> using SQL like "select * from table_name".
>> As far as I understand, Ignite will prepare all the data on the node
>> running this query then return the results to the client.
>> The problem is that after a while, the node crashes(probably because of
>> long
>> GC pause or running out of memory).
>> Is node crashing the expected behavior?
>> I mean it's unreasonable that Ignite node crashes after this kind of
>> query.
>>
>> From my experience with other databases,  running this kind of full table
>> scan will not crash the node.
>>
>> The optimal way to handle this kind of situation is for the Ignite node to
>> stay alive, with the query stopped by the node when it finds out it will
>> run out of memory soon.
>> Then an error response shall be returned to the client.
>>
>> Please advise me if this mechanism already exists and there is a hidden
>> switch to turn it on.
>> Thanks
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
>
> --
> Best regards,
> Alexey
>


Re: Correct build process for Ignite

2018-02-21 Thread Anton Vinogradov
Hi,

Please use instructions provided at DEVNOTES.txt

On Wed, Feb 21, 2018 at 6:37 PM, shikharraje  wrote:

> Errata: mvn clean in the parent directory works. The other goals fail,
> though.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Correct build process for Ignite

2018-02-21 Thread Anton Vinogradov
What Maven version do you use?

On Wed, Feb 21, 2018 at 6:49 PM, shikharraje  wrote:

> Hi Anton,
>
> As mentioned in my original mail, I tried the following command from
> DEVNOTES.txt:
>
> mvn clean install -Pall-java,all-scala,licenses -DskipTests
>
> But that gave the same error:
>
> [ERROR] Failed to execute goal on project ignite-core: Could not resolve
> dependencies for project org.apache.ignite:ignite-core:jar:2.5.0-SNAPSHOT:
> The following artifacts could not be resolved:
> org.apache.ignite.binary:test1:jar:1.1,
> org.apache.ignite.binary:test2:jar:1.1: Failure to find
> org.apache.ignite.binary:test1:jar:1.1 in
> http://moo-nexus.wdf.sap.corp:8081/nexus/content/groups/public.group/ was
> cached in the local repository, resolution will not be reattempted until
> the
> update interval of central has elapsed or updates are forced -> [Help 1]
>
> Hope this helps.
>
> Thank You
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is ID generator split brain compliant?

2018-09-25 Thread Anton Vinogradov
Denis,

As far as I understand, the question is about IgniteAtomicSequence?
We fixed IgniteSet to be persisted and recovered properly.

Pavel Pereslegin,

Could you please check whether we have the same issue with
IgniteAtomicSequence?

Sat, Sep 22, 2018 at 4:17, Denis Magda wrote:

> So far, it looks pretty good except that it does not provide persistence
>> out
>> of the box. But I can work around it by backing latest generated ID in a
>> persistent cache and initializing ID generator with the latest value on a
>> cluster restart.
>
>
> Sounds like a good solution. *Anton*, I do remember a discussion on the
> dev list regarding persistence support for data structures. Are we
> releasing anything related soon? Can't recall all the details.
>
> However, one thing I could not find an answer for is if the out of the box
>> ID generator is split brain compliant. I cannot afford to have a duplicate
>> ID and want to understand if duplicate ID(s) could occur in a split-brain
>> scenario. If yes, what is the recommended approach to handling that
>> scenario?
>
>
> It should be split-brain tolerant if ZooKeeper Discovery is used:
>
> https://apacheignite.readme.io/docs/zookeeper-discovery#section-failures-and-split-brain-handling
>
> --
> Denis
>
> On Wed, Sep 19, 2018 at 3:37 PM abatra  wrote:
>
>> Hi,
>>
>> I have a requirement to create a distributed cluster-unique ID generator
>> microservice. I have done a PoC on it using Apache Ignite ID Generator.
>>
>> I created a 2 node cluster with two instances of microservices running on
>> each node. Nodes are in the same datacenter (in fact in the same network
>> and
>> will always be deployed in the same network) and I use TCP/IP discovery to
>> discover cluster nodes.
>>
>> So far, it looks pretty good except that it does not provide persistence
>> out
>> of the box. But I can work around it by backing latest generated ID in a
>> persistent cache and initializing ID generator with the latest value on a
>> cluster restart.
>>
>> However, one thing I could not find an answer for is if the out of the box
>> ID generator is split brain compliant. I cannot afford to have a duplicate
>> ID and want to understand if duplicate ID(s) could occur in a split-brain
>> scenario. If yes, what is the recommended approach to handling that
>> scenario?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
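
For reference, the ZooKeeper Discovery mentioned above is enabled by
swapping the discovery SPI. A hedged configuration sketch (class name from
the ignite-zookeeper module; the connection string is a placeholder):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi">
            <!-- Comma-separated list of ZooKeeper servers. -->
            <property name="zkConnectionString"
                      value="zk1:2181,zk2:2181,zk3:2181"/>
            <property name="sessionTimeout" value="30000"/>
        </bean>
    </property>
</bean>
```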


Re: [ANNOUNCE] Apache Ignite ML Extension 1.0.0 released

2024-01-29 Thread Anton Vinogradov
Great job, Ivan!

On Mon, Jan 29, 2024 at 12:02 PM Ivan Daschinsky 
wrote:

> All parsers also have been uploaded to maven central:
>
> https://central.sonatype.com/artifact/org.apache.ignite/ignite-ml-ext/dependents
>
Mon, Jan 29, 2024 at 12:00, Ivan Daschinsky wrote:
>
> > The Apache Ignite Community is pleased to announce the release of Apache
> > Ignite ML Extension 1.0.0.
> >
> > You can download source release here:
> >
> https://downloads.apache.org/ignite/ignite-extensions/ignite-ml-ext/1.0.0/
> >
> > Artifacts can be downloaded from here:
> > https://central.sonatype.com/artifact/org.apache.ignite/ignite-ml-ext
> >
> > Please let us know if you encounter any problems:
> > https://ignite.apache.org/community/resources.html#ask
> >
> > ---
> > Best Regards
> > Ivan Daschinsky
> > on behalf of Apache Ignite PMC
> >
>