Re: L2-cache slow/not working as intended

2020-11-06 Thread Evgenii Zhuravlev
Hi,

How many nodes do you have? Can you check the same scenario with one node
only? How do you run queries? Is client on the same machine as a server
node?

I would recommend enabling DEBUG logs for the org.apache.ignite.cache.hibernate
package. The DEBUG logs will show all get and put operations for the Hibernate
cache.

Best Regards,
Evgenii

Thu, 5 Nov 2020 at 01:59, Bastien Durel :

> Hello,
>
> I'm using an Ignite cluster to back a Hibernate-based application. I
> configured the L2 cache as explained in
>
> https://ignite.apache.org/docs/latest/extensions-and-integrations/hibernate-l2-cache
>
> (config below)
>
> I ran a test reading a 1M-element cache with a consumer counting
> elements. It's very slow: more than 5 minutes to run.
>
> Session metrics say it was the L2C puts that took most of the time (5
> minutes and 3 seconds of a 5:12 operation)
>
> INFO  [2020-11-05 09:51:15,694]
> org.hibernate.engine.internal.StatisticalLoggingSessionEventListener:
> Session Metrics {
> 33350 nanoseconds spent acquiring 1 JDBC connections;
> 25370 nanoseconds spent releasing 1 JDBC connections;
> 571572 nanoseconds spent preparing 1 JDBC statements;
> 1153110307 nanoseconds spent executing 1 JDBC statements;
> 0 nanoseconds spent executing 0 JDBC batches;
> 303191158712 nanoseconds spent performing 100 L2C puts;
> 23593547 nanoseconds spent performing 1 L2C hits;
> 0 nanoseconds spent performing 0 L2C misses;
> 370656057 nanoseconds spent executing 1 flushes (flushing a total of
> 101 entities and 2 collections);
> 4684 nanoseconds spent executing 1 partial-flushes (flushing a total
> of 0 entities and 0 collections)
> }
>
> It seems long, even for 1M puts, but OK, let's say the L2C is
> initialized now, and it will be better next time? So I ran the query
> again, but it took 5+ minutes again ...
>
> INFO  [2020-11-05 09:58:02,538]
> org.hibernate.engine.internal.StatisticalLoggingSessionEventListener:
> Session Metrics {
> 28982 nanoseconds spent acquiring 1 JDBC connections;
> 25974 nanoseconds spent releasing 1 JDBC connections;
> 52468 nanoseconds spent preparing 1 JDBC statements;
> 1145821128 nanoseconds spent executing 1 JDBC statements;
> 0 nanoseconds spent executing 0 JDBC batches;
> 303763054228 nanoseconds spent performing 100 L2C puts;
> 1096985 nanoseconds spent performing 1 L2C hits;
> 0 nanoseconds spent performing 0 L2C misses;
> 317558122 nanoseconds spent executing 1 flushes (flushing a total of
> 101 entities and 2 collections);
> 5500 nanoseconds spent executing 1 partial-flushes (flushing a total
> of 0 entities and 0 collections)
> }
>
> Why did the L2 cache have to be filled again? Isn't its purpose to be
> shared between Sessions?
>
> Actually, disabling it makes the test run in less than 6 seconds.
>
> Why is the L2C working that way?
>
> Regards,
>
>
> **
>
> I'm running 2.9.0 from the Debian package
>
> Hibernate properties :
> hibernate.cache.use_second_level_cache: true
> hibernate.generate_statistics: true
> hibernate.cache.region.factory_class:
> org.apache.ignite.cache.hibernate.HibernateRegionFactory
> org.apache.ignite.hibernate.ignite_instance_name: ClusterWA
> org.apache.ignite.hibernate.default_access_type: READ_ONLY
>
> Method code:
> @GET
> @Timed
> @UnitOfWork
> @Path("/events/speed")
> public Response getAllEvents(@Auth AuthenticatedUser auth) {
>     AtomicLong id = new AtomicLong();
>     StopWatch watch = new StopWatch();
>     watch.start();
>     evtDao.findAll().forEach(new Consumer<Event>() {
>         @Override
>         public void accept(Event t) {
>             long cur = id.incrementAndGet();
>             if (cur % 65536 == 0)
>                 logger.debug("got element#{}", cur);
>         }
>     });
>     watch.stop();
>     return Response.ok()
>         .header("X-Count", Long.toString(id.longValue()))
>         .entity(new Time(watch))
>         .build();
> }
>
> Event cache config:
>
>
> <bean class="org.apache.ignite.configuration.CacheConfiguration">
>     <property name="name" value="EventCache"/>
>     <property name="cacheMode" value="PARTITIONED"/>
>     <property name="atomicityMode" value="TRANSACTIONAL"/>
>     <property name="dataRegionName" value="Disk"/>
>     <!-- two further boolean properties (value="true") were stripped
>          by the list archive -->
> </bean>
>
> <bean class="org.apache.ignite.cache.CacheKeyConfiguration">
>     <!-- contents stripped by the list archive -->
> </bean>
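As a side note, the L2C-put figure from the session metrics above can be sanity-checked with a quick unit conversion (the value is copied from the first metrics dump; it matches the "5 minutes and 3 seconds" reading):

```java
import java.util.concurrent.TimeUnit;

public class L2cPutTime {
    public static void main(String[] args) {
        // 303191158712 ns spent performing the L2C puts (first run above)
        long l2cPutNanos = 303_191_158_712L;
        long seconds = TimeUnit.NANOSECONDS.toSeconds(l2cPutNanos);
        // ~303 s, i.e. 5 minutes 3 seconds of the 5:12 request
        System.out.println(seconds + " s");
    }
}
```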

Performance of indexing on varchar columns without specific length

2020-11-06 Thread Shravya Nethula
Hi,

A table has two varchar columns: one created with a specific column length
and the other created without any specific length, as shown below:
CREATE TABLE person (id LONG PRIMARY KEY, name VARCHAR(64), last_name VARCHAR)

Can we create an index on varchar columns without any specific length? In the
above scenario, can we create an index on the last_name column?
And while creating indexes on those columns, will there be any performance
difference between them?



Regards,

Shravya Nethula,

BigData Developer,


Hyderabad.
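For reference, the DDL in question would look like this in Ignite SQL; the syntax is the same whether or not the VARCHAR declares a length (index names are illustrative):

```sql
CREATE INDEX idx_person_name ON person (name);
CREATE INDEX idx_person_last_name ON person (last_name);
```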


Re: Workaround for getting ContinuousQuery to support transactions

2020-11-06 Thread ssansoy
Yeah, the key thing is to be able to be notified when all records have been
updated in the cache.
We've tried using IgniteLock for this too, by the way (e.g. the writer locks,
writes the records, unlocks).

Then the client app internally queues all updates as they arrive from the
continuous query. If it can acquire the lock, then it knows all updates have
arrived (because the writer has finished). However, this doesn't work either,
because even though the writer has unlocked, the written records are still
in transit to the continuous query (i.e. they don't exist in the internal
client-side queue yet).



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
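A workaround sometimes suggested for this "records still in transit" problem (not proposed in this thread, so a sketch under assumed names) is for the writer to put a sentinel "end-of-batch" record as its last update. Because the continuous query delivers updates in order per key source, the listener can buffer updates and release the whole batch only when the sentinel arrives, instead of relying on the lock:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch: a listener that buffers continuous-query updates until a
// sentinel marker signals the writer's batch is complete.
public class BatchMarkerListener {
    static final String MARKER = "__BATCH_END__";

    private final Queue<String> buffer = new ArrayDeque<>();
    private final List<List<String>> completed = new ArrayList<>();

    // Called for each update delivered by the continuous query (simulated).
    public void onEvent(String key) {
        if (MARKER.equals(key)) {
            // Sentinel seen: the writer's batch is complete, release it whole.
            completed.add(new ArrayList<>(buffer));
            buffer.clear();
        } else {
            buffer.add(key); // still mid-batch: hold the update back
        }
    }

    public List<List<String>> batches() {
        return completed;
    }

    public static void main(String[] args) {
        BatchMarkerListener listener = new BatchMarkerListener();
        listener.onEvent("record-1");
        listener.onEvent("record-2");
        listener.onEvent(MARKER);
        System.out.println(listener.batches()); // [[record-1, record-2]]
    }
}
```

In a real deployment the marker would be a dedicated key in the same cache, written by the same transaction or immediately after it, so its delivery implies the batch's delivery.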


Re: Workaround for getting ContinuousQuery to support transactions

2020-11-06 Thread Ilya Kasnacheev
Hello!

It may handle your use case (doing something when all records are in cache).
But it will not fix the tool that you're using for it (continuous query
with expectation of batched handling).

Regards,
-- 
Ilya Kasnacheev


Fri, 6 Nov 2020 at 16:41, ssansoy :

> Ah ok so this wouldn't help solve our problem?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Client Node OOM Issue

2020-11-06 Thread Ravi Makwana
Hi,

*1) What load profile do you have? *
Ans: We have 2 clusters, each having 2 nodes. One cluster holds approx.
15 GB of data (replicated) and the second holds approx. 5 GB of data
(partitioned) with an eviction policy.

*2) Do you use SQL queries?*
Ans: Yes, we are.

*3) Is it possible to share your client node configuration?*
Ans: Yes, I have attached it below.

Thanks,


On Fri, 6 Nov 2020 at 18:50, Vladimir Pligin  wrote:

> What load profile do you have? Do you use SQL queries? Is it possible to
> share your client node configuration?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- most properties of the attached client configuration were
             stripped by the list archive; only the discovery address
             survived -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>10.201.30.52:58500</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

Re: ReadFromBackup, Primary_SYNC, Backups

2020-11-06 Thread Vladimir Pligin
Mahesh,

>> When running a large SELECT from a thick client node,  could data be
>> fetched from the backup instead of primary partitions?

readFromBackup=true affects only cache operations; it doesn't work with SQL
queries.

>> in a two server system, if readFromBackup = False for a cache, and if one
>> server fails, 
would 2nd server stop serving client requests since some partition data is
in the backup?

No, it will keep serving requests; former backup partitions will be promoted
to primary.

>> is it possible that readFromBackup=True and Primary_Sync mode for a cache
>> could give inconsistent data for cacheMode Replicated in a 2 server node?

There's no need to explicitly use readFromBackup with replicated caches.
What exactly do you mean by "inconsistent data"? Could you please describe
your scenario?

>> if I increase backup= 10, in a 2 server system, would it mean that there
>> are 10 backups.
I am guessing, ignite would keep a single back up on each server, not 5 and
5 on each server. 
a new backup is created for that cache for any new node joining the cluster.
is this right understanding?

Yes, that's correct.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
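In other words, the effective number of physical copies of a partition is capped by the cluster size; a tiny illustration of the arithmetic (helper name is mine, not an Ignite API):

```java
public class BackupCopies {
    // Total physical copies of a partition: 1 primary + backups,
    // but never more than the number of server nodes.
    static int effectiveCopies(int backups, int nodes) {
        return Math.min(backups + 1, nodes);
    }

    public static void main(String[] args) {
        // backups=10 on a 2-node cluster still yields only 2 copies;
        // each node that joins can host one more, up to 11 in total.
        System.out.println(effectiveCopies(10, 2));  // 2
        System.out.println(effectiveCopies(10, 3));  // 3
        System.out.println(effectiveCopies(10, 20)); // 11
    }
}
```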


Re: Workaround for getting ContinuousQuery to support transactions

2020-11-06 Thread ssansoy
Ah ok so this wouldn't help solve our problem?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Workaround for getting ContinuousQuery to support transactions

2020-11-06 Thread Ilya Kasnacheev
Hello!

No, it does not mean anything about the continuous query listener.

But it means that once a data streamer is flushed, all data is available in
caches.

Regards,
-- 
Ilya Kasnacheev


Tue, 3 Nov 2020 at 16:28, ssansoy :

> Thanks,
> How is this different to multiple puts inside a transaction?
>
> By using the data streamer to write the records, does that mean the
> continuous query will receive all 10,000 records in one go in the local
> listen?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ReadFromBackup, Primary_SYNC, Backups

2020-11-06 Thread Stephen Darlington
1. Reads mostly go to the primary but yes, in some circumstances it’ll read 
from the backup with that configuration set to true

2. No. It’ll “rebalance,” which means backup partitions would be promoted to 
primaries. Your client would still get a response

3. Yes — though in practice the window is very small. Use FULL_SYNC and/or 
readFromBackup=false if you need to avoid that

4. You’d get one copy per node. Think of it as a target rather than a guarantee

> On 3 Nov 2020, at 02:50, Mahesh Renduchintala 
>  wrote:
> 
> Hi
> 
> I have a large SQL table (12 million records) in cacheMode partitioned.
> This table is distributed over two server nodes
> 
> 
> -1-
> 
> When running a large SELECT from a thick client node,  could data be fetched 
> from the backup instead of primary partitions?
> Below is the configuration.
>
> [cache configuration stripped by the list archive]
>
> We are seeing some performance improvement since we made readFromBackup=
> False, and a few other things.
> 
> -2-
> in a two server system, if readFromBackup = False for a cache, and if one 
> server fails, 
> 
> would 2nd server stop serving client requests since some partition data is in 
> the backup?
> 
> 
> -3-
> is it possible that readFromBackup=True and Primary_Sync mode for a cache 
> could give inconsistent data for cacheMode Replicated in a 2 server node?
> 
> 
> -4-
> if I increase backup= 10, in a 2 server system, would it mean that there are 
> 10 backups.
> 
> I am guessing, ignite would keep a single back up on each server, not 5 and 5 
> on each server. 
> a new backup is created for that cache for any new node joining the cluster.
> is this right understanding?
> 
> 
> 
> 
> regards
> mahesh




Re: Ignite JDBC connection pooling mechanism

2020-11-06 Thread Vladimir Pligin
In general it should be OK to use connection pooling with Ignite. Is your
network OK? It looks like a connection is being closed because of network
issues.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Client Node OOM Issue

2020-11-06 Thread Vladimir Pligin
What load profile do you have? Do you use SQL queries? Is it possible to
share your client node configuration?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Limit ignite-rest-http threads

2020-11-06 Thread Vladimir Pligin
Hi Ashish,

Yes, it works that way because Ignite launches Jetty server under the hood
to handle HTTP requests.
Could you please clarify what concerns you exactly? In general, the number of
threads alone doesn't indicate anything "bad" or "good".



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Using @SpringResource & @SpringApplicationContextResource in IgniteCallable

2020-11-06 Thread Ilya Kazakov
Hello, Asish!

Try to clean your BeanConfig class. Write this class like (in the client
and in the server app):

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.ImportResource;

@Configuration
@ImportResource({"classpath*:applicationContext.xml"})
public class BeansConfig { }


The applicationContext.xml file should be placed in the resources folder,
and the Ignite configuration written into it in XML. For example:


<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="igniteSpringBean" class="org.apache.ignite.IgniteSpringBean">
        <property name="configuration">
            <bean class="org.apache.ignite.configuration.IgniteConfiguration">
                <property name="discoverySpi">
                    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                        <property name="ipFinder">
                            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                                <property name="addresses">
                                    <list>
                                        <value>127.0.0.1:47500..47509</value>
                                    </list>
                                </property>
                            </bean>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
Pay attention! In igniteSpringBean we use the IgniteSpringBean class (not
Ignite). This is a prerequisite if you want to use
@SpringApplicationContextResource in an IgniteCallable on remote nodes.

Ilya Kazakov

Tue, 3 Nov 2020 at 14:25, ashishb888 :

> Hi Ilya,
>
> My bad I forgot to push the changes. Now I just pushed the changes so you
> can find the required details.
>
> BR,
> Ashish
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: WAL and WAL Archive volume size recommendation

2020-11-06 Thread Mahesh Renduchintala
Dennis

"The WAL archive is used to store WAL segments that may be needed to recover 
the node after a crash. The number of segments kept in the archive is such that 
the total size of all segments does not exceed the specified size of the WAL 
archive"

Given the above in the documentation, if we disable the WAL archive as
mentioned in the docs, will we have trouble recovering the data in the work
folder on node reboot?

regards
mahesh
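For context on the question above, the documented way to disable the archive is to point walArchivePath at the WAL directory itself; a configuration sketch (paths are illustrative):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <!-- setting the archive path equal to the WAL path disables archiving -->
        <property name="walPath" value="/ignite/wal"/>
        <property name="walArchivePath" value="/ignite/wal"/>
    </bean>
</property>
```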