A quick question on Ignite's B+ tree implementation

2017-09-22 Thread John Wilson
Hi,

The internal nodes of a B+ tree, by definition, store only keys, while the
leaf nodes store (or hold pointers to) the actual data.

The documentation here,
https://apacheignite.readme.io/docs/memory-architecture, states that each
index node (including internal nodes) stores the information needed to access
the data page and offset for the key in question (not just the leaf nodes).
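
To illustrate the distinction I have in mind (hypothetical types, not Ignite's
internal classes):

// Classic B+ tree: inner nodes route by key only; leaves carry the data reference.
class InnerEntry {
    long key;         // routing key
    long childPageId; // pointer to a child index page, no data reference
}

class LeafEntry {
    long key;
    long dataPageId;  // reference to the data page holding the row ...
    int offset;       // ... and the row's offset within that page
}

// What the documentation describes instead: every index row, inner or leaf,
// carries the (dataPageId, offset) link.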

Why call it a B+ tree?

Thanks,


Re: Ignite Server start failing with the exception with Ignite 2.1.0

2017-09-22 Thread Denis Mekhanikov
Hi!

Could you describe your scenario in more detail? What configuration do your
nodes have and in what order do you start them? Do some nodes go down
during the deployment? What environment do you have?

The situation you described sounds too simple to cause any problems.

Denis

Thu, Sep 21, 2017 at 15:04, KR Kumar :

> I have a five-node cluster, and when I start the server, I get the following
> error randomly on some of the servers:
>
>
> [2017-09-21
>
> 07:46:18,635][ERROR][exchange-worker-#34%null%][GridDhtPartitionsExchangeFuture]
> Failed to reinitialize local partitions (preloading will be stopped):
> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=1], nodeId=13721454, evt=DISCOVERY_CUSTOM_EVT]
> java.lang.ArrayIndexOutOfBoundsException: -1
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:82)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:92)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.init(PagesList.java:174)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.freelist.FreeListImpl.(FreeListImpl.java:357)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore$1.(GridCacheOffheapManager.java:893)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.init0(GridCacheOffheapManager.java:885)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.updateCounter(GridCacheOffheapManager.java:1130)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.updateCounter(GridDhtLocalPartition.java:882)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.casState(GridDhtLocalPartition.java:564)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.own(GridDhtLocalPartition.java:594)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.initPartitions0(GridDhtPartitionTopologyImpl.java:337)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.beforeExchange(GridDhtPartitionTopologyImpl.java:507)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:991)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:632)
> at
>
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1901)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> [2017-09-21 07:46:18,637][INFO
> ][exchange-worker-#34%null%][GridDhtPartitionsExchangeFuture] Snapshot
> initialization completed [topVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=1], time=0ms]
> [2017-09-21
>
> 07:46:18,650][ERROR][exchange-worker-#34%null%][GridCachePartitionExchangeManager]
> Failed to wait for completion of partition map exchange (preloading will
> not
> start): GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false,
> reassign=false, discoEvt=DiscoveryCustomEvent [customMsg=null,
> affTopVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
> super=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=13721454-7d18-4c48-ba93-36417dbba34b, addrs=[0:0:0:0:0:0:0:1%lo,
> 127.0.0.1, 172.16.9.173],
> sockAddrs=[ri-stress-grid-manager.altidev.net/172.16.9.173:47500,
> /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=1,
> intOrder=1, lastExchangeTime=1505994376713, loc=true,
> ver=2.2.0#20170915-sha1:5747ce6b, isClient=false], topVer=1,
> nodeId8=13721454, msg=null, type=DISCOVERY_CUSTOM_EVT,
> tstamp=1505994240293]], crd=TcpDiscoveryNode
> [id=13721454-7d18-4c48-ba93-36417dbba34b, addrs=[0:0:0:0:0:0:0:1%lo,
> 127.0.0.1, 172.16.9.173],
> sockAddrs=[ri-stress-grid-manager.altidev.net/172.16.9.173:47500,
> /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=1,
> intOrder=1, lastExchangeTime=1505994376713, loc=true,
> ver=2.2.0#20170915-sha1:5747ce6b, isClient=false],
> exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
> [topVer=1,
> minorTopVer=1], nodeId=13721454, evt=DISCOVERY_CUSTOM_EVT], added=true,
> initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=false,
> hash=1691446648], init=false, lastVer=null,
> partReleaseFut=GridCompoundFuture [rdc=null, initFlag=1, lsnrCalls=4,
> done=true, cancelled=false, err=null, 

Re: An issue of Ignite In-Memory SQL Grid since version 2.0.0

2017-09-22 Thread Denis Mekhanikov
Hi!

Internally, Ignite uses H2 to process SQL queries. Recursive queries are an
experimental feature of H2, so I wouldn't recommend using them in production
for now.
Ignite 2.0 and 2.1 don't seem to support this kind of query, so your best
option is to modify the query, if possible, to avoid recursive constructs, or
to retrieve the data from the cache directly, without SQL.
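
If it helps, here is a rough sketch of doing the recursive part in application
code, issuing one plain (non-recursive) SELECT per level against the
ProcessDefTypePo cache (a hedged illustration with hypothetical naming, not a
drop-in replacement for your full query):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

class RecursiveChildren {
    /** Breadth-first expansion of the children(typeId, pTypeId) closure without recursive SQL. */
    static Set<String> childrenOf(IgniteCache<String, ?> cache, String rootPTypeId) {
        Set<String> children = new HashSet<>();
        Deque<String> frontier = new ArrayDeque<>();
        frontier.push(rootPTypeId);

        while (!frontier.isEmpty()) {
            // One plain SELECT per level; the recursion happens in Java, not in H2.
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT typeId FROM ProcessDefTypePo WHERE pTypeId = ?").setArgs(frontier.pop());

            for (List<?> row : cache.query(qry).getAll()) {
                String typeId = (String)row.get(0);

                if (children.add(typeId))
                    frontier.push(typeId);
            }
        }

        return children;
    }
}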

Denis

Fri, Sep 22, 2017 at 12:37, 贺波 :

> Hi, I have used Apache Ignite in my project for more than a year, from version
> 1.8.0 to 2.2.0, and I use the Ignite In-Memory SQL Grid. I use a "with as"
> (recursive CTE) construct in my SQL; it executes correctly in version 1.9.0 but
> fails since version 2.0.0. My SQL statement is:
>   with RECURSIVE children(typeId, pTypeId) AS (
> SELECT typeId, pTypeId FROM ProcessDefTypePo WHERE pTypeId = '1'
> UNION ALL
> SELECT ProcessDefTypePo.typeId, ProcessDefTypePo.pTypeId FROM children
> INNER JOIN ProcessDefTypePo ON children.typeId = ProcessDefTypePo.pTypeId
> )
>   select t1.typeId, t1.pTypeId, t1.typeName, t1.description, t2.typeName
> as pTypeName from ProcessDefTypePo t1 left join ProcessDefTypePo t2 on
> t1.pTypeId = t2.typeId where t1.typeId not in ( select typeId from children )
>
> The execution error in version 2.2.0 is:
> Caused by: class org.apache.ignite.IgniteCheckedException: Unknown query
> type: null
> at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2316)
> at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:1820)
> ... 165 more
> Caused by: java.lang.UnsupportedOperationException: Unknown query type:
> null
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseQuery(GridSqlQueryParser.java:1225)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseTable(GridSqlQueryParser.java:501)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseTableFilter(GridSqlQueryParser.java:465)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseSelect(GridSqlQueryParser.java:565)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseQuery(GridSqlQueryParser.java:1220)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseQueryExpression(GridSqlQueryParser.java:452)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseExpression0(GridSqlQueryParser.java:1436)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseExpression(GridSqlQueryParser.java:1267)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseExpression0(GridSqlQueryParser.java:1378)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseExpression(GridSqlQueryParser.java:1267)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseSelect(GridSqlQueryParser.java:536)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseQuery(GridSqlQueryParser.java:1220)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parse(GridSqlQueryParser.java:1181)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.parse(GridSqlQuerySplitter.java:1604)
> at
> org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.split(GridSqlQuerySplitter.java:197)
> at
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1307)
> at
> org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1815)
> at
> org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1813)
> at
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
> at
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2293)
>
> Can you help me with this problem? Thanks.


Re:Re: Join with subquery in 1.9 or 2.0

2017-09-22 Thread 贺波
Hi,
Your demo is different from mine. Mine is a recursive example, using the
"with as" construct. My test demo is in the attachment.

package userlist;

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

/**
 *
 */
public class JoinTest extends GridCommonAbstractTest {
    /** */
    private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);

    /** */
    private static final String DEFAULT_CACHE_NAME = "MYCACHE";

    /** */
    private static final int NODES_COUNT = 2;

    /** */
    private static final int CUSTOMER_COUNT = 23;

    /** */
    private static final int DETAILS_PER_CUSTOMER = 37;

    /** */
    private static String SQL = "with RECURSIVE children(typeId, pTypeId) AS ( " +
        " SELECT typeId, pTypeId FROM ProcessDefTypePo WHERE pTypeId = ? " +
        " UNION ALL " +
        " SELECT ProcessDefTypePo.typeId, ProcessDefTypePo.pTypeId " +
        " FROM children INNER JOIN ProcessDefTypePo ON children.typeId = ProcessDefTypePo.pTypeId " +
        ") " +
        "select t1.typeId, t1.pTypeId, t1.typeName, t1.description, t2.typeName as pTypeName " +
        "from ProcessDefTypePo t1 left join ProcessDefTypePo t2 on t1.pTypeId = t2.typeId " +
        "where t1.typeId not in ( select typeId from children )";

    /** {@inheritDoc} */
    @Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
        IgniteConfiguration cfg = super.getConfiguration(gridName);

        cfg.setPeerClassLoadingEnabled(false);

        TcpDiscoverySpi disco = new TcpDiscoverySpi();

        disco.setIpFinder(IP_FINDER);

        cfg.setDiscoverySpi(disco);

        return cfg;
    }

    /** {@inheritDoc} */
    @Override protected void beforeTestsStarted() throws Exception {
        startGridsMultiThreaded(NODES_COUNT, true);
    }

    /** {@inheritDoc} */
    @Override protected void afterTestsStopped() throws Exception {
        stopAllGrids();
    }

    /**
     * @param name Cache name.
     * @param idxTypes Indexed types.
     * @return Cache configuration.
     */
    private CacheConfiguration cacheConfig(String name, Class... idxTypes) {
        return new CacheConfiguration(DEFAULT_CACHE_NAME)
            .setName(name)
            .setCacheMode(CacheMode.PARTITIONED)
            .setAtomicityMode(CacheAtomicityMode.ATOMIC)
            .setBackups(1)
            .setIndexedTypes(idxTypes);
    }

    /** */
    public void testJoinQuery() throws Exception {
        CacheConfiguration ccfg1 = cacheConfig("processDefTypeCache", String.class, ProcessDefTypePo.class);

        final IgniteCache c1 = ignite(0).getOrCreateCache(ccfg1);

        try {
            populateDataIntoCaches(c1);

            final SqlFieldsQuery qry = new SqlFieldsQuery(SQL);

            qry.setDistributedJoins(true);

            assertEquals(CUSTOMER_COUNT, c1.query(qry).getAll().size());

            //
            Ignite client = startGrid("client", getConfiguration("client").setClientMode(true));
            IgniteCache c = client.cache("processDefTypeCache");

            assertEquals(CUSTOMER_COUNT, c.query(qry).getAll().size());
        }
        finally {
            c1.destroy();
        }
    }

    /**
     * @param c1 Cache1.
     */
    private void populateDataIntoCaches(IgniteCache c1) {
        ProcessDefTypePo processDefTypePo = new ProcessDefTypePo();
        processDefTypePo.setTypeId("0");
        processDefTypePo.setPTypeId("-1");
        processDefTypePo.setTypeName("0");
        c1.put("0", processDefTypePo);

        for (int j = 1; j < 10; j++) {
            ProcessDefTypePo dtls = new ProcessDefTypePo();
            dtls.setTypeId(String.valueOf(j));
            dtls.setPTypeId(String.valueOf(j - 1));
            dtls.setTypeName("" + j);

            c1.put(String.valueOf(j), dtls);
        }
    }

    /**
     *
     */
    private static class ProcessDefTypePo {
        /** */
        @QuerySqlField(index = true)
        private String typeId;

        /** */
        @QuerySqlField(index = true)
        private String pTypeId;

        @QuerySqlField

Re: Join with subquery in 1.9 or 2.0

2017-09-22 Thread Andrey Mashenkov
Hi,

I can't reproduce the issue on version 1.9, 2.0, or 2.1.
Please find the repro attached.

Would you please check if I've missed something?


On Wed, Sep 20, 2017 at 1:41 PM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Hi,
>
> Looks like a bug.
>
> Would you please share a full stacktrace and a reproducer if possible?
>
> You can try to rewrite the query without a join to something like this:
>  Select .. from A, B Where A.id = B.id;
>
> On Mon, Sep 18, 2017 at 7:36 PM, acet  wrote:
>
>> Hello,
>> I was looking to do something similar to:
>>
>> SELECT a.customerid, h.name, h.address
>> FROM
>> "customer_cache".CUSTOMER as a
>> JOIN
>> (select min(id) as id, name, address, cust_id from "second_cache".DETAILS
>> group by name, address, cust_id) as h on a.customerid = h.cust_id
>>
>> This seems to work fine in the debug console but when trying it with
>> ignite
>> I get:
>>
>> javax.cache.CacheException: class org.apache.ignite.IgniteException:
>> org.apache.ignite.internal.processors.query.h2.sql.GridSqlJoin cannot be
>> cast to org.apache.ignite.internal.processors.query.h2.sql.GridSqlAlias
>>
>>
>> Is there any way to achieve this?
>> Thanks
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>



-- 
Best regards,
Andrey V. Mashenkov
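
For reference, a rough sketch of the comma-style rewrite suggested above,
applied to the original query (a hedged illustration; not verified to avoid
the GridSqlAlias cast error on the affected versions):

import org.apache.ignite.cache.query.SqlFieldsQuery;

class CommaJoinRewrite {
    /** Comma-style join instead of JOIN ... ON over the derived table. */
    static final SqlFieldsQuery QRY = new SqlFieldsQuery(
        "SELECT a.customerid, h.name, h.address " +
        "FROM \"customer_cache\".CUSTOMER a, " +
        "     (SELECT MIN(id) AS id, name, address, cust_id " +
        "      FROM \"second_cache\".DETAILS GROUP BY name, address, cust_id) h " +
        "WHERE a.customerid = h.cust_id");
}
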
package userlist;

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

/**
 *
 */
public class JoinTest extends GridCommonAbstractTest {
/** */
private static final TcpDiscoveryIpFinder IP_FINDER = new TcpDiscoveryVmIpFinder(true);

/** */
private static final String DEFAULT_CACHE_NAME = "MYCACHE";

/** */
private static final int NODES_COUNT = 2;

/** */
private static final int CUSTOMER_COUNT = 23;

/** */
private static final int DETAILS_PER_CUSTOMER = 37;

/** */
private static final String SQL = "SELECT a.customerid, h.name, h.address \n" +
"FROM \n" +
"\"customer_cache\".CUSTOMER as a \n" +
"JOIN \n" +
"(select min(id) as id, name, address, cust_id from \"second_cache\".DETAILS \n" +
"group by name, address, cust_id) as h on a.customerid = h.cust_id";

/** {@inheritDoc} */
@Override protected IgniteConfiguration getConfiguration(String gridName) throws Exception {
IgniteConfiguration cfg = super.getConfiguration(gridName);

cfg.setPeerClassLoadingEnabled(false);

TcpDiscoverySpi disco = new TcpDiscoverySpi();

disco.setIpFinder(IP_FINDER);

cfg.setDiscoverySpi(disco);

return cfg;
}

/** {@inheritDoc} */
@Override protected void beforeTestsStarted() throws Exception {
startGridsMultiThreaded(NODES_COUNT, true);
}

/** {@inheritDoc} */
@Override protected void afterTestsStopped() throws Exception {
stopAllGrids();
}

/**
 * @param name Cache name.
 * @param idxTypes Indexed types.
 * @return Cache configuration.
 */
private  CacheConfiguration cacheConfig(String name, Class... idxTypes) {
return new CacheConfiguration(DEFAULT_CACHE_NAME)
.setName(name)
.setCacheMode(CacheMode.PARTITIONED)
.setAtomicityMode(CacheAtomicityMode.ATOMIC)
.setBackups(1)
.setIndexedTypes(idxTypes);
}

/** */
public void testJoinQuery() throws Exception {
CacheConfiguration ccfg1 = cacheConfig("second_cache", Long.class, Details.class);
CacheConfiguration ccfg2 = cacheConfig("customer_cache", Long.class, Customer.class);

final IgniteCache c1 = ignite(0).getOrCreateCache(ccfg1);
final IgniteCache c2 = ignite(0).getOrCreateCache(ccfg2);

try {
populateDataIntoCaches(c1, c2);

final SqlFieldsQuery qry = new SqlFieldsQuery(SQL);

qry.setDistributedJoins(true);

assertEquals(CUSTOMER_COUNT,c1.query(qry).getAll().size());

//
Ignite client = startGrid("client", getConfiguration("client").setClientMode(true));
IgniteCache c = client.cache("customer_cache");

assertEquals(CUSTOMER_COUNT,c.query(qry).getAll().size());

}

An issue of Ignite In-Memory SQL Grid since version 2.0.0

2017-09-22 Thread 贺波
Hi, I have used Apache Ignite in my project for more than a year, from version
1.8.0 to 2.2.0, and I use the Ignite In-Memory SQL Grid. I use a "with as"
(recursive CTE) construct in my SQL; it executes correctly in version 1.9.0 but
fails since version 2.0.0. My SQL statement is:
  with RECURSIVE children(typeId, pTypeId) AS (
SELECT typeId, pTypeId FROM ProcessDefTypePo WHERE pTypeId = '1'
UNION ALL
SELECT ProcessDefTypePo.typeId, ProcessDefTypePo.pTypeId FROM children
INNER JOIN ProcessDefTypePo ON children.typeId = ProcessDefTypePo.pTypeId
)
  select t1.typeId, t1.pTypeId, t1.typeName, t1.description, t2.typeName as pTypeName
from ProcessDefTypePo t1 left join ProcessDefTypePo t2 on t1.pTypeId = t2.typeId
where t1.typeId not in ( select typeId from children )

The execution error in version 2.2.0 is:
Caused by: class org.apache.ignite.IgniteCheckedException: Unknown query type: 
null
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2316)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:1820)
... 165 more
Caused by: java.lang.UnsupportedOperationException: Unknown query type: null
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseQuery(GridSqlQueryParser.java:1225)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseTable(GridSqlQueryParser.java:501)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseTableFilter(GridSqlQueryParser.java:465)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseSelect(GridSqlQueryParser.java:565)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseQuery(GridSqlQueryParser.java:1220)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseQueryExpression(GridSqlQueryParser.java:452)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseExpression0(GridSqlQueryParser.java:1436)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseExpression(GridSqlQueryParser.java:1267)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseExpression0(GridSqlQueryParser.java:1378)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseExpression(GridSqlQueryParser.java:1267)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseSelect(GridSqlQueryParser.java:536)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parseQuery(GridSqlQueryParser.java:1220)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQueryParser.parse(GridSqlQueryParser.java:1181)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.parse(GridSqlQuerySplitter.java:1604)
at 
org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.split(GridSqlQuerySplitter.java:197)
at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1307)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1815)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1813)
at 
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2293)


Can you help me with this problem? Thanks.


Re: Using Ignite with Spark

2017-09-22 Thread Patrick Brunmayr
Hello Val

First of all, thanks for the answer. Let me explain our use case.

*What we are doing*

Our company provides a monitoring solution for machines in the manufacturing
industry. We have a hardware logger attached to each machine which collects
up to 6 different metrics (like power, piece count). These metrics are
sampled on a per-second basis and sent to our cloud every minute. The data is
currently stored in a Cassandra cluster.

*For the math of that *

One metric will generate about 33 million data points per year, meaning all
six metrics will produce a total of about 100 million data points per machine
per year. Let's say we have about 2000 machines out there; it's very obvious
that we are talking about terabytes of metric data.

*The goal*

We need to do some analytics on this data to provide reports for our
customers. Therefore we need to do all kinds of transformations, filtering,
and joining on that data. We also need support for secondary indexes and
grouping! This was the reason we chose Spark for this kind of job. We want
to speed up the Spark calculations with Ignite to provide a better
experience for our customers.

My idea is to use Ignite as a read-through cache in front of our Cassandra
cluster and combine this with Spark SQL. The data for a calculation should
only stay in the cache during the calculation and can easily be discarded
afterwards.
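
Roughly, I am thinking of something like the following (a hypothetical sketch,
not our actual code; the Cassandra store factory from the ignite-cassandra
module would be supplied where indicated):

import javax.cache.configuration.Factory;
import org.apache.ignite.cache.store.CacheStore;
import org.apache.ignite.configuration.CacheConfiguration;

class ReadThroughMetricsCache {
    /** Hypothetical value type for one sampled data point. */
    static class MetricPoint {
        long timestamp;
        double value;
    }

    /** Read-through cache configuration backed by an external store. */
    static CacheConfiguration<Long, MetricPoint> metricsCache(
        Factory<? extends CacheStore<Long, MetricPoint>> storeFactory) {
        CacheConfiguration<Long, MetricPoint> ccfg = new CacheConfiguration<>("metrics");

        ccfg.setReadThrough(true);               // cache misses are loaded from the underlying store
        ccfg.setCacheStoreFactory(storeFactory); // e.g. a CassandraCacheStoreFactory, configured separately

        return ccfg;
    }
}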


Now I need some information on how to set up my cluster correctly for that use
case. I don't know how many nodes I need, how many GB of RAM, or whether I
should put my Ignite nodes on the Spark workers or create a separate
cluster. I need this information for cost estimates.

Hope that helps a bit

Thx


2017-09-22 5:12 GMT+02:00 Valentin Kulichenko :

> Hello Patrick,
>
> See my comments below.
>
> Most of your questions don't have a generic answer and would heavily
> depend on your use case. Would you mind giving some more details about it
> so that I can give more specific suggestions?
>
> -Val
>
> On Thu, Sep 21, 2017 at 8:24 AM, Patrick Brunmayr <
> patrick.brunm...@kpibench.com> wrote:
>
>> Hello
>>
>>
>>- What is currently the best practice of deploying Ignite with Spark ?
>>
>>
>>- Should the Ignite node sit on the same machine as the Spark
>>executor ?
>>
>>
> Ignite can run either on the same boxes where Spark runs or as a separate
> cluster, and both approaches have their pros and cons.
>
>
>> According to this documentation
>>  Spark
>> should be given 75% of machine memory but what is left for Ignite then ?
>>
>> In general, Spark can run well with anywhere from *8 GB to hundreds of
>>> gigabytes* of memory per machine. In all cases, we recommend allocating
>>> only at most 75% of the memory for Spark; leave the rest for the operating
>>> system and buffer cache.
>>
>>
> The documentation states that you should give *at most* 75% to make sure the OS
> has a safe cushion for its own purposes. If Ignite runs along with Spark, the
> amount of memory allocated to Spark should be less than that maximum, of
> course.
>
>
>>
>>- Don't they battle for memory ?
>>
>>
> You should configure both Spark and Ignite so that they never try to
> consume more memory than is physically available, also leaving some for the OS.
> This way there will be no conflict.
>
>>
>>- Should I give the memory to Ignite or Spark?
>>
>>
> Again, this heavily depends on the use case and on how heavily you use both
> Spark and Ignite.
>
>
>>- Would Spark even benefit from Ignite if the Ignite nodes were
>>hosted on other machines?
>>
>>
> There are definitely use cases where this can be useful, although in others
> it is better to run Ignite separately.
>
>
>>
>>
>> We currently have hundreds of GB of data for analytics, and we want to use
>> Ignite to speed things up.
>>
>> Thank you
>>
>>
>>
>>
>>
>>
>>
>>
>
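
For what it's worth, a minimal sketch of capping Ignite's memory explicitly so
that it never competes with what is reserved for the Spark executors and the
OS (sizes are hypothetical; DataRegionConfiguration is the API in Ignite 2.3
and later, while the 2.1/2.2 versions discussed in this thread use the
analogous MemoryConfiguration/MemoryPolicyConfiguration):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

class CappedIgniteNode {
    public static void main(String[] args) {
        // Cap Ignite's default data region so it never grows into the memory
        // budgeted for the Spark executors and the OS.
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("default")
            .setInitialSize(1L * 1024 * 1024 * 1024)  // start at 1 GB off-heap
            .setMaxSize(8L * 1024 * 1024 * 1024);     // hard cap at 8 GB off-heap

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage);

        Ignition.start(cfg);
    }
}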