RE: OOM when using Ignite as HDFS Cache

2017-04-27 Thread Ivan Veselovsky
Hi, zhangshuai.ustc,
is this problem solved? Can we help more on the subject?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/OOM-when-using-Ignite-as-HDFS-Cache-tp11900p12297.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: HDP, Hive + Ignite

2017-04-27 Thread Ivan Veselovsky
Hi, Alena
 
1. The logs you have attached show some errors, but, in fact, I cannot work
with them until the way to reproduce the problem is known.

2. Here I mean that IGFS (a write-through cache built upon another file
system) and the Ignite map-reduce engine (the job tracker on port 11211) are
two independent things and can be used independently of each other. That means
that you can use IGFS without Ignite map-reduce, and Ignite map-reduce without
IGFS. If you experience some problem using them both, one idea to track down
the issue is to try the same job (1) without Ignite at all, (2) with IGFS but
without Ignite map-reduce, and (3) without IGFS but with the Ignite job
tracker. This may help to understand which subsystem causes the problem.
A similar approach can be used when trying to speed up a task.

3. This option should be a property of the Hadoop job, so it can either be set
in the global Hadoop configuration or set for a concrete Hadoop job, e.g.
hadoop jar ./hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi -Dignite.job.shared.classloader=false 5 10
Notice that here the -Dkey=value pairs must be placed between the program
name (pi) and its arguments (5, 10).
In the case of Hive, similar properties can be passed in using the --hiveconf
client option, like "hive ... --hiveconf ignite.job.shared.classloader=false ...".
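
If the job is written in Java rather than launched through the examples jar,
the same property can, as far as I can tell, also be set programmatically on
the job's Configuration (the class and job names below are just placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SharedClassloaderJobExample {
    public static Job createJob() throws Exception {
        Configuration conf = new Configuration();
        // Same effect as passing -Dignite.job.shared.classloader=false on the command line.
        conf.setBoolean("ignite.job.shared.classloader", false);

        Job job = Job.getInstance(conf, "example-job");
        // ... set mapper, reducer, input and output paths as usual, then submit.
        return job;
    }
}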

4. You see an OOME (OutOfMemoryError) related to insufficient heap. Heap size
is set via the JVM -Xms (initial) and -Xmx (maximum) options. The intended way
to configure those options for Ignite is to use the -J prefix: any argument
following -J will be passed to the Ignite JVM, e.g. the command "./ignite.sh -v
-J-Xms4g -J-Xmx4g" gives the Ignite JVM 4g of initial and 4g of max heap.
Off-heap memory parameters are managed differently, in Ignite's XML config.

5. It is very problematic to provide all possible values, since one
configuration property may be set to many different value beans, and each of
them has its own properties. E.g. the
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#ipFinder property (at the
end of your default-config.xml) is set to an
"org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder"
there, but there are also ~10 other implementations of
org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder, and they
have different properties.
Moreover, one can create their own implementation with some ad-hoc properties.
So a file "with all possible values" is really hard or impossible to create.
Printing the default config with all the values explicitly shown is doable --
you can submit it as a feature request.
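
For illustration (the addresses below are made up), here are two ipFinder
implementations with completely different properties; the same beans can
equally be declared as property elements in default-config.xml:

import java.util.Arrays;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class IpFinderExamples {
    /** Static IP list: the ipFinder implementation used in your default-config.xml. */
    public static IgniteConfiguration staticIpConfig() {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("10.0.0.1:47500..47509"));

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(spi);
        return cfg;
    }

    /** Multicast-based ipFinder: a different implementation with different properties. */
    public static IgniteConfiguration multicastConfig() {
        TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
        ipFinder.setMulticastGroup("228.10.10.157");

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(spi);
        return cfg;
    }
}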

6. IGFS can be used as a standalone file system (aka PRIMARY mode), and as a
caching layer on top of another file system (DUAL_SYNC, DUAL_ASYNC modes).
Regarding high availability: IGFS is not highly available. In a dual mode it
will re-read data from the underlying file system layer on failure; in primary
mode data loss is possible. The problem with starting HDFS is only with the
global configs seen by the HDFS daemons -- they should not specify IGFS as the
default file system.
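
For illustration (the IGFS name is made up), the mode is selected on the
FileSystemConfiguration -- shown here via the Java API, though it is normally
set in the XML config:

import org.apache.ignite.configuration.FileSystemConfiguration;
import org.apache.ignite.igfs.IgfsMode;

public class IgfsModeExample {
    public static FileSystemConfiguration dualSyncIgfs() {
        FileSystemConfiguration igfsCfg = new FileSystemConfiguration();
        igfsCfg.setName("igfs");
        // PRIMARY = standalone in-memory file system; DUAL_SYNC / DUAL_ASYNC =
        // caching layer on top of a secondary file system such as HDFS
        // (the secondary file system is configured separately).
        igfsCfg.setDefaultMode(IgfsMode.DUAL_SYNC);
        return igfsCfg;
    }
}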

7. A Hive query will be transformed into a number of map-reduce tasks, and each
task will run on the Ignite execution engine. Ignite map-reduce is designed in
such a way that it does not "spill" intermediate data between map and reduce --
it stores all of it in memory (mainly off-heap, and this is not the cache
off-heap you configure in default-config.xml). It is difficult to give an exact
answer to your question in theory; the exact limit can better be found
experimentally. A very rough estimation is that 1.5x to 2x the file data size
(uncompressed, 9G) should fit in memory across all nodes. Note that the
off-heap memory used by Ignite map-reduce is not limited in the configuration.
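
For illustration only (the node count is an assumption): with 9G of
uncompressed data, the rough rule above suggests planning for about 13.5G - 18G
of memory cluster-wide; on a 4-node cluster that is roughly 3.5G - 4.5G per
node, in addition to the heap discussed in point 4.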

8. You can monitor the resources utilized by Ignite using
(1) its own diagnostic printouts (they appear periodically in its console
output when it is run with the "-v" option);
(2) any Java monitoring tool, such as JConsole, VisualVM, Java Flight Recorder,
etc.
But, importantly, the off-heap memory used in map-reduce will not be shown in
any of the tools listed above. It can be seen only as the total amount of
memory used by the Ignite Java process -- you can use a native (OS-specific)
process monitoring tool for that.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/HDP-Hive-Ignite-tp12195p12296.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Web Sessions Caching Failover question

2017-04-27 Thread ignite_user2016
Hello Val,

I did look at this possibility, but this is something that needs to be done on
the Spring Boot side: Spring Boot should provide the capability to manage the
user session in memory when the distributed cache can't be reached, as a kind
of fallback mechanism.

I still believe this is something a distributed cache can't do, or should not
do; however, it could have failure handlers which would tell Spring Boot to
manage sessions in memory.

The use case gets more complicated when a user session has fallen back to
memory and Ignite then comes up: the framework should then replicate the
session back to Ignite, and vice versa.

Hope it helps.

Thanks,
Rishi





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Web-Sessions-Caching-Failover-question-tp12240p12295.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Can sql query read from back up partion instead of primary partition

2017-04-27 Thread Sergi Vladykin
No, readFromBackup does not affect SQL in any way.

Sergi

2017-04-27 12:52 GMT+03:00 neerajbhatt :

> Hi All
>
> We are looking to scale our SQL query read operations; can we read from
> backup partitions instead of primary partitions?
>
> There is something called "mode", and the setting "readFromBackup". They seem
> to work for cache operations. Is there a similar concept for SQL queries?
>
> Thanks
>
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Can-sql-query-read-from-back-up-
> partion-instead-of-primary-partition-tp12292.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: tcp-disco-ip-finder-cleaner

2017-04-27 Thread gatorjh
Here is the ignite config




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/tcp-disco-ip-finder-cleaner-tp12249p12293.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Can sql query read from back up partion instead of primary partition

2017-04-27 Thread neerajbhatt
Hi All

We are looking to scale our SQL query read operations; can we read from
backup partitions instead of primary partitions?

There is something called "mode", and the setting "readFromBackup". They seem to
work for cache operations. Is there a similar concept for SQL queries?

Thanks







--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Can-sql-query-read-from-back-up-partion-instead-of-primary-partition-tp12292.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: tcp-disco-ip-finder-cleaner

2017-04-27 Thread Andrey Mashenkov
Hi,

Would you please share the Ignite config?

On Tue, Apr 25, 2017 at 9:55 PM, gatorjh  wrote:

> We are running Ignite 1.8.0 within Docker on an AWS instance. Ignite
> registers with Zookeeper. We use a BasicAddressResolver in the Ignite
> config
> so that only the Docker host IP is registered. This works fine until the
> tcp-disco-ip-finder-cleaner thread kicks in and adds the loopback and the
> Docker host IP addresses to the address list.
>
> #startup looks good
>
>
> #first time from thread looks good
>
>
> #Bam! Loopback and container IP now in the list.
>
>
> What could I be doing wrong? Any help appreciated.
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/tcp-disco-ip-finder-cleaner-tp12249.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov


Re: org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing - Query execution is too long

2017-04-27 Thread Andrey Mashenkov
Hi,

1. You may want to make Ignite use an index on APPLICATION_DATE, or even a
composite index of the two fields: EXPIRATION_DT, APPLICATION_DATE.
Ignite does not support index HINTs in the 1.x versions, but this will be added in 2.0.
2. I see no query plan for the second query. Try to run it from the console
with the "explain" command.

BTW, I see no Ignite issues here; this is a common SQL-engine problem of making
a query run with the appropriate index.
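
For illustration only -- the entity and index names below are placeholders, not
your actual class -- such a composite index can be declared with Ignite's query
annotations, where the group name ties the two columns into a single index:

import java.util.Date;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class CrDecisionBusinessPgEntity {
    /** First column of the composite index. */
    @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = "expdt_appdt_idx", order = 0)})
    private Date expirationDt;

    /** Second column of the same composite index. */
    @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = "expdt_appdt_idx", order = 1)})
    private Date applicationDate;

    /** Queried with equality only; a plain single-column index is enough here. */
    @QuerySqlField(index = true)
    private String suppressionFlag;
}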

On Thu, Apr 27, 2017 at 9:18 AM, abhijitprusty 
wrote:

> Hi Andrew,
>
> Thanks for the reply. Removing the index did help. But we got the same issue
> in one more query. Could you let us know what could be the issue here?
>
> 02:51:52.591 [http-/172.30.8.97:8443-27] WARN
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing - Query
> execution is too long [time=3098 ms, sql='SELECT
> PGE__Z1.APPLICATION_DATE __C1_0,
> PGE__Z1.CR_DECSN_BUSINESS_PG_SURR_ID __C1_1,
> PGE__Z1.INCOMING_CRID __C1_2,
> PGE__Z1.ORIGINAL_CRID __C1_3,
> PGE__Z1.BUSINESS_NAME __C1_4,
> PGE__Z1.SUPPRESSION_FLAG __C1_5,
> PGE__Z1.EFFECTIVE_DT __C1_6,
> PGE__Z1.EXPIRATION_DT __C1_7,
> PGE__Z1.FINAL_CREDIT_CLASS __C1_8,
> PGE__Z1.COUNTER __C1_9,
> PGE__Z1.CHANNEL __C1_10,
> PGE__Z1.TRANSACTION_TIMESTAMP __C1_11,
> PGE__Z1.GAMING_DUP_COUNTER __C1_12,
> PGE__Z1.GAMING_FLAG __C1_13,
> PGE__Z1.PROMOTIONAL_CODE __C1_14,
> PGE__Z1.CREDIT_CLASS_PORT_IN __C1_15,
> PGE__Z1.CREDIT_CLASS_WOUT_PORT_IN __C1_16,
> PGE__Z1.TAX_ID_ENCRYPT __C1_17,
> PGE__Z1.BUSINESS_ADDRESS_LINE1 __C1_18,
> PGE__Z1.BUSINESS_ZIP __C1_19,
> PGE__Z1.BUSINESS_CITY __C1_20,
> PGE__Z1.BUSINESS_STATE __C1_21
> FROM "CustomIgniteCacheBusinessPg".CRDECISIONBUSINESSPGENTITY PGE__Z1
> WHERE ((PGE__Z1.SUPPRESSION_FLAG = 'N') AND (PGE__Z1.EXPIRATION_DT >=
> CURRENT_DATE())) AND (PGE__Z1.APPLICATION_DATE >= DATEADD('DAY', -30,
> CURRENT_DATE()))', plan=
> SELECT
> PGE__Z1.APPLICATION_DATE AS __C1_0,
> PGE__Z1.CR_DECSN_BUSINESS_PG_SURR_ID AS __C1_1,
> PGE__Z1.INCOMING_CRID AS __C1_2,
> PGE__Z1.ORIGINAL_CRID AS __C1_3,
> PGE__Z1.BUSINESS_NAME AS __C1_4,
> PGE__Z1.SUPPRESSION_FLAG AS __C1_5,
> PGE__Z1.EFFECTIVE_DT AS __C1_6,
> PGE__Z1.EXPIRATION_DT AS __C1_7,
> PGE__Z1.FINAL_CREDIT_CLASS AS __C1_8,
> PGE__Z1.COUNTER AS __C1_9,
> PGE__Z1.CHANNEL AS __C1_10,
> PGE__Z1.TRANSACTION_TIMESTAMP AS __C1_11,
> PGE__Z1.GAMING_DUP_COUNTER AS __C1_12,
> PGE__Z1.GAMING_FLAG AS __C1_13,
> PGE__Z1.PROMOTIONAL_CODE AS __C1_14,
> PGE__Z1.CREDIT_CLASS_PORT_IN AS __C1_15,
> PGE__Z1.CREDIT_CLASS_WOUT_PORT_IN AS __C1_16,
> PGE__Z1.TAX_ID_ENCRYPT AS __C1_17,
> PGE__Z1.BUSINESS_ADDRESS_LINE1 AS __C1_18,
> PGE__Z1.BUSINESS_ZIP AS __C1_19,
> PGE__Z1.BUSINESS_CITY AS __C1_20,
> PGE__Z1.BUSINESS_STATE AS __C1_21
> FROM "CustomIgniteCacheBusinessPg".CRDECISIONBUSINESSPGENTITY PGE__Z1
> /* "CustomIgniteCacheBusinessPg"."EXPIRATION_DT_idx": EXPIRATION_DT >=
> CURRENT_DATE() */
> WHERE ((PGE__Z1.SUPPRESSION_FLAG = 'N')
> AND (PGE__Z1.EXPIRATION_DT >= CURRENT_DATE()))
> AND (PGE__Z1.APPLICATION_DATE >= DATEADD('DAY', -30, CURRENT_DATE()))
> , parameters=[]]
>
>
> We have tried the below things as part of performance tuning.
>
>
> consumerConfig = new CacheConfiguration<>(CONSUMER_CACHE);
> consumerConfig.setCopyOnRead(false);
> consumerConfig.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
> //consumerConfig.setRebalanceMode(CacheRebalanceMode.ASYNC);
> consumerConfig.setCacheMode(CacheMode.PARTITIONED); // Default.
>
> consumerConfig.setCacheStoreFactory(FactoryBuilder.factoryOf(CacheStoreConsumer.class));
> consumerConfig.setBackups(backUps);
>
> consumerConfig.setRebalanceBatchSize(Integer.valueOf(environment.getProperty("cache.rebalancebathcsize"))
>     * Integer.valueOf(environment.getProperty("cache.rebalancebathcsize")));
> //consumerConfig.setSnapshotableIndex(true);
>
> consumerConfig.setRebalanceThrottle(Integer.valueOf(environment.getProperty("cache.rebalancethrottle")));
>
> consumerConfig.setStartSize(Integer.valueOf(environment.getProperty("cache.consumer.initialSize")));
>
> consumerConfig.setOffHeapMaxMemory(Integer.valueOf(environment.getProperty("cache.consumer.ignite.offHeapmemory")));
>
> consumerConfig.setIndexedTypes(Long.class, CRDecisionConsumerEntity.class);
> consumerCache = ignite.getOrCreateCache(consumerConfig);
> ***
>
> Is it OK to use the snapshot for the query index?
>
> I have attached the entity we are using with the index; could you please check?
>
> Below is the query we are trying to execute.
>
>
>
> String filterString = " WHERE SUPPRESSION_FLAG IN ('N') and EXPIRATION_DT >= CURRENT_DATE() and APPLICATION_DATE >= CURRENT_DATE() - 90";
>
> SELECT  APPLICATION_DATE, CR_DECSN_BUSINESS_SURR_ID, INCOMING

Re: Write behind and eventual consistency

2017-04-27 Thread Gaurav Bajaj
Hi Val,

Our use case:

1. Read records from a file.
2. Do computations on each record.
3. Put the records into the cache and persist them using write-behind.
4. When all the records from the file are processed, updated in the cache, and
persisted to the DB, we want to trigger another process which will do the next
set of operations on these records from the cache.
5. We want to trigger this next process and mark the original file as processed
only when we are sure the data is persisted, so that in case of node failure we
need not process that file again.

On Thu, Apr 27, 2017 at 10:26 AM, steve.hostettler <
steve.hostett...@gmail.com> wrote:

> Hi Val,
>
> the use case is the following
>
> 1) Load data into the database from an external system
> 2) Once ready load it into the grid
> 3) Process something that does massive write behinds
> 4) Take a snapshot of the results (or) do a backup of the tables   <<--- At
> this point I need the eventual consistency to ...eventually be consistent
>
> At step 4 I cannot afford to have some update still in progress. This is
> even more important because, with write-behind, I cannot maintain
> referential integrity (since the inserts/updates are done in a random order)
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Write-behind-and-eventual-
> consistency-tp12242p12287.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: CacheAbstractJdbcStore batch deleteAll bug.

2017-04-27 Thread Alexey Kuznetsov
Hi, Gordon.

Do you have a reproducer we could debug?

Thanks.

On Wed, Apr 26, 2017 at 9:50 AM, Gordon Reid (Nine Mile) <
gordon.r...@ninemilefinancial.com> wrote:

> Actually the fix will be a little more complicated, because the variable
> “em” has already been updated to the new type before the last batch has
> been executed.
>
>
>
> *From:* Gordon Reid (Nine Mile) [mailto:gordon.r...@ninemilefinancial.com]
>
> *Sent:* Wednesday, 26 April 2017 12:05 PM
> *To:* user@ignite.apache.org
> *Subject:* CacheAbstractJdbcStore batch deleteAll bug.
>
>
>
> Hi Guys,
>
>
>
> I think there is a problem in CacheAbstractJdbcStore.deleteAll
>
>
>
> I find that sometimes, I see messages like this
>
>
>
> [DEBUG] 2017-04-26 11:08:54.097 [flusher-0-#31%null%] CacheJdbcPojoStore -
> Delete entries from db [cache=D9Cache, keyType=com.nmf.model.trading.TradeKey,
> cnt=2]
>
> [WARN ] 2017-04-26 11:08:54.097 [flusher-0-#31%null%] CacheJdbcPojoStore -
> Batch deleteAll returned unexpected updated row count [table=public.
> nmfctrade, entry=RiskRuleKey [id=1], expected=1, actual=0]
>
> [WARN ] 2017-04-26 11:08:54.097 [flusher-0-#31%null%] CacheJdbcPojoStore -
> Batch deleteAll returned unexpected updated row count
> [table=public.nmfctrade, entry=RiskRuleKey [id=2], expected=1, actual=0]
>
>
>
> Note the entity types are different!
>
>
>
> Also look at the attached image. You can see the statement is still
> targeting Trade, but our entity is now Position.
>
>
>
> You can see in the code that the delStmt never gets refreshed when changing
> over to a different entity type.
>
>
>
> for (Object key : keys) {
> Object keyTypeId = typeIdForObject(key);
>
> em = entryMapping(cacheName, keyTypeId);
>
> if (delStmt == null) {
> delStmt = conn.prepareStatement(em.remQry);
>
> currKeyTypeId = keyTypeId;
> }
>
> if (!currKeyTypeId.equals(keyTypeId)) {
> if (log.isDebugEnabled())
> log.debug("Delete entries from db [cache=" + 
> U.*maskName*(cacheName) +
> ", keyType=" + em.keyType() + ", cnt=" + prepared + "]");
>
> executeBatch(em, delStmt, "deleteAll", fromIdx, prepared, lazyKeys);
>
> fromIdx += prepared;
>
> prepared = 0;
>
> currKeyTypeId = keyTypeId;
>
> }
>
>
>
>
>
> I believe it should be something like this:
>
>
>
> for (Object key : keys) {
> Object keyTypeId = typeIdForObject(key);
>
> em = entryMapping(cacheName, keyTypeId);
>
> if (delStmt == null) {
> delStmt = conn.prepareStatement(em.remQry);
>
> currKeyTypeId = keyTypeId;
> }
>
> if (!currKeyTypeId.equals(keyTypeId)) {
> if (log.isDebugEnabled())
> log.debug("Delete entries from db [cache=" + 
> U.*maskName*(cacheName) +
> ", keyType=" + em.keyType() + ", cnt=" + prepared + "]");
>
> executeBatch(em, delStmt, "deleteAll", fromIdx, prepared, lazyKeys);
>
> fromIdx += prepared;
>
> prepared = 0;
>
> currKeyTypeId = keyTypeId;
>
> delStmt = conn.prepareStatement(em.remQry);
> }
>
>
>
> Thanks,
>
> Gordon.
>
>
>
>
>
> This email and any attachments are proprietary & confidential and are
> intended solely for the use of the individuals to whom it is addressed. Any
> views or opinions expressed are solely for those of the author and do not
> necessarily reflect those of Nine Mile Financial Pty. Limited. If you have
> received this email in error, please let us know immediately by reply email
> and delete from your system. Nine Mile Financial Pty. Limited. ABN: 346
> 1349 0252
>



-- 
Alexey Kuznetsov


RE: Write behind and eventual consistency

2017-04-27 Thread steve.hostettler
Hi Val,

the use case is the following:

1) Load data into the database from an external system
2) Once ready, load it into the grid
3) Process something that does massive write-behinds
4) Take a snapshot of the results (or) do a backup of the tables   <<--- At
this point I need the eventual consistency to ...eventually be consistent

At step 4 I cannot afford to have some update still in progress. This is
even more important because, with write-behind, I cannot maintain
referential integrity (since the inserts/updates are done in a random order)



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Write-behind-and-eventual-consistency-tp12242p12287.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Web Sessions Caching Failover question

2017-04-27 Thread vkulichenko
It seems that it will always throw an exception in this scenario. However, it
makes sense to me; are you willing to contribute this improvement?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Web-Sessions-Caching-Failover-question-tp12240p12286.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Write behind and eventual consistency

2017-04-27 Thread vkulichenko
Hi Steve,

What is the business use case behind this?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Write-behind-and-eventual-consistency-tp12242p12285.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Offheap and max memory

2017-04-27 Thread Vyacheslav Daradur
> It looks like documentation is misleading.
I partially agree.

CacheConfiguration is validated during creation of a cache (when you call
ignite.getOrCreateCache), and some mismatches, like OFFHEAP mode with
offHeapMaxMemory equal to -1, will be resolved.

Maybe it makes sense to add an explanation to the 'offHeapMaxMemory' field in
Ignite 1.9.
As far as I know, Ignite 2.0 is moving to a fully off-heap page memory model,
and CacheConfiguration will be remade, including the removal of the
'offHeapMaxMemory' field.
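
For reference, a minimal sketch (the cache name and size are made up) of the
OFFHEAP_TIERED setup being discussed, on the pre-2.0 CacheConfiguration API;
after the cache is created, getOffHeapMaxMemory() reflects the resolved value:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class OffheapCacheExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("offheapCache");
            cfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
            cfg.setOffHeapMaxMemory(512L * 1024 * 1024); // 512 MB off-heap limit

            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);
            cache.put(1L, "value");

            System.out.println("offHeapMaxMemory = " + cfg.getOffHeapMaxMemory());
        }
    }
}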



2017-04-27 2:39 GMT+03:00 javastuff@gmail.com :

> It looks like documentation is misleading.
>
> Tried test program -
>
> - without setting offHeapMaxMemory, getOffHeapMaxMemory returns 0.
> - Setting offHeapMaxMemory=-1, getOffHeapMaxMemory returns 0.
>
> Thanks,
> -Sam
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Offheap-and-max-memory-tp12275p12277.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best Regards, Vyacheslav


RE: Network Discovery inside docker containers

2017-04-27 Thread Lukas Lentner
Ok,

I heard on the chat that I should use an AddressResolver:
https://ignite.apache.org/releases/1.8.0/javadoc/org/apache/ignite/configuration/BasicAddressResolver.html

I will look into it.
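
From the javadoc, my (untested) understanding is that it would be wired roughly
like this -- all IPs below are invented:

import java.util.HashMap;
import java.util.Map;
import org.apache.ignite.configuration.BasicAddressResolver;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DockerAddressResolverExample {
    public static IgniteConfiguration configure() throws Exception {
        // Map the internal (container) address to the externally reachable
        // (docker host) address that other nodes should use.
        Map<String, String> addrMap = new HashMap<>();
        addrMap.put("172.17.0.2", "10.1.2.3");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setAddressResolver(new BasicAddressResolver(addrMap));
        return cfg;
    }
}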

Thankx
Lukas




Lukas Lentner, B. Sc.
St.-Cajetan-Straße 13
81669 München
Deutschland
Fon: +49 / 89  / 71 67 44 96
Mobile:  +49 / 176 / 24 77 09 22
E-Mail:  kont...@lukaslentner.de
Website: www.LukasLentner.de

IBAN:DE33 7019  0001 1810 17
BIC: GENODEF1M01 (Münchner Bank)

From: Lukas Lentner [mailto:kont...@lukaslentner.de]
Sent: Thursday, 27 April 2017 08:58
To: user@ignite.apache.org
Subject: Network Discovery inside docker containers

Hi,

I have a question about Discovery in a Docker environment.

I have multiple machines running multiple Docker containers with Ignite inside.
I want to use JDBC discovery. I give every container a dedicated communication
port and discovery port.
My understanding was that the database table should be filled with HOSTs and
PORTs that can be reached from any Ignite node so that they can find each
other. That is why I exposed both ports through Docker to the outside for each
container without remapping them (changing the port number). You could call
machine-1-ip:container-1-discovery-port to discover "Container 1" inside
"Machine 1".

To achieve that I set discoveryLocalAddress to the IP of the Docker host (the
machine). But it seems that does not work, because this address is also used to
bind the socket inside the Docker container.

As I do not care, from this perspective, on which network interface Ignite
binds inside the Docker container, I would prefer not to set the localAddress
of the discoverySpi.

How can I avoid setting the local discovery address (because of binding) but
still make the JDBC discovery IP finder put the machine IP into the database
table?

It is not about retrieving the machine IP; at all times I have that!

Thankx
Lukas




Lukas Lentner, B. Sc.

St.-Cajetan-Straße 13
81669 München
Deutschland

Fon: +49 / 89  / 71 67 44 96
Mobile:  +49 / 176 / 24 77 09 22

E-Mail:  kont...@lukaslentner.de
Website: www.LukasLentner.de

IBAN:DE33 7019  0001 1810 17
BIC: GENODEF1M01 (Münchner Bank)