[jira] [Commented] (IGNITE-2310) Lock cache partition for affinityRun/affinityCall execution
[ https://issues.apache.org/jira/browse/IGNITE-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388250#comment-15388250 ] Dmitriy Setrakyan commented on IGNITE-2310: --- Taras, where is the patch? I don't see anything attached to this ticket and no PR from GitHub. > Lock cache partition for affinityRun/affinityCall execution > --- > > Key: IGNITE-2310 > URL: https://issues.apache.org/jira/browse/IGNITE-2310 > Project: Ignite > Issue Type: New Feature > Components: cache >Reporter: Valentin Kulichenko >Assignee: Taras Ledkov >Priority: Critical > Labels: community > Fix For: 1.7 > > > The partition of a key passed to {{affinityRun}} must be located on the affinity > node when a compute job is sent to that node. The partition has to be > locked on the cache while the compute job is being executed. This will allow > executing queries safely (Scan or local SQL) over data that is located > locally in the locked partition. > In addition, the Ignite Compute API has to be extended by adding {{affinityCall}} > and {{affinityRun}} methods that accept a list of caches whose partitions have > to be locked while a compute task is being executed. > Test cases to validate the functionality: > 1) A local SQL query over data located in a concrete partition in multiple > caches: > - create an Organisation cache and a Persons cache; > - collocate Persons by 'organisationID'; > - send {{affinityRun}} using 'organisationID' as the affinity key, passing the > Organisation and Persons cache names to the method to ensure that the > partition will be locked on both caches; > - execute the local SQL query 'SELECT * FROM Persons as p, Organisation as o > WHERE p.orgId=o.id' on a changing topology. The result set must be complete, > and the partition over which the query is executed must not be moved to > another node. 
Due to affinity collocation, the partition number will be the same > for all Persons that belong to a particular 'organisationID'. > 2) A Scan Query over a particular partition that is locked while {{affinityCall}} > is executed. > UPD (YZ, May 31) > # If a closure arrives at a node but the partition is not there, it should be silently > failed over to the current owner. > # I don't think the user should provide a list of caches. How about reserving only > one partition, but evicting partitions only after all partitions in all caches (with > the same affinity function) on this node are locked for eviction? [~sboikov], can > you please comment? It seems this should work faster for closures and will > hardly affect the rebalancing logic. > # I would add a method {{affinityCall(int partId, String cacheName, > IgniteCallable)}} and the same for Runnable. This will allow me not to deal with > an affinity key in case I know the partition beforehand. > UPD (SB, June 01) > Yakov, I think it is possible to implement this 'locking for evictions' > approach, but personally I prefer partition reservation: > - the reservation approach is already implemented and works fine in SQL queries; > - a partition reservation is just a CAS operation; if we need to do ~10 reservations, > I think this will be negligible compared to the job execution time; > - caches are currently rebalanced completely independently, and changing this would be > a complicated refactoring; > - I see some difficulties in understanding whether caches have the same affinity. > If the user uses a custom function, should he implement 'equals'? For standard > affinity functions the user can set a backup filter; what to do in this case? Should the > user implement 'equals' for the filter? Even if the affinity functions are the same, > the cache configuration can have a node filter, so the affinity mapping will be > different. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
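The "partition reservation is just CAS operation" argument above can be illustrated with a minimal sketch. This is not Ignite's actual partition topology code; the class and method names are illustrative. It only shows why a reserve/release pair is cheap (one CAS each) while still blocking eviction of a partition that a job is using:

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Illustrative sketch of per-partition reservation: jobs pin a partition
 * with a CAS-incremented counter; eviction succeeds only when the counter
 * is zero, and a negative value marks the partition as gone for good.
 */
class PartitionReservation {
    private static final int EVICTED = -1;

    private final AtomicInteger reservations = new AtomicInteger();

    /** Try to pin the partition; returns false if it was already evicted. */
    boolean reserve() {
        for (;;) {
            int cnt = reservations.get();

            if (cnt == EVICTED)
                return false; // Partition gone: the job must fail over to the new owner.

            if (reservations.compareAndSet(cnt, cnt + 1))
                return true;
        }
    }

    /** Release one reservation taken by reserve(). */
    void release() {
        reservations.decrementAndGet();
    }

    /** Eviction succeeds only when nobody holds a reservation. */
    boolean tryEvict() {
        return reservations.compareAndSet(0, EVICTED);
    }
}
```

A job would call reserve() before running and release() in a finally block; rebalancing calls tryEvict() and retries later if it fails.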
[jira] [Updated] (IGNITE-3390) ODBC: Add system DSN support for Windows.
[ https://issues.apache.org/jira/browse/IGNITE-3390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Sapego updated IGNITE-3390: Attachment: dsn_configuration_window.png > ODBC: Add system DSN support for Windows. > - > > Key: IGNITE-3390 > URL: https://issues.apache.org/jira/browse/IGNITE-3390 > Project: Ignite > Issue Type: Task > Components: odbc >Affects Versions: 1.6 >Reporter: Igor Sapego >Assignee: Igor Sapego > Fix For: 1.7 > > Attachments: dsn_configuration_window.png > > > Need to add support for the DSN creation/modification in Windows. To do so we > will need to create some UI windows with matching fields. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-3390) ODBC: Add system DSN support for Windows.
[ https://issues.apache.org/jira/browse/IGNITE-3390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388176#comment-15388176 ] ASF GitHub Bot commented on IGNITE-3390: GitHub user isapego opened a pull request: https://github.com/apache/ignite/pull/881 IGNITE-3390: Added system DSN support for Windows. You can merge this pull request into a Git repository by running: $ git pull https://github.com/isapego/ignite ignite-3390 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/881.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #881 commit 26708651eda06851a5f4e76161216d1493576e4f Author: isapego Date: 2016-07-15T18:14:17Z IGNITE-3390: Window stub implemented. commit f7a0592611e3987e99eedc9516014a39a1e92db4 Author: isapego Date: 2016-07-19T15:24:36Z IGNITE-3390: Implemented Window class prototype. Needs refactoring. commit 115da0dda5903d35e0190c161fe4be3d9c691161 Author: isapego Date: 2016-07-19T16:37:29Z IGNITE-3390: Splitted Window in two classes. commit 0f193568167277817f295a3507c17292bd8a8d3a Author: isapego Date: 2016-07-19T18:09:35Z IGNITE-3390: Refactoring. commit 62a5ca7fb736cb74e83cd7f29c16f92394912dcc Author: isapego Date: 2016-07-20T10:21:15Z IGNITE-3390: Implmented first version of the configuration window. commit 6f504e959bdd55bd36bbaf161d4821899cff474f Author: isapego Date: 2016-07-20T14:51:36Z IGNITE-3390: Refactoring. commit b084124b25953fc2a2516ca301a9619f59d631bf Author: isapego Date: 2016-07-20T17:17:08Z IGNITE-3390: Implemented retrieval of the configuration parameters. commit c9b73555b55d1c7430f04730624383caf151ec16 Author: isapego Date: 2016-07-21T14:19:10Z IGNITE-3390: Implemented saving and loading of the DSN. commit 463c6099b766349c3abde80f6459eefde61e529f Author: isapego Date: 2016-07-21T15:18:50Z IGNITE-3390: Refactoring. 
commit 92b8c72dd57fd91a27054c9a5eb195ddcc92ab18 Author: isapego Date: 2016-07-21T15:26:38Z IGNITE-3390: Minor refactoring of error reporting. commit 01eac0e3f0eb973996da4c1a6ec16d969750fe83 Author: isapego Date: 2016-07-21T15:43:05Z IGNITE-3390: Implmented connection by the DSN. commit dee6363c6f4d950954fa589e8dc7d291abf461d5 Author: isapego Date: 2016-07-21T15:46:01Z IGNITE-3390: Improved logs. commit e045507619b0a5675eb38cfcd8d08467e2036c42 Author: isapego Date: 2016-07-21T15:58:32Z IGNITE-3390: Implemented SQLConnect. commit 9097ad84231d1567a6ee78b97aabc7edd70f7a55 Author: Igor Sapego Date: 2016-07-21T16:26:29Z IGNITE-3390: Added new files to Autotools build system. commit 2049a986f9dcecd84d279d66b37bdfd2f5e2be4c Author: isapego Date: 2016-07-21T16:34:00Z IGNITE-3390: Fixed dsn_config for non-windows platforms. commit 8e7f402a84415e551df90a7ac3266053a16bb084 Author: isapego Date: 2016-07-21T18:26:52Z IGNITE-3390: Some fine-tuning of the configuration window. commit 9233823e7268d777e1526052d59c71aeb96ce037 Author: isapego Date: 2016-07-21T18:30:40Z Merge remote-tracking branch 'upstream/master' into ignite-3390 > ODBC: Add system DSN support for Windows. > - > > Key: IGNITE-3390 > URL: https://issues.apache.org/jira/browse/IGNITE-3390 > Project: Ignite > Issue Type: Task > Components: odbc >Affects Versions: 1.6 >Reporter: Igor Sapego >Assignee: Igor Sapego > Fix For: 1.7 > > > Need to add support for the DSN creation/modification in Windows. To do so we > will need to create some UI windows with matching fields. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (IGNITE-3513) Cleanup worker is placed in the Thread's waiting queue using Thread.sleep method
[ https://issues.apache.org/jira/browse/IGNITE-3513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388166#comment-15388166 ] Andrey Gura edited comment on IGNITE-3513 at 7/21/16 6:30 PM: -- I see two possible solutions: * {{CleanupWorker}} can sleep for short periods of time and periodically call the {{expire()}} method. * The {{GridCacheTtlManager.addTrackedEntry()}} method can check a volatile field holding the nearest expiration time and notify {{CleanupWorker}} when an added entry's expiration time is less than the nearest one. This requires synchronization for {{wait/notify}} and can lead to contention when added entries have a decreasing sequence of expiration times. See commit https://github.com/apache/ignite/pull/880/commits/f3261b50f17848339b141a900f8c97a61bdbc7ca It requires performance testing. was (Author: agura): I see two possible solutions: * {{CleanupWorker}} can sleep for short periods of time and periodically call the {{expire()}} method. * The {{GridCacheTtlManager.addTrackedEntry()}} method can check a volatile field holding the nearest expiration time and notify {{CleanupWorker}} when an added entry's expiration time is less than the nearest one. This requires synchronization for {{wait/notify}} and can lead to contention when added entries have a decreasing sequence of expiration times. See commit https://github.com/apache/ignite/pull/880/commits/f3261b50f17848339b141a900f8c97a61bdbc7ca > Cleanup worker is placed in the Thread's waiting queue using Thread.sleep > method > > > Key: IGNITE-3513 > URL: https://issues.apache.org/jira/browse/IGNITE-3513 > Project: Ignite > Issue Type: Bug >Affects Versions: 1.6 >Reporter: Denis Magda >Assignee: Andrey Gura > Fix For: 1.7 > > > There is a bug in the current implementation of > {{GridCacheTtlManager#CleanupWorker}}. > Refer to the implementation's code snippet and the details below. 
> {code} > EntryWrapper first = pendingEntries.firstx(); > if (first != null) { > long waitTime = first.expireTime - U.currentTimeMillis(); > if (waitTime > 0) > U.sleep(waitTime); > } > {code} > 1. Put a first item with TTL = 1 hour. CleanupWorker will go to sleep for 1 > hour. > 2. Put a second item with TTL = 1 minute. Since the > CleanupWorker thread is now sleeping, the second item will not be expired on > time. > NOTE: This scenario is easy to reproduce if the first and second items are > put into the cache asynchronously. If you put them in the same thread one-by-one, > expiration may work fine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-3513) Cleanup worker is placed in the Thread's waiting queue using Thread.sleep method
[ https://issues.apache.org/jira/browse/IGNITE-3513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388166#comment-15388166 ] Andrey Gura commented on IGNITE-3513: - I see two possible solutions: * {{CleanupWorker}} can sleep for short periods of time and periodically call the {{expire()}} method. * The {{GridCacheTtlManager.addTrackedEntry()}} method can check a volatile field holding the nearest expiration time and notify {{CleanupWorker}} when an added entry's expiration time is less than the nearest one. This requires synchronization for {{wait/notify}} and can lead to contention when added entries have a decreasing sequence of expiration times. See commit https://github.com/apache/ignite/pull/880/commits/f3261b50f17848339b141a900f8c97a61bdbc7ca > Cleanup worker is placed in the Thread's waiting queue using Thread.sleep > method > > > Key: IGNITE-3513 > URL: https://issues.apache.org/jira/browse/IGNITE-3513 > Project: Ignite > Issue Type: Bug >Affects Versions: 1.6 >Reporter: Denis Magda >Assignee: Andrey Gura > Fix For: 1.7 > > > There is a bug in the current implementation of > {{GridCacheTtlManager#CleanupWorker}}. > Refer to the implementation's code snippet and the details below. > {code} > EntryWrapper first = pendingEntries.firstx(); > if (first != null) { > long waitTime = first.expireTime - U.currentTimeMillis(); > if (waitTime > 0) > U.sleep(waitTime); > } > {code} > 1. Put a first item with TTL = 1 hour. CleanupWorker will go to sleep for 1 > hour. > 2. Put a second item with TTL = 1 minute. Since the > CleanupWorker thread is now sleeping, the second item will not be expired on > time. > NOTE: This scenario is easy to reproduce if the first and second items are > put into the cache asynchronously. If you put them in the same thread one-by-one, > expiration may work fine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
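The second option in the comment above (a nearest-expiration field plus wait/notify) can be sketched as follows. This is a minimal illustration with hypothetical names, not the actual GridCacheTtlManager code from the linked commit; it only shows why a timed wait that can be interrupted by a notify fixes the 1-hour/1-minute scenario from the description:

```java
import java.util.concurrent.PriorityBlockingQueue;

/**
 * Sketch of a TTL cleanup worker that waits only until the nearest known
 * expiration, and is woken early when a sooner-expiring entry is added.
 */
class TtlCleanupSketch {
    private final Object mux = new Object();

    /** Expiration timestamps of tracked entries, soonest first. */
    private final PriorityBlockingQueue<Long> pendingExpireTimes = new PriorityBlockingQueue<>();

    /** Called on every put: wakes the worker if the new entry expires before the current head. */
    void addTrackedEntry(long expireTime) {
        Long head = pendingExpireTimes.peek();

        pendingExpireTimes.offer(expireTime);

        if (head == null || expireTime < head) {
            synchronized (mux) {
                mux.notifyAll(); // Worker re-evaluates its wait deadline.
            }
        }
    }

    /** Number of still-tracked entries (for inspection). */
    int trackedCount() {
        return pendingExpireTimes.size();
    }

    /** One worker iteration: expire everything due, then wait until the nearest deadline. */
    void workerIteration() {
        long now = System.currentTimeMillis();

        Long head;
        while ((head = pendingExpireTimes.peek()) != null && head <= now)
            pendingExpireTimes.poll(); // Entry is due: evict it from the cache here.

        synchronized (mux) {
            head = pendingExpireTimes.peek();

            if (head == null)
                return; // Nothing tracked; the real worker would block on mux.wait() here.

            long wait = head - System.currentTimeMillis();

            if (wait > 0) {
                try {
                    // Unlike Thread.sleep(), this ends early if addTrackedEntry() notifies.
                    mux.wait(wait);
                }
                catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }
}
```

With this shape, putting a 1-minute entry while the worker is waiting out a 1-hour deadline triggers notifyAll(), so the worker recomputes its deadline instead of oversleeping.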
[jira] [Comment Edited] (IGNITE-3516) H2 Indexing unregisterCache method does not cleanup resources and causes memory leak with OFFHEAP_TIERED cache mode cache.
[ https://issues.apache.org/jira/browse/IGNITE-3516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388115#comment-15388115 ] Krome Plasma edited comment on IGNITE-3516 at 7/21/16 5:51 PM: --- Just to point out, this is hard to write a test for. The best you can do is create a cache, fill it, drop it, and repeat. The offheap memory usage rises. I've been testing this patch for 2 months now on our machines and it seems effective. was (Author: kromulan): Just to point out, this is hard to write a test for. The best you can do is create a cache, fill it, drop it, and repeat. The memory usage rises. I've been testing this patch for 2 months now on our machines and it seems effective. > H2 Indexing unregisterCache method does not cleanup resources and causes > memory leak with OFFHEAP_TIERED cache mode cache. > -- > > Key: IGNITE-3516 > URL: https://issues.apache.org/jira/browse/IGNITE-3516 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 1.6 >Reporter: Krome Plasma >Assignee: Krome Plasma >Priority: Critical > Fix For: 1.7 > > > H2 Indexing unregisterCache method does not call TableDescriptor's > GridH2Table close and optionally GridLuceneIndex close, which causes > OFFHEAP_TIERED cache to leak memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-3516) H2 Indexing unregisterCache method does not cleanup resources and causes memory leak with OFFHEAP_TIERED cache mode cache.
[ https://issues.apache.org/jira/browse/IGNITE-3516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15388115#comment-15388115 ] Krome Plasma commented on IGNITE-3516: -- Just to point out, this is hard to write a test for. The best you can do is create a cache, fill it, drop it, and repeat. The memory usage rises. I've been testing this patch for 2 months now on our machines and it seems effective. > H2 Indexing unregisterCache method does not cleanup resources and causes > memory leak with OFFHEAP_TIERED cache mode cache. > -- > > Key: IGNITE-3516 > URL: https://issues.apache.org/jira/browse/IGNITE-3516 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 1.6 >Reporter: Krome Plasma >Assignee: Krome Plasma >Priority: Critical > Fix For: 1.7 > > > H2 Indexing unregisterCache method does not call TableDescriptor's > GridH2Table close and optionally GridLuceneIndex close, which causes > OFFHEAP_TIERED cache to leak memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-3512) .NET: IBinaryObject.ToBuilder loses type name
[ https://issues.apache.org/jira/browse/IGNITE-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387991#comment-15387991 ] ASF GitHub Bot commented on IGNITE-3512: GitHub user ptupitsyn opened a pull request: https://github.com/apache/ignite/pull/879 IGNITE-3512 .NET: IBinaryObject.ToBuilder loses type name You can merge this pull request into a Git repository by running: $ git pull https://github.com/ptupitsyn/ignite ignite-3512 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/879.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #879 commit 11c7e8fe53461f84924e5c29893865191f236c8c Author: Anton Vinogradov Date: 2016-07-20T08:31:14Z Documentation fix commit 9b55658749d0e2a869bbb3614034d8aa1f0e95c1 Author: vozerov-gridgain Date: 2016-07-20T11:14:50Z IGNITE-3405: IGFS: Restricted path modes interleaving, so that now only DUAL -> PRIMARY and DUAL -> PROXY paths are possible. 
commit 6c5218f4d67c8e247f59dbe8deb58b51db2954a2 Author: vozerov-gridgain Date: 2016-07-20T11:15:11Z Merge remote-tracking branch 'upstream/gridgain-7.6.2' into gridgain-7.6.2 commit c25cd4600bd7254d051048034ad4781deb833aae Author: Pavel Tupitsyn Date: 2016-07-21T14:11:12Z IGNITE-3512 .NET: IBinaryObject.ToBuilder loses type name commit e4a95a3927d8cac0dd2839d4f483f50b691015cc Author: Pavel Tupitsyn Date: 2016-07-21T14:15:01Z wip commit 0e8547b81033fc3fe053121460830fde22b88529 Author: Pavel Tupitsyn Date: 2016-07-21T14:35:48Z Fix metadata propagation commit 3a7f35ea56a320cccfd275cafdef557764c59d14 Author: Pavel Tupitsyn Date: 2016-07-21T14:36:25Z wip commit 77e706ce5e4541d1c4c555caca647574e650e607 Author: Pavel Tupitsyn Date: 2016-07-21T14:39:54Z wip commit 14d99f50e906490bc5342dd673c520fc9cb5b033 Author: Pavel Tupitsyn Date: 2016-07-21T15:23:47Z synopsis commit 8af51e7013123388b122c5618c71254ce63d740c Author: Pavel Tupitsyn Date: 2016-07-21T15:32:23Z Fix meta update commit b3152299deb10d3a4d89f8e288bafd115904aae5 Author: Pavel Tupitsyn Date: 2016-07-21T15:43:53Z wip > .NET: IBinaryObject.ToBuilder loses type name > - > > Key: IGNITE-3512 > URL: https://issues.apache.org/jira/browse/IGNITE-3512 > Project: Ignite > Issue Type: Bug > Components: platforms >Affects Versions: 1.5.0.final >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn > Fix For: 1.7 > > > Steps to reproduce: > * Create a builder for a string type name, set field, put to cache > * On another node, read this object, call ToBuilder, call Build > Type name is not propagated with metadata, which leads to assertion error > (with -ea) or "Two binary types have duplicate type ID [typeId=949122880, > typeName1=Security, typeName2=null]]" error. 
> Unit test: > {code} > [Test] > public void Test() > { > using (var grid1 = > Ignition.Start(TestUtils.GetTestConfiguration())) > using (var grid2 = Ignition.Start(new > IgniteConfiguration(TestUtils.GetTestConfiguration(false)) {GridName = > "grid2"})) > { > var cache1 = grid1.CreateCache<int, int>("cache").WithKeepBinary<int, IBinaryObject>(); > var obj1 = > grid1.GetBinary().GetBuilder("myType").SetField("myField", "val").Build(); > cache1[1] = obj1; > var cache2 = grid2.GetCache<int, int>("cache").WithKeepBinary<int, IBinaryObject>(); > var obj2 = cache2[1]; > var val = obj2.GetField<string>("myField"); > var obj2Ex = > grid2.GetBinary().GetBuilder(obj2).SetField("myField", val + > "_modified").Build(); > cache2[2] = obj2Ex; > } > } > {code} > The workaround is to register the type by name on start: > {code} > BinaryConfiguration = new BinaryConfiguration > { > Types = new[] {"myType"} > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3504) .NET: Improve IBinaryObjectBuilder test coverage
[ https://issues.apache.org/jira/browse/IGNITE-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-3504: --- Priority: Critical (was: Major) > .NET: Improve IBinaryObjectBuilder test coverage > > > Key: IGNITE-3504 > URL: https://issues.apache.org/jira/browse/IGNITE-3504 > Project: Ignite > Issue Type: Improvement > Components: platforms >Affects Versions: 1.6 >Reporter: Pavel Tupitsyn >Priority: Critical > Fix For: 1.7 > > > Most methods are not even used anywhere. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-3504) .NET: Improve IBinaryObjectBuilder test coverage
[ https://issues.apache.org/jira/browse/IGNITE-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387971#comment-15387971 ] Pavel Tupitsyn commented on IGNITE-3504: Apparently, there are bugs in SetShortField and other primitive setter methods. > .NET: Improve IBinaryObjectBuilder test coverage > > > Key: IGNITE-3504 > URL: https://issues.apache.org/jira/browse/IGNITE-3504 > Project: Ignite > Issue Type: Improvement > Components: platforms >Affects Versions: 1.6 >Reporter: Pavel Tupitsyn >Priority: Critical > Fix For: 1.7 > > > Most methods are not even used anywhere. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3523) IGFS: Remove "initialize default path modes" feature.
[ https://issues.apache.org/jira/browse/IGNITE-3523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3523: Summary: IGFS: Remove "initialize default path modes" feature. (was: Remove "initialize default path modes" feature.) > IGFS: Remove "initialize default path modes" feature. > - > > Key: IGNITE-3523 > URL: https://issues.apache.org/jira/browse/IGNITE-3523 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov > Fix For: 2.0 > > > Currently IGFS can create several paths by default, which are forced to > work in different modes. This is never required in practice, but has caused some > problems, e.g. performance degradation in our Hadoop FileSystem implementations. > Let's just remove that feature along with the relevant property in > {{FileSystemConfiguration}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (IGNITE-1777) IGFS: Write files with fail-safe logic: "lock" -> "reserve space" -> "write" -> "size update, unlock"
[ https://issues.apache.org/jira/browse/IGNITE-1777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov closed IGNITE-1777. --- > IGFS: Write files with fail-safe logic: "lock" -> "reserve space" -> "write" > -> "size update, unlock" > - > > Key: IGNITE-1777 > URL: https://issues.apache.org/jira/browse/IGNITE-1777 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Ivan Veselovsky > Fix For: 1.7 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (IGNITE-1777) IGFS: Write files with fail-safe logic: "lock" -> "reserve space" -> "write" -> "size update, unlock"
[ https://issues.apache.org/jira/browse/IGNITE-1777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-1777. - Resolution: Won't Fix > IGFS: Write files with fail-safe logic: "lock" -> "reserve space" -> "write" > -> "size update, unlock" > - > > Key: IGNITE-1777 > URL: https://issues.apache.org/jira/browse/IGNITE-1777 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Ivan Veselovsky > Fix For: 1.7 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-2568) IGFS cannot be used as Apache Drill data source.
[ https://issues.apache.org/jira/browse/IGNITE-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-2568: Fix Version/s: (was: 1.7) > IGFS cannot be used as Apache Drill data source. > > > Key: IGNITE-2568 > URL: https://issues.apache.org/jira/browse/IGNITE-2568 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Vladimir Ozerov >Priority: Minor > > The problem was reported on user list: > http://apache-ignite-users.70518.x6.nabble.com/Apache-Drill-querying-IGFS-accelerated-H-DFS-td2840.html > Even when IGFS is fully configured and is recognized correctly by installed > Hadoop, Apache Drill cannot use it as data source. The following stack trace > appears: > {code} > 2016-02-05 13:14:03,507 [294b5fe3-8f63-2134-67e0-42f7111ead44:foreman] ERROR > o.a.d.exec.util.ImpersonationUtil - Failed to create DrillFileSystem for > proxy user: No FileSystem for scheme: igfs > java.io.IOException: No FileSystem for scheme: igfs > at > org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644) > ~[hadoop-common-2.7.1.jar:na] > at > org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651) > ~[hadoop-common-2.7.1.jar:na] > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92) > ~[hadoop-common-2.7.1.jar:na] > at > org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687) > ~[hadoop-common-2.7.1.jar:na] > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669) > ~[hadoop-common-2.7.1.jar:na] > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371) > ~[hadoop-common-2.7.1.jar:na] > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170) > ~[hadoop-common-2.7.1.jar:na] > at > org.apache.drill.exec.store.dfs.DrillFileSystem.(DrillFileSystem.java:92) > ~[drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:213) > ~[drill-java-exec-1.4.0.jar:1.4.0] > at > 
org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:210) > ~[drill-java-exec-1.4.0.jar:1.4.0] > at java.security.AccessController.doPrivileged(Native Method) > ~[na:1.8.0_40-ea] > at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_40-ea] > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > ~[hadoop-common-2.7.1.jar:na] > at > org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:210) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:202) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible(WorkspaceSchemaFactory.java:150) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.(FileSystemSchemaFactory.java:78) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas(FileSystemSchemaFactory.java:65) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas(FileSystemPlugin.java:131) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.store.StoragePluginRegistry$DrillSchemaFactory.registerSchemas(StoragePluginRegistry.java:403) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:166) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:155) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:143) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema(QueryContext.java:129) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.planner.sql.DrillSqlWorker.(DrillSqlWorker.java:93) > [drill-java-exec-1.4.0.jar:1.4.0] > 
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:907) > [drill-java-exec-1.4.0.jar:1.4.0] > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:244) > [drill-java-exec-1.4.0.jar:1.4.0] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_40-ea] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_40-ea] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40-ea] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
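For context on the stack trace above: "No FileSystem for scheme: igfs" means the Hadoop client that Drill embeds has no implementation class registered for the igfs:// scheme. Assuming the ignite-hadoop JARs are on Drill's classpath, a typical registration in the core-site.xml visible to Drill would look like this (property names follow Hadoop's fs.&lt;scheme&gt;.impl convention; whether this alone is sufficient for Drill is exactly what the ticket is about):

```xml
<!-- core-site.xml visible to the Drill daemons -->
<configuration>
  <property>
    <name>fs.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
  </property>
</configuration>
```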
[jira] [Reopened] (IGNITE-1777) IGFS: Write files with fail-safe logic: "lock" -> "reserve space" -> "write" -> "size update, unlock"
[ https://issues.apache.org/jira/browse/IGNITE-1777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov reopened IGNITE-1777: - > IGFS: Write files with fail-safe logic: "lock" -> "reserve space" -> "write" > -> "size update, unlock" > - > > Key: IGNITE-1777 > URL: https://issues.apache.org/jira/browse/IGNITE-1777 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Ivan Veselovsky > Fix For: 1.7 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (IGNITE-1777) IGFS: Write files with fail-safe logic: "lock" -> "reserve space" -> "write" -> "size update, unlock"
[ https://issues.apache.org/jira/browse/IGNITE-1777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-1777. - Resolution: Fixed > IGFS: Write files with fail-safe logic: "lock" -> "reserve space" -> "write" > -> "size update, unlock" > - > > Key: IGNITE-1777 > URL: https://issues.apache.org/jira/browse/IGNITE-1777 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Ivan Veselovsky > Fix For: 1.7 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (IGNITE-1778) IGFS: Implement rollback procedure: cleanup the "reserved" data.
[ https://issues.apache.org/jira/browse/IGNITE-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov reopened IGNITE-1778: - > IGFS: Implement rollback procedure: cleanup the "reserved" data. > > > Key: IGNITE-1778 > URL: https://issues.apache.org/jira/browse/IGNITE-1778 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.6 >Reporter: Ivan Veselovsky > Fix For: 1.7 > > > The following procedure is applied if the file is locked: > 1) take Node id from the lock Id. > 2) see via discovery service if this node is alive. > 3) if yes, return (we cannot lock the file). > 4) if not: do a rollback: > - delete all the blocks in "reserved" range from the data cache. > - set reserved range to zero. > - remove the lock from the FileInfo. > The above procedure should be performed upon every attempt to take a lock, > and (may be) periodically while traversing the file system. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (IGNITE-1778) IGFS: Implement rollback procedure: cleanup the "reserved" data.
[ https://issues.apache.org/jira/browse/IGNITE-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-1778. - Resolution: Duplicate > IGFS: Implement rollback procedure: cleanup the "reserved" data. > > > Key: IGNITE-1778 > URL: https://issues.apache.org/jira/browse/IGNITE-1778 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.6 >Reporter: Ivan Veselovsky > Fix For: 1.7 > > > The following procedure is applied if the file is locked: > 1) take Node id from the lock Id. > 2) see via discovery service if this node is alive. > 3) if yes, return (we cannot lock the file). > 4) if not: do a rollback: > - delete all the blocks in "reserved" range from the data cache. > - set reserved range to zero. > - remove the lock from the FileInfo. > The above procedure should be performed upon every attempt to take a lock, > and (may be) periodically while traversing the file system. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
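The rollback procedure described in IGNITE-1778 above can be sketched as plain Java. FileInfo, the alive-node set, and the class itself are hypothetical stand-ins for IGFS internals (file metadata, the discovery service, and the data cache); deleting the reserved data blocks is reduced to zeroing a counter:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

/**
 * Sketch of the lock-takeover check: if a file is locked by a node that is
 * no longer alive, roll back its reserved data and clear the stale lock.
 */
class LockRollbackSketch {
    /** Hypothetical view of a file's metadata. */
    static class FileInfo {
        UUID lockNodeId;  // Node that took the lock (null if unlocked).
        long reservedLen; // Space reserved beyond the committed length.
    }

    /** Node ids currently visible via discovery. */
    private final Set<UUID> aliveNodes = new HashSet<>();

    LockRollbackSketch(Set<UUID> aliveNodes) {
        this.aliveNodes.addAll(aliveNodes);
    }

    /**
     * Attempt to take the lock. Returns false if a live node holds it;
     * otherwise rolls back any stale reservation and returns true.
     */
    boolean tryTakeLock(FileInfo info) {
        if (info.lockNodeId == null)
            return true; // Not locked: lock can be taken.

        if (aliveNodes.contains(info.lockNodeId))
            return false; // Owner is alive: we cannot lock the file.

        // Owner died: rollback per the procedure above.
        info.reservedLen = 0;   // Real code would also delete the reserved data blocks.
        info.lockNodeId = null; // Remove the stale lock from the FileInfo.

        return true;
    }
}
```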
[jira] [Closed] (IGNITE-364) Add rewind to mapper and reducers.
[ https://issues.apache.org/jira/browse/IGNITE-364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov closed IGNITE-364. -- > Add rewind to mapper and reducers. > -- > > Key: IGNITE-364 > URL: https://issues.apache.org/jira/browse/IGNITE-364 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov > > Migrated from GG JIRA (GG-8454). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (IGNITE-364) Add rewind to mapper and reducers.
[ https://issues.apache.org/jira/browse/IGNITE-364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-364. Resolution: Won't Fix No longer relevant. > Add rewind to mapper and reducers. > -- > > Key: IGNITE-364 > URL: https://issues.apache.org/jira/browse/IGNITE-364 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov > > Migrated from GG JIRA (GG-8454). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3536) IGFS: Implement async methods for all base file system operations.
Vladimir Ozerov created IGNITE-3536: --- Summary: IGFS: Implement async methods for all base file system operations. Key: IGNITE-3536 URL: https://issues.apache.org/jira/browse/IGNITE-3536 Project: Ignite Issue Type: Task Components: hadoop Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 1) Remove {{IgniteAsyncSupport}} interface 2) Implement async counterparts for all FS operations. Justification: some file system operations might be very time-consuming, so having async counterparts sounds like a good idea. The question is which thread pool will host these tasks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
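A minimal sketch of what an async counterpart might look like, assuming a dedicated executor. `AsyncFsSketch`, `deleteAsync`, and the pool choice are illustrative assumptions — the ticket explicitly leaves the thread-pool question open.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Sketch: deriving an async counterpart from a sync FS operation (hypothetical API). */
class AsyncFsSketch {
    // The open question in the ticket is which pool hosts these tasks;
    // a small dedicated pool is assumed here purely for illustration.
    static final ExecutorService IGFS_POOL = Executors.newFixedThreadPool(4);

    /** Stand-in for an existing synchronous FS operation. */
    static boolean delete(String path) {
        return path != null && !path.isEmpty();
    }

    /** Async counterpart: same semantics, work shifted onto the pool. */
    static CompletableFuture<Boolean> deleteAsync(String path) {
        return CompletableFuture.supplyAsync(() -> delete(path), IGFS_POOL);
    }
}
```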
[jira] [Created] (IGNITE-3535) Hadoop: make IgniteHadoopWeightedMapReducePlanner default one.
Vladimir Ozerov created IGNITE-3535: --- Summary: Hadoop: make IgniteHadoopWeightedMapReducePlanner default one. Key: IGNITE-3535 URL: https://issues.apache.org/jira/browse/IGNITE-3535 Project: Ignite Issue Type: Task Components: hadoop Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 Most probably we should even remove {{IgniteHadoopMapReducePlanner}} implementation as it is too inefficient. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3534) Hadoop: create factory for UserNameMapper.
Vladimir Ozerov created IGNITE-3534: --- Summary: Hadoop: create factory for UserNameMapper. Key: IGNITE-3534 URL: https://issues.apache.org/jira/browse/IGNITE-3534 Project: Ignite Issue Type: Task Components: hadoop Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 We need this to be consistent with other parts of the Ignite API. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3533) Hadoop: create factory for map-reduce planner.
Vladimir Ozerov created IGNITE-3533: --- Summary: Hadoop: create factory for map-reduce planner. Key: IGNITE-3533 URL: https://issues.apache.org/jira/browse/IGNITE-3533 Project: Ignite Issue Type: Task Components: hadoop Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 We need it to be consistent with other parts of the Ignite API. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3532) Hadoop: Move HadoopMapReducePlanner interface to public space.
Vladimir Ozerov created IGNITE-3532: --- Summary: Hadoop: Move HadoopMapReducePlanner interface to public space. Key: IGNITE-3532 URL: https://issues.apache.org/jira/browse/IGNITE-3532 Project: Ignite Issue Type: Task Components: hadoop Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 Currently {{HadoopMapReducePlanner}} is located inside a private package, but is exposed through {{HadoopConfiguration}}. We must move this interface to a public package. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3531) IGFS: Rename format() to clear().
Vladimir Ozerov created IGNITE-3531: --- Summary: IGFS: Rename format() to clear(). Key: IGNITE-3531 URL: https://issues.apache.org/jira/browse/IGNITE-3531 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 Currently the format() method is used to quickly clear all IGFS data without touching the secondary file system, making it equal to {{IgniteCache.clear()}} semantics. Let's rename format -> clear for consistency. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3530) IGFS: "setTimes" method is missing from IgfsSecondaryFileSystem interface.
Vladimir Ozerov created IGNITE-3530: --- Summary: IGFS: "setTimes" method is missing from IgfsSecondaryFileSystem interface. Key: IGNITE-3530 URL: https://issues.apache.org/jira/browse/IGNITE-3530 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 It is simply not implemented! Let's add it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3529) IGFS: Simplify meta and data cache configuration.
Vladimir Ozerov created IGNITE-3529: --- Summary: IGFS: Simplify meta and data cache configuration. Key: IGNITE-3529 URL: https://issues.apache.org/jira/browse/IGNITE-3529 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 Currently IGFS configuration is rather complex because the user has to manually do the following: 1) Configure the meta cache 2) Configure the data cache 3) Wire them up with IGFS through {{FileSystemConfiguration.*CacheName}} properties. Instead, I propose to do the following: 1) Add two properties directly to {{FileSystemConfiguration}}: - dataCacheConfiguration - metaCacheConfiguration 2) Names of these caches will be ignored and overwritten with cache names unique to the concrete IGFS, e.g. *_data and *_meta, where * is the IGFS name. 3) *THE MOST IMPORTANT THING* - provide sensible defaults. This way the user will normally not have to bother about cache config at all; they will only need to add an IGFS config bean and set its name. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
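The proposed configuration shape can be sketched as follows. All classes here are hypothetical simplifications of {{FileSystemConfiguration}} and {{CacheConfiguration}}; the `*_data`/`*_meta` naming follows the convention suggested in the ticket.

```java
/** Sketch of the proposed IGFS cache configuration shape (hypothetical classes). */
class CacheConfigSketch {
    static class CacheConfig {
        String name;
    }

    static class FileSystemConfig {
        String name;
        // The two new properties proposed in the ticket, with sensible defaults:
        CacheConfig dataCacheConfiguration = new CacheConfig();
        CacheConfig metaCacheConfiguration = new CacheConfig();
    }

    /** User-supplied cache names are ignored and overwritten with IGFS-unique ones. */
    static void wireUp(FileSystemConfig fs) {
        fs.dataCacheConfiguration.name = fs.name + "_data";
        fs.metaCacheConfiguration.name = fs.name + "_meta";
    }
}
```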
[jira] [Commented] (IGNITE-2968) Near cache support in transactions deadlock detection
[ https://issues.apache.org/jira/browse/IGNITE-2968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387673#comment-15387673 ] Andrey Gura commented on IGNITE-2968: - * The {{deadlockDetectionStarted}} field is needed to prevent future completion while deadlock detection is still in progress. The code was changed to remove this field and add {{TxDeadlockFuture}} to {{GridNearOptimisticTxPrepareFuture}}. It has the same effect. * Fixed. > Near cache support in transactions deadlock detection > - > > Key: IGNITE-2968 > URL: https://issues.apache.org/jira/browse/IGNITE-2968 > Project: Ignite > Issue Type: Improvement > Components: cache >Reporter: Andrey Gura >Assignee: Andrey Gura > Fix For: 1.7 > > > Deadlock detection doesn't support transactions on near cache. Need to implement > it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3528) IGFS: Review IgfsSecondaryFileSystem.open() method return type.
Vladimir Ozerov created IGNITE-3528: --- Summary: IGFS: Review IgfsSecondaryFileSystem.open() method return type. Key: IGNITE-3528 URL: https://issues.apache.org/jira/browse/IGNITE-3528 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 Currently we return the weird {{IgfsSecondaryFileSystemPositionedReadable}} interface. Its goal is clear - we need to have a seekable stream. Need to review it accurately and decide whether any changes or renamings are required here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3527) IGFS: Review "rename", "delete" and "mkdirs" return types in IgniteFileSystem and IgfsSecondaryFileSystem:
Vladimir Ozerov created IGNITE-3527: --- Summary: IGFS: Review "rename", "delete" and "mkdirs" return types in IgniteFileSystem and IgfsSecondaryFileSystem: Key: IGNITE-3527 URL: https://issues.apache.org/jira/browse/IGNITE-3527 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 Currently their semantics are not clear: - boolean delete() - void mkdirs() - void rename(); They all must have the same return type and semantics. The question is which semantics to choose: 1) Return boolean and (almost) never throw exceptions. This is the "JDK way". I personally do not like it because the user will have to both check for true/false and use try/catch. 2) Return void and throw an exception if something went wrong. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
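The two options under discussion can be contrasted in a small sketch; the methods are hypothetical, not the actual IGFS API.

```java
import java.io.IOException;

/** Sketch contrasting the two return-type options discussed in the ticket. */
class DeleteSemanticsSketch {
    // Option 1: the "JDK way" - return boolean, (almost) never throw.
    // The caller must check the flag AND still be prepared for exceptions.
    static boolean deleteJdkStyle(boolean pathExists) {
        return pathExists; // false simply means "nothing was deleted"
    }

    // Option 2: return void, throw if anything went wrong - a single failure channel.
    static void deleteThrowingStyle(boolean pathExists) throws IOException {
        if (!pathExists)
            throw new IOException("Path does not exist");
    }
}
```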
[jira] [Created] (IGNITE-3526) IGFS: Review "perNodeBatchSize" and "perNodeParallelBatchCount" properties.
Vladimir Ozerov created IGNITE-3526: --- Summary: IGFS: Review "perNodeBatchSize" and "perNodeParallelBatchCount" properties. Key: IGNITE-3526 URL: https://issues.apache.org/jira/browse/IGNITE-3526 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 These properties are extremely weird from a user perspective. We need to review how they are actually used inside the IGFS code and either remove them or refactor them into something more useful. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3525) IGFS: Move fragmentizer-related properties into a separate class.
Vladimir Ozerov created IGNITE-3525: --- Summary: IGFS: Move fragmentizer-related properties into a separate class. Key: IGNITE-3525 URL: https://issues.apache.org/jira/browse/IGNITE-3525 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 Let's move the following properties there: - fragmentizerConcurrentFiles; - fragmentizerThrottlingDelay; - fragmentizerThrottlingBlockLength. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3520) IGFS: Remove task execution methods.
Vladimir Ozerov created IGNITE-3520: --- Summary: IGFS: Remove task execution methods. Key: IGNITE-3520 URL: https://issues.apache.org/jira/browse/IGNITE-3520 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 We have several task execution methods on IGFS API. They were never used in practice because normally users will achieve the same things using Hadoop or Spark frameworks. I propose to remove them altogether. This way we will be able to focus solely on file system semantics. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3524) IGFS: Do not allow nulls for FileSystemConfiguration.name.
Vladimir Ozerov created IGNITE-3524: --- Summary: IGFS: Do not allow nulls for FileSystemConfiguration.name. Key: IGNITE-3524 URL: https://issues.apache.org/jira/browse/IGNITE-3524 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 As a part of our general approach, let's not allow nulls as IGFS names. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3523) Remove "initialize default path modes" feature.
Vladimir Ozerov created IGNITE-3523: --- Summary: Remove "initialize default path modes" feature. Key: IGNITE-3523 URL: https://issues.apache.org/jira/browse/IGNITE-3523 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 Currently IGFS can create several paths by default, which forcefully work in different modes. This is never required in practice, but has caused some problems, e.g. performance degradation in our Hadoop FileSystem implementations. Let's just remove that feature along with the relevant property in {{FileSystemConfiguration}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3522) IGFS: Rename "streamBufferSize" to "bufferSize" in FileSystemConfiguration.
Vladimir Ozerov created IGNITE-3522: --- Summary: IGFS: Rename "streamBufferSize" to "bufferSize" in FileSystemConfiguration. Key: IGNITE-3522 URL: https://issues.apache.org/jira/browse/IGNITE-3522 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 It will look more consistent with "blockSize" property. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (IGNITE-3521) IGFS: Remove "max space" notion.
Vladimir Ozerov created IGNITE-3521: --- Summary: IGFS: Remove "max space" notion. Key: IGNITE-3521 URL: https://issues.apache.org/jira/browse/IGNITE-3521 Project: Ignite Issue Type: Task Components: IGFS Affects Versions: 1.6 Reporter: Vladimir Ozerov Fix For: 2.0 We have a "max space" concept in IGFS which governs the maximum amount of local data available for IGFS. This concept looks a bit weird because we do not have the same thing in caches. Moreover, we have several conflicting configuration parameters: 1) {{IgfsPerBlockLruEvictionPolicy}} where we also can specify a maximum size. 2) {{CacheConfiguration.offheapMaxMemory}} which also governs evictions. It looks like we should simply remove the "max space" property from the IGFS configuration and not control it at all. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1915) .NET: Ignite as Entity Framework Second-Level Cache
[ https://issues.apache.org/jira/browse/IGNITE-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-1915: --- Summary: .NET: Ignite as Entity Framework Second-Level Cache (was: .Net: Ignite as Entity Framework Second-Level Cache) > .NET: Ignite as Entity Framework Second-Level Cache > --- > > Key: IGNITE-1915 > URL: https://issues.apache.org/jira/browse/IGNITE-1915 > Project: Ignite > Issue Type: Task > Components: platforms >Affects Versions: 1.1.4 >Reporter: Pavel Tupitsyn >Assignee: Vladimir Ozerov > Fix For: 1.7 > > > Entity Framework is the #1 ORM for .NET. > We should provide an easy solution to boost Entity Framework performance with > Ignite. > EF5 and EF6 have different 2nd-level cache mechanisms (EF5 has a built-in > one, EF6 requires more customization or a 3rd-party lib like > https://efcache.codeplex.com/). For now, let's do EF6 only. > This should be in a separate assembly and a separate NuGet package. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-3515) NullPointerException when stopping IgniteSemaphore and no method has been called previously to initialize semaphore with initializeSemaphore().
[ https://issues.apache.org/jira/browse/IGNITE-3515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387636#comment-15387636 ] Krome Plasma commented on IGNITE-3515: -- [~dmagda] Done, please recheck. > NullPointerException when stopping IgniteSemaphore and no method has been > called previously to initialize semaphore with initializeSemaphore(). > --- > > Key: IGNITE-3515 > URL: https://issues.apache.org/jira/browse/IGNITE-3515 > Project: Ignite > Issue Type: Bug > Components: data structures >Affects Versions: 1.6 >Reporter: Krome Plasma >Assignee: Krome Plasma > Fix For: 1.7 > > > The IgniteSemaphore stop() method does not check whether the internal synchronization > object 'sync' is null, hence a null pointer exception is thrown. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-466) Add path mode resolution to IgniteFileSystem API.
[ https://issues.apache.org/jira/browse/IGNITE-466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-466: --- Component/s: (was: hadoop) IGFS > Add path mode resolution to IgniteFileSystem API. > - > > Key: IGNITE-466 > URL: https://issues.apache.org/jira/browse/IGNITE-466 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov > Fix For: 1.7 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (IGNITE-466) Add path mode resolution to IgniteFileSystem API.
[ https://issues.apache.org/jira/browse/IGNITE-466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-466. Resolution: Fixed > Add path mode resolution to IgniteFileSystem API. > - > > Key: IGNITE-466 > URL: https://issues.apache.org/jira/browse/IGNITE-466 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov > Fix For: 1.7 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (IGNITE-466) Add path mode resolution to IgniteFileSystem API.
[ https://issues.apache.org/jira/browse/IGNITE-466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov closed IGNITE-466. -- > Add path mode resolution to IgniteFileSystem API. > - > > Key: IGNITE-466 > URL: https://issues.apache.org/jira/browse/IGNITE-466 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov > Fix For: 1.7 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-466) Add path mode resolution to IgniteFileSystem API.
[ https://issues.apache.org/jira/browse/IGNITE-466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-466: --- Fix Version/s: 1.7 > Add path mode resolution to IgniteFileSystem API. > - > > Key: IGNITE-466 > URL: https://issues.apache.org/jira/browse/IGNITE-466 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov > Fix For: 1.7 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-3505) BinaryObject keys can't be reused because of partition caching.
[ https://issues.apache.org/jira/browse/IGNITE-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387584#comment-15387584 ] Alexey Goncharuk commented on IGNITE-3505: -- Denis, Looks good to me. > BinaryObject keys can't be reused because of partition caching. > --- > > Key: IGNITE-3505 > URL: https://issues.apache.org/jira/browse/IGNITE-3505 > Project: Ignite > Issue Type: Bug >Affects Versions: 1.6 >Reporter: Alexei Scherbakov >Assignee: Denis Magda > Fix For: 1.7 > > Attachments: 3505.patch > > > A BinaryObject can't be reused as a key between caches because its > actual implementation BinaryObjectImpl implements KeyCacheObject and > caches the partition, which is not recalculated later. > See > org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.partition: > {code} > if (key instanceof KeyCacheObject && ((KeyCacheObject)key).partition() != -1) > return ((KeyCacheObject)key).partition(); > {code} > The issue can be reproduced with the following code: > {code} > public static void main(String[] args) throws IgniteException { > IgniteConfiguration cfg = new IgniteConfiguration(); > cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(new > TcpDiscoveryVmIpFinder(true))); > Ignite ignite = Ignition.start(cfg); > CacheConfiguration<BinaryObject, BinaryObject> cfg1 = new > CacheConfiguration<>("Cache 1"); > cfg1.setCacheMode(CacheMode.PARTITIONED); > cfg1.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); > IgniteCache<BinaryObject, BinaryObject> cache1 = > ignite.getOrCreateCache(cfg1).withKeepBinary(); > CacheConfiguration<BinaryObject, BinaryObject> cfg2 = new > CacheConfiguration<>("Cache 2"); > cfg2.setCacheMode(CacheMode.REPLICATED); > > cfg2.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC); > IgniteCache<BinaryObject, BinaryObject> cache2 = > ignite.getOrCreateCache(cfg2); > BinaryObjectBuilder keyBuilder = ignite.binary().builder("keyType") > .setField("F1", "V1").hashCode("V1".hashCode()); > BinaryObjectBuilder valBuilder = ignite.binary().builder("valueType") > .setField("F2", "V2") > .setField("F3", "V3"); > BinaryObject key = keyBuilder.build(); > BinaryObject val = valBuilder.build(); > cache1.put(key, val); > cache2.put(key, val); // error > System.out.println(cache1.get(key)); // error > System.out.println(cache2.get(key)); > } > {code} > Corresponding user list thread: > http://apache-ignite-users.70518.x6.nabble.com/Adding-a-binary-object-to-two-caches-fails-with-FULL-SYNC-write-mode-configured-for-the-replicated-ce-tp6343p6366.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-3217) Text input in number field
[ https://issues.apache.org/jira/browse/IGNITE-3217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387563#comment-15387563 ] Dmitriyff commented on IGNITE-3217: --- Added fixes for the following items 1. Dropdown with object selection should be disabled when empty. 5. Cluster -> Connector configuration: disabled checkboxes change state on click or do not have a disabled state. > Text input in number field > -- > > Key: IGNITE-3217 > URL: https://issues.apache.org/jira/browse/IGNITE-3217 > Project: Ignite > Issue Type: Sub-task > Components: wizards >Affects Versions: 1.7 >Reporter: Vasiliy Sisko >Assignee: Vasiliy Sisko > Fix For: 1.7 > > > Create a new cluster and try to input text into a number field (f.e. Cluster - > General - Port number): > # Field is empty (Chrome) or shows text with an error message (Firefox, or Chrome > for the 'e' symbol), > # field in model (backupItem) is not set, > # Buttons "Save", "Undo all" and "Undo" are available (for empty fields in > Chrome), > # On undo of a section "Save" and "Undo all" are still available. > # Undo does not revert text in number fields. > Also the following scenario should be tested: > # Remove all clusters. > # Create a new cluster. Save. > # Change some field to "-1" (invalid). DO NOT Save. > # Click "Add cluster" - a new cluster is created, but the invalid field with "-1" is > not cleared. > On the cluster page in failover configuration: > Configure a custom failover configuration and leave the SPI implementation field > empty. > That configuration is saved without any errors. > Also reproduced in cluster binary type configuration for the type configuration > table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-3515) NullPointerException when stopping IgniteSemaphore and no method has been called previously to initialize semaphore with initializeSemaphore().
[ https://issues.apache.org/jira/browse/IGNITE-3515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387559#comment-15387559 ] Denis Magda commented on IGNITE-3515: - However the exception is printed out to the logger output, correct? In such a case you can use {{GridStringLogger}} in the test, which will help to check that there is no {{NPE}} once your fix is applied. Look at {{GridStringLogger}} usages to see how it's being used in the tests. > NullPointerException when stopping IgniteSemaphore and no method has been > called previously to initialize semaphore with initializeSemaphore(). > --- > > Key: IGNITE-3515 > URL: https://issues.apache.org/jira/browse/IGNITE-3515 > Project: Ignite > Issue Type: Bug > Components: data structures >Affects Versions: 1.6 >Reporter: Krome Plasma >Assignee: Krome Plasma > Fix For: 1.7 > > > The IgniteSemaphore stop() method does not check whether the internal synchronization > object 'sync' is null, hence a null pointer exception is thrown. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-3515) NullPointerException when stopping IgniteSemaphore and no method has been called previously to initialize semaphore with initializeSemaphore().
[ https://issues.apache.org/jira/browse/IGNITE-3515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387556#comment-15387556 ] Krome Plasma commented on IGNITE-3515: -- [~dmagda] it's not possible for me to write a test for this, as Ignite wraps the exception and I cannot catch it in the test and fail the test. I was also discussing this with [~vladisav]. > NullPointerException when stopping IgniteSemaphore and no method has been > called previously to initialize semaphore with initializeSemaphore(). > --- > > Key: IGNITE-3515 > URL: https://issues.apache.org/jira/browse/IGNITE-3515 > Project: Ignite > Issue Type: Bug > Components: data structures >Affects Versions: 1.6 >Reporter: Krome Plasma >Assignee: Krome Plasma > Fix For: 1.7 > > > The IgniteSemaphore stop() method does not check whether the internal synchronization > object 'sync' is null, hence a null pointer exception is thrown. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (IGNITE-3512) .NET: IBinaryObject.ToBuilder loses type name
[ https://issues.apache.org/jira/browse/IGNITE-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387550#comment-15387550 ] Pavel Tupitsyn commented on IGNITE-3512: Another problem, apparently, is that the field type can also be lost when Build is called without modifying any fields, but only reading them. > .NET: IBinaryObject.ToBuilder loses type name > - > > Key: IGNITE-3512 > URL: https://issues.apache.org/jira/browse/IGNITE-3512 > Project: Ignite > Issue Type: Bug > Components: platforms >Affects Versions: 1.5.0.final >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn > Fix For: 1.7 > > > Steps to reproduce: > * Create a builder for a string type name, set a field, put to cache > * On another node, read this object, call ToBuilder, call Build > The type name is not propagated with metadata, which leads to an assertion error > (with -ea) or a "Two binary types have duplicate type ID [typeId=949122880, > typeName1=Security, typeName2=null]]" error. > Unit test: > {code} > [Test] > public void Test() > { > using (var grid1 = > Ignition.Start(TestUtils.GetTestConfiguration())) > using (var grid2 = Ignition.Start(new > IgniteConfiguration(TestUtils.GetTestConfiguration(false)) {GridName = > "grid2"})) > { > var cache1 = grid1.CreateCache<int, int>("cache").WithKeepBinary<int, IBinaryObject>(); > var obj1 = > grid1.GetBinary().GetBuilder("myType").SetField("myField", "val").Build(); > cache1[1] = obj1; > var cache2 = grid2.GetCache<int, int>("cache").WithKeepBinary<int, IBinaryObject>(); > var obj2 = cache2[1]; > var val = obj2.GetField<string>("myField"); > var obj2Ex = > grid2.GetBinary().GetBuilder(obj2).SetField("myField", val + > "_modified").Build(); > cache2[2] = obj2Ex; > } > } > {code} > Workaround is to register the type by name on start: > {code} > BinaryConfiguration = new BinaryConfiguration > { > Types = new[] {"myType"} > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (IGNITE-1510) IGFS: Weird format() and remove() semantics.
[ https://issues.apache.org/jira/browse/IGNITE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-1510. - Resolution: Won't Fix Will be completely reworked as a part of Ignite 2.0. > IGFS: Weird format() and remove() semantics. > > > Key: IGNITE-1510 > URL: https://issues.apache.org/jira/browse/IGNITE-1510 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: 1.1.4 >Reporter: Vladimir Ozerov >Priority: Critical > > Currently we have two methods to remove something from IGFS: > 1) remove - performs a soft delete for PRIMARY mode and a hard delete for others > 2) format - deletes all IGFS data without touching the secondary file system, > which can be either soft or hard depending on some very counter-intuitive > conditions. > I think we should do the following: > 1) the remove operation stays as is. > 2) the format method is deprecated and just falls back to a new method > "clear(ROOT)". > 3) the "clear" operation is semantically identical to cache clear: remove > in-memory data, do not touch the persistence layer. Essentially it just moves a > tree into the trash just like remove does. But this operation will also offer > sync and async modes. In sync mode the operation exits when all in-memory data is > really removed, even from trash. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1510) IGFS: Weird format() and remove() semantics.
[ https://issues.apache.org/jira/browse/IGNITE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-1510: Fix Version/s: (was: 1.8) 2.0 > IGFS: Weird format() and remove() semantics. > > > Key: IGNITE-1510 > URL: https://issues.apache.org/jira/browse/IGNITE-1510 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: 1.1.4 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Critical > > Currently we have two methods to remove something from IGFS: > 1) remove - performs a soft delete for PRIMARY mode and a hard delete for others > 2) format - deletes all IGFS data without touching the secondary file system, > which can be either soft or hard depending on some very counter-intuitive > conditions. > I think we should do the following: > 1) the remove operation stays as is. > 2) the format method is deprecated and just falls back to a new method > "clear(ROOT)". > 3) the "clear" operation is semantically identical to cache clear: remove > in-memory data, do not touch the persistence layer. Essentially it just moves a > tree into the trash just like remove does. But this operation will also offer > sync and async modes. In sync mode the operation exits when all in-memory data is > really removed, even from trash. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1510) IGFS: Weird format() and remove() semantics.
[ https://issues.apache.org/jira/browse/IGNITE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-1510: Fix Version/s: (was: 2.0) > IGFS: Weird format() and remove() semantics. > > > Key: IGNITE-1510 > URL: https://issues.apache.org/jira/browse/IGNITE-1510 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: 1.1.4 >Reporter: Vladimir Ozerov >Priority: Critical > > Currently we have two methods to remove something from IGFS: > 1) remove - performs a soft delete for PRIMARY mode and a hard delete for others > 2) format - deletes all IGFS data without touching the secondary file system, > which can be either soft or hard depending on some very counter-intuitive > conditions. > I think we should do the following: > 1) the remove operation stays as is. > 2) the format method is deprecated and just falls back to a new method > "clear(ROOT)". > 3) the "clear" operation is semantically identical to cache clear: remove > in-memory data, do not touch the persistence layer. Essentially it just moves a > tree into the trash just like remove does. But this operation will also offer > sync and async modes. In sync mode the operation exits when all in-memory data is > really removed, even from trash. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (IGNITE-1510) IGFS: Weird format() and remove() semantics.
[ https://issues.apache.org/jira/browse/IGNITE-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov closed IGNITE-1510. --- Assignee: Vladimir Ozerov > IGFS: Weird format() and remove() semantics. > > > Key: IGNITE-1510 > URL: https://issues.apache.org/jira/browse/IGNITE-1510 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: 1.1.4 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Critical > > Currently we have two methods to remove something from IGFS: > 1) remove - performs a soft delete for PRIMARY mode and a hard delete for others > 2) format - deletes all IGFS data without touching the secondary file system, > which can be either soft or hard depending on some very counter-intuitive > conditions. > I think we should do the following: > 1) the remove operation stays as is. > 2) the format method is deprecated and just falls back to a new method > "clear(ROOT)". > 3) the "clear" operation is semantically identical to cache clear: remove > in-memory data, do not touch the persistence layer. Essentially it just moves a > tree into the trash just like remove does. But this operation will also offer > sync and async modes. In sync mode the operation exits when all in-memory data is > really removed, even from trash. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (IGNITE-3063) IgfsClientCacheSelfTest.testFormat flakily fails
[ https://issues.apache.org/jira/browse/IGNITE-3063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-3063. - Resolution: Cannot Reproduce I have not seen any failures in this test recently. Will re-open if needed. > IgfsClientCacheSelfTest.testFormat flakily fails > - > > Key: IGNITE-3063 > URL: https://issues.apache.org/jira/browse/IGNITE-3063 > Project: Ignite > Issue Type: Bug > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Ivan Veselovsky >Assignee: Vladimir Ozerov > Fix For: 1.7 > > > IgfsClientCacheSelfTest.testFormat flakily fails on the "master" branch. > The main problems with the format() operation were fixed in IGNITE-586, but this > problem is different: the test fails in the very beginning, because the number of > entries in the data cache is greater than zero. That is, the problem is that the > cleanup performed after the previous test failed to clean up the data cache > completely. The cleanup mechanism and the method of asserting cache emptiness > should be re-implemented, because in the #clean() method we use the > getMetaCache(igfs).keySet() and getDataCache(igfs).size() methods, which > return only the number of *local* entries, while in the beginning of the > #testFormat() method we use dataCache.size(new CachePeekMode[] > {CachePeekMode.ALL}); , which returns all the entries, and this assertion > fails. > {code} > --- Stdout: --- > [13:50:10,335][INFO ][main][root] >>> Starting test: > IgfsClientCacheSelfTest#testFormat <<< > [13:50:10,338][INFO ][main][root] >>> Stopping test: > IgfsClientCacheSelfTest#testFormat in 3 ms <<< > --- Stderr: --- > [13:50:10,338][ERROR][main][root] Test failed. 
> java.lang.AssertionError: Initial data cache size = 2 > at > org.apache.ignite.internal.processors.igfs.IgfsAbstractSelfTest.testFormat(IgfsAbstractSelfTest.java:983) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at junit.framework.TestCase.runTest(TestCase.java:176) > at > org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:1759) > at > org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:118) > at > org.apache.ignite.testframework.junits.GridAbstractTest$4.run(GridAbstractTest.java:1697) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
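The local-vs-total size mismatch described in IGNITE-3063 above can be illustrated with a self-contained sketch. This is not the Ignite API; it is a plain-Java model with invented names, where a "cluster" is two node-local maps and keys are assigned by a simple modulo "affinity".

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model: localSize() on one node sees only that node's entries,
// while totalSize() (analogous to size(new CachePeekMode[] {CachePeekMode.ALL}))
// sums entries across all nodes. A cleanup check that consults only the local
// size can pass even though entries remain on another node -- the flaky-test
// pattern described in the ticket.
public class PartitionedSizeSketch {
    private final Map<Integer, String>[] nodes;

    @SuppressWarnings("unchecked")
    public PartitionedSizeSketch(int nodeCnt) {
        nodes = new Map[nodeCnt];
        for (int i = 0; i < nodeCnt; i++) nodes[i] = new HashMap<>();
    }

    // Keys are assigned to a node by simple modulo "affinity".
    public void put(int key, String val) {
        nodes[Math.abs(key) % nodes.length].put(key, val);
    }

    public int localSize(int nodeIdx) { return nodes[nodeIdx].size(); }

    public int totalSize() {
        int sum = 0;
        for (Map<Integer, String> node : nodes) sum += node.size();
        return sum;
    }
}
```

A cleanup routine asserting `localSize(0) == 0` would be satisfied here even though `totalSize()` is still non-zero, which is why the subsequent all-entries assertion fails.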
[jira] [Assigned] (IGNITE-3063) IgfsClientCacheSelfTest.testFormat flakily fails
[ https://issues.apache.org/jira/browse/IGNITE-3063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov reassigned IGNITE-3063: --- Assignee: Vladimir Ozerov (was: Ivan Veselovsky) > IgfsClientCacheSelfTest.testFormat flakily fails > - > > Key: IGNITE-3063 > URL: https://issues.apache.org/jira/browse/IGNITE-3063 > Project: Ignite > Issue Type: Bug > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Ivan Veselovsky >Assignee: Vladimir Ozerov > Fix For: 1.7 > > > IgfsClientCacheSelfTest.testFormat flakily fails on the "master" branch. > The main problems with the format() operation were fixed in IGNITE-586, but this > problem is different: the test fails at the very beginning because the number of > entries in the data cache is greater than zero. That is, the cleanup > performed after the previous test failed to empty the data cache > completely. The cleanup mechanism and the cache-emptiness assertion > should be re-implemented: in the #clean() method we use the > getMetaCache(igfs).keySet() and getDataCache(igfs).size() methods, which > return only the number of *local* entries, while at the beginning of the > #testFormat() method we use dataCache.size(new CachePeekMode[] > {CachePeekMode.ALL});, which returns all the entries, and this assertion > fails. > {code} > --- Stdout: --- > [13:50:10,335][INFO ][main][root] >>> Starting test: > IgfsClientCacheSelfTest#testFormat <<< > [13:50:10,338][INFO ][main][root] >>> Stopping test: > IgfsClientCacheSelfTest#testFormat in 3 ms <<< > --- Stderr: --- > [13:50:10,338][ERROR][main][root] Test failed.
> java.lang.AssertionError: Initial data cache size = 2 > at > org.apache.ignite.internal.processors.igfs.IgfsAbstractSelfTest.testFormat(IgfsAbstractSelfTest.java:983) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at junit.framework.TestCase.runTest(TestCase.java:176) > at > org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:1759) > at > org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:118) > at > org.apache.ignite.testframework.junits.GridAbstractTest$4.run(GridAbstractTest.java:1697) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3343) IGFS: Do not query secondary file system properties during create/append/mkdirs.
[ https://issues.apache.org/jira/browse/IGNITE-3343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3343: Assignee: (was: Vladimir Ozerov) > IGFS: Do not query secondary file system properties during > create/append/mkdirs. > > > Key: IGNITE-3343 > URL: https://issues.apache.org/jira/browse/IGNITE-3343 > Project: Ignite > Issue Type: Improvement > Components: IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov >Priority: Critical > Labels: performance > Fix For: 1.7 > > > Currently, when we create something in a secondary file system, we perform > additional calls to the secondary file system to get the file/directory info. > This significantly slows down structural operations and is usually not > really needed. > We should do the following: > 1) Do not write the modification time, access time and properties for DUAL > entries. Instead, we should propagate "info" and "listFiles" calls to the > secondary file system right away. > 2) For {{create()}} we do not need the length, as the file is either created from > scratch or truncated. > 3) For {{append()}} we need to know the current length, so a second file system > call appears to be inevitable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (IGNITE-3343) IGFS: Do not query secondary file system properties during create/append/mkdirs.
[ https://issues.apache.org/jira/browse/IGNITE-3343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov reassigned IGNITE-3343: --- Assignee: Vladimir Ozerov (was: Ivan Veselovsky) > IGFS: Do not query secondary file system properties during > create/append/mkdirs. > > > Key: IGNITE-3343 > URL: https://issues.apache.org/jira/browse/IGNITE-3343 > Project: Ignite > Issue Type: Improvement > Components: IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Critical > Labels: performance > Fix For: 1.7 > > > Currently, when we create something in a secondary file system, we perform > additional calls to the secondary file system to get the file/directory info. > This significantly slows down structural operations and is usually not > really needed. > We should do the following: > 1) Do not write the modification time, access time and properties for DUAL > entries. Instead, we should propagate "info" and "listFiles" calls to the > secondary file system right away. > 2) For {{create()}} we do not need the length, as the file is either created from > scratch or truncated. > 3) For {{append()}} we need to know the current length, so a second file system > call appears to be inevitable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-174) Need to investigate Parquet file corruption
[ https://issues.apache.org/jira/browse/IGNITE-174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-174: --- Assignee: (was: Ivan Veselovsky) > Need to investigate Parquet file corruption > > > Key: IGNITE-174 > URL: https://issues.apache.org/jira/browse/IGNITE-174 > Project: Ignite > Issue Type: Task > Components: hadoop >Reporter: Dmitriy Setrakyan > > https://github.com/gridgain/gridgain/issues/93 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3343) IGFS: Do not query secondary file system properties during create/append/mkdirs.
[ https://issues.apache.org/jira/browse/IGNITE-3343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3343: Labels: performance (was: ) > IGFS: Do not query secondary file system properties during > create/append/mkdirs. > > > Key: IGNITE-3343 > URL: https://issues.apache.org/jira/browse/IGNITE-3343 > Project: Ignite > Issue Type: Improvement > Components: IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov >Assignee: Ivan Veselovsky >Priority: Critical > Labels: performance > Fix For: 1.7 > > > Currently, when we create something in a secondary file system, we perform > additional calls to the secondary file system to get the file/directory info. > This significantly slows down structural operations and is usually not > really needed. > We should do the following: > 1) Do not write the modification time, access time and properties for DUAL > entries. Instead, we should propagate "info" and "listFiles" calls to the > secondary file system right away. > 2) For {{create()}} we do not need the length, as the file is either created from > scratch or truncated. > 3) For {{append()}} we need to know the current length, so a second file system > call appears to be inevitable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (IGNITE-354) IgfsDataManager.storeBlocksAsync() could cause system pool starvation.
[ https://issues.apache.org/jira/browse/IGNITE-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov closed IGNITE-354. -- > IgfsDataManager.storeBlocksAsync() could cause system pool starvation. > -- > > Key: IGNITE-354 > URL: https://issues.apache.org/jira/browse/IGNITE-354 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Critical > Fix For: 1.7 > > > This method is executed in the system thread pool and performs another blocking > call, "igfs.awaitDeletesAsync().get(trashPurgeTimeout)", which also needs a > system pool thread. > Migrated from GG JIRA (GG-8903). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (IGNITE-354) IgfsDataManager.storeBlocksAsync() could cause system pool starvation.
[ https://issues.apache.org/jira/browse/IGNITE-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov reassigned IGNITE-354: -- Assignee: Vladimir Ozerov (was: Ivan Veselovsky) > IgfsDataManager.storeBlocksAsync() could cause system pool starvation. > -- > > Key: IGNITE-354 > URL: https://issues.apache.org/jira/browse/IGNITE-354 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Critical > Fix For: 1.7 > > > This method is executed in the system thread pool and performs another blocking > call, "igfs.awaitDeletesAsync().get(trashPurgeTimeout)", which also needs a > system pool thread. > Migrated from GG JIRA (GG-8903). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
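The starvation pattern described in IGNITE-354 above can be reproduced with a plain java.util.concurrent sketch (illustrative only, no Ignite code): a task running in a bounded pool blocks on the result of another task submitted to the same pool.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// With a single worker thread the inner task can never start, because the
// only worker is busy waiting for it. We use a short get() timeout to detect
// the condition; in a real system with no timeout this is a deadlock.
public class PoolStarvationDemo {
    public static boolean innerTaskStarved() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<Boolean> outer = pool.submit(() -> {
                // The only worker thread is occupied here, yet we wait for a
                // task that needs that very worker in order to run.
                Future<Boolean> inner = pool.submit(() -> true);
                try {
                    inner.get(200, TimeUnit.MILLISECONDS);
                    return false; // inner ran: no starvation
                }
                catch (TimeoutException e) {
                    return true;  // inner never started: starvation
                }
            });
            return outer.get(5, TimeUnit.SECONDS);
        }
        finally {
            pool.shutdownNow();
        }
    }
}
```

The fix patterns are the usual ones: run the blocking continuation in a separate pool, or restructure the wait as an asynchronous callback so no worker thread blocks.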
[jira] [Resolved] (IGNITE-449) Ensure correct exception casts between IGFS and Hadoop FileSystem.
[ https://issues.apache.org/jira/browse/IGNITE-449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-449. Resolution: Won't Fix Assignee: Vladimir Ozerov Not relevant for now. > Ensure correct exception casts between IGFS and Hadoop FileSystem. > -- > > Key: IGNITE-449 > URL: https://issues.apache.org/jira/browse/IGNITE-449 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov > > See: > 1) IgniteHadoopIgfsSecondaryFileSystem; > 2) HadoopIgfsUtils; > 3) IgfsControlResponse. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-449) Ensure correct exception casts between IGFS and Hadoop FileSystem.
[ https://issues.apache.org/jira/browse/IGNITE-449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-449: --- Assignee: (was: Ivan Veselovsky) > Ensure correct exception casts between IGFS and Hadoop FileSystem. > -- > > Key: IGNITE-449 > URL: https://issues.apache.org/jira/browse/IGNITE-449 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov > > See: > 1) IgniteHadoopIgfsSecondaryFileSystem; > 2) HadoopIgfsUtils; > 3) IgfsControlResponse. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1926) Implement IgfsSecondaryFileSystem using java.io.File API
[ https://issues.apache.org/jira/browse/IGNITE-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-1926: Assignee: (was: Vladimir Ozerov) > Implement IgfsSecondaryFileSystem using java.io.File API > > > Key: IGNITE-1926 > URL: https://issues.apache.org/jira/browse/IGNITE-1926 > Project: Ignite > Issue Type: Improvement > Components: IGFS, newbie >Reporter: Valentin Kulichenko > Labels: newbie > > This will make it possible to persist IGFS data on the local disk. Currently we have > only a Hadoop-based implementation. > Corresponding user thread: > http://apache-ignite-users.70518.x6.nabble.com/IGFS-backed-by-persistence-on-physical-filesystem-td1882.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (IGNITE-449) Ensure correct exception casts between IGFS and Hadoop FileSystem.
[ https://issues.apache.org/jira/browse/IGNITE-449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov closed IGNITE-449. -- > Ensure correct exception casts between IGFS and Hadoop FileSystem. > -- > > Key: IGNITE-449 > URL: https://issues.apache.org/jira/browse/IGNITE-449 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov > > See: > 1) IgniteHadoopIgfsSecondaryFileSystem; > 2) HadoopIgfsUtils; > 3) IgfsControlResponse. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-385) Run Ignite with default Hadoop benchmarks.
[ https://issues.apache.org/jira/browse/IGNITE-385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-385: --- Assignee: (was: Ivan Veselovsky) > Run Ignite with default Hadoop benchmarks. > -- > > Key: IGNITE-385 > URL: https://issues.apache.org/jira/browse/IGNITE-385 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: sprint-1 >Reporter: Vladimir Ozerov >Priority: Critical > > They should be run before a release, after all public API changes are in place. > We must also pay attention to shmem mode, as there are reports that there > were problems with it under load. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (IGNITE-1926) Implement IgfsSecondaryFileSystem using java.io.File API
[ https://issues.apache.org/jira/browse/IGNITE-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov reassigned IGNITE-1926: --- Assignee: Vladimir Ozerov > Implement IgfsSecondaryFileSystem using java.io.File API > > > Key: IGNITE-1926 > URL: https://issues.apache.org/jira/browse/IGNITE-1926 > Project: Ignite > Issue Type: Improvement > Components: IGFS, newbie >Reporter: Valentin Kulichenko >Assignee: Vladimir Ozerov > Labels: newbie > > This will make it possible to persist IGFS data on the local disk. Currently we have > only a Hadoop-based implementation. > Corresponding user thread: > http://apache-ignite-users.70518.x6.nabble.com/IGFS-backed-by-persistence-on-physical-filesystem-td1882.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1926) Implement IgfsSecondaryFileSystem using java.io.File API
[ https://issues.apache.org/jira/browse/IGNITE-1926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-1926: Assignee: (was: Ivan Veselovsky) > Implement IgfsSecondaryFileSystem using java.io.File API > > > Key: IGNITE-1926 > URL: https://issues.apache.org/jira/browse/IGNITE-1926 > Project: Ignite > Issue Type: Improvement > Components: IGFS, newbie >Reporter: Valentin Kulichenko > Labels: newbie > > This will make it possible to persist IGFS data on the local disk. Currently we have > only a Hadoop-based implementation. > Corresponding user thread: > http://apache-ignite-users.70518.x6.nabble.com/IGFS-backed-by-persistence-on-physical-filesystem-td1882.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1777) IGFS: Write files with fail-safe logic: "lock" -> "reserve space" -> "write" -> "size update, unlock"
[ https://issues.apache.org/jira/browse/IGNITE-1777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-1777: Assignee: (was: Ivan Veselovsky) > IGFS: Write files with fail-safe logic: "lock" -> "reserve space" -> "write" > -> "size update, unlock" > - > > Key: IGNITE-1777 > URL: https://issues.apache.org/jira/browse/IGNITE-1777 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Ivan Veselovsky > Fix For: 1.7 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3400) IGFS: Does not correctly deallocate free space in corner case.
[ https://issues.apache.org/jira/browse/IGNITE-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3400: Assignee: (was: Ivan Veselovsky) > IGFS: Does not correctly deallocate free space in corner case. > --- > > Key: IGNITE-3400 > URL: https://issues.apache.org/jira/browse/IGNITE-3400 > Project: Ignite > Issue Type: Bug > Components: IGFS >Affects Versions: 1.7 >Reporter: Vasiliy Sisko > Fix For: 1.7 > > > Steps to reproduce: > 1) Run a node with a configured IGFS limited by max space (e.g. 104857600). > 2) Copy to IGFS a file with a size less than the IGFS space size (e.g. 10-15 Mb). > 3) Copy to IGFS a file with a size more than the free space size. > The second file has 0 size in IGFS. The used IGFS size is approximately equal to the > IGFS space size. > 4) Copy to IGFS a file with a size less than the expected IGFS free space size (e.g. > 10-15 Mb). > The new file has 0 size. > 5) Remove all files or format IGFS. > The IGFS free space is approximately equal to the size of the first file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1778) IGFS: Implement rollback procedure: cleanup the "reserved" data.
[ https://issues.apache.org/jira/browse/IGNITE-1778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-1778: Assignee: (was: Ivan Veselovsky) > IGFS: Implement rollback procedure: cleanup the "reserved" data. > > > Key: IGNITE-1778 > URL: https://issues.apache.org/jira/browse/IGNITE-1778 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.6 >Reporter: Ivan Veselovsky > Fix For: 1.7 > > > The following procedure is applied if the file is locked: > 1) Take the node id from the lock id. > 2) Check via the discovery service whether this node is alive. > 3) If yes, return (we cannot lock the file). > 4) If not, do a rollback: > - delete all the blocks in the "reserved" range from the data cache; > - set the reserved range to zero; > - remove the lock from the FileInfo. > The above procedure should be performed upon every attempt to take a lock, > and (maybe) periodically while traversing the file system. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
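The rollback steps listed in IGNITE-1778 can be sketched as follows. This is a hypothetical model, not the real IGFS classes: the names (LockRollbackSketch, FileInfo, reservedBlocks) are invented, node liveness is a simple set, and blocks are assumed to be numbered contiguously with the reserved range at the tail.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Before taking a lock we check whether the previous holder is still alive;
// if it is not, we delete the blocks in the "reserved" (uncommitted) range,
// zero the range and release the stale lock -- then take the lock ourselves.
public class LockRollbackSketch {
    public static class FileInfo {
        public UUID lockOwner;              // null when unlocked
        public int reservedBlocks;          // reserved but not yet committed
        public final Map<Integer, byte[]> blocks = new HashMap<>();
    }

    private final Set<UUID> aliveNodes = new HashSet<>();

    public void nodeJoined(UUID id) { aliveNodes.add(id); }
    public void nodeLeft(UUID id) { aliveNodes.remove(id); }

    // Returns true if the lock was acquired by 'me'.
    public boolean tryLock(FileInfo info, UUID me) {
        if (info.lockOwner != null) {
            if (aliveNodes.contains(info.lockOwner))
                return false; // holder is alive: cannot lock the file
            rollback(info);   // holder is dead: clean up its reservation
        }
        info.lockOwner = me;
        return true;
    }

    private void rollback(FileInfo info) {
        // Delete all blocks in the reserved range (assumed to be the tail).
        int committed = info.blocks.size() - info.reservedBlocks;
        for (int i = committed; i < committed + info.reservedBlocks; i++)
            info.blocks.remove(i);
        info.reservedBlocks = 0; // set the reserved range to zero
        info.lockOwner = null;   // remove the stale lock
    }
}
```

The ticket's closing remark maps to calling tryLock() on every lock attempt and, optionally, running the same rollback periodically while traversing the file system.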
[jira] [Assigned] (IGNITE-3399) Support primitive type names in QueryEntity
[ https://issues.apache.org/jira/browse/IGNITE-3399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jens Hoffmann reassigned IGNITE-3399: - Assignee: Jens Hoffmann > Support primitive type names in QueryEntity > --- > > Key: IGNITE-3399 > URL: https://issues.apache.org/jira/browse/IGNITE-3399 > Project: Ignite > Issue Type: Improvement > Components: cache >Reporter: Alexey Goncharuk >Assignee: Jens Hoffmann > Labels: newbie > Fix For: 1.7 > > > When BinaryMarshaller is enabled (default), it is impossible to use primitive > type names (such as int, short,...) as field type in QueryEntity. > I think we should support such aliases because it will improve usability for > .NET and C++ users, who will not need to deal with java types when > configuring SQL. > A test reproducing the issue is committed to master: > org.apache.ignite.internal.processors.cache.IgniteCachePrimitiveFieldsQuerySelfTest -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (IGNITE-1743) IGFS: Use async cache put instead of block/ack messages on data write
[ https://issues.apache.org/jira/browse/IGNITE-1743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov closed IGNITE-1743. --- > IGFS: Use async cache put instead of block/ack messages on data write > - > > Key: IGNITE-1743 > URL: https://issues.apache.org/jira/browse/IGNITE-1743 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.6 >Reporter: Ivan Veselovsky >Assignee: Vladimir Ozerov > Fix For: 1.7 > > Attachments: IGNITE_1743__Finalization__.patch > > > Item "1)" from IGNITE-1697 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (IGNITE-1743) IGFS: Use async cache put instead of block/ack messages on data write
[ https://issues.apache.org/jira/browse/IGNITE-1743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-1743. - Resolution: Won't Fix Will not fix it for now as adding 2PC semantics might seriously slow us down. > IGFS: Use async cache put instead of block/ack messages on data write > - > > Key: IGNITE-1743 > URL: https://issues.apache.org/jira/browse/IGNITE-1743 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.6 >Reporter: Ivan Veselovsky >Assignee: Vladimir Ozerov > Fix For: 1.7 > > Attachments: IGNITE_1743__Finalization__.patch > > > Item "1)" from IGNITE-1697 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1925) Test HadoopSkipListSelfTest.testLevel flakily fails
[ https://issues.apache.org/jira/browse/IGNITE-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-1925: Fix Version/s: (was: 1.7) > Test HadoopSkipListSelfTest.testLevel flakily fails > --- > > Key: IGNITE-1925 > URL: https://issues.apache.org/jira/browse/IGNITE-1925 > Project: Ignite > Issue Type: Bug > Components: hadoop >Affects Versions: 1.6 >Reporter: Ivan Veselovsky >Priority: Minor > > Test HadoopSkipListSelfTest.testLevel fails from time to time with ~ 3% > probability. > > junit.framework.AssertionFailedError: null > at junit.framework.Assert.fail(Assert.java:55) > at junit.framework.Assert.assertTrue(Assert.java:22) > at junit.framework.Assert.assertTrue(Assert.java:31) > at junit.framework.TestCase.assertTrue(TestCase.java:201) > at > org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipListSelfTest.testLevel(HadoopSkipListSelfTest.java:83) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1925) Test HadoopSkipListSelfTest.testLevel flakily fails
[ https://issues.apache.org/jira/browse/IGNITE-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-1925: Assignee: (was: Ivan Veselovsky) > Test HadoopSkipListSelfTest.testLevel flakily fails > --- > > Key: IGNITE-1925 > URL: https://issues.apache.org/jira/browse/IGNITE-1925 > Project: Ignite > Issue Type: Bug > Components: hadoop >Affects Versions: 1.6 >Reporter: Ivan Veselovsky >Priority: Minor > > Test HadoopSkipListSelfTest.testLevel fails from time to time with ~ 3% > probability. > > junit.framework.AssertionFailedError: null > at junit.framework.Assert.fail(Assert.java:55) > at junit.framework.Assert.assertTrue(Assert.java:22) > at junit.framework.Assert.assertTrue(Assert.java:31) > at junit.framework.TestCase.assertTrue(TestCase.java:201) > at > org.apache.ignite.internal.processors.hadoop.shuffle.collections.HadoopSkipListSelfTest.testLevel(HadoopSkipListSelfTest.java:83) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3377) IGFS: Refactor IgfsMetaManager to use the same code paths for both DUAL and PRIMARY modes.
[ https://issues.apache.org/jira/browse/IGNITE-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3377: Fix Version/s: (was: 1.7) > IGFS: Refactor IgfsMetaManager to use the same code paths for both DUAL and > PRIMARY modes. > -- > > Key: IGNITE-3377 > URL: https://issues.apache.org/jira/browse/IGNITE-3377 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov >Priority: Critical > > We already did that for create() and delete() operations. Let's continue and > do that for the rest: > - append > - mkdirs > - open > - rename > - update -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3379) IGFS: Merge MKDIRS handling for PRIMARY and DUAL modes.
[ https://issues.apache.org/jira/browse/IGNITE-3379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3379: Fix Version/s: (was: 1.7) > IGFS: Merge MKDIRS handling for PRIMARY and DUAL modes. > --- > > Key: IGNITE-3379 > URL: https://issues.apache.org/jira/browse/IGNITE-3379 > Project: Ignite > Issue Type: Sub-task > Components: IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3381) IGFS: Merge RENAME handling for PRIMARY and DUAL modes.
[ https://issues.apache.org/jira/browse/IGNITE-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3381: Fix Version/s: (was: 1.7) > IGFS: Merge RENAME handling for PRIMARY and DUAL modes. > --- > > Key: IGNITE-3381 > URL: https://issues.apache.org/jira/browse/IGNITE-3381 > Project: Ignite > Issue Type: Sub-task > Components: IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3376) IGFS: Allow direct PROXY mode invocations.
[ https://issues.apache.org/jira/browse/IGNITE-3376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3376: Fix Version/s: (was: 1.7) > IGFS: Allow direct PROXY mode invocations. > -- > > Key: IGNITE-3376 > URL: https://issues.apache.org/jira/browse/IGNITE-3376 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov >Priority: Critical > > Currently we do not have special handling for PROXY mode. So we will either > hit an AssertionError during development, or go down an incorrect code path in > production systems. > We need to fix that - PROXY mode should be handled correctly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3378) IGFS: Merge APPEND handling for PRIMARY and DUAL modes.
[ https://issues.apache.org/jira/browse/IGNITE-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3378: Fix Version/s: (was: 1.7) > IGFS: Merge APPEND handling for PRIMARY and DUAL modes. > --- > > Key: IGNITE-3378 > URL: https://issues.apache.org/jira/browse/IGNITE-3378 > Project: Ignite > Issue Type: Sub-task > Components: IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3380) IGFS: Merge OPEN handling for PRIMARY and DUAL modes.
[ https://issues.apache.org/jira/browse/IGNITE-3380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3380: Fix Version/s: (was: 1.7) > IGFS: Merge OPEN handling for PRIMARY and DUAL modes. > - > > Key: IGNITE-3380 > URL: https://issues.apache.org/jira/browse/IGNITE-3380 > Project: Ignite > Issue Type: Sub-task > Components: IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-1181) Comparatively benchmark ordinary Hive vs. Hive over Ignited Hadoop.
[ https://issues.apache.org/jira/browse/IGNITE-1181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-1181: Fix Version/s: (was: 1.7) > Comparatively benchmark ordinary Hive vs. Hive over Ignited Hadoop. > --- > > Key: IGNITE-1181 > URL: https://issues.apache.org/jira/browse/IGNITE-1181 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: sprint-8 >Reporter: Ivan Veselovsky > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (IGNITE-764) Investigate why reduce phase may be entered while total reduce count is zero.
[ https://issues.apache.org/jira/browse/IGNITE-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov resolved IGNITE-764. Resolution: Won't Fix Assignee: Vladimir Ozerov (was: Ivan Veselovsky) There are no assertions in the code at the moment. Hence, closing. > Investigate why reduce phase may be entered while total reduce count is zero. > - > > Key: IGNITE-764 > URL: https://issues.apache.org/jira/browse/IGNITE-764 > Project: Ignite > Issue Type: Bug > Components: hadoop >Affects Versions: 1.1.4 >Reporter: Ivan Veselovsky >Assignee: Vladimir Ozerov >Priority: Minor > Fix For: 1.7 > > > An assertion failure happens sometimes when running Hadoop examples in > org.apache.ignite.internal.processors.hadoop.HadoopUtils#status, line 129. > Currently this is shallowly fixed, but the reason for the assertion failure needs to > be investigated. > See the comment on that line in the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (IGNITE-764) Investigate why reduce phase may be entered while total reduce count is zero.
[ https://issues.apache.org/jira/browse/IGNITE-764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov closed IGNITE-764. -- > Investigate why reduce phase may be entered while total reduce count is zero. > - > > Key: IGNITE-764 > URL: https://issues.apache.org/jira/browse/IGNITE-764 > Project: Ignite > Issue Type: Bug > Components: hadoop >Affects Versions: 1.1.4 >Reporter: Ivan Veselovsky >Assignee: Vladimir Ozerov >Priority: Minor > Fix For: 1.7 > > > An assertion failure happens sometimes when running Hadoop examples in > org.apache.ignite.internal.processors.hadoop.HadoopUtils#status, line 129. > Currently this is shallowly fixed, but the reason for the assertion failure needs to > be investigated. > See the comment on that line in the code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3203) Hadoop: permgen leak due to new statistics thread in Hadoop.
[ https://issues.apache.org/jira/browse/IGNITE-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3203: Assignee: (was: Ivan Veselovsky) > Hadoop: permgen leak due to new statistics thread in Hadoop. > > > Key: IGNITE-3203 > URL: https://issues.apache.org/jira/browse/IGNITE-3203 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: 1.5.0.final, 1.6 >Reporter: Vladimir Ozerov > > *Problem* > The HADOOP-12829 ticket was implemented recently. It introduces a special worker > thread which monitors file system statistics. When several > {{HadoopClassLoader}}s are created, one thread is started for each of them. As > this thread is never stopped, the classloaders are never unloaded -> leak. > *Solution* > We need to investigate how to hook into this process and prevent these > threads from starting. Maybe we will have to implement our own threads serving > the same purpose. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3167) Hadoop: restore external execution.
[ https://issues.apache.org/jira/browse/IGNITE-3167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3167: Fix Version/s: (was: 1.7) > Hadoop: restore external execution. > --- > > Key: IGNITE-3167 > URL: https://issues.apache.org/jira/browse/IGNITE-3167 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: 1.5.0.final >Reporter: Vladimir Ozerov >Priority: Critical > > Some time ago we decided to get rid of the external execution mode. It appears to > have been a wrong decision. > Hadoop users rely on its process-per-job nature in lots of places. One such > case can be observed in the HiBench Bayes benchmark: > 1) The job creates something in the local file system through the Hadoop FileSystem > API. > 2) Then it tries to read this data using a regular java.io.FileReader and > relative paths. > This does not work in embedded mode because our LocalFileSystem wrapper > assigns different work dirs to jobs, while the process-wide working directory is > always the same. As a result, the aforementioned benchmark does not work in > Ignite, but works with the standard Hadoop job tracker. > It seems that we must bring external execution back. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3085) Hadoop module cannot load native libraries when running inside HDP 2.3.4
[ https://issues.apache.org/jira/browse/IGNITE-3085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3085: Fix Version/s: (was: 1.7) > Hadoop module cannot load native libraries when running inside HDP 2.3.4 > > > Key: IGNITE-3085 > URL: https://issues.apache.org/jira/browse/IGNITE-3085 > Project: Ignite > Issue Type: Bug > Components: hadoop >Affects Versions: 1.5.0.final >Reporter: Vladimir Ozerov >Priority: Critical > Labels: bigdata, important > > 1) Run some load with Hadoop Accelerator in HDP 2.2 - all is fine. > 2) Run the same load with HDP 2.3.4, an exception is thrown: > {code} > java.lang.NoClassDefFoundError: org/apache/hadoop/util/NativeCodeLoader > at > org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.initializeNativeLibraries(HadoopClassLoader.java:145) > at > org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.<init>(HadoopClassLoader.java:127) > at > org.apache.ignite.internal.processors.hadoop.jobtracker.HadoopJobTracker.start(HadoopJobTracker.java:160) > at > org.apache.ignite.internal.processors.hadoop.HadoopProcessor.start(HadoopProcessor.java:103) > at > org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1486) > at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:859) > at > org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1689) > at > org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1548) > at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1004) > at > org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:930) > at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:816) > at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:715) > at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:585) > at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:555) > at org.apache.ignite.Ignition.start(Ignition.java:347) > at 
> org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302) > Caused by: java.lang.ClassNotFoundException: > org.apache.hadoop.util.NativeCodeLoader > at java.net.URLClassLoader.findClass(URLClassLoader.java:381) > at java.lang.ClassLoader.loadClass(ClassLoader.java:424) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) > at java.lang.ClassLoader.loadClass(ClassLoader.java:357) > ... 16 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3203) Hadoop: permgen leak due to new statistics thread in Hadoop.
[ https://issues.apache.org/jira/browse/IGNITE-3203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3203: Fix Version/s: (was: 1.7) > Hadoop: permgen leak due to new statistics thread in Hadoop. > > > Key: IGNITE-3203 > URL: https://issues.apache.org/jira/browse/IGNITE-3203 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: 1.5.0.final, 1.6 >Reporter: Vladimir Ozerov > > *Problem* > The HADOOP-12829 ticket was implemented recently. It introduces a special worker > thread which monitors file system statistics. When several > {{HadoopClassLoader}}s are created, one thread starts for each of them. As > these threads are never stopped, the classloaders are never unloaded -> leak. > *Solution* > We need to investigate how to hack into this process and prevent the start of > these threads. Maybe we will have to implement our own threads serving the > same purpose. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-2568) IGFS cannot be used as Apache Drill data source.
[ https://issues.apache.org/jira/browse/IGNITE-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-2568: Assignee: (was: Vladimir Ozerov) > IGFS cannot be used as Apache Drill data source. > > > Key: IGNITE-2568 > URL: https://issues.apache.org/jira/browse/IGNITE-2568 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.5.0.final >Reporter: Vladimir Ozerov >Priority: Minor > Fix For: 1.7 > > > The problem was reported on user list: > http://apache-ignite-users.70518.x6.nabble.com/Apache-Drill-querying-IGFS-accelerated-H-DFS-td2840.html > Even when IGFS is fully configured and is recognized correctly by installed > Hadoop, Apache Drill cannot use it as data source. The following stack trace > appears: > {code} > 2016-02-05 13:14:03,507 [294b5fe3-8f63-2134-67e0-42f7111ead44:foreman] ERROR > o.a.d.exec.util.ImpersonationUtil - Failed to create DrillFileSystem for > proxy user: No FileSystem for scheme: igfs > java.io.IOException: No FileSystem for scheme: igfs > at > org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644) > ~[hadoop-common-2.7.1.jar:na] > at > org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651) > ~[hadoop-common-2.7.1.jar:na] > at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92) > ~[hadoop-common-2.7.1.jar:na] > at > org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687) > ~[hadoop-common-2.7.1.jar:na] > at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669) > ~[hadoop-common-2.7.1.jar:na] > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371) > ~[hadoop-common-2.7.1.jar:na] > at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170) > ~[hadoop-common-2.7.1.jar:na] > at > org.apache.drill.exec.store.dfs.DrillFileSystem.(DrillFileSystem.java:92) > ~[drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:213) > 
~[drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:210) > ~[drill-java-exec-1.4.0.jar:1.4.0] > at java.security.AccessController.doPrivileged(Native Method) > ~[na:1.8.0_40-ea] > at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_40-ea] > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) > ~[hadoop-common-2.7.1.jar:na] > at > org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:210) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:202) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible(WorkspaceSchemaFactory.java:150) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.(FileSystemSchemaFactory.java:78) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas(FileSystemSchemaFactory.java:65) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas(FileSystemPlugin.java:131) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.store.StoragePluginRegistry$DrillSchemaFactory.registerSchemas(StoragePluginRegistry.java:403) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:166) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:155) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:143) > [drill-java-exec-1.4.0.jar:1.4.0] > at > org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema(QueryContext.java:129) > [drill-java-exec-1.4.0.jar:1.4.0] > at > 
org.apache.drill.exec.planner.sql.DrillSqlWorker.<init>(DrillSqlWorker.java:93) > [drill-java-exec-1.4.0.jar:1.4.0] > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:907) > [drill-java-exec-1.4.0.jar:1.4.0] > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:244) > [drill-java-exec-1.4.0.jar:1.4.0] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_40-ea] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_40-ea] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40-ea] > {code} -- This message was sent by Atlassian JIRA
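The "No FileSystem for scheme: igfs" error means the Hadoop configuration that Drill loads has no mapping for the {{igfs}} scheme. Hadoop resolves a scheme to an implementation class via the {{fs.<scheme>.impl}} property, so a likely fix (a sketch; verify the class name against the installed Ignite version) is to make the mapping visible in the core-site.xml on Drill's classpath, not only in Hadoop's own configuration:

```xml
<!-- core-site.xml visible to Drill: map the igfs scheme to Ignite's
     Hadoop FileSystem implementation. -->
<property>
  <name>fs.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
</property>
```

The ignite-hadoop JAR providing that class must also be on Drill's classpath, otherwise the mapping resolves to a ClassNotFoundException instead.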
[jira] [Updated] (IGNITE-2356) IGFS client should be able to failover in case of server crash.
[ https://issues.apache.org/jira/browse/IGNITE-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-2356: Fix Version/s: (was: 1.7) > IGFS client should be able to failover in case of server crash. > --- > > Key: IGNITE-2356 > URL: https://issues.apache.org/jira/browse/IGNITE-2356 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: ignite-1.4 >Reporter: Vladimir Ozerov >Priority: Critical > Labels: important > > The IGFS client (IgniteHadoopFileSystem) communicates with IGFS over an endpoint - either > TCP or shmem. > Only a single endpoint can be specified. As such, should the server go down, > IgniteHadoopFileSystem (either new or existing) is no longer operational. > We need to let the user specify several endpoints and failover/balance between > them. > Look at Hadoop HA first to get ideas on how to configure multiple > addresses. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3187) IGFS: Print acceptable IGFS endpoints to the console on node start.
[ https://issues.apache.org/jira/browse/IGNITE-3187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3187: Assignee: (was: Vladimir Ozerov) > IGFS: Print acceptable IGFS endpoints to the console on node start. > --- > > Key: IGNITE-3187 > URL: https://issues.apache.org/jira/browse/IGNITE-3187 > Project: Ignite > Issue Type: Improvement > Components: hadoop, IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov > Fix For: 1.7 > > > *Problem* > When a user starts a node with IGFS, he needs to know its endpoint to be used > in URIs (e.g. "igfs://igfs@"). There are non-trivial rules on how the scheme > is formed, and sometimes it is difficult to understand which scheme to use. > *Solution* > Let's print acceptable schemes to the console on node start. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-2886) IGFS: Implement INotify integration.
[ https://issues.apache.org/jira/browse/IGNITE-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-2886: Fix Version/s: (was: 1.7) > IGFS: Implement INotify integration. > - > > Key: IGNITE-2886 > URL: https://issues.apache.org/jira/browse/IGNITE-2886 > Project: Ignite > Issue Type: Task > Components: hadoop, IGFS >Affects Versions: 1.5.0.final >Reporter: Vladimir Ozerov >Assignee: Oddo >Priority: Critical > > The idea was originally proposed by Michael Pearce on the dev-list: > http://apache-ignite-developers.2346864.n4.nabble.com/HDFS-iNotify-td8033.html > Currently IGFS is unable to deal with changes performed on HDFS directly. > That is, all file system operations must go through IGFS to maintain > the integrity of the in-memory file system view. > This appears to be a known issue with other Hadoop-based integrations. HDFS > has a relatively new interface {{INotify}} which allows callbacks to external > sources when the file system is updated. We need to evaluate the possibility of > integrating IGFS with this module. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-2355) Hadoop client should be able to failover in case of server crash.
[ https://issues.apache.org/jira/browse/IGNITE-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-2355: Assignee: (was: Ivan Veselovsky) > Hadoop client should be able to failover in case of server crash. > - > > Key: IGNITE-2355 > URL: https://issues.apache.org/jira/browse/IGNITE-2355 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: ignite-1.4 >Reporter: Vladimir Ozerov >Priority: Critical > Labels: important > > Currently we explicitly specify a single IP address of the Ignite server for map > reduce. > If it goes down, no jobs can be submitted anymore. > Looks like we should give users the ability to specify multiple addresses, and > failover between them. Our thin client (which underlies the Hadoop client) is > already able to accept multiple addresses. > Look at Hadoop HA first to get ideas on how to configure multiple > addresses. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3333) IGFS: Allow for ATOMIC data cache.
[ https://issues.apache.org/jira/browse/IGNITE-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3333: Fix Version/s: (was: 1.7) > IGFS: Allow for ATOMIC data cache. > -- > > Key: IGNITE-3333 > URL: https://issues.apache.org/jira/browse/IGNITE-3333 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: 1.6 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Critical > Fix For: 1.7 > > > Currently the data cache must be transactional. It means that some updates even > on a single key will require 2PC. Instead, it makes sense to try to change the update > logic to always work on single keys. In this case we will be able to switch > to an ATOMIC cache, which could improve performance dramatically. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-2355) Hadoop client should be able to failover in case of server crash.
[ https://issues.apache.org/jira/browse/IGNITE-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-2355: Fix Version/s: (was: 1.7) > Hadoop client should be able to failover in case of server crash. > - > > Key: IGNITE-2355 > URL: https://issues.apache.org/jira/browse/IGNITE-2355 > Project: Ignite > Issue Type: Task > Components: hadoop >Affects Versions: ignite-1.4 >Reporter: Vladimir Ozerov >Priority: Critical > Labels: important > > Currently we explicitly specify a single IP address of the Ignite server for map > reduce. > If it goes down, no jobs can be submitted anymore. > Looks like we should give users the ability to specify multiple addresses, and > failover between them. Our thin client (which underlies the Hadoop client) is > already able to accept multiple addresses. > Look at Hadoop HA first to get ideas on how to configure multiple > addresses. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
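The failover these two tickets ask for (IGNITE-2355 for the Hadoop client, IGNITE-2356 for the IGFS client) can be sketched as a first-reachable selection over a configured address list. This is an illustrative model only, not Ignite's actual client code; the Probe abstraction is hypothetical:

```java
import java.util.List;

public class FailoverDemo {
    /** Stand-in for an actual connectivity check (e.g. a TCP connect). */
    public interface Probe {
        boolean isAlive(String addr);
    }

    /** Tries each configured address in order and returns the first one
     *  that responds, so a crashed server is skipped transparently.
     *  Returns null if every address is down. */
    public static String pick(List<String> addrs, Probe probe) {
        for (String addr : addrs)
            if (probe.isAlive(addr))
                return addr;
        return null;
    }
}
```

A real implementation would additionally remember the last working address and re-probe failed ones, but the ordering-with-fallback shown here is the core of the requested behavior.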
[jira] [Updated] (IGNITE-3342) Hadoop: improve Java options documentation.
[ https://issues.apache.org/jira/browse/IGNITE-3342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3342: Fix Version/s: 1.7 > Hadoop: improve Java options documentation. > --- > > Key: IGNITE-3342 > URL: https://issues.apache.org/jira/browse/IGNITE-3342 > Project: Ignite > Issue Type: Improvement > Components: documentation, hadoop >Affects Versions: 1.6 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Critical > Fix For: 1.7 > > > 1) Mention the library path; > 2) CMS garbage collector with class unloading options; > 3) PermGen/MetaSpace > 4) Code cache > Examples: > *Java 8* > {{-J-XX:MaxMetaspaceSize=[VALUE] -J-XX:ReservedCodeCacheSize=768m > -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled > -J-XX:+CMSPermGenSweepingEnabled > -J-Djava.library.path=/usr/iop/current/hadoop/lib/native/}} > *Java 7* > {{-J-XX:MaxPermSize=[VALUE] -J-XX:ReservedCodeCacheSize=768m > -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled > -J-XX:+CMSPermGenSweepingEnabled > -J-Djava.library.path=/usr/iop/current/hadoop/lib/native/}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-3342) Hadoop: improve Java options documentation.
[ https://issues.apache.org/jira/browse/IGNITE-3342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-3342: Fix Version/s: (was: 1.7) > Hadoop: improve Java options documentation. > --- > > Key: IGNITE-3342 > URL: https://issues.apache.org/jira/browse/IGNITE-3342 > Project: Ignite > Issue Type: Improvement > Components: documentation, hadoop >Affects Versions: 1.6 >Reporter: Vladimir Ozerov >Assignee: Vladimir Ozerov >Priority: Critical > Fix For: 1.7 > > > 1) Mention the library path; > 2) CMS garbage collector with class unloading options; > 3) PermGen/MetaSpace > 4) Code cache > Examples: > *Java 8* > {{-J-XX:MaxMetaspaceSize=[VALUE] -J-XX:ReservedCodeCacheSize=768m > -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled > -J-XX:+CMSPermGenSweepingEnabled > -J-Djava.library.path=/usr/iop/current/hadoop/lib/native/}} > *Java 7* > {{-J-XX:MaxPermSize=[VALUE] -J-XX:ReservedCodeCacheSize=768m > -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled > -J-XX:+CMSPermGenSweepingEnabled > -J-Djava.library.path=/usr/iop/current/hadoop/lib/native/}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-2356) IGFS client should be able to failover in case of server crash.
[ https://issues.apache.org/jira/browse/IGNITE-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-2356: Assignee: (was: Ivan Veselovsky) > IGFS client should be able to failover in case of server crash. > --- > > Key: IGNITE-2356 > URL: https://issues.apache.org/jira/browse/IGNITE-2356 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: ignite-1.4 >Reporter: Vladimir Ozerov >Priority: Critical > Labels: important > Fix For: 1.7 > > > The IGFS client (IgniteHadoopFileSystem) communicates with IGFS over an endpoint - either > TCP or shmem. > Only a single endpoint can be specified. As such, should the server go down, > IgniteHadoopFileSystem (either new or existing) is no longer operational. > We need to let the user specify several endpoints and failover/balance between > them. > Look at Hadoop HA first to get ideas on how to configure multiple > addresses. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (IGNITE-2357) IgfsSecondaryFileSystem should be serializable.
[ https://issues.apache.org/jira/browse/IGNITE-2357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-2357: Fix Version/s: (was: 1.7) 2.0 > IgfsSecondaryFileSystem should be serializable. > -- > > Key: IGNITE-2357 > URL: https://issues.apache.org/jira/browse/IGNITE-2357 > Project: Ignite > Issue Type: Task > Components: IGFS >Affects Versions: ignite-1.4 >Reporter: Vladimir Ozerov >Priority: Minor > Fix For: 2.0 > > > Most of our pluggable components are serializable, so that > IgniteConfiguration can be converted to bytes and transferred over the wire. > This is not the case for IgfsSecondaryFileSystem. > There are several ways to fix that: > 1) Mark IgfsSecondaryFileSystem as Serializable - a simple and straightforward > solution. But what if the user cannot serialize some internals of his file system > implementation? > 2) Abstract out the file system and use a serializable Factory instead - this is > how things work in some other places (e.g. cache store factory). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
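Option 2 from the ticket can be sketched in plain Java (illustrative names, not Ignite's actual API): instead of making the component itself Serializable, ship a small serializable factory over the wire and build the component on the receiving node, mirroring how cache store factories work.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class FactoryDemo {
    /** Serializable factory abstraction; create() stands in for building
     *  a (possibly non-serializable) IgfsSecondaryFileSystem instance. */
    public interface FileSystemFactory extends Serializable {
        String create();
    }

    /** Concrete factory: only the small, serializable configuration (here,
     *  a URI string) travels over the wire, not the file system itself. */
    public static class UriFactory implements FileSystemFactory {
        private final String uri;

        public UriFactory(String uri) { this.uri = uri; }

        @Override public String create() { return "fs@" + uri; }
    }

    /** Round-trips the factory through Java serialization, as a cluster
     *  would when shipping IgniteConfiguration to another node. */
    public static FileSystemFactory roundTrip(FileSystemFactory f) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(f);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (FileSystemFactory) in.readObject();
        }
    }
}
```

This sidesteps the objection to option 1: the user's file system internals never need to be serializable, only the factory's configuration does.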