[jira] [Created] (IGNITE-3223) Wrong value in table editor

2016-05-31 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-3223:
-

 Summary: Wrong value in table editor
 Key: IGNITE-3223
 URL: https://issues.apache.org/jira/browse/IGNITE-3223
 Project: Ignite
  Issue Type: Sub-task
  Components: wizards
Affects Versions: 1.7
Reporter: Vasiliy Sisko
Assignee: Dmitriyff


# Open the cluster page and enter two or more addresses in the addresses table.
# Refresh the page.
# Click some row to edit it.
# Click another row to edit it.

The second editor shows the value from the first row.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3191) BinaryObjectBuilder: binary schema id depends on the order of fields addition

2016-05-31 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308960#comment-15308960
 ] 

Dmitriy Setrakyan commented on IGNITE-3191:
---

Any reason why {{LinkedHashMap}} was chosen initially?

> BinaryObjectBuilder: binary schema id depends on the order of fields addition
> -
>
> Key: IGNITE-3191
> URL: https://issues.apache.org/jira/browse/IGNITE-3191
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
> Fix For: 1.7
>
>
> Presently, if an object is created using BinaryObjectBuilder, several 
> BinarySchemas can be generated for the same set of fields when the fields 
> are added in a different order.
> This happens because {{LinkedHashMap}} is used underneath. Instead, we should 
> rely on an order-independent structure such as {{TreeMap}}.
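
To illustrate the order dependence, a minimal sketch (hypothetical type and field names, assuming a started Ignite instance):

{code}
Ignite ignite = Ignition.start();

// The same set of fields, added in a different order.
BinaryObject a = ignite.binary().builder("Person")
    .setField("name", "Ann")
    .setField("age", 30)
    .build();

BinaryObject b = ignite.binary().builder("Person")
    .setField("age", 30)
    .setField("name", "Ann")
    .build();

// With LinkedHashMap underneath, a and b can get two different binary
// schemas (and schema ids); an order-independent structure such as TreeMap
// would yield a single schema for both.
{code}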



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-3221) Bad ip used, when fixed ip in hostname

2016-05-31 Thread Dmitriy Setrakyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitriy Setrakyan updated IGNITE-3221:
--
Labels: community  (was: )

> Bad ip used, when fixed ip in hostname
> ---
>
> Key: IGNITE-3221
> URL: https://issues.apache.org/jira/browse/IGNITE-3221
> Project: Ignite
>  Issue Type: Bug
>Reporter: sebastien diaz
>Assignee: Denis Magda
>  Labels: community
>
> Hello
> My machine most likely has a bad DNS resolution issue (docker+swarm, network 
> and compose), so I set the IP directly in place of the hostname.
> Unfortunately TcpDiscoveryNode records a different socket IP.
> Example host=40.1.0.23 -> socketAddress=104.239.213.7 :
> TcpDiscoveryNode [id=54b73d98-e702-4660-957a-61d065003078, addrs=[40.1.0.23], 
> sockAddrs=[44de9a1e9afe/104.239.213.7:47500, /40.1.0.23:47500]
> The code apparently at fault is:
> public static InetAddress resolveLocalHost(@Nullable String hostName) throws IOException {
>     return F.isEmpty(hostName) ?
>         // Should default to InetAddress#anyLocalAddress which is package-private.
>         new InetSocketAddress(0).getAddress() :
>         InetAddress.getByName(hostName);
> }
> For my issue it would be preferable not to use the function
> InetAddress.getByName
> but something like
> InetAddress.getByAddress(ipAddr);
> thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-3109) IgfsMapReduceExample fails with ClassNotFoundException

2016-05-31 Thread Yakov Zhdanov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yakov Zhdanov updated IGNITE-3109:
--
Fix Version/s: (was: 1.6)
   1.7

> IgfsMapReduceExample fails with ClassNotFoundException
> -
>
> Key: IGNITE-3109
> URL: https://issues.apache.org/jira/browse/IGNITE-3109
> Project: Ignite
>  Issue Type: Bug
>  Components: IGFS
>Affects Versions: 1.6
> Environment: Windows 10, Oracle JDK 1.7.0_80
>Reporter: Sergey Kozlov
>Priority: Critical
> Fix For: 1.7
>
> Attachments: companies.txt
>
>
> 1. Run the external node {{bin\ignite.bat 
> examples\config\filesystem\example-igfs.xml}}
> 2. Run {{IgfsMapReduceExample}} with arguments {{\companies.txt 
> GERMANY}}:
> {noformat}
> [13:41:27] Topology snapshot [ver=2, servers=2, clients=0, CPUs=8, heap=4.5GB]
> >>> IGFS map reduce example started.
> Copying file to IGFS: 
> C:\Work\apache-ignite-fabric-1.6.0-QASK1101-bin\examples\companies.txt
> [13:41:29,722][ERROR][sys-#19%null%][GridTaskWorker] Failed to obtain remote 
> job result policy for result from ComputeTask.result(..) method (will fail 
> the whole task): GridJobResultImpl 
> [job=o.a.i.i.processors.igfs.IgfsJobImpl@57196a65, sib=GridJobSiblingImpl 
> [sesId=b52286f9451-e52c1381-bad9-4161-ab15-ad22c870a7d0, 
> jobId=fa2286f9451-e52c1381-bad9-4161-ab15-ad22c870a7d0, 
> nodeId=b2c68a38-383c-4a9f-b176-ec4633de3c8e, isJobDone=false], 
> jobCtx=GridJobContextImpl 
> [jobId=fa2286f9451-e52c1381-bad9-4161-ab15-ad22c870a7d0, timeoutObj=null, 
> attrs={}], node=TcpDiscoveryNode [id=b2c68a38-383c-4a9f-b176-ec4633de3c8e, 
> addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.2.107, 
> 2001:0:5ef5:79fb:2090:2da4:b079:309d], 
> sockAddrs=[work-pc/192.168.2.107:47500, /0:0:0:0:0:0:0:1:47500, 
> work-pc/192.168.2.107:47500, /127.0.0.1:47500, /192.168.2.107:47500, 
> /2001:0:5ef5:79fb:2090:2da4:b079:309d:47500], discPort=47500, order=1, 
> intOrder=1, lastExchangeTime=1462963287286, loc=false, 
> ver=1.6.0#20160511-sha1:bd6a67f2, isClient=false], ex=class 
> o.a.i.IgniteException: Failed to find class with given class loader for 
> unmarshalling (make sure same version of all classes are available on all 
> nodes or enable peer-class-loading): 
> sun.misc.Launcher$AppClassLoader@1b3e02ed, hasRes=true, isCancelled=false, 
> isOccupied=true]
> class org.apache.ignite.IgniteException: Remote job threw user exception 
> (override or implement ComputeTask.result(..) method if you would like to 
> have automatic failover for this exception).
>   at 
> org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker$3.apply(GridTaskWorker.java:912)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker$3.apply(GridTaskWorker.java:905)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6491)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:905)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:801)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:995)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1220)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1219)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:847)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:105)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:810)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: class org.apache.ignite.IgniteException: Failed to find class with 
> given class loader for unmarshalling (make sure same version of all classes 
> are available on all nodes or enable peer-class-loading): 
> sun.misc.Launcher$AppClassLoader@1b3e02ed
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.initialize(GridJobWorker.java:424)
>   at 
> org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1089)
>   at 
> org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1766)
>   ... 7 more
> Caused by: class 

[jira] [Commented] (IGNITE-3222) IgniteCache.invokeAll for all cache entries

2016-05-31 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308941#comment-15308941
 ] 

Dmitriy Setrakyan commented on IGNITE-3222:
---

I actually believe that such a method will be error-prone and will cause all 
sorts of memory issues for users trying to execute it over large caches.

What we need instead is an affinityCall/Run method over a partition, not a key. 
Why not provide this method instead?
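
A rough sketch of the shape such a method could take (a hypothetical API, shown only to illustrate the idea; part is the target partition id):

{code}
// Hypothetical: run the closure on the node that currently owns the given
// partition, with the partition reserved for the duration of the job.
ignite.compute().affinityRun("myCache", part, new IgniteRunnable() {
    @Override public void run() {
        // Iterate only the local entries of the reserved partition here.
    }
});
{code}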


> IgniteCache.invokeAll for all cache entries
> ---
>
> Key: IGNITE-3222
> URL: https://issues.apache.org/jira/browse/IGNITE-3222
> Project: Ignite
>  Issue Type: Task
>  Components: cache
>Affects Versions: 1.1.4
>Reporter: Pavel Tupitsyn
> Fix For: 1.7
>
>
> Implement an invokeAll overload that processes all cache keys (not some 
> specific set).
> Proposed signature:
> {code}
> public void invokeAll(CacheEntryProcessor<K, V, ?> entryProcessor, Object... args);
> public <T> Map<K, T> invokeAll(CacheEntryProcessor<K, V, T> entryProcessor, boolean returnAffectedOnly, Object... args);
> {code}
> This will apply the specified processor to all cache entries.
> The first method does not return anything.
> The second method either returns all results for all entries, or only for 
> entries that have been changed by the processor in any way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-31 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308907#comment-15308907
 ] 

Dmitriy Setrakyan commented on IGNITE-2655:
---

I am not sure I follow the proposed changes, but I still would like to see the 
final API here, if it has changed. I also strongly believe that the behavior of 
this backup-filter should not be different as we move from one affinity 
function to another.

> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either a "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} 
> that works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only 
> guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method:
> {{AffinityBackupFilter.isAssignable(Node n, List<Node> assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the 1st being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-31 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307586#comment-15307586
 ] 

Dmitriy Setrakyan edited comment on IGNITE-2655 at 5/31/16 11:55 PM:
-

Dmitriy, it means that in the case of {{FairAffinityFunction}} the method can 
check both the primary and the backup against already assigned nodes. The 
already assigned nodes may or may not contain the primary.

Vlad, I think that we can preserve the same semantics and behavior at the level 
of {{FairAffinityFunction}} if we do the following at the implementation level:
- if {{tier=0}} is checked (primary), then we prepare a new assignments list 
that has the primary being checked first, with the rest of the nodes being the 
already assigned ones (backups);
- after that we iterate over a sublist, calling 
{{affinityBackupFilter.apply(...)}} for every backup from the assignments list. 
If during the iteration we get {{false}} for at least one backup, it means that 
the primary is not assignable.

Such an implementation will help us preserve the same semantics that 
{{RendezvousAffinityFunction}} has:
n - potential backup to check
assigned - list of current partition holders (first node in the list is primary)


was (Author: dmagda):
Dmitriy, it means that in case of {{FairAffinityFunction}} the method can check 
both the primary and the backup against already assigned nodes. The already 
assigned nodes may or may node contains the primary.

Vlad, I think that we can preserve the same semantic and behavior at the level 
of {{FairAffinictyFunction}} if do the following at the implementation level:
- if {{tier=0}} is checked (primary) then we prepare new assignments list that 
will have the primary, that is being checked, first in the list and the rest of 
the nodes will be the nodes that are already assigned (backups);
- after that we're iterating over a sublist calling 
{{affinityBackupFilter.apply(...)}} for every backup from the list with 
assignments. During the iteration if we get {{false}} for at least backup then 
it means that the primary is non assignable.

Such implementation will help us to preserve the same semantic as 
{{RendezvousAffinityFunction}} has
n - potential backup to check
assigned - list of current partition holders (first node in the list is primary)

> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either a "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} 
> that works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only 
> guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method:
> {{AffinityBackupFilter.isAssignable(Node n, List<Node> assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the 1st being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-31 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308862#comment-15308862
 ] 

Dmitriy Setrakyan edited comment on IGNITE-2655 at 5/31/16 11:54 PM:
-

[~dmagda] [~v.pyatkov]

A couple of comments:
# Can you please think over this signature for the predicate: apply(int n, 
Node n, List<Node> assigned)?
# If the predicate is called with an empty list, then the primary node for this 
partition is examined.
# Or we can always assign the primary on our own and call the predicate only 
for backups (which I would prefer). If someone needs full control, he always 
has the opportunity to implement an affinity function on his own.

As a side note, I would say that we are trying to force our users to do some 
programming. Does anyone have any idea on how to do this without code? How 
about supporting simple string expressions based on node attributes and/or IP 
addresses?



was (Author: yzhdanov):
[~dmagda] [~v.pyatkov]

Couple of comments:
# Can you please think over this signature for predicate? apply(int n, Node n, 
List assigned)
# If predicate is called with empty list then primary node for this partition 
is examined. 
# Or we can assign primary always on our own and call predicate only for 
backups (which I would prefer). If someone needs full control, then he always 
have an opportunity to implement affinity function on his own.

As I side node I would say that we are trying to force our users to do some 
programming. Does anyone have any idea on how to do this without code? How 
about supporting simple string expressions based on node attributes and/or ip 
addresses? 


> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either a "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} 
> that works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only 
> guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method:
> {{AffinityBackupFilter.isAssignable(Node n, List<Node> assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the 1st being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2969) Optimistic transactions support in deadlock detection

2016-05-31 Thread Andrey Gura (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308868#comment-15308868
 ] 

Andrey Gura commented on IGNITE-2969:
-

Nested synchronization is removed.

Transaction classes no longer implement {{GridTimeoutObject}}, and the timeout 
logic is simplified.

We also need to check and revise all places where transactions operate with the 
{{remainingTime()}} method and with prepare and finish futures.

> Optimistic transactions support in deadlock detection
> -
>
> Key: IGNITE-2969
> URL: https://issues.apache.org/jira/browse/IGNITE-2969
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache
>Reporter: Andrey Gura
>Assignee: Andrey Gura
> Fix For: 1.7
>
>
> Deadlock detection doesn't support optimistic transactions now. It should be 
> implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-31 Thread Yakov Zhdanov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308862#comment-15308862
 ] 

Yakov Zhdanov commented on IGNITE-2655:
---

[~dmagda] [~v.pyatkov]

A couple of comments:
# Can you please think over this signature for the predicate: apply(int n, 
Node n, List<Node> assigned)?
# If the predicate is called with an empty list, then the primary node for this 
partition is examined.
# Or we can always assign the primary on our own and call the predicate only 
for backups (which I would prefer). If someone needs full control, he always 
has the opportunity to implement an affinity function on his own.

As a side note, I would say that we are trying to force our users to do some 
programming. Does anyone have any idea on how to do this without code? How 
about supporting simple string expressions based on node attributes and/or IP 
addresses?


> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either a "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} 
> that works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only 
> guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method:
> {{AffinityBackupFilter.isAssignable(Node n, List<Node> assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the 1st being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-1078) Create integration for ninja framework

2016-05-31 Thread Karthik Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-1078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Murthy reassigned IGNITE-1078:
--

Assignee: Karthik Murthy

> Create integration for ninja framework
> --
>
> Key: IGNITE-1078
> URL: https://issues.apache.org/jira/browse/IGNITE-1078
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
>Affects Versions: sprint-7
>Reporter: Alexey Kuznetsov
>Assignee: Karthik Murthy
>Priority: Trivial
>  Labels: newbie
>
> Create integration for the ninja web framework: http://www.ninjaframework.org/
> ninja-hazelcast-embedded could be used as a starting point:  
> https://github.com/raptaml/ninja-hazelcast-embedded 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3054) Rework client connection handling from thread-per-client to NIO model.

2016-05-31 Thread Dmitry Karachentsev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308043#comment-15308043
 ] 

Dmitry Karachentsev commented on IGNITE-3054:
-

Working on SSL support. To keep it compatible and at the same time support 
switching clients to non-blocking mode, we need to add SSL handshake support 
from GridNioSslHandler and encrypt/decrypt methods for server connections.
This solution was picked because GridNioServer cannot be used here: it is 
designed to work in non-blocking mode, while we want to keep the current server 
communication logic untouched (blocking mode and a thread per connection). In 
the previous implementation SSLServerSocketFactory was used, which is not 
applicable for NIO.

Left:
* SSL support.
* Performance tests.

> Rework client connection handling from thread-per-client to NIO model.
> --
>
> Key: IGNITE-3054
> URL: https://issues.apache.org/jira/browse/IGNITE-3054
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.5.0.final
>Reporter: Vladimir Ozerov
>Assignee: Dmitry Karachentsev
>Priority: Blocker
> Fix For: 1.7
>
>
> Currently both servers and clients have the same operational model - 
> thread-per-connection. While being more or less fine for servers, this could 
> be a problem for clients when their total number is too high (e.g. 1000 or 
> even more).
> We should rework the client handling model and employ the standard NIO 
> technique: one or several acceptor threads + a thread pool to serve requests.
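
For reference, a generic sketch of that pattern in plain Java NIO (not Ignite's actual GridNioServer code; handleRequest is an assumed application callback that also restores read interest afterwards):

{code}
Selector selector = Selector.open();

ServerSocketChannel srv = ServerSocketChannel.open();
srv.bind(new InetSocketAddress(10800));
srv.configureBlocking(false);
srv.register(selector, SelectionKey.OP_ACCEPT);

ExecutorService workers = Executors.newFixedThreadPool(8);

while (!Thread.currentThread().isInterrupted()) {
    selector.select();

    for (Iterator<SelectionKey> it = selector.selectedKeys().iterator(); it.hasNext(); ) {
        SelectionKey key = it.next();
        it.remove();

        if (key.isAcceptable()) {
            // Acceptor part: register the new client channel for reads.
            SocketChannel ch = srv.accept();
            ch.configureBlocking(false);
            ch.register(selector, SelectionKey.OP_READ);
        }
        else if (key.isReadable()) {
            // Worker-pool part: pause read interest and hand the channel off.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
            workers.submit(() -> handleRequest(key));
        }
    }
}
{code}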



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-31 Thread Vladislav Pyatkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-2655:
--
Comment: was deleted

(was: I fixed AffinityBackupFilter incorrect behavior.
Denis, please, review last change.)

> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either a "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} 
> that works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only 
> guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method:
> {{AffinityBackupFilter.isAssignable(Node n, List<Node> assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the 1st being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-31 Thread Vladislav Pyatkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307986#comment-15307986
 ] 

Vladislav Pyatkov commented on IGNITE-2655:
---

I fixed the incorrect AffinityBackupFilter behavior (in {{FairAffinityFunction}}).
Denis, please review the last change.

> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either a "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} 
> that works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only 
> guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method:
> {{AffinityBackupFilter.isAssignable(Node n, List<Node> assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the 1st being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-31 Thread Vladislav Pyatkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307980#comment-15307980
 ] 

Vladislav Pyatkov commented on IGNITE-2655:
---

I fixed the incorrect AffinityBackupFilter behavior.
Denis, please review the last change.

> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either a "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} 
> that works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only 
> guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method:
> {{AffinityBackupFilter.isAssignable(Node n, List<Node> assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the 1st being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3151) Using IgniteCountDownLatch sometimes leads to deadlock.

2016-05-31 Thread Vladislav Pyatkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307975#comment-15307975
 ] 

Vladislav Pyatkov commented on IGNITE-3151:
---

The issue with deadlock no longer occurs.
Could anyone review?

> Using IgniteCountDownLatch sometimes leads to deadlock.
> -
>
> Key: IGNITE-3151
> URL: https://issues.apache.org/jira/browse/IGNITE-3151
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
> Attachments: igniteBugShot.zip
>
>
> Run several server nodes (recommended count: number of CPUs - 1).
> Wait for the topology to update.
> Run the client.
> After some iterations an exception will be thrown (in my case it took place 
> after around 10K iterations).
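
The reproducer has roughly this shape (assumed code; the actual reproducer is in the attached igniteBugShot.zip):

{code}
for (int i = 0; i < 100_000; i++) {
    // Create an auto-removable latch with an initial count of 1.
    IgniteCountDownLatch latch = ignite.countDownLatch("latch-" + i, 1, true, true);

    latch.countDown();
    latch.await(); // Reported to hang after around 10K iterations.
}
{code}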



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3151) Using IgniteCountDownLatch sometimes leads to deadlock.

2016-05-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307898#comment-15307898
 ] 

ASF GitHub Bot commented on IGNITE-3151:


GitHub user vldpyatkov opened a pull request:

https://github.com/apache/ignite/pull/768

IGNITE-3151

Using IgniteCountDownLatch sometimes leads to deadlock.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vldpyatkov/ignite ignite-3151

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/768.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #768


commit 53bbc1bbd9342e23c8eb57692f5b93dd273e3e03
Author: vdpyatkov 
Date:   2016-05-31T15:12:28Z

IGNITE-3151
Using IgniteCountDownLatch sometimes leads to deadlock.




> Using IgniteCountDownLatch sometimes leads to deadlock.
> -
>
> Key: IGNITE-3151
> URL: https://issues.apache.org/jira/browse/IGNITE-3151
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
> Attachments: igniteBugShot.zip
>
>
> Run several server nodes (recommended count: number of CPUs - 1).
> Wait for the topology to update.
> Run the client.
> After some iterations an exception will be thrown (in my case it took place 
> after around 10K iterations).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-1078) Create integration for ninja framework

2016-05-31 Thread Karthik Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-1078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307741#comment-15307741
 ] 

Karthik Murthy commented on IGNITE-1078:


I was thinking of assigning this ticket to myself, as I figured that it would be 
a good way to start contributing to the project without getting in the way of 
your critical path. However, I have noticed that this ticket has not been 
updated in quite some time, so my question to all of you is whether this has 
already been resolved.

Thanks.

> Create integration for ninja framework
> --
>
> Key: IGNITE-1078
> URL: https://issues.apache.org/jira/browse/IGNITE-1078
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
>Affects Versions: sprint-7
>Reporter: Alexey Kuznetsov
>Priority: Trivial
>  Labels: newbie
>
> Create integration for the ninja web framework: http://www.ninjaframework.org/
> ninja-hazelcast-embedded could be used as a starting point:  
> https://github.com/raptaml/ninja-hazelcast-embedded 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-3222) IgniteCache.invokeAll for all cache entries

2016-05-31 Thread Pavel Tupitsyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-3222:
---
Description: 
Implement an invokeAll overload that processes all cache keys (not some 
specific set).

Proposed signature:
{code}
public void invokeAll(CacheEntryProcessor<K, V, ?> entryProcessor, Object... args);

public <T> Map<K, T> invokeAll(CacheEntryProcessor<K, V, T> entryProcessor, boolean returnAffectedOnly, Object... args);
{code}

This will apply the specified processor to all cache entries.

The first method does not return anything.
The second method either returns all results for all entries, or only for 
entries that have been changed by the processor in any way.

  was:
Implement an invokeAll overload that processes all cache keys (not some 
specific set).

Proposed signature:
{code}
public void invokeAll(CacheEntryProcessor<K, V, ?> entryProcessor, Object... args);

public <T> Map<K, T> invokeAll(CacheEntryProcessor<K, V, T> entryProcessor, boolean returnAffectedOnly, Object... args);
{code}

This will apply the specified processor to all cache entries.
The first method does not return anything.

The second method either returns all results for all entries, or only for 
entries that have been changed by the processor in any way.


> IgniteCache.invokeAll for all cache entries
> ---
>
> Key: IGNITE-3222
> URL: https://issues.apache.org/jira/browse/IGNITE-3222
> Project: Ignite
>  Issue Type: Task
>  Components: cache
>Affects Versions: 1.1.4
>Reporter: Pavel Tupitsyn
> Fix For: 1.7
>
>
> Implement an invokeAll overload that processes all cache keys (not some 
> specific set).
> Proposed signature:
> {code}
> public void invokeAll(CacheEntryProcessor<K, V, ?> entryProcessor, Object... args);
> public <T> Map<K, T> invokeAll(CacheEntryProcessor<K, V, T> entryProcessor, boolean returnAffectedOnly, Object... args);
> {code}
> This will apply the specified processor to all cache entries.
> The first method does not return anything.
> The second method either returns all results for all entries, or only for 
> entries that have been changed by the processor in any way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-3222) IgniteCache.invokeAll for all cache entries

2016-05-31 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-3222:
--

 Summary: IgniteCache.invokeAll for all cache entries
 Key: IGNITE-3222
 URL: https://issues.apache.org/jira/browse/IGNITE-3222
 Project: Ignite
  Issue Type: Task
  Components: cache
Affects Versions: 1.1.4
Reporter: Pavel Tupitsyn
 Fix For: 1.7


Implement an invokeAll overload that processes all cache keys (not some 
specific set).

Proposed signature:
{code}
public void invokeAll(CacheEntryProcessor<K, V, ?> entryProcessor, Object... args);

public <T> Map<K, T> invokeAll(CacheEntryProcessor<K, V, T> entryProcessor, boolean returnAffectedOnly, Object... args);
{code}

This will apply the specified processor to all cache entries.
The first method does not return anything.

The second method either returns all results for all entries, or only for 
entries that have been changed by the processor in any way.
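
Hypothetical usage of the first proposed overload (the API does not exist yet; the entry processor is shown as a lambda):

{code}
// Append a suffix to every value in the cache, ignoring the results.
cache.invokeAll((CacheEntryProcessor<Integer, String, Object>)(entry, args) -> {
    entry.setValue(entry.getValue() + "-processed");

    return null;
});
{code}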



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-3221) Bad ip used, when fixed ip in hostname

2016-05-31 Thread Denis Magda (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda reassigned IGNITE-3221:
---

Assignee: Denis Magda

> Bad ip used, when fixed ip in hostname
> ---
>
> Key: IGNITE-3221
> URL: https://issues.apache.org/jira/browse/IGNITE-3221
> Project: Ignite
>  Issue Type: Bug
>Reporter: sebastien diaz
>Assignee: Denis Magda
>
> Hello
> My machine most likely has a bad DNS resolution issue (docker+swarm, network 
> and compose), so I set the IP directly in place of the hostname.
> Unfortunately TcpDiscoveryNode records a different socket IP.
> Example host=40.1.0.23 -> socketAddress=104.239.213.7 :
> TcpDiscoveryNode [id=54b73d98-e702-4660-957a-61d065003078, addrs=[40.1.0.23], 
> sockAddrs=[44de9a1e9afe/104.239.213.7:47500, /40.1.0.23:47500]
> The code apparently at fault is:
> public static InetAddress resolveLocalHost(@Nullable String hostName) throws IOException {
>     return F.isEmpty(hostName) ?
>         // Should default to InetAddress#anyLocalAddress which is package-private.
>         new InetSocketAddress(0).getAddress() :
>         InetAddress.getByName(hostName);
> }
> For my issue it would be preferable not to use the function
> InetAddress.getByName
> but something like
> InetAddress.getByAddress(ipAddr);
> thanks



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-3221) Bad ip used, when fixed ip in hostname

2016-05-31 Thread sebastien diaz (JIRA)
sebastien diaz created IGNITE-3221:
--

 Summary: Bad ip used, when fixed ip in hostname
 Key: IGNITE-3221
 URL: https://issues.apache.org/jira/browse/IGNITE-3221
 Project: Ignite
  Issue Type: Bug
Reporter: sebastien diaz


Hello

My machine most likely has a bad DNS resolution issue (docker+swarm, network 
and compose), so I set the IP directly in place of the hostname.

Unfortunately TcpDiscoveryNode records a different socket IP.

Example host=40.1.0.23 -> socketAddress=104.239.213.7 :
TcpDiscoveryNode [id=54b73d98-e702-4660-957a-61d065003078, addrs=[40.1.0.23], 
sockAddrs=[44de9a1e9afe/104.239.213.7:47500, /40.1.0.23:47500]

The code apparently at fault is:
public static InetAddress resolveLocalHost(@Nullable String hostName) throws IOException {
    return F.isEmpty(hostName) ?
        // Should default to InetAddress#anyLocalAddress which is package-private.
        new InetSocketAddress(0).getAddress() :
        InetAddress.getByName(hostName);
}

For my issue it would be preferable not to use the function
InetAddress.getByName
but something like
InetAddress.getByAddress(ipAddr);

thanks
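
A minimal sketch of the suggested change (not the actual Ignite code; parseIpV4 is an illustrative helper): resolve IP literals without any DNS lookup and fall back to getByName otherwise.

{code}
public static InetAddress resolveLocalHost(@Nullable String hostName) throws IOException {
    if (F.isEmpty(hostName))
        return new InetSocketAddress(0).getAddress();

    // If the host name is already an IPv4 literal, avoid DNS entirely.
    byte[] ipAddr = parseIpV4(hostName);

    return ipAddr != null ? InetAddress.getByAddress(ipAddr) : InetAddress.getByName(hostName);
}

// Illustrative helper: parse a dotted-quad IPv4 literal, or return null.
private static byte[] parseIpV4(String s) {
    String[] parts = s.split("\\.");

    if (parts.length != 4)
        return null;

    byte[] res = new byte[4];

    for (int i = 0; i < 4; i++) {
        try {
            int octet = Integer.parseInt(parts[i]);

            if (octet < 0 || octet > 255)
                return null;

            res[i] = (byte)octet;
        }
        catch (NumberFormatException ignored) {
            return null;
        }
    }

    return res;
}
{code}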



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3207) Rename IgniteConfiguration.gridName

2016-05-31 Thread Denis Magda (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307608#comment-15307608
 ] 

Denis Magda commented on IGNITE-3207:
-

[~F7753], I've reviewed your changes and left some comments in the pull 
request. Please consider them as well as Dmitriy's suggestion above (rename all 
"localInstanceName" occurrences to "instanceName").

After you address all the input, please do the following:
- update your pull request;
- check all the changes against TeamCity. Refer to this page [1] for more 
details on how to start the test suites;
- move this issue into the "PATCH_AVAILABLE" state if the tests look good;
- ask for one more review round.


[1] 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute#HowtoContribute-1.CreateGitHubpull-request

> Rename IgniteConfiguration.gridName
> ---
>
> Key: IGNITE-3207
> URL: https://issues.apache.org/jira/browse/IGNITE-3207
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.1.4
>Reporter: Pavel Tupitsyn
>Assignee: Biao Ma
> Fix For: 1.7
>
>
> We have got a TON of questions on the gridName property. Everyone thinks that 
> clusters are formed based on the gridName, that is, nodes with the same grid 
> name will join one cluster, and nodes with a different name will be in a 
> separate cluster.
> Let's do the following:
> * Deprecate IgniteConfiguration.gridName
> * Add IgniteConfiguration.localInstanceName
> * Rename related parameters in Ignition class (and other places, if present)
> * Update Javadoc: clearly state that this name only works locally and has no 
> effect on topology.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-31 Thread Denis Magda (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307586#comment-15307586
 ] 

Denis Magda commented on IGNITE-2655:
-

Dmitriy, it means that in the case of {{FairAffinityFunction}} the method can 
check both the primary and the backup against already assigned nodes. The 
already assigned nodes may or may not contain the primary.

Vlad, I think that we can preserve the same semantics and behavior at the level 
of {{FairAffinityFunction}} if we do the following at the implementation level:
- if {{tier=0}} is checked (primary), then we prepare a new assignments list 
that has the primary being checked first, with the rest of the nodes being the 
already assigned ones (backups);
- after that we iterate over a sublist, calling 
{{affinityBackupFilter.apply(...)}} for every backup from the assignments list. 
If during the iteration we get {{false}} for at least one backup, it means that 
the primary is not assignable.

Such an implementation will help us preserve the same semantics that 
{{RendezvousAffinityFunction}} has:
n - potential backup to check
assigned - list of current partition holders (first node in the list is primary)

> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either a "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} 
> that works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only 
> guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method:
> {{AffinityBackupFilter.isAssignable(Node n, List<Node> assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the 1st being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-1371) Key-Value store (like Cassandra) as CacheStore

2016-05-31 Thread Anton Vinogradov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307531#comment-15307531
 ] 

Anton Vinogradov commented on IGNITE-1371:
--

Igor, 

Have you found a way to resolve the warning?
As you can see, it fails this check:
http://149.202.210.143:8111/viewType.html?buildTypeId=IgniteTests_RatJavadoc_IgniteTests=%3Cdefault%3E=buildTypeStatusDiv

We should have a tool to detect real javadoc warnings without using JDK 8.

> Key-Value store (like Cassandra) as CacheStore
> --
>
> Key: IGNITE-1371
> URL: https://issues.apache.org/jira/browse/IGNITE-1371
> Project: Ignite
>  Issue Type: New Feature
>  Components: cache
>Affects Versions: ignite-1.4
>Reporter: Alexandre Boudnik
> Fix For: 1.6
>
> Attachments: master_02b59e4_ignite-1371.patch
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> It will provide the ability to map a particular cache holding POJOs to a 
> Cassandra table. Later it could be generalized to eventually support any 
> key-value store.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-3220) I/O bottleneck on server/client cluster configuration.

2016-05-31 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-3220:
-

 Summary: I/O bottleneck on server/client cluster configuration.
 Key: IGNITE-3220
 URL: https://issues.apache.org/jira/browse/IGNITE-3220
 Project: Ignite
  Issue Type: Bug
  Components: clients
Reporter: Alexei Scherbakov
Priority: Minor
 Fix For: 1.7


There is an I/O bottleneck when a client tries to perform many transactions 
involving puts and gets in a highly concurrent manner over socket transport 
to a single server node deployed on a powerful multicore computer.
In this case throughput decreases dramatically (up to 30 times in comparison 
with running the test in a single JVM) because everything is delayed by the 
server I/O thread.

The current workaround for such a scenario is to use more clients (or servers).
We should add a configuration parameter such as maxConnectionsPerClient, 
allowing a client to connect to the server using several simultaneous 
connections, decreasing the I/O bottleneck.
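
A sketch of how such a parameter could be exposed (setMaxConnectionsPerClient is the proposed, not yet existing, property):

{code}
TcpCommunicationSpi commSpi = new TcpCommunicationSpi();

// Proposed knob: allow one client to open several connections to a server.
commSpi.setMaxConnectionsPerClient(4);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setCommunicationSpi(commSpi);

Ignition.start(cfg);
{code}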



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-3219) Validation of nested forms does not work on save.

2016-05-31 Thread Vasiliy Sisko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasiliy Sisko updated IGNITE-3219:
--
Description: 
# On the cluster page add a type in the binary configuration.
# Input a valid type name.
# Input an invalid id mapper, name mapper, and serializer.
# Save the cluster.

The cluster is saved successfully, the fields are marked as wrong, but 
validation does not show them.

The problem is also reproduced for Failover configuration -> Custom.

  was:
# On the cluster page add a type in the binary configuration.
# Input a valid type name.
# Input an invalid id mapper, name mapper, and serializer.
# Save the cluster.

The cluster is saved successfully, the fields are marked as wrong, but 
validation does not show them.


> Validation of nested forms does not work on save.
> 
>
> Key: IGNITE-3219
> URL: https://issues.apache.org/jira/browse/IGNITE-3219
> Project: Ignite
>  Issue Type: Sub-task
>  Components: wizards
>Affects Versions: 1.7
>Reporter: Vasiliy Sisko
>Priority: Minor
> Fix For: 1.7
>
>
> # On the cluster page add a type in the binary configuration.
> # Input a valid type name.
> # Input an invalid id mapper, name mapper, and serializer.
> # Save the cluster.
> The cluster is saved successfully, the fields are marked as wrong, but 
> validation does not show them.
> The problem is also reproduced for Failover configuration -> Custom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-3219) Validation of nested forms does not work on save.

2016-05-31 Thread Vasiliy Sisko (JIRA)
Vasiliy Sisko created IGNITE-3219:
-

 Summary: Validation of nested forms does not work on save.
 Key: IGNITE-3219
 URL: https://issues.apache.org/jira/browse/IGNITE-3219
 Project: Ignite
  Issue Type: Sub-task
  Components: wizards
Affects Versions: 1.7
Reporter: Vasiliy Sisko
Priority: Minor


# On the cluster page add a type in the binary configuration.
# Input a valid type name.
# Input an invalid id mapper, name mapper, and serializer.
# Save the cluster.

The cluster is saved successfully, the fields are marked as wrong, but 
validation does not show them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-31 Thread Dmitriy Setrakyan (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307495#comment-15307495
 ] 

Dmitriy Setrakyan commented on IGNITE-2655:
---

Then I do not understand. Are you saying that we will have a method called 
{{setAffinityBackupFilter(...)}}, but in case of {{FairAffinityFunction}} it 
will be checking the primary node and not the backup?

> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either a "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} 
> that works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only 
> guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method:
> {{AffinityBackupFilter.isAssignable(Node n, List<Node> assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the 1st being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-31 Thread Vladislav Pyatkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307412#comment-15307412
 ] 

Vladislav Pyatkov commented on IGNITE-2655:
---

A small description of my changes:
Added AffinityBackupFilter for FairAffinityFunction and 
RendezvousAffinityFunction (the method is setAffinityBackupFilter).
The interface for the affinity backup filter is
public boolean apply(Node n, List<Node> assigned)
where
n - potential backup to check
assigned - list of current partition holders (for RendezvousAffinityFunction 
the first node in the list is primary)
result - {{true}} if the node can be assigned, {{false}} otherwise.

Use as follows:
{code}
RendezvousAffinityFunction aff = new RendezvousAffinityFunction(false);

aff.setAffinityBackupFilter(new IgniteBiPredicate<ClusterNode, List<ClusterNode>>() {
    @Override public boolean apply(ClusterNode node, List<ClusterNode> nodes) {
        return false; // Trivial example: reject every candidate.
    }
});
{code}

The old method setBackupFilter is marked as deprecated.
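
With the same interface, a more realistic filter could, for example, keep copies in different racks ("RACK" is a hypothetical user attribute):

{code}
aff.setAffinityBackupFilter(new IgniteBiPredicate<ClusterNode, List<ClusterNode>>() {
    @Override public boolean apply(ClusterNode candidate, List<ClusterNode> assigned) {
        Object rack = candidate.attribute("RACK");

        // Reject a candidate that shares a rack with any current holder.
        for (ClusterNode holder : assigned) {
            if (rack != null && rack.equals(holder.attribute("RACK")))
                return false;
        }

        return true;
    }
});
{code}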


> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either a "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} 
> that works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will only 
> guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method:
> {{AffinityBackupFilter.isAssignable(Node n, List<Node> assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the 1st being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-3218) Partition can not be reserved

2016-05-31 Thread Vladislav Pyatkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-3218:
--
Attachment: Ignite-SomeServersInOneJvm.zip

> Partition can not be reserved
> -
>
> Key: IGNITE-3218
> URL: https://issues.apache.org/jira/browse/IGNITE-3218
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
> Attachments: Ignite-SomeServersInOneJvm.zip
>
>
> if you set
> 
> ScanQuery for the partition fails with the error:
> {noformat}
> Caused by: class 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtUnreservedPartitionException
>  [part=3, msg=Partition can not be reserved.]
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.onheapIterator(GridCacheQueryManager.java:1042)
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.scanIterator(GridCacheQueryManager.java:854)
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.scanQueryLocal(GridCacheQueryManager.java:1761)
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.init(GridCacheQueryAdapter.java:677)
>   ... 27 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-3218) Partition can not be reserved

2016-05-31 Thread Vladislav Pyatkov (JIRA)
Vladislav Pyatkov created IGNITE-3218:
-

 Summary: Partition can not be reserved
 Key: IGNITE-3218
 URL: https://issues.apache.org/jira/browse/IGNITE-3218
 Project: Ignite
  Issue Type: Bug
Reporter: Vladislav Pyatkov


if you set

ScanQuery for the partition fails with the error:
{noformat}
Caused by: class 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtUnreservedPartitionException
 [part=3, msg=Partition can not be reserved.]
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.onheapIterator(GridCacheQueryManager.java:1042)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.scanIterator(GridCacheQueryManager.java:854)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.scanQueryLocal(GridCacheQueryManager.java:1761)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.init(GridCacheQueryAdapter.java:677)
... 27 more
{noformat}
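
The failing call has roughly this shape (assumed cache name; ScanQuery#setPartition targets a single partition):

{code}
ScanQuery<Integer, String> qry = new ScanQuery<>();
qry.setPartition(3); // The partition that cannot be reserved.

try (QueryCursor<Cache.Entry<Integer, String>> cur = ignite.cache("myCache").query(qry)) {
    for (Cache.Entry<Integer, String> e : cur)
        System.out.println(e.getKey() + " -> " + e.getValue());
}
{code}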



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (IGNITE-3202) IGFS: Create user name converter for Hadoop secondary file system.

2016-05-31 Thread Vladimir Ozerov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Ozerov closed IGNITE-3202.
---

> IGFS: Create user name converter for Hadoop secondary file system.
> --
>
> Key: IGNITE-3202
> URL: https://issues.apache.org/jira/browse/IGNITE-3202
> Project: Ignite
>  Issue Type: Task
>  Components: hadoop, IGFS
>Affects Versions: 1.5.0.final, 1.6
>Reporter: Vladimir Ozerov
>Assignee: Vladimir Ozerov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> *Problem*
> When a user accesses the secondary file system, we propagate the user from 
> the client machine to the Ignite server and then try to perform the request 
> as this user using either "doAs" or proxies in case of Kerberos.
> The problem is that the user name does not always match what we need. For 
> example, the user name is "ivanov", but the request should be performed using 
> the proxied user "ivanov@[REALM_NAME]". 
> *Solution*
> We need to introduce a special converter interface which will intercept user 
> names and convert them to the correct form if needed. This interceptor should 
> be placed inside the file system factory.
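
A sketch of such a converter (the interface name and realm are illustrative, not the final API):

{code}
// Intercepts user names before they are passed to the secondary file system.
public interface UserNameMapper {
    String map(String userName);
}

// Example: append the Kerberos realm to the short user name.
UserNameMapper mapper = new UserNameMapper() {
    @Override public String map(String userName) {
        return userName + "@EXAMPLE.REALM";
    }
};
{code}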



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (IGNITE-3190) OffHeap cache metrics doesn't work properly for OFFHEAP_TIERED mode

2016-05-31 Thread Denis Magda (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda closed IGNITE-3190.
---

> OffHeap cache metrics doesn't work properly for OFFHEAP_TIERED mode
> --
>
> Key: IGNITE-3190
> URL: https://issues.apache.org/jira/browse/IGNITE-3190
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
> Attachments: 755.patch, 755.patch
>
>
> A simple cache configuration with offheap tiered mode (statistics must be 
> enabled) never increases the offheap get count (CacheMetrics#getOffHeapGets 
> is always 0):
> {code}
> cache.put(46744, "val 46744");
> cache.get(46744);
> {code}
> {noformat}
> 2016-05-24 14:19:31 INFO  ServerNode:78 - Swap put 0 get 0 (0, 0) entries 
> count 0
> 2016-05-24 14:19:31 INFO  ServerNode:81 - OffHeap put 1 get 0 (0, 0) entries 
> count 1
> 2016-05-24 14:19:31 INFO  ServerNode:84 - OnHeap put 1 get 1 (1, 0)
> {noformat}
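
The implied cache setup, as a sketch (assumed cache name and a started ignite instance; statistics must be enabled for the metrics to be collected):

{code}
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

ccfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
ccfg.setStatisticsEnabled(true); // Without this, CacheMetrics stay at 0.

IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);

cache.put(46744, "val 46744");
cache.get(46744); // Expected to increment getOffHeapGets.
{code}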



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3213) Web Console: 'false' value in the caches list of cluster

2016-05-31 Thread Pavel Konstantinov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307277#comment-15307277
 ] 

Pavel Konstantinov commented on IGNITE-3213:


Tested on staging.

> Web Console: 'false' value in the caches list of cluster
> 
>
> Key: IGNITE-3213
> URL: https://issues.apache.org/jira/browse/IGNITE-3213
> Project: Ignite
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.7
>Reporter: Pavel Konstantinov
>Assignee: Pavel Konstantinov
> Fix For: 1.7
>
>
> # remove all caches and clusters
> # add new cluster, save
> # open Caches page, add new cache, save (the cache is linked with the cluster 
> automatically)
> # open Clusters page - list of caches contains 'false' value



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)