Re: WAL Archive Issue
Hi Ivan, Excellent idea, I've added this to https://issues.apache.org/jira/browse/IGNITE-7730 ticket. Sincerely, Dmitriy Pavlov Wed, Feb 14, 2018 at 0:28, Ivan Rakov: > > - applying compressor to segments older than 1 completed checkpoint > ago - > > saves space. > By the way: WAL compression is already implemented that way. If there > are any ".zip" segments in the archive dir, they are free to delete. > This can be a safe workaround for users who experience a lack of free > space - just delete compressed segments. We should mention it in the > documentation for the 2.4 release. > > Best Regards, > Ivan Rakov > > On 13.02.2018 23:53, Dmitry Pavlov wrote: > > I see, it seems the subgoal 'gain predictable size' can be achieved with > > the following options: > > - https://issues.apache.org/jira/browse/IGNITE-6552 implementation (in > > the variant of '...WAL history size in time units and maximum size in > GBytes', > > - here we probably should change the description or create a 2nd issue), > > - no-archiver mode (segments still can be deleted, but in the same > directory > > they were written to) - maximum performance on ext* fs. > > - applying compressor to segments older than 1 completed checkpoint > ago - > > saves space. > > > > Is it necessary to store data we can safely remove? > > > > Or maybe Ignite should handle this by itself and delete unnecessary > > segments on low space left on device, like Linux decreases the page cache in > > memory if there is no free RAM left. > > > > Tue, Feb 13, 2018 at 23:32, Ivan Rakov: > > > >> As far as I understand, the idea is a WAL archive with predictable size > >> ("N checkpoints" is not a predictable size), which can be safely removed > >> (e.g. if free disk space is urgently needed) without losing crash > recovery. > >> > >> No-archiver mode makes sense as well - it should be faster than the current > >> mode (at least, on filesystems different from XFS). It will be useful > >> for users who have lots of disk space and want to gain maximum > throughput. 
> >> > >> Best Regards, > >> Ivan Rakov > >> > >> On 13.02.2018 23:14, Dmitry Pavlov wrote: > >>> Hi, I didn't get the point why it may be required to separate WAL work, > >> WAL > >>> uncheckpointed archive (some work outside segment rotation) and > >>> checkpointed archive (which is better to be compressed using Ignite's new > >>> feature - the WAL compressor). > >>> > >>> Please consider the new no-archiver mode implemented recently. > >>> > >>> If the archive folder confuses the end user, the grid admin may set up this mode (all > >>> segments are placed in 1 directory) instead of introducing folders. > >>> > >>> > >>> Tue, Feb 13, 2018 at 22:11, Ivan Rakov: > >>> > I think I got the point now. > There's no need to copy files from the "temp" to the "archive" dir - we can > just > move them, which is a constant-time operation. > Makes sense. > > The change is quite complex (we need to synchronize all moves thoroughly > to avoid ruining existing WAL read iterators), but feasible. > > Best Regards, > Ivan Rakov > > > On 13.02.2018 22:06, Ivan Rakov wrote: > > Yakov, > > > > This will work. However, I expect performance degradation with this > > change. Disk storage has a limited number of I/O operations per > second > > at the hardware level. The list of already existing disk I/O activities > > (writing to the WAL work dir, copying from the WAL work dir to the WAL archive > > dir, writing partition files during checkpoint) will be updated with > a > > new one - copying from the WAL work dir to the temp dir. > > > > Best Regards, > > Ivan Rakov > > > > On 13.02.2018 21:35, Yakov Zhdanov wrote: > >> Ivan, > >> > >> I do not want to create new files. As far as I know, now we copy > >> segments > >> to the archive dir before they get checkpointed. What I suggest is to > >> copy them > >> to a temp dir under the wal directory and then move to archive. In my > >> understanding, at the time we copy the files to a temp folder all > >> changes to > >> them are already fsynced. > >> > >> Correct? 
> >> > >> Yakov Zhdanov, > >> www.gridgain.com > >> > >> 2018-02-13 21:29 GMT+03:00 Ivan Rakov : > >> > >>> Yakov, > >>> > >>> I see only one problem with your suggestion - the number of > >>> "uncheckpointed" segments is potentially unlimited. > >>> Right now we have a limited number (10) of file segments with > immutable > >>> names in the WAL "work" directory. We have to keep this approach due to > a >>> known > >>> bug in XFS - fsync time is nearly twice as long for recently created > >>> files. > >>> > >>> Best Regards, > >>> Ivan Rakov > >>> > >>> > >>> On 13.02.2018 21:22, Yakov Zhdanov wrote: > >>> > I meant we still will be copying the segment once and then will be >
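The move-vs-copy point settled in the thread above hinges on a filesystem fact: within one volume, a move is a metadata rename rather than a block-by-block copy. A minimal sketch with java.nio (the file names and directory layout here are made up for illustration, not Ignite's actual WAL layout):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class WalMoveSketch {
    // Moving a segment inside one filesystem is a rename: constant time,
    // no data blocks are rewritten. Copying rewrites every block.
    public static Path moveToArchive(Path segment, Path archiveDir) {
        try {
            Files.createDirectories(archiveDir);
            // ATOMIC_MOVE fails fast if the target is on another filesystem,
            // in which case a real copy would be unavoidable.
            return Files.move(segment, archiveDir.resolve(segment.getFileName()),
                StandardCopyOption.ATOMIC_MOVE);
        }
        catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path walDir = Files.createTempDirectory("wal");
        Path seg = Files.write(walDir.resolve("0001.wal"), new byte[] {1, 2, 3});
        Path moved = moveToArchive(seg, walDir.resolve("archive"));
        System.out.println(seg + " -> " + moved);
    }
}
```

This is why the temp dir has to live under the same mount as the archive dir: a cross-filesystem "move" silently degrades into the very copy the thread is trying to avoid.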
Re: Spark data frames
+1 for starting a new topic from Nikolay when the 'Community Meeting to Introduce Ignite Spark Data Frames' is ready to be announced. Fri, Feb 16, 2018 at 9:27, Denis Magda: > I'm in :) > > Nikolay, we can use my GoToMeeting account to host the webinar. To draw > more attention I would suggest starting a more specific thread titled like > "[RSVP] Community Meeting to Introduce Ignite Spark Data Frames". This > discussion sounds too generic, folks could simply pass by. > > Negotiated? > > -- > Denis > > On Wed, Feb 14, 2018 at 6:04 AM, Vyacheslav Daradur > wrote: > > > Dmitry, it's a great idea! > > > > Nikolay, I am also interested in getting familiar with the Spark Data > > Frames integration. > > > > I'd prefer a webinar with a format similar to "Ignite Persistent Store > > Walkthrough" by Denis Magda, which was presented some time ago. > > > > On Wed, Feb 14, 2018 at 5:03 PM, Dmitriy Setrakyan > > wrote: > > > I am definitely interested. Great idea! > > > > > > On Wed, Feb 14, 2018 at 4:32 AM, Nikolay Izhikov > > > wrote: > > > > > >> Hello, Dmitry. > > >> > > >> If other community members are also interested in that kind of > > information I > > >> can try to do the talk. > > >> > > >> On Wed, 14/02/2018 at 10:49, Dmitry Pavlov wrote: > > >> > Hi Nikolay, > > >> > > > >> > I've noticed there are a number of very lively discussions on the dev > list > > >> about SparkDataFrames. But I, for example, can't fully understand them > > >> because it is not well-known code for me. > > >> > > > >> > I suppose the Ignite community has other members who are not aware of > > >> the recent SparkDataFrame feature and its pros. > > >> > > > >> > What do you think about arranging a short talk for the community to tell > > >> about this module, e.g. for 30 minutes? Could you please do this? I > > think > > >> Denis M. can help with infrastructure. > > >> > > > >> > Sincerely, > > >> > Dmitriy Pavlov > > >> > > > > > > > > -- > > Best Regards, Vyacheslav D. > > >
[jira] [Created] (IGNITE-7730) Improve WAL history size documentation
Dmitriy Pavlov created IGNITE-7730: -- Summary: Improve WAL history size documentation Key: IGNITE-7730 URL: https://issues.apache.org/jira/browse/IGNITE-7730 Project: Ignite Issue Type: Task Components: documentation Affects Versions: 2.1 Reporter: Dmitriy Pavlov Assignee: Denis Magda Fix For: 2.5 Until IGNITE-6552 is implemented, the only ability we have is to configure WAL history size in checkpoints. The description for this parameter needs to be improved. I've added draft notes to the wiki https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-Estimatingdiskspace about ways we can estimate WAL sizes without an exact bytes/time specification: {panel} WAL Work max used size: walSegmentSize * walSegments = 640Mb (default); in the default WAL mode this size is used always, in other modes the best case is 1 segment * walSegmentSize. WAL Work + WAL Archive max size may be estimated by average load or by maximum size. The 1st way is applicable if checkpoints are triggered mostly by the timer trigger. WAL size = 2 * average load (bytes/sec) * trigger interval (sec) * walHistSize (number of checkpoints), where the 2 multiplier comes from physical & logical WAL records. 2nd way: checkpoint is triggered by the max dirty pages percent. Use persisted data regions' max sizes: sum(max configured DataRegionConfiguration.maxSize) * 75% - est. maximum data volume to be written on 1 checkpoint. Overall WAL size (before archiving) = 2 * est. data volume * walHistSize = 1.5 * sum(DataRegionConfiguration.maxSize) * walHistSize. Note: applying the WAL compressor may significantly reduce archive size. {panel} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
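The two estimates from the panel above reduce to simple arithmetic. A sketch (the method names and the sample numbers are illustrative, not Ignite API):

```java
public class WalSizeEstimate {
    /** WAL work dir upper bound: segment size times number of segments (defaults: 64 MB * 10 = 640 MB). */
    static long workDirMax(long walSegmentSize, int walSegments) {
        return walSegmentSize * walSegments;
    }

    /** Estimate 1: checkpoints triggered mostly by timer.
     *  The factor 2 accounts for physical + logical WAL records. */
    static long byAverageLoad(long avgLoadBytesPerSec, long triggerIntervalSec, int walHistSize) {
        return 2L * avgLoadBytesPerSec * triggerIntervalSec * walHistSize;
    }

    /** Estimate 2: checkpoints triggered by the dirty-pages threshold (75% of region).
     *  2 * 0.75 = 1.5 times the summed region sizes, per retained checkpoint. */
    static long byRegionSize(long sumDataRegionMaxSize, int walHistSize) {
        return (long) (1.5 * sumDataRegionMaxSize * walHistSize);
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        // Defaults: 64 MB segments, 10 segments -> 640 MB work dir.
        System.out.println("work dir max: " + workDirMax(64 * mb, 10) / mb + " MB");
        // Hypothetical load: 10 MB/s, 180 s checkpoint interval, 10 retained checkpoints.
        System.out.println("by load: " + byAverageLoad(10 * mb, 180, 10) / mb + " MB");
        // Hypothetical regions summing to 2 GB, 10 retained checkpoints.
        System.out.println("by regions: " + byRegionSize(2048 * mb, 10) / mb + " MB");
    }
}
```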
[GitHub] ignite pull request #3182: ignite-gg-13116
Github user agoncharuk closed the pull request at: https://github.com/apache/ignite/pull/3182 ---
Re: Batch size parameter at DataStreamerCacheUpdaters.batched()
Hi Val, Thanks for your response. It seems that it is public: https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/datastreamer/DataStreamerCacheUpdaters.java Best Regards, Roman -- Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
Re: Spark data frames
I'm in :) Nikolay, we can use my GoToMeeting account to host the webinar. To draw more attention I would suggest starting a more specific thread titled like "[RSVP] Community Meeting to Introduce Ignite Spark Data Frames". This discussion sounds too generic, folks could simply pass by. Negotiated? -- Denis On Wed, Feb 14, 2018 at 6:04 AM, Vyacheslav Daradur wrote: > Dmitry, it's a great idea! > > Nikolay, I am also interested in getting familiar with the Spark Data > Frames integration. > > I'd prefer a webinar with a format similar to "Ignite Persistent Store > Walkthrough" by Denis Magda, which was presented some time ago. > > On Wed, Feb 14, 2018 at 5:03 PM, Dmitriy Setrakyan > wrote: > > I am definitely interested. Great idea! > > > > On Wed, Feb 14, 2018 at 4:32 AM, Nikolay Izhikov > > wrote: > > > > > >> Hello, Dmitry. > > >> > > >> If other community members are also interested in that kind of > information I > >> can try to do the talk. > > >> > > >> On Wed, 14/02/2018 at 10:49, Dmitry Pavlov wrote: > > >> > Hi Nikolay, > > >> > > > >> > I've noticed there are a number of very lively discussions on the dev list > >> about SparkDataFrames. But I, for example, can't fully understand them > >> because it is not well-known code for me. > > >> > > > >> > I suppose the Ignite community has other members who are not aware of > >> the recent SparkDataFrame feature and its pros. > > >> > > > >> > What do you think about arranging a short talk for the community to tell > >> about this module, e.g. for 30 minutes? Could you please do this? I > think > >> Denis M. can help with infrastructure. > > >> > > > >> > Sincerely, > > >> > Dmitriy Pavlov > > >> > > > > -- > Best Regards, Vyacheslav D. >
[jira] [Created] (IGNITE-7729) Add usage of Roles for Web Console E2E tests
Alexander Kalinin created IGNITE-7729: - Summary: Add usage of Roles for Web Console E2E tests Key: IGNITE-7729 URL: https://issues.apache.org/jira/browse/IGNITE-7729 Project: Ignite Issue Type: Improvement Components: wizards Reporter: Alexander Kalinin Assignee: Alexander Kalinin -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7728) Put together a doc that shows how to blend SQL with k/v APIs
Denis Magda created IGNITE-7728: --- Summary: Put together a doc that shows how to blend SQL with k/v APIs Key: IGNITE-7728 URL: https://issues.apache.org/jira/browse/IGNITE-7728 Project: Ignite Issue Type: Task Components: documentation Reporter: Denis Magda Assignee: Denis Magda Fix For: 2.5 More and more people start blending SQL with key-value APIs in Ignite. Usually, they create tables/caches with DDL and wish to use key-value later as well: [https://stackoverflow.com/questions/48795533/how-do-i-read-data-from-cache-using-javaapi-after-i-put-it-through-jdbc] We already have a project that demonstrates this approach: [https://github.com/dmagda/ignite_world_demo] Put together a doc that points to it and elaborates on this topic. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
TeamCity. Ignite RDD tests
Hello, Igniters. I'm working on issue [1]. TeamCity doesn't collect info about scalatest execution because of a wrong pom.xml. I've fixed it in PR [3]. It turns out there are 2 broken tests written in Scala - [4]: 1. IgniteRDDSpec.IgniteRDD should successfully store data to ignite using saveValues 2. IgniteRDDSpec.IgniteRDD should successfully store data to ignite using saveValues with inline transformation It seems they have been here for a while. I propose to mute or disable them on TeamCity before merging my PR. I've created a ticket for fixing the tests - [5]. Thoughts? [1] https://issues.apache.org/jira/browse/IGNITE-7042 [2] https://ci.ignite.apache.org/viewLog.html?buildId=1096059=buildResultsDiv=IgniteTests24Java8_IgniteRdd [3] https://github.com/apache/ignite/pull/3530 [4] https://ci.ignite.apache.org/viewLog.html?buildId=1095218=IgniteTests24Java8_IgniteRdd=testsInfo [5] https://issues.apache.org/jira/browse/IGNITE-7727 signature.asc Description: This is a digitally signed message part
[jira] [Created] (IGNITE-7727) IgniteRDDSpec. Failing tests
Nikolay Izhikov created IGNITE-7727: --- Summary: IgniteRDDSpec. Failing tests Key: IGNITE-7727 URL: https://issues.apache.org/jira/browse/IGNITE-7727 Project: Ignite Issue Type: Bug Components: spark Affects Versions: 2.4 Reporter: Nikolay Izhikov Fix For: 2.5 Two Spark tests are broken. Need to fix them. 1. IgniteRDDSpec.IgniteRDD should successfully store data to ignite using saveValues 2. IgniteRDDSpec.IgniteRDD should successfully store data to ignite using saveValues with inline transformation -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7726) Error in queries screen in Demo mode
Alexander Kalinin created IGNITE-7726: - Summary: Error in queries screen in Demo mode Key: IGNITE-7726 URL: https://issues.apache.org/jira/browse/IGNITE-7726 Project: Ignite Issue Type: Bug Reporter: Alexander Kalinin Attachments: image-2018-02-16-10-46-13-126.png, xRh8zi (1).jpg Steps: 1) Start demo mode 2) Go to queries page An error message appears. !image-2018-02-16-10-46-13-126.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7725) REST: expand parameters list of GetOrCreateCache command
Alexey Kuznetsov created IGNITE-7725: Summary: REST: expand parameters list of GetOrCreateCache command Key: IGNITE-7725 URL: https://issues.apache.org/jira/browse/IGNITE-7725 Project: Ignite Issue Type: Improvement Components: rest Affects Versions: 2.3 Reporter: Alexey Kuznetsov Assignee: Alexey Kuznetsov Fix For: 2.5 The current implementation is very primitive and does not allow creating caches with custom options via REST. http://host:port/ignite?cmd=getorcreate=cache_name[=template_name][=1][=FULL_SYNC][; other options] Ignite will support two pre-configured templates out of the box: PARTITIONED and REPLICATED (same as the SQL engine). If the template name is not specified, by default it will be PARTITIONED. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: Next Steps: GA Grid: Request to contribute GA library to Apache Ignite
Denis, Thank you for providing status update. I look forward to hearing from you. Best, Turik Campbell -- Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
Re: Next Steps: GA Grid: Request to contribute GA library to Apache Ignite
Guys, I'm having trouble connecting to ASF subversion to fill in the IP clearance form today. Something messy with the networking environment in my current workplace. Anyway, I checked that we have everything in place to complete the form and finish the process. Please don't merge the pull request until the IP clearance vote passes. I'll keep you posted. -- Denis On Wed, Feb 7, 2018 at 4:01 PM, techbysample wrote: > Denis, > > Thanks for following up. Please let me know once the paperwork has been > completed. > > In addition, I agree with releasing GA Grid as part of the ML framework in > Ignite 2.5. > > Just let me know if you require additional information beyond what I have > provided. > > Regards, > Turik Campbell > > > > > > -- > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/ >
[jira] [Created] (IGNITE-7724) SQL COPY: network performance improvements
Kirill Shirokov created IGNITE-7724: --- Summary: SQL COPY: network performance improvements Key: IGNITE-7724 URL: https://issues.apache.org/jira/browse/IGNITE-7724 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 2.3, 2.4, 2.5 Reporter: Kirill Shirokov Assignee: Kirill Shirokov -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: SQL Compliance documentation
Guess it's all about taste. If you google both "SQL compliance" and "SQL conformance," you will see that DB vendors use both terms for the same thing. -- Denis On Wed, Feb 14, 2018 at 5:09 PM, Dmitriy Setrakyan wrote: > Looks great, Prachi, thanks a lot! > > What is the difference between compliance and conformance? > > D. > > On Wed, Feb 14, 2018 at 11:50 AM, Prachi Garg wrote: > > Igniters, > > > > Apache Ignite's compliance with SQL:1999 (Core) is now documented on > > readme.io [1] > > as well as on Wikipedia [2]. SQL folks, check it out and let me know if > > there needs to be any correction. > > > > Thanks to Aleksandr Volkov for validating Ignite's compliance with the SQL99 > > Core specification. > > > > [1] https://apacheignite-sql.readme.io/docs/sql-conformance > > [2] https://en.wikipedia.org/wiki/SQL_compliance > > > > > > -Prachi > > >
Re: Apache Ignite 2.4 release
Vladimir, I would suggest not doing this because we still need to spend time on testing, documentation, etc. If someone shows interest in these features, they can assemble binaries from the master. -- Denis On Thu, Feb 15, 2018 at 6:43 AM, Nikolay Izhikov wrote: > +1 > > On Thu, 15/02/2018 at 17:27 +0300, Vladimir Ozerov wrote: > > Igniters, > > > > The AI 2.4 release was shifted a bit and over this time we implemented two > > important SQL features: > > 1) COPY command for fast file upload to the cluster [1] > > 2) Streaming mode for the thin driver [2] > > > > Both commands are very important for fast data ingestion into Ignite > > through SQL. I would like to ask the community to consider including these two > > features in AI 2.4 in an *experimental* state because both of them will be > > improved in various ways in the near future. If we do so, we will be able > > to collect some feedback from the users before the AI 2.5 release. What do you > > think? > > > > Vladimir. > > > > [1] https://issues.apache.org/jira/browse/IGNITE-6917 > > [2] https://issues.apache.org/jira/browse/IGNITE-7253 > > > > On Tue, Feb 13, 2018 at 1:22 AM, Dmitriy Setrakyan < > dsetrak...@apache.org> > > wrote: > > > > > On Mon, Feb 12, 2018 at 9:22 AM, Dmitry Pavlov > > > wrote: > > > > > > > Hi, > > > > > > > > Unfortunately, a quick fix did not give us too much of a performance boost. > > > > > > > > I'm going to implement a complete algorithm change for storing the page > > > > identifier. But this change is quite significant and will require > > > > re-testing. I suggest including > > > > https://issues.apache.org/jira/browse/IGNITE-7638 in the next version, > > > > > > for > > > > example, in 2.5. > > > > > > > > Sincerely, > > > > Dmitriy Pavlov > > > > > > > > > > > > > > Dmitriy, thanks for the update! Are there other tickets that are holding > > > the release at this point? I remember that there was a performance > > > degradation issue in FULL_SYNC mode, but I cannot find a ticket. 
> > > > > > D. > > > >
[jira] [Created] (IGNITE-7723) Data loss after node restart with PDS
Alexandr Kuramshin created IGNITE-7723: -- Summary: Data loss after node restart with PDS Key: IGNITE-7723 URL: https://issues.apache.org/jira/browse/IGNITE-7723 Project: Ignite Issue Type: Bug Components: general, persistence Affects Versions: 2.3 Reporter: Alexandr Kuramshin Attachments: IgnitePdsDataLossTest.java A split-brain scenario with a topology validator is used to demonstrate possible data loss. The same results may be achieved with accidental network problems combined with a node restart. See the reproducer {{IgnitePdsDataLossTest}} for details. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: Transport compression (not store compression)
Vova, I think your solution is fine, but I think we will always have some messages compressed and others not. For example, in many cases, especially when messages are relatively small, compressing them will introduce an unnecessary overhead, and most likely slow down the cluster. Why not have a compression flag or compression bit at the per-message level? We check if the bit is turned on, and if it is, then we decompress the message on the receiving side before processing it. D. On Thu, Feb 15, 2018 at 12:24 AM, Vladimir Ozerov wrote: > I think that we should not guess how the clients are used. They could be > used in any way - in the same network, in another network, in Docker, in > a hypervisor, etc. This holds for both thin and thick clients. It is > essential that we design the configuration API in a way that compression could > be enabled only for some participants. > > What if we do this as follows: > 1) Define an "IgniteConfiguration.compressionEnabled" flag > 2) When two nodes communicate and at least one of them has this flag, then > all data sent between them is compressed. > > Makes sense? > > On Thu, Feb 15, 2018 at 8:50 AM, Nikita Amelchev > wrote: > > > Hello, Igniters. > > > > I have not seen such use cases, where a heavy client Ignite node is placed in > a > > much worse network than the server. I'm not sure we should encourage a > bad > > cluster architecture. > > > > Usually, in my use cases, the servers and clients are located in the same > > network. And if the cluster has SSL enabled, it makes sense to enable > > compression, even if the network is fast. It also makes sense when we > have > > a high load on the network, and the CPU is utilized poorly. > > > > I'll do tests on yardstick for real operations like get, put etc. and SQL > > requests. > > > > I propose to add configurable compression for the thin client/ODBC/JDBC as a > > separate issue because it increases the current PR. 
> > > > Even if it really makes sense to compress the traffic only between > > client-server Ignite nodes, it should also be a separate issue, that > would > > not increase the PR. Especially since this compression architecture may > > not be accepted by the community. > > > > 2018-02-05 13:02 GMT+03:00 Nikita Amelchev : > > > > > Thanks for your comments, > > > > > > I will try to separate network compression for clients and servers. > > > > > > It makes sense to enable compression on servers if we have SSL turned > on. > > > I tested rebalancing time and compression+ssl is faster. SSL throughput > > is > > > limited to 800 Mbits/sec per connection, and with compression enabled, it > > > boosted up to 1100 Mbits. > > > > > > 2018-02-02 18:52 GMT+03:00 Alexey Kuznetsov : > > > > > >> I think Igor is right. > > >> > > >> Usually servers are connected via a fast local network. > > >> But clients could be in an external and slow network. > > >> In this scenario compression will be very useful. > > >> > > >> Once I had such a scenario - a client connected to the cluster via a 300 kb/s > > >> network > > >> and tried to transfer ~10Mb of uncompressed data. > > >> So it took ~30 seconds. > > >> After I implemented compression it became 1M and transferred in ~3 > > >> seconds. > > >> > > >> I think we should take care of all the mentioned problems with NIO threads > > in > > >> order to not slow down the whole cluster. > > >> > > >> > > >> On Fri, Feb 2, 2018 at 10:05 PM, gvvinblade > > wrote: > > >> > > >> > Nikita, > > >> > > > >> > Yes, you're right. Maybe I wasn't clear enough. > > >> > > > >> > Usually server nodes are placed in the same fast network segment > (one > > >> > datacenter); in any case we need an ability to set up compression per > > >> > connection using some filter like useCompression(ClusterNode, > > >> ClusterNode) > > >> > to compress traffic only between servers and client nodes. 
> > >> > > > >> > But the issue is still there: since the same NIO worker serves both > client > >> and > > >> > server connections, enabled compression may impact whole-cluster > > >> > performance > > >> > because NIO threads will compress client messages instead of > > processing > > >> > servers' compute requests. That was my concern. > > >> > > > >> > Compression for clients is a really cool feature and useful in some > > >> cases. > > >> > Probably it makes sense to have two NIO servers, with and without > > >> > compression, > > >> > to process server and client requests separately, or somehow pin > worker > > >> > threads to client or server sessions... > > >> > > > >> > Also we have to think about client connections (JDBC, ODBC, .NET > thin > > >> > client, etc.) and set up compression for them separately. > > >> > > > >> > Anyway I would compare put, get, putAll, getAll and SQL SELECT > > >> operations > > >> > for strings and POJOs, one server, several clients with and without > > >> > compression, setting up the server to utilize all cores by
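Dmitriy's per-message compression bit from the top of this thread can be sketched with standard java.util.zip: a one-byte header flag tells the receiver whether to inflate. The frame format and the 512-byte threshold are assumptions for illustration, not Ignite's actual wire protocol:

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class PerMessageCompression {
    // Below this size, compression overhead likely outweighs the gain (assumed cutoff).
    static final int THRESHOLD = 512;
    static final byte RAW = 0, COMPRESSED = 1;

    /** Prepend a one-byte flag; compress only payloads large enough to benefit. */
    static byte[] encode(byte[] payload) {
        if (payload.length < THRESHOLD) {
            byte[] out = new byte[payload.length + 1];
            out[0] = RAW;
            System.arraycopy(payload, 0, out, 1, payload.length);
            return out;
        }
        Deflater deflater = new Deflater();
        deflater.setInput(payload);
        deflater.finish();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        bos.write(COMPRESSED);
        byte[] buf = new byte[4096];
        while (!deflater.finished())
            bos.write(buf, 0, deflater.deflate(buf));
        deflater.end();
        return bos.toByteArray();
    }

    /** The receiver checks the flag and inflates only when it is set. */
    static byte[] decode(byte[] frame) {
        byte[] body = Arrays.copyOfRange(frame, 1, frame.length);
        if (frame[0] == RAW)
            return body;
        try {
            Inflater inflater = new Inflater();
            inflater.setInput(body);
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            while (!inflater.finished())
                bos.write(buf, 0, inflater.inflate(buf));
            inflater.end();
            return bos.toByteArray();
        }
        catch (DataFormatException e) {
            throw new RuntimeException(e);
        }
    }
}
```

With this shape, small messages pay only one extra byte, which addresses Dmitriy's concern about compressing messages that are too small to benefit.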
Re: Batch size parameter at DataStreamerCacheUpdaters.batched()
Roman, The DataStreamerCacheUpdaters class is actually not a part of the public API, so I don't see a reason to change it unless there is a need for this internally in Ignite. -Val On Thu, Feb 15, 2018 at 5:52 AM, Roman Guseinov wrote: > Hello Igniters, > > In some cases, a batched stream receiver can help us improve performance: > > try (IgniteDataStreamer<Integer, String> streamer = > ignite.dataStreamer(cacheName)) { > streamer.receiver(DataStreamerCacheUpdaters.batched()); > > streamer.addData(getData()); > } > > Unfortunately, the bad thing is that the receiver internally calls "putAll" > for all data. I think it would be useful to have an option to specify a > batch size like: > > DataStreamerCacheUpdaters.batched(256) > > What do you think about this? Does it make sense to create a ticket? > > Thanks. > > Best Regards, > Roman > > > > -- > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/ >
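The proposed DataStreamerCacheUpdaters.batched(256) boils down to capping the size of each internal putAll call. A sketch of that chunking logic with plain collections (applyBatched is a hypothetical helper, not Ignite code):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

public class BatchedUpdateSketch {
    /** Split one large batch into putAll calls of at most batchSize entries,
     *  mirroring what a DataStreamerCacheUpdaters.batched(batchSize) could do.
     *  Returns the number of putAll invocations. */
    static <K, V> int applyBatched(Map<K, V> entries, int batchSize, Consumer<Map<K, V>> putAll) {
        Map<K, V> chunk = new LinkedHashMap<>();
        int calls = 0;
        for (Map.Entry<K, V> e : entries.entrySet()) {
            chunk.put(e.getKey(), e.getValue());
            if (chunk.size() == batchSize) {
                putAll.accept(new LinkedHashMap<>(chunk));
                chunk.clear();
                calls++;
            }
        }
        if (!chunk.isEmpty()) {
            putAll.accept(chunk);
            calls++;
        }
        return calls;
    }

    public static void main(String[] args) {
        Map<Integer, String> data = new LinkedHashMap<>();
        for (int i = 0; i < 1000; i++)
            data.put(i, "v" + i);
        // 1000 entries with batchSize 256 -> chunks of 256, 256, 256, 232.
        int calls = applyBatched(data, 256, m -> { /* cache.putAll(m) would go here */ });
        System.out.println(calls + " putAll calls");
    }
}
```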
Re: IgniteSet implementation: changes required
On Thu, Feb 15, 2018 at 6:08 AM, Vladimir Ozerov wrote: > I do not think indexes are the right approach - sets do not have indexes, and > you would have to maintain an additional counter in order to know when > to stop. > > From what I see there are two distinct problems: > 1) Broken recovery - this is just a bug which needs to be fixed. As soon as > data is stored in a real persistent cache, recovery of the data structure state > should be a trivial task. > 2) Heap items - this should not be a problem in the common case when the set > contains a moderate number of elements. If the set is excessively large, then > this is not the right structure for your use case and you should use > the standard IgniteCache API instead. What we can do is to optionally disable > on-heap caching for a specific set at the cost of lower performance if the user > wants so. > Vladimir, I am not sure I agree. In my view, a set should be similar to a cache, just with a different API. I am not sure why we should make the assumption that set data should be smaller than a cache's, especially given that it is a trivial task to implement a set based on the Ignite cache API (we could just store key-key mappings in the cache instead of key-value mappings internally). Can you clarify why you believe that IgniteSet should need to have on-heap entries? D.
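The key-key trick Dmitriy mentions - a set as a cache that maps each key to itself - can be sketched with a plain concurrent map standing in for IgniteCache (CacheBackedSet is a hypothetical name, not an Ignite class):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheBackedSet<K> {
    // Stand-in for IgniteCache<K, K>; the real thing would come from ignite.cache(name).
    private final Map<K, K> cache = new ConcurrentHashMap<>();

    /** putIfAbsent gives atomic add-if-missing semantics, just like a cache would. */
    public boolean add(K key)      { return cache.putIfAbsent(key, key) == null; }

    public boolean contains(K key) { return cache.containsKey(key); }

    public boolean remove(K key)   { return cache.remove(key) != null; }

    public int size()              { return cache.size(); }
}
```

Backed by a real persistent cache, such a set would inherit the cache's recovery and off-heap storage for free, which is the crux of Dmitriy's argument against mandatory on-heap entries.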
[GitHub] ignite pull request #3511: IGNITE-7686: PDS Direct IO failure: IgnitePdsEvic...
Github user dspavlov closed the pull request at: https://github.com/apache/ignite/pull/3511 ---
[GitHub] ignite pull request #3532: IGNITE-7686: Fix of PDS Direct IO failure: Ignite...
GitHub user dspavlov opened a pull request: https://github.com/apache/ignite/pull/3532 IGNITE-7686: Fix of PDS Direct IO failure: IgnitePdsEvictionTest.test… …PageEviction You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-7686 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3532.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3532 commit 0b4d44a0a4370ce80be14992906fc15fe7e5c4cb Author: dpavlov Date: 2018-02-15T17:56:21Z IGNITE-7686: Fix of PDS Direct IO failure: IgnitePdsEvictionTest.testPageEviction ---
[GitHub] ignite pull request #3531: IGNITE-7685: Fixed allocation rate.
GitHub user andrey-kuznetsov opened a pull request: https://github.com/apache/ignite/pull/3531 IGNITE-7685: Fixed allocation rate. You can merge this pull request into a Git repository by running: $ git pull https://github.com/andrey-kuznetsov/ignite ignite-7685 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3531.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3531 commit 5c575b187ee2ac07dc34017a26ec37425306108c Author: Andrey Kuznetsov Date: 2018-02-15T17:37:31Z IGNITE-7685: Fixed allocation rate. ---
IGNITE-7409. Ready to be merged
Hello, Igniters. There is an issue [1] with a very small patch - 4 LOC. The state is Patch Available. This part of the code was introduced by me in [2]. I reviewed the patch and want to merge it. Any objections? [1] https://issues.apache.org/jira/browse/IGNITE-7409 [2] https://issues.apache.org/jira/browse/IGNITE-5712
[jira] [Created] (IGNITE-7722) IgnitePdsCheckpointSimulationWithRealCpDisabledTest generates too many strings
Alexey Goncharuk created IGNITE-7722: Summary: IgnitePdsCheckpointSimulationWithRealCpDisabledTest generates too many strings Key: IGNITE-7722 URL: https://issues.apache.org/jira/browse/IGNITE-7722 Project: Ignite Issue Type: Improvement Affects Versions: 2.4 Reporter: Alexey Goncharuk Assignee: Alexey Goncharuk Fix For: 2.5 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: [SparkDataFrame] Query Optimization. Prototype
Hello, Valentin. > In general, I don't see a reason to exclude anything (especially joins) from > PR. Please finalize the change and pass it to me for review. I made some minor improvements. Please review my PR [1]. [1] https://github.com/apache/ignite/pull/3397 On Tue, 13/02/2018 at 11:47 -0800, Valentin Kulichenko wrote: > Nikolay, > > Non-collocated joins should be used only if there is no way to collocate. > Please read here for more info: > https://apacheignite-sql.readme.io/docs/distributed-joins > > As for limitations, I think Vladimir is talking more about syntax-related > stuff, i.e. what we do or don't support from an SQL compliance perspective. We > depend on H2 here and therefore don't have full knowledge, so I understand > that it takes time to test and document everything. But the requirement to have > an index for a non-collocated join is introduced by Ignite and, if it's an > expected one, should be documented. *Vladimir*, can you please comment on > this? > > In general, I don't see a reason to exclude anything (especially joins) from > the PR. Please finalize the change and pass it to me for review. > > -Val > > On Tue, Feb 13, 2018 at 11:10 AM, Nikolay Izhikov wrote: > > Valentin, > > > > > Looks like this is because you enabled non-collocated joins > > > > But non-collocated joins are the only way to be sure that the join returns correct > > results. > > So in my case it's OK to enable them. > > Am I right? > > > > > do we have this documented somewhere? > > > > I asked that in a previous mail. > > Vladimir Ozerov gave me an answer [1] I quoted for you: > > > > > Unfortunately, at this moment we do not have a complete list of all > > > restrictions on our joins, because a lot of work is delegated to H2. > > > In some unsupported scenarios we throw an exception. > > > In other cases we return incorrect results silently (e.g. if you do not > > > co-locate data and forgot to set the "distributed joins" flag). 
> > > We have a plan to perform extensive testing of joins (both co-located and > > > distributed) and list all known limitations. > > > This would require writing a lot of unit tests to cover various scenarios. > > > I think we will have this information in a matter of 1-2 months. > > > > So the answer is no, we have no documentation of join limitations. > > > > That's why I propose to exclude join optimization from my PR until: > > > > 1. We create documentation for all join limitations. > > 2. We create a way to check whether a certain join satisfies current limitations. > > > > [1] > > http://apache-ignite-developers.2346864.n4.nabble.com/SparkDataFrame-Query-Optimization-Prototype-tp26249p26361.html > > > > On Tue, 13/02/2018 at 09:55 -0800, Valentin Kulichenko wrote: > > > Nikolay, > > > > > > Looks like this is because you enabled non-collocated joins. I was not > > > aware of this limitation though, do we have this documented somewhere? > > > > > > -Val > > > > > > On Tue, Feb 13, 2018 at 8:21 AM, Nikolay Izhikov > > > wrote: > > > > > > > Val, > > > > > > > > Source code check: https://github.com/apache/ignite/blob/master/modules/ > > > > indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/ > > > > GridH2CollocationModel.java#L382 > > > > > > > > Stack trace: > > > > > > > > javax.cache.CacheException: Failed to prepare distributed join query: > > > > join > > > > condition does not use index [joinedCache=SQL_PUBLIC_JT2, plan=SELECT > > > > __Z0.ID AS __C0_0, > > > > __Z0.VAL1 AS __C0_1, > > > > __Z1.ID AS __C0_2, > > > > __Z1.VAL2 AS __C0_3 > > > > FROM PUBLIC.JT1 __Z0 > > > > /* PUBLIC.JT1.__SCAN_ */ > > > > INNER JOIN PUBLIC.JT2 __Z1 > > > > /* batched:broadcast PUBLIC.JT2.__SCAN_ */ > > > > ON 1=1 > > > > WHERE __Z0.VAL1 = __Z1.VAL2] > > > > at org.apache.ignite.internal.processors.query.h2.opt. 
> > > > GridH2CollocationModel.joinedWithCollocated(GridH2CollocationModel.java: > > > > 384) > > > > at org.apache.ignite.internal.processors.query.h2.opt. > > > > GridH2CollocationModel.calculate(GridH2CollocationModel.java:308) > > > > at org.apache.ignite.internal.processors.query.h2.opt. > > > > GridH2CollocationModel.type(GridH2CollocationModel.java:549) > > > > at org.apache.ignite.internal.processors.query.h2.opt. > > > > GridH2CollocationModel.calculate(GridH2CollocationModel.java:257) > > > > at org.apache.ignite.internal.processors.query.h2.opt. > > > > GridH2CollocationModel.type(GridH2CollocationModel.java:549) > > > > at org.apache.ignite.internal.processors.query.h2.opt. > > > > GridH2CollocationModel.isCollocated(GridH2CollocationModel.java:691) > > > > at org.apache.ignite.internal.processors.query.h2.sql. > > > > GridSqlQuerySplitter.split(GridSqlQuerySplitter.java:239) > > > > at org.apache.ignite.internal.processors.query.h2. > > > > IgniteH2Indexing.split(IgniteH2Indexing.java:1856) > > > > at
[GitHub] ignite pull request #2220: Ignite 3935
Github user 1vanan closed the pull request at: https://github.com/apache/ignite/pull/2220 ---
Re: Apache Ignite 2.4 release
+1 On Thu, 15/02/2018 at 17:27 +0300, Vladimir Ozerov wrote: > Igniters, > > AI 2.4 release was shifted a bit and over this time we implemented two > important SQL features: > 1) COPY command for fast file upload to the cluster [1] > 2) Streaming mode for thin driver [2] > > Both commands are very important for fast data ingestion into Ignite > through SQL. I would like to ask the community to consider including these two > features into AI 2.4 in *experimental* state because both of them will be > improved in various ways in the nearest time. If we do so, we will be able > to collect some feedback from the users before the AI 2.5 release. What do you > think? > > Vladimir. > > [1] https://issues.apache.org/jira/browse/IGNITE-6917 > [2] https://issues.apache.org/jira/browse/IGNITE-7253 > > On Tue, Feb 13, 2018 at 1:22 AM, Dmitriy Setrakyan wrote: > > > On Mon, Feb 12, 2018 at 9:22 AM, Dmitry Pavlov > > wrote: > > > > > Hi, > > > > > > Unfortunately, a quick fix did not give us much of a performance boost. > > > > > > I'm going to implement a complete algorithm change for storing the page > > > identifier. But this change is quite significant and will require > > > re-testing. I suggest including > > > https://issues.apache.org/jira/browse/IGNITE-7638 in the next version, > > > > for > > > example, in 2.5. > > > > > > Sincerely, > > > Dmitriy Pavlov > > > > > > > > > > > > > Dmitriy, thanks for the update! Are there other tickets that are holding > > the release at this point? I remember that there was a performance > > degradation issue in FULL_SYNC mode, but I cannot find a ticket. > > > > D. >
Re: Apache Ignite 2.4 release
Igniters, AI 2.4 release was shifted a bit and over this time we implemented two important SQL features: 1) COPY command for fast file upload to the cluster [1] 2) Streaming mode for thin driver [2] Both commands are very important for fast data ingestion into Ignite through SQL. I would like to ask the community to consider including these two features into AI 2.4 in *experimental* state because both of them will be improved in various ways in the nearest time. If we do so, we will be able to collect some feedback from the users before the AI 2.5 release. What do you think? Vladimir. [1] https://issues.apache.org/jira/browse/IGNITE-6917 [2] https://issues.apache.org/jira/browse/IGNITE-7253 On Tue, Feb 13, 2018 at 1:22 AM, Dmitriy Setrakyan wrote: > On Mon, Feb 12, 2018 at 9:22 AM, Dmitry Pavlov > wrote: > > > Hi, > > > > Unfortunately, a quick fix did not give us much of a performance boost. > > > > I'm going to implement a complete algorithm change for storing the page > > identifier. But this change is quite significant and will require > > re-testing. I suggest including > > https://issues.apache.org/jira/browse/IGNITE-7638 in the next version, > for > > example, in 2.5. > > > > Sincerely, > > Dmitriy Pavlov > > > > > > > Dmitriy, thanks for the update! Are there other tickets that are holding > the release at this point? I remember that there was a performance > degradation issue in FULL_SYNC mode, but I cannot find a ticket. > > D. >
Re: IgniteSet implementation: changes required
I do not think indexes are the right approach - a set does not have indexes, and you would have to maintain an additional counter for it in order to know when to stop. From what I see there are two distinct problems: 1) Broken recovery - this is just a bug which needs to be fixed. As soon as data is stored in a real persistent cache, recovery of the data structure state should be a trivial task. 2) Heap items - this should not be a problem in the common case when a set contains a moderate number of elements. If a set is excessively large, then this is not the right structure for your use case and you should use the standard IgniteCache API instead. What we can do is to optionally disable on-heap caching for a specific set, at the cost of lower performance, if the user wants so. On Wed, Feb 14, 2018 at 4:51 PM, Pavel Pereslegin wrote: > Hello, Igniters! > > I agree that a solution with separate caches is not acceptable for a > large number of sets. > > So, I want to suggest one more way to implement IgniteSet that will > introduce element indexes (similar to IgniteQueue). To implement this > we can add head/tail indexes to the IgniteSet header and for each > IgniteSet element store two key-value pairs: > (setKey, index) > (index, setKey) > > Indexes are required to support the iterator and they should be continuous. > > Please see the detailed description in the JIRA comment [1]. > > With such an approach add/remove/contains operations will have O(1) time > complexity, the iterator should work similarly to the current IgniteQueue > iterator, and issues [2], [3] will be resolved, because PDS recovery will > work "out of the box" and we will not use the JVM heap for duplicated > values. > > Btw, we can use this implementation only for collocated mode (map > keys/indexes to the IgniteSet name) and use separate caches for > non-collocated mode. > > What do you think about this? 
> > [1] https://issues.apache.org/jira/browse/IGNITE-5553#comment-16364043 > [2] https://issues.apache.org/jira/browse/IGNITE-5553 > [3] https://issues.apache.org/jira/browse/IGNITE-7565 > > > 2018-02-13 9:33 GMT+03:00 Andrey Kuznetsov : > > Indeed, all sets, regardless of whether they are collocated or not, share a > > single cache, and also use on-heap data structures not resilient to > > checkpointing/recovery. > > > > 2018-02-13 2:14 GMT+03:00 Dmitriy Setrakyan : > > > >> On Fri, Feb 9, 2018 at 6:26 PM, Andrey Kuznetsov > >> wrote: > >> > >> > Hi all, > >> > > >> > The current set implementation has a significant flaw: all set data are > >> > duplicated in on-heap maps on _every_ node in order to support iterator() and > >> > size(). To me it looks like a simple yet ineffective implementation. > >> > Currently, these maps are damaged by checkpointing/recovery, and we could > >> > patch them somehow. Another future change to Ignite caches can damage > >> them > >> > again. This looks fragile when a data structure is not entirely backed by > >> > caches. Pavel's proposal seems to be a reliable solution for > >> non-collocated > >> > sets. > >> > > >> > >> I would agree, but I was under the impression that non-collocated sets are > >> already implemented this way. Am I wrong? > >> > > > > > > > > -- > > Best regards, > > Andrey Kuznetsov. >
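To make the proposal above concrete, here is a minimal plain-Java model of the (setKey, index)/(index, setKey) layout Pavel describes, with two maps standing in for the cache entries and a tail counter standing in for the header. All names are illustrative, not Ignite API; the point is only that add/remove/contains stay O(1) and the iterator can scan a continuous index range.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

/** Toy model of the proposed IgniteSet layout: two lookup directions plus a tail index. */
class IndexedSetSketch<V> implements Iterable<V> {
    private final Map<V, Long> valToIdx = new HashMap<>(); // models the (setKey, index) pairs
    private final Map<Long, V> idxToVal = new HashMap<>(); // models the (index, setKey) pairs
    private long tail; // models the tail index kept in the IgniteSet header

    public boolean add(V val) {
        if (valToIdx.containsKey(val))
            return false;
        valToIdx.put(val, tail);
        idxToVal.put(tail++, val);
        return true;
    }

    public boolean contains(V val) {
        return valToIdx.containsKey(val);
    }

    /** Keeps indexes continuous by moving the last element into the freed slot. */
    public boolean remove(V val) {
        Long idx = valToIdx.remove(val);
        if (idx == null)
            return false;
        V last = idxToVal.remove(--tail);
        if (idx != tail) {
            idxToVal.put(idx, last);
            valToIdx.put(last, idx);
        }
        return true;
    }

    public int size() {
        return valToIdx.size();
    }

    /** Continuous indexes let the iterator scan [0, tail) with no extra bookkeeping. */
    @Override public Iterator<V> iterator() {
        return new Iterator<V>() {
            private long cur;

            @Override public boolean hasNext() { return cur < tail; }

            @Override public V next() { return idxToVal.get(cur++); }
        };
    }
}
```

Concurrent access and persisting these pairs through the cache API are exactly the parts the real implementation would still have to solve; this sketch only checks the index arithmetic.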
Batch size parameter at DataStreamerCacheUpdaters.batched()
Hello Igniters, In some cases, a batched stream receiver can help us to improve performance: try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer(cacheName)) { streamer.receiver(DataStreamerCacheUpdaters.batched()); streamer.addData(getData()); } Unfortunately, the bad thing is that the receiver internally calls "putAll" for all data. I think it would be useful to have an option to specify a batch size, like: DataStreamerCacheUpdaters.batched(256) What do you think about this? Does it make sense to create a ticket? Thanks. Best Regards, Roman -- Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
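To illustrate what a batch-size parameter could buy, here is a plain-Java sketch of a receiver that flushes accumulated entries in putAll-style chunks of at most batchSize. This is not the actual DataStreamerCacheUpdaters code (which, as noted above, calls putAll once for everything it receives); the Consumer stands in for cache.putAll().

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

/** Sketch of a size-bounded batched updater: entries are forwarded in chunks. */
class BatchedUpdaterSketch<K, V> {
    private final int batchSize;
    private final Consumer<Map<K, V>> putAll; // stands in for cache.putAll(...)

    BatchedUpdaterSketch(int batchSize, Consumer<Map<K, V>> putAll) {
        this.batchSize = batchSize;
        this.putAll = putAll;
    }

    /** Receives a collection of entries and forwards them in batches of at most batchSize. */
    void receive(Map<K, V> entries) {
        Map<K, V> buf = new LinkedHashMap<>();

        for (Map.Entry<K, V> e : entries.entrySet()) {
            buf.put(e.getKey(), e.getValue());

            if (buf.size() == batchSize) {
                putAll.accept(buf);
                buf = new LinkedHashMap<>();
            }
        }

        if (!buf.isEmpty())
            putAll.accept(buf); // flush the tail batch
    }
}
```

With a hypothetical DataStreamerCacheUpdaters.batched(256), each putAll issued by the receiver would touch at most 256 entries instead of the whole received block.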
Re: Wrapping [Ignite]Interrupted[Checked]Exception in benign exceptions
Ilya, I agree with you about reducing the number of unchecked exceptions in the public API, because when you work with the grid it can throw about 4 different types of runtime exceptions, and there is no way except experiment to know about these types. 2018-02-15 15:57 GMT+03:00 Dmitriy Setrakyan: > Ilya, I have looked at the ticket. I am not sure I understand what you are > suggesting. Can you provide a "before" and "after" example? > > D. > > On Thu, Feb 15, 2018 at 4:40 AM, Ilya Kasnacheev < > ilya.kasnach...@gmail.com> > wrote: > > > Hello Igniters. > > > > I have stumbled on the problem for which I have created a ticket > > https://issues.apache.org/jira/browse/IGNITE-7719 > > > > Basically it is an awful code smell. On thread interrupt, we wrap > > InterruptedException with some unrelated exception type, which prevents it > > from being handled properly by client code. Especially bad since we use > > thread interruption for client code workflow, e.g. in service grid. > > > > Hope to hear your opinions on this, > > > > -- > > Ilya Kasnacheev > > >
[jira] [Created] (IGNITE-7721) Apache Ignite web session clustering stuck after login success
Sanjeet Jha created IGNITE-7721: --- Summary: Apache Ignite web session clustering stuck after login success Key: IGNITE-7721 URL: https://issues.apache.org/jira/browse/IGNITE-7721 Project: Ignite Issue Type: Bug Components: cache, websession Affects Versions: 2.3 Reporter: Sanjeet Jha I implemented Apache Ignite in my OFBiz application. After login, my application shows login success, but the browser does not get any response from the server. Here is my web.xml {{ IgniteConfigurationFilePath specialpurpose/fnp/webapp/fnp/WEB-INF/ignite-config.xml IgniteWebSessionsCacheName replicated org.apache.ignite.startup.servlet.ServletContextListenerStartup IgniteWebSessionsFilter org.apache.ignite.cache.websession.WebSessionFilter IgniteWebSessionsFilter /* }} {{and my ignite-config.xml}} {{http://www.springframework.org/schema/beans; xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance; xmlns:util="http://www.springframework.org/schema/util; xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd;> 127.0.0.1:47500 }}{{}} I also tried to collect the Ignite log, but there is no error log. Any idea what happens? And also sometimes I get an OutOfMemoryError. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7720) Update ODBC cluster configuration: replace OdbcConfiguration with ClientConnectorConfiguration
Alexey Popov created IGNITE-7720: Summary: Update ODBC cluster configuration: replace OdbcConfiguration with ClientConnectorConfiguration Key: IGNITE-7720 URL: https://issues.apache.org/jira/browse/IGNITE-7720 Project: Ignite Issue Type: Task Components: documentation Affects Versions: 2.3 Reporter: Alexey Popov https://apacheignite-sql.readme.io/docs/odbc-driver#section-cluster-configuration Please note that ODBC configuration is deprecated. It is better to update this page with ClientConnectorConfiguration. BTW, https://apacheignite-sql.readme.io/docs/jdbc-driver is already updated. /** * ODBC configuration. * * Deprecated as of Apache Ignite 2.1. Please use {@link ClientConnectorConfiguration} and * {@link IgniteConfiguration#setClientConnectorConfiguration(ClientConnectorConfiguration)} instead. */ @Deprecated public class OdbcConfiguration { -- This message was sent by Atlassian JIRA (v7.6.3#76005)
Re: Wrapping [Ignite]Interrupted[Checked]Exception in benign exceptions
Ilya, I have looked at the ticket. I am not sure I understand what you are suggesting. Can you provide a "before" and "after" example? D. On Thu, Feb 15, 2018 at 4:40 AM, Ilya Kasnacheev wrote: > Hello Igniters. > > I have stumbled on the problem for which I have created a ticket > https://issues.apache.org/jira/browse/IGNITE-7719 > > Basically it is an awful code smell. On thread interrupt, we wrap > InterruptedException with some unrelated exception type, which prevents it > from being handled properly by client code. Especially bad since we use > thread interruption for client code workflow, e.g. in service grid. > > Hope to hear your opinions on this, > > -- > Ilya Kasnacheev >
Wrapping [Ignite]Interrupted[Checked]Exception in benign exceptions
Hello Igniters. I have stumbled on the problem for which I have created a ticket https://issues.apache.org/jira/browse/IGNITE-7719 Basically it is an awful code smell. On thread interrupt, we wrap InterruptedException with some unrelated exception type, which prevents it from being handled properly by client code. Especially bad since we use thread interruption for client code workflow, e.g. in service grid. Hope to hear your opinions on this, -- Ilya Kasnacheev
[jira] [Created] (IGNITE-7719) Avoid wrapping InterruptedException in CacheException or IgniteException
Ilya Kasnacheev created IGNITE-7719: --- Summary: Avoid wrapping InterruptedException in CacheException or IgniteException Key: IGNITE-7719 URL: https://issues.apache.org/jira/browse/IGNITE-7719 Project: Ignite Issue Type: Task Affects Versions: 2.5 Reporter: Ilya Kasnacheev It is a pity to see stack traces like this: {code:java} javax.cache.CacheException: Failed to run reduce query locally. at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:839) ~[ignite-indexing-2.1.10.jar:2.1.10] ... Caused by: org.apache.ignite.internal.IgniteInterruptedCheckedException at org.apache.ignite.internal.util.IgniteUtils.await(IgniteUtils.java:7463) ~[ignite-core-2.1.10.jar:2.1.10] ... 7 more Caused by: java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326) ~[?:1.8.0_92] ... 7 more{code} Here we leave almost zero chance to the end user of the code to properly handle InterruptedException, short of digging into CacheException's causes. When people use this code in a while (true) try{}catch(){log} loop they will get horrible problems when interruption happens, as in clobbering their whole log with exceptions. This isn't acceptable. What should be done: - Make sure we never wrap a random IgniteCheckedException into IgniteException or CacheException (because it might be IgniteInterruptedCheckedException). Use some method that does proper checking on the exception type, throwing something appropriate. IMO it's much better to throw an unexpected IgniteInterruptedException than a generic one. - If needed, declare on all affected methods that they may throw IgniteInterruptedException. - Have a check in CacheException's constructor to re-throw IgniteInterruptedException if it is passed as an argument. Ditto for IgniteException. They should never be wrapped. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
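The wrapping rule the ticket asks for can be sketched in plain Java. The helper below walks the cause chain and, if the thread was in fact interrupted, restores the interrupt flag and throws a dedicated unchecked type instead of a generic one. InterruptedRuntimeException is a local placeholder for Ignite's IgniteInterruptedException; the real fix would live in Ignite's exception-wrapping utilities.

```java
/** Placeholder for Ignite's dedicated unchecked interruption exception. */
class InterruptedRuntimeException extends RuntimeException {
    InterruptedRuntimeException(Throwable cause) {
        super(cause);
    }
}

/** Sketch of the proposed rule: never bury an interruption inside a generic exception. */
final class ExceptionWrapSketch {
    private ExceptionWrapSketch() {}

    static RuntimeException wrap(Exception e) {
        // Walk the cause chain: an InterruptedException anywhere means the
        // thread was interrupted, and callers must be able to detect that.
        for (Throwable t = e; t != null; t = t.getCause()) {
            if (t instanceof InterruptedException) {
                Thread.currentThread().interrupt(); // restore the flag for callers
                return new InterruptedRuntimeException(e);
            }
        }
        return new RuntimeException(e);
    }
}
```

A while (true) try/catch(log) loop can then catch the interruption type separately and exit, instead of clobbering its log.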
[GitHub] ignite pull request #3530: IGNITE-7042: Trying to configure scala-test plugi...
GitHub user nizhikov opened a pull request: https://github.com/apache/ignite/pull/3530 IGNITE-7042: Trying to configure scala-test plugin for a TeamCity You can merge this pull request into a Git repository by running: $ git pull https://github.com/nizhikov/ignite IGNITE-7042 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3530.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3530 commit 7d4cdd920f3c67142450850e9570d574d4ff2529 Author: Nikolay Izhikov Date: 2018-02-15T12:23:57Z IGNITE-7042: Trying to configure scala-test plugin for a TeamCity ---
[GitHub] ignite pull request #3377: IGNITE-7386: Got rid of ThreadLocalRandom8.
Github user andrey-kuznetsov closed the pull request at: https://github.com/apache/ignite/pull/3377 ---
[GitHub] ignite pull request #2339: Ignite 4181 public api
Github user andrey-kuznetsov closed the pull request at: https://github.com/apache/ignite/pull/2339 ---
[GitHub] ignite pull request #3165: IGNITE-6711: TotalAllocatedPages metric fix.
Github user andrey-kuznetsov closed the pull request at: https://github.com/apache/ignite/pull/3165 ---
[GitHub] ignite pull request #3197: Ignite 6734
Github user andrey-kuznetsov closed the pull request at: https://github.com/apache/ignite/pull/3197 ---
[jira] [Created] (IGNITE-7718) Collections.singleton() and Collections.singletonMap() are not properly serialized by binary marshaller
Pavel Vinokurov created IGNITE-7718: --- Summary: Collections.singleton() and Collections.singletonMap() are not properly serialized by binary marshaller Key: IGNITE-7718 URL: https://issues.apache.org/jira/browse/IGNITE-7718 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.3 Reporter: Pavel Vinokurov Assignee: Pavel Vinokurov After deserialization, collections obtained from Collections.singleton() and Collections.singletonMap() do not contain binary objects, but rather deserialized objects. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
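For context on why these collections are special: the JDK backs them with private classes that have no accessible no-arg constructor, so a marshaller cannot rebuild them the way it rebuilds ordinary collections and has to special-case them. A quick plain-Java check (nothing Ignite-specific):

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;

class SingletonClassesDemo {
    /** Returns the runtime class name of a singleton set: a private JDK inner class. */
    static String singletonSetClass() {
        Set<String> s = Collections.singleton("a");
        return s.getClass().getName(); // java.util.Collections$SingletonSet
    }

    /** Same for a singleton map. */
    static String singletonMapClass() {
        Map<String, Integer> m = Collections.singletonMap("k", 1);
        return m.getClass().getName(); // java.util.Collections$SingletonMap
    }
}
```

These classes are also immutable, so a marshaller cannot create an empty instance and fill it in afterwards.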
[GitHub] ignite pull request #3470: ignite-5804: ScanQuery transformer should be appl...
Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/3470 ---
[GitHub] ignite pull request #3248: IGNITE-6736: Switched GridCacheMapEntry synchroni...
Github user andrey-kuznetsov closed the pull request at: https://github.com/apache/ignite/pull/3248 ---
[GitHub] ignite pull request #3302: IGNITE-7312: Made use of java.util.Base64 for bas...
Github user andrey-kuznetsov closed the pull request at: https://github.com/apache/ignite/pull/3302 ---
[GitHub] ignite pull request #3428: Ignite 7513
Github user andrey-kuznetsov closed the pull request at: https://github.com/apache/ignite/pull/3428 ---
[jira] [Created] (IGNITE-7717) testAssignmentAfterRestarts is flaky on TC
Pavel Kovalenko created IGNITE-7717: --- Summary: testAssignmentAfterRestarts is flaky on TC Key: IGNITE-7717 URL: https://issues.apache.org/jira/browse/IGNITE-7717 Project: Ignite Issue Type: Bug Reporter: Pavel Kovalenko -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7716) Red selftest in ML examples
Yury Babak created IGNITE-7716: -- Summary: Red selftest in ML examples Key: IGNITE-7716 URL: https://issues.apache.org/jira/browse/IGNITE-7716 Project: Ignite Issue Type: Bug Components: ml Reporter: Yury Babak Assignee: Yury Babak Fix For: 2.5 https://ci.ignite.apache.org/project.html?tab=testDetails=IgniteTests24Java8=1447870893775475761 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] ignite pull request #3529: IGNITE-7698: Page read during replacement should ...
GitHub user dspavlov opened a pull request: https://github.com/apache/ignite/pull/3529 IGNITE-7698: Page read during replacement should be outside of segment write lock You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-7698 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/3529.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3529 commit 5f98eeeae3b452482239223490772eb429ff27d4 Author: dpavlov Date: 2018-02-15T11:19:59Z IGNITE-7698: Page read during replacement should be outside of segment write lock ---
[jira] [Created] (IGNITE-7715) If client cannot find dml entity in local storage, it should ask server for updates
Pavel Kuznetsov created IGNITE-7715: --- Summary: If client cannot find dml entity in local storage, it should ask server for updates Key: IGNITE-7715 URL: https://issues.apache.org/jira/browse/IGNITE-7715 Project: Ignite Issue Type: Improvement Components: sql Reporter: Pavel Kuznetsov Assignee: Vladimir Ozerov Assume we have n servers and at least 2 clients. Client 1 creates a table (via the thin driver); after that (in global time) client 2 tries to insert data into that table. Currently, if client 2 hasn't received from the server that the table was created, it rejects query execution. But in this case client 2 could ask the connected server for updates before rejecting. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-7714) SQL: COPY command should try to create cache in case table is not found
Vladimir Ozerov created IGNITE-7714: --- Summary: SQL: COPY command should try to create cache in case table is not found Key: IGNITE-7714 URL: https://issues.apache.org/jira/browse/IGNITE-7714 Project: Ignite Issue Type: Task Components: sql Affects Versions: 2.3 Reporter: Vladimir Ozerov Assignee: Vladimir Ozerov Fix For: 2.4 Client might be aware of cache, but hasn't started it yet. In this case "Table not found" exception will be thrown. Need to use {{GridCacheProcessor#createMissingQueryCaches}} and re-try in this case. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] ignite pull request #3526: IGNITE-7709: SQL COPY file name handling fix
Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/3526 ---
[jira] [Created] (IGNITE-7713) Include cache name to rebalancing finish message
Alexey Goncharuk created IGNITE-7713: Summary: Include cache name to rebalancing finish message Key: IGNITE-7713 URL: https://issues.apache.org/jira/browse/IGNITE-7713 Project: Ignite Issue Type: Improvement Components: cache Affects Versions: 2.4 Reporter: Alexey Goncharuk Assignee: Alexey Goncharuk Fix For: 2.5 {code} U.log(log, "Completed " + ((remaining.size() == 1 ? "(final) " : "") + "rebalancing [fromNode=" + nodeId + ", topology=" + topologyVersion() + ", time=" + (U.currentTimeMillis() - t.get1()) + " ms]")); {code} This code should include cache name -- This message was sent by Atlassian JIRA (v7.6.3#76005)
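A minimal sketch of the requested change, extracted into a plain-Java helper so the message shape is easy to see (the parameter names and the "cache=" label are mine; the real fix would stay inline in the preloader code):

```java
/** Builds the rebalancing-finished message, now including the cache name. */
class RebalanceMsgSketch {
    static String finishMsg(String cacheName, boolean last, String nodeId, long topVer, long timeMs) {
        // Same shape as the ticket's snippet, with the cache name added up front.
        return "Completed " + (last ? "(final) " : "") +
            "rebalancing [cache=" + cacheName +
            ", fromNode=" + nodeId +
            ", topology=" + topVer +
            ", time=" + timeMs + " ms]";
    }
}
```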
Re: Transport compression (not store compression)
I think that we should not guess how the clients are used. They could be used in any way - in the same network, in another network, in Docker, in a hypervisor, etc. This holds for both thin and thick clients. It is essential that we design the configuration API in a way that compression could be enabled only for some participants. What if we do this as follows: 1) Define an "IgniteConfiguration.compressionEnabled" flag 2) When two nodes communicate and at least one of them has this flag, then all data sent between them is compressed. Makes sense? On Thu, Feb 15, 2018 at 8:50 AM, Nikita Amelchev wrote: > Hello, Igniters. > > I have not seen such use-cases, where a heavy client Ignite node is placed in a > much worse network than the server. I'm not sure we should encourage a bad > cluster architecture. > > Usually, in my use-cases, the servers and clients are located in the same > network. And if the cluster has SSL enabled, it makes sense to enable > compression, even if the network is fast. It also makes sense when we have > a high load on the network, and the CPU is utilized poorly. > > I'll do tests on yardstick for real operations like get, put etc. and SQL > requests. > > I propose to add configurable compression for thin client/ODBC/JDBC as a > separate issue because it increases the current PR. > > Even if it really makes sense to compress the traffic only between > client-server Ignite nodes, it should also be a separate issue that would > not increase the PR. Especially since this compression architecture may > not be accepted by the community. > > 2018-02-05 13:02 GMT+03:00 Nikita Amelchev : > > > Thanks for your comments, > > > > I will try to separate network compression for clients and servers. > > > > It makes sense to enable compression on servers if we have SSL turned on. > > I tested rebalancing time and compression+SSL is faster. SSL throughput is > > limited by 800 Mbits/sec per connection and if compression is enabled, it is > > boosted up to 1100 Mbits. 
> > > > 2018-02-02 18:52 GMT+03:00 Alexey Kuznetsov : > > > >> I think Igor is right. > >> > >> Usually servers are connected via a fast local network. > >> But clients could be in an external and slow network. > >> In this scenario compression will be very useful. > >> > >> Once I had such a scenario - a client connected to the cluster via a 300 kb/s > >> network > >> and tried to transfer ~10Mb of uncompressed data. > >> So it took ~30 seconds. > >> After I implemented compression it became 1M and was transferred in ~3 > >> seconds. > >> > >> I think we should take care of all the mentioned problems with NIO threads in > >> order to not slow down the whole cluster. > >> > >> > >> On Fri, Feb 2, 2018 at 10:05 PM, gvvinblade > wrote: > >> > >> > Nikita, > >> > > >> > Yes, you're right. Maybe I wasn't clear enough. > >> > > >> > Usually server nodes are placed in the same fast network segment (one > >> > datacenter); in any case we need an ability to set up compression per > >> > connection using some filter like useCompression(ClusterNode, > >> ClusterNode) > >> > to compress traffic only between servers and client nodes. > >> > > >> > But the issue is still there: since the same NIO worker serves both client > >> and > >> > server connections, enabled compression may impact whole-cluster > >> > performance > >> > because NIO threads will compress client messages instead of processing > >> > servers' compute requests. That was my concern. > >> > > >> > Compression for clients is a really cool feature and useful in some > >> cases. > >> > Probably it makes sense to have two NIO servers, with and without > >> > compression, > >> > to process server and client requests separately, or pin somehow worker > >> > threads to client or server sessions... > >> > > >> > Also we have to think about client connections (JDBC, ODBC, .Net thin > >> > client, etc) and set up compression for them separately. 
> >> > > >> > Anyway I would compare put, get, putAll, getAll and SQL SELECT > >> operations > >> > for strings and POJOs, one server, several clients with and without > >> > compression, setting up the server to utilize all cores by NIO workers, > >> > just > >> > to get to know the possible impact. > >> > > >> > Possible configuration for servers with 16 cores: > >> > > >> > Selectors cnt = 16 > >> > Connections per node = 4 > >> > > >> > Where client nodes perform operations in 16 threads > >> > > >> > Regards, > >> > Igor > >> > > >> > > >> > > >> > > >> > -- > >> > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/ > >> > > >> > >> > >> > >> -- > >> Alexey Kuznetsov > >> > > > > > > > > -- > > Best wishes, > > Amelchev Nikita > > > > > > -- > Best wishes, > Amelchev Nikita >
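Two pieces of this thread can be sketched in plain Java with the JDK's zlib bindings: the per-link decision rule ("compress if at least one side enables the flag" — note that compressionEnabled is only a name proposed in this thread, not an existing IgniteConfiguration property), and the deflate/inflate round trip a compressing NIO filter would perform per message.

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

class CompressionSketch {
    /** Proposed rule: the link is compressed if at least one side has the flag set. */
    static boolean useCompression(boolean localFlag, boolean remoteFlag) {
        return localFlag || remoteFlag;
    }

    /** Compresses a message payload with zlib, as a compressing filter would on send. */
    static byte[] deflate(byte[] msg) {
        Deflater def = new Deflater();
        def.setInput(msg);
        def.finish();
        byte[] buf = new byte[msg.length + 64]; // ample for the small payloads in this sketch
        int len = def.deflate(buf);
        def.end();
        return Arrays.copyOf(buf, len);
    }

    /** Restores the payload on receive; origLen would come from a message length header. */
    static byte[] inflate(byte[] packed, int origLen) {
        try {
            Inflater inf = new Inflater();
            inf.setInput(packed);
            byte[] out = new byte[origLen];
            inf.inflate(out);
            inf.end();
            return out;
        }
        catch (DataFormatException e) {
            throw new IllegalStateException("Corrupted compressed payload", e);
        }
    }
}
```

The per-message CPU cost of exactly this deflate call on an NIO worker thread is the concern Igor raises above.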
Re: EvictableEntry.isCached() state meaning
Andrey, isCached() may return false if the entry was concurrently removed from the heap (note that EvictableEntry covers only on-heap evictions, since it is passed to the instance of an EvictionPolicy). Any cache access of the entry key (get or update) will bring the entry back on-heap. --AG 2018-02-08 15:19 GMT+03:00 Andrey Kuznetsov: > Hi Igniters! > > I can't comprehend the meaning of the following note in the javadoc for > the EvictableEntry.isCached() method. > > "If entry is not in cache (e.g. has been removed) {@code false} is > returned. In this case all operations on this entry will cause creation of > a new entry in cache." > > That is, if I call getKey() or getValue() on a removed entry, it will be > resurrected in the cache? This sounds too magical. Could someone explain the > real message of the phrase, please? > > -- > Best regards, > Andrey Kuznetsov. >